Dataset schema (column, dtype, value range):

| column | dtype | values |
|---|---|---|
| id | int64 | 2.05k – 16.6k |
| title | string | lengths 5 – 75 |
| fromurl | string | lengths 19 – 185 |
| date | timestamp[s] | |
| tags | sequence | lengths 0 – 11 |
| permalink | string | lengths 20 – 37 |
| content | string | lengths 342 – 82.2k |
| fromurl_status | int64 | 200 – 526 |
| status_msg | string | 339 classes |
| from_content | string | lengths 0 – 229k |
12,086
如何知道你的 Linux 用的哪种显卡?
https://itsfoss.com/check-graphics-card-linux/
2020-04-08T13:41:54
[ "显卡" ]
https://linux.cn/article-12086-1.html
无论是 [Nvidia](https://www.nvidia.com/en-us/) 还是 [Radeon](https://www.amd.com/en/graphics/radeon-rx-graphics) 或者 Intel,它们的显卡都可能在 Linux 中有问题。当你要对图形问题进行故障排除时,首先要了解系统中装有哪种显卡。 Linux 有几个命令可以检查硬件信息。你可以使用它们来检查你有哪些显卡(也称为视频卡)。让我向你展示一些命令来获取 Linux 中的 GPU 信息。 ### 在 Linux 命令行中检查显卡详细信息 ![](/data/attachment/album/202004/08/134159a11qu83339vb9njz.jpg) #### 使用 lspci 命令查找显卡 `lspci` 命令显示通过 [PCI](https://en.wikipedia.org/wiki/Conventional_PCI)(<ruby> 外设组件互连 <rt> Peripheral Component Interconnect </rt></ruby>)总线连接的设备的信息。基本上,此命令提供有关系统从键盘和鼠标到声卡、网卡和显卡的所有外设的详细信息。 默认情况下,你会有大量的此类外设列表。这就是为什么你需要用 `grep` 命令过滤出显卡的原因: ``` lspci | grep VGA ``` 这应该会显示一行有关你显卡的信息: ``` abhishek@itsfoss:~$ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02) ``` 如你所见,我的系统中有 Intel HD 620 显卡。 #### 在 Linux 中使用 lshw 命令获取显卡详细信息 `lspci` 命令足以查看你的显卡,但是并不能告诉你很多信息。你可以使用 `lshw` 命令获取有关它的更多信息。 此命令要求你有 root 用户权限。你需要以这种方式查找视频卡(显卡)信息: ``` sudo lshw -C video ``` 正如你在下面的输出中看到的那样,此命令提供了有关显卡的更多信息,例如时钟频率、位宽、驱动等。 ``` abhishek@itsfoss:~$ sudo lshw -C video [sudo] password for abhishek: *-display description: VGA compatible controller product: HD Graphics 620 vendor: Intel Corporation physical id: 2 bus info: [email protected]:00:02.0 version: 02 width: 64 bits clock: 33MHz capabilities: pciexpress msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:139 memory:db000000-dbffffff memory:90000000-9fffffff ioport:f000(size=64) memory:c0000-dffff ``` #### 附赠技巧:以图形方式检查显卡详细信息 并非必须使用命令行在 Linux 中查找显卡详细信息。大多数 Linux 发行版(或者应该说是桌面环境)在设置中提供了必要的详细信息。 例如,如果你使用的是 [GNOME 桌面环境](https://www.gnome.org/),那么可以进入“设置”的“关于”部分来检查详细信息。[Ubuntu 20.04](https://itsfoss.com/ubuntu-20-04-release-features/) 中看上去像这样: ![Graphics card information check graphically](/data/attachment/album/202004/08/134201mrreercwi8giff3p.jpg) 我希望这个快速技巧对你有所帮助。你也可以使用相同的命令来[查找网卡](https://itsfoss.com/find-network-adapter-ubuntu-linux/)和 [Linux 中的 CPU 信息](https://linuxhandbook.com/check-cpu-info-linux/)。 
如果你有任何疑问或建议,请随时发表评论。 --- via: <https://itsfoss.com/check-graphics-card-linux/> 作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
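The `lspci | grep VGA` step above can be wrapped in a small helper. The sketch below runs against canned `lspci` output (the sample lines are invented, modeled on the article's), so it works without real hardware; matching `3D` and `Display` controllers in addition to `VGA` is this sketch's own addition, since hybrid-graphics laptops often report the discrete GPU as a `3D controller` that a plain `grep VGA` misses.

```shell
#!/bin/sh
# find_gpus: filter lspci-style output down to graphics controllers.
# "3D controller" / "Display controller" are included because secondary
# GPUs (e.g. NVIDIA Optimus) are often listed under those classes.
find_gpus() {
    grep -E 'VGA|3D controller|Display controller'
}

# Canned sample so the sketch runs anywhere (invented for illustration):
sample_lspci='00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02)
01:00.0 3D controller: NVIDIA Corporation GM108M [GeForce 940MX] (rev a2)
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111'

printf '%s\n' "$sample_lspci" | find_gpus
```

On a real system you would feed it live output instead: `lspci | find_gpus`.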
200
OK
Be it [Nvidia](https://www.nvidia.com/en-us/?ref=itsfoss.com) or [Radeon](https://www.amd.com/en/graphics/radeon-rx-graphics?ref=itsfoss.com) or Intel, they all may have some issues with Linux. When you are troubleshooting a graphics problem, the first thing you want to know is which graphics card you have in your system. Linux has several commands to check hardware information. You can use them to check which graphics card (also referred to as a video card) you have. Let me show you a couple of commands to get GPU information in Linux. ## Use lspci command to find graphics card The lspci command displays information about devices connected through [PCI](https://en.wikipedia.org/wiki/Conventional_PCI?ref=itsfoss.com) (Peripheral Component Interconnect) buses. Basically, this command gives you details about all the peripheral devices attached to your system, from keyboard and mouse to sound, network and graphics cards. By default, you’ll have a huge list of such peripheral devices. This is why you need to filter the output for the graphics card with the grep command in this manner: `lspci | grep VGA` This should show one line of information about your graphics card: ``` abhishek@itsfoss:~$ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02) ``` As you can see, my system has an Intel HD 620 video card. You can also use `inxi -G` if you have inxi installed on your system. ## Get detailed graphics card information with lshw command in Linux The lspci command is good enough to see what graphics card you have, but it doesn’t tell you a lot. You can use the lshw command to get more information on it. You may have to install lshw on Fedora, Manjaro and a few non-Ubuntu distributions. This command requires you to have root access. 
You need to specify that you are looking for video card (graphics card) information in this fashion: `sudo lshw -C video` And as you can see in the output below, this command gives more information on the graphics card such as clock rate, width, driver etc. ``` abhishek@itsfoss:~$ sudo lshw -C video [sudo] password for abhishek: *-display description: VGA compatible controller product: HD Graphics 620 vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 02 width: 64 bits clock: 33MHz capabilities: pciexpress msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:139 memory:db000000-dbffffff memory:90000000-9fffffff ioport:f000(size=64) memory:c0000-dffff ``` ## Bonus Tip: Check graphics card details graphically It’s not that you must use the command line to find graphics card details in Linux. Most Linux distributions (or should I say desktop environments) provide essential details in the settings application. For example, if you are using [GNOME desktop environment](https://www.gnome.org/?ref=itsfoss.com), you can check the details by going to About section of Settings. Here’s what it looks like in [Ubuntu 20.04](https://itsfoss.com/ubuntu-20-04-release-features/): ![Ubuntu GPU Check](https://itsfoss.com/content/images/wordpress/2020/03/ubuntu-gpu-check.jpg) I hope you find this quick tip helpful. You can also use the same commands to [find your network adapter](https://itsfoss.com/find-network-adapter-ubuntu-linux/) and [CPU information in Linux](https://linuxhandbook.com/check-cpu-info-linux/?ref=itsfoss.com). If you have questions or suggestions, don’t hesitate to write a comment.
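To pull a single field out of the `lshw -C video` output programmatically, a `sed` one-liner is enough. A minimal sketch, using the article's own `configuration:` line as canned input so it runs without root or `lshw` installed:

```shell
#!/bin/sh
# lshw_driver: extract the kernel driver name (e.g. i915) from the
# "configuration: driver=... latency=..." line that lshw prints.
lshw_driver() {
    sed -n 's/.*driver=\([^ ]*\).*/\1/p'
}

# Canned line copied from the article's lshw output:
sample_config='       configuration: driver=i915 latency=0'
printf '%s\n' "$sample_config" | lshw_driver
```

On a real system the same filter applies to live output: `sudo lshw -C video | lshw_driver`.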
12,087
为何你的 Python 代码应是扁平与稀疏的
https://opensource.com/article/19/12/zen-python-flat-sparse
2020-04-08T19:18:27
[ "Python" ]
https://linux.cn/article-12087-1.html
> > 本文是 Python 之禅特别系列的第三篇,此篇着眼于其中第五与第六条原则:扁平与稀疏。 > > > ![](/data/attachment/album/202004/08/191645uiniiy56keig95gi.jpg) [Python 之禅](https://www.python.org/dev/peps/pep-0020/) 之所以得名,正是由于它那简明扼要的规则被设计出的意图在于让读者进行深入地思考,而绝不单是为编程提供一份易于遵守的指南。 读后不去三思其意,断然难以体会 Python 之禅的妙处。倘若 Python 之禅仅仅罗列出一组清晰的法则,那法则之间的矛盾是一种缺憾,然而作为引导读者沉思最优方案沉思的工具,矛盾却是绝佳的。 ### <ruby> 扁平胜过嵌套 <rt> Flat is better than nested </rt></ruby> 迫于对缩进的强硬要求,Python 对“扁平化”的需求显然远超它者。其余编程语言为了缓解对缩进的需求,通常会在嵌套结构里加入一种“作弊”的手段。为了理解这一点,不妨一同来看看 JavaScript。 JavaScript 本质上是异步的,这意味着程序员用 JavaScript 写的代码会用到大量的回调函数。 ``` a(function(resultsFromA) { b(resultsFromA, function(resultsfromB) { c(resultsFromC, function(resultsFromC) { console.log(resultsFromC) } } } ``` 忽略这段代码的具体内容,只去观察这段代码的形状与缩进带来一个最右边的点的方式。这种独特的“箭头”图形在我们扫看代码时格外扎眼,这种写法也因此被视作不可取,甚至得到了“回调地狱”的绰号。不过,在 JavaScript 中,这种反映嵌套关系的缩进可以通过“作弊”来回避。 ``` a(function(resultsFromA) { b(resultsFromA, function(resultsfromB) { c(resultsFromC, function(resultsFromC) { console.log(resultsFromC) }}} ``` Python 并没有提供这种作弊手段:每一级嵌套在代码中都如实的对应着一层缩进。因此,Python 深层的嵌套关系在*视觉*上也一定是深层嵌套的。这使得“回调地狱”的问题对于 Python 而言要比在 JavaScript 中严重得多:嵌套的回调函数必定带来缩进,而绝无使用花括号来“作弊”的可能。 这项挑战与 Python 之禅的指导原则相结合后,在我参与的库中催生出了一个优雅的解决方案。我们在 [Twisted](https://twistedmatrix.com/trac/) 框架里提出了 deferred 抽象,日后 JavaScript 中流行的 promise 抽象亦是受其启发而生。正是由于 Python 对整洁代码的坚守,方能推动 Python 开发者去发掘新的、强力的抽象。 ``` future_value = future_result() future_value.addCallback(a) future_value.addCallback(b) future_value.addCallback(c) ``` (现代 JavaScript 程序员也许会觉得这段代码十分眼熟:promise 着实受到了 Twisted 里 deferred 抽象的深远影响。) ### <ruby> 稀疏胜过密集 <rt> Sparse is better than dense </rt></ruby> 最易降低代码密集程度的方法是引入嵌套。这种习惯也正是有关稀疏的原则要随着前一条提出的原因:在竭尽所能地减少嵌套之后,我们往往会遗留下*密集的*代码或数据结构。此处的密集,是指塞进过量信息的小段代码,它们会导致错误发生后的解析变得困难。 这种密集性唯有通过创造性的思考方可改善,此外别无捷径。Python 之禅并不为我们提供简单的解决方案,它只会指明改善代码的方向,而非提供“如何”去做的向导。 起身走走,泡个热水澡,抑或是闻闻花香。盘坐冥思,直至灵感袭来。当你终于得到启发,便是动身写代码之时。 --- via: <https://opensource.com/article/19/12/zen-python-flat-sparse> 作者:[Moshe Zadka](https://opensource.com/users/moshez) 
选题:[lujun9972](https://github.com/lujun9972) 译者:[caiichenr](https://github.com/caiichenr) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
The [Zen of Python](https://www.python.org/dev/peps/pep-0020/) is called that for a reason. It was never supposed to provide easy-to-follow guidelines for programming. The rules are specified tersely and are designed to engage the reader in deep thought. In order to properly appreciate the Zen of Python, you must read it and then meditate upon the meanings. If the Zen were designed to be a set of clear rules, it would be a fault that it has rules that contradict each other. However, as a tool to help you meditate on the best solution, contradictions are powerful. ## Flat is better than nested. Nowhere is the pressure to be "flat" more obvious than in Python's strong insistence on indentation. Other languages will often introduce an implementation that "cheats" on the nested structure by reducing indentation requirements. To appreciate this point, let's take a look at JavaScript. JavaScript is natively async, which means that programmers write code in JavaScript using a lot of callbacks. ``` a(function(resultsFromA) { b(resultsFromA, function(resultsFromB) { c(resultsFromB, function(resultsFromC) { console.log(resultsFromC) }) }) }) ``` Ignoring the code, observe the pattern and the way indentation leads to a right-most point. This distinctive "arrow" shape makes the code tough on the eye to scan quickly, so it's seen as undesirable and even nicknamed "callback hell." However, in JavaScript, it is possible to "cheat" and not have indentation reflect nesting. ``` a(function(resultsFromA) { b(resultsFromA, function(resultsFromB) { c(resultsFromB, function(resultsFromC) { console.log(resultsFromC) })})}) ``` Python affords no such options to cheat: every nesting level in the program must be reflected in the indentation level. So deep nesting in Python *looks* deeply nested. That makes "callback hell" a worse problem in Python than in JavaScript: nested callbacks mean indented code, with no option to "cheat" with braces. 
This challenge, in combination with the Zen principle, has led to an elegant solution by a library I worked on. In the [Twisted](https://twistedmatrix.com/trac/) framework, we came up with the *deferred* abstraction, which would later inspire the popular JavaScript *promise* abstraction. In this way, Python's unwavering commitment to clear code forces Python developers to discover new, powerful abstractions. ``` future_value = future_result() future_value.addCallback(a) future_value.addCallback(b) future_value.addCallback(c) ``` (This might look familiar to modern JavaScript programmers: Promises were heavily influenced by Twisted's deferreds.) ## Sparse is better than dense. The easiest way to make something less dense is to introduce nesting. This habit is why the principle of sparseness follows the previous one: after we have reduced nesting as much as possible, we are often left with *dense* code or data structures. Density, in this sense, is jamming too much information into a small amount of code, making it difficult to decipher when something goes wrong. Reducing that denseness requires creative thinking, and there are no simple solutions. The Zen of Python does not offer simple solutions. All it offers are ways to find what can be improved in the code, without always giving guidance for "how." Take a walk. Take a shower. Smell the flowers. Sit in a lotus position and think hard, until finally, inspiration strikes. When you are finally enlightened, it is time to write the code.
12,088
如何在 Bash 中使用循环
https://opensource.com/article/19/6/how-write-loop-bash
2020-04-08T22:57:54
[ "循环", "Bash" ]
https://linux.cn/article-12088-1.html
> > 使用循环和查找命令批量自动对多个文件进行一系列的操作。 > > > ![](/data/attachment/album/202004/08/225655by8i8k7uyppp18ph.jpg) 人们希望学习批处理命令的一个普遍原因是要得到批处理强大的功能。如果你希望批量的对文件执行一些指令,构造一个可以重复运行在那些文件上的命令就是一种方法。在编程术语中,这被称作*执行控制*,`for` 循环就是其中最常见的一种。 `for` 循环可以详细描述你希望计算机对你指定的每个数据对象(比如说文件)所进行的操作。 ### 一般的循环 使用循环的一个简单例子是对一组文件进行分析。这个循环可能没什么用,但是这是一个安全的证明自己有能力独立处理文件夹里每一个文件的方法。首先,创建一个文件夹然后拷贝一些文件(例如 JPEG、PNG 等类似的文件)至文件夹中生成一个测试环境。你可以通过文件管理器或者终端来完成创建文件夹和拷贝文件的操作: ``` $ mkdir example $ cp ~/Pictures/vacation/*.{png,jpg} example ``` 切换到你刚创建的那个新文件夹,然后列出文件并确认这个测试环境是你需要的: ``` $ cd example $ ls -1 cat.jpg design_maori.png otago.jpg waterfall.png ``` 在循环中逐一遍历文件的语法是:首先声明一个变量(例如使用 `f` 代表文件),然后定义一个你希望用变量循环的数据集。在这种情况下,使用 `*` 通配符来遍历当前文件夹下的所有文件(通配符 `*` 匹配*所有文件*)。然后使用一个分号(`;`)来结束这个语句。 ``` $ for f in * ; ``` 取决于你个人的喜好,你可以选择在这里按下回车键。在语法完成前,shell 是不会尝试执行这个循环的。 接下来,定义你想在每次循环中进行的操作。简单起见,使用 `file` 命令来得到 `f` 变量(使用 `$` 告诉 shell 使用这个变量的值,无论这个变量现在存储着什么)所存储着的文件的各种信息: ``` do file $f ; ``` 使用另一个分号结束这一行,然后关闭这个循环: ``` done ``` 按下回车键启动 shell 对当前文件夹下*所有东西*的遍历。`for` 循环将会一个一个的将文件分配给变量 `f` 并且执行你的命令: ``` $ for f in * ; do > file $f ; > done cat.jpg: JPEG image data, EXIF standard 2.2 design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced otago.jpg: JPEG image data, EXIF standard 2.2 waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced ``` 你也可以用这种形式书写命令: ``` $ for f in *; do file $f; done cat.jpg: JPEG image data, EXIF standard 2.2 design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced otago.jpg: JPEG image data, EXIF standard 2.2 waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced ``` 对你的 shell 来说,多行和单行的格式没有什么区别,并且会输出完全一样的结果。 ### 一个实用的例子 下面是一个循环在日常使用中的实用案例。假如你拥有一堆假期拍的照片想要发给你的朋友。但你的照片太大了,无法通过电子邮件发送,上传到[图片分享服务](http://nextcloud.com)也不方便。因此你想为你的照片创建小型的 web 版本,但是你不希望花费太多时间在一个一个的压缩图片体积上。 首先,在你的 Linux、BSD 或者 Mac 上使用包管理器安装 ImageMagick 命令。例如,在 Fedora 和 RHEL 上: ``` $ sudo dnf install ImageMagick ``` 在 Ubuntu 和 Debian 上: ``` $ sudo apt install ImageMagick ``` 在 
BSD 上,使用 `ports` 或者 [pkgsrc](http://pkgsrc.org) 安装。在 Mac 上,使用 [Homebrew](http://brew.sh) 或者 [MacPorts](https://www.macports.org) 安装。 在你安装了 ImageMagick 之后,你就拥有一系列可以用来操作图片的新命令了。 为你将要创建的文件建立一个目标文件夹: ``` $ mkdir tmp ``` 使用下面的循环可以将每张图片减小至原来大小的 33%。 ``` $ for f in * ; do convert $f -scale 33% tmp/$f ; done ``` 然后就可以在 `tmp` 文件夹中看到已经缩小了的照片了。 你可以在循环体中使用任意数量的命令,因此如果你需要对一批文件进行复杂的操作,可以将你的命令放在一个 `for` 循环的 `do` 和 `done` 语句之间。例如,假设你希望将所有处理过的图片拷贝至你的网站所托管的图片文件夹并且在本地系统移除这些文件: ``` $ for f in * ; do convert $f -scale 33% tmp/$f scp -i seth_web tmp/$f [email protected]:~/public_html trash tmp/$f ; done ``` 你的计算机会对 `for` 循环中处理的每一个文件自动的执行 3 条命令。这意味着假如你仅仅处理 10 张图片,也会省下输入 30 条指令和更多的时间。 ### 限制你的循环 一个循环常常不需要处理所有文件。在示例文件夹中,你可能需要处理的只是 JPEG 文件: ``` $ for f in *.jpg ; do convert $f -scale 33% tmp/$f ; done $ ls -m tmp cat.jpg, otago.jpg ``` 或者,你希望重复特定次数的某个操作而不仅仅只处理文件。`for` 循环的变量的值是被你赋给它的(不管何种类型的)数据所决定的,所以你可以创建一个循环遍历数字而不只是文件: ``` $ for n in {0..4}; do echo $n ; done 0 1 2 3 4 ``` ### 更多循环 现在你了解的知识已经足够用来创建自己的循环体了。直到你对循环非常熟悉之前,尽可能的在需要处理的文件的*副本*上进行操作。使用内置的保护措施可以预防损坏自己的数据和制造不可复现的错误,例如偶然将一个文件夹下的所有文件重命名为同一个名字,就可能会导致他们的相互覆盖。 更进一步的 `for` 循环话题,请继续阅读。 ### 不是所有的 shell 都是 Bash 关键字 `for` 是内置在 Bash shell 中的。许多类似的 shell 会使用和 Bash 同样的关键字和语法,但是也有某些 shell ,比如 [tcsh](https://en.wikipedia.org/wiki/Tcsh),使用不同的关键字,例如 `foreach`。 tcsh 的语法与 Bash 类似,但是它更为严格。例如在下面的例子中,不要在你的终端的第 2、3 行键入 `foreach?` 。它只是提示你仍处在构建循环的过程中。 ``` $ foreach f (*) foreach? file $f foreach? 
end cat.jpg: JPEG image data, EXIF standard 2.2 design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced otago.jpg: JPEG image data, EXIF standard 2.2 waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced ``` 在 tcsh 中,`foreach` 和 `end` 都必须单独的在一行中出现。因此你不能像 Bash 或者其他类似的 shell 一样只使用一行命令创建一个 `for` 循环。 ### for 循环与 find 命令 理论上,你可能会用到不支持 `for` 循环的 shell,或者你只是更想使用其他命令的一些特性来完成和循环一样的工作。 使用 `find` 命令是另一个实现 `for` 循环功能的途径。这个命令提供了多种方法来定义循环中包含哪些文件的范围以及[并行](https://opensource.com/article/18/5/gnu-parallel)处理的选项。 `find` 命令顾名思义就是帮助你查询存储在硬盘里的文件。它的用法很简单:提供一个你希望它查询的位置的路径,接着 `find` 就会查询这个路径下面的所有文件和文件夹。 ``` $ find . . ./cat.jpg ./design_maori.png ./otago.jpg ./waterfall.png ``` 你可以通过添加名称的某些部分来过滤搜索结果: ``` $ find . -name "*jpg" ./cat.jpg ./otago.jpg ``` `find` 命令非常好的地方在于你可以通过 `-exec` 参数标志将它查询到的每一个文件放入循环中。例如,只对存放在你的 `example` 文件夹下的 PNG 图片进行体积压缩操作: ``` $ find . -name "*png" -exec convert {} -scale 33% tmp/{} \; $ ls -m tmp design_maori.png, waterfall.png ``` 在 `-exec` 短语中,括号 `{}` 表示的是 `find` 正在处理的条目(换句话说,每一个被找到的以 PNG 结尾的文件)。`-exec` 短语必须使用分号结尾,但是 Bash 中常常也会使用分号。为了解决这个二义性问题,你的 `结束符` 可以使用反斜杠加上一个分号(`\;`),使得 `find` 命令可以知道这个结束符是用来标识自己结束使用的。 `find` 命令的操作非常棒,某些情况下它甚至可以表现得更棒。比如说,在一个新的进程中使用同一条命令查找 PNG 文件,你可能就会得到一些错误信息: ``` $ find . -name "*png" -exec convert {} -flip -flop tmp/{} \; convert: unable to open image `tmp/./tmp/design_maori.png': No such file or directory @ error/blob.c/OpenBlob/2643. ... ``` 看起来 `find` 不只是定位了当前文件夹(`.`)下的所有 PNG 文件,还包括已经处理并且存储到了 `tmp` 下的文件。在一些情况下,你可能希望 `find` 查询当前文件夹下再加上其子文件夹下的所有文件。`find` 命令是一个功能强大的递归工具,特别体现在处理一些文件结构复杂的情境下(比如用来放置存满了音乐人音乐专辑的文件夹),同时你也可以使用 `-maxdepth` 选项来限制最大的递归深度。 只在当前文件夹下查找 PNG 文件(不包括子文件夹): ``` $ find . -maxdepth 1 -name "*png" ``` 上一条命令的最大深度再加 1 就可以查找和处理当前文件夹及下一级子文件夹下面的文件: ``` $ find . 
-maxdepth 2 -name "*png" ``` `find` 命令默认是查找每一级文件夹。 ### 循环的乐趣与收益 你使用的循环越多,你就可以越多的省下时间和力气,并且可以应对庞大的任务。虽然你只是一个用户,但是通过使用循环,可以使你的计算机完成困难的任务。 你可以并且应该就像使用其他的命令一样使用循环。在你需要重复处理单个或多个文件时,尽可能的使用这个命令。无论如何,这也算是一项需要被严肃对待的编程活动,因此如果你需要在一些文件上完成复杂的任务,你应该多花点时间在规划自己的工作流上面。如果你可以在一份文件上完成你的工作,接下来将操作包装进 `for` 循环里就相对简单了,这里面唯一的“编程”的需要只是理解变量是如何工作的并且进行充分的规划工作将已处理过的文件和未处理过的文件分开。经过一段时间的练习,你就可以从一名 Linux 用户升级成一位知道如何使用循环的 Linux 用户,所以开始让计算机为你工作吧! --- via: <https://opensource.com/article/19/6/how-write-loop-bash> 作者:[Seth Kenlon](https://opensource.com/users/seth/users/goncasousa/users/howtopamm/users/howtopamm/users/seth/users/wavesailor/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[chunibyo-wly](https://github.com/chunibyo-wly) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
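One caveat worth adding to the loops above: the examples expand `$f` unquoted, which happens to work for these filenames but breaks as soon as a name contains a space. Quoting the variable (`"$f"`) keeps each filename as a single argument. A minimal sketch (the demo directory and filenames are invented for illustration):

```shell
#!/bin/sh
# Demonstrate that a quoted "$f" survives a filename containing a space.
mkdir -p loop_demo
touch loop_demo/cat.jpg "loop_demo/summer trip.jpg"

seen=""
for f in loop_demo/*.jpg ; do
    seen="$seen[$f]"      # "$f" stays whole even with a space in the name
done
rm -rf loop_demo
printf '%s\n' "$seen"     # prints: [loop_demo/cat.jpg][loop_demo/summer trip.jpg]
```

The same quoting applies to the article's commands, e.g. `convert "$f" -scale 33% "tmp/$f"`.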
200
OK
A common reason people want to learn the Unix shell is to unlock the power of batch processing. If you want to perform some set of actions on many files, one of the ways to do that is by constructing a command that iterates over those files. In programming terminology, this is called *execution control*, and one of the most common examples of it is the **for** loop. A **for** loop is a recipe detailing what actions you want your computer to take *for* each data object (such as a file) you specify. ## The classic for loop An easy loop to try is one that analyzes a collection of files. This probably isn't a useful loop on its own, but it's a safe way to prove to yourself that you have the ability to handle each file in a directory individually. First, create a simple test environment by creating a directory and placing some copies of some files into it. Any file will do initially, but later examples require graphic files (such as JPEG, PNG, or similar). You can create the folder and copy files into it using a file manager or in the terminal: ``` $ mkdir example $ cp ~/Pictures/vacation/*.{png,jpg} example ``` Change directory to your new folder, then list the files in it to confirm that your test environment is what you expect: ``` $ cd example $ ls -1 cat.jpg design_maori.png otago.jpg waterfall.png ``` The syntax to loop through each file individually in a loop is: create a variable (**f** for file, for example). Then define the data set you want the variable to cycle through. In this case, cycle through all files in the current directory using the `*` wildcard character (the `*` wildcard matches *everything*). Then terminate this introductory clause with a semicolon (**;**). `$ for f in * ;` Depending on your preference, you can choose to press **Return** here. The shell won't try to execute the loop until it is syntactically complete. Next, define what you want to happen with each iteration of the loop. 
For simplicity, use the **file** command to get a little bit of data about each file, represented by the **f** variable (but prepended with a **$** to tell the shell to swap out the value of the variable for whatever the variable currently contains): `do file $f ;` Terminate the clause with another semi-colon and close the loop: `done` Press **Return** to start the shell cycling through *everything* in the current directory. The **for** loop assigns each file, one by one, to the variable **f** and runs your command: ``` $ for f in * ; do > file $f ; > done cat.jpg: JPEG image data, EXIF standard 2.2 design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced otago.jpg: JPEG image data, EXIF standard 2.2 waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced ``` You can also write it this way: ``` $ for f in *; do file $f; done cat.jpg: JPEG image data, EXIF standard 2.2 design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced otago.jpg: JPEG image data, EXIF standard 2.2 waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced ``` Both the multi-line and single-line formats are the same to your shell and produce the exact same results. ## A practical example Here's a practical example of how a loop can be useful for everyday computing. Assume you have a collection of vacation photos you want to send to friends. Your photo files are huge, making them too large to email and inconvenient to upload to your [photo-sharing service](http://nextcloud.com). You want to create smaller web-versions of your photos, but you have 100 photos and don't want to spend the time reducing each photo, one by one. First, install the **ImageMagick** command using your package manager on Linux, BSD, or Mac. For instance, on Fedora and RHEL: `$ sudo dnf install ImageMagick` On Ubuntu or Debian: `$ sudo apt install ImageMagick` On BSD, use **ports** or [pkgsrc](http://pkgsrc.org). 
On Mac, use [Homebrew](http://brew.sh) or [MacPorts](https://www.macports.org). Once you install ImageMagick, you have a set of new commands to operate on photos. Create a destination directory for the files you're about to create: `$ mkdir tmp` To reduce each photo to 33% of its original size, try this loop: `$ for f in * ; do convert $f -scale 33% tmp/$f ; done` Then look in the **tmp** folder to see your scaled photos. You can use any number of commands within a loop, so if you need to perform complex actions on a batch of files, you can place your whole workflow between the **do** and **done** statements of a **for** loop. For example, suppose you want to copy each processed photo straight to a shared photo directory on your web host and remove the photo file from your local system: ``` $ for f in * ; do convert $f -scale 33% tmp/$f scp -i seth_web tmp/$f [email protected]:~/public_html trash tmp/$f ; done ``` For each file processed by the **for** loop, your computer automatically runs three commands. This means if you process just 10 photos this way, you save yourself 30 commands and probably at least as many minutes. ## Limiting your loop A loop doesn't always have to look at every file. You might want to process only the JPEG files in your example directory: ``` $ for f in *.jpg ; do convert $f -scale 33% tmp/$f ; done $ ls -m tmp cat.jpg, otago.jpg ``` Or, instead of processing files, you may need to repeat an action a specific number of times. A **for** loop's variable is defined by whatever data you provide it, so you can create a loop that iterates over numbers instead of files: ``` $ for n in {0..4}; do echo $n ; done 0 1 2 3 4 ``` ## More looping You now know enough to create your own loops. 
Until you're comfortable with looping, use them on *copies* of the files you want to process and, as often as possible, use commands with built-in safeguards to prevent you from clobbering your data and making irreparable mistakes, like accidentally renaming an entire directory of files to the same name, each overwriting the other. For advanced **for** loop topics, read on. ## Not all shells are Bash The **for** keyword is built into the Bash shell. Many similar shells use the same keyword and syntax, but some shells, like [tcsh](https://en.wikipedia.org/wiki/Tcsh), use a different keyword, like **foreach**, instead. In tcsh, the syntax is similar in spirit but more strict than Bash. In the following code sample, do not type the string **foreach?** in lines 2 and 3. It is a secondary prompt alerting you that you are still in the process of building your loop. ``` $ foreach f (*) foreach? file $f foreach? end cat.jpg: JPEG image data, EXIF standard 2.2 design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced otago.jpg: JPEG image data, EXIF standard 2.2 waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced ``` In tcsh, both **foreach** and **end** must appear alone on separate lines, so you cannot create a **for** loop on one line as you can with Bash and similar shells. ## For loops with the find command In theory, you could find a shell that doesn't provide a **for** loop function, or you may just prefer to use a different command with added features. The **find** command is another way to implement the functionality of a **for** loop, as it offers several ways to define the scope of which files to include in your loop as well as options for [parallel](https://opensource.com/article/18/5/gnu-parallel) processing. The **find** command is meant to help you find files on your hard drives. Its syntax is simple: you provide the path of the location you want to search, and **find** finds all files and directories: ``` $ find . . 
./cat.jpg ./design_maori.png ./otago.jpg ./waterfall.png ``` You can filter the search results by adding some portion of the name: ``` $ find . -name "*jpg" ./cat.jpg ./otago.jpg ``` The great thing about **find** is that each file it finds can be fed into a loop using the **-exec** flag. For instance, to scale down only the PNG photos in your example directory: ``` $ find . -name "*png" -exec convert {} -scale 33% tmp/{} \; $ ls -m tmp design_maori.png, waterfall.png ``` In the **-exec** clause, the bracket characters **{}** stand in for whatever item **find** is processing (in other words, any file ending in PNG that has been located, one at a time). The **-exec** clause must be terminated with a semicolon, but Bash usually tries to use the semicolon for itself. You "escape" the semicolon with a backslash (**\;**) so that **find** knows to treat that semicolon as its terminating character. The **find** command is very good at what it does, and it can be too good sometimes. For instance, if you reuse it to find PNG files for another photo process, you will get a few errors: ``` $ find . -name "*png" -exec convert {} -flip -flop tmp/{} \; convert: unable to open image `tmp/./tmp/design_maori.png': No such file or directory @ error/blob.c/OpenBlob/2643. ... ``` It seems that **find** has located all the PNG files—not only the ones in your current directory (**.**) but also those that you processed before and placed in your **tmp** subdirectory. In some cases, you may want **find** to search the current directory plus all other directories within it (and all directories in *those*). It can be a powerful recursive processing tool, especially in complex file structures (like directories of music artists containing directories of albums filled with music files), but you can limit this with the **-maxdepth** option. To find only PNG files in the current directory (excluding subdirectories): `$ find . 
-maxdepth 1 -name "*png"` To find and process files in the current directory plus an additional level of subdirectories, increment the maximum depth by 1: `$ find . -maxdepth 2 -name "*png"` Its default is to descend into all subdirectories. ## Looping for fun and profit The more you use loops, the more time and effort you save, and the bigger the tasks you can tackle. You're just one user, but with a well-thought-out loop, you can make your computer do the hard work. You can and should treat looping like any other command, keeping it close at hand for when you need to repeat a single action or two on several files. However, it's also a legitimate gateway to serious programming, so if you have to accomplish a complex task on any number of files, take a moment out of your day to plan out your workflow. If you can achieve your goal on one file, then wrapping that repeatable process in a **for** loop is relatively simple, and the only "programming" required is an understanding of how variables work and enough organization to separate unprocessed from processed files. With a little practice, you can move from a Linux user to a Linux user who knows how to write a loop, so get out there and make your computer work for you!
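The `-maxdepth` behaviour described above is easy to verify on a throwaway tree. A small sketch (directory and file names invented for illustration) that compares the depth-limited search with the default recursive one:

```shell
#!/bin/sh
# Build a two-level tree: one PNG at the top, one in a subdirectory.
mkdir -p depth_demo/tmp
touch depth_demo/top.png depth_demo/tmp/nested.png

# -maxdepth 1 stays in the top directory; the default descends everywhere.
shallow=$(find depth_demo -maxdepth 1 -name "*.png")
all=$(find depth_demo -name "*.png" | sort)

printf 'maxdepth 1: %s\n' "$shallow"
printf 'default:    %s\n' "$all"
rm -rf depth_demo
```

The shallow search prints only `depth_demo/top.png`, while the default search also reports `depth_demo/tmp/nested.png`.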
12,092
使用 at 命令在 Linux 上安排任务
https://www.networkworld.com/article/3535808/scheduling-tasks-on-linux-using-the-at-command.html
2020-04-09T21:59:59
[ "at", "cron" ]
https://linux.cn/article-12092-1.html
> > at 命令可以很容易地安排 Linux 任务在你选择的任何时间或日期运行,让我们来看看它能为你做什么。 > > > ![](/data/attachment/album/202004/09/215934pf4iau5vvi4rv4vg.jpg) 当你希望命令或脚本在某个特定时间运行时,你不需要将手指放在键盘上盘旋等待按下回车键,或者是在特定时间坐在办公桌前。相反,你可以通过 `at` 命令来设置任务。在本文中,我们将研究如何使用 `at` 来安排任务,如何精确地选择任务希望运行的时间,以及如何使用 `at` 来查看安排运行的任务。 ### at vs cron 对于那些使用 cron 在 Linux 系统上安排任务的人来说,`at` 命令类似于 cron,因为你可以在选定的时间调度任务,但是 cron 用于定期运行的作业 —— 甚至是每年仅一次。大多数 cron 作业的频率都设置为每天、每周或每月运行一次,不过你可以控制运行的频率和时间。 另一方面,`at` 命令用于仅运行一次的任务。想在午夜重启系统?没问题,只要你有适当的权限,`at` 可以为你完成此操作。如果你希望系统在每个星期六凌晨 2 点重启,那么改用 cron。 ### 使用 at `at` 命令很容易使用,只需记住几件事。一个简单使用 `at` 的例子类似于这样: ``` $ at 5:00PM at> date >> thisfile at> <EOT> ``` 在输入 `at` 和应该运行命令的时间,`at` 会提示你在设定时间会运行该命令(此例中是 `date` 命令)。输入 `^D`(`Ctrl + d`)来完成请求。 假设我们在下午 5 点之前设置这个 `at` 命令,那么这个日期和时间将在当天下午 5 点添加到名为 `thisfile` 文件的末尾。否则,该命令将在第二天下午 5 点运行。 与 `at` 命令进行交互时,可以输入多个命令。如果你要同时运行多个命令,只需输入多个命令行即可: ``` $ at 6:22 warning: commands will be executed using /bin/sh at> echo first >> thisfile at> echo second >> thisfile at> <EOT> ``` 在上面的命令中,我们使用了一个普通的用户账户,将一些简单的文本添加到该用户主目录的文件中。如果在上午 6:22 之后运行这些命令,那么命令会在第二天运行,因为 6:22 表示上午 6:22。如果你想在下午 6:22 运行,使用 `6:22 PM` 或者 `18:22`。`6:22 PM` 这样也是可以工作的。 你也可以通过使用 `at` 来安排命令在指定的日期或时间运行,例如 `10:00AM April 15 2021` 或 `noon + 5 days`(从今天起 5 天内的中午运行),以下是一些例子: ``` at 6PM tomorrow at noon April 15 2021 at noon + 5 days at 9:15 + 1000 days ``` 在指定要运行的命令并按下 `^D` 后,你会注意到 `at` 命令为每个请求分配了一个作业编号,这个数字将显示在 `at` 命令的作业队列中。 ``` $ at noon + 1000 days warning: commands will be executed using /bin/sh at> date >> thisfile at> <EOT> job 36 at Tue Dec 27 12:00:00 2022 <== job # is 36 ``` ### 检查队列 你可以使用 `atq`(at queue)命令来查看 `at` 作业队列: ``` $ atq 32 Thu Apr 2 03:06:00 2020 a shs 35 Mon Apr 6 12:00:00 2020 a shs 36 Tue Dec 27 12:00:00 2022 a shs 34 Thu Apr 2 18:00:00 2020 a shs ``` 如果你需要取消队列中的一个作业,使用 `atrm`(at remove)命令和作业编号: ``` $ atrm 32 $ atq 35 Mon Apr 6 12:00:00 2020 a shs 36 Tue Dec 27 12:00:00 2022 a shs 34 Thu Apr 2 18:00:00 2020 a shs ``` 你可以使用 `at -c` 命令来查看安排任务的详细信息,其它详细信息(活动的搜索路径等)也可以看到,但是输出的最后一行将显示计划运行的命令。 ``` $ at 
-c 36 | tail -6 cd /home/shs || { echo 'Execution directory inaccessible' >&2 exit 1 } date >> thisfile ``` 注意,该命令显示首先会测试是否可以通过 `cd` 命令进入用户目录。如果不可以,作业将退出并显示错误。如果可以,则运行在 `at` 中指定的命令。它将命令视为 “进入 `/home/shs` 或退出并显示错误”。 ### 以 root 身份运行作业 要以 root 身份运行 `at` 作业,只需将 `sudo` 与你的 `at` 命令一起使用,如下所示: ``` $ sudo at 8PM [sudo] password for shs: warning: commands will be executed using /bin/sh at> reboot now at> <EOT> job 37 at Wed Apr 1 16:00:00 2020 ``` 注意,root 的任务以 `root` 作为执行者显示在队列中。 ``` 35 Mon Apr 6 12:00:00 2020 a shs 36 Tue Dec 27 12:00:00 2022 a shs 37 Wed Apr 1 20:00:00 2020 a root <== ``` ### 运行脚本 你还可以使用 `at` 命令来运行脚本,这里有一个例子: ``` $ at 4:30PM warning: commands will be executed using /bin/sh at> bin/tryme at> <EOT> ``` ### 禁止使用 at 命令 `/etc/at.deny` 文件提供了一种禁止用户使用 `at` 命令的方法。默认情况下,它可能会包含一个不允许的账户列表,例如 `ftp` 和 `nobody`。可以使用 `/etc/at.allow` 文件执行相反的操作,但是通常只配置 `at.deny` 文件。 ### 总结 当你要安排一项一次性任务时,无论你是希望在今天下午或几年后运行,`at` 命令都是通用且易于使用的。 --- via: <https://www.networkworld.com/article/3535808/scheduling-tasks-on-linux-using-the-at-command.html> 作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[MjSeven](https://github.com/MjSeven) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
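`at` 也可以从标准输入读取命令,因此无需交互式的 `at>` 提示符即可安排作业,例如 `echo "date >> thisfile" | at 5:00PM`。下面的辅助函数是假想的示例:它只*打印*将要执行的管道而不真正运行,因此在没有 atd 守护进程的机器上也可以安全地试验。(The helper below is a hypothetical sketch: it only prints the pipeline it would run.)

```shell
#!/bin/sh
# schedule_once: hypothetical dry-run helper that shows how a one-off job
# would be handed to at(1) non-interactively via stdin.
schedule_once() {
    # $1 = shell command to run, $2 = at-style time specification
    printf "echo '%s' | at %s\n" "$1" "$2"
}

schedule_once 'date >> thisfile' '5:00PM'
# prints: echo 'date >> thisfile' | at 5:00PM
```

要真正提交作业,把 `printf` 换成 `eval` 或直接执行 `echo 'date >> thisfile' | at 5:00PM` 即可。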
301
Moved Permanently
null
12,093
解读 Ubuntu 里的密钥环概念
https://itsfoss.com/ubuntu-keyring/
2020-04-09T23:04:18
[ "密钥", "密码", "密钥环" ]
https://linux.cn/article-12093-1.html
![](/data/attachment/album/202004/09/230421ufdydfttjd9882ka.png) 如果你用过 Ubuntu 或者其他的 Linux 发行版里的[自动登录功能](https://itsfoss.com/ubuntu-automatic-logon/), 你可能遇到过这种弹出消息: > > 请输入密码以解锁你的登录密钥环 > > > 登录密钥环在你登录系统时未解锁。 > > > ![Enter Password To Unlock Your Login Keyring Ubuntu](/data/attachment/album/202004/09/230422m0xvgv0oj4fjdk7d.jpg) 如果你一直点击取消,它会不断弹出几次才会消失。你可能想知道,为什么你会一直看到这个密钥环信息呢? 让我来告诉你吧。它其实不是错误,而是一个安全特性。 奇怪吗?下面就让我来解释下 Linux 里的密钥环概念。 ### Linux 里的密钥环是什么。为什么需要它? 在现实生活中你为什么要用钥匙环(也叫[钥匙链](https://en.wikipedia.org/wiki/Keychain))?你用它把一把或多把钥匙串到一起, 以便于携带和查找。 Linux 里也是类似的。密钥环特性使你的系统可以将各种密码放在一起,并将其保存在一个地方。 大多数 Linux 桌面环境,如 GNOME、KDE、Xfce 等采用 [GNOME 密钥环](https://wiki.archlinux.org/index.php/GNOME/Keyring)来提供这个功能。 该密钥环保存了 ssh 密钥、GPG 密钥以及使用此功能的应用程序(例如 Chromium 浏览器)的密钥。默认情况下,**“密钥环”通过主密码来保护**,该密码通常是帐户的登录密码。 系统上的每个用户都有自己的密钥环,(通常)密码与用户帐户本身的密码相同。当你使用密码登录系统时,你的密匙环将使用你帐户的密码自动解锁。 当你[启用 Ubuntu 中的自动登录功能时](https://itsfoss.com/ubuntu-automatic-logon/)时,就有问题了。这意味着你无需输入密码即可登录系统。在这种情况下,你的密钥环不会自动解锁。 #### 密钥环是一个安全特性 记得我说过密钥环是一个安全特性吗?现在想象一下你在 Linux 电脑上开启了自动登录功能。有权访问你电脑的任何人无需密码就能进入你的系统。但是你可能不会在意,因为你只是用它来访问互联网。 但是,如果你在 Ubuntu 中使用 Chromium 或 [Google Chrome](https://itsfoss.com/install-chrome-ubuntu/) 之类的浏览器,并使用它来保存各种网站的登录密码,那么你将遇到麻烦。任何人都可以使用浏览器并利用你在浏览器中保存的密码登录网站。这不很危险吗? 这就是为什么当你使用 Chrome 时,它将反复地提示你先解锁密钥环。这确保了只有知道密钥环密码(即账户密码)的人才能使用在浏览器中保存的密码来登录它们相关的网站。 如果你反复取消解锁密钥环的提示,它最终将消失,并允许你使用浏览器。但是,保存的密码将不会被解锁,你在 Chromium/Chome 浏览器上将会看到“同步暂停”的提示。 ![Sync paused in Google Chrome](/data/attachment/album/202004/09/230425e6ufyzpksktk7msf.jpg) #### 如果密钥环一直存在,为什么你从来没有见过它呢? 
如果你在你的 Linux 系统上从没见过它的话，这个问题就很有道理。如果你从没有用过自动登录功能（或者修改你的账户密码），你可能都没有意识到这个特性的存在。

这是因为当你通过你的密码登录系统时，你的密钥环被你的账户密码自动解锁了。

Ubuntu（和其他发行版）在执行普通的管理任务（如修改用户、安装新软件等）时需要输入密码，无论你是否是自动登录的。但是对于像使用浏览器这样的日常任务，它不需要输入密码，因为密钥环已经被解锁了。

当你切换到自动登录时，你不再需要输入登录密码。这意味着密钥环没有被自动解锁，因此当你使用利用了密钥环特性的浏览器时，它将提示你来解锁密钥环。

#### 你可以轻松地管理密钥环和密码

这个密钥环放在哪里？它的核心是一个守护进程（一个在后台自动运行的程序）。

别担心。你不必通过终端来操作这个守护进程。大多数桌面环境都自带一个可以和它进行交互的图形化应用程序。KDE 上有 KDE 钱包，GNOME 和其他桌面上叫做“密码和密钥”（之前叫 [Seahorse](https://wiki.debian.org/Seahorse)）。

![Password And Keys App in Ubuntu](/data/attachment/album/202004/09/230425u49fevvd2284zhwh.jpg)

你可以用这个 GUI 程序来查看哪些应用程序在用密钥环来管理/保护密码。

你可以看到，我的系统有自动创建的登录密钥环。也有一个存储 GPG 和 SSH 密钥的密钥环。那个[证书](https://help.ubuntu.com/lts/serverguide/certificates-and-security.html)密钥环用来保存证书机构颁发的证书（如 HTTPS 证书）。

![Password and Keys application in Ubuntu](/data/attachment/album/202004/09/230426pr9jj9ifjsgiez9i.png)

你也可以使用这个应用程序来手动保存网站的密码。例如，我创建了一个新的叫做“Test”的受密码保护的密钥环，并手动存储了一个密码。

这比在一个文本文件中保存一批密码要好一些。至少在这种情况下，你的密码只有在你通过密码解锁了密钥环时才能被看到。

![Saving New Password Seahorse](/data/attachment/album/202004/09/230427q8ehahh2aes6huhw.png)

这里有一个潜在的问题：如果你格式化你的系统，手动保存的密码必然会丢失。通常，你会备份你的个人文件，但不会备份所有的用户特定数据，如密钥环文件。

有一种办法能解决它。密钥环数据通常保存在 `~/.local/share/keyrings` 目录。在这里你可以看到所有的密钥环，但是你不能直接看到它们的内容。如果你移除密钥环的密码（我会在这篇文章的后面描述操作步骤），你就可以像读取普通文本文件一样读取密钥环的内容。你可以将这个解锁后的密钥环文件完整地复制下来，并导入到其他 Linux 机器上运行的“密码和密钥”应用程序中。

总结一下目前为止所学的内容：

* 大多数 Linux 系统默认已经安装并激活了密钥环特性
* 系统上的每个用户都拥有他自己的密钥环
* 密钥环通常是用账户密码锁定（保护）的
* 当你通过密码登录时密钥环会被自动解锁
* 对于自动登录，密钥环不会自动解锁，因此当你试图使用依赖密钥环的应用程序时会被提示先解锁它
* 并不是所有的浏览器或应用程序都利用了密钥环特性
* （Linux 上）安装有一个可以和密钥环交互的 GUI 程序
* 你可以用密钥环来手动存储加密格式的密码
* 你可以自己修改密钥环密码
* 你可以通过导出（需要先解锁密钥环）并导入到其他计算机上的方式来获取手工保存的密码

### 修改密钥环密码

假设你修改了你的账户密码。当你登录时，你的系统试图通过新的登录密码来自动解锁密钥环。但是密钥环还在使用老的登录密码。

这种情况下，你可以将密钥环密码修改为新的登录密码，这样密钥环才能在你登录系统时自动解锁。

从菜单中打开“密码和密钥”应用程序：

![Look for Password and Keys app in the menu](/data/attachment/album/202004/09/230425u49fevvd2284zhwh.jpg)

在“Login”密钥环上右击并点击“修改密码”：

![Change Keyring Password](/data/attachment/album/202004/09/230428kthx6noqhnzqnzph.png)

#### 如果你不记得老的登录密码怎么办？

你可能知道在 [Ubuntu 上重置忘记的密码很容易](https://itsfoss.com/how-to-hack-ubuntu-password/)。但是密钥环在这种场景下还是有问题。你修改了账户密码，但是你不记得仍然被密钥环使用的老的账户密码。

你不能修改它，因为你不知道老的密码。怎么办？

这种情况下，你将不得不移除整个密钥环。你可以通过“密码和密钥”应用程序来操作：

![Delete Keyring Ubuntu](/data/attachment/album/202004/09/230431ng60t99gtzxhm9go.jpg)

它会提示你进行确认：

![Delete Keyring](/data/attachment/album/202004/09/230437ojzpwajfqcfwkzj6.jpg)

另外，你也可以手动删除 `~/.local/share/keyrings` 目录下的密钥环文件。

老的密钥环文件被移除后，你再打开 Chrome/Chromium 时，它会提示你创建一个新的密钥环。

![New Keyring Password](/data/attachment/album/202004/09/230438h3puukppjkopu41o.jpg)

你可以使用新的登录密码，这样密钥环以后就会被自动解锁了。

### 禁用密钥环密码

在你想用自动登录但又不想手动解锁密钥环时，你可以把禁用密钥环密码作为一个规避方法。记住你正在禁用一个安全特性，因此请三思。

操作步骤和修改密钥环密码相似。打开“密码和密钥”应用程序，然后修改密钥环密码。

技巧在于当它提示修改密码时，不要输入新密码，而是点击“继续”按钮。这将移除密钥环的密码。

![Disable Keyring password by not setting any password at all](/data/attachment/album/202004/09/230439m4n1i2i3i723z24n.png)

这样，密钥环就没有密码保护，并将一直处于解锁状态。

---

via: <https://itsfoss.com/ubuntu-keyring/>

作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[messon007](https://github.com/messon007) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
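补充一个小技巧：上文提到过，`~/.local/share/keyrings` 目录里的密钥环文件可以被完整地复制到别处。下面是一个 Python 备份脚本的示意（路径仅为假设示例，请按你系统的实际情况调整；`dirs_exist_ok` 需要 Python 3.8+）：

```python
import shutil
from pathlib import Path

def backup_keyrings(src='~/.local/share/keyrings', dst='~/keyrings-backup'):
    """把密钥环目录完整复制一份到备份位置（若源目录存在）。"""
    src_dir = Path(src).expanduser()
    dst_dir = Path(dst).expanduser()
    if not src_dir.is_dir():
        return None  # 没有密钥环目录，直接跳过
    # 递归复制整个目录；dirs_exist_ok 允许覆盖已有的备份目录
    shutil.copytree(src_dir, dst_dir, dirs_exist_ok=True)
    return dst_dir

backup_keyrings()
```

要恢复时，把备份目录里的文件拷回 `~/.local/share/keyrings` 即可；注意受密码保护的密钥环复制过去后仍然需要原来的密码才能解锁。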
200
OK
If you use [automatic login in Ubuntu](https://itsfoss.com/ubuntu-automatic-logon/) or other Linux distributions, you might have come across a pop-up message of this sort:

**Enter password to unlock your login keyring**

**The login keyring did not get unlocked when you logged into your computer.**

![Enter Password To Unlock Your Login Keyring Ubuntu](https://itsfoss.com/content/images/wordpress/2020/03/enter-password-to-unlock-your-login-keyring-ubuntu.jpg)

It keeps on popping up several times before disappearing if you keep on clicking cancel. You may wonder why do you keep seeing this keyring message all the time?

Let me tell you something. It’s not an error. It’s a security feature.

Surprised? Let me explain the keyring concept in Linux.

## What is keyring in Linux and why is it used?

![Keyring Concept Ubuntu 1](https://itsfoss.com/content/images/wordpress/2020/03/keyring-concept-ubuntu-1.png)

Why do you use a keyring (also called [keychain](https://en.wikipedia.org/wiki/Keychain?ref=itsfoss.com)) in the real life? You use it to keep one or more keys grouped together so that they are easy to find and carry.

It’s the same concept in Linux. The keyring feature allows your system to group various passwords together and keep it one place.

Most desktop environments like GNOME, KDE, Xfce etc use an implementation of [gnome-keyring](https://wiki.archlinux.org/index.php/GNOME/Keyring?ref=itsfoss.com) to provide this keyring feature in Linux.

This keyring keeps your ssh keys, GPG keys and keys from applications that use this feature, like Chromium browser. By default, the **keyring is locked with a master password** which is often the login password of the account.

Every user on your system has its own keyring with (usually) the same password as that of the user account itself.
When you login to your system with your password, your keyring is unlocked automatically with your account’s password. The problem comes when you [switch to auto-login in Ubuntu](https://itsfoss.com/ubuntu-automatic-logon/). This means that you login to the system without entering the password. In such case, your keyring is not unlocked automatically. ### Keyring is a security feature Remember I told you that the keyring was a security feature? Now imagine that on your Linux desktop, you are using auto-login. Anyone with access to your desktop can enter the system without password but you have no issues with that perhaps because you use it to browse internet only. But if you use a browser like Chromium or [Google Chrome in Ubuntu](https://itsfoss.com/install-chrome-ubuntu/), and use it to save your login-password for various websites, you have an issue on your hand. Anyone can use the browser and login to the websites for which you have saved password in your browser. That’s risky, isn’t it? This is why when you try to use Chrome, it will ask you to unlock the keyring repeatedly. This ensures that only the person who knows the keyring’s password (i.e. the account password) can use the saved password in browser for logging in to their respective websites. If you keep on cancelling the prompt for keyring unlock, it will eventually go away and let you use the browser. However, the saved password won’t be unlocked and you’ll see ‘sync paused’ in Chromium/Chrome browsers. ![Sync Paused Keyring Ubuntu](https://itsfoss.com/content/images/wordpress/2020/03/sync-paused-keyring-ubuntu.jpg) ### If this keyring always existed, why you never saw it? That’s a valid question if you have never seen this keyring thing in your Linux system. If you never used automatic login (or changed your account’s password), you might not even have realized that this feature exists. 
This is because when you login to your system with your password, your keyring is unlocked automatically with your account’s password. Ubuntu (and other distributions) asks for password for common admin tasks like modifying users, installing new software etc irrespective of whether you auto login or not. But for regular tasks like using a browser, it doesn’t ask for password because keyring is already unlocked. When you switch to automatic login, you don’t enter the password for login anymore. This means that the keyring is not unlocked and hence when you try to use a browser which uses the keyring feature, it will ask to unlock the keyring. ### You can easily manage the keyring and passwords Where is this keyring located? At the core, it’s a daemon (a program that runs automatically in the background). Don’t worry. You don’t have to ‘fight the daemon’ in the terminal. Most desktop environments come with a graphical application that interacts with this daemon. On KDE, there is KDE Wallet, on GNOME and others, it’s called Password and Keys (originally known as [Seahorse](https://wiki.debian.org/Seahorse?ref=itsfoss.com)). ![Password And Keys App Ubuntu](https://itsfoss.com/content/images/wordpress/2020/03/password-and-keys-app-ubuntu.jpg) You can use this GUI application to see what application use the keyring to manage/lock passwords. As you can see, my system has the login keyring which is automatically created. There is also a keyrings for storing GPG and SSH keys. The [Certificates](https://help.ubuntu.com/lts/serverguide/certificates-and-security.html?ref=itsfoss.com) is for keeping the certificates (like HTTPS certificates) issued by a certificate authority. ![Keyring Pasword Ubuntu](https://itsfoss.com/content/images/wordpress/2020/03/keyring-pasword-ubuntu.png) You can also use this application to manually store passwords for website. For example, I created a new password-protected keyring called ‘Test’ and stored a password in this keyring manually. 
This is slightly better than keeping a list of passwords in a text file. At least in this case your passwords can be viewed only when you unlock the keyring with password. ![Saving New Password Seahorse](https://itsfoss.com/content/images/wordpress/2020/03/saving-new-password-seahorse.png) One potential problem here is that if you format your system, the manually saved passwords are definitely lost. Normally, you make backup of personal files, not of all the user specific data such as keyring files. There is way to handle that. The keyring data is usually stored in ~/.local/share/keyrings directory. You can see all the keyrings here but you cannot see its content directly. If you remove the password of the keyring (I’ll show the steps in later section of this article), you can read the content of the keyring like a regular text file. You can copy this unlocked keyring file entirely and import it in the Password and Keys application on some other Linux computer (running this application). So, let me summarize what you have learned so far: - Most Linux has this ‘keyring feature’ installed and activated by default - Each user on a system has its own keyring - The keyring is normally locked with the account’s password - Keyring is unlocked automatically when you login with your password - For auto-login, the keyring is not unlocked and hence you are asked to unlock it when you try to use an application that uses keyring - Not all browsers or application use the keyring feature - There is a GUI application installed to interact with keyring - You can use the keyring to manually store passwords in encrypted format - You can change the keyring password on your own - You can export (by unlocking the keyring first) and import it on some other computer to get your manually saved passwords ## Change keyring password Suppose you changed your account password. Now when you login, your system tries to unlock the keyring automatically using the new login password. 
But the keyring still uses the old login password. In such a case, you can change the keyring password to the new login password so that the keyring gets unlocked automatically as soon as you login to your system.

Open the Password and Keys application from the menu:

![Password And Keys App Ubuntu](https://itsfoss.com/content/images/wordpress/2020/03/password-and-keys-app-ubuntu.jpg)

Now, right click on the Login keyring and click on Change Password:

![Change Keyring Password](https://itsfoss.com/content/images/wordpress/2020/03/change-keyring-password.png)

## What if you don’t remember the old login password? You may know that it is easy to [reset a forgotten password in Ubuntu](https://itsfoss.com/how-to-hack-ubuntu-password/), but the keyring remains a problem in this scenario. You changed the account password, yet you don’t remember the old one that the keyring still uses. You cannot change the keyring password because you don’t know the old password. What to do? In that case, you’ll have to remove the keyring altogether. You can do that from the Password and Keys application: ![Delete Keyring Ubuntu](https://itsfoss.com/content/images/wordpress/2020/03/delete-keyring-ubuntu.jpg) It will ask you for confirmation: ![Delete Keyring](https://itsfoss.com/content/images/wordpress/2020/03/delete-keyring.jpg) Alternatively, you may manually delete the keyring files in the ~/.local/share/keyrings directory. Once the old keyring has been removed and you open Chrome/Chromium again, it will prompt you to create a new keyring: ![New Keyring Password](https://itsfoss.com/content/images/wordpress/2020/03/new-keyring-password.jpg) You can use the new login password and the keyring will be unlocked automatically again. ## Disable keyring password In cases where you want to use automatic login but don’t want to unlock keyring manually, you may choose to disable the keyring with a workaround. Keep in mind that you are disabling a security feature so think twice before doing so. The process is similar to changing keyring password. Open Password and Keys application and go on to change the keyring password. The trick is that when it asks to change the password, don’t enter a new password and hit Continue instead. This will remove any password from the keyring. ![Disable Keyring Password Ubuntu](https://itsfoss.com/content/images/wordpress/2020/03/disable-keyring-password-ubuntu.png) This way, the keyring will have no password and it remains unlocked all the time.
12,095
Linux 上的最佳音频编辑工具推荐
https://itsfoss.com/best-audio-editors-linux
2020-04-10T17:45:28
[ "音乐", "音频" ]
https://linux.cn/article-12095-1.html
在 Linux 上，有很多种音频编辑器可供你选用。不论你是一个专业的音乐制作人，还是只想学学怎么做出超棒的音乐的爱好者，这些强大的音频编辑器都是很有用的工具。

对于专业级的使用，我总是建议使用 [DAW](https://en.wikipedia.org/wiki/Digital_audio_workstation)（数码音频工作站）。但并不是每个人都需要全部的功能，所以你也应该了解一些最简单的音频编辑器。

在本文中，我们将讨论几款 DAW 和基础的音频编辑器，而且它们都是在 Linux 和（可能）其它操作系统上可以使用的**自由开源**的解决方案。

### Linux 上的最佳音频编辑器

![Best audio editors and DAW for Linux](/data/attachment/album/202004/10/174532p3vg0n3ccf4qnqqp.jpg)

我们不会关注 DAW 提供的所有功能，而只是关注基本的音频编辑功能。不过，你仍然可以把以下内容看作是 Linux 的最佳 DAW 名单。

**安装说明：**你可以在 AppCenter 或软件中心中找到所有提到的音频编辑器或 DAW。如果你在这两个地方没有找到它们，请前往它们的官方网站获取更多信息。

#### 1、Audacity

![Audacity audio editor](/data/attachment/album/202004/10/174535l52wuzkvwzvg2ebl.jpg)

[Audacity](https://www.audacityteam.org/) 是 Linux 中最基本的音频编辑器之一，但是它也很强大。它是一个自由开源的跨平台工具。肯定已经有很多人了解这个软件了。

现在的它相比它最初流行的时候有了很大的改进。我记得我以前试着通过从音频中去除人声来制作卡拉 OK 伴奏。现在，有些时候你仍然可以这么做。

**特性：**

它还支持包含 VST 效果的插件。当然，你不应该期望它支持 VST 乐器。

* 通过麦克风或混音器进行现场录制
* 支持同时从多种音频格式的多个文件中批量导出/导入内容
* 支持 LADSPA、LV2、Nyquist、VST 和 Audio Unit 的效果插件
* 使用简单，带有剪切、粘贴、删除和拷贝的功能
* 可观测声音频率的频谱模式

#### 2、LMMS

![LMMS audio editor](/data/attachment/album/202004/10/174537mz7j2faqbs18kkz1.jpg)

[LMMS](https://lmms.io/) 是一个自由开源的（跨平台）数码音频工作站。它包括所有基本的音频编辑功能以及许多高级功能。

你可以混音、组合音频，或使用 VST 设备创造音频。LMMS 支持这些功能。此外，它还自带一些样本音频、预设、VST 设备和特效来帮助你起步。此外，你还可以得到一个频谱分析仪，以便进行高级的音频编辑工作。

**特性：**

* 基于 MIDI 的音符回放
* 支持 VST 设备
* 原生支持多采样
* 内置压缩器、限制器、延迟功能、混响功能、失真功能和低音增强器

#### 3、Ardour

![Ardour audio editor](/data/attachment/album/202004/10/174539vqz3v4qpqjlrjmsj.jpg)

[Ardour](https://ardour.org/) 是另一个自由开源的数码音频工作站。只要你有音频接口，Ardour 就支持它的使用。当然，你也可以无限地添加多声道音轨。这些多声道音轨也可以被指派到不同的混音带，以方便编辑和录音。

你也可以导入一个视频，编辑其中的音频，然后导出新的视频。它提供了许多内置插件，并且支持 VST 插件。

**特性：**

* 非线性编辑
* 垂直窗口堆叠，便于导航
* <ruby> 静默消除功能 <rt> strip silence </rt></ruby>、<ruby> 推拉修剪功能 <rt> push-pull trimming </rt></ruby>，以及用于短暂片段或基于音符起始的编辑的 Rhythm Ferret

#### 4、Cecilia

![Cecilia audio editor](/data/attachment/album/202004/10/174541r3go6fb2uyftmwy5.jpg)

[Cecilia](http://ajaxsoundstudio.com/software/cecilia/) 不是一个普通的音频编辑器。它的使用者一般是音效设计师或者正在努力成为音效设计师的人。Cecilia 实际上是一个音频信号处理环境。它可以让你的作品余音绕梁。

你还可以得到内置的音效与合成模组和插件。Cecilia 为一个明确的目的而生：如果你正在找音效设计工具，这是你的不二之选！

**特性：**

* 利用模块来完成更多工作（UltimateGrainer —— 最先进的颗粒化处理工具，RandomAccumulator —— 以可变速度录制的累加器，UpDistoRes —— 通过上采样和谐振低通滤波器创造失真效果的工具）
* 自动保存调制设定

#### 5、Mixxx

![Mixxx audio DJ](/data/attachment/album/202004/10/174547vgcxrwlxthkrgfcz.jpg)

如果你想要在混合和录制一些东西的同时能够有一个虚拟的 DJ 工具，[Mixxx](https://www.mixxx.org/) 将是完美的工具。你可以用到 BPM、音调，并使用主同步功能来匹配歌曲的节奏和节拍。另外，不要忘记它也是一个 Linux 上的自由开源的软件。

它还支持自定义 DJ 设备。所以，如果你有 DJ 设备或者 MIDI，你可以用这个工具录制你的现场混音。

**特性：**

* 播送和录制你的歌曲的 DJ 混音
* 可以连接到你的设备并且现场演奏
* 音调检测和 BPM 检测

#### 6、Rosegarden

![Rosegarden audio editor](/data/attachment/album/202004/10/174552djbalxe6xmjcwofy.jpg)

[Rosegarden](https://www.rosegardenmusic.com/) 是另一个令人赞叹的 Linux 上的自由开源的音频编辑器。它既不是一个功能齐全的 DAW，也不是一个基本的音频编辑工具。它是两者的混合体，并带有一些缩减的功能。

我不会向专业人士推荐这款软件，但如果你经营家庭音乐工作室或只是想体验一下，这将是 Linux 上可以安装的最好的音频编辑器之一。

**特性：**

* 乐谱编辑
* 录音、混音以及采样

### 小结

这些是你可以找到的 Linux 上的最棒的一些音频编辑器了。不论你是需要 DAW、一个剪切/粘贴的编辑工具，还是仅仅想要一个拥有基础的混音和录音功能的音频编辑工具，上述软件都能够满足你的需求。

如果在这篇文章中我们遗漏了你最喜欢的一些音频工具，可以在原文下方评论中回复告诉我们。

---

via: <https://itsfoss.com/best-audio-editors-linux>

作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[MFGJT](https://github.com/MFGJT) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
12,096
IEEE 标准协会推出开源协作平台
https://itsfoss.com/ieee-open-source-collaboration-platform/
2020-04-10T22:29:34
[ "IEEE" ]
https://linux.cn/article-12096-1.html
> > IEEE 标准协会宣布了一个基于 GitLab 的开源协作平台。 > > > ![](/data/attachment/album/202004/10/222910vjpiqd6lqqm6riqp.jpg) [IEEE](https://www.ieee.org/about/index.html) 是世界上最大的技术专业组织,致力于推动技术的发展。<ruby> IEEE 标准协会 <rt> the IEEE Standards Association </rt></ruby>(IEEE SA)是 IEEE 内部的一个组织,负责制定全球各行业的标准。 IEEE 标准协会(SA)提出了一个开源协作平台,即 [IEEE SA Open](https://standards.ieee.org/content/ieee-standards/en/initiatives/opensource/)。 技术上来说,它是一个自托管的 GitLab 实例,结合了 [Mattermost](https://mattermost.com/)(一个 [Slack 的替代品](https://itsfoss.com/open-source-slack-alternative/))和 [GitLab Pages](https://docs.gitlab.com/ee/user/project/pages/)。[其官方博文](https://spectrum.ieee.org/the-institute/ieee-products-services/ieee-standards-association-launches-a-platform-for-open-source-collaboration)对此进一步解释道: > > 该平台使独立软件开发者、初创企业、业界、学术机构等能够在一个协作、安全、负责任的环境中创建、测试、管理和部署创新项目。 > > > ### 它有什么不同或有用的地方? 这个平台最主要的吸引力应该是 IEEE 的会员网络、技术专长和资源。 IEEE 主席 [Robert Fish](https://www.linkedin.com/in/robertsfish/),也曾(在接受 Radio Kan 的采访时)简单地提到它有什么不同之处,以及为什么 IEEE 想要使用它。 > > 如今,世界上大部分的基础设施都是由软件运行的,而这些软件需要符合通信网络、电网、农业等方面的标准。 > > > 这是有道理的 —— 如果我们想提高标准化技术,这在很大程度上取决于软件。所以,这听起来肯定是要对创新的开源项目进行标准化,让它们也能为潜在的资本机会做好准备。 IEEE 还澄清说: > > 随着软件在当今世界越来越普遍,道德规范、可靠性、透明度和民主治理成为必须具备的条件。IEEE 在赋予开源项目这些属性方面有着得天独厚的优势。 > > > 虽然听起来很好,但 IEEE 的开源平台究竟能提供什么?让我们一起来看看这个问题。 ### IEEE SA Open 概览 ![](/data/attachment/album/202004/10/222937u7i8dcrrbybdchpc.jpg) 首先,它对所有人开放并且完全免费使用。你只需要创建一个 [IEEE 帐户](https://www.ieee.org/profile/public/createwebaccount/showRegister.html),然后[登录到这个开源平台](https://opensource.ieee.org/)就可以开始。 除了与 IEEE 广泛的会员网络相关的好处之外,你还可以期望其开源社区经理或社区成员提供指导性支持。 ![Ieee Gitlab](/data/attachment/album/202004/10/222938w1rqa1tat21x1c2a.jpg) 该平台提供了标准和非标准项目的用例,你可以尝试一下。 因为选择将 GitLab 与 Mattermost 和 Pages 结合起来,你可以获得一些有用的功能,它们是: * 项目规划和管理功能 * 源代码管理 * 测试、代码质量和持续集成功能 * Docker 容器注册库和 Kubernetes 集成 * 应用程序的发布和交付功能 * 集成了 Mattermost 聊天论坛的斜线命令(完全支持 Android 和 iPhone 应用程序) * 能够弥合标准制定和开源社区之间的差距,以便以更快的速度推进灵活和创造性的技术解决方案 * 安全的开放空间,并有严格的行为准则。 ### 小结 显然,有更多的平台来潜在地放大开源项目的曝光率是一件好事 —— 因此,IEEE 的举措听起来很有希望。 
你对此有何看法?让我知道你的想法吧! --- via: <https://itsfoss.com/ieee-open-source-collaboration-platform/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
*Brief: IEEE Standards Association has announced a GitLab-based open source collaboration platform. Read how is it different and what advantages it has.* [IEEE](https://www.ieee.org/about/index.html) is the world’s largest technical professional organization dedicated to advancing technology. The IEEE Standards Association (IEEE SA) is an organization within IEEE that develops global standards in a broad range of industries. The IEEE Standards Association (SA) has come up with an open-source collaboration platform i.e [IEEE SA Open](https://standards.ieee.org/content/ieee-standards/en/initiatives/opensource/). It is technically a self-hosted GitLab instance combined with [Mattermost](https://mattermost.com/) (a [slack alternative](https://itsfoss.com/open-source-slack-alternative/)) and [GitLab Pages](https://docs.gitlab.com/ee/user/project/pages/). To describe it further, the [official blog post](https://spectrum.ieee.org/the-institute/ieee-products-services/ieee-standards-association-launches-a-platform-for-open-source-collaboration) mentioned: The platform enables independent software developers, startups, industry, academic institutions, and others to create, test, manage, and deploy innovative projects in a collaborative, safe, and responsible environment. ## How is it different or useful? The main key attraction for this platform would be IEEE’s members’ network, technical expertise, and resources. IEEE President, [Robert Fish](https://www.linkedin.com/in/robertsfish/), also mentions in brief (during an interview with Radio Kan) how it’s different and why IEEE wanted to go with it: Today, much of the world’s infrastructure is run by software, and that software needs to comply with standards in communications networking, electrical grids, agriculture, and the like. It makes sense – if we want to improve standardizing technologies, it highly depends on the software. 
So, this definitely sounds like something to standardize innovative open-source projects to gear them up for potential capital opportunities as well. IEEE also clarified that: As software becomes increasingly prevalent in the world today, ethical alignment, reliability, transparency, and democratic governance become must-haves. IEEE is uniquely positioned to endow open-source projects with these attributes. While this sounds good, what exactly the open-source platform by the IEEE offer? Let’s take a look at that: ## IEEE SA Open: Quick overview ![Ieee Opensource](https://itsfoss.com/content/images/wordpress/2020/03/ieee-opensource.jpg) To start with, it is open to all and completely free to use. You just need to create an [IEEE account](https://www.ieee.org/profile/public/createwebaccount/showRegister.html) and then [sign in to the open-source platform](https://opensource.ieee.org/) to get started. Along with the benefits associated with IEEE’s extensive network of Members, you can also expect guidance support from their open-source community managers or community members. ![Ieee Gitlab](https://itsfoss.com/content/images/wordpress/2020/03/ieee-gitlab.jpg) The platform presents use cases for both standard and non-standard projects, so you can give it a try. 
For its choice to go with GitLab combined with Mattermost and Pages, you get a couple of useful features, they are: - Project planning and management features - Source code management - Testing, code quality, and continuous integration features - Docker container registry and Kubernetes integration - Application release and delivery features - Integrated Mattermost chat forum w/slash commands; (Android and iPhone apps are fully supported) - Capable of bridging the gap between Standards development and open source communities to allow for the advancement of nimble and creative technical solutions at a faster pace - A safe open space with an enforced code of conduct ## Wrapping Up It’s obviously a good thing to have more platforms to potentially amplify the exposure of open-source projects – hence, IEEE’s initiative sounds promising to start with. What do you think about it? Let me know your thoughts!
12,098
在 Ubuntu 20.04 中完全进入深色模式
https://itsfoss.com/dark-mode-ubuntu/
2020-04-11T20:09:31
[ "深色模式" ]
https://linux.cn/article-12098-1.html
> 深色模式是 Ubuntu 20.04 最受瞩目的[新功能](https://itsfoss.com/ubuntu-20-04-release-features/)之一了。任何版本的 Ubuntu 都可以通过[安装深色主题](https://itsfoss.com/install-themes-ubuntu/)让用户界面拥有一个深色的外观，但在 Ubuntu 20.04 中，这个过程变得更简单。

![](/data/attachment/album/202004/11/200841gvvaja25jaz5z7hv.jpg)

在 Ubuntu 20.04 中，无需额外安装主题，默认主题（称为 Yaru）本身就带有三种模式，其中就包括深色模式。

下面我会展示如何将 Ubuntu 系统完全设置为深色模式。

### 在 Ubuntu 20.04 打开深色模式

这个步骤是在 GNOME 桌面上进行的，如果你[使用的是其它桌面](https://itsfoss.com/find-desktop-environment/)，你看到的可能会和下面的截图不一样。

按下 super 键（或 Windows 键），然后输入 “settings”，就可以找到系统设置。

![Search for Settings](/data/attachment/album/202004/11/200933vk5rfyk5vavz9mif.jpg)

在系统设置中，进入“<ruby> 外观 <rt> Appearance </rt></ruby>”部分，就可以看到<ruby> 浅色 <rt> light </rt></ruby>、<ruby> 标准 <rt> standard </rt></ruby>和<ruby> 深色 <rt> dark </rt></ruby>三个选项。既然要使用深色模式，那自然而然要选择“深色”这个选项了。

![Enable Dark Theme in Ubuntu](/data/attachment/album/202004/11/200935k8sk99vpp79haza7.png)

完成设置以后，使用了 GTK3 的应用程序都可以跟随深色模式。因此你会看到系统中包括文本编辑器、终端、LibreOffice 等在内的大多数应用程序都已经切换成深色了。但未使用 GTK3 的应用程序可能并没有跟随进入深色模式，下面我会展示如何更完整地进入深色模式。

### 继续调整，进入完整深色模式

这个时候你会发现，shell 主题、屏幕顶部面板中的消息托盘和系统托盘还仍然保持在原有的模式当中。

![No Dark Shell by default in Ubuntu](/data/attachment/album/202004/11/200939rqzx558t88qy8utx.jpg)

现在就需要使用 [GNOME 扩展](https://itsfoss.com/gnome-shell-extensions/)安装 Yaru 深色 shell 主题了。[在 Ubuntu 中通过 Ctrl+Alt+T 打开终端](https://itsfoss.com/ubuntu-shortcuts/)，然后执行以下这个命令安装浏览器扩展：

```
sudo apt install chrome-gnome-shell
```

进入[扩展页面](https://extensions.gnome.org/extension/19/user-themes/)启用这个扩展：

![Enable User Themes GNOME Extension](/data/attachment/album/202004/11/200940f3m4omo4roavovsl.jpg)

执行以下命令安装 [GNOME 调整工具](https://itsfoss.com/gnome-tweak-tool/)：

```
sudo apt install gnome-tweaks
```

打开 GNOME 调整工具，进入“<ruby> 外观 <rt> Appearance </rt></ruby>”部分，就可以看到 shell 主题的选项，现在只需要把它启用就可以了。

![Enable Yaru Dark Shell Theme in Ubuntu](/data/attachment/album/202004/11/200941hnyp5f3fnzzeeo5y.jpg)

设置完之后再观察一下，桌面通知、消息托盘、系统托盘等等都已经进入深色模式了。

![Yaru Dark Shell Theme in Ubuntu](/data/attachment/album/202004/11/200941b9vyxg4f789vosyw.jpg)

现在感觉好多了。但你可能还会注意到，在使用浏览器访问网站的时候，很多网站都使用了白色的背景色。如果期望网站方提供深色模式，那是很不现实的，但我们可以自己实现这一件事。

你需要用到的东西就是诸如 [Dark Reader](https://darkreader.org/) 这样的浏览器扩展。《[在 Firefox 中启用深色模式](https://itsfoss.com/firefox-dark-mode/)》这篇文章中也有讨论过这个浏览器扩展，它的使用过程并不复杂。如果你使用的浏览器是 Firefox、Chrome 或 [Ubuntu 下的 Chromium](https://itsfoss.com/install-chromium-ubuntu/)，就可以直接安装[其官方网站](https://darkreader.org/)上列出的扩展。

Dark Reader 安装完成后，就会以深色模式打开网站了。

![It’s FOSS Homepage in Dark Mode with Dark Reader](/data/attachment/album/202004/11/200942teiez33kan2goafx.jpg)

当然，有些外部的第三方应用程序可能仍然是浅色状态。如果它们自己附带了深色模式的选项，就需要手动启用它们的深色模式。

如今，深色模式在非开发者人群中也越来越流行了。按照以上的步骤，你就可以轻松进入深色模式。

请享受深色模式。

---

via: <https://itsfoss.com/dark-mode-ubuntu/>

作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
One of the most talked about [new features in Ubuntu 20.04](https://itsfoss.com/ubuntu-20-04-release-features/) is the dark mode. You can [install dark theme in any Ubuntu version](https://itsfoss.com/install-themes-ubuntu/) to give it a dark look but the new Ubuntu 20.04 makes it a lot easier.

You don’t need to install themes on your own. You can find three variants of the default theme (called Yaru) and you can enable the dark mode from the settings.

It still leaves a few things in the light mode, though, so I am going to show you some tips on giving a complete dark mode look to your Ubuntu system.

## Enable dark mode in Ubuntu 20.04

The steps are performed on the GNOME desktop. If you are [using some other desktop environment](https://itsfoss.com/find-desktop-environment/), the screenshots may look different.

Press the super key (Windows key) and start typing settings to search for the Settings application.

![Settings Search Ubuntu 20 04](https://itsfoss.com/content/images/wordpress/2020/04/settings-search-ubuntu-20-04.jpg)

In the Settings application, go to Appearance section and you should see three variants of the theme: Light, Standard and Dark. No prizes for guessing that you need to select Dark if you want to use the dark mode.

![Enable Dark Theme Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/enable-dark-theme-ubuntu.png)

Not all but applications using GTK3 should automatically comply with the dark theme. In other words, you’ll notice that most of the applications like text editor, terminal, LibreOffice etc on your system automatically switch to dark mode. But it still leaves a few things in the light mode. Let me show you a couple of tips to ‘go to the darker side’.

## Tweaks to go full dark mode in Ubuntu 20.04

You’ll notice that the shell theme is still light. The message tray, system tray (in the top panel) are still not using dark mode.
![No Dark Shell Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/no-dark-shell-ubuntu.jpg) You’ll have to [use a GNOME extension](https://itsfoss.com/gnome-shell-extensions/) here that will let you install Yaru dark shell theme. Install the browser extension first by using this command in terminal ([use Ctrl+Alt+T keyboard shortcut for terminal in Ubuntu](https://itsfoss.com/ubuntu-shortcuts/)). `sudo apt install chrome-gnome-shell` Now [go to the extension’s webpage](https://extensions.gnome.org/extension/19/user-themes/) and enable the extension by switching it on: ![Enable User Themes Gnome](https://itsfoss.com/content/images/wordpress/2020/04/enable-user-themes-gnome.jpg) You need to [install GNOME Tweaks tool](https://itsfoss.com/gnome-tweak-tool/) by using this command: `sudo apt install gnome-tweaks` Open the GNOME Tweaks tool and go to Appearance. You’ll see the option to change the shell theme. You don’t need to install this Yaru dark shell theme. It’s already there. You just have to enable it. ![Ubuntu Yaru Dark Shell Theme](https://itsfoss.com/content/images/wordpress/2020/04/ubuntu-yaru-dark-shell-theme.jpeg) Now, you can see that even the shell features like desktop notifications, message tray, system tray are also in dark mode. ![Yaru Dark Shell Theme Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/yaru-dark-shell-theme-ubuntu.jpg) Looks better, isn’t it? There is one more thing you could do to further enjoy dark mode in Ubuntu. You’ll notice that the websites you visit still have white background. You cannot expect all the websites to provide a dark mode. This is where you can use a browser extension like [Dark Reader](https://darkreader.org/). John has already discussed it in the tutorial on [enabling dark mode in Firefox](https://itsfoss.com/firefox-dark-mode/). It’s not a complicated task. 
If you use Firefox, Chrome or [Chromium in Ubuntu](https://itsfoss.com/install-chromium-ubuntu/), you can install the browser extension listed on the [Dark Reader website](https://darkreader.org/). Once you have installed Dark Reader, all the websites you visit should be in dark mode automatically. ![It's FOSS Homepage in Dark Mode with Dark Reader](https://itsfoss.com/content/images/wordpress/2020/01/itsfoss_dark_mode.jpg) You may still find a few external third party applications using light theme. You may have to manually enable dark theme, if they have such an option available. Dark mode is getting popular these days even among the non-coders. Turning on the dark mode is on my list of [first few things to do after installing Ubuntu 20.04](https://itsfoss.com/things-to-do-after-installing-ubuntu-20-04/). With these tips, you can satisfy your craving of dark mode in Ubuntu. I hope you like it. Enjoy the dark mode.
12,099
使用 Python 读取电子表格中的数据
https://fedoramagazine.org/using-data-from-spreadsheets-in-fedora-with-python/
2020-04-12T09:35:35
[ "CSV" ]
https://linux.cn/article-12099-1.html
![](/data/attachment/album/202004/12/093539jz81k1wgeh85akoe.jpg) [Python](https://python.org) 是最流行、功能最强大的编程语言之一。由于它是自由开源的,因此每个人都可以使用。大多数 Fedora 系统都已安装了该语言。Python 可用于多种任务,其中包括处理逗号分隔值(CSV)数据。CSV 文件一开始往往是以表格或电子表格的形式出现。本文介绍了如何在 Python 3 中处理 CSV 数据。 CSV 数据正如其名。CSV 文件按行放置数据,数值之间用逗号分隔。每行由相同的*字段*定义。简短的 CSV 文件通常易于阅读和理解。但是较长的数据文件或具有更多字段的数据文件可能很难用肉眼解析,因此在这种情况下计算机做得更好。 这是一个简单的示例,其中的字段是 `Name`、`Email` 和 `Country`。在此例中,CSV 数据将字段定义作为第一行,尽管并非总是如此。 ``` Name,Email,Country John Q. Smith,[email protected],USA Petr Novak,[email protected],CZ Bernard Jones,[email protected],UK ``` ### 从电子表格读取 CSV Python 包含了一个 `csv` 模块,它可读取和写入 CSV 数据。大多数电子表格应用,无论是原生(例如 Excel 或 Numbers)还是基于 Web 的(例如 Google Sheet),都可以导出 CSV 数据。实际上,许多其他可发布表格报告的服务也可以导出为 CSV(例如,PayPal)。 Python `csv` 模块有一个名为 `DictReader` 的内置读取器类,它可以将每个数据行作为有序字典 (`OrderedDict`) 处理。它需要一个文件对象来访问 CSV 数据。因此,如果上面的文件在当前目录中为 `example.csv`,那么以下代码段是获取此数据的一种方法: ``` f = open('example.csv', 'r') from csv import DictReader d = DictReader(f) data = [] for row in d: data.append(row) ``` 现在,内存中的 `data` 对象是 `OrderedDict` 对象的列表: ``` [OrderedDict([('Name', 'John Q. 
Smith'), ('Email', '[email protected]'), ('Country', 'USA')]), OrderedDict([('Name', 'Petr Novak'), ('Email', '[email protected]'), ('Country', 'CZ')]), OrderedDict([('Name', 'Bernard Jones'), ('Email', '[email protected]'), ('Country', 'UK')])] ``` 引用这些对象很容易: ``` >>> print(data[0]['Country']) USA >>> print(data[2]['Email']) [email protected] ``` 顺便说一句,如果你需要处理没有字段名标题行的 CSV 文件,那么 `DictReader` 类可以让你定义它们。在上面的示例中,添加 `fieldnames` 参数并传递一系列名称: ``` d = DictReader(f, fieldnames=['Name', 'Email', 'Country']) ``` ### 真实例子 我最近想从一长串人员名单中随机选择一个中奖者。我从电子表格中提取的 CSV 数据是一个简单的名字和邮件地址列表。 幸运的是,Python 有一个有用的 `random` 模块,可以很好地生成随机值。该模块 `Random` 类中的 `randrange` 函数正是我需要的。你可以给它一个常规的数字范围(例如整数),以及它们之间的步长值。然后,该函数会生成一个随机结果,这意味着我可以在数据的总行数范围内获得一个随机整数(或者说是行号)。 这个小程序运行良好: ``` from csv import DictReader from random import Random d = DictReader(open('mydata.csv')) data = [] for row in d: data.append(row) r = Random() winner = data[r.randrange(0, len(data), 1)] print('The winner is:', winner['Name']) print('Email address:', winner['Email']) ``` 显然,这个例子非常简单。电子表格本身包含了复杂的分析数据的方法。但是,如果你想在电子表格应用之外做某事,Python 或许正是你需要的妙招! 题图由 [Isaac Smith](https://unsplash.com/@isaacmsmith?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) 拍摄,发表于 [Unsplash](https://unsplash.com/s/photos/spreadsheets?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)。 --- via: <https://fedoramagazine.org/using-data-from-spreadsheets-in-fedora-with-python/> 作者:[Paul W. Frields](https://fedoramagazine.org/author/pfrields/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
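顺带一提,标准库 `random` 模块中还有一个 `choice` 函数,可以把“按随机下标取元素”这一步再简化一些。下面是一个等价写法的示意草稿,并非原文的实现:为了不依赖外部文件,示例数据用 `io.StringIO` 内联构造,字段名沿用正文示例。

```python
from csv import DictReader
from io import StringIO
from random import choice

# 内联一份与正文相同的示例数据,使示例可独立运行
csv_text = (
    "Name,Email,Country\n"
    "John Q. Smith,[email protected],USA\n"
    "Petr Novak,[email protected],CZ\n"
    "Bernard Jones,[email protected],UK\n"
)

data = list(DictReader(StringIO(csv_text)))

# choice() 从序列中等概率地随机取一个元素,
# 效果等同于正文中的 data[r.randrange(0, len(data), 1)]
winner = choice(data)
print('The winner is:', winner['Name'])
print('Email address:', winner['Email'])
```

处理真实文件时,把 `StringIO(csv_text)` 换成 `open('mydata.csv')` 即可。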
200
OK
[Python](https://python.org) is one of the most popular and powerful programming languages available. Because it’s free and open source, it’s available to everyone — and most Fedora systems come with the language already installed. Python is useful for a wide variety of tasks, but among them is processing comma-separated value (**CSV**) data. CSV files often start off life as tables or spreadsheets. This article shows how to get started working with CSV data in Python 3. CSV data is precisely what it sounds like. A CSV file includes one row of data at a time, with data values separated by commas. Each row is defined by the same *fields*. Short CSV files are often easily read and understood. But longer data files, or those with more fields, may be harder to parse with the naked eye, so computers work better in those cases. Here’s a simple example where the fields are *Name*, *Email*, and *Country*. In this example, the CSV data includes a field definition as the first row, although that is not always the case. Name,Email,Country John Q. Smith,[email protected],USA Petr Novak,[email protected],CZ Bernard Jones,[email protected],UK ## Reading CSV from spreadsheets Python helpfully includes a *csv* module that has functions for reading and writing CSV data. Most spreadsheet applications, both native like Excel or Numbers, and web-based such as Google Sheets, can export CSV data. In fact, many other services that can publish tabular reports will also export as CSV (PayPal for instance). The Python *csv* module has a built-in reader class called *DictReader* that can deal with each data row as an ordered dictionary (OrderedDict). It expects a file object to access the CSV data.
So if our file above is called *example.csv* in the current directory, this code snippet is one way to get at this data: f = open('example.csv', 'r') from csv import DictReader d = DictReader(f) data = [] for row in d: data.append(row) Now the *data* object in memory is a list of OrderedDict objects : [OrderedDict([('Name', 'John Q. Smith'), ('Email', '[email protected]'), ('Country', 'USA')]), OrderedDict([('Name', 'Petr Novak'), ('Email', '[email protected]'), ('Country', 'CZ')]), OrderedDict([('Name', 'Bernard Jones'), ('Email', '[email protected]'), ('Country', 'UK')])] Referencing each of these objects is easy: >>> print(data[0]['Country']) USA >>> print(data[2]['Email']) [email protected] By the way, if you have to deal with a CSV file with no header row of field names, the *DictReader* class lets you define them. In the example above, add the *fieldnames* argument and pass a sequence of the names: d = DictReader(f, fieldnames=['Name', 'Email', 'Country']) ## A real world example I recently wanted to pick a random winner from a long list of individuals. The CSV data I pulled from spreadsheets was a simple list of names and email addresses. Fortunately, Python also has a helpful *random* module good for generating random values. The *randrange* function in the *Random* class from that module was just what I needed. You can give it a regular range of numbers — like integers — and a step value between them. The function then generates a random result, meaning I could get a random integer (or row number!) back within the total number of rows in my data. So this small program worked well: from csv import DictReader from random import Random d = DictReader(open('mydata.csv')) data = [] for row in d: data.append(row) r = Random() winner = data[r.randrange(0, len(data), 1)] print('The winner is:', winner['Name']) print('Email address:', winner['Email']) Obviously this example is extremely simple. Spreadsheets themselves include sophisticated ways to analyze data. 
However, if you want to do something outside the realm of your spreadsheet app, Python may be just the trick! *Photo by Isaac Smith on Unsplash.* ## Christian Thank you! It is an excellent example. ## Paweł How about random.choice(data) ? 🙂 ## Paul W. Frields There are almost always multiple ways to solve a problem. ## Mark Very useful for people comfortable with using Python I suppose. However most people would not use Python for simple (or even complex) scripting. The ‘real world example’ can be simply done in bash (or any other shell) without needing to import functions or use arrays. A good article in the way it explains the layout of the array created and how to reference the results, as a non Python user I was easily able to see how it all works with that information. Having seen this article I did a quick search on DictReader and found this page https://courses.cs.washington.edu/courses/cse140/13wi/csv-parsing.html where the example shows using Dictreader against a CSV file in a way that almost emulates the intent of a sql query against a database which does make the function seem a little useful for people who prefer working with large CSV files for data queries rather than databases. I see there is also a DictWriter function, so perhaps the article examples could be updated to show how to use the two functions in a SQL type way to convert a CSV file A into a CSV file B with fewer output fields/columns and fewer records based on a selection criteria? That may be a better example as the only main reasons I can think of for such python functions are to split out data in this way to create new smaller partial CSV datasets, perhaps in a corporate environment which is too cheap to use a database and stores data in spreadsheets but only wants people to see subsets of the spreadsheet. ## Paul W. Frields Hopefully the formatting above is fixed for you now. Thank you for the contribution!
## Vernon Van Steenkist While I agree 100% with your approach and your observation that using simple tools that process files line by line are much more efficient and scalable to large data sets than loading everything into Python tables, I would like to suggest a couple of improvements: First linecount= I prefer linecount=$(wc -l mydata.csv | cut -f 1 -d" ") because it only processes every line in mydata.csv once. Second head -${linerandom} mydata.csv | tail -1 I believe can be more simply written sed -n $linerandom'p' mydata.csv ## Jeffrey P Burdick I am surprised we’re talking about tabulated data in Python and not talking about Pandas, which is a WONDERFUL library for working with tabulated data. https://pandas.pydata.org/
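Mark's comment above asks what pairing *DictReader* with *DictWriter* might look like for cutting a CSV file down to fewer fields and fewer rows, SQL-style. Here is one possible sketch; the filter condition is invented for illustration, and the data is inlined with *StringIO* so the snippet runs on its own (in practice you would open real input and output files):

```python
from csv import DictReader, DictWriter
from io import StringIO

# Inline input so the sketch is self-contained
src = StringIO(
    "Name,Email,Country\n"
    "John Q. Smith,[email protected],USA\n"
    "Petr Novak,[email protected],CZ\n"
    "Bernard Jones,[email protected],UK\n"
)
dst = StringIO()

reader = DictReader(src)
# Keep only two of the three fields; extrasaction='ignore' tells
# DictWriter to silently drop the keys we did not list (here, Email).
writer = DictWriter(dst, fieldnames=['Name', 'Country'], extrasaction='ignore')
writer.writeheader()
for row in reader:
    if row['Country'] != 'USA':  # a SELECT ... WHERE-style row filter
        writer.writerow(row)

print(dst.getvalue())
```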
12,103
在 Fedora 命令行下玩转防火墙
https://fedoramagazine.org/control-the-firewall-at-the-command-line/
2020-04-12T12:30:54
[ "防火墙" ]
https://linux.cn/article-12103-1.html
![](/data/attachment/album/202004/12/123040ghxuicphyhsoppch.jpg) 网络防火墙,顾名思义:为了阻止不需要的网络连接而设置的防护性屏障。在与外界建立连接或是提供网络服务时常常会用到。例如,在学校或是咖啡厅里使用笔记本电脑时,你一定不想某个陌生人窥探你的电脑。 每个 Fedora 系统都内置了一款防火墙。这是 Linux 内核网络功能的一部分。本文介绍如何通过 `firewall-cmd` 命令修改防火墙的配置。 ### 网络基础 本文并不教授计算机网络的[所有知识](https://en.wikipedia.org/wiki/Portal:Internet),但还是会简单介绍一些网络基础。 网络中的所有计算机都有一个 *IP 地址*,可以把它想象成一个邮箱地址,有了邮箱地址,邮件才知道发往何处。每台计算机还会拥有一组*端口*,端口号范围从 0 到 65535。同样的,你可以把这些端口想象成用来连接邮箱地址的连接点。 通常情况下,端口会是一个标准端口号或是根据应用程序的应答要求选定的一个端口范围。例如,一台 Web 服务器通常会保留 80 端口用于 HTTP 通信,443 端口用于 HTTPS。小于 1024 的端口主要用于系统或常见用途,1024-49151 端口是已经注册的,49152 及以上端口多为临时使用(只限短时间使用)。 互联网传输中最常见的两个协议,[TCP](https://en.wikipedia.org/wiki/Transmission_Control_Protocol) 和 [UDP](https://en.wikipedia.org/wiki/User_Datagram_Protocol)。当要传输的数据很重要,不能有丢包时,就使用 TCP 协议,如果数据包没有按顺序到达,还需要重组为正确的顺序。UDP 协议则更多用于对时间敏感的服务,为了保证时效性,有时允许丢失部分数据。 系统中运行的应用,例如 Web 服务器,会保留一些端口(例如上文提到的 80 和 443)。在网络传输过程中,主机会为传输的两端建立一个连接,一端是源地址和源端口,另一端是目的地址和目的端口。 网络防火墙就是基于地址、端口及其他标准的一组规则集,来对网络数据的传输进行放行或阻断的。通过 `firewall-cmd` 命令,我们就可以查看或修改防火墙的工作配置。 ### 防火墙域(zone) 为了验证防火墙是否开启,使用 `firewall-cmd` 命令,输入时要加上 [sudo](https://fedoramagazine.org/howto-use-sudo/)。(通常,在运行了 [PolicyKit](https://en.wikipedia.org/wiki/Polkit) 的环境中,你也可以不加 `sudo`) ``` $ sudo firewall-cmd --state running ``` firewalld 服务支持任意数量的域。每个域都可以拥有独立的配置和防护规则。此外,每个网络接口都可以被单独放入任意一个域。一台 Fedora 工作站的外部接口(例如 WiFi 或有线网卡)其默认域为 `FedoraWorkstation`。 要看有哪些域是激活状态,可以使用 `--get-active-zones` 选项。在本示例中,有两个网络接口,无线网卡 `wlp2s0` 和虚拟(libvirt)桥接网卡 `virbr0`: ``` $ sudo firewall-cmd --get-active-zones FedoraWorkstation interfaces: wlp2s0 libvirt interfaces: virbr0 ``` 如果想看看默认域是什么,或是直接查询所有域: ``` $ sudo firewall-cmd --get-default-zone FedoraWorkstation $ sudo firewall-cmd --get-zones FedoraServer FedoraWorkstation block dmz drop external home internal libvirt public trusted work ``` 要查询默认域中防火墙对外开放了哪些服务,可以使用 `--list-services` 选项。下例来自一个定制过的系统,你看到的结果可能与此不同。 ``` $ sudo firewall-cmd --list-services dhcpv6-client mdns samba-client ssh ``` 该系统对外开启了四个服务。每个服务都对应一个知名端口。例如 `ssh` 服务对应 22 端口。
如果要查看当前域中防火墙还开启了哪些端口,可以使用 `--list-ports` 选项。当然,你也可以随时对其他域进行查询: ``` $ sudo firewall-cmd --list-ports --zone=FedoraWorkstation 1025-65535/udp 1025-65535/tcp ``` 结果表明,从 1025 到 65535 端口(包含 UDP 和 TCP)默认都是开启的。 ### 修改域、端口及服务 以上的默认配置是一个有意的设计决定,是为了确保新手用户安装的应用都能够正常访问网络。如果你确定自己心里有数,想要一个保护性更强的策略,可以将接口放入 `FedoraServer` 域,禁止所有未明确允许的端口访问。(**警告**:如果你正在通过网络使用这台主机,这么做可能会导致连接中断,那你就得亲自到机器面前去做进一步修改!) ``` $ sudo firewall-cmd --change-interface=<ifname> --zone=FedoraServer success ``` **本文并不讨论如何制定防火墙策略,这一设计决定在 Fedora 社区里经过了多轮审议和讨论。你大可以按照自身需要来修改配置。** 如果你想要开放某个服务的知名端口,可以将该服务加入默认域(或使用 `--zone` 指定一个不同的域)。还可以一次性添加多个服务。下例开放了 HTTP 和 HTTPS 的知名端口 80、443: ``` $ sudo firewall-cmd --add-service=http --add-service=https success ``` 并非所有的服务都预先定义好了,不过许多常见服务是有的。使用 `--get-services` 选项可以查看完整列表。 如果你想指定某个特定端口号,可以直接用数字和协议进行配置。(你也可以根据需要,将任意多个 `--add-service` 和 `--add-port` 选项组合使用。)下例开启的是 UDP 协议的网络启动服务: ``` $ sudo firewall-cmd --add-port=67/udp success ``` **重要**:如果想要在系统重启或是 firewalld 服务重启后,配置仍然生效,**必须**在命令中加上 `--permanent` 选项。本文中的例子只是临时修改了配置,下次遇到系统重启或是 firewalld 服务重启,这些配置就失效了。 以上只是 `firewall-cmd` 和 firewalld 服务诸多功能中的一小部分。firewalld 项目的[主页](https://firewalld.org/)还有更多信息值得你去探索和尝试。 --- via: <https://fedoramagazine.org/control-the-firewall-at-the-command-line/> 作者:[Paul W. Frields](https://fedoramagazine.org/author/pfrields/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[tinyeyeser](https://github.com/tinyeyeser) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
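像 `1025-65535/udp 1025-65535/tcp` 这样的 `--list-ports` 输出是面向人的文本。如果想在脚本里进一步处理这类输出,可以自己写个小解析器。下面是一个假设性的 Python 草稿,并非 firewalld 提供的接口;输出字符串为演示而硬编码,实际使用时应换成 `firewall-cmd` 的真实输出:

```python
def parse_ports(output):
    """把 'START-END/PROTO PORT/PROTO ...' 形式的文本解析为 (起始端口, 结束端口, 协议) 元组列表。"""
    entries = []
    for item in output.split():
        ports, proto = item.split('/')
        if '-' in ports:
            start, end = ports.split('-')
        else:
            start = end = ports  # 单个端口视为长度为 1 的区间
        entries.append((int(start), int(end), proto))
    return entries

# 演示用的硬编码输出,对应正文中的示例
sample = "1025-65535/udp 1025-65535/tcp"
print(parse_ports(sample))
```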
200
OK
A network *firewall* is more or less what it sounds like: a protective barrier that prevents unwanted network transmissions. They are most frequently used to prevent outsiders from contacting or using network services on a system. For instance, if you’re running a laptop at school or in a coffee shop, you probably don’t want strangers poking around on it. Every Fedora system has a firewall built in. It’s part of the network functions in the Linux kernel inside. This article shows you how to change its settings using *firewall-cmd*. ## Network basics This article can’t teach you [everything](https://en.wikipedia.org/wiki/Portal:Internet) about computer networks. But a few basics suffice to get you started. Any computer on a network has an *IP address*. Think of this just like a mailing address that allows correct routing of data. Each computer also has a set of *ports*, numbered 0-65535. These are not physical ports; instead, you can think of them as a set of connection points at the address. In many cases, the port is a [standard number](https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers) or range depending on the application expected to answer. For instance, a web server typically reserves port 80 for non-secure HTTP communications, and/or 443 for secure HTTPS. The port numbers under 1024 are reserved for system and well-known purposes, ports 1024-49151 are registered, and ports 49152 and above are usually ephemeral (used only for a short time). Each of the two most common protocols for Internet data transfer, [TCP](https://en.wikipedia.org/wiki/Transmission_Control_Protocol) and [UDP](https://en.wikipedia.org/wiki/User_Datagram_Protocol), have this set of ports. TCP is used when it’s important that all data be received and, if it arrives out of order, reassembled in the right order. UDP is used for more time-sensitive services that can withstand losing some data. 
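The port ranges described above (system/well-known, registered, ephemeral) can be expressed as a small helper function. The sketch below is only an illustration of that classification, not part of any firewall tooling:

```python
def classify_port(port):
    """Classify a TCP/UDP port number into the ranges described above."""
    if not 0 <= port <= 65535:
        raise ValueError(f"{port} is outside the valid range 0-65535")
    if port < 1024:
        return "system/well-known"
    if port <= 49151:
        return "registered"
    return "ephemeral"

# A few familiar examples: ssh, http, https, a registered port, an ephemeral one
for p in (22, 80, 443, 8080, 50000):
    print(p, classify_port(p))
```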
An application running on the system, such as a web server, reserves one or more ports (as seen above, 80 and 443 for example). Then during network communication, a host establishes a connection between a source address and port, and the destination address and port. A network firewall can block or permit transmissions of network data based on rules like address, port, or other criteria. The *firewall-cmd* utility lets you interact with the rule set to view or change how the firewall works. ## Firewall zones To verify the firewall is running, use this command with [sudo](https://fedoramagazine.org/howto-use-sudo/). (In fairness, you can run *firewall-cmd* without the *sudo* command in environments where [PolicyKit](https://en.wikipedia.org/wiki/Polkit) is running.) $ sudo firewall-cmd --state running The firewalld service supports any number of *zones*. Each zone can have its own settings and rules for protection. In addition, each network interface can be placed in any zone individually. The default zone for an external facing interface (like the wifi or wired network card) on a Fedora Workstation is the *FedoraWorkstation* zone. To see what zones are active, use the *--get-active-zones* flag. On this system, there are two network interfaces, a wireless network card *wlp2s0* and a virtualization (libvirt) bridge interface *virbr0*: $ sudo firewall-cmd --get-active-zones FedoraWorkstation interfaces: wlp2s0 libvirt interfaces: virbr0 To see the default zone, or all the defined zones: $ sudo firewall-cmd --get-default-zone FedoraWorkstation $ sudo firewall-cmd --get-zones FedoraServer FedoraWorkstation block dmz drop external home internal libvirt public trusted work To see the services the firewall is allowing other systems to access in the default zone, use the *--list-services* flag. Here is an example from a customized system; you may see something different. $ sudo firewall-cmd --list-services dhcpv6-client mdns samba-client ssh This system has four services exposed.
Each of these has a well-known port number. The firewall recognizes them by name. For instance, the *ssh* service is associated with port 22. To see other port settings for the firewall in the current zone, use the *--list-ports* flag. By the way, you can always declare the zone you want to check: $ sudo firewall-cmd --list-ports --zone=FedoraWorkstation 1025-65535/udp 1025-65535/tcp This shows that ports 1025 and above (both UDP and TCP) are open by default. ## Changing zones, ports, and services The above setting is a design decision.* It ensures novice users can use network facing applications they install. If you know what you’re doing and want a more protective default, you can move the interface to the *FedoraServer* zone, which prohibits any ports not explicitly allowed. *(Warning: if you’re using the host via the network, you may break your connection — meaning you’ll have to go to that box physically to make further changes!)* $ sudo firewall-cmd --change-interface=<ifname> --zone=FedoraServer success * *This article is not the place to discuss that decision, which went through many rounds of review and debate in the Fedora community. You are welcome to change settings as needed.* If you want to open a well-known port that belongs to a service, you can add that service to the default zone (or use *--zone* to adjust a different zone). You can add more than one at once. This example opens up the well-known ports for your web server for both HTTP and HTTPS traffic, on ports 80 and 443: $ sudo firewall-cmd --add-service=http --add-service=https success Not all services are defined, but many are. To see the whole list, use the *--get-services* flag. If you want to add specific ports, you can do that by number and protocol as well. (You can also combine *--add-service* and *--add-port* flags, as many as necessary.)
This example opens up the UDP service for a network boot service: $ sudo firewall-cmd --add-port=67/udp success **Important:** If you want your changes to be effective after you reboot your system or restart the firewalld service, you **must** add the *--permanent* flag to your commands. The examples here only change the firewall until one of those events next happens. These are just some of the many functions of the *firewall-cmd* utility and the firewalld service. There is much more information on firewalld at the project’s [home page](https://firewalld.org/) that’s worth reading and trying out. *Photo by Jakob Braun on Unsplash.* ## Joao Rodrigues Or set up your firewall as you like, and in the end use: $ sudo firewall-cmd --runtime-to-permanent To make your configuration permanent. ## Paul W. Frields @Joao: Yes, this is a great tip! ## kb Best firewalld zone for desktop or home laptop ? ## Joao Rodrigues Zones are not for desktop or laptops. Zones are for networks. Home networks, Work networks, public networks. When I connect to my home network, I trust every computer that connects to that network and I expose some services (like samba shares). In a public network I don’t trust any of the computers, so I don’t want to expose any service. The cool thing is that you can bind firewalld zones to wifi connections (you have to use nm-connection-editor or nmcli, though), so when I connect to my home wifi I always land on the home zone and if I go to a public library or cafe I usually set it to public, block or drop. ## Paul W. Frields You’re correct, and I don’t believe the article states zones are for types of systems. It says instead they allow grouping rules. The names of the packaged zones (such as dmz, public, home, etc.) do hint they can be used to assert a level of trust on a specific network type. ## Nils Thanks for the tutorial! I think the next interesting step would be to add your own services.
This is actually quite easy and described here: https://firewalld.org/documentation/howto/add-a-service.html ## Ondrej What I miss written about firewalld is writing a complete router setup. Like masquerading, different zones, etc. I would be really grateful for a complete tutorial like that. 🙂 I am really enjoying using firewall-cmd as opposed to iptables syntax, which I usually have to look up all the time. ## Jordan Nice! I want more posts about this. ## atolstoy I suspect that Paul got inspired with this after the recent article on Firewalld and OpenSnitch in the Linux Pro magazine. ## Paul W. Frields Nope, it was actually just an idea someone put in our idea queue for the Magazine some weeks ago, and since no one else wrote it, I decided to. 🙂 ## Earl Ramirez Can you share a bit of detail on the “Idea queue” is this something available for the public? ## Paul W. Frields Earl: Sure, refer to https://docs.fedoraproject.org/en-US/fedora-magazine/ for the big picture and instructions. ## Earl Ramirez Thank you, sir ## Frank Dorr Paul Thanks for making this! I think i need your help! I was recently hi-jhacked by some clown and am currently on what appears to be a contaminated node on THEIR network. I’m so new to linux that I probably installed all of this stuff for them. Where can a nub like me go to clean out my computer, reset my 1 tarabyte hd (it says i have 3 partitions with 999gigs free on 1 I cant access) Where can a noob go to setup his firewall at like AEGIS levels, DETOX the damn drives and be assured his Fedora installation is CLEAN???? Thanking you again, in advance, I am 🙂 ## atolstoy Thanks Paul, you’re great!
## Mark You do have to keep an eye on what ports are opened when adding services, while a service like http, ssh or cockpit adds 80, 22, 9090 respectively but if you were to add (found from playing around after a previous article on GSConnect) “firewall-cmd --add-service kde-connect” (service provided by the kdeconnectd package) you will see that (on F30 anyway) it opens a huge port range of 1714 to 1764 on both tcp and udp. Which after a few reboots to check I found the kde-connect process consistently only ever used one port so removed the service and just added the one tcp and udp port. It does highlight that you need to be aware of what ports a service opens rather than assuming the service knows best, whether by examining the service file at /usr/lib/firewalld/services/yourservicename.xml (where you can also create your own services) or as I do in my active environments by a server checking script that as one of its functions checks that every port defined in the active firewall rules actually matches a port in a listening state on the server which helps me keep the rules up to date when I remove packages. On the mention in the post for the output of the command “sudo firewall-cmd --list-ports --zone=FedoraWorkstation” allowing ports 1025 and above thanks for mentioning that, most users of a new Fedora desktop install probably don’t realise they are wide open by default and must really trust everything in their network so it is good you mentioned it. The first thing any desktop user should do is change the default/active zone or delete that port range and just define what they need.
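Mark's idea above — checking that every port allowed in the firewall rules matches a port actually in a listening state — boils down to comparing two sets. The following is a hypothetical sketch of that comparison with hard-coded example data; a real script would populate the sets from `firewall-cmd --list-ports` and from a socket-listing tool such as `ss -lntu`:

```python
# Hypothetical example data; in a real check these would come from
# the firewall rules and from the sockets actually listening.
allowed = {(22, 'tcp'), (80, 'tcp'), (1714, 'tcp'), (1714, 'udp')}
listening = {(22, 'tcp'), (80, 'tcp'), (1716, 'tcp')}

# Rules that open ports nothing is listening on (candidates for removal)
stale = allowed - listening
# Listening sockets the firewall does not allow (blocked, or missing rules)
unreachable = listening - allowed

for port, proto in sorted(stale):
    print(f"allowed but not listening: {port}/{proto}")
for port, proto in sorted(unreachable):
    print(f"listening but not allowed: {port}/{proto}")
```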
## Frank Dorr So….if I catch a virus, I think its this new Minergate trojan/worm, whatever they did MS released CVE-ID-2020 (hehe THE CVE-ID-19 VIRUS!), to fix that gets in on the printer ports and then acts like WANNACRY, on a network with 2 windows computers, it infects both, i install fedora, it comes in such a state that the virus may be able to mess my brand new fedora up from the windows connection? how do i check/stop it? what can i do, i can see where its adding firewall exceptions in the anaconda files in the jump drive :(. ## nehemiah typo near the end or your two dashes were turned into an em dash: it say’s add ‘-permanent’ to your commands for the setting to survive reboot, it’s two dashes. could you put that in the code example or in a monospaced font ? ## Paul W. Frields Thanks for catching, this is a “feature” of the hosting software. Fixed.
12,104
Git 都 15 岁了,如何入门或学习点新东西
https://opensource.com/article/20/4/get-started-git
2020-04-13T09:14:27
[ "Git" ]
https://linux.cn/article-12104-1.html
> > 在 Git 15 周年之际,了解为什么 Git 是保持软件行业运行的重要组成部分。 > > > ![](/data/attachment/album/202004/13/091410k80er3mttrznc22e.jpg) 如果说过去二十年来有什么东西改变了软件,那么 [Git](https://en.wikipedia.org/wiki/Git) 肯定位列榜首。 如果你没有亲自使用过 Git,你可能会认为它只是一种技术时尚,只是因为它是由 [Linux](https://opensource.com/resources/linux) 项目的创始人创建的,所以在开发者中只是一个偶然的宠儿。这或许有一定的道理,但 Git 确实取得了一些其他行业所没有的成就。有了 Git,分布在世界各地的开发者们可以在同一时间对同一段代码进行工作,并记录下每一次修改的历史,然后将所有的工作合并到一起,形成一个成品。由于这件事情非常复杂,所以这个工具本身也会变得很复杂,但归根结底,它是维持软件行业运行的重要组成部分。 无论你是否了解 Git,如果你足够深入的研究开源软件,或者进入计算机科学领域,都有可能遇到它。无论你使用 Git 只是为了下载一个安装包,还是每天与它交互来管理代码,了解更多关于它的知识,都会对你有很大的启发和帮助。 ### Git 术语 与任何专业工具一样,Git 中也有很多行话。像“<ruby> 克隆 <rt> clone </rt></ruby>”、“<ruby> 合并 <rt> merge </rt></ruby>”和“<ruby> 变基 <rt> rebase </rt></ruby>”这样的术语,最起码也是神秘的,而更糟的情况下会令人感到排斥。试图理解这些术语的含义可能会让人不知所措,但如果你从 Matthew Broberg 的优秀文章《[Git 术语基础](https://opensource.com/article/19/2/git-terminology)》中得到一点指导,就不会这样了。只需快速阅读一下,你就能真正理解地听懂关于 Git 的对话。 ### Git 入门 如果你需要知道如何使用 Git,那么我自己的[关于使用 Git 的入门文章系列](https://opensource.com/life/16/7/stumbling-git)是一个很好的开始。这些文章已经有几年的历史了,但就像许多 Linux 和 UNIX 技术一样,它的界面并没有发生很大的变化,所以这些文章和我写这些文章那时一样,在今天还是很有意义的。这一系列文章向你介绍了 Git 最基本的概念,并带领你完成创建仓库、提交文件、恢复文件、合并分支等过程。 ### 常见的 Git 服务 Git 最常见的用途之一是公共的 Git 托管服务,比如 GitLab 和 GitHub。Kedar Vijay Kulkarni 在他的《[如何在 Git 中克隆、修改、添加和删除文件](https://opensource.com/article/18/2/how-clone-modify-add-delete-git-files)》一文中,演示了大多数开发者使用 Git 执行的日常任务。这不是非开发者的必读书目,但对于任何想在公共 Git 托管服务上为项目做贡献的人来说,这篇文章是必读的。这篇文章专门针对的是 Github,因为它是当今最常见的平台之一,但其原理也适用于任何 Git 服务的 Web 前端,包括 [GitLab](https://about.gitlab.com/install/)、[Gogs](https://gogs.io/) 和 [Gitea](https://gitea.io/en-us/) 等流行的开源框架。 ### 试试这个 Git 演练 与其漫无目的的探索,你是不是更喜欢在导游的带领下学习?有时候,学习一件事最简单的方法就是模仿别人的准确步骤。你知道最终的结果是肯定成功的,所以你在进行练习的时候会有信心,而你的大脑和手指也会得到重复的好处,从而建立起记忆。如果这是你的学习风格,那就跟着 Alan Formy-Duvall 的《[Git 的实用学习练习](https://opensource.com/article/19/5/practical-learning-exercise-git)》,找出成功的 Git 课程的感觉。 ### Git 应用程序 信不信由你,Git 的界面比你在终端输入的文字更多。显然,在线托管的 Git 有 Web 界面,但是你也可以在计算机上使用 Git 客户端。如果想获得更多的帮助,请阅读 Jesse Duffield 关于 
[Lazygit](https://opensource.com/article/20/3/lazygit) 的文章或 Olaf Anders 关于 [Tig](https://opensource.com/article/19/6/what-tig) 的文章。要获得完整的图形应用程序体验,请阅读我有关 [Git-cola](https://opensource.com/article/20/3/git-cola)、[Sparkleshare](https://opensource.com/article/19/4/file-sharing-git) 以及[其它应用](https://opensource.com/life/16/8/graphical-tools-git)的文章。是的,甚至还有[用于你的移动设备的界面](https://opensource.com/article/19/4/calendar-git#mobile)! ### 了解更多关于 Git 的信息 知识就是力量,所以不要让 Git 对你来说像个谜。无论你是直接使用它,还是只知道它的名字,或者你以前从未听说过它,现在都是了解 Git 的好时机。这里有很多资源可以帮助你了解它是如何工作的、为什么这样设计,以及人们为什么这么喜欢它。深入其中,按照自己的节奏来学习,并学会爱上 Git 吧! --- via: <https://opensource.com/article/20/4/get-started-git> 作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
If there's anything that's changed software in the past two decades, [Git](https://en.wikipedia.org/wiki/Git) is at the top of the list. If you don't use Git personally, you might think it's just a tech fad, an incidental darling among developers just because it was created by the same person who started the [Linux](https://opensource.com/resources/linux) project itself. There may be some truth to that, but Git does manage to achieve some feats that no other industry has managed. With Git, developers spread all over the world are able to work on the same code, literally at the same time, with a history of every change made, and then merge all the work together to result in a finished product. The complexity is enormous, and so the tool itself can get complex, but in the end, it's a major component in keeping the software industry running. Whether you know Git or not, you'll very likely encounter it should you dig deep enough into open source software or enter into computer science. Whether you use Git to just download an installer package or whether you interface with it daily to manage code, learning more about it is elucidating and empowering. ## Git terminology As with any specialized tool, there's a lot of jargon in Git. Terms like "clone" and "merge" and "rebase" are mysterious at best, and at worst can feel almost exclusionary. Trying to understand what all of these terms mean can be overwhelming, but not if you take a little guidance from Matthew Broberg's excellent [Git Terminology 101](https://opensource.com/article/19/2/git-terminology) article. In just one quick read, you'll be able to listen in on conversations about Git with real comprehension. ## Getting started with Git If you need to know how to use Git, then my own [introductory article series about using Git](https://opensource.com/life/16/7/stumbling-git) is a great place to start. 
The articles are several years old now, but as with so many Linux and UNIX technologies, the interface hasn't changed significantly, so the articles are as relevant today as they were when I wrote them. The series introduces you to the most basic concepts of Git, and steps you through the process of creating a repository, committing files, restoring files, merging branches, and much more. ## Common Git services One of the most common uses of Git is a public Git hosting service, such as GitLab and GitHub. In his [How to clone, modify, add, and delete files in Git](https://opensource.com/article/18/2/how-clone-modify-add-delete-git-files) article, Kedar Vijay Kulkarni demonstrates the everyday tasks most developers perform with Git. This isn't required reading for non-developers, but it's a must for anyone who wants to contribute to a project on a public Git hosting service. This article addresses Github specifically because it's one of the most common platforms today, but the principles apply to any web front-end for Git, including popular open source frameworks like [GitLab](https://about.gitlab.com/install/), [Gogs](https://gogs.io/), and [Gitea](https://gitea.io/en-us/). ## Try this Git walkthrough Do you prefer a guided tour to aimless exploration? Sometimes the easiest way to learn something is to mimic someone else's exact steps. You know the end result is a guaranteed success, so you have confidence while performing the exercise, and your brain and fingers get the benefit of repetition, which builds memory. If that's your learning style, then follow along with Alan Formy-Duvall's [practical learning exercise for Git](https://opensource.com/article/19/5/practical-learning-exercise-git) and find out what a successful Git session feels like. ## Git apps Believe it or not, Git has more interfaces than text you type into a terminal. Obviously there are the web interfaces of Git hosts online, but you can use Git clients on your computer, too. 
For just a light layer of assistance, read Jesse Duffield's article about [Lazygit](https://opensource.com/article/20/3/lazygit) or Olaf Anders' article about [Tig](https://opensource.com/article/19/6/what-tig). For the full graphical application experience, read my article about [Git-cola](https://opensource.com/article/20/3/git-cola), [Sparkleshare](https://opensource.com/article/19/4/file-sharing-git), and [still others](https://opensource.com/life/16/8/graphical-tools-git). And yes, there are even [interfaces for your mobile devices](https://opensource.com/article/19/4/calendar-git#mobile)! ## Learn more about Git Knowledge is power, so don't let Git be a mystery to you. Whether you use it directly or you only know it by name or you'd never heard of it before, now's a great time to learn about Git. There are great resources out there to help you understand how it works, why it works, and why people love it so much. Dive in, take it at your own pace, and learn to love Git!
12,106
用 k3s 轻松管理 SSL 证书
https://opensource.com/article/20/3/ssl-letsencrypt-k3s
2020-04-13T15:32:41
[ "k3s" ]
https://linux.cn/article-12106-1.html
> > 如何在树莓派上使用 k3s 和 Let's Encrypt 来加密你的网站。 > > > ![](/data/attachment/album/202004/13/153032ncp8q55pjwdj8ppj.jpg) 在[上一篇文章](/article-12081-1.html)中,我们在 k3s 集群上部署了几个简单的网站。那些是未加密的网站。不错,它们可以工作,但是未加密的网站有点太过时了!如今,大多数网站都是加密的。在本文中,我们将安装 [cert-manager](https://cert-manager.io/) 并将其用于在集群上以部署采用 TLS 加密的网站。这些网站不仅会被加密,而且还会使用有效的公共证书,这些证书会从 [Let's Encrypt](https://letsencrypt.org/) 自动获取和更新!让我们开始吧! ### 准备 要继续阅读本文,你将需要我们在上一篇文章中构建的 [k3s 树莓派集群](/article-12049-1.html)。另外,你需要拥有一个公用静态 IP 地址,并有一个可以为其创建 DNS 记录的域名。如果你有一个动态 DNS 提供程序为你提供域名,可能也行。但是,在本文中,我们使用静态 IP 和 [CloudFlare](https://cloudflare.com/) 来手动创建 DNS 的 A 记录。 我们在本文中创建配置文件时,如果你不想键入它们,则可以在[此处](https://gitlab.com/carpie/k3s_using_certmanager/-/archive/master/k3s_using_certmanager-master.zip)进行下载。 ### 我们为什么使用 cert-manager? Traefik(在 k3s 预先捆绑了)实际上具有内置的 Let's Encrypt 支持,因此你可能想知道为什么我们要安装第三方软件包来做同样的事情。在撰写本文时,Traefik 中的 Let's Encrypt 支持检索证书并将其存储在文件中。而 cert-manager 会检索证书并将其存储在 Kubernetes 的 “<ruby> 机密信息 <rt> secret </rt></ruby>” 中。我认为,“机密信息”可以简单地按名称引用,因此更易于使用。这就是我们在本文中使用 cert-manager 的主要原因。 ### 安装 cert-manager 通常,我们只是遵循 cert-manager 的[文档](https://cert-manager.io/docs/installation/kubernetes/)在 Kubernetes 上进行安装。但是,由于我们使用的是 ARM 体系结构,因此我们需要进行一些更改,以便我们可以完成这个操作。 第一步是创建 cert-manager 命名空间。命名空间有助于将 cert-manager 的 Pod 排除在我们的默认命名空间之外,因此当我们使用自己的 Pod 执行 `kubectl get pods` 之类的操作时,我们不必看到它们。创建名称空间很简单: ``` kubectl create namespace cert-manager ``` 安装说明会让你下载 cert-manager 的 YAML 配置文件并将其一步全部应用到你的集群。我们需要将其分为两个步骤,以便为基于 ARM 的树莓派修改文件。我们将下载文件并一步一步进行转换: ``` curl -sL \ https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml |\ sed -r 's/(image:.*):(v.*)$/\1-arm:\2/g' > cert-manager-arm.yaml ``` 这会下载配置文件,并将包含的所有 docker 镜像更新为 ARM 版本。来检查一下它做了什么: ``` $ grep image: cert-manager-arm.yaml image: "quay.io/jetstack/cert-manager-cainjector-arm:v0.11.0" image: "quay.io/jetstack/cert-manager-controller-arm:v0.11.0" image: "quay.io/jetstack/cert-manager-webhook-arm:v0.11.0" ``` 如我们所见,三个镜像现在在镜像名称上添加了 `-arm`。现在我们有了正确的文件,我们只需将其应用于集群: ``` kubectl apply -f 
cert-manager-arm.yaml ``` 这将安装 cert-manager 的全部。我们可以通过 `kubectl --namespace cert-manager get pods` 来检查安装何时完成,直到所有 Pod 都处于 `Running` 状态。 这就完成了 cert-manager 的安装! ### Let's Encrypt 概述 Let's Encrypt 的好处是,它免费为我们提供了经过公共验证的 TLS 证书!这意味着我们可以拥有一个完全有效的、可供任何人访问的 TLS 加密网站,这些家庭或业余的爱好活动挣不到钱,也无需自己掏腰包购买 TLS 证书!以及,当通过 cert-manager 使用 Let's Encrypt 的证书时,获得证书的整个过程是自动化的,证书的续订也是自动的! 但它是如何工作的?下面是该过程的简化说明。我们(或代表我们的 cert-manager)向 Let's Encrypt 发出我们拥有的域名的证书请求。Let's Encrypt 通过使用 ACME DNS 或 HTTP 验证机制来验证我们是否拥有该域。如果验证成功,则 Let's Encrypt 将向我们提供证书,这些证书将由 cert-manager 安装在我们的网站(或其他 TLS 加密的端点)中。在需要重复此过程之前,这些证书可以使用 90 天。但是,cert-manager 会自动为我们更新证书。 在本文中,我们将使用 HTTP 验证方法,因为它更易于设置并且适用于大多数情况。以下是幕后发生的基本过程。cert-manager 向 Let's Encrypt 发出证书请求。作为回应,Let's Encrypt 发出所有权验证的<ruby> 质询 <rt> challenges </rt></ruby>。这个质询是将一个 HTTP 资源放在请求证书的域名下的一个特定 URL 上。从理论上讲,如果我们可以将该资源放在该 URL 上,并且让 Let's Encrypt 可以远程获取它,那么我们实际上必须是该域的所有者。否则,要么我们无法将资源放置在正确的位置,要么我们无法操纵 DNS 以使 Let's Encrypt 访问它。在这种情况下,cert-manager 会将资源放在正确的位置,并自动创建一个临时的 `Ingress` 记录,以将流量路由到正确的位置。如果 Let's Encrypt 可以读到该质询要求的资源并正确无误,它将把证书发回给 cert-manager。cert-manager 将证书存储为“机密信息”,然后我们的网站(或其他任何网站)将使用这些证书通过 TLS 保护我们的流量。 ### 为该质询设置网络 我假设你要在家庭网络上进行设置,并拥有一个以某种方式连接到更广泛的互联网的路由器/接入点。如果不是这种情况,则可能不需要以下过程。 为了使质询过程正常运行,我们需要一个我们要申请证书的域名,以将其路由到端口 80 上的 k3s 集群。为此,我们需要告诉世界上的 DNS 系统它的位置。因此,我们需要将域名映射到我们的公共 IP 地址。如果你不知道你的公共 IP 地址是什么,可以访问 [WhatsMyIP](https://whatsmyip.org/) 之类的地方,它会告诉你。接下来,我们需要输入 DNS 的 A 记录,该记录将我们的域名映射到我们的公共 IP 地址。为了使此功能可靠地工作,你需要一个静态的公共 IP 地址,或者你可以使用动态 DNS 提供商。一些动态 DNS 提供商会向你颁发一个域名,你可以按照以下说明使用它。我没有尝试过,所以不能肯定地说它适用于所有提供商。 对于本文,我们假设有一个静态公共 IP,并使用 CloudFlare 来设置 DNS 的 A 记录。如果愿意,可以使用自己的 DNS 服务器。重要的是你可以设置 A 记录。 在本文的其余部分中,我将使用 [k3s.carpie.net](http://k3s.carpie.net) 作为示例域名,因为这是我拥有的域。你显然会用自己拥有的任何域名替换它。 为示例起见,假设我们的公共 IP 地址是 198.51.100.42。我们转到我们的 DNS 提供商的 DNS 记录部分,并添加一个名为 [k3s.carpie.net](http://k3s.carpie.net) 的类型为 `A` 的记录(CloudFlare 已经假定了域的部分,因此我们只需输入 `k3s`),然后输入 `198.51.100.42` 作为 IPv4 地址。 ![](/data/attachment/album/202004/13/153252k3tos5tam00maywo.png) 请注意,有时 DNS 
更新要传播一段时间。你可能需要几个小时才能解析该名称。在继续之前该名称必须可以解析。否则,我们所有的证书请求都将失败。 我们可以使用 `dig` 命令检查名称是否解析: ``` $ dig +short k3s.carpie.net 198.51.100.42 ``` 继续运行以上命令,直到可以返回 IP 才行。关于 CloudFlare 有个小注释:CloudFlare 提供了通过代理流量来隐藏你的实际 IP 的服务。在这种情况下,我们取回的是 CloudFlare 的 IP,而不是我们的 IP。但对于我们的目的,这应该可以正常工作。 网络配置的最后一步是配置路由器,以将端口 80 和 443 上的传入流量路由到我们的 k3s 集群。可悲的是,路由器配置页面的差异很大,因此我无法确切地说明你的配置页面是什么样子。大多数时候,我们需要的管理页面位于“端口转发”或类似内容下。我甚至看到过它列在“游戏”之下(显然端口转发主要是用来玩游戏的)!让我们看看我的路由器的配置如何。 ![](/data/attachment/album/202004/13/153257qb6fm6ef3udhz6kt.png) 如果你和我的环境一样,则转到 192.168.0.1 登录到路由器管理应用程序。对于此路由器,它位于 “NAT/QoS” -> “端口转发”。在这里,我们将端口 80/TCP 协议设置为转发到 192.168.0.50(主节点 `kmaster` 的 IP)的端口 80。我们还设置端口 443 也映射到 `kmaster`。从技术上讲,这对于质询来说并不是必需的,但是在本文的结尾,我们将部署一个启用 TLS 的网站,并且需要映射 443 来进行访问。因此,现在进行映射很方便。我们保存并应用更改,应该一切顺利!
现在,我们使用以下方法创建<ruby> 发行者 <rt> issuer </rt></ruby>: ``` kubectl apply -f letsencrypt-issuer-staging.yaml ``` 我们可以使用以下方法检查发行者是否已成功创建: ``` kubectl get clusterissuers ``` `clusterissuers` 是由 cert-manager 创建的一种新的 Kubernetes 资源类型。 现在让我们手动请求一个测试证书。对于我们的网站,我们不需要这样做;我们只是在测试这个过程,以确保我们的配置正确。 创建一个包含以下内容的证书请求文件 `le-test-certificate.yaml`: ``` apiVersion: cert-manager.io/v1alpha2 kind: Certificate metadata: name: k3s-carpie-net namespace: default spec: secretName: k3s-carpie-net-tls issuerRef: name: letsencrypt-staging kind: ClusterIssuer commonName: k3s.carpie.net dnsNames: - k3s.carpie.net ``` 该记录仅表示我们要使用名为 `letsencrypt-staging`(我们在上一步中创建的)的 `ClusterIssuer` 来请求域 [k3s.carpie.net](http://k3s.carpie.net) 的证书,并在 Kubernetes 的机密信息中名为 `k3s-carpie-net-tls` 的文件中存储该证书。 像平常一样应用它: ``` kubectl apply -f le-test-certificate.yaml ``` 我们可以通过以下方式查看状态: ``` kubectl get certificates ``` 如果我们看到类似以下内容: ``` NAME READY SECRET AGE k3s-carpie-net True k3s-carpie-net-tls 30s ``` 我们走在幸福之路!(这里的关键是 `READY` 应该是 `True`)。 ### 解决证书颁发问题 上面是幸福的道路。如果 `READY` 为 `False`,我们可以等等它,然后再次花点时间检查状态。如果它一直是 `False`,那么我们就有需要解决的问题。此时,我们可以遍历 Kubernetes 资源链,直到找到一条告诉我们问题的状态消息。 假设我们执行了上面的请求,而 `READY` 为 `False`。我们可以从以下方面开始故障排除: ``` kubectl describe certificates k3s-carpie-net ``` 这将返回很多信息。通常,有用的内容位于 `Events:` 部分,该部分通常位于底部。假设最后一个事件是 `Created new CertificateRequest resource "k3s-carpie-net-1256631848`。然后我们<ruby> 描述 <rt> describe </rt></ruby>一下该请求: ``` kubectl describe certificaterequest k3s-carpie-net-1256631848 ``` 现在比如说最后一个事件是 `Waiting on certificate issuance from order default/k3s-carpie-net-1256631848-2342473830`。 那么,我们可以描述该顺序: ``` kubectl describe orders default/k3s-carpie-net-1256631848-2342473830 ``` 假设有一个事件,事件为 `Created Challenge resource "k3s-carpie-net-1256631848-2342473830-1892150396" for domain "k3s.carpie.net"`。让我们描述一下该质询: ``` kubectl describe challenges k3s-carpie-net-1256631848-2342473830-1892150396 ``` 从这里返回的最后一个事件是 `Presented challenge using http-01 challenge mechanism`。看起来没问题,因此我们浏览一下描述的输出,并看到一条消息 `Waiting for http-01 
challenge propagation: failed to perform self check GET request ... no such host`。终于!我们发现了问题!在这种情况下,`no such host` 意味着 DNS 查找失败,因此我们需要返回并手动检查我们的 DNS 设置,正确解析域的 DNS,并进行所需的任何更改。 ### 清理我们的测试证书 我们实际上想要使用的是域名的真实证书,所以让我们继续清理证书和我们刚刚创建的机密信息: ``` kubectl delete certificates k3s-carpie-net kubectl delete secrets k3s-carpie-net-tls ``` ### 配置 cert-manager 以使用 Let's Encrypt(生产环境) 现在我们已经有了测试证书,是时候移动到生产环境了。就像我们在 Let's Encrypt 暂存环境中配置 cert-manager 一样,我们现在也需要对生产环境进行同样的操作。创建一个名为 `letsencrypt-issuer-production.yaml` 的文件(如果需要,可以复制和修改暂存环境的文件),其内容如下: ``` apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt-prod spec: acme: # The ACME server URL server: https://acme-v02.api.letsencrypt.org/directory # Email address used for ACME registration email: <your_email>@example.com # Name of a secret used to store the ACME account private key privateKeySecretRef: name: letsencrypt-prod # Enable the HTTP-01 challenge provider solvers: - http01: ingress: class: traefik ``` (如果要从暂存环境进行复制,则唯一的更改是 `server:` URL。也请不要忘记修改电子邮件!) 应用它: ``` kubectl apply -f letsencrypt-issuer-production.yaml ``` ### 申请我们网站的证书 重要的是需要注意,我们到目前为止完成的所有步骤都只需要进行一次!而对于将来的任何其他申请,我们可以从这个说明开始! 
让我们部署在[上一篇文章](/article-12081-1.html)中部署的同样站点。(如果仍然可用,则可以修改 YAML 文件。如果没有,则可能需要重新创建并重新部署它)。 我们只需要将 `mysite.yaml` 的 `Ingress` 部分修改为: ``` --- apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: mysite-nginx-ingress annotations: kubernetes.io/ingress.class: "traefik" cert-manager.io/cluster-issuer: letsencrypt-prod spec: rules: - host: k3s.carpie.net http: paths: - path: / backend: serviceName: mysite-nginx-service servicePort: 80 tls: - hosts: - k3s.carpie.net secretName: k3s-carpie-net-tls ``` 请注意,上面仅显示了 `mysite.yaml` 的 `Ingress` 部分。所做的更改是添加了注解 `cert-manager.io/cluster-issuer: letsencrypt-prod`。这告诉 traefik 创建证书时使用哪个发行者。 其他唯一增加的是 `tls:` 块。这告诉 traefik 我们希望在主机 [k3s.carpie.net](http://k3s.carpie.net) 上具有 TLS 功能,并且我们希望 TLS 证书文件存储在机密信息 `k3s-carpie-net-tls` 中。 请记住,我们没有创建这些证书!(好吧,我们创建了名称相似的测试证书,但我们删除了这些证书。)Traefik 将读取这些配置并继续寻找机密信息。当找不到时,它会看到注释说我们想使用 `letsencrypt-prod` 发行者来获取它。由此,它将提出请求并为我们安装证书到机密信息之中! 大功告成! 让我们尝试一下。 它现在具有了加密 TLS 所有优点!恭喜你! --- via: <https://opensource.com/article/20/3/ssl-letsencrypt-k3s> 作者:[Lee Carpenter](https://opensource.com/users/carpie) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
In a [previous article](https://carpie.net/articles/ingressing-with-k3s), we deployed a couple of simple websites on our k3s cluster. These were non-encrypted sites. Now that's fine, and they work, but non-encrypted is very last century! These days most websites are encrypted. In this article, we are going to install [cert-manager](https://cert-manager.io/) and use it to deploy TLS encrypted sites on our cluster. Not only will the sites be encrypted, but they will be using valid public certificates that are automatically provisioned and automatically renewed from [Let's Encrypt](https://letsencrypt.org/)! Let's get started! ## Materials needed To follow along with the article, you will need [the k3s Raspberry Pi cluster](https://opensource.com/article/20/3/kubernetes-raspberry-pi-k3s) we built in a previous article. Also, you will need a public static IP address and a domain name that you own and can create DNS records for. If you have a dynamic DNS provider that provides a domain name for you, that may work as well. However, in this article, we will be using a static IP and [CloudFlare](https://cloudflare.com/) to manually create DNS "A" records. As we create configuration files in this article, if you don't want to type them out, they are all available for download [here](https://gitlab.com/carpie/k3s_using_certmanager/-/archive/master/k3s_using_certmanager-master.zip). ## Why are we using cert-manager? Traefik (which comes pre-bundled with k3s) actually has Let's Encrypt support built-in, so you may be wondering why we are installing a third-party package to do the same thing. At the time of this writing, Traefik's Let's Encrypt support retrieves certificates and stores them in files. Cert-manager retrieves certificates and stores them in Kubernetes **secrets**. **Secrets** can be simply referenced by name and are, therefore, easier to use, in my opinion. That is the main reason we are going to use cert-manager in this article.
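To make "certificates stored in Kubernetes secrets" a bit more concrete: a TLS secret just holds the PEM certificate and key, base64-encoded, under the keys `tls.crt` and `tls.key`. The sketch below fakes that round-trip locally with a throwaway self-signed certificate — the domain name is made up, and nothing here touches the cluster:

```shell
# Generate a throwaway self-signed certificate to stand in for one
# issued by Let's Encrypt (hypothetical domain, illustration only).
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/CN=k3s.example.net" \
  -keyout /tmp/tls.key -out /tmp/tls.crt 2>/dev/null

# A Kubernetes secret would store the PEM data as one base64 blob:
b64crt=$(base64 < /tmp/tls.crt | tr -d '\n')

# Decoding the blob recovers the certificate, e.g. its subject and expiry:
echo "$b64crt" | base64 -d | openssl x509 -noout -subject -enddate
```

On a live cluster, the equivalent peek at what cert-manager stored would be `kubectl get secret <name> -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -text`.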
## Installing cert-manager Mostly we will simply follow the cert-manager [documentation](https://cert-manager.io/docs/installation/kubernetes/) for installing on Kubernetes. Since we are working with an ARM architecture, however, we will be making slight changes, so we will go through the procedure here. The first step is to create the cert-manager namespace. The namespace helps keep cert-manager's pods out of our default namespace, so we do not have to see them when we do things like **kubectl get pods** with our own pods. Creating the namespace is simple: ``` kubectl create namespace cert-manager ``` The installation instructions have you download the cert-manager YAML configuration file and apply it to your cluster all in one step. We need to break that into two steps in order to modify the file for our ARM-based Pis. We will download the file and do the conversion in one step: ``` curl -sL \ https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml |\ sed -r 's/(image:.*):(v.*)$/\1-arm:\2/g' > cert-manager-arm.yaml ``` This downloads the configuration file and updates all the contained docker images to be the ARM versions. To check what it did: ``` $ grep image: cert-manager-arm.yaml image: "quay.io/jetstack/cert-manager-cainjector-arm:v0.11.0" image: "quay.io/jetstack/cert-manager-controller-arm:v0.11.0" image: "quay.io/jetstack/cert-manager-webhook-arm:v0.11.0" ``` As we can see, the three images now have **-arm** added to the image name. Now that we have the correct file, we simply apply it to our cluster: `kubectl apply -f cert-manager-arm.yaml` This will install all of cert-manager. We can know when the installation finished by checking with **kubectl --namespace cert-manager get pods** until all pods are in the **Running** state. That is actually it for cert-manager installation! ## A quick overview of Let's Encrypt The nice thing about Let's Encrypt is that they provide us with publicly validated TLS certificates for free! 
This means that we can have a completely valid TLS encrypted website that anyone can visit for our home or hobby things that do not make money to support themselves without paying out of our own pocket for TLS certificates! Also, when using Let's Encrypt certificates with cert-manager, the entire process of procuring the certificates is automated. Certificate renewal is also automated! But how does this work? Here is a simplified explanation of the process. We (or cert-manager on our behalf) issue a request for a certificate to Let's Encrypt for a domain name that we own. Let's Encrypt verifies that we own that domain by using an ACME DNS or HTTP validation mechanism. If the verification is successful, Let's Encrypt provides us with certificates, which cert-manager installs in our website (or other TLS encrypted endpoint). These certificates are good for 90 days before the process needs to be repeated. Cert-manager, however, will automatically keep the certificates up-to-date for us. In this article, we will use the HTTP validation method as it is simpler to set up and works for the majority of use cases. Here is the basic process that will happen behind the scenes. Cert-manager will issue a certificate request to Let's Encrypt. Let's Encrypt will issue an ownership verification challenge in response. The challenge will be to put an HTTP resource at a specific URL under the domain name that the certificate is being requested for. The theory is that if we can put that resource at that URL and Let's Encrypt can retrieve it remotely, then we must really be the owners of the domain. Otherwise, either we could not have placed the resource in the correct place, or we could not have manipulated DNS to allow Let's Encrypt to get to it. In this case, cert-manager puts the resource in the right place and automatically creates a temporary **Ingress** record that will route traffic to the correct place. 
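To make that mechanism concrete, here is a simulated sketch of what the HTTP resource looks like. The token and account thumbprint below are invented placeholders — in reality Let's Encrypt chooses the token and cert-manager computes and serves the response — but the shape of the challenge is just a tiny text file at a well-known path:

```shell
# Simulate the layout of an ACME HTTP-01 challenge response locally.
# Both values are made-up placeholders for illustration.
token="xJ2aJ9exampleToken"
mkdir -p /tmp/webroot/.well-known/acme-challenge
echo "${token}.exampleAccountThumbprint" \
  > "/tmp/webroot/.well-known/acme-challenge/${token}"

# Let's Encrypt would fetch it over plain HTTP on port 80, e.g.:
#   http://k3s.carpie.net/.well-known/acme-challenge/xJ2aJ9exampleToken
cat "/tmp/webroot/.well-known/acme-challenge/${token}"
```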
If Let's Encrypt can read the challenge and it is correct, it will issue the certificates back to cert-manager. Cert-manager will then store the certificates as secrets, and our website (or whatever) will use those certificates for securing our traffic with TLS. ## Preparing our network for the challenges I'm assuming that you are wanting to set this up on your home network and have a router/access point that is connected in some fashion to the broader internet. If that is not the case, the following process may not be what you need. To make the challenge process work, we need the domain that we are requesting a certificate for to route to our k3s cluster on port 80. To do that, we need to tell the world's DNS system where that is. So, we'll need to map the domain name to our public IP address. If you do not know what your public IP address is, you can go to somewhere like [WhatsMyIP](https://whatsmyip.org/), and it will tell you. Next, we need to enter a DNS "A" record that maps our domain name to our public IP address. For this to work reliably, you need a static public IP address, or you may be able to use a dynamic DNS provider. Some dynamic DNS providers will issue you a domain name that you may be able to use with these instructions. I have not tried this, so I cannot say for sure it works with all providers. For this article, we are going to assume a static public IP and use CloudFlare to set the DNS "A" records. You may use your own DNS provider if you wish. The important part is that you are able to set the "A" records. For the rest of the article, I am going to use **k3s.carpie.net** as the example domain since this is a domain I own. You would obviously replace that with whatever domain you own. Ok, for the sake of example, assume our public IP address is 198.51.100.42.
We would go to our DNS provider's DNS record section and add a record of type "A," with a name of **k3s.carpie.net** (CloudFlare assumes the domain, so there we could just enter **k3s**) and enter 198.51.100.42 as the IPv4 address. ![](https://opensource.com/sites/default/files/uploads/ep011_dns_example.png) Be aware that sometimes it takes a while for the DNS updates to propagate. It may be several hours before you can resolve the name. It is imperative that the name resolves before moving on. Otherwise, all our certificate requests will fail. We can check that the name resolves using the **dig** command: ``` $ dig +short k3s.carpie.net 198.51.100.42 ``` Keep running the above command until an IP is returned. Just a note about CloudFlare: CloudFlare provides a service that hides your actual IP by proxying the traffic. In this case, we'll get back a CloudFlare IP instead of our IP. This should work fine for our purposes. The final step for network configuration is configuring our router to route incoming traffic on ports 80 and 443 to our k3s cluster. Sadly, router configuration screens vary widely, so I can't tell you exactly what yours will look like. Most of the time, the admin page we need is under "Port forwarding" or something similar. I have even seen it listed under "Gaming" (which is apparently what port forwarding is mostly used for)! Let's see what the configuration looks like for my router. ![](https://opensource.com/sites/default/files/uploads/ep011_router.png) If you had my setup, you would go to 192.168.0.1 to log in to the router administration application. For this router, it's under **NAT / QoS** -> **Port Forwarding**. Here we set port **80**, **TCP** protocol to forward to 192.168.0.50 (the IP of **kmaster**, our master node) port **80**. We also set port **443** to map to **kmaster** as well.
This is technically not needed for the challenges, but at the end of the article, we are going to deploy a TLS enabled website, and we will need **443** mapped to get to it. So it's convenient to go ahead and map it now. We save and apply the changes, and we should be good to go! ## Configuring cert-manager to use Let's Encrypt (staging) Now we need to configure cert-manager to issue certificates through Let's Encrypt. Let's Encrypt provides a staging (e.g., test) environment for us to sort out our configurations on. It is much more tolerant of mistakes and frequency of requests. If we bumble around on the production environment, we'll very quickly find ourselves temporarily banned! As such, we'll manually test requests using the staging environment. Create a file, **letsencrypt-issuer-staging.yaml** with the contents: ``` apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt-staging spec: acme: # The ACME server URL server: https://acme-staging-v02.api.letsencrypt.org/directory # Email address used for ACME registration email: <your_email>@example.com # Name of a secret used to store the ACME account private key privateKeySecretRef: name: letsencrypt-staging # Enable the HTTP-01 challenge provider solvers: - http01: ingress: class: traefik ``` Make sure to update the email address to your address. This is how Let's Encrypt contacts us if something is wrong or we are doing bad things! Now we create the issuer with: `kubectl apply -f letsencrypt-issuer-staging.yaml` We can check that the issuer was created successfully with: `kubectl get clusterissuers` **Clusterissuers** is a new Kubernetes resource type created by cert-manager. Let's now request a test certificate manually. For our sites, we will not need to do this; we are just testing out the process to make sure our configuration is correct.
Create a certificate request file, **le-test-certificate.yaml** with the contents: ``` apiVersion: cert-manager.io/v1alpha2 kind: Certificate metadata: name: k3s-carpie-net namespace: default spec: secretName: k3s-carpie-net-tls issuerRef: name: letsencrypt-staging kind: ClusterIssuer commonName: k3s.carpie.net dnsNames: - k3s.carpie.net ``` This record just says we want to request a certificate for the domain **k3s.carpie.net**, using a **ClusterIssuer** named **letsencrypt-staging** (which we created in the previous step) and store the certificate files in the Kubernetes secret named **k3s-carpie-net-tls**. Apply it like normal: `kubectl apply -f le-test-certificate.yaml` We can check the status with: `kubectl get certificates` If we see something like: ``` NAME READY SECRET AGE k3s-carpie-net True k3s-carpie-net-tls 30s ``` We are good to go! (The key here is **READY** being **True**). ## Troubleshooting certificate request issues That's the happy path. If **READY** is **False**, we could give it some time and check the status again in case it takes a bit. If it stays **False**, then we have an issue we need to troubleshoot. At this point, we can walk the chain of Kubernetes resources until we find a status message that tells us the problem. Let's say that we did the request above, and **READY** was **False**. We start the troubleshooting with: `kubectl describe certificates k3s-carpie-net` This will return a lot of information. Usually, the helpful things are in the **Events:** section, which is typically at the bottom. Let's say the last event was **Created new CertificateRequest resource "k3s-carpie-net-1256631848"**. We would then describe that request: `kubectl describe certificaterequest k3s-carpie-net-1256631848` Now let's say the last event there was **Waiting on certificate issuance from order default/k3s-carpie-net-1256631848-2342473830**.
Ok, we can describe the order: `kubectl describe orders default/k3s-carpie-net-1256631848-2342473830` Let's say that has an event that says **Created Challenge resource "k3s-carpie-net-1256631848-2342473830-1892150396" for domain "k3s.carpie.net"**. Let's describe the challenge: `kubectl describe challenges k3s-carpie-net-1256631848-2342473830-1892150396` The last event returned from here is **Presented challenge using http-01 challenge mechanism**. That looks ok, so we scan up the describe output and see a message **Waiting for http-01 challenge propagation: failed to perform self check GET request … no such host**. Finally! We have found the problem! In this case, **no such host** means that the DNS lookup failed, so then we would go back and manually check our DNS settings and that our domain's DNS resolves correctly for us and make any changes needed. ## Clean up our test certificates We actually want a real certificate for the domain name we used, so let's go ahead and clean up both the certificate and the secret we just created: ``` kubectl delete certificates k3s-carpie-net kubectl delete secrets k3s-carpie-net-tls ``` ## Configuring cert-manager to use Let's Encrypt (production) Now that we have test certificates working, it's time to move up to production. Just like we configured cert-manager for the Let's Encrypt staging environment, we need to do the same for production now.
Create a file (you can copy and modify staging if desired) named **letsencrypt-issuer-production.yaml** with the contents: ``` apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt-prod spec: acme: # The ACME server URL server: https://acme-v02.api.letsencrypt.org/directory # Email address used for ACME registration email: <your_email>@example.com # Name of a secret used to store the ACME account private key privateKeySecretRef: name: letsencrypt-prod # Enable the HTTP-01 challenge provider solvers: - http01: ingress: class: traefik ``` (If you are copying from the staging file, the only thing that changes is the **server:** URL. Don't forget the email!) Apply with: `kubectl apply -f letsencrypt-issuer-production.yaml` ## Request a certificate for our website It's important to note that all the steps we have completed to this point are one-time setup! For any additional requests in the future, we can start at this point in the instructions! Let's deploy that same site we deployed in the [previous article](https://carpie.net/articles/ingressing-with-k3s#deploying-a-simple-website). (If you still have it around, you can just modify the YAML file. If not, you may want to recreate it and re-deploy it). We just need to modify **mysite.yaml**'s **Ingress** section to be: ``` --- apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: mysite-nginx-ingress annotations: kubernetes.io/ingress.class: "traefik" cert-manager.io/cluster-issuer: letsencrypt-prod spec: rules: - host: k3s.carpie.net http: paths: - path: / backend: serviceName: mysite-nginx-service servicePort: 80 tls: - hosts: - k3s.carpie.net secretName: k3s-carpie-net-tls ``` Please note that just the **Ingress** section of **mysite.yaml** is shown above. The changes are the addition of the annotation **cert-manager.io/cluster-issuer: letsencrypt-prod**. This tells traefik which issuer to use when creating certificates. The only other addition is the **tls:** block.
This tells traefik that we expect to have TLS on host **k3s.carpie.net**, and we expect the TLS certificate files to be stored in the secret **k3s-carpie-net-tls**. Please remember that we did not create these certificates! (Well, we created test certificates similarly named, but we deleted those.) Traefik will read this and go looking for the secret. When it does not find it, it sees the annotation saying we want to use the **letsencrypt-prod** issuer to procure one. From there, it will make the request and install the certificate in the secret for us! We're done! Let's try it out. There it is in all its encrypted TLS beauty! Congratulations!
12,107
什么是 Arch 用户仓库(AUR)以及如何使用?
https://itsfoss.com/aur-arch-linux/
2020-04-13T20:24:00
[ "AUR" ]
https://linux.cn/article-12107-1.html
如果你一直在使用 [Arch Linux](https://www.archlinux.org/) 或其他基于 Arch 的发行版,如 Manjaro,那么可能会遇到 AUR。你尝试安装新软件,有人建议从 AUR 中安装它。这让你感到困惑。 什么是 AUR?为什么使用它?如何使用 AUR?我将在本文中回答这些问题。 ### 什么是 AUR? ![](/data/attachment/album/202004/13/202409mu6h0u7u62og52gh.png) AUR 表示<ruby> Arch 用户仓库 <rt> Arch User Repository </rt></ruby>。它是针对基于 Arch 的 Linux 发行版用户的社区驱动的仓库。它包含名为 [PKGBUILD](https://wiki.archlinux.org/index.php/PKGBUILD) 的包描述,它可让你使用 [makepkg](https://wiki.archlinux.org/index.php/Makepkg) 从源代码编译软件包,然后通过 [pacman](https://wiki.archlinux.org/index.php/Pacman#Additional_commands)(Arch Linux 中的软件包管理器)安装。 创建 AUR 的目的是组织和共享社区中的新软件包,并帮助加速将流行的软件包纳入[社区仓库](https://wiki.archlinux.org/index.php/Community_repository)。 进入官方仓库的大量新软件包都从 AUR 开始。在 AUR 中,用户可以贡献自己的软件包构建(PKGBUILD 和相关文件)。 AUR 社区可以对 AUR 中的软件包进行投票。如果一个软件包变得足够流行(假设它具有兼容的许可证和良好的打包技术),那么可以将其加入 `pacman` 直接访问的社区仓库中。 > > 简而言之,AUR 是开发人员在 Arch 仓库中正式包含新软件之前向 Arch Linux 用户提供新软件的一种方式。 > > > ### 你应该使用 AUR 吗?有什么风险? 使用 AUR 就像过马路一样。如果你谨慎操作,应该就没问题。 如果你刚接触 Linux,建议你在建立有关 Arch/Manjaro 和 Linux 的基础知识之前不要使用 AUR。 的确,任何人都可以将软件包上传到 AUR,但[受信任用户](https://wiki.archlinux.org/index.php/Trusted_Users)(TU)负责监视上传的内容。尽管 TU 对上传的软件包执行质量控制,但不能保证 AUR 中的软件包格式正确或没有恶意。 在实践中,AUR 似乎很安全,但理论上讲它可以造成一定程度的损害,但前提是你不小心。从 AUR 构建软件包时,聪明的 Arch 用户**总是**检查 `PKGBUILD` 和 `*.install` 文件。 此外,TU(受信任用户)还会删除 AUR 中包含在 core/extra/community 中的软件包,因此它们之间不应存在命名冲突。AUR 通常会包含软件包的开发版本(cvs/svn/git 等),但它们的名称会被修改,例如 foo-git。 对于 AUR 软件包,`pacman` 会处理依赖关系并检测文件冲突,因此,除非你使用 `--force` 选项,否则你不必担心一个软件包中的文件覆盖另一个软件包的文件。如果这么做了,你可能会遇到比文件冲突更严重的问题。 ### 如何使用 AUR?
使用 AUR 的最简单方法是通过 AUR 助手。 [AUR 助手](https://itsfoss.com/best-aur-helpers/) 是一个命令行工具(有些还有 GUI),可让你搜索在 AUR 上发布的软件包并安装。 #### 在 Arch Linux 上安装 AUR 助手 假设你要使用 [Yay AUR 助手](https://github.com/Jguer/yay)。确保在 Linux 上安装了 git。然后克隆仓库,进入目录并构建软件包。 依次使用以下命令: ``` sudo pacman -S git git clone https://aur.archlinux.org/yay-git.git cd yay-git makepkg -si ``` 安装后,你可以使用 `yay` 命令来安装软件包: ``` yay -S package_name ``` 并非必须使用 AUR 助手来从 AUR 安装软件包。从以下文章了解如何在没有 AUR 助手的情况下使用 AUR。 #### 不使用 AUR 助手安装 AUR 软件包 如果你不想使用 AUR 助手,那么也可以自行从 AUR 安装软件包。 在 [AUR 页面](https://aur.archlinux.org/)上找到要安装的软件包后,建议确认“许可证”、“流行程度”、“最新更新”、“依赖项”等,作为额外的质量控制步骤。 ``` git clone [package URL] cd [package name] makepkg -si ``` 例如,假设你要安装 [telegram 桌面包](https://aur.archlinux.org/packages/telegram-desktop-git): ``` git clone https://aur.archlinux.org/telegram-desktop-git.git cd telegram-desktop-git makepkg -si ``` #### 在 Manjaro Linux 中启用 AUR 支持 Manjaro 默认情况下未启用 AUR,你必须通过 `pamac` 启用它。我的笔记本电脑运行 [Manjaro](https://manjaro.org/) Cinnamon,但是所有 Manjaro 变种的步骤都相同。 打开 Pamac(显示为 “Add/Remove Software”): ![](/data/attachment/album/202004/13/202757a8kak498x4lk4831.png) 进入 Pamac 后,请进入如下所示的<ruby> 首选项 <rt> preferences </rt></ruby>。 ![](/data/attachment/album/202004/13/202829u9j7v8mg7g2m02nd.png) 在首选项对话框中,进入 “AUR” 选项卡,启用 AUR 支持,启用检查更新,并关闭对话框。 ![](/data/attachment/album/202004/13/202852ni77vilrwzdwwype.png) 现在,你可以搜索软件包,并且可以通过软件包描述下的标签来识别属于 AUR 的软件包。 ![](/data/attachment/album/202004/13/202922iu6ww7t8hsyzzasd.png) 希望本文对你有用,并关注社交媒体上即将出现的与 Arch 相关的主题。 --- via: <https://itsfoss.com/aur-arch-linux/> 作者:[Dimitrios Savvopoulos](https://itsfoss.com/author/dimitrios/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
If you have been using [Arch Linux](https://www.archlinux.org/) or other [distributions based on Arch](https://itsfoss.com/arch-based-linux-distros/) such as Manjaro, you might have come across the term AUR. You try to install new software and someone suggests installing it from AUR. This leaves you confused. What is this AUR? Why is it used? How to use AUR? I'll answer these questions in this article. ![what is aur](https://itsfoss.com/content/images/wordpress/2022/12/what-is-aur.webp) ## What is AUR? AUR stands for Arch User Repository. It is a community-driven repository for Arch-based Linux distribution users. It contains package descriptions named [PKGBUILDs](https://wiki.archlinux.org/index.php/PKGBUILD) that allow you to compile a package from source with [makepkg](https://wiki.archlinux.org/index.php/Makepkg) and install it via [pacman](https://wiki.archlinux.org/index.php/Pacman#Additional_commands), the package manager in Arch Linux. The AUR was created to organize and share new packages from the community and to help accelerate popular packages' inclusion into the [community repository](https://wiki.archlinux.org/index.php/Community_repository). Many new packages that enter the official repositories start in the AUR. In the AUR, users are able to contribute their own package builds (PKGBUILD and related files). The AUR community can vote for packages in the AUR. If a package becomes popular enough — provided it has a compatible license and good packaging technique — it may be entered into the community repository directly accessible by pacman. ## Should you use AUR? What's the risk involved? Using the AUR is like crossing the street. If you proceed with caution you should be fine. If you are new to Linux it is advised not to use the AUR until you build a foundation of knowledge about Arch/Manjaro and Linux in general.
It is true that anyone can upload packages to the AUR but the [Trusted Users](https://wiki.archlinux.org/index.php/Trusted_Users) (TUs) are charged with keeping an eye on what gets uploaded. Although TUs perform quality control on the uploaded packages, there is no guarantee that packages in the AUR are well-formed or not malicious. In practice, the AUR seems to be quite safe but in theory, it can do some damage, but only if you are not careful. A smart Arch user **always** inspects PKGBUILDs and *.install files when building packages from the AUR. Additionally, TUs (Trusted Users) also remove packages in the AUR that are included in core/extra/community so there should be no naming conflicts between them. The AUR will often contain developmental versions of packages (cvs/svn/git/etc) but they will have modified names such as foo-git. As for the AUR packages, pacman handles dependency resolution and detects file conflicts. You never have to worry about overwriting files in one package with files from another unless you use the `--force` option by default. If you do that, you probably have more serious problems than file conflicts. ## How to use AUR? The simplest way to use AUR is through an AUR helper. An [AUR helper](https://itsfoss.com/best-aur-helpers/) is a command line tool (some have GUI as well) that lets you search for packages published on the AUR and install them. ### Installing an AUR helper on Arch Linux Let’s say you want to use [Yay AUR helper](https://github.com/Jguer/yay). Make sure that you have git installed on Linux. And then clone the repository, go to the directory and build the package. Use these commands one by one for that: ``` sudo pacman -S --needed git base-devel git clone https://aur.archlinux.org/yay.git cd yay makepkg -si ``` Once installed, you can use the yay command like this to install a package: `yay -S package_name` It’s not that you must use AUR helper for installing packages from AUR. 
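As noted above, a smart Arch user always inspects PKGBUILDs and *.install files before building. Part of that inspection is easy to script; here is a minimal sketch (the `inspect_pkgbuild` function is hypothetical, not an official tool, and a grep is no substitute for actually reading the file):

```shell
#!/bin/sh
# Hypothetical helper: flag a few obvious red flags in a PKGBUILD
# before running makepkg. It does NOT replace reading the file.
inspect_pkgbuild() {
    file=$1
    # patterns worth a second look: piping downloads into a shell,
    # recursive deletes from /, use of sudo inside the build
    if grep -nE 'curl.*\| *(ba)?sh|wget.*\| *(ba)?sh|rm -rf /|sudo ' "$file"; then
        return 1   # something matched: review before building
    fi
    return 0
}

# demo against a harmless PKGBUILD
cat > /tmp/PKGBUILD.example <<'EOF'
pkgname=example
build() { make; }
EOF
inspect_pkgbuild /tmp/PKGBUILD.example && echo "no obvious red flags"
```

A clean result only means none of these coarse patterns matched — the actual review is still yours to do.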
The next section shows how to use AUR without an AUR helper. ### Installing AUR packages without AUR helpers If you don’t want to use AUR helper, you can still install packages from AUR. You have to build them with a few commands. As soon as you find the package you want to install on [AUR page](https://aur.archlinux.org/) it is advised to confirm “Licence”, “Popularity”, “Last Updated”, “Dependencies” and so on as an extra quality control step. ``` git clone [package URL] cd [package name] makepkg -si ``` For example. let’s say you want to install [telegram desktop package](https://aur.archlinux.org/packages/telegram-desktop-git): ``` git clone https://aur.archlinux.org/telegram-desktop-git.git cd telegram-desktop-git makepkg -si ``` ## Enabling AUR support in Manjaro Linux AUR isn’t enabled by default and you have to enable it through pamac. My laptop runs [Manjaro](https://manjaro.org/) Cinnamon but the steps are the same for all Manjaro flavors. Open Pamac (listed as Add/Remove Software): ![open add remove software pamac from system menu](https://itsfoss.com/content/images/wordpress/2022/12/open-add-remove-software-pamac-from-system-menu.png) Once you are in pamac go to preferences like shown below. ![preferences inside pamac](https://itsfoss.com/content/images/wordpress/2022/12/preferences-inside-pamac.png) In the preferences dialog box go to the AUR tab, enable the AUR support, enable check for updates and close the dialog box. ![enable aur support in manjaro](https://itsfoss.com/content/images/wordpress/2022/12/enable-aur-support-in-manjaro.png) You can now search for packages and those which belong to AUR can be identified by the tag under the package descriptions. ![aur packages in pamac](https://itsfoss.com/content/images/wordpress/2022/12/aur-packages-in-pamac.png) AUR is one of the many [reasons why people love Arch Linux](https://itsfoss.com/why-arch-linux/) and you can see why it is so popular. 
I hope you find this article useful and keep an eye on social media for upcoming Arch-related topics.
12,109
用 Chezmoi 取回你的点文件
https://fedoramagazine.org/take-back-your-dotfiles-with-chezmoi/
2020-04-14T18:36:38
[ "点文件" ]
https://linux.cn/article-12109-1.html
![](/data/attachment/album/202004/14/183618dwkhe4ehx1kthxvw.jpg) 在 Linux 中,点文件是隐藏的文本文件,从 Bash、Git 到 i3 或 VSCode 等更复杂的许多应用程序,都用它存储配置设置。 这些文件大多数都放在 `~/.config` 目录中或用户主目录中。编辑这些文件使你可以自定义也许没有提供设置菜单的应用程序,并且它们可以跨设备甚至是跨其它 Linux 发行版移植。但是,整个 Linux 发烧友社区的讨论焦点是如何管理这些点文件以及如何共享它们。 我们将展示一个名为 [Chezmoi](https://www.chezmoi.io/) 的工具,该工具与其它工具略有不同。 ### 点文件管理的历史 如果你在 [GitHub 上搜索“dotfiles”](https://github.com/search?q=dotfiles&type=Repositories),那么你将看到有超过 10 万个存储库在解决一个目标:以可共享且可重复的方式存储人们的点文件。但是,除了都在使用 Git 之外,它们存储文件的方式各有不同。 虽然 Git 解决了代码管理问题,而这一能力同样适用于配置文件管理,但它并没有解决如何区分发行版、角色(例如家用计算机与工作计算机)、机密信息管理以及按设备配置的问题。 因此,许多用户决定制定自己的解决方案,多年来,社区已经做出了许多成果。本文将简要介绍已有的一些解决方案。 #### 在孤立的环境中进行实验 你想在封闭的环境中快速尝试以下解决方案吗?运行: ``` $ podman run --rm -it fedora ``` 来创建一个 Fedora 容器尝试应用程序。退出容器时,该容器将自动删除自身。 #### 安装问题 如果将点文件存储在 Git 存储库中,你肯定希望可以让更改轻松地自动应用到主目录之中。乍一看,最简单的方法是使用符号链接,例如 `ln -s ~/.dotfiles/bashrc ~/.bashrc`。这可以使你的更改在更新存储库时立即就绪。 符号链接的问题在于管理符号链接可能很麻烦。Stow 和 [RCM](https://fedoramagazine.org/managing-dotfiles-rcm/)(在 Fedora 杂志上介绍过)可以帮助你管理这些,但是这些并不是非常舒服的解决方案。下载后,需要对私有文件进行适当的修改和设置访问模式。如果你在一个系统上修改了点文件,然后将存储库下载到另一个系统,则可能会发生冲突并需要进行故障排除。 解决此问题的另一种方法是编写自己的安装脚本。这是最灵活的选项,但要权衡花费更多时间来构建自定义解决方案是否值得。 #### 机密信息问题 Git 旨在跟踪更改。如果你在 Git 存储库中存储密码或 API 密钥之类的机密信息,则会比较麻烦,并且需要重写 Git 历史记录以删除该机密信息。如果你的存储库是公开的,那么如果其他人下载了你的存储库,你的机密信息将不再保密。仅这个问题就会阻止许多人与公共世界共享其点文件。 #### 多设备配置问题 问题不在于如何将配置拉到多个设备,而是在于你有多个需要不同配置的设备。大多数人通过使用不同的文件夹或使用不同的<ruby> 复刻 <rt> fork </rt></ruby>来处理此问题。这使得难以在不同设备和角色集之间共享配置。 ### Chezmoi 是如何干的 Chezmoi 是一种考虑了以上问题的用于管理点文件的工具,它不会盲目地从存储库复制或符号链接文件。 Chezmoi 更像是模板引擎,可以根据系统变量、模板、机密信息管理器和 Chezmoi 自己的配置文件来生成你的点文件。 #### Chezmoi 入门 目前,Chezmoi 并不在 Fedora 的默认软件库中。你可以使用以下命令下载 Chezmoi 的当前版本。 ``` $ sudo dnf install https://github.com/twpayne/chezmoi/releases/download/v1.7.17/chezmoi-1.7.17-x86_64.rpm ``` 这会将预打包的 RPM 安装到你的系统中。 让我们继续使用以下方法创建你的存储库: ``` $ chezmoi init ``` 它将在 `~/.local/share/chezmoi/` 中创建你的新存储库。你可以使用以下命令轻松地切换到该目录: ``` $ chezmoi cd ``` 让我们添加第一个文件: ``` chezmoi add ~/.bashrc ``` 这将你的 `.bashrc` 文件添加到 chezmoi 存储库。 注意:如果你的 `.bashrc` 
文件实际上是一个符号链接,则需要添加 `-f` 标志以跟随它来读取实际文件的内容。 现在,你可以使用以下命令编辑该文件: ``` $ chezmoi edit ~/.bashrc ``` 现在让我们添加一个私有文件,这是一个具有 600 或类似权限的文件。我在 `.ssh/config` 中有一个文件,我想通过使用如下命令添加它: ``` $ chezmoi add ~/.ssh/config ``` Chezmoi 使用特殊的前缀来跟踪隐藏文件和私有文件,以解决 Git 的限制。运行以下命令以查看它: ``` $ chezmoi cd ``` **请注意,标记为私有的文件实际上并不是私有的,它们仍会以纯文本格式保存在你的 Git 存储库中。稍后会进一步解释。** 你可以使用以下方法应用任何更改: ``` $ chezmoi apply ``` 并使用如下命令检查有什么不同: ``` $ chezmoi diff ``` #### 使用变量和模板 要导出 Chezmoi 可以收集的所有数据,请运行: ``` $ chezmoi data ``` 其中大多数是有关用户名、架构、主机名、操作系统类型和操作系统名称的信息。但是你也可以添加你自己的变量。 继续,运行: ``` $ chezmoi edit-config ``` 然后输入以下内容: ``` [data] email = "[email protected]" name = "Fedora Mcdora" ``` 保存文件,然后再次运行 `chezmoi data`。你将在底部看到你的电子邮件和姓名已经添加成功。现在,你可以将这些与 Chezmoi 的模板一起使用。运行: ``` $ chezmoi add -T --autotemplate ~/.gitconfig ``` 来将你的 `.gitconfig` 作为模板添加到 Chezmoi 中。如果 Chezmoi 成功地正确推断了模板,你将获得以下信息: ``` [user] email = "{{ .email }}" name = "{{ .name }}" ``` 如果没有,则可以手动将文件更改为这样。 使用以下方法检查文件: ``` $ chezmoi edit ~/.gitconfig ``` 然后使用: ``` $ chezmoi cat ~/.gitconfig ``` 来查看 Chezmoi 为此文件生成什么。我生成的示例如下: ``` [root@a6e273a8d010 ~]# chezmoi cat ~/.gitconfig [user] email = "[email protected]" name = "Fedora Mcdora" [root@a6e273a8d010 ~]# ``` 它会生成一个填入了我们 Chezmoi 配置中变量值的文件。你也可以使用变量执行简单的逻辑语句。一个例子是: ``` {{- if eq .chezmoi.hostname "fsteel" }} # 如果主机名为 "fsteel" 才包括此部分 {{- end }} ``` 请注意,要使其正常工作,该文件必须是模板。你可以通过查看文件是否在 `chezmoi cd` 中的文件名后附加 `.tmpl`,或使用 `-T` 选项重新添加文件来进行检查。 #### 让机密信息保持机密 要对设置进行故障排除,请使用以下命令。 ``` $ chezmoi doctor ``` 这里重要的是它还向你显示了[所支持的密码管理器](https://www.chezmoi.io/docs/how-to/#keep-data-private)。 ``` [root@a6e273a8d010 ~]# chezmoi doctor warning: version dev ok: runtime.GOOS linux, runtime.GOARCH amd64 ok: /root/.local/share/chezmoi (source directory, perm 700) ok: /root (destination directory, perm 550) ok: /root/.config/chezmoi/chezmoi.toml (configuration file) ok: /bin/bash (shell) ok: /usr/bin/vi (editor) warning: vimdiff (merge command, not found) ok: /usr/bin/git (source VCS command, version 2.25.1) ok: /usr/bin/gpg (GnuPG, version 
2.2.18) warning: op (1Password CLI, not found) warning: bw (Bitwarden CLI, not found) warning: gopass (gopass CLI, not found) warning: keepassxc-cli (KeePassXC CLI, not found) warning: lpass (LastPass CLI, not found) warning: pass (pass CLI, not found) warning: vault (Vault CLI, not found) [root@a6e273a8d010 ~]# ``` 你可以使用这些客户端,也可以使用[通用客户端](https://www.chezmoi.io/docs/how-to/#use-a-generic-tool-to-keep-your-secrets),也可以使用系统的[密钥环](https://www.chezmoi.io/docs/how-to/#use-a-keyring-to-keep-your-secrets)。 对于 GPG,你需要使用以下命令将以下内容添加到配置中: ``` $ chezmoi edit-config ``` ``` [gpg] recipient = "<Your GPG key's recipient>" ``` 你可以使用: ``` $ chezmoi add --encrypt ``` 来添加任何文件,这些文件将在你的源存储库中加密,并且不会以纯文本格式公开。Chezmoi 会在应用时自动将其解密。 我们也可以在模板中使用它们。例如,存储在 [Pass](https://fedoramagazine.org/using-pass-to-manage-your-passwords-on-fedora/)(已在 Fedora 杂志上介绍)中的机密令牌。继续,生成你的机密信息。 在此示例中,它称为 `githubtoken`: ``` [rwaltr@fsteel:~] $ pass ls Password Store └── githubtoken [rwaltr@fsteel:~] $ ``` 接下来,编辑你的模板,例如我们之前创建的 `.gitconfig`,并添加以下行。 ``` token = {{ pass "githubtoken" }} ``` 然后让我们用以下命令检查: ``` $ chezmoi cat ~/.gitconfig ``` ``` [rwaltr@fsteel:~] $ chezmoi cat ~/.gitconfig This is Git's per-user configuration file. [user] name = Ryan Walter email = [email protected] token = mysecrettoken [rwaltr@fsteel:~] $ ``` 现在,你的机密信息已在密码管理器中妥善保护,你的配置可以公开共享而没有任何风险! ### 最后的笔记 这里仅仅涉及到表面。请访问 [Chezmoi 的网站](https://www.chezmoi.io/)了解更多信息。如果你正在寻找有关如何使用 Chezmoi 的更多示例,作者还公开了他的[点文件](https://github.com/twpayne/dotfiles)。 --- via: <https://fedoramagazine.org/take-back-your-dotfiles-with-chezmoi/> 作者:[Ryan Walter](https://fedoramagazine.org/author/rwaltr/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
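Chezmoi 模板的核心思想就是“占位符 + 变量 → 最终文件”。下面用 sed 做一个极简的示意(仅为说明原理,并非 Chezmoi 的实际实现——Chezmoi 用的是功能完整的 Go 模板引擎;示例中的邮箱和姓名均为虚构):

```shell
#!/bin/sh
# 仅为演示模板替换思路的示意函数,不是 Chezmoi 的真实代码
render_gitconfig() {
    email=$1
    name=$2
    # 把 {{ .email }} / {{ .name }} 占位符替换成传入的值
    sed -e "s/{{ \.email }}/$email/" -e "s/{{ \.name }}/$name/"
}

# 把带占位符的 .gitconfig 模板渲染成最终内容
printf '[user]\n    email = "{{ .email }}"\n    name = "{{ .name }}"\n' |
    render_gitconfig "fedora@example.com" "Fedora Mcdora"
```

运行后会输出填充好的 `[user]` 段,这正是 `chezmoi cat ~/.gitconfig` 在背后为模板文件做的事情(当然 Chezmoi 还支持条件判断、调用密码管理器等 sed 做不到的功能)。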
200
OK
In Linux, dotfiles are hidden text files that are used to store various configuration settings for many applications, from Bash and Git to more complex applications like i3 or VSCode. Most of these files are contained in the *~/.config* directory or right in the home directory. Editing these files allows you to customize applications beyond what a settings menu may provide, and they tend to be portable across devices and even other Linux distributions. But one talking point across the Linux enthusiast community is how to manage these dotfiles and how to share them. We will be showcasing a tool called [Chezmoi](https://www.chezmoi.io/) that does this task a little differently from the others. ## The history of dotfile management If you search [GitHub for dotfiles](https://github.com/search?q=dotfiles&type=Repositories), what you will see are over 100k repositories with one goal: store people’s dotfiles in a shareable and repeatable manner. However, other than using git, they store their files differently. While Git has solved code management problems that also translate to config file management, it does not solve how to separate between distributions, roles (such as home vs work computers), secrets management, and per-device configuration. Because of this, many users decide to craft their own solutions, and the community has responded with multiple answers over the years. This article will briefly cover some of the solutions that have been created. ### Experiment in an isolated environment Do you want to try the solutions below quickly in a contained environment? Run: $ podman run --rm -it fedora … to create a Fedora container to try the applications in. This container will automatically delete itself when you exit the shell. 
### The install problem If you store your dotfiles in a Git repository, you will want to make it easy for your changes to automatically be applied inside your home directory. The easiest way to do this at first glance is to use a symlink, such as *ln -s ~/.dotfiles/bashrc ~/.bashrc*. This will allow your changes to take place instantly when your repository is updated. The problem with symlinks is that managing symlinks can be a chore. Stow and [RCM (covered here on Fedora Magazine)](https://fedoramagazine.org/managing-dotfiles-rcm/) can help you manage those, but these are not seamless solutions. Files that are private will need to be modified and chmoded properly after download. If you revamp your dotfiles on one system, and download your repository to another system, you may get conflicts and require troubleshooting. Another solution to this problem is writing your own install script. This is the most flexible option, but has the tradeoff of requiring more time to build a custom solution. ### The secrets problem Git is designed to track changes. If you store a secret such as a password or an API key in your git repository, you will have a difficult time and will need to rewrite your git history to remove that secret. If your repository is public, the secret can never be made private again once someone else has downloaded your repository. This problem alone will prevent many individuals from sharing their dotfiles with the public world. ### The multi-device config problem The problem is not pulling your config to multiple devices; the problem is when you have multiple devices that require different configuration. Most individuals handle this by either having different folders or by using different forks. This makes it difficult to share configs across the different devices and role sets. ## How Chezmoi works Chezmoi is a tool to manage your dotfiles with the above problems in mind; it doesn’t blindly copy or symlink files from your repository. 
Chezmoi acts more like a template engine to generate your dotfiles based on system variables, templates, secret managers, and Chezmoi’s own config file. ### Getting Started with Chezmoi Currently Chezmoi is not in the default repositories. You can download the current version of Chezmoi as of writing with the following command. $ sudo dnf install https://github.com/twpayne/chezmoi/releases/download/v1.7.17/chezmoi-1.7.17-x86_64.rpm This will install the pre-packaged RPM to your system. Lets go ahead and create your repository using: $ chezmoi init It will create your new repository in *~/.local/share/chezmoi/*. You can easily cd to this directory by using: $ chezmoi cd Lets add our first file: chezmoi add ~/.bashrc … to add your bashrc file to your chezmoi repository. Note: if your bashrc file is actually a symlink, you will need to add the -f flag to follow it and read the contents of the real file. You can now edit this file using: $ chezmoi edit ~/.bashrc Now lets add a private file, This is a file that has the permissions 600 or similar. I have a file at .ssh/config that I would like to add by using $ chezmoi add ~/.ssh/config Chezmoi uses special prefixes to keep track of what is a hidden file and a private file to work around Git’s limitations. Run the following command to see it: $ chezmoi cd **Do note that files that are marked as private are not actually private, they are still saved as plain text in your git repo. More on that later.** You can apply any changes by using: $ chezmoi apply and inspect what is different by using $ chezmoi diff ### Using variables and templates To export all of your data Chezmoi can gather, run: $ chezmoi data Most of these are information about your username, arch, hostname, os type and os name. But you can also add our own variables. Go ahead and run: $ chezmoi edit-config … and input the following: [data] email = "[email protected]" name = "Fedora Mcdora" Save your file and run chezmoi data again. 
You will see at the bottom that your email and name are now added. You can now use these with templates with Chezmoi. Run: $ chezmoi add -T --autotemplate ~/.gitconfig … to add your gitconfig as a template into Chezmoi. If Chezmoi is successful in inferring the template correctly, you should get the following: [user] email = "{{ .email }}" name = "{{ .name }}" If it does not, you can change the file to this instead. Inspect your file with: $ chezmoi edit ~/.gitconfig Then use: $ chezmoi cat ~/.gitconfig … to see what chezmoi will generate for this file. My generated example is below: [root@a6e273a8d010 ~]# chezmoi cat ~/.gitconfig [user] email = "[email protected]" name = "Fedora Mcdora" [root@a6e273a8d010 ~]# It will generate a file filled with the variables in our chezmoi config. You can also use the variables to perform simple logic statements. One example is: {{- if eq .chezmoi.hostname "fsteel" }} # this will only be included if the host name is equal to "fsteel" {{- end }} Do note that for this to work the file has to be a template. You can check this by seeing if the file has a “.tmpl” appended to its name on the file in *chezmoi cd*, or by re-adding the file using the -T option ### Keeping secrets… secret To troubleshoot your setup, use the following command. $ chezmoi doctor What is important here is that it also shows you the [password managers it supports](https://www.chezmoi.io/docs/how-to/#keep-data-private). 
[root@a6e273a8d010 ~]# chezmoi doctor
warning: version dev
ok: runtime.GOOS linux, runtime.GOARCH amd64
ok: /root/.local/share/chezmoi (source directory, perm 700)
ok: /root (destination directory, perm 550)
ok: /root/.config/chezmoi/chezmoi.toml (configuration file)
ok: /bin/bash (shell)
ok: /usr/bin/vi (editor)
warning: vimdiff (merge command, not found)
ok: /usr/bin/git (source VCS command, version 2.25.1)
ok: /usr/bin/gpg (GnuPG, version 2.2.18)
warning: op (1Password CLI, not found)
warning: bw (Bitwarden CLI, not found)
warning: gopass (gopass CLI, not found)
warning: keepassxc-cli (KeePassXC CLI, not found)
warning: lpass (LastPass CLI, not found)
warning: pass (pass CLI, not found)
warning: vault (Vault CLI, not found)
[root@a6e273a8d010 ~]# You can use either of these clients, or a [generic client](https://www.chezmoi.io/docs/how-to/#use-a-generic-tool-to-keep-your-secrets), or your system’s [Keyring](https://www.chezmoi.io/docs/how-to/#use-a-keyring-to-keep-your-secrets). For GPG, you will need to add the following to your config using: $ chezmoi edit-config [gpg] recipient = "<Your GPG key's recipient>" You can use: $ chezmoi add --encrypt … to add any files; these will be encrypted in your source repository and not exposed to the public world as plain text. Chezmoi will automatically decrypt them when applying. We can also use them in templates. For example, a secret token stored in [Pass (covered on Fedora Magazine)](https://fedoramagazine.org/using-pass-to-manage-your-passwords-on-fedora/). Go ahead and generate your secret. In this example, it’s called “githubtoken”: [rwaltr@fsteel:~] $ pass ls Password Store └── githubtoken [rwaltr@fsteel:~] $ Next, edit your template, such as the .gitconfig we created earlier, and add this line. token = {{ pass "githubtoken" }} Then let’s inspect using: $ chezmoi cat ~/.gitconfig [rwaltr@fsteel:~] $ chezmoi cat ~/.gitconfig This is Git's per-user configuration file. 
[user] name = Ryan Walter email = [email protected] token = mysecrettoken [rwaltr@fsteel:~] $ Now your secrets are properly secured in your password manager, your config can be publicly shared without risk! ## Final notes This is only scratching the surface. Please check out [Chezmoi’s website](https://www.chezmoi.io/) for more information. The author also has his [dotfiles public](https://github.com/twpayne/dotfiles) if you are looking for more examples on how to use Chezmoi. ## Yaroslav To store dotfiles in a VCS it’s enough to have a bare git repository (see https://www.atlassian.com/git/tutorials/dotfiles for the full recipe). Your tool seems to contradict the UNIX philosophy that there should be one instrument for each task. To create dotfiles from templates seems an ununderstandable overhead here. ## Paul W. Frields ## Yaroslav What exactly in your comment should be not obvious for Linux users? Linux belongs to the UNIX family. See https://en.wikipedia.org/wiki/Linux Yes, I’m aware of the existence different philosophies. UNIX philosophy is pretty good though. Care to explain what philosophy or good programming patterns this tool follows? ## Sebastiaan Franken No, it doesn’t. Linux is a UNIX-like (with emphasis on the like part) OS family. It’ s notpart of the UNIX family.Also, “good programming patterns” seem to change every six months or so… ## Ryan Walter I would disagree, I would say it embraces the Unix philosophy. It does one thing well, it builds your dotfiles based on a template. It does not solve encryption. It uses GPG to do that, it does not solve password management. It uses password managers for that. It does not solve code revisioning. It uses Git (or any of your choosing) I was originally using git with stow. Which would be similar to your config alias. But when I was installing to other systems. I would have to chmod all my private files. I also was attempting to branches to separate my roles. 
But found that managing the commits between the two branches a bit of a chore. ## ergr I have symlink to my config file for example .vimrc mc alvways ask me breaking simlink or no. How tell mc tho stop ask me? In normal hard link too! ## aairey I am using dotdrop, which also does many of these except the secrets management and templating (AFAIK). Seems I need to look into chezmoi, thanks for sharing! ## Stephen A timely article for me as I was in the process of cleaning up my dot files again. I was using stow, but had seen some drawbacks with it over time. I set up chezmoi, and have it covering the primary dot files with intentions of adding all of them in eventually. The biggest issue I am having currently is trying to stop old habits from interfering with it, like editing a file with Vim instead of chezmoi edit. I use Silverblue, so it would be nice if this was layered from an official package repo or COPR at least. Ideally, it should be able to be a flatpak quite easily. ## Ryan Walter Have you started using templates yet? If you edited your file and it’s not a template. You can re-add it using Chezmoi add and it would achieve the same affect. This took me some time to adapt to as well. But Chezmoi edit is mainly important for your template files. Since your target will be different from your source. Packaging it for Fedora is something I’m currently looking at. But I have not packaged something before and might take me some time. ## Stephen Hello, Yes I have my .gitconfig setup as a template, and I had already been working on a .zshrc template prior to your article, so like I said timely. I have my pgp key setup for encryption as well. It seems promising, and a better thought out solution thanthe usual methods of dot file care taking. The packaging would be good since it becomes a problem at update time on Silverblue, basically I need to update it separately and manually each time while I choose to use it. 
Official, even COPR repos will get automatically checked for updates, plus the added benefit of being built with everything else at the same time with the same versions of lib’s. ## Patrick O'Callaghan I have a minor nit: Linux (and UNIX) have no such thing as “hidden” files. Dotfiles are only “hidden” in the sense that by default they don’t appear when you run ‘ls’ and some other utilities that follow the same convention. They don’t have some magical special property that hides them from anything else. Although old hands know this perfectly well, it’s better to avoid this terminology so as not to confuse newcomers. ## Klaus Ferreira Thanks for the article. I was about to get crazy with my dot files hahaha BTW, is it me or you can only use diff BEFORE applying? I mean, if I follow the article (using diff AFTER applying) I got nothing as output. ## Ryan Walter You are correct. Diff should come before apply. Otherwise the states match and there is nothing that is different! 🙂 My bad
12,111
Bitwarden:一个自由开源的密码管理器
https://itsfoss.com/bitwarden/
2020-04-14T22:58:00
[ "密码管理器", "Bitwarden" ]
https://linux.cn/article-12111-1.html
> > Bitwarden 是流行的开源密码管理器。在这里,我们来看看它提供了什么。 > > > ![](/data/attachment/album/202004/14/225804ktwtwthzzrhktgbk.jpg) [Bitwarden](https://bitwarden.com/) 是一个自由开源的密码管理器。你可能还记得,我们之前将它列为 [Linux 中的最佳密码管理器](/article-11531-1.html)之一。 就个人而言,几个月来我一直在多个设备上使用 Bitwarden 作为我的密码管理器。因此,在本文中,我将说明它提供的功能以及我的使用经验。 **注意:** 如果你对服务的安全性有疑问,请查看其官方安全性[常见问题页面](https://help.bitwarden.com/security/)。 ### Bitwarden 密码管理器的特性 ![](/data/attachment/album/202004/14/225807emwxf17f7bq155z8.jpg) [Bitwarden](https://bitwarden.com/) 是许多其他方便的密码管理器的不错替代品。 以下是它的特性: * 提供免费和付费选择 * 适用于团队(企业)和个人 * 开源 * 支持自托管 * 能够作为身份验证器应用(如 Google 身份验证器) * 跨平台支持(安卓、iOS、Linux、Windows 和 macOS) * 提供浏览器扩展(Firefox,、Chrome、Opera、Edge、Safari) * 提供命令行工具 * 提供网页保管库 * 能够导入/导出密码 * [密码生成器](https://itsfoss.com/password-generators-linux/) * 自动填充密码 * 两步身份验证 从技术上讲,Bitwarden 使用完全免费。然而,它也提供了一些付费计划(个人付费和商务付费计划)。 通过付费计划,你可以与更多用户共享密码、获取 API 访问权限(业务使用)以及更多此类高级功能。 以下是定价(在编写本文时): ![](/data/attachment/album/202004/14/225811dso7kz88bxsrf77c.jpg) 对于大多数个人来说,考虑到支持开源项目,10 美元/年的高级个人计划不应成为问题。当然,你也可以选择没有限制地免费使用。 ### 在 Linux 上安装 Bitwarden ![](/data/attachment/album/202004/14/225813tyyxromf24molxse.png) 很容易将 Bitwarden 安装到你的 Linux 系统上,因为它提供了一个 .AppImage 文件。如果你还不知道[如何使用 AppImage](https://itsfoss.com/use-appimage-linux/) 文件,你可以参考我们的指南。 如果你不喜欢使用 AppImage,你可以选择 [snap 包](https://snapcraft.io/bitwarden)或在其[官方下载页面](https://bitwarden.com/#download)上下载 .deb 或者 .rpm 文件。你还可以查看其 [GitHub 页面](https://github.com/bitwarden)了解更多信息。 * [下载 Bitwarden](https://bitwarden.com/) 如果你对使用桌面应用不感兴趣,也可以使用浏览器扩展。 ### 我使用 Bitwarden 的体验 在 Bitwarden 之前,我使用 [LastPass](https://www.lastpass.com/) 作为密码管理器。尽管这不是一个糟糕的选择,但它不是开源软件。 所以,在我发现 Bitwarden 后就决定使用它。 首先,我从 LastPass 导出我的数据,并导入到 Bitwarden 没有遇到困难。在此过程中我没有丢失任何数据。 除了桌面应用,我一直在使用 Bitwarden 的火狐插件和 Android 应用。使用六个多月后,我没有遇到任何问题。所以,如果你愿意试试看,我一定会给它好评! 
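AppImage 下载后通常只需两步:赋予可执行权限,然后直接运行。下面是一个示意(`run_appimage` 是本文为演示而假设的封装函数,文件名也只是示例,请以你实际下载到的文件为准):

```shell
#!/bin/sh
# 假设的封装函数:给 AppImage 加可执行权限并启动它
run_appimage() {
    chmod +x "$1" && "$1"
}

# 用法示例(文件名仅为假设):
# run_appimage ./Bitwarden-x86_64.AppImage
```

AppImage 无需安装、不写入系统目录,这也是想先试用 Bitwarden 的用户最省事的方式。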
### 总结 我想说,对于那些想要一个可以到处工作,并且跨设备轻松同步的密码管理器的用户而言,Bitwarden 是一个完整的解决方案。 你可以免费入门,但如果可以,请购买 **10 美元/年**的高级计划来支持这个开源项目。 如果你正在寻找更多选择,你也可以查看我们的 [Linux 中 5 个最佳密码管理器](/article-11531-1.html)。 你试过 Bitwarden 了吗?如果没有,请试试看!此外,你最喜欢的密码管理器是什么?让我在下面的评论中知道! --- via: <https://itsfoss.com/bitwarden/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
**Brief: **Bitwarden is a popular open-source password manager. Here, we take a look at what it has to offer. ![Bitwarden Screenshot](https://itsfoss.com/content/images/wordpress/2020/04/bitwarden-screenshot.jpg) [Bitwarden](https://bitwarden.com/) is a free and open-source password manager. You might remember that earlier we listed it as one of the [best password managers available for Linux](https://itsfoss.com/password-managers-linux/). Personally, I’ve been using Bitwarden as my password manager across multiple devices for several months now. So, in this article, I’ll be mentioning the features it offers along with my experience with it. **Note:** *In case you have questions about how secure the service is, check out their official security FAQ page to explore about it.* ## Features of Bitwarden password manager ![Bitwarden Dark Mode](https://itsfoss.com/content/images/wordpress/2020/04/bitwarden-dark-mode.jpg) [Bitwarden](https://bitwarden.com/) is an impressive alternative to a lot of other convenient password managers out there. Here’s a breakdown of the features: - Free & Paid options available - Available for Teams (Enterprise) and Individuals - Open source - Self-hosting support - Ability to use it as an authenticator app (like Google Authenticator) - Cross-platform support (Android, iOS, Linux, Windows, & macOS) - Browser extensions available (Firefox, Chrome, Opera, Edge, Safari) - Offers command-line tools - Offers a Web Vault - Ability to Import/Export Passwords [Password Generator](https://itsfoss.com/password-generators-linux/)- Auto-fill password - Two-step authentication Technically, Bitwarden is completely free to use – however, it also offers some paid plans (Personal pricing plans & Business plans). With the premium plans, you get the ability to share the passwords with more users, get API access (business use), and more such premium perks. 
Here’s how the pricing looks like (at the time of writing this article): ![Bitwarden Pricing](https://itsfoss.com/content/images/wordpress/2020/04/bitwarden-pricing.jpg) For most individuals, the premium personal plan of **$10/year** shouldn’t be an issue considering the fact that you will be supporting an open-source project. Of course, you can also choose to use it for free with no essential restrictions. ## Installing Bitwarden on Linux ![Bitwarden Settings](https://itsfoss.com/content/images/wordpress/2020/04/bitwarden-settings.png) It’s quite easy to get Bitwarden installed on your Linux system because it offers an .**AppImage** file. You can refer to our guide on [how to use AppImage](https://itsfoss.com/use-appimage-linux/) files, if you didn’t know it already. If you don’t prefer using AppImage – you can opt for the [snap package](https://snapcraft.io/bitwarden) or simply download the **.deb** or **.rpm** file available on their [official download page](https://bitwarden.com/#download). You can also check out their [GitHub page](https://github.com/bitwarden) for more information. You can also utilize the browser extensions if you’re not interested in using the desktop app. ## My experience with Bitwarden Before Bitwarden, I was using [LastPass](https://www.lastpass.com/) as my password manager. Even though that’s not a bad option – it’s not an open-source software. So, I decided to switch to Bitwarden as soon as I found out about it. To start with, I simply exported my data from LastPass and imported it to Bitwarden without any hiccups. I didn’t lose any data in the process. In addition to the desktop app, I’ve been using Bitwarden the Firefox add-on, and the Android app. I haven’t had any issues with it after over six months of usage. So, I’d definitely give it thumbs up from my side if you’re willing to try it out! 
## Wrapping Up I’d say that Bitwarden is a complete solution for Linux users who want a password manager that works everywhere and syncs easily across multiple devices. You can get started for free but if you can, please go for the premium plan of **$10/year** to support this open-source project. Our list of [top 5 password managers for Linux](https://itsfoss.com/password-managers-linux/) should also come in handy if you’re looking for more options. Have you tried Bitwarden yet? If not, give it a try! Also, what is your favorite password manager? Let me know in the comments below!
12,113
在 VirtualBox 上安装 Kali Linux:最快速和最安全的方法
https://itsfoss.com/install-kali-linux-virtualbox/
2020-04-15T12:46:00
[ "VirtualBox", "Kali" ]
https://linux.cn/article-12113-1.html
> > 这篇教程向你展示如何在 Windows 和 Linux 中以最快的方式在 VirtualBox 上安装 Kali Linux。 > > > [Kali Linux](https://www.kali.org/) 是 [最适合脆弱性测试和安全爱好者的 Linux 发行版](https://itsfoss.com/linux-hacking-penetration-testing/) 之一。 因为它涉及一个像黑客之类的敏感话题,就像一把双刃剑。我们过去在详细的 Kali Linux 评论中讨论过,所以我不会再次赘述。 虽然你可以通过替换现有的操作系统的形式安装 Kali Linux,但是通过虚拟机来使用它可能会是更好、更安全的选择。 使用 VirtualBox,你可以在 Windows/Linux 系统中将 Kail Linux 作为常规应用程序使用。这和在系统中运行 VLC 或游戏几乎是一样的。 在虚拟机中使用 Kali Linux 是安全的。不管你在 Kali Linux 做什么都不会影响你的 ‘宿主系统’(即你原来的 Windows 或 Linux 操作系统)。你的实际操作系统将不会受到影响,并且在你的宿主系统中数据也是安全的。 ![](/data/attachment/album/202004/15/124658vj8jzuupjhsrwf6k.png) ### 如何在 VirtualBox 中安装 Kali Linux 在这里我会使用 [VirtualBox](https://www.virtualbox.org/)。它是一个非常好的开源虚拟化解决方案,几乎适合于任何人,无论是专业使用或个人使用。它是免费提供的。 在这篇文章中,我们将特别讨论 Kali Linux,但你也可以安装几乎任何其他的操作系统,只要有 ISO 文件或预建的虚拟机保存文件就可以安装。 **注意:**同样的步骤适用于运行 VirtualBox 的 Windows 或 Linux。 如上所述 ,你可以安装 Windows 或 Linux 作为你的宿主系统。但是,在我已安装 Windows 10 的情况下(别仇恨我!),我会尝试着在其上的 VirtualBox 中一步步地安装 Kali Linux 。 并且,最棒的是,即使你碰巧使用一个 Linux 发行版作为你的主要操作系统,也将使用同样的步骤! 想知道如何做?让我们来看看… ### 在 VirtualBox 上安装 Kali Linux 的分步指南 我们将使用一个专门为 VirtualBox 定制的 Kali Linux 镜像。你也可以下载 Kali Linux 的 ISO 文件,并创建一个新的虚拟机,但是当你有一个简单的选择时,为什么还这样做呢? 
#### 1、下载并安装 VirtualBox 第一件要做的事是从甲骨文的官方网站下载和安装 VirtualBox。 * [下载 VirtualBox](https://www.virtualbox.org/wiki/Downloads) 在你下载了安装器之后,只需要双击它来安装 VirtualBox。在 [Ubuntu](https://itsfoss.com/install-virtualbox-ubuntu/)/Fedora Linux 安装 VirtualBox 也是一样的方式。 #### 2、下载即用型的 Kali Linux 虚拟镜像 在安装成功后,前往 [Offensive Security 的下载页面](https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/) 来下载适用于 VirtualBox 的虚拟机镜像。如果你改变主意使用 [VMware](https://itsfoss.com/install-vmware-player-ubuntu-1310/),那里也有适用的。 ![](/data/attachment/album/202004/15/124659udma55m5l73d55ml.jpg) 如你所见,文件大小大约 3 GB,你应该使用 torrent 方式,或者使用一个[下载管理器](https://itsfoss.com/4-best-download-managers-for-linux/)来下载它。 * [下载 Kali Linux 虚拟镜像](https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/) #### 3、在 Virtual Box 上安装 Kali Linux 当你安装 VirtualBox 并下载 Kali Linux 镜像后,你只需要将其导入到 VirtualBox 中就可以使其正常工作。 这里是如何导入 Kali Linux 的 VirtualBox 镜像: ##### 步骤 1 启动 VirtualBox。你会看到一个<ruby> 导入 <rt> Import </rt></ruby> 按钮,点击它。 ![点击导入按钮](/data/attachment/album/202004/15/124703xwog4u1excvgfm4k.jpg) ##### 步骤 2 接下来,浏览刚刚下载的文件,选择要导入的文件(如下图所示)。文件名应该以“kali linux”开始,以 .ova 扩展名结束。 ![导入 Kali Linux 镜像](/data/attachment/album/202004/15/124704e5xa0zr1r03x1te3.jpg) 选择后,单击<ruby> 下一步 <rt> Next </rt></ruby>继续进行。 ##### 步骤 3 现在,你会看到要导入的虚拟机的设置。所以,你可以自定义它们或者不自定义,这是你的选择。采用默认设置也是可以的。 你需要选择一个有足够可用存储空间的路径。在 Windows 上,我绝不建议使用 C: 盘。 ![将硬盘驱动器导入为 VDI](/data/attachment/album/202004/15/124705sml16cwnzll44ecz.jpg) 在这里,“将硬盘驱动器导入为 VDI”指的是通过分配存储器空间集来虚拟挂载硬盘驱动器。 在你完成设置后,单击<ruby> 导入 <rt> Import </rt></ruby>,等待一段时间。 ##### 步骤 4 你现在将看到它被列在虚拟机列表中。所以,只需点击<ruby> 开始 <rt> Start </rt></ruby>来启动它。 你可能会在开始时得到一个 USB 2.0 端口控制器的错误,你可以禁用它来解决问题,或者只需按照屏幕上的指示来安装一个附加软件包修复问题。然后就大功告成了! 
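导入之前,最好先校验一下下载文件的完整性(Offensive Security 的下载页面会给出每个镜像的 SHA256 值)。下面是一个示意性的小脚本:为了能直接运行,这里用一个临时生成的演示文件代替真实镜像;实际使用时请把文件名换成你下载的镜像,并把校验值换成官网公布的值:

```shell
# 演示用:生成一个替身文件(实际应换成你下载的 kali-linux-*.ova)
printf 'demo image' > kali-demo.ova

# 官网会给出形如 "<SHA256 值>  <文件名>" 的一行,保存为 .sha256 文件后即可校验
sha256sum kali-demo.ova > kali-demo.ova.sha256
sha256sum -c kali-demo.ova.sha256    # 输出 "kali-demo.ova: OK" 表示文件完整
```

若输出 FAILED,说明下载不完整或文件已被篡改,请重新下载。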
![Kali Linux 运行在 VirtualBox 中](/data/attachment/album/202004/15/124706u4d7j56deh0sw69s.jpg) 以前 Kali Linux 中的默认用户名是 root,默认密码是 toor。但从 2020 年 1 月起,Kali Linux 就不使用 root 账号了。现在,默认账号和密码都是 kali。 你应该可以用它来登录系统了。 请注意,在尝试安装一个新的应用程序或尝试破解 WiFi 密码之前,请先[更新 Kali Linux](https://linuxhandbook.com/update-kali-linux/) 。 我希望这篇指南能帮助您在 VirtualBox 上很容易地安装 Kali Linux。当然,Kali Linux 有很多有用的渗透测试工具 – 祝你好运! **提示** : Kali Linux 和 Ubuntu 都是基于 Debian 的,如果你在使用 Kali Linux 时遇到任何问题或错误,你可以按照互联网上的 Ubuntu 和 Debian 的教程解决。 ### 奖励: 免费的 Kali Linux 指南书 如果你刚刚开始使用 Kali Linux, 那么了解如何使用 Kali Linux 就很有必要了。 Kali Linux 背后的公司 Offensive Security 制作了一本指南书,讲解了 Linux 的基础知识、Kali Linux 的基础知识、配置和设置,书中还有一些关于渗透测试和安全工具的章节。 基本上,它包含你上手 Kali Linux 所需要的一切东西。更重要的是,这本书可以免费下载。 * [免费下载《揭秘 Kali Linux》](https://kali.training/downloads/Kali-Linux-Revealed-1st-edition.pdf) 如果你在 VirtualBox 上使用 Kali Linux 时遇到问题,请在下面的评论中告诉我们,或者直接分享你的经验。 --- via: <https://itsfoss.com/install-kali-linux-virtualbox/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[robsean](https://github.com/robsean) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Kali Linux is one of the [best Linux distributions for hacking](https://itsfoss.com/linux-hacking-penetration-testing/) and security enthusiasts. Since it deals with a sensitive topic like hacking, it’s like a double-edged sword. We have discussed it in the past with a [detailed Kali Linux review](https://itsfoss.com/kali-linux-review/), so I am not going to bore you with the same stuff again. While you can install Kali Linux by replacing the existing operating system, using it via a virtual machine would be a better and safer option. With VirtualBox, you can use Kali Linux as a regular application in your Windows/Linux system. It’s almost the same as running VLC or a game in your system. Using Kali Linux in a virtual machine is also safe. Whatever you do inside Kali Linux will NOT impact your ‘host system’ (i.e. your original Windows or Linux operating system). Your actual operating system will be untouched and your data in the host system will be safe. ## How to Install Kali Linux on VirtualBox I’ll be using [VirtualBox](https://www.virtualbox.org/?ref=itsfoss.com) here. You may also [install Kali Linux on VMWare](https://itsfoss.com/install-kali-linux-vmware/). VirtualBox is a wonderful [open source virtualization solution](https://itsfoss.com/virtualization-software-linux/) for anyone (professional or personal use). It is available free of cost. In this tutorial, we will talk about Kali Linux in particular but you can install almost any other OS using the ISO file or a pre-built virtual machine save file. The same steps apply for Windows/Linux running VirtualBox. As I already mentioned, you can have either Windows or Linux installed as your host. But, in this case, I have Windows installed (don’t hate me!), where I try to install Kali Linux in VirtualBox step by step.
And, the best part is that even if you use a Linux distro as your primary OS, the same steps will be applicable! Wondering how? Let’s see… ## Step-by-Step Guide to Install Kali Linux on VirtualBox ### 1. Download and install VirtualBox The first thing you need to do is to download and install VirtualBox from Oracle’s official website. Once you download the installer, double click on it to install VirtualBox. There are several ways you can [install VirtualBox in Ubuntu](https://itsfoss.com/install-virtualbox-ubuntu/), Fedora, etc. ### 2. Download ready-to-use virtual image of Kali Linux After installing it successfully, head to [Kali Linux download page](https://www.kali.org/get-kali/?ref=itsfoss.com#kali-platforms) to download the VM image for VirtualBox. If you change your mind about utilizing [VMware](https://itsfoss.com/install-vmware-player-ubuntu-1310/), that is available too. ![Download the virtual machine specific image of Kali Linux from official website](https://itsfoss.com/content/images/2023/06/Kali-virtual-machine-images-in-browser.png) As the file size is around 3 GB, you should either use the torrent option or download it using a [download manager](https://itsfoss.com/4-best-download-managers-for-linux/), whichever is fastest for you. ### 3. Install Kali Linux on Virtual Box Once you have installed VirtualBox and downloaded the Kali Linux 7z image, you just need to add it to VirtualBox in order to make it work. Here’s how to add the VirtualBox image for Kali Linux: **Step 1:** Extract the downloaded 7z file. You can [use 7zip for extracting the file](https://itsfoss.com/use-7zip-ubuntu-linux/). Avoid extracting it to the **C:** drive on Windows. **Step 2:** Launch VirtualBox. You will notice an **Add** button – click on it. ![Click on the Add button in VirtualBox Welcome screen](https://itsfoss.com/content/images/2023/06/select-add-in-virtual-box.png) **Step 3:** Next, browse the folder you just downloaded and extracted.
Choose the VirtualBox Machine Definition file to be added (as you can see in the image below). The file name should start with ‘kalilinux‘ and end with the **.vbox** extension. ![Open the .vbox file in VirtualBox](https://itsfoss.com/content/images/2023/06/open-the-vbox-file.png) Once selected, proceed by clicking on **Open**. **Step 4:** Now, you will be shown the settings for the virtual machine you are about to add. So, you can customize them or not – that is your choice. It is okay if you go with the default settings. ![Start the Kali Linux Virtual Machine with default settings](https://itsfoss.com/content/images/2023/06/start-kali-virtual-machine-1.png) Here, the hard drives as VDI refer to virtually mounting the hard drives by allocating the storage space set. After you are done with the settings, hit **Start** and wait for a while. You might get an error at first for USB port 2.0 controller support, you can disable it to resolve it or just follow the on-screen instruction of installing an additional package to fix it. And, you are done! ![Enter the user name and password for Kali Linux on the login screen. The default username and password is "kali"](https://itsfoss.com/content/images/2023/06/kali-login-screen.png) The default username in Kali Linux used to be root and the default password was toor. But since January 2020, [Kali Linux is not using the root account](https://itsfoss.com/kali-linux-root-user/). Now, the default account and password both are **kali**. You should be able to login to the system with it. ![Kali Linux is running inside VirtualBox](https://itsfoss.com/content/images/2023/06/kali-linux-desktop-in-virtualbox.webp) Do note that you should [update Kali Linux](https://linuxhandbook.com/update-kali-linux/?ref=itsfoss.com) before trying to install new applications or trying to hack your neighbor’s WiFi. The VirtualBox Guest Addition is pre-installed in the Live image since Kali Linux 2021.3.
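As an aside, VirtualBox also ships a command-line tool, `VBoxManage`, which can import an appliance without opening the GUI. The sketch below is a guarded alternative to the clicks above, not part of the original guide; the `.ova` path is a placeholder (recent Kali releases ship a 7z archive with a `.vbox` definition instead, which can be registered with `VBoxManage registervm`):

```shell
OVA="$HOME/Downloads/kali-linux-virtualbox-amd64.ova"   # placeholder path

if command -v VBoxManage >/dev/null 2>&1 && [ -f "$OVA" ]; then
  VBoxManage import "$OVA"    # import the appliance with its default settings
  VBoxManage list vms         # confirm the new VM is registered
else
  echo "VBoxManage or the appliance file is missing; use the GUI steps above"
fi
```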
Of course, [Kali Linux has a lot of useful tools in it for penetration testing](https://itsfoss.com/best-kali-linux-tools/) – so you can explore those after installation. Both Kali Linux and Ubuntu are Debian-based. If you face any issues or errors with Kali Linux, you may follow the tutorials intended for Ubuntu or Debian on the internet. **Suggested Read 📖** [How to Install Windows 10 in VirtualBox in Linux](https://itsfoss.com/install-windows-10-virtualbox-linux/) – Step by step screenshot guide to installing Windows 10 on Linux using VirtualBox. ## How to install Kali Linux on VirtualBox using VDI While you can always follow the instructions recommended above, there's also another way of installing Kali Linux. You will notice a VDI file when extracting the 7z file of Kali Linux. You can use this VDI file to create a Kali Linux Virtual Machine. Open VirtualBox and select the **New** option. ![Click on the New button to create a new virtual machine](https://itsfoss.com/content/images/2023/06/click-new-VM-in-vbox.png) Now, go to expert mode on VirtualBox. ![Go to Expert mode in VirtualBox](https://itsfoss.com/content/images/2023/06/vbox-expert-mode.png) This is nothing but a comprehensive view of all the tweaks that we can do.
From there, set all the things like below:

- **Name of VM:** Kali Linux
- **Type:** Linux
- **Version:** Debian 64-bit
- **Base Memory (RAM), under Hardware:** 4GB (Recommended)
- **Processors:** More than one, as per availability

Now, for the **Hard Disk** part, select *Use an existing Virtual Hard Disk File* and browse for the extracted .vdi file of Kali Linux. ![Select "Use an Existing Virtual Hard Disk File" option and browse for the .vdi file of Kali Linux, that you have extracted earlier.](https://itsfoss.com/content/images/2023/06/use-existing-vdi-option-in-vbox-expert-mode.png) On the new dialog box, click on **Add** and search for the VDI file in the resulting file browser. Once you find the file, select it and then press **Choose**. ![Select the Kali Linux VDI file and click on Choose button](https://itsfoss.com/content/images/2023/06/choose-the-kali-linux-vdi-file.png) You can now press the **Finish** button. ![Click on the Finish button on the bottom right of VirtualBox expert mode settings to create a VM](https://itsfoss.com/content/images/2023/06/click-finish-in-vbox-expert-mode.png) The VM created will have several settings like Display Memory, Network etc. set to default. You should give the Display memory as 128 MB and choose to enable 3D acceleration. ![](https://itsfoss.com/content/images/2023/06/video-memory-1.png) You can now start the VM, and use the username and password "kali" once asked to log in. You can always install Kali Linux using the ISO file, which has the same process as any other Linux distribution. [How to Install Linux Inside Windows Using VirtualBox](https://itsfoss.com/install-linux-in-virtualbox/) – Using Linux in a virtual machine allows you to try Linux within Windows. This step-by-step guide shows you how to install Linux inside Windows using VirtualBox. ## Bonus: Free Kali Linux Guide If you are just starting with Kali Linux, it will be a good idea to know how to use Kali Linux. Offensive Security, the company behind Kali Linux, has created courses that explain the basics of Kali Linux, configuration, and more. It also has a few chapters on penetration testing and security tools. Basically, it has everything you need to get started with Kali Linux. And the best thing is that the course is available for free. You can go to the portal to explore courses and certification exams, and learn them there. *Let us know in the comments below if you face an issue or simply share your experience with Kali Linux on VirtualBox. If you are curious, you can also try Kali Linux on Windows using WSL.*
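For reference, the expert-mode settings above can also be scripted with `VBoxManage`. This guarded sketch mirrors the same values (name, type, 4 GB RAM, more than one CPU, 128 MB video memory, existing VDI); the `.vdi` path is a placeholder, and the commands only run if VirtualBox is actually installed:

```shell
VDI="$HOME/Downloads/kali-linux.vdi"   # placeholder: your extracted .vdi file

if command -v VBoxManage >/dev/null 2>&1 && [ -f "$VDI" ]; then
  VBoxManage createvm --name "Kali Linux" --ostype Debian_64 --register
  VBoxManage modifyvm "Kali Linux" --memory 4096 --cpus 2 --vram 128 --accelerate3d on
  VBoxManage storagectl "Kali Linux" --name SATA --add sata
  VBoxManage storageattach "Kali Linux" --storagectl SATA \
    --port 0 --device 0 --type hdd --medium "$VDI"
  VBoxManage startvm "Kali Linux"   # boot the new VM
else
  echo "VBoxManage or the VDI file is missing; use the GUI steps above"
fi
```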
12,114
新的 Linux 发行版 UbuntuDDE 将漂亮的深度桌面带到 Ubuntu
https://itsfoss.com/ubuntudde/
2020-04-15T19:32:28
[ "深度" ]
https://linux.cn/article-12114-1.html
深度是一个漂亮的桌面环境,拥有直观的 UI。UbuntuDDE 项目结合了 Ubuntu 的强大和深度之美。 [深度桌面环境](https://www.deepin.org/en/dde/)(DDE)是由[深度 Linux](https://www.deepin.org/en/) 的开发人员创建的漂亮桌面环境。最初,深度 Linux 基于 [Ubuntu](https://ubuntu.com/),但后来他们切换到了 [Debian](https://www.debian.org/)。 深度 Linux 的一个主要问题是它的下载服务器速度较慢。常规的系统更新需要花几个小时才能下载,因为他们的所有服务器都在中国,而这些服务器很不幸的是速度极慢。 如果你想使用深度桌面,那没有什么可以阻止你将其安装在常规 Ubuntu 系统上。[UbuntuDDE](https://ubuntudde.com/) 试图通过在 Ubuntu 之上为你提供开箱即用的深度桌面体验来使其更简单。这样可以节省你在 Ubuntu 上安装和配置深度桌面的时间和精力。 ![Screenshot of UbuntuDDE](/data/attachment/album/202004/15/193232byo44gv5i4s4vag8.jpg) ### Ubuntu DDE:Ubuntu 的强大和深度桌面的漂亮 请注意,UbuntuDDE 不是 Ubuntu 的官方变种。UbuntuDDE 的开发人员与 Ubuntu 团队无关。UbuntuDDE 目前是一个混合发行版,其目标是在未来的发行版中被接纳为 Ubuntu 的官方变种。 UbuntuDDE 开发人员得到了 Ubuntu Snapcraft 团队的 Alan Pope、Ubuntu Budgie 团队和 [Ubuntu Cinnamon](https://itsfoss.com/ubuntu-cinnamon/) 团队,以及其他开发者,如 Hualet Wang 和 Felix Yan 的帮助。 在与 It's FOSS 的对话中,其主要开发人员 Arun 强调说,该项目的重点是定期维护 Ubuntu 的 DDE 软件包,并帮助用户享受 DDE(深度桌面环境)的全部乐趣。 ![Ubuntu Deepin Edition login screen](/data/attachment/album/202004/15/193233d9hq0gigilg0e828.jpg) Arun 还提到,这个 Ubuntu 和深度的混合项目首先是维护和打包来自上游(即深度仓库)的最新版本。然后,它最终与 Ubuntu 20.04 focal 结合,生成了一个镜像文件,每个人都可以安装,而不必麻烦地先安装常规的 Ubuntu,然后再安装深度桌面。UbuntuDDE 不仅是 DDE 和 Ubuntu 的组合,而且还是 UbuntuDDE 团队的软件包选择和设计变更的融合。 ![UbuntuDDE screenshot](/data/attachment/album/202004/15/193234ov4927z3la89ogo2.jpg) 与 Deepin Linux 不同,UbuntuDDE 不使用深度应用商店。它改用 Ubuntu 软件中心。如果你被[这个来自武汉的深度 Linux 的间谍软件谣言](https://www.deepin.org/en/2018/04/14/linux-deepin-is-not-spyware/)吓到了,这应该是一个好消息。 ### 下载 UbuntuDDE 20.04 Beta UbuntuDDE 的目标是与 Ubuntu 20.04 一起发布第一个正式的稳定版本。像[其他 Ubuntu 变种](https://itsfoss.com/which-ubuntu-install/)一样,UbuntuDDE 20.04 beta 也可供你下载并尝试。 > > 警告!
> > > 一句话警告。 UbuntuDDE 是正在开发的新手项目。请不要在你的主用系统上使用它。如果要尝试,请在虚拟机或备用系统中使用它。 > > > * [下载 Ubuntu 20.04 DDE Beta](https://ubuntudde.com/download/) ![Installing UbuntuDDE](/data/attachment/album/202004/15/193235dxxa0f7sca6j6atw.jpg) 由于本质上是 Ubuntu,因此安装 UbuntuDDE 与安装 Ubuntu 相同。你可以参考这篇教程,其中展示了[如何在 VirtualBox 内安装 Ubuntu](https://itsfoss.com/install-linux-in-virtualbox/)。 我知道你可能会认为这“不过是另一个 Ubuntu” 或者“只是 Ubuntu 上的深度,任何人都可以做到的”。但是我也知道有一小部分用户喜欢像 UbuntuDDE 这样的项目,这对他们来说使事情变得容易。我的意思是有许多 Ubuntu 变种就是这样出现的。你怎么看? --- via: <https://itsfoss.com/ubuntudde/> 作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Deepin is a beautiful desktop environment with an intuitive UI. UbuntuDDE project combines the power of Ubuntu and the beauty of Deepin. [Deepin Desktop Environment](https://www.deepin.org/en/dde/) (DDE) is a beautiful desktop environment created by the developers of [Deepin Linux](https://www.deepin.org/en/). Initially, Deepin Linux was based on [Ubuntu](https://ubuntu.com/) but later they switched to [Debian](https://www.debian.org/). ![Ubuntu Dde 20 10 App Drawer](https://itsfoss.com/content/images/wordpress/2020/12/ubuntu-dde-20-10-app-drawer.png) *Deepin Desktop Environment in UbuntuDDE 20.10* One major problem with Deepin Linux is its slow servers. A regular system update takes hours to download for the fact that they have all their servers in China and these servers are unfortunately extremely slow. You will even find the app store on the latest [Deepin 20](https://itsfoss.com/deepin-20-review/) slow to load. If you want to use Deepin desktop, nothing stops you from installing it on your regular Ubuntu system. [UbuntuDDE](https://ubuntudde.com/) is trying to make it simpler by providing you an out of the box Deepin desktop experience on top of Ubuntu. This saves you time and effort in installing and configuring Deepin on Ubuntu. ## Ubuntu DDE: Power of Ubuntu and beauty of Deepin desktop Please note that UbuntuDDE is not an official flavor of Ubuntu. UbuntuDDE developers are not associated with the Ubuntu team. UbuntuDDE is currently a Remix distribution and is aiming for getting recognized as Ubuntu’s official flavor in future releases. You can see it in action in the YouTube video above but do note that the overview includes our take on UbuntuDDE 20.04.
![Ubuntu Dde 20 10](https://itsfoss.com/content/images/wordpress/2020/12/ubuntu-dde-20-10.jpg) ### Development Overview UbuntuDDE developers are helped by Alan Pope of Ubuntu’s Snapcraft team and teams of Ubuntu Budgie and [Ubuntu Cinnamon](https://itsfoss.com/ubuntu-cinnamon/) and a few other developers like Hualet Wang and Felix Yan. In a conversation with It’s FOSS, its lead developer Arun highlighted that the important aspect of this project is to regularly maintain the DDE packages for Ubuntu and help users enjoy the full taste of DDE (Deepin Desktop Environment). ![Ubuntu Deepin Edition Screenshot 5](https://itsfoss.com/content/images/wordpress/2020/04/ubuntu-deepin-edition-screenshot-5.jpg) Arun also mentioned that this Ubuntu Deepin remix project started first by maintaining and packaging the packages to the latest release from the upstream i.e. Deepin Repository. Then, it eventually got spun with Ubuntu 20.04 Focal, resulting in an image file that everyone can install without the hassle to install regular Ubuntu first and then the Deepin Desktop. UbuntuDDE is not just the combo of DDE and Ubuntu but also the fusion of selective packages and design changes by the UbuntuDDE Team. ### The Latest Deepin Desktop Experience As I mentioned above, the UbuntuDDE team maintains selective packages that also include the Deepin desktop. You will be able to [install Deepin Desktop on your Ubuntu system](https://itsfoss.com/install-deepin-ubuntu/) just because they’ve made it easier for you. They just want users to have the option of easily experiencing the Deepin desktop environment on top of Ubuntu. So, you can try UbuntuDDE or simply install the Deepin desktop tailored for Ubuntu. If you notice a new Deepin release, UbuntuDDE team should have it ready for the next upgrade. As of writing this, Ubuntu 20.10 features the latest Deepin desktop while the 20.04 LTS version features the older desktop.
You can try UbuntuDDE 20.04 or opt for the latest 20.10 edition to experience the best Deepin desktop experience as shown in some of the screenshots. ![Ubuntu Deepin Edition Screenshot 2](https://itsfoss.com/content/images/wordpress/2020/04/ubuntu-deepin-edition-screenshot-2.jpg) Unlike Deepin Linux, UbuntuDDE doesn’t use Deepin Appstore. It uses Ubuntu Software Center instead. This should be good news if you are spooked by the [spyware labeling of Wuhan-based Deepin Linux](https://www.deepin.org/en/2018/04/14/linux-deepin-is-not-spyware/). Not just limited to the software center, the overall experience feels good enough with the intuitive desktop experience and a proper dark mode as well. ![Ubuntu Dde 20 10 Dark](https://itsfoss.com/content/images/wordpress/2020/12/ubuntu-dde-20-10-dark.jpg) ### Performance Overview Deepin desktop isn’t really known for the best-performing experience. However, you do get the option to select “**Normal Mode**” for the best experience, and “**Effect Mode**” for faster performance with stripped down animations/effects right after installing UbuntuDDE. By default, without anything installed, it takes 1 GB of RAM. So, it is safe to say that you will need a minimum of 4 GB RAM to comfortably use programs within UbuntuDDE. ![Ubuntudde Performance](https://itsfoss.com/content/images/wordpress/2020/12/ubuntudde-performance.jpg) ## Download & Install UbuntuDDE 20.04 / 20.10 ![Ubuntu Dde 20 10 Home](https://itsfoss.com/content/images/wordpress/2020/12/ubuntu-dde-20-10-home.jpg) UbuntuDDE’s first official stable release kicked off with Ubuntu 20.04 LTS. Like [other Ubuntu flavors](https://itsfoss.com/which-ubuntu-install/), UbuntuDDE 20.04 is also available for you to download and try. You can also find the latest Ubuntu 20.10 with the latest Deepin desktop experience. Warning! A word of warning. UbuntuDDE is a novice project under development. Please don’t use it on your main system.
If you want to try it, use it in virtual machine or on a spare system. ![UbuntuDDE is Deepin Desktop Edition on Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/ubuntu-deepin-edition-screenshot-4.jpg) Since it is essentially Ubuntu, installing UbuntuDDE is the same as installing Ubuntu. You may refer to this tutorial showing [how to install Ubuntu inside VirtualBox](https://itsfoss.com/install-linux-in-virtualbox/). I know you may think ‘not another Ubuntu’ or ‘it’s just Deepin on Ubuntu that anyone can do’ and you do have a point. But I also know there is a small segment of users who like projects like UbuntuDDE that makes thing easier for them. I mean that’s how many Ubuntu flavor came into existence. So, what do you think? Let me know your thoughts in the comments below.
12,116
使用树莓派和 Rocket.Chat 构建一个私人聊天服务器
https://opensource.com/article/20/3/raspberry-pi-rocketchat
2020-04-16T20:31:21
[ "聊天" ]
https://linux.cn/article-12116-1.html
> > 使用这些简单、经济高效的开源工具构建自己真正的私人聊天和消息解决方案。 > > > ![](/data/attachment/album/202004/16/203055g5r5w7ei8eenw1ex.jpg) 互联网提供了大量免费的消息服务。像 WhatsApp 和 Viber 这样的应用已经是我们日常生活的一部分,也是我们与亲朋好友沟通的最常见方式。但是,安全意识的提高,让人们对真正的私密聊天解决方案的需求越来越大。此外,消息类应用在我们的设备中占用了大量空间,因此一个备用聊天渠道可能对于我们与朋友分享媒体、信息和联系人很有作用。 今天,我们将了解如何使用[树莓派](https://opensource.com/resources/raspberry-pi)和 Rocket.Chat 安装一个私人聊天和消息服务器。 ### 什么是 Rocket.Chat? [Rocket.Chat](https://rocket.chat/) 是一个开源解决方案,它提供了一个增强的聊天服务。它包括媒体共享、屏幕共享和视频/音频呼叫支持等协作工具。 它可以通过浏览器或从所有主要应用商店(Google Play、App Store 等)下载使用。 除了社区版本外,Rocket.Chat 还提供企业版和专业版,包括支持和其他附加功能。 ### 我们需要什么 对于这个项目,我将使用更便宜的树莓派 3A+。树莓派 3B 和 3B+ 以及树莓派 4B 应该也可以用同样的方法。 我也建议使用一块高性能 SD 卡,因为 Rocket.Chat 会给树莓派带来很大的负载。如其他文章中所述,高性能 SD 卡可显著提高 Raspbian 操作系统的性能。 我们将使用 Raspbian 的精简版本,拥有预配置的 WiFi 访问和 SSH 服务,因此不需要键盘或 HDMI 线缆。 ### 分步过程 从[安装最新版本的 Raspbian Buster Lite](https://peppe8o.com/2019/07/install-raspbian-buster-lite-in-your-raspberry-pi/) 开始。 我们将使用 [Snap](https://snapcraft.io/) 简化 Rocket.Chat 安装。通过 SSH 登录并从终端输入: ``` sudo apt-get update sudo apt-get upgrade ``` 安装 Snap: ``` sudo apt-get install snapd ``` 安装 Snap 后,我们需要重启系统使其正常工作: ``` sudo reboot ``` 再次通过 SSH 登录,并用以下简单的命令安装 Rocket.Chat: ``` sudo snap install rocketchat-server ``` 从终端安装后,请等待一段时间,让 Rocket.Chat 初始化数据库和服务。休息一下,几分钟后,你应该能够在浏览器中访问 `http://<<YOUR_RPI_IP_ADDRESS>>:3000`,你应该看到以下内容: ![Rocket Chat setup wizard](/data/attachment/album/202004/16/203125r4osal0rxaaquraa.jpg "Rocket Chat setup wizard") 填写所需的表单就可以了。经过四个简单的设置窗口后,你应该会进入 Rocket.Chat 主页: ![Rocket Chat home page](/data/attachment/album/202004/16/203126ya3awkttbksw5skx.jpg "Rocket Chat home page") 享受吧! 
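Rocket.Chat 初始化数据库和服务需要几分钟。下面是一个示意性的检查脚本(并非原文内容),用来判断 3000 端口上的服务是否已经可以访问;其中 `RPI_IP` 是一个占位值,请替换为你树莓派的实际 IP:

```shell
RPI_IP="localhost"    # 占位值:请替换为树莓派的实际 IP 地址

if curl -sf "http://$RPI_IP:3000" >/dev/null 2>&1; then
  echo "Rocket.Chat 已就绪:http://$RPI_IP:3000"
else
  echo "Rocket.Chat 尚未就绪,请稍等几分钟后重试"
fi
```

如果一直不就绪,可以用 `sudo snap logs rocketchat-server` 查看服务日志排查问题。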
本文最初发表在 [peppe8o.com](https://peppe8o.com/private-chat-and-messaging-server-with-raspberry-pi-and-rocket-chat/),并获许重新发布。 --- via: <https://opensource.com/article/20/3/raspberry-pi-rocketchat> 作者:[Giuseppe Cassibba](https://opensource.com/users/peppe8o) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
The internet offers plenty of free messaging services. Applications like WhatsApp and Viber are part of our daily life and are the most common way we communicate with relatives and friends. But security awareness is increasing the demand for a truly private chat solution. Furthermore, messaging apps take up a lot of space in our devices, so an alternative chat channel could be useful to share media, info, and contacts with our friends. Today we are going to see how to install a private chat and messaging server with a [Raspberry Pi](https://opensource.com/resources/raspberry-pi) and Rocket.Chat. ## What is Rocket.Chat? [Rocket.Chat](https://rocket.chat/) is an open source solution that provides an enhanced chat service. It includes collaboration tools like media sharing, screen sharing, and video/audio calling support. It can be used both via browser or from apps available in all the main app stores (Google Play, App Store, etc.). In addition to the community version, Rocket.Chat also offers Enterprise and Professional versions, including support and additional features. ## What we need For this project, I’m going to use a cheaper Raspberry Pi 3 model A+. RPI 3 models B and B+, and RPI 4 model B should also work in the same way. I also suggest a performing SD card, because Rocket.Chat can put a heavy workload on our Raspberry Pi. As discussed in other articles, a performing SD card strongly improves Raspbian OS performance. We’ll use a lite version of Raspbian with pre-configured WiFi access and SSH service, so there will be no need for keyboards or HDMI cables. ## Step-by-step procedure Start by [installing the last version of Raspbian Buster Lite](https://peppe8o.com/2019/07/install-raspbian-buster-lite-in-your-raspberry-pi/). We’ll simplify Rocket.Chat installation by using [Snap](https://snapcraft.io/).
Login via SSH and type from the terminal: ``` sudo apt-get update sudo apt-get upgrade ``` Install Snap: `sudo apt-get install snapd` For Snap installation, we need a system reboot to make it work: `sudo reboot` Login again via SSH and install the Rocket.Chat server with the simple command: `sudo snap install rocketchat-server` After installing from the terminal, please wait a while for Rocket.Chat to initialize its database and services. Have a cup of tea, and after a few minutes you should be able to reach with your browser the address http://<<YOUR_RPI_IP_ADDRESS>>:3000 and you should see the following: ![Rocket Chat setup wizard](https://opensource.com/sites/default/files/uploads/rocket-chat-setup-wizard.jpg) Complete the required forms, and everything should go fine. After four simple setup windows, you should reach the Rocket.Chat home page: ![Rocket Chat home page](https://opensource.com/sites/default/files/uploads/rocket-chat-home.jpg) Enjoy! *This article originally posted on peppe8o.com, reposted with permission.*
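Since Rocket.Chat was installed as a snap, routine management goes through `snap` itself. The following guarded sketch (not from the original article) checks the service state and falls back to a message on systems without snapd; reading logs may require sudo on your Pi:

```shell
if command -v snap >/dev/null 2>&1; then
  status="$(snap services rocketchat-server 2>&1 || true)"   # service state
  echo "$status"
  snap logs rocketchat-server -n 20 2>/dev/null || true      # last 20 log lines
else
  status="snap not found: install snapd first"
  echo "$status"
fi
```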
12,118
如何在 LibreOffice 中创建模板以实现省时高效
https://itsfoss.com/create-templates-libreoffice/
2020-04-16T21:43:19
[ "LibreOffice", "模板" ]
https://linux.cn/article-12118-1.html
![](/data/attachment/album/202004/16/214314y17oww0ymfylyn1m.jpg) 在 [LibreOffice](https://www.libreoffice.org/) 中为你经常使用的文档创建模板可以为你节省一些时间。它可以是信件、财务表格抑或是简报。 模板不仅可以为你节省时间,另一方面它可以保证在同一机构内参与同一项目的小组成员文档的一致性。 举例而言,如果你是一家需要经常开具工作经验证明的企业,你可以创建一个模板,而不再需要从某个地方复制粘贴已保存的文档。当你需要开具一个新的经验证明时,你可以从模板中创建,稍微编辑一下就可以了。 LibreOffice 默认情况下提供了一些模板,但并不仅仅局限于使用这些,你可以根据自己的需求自由定制。 我认为模板是每个用户都应该了解的 [LibreOffice 基础技巧之一](https://itsfoss.com/libreoffice-tips/)。下面我将为你演示如何使用。 ### 如何在 LibreOffice 中创建一个模板 首先,创建你希望通过最少的编辑就可以重复使用的文档。它可以是文档、电子表格或演示文稿。我在示例中使用的是 word 文档,但是所有步骤都是相同的。 然后转到“文件”选项卡并选择“存储为模板”。你将被提示输入“名称”及选择“类别”,再单击“保存”。 ![Creating a new template in LibreOffice](/data/attachment/album/202004/16/214324nqoahn8o6ros1rqt.png) 此文件将以 .ott 格式(文字文档模板;电子表格模板为 .ots,演示文稿模板为 .otp)保存在 LibreOffice 的模板文件夹中。你可以在其他安装了 LibreOffice 的系统上使用这些模板文件,并在这些系统上使用相同的模板。 ### 如何在 LibreOffice 中使用模板 要使用模板,请选择 “文件选项卡”,然后选择 “模板”。 不要担心! LibreOffice 在打开一个模板时会在不影响原始模板的情况下创建一个副本。你可以随意编辑文档而不必担心模板发生改动。 ![Using templates](/data/attachment/album/202004/16/214336cm76puol4mposp7m.png) 选择模板后,单击打开。你就可以随意编辑了。 ### 如何在 LibreOffice 中更改模板 我们的需求可能会不时变化,所以需要对模板进行相应的调整。 编辑一个现有的模板,单击“文件” -> “模板”,然后右键单击所需的模板,然后单击“编辑”。 ![edit Template](/data/attachment/album/202004/16/214339rp4es4jjkpjmjemu.png) 当你完成对模板的编辑时,单击“保存”以使更改生效。 总之,模板不仅可以减少重复任务的工作量,还可以防止用户出错。你可以利用电脑优势来灵活的处理重复性的任务,并以此提高你的效率。 > > 福利小贴士 > > > 你可以在 [LibreOffice 网站](https://extensions.libreoffice.org/templates)上找到大量的附加模板。你可以搜索你需要的,下载并使用它们。请注意,这些模板来自第三方和未经验证的用户。所以使用它们的风险需要自己承担。 之后我会继续分享更多这样的技巧。同时,你还可以学习一下如何通过创建模板[在 GNOME 的右键菜单上下文中添加“创建新文档”选项](https://itsfoss.com/add-new-document-option/)。 --- via: <https://itsfoss.com/create-templates-libreoffice/> 作者:[Dimitrios Savvopoulos](https://itsfoss.com/author/dimitrios/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[qfzy1233](https://github.com/qfzy1233) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Creating a template in [LibreOffice](https://www.libreoffice.org/?ref=itsfoss.com) can save you some time for the documents that you use often. It can be a letter, a financial spreadsheet or even a presentation. Time is one factor that a template saves; on the other hand, it provides consistency where a group of people in an organization works together on the same project. For example, if you are a small organization that has to often issue certificates of experience, instead of copy-pasting from a saved document somewhere, you can create a template. When you need to issue a new certificate of experience, you create a new one from the template, edit it slightly and you are good to go. LibreOffice comes with a few templates by default but you are not restricted to use just them. You are free to create your own as per your requirements. I think templates are one of the [essential LibreOffice tips](https://itsfoss.com/libreoffice-tips/) that every user should know. Let me show you how to do it. ## How to create a template in LibreOffice First, create the document that you want to reuse with minimal editing. It could be a document, spreadsheet or presentation. I am using a word document in the example but the steps are the same for all of them. Now go to File and select Save as Template. You will be prompted to give a name and a category from the menu, then press Save. ![Saving template in LibreOffice](https://itsfoss.com/content/images/wordpress/2020/04/1.Template-save-as-800x567.png) This file will be saved in the LibreOffice template folder in .ott format (a text document template; spreadsheet templates use .ots and presentations .otp). You can use these template files on other systems that have LibreOffice installed and use the same templates on those systems as well. ## How to use templates in LibreOffice To use a template, select File and then Templates. Don’t worry!
When opening a template, LibreOffice creates a copy without affecting the original template. You can edit the document without worrying about your template getting changed. ![Using template in LibreOffice](https://itsfoss.com/content/images/wordpress/2020/04/2.Use-a-template.png) Once you choose your template, click Open. You can edit it as you like. ## How to change a template in LibreOffice Needs may change from time to time and adjustments to your templates can be necessary. To edit an existing template go to File -> Templates and then right click on the desired template and click Edit. ![Edit a template in LibreOffice](https://itsfoss.com/content/images/wordpress/2020/04/3.edit-template.png) When you finish editing the template, click on Save to make the changes permanent. **Bonus tip:** You can find plenty of additional templates on the [LibreOffice website](https://extensions.libreoffice.org/templates?ref=itsfoss.com). You can search for the ones you need, download them and use them. Be advised that these are from third-party, unverified users. So use them at your risk. ## Conclusion All in all, templates are great not only at reducing the workload on repetitive tasks but also for user mistake proofing. You can take advantage of your computer’s ability to handle a repetitive task but with flexibility. It increases your efficiency. I’ll keep on sharing more such tips in future. Meanwhile, you may also learn about creating templates to [add the “create new document” option in the right click menu context in GNOME](https://itsfoss.com/add-new-document-option/).
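If you are curious where these template files end up, the stock LibreOffice profile on Linux keeps user templates under `~/.config/libreoffice/4/user/template`. The following is a small guarded sketch; the path assumes a standard package install (snap/flatpak builds keep their profile elsewhere):

```shell
TPL_DIR="$HOME/.config/libreoffice/4/user/template"   # default profile path on Linux

if [ -d "$TPL_DIR" ]; then
  ls -l "$TPL_DIR"    # your saved .ott/.ots/.otp template files
else
  echo "No LibreOffice user profile found at $TPL_DIR"
fi
```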
12,120
用树莓派搭建一个私人社交网络
https://opensource.com/article/20/3/raspberry-pi-open-source-social
2020-04-17T21:02:22
[ "树莓派", "社交网络" ]
https://linux.cn/article-12120-1.html
> > 手把手教你怎样以低成本的硬件和简易步骤搭建自己的社交网络。 > > > ![](/data/attachment/album/202004/17/210209f3rqbxj9ch2bnrr3.jpg) 近年来,社交网络已经革新了人们的生活习惯。人们每天都会使用社交频道与朋友和家人联系。但是涉及到隐私和数据安全时,仍有一些共同的问题。尽管社交网络创建了复杂的隐私策略来保护用户的信息,但如果你不想自己的信息被泄露,最好的办法还是把数据保存在自己的服务器上。 一个树莓派 — 多才多艺的 Raspbian Lite 版本就可以让你搭建很多有用的家庭服务(参照我的文章[树莓派项目](https://peppe8o.com/2019/04/best-raspberry-pi-projects-with-open-source-software/))。通过搜索开源软件你就可以实现一些令人痴迷的功能,你也可以用这个神奇的设备来感受那些功能。其中一个有趣的尝试就是在你的树莓派上安装 OSSN。 ### OSSN 是什么? <ruby> <a href="https://www.opensource-socialnetwork.org/"> 开源社交网络 </a> <rt> OpenSource Social Network </rt></ruby>(OSSN)是用 PHP 写的一个快速开发社交网络软件,让你可以搭建自己的社交网站。OSSN 可以用来搭建不同类型的社交应用,如: * 私人内部网 * 公用/公开网络 * 社区 OSSN 支持的功能: * 照片 * 个人资料 * 朋友圈 * 表情 * 搜索 * 聊天 OSSN 运行在 LAMP 服务器上。硬件需求很简单,却能提供强大的用户界面,也友好支持移动端。 ### 我们需要准备什么 这个项目很简单,而且由于我们只安装远程 Web 服务,因此我们只需要一些便宜的零件就够了。我使用的是树莓派 3B+,但是用树莓派 3A+ 或其他更新的板应该也可以。 硬件: * 带有电源模块的树莓派 3B+ * 一张 SD 卡(最好是性能好点的卡,至少 16 GB) * 一台有 SFTP 软件(如免费的 [Filezilla](https://filezilla-project.org/))的桌面 PC,用来把安装包传到你的树莓派上 ### 操作步骤 我们首先搭建一个传统的 LAMP 服务器,然后配置数据库用户和安装 OSSN。 #### 1、安装 Raspbian Buster Lite 操作系统 你可以直接参照我的文章[在你的树莓派上安装 Raspbian Buster Lite](https://peppe8o.com/2019/07/install-raspbian-buster-lite-in-your-raspberry-pi/)。 为了确保你的系统是最新的,ssh 登录到树莓派后在终端输入下面的命令: ``` sudo apt-get update sudo apt-get upgrade ``` #### 2、安装 LAMP 服务 LAMP(Linux–Apache–Mysql–Php)服务通常与 MySQL 数据库配合。在我们的项目中,我们选择 MariaDB,因为它更轻量,完美支持树莓派。 安装 Apache 服务: ``` sudo apt-get install apache2 -y ``` 你可以通过在浏览器输入 `http://<<YouRpiIPAddress>>` 来检查 Apache 是否安装正确: ![](/data/attachment/album/202004/17/210243iwyg33g3wihh076t.jpg) 安装 PHP: ``` sudo apt-get install php -y ``` 安装 MariaDB 服务和 PHP connector: ``` sudo apt-get install mariadb-server php-mysql -y ``` 安装 phpMyAdmin: 在 OSSN 中 phpMyAdmin 不是强制安装的,但我建议你安装,因为它可以简化数据库的管理。 ``` sudo apt-get install phpmyadmin ``` 在 phpMyAdmin 配置界面,执行以下步骤: * 按下空格和 “OK” 选择 apache(强制)。 * 在 dbconfig-common 选择“Yes”,配置 phpMyAdmin 的数据库。 * 输入想设置的密码,按下 “OK”。 * 再次输入 phpMyAdmin 密码来确认,按下 “OK”。 为 phpMyAdmin 用户添加数据库权限来管理数据库: 我们用 root 
用户连接 MariaDB(默认没有密码)来设置权限。 ``` sudo mysql -uroot -p grant all privileges on *.* to 'phpmyadmin'@'localhost'; flush privileges; quit ``` 最后,重启 Apache 服务: ``` sudo systemctl restart apache2.service ``` 在浏览器输入 `http://<<YouRpiIPAddress>>/phpmyadmin/` 来检查 phpMyAdmin 是否正常: ![](/data/attachment/album/202004/17/210246u7gvlup7sqvj9poz.jpg) 默认的 phpMyAdmin 登录凭证: * 用户名:`phpmyadmin` * 密码:在 phpMyAdmin 安装步骤中你设置的密码 #### 3、安装 OSSN 所需的其他包和配置 PHP 在第一次配置 OSSN 前,我们还需要在系统上安装一些所需的包: * PHP 版本 5.6、7.0 或 7.1 * MYSQL 5 及以上 * APACHE * MOD\_REWRITE * 需要打开 PHP 扩展 cURL 和 Mcrypt * PHP GD 扩展 * PHP ZIP 扩展 * 打开 PHP 设置 `allow_url_fopen` * PHP JSON 支持 * PHP XML 支持 * PHP OpenSSL 在终端输入以下命令来安装上述包: ``` sudo apt-get install php7.3-curl php7.3-gd php7.3-zip php7.3-json php7.3-xml ``` 打开 mod\_rewrite: ``` sudo a2enmod rewrite ``` 修改默认的 Apache 配置,使用 mod\_rewrite: ``` sudo nano /etc/apache2/sites-available/000-default.conf ``` 在 `000-default.conf` 文件中添加下面的内容: ``` <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/html ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined # 需要添加的部分开始 -------------------------------- <Directory /var/www/html> Options Indexes FollowSymLinks MultiViews AllowOverride All Require all granted </Directory> # 需要添加的部分结束 -------------------------------- </VirtualHost> ``` 安装 Mcrypt: ``` sudo apt install php-dev libmcrypt-dev php-pear sudo pecl channel-update pecl.php.net sudo pecl install mcrypt-1.0.2 ``` 打开 Mcrypt 模块: 在 `/etc/php/7.3/apache2/php.ini` 文件中 `extension=mcrypt.so`(或取消注释): ``` sudo nano /etc/php/7.3/apache2/php.ini ``` `allow_url_fopen` 应该已经在 `/etc/php/7.3/apache2/php.ini` 文件中打开了。OpenSSL 应该在 php7.3 中安装了。 我建议的另一个设置是把 PHP 最大上传文件数修改为 16 MB: ``` sudo nano /etc/php/7.3/apache2/php.ini ``` 搜索到 `upload_max_filesize` 所在的行,参照下面的设置: ``` upload_max_filesize = 16M ``` 保存并退出,重启 Apache: ``` sudo systemctl restart apache2.service ``` #### 4、安装 OSSN ##### 创建数据库,设置用户 回到 phpmyadmin web 页面(浏览器输入 `http://<<YouRpiIPAddress>>/phpmyadmin/`)并登录: 
* 用户名: `phpmyadmin` * 密码:在 phpMyAdmin 安装步骤中你设置的密码 点击数据库标签页: ![](/data/attachment/album/202004/17/210247zk5g6g1mlz41g175.jpg) 创建一个数据库,记下数据库的名字,因为在之后的安装过程中,你要手动输入它。 ![](/data/attachment/album/202004/17/210249as8aaytieeai01az.jpg) 现在为 OSSN 创建一个数据库用户,我使用下面的凭证: * 用户名: `ossn_db_user` * 密码: `ossn_db_password` 在终端输入下面的命令(如果你没有修改过密码,root 密码应该仍然是空): ``` sudo mysql -uroot -p CREATE USER 'ossn_db_user'@'localhost' IDENTIFIED BY 'ossn_db_password'; GRANT ALL PRIVILEGES ON ossn_db.* TO 'ossn_db_user'@'localhost'; flush privileges; quit ``` ##### 安装 OSSN 软件 在你 PC 上从 [OSSN 下载页面](https://www.opensource-socialnetwork.org/download) 下载 OSSN 安装压缩文件,保存为文件 `ossn-v5.2-1577836800.zip`。 使用你习惯的 SFTP 软件把整个压缩文件通过 SFTP 传到树莓派的新目录 `/home/pi/download` 下。常用的(默认)SFTP 连接参数是: * 主机:你树莓派的 IP 地址 * 用户名:`pi` * 密码:raspberry(如果没有修改过默认密码) * 端口: 22 在终端输入: ``` cd /home/pi/download/ # 进入上传的 OSSN 安装文件的目录。 unzip ossn-v5.2-1577836800.zip # 从压缩包中提取所有文件 cd /var/www/html/ # 进入 Apache Web 目录 sudo rm index.html # 删除 Apache 默认页面 - 我们将使用 OSSN sudo cp -R /home/pi/download/ossn-v5.2-1577836800/* ./ # 把安装文件复制到 Web 目录 sudo chown -R www-data:www-data ./ ``` 创建数据文件夹:OSSN 需要一个文件夹来存放数据。出于安全目的,OSSN 建议这个文件夹创建在公开文档根目录之外。所以,我们在 `/opt` 下创建。 ``` sudo mkdir /opt/ossn_data sudo chown -R www-data:www-data /opt/ossn_data/ ``` 在浏览器输入 `http://<<YourRpiIPAddress>>` 来开始安装向导。 ![](/data/attachment/album/202004/17/210251f1w228mzsmmz1q8s.jpg) 所有项都检查完后,点击页面最下面的下一步按钮。 ![](/data/attachment/album/202004/17/210255v3ghclyr5cqf0ah2.jpg) 阅读许可协议并点击页面最下面的下一步按钮来接受。 ![](/data/attachment/album/202004/17/210259eq0x3f3qqksxjqji.jpg) 输入数据库用户名,密码和你选择的数据库名字,记得也要输入 OSSN 数据文件夹名称。点击安装。 ![](/data/attachment/album/202004/17/210301z8w6ncwc6a306abb.jpg) 输入你的管理员账号信息,点击创建按钮。 ![](/data/attachment/album/202004/17/210304swwdgd6387ohccp7.jpg) 现在所有的工作应该都完成了。点击结束,进入管理员首页。 ![](/data/attachment/album/202004/17/210309wohpksvk5ma0eev7.jpg) 你可以通过 URL `http://<<YourRpiIPAddress>>/administrator` 进入管理员控制面板,普通用户可以访问链接是 `http://<<YourRpiIPAddress>>`。
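如果你以后想换用别的数据库名或用户名,前面授权数据库用户的那几条 SQL 可以用一个小函数统一生成,避免手敲出错(这里的用户名、密码、库名都只是示例,请按需替换):

```shell
# 生成 OSSN 数据库用户的授权 SQL(参数依次为:用户名 密码 数据库名,均为示例值)
gen_ossn_grant_sql() {
    user="$1"; pass="$2"; db="$3"
    printf "CREATE USER '%s'@'localhost' IDENTIFIED BY '%s';\n" "$user" "$pass"
    printf "GRANT ALL PRIVILEGES ON %s.* TO '%s'@'localhost';\n" "$db" "$user"
    printf "FLUSH PRIVILEGES;\n"
}

# 把输出粘贴进 `sudo mysql -uroot -p` 的提示符中执行即可
gen_ossn_grant_sql "ossn_db_user" "ossn_db_password" "ossn_db"
```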
![](/data/attachment/album/202004/17/210320ksyst49tssewp752.jpg) 本文首发在 [peppe8o.com](https://peppe8o.com/private-social-network-with-raspberry-pi-and-opensource-social-network/)。已获得转载授权。 --- via: <https://opensource.com/article/20/3/raspberry-pi-open-source-social> 作者:[Giuseppe Cassibba](https://opensource.com/users/peppe8o) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lxbwolf](https://github.com/lxbwolf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Social networks have revolutionized people's lives in the last several years. People use social channels every day to stay connected with friends and family. But a common question remains regarding privacy and data security. Even if social networks have created complex privacy policies to protect users, maintaining your data in your own server is always the best option if you don't want to make them available to the public. Again, a Raspberry Pi—Raspbian Lite version can be very versatile to help you put a number of useful home services (see also my [Raspberry Pi projects](https://peppe8o.com/2019/04/best-raspberry-pi-projects-with-open-source-software/) article) in place. Some addictive features can be achieved by searching for open source software and testing it with this fantastic device. An interesting example to try is installing OpenSource Social Network in your Raspberry Pi. ## What Is OpenSource Social Network? [OpenSource Social Network](https://www.opensource-socialnetwork.org/) (OSSN) is a rapid-development social networking software written in PHP, that essentially allows you to make a social networking website. OSSN can be used to build different types of social apps, such as: - Private Intranets - Public/Open Networks - Community OSSN supports features like: - Photos - Profile - Friends - Smileys - Search - Chat OSSN runs on a LAMP server. It has very poor hardware requirements, but an amazing user interface, which is also mobile-friendly. ## What we need This project is very simple and, because we're installing only remote web services, we only need a few cheap parts. I'm going to use a Raspberry Pi 3 model B+, but it should also work with Raspberry Pi 3 model A+ or newer boards. 
Hardware: - Raspberry Pi 3 model B+ with its power supply - a micro SD card (better if it is a performing card, at least 16GB) - a Desktop PC with an SFTP software (for example, the free [Filezilla](https://filezilla-project.org/)) to transfer installation packages into your RPI. ## Step-by-step procedure We'll start by setting up a classic LAMP server. We'll then set up database users and install OpenSource Social Network. ### 1. Install Raspbian Buster Lite OS For this step, you can simply follow my [Install Raspbian Buster Lite in your Raspberry Pi](https://peppe8o.com/2019/07/install-raspbian-buster-lite-in-your-raspberry-pi/) article. Make sure that your system is up to date. Connect via ssh terminal and type following commands: ``` sudo apt-get update sudo apt-get upgrade ``` LAMP (Linux–Apache–Mysql–Php) servers usually come with the MySQL database. In our project, we'll use MariaDB instead, because it is lighter and works with Raspberry Pi. ### 3. Install Apache server: `sudo apt-get install apache2 -y` ![](https://opensource.com/sites/default/files/uploads/ossn_1_0.jpg) ### 4. Install PHP: `sudo apt-get install php -y` `sudo apt-get install mariadb-server php-mysql -y` PhpMyAdmin is not mandatory in OpenSource Social Network, but I suggest that you install it because it simplifies database management. `sudo apt-get install phpmyadmin` - Select apache (mandatory) with space and press OK. - Select Yes to configure the database for phpMyAdmin with dbconfig-common. - Enter your favorite phpMyAdmin password and press OK. - Enter your phpMyAdmin password again to confirm and press OK ### 7. Grant phpMyAdmin user DB privileges to manage DBs: We'll connect to MariaDB with root user (default password is empty) to grant permissions. 
Remember to use semicolons at the end of each command row as shown below: ``` sudo mysql -uroot -p grant all privileges on *.* to 'phpmyadmin'@'localhost'; flush privileges; quit ``` `sudo systemctl restart apache2.service` ![](https://opensource.com/sites/default/files/uploads/ossn_2.jpg) Default phpMyAdmin login credentials are: - user: phpmyadmin - password: the one you set up in the phpMyAdmin installation step ## Installing other open source social network-required packages and setting up PHP We need to prepare our system for OpenSource Social Network's first setup wizard. Required packages are: - PHP version any of 5.6, 7.0, 7.1 - MYSQL 5 OR > - APACHE - MOD_REWRITE - PHP Extensions cURL & Mcrypt should be enabled - PHP GD Extension - PHP ZIP Extension - PHP settings allow_url_fopen enabled - PHP JSON Support - PHP XML Support - PHP OpenSSL So we'll install them with following terminal commands: `sudo apt-get install php7.3-curl php7.3-gd php7.3-zip php7.3-json php7.3-xml` ### 1. Enable MOD_REWRITE: `sudo a2enmod rewrite` `sudo nano /etc/apache2/sites-available/000-default.conf` **000-default.conf**file appears like the following (excluding comments): ``` <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/html ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined # SECTION TO ADD -------------------------------- <Directory /var/www/html> Options Indexes FollowSymLinks MultiViews AllowOverride All Require all granted </Directory> # END SECTION TO ADD -------------------------------- </VirtualHost> ``` ``` sudo apt install php-dev libmcrypt-dev php-pear sudo pecl channel-update pecl.php.net sudo pecl install mcrypt-1.0.2 ``` `sudo nano /etc/php/7.3/apache2/php.ini` **allow_url_fopen**should be already enabled in "/etc/php/7.3/apache2/php.ini". OpenSSL should be already installed in php7.3. ### 6. 
Another setting that I suggest is editing the PHP max upload file size up to 16 MB: `sudo nano /etc/php/7.3/apache2/php.ini` **upload_max_filesize**parameter and set it as the following: `upload_max_filesize = 16M` `sudo systemctl restart apache2.service` ## Install OSSN ### 1. Create DB and set up user: Go back to phpmyadmin web page (browse "http://<<YourRpiIPAddress>>/phpmyadmin/") and login: User: phpmyadmin Password: the one set up in phpmyadmin installation step Click on database tab: ![](https://opensource.com/sites/default/files/uploads/ossn_3.jpg) Create a database and take note of the database name, as you will be required to enter it later in the installation process. ![](https://opensource.com/sites/default/files/uploads/ossn_4.jpg) It's time to create a database user for OSSN. In this example, I'll use the following credentials: User: ossn_db_user Password: ossn_db_password So, terminal commands will be (root password is still empty, if not changed by you before): ``` sudo mysql -uroot -p CREATE USER 'ossn_db_user'@'localhost' IDENTIFIED BY 'ossn_db_password'; GRANT ALL PRIVILEGES ON ossn_db.* TO 'ossn_db_user'@'localhost'; flush privileges; quit ``` Download the OSSN installation zip file from the [OSSN download page](https://www.opensource-socialnetwork.org/download) on your local PC. At the time of this writing, this file is named "ossn-v5.2-1577836800.zip." Using your favorite SFTP software, transfer the entire zip file via SFTP to a new folder in the path "/home/pi/download" on your Raspberry Pi. 
Common (default) SFTP connection parameters are: - Host: your Raspberry Pi IP address - User: pi - Password: raspberry (if you didn't change the pi default password) - Port: 22 Back to terminal: ``` cd /home/pi/download/ #Enter directory where OSSN installation files have been transferred unzip ossn-v5.2-1577836800.zip #Extracts all files from zip cd /var/www/html/ #Enter Apache web directory sudo rm index.html #Removes Apache default page - we'll use OSSN one sudo cp -R /home/pi/download/ossn-v5.2-1577836800/* ./ #Copy installation files to web directory sudo chown -R www-data:www-data ./ ``` ``` sudo mkdir /opt/ossn_data sudo chown -R www-data:www-data /opt/ossn_data/ ``` ![](https://opensource.com/sites/default/files/uploads/ossn_5.jpg) All checks should be fine. Click the Next button at the end of the page. ![](https://opensource.com/sites/default/files/uploads/ossn_6.jpg) Read the license validation and click the Next button at the end of the page to accept. ![](https://opensource.com/sites/default/files/uploads/ossn_7.jpg) Enter the database user, password, and the DB name you chose. Remember also to enter the OSSN data folder. Press Install. ![](https://opensource.com/sites/default/files/uploads/ossn_8.jpg) Enter your admin account information and press the Create button. ![](https://opensource.com/sites/default/files/uploads/ossn_9.jpg) Everything should be fine now. Press Finish to access the administration dashboard. ![](https://opensource.com/sites/default/files/uploads/ossn_10.jpg) The administration panel can be reached at the URL "http://<<YourRpiIPAddress>>/administrator", while the user link is "http://<<YourRpiIPAddress>>". ![](https://opensource.com/sites/default/files/uploads/ossn_11.jpg) *This article was originally published at peppe8o.com. Reposted with permission.*
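As a sanity check before running the setup wizard, you can verify from the shell that the PHP extensions listed earlier are actually loaded. This is a rough sketch — the module names are as reported by `php -m`, and the exact list may vary with your PHP version. Note that the CLI and Apache can use separate php.ini files, so this only approximates what mod_php sees:

```shell
# Check that the PHP CLI reports the extensions OSSN needs as loaded
if command -v php >/dev/null 2>&1; then
    for mod in curl gd zip json xml openssl mysqli; do
        if php -m | grep -qix "$mod"; then
            echo "$mod: OK"
        else
            echo "$mod: missing"
        fi
    done
else
    echo "php CLI not found - install PHP first"
fi
```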
12,121
如何在 Ubuntu 中添加多个时区
https://itsfoss.com/add-multiple-time-zones-ubuntu/
2020-04-18T14:20:46
[ "时区", "时钟" ]
https://linux.cn/article-12121-1.html
> > 本快速教程介绍了在 Ubuntu 和其他发行版中使用 GNOME 桌面环境添加多时区时钟的步骤。 > > > ![](/data/attachment/album/202004/18/142021jborblzkglbebrkk.jpg) 如果你的家人或同事在另一个国家,或者你居住在一个有多个时区的国家,那么了解时差就变得很重要。毕竟,你不想在凌晨 4 点打电话打扰别人。 一些 Linux 用户还会记下 [UTC 时间](https://en.wikipedia.org/wiki/Coordinated_Universal_Time),因为大多数服务器都使用 UTC。 如果你有多个时钟,那么可以更好地管理此类情况。你可以将一个时钟设置为本地时间,并将其他时钟同步到其他时区。这使得了解不同时间变得更加容易。 在本教程中,我将向你展示如何在 Ubuntu 和其他使用 GNOME 桌面环境的 Linux 发行版中添加其他时钟。 ### 在 Ubuntu(以及其他使用 GNOME 的 Linux)中添加多个时区时钟 请[检查你正在使用的桌面环境](https://itsfoss.com/find-desktop-environment/)。本教程仅适用于 GNOME 桌面。 要添加其他时钟,可以使用一个叫 [GNOME Clocks](https://wiki.gnome.org/Apps/Clocks) 的小程序。 GNOME Clocks 是一个简单的应用,它可以显示多个位置的时间和日期。你也可以使用它来设置闹钟或计时器,它还包括秒表功能。 GNOME Clocks 存在于 Ubuntu 的 Universe 仓库中。因此,请确保首先[启用 Universe 仓库](https://itsfoss.com/ubuntu-repositories/)。 你可以在软件中心中搜索 “GNOME Clocks” 并从那里安装它。 ![Gnome Clocks Ubuntu Software Center](/data/attachment/album/202004/18/142050ykt7akopgetflgkl.jpg) 或者,你可以打开终端并使用以下命令来安装 GNOME Clocks: ``` sudo apt install gnome-clocks ``` 如果你使用的是其他 Linux 发行版,那么请使用发行版的软件中心或软件包管理器来安装此程序。 安装后,请按 `Super` 键( `Windows` 键)并搜索 clocks: ![Gnome Clocks App Search Ubuntu](/data/attachment/album/202004/18/142051hmki2wgqzcsm1k99.jpg) 启动程序,你应该会看到一个界面,提供一些选项,例如添加世界时钟、设置闹钟、使用秒表和计时器。 单击左上角的 “+” 号,它将为你提供搜索地理位置的选项。搜索、选择并添加。 ![Adding additional clocks](/data/attachment/album/202004/18/142052kzioclno5z3k2nln.jpg) 通过地理位置添加所需的时区后,你可以看到现在在消息托盘中添加了这个新时钟。它还显示了你当地时间与其他时区之间的时差。 ![Multiple clocks for multiple time zones](/data/attachment/album/202004/18/142057p1ijaa88rzz9lvvz.jpg) 你可以使用 `Super + M` 键快速打开消息托盘。你可以掌握这些[有用的 Ubuntu 快捷方式](https://itsfoss.com/ubuntu-shortcuts/)来节省时间。 如果要删除其他时钟,你可以从 GNOME Clocks 应用界面执行以下操作: ![Remove Additional Clocks](/data/attachment/album/202004/18/142058rijol5fj6rjrci9r.jpg) 你无法(在这里)删除当前时区并设置为其他时区。有其他方法[更改 Linux 中的当前时区](https://itsfoss.com/change-timezone-ubuntu/)。 我希望你喜欢这个快速技巧。欢迎提出问题和建议。 --- via: <https://itsfoss.com/add-multiple-time-zones-ubuntu/> 作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 
选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
![Warp Terminal](/assets/images/warp-terminal.webp) ![Warp Terminal](/assets/images/warp-terminal.webp) If you have family members or colleagues in another country or if you live in a country with multiple time zones, keeping a track of the time difference becomes important. After all, you don’t want to disturb someone by calling at 4’o clock in the morning. Some Linux users also keep a tab on the [UTC time](https://en.wikipedia.org/wiki/Coordinated_Universal_Time) because an overwhelming majority of servers use UTC. Situations like these can be managed better if you have multiple clocks. You can set one clock to your local time and sync other clock(s) to other time zone(s). It makes keep an eye on the different times easier. In this quick tutorial, I’ll show you how to add additional clocks in Ubuntu and other Linux distributions that use GNOME desktop environment. ## Adding multiple time zone clocks in Ubuntu (and other Linux using GNOME) Please [check which desktop environment you are using](https://itsfoss.com/find-desktop-environment/). This tutorial is suitable for GNOME desktop only. To add additional clocks, you can use a nifty little app unsurprisingly called [GNOME Clocks](https://wiki.gnome.org/Apps/Clocks). GNOME Clocks is a simple application that shows the time and date in multiple locations. You can also use it to set alarms or timers. Stopwatch feature is also included. GNOME Clocks is available in the universe repository in Ubuntu. So please make sure to [enable universe repository](https://itsfoss.com/ubuntu-repositories/) first. You can search for GNOME Clocks in Software Center and install it from there. 
![Gnome Clocks in Ubuntu Software Center](https://itsfoss.com/content/images/wordpress/2020/04/gnome-clocks-ubuntu-software-center.jpg) Alternatively, you can open a terminal and use the following command to install GNOME Clocks: `sudo apt install gnome-clocks` If you are using some other Linux distribution, please use your distribution’s software center or package manager to install this application. Once you have installed it, search for it by pressing the super key (Windows key) and typing clocks: ![Gnome Clocks App in Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/gnome-clocks-app-search-ubuntu.jpg) Start the application and you should see an interface that provides you a few options like adding world clock, setting alarms, use stopwatch and timer. Click on the + sign in the top left corner it will give you an option to search for a geographical location. Search it, select it and add it. ![Adding additional clocks in Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/add-multiple-time-zones-gnome.jpg) Once you have added the required time zone(s) via its geographical location, you can see that this new clock is now added in the message try. It also shows the time difference between your local time and other time zones. ![Multiple Clocks in Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/multiple-clocks-ubuntu.jpg) You can use Super + M keys to quickly open the message tray. There are some more [useful Ubuntu shortcuts](https://itsfoss.com/ubuntu-shortcuts/) you may master and save your time. If you want to remove the additional clocks, you can do that from the GNOME Clocks application interface: ![Remove Additional Clocks in Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/remove-additional-clocks-ubuntu.jpg) You cannot delete your current time zone and set it to something else. There are other ways [to change your current time zone in Linux](https://itsfoss.com/change-timezone-ubuntu/). 
I hope you liked this quick tip. Questions and suggestions are always welcome.
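If you only need a quick terminal read-out rather than desktop clocks, the standard `date` command can print several zones in one go by overriding the `TZ` variable. Zone names follow the tz database (the ones below are just examples); on systemd distributions, `timedatectl list-timezones` lists the valid names:

```shell
# Print the current time in a few time zones using tz database names
for tz in UTC America/New_York Asia/Kolkata; do
    printf '%-20s %s\n' "$tz" "$(TZ=$tz date '+%H:%M %Z')"
done
```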
12,123
用 ROX 桌面重温 Linux 历史
https://opensource.com/article/19/12/linux-rox-desktop
2020-04-18T15:16:16
[ "桌面" ]
/article-12123-1.html
> > 这篇文章是 Linux 桌面 24 天特别系列的一部分。如果你想来一次有趣的时光之旅,ROX 桌面非常值得一试。 > > > ![](/data/attachment/album/202004/18/151533n196wag64gwhs0ga.jpg) [ROX](http://rox.sourceforge.net/desktop/) 桌面已经不再积极开发,而它的遗留问题至今仍然存在,但即使在它活跃开发的时候,它也是一个另类的 Linux 桌面。当其他的桌面感觉与旧式的 Unix 或 Windows 界面大致相似时,ROX 则属于 BeOS、AmigaOS 和 [RISC OS](https://www.riscosopen.org/content/) 桌面阵营。 它专注于拖放式操作(这使得它的可访问性对某些用户来说并不理想)、点击式操作、弹出式上下文菜单,以及一个独特的应用程序目录系统,无需安装即可运行本地应用程序。 ### 安装 ROX 如今,ROX 基本上都被遗弃了,只剩下一点残渣碎片留给用户自己去收集整理。幸运的是,这个难题相对来说比较容易解决,但是当你在发行版的软件仓库中找到 ROX 桌面的碎片时,不要被迷惑了,因为那并不是 ROX 桌面全部的碎片。ROX 常用的部分 —— 文件管理器([ROX-Filer](http://rox.sourceforge.net/desktop/ROX-Filer))和终端([ROXTerm](http://roxterm.sourceforge.net/)) —— 似乎在大多数流行的发行版软件仓库中都有存在,你可以将它们作为独立的应用程序安装(并使用)。然而,要运行 ROX 桌面,你必须同时安装 ROX-Session 和它所依赖的库。 我在 Slackware 14.2 上安装了 ROX,但它应该可以在任何 Linux 或 BSD 系统上运行。 首先,你必须从其版本库中安装 [ROX-lib2](http://rox.sourceforge.net/desktop/ROX-Lib)。你要安装 ROX-lib2,按照它的理念,只需下载tarball、[解压](https://opensource.com/article/17/7/how-unzip-targz-file),然后将 `ROX-lib` 目录移动到 `/usr/local/lib` 下就行。 接下来,你要安装 [ROX-Session](http://rox.sourceforge.net/desktop/ROX-Session.html)。这可能需要从源码中编译,因为它很可能不在你的软件仓库中。编译过程需要编译工具,这些工具在 Slackware 上是默认提供的,但在其他发行版中往往会被省略,以节省初始下载空间。根据你的发行版不同,你必须安装的包的名称也不同,所以请参考文档来了解具体内容。例如,在 Debian 发行版中,你可以在 [Debian 的 wiki](https://wiki.debian.org/BuildingTutorial) 中了解构建需求,而在 Fedora 发行版中,请参考 [Fedora 的文档](https://docs.pagure.org/docs-fedora/installing-software-from-source.html)。安装了构建工具后,执行自定义的 ROX-Session 构建脚本。 ``` $ ./AppRun ``` 这个脚本会自己管理构建和安装,并提示你需要 root 权限,以在你的登录屏上将其添加为一个选项。 如果你还没有从你的软件库中安装 ROX-Filer,请在继续之前安装。 这些组件共同组成了一个完整的 ROX 桌面。要登录到新桌面,请从当前桌面会话中注销。默认情况下,你的会话管理器(KDM、GDM、LightDM 或 XDM,视你的设置而定)会继续登录到你之前的桌面,所以在登录前必须覆盖。 使用 SDDM: ![](/data/attachment/album/202004/18/151622na5l3i3znzyybyvy.jpg) 使用 GDM: ![](/data/attachment/album/202004/18/151631gr7kcxdr5v5q8ff7.jpg) ### ROX 桌面特性 ROX 桌面默认情况下很简单,屏幕底部有一个面板,桌面上有一个通往主目录的快捷方式图标。面板中包含了一些常用位置的快捷方式。这就是 ROX 桌面的全部功能,至少在安装后就是这样。如果你想要时钟、日历或系统托盘,你需要找到提供这些功能的应用程序。 ![Default ROX 
desktop](/data/attachment/album/202004/18/151637bofpfzf6yfuf5zfh.jpg "Default ROX desktop") 虽然没有任务栏,但当你将窗口最小化时,它就会成为桌面上的一个临时图标。你可以点击该图标,将其窗口恢复到以前的大小和位置。 面板也可以进行一些修改。你可以在其中放置不同的快捷方式,甚至可以创建自己的小程序。 它没有应用菜单,也没有上下文菜单中的应用快捷方式。相反,你可以手动导航到 `/usr/share/applications`,或者你可以将你的应用目录或目录添加到 ROX 面板中。 ![ROX desktop](/data/attachment/album/202004/18/151643z6zxq6q61c1ggfck.jpg "ROX desktop") ROX 桌面的工作流程集中在鼠标驱动上,让人联想到 Mac OS 7.5 和 8 系统。通过 ROX-filer,你可以管理权限、文件管理、<ruby> 内省 <rt> introspection </rt></ruby>、脚本启动、后台设置,以及几乎所有你能想到的东西,只要你有足够的耐心,就可以实现点击式的交互。对于高级用户来说,这似乎很慢,但 ROX 设法让它变得相对无痛,而且非常直观。 ### 应用程序目录、AppRun 和 AppImage ROX 桌面有一个优雅的惯例,按照此惯例,包含一个名为 `AppRun` 的脚本的目录就可以像一个应用程序一样被执行。这意味着,要制作一个 ROX 应用程序,你所要做的就是将代码编译到一个目录中,将一个名为`AppRun` 的脚本放在该目录的根目录下,来执行你所编译的二进制文件,然后将该目录标记为可执行即可。ROX-Filer 会按照你设置的方式来显示一个目录,并以特殊的图标和颜色显示一个目录。当你点击一个应用程序目录,ROX-Filer 会自动运行里面的 `AppRun` 脚本。它的外观和行为就像一个已经安装好的应用程序,但它是在用户的主目录下的本地目录,不需要特殊的权限。 这是一个方便的功能,但它是那些你使用时感觉很好的小功能之一,因为它很容易做到。它绝不是必要的,它只是比在本地建立一个应用程序,将目录隐藏在某个不显眼的地方,并建立一个快速的 `.desktop` 文件作为你的启动器,要领先了几步。然而,应用程序目录的概念已经当做灵感被 [AppImage](https://appimage.org/) 打包系统所 [借鉴](https://github.com/AppImage/AppImageKit/wiki/AppDir)。 ### 为什么应该试试 ROX 桌面 把 ROX 设置好并使用是有些困难的,它似乎真的被抛弃了。然而,它的遗产在今天以多种方式继续存在,它是 Linux 历史上的一段迷人而有趣的历史。它可能不会成为你的主要桌面,但如果你想来一次有趣的回溯之旅,那么 ROX 非常值得一试。探索它、定制它,看看它包含了哪些巧妙的想法。也许还有一些隐藏的宝石可以让开源社区受益。 --- via: <https://opensource.com/article/19/12/linux-rox-desktop> 作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
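补充一个动手示例:上文介绍的应用程序目录惯例,用几条命令就能搭出一个最小的 ROX 应用目录(目录名和脚本内容纯属演示):

```shell
# 在临时目录中构造一个最小的 ROX 应用程序目录:一个目录加一个可执行的 AppRun
cd "$(mktemp -d)"
mkdir MyApp
cat > MyApp/AppRun << 'EOF'
#!/bin/sh
echo "Hello from MyApp"
EOF
chmod +x MyApp/AppRun
./MyApp/AppRun    # 输出:Hello from MyApp
```

在 ROX-Filer 中打开包含 `MyApp` 的目录并点击它,运行的就是这个 `AppRun` 脚本,效果与点击一个已安装的应用相同。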
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
12,124
如何找出你所使用的桌面环境
https://itsfoss.com/find-desktop-environment/
2020-04-18T20:55:17
[ "桌面环境", "GNOME" ]
https://linux.cn/article-12124-1.html
如果你是 Linux 新用户,并在一个 Linux 论坛中寻求帮助,那么你可能会被问以下问题: > > “你使用的是哪个桌面环境?” > > > 你知道什么是<ruby> 桌面环境 <rt> desktop environment </rt></ruby>(DE),但你如何知道你使用的是哪一个?我会告诉你如何找到它。我将首先展示命令行方法,因为这适用于[各种 Linux 发行版](https://itsfoss.com/what-is-linux/)。我还将展示如何通过图形方式获得。 ### 检查你使用的是哪个桌面环境 ![](/data/attachment/album/202004/18/205519b7n0ibnmhthxiivy.jpg) 你可以[在 Linux 中使用 echo 命令](https://linuxhandbook.com/echo-command/)在终端中显示 `XDG_CURRENT_DESKTOP` 变量的值。 打开终端并复制粘贴此命令: ``` echo $XDG_CURRENT_DESKTOP ``` 例如,这表明我在 [Ubuntu 20.04](https://itsfoss.com/ubuntu-20-04-release-features/) 中使用了 [GNOME 桌面](https://www.gnome.org/): ``` abhishek@itsfoss:~$ echo $XDG_CURRENT_DESKTOP ubuntu:GNOME ``` 尽管此命令可以快速告诉你正在使用哪个桌面环境,但它不会提供任何其他信息。 在某些情况下,了解桌面环境版本可能很重要。软件的每个新版本都会带来新功能或删除某些功能。[GNOME 3.36](https://itsfoss.com/gnome-3-36-release/) 引入了“请勿打扰”选项,以关闭所有桌面通知。 假设你了解了这个新的“请勿打扰”功能。你确认自己正在使用 GNOME,但是在 GNOME 桌面上看不到此选项。如果你可以检查系统上已安装的 GNOME 桌面版本,那么这会很清楚。 我将先向你展示命令检查桌面环境版本,因为你可以在任何运行桌面环境的 Linux 中使用它。 ### 如何获取桌面环境版本 与获取桌面环境的名称不同,获取其版本号的方法并不直接,因为它没有标准的命令或环境变量可以提供此信息。 在 Linux 中获取桌面环境信息的一种方法是使用 [Screenfetch](https://github.com/KittyKatt/screenFetch) 之类的工具。此[命令行工具以 ascii 格式显示 Linux 发行版的 logo](https://itsfoss.com/display-linux-logo-in-ascii/) 以及一些基本的系统信息。桌面环境版本就是其中之一。 在基于 Ubuntu 的发行版中,你可以通过[启用 Universe 仓库](https://itsfoss.com/ubuntu-repositories/)安装 Screenfetch,然后使用以下命令: ``` sudo apt install screenfetch ``` 对于其他 Linux 发行版,请使用系统的软件包管理器来安装此程序。 安装后,只需在终端中输入 `screenfetch` 即可,它应该显示桌面环境版本以及其他系统信息。 ![Check Desktop Environment Version](/data/attachment/album/202004/18/205520cg5yfua5ifgfgzfg.jpg) 如上图所示,我的系统使用 GNOME 3.36.1(基本版本是 GNOME 3.36)。你也可以这样[检查 Linux 内核版本](https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/)和其他详细信息。 请记住,Screenfetch 不一定显示桌面环境版本。我查看了它的源码,它有许多 if-else 代码,可以从各种桌面环境中的许多源和参数获取版本信息。如果找不到任何版本,那么仅显示桌面环境名称。 ### 使用 GUI 检查桌面环境版本 几乎所有桌面环境在其 “Settings”->“About” 部分中都提供了基本的系统详细信息。 一个主要问题是,大多数桌面环境看起来都不同,因此我无法展示每个桌面环境的确切步骤。我将展示 GNOME 的,让你在桌面上发现它。 在菜单中搜索 “Settings”(按 Windows 键并搜索): ![Search for Settings
application](/data/attachment/album/202004/18/205521twtrw2rf8y2x64hg.jpg) 在这里,找到底部的 “About” 部分。单击它,你应该就能看到桌面环境及其版本。 ![Check Desktop Environment in Ubuntu](/data/attachment/album/202004/18/205524guhx8aobcn887m8o.jpg) 如你所见,这表明我的系统正在使用 GNOME 3.36。 我希望这个快速入门技巧对你有所帮助。如果你有任何疑问或建议,请在下面发表评论。 --- via: <https://itsfoss.com/find-desktop-environment/> 作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
![Warp Terminal](/assets/images/warp-terminal.webp) ![Warp Terminal](/assets/images/warp-terminal.webp) If you are a new Linux user and seeking help in one of the Linux forums, you may be asked this question: *“Which desktop environment are you using?”* You have an idea about [what a desktop environment is](https://itsfoss.com/what-is-desktop-environment/) but how do you know which one are you using? I’ll tell you how to find it out. I’ll show the command line method first because that is applicable to [all kind of Linux distributions](https://itsfoss.com/what-is-linux/). I’ll also show the graphical way of getting this information. ## Check which desktop environment you are using You can [use the echo command in Linux](https://linuxhandbook.com/echo-command/) to display the value of XDG_CURRENT_DESKTOP variable in the terminal. Open the terminal and copy paste this command: `echo $XDG_CURRENT_DESKTOP` For example, it shows that I am using [GNOME desktop](https://www.gnome.org/) in [Ubuntu 20.04](https://itsfoss.com/ubuntu-20-04-release-features/): ``` abhishek@itsfoss:~$ echo $XDG_CURRENT_DESKTOP ubuntu:GNOME ``` While this command quickly tells you which desktop environment is being used, it doesn’t give any other information. Knowing the version of desktop environment (also called DE) could be important in some cases. Each new version of a software brings new features or removes some. [GNOME 3.36](https://itsfoss.com/gnome-3-36-release/) introduces a ‘Do Not Disturb’ option to toggle off all the desktop notifications. Suppose you read about this new Do Not Disturb feature. You verify that you are using GNOME and yet you don’t see this option in your GNOME desktop. If you could check the GNOME desktop version you have installed on your system, that could make things clear for you. I’ll show you the commands to check the desktop environment’s version first because you can use it in any Linux, running desktop environment. 
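When `XDG_CURRENT_DESKTOP` is empty, a few related variables can fill the gap — which ones are set depends on your distribution and login manager, so treat this as a best-effort probe:

```shell
# Print the common desktop-environment-related variables; any may be unset
for var in XDG_CURRENT_DESKTOP XDG_SESSION_DESKTOP DESKTOP_SESSION GDMSESSION; do
    eval "val=\${$var}"
    echo "$var=${val:-<unset>}"
done
```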
## How to get desktop environment version Unlike getting the name of desktop environment. getting its version number is not straightforward because there is no standard command or environment variable that could give this information. One way to get the desktop environment information in Linux is by using a tool like [Screenfetch](https://github.com/KittyKatt/screenFetch). This [command line tool displays the logo of your Linux distribution in ascii format](https://itsfoss.com/display-linux-logo-in-ascii/) along with a few basic system information. Desktop environment version is one of them. In Ubuntu based distributions, you can install Screenfetch by [enabling Universe repository](https://itsfoss.com/ubuntu-repositories/) and then using this command: `sudo apt install screenfetch` For other Linux distributions, please use your system’s package manager to install this program. Once installed, simply type screenfetch in the terminal and it should show the desktop environment version along with other system information. ![Check Desktop Environment Version](https://itsfoss.com/content/images/wordpress/2020/04/check-desktop-environment-version.jpg) As you can see in the above image, my system is using GNOME 3.36.1 (basically GNOME 3.36). You can also [check the Linux kernel version](https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/) and other details here. Please keep in mind that it is not guaranteed that Screenfetch will display the desktop environment version. I checked its source code and it has lots of if-else code to get the version information from a number of sources and parameters in various desktop environments. If it can find nothing on version, it just displays the DE name. ## Using GUI to check desktop environment version Almost all desktop environments provide basic system details in their Settings-About section. The one major problem is that most DEs look different and thus I cannot show the exact steps for each of them. 
I am going to show it for GNOME and let you discover it on your desktop. So, search for Settings in the menu (press the Windows key and search): ![Applications Menu Settings](https://itsfoss.com/content/images/wordpress/2019/08/applications_menu_settings.jpg) In here, go to the bottom to find the About section. Click on it and you should see the desktop environment along with its version. ![Check Desktop Environment Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/check-desktop-environment-ubuntu.jpg) As you can see, it shows that my system is using GNOME 3.36. ## Which DE are you using? Similarly, you can find out whether you are using the Wayland or Xorg display server. [How to Check if You are Using Wayland or Xorg?There is a technical transition taking place in the desktop Linux world. Most mainstream distros have started to move to the Wayland display server by default. But not all legacy components are compatible with the newer Wayland. They work only with the good old X or Xorg display server.…](https://itsfoss.com/check-wayland-or-xorg/)![](https://itsfoss.com/content/images/wordpress/2022/09/check-wayland-or-xorg.png) I hope you find this quick beginner tip useful. If you have questions or suggestions, please leave a comment below.
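The Wayland-or-Xorg question mentioned above can be answered with the same environment-variable approach. A minimal sketch, assuming your session manager sets `XDG_SESSION_TYPE` (systemd-logind does on most modern distributions; outside a graphical session the variable may be missing or `tty`):

```shell
#!/usr/bin/env bash
# Map XDG_SESSION_TYPE to a friendly display-server name.
session_type() {
    case "${XDG_SESSION_TYPE:-}" in
        wayland) echo "Wayland" ;;
        x11)     echo "Xorg" ;;
        *)       echo "unknown" ;;
    esac
}

session_type
```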
12,125
升级 Ubuntu Linux 内核的几种不同方法
https://www.ostechnix.com/different-ways-to-update-linux-kernel-for-ubuntu/
2020-04-18T22:50:55
[ "内核", "升级" ]
https://linux.cn/article-12125-1.html
![](/data/attachment/album/202004/18/224957b3q2xjb91h512013.jpg) 这个指南里介绍了 7 种为 Ubuntu 升级 Linux 内核的不同方法。这 7 种方法里,有 5 种需要重启系统来使新内核生效,其他两种则不用。升级之前,**强烈建议你将重要数据进行备份!** 这里提到的所有方法只在 Ubuntu 中测试过。我们并不确定这些方法是不是也能适用于其他 Ubuntu 的特色发行版(如: Xubuntu)和衍生发行版(如:Linux Mint)。 ### 第一部分:需要重启的内核升级 以下方法需要你重启系统以便新的内核生效。以下所有方法都建议在个人系统或测试系统中进行。重要的事儿再说一遍,请备份好你 Ubuntu 中的重要数据、配置文件和其他重要的东西。 #### 方法 1 - 使用 dpkg 升级 Linux 内核(手动方式) 这个方法可以帮助你从 [kernel.ubuntu.com](http://kernel.ubuntu.com/%7Ekernel-ppa/mainline/) 网站手动下载可用的最新 Linux 内核。如果你打算安装最新版(而不是稳定版或者正式发布版),那这种方法对你会很有用。从以上链接下载 Linux 内核版本。编写这个指南的时候,最新的可用版本是 **5.0-rc1**,最新的稳定版是 **v4.20**。 ![](/data/attachment/album/202004/18/225058u0lhvl43lvmefmlf.png) 点击你所选择的 Linux 内核版本链接,找到你对应的架构(“Build for XXX”)的那部分。然后下载符合以下格式的两个文件(其中 X.Y.Z 是最高版本号): 1. linux-image-*X.Y.Z*-generic-\*.deb 2. linux-modules-X.Y.Z*-generic-*.deb 在终端中改变到文件所在的目录,然后执行此命令手动安装内核: ``` $ sudo dpkg --install *.deb ``` 重启系统,使用新内核: ``` $ sudo reboot ``` 检查是否如你所愿: ``` $ uname -r ``` 对于分步的说明,请查看下列链接中对应的部分。 * [在基于 RPM 和 DEB 的系统中安装 Linux 内核 4.15](https://www.ostechnix.com/install-linux-kernel-4-15-rpm-deb-based-systems/) 以上的指南是针对的是 4.15 版本,不过安装最新版本的所有的步骤都是一样的。 **优势:** 不必联网(你可以从任何系统中下载 Linux 内核来使用) **缺点:** 手动更新,需要重启系统。 #### 方法 2 - 用 apt-get 来升级 Linux 内核(推荐方法) 这是在类 Ubuntu 系统中升级 Linux 内核的推荐方法。不同于上一个方法,这种方法会从 Ubuntu 官方仓库下载、安装内核版本,而不是从 **kernel.ubuntu.com**网站。 要升级包括内核的整个系统,只需要执行: ``` $ sudo apt-get update $ sudo apt-get upgrade ``` 如果只希望升级内核,运行: ``` $ sudo apt-get upgrade linux-image-generic ``` **优势:** 简单。推荐方法。 **缺点:** 需要联网,需要重启。 从官方库中升级内核是最接近开箱即用的方法,并且不会出什么问题。如果是生产环境的系统,这是最为推荐的升级 Linux 内核的方法。 方法 1 和方法 2 都需要用户去介入到升级 Linux 内核的过程中。而下边的方法(3、 4、 5)则几乎是全自动的。 #### 方法 3 - 使用 Ukuu 升级 Linux 内核 **Ukuu**是一个 Gtk GUI 和命令行工具,它可以从 kernel.ubuntu.com 下载最新的 Linux 主线内核,并自动安装到你的 Ubuntu 桌面版和服务器版中。Ukku 不仅简化了手动下载和安装新内核的过程,同时也会帮助你安全地移除旧的和不再需要的内核。更多细节可以参照以下指南。 * [Ukuu:在 Ubuntu 系统中安装和升级 Linux 内核的简单方法](https://www.ostechnix.com/ukuu-an-easy-way-to-install-and-upgrade-linux-kernel-in-ubuntu-based-systems/) **优势:** 易于安装使用。自动安装主线内核。 
**缺点:** 需要联网,需要重启。 #### 方法 4 - 使用 UKTools 升级 Linux 内核 跟 Ukuu 差不多,**UKTools** 也会从 kernel.ubuntu.com 网站获取最新的稳定内核并且自动安装到 Ubuntu 以及类似于 Linux Mint 的延伸发行版中。关于UKTools的更多详情,请参见下面的链接。 * [UKTools:升级Ubuntu及其衍生产品中的最新Linux内核](https://www.ostechnix.com/uktools-upgrade-latest-linux-kernel-in-ubuntu-and-derivatives/) **优势:** 简单,自动。 **缺点:** 需要联网,需要重启。 #### 方法 5 - 使用 Linux 内核实用程序更新 Linux 内核 **Linux 内核实用程序**是目前另一个用于升级类 Ubuntu 系统 Linux 内核的程序。实质上,它是一个由一系列 Bash 脚本构成的合集,用于编译并且可以选择性地为 Debian(LCTT 译注:Ubuntu 的上游发行版)及其衍生发行版升级内核。它包含三个实用程序,一个用于手动编译、安装来自于 <http://www.kernel.org> 网站的源码内核,另一个用于安装来自 <https://kernel.ubuntu.com> 网站的预编译的内核,第三个脚本用于移除旧内核。更多细节请浏览以下链接。 * [Linux 内核实用程序:编译和更新最新的 Linux 内核的脚本,适用于 Debian 及其衍生产品](https://www.ostechnix.com/linux-kernel-utilities-scripts-compile-update-latest-linux-kernel-debian-derivatives/) **优势:** 简单,自动。 **缺点:** 需要联网,需要重启。 ### 第二部分:无需重启的内核升级 我之前说过,上边所有的方法都需要你重启服务器(LCTT 译注:也可以是桌面版)来启用新内核。如果是个人系统或者测试系统,可以这么办。但对于无法停机的生产环境系统该怎么办呢?一点问题没有,这时候<ruby> 实时补丁 <rt> livepatching </rt></ruby>就派上用场了。 **实时补丁**(或者叫热补丁)允许你在不重启的情况下安装 Linux 更新或补丁,使你的服务器处于最新的安全级别。这对 web 主机、游戏服务器这类需要不间断在线的服务器来说是很有价值的。事实上,任何情况下,服务器都应该保持在不间断运行的状态下。由于 Linux 供应商只会在出于修复安全漏洞的目的下维护补丁,所以如果安全性是你最关注的问题时,这种方式再适合不过了。 以下两种方法不需要重启,对于生产环境和执行关键任务的 Ubuntu 服务器的 Linux 内核更新非常有用。 #### 方法 6 – 使用 Canonical 实时补丁服务来更新 Linux 内核 ![](/data/attachment/album/202004/18/225103sv90kfs1019vx0i0.png) [Canonical 实时补丁服务](https://www.ubuntu.com/livepatch)可以在不需要重启 Ubuntu 系统的情况下自动应用内核更新、补丁和安全补丁。它可以减少Ubuntu系统的停机时间,并保证系统的安全。Canonical 实时补丁服务可以在安装过程当中或安装之后进行设置。如果你使用的是 Ubuntu 桌面版,软件更新器会自动检查内核补丁的更新,并通知你。在基于控制台的系统中,则需要你定期运行 `apt-get update` 命令来进行升级。由于需要你手动运行 `apt-get upgrade` 命令它才会安装内核的安全补丁,所以算是半自动的。 实时补丁对三个及以下系统免费,如果多于三个,你需要升级成名为 **Ubuntu Advantage** 的企业支持方案套件。这个套件包括 **Kernel 实时补丁**及以下服务: * 扩展安全维护 – Ubuntu 生命周期后的重要安全更新 * Landscape – 针对大规模使用 Ubuntu 的系统管理工具 * 知识库 – 由 Ubuntu 专家撰写的私人文章和教程 * 电话和网站支持 **价格** Ubuntu Advantage 包含三种付费计划,即基本计划、标准计划和高级计划。最基础的计划(基本计划)从 **单物理节点 225 美元/年**和**单VPS 75美元/年**开始计价。对于 Ubuntu 
服务器版和桌面版看上去没有按月订阅。你可以在[此处](https://www.ubuntu.com/support/plans-and-pricing)浏览所有计划的细节信息。 **优势:** 简单。半自动化。无需重启。支持三个免费系统。 **缺点:** 4 个以上主机的话非常昂贵。没有补丁回滚。 ##### 开启 Canonical 实时补丁 如果你想在安装后设置实时补丁服务,依照以下方法逐步执行: 从 <https://auth.livepatch.canonical.com/> 获取一个密钥。 ``` $ sudo snap install canonical-livepatch $ sudo canonical-livepatch enable your-key ``` #### 方法 7 - 使用 KernelCare 升级 Linux 内核 ![](/data/attachment/album/202004/18/225105htzt88xfnt4it8rm.png) [KernelCare](https://www.kernelcare.com/) 是最新的实时补丁方案。它是 [CloudLinux](https://www.cloudlinux.com/) 推出的产品。KernelCare 可以运行在 Ubuntu 和其他的 Linux 发行版中。它每四个小时检查一遍补丁的发布,并在无需确认的情况下安装它们。如果更新后存在问题,可以将补丁进行回滚。 **价格** 费用,每台服务器:**4 美元/月**,**45 美元/年**。 跟 Ubuntu 实时补丁相比,KernelCare 看起来非常便宜、实惠。好的方面在于**也可以按月订阅**。另一个前者不具备的功能是支持其他 Linux 发行版,如 Red Hat、CentOS、Debian、Oracle Linux、Amazon Linux 以及 OpenVZ、Proxmox 等虚拟化平台。 你可以在[此处](https://www.kernelcare.com/update-kernel-linux/)了解 KernelCare 的所有特性和简介,以及所有的付费计划的细节。 **优势:** 简单。全自动化。覆盖范围更广的操作系统。补丁回滚。无需重启。对非营利组织提供免费许可。价格低廉。 **缺点:** 不是免费的(除了30天的试用期)。 ##### 开启 KernelCare 服务 在 <https://cloudlinux.com/kernelcare-free-trial5> 获取一个 30 天免费试用密钥。 执行以下命令开启 KernelCare 并注册秘钥。 ``` $ sudo wget -qq -O - https://repo.cloudlinux.com/kernelcare/kernelcare_install.sh | bash $ sudo /usr/bin/kcarectl --register KEY ``` 如果你正在寻找一种经济实惠且可靠的商业服务来保持 Linux 服务器上的 Linux 内核更新,那么 KernelCare 是个不错的选择。 *由来自 Cloud Linux 的技术撰稿人和内容作者 Paul A. Jacobs 提供。* 到此,希望这边文章能对你有所帮助。如果你觉得还有其他的工具和方法需要列在这里,可以在留言区给我们留言。我会根据反馈检查和更新这篇指南的。 接下来会有更多好东西给大家呈现,敬请期待。 Cheers! --- via: <https://www.ostechnix.com/different-ways-to-update-linux-kernel-for-ubuntu/> 作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[mr-ping](https://github.com/mr-ping) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
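A recurring theme in methods 1–5 above is that the new kernel only takes effect after a reboot. One way to check whether the running kernel is already the newest installed one is to compare `uname -r` against the kernel images under `/boot`. The sketch below parameterizes the boot directory and the running release so the logic can be exercised against a scratch directory; it assumes Debian/Ubuntu-style `vmlinuz-<release>` file names and GNU `sort -V`.

```shell
#!/usr/bin/env bash
# Decide whether a reboot is needed to pick up a newly installed kernel.
# On a real Ubuntu system you would call: reboot_needed /boot "$(uname -r)"

# Print the highest kernel release found as vmlinuz-* in the given directory.
newest_installed_kernel() {
    ls "$1"/vmlinuz-* 2>/dev/null | sed 's|.*/vmlinuz-||' | sort -V | tail -n 1
}

# Succeed (exit 0) when the newest installed kernel differs from the
# running release passed as the second argument.
reboot_needed() {
    local newest
    newest=$(newest_installed_kernel "$1")
    [ -n "$newest" ] && [ "$newest" != "$2" ]
}
```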
403
Forbidden
null
12,127
在 Linux 中遨游手册页的海洋
https://www.networkworld.com/article/3519853/navigating-man-pages-in-linux.html
2020-04-19T11:56:46
[ "man", "手册页" ]
https://linux.cn/article-12127-1.html
> > Linux 系统上的手册页可以做的不仅仅是提供特定命令的信息。它们可以帮助你发现你没有意识到的命令。 > > > ![](/data/attachment/album/202004/19/115639l21jdqltn02zbwq7.jpg) 手册页提供了关于 Linux 命令的基本信息,很多用户经常参考它,但手册页的内容比我们很多人意识到的要多得多。 你总是可以输入一个像 `man who` 这样的命令,然后得到 `who` 命令的工作原理的漂亮描述,但是探索你可能不知道的命令可能会更有启发。例如,你可以使用 `man` 命令来帮助找到一些处理非常具有挑战性的任务的命令,或者显示一些选项,这些选项可以帮助你以新的更好的方式使用你已经知道的命令。 让我们来浏览一些选项,看看最终的结果是什么。 ### 用 man 去识别命令 `man` 命令可以帮助你按主题查找命令。例如,如果你要找一个计算文件中的行数的命令,你可以提供一个关键字。在下面的例子中,我们把关键字 “count” 放在了引号中,并添加了空格,这样我们就不会得到与 “accounts” 或 “accounting” 相关的命令,而是得到那些可以为我们做一些计算的命令。 ``` $ man -k ' count ' anvil (8postfix) - Postfix session count and request rate control cksum (1) - checksum and count the bytes in a file sum (1) - checksum and count the blocks in a file timer_getoverrun (2) - get overrun count for a POSIX per-process timer ``` 为了显示与新用户账户相关的命令,我们可以尝试使用这样的命令。 ``` $ man -k "new user" newusers (8) - update and create new users in batch useradd (8) - create a new user or update default new user information zshroadmap (1) - informal introduction to the zsh manual The Zsh Manual, … ``` 需要说明的是,上面的第三项只是提到 “new users” 类似的内容,并不是设置、删除或配置用户账号的命令。`man` 命令只是在命令描述中匹配了一些词,作用很像 `apropos` 命令。注意上面列出的每个命令后面的括号中的数字。这些数字与包含这些命令的手册页的分区有关。 ### 确定手册页的分区 `man` 命令的分区将其内容划分为不同的类别。要列出这些类别,请键入 `man man`,并查看类似下面的描述。你的系统中很可能没有第 9 分区的命令。 * `1`:可执行程序或 shell 命令 * `2`:系统调用(内核提供的函数) * `3`:库调用(程序库内的函数) * `4`:特殊文件(通常在可以 `/dev` 中找到) * `5`:文件格式和惯例,例如 `/etc/passwd` * `6`:游戏 * `7`:杂项(包括宏包和约定),例如 `man`(7)、`groff`(7) * `8`:系统管理命令(通常只由 root 用户使用) * `9`:内核例程(非标准) 手册页涵盖了比我们通常认为的“命令”更多的内容。从上面的描述中可以看到,它们涵盖了系统调用、库调用、特殊文件等等。 下面的列表显示了 Linux 系统中的手册页的实际存储位置。这些目录上的日期会有所不同,因为随着更新,其中一些分区会有新的内容,而另一些则不会。 ``` $ ls -ld /usr/share/man/man? 
drwxr-xr-x 2 root root 98304 Feb 5 16:27 /usr/share/man/man1 drwxr-xr-x 2 root root 65536 Oct 23 17:39 /usr/share/man/man2 drwxr-xr-x 2 root root 270336 Nov 15 06:28 /usr/share/man/man3 drwxr-xr-x 2 root root 4096 Feb 4 10:16 /usr/share/man/man4 drwxr-xr-x 2 root root 28672 Feb 5 16:25 /usr/share/man/man5 drwxr-xr-x 2 root root 4096 Oct 23 17:40 /usr/share/man/man6 drwxr-xr-x 2 root root 20480 Feb 5 16:25 /usr/share/man/man7 drwxr-xr-x 2 root root 57344 Feb 5 16:25 /usr/share/man/man8 ``` 注意,为了节省空间,手册页文件一般都是 gzip 压缩的。每当你使用 `man` 命令时,`man` 命令会根据需要解压。 ``` $ ls -l /usr/share/man/man1 | head -10 total 12632 lrwxrwxrwx 1 root root 9 Sep 5 06:38 [.1.gz -> test.1.gz -rw-r--r-- 1 root root 563 Nov 7 05:07 2to3-2.7.1.gz -rw-r--r-- 1 root root 592 Apr 23 2016 411toppm.1.gz -rw-r--r-- 1 root root 2866 Aug 14 10:36 a2query.1.gz -rw-r--r-- 1 root root 2361 Sep 9 15:13 aa-enabled.1.gz -rw-r--r-- 1 root root 2675 Sep 9 15:13 aa-exec.1.gz -rw-r--r-- 1 root root 1142 Apr 3 2018 aaflip.1.gz -rw-r--r-- 1 root root 3847 Aug 14 10:36 ab.1.gz -rw-r--r-- 1 root root 2378 Aug 23 2018 ac.1.gz ``` ### 按分区列出的手册页 即使只看第 1 分区的前 10 个手册页(如上所示),你也可能会看到一些新的命令 —— 也许是 `a2query` 或 `aaflip`(如上所示)。 探索命令的更好策略是按分区列出命令,不查看文件本身,而是使用 `man` 命令向你显示命令并提供每个命令的简要说明。 在下面的命令中,`-s 1` 指示 `man` 显示第 1 分区中的命令信息。`-k .` 使该命令对所有命令都有效,而不是指定一个特定的关键字;如果没有这个,`man` 命令就会回过头来问:“你想要什么手册页?”所以,使用关键字来选择一组相关的命令,或者使用点来显示一个分区中的所有命令。 ``` $ man -s 1 -k . 2to3-2.7 (1) - Python2 to Python3 converter 411toppm (1) - convert Sony Mavica .411 image to ppm as (1) - the portable GNU assembler. baobab (1) - A graphical tool to analyze disk usage busybox (1) - The Swiss Army Knife of Embedded Linux cmatrix (1) - simulates the display from "The Matrix" expect_dislocate (1) - disconnect and reconnect processes red (1) - line-oriented text editor enchant (1) - a spellchecker … ``` ### 有多少手册页? 如果你对每个分区中有多少手册页感到好奇,可以使用以下命令按分区对它们进行计数: ``` $ for num in {1..8} > do > man -s $num -k . 
| wc -l > done 2382 493 2935 53 441 11 245 919 ``` 确切的数量可能有所不同,但是大多数 Linux 系统的命令数量差不多。如果我们使用命令将这些数字加在一起,我们可以看到运行该命令的系统上有将近 7500 个手册页。有很多命令,系统调用等。 ``` $ for num in {1..8} > do > num=`man -s $num -k . | wc -l` > tot=`expr $num + $tot` > echo $tot > done 2382 2875 5810 5863 6304 6315 6560 7479 <=== total ``` 阅读手册页可以学到很多东西,但是以其他方式浏览手册页可以帮助你了解系统上可能不知道的命令。 --- via: <https://www.networkworld.com/article/3519853/navigating-man-pages-in-linux.html> 作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
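The `man -s $num -k . | wc -l` loop above depends on the man-db index being built. An alternative sketch counts the page files themselves under each section directory; the root is a parameter, so the same function works on `/usr/share/man` or on a throwaway test directory, and the numbers are reproducible.

```shell
#!/usr/bin/env bash
# Count manual page files per section under the given man root
# (on a real system: /usr/share/man). Sections without a directory
# are silently skipped.
count_man_pages() {
    local num dir
    for num in 1 2 3 4 5 6 7 8; do
        dir="$1/man$num"
        if [ -d "$dir" ]; then
            printf 'man%s: %s\n' "$num" "$(find "$dir" -type f | wc -l | tr -d ' ')"
        fi
    done
}

count_man_pages /usr/share/man
```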
301
Moved Permanently
null
12,128
如何在 Linux 的 Nautilus 文件管理器中以管理员身份打开文件和文件夹
https://itsfoss.com/open-nautilus-as-administrator/
2020-04-19T21:12:50
[ "Nautilus", "文件管理器" ]
https://linux.cn/article-12128-1.html
> > 了解如何在 Ubuntu 和其他 Linux 发行版的 Nautilus 文件管理器的右键菜单中添加“以管理员身份打开”选项。 > > > ![](/data/attachment/album/202004/19/211224hurk01u0kuvqnpvn.jpg) 如果要以根用户身份打开或编辑文件,你总是可以在终端中执行此操作。但我知道有些人对命令行不适应。 桌面 Linux 通常为那些害怕终端的人提供方法避免命令行。 如果你必须以 root 用户身份访问文件夹或以 root 用户权限编辑文件,那你可以在 [Nautilus 文件管理器](https://wiki.gnome.org/Apps/Files)中以图形方式进行操作。 一个小巧优雅的 Nautilus 技巧能让你以管理员(也就是 root)打开文件和文件夹。让我向你展示如何做。 ### 在 Nautilus 文件管理器的右键菜单中添加“以管理员身份打开”选项 > > 警告!请不要以 root 用户身份打开和编辑随机文件,因为这样可能会弄乱文件并导致系统损坏。仅在需要时使用它。 > > > 我展示的是 Ubuntu 的步骤。你可以根据你的发行版的软件包管理器进行更改。 你必须使用终端(即使你不喜欢它)来安装 Nautilus 插件。请[确保已启用 Universe 仓库](https://itsfoss.com/ubuntu-repositories/): ``` sudo apt install nautilus-admin ``` 关闭并再次打开 Nautilus 文件管理器以查看更改生效。 ![Right clock to see the “Open as Administrator” option](/data/attachment/album/202004/19/211253hvvqqdg77wrptpm5.jpg) 你也可以用 root 用户身份编辑文件。只需选择文件,右键单击它,然后选择“以管理员身份编辑”选项。 ![Edit Files As Root Ubuntu](/data/attachment/album/202004/19/211256x7ewi1u4y74u77nu.jpg) 这两种情况下,系统都会提示你输入帐户密码: ![You need to enter your password, of course](/data/attachment/album/202004/19/211257ykcyu0ryj677z287.png) 差不多了。你可以享受 GUI 的舒适了。 如果你不想再以 root 用户身份运行 Nautilus,那么可以删除此插件。删除已安装但不再使用的其他东西总是没错的。 在终端中(没错,再一次在终端),使用以下命令删除 Nautilus 插件。 ``` sudo apt remove nautilus-admin ``` 顺便说一句,如果你在使用 [Ubuntu MATE](https://ubuntu-mate.org/),你可以使用 caja-admin 代替 nautilus-admin。其他文件管理器可能会或可能不会提供此类功能。 我希望这个快速技巧对你有所帮助。随时欢迎提出问题和建议。 --- via: <https://itsfoss.com/open-nautilus-as-administrator/> 作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
![Warp Terminal](/assets/images/warp-terminal.webp) If you want to open or edit a file as root, you can always do that in the terminal. I know a few people don’t feel too comfortable with the command line. The desktop Linux often provides ways to avoid the command line for terminal-fearing people. If you are in a situation where you have to access a folder as root or edit a file with root privilege, you can do that graphically in [Nautilus file manager](https://wiki.gnome.org/Apps/Files?ref=itsfoss.com). A neat Nautilus hack allows you to open files and folders as administrator, i.e. root. Let me show you how. ## Add ‘open as administrator’ option in right click context menu in Nautilus file manager I am showing the installation steps for Ubuntu. You can change it as per your distribution’s package manager. You’ll have to use terminal (even if you don’t like it) for installing the Nautilus plugin. Please [make sure that you have the universe repository enabled](https://itsfoss.com/ubuntu-repositories/): `sudo apt install nautilus-admin` Close and open the Nautilus file manager again to see the changes in effect. ![Open Folder As Administrator Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/open-folder-as-administrator-ubuntu.jpg) You can also edit files as root the same way. Just select the file, right click on it and choose the “Edit as Administrator” option. ![Edit Files As Root Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/edit-files-as-root-ubuntu.jpg) In both cases, you’ll be prompted to enter your account’s password: ![Authentication Pop Up Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/authentication-pop-up-ubuntu.png) That’s pretty much it. You can enjoy the comfort of GUI. In case you don’t want to run Nautilus as root anymore, you can remove this plugin. It’s always good to remove additional things you installed but you don’t use anymore.
In the terminal (yes, again), use the following command to remove the Nautilus plugin. `sudo apt remove nautilus-admin` By the way, if you are using [Ubuntu MATE](https://ubuntu-mate.org/?ref=itsfoss.com), you can use caja-admin, instead of nautilus-admin. Other file managers may or may not provide such a feature. ## More Nautilus tweaks You can explore more such uncommon, [hidden features of Nautilus](https://itsfoss.com/nautilus-tips-tweaks/) in this article. [13 Ways to Tweak Nautilus File Manager in LinuxNautilus, aka GNOME Files, is a good file manager with plenty of features. You can further enhance your experience by using these extensions, tweaks and tips.](https://itsfoss.com/nautilus-tips-tweaks/)![](https://itsfoss.com/content/images/wordpress/2022/07/customizing-GNOME-Nautilus-File-Manager.png) I hope you find this quick tip helpful. Questions and suggestions are always welcome.
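Before reaching for the “Edit as Administrator” entry at all, it can be handy to check whether you actually need elevated privileges for a given path. A small sketch (the function name `needs_admin` is made up for illustration) that reports whether the current user can already write to a file; `-w` covers ownership as well as group and ACL permissions:

```shell
#!/usr/bin/env bash
# Print "yes" if editing the given path would require elevated
# privileges for the current user, "no" otherwise.
needs_admin() {
    if [ -w "$1" ]; then
        echo "no"
    else
        echo "yes"
    fi
}

needs_admin /etc/hosts
```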
12,130
Linux 防火墙入门教程
https://opensource.com/article/20/2/firewall-cheat-sheet
2020-04-20T10:42:00
[ "防火墙", "firewalld" ]
/article-12130-1.html
> > 防火墙是你的计算机防止网络入侵的第一道屏障。为确保你的安全,请下载我们的备忘单。 > > > ![](/data/attachment/album/202004/20/104205paugcytauqctqw6c.jpg) 合理的防火墙是你的计算机防止网络入侵的第一道屏障。你在家里上网,通常互联网服务提供会在路由中搭建一层防火墙。当你离开家时,那么你计算机上的那层防火墙就是仅有的一层,所以配置和控制好你 Linux 电脑上的防火墙很重要。如果你维护一台 Linux 服务器,那么知道怎么去管理你的防火墙同样重要,只要掌握了这些知识你才能保护你的服务器免于本地或远程非法流量的入侵。 ### 安装防火墙 很多 Linux 发行版本已经自带了防火墙,通常是 `iptables`。它很强大并可以自定义,但配置起来有点复杂。幸运的是,有开发者写出了一些前端程序来帮助用户控制防火墙,而不需要写冗长的 iptables 规则。 在 Fedora、CentOS、Red Hat 和一些类似的发行版本上,默认安装的防火墙软件是 `firewalld`,通过 `firewall-cmd` 命令来配置和控制。在 Debian 和大部分其他发行版上,可以从你的软件仓库安装 firewalld。Ubuntu 自带的是<ruby> 简单防火墙 <rt> Uncomplicated Firewall </rt></ruby>(ufw),所以要使用 firewalld,你必须启用 `universe` 软件仓库: ``` $ sudo add-apt-repository universe $ sudo apt install firewalld ``` 你还需要停用 ufw: ``` $ sudo systemctl disable ufw ``` 没有理由*不用* ufw。它是一个强大的防火墙前端。然而,本文重点讲 firewalld,因为大部分发行版都支持它而且它集成到了 systemd,systemd 是几乎所有发行版都自带的。 不管你的发行版是哪个,都要先激活防火墙才能让它生效,而且需要在启动时加载: ``` $ sudo systemctl enable --now firewalld ``` ### 理解防火墙的域 Firewalld 旨在让防火墙的配置工作尽可能简单。它通过建立<ruby> 域 <rt> zone </rt></ruby>来实现这个目标。一个域是一组的合理、通用的规则,这些规则适配大部分用户的日常需求。默认情况下有九个域。 * `trusted`:接受所有的连接。这是最不偏执的防火墙设置,只能用在一个完全信任的环境中,如测试实验室或网络中相互都认识的家庭网络中。 * `home`、`work`、`internal`:在这三个域中,接受大部分进来的连接。它们各自排除了预期不活跃的端口进来的流量。这三个都适合用于家庭环境中,因为在家庭环境中不会出现端口不确定的网络流量,在家庭网络中你一般可以信任其他的用户。 * `public`:用于公共区域内。这是个偏执的设置,当你不信任网络中的其他计算机时使用。只能接收选定的常见和最安全的进入连接。 * `dmz`:DMZ 表示隔离区。这个域多用于可公开访问的、位于机构的外部网络、对内网访问受限的计算机。对于个人计算机,它没什么用,但是对某类服务器来说它是个很重要的选项。 * `external`:用于外部网络,会开启伪装(你的私有网络的地址被映射到一个外网 IP 地址,并隐藏起来)。跟 DMZ 类似,仅接受经过选择的传入连接,包括 SSH。 * `block`:仅接收在本系统中初始化的网络连接。接收到的任何网络连接都会被 `icmp-host-prohibited` 信息拒绝。这个一个极度偏执的设置,对于某类服务器或处于不信任或不安全的环境中的个人计算机来说很重要。 * `drop`:接收的所有网络包都被丢弃,没有任何回复。仅能有发送出去的网络连接。比这个设置更极端的办法,唯有关闭 WiFi 和拔掉网线。 你可以查看你发行版本的所有域,或通过配置文件 `/usr/lib/firewalld/zones` 来查看管理员设置。举个例子:下面是 Fefora 31 自带的 `FedoraWorkstation` 域: ``` $ cat /usr/lib/firewalld/zones/FedoraWorkstation.xml <?xml version="1.0" encoding="utf-8"?> <zone> <short>Fedora Workstation</short> <description>Unsolicited incoming network packets are 
rejected from port 1 to 1024, except for select network services. Incoming packets that are related to outgoing network connections are accepted. Outgoing network connections are allowed.</description> <service name="dhcpv6-client"/> <service name="ssh"/> <service name="samba-client"/> <port protocol="udp" port="1025-65535"/> <port protocol="tcp" port="1025-65535"/> </zone> ``` ### 获取当前的域 任何时候你都可以通过 `--get-active-zones` 选项来查看你处于哪个域: ``` $ sudo firewall-cmd --get-active-zones ``` 输出结果中,会有当前活跃的域的名字和分配给它的网络接口。笔记本电脑上,在默认域中通常意味着你有个 WiFi 卡: ``` FedoraWorkstation interfaces: wlp61s0 ``` ### 修改你当前的域 要更改你的域,请将网络接口重新分配到不同的域。例如,把例子中的 `wlp61s0` 卡修改为 public 域: ``` $ sudo firewall-cmd --change-interface=wlp61s0 --zone=public ``` 你可以在任何时候、任何理由改变一个接口的活动域 —— 无论你是要去咖啡馆,觉得需要增加笔记本的安全策略,还是要去上班,需要打开一些端口进入内网,或者其他原因。在你凭记忆学会 `firewall-cmd` 命令之前,你只要记住了关键词 `change` 和 `zone`,就可以慢慢掌握,因为按下 `Tab` 时,它的选项会自动补全。 ### 更多信息 你可以用你的防火墙干更多的事,比如自定义已存在的域,设置默认域,等等。你对防火墙越了解,你在网上的活动就越安全,所以我们创建了一个[备忘单](https://opensource.com/downloads/firewall-cmd-cheat-sheet)便于速查和参考。 * 下载你的 [防火墙备忘单](https://opensource.com/downloads/firewall-cmd-cheat-sheet)。(需注册) --- via: <https://opensource.com/article/20/2/firewall-cheat-sheet> 作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lxbwolf](https://github.com/lxbwolf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
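The zone files shown above (e.g. `/usr/lib/firewalld/zones/FedoraWorkstation.xml`) are small, flat XML, so the services a zone allows can be pulled out with a one-line `sed`. This is a convenience sketch, not a replacement for `firewall-cmd --list-services`; for anything more complex, use a real XML parser such as `xmllint`.

```shell
#!/usr/bin/env bash
# List the services a firewalld zone file allows by extracting the
# name= attribute of each <service .../> element.
zone_services() {
    sed -n 's/.*<service name="\([^"]*\)".*/\1/p' "$1"
}
```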
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
12,131
我是如何用 AI 把“请洗手”翻译成 500 种语言的?
https://opensource.com/article/20/4/ai-translation
2020-04-20T18:44:00
[ "翻译", "AI" ]
https://linux.cn/article-12131-1.html
> > 通过使用人类和机器生成的翻译,可以将关键的健康短语翻译成世界各地的当地语言。 > > > ![](/data/attachment/album/202004/20/184427f1a2t5z61m5xxo1t.jpg) 你可能不知道,目前世界上有 [7117 种语言](https://www.ethnologue.com/guides/how-many-languages)在使用,不是方言,而是在用的语言! 然而,世界上许多数字媒体只能使用几十种语言,而像谷歌翻译这样的翻译平台只支持 100 种左右的语言。这样的现实意味着,由于缺乏及时获取信息的机会,全世界有数十亿人被边缘化。当前的冠状病毒(COVID-19)大流行已经让人痛苦地意识到了这一点,凸显了将健康相关的短语(如“<ruby> 请洗手 <rt> wash your hands </rt></ruby>”或“保持距离”等)即时、快速翻译成小众语言的必要性。 为此,我应用了最先进的 AI 技术,用 544 种语言构建出了与“请洗手”相近的短语并进行了统计(我的 GPU 还在运行)。<ruby> 多语言无监督和受监督嵌入 <rt> Multilingual Unsupervised and Supervised Embeddings </rt></ruby>(MUSE)方法被用来训练这 544 种语言和英语之间的跨语言单词嵌入。然后,这些嵌入方法可以从现有文档中提取出与目标短语相似的短语。 我与 SIL 国际公司的同事们合作完成了这项工作,他们收集了该短语的更多的人工翻译结果。这些人工翻译结果和我的一些机器翻译结果的组合可以在[这个民族语指南页面](https://www.ethnologue.com/guides/health)上搜索到(机器生成的短语用一个小的机器人图标表示),更多的翻译将在生成/收集到的时候加入。 ### 利用现有的语料库 SIL 国际公司已经完成了 2000 多种语言的语言工作,目前管理着 1600 多个语言项目。因此,当我解决这个特殊的问题时,我知道我们很可能已经多次将“请洗手”和/或类似的短语翻译成了数百种语言,而这一猜测得到了回报。我很快就从我们的档案库中收集到了超过 900 种语言的文档(主要是完成的贝壳书模板、教材和圣经)。这些文档中的每一份都有一个英文的对应版本,其中必然包括“请洗手”和/或类似“请洗脸”这样的短语。此外,这些文档的质量都很高,并与当地语言社区合作进行了翻译和检查。 这是相当多语言的数据集。然而,有两个问题需要克服。首先,这个数据包含了大多数语言的数千种样本,这与训练机器翻译模型所使用的数百万个样本形成了鲜明对比。其次,即使文档中包含目标语言中的“请洗手”这个短语,我们也不知道这个短语在周围文本中的确切位置。 我们当然可以利用[低资源语言的机器翻译](https://datadan.io/blog/resources-for-low-resource-machine-translation)中的一些最新技巧,但是需要花费一些时间来调整自动化方法,以快速适应每种语言对中的翻译模型。此外,我们所针对的许多语言都没有现成的的基线,可以用来比较评估指标(例如 [BLEU 评分](https://en.wikipedia.org/wiki/BLEU))。考虑到对冠状病毒大流行的迫切担忧,我们希望比这更快一点(尽管我们计划在将来再来解决这个问题)。 我选择通过在现有的文档中寻找短语本身或短语的组件(如“请洗”或“你的手”)来尝试构建“请洗手”这个短语。为了找到这些成分,我使用 Facebook Research 的[多语言无监督和受监督嵌入(MUSE)](https://github.com/facebookresearch/MUSE)对每个 {英语、目标语言} 对进行了<ruby> 跨语言 <rt> cross-lingual </rt></ruby>嵌入训练。MUSE 以<ruby> 单语言 <rt> monolingual </rt></ruby>的单词嵌入作为输入(我使用 [fasttext](https://fasttext.cc/) 来生成这些词),并使用对抗性方法学习了从英语到目标嵌入空间的映射。这个过程的输出是<ruby> 跨语言 <rt> cross-lingual </rt></ruby>的单词嵌入。 ![](/data/attachment/album/202004/20/185121p87fx77spuff0gcf.gif) 一旦产生了跨语言嵌入,我们就可以开始在目标语言文档中寻找短语组件。结果发现,整个文档中清楚地使用了“请洗脸”这个短语以及单独的“手”、“请洗”等词。对于每一种语言,我都通过 
n-gram 搜索我预期该短语会出现的地方(根据其在英语的对应版本中的用法)。使用跨语言嵌入法对 n-gram 进行了矢量化处理,并使用各种距离指标与英语短语的矢量化版本进行了比较。在嵌入空间中,与英文短语“最接近”的 n-gram 被确定为与目标语言匹配。 最后,将与英语对应的成分短语进行组合,生成目标语言中的“请洗手”短语。这种组合方式再次利用了跨语言嵌入,以确保以合适方式组合组件。例如,如果我们在目标语言中匹配“请洗脚”这个短语,就必须将“脚”对应的 n-gram 替换成“手”对应的 n-gram。下面是<ruby> 伯利兹·克里奥尔 <rt> Belize Kriol </rt></ruby>英语的一个例子: ![](/data/attachment/album/202004/20/185306mn8n48v9lbgcnbnz.gif) 当然,在这个匹配过程中,会做一些假设,这个过程完全有可能不能产生语法上正确的预测。例如,我假设在大多数语言中,“手”的单词和“脚”的单词都是一个<ruby> 字元 <rt> token </rt></ruby>长的(字元由空格和标点符号隔开)。当然并非总是如此。这可能会造成类似于“和洗和手你”或类似的瑕疵词条。希望我们可以克服其中的一些局限性,并在未来扩展这个系统,但是,现在,我们选择用图形来强化这个想法。 我们将世界卫生组织的洗手说明改编成了一个 PNG 图片模板。然后,我们把我们翻译和生成的短语,用 Bash 和 Go 脚本的组合将其渲染到洗手图像中。这样,在文字和图像中都强调了正确洗手的理念(以防万一我们生成的翻译很尴尬)。 ### 结果 到目前为止,我已经能够训练出 544 种语言的跨语言嵌入。我使用上述讨论过的方法尝试为所有这些语言构建“请洗手”这个短语。因为我没有许多语言对的对齐数据,所以我使用了同样包含“请洗手”成分的单独的保留文档来帮助验证构造短语中的字元。这让我们对公开发布的翻译版本有了一些信心(至少它们包含了表示“洗”和/或“手”的信息)。此外,我还将该方法与谷歌翻译支持的和/或有可用的人工翻译的语言对进行了比较。以下是来自 [Ethnologue](https://www.ethnologue.com/) 带有语言统计的翻译样本。 **语言:意大利语 [Ita]** * 地点:意大利 * 人口: 68,000,000 * 我们的系统: làvati la mani * 谷歌翻译: Lavati le mani **语言:保加利亚语 [bul]** * 地点:保加利亚 * 人口:8,000,000 * 我们的系统:умий ръцете * 谷歌翻译:Измий си ръцете **语言: 荷兰语 [nld]** * 地点:荷兰 * 人口:24,000,000,000 * 我们的系统:wast uw handen * 谷歌翻译:Was je handen **语言: Pijin [pis]** * 地点:所罗门群岛 * 人口: 550,000 * 我们的系统:wasim han * 谷歌翻译:不支持 **语言:Tikar [tik]** * 地点:喀麦隆 * 人口:110,000 * 我们的系统:ɓɔsi fyàʼ * 谷歌翻译:不支持 **语言:Waffa [waj]** * 地点:巴布亚新几内亚 * 人口:1,300 * 我们的系统:yaakuuvaitana nnikiiyauvaa fini * 谷歌翻译:不支持 构造的短语类似于参考翻译,或者似乎是“请洗手”的另一种说法。例如,在保加利亚语中,我预测为“умий ръцете”,而谷歌翻译预测为“Измий си ръцете”。 然而,如果我用谷歌翻译回译我的预测,我还是会得到“请洗手”。有一些不确定的地方,我无法与参考译文(例如,所罗门群岛的 Pijin [pis])或人类注释的跨度进行比较,但我仍然可以验证“洗”(wasim)和“手”(han)分别用在其他必定是谈论洗或手的参考文件中。 大约有 15% 的译文可以用这个方法验证,我希望在收集参考文献字典的过程中能进行更多的验证。 请注意,我最多使用了每种语言中大约 7000 个句子来得到上述译文,即使是意大利语这样的高资源语言也是如此。我也不依赖语言对之间的对齐句子。尽管存在这种数据非常稀缺、无监督的情况,但对于两个系统都支持的语言,我仍然能够获得类似于谷歌翻译的短语。这证明了这种“混合”方法(无监督的单词嵌入+基于规则的匹配)在将短语翻译成数据非常少的语言中的潜在用途。 
注意:我绝对不是说这是解决冠状病毒和其他健康相关的信息传播问题的解决方案。这里仍有很多东西需要探索和正式评估,我们正在为此努力。在很多情况下,这种方法无法帮助构建数百种语言的重要信息资料。但是,我认为,我们所有人都应该尝试着为当前危机的相关问题制定创造性的解决方案。也许这只是一个非常大的拼图中的一块。 你可以在[这个民族语言指南](https://www.ethnologue.com/guides/health)上查看经过验证的译文加上人工翻译的完整列表。此外,我们即将以论文的形式对这一系统进行更深入的描述和分析。我们欢迎公众对翻译进行反馈,以帮助系统进行微调,最重要的是,确保将健康信息传递给世界各地的边缘化语言社区。 ### 制作自己的洗手海报 我们已经开源了[用于渲染复合的脚本和生成洗手海报的代码](https://github.com/sil-ai/wash-your-hands)。这种方法应该能够处理几乎所有的语言和脚本。你可以在海报中添加你自己的“请洗手”的翻译,以帮助传播,或者根据自己的本地语境进行翻译。请务必在社交媒体上以 #WashYourHands 为标签分享你生成的海报。 ### 培养你的 AI 技能 有很多令人兴奋的 AI 问题,可以给世界带来巨大的影响。如果你想用人工智能解决像上面提到的问题,或者你认为你的企业可能需要开始利用人工智能来做其他事情(供应链优化、推荐、客户服务自动化等),那么不要错过今年 5 月的[AI 课堂培训活动](https://datadan.io/)。*AI 课堂*是一个沉浸式的、为期三天的虚拟培训活动,适合至少有一定编程经验和数学基础知识的人参加。该培训提供了使用 Python 和开源框架(如 TensorFlow 和 PyTorch)进行现实的 AI 开发的实用基础知识。完成课程后,学员将有信心开始开发和部署自己的 AI 解决方案。 本文经许可转载自 <https://datadan.io/blog/wash-your-hands> --- via: <https://opensource.com/article/20/4/ai-translation> 作者:[Daniel Whitenack](https://opensource.com/users/datadan) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
You might not know, but there are currently [7,117 languages spoken in the world](https://www.ethnologue.com/guides/how-many-languages). Not dialects, but living languages! However, much of the world's digital media is available in only a couple dozen languages, and translation platforms like Google Translate only support around 100 languages. This reality means that there are billions of people around the world that are marginalized due to a lack of timely access to information. The current coronavirus (COVID-19) pandemic has made this painfully clear, and it has stressed the need for immediate, rapid translation of health-related phrases (like "wash your hands" or "keep your distance") into the long tail of languages. To this end, I applied state-of-the-art AI techniques to construct something close to the phrase "wash your hands" in 544 languages and counting (my GPUs are still running). Multilingual Unsupervised and Supervised Embeddings (MUSE) methods are used to train cross-lingual word embeddings between each of 544 languages and English. These embeddings then allow for the extraction of a phrase similar to the target phrase from existing documents. I performed this work in collaboration with my colleagues at SIL International, who have gathered even more human translations of the phrase. The combination of these human translations and some of my machine translations can be searched on [this Ethnologue guide page](https://www.ethnologue.com/guides/health) (machine-generated phrases are indicated with a little robot icon), and more translations will be added as they are generated/gathered. ## Leveraging existing corpora SIL International has done linguistic work in over 2000 languages and is currently managing over 1600 language projects. Thus, as I approached this particular problem, I knew that we had likely already translated the phrase "wash your hands" and/or similar phrases many times into hundreds of languages, and that guess paid off in spades. 
I was able to quickly gather documents (mostly completed shell book templates, educational materials, and Bibles) from our archives in over 900 languages. Each of these documents has an English parallel, which necessarily includes the phrase "wash your hands" and/or similar phrases like "wash your face." Moreover, each of these documents is very high quality and translated and checked in cooperation with the local language communities. That is quite the multilingual data set. However, there are two problems to overcome. First, this data included thousands of samples for most languages, which is in contrast to the millions used to train machine translation models. Second, even if the documents include the phrase "wash your hands" in the target language, we don't know the exact location of the phrase within the surrounding text. We could certainly exploit some of the latest tricks in [machine translation for low resource languages](https://datadan.io/blog/resources-for-low-resource-machine-translation), but it would take some time to tune automated methods for rapidly adapting translation models in each language pair. Moreover, many of the languages we are targeting have no existing baseline with which we could compare evaluation metrics, e.g., [BLEU score](https://en.wikipedia.org/wiki/BLEU). Given the pressing concerns about the Coronavirus pandemic, we wanted to move a bit faster than that (although we plan to return to this problem in the future). I opted to try and construct the phrase "wash your hands" by finding the phrase itself or components of the phrase (like "wash your" or "your hands") in existing documents. To find these, I trained cross-lingual embedding for each {English, Target Language} pair using [Multilingual Unsupervised and Supervised Embedding (MUSE)](https://github.com/facebookresearch/MUSE) from Facebook Research. 
MUSE takes monolingual word embeddings as input (I used [fasttext](https://fasttext.cc/) to generate these) and learns a mapping from the English to the target embedding space using adversarial methods. The output of this process is cross-lingual word embeddings. ![Using fasttext along with MUSE to perform cross-language embedding](https://opensource.com/sites/default/files/uploads/ai-language-translation-wash-your-hands-opensourcedotcom.gif) Once the cross-lingual embeddings are generated, we can get to finding the phrase components in the target language documents. As it turns out, the phrase "wash your face" was most clearly used throughout the documents along with instances of "hands," "wash your," etc. in isolation. For each of the languages, I searched through n-grams in areas where I expected the phrase to appear (based on its usage in the English parallel). N-grams were vectorized using the cross-lingual embedding and compared with vectorized versions of the English phrases using various distance metrics. The n-grams that were "closest" to the English phrases in the embedding space were determined to be the target language matches. Finally, component phrases matching their English counterparts were combined to generate the phrase "wash your hands" in the target language. This combination utilizes the cross-lingual embedding again to make sure that the components are combined in an appropriate manner. For example, if we matched the phrase "wash your feet" in the target language, the n-gram corresponding to "feet" must be replaced with the n-gram corresponding to "hands."
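The n-gram enumeration step described here is easy to sketch on its own. The embedding lookup and distance computation are deliberately omitted; this only shows how the candidate spans that get compared against the vectorized English phrase are produced:

```shell
#!/usr/bin/env bash
# Emit every word n-gram of a sentence, one per line.
# Usage: ngrams <n> <word> <word> ...
ngrams() {
    local n=$1; shift
    printf '%s\n' "$@" | awk -v n="$n" '
        { words[NR] = $0 }
        END {
            for (i = 1; i + n - 1 <= NR; i++) {
                g = words[i]
                for (j = i + 1; j < i + n; j++) g = g " " words[j]
                print g
            }
        }'
}

ngrams 2 wash your hands with soap
```

For the sentence "wash your hands with soap" with n = 2 this enumerates "wash your", "your hands", "hands with", and "with soap" as candidate spans.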
Here's an example for Belize Kriol English: ![](https://opensource.com/sites/default/files/uploads/ai-language-translation-wash-your-hands-opensourcedotcom2.gif) There were, of course, some assumptions that were made during this matching process, and it is entirely possible that this procedure does not produce grammatically correct predictions. For example, I assumed that in most languages, the word for "hands" and the word for "feet" are both one token long (with tokens being separated by spaces and punctuation). This is certainly not always the case. This could create a bad word salad something like "and wash the and hand you" or similar. Hopefully, we can overcome some of these limitations and extend the system in the future, but, for now, we chose to reinforce the idea with graphics. We adapted the World Health Organization's hand washing instructions into a template PNG image. We then took our translated and generated phrases and rendered them into the hand washing image using a combination of Bash and Go scripts. In this way, the idea of proper hand washing is emphasized in both text and imagery (just in case our generated translations are awkward). ![](https://opensource.com/sites/default/files/uploads/ai-language-translation-wash-your-hands-opensourcedotcom3.gif) ## Results Thus far, I've been able to train cross-lingual embeddings for 544 languages. I used the above-discussed method to try and construct "wash your hands" for all of these languages. Because I don't have aligned data for many of the language pairs, I used separate holdout documents also containing components of "wash your hands" to help validate the tokens in the constructed phrase. This gives us some confidence in the translations that we publicly release (at least that they contain information indicating washing and/or hands). In addition, I compared the method with language pairs that are also supported by Google Translate and/or have available human translations. 
Here's a sample of the translations with language stats from [the Ethnologue](https://www.ethnologue.com/): ### Language: Italian [ita] Location: Italy Population: 68,000,000 Our system: "làvati la mani" Google Translate: "Lavati le mani" ### Language: Bulgarian [bul] Location: Bulgaria Population: 8,000,000 Our system: "умий ръцете" Google Translate: "Измий си ръцете" ### Language: Dutch [nld] Location: Netherlands Population: 24,000,000 Our system: "wast uw handen" Google Translate: "Was je handen" ### Language: Pijin [pis] Location: Solomon Islands Population: 550,000 Our system: "wasim han" Google Translate: Not supported ### Language: Tikar [tik] Location: Cameroon Population: 110,000 Our system: "ɓɔsi fyàʼ" Google Translate: Not supported ### Language: Waffa [waj] Location: Papua New Guinea Population: 1,300 Our system: "yaakuuvaitana nnikiiyauvaa fini" Google Translate: Not supported The constructed phrases are similar to reference translations or appear to be alternative ways of saying "wash your hands." For example, in Bulgarian, I predict "умий ръцете," and Google Translate predicts "Измий си ръцете." However, if I back-translate my prediction using Google Translate, I still get "wash your hands." There is some uncertainty where I can't compare to reference translations (e.g., Pijin [pis] from the Solomon Islands) or human-annotated spans, but I can still validate that the word for wash (wasim) and the word for hands (han) are used in other reference documents that are necessarily talking about washing, or hands, respectively. About 15% of the translations could be validated using this method, and I hope to validate more as I gather reference dictionaries. Note, I used at most about 7,000 sentences in each language to get the above translations, even for high-resource languages like Italian. I also did not rely on aligned sentences between the language pairs. 
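The holdout validation described above can be sketched as a simple membership check: each predicted token is looked up in reference documents known to discuss washing and/or hands. The Pijin-like holdout sentence below is hypothetical:

```python
def validate_tokens(prediction, reference_docs):
    # A token is "validated" if it appears in at least one holdout document
    # known to discuss washing and/or hands.
    tokens = prediction.lower().split()
    corpus = " ".join(reference_docs).lower().split()
    return {tok: tok in corpus for tok in tokens}

docs = ["yu mas wasim han bifo yu kaikai"]  # hypothetical holdout text
print(validate_tokens("wasim han", docs))
```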
Despite this very data-scarce, unsupervised scenario, I was still able to obtain phrases similar to that of Google Translate for languages supported by both systems. This demonstrates the potential utility of this sort of "hybrid" approach (unsupervised alignment of word embeddings + rule-based matching) for translating short phrases into languages where very little data exists. Note—I'm definitely not saying that this is a solution to the problem of information spread about Coronavirus and other health-related issues. There are still a lot of things to explore and formally evaluate here, and we are working on that. In many cases, this approach won't be able to help construct important informational material in hundreds of languages. However, I think that we should all be trying to develop creative solutions to problems related to the current crisis. Maybe this is one piece of a very large puzzle. You can view the complete list of validated translations plus human translations on [this Ethnologue guide page](https://www.ethnologue.com/guides/health). In addition, a more thorough description and analysis of the system in paper form is forthcoming. We welcome feedback from the public on the translations to help fine-tune the system and, most of all, to make sure that health information gets out to marginalized language communities around the world. ## Create your own hand washing posters We have open sourced [the code used to render complex scripts and generate the hand washing posters](https://github.com/sil-ai/wash-your-hands). This methodology should be able to handle almost all languages and scripts. You can add your own translation of "wash your hands" to a poster to help spread the word or tailor the translations for your own local context. Be sure to share your generated posters on social media with the hashtag #WashYourHands. ## Develop your AI skills There are so many exciting AI problems out there that can make a huge impact in the world. 
If you want to solve problems like the one above with AI or if you think your business might need to start leveraging AI for other things (supply chain optimization, recommendation, customer service automation, etc.), don't miss the [AI Classroom training event this May](https://datadan.io/). *AI Classroom* is an immersive, three-day virtual training event for anyone with at least some programming experience and foundational understanding of mathematics. The training provides a practical baseline for realistic AI development using Python and open source frameworks like TensorFlow and PyTorch. After completing the course, participants will have the confidence to start developing and deploying their own AI solutions. *This article was republished with permission from https://datadan.io/blog/wash-your-hands*
12,132
如何在 Linux 上安装 Python
https://opensource.com/article/20/4/install-python-linux
2020-04-21T09:45:12
[ "Python" ]
https://linux.cn/article-12132-1.html
> > 在 Linux 上安装最新 Python,替代或与老版本并存的分步说明。 > > > ![](/data/attachment/album/202004/21/094500u63lnlgukgnjb0t0.jpg) [Python](https://www.python.org/) 现在是[最流行](http://pypl.github.io/PYPL.html)、最常用的编程语言。Python 的简单语法和较低的学习曲线使其成为初学者和专业开发人员的终极选择。Python 还是一种非常通用的编程语言。从 Web 开发到人工智能,它几乎在除了移动开发的所有地方都有使用。 如果你使用的是 Python,那么你很有可能是一名开发人员(或想成为一名开发人员),而 Linux 是创建软件的绝佳平台。但是,当你每天使用 Python 时,有时你希望使用最新版本。你可能不想仅仅为了测试最新版本的系统而替换了默认的 Python 安装,因此本文说明了如何在 Linux 上安装最新版本的 Python 3,而不替换发行版提供的版本。 使用 `python --version` 终端命令检查是否已安装 Python,如果已安装,那么检查是哪个版本。如果你的 Linux 系统上未安装 Python,或者你想安装更新的版本,请按照以下步骤操作。 ### 分步安装说明 #### 步骤 1:首先,安装构建 Python 所需的开发包 在 Debian 上 ``` $ sudo apt update $ sudo apt install build-essential zlib1g-dev \ libncurses5-dev libgdbm-dev libnss3-dev \ libssl-dev libreadline-dev libffi-dev curl ``` 在 Fedora 上: ``` $ sudo dnf groupinstall development ``` #### 步骤 2:下载最新的稳定版本的 Python 3 访问[官方 Python 网站](http://python.org)并下载最新版本的 Python 3。下载完成后,你会有一个 `.tar.xz` 归档文件(“tarball”),其中包含 Python 的源代码。 #### 步骤 3:解压 tarball 下载完成后,使用解压程序或 [Linux 的 tar 命令](https://opensource.com/article/17/7/how-unzip-targz-file)解压压缩包,例如: ``` $ tar -xf Python-3.?.?.tar.xz ``` #### 步骤 4:配置脚本 解压 Python 压缩包后,进入 `configure` 脚本所在目录并在 Linux 终端中使用以下命令执行该脚本: ``` $ cd Python-3.* ./configure ``` 配置可能需要一些时间。等待直到成功完成,然后再继续。 #### 步骤 5:开始构建过程 如果你的系统上已经安装了某个版本的 Python,并且希望同时安装新版本的 Python,请使用以下命令: ``` $ sudo make altinstall ``` 构建过程可能需要一些时间。 如果要使用此版本替换当前版本的 Python,那么应使用包管理器(例如 `apt` 或 `dnf`)卸载当前的 Python 包,然后安装: ``` $ sudo make install ``` 但是,通常最好以软件包的形式(例如 `.deb` 或 `.rpm` 文件)来安装软件,以便系统可以为你跟踪和更新它。因为本文假设尚未打包最新的 Python,所以你可能没有这个选择。在这种情况下,你可以按照建议使用 `altinstall` 来安装 Python,或者使用最新的源代码重构现有的 Python 包。这是一个高级主题,并且特定于你的发行版,因此不在本文讨论范围之内。 #### 步骤 6:验证安装 如果你没有遇到任何错误,那么现在你的 Linux 系统上已安装了最新的 Python。要进行验证,请在终端中输入以下命令之一: ``` python3 --version ``` 或者 ``` python --version ``` 如果输出显示 `Python 3.x`,那么说明 Python 3 已成功安装。 ### 创建虚拟环境(可选) Python 提供了名为 `venv`(虚拟环境)的软件包,可帮助你将程序目录或软件包与其他目录或软件包隔离。 要创建虚拟环境,请在 Python 终端中输入以下内容(在此示例中,假定你安装的 Python 版本为 `3.8` 系列): ``` 
python3.8 -m venv example ``` 该命令创建一个带有一些子目录的新目录(我将其命名为 `example`)。 要激活虚拟环境,请输入: ``` $ source example/bin/activate (example) $ ``` 请注意,你的终端提示符(`$`)现在以环境名称开头。 要停用虚拟环境,请使用 `deactivate` 命令: ``` (example) $ deactivate ``` ### 总结 Python 是一种有趣的语言,它的开发和改进非常频繁。一旦了解了如何安装最新版本而又不干扰发行版提供的稳定版本,熟悉新功能将很容易。 如果你有任何反馈或问题,请在评论中提出。 --- via: <https://opensource.com/article/20/4/install-python-linux> 作者:[Vijay Singh Khatri](https://opensource.com/users/vijaytechnicalauthor) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
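除了在终端里运行 `python3 --version`,也可以在 Python 内部用标准库检查解释器版本。下面是一个最小示意(这里要求的 3.8 只是举例):

```python
import sys

def version_ok(required=(3, 8)):
    # 比较当前解释器的 (major, minor) 与要求的最低版本
    return sys.version_info[:2] >= required

print(sys.version.split()[0])
print("满足要求" if version_ok() else "版本过低")
```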
200
OK
[Python](https://www.python.org/) is now the [most popular](http://pypl.github.io/PYPL.html), most used programming language. Python's simple syntax and low learning curve make it the ultimate choice for beginners as well as professional developers. Python is also a very versatile programming language. It's used nearly everywhere—from web development to artificial intelligence—really anywhere other than mobile development. If you're using Python, there's a good chance you're a developer (or want to become one), and Linux is a great platform for creating software. But when you're working with Python every day, you sometimes want to stay up to date with the very latest version. You may not want to replace the default install of Python on your system just to test drive the latest one, so this article explains how to install the latest version of Python 3 on Linux without replacing the version provided by your distribution. Use the **python --version** terminal command to check whether Python is already installed and, if so, which version you have. If Python is not installed on your Linux system, or you want to install an updated version, follow the steps below. ## Step-by-step installation instructions **Step 1:** First, install the development packages required to build Python. ### On Debian: ``` $ sudo apt update $ sudo apt install build-essential zlib1g-dev \ libncurses5-dev libgdbm-dev libnss3-dev \ libssl-dev libreadline-dev libffi-dev curl ``` ### On Fedora: `$ sudo dnf groupinstall development` ### Step 2: Download the latest stable release of Python 3 Visit the [official Python website](http://python.org) and download the latest version of Python 3. After the download is complete, you have a **.tar.xz** archive file (a "tarball") containing the source code of Python.
### Step 3: Extract the tarball Once the download is complete, extract the tarball by either using the extractor application of your choice or the [Linux tar command](https://opensource.com/article/17/7/how-unzip-targz-file), for example: `$ tar -xf Python-3.?.?.tar.xz` ### Step 4: Configure the script Once the Python tarball has been extracted, navigate to the configure script and execute it in your Linux terminal with: ``` $ cd Python-3.* ./configure ``` The configuration may take some time. Wait until it finishes successfully before proceeding. ### Step 5: Start the build process If you already have a version of Python installed on your system and you want to install the new version alongside it, use this command: `$ sudo make altinstall` The build process may take some time. If you want to replace your current version of Python with this new version, you should uninstall your current Python package using your package manager (such as **apt** or **dnf**) and then install: `$ sudo make install` However, it's generally preferable to install software as a package (such as a **.deb** or **.rpm** file) so your system can track and update it for you. Because this article assumes the latest Python isn't packaged yet, though, you probably don't have that option. In that case, you can either install Python with **altinstall** as suggested, or rebuild an existing Python package using the latest source code. That's an advanced topic and specific to your distribution, so it's out of scope for this article. ### Step 6: Verify the installation If you haven't encountered any errors, the latest Python is now installed on your Linux system. To verify it, run one of these commands in your terminal: `python3 --version` or `python --version` If the output says **Python 3.x**, Python 3 has been successfully installed.
## Create a virtual environment (optional) Python provides a package known as **venv** (virtual environment), which helps you isolate a program directory or package from other ones. To create a virtual environment, enter the following in the Python terminal (in this example, assume the version of Python you've installed is in the **3.8** series): `python3.8 -m venv example` This command creates a new directory (which I've named **example**), with some subdirectories. To activate the virtual environment, enter: ``` $ source example/bin/activate (example) $ ``` Notice that your terminal prompt (**$**) is now preceded by an environment name. To deactivate the virtual environment, use the **deactivate** command: `(example) $ deactivate` ## Conclusion Python is a fun language that's developed and improved frequently. Getting familiar with new features is easy, once you understand how to install the latest release without interfering with the stable version provided by your distribution. If you have any feedback or questions, please leave them in the comments.
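As a small addition to the verification step, you can also confirm from inside Python whether the interpreter you're running belongs to a virtual environment. This sketch uses only the standard library:

```python
import sys

def in_virtualenv():
    # Inside a venv, sys.prefix points at the environment while
    # sys.base_prefix still points at the base installation.
    return sys.prefix != sys.base_prefix

print("virtual environment" if in_virtualenv() else "system interpreter")
```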
12,135
值得关注的 9 个开源云原生项目
https://opensource.com/article/19/8/cloud-native-projects
2020-04-21T22:29:00
[ "容器", "云原生" ]
/article-12135-1.html
> > 工作中用了容器?熟悉这些出自云原生计算基金会的项目吗? > > > ![](/data/attachment/album/202004/21/222735oa1wib1wgypoiwpp.jpg) 随着用容器来开发应用的实践变得流行,[云原生应用](https://opensource.com/article/18/7/what-are-cloud-native-apps)也在增长。云原生应用的定义为: > > “云原生技术用于开发使用打包在容器中的服务所构建的应用程序,以微服务的形式部署,并通过敏捷的 DevOps 流程和持续交付工作流在弹性基础设施上进行管理。” > > > 这个定义提到了构成云原生应用的不可或缺的四个元素: 1. 容器 2. 微服务 3. DevOps 4. 持续集成和持续交付 (CI/CD) 尽管这些技术各有各自独特的历史,但它们之间却相辅相成,在短时间内实现了云原生应用和工具的惊人的指数级增长。这个[云原生计算基金会(CNCF)](https://www.cncf.io)信息图呈现了当今云原生应用生态的规模和广度。 ![Cloud-Native Computing Foundation applications ecosystem](/data/attachment/album/202004/21/223008fcjtssc4zt8cb4j9.jpg "Cloud-Native Computing Foundation applications ecosystem") *云原生计算基金会项目* 我想说,瞧着吧!这仅仅是一个开始。正如 NodeJS 的出现引发了无数的 JavaScript 工具的爆炸式增长一样,容器技术的普及也推动了云原生应用的指数增长。 好消息是,有几个组织负责监管并将这些技术连接在一起。 其中之一是 <ruby> <a href="https://www.opencontainers.org"> 开放容器倡议 </a> <rt> Open Containers Initiative </rt></ruby>(OCI),它是一个轻量级的、开放的治理机构(或项目),“它是在 Linux 基金会的支持下形成的,其明确目的是创建开放的行业标准的容器格式和运行时。” 另一个是 CNCF,它是“一个致力于使云原生计算普及和可持续发展的开源软件基金会”。 通常除了围绕云原生应用建立社区之外,CNCF 还帮助项目围绕其云原生应用建立结构化管理。CNCF 创建了成熟等级的概念(沙箱级、孵化级或毕业级),分别与下图中的“创新者”、“早期采用者”和“早期大量应用”相对应。 ![CNCF project maturity levels](/data/attachment/album/202004/21/223027f5rz5sfrrxrmxc36.jpg "CNCF project maturity levels") *CNCF 项目成熟等级* CNCF 为每个成熟等级制定了详细的[标准](https://github.com/cncf/toc/blob/master/process/graduation_criteria.adoc)(为方便读者而列在下面)。获得技术监督委员会(TOC)三分之二的同意才能转为孵化或毕业级。 **沙箱级** > > 要想成为沙箱级,一个项目必须至少有两个 TOC 赞助商。 有关详细过程,请参见《CNCF 沙箱指南 v1.0》。 > > > **孵化级** > > 注:孵化级是我们期望对项目进行全面的尽职调查的起点。 > > > 要进入孵化级,项目除了满足沙箱级的要求之外还要满足: > > > * 证明至少有三个独立的最终用户已成功将其用于生产,且 TOC 判断这些最终用户具有足够的质量和范围。 > * 提交者的数量要合理。提交者定义为具有提交权的人,即可以接受部分或全部项目贡献的人。 > * 显示出有大量持续提交和合并贡献。 > * 由于这些指标可能会根据项目的类型、范围和大小而有很大差异,所以 TOC 有权决定是否满足这些标准的活动水平。 > > > **毕业级** > > 要从沙箱或孵化级毕业,或者要使一个新项目作为已毕业项目加入,项目除了必须满足孵化级的标准外还要满足: > > > * 至少有两个来自组织的提交者。 > * 已获得并保持了“核心基础设施计划最佳实践徽章”。 > * 已完成独立的第三方安全审核,并发布了具有与以下示例类似的范围和质量的结果(包括已解决的关键漏洞):<https://github.com/envoyproxy/envoy#security-audit>,并在毕业之前需要解决所有关键的漏洞。 > * 采用《CNCF 行为准则》。 > 
* 明确规定项目治理和提交流程。最好将其列在 `GOVERNANCE.md` 文件中,并引用显示当前提交者和荣誉提交者的 `OWNERS.md` 文件。 > * 至少有一个主仓的项目采用者的公开列表(例如,`ADOPTERS.md` 或项目网站上的徽标)。 > * 获得 TOC 的绝大多数票,进入毕业阶段。如果项目能够表现出足够的成熟度,则可以尝试直接从沙箱级过渡到毕业级。项目可以无限期保持孵化状态,但是通常预计它们会在两年内毕业。 > > > ### 值得关注的 9 个项目 本文不可能涵盖所有的 CNCF 项目,我将介绍最有趣的 9 个毕业和孵化的开源项目。 | 名称 | 授权类型 | 简要描述 | | --- | --- | --- | | [Kubernetes](https://github.com/kubernetes/kubernetes) | Apache 2.0 | 容器编排平台 | | [Prometheus](https://github.com/prometheus/prometheus) | Apache 2.0 | 系统和服务监控工具 | | [Envoy](https://github.com/envoyproxy/envoy) | Apache 2.0 | 边缘和服务代理 | | [rkt](https://github.com/rkt/rkt) | Apache 2.0 | Pod 原生的容器引擎 | | [Jaeger](https://github.com/jaegertracing/jaeger) | Apache 2.0 | 分布式跟踪系统 | | [Linkerd](https://github.com/linkerd/linkerd) | Apache 2.0 | 透明服务网格 | | [Helm](https://github.com/helm/helm) | Apache 2.0 | Kubernetes 包管理器 | | [Etcd](https://github.com/etcd-io/etcd) | Apache 2.0 | 分布式键值存储 | | [CRI-O](https://github.com/cri-o/cri-o) | Apache 2.0 | 专门用于 Kubernetes 的轻量级运行时环境 | 我也创建了视频材料来介绍这些项目。 ### 毕业项目 毕业的项目被认为是成熟的,已被许多组织采用的,并且严格遵守了 CNCF 的准则。以下是三个最受欢迎的开源 CNCF 毕业项目。(请注意,其中一些描述来源于项目的网站并被做了改编。) #### Kubernetes(希腊语“舵手”) Kubernetes! 说起云原生应用,怎么能不提 Kubernetes 呢? 
Google 发明的 Kubernetes 无疑是最著名的基于容器的应用程序的容器编排平台,而且它还是一个开源工具。 什么是容器编排平台?通常,一个容器引擎本身可以管理几个容器。但是,当你谈论数千个容器和数百个服务时,管理这些容器变得非常复杂。这就是容器编排引擎的用武之地。容器编排引擎通过自动化容器的部署、管理、网络和可用性来帮助管理大量的容器。 Docker Swarm 和 Mesosphere Marathon 也是容器编排引擎,但是可以肯定地说,Kubernetes 已经赢得了这场比赛(至少现在是这样)。Kubernetes 还催生了像 [OKD](https://www.okd.io/) 这样的容器即服务(CaaS)平台,它是 Kubernetes 的 Origin 社区发行版,并成了 [Red Hat OpenShift](https://www.openshift.com) 的一部分。 想开始学习,请访问 [Kubernetes GitHub 仓库](https://github.com/kubernetes/kubernetes),并从 [Kubernetes 文档](https://kubernetes.io/docs/home)页面访问其文档和学习资源。 #### Prometheus(普罗米修斯) Prometheus 是 2012 年在 SoundCloud 上构建的一个开源的系统监控和告警工具。之后,许多公司和组织都采用了 Prometheus,并且该项目拥有非常活跃的开发者和用户群体。现在,它已经成为一个独立的开源项目,独立于公司之外进行维护。 ![Prometheus’ architecture](/data/attachment/album/202004/21/223038gdpdaqpx6yoea1ex.jpg "Prometheus’ architecture") *Prometheus 的架构* 理解 Prometheus 的最简单方法是可视化一个生产系统,该系统需要 24(小时)x 365(天)都可以正常运行。没有哪个系统是完美的,也有减少故障的技术(称为容错系统),但是,如果出现问题,最重要的是尽快发现问题。这就是像 Prometheus 这样的监控工具的用武之地。Prometheus 不仅仅是一个容器监控工具,但它在云原生应用公司中最受欢迎。此外,其他开源监视工具,包括 [Grafana](https://grafana.com),都借助了 Prometheus。 开始使用 Prometheus 的最佳方法是下载其 [GitHub 仓库](https://github.com/prometheus/prometheus)。在本地运行 Prometheus 很容易,但是你需要安装一个容器引擎。你可以在 [Prometheus 网站](https://prometheus.io/docs/introduction/overview)上查看详细的文档。 #### Envoy(使者) Envoy(或 Envoy 代理)是专为云原生应用设计的开源的边缘代理和服务代理。由 Lyft 创建的 Envoy 是为单一服务和应用而设计的高性能的 C++ 开发的分布式代理,同时也是为由大量微服务组成的服务网格架构而设计的通信总线和通用数据平面。Envoy 建立在 Nginx、HAProxy、硬件负载均衡器和云负载均衡器等解决方案的基础上,Envoy 与每个应用相伴(并行)运行,并通过提供平台无关的方式提供通用特性来抽象网络。 当基础设施中的所有服务流量都经过 Envoy 网格时,很容易就可以通过一致的可观测性来可视化问题域,调整整体性能,并在单个位置添加基础功能。基本上,Envoy 代理是一个可帮助组织为生产环境构建容错系统的服务网格工具。 服务网格应用有很多替代方案,例如 Uber 的 [Linkerd](https://linkerd.io/)(下面会讨论)和 [Istio](https://istio.io/)。Istio 通过将其部署为 [Sidecar](https://istio.io/docs/reference/config/networking/v1alpha3/sidecar) 并利用了 [Mixer](https://istio.io/docs/reference/config/policy-and-telemetry) 的配置模型,实现了对 Envoy 的扩展。Envoy 的显著特性有: * 包括所有的“<ruby> 入场筹码 <rt> table stakes </rt></ruby>(LCTT 译注:引申为基础必备特性)”特性(与 Istio 这样的控制平面组合时) * 
在负载下,99% 分位的延迟依然很低 * 可以作为核心的 L3/L4 过滤器,提供了开箱即用的 L7 过滤器 * 支持 gRPC 和 HTTP/2(上行/下行) * 由 API 驱动,并支持动态配置和热重载 * 重点关注指标收集、跟踪和整体可监测性 要想了解 Envoy,证实其能力并实现其全部优势,需要丰富的生产级环境运行的经验。你可以查看其[详细文档](https://www.envoyproxy.io/docs/envoy/latest),或访问其 [GitHub](https://github.com/envoyproxy/envoy) 仓库了解更多信息。 ### 孵化项目 下面是六个最流行的开源的 CNCF 孵化项目。 #### rkt(火箭) rkt,读作“rocket”,是一个 Pod 原生的容器引擎。它有一个命令行接口用来在 Linux 上运行容器。从某种意义上讲,它和其他容器如 [Podman](https://podman.io)、Docker 和 CRI-O 相似。 rkt 最初是由 CoreOS(后来被 Red Hat 收购)开发的,你可以在其网站上找到详细的[文档](https://coreos.com/rkt/docs/latest),以及在 [GitHub](https://github.com/rkt/rkt) 上访问其源代码。 #### Jaeger(贼鸥) Jaeger 是一个开源的端到端的分布式追踪系统,适用于云端应用。在某种程度上,它是像 Prometheus 这样的监控解决方案。但它有所不同,因为其使用场景有所扩展: * 分布式事务监控 * 性能和延时优化 * 根因分析 * 服务依赖性分析 * 分布式上下文传播 Jaeger 是一项 Uber 打造的开源技术。你可以在其网站上找到[详细文档](https://www.jaegertracing.io/docs/1.13),以及在 [GitHub](https://github.com/jaegertracing/jaeger) 上找到其源码。 #### Linkerd 像创建 Envoy 代理的 Lyft 一样,Uber 开发了 Linkerd 开源解决方案用于生产级的服务维护。在某些方面,Linkerd 就像 Envoy 一样,因为两者都是服务网格工具,旨在提供平台级的可观测性、可靠性和安全性,而无需进行配置或代码更改。 但是,两者之间存在一些细微的差异。 尽管 Envoy 和 Linkerd 充当代理并可以通过所连接的服务进行上报,但是 Envoy 并不像 Linkerd 那样被设计为 Kubernetes Ingress 控制器。Linkerd 的显著特点包括: * 支持多种平台(Docker、Kubernetes、DC/OS、Amazon ECS 或任何独立的机器) * 内置服务发现抽象,可以将多个系统联合在一起 * 支持 gRPC、HTTP/2 和 HTTP/1.x 请求和所有的 TCP 流量 你可以在 [Linkerd 网站](https://linkerd.io/2/overview)上阅读有关它的更多信息,并在 [GitHub](https://github.com/linkerd/linkerd) 上访问其源码。 #### Helm(舵轮) Helm 基本上就是 Kubernetes 的包管理器。如果你使用过 Apache Maven、Maven Nexus 或类似的服务,你就会理解 Helm 的作用。Helm 可帮助你管理 Kubernetes 应用程序。它使用“Helm Chart”来定义、安装和升级最复杂的 Kubernetes 应用程序。Helm 并不是实现此功能的唯一方法;另一个流行的概念是 [Kubernetes Operators](https://coreos.com/operators),它被 Red Hat OpenShift 4 所使用。 你可以按照其文档中的[快速开始指南](https://helm.sh/docs)或 [GitHub 指南](https://github.com/helm/helm)来试用 Helm。 #### Etcd Etcd 是一个分布式的、可靠的键值存储,用于存储分布式系统中最关键的数据。其主要特性有: * 定义明确的、面向用户的 API(gRPC) * 自动 TLS,可选的客户端证书验证 * 速度(可达每秒 10,000 次写入) * 可靠性(使用 Raft 实现分布式) Etcd 是 Kubernetes 和许多其他技术的默认的内置数据存储方案。也就是说,它很少独立运行或作为单独的服务运行;相反,它以集成到 Kubernetes、OKD/OpenShift
或其他服务中的形式来运作。还有一个 [etcd Operator](https://github.com/coreos/etcd-operator) 可以用来管理其生命周期并解锁其 API 管理功能: 你可以在 [etcd 文档](https://etcd.io/docs/v3.3.12)中了解更多信息,并在 [GitHub](https://github.com/etcd-io/etcd)上访问其源码。 #### CRI-O CRI-O 是 Kubernetes 运行时接口的 OCI 兼容实现。CRI-O 用于各种功能,包括: * 使用 runc(或遵从 OCI 运行时规范的任何实现)和 OCI 运行时工具运行 * 使用容器/镜像进行镜像管理 * 使用容器/存储来存储和管理镜像层 * 通过容器网络接口(CNI)来提供网络支持 CRI-O 提供了大量的[文档](https://github.com/cri-o/cri-o/blob/master/awesome.md),包括指南、教程、文章,甚至播客,你还可以访问其 [GitHub 页面](https://github.com/cri-o/cri-o)。 我错过了其他有趣且开源的云原生项目吗?请在评论中提醒我。 --- via: <https://opensource.com/article/19/8/cloud-native-projects> 作者:[Bryant Son](https://opensource.com/users/brsonhttps://opensource.com/users/marcobravo) 选题:[lujun9972](https://github.com/lujun9972) 译者:[messon007](https://github.com/messon007) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
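文中提到的 Prometheus 通过 HTTP API 暴露查询接口(即时查询端点为 `/api/v1/query`)。下面用标准库构造一条查询 URL 作为示意,并不实际发起请求;`localhost:9090` 是 Prometheus 的默认监听地址,PromQL 表达式仅为举例:

```python
from urllib.parse import urlencode

def prometheus_query_url(base, promql):
    # 拼接即时查询端点 /api/v1/query,并对 PromQL 表达式做 URL 编码
    return base.rstrip("/") + "/api/v1/query?" + urlencode({"query": promql})

url = prometheus_query_url("http://localhost:9090", 'up{job="kubernetes"}')
print(url)
```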
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
12,136
一个方便的用于创建树莓派 SD 卡镜像的程序
https://opensource.com/article/20/4/raspberry-pi-imager-mac
2020-04-21T23:24:00
[ "树莓派" ]
https://linux.cn/article-12136-1.html
> > 开始在 Mac 上使用 Raspberry Pi Imager。 > > > ![](/data/attachment/album/202004/21/233005nexe10x7xh1vvh1y.jpg) 有多种购买树莓派的方法,根据你的购买渠道的不同,可能附带或不附带操作系统。要在树莓派上安装操作系统,只需将操作系统镜像 “闪存” 到 SD 卡即可。为了使之尽可能简单,[树莓派基金会](https://www.raspberrypi.org/)推出一个 Raspberry Pi Imager 实用程序,你可以在所有主流平台上下载它。下面就来简单介绍一下这个有用的新工具。 ### 安装 Imager 你通常可以在[树莓派下载](https://www.raspberrypi.org/downloads/)页面上找到 Raspberry Pi Imager。它有 Mac、Ubuntu 和 Windows 版本。我将下载并演示 Mac 版本。 Mac 的安装包是常规的 DMG 镜像,它会挂载到你的桌面,然后经典的安装界面就会出现: ![](/data/attachment/album/202004/21/233054ms81ut1lllut6sjp.png) 只需将可爱的树莓图标拖到“应用”文件夹,就可以完成。从启动台中调用它,你会看到一系列简单的按钮和菜单供你选择。真的不能比这更简单了: ![](/data/attachment/album/202004/21/233103v4486cfn49z4n95f.png) ### 可用的镜像和选项 默认选项包含各种树莓派型号的镜像。Raspbian 是首选,它有两个可用的选项,较小的 “Lite” 版本和较大的 “Full” 版本。LibreELEC Kodi 娱乐系统有各种特定于型号的版本。Ubuntu 18 和 19 有适用于不同树莓派型号的 32 位和 64 位版本。有一个 RPi 4 EEPROM 恢复程序,以及使用 FAT32 格式化卡的功能。最后,有一个通用的镜像安装程序选项,稍后我将进行尝试。这个简单而紧凑的程序非常方便。 ### 安装一些镜像 我决定使用 16g 的 micro SD 卡。我选择了默认的 Raspbian 镜像,选择已连接的 USB/SD 设备,然后按下 “WRITE” 按钮。这是一个简短的演示: ![](/data/attachment/album/202004/21/233113mpylvrtwvlvhwwvz.gif) 我没有在此处发布整个操作过程。我认为它是在写入的时候下载了镜像,对于我的无线连接这花费了几分钟。该过程在完成之前要先经过写入,然后经过验证环节。完成后,我弹出设备,并将卡插入到我的树莓派 3 中,然后按照通常的图形 Raspbian 安装向导和桌面环境进行设置。 这对我来说还不够。我每天都会下载许多 Linux,今天我还在寻找更多。我回到了[树莓派下载](https://www.raspberrypi.org/downloads/)页面,并下载了 RISC OS 镜像。这个过程几乎一样容易。下载 RISCOSPi.5.24.zip 文件,将其解压缩,然后找到 ro524-1875M.img 文件。在 “Operating System” 按钮中,我选择了 “Use Custom” 并选择了所需的镜像文件。这个过程几乎是相同的。唯一真正的不同是我必须在下载目录中搜寻并选择一个镜像。文件写完后,回到树莓派 3,RISC OS 可以使用了。 ### 对 USB C 的抱怨 顺便说一句,如今有多少人对 USB C 带来的不便感到沮丧?我使用的是只有 USB C 口的 MacBook Pro,我需要不断更换适配器才能完成工作。看看这个: ![](/data/attachment/album/202004/21/233151fxdufavyy5u4yxw4.png) 是的,那是一个 USB C 到 USB A 适配器,然后是一个 USB 到 SD 卡读卡器,以及一个 SD 到 micro SD 适配器。我可能可以在网上找到一些东西来简化此过程,但这些都是我手头有的部件,以支持我家五花八门的 Mac、Windows 和 Linux 主机。说到这里就不多说了,但我希望你能从这些疯狂的东西中得到一个笑点。 ### 总结 新的 Raspberry Pi Imager 是一种简单有效的工具,可以快速烧录树莓派镜像。[BalenaEtcher](https://www.balena.io/etcher/) 是用于对可移动设备进行烧录的类似工具,但是新的 Raspberry Pi Imager 
通过省去了获取那些常见镜像的步骤,使普通树莓派系统安装(如 Raspbian)更加容易。 --- via: <https://opensource.com/article/20/4/raspberry-pi-imager-mac> 作者:[James Farrell](https://opensource.com/users/jamesf) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
There are many ways to buy a Raspberry Pi, and depending on who you buy it from, it may or may not come with an operating system already installed on it. Getting an OS onto a Raspberry Pi is a matter of "flashing" an SD card with an OS image. To make this as easy as possible, the [Raspberry Pi Foundation](https://www.raspberrypi.org/) has introduced a Raspberry Pi Imager application, and you can download it for all major platforms. Here's a quick intro to this helpful new utility. ## Install the Imager You can find the Raspberry Pi Imager over at the usual [Raspberry Pi Downloads](https://www.raspberrypi.org/downloads/) page. Versions exist for Mac, Ubuntu, and Windows. I will download and demonstrate the Mac version. Installation on Mac consists of the usual DMG image that mounts to your desktop, and then a typical installer window appears: ![Raspberry Pi Imager installer Raspberry Pi Imager installer](https://opensource.com/sites/default/files/uploads/install_1.png) Simply drag the cute raspberry icon to the Application folder, and you are done. Invoke that from Launchpad, and you are presented with a series of simple buttons and menus to choose from. It really cannot be simpler than this: ![Raspberry Pi Imager home screen Raspberry Pi Imager home screen](https://opensource.com/sites/default/files/uploads/screen_2_0.png) ## Images and options available The default options contain a variety of images for various Raspberry Pi models. Raspbian is the top choice with two available options for smaller "Lite" and fatter "Full" versions available. The LibreELEC Kodi entertainment system is available in various model-specific builds. Ubuntu 18 and 19 have 32-bit and 64-bit builds available for different Pi models. There is an RPi 4 EEPROM recovery utility and a function to format your card using FAT32. Finally, a generic image installer option is available that I will try out a little later. Pretty handy for a simple and compact utility. 
## Install some images I had a 16g micro SD card that I decided to play with. I selected the default Raspbian image, chose my attached USB/SD device, and pressed WRITE. Here is a brief demo: ![Raspberry Pi Imager demo Raspberry Pi Imager demo](https://opensource.com/sites/default/files/uploads/demo_3.gif) I didn't post the entire sequence there. I believe it downloaded the image as it was writing and took a few minutes on my wireless connection to finish. The process goes through a write and then a verify cycle before it is finished. When it was done, I ejected the device, popped the card into my RPi 3, and was treated to the usual graphical Raspbian setup wizard and desktop environment. That wasn't quite enough for me; I get plenty of Linux on a daily basis and was looking for a little more today. I went back to the [Raspberry Pi Downloads](https://www.raspberrypi.org/downloads/) page and pulled down the RISC OS image. This process was nearly as easy. Download the RISCOSPi.5.24.zip file, extract it, and find the ro524-1875M.img file. From the Operating System button, I selected the Use Custom option and selected the desired image file. The process was pretty much the same; the only real difference being I had to hunt around my Downloads directory and select an image. Once the file was finished writing, back into the Pi 3, and RISC OS was ready to go. ## Gripes on USB C This is just a silly aside, but how many of you are a bit frustrated with the total inconvenience of USB C these days? I'm using a MacBook Pro, which only has USB C ports, and I am subject to a never-ending swap of adapters to get things done. Take a look at this: ![USB C adapter USB C adapter](https://opensource.com/sites/default/files/uploads/adapter_4.png) Yes, that is a USB C to USB A adapter, then a USB to SD card reader, and an SD to micro SD adapter inside. 
I probably could have found something online to simplify this, but these are the parts I had on hand to support my family's myriad Mac, Windows, and Linux hosts. Enough about that, but I hope you got a chuckle from that insanity. ## Summary The new Raspberry Pi Imager is a simple and effective tool for getting off the ground quickly with Raspberry Pi images. [BalenaEtcher](https://www.balena.io/etcher/) is a similar tool for imaging your removable devices, but this new Raspberry Pi Imager makes the process of common RPi OS installations (like Raspbian) a bit easier by eliminating the steps to fetch those common images.
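The write-then-verify cycle mentioned above boils down to reading back what was written and comparing it against the source image. Here's a minimal sketch of that comparison using checksums; the function names and the idea of hashing paths directly are illustrative, not how the Imager is actually implemented:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in chunks so large images don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verified(image_path, readback_path):
    # A verify pass succeeds when the readback digest matches the source.
    return sha256_of(image_path) == sha256_of(readback_path)
```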
12,138
以太网技术联盟宣布完成 800Gb 以太网规范
https://www.networkworld.com/article/3538529/ethernet-consortium-announces-completion-of-800gbe-spec.html
2020-04-22T16:15:18
[ "以太网" ]
https://linux.cn/article-12138-1.html
> > 800Gb 以太网规范使当前以太网标准的最高速度提高了一倍,但同时也对包括延迟在内的其他方面进行了调整。 > > > ![](/data/attachment/album/202004/22/161500tdjawujw52wvu205.jpg) 由业界支持的<ruby> 以太网技术联盟 <rt> Ethernet Technology Consortium </rt></ruby>(ETC)已宣布完成 800Gb 以太网技术规范。 新规范基于当前高端 400Gb 以太网协议中使用的许多技术,新规范正式称为 800GBASE-R。设计它的联盟(当时称为 25Gb 以太网联盟)在开发 25、50 和 100Gb 以太网协议方面也发挥了重要作用,其成员包括 博通、思科、谷歌和微软。 800Gb 以太网规范增加了新的<ruby> 介质访问控制 <rt> media access control </rt></ruby>(MAC)和<ruby> 物理编码子层 <rt> physical coding sublayer </rt></ruby>(PCS)方法,新规范对这些功能进行了调整,来使用 8 条 106.25Gbps 的物理通道分发数据。(通道可以是铜双绞线,也可以是光缆,一束光纤或光波。)800GBASE-R 规范建立在两个 400 GbE 2xClause PCS 之上,以创建一个以 800Gbps 的总速率运行的单个 MAC。 尽管主要是使用八条 106.25Gb 通道,但这并不是固定的。它可以以一半的速度 (53.125Gbps) 使用 16 条通道。 新标准提供了 400G 以太网规范的一半延迟,但是新规范也将运行在 50 Gbps、100 Gbps 和 200 Gbps 的网络上的<ruby> 前向纠错 <rt> forward error correction </rt></ruby>(FEC)开销减少了一半,从而减少了网卡上的数据包处理负担。 通过降低延迟,这将满足对延迟敏感的应用(例如[高性能计算](https://www.networkworld.com/article/3444399/high-performance-computing-do-you-need-it.html)和人工智能)中对速度的需求,在这些应用中,需要尽可能快地移动大量数据。 从 400G 增加到 800G 并不是太大的技术飞跃。它意味着在相同的传输速率下增加更多的通道,再做一些调整。但是,要想突破 Tb 级,Cisco 和其他网络公司已经讨论了十年了,这将需要对技术进行重大修改,而且并非易事。 新技术可能也不便宜。800G 可与现有硬件一起使用,而 400Gb 以太网交换机价格不菲,高达六位数。对技术进行重大修改,越过 Tb 障碍,可能会变得更加昂贵。但是对于大客户和高性能计算客户而言,这也是情理之中的事。 ETC 并未透露何时会支持 800G 的新硬件,但考虑到它对现有规格的变化不大,它可能会在今年出现,前提是疫情引起的停滞不会影响它。 --- via: <https://www.networkworld.com/article/3538529/ethernet-consortium-announces-completion-of-800gbe-spec.html> 作者:[Andy Patrizio](https://www.networkworld.com/author/Andy-Patrizio/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
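文中的通道数与单通道速率可以做个简单的算术验证:8 × 106.25 与 16 × 53.125 的原始聚合速率都是 850Gbps,多出的约 50Gbps 大致对应编码与前向纠错等开销(以下仅为示意计算):

```python
def aggregate_rate(lanes, per_lane_gbps):
    # 通道数 × 单通道速率 = 原始聚合速率(Gbps)
    return lanes * per_lane_gbps

print(aggregate_rate(8, 106.25))   # 850.0,8 条全速通道
print(aggregate_rate(16, 53.125))  # 850.0,16 条半速通道的等价配置
```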
404
Not Found
null
12,139
使用 Reveal.js 和 Git 创建网页教程
https://opensource.com/article/20/4/create-web-tutorial-git
2020-04-23T09:50:04
[ "幻灯片" ]
https://linux.cn/article-12139-1.html
> > 通过这个简单的工作流程创建的研讨会幻灯片,可以在任何浏览器、设备和平台上获得一致的查看效果。 > > > ![](/data/attachment/album/202004/23/094800ohkpbjd3zbj0dj0d.jpg) 无论你是学习者还是教师,你可能都会认识到采用幻灯片放映来传播知识的在线<ruby> 研讨会 <rt> workshop </rt></ruby>的价值。如果你曾经偶然看到过这样一个逐页、逐章设置的井井有条的教程,你可能会想知道创建这样的一个网站有多难。 好吧,让我在这里向你展示,使用全自动化的流程来生成这样的教程是多么容易。 ### 介绍 当我开始将学习内容放到网上时,体验并不是很好。我想要的是一种可重复的、一致的、易于维护的东西,因为我的内容会随着我教学的技术发展而变化。 我尝试了许多交付模型,从 [Asciidoctor](https://asciidoctor.org/) 这样的低级代码生成器到在单个 PDF 文件中放置教程。全都不能让我满意。当我举办现场的在座研讨会时,我喜欢使用幻灯片放映,因此我想知道我是否可以为我自己的在线的、自定进度的研讨会体验做同样的事情。 经过一番挖掘,我为创建无痛的研讨会网站打下了基础。当时我已经在使用一个演示文稿生成框架,这对我来说是很有帮助的,因为这个框架可以产生对网站友好的格式(HTML)。 ### 设置 这里是这个项目所需要的基本组件。 * 研讨会的想法(这是你的问题,我帮不了你) * 用于研讨会幻灯片的 Reveal.js * GitLab 项目仓库 * 你常用的 HTML 代码编辑器 * Web 浏览器 * 在你的机器上安装好 Git 如果这个列表看起来令人望而生畏,那么有一个快速入门的方法,不需要把所有的东西一个个都拉到一起。你可以用我的模板项目来给你提供幻灯片和项目设置的入门教程。 本文假设你熟悉 Git 和在 Git 平台(如 GitLab)上托管项目。如果你需要指导或教程,请查看我们的[Git 入门系列](https://opensource.com/resources/what-is-git)。 首先,将模板项目克隆到本地机器上。 ``` $ git clone https://gitlab.com/eschabell/beginners-guide-automated-workshops.git ``` 为此设置一个新的 GitLab 项目,导入模板项目作为初始导入。 研讨会网站有一些重要的文件。在**根目录**下,你会发现一个名为 `.gitlab-ci.yml` 的文件,当你向主分支提交修改时(即将拉取请求合并到 `master` 分支),这个文件会作为触发器。它可以触发将 `slides` 目录的全部内容复制到 GitLab 项目的 `website` 文件夹中。 我把它托管在我的 GitLab 账户中,名为 `beginners-guide-automated-workshops`。当它部署完毕后,你可以在浏览器中通过导航到下列地址查看 `slides` 目录的内容: ``` https://eschabell.gitlab.io/beginners-guide-automated-workshops ``` 对于你的用户帐户和项目,URL 如下所示: ``` https://[YOUR_USERNAME].gitlab.io/[YOUR_PROJECT_NAME] ``` 这些是你开始创建网站内容所需要的基本素材。当你推送修改后,它们会自动生成更新过的研讨会网站。请注意,默认模板包含了几个示例幻灯片,这将是你完成对存储库的完整签入后的第一个研讨会网站。 研讨会模板生成的结果是一个 [reveal.js](https://revealjs.com/#/) 幻灯片,可以在任何浏览器中运行,并可以自动调整大小,几乎可以让任何人在任何地方、任何设备上观看。 这样创建一个方便、易访问的研讨会怎么样?
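上面提到的 `.gitlab-ci.yml` 文件负责在合并到主分支后把 `slides` 目录发布出去。下面是这类 GitLab Pages 流水线的一个最小示意(作业名 `pages` 与 `public` 目录是 GitLab Pages 的约定;模板项目的实际文件可能与此不同):

```yaml
# 示意:把 slides 目录发布为 GitLab Pages 站点
pages:
  stage: deploy
  script:
    - mkdir -p public
    - cp -r slides/* public/
  artifacts:
    paths:
      - public
  only:
    - master
```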
### 它是如何工作的 有了这些背景信息,你就可以开始探索研讨会的这些素材,并开始把你的内容放在一起了。你需要的一切都可以在项目的 `slides` 目录中找到;这里是使用 reveal.js 在浏览器中创建研讨会幻灯片的地方。 你将用来制作研讨会的文件和目录是: * `default.css`文件 * `images` 目录 * `index.html`文件 在你常用的 HTML/CSS 编辑器中打开每一个文件,然后进行下面描述的修改。你用哪个编辑器并不重要,我更喜欢 [RubyMine IDE](https://www.jetbrains.com/ruby/),因为它能在本地浏览器中提供页面预览。这对我在将内容推送到研讨会网站之前测试内容时很有帮助。 #### default.css 文件 文件 `css/theme/default.css` 是一个基础文件,你将在这里为你的研讨会幻灯片设置重要的全局设置。其中值得注意的两个主要的项目是所有幻灯片的默认字体和背景图片。 在 `default.css` 中,看一下标有 `GLOBAL STYLES` 的部分。当前的默认字体在这一行中列出了。 ``` font-family: "Red Hat Display", "Overpass", san-serif; ``` 如果你使用的是非标准字体类型,则必须在以下行中将其导入(Overpass 字体类型就是这样做的): ``` @import url('SOME_URL'); ``` `background` 是你创建的每张幻灯片的默认图像。它存储在 `images` 目录下(见下面),并在下面这一行中设置(重点是图像路径)。 ``` background: url("…/…/images/backgrounds/basic.png") ``` 要设置一个默认背景,只需将这一行指向你要使用的图片。 #### images 目录 顾名思义,`images` 目录是用来存储你想在研讨会幻灯片上使用的图片。例如,我通常会把展示研讨会主题进展的截图放在我的个人幻灯片上。 现在,你只需要知道你需要将背景图片存储在一个子目录(`backgrounds`)中,并将你计划在幻灯片中使用的图片存储在 `images` 目录中。 #### index.html 文件 现在你已经把这两个文件整理好了,剩下的时间你就可以在 HTML 文件中创建幻灯片了,从 `index.html` 开始。为了让你的研讨会网站开始成形,请注意这个文件中的以下三个部分。 * `head`部分,在这里你可以设置标题、作者和描述。 * `body` 部分,你可以在这里找到要设计的单个幻灯片。 * 你可以在每个 `section` 中定义各个幻灯片的内容。 从 `head` 部分开始,因为它在顶部。模板项目有三个占位符行供你更新。 ``` <title>INSERT-YOUR-TITLE-HERE</title> <meta name="description" content="YOUR DESCIPTION HERE."> <meta name="author" content="YOUR NAME"> ``` `title` 标签包含文件打开时显示在浏览器选项卡中的文字。请将其改为与你的研讨会的标题相关的内容(或研讨会的某个部分),但记得要简短,因为标签页的标题空间有限。`description` 元标签包含了对你的工作坊的简短描述,而 `author` 元标签是你应该把你的名字(如果你是为别人写的,则是工作坊创建者的名字)。 现在继续到 `body` 部分。你会注意到它被分成了许多 `section` 标签。`body` 的开头包含了一个注释,说明你正在为每个标有 `section` 的打开和关闭的标签创建幻灯片。 ``` <body> <div class="reveal"> <!-- Any section element inside of this container is displayed as a slide --> <div class="slides"> ``` 接下来,创建你的各个幻灯片,每张幻灯片都用 `section` 标签封装起来。这个模板包括了一些幻灯片来帮助你开始制作。例如,这里是第一张幻灯片。 ``` <section> <div style="width: 1056px; height: 300px"> <h1>Beginners guide</h1> <h2>to automated workshops</h2> </div> <div style="width: 1056px; height: 200px; text-align: 
left"> Brought to you by,<br/> YOUR-NAME<br/> </div> <aside class="notes">Here are notes: Welcome to the workshop!</aside> </section> ``` 这张幻灯片有两个区域,用 `div` 标签分隔。用间距隔开了标题和作者。 如果你有一定的 HTML 使用知识,可以尝试各种东西来开发你的研讨会。使用浏览器预览结果的时候真的很方便。有些 IDE 提供了本地查看修改,但你也可以打开 `index.html` 文件查看你的修改,然后再推送到资源库中。 一旦你对你的研讨会感到满意,推送你的修改,然后等待它们通过持续集成管道。它们将像模板项目一样被托管在 <https://eschabell.gitlab.io/beginners-guide-automated-workshops>。 ### 了解更多 要了解更多关于这个工作流程可以做什么,请查看下面的示例研讨会和托管了研讨会集合的网站。所有这些都是基于本文中描述的工作流程。 研讨会例子: * [Red Hat Process Automation Manager workshop](https://gitlab.com/bpmworkshop/rhpam-devops-workshop) * [JBoss Travel Agency BPM Suite online workshop](https://gitlab.com/bpmworkshop/presentation-bpmworkshop-travel-agency) 研讨会集合: * [Rule the world: Practical decisions & process automation development workshops](https://bpmworkshop.gitlab.io/) * [Application development in the cloud workshop](https://appdevcloudworkshop.gitlab.io/) * [Portfolio architecture: Workshops for creating impactful architectural diagrams](https://redhatdemocentral.gitlab.io/portfolio-architecture-workshops) 我希望这本新手指南和模板研讨会项目能让你看到,以一致的方式开发和维护研讨会网站可以是多么轻松、无痛。我也希望这个工作流程能让你的研讨会受众几乎在任何设备上都能完全访问你的内容,这样他们就能从你分享的知识中有所收获。 --- via: <https://opensource.com/article/20/4/create-web-tutorial-git> 作者:[Eric D. Schabell](https://opensource.com/users/eschabell) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Whether you're a learner or a teacher, you probably recognize the value of online workshops set up like slideshows for communicating knowledge. If you've ever stumbled upon one of these well-organized tutorials that are set up page by page, chapter by chapter, you may have wondered how hard it was to create such a website. Well, I'm here to show you how easy it is to generate this type of workshop using a fully automated process. ## Introduction When I started putting my learning content online, it was not a nice, seamless experience. So, I wanted something repeatable and consistent that was also easy to maintain, since my content changes as the technology I teach progresses. I tried many delivery models, from low-level code generators, such as [Asciidoctor](https://asciidoctor.org/), to laying out a workshop in a single PDF file. All failed to satisfy me. When I deliver live, onsite workshops, I like using slideshows, so I wondered if I could do the same thing for my online, self-paced workshop experiences. After some digging, I built a foundation for creating painless workshop websites. It helped that I was already using a presentation-generation framework that resulted in a website-friendly format (HTML). ## Setting it up Here are the basic components you need for this project: - Workshop idea (this is your problem, can't help you here) - Reveal.js for the workshop slides - GitLab project repository - Your favorite HTML code editor - Web browser - Git installed on your machine If this list looks intimidating, there's a quick way to get started that doesn't involve pulling all the pieces together one by one: You can use my template project to give you a kickstart with the slides and project setup. This article assumes you're familiar with Git and projects hosted on a Git platform like GitLab. If you need a refresher or tutorial, check out our [introductory Git series](https://opensource.com/resources/what-is-git).
Start by cloning the template project to your local machine: `$ git clone https://gitlab.com/eschabell/beginners-guide-automated-workshops.git` Set up a new GitLab project for this and import the template project as the initial import. There are a number of important files for the workshop website. In the **root** directory, you'll find a file called **.gitlab-ci.yml**, which is used as a trigger when you commit changes to the master branch (i.e., merge pull requests to **master**). It triggers a copy of the complete contents of the **slides** directory into the GitLab project's **website** folder. I have this hosted as a project called **beginners-guide-automated-workshops** in my GitLab account. When it deploys, you can view the contents of the **slides** directory in your browser by navigating to: `https://eschabell.gitlab.io/beginners-guide-automated-workshops` For your user account and project, the URL would look like: `https://[YOUR_USERNAME].gitlab.io/[YOUR_PROJECT_NAME]` These are the basic materials you need to start creating your website content. When you push changes, they will automatically generate your updated workshop website. Note that the default template contains several example slides, which will be your first workshop website after you complete the full check-in to your repository. The workshop template results in a [reveal.js](https://revealjs.com/#/) slideshow that can run in any browser, with automatic resizing that allows it to be viewed by almost anyone, anywhere, on any device. How's that for creating handy and accessible workshops? ## How it works With this background in place, you're ready to explore the workshop materials and start putting your content together. Everything you need can be found in the project's **slides** directory; this is where all of the magic happens with reveal.js to create the workshop slideshow in a browser. 
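The **.gitlab-ci.yml** trigger described above follows the standard GitLab Pages convention. As a rough sketch (an assumption based on that convention, not the exact file shipped in the template project — check the repository for the real thing), it might look like:

```yaml
# Publish the contents of the slides directory as a GitLab Pages site
pages:
  stage: deploy
  script:
    - mkdir -p public
    - cp -r slides/* public/
  artifacts:
    paths:
      - public
  only:
    - master
```

GitLab only deploys a Pages site from a job named **pages** that exposes a **public** artifacts directory, which is why the script copies the slides there.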
The files and directories you'll be working with to craft your workshop are: - The **default.css** file - The **images** directory - The **index.html** file Open each one in your favorite HTML/CSS editor and make the changes described below. It does not matter which editor you use; I prefer [RubyMine IDE](https://www.jetbrains.com/ruby/) because it offers a page preview in the local browser. This helps when I'm testing out content before pushing it online to the workshop website. ### Default.css file The file **css/theme/default.css** is the base file where you will set important global settings for your workshop slides. The two main items of interest are the default font and background image for all slides. In **default.css**, look at the section labeled **GLOBAL STYLES**. The current default font is listed in the line: `font-family: "Red Hat Display", "Overpass", san-serif;` If you're using a non-standard font type, you must import it (as was done for the Overpass font type) in the line: `@import url('SOME_URL');` The **background** is the default image for every slide you create. It is stored in the **images** directory (see below) and set in the line below (focus on the image path): `background: url("../../images/backgrounds/basic.png")` To set a default background, just point this line to the image you want to use. ### Images directory As its name implies, the **images** directory is used for storing the images you want to use on your workshop slides. For example, I usually put screenshots that demonstrate the progress of the workshop topic on my individual slides. For now, just know that you need to store the background images in a subdirectory (**backgrounds**) and the images you plan to use in your slides in the **images** directory. ### Index.html file Now that you have those two files sorted out, you'll spend the rest of your time creating slides in the HTML files, starting with **index.html**.
For your workshop website to start taking shape, pay attention to the following three sections in this file: - The **head** section, where you set the title, author, and description - The **body** section, where you find the individual slides to design - Each **section**, where you define the contents of individual slides Start with the **head** section, since it's at the top. The template project has three placeholder lines for you to update: ``` <title>INSERT-YOUR-TITLE-HERE</title> <meta name="description" content="YOUR DESCIPTION HERE."> <meta name="author" content="YOUR NAME"> ``` The **title** tag contains the text that appears in the browser tab when the file is open. Change it to something relevant to the title of your workshop (or maybe a section of your workshop), but remember to keep it short since tab title space is limited. The **description** meta tag contains a short description of your workshop, and the **author** meta tag is where you should put your name (or the workshop creator's name, if you're doing this for someone else). Now move on to the **body** section. You'll notice that it's divided into a number of **section** tags. The opening of the **body** contains a comment that explains that you're creating slides for each open and closing tag labeled **section**: ``` <body> <div class="reveal"> <!-- Any section element inside of this container is displayed as a slide --> <div class="slides"> ``` Next, create your individual slides, with each slide enclosed in **section** tags. The template includes a few slides to help you get started. For example, here's the first slide: ``` <section> <div style="width: 1056px; height: 300px"> <h1>Beginners guide</h1> <h2>to automated workshops</h2> </div> <div style="width: 1056px; height: 200px; text-align: left"> Brought to you by,<br/> YOUR-NAME<br/> </div> <aside class="notes">Here are notes: Welcome to the workshop!</aside> </section> ``` This slide has two areas divided with **div** tags.
Spacing separates the title and the author. Assuming you have some knowledge of using HTML, try various things to develop your workshop. It's really handy to use a browser as you go to preview the results. Some IDEs provide local viewing of changes, but you can also open the **index.html** file and view your changes before pushing them to the repository. Once you're satisfied with your workshop, push your changes, and wait for them to pass through the continuous integration pipeline. They'll be hosted like the template project at [https://eschabell.gitlab.io/beginners-guide-automated-workshops](https://eschabell.gitlab.io/beginners-guide-automated-workshops). ## Learn more To learn more about what you can do with this workflow, check out the following example workshops and sites that host workshop collections. All of these are based on the workflow described in this article. Workshop examples: [Red Hat Process Automation Manager workshop](https://gitlab.com/bpmworkshop/rhpam-devops-workshop) [JBoss Travel Agency BPM Suite online workshop](https://gitlab.com/bpmworkshop/presentation-bpmworkshop-travel-agency) Workshop collections: [Rule the world: Practical decisions & process automation development workshops](https://bpmworkshop.gitlab.io/)[Application development in the cloud workshop](https://appdevcloudworkshop.gitlab.io/)[Portfolio architecture: Workshops for creating impactful architectural diagrams](https://redhatdemocentral.gitlab.io/portfolio-architecture-workshops) I hope this beginner's guide and the template workshop project show you how easy and painless it can be to develop and maintain workshop websites in a consistent manner. I also hope this workflow gives your workshop audiences full access to your content on almost any device so they can learn from the knowledge you're sharing.
12,141
什么是互联网骨干网,它是怎样工作的
https://www.networkworld.com/article/3532318/what-is-the-internet-backbone-and-how-it-works.html
2020-04-23T15:46:15
[]
https://linux.cn/article-12141-1.html
> > 一级互联网服务提供商(ISP)将其高速光纤网络连接在一起,形成互联网的骨干网,实现在不同地理区域之间高效地传输流量。 > > > ![](/data/attachment/album/202004/23/154605tvkv8g2t8v1k1na8.jpg) 互联网会产生大量的计算机到计算机的流量,要确保所有流量都可以在世界上任何地方之间传输,就需要大量汇聚的高速网络,这些网络统称为互联网骨干网,但是它是如何工作的呢? ### 互联网的骨干网是什么? 像任何其他网络一样,互联网由接入链路组成,这些接入链路将流量传输到高带宽路由器,路由器又将流量从源地址通过最佳可用路径传输到目的地址。其核心是由相互连接的、彼此对等的各个高速光纤网络而构成的互联网骨干网。 单个独立的核心网络由一级互联网服务提供商(ISP)所拥有。他们的网络连接在一起。这些提供商包括 AT&T、CenturyLink、Cogent Communications、Deutsche Telekom、Global Telecom and Technology (GTT)、NTT Communications、Sprint、Tata Communications、Telecom Italia Sparkle、Telia Carrier 和 Verizon。 通过将这些长途网连接在一起,一级 ISP 们创建了一个他们可以访问整个路由表的单一的全球性网络,因此他们可以通过逐步层次化地增加本地 ISP 网络来有效地将流量传输到其目的地。 除了物理连接之外,这些骨干网提供商还通过一致的网络协议 TCP/IP 融合在一起,这实际上是两个协议,<ruby> 传输控制协议 <rt> Transmission Control Protocol </rt></ruby>和<ruby> 互联网协议 <rt> Internet Protocol </rt></ruby>,它们在计算机之间建立连接,以确保连接可靠,并将消息格式化为数据包。 ### 互联网交换点(IXP)将骨干网连接在一起 骨干网 ISP 在中立位置的对等点通过高速交换机和路由器连接其网络。这些通常由第三方(有时是非营利组织)提供,以促进骨干网的统一。 参与的一级 ISP 会帮助资助 IXP,但不向其他一级 ISP 收取流量传输费用,这种关系称为无结算对等。这种协议消除了可能导致互联网性能下降的潜在财务纠纷。 ### 骨干网有多快? 互联网骨干网由最快的路由器组成,可以提供 100Gbps 的线路速度。这些路由器由包括 Cisco、Extreme、华为、Juniper 和 Nokia 在内的供应商制造,使用边界网关协议(BGP)在彼此之间路由流量。 ### 流量是如何进入骨干网的 在一级 ISP 之下是规模较小的二级和三级 ISP。 三级 ISP 为企业和消费者提供了互联网接入服务。这些提供商自己没有接入互联网骨干网,因此,他们自己无法将其客户连接到数十亿台互联网上的计算机。 购买一级 ISP 提供商的接入非常昂贵。通常三级 ISP 与拥有自己网络的二级(区域)ISP 签订合同,利用二级 ISP 的网络将流量传输到有限的地理区域,但不能传输到所有互联网上的设备。 为此,二级 ISP 与一级 ISP 签约以访问全球骨干网,并以这种方式使客户可以访问整个互联网。 这种方式使得来自世界一侧的计算机的流量能够连接到另一侧的计算机。流量从源计算机流向三级 ISP,再路由到二级 ISP,再路由到一级骨干网提供商,再路由到正确的二级 ISP,最后路由到提供该数据的三级接入提供商连接的目标计算机。 --- via: <https://www.networkworld.com/article/3532318/what-is-the-internet-backbone-and-how-it-works.html> 作者:[Tim Greene](https://www.networkworld.com/author/Tim-Greene/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[messon007](https://github.com/messon007) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
12,143
如何为 Linux 打包 Python 应用
https://opensource.com/article/20/4/package-python-applications-linux
2020-04-23T23:55:58
[ "Python", "打包", "软件包" ]
https://linux.cn/article-12143-1.html
> > 了解如何使用 dh\_virtualenv 来让你的 Python 应用可作为 .deb 包安装。 > > > ![](/data/attachment/album/202004/23/235547iztz5d955t9s9b5t.jpg) 在基于 Debian 的操作系统(例如 Debian 或 [Elementary OS](https://opensource.com/article/19/12/pantheon-linux-desktop))上安装 Python 应用的一种方法是使用 [dh\_virtualenv](https://dh-virtualenv.readthedocs.io/en/latest/) 工具。它可以构建一个 `.deb` 包,在应用之外封装了一个 Python 虚拟环境,并在安装时进行部署。 在本文中,我将以构建一个包含 [HTTPie](https://opensource.com/article/19/8/getting-started-httpie) 工具的包为例来解释如何使用它,以便在无需激活虚拟环境的情况下从命令行测试 HTTP API。 ### 使用 dh\_virtualenv 打包 首先,你需要安装 `dh_virtualenv` 所需的工具。`dh_virtualenv` 的[文档](https://dh-virtualenv.readthedocs.io/en/1.1/tutorial.html)提供了所有安装选项。在基于 Debian 的系统上,我输入: ``` apt-get install dh-virtualenv devscripts ``` 尽管不需要 [devscripts](http://man.he.net/man1/devscripts) 包,但它可以简化后续操作。 现在,创建一个目录来保存源码。由于这是一个本地的、非官方的 HTTPie 打包,因此我将其称为 `myhttp`。接下来,让我们在 `myhttp` 内创建一些文件,向 Debian 构建系统提供元数据。 首先,创建 `debian/control` 文件: ``` Source: myhttp Section: python Priority: extra Maintainer: Jan Doe <[email protected]> Build-Depends: debhelper (>= 9), python3.7, dh-virtualenv (>= 0.8) Standards-Version: 3.9.5 Package: myhttp Architecture: any Pre-Depends: dpkg (>= 1.16.1), python3.7, ${misc:Pre-Depends} Depends: ${misc:Depends} Description: http client Useful for doing stuff ``` 那么这些是什么信息呢?正如 [Debian 文档](https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#control)指出的: > > “第 1–7 行是源码包的控制信息。第 9–13 行是二进制包的控制信息。” > > > 以下是我使用的: * `Section` 的值对于我们来说大多没有意义,但需要存在。它的意义在于向引导式 UI 安装程序提供信息,但这与这个包无关。 * `Priority` 对像这样的第三方包的正确值是 `extra`。 * 强烈建议在 `Maintainer` 字段中填写正确的联系人信息。不过不一定非得是你的个人电子邮件;如果包由团队维护,并且你希望将问题发送到团队的邮件别名,也可以填写类似 `Infrastructure Team <[email protected]>` 这样的团队地址。 * `Build-Depends` 字段标识你需要 `debhelper`、`python` 和 `dh-virtualenv` 来构建包:包构建过程将确保这些依赖项在构建时已安装。 * `Standards-Version` 字段主要给人看。它表明你遵循的指南。本指南基于 `dh-virtualenv` 的官方文档,而后者基于 Debian 的 3.9.5 指南。另外,几乎总是最好将源码包和二进制包命名相同。 * `Architecture` 字段应为 `any`,因为虚拟环境可能包含一些特定于体系结构的文件;否则,该字段最好选择为 `all`。 * 保持 `Pre-Depends`
列表不变:它是一种非常严格的依赖关系形式,你很少会需要比这里建议的最小依赖更多的依赖项。依赖项通常由构建系统准确计算,因此没有理由手动指定它们。 * 如果你的包主要用于内部,那么 `Description` 字段可能只需要最少的信息或者指向公司 wiki 的链接;否则,提供更多的信息会更有用。 然后创建 `debian/compat` 文件,它[主要出于历史目的而存在](https://www.debian.org/doc/manuals/maint-guide/dother.en.html#compat): ``` $ echo "9" > debian/compat ``` 接下来,创建更新日志以告知包用户自上次发布以来发生了什么变化。最简单的方法是使用 `dch --create` 创建模板,然后填写值。 填写后,它看起来像: ``` myhttp (2.0.0-1) stable; urgency=medium * Initial release. -- Jan Doe <[email protected]> Fri, 27 Mar 2020 01:09:22 +0000 ``` 现在你需要告诉工具安装 HTTPie,但是哪个版本? 创建一个宽松版本的 `requirements.in` 文件: ``` httpie ``` 通常,宽松的需求文件将仅包含项目的直接依赖项,并在需要时指定最低版本。不一定总是需要指定最低版本:这些工具通常倾向于将依赖关系收紧到“尽可能新的版本”。如果你的 Debian 包与一个内部 Python 包相对应,这是内部应用中的一种常见情况,那么宽松的需求文件看起来将很相似:仅包含包名的一行。 然后使用 `pip-compile`(可通过安装 PyPI 包 `pip-tools` 获得): ``` $ pip-compile requirements.in > requirements.txt ``` 这会生成一个严格的依赖文件,名为 `requirements.txt`: ``` # # This file is autogenerated by pip-compile # To update, run: # # pip-compile requirements.in # certifi==2019.11.28 # via requests chardet==3.0.4 # via requests httpie==2.0.0 # via -r requirements.in idna==2.9 # via requests pygments==2.6.1 # via httpie requests==2.23.0 # via httpie urllib3==1.25.8 # via requests ``` 最后,写一个 `debian/rules` 文件来创建包。因为 `dh_virtualenv` 会处理所有困难的事,因此规则文件很简单: ``` #!/usr/bin/make -f %: dh $@ --with python-virtualenv --python /usr/bin/python3.7 ``` 确保指定 Python 解释器。默认它会使用 `/usr/bin/python`,即 Python 2,但是你应该使用一个[受支持的 Python 版本](https://opensource.com/article/19/11/end-of-life-python-2)。 完成了,接下来就是构建包: ``` $ debuild -b -us -uc ``` 这会在父目录生成一个类似 `myhttp_2.0.0-1_amd64.deb` 的文件。该文件可在任何兼容的系统上安装。 通常,最好在同一平台上构建用于特定平台(例如 Debian 10.0)的 Debian 包。 你可以将此 Debian 包保存在软件仓库中,并使用例如 [Ansible](https://opensource.com/resources/what-ansible) 的工具将其安装在所有相关系统上。 ### 总结 为基于 Debian 的系统打包应用是一个多步骤的过程。使用 `dh_virtualenv` 将使过程变得简单明了。 --- via: <https://opensource.com/article/20/4/package-python-applications-linux> 作者:[Moshe Zadka](https://opensource.com/users/moshez) 选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
One way to make Python applications installable on Debian-based operating systems (such as Debian or [Elementary OS](https://opensource.com/article/19/12/pantheon-linux-desktop)) is by using the [dh_virtualenv](https://dh-virtualenv.readthedocs.io/en/latest/) tool. It builds a **.deb** package that wraps a Python virtual environment around an application and deploys it upon installing. In this article, I will explain how to use it with the example of building a package containing the [HTTPie](https://opensource.com/article/19/8/getting-started-httpie) tool to test HTTP APIs from the command line without having to activate a virtual environment. ## Packaging with dh_virtualenv First, you need to install the tools that dh_virtualenv needs. dh_virtualenv's [documentation](https://dh-virtualenv.readthedocs.io/en/1.1/tutorial.html) provides all of the installation options. On my Debian-based system, I entered: `apt-get install dh-virtualenv devscripts` While the [devscripts](http://man.he.net/man1/devscripts) package is not required, it will simplify doing the subsequent operations. Now, create a directory to keep the sources. Since this is a local, unofficial, packaging of HTTPie, I called it **myhttp**. Next, let's create some files inside **myhttp** to provide metadata to the Debian build system. First, create the **debian/control** file: ``` Source: myhttp Section: python Priority: extra Maintainer: Jan Doe <[email protected]> Build-Depends: debhelper (>= 9), python3.7, dh-virtualenv (>= 0.8) Standards-Version: 3.9.5 Package: myhttp Architecture: any Pre-Depends: dpkg (>= 1.16.1), python3.7, ${misc:Pre-Depends} Depends: ${misc:Depends} Description: http client Useful for doing stuff ``` So what is all this information about? As the [Debian documentation](https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#control) puts it: "Lines 1–7 are the control information for the source package. Lines 9–13 are the control information for the binary package." 
Here's my take: - the **section** value is mostly meaningless for our case, but needs to be there. It's meaningful to provide information to the guided UI installer, which is not relevant for this package. - The extra **Priority** value is the right priority for 3rd party packages like this one. - It is highly recommended to put real contact details in the **Maintainer** field. It does not have to be your personal e-mail, though -- "Infrastructure Team <[[email protected]](mailto:[email protected])>", for example, if the package is maintained by the team and you would like issues to be sent to the team's mail alias. - The **build-depends** field indicates that you need debhelper, python, and dh-virtualenv to build the package: the package build process will make sure those dependencies are installed at package build time. - The **standards version** is mostly for human consumption. It indicates which guide you are following. This guide is based on the official documentation of dh-virtualenv, which is based on the 3.9.5 guide from Debian. It is almost always the best choice to name the binary package and the source package the same. - The **Architecture** field should be **Any** because a virtual environment might include some architecture-specific files: otherwise, the field would be better chosen as **all**. - Keep the **pre-depends** list as-is: pre-depends is a pretty strict form of dependencies, and it is rare that you need anything more than the minimum suggested here. The dependencies are usually calculated accurately by the build system, so there is no reason to specify them manually. - If your package is mostly for internal use, then the **Description** might only specify minimal information and a link to the company wiki; otherwise, more details might be useful.
Then create the **debian/compat** file, which [exists mostly for historical purposes](https://www.debian.org/doc/manuals/maint-guide/dother.en.html#compat): `$ echo "9" > debian/compat` Next, create the changelog to tell package users what has changed since the last release. The easiest way is to use **dch --create** to create a template and then fill in the values. Filled in, it looks like: ``` myhttp (2.0.0-1) stable; urgency=medium * Initial release. -- Jan Doe <[email protected]> Fri, 27 Mar 2020 01:09:22 +0000 ``` Now you need to tell the tool to install HTTPie, but which version? Create a **requirements.in** file that has loose versions: `httpie` In general, the loose requirements file will only contain direct dependencies of your project and will specify minimum versions if needed. It is not always necessary to specify the minimum versions: the tools are usually biased towards tightening the dependencies towards "latest version possible". In the case where your Debian package corresponds to one internal Python package, a common case in internal applications, the loose requirements file will look similar: just one line with the name of the package. Then use **pip-compile** (which is available by installing the PyPI package **pip-tools**): `$ pip-compile requirements.in > requirements.txt` This will produce a strict dependency file called **requirements.txt**: ``` # # This file is autogenerated by pip-compile # To update, run: # # pip-compile requirements.in # certifi==2019.11.28 # via requests chardet==3.0.4 # via requests httpie==2.0.0 # via -r requirements.in idna==2.9 # via requests pygments==2.6.1 # via httpie requests==2.23.0 # via httpie urllib3==1.25.8 # via requests ``` Finally, write a **debian/rules** file for creating the package. Since dh_virtualenv does all the hard work, the rules file is simple: ``` #!/usr/bin/make -f %: dh $@ --with python-virtualenv --python /usr/bin/python3.7 ``` Be sure to specify the Python interpreter. 
By default, it will use the interpreter in **/usr/bin/python**, which is Python 2, but you should use a [supported version of Python](https://opensource.com/article/19/11/end-of-life-python-2). The writing is finished; all that's left is to build the package: `$ debuild -b -us -uc` This will produce a file in the parent directory with a name like **myhttp_2.0.0-1_amd64.deb**. This file can be installed on any compatible operating system. In general, it's best to build Debian packages that are intended for a specific platform, such as Debian 10.0, on the same platform. You can store this Debian package in a repository and install it on all relevant systems with, for example, [Ansible](https://opensource.com/resources/what-ansible). ## Conclusion Packaging applications for Debian-based operating systems is a multi-step process. Using dh_virtualenv will make the process straightforward.
12,146
Ubuntu 20.04 LTS 下载及新特性
https://itsfoss.com/download-ubuntu-20-04/
2020-04-24T10:49:00
[ "Ubuntu" ]
https://linux.cn/article-12146-1.html
> > Ubuntu 20.04 LTS Focal Fossa 终于发布了。以下是 Ubuntu 20.04 的新功能和下载链接,以及 Ubuntu 20.04 的官方风味版。 > > > 等待终于结束了。Ubuntu 20.04 LTS 终于来了,可以下载了! ![](/data/attachment/album/202004/24/105007sfvif4cxwp0ftqtt.jpg) 如果你已经在使用 Ubuntu 19.10,你可能发现没有太多明显的差异 —— 但我所看到的改进清单确实令人印象深刻。 如果你想知道有什么新的东西,我将会提到这个版本的几个主要亮点。 ### Ubuntu 20.04 有什么新特性? 与之前的 18.04 LTS 版本相比,Ubuntu 20.04 LTS 有很多视觉上的改变,同时也包括了一些底层的改进。 当然,随着 [GNOME 3.36](https://itsfoss.com/gnome-3-36-release/) 的加入,视觉上的重大升级和性能上的提升是相当明显的。以下是一段视频,重点介绍了 Ubuntu 20.04 中的新功能。 如果你有兴趣的话,你也可以深入到我们的文章中去看看,重点介绍 [Ubuntu 20.04 的新功能](https://itsfoss.com/ubuntu-20-04-release-features/)。 不管是哪种情况,让我来给你介绍一些亮点。 * GNOME 3.36 是默认的桌面系统 * 性能提高了很多 * 新的 Yaru 主题很华丽,也有黑暗模式。 * 你不会再看到亚马逊应用程序,它已经永远消失了! * 改进的 [ZFS](https://itsfoss.com/what-is-zfs/) 支持 * 你可以获得最新的 [Linux 内核 5.4](https://itsfoss.com/linux-kernel-5-4/)(LTS) * 增加了对 exFAT 的支持 * 改进硬件和图形支持 * GameMode 可为兼容的游戏带来更好的游戏性能 * 更新软件 Python 3.8.2 * Wireguard 已被移植到 Linux 内核 5.4,可以在 Ubuntu 20.04 上使用。 如果你想看详细的功能介绍,请观看这段视频,重点介绍 20.04 LTS 的最佳功能。 如果你是 Ubuntu 的新手,对 Ubuntu 20.04 有疑问,我们也为你提供了一篇快速的 [Ubuntu 20.04 FAQ](https://itsfoss.com/ubuntu-20-04-faq/) 文章。 ### 从 18.04 和 19.10 升级到 Ubuntu 20.04 如果你已经在使用 Ubuntu 18.04 或 19.10,你可以从系统中轻松升级到 Ubuntu 20.04。 这样一来,当你开始使用新版本的时候,你的文件和大部分的应用程序设置都会保持原样,而无需从 ISO 中重新安装。 你可以阅读此详细教程来学习[如何将 Ubuntu 升级到新版本](https://itsfoss.com/upgrade-ubuntu-version/)。 请注意,如果你使用的是 Lubuntu 18.04,请不要升级到 Lubuntu 20.04。Lubuntu 18.04 使用的是 LXDE 桌面,而后面的版本使用的是 LXQt 桌面。这样升级可能会导致冲突和系统崩溃。 ### 下载 Ubuntu 20.04 LTS 你可以抓取 ISO 或 torrent 文件,这取决于你喜欢什么。 * [Ubuntu 20.04 LTS (ISO)](http://releases.ubuntu.com/focal/ubuntu-20.04-desktop-amd64.iso) * [Ubuntu 20.04 LTS (Torrent)](http://releases.ubuntu.com/focal/ubuntu-20.04-desktop-amd64.iso.torrent) 请按照本教程学习[如何安装 Ubuntu](https://itsfoss.com/install-ubuntu/)。 ### 下载 Ubuntu 20.04 LTS 官方风味版 如果你想在不同的桌面环境下得到 Ubuntu 的官方版本,请按照下面的链接进行操作。 * [Ubuntu MATE 20.04 LTS](https://ubuntu-mate.org/download/amd64/focal/) * [Kubuntu 20.04 LTS](https://kubuntu.org/getkubuntu/) * [Xubuntu 20.04 LTS](https://xubuntu.org/download/) * [Ubuntu Budgie
20.04 LTS](https://ubuntubudgie.org/downloads/) * [Ubuntu Studio 20.04 LTS](https://ubuntustudio.org/) * [Lubuntu 20.04 LTS](https://lubuntu.me/downloads/) 如果你没有在各自的官方网站上找到最新的 ISO/torrent 文件,只需前往 [cdimages.ubuntu.com](http://cdimages.ubuntu.com/) 找到所有的版本。 接下来,导航到“发行版名称” -> “releases” -> “20.04” -> “release”,然后你应该可以找到列出的 ISO 和 torrent 文件的链接。 如果你有问题,请参考这篇文章,它回答了[关于 Ubuntu 20.04 的常见问题](https://itsfoss.com/ubuntu-20-04-faq/)。如果你要安装,请查看我们推荐的[安装 Ubuntu 20.04 之后要做的事情](https://itsfoss.com/things-to-do-after-installing-ubuntu-20-04/)。 享受 Ubuntu 20.04 LTS Focal Fossa! --- via: <https://itsfoss.com/download-ubuntu-20-04/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
**Brief: Ubuntu 20.04 LTS Focal Fossa has been finally released. Here is a quick recap of the new features and the download links for Ubuntu 20.04** **and the official flavours for it**. The wait is finally over. Ubuntu 20.04 LTS has finally arrived and is available to download! ![Ubuntu 20.04 Lts Released](https://itsfoss.com/content/images/wordpress/2020/04/Ubuntu-20.04-LTS-released.png) If you were already using Ubuntu 19.10, you may not notice a lot of noticeable differences – but the list of improvements that I’m looking at is indeed impressive. If you’re curious about what’s new, I’ll mention a few key highlights for this release. ## Ubuntu 20.04: What’s new? Considering the previous 18.04 LTS release, [Ubuntu](https://ubuntu.com/) 20.04 LTS involves a lot of visual changes and under-the-hood improvements as well. Of course, with the addition of [GNOME 3.36](https://itsfoss.com/gnome-3-36-release/), a major visual upgrade and the performance improvement is quite obvious. Here’s a video highlighting what’s new in Ubuntu 20.04: You can also dive into one of our articles highlighting [new features of Ubuntu 20.04](https://itsfoss.com/ubuntu-20-04-release-features/), if you’re curious. In either case, let me just give you some highlights: - GNOME 3.36 is the default desktop - The performance has improved a lot - The new Yaru theme is gorgeous, comes in dark mode as well - You won’t have to see the Amazon app anymore, it’s gone for good! 
- Improved [ZFS](https://itsfoss.com/what-is-zfs/)support - You get the latest [Linux Kernel 5.4](https://itsfoss.com/linux-kernel-5-4/)(LTS) - Adds exFAT support - Improved hardware and graphics support - GameMode for better gaming performance for compatible games - Updated software Python 3.8.2 - Wireguard is being backported to Linux Kernel 5.4 to be utilized on Ubuntu 20.04 If you want to see the features in detail and in action, please watch this video highlighting the best features of 20.04 LTS: In case you’re new to Ubuntu and have questions about Ubuntu 20.04, we also have a quick [Ubuntu 20.04 FAQ](https://itsfoss.com/ubuntu-20-04-faq/) article for you. ## Upgrade to Ubuntu 20.04 from 18.04 and 19.10 If you are already using Ubuntu 18.04 or 19.10, you can easily upgrade to Ubuntu 20.04 from within your system. This way, your files and most of the applications settings remain as it is while you start using the new version without reinstalling it from the ISO. You can read this detailed tutorial to learn [how to upgrade Ubuntu to a newer version](https://itsfoss.com/upgrade-ubuntu-version/). Please note that if you are using Lubuntu 18.04, you must not upgrade to Lubuntu 20.04. Lubuntu 18.04 used Lxde desktop while later versions use LXQt desktop. Upgrading this way result in conflicts and possible broken system. ## Download Ubuntu 20.04 LTS You can grab the ISO or the torrent file, depending on what you prefer: Please follow this tutorial to learn [how to install Ubuntu](https://itsfoss.com/install-ubuntu/). ## Download Ubuntu 20.04 LTS Official Flavours If you want to grab an official flavour of Ubuntu with a different desktop environment, please follow the links below. You may also [download Ubuntu via torrent](https://itsfoss.com/download-ubuntu-via-torrent/). If you don’t find the latest ISO/torrent file listed in their respective official sites, simply head to [cdimages.ubuntu.com](http://cdimages.ubuntu.com/) to find all the flavours listed. 
Next, navigate to the **name-of-the-distro** -> **releases** -> **20.04** -> **release** and then you should find the links to the ISO and the torrent files listed. If you have questions, please refer to this article that answers [frequently asked questions about Ubuntu 20.04](https://itsfoss.com/ubuntu-20-04-faq/). If you are going to install it, do check our recommended [things to do after installing Ubuntu 20.04](https://itsfoss.com/things-to-do-after-installing-ubuntu-20-04/). Enjoy Ubuntu 20.04 LTS Focal Fossa!
12,149
12 个有趣的 Linux 终端命令
https://itsfoss.com/funny-linux-commands/
2020-04-25T17:33:00
[ "命令", "有趣" ]
https://linux.cn/article-12149-1.html
> > 你觉得 Linux 终端里只有无趣的工作吗?那你一定不知道下面这些有趣的 Linux 命令吧。 > > > Linux 终端是用来完成复杂的工作的,我们有很多有用的 [Linux 命令奇技淫巧](https://itsfoss.com/linux-command-tricks/)来帮助你。 但是,你知道你还可以用终端来做很多有趣的事吗?如果你不知道,没关系,大多数 Linux 用户也都只把终端视为一个用来管理系统和开发工作的交互界面。 然而,如果你知道这里有些你可以在终端玩的[基于终端的游戏](https://itsfoss.com/best-command-line-games-linux/)和 [ASCII 码游戏](https://itsfoss.com/best-ascii-games/),你一定会大吃一惊。 在这篇文章中,我将会探索一些有趣的、可笑的、荒谬的命令来让你可以在终端中找点乐子! ### 用这些命令在 Linux 终端中找点乐子 ![](/data/attachment/album/202004/25/173540f111i1i7jtqijtn1.png) 你可能会觉得这些命令荒谬或没用,但是有一些还是可以被好好利用的。 我已经放上了 Ubuntu/Debian 的安装指令,如果你使用基于 Ubuntu 的发行版,请确保[启用 universe 仓库](https://itsfoss.com/ubuntu-repositories/),因为大多数命令不在主仓库中。 如果你使用 Arch、Fedora、SUSE、Solus 或者其他非 Ubuntu 的发行版,请使用你的发行版包管理工具去安装这些有趣的 Linux 命令。 #### 1、在终端开一辆火车 让我们坐上火车,来一场说走就走的旅行,没错,就是字面意思! `sl` 命令可以让你在终端运行一辆火车。 ![](/data/attachment/album/202004/25/173348yx81qv1481w68i4f.jpg) 安装方法: ``` sudo apt install sl ``` 完成后,你只要在终端输入下面的命令就可以开始: ``` sl ``` 很精彩,对吧?但等等,我们还没结束!你还可以让你的火车飞起来,只要加上参数 `-F`,波特先生,请飞吧: ``` sl -F ``` 这样会让火车长出翅膀飞出终端窗口! #### 2、给你的 Linux 终端加上黑客帝国效果 还记得科幻电影[黑客帝国](https://en.wikipedia.org/wiki/The_Matrix)吗?终端里掉落的绿色字符成为了黑客帝国的标志形象。 你也可以在你的 Linux 电脑里有这样的黑客帝国数字雨!你只需要安装 `cmatrix` 然后在终端输入它就行。 ![](/data/attachment/album/202004/25/173350c7d7mw13g310cgc0.png) 在 Debian/Ubuntu Linux 上安装 cmatrix: ``` sudo apt install cmatrix ``` 现在,你要做的就是输入下面的命令,在终端就会有黑客帝国屏幕了: ``` cmatrix ``` 按下 `Ctrl+C` 来停止它,黑客先生。 #### 3、燃起来 拿好灭火器,因为接下来你要在你的终端里点火了! ![](/data/attachment/album/202004/25/173355q7kypp1znnwrvrri.png) 想安装它,你要输入: ``` sudo apt install libaa-bin ``` 完成后,你输入下面的命令,你的终端就会燃起一团火焰: ``` aafire ``` 按下 `Ctrl+C` 来停止它。 #### 4、幸运饼干命令 想知道你的运气怎样但身边没有幸运饼干?
别担心,你只需在终端打出 `fortune` 然后按下回车。终端将会随机显示出一个幸运语,就像你从幸运饼干里得到的一样。 ![](/data/attachment/album/202004/25/173356si7kd297uy1uu1c7.jpg) 安装它: ``` sudo apt install fortune ``` 完成后,只要在命令行打出下面的内容,你就会知道你的运气怎样: ``` fortune ``` 这是一个你可以实际使用的命令。你可以用它作为每日消息,这样在多用户环境下,每个用户登录后都会得到一个“幸运饼干”。 你也可以把它添加到 `bashrc` 文件,这样当你登进终端你就会看到了。 #### 5、宠物爱好者?这是给你准备的 `oneko` 是一个有趣的命令,可以把你的光标变成一只老鼠,然后创造一只好奇的猫,一旦你移动光标,它就会追着光标跑。这不仅局限于终端。当猫追逐光标时,你可以继续工作。 如果你家里有孩子,这一定很有趣。 ![](/data/attachment/album/202004/25/173650l2tczstuvov6zveg.jpg) 用下面的命令安装 `oneko`: ``` sudo apt install oneko ``` 用下面的命令运行它: ``` oneko ``` 如果你不喜欢猫,喜欢狗,输入: ``` oneko -dog ``` 它有很多种小宠物,你可以用 `oneko -help` 获取信息。按下 `Ctrl+C` 终止它。 #### 6、有个小兄弟在看着你 `xeyes` 是一个很小的 GUI 程序,它可以绘制出一双眼睛一直盯着你!它会不断跟随你的鼠标光标,运行命令自己看看! ![](/data/attachment/album/202004/25/173402xf8exxjx1kkuukuk.jpg) 你可以用下面命令安装: ``` sudo apt install xeyes ``` 然后这样用它: ``` xeyes ``` 按下 `Ctrl+C` 终止它。 #### 7、让终端帮你讲话 试试这个命令前,请先打开你的扬声器。[eSpeak](https://itsfoss.com/espeak-text-speech-linux/) 是一个有趣的命令,它可以让你的终端说话。是的,你没听错。 先安装这个包: ``` sudo apt install espeak ``` 接下来,你只需要在命令行中输入你想听到的话: ``` espeak "Type what your computer says" ``` 无论你在双引号里面填什么,你的电脑都会复述出来!它就像[在 Linux 中的 echo 命令](https://linuxhandbook.com/echo-command/),但不是打印出来,而是说出来。 #### 8、Toilet(但与洗手间无关) 这听起来有点奇怪,是的。但是,这只是一个命令,用来将文本转换成大的 ASCII 字符。 ![](/data/attachment/album/202004/25/173447dhfre4rwehroft5p.jpg) 用这个命令安装 `toilet`: ``` sudo apt install toilet ``` 完成后,你只需输入: ``` toilet "sample text you want" ``` 我也不知道为啥这个小程序叫 Toilet。 #### 9、那个牛说什么?
`cowsay` 是一个在终端中用 ASCII 字符展示出一头牛的命令。通过这个命令,你可以控制牛说出你想说的话。 不要期待真的有声音,它只会展示文字(就像你在漫画书里看到的那样)。 ![Cowsay Cowthink](/data/attachment/album/202004/25/173453pbfvuu8te7ttqbvm.jpg) 安装 `cowsay`: ``` sudo apt install cowsay ``` 安装完成后,你只要输入: ``` cowsay "your text" ``` 无论你在双引号里填什么,你的牛都会说!我看到一些系统管理员用它来展示每天的消息。你也可以这样,你甚至可以把它和 `fortune` 命令结合。 #### 10、旗帜 `banner` 命令与 `toilet` 命令相似,但它限制最多只能打印 10 个字符。 ![](/data/attachment/album/202004/25/173457fywe4ot9ne7yy77w.jpg) 你可以这样安装 `banner` 命令: ``` sudo apt install sysvbanner ``` 然后这样运行: ``` banner "Welcome" ``` 替换双引号里的内容,你将会得到你想要的展示内容。 #### 11、Yes ![](/data/attachment/album/202004/25/173458lp2v6jtk8iripc2k.jpg) `yes` 命令帮助你在循环中自动响应直到终止命令。这个命令将会一直打印相同的内容。如果你想快速生成大量垃圾文本,那么此命令会非常好用。 你也可以使用它为命令提供 `yes` 应答(如果提示应答时)。例如,`apt upgrade` 命令会要求你确认,你可以像这样使用它: ``` yes | sudo apt upgrade ``` 你不需要安装任何包,`yes` 命令已经存在了。 想要终止 `yes` 命令循环,只需按下 `CTRL+C`。 #### 12、得到一个新的身份 要生成一个随机的假身份吗?我推荐你用 `rig` 命令。你在终端运行它,就会生成一个假的身份。 ![](/data/attachment/album/202004/25/173500uitrqodidsdby9mo.jpg) 用这个命令安装 `rig`: ``` sudo apt install rig ``` 只需像这样输入: ``` rig ``` 它可能被用在脚本或者 web 应用中展示随机信息,但我自己还没有这样用过。 ### 结尾 我希望你会喜欢这个有趣的 Linux 命令列表。你最喜欢哪个命令呢?你还知道其他有趣的命令吗?请在评论部分与我们分享。 --- via: <https://itsfoss.com/funny-linux-commands/> 作者:[Srimanta Koley](https://itsfoss.com/author/itsfoss/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Zioyi](https://github.com/Zioyi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
The Linux terminal is the place to get serious work done. We have plenty of useful [Linux command tips and tricks](https://itsfoss.com/linux-command-tricks/) to help you with that. But, did you know that you can have a lot of fun using the terminal? Well, if you did not, then you are not alone. Most Linux users see the terminal as an interface that is designed and built for system management and development tasks. However, you will be surprised to know that there are tons of [terminal based games](https://itsfoss.com/best-command-line-games-linux/) and [ASCII games](https://itsfoss.com/best-ascii-games/) that you can play in the terminal. And, in this article, I’m going to explore some interesting, some funny and some ridiculous commands that you can type into the terminal on Linux and have fun! If you are on an Ubuntu-based distribution, [enable universe repository](https://itsfoss.com/ubuntu-repositories/) as most of these commands are not in the main repository. For other distros, please use your respective package managers. ## 1. sl: Run a train in your terminal Let’s take a ride in the locomotive and begin our auspicious journey. And I mean it literally! The `sl` command allows you to [run a train in your terminal](https://itsfoss.com/ubuntu-terminal-train/). ![Use "sl" or Steam Locomotive command to get a running train inside your terminal](https://itsfoss.com/content/images/2023/10/sl-command.gif) Here’s how to install the command: `sudo apt install sl` Once done, you can simply type in the following in the terminal to get started: `sl` Impressive, isn’t it? But, hold on. We are not done yet! Apparently, you can fly your locomotive. Just add the option -F, Mr. Potter: `sl -F` ![Use "sl" command with "-F" option to get a flying train](https://itsfoss.com/content/images/2023/10/sl-fly.gif) This should make the locomotive get wings to fly off from the terminal window!
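Every command in this list ships as an optional package, so a script that calls them should degrade gracefully when a package is missing. A minimal POSIX-sh sketch — the `run_or_hint` helper name is my own, not part of any of these tools:

```shell
# Run a command if it is installed; otherwise print the apt install hint.
run_or_hint() {
    cmd=$1
    shift
    if command -v "$cmd" >/dev/null 2>&1; then
        "$cmd" "$@"
    else
        echo "$cmd is not installed; try: sudo apt install $cmd"
    fi
}

# Example: launch the flying train only when sl is available.
run_or_hint sl -F
```

The same guard works for cmatrix, cowsay, and the rest of the commands below.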
[Running a Train in the Linux Terminal With sl Command: Choo choo! All aboard the choo choo train in the Linux terminal.](https://itsfoss.com/ubuntu-terminal-train/) ![](https://itsfoss.com/content/images/wordpress/2015/04/Linux_Terminal_Train.jpeg) ## 2. cmatrix: Add the Matrix effect to your Linux terminal Remember the iconic sci-fi movie [The Matrix](https://en.wikipedia.org/wiki/The_Matrix?ref=itsfoss.com)? The green text falling down on the terminal became an identity of the Matrix. You can have this Matrix digital rain on your Linux boxes as well! You just need to [install cmatrix](https://itsfoss.com/using-cmatrix/) and type it in the terminal. ![Get a Matrix Screen using "cmatrix" Command](https://itsfoss.com/content/images/2023/10/cmatrix.gif) Installing cmatrix on Debian/Ubuntu Linux: `sudo apt install cmatrix` Now, all you have to do is type the following to get the matrix screen on the terminal: `cmatrix` **Press Ctrl+C to stop it, Mr. Anderson.** ## 3. aafire: Let there be fire Keep your fire extinguisher handy because now you are going to start a fire in your terminal! ![Get an ASCII Fire using the "aafire" Command](https://itsfoss.com/content/images/2023/10/aafire.gif) To get it installed, here’s what you have to type: `sudo apt install libaa-bin` Once done, start a fire in your terminal by entering: `aafire` Press Ctrl+C to stop it. ## 4. fortune: Want to know your fortune, but there are no fortune cookies around you? Not to worry, you just need to type “fortune” on your terminal and press enter. The terminal will display a random sentence, just like you usually get in fortune cookies. Install it with: `sudo apt install fortune` Once done, simply type in the command below to know your fortune: `fortune` Now, this is one of the commands here that you could actually use.
You can use it as message of the day so that in a multi-user environment, all the users will see a random fortune cookie when they log in. ## 5. oneko: Pet lover? This is for you Oneko is a little fun command that will change your regular cursor into a mouse and creates a curious little cat who will chase your cursor once you move it. It’s not limited to just the terminal. You can keep on working while the cat chases the cursor. Now, that’s something fun to do, especially if you have kids at home. ![Change your regular cursor into a mouse and see a curious little cat chasing your cursor once you move it, using Oneko](https://itsfoss.com/content/images/2023/10/oneko.gif) Install Oneko with this command: `sudo apt install oneko` Run it with this command: `oneko` In case you want a dog instead of a cat, type: `oneko -dog` ![Get a dog instead of a cat using "-dog" option with "oneko" command](https://itsfoss.com/content/images/2023/10/oneko-dog.gif) There are a few more types of cats available. You can get that information by using `oneko -help`. To stop it, use Ctrl+C. ## 6. xeyes: Little brother is watching you Xeyes is a tiny GUI program that lets the user draw a pair of ever watching eyes! It will follow your mouse cursor constantly. Run the command and see it yourself! ![Use Xeyes Command to draw a pair of ever watching eyes! It will follow your mouse cursor constantly.](https://itsfoss.com/content/images/2023/10/xeyes.gif) This command is provided by the `x11-apps` package. You can install it using this command: `sudo apt install x11-apps` And then use it with this: `xeyes` Press Ctrl+C to stop it. ## 7. espeak: Let the terminal speak for you To try out this command, make sure you have turned on your speakers. [eSpeak](https://itsfoss.com/espeak-text-speech-linux/) is a fun command that gives your terminal a voice. Yes, you heard that right.
Install the package first: `sudo apt install espeak` Next, you need to simply type in the command along with the text that you want to hear as audio: `espeak "Type what your computer says"` Whatever you place in the double quotes, your computer is obligated to say! It’s like the [echo command in Linux](https://linuxhandbook.com/echo-command/?ref=itsfoss.com). But instead of printing, it speaks. ## 8. Toilet: But it has nothing to do with a washroom This sounds weird, yes. But, it’s just a command that transforms a text into large ASCII characters. Install toilet with this command: `sudo apt install toilet` Once done, you just need to type in: `toilet "sample text you want"` I don’t know why this little program is called toilet. ## 9. cowsay: What does the ~~fox~~ cow say? [Cowsay is a command](https://itsfoss.com/cowsay/) that displays a cow using ASCII characters in the terminal. And by using this command, you can instruct the cow to say anything you want. Not to be confused with any audio – it will just display a text (like you usually see in a comic book). Install **cowsay**: `sudo apt install cowsay` Once you have it installed, you just need to type in: `cowsay "your text"` Whatever you place in the double quotes, your cow is obligated to say! I have seen a few sysadmins using it to display the message of the day. Maybe you can do the same. You may even combine it with the fortune command. [Using Cowsay Linux Command Like a Pro: The cowsay is a fun little Linux command line utility that can be enjoyed in so many ways. Here are several examples.](https://itsfoss.com/cowsay/) ![](https://itsfoss.com/content/images/2023/06/cowsay-linux-1.png) ## 10. banner: Whose banner is it? The banner command works just like the toilet command, but it is limited to printing only 10 characters at most.
You can install the banner command like this: `sudo apt install sysvbanner` Then use it in the following way: `banner "Welcome"` Replace content in the double quotes and you shall have your desired text displayed. ## 11. yes: Yes, Terminal! The “yes” command helps you to loop an automated response until you terminate the command. This command will print the exact same thing indefinitely. If you want to produce huge amounts of junk text fast, then this command will work like a charm. ![Use "yes" command to loop an automated response until you terminate the command](https://itsfoss.com/content/images/2023/10/yes-Command.gif) You may also use it to provide a yes to a command (if it prompts for it). For example, the apt upgrade command asks for your confirmation; you can use it like this: `yes | sudo apt upgrade` You don’t need to install any package for it. The yes command is already available. To terminate the yes command loop, simply press **CTRL + C**. ## 12. rig: Get a new identity, well, sort of Want to generate a random fake identity? I give you the command “rig”. Once you place this in the terminal, it will generate a fake identity. Install rig with this command: `sudo apt install rig` Then simply type this: `rig` It may be used in scripts or web-apps that display random information, but I haven’t done anything of that sort on my own. ## 13: Asciiquarium (An aquarium for your terminal!) Asciiquarium can generate a beautiful colored aquarium right inside your terminal. ![Asciiquarium command and get a colorful aquarium inside your terminal](https://itsfoss.com/content/images/2023/10/asciiquarium.gif) On Fedora and Arch Linux based systems, you can install Asciiquarium through the respective package managers. ``` sudo dnf install asciiquarium OR sudo pacman -Syu asciiquarium ``` On Ubuntu, you can [download the source package and build it yourself](https://robobunny.com/projects/asciiquarium/html/).
Or use a PPA: ``` sudo add-apt-repository ppa:ytvwld/asciiquarium sudo apt update sudo apt install asciiquarium ``` [PPA in Ubuntu Linux [Definitive Guide]: An in-depth article that covers almost all the questions around using PPA in Ubuntu and other Linux distributions.](https://itsfoss.com/ppa-guide/) ![](https://itsfoss.com/content/images/2023/05/Understanding-ppa-in-ubuntu.png) Now, simply run: `asciiquarium` **There is more fun stuff in the terminal** You’ll find a lot of these commands ridiculous or useless, but banner and some other commands could actually be put to some good use. Want to experiment more with the terminal? How about a vintage one? [Get a Vintage Linux Terminal with Cool Retro Terminal: Getting nostalgic about the old CRT monitors of the 80s? You can relive it in Linux with the Cool Retro Term application.](https://itsfoss.com/cool-retro-term/) ![](https://itsfoss.com/content/images/wordpress/2015/10/Retro-Terminal-Linux.jpeg) Or, displaying your distribution's logo in ASCII format. [Display Linux Distribution Logo in ASCII Art in Terminal: Wondering how they display the Linux logo in the terminal? With these tools, you can display the logo of your Linux distribution in ASCII art in the Linux terminal.](https://itsfoss.com/display-linux-logo-in-ascii/) ![](https://itsfoss.com/content/images/wordpress/2015/10/display-linux-logo-ascii-terminal.jpeg) If you liked that, [have more fun with ASCII art in the terminal](https://itsfoss.com/ascii-art-linux-terminal/). [10 Tools to Have Fun With ASCII Art in Linux Terminal: Think Linux terminal is all about serious work? Think again.
Here are a few fun things you can do with ASCII art in the terminal.](https://itsfoss.com/ascii-art-linux-terminal/) ![](https://itsfoss.com/content/images/wordpress/2022/07/ascii-art-tools-linux.png) I hope you liked this list of fun Linux commands. Which command do you like the most here? Do you know some other such amusing commands? Do share it with us in the comment section.
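As the yes section above notes, yes is handy for generating filler text in bulk; piping it through head makes it stop on its own instead of running forever. A small sketch using only coreutils:

```shell
# Emit exactly 500 copies of a line, then count them to confirm.
yes "placeholder line" | head -n 500 > filler.txt
wc -l < filler.txt
```

`head` closes the pipe after 500 lines, which is what terminates `yes` without any Ctrl+C.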
12,150
关于 Emacs 中的变量你需要知道的事情
https://opensource.com/article/20/3/variables-emacs
2020-04-25T19:09:00
[ "Elisp", "Emacs" ]
/article-12150-1.html
> > 学习 Elisp 是如何处理变量的,以及如何在你的脚本与配置中使用它们。 > > > ![](/data/attachment/album/202004/25/190905pq1qfk1f8f9qs9v8.jpg) GNU Emacs 是由 C 和 Emacs Lisp(Elisp,Lisp 编程语言的一种方言)写成,它既是一个编辑器,同时又碰巧是一个 Elisp 的沙盒。因此,理解 Elisp 中的一些基本编程概念会对你有一些帮助。 如果你是 [Emacs](https://www.gnu.org/software/emacs/) 新手,请先阅读 Sacha Chua 的《[给 Emacs 新手的资源](http://sachachua.com/blog/p/27144)》精品帖。本篇文章假定你熟悉常见的 Emacs 术语,并且能够阅读并求值 Elisp 代码的简单片段。最好你也听说过变量作用域的概念,知道它在其它编程语言中的作用。本篇文章中的示例假定你使用的是相对较新的 Emacs 版本([v.25 之后的版本](https://www.gnu.org/software/emacs/download.html))。 [Elisp 手册](https://www.gnu.org/software/emacs/manual/html_node/elisp/) 包含了 Elisp 的方方面面,但它是写给那些有明确查找目标的人们的(它在这方面也做得相当棒)。但是很多人想要能够在更高的层次上解释 Elisp 概念的材料,同时将信息压缩成最精华的部分。本篇文章也正是我回应这种呼声的一次尝试,为读者描绘基础的大体轮廓。使他们能在配置中用上这些技巧,也让他们在手册中查询细节变得更容易。 ### 全局变量 用 `defcustom` 定义的用户设置和用 `defvar` 或 `defconst` 定义的变量是全局的。使用 `defcustom` 或 `defvar` 声明变量的一个非常重要的原因是,当一个变量已经被<ruby> 绑定 <rt> bind </rt></ruby>,对它们进行重新求值不会覆盖掉已有的值。举个例子,如果你在初始化文件中对 `my-var` 进行如下绑定: ``` (setq my-var nil) ``` 对如下表达式求值不会将变量覆盖为 `t`: ``` (defvar my-var t) ``` 注意此处有*一个例外*:如果你用 `C-M-x` 快捷键对上述声明求值,它将调用 `eval-defun` 函数,并将变量覆盖为 `t`。通过此方式,你可以按需将变量强制覆盖。这种行为是刻意而为之的:你可能知道,Emacs 中的许多特性是按需加载的,也可以称为自动加载。如果那些文件中的声明将变量覆盖为它们的默认值,那它也就覆盖了你初始化文件中的设置。 ### 用户选项 用户选项就是使用 `defcustom` 声明的全局变量。与使用 `defvar` 声明的变量不同,这些变量可以用 `M-x customize` 界面来配置。据我所知,大部分人因为觉得它开销较大而不经常使用。一旦你知道如何在你的初始化文件中设置变量,也就没有理由一定要去使用它了。许多用户没有意识到的一个细节是,通过 `customize` 的方式设置用户选项能够执行代码,有时可用来运行一些附加的配置指令: ``` (defcustom my-option t "My user option." 
:set (lambda (sym val) (set-default sym val) (message "Set %s to %s" sym val))) ``` 若你对这段代码求值,并键入 `M-x customize-option RET my-option RET` 运行 `customize` 界面,lambda 匿名函数就会被调用,回显区域就会显示出该选项的符号名与值。 如果你在初始化文件中使用 `setq` 改变该选项的值,那么匿名函数不会运行。要想在 Elisp 中正确设置一个选项,你需要使用函数 `customize-set-variable`。或者,人们在他们的配置文件中使用了各种版本的 `csetq` 宏来自动处理(如果你感兴趣,可以通过 GitHub 的代码搜索找到更复杂的变体)。 ``` (defmacro csetq (sym val) `(funcall (or (get ',sym 'custom-set) 'set-default) ',sym ,val)) ``` 若你正在使用 [use-package](https://github.com/jwiegley/use-package#customizing-variables) 宏,`:custom` 关键字会替你处理好以上这些。 在你将以上代码放入到你的初始化文件中之后,你便可以使用 `csetq` 宏在设置变量的同时运行任何现存的 `setter` 函数。要证明这点,你可以使用此宏来改变上面定义的选项,并观察回显区域的消息输出。 ``` (csetq my-option nil) ``` ### 动态绑定与词法绑定 当你在使用其它编程语言时,你可能不会意识到动态绑定与词法绑定的区别。当今的大部分编程语言使用词法绑定,并且在学习变量作用域与变量查找时也没有必要去了解它们之间的区别。 如此看来,Emacs Lisp 比较特殊,因为动态绑定是默认选项,词法绑定需要显式启用。这里有一些历史遗留原因,但在实际使用中,你应该*时刻*启用词法绑定,因为它更快并且不容易出错。要启用词法绑定,只需将如下的注释行作为你的 Emacs Lisp 文件的第一行: ``` ;;; -*- lexical-binding: t; -*- ``` 或者,你也可以调用 `add-file-local-variable-prop-line`,在你选择将变量 `lexical-binding` 置为 `t` 后,会自动插入如上的注释行。 在加载包含如上特殊格式行的文件时,Emacs 会相应地设置变量,这意味着该缓冲区中的代码加载时启用了词法绑定。若要采用交互式的方式,你可以调用 `M-x eval-buffer` 命令,它会将词法绑定考虑在内。 既然你已经知道了如何启用词法绑定,那么了解这些术语的含义就很明智了。对于动态绑定,在程序执行期间建立的最后一个绑定将用于变量查找。你可以通过将以下代码放入空缓冲区并执行 `M-x eval-buffer`,以对此进行测试: ``` (defun a-exists-only-in-my-body (a) (other-function)) (defun other-function () (message "I see `a', its value is %s" a)) (a-exists-only-in-my-body t) ``` 你可能会很惊讶地发现,在 `other-function` 中查找变量 `a` 竟然成功了。 若你在顶部添加了特殊的词法绑定注释后,重新运行前面的示例,这段代码将抛出 `variable is void` 错误,因为 `other-function` 无法识别变量 `a`。如果你使用的是其它编程语言,这才是你所期望的行为。 启用词法绑定后,作用域会由周围的代码所定义。这不仅仅是出于性能的原因,时间也已经证明词法绑定才是更好的选择。 ### 特殊变量与动态绑定 如你所知,`let` 用于临时建立局部绑定: ``` (let ((a "I'm a") (b "I'm b")) (message "Hello, %s. 
Hello %s" a b)) ``` 接下来有趣的是——使用 `defcustom`、`defvar` 以及 `defconst` 定义的变量被称为*特殊变量*,不论词法绑定是否启用,它们都将使用动态绑定: ``` ;;; -*- lexical-binding: t; -*- (defun some-other-function () (message "I see `c', its value is: %s" c)) (defvar c t) (let ((a "I'm lexically bound") (c "I'm special and therefore dynamically bound")) (some-other-function) (message "I see `a', its values is: %s" a)) ``` 通过 `C-h e` 切换至 `Messages` 缓冲区,查看上述示例输出的消息。 使用 `let` 或者函数参数绑定的局部变量会遵循由 `lexical-binding` 变量定义的查找规则,但使用 `defvar`、`defconst` 或 `defcustom` 定义的全局变量,能够沿着调用栈在 `let` 表达式中被修改。 这种技巧允许方便地进行特殊定制,并且经常在 Emacs 中被使用。这并不奇怪,毕竟 Emacs Lisp 最开始只提供动态绑定作为唯一选择。下面是一个常见的示例,说明如何向只读缓冲区临时写入数据: ``` (let ((inhibit-read-only t)) (insert ...)) ``` 这是另一个常见的示例,如何进行大小写敏感的搜索: ``` (let ((case-fold-search nil)) (some-function-which-uses-search ...)) ``` 动态绑定允许你采用作者未曾预料的方式对函数进行修改。对于像 Emacs 这样为这种用法而设计的程序来说,这是个强大的工具与特性。 有一点需要注意:你可能会意外地使用局部变量名,该变量在其他地方被声明为特殊变量。防止这种冲突的一个技巧是避免在局部变量名中使用连字符(`-`)。在我当前的 Emacs 会话中,以下代码只留下少数潜在冲突的候选: ``` (let ((vars ())) (mapatoms (lambda (cand) (when (and (boundp cand) (not (keywordp cand)) (special-variable-p cand) (not (string-match "-" (symbol-name cand)))) (push cand vars)))) vars) ;; => (t obarray noninteractive debugger nil) ``` ### 缓冲区局部变量 每个缓冲区都能够拥有变量的一个局部绑定。这就意味着对于任何变量,都会首先在当前缓冲区中查找缓冲区局部变量取代默认值。局部变量是 Emacs 中一个非常重要的特性,比如它们被主模式用来建立缓冲区范围内的行为与设置。 事实上你已经在本文中见过*缓冲区局部变量*——也就是将 `lexical-binding` 在缓冲区范围内设置为 `t` 的特殊注释行。在 Emacs 中,在特殊注释行中定义的缓冲区局部变量也被称为*文件局部变量*。 任何的全局变量都可以用缓冲区局部变量来遮掩,比如上面定义的变量 `my-var`,你可用如下方式设置局部变量: ``` (setq-local my-var t) ;; or (set (make-local-variable 'my-var) t) ``` 此时 `my-var` 对于你在对上述代码进行求值时对应的缓冲区来说就是局部变量。若你对它调用 `describe-variable`,文档会同时告诉你局部与全局的值。从编程的角度来讲,你可以分别用 `buffer-local-value` 获取局部值,用 `default-value` 获取全局值。若要移除局部值,你可以调用 `kill-local-variable`。 另一个需要注意的重要性质就是,一旦一个变量成为缓冲区局部变量,后续在该缓冲区中使用的 `setq` 都将只能设置局部的值。要想设置默认值,你需要使用 `setq-default`。 因为局部变量意味着对缓冲区的定制,它们也就经常被用于模式钩子中。一个典型的例子如下所示: ``` (add-hook 'go-mode-hook (defun go-setup+ () (setq-local compile-command (if (string-suffix-p "_test.go" 
buffer-file-name) "go test -v" (format "go run %s" (shell-quote-argument (file-name-nondirectory buffer-file-name))))))) ``` 这将设置 `go-mode` 缓冲区中 `M-x compile` 使用的编译命令。 另一个重要的方面就是一些变量会*自动*成为缓冲区局部变量。这也就意味着当你使用 `setq` 设置这样一个变量时,它会针对当前缓冲区设置局部绑定。这个特性不应该被经常使用,因为这种隐式的行为并不好。不过如果你想的话,你可以使用如下方法创建自动局部变量: ``` (defvar-local my-automatical-local-var t) ;; or (make-variable-buffer-local 'my-automatical-local-var) ``` 变量 `indent-tabs-mode` 就是 Emacs 内建的一个例子。如果你在初始化文件中使用 `setq` 改变变量的值,根本不会影响默认值。只有加载初始化文件时你所处的那个缓冲区的局部值会被改变。因此,你需要使用 `setq-default` 来改变 `indent-tabs-mode` 的默认值。 ### 结语 Emacs 是一个强大的编辑器,并且随着你的定制它将变得更加强大。现在,你知道了 Elisp 是如何处理变量的,以及你应如何在你自己的脚本与配置中使用它们。 *本篇文章此前采用 CC BY-NC-SA 4.0 许可证发布在 [With-Emacs](https://with-emacs.com/posts/tutorials/almost-all-you-need-to-know-about-variables/) 上,经过修改(带有合并请求)并在作者允许的情况下重新发布。* --- via: <https://opensource.com/article/20/3/variables-emacs> 作者:[Clemens Radermacher](https://opensource.com/users/clemera) 选题:[lujun9972](https://github.com/lujun9972) 译者:[cycoe](https://github.com/cycoe) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
12,152
如何使用 Inkscape 制作万圣节灯笼
https://opensource.com/article/19/10/how-make-halloween-lantern-inkscape
2020-04-26T15:54:23
[ "Inkscape" ]
https://linux.cn/article-12152-1.html
> > 使用开源软件为你最喜欢的万圣节鬼屋制作一个有趣和怪异的装饰品。 > > > ![万圣节 - 背光飞行的蝙蝠](/data/attachment/album/202004/26/155427m768thy2vtz18d2x.jpg "Halloween - backlit bat flying") 使用开源软件装饰一个独一无二的万圣节灯笼! 通常,灯笼的一部分结构体是不透明的,以阻挡内部的光线。灯笼之所以成为灯笼,是因为其去掉了一些东西:从结构体上切开的窗口,这样光线就可以射出。虽然对于照明来说不实用,但是一个有着怪异形状窗口和隐藏暗色轮廓的灯笼却可以令人兴奋,并创造出很多乐趣。 这篇文章演示了如何使用 [Inkscape](https://opensource.com/article/18/1/inkscape-absolute-beginners) 创建你自己的灯笼。如果还没有 Inkscape,在 Linux 上,你可以从软件库中安装它,在 MacOS 和 Windows 上,你可以从 [Inkscape 网站](http://inkscape.org) 上下载它。 ### 使用材料 * 模板([A4](https://www.dropbox.com/s/75qzjilg5ak2oj1/papercraft_lantern_A4_template.svg?dl=0) 或[信纸](https://www.dropbox.com/s/8fswdge49jwx91n/papercraft_lantern_letter_template%20.svg?dl=0)大小) * 卡片纸(黑色是传统色) * 描图纸(可选) * 裁纸刀、尺子、切割垫板(可使用工艺切割机/激光切割机代替) * 工艺胶 * LED 茶灯蜡烛 安全注意事项:这个项目只使用电池操作的蜡烛。 ### 理解模板 首先,从上面的链接下载你所在区域(A4 或信纸)的合适的模板,然后在 Inkscape 中打开它。 ![灯笼模板荧光屏](/data/attachment/album/202004/26/155512omgg0jceggnjdg7m.png "Lantern template screen") 灰白色的棋盘格背景是透明的(从技术角度来说,它是 alpha 通道。) 黑色基板构成了灯笼。现在,没有窗口可以让光线穿过;灯笼有一个非镂空的黑色基板。你将在 Inkscape 中使用**并集**和**差集**选项来数字化的设计窗口。 蓝色虚线表示折线。橙色实线表示参考线。采光窗口不应该放在橙色盒子的外面。 模板的左侧是你可以在你设计中使用的一些预先制作好的对象。 ### 创建一个窗口或形状 1. 创建一个看起来像你想要的窗口样式的对象。可以使用 Inkscape 左侧工具栏中的一些形状工具来创建对象。此外,你可以下载知识共享(CC)许可或公共领域的剪贴画,并导入 PNG 文件到你的项目中。 2. 当你对对象的形状满意时,在顶部菜单中选择“路径” -> “对象转化成路径” 将其转换为一个路径(而不是一个形状,Inkscape 视其为两种不同的对象)。 ![对象到路径 菜单](/data/attachment/album/202004/26/155531e77qloopqlfsz0v7.png "Object to path menu") 3. 在基板形状的上面放置对象。 4. 选择对象和黑色基板。通过先单击一个,并按住 `Shift` 按键,再选择另一个来完成。 5. 从顶部菜单选择“路径” -> “差集” 来从基板中移除对象的形状。这将创建灯笼中的一个窗口。 ![路径 > 差集 菜单](/data/attachment/album/202004/26/155550jp6dot6rw4ruocst.png "Object > Difference menu") ### 添加对象到窗口中 在制作了一个窗口后,你可以添加对象到其中来创建一个场景。 提示: * 所有的对象,包括文本,必须连接到灯笼的基板,否则,在切割后会掉落下来,并留下一片空白。 * 避免小而复杂的细微之处。即使使用激光切割机或工艺切割机等机器,也很难切割这些细微之处。 1. 创建或导入一个对象。 2. 放置对象到窗口内,以便它至少接触基板的两侧。 3. 选择对象后,从顶部菜单选择“路径” -> “对象转化成路径”。 ![对象到路径 菜单](/data/attachment/album/202004/26/155606o8b5b77wf474wkfe.png "Object to path menu") 4. 选择对象和黑色基板,通过在按住 `Shift` 按键的同时单击每一个来完成。 5. 
选择“路径” -> “并集”来使对象和基板合二为一。 ### 添加文本 文本既可以从基板剪出文字来创建一个窗口(就像我对星星所做的那样),或者也可以添加到一个窗口上(它可以阻挡来自灯笼内部的光线)。如果你要创建一个窗口,只需要执行下面的步骤 1 和步骤 2,然后使用“差集”来从基板移除文本。 1. 从左侧边栏中选择文本工具来创建文本。粗体字体效果最好。 ![文本工具](/data/attachment/album/202004/26/155626sgax1c0ncenjggje.png "Text tool") 2. 选择你的文本,然后从顶部菜单选择“路径” -> “对象转化成路径”。这将转换文本对象为一个路径。注意,这个步骤意味着你将不能再编辑该文本,所以,*只有当*你确定你拥有想要的单词后,再执行这个步骤。 3. 在你转换文本后,你可以按键盘上的 `F2` 来激活节点编辑器工具,当选择使用这个工具时,可以清楚地显示文本的节点。 ![选中的文本使用节点编辑器](/data/attachment/album/202004/26/155630h6npt8knqn8u4oln.png "Text selected with Node editor") 4. 取消文本分组。 5. 调整每个字母,以便使其与相邻字母或基板稍微重叠。 ![重叠文本](/data/attachment/album/202004/26/155640l66y3orgr693eqr8.png "Overlapping the text") 6. 要将所有的字母彼此连接并连接到基板,请重新选择所有文本和基板,然后选择“路径” -> “并集”。 ![使用 路径 > 并集 连接字母和基板](/data/attachment/album/202004/26/155653v8ohz009xn8ht7io.png "Connecting letters and base with Path > Union") ### 准备打印 下面是手工切割灯笼的说明。如果使用激光切割机或工艺切割机,请遵循你的硬件所要求的方法来准备你的文件。 1. 在“图层”面板中,单击“安全”图层旁边的“眼睛”图标来隐藏安全线。如果你看不到图层面板,通过顶部菜单选择“图层” -> “图层”来显示它。 2. 选择黑色基板。在“填充和笔划”面板中,设置填充为“X”(意味着*不填充*),设置“笔划”为纯黑色(对于喜欢十六进制的粉丝来说是 `#000000ff`)。 ![设置填充和笔划](/data/attachment/album/202004/26/155707lqbqugtcbxqzcugl.png "Setting fill and stroke") 3. 使用“文件” -> “打印”来打印你的图案。 4. 使用一把工艺刀和直尺,小心地绕着每一条黑线切割。在蓝色虚线上轻划,然后折叠。 ![裁剪灯笼](/data/attachment/album/202004/26/155713wqgzqv2ml0vmwgvw.jpg "Cutting out the lantern") 5. 要完成窗口的制作,将描图纸剪切为每个窗口的大小,然后粘贴它到灯笼的内侧。 ![添加描图纸](/data/attachment/album/202004/26/155724g378373uvcpi07br.jpg "Adding tracing paper") 6. 在折条处把灯笼的边粘在一起。 7. 打开电池供电的 LED 蜡烛,并将它放进你的灯笼中。 ![完成灯笼](/data/attachment/album/202004/26/155734kfjzkj73ukvfjgov.jpg "Completed lantern") 现在你的灯笼已经完成了,准备好点亮你的鬼屋了。 --- via: <https://opensource.com/article/19/10/how-make-halloween-lantern-inkscape> 作者:[Jess Weichler](https://opensource.com/users/cyanide-cupcake) 选题:[lujun9972](https://github.com/lujun9972) 译者:[robsean](https://github.com/robsean) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
The spooky season is almost here! This year, decorate your haunt with a unique Halloween lantern made with open source! Typically, a portion of a lantern's structure is opaque to block the light from within. What makes a lantern a lantern are the parts that are missing: windows cut from the structure so that light can escape. While it's impractical for lighting, a lantern with windows in spooky shapes and lurking silhouettes can be atmospheric and a lot of fun to create. This article demonstrates how to create your own lantern using [Inkscape](https://opensource.com/article/18/1/inkscape-absolute-beginners). If you don't have Inkscape, you can install it from your software repository on Linux or download it from the [Inkscape website](http://inkscape.org) on MacOS and Windows. ## Supplies - Template ([A4](https://www.dropbox.com/s/75qzjilg5ak2oj1/papercraft_lantern_A4_template.svg?dl=0) or [Letter](https://www.dropbox.com/s/8fswdge49jwx91n/papercraft_lantern_letter_template%20.svg?dl=0) size) - Cardstock (black is traditional) - Tracing paper (optional) - Craft knife, ruler, and cutting mat (a craft cutting machine/laser cutter can be used instead) - Craft glue - LED tea-light "candle" *Safety note:* Only use battery-operated candles for this project. ## Understanding the template To begin, download the correct template for your region (A4 or Letter) from the links above and open it in Inkscape. ![Lantern template screen](https://opensource.com/sites/default/files/uploads/lanterntemplate_screen.png) The gray-and-white checkerboard background is see-through (in technical terms, it's an *alpha channel*.) The black base forms the lantern. Right now, there are no windows for light to shine through; the lantern is a solid black base. You will use the **Union** and **Difference** options in Inkscape to design the windows digitally. The dotted blue lines represent fold scorelines. The solid orange lines represent guides. 
Windows for light should not be placed outside the orange boxes. To the left of the template are a few pre-made objects you can use in your design. ## To create a window or shape - Create an object that looks like the window style you want. Objects can be created using any of the shape tools in Inkscape's left toolbar. Alternately, you can download Creative Commons- or Public Domain-licensed clipart and import the PNG file into your project. - When you are happy with the shape of the object, turn it into a **Path** (rather than a **Shape**, which Inkscape sees as two different kinds of objects) by selecting **Object > Object to Path** in the top menu. ![Object to path menu](https://opensource.com/sites/default/files/uploads/lantern1.png) - Place the object on top of the base shape. - Select both the object and the black base by clicking one, pressing and holding the Shift key, then selecting the other. - Select **Object > Difference** from the top menu to remove the shape of the object from the base. This creates what will become a window in your lantern. ![Object > Difference menu](https://opensource.com/sites/default/files/uploads/lantern2.png) ## To add an object to a window After making a window, you can add objects to it to create a scene. **Tips:** - All objects, including text, must be connected to the base of the lantern. If not, they will fall out after cutting and leave a blank space. - Avoid small, intricate details. These are difficult to cut, even when using a machine like a laser cutter or a craft plotter. - Create or import an object. - Place the object inside the window so that it is touching at least two sides of the base. - With the object selected, choose **Object > Object to Path** from the top menu. ![Object to path menu](https://opensource.com/sites/default/files/uploads/lantern3.png) - Select the object and the black base by clicking on each one while holding the Shift key. 
- Select **Object > Union** to join the object and the base. ## Add text Text can either be cut out from the base to create a window (as I did with the stars) or added to a window (which blocks the light from within the lantern). If you're creating a window, only follow steps 1 and 2 below, then use **Difference** to remove the text from the base layer. - Select the Text tool from the left sidebar to create text. Thick, bold fonts work best. - Select your text, then choose **Path > Object to Path** from the top menu. This converts the text object to a path. Note that this step means you can no longer edit the text, so perform this step *only after* you're sure you have the word or words you want. - After you have converted the text, you can press **F2** on your keyboard to activate the **Node Editor** tool to clearly show the nodes of the text when it is selected with this tool. ![Text selected with Node editor](https://opensource.com/sites/default/files/uploads/lantern5.png) - Ungroup the text. - Adjust each letter so that it slightly overlaps its neighboring letter or the base. ![Overlapping the text](https://opensource.com/sites/default/files/uploads/lantern6.png) - To connect all of the letters to one another and to the base, re-select all the text and the base, then select **Path > Union**. ## Prepare for printing The following instructions are for hand-cutting your lantern. If you're using a laser cutter or craft plotter, follow the techniques required by your hardware to prepare your files. - In the **Layer** panel, click the **Eye** icon beside the **Safety** layer to hide the safety lines. If you don't see the Layer panel, reveal it by selecting **Layer > Layers** from the top menu. - Select the black base. In the **Fill and Stroke** panel, set the fill to **X** (meaning *no fill*) and the **Stroke** to solid black (that's #000000ff to fans of hexes). 
![Setting fill and stroke](https://opensource.com/sites/default/files/uploads/lantern8.png) - Print your pattern with **File > Print**. - Using a craft knife and ruler, carefully cut around each black line. Lightly score the dotted blue lines, then fold. - To finish off the windows, cut tracing paper to the size of each window and glue it to the inside of the lantern. - Glue the lantern together at the tabs. - Turn on a battery-powered LED candle and place it inside your lantern. ![Completed lantern](https://opensource.com/sites/default/files/uploads/lantern11.jpg) Now your lantern is complete and ready to light up your haunt. Happy Halloween! ## 6 Comments
12,153
Silverblue 是什么?
https://fedoramagazine.org/what-is-silverblue/
2020-04-26T21:51:18
[ "Silverblue" ]
https://linux.cn/article-12153-1.html
![](/data/attachment/album/202004/26/215121o03zsyfiri22qu32.jpg) Fedora Silverblue 在 Fedora 世界内外越来越受欢迎。因此,根据社区的反馈,以下是关于这个项目的一些有趣问题的答案。如果你有任何其他与 Silverblue 相关的问题,请在评论区留言,我们会在未来的文章中回答。 ### Silverblue 是什么? Silverblue 是新一代桌面操作系统的代号,之前被称为 Atomic Workstation。该操作系统是通过利用 [rpm-ostree 项目](https://rpm-ostree.readthedocs.io/en/latest/)创建的映像来交付的。这种系统的主要优点是速度、安全性、原子更新和不变性。 ### “Silverblue” 到底是什么意思? “Team Silverblue” 或简称 “Silverblue”,没有任何隐藏的含义。该项目以前被称为 Atomic Workstation,大约两个月后更名时选中了这个名字。在这个过程中,审查过 150 多个单词或单词组合。最终选择了 “Silverblue”,因为它有一个可用的域名以及社交网络账号。人们可以把它看成是 Fedora 的蓝色品牌的一个新的品牌形象,可以用在诸如“加油,Silverblue 团队!”或“想加入该团队,改进 Silverblue 吗?”这样的短语中。 ### 何谓 ostree? [OSTree(或 libostree)是一个项目](https://ostree.readthedocs.io/en/latest/),它结合了一个类似 Git 的提交和下载可引导文件系统树的模型,以及用于部署它们和管理引导加载程序配置的层。OSTree 由 rpm-ostree 使用,这是 Silverblue 使用的一个基于包/镜像的混合系统。它原子化地复制了一个基础操作系统,并允许用户在需要时在基础操作系统之上“层叠”传统的 RPM。 ### 为何使用 Silverblue? 因为它可以让你专注于你的工作,而不是你正在运行的操作系统。因为系统的更新是原子式的,所以它更稳健。你唯一需要做的事情就是重新启动到新的镜像中。此外,如果当前启动的镜像有什么问题,你可以很容易地重启/回滚到之前可以工作的镜像,如果有的话。如果没有,你可以使用 `ostree` 命令下载并启动过去生成的任何其他镜像。 另一个好处是可以在不同的分支(或者用旧的语境说就是不同的 Fedora 风味版本)之间轻松切换。你可以轻松地尝试 [Rawhide](https://fedoraproject.org/wiki/Releases/Rawhide) 或 [updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) 分支,然后返回到包含当前稳定版本的分支。另外,如果你想尝试一些新奇的东西,也可以考虑试试 Silverblue。 ### 不可变的操作系统有什么好处? 其中一个主要的好处是安全。基础操作系统是以只读的形式挂载的,因此不能被恶意软件修改。唯一可以改变系统的方法是通过 `rpm-ostree` 实用程序。 另一个好处是健壮性。普通用户几乎不可能在不小心或无意中删除了一些系统库后,导致操作系统回到不启动或无法正常工作的状态。试着想想你过去的这些经历,就可以明白 Silverblue 可以如何帮助你。 ### 如何在 Silverblue 中管理应用程序和包? 
对于图形化用户界面的应用程序,建议使用 [Flatpak](https://flatpak.org/) —— 如果应用程序是以 Flatpak 的形式提供的话。用户可以选择来自 Fedora 的 Flatpak,和从 Fedora 包及 Fedora 自己的基础架构中构建的 Flatpak,或者是目前有更广泛的交付品的 Flathub。用户可以通过已经支持 Fedora Silverblue 的 GNOME “软件”轻松安装它们。 用户首先发现的一件事就是操作系统中没有预装 `dnf`。主要原因是它不能在 Silverblue 上工作 —— 它的部分功能被 `rpm-ostree` 命令所取代。用户可以通过使用 `rpm-ostree install PACKAGE` 来层叠传统的软件包。但只有在没有其他方法的情况下,才应该使用这种方式。这是因为从存储库中提取新的系统镜像时,每次更改系统镜像时都必须重新构建系统镜像,以容纳层叠的包或从基础操作系统中删除及替换为其他版本的包。 Fedora Silverblue 自带的默认 GUI 应用程序集是基础操作系统的一部分。团队正在努力将它们移植到 Flatpak 上,这样它们就可以通过这种方式分发。其中一个好处是,基础操作系统将变得更小,更容易维护和测试,用户可以更容易修改他们的默认安装环境。如果你想看一下它是怎么做的,或者有什么帮助,可以看看官方的[文档](https://docs.fedoraproject.org/en-US/flatpak/tutorial/)。 ### 什么是 Toolbox? [Toolbox](https://github.com/debarshiray/toolbox) 是一个让普通用户可以轻松使用容器的项目。它通过使用 podman 的无 root 容器环境来实现。Toolbox 可以让你在常规的 Fedora 环境中轻松、快速地创建一个容器,你可以在这个容器上折腾或开发,而与你的操作系统分离。 ### Silverblue 有路线图吗? 形式上没有,因为我们正在关注在测试过程中发现的问题和社区的反馈。我们目前正在使用 Fedora 的 [Taiga](https://teams.fedoraproject.org/project/silverblue/) 来进行规划。 ### Silverblue 的发布周期是多少? 它和普通的 Fedora Workstation 是一样的。每 6 个月发布一次新版本,支持期为 13 个月。团队计划每两周(或更长时间)发布一次更新,而不是像现在这样每天发布一次。这样一来,更新可以在发送给其他用户之前,由 QA 和社区志愿者进行更彻底的测试。 ### 不可变操作系统的未来前景如何? 从我们的角度来看,桌面的未来会走向到不可变的操作系统。这对用户来说是最安全的,Android、ChromeOS、ChromeOS、最近的 macOS Catalina 全都在底层采用了这种方式。而对于 Linux 桌面来说,一些第三方软件期望写到操作系统的问题还是存在的。HP 打印机驱动程序就是一个很好的例子。 另一个问题是系统中的部分软件如何分发和安装。字体就是一个很好的例子。目前在 Fedora 中,它们是以 RPM 包的形式分发的。如果你想使用它们,你必须层叠它们,然后重新启动到新创建的包含它们的镜像中。 ### 标准版 Workstation 的前景如何? Silverblue 有可能会取代普通的 Workstation 版本。但 Silverblue 要提供与 Workstation 版本相同的功能和用户体验还有很长的路要走。在此期间,这两款桌面产品将同时推出。 ### Atomic Workstation 或 Fedora CoreOS 与这些有什么关系? 
Atomic Workstation 是在更名为 Fedora Silverblue 之前的项目名称。 Fedora CoreOS 是一个不同但相似的项目。它与 Silverblue 共享一些基本技术,如 `rpm-ostree`、`toolbox` 等。尽管如此,CoreOS 是一个更简约、专注于容器、自动更新的操作系统。 --- via: <https://fedoramagazine.org/what-is-silverblue/> 作者:[Tomáš Popela](https://fedoramagazine.org/author/tpopela/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Fedora Silverblue is becoming more and more popular inside and outside the Fedora world. So based on feedback from the community, here are answers to some interesting questions about the project. If you do have any other Silverblue related questions, please leave it in the comments section and we will try to answer them in a future article. ## What is Silverblue? Silverblue is a codename for the new generation of the desktop operating system, previously known as Atomic Workstation. The operating system is delivered in images that are created by utilizing the [rpm-ostree project](https://rpm-ostree.readthedocs.io/en/latest/). The main benefits of the system are speed, security, atomic updates and immutability. ## What does “Silverblue” actually mean? “Team Silverblue” or “Silverblue” in short doesn’t have any hidden meaning. It was chosen after roughly two months when the project, previously known as Atomic Workstation was rebranded. There were over 150 words or word combinations reviewed in the process. In the end *Silverblue* was chosen because it had an available domain as well as the social network accounts. One could think of it as a new take on Fedora’s blue branding, and could be used in phrases like “Go, Team Silverblue!” or “Want to join the team and improve Silverblue?”. ## What is ostree? [OSTree or libostree is a project](https://ostree.readthedocs.io/en/latest/) that combines a “git-like” model for committing and downloading bootable filesystem trees, together with a layer to deploy them and manage the bootloader configuration. OSTree is used by rpm-ostree, a hybrid package/image based system that Silverblue uses. It atomically replicates a base OS and allows the user to “layer” the traditional RPM on top of the base OS if needed. ## Why use Silverblue? Because it allows you to concentrate on your work and not on the operating system you’re running. It’s more robust as the updates of the system are atomic.
The only thing you need to do is to restart into the new image. Also, if there’s anything wrong with the currently booted image, you can easily reboot/rollback to the previous working one, if available. If it isn’t, you can download and boot any other image that was generated in the past, using the *ostree* command. Another advantage is the possibility of an easy switch between branches (or, in an old context, Fedora releases). You can easily try the [Rawhide](https://fedoraproject.org/wiki/Releases/Rawhide) or [updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) branch and then return back to the one that contains the current stable release. Also, you should consider Silverblue if you want to try something new and unusual. ## What are the benefits of an immutable OS? Having the root filesystem mounted read-only by default increases resilience against accidental damage as well as some types of malicious attack. The primary tool to upgrade or change the root filesystem is *rpm-ostree*. Another benefit is robustness. It’s nearly impossible for a regular user to get the OS to the state when it doesn’t boot or doesn’t work properly after accidentally or unintentionally removing some system library. Try to think about these kinds of experiences from your past, and imagine how Silverblue could help you there. ## How does one manage applications and packages in Silverblue? For graphical user interface applications, [Flatpak](https://flatpak.org/) is recommended, if the application is available as a flatpak. Users can choose between Flatpaks from Fedora, built from Fedora packages in Fedora-owned infrastructure, or from Flathub, which currently has a wider offering. Users can install them easily through GNOME Software, which already supports Fedora Silverblue. One of the first things users find out is there is no *dnf* preinstalled in the OS. The main reason is that it wouldn’t work on Silverblue — and part of its functionality was replaced by the *rpm-ostree* command.
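The dnf-versus-rpm-ostree split described here can be illustrated with a small shell sketch. This is not part of Silverblue itself — it is a hypothetical helper that only prints the command it would run, and `htop` is an arbitrary example package name:

```shell
# Silverblue ships no dnf; package operations go through rpm-ostree
# instead. Pick the front-end accordingly ("htop" is just an example).
if command -v rpm-ostree >/dev/null 2>&1; then
  pkg_cmd="rpm-ostree install"   # layers the RPM; takes effect after a reboot
else
  pkg_cmd="sudo dnf install"     # traditional Fedora Workstation path
fi
echo "would run: $pkg_cmd htop"
```

On a regular Workstation this prints the dnf form; on Silverblue it prints the rpm-ostree form, and the layered package only appears after rebooting into the newly composed image.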
Users can overlay the traditional packages by using the *rpm-ostree install PACKAGE*. But it should only be used when there is no other way. This is because when the new system images are pulled from the repository, the system image must be rebuilt every time it is altered to accommodate the layered packages, or packages that were removed from the base OS or replaced with a different version. Fedora Silverblue comes with the default set of GUI applications that are part of the base OS. The team is working on porting them to Flatpaks so they can be distributed that way. As a benefit, the base OS will become smaller and easier to maintain and test, and users can modify their default installation more easily. If you want to look at how it’s done or help, take a look at the official [documentation](https://docs.fedoraproject.org/en-US/flatpak/tutorial/). ## What is Toolbox? [Toolbox](https://github.com/debarshiray/toolbox) is a project to make containers easily consumable for regular users. It does that by using *podman*’s rootless containers. *Toolbox* lets you easily and quickly create a container with a regular Fedora installation that you can play with or develop on, separated from your OS. ## Is there any Silverblue roadmap? Formally there isn’t any, as we’re focusing on problems we discover during our testing and from community feedback. We’re currently using Fedora’s [Taiga](https://teams.fedoraproject.org/project/silverblue/) to do our planning. ## What’s the release life cycle of the Silverblue? It’s the same as regular Fedora Workstation. A new release comes every 6 months and is supported for 13 months. The team plans to release updates for the OS bi-weekly (or longer) instead of daily as they currently do. That way the updates can be more thoroughly tested by QA and community volunteers before they are sent to the rest of the users. ## What is the future of the immutable OS? From our point of view the future of the desktop involves the immutable OS.
It’s safest for the user, and Android, ChromeOS, and the last macOS Catalina all use this method under the hood. For the Linux desktop there are still problems with some third party software that expects to write to the OS. HP printer drivers are a good example. Another issue is how parts of the system are distributed and installed. Fonts are a good example. Currently in Fedora they’re distributed in RPM packages. If you want to use them, you have to overlay them and then restart to the newly created image that contains them. ## What is the future of standard Workstation? There is a possibility that the Silverblue will replace the regular Workstation. But there’s still a long way to go for Silverblue to provide the same functionality and user experience as the Workstation. In the meantime both desktop offerings will be delivered at the same time. ## How does Atomic Workstation or Fedora CoreOS relate to any of this? Atomic Workstation was the name of the project before it was renamed to Fedora Silverblue. Fedora CoreOS is a different, but similar project. It shares some fundamental technologies with Silverblue, such as *rpm-ostree*, *toolbox* and others. Nevertheless, CoreOS is a more minimal, container-focused and automatically updating OS. ## Daniel I’d argue for fonts, a better option would to just install them to /usr/local/share/fonts or ~/.local/share/fonts, both work on Silverblue and a font manager would be able to do this, even when Flatpaked. You’re not messing with system packages, and if anything goes wrong you can just wipe /usr/local and not have a broken system. ## jakfrost “a better option would to just install them to /usr/local/share/fonts” On Silverblue, this would be /var/usr/local/share/fonts, since the /usr directory is a part of the immutable atomic OS and /var plus /etc are the only two writable areas of the file system. ## Daniel There’s a symlink at /usr/local still though. 
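The per-user font location these comments discuss can be exercised with a short shell sketch. It is illustrative only: `MyFont.ttf` is a placeholder created on the fly so the snippet runs anywhere, and in real use you would copy an actual `.ttf`/`.otf` file instead:

```shell
# Install a font for the current user only — this works the same on
# Silverblue and on regular Fedora, since the home directory is writable.
fontdir="${HOME}/.local/share/fonts"
mkdir -p "$fontdir"

touch MyFont.ttf          # placeholder; substitute a real .ttf/.otf file
cp MyFont.ttf "$fontdir"/

# Refresh the fontconfig cache, if fc-cache is available on this system
command -v fc-cache >/dev/null 2>&1 && fc-cache -f "$fontdir" || true
echo "installed to $fontdir"
```

Because nothing here touches `/usr`, no package layering or reboot is needed, which is the point Daniel makes above.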
## Leslie Satenstein I modify the keyboard layout to add additional key characters. With SB, will I still be able to do that. I add the yen and euro symbols to my French Canada Layout. ## Daniel That depends on how you edit your keyboard layouts. e.g. If you edit files in /usr/share/X11/xkb you can put your modified ones in /usr/local/share/X11/xkb it should override. This would also have the benefit of not breaking after updates. ## Matthew Bunt Are there any disadvantages to using package layerering on Silverblue vs just containers and flatpaks? ## Phil Parsons this sounds like great stuff for the Raspberry Pi 4…. also good environment for testing the technology ## Stef Good article 😃. Would be nice to add a link to the project main page and the Silverblue docs. ## Gregory Bartholomew Silverblue is probably great for the typical user, but as a developer/power-user/sysadmin (and just someone who likes to play around a bit with the lower-level workings of the OS), I worry that large chunks of the filesystem being “immutable” is just going to cause me frustration when I try to do the non-standard things that I like to do. I hope Silverblue never replaces Workstation as is suggested in the article. Also, I’d like to point out that some of the main features of Silverblue can be replicated quite easily by a more powerful filesystem. In particular I am currently using ZFS for my “root” filesystem and it allows me to “rollback” any changes that I make to my filesystem by interrupting the boot processes just before the root filesystem is mounted and running (for example) “zfs rollback [email protected]_64″. ZFS’s rollback feature gives me the same recovery ability that Silverblue does, but without locking me out of vast portions of my filesystem. ## Matthew Miller What are the non-standard things you’re interested in doing? Can you move to a model where you’re doing them in containers? Of course, Silverblue will never be perfect for everyone. 
We can’t make it perfect for *anyone* in specific if we try to target *everyone* in general. Even if it becomes our main desktop edition, we’ll have non-ostree spins and releases for as long as people are interested in making them. ## Gregory Bartholomew It isn’t that I have a specific few non-standard things that I am doing. It is really more that I *like* to do/try non-standard things. Maybe I’m a bit of an oddball in that regard, but it is one of the things that has always appealed to me about Fedora — its “bleeding edge” nature and my ability to play with the latest and greatest new packages and package features. I can give you a brief list of some of the non-standard things that I am currently doing just as an example, but supporting these specific things isn’t really the point for me. The point is that I can *do* this sort of thing and the day that Fedora no longer allows me to do it might be the day that I find another distribution. Just a few examples of non-standard things from my current setup: I just upgraded from rotational disks to SSDs and switched to using ZFS for my root filesystem. I wanted to try out ZFS’s “edonr” checksum algorithm. From the man page: “Edon-R is a very high-performance hash algorithm that was part of the NIST SHA-3 competition. It provides extremely high hash performance (over 350% faster than SHA-256), but was not selected because of its unsuitability as a general purpose secure hash algorithm. This implementation utilizes the new salted checksumming functionality in ZFS, which means that the checksum is preseeded with a secret 256-bit random key (stored on the pool) before being fed the data block to be checksummed. Thus the produced checksums are unique to a given pool.” But Grub’s custom zfs driver doesn’t support the newest ZFS features. Also, I wanted to be able to move my hard drives to a newer machine without redoing the partitioning scheme someday, but my current system is BIOS and the new system might be UEFI.
So I did another non-standard thing and setup a non-standard $BOOT partition that is formatted with vfat and has the syslinux bootloader installed with a custom patch (here) that allows it to read the systemd-boot style drop-in files that are automatically placed under the $BOOT/$MACHINE-ID directory if you pre-create it and uninstall all the grub packages (includding grubby). Rather than mirroring the $BOOT partition with mdraid, I decided to create a custom /etc/rc.d/rc.local script with the followin clause: if mountpoint -q /boot.1 && mountpoint -q /boot.2; then rsync -r –delete /boot.1/* /boot.2 fi Now my $BOOT partition is “backed up” to the secondary drive, but only after successfully booting from the primary. /boot is a symlink to /boot.1 and /boot.1 and /boot.2 are listed individually in /etc/fstab like so: root / zfs defaults 0 0 /dev/disk/by-partuuid/31af9b2d-c158-4d89-8ed6-fc8379432cf3 /boot.1 vfat defaults,nofail 0 0 /dev/disk/by-partuuid/c363eec2-3acc-46dc-a58c-7e4f8c65fb18 /boot.2 vfat defaults,nofail 0 0 /dev/md/swap swap swap defaults,nofail 0 0 When I upgrade to UEFI I’ll probably make the rsync contitional on the current boot-up being from the primary drive (as can be determined from the efivariables). I’ve recently started using i3 desktop manager in combination with LXDM. I made some visual customizations to LXDM by tweaking /etc/lxdm/lxdm.conf and adding a background image in /var/lib/lxdm. I created a custom /etc/kernel/postinst.d/99-snapshot script so that a ZFS restore point will get automatically created every time a new kernel is installed: cat /etc/kernel/postinst.d/99-snapshot #!/usr/bin/bash vim:set ts=3: if [[ -n “$1” ]]; then zfs snapshot root@$1 &> /dev/null fi I’ll probably create a corresponding one under /etc/kernel/prerm.d to clean up old snapshots, but I haven’t gotten that far yet. 
There are other things as well of varying degrees of complexity, and I suspect a lot of them could be made to work with Silverblue if one want to put the time and effort into fighting with it, but if I can find another distribution that allows me to play with such things more easily, I am liable to switch to it. To date, however, Fedora has always been great about being the sort of OS that I can experiment with (other than perhaps selinux, but that is easy enough to switch off when necessary). It is exactly that sort of tinkering ability that I like about Fedora (and open source in general) and Silverblue seems like a bit of a move in the wrong direction to me. Though I can see its market appeal to the more general customer. ## Dave Airlie I think one thing that should be done is messaging where Silverblue isn’t useful or is going to work against power users. For example anyone doing OS development, like fedora devs, kernel devs, library devs etc are totally going to fail using silverblue. I’d really appreciate if there was some better messaging around the severe limitations using silverblue would have on a typical fedora packaging workflow or any of the lowlevel OS devs. ## Skyler Hawthorne I agree with this completely. Flatpak and Docker have their own limitations and shortcomings. The article itself says that the value proposition of Silverblue is: Using Docker for some things, Flatpak for others, and for the rest having to reboot any time you need to modify the underlying OS, seems like a lot morethinking about the OS than just. ## Colin Walters Why do you say that? I created ostree and rpm-ostree and I use Silverblue as a development platform for them, as well as OpenShift 4. I’ve submitted patches to quite a lot of userspace, and continue to do so. 
I talked about this and other things here https://fedorapeople.org/~walters/2018.01-devconf-desktopcontainers/#/ The big challenge is to switch to containerizing your development environment, but once you do, you get a lot of benefits as well. For example, I use from my toolbox container, not my host. ## Andrew I have ran into issues trying to access devices. I like the the idea of rootless and sandboxing but with IoT I have ran into some conceptual headaches that are poorly documented if at all. ## Leslie Satenstein Hi MM I have 6 disks on my home system with several (6) distributions therein. from the fstab UUID=7a3c702b-f021-49d1-944a-885480cca5c0 /scratch ext4 defaults,relatime 1 2 /scratch is to the mount point. On other disks .. /scratch1 /scratch2 /junk and /backup are soft links (ln -s ) to other partitions of the same name somewhere on one or more if the 6 disks. When I created the above, I also chmod’d them to chmod 1000 for /scratch1 /scratch2 /junk and /backup the fstab has, for example, … defaults,noauto,user for the above links 1) all the /scratch1… /junk are available from any distro. and are intentionally not automounted. 2) when I run a backup rsync synching /scratch to /scratchx my script first does a mount, it runs, rsync and demounts file partition. When not mounted, the mount points show 1000 for permisions. When one of these is mounted, the partition will show up as 755 permissions for the duration of the rsync. The chmod 1000 protects my data on that drive. With silverblue, if I continue, I will have to find a way to accept /scratch ## Ondrej Kolin When you rebase, it may fail on not being possible to just rebase, because the package could cause package inconsistency, etc. So you should use toolboxes and flatpaku. 
## Misc I am running Silverblue on a laptop since the start (like, more than 1 year), and I still go back to my RHEL 7 laptop anytime I do anything more than “browse the web”, because “running things in containers” is fine for some workload, but I bump into limitation every time. For example, if you want to do anything fancy with network (like, using nmap, tcpdump), it get more complicated, as I need 2 pet containers depending on whether I need root or not (since root in the container is likely not really root outside). I can’t just switch from one to the others with sudo, unlike all docs written in the last 30 years. My latest example (like from this weekend) is flashing a arm board with dfu-util. Despites seeing the board with lsub, it didn’t work when using the toolbox container. I know why (because root in the matrix do not mean root outside the matrix), but that’s the kind of things that will surprise people. That’s not new, during Flock, I told the story of me trying to do some CTF on Silverblue (http://www.hackgnar.com/2018/06/learning-bluetooth-hackery-with-ble-ctf.html) and why this failed because of the sandboxing (since doing anything with bt requires to relax permission from containers, which took me a while to figure, and I am fairly technical). There is also no good workflow for running any kind of services in containers for now. If I want to do any work against a postgres database (such as “doing web dev’), I either have to do the container myself (so keep it updated, rebuild it locally, run it outside of the system, etc), or layer it. Running it in the toolbox container do not work (and I just tried). And I consider layering to be bad, as hinted in the article. ## Adriano Corte Real Unfortunately in my experience A LOT of flatpaks are outdated by several months. So then what? ## RObert But I don’t want immutable OS base files… I like control. I mean, who doesn’t. So no thanks. 
## David I’m keen to try out Silverblue, but I’m not a fan of GNOME – are there plans to release spins for different desktop environments? ## Tomáš Popela It looks like the community is working on KDE and XFCE variants – https://discussion.fedoraproject.org/t/kinoite-a-kde-and-now-xfce-version-of-fedora-silverblue/147 ## Bruce Bigby What are the performance impacts of flatpaks? Aren’t they self-contained largely? Also, does being mostly self-contained mean that their integration with the rest of the system is less than perfect. To what extent can flatpaks share resources? I imagine that some flatpaks might duplicate resources, and thus, require additional memory resources, correct? ## MS I’ve been using Fedora for a few months now, and have installed almost every release except the lqxt spin. Didn’t get to it yet. Silverblue is a great idea and I’ve enjoyed testing it out. Although there were a few minor issues for me ( eg: I’m still at the I have no idea what I’m doing stage ), I’m sure it will only get better. There is a slightly different kind of workflow / learning curve from a normal Fedora release, and I found the documentation very helpful to get started from scratch. The first obstacle I had was figuring out what to do without dnf installed. Once I learned that Silverblue provides rpm and rpm-ostree to manage packages at the system level, and dnf is available inside the toolbox / container it was easier to get started. Outside the toolbox you can use rpm to query packages etc, and rpm-ostree for installs : rpm -qa | sort -fu > rpm-list-installed.txt rpm -qa | grep httpd — ** Please add a silverblue tag for this article – https://fedoramagazine.org/tag/silverblue/ Some other SIlverblue posts also have no silverblue tag which may make them harder to find in future : https://fedoramagazine.org/backup-on-fedora-silverblue-with-borg/ Thanks and keep up the great work. ## svsv sarma Why can’t I use Fedora Media Writer for Silverblue? 
Does it mean the Silverblue is quite different and a class apart from the available images/flavors? ## Tomáš Popela You can, but currently it’s a little bit hidden – open the Fedora Workstation product in Media Writer, then click on the “Other variants..” link that will open an popup, where you can choose the Silverblue. ## bioinfornatics I would like to know if Fedora silver can be used as replacement to : – Environment Modules —-> https://en.wikipedia.org/wiki/Environment_Modules_(software) – SCL —-> https://www.softwarecollections.org/en/ ## Alejandro I understand how Silverblue can be an advantage for some people and I wish the devs the best of luck with it. I like to have control over my system and it seems to me that, because of the way SB works, it will not stay out of my way. As a software developer, I like the os tree. The inmutable system, not so much. If the traditional workspace is replaced by SB, I’ll probably leave Fedora. I really hope it doesn’t happen, or at least that they’re maintained in parallel. ## Vasile Guta-Ciucur Well, they already said that a traditional Fedora distro will be a community thing. The key phrase here is “Total control” and there is where all popular OSes are forced to go – no exceptions. You will leave not only Fedora for serious work (probably you’ll keep it for browsing, media and social media consumption), but also Linux. Prepare ahead… ## Yazan Al Monshed Nice Blog, I Need more details about slivervblue ! ## Colin Walters Please replace this paragraph; it’s not accurate. See https://lwn.net/Articles/793674/ With e.g.: ## Paul W. Frields @Colin: Done, thanks. ## John Adesoye I just switched to fedora Silverblue from the fedora traditional desktop. For I can’t wait, so I installed silverblue across my laptops. I love it!. Thank you guys for working hard behind the scenes. I put my fonts into /var/usrlocal/share/fonts/myfonts. Remember to create directory for fonts and myfonts as example. 
Also, note that usrlocal is not the same as usr/local in fedora Silverblue. I installed all apps from terminal without using flathub. I got my printer Brother mfc-j4610dw working. Fedora Silverblue is more lenient to memory usage and lets you do everything you do in the current standard fedora (in a little different way) without having to shoot yourself in the foot. The only annoyance I encountered was that the system need reboot, to take snapshot, after apps installed. Also, Gnome software management wasn’t displaying all apps and slow when used to install apps. Apart from that no issue. Fedora Silverblue will be a great OS! ## Enrico thx for the article, every info about Silverblue is wanted. In my opinion is SB the best thing I have met, since I’m on Linux 15 years, It’s the OS of the future, that is worth working on. I’m running SB now on my ThinkPad laptop, it’s really amazing stable, clean, simple to use, toolbox container… I look forward to further development ## Alex The old model allows me to control and personalise my system as I see fit. Silverblue adds complexity that I don’t need nor want. If it replaces the Workstation, I’m moving to Qubes OS. I understand the advantages it might have for some users, but to me, seems like a solution in search of a problem. Nonetheless, I appreciate the tech behind it. Seems promising. ## Bev I tried SilverBlue out, but couldn’t get the firewall to install. Why isn’t that included in the initial installation files? Just wondering, why it’s not included in the initial installer package? After installing, then I couldn’t get to update my system, so ditched it and returned back to regular Fedora 30 installation. Even though it is supposed to be immutable I felt very exposed by not being able to install Gnome’s GUI app Firewall, and not being able to update my system. Shouldn’t every OS have a firewall ready at hand? 
I even tried enabling firewalld and it threw up a message notifying me it couldn’t install (sorry forgot the actual message), so I gave up on using SilverBlue : ( ## jakfrost “I tried SilverBlue out, but couldn’t get the firewall to install.” The firewall manager (firewalld) is installed on my Silverblue system, and I thought it was part of the Silverblue core image. Gnome Firewall would eventually get flatpakked I guess, but I’m not informed on that topic. In order to use firewalld, you would need to be root, and on Silverblue that normally means sudo, since your user is usually set up with administration rights. ## David Interesting and frankly I’m not sure I will ever go SilverBlue. The main reason is easy access to software, the Fedora Workstation distro has a huge repository and outside of that many Linux applications easily build to run under Workstation. I’m not too sure about SilverBlue. Frankly like with Flatpaks, SilverBlues team has failed miserably at getting across any real benefit of the technology. It isn’t just benefits either it perplexes me that an article like this doesn’t attempt to list out the current state of user software in SilverBlue. If not a list in the article at least a link to a list of user apps that, are supported currently under SilverBlu or intend to be. After all the whole point of a distro is to avoid going the DIY route for apps. ## John Firewall is installed out of the box both in regular and Silverblue fedora workstation. To install GUI firewall, simply open terminal and insert this command line: (sudo rpm-ostree install firewall-config) for siilverblue and reboot. For regular fedora work station change rpm-ostree to dnf and you are done – happy. I’m a fedora/centos user since 2005. I have tried many other distros but not happy with any of them. Fedora is clean, stable and secure OS that meets my IT needs. Guest what, is free! 
## Philip Jones This is all new to me (Ubuntu / Debian based experience only) but I did try out EndlessOS and saw the disk was recognised as ostree so it’s presumably based on Silverblue or its predecessor. It worked well on my ancient Core2Duo laptop despite Gnome. My main gripe was that it was very resistant to dual booting with my regular distro. Is that unavoidable with Silverblue or just a ‘feature’ of EndlessOS (who said dual booting was only possible with Windows)? For what it’s worth my impression of the base concept and implementation was in all other respects very favourable. Very interesting development. ## Göran Uddeborg Trying to understand how this thing really works, here are a few questions: I’ve tweaked my /etc/ssh/sshd_config by simply editing it with emacs. Would that be any different with SB? I have made an SELinux module to allow logrotate to rotate a few more files than default, and created the corresponding file in /etc/logrotate.d. Would that be any different with SB? I’m using gpsbabel (via a wrapper script) to talk to a GPS device through /dev/ttyUSB* files. Would that be any different with SB? I’m running a slightly tweaked version of sendmail created by adding a few patches to the default SRPM, rebuilding and installing. This would be different with SB I understand, but exactly how would I do this? ## isr Does Silverblue back up and restore configurations? In my experience, configurations are more important target to restore than packages in some cases.
12,155
Bodhi Linux 5.1 一览: 略有不同的轻量化 Linux
https://itsfoss.com/bodhi-linux-review/
2020-04-27T09:35:00
[ "Bodhi" ]
https://linux.cn/article-12155-1.html
Bodhi Linux 是一个基于 Ubuntu 的[轻量级 Linux 发行版](https://itsfoss.com/lightweight-linux-beginners/)。与其他大多数发行版不同,Bodhi 使用自己的 Moksha 桌面,并专注于为你提供一个可以在旧计算机上运行的最简设置。 ### 什么是 Bodhi Linux? ![](/data/attachment/album/202004/27/093318yawppv07zqpva4j6.png) [Bodhi Linux](https://www.bodhilinux.com/) 最早于 2011 年推出。它以“[简约、高效和用户自定义](https://www.bodhilinux.com/w/wiki/)”为设计理念。开发人员旨在提供一个“[实用但不臃肿的系统](https://www.bodhilinux.com/w/what-is-bodhi-linux/)”。因此,它使用轻量级的 Moksha 桌面,只预装了基本的应用程序。这一做法是为了给用户一个稳定的平台来构建他们想要的系统。它基于最新版的 Ubuntu 长期支持版本。 ### Moksha 桌面 ![Bodhi Desktop](/data/attachment/album/202004/27/093626xta36orp0hoa90tg.jpg) 起初 Bodhi 是装载着 [Enlightenment 桌面环境](https://www.enlightenment.org/start)的。Bodhi Linux 一直被认为是“Enlightenment 系的” Linux 发行版。事实上,“Bodhi”(菩提)这个词是基于梵文的“<ruby> 开悟 <rt> enlightenment </rt></ruby>”。 然而,当 Enlightenment 18 版本发布以后,这一切都改变了。该版本是如此的糟糕,以至于它并没有集成到 Bodhi 中。Enlightenment 19 发布后修复了一些问题,但仍然存在一些不足。 在尝试与 Enlightenment 开发团队合作却毫无进展之后,Bodhi 开发者在 2015 年[复刻](https://www.bodhilinux.com/2015/04/28/introducing-the-moksha-desktop/)了 Enlightenment 17。新的桌面环境被命名为 [Moksha](https://www.bodhilinux.com/moksha-desktop/),它是基于梵文单词“解脱、解放或释放”。你可以在 [GitHub](https://github.com/JeffHoogland/moksha) 上找到它的代码。 ### 5.1.0 有什么新特性? 
[Bodhi 5.1.0](https://www.bodhilinux.com/2020/03/25/bodhi-linux-5-1-0-released/) 是这两年内发布的第一个版本,也是基于 Ubuntu 18.04 的第二个版本。除了更新包,它还有新的默认图标和主题。该版本对默认应用程序做了几处更改。预装版 Leafpad 取代了 epad 并且 [GNOME Web](https://wiki.gnome.org/Apps/Web/)(也被称为 Epiphany)代替了 [Midori](https://en.wikipedia.org/wiki/Midori_(web_browser))。删除了 eepDater 系统更新器。 目前有[四个不同的版本](https://www.bodhilinux.com/w/selecting-the-correct-iso-image/)的 Bodhi 5.1.0 可以[下载](https://www.bodhilinux.com/download/): <ruby> 标准版 <rt> Standard </rt></ruby>、<ruby> 硬件支持版 <rt> Hwe </rt></ruby>、<ruby> 兼容版 <rt> Legacy </rt></ruby> 和<ruby> 软件包版 <rt> AppPack </rt></ruby>。 * 标准版适用于过去十年内制造的电脑。它不推送内核更新。 * 硬件支持版是 Bodhi 家族的新成员,其设计加入了对较新硬件的支持,并会接收内核更新。5.1 版本使用的是 5.3.0-42 内核。 * 兼容版是仅有的 32 位版本。它使用“较旧的 4.9.0-6-686 Linux 内核,该内核针对旧的(15 年以上)硬件进行了优化。这个内核也不包括许多老系统不支持的 PAE 扩展。” * 软件包版是为那些想要一个开箱即用的全载系统的人准备的,并预装了许多应用程序。 ### Bodhi Linux 的系统要求 最低系统要求: * 500 MHz 处理器 * 256 MB 内存 * 5 GB 的硬盘存储空间 推荐系统要求: * 1.0 GHz 处理器 * 512 MB 内存 * 10 GB 的硬盘存储空间 ### 体验 Bodhi Linux ![Old Bodhi Linux](/data/attachment/album/202004/27/093537emrd012r2rdnl0d7.png) 由于它是基于 Ubuntu 的,所以安装 Bodhi 非常简单。当我登录到 Bodhi 后,新的主题和图标集让我大吃一惊。上次我安装 Bodhi(包括几个月前的 5.0)时,我认为它需要换一个新的外观。之前的主题并没有什么问题,但看起来像是二十一世纪初的东西。新的主题使它看起来更具现代感。 ![Bodhi Linux 5.1](/data/attachment/album/202004/27/093543sitismscxs1is5zs.jpg) 我也很高兴看到 Midori 浏览器被 GNOME Web 所取代。我不是 [Midori 浏览器](https://itsfoss.com/midori-browser/)的粉丝。对我来说,它总是显得功能太少了。(不过,随着 [Midori Next](https://www.midori-browser.org/2020/01/15/midori-next-come-on-yarovi-we-can/) 的推出,这种情况可能会改变。)GNOME Web 更像是我需要的网页浏览器。最重要的是它带有 Firefox Sync,这样我就可以同步我所有的书签和密码了。 与许多 Linux 发行版不同,Bodhi 并没有一个独立的软件中心。相反,如果你点击 AppCenter 图标,它会打开浏览器,并导航到 Bodhi 网站的 [AppCenter 页面](https://www.bodhilinux.com/a/)。这里的应用程序是按类别排序的,它们中的大多数是[轻量级应用程序](https://itsfoss.com/lightweight-alternative-applications-ubuntu/)。 ![Bodhi Linux Appcenter](/data/attachment/album/202004/27/093544w5kj3pe5yefgk5pz.png) 如果你点击其中一个页面并点击“安装”,(在你输入密码之后)Bodhi 就会安装它。这是通过一个名为 [apturl](https://wiki.ubuntu.com/AptUrl)
的小程序实现的,它“是一个非常简单的从网页浏览器安装软件包的方法”。它非常灵巧,我希望更多基于 Ubuntu 的发行版使用它。 总的来说,我喜欢 Moksha 桌面。它坚持我们几十年来看到的桌面风格(这是我最喜欢的)。它不会影响你,却很容易改变和定制。我唯一遗憾的是,当我按下超级键时,应用程序菜单不打开。但我猜你不可能拥有生活中的一切。 ### 结语 我对最近发布的 Bodhi Linux 感到十分惊喜。过去,我经常折腾它。并且我一直很喜欢它,但最近的这个版本是迄今为止最好的。在某种程度上,他们打破了 Bodhi 只适合老系统的想法,加入了对较新内核的支持。 如果你想换换环境,同时又想在 Ubuntu 的世界里寻找新的风景,那就试试 [Bodhi Linux](https://www.bodhilinux.com/) 吧。 你用过 Bodhi Linux 吗?你最喜欢的基于 Ubuntu 的发行版是什么?请在下面的评论中告诉我们。 --- via: <https://itsfoss.com/bodhi-linux-review/> 作者:[John Paul](https://itsfoss.com/author/john/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[qfzy1233](https://github.com/qfzy1233) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Bodhi Linux is a [lightweight Linux distribution](https://itsfoss.com/lightweight-linux-beginners/) based on Ubuntu. Unlike most other distributions, Bodhi uses its own Moksha desktop and focuses on providing you a minimal setup to run on older computers. ## What is Bodhi Linux? ![Bodhi Linux Start Page](https://itsfoss.com/content/images/wordpress/2020/03/bodhi-start-page-800x500.png) [Bodhi Linux](https://www.bodhilinux.com/) was first introduced in 2011. It is designed with “[minimalism, resource efficiency, and user choice](https://www.bodhilinux.com/w/wiki/)” in mind. The devs strove to provide a “[system that is functional but not bloated](https://www.bodhilinux.com/w/what-is-bodhi-linux/)“. As such, it uses the lightweight Moksha Desktop and has only the basic applications preinstalled. The idea is to give the user a stable platform to build the system that they want. It is based on the latest Ubuntu LTS. ## Moksha Desktop ![Bodhi Desktop](https://itsfoss.com/content/images/wordpress/2020/03/bodhi-desktop-800x500.jpg) Originally Bodhi shipped with the [Enlightenment desktop environment](https://www.enlightenment.org/start). Bodhi Linux has long been known as the “Enlightened” Linux distro. In fact, the word ‘bodhi’ is based on the Sanskrit word for “enlightenment”. However, that changed when Enlightenment 18 was released. The release was in such bad shape that it was not included in Bodhi. Enlightenment 19 was released and fixed some of the problems, but still had issues. After trying to work with the Enlightenment dev team and getting nowhere, the Bodhi devs [forked](https://www.bodhilinux.com/2015/04/28/introducing-the-moksha-desktop/) Enlightenment 17 in 2015. The new desktop environment would be named [Moksha](https://www.bodhilinux.com/moksha-desktop/), which is based on the Sanskrit word for “emancipation, liberation, or release”. You can find the code for it on [GitHub](https://github.com/JeffHoogland/moksha). ## What is new in 5.1.0? 
[Bodhi 5.1.0](https://www.bodhilinux.com/2020/03/25/bodhi-linux-5-1-0-released/) is the first release in two years and the second release to be based on Ubuntu 18.04. Besides updating packages, it also has new default icons and theme. This release makes several changes to the default applications. Leafpad comes preinstalled instead of epad and [GNOME Web](https://wiki.gnome.org/Apps/Web/) (also known as Epiphany) replaces [Midori](https://en.wikipedia.org/wiki/Midori_(web_browser)). The eepDater system updater was removed. There are currently [four different versions](https://www.bodhilinux.com/w/selecting-the-correct-iso-image/) of Bodhi 5.1.0 available to [download](https://www.bodhilinux.com/download/): Standard, Hwe, Legacy, and AppPack. - Standard will work for systems made in the last decade. It does not push kernel updates. - Hwe (Hardware Enablement) edition is new to the Bodhi family and is designed to include support for newer hardware and will received kernel updates. The 5.1 release features the 5.3.0-42 kernel. - Legacy is the only edition that is 32-bit. It uses the “older 4.9.0-6-686 Linux kernel that is optimized for old (15+ years old) hardware. This kernel also does not include the PAE extension which is not supported on many older systems.” - The AppPack edition is for those who want a fully-loaded system out of the box and comes with many applications preinstalled. ## System Requirements for Bodhi Linux Minimum system requirement - 500 MHz processor - 256 MB of RAM - 5 GB of drive space Recommended system requirement - 1.0 GHz processor - 512 MB of RAM - 10 GB of drive space ## Experiencing Bodhi Linux ![Bodhi](https://itsfoss.com/content/images/wordpress/2020/03/bodhi-800x400.png) Since it is based on Ubuntu, installing Bodhi was very simple. After I signed into Bodhi, I was surprised by the new theme and icon set. The last time I installed Bodhi (including 5.0 a couple of months ago) I thought that it needed a new look. 
There was nothing really wrong with the previous theme, but it looked like something from the early 2000. The new theme gives it a more modern look. ![Bodhi Linux 5.1 Screenshot](https://itsfoss.com/content/images/wordpress/2020/04/bodhi-Linux-5-1-screenshot.jpg) I was also glad to see that Midori had been replaced by GNOME Web. I’m not a fan of [Midori browser](https://itsfoss.com/midori-browser/). It always seemed too minimal for me. (However, that might change in the future with [Midori Next](https://www.midori-browser.org/2020/01/15/midori-next-come-on-yarovi-we-can/).) Web felt more like the web browser I need. Most importantly it comes with Firefox Sync, so I can keep all of my bookmarks and passwords synced. Unlike many Linux distros, Bodhi doesn’t really come with a stand-alone software center. Instead, if you click the AppCenter icon it opens the browser and navigates to the [AppCenter p](https://www.bodhilinux.com/a/)[a](https://www.bodhilinux.com/a/)[ge](https://www.bodhilinux.com/a/) of the Bodhi website. Here apps are sorted by category. Most of them are [lightweight applications](https://itsfoss.com/lightweight-alternative-applications-ubuntu/). ![Bodhi Linux Appcenter](https://itsfoss.com/content/images/wordpress/2020/03/Bodhi-Linux-AppCenter-800x500.png) If you click on one of the pages and click “Install”, Bodhi will install it (after to type in your passwords). This is achieved using a neat little program named [apturl](https://wiki.ubuntu.com/AptUrl) that “is a very simple way to install a software package from a web browser”. It’s pretty slick and I wish more Ubuntu-based distros would use it. Overall, I like the Moksha desktop. It adheres to the desktop metaphor we have seen for decades (and which I am most comfortable with). It stays out of your way but is very easy to change and modify. The only thing I miss is that the application menu doesn’t open when I hit the super key. But I guess you can’t have everything in life. 
## Final Thoughts I was pleasantly surprised by this recent release of Bodhi Linux. In the past, I’ve played with it from time to time. I always liked it, but this last release has been the best so far. In a way, they have broken free of the idea that Bodhi is only for older system by adding support for newer kernels. If you are looking for a change of scenery while staying close to the world of Ubuntu give [Bodhi Linux](https://www.bodhilinux.com/) a try. Have you ever used Bodhi Linux? What is your favorite Ubuntu-based distro? Please let us know in the comments below. If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit](http://reddit.com/r/linuxusersgroup).
12,156
使用 GTWS 管理复杂的 Git 工作空间
https://opensource.com/article/20/2/git-great-teeming-workspaces
2020-04-27T18:21:59
[ "Git" ]
https://linux.cn/article-12156-1.html
> > GTWS 是一系列脚本,它使我们在开发环境中管理不同的项目和项目的各个版本变得很容易。 > > > ![](/data/attachment/album/202004/27/182149xh9s7kb5bkf5875b.jpg) [Great Teeming Workspaces](https://github.com/dang/gtws)(GTWS)是一个 Git 的复杂工作空间管理工具包,它使我们在开发环境中管理不同的项目和项目的各个版本变得很容易。 有点像 Python 的 [venv](https://docs.python.org/3/library/venv.html),但不是为 Python 语言准备的。GTWS 用来管理多个项目的多个版本的工作空间。你可以很容易地创建、更新、进入和离开工作空间,每个项目或版本的组合(最多)有一个本地的 origin,用来与 upstream 同步 — 其余的所有工作空间都从本地的 origin 更新。 ### 部署 ``` ${GTWS_ORIGIN}/<project>/<repo>[/<version>] ${GTWS_BASE_SRCDIR}/<project>/<version>/<workspacename>/{<repo>[,<repo>...]} ``` 源代码目录的每一级(包括全局的家目录)可以包含一个 `.gtwsrc` 文件,这个文件中维护与当前级相关的设置和 bash 代码。每一级的配置会覆盖上一级。 ### 安装 用下面的命令检出 GTWS: ``` git clone https://github.com/dang/gtws.git ``` 配置你的 `${HOME}/.gtwsrc`。它应该包含 `GTWS_ORIGIN`,也可以再包含 `GTWS_SETPROMPT`。 把仓库目录加到环境变量中: ``` export PATH="${PATH}:/path/to/gtws ``` ### 配置 通过级联 `.gtwsrc` 文件来进行配置。它从根目录向下遍历,会执行在每级目录中找到的 `.gtwsrc` 文件。下级目录的文件会覆盖上一级。 在你最上层的文件 `~/.gtws/.gtwsrc` 中进行如下设置: * `GTWS_BASE_SRCDIR`:所有项目源文件目录树的基目录。默认为 `$HOME/src`。 * `GTWS_ORIGIN`: 指定 origin git 目录树的路径。默认为 `$HOME/origin`。 * `GTWS_SETPROMPT`: 可选配置。如果配置了这个参数,shell 提示符会有工作空间的名字。 * `GTWS_DEFAULT_PROJECT`: 不指定项目或项目未知时默认的项目名。如果不指定,使用命令行时必须指明项目。 * `GTWS_DEFAULT_PROJECT_VERSION`: 检出的默认版本。默认为 `master`。 在每个项目的根目录进行以下设置: * `GTWS_PROJECT`: 项目的名字(和基目录)。 * `gtws_project_clone`: 这个函数用于克隆一个项目的指定版本。如果未定义,它会假定项目的 origin 对每一个版本都有一个单独的目录,这样会导致克隆一堆 Git 仓库。 * `gtws_project_setup`: 在克隆完所有的仓库后,可以选择是否调用这个函数,调用后可以对项目进行必要的配置,如在 IDE 中配置工作空间。 在项目版本级进行以下设置: * `GTWS_PROJECT_VERSION:` 项目的版本。用于正确地从 origin 拉取代码。类似 Git 中的分支名字。 下面这些参数可以在目录树的任意地方进行配置,如果能生效,它们可以被重写多次: * `GTWS_PATH_EXTRA`: 这些是工作空间中加到路径后的额外的路径元素。 * `GTWS_FILES_EXTRA`: 这些是不在版本控制内,但应该在工作空间中被检出的额外的文件。这些文件包括 `.git/info/exclude`,每个文件都与仓库的基目录相关联。 ### origin 目录 `GTWS_ORIGIN` (大部分脚本中)指向拉取和推送的原始 Git 检出目录。 `${GTWS_ORIGIN}` 部署: * `/<project>` + 这是一个项目的仓库的基目录。 + 如果指定了 `gtws_project_clone`,你可以配置任意的部署路径。 + 如果没有指定 `gtws_project_clone`,这个路径下必须有个名为 `git` 的子目录,且 `git` 目录下有一系列用来克隆的裸 Git 仓库。 ### 工作流示例 假设你有一个项目名为 
`Foo`,它的 upstream 为 `github.com/foo/foo.git`。这个仓库有个名为 `bar` 的子模块,它的 upstream 是 `github.com/bar/bar.git`。Foo 项目在 master 分支开发,使用稳定版本的分支。 为了能在 Foo 中使用 GTWS,你首先要配置目录结构。本例中假设你使用默认的目录结构。 * 配置你最上层的 `.gtwsrc`: + `cp ${GTWS_LOC}/examples/gtwsrc.top ~/.gtwsrc` + 根据需要修改 `~/.gtwsrc`。 * 创建顶级目录: + `mkdir -p ~/origin ~/src` * 创建并配置项目目录: + `mkdir -p ~/src/foo` `cp ${GTWS_LOC}/examples/gtwsrc.project ~/src/foo/.gtwsrc` + 根据需要修改 `~/src/foo/.gtwsrc`。 * 创建并配置 master 版本目录: + `mkdir -p ~/src/foo/master` `cp ${GTWS_LOC}/examples/gtwsrc.version ~/src/foo/master/.gtwsrc` + 根据需要修改 `~/src/foo/master/.gtwsrc`。 * 进入版本目录并创建一个临时工作空间来配置镜像: + `mkdir -p ~/src/foo/master/tmp` `cd ~/src/foo/master/tmp` `git clone --recurse-submodules git://github.com/foo/foo.git` `cd foo` `gtws-mirror -o ~/origin -p foo`(译注:这个地方原文有误,不加 `-s` 参数会报错) + 上面命令会创建 `~/origin/foo/git/foo.git` 和 `~/origin/foo/submodule/bar.git`。 + 以后的克隆操作会从这些 origin 而不是 upstream 克隆。 + 现在可以删除工作空间了。 到现在为止,Foo 的 master 分支的工作可以结束了。假设你现在想修复一个 bug,名为 `bug1234`。你可以脱离你当前的工作空间为修复这个 bug 单独创建一个工作空间,之后在新创建的工作空间中开发。 * 进入版本目录,创建一个新的工作空间: + `cd ~/src/foo/master` `mkws bug1234` + 上面的命令创建了 `bug1234/`,在这个目录下检出了 Foo(和它的子模块 `bar`),并创建了 `build/foo` 来构建它。 * 有两种方式进入工作空间: + `cd ~/src/foo/master/bug1234` `startws` 或者 `cd ~/src/foo/master/` `startws bug1234` + 上面的命令在 `bug1234` 工作空间中开启了一个子 shell。这个 shell 有 GTWS 的环境和你在各级 `.gtwsrc` 文件中设置的环境。它也把你工作空间的基目录加入到了 CD,因此你可以从 base 路径 `cd` 到相关的目录中。 + 现在你可以修复 `bug1234` 了,构建、测试、提交你的修改。当你可以把代码推送到 upstream 时,执行下面的命令: `cd foo` `wspush` + `wspush` 会把代码推送到与你工作空间相关的分支 — 先推送到本地的 origin,再推送到 upstream。 + 当 upstream 有修改时,你可以用下面的命令同步到本地: `git sync` + 上面的命令调用了 GTWS 的 `git-sync` 脚本,会从本地 origin 更新代码。使用下面的命令来更新本地的 origin: `git sync -o` + 上面的命令会更新你本地的 origin 和子模块的镜像,然后用那些命令来更新你的检出仓库的代码。`git-sync` 也有一些其他的很好的功能。 + 当要结束工作空间中的工作时,直接退出 shell: `exit` + 你可以在任何时间重复进入工作空间,也可以在同一时间在相同的工作空间中开多个 shell。 * 当你不需要某个工作空间时,你可以使用 `rmws` 来删除它,或者直接删除它的目录树。 * 还有一个脚本 `tmws` 使用 tmux 进入工作空间,能创建一系列的窗口/窗格,这完美契合我的工作流。你可以根据你自己的需求来修改它。 --- via:
<https://opensource.com/article/20/2/git-great-teeming-workspaces> 作者:[Daniel Gryniewicz](https://opensource.com/users/dang) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lxbwolf](https://github.com/lxbwolf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
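补充示例:上文《使用 GTWS 管理复杂的 Git 工作空间》的“配置”一节描述了若干顶层设置项,它们可以汇总成这样一份示意性的顶层 `~/.gtwsrc`(bash 语法)。字段名取自原文,其中的路径和项目名(`foo`)均为假设值,仅用于说明用法:

```shell
# 示意性的顶层 ~/.gtwsrc —— 路径与项目名均为假设值
GTWS_BASE_SRCDIR="${HOME}/src"          # 所有项目源码树的基目录(默认即 $HOME/src)
GTWS_ORIGIN="${HOME}/origin"            # 本地 origin 镜像所在目录(默认即 $HOME/origin)
GTWS_SETPROMPT=1                        # 可选:在 shell 提示符中显示工作空间名
GTWS_DEFAULT_PROJECT="foo"              # 未在命令行指明项目时使用的默认项目
GTWS_DEFAULT_PROJECT_VERSION="master"   # 默认检出的版本
```

由于 `.gtwsrc` 是从根目录逐级向下执行的,下层目录中的同名设置会覆盖这里的值。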
301
Moved Permanently
null
12,157
开源的广告拦截器不但节能,而且能拯救生命!
https://opensource.com/article/20/4/ad-blockers
2020-04-27T22:01:00
[ "广告" ]
/article-12157-1.html
> > 三个开源广告拦截器与“无广告拦截器”对照组进行了测试。 > > > ![](/data/attachment/album/202004/27/220109b86sidn56sn6inoh.jpg) 一项旨在调查自由开源的广告拦截器节能的情况的[研究](https://www.mdpi.com/2227-7080/8/2/18),意外地发现了互联网广告正在浪费你大量的时间。 更重要的是,研究结果表明了你可以挽救回这些失去的时间。这项研究评估发现,使用 [uBlock Origin](https://github.com/gorhill/uBlock)(一个开源免费的广告拦截器)的情况下平均每个网民一年可以节约超过 100 个小时的时间。uBlock Origin 是测试中最有效的广告拦截器,不过其他的广告拦截器也为网民节省了时间、能源以及金钱。 ![Ad blocker screen comparison](/data/attachment/album/202004/27/220334mpd5pijgaxgp5xql.png "Ad blocker screen comparison") 在研究结果中,[AdBlock+](https://adblockplus.org/) 减少了 11% 的页面加载时间,[Privacy Badger](https://privacybadger.org/) 减少了 22%,[uBlock Origin](https://github.com/gorhill/uBlock) 减少了 28%。对于单独一个页面来说这个时间並不算多,但是网民们一半以上的浏览时间都是在网站间快速跳转,通常在一个页面停留少于 15 秒。鉴于这种情况,加载广告的额外时间加起来就很多了。 发布于 Technologies 杂志上的《[用开源的广告拦截器节能](https://www.academia.edu/42434401/Energy_Conservation_with_Open_Source_Ad_Blockers)》一文最初旨在解决日益增长的能源消耗问题。随着全球网民每日上网时间超过 6.5 小时,与互联网相关的用电量正在快速地增加。以美国人为例,自 2000 年他们的上网时间已经增加了一倍,几乎达到每周 24 小时。开源广告拦截器通过消灭上网和观看视频时产生的广告,潜在地减少了时间,从而减少用电。 在研究过程中,对三个开源广告拦截器与“无广告拦截器”对照组进行了测试。研究人员记录了浏览全球访问量最大的网站的页面加载时间,其中包括网络搜索(谷歌、雅虎、必应)、信息(Weather.com、维基百科)和新闻网站(CNN、福克斯、纽约时报)。除此之外,研究还分析了观看流行与非流行视频内容时广告所花费的时间。这部分研究由于缺乏 Youtube 上流行和非流行内容观看比例的数据而非常具有挑战性。每个视频浪费在广告观看上的时间可以从 0.06% 到惊人的 21% 不等。而且,这还只是浏览器上记录的加载广告而失去的时间。 总的来说,研究结果表明加载广告所浪费的能源并不是是小事。由于运行电脑所使用的大量电力仍然来自于煤炭,而煤炭会造成空气污染和过早死亡,因此该研究分析了广告拦截器拯救美国人生命的潜力。(LCTT 译注:由于这项研究是美国人完成的,所以这里仅提及了美国人,但是同理可推至全球。)结果是令人震惊的:如果美国人都使用开源广告拦截器,每年节约的能源将会拯救超过 36 个美国人的生命。 电能即金钱,所以削减广告也可以为消费者节约钱财。在美国,如果所有的网民都在他们的电脑上开启 [Privacy Badger](https://privacybadger.org/),美国人每年可以节约超过 9100 万美元。在全球范围内,调查研究的结果则更令人吃惊。uBlock Origin 每年可以为全球消费者节约 18 亿美元。 这项研究始于人们因为新冠肺炎大流行而被迫居家之前,因此所有的数据都可以被认为是保守的估算。整体来说,研究发现了开源广告拦截器是一项潜在的节能技术。 虽然自由开源的广告拦截器可以节约能源,对自然环境也有好处,但你可能主要是为了拦截恼人的广告和节省自己的时间而使用它们。 --- via: <https://opensource.com/article/20/4/ad-blockers> 作者:[Joshua Pearce](https://opensource.com/users/jmpearce) 选题:[lujun9972](https://github.com/lujun9972) 译者:[CrazyShipOne](https://github.com/CrazyShipOne) 
校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
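补充示例:文中“每人每年可节约超过 100 个小时”是原研究的整体估算,其量级可以用一个极简的算式来感受。下面的参数均为假设值(页面加载时间占比尤其如此),并非原研究的实际方法:

```shell
# 粗略估算广告拦截器一年节省的页面加载时间(小时)——参数均为假设值
# h:人均每日上网小时数(文中引用的数字为 6.5)
# f:上网时间中花在页面加载上的比例(这里假设 10%)
# r:广告拦截器带来的加载时间降幅(文中 uBlock Origin 实测约 28%)
awk -v h=6.5 -v f=0.10 -v r=0.28 \
    'BEGIN { printf "每年约节省 %.2f 小时\n", h * 365 * f * r }'
```

按这组假设算出约 66 小时/年;如果页面加载占上网时间的比例更高,结果就会向研究给出的 100 小时靠拢。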
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
12,158
关于 Ubuntu 20.04 你应该了解的事情
https://itsfoss.com/ubuntu-20-04-faq/
2020-04-28T09:45:00
[ "Ubuntu" ]
https://linux.cn/article-12158-1.html
[Ubuntu 20.04](/article-12142-1.html) 已经发布,你可能对升级、安装等有一些问题和疑问。 我在各种社交媒体渠道上主持了一些问答环节,回答像你这样的读者的疑虑。我将列出这些关于 Ubuntu 20.04 的常见问题,并给出答案。我希望它能帮助你消除你的疑虑。如果你仍有问题,请随时在下面的评论栏提问。 ### Ubuntu 20.04:已回复的问题 ![](/data/attachment/album/202004/28/094530xdpqws5x5q0sxznw.jpg) 为了澄清一下,这里的一些答案也许受我个人意见的影响。如果你是一个有经验的 Ubuntu 用户,有些问题听起来可能有点愚蠢,但它对 Ubuntu 新用户不是这样。 #### Ubuntu 20.04 何时发布? Ubuntu 20.04 LTS 于 2020 年 4 月 23 日发布。所有变种,如 Kubuntu、Lubuntu、Xubuntu、Budgie、MATE 都将和 20.04 同一天发布。 #### Ubuntu 20.04 的系统要求是什么? 对于默认的 GNOME 版本,应至少具有 4GB 的内存、2GHz 双核处理器和至少 25GB 的磁盘空间。 其他 [Ubuntu 变种](https://itsfoss.com/which-ubuntu-install/)可能有不同的系统要求。 #### 我可以在 32 位系统上使用 Ubuntu 20.04 吗? 完全不行。你不能在 32 位系统上使用 Ubuntu 20.04。即使你使用的是 32 位 Ubuntu 18.04,也不能升级到 Ubuntu 20.04。近几年来已经不再提供 32 位系统的 ISO 了。 ![Error while upgrading 32-bit Ubuntu 18.04 to Ubuntu 20.04](/data/attachment/album/202004/28/094534tlwixkhmisc2b4yh.jpg) #### 我可以在 Ubuntu 20.04 上使用 Wine 吗? 是的,你仍然可以在 Ubuntu 20.04 上使用 Wine,因为 Wine 和 [Steam Play](https://itsfoss.com/steam-play/) 软件包所需的 32 位库仍然保留。 #### 我需要购买 Ubuntu 20.04 或许可证? 不,Ubuntu 完全可以免费使用。你不必像在 Windows 中那样购买许可证密钥或激活 Ubuntu。 Ubuntu 的下载页会请求你捐赠一些资金,如果你想为开发这个强大的操作系统捐钱,由你自己决定。 #### GNOME 版本是什么? Ubuntu 20.04 有 GNOME 3.36。 #### Ubuntu 20.04 的性能是否优于 Ubuntu 18.04? 是的,在几个方面。Ubuntu 20.04 安装速度更快,甚至连启动都更快。我在下面这个视频的 4:40 处展示了性能对比。 在 GNOME 3.36 中,滚动、窗口动画和其他 UI 元素更加流畅,提供了更流畅的体验。 #### Ubuntu 20.04 将支持多长时间? 它是一个长期支持(LTS)版本,与任何 LTS 版本一样,它将在五年内得到支持。这意味着 Ubuntu 20.04 将在 2025 年 4 月之前获得安全和维护更新。 #### 升级到 Ubuntu 20.04 时,是否会丢失数据? 你可以从 Ubuntu 19.10 或 Ubuntu 18.04 升级到 Ubuntu 20.04。你无需创建 live USB 并从中安装。你所需要的是一个良好的互联网连接,来下载约 1.5GB 的数据。 从现有系统升级不会破坏你的文件。你应该会留有所有文件,并且大多数现有软件应具有相同的版本或升级后的版本。 如果你使用了某些第三方工具或[其他 PPA](https://itsfoss.com/ppa-guide/),升级过程将禁用它们。如果 Ubuntu 20.04 可以使用这些其他存储库,那么可以再次启用它们。 升级大约需要一个小时,重启后,你将登录到新版本。 虽然你的数据不会被触碰,并且不会丢失系统文件和配置,但最好在外部设备备份重要数据。 #### 何时可以升级到 Ubuntu 20.04?
![](/data/attachment/album/202004/28/094535iyyshjyzjpwiusop.jpg) 如果你正在使用 Ubuntu 19.10 并有正确的更新设置(如前面部分所述),那么应在发布后的几天内通知你升级到 Ubuntu 20.04。 对于 Ubuntu 18.04 用户,可能需要几周时间才能正式通知他们 Ubuntu 20.04 可用。可能,你可能会在第一个点版本 Ubuntu 20.04.1 后获得提示。 #### 如果我升级到 Ubuntu 20.04,我可以降级到 19.10 或 18.04 吗? 不行。虽然升级到新版本很容易,但无法选择降级。如果你想回到 Ubuntu 18.04,你需要重新[安装 Ubuntu 18.04](https://itsfoss.com/install-ubuntu/)。 #### 我使用的是 Ubuntu 18.04 LTS。我应该升级到 Ubuntu 20.04 LTS 吗? 这取决于你。如果你对 Ubuntu 20.04 中的新功能印象深刻,并希望上手尝试,那么你应该升级。 如果你想要一个更稳定的系统,我建议等待第一个点版本 Ubuntu 20.04.1,新版本会有 bug 修复。20.04.1 通常在 Ubuntu 20.04 发布后大约两个月到来。 无论是那种情况,我都建议你或早或晚升级到 Ubuntu 20.04。Ubuntu 20.04 具有更新的内核、性能改进,尤其是仓库中有更新版本的软件。 在外部磁盘上进行备份,并且有良好的互联网连接,升级不应成为问题。 #### 我应该重新安装 Ubuntu 20.04 还是从 18.04/19.10 升级到 Ubuntu? 如果你可以选择,请备份数据,并重新安装 Ubuntu 20.04。 从现有版本升级到 20.04 是一个方便的选择。然而,在我看来,它仍然保留有一些旧版本的痕迹/包。全新安装更加干净。 #### 关于 Ubuntu 20.04 的任何其他问题? 如果你对 Ubuntu 20.04 有任何疑问,请随时在下面发表评论。如果你认为应该将其他信息添加到列表中,请让我知道。 --- via: <https://itsfoss.com/ubuntu-20-04-faq/> 作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
[Ubuntu 20.04 release](https://itsfoss.com/ubuntu-20-04-release-features/) is just around the corner and you may have a few questions and doubts regarding upgrades, installation etc. I hosted some Q&A sessions on various social media channels to answer doubts of readers like you. I am going to list these common questions about Ubuntu 20.04 with their answers. I hope it helps you clear the doubts you have. And if you still have questions, feel free to ask in the comment section below. ## Ubuntu 20.04: Your Questions Answered ![Ubuntu 20 04 Faq](https://itsfoss.com/content/images/wordpress/2020/04/ubuntu_20_04_faq.jpg) Just to clarify, some of the answers here may be influenced by my personal opinion. If you are an experienced Ubuntu user, some of the questions may sound *silly* to you but it is not to the new Ubuntu users. ### When will Ubuntu 20.04 be released? Ubuntu 20.04 LTS is releasing on 23rd April 2020. All the participating flavors like Kubuntu, Lubuntu, Xubuntu, Budgie, MATE etc will have their 20.04 release available on the same day. ### What are the system requirements for Ubuntu 20.04? For the default GNOME version, you should have a minimum 4 GB of RAM, 2 GHz dual core processor and at least 25 GB of disk space. Other [Ubuntu flavors](https://itsfoss.com/which-ubuntu-install/) may have different system requirements. ### Can I use Ubuntu 20.04 on 32-bit systems? No, not at all. You cannot use Ubuntu 20.04 on 32-bit systems. Even if you are using 32-bit Ubuntu 18.04, you cannot upgrade to Ubuntu 20.04. There has been no ISO for 32-bit systems for the past several years. ![Error while upgrading 32-bit Ubuntu 18.04 to Ubuntu 20.04](https://itsfoss.com/content/images/wordpress/2020/04/ubuntu-32-bit.jpg) ### Can I use Wine on Ubuntu 20.04? Yes, you can still use Wine on Ubuntu 20.04 as the 32-bit lib support is still there for packages needed by Wine and [Steam Play](https://itsfoss.com/steam-play/). ### Do I have to pay for Ubuntu 20.04 or purchase a license?
No, Ubuntu is completely free to use. You don’t have to buy a license key or activate Ubuntu like you do in Windows. The download section of Ubuntu requests you to donate some money but it’s up to you if you want to give some money for developing this awesome operating system. ### What GNOME version does it have? Ubuntu 20.04 has GNOME 3.36. ### Does Ubuntu 20.04 have better performance than Ubuntu 18.04? Yes, in several aspects. Ubuntu 20.04 installs faster and it even boots faster. I have shown the performance comparison in the video below at 4:40 minutes. Scrolling, window animations and other UI elements are more fluid and give a smoother experience in GNOME 3.36. ### How long will Ubuntu 20.04 be supported? It is a long-term support (LTS) release and like any LTS release, it will be supported for five years. Which means that Ubuntu 20.04 will get security and maintenance updates until April 2025. ### Will I lose data while upgrading to Ubuntu 20.04? You can upgrade to Ubuntu 20.04 from Ubuntu 19.10 or Ubuntu 18.04. You don’t need to create a live USB and install from it. All you need is a good internet connection that can download around 1.5 GB of data. Upgrading from an existing system doesn’t harm your files. You should have all your files as they are and most of your existing software should either have the same version or an upgraded version. If you have used some third-party tools or [additional PPA](https://itsfoss.com/ppa-guide/), the upgrade procedure will disable them. You can enable these additional repositories again if they are available for Ubuntu 20.04. Upgrading takes like an hour and after a restart, you will be logged in to the newer version. Though your data will not be touched and you won’t lose system files and configurations, it is always a good idea to make a backup of important data externally. ### When will I get to upgrade to Ubuntu 20.04?
![Upgrade Ubuntu 20 04](https://itsfoss.com/content/images/wordpress/2020/03/upgrade-ubuntu-20-04.jpg) If you are using Ubuntu 19.10 and have correct update settings in place (as mentioned in the earlier sections), you should be notified for upgrading to Ubuntu 20.04 within a few days of the Ubuntu 20.04 release. For Ubuntu 18.04 users, it may take some weeks before they are officially notified of the availability of Ubuntu 20.04. Probably, you may get the prompt after the first point release, Ubuntu 20.04.1. ### If I upgrade to Ubuntu 20.04, can I downgrade to 19.10 or 18.04? No, you cannot. While upgrading to a newer version is easy, there is no option to downgrade. If you want to go back to Ubuntu 18.04, you’ll have to [install Ubuntu 18.04](https://itsfoss.com/install-ubuntu/) again. ### I am using Ubuntu 18.04 LTS. Should I Upgrade to Ubuntu 20.04 LTS? That depends upon you. If you are impressed by the new features in Ubuntu 20.04 and want to get your hands on it, you should upgrade. If you want a more stable system, I advise waiting for the first point release, Ubuntu 20.04.1, which will have bug fixes in the new release. 20.04.1 should typically be coming approximately two months after the release of Ubuntu 20.04. In either case, I recommend upgrading to Ubuntu 20.04 sooner or later. Ubuntu 20.04 has a newer kernel, performance improvements and above all newer versions of software available in the repository. Make a backup on an external disk and with good internet connectivity, the upgrade should not be an issue. ### Should I do a fresh install of Ubuntu 20.04 or upgrade to it from 18.04/19.10? If you have a choice, make a backup of your data and do a fresh install of Ubuntu 20.04. Upgrading to 20.04 from an existing version is a convenient option. However, in my opinion, it still keeps some traces/packages of the older version. A fresh install is always cleaner. ### Any other questions about Ubuntu 20.04?
If you have any other doubts regarding Ubuntu 20.04, please feel free to leave a comment below. If you think some other information should be added to the list, please let me know.
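As a small aside to the upgrade questions above: before choosing between an upgrade and a fresh install, it helps to confirm which release you are currently running. Here is a minimal sketch of reading the `VERSION_ID` field from os-release-style text; the data is an inlined sample rather than the real file, so on an actual system you would read `/etc/os-release` instead:

```shell
# Parse VERSION_ID from os-release style text.
# The text below is a hardcoded sample; on a real system use /etc/os-release.
os_release='NAME="Ubuntu"
VERSION_ID="18.04"'

version=$(printf '%s\n' "$os_release" | sed -n 's/^VERSION_ID="\(.*\)"$/\1/p')
echo "Current release: $version"
```

Since `/etc/os-release` is itself valid shell, sourcing it (`. /etc/os-release; echo "$VERSION_ID"`) gives the same answer on a live system and tells you whether you are on 18.04, 19.10 or already on 20.04.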
12,160
MystiQ:一个自由开源的音视频转换器
https://itsfoss.com/mystiq/
2020-04-28T22:33:00
[ "转换", "视频" ]
https://linux.cn/article-12160-1.html
![](/data/attachment/album/202004/28/223258cr9rxzyrj344kh68.jpg) > > MystiQ 是一款全新的开源视频转换工具,适用于 Linux 和 Windows。它的底层使用 FFMPEG,并为你提供了一个基于 Qt 的整洁干净的图形界面。 > > > ### MystiQ,一个基于 QT 的 FFmpeg GUI 前端 ![](/data/attachment/album/202004/28/223338yy3uu3fpij2iyr5j.jpg) 音频/视频转换工具可为每位跨多个平台的计算机用户提供方便。 出于同样的原因,我想着重介绍 [MystiQ](https://mystiqapp.com/) 是个好主意,这是一个相对较新的视频/音频转换器工具,适用于 Linux 和 Windows。截至目前,它还不支持 macOS,但可能会在不久的将来支持。 MystiQ 是基于 [Qt 5 界面](https://www.qt.io/)的 [FFmpeg](https://www.ffmpeg.org/) 图形前端。现在,你可以随时[在 Linux 命令行中安装并使用 ffmpeg](https://itsfoss.com/ffmpeg/),但这不是很舒服,是吗?这就是为什么 [Handbrake](https://itsfoss.com/handbrake/) 和 MystiQ 之类的工具可以使我们的生活更方便的原因。 由于 MystiQ 基于 FFmpeg,因此你可以将其用于一些基本的视频编辑,例如修剪、旋转等。 让我们来看看它的功能。 ### MystiQ 视频转换器的功能 ![](/data/attachment/album/202004/28/223316scmmfaamia2o0mim.jpg) 即使 MystiQ 目前还算是一个新事物,但它也包含了一组很好的基本功能。以下它提供的: * 视频转换 * 音频转换(也可从视频中提取音频) * 支持的格式:MP4、WEBM、MKV、MP3、MOV、OGG、WAV、ASF、FLV、3GP、M4A 等。 * 跨平台(Windows 和 Linux) * 适用于 32 位和 64 位系统的安装包 * 能够调整音频质量(采样率、比特率等)进行转换 * 基本的视频编辑功能(剪辑视频、插入字幕、旋转视频、缩放视频等) * 将彩色视频转换为黑白 * 有几个预设方案,可轻松转换视频以获得最佳质量或获得最佳压缩效果。 ### 安装 MystiQ 你可能没有在软件中心中找到它,但将它安装在 Linux 发行版上非常容易。 它提供了 .AppImage 文件和 .deb / .rpm 文件(32 位和 64 位软件包)。如果你不清楚如何使用的话,可以阅读[如何使用 AppImage 文件](https://itsfoss.com/use-appimage-linux/)。 如果你想帮助他们测试软件进行改进,你还可以找到他们的 [GitHub 页面](https://github.com/swl-x/MystiQ/),并查看源码或任何近期的预发布软件包。 你可以在其官方网站下载适用于 Linux 和 Windows 的安装程序文件。 * [下载 MystiQ](https://mystiqapp.com/) ### 总结 在本文中,我使用 [Pop!\_OS](https://system76.com/pop) 20.04 测试了 MytiQ 转换器,并且在转换视频和音频时没遇到任何问题。而且,对于像我这样的普通用户来说,它的转换速度足够快。 欢迎尝试一下,让我知道你对它的想法!另外,如果你在 Linux 上一直使用其他工具转换视频和音频,那它是什么? --- via: <https://itsfoss.com/mystiq/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
*Brief: MystiQ is a new open-source video converter tool available for Linux and Windows. It uses FFMPEG underneath and provides you a neat and clean graphical interface based on Qt.* ## MystiQ, a QT-based GUI Frontend for FFmpeg ![Mystiq audio video converyer](https://itsfoss.com/content/images/wordpress/2020/04/mystiq-converter-ft.jpg) An audio/video conversion tool always comes in handy for every computer user across multiple platforms. For that very same reason, I thought it would be a great idea to highlight the [MystiQ app](https://mystiqapp.com/) – which is a relatively new video/audio converter tool available for Linux and Windows users. As of now, there’s no support for macOS – but it could arrive in the near future. MystiQ is a graphical frontend for [FFmpeg](https://www.ffmpeg.org/) based on [Qt 5 user interface](https://www.qt.io/). Now, you can always [install and use ffmpeg in Linux command line](https://itsfoss.com/ffmpeg/) but that’s not very comfortable, is it? This is why tools like [Handbrake](https://itsfoss.com/handbrake/) and MystiQ exist to make our life easier. Since MystiQ is based on FFmpeg, you can use it for some basic video editing like trimming a video, rotating it etc. Let’s have a look at its features. ## Features of MystiQ video converter ![Mystiq video converter](https://itsfoss.com/content/images/wordpress/2020/04/mystiq-options.jpg) Even though the MystiQ app is fairly new to the scene – it packs a good set of essential features. Here’s an overview of what it offers: - Video conversion - Audio conversion (extracting the audio from the video as well) - Formats supported: MP4, WEBM, MKV, MP3, MOV, OGG, WAV, ASF, FLV, 3GP, M4A, and a few others. - Cross-platform (Windows & Linux) - Packages for both 32-bit and 64-bit systems available - Ability to tweak the audio quality (sample rate, bit rate, etc,.) 
for conversion - **Basic video editing capabilities** (clipping the video, inserting a subtitle, rotating the video, scaling the video, etc.) - Convert your color video to black and white - Several presets available to easily convert a video for the best quality or for the best compression. ## Installing MystiQ You may not find it listed in your software center – but it is quite easy to get it installed on a Linux distro. It provides an **.AppImage file** and **.deb/.rpm** files (with 32-bit and 64-bit packages). If you’re curious, you can read [how to use the AppImage file](https://itsfoss.com/use-appimage-linux/) if you want to use that. You can also find their [GitHub page](https://github.com/swl-x/MystiQ/) and look at the source code or any recent pre-release packages if you want to help them test the software to improve it. You can download the installer files for both Linux and Windows from its official website. **Wrapping Up** For this quick highlight article, I used [Pop!_OS](https://system76.com/pop) 20.04 to test the MystiQ converter app and I had no issues converting video and audio files. And, the conversion was fast enough for an average user like me. Feel free to try it out and let me know your thoughts on it! Also, if you’ve been using another tool to convert videos and audio on Linux, what is it?
12,161
The differences between DNF and Yum: why is Yum being replaced by DNF?
https://www.2daygeek.com/comparison-difference-between-dnf-vs-yum/
2020-04-29T09:40:00
[ "DNF", "Yum" ]
https://linux.cn/article-12161-1.html
![](/data/attachment/album/202004/29/093910jjxifk8mewuxmgos.jpg)

Because many long-standing problems in Yum remain unresolved, the [Yum package manager](https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/) has been replaced by the [DNF package manager](https://www.2daygeek.com/linux-dnf-command-examples-manage-packages-fedora-centos-rhel-systems/). These problems include poor performance, excessive memory usage, and slow dependency resolution.

DNF uses `libsolv` for dependency resolution; it is developed and maintained by SUSE and designed to improve performance.

Yum is written mostly in Python and has its own approach to dependency resolution. Its API is not fully documented, and its extension system only allows Python plugins.

Yum is a front end for RPM: it manages dependencies and repositories and then uses RPM to install, download, and remove packages.

Why did they build a new tool instead of fixing the existing problems?

Ales Kozumplik explained that the fix was not technically feasible and that the Yum team was not ready to accept the changes immediately.

Also, the biggest challenge was that Yum contains 56,000 lines of code while DNF has only 29,000, so there was no way forward other than a fork.

That said, Yum still works reasonably well.

| No. | DNF (Dandified YUM) | YUM (Yellowdog Updater, Modified) |
| --- | --- | --- |
| 1 | DNF uses libsolv to resolve dependencies; it is developed and maintained by SUSE | YUM uses a public API to resolve dependencies |
| 2 | The API is fully documented | The API is not fully documented |
| 3 | Written in C, C++, and Python | Written only in Python |
| 4 | DNF is currently used in Fedora, RHEL 8, CentOS 8, OEL 8, and Mageia 6/7 | YUM is currently used in RHEL 6/7, CentOS 6/7, and OEL 6/7 |
| 5 | DNF supports various extensions | Yum supports only Python-based extensions |
| 6 | The API is well documented, so creating new features is easy | Creating new features is very difficult because the API is not properly documented |
| 7 | DNF uses less memory when synchronizing repository metadata | YUM uses excessive memory when synchronizing repository metadata |
| 8 | DNF uses a satisfiability algorithm for dependency resolution (it stores and retrieves package and dependency information using a dictionary approach) | Yum dependency resolution is sluggish because of the public API |
| 9 | Good performance in terms of memory usage and dependency resolution of repository metadata | Overall poor performance, due to many factors |
| 10 | DNF update: during a DNF update, packages with unrelated dependencies are not updated | YUM updates packages without verification |
| 11 | If an enabled repository does not respond, DNF skips it and continues the transaction with the available repositories | YUM stops immediately if a repository is unavailable |
| 12 | `dnf update` and `dnf upgrade` are equivalent | They differ in Yum |
| 13 | Dependencies of installed packages are not updated | Yum provides an option for this behavior |
| 14 | Cleaning up removed packages: when a package is removed, DNF automatically removes any dependent packages that were not explicitly installed by the user | Yum does not do this |
| 15 | Repository cache update schedule: by default, DNF checks the configured repositories for updates once an hour, starting 10 minutes after the system boots. The action is controlled by the systemd timer unit `dnf-makecache.timer` | Yum also does this |
| 16 | Kernel packages are not protected by DNF. Unlike with Yum, you can remove all kernel packages, including the running one | Yum does not allow you to remove the running kernel |
| 17 | libsolv: a library for unpacking and reading repositories. hawkey: a library providing simplified C and Python APIs for libsolv. librepo: a library providing C and Python (libcURL-like) APIs for downloading Linux repository metadata and packages. libcomps: a replacement for the yum.comps library, written in pure C with Python 2 and Python 3 bindings | Yum does not use separate libraries to perform these functions |
| 18 | DNF contains 29,000 lines of code | Yum contains 56,000 lines of code |
| 19 | DNF was developed by Ales Kozumplik | YUM was developed by Zdenek Pavlas, Jan Silhan, and team members |

---

via: <https://www.2daygeek.com/comparison-difference-between-dnf-vs-yum/>

Author: [Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) Topic selection: [lujun9972](https://github.com/lujun9972) Translator: [wxy](https://github.com/wxy) Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
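Several rows of the table describe command-level equivalences, for example that `dnf update` and `dnf upgrade` are the same command (row 12). A small helper can capture that mapping for scripts migrating from Yum; this is an illustrative sketch covering only a few common subcommands, not an official compatibility layer:

```shell
# Map a yum subcommand to its dnf equivalent (illustrative subset only).
yum_to_dnf() {
    case "$1" in
        update|upgrade) echo "dnf upgrade" ;;    # equivalent in DNF (row 12)
        erase|remove)   echo "dnf remove" ;;
        makecache)      echo "dnf makecache" ;;
        *)              echo "dnf $1" ;;         # most subcommands carry over unchanged
    esac
}

yum_to_dnf update
yum_to_dnf erase
```

In practice most systems ship a `yum` symlink to `dnf`, so existing scripts keep working; a mapping like this is mainly useful when you want to make the migration explicit.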
404
Not Found
null
12,164
Fedora 32 officially released!
https://fedoramagazine.org/announcing-fedora-32/
2020-04-29T11:56:39
[ "Fedora" ]
https://linux.cn/article-12164-1.html
![](/data/attachment/album/202004/29/115643ltjbzvnfyw5be5ee.png)

It's here! We're proud to announce the release of Fedora 32. Thanks to the hard work of thousands of Fedora community members and contributors, we're celebrating yet another on-time release.

If you just want to get to the bits without delay, head over to <https://getfedora.org/> right now. For details, read on!

### All of Fedora's flavors

Fedora Editions are targeted outputs geared toward specific "showcase" uses.

Fedora Workstation focuses on the desktop. In particular, it's geared toward software developers who want a "just works" Linux operating system experience. This release features [GNOME 3.36](https://www.gnome.org/news/2020/03/gnome-3-36-released/), which has plenty of great improvements as usual. My favorite is the new lock screen!

Fedora Server brings the latest in cutting-edge open source server software to systems administrators in an easy-to-deploy fashion. For edge computing use cases, [Fedora IoT](https://iot.fedoraproject.org/) provides a strong foundation for IoT ecosystems.

Fedora CoreOS is an emerging Fedora Edition. It's an automatically updating, minimal operating system for running containerized workloads securely and at scale. It offers several [update streams](https://docs.fedoraproject.org/en-US/fedora-coreos/update-streams/) that deliver automatic updates roughly every two weeks. Currently the next stream is based on Fedora 32, with the testing and stable streams to follow. You can find information about artifacts released on the next stream on the [download page](https://getfedora.org/en/coreos/download?stream=next), and information about how to use them in the [Fedora CoreOS documentation](https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/).

Of course, we produce more than just the Editions. [Fedora Spins](https://spins.fedoraproject.org/) and [Labs](https://labs.fedoraproject.org/) target a variety of audiences and use cases, including the [Fedora Astronomy Lab](https://labs.fedoraproject.org/en/astronomy/), which brings a complete open source toolchain to both amateur and professional astronomers, and desktop environments like [KDE Plasma](https://spins.fedoraproject.org/en/kde/) and [Xfce](https://spins.fedoraproject.org/en/xfce/). New in Fedora 32 is the [Computational Neuroscience Lab](https://labs.fedoraproject.org/en/comp-neuro), developed by our Neuroscience Special Interest Group to enable computational neuroscience.

And don't forget our alternate architectures: [ARM AArch64, Power, and S390x](https://alt.fedoraproject.org/alt/). Of particular note, we have improved support for Rockchip system-on-a-chip devices, including the Rock960, RockPro64, and Rock64.

### General improvements

No matter which variant of Fedora you use, you're getting the latest the open source world has to offer. Following our "[First](https://docs.fedoraproject.org/en-US/project/#_first)" foundation, we've updated key programming language and system library packages, including GCC 10, Ruby 2.7, and Python 3.8. Of course, with Python 2 past end-of-life, we've removed most Python 2 packages from Fedora, but a legacy python27 package is provided for developers and users who still need it. In Fedora Workstation, we've enabled the EarlyOOM service by default to improve the user experience in low-memory situations.

We're excited for you to try out the new release!
Go to <https://getfedora.org/> and download it now. Or, if you're already running a Fedora operating system, follow the easy [upgrade instructions](https://docs.fedoraproject.org/en-US/quick-docs/upgrading/).

### In the unlikely event of a problem…

If you run into a problem, check out the [Fedora 32 Common Bugs](https://fedoraproject.org/wiki/Common_F32_bugs) page, and if you have questions, visit our [Ask Fedora](http://ask.fedoraproject.org) user-support platform.

### Thank you everyone

Thanks to the thousands of people who contributed to the Fedora Project in this release cycle, and especially to those of you who worked extra hard to make this another on-time release during a pandemic. Fedora is a community, and it's great to see how much we support each other. I invite you to join us in the [Red Hat Summit Virtual Experience](https://www.redhat.com/en/summit) on 28-29 April to learn more about Fedora and other communities.

---

via: <https://fedoramagazine.org/announcing-fedora-32/>

Author: [Matthew Miller](https://fedoramagazine.org/author/mattdm/) Topic selection: [lujun9972](https://github.com/lujun9972) Translator: [wxy](https://github.com/wxy) Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
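The in-place upgrade path mentioned above uses DNF's system-upgrade plugin. The sketch below only prints the usual command sequence rather than executing it, since a real upgrade needs root and a reboot; the package name `dnf-plugin-system-upgrade` is taken from Fedora's upgrade documentation:

```shell
# Print the usual dnf system-upgrade sequence for moving to a given release.
fedora_upgrade_steps() {
    releasever=$1
    printf '%s\n' \
        "sudo dnf upgrade --refresh" \
        "sudo dnf install dnf-plugin-system-upgrade" \
        "sudo dnf system-upgrade download --releasever=$releasever" \
        "sudo dnf system-upgrade reboot"
}

fedora_upgrade_steps 32
```

Running `dnf upgrade --refresh` first matters: the system-upgrade plugin expects the current release to be fully up to date before downloading the next one.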
200
OK
It’s here! We’re proud to announce the release of Fedora 32. Thanks to the hard work of thousands of Fedora community members and contributors, we’re celebrating yet another on-time release. If you just want to get to the bits without delay, head over to [https://getfedora.org/](https://getfedora.org/) right now. For details, read on! **All of Fedora’s Flavors** Fedora Editions are targeted outputs geared toward specific “showcase” uses. Fedora Workstation focuses on the desktop. In particular, it’s geared toward software developers who want a “just works” Linux operating system experience. This release features [GNOME 3.36](https://www.gnome.org/news/2020/03/gnome-3-36-released/), which has plenty of great improvements as usual. My favorite is the new lock screen! Fedora Server brings the latest in cutting-edge open source server software to systems administrators in an easy-to-deploy fashion. For edge computing use cases, [Fedora IoT](https://iot.fedoraproject.org/) provides a strong foundation for IoT ecosystems. Fedora CoreOS is an emerging Fedora Edition. It’s an automatically-updating, minimal operating system for running containerized workloads securely and at scale. It offers several [update streams](https://docs.fedoraproject.org/en-US/fedora-coreos/update-streams/) that can be followed for automatic updates that occur roughly every two weeks. Currently the **next** stream is based on Fedora 32, with the **testing** and **stable** streams to follow. You can find information about released artifacts that follow the **next** stream from [the download page](https://getfedora.org/en/coreos/download?stream=next) and information about how to use those artifacts in the [Fedora CoreOS Documentation](https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/). Of course, we produce more than just the editions. 
[Fedora Spins](https://spins.fedoraproject.org/) and [Labs](https://labs.fedoraproject.org/) target a variety of audiences and use cases, including the [Fedora Astronomy Lab](https://labs.fedoraproject.org/en/astronomy/), which brings a complete open source toolchain to both amateur and professional astronomers, and desktop environments like[ KDE Plasma](https://spins.fedoraproject.org/en/kde/) and[ Xfce](https://spins.fedoraproject.org/en/xfce/). New in Fedora 32 is the [Comp Neuro Lab](https://labs.fedoraproject.org/en/comp-neuro), developed by our Neuroscience Special Interest Group to enable computational neuroscience. And, don’t forget our alternate architectures: [ARM AArch64, Power, and S390x](https://alt.fedoraproject.org/alt/). Of particular note, we have improved support for Pine64 devices, NVidia Jetson 64 bit platforms, and the Rockchip system-on-a-chip devices including the Rock960, RockPro64, and Rock64. **General improvements** No matter what variant of Fedora you use, you’re getting the latest the open source world has to offer. Following our “[First](https://docs.fedoraproject.org/en-US/project/#_first)” foundation, we’ve updated key programming language and system library packages, including GCC 10, Ruby 2.7, and Python 3.8. Of course, with Python 2 past end-of-life, we’ve removed most Python 2 packages from Fedora. A legacy python27 package is provided for developers and users who still need it. In Fedora Workstation, we’ve enabled the EarlyOOM service by default to improve the user experience in low-memory situations. We’re excited for you to try out the new release! Go to [https://getfedora.org/](https://getfedora.org/) and download it now. Or if you’re already running a Fedora operating system, follow the easy [upgrade instructions](https://docs.fedoraproject.org/en-US/quick-docs/upgrading/). For more information on the new features in Fedora 32, see the [release notes](https://docs.fedoraproject.org/en-US/fedora/f32/release-notes/). 
**In the unlikely event of a problem….** If you run into a problem, check out the[ Fedora 32 Common Bugs](https://fedoraproject.org/wiki/Common_F32_bugs) page, and if you have questions, visit our [Ask Fedora](http://ask.fedoraproject.org) user-support platform. **Thank you everyone** Thanks to the thousands of people who contributed to the Fedora Project in this release cycle, and especially to those of you who worked extra hard to make this another on-time release during a pandemic. Fedora is a community, and it’s great to see how much we’ve supported each other. I invite you to join us in the [Red Hat Summit Virtual Experience](https://www.redhat.com/en/summit) 28-29 April to learn more about Fedora and other communities. *Edited 1800 UTC on 28 April to add a link to the release notes.* ## timfa21 AWESOME !!! ## Benjamin cool!!! ## louis Beautiful update ! ## Kleyton Amazing!! ## hos7ein It’s great. ## Adam I’ve been running Fedora 32 for a few days. Seems to work without a problem. My uptime is 4 days. I’ve been running Fedora for ~2 years now, and my install is that old as well. I’ve just been upgrading it in-place the whole time, and there have been no issues at all. By now, I’m guessing the GUI software updater will prompt users to upgrade, but I upgraded following these instructions: https://fedoraproject.org/wiki/Upgrading_Fedora_using_package_manager#Instructions_to_upgrade_using_dnf It’s good practice to prepare your system before upgrading to a newer release. Like, cleaning out old packages, installing system updates etc. I also deleted old kernels, which I haven’t done since I first installed Fedora. You can list the installed kernel packages like this: rpm -qa | grep -i kernel Check what your currently running kernel is (uname -a) and manually delete all the older kernel packages with dnf. The old kernels will disappear from the GRUB menu automatically, if you run: grub2-mkconfig Thanks for this release of Fedora! 
## BH Or just do “rpm -q kernel” ## Zbigniew Jędrzejewski-Szmek In general there is no need to uninstall old kernels. DNF will automatically remove oldest kernels, always keeping the 3 newest ones. Doing it by hand is discouraged because there’s always the chance of removing the wrong kernel or all kernels. If you have more than four kernels installed than something is wrong with the removal process and you can report a bug. As far as old packages go, any old packages that would cause an problem during an upgrade (for example because they require a library which is now in an incompatible version) should be removed during an upgrade. To make this work, any packages which are known to cause such problems are declared in fedora-obsolete-packages. This process is partially manual, so if you encounter an error during upgrade that some old package is holding back the upgrade, report it as a bug an it will be added to fedora-obsolete-packages or otherwise fixed. So there should be no need to remove old packages either. But I’m happy that the upgrade worked for you 😉 ## Andre Gompel In order to modify the number of kernels installed, modify /etc/dnf/dnf.conf following line : (I set it to “2”). installonly_limit=3 ## Will Kemp I’ve been running Fedora since it was Red Hat 3 (nearly 25 years), and for as long as I can remember I’ve always kept a spare partition the size of the root partition, to install the next version in. In other words, I have two root partitions, the current one and the previous one – which will become the next one – and I rotate them as new versions are released. That way, I always do a clean install of the new version, and I’ve always got the previous version hopefully still bootable in case I need it. There are pros and cons of this method, of course. Pros: clean system, with no buildup of cruft, installation is fast, backup system is always ready to boot if necessery. 
Cons: requires manual configuration of partitions in the installer, and you have to keep track of which partition is what, some apps don’t get installed out of the box and have to be re-installed manually after the upgrade, uses a bit more space – but a, say, 32GB root partition, is insignificant given the sizes of hard drives nowadays, so that’s not a big deal. I tried doing an upgrade using dnf a few versions ago and it took a really long time, so I went back to the old method next time. ## Gianni Great ## Rohan Kumar Great ## S. Rose https://docs.fedoraproject.org/en-US/fedora/f32/release-notes/ ## wparedes Tnks ## M.el Great release: Been on it since Fedora 20. Great work! Exceptional! ## Alex COOL! Thanks! ## Fahim Murshed Awesome! ## hareesh Awesome. so excited… ## zzzz I am using Fedora 32 to leave this comment ## Mivall Amazing! thanks to the developers for such a great OS ## Andreas Thank you for that great work. I’m Fedora User since first days. I’ve updateted my Laptop. Now all of my ARMs (4x Odroid HC1, Odroid XU4, Raspi 4) are in the process to update to 32. ## Faisal Sani OMG! i just installed fedora spin BETA kde edition 10 hours ago. ## Adam Williamson Don’t worry – just update the system as usual and you will be up to date with the final release (in fact, a little after it). ## dwefe My fedora talk to me in native language, why not everywhere? ## Matthew Miller There is so much software included in Fedora that getting to 100% complete translation would be impossible. Take a look at our localization team to find out how you can help! https://fedoraproject.org/wiki/L10N ## Matthew P Delaforce I just tried to download the Fedora 32 Workstation ISO, but my Avast antivirus application is terminating the download, saying the mirror is infected with Win63:Evo-gen[susp]. I’ve tried 3 times so far and each time the same termination message from Avast, though two different mirrors were attempted. ## Paul W. 
Frields This sounds like a false positive, and you may want to report it to Avast. One way to check the download is to go ahead and pull down the ISO, and use the verify process documented on the website to satisfy yourself that it’s genuine. ## internet Thanks for the effort you guys put into this project. It feels so good to know that we humans can make this happen even when times are rough. And thank you RedHat for making it easier. Cheers. ## Stanford Bangaba Awesome! ## Antonio Nice new fedora.. I ve been on 32 beta for 5 weeks, and no big problems.. Worked flawless for daily driver in business. Good job fedora team! ## M.aD Congrats on another awesome release. ## clime Great! ## Leslie Satenstein Remember the old expression from a child movie. Fedora 32 is not a child, but the expression was: Everyday in everyway, Fedora gets better and better. Thank you Adam, Mathew, and the many generals and soldiers that made this version possible. By the way, one of the youtuber’s has stated that Fedora32 will be a rolling release. Is that not so? It has always been a rolling release for maintenance. Was it extended to shorter than 6 month releases? ## Adam Williamson Hey Leslie! Nothing has changed regarding the lifecycle or maintenance, things are just as they have always been. ## Ellen Thanks so much Fedora team for creating a system that requires no fiddling! The integration of EarlyOOM is a small step for the project but a giant leap for usability. Linux thrashing is the greatest menace on a low memory machine. ## Asiri Iroshan Love it. Using Fedora 31 Workstation here. Unfortunately I’m very busy these days with a lot of work that I cannot find a time to upgrade to Fedora 32. But I will be upgrading in a couple of weeks. The best Linux distribution!. Love it. ## jenche I agree that translation seems to lack behind. I’m a French user and for example, Geary seems to only be half translated in this language. 
Worse, some parts of the Gnome interface are broken in French, mostly because French tends to be more verbose than English. See for example these bugs: https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/2574 or https://gitlab.gnome.org/GNOME/gnome-control-center/-/issues/932 Nothing really serious but it gives the feeling of something not really polished. ## Adam Williamson Translations are pretty much entirely a volunteer effort, unfortunately :/ No company sees enough of a benefit to pay for this work. So they’re always vulnerable to being a bit haphazard. Thanks for reporting the UI bugs, too! ## Alberto Patino Congrats!!! I have 3 systems with Fedora 32 now. Only thing I’m struggling with is fonts. I’m not able to load old fonts I had with Fedora 30 and backwards. One more time congratulations and thanks for this new release. ## Oscar Very fast and stable. I have been using Fedora since version 17. Only version 18 had problems. I hope they continue like this always. ## Walter Harrison Stoermer Thank you, to the whole Fedora team. You guys did it again, and I love what you did with 32; I love Fedora! Gnome 3.36 is what my work-flow needed; keep up the great work! ## Angel Yocupicio It is grateful for me! Thank you very much to Mattew Miller and colaborators. ## saeb Can I upgrade with iso image? ## Terry Wang Good job, folks. Right after the hype (Ubuntu 20.04 LTS release 😉 I run Fedora on my home NAS (N54L G7 since Fedora 20) and old laptops D630 (along with Manjaro MacBook3,1 and Arch N150). In fact Fedora Core 1 was my first full time Desktop (GNOME 2) workstation back in Uni (until Fedora Core 4 or 5, I did the distro hop thing 😉 ## Francisco Reyes Muchas gracias a todo el equipo por este nuevo release. ## Thomas Stephen Lee Great. Installed on VM. Testing it now. ## Andre Gompel From Beta, to released Fedora 32 came pretty good ! Most apps work well, and RPMFUSION Repos for F32 was ready early. May I suggest here : 1) To create a Fedora-32 -post-install ? 
Because for a Fedora novice, it is needed for basic stuff which does work out of the box, to be functional. Comes to mind multimedia, rpmfusion repos, etc.. I have mine as a crude text file, that I could share, since Fedora 28, or so. 2) Like for https://www.if-not-true-then-false.com/ explain how to properly install, apps where a simple “dnf install , is not enough. For example I found the mediawiki server app, with MariaD, very challenging to fully work on Fedora 32. Andre Gompel ## skierpage It’s a good idea but it’s a huge subject because as this article makes clear, there are so many users of and use cases for Fedora. The quality of online guides varies, and there’s the usual problem of Google search results featuring old obsolete articles. “Things to do after installing Fedora 32” on fosspost might be OK, though I can’t vouch for many of the things it suggests. The need for multimedia codecs from RPMFusion is greatly reduced now that GStreamer supports AAC and H.264 along with MP3. The best way to install many “consumer” apps is now Flatpak. The best way to run complex server apps may be in a container, whether or not you’re using Fedora CoreOS. ## mike Done clean install F32 Cinnamon this morning and moved all my files from F31 onto it no problems . I have simple-scan installed but no desktop icon on 32 like on 31 ? ## ChezThijs Works like a charm. I love it ## skierpage I just upgraded my Fedora KDE spin. The upgrade was noticeably quicker than previous upgrades, nice! It warned about a couple of modular packages (libgit2 and ripgrep) but upgraded them fine anyway. ## Colombo Already installed. It looks great. ## Andre Gompel Could you comment here, on how easy/hard it was by reinstalling Fedora 32, over a dual boot, by keeping existing /home users, and not reformatting it (when it is a btrfs compressed partition), etc… My take is that the Anaconda Installer, with respect to “custom partitioning” is still quite buggy, has serious shortcomings. 
Furthermore, I suggest to automatically create an fstab entry as /win/c (r/w) for the NTFS windows partition, commented out for easy enabling etc… Please comment. ## Marko Anyone knows if VirtualBox 6.1 is going to build F32 soon? 🙂 ## Balthazar I’ve just installed F32 on my ASUS R700V laptop. It works fine but I’ve got a little problem with the touchpad. When I do a right-click, it’s a left-click which is executed. With a standard mouse, there is no problem. Any suggestion ? Thanks ## vm linuz Well done to Fedora. Do a great job. I love Fedora. ## ButterflyOfFire Congrats Fedora <3 ! ## Nick Why is Python so much slower in 32 than 31? ## Dezső József It is great and nice. But the Hungarian language became unchoseable in the installer. In the earlier versions I could install also in Hungarian language. Ok I could install in Englis language and after installation the change language to Hungarian is simple, bat it is a step backward or mistake. ## Attila Szabo Szia, hogy sikerult modositani a nyelvet? En az xfce-t hasznalom es sehogy nem sikerul beallitani a felhasznaloi felulet nyelvet magyarra. ## Dezső József Simán a beállítások eszközben a terület és nyelv. Talán “Region and Language” az angol neve a menüben. ## Chandramouli Narayanan Does not work for me on Thinkpad x220. I tried with a USB stick with the Fedora 32 x86_64 iso. The laptop keyboard and mouse don’t work during the installation. I tried with an external keyboard/mouse. The installation worked. But post-installation, the laptop keyboard didn’t work afterwards. And the system crashed with the external keyboard again. Franky, I had issues with Fedora 31 updates as well. I will try going back again to 31. If it doesn’t work, I am moving on to some other distro. ## Shreepad S That is truly unusual.. I’ve used Fedora from I think 22 onwards and never faced any issues on Thinkpads (T420, T470) with the keyboard/ mouse that could be attributed to the OS. 
The keyboard on my old T420 has problems now where certain keys stop functioning after some time so am using it with an external wired USB keyboard (Logitech) which works fine. I don’t think its a software issue as the problem surfaces after a few hours of continuous use and goes away after cold starts so I suspect there is a hardware/ heating issue of some kind… ## Ricardo Brites Using fedora since 2003, before i changed in 2003 i was using Red Hat 9.0. Have tried lots of other distros but can’t find one as cutting edge and stable as Fedora. Well done team!!! ## john posey Gnome 3.36 works in full screen mode okay. Smaller sized windows don’t interface too well … blink … flash … no focus … no control (unable to respond to dialog boxes) grainy graphics … ## Timtish Thank You so much! All dependencies are allowed! I can run ibm ide again) ## Jovino ironic that fedora 32 defacto kills the last fedora with 32 bit support ## millerthegorilla Fedora 32 is lovely, it looks great and, in most instances, it works really well. However, I have been unable to get the right click to work on my laptop. I have tried disabling mouse click emulation in gnome-tweaks, and have even tried using dconf-editor to adjust the right click setting, which allows either left or right click, as left click, to be held down and then the right click menu appears. It is extremely frustrating to not have a right click. Does anyone know how to restore it? Or the correct IRC channel on which to ask? Thanks ## JEREMIAH CERALDE I upgraded my Fedora Workstation from 31 to 32. One issue is my internet connection speed. Approximately, 10mbps upon using my phone while on my Fedora 32 workstation it’s only .50mbps. Been wondering why. Can someone help please. Thanks! ## Holzkerbe I’ve just recently switched back from Windows 10 to Linux which I’ve only rarely used as my everyday OS, but rather to get an idea about it while dabbling around with it. 
So at first I’ve got Ubuntu which I’ve already tried some years ago. Recently I’ve gave 20.04 a shot, but it still felt somewhat bloated and not responsive enough to me. Then I’ve heard about Fedora while reading about Linus Torvalds himself who apparently uses Fedora as well and then I immediately gave Fedora 32 a shot. All I have to say now is that I totally fell in love with it…! It’s slick, sleek, elegant, minimal and blazingly fast and responsive. Only downsides were the Anaconda installer which in comparison to Ubuntu’s installer is a bit confusing (drive management) and unresponsive and the missing driver for my HP Laptop’s RTL8821ce wifi adapter. I had to dig up a kernel module and provide it manually via DKMS which took a lot of time to research and execute. Wifi works now, but suffers from a few bugs though. Anyway you did an awesome job guys. I’m here to stay 🙂 Thanks for you effort and keep it rollin’! ## Shatadru Bandyopadhyay Oh no I am late this time! Upgrading right now! ## Linuxguy1712 My boot time god longer after the update 🙁 2 and a half minutes to boot after updating to fedora 32. Not a polished update. ## Ralf Oltmanns Well done. But I have one concern: With Fedora 32, VirtualBox 6.1 is broken. As VirtualBox-6.1 is no Fedora component, where would be the appropriate place to file a bug? I’m not sure if Oracle really cares, if their VirtualBox is running on Fedora or not. I’d really need VirtualBox for our vagrant boxes. (On a sidenote: Getting VirtualBox 6.1 to run on ArchLinux is also a pain right now)
12,165
How to check network card information on Linux
https://www.2daygeek.com/linux-unix-check-network-interfaces-names-nic-speed-ip-mac-address/
2020-04-29T21:48:47
[ "NIC", "ip" ]
https://linux.cn/article-12165-1.html
![](/data/attachment/album/202004/29/214835m1ms3n00s6qbcycz.jpg)

By default, you configure the primary network interface when you set up a server. That's part of the build work everyone does. Sometimes, for various reasons, you may need to configure additional network interfaces.

This could be network bonding/teaming for high availability, or a separate interface for application needs or backups.

To do that, you need to know how many interfaces your machine has, and their speeds, in order to configure them.

There are many commands for checking the available network interfaces, but here we use only the `ip` command. Later, we'll write a separate article covering all of these tools.

In this tutorial, we'll show you the available network interface card (NIC) information, such as interface names, associated IP addresses, MAC addresses, and interface speeds.

### What is the ip command?

The [ip command](https://www.2daygeek.com/ip-command-configure-network-interface-usage-linux/) is similar to `ifconfig` and is used to assign static IP addresses, routes, default gateways, and so on.

```
# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:a0:7d:5a brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.101/24 brd 192.168.1.101 scope global eth0
    inet6 fe80::f816:3eff:fea0:7d5a/64 scope link
       valid_lft forever preferred_lft forever
```

### What is the ethtool command?

`ethtool` is used to query or control network driver and hardware settings.

```
# ethtool eth0
```

### 1) How to check available network interfaces on Linux with the ip command

When you run the `ip` command without any arguments, it prints a lot of information. If you only want the available network interfaces, use the following tailored `ip` command:

```
# ip a |awk '/state UP/{print $2}'

eth0:
eth1:
```

### 2) How to check the IP address of a network interface on Linux with the ip command

If you only want to see which IP address is assigned to which interface, use the following tailored `ip` command:

```
# ip -o a show | cut -d ' ' -f 2,7
or
# ip a |grep -i inet | awk '{print $7, $2}'

lo 127.0.0.1/8
192.168.1.101/24
192.168.1.102/24
```

### 3) How to check the MAC address of a NIC on Linux with the ip command

If you only want to see the network interface names and their corresponding MAC addresses, use the following format.

To check the MAC address of a specific network interface:

```
# ip link show dev eth0 |awk '/link/{print $2}'
00:00:00:55:43:5c
```

To check the MAC addresses of all network interfaces, create this script:

```
# vi /opt/scripts/mac-addresses.sh

#!/bin/sh
ip a |awk '/state UP/{print $2}' | sed 's/://' | while read output;
do
    echo $output:
    ethtool -P $output
done
```

Run the script to get the MAC addresses of multiple network interfaces:

```
# sh /opt/scripts/mac-addresses.sh

eth0: Permanent address: 00:00:00:55:43:5c
eth1: Permanent address: 00:00:00:55:43:5d
```

### 4) How to check the network interface speed on Linux with the ethtool command

If you want to check the network interface speed on Linux, use the `ethtool` command.

To check the speed of a specific network interface:

```
# ethtool eth0 |grep "Speed:"

Speed: 10000Mb/s
```

To check the speed of all network interfaces, create this script:

```
# vi /opt/scripts/port-speed.sh

#!/bin/sh
ip a |awk '/state UP/{print $2}' | sed 's/://' | while read output;
do
    echo $output:
    ethtool $output |grep "Speed:"
done
```

Run the script to get the speeds of multiple network interfaces:

```
# sh /opt/scripts/port-speed.sh

eth0: Speed: 10000Mb/s
eth1: Speed: 10000Mb/s
```

### 5) A shell script to verify NIC information

With this shell script you can collect all of the information above: network interface names and their IP addresses, MAC addresses, and speeds. Create the script:

```
# vi /opt/scripts/nic-info.sh

#!/bin/sh
hostname
echo "-------------"
for iname in $(ip a |awk '/state UP/{print $2}')
do
    echo "$iname"
    ip a | grep -A2 $iname | awk '/inet/{print $2}'
    ip a | grep -A2 $iname | awk '/link/{print $2}'
    ethtool $iname |grep "Speed:"
done
```

Run the script to check NIC information:

```
# sh /opt/scripts/nic-info.sh

vps.2daygeek.com
----------------
eth0:
192.168.1.101/24
00:00:00:55:43:5c
Speed: 10000Mb/s
eth1:
192.168.1.102/24
00:00:00:55:43:5d
Speed: 10000Mb/s
```

---

via: <https://www.2daygeek.com/linux-unix-check-network-interfaces-names-nic-speed-ip-mac-address/>

Author: [Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) Topic selection: [lujun9972](https://github.com/lujun9972) Translator: [geekpi](https://github.com/geekpi) Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
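The `ip a | grep | awk` pipelines in the scripts above can be made a bit more robust with `ip -o a`, which prints one self-contained record per line. The parser below is exercised on canned sample text standing in for real `ip -o a` output (an assumption, since the exact output varies by system and address family):

```shell
# Extract "interface address/prefix" pairs from `ip -o a`-style output.
# Field 3 is the address family ("inet" for IPv4), field 2 the interface,
# field 4 the address with its prefix length.
iface_ips() { awk '$3 == "inet" {print $2, $4}'; }

# Sample input standing in for: ip -o a
iface_ips <<'EOF'
1: lo    inet 127.0.0.1/8 scope host lo
2: eth0    inet 192.168.1.101/24 brd 192.168.1.255 scope global eth0
EOF
```

On a live system you would pipe the real command through it: `ip -o a | iface_ips`. Filtering on the family field also cleanly skips `inet6` lines, which the `grep -i inet` pipeline in the article matches by accident.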
404
Not Found
null
12,166
Manjaro 20 Lysia arrives with ZFS and Snap support
https://itsfoss.com/manjaro-20-release/
2020-04-29T23:30:50
[ "Manjaro" ]
https://linux.cn/article-12166-1.html
> Manjaro Linux has refreshed its ISO with Manjaro 20 "Lysia". Snap and Flatpak packages are now supported in Pamac. A ZFS option has been added to the Manjaro Architect installer, and the release is based on the latest kernel 5.6.

![](/data/attachment/album/202004/29/232925j8paomvp11pfu12v.jpg)

New distribution releases have been pouring down like rain lately. [Ubuntu 20.04 LTS](/article-12142-1.html) was released last week, [Fedora 32](/article-12164-1.html) followed right after, and now [Manjaro has released version 20](https://forum.manjaro.org/t/manjaro-20-0-lysia-released/138633), codenamed Lysia.

### What's new in Manjaro 20 Lysia?

Quite a lot, actually. Let me show you some of the major new features in Manjaro 20.

#### New Matcha theme

Manjaro 20 has a new default theme called Matcha. It gives the desktop a more polished look.

![](/data/attachment/album/202004/29/233052k2o77y0et2tp9uyy.jpg)

#### Snap and Flatpak support

Support for Snap and Flatpak packages has been improved. You can use them from the command line if you want.

You can also enable Snap and Flatpak support in the Pamac graphical package manager.

![Enable Snap support in Pamac Manjaro](/data/attachment/album/202004/29/233052dr1wwmustgi91w7w.jpg)

Once enabled, you can find and install Snap/Flatpak applications in the Pamac software manager.

![Snap applications in Pamac](/data/attachment/album/202004/29/233055a1bm08c7gt8r8a7g.jpg)

#### Pamac offers to install new software based on searches (in GNOME)

In the GNOME variant, if you search for something, the Pamac software manager offers to install software matching the query. The GNOME Software center does the same on other distributions using the GNOME desktop.

#### ZFS support lands in Manjaro Architect

You can now easily use ZFS as the root filesystem in Manjaro Linux. Support for the [ZFS filesystem](https://itsfoss.com/what-is-zfs/) is available in [Manjaro Architect](https://itsfoss.com/manjaro-architect-review/).

Note that I'm talking about Manjaro Architect, the terminal-based installer. It is not the same as the regular graphical [Calamares installer](https://calamares.io/).

![](/data/attachment/album/202004/29/233056boj6iqdg0kgj0u6u.jpg)

#### Linux kernel 5.6

The latest stable [Linux kernel 5.6](https://itsfoss.com/linux-kernel-5-6/) brings more hardware support, such as Thunderbolt, Nvidia, and USB4. You can also use [WireGuard VPN](https://itsfoss.com/wireguard/).

![](/data/attachment/album/202004/29/233056xh7h6cc26ll6c2cz.jpg)

#### Miscellaneous other changes

* New desktop environment versions: Xfce 4.14, GNOME 3.36, and KDE Plasma 5.18.
* The new default shell is zsh.
* Display-Profiles allows you to store one or more profiles for your preferred display configuration.
* Improved Gnome-Layout-Switcher.
* Latest drivers.
* Improved and polished Manjaro tools.

### How to get Manjaro 20 Lysia?
如果你已经在使用 Manjaro,只需更新你的 Manjaro Linux 系统,你就应该已经在使用 Lysia 了。 Manjaro 采用了滚动发布模式,这意味着你不必手动从一个版本升级到另一个版本。只要有新的版本发布,不需要重新安装就可以使用了。 既然 Manjaro 是滚动发布的,为什么每隔一段时间就会发布一个新版本呢?这是因为他们要刷新 ISO,这样下载 Manjaro 的新用户就不用再安装过去几年的更新了。这就是为什么 Arch Linux 也会每个月刷新一次 ISO 的原因。 Manjaro 的“ISO 刷新”是有代号和版本的,因为它可以帮助开发者清楚地标明每个开发阶段的发展方向。 所以,如果你已经在使用它,只需使用 Pamac 或命令行[更新你的 Manjaro Linux 系统](https://itsfoss.com/update-arch-linux/)即可。 如果你想尝试 Manjaro 或者想使用 ZFS,那么你可以通过从它的网站上[下载 ISO](https://manjaro.org/download/) 来[安装 Manjaro](https://itsfoss.com/install-manjaro-linux/)。 愿你喜欢新的 Manjaro Linux 发布。 --- via: <https://itsfoss.com/manjaro-20-release/> 作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
12,168
IPv5 发生了什么?为什么有 IPv4、IPv6 但没有 IPv5?
https://itsfoss.com/what-happened-to-ipv5/
2020-04-30T09:36:00
[ "IPv4", "IPv6", "IPv5" ]
https://linux.cn/article-12168-1.html
![](/data/attachment/album/202004/30/093620td8b60tib8kwtscd.png) 如果你花过很多时间在互联网上,那么你应该已经听说过计算机每天使用的 IPv4 和 IPv6 协议。 你可能会问的一个问题是:为什么没有 IPv5?为什么 IPv6 在 IPv4 之后而不是 IPv5 之后出现?是否有 IPv5,如果是,那么 IPv5 发生了什么? 答案是肯定的,曾经有一个 IPv5。让我解释一下这里发生的事。 ### 互联网的早期历史 ![ARPA Logical Map in 1977 | Image courtesy: Wikipedia](/data/attachment/album/202004/30/093657hffmkfva14zfiudv.png) 在 1960 年代后期,美国国防部的[高级研究计划局](https://en.wikipedia.org/wiki/DARPA)(DARPA)发起了一个[项目](https://en.wikipedia.org/wiki/ARPANET)来连接全国的计算机。最初的目标是创建一个由全国 ARPA 资助的所有计算机组成的网络系统。 由于这是第一次将如此规模的网络整合在一起,因此他们也在不断发展自己的技术和硬件。他们首先做的工作之一就是开发名为<ruby> <a href="https://en.wikipedia.org/wiki/Transmission_Control_Protocol"> 传输控制协议 </a> <rt> Transmission Control Protocol </rt></ruby>(TCP)的<ruby> 互联网协议 <rt> Internet Protocol </rt></ruby>(IP)。该协议“可靠、有序,并会对运行于通过 IP 网络传输的主机上的应用的八进制(字节)流通讯进行错误检测”。简单来说,它可以确保数据安全到达。 最初,TCP 被设计为[“主机级别的端到端协议以及封装和路由协议”](https://fcw.com/articles/2006/07/31/what-ever-happened-to-ipv5.aspx)。但是,他们意识到他们需要拆分协议以使其更易于管理。于是决定由 IP 协议处理封装和路由。 那时,TCP 已经经历了三个版本,因此新协议被称为 IPv4。 ### IPv5 的诞生 IPv5 开始时有个不同的名字:<ruby> 互联网流协议 <rt> Internet Stream Protocol </rt></ruby>(ST)。它是[由 Apple、NeXT 和 Sun Microsystems](https://www.lifewire.com/what-happened-to-ipv5-3971327) 为试验流式语音和视频而创建的。 该新协议能够“在保持通信的同时,以特定频率传输数据包”。 ### 那么 IPv5 发生了什么? 
IPv5 从未被接受为正式的互联网协议。这主要是由于 32 位限制。 IPv5 使用与 IPv4 相同的寻址系统。每个地址由 0 到 255 之间的四组数字组成。这将可能的地址数量限制为 [43 亿](https://www.lifewire.com/what-happened-to-ipv5-3971327)个。 在 1970 年代初,这似乎比全世界所需要的还要多。但是,互联网的爆炸性增长证明了这一想法是错误的。2011 年,世界上的 IPv4 地址正式用完了。 在 1990 年代,一个新项目开始致力于研究下一代互联网协议(IPng)。这形成了 128 位的 IPv6。IPv6 地址包含 [“8 组 4 字符的十六进制数字”](https://www.lifewire.com/what-happened-to-ipv5-3971327),它可以包含从 0 到 9 的数字和从 A 到 F 的字母。与 IPv4 不同,IPv6 拥有数万亿个可能的地址,因此我们应该能安全一阵子。 同时,IPv5 奠定了 VoIP 的基础,而该技术已被我们用于当今世界范围内的通信。**因此,我想在某种程度上,你可以说 IPv5 仍然延续到了今天**。 希望你喜欢有关互联网历史的轶事。你可以阅读其他[关于 Linux 和技术的琐事文章](https://itsfoss.com/category/story/)。 如果你觉得这篇文章有趣,请花一点时间在社交媒体、Hacker News 或 [Reddit](https://reddit.com/r/linuxusersgroup) 上分享它。 --- via: <https://itsfoss.com/what-happened-to-ipv5/> 作者:[John Paul](https://itsfoss.com/author/john/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
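文中提到的 32 位与 128 位地址空间的差距,可以用 Python 标准库 `ipaddress` 简单验证一下(以下代码仅作示意,其中的 IPv6 地址是文档用的示例地址):

```python
import ipaddress

# IPv4:32 位地址,即 4 组 0~255 的数字,共 2^32 ≈ 43 亿个
ipv4_total = 2 ** 32
print(f"IPv4 地址总数:{ipv4_total:,}")   # 4,294,967,296

# IPv6:128 位地址,即 8 组 4 个十六进制字符
ipv6_total = 2 ** 128
print(f"IPv6 地址总数:{ipv6_total:,}")

# 标准库可以解析完整写法的 IPv6 地址,并给出压缩形式
addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)                    # 2001:db8::1
```

可以看到 IPv6 的地址数量是 IPv4 的 2^96 倍,这正是文中说“我们应该能安全一阵子”的原因。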
200
OK
If you have spent any amount of time in the world of the internet, you should have heard about the IPv4 and IPv6 protocols that our computers use every day. One question that you might be asking is: Why is there no IPv5? Why did IPv6 come after IPv4 and not IPv5? Was there ever an IPv5 and if yes, whatever happened to IPv5? The answer is yes, there was an IPv5…sort of. Let me quickly explain a few things around it. ## The early history of the internet ![Arpa Internet](https://itsfoss.com/content/images/wordpress/2020/04/Arpa_internet.png?fit=800%2C573&ssl=1) [Wikipedia](https://en.wikipedia.org/wiki/ARPANET) In the late 1960s, the US Department of Defense’s [Advanced Research Projects Agency](https://en.wikipedia.org/wiki/DARPA) (ARPA) started a [project](https://en.wikipedia.org/wiki/ARPANET) to link computers across the country. The initial goal was to create a networked system of all of the ARPA-funded computers across the country. Since this was the first time a network of this scale was put together, they were also creating the technology and hardware as they went. One of the first things they worked on was an internet protocol (IP) named [Transmission Control Protocol](https://en.wikipedia.org/wiki/Transmission_Control_Protocol) (TCP). This protocol provides “reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network”. Basically, it made sure data got where it needed to go safely. Originally, TCP was designed to be [“a host-level, end-to-end protocol and a packaging and routing protocol”](https://fcw.com/articles/2006/07/31/what-ever-happened-to-ipv5.aspx). However, they realized that they needed to split the protocol to make it more manageable. It was decided that IP would handle packaging and routing.
By this time TCP had gone through three versions, so the new protocol became known as IPv4. ## The birth of IPv5 IPv5 started life under a different name: Internet Stream Protocol (or ST). It was created to experiment with streaming voice and video [“by Apple, NeXT, and Sun Microsystems”](https://www.lifewire.com/what-happened-to-ipv5-3971327). This new protocol was capable of “transferring data packets on specific frequencies while maintaining communication”. ## So what happened to IPv5? ![What Happened To Ipv5](https://itsfoss.com/content/images/wordpress/2020/04/what-happened-to-ipv5.png) IPv5 was never accepted as an official internet protocol. This was mainly due to the 32-bit limitation. IPV5 used the same addressing system as IPv4. Each address was made up of four sets of numbers between 0 and 255. This limited the number of possible addresses to [4.3 billion](https://www.lifewire.com/what-happened-to-ipv5-3971327). In the early 1970s, that might have seemed like more than the world would ever need. However, the explosive growth of the Internet proved that idea wrong. In 2011, the world officially ran out of the IPv4 addresses. In the 1990s, a new project was started to work on the next generation of internet protocol (IPng). This led to the 128-bit IPv6. An IPv6 address contains a [“series of eight 4-character hexadecimal numbers”](https://www.lifewire.com/what-happened-to-ipv5-3971327) that can contain numbers from 0 to 9 and letters from A to F. Unlike IPv4, IPv6 had trillions of possible addresses, so we should be safe for a while. Meanwhile, IPv5 laid the groundwork for the voice-over-IP technology that we use to communicate all over the world today. **So, I guess in some small way, you could say that IPv5 still survives to this day**. I hope you liked this anecdote about internet history. You may read some other [trivia article about Linux and tech in general](https://itsfoss.com/category/story/). 
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit](http://reddit.com/r/linuxusersgroup).
12,170
YUM 和 RPM 包管理器的不同之处
https://www.2daygeek.com/comparison-difference-between-yum-vs-rpm/
2020-04-30T21:56:29
[ "RPM", "Yum" ]
https://linux.cn/article-12170-1.html
![](/data/attachment/album/202004/30/215525o4e88nen85d8dzd7.jpg) 软件包管理器在 Linux 系统中扮演着重要的角色。它允许你安装、更新、查看、搜索和删除软件包,以满足你的需求。 每个发行版都有自己的一套包管理器,依据你的 Linux 发行版来分别使用它们。 RPM 是最古老的传统软件包管理器之一,它是为基于 Red Hat 的系统设计的,如 Red Hat Enterprise Linux(RHEL)、CentOS、Fedora 和 openSUSE(它基于 suse Enterprise Linux)等系统。但在依赖解析和包更新(全系统更新/升级)方面,RPM 包管理器有一个突出的限制。 > > 如果你想知道 [YUM 和 DNF 包管理器的区别](/article-12161-1.html)请参考该文章。 > > > 这意味着 `yum` 可以自动下载并安装所有需要的依赖项,但 `rpm` 会告诉你安装一个依赖项列表,然后你必须手动安装。 当你想用 [rpm 命令](https://www.2daygeek.com/linux-rpm-command-examples-manage-packages-fedora-centos-rhel-systems/) 安装一组包时,这实际上是不可能的,而且很费时间。 这时,[YUM 包管理器](https://www.2daygeek.com/linux-yum-command-examples-manage-packages-rhel-centos-systems/) 就派上了用场,解决了这两个问题。 ### 什么是 RPM? RPM 指的是 RPM Package Manager(原名 Red Hat Package Manager),是一个功能强大的命令行包管理工具,是为 Red Hat 操作系统开发的。 它现在被用作许多 Linux 发行版的核心组件,如 Centos、Fedora、Oracle Linux、openSUSE 和 Mageia 等。 RPM 软件包管理器允许你在基于 RPM 的 Linux 系统上安装、升级、删除、查询和验证软件包。 RPM 文件的扩展名为 `.rpm`。RPM 包由一个存档文件组成,其中包含了一个特定包的库和依赖关系,这些库和依赖关系与系统上安装的其他包不冲突。 在 Linux 上有很多前端工具可以用来安装 RPM 包,与 RPM 工具相比,这些工具可以使安装过程更加高效,尤其是在处理依赖关系方面。 如果你想了解更多关于 Linux 发行版的前端包管理器的信息,请到下面的链接。 * [Linux 命令行包管理器列表](https://www.2daygeek.com/list-of-command-line-package-manager-for-linux/) 如果你想了解 Linux 的 GUI 包管理器,请到下面的链接。 * [Linux GUI 包管理器列表](https://www.2daygeek.com/list-of-graphical-frontend-tool-for-linux-package-manager/) ### 什么是 YUM? 
Yum 是一个 Linux 操作系统上的自由开源的命令行包管理程序,它使用 RPM 包管理器。Yum 是一个 RPM 的前端工具,可以自动解决软件包的依赖关系。它可以从发行版官方仓库和其他第三方仓库中安装 RPM 软件包。 Yum 允许你在系统中安装、更新、搜索和删除软件包。如果你想让你的系统保持更新,你可以通过 yum-cron 启用自动更新。 此外,如果你需要的话,它还允许你在 `yum update` 中排除一个或多个软件包。 Yum 是默认安装的,你不需要安装它。 | 编号 | RPM | YUM | | --- | --- | --- | | 1 | 红帽在 1997 年引入了 RPM | Yellowdog UPdater(YUP)开发于 1999-2001 年,YUM 于 2003 年取代了原来的 YUP 工具 | | 2 | RPM 代表 RPM Package manager(原名 Red Hat package manager) | YUM 代表 Yellowdog Updater Modified | | 3 | RPM 文件的命名规则如下,`httpd-2.4.6-92.el7.x86_64.rpm`:`httpd` - 实际的包名;`2.4.6` - 包发布版本号;`92` - 包发布子版本号;`el7` - Red Hat 版本;`x86_64` - 硬件架构;`rpm` - 文件扩展名 | 后台使用 rpm 数据库 | | 4 | 不解析依赖关系,你必须手动安装依赖 | 可以自动解析依赖关系并同时安装它们(任何包都会和它的依赖关系一起安装) | | 5 | 允许你同时安装多个版本的软件包 | 不允许,并显示该软件包已经安装 | | 6 | 当使用 RPM 命令安装一个软件包时,你必须提供 `.rpm` 软件包的确切位置 | 你可以安装仓库中的任何软件包,而你只需要知道软件包的名称就可以了 | | 7 | RPM 不依赖于 YUM | 它是一个前端工具,在后台使用 RPM 包管理器来管理包 | | 8 | RPM 在安装包的管理方面比较难 | YUM 是最简单的管理 RPM 包的方法 | | 9 | RPM 不能让你将整个系统升级到最新的版本 | YUM 可以让你将系统升级到最新的版本(例如 7.0 到 7.x 的小版本升级) | | 10 | RPM 不能让你自动更新/升级安装在系统上的软件包 | YUM 可以让你自动更新/升级系统上的更新 | | 11 | 不使用在线仓库来执行任何操作 | 完全依赖在线仓库来完成所有的工作 | | 12 | RPM 是一种包格式,它也是一个底层的包管理器,只做基本的事情 | 这是一个上层的包管理器前端,它可以完成你所需要的一切工作 | --- via: <https://www.2daygeek.com/comparison-difference-between-yum-vs-rpm/> 作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
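表格第 4 条提到,YUM 与 RPM 的核心区别在于 YUM 能自动解析依赖并按顺序安装。这一思想可以用下面这个极简的 Python 草图来示意(仅为演示,与 yum 的实际实现无关;其中的包名和依赖关系均为假设):

```python
# 演示性质的依赖解析草图:给定“包 -> 依赖列表”的映射,
# 用深度优先遍历算出一个先装依赖、再装本体的安装顺序
def resolve_install_order(package, deps, _seen=None, _order=None):
    """返回安装 package 所需的完整顺序(依赖在前)。"""
    if _seen is None:
        _seen, _order = set(), []
    if package in _seen:          # 已处理过的包不再重复安装
        return _order
    _seen.add(package)
    for dep in deps.get(package, []):
        resolve_install_order(dep, deps, _seen, _order)
    _order.append(package)
    return _order

# 假设的依赖关系(包名仅为示例)
deps = {
    "httpd": ["apr", "apr-util", "openssl"],
    "apr-util": ["apr"],
    "openssl": ["zlib"],
}

print(resolve_install_order("httpd", deps))
# ['apr', 'apr-util', 'zlib', 'openssl', 'httpd']
```

真正的 yum 还要处理版本约束、冲突和仓库元数据,这里只演示“先装依赖、再装本体”的遍历顺序——也就是 `rpm` 需要你手动完成、而 `yum` 自动完成的那部分工作。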
404
Not Found
null
12,171
Rambox:Linux 中多合一的消息收发工具
https://itsfoss.com/rambox/
2020-04-30T22:26:49
[ "Rambox", "消息", "IM" ]
https://linux.cn/article-12171-1.html
> > Rambox 是一个多合一消息收发工具,允许你将多种服务(如 Discord、Slack、Facebook Messenger)和数百个此类服务结合在一起。 > > > ### Rambox:在单个应用中添加多个消息服务 ![](/data/attachment/album/202004/30/222658waton215q2zs5qqz.jpg) Rambox 是通过安装单个应用管理多个通信服务的最佳方式之一。你可以在一个界面使用[多个消息服务](https://itsfoss.com/best-messaging-apps-linux/),如 Facebook Messenger、Gmail chats、AOL、Discord、Google Duo、[Viber](https://itsfoss.com/viber-linux-client-beta-install/) 等。 这样,你就不需要安装单独的应用或者在浏览器中一直打开着。你可以使用主密码锁定 Rambox 应用。你还可以使用“请勿打扰”功能。 Rambox 提供可免费使用的[开源社区版](https://rambox.pro/#ce)。付费专业版允许你访问 600 多个应用,而社区版则包含 99 多个应用。专业版本具有额外的功能,如主题、休眠、ad-block、拼写检查和高级支持。 不用担心。开源社区版本身非常有用,你甚至不需要这些专业功能。 ### Rambox 的功能 ![](/data/attachment/album/202004/30/222658j5qemexmsieynw57.png) 虽然你应该在开源版中找到大多数基本功能,但你可能会注意到其中一些功能仅限于专业版。 此处,我说下所有的基本功能: * 在开源版本中,你有大约 100 个应用/服务可供选择 * 能够使用单个主密码锁保护应用 * 能够锁定加载的每个会话 * 请勿打扰模式 * 能够跨多个设备同步应用和配置 * 你可以创建和添加自定义应用 * 支持键盘快捷键 * 启用/禁用应用而无需删除它们 * JS 和 CSS 注入支持,以调整应用的样式 * Ad-block (**专业版**) * 休眠支持 (**专业版**) * 主题支持(**专业版**) * 移动设备视图 (**专业版**) * 拼写检查 (**专业版**) * 工作时间 - 计划传入通知的时间 (**专业版**) * 支持代理 (**专业版**) 除了我在这里列出的内容外,你还可以在 Rambox Pro 版本中找到更多功能。要了解有关它的更多信息,你可以参考[正式功能列表](https://rambox.pro/#features)。 还值得注意的是,你不能超过 3 个活跃并发设备的连接。 ### 在 Linux 上安装 Rambox 你可以在[官方下载页](https://rambox.pro/#ce)获取 .AppImage 文件来运行 Rambox。如果你不清楚,你可以参考我们的指南,了解如何[在 Linux 上使用 AppImage 文件](https://itsfoss.com/use-appimage-linux/)。 另外,你也可以从 [Snap 商店](https://snapcraft.io/rambox)获取它。此外,请查看其 [GitHub release](https://github.com/ramboxapp/community-edition/releases) 部分的 .deb / .rpm 或其他包。 * [下载 Rambox 社区版](https://rambox.pro/#ce) ### 总结 使用 Rambox 安装大量应用可能会有点让人不知所措。因此,我建议你在添加更多应用并将其用于工作时监视内存使用情况。 还有一个类似的应用称为 [Franz](https://itsfoss.com/franz-messaging-app/),它也像 Rambox 部分开源、部分高级版。 尽管像 Rambox 或 Franz 这样的解决方案非常有用,但它们并不总是节约资源,特别是如果你同时使用数十个服务。因此,请留意系统资源(如果你注意到对性能的影响)。 除此之外,这是一个令人印象深刻的应用。你有试过了么?欢迎随时让我知道你的想法! 
--- via: <https://itsfoss.com/rambox/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
*Brief: Rambox is an all-in-one messenger that lets you combine multiple services like Discord, Slack, Facebook Messenger and hundreds of more such services in one place. * ## Rambox: Add multiple messaging Services in a single app ![Rambox Ce](https://itsfoss.com/content/images/wordpress/2020/03/rambox-ce.jpg) Rambox is one of the best ways to manage multiple services for communication through a single app installed. You can use [multiple messaging services](https://itsfoss.com/best-messaging-apps-linux/) like Facebook Messenger, Gmail chats, AOL, Discord, Google Duo, [Viber](https://itsfoss.com/viber-linux-client-beta-install/) and a lot more from the same interface. This way, you don’t need to install individual apps or keep them opened in browser all the time. You can use a master password to lock the Rambox application. You can also use do not disturb feature. Rambox offers an [open source community edition](https://rambox.pro/#ce) which is free to use. The paid pro version gives you access to 600+ apps while the community addition has 99+ apps. Pro version has additional features like themes, hibernation, ad-block, spell check and premium support. Don’t worry. The open source community edition itself is quite useful and you may not even need those pro features. ## Features of Rambox ![Rambox Preferences](https://itsfoss.com/content/images/wordpress/2020/03/rambox-preferences.png) While you should find most of the essential features in the open-source edition, you might notice some of them limited to the pro version. Here, I’ve mentioned all the essential features available: - You get about 100 apps/services to choose from in the open-source edition - Ability to protect the app with a single Master password lock - Ability to lock each session that you load up - Do Not Disturb mode - Ability to sync apps and configuration across multiple devices. 
- You can create and add custom apps - Support for keyboard shortcuts - Ability to enable/disable apps without needing to delete them - JS & CSS injection support to tweak the styling of apps - Ad-block ( **pro version**) - Hibernation support ( **pro version**) - Theme support ( **pro version**) - Mobile view **(pro version)** - Spell check **(pro version)** - Work hours – to schedule a time for incoming notifications **(pro version)** - Proxies support **(pro version)** In addition to what I’ve listed here, you might find some more features in the Rambox Pro edition. To know more about it, you can refer to the [official list of features](https://rambox.pro/#features). It is also worth noting that you cannot have more than 3 active simultaneous device connections. ## Installing Rambox on Linux You can easily get started using Rambox using the **.AppImage** file available on the [official download page](https://rambox.pro/#ce). If you’re curious, you can refer our guide on how to [use the AppImage file on Linux](https://itsfoss.com/use-appimage-linux/). In either case, you can also get it from the [Snap store](https://snapcraft.io/rambox). Also, feel free to check their [GitHub releases section](https://github.com/ramboxapp/community-edition/releases) for **.deb / .rpm** or other packages. ## Wrapping Up It can be a little overwhelming to have a lot of apps installed using Rambox. So, I’d suggest you monitor the RAM usage when adding more apps and using them for work. There is also a similar app called [Franz](https://itsfoss.com/franz-messaging-app/) which is also part open source and part premium like Rambox. Even though solutions like Rambox or Franz are quite useful, they aren’t always resource-friendly, specially if you start using tens of services at the same time. So, keep an eye on your system resources (if you notice a performance impact). Otherwise, it’s an impressive app that does the work that you’d expect. Have you tried it out? 
Feel free to let me know your thoughts!
12,172
使用 Python 来可视化 COVID-19 预测
https://opensource.com/article/20/4/python-data-covid-19
2020-05-01T19:37:00
[ "可视化" ]
https://linux.cn/article-12172-1.html
> > 我将演示如何利用提供的全球病毒传播的开放数据,使用开源库来创建两个可视效果。 > > > ![](/data/attachment/album/202005/01/193624a2p2osojyf0yg4go.jpg) 使用 [Python](https://opensource.com/resources/python) 和一些图形库,你可以预测 COVID-19 确诊病例总数,也可以显示一个国家(本文以印度为例)在给定日期的死亡总数。人们有时需要帮助解释和处理数据的意义,所以本文还演示了如何为五个国家创建一个动画横条形图,以显示按日期显示病例的变化。 ### 印度的确诊病例和死亡人数预测 这要分三步来完成。 #### 1、下载数据 科学数据并不总是开放的,但幸运的是,许多现代科学和医疗机构都乐于相互之间及与公众共享信息。关于 COVID-19 病例的数据可以在网上查到,并且经常更新。 要解析这些数据,首先必须先下载。 <https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv>。 直接将数据加载到 Pandas `DataFrame` 中。Pandas 提供了一个函数 `read_csv()`,它可以获取一个 URL 并返回一个 `DataFrame` 对象,如下所示。 ``` import pycountry import plotly.express as px import pandas as pd URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv' df1 = pd.read_csv(URL_DATASET) print(df1.head(3)) # 获取数据帧中的前 3 项 print(df1.tail(3)) # 获取数据帧中的后 3 项 ``` 数据集的顶行包含列名。 1. `Date` 2. `Country` 3. `Confirmed` 4. `Recovered` 5. `Deaths` `head` 查询的输出包括一个唯一的标识符(不作为列列出)和每个列的条目。 ``` 0 2020-01-22 Afghanistan 0 0 0 1 2020-01-22 Albania 0 0 0 1 2020-01-22 Algeria 0 0 0 ``` `tail` 查询的输出类似,但包含数据集的尾端。 ``` 12597 2020-03-31 West Bank and Gaza 119 18 1 12598 2020-03-31 Zambia 35 0 0 12599 2020-03-31 Zimbabwe 8 0 1 ``` 从输出中,可以看到 DataFrame(`df1`)有以下几个列: 1. 日期 2. 国家 3. 确诊 4. 康复 5. 
死亡 此外,你可以看到 `Date` 栏中的条目从 1 月 22 日开始到 3 月 31 日。这个数据库每天都会更新,所以你会有当前的值。 #### 2、选择印度的数据 在这一步中,我们将只选择 DataFrame 中包含印度的那些行。这在下面的脚本中可以看到。 ``` #### ----- Step 2 (Select data for India)---- df_india = df1[df1['Country'] == 'India'] print(df_india.head(3)) ``` #### 3、数据绘图 在这里,我们创建一个条形图。我们将把日期放在 X 轴上,把确诊的病例数和死亡人数放在 Y 轴上。这一部分的脚本有以下几个值得注意的地方。 * `plt.rcParams["figure.figsize"]=20,20` 这一行代码只适用于 Jupyter。所以如果你使用其他 IDE,请删除它。 * 注意这行代码:`ax1 = plt.gca()`。为了确保两个图,即确诊病例和死亡病例的图都被绘制在同一个图上,我们需要给第二个图的 `ax` 对象。所以我们使用 `gca()` 来完成这个任务。(顺便说一下,`gca` 代表 “<ruby> 获取当前坐标轴 <rt> get current axis </rt></ruby>”) 完整的脚本如下所示。 ``` # Author:- Anurag Gupta # email:- [email protected] %matplotlib inline import matplotlib.pyplot as plt import pandas as pd #### ----- Step 1 (Download data)---- URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv' df1 = pd.read_csv(URL_DATASET) # print(df1.head(3)) # Uncomment to see the dataframe #### ----- Step 2 (Select data for India)---- df_india = df1[df1['Country'] == 'India'] print(df_india.head(3)) #### ----- Step 3 (Plot data)---- # Increase size of plot plt.rcParams["figure.figsize"]=20,20 # Remove if not on Jupyter # Plot column 'Confirmed' df_india.plot(kind = 'bar', x = 'Date', y = 'Confirmed', color = 'blue') ax1 = plt.gca() df_india.plot(kind = 'bar', x = 'Date', y = 'Deaths', color = 'red', ax = ax1) plt.show() ``` 整个脚本[可在 GitHub 上找到](https://raw.githubusercontent.com/ag999git/jupyter_notebooks/master/corona_bar_india)。 ### 为五个国家创建一个动画水平条形图 关于 Jupyter 的注意事项:要在 Jupyter 中以动态动画的形式运行,而不是静态 png 的形式,你需要在单元格的开头添加一个神奇的命令,即: `%matplotlib notebook`。这将使图形保持动态,而不是显示为静态的 png 文件,因此也可以显示动画。如果你在其他 IDE 上,请删除这一行。 #### 1、下载数据 这一步和前面的脚本完全一样,所以不需要重复。 #### 2、创建一个所有日期的列表 如果你检查你下载的数据,你会发现它有一列 `Date`。现在,这一列对每个国家都有一个日期值。因此,同一个日期会出现多次。我们需要创建一个只具有唯一值的日期列表。这会用在我们条形图的 X 轴上。我们有一行代码,如 `list_dates = df[‘Date’].unique()`。`unique()` 方法将只提取每个日期的唯一值。 #### 3、挑选五个国家并创建一个 `ax` 对象。 
做一个五个国家的名单。(你可以选择你喜欢的国家,也可以增加或减少国家的数量。)我也做了一个五个颜色的列表,每个国家的条形图的颜色对应一种。(如果你喜欢的话,也可以改一下。)这里有一行重要的代码是:`fig, ax = plt.subplots(figsize=(15, 8))`。这是创建一个 `ax` 对象所需要的。 #### 4、编写回调函数 如果你想在 Matplotlib 中做动画,你需要创建一个名为 `matplotlib.animation.FuncAnimation` 的类的对象。这个类的签名可以在网上查到。这个类的构造函数,除了其他参数外,还需要一个叫 `func` 的参数,你必须给这个参数一个回调函数。所以在这一步中,我们会写个回调函数,这个回调函数会被反复调用,以渲染动画。 #### 5、创建 `FuncAnimation` 对象 这一步在上一步中已经部分说明了。 我们创建这个类的对象的代码是: ``` my_anim = animation.FuncAnimation(fig = fig, func = plot_bar, frames = list_dates, blit = True, interval=20) ``` 要给出的三个重要参数是: * `fig`,必须给出一个 fig 对象,也就是我们之前创建的 fig 对象。 * `func`,必须是回调函数。 * `frames`,必须包含要做动画的变量。在我们这里,它是我们之前创建的日期列表。 #### 6、将动画保存为 mp4 文件 你可以将创建的动画保存为 mp4 文件。但是,你需要 `ffmpeg`。你可以用 `pip` 下载:`pip install ffmpeg-python`,或者用 conda(在 Jupyter 上):`install -c conda-forge ffmpeg`。 最后,你可以使用 `plt.show()` 运行动画。请注意,在许多平台上,`ffmpeg` 可能无法正常工作,可能需要进一步“调整”。 ``` %matplotlib notebook # Author:- Anurag Gupta # email:- [email protected] import pandas as pd import matplotlib.pyplot as plt import matplotlib.animation as animation from time import sleep #### ---- Step 1:- Download data URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv' df = pd.read_csv(URL_DATASET, usecols = ['Date', 'Country', 'Confirmed']) # print(df.head(3)) # uncomment this to see output #### ---- Step 2:- Create list of all dates list_dates = df['Date'].unique() # print(list_dates) # Uncomment to see the dates #### --- Step 3:- Pick 5 countries. 
Also create ax object fig, ax = plt.subplots(figsize=(15, 8)) # We will animate for these 5 countries only list_countries = ['India', 'China', 'US', 'Italy', 'Spain'] # colors for the 5 horizontal bars list_colors = ['black', 'red', 'green', 'blue', 'yellow'] ### --- Step 4:- Write the call back function # plot_bar() is the call back function used in FuncAnimation class object def plot_bar(some_date): df2 = df[df['Date'].eq(some_date)] ax.clear() # Only take Confirmed column in descending order df3 = df2.sort_values(by = 'Confirmed', ascending = False) # Select the top 5 Confirmed countries df4 = df3[df3['Country'].isin(list_countries)] # print(df4) # Uncomment to see that dat is only for 5 countries sleep(0.2) # To slow down the animation # ax.barh() makes a horizontal bar plot. return ax.barh(df4['Country'], df4['Confirmed'], color= list_colors) ###----Step 5:- Create FuncAnimation object--------- my_anim = animation.FuncAnimation(fig = fig, func = plot_bar, frames= list_dates, blit=True, interval=20) ### --- Step 6:- Save the animation to an mp4 # Place where to save the mp4. Give your file path instead path_mp4 = r'C:\Python-articles\population_covid2.mp4' # my_anim.save(path_mp4, fps=30, extra_args=['-vcodec', 'libx264']) my_anim.save(filename = path_mp4, writer = 'ffmpeg', fps=30, extra_args= ['-vcodec', 'libx264', '-pix_fmt', 'yuv420p']) plt.show() ``` 完整的脚本[可以在 GitHub 上找到](https://raw.githubusercontent.com/ag999git/jupyter_notebooks/master/corona_bar_animated)。 --- via: <https://opensource.com/article/20/4/python-data-covid-19> 作者:[AnuragGupta](https://opensource.com/users/999anuraggupta) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Using [Python](https://opensource.com/resources/python) and some graphing libraries, you can project the total number of confirmed cases of COVID-19, and also display the total number of deaths for a country (this article uses India as an example) on a given date. Humans sometimes need help interpreting and processing the meaning of data, so this article also demonstrates how to create an animated horizontal bar graph for five countries, showing the variation of cases by date. ## Projecting confirmed cases and deaths for India This is done in three steps. ### 1. Download data Scientific data isn't always open, but fortunately, many modern science and healthcare organizations are eager to share information with each other and the public. Data about COVID-19 cases is available online, and it's updated frequently. To parse the data, you first must download it: [https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv](https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv) Load the data directly into a Pandas DataFrame. 
Pandas provides a function, **read_csv()**, which can take a URL and give back a DataFrame object, as shown below: ``` import pycountry import plotly.express as px import pandas as pd URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv' df1 = pd.read_csv(URL_DATASET) print(df1.head(3)) # Get first 3 entries in the dataframe print(df1.tail(3)) # Get last 3 entries in the dataframe ``` The top row of the data set contains column names: - Date - Country - Confirmed - Recovered - Deaths The output of the **head** query includes a unique identifier (not listed as a column) plus an entry for each column: ``` 0 2020-01-22 Afghanistan 0 0 0 1 2020-01-22 Albania 0 0 0 1 2020-01-22 Algeria 0 0 0 ``` The output of the **tail** query is similar but contains the tail end of the data set: ``` 12597 2020-03-31 West Bank and Gaza 119 18 1 12598 2020-03-31 Zambia 35 0 0 12599 2020-03-31 Zimbabwe 8 0 1 ``` From the output, you can see that the DataFrame (**df1**) has the following columns: - Date - Country - Confirmed - Recovered - Dead Further, you can see that the **Date** column has entries starting from January 22 to March 31. This database is updated daily, so you will have current values. ### 2. Select data for India In this step, we will select only those rows in the DataFrame that include India. This is shown in the script below: ``` #### ----- Step 2 (Select data for India)---- df_india = df1[df1['Country'] == 'India'] print(df_india.head(3)) ``` ### 3. Plot data Here we create a bar chart. We will put the dates on the X-axis and the number of confirmed cases and the number of deaths on the Y-axis. There are a few noteworthy things about this part of the script which are as follows: - The line of code: **plt.rcParams["**is meant only for Jupyter. So remove it if you are using some other IDE.*figure.figsize"*]=20,20 - Notice the line of code: **ax1 = plt.gca()**. To ensure that both the plots i.e. 
for confirmed cases as well as for deaths are plotted on the same graph, we need to give to the second graph the**ax**object of the plot. So we use**gca()**to do this. (By the way, 'gca' stands for 'get current axis'). The complete script is given below: ``` # Author:- Anurag Gupta # email:- [email protected] %matplotlib inline import matplotlib.pyplot as plt import pandas as pd #### ----- Step 1 (Download data)---- URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv' df1 = pd.read_csv(URL_DATASET) # print(df1.head(3)) # Uncomment to see the dataframe #### ----- Step 2 (Select data for India)---- df_india = df1[df1['Country'] == 'India'] print(df_india.head(3)) #### ----- Step 3 (Plot data)---- # Increase size of plot plt.rcParams["figure.figsize"]=20,20 # Remove if not on Jupyter # Plot column 'Confirmed' df_india.plot(kind = 'bar', x = 'Date', y = 'Confirmed', color = 'blue') ax1 = plt.gca() df_india.plot(kind = 'bar', x = 'Date', y = 'Deaths', color = 'red', ax = ax1) plt.show() ``` The entire script is [available on GitHub](https://raw.githubusercontent.com/ag999git/jupyter_notebooks/master/corona_bar_india). ## Creating an animated horizontal bar graph for five countries Note for Jupyter: To run this in Jupyter as a dynamic animation rather than as a static png, you need to add a magic command at the beginning of your cell, namely: **%matplotlib notebook**. This will keep the figure alive instead of displaying a static png file and can hence also show animations. If you are on another IDE, remove this line. ### 1. Download the data This step is exactly the same as in the previous script, and therefore, it need not be repeated. ### 2. Create a list of all dates If you examine the data you downloaded, you notice that it has a column **Date**. Now, this column has a date value for each country. So the same date is occurring a number of times. We need to create a list of dates with only unique values. 
This will be used on the X-axis of our bar charts. We have a line of code like: **list_dates = df['Date'].unique()**. The **unique()** method will pick up only the unique values for each date. ### 3. Pick five countries and create an **ax** object Take a list of five countries. (You can choose whatever countries you prefer, or even increase or decrease the number of countries). I have also taken a list of five colors for the bars of each country. (You can change this too if you like). One important line of code here is: **fig, ax = plt.subplots(figsize=(15, 8))**. This is needed to create an **ax** object. ### 4. Write the call back function If you want to do animation in Matplotlib, you need to create an object of a class called **matplotlib.animation.FuncAnimation**. The signature of this class is available online. The constructor of this class, apart from other parameters, also takes a parameter called **func**, and you have to give this parameter a callback function. So in this step, we will write the callback function, which is repeatedly called in order to render the animation. ### 5. Create **FuncAnimation** object This step has partly been explained in the previous step. Our code to create an object of this class is:
Please note that on many platforms, the **ffmpeg** may not work properly and may require further "tweaking." ``` %matplotlib notebook # Author:- Anurag Gupta # email:- [email protected] import pandas as pd import matplotlib.pyplot as plt import matplotlib.animation as animation from time import sleep #### ---- Step 1:- Download data URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv' df = pd.read_csv(URL_DATASET, usecols = ['Date', 'Country', 'Confirmed']) # print(df.head(3)) # uncomment this to see output #### ---- Step 2:- Create list of all dates list_dates = df['Date'].unique() # print(list_dates) # Uncomment to see the dates #### --- Step 3:- Pick 5 countries. Also create ax object fig, ax = plt.subplots(figsize=(15, 8)) # We will animate for these 5 countries only list_countries = ['India', 'China', 'US', 'Italy', 'Spain'] # colors for the 5 horizontal bars list_colors = ['black', 'red', 'green', 'blue', 'yellow'] ### --- Step 4:- Write the call back function # plot_bar() is the call back function used in FuncAnimation class object def plot_bar(some_date): df2 = df[df['Date'].eq(some_date)] ax.clear() # Only take Confirmed column in descending order df3 = df2.sort_values(by = 'Confirmed', ascending = False) # Select the top 5 Confirmed countries df4 = df3[df3['Country'].isin(list_countries)] # print(df4) # Uncomment to see that dat is only for 5 countries sleep(0.2) # To slow down the animation # ax.barh() makes a horizontal bar plot. return ax.barh(df4['Country'], df4['Confirmed'], color= list_colors) ###----Step 5:- Create FuncAnimation object--------- my_anim = animation.FuncAnimation(fig = fig, func = plot_bar, frames= list_dates, blit=True, interval=20) ### --- Step 6:- Save the animation to an mp4 # Place where to save the mp4. 
# Give your file path instead
path_mp4 = r'C:\Python-articles\population_covid2.mp4'
# my_anim.save(path_mp4, fps=30, extra_args=['-vcodec', 'libx264'])
my_anim.save(filename = path_mp4, writer = 'ffmpeg', fps=30, extra_args= ['-vcodec', 'libx264', '-pix_fmt', 'yuv420p'])
plt.show()
```

The complete script is [available on GitHub](https://raw.githubusercontent.com/ag999git/jupyter_notebooks/master/corona_bar_animated).
12,173
使用 Pixelorama 创建令人惊叹的像素艺术
https://itsfoss.com/pixelorama/
2020-05-01T23:19:00
[ "图形编辑器", "像素" ]
https://linux.cn/article-12173-1.html
> Pixelorama 是一个跨平台、自由开源的 2D 精灵编辑器。它在一个整洁的用户界面中提供了创建像素艺术的所有必要工具。

### Pixelorama:开源 Sprite 编辑器

[Pixelorama](https://www.orama-interactive.com/pixelorama) 是 [Orama 互动](https://www.orama-interactive.com/)公司的年轻游戏开发人员创建的一个工具。他们已经开发了一些 2D 游戏,其中一些使用了像素艺术。

虽然 Orama 主要从事于游戏开发,但开发人员也创建实用工具,帮助他们(和其他人)创建这些游戏。

自由开源的<ruby> 精灵 <rt> Sprite </rt></ruby>编辑器 Pixelorama 就是这样一个实用工具。它构建在 [Godot 引擎](https://godotengine.org/)之上,非常适合创作像素艺术。

![Pixelorama screenshot](/data/attachment/album/202005/01/232159a4rr7rmwftygb7b3.jpg)

你看到上面截图中的像素艺术了吗?它是使用 Pixelorama 创建的。这段视频展示了制作上述图片的延时过程。

### Pixelorama 的功能

以下是 Pixelorama 提供的主要功能:

* 多种工具,如铅笔、橡皮擦、填充桶、取色器等
* 多层系统,你可以根据需要添加、删除、上下移动、克隆和合并多个层
* 支持 Spritesheets
* 导入图像并在 Pixelorama 中编辑它们
* 带有 [Onion Skinning](https://en.wikipedia.org/wiki/Onion_skinning) 的动画时间线
* 自定义画笔
* 以 Pixelorama 的自定义文件格式 .pxo 保存并打开你的项目
* 水平和垂直镜像绘图
* 用于创建图样的磁贴模式
* 拆分屏幕模式和迷你画布预览
* 使用鼠标滚轮缩放
* 无限次撤消和重做
* 缩放、裁剪、翻转、旋转、颜色反转和去饱和图像
* 键盘快捷键
* 提供多种语言
* 支持 Linux、Windows 和 macOS

### 在 Linux 上安装 Pixelorama

Pixelorama 提供 Snap 应用,如果你使用的是 Ubuntu,那么可以在软件中心找到它。

![Pixelorama is available in Ubuntu Software Center](/data/attachment/album/202005/01/232200gcz8xgyp4tpxgcbj.jpg)

或者,如果你在 [Linux 发行版上启用了 Snap 支持](https://itsfoss.com/install-snap-linux/),那么可以使用此命令安装它:

```
sudo snap install pixelorama
```

如果你不想使用 Snap,不用担心。你可以从[他们的 GitHub 仓库](https://github.com/Orama-Interactive/Pixelorama)下载最新版本的 Pixelorama,[解压 zip 文件](https://itsfoss.com/unzip-linux/),你会看到一个可执行文件。授予此文件执行权限,并双击它运行应用。

* [下载 Pixelorama](https://github.com/Orama-Interactive/Pixelorama/releases)

### 总结

![Pixelorama Welcome Screen](/data/attachment/album/202005/01/231916iikvijis6wi14hhn.jpg)

在 Pixelorama 的功能中,它说你可以导入图像并对其进行编辑。我想,这只对某些类型的文件成立,因为当我尝试导入 PNG 或 JPEG 文件时,程序崩溃了。

然而,我可以像一个 3 岁的孩子那样随意涂鸦并制作像素艺术。我对艺术不是很感兴趣,但我认为这[对 Linux 上的数字艺术家是个有用的工具](https://itsfoss.com/best-linux-graphic-design-software/)。

我喜欢这样的想法:尽管是游戏开发人员,但他们创建的工具,可以帮助其他游戏开发人员和艺术家。这就是开源的精神。

如果你喜欢这个项目,并且会使用它,请考虑通过捐赠来支持他们。[It’s FOSS 捐赠了](https://itsfoss.com/donations-foss/) 25 美元,以感谢他们的努力。

* [向 Pixelorama 捐赠(主要开发者的个人 Paypal 账户)](https://www.paypal.me/erevos)

你喜欢 Pixelorama 吗?你是否使用其他开源精灵编辑器?请随时在评论栏分享你的观点。

---

via: <https://itsfoss.com/pixelorama/>

作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
*Brief: Pixelorama is a cross-platform, free and open source 2D sprite editor. It provides all the necessary tools to create pixel art in a neat user interface.*

## Pixelorama: open source sprite editor

[Pixelorama](https://www.orama-interactive.com/pixelorama) is a tool created by young game developers at [Orama Interactive](https://www.orama-interactive.com/). They have developed a few 2D games and a couple of them use pixel art.

While Orama is primarily into game development, the developers are also creating utility tools that help them (and others) create those games.

The free and open source sprite editor Pixelorama is one such utility tool. It’s built on top of [Godot Engine](https://godotengine.org/) and is perfect for creating pixel art.

![Pixelorama free sprite editor for making pixel art](https://itsfoss.com/content/images/wordpress/2020/03/pixelorama-v6.jpg)

You see the pixel art in the screenshot above? It’s been created using Pixelorama. This video shows a timelapse of creating the above image.
## Features of Pixelorama

Here are the main features Pixelorama provides:

- Multiple tools like pencil, eraser, fill bucket, color picker, etc.
- Multiple layer system that allows you to add, remove, move up and down, clone and merge as many layers as you like
- Support for spritesheets
- Import images and edit them inside Pixelorama
- Animation timeline with [Onion Skinning](https://en.wikipedia.org/wiki/Onion_skinning)
- Custom brushes
- Save and open your projects in Pixelorama’s custom file format, .pxo
- Horizontal & vertical mirrored drawing
- Tile Mode for pattern creation
- Split screen mode and mini canvas preview
- Zoom with mouse scroll wheel
- Unlimited undo and redo
- Scale, crop, flip, rotate, color invert and desaturate your images
- Keyboard shortcuts
- Available in several languages
- Supports Linux, Windows and macOS

## Installing Pixelorama on Linux

Pixelorama is available as a Snap application and if you are using Ubuntu, you can find it in the software center itself.

![Pixelorama Ubuntu Software Center](https://itsfoss.com/content/images/wordpress/2020/03/pixelorama-ubuntu-software-center.jpg)

Alternatively, if you have [Snap support enabled on your Linux distribution](https://itsfoss.com/install-snap-linux/), you can install it using this command:

`sudo snap install pixelorama`

If you don’t want to use Snap, no worries. You can download the latest release of Pixelorama from [their GitHub repository](https://github.com/Orama-Interactive/Pixelorama), [extract the zip file](https://itsfoss.com/unzip-linux/) and you’ll see an executable file. Give this file execute permission and double click on it to run the application.

## Conclusion

![Pixelorama](https://itsfoss.com/content/images/wordpress/2020/03/pixelorama.jpg)

In the Pixelorama features, it says that you can import images and edit them. I guess that’s only true for certain kinds of files because when I tried to import PNG or JPEG files, the application crashed.
However, I could easily doodle like a 3-year-old and make random pixel art. I am not that into art but I think this is a [useful tool for digital artists on Linux](https://itsfoss.com/best-linux-graphic-design-software/).

I liked the idea that despite being game developers, they are creating tools that could help other game developers and artists. That’s the spirit of open source.

If you like the project and will be using it, consider supporting them with a donation. [It’s FOSS has made a humble donation](https://itsfoss.com/donations-foss/) of $25 to thank them for their effort.

Do you like Pixelorama? Do you use some other open source sprite editor? Feel free to share your views in the comment section.
12,175
Pop!_OS 20.04 点评:最好的基于 Ubuntu 的发行版变得更好了
https://itsfoss.com/pop-os-20-04-review/
2020-05-02T20:13:00
[]
https://linux.cn/article-12175-1.html
> > Pop!\_OS 20.04 是一款令人印象深刻的基于 Ubuntu 的 Linux 发行版。我在这篇评论中回顾了其主要的新功能,并分享了我对最新版本的体验。 > > > 现在,Ubuntu 20.04 LTS 及其官方变体版本已经发布了 - 是时候看看 [System76](https://system76.com) 的 Pop!\_OS 20.04 了,这是基于 Ubuntu 的最好的发行版之一。 老实说,Pop!\_OS 是我最喜欢的 Linux 发行版,主要用于我做的所有事情。 现在,Pop!\_OS 20.04 终于来了。是时候来看看它提供了哪些功能,以及你是否应该升级? ### Pop!\_OS 20.04 LTS 中有什么新东西? ![](/data/attachment/album/202005/02/201345a4otoee9izm9jt6j.jpg) 从视觉上看,Pop!\_OS 20.04 LTS 与 Pop!\_OS 19.10 并没有太大的区别。然而,你可以发现几个新功能和改进。 但是,如果你之前使用的是 Pop!\_OS 18.04 LTS,则可以发现有很多东西可以尝试。 随着 [GNOME 3.36](https://itsfoss.com/gnome-3-36-release/) 的到来,及其带来的一些新功能,Pop!\_OS 20.04 成为了一个令人激动的版本。 总的来说,以下是一些主要的亮点。 * 自动窗口平铺 * 新的应用程序切换器和启动器 * 在 Pop!\_Shop 中增加了对 Flatpack 的支持。 * GNOME 3.36 * Linux 内核 5.4 * 改进的混合图形支持 虽然听起来很有趣,但我们还是来了解一下详细的变化,以及到目前为止 Pop!\_OS 20.04 的体验如何。 #### Pop!\_OS 20.04 中的用户体验提升 毫无疑问,很多 Linux 发行版都提供了开箱即用的用户体验。同样的,[Ubuntu 20.04 LTS 也有一流的改进和功能](https://itsfoss.com/ubuntu-20-04-release-features/)。 而对于 System76 的 Pop!\_OS,他们总是试图更进一步。并且,大多数新功能旨在通过提供有用的功能来改善用户体验。 在这里,我将介绍一些改进,其中包括 [GNOME 3.36](https://itsfoss.com/gnome-3-36-release/) 和 Pop!\_OS 特有的一些功能。 #### 支持系统托盘图标 总算是有了!这可能不是什么大的改变 —— 但 Pop!\_OS 以前没有支持系统托盘图标(或小程序图标)。 ![](/data/attachment/album/202005/02/201347ug2b3vyw4agr7t9t.jpg) 随着 20.04 LTS 的发布,默认情况就有了系统托盘,不需要任何扩展。 依靠系统托盘图标的程序可能并不多 —— 但它仍然是重要的东西。 就我而言,我以前无法在 Pop!\_OS 19.10 上使用 [ActivityWatch](https://activitywatch.net/) —— 但现在可以了。 #### 自动窗口平铺 ![](/data/attachment/album/202005/02/201348up82dqzsfsh487sj.png) 自动窗口平铺是我一直想尝试的东西 —— 但从来没花时间使用过 [i3](https://i3wm.org/) 这样的[平铺窗口管理器](https://en.wikipedia.org/wiki/Tiling_window_manager)来设置它,更别说是 [Regolith 桌面](https://itsfoss.com/regolith-linux-desktop/)了。 在 Pop!\_OS 20.04 中,你就不需要这样做了。自动窗口平铺功能已经内置,无需设置。 它还提供了“显示活动提示”的选项,也就是说,它将高亮显示活动窗口以避免混淆。而且,你还可以调整窗口之间的间隙。 ![](/data/attachment/album/202005/02/201353quznzmjkrcfn9k77.jpg) 你可以在他们的官方视频中看到它是如何工作的: 而且,我得说,这是 Pop!\_OS 20.04 上最大的新增功能之一,有可能帮助你更有效地进行多任务处理。 即使每次使用该功能都很方便,但为了最大程度地利用它,最好是使用一个大于 21 英寸的显示屏(至少)! 而且,因为这个原因 —— 我真的很想把我的显示器也升级一下! 
#### 新的扩展应用

![](/data/attachment/album/202005/02/201354yxk7smko0auxhe1x.jpg)

Pop!\_OS 内置了一些独特的 GNOME 扩展。但是,你不需要用 GNOME Tweaks 来管理扩展。

新增加的 “Extensions” 应用可以让你在 Pop!\_OS 20.04 上配置和管理扩展程序。

#### 改进的通知中心

![](/data/attachment/album/202005/02/201355ru59unnz98wmunz6.jpg)

在新的 GNOME 3.36 中,通知中心的外观经过了改进。这里,我启用了黑暗模式。

#### 新的应用程序切换器 & 启动器

![](/data/attachment/album/202005/02/201357uqzykgqsqnq2ydnt.jpg)

你仍然可以用 `ALT+TAB` 或 `Super+TAB` 来浏览正在运行的应用程序。

但是,当你有很多事情要做的时候,这很耗时。所以,在 Pop!\_OS 20.04 上,你可以使用 `Super + /` 激活应用程序切换器和启动器。

一旦你习惯了这个快捷键,它将是非常方便的东西。

除此以外,你可能会发现 Pop!\_OS 20.04 上的图标/窗口在视觉上有许多其它细微的改进。

#### 新的登录界面

嗯,这是 GNOME 3.36 带来的一个明显的变化。但是,它看起来确实很不错!

![](/data/attachment/album/202005/02/201358lpd4oaall8n8pwpn.jpg)

#### Pop!\_Shop 支持 Flatpak

通常,Pop!\_Shop 本身已经非常有用,它带有一个巨大的软件仓库(包括 Pop!\_OS 自有的软件仓库在内)。

现在,在 Pop!\_OS 20.04 中,你可以用 Pop!\_Shop 安装任何可用软件的 Debian 包或 Flatpak(通过 Flathub) —— 当然,前提是某个软件有 Flatpak 软件包。

如果你没有使用 Pop!\_OS 20.04,你可能要看看[如何在 Linux 上使用 Flatpak](https://itsfoss.com/flatpak-guide/)。

![](/data/attachment/album/202005/02/201400u7uzxyi404soxds7.jpg)

就我个人而言,我并不是 Flatpak 的粉丝,但有些应用如 GIMP 需要你安装 Flatpak 包才能获得最新版本。所以,在 Pop!\_Shop 上直接支持了 Flatpak 绝对是一件好事。

#### 键盘快捷键更改

如果你习惯了 Pop!\_OS 19.10 或更早的版本上现有的键盘快捷键,这可能会让你很烦。

不管是哪种情况,有几个重要的键盘快捷键变化可能会改善你的体验,如下:

* 锁定屏幕:`Super + L` 改为 `Super + Escape`。
* 移动工作区:`Super + 上/下箭头键` 改为 `Super + CTRL + 上/下箭头键`。
* 关闭窗口:`Super + W` 变更为 `Super + Q`。
* 切换最大化:`Super + 向上箭头` 改为 `Super + M`。

#### Linux 内核 5.4

与其他大多数最新的 Linux 发行版相似,Pop!\_OS 20.04 搭载了 [Linux 内核 5.4](https://itsfoss.com/linux-kernel-5-4/)。

所以,很明显,你可以期望获得 [exFAT 支持](https://itsfoss.com/mount-exfat/)、改进的 AMD 图形兼容性以及它附带的所有其他功能。

#### 性能提升

尽管 Pop!\_OS 并不称自己是轻量级的 Linux 发行版,但它仍然是一个资源节约型的发行版。而且,有了 GNOME 3.36 的支持,它的速度应该足够快了。

考虑到我已经将 Pop!\_OS 作为主要发行版使用一年多了,我从来没有遇到过性能问题。这就是你安装了 Pop!\_OS 20.04 之后的资源使用情况(取决于你的系统配置)。

![](/data/attachment/album/202005/02/201403oa5gkogg0cgeceg0.jpg)

给你一个参考,我的台式机配置包括 i5-7400 处理器、16GB 内存(2400MHz)、NVIDIA GTX 1050ti 显卡和 SSD。

我不是一个系统基准测试的忠实拥护者,因为除非你去尝试,否则它并不能让你知道特定的应用或游戏的性能。

你可以试试 [Phoronix 测试套件](https://www.phoronix-test-suite.com/)来分析你的系统表现。但是,Pop!\_OS 20.04 LTS 应该是一个很爽快的体验!

#### 软件包更新 & 其他改进

尽管每个基于 Ubuntu 的发行版都受益于 Ubuntu 20.04 LTS 的改进,但也有一些 Pop!\_OS 特有的错误修复和改进。

除此之外,一些主要的应用程序/包(如 Firefox 75.0)也已经更新到了最新版本。

到现在为止,应该没有任何严重的错误,至少对我来说没有。

你可以在 [GitHub 上查看他们的开发进度](https://github.com/orgs/pop-os/projects/13),以了解他们在测试期间已经修复的问题和发布后即将修复的问题。

### 下载 & 支持 Pop!\_OS 20.04

![](/data/attachment/album/202005/02/201404y33meipsscpcicyk.jpg)

在这个版本中,System76 终于增加了一个可选的订阅模式来支持 Pop!\_OS 的开发。

你可以免费下载 Pop!\_OS 20.04 —— 但如果你想支持他们,我建议你订阅,只需要 $1/月。

* [Pop!\_OS 20.04](https://pop.system76.com/)

### 我对 Pop!\_OS 20.04 的看法

我必须提到的是,我本来期待最新的 20.04 版本能有一张全新的壁纸。但是,这没什么大不了的。

有了窗口平铺功能、支持 Flatpak,以及众多其他改进,到目前为止,我对 Pop!\_OS 20.04 的体验是一流的。另外,很高兴看到他们在一些流行软件的开箱即用支持上突出了他们对创意专业人士的关注。

![](/data/attachment/album/202005/02/201406tm79kramskeq7dmm.jpg)

Ubuntu 20.04 的所有优点,再加上 System76 的一些额外的加料,让我印象深刻!

你试过 Pop!\_OS 20.04 吗?请在下面的评论中告诉我你的想法。

---

via: <https://itsfoss.com/pop-os-20-04-review/>

作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
*Brief: Pop!_OS 20.04 is an impressive Linux distribution based on Ubuntu. I review its major new features and share my experience with the latest release.*

Now that Ubuntu 20.04 LTS and its official flavours are here – it’s time to take a look at one of the best Ubuntu-based distros, i.e., Pop!_OS 20.04 by [System76](https://system76.com).

To be honest, Pop!_OS is my favorite Linux distro that I primarily use for everything I do.

Now that Pop!_OS 20.04 has finally arrived, it’s time to take a look at what it offers and whether you should upgrade or not.

## What’s New In Pop!_OS 20.04 LTS?

Visually, Pop!_OS 20.04 LTS isn’t really very different from Pop!_OS 19.10. However, you can find several new features and improvements.

But, if you were using **Pop!_OS 18.04 LTS**, you have a lot of things to try.

With [GNOME 3.36](https://itsfoss.com/gnome-3-36-release/) onboard along with some newly added features, Pop!_OS 20.04 is an exciting release.

Overall, to give you an overview, here are some key highlights:

- Automatic Window Tiling
- New Application Switcher and Launcher
- Stacking feature
- Flatpak support added in Pop!_Shop
- GNOME 3.36
- Linux Kernel 5.8
- Improved hybrid graphics support

While this sounds fun, let us take a detailed look at what has changed and how the experience of Pop!_OS 20.04 is so far.

## User Experience Improvements in Pop!_OS 20.04

Undoubtedly, a lot of Linux distros offer a pleasant user experience out of the box. Likewise, [Ubuntu 20.04 LTS has had top-notch improvements and features](https://itsfoss.com/ubuntu-20-04-release-features/) as well.

And, when it comes to Pop!_OS by System76, they always try to go a mile further. The majority of new features aim to improve the user experience by providing useful functionalities.

Here, I’m going to take a look at some of the improvements that include [GNOME 3.36](https://itsfoss.com/gnome-3-36-release/) and Pop!_OS-specific features.
### Support For System Tray Icons

Finally! This may not be a big change – but Pop!_OS did not have the support for system tray icons (or applet icons).

![System Tray Icons Pop Os](https://itsfoss.com/content/images/wordpress/2020/04/system-tray-icons-pop-os.jpg)

With the 20.04 LTS release, it’s here by default. No need for any extension.

There may not be a whole lot of programs depending on system tray icons – but it is still something important to have.

In my case, I wasn’t able to use [ActivityWatch](https://itsfoss.com/activitywatch/) on Pop!_OS 19.10 – but now I can.

### Automatic Window Tiling

![Pop Os Automatic Screen Tiling](https://itsfoss.com/content/images/wordpress/2020/04/pop-os-automatic-screen-tiling.png)

**Automatic Window Tiling** is something I always wanted to try – but never invested any time to set it up using a [tiling window manager](https://en.wikipedia.org/wiki/Tiling_window_manager) like [i3](https://i3wm.org/), not even with [Regolith Desktop](https://itsfoss.com/regolith-linux-desktop/).

With Pop!_OS 20.04, you don’t need to do that anyway. The automatic window tiling feature comes baked in without needing you to set it up.

It also features an option to **Show Active Hint**, i.e., it will highlight the active window to avoid confusion. And, you can also adjust the gap between the windows.

![Tile Feature Options Popos](https://itsfoss.com/content/images/wordpress/2020/04/tile-feature-options-popos.jpg)

You can see it in action in their official video:

And, I must say that it is one of the biggest additions on Pop!_OS 20.04 that could potentially help you multi-task more efficiently.

The feature comes in handy every time you use it, but to make the most of it, a display bigger than 21 inches (at least) should be the best way to go! For this very reason, I upgraded my monitor to a 27-inch screen.
### Stacking Feature

Fret not: if you don’t have a big display and still want to stack up windows on one screen, you still have an option.

Pop!_OS offers a **stacking feature** that comes in handy in such cases, here’s that in action:

### New Extensions App

![Pop Os Extensions](https://itsfoss.com/content/images/wordpress/2020/04/pop-os-extensions.jpg)

Pop!_OS comes baked in with some unique GNOME extensions. But, you don’t need GNOME Tweaks to manage the extensions anymore.

The newly added **Extensions** app lets you configure and manage the extensions on Pop!_OS 20.04.

### Improved Notification Center

![Notification Center Pop Os](https://itsfoss.com/content/images/wordpress/2020/04/notification-center-pop-os.jpg)

With the new GNOME 3.36 release, the notification center includes a revamped look. Here, I have the dark mode enabled.

### New Application Switcher & Launcher

![Pop Os Application Launcher](https://itsfoss.com/content/images/wordpress/2020/04/pop-os-application-launcher.jpg)

You can still **ALT+TAB** or **Super key + TAB** to go through the running applications.

But, that’s time-consuming when you have a lot of things going on. So, on Pop!_OS 20.04, you get an application switcher and launcher which you can activate using **Super key + /**

Once you get used to the keyboard shortcut, it will be a very convenient thing to have.

In addition to this, you may find numerous other subtle improvements visually with the icons/windows on Pop!_OS 20.04.

### New Login Screen

Well, with GNOME 3.36, it’s an obvious change. But, it does look good!
![Pop Os 20 04 Lock Screen](https://itsfoss.com/content/images/wordpress/2020/04/pop-os-20-04-lock-screen.jpg)

## Flatpak Support on Pop!_Shop

Normally, Pop!_Shop is already something useful with a huge repository along with [Pop!_OS’s own repositories](https://launchpad.net/~system76/+archive/ubuntu/pop).

Now, with Pop!_OS 20.04, you can choose to install either the Flatpak (via Flathub) or the Debian package of any available software on Pop!_Shop. Of course, only if a Flatpak package exists for the particular software.

You might want to check [how to use Flatpak on Linux](https://itsfoss.com/flatpak-guide/) if you don’t have Pop!_OS 20.04.

![Pop Os Flatpak Deb](https://itsfoss.com/content/images/wordpress/2020/04/pop-os-flatpak-deb.jpg)

Personally, I’m not a fan of Flatpak but some applications like GIMP require you to install the Flatpak package to get the latest version. So, it is definitely a good thing to have the support for Flatpak on Pop!_Shop baked right into it.

## Keyboard Shortcut Changes

This can be annoying if you’re comfortable with the existing keyboard shortcuts on Pop!_OS 19.10 or older.

In either case, there are a few important keyboard shortcut changes to potentially improve your experience, here they are:

- Lock Screen: **Super + L** *changed to* **Super + Escape**
- Move Workspace: **Super + Up/Down Arrow** *changed to* **Super + CTRL + Up/Down Arrow**
- Close Window: **Super + W** *changed to* **Super + Q**
- Toggle Maximize: **Super + Up Arrow** *changed to* **Super + M**

## Linux Kernel 5.4 (updated to 5.8)

Similar to most of the other latest Linux distros, Pop!_OS 20.04 came pre-loaded with [Linux Kernel 5.4](https://itsfoss.com/linux-kernel-5-4/) at the time of publishing this. Now, you will find [Linux Kernel 5.8](https://itsfoss.com/kernel-5-8-release/) on board.

So, obviously, you can expect the [exFAT support](https://itsfoss.com/mount-exfat/) and an improved AMD graphics compatibility along with all the other features that come with it.
## Performance Improvements

Even though Pop!_OS doesn’t pitch itself as a lightweight Linux distro, it is still a resource-efficient distro. And, with GNOME 3.36 onboard, it should be fast enough.

Considering that I’ve been using Pop!_OS as my primary distro for about a year, I’ve never had any performance issues. And, this is what the resource usage will probably look like (depending on your system configuration) after you install Pop!_OS 20.04.

![Pop Os 20 04 Performance](https://itsfoss.com/content/images/wordpress/2020/04/pop-os-20-04-performance.jpg)

To give you an idea, my desktop configuration involves an i5-7400 processor, 16 GB RAM (2400 MHz), NVIDIA GTX 1050ti graphics card, and an SSD.

I’m not really a fan of system benchmarks because they do not really give you an idea of how a specific application or a game would perform unless you try it.

You can try the [Phoronix Test Suite](https://www.phoronix-test-suite.com/) to analyze how your system performs. But, Pop!_OS 20.04 LTS should be a snappy experience!

## Package Updates & Other Improvements

While every Ubuntu-based distro benefits from the [improvements in Ubuntu 20.04 LTS](https://itsfoss.com/ubuntu-20-04-release-features/), there are some Pop!_OS-specific bug fixes and improvements as well.

In addition to it, some major apps/packages like **Firefox 75.0** (you should find the latest Firefox version depending on when you read this article) have been updated to their latest version.

As of now, there should be no critical bugs present, at least none for me.

You can check out their [development progress on GitHub](https://github.com/orgs/pop-os/projects/13) to check the details of issues they’ve already fixed during the beta testing and the issues they will be fixing right after the release.
## Download & Support Pop!_OS 20.04

![Support Pop Os](https://itsfoss.com/content/images/wordpress/2020/05/support-pop-os.jpg)

With this release, System76 has finally added a subscription model (optional) to support Pop!_OS development.

You can download **Pop!_OS 20.04** for free – but if you want to support them, I’d suggest you go for the subscription with just **$1/month**.

## My Thoughts on Pop!_OS 20.04

I must mention that I was rooting for a fresh new wallpaper with the latest 20.04 release. But, that’s not a big deal.

With the window tiling feature, Flatpak support, stacking feature, and numerous other improvements, my experience with Pop!_OS 20.04 has been top-notch so far.

Also, it’s great to see that they are highlighting their focus on creative professionals with out-of-the-box support for some popular software.

![Pop Os Stem Focus](https://itsfoss.com/content/images/wordpress/2020/05/pop-os-stem-focus.jpg)

All the good things about Ubuntu 20.04 and some extra toppings on it by System76, I’m impressed!

*Have you tried the Pop!_OS 20.04 yet? Let me know your thoughts in the comments below.*
12,176
Go 中的内联优化
https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go
2020-05-02T22:22:47
[ "Go", "内联" ]
https://linux.cn/article-12176-1.html
> > 本文讨论 Go 编译器是如何实现内联的,以及这种优化方法如何影响你的 Go 代码。 > > > ![](/data/attachment/album/202005/02/222202e3v3pppkhnndpbpn.jpg) *请注意:*本文重点讨论 *gc*,这是来自 [golang.org](https://github.com/golang/go) 的事实标准的 Go 编译器。讨论到的概念可以广泛适用于其它 Go 编译器,如 gccgo 和 llgo,但它们在实现方式和功效上可能有所差异。 ### 内联是什么? <ruby> 内联 <rt> inlining </rt></ruby>就是把简短的函数在调用它的地方展开。在计算机发展历程的早期,这个优化是由程序员手动实现的。现在,内联已经成为编译过程中自动实现的基本优化过程的其中一步。 ### 为什么内联很重要? 有两个原因。第一个是它消除了函数调用本身的开销。第二个是它使得编译器能更高效地执行其他的优化策略。 #### 函数调用的开销 在任何语言中,调用一个函数 <sup id="fnref1"> <a href="#fn1" rel="footnote"> 1 </a></sup> 都会有消耗。把参数编组进寄存器或放入栈中(取决于 ABI),在返回结果时的逆反过程都会有开销。引入一次函数调用会导致程序计数器从指令流的一点跳到另一点,这可能导致管道滞后。函数内部通常有<ruby> 前置处理 <rt> preamble </rt></ruby>,需要为函数执行准备新的栈帧,还有与前置相似的<ruby> 后续处理 <rt> epilogue </rt></ruby>,需要在返回给调用方之前释放栈帧空间。 在 Go 中函数调用会消耗额外的资源来支持栈的动态增长。在进入函数时,goroutine 可用的栈空间与函数需要的空间大小进行比较。如果可用空间不同,前置处理就会跳到<ruby> 运行时 <rt> runtime </rt></ruby>的逻辑中,通过把数据复制到一块新的、更大的空间的来增长栈空间。当这个复制完成后,运行时就会跳回到原来的函数入口,再执行栈空间检查,现在通过了检查,函数调用继续执行。这种方式下,goroutine 开始时可以申请很小的栈空间,在有需要时再申请更大的空间。<sup id="fnref2"> <a href="#fn2" rel="footnote"> 2 </a></sup> 这个检查消耗很小,只有几个指令,而且由于 goroutine 的栈是成几何级数增长的,因此这个检查很少失败。这样,现代处理器的分支预测单元可以通过假定检查肯定会成功来隐藏栈空间检查的消耗。当处理器预测错了栈空间检查,不得不放弃它在推测性执行所做的操作时,与为了增加 goroutine 的栈空间运行时所需的操作消耗的资源相比,管道滞后的代价更小。 虽然现代处理器可以用预测性执行技术优化每次函数调用中的泛型和 Go 特定的元素的开销,但那些开销不能被完全消除,因此在每次函数调用执行必要的工作过程中都会有性能消耗。一次函数调用本身的开销是固定的,与更大的函数相比,调用小函数的代价更大,因为在每次调用过程中它们做的有用的工作更少。 因此,消除这些开销的方法必须是要消除函数调用本身,Go 的编译器就是这么做的,在某些条件下通过用函数的内容来替换函数调用来实现。这个过程被称为*内联*,因为它在函数调用处把函数体展开了。 #### 改进的优化机会 Cliff Click 博士把内联描述为现代编译器做的优化措施,像常量传播(LCTT 译注:此处作者笔误,原文为 constant proportion,修正为 constant propagation)和死代码消除一样,都是编译器的基本优化方法。实际上,内联可以让编译器看得更深,使编译器可以观察调用的特定函数的上下文内容,可以看到能继续简化或彻底消除的逻辑。由于可以递归地执行内联,因此不仅可以在每个独立的函数上下文处进行这种优化决策,也可以在整个函数调用链中进行。 ### 实践中的内联 下面这个例子可以演示内联的影响: ``` package main import "testing" //go:noinline func max(a, b int) int { if a > b { return a } return b } var Result int func BenchmarkMax(b *testing.B) { var r int for i := 0; i < b.N; i++ { r = max(-1, i) } Result = r } ``` 运行这个基准,会得到如下结果:<sup id="fnref3"> <a 
href="#fn3" rel="footnote"> 3 </a></sup>

```
% go test -bench=.
BenchmarkMax-4  530687617   2.24 ns/op
```

在我的 2015 MacBook Air 上 `max(-1, i)` 的耗时约为 2.24 纳秒。现在去掉 `//go:noinline` 编译指令,再看下结果:

```
% go test -bench=.
BenchmarkMax-4  1000000000  0.514 ns/op
```

从 2.24 纳秒降到了 0.51 纳秒,或者从 `benchstat` 的结果可以看出,有 78% 的提升。

```
% benchstat {old,new}.txt
name   old time/op  new time/op  delta
Max-4  2.21ns ± 1%  0.49ns ± 6%  -77.96%  (p=0.000 n=18+19)
```

这个提升是从哪儿来的呢?

首先,移除掉函数调用以及与之关联的前置处理 <sup id="fnref4"> <a href="#fn4" rel="footnote"> 4 </a></sup> 是主要因素。把 `max` 函数的函数体在调用处展开,减少了处理器执行的指令数量并且消除了一些分支。

现在由于编译器优化了 `BenchmarkMax`,因此它可以看到 `max` 函数的内容,进而可以做更多的提升。当 `max` 被内联后,`BenchmarkMax` 呈现给编译器的样子,看起来是这样的:

```
func BenchmarkMax(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		if -1 > i {
			r = -1
		} else {
			r = i
		}
	}
	Result = r
}
```

再运行一次基准,我们看一下手动内联的版本和编译器内联的版本的表现:

```
% benchstat {old,new}.txt
name   old time/op  new time/op  delta
Max-4  2.21ns ± 1%  0.48ns ± 3%  -78.14%  (p=0.000 n=18+18)
```

现在编译器能看到在 `BenchmarkMax` 里内联 `max` 的结果,可以执行以前不能执行的优化措施。例如,编译器注意到 `i` 初始值为 `0`,仅做自增操作,因此所有与 `i` 的比较都可以假定 `i` 不是负值。这样条件表达式 `-1 > i` 永远不是 `true`。<sup id="fnref5"> <a href="#fn5" rel="footnote"> 5 </a></sup>

证明了 `-1 > i` 永远不为 true 后,编译器可以把代码简化为:

```
func BenchmarkMax(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		if false {
			r = -1
		} else {
			r = i
		}
	}
	Result = r
}
```

并且因为分支里是个常量,编译器可以通过下面的方式移除不会走到的分支:

```
func BenchmarkMax(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		r = i
	}
	Result = r
}
```

这样,通过内联和由内联解锁的优化过程,编译器把表达式 `r = max(-1, i)` 简化为 `r = i`。

### 内联的限制

本文中我论述的内联称作<ruby> 叶子内联 <rt> leaf inlining </rt></ruby>:把函数调用栈中最底层的函数在调用它的函数处展开的行为。内联是个递归的过程,当把函数内联到调用它的函数 A 处后,编译器会把内联后的结果代码再内联到 A 的调用方,这样持续内联下去。例如,下面的代码:

```
func BenchmarkMaxMaxMax(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		r = max(max(-1, i), max(0, i))
	}
	Result = r
}
```

与之前的例子中的代码运行速度一样快,因为编译器可以对上面的代码重复地进行内联,也把代码简化到 `r = i` 表达式。

下一篇文章中,我会论述当 Go 编译器想要内联函数调用栈中间的某个函数时选用的另一种内联策略。最后我会论述编译器为内联代码所愿意付出的代价极限,以及哪些 Go 结构目前还超出了它的能力范围。

#### 相关文章:

1. [使 Go 变快的 5 件事](https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast)
2. [为什么 Goroutine 的栈空间会无限增长?](https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite)
3. [Go 中怎么写基准测试](https://dave.cheney.net/2013/06/30/how-to-write-benchmarks-in-go)
4. [Go 中隐藏的编译指令](https://dave.cheney.net/2018/01/08/gos-hidden-pragmas)

---

1. 在 Go 中,一个方法就是一个有预先定义的形参和接收者的函数。假设这个方法不是通过接口调用的,调用一个普通函数与调用一个方法的相对代价是相同的。 [↩](#fnref1)
2. 在 Go 1.14 以前,栈检查的前置处理也被垃圾回收器用于 STW,通过把所有活跃的 goroutine 栈空间设为 0,来强制它们切换为下一次函数调用时的运行时状态。这个机制[最近被替换](https://github.com/golang/proposal/blob/master/design/24543-non-cooperative-preemption.md)为一种新机制,新机制下运行时可以不用等 goroutine 进行函数调用就可以暂停 goroutine。 [↩](#fnref2)
3. 我用 `//go:noinline` 编译指令来阻止编译器内联 `max`。这是因为我想把内联 `max` 的影响与其他影响隔离开,而不是用 `-gcflags='-l -N'` 选项在全局范围内禁止优化。关于 `//go:` 注释在[这篇文章](https://dave.cheney.net/2018/01/08/gos-hidden-pragmas)中详细论述。 [↩](#fnref3)
4. 你可以自己通过比较 `go test -bench=. -gcflags=-S` 有无 `//go:noinline` 注释时的不同结果来验证一下。 [↩](#fnref4)
5. 你可以用 `-gcflags=-d=ssa/prove/debug=on` 选项来自己验证一下。 [↩](#fnref5)

---

via: <https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go>

作者:[Dave Cheney](https://dave.cheney.net/author/davecheney) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lxbwolf](https://github.com/lxbwolf) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
This is a post about how the Go compiler implements inlining and how this optimisation affects your Go code. *n.b.* This article focuses on *gc*, the de facto Go compiler from [golang.org](https://github.com/golang/go). The concepts discussed apply broadly to other Go compilers like gccgo and tinygo but may differ in implementation and efficacy. ## What is inlining? Inlining is the act of combining smaller functions into their respective callers. In the early days of computing this optimisation was typically performed by hand. Nowadays inlining is one of a class of fundamental optimisations performed automatically during the compilation process. ## Why is inlining important? Inlining is important for two reasons. The first is it removes the overhead of the function call itself. The second is it permits the compiler to more effectively apply other optimisation strategies. ### Function call overhead Calling a function[ 1](#easy-footnote-bottom-1-4053) in any language carries a cost. There are the overheads of marshalling parameters into registers or onto the stack (depending on the ABI) and reversing the process on return. Invoking a function call involves jumping the program counter from one point in the instruction stream to another which can cause a pipeline stall. Once inside the function there is usually some preamble required to prepare a new stack frame for the function to execute and a similar epilogue needed to retire the frame before returning to the caller. In Go a function call carries additional costs to support dynamic stack growth. On entry the amount of stack space available to the goroutine is compared to the amount required for the function. If insufficient stack space is available, the preamble jumps into the runtime logic that grows the stack by copying it to a new, larger, location. Once this is done the runtime jumps back to the start of the original function, the stack check is performed again, which now passes, and the call continues. 
In this way goroutines can start with a small stack allocation which grows only when needed.[ 2](#easy-footnote-bottom-2-4053) This check is cheap–only a few instructions–and because goroutine stacks grow geometrically the check rarely fails. Thus, the branch prediction unit inside a modern processor can hide the cost of the stack check by assuming it will always be successful. In the case where the processor mis-predicts the stack check and has to discard the work done while it was executing speculatively, the cost of the pipeline stall is relatively small compared to the cost of the work needed for the runtime to grow a goroutine’s stack.

While the overhead of the generic and Go-specific components of each function call are well optimised by modern processors using speculative execution techniques, those overheads cannot be entirely eliminated, thus each function call carries with it a performance cost over and above the time it takes to perform useful work. As a function call’s overhead is fixed, smaller functions pay a larger cost relative to larger ones because they tend to do less useful work per invocation.

The solution to eliminating these overheads must therefore be to eliminate the function call itself, which the Go compiler does, under certain conditions, by replacing the call to a function with the contents of the function. This is known as *inlining* because it brings the body of the function in line with its caller.

### Improved optimisation opportunities

Dr. Cliff Click describes inlining as *the* optimisation performed by modern compilers as it forms the basis for optimisations like constant propagation and dead code elimination. In effect, inlining allows the compiler to *see further*, allowing it to observe, in the context that a particular function is being called, logic that can be further simplified or eliminated entirely.
As inlining can be applied recursively, optimisation decisions can be made not only in the context of each individual function, but also applied to the chain of functions in a call path.

## Inlining in action

The effects of inlining can be demonstrated with this small example

```
package main

import "testing"

//go:noinline
func max(a, b int) int {
	if a > b {
		return a
	}
	return b
}

var Result int

func BenchmarkMax(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		r = max(-1, i)
	}
	Result = r
}
```

Running this benchmark gives the following result:[ 3](#easy-footnote-bottom-3-4053)

```
% go test -bench=.
BenchmarkMax-4  530687617   2.24 ns/op
```

The cost of `max(-1, i)` is around 2.24 nanoseconds on my 2015 MacBook Air. Now let’s remove the `//go:noinline` pragma and see the result:

```
% go test -bench=.
BenchmarkMax-4  1000000000  0.514 ns/op
```

From 2.24 ns to 0.51 ns, or according to `benchstat`, a 78% improvement.

```
% benchstat {old,new}.txt
name   old time/op  new time/op  delta
Max-4  2.21ns ± 1%  0.49ns ± 6%  -77.96%  (p=0.000 n=18+19)
```

Where did these improvements come from?

First, the removal of the function call and associated preamble[ 4](#easy-footnote-bottom-4-4053) was a major contributor. Pulling the contents of `max` into its caller reduced the number of instructions executed by the processor and eliminated several branches.

Now that the contents of `max` are visible to the compiler as it optimises `BenchmarkMax`, it can make some additional improvements.
Consider that once `max` is inlined, this is what the body of `BenchmarkMax` looks like to the compiler:

```
func BenchmarkMax(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		if -1 > i {
			r = -1
		} else {
			r = i
		}
	}
	Result = r
}
```

Running the benchmark again, we see our manually inlined version performs as well as the version inlined by the compiler:

```
% benchstat {old,new}.txt
name   old time/op  new time/op  delta
Max-4  2.21ns ± 1%  0.48ns ± 3%  -78.14%  (p=0.000 n=18+18)
```

Now that the compiler has access to the result of inlining `max` into `BenchmarkMax`, it can apply optimisation passes which were not possible before. For example, the compiler has noted that `i` is initialised to `0` and only incremented, so any comparison with `i` can assume `i` will never be negative. Thus, the condition `-1 > i` will never be true.5

Having proved that `-1 > i` will never be true, the compiler can simplify the code to

```
func BenchmarkMax(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		if false {
			r = -1
		} else {
			r = i
		}
	}
	Result = r
}
```

and because the branch is now a constant, the compiler can eliminate the unreachable path, leaving it with

```
func BenchmarkMax(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		r = i
	}
	Result = r
}
```

Thus, through inlining and the optimisations it unlocks, the compiler has reduced the expression `r = max(-1, i)` to simply `r = i`.

## The limits of inlining

In this article I’ve discussed so-called *leaf* inlining: the act of inlining a function at the bottom of a call stack into its direct caller. Inlining is a recursive process; once a function has been inlined into its caller, the compiler may inline the resulting code into *its* caller, and so on.
For example, this code

```
func BenchmarkMaxMaxMax(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		r = max(max(-1, i), max(0, i))
	}
	Result = r
}
```

runs as fast as the previous example as the compiler is able to repeatedly apply the optimisations outlined above to reduce the code to the same `r = i` expression.

In the next article I’ll discuss an alternative inlining strategy when the Go compiler wishes to inline a function in the middle of a call stack. Finally I’ll discuss how far the compiler is prepared to go to inline code, and which Go constructs are currently beyond its capability.

- In Go, a method is just a function with a predefined formal parameter, the receiver. The relative costs of calling a free function vs invoking a method, assuming that method is not called through an interface, are the same.
- Up until Go 1.14 the stack check preamble was also used by the garbage collector to stop the world by setting all active goroutines’ stacks to zero, forcing them to trap into the runtime the next time they made a function call. This system was [recently replaced](https://github.com/golang/proposal/blob/master/design/24543-non-cooperative-preemption.md) with a mechanism which allows the runtime to pause a goroutine without waiting for it to make a function call.
- I’m using the `//go:noinline` pragma to prevent the compiler from inlining `max`. This is because I want to isolate the effects of inlining on `max` rather than disabling optimisations globally with `-gcflags='-l -N'`. I go into detail about the `//go:` comments in [this presentation](https://dave.cheney.net/2018/01/08/gos-hidden-pragmas).
- You can check this for yourself by comparing the output of `go test -bench=. -gcflags=-S` with and without the `//go:noinline` annotation.
- You can check this yourself with the `-gcflags=-d=ssa/prove/debug=on` flag.
12,180
4 个不可或缺的 Git 脚本
https://opensource.com/article/20/4/git-extras
2020-05-03T21:15:00
[ "Git" ]
https://linux.cn/article-12180-1.html
> > Git Extras 版本库包含了 60 多个脚本,它们是 Git 基本功能的补充。以下是如何安装、使用和贡献的方法。 > > > ![](/data/attachment/album/202005/03/211446dshwbzoh235b3gre.jpg) 2005 年,[Linus Torvalds](https://en.wikipedia.org/wiki/Linus_Torvalds) 创建了 [Git](https://git-scm.com/),以取代他之前用于维护 Linux 内核的分布式源码控制管理的专有解决方案。从那时起,Git 已经成为开源和云原生开发团队的主流版本控制解决方案。 但即使是像 Git 这样功能丰富的应用程序,也没有人们想要或需要的每个功能,所以会有人花大力气去创建这些缺少的功能。就 Git 而言,这个人就是 [TJ Holowaychuk](https://github.com/tj)。他的 [Git Extras](https://github.com/tj/git-extras) 项目承载了 60 多个“附加功能”,这些功能扩展了 Git 的基本功能。 ### 使用 Git 附加功能 下面介绍一下如何使用四种最受欢迎的 Git 附加功能。 #### git-ignore `git ignore` 是一个方便的附加功能,它可以让你手动添加文件类型和注释到 `.git-ignore` 文件中,而不需要打开文本编辑器。它可以操作你的个人用户帐户的全局忽略文件和单独用于你正在工作的版本库中的忽略文件。 在不提供参数的情况下执行 `git ignore` 会先列出全局忽略文件,然后是本地的忽略文件。 ``` $ git ignore Global gitignore: /home/alice/.gitignore # Numerous always-ignore extensions *.diff *.err *.orig *.rej *.swo *.swp *.vi *~ *.sass-cache # OS or Editor folders Thumbs.db --------------------------------- Local gitignore: .gitignore nbproject ``` #### git-info `git info` 可以检索你所需要的所有信息,以获取你正在使用的版本库的上下文信息。它包括远程 URL、远程分支、本地分支、配置信息和最后一次的提交信息。 ``` $ git info ## Remote URLs: origin [email protected]:sampleAuthor/git-extras.git (fetch) origin [email protected]:sampleAuthor/git-extras.git (push) ## Remote Branches: origin/HEAD -> origin/master origin/myBranch ## Local Branches: myBranch * master ## Most Recent Commit: commit e3952df2c172c6f3eb533d8d0b1a6c77250769a7 Author: Sample Author <[email protected]> Added git-info command. Type ´git log´ for more commits, or ´git show <commit id>´ for full commit details. 
## Configuration (.git/config): color.diff=auto color.status=auto color.branch=auto user.name=Sample Author [email protected] core.repositoryformatversion=0 core.filemode=true core.bare=false core.logallrefupdates=true core.ignorecase=true remote.origin.fetch=+refs/heads/*:refs/remotes/origin/* [email protected]:mub/git-extras.git branch.master.remote=origin branch.master.merge=refs/heads/master ``` #### git-mr 和 git-pr 这些附加功能的作用类似,工作方式也基本相同。 * `git mr` 检出来自 GitLab 的合并请求。 * `git pr` 检出来自 GitHub 的拉取请求。 无论是哪种情况,你只需要合并请求号/拉取请求号或完整的 URL,它就会抓取远程引用,检出分支,并调整配置,这样 Git 就知道要替换哪个分支了。 ``` $ git mr 51 From gitlab.com:owner/repository * [new ref] refs/merge-requests/51/head -> mr/51 Switched to branch 'mr/51' ``` #### git-release 通过将 `commit`、`tag` 和 `push` 合并到一个命令中,`git release` 可以节省大量的按键来执行这三个命令,而这三个命令往往是依次运行的。 要用特定的 `<tagname>` 和自定义消息提交: ``` $ git release 0.1.0 -m <+ powerful feature added> ``` #### 其他附加功能 这只是该版本库中 60 多个 Git 附加功能中的四个命令。要访问 Git Extras 中的全部命令,请查看该源代码库中的 [Commands.md](https://github.com/tj/git-extras/blob/master/Commands.md) 文件,或者在安装 Git Extras 后运行以下命令。 ``` $ git extras --help ``` ### 安装 Git 附加功能 使用 Git 附加功能的主要前提是安装了 Git 的命令行版本。如果你打算从源码中构建,还需要有额外的工具(例如:`make`)。 如果你使用的是最新版本的 macOS,那么 Git 附加功能的安装最好使用 [Homebrew](https://brew.sh/)(和大多数开源工具一样)。 ``` $ brew install git-extras ``` 在 Linux 上,每个平台原生的包管理器中都包含有 Git Extras。有时,你需要启用额外的仓库,比如在 CentOS 上的 [EPEL](https://fedoraproject.org/wiki/EPEL),然后运行一条命令。 ``` $ sudo yum install git-extras ``` 其他 Linux 发行版、BSD 和其他平台的完整安装说明可以在该版本库的 [Installation.md](https://github.com/tj/git-extras/blob/master/Installation.md) 文件中找到。 ### 贡献 你是否认为 Git 中有缺少的功能,并且已经构建了一个脚本来处理它?为什么不把它作为 Git Extras 发布版的一部分,与全世界分享呢? 要做到这一点,请将该功能贡献到 Git Extras 仓库中。更多具体细节请参见仓库中的 [CONTRIBUTING.md](https://github.com/tj/git-extras/blob/master/CONTRIBUTING.md) 文件,但基本的操作方法很简单: 1. 创建一个处理该功能的 Bash 脚本。 2. 创建一个基本的 man 文件,让大家知道如何使用它。 3. 更新命令列表和补完脚本,让人们知道这个功能的存在。 4. 运行完整性检查,确保你没有破坏任何东西。 5. 
为你的功能创建一个拉取请求。 向 Git Extras 做出贡献,会让其他 Git 用户的生活更轻松一些。你可以在项目的 [README](https://github.com/tj/git-extras/blob/master/Readme.md) 中了解更多。 --- via: <https://opensource.com/article/20/4/git-extras> 作者:[Vince Power](https://opensource.com/users/vincepower) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
In 2005, [Linus Torvalds](https://en.wikipedia.org/wiki/Linus_Torvalds) created [Git](https://git-scm.com/) to replace the proprietary distributed source control management solution that he had previously used to maintain the Linux kernel. Since then, Git has become a dominant version-control solution for open source and cloud-native development teams. Even feature-rich applications like Git don't have every feature that people want or need, so they make the effort to create them. In the case of Git, that person would be [TJ Holowaychuk](https://github.com/tj). His [Git Extras](https://github.com/tj/git-extras) project hosts more than 60 "extras" with features that expand Git's basic functionality. ## Using Git Extras Here's how to use four of the most popular Git Extras. ### git-ignore git-ignore is a convenient extra that allows you to manually add file types and comments to the **.git-ignore** file without having to open a text editor. It works with both the global ignore file for your user account and the individual ignore file for the repository you are working in. Executing git-ignore without a parameter will list the global ignore file first, then the local ignore files. ``` $ git ignore Global gitignore: /home/alice/.gitignore # Numerous always-ignore extensions *.diff *.err *.orig *.rej *.swo *.swp *.vi *~ *.sass-cache # OS or Editor folders Thumbs.db --------------------------------- Local gitignore: .gitignore nbproject ``` ### git-info git-info retrieves all the information you need to get your head in the context of a repo you are working with. It includes any remote URLs, remote branches, local branches, configuration info, and the last commit. 
``` $ git info ## Remote URLs: origin [email protected]:sampleAuthor/git-extras.git (fetch) origin [email protected]:sampleAuthor/git-extras.git (push) ## Remote Branches: origin/HEAD -> origin/master origin/myBranch ## Local Branches: myBranch * master ## Most Recent Commit: commit e3952df2c172c6f3eb533d8d0b1a6c77250769a7 Author: Sample Author <[email protected]> Added git-info command. Type ´git log´ for more commits, or ´git show <commit id>´ for full commit details. ## Configuration (.git/config): color.diff=auto color.status=auto color.branch=auto user.name=Sample Author [email protected] core.repositoryformatversion=0 core.filemode=true core.bare=false core.logallrefupdates=true core.ignorecase=true remote.origin.fetch=+refs/heads/*:refs/remotes/origin/* [email protected]:mub/git-extras.git branch.master.remote=origin branch.master.merge=refs/heads/master ``` ### git-mr and git-pr These extras do similar things and work in basically the same way. - git-mr checks out a merge request from GitLab - git-pr checks out a pull request on GitHub In either case, you just need the merge or pull request number or the full URL, and it will fetch the remote reference, check out the branch, and adjust the config, so Git knows which branch it will replace. ``` $ git mr 51 From gitlab.com:owner/repository * [new ref] refs/merge-requests/51/head -> mr/51 Switched to branch 'mr/51' ``` ### git-release By combining **commit**, **tag**, and **push** into a single command, git-release saves a lot of keystrokes for executing three commands that often run in sequence. To commit with a specific **<tagname>** and a custom message: `$ git release 0.1.0 -m <+ powerful feature added>` ### Other extras These are just four of the 60+ Git Extras in the repo. 
To access the full list of commands available in Git Extras, either review the [Commands.md](https://github.com/tj/git-extras/blob/master/Commands.md) file in the source repository or run the following command after you install Git Extras. `$ git extras --help` ## Install Git Extras The main prerequisite for Git Extras is having the command-line version of Git installed. If you plan to build from source, you also need additional utilities (e.g., **make**) to be available. If you are using a recent version of macOS, Git Extras installation is best handled using [Homebrew](https://brew.sh/) (as with most open source tools): `$ brew install git-extras` On Linux, Git Extras is available on each platform's native package manager. Sometimes, you need to enable an extra repository, like [EPEL](https://fedoraproject.org/wiki/EPEL) on CentOS, then run a single command: `$ sudo yum install git-extras` Full installation instructions for other Linux distributions, BSD, and other platforms are available in the [Installation.md](https://github.com/tj/git-extras/blob/master/Installation.md) file in the repository. ## Contributing Do you have a piece of functionality you think is missing from Git and have built or want to build a script to handle it? Why not share it with the world by making it part of the Git Extras distribution! To do so, contribute the functionality to the Git Extras repository. There are more specific details in the [CONTRIBUTING.md](https://github.com/tj/git-extras/blob/master/CONTRIBUTING.md) file in the repository, but the basics are easy: - Create a Bash script that handles the functionality. - Create a basic man file so people will know how to use it. - Update the command list and completion scripts to let people know the functionality exists. - Run the integrity check to make sure you didn't break anything. - Create a pull request for your functionality. 
Contributing to Git Extras will go a long way towards making life a little easier for your fellow Git users. You can learn more about it in the project's [README](https://github.com/tj/git-extras/blob/master/Readme.md).
12,181
以单用户模式启动 CentOS/RHEL 7/8 的三种方法
https://www.2daygeek.com/boot-centos-7-8-rhel-7-8-single-user-mode/
2020-05-03T23:01:00
[ "启动", "引导", "单用户" ]
https://linux.cn/article-12181-1.html
![](/data/attachment/album/202005/03/230109uw1f9zvv9upbhwv8.jpg) 单用户模式,也被称为维护模式,超级用户可以在此模式下恢复/修复系统问题。 通常情况下,这类问题在多用户环境中修复不了。系统可以启动但功能不能正常运行或者你登录不了系统。 在基于 [Red Hat](https://www.2daygeek.com/category/red-hat/)(RHEL)7/8 的系统中,使用 `runlevel1.target` 或 `rescue.target` 来实现。 在此模式下,系统会挂载所有的本地文件系统,但不开启网络接口。 系统仅启动特定的几个服务和修复系统必要的尽可能少的功能。 当你想运行文件系统一致性检查来修复损坏的文件系统,或忘记 root 密码后重置密码,或要修复系统上的一个挂载点问题时,这个方法会很有用。 你可以用下面三种方法以单用户模式启动 [CentOS](https://www.2daygeek.com/category/centos/)/[RHEL](https://www.2daygeek.com/category/rhel/) 7/8 系统。 * 方法 1:通过向内核添加 `rd.break` 参数来以单用户模式启动 CentOS/RHEL 7/8 系统 * 方法 2:通过用 `init=/bin/bash` 或 `init=/bin/sh` 替换内核中的 `rhgb quiet` 语句来以单用户模式启动 CentOS/RHEL 7/8 系统 * 方法 3:通过用 `rw init=/sysroot/bin/sh` 参数替换内核中的 `ro` 语句以单用户模式启动 CentOS/RHEL 7/8 系统 ### 方法 1 通过向内核添加 `rd.break` 参数来以单用户模式启动 CentOS/RHEL 7/8 系统。 重启你的系统,在 GRUB2 启动界面,按下 `e` 键来编辑选中的内核。你需要选中第一行,第一个是最新的内核,然而如果你想用旧的内核启动系统你也可以选择其他的行。 ![](/data/attachment/album/202005/03/230638ivavlhhetah9oaaz.png) 根据你的 RHEL/CentOS 版本,找到 `linux16` 或 `linux` 语句,按下键盘上的 `End` 键,跳到行末,像下面截图中展示的那样添加关键词 `rd.break`,按下 `Ctrl+x` 或 `F10` 来进入单用户模式。 如果你的系统是 RHEL/CentOS 7,你需要找 `linux16`,如果你的系统是 RHEL/CentOS 8,那么你需要找 `linux`。 ![](/data/attachment/album/202005/03/230657vp7ai7naoxpe79ax.png) 这个修改会让你的 root 文件系统以 “只读(`ro`)” 模式挂载。你可以用下面的命令来验证下。下面的输出也明确地告诉你当前是在 “<ruby> 紧急模式 <rt> Emergency Mode </rt></ruby>”。 ``` # mount | grep root ``` ![](/data/attachment/album/202005/03/230714ofp2cc2p4w43ptc8.png) 为了修改 `sysroot` 文件系统,你需要用读写模式(`rw`)重新挂载它。 ``` # mount -o remount,rw /sysroot ``` 运行下面的命令修改环境,这就是大家熟知的 “监禁目录” 或 “chroot 监狱”。 ``` # chroot /sysroot ``` ![](/data/attachment/album/202005/03/230731ddze7uhp7wu7pztz.png) 现在,单用户模式已经完全准备好了。当你修复了你的问题要退出单用户模式时,执行下面的步骤。 CentOS/RHEL 7/8 默认使用 SELinux,因此创建下面的隐藏文件,这个文件会在下一次启动时重新标记所有文件。 ``` # touch /.autorelabel ``` 最后,用下面的命令重启系统。你也可以输入两次 `exit` 命令来重启你的系统。 ``` # reboot -f ``` ### 方法 2 通过用 `init=/bin/bash` 或 `init=/bin/sh` 替换内核中的 `rhgb quiet` 语句来以单用户模式启动 CentOS/RHEL 7/8 系统。 重启你的系统,在 GRUB2 启动界面,按下 `e` 键来编辑选中的内核。 
![](/data/attachment/album/202005/03/230749m6qeqi7e2utk9qte.png) 找到语句 `rhgb quiet`,用 `init=/bin/bash` 或 `init=/bin/sh` 替换它,然后按下 `Ctrl+x` 或 `F10` 来进入单用户模式。 `init=/bin/bash` 的截图。 ![](/data/attachment/album/202005/03/230807e24n22k41j1zesj8.png) `init=/bin/sh` 的截图。 ![](/data/attachment/album/202005/03/230825eup47566sxyl2y4v.png) 默认情况下,上面的操作会以只读(`ro`)模式挂载你的 `/` 分区,因此你需要以读写(`rw`)模式重新挂载 `/` 文件系统,这样才能修改它。 ``` # mount -o remount,rw / ``` ![](/data/attachment/album/202005/03/230841wrqi4urzwqq9wcq9.png) 现在你可以执行你的任务了。当结束时,执行下面的命令来开启重启时的 SELinux 重新标记。 ``` # touch /.autorelabel ``` 最后,重启系统。 ``` # exec /sbin/init 6 ``` ### 方法 3 通过用 `rw init=/sysroot/bin/sh` 参数替换内核中的 `ro` 单词,以单用户模式启动 CentOS/RHEL 7/8 系统。 为了中断自动启动的过程,重启你的系统并在 GRUB2 启动界面按下任意键。 现在会展示你系统上所有可用的内核,选择最新的内核,按下 `e` 键来编辑选中的内核参数。 找到以 `linux` 或 `linux16` 开头的语句,用 `rw init=/sysroot/bin/sh` 替换 `ro`。替换完后按下 `Ctrl+x` 或 `F10` 来进入单用户模式。 运行下面的命令把环境切换为 “chroot 监狱”。 ``` # chroot /sysroot ``` 如果需要,做出必要的修改。修改完后,执行下面的命令来开启重启时的 SELinux 重新标记。 ``` # touch /.autorelabel ``` 最后,重启系统。 ``` # reboot -f ``` --- via: <https://www.2daygeek.com/boot-centos-7-8-rhel-7-8-single-user-mode/> 作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lxbwolf](https://github.com/lxbwolf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
404
Not Found
null
12,183
安装完 Ubuntu 20.04 后要做的 16 件事
https://itsfoss.com/things-to-do-after-installing-ubuntu-20-04/
2020-05-04T22:02:56
[ "Ubuntu" ]
https://linux.cn/article-12183-1.html
> > 以下是安装 Ubuntu 20.04 之后需要做的一些调整和事项,它将使你获得更流畅、更好的桌面 Linux 体验。 > > > [Ubuntu 20.04 LTS(长期支持版)带来了许多新的特性](/article-12146-1.html)和观感上的变化。如果你要安装 Ubuntu 20.04,让我向你展示一些推荐步骤便于你的使用。 ### 安装完 Ubuntu 20.04 LTS “Focal Fossa” 后要做的 16 件事 ![](/data/attachment/album/202005/04/220300dbgym2bdwbd82bdm.jpg) 我在这里提到的步骤仅是我的建议。如果一些定制或调整不适合你的需要和兴趣,你可以忽略它们。 同样的,有些步骤看起来很简单,但是对于一个 Ubuntu 新手来说是必要的。 这里的一些建议适用于以 GNOME 为默认桌面的 Ubuntu 20.04,所以请检查 [Ubuntu 版本](/article-9872-1.html)和[桌面环境](/article-12124-1.html)。 以下列表便是安装了代号为 Focal Fossa 的 Ubuntu 20.04 LTS 之后要做的事。 #### 1、通过更新和启用额外的软件仓库来准备你的系统 安装 Ubuntu 或任何其他 Linux 发行版之后,你应该做的第一件事就是更新它。Linux 的运作是建立在本地的可用软件包数据库上,而这个缓存需要同步以便你能够安装软件。 升级 Ubuntu 非常简单。你可以从菜单中运行软件更新器(按 `Super` 键并搜索 “software updater”): ![Ubuntu 20.04 的软件升级器](/data/attachment/album/202005/04/220302nnzdduu23j2nssf8.jpg) 你也可以在终端使用以下命令更新你的系统: ``` sudo apt update && sudo apt upgrade ``` 接下来,你应该确保启用了 [universe(宇宙)和 multiverse(多元宇宙)软件仓库](https://itsfoss.com/ubuntu-repositories/)。使用这些软件仓库,你可以访问更多的软件。我还推荐阅读关于 [Ubuntu 软件仓库](https://itsfoss.com/ubuntu-repositories/)的文章,以了解它背后的基本概念。 在菜单中搜索 “Software & Updates”: ![软件及更新设置项](/data/attachment/album/202005/04/220302ge8ltxpic5zpg52z.jpg) 请务必选中软件仓库前面的勾选框: ![启用额外的软件仓库](/data/attachment/album/202005/04/220306hy2ghhtxh202aiex.jpg) #### 2、安装媒体解码器来播放 MP3、MPEG4 和其他格式媒体文件 如果你想播放媒体文件,如 MP3、MPEG4、AVI 等,你需要安装媒体解码器。由于各个国家的版权问题,Ubuntu 在默认情况下不会安装它。 作为个人,你可以[使用 Ubuntu Restricted Extra 安装包](/article-11906-1.html)很轻松地安装这些媒体编解码器。这将[在你的 Ubuntu 系统安装](/article-12074-1.html)媒体编解码器、Adobe Flash 播放器和微软 True Type 字体等。 你可以通过[点击这个链接](//ubuntu-restricted-extras/)来安装它(它会要求在软件中心打开它),或者使用以下命令: ``` sudo apt install ubuntu-restricted-extras ``` 如果遇到 EULA 或许可证界面,请记住使用 `tab` 键在选项之间进行选择,然后按回车键确认你的选择。 ![按 tab 键选择 OK 并按回车键](/data/attachment/album/202005/04/220307pyj5mzc5um5urlrj.jpg) #### 3、从软件中心或网络上安装软件 现在已经设置好了软件仓库并更新了软件包缓存,应该开始安装所需的软件了。 在 Ubuntu 中安装应用程序有几种方法,最简单和正式的方法是使用软件中心。 ![Ubuntu 软件中心](/data/attachment/album/202005/04/220308xzfkkzq2h8sf50ys.png) 如果你想要一些关于软件的建议,请参考这个[丰富的各种用途的 Ubuntu 
应用程序列表](https://itsfoss.com/best-ubuntu-apps/)。 一些软件供应商提供了 .deb 文件来方便地安装他们的应用程序。你可以从他们的网站获得 .deb 文件。例如,要[在 Ubuntu 上安装谷歌 Chrome](https://itsfoss.com/install-chrome-ubuntu/),你可以从它的网站上获得 .deb 文件,双击它开始安装。 #### 4、通过 Steam Proton 和 GameMode 享受游戏 [在 Linux 上进行游戏](/article-7316-1.html)已经有了长足的发展。你不再受限于自带的少数游戏。你可以[在 Ubuntu 上安装 Steam](https://itsfoss.com/install-steam-ubuntu-linux/)并享受许多游戏。 [Steam 新的 Proton 项目](/article-10054-1.html)可以让你在 Linux 上玩许多只适用于 Windows 的游戏。除此之外,Ubuntu 20.04 还默认安装了 [Feral Interactive 的 GameMode](https://github.com/FeralInteractive/gamemode)。 GameMode 会自动调整 Linux 系统的性能,使游戏具有比其他后台进程更高的优先级。 这意味着一些支持 GameMode 的游戏(如[古墓丽影·崛起](https://en.wikipedia.org/wiki/Rise_of_the_Tomb_Raider))在 Ubuntu 上的性能应该有所提高。 #### 5、管理自动更新(适用于进阶用户和专家) 最近,Ubuntu 已经开始自动下载并安装对你的系统至关重要的安全更新。这是一个安全功能,作为一个普通用户,你应该让它保持默认开启。 但是,如果你喜欢自己进行配置更新,而这个自动更新经常导致你[“无法锁定管理目录”错误](https://itsfoss.com/could-not-get-lock-error/),也许你可以改变自动更新行为。 你可以选择“立即显示”,这样一有安全更新就会立即通知你,而不是自动安装。 ![管理自动更新设置](/data/attachment/album/202005/04/220308v8uimis0e2ziik1r.png) #### 6、控制电脑的自动挂起和屏幕锁定 如果你在笔记本电脑上使用 Ubuntu 20.04,那么你可能需要注意一些电源和屏幕锁定设置。 如果你的笔记本电脑处于电池模式,Ubuntu 会在 20 分钟不活动后休眠系统。这样做是为了节省电池电量。就我个人而言,我不喜欢它,因此我禁用了它。 类似地,如果你离开系统几分钟,它会自动锁定屏幕。我也不喜欢这种行为,所以我宁愿禁用它。 ![Ubuntu 20.04 的电源设置](/data/attachment/album/202005/04/220310gubfxhya11xxyyi4.png) #### 7、享受夜间模式 [Ubuntu 20.04 中最受关注的特性](https://www.youtube.com/watch?v=lpq8pm_xkSE)之一是夜间模式。你可以通过进入设置并在外观部分中选择它来启用夜间模式。 ![开启夜间主题 Ubuntu](/data/attachment/album/202005/04/220313es5v4caiil4q8gvs.png) 你可能需要做一些[额外的调整来获得完整的 Ubuntu 20.04 夜间模式](/article-12098-1.html)。 #### 8、控制桌面图标和启动程序 如果你想要一个最简的桌面,你可以禁用桌面上的图标。你还可以从左侧禁用启动程序,并在顶部面板中禁用软件状态栏。 所有这些都可以通过默认的新 GNOME 扩展来控制,该程序默认情况下已经可用。 ![禁用 Ubuntu 20 04 的 Dock](/data/attachment/album/202005/04/220313ebfdcl7w5ww6ac67.png) 顺便说一下,你也可以通过“设置”->“外观”来将启动栏的位置改变到底部或者右边。 #### 9、使用表情符和特殊字符,或从搜索中禁用它 Ubuntu 提供了一个使用表情符号的简单方法。在默认情况下,有一个专用的应用程序叫做“字符”。它基本上可以为你提供表情符号的 [Unicode](https://en.wikipedia.org/wiki/List_of_Unicode_characters)。 不仅是表情符号,你还可以使用它来获得法语、德语、俄语和拉丁语字符的 
unicode。单击符号即可复制其 unicode,当你粘贴该代码时,你所选择的符号便被插入。 ![Ubuntu 表情符](/data/attachment/album/202005/04/220314qpfpnafzpe1nh66b.jpg) 你也能在桌面搜索中找到这些特殊的字符和表情符号。也可以从搜索结果中复制它们。 ![表情符出现在桌面搜索中](/data/attachment/album/202005/04/220315tej4qa2e4jjacji4.jpg) 如果你不想在搜索结果中看到它们,你应该禁用搜索功能对它们的访问。下一节将讨论如何做到这一点。 #### 10、掌握桌面搜索 GNOME 桌面拥有强大的搜索功能,大多数人使用它来搜索已安装的应用程序,但它不仅限于此。 按 `Super` 键并搜索一些东西,它将显示与搜索词匹配的任何应用程序,然后是系统设置和软件中心提供的匹配应用程序。 ![桌面搜索](/data/attachment/album/202005/04/220316msx4dsbk8t6w4xlt.jpg) 不仅如此,搜索还可以找到文件中的文本。如果你正在使用日历,它也可以找到你的会议和提醒。你甚至可以在搜索中进行快速计算并复制其结果。 ![Ubuntu搜索的快速计算](/data/attachment/album/202005/04/220316uuiu9ikg8zrpgjgz.jpg) 你可以进入“设置”中来控制可以搜索的内容和顺序。 ![](/data/attachment/album/202005/04/220317j52vxa959v0muqcv.png) #### 11、使用夜灯功能,减少夜间眼睛疲劳 如果你在晚上使用电脑或智能手机,你应该使用夜灯功能来减少眼睛疲劳。我觉得这很有帮助。 夜灯功能会在屏幕上增加一层黄色的色调,比白光更柔和、不那么刺眼。 你可以在“设置”->“显示”切换到夜灯选项卡来开启夜光功能。你可以根据自己的喜好设置“黄度”。 ![夜灯功能](/data/attachment/album/202005/04/220318gx899o615lxygryd.png) #### 12、使用 2K/4K 显示器?使用分辨率缩放得到更大的图标和字体 如果你觉得图标、字体、文件夹在你的高分辨率屏幕上看起来都太小了,你可以利用分辨率缩放。 启用分辨率缩放可以让你有更多的选项来从 100% 增加到 200%。你可以选择适合自己喜好的缩放尺寸。 ![在设置->显示中启用高分缩放](/data/attachment/album/202005/04/220320t4xe6g7gvqt7x0sx.jpg) #### 13、探索 GNOME 扩展功能以扩展 GNOME 桌面可用性 GNOME 桌面有称为“扩展”的小插件或附加组件。你应该[学会使用 GNOME 扩展](https://itsfoss.com/gnome-shell-extensions/)来扩展系统的可用性。 如下图所示,天气扩展在顶部面板中显示了天气信息。不起眼但十分有用。你也可以在这里查看一些[最佳 GNOME 扩展](https://itsfoss.com/best-gnome-extensions/)。不需要全部安装,只使用那些对你有用的。 ![天气扩展](/data/attachment/album/202005/04/220323kzwzyq8vdnmlwwbq.jpg) #### 14、启用“勿扰”模式,专注于工作 如果你想专注于工作,禁用桌面通知会很方便。你可以轻松地启用“勿扰”模式,并静音所有通知。 ![启用“请勿打扰”清除桌面通知](/data/attachment/album/202005/04/220324q5gtzx721lz51e55.png) 这些通知仍然会在消息栏中,以便你以后可以阅读它们,但是它们不会在桌面上弹出。 #### 15、清理你的系统 这是你安装 Ubuntu 后不需要马上做的事情。但是记住它会对你有帮助。 随着时间的推移,你的系统将有大量不再需要的包。你可以用这个命令一次性删除它们: ``` sudo apt autoremove ``` 还有其他[清理 Ubuntu 以释放磁盘空间的方法](https://itsfoss.com/free-up-space-ubuntu-linux/),但这是最简单和最安全的。 #### 16、根据你的喜好调整和定制 GNOME 桌面 我强烈推荐[安装 GNOME 设置工具](https://itsfoss.com/gnome-tweak-tool/)。这将让你可以通过额外的设置来进行定制。 ![Gnome 
设置工具](/data/attachment/album/202005/04/220325tzeeekm1mepmg1er.png) 比如,你可以[以百分比形式显示电池容量](https://itsfoss.com/display-battery-ubuntu/)、[修正在触摸板右键问题](https://itsfoss.com/fix-right-click-touchpad-ubuntu/)、改变 Shell 主题、改变鼠标指针速度、显示日期和星期数、改变应用程序窗口行为等。 定制是没有尽头的,我可能仅使用了它的一小部分功能。这就是为什么我推荐[阅读这些](https://itsfoss.com/gnome-tweak-tool/)关于[自定义 GNOME 桌面](https://itsfoss.com/gnome-tricks-ubuntu/)的文章。 你也可以[在 Ubuntu 中安装新主题](https://itsfoss.com/install-themes-ubuntu/),不过就我个人而言,我喜欢这个版本的默认主题。这是我第一次在 Ubuntu 发行版中使用默认的图标和主题。 #### 安装 Ubuntu 之后你会做什么? 如果你是 Ubuntu 的初学者,我建议你[阅读这一系列 Ubuntu 教程](https://itsfoss.com/getting-started-with-ubuntu/)开始学习。 这就是我的建议。安装 Ubuntu 之后你要做什么?分享你最喜欢的东西,我可能根据你的建议来更新这篇文章。 --- via: <https://itsfoss.com/things-to-do-after-installing-ubuntu-20-04/> 作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[qfzy1233](https://github.com/qfzy1233) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
*Here is a list of tweaks and things to do after installing Ubuntu 20.04, to get a smoother and better desktop Linux experience.* [Ubuntu 20.04 LTS brings plenty of new features](https://itsfoss.com/ubuntu-20-04-release-features/) and visual changes. If you choose to install Ubuntu 20.04, let me show you a few recommended steps that you can follow to get started with it. ## 16 Things to do after installing Ubuntu 20.04 LTS “Focal Fossa” ![Things To Do After Installing Ubuntu 20.04](https://itsfoss.com/content/images/wordpress/2020/04/things-to-do-after-installing-ubuntu-20-04.jpg) The steps I am going to mention here are my recommendation. You may ignore a few customization or tweaks if they don’t suit your need and interest. Similarly, some steps may seem too simple but essential for someone completely new to Ubuntu. A number of suggestions here are suited for the default Ubuntu 20.04 with GNOME desktop. So please check [which Ubuntu version](https://itsfoss.com/how-to-know-ubuntu-unity-version/) and [which desktop environment](https://itsfoss.com/find-desktop-environment/) you are using. Let’s get started with the list of things to do after installing Ubuntu 20.04 LTS codenamed Focal Fossa. ### 1. Get your system ready by updating and enabling additional repos The first thing you should do after installing Ubuntu or any other Linux distribution is to update it. Linux works on a local database of available packages. And this cache needs to be synced in order for you to be able to install any software. It is very easy to update Ubuntu. 
You can run the software updater from the menu (press Windows key and search for software updater): ![Software Updater in Ubuntu 20.04](https://itsfoss.com/content/images/wordpress/2020/04/software-updater-ubuntu-20-04.jpg) You may also use the following command in the terminal to update your system: `sudo apt update && sudo apt upgrade` Next, you should make sure that you have [universe and multiverse repositories enabled](https://itsfoss.com/ubuntu-repositories/). You’ll have access to a lot more software with these repositories. I also recommend reading about [Ubuntu repositories](https://itsfoss.com/ubuntu-repositories/) to learn the basic concept behind it. Search for Software & Updates in the menu: ![Software & Updates Settings Ubuntu in 20.04](https://itsfoss.com/content/images/wordpress/2020/04/software-updates-settings-ubuntu-20-04.jpg) Make sure to check the boxes in front of the repositories: ![Extra Repositories Ubuntu 20](https://itsfoss.com/content/images/wordpress/2020/04/extra-repositories-ubuntu-20.jpg) ### 2. Install media codecs to play MP3, MPEG4 and other media files If you want to play media files like MP3, MPEG4, AVI etc, you’ll need to install media codecs. Ubuntu doesn’t install it by default because of copyright issues in various countries. As an individual, you can install these media codecs easily [using the Ubuntu Restricted Extra package](https://itsfoss.com/install-media-codecs-ubuntu/). This will install media codecs, Adobe Flash player and [Microsoft True Type Fonts in your Ubuntu system](https://itsfoss.com/install-microsoft-fonts-ubuntu/). You can install it by using this command: `sudo apt install ubuntu-restricted-extras` If you encounter the EULA or the license screen, remember to use the tab key to select between the options and then hit enter to confirm your choice. ![Installing Ubuntu Restricted Extras](https://itsfoss.com/content/images/wordpress/2020/02/installing_ubuntu_restricted_extras.jpg) ### 3. 
Install software from the software center or the web Now that you have set up the repositories and updated the package cache, you should start installing the software that you need. There are several ways of [installing applications in Ubuntu](https://itsfoss.com/remove-install-software-ubuntu/). The easiest and the official way is to use the Software Center. ![Ubuntu Software Center](https://itsfoss.com/content/images/wordpress/2020/04/software-center-ubuntu-20-800x509.png) If you want some recommendations about software, please refer to this extensive [list of Ubuntu applications for different purposes](https://itsfoss.com/best-ubuntu-apps/). Some software vendors provide .deb files to easily install their application. You may get the deb files from their website. For example, to [install Google Chrome on Ubuntu](https://itsfoss.com/install-chrome-ubuntu/), you can get the deb file from its website and double-click on it to start the installation. Note: There is an issue in Ubuntu 20.04 and double-clicking on a .deb file doesn’t open it in the software center. Read how to [fix the issue of .deb file not working in Ubuntu 20.04](https://itsfoss.com/cant-install-deb-file-ubuntu/). ### 4. Enjoy gaming with Steam Proton and GameMode [Gaming on Linux](https://itsfoss.com/linux-gaming-guide/) has come a long way. You are not restricted to a handful of games included by default. You can [install Steam on Ubuntu](https://itsfoss.com/install-steam-ubuntu-linux/) and enjoy a good number of games. [Steam’s new Proton project](https://itsfoss.com/steam-play/) enables you to play a number of Windows-only games on Linux. In addition to that, Ubuntu 20.04 comes with [Feral Interactive’s GameMode](https://github.com/FeralInteractive/gamemode) installed by default. The GameMode automatically adjusts Linux system performance to give more priority to games than other background processes. 
This means some games that support the GameMode (like [Rise of the Tomb Raider](https://en.wikipedia.org/wiki/Rise_of_the_Tomb_Raider)) should have improved performance on Ubuntu. ### 5. Manage auto-updates (for intermediate and expert users) Recently, Ubuntu has started to automatically download and install security updates that are essential to your system. This is a security feature and, as a regular user, you should leave it as it is. But if you like to do everything on your own, and this auto-update is frequently leading you to the [“Unable to lock the administration directory” error](https://itsfoss.com/could-not-get-lock-error/), maybe you can change the auto-update behavior. You can opt for “Show immediately” so that it notifies you of security updates as soon as they are available instead of installing them automatically. ![Auto Updates Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/auto-updates-ubuntu-800x361.png) ### 6. Control automatic suspend and screenlock for laptops If you are using Ubuntu 20.04 on a laptop, then you may want to pay attention to a few power and screenlock settings. If your laptop is on battery mode, Ubuntu will suspend the system after 20 minutes of inactivity. This is done to save battery power. Personally, I don’t like it and thus I disable it. Similarly, if you leave your system for a few minutes, it automatically locks the screen. I don’t like this behavior either, so I prefer disabling it. ![Power Settings Ubuntu 20 04](https://itsfoss.com/content/images/wordpress/2020/04/power-settings-ubuntu-20-04.png?fit=800%2C591&ssl=1) ### 7. Enjoy dark mode One of the [most talked about features of Ubuntu 20.04](https://www.youtube.com/watch?v=lpq8pm_xkSE) is the dark mode. You can enable the dark mode by going into Settings and selecting it under the Appearance section. 
![Enable Dark Theme Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/enable-dark-theme-ubuntu.png) You may have to do some [additional tweaking to get full dark mode in Ubuntu 20.04](https://itsfoss.com/dark-mode-ubuntu/). ### 8. Control desktop icons and launcher If you want a minimal looking desktop, you can disable the icons on the desktop. You can also disable the launcher from the left side and the appindicators in the top panel. All this can be controlled via the new GNOME Extensions that is already available by default. ![Disable Dock Ubuntu 20 04](https://itsfoss.com/content/images/wordpress/2020/04/disable-dock-ubuntu-20-04.png) By the way, you can also change the position of the launcher to the bottom or to the right by going to the Settings->Appearance. ### 9. Use emojis (smileys) and special characters or disable it from the search Ubuntu provides an easy way to use smiley or the emoticons. There is a dedicated application called Characters installed by default. It basically gives you [Unicode](https://en.wikipedia.org/wiki/List_of_Unicode_characters) of the emojis. Not only emojis, you can use it to get the unicode for French, German, Russian and Latin characters. Clicking on the symbol gives you the opportunity to copy the unicode and when you paste this code, your chosen symbol should be typed. ![Emoji Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/emoji-ubuntu.jpg) You’ll find these special characters and emoticons appearing in the desktop search as well. You can copy them from the search results as well. ![Emojis Desktop Search Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/emojis-desktop-search-ubuntu.jpg) If you don’t want to see them in search results, you should disable their access to the search feature. The next section discuss how to do that. ### 10. Master the desktop search The GNOME desktop has a powerful search feature. 
Most people use it for searching installed applications but it is more than just that.

Press the super key (Windows key) and search for something. It will show any applications that match that search term, followed by system settings and matching applications available in the software center.

![Ubuntu Desktop Search 1](https://itsfoss.com/content/images/wordpress/2020/04/ubuntu-desktop-search-1.jpg)

Not only that, the search can also find text inside files. If you are using the calendar, it can also find your meetings and reminders. You can even do quick calculations in the search and copy its result.

![Quick Calculations Ubuntu Search](https://itsfoss.com/content/images/wordpress/2020/04/quick-calculations-ubuntu-search.jpg)

You can control what can be searched and in which order by going into Settings.

![Search Settings Control Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/search-settings-control-ubuntu-800x534.png)

### 11. Use the night light feature to reduce eye strain at night

If you use your computer or smartphone at night, you should use the night light feature to reduce eye strain. I feel that it helps a lot. The night light feature adds a yellow tint to the screen, which is easier on the eyes than the white light.

You can enable night light in Settings -> Displays by switching to the Night Light tab. You can set the ‘yellowness’ as per your liking.

![Nightlight in Ubuntu 20.04](https://itsfoss.com/content/images/wordpress/2020/04/nightlight-ubuntu-20-04.png)

### 12. Got a 2K/4K screen? Use fractional scaling to get bigger icons and fonts

If you feel that the icons, fonts, and folders all look too small on your HiDPI screen, you can take advantage of fractional scaling.

[Enabling fractional scaling](https://itsfoss.com/enable-fractional-scaling-ubuntu/) gives you more options to increase the size between 100% and 200%. You can choose the scaling size that suits your preference.
![Fractional Scaling in Ubuntu 20.04](https://itsfoss.com/content/images/wordpress/2020/04/fractional-scaling-ubuntu.jpg)

### 13. Explore GNOME Extensions to extend the usability of GNOME desktop

The GNOME desktop has tiny plugins or add-ons called Extensions. You should [learn to use GNOME extensions](https://itsfoss.com/gnome-shell-extensions/) to extend the usability of your system.

As you can see in the image below, the weather extension shows the weather information in the top panel. A tiny but useful thing. You may also take a look at some of the [best GNOME extensions](https://itsfoss.com/best-gnome-extensions/) here. Don’t install all of them; use only those that are useful to you.

![Weather Extension Ubuntu](https://itsfoss.com/content/images/wordpress/2020/04/weather-extension-ubuntu.jpg)

### 14. Enable ‘do not disturb’ mode and focus on work

If you want to concentrate on work, disabling desktop notifications would come in handy. You can easily enable ‘do not disturb’ mode and mute all notifications.

![Do Not Disturb Option in Ubuntu 20.04](https://itsfoss.com/content/images/wordpress/2020/03/do-not-distrub-option-ubuntu-20-04.png)

These notifications will still be in the message tray so that you can read them later, but they won’t pop up on the desktop anymore.

### 15. Clean your system

This is something you don’t need to do right after installing Ubuntu. But keeping it in mind will help you.

Over time, your system will have a significant number of packages that won’t be needed anymore. You can remove them all in one go with this command:

`sudo apt autoremove`

There are other [ways to clean Ubuntu to free disk space](https://itsfoss.com/free-up-space-ubuntu-linux/) but this is the easiest and safest.

### 16. Tweak and customize the GNOME desktop to your liking

I highly recommend [installing the GNOME Tweaks tool](https://itsfoss.com/gnome-tweak-tool/). This will give you access to a few additional settings to tweak.
![Gnome Tweaks Tool Ubuntu 20 04](https://itsfoss.com/content/images/wordpress/2020/04/gnome-tweaks-tool-ubuntu-20-04.png?fit=800%2C551&ssl=1)

For example, you can [display battery percentage](https://itsfoss.com/display-battery-ubuntu/), [fix the right-click touchpad issue](https://itsfoss.com/fix-right-click-touchpad-ubuntu/), change the shell theme, change the mouse pointer speed, display date and week numbers, change application window behavior, etc.

There is no end to customization, and I probably cannot cover most of them here. This is why I recommend [reading these articles](https://itsfoss.com/gnome-tweak-tool/) about [customizing GNOME desktop](https://itsfoss.com/gnome-tricks-ubuntu/).

You can also [install new themes in Ubuntu](https://itsfoss.com/install-themes-ubuntu/), though personally, I like the default theme in this release. This is the first time that I have stuck with the default icons and theme in an Ubuntu release.

### What do you do after installing Ubuntu?

If you are an Ubuntu beginner, I recommend [going through this collection of Ubuntu tutorials](https://itsfoss.com/getting-started-with-ubuntu/) to get started with it.

So these were my recommendations. What are the steps you follow after installing Ubuntu? Share your favorite things and I might update this article with your suggestions.
12,184
Go 中对栈中函数进行内联
https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go
2020-05-04T23:04:20
[ "内联", "Go", "编译器" ]
https://linux.cn/article-12184-1.html
![](/data/attachment/album/202005/04/230304avxkxlyoozbiw1bn.jpg) [上一篇文章](/article-12176-1.html)中我论述了<ruby> 叶子内联 <rt> leaf inlining </rt></ruby>是怎样让 Go 编译器减少函数调用的开销的,以及延伸出了跨函数边界的优化的机会。本文中,我要论述内联的限制以及叶子内联与<ruby> 栈中内联 <rt> mid-stack inlining </rt></ruby>的对比。 ### 内联的限制 把函数内联到它的调用处消除了调用的开销,为编译器进行其他的优化提供了更好的机会,那么问题来了,既然内联这么好,内联得越多开销就越少,*为什么不尽可能多地内联呢?* 内联可能会以增加程序大小换来更快的执行时间。限制内联的最主要原因是,创建许多函数的内联副本会增加编译时间,并导致生成更大的二进制文件的边际效应。即使把内联带来的进一步的优化机会考虑在内,太激进的内联也可能会增加生成的二进制文件的大小和编译时间。 内联收益最大的是[小函数](https://medium.com/@joshsaintjacque/small-functions-considered-awesome-c95b3fd1812f),相对于调用它们的开销来说,这些函数做很少的工作。随着函数大小的增长,函数内部做的工作与函数调用的开销相比省下的时间越来越少。函数越大通常越复杂,因此优化其内联形式相对于原地优化的好处会减少。 ### 内联预算 在编译过程中,每个函数的内联能力是用*内联预算*计算的 <sup id="fnref1"> <a href="#fn1" rel="footnote"> 1 </a></sup>。开销的计算过程不太容易凭直觉把握,但大体上,像一元和二元等简单操作,在<ruby> 抽象语法树 <rt> Abstract Syntax Tree </rt></ruby>(AST)中通常是每个节点一个单位,更复杂的操作如 `make` 可能单位更多。考虑下面的例子: ``` package main func small() string { s := "hello, " + "world!" return s } func large() string { s := "a" s += "b" s += "c" s += "d" s += "e" s += "f" s += "g" s += "h" s += "i" s += "j" s += "k" s += "l" s += "m" s += "n" s += "o" s += "p" s += "q" s += "r" s += "s" s += "t" s += "u" s += "v" s += "w" s += "x" s += "y" s += "z" return s } func main() { small() large() } ``` 使用 `-gcflags=-m=2` 参数编译这个函数能让我们看到编译器分配给每个函数的开销: ``` % go build -gcflags=-m=2 inl.go # command-line-arguments ./inl.go:3:6: can inline small with cost 7 as: func() string { s := "hello, world!"; return s } ./inl.go:8:6: cannot inline large: function too complex: cost 82 exceeds budget 80 ./inl.go:38:6: can inline main with cost 68 as: func() { small(); large() } ./inl.go:39:7: inlining call to small func() string { s := "hello, world!"; return s } ``` 编译器根据函数 `func small()` 的开销(7)决定可以对它内联,而 `func large()` 的开销太大,编译器决定不进行内联。`func main()` 被标记为适合内联的,分配了 68 的开销;其中 `small` 占用 7,调用 `small` 函数占用 57,剩余的(4)是它自己的开销。 可以用 `-gcflags=-l` 参数控制内联预算的等级。下面是可使用的值: * `-gcflags=-l=0` 默认的内联等级。 * `-gcflags=-l`(或 `-gcflags=-l=1`)取消内联。 * 
`-gcflags=-l=2` 和 `-gcflags=-l=3` 现在已经不使用了。和 `-gcflags=-l=0` 相比没有区别。 * `-gcflags=-l=4` 减少非叶子函数和通过接口调用的函数的开销。<sup id="fnref2"> <a href="#fn2" rel="footnote"> 2 </a></sup> #### 不确定语句的优化 一些函数虽然内联的开销很小,但由于太复杂它们仍不适合进行内联。这就是函数的不确定性,因为一些操作的语义在内联后很难去推导,如 `recover`、`break`。其他的操作,如 `select` 和 `go` 涉及运行时的协调,因此内联后引入的额外的开销不能抵消内联带来的收益。 不确定的语句也包括 `for` 和 `range`,这些语句不一定开销很大,但目前为止还没有对它们进行优化。 ### 栈中函数优化 在过去,Go 编译器只对叶子函数进行内联 —— 只有那些不调用其他函数的函数才有资格。在上一段不确定的语句的探讨内容中,一次函数调用就会让这个函数失去内联的资格。 进入栈中进行内联,就像它的名字一样,能内联在函数调用栈中间的函数,不需要先让它下面的所有的函数都被标记为有资格内联的。栈中内联是 David Lazar 在 Go 1.9 中引入的,并在随后的版本中做了改进。[这篇文稿](https://docs.google.com/presentation/d/1Wcblp3jpfeKwA0Y4FOmj63PW52M_qmNqlQkNaLj0P5o/edit#slide=id.p)深入探究了保留栈追踪行为和被深度内联后的代码路径里的 `runtime.Callers` 的难点。 在前面的例子中我们看到了栈中函数内联。内联后,`func main()` 包含了 `func small()` 的函数体和对 `func large()` 的一次调用,因此它被判定为非叶子函数。在过去,这会阻止它被继续内联,虽然它的联合开销小于内联预算。 栈中内联的最主要的应用案例就是减少贯穿函数调用栈的开销。考虑下面的例子: ``` package main import ( "fmt" "strconv" ) type Rectangle struct {} //go:noinline func (r *Rectangle) Height() int { h, _ := strconv.ParseInt("7", 10, 0) return int(h) } func (r *Rectangle) Width() int { return 6 } func (r *Rectangle) Area() int { return r.Height() * r.Width() } func main() { var r Rectangle fmt.Println(r.Area()) } ``` 在这个例子中, `r.Area()` 是个简单的函数,调用了两个函数。`r.Width()` 可以被内联,`r.Height()` 这里用 `//go:noinline` 指令标注了,不能被内联。<sup id="fnref3"> <a href="#fn3" rel="footnote"> 3 </a></sup> ``` % go build -gcflags='-m=2' square.go # command-line-arguments ./square.go:12:6: cannot inline (*Rectangle).Height: marked go:noinline ./square.go:17:6: can inline (*Rectangle).Width with cost 2 as: method(*Rectangle) func() int { return 6 } ./square.go:21:6: can inline (*Rectangle).Area with cost 67 as: method(*Rectangle) func() int { return r.Height() * r.Width() } ./square.go:21:61: inlining call to (*Rectangle).Width method(*Rectangle) func() int { return 6 } ./square.go:23:6: cannot inline main: function too complex: cost 150 exceeds budget 80 ./square.go:25:20: inlining call to 
(*Rectangle).Area method(*Rectangle) func() int { return r.Height() * r.Width() } ./square.go:25:20: inlining call to (*Rectangle).Width method(*Rectangle) func() int { return 6 } ``` 由于 `r.Area()` 中的乘法与调用它的开销相比并不大,因此内联它的表达式是纯收益,即使它的调用的下游 `r.Height()` 仍是没有内联资格的。 #### 快速路径内联 关于栈中内联的效果最令人吃惊的例子是 2019 年 [Carlo Alberto Ferraris](https://go-review.googlesource.com/c/go/+/148959) 通过允许把 `sync.Mutex.Lock()` 的快速路径(非竞争的情况)内联到它的调用方来[提升它的性能](https://go-review.googlesource.com/c/go/+/148959)。在这个修改之前,`sync.Mutex.Lock()` 是个很大的函数,包含很多难以理解的条件,使得它没有资格被内联。即使锁可用时,调用者也要付出调用 `sync.Mutex.Lock()` 的代价。 Carlo 把 `sync.Mutex.Lock()` 分成了两个函数(他自己称为<ruby> 外联 <rt> outlining </rt></ruby>)。外部的 `sync.Mutex.Lock()` 方法现在调用 `sync/atomic.CompareAndSwapInt32()` 且如果 CAS(<ruby> 比较并交换 <rt> Compare and Swap </rt></ruby>)成功了之后立即返回给调用者。如果 CAS 失败,函数会走到 `sync.Mutex.lockSlow()` 慢速路径,需要对锁进行注册,暂停 goroutine。<sup id="fnref4"> <a href="#fn4" rel="footnote"> 4 </a></sup> ``` % go build -gcflags='-m=2 -l=0' sync 2>&1 | grep '(*Mutex).Lock' ../go/src/sync/mutex.go:72:6: can inline (*Mutex).Lock with cost 69 as: method(*Mutex) func() { if "sync/atomic".CompareAndSwapInt32(&m.state, 0, mutexLocked) { if race.Enabled { }; return }; m.lockSlow() } ``` 通过把函数分割成一个简单的不能再被分割的外部函数,和(如果没走到外部函数就走到的)一个处理慢速路径的复杂的内部函数,Carlo 组合了栈中函数内联和[编译器对基础操作的支持](https://dave.cheney.net/2019/08/20/go-compiler-intrinsics),减少了非竞争锁 14% 的开销。之后他在 `sync.RWMutex.Unlock()` 重复这个技巧,节省了另外 9% 的开销。 ### 相关文章: 1. [Go 中的内联优化](https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go "Inlining optimisations in Go") 2. [goroutine 的栈为什么会无限增长?](https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite "Why is a Goroutine’s stack infinite ?") 3. [栈追踪和 errors 包](https://dave.cheney.net/2016/06/12/stack-traces-and-the-errors-package "Stack traces and the errors package") 4. [零值是什么,为什么它很有用?](https://dave.cheney.net/2013/01/19/what-is-the-zero-value-and-why-is-it-useful "What is the zero value, and why is it useful?") --- 1. 
不同发布版本中,在考虑该函数是否适合内联时,Go 编译器对同一函数的预算是不同的。 [↩](#fnref1) 2. 时刻记着编译器的作者警告过[“更高的内联等级(比 -l 更高)可能导致错误或不被支持”](https://github.com/golang/go/blob/be08e10b3bc07f3a4e7b27f44d53d582e15fd6c7/src/cmd/compile/internal/gc/inl.go#L11)。 Caveat emptor。 [↩](#fnref2) 3. 编译器有足够的能力来内联像 `strconv.ParseInt` 的复杂函数。作为一个实验,你可以尝试去掉 `//go:noinline` 注释,使用 `-gcflags=-m=2` 编译后观察。 [↩](#fnref3) 4. `race.Enable` 表达式是通过传递给 `go` 工具的 `-race` 参数控制的一个常量。对于普通编译,它的值是 `false`,此时编译器可以完全省略代码路径。 [↩](#fnref4) --- via: <https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go> 作者:[Dave Cheney](https://dave.cheney.net/author/davecheney) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lxbwolf](https://github.com/lxbwolf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
In the [previous post](https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go) I discussed how leaf inlining allows the Go compiler to reduce the overhead of function calls and extend optimisation opportunities across function boundaries. In this post I’ll discuss the limits of inlining and leaf vs mid-stack inlining.

## The limits of inlining

Inlining a function into its caller removes the call’s overhead and increases the opportunity for the compiler to apply additional optimisations, so the question should be asked: if some inlining is good, would more be better, *why not inline as much as possible?*

Inlining trades possibly larger program sizes for potentially faster execution time. The main reason to limit inlining is that creating many inlined copies of a function can increase compile time and result in larger binaries for marginal gain. Even taking into account the opportunities for further optimisation, aggressive inlining tends to increase the size of, and the time to compile, the resulting binary.

Inlining works best for [small functions](https://medium.com/@joshsaintjacque/small-functions-considered-awesome-c95b3fd1812f) that do relatively little work compared to the overhead of calling them. As the size of a function grows, the time saved avoiding the call’s overhead diminishes relative to the work done inside the function. Larger functions tend to be more complex, thus the benefits of optimising their inlined forms vs in situ are reduced.

## Inlining budget

During compilation each function’s inlineability is calculated using what is known as the *inlining budget*[1](#easy-footnote-bottom-1-4076). The cost calculation can be tricky to internalise but is broadly one unit per node in the AST for simple things like unary and binary operations, and can be higher for complex operations like `make`. Consider this example:

```
package main

func small() string {
    s := "hello, " + "world!"
    return s
}

func large() string {
    s := "a"
    s += "b"
    s += "c"
    s += "d"
    s += "e"
    s += "f"
    s += "g"
    s += "h"
    s += "i"
    s += "j"
    s += "k"
    s += "l"
    s += "m"
    s += "n"
    s += "o"
    s += "p"
    s += "q"
    s += "r"
    s += "s"
    s += "t"
    s += "u"
    s += "v"
    s += "w"
    s += "x"
    s += "y"
    s += "z"
    return s
}

func main() {
    small()
    large()
}
```

Compiling this function with `-gcflags=-m=2` allows us to see the cost the compiler assigns to each function.

```
% go build -gcflags=-m=2 inl.go
# command-line-arguments
./inl.go:3:6: can inline small with cost 7 as: func() string { s := "hello, world!"; return s }
./inl.go:8:6: cannot inline large: function too complex: cost 82 exceeds budget 80
./inl.go:38:6: can inline main with cost 68 as: func() { small(); large() }
./inl.go:39:7: inlining call to small func() string { s := "hello, world!"; return s }
```

The compiler determined that `func small()` can be inlined due to its cost of 7. `func large()` was determined to be too expensive. `func main()` has been marked as eligible and assigned a cost of 68; 7 from the body of `small`, 57 from the function call to `small`, and the remainder in its own overhead.

The inlining budget can be controlled to some degree with the `-gcflags=-l` flag. Currently the values that apply are:

- `-gcflags=-l=0` is the default level of inlining.
- `-gcflags=-l` (or `-gcflags=-l=1`) disables inlining.
- `-gcflags=-l=2` and `-gcflags=-l=3` are currently unused and have no effect over `-gcflags=-l=0`.
- `-gcflags=-l=4` reduces the cost for inlining non-leaf functions and calls through interfaces.[2]

### Hairy optimisations

Some functions with a relatively low inlining cost may be ineligible because of their complexity. This is known as the function’s hairiness, as the semantics of some operations are hard to reason about once inlined, for example `recover` and `break`. Others, like `select` and `go`, involve co-ordination with the runtime, so the extra effort of inlining doesn’t pay for itself.
The list of hairy statements also includes things like `for` and `range` which don’t have an inherently large cost, but simply haven’t been optimised yet.

## Mid stack inlining

Historically the Go compiler only performed leaf inlining–only functions which did not call other functions were eligible. In the context of the hairiness discussion previously, a function call would disqualify the function from being inlined.

Enter mid stack inlining which, as its name implies, allows functions in the middle of a call stack to be inlined without requiring everything below them to be eligible. Mid stack inlining was introduced by David Lazar in Go 1.9 and improved in subsequent releases. [This presentation](https://docs.google.com/presentation/d/1Wcblp3jpfeKwA0Y4FOmj63PW52M_qmNqlQkNaLj0P5o/edit#slide=id.p) goes into some of the difficulties with retaining the behaviour of stack traces and `runtime.Callers` in code paths that had been heavily inlined.

We see an example of mid-stack inlining in the previous example. After inlining, `func main()` contains the body of `func small()` and a call to `func large()`, thus it is considered a non-leaf function. Historically this would have prevented it from being further inlined even though its combined cost was less than the inlining budget.

The primary use case for mid stack inlining is to reduce the overhead of a path through the call stack. Consider this example:

```
package main

import (
    "fmt"
    "strconv"
)

type Rectangle struct {}

//go:noinline
func (r *Rectangle) Height() int {
    h, _ := strconv.ParseInt("7", 10, 0)
    return int(h)
}

func (r *Rectangle) Width() int {
    return 6
}

func (r *Rectangle) Area() int { return r.Height() * r.Width() }

func main() {
    var r Rectangle
    fmt.Println(r.Area())
}
```

In this example `r.Area()` is a simple function which calls two others. `r.Width()` can be inlined while `r.Height()`, simulated here with the `//go:noinline` annotation, cannot.[3]

```
% go build -gcflags='-m=2' square.go
# command-line-arguments
./square.go:12:6: cannot inline (*Rectangle).Height: marked go:noinline
./square.go:17:6: can inline (*Rectangle).Width with cost 2 as: method(*Rectangle) func() int { return 6 }
./square.go:21:6: can inline (*Rectangle).Area with cost 67 as: method(*Rectangle) func() int { return r.Height() * r.Width() }
./square.go:21:61: inlining call to (*Rectangle).Width method(*Rectangle) func() int { return 6 }
./square.go:23:6: cannot inline main: function too complex: cost 150 exceeds budget 80
./square.go:25:20: inlining call to (*Rectangle).Area method(*Rectangle) func() int { return r.Height() * r.Width() }
./square.go:25:20: inlining call to (*Rectangle).Width method(*Rectangle) func() int { return 6 }
```

As the multiplication performed by `r.Area()` is cheap compared to the overhead of calling it, inlining `r.Area()`‘s single expression is a net win even if its downstream call to `r.Height()` remains ineligible.

### Fast path inlining

The most startling example of the power of mid-stack inlining comes from 2019 when [Carlo Alberto Ferraris improved the performance](https://go-review.googlesource.com/c/go/+/148959) of `sync.Mutex.Lock()` by allowing the fast path of the lock–the uncontended case–to be inlined into its caller. Prior to this change `sync.Mutex.Lock()` was a large function containing many hairy conditions which made it ineligible to be inlined. Even in the case where the lock was available, the caller had to pay the overhead of calling `sync.Mutex.Lock()`.

Carlo’s change split `sync.Mutex.Lock()` into two functions (a process he dubbed *outlining*). The outer `sync.Mutex.Lock()` method now calls `sync/atomic.CompareAndSwapInt32()` and returns to the caller immediately if the CAS succeeds. If not, the function falls through to `sync.Mutex.lockSlow()` which handles the slow path required to register interest on the lock and park the goroutine.[4]

```
% go build -gcflags='-m=2 -l=0' sync 2>&1 | grep '(*Mutex).Lock'
../go/src/sync/mutex.go:72:6: can inline (*Mutex).Lock with cost 69 as: method(*Mutex) func() { if "sync/atomic".CompareAndSwapInt32(&m.state, 0, mutexLocked) { if race.Enabled { }; return }; m.lockSlow() }
```

By splitting the function into an easily inlineable outer function, falling through to a complex inner function to handle the slow path, Carlo combined mid stack inlining and the [compiler’s support for intrinsic operations](https://dave.cheney.net/2019/08/20/go-compiler-intrinsics) to reduce the cost of an uncontended lock by 14%. Then he repeated the trick for an additional 9% saving in [sync.RWMutex.Unlock()](https://go-review.googlesource.com/c/go/+/152698).

1. The budget the Go compiler applies to each function when considering if it is eligible for inlining changes release to release.
2. Keep in mind that the compiler authors warn that [“Additional levels of inlining (beyond -l) may be buggy and are not supported”](https://github.com/golang/go/blob/be08e10b3bc07f3a4e7b27f44d53d582e15fd6c7/src/cmd/compile/internal/gc/inl.go#L11). Caveat emptor.
3. The compiler is powerful enough that it can inline complex functions like `strconv.ParseInt`. As an experiment, try removing the `//go:noinline` annotation and observe the result with `-gcflags=-m=2`.
4. The expression `race.Enabled` is a constant controlled by the `-race` flag passed to the `go` tool. It is `false` for normal builds which allows the compiler to elide those code paths entirely.
12,186
如何在 Debian 10 中配置 Chroot 环境的 SFTP 服务
https://www.linuxtechi.com/configure-sftp-chroot-debian10/
2020-05-05T22:36:00
[ "ftp", "sftp", "ssh" ]
https://linux.cn/article-12186-1.html
SFTP 意思是“<ruby> 安全文件传输协议 <rt> Secure File Transfer Protocol </rt></ruby>” 或 “<ruby> SSH 文件传输协议 <rt> SSH File Transfer Protocol </rt></ruby>”,它是最常用的用于通过 `ssh` 将文件从本地系统安全地传输到远程服务器的方法,反之亦然。`sftp` 的主要优点是,除 `openssh-server` 之外,我们不需要安装任何额外的软件包,在大多数的 Linux 发行版中,`openssh-server` 软件包是默认安装的一部分。`sftp` 的另外一个好处是,我们可以允许用户使用 `sftp` ,而不允许使用 `ssh` 。 ![](/data/attachment/album/202005/05/223518ip4mbdi4nggbdtgu.jpg) 当前发布的 Debian 10 代号为 ‘Buster’,在这篇文章中,我们将演示如何在 Debian 10 系统中在 “监狱式的” Chroot 环境中配置 `sftp`。在这里,Chroot 监狱式环境意味着,用户不能超出各自的家目录,或者用户不能从各自的家目录更改目录。下面实验的详细情况: * OS = Debian 10 * IP 地址 = 192.168.56.151 让我们进入 SFTP 配置步骤: ### 步骤 1、使用 groupadd 命令给 sftp 创建一个组 打开终端,使用下面的 `groupadd` 命令创建一个名为 `sftp_users` 的组: ``` root@linuxtechi:~# groupadd sftp_users ``` ### 步骤 2、添加用户到组 sftp\_users 并设置权限 假设你想创建新的用户,并且想添加该用户到 `sftp_users` 组中,那么运行下面的命令, **语法:** ``` # useradd -m -G sftp_users <用户名> ``` 让我们假设用户名是 `jonathan`: ``` root@linuxtechi:~# useradd -m -G sftp_users jonathan ``` 使用下面的 `chpasswd` 命令设置密码: ``` root@linuxtechi:~# echo "jonathan:<输入密码>" | chpasswd ``` 假设你想添加现有的用户到 `sftp_users` 组中,那么运行下面的 `usermod` 命令,让我们假设已经存在的用户名称是 `chris`: ``` root@linuxtechi:~# usermod -G sftp_users chris ``` 现在设置用户所需的权限: ``` root@linuxtechi:~# chown root /home/jonathan /home/chris/ ``` 在各用户的家目录中都创建一个上传目录,并设置正确的所有权: ``` root@linuxtechi:~# mkdir /home/jonathan/upload root@linuxtechi:~# mkdir /home/chris/upload root@linuxtechi:~# chown jonathan /home/jonathan/upload root@linuxtechi:~# chown chris /home/chris/upload ``` **注意:** 像 Jonathan 和 Chris 之类的用户可以从他们的本地系统上传文件和目录。 ### 步骤 3、编辑 sftp 配置文件 /etc/ssh/sshd\_config 正如我们已经陈述的,`sftp` 操作是通过 `ssh` 完成的,所以它的配置文件是 `/etc/ssh/sshd_config`,在做任何更改前,我建议首先备份文件,然后再编辑该文件,接下来添加下面的内容: ``` root@linuxtechi:~# cp /etc/ssh/sshd_config /etc/ssh/sshd_config-org root@linuxtechi:~# vim /etc/ssh/sshd_config ...... #Subsystem sftp /usr/lib/openssh/sftp-server Subsystem sftp internal-sftp Match Group sftp_users X11Forwarding no AllowTcpForwarding no ChrootDirectory %h ForceCommand internal-sftp ...... 
``` 保存并退出文件。 为使上述更改生效,使用下面的 `systemctl` 命令来重新启动 `ssh` 服务: ``` root@linuxtechi:~# systemctl restart sshd ``` 在上面的 `sshd_config` 文件中,我们已经注释掉了以 `Subsystem` 开头的行,并添加了新的条目 `Subsystem sftp internal-sftp` 和新的行。而 `Match Group sftp_users` –> 它意味着如果用户是 `sftp_users` 组中的一员,那么将应用下面提到的规则到这个条目。 `ChrootDierctory %h` –> 它意味着用户只能在他们自己各自的家目录中更改目录,而不能超出他们各自的家目录。或者换句话说,我们可以说用户是不允许更改目录的。他们将在他们的目录中获得监狱一样的环境,并且不能访问其他用户的目录和系统的目录。 `ForceCommand internal-sftp` –> 它意味着用户仅被限制到只能使用 `sftp` 命令。 ### 步骤 4、测试和验证 sftp 登录到你的 `sftp` 服务器的同一个网络上的任何其它的 Linux 系统,然后通过我们放入 `sftp_users` 组中的用户来尝试 ssh 和 sftp 服务。 ``` [root@linuxtechi ~]# ssh root@linuxtechi root@linuxtechi's password: Write failed: Broken pipe [root@linuxtechi ~]# ssh root@linuxtechi root@linuxtechi's password: Write failed: Broken pipe [root@linuxtechi ~]# ``` 以上操作证实用户不允许 `ssh` ,现在使用下面的命令尝试 `sftp`: ``` [root@linuxtechi ~]# sftp root@linuxtechi root@linuxtechi's password: Connected to 192.168.56.151. sftp> ls -l drwxr-xr-x 2 root 1001 4096 Sep 14 07:52 debian10-pkgs -rw-r--r-- 1 root 1001 155 Sep 14 07:52 devops-actions.txt drwxr-xr-x 2 1001 1002 4096 Sep 14 08:29 upload ``` 让我们使用 sftp 的 `get` 命令来尝试下载一个文件: ``` sftp> get devops-actions.txt Fetching /devops-actions.txt to devops-actions.txt /devops-actions.txt 100% 155 0.2KB/s 00:00 sftp> sftp> cd /etc Couldn't stat remote file: No such file or directory sftp> cd /root Couldn't stat remote file: No such file or directory sftp> ``` 上面的输出证实我们能从我们的 sftp 服务器下载文件到本地机器,除此之外,我们也必须测试用户不能更改目录。 让我们在 `upload` 目录下尝试上传一个文件: ``` sftp> cd upload/ sftp> put metricbeat-7.3.1-amd64.deb Uploading metricbeat-7.3.1-amd64.deb to /upload/metricbeat-7.3.1-amd64.deb metricbeat-7.3.1-amd64.deb 100% 38MB 38.4MB/s 00:01 sftp> ls -l -rw-r--r-- 1 1001 1002 40275654 Sep 14 09:18 metricbeat-7.3.1-amd64.deb sftp> ``` 这证实我们已经成功地从我们的本地系统上传一个文件到 sftp 服务中。 现在使用 winscp 工具来测试 sftp 服务,输入 sftp 服务器 IP 地址和用户的凭证: ![](/data/attachment/album/202005/05/223823f114114g5sqgob5s.jpg) 在 “Login” 上单击,然后尝试下载和上传文件: 
![](/data/attachment/album/202005/05/223837eyayy73accrlvlay.jpg) 现在,在 `upload` 文件夹中尝试上传文件: ![](/data/attachment/album/202005/05/223858rih5hhw7iflh9xbl.jpg) 上面的窗口证实上传工作正常,这就是这篇文章的全部。如果这些步骤能帮助你在 Debian 10 中使用 chroot 环境配置 SFTP 服务器,那么请分享你的反馈和评论。 --- via: <https://www.linuxtechi.com/configure-sftp-chroot-debian10/> 作者:[Pradeep Kumar](https://www.linuxtechi.com/author/pradeep/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[robsean](https://github.com/robsean) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
12,187
在 Linux 上分析二进制文件的 10 种方法
https://opensource.com/article/20/4/linux-binary-analysis
2020-05-05T23:21:37
[ "二进制" ]
https://linux.cn/article-12187-1.html
> > 这些简单的命令和工具可以帮助你轻松完成分析二进制文件的任务。 > > > ![](/data/attachment/album/202005/05/232115nn0oduodo4oztv0a.jpg) “这个世界上有 10 种人:懂二进制的人和不懂二进制的人。” 我们每天都在与二进制文件打交道,但我们对二进制文件却知之甚少。我所说的二进制,是指你每天运行的可执行文件,从命令行工具到成熟的应用程序都是。 Linux 提供了一套丰富的工具,让分析二进制文件变得轻而易举。无论你的工作角色是什么,如果你在 Linux 上工作,了解这些工具的基本知识将帮助你更好地理解你的系统。 在这篇文章中,我们将介绍其中一些最流行的 Linux 工具和命令,其中大部分都是 Linux 发行版的一部分。如果没有找到,你可以随时使用你的软件包管理器来安装和探索它们。请记住:学习在正确的场合使用正确的工具需要大量的耐心和练习。 ### file 它的作用:帮助确定文件类型。 这将是你进行二进制分析的起点。我们每天都在与文件打交道,并非所有的文件都是可执行类型,除此之外还有各种各样的文件类型。在你开始之前,你需要了解要分析的文件类型。是二进制文件、库文件、ASCII 文本文件、视频文件、图片文件、PDF、数据文件等文件吗? `file` 命令将帮助你确定你所处理的文件类型。 ``` $ file /bin/ls /bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=94943a89d17e9d373b2794dcb1f7e38c95b66c86, stripped $ $ file /etc/passwd /etc/passwd: ASCII text $ ``` ### ldd 它的作用:打印共享对象依赖关系。 如果你已经在一个可执行的二进制文件上使用了上面的 `file` 命令,你肯定会看到输出中的“<ruby> 动态链接 <rt> dynamically linked </rt></ruby>”信息。它是什么意思呢? 在开发软件的时候,我们尽量不要重造轮子。有一组常见的任务是大多数软件程序需要的,比如打印输出或从标准输入/打开的文件中读取等。所有这些常见的任务都被抽象成一组通用的函数,然后每个人都可以使用,而不是写出自己的变体。这些常用的函数被放在一个叫 `libc` 或 `glibc` 的库中。 如何找到可执行程序所依赖的库?这就是 `ldd` 命令的作用了。对动态链接的二进制文件运行该命令会显示出所有依赖库和它们的路径。 ``` $ ldd /bin/ls linux-vdso.so.1 => (0x00007ffef5ba1000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007fea9f854000) libcap.so.2 => /lib64/libcap.so.2 (0x00007fea9f64f000) libacl.so.1 => /lib64/libacl.so.1 (0x00007fea9f446000) libc.so.6 => /lib64/libc.so.6 (0x00007fea9f079000) libpcre.so.1 => /lib64/libpcre.so.1 (0x00007fea9ee17000) libdl.so.2 => /lib64/libdl.so.2 (0x00007fea9ec13000) /lib64/ld-linux-x86-64.so.2 (0x00007fea9fa7b000) libattr.so.1 => /lib64/libattr.so.1 (0x00007fea9ea0e000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fea9e7f2000) $ ``` ### ltrace 它的作用:库调用跟踪器。 我们现在知道如何使用 `ldd` 命令找到一个可执行程序所依赖的库。然而,一个库可以包含数百个函数。在这几百个函数中,哪些是我们的二进制程序正在使用的实际函数? 
`ltrace` 命令可以显示运行时从库中调用的所有函数。在下面的例子中,你可以看到被调用的函数名称,以及传递给该函数的参数。你也可以在输出的最右边看到这些函数返回的内容。 ``` $ ltrace ls __libc_start_main(0x4028c0, 1, 0x7ffd94023b88, 0x412950 <unfinished ...> strrchr("ls", '/') = nil setlocale(LC_ALL, "") = "en_US.UTF-8" bindtextdomain("coreutils", "/usr/share/locale") = "/usr/share/locale" textdomain("coreutils") = "coreutils" __cxa_atexit(0x40a930, 0, 0, 0x736c6974756572) = 0 isatty(1) = 1 getenv("QUOTING_STYLE") = nil getenv("COLUMNS") = nil ioctl(1, 21523, 0x7ffd94023a50) = 0 << snip >> fflush(0x7ff7baae61c0) = 0 fclose(0x7ff7baae61c0) = 0 +++ exited (status 0) +++ $ ``` ### hexdump 它的作用:以 ASCII、十进制、十六进制或八进制显示文件内容。 通常情况下,当你用一个应用程序打开一个文件,而它不知道如何处理该文件时,就会出现这种情况。尝试用 `vim` 打开一个可执行文件或视频文件,你屏幕上会看到的只是抛出的乱码。 在 `hexdump` 中打开未知文件,可以帮助你看到文件的具体内容。你也可以选择使用一些命令行选项来查看用 ASCII 表示的文件数据。这可能会帮助你了解到它是什么类型的文件。 ``` $ hexdump -C /bin/ls | head 00000000 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 |.ELF............| 00000010 02 00 3e 00 01 00 00 00 d4 42 40 00 00 00 00 00 |..>......B@.....| 00000020 40 00 00 00 00 00 00 00 f0 c3 01 00 00 00 00 00 |@...............| 00000030 00 00 00 00 40 00 38 00 09 00 40 00 1f 00 1e 00 |[email protected]...@.....| 00000040 06 00 00 00 05 00 00 00 40 00 00 00 00 00 00 00 |........@.......| 00000050 40 00 40 00 00 00 00 00 40 00 40 00 00 00 00 00 |@.@.....@.@.....| 00000060 f8 01 00 00 00 00 00 00 f8 01 00 00 00 00 00 00 |................| 00000070 08 00 00 00 00 00 00 00 03 00 00 00 04 00 00 00 |................| 00000080 38 02 00 00 00 00 00 00 38 02 40 00 00 00 00 00 |8.......8.@.....| 00000090 38 02 40 00 00 00 00 00 1c 00 00 00 00 00 00 00 |8.@.............| $ ``` ### strings 它的作用:打印文件中的可打印字符的字符串。 如果你只是在二进制中寻找可打印的字符,那么 `hexdump` 对于你的使用场景来说似乎有点矫枉过正,你可以使用 `strings` 命令。 在开发软件的时候,各种文本/ASCII 信息会被添加到其中,比如打印信息、调试信息、帮助信息、错误等。只要这些信息都存在于二进制文件中,就可以用 `strings` 命令将其转储到屏幕上。 ``` $ strings /bin/ls ``` ### readelf 它的作用:显示有关 ELF 文件的信息。 ELF(<ruby> 可执行和可链接文件格式 <rt> Executable and Linkable File Format </rt></ruby>)是可执行文件或二进制文件的主流格式,不仅是 Linux 
系统,也是各种 UNIX 系统的主流文件格式。如果你已经使用了像 `file` 命令这样的工具,它告诉你文件是 ELF 格式,那么下一步就是使用 `readelf` 命令和它的各种选项来进一步分析文件。 在使用 `readelf` 命令时,有一份实际的 ELF 规范的参考是非常有用的。你可以在[这里](http://www.skyfree.org/linux/references/ELF_Format.pdf)找到该规范。 ``` $ readelf -h /bin/ls ELF Header: Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 Class: ELF64 Data: 2's complement, little endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: EXEC (Executable file) Machine: Advanced Micro Devices X86-64 Version: 0x1 Entry point address: 0x4042d4 Start of program headers: 64 (bytes into file) Start of section headers: 115696 (bytes into file) Flags: 0x0 Size of this header: 64 (bytes) Size of program headers: 56 (bytes) Number of program headers: 9 Size of section headers: 64 (bytes) Number of section headers: 31 Section header string table index: 30 $ ``` ### objdump 它的作用:从对象文件中显示信息。 二进制文件是通过你编写的源码创建的,这些源码会通过一个叫做编译器的工具进行编译。这个编译器会生成相对于源代码的机器语言指令,然后由 CPU 执行特定的任务。这些机器语言代码可以通过被称为汇编语言的助记词来解读。汇编语言是一组指令,它可以帮助你理解由程序所进行并最终在 CPU 上执行的操作。 `objdump` 实用程序读取二进制或可执行文件,并将汇编语言指令转储到屏幕上。汇编语言知识对于理解 `objdump` 命令的输出至关重要。 请记住:汇编语言是特定于体系结构的。 ``` $ objdump -d /bin/ls | head /bin/ls: file format elf64-x86-64 Disassembly of section .init: 0000000000402150 <_init@@Base>: 402150: 48 83 ec 08 sub $0x8,%rsp 402154: 48 8b 05 6d 8e 21 00 mov 0x218e6d(%rip),%rax # 61afc8 <__gmon_start__> 40215b: 48 85 c0 test %rax,%rax $ ``` ### strace 它的作用:跟踪系统调用和信号。 如果你用过前面提到的 `ltrace`,那就把 `strace` 想成是类似的。唯一的区别是,`strace` 工具不是追踪调用的库,而是追踪系统调用。系统调用是你与内核对接来完成工作的。 举个例子,如果你想把一些东西打印到屏幕上,你会使用标准库 `libc` 中的 `printf` 或 `puts` 函数;但是,在底层,最终会有一个名为 `write` 的系统调用来实际把东西打印到屏幕上。 ``` $ strace -f /bin/ls execve("/bin/ls", ["/bin/ls"], [/* 17 vars */]) = 0 brk(NULL) = 0x686000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f967956a000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=40661, ...}) = 0 mmap(NULL, 40661, 
PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f9679560000 close(3) = 0 << snip >> fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 1), ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9679569000 write(1, "R2 RH\n", 7R2 RH ) = 7 close(1) = 0 munmap(0x7f9679569000, 4096) = 0 close(2) = 0 exit_group(0) = ? +++ exited with 0 +++ $ ``` ### nm 它的作用:列出对象文件中的符号。 如果你所使用的二进制文件没有被剥离,`nm` 命令将为你提供在编译过程中嵌入到二进制文件中的有价值的信息。`nm` 可以帮助你从二进制文件中识别变量和函数。你可以想象一下,如果你无法访问二进制文件的源代码时,这将是多么有用。 为了展示 `nm`,我们快速编写了一个小程序,用 `-g` 选项编译,我们会看到这个二进制文件没有被剥离。 ``` $ cat hello.c #include <stdio.h> int main() { printf("Hello world!"); return 0; } $ $ gcc -g hello.c -o hello $ $ file hello hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=3de46c8efb98bce4ad525d3328121568ba3d8a5d, not stripped $ $ ./hello Hello world!$ $ $ nm hello | tail 0000000000600e20 d __JCR_END__ 0000000000600e20 d __JCR_LIST__ 00000000004005b0 T __libc_csu_fini 0000000000400540 T __libc_csu_init U __libc_start_main@@GLIBC_2.2.5 000000000040051d T main U printf@@GLIBC_2.2.5 0000000000400490 t register_tm_clones 0000000000400430 T _start 0000000000601030 D __TMC_END__ $ ``` ### gdb 它的作用:GNU 调试器。 好吧,不是所有的二进制文件中的东西都可以进行静态分析。我们确实执行了一些运行二进制文件(进行分析)的命令,比如 `ltrace` 和 `strace`;然而,软件由各种条件组成,这些条件可能会导致执行不同的替代路径。 分析这些路径的唯一方法是在运行时环境,在任何给定的位置停止或暂停程序,并能够分析信息,然后再往下执行。 这就是调试器的作用,在 Linux 上,`gdb` 就是调试器的事实标准。它可以帮助你加载程序,在特定的地方设置断点,分析内存和 CPU 的寄存器,以及更多的功能。它是对上面提到的其他工具的补充,可以让你做更多的运行时分析。 有一点需要注意的是,一旦你使用 `gdb` 加载一个程序,你会看到它自己的 `(gdb)` 提示符。所有进一步的命令都将在这个 `gdb` 命令提示符中运行,直到你退出。 我们将使用我们之前编译的 `hello` 程序,使用 `gdb` 来看看它的工作原理。 ``` $ gdb -q ./hello Reading symbols from /home/flash/hello...done. (gdb) break main Breakpoint 1 at 0x400521: file hello.c, line 4. 
(gdb) info break Num Type Disp Enb Address What 1 breakpoint keep y 0x0000000000400521 in main at hello.c:4 (gdb) run Starting program: /home/flash/./hello Breakpoint 1, main () at hello.c:4 4 printf("Hello world!"); Missing separate debuginfos, use: debuginfo-install glibc-2.17-260.el7_6.6.x86_64 (gdb) bt #0 main () at hello.c:4 (gdb) c Continuing. Hello world![Inferior 1 (process 29620) exited normally] (gdb) q $ ``` ### 结语 一旦你习惯了使用这些原生的 Linux 二进制分析工具,并理解了它们提供的输出,你就可以转向更高级和专业的开源二进制分析工具,比如 [radare2](https://github.com/radareorg/radare2)。 --- via: <https://opensource.com/article/20/4/linux-binary-analysis> 作者:[Gaurav Kamathe](https://opensource.com/users/gkamathe) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
"There are 10 types of people in this world: those who understand binary and those who don't." We work with binaries daily, yet we understand so little about them. By binaries, I mean the executable files that you run daily, right from your command line tools to full-fledged applications. Linux provides a rich set of tools that makes analyzing binaries a breeze! Whatever might be your job role, if you are working on Linux, knowing the basics about these tools will help you understand your system better. In this article, we will cover some of the most popular of these Linux tools and commands, most of which will be available natively as part of your Linux distribution. If not, you can always use your package manager to install and explore them. Remember: learning to use the right tool at the right occasion requires plenty of patience and practice. ## file What it does: Help to determine the file type. This will be your starting point for binary analysis. We work with files daily. Not everything is an executable type; there is a whole wide range of file types out there. Before you start, you need to understand the type of file that is being analyzed. Is it a binary file, a library file, an ASCII text file, a video file, a picture file, a PDF, a data file, etc.? The **file** command will help you identify the exact file type that you are dealing with. ``` $ file /bin/ls /bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=94943a89d17e9d373b2794dcb1f7e38c95b66c86, stripped $ $ file /etc/passwd /etc/passwd: ASCII text $ ``` ## ldd What it does: Print shared object dependencies. If you have already used the **file** command above on an executable binary, you can't miss the "dynamically linked" message in the output. What does it mean? When software is being developed, we try not to reinvent the wheel. 
There are a set of common tasks that most software programs require, like printing output or reading from standard in, or opening files, etc. All of these common tasks are abstracted away in a set of common functions that everybody can then use instead of writing their own variants. These common functions are put in a library called **libc** or **glibc**. How does one find which libraries the executable is dependent on? That’s where **ldd** command comes into the picture. Running it against a dynamically linked binary shows all its dependent libraries and their paths. ``` $ ldd /bin/ls linux-vdso.so.1 => (0x00007ffef5ba1000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007fea9f854000) libcap.so.2 => /lib64/libcap.so.2 (0x00007fea9f64f000) libacl.so.1 => /lib64/libacl.so.1 (0x00007fea9f446000) libc.so.6 => /lib64/libc.so.6 (0x00007fea9f079000) libpcre.so.1 => /lib64/libpcre.so.1 (0x00007fea9ee17000) libdl.so.2 => /lib64/libdl.so.2 (0x00007fea9ec13000) /lib64/ld-linux-x86-64.so.2 (0x00007fea9fa7b000) libattr.so.1 => /lib64/libattr.so.1 (0x00007fea9ea0e000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fea9e7f2000) $ ``` ## ltrace What it does: A library call tracer. We now know how to find the libraries an executable program is dependent on using the **ldd** command. However, a library can contain hundreds of functions. Out of those hundreds, which are the actual functions being used by our binary? The **ltrace** command displays all the functions that are being called at run time from the library. In the below example, you can see the function names being called, along with the arguments being passed to that function. You can also see what was returned by those functions on the far right side of the output. 
``` $ ltrace ls __libc_start_main(0x4028c0, 1, 0x7ffd94023b88, 0x412950 <unfinished ...> strrchr("ls", '/') = nil setlocale(LC_ALL, "") = "en_US.UTF-8" bindtextdomain("coreutils", "/usr/share/locale") = "/usr/share/locale" textdomain("coreutils") = "coreutils" __cxa_atexit(0x40a930, 0, 0, 0x736c6974756572) = 0 isatty(1) = 1 getenv("QUOTING_STYLE") = nil getenv("COLUMNS") = nil ioctl(1, 21523, 0x7ffd94023a50) = 0 << snip >> fflush(0x7ff7baae61c0) = 0 fclose(0x7ff7baae61c0) = 0 +++ exited (status 0) +++ $ ``` ## Hexdump What it does: Display file contents in ASCII, decimal, hexadecimal, or octal. Often, it happens that you open a file with an application that doesn’t know what to do with that file. Try opening an executable file or a video file using vim; all you will see is gibberish thrown on the screen. Opening unknown files in Hexdump helps you see what exactly the file contains. You can also choose to see the ASCII representation of the data present in the file using some command-line options. This might help give you some clues to what kind of file it is. ``` $ hexdump -C /bin/ls | head 00000000 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 |.ELF............| 00000010 02 00 3e 00 01 00 00 00 d4 42 40 00 00 00 00 00 |..>......B@.....| 00000020 40 00 00 00 00 00 00 00 f0 c3 01 00 00 00 00 00 |@...............| 00000030 00 00 00 00 40 00 38 00 09 00 40 00 1f 00 1e 00 |[email protected]...@.....| 00000040 06 00 00 00 05 00 00 00 40 00 00 00 00 00 00 00 |........@.......| 00000050 40 00 40 00 00 00 00 00 40 00 40 00 00 00 00 00 |@.@.....@.@.....| 00000060 f8 01 00 00 00 00 00 00 f8 01 00 00 00 00 00 00 |................| 00000070 08 00 00 00 00 00 00 00 03 00 00 00 04 00 00 00 |................| 00000080 38 02 00 00 00 00 00 00 38 02 40 00 00 00 00 00 |8.......8.@.....| 00000090 38 02 40 00 00 00 00 00 1c 00 00 00 00 00 00 00 |8.@.............| $ ``` ## strings What it does: Print the strings of printable characters in files. 
If Hexdump seems a bit like overkill for your use case and you are simply looking for printable characters within a binary, you can use the **strings** command. When software is being developed, a variety of text/ASCII messages are added to it, like printing info messages, debugging info, help messages, errors, and so on. Provided all this information is present in the binary, it will be dumped to screen using **strings**. `$ strings /bin/ls` ## readelf What it does: Display information about ELF files. ELF (Executable and Linkable File Format) is the dominant file format for executable or binaries, not just on Linux but a variety of UNIX systems as well. If you have utilized tools like file command, which tells you that the file is in ELF format, the next logical step will be to use the **readelf** command and its various options to analyze the file further. Having a reference of the actual ELF specification handy when using **readelf **can be very useful. You can find the specification [here](http://www.skyfree.org/linux/references/ELF_Format.pdf). ``` $ readelf -h /bin/ls ELF Header: Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 Class: ELF64 Data: 2's complement, little endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: EXEC (Executable file) Machine: Advanced Micro Devices X86-64 Version: 0x1 Entry point address: 0x4042d4 Start of program headers: 64 (bytes into file) Start of section headers: 115696 (bytes into file) Flags: 0x0 Size of this header: 64 (bytes) Size of program headers: 56 (bytes) Number of program headers: 9 Size of section headers: 64 (bytes) Number of section headers: 31 Section header string table index: 30 $ ``` ## objdump What it does: Display information from an object file. Binaries are created when you write source code which gets compiled using a tool called, unsurprisingly, a compiler. 
This compiler generates machine language instructions equivalent to the source code, which can then be executed by the CPU to perform a given task. This machine language code can be interpreted via mnemonics called an assembly language. An assembly language is a set of instructions that help you understand the operations being performed by the program and ultimately being executed on the CPU. **objdump** utility reads the binary or executable file and dumps the assembly language instructions on the screen. Knowledge of assembly is critical to understand the output of the **objdump** command. Remember: assembly language is architecture-specific. ``` $ objdump -d /bin/ls | head /bin/ls: file format elf64-x86-64 Disassembly of section .init: 0000000000402150 <_init@@Base>: 402150: 48 83 ec 08 sub $0x8,%rsp 402154: 48 8b 05 6d 8e 21 00 mov 0x218e6d(%rip),%rax # 61afc8 <__gmon_start__> 40215b: 48 85 c0 test %rax,%rax $ ``` ## strace What it does: Trace system calls and signals. If you have used **ltrace**, mentioned earlier, think of **strace** being similar. The only difference is that, instead of calling a library, the **strace** utility traces system calls. System calls are how you interface with the kernel to get work done. To give an example, if you want to print something to the screen, you will use the **printf** or **puts** function from the standard library **libc**; however, under the hood, ultimately, a system call named **write** will be made to actually print something to the screen. 
``` $ strace -f /bin/ls execve("/bin/ls", ["/bin/ls"], [/* 17 vars */]) = 0 brk(NULL) = 0x686000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f967956a000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=40661, ...}) = 0 mmap(NULL, 40661, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f9679560000 close(3) = 0 << snip >> fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 1), ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9679569000 write(1, "R2 RH\n", 7R2 RH ) = 7 close(1) = 0 munmap(0x7f9679569000, 4096) = 0 close(2) = 0 exit_group(0) = ? +++ exited with 0 +++ $ ``` ## nm What it does: List symbols from object files. If you are working with a binary that is not stripped, the **nm** command will provide you with the valuable information that was embedded in the binary during compilation. **nm** can help you identify variables and functions from the binary. You can imagine how useful this would be if you don't have access to the source code of the binary being analyzed. To showcase **nm**, we will quickly write a small program and compile it with the **-g** option, and we will also see that the binary is not stripped by using the file command. 
``` $ cat hello.c #include <stdio.h> int main() { printf("Hello world!"); return 0; } $ $ gcc -g hello.c -o hello $ $ file hello hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=3de46c8efb98bce4ad525d3328121568ba3d8a5d, not stripped $ $ ./hello Hello world!$ $ $ nm hello | tail 0000000000600e20 d __JCR_END__ 0000000000600e20 d __JCR_LIST__ 00000000004005b0 T __libc_csu_fini 0000000000400540 T __libc_csu_init U __libc_start_main@@GLIBC_2.2.5 000000000040051d T main U printf@@GLIBC_2.2.5 0000000000400490 t register_tm_clones 0000000000400430 T _start 0000000000601030 D __TMC_END__ $ ``` ## gdb What it does: The GNU debugger. Well, not everything in the binary can be statically analyzed. We did execute some commands which ran the binary, like **ltrace** and **strace**; however, software consists of a variety of conditions that could lead to various alternate paths being executed. The only way to analyze these paths is at run time by having the ability to stop or pause the program at any given location and being able to analyze information and then move further down. That is where debuggers come into the picture, and on Linux, **gdb** is the defacto debugger. It helps you load a program, set breakpoints at specific places, analyze memory and CPU register, and do much more. It complements the other tools mentioned above and allows you to do much more runtime analysis. One thing to notice is, once you load a program using **gdb**, you will be presented with its own **(gdb)** prompt. All further commands will be run in this **gdb** command prompt until you exit. We will use the "hello" program that we compiled earlier and use **gdb** to see how it works. ``` $ gdb -q ./hello Reading symbols from /home/flash/hello...done. (gdb) break main Breakpoint 1 at 0x400521: file hello.c, line 4. 
(gdb) info break
Num     Type           Disp Enb Address            What
1       breakpoint     keep y   0x0000000000400521 in main at hello.c:4
(gdb) run
Starting program: /home/flash/./hello

Breakpoint 1, main () at hello.c:4
4           printf("Hello world!");
Missing separate debuginfos, use: debuginfo-install glibc-2.17-260.el7_6.6.x86_64
(gdb) bt
#0  main () at hello.c:4
(gdb) c
Continuing.
Hello world![Inferior 1 (process 29620) exited normally]
(gdb) q
$
```

## Conclusion

Once you are comfortable with using these native Linux binary analysis tools and understanding the output they provide, you can then move onto more advanced and professional open source binary analysis tools like [radare2](https://github.com/radareorg/radare2).
12,188
经过了 3 年,Inkscape 1.0 终于发布了
https://itsfoss.com/inkscape-1-release/
2020-05-06T09:42:46
[ "Inkscape" ]
https://linux.cn/article-12188-1.html
![](/data/attachment/album/202005/06/094055fvnh9nnnbybwl4jn.jpg) 虽然我不是这方面的专业人员,但可以肯定地说,Inkscape 是[最好的矢量图形编辑器](https://itsfoss.com/vector-graphics-editors-linux/)之一。 不仅仅因为它是自由开源软件,而且对于数字艺术家来说,它是一个非常有用的应用程序。 上一次发布(0.92 版本)是在 3 年前。现在,终于,[Inkscape 宣布了它的 1.0 版本](https://inkscape.org/news/2020/05/04/introducing-inkscape-10/) —— 增加了很多新的功能和改进。 ### Inkscape 1.0 里的新东西 ![Inkscape 1.0](/data/attachment/album/202005/06/094249z99xn9tntvkrfi9n.jpg) 在这里,让我重点介绍一下 Inkscape 1.0 版本中重要关键变化。 #### 首个原生 macOS 应用 对于像 Inkscape 这样的神奇工具来说,适当的跨平台支持总是好的。在这个最新的版本中,它推出了原生的 macOS 应用。 请注意,这个 macOS 应用仍然是一个**预览版**,还有很多改进的空间。不过,在无需 [XQuartz](https://en.wikipedia.org/wiki/XQuartz) 的情况下就做到了更好的系统集成,对于 macOS 用户来说,应该是一个值得期许的进步。 #### 性能提升 不管是什么应用程序/工具,都会从显著的性能提升中受益,而 Inkscape 也是如此。 随着其 1.0 版本的发布,他们提到,当你使用 Inkscape 进行各种创意工作时,你会发现性能更加流畅。 除了在 macOS 上(仍为“预览版”),Inkscape 在 Linux 和 Windows 上的运行都是很好的。 #### 改进的 UI 和 HiDPI 支持 ![](/data/attachment/album/202005/06/094257k68gio8l7m9zum79.jpg) 他们在发布说明中提到: > > ……达成了一个重要的里程碑,使 Inkscape 能够使用最新的软件(即 GTK+3)来构建编辑器的用户界面。拥有 HiDPI(高分辨率)屏幕的用户要感谢 2018 年波士顿黑客节期间的团队合作,让更新后的 GTK 轮子开始运转起来。 > > > 从 GTK+3 的用户界面到高分辨率屏幕的 HiDPI 支持,这都是一次精彩的升级。 更不要忘了,你还可以获得更多的自定义选项来调整外观和感受。 #### 新增功能 ![](/data/attachment/album/202005/06/094258opir11ayqqrs09iy.jpg) 即便是从纸面上看,这些列出新功能都看起来不错。根据你的专业知识和你的喜好,这些新增功能应该会派上用场。 以下是新功能的概述: * 新改进过的实时路径效果(LPE)功能。 * 新的可搜索的 LPE 选择对话框。 * 自由式绘图用户现在可以对画布进行镜像和旋转。 * 铅笔工具的新的 PowerPencil 模式提供了压感的宽度,并且终于可以创建封闭路径了。 * 包括偏移、PowerClip 和 PowerMask LPE 在内的新路径效果会吸引艺术类用户。 * 能够创建复制引导、将网格对齐到页面上、测量工具的路径长度指示器和反向 Y 轴。 * 能够导出带有可点击链接和元数据的 PDF 文件。 * 新的调色板和网状渐变,可在网页浏览器中使用。 虽然我已经尝试着整理了这个版本中添加的关键功能列表,但你可以在他们的[发布说明](https://wiki.inkscape.org/wiki/index.php/Release_notes/1.0)中获得全部细节。 #### 其他重要变化 作为重大变化之一,Inkscape 1.0 现在支持 Python 3。而且,随着这一变化,你可能会注意到一些扩展程序无法在最新版本中工作。 所以,如果你的工作依赖于某个扩展程序的工作流程,我建议你仔细看看他们的[发布说明](https://wiki.inkscape.org/wiki/index.php/Release_notes/1.0),了解所有的技术细节。 ### 在 Linux 上下载和安装 Inkscape 1.0 Inkscape 1.0 有用于 Linux 的 AppImage 和 Snap 软件包,你可以从 Inkscape 的网站上下载。 * [下载 Inkscape 1.0 for 
Linux](https://inkscape.org/release/1.0/gnulinux/) 如果你还不知道,可以查看[如何在 Linux 上使用 AppImage 文件](https://itsfoss.com/use-appimage-linux/)来入门。你也可以参考[这个 Snap 指南](https://itsfoss.com/install-snap-linux/)。 Ubuntu 用户可以在 Ubuntu 软件中心找到 Inskcape 1.0 的 Snap 版本。 我在 [Pop!\_OS 20.04](https://itsfoss.com/pop-os-20-04-review/) 上使用了 AppImage 文件,工作的很好。你可以详细体验所有的功能,看看它的效果如何。 你试过了吗?请在下面的评论中告诉我你的想法。 --- via: <https://itsfoss.com/inkscape-1-release/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Even though I’m not an expert, it is safe to say that Inkscape is one of the [best vector graphics editors](https://itsfoss.com/vector-graphics-editors-linux/). Not just limited to the reason that it is free and open-source software – but it is indeed a useful application for digital artists creating something on it. The last release (version 0.92) was about 3 years ago. And, now, finally, [Inkscape announced its 1.0 release](https://inkscape.org/news/2020/05/04/introducing-inkscape-10/) – with a bunch of new features, additions, and improvements. ## Inkscape 1.0: What’s New? ![Inkscape 1 0](https://itsfoss.com/content/images/wordpress/2020/05/inkscape-1-0.jpg) Here, let me highlight the important key changes that you need to know about Inkscape 1.0 release: ### First native macOS application It’s always good to have a proper cross-platform support for amazing tools like Inkscape. And, with the latest release, a native macOS application has been made available as well. Do note that the macOS app is still a **preview **version and has room for a lot of improvements. However, with a better system integration without needing [XQuartz](https://en.wikipedia.org/wiki/XQuartz), it should be a promising progress for macOS users. ### Performance Improvements Any kind of application/tool benefits from a significant performance boost. And, so does Inkscape. With its 1.0 release, they mention that you will be able to notice the smoother performance when using Inkscape for all the creative work you do. Except on macOS (which is still a “preview” version), Inkscape should run just fine on Linux and Windows. ### Improved UI and HiDPI Support ![Inkscape Ui Customization](https://itsfoss.com/content/images/wordpress/2020/05/inkscape-ui-customization.jpg) In their release notes, they’ve mentioned: A major milestone was achieved in enabling Inkscape to use a more recent version of the software used to build the editor’s user interface (namely GTK+3). 
Users with HiDPI (high resolution) screens can thank teamwork that took place during the 2018 Boston Hackfest for setting the updated-GTK wheels in motion. So, starting from GTK +3 user interface to the HiDPI support for high-resolution screens, it is a wonderful upgrade. Not to forget, you get more customization options to tweak the look and feel as well. ### New Feature Additions ![Inkscape Live Path Effects](https://itsfoss.com/content/images/wordpress/2020/05/inkscape-live-path-effects.jpg) On paper, the list of new features sounds good. Depending on your expertise and what you prefer, the latest additions should come in handy. Here’s an overview of the new features: - New and improved Live Path Effect (LPE) features - A new searchable LPE selection dialog - Freestyle drawing users can now mirror and rotate the canvas - The new PowerPencil mode of the Pencil tool provides pressure-dependent width and it is finally possible to create closed paths. - New path effects that will appeal to the artistic user include Offset, PowerClip, and PowerMask LPEs. - Ability to create a duplicate guide, aligning grids to the page, the Measure tool’s path length indicator, and the inverted Y-axis. - Ability to export PDFs with clickable links and metadata - New palettes and mesh gradients that work in the web browser While I’ve tried to compile the list of the key features added to this release, you can get all the nitty gritty details in their [release notes](https://wiki.inkscape.org/wiki/index.php/Release_notes/1.0). ### Other Important Changes Along with all the major changes, Inkscape 1.0 now supports Python 3. And, with that going forward, you might notice some extensions that don’t work with the latest version. So, if your work depends on the workflow of your extensions, I suggest you to take a closer look at their [release notes](https://wiki.inkscape.org/wiki/index.php/Release_notes/1.0) to get all the technical details. 
## Download & Install Inkscape 1.0 on Linux Inkscape 1.0 is available in AppImage and Snap format for Linux. You can download it from Inkscape’s website. If you aren’t aware, you can check [how to use AppImage file on Linux](https://itsfoss.com/use-appimage-linux/) to get started. You may also refer to [this Snap guide](https://itsfoss.com/install-snap-linux/). Ubuntu users can find the snap version of Inskcape 1.0 in the Ubuntu Software Center. I used the AppImage file on [Pop OS 20.04](https://itsfoss.com/pop-os-20-04-review/) and it worked just fine to get started. You can test drive all the features in detail to see how it works out for you. Have you tried it yet? Let me know your thoughts in the comments below.
12,190
在 Linux 上压缩文件的 5 种方法
https://www.networkworld.com/article/3538471/how-to-compress-files-on-linux-5-ways.html
2020-05-06T23:15:00
[ "压缩", "tar", "gzip" ]
https://linux.cn/article-12190-1.html
> 
> 在 Linux 系统上有很多可以用于压缩文件的工具,但它们的表现并不都是一样的,也不是所有的压缩效果都是一样的。在这篇文章中,我们比较其中的五个工具。
> 
> 

![](/data/attachment/album/202005/06/231536tgxma941yb8dgl53.jpg)

在 Linux 上有不少用于压缩文件的命令。最新最有效的一个方法是 `xz`,但是所有的方法都有节省磁盘空间和维护备份文件供以后使用的优点。在这篇文章中,我们将比较这些压缩命令并指出显著的不同。

### tar

`tar` 命令不是专门的压缩命令。它通常用于将多个文件拉入一个单个的文件中,以便容易地传输到另一个系统,或者将文件作为一个相关的组进行备份。它也提供压缩的功能,这就很有意义了,附加一个 `z` 压缩选项能够实现压缩文件。

当使用 `z` 选项为 `tar` 命令附加压缩过程时,`tar` 使用 `gzip` 来进行压缩。

就像压缩一组文件一样,你可以使用 `tar` 来压缩单个文件,尽管这种操作与直接使用 `gzip` 相比没有特别的优势。要使用 `tar` 这样做,只需要使用 `tar cfz newtarfile filename` 命令来标识要压缩的文件,就像标识一组文件一样,像这样:

```
$ tar cfz bigfile.tgz bigfile
          ^           ^
          |           |
          +- 新的文件  +- 将被压缩的文件

$ ls -l bigfile*
-rw-rw-r-- 1 shs shs 103270400 Apr 16 16:09 bigfile
-rw-rw-r-- 1 shs shs  21608325 Apr 16 16:08 bigfile.tgz
```

注意,文件的大小显著减少了。

如果你愿意,你可以使用 `tar.gz` 扩展名,这可能会使文件的特征更加明显,但是大多数的 Linux 用户将很可能会意识到与 `tgz` 的意思是一样的 – `tar` 和 `gz` 的组合来显示文件是一个压缩的 tar 文件。在压缩完成后,你将同时得到原始文件和压缩文件。

要将很多文件收集在一起并在一个命令中压缩出 “tar ball”,使用相同的语法,但要指定要包含的文件为一组,而不是单个文件。这里有一个示例:

```
$ tar cfz bin.tgz bin/*
       ^          ^
       |          +-- 将被包含的文件
       + 新的文件
```

### zip

`zip` 命令创建一个压缩文件,与此同时保留原始文件的完整性。语法像使用 `tar` 一样简单,只是你必须记住,你的原始文件名称应该是命令行上的最后一个参数。

```
$ zip ./bigfile.zip bigfile
updating: bigfile (deflated 79%)
$ ls -l bigfile bigfile.zip
-rw-rw-r-- 1 shs shs 103270400 Apr 16 11:18 bigfile
-rw-rw-r-- 1 shs shs  21606889 Apr 16 11:19 bigfile.zip
```

### gzip

`gzip` 命令非常容易使用。你只需要键入 `gzip`,紧随其后的是你想要压缩的文件名称。不像上述描述的命令,`gzip` 将“就地”压缩文件。换句话说,原始文件将被压缩后的文件替换。

```
$ gzip bigfile
$ ls -l bigfile*
-rw-rw-r-- 1 shs shs 21606751 Apr 15 17:57 bigfile.gz
```

### bzip2

像使用 `gzip` 命令一样,`bzip2` 将“就地”压缩你选择的文件,不留下原始文件。

```
$ bzip2 bigfile
$ ls -l bigfile*
-rw-rw-r-- 1 shs shs 18115234 Apr 15 17:57 bigfile.bz2
```

### xz

`xz` 是压缩命令团队中的一个相对较新的成员,在压缩文件的能力方面,它是一个领跑者。像先前的两个命令一样,你只需要将文件名称提供给命令。再强调一次,原始文件被就地压缩。

```
$ xz bigfile
$ ls -l bigfile*
-rw-rw-r-- 1 shs shs 13427236 Apr 15 17:30 bigfile.xz
```

对于大文件来说,你可能会注意到 `xz` 将比其它的压缩命令花费更多的运行时间,但是压缩的结果却是非常令人赞叹的。

### 对比

大多数人都听说过“大小不是一切”。所以,让我们比较一下文件大小以及一些当你计划如何压缩文件时的问题。
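在查看统计数据之前,你也可以自己快速验证一下压缩效果。下面是一个最小的示意脚本(其中的文件名 `bigtest` 是假设的,并非文中的 `bigfile`):它生成一个高度可压缩的测试文件,用 `gzip -k` 压缩并保留原文件,然后列出两者的大小。

```shell
# 生成一个 100 KB 的全零测试文件(文件名 bigtest 仅为示意)
head -c 100000 /dev/zero > bigtest

# -k(--keep)让 gzip 压缩后保留原始文件,需要 gzip 1.6 及以上版本
gzip -k bigtest

# 对比原始文件与压缩文件的大小
ls -l bigtest bigtest.gz
```

注意压缩率与文件内容密切相关:全零文件几乎可以被完全压缩掉,而随机数据则几乎无法压缩,这也是文中强调 `bigfile` 是“相当随机的文本文件”的原因。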
下面显示的统计数据都与压缩单个文件相关,在上面显示的示例中使用 `bigfile`。这个文件是一个大的且相当随机的文本文件。压缩率在一定程度上取决于文件的内容。

#### 大小减缩率

当比较时,上面显示的各种压缩命令产生下面的结果。百分比表示压缩文件与原始文件的比较效果。

```
-rw-rw-r-- 1 shs shs 103270400 Apr 16 14:01 bigfile
------------------------------------------------------
-rw-rw-r-- 1 shs shs  18115234 Apr 16 13:59 bigfile.bz2 ~17%
-rw-rw-r-- 1 shs shs  21606751 Apr 16 14:00 bigfile.gz  ~21%
-rw-rw-r-- 1 shs shs  21608322 Apr 16 13:59 bigfile.tgz ~21%
-rw-rw-r-- 1 shs shs  13427236 Apr 16 14:00 bigfile.xz  ~13%
-rw-rw-r-- 1 shs shs  21606889 Apr 16 13:59 bigfile.zip ~21%
```

`xz` 命令获胜,压缩文件最终只有原始文件 13% 的大小,但是所有这些压缩命令都相当显著地减少了原始文件的大小。

#### 是否替换原始文件

`bzip2`、`gzip` 和 `xz` 命令都用压缩文件替换原始文件。`tar` 和 `zip` 命令不替换。

#### 运行时间

`xz` 命令似乎比其它命令需要花费更多的时间来压缩文件。对于 `bigfile` 来说,大概的时间是:

```
命令     运行时间
tar      4.9 秒
zip      5.2 秒
bzip2   22.8 秒
gzip     4.8 秒
xz      50.4 秒
```

解压缩文件很可能比压缩时间要短得多。

#### 文件权限

不管你对原始文件设置什么权限,压缩文件的权限将基于你的 `umask` 设置,但 `bzip2` 除外,它保留了原始文件的权限。

#### 与 Windows 的兼容性

`zip` 命令创建的文件可以在 Windows 系统以及 Linux 和其他 Unix 系统上直接使用(即解压),无需额外安装其他工具。

### 解压缩文件

解压文件的命令与压缩文件的命令类似。在我们运行上述压缩命令后,这些命令用于解压缩 `bigfile`:

* tar: `tar xf bigfile.tgz`
* zip: `unzip bigfile.zip`
* gzip: `gunzip bigfile.gz`
* bzip2: `bunzip2 bigfile.bz2`
* xz: `xz -d bigfile.xz` 或 `unxz bigfile.xz`

### 自己运行压缩对比

如果你想自己运行一些测试,抓取一个大的且可以替换的文件,并使用上面显示的每个命令来压缩它 —— 最好使用一个新的子目录。你可能需要先安装 `xz`,如果你想在测试中包含它的话。下面这个脚本可以让压缩对比更容易进行,但是可能需要花费几分钟完成。

```
#!/bin/bash

# 询问用户文件名称
echo -n "filename> "
read filename

# 你需要这个,因为一些命令将替换原始文件
cp $filename $filename-2

# 先清理(以免先前的结果仍然可用)
rm $filename.*

tar cvfz ./$filename.tgz $filename > /dev/null
zip $filename.zip $filename > /dev/null
bzip2 $filename
# 恢复原始文件
cp $filename-2 $filename
gzip $filename
# 恢复原始文件
cp $filename-2 $filename
xz $filename

# 显示结果
ls -l $filename.*

# 替换原始文件
mv $filename-2 $filename
```

---

via: <https://www.networkworld.com/article/3538471/how-to-compress-files-on-linux-5-ways.html>

作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972)
译者:[robsean](https://github.com/robsean) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
12,191
如何避免中间人攻击(MITM)
https://opensource.com/article/20/4/mitm-attacks
2020-05-07T10:07:06
[ "中间人攻击" ]
https://linux.cn/article-12191-1.html
> 
> 首先搞明白到底什么是中间人攻击(MITM),才能避免成为此类高科技窃听的受害者。
> 
> 

![](/data/attachment/album/202005/07/100655i7og1eqsw6o3ww81.jpg)

当你使用电脑发送数据或与某人在线通话的时候,你一定采取了某种程度的安全隐私手段。

但如果有第三方在你不知情的情况下窃听,甚至冒充某个你信任的商业伙伴窃取破坏性的信息呢?你的私人数据就这样被放在了危险分子的手中。

这就是臭名昭著的<ruby> 中间人攻击 <rt> man-in-the-middle </rt></ruby>(MITM)。

### 到底什么是中间人攻击?

黑客潜入到你(目标受害者)与某个设备间的通信过程中,窃取敏感信息(多数是身份信息)进而从事各种违法行为的过程,就是一次中间人攻击。Scamicide 公司创始人 Steve J. J. Weisman 介绍说:

> 
> “中间人攻击也可以发生在受害者与某个合法 app 或网页中间。当受害者以为自己面对的是正常 app 或网页时,其实 Ta 正在与一个仿冒的 app 或网页互动,将自己的敏感信息透露给了不法分子。”
> 
> 

中间人攻击诞生于 1980 年代,是最古老的网络攻击形式之一,而且它至今仍然相当常见。Weisman 解释道,发生中间人攻击的场景有很多种:

* **攻陷一个未有效加密的 WiFi 路由器**:该场景多见于人们使用公共 WiFi 的时候。“虽然家用路由器也很脆弱,但黑客攻击公共 WiFi 网络的情况更为常见。”Weisman 说,“黑客的目标就是从毫无戒心的人们那里窃取在线银行账户这样的敏感信息。”
* **攻陷银行、金融顾问等机构的电子邮件账户**:“一旦黑客攻陷了这些电子邮件系统,他们就会冒充银行或此类公司给受害者发邮件”,Weisman 说,“他们以紧急情况的名义索要个人信息,诸如用户名和密码。受害者很容易被诱骗交出这些信息。”
* **发送钓鱼邮件**:窃贼们还可能冒充成与受害者有合作关系的公司,向其索要个人信息。“在多个案例中,钓鱼邮件会引导受害者访问一个伪造的网页,这个伪造的网页看起来就和受害者常常访问的合法公司网页一模一样。”Weisman 说道。
* **在合法网页中嵌入恶意代码**:攻击者还会把恶意代码(通常是 JavaScript)嵌入到一个合法的网页中。“当受害者加载这个合法网页时,恶意代码首先按兵不动,直到用户输入账户登录或是信用卡信息时,恶意代码就会复制这些信息并将其发送至攻击者的服务器。”网络安全专家 Nicholas McBride 介绍说。

### 有哪些中间人攻击的著名案例?

联想作为主流的计算机制造厂商,在 2014 到 2015 年售卖的消费级笔记本电脑中预装了一款叫做 VisualDiscovery 的软件,拦截用户的网页浏览行为。当用户的鼠标在某个产品页面经过时,这款软件就会弹出一个来自合作伙伴的类似产品的广告。

这起中间人攻击事件的关键在于:VisualDiscovery 拥有访问用户所有私人数据的权限,包括身份证号、金融交易信息、医疗信息、登录名和密码等等。所有这些访问行为都是在用户不知情和未获得授权的情况下进行的。联邦交易委员会(FTC)认定此次事件为欺诈与不公平竞争。2019 年,联想同意为此支付 8300 万美元的集体诉讼罚款。

### 我如何才能避免遭受中间人攻击?
* **避免使用公共 WiFi**:Weisman 建议,从来都不要使用公开的 WiFi 进行金融交易,除非你安装了可靠的 VPN 客户端并连接至可信任的 VPN 服务器。通过 VPN 连接,你的通信是加密的,信息也就不会失窃。 * **时刻注意:**对要求你更新密码或是提供用户名等私人信息的邮件或文本消息要时刻保持警惕。这些手段很可能被用来窃取你的身份信息。 如果不确定收到的邮件来自于确切哪一方,你可以使用诸如电话反查或是邮件反查等工具。通过电话反查,你可以找出未知发件人的更多身份信息。通过邮件反查,你可以尝试确定谁给你发来了这条消息。 通常来讲,如果发现某些方面确实有问题,你可以听从公司中某个你认识或是信任的人的意见。或者,你也可以去你的银行、学校或其他某个组织,当面寻求他们的帮助。总之,重要的账户信息绝对不要透露给不认识的“技术人员”。 * **不要点击邮件中的链接**:如果有人给你发了一封邮件,说你需要登录某个账户,不要点击邮件中的链接。相反,要通过平常习惯的方式自行去访问,并留意是否有告警信息。如果在账户设置中没有看到告警信息,给客服打电话的时候也*不要*联系邮件中留的电话,而是联系站点页面中的联系人信息。 * **安装可靠的安全软件**:如果你使用的是 Windows 操作系统,安装开源的杀毒软件,如 [ClamAV](https://www.clamav.net)。如果使用的是其他平台,要保持你的软件安装有最新的安全补丁。 * **认真对待告警信息**:如果你正在访问的页面以 HTTPS 开头,浏览器可能会出现一则告警信息。例如,站点证书的域名与你尝试访问的站点域名不相匹配。千万不要忽视此类告警信息。听从告警建议,迅速关掉页面。确认域名没有输入错误的情况下,如果情况依旧,要立刻联系站点所有者。 * **使用广告屏蔽软件**:弹窗广告(也叫广告软件攻击)可被用于窃取个人信息,因此你还可以使用广告屏蔽类软件。对个人用户来说,中间人攻击其实是很难防范的,因为它被设计出来的时候,就是为了让受害者始终蒙在鼓里,意识不到任何异常。有一款不错的开源广告屏蔽软件叫 [uBlock origin](https://github.com/gorhill/uBlock)。可以同时支持 Firefox 和 Chromium(以及所有基于 Chromium 的浏览器,例如 Chrome、Brave、Vivaldi、Edge 等),甚至还支持 Safari。 ### 保持警惕 要时刻记住,你并不需要立刻就点击某些链接,你也并不需要听从某个陌生人的建议,无论这些信息看起来有多么紧急。互联网始终都在。你大可以先离开电脑,去证实一下这些人的真实身份,看看这些“无比紧急”的页面到底是真是假。 尽管任何人都可能遭遇中间人攻击,只要弄明白何为中间人攻击,理解中间人攻击如何发生,并采取有效的防范措施,就可以保护自己避免成为其受害者。 --- via: <https://opensource.com/article/20/4/mitm-attacks> 作者:[Jackie Lam](https://opensource.com/users/beenverified) 选题:[lujun9972](https://github.com/lujun9972) 译者:[tinyeyeser](https://github.com/tinyeyeser) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Whether you're sending data on your computer or talking to someone online, you want to assume some level of security and privacy. But what if a third party is eavesdropping online, unbeknownst to you? And worse, what if they're impersonating someone from a business you trust in order to gain damaging information? This could put your personal data into the hands of dangerous, would-be thieves. Welcome to what's called a man-in-the-middle (MITM) attack. ## What are man-in-the-middle attacks? A man-in-the-middle attack occurs when a cybercriminal inserts themselves into communications between you, the targeted victim, and a device in order to steal sensitive information that can be used for a variety of criminal purposes—most notably identity theft, says Steve J. J. Weisman, founder of Scamicide. "A man-in-the-middle-attack can also occur when the victim believes he or she is communicating with a legitimate app or website," says Weisman, "when the truth is that the victim is communicating with a phony website or app and thereby providing sensitive information to the criminal." One of the oldest forms of cyberattacks, MITM attacks have been around since the 1980s. What's more, they're quite common. As Weisman explains, there are a handful of ways a MITM attack can happen: **Attacking a WiFi router that is not properly secured:**This typically occurs when someone is using public WiFi. "While home routers might be vulnerable, it's more common for criminals to attack public WiFi networks," says Weisman. The goal is to spy on unsuspecting people who are handling sensitive information, such as their online bank accounts, he adds.**Hacking email accounts of banks, financial advisers, and other companies:**"Once [the criminals] have hacked these email systems, they send out emails that appear to come from the legitimate bank or other company," Weisman says. "[They ask] for personal information, such as usernames and passwords, under the guise of an emergency. 
The targeted victim is lured into providing that information."**Sending phishing emails:**Thieves might also send emails pretending to be legitimate companies that the targeted victim does business with, asking the recipient for their personal information. "In many instances, the spear-phishing emails will direct the victim to a counterfeit website that appears to be that of a legitimate company with which the victim does business," says Weisman.**Using malicious code in legitimate websites:**Attackers can also place malicious code—usually JavaScript—into a legitimate website by way of a web application. "When the victim loads the legitimate page, the malicious code just sits in the background until the user enters sensitive information, such as account login or credit card details, which the malicious code then copies and sends to the attackers' servers," says Nicholas McBride, a cybersecurity consultant. ## What is an example of an MITM attack? The Lenovo case is a well-known example of an MITM attack. In 2014 and 2015, the major computer manufacturer sold consumer laptops with preinstalled software that meddled with how a user's browser communicated with websites. Whenever the user's cursor hovered over a product, this software, called VisualDiscovery, sent pop-up ads from retail partners that sold similar products. Here's the kicker: This MITM attack allowed VisualDiscovery to access all of the user's personal data, including social security numbers, info about financial transactions, medical info, and logins and passwords. All without the user knowing or granting permission beforehand. The FTC deemed this a deceptive and unfair online scam. Lenovo agreed to pay $8.3 million in a class-action settlement in 2019. ## How can I protect myself from an online attack? 
- **Avoid using public WiFi:**Weisman recommends never using public WiFi for financial transactions unless you've installed a reliable virtual private network (VPN) client on your device and have a VPN host you can use and trust. Over a VPN connection, your communications are encrypted, so your information can't be stolen. - **Be on the lookout:**Be wary of emails or text messages that ask you to update your password or provide your username or personal information. These methods can be used to steal your identity.If you are unsure of the actual identity of the party sending you the email, you can use tools such as a reverse phone or email search. With a reverse phone number lookup, you may be able to find out more about the identity of an unknown texter. And with a reverse email lookup, you can try to determine who might have sent you a message. Generally, if something's actually a problem, you'll hear from someone you know and trust within your company, or from someone you can also go and meet, in person, at your bank or school or other organization. Important account information is never the purview of an unknown technician. - **Don't click on links contained in emails:**If someone sends you an email telling you that you need to sign into an account, don't click on the link provided in the email. Instead, navigate to the site yourself, log in as you normally would, and look for an alert there. If you don't see an alert message in your account settings, contact a representative by phone using contact information on the site and*not*from the email. - **Install reliable security software:**If you're on Windows, install good open source antivirus like[ClamAV](https://www.clamav.net). On all platforms, keep your software up to date with the latest security patches. - **Take alerts seriously:**If you're visiting a site that starts with HTTPS, your browser might alert you to an issue, says McBride. 
For instance, if the domain name on the site's certificate doesn't match the one you're trying to visit. Don't ignore the alert. Heed it and navigate away from the site for now. Verify that you haven't[mistyped it](https://opensource.com/article/20/1/stop-typosquatting-attacks), and if the problem persists, contact the site owner if you can.
- **Use an ad blocker:**Pop-up ads (also known as*adware attacks*) can be used to intercept your personal information, so use an ad blocker. "The truth is, as an individual user, it's hard to protect against a MITM attack," says McBride, "as it is designed to leave the victim in the dark and to prevent them from noticing that there is anything wrong."A good open source ad blocker (or "wide-spectrum blocker," in the developer's words) is [uBlock origin](https://github.com/gorhill/uBlock). It's available for both Firefox and Chromium (and all Chromium-based browsers, such as Chrome, Brave, Vivaldi, Edge, and so on), and even Safari.

## Stay alert

Remember, you don't have to click anything online right away, and you don't have to follow random people's instructions, no matter how urgent they may seem. The internet will still be there after you step away from the computer and verify the identity of a person or site demanding your attention.

While MITM attacks can happen to anyone, understanding what they are, knowing how they happen, and actively taking steps to prevent them can safeguard you from being a victim.

*This article was originally published on BeenVerified.com under a CC BY-SA 2.0 license.*
12,193
线上图片请抛弃 PNG 和 JPG:使用 WebP
https://opensource.com/article/20/4/webp-image-compression
2020-05-07T14:39:00
[ "WebP" ]
https://linux.cn/article-12193-1.html
> 
> 了解一下这个开源的图片编辑工具来节省时间和空间。
> 
> 

![](/data/attachment/album/202005/07/143932l22hot7ebhbbqjmm.jpg)

WebP 是 2010 年 Google 开发的一种图片格式,它为网页上的图片提供了卓越的无损和有损压缩。网站开发者们可以使用 WebP 来创建尺寸更小、细节更丰富的图片,以此来提高网站的速度。更快的加载速度对于网站的用户体验和网站的营销效果是至关重要的。

为了在所有设备和用户中达到最佳加载效果,你网站上的图片文件大小不应该超过 500 KB。

与 PNG 图片相比,WebP 无损图片通常至少要比 PNG 图片小 25%。在同等的 SSIM(<ruby> 结构相似度 <rt> structural similarity </rt></ruby>)质量指标下,WebP 有损图片通常比 JPEG 图片小 25% 到 34%。

无损 WebP 也支持透明度。而在可接受有损 RGB 压缩的情况下,有损 WebP 也支持透明度,通常 PNG 文件大小比它大三倍。

Google 报告称,把动画 GIF 文件转换为有损 WebP 后文件大小减少了 64%,转换为无损 WebP 后文件大小减少了 19%。

WebP 文件格式是一种基于 RIFF(<ruby> 资源互换文件格式 <rt> resource interchange file format </rt></ruby>)的文档格式。你可以用 [hexdump](https://opensource.com/article/19/8/dig-binary-files-hexdump) 看到文件的签名是 `52 49 46 46`(RIFF):

```
$ hexdump --canonical pixel.webp
00000000 52 49 46 46 26 00 00 00 [...] |RIFF&...WEBPVP8 |
00000010 1a 00 00 00 30 01 00 9d [...] |....0....*......|
00000020 0e 25 a4 00 03 70 00 fe [...] |.%...p...`....|
0000002e
```

独立的 libwebp 库作为 WebP 技术规范的参考实现,可以从 Google 的 [Git 仓库](https://storage.googleapis.com/downloads.webmproject.org/releases/webp/index.html) 或 tar 包中获得。

全球在用的 80% 的 web 浏览器兼容 WebP 格式。本文撰写时,Apple 的 Safari 浏览器还不兼容。解决这个问题的方法是将 JPG/PNG 图片与 WebP 图片一起提供,有一些方法和 Wordpress 插件可以做到这一点。

### 为什么要这样做?
我的部分工作是设计和维护我们组织的网站。由于网站是个营销工具,而网站的速度是衡量用户体验的重要指标,我一直致力于提高网站速度,通过把图片转换为 WebP 来减少图片大小是一个很好的解决方案。 我使用了 web.dev 来检测其中一个网页,该工具是由 Lighthouse 提供服务的,遵循 Apache 2.0 许可证,可以在 <https://github.com/GoogleChrome/lighthouse> 找到。 据其官方描述,“LIghthouse 是一个开源的,旨在提升网页质量的自动化工具。你可以在任何公共的或需要鉴权的网页上运行它。它有性能、可用性、渐进式 web 应用、SEO 等方面的审计。你可以在 Chrome 浏览器的开发工具中运行 Lighthouse,也可以通过命令行或作为 Node 模块运行。你输入一个 URL 给 Lighthouse,它会对这个网页进行一系列的审计,然后生成这个网页的审计结果报告。从报告的失败审计条目中可以知道应该怎么优化网页。每条审计都有对应的文档解释为什么该项目是重要的,以及如何修复它。” ### 创建更小的 WebP 图片 我测试的页面返回了三张图片。在它生成的报告中,它提供了推荐和目标。我选择了它报告有 650 KB 的 `app-graphic` 图片。通过把它转换为 WebP 格式,预计可以把图片大小降到 61 KB,节省 589 KB。我在 Photoshop 中把它转换了,用默认的 WebP 设置参数保存它,它的文件大小为 44.9 KB。比预期的还要好!从下面的 Photoshop 截图中可以看出,两张图在视觉质量上完全一样。 ![](/data/attachment/album/202005/07/144528m4jgucnozc4v0iqz.png) *左图:650 KB(实际大小)。右图: 44.9 KB(转换之后的目标大小)。* 当然,也可以用开源图片编辑工具 [GIMP](http://gimp.org) 把图片导出为 WebP。它提供了几个质量和压缩的参数: ![](/data/attachment/album/202005/07/143538plu797s4wmhy9b1p.jpg) 另一张图放大后: ![](/data/attachment/album/202005/07/144549ee1ddngawdr01ari.png) PNG(左图)和 WebP(右图),都是从 JPG 转换而来,两图对比可以看出 WebP 不仅在文件大小更小,在视觉质量上也更优秀。 ### 把图片转换为 WebP 你也可以用 Linux 的命令行工具把图片从 JPG/PNG 转换为 WebP: 在命令行使用 `cwebp` 把 PNG 或 JPG 图片文件转换为 WebP 格式。你可以用下面的命令把 PNG 图片文件转换为质量参数为 80 的 WebP 图片。 ``` cwebp -q 80 image.png -o image.webp ``` 你还可以用 [Image Magick](https://imagemagick.org),这个工具可能在你的发行版本软件仓库中可以找到。转换的子命令是 `convert`,它需要的所有参数就是输入和输出文件: ``` convert pixel.png pixel.webp ``` ### 使用编辑器把图片转换为 WebP 要在图片编辑器中来把图片转换为 WebP,可以使用 [GIMP](https://en.wikipedia.org/wiki/GIMP)。从 2.10 版本开始,它原生地支持 WebP。 如果你是 Photoshop 用户,由于 Photoshop 默认不包含 WebP 支持,因此你需要一个转换插件。遵循 Apache License 2.0 许可证发布的 WebPShop 0.2.1 是一个用于打开和保存包括动画图在内的 WebP 图片的 Photoshop 模块,在 <https://github.com/webmproject/WebPShop> 可以找到。 为了能正常使用它,你需要把它放进 Photoshop 插件目录下的 `bin` 文件夹: Windows x64 :`C:\Program Files\Adobe\Adobe Photoshop\Plug-ins\WebPShop.8bi` Mac:`Applications/Adobe Photoshop/Plug-ins/WebPShop.plugin` ### Wordpress 上的 WebP 很多网站是用 Wordpress 搭建的(我的网站就是)。因此,Wordpress 怎么上传 WebP 
图片?本文撰写时,它还不支持。但是,当然已经有插件来满足这种需求,因此你可以在你的网站上同时准备 WebP 和 PNG/JPG 图片(为 Apple 用户)。 在 [Marius Hosting](https://mariushosting.com/) 有下面的[说明](https://mariushosting.com/how-to-upload-webp-files-on-wordpress/): > > “直接向 Wordpress 上传 WebP 图片会怎样?这很简单。向你的主题 `functions.php` 文件添加几行内容就可以了。Wordpress 默认不支持展示和上传 WebP 文件,但是我会向你介绍一下怎么通过几个简单的步骤来让它支持。登录进你的 Wordpress 管理员界面,进入‘外观/主题编辑器’找到 `functions.php`。复制下面的代码粘贴到该文件最后并保存: > > > > ``` > //** *Enable upload for webp image files.*/ > function webp_upload_mimes($existing_mimes) { > $existing_mimes['webp'] = 'image/webp'; > return $existing_mimes; > } > add_filter('mime_types', 'webp_upload_mimes'); > ``` > > 如果你想在‘媒体/媒体库’时看到缩略图预览,那么你需要把下面的代码也添加到 `functions.php` 文件。为了找到 `functions.php` 文件,进入‘外观/主题编辑器’并搜索 `functions.php`,然后复制下面的代码粘贴到文件最后并保存: > > > > ``` > //** * Enable preview / thumbnail for webp image files.*/ > function webp_is_displayable($result, $path) { > if ($result === false) { > $displayable_image_types = array( IMAGETYPE_WEBP ); > $info = @getimagesize( $path ); > > if (empty($info)) { > $result = false; > } elseif (!in_array($info[2], $displayable_image_types)) { > $result = false; > } else { > $result = true; > } > } > > return $result; > } > add_filter('file_is_displayable_image', 'webp_is_displayable', 10, 2); > ``` > > ” > > > ### WebP 和未来 WebP 是一个健壮而优化的格式。它看起来更好,压缩率更高,并具有其他大部分常见图片格式的所有特性。不必再等了,现在就使用它吧。 --- via: <https://opensource.com/article/20/4/webp-image-compression> 作者:[Jeff Macharyas](https://opensource.com/users/jeffmacharyas) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lxbwolf](https://github.com/lxbwolf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
WebP is an image format developed by Google in 2010 that provides superior lossless and lossy compression for images on the web. Using WebP, web developers can create smaller, richer images that improve site speed. A faster loading website is critical to the user experience and for the website's marketing effectiveness. For optimal loading across all devices and users, images on your site should not be larger than 500 KB in file size. WebP lossless images are often at least 25% smaller in size compared to PNGs. WebP lossy images are often anywhere from 25-34% smaller than comparable JPEG images at equivalent SSIM (structural similarity) quality index. Lossless WebP supports transparency, as well. For cases when lossy RGB compression is acceptable, lossy WebP also supports transparency, typically providing three times smaller file sizes compared to PNG. Google reports a 64% reduction in file size for images converted from animated GIFs to lossy WebP, and a 19% reduction when converted to lossless WebP. The WebP file format is based on the RIFF (resource interchange file format) document format. The file signature is **52 49 46 46** (RIFF), as you can see with [hexdump](https://opensource.com/article/19/8/dig-binary-files-hexdump): ``` $ hexdump --canonical pixel.webp 00000000 52 49 46 46 26 00 00 00 [...] |RIFF&...WEBPVP8 | 00000010 1a 00 00 00 30 01 00 9d [...] |....0....*......| 00000020 0e 25 a4 00 03 70 00 fe [...] |.%...p...`....| 0000002e ``` The standalone libwebp library serves as a reference implementation for the WebP specification and is available from Google's[ Git repository](https://storage.googleapis.com/downloads.webmproject.org/releases/webp/index.html) or as a tarball. The WebP format is compatible with vast majority of the web browsers in use worldwide. At the time of this writing, it is not compatible with Apple's Safari browser. 
The workaround for this is to serve up a JPG/PNG alongside a WebP, and there are methods and Wordpress plugins to do that. ## Why does this matter? Part of my job is to design and maintain our organization's website. Since the website is a marketing tool and site speed is a critical aspect of the user experience, I have been working to improve the speed, and reducing image sizes by converting them to WebP has been a good solution. To test the speed of one of the pages, I turned to **web.dev**, which is powered by Lighthouse, released under the Apache 2.0 license, and can be found at [https://github.com/GoogleChrome/lighthouse](https://github.com/GoogleChrome/lighthouse). According to its official description, "Lighthouse is an open source, automated tool for improving the quality of web pages. You can run it against any web page—public or requiring authentication. It has audits for performance, accessibility, progressive web apps, SEO, and more. You can run Lighthouse in Chrome DevTools, from the command line, or as a Node module. You give Lighthouse a URL to audit, it runs a series of audits against the page, and then it generates a report on how well the page did. From there, use the failing audits as indicators on how to improve the page. Each audit has a reference doc explaining why the audit is important, as well as how to fix it." ## Creating a smaller WebP image On one graphic I tested, the original web version was 650 KB in size. By converting it to WebP, I reduced the file size to 45 KB, and there was no discernible loss in quality. The open source image editor [GIMP](http://gimp.org) supports WebP as an export format. 
It offers several options for quality and compression profile:

![GIMP dialog for exporting webp, as a webp](https://opensource.com/sites/default/files/webp-gimp.webp)

A zoomed-in look of another image:

![WebP vs PNG comparison](https://opensource.com/sites/default/files/uploads/xcompare-png-left-webp-right.png)

PNG (left) and WebP (right), both converted from a JPG, shows the WebP, although smaller in size, is superior in visual quality.

## Convert an image to WebP

To convert images on Linux from JPG/PNG to WebP, you can also use the command-line:

Use **cwebp** on the command line to convert PNG or JPG image files to WebP format. You can convert a PNG image file to a WebP image with a quality range of 80 with the command:

```
$ cwebp -q 80 image.png -o image.webp
```

Alternatively, you can also use [Image Magick](https://imagemagick.org), which is probably available in your distribution's software repository. The subcommand for conversion is **convert**, and all that's needed is an input and output file:

`$ convert pixel.png pixel.webp`

## Convert an image to WebP with an editor

To convert images to WebP with a photo editor, use [GIMP](https://en.wikipedia.org/wiki/GIMP). From version 2.10 on, it supports WebP natively.

If you're a Photoshop user, you need a plugin to convert the files, as Photoshop does not include it natively. WebPShop 0.2.1, released under the Apache License 2.0 license, is a Photoshop module for opening and saving WebP images, including animations, and can be found at: [https://github.com/webmproject/WebPShop](https://github.com/webmproject/WebPShop).

To use the plugin, put the file found in the **bin** folder inside your Photoshop plugin directory:

Windows x64—C:\Program Files\Adobe\Adobe Photoshop\Plug-ins\WebPShop.8bi

Mac—Applications/Adobe Photoshop/Plug-ins/WebPShop.plugin

## WebP on Wordpress

Many websites are built using Wordpress (that's what I use).
So, how does Wordpress handle uploading WebP images? At the time of this writing, it doesn't. But, there are, of course, plugins to enable it so you can serve up both WebP alongside PNG/JPG images (for the Apple crowd). Or there are these [instructions](https://mariushosting.com/how-to-upload-webp-files-on-wordpress/) from [Marius Hosting](https://mariushosting.com/):

"How about directly uploading WebP images to Wordpress? This is easy. Just add some text line on your theme functions.php file. Wordpress does not natively support viewing and uploading WebP files, but I will explain to you how you can make it work in a few simple steps. Log in to your Wordpress admin area and go to Appearance/Theme Editor and find functions.php. Copy and paste the code below at the end of the file and save it.

```
//** *Enable upload for webp image files.*/
function webp_upload_mimes($existing_mimes) {
$existing_mimes['webp'] = 'image/webp';
return $existing_mimes;
}
add_filter('mime_types', 'webp_upload_mimes');
```

If you want to see the thumbnail image preview when you go to Media/Library, you have to add the code below in the same functions.php file. To find the functions.php file, go to Appearance/Theme Editor and find functions.php, then copy and paste the code below at the end of the file and save it."

```
//** * Enable preview / thumbnail for webp image files.*/
function webp_is_displayable($result, $path) {
if ($result === false) {
$displayable_image_types = array( IMAGETYPE_WEBP );
$info = @getimagesize( $path );

if (empty($info)) {
$result = false;
} elseif (!in_array($info[2], $displayable_image_types)) {
$result = false;
} else {
$result = true;
}
}

return $result;
}
add_filter('file_is_displayable_image', 'webp_is_displayable', 10, 2);
```

## WebP and the future

WebP is a robust and optimized format. It looks better, it has better compression ratio, and it has all the features of most other common image formats. There's no need to wait—start using it now.
12,195
将 Fedora 31 升级到 Fedora 32
https://fedoramagazine.org/upgrading-fedora-31-to-fedora-32/
2020-05-08T09:40:24
[ "Fedora", "升级" ]
https://linux.cn/article-12195-1.html
![](/data/attachment/album/202005/08/094029mlse7bs2pip6g87s.png) Fedora 32 [已经发布](/article-12164-1.html)。你可能想升级系统以获得 Fedora 中的最新功能。Fedora Workstation 有图形化的升级方法。另外,Fedora 提供了命令行方法,用于将 Fedora 31 升级到 Fedora 32。 升级前,请访问 [Fedora 32 个常见 bug 的维基页面](https://fedoraproject.org/wiki/Common_F32_bugs),查看是否存在可能影响升级的问题。尽管 Fedora 社区试图确保升级正常进行,但是无法为用户可能使用的每种软硬件组合提供保证。 ### 将 Fedora 31 Workstation 升级到 Fedora 32 在新版本发布不久之后就会出现通知,告诉你有可用的升级。你可以单击该通知启动 “GNOME 软件”。或者,你可以从 GNOME Shell 中选择“软件”。 在 “GNOME 软件”中选择<ruby> 更新 <rt> Updates </rt></ruby>选项卡,你会看到一个页面通知你 Fedora 32 现在可用。 如果你在此页面看不到任何内容,请尝试使用左上方的重新加载按钮。发布后,所有系统可能都需要一段时间才能看到可用的升级。 选择<ruby> 下载 <rt> Download </rt></ruby>获取升级包。你可以继续做事直到下载完成。然后使用 “GNOME 软件”重启系统并应用升级。升级需要时间,因此你可能需要喝杯咖啡,稍后再回来。 ### 使用命令行 如果你是从 Fedora 的先前版本升级的,那么你可能对 `dnf upgrade` 插件很熟悉。这个方法是推荐和受支持的从 Fedora 31 升级到 Fedora 32 的方法。使用此插件将使你轻松地升级到 Fedora 32。 #### 1、更新软件并备份系统 在开始升级过程之前,请确保你有 Fedora 31 的最新软件。如果你安装了<ruby> 模块化软件 <rt> modular software </rt></ruby>,这尤为重要。`dnf` 和 “GNOME 软件”的最新版本对某些模块化流的升级过程进行了改进。要更新软件,请使用 “GNOME 软件” 或在终端中输入以下命令。 ``` sudo dnf upgrade --refresh ``` 此外,在继续操作之前,请确保备份系统。有关备份的帮助,请参阅 Fedora Magazine 上的[备份系列](https://fedoramagazine.org/taking-smart-backups-duplicity/)。 #### 2、安装 DNF 插件 接下来,打开终端并输入以下命令安装插件: ``` sudo dnf install dnf-plugin-system-upgrade ``` #### 3、使用 DNF 开始更新 现在,你的系统已更新、已备份、并且已安装 DNF 插件,你可以在终端中使用以下命令开始升级: ``` sudo dnf system-upgrade download --releasever=32 ``` 这个命令将开始在本地下载所有的升级包,为升级做准备。如果你在升级的时候因为没有更新的包、依赖关系破损或退役的包而出现问题,请在输入上述命令时添加 `--allowerasing` 标志。这将允许 DNF 移除可能阻碍系统升级的软件包。 #### 4、重启并升级 当上一个命令完成了所有升级包的下载,你的系统就可以重新启动了。要将系统引导至升级过程,请在终端中输入以下命令: ``` sudo dnf system-upgrade reboot ``` 此后,系统将重启。在许多版本之前,`fedup` 工具会在内核选择/启动页上创建一个新选项。使用 `dnf-plugin-system-upgrade` 包,你的系统会重启进入 Fedora 31 当前安装的内核;这个是正常的。在选择内核之后,你的系统会立即开始升级过程。 现在可能是喝杯咖啡休息的好时机!完成后,系统将重启,你将能够登录到新升级的 Fedora 32 系统。 ![](/data/attachment/album/202005/08/094031hkrz75fe7oxrgtde.png) ### 解决升级问题 有时,升级系统时可能会出现意外问题。如果你遇到任何问题,请访问 [DNF 
系统升级文档](https://docs.fedoraproject.org/en-US/quick-docs/dnf-system-upgrade/#Resolving_post-upgrade_issues),以获取有关故障排除的更多信息。 如果升级时遇到问题,并且系统上安装了第三方仓库,那么在升级时可能需要禁用这些仓库。对于 Fedora 不提供的仓库的支持,请联系仓库的提供者。 --- via: <https://fedoramagazine.org/upgrading-fedora-31-to-fedora-32/> 作者:[Adam Šamalík](https://fedoramagazine.org/author/asamalik/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Fedora 32 [is available now](https://fedoramagazine.org/announcing-fedora-32/). You’ll likely want to upgrade your system to get the latest features available in Fedora. Fedora Workstation has a graphical upgrade method. Alternatively, Fedora offers a command-line method for upgrading Fedora 31 to Fedora 32. Before upgrading, visit the [wiki page of common Fedora 32 bugs](https://fedoraproject.org/wiki/Common_F32_bugs) to see if there’s an issue that might affect your upgrade. Although the Fedora community tries to ensure upgrades work well, there’s no way to guarantee this for every combination of hardware and software that users might have. ## Upgrading Fedora 31 Workstation to Fedora 32 Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the **GNOME Software** app. Or you can choose Software from GNOME Shell. Choose the *Updates* tab in GNOME Software and you should see a screen informing you that Fedora 32 is Now Available. If you don’t see anything on this screen, try using the reload button at the top left. It may take some time after release for all systems to be able to see an upgrade available. Choose *Download* to fetch the upgrade packages. You can continue working until you reach a stopping point, and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later. ## Using the command line If you’ve upgraded from past Fedora releases, you are likely familiar with the *dnf upgrade* plugin. This method is the recommended and supported way to upgrade from Fedora 31 to Fedora 32. Using this plugin will make your upgrade to Fedora 32 simple and easy. ### 1. Update software and back up your system Before you do start the upgrade process, make sure you have the latest software for Fedora 31. 
This is particularly important if you have modular software installed; the latest versions of dnf and GNOME Software include improvements to the upgrade process for some modular streams. To update your software, use *GNOME Software* or enter the following command in a terminal.

sudo dnf upgrade --refresh

Additionally, make sure you back up your system before proceeding. For help with taking a backup, see [the backup series](https://fedoramagazine.org/taking-smart-backups-duplicity/) on the Fedora Magazine.

### 2. Install the DNF plugin

Next, open a terminal and type the following command to install the plugin:

sudo dnf install dnf-plugin-system-upgrade

### 3. Start the update with DNF

Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:

sudo dnf system-upgrade download --releasever=32

This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the *--allowerasing* flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade.

### 4. Reboot and upgrade

Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:

sudo dnf system-upgrade reboot

Your system will restart after this. Many releases ago, the *fedup* tool would create a new option on the kernel selection / boot screen. With the *dnf-plugin-system-upgrade* package, your system reboots into the current kernel installed for Fedora 31; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.

Now might be a good time for a coffee break!
Once it finishes, your system will restart and you’ll be able to log in to your newly upgraded Fedora 32 system. ## Resolving upgrade problems On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the [DNF system upgrade quick docs](https://docs.fedoraproject.org/en-US/quick-docs/dnf-system-upgrade/#Resolving_post-upgrade_issues) for more information on troubleshooting. If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories. ## Max I’m stuck abroad because of Corona and therefore using my personal Fedora laptop for doing work stuff, as my work equipment is not with me. I’d prefer to upgrade once back in my home country in June, to not risk getting problems and not being able to work. Are there any security implications with this? Will security related upgrades continue to be deployed? ## Tom Yes, Fedora 31 will still be supported for another 6 months at least, I don’t know the exact number. ## Ben Cotton Fedora releases are supported until 4 weeks after N+2 is released, which means they have a roughly 13-month lifespan. So Fedora 31 is schedule to reach end-of-life in mid-to-late November. You can see the Fedora release life cycle documentation for more information. ## Max Thanks Ben, good news for me. I’ll read up on the life cycle documentation. ## Pedro Así me funciono sudo dnf upgrade –refresh sudo dnf install dnf-plugin-system-upgrade sudo dnf system-upgrade download –refresh –releasever=32 Un saludo ## Jorge Perfecto !! ## amx –refresh ## KW I like the line “Now might be a good time for a coffee break!” since it takes my machine many hours to do the upgrade. That’s a big cup. I usually let it upgrade overnight and on a Friday. 
## KW Went very smoothly overnight although I had to delete some software to have enough disk space. Now I am reinstalling it. The whole process took about 14 hours. ## UT Denden Treminal turn back to me no command line exist such a commands-line. sudo dnf system-upgrade download –releasever=32 and sudo dnf system-upgrade reboot Allright I will take followeing coments. ## Sebastiaan Have you installed the dnf system-upgrade plugin? ## UT Denden I am sorry. i miss your comments. But I done upgrade with software center. Thank you. Thats fine distr’ I like fedora. ## jahid65 Went smoothly. Just had to upgrade dash to dock extensions after. Nvidia driver, rpmfusion all working great. ## barz Did you blacklisted Noveau driver again? It never worked for me the rpmfusion Nvidia installer thing. I have to do it manually . . . I want to upgrade but I don’t want to blacklist that again. How did it work for you? I’m curious. ## Karlis K. Allow me to ask you how do you blacklist the nouveau drivers? I believe if you blacklist them by adding the parameters to GRUB_CMDLINE_LINUX they persist through the upgrade process, while adding modprobe configs might get overriden. ## barz I did both following the next tutorial: https://www.if-not-true-then-false.com/2015/fedora-nvidia-guide/ but I did it several times until it worked, it wasn’t nice process. ## jahid65 No I didn’t have to blacklist noveau again. It was already blacklisted since first install. Basically, I disabled all extensions and change to default adwaita theme beforehand. Then followed the commands in the article and about 45 minutes later and 2.1 GB download I am on the latest Fedora 32. Didn’t had to do any editing to any repository file. Just pressed y few times during upgrade. I had only rpmfusion and opencisco enabled. Steam, nvidia driver, 3 Tomb Raider Games all are working great. Whatever I did for the test I did in a fedora 31 container and it is still working after the upgrade. It was painless upgrade. 
## barz I have to update that it worked properly. In the end I had to install Nvidia Drivers Manually, but that’s ok. So, if someone comes and asks if Fedora update is going to reinstall Noveau without your concern, the answer is no. That’s great. Thanks everybody for the responses. ## christosgreece Did rpmfusion worked without any configurations ?? You didn’t had to configure rpmfusion repos at all ? This is too good ! ## Joey I upgraded from Fedora 31 KDE Spin to 32 following the steps. When I reboot on my updated system I checked if earlyoom was enabled. My surprise was that if was not even installed, so I had to download and enable it manually? Is this the expected behaviour? https://fedoraproject.org/wiki/Changes/EnableEarlyoom ## Ben Cotton Yes, EarlyOOM is only enabled for Fedora Workstation. The KDE Special Interest Group will have to decide if they want to enable it on the KDE Spin. ## Joey I understand. Thank you very much for your answer ^^ ## MPN First of all a thanks to the OP for bringing it up. I didn’t know that earlyoom existed and I’ve experienced more than a few situations where probably would have saved me the hard reboot. So, just a heads up, I did upgrade from F31 to F32 a few days ago. This package has not been installed during the upgrade. I installed it and enabled it myself just now! ## Chandler How did you upgraded from F31 KDE to F32 KDE?? ## Stan Sykes I used the cli method described above – it was perfect for my installation which also has other desktops installed such as Gnome and i3. All run perfectly after upgrade. ## Fabio Just successfully installed Fedora 32 Workstation from scratch on external USB disk (starting from a DVD burned with Fedora 32: x86_64 DVD ISO). No issues, all perfect. Congratulation to the developments team. ## Kuba This is an excellent release of Fedora! Already upgrading. ## Joel Ya lo instale. La fluidez que tiene en mi laptop se siente excelente. 
## Gerhard the new Gnome version should have blurr on the login screen, so that the wallpaper is blurred. But I don’t have it, after the upgrade. ## Lukas Only the lock screen has the new blur design already. The new login should be introduced in the next GNOME version i think. ## Stenfrank Upgraded to Fedora 32 ## Doug Worked perfectly on my t480s – things have come a long way since Fedora 7 when I started. Brilliant job and many thanks. ## htoosattwai upgrading 31 to 32 stopped at 90 %. This is bug ??? Or ??? ## John Evans Same here. I let my desktop sit for 5 hours, but it wouldn’t go beyond 90%. Turned off and back on, and… mostly seems ok. My postgresql server is the only problem area so far, and that makes sense due to postgis moving from 2.5 to 3.0. Still looking thru logs to see where the upgrade went wrong, but my system seems functional. I upgraded a laptop a few hours before my desktop. Was expecting the laptop to be the troublesome one since it’s a macbook, but that one went fine. ## Rogelio Actualice a Fedora 32 sin problema por medio del asistente gráfico, algo muy fácil para personas que no sabemos nada de linux. Estoy muy satisfecho con Fedora. ## Gerhard Fedora 32 should have kernel version 5.6.7, but after the update I only have 5.6.6-300.fc32.x86_64 What now? And why, if the 5.6.7 kernel is up to date, don’t I get it? All very curious! ## Ben Cotton If you update now, you should get it. The 5.6.7 kernel build hit the stable repos 11 hours ago. ## Wayne Hammond I am still running Fedora 30 as my HP Envy laptop video does not work in Fedora 31. By all accounts the Optimus video is supposed to be fully supported. Have there been any improvement in Nvideo support? ## marcelo Most cases I saw, just adding nouveau.modeset=0 to the kernel boot parameters. Even if you did not install NVidia drivers. ## barz I just have a question. With the upgrade, do I need to blacklist Nouveau driver again? I’m currently with Fedora 31 and everything works great. 
I would like to upgrade but I don’t want to do that again. I don’t care if I have to reinstall the nvidia driver with no GUI after every kernel update.

## MPN
In my case, I just left the RPMFusion repos enabled (Free, NonFree, Nvidia NonFree, etc.) and the installation picked up the Nvidia driver. It didn’t try to use Nouveau.

## barz
Well, it worked. Once blacklisted in modprobe and grub, it stays the same, and that’s great.

## Ben
Thank you for the command line instructions. I had to add --allowerasing to allow the system-upgrade download. Then after the reboot I walked away and returned a few minutes later to an upgraded system. Looks good so far!

## Alastor
First, sorry for the dumb question, but I’m new to Linux and this is my first Fedora upgrade. I have some programs installed and modified, like LibreOffice, to make them work with MS Office documents. If I upgrade from Fedora 31 to Fedora 32, will this affect my installed programs? Do I need to reinstall them, or will they still be there and ready to use?

## Tom
Hi! I’m not sure what exactly you have changed, but all the program settings should stay the same; it works almost like a usual update, but on a larger scale. If you have custom settings for LibreOffice, they should stay the same; if you have installed LibreOffice manually (I usually install it manually to have the latest version) it should stay intact, from my experience. Some programs may have settings changed, but only if the program itself has had a noticeable change. But, of course, if you have something very important and you are not too sure how to reinstall the system safely if something doesn’t work, make a backup (and always have a backup for your important stuff, of course). Write back if you tried to update and something didn’t work!

## Anon
Can’t upgrade either by GNOME Software or by the command line.
Problem 1: package python2-matplotlib-2.2.5-1.fc31.x86_64 requires python2-backports-functools_lru_cache, but none of the providers can be installed
- package python2-matplotlib-2.2.5-1.fc31.x86_64 requires python2.7dist(backports.functools-lru-cache), but none of the providers can be installed
- python2-backports-functools_lru_cache-1.5-6.fc31.noarch does not belong to a distupgrade repository
- problem with installed package python2-matplotlib-2.2.5-1.fc31.x86_64
Problem 2: package python2-matplotlib-2.2.5-1.fc31.x86_64 requires python2-cycler >= 0.10.0, but none of the providers can be installed
- package python2-matplotlib-2.2.5-1.fc31.x86_64 requires python2.7dist(cycler) >= 0.10, but none of the providers can be installed
- package python2-matplotlib-tk-2.2.5-1.fc31.x86_64 requires python2-matplotlib(x86-64) = 2.2.5-1.fc31, but none of the providers can be installed
- python2-cycler-0.10.0-10.fc31.noarch does not belong to a distupgrade repository
- problem with installed package python2-matplotlib-tk-2.2.5-1.fc31.x86_64

## Cornel Panceac
Removing the offending packages fixed it for me (only two python2-* packages in my case). Still, I’m left with this warning:
Modular dependency problem:
Problem: conflicting requests
- nothing provides module(platform:f31) needed by module gimp:2.10:3120191106095052:f636be4b-0.x86_64
I’ve started the download anyway; let’s see if it’ll break my system 🙂

## MPN
Inside /etc/dnf/modules.d you will find the file gimp.modules. Before the upgrade, I renamed it to gimp.modules.disabled. This stopped raising the conflict.

## Cornel Panceac
Thank you; I upgraded before reading your reply, that is, I just ignored the warning. The upgrade completed successfully and gimp is at least starting. So everything looks good so far, gimp-wise. What is not so good is that all the apps still depending on X11 will crash, as expected, I assume. In the particular case of smplayer’s use of mplayer, I’ve changed the ‘default’ video driver to ‘sdl’.
This allows smplayer + mplayer to work again. This is just in case anyone hits the same problem.

## Cornel Panceac
Actually, I’ve found that I have to use Xorg if I want smplayer on sdl to work. So, from this point of view, this is a regression.

## Juan
Same problem. Smplayer crashed except when using sdl. While it does not crash with sdl, on my system the screen is black (audio works). I’ve only made it work by selecting the advanced mplayer option: “Execute mplayer in own window” (or similar; mine is in Spanish).

## Marko
Had the same issue on F31 too, and it’s probably Wayland misbehaving…

## Takfed
Upgraded fine on my Samsung RF410 running a 10 year old Intel chip. It works fine on my Cinnamon edition. How do I check if EarlyOOM is enabled?

## Ben Cotton
You can check from a terminal whether EarlyOOM is enabled.

## Philippe
Typo in the first paragraph: “Alternatively, Fedora offers a command-line … Fedora 30 to Fedora 31.” Guessing it should be from 31 to 32.

## Ben Cotton
Fixed, thanks!

## Laurent
Upgraded fine on my DELL XPS 17″ (2011). The upgrade took 32 minutes. Everything works fine. Thanks, Fedora teams.

## Edwin
Upgraded successfully, but how do I get the newest wallpapers? I ran dnf install f32-backgrounds-gnome, but only two 4:3 blue wallpapers were added to my background selection menu.

## essa
Upgraded from 31 to 32 so easily

## Travis Juntara
Nice. I plan to upgrade this weekend. Gotta clear up some disk space though, since my / and /home are on a 128GB SSD. It’ll be nice to see TRIM finally activated and the performance gains and new ACO shader compiler in Mesa 20.
(My Fed 31 is on 19.2.8.)

## Chucho Linux
Error:
Problem: package VirtualBox-6.1-6.1.0_135406_fedora31-1.x86_64 requires python(abi) = 3.7, but none of the providers can be installed
- python3-3.7.7-1.fc31.x86_64 does not belong to a distupgrade repository
- problem with installed package VirtualBox-6.1-6.1.0_135406_fedora31-1.x86_64
[station8@localhost ~]$ sudo dnf system-upgrade download --releasever=32 --skip-broken

## tetsuo1976
Remove VirtualBox, upgrade your system, and then install the latest 6.1.x test build from here: https://www.virtualbox.org/wiki/Testbuilds It worked for me.

## CherubDoc
Upgrading from 31 to 32 broke suspend/resume on my laptop, an old Sony Vaio. After a few tries, installing an “old” fc31 kernel solved the issue. So I just wanted to share this solution in case anyone is experiencing the same. I think I have to wait for a fix in new fc32 kernels to use them again, and stick with the one that works…

## Yogesh Sharma
I am using flatpak; after the upgrade to Fedora 32 I was getting this warning:
F: Warning: Treating remote fetch error as non-fatal since runtime/org.fedoraproject.Platform/x86_64/f30 is already installed: No such ref 'runtime/org.fedoraproject.Platform/x86_64/f30' in remote fedora
F: Warning: Can't find runtime/org.fedoraproject.Platform/x86_64/f30 metadata for dependencies: No entry for runtime/org.fedoraproject.Platform/x86_64/f30 in remote 'fedora' summary
flatpak cache
Running the following helped me:

## Lukas Piekarski
Thanks! That worked for me too.

## Francisco Lopez Rojas
Good day. Do you have any idea why dnfdragora is not working in this version 32? I have installed it twice and nothing. Thanks

## barz
Hi Francisco. For some strange reason it seems to get a bit stuck reading the repositories if you have many of them added. The way I got it to start properly the first time was to open a terminal and run the following: sudo dnfdragora As administrator, you just have to let it run. Once it works, close it and open it normally. I don’t know why it does that, but I hope they look into it in an update.

## dkdk
How do I download all the packages, sources, compilers, etc., for a time without internet?

## River14
I have done a clean installation from a USB on a very old laptop (Acer Extensa 5620z), removing a Fedora 31 server. Everything works like a charm.

## tom
Hi, I get the following message when I try to upgrade:
...
Copr repo for themes owned by tcg 8.2 kB/s | 3.3 kB 00:00
Fedora 32 openh264 (From Cisco) - x86_64 595 B/s | 543 B 00:00
Fedora Modular 32 - x86_64 20 kB/s | 18 kB 00:00
Fedora Modular 32 - x86_64 - Updates 38 kB/s | 23 kB 00:00
Fedora 32 - x86_64 - Updates 21 kB/s | 17 kB 00:00
Fedora 32 - x86_64 24 kB/s | 18 kB 00:00
Photivo - photo processor (Fedora_30) 33 kB/s | 1.7 kB 00:00
RPM Fusion for Fedora 32 - Free - Updates 56 kB/s | 9.1 kB 00:00
RPM Fusion for Fedora 32 - Free 20 kB/s | 10 kB 00:00
RPM Fusion for Fedora 32 - Nonfree - Updates 35 kB/s | 9.3 kB 00:00
RPM Fusion for Fedora 32 - Nonfree 93 kB/s | 10 kB 00:00
TeamViewer - x86_64 30 kB/s | 2.5 kB 00:00
...
Problem 1: conflicting requests
- nothing provides module(platform:f31) needed by module gimp:2.10:3120191106095052:f636be4b-0.x86_64
Problem 2: conflicting requests
- nothing provides module(platform:f31) needed by module minetest:5:3120191217165623:f636be4b-0.x86_64
Error:
Problem 1: package python2-beautifulsoup4-4.9.0-1.fc31.noarch requires python2-lxml, but none of the providers can be installed
- python2-lxml-4.4.0-1.fc31.x86_64 does not belong to a distupgrade repository
- problem with installed package python2-beautifulsoup4-4.9.0-1.fc31.noarch
Problem 2: package xboxdrv-0.8.8-8.fc29.x86_64 requires python2-dbus, but none of the providers can be installed
- python2-dbus-1.2.8-6.fc31.x86_64 does not belong to a distupgrade repository
- problem with installed package xboxdrv-0.8.8-8.fc29.x86_64
(try to add '--skip-broken' to skip uninstallable packages)
I use gimp and xboxdrv on a daily basis; what might be a solution?

## CeSpues
A solution may be using the option --skip-broken in the upgrade command, as you can read in the last line of your message, and also using --allowerasing to solve broken dependencies: dnf system-upgrade download --releasever=32 --skip-broken --allowerasing

## tom
Ok, thanks, I know that, but I don’t want to erase those packages, I need them. I also don’t know why xboxdrv has been dropped since F30; how does one use an Xbox controller since then?

## tom
The command was right; it’s just the display of the font here. Anyway, it did not work. I had to uninstall 3 packages:
* python2-beautifulsoup4-4.9.0-1.fc31.noarch
* python2-lxml-4.4.0-1.fc31.x86_64
* xboxdrv-0.8.8-8.fc29.x86_64
I have read that controller support is now built in with some xpad service, but it did not work out of the box. Also, gimp lost its toolbox configuration.
At least I had to uninstall the gnome-shell extension “Appfolders Management extension”, which caused problems with the activities/search function.

## Eduardo Fraga
Youtube video created: https://youtu.be/3_3j44vARFA

## Gerale Ellis
dnf system-upgrade download --releasever=32
Before you continue ensure that your system is fully upgraded by running “dnf --refresh upgrade”.
Do you want to continue [y/N]: y
Fedora Modular 32 - i386 29 kB/s | 63 kB 00:02
Errors during downloading metadata for repository ‘fedora-modular’:
- Status code: 404 for https://mirrors.fedoraproject.org/metalink?repo=fedora-modular-32&arch=i386 (IP: 152.19.134.142)
- Status code: 404 for https://mirrors.fedoraproject.org/metalink?repo=fedora-modular-32&arch=i386 (IP: 140.211.169.206)
Error: Failed to download metadata for repo ‘fedora-modular’: Cannot prepare internal mirrorlist: Status code: 404 for https://mirrors.fedoraproject.org/metalink?repo=fedora-modular-32&arch=i386 (IP: 140.211.169.206)
How do I resolve the above error?

## Mark
FYI: the list of common bugs wiki page does not mention this little issue for those using puppet. The Fedora-packaged puppet agent does not run on F32 due to ruby being upgraded: https://bugzilla.redhat.com/show_bug.cgi?id=1815115 That is rather a big issue; the workaround seems to be to drop the Fedora package and use the F31 one from puppetlabs, which bundles its own ruby environment. Which I have yet to try.

## TagRaa
Hi, you may try again using the ‘--allowerasing’ flag, i.e. sudo dnf system-upgrade download --releasever=32 --allowerasing

## wffger
Error reported by the gnome-shell extension Dash to Panel. No idea how to fix it.

## CeSpues
You can delete the packages, upgrade to F32, and then install the needed packages again with their dependencies resolved.

## Susana
Problem after reboot during the upgrade from Fedora 31 to Fedora 32; this is what the log shows:
systemd[1]: Reached target Switch Root.
systemd[1]: Starting Switch Root…
systemd[1509]: Failed to switch root: Specified switch root path ‘/sysroot’ does not seem to be an OS tree. os-release file is missing.
After that, the system is in Emergency Mode. Any help? Thanks a lot.

## rafael
Me too

## Suryatej Yaramada
Same here, facing a similar issue. If there are any solutions for this, please let us know.

## JoeHannes
Hi, I had the same problem. I could access my system using “mount /dev/sdX /sysroot”, using the root partition number for X. Then you have to exit Emergency Mode. The cause of that problem was, in my case, a corrupted grubenv file in “/boot/efi/EFI/fedora”. IMPORTANT: these commands are for systems using EFI boot. Try to recreate your grub.cfg with “grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg”. If it complains about grubenv, use “grub2-editenv /boot/efi/EFI/fedora/grubenv create” to create a new one. Now use “grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg” again. It should work now. I’m not sure if this solves your problem, because in my case the order was a bit different: 1. I had the problem. 2. I restored F31 (using clonezilla). 3. Found the corrupted grubenv. 4. Did the upgrade to F32 again. So maybe one of the other steps helped, too.

## Susana
Thanks a lot. I tried to mount /sysroot, but I didn’t find out what the problem was (it said there was a file left out, os-release, but after copying it to the place where it asked for it, it could not start normally anyway). In the end, I reinstalled Fedora 32 from scratch, and after some compatibility problems, I’m back on Fedora 31 until I’m sure I can use all my programs in 32. I also found out about Timeshift for making backups, and now I also know clonezilla, thanks! This time I did not have a backup… never again…

## Mehdi
Successfully upgraded the command-line way. There were some problems starting the upgrade though, complaints like “You cannot enable X from multiple streams, etc.”, as mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1822076.
I simply disabled those complaining streams and started the upgrade.

## Vladimir from Belarus
VirtualBox doesn’t work 🙁 I’m in a deep depression…

## Tim
I had the same and solved it with: on F31 (pre-upgrade): sudo dnf erase VirtualBox; on F32 (after): sudo dnf install VirtualBox. Other than that, my first Fedora upgrade, and it went smoothly. Nice one, Fedora 🙂

## Marko
Doesn’t VirtualBox 6.1 have some hard deps on Python(abi) 2.7? I’d love to hear how you solved that one. Also, it seems we’re still waiting for the Docker repos as well 😉

## Mehdi
Maybe you should migrate to Podman and Buildah. They are pretty cool and work like Docker, in fact.

## Marko
Yeah, unfortunately I cannot quite do that at this point, but I’m sure the Docker repo will catch up. The bigger blocker is VirtualBox, and I will wait until Oracle catches up and then upgrade. But good work on F32, it looks good, and I’ve already upgraded a couple of boxes!

## Marko
Seems the rpmfusion VBox works nicely and is available for F32 😉

## Stan Sykes
It seems I have been using Fedora forever, and once again, this upgrade was seamless and basically trouble-free. I had a couple of small file conflicts at the beginning, but I just erased them (for later re-installation) and from there everything went fine. Many thanks to the Fedora Team for this!

## Attila
Fedora XFCE Spin successfully upgraded from 31 to 32 in ~15-20 minutes! Great work, thanks!

## Miguel Campos
I had to add the option --allowerasing to get the new version upgrade

## D.K. Malungu
The update looks great and feels awesome. Am running it on a T480. Thanks to the community.

## MrMarcie
Thanks again for this fine OS. I have been using Fedora for a couple of years now and am very happy with it. I use XFCE. The upgrade was very smooth.

## Ricardo
Problems with a Lenovo Thunderbolt dock (Thinkpad). Everything works; I have a secondary screen, but when I suspend or lock my system and then come back to work, the second screen never wakes up. It happens only with the dock; connecting directly, everything works normally.
This didn’t happen to me on Fedora 31.

## Gerhard
I’m just having trouble with Fedora 32! Updates hang and are in a constant loop! So there are no updates coming in. After starting the computer, my second screen flickers permanently. It’s just annoying! I don’t like this release in any way and I think it has a lot of bugs. There would still be a lot more to say!

## Steven
dnf system-upgrade since Fedora 23! Never a fresh install since. Excellent work, Fedora team. Best distro by far!

## Tom
Hi, Problem: package python2-twisted-19.2.1-6.fc31.x86_64 requires ……. tried: sudo dnf system-upgrade download –releasever=32 ‐‐allowerasing got ERROR: dnf system-upgrade: error: unrecognized arguments: ‐‐allowerasing Any idea? THX

## Tomasz
@Tom It seems that the dashes (minuses) in the –allowerasing are not actually dashes. Just erase the dashes and replace them with the correct dash/minus characters. I assume you copied it from a web page, where they got replaced by nicer/fancier ones for some reason.

## River14
Does anybody here know how to remove the sudoers on Fedora 32? Great, great job, Fedora team. Congrats!!!!!

## Bob
I had to unlock updates, which I had blocked with exclude entries in /etc/dnf/dnf.conf. The next step was to redo ‘sudo dnf upgrade --refresh’ and follow the instructions for the command line upgrade. The update via the Software GUI wasn’t very talkative about that in my initial attempt. The command line update revealed the root cause. Nevertheless, the upgrade is running impressively well. Well done!

## emmanuel ninos
Do these upgrade commands work for Fedora 31 XFCE too?

## Anthony
Ups! After the upgrade to F32, SAGEmath doesn’t work! The Python 3 in Fedora 32 is no good for SAGE!
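The failure Tom hit above is common when commands are copied from a web page: the page silently swaps the ASCII `--` for typographic dashes. A small sketch of a cleanup step (the `fix_dashes` helper is made up here for illustration) that normalizes a pasted command before you run it:

```shell
# Replace typographic dashes (U+2010 hyphen, U+2013 en dash, U+2014 em dash)
# that web pages substitute for ASCII "-"/"--". Pure text processing, so it
# is safe to run on any pasted command line before executing it.
fix_dashes() {
  printf '%s\n' "$1" | sed -e 's/‐‐/--/g; s/––/--/g; s/——/--/g' \
                           -e 's/‐/-/g; s/–/-/g; s/—/-/g'
}

fix_dashes 'sudo dnf system-upgrade download ‐‐releasever=32 ––allowerasing'
# prints: sudo dnf system-upgrade download --releasever=32 --allowerasing
```

Double substitutions run before single ones so that a pair of en dashes becomes `--` rather than a lone `-`.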
## Anthony
More info: in $ sage -n jupyter, on a Sage 9.0 kernel, running f(x) = x^2 gives:
TypeError Traceback (most recent call last)
/usr/lib64/python3.8/codeop.py in __call__(self, source, filename, symbol)
    134
    135     def __call__(self, source, filename, symbol):
--> 136         codeob = compile(source, filename, symbol, self.flags, 1)
    137         for feature in _features:
    138             if codeob.co_flags & feature.compiler_flag:
TypeError: required field “type_ignores” missing from Module
Maybe someone will help, please …

## Bruno
I am still on Fedora 30. I tried to update to Fedora 31 a month ago and I got a black screen after login. I tried to install the Nvidia drivers via the terminal, but maybe I installed a wrong version, and I couldn’t even reach the login screen after that, with a kernel panic. I couldn’t log in to recovery either, because I didn’t have a root password. Fortunately, I had done a Clonezilla backup before! That event scared me off a bit regarding updates. I can now log in to recovery as root, and I guess I will have to install the Nvidia driver first on Fedora 30 and then try to update again up to F32. Hope it can go smoothly this time!

## andreas
Upgraded a desktop and a laptop (HP Spectre X360-15) to the Fedora 32 KDE flavor, but the login, screen lock and desktop background did not update. If I try to modify them manually, only the sddm login screen reflects the changes I made, while the desktop background and lock screen do not change.

## Stan Sykes
I’ve been running F32 for a couple of days, and finding it awesome! One small issue I have is that text menus are now square boxes in Snap applications (Cherrytree, n’such). I believe that this is an issue with my setup, because I installed a fresh F32 in Virtualbox and all is fine with Snaps. Has anyone got the same issue? I have refreshed my fontconfig to no avail.

## Stan Sykes
Everything solved! I used a second profile to see if my problem was in my local configuration, or not. Turns out a couple of my /home/myacct/.config files had been altered somehow during the upgrade.
A couple of my programmes didn’t run correctly after the upgrade until I fixed these .local issues.

## Bryan
Every Fedora upgrade gives a warning, but the installation guide and internet searches don’t give any information on whether it is safe to hit “y”. If it is not something to worry about, why does it say “warning”?
warning: /var/lib/dnf/system-upgrade/fedora-558931b5e76b51a7/packages/libappindicator-12.10.0-27.fc32.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 12c944d0: NOKEY
Fedora 32 - x86_64 111 kB/s | 1.6 kB 00:00
Importing GPG key 0x12C944D0:
Userid : “Fedora (32) [email protected]”
Fingerprint: 97A1 AE57 C3A2 372C CA3A 4ABA 6C13 026D 12C9 44D0
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-32-x86_64
Is this ok [y/N]: y

## chandramouli narayanan
Didn’t work for me. I tried with a USB with the Fedora 32 x86_64 ISO. Upon boot, I can’t use the mouse or keyboard and it is simply stuck.

## Nissa
What’s the required disk space needed for the upgrade? Fedora 32 isn’t downloading, most likely due to only 2.9 GB of disk space in my root partition.

## danz
Upgraded to F32 on my laptop yesterday. All works fine except an annoying keyboard issue with the gdm login screen: I have to press each key for an extra long time to be able to input letters for my username and password (~0.5 seconds?). Once I logged in, the problem went away and I can type on my keyboard as per usual.

## gombosg
I just wanted to say that this was an incredibly smooth upgrade process with GNOME Software. My machine has lots of packages installed (I’m using it for everyday software development work), yet it was literally a single click. This is the first time this GUI upgrade process has worked for me, so it’s great to see it. GNOME feels really stable now… 3.30 was pretty hard to use with all those pesky memory leaks, but since then the quality has increased a lot. No crashes or whatever for months. Same for companion apps like GNOME Software. Kudos to the Fedora folks for putting this release together!
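On Bryan’s question above: the NOKEY warning only means the new release’s signing key has not been imported yet, and dnf is asking permission to import it. The safe move is to compare the printed fingerprint against the one published on the Fedora project’s security/keys page before typing y. A sketch with a hypothetical `same_fpr` helper that compares fingerprints regardless of spacing and case:

```shell
# Compare two GPG key fingerprints, ignoring spacing and case, so the
# fingerprint dnf prints can be checked against the published one.
same_fpr() {
  a=$(printf '%s' "$1" | tr -d ' ' | tr 'a-f' 'A-F')
  b=$(printf '%s' "$2" | tr -d ' ' | tr 'a-f' 'A-F')
  [ "$a" = "$b" ]
}

# Fingerprint printed by dnf during the upgrade (quoted in the comment above)
# vs. the published Fedora 32 key fingerprint:
if same_fpr '97A1 AE57 C3A2 372C CA3A 4ABA 6C13 026D 12C9 44D0' \
            '97a1ae57c3a2372cca3a4aba6c13026d12c944d0'; then
  echo 'fingerprints match: safe to answer y'
else
  echo 'MISMATCH: do not import the key'
fi
```

If the fingerprints differ, answer N and investigate before retrying the download.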
## juvenihil
Smooth upgrade in ~40 minutes on an HP X2 detachable notebook. All good and nice. I gladly note that Fedora is the only distribution running smoothly and making use of almost all the hardware on this machine (except for the back-side OV camera, but that’s a mainline kernel issue with Intel ISP2 chips). Kudos to the Fedora team for their great work.

## juvenihil
It took 2.1GB on my system, but your mileage may vary.

## Jar
Did a fresh install. Where are GNOME Extensions? Where is dash to dock, or the side bar controls??

## k316
Errors when upgrading:
Error:
Problem 1: package php-pecl-mcrypt-1.0.2-3.fc31.x86_64 requires php(api) = 20180731-64, but none of the providers can be installed
- package php-pecl-mcrypt-1.0.2-3.fc31.x86_64 requires php(zend-abi) = 20180731-64, but none of the providers can be installed
- php-common-7.3.17-1.fc31.x86_64 does not belong to a distupgrade repository
- problem with installed package php-pecl-mcrypt-1.0.2-3.fc31.x86_64
Problem 2: package php-7.4.5-1.fc32.x86_64 requires php-common(x86-64) = 7.4.5-1.fc32, but none of the providers can be installed
- cannot install both php-common-7.4.5-1.fc32.x86_64 and php-common-7.3.17-1.fc31.x86_64
- problem with installed package php-7.3.17-1.fc31.x86_64
- package php-pecl-zip-1.17.2-1.fc31.x86_64 requires php(api) = 20180731-64, but none of the providers can be installed
- package php-pecl-zip-1.17.2-1.fc31.x86_64 requires php(zend-abi) = 20180731-64, but none of the providers can be installed
- php-7.3.17-1.fc31.x86_64 does not belong to a distupgrade repository
- problem with installed package php-pecl-zip-1.17.2-1.fc31.x86_64
Problem 3: problem with installed package composer-1.10.5-1.fc31.noarch
- package composer-1.10.5-1.fc32.noarch requires php-zip, but none of the providers can be installed
- package composer-1.10.1-1.fc32.noarch requires php-zip, but none of the providers can be installed
- package php-pecl-zip-1.17.2-1.fc31.x86_64 requires php(api) = 20180731-64, but none of
the providers can be installed
- package php-pecl-zip-1.17.2-1.fc31.x86_64 requires php(zend-abi) = 20180731-64, but none of the providers can be installed
- cannot install both php-common-7.4.5-1.fc32.x86_64 and php-common-7.3.17-1.fc31.x86_64
- package php-bcmath-7.4.5-1.fc32.x86_64 requires php-common(x86-64) = 7.4.5-1.fc32, but none of the providers can be installed
- problem with installed package php-bcmath-7.3.17-1.fc31.x86_64
- php-bcmath-7.3.17-1.fc31.x86_64 does not belong to a distupgrade repository
- composer-1.10.5-1.fc31.noarch does not belong to a distupgrade repository
- package php-pecl-zip-1.18.2-1.fc32.x86_64 is filtered out by exclude filtering
(try adding ‘--allowerasing’ to the command line to replace the conflicting packages, or ‘--skip-broken’ to skip the packages that cannot be installed)

## Cesar Cuadros
Hi k316, add “--allowerasing” to the upgrade command (as the errors tell you at the end); the system may then remove them. Otherwise, remove them manually before the upgrade and then run “dnf clean metadata”. To remove them manually: go problem by problem; for each one, run “dnf remove package_name” for each package that causes trouble. Those problems are packages with unresolved dependencies; if you remove them, you will stop having those problems. Write them down, and once you manage to upgrade to fc32, install them again. I hope this helps.

## Luizo
Is Fedora 32 prepared for COVID-19?

## Marco Gómez
I upgraded my system using the Software application and it worked like a charm. It took only 40~60 min to install all the new packages and it is working very well. GNOME now feels really stable, and I liked the new unlock screen design a lot.

## Gustavo Domínguez
How is the support for dual-graphics Apple portables in Fedora 32? Is it there yet? I’m dying to switch already, but this has been holding me back for way too long.
I thought 31 was going to be it because of some work they had done with Nvidia, announced right here on this site, but it wasn’t. :/

## Bob
The biggest impact I had in updating to Fedora 32 is that my Kodi addons stopped working, making Kodi useless. There is a compatibility problem between Python 2.x and 3.

## Stan Sykes
I didn’t have a problem with Kodi, but you might want to check if your Kodi works by creating (or using an existing) second profile. That will tell you if the problem is in your local configuration, or not. A couple of my programmes didn’t run correctly after the upgrade until I fixed these .local issues.

## Bob
Thanks for the reply. I did create a new user and deleted all . files related to Kodi. Also reinstalled Kodi. The difference between Fedora 31 and 32 is the version of Python used. This is my error:
Error Type: Error
Contents: cannot import name ‘urlencode’ from ‘urllib’ (/usr/lib64/python3.8/urllib/__init__.py)
The add-on, written in Python 2, does not work in Python 3.

## Stan Sykes
Check this link for Python on your system; it may lead you to the right conclusion, don’t give up! https://linuxconfig.org/check-python-version

## Aris Agung Wibono
Successfully upgraded from my Thinkpad T470!

## Jason Pineda
Sorry if this is super basic, but will upgrading, rather than using an image (.iso), erase my personal files?

## Gregory Bartholomew
Upgrading shouldn’t erase any personal data stored under /home. That said, if you have any important files that you want to be sure not to lose, it is always a good idea to have a backup copy of them on some other storage device.

## Bogdan
Hi, I have a problem with the upgrade from Fedora 31 to Fedora 32.
I upgraded from the console, and I get a conflict message:
Problem: conflicting requests
- nothing provides module(platform:f31) needed by module gimp:2.10:3120191106095052:f636be4b-0.x86_64
Error:
Problem 1: package bleachbit-2.2-1.fc30.noarch requires gnome-python2, but none of the providers can be installed
- gnome-python2-2.28.1-30.fc31.x86_64 does not belong to a distupgrade repository
- problem with installed package bleachbit-2.2-1.fc30.noarch
Problem 2: package python2-twisted-19.2.1-6.fc31.x86_64 requires python2.7dist(automat) >= 0.3, but none of the providers can be installed
- python2-Automat-0.7.0-4.fc31.noarch does not belong to a distupgrade repository
- problem with installed package python2-twisted-19.2.1-6.fc31.x86_64
Problem 3: package ktp-common-internals-19.12.2-1.fc32.x86_64 requires libtelepathy-qt5-service.so.0()(64bit), but none of the providers can be installed
- problem with installed package ktp-common-internals-19.12.1-1.fc31.x86_64
- telepathy-qt5-0.9.7-9.fc31.x86_64 does not belong to a distupgrade repository
- ktp-common-internals-19.12.1-1.fc31.x86_64 does not belong to a distupgrade repository
Any ideas on how to solve the problem? Greetings, Bogdan

## Paul W. Frields
Just a reminder, the Magazine is not a help forum. But you can get help from the community by following the advice on this page: https://fedoraproject.org/wiki/Communicating_and_getting_help

## Bogdan
Thanks for the warning, but I’ve seen many users report problems here, so I thought I could too. Anyway, thanks for the link to help.

## Pramod Joshi
I have read the comments of several Fedora users. A previous complete upgrade of Fedora took nearly 9 hours. Why not do the upgrade in steps? Starting from a basic upgrade (Fedora 32 with GNOME, Firefox, Python, a text editor, etc.), the remaining programs could then be upgraded in 5-6 further steps, completing the upgrade of Fedora within one week.

## David Frantz
A big thank you to the Fedora 32 team.
This upgrade has been fantastic for my all-AMD system (Zen & 5500XT).

## Christian Groove
Thank you for your great job. After a successful upgrade, I still have a number of fc31 packages on my system, and I don’t know why:
game-music-emu-0.6.2-3.fc31.x86_64
trousers-lib-0.3.13-13.fc31.x86_64
GConf2-3.2.6-27.fc31.x86_64
fcoe-utils-1.0.32-9.git9834b34.fc31.x86_64
telepathy-qt4-0.9.7-9.fc31.x86_64
adapta-gtk-theme-3.95.0.11-3.fc31.noarch
crow-translate-2.2.3-2.fc31.x86_64
argyllcms-1.9.2-8.fc31.x86_64
xorg-x11-drv-qxl-0.1.5-12.fc31.x86_64
tomahawk-libs-0.8.4-23.fc31.x86_64
GeoIP-GeoLite-data-2018.06-4.fc31.noarch
adapta-gtk-theme-gedit-3.95.0.11-3.fc31.noarch
NetworkManager-iodine-1.2.0-9.fc31.x86_64
GeoIP-GeoLite-data-extra-2018.06-4.fc31.noarch
pam_krb5-2.4.13-14.fc31.x86_64
NetworkManager-iodine-gnome-1.2.0-9.fc31.x86_64
tix-8.4.3-27.fc31.x86_64
gnome-icon-theme-symbolic-3.12.0-10.fc31.noarch
When I try to remove one of them, dnf tells me that some of the fc32 packages have dependencies on fc31. This is hard to understand!!

## Arvind
Such smoothness in the process, Fedora 31 -> Fedora 32. Just 3 simple steps. Cheers!!!

## Máté Wierdl
The 31-32 upgrade went smoothly on all five of my family’s Fedora machines. The only issue was that I had to remove the F31 package alsa-firmware, since the upgrade reported a conflict with F32’s alsa-sof-firmware.

## David Payne
No playonlinux support 🙁 stuck on 31

## OldMozzy
The upgrade from F31 to F32 went smoothly and took less than an hour overall. The only problems I have detected are that I can no longer print to A5-sized paper (A4 is no problem) and Quadrapassel no longer launches! Videos is broken too, but I normally use VLC, so no big deal that Videos won’t launch. Otherwise, business as usual! Thanks very much to the hard-working global Fedora community for bringing us the best fast and stable Linux distro.
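Leftover .fc31 packages like the list Christian posted above are normal after a system upgrade: packages that were not rebuilt for F32 simply keep the old dist tag, and dnf refuses to remove the ones that F32 packages still depend on. A quick sketch for spotting them (`old_release_pkgs` is a made-up helper; on a real system you would feed it `rpm -qa`):

```shell
# Filter a package list for names still carrying the previous release's
# dist tag. Such leftovers are usually harmless: they are packages that
# were not rebuilt for the new release, not signs of a failed upgrade.
old_release_pkgs() {
  grep "\.$1\."
}

# Demo on a small sample; on a live system use:  rpm -qa | old_release_pkgs fc31
printf '%s\n' \
  'game-music-emu-0.6.2-3.fc31.x86_64' \
  'bash-5.0.17-1.fc32.x86_64' \
  'GConf2-3.2.6-27.fc31.x86_64' | old_release_pkgs fc31
# prints the two .fc31 entries
```

If one of them really blocks a removal, `dnf repoquery --whatrequires <pkg>` shows which installed packages still depend on it.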
## Lazreg
Here is what I get when I follow the steps you gave:
[gerzal@dhcppc1 ~]$ sudo dnf upgrade --refresh
[sudo] password for gerzal:
Copr repo for PyCharm owned by phracek 7.1 kB/s | 3.6 kB 00:00
Fedora 31 openh264 (From Cisco) - x86_64 421 B/s | 543 B 00:01
Fedora Modular 31 - x86_64 95 kB/s | 49 kB 00:00
Fedora Modular 31 - x86_64 - Updates 40 kB/s | 46 kB 00:01
Fedora 31 - x86_64 - Updates 45 kB/s | 43 kB 00:00
Fedora 31 - x86_64 57 kB/s | 50 kB 00:00
google-chrome 7.5 kB/s | 1.3 kB 00:00
Remi’s Modular repository - Fedora 31 - x86_64 5.2 kB/s | 3.5 kB 00:00
Remi’s RPM repository - PHP 7.4 - Fedora 31 - x86_64 4.9 kB/s | 3.0 kB 00:00
Remi’s RPM repository - Fedora 31 - x86_64 5.1 kB/s | 3.0 kB 00:00
RPM Fusion for Fedora 31 - Free - Updates 33 kB/s | 13 kB 00:00
RPM Fusion for Fedora 31 - Free 30 kB/s | 15 kB 00:00
RPM Fusion for Fedora 31 - Nonfree - Steam 33 kB/s | 14 kB 00:00
RPM Fusion for Fedora 31 - Nonfree - Updates 16 kB/s | 14 kB 00:00
RPM Fusion for Fedora 31 - Nonfree 32 kB/s | 15 kB 00:00
virtio-win builds roughly matching what was shipped in latest RHEL 2.8 kB/s | 3.0 kB 00:01
Dependencies resolved.
Problem: gd-2.2.5-12.fc31.i686 has inferior architecture
- cannot install both gd-2.3.0-1.fc31.remi.x86_64 and gd-2.2.5-12.fc31.x86_64
- cannot install the best update candidate for package gd-2.2.5-12.fc31.i686
- cannot install the best update candidate for package gd-2.2.5-12.fc31.x86_64
Package Architecture Version Repository Size
Skipping packages with conflicts: (add ‘--best --allowerasing’ to command line to force their upgrade): gd x86_64 2.3.0-1.fc31.remi remi 138 k
Transaction Summary
Skip 1 Package
Nothing to do.
Complete!
[gerzal@dhcppc1 ~]$ sudo dnf install dnf-plugin-system-upgrade
Last metadata expiration check: 0:00:19 ago on Sun 17 May 2020 10:27:41 PM CET.
Package python3-dnf-plugin-system-upgrade-4.0.10-1.fc31.noarch is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[gerzal@dhcppc1 ~]$ sudo dnf system-upgrade download –releasever=32 Before you continue ensure that your system is fully upgraded by running “dnf –refresh upgrade”. Do you want to continue [y/N]: y Copr repo for PyCharm owned by phracek 7.0 kB/s | 3.6 kB 00:00 Copr repo for PyCharm owned by phracek 28 kB/s | 39 kB 00:01 Fedora 32 openh264 (From Cisco) – x86_64 463 B/s | 543 B 00:01 Fedora Modular 32 – x86_64 53 kB/s | 48 kB 00:00 Fedora Modular 32 – x86_64 50 kB/s | 188 kB 00:03 Fedora Modular 32 – x86_64 – Updates 52 kB/s | 46 kB 00:00 Fedora Modular 32 – x86_64 – Updates 76 kB/s | 422 kB 00:05 Fedora 32 – x86_64 – Updates 37 kB/s | 43 kB 00:01 Fedora 32 – x86_64 – Updates 383 kB/s | 6.1 MB 00:16 Fedora 32 – x86_64 21 kB/s | 48 kB 00:02 Fedora 32 – x86_64 202 kB/s | 1.7 MB 00:08 google-chrome 10 kB/s | 4.2 kB 00:00 determining the fastest mirror (1 hosts).. done. [ === ] — B/s | 0 B –:– ETA Remi’s Modular repository – Fedora 32 – x86_64 2.3 kB/s | 3.5 kB 00:01 done.s Modular repository – Fedora 32 – x86_64 [ === ] — B/s | 0 B –:– ETA Remi’s Modular repository – Fedora 32 – x86_64 121 kB/s | 171 kB 00:01 Remi’s RPM repository – PHP 7.4 – Fedora 32 – x86_64 79 B/s | 131 B 00:01 Errors during downloading metadata for repository ‘remi-php74’: – Status code: 403 for http://cdn.remirepo.net/fedora/32/php74/x86_64/mirror (IP: 195.154.241.117) – Status code: 403 for http://cdn.remirepo.net/fedora/32/php74/x86_64/mirror (IP: 176.31.103.194) Error: Failed to download metadata for repo ‘remi-php74’: Cannot prepare internal mirrorlist: Status code: 403 for http://cdn.remirepo.net/fedora/32/php74/x86_64/mirror (IP: 176.31.103.194) Remi’s RPM repository – Fedora 32 – x86_64 5.4 kB/s | 3.0 kB 00:00 Remi’s RPM repository – Fedora 32 – x86_64 412 kB/s | 2.0 MB 00:04 RPM Fusion for Fedora 32 – Free – Updates 32 kB/s | 13 kB 00:00 RPM Fusion for Fedora 32 – Free – Updates 108 kB/s | 206 kB 00:01 RPM Fusion for Fedora 32 – Free 30 kB/s | 15 kB 00:00 RPM Fusion for Fedora 32 – Nonfree 
– Steam 31 kB/s | 14 kB 00:00 RPM Fusion for Fedora 32 – Nonfree – Updates 32 kB/s | 13 kB 00:00 RPM Fusion for Fedora 32 – Nonfree – Updates 6.9 kB/s | 9.9 kB 00:01 RPM Fusion for Fedora 32 – Nonfree 30 kB/s | 15 kB 00:00 virtio-win builds roughly matching what was shipped in latest RHEL 2.8 kB/s | 3.0 kB 00:01 Ignoring repositories: remi-php74 Error: Problem: package python2-twisted-19.2.1-6.fc31.x86_64 requires python2.7dist(automat) >= 0.3, but none of the providers can be installed – python2-Automat-0.7.0-4.fc31.noarch does not belong to a distupgrade repository – problem with installed package python2-twisted-19.2.1-6.fc31.x86_64 (try to add ‘–skip-broken’ to skip uninstallable packages) [gerzal@dhcppc1 ~]$ ====================================== or you can take a look at this pic : https://pasteboard.co/J8QCUvL.png ## shy The upgrade moves /etc/pki/CA into an RPM named openssl-perl. This includes removing files under /etc/pki/CA. If you have a certificate authority make sure the is a back up. CentOS 7 has: $ rpm -qf /etc/pki/CA openssl-1.0.2k-19.el7.x86_64 Fedora 32 has: $ rpm -qf /etc/pki/CA openssl-perl-1.1.1g-1.fc32.x86_64 ## qoheniac Since this update using ctrl-c to copy text for example in Firefox sometimes kills the whole session instead of copying text so I have to log in again and everything I just did is gone. This is more then just annoying. I think this only happens in Wayland sessions, but I’m not completely sure as it does not happen all the time. Most of the time it just copies text to clipboard but sometimes it kills the session. ## Stephen Snow It is a good idea to try to document theses types of errors and perhaps file a bug with Fedorporject bug reporting which in turn helps to improved the distribution. ## RobbieTheK This is informational in case anyone else runs into this issue. Fedora 31 would keep rebooting back to 31 after running the full upgrade. 
In the end of the dnf.log was this: 2020-05-22T01:19:14Z INFO Last metadata expiration check: 0:10:33 ago on Thu 21 May 2020 09:08:41 PM EDT. 2020-05-22T01:19:15Z DDEBUG timer: sack setup: 8571 ms 2020-05-22T01:19:15Z DEBUG Completion plugin: Generating completion cache… 2020-05-22T01:19:18Z DEBUG Excludes in repo dell-system-update_independent: dell-system-update*.i386 2020-05-22T01:19:22Z INFO Package ncbi-blast-6.1-26.1.x86_64 is already installed. 2020-05-22T01:19:22Z INFO Package ncbi-data-6.1-26.1.noarch is already installed. 2020-05-22T01:19:22Z DEBUG –> Starting dependency resolution 2020-05-22T01:19:22Z DEBUG –> Finding unneeded leftover dependencies 2020-05-22T01:19:24Z DEBUG –> Finished dependency resolution 2020-05-22T01:19:24Z DDEBUG timer: depsolve: 1727 ms 2020-05-22T01:19:24Z SUBDEBUG Traceback (most recent call last): File “/usr/lib/python3.7/site-packages/dnf/cli/main.py”, line 130, in cli_run ret = resolving(cli, base) File “/usr/lib/python3.7/site-packages/dnf/cli/main.py”, line 166, in resolving base.resolve(cli.demands.allow_erasing) File “/usr/lib/python3.7/site-packages/dnf/base.py”, line 777, in resolve raise exc dnf.exceptions.DepsolveError: Problem: cannot install the best candidate for the job – conflicting requests 2020-05-22T01:19:24Z CRITICAL Error: Problem: cannot install the best candidate for the job – conflicting requests 2020-05-22T01:19:24Z INFO (try to add ‘–skip-broken’ to skip uninstallable packages) 2020-05-22T01:19:24Z DDEBUG Cleaning up. (END) I ran dnf distro-sync, then re-ran the system upgrade and booted to Fedora 32 successfully. Also dnf check had an error about Eclipse similar to https://bugzilla.redhat.com/show_bug.cgi?id=1759176 but running dnf module enable eclipse:latest fixed that issue. 
Also after reboot I ran into the slow user login as we use NIS/ypbind and this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1829572, see https://bugzilla.redhat.com/show_bug.cgi?id=1829572#c5 mkdir /usr/lib/systemd/system/systemd-userdbd.service.d/ cp /usr/lib/systemd/system/systemd-logind.service.d/nss_nis.conf /usr/lib/systemd/system/systemd-userdbd.service.d/nss_nis.conf systemctl daemon-reload && systemctl restart systemd-userdbd ## Omid took about two hours to install f32. dash-to-dock didn’t work after update. restarting the laptop solved it. by far no other problems. In case, I will write here. ## Said Hello everyone, I am relatively new to the world of Linux and I have a doubt that if I update, I will lose my files
12,198
Using Files and Folders on the Ubuntu Desktop
https://itsfoss.com/add-files-on-desktop-ubuntu/
2020-05-08T22:46:00
[ "Ubuntu", "Desktop" ]
https://linux.cn/article-12198-1.html
![](/data/attachment/album/202005/08/224609chff5qn5ccah56af.jpg)

> 
> This beginner tutorial discusses some of the difficulties you may face when adding files and folders to the Ubuntu desktop.
> 
> 

I know some people who are used to keeping all their important/frequently used files on the desktop for quick access.

![](/data/attachment/album/202005/08/224831vhshoxzgypgmmwpg.jpg)

I am not a fan of a cluttered desktop, but I can imagine that it may actually be helpful for some people.

For the past few releases, it has been difficult to add files to Ubuntu's default GNOME desktop. It is not really Ubuntu's fault.

The [GNOME](https://www.gnome.org/) developers believe that the desktop is no place for icons and files. There is no need to put files on the desktop when you can easily search for them in the menu. That is true in some cases.

This is why newer versions of [Nautilus, GNOME's file manager](https://wiki.gnome.org/Apps/Files), do not support icons and files on the desktop very well.

That said, adding files and folders to the desktop is not impossible. Let me show you how.

### Adding files and folders to the desktop in Ubuntu

![](/data/attachment/album/202005/08/224640b93fvzzeebdkfvrv.png)

I am using Ubuntu 20.04 in this tutorial. The steps may vary for other Ubuntu versions.

#### Add files and folders to the "Desktop folder"

If you open the file manager, you should see an entry called "Desktop" in the left sidebar or in the list of folders. This folder represents (in a way) your desktop.

![Desktop folder can be used to add files to the desktop screen](/data/attachment/album/202005/08/224650siw0t0fsp0izaa09.png)

Anything you add to this folder is reflected on the desktop.

![Anything added to the Desktop folder will be reflected on the desktop screen](/data/attachment/album/202005/08/224653qkz1j5mm50fsa00s.jpg)

If you delete a file from the "Desktop folder", it is removed from the desktop as well.

#### Drag and drop to the desktop does not work

Now, if you try to drag and drop files from the file manager onto the desktop, it will not work. This is not a bug; it is a feature that annoys a lot of people.

A workaround is to open two file manager windows. Open the "Desktop" folder in one of them, then drag and drop files into that folder, and they will be added to the desktop.

I know this is not ideal, but you do not have many options here.

#### You cannot copy and paste on the desktop with Ctrl+C and Ctrl+V; use the right-click menu

Even more annoying, you cannot paste files onto the desktop with `Ctrl+V`, the famous keyboard shortcut.

However, you can still right-click and select "Paste" to copy files onto the desktop. You can even create new folders this way.

![Right click menu can be used for copy-pasting files to desktop](/data/attachment/album/202005/08/224653wsj8q8f8b8fbp8s1.jpg)

Does that make sense? Not to me, but that is how it is in Ubuntu 20.04.

#### You cannot delete files and folders with the Delete key; again, use the right-click menu

Worse still, you cannot delete files from the desktop with the `Delete` key or `Shift+Delete`. But you can still right-click a file or folder and select "Move to Trash" to delete it.

![Delete files from desktop using right click](/data/attachment/album/202005/08/224709cg2hh5hiz3gjggrg.jpg)

Alright, so now you know there is at least one way to add files to the desktop, albeit with some limitations. Unfortunately, that is not the end of it.

You cannot search files by name on the desktop. Normally, if you start typing "abc", files whose names start with "abc" are highlighted. That does not work here.

I do not know why adding files to the desktop is restricted in so many ways. Thankfully, I do not use it much; otherwise I would be quite frustrated.

If you are interested, you may also read about [adding application shortcuts on the Ubuntu desktop](https://itsfoss.com/ubuntu-desktop-shortcut/).
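Since the Desktop folder is just a regular directory, the terminal offers another workaround for the drag-and-drop limitation: copy files into `~/Desktop` directly. A minimal sketch, assuming an English locale where the folder is named `Desktop` (on localized systems its path can be queried with `xdg-user-dir DESKTOP`):

```shell
# The Desktop folder is an ordinary directory; anything placed here
# shows up on the desktop screen.
desktop="$HOME/Desktop"
mkdir -p "$desktop"                          # ensure the folder exists
echo "shopping list" > "$desktop/notes.txt"  # a file appears on the desktop
mkdir -p "$desktop/Projects"                 # folders work the same way
ls "$desktop"
```

Deleting `notes.txt` from this folder removes it from the desktop as well, mirroring the file manager behavior described above.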
---

via: <https://itsfoss.com/add-files-on-desktop-ubuntu/>

Author: [Abhishek Prakash](https://itsfoss.com/author/abhishek/) Topic selection: [lujun9972](https://github.com/lujun9972) Translator: [geekpi](https://github.com/geekpi) Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn)
200
OK
I know a few people who are habitual of putting all the important/frequently used files on the desktop screen for quick access. Yes, it is a convenient way of file access for some 😎 ![An Ubuntu Desktop with many icons placed](https://itsfoss.com/content/images/2024/01/desktop-with-icons.webp) I am not a fan of a cluttered desktop screen, but it saves time, I'm sure. With initial GNOME versions, it was difficult to add files on the desktop screen. However, that has changed now with support for files on the desktop. So, it’s not impossible to add files and folders on the desktop. Let me show you how you can do it. **Suggested Read 📖** [7 Tips to Get More Out of GNOME Search in LinuxYou are missing out on lots of built-in search features in the GNOME desktop environment. Learn something new.](https://itsfoss.com/gnome-search/)![](https://itsfoss.com/content/images/2023/08/gnome-search-tips.png) ## Desktop Icons NG (DING) GNOME Extension As said above, GNOME does not provide a way to add files/folders to the desktop by default. We rely on this extension, called Desktop Icons NG for the purpose. If you are using any other distro that comes with GNOME, install this from the GNOME Shell Extensions page, or you can [use the GNOME Shell Extension Manager](https://itsfoss.com/extension-manager/) application. ![Install Desktop Icons NG extension using GNOME Shell Extension Manager Application](https://itsfoss.com/content/images/2024/01/install-desktop-icons-extension-from-extension-manager-app.png) Once you are done installing the extension, you are good to go now! ## Adding files and folders on the desktop screen in Ubuntu I am using **Ubuntu 22.04 LTS** in this tutorial. The steps may or may not vary for other Ubuntu versions. 
### Add the files and folders to the “Desktop folder” If you open the file manager, you should see an entry called Desktop in the left sidebar or in the folders list. This folder represents the storage location of your desktop screen. ![Desktop Folder in Nautilus File manager sidebar and HOME directory.](https://itsfoss.com/content/images/2024/01/desktop-folder-in-nautilus.png) Anything you add to this folder will be reflected on the desktop screen. ![Folders moved to the Desktop will be available on the Desktop Screen and Desktop Folder, and both are the same.](https://itsfoss.com/content/images/2024/01/desktop-folder-and-dekstop.png) If you delete files from this ‘Desktop folder’, it will be removed from the desktop screen as well. ### Drag and drop files to desktop screen Now, if you drag and drop files from the file manager to the desktop, it will **“move”** that file or folder to the `~/Desktop` folder. It will be visible on the desktop as well. ![Right-click on a folder/file on the desktop to access its context menu actions.](https://itsfoss.com/content/images/2024/01/desktop-folder-actions.png) Here, you can do all sort of things that you actually do in any other locations. ## Desktop Icons Appearance Settings in Ubuntu On Ubuntu, you can tweak the appearance of desktop icons. For this, first open Settings and then go to Appearance. ![On the Ubuntu Settings Application, the desktop icon settings can be found under the Appearance tab.](https://itsfoss.com/content/images/2024/01/appearance-setttings-of-desktop-icons.png) You can also go to these settings directly from the desktop. Right-click on the desktop and select “Desktop Icon Settings”. 
![Right-click on an empty space on the desktop and select “Desktop Icons Settings” to reach the Desktop Icons Settings page of the Ubuntu Settings Application.](https://itsfoss.com/content/images/2024/01/right-cick-and-select-desktop-icon-settings.png) Here, you can change the size of desktop icons, to four different levels, as shown below. ![Change the Desktop Icons appearance size](https://itsfoss.com/content/images/2024/01/desktop-icon-size-in-ubuntu-settings.png) By default, in newer Ubuntu releases, the Desktop icons appear on the bottom-right of the screen. You can change this to another area if it suits you. ![Change the position of Desktop icons on the desktop screen.](https://itsfoss.com/content/images/2024/01/position-of-new-icons.png) Or, show the Personal Folder, that is `/home/$USER/` folder. ## Tweak the Desktop Icons Using the Extension The extension provides customizable options to fiddle with. You need to access the extension settings, preferably using the extension manager. Here's how it looks like: Head to the GNOME Shell Extension Manager application and then select the Gear icon corresponding to the Desktop Icons extension. ![Click on the gear icon corresponding to the Desktop Icon extension to open its settings.](https://itsfoss.com/content/images/2024/01/open-desktop-icon-settings.png) [GNOME extension called “Extension List”](https://extensions.gnome.org/extension/3088/extension-list/?ref=itsfoss.com)for quick access to extensions. This will help you access all the extensions you have installed on your system from the top panel. You can access the settings for each extension from here easily. This will open the settings menu. Most of the tweaks are straightforward so that you can understand easily. A couple of interesting tweaks will be listed here. You can show the Trash icon, External Drives, Network Drives, etc., on your desktop, by enabling the respective toggle switches. 
![Enable the visibility of additional locations on the desktop](https://itsfoss.com/content/images/2024/01/show-other-drives-on-desktop.png) If you are using a very bright wallpaper, it will be good to show the labels in a dark color. Just enable the toggle button as shown in the screenshot below. ![Make the label text of desktop icons dark color for better visibility in light backgrounds.](https://itsfoss.com/content/images/2024/01/show-a-dark-text-for-labels.png) If you have [Nemo file manager installed](https://itsfoss.com/install-nemo-file-manager-ubuntu/), you can open the folder on the desktop in Nemo by enabling it in the settings. ![Use Nemo File manager to open folders from the desktop instead of Nautilus File Manager.](https://itsfoss.com/content/images/2024/01/use-nemo-to-open-folders.png) **Suggested Read 📖** [Top 20 GNOME Extensions to Enhance Your ExperienceYou can enhance the capacity of your GNOME desktop with extensions. Here, we list the best GNOME shell extensions to save you the trouble of finding them on your own.](https://itsfoss.com/best-gnome-extensions/)![](https://itsfoss.com/content/images/wordpress/2022/03/best-gnome-extensions-ft.png) ![](https://itsfoss.com/content/images/wordpress/2022/03/best-gnome-extensions-ft.png) ## Wrapping Up For the most part, you do not need access to the extension settings. If you are new to the Linux world, do not worry about the customization options through the extension settings. The toggle buttons work as expected. If interested, you may read about [adding application shortcuts on Ubuntu desktop](https://itsfoss.com/ubuntu-desktop-shortcut/) as well. 
[How to Add Desktop Shortcut on Ubuntu LinuxIn this quick tutorial, you’ll learn how to add application shortcuts on the Ubuntu desktop and other distributions that use the GNOME desktop.](https://itsfoss.com/ubuntu-desktop-shortcut/)![](https://itsfoss.com/content/images/2023/11/add-application-shortcut-on-ubuntu.png) Moreover, you can improve your workflow with the files/folders if you know how to use the Nautilus File Manager features efficiently. *💬 What are your thoughts on having files/folders on the desktop?*
12,199
Create an SDN on Linux with Open Source Software
https://opensource.com/article/20/4/quagga-linux
2020-05-09T09:37:43
[ "Quagga", "Router", "SDN", "OSPF" ]
/article-12199-1.html
> 
> Use Quagga, an open source routing protocol stack, to make your Linux system a router.
> 
> 

![](/data/attachment/album/202005/09/093541rqx3zr5dxn3yvnq6.jpg)

Network routing protocols fall into two major categories: interior gateway protocols and exterior gateway protocols. Routers use interior gateway protocols to share information within a single autonomous system. If you are running Linux, you can make it behave like a router with [Quagga](https://www.quagga.net/), an open source (GPLv2) routing protocol stack.

### What is Quagga?

Quagga is a [routing software suite](https://en.wikipedia.org/wiki/Quagga_(software)) and a fork of [GNU Zebra](https://www.gnu.org/software/zebra/). It provides implementations of all major routing protocols for Unix-like platforms, such as Open Shortest Path First (OSPF), Routing Information Protocol (RIP), Border Gateway Protocol (BGP), and Intermediate System to Intermediate System (IS-IS).

Although Quagga implements routing protocols for both IPv4 and IPv6, it is not a complete router. A true router not only implements all the routing protocols but also has the ability to forward network traffic. Quagga implements only the routing protocol stack, while forwarding network traffic is handled by the Linux kernel.

### Architecture

Quagga implements the different routing protocols through protocol-specific daemons. Each daemon is named after its routing protocol with the letter "d" as a suffix. Zebra is the core, protocol-independent daemon; it provides an [abstraction layer](https://en.wikipedia.org/wiki/Abstraction_layer) over the kernel and offers the Zserv API to Quagga clients over a TCP socket. Each protocol-specific daemon is responsible for running the relevant protocol and building the routing table based on the information exchanged.

![Quagga architecture](/data/attachment/album/202005/09/093747cpcrqrrzkvp5cke5.png "Quagga architecture")

### Environment

This tutorial configures dynamic routing through Quagga's implementation of the OSPF protocol. The environment consists of two CentOS 7.7 hosts named Alpha and Beta. Both hosts share access to the **192.168.122.0/24** network.

**Host Alpha:**

IP: 192.168.122.100/24 Gateway: 192.168.122.1

**Host Beta:**

IP: 192.168.122.50/24 Gateway: 192.168.122.1

### Install the packages

First, install the Quagga package on both hosts. It is available in the CentOS base repository:

```
yum install quagga -y
```

### Enable IP forwarding

Next, enable IP forwarding on both hosts, since it is the Linux kernel that will forward the packets (note that `sysctl -w` takes no spaces around the `=`):

```
sysctl -w net.ipv4.ip_forward=1
sysctl -p
```

### Configuration

Now, go to the `/etc/quagga` directory and create the configuration files for your setup. You need three files:

* `zebra.conf`: the configuration file for the Quagga daemon, where you define the interfaces with their IP addresses and enable IP forwarding
* `ospfd.conf`: the protocol configuration file, where you define the networks that will be advertised through the OSPF protocol
* `daemons`: where you specify the relevant protocol daemons that need to run

On host Alpha,

```
[root@alpha]# cat /etc/quagga/zebra.conf
interface eth0
ip address 192.168.122.100/24
ipv6 nd suppress-ra
interface eth1
ip address 10.12.13.1/24
ipv6 nd suppress-ra
interface lo
ip forwarding
line vty

[root@alpha]# cat /etc/quagga/ospfd.conf
interface eth0
interface eth1
interface lo
router ospf
network 192.168.122.0/24 area 0.0.0.0
network 10.12.13.0/24 area 0.0.0.0
line vty

[root@alpha ~]# cat /etc/quagga/daemons
zebra=yes
ospfd=yes
```

On host Beta,

```
[root@beta quagga]# cat zebra.conf
interface eth0
ip address 192.168.122.50/24
ipv6 nd suppress-ra
interface eth1
ip address 10.10.10.1/24
ipv6 nd suppress-ra
interface lo
ip forwarding
line vty

[root@beta quagga]# cat ospfd.conf
interface eth0
interface eth1
interface lo
router ospf
network 192.168.122.0/24 area 0.0.0.0
network 10.10.10.0/24 area 0.0.0.0
line vty

[root@beta ~]# cat /etc/quagga/daemons
zebra=yes
ospfd=yes
```

### Configure the firewall

To use the OSPF protocol, you must allow it through the firewall:

```
firewall-cmd --add-protocol=ospf --permanent
firewall-cmd --reload
```

Now, start the `zebra` and `ospfd` daemons:

```
# systemctl start zebra
# systemctl start ospfd
```

Check the routing table on both hosts with the following command:

```
[root@alpha ~]# ip route show
default via 192.168.122.1 dev eth0 proto static metric 100
10.10.10.0/24 via 192.168.122.50 dev eth0 proto zebra metric 20
10.12.13.0/24 dev eth1 proto kernel scope link src 10.12.13.1
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.100 metric 100
```

You can see that the routing table on Alpha contains an entry for **10.10.10.0/24** via **192.168.122.50**, obtained through protocol zebra. Similarly, on host Beta, the table contains an entry for network **10.12.13.0/24** via **192.168.122.100**.

```
[root@beta ~]# ip route show
default via 192.168.122.1 dev eth0 proto static metric 100
10.10.10.0/24 dev eth1 proto kernel scope link src 10.10.10.1
10.12.13.0/24 via 192.168.122.100 dev eth0 proto zebra metric 20
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.50 metric 100
```

### Conclusion

As you can see, the environment and configuration are relatively simple. To add complexity, you can add more network interfaces to the router to provide routing for more networks. You can also implement the BGP and RIP protocols using the same method.

---

via: <https://opensource.com/article/20/4/quagga-linux>

Author: [M Umer](https://opensource.com/users/noisybotnet) Topic selection: [lujun9972](https://github.com/lujun9972) Translator: [messon007](https://github.com/messon007) Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn)
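As the conclusion of the Quagga article notes, routing additional networks is mostly a matter of more configuration. As an illustrative sketch (the interface name `eth2` and the `10.20.30.0/24` network are assumptions for demonstration, not part of the setup above), attaching a third network to host Alpha would only require extending the two configuration files and restarting the daemons:

```
# /etc/quagga/zebra.conf -- define the new interface (illustrative)
interface eth2
ip address 10.20.30.1/24
ipv6 nd suppress-ra

# /etc/quagga/ospfd.conf -- advertise the new network in the backbone area
interface eth2
router ospf
network 10.20.30.0/24 area 0.0.0.0
```

After `systemctl restart zebra ospfd`, OSPF would advertise the new prefix to Beta, where it should appear in the routing table as a `proto zebra` route.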
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
12,201
What's New in Fedora 32 Workstation
https://fedoramagazine.org/whats-new-fedora-32-workstation/
2020-05-09T20:38:00
[ "Fedora" ]
https://linux.cn/article-12201-1.html
![](/data/attachment/album/202005/09/203804j5d7qz67dd7qttra.jpg)

Fedora 32 Workstation is the [latest release](/article-12164-1.html) of our free, leading-edge operating system. You can download it right now from [the official website](https://getfedora.org/workstation). There are several new and noteworthy changes in Fedora 32 Workstation. Read the details below.

### GNOME 3.36

Fedora 32 Workstation includes the latest release of the GNOME desktop environment for users of all types. GNOME 3.36 in Fedora 32 Workstation includes many updates and improvements, including:

#### Redesigned lock screen

The lock screen in Fedora 32 is a totally new experience. The new design removes the "window shade" metaphor used in previous releases and focuses on ease of use and speed.

![Unlock screen in Fedora 32](/data/attachment/album/202005/09/203805oqepuwz5zfumhmmp.gif)

#### New Extensions application

Fedora 32 features the new Extensions application, which makes it easy to manage your GNOME extensions. In the past, extensions were installed, configured, and enabled using the Software application and/or the Tweak Tool.

![The new Extensions application in Fedora 32](/data/attachment/album/202005/09/203806kxxkq24xxb2b2nhq.png)

Note that the Extensions application is not installed by default on Fedora 32. Use the Software application to search for and install it, or use the following command in the terminal:

```
sudo dnf install gnome-extensions-app
```

#### Reorganized Settings application

Eagle-eyed Fedora users will notice that the Settings application has been reorganized. The structure of the settings categories is a lot flatter, so more settings are visible at a glance.

Additionally, the About section now has more information about your system, including which windowing system you are running (e.g. Wayland).

![The reorganized settings application in Fedora 32](/data/attachment/album/202005/09/203807bpblihl545c55hch.png)

#### Redesigned Notifications/Calendar popover

The Notifications/Calendar popover, toggled by clicking the date and time at the top of your desktop, has received numerous small style tweaks. Additionally, the popover now has a "Do Not Disturb" switch to quickly disable all notifications. This is useful when presenting your screen without your personal notifications appearing.

![The new Notification / Calendar popover in Fedora 32 ](/data/attachment/album/202005/09/203808wt89ebrrotr9ubnx.png)

#### Redesigned Clocks application

The Clocks application is totally redesigned in Fedora 32. It features a design that works better in smaller windows.

![The Clocks application in Fedora 32](/data/attachment/album/202005/09/203809be3bz2yho1utyoe3.png)

GNOME 3.36 also provides many other features and enhancements. For more information, check out the [GNOME 3.36 release notes](https://help.gnome.org/misc/release-notes/3.36/).

### Improved out-of-memory handling

Previously, if a system ran low on memory, it could encounter heavy swap usage (also known as [swap thrashing](https://en.wikipedia.org/wiki/Thrashing_(computer_science))), sometimes causing the Workstation UI to slow down or become unresponsive for periods of time. Fedora 32 Workstation now ships with EarlyOOM enabled by default. EarlyOOM lets users recover more quickly and regain control over their system in low-memory situations with heavy swap usage.

---

via: <https://fedoramagazine.org/whats-new-fedora-32-workstation/>

Author: [Ryan Lerch](https://fedoramagazine.org/author/ryanlerch/) Topic selection: [lujun9972](https://github.com/lujun9972) Translator: [geekpi](https://github.com/geekpi) Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux.cn)
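The EarlyOOM daemon mentioned in the article runs as a regular systemd service, so its status can be checked with `systemctl status earlyoom` and its thresholds tuned through `/etc/default/earlyoom`. The values below are an illustrative sketch only, assumptions for demonstration rather than Fedora's shipped defaults:

```
# /etc/default/earlyoom -- illustrative values, not Fedora's exact defaults
# -r 0 : disable the periodic memory report in the journal
# -m 4 : intervene when available memory falls below 4%
# -s 4 : ...and free swap falls below 4%
EARLYOOM_ARGS="-r 0 -m 4 -s 4 --avoid '^(sshd|systemd|gnome-shell)$'"
```

After editing the file, the change takes effect with `sudo systemctl restart earlyoom`.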
200
OK
Fedora 32 Workstation is the [latest release](https://fedoramagazine.org/announcing-fedora-32/) of our free, leading-edge operating system. You can download it from [the official website here](https://getfedora.org/workstation) right now. There are several new and noteworthy changes in Fedora 32 Workstation. Read more details below. ## GNOME 3.36 Fedora 32 Workstation includes the latest release of GNOME Desktop Environment for users of all types. GNOME 3.36 in Fedora 32 Workstation includes many updates and improvements, including: ### Redesigned Lock Screen The lock screen in Fedora 32 is a totally new experience. The new design removes the “window shade” metaphor used in previous releases, and focuses on ease and speed of use. ![](https://fedoramagazine.org/wp-content/uploads/2020/04/unlock.gif) ### New Extensions Application Fedora 32 features the new Extensions application, to easily manage your GNOME Extensions. In the past, extensions were installed, configured, and enabled using either the Software application and / or the Tweak Tool. ![](https://fedoramagazine.org/wp-content/uploads/2020/04/extensions.png) Note that the Extensions application is not installed by default on Fedora 32. To either use the Software application to search and install, or use the following command in the terminal: sudo dnf install gnome-extensions-app ### Reorganized Settings Eagle-eyed Fedora users will notice that the Settings application has been re-organized. The structure of the settings categories is a lot flatter, resulting in more settings being visible at a glance. Additionally, the **About** category now has a more information about your system, including which windowing system you are running (e.g. Wayland) ![](https://fedoramagazine.org/wp-content/uploads/2020/04/settings.png) ### Redesigned Notifications / Calendar popover The Notifications / Calendar popover — toggled by clicking on the Date and Time at the top of your desktop — has had numerous small style tweaks. 
Additionally, the popover now has a **Do Not Disturb** switch to quickly disable all notifications. This quick access is useful when presenting your screen, and not wanting your personal notifications appearing. ![](https://fedoramagazine.org/wp-content/uploads/2020/04/donotdisturb.png) ### Redesigned Clocks Application The Clocks application is totally redesigned in Fedora 32. It features a design that works better on smaller windows. ![](https://fedoramagazine.org/wp-content/uploads/2020/04/clocks.png) GNOME 3.36 also provides many additional features and enhancements. Check out the [GNOME 3.36 Release Notes](https://help.gnome.org/misc/release-notes/3.36/) for further information ## Improved Out of Memory handling Previously, if a system encountered a low-memory situation, it may have encountered heavy swap usage (aka [swap thrashing](https://en.wikipedia.org/wiki/Thrashing_(computer_science)))– sometimes resulting in the Workstation UI slowing down, or becoming unresponsive for periods of time. Fedora 32 Workstation now ships and enables EarlyOOM by default. EarlyOOM enables users to more quickly recover and regain control over their system in low-memory situations with heavy swap usage. ## zermok important question, PYTHON2.7 still present? if not, any tutorial to install it? I’m using a lot of python apps that will never be updated, thanks ## Ben Cotton Yes, Python 2.7 will remain available in the python27 package. See here for more information. ## FeRD (Frank Dana) @zermok: Basically, to install it. You’ll either have to edit your python2 scripts to use a shebang, set up virtualenvs, or… I guess perform some symlink surgery in . ## Luca For curiosity, which are these “lot of python apps”? ## FeRD (Frank Dana) I can name one, for myself: mcomix. It’s written in Python2 and GTK2. AFAICT it would take a total rewrite to bring it into the 21st century. Bummer, because I still haven’t found an alternative I’m happy with. 
## Lucas https://github.com/multiSnow/mcomix3 ## FeRD (Frank Dana) OOH! M’lud! fancyhat-tip## Robert Smol Hi, for example azure cli, which is currently running like this: /usr/lib64/az/lib/python3.6/site-packages/jmespath/visitor.py:32: SyntaxWarning: "is" with a literal. Did you mean "=="? if x is 0 or x is 1: /usr/lib64/az/lib/python3.6/site-packages/jmespath/visitor.py:32: SyntaxWarning: "is" with a literal. Did you mean "=="? if x is 0 or x is 1: /usr/lib64/az/lib/python3.6/site-packages/jmespath/visitor.py:34: SyntaxWarning: "is" with a literal. Did you mean "=="? elif y is 0 or y is 1: /usr/lib64/az/lib/python3.6/site-packages/jmespath/visitor.py:34: SyntaxWarning: "is" with a literal. Did you mean "=="? elif y is 0 or y is 1: /usr/lib64/az/lib/python3.6/site-packages/jmespath/visitor.py:260: SyntaxWarning: "is" with a literal. Did you mean "=="? if original_result is 0: Welcome to Azure CLI! ## Álvaro Python 2.7 is not recommended use because It does not have more support from upstream. ## Sorb No mentioning packaging kernel 5.6 with WireGuard support? ## Jan Kernel 5.6 with built-in Wireguard has already been available in Fedora 31. NetworkManager still has no GUI integration for Wireguard, however. ## magikmw It does, I’ve literally got it on my F31. ## Jordan I was thinking about the dark theme for my eyes in this version :eyes: ## Asiri Iroshan Is it not available in the extensions app? I don’t really know. Haven’t upgraded to Fedora 32 yet. ## Benjamin GNOME already as a dark theme! The easiest way to enable it is with GNOME Tweaks. However, I agree it’ll be nice to have a toggle directly in the settings. ## Miran Al Mehrab Is Nvidia proprietary driver included into the .iso file ? ## Jyrki Tikka Fedora does not distribute proprietary software so the answer is no. https://fedoraproject.org/wiki/Forbidden_items As always this driver is provided by the RPM Fusion repository. 
https://rpmfusion.org ## dreamer_ NVIDIA “allows” bundling their drivers with Linux distros but puts severe limitations on how their drivers can be used by end-users (as a user, you are NOT allowed to use their drivers for certain things). In other words: NVIDIA EULA makes it impossible for Fedora to ship the drivers. ## C. It would be great if these guides included instructions for Fedora Silverblue. ## Dave Matheis hear, hear! ## Mivall I like this OS 😉 ## Morvan I filled (in conjunction with other peoples, I just enforced bug appearance): Here, which sluggishes too much boot time. Any news?## Ombre Nice 😉 ## daiquiri the best distro ever: – extremely stable – very transparent updates QA process and releases via bodhi – selinux enabled by default – sweet spot between rolling and normal releases – excellent team thank you! ## Matt The title should be ‘What’s new in Gnome 3.36’ I was curious to see how Fedora has gotten better too. ## Tomasz Gąsior This post is expected to be linked by GNOME Software “more information” button. Also, this post is intended for causal user which looks only on UI. Since default, official Fedora uses GNOME it means that GNOME changes change Fedora look and feel for people who are target of this article.## Géraud So Fedora 32 release notes = GNOME 3.36 release notes modulo the out of memory handler? ## jie I wonder whether raspberry pi 4B is supported now. ## Wes Turner From https://fedoraproject.org/wiki/Architectures/ARM/Raspberry_Pi#Raspberry_Pi_4 , it looks like kernel 5.7+ is necessary to fully support a Raspberry Pi 4. FC31 got 5.6, so it’s likely that FC32 will eventually get 5.7. ## Trung LE I think it is unlikely, FC32 would stick to stable 5.6 release. Feel free to upgrade to FC33 Rawhide if you want to bump to Kernel 5.7 ## steve hanley You should be able to get it running with some elbow grease. 
The lead development person doesn’t want to support it because it would create too many “new user” questions that he doesn’t have time to deal with. ## Ryan Would be nice to know some of the other changes under the hood, rather than just the GNOME stuff…I personally don’t run GNOME so I don’t care. Still I guess I can always find out the more technical stuff from https://fedoraproject.org/wiki/Releases/32/ChangeSet ## Mehdi I love the EarlyOOM update. Seems very promising! ## Vinicius Bueno What about docker and docker-compose subsystem compatibility? Is it present? ## Trung LE AFIK docker is not compatible yet because since 31 Fedora has moved to cgroupv2. Podman and podman-compose can be a great substitution OR you could disable cgroupv2 to use docker. ## -George What about TeXLive and R? Do the latest versions come with Fedora 32? I had the beta version of Fedora 32 and got it updated through ‘sudo dnf update’. How do I update TeXLive and R in my case if the stable Fedora 32 got the latest versions of those? Thanks. ## Sagar Any article to upgrade fedora 31 to 32? ## Ben Cotton Yes: https://fedoramagazine.org/upgrading-fedora-31-to-fedora-32/ ## Pancho Hi, what if I have my development environment with nginx, mysql, php? should I be concern about reinstall all my environment? ## Nissa How much disk space is needed for the upgrade from Fedora 31 to Fedora 32? I currently have 2.9 GB of free space in root. ## Trung LE I think you need a bit more, for my case, the dnf downloaded nearly 3.5GB for full system upgrade ## FeRD (Frank Dana) And then you need additional space at least as large as the download size free, for the actual transaction to run. I have a lot of packages installed on my desktop / “literally-everything”-purposed machine, and I can’t usually convince dnf system-upgrade to do its thing unless I can find a good 7-8 GB of free space on the root volume. 
(Every upgrade, it seems, I end up furiously removing everything related to Flatpak, everything related to mock, everything related to docker, and then every -devel package larger than a meg or two in size, just to scrounge up the necessary free space. But that’s my fault for giving this box a 40GB root volume when I first built it. It wasn’t until after I allocated every remaining block to other volumes that I realized I desperately needed more room for the main OS volume. At this point I’ve resigned myself to feeling data-claustrophobic twice a year, until I can finally migrate everything to a bigger SSD and spread it out a bit more. Anyway, point is: Just as a precaution it’s a good idea to plan for upwards of 5, 6, even 7 gigs, depending on the size of your existing package set. Oh, and never short-sheet your root volume. ## Jim Still has the same Intel graphics driver that crashes overnight when my laptop is docked and idle. ## Trung LE Feel free to file bug report on RedHat Bugzilla under Fedora category. There are few issues I’ve found however the team is very responsive and some have already been resolved with minor patch upgrade. ## Trung LE Kudos to the Fedora community for the very first Workstation flavour for PowerPC64 LE computer. Now it is so easy to install Fedora on Raptor Computing Blackbird workstation. ## Bill Kuhn Any idea how to install virualbox? I can’t seem tomake it happen! ## David_Kypuros Any hope for playing a BlueRay? Ever? ## ernesto Can someone remind what happens to your extensions when you upgrade? Do they all get disable by default and you have to manually upgrade them (if possible) on the extensions site? ## Larry Barnes I run two desktops Xfce and cinnamon, will I have to upgrade them individually or can I upgrade one and it will automatically apply to the other. Thanks ## Johan Got “could not do untrusted question as no klass support” error message trying to update it. Some says it’s a faulty mirror. 
Tried several times several days in a row, no result. ## Alan I had Fedora Design 31 and used the upgrade via dfn for Fedora 32. Went smoothly. However, Fedora started to hang for a second or so when I clicked on Activities. I have desktop with an i7 intel processor, 4th Gen, 8GB DDR3 RAM and a 1TB drive with only Fedora on it. Shouldn’t be acting as though it was out of memory. I downloaded the Cinnamon edition of Fedora and it is now working without a hitch. Wonderful. It is my main OS – I only use my Windows laptop when using when I need to use heavily formatted Word docs or PowerPoints at my work. Libre Office can’t handle them well enough. ## JULIO CESAR FRANCO Funcionará Virtual Box 6.1.8 ?
12,202
超算即服务:超级计算机如何上云
https://www.networkworld.com/article/3534725/the-ins-and-outs-of-high-performance-computing-as-a-service.html
2020-05-09T22:38:53
[ "HPC", "超级计算机" ]
https://linux.cn/article-12202-1.html
> > 高性能计算(HPC)服务可能是一种满足不断增长的超级计算需求的方式,但依赖于使用场景,它们不一定比使用本地超级计算机好。 > > > ![](/data/attachment/album/202005/09/223805mrjfjzecr3hceais.jpg) 导弹和军用直升机上的电子设备需要工作在极端条件下。美国国防承包商<ruby> 麦考密克·史蒂文森公司 <rt> McCormick Stevenson Corp. </rt></ruby>在部署任何物理设备之前都会事先模拟它所能承受的真实条件。模拟依赖于像 Ansys 这样的有限元素分析软件,该软件需要强大的算力。 几年前的一天,它出乎意料地超出了计算极限。 麦考密克·史蒂文森公司的首席工程师 Mike Krawczyk 说:“我们的一些工作会使办公室的计算机不堪重负。购买机器并安装软件在经济上或计划上都不划算。”相反,他们与 Rescale 签约,该公司销售其超级计算机系统上的处理能力,而这只花费了他们购买新硬件上所需的一小部分。 麦考密克·史蒂文森公司已成为被称为超级计算即服务或高性能计算即服务(两个紧密相关的术语)市场的早期采用者之一。根据国家计算科学研究所的定义,HPC 是超级计算机在计算复杂问题上的应用,而超级计算机是处理能力最先进的那些计算机。 无论叫它什么,这些服务都在颠覆传统的超级计算市场,并将 HPC 能力带给以前负担不起的客户。但这不是万能的,而且绝对不是即插即用的,至少现在还不是。 ### HPC 服务实践 从最终用户的角度来看,HPC 即服务类似于早期大型机时代的批处理模型。 “我们创建一个 Ansys 批处理文件并将其发送过去,运行它,然后将结果文件取下来,然后导入到本地,” Krawczyk 说。 在 HPC 服务背后,云提供商在其自己的数据中心中运行超级计算基础设施,尽管这不一定意味着当你听到“超级计算机”时你就会看到最先进的硬件。正如 IBM OpenPOWER 计算技术副总裁 Dave Turek 解释的那样,HPC 服务的核心是“相互互连的服务器集合。你可以调用该虚拟计算基础设施,它能够在你提出问题时,使得许多不同的服务器并行工作来解决问题。” 理论听起来很简单。但都柏林城市大学数字商业教授 Theo Lynn 表示,要使其在实践中可行,需要解决一些技术问题。普通计算与 HPC 的区别在于那些互联互通 —— 高速的、低延时的而且昂贵的 —— 因此需要将这些互连引入云基础设施领域。在 HPC 服务可行之前,至少需要将存储性能和数据传输也提升到与本地 HPC 相同的水平。 但是 Lynn 说,一些制度创新相比技术更好的帮助了 HPC 服务的起飞。特别是,“我们现在看到越来越多的传统 HPC 应用采用云友好的许可模式 —— 这在过去是阻碍采用的障碍。” 他说,经济也改变了潜在的客户群。“云服务提供商通过向那些负担不起传统 HPC 所需的投资成本的低端 HPC 买家开放,进一步开放了市场。随着市场的开放,超大规模经济模型变得越来越多,更可行,成本开始下降。” ### 避免本地资本支出 HPC 服务对传统超级计算长期以来一直占据主导地位的私营部门客户具有吸引力。这些客户包括严重依赖复杂数学模型的行业,包括麦考密克·史蒂文森公司等国防承包商,以及石油和天然气公司、金融服务公司和生物技术公司。都柏林城市大学的 Lynn 补充说,松耦合的工作负载是一个特别好的用例,这意味着许多早期采用者将其用于 3D 图像渲染和相关应用。 但是,何时考虑 HPC 服务而不是本地 HPC 才有意义?对于德国的模拟烟雾在建筑物中的蔓延和火灾对建筑物结构部件的破坏的 hhpberlin 公司来说,答案是在它超出了其现有资源时。 Hpberlin 公司数值模拟的科学负责人 Susanne Kilian 说:“几年来,我们一直在运行自己的小型集群,该集群具有多达 80 个处理器核。……但是,随着应用复杂性的提高,这种架构已经越来越不足以支撑;可用容量并不总是够快速地处理项目。” 她说:“但是,仅仅花钱买一个新的集群并不是一个理想的解决方案:鉴于我们公司的规模和管理环境,不断地维护这个集群(定期进行软件和硬件升级)是不现实的。另外,需要模拟的项目数量会出现很大的波动,因此集群的利用率并不是真正可预测的。通常,使用率很高的阶段与很少使用或不使用的阶段交替出现。”通过转换为 HPC 服务模式,hhpberlin 释放了过剩的产能,并无需支付升级费用。 IBM 的 Turek 解释了不同公司在评估其需求时所经历的计算过程。对于拥有 30 名员工的生物科学初创公司来说,“你需要计算,但你真的不可能让 15% 
的员工专门负责计算。这就像你可能也会说你不希望有专职的法律代表,所以你也会把它作为一项服务来做。”不过,对于一家较大的公司而言,最终归结为权衡 HPC 服务的运营费用与购买内部超级计算机或 HPC 集群的费用。 到目前为止,这些都是你采用任何云服务时都会遇到的类似的争论。但是,HPC 市场的某些特殊性会使得在衡量运营支出(OPEX)与资本支出(CAPEX)时选择前者。超级计算机不是诸如存储或 x86 服务器之类的商用硬件;它们非常昂贵,技术进步很快会使其过时。正如麦考密克·史蒂文森公司的 Krawczyk 所说,“这就像买车:只要车一开走,它就会开始贬值。”对于许多公司,尤其是规模较大、灵活性较差的公司,购买超级计算机的过程可能会陷入无望的泥潭。IBM 的 Turek 说:“你会被规划问题、建筑问题、施工问题、培训问题所困扰,然后必须执行 RFP。你必须得到 CIO 的支持。你必须与内部客户合作以确保服务的连续性。这是一个非常、非常复杂的过程,并没有很多机构有非常出色的执行力。” 一旦你选择走 HPC 服务的路线,你会发现你会得到你期望从云服务中得到的许多好处,特别是仅在业务需要时才需付费的能力,从而可以带来资源的高效利用。Gartner 高级总监兼分析师 Chirag Dekate 表示,当你对高性能计算有短期需求时,突发性负载是推动选择 HPC 服务的关键用例。 他说:“在制造业中,在产品设计阶段前后,HPC 活动往往会达到很高的峰值。但是,一旦产品设计完成,在其余产品开发周期中,HPC 资源的利用率就会降低。” 相比之下,他说:“当你拥有大型的、长期运行的工作时,云计算的经济性才会逐渐减弱。” 通过巧妙的系统设计,你可以将这些 HPC 服务突发活动与你自己的内部常规计算集成在一起。<ruby> 埃森哲 <rt> Accenture </rt></ruby>实验室常务董事 Teresa Tung 举了一个例子:“通过 API 访问 HPC 可以与传统计算无缝融合。在模型构建阶段,传统的 AI 流水线可能会在高端超级计算机上进行训练,但是最终经过反复按预期运行的训练好的模型将部署在云端的其他服务上,甚至部署在边缘设备上。” ### 它并不适合所有的应用场景 HPC 服务适合批处理和松耦合的场景。这与一个常见的 HPC 缺点有关:数据传输问题。高性能计算本身通常涉及庞大的数据集,而将所有这些信息通过互联网发送到云服务提供商并不容易。IBM 的 Turek 说:“我们与生物技术行业的客户交流,他们每月仅在数据费用上就花费 1000 万美元。” 而钱并不是唯一的潜在问题。构建一个利用数据的工作流程,可能会对你的工作流程提出挑战,让你绕过数据传输所需的漫长时间。hhpberlin 的 Kilian 说:“当我们拥有自己的 HPC 集群时,当然可以随时访问已经产生的仿真结果,从而进行交互式的临时评估。我们目前正努力达到在仿真的任意时刻都可以更高效地、交互地访问和评估云端生成的数据,而无需下载大量的模拟数据。” Mike Krawczyk 提到了另一个绊脚石:合规性问题。国防承包商使用的任何服务都需要遵从《国际武器交易条例》(ITAR),麦考密克·史蒂文森公司之所以选择 Rescale,部分原因是因为这是他们发现的唯一符合的供应商。如今,尽管有更多的公司使用云服务,但任何希望使用云服务的公司都应该意识到使用其他人的基础设施时所涉及的法律和数据保护问题,而且许多 HPC 场景的敏感性使得 HPC 即服务的这个问题更加突出。 此外,HPC 服务所需的 IT 治理超出了目前的监管范围。例如,你需要跟踪你的软件许可证是否允许云使用 —— 尤其是专门为本地 HPC 群集上运行而编写的软件包。通常,你需要跟踪 HPC 服务的使用方式,它可能是一个诱人的资源,尤其是当你从员工习惯的内部系统过渡到有可用的空闲的 HPC 能力时。例如,Avanade 全球平台高级主管兼 Azure 平台服务全球负责人 Ron Gilpin 建议,将你使用的处理核心的数量回拨给那些对时间不敏感的任务。他说:“如果一项工作只需要用一小时来完成而不需要在十分钟内就完成,那么它可以使用 165 个处理器而不是 1,000 个,从而节省了数千美元。” ### 对 HPC 技能的要求很高 一直以来,采用 HPC 的最大障碍之一就是其所需的独特的内部技能,而 HPC 服务并不能神奇地使这种障碍消失。Gartner 的 Dekate 表示:“许多 CIO 将许多工作负载迁移到了云上,他们看到了成本的节约、敏捷性和效率的提升,因此相信在 HPC 生态中也可以达成类似的效果。一个普遍的误解是,他们可以通过彻底地免去系统管理员,并聘用能解决其 HPC
工作负载的新的云专家,从而以某种方式优化人力成本。”对于 HPC 即服务来说更是如此。 “但是 HPC 并不是一个主流的企业环境。” 他说。“你正在处理通过高带宽、低延迟的网络互联的高端计算节点,以及相当复杂的应用和中间件技术栈。许多情况下,甚至连文件系统层也是 HPC 环境所独有的。没有对应的技能可能会破坏稳定性。” 但是超级计算技能的供给却在减少,Dekate 将其称为劳动力“老龄化”,这是因为这一代开发人员将目光投向了新兴的初创公司,而不是学术界或使用 HPC 的更老套的公司。因此,HPC 服务供应商正在尽其所能地弥补差距。IBM 的 Turek 表示,许多 HPC 老手将总是想运行他们自己精心调整过的代码,并需要专门的调试器和其他工具来帮助他们在云端实现这一目标。但是,即使是 HPC 新手也可以调用供应商构建的代码库,以利用超级计算的并行处理能力。第三方软件提供商出售的交钥匙软件包可以减少 HPC 的许多复杂性。 埃森哲的 Tung 表示,该行业需要进一步加大投入才能真正繁荣。她说:“HPCaaS 已经创建了具有重大影响力的新功能,但还需要做的是使它易于被数据科学家、企业架构师或软件开发人员使用。这包括易用的 API、文档和示例代码。它包括解答问题的用户支持。仅仅提供 API 是不够的,API 需要适合特定的用途。对于数据科学家而言,这可能是以 Python 形式提供,并容易更换她已经在使用的框架。价值来自于使这些用户能够通过新的效率和性能最终使他们的工作得到改善,只要他们能够访问新的功能就可以了。” 如果供应商能够做到这一点,那么 HPC 服务才能真正将超级计算带给大众。 --- via: <https://www.networkworld.com/article/3534725/the-ins-and-outs-of-high-performance-computing-as-a-service.html> 作者:[Josh Fruhlinger](https://www.networkworld.com/author/Josh-Fruhlinger/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[messon007](https://github.com/messon007) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
12,203
使用 Beaker 浏览器浏览对等 Web
https://itsfoss.com/beaker-browser/
2020-05-10T08:28:00
[ "浏览器", "P2P", "对等" ]
https://linux.cn/article-12203-1.html
![](/data/attachment/album/202005/10/082745ztmi4kqh4iq449ll.jpg) 在过去 50 年中,我们所了解的互联网没有什么变化,全球的网民使用他们的设备从遍布在世界各地的服务器上检索数据。 一群专业的技术专家想改变现状,使互联网变成人们可以连接和直接分享信息的地方,而不是依赖一个中心服务器(去中心化)。 我们已经在 It’s FOSS 讨论过很多这样的去中心化的服务。[YouTube 竞品:LBRY](https://itsfoss.com/lbry/)、[Twitter 竞品:Mastodon](https://itsfoss.com/mastodon-open-source-alternative-twitter/) 是其中的两个例子。 今天我将要介绍另一个这样的产品,名为 [Beaker 浏览器](https://beakerbrowser.com/),它的设计目标是浏览对等 Web。 ![Beaker Browser](/data/attachment/album/202005/10/083036yeso1o0ok1o0n0o8.jpg) ### “对等 Web” 是什么? 根据 Beaker 浏览器的[开发者之一](https://pfrazee.hashbase.io/blog/what-is-the-p2p-web)的描述,“对等 Web 是一项实验性的技术 ……旨在提高我们掌控 Web 的能力。” 还有,他们说对等 Web 有三个主要原则:任何一点都可以成为服务器;多台计算机可以为同一个网站提供服务;没有后端。 从这些原则中你可以看出,对等 Web 的思想与 BitTorrent 很像,文件由多个对端做种,这些对端共同承担带宽负载。这减少了一个用户需要提供给他们的网站的总带宽。 ![Beaker Browser Settings](/data/attachment/album/202005/10/082813xskck3cc47b6zb2z.jpg) 对等 Web 另一个重要的方面是创作者对于他们自己的想法的控制能力。当今年代,平台都是由庞大的组织控制的,往往拿你的数据为他们所用。Beaker 把数据的控制能力返还给了内容创造者。 ### 使用 Beaker 浏览去中心化的 web [Beaker 浏览器](https://beakerbrowser.com/) 是在 2016 年被创建的。该项目(及其周边技术)由[蓝链实验室](https://bluelinklabs.com/)的三人团队创建。Beaker 浏览器使用 [Dat 协议](https://www.datprotocol.com/)在计算机之间共享数据。使用 Dat 协议的网站以 `dat://` 而不是 `http://` 开头。 Dat 协议的优势如下: * 快速 – 档案能立即从多个源同步。 * 安全 – 所有的更新都是有签名和经过完整性检查的。 * 灵活 – 可以在不修改档案 URL 的情况下迁移主机。 * 版本控制 – 每次修改都被写到只能追加的版本日志中。 * 去中心化 – 任何设备都可以作为承载档案的主机。 ![Beaker Browser Seeding](/data/attachment/album/202005/10/082827y0qbju40lzp0j3to.jpg) Beaker 浏览器本质上是阉割版的 Chromium,原生支持 `dat://` 地址,也可以访问普通的 `http://` 站点。 每次访问一个 dat 站点,在你请求时该站点的内容才会下载到你的计算机。例如,在一个站点上的 about 页面中有一张 Linus Torvalds 的图片,只有在你浏览到该站点的这个页面时,才会下载这张图片。 此外,当你浏览一个 dat 网站时,“[你会短暂性地](https://beakerbrowser.com/docs/faq/)重新上传或做种你从该网站上下载的所有文件。”你也可以选择为网站(主动)做种来帮助创造者。 ![Beaker Browser Menu](/data/attachment/album/202005/10/082832j6dsizplps6ppwib.jpg) 由于 Beaker 的志向就是创建一个更开放的网络,因此你可以很容易地查看任何网站的源码。不像在大多数浏览器上你只能看到当前浏览的页面的源码那样,使用 Beaker 你能以类似 GitHub 的视图查看整个站点的结构。你甚至可以复刻这个站点,并托管你自己的版本。 除了浏览基于 dat 的网站外,你还可以创建自己的站点。在 Beaker
浏览器的菜单里,有创建新网站或空项目的选项。如果你选择了创建一个新网站,Beaker 会搭建一个小的演示站点,你可以使用浏览器里自带的编辑器来编辑。 然而,如果你像我一样更喜欢用 Markdown,你可以选择创建一个空项目。Beaker 会创建一个站点的结构,赋给它一个 `dat://` 地址。你只需要创建一个 `index.md` 文件就行了。这有个[简短教程](https://beakerbrowser.com/docs/guides/create-a-markdown-site),你可以看到更多信息。你也可以用创建空项目的方式搭建一个 web 应用。 ![Beaker Browser Website Template](/data/attachment/album/202005/10/082833du6hzqquqoqu2xr6.jpg) 由于 Beaker 的角色是个 Web 服务器和站点做种者,当你关闭它或关机后你的站点就不可用了。幸运的是,你不必一直开着你的计算机或浏览器。你也可以使用名为 [Hashbase](https://hashbase.io/) 的做种服务,或者你可以搭建一个 [homebase](https://github.com/beakerbrowser/homebase) 做种服务器。 虽然 Beaker [适用于](https://beakerbrowser.com/install/) Linux、Windows 和 macOS,但是在使用 Beaker 之前,还是要查阅下[各平台的教程](https://beakerbrowser.com/docs/guides/)。 ### Beaker 浏览器并不适合所有人,但它有其用途 当第一次接触到时,我对 Beaker 浏览器有极高的热情,但就目前而言,它仍是非常实验性的。我尝试浏览过的很多 dat 站点还不可用,因为用户并没有为站点做种。当站点恢复可用时 Beaker 确实可以选择通知你。 ![Beaker Browser No Peer](/data/attachment/album/202005/10/082904jl72aa42av4jy2cg.jpg) 另一个问题是,Beaker 是真正阉割版的 Chromium。它不能安装扩展或主题。你只能使用白色主题和极少的工具集。我不会把 Beaker 浏览器作为常用浏览器,而且能访问 dat 网站并不是把它留在系统上的充分条件。 我曾经寻找一个能支持 `dat://` 协议的 Firefox 扩展。我确实找到了这样一款扩展,但它需要安装一系列其他的软件。相比而言,安装 Beaker 比安装那些软件容易点。 就目前而言,Beaker 不适合我。也许在将来更多的人使用 Beaker 或者其他浏览器支持 dat 协议。那时会很有趣。目前而言,它还比较冷清。 在使用 Beaker 的时间里,我用内建的工具创建了一个[网站](https://41bfbd06731e8d9c5d5676e8145069c69b254e7a3b710ddda4f6e9804529690c/)。不要担心,我已经为它做种了。 ![Beaker Browser Site Source](/data/attachment/album/202005/10/083011dzyaocy5qlqel34o.jpg) 你怎么看 Beaker 浏览器?你怎么看对等 Web?请尽情在下面评论。 如果你觉得本文有意思,请花点时间把它分享到社交媒体,Hacker News 或 [Reddit](https://reddit.com/r/linuxusersgroup)。 --- via: <https://itsfoss.com/beaker-browser/> 作者:[John Paul](https://itsfoss.com/author/john/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lxbwolf](https://github.com/lxbwolf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
The Internet as we know it has existed unchanged (more or less) for the last 50 years. People across the globe use their devices to retrieve data from huge servers dotted around the world. A group of dedicated technologists wants to change that to make the internet a place where people can connect and share information directly instead of relying on a central server (decentralization). There are a bunch of such decentralized services that we have already covered on It’s FOSS. [LBRY as YouTube alternative](https://itsfoss.com/lbry/), [Mastodon as Twitter alternative](https://itsfoss.com/mastodon-open-source-alternative-twitter/) are just a couple of such examples. And today I am going to cover another such product called [Beaker Browser](https://beakerbrowser.com/), which is essentially for browsing the peer-to-peer web. ![Beaker Browser](https://itsfoss.com/content/images/wordpress/2020/04/beaker-browser-800x426.jpg) ## What is the ‘peer-to-peer Web’? According to [one of the devs](https://pfrazee.hashbase.io/blog/what-is-the-p2p-web) behind the Beaker browser, “The P2P Web is an experimental set of technologies…to give users more control over the Web.” Further, they say that the peer-to-peer Web has three main principles: anybody can be a server; multiple computers can serve the same site; there is no back end. As you can see from those principles, the idea of the peer-to-peer Web is very similar to BitTorrent, where files are seeded by multiple peers and those peers share the bandwidth load. This reduces the overall bandwidth that a person needs to provide for their site. ![Beaker Browser Settings](https://itsfoss.com/content/images/wordpress/2020/04/beaker-bowser-setting-800x573.jpg) The other major part of the peer-to-peer Web is creator control of their ideas. In this day and age, platforms are controlled by large corporations, who try to use your data for their benefit. Beaker returns control to the content creators.
## Browsing the decentralized web with Beaker The [Beaker Browser](https://beakerbrowser.com/) first came into existence in 2016. The project (and the technology that surrounds it) was created by a team of three at [Blue Link Labs](https://bluelinklabs.com/). The Beaker Browser uses the [Dat protocol](https://www.datprotocol.com/) to share data between computers. All websites that use the Dat protocol start with `dat://` instead of `http://`. The strengths of the Dat protocol are: - Fast – Archives sync from multiple sources at once. - Secure – All updates are signed and integrity-checked. - Resilient – Archives can change hosts without changing their URLs. - Versioned – Changes are written to an append-only version log. - Decentralized – Any device can host any archive. ![Beaker Browser Seeding](https://itsfoss.com/content/images/wordpress/2020/04/beaker-bowser-seedding-800x466.jpg) The Beaker Browser is essentially a cut-down version of Chromium with built-in support for `dat://` addresses. It can still visit regular `http://` sites. Each time you visit a dat site, the content for that site is downloaded to your computer as you request it. For example, a picture of Linus Torvalds on the about page of a site is not downloaded until you navigate to that page. Also, once you visit a dat website, “[you temporarily](https://beakerbrowser.com/docs/faq/) re-upload or seed whichever files you’ve downloaded from the website.” You can also choose to seed the website to help its creator. ![Beaker Browser Menu](https://itsfoss.com/content/images/wordpress/2020/04/beaker-browser-menu.jpg) Since the whole idea of Beaker is to create a more open web, you can easily view the source of any website. Unlike most browsers, where you just see the source code of the current page you are viewing, Beaker shows you the entire structure of the site in a GitHub-like view. You can even fork the site and host your version of it.
Besides visiting dat-based websites, you can also create your own site. In the Beaker Browser menu, there is an option to create a new website or an empty project. If you select the option to create a new website, Beaker will build a little demo site that you can edit with the browser’s built-in editor. However, if you are like me and prefer to use Markdown, you can choose to create an empty project. Beaker will create the structure of a site and assign it a `dat://` address. Create an `index.md` file and you are good to go. There is a [short tutorial](https://beakerbrowser.com/docs/guides/create-a-markdown-site) with more info. You can also use the create empty project option to build a web app. ![Beaker Browser Website Template](https://itsfoss.com/content/images/wordpress/2020/04/beaker-browser-website-template-800x459.jpg) Since Beaker acts as a web server and site seeder, any time you close it or turn off your computer, your site will become unavailable. Thankfully, you don’t have to run your computer or the browser constantly. You can also use a seeding service named [Hashbase](https://hashbase.io/), or you can set up a [homebase](https://github.com/beakerbrowser/homebase) seeding server. Beaker is [available](https://beakerbrowser.com/install/) for Linux, Windows, and macOS. If you do start playing around with Beaker, be sure to take a quick look at [their guides](https://beakerbrowser.com/docs/guides/). ## Beaker Browser is not for everyone but it has a purpose When I first got this assignment, I had high hopes for the Beaker Browser. As it stands now, it’s still very experimental. A number of the dat sites that I tried to visit were unavailable because the user was not seeding their site. Beaker does have an option to notify you when that site is back online.
![Beaker Browser No Peer](https://itsfoss.com/content/images/wordpress/2020/04/beaker-browser-no-peer-800x424.jpg) Another problem is that Beaker is a really stripped-down version of Chromium. There is no option to install extensions or themes. Instead, you are stuck with a white theme and a very limited toolset. I would not use this as my main browser, and having access to the world of dat websites is not enough of a reason to keep it installed on my system. I looked to see if there is an extension for Firefox that would add support for the `dat://` protocol. I did find such an extension, but it also required the installation of a couple of other pieces of software. It’s just easier to install Beaker. As it stands now, Beaker is not for me. Maybe in the future, more people will start using Beaker or the dat protocol will gain support from other browsers. Then it might be interesting. Right now, it’s kinda empty. As part of my time with Beaker, I created a [website](//41bfbd06731e8d9c5d5676e8145069c69b254e7a3b710ddda4f6e9804529690c/) using the built-in tools. Don’t worry, I made sure that it’s seeded. ![Beaker Browser Site Source](https://itsfoss.com/content/images/wordpress/2020/04/beaker-bowser-source-800x544.jpg) What are your thoughts on the Beaker Browser? What are your thoughts on the peer-to-peer web? Please let us know in the comments below. If you found this article interesting, please take a minute to share it on social media, Hacker News, or [Reddit](http://reddit.com/r/linuxusersgroup).
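One closing, hands-on footnote to the site-creation section above: if you go the Markdown route, a site really is just a handful of files. Here is a rough sketch of what such a project might contain — only the `index.md` file name follows Beaker's tutorial; the directory name is made up, nothing below is Beaker-specific, and Beaker itself assigns the `dat://` address:

```shell
# Sketch: minimal contents of a Markdown-based site.
# In Beaker you would create the empty project first and then
# add a file like this via the built-in editor.
mkdir -p my-dat-site
cat > my-dat-site/index.md <<'EOF'
# My first peer-to-peer site

Hello from the dat:// web!
EOF

ls my-dat-site    # -> index.md
```

From there, seeding the project (or pushing it to Hashbase) is what keeps it reachable.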
12,205
GNU 核心实用程序简介
https://opensource.com/article/18/4/gnu-core-utilities
2020-05-10T17:24:36
[ "Linux", "实用程序" ]
https://linux.cn/article-12205-1.html
> > 大多数 Linux 系统管理员需要做的事情都可以在 GNU coreutils 或 util-linux 中找到。 > > > ![](/data/attachment/album/202005/10/172312hofgh88i3g6jajfj.jpg) 许多 Linux 系统管理员最基本和常用的工具主要包括在两套实用程序中:[GNU 核心实用程序(coreutils)](https://www.gnu.org/software/coreutils/coreutils.html)和 util-linux。它们的基本功能允许系统管理员执行许多管理 Linux 系统的任务,包括管理和操作文本文件、目录、数据流、存储介质、进程控制、文件系统等等。 这些工具是不可缺少的,因为没有它们,就不可能在 Unix 或 Linux 计算机上完成任何有用的工作。鉴于它们的重要性,让我们来研究一下它们。 ### GNU coreutils 要了解 GNU 核心实用程序的起源,我们需要乘坐时光机进行一次短暂的旅行,回到贝尔实验室的 Unix 早期。[编写 Unix](https://en.wikipedia.org/wiki/History_of_Unix) 是为了让 Ken Thompson、Dennis Ritchie、Doug McIlroy 和 Joe Ossanna 可以继续他们在大型多任务和多用户计算机项目 [Multics](https://en.wikipedia.org/wiki/Multics) 上的工作:开发一个叫做《太空旅行》游戏的小东西。正如今天一样,推动计算技术发展的似乎总是游戏玩家。这个新的操作系统比 Multics(LCTT 译注:multi- 字头的意思是多数的)的局限性更大,因为一次只能有两个用户登录,所以被称为 Unics(LCTT 译注:uni- 字头的意思是单独的)。后来这个名字被改成了 Unix。 随着时间的推移,Unix 取得了如此巨大的成功,开始贝尔实验室基本上是将其赠送给大学,后来送给公司也只是收取介质和运输的费用。在那个年代,系统级的软件是在组织和程序员之间共享的,因为在系统管理这个层面,他们努力实现的是共同的目标。 最终,AT&T 公司的[老板们](https://en.wikipedia.org/wiki/Pointy-haired_Boss)决定,他们应该在 Unix 上赚钱,并开始使用限制更多的、昂贵的许可证。这发生在软件变得更加专有、受限和封闭的时期,从那时起,与其他用户和组织共享软件变得不可能。 有些人不喜欢这种情况,于是用自由软件来对抗。Richard M. 
Stallman(RMS),他带领着一群“反叛者”试图编写一个开放的、自由可用的操作系统,他们称之为 GNU 操作系统。这群人创建了 GNU 实用程序,但并没有产生一个可行的内核。 当 Linus Torvalds 开始编写和编译 Linux 内核时,他需要一套非常基本的系统实用程序来开始执行一些稍微有用的工作。内核并不提供命令或任何类型的命令 shell,比如 Bash,它本身是没有任何用处的,因此,Linus 使用了免费提供的 GNU 核心实用程序,并为 Linux 重新编译了它们。这让他拥有了一个完整的、即便是相当基本的操作系统。 你可以通过在终端命令行中输入命令 `info coreutils` 来了解 GNU 核心实用程序的全部内容。下面的核心实用程序列表就是这个信息页面的一部分。这些实用程序按功能进行了分组,以方便查找;在终端中,选择你想了解更多信息的组,然后按回车键。 ```
* Output of entire files:: cat tac nl od base32 base64
* Formatting file contents:: fmt pr fold
* Output of parts of files:: head tail split csplit
* Summarizing files:: wc sum cksum b2sum md5sum sha1sum sha2
* Operating on sorted files:: sort shuf uniq comm ptx tsort
* Operating on fields:: cut paste join
* Operating on characters:: tr expand unexpand
* Directory listing:: ls dir vdir dircolors
* Basic operations:: cp dd install mv rm shred
* Special file types:: mkdir rmdir unlink mkfifo mknod ln link readlink
* Changing file attributes:: chgrp chmod chown touch
* Disk usage:: df du stat sync truncate
* Printing text:: echo printf yes
* Conditions:: false true test expr
* Redirection:: tee
* File name manipulation:: dirname basename pathchk mktemp realpath
* Working context:: pwd stty printenv tty
* User information:: id logname whoami groups users who
* System context:: date arch nproc uname hostname hostid uptime
* SELinux context:: chcon runcon
* Modified command invocation:: chroot env nice nohup stdbuf timeout
* Process control:: kill
* Delaying:: sleep
* Numeric operations:: factor numfmt seq
``` 这个列表里有 102 个实用程序。它涵盖了在 Unix 或 Linux 主机上执行基本任务所需的许多功能。但是,很多基本的实用程序都缺失了,例如,`mount` 和 `umount` 命令不在这个列表中。这些命令和其他许多不在 GNU 核心实用程序中的命令可以在 util-linux 中找到。 ### util-linux util-linux 实用程序包中包含了许多系统管理员常用的其它命令。这些实用程序是由 Linux 内核组织发布的,这 107 条命令中几乎每一个都来自原本是三个单独的集合 —— fileutils、shellutils 和 textutils,2003 年它们被[合并成一个包](https://en.wikipedia.org/wiki/GNU_Core_Utilities):util-linux。 ```
agetty fsck.minix mkfs.bfs setpriv
blkdiscard fsfreeze mkfs.cramfs setsid
blkid fstab mkfs.minix setterm
blockdev fstrim mkswap sfdisk
cal getopt more su
cfdisk hexdump mount sulogin
chcpu hwclock mountpoint swaplabel
chfn ionice namei swapoff
chrt ipcmk newgrp swapon
chsh ipcrm nologin switch_root
colcrt ipcs nsenter tailf
col isosize partx taskset
colrm kill pg tunelp
column last pivot_root ul
ctrlaltdel ldattach prlimit umount
ddpart line raw unshare
delpart logger readprofile utmpdump
dmesg login rename uuidd
eject look renice uuidgen
fallocate losetup reset vipw
fdformat lsblk resizepart wall
fdisk lscpu rev wdctl
findfs lslocks RTC Alarm whereis
findmnt lslogins runuser wipefs
flock mcookie script write
fsck mesg scriptreplay zramctl
fsck.cramfs mkfs setarch
``` 这些实用程序中的一些已经被淘汰了,很可能在未来的某个时候会从集合中被踢出去。你应该看看[维基百科的 util-linux 页面](https://en.wikipedia.org/wiki/Util-linux)来了解其中许多实用程序的信息,而 man 页面也提供了关于这些命令的详细信息。 ### 总结 这两个 Linux 实用程序的集合,GNU 核心实用程序和 util-linux,共同提供了管理 Linux 系统所需的基本实用程序。在研究这篇文章的过程中,我发现了几个有趣的实用程序,这些实用程序是我从不知道的。这些命令中的很多都是很少需要的,但当你需要的时候,它们是不可缺少的。 在这两个集合里,有 200 多个 Linux 实用工具。虽然 Linux 的命令还有很多,但这些都是管理一个典型的 Linux 主机的基本功能所需要的。 --- via: <https://opensource.com/article/18/4/gnu-core-utilities> 作者: [David Both](https://opensource.com/users/dboth) 选题: [lujun9972](https://github.com/lujun9972) 译者: [wxy](https://github.com/wxy) 校对: [wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Two sets of utilities—the [GNU Core Utilities](https://www.gnu.org/software/coreutils/coreutils.html) and [util-linux](https://en.wikipedia.org/wiki/Util-linux)—comprise many of the Linux system administrator's most basic and regularly used tools. Their basic functions allow sysadmins to perform many of the tasks required to administer a Linux computer, including management and manipulation of text files, directories, data streams, storage media, process controls, filesystems, and much more. These tools are indispensable because, without them, it is impossible to accomplish any useful work on a Unix or Linux computer. Given their importance, let's examine them. ## GNU coreutils To understand the origins of the GNU Core Utilities, we need to take a short trip in the Wayback machine to the early days of Unix at Bell Labs. [Unix was written](https://en.wikipedia.org/wiki/History_of_Unix) so Ken Thompson, Dennis Ritchie, Doug McIlroy, and Joe Ossanna could continue with something they had started while working on a large multi-tasking and multi-user computer project called [Multics](https://en.wikipedia.org/wiki/Multics). That little something was a game called Space Travel. As remains true today, it always seems to be the gamers who drive forward the technology of computing. This new operating system was much more limited than Multics, as only two users could log in at a time, so it was called Unics. This name was later changed to Unix. Over time, Unix turned out to be such a success that Bell Labs began essentially giving it away to universities and later to companies for the cost of the media and shipping. Back in those days, system-level software was shared between organizations and programmers as they worked to achieve common goals within the context of system administration. Eventually, the [PHBs](https://en.wikipedia.org/wiki/Pointy-haired_Boss) at AT&T decided they should make money on Unix and started using more restrictive—and expensive—licensing.
This was taking place at a time when software was becoming more proprietary, restricted, and closed. It was becoming impossible to share software with other users and organizations. Some people did not like this and fought it with free software. Richard M. Stallman, aka RMS, led a group of rebels who were trying to write an open and freely available operating system they called the GNU Operating System. This group created the GNU Utilities but didn't produce a viable kernel. When Linus Torvalds first wrote and compiled the Linux kernel, he needed a set of very basic system utilities to even begin to perform marginally useful work. The kernel does not provide commands or any type of command shell such as Bash. It is useless by itself. So, Linus used the freely available GNU Core Utilities and recompiled them for Linux. This gave him a complete, if quite basic, operating system. You can learn about all the individual programs that comprise the GNU Utilities by entering the command `info coreutils` at a terminal command line. The following list of the core utilities is part of that info page. The utilities are grouped by function to make specific ones easier to find; in the terminal, highlight the group you want more information on and press the Enter key. 
``` * Output of entire files:: cat tac nl od base32 base64 * Formatting file contents:: fmt pr fold * Output of parts of files:: head tail split csplit * Summarizing files:: wc sum cksum b2sum md5sum sha1sum sha2 * Operating on sorted files:: sort shuf uniq comm ptx tsort * Operating on fields:: cut paste join * Operating on characters:: tr expand unexpand * Directory listing:: ls dir vdir dircolors * Basic operations:: cp dd install mv rm shred * Special file types:: mkdir rmdir unlink mkfifo mknod ln link readlink * Changing file attributes:: chgrp chmod chown touch * Disk usage:: df du stat sync truncate * Printing text:: echo printf yes * Conditions:: false true test expr * Redirection:: tee * File name manipulation:: dirname basename pathchk mktemp realpath * Working context:: pwd stty printenv tty * User information:: id logname whoami groups users who * System context:: date arch nproc uname hostname hostid uptime * SELinux context:: chcon runcon * Modified command invocation:: chroot env nice nohup stdbuf timeout * Process control:: kill * Delaying:: sleep * Numeric operations:: factor numfmt seq ``` There are 102 utilities on this list. It covers many of the functions necessary to perform basic tasks on a Unix or Linux host. However, many basic utilities are missing. For example, the `mount` and `umount` commands are not in this list. Those and many of the other commands that are not in the GNU coreutils can be found in the `util-linux` collection. ## util-linux The `util-linux` package of utilities contains many of the other common commands that sysadmins use. These utilities are distributed by the Linux Kernel Organization, and virtually every one of these 107 commands were originally three separate collections—`fileutils` , `shellutils` , and `textutils` —which were [combined into the single package](https://en.wikipedia.org/wiki/GNU_Core_Utilities) `util-linux` in 2003. 
```
agetty fsck.minix mkfs.bfs setpriv
blkdiscard fsfreeze mkfs.cramfs setsid
blkid fstab mkfs.minix setterm
blockdev fstrim mkswap sfdisk
cal getopt more su
cfdisk hexdump mount sulogin
chcpu hwclock mountpoint swaplabel
chfn ionice namei swapoff
chrt ipcmk newgrp swapon
chsh ipcrm nologin switch_root
colcrt ipcs nsenter tailf
col isosize partx taskset
colrm kill pg tunelp
column last pivot_root ul
ctrlaltdel ldattach prlimit umount
ddpart line raw unshare
delpart logger readprofile utmpdump
dmesg login rename uuidd
eject look renice uuidgen
fallocate losetup reset vipw
fdformat lsblk resizepart wall
fdisk lscpu rev wdctl
findfs lslocks RTC Alarm whereis
findmnt lslogins runuser wipefs
flock mcookie script write
fsck mesg scriptreplay zramctl
fsck.cramfs mkfs setarch
``` Some of these utilities have been deprecated and will likely fall out of the collection at some point in the future. You should check [Wikipedia's util-linux page](https://en.wikipedia.org/wiki/Util-linux) for information on many of the utilities, and the man pages also provide details on the commands. ## Summary These two collections of Linux utilities, the GNU Core Utilities and `util-linux` , together provide the basic utilities required to administer a Linux system. As I researched this article, I found several interesting utilities I never knew about. Many of these commands are seldom needed, but when you need them, they are indispensable. Between these two collections, there are over 200 Linux utilities. While Linux has many more commands, these are the ones needed to manage the basic functions of a typical Linux host.
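As a quick, hands-on coda, here are a few of the utilities listed in this article chained together. Everything used below is standard GNU coreutils, so it should behave the same on any typical Linux host:

```shell
# "Numeric operations" + "Output of entire files": generate a sequence, reverse it.
seq 1 3 | tac                      # -> 3, 2, 1 (one per line)

# "Operating on sorted files": deduplicate with counts (uniq needs sorted input).
printf 'b\na\nb\n' | sort | uniq -c

# "Summarizing files": the byte count includes the trailing newline.
printf 'hello\n' | wc -c           # -> 6

# "File name manipulation": strip the directory part of a path.
basename /usr/bin/sort             # -> sort
```

None of this is exotic, but it shows why the collection is indispensable: each command does one small job, and the shell glues them together.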
12,207
使用 mergefs 增加虚拟存储
https://fedoramagazine.org/using-mergerfs-to-increase-your-virtual-storage/
2020-05-10T22:07:46
[ "文件系统", "mergerfs" ]
https://linux.cn/article-12207-1.html
![](/data/attachment/album/202005/10/220750xyyry5mf8fydyqey.png) 如果你想在一个媒体项目中用到多个磁盘或分区,不想丢失任何现有数据,但又想将所有文件都存放在一个驱动器下,该怎么办?这时,mergerfs 就能派上用场! [mergerfs](https://github.com/trapexit/mergerfs) 是一个联合文件系统,旨在简化存储和管理众多商业存储设备上的文件。 你需要从他们的 [GitHub](https://github.com/trapexit/mergerfs/releases) 页面获取最新的 RPM。Fedora 的版本名称中带有 “fc” 和版本号。例如,这是 Fedora 31 的版本: [mergerfs-2.29.0-1.fc31.x86\_64.rpm](https://github.com/trapexit/mergerfs/releases/download/2.29.0/mergerfs-2.29.0-1.fc31.x86_64.rpm)。 ### 安装和配置 mergerfs 使用 `sudo` 安装已下载的 mergerfs 软件包: ```
$ sudo dnf install mergerfs-2.29.0-1.fc31.x86_64.rpm
``` 现在,你可以将多个磁盘挂载为一个驱动器。如果你有一台媒体服务器,并且希望所有媒体文件都显示在一个地方,这将很方便。如果将新文件上传到系统,那么可以将它们复制到 mergerfs 目录,mergerfs 会自动将它们复制到具有足够可用空间的磁盘上。 这是使你更容易理解的例子: ```
$ df -hT | grep disk
/dev/sdb1 ext4 23M 386K 21M 2% /disk1
/dev/sdc1 ext4 44M 1.1M 40M 3% /disk2
$ ls -l /disk1/Videos/
total 1
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv
$ ls -l /disk2/Videos/
total 2
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv
-rw-rw-r--.
1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv
``` 在此例中挂载了两块磁盘,分别为 `disk1` 和 `disk2`。两个驱动器都有一个包含文件的 `Videos` 目录。 现在,我们将使用 mergerfs 挂载这些驱动器,使它们看起来像一个更大的驱动器。 ```
$ sudo mergerfs -o defaults,allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=1M /disk1:/disk2 /media
``` mergerfs 手册页非常庞杂,因此我们将说明上面提到的选项。 * `defaults`:除非指定,否则将使用默认设置。 * `allow_other`:允许 `sudo` 或 `root` 以外的用户查看文件系统。 * `use_ino`:让 mergerfs 提供文件/目录 inode 而不是 libfuse。虽然不是默认值,但建议你启用它,以便链接的文件共享相同的 inode 值。 * `category.create=mfs`:根据可用空间在驱动器间分配文件。 * `moveonenospc=true`:如果启用,那么如果写入失败,将进行扫描以查找具有最大可用空间的驱动器。 * `minfreespace=1M`:最小可用空间值。 * `disk1`:第一块硬盘。 * `disk2`:第二块硬盘。 * `/media`:挂载驱动器的目录。 看起来是这样的: ```
$ df -hT | grep disk
/dev/sdb1 ext4 23M 386K 21M 2% /disk1
/dev/sdc1 ext4 44M 1.1M 40M 3% /disk2
$ df -hT | grep media
1:2 fuse.mergerfs 66M 1.4M 60M 3% /media
``` 你可以看到现在 mergerfs 挂载显示的总容量为 66M,这是两块硬盘的总容量。 继续示例: 有一个叫 `Baby's second Xmas.mkv` 的 30M 视频。让我们将其复制到用 mergerfs 挂载的 `/media` 文件夹中。 ```
$ ls -lh "Baby's second Xmas.mkv"
-rw-rw-r--. 1 curt curt 30M Apr 20 08:45 Baby's second Xmas.mkv
$ cp "Baby's second Xmas.mkv" /media/Videos/
``` 这是最终结果: ```
$ df -hT | grep disk
/dev/sdb1 ext4 23M 386K 21M 2% /disk1
/dev/sdc1 ext4 44M 31M 9.8M 76% /disk2
$ df -hT | grep media
1:2 fuse.mergerfs 66M 31M 30M 51% /media
``` 从磁盘空间利用率中可以看到,因为 `disk1` 没有足够的可用空间,所以 mergerfs 自动将文件复制到 `disk2`。 这是所有文件详情: ```
$ ls -l /disk1/Videos/
total 1
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv
$ ls -l /disk2/Videos/
total 30003
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv
-rw-rw-r--. 1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv
-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv
$ ls -l /media/Videos/
total 30004
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv
-rw-rw-r--. 1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv
-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv
-rw-r--r--.
1 curt curt 0 Mar 8 17:17 Our Wedding.mkv ``` 当你将文件复制到 mergerfs 挂载点时,它将始终将文件复制到有足够可用空间的硬盘上。如果池中的所有驱动器都没有足够的可用空间,那么你将无法复制它们。 --- via: <https://fedoramagazine.org/using-mergerfs-to-increase-your-virtual-storage/> 作者:[Curt Warfield](https://fedoramagazine.org/author/rcurtiswarfield/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
What happens if you have multiple disks or partitions that you’d like to use for a media project and you don’t want to lose any of your existing data, but you’d like to have everything located or mounted under one drive? That’s where mergerfs can come to your rescue! [mergerfs](https://github.com/trapexit/mergerfs) is a union filesystem geared towards simplifying storage and management of files across numerous commodity storage devices. You will need to grab the latest RPM from their github page [here](https://github.com/trapexit/mergerfs/releases). The releases for Fedora have **fc** and the version number in the name. For example, here is the version for Fedora 31: [mergerfs-2.29.0-1.fc31.x86_64.rpm](https://github.com/trapexit/mergerfs/releases/download/2.29.0/mergerfs-2.29.0-1.fc31.x86_64.rpm) ## Installing and configuring mergerfs Install the mergerfs package that you’ve downloaded [using sudo](https://fedoramagazine.org/howto-use-sudo/): $ sudo dnf install mergerfs-2.29.0-1.fc31.x86_64.rpm You will now be able to mount multiple disks as one drive. This comes in handy if you have a media server and you’d like all of your media files to show up under one location. If you upload new files to your system, you can copy them to your mergerfs directory and mergerfs will automatically copy them to whichever drive has enough free space available. Here is an example to make it easier to understand: $ df -hT | grep disk /dev/sdb1 ext4 23M 386K 21M 2% /disk1 /dev/sdc1 ext4 44M 1.1M 40M 3% /disk2 $ ls -l /disk1/Videos/ total 1 -rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv $ ls -l /disk2/Videos/ total 2 -rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv -rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv In this example there are two disks mounted as *disk1* and *disk2*. Both drives have a *Videos* directory with existing files. Now we’re going to mount those drives using mergerfs to make them appear as one larger drive.
$ sudo mergerfs -o defaults,allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=1M /disk1:/disk2 /media The mergerfs man page is quite extensive and complex so we’ll break down the options that were specified. *defaults*: This will use the default settings unless specified.*allow_other*: allows users besides sudo or root to see the filesystem.*use_ino*: Causes mergerfs to supply file/directory inodes rather than libfuse. While not a default it is recommended it be enabled so that linked files share the same inode value.*category.create=mfs*: Spreads files out across your drives based on available space.*moveonenospc=true*: If enabled, if writing fails, a scan will be done looking for the drive with the most free space.*minfreespace=1M*: The minimum space value used.*disk1*: First hard drive.*disk2*: Second hard drive.*/media*: The directory folder where the drives are mounted. Here is what it looks like: $ df -hT | grep disk /dev/sdb1 ext4 23M 386K 21M 2% /disk1 /dev/sdc1 ext4 44M 1.1M 40M 3% /disk2 $ df -hT | grep media 1:2 fuse.mergerfs 66M 1.4M 60M 3% /media You can see that the mergerfs mount now shows a total capacity of 66M which is the combined total of the two hard drives. ## Using mergerfs Continuing with the example: There is a 30Mb video called *Baby’s second Xmas.mkv*. Let’s copy it to the */media* folder which is the mergerfs mount. $ ls -lh "Baby's second Xmas.mkv" -rw-rw-r--. 1 curt curt 30M Apr 20 08:45 Baby's second Xmas.mkv $ cp "Baby's second Xmas.mkv" /media/Videos/ Here is the end result: $ df -hT | grep disk /dev/sdb1 ext4 23M 386K 21M 2% /disk1 /dev/sdc1 ext4 44M 31M 9.8M 76% /disk2 $ df -hT | grep media 1:2 fuse.mergerfs 66M 31M 30M 51% /media You can see from the disk space utilization that mergerfs automatically copied the file to disk2 because disk1 did not have enough free space. Here is a breakdown of all of the files: $ ls -l /disk1/Videos/ total 1 -rw-r--r--. 
1 curt curt 0 Mar 8 17:17 Our Wedding.mkv $ ls -l /disk2/Videos/ total 30003 -rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv -rw-rw-r--. 1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv -rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv $ ls -l /media/Videos/ total 30004 -rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv -rw-rw-r--. 1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv -rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv -rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv When you copy files to your mergerfs mount, it will always copy the files to the hard disk that has enough free space. If none of the drives in the pool have enough free space, then you won’t be able to copy them. ## Mike Nice article as I wasn’t aware of mergerfs, but someone is going to be really mad when they find out that their Wedding video and Baby’s First Christmas are empty files 😉 ## trapexit Why would their files be empty? No matter what storage system you have you need backup. ## Ph0zzy Does it have something to do with silverblue? I mean it would be nice to be able to install overlay packages without reboot. ## ondrej This is not possible, as the system is readonly, it cant be changed on the fly. You can always use toolbox (containers) or virtualization. ## Vernon Van Steenkist Great Article. A couple of questions: How is this different from regular unionfs? Do you take any file system performance hit? Where does the mea data get stored? Thanks ## trapexit The mergerfs docs are very thorough. I recommend checking them out. As in unionfs-fuse? 1) It’s still maintained. 2) It offers a lot more functionality. Yes. Naturally. There is an additional layer so it will take a hit. The amount depends greatly on the usage patterns and system. What metadata? 
## Mx Use lvm ## rfrr exactly ## Cara lvm and mergerfs serve different needs and lvm has the drawback of data spanning volumes, a failure that spans multiple volumes wouldmean data loss if any of the volumes it spans goes down, whereas with mergerfs you only lose whats on one volume and theres no data spanning multiple volumes. you cal also add in-use data disks to a mergerfs pool. …additionally you can use LVM with mergerfs since they work on fundamentally different levels https://www.teknophiles.com/2018/02/19/disk-pooling-in-linux-with-mergerfs/
12,211
Systemd 服务:响应变化
https://www.linux.com/blog/intro-to-linux/2018/6/systemd-services-reacting-change
2020-05-12T00:11:20
[ "systemd", "电脑棒" ]
https://linux.cn/article-12211-1.html
![](/data/attachment/album/202005/12/001037iz91uu9b15dqb9w3.jpg) [我有一个这样的电脑棒](https://www.intel.com/content/www/us/en/products/boards-kits/compute-stick/stk1a32sc.html)(图1),我把它用作通用服务器。它很小且安静,由于它是基于 x86 架构的,因此我为我的打印机安装驱动没有任何问题,而且这就是它大多数时候干的事:与客厅的共享打印机和扫描仪通信。 ![](/data/attachment/album/202005/11/235637fqr5snii7si5dgng.jpg) *一个英特尔电脑棒。欧元硬币大小。* 大多数时候它都是闲置的,尤其是当我们外出时,因此我认为用它作监视系统是个好主意。该设备没有自带的摄像头,也不需要一直监视。我也不想手动启动图像捕获,因为这样就意味着在出门前必须通过 SSH 登录,并在 shell 中编写命令来启动该进程。 因此,我想应该这么做:拿一个 USB 摄像头,然后只需插入它即可自动启动监视系统。如果这个电脑棒重启后发现连接着摄像头,也能启动监视系统,那就更加分了。 在先前的文章中,我们看到 systemd 服务既可以[手动启动或停止](/article-9700-1.html),也可以[在满足某些条件时启动或停止](/article-9703-1.html)。这些条件不限于操作系统在启动或关机时序中达到某种状态,还可以在你插入新硬件或文件系统发生变化时进行。你可以通过将 Udev 规则与 systemd 服务结合起来实现。 ### 有 Udev 支持的热插拔 Udev 规则位于 `/etc/udev/rules.d` 目录中,通常是由导致一个<ruby> 动作 <rt> action </rt></ruby>的<ruby> 条件 <rt> conditions </rt></ruby>和<ruby> 赋值 <rt> assignments </rt></ruby>的单行语句来描述。 有点神秘。让我们再解释一次: 通常,在 Udev 规则中,你会告诉 systemd 当设备连接时需要查看什么信息。例如,你可能想检查刚插入的设备的品牌和型号是否与你让 Udev 等待的设备的品牌和型号相对应。这些就是前面提到的“条件”。 然后,你可能想要更改一些内容,以便以后可以方便使用该设备。例如,更改设备的读写权限:如果插入 USB 打印机,你会希望用户能够从打印机读取信息(用户的打印应用程序需要知道其模型、制造商,以及是否准备好接受打印作业)并向其写入内容,即发送要打印的内容。更改设备的读写权限是通过你之前阅读的“赋值”之一完成的。 最后,你可能希望系统在满足上述条件时执行某些动作,例如在插入某个外部硬盘时启动备份程序以复制重要文件。这就是上面提到的“动作”的例子。 了解这些之后,来看看以下规则: ``` ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="e207", SYMLINK+="mywebcam", TAG+="systemd", MODE="0666", ENV{SYSTEMD_WANTS}="webcam.service" ``` 规则的第一部分, ``` ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="e207" [etc...
] ``` 表明了执行你想让系统执行的其他动作之前设备必须满足的条件。设备必须被添加到(`ACTION=="add"`)机器上,并且必须添加到 `video4linux` 子系统中。为了确保仅在插入正确的设备时才应用该规则,你必须确保 Udev 正确识别设备的制造商(`ATTRS{idVendor}=="03f0"`)和型号(`ATTRS{idProduct}=="e207"`)。 在本例中,我们讨论的是这个设备(图2): ![](/data/attachment/album/202005/12/000040vilf1ov3fgg4vovg.jpg) *这个试验使用的是 HP 的摄像头。* 注意怎样用 `==` 来表示这是一个逻辑操作。你应该像这样阅读上面的简要规则: > > 如果添加了一个设备并且该设备由 video4linux 子系统控制,而且该设备的制造商编码是 03f0,型号是 e207,那么… > > > 但是,你从哪里获取的这些信息?你在哪里找到触发事件的动作、制造商、型号等?你可以使用多个来源。你可以通过将摄像头插入机器并运行 `lsusb` 来获得 `idVendor` 和 `idProduct`: ``` lsusb Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 003 Device 003: ID 03f0:e207 Hewlett-Packard Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 003: ID 04f2:b1bb Chicony Electronics Co., Ltd Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub ``` 我用的摄像头是 HP 的,你在上面的列表中只能看到一个 HP 设备。`ID` 提供了制造商和型号,它们以冒号(`:`)分隔。如果你有同一制造商的多个设备,不确定哪个是哪个设备,请拔下摄像头,再次运行 `lsusb`,看看少了什么。 或者… 拔下摄像头,等待几秒钟,运行命令 `udevadm monitor --environment`,然后重新插入摄像头。由于你使用的是 HP 摄像头,你将看到: ``` udevadm monitor --environment UDEV [35776.495221] add /devices/pci0000:00/0000:00:1c.3/0000:04:00.0 /usb3/3-1/3-1:1.0/input/input21/event11 (input) .MM_USBIFNUM=00 ACTION=add BACKSPACE=guess DEVLINKS=/dev/input/by-path/pci-0000:04:00.0-usb-0:1:1.0-event /dev/input/by-id/usb-Hewlett_Packard_HP_Webcam_HD_2300-event-if00 DEVNAME=/dev/input/event11 DEVPATH=/devices/pci0000:00/0000:00:1c.3/0000:04:00.0/ usb3/3-1/3-1:1.0/input/input21/event11 ID_BUS=usb ID_INPUT=1 ID_INPUT_KEY=1 ID_MODEL=HP_Webcam_HD_2300 ID_MODEL_ENC=HPx20Webcamx20HDx202300 ID_MODEL_ID=e207 ID_PATH=pci-0000:04:00.0-usb-0:1:1.0 ID_PATH_TAG=pci-0000_04_00_0-usb-0_1_1_0 ID_REVISION=1020 ID_SERIAL=Hewlett_Packard_HP_Webcam_HD_2300 ID_TYPE=video ID_USB_DRIVER=uvcvideo
ID_USB_INTERFACES=:0e0100:0e0200:010100:010200:030000: ID_USB_INTERFACE_NUM=00 ID_VENDOR=Hewlett_Packard ID_VENDOR_ENC=Hewlettx20Packard ID_VENDOR_ID=03f0 LIBINPUT_DEVICE_GROUP=3/3f0/e207:usb-0000:04:00.0-1/button MAJOR=13 MINOR=75 SEQNUM=3162 SUBSYSTEM=input USEC_INITIALIZED=35776495065 XKBLAYOUT=es XKBMODEL=pc105 XKBOPTIONS= XKBVARIANT= ``` 可能看起来有很多信息要处理,但是,看一下这个:列表前面的 `ACTION` 字段,它告诉你刚刚发生了什么事件,即一个设备被添加到系统中。你还可以在其中几行中看到设备名称的拼写,因此可以非常确定它就是你要找的设备。输出里还显示了制造商的 ID(`ID_VENDOR_ID=03f0`)和型号(`ID_MODEL_ID=e207`)。 这为你提供了规则条件部分需要的四个值中的三个。你可能也会想到它还给了你第四个,因为还有一行这样写道: ``` SUBSYSTEM=input ``` 小心!尽管 USB 摄像头确实是提供输入的设备(键盘和鼠标也是),但它也属于 usb 子系统和其他几个子系统。这意味着你的摄像头被添加到了多个子系统,并且看起来像多个设备。如果你选择了错误的子系统,那么你的规则可能无法按你期望的那样工作,或者根本无法工作。 因此,第三件事就是检查网络摄像头被添加到的所有子系统,并选择正确的那个。为此,请再次拔下摄像头,然后运行: ``` ls /dev/video* ``` 这将向你显示连接到本机的所有视频设备。如果你使用的是笔记本,大多数笔记本都带有内置摄像头,它可能会显示为 `/dev/video0`。重新插入摄像头,然后再次运行 `ls /dev/video*`。 现在,你应该看到多一个视频设备(可能是 `/dev/video1`)。 现在,你可以通过运行 `udevadm info -a /dev/video1` 找出它所属的所有子系统: ``` udevadm info -a /dev/video1 Udevadm info starts with the device specified by the devpath and then walks up the chain of parent devices. It prints for every device found, all possible attributes in the udev rules key format. A rule to match, can be composed by the attributes of the device and the attributes from one single parent device. looking at device '/devices/pci0000:00/0000:00:1c.3/0000:04:00.0 /usb3/3-1/3-1:1.0/video4linux/video1': KERNEL=="video1" SUBSYSTEM=="video4linux" DRIVER=="" ATTR{dev_debug}=="0" ATTR{index}=="0" ATTR{name}=="HP Webcam HD 2300: HP Webcam HD" [etc...]
``` 输出持续了相当长的时间,但是你感兴趣的只是开头的部分:`SUBSYSTEM=="video4linux"`。你可以将这行文本直接复制粘贴到你的规则中。输出的其余部分(为简洁未显示)为你提供了更多的信息,例如制造商和型号 ID,同样是以你可以复制粘贴到你的规则中的格式。 现在,你有了识别设备的方式,也明确了什么事件应该触发该动作,接下来该对设备进行修改了。 规则的下一部分,`SYMLINK+="mywebcam", TAG+="systemd", MODE="0666"` 告诉 Udev 做三件事:首先,你要创建设备的符号链接(例如从 `/dev/video1` 到 `/dev/mywebcam`)。这是因为你无法预测系统默认情况下会把那个设备叫什么。当你拥有内置摄像头并热插拔一个新的时,内置摄像头通常为 `/dev/video0`,而外部摄像头通常为 `/dev/video1`。但是,如果你在插入外部 USB 摄像头的情况下重启计算机,则可能会相反,内部摄像头可能会变成 `/dev/video1`,而外部摄像头会变成 `/dev/video0`。这是想告诉你,尽管你的图像捕获脚本(稍后将看到)总是需要指向外部摄像头设备,但是你不能依赖它是 `/dev/video0` 或 `/dev/video1`。为了解决这个问题,你告诉 Udev 创建一个符号链接,该链接在设备被添加到 `video4linux` 子系统的那一刻起就不会再变,你将使你的脚本指向该链接。 第二件事就是将 `systemd` 添加到与此规则关联的 Udev 标记列表中。这告诉 Udev,该规则触发的动作将由 systemd 管理,即它将是某种 systemd 服务。 注意在这两种情况下是如何使用 `+=` 运算符的。这会将值添加到列表中,这意味着你可以向 `SYMLINK` 和 `TAG` 添加多个值。 另一方面,`MODE` 只能包含一个值(因此,你可以使用简单的 `=` 赋值运算符)。`MODE` 的作用是告诉 Udev 谁可以读或写该设备。如果你熟悉 `chmod`(你读到此文,应该会熟悉),你就也会熟悉[如何用数字表示权限](https://chmod-calculator.com/)。这就是它的含义:`0666` 的含义是 “向所有人授予对设备的读写权限”。 最后,`ENV{SYSTEMD_WANTS}="webcam.service"` 告诉 Udev 要运行什么 systemd 服务。 将此规则保存到 `/etc/udev/rules.d` 目录下名为 `90-webcam.rules`(或类似的名称)的文件中,你可以通过重启机器或运行以下命令来加载它: ``` sudo udevadm control --reload-rules && udevadm trigger ``` ### 最后的服务 Udev 规则触发的服务非常简单: ``` # webcam.service [Service] Type=simple ExecStart=/home/[user name]/bin/checkimage.sh ``` 基本上,它只是运行存储在你个人 `bin/` 中的 `checkimage.sh` 脚本并将其放到后台。[这是你在先前的文章中看过的内容](/article-9700-1.html)。它看起来似乎很小,但那只是因为它是被 Udev 规则调用的,你刚刚创建了一种特殊的 systemd 单元,称为 `device` 单元。 恭喜。 至于 `webcam.service` 调用的 `checkimage.sh` 脚本,有几种方法从摄像头抓取图像并将其与前一个图像进行比较以检查变化(这是 `checkimage.sh` 所做的事),但这是我的方法: ``` #!/bin/bash # This is the checkimage.sh script mplayer -vo png -frames 1 tv:// -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null mv 00000001.png /home/[user name]/monitor/monitor.png while true do mplayer -vo png -frames 1 tv:// -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null mv 00000001.png /home/[user name]/monitor/temp.png imagediff=`compare
-metric mae /home/[user name]/monitor/monitor.png /home/[user name]/monitor/temp.png /home/[user name]/monitor/diff.png 2>&1 > /dev/null | cut -f 1 -d " "` if [ `echo "$imagediff > 700.0" | bc` -eq 1 ] then mv /home/[user name]/monitor/temp.png /home/[user name]/monitor/monitor.png fi sleep 0.5 done ``` 首先使用 [MPlayer](https://mplayerhq.hu/design7/news.html) 从摄像头抓取一帧(`00000001.png`)。注意,我们怎样将 `mplayer` 指向 Udev 规则中创建的 `mywebcam` 符号链接,而不是指向 `video0` 或 `video1`。然后,将图像传输到主目录中的 `monitor/` 目录。然后执行一个无限循环,一次又一次地执行相同的操作,但还使用了 [ImageMagick 的 compare 工具](https://www.imagemagick.org/script/compare.php) 来查看最后捕获的图像与 `monitor/` 目录中已有的图像之间是否存在差异。 如果图像不同,则表示摄像头的画面里有某些东西动了。该脚本将新图像覆盖原始图像,并继续比较以等待更多变动。 ### 插线 所有东西准备好后,当你插入摄像头时,你的 Udev 规则将被触发并启动 `webcam.service`。 `webcam.service` 将在后台执行 `checkimage.sh`,而 `checkimage.sh` 将开始每半秒拍一次照。你能察觉到它在工作,因为摄像头的 LED 会在每次拍照时闪烁。 与往常一样,如果出现问题,请运行: ``` systemctl status webcam.service ``` 检查你的服务和脚本正在做什么。 ### 接下来 你可能想知道:为什么要覆盖原始图像?当然,系统检测到任何动静,你都想知道发生了什么,对吗?你是对的,但是如你在下一部分中将看到的那样,将它们保持原样,并使用另一种类型的 systemd 单元处理图像会更好、更清晰、更简单。 请期待下一篇。 --- via: <https://www.linux.com/blog/intro-to-linux/2018/6/systemd-services-reacting-change> 作者:[Paul Brown](https://www.linux.com/users/bro66) 选题:[lujun9972](https://github.com/lujun9972) 译者:[messon007](https://github.com/messon007) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
12,214
学会爱上 systemd
https://opensource.com/article/20/4/systemd
2020-05-13T08:51:00
[ "启动", "引导", "初始化", "systemd" ]
https://linux.cn/article-12214-1.html
> > systemd 是所有进程之母,负责将 Linux 主机启动到可以做生产性任务的状态。 > > > ![](/data/attachment/album/202005/13/085016gy86wj713zh7xq71.jpg) systemd(是的,全小写,即使在句子开头也是小写),是初始化程序(`init`)和 SystemV 初始化脚本的现代替代者。此外,它还有更多功能。 当我想到 `init` 和 SystemV 初始化时,像大多数系统管理员一样,我想到的是 Linux 的启动和关闭,而不是真正意义上的管理服务,例如在服务启动和运行后对其进行管理。像 `init` 一样,systemd 是所有进程之母,它负责使 Linux 主机启动到可以做生产性任务的状态。systemd 所承担的一些功能比老的初始化程序要广泛得多,它管理着正在运行的 Linux 主机的许多方面,包括挂载文件系统、管理硬件、处理定时器以及启动和管理生产性主机所需的系统服务。 本系列文章是基于我的三卷本 Linux 培训课程《[使用和管理 Linux:从零开始学习系统管理](http://www.both.org/?page_id=1183)》部分内容的摘录,探讨了 systemd 在启动和启动完成后的功能。 ### Linux 引导 Linux 主机从关机状态到运行状态的完整启动过程很复杂,但它是开放的并且是可知的。在详细介绍之前,我将简要介绍一下从主机硬件被上电到系统准备好用户登录的过程。大多数时候,“引导过程”被作为一个整体来讨论,但这是不准确的。实际上,完整的引导和启动过程包含三个主要部分: * 硬件引导:初始化系统硬件 * Linux <ruby> 引导 <rp> ( </rp> <rt> boot </rt> <rp> ) </rp></ruby>:加载 Linux 内核和 systemd * Linux <ruby> 启动 <rp> ( </rp> <rt> startup </rt> <rp> ) </rp></ruby>:systemd 为主机的生产性工作做准备 Linux 启动阶段始于内核加载了 `init` 或 systemd(取决于具体发行版使用的是旧的方式还是新的方式)之后。`init` 和 systemd 程序启动并管理所有其它进程,它们在各自的系统上都被称为“所有进程之母”。 将硬件引导与 Linux 引导及 Linux 启动区分开,并明确定义它们之间的分界点是很重要的。理解它们的差异以及它们每一个在使 Linux 系统进入生产状态所起的作用,才能够管理这些进程,并更好地确定大部分人所谓的“启动”问题出在哪里。 启动过程按照三步引导流程,使 Linux 计算机进入可进行生产工作的状态。当内核将主机的控制权转移到 systemd 时,启动环节开始。 ### systemd 之争 systemd 引起了系统管理员和其它负责维护 Linux 系统正常运行的人员的广泛争议。在许多 Linux 系统中,systemd 接管了大量任务,这在某些开发者和系统管理员群体中引起了反对和不和谐。 SystemV 和 systemd 是执行 Linux 启动环节的两种不同的方法。SystemV 启动脚本和 `init` 程序是老的方法,而使用<ruby> 目标 <rt> target </rt></ruby>的 systemd 是新方法。尽管大多数现代 Linux 发行版都使用较新的 systemd 进行启动、关机和进程管理,但仍有一些发行版未采用。原因之一是某些发行版维护者和系统管理员喜欢老的 SystemV 方法,而不是新的 systemd。 我认为两者都有其优势。 #### 为何我更喜欢 SystemV 我更喜欢 SystemV,因为它更开放。使用 Bash 脚本来完成启动。内核启动 `init` 程序(这是一个编译后的二进制)后,`init` 启动 `rc.sysinit` 脚本,该脚本执行许多系统初始化任务。`rc.sysinit` 执行完后,`init` 启动 `/etc/rc.d/rc` 脚本,该脚本依次启动 `/etc/rc.d/rcX.d` 中由 SystemV 启动脚本定义的各种服务。其中 `X` 是待启动的运行级别号。 除了 `init` 程序本身之外,所有这些程序都是开放且易于理解的脚本。可以通读这些脚本并确切了解整个启动过程中发生的事情,但是我不认为有太多系统管理员真正做到这一点。每个启动脚本都被编了号,以便按特定顺序启动预期的服务。服务是串行启动的,一次只能启动一个服务。 systemd 是由 Red Hat 的 Lennart Poettering 和 Kay Sievers
开发的,它是一个由大型的、编译的二进制可执行文件构成的复杂系统,不访问其源码就无法理解。它是开源的,因此“访问其源代码”并不难,只是不太方便。systemd 似乎表现出对 Linux 哲学多个原则的重大驳斥。作为二进制文件,systemd 无法被直接打开供系统管理员查看或进行简单更改。systemd 试图做所有事情,例如管理正在运行的服务,同时提供明显比 SystemV 更多的状态信息。它还管理硬件、进程、进程组、文件系统挂载等。systemd 几乎涉足于现代 Linux 主机的每个方面,使它成为系统管理的一站式工具。所有这些都明显违反了“程序应该小,且每个程序都应该只做一件事并做好”的原则。 #### 为何我更喜欢 systemd 我更喜欢用 systemd 作为启动机制,因为它会根据启动阶段并行地启动尽可能多的服务。这样可以加快整个的启动速度,使得主机系统比 SystemV 更快地到达登录屏幕。 systemd 几乎可以管理正在运行的 Linux 系统的各个方面。它可以管理正在运行的服务,同时提供比 SystemV 多得多的状态信息。它还管理硬件、进程和进程组、文件系统挂载等。systemd 几乎涉足于现代 Linux 操作系统的每个方面,使其成为系统管理的一站式工具。(听起来熟悉吧?) systemd 工具是编译后的二进制文件,但该工具包是开放的,因为所有配置文件都是 ASCII 文本文件。可以通过各种 GUI 和命令行工具来修改启动配置,也可以添加或修改各种配置文件来满足特定的本地计算环境的需求。 #### 真正的问题 你认为我不能喜欢两种启动系统吗?我可以,两个我都能用。 我认为,SystemV 和 systemd 之间大多数争议的真正问题和根本原因在于,在系统管理层面[没有选择权](http://www.osnews.com/story/28026/Editorial_Thoughts_on_Systemd_and_the_Freedom_to_Choose)。使用 SystemV 还是 systemd 已经由各种发行版的开发人员、维护人员和打包人员选择了(但有充分的理由)。由于 `init` 极端的侵入性,挖出并替换 `init` 系统会带来很多在发行版设计过程之外难以解决的后果。 尽管该选择实际上是为我而选的,但我的 Linux 主机能不能开机、能不能工作,这是我平时最关心的。作为最终用户,甚至是系统管理员,我主要关心的是我是否可以完成我的工作,例如写我的书和这篇文章,安装更新以及编写脚本来自动化所有事情。只要我能做我的工作,我就不会真正在意发行版中使用的启动系统。 在启动或服务管理出现问题时,我会在意。无论主机上使用哪种启动系统,我都足够了解如何沿着事件顺序来查找故障并进行修复。 #### 替换 SystemV 以前曾有过用更现代的东西替代 SystemV 的尝试。在大约两个版本的时间里,Fedora 使用了一个叫作 Upstart 的东西来替换老化的 SystemV,但是它没有取代 `init`,也没有提供我所注意到的任何变化。由于 Upstart 并未对 SystemV 的问题进行任何显著的改变,所以在这个方向上的努力很快就被放弃了,转而使用 systemd。 尽管大部分 Linux 开发人员都认可替换旧的 SystemV 启动系统是个好主意,但许多开发人员和系统管理员并不喜欢 systemd。与其重新讨论人们在 systemd 中遇到的或曾经遇到过的所有所谓的问题,不如带你去看两篇好文章,尽管有些陈旧,但它们涵盖了大多数内容。Linux 内核的创建者 Linus Torvalds 对 systemd 似乎不感兴趣。在 2014 年 ZDNet 的一篇文章《[Linus Torvalds 和其他人对 Linux 上的 systemd 的看法](https://www.zdnet.com/article/linus-torvalds-and-others-on-linuxs-systemd/)》中,Linus 清楚地表达了他的感受。 > > “实际上我对 systemd 本身没有任何特别强烈的意见。我对一些核心开发人员有一些问题,我认为他们在对待错误和兼容性方面过于轻率,而且我认为某些设计细节是疯狂的(例如,我不喜欢二进制日志),但这只是细节,不是大问题。” > > > 如果你对 Linus 不太了解的话,我可以告诉你,如果他不喜欢某事,他会非常直言不讳、明确而毫不含糊地表示出来。不过,他表达不满的方式已经变得更容易被大家接受了。 2013 年,Poettering 写了一篇很长的博客,他在文章中驳斥了[关于 systemd
的迷思](http://0pointer.de/blog/projects/the-biggest-myths.html),同时对创建 systemd 的一些原因进行了深入的剖析。这是一份很好的读物,我强烈建议你阅读。 ### systemd 任务 根据编译过程中使用的选项(不在本系列中介绍),systemd 可以有多达 69 个二进制可执行文件,执行的任务包括: * `systemd` 程序以 1 号进程(PID 1)运行,并提供使尽可能多的服务并行启动的系统启动能力,从而加快了总体启动速度。它还管理关机顺序。 * `systemctl` 程序提供了服务管理的用户接口。 * 支持 SystemV 和 LSB 启动脚本,以便向后兼容。 * 服务管理和报告提供了比 SystemV 更多的服务状态数据。 * 提供基本的系统配置工具,例如主机名、日期、语言环境、已登录用户的列表、正在运行的容器和虚拟机、系统帐户、运行时目录及设置,以及用于简易网络配置、网络时间同步、日志转发和名称解析的守护进程。 * 提供套接字管理。 * systemd 定时器提供类似 cron 的高级功能,包括在相对于系统启动、systemd 启动时间、定时器上次启动时间的某个时间点运行脚本。 * 它提供了一个工具来分析定时器规范中使用的日期和时间。 * 能感知层次结构的文件系统挂载和卸载功能可以更安全地级联挂载文件系统。 * 允许主动地创建和管理临时文件,包括删除它们。 * D-Bus 的接口提供了在插入或移除设备时运行脚本的能力。这允许将所有设备(无论是否可插拔)都被视为即插即用,从而大大简化了设备的处理。 * 分析启动环节的工具可用于查找耗时最多的服务。 * 它包括用于存储系统消息的日志以及管理日志的工具。 ### 架构 这些以及更多的任务通过许多守护程序、控制程序和配置文件来支持。图 1 显示了许多属于 systemd 的组件。这是一个简化的图,旨在提供概要描述,因此它并不包括所有独立的程序或文件。它也不提供数据流的视角,数据流非常复杂,因此在本系列文章的背景下不做讨论。 ![系统架构](/data/attachment/album/202005/13/085112xl9ukqlulkugszo5.png "systemd architecture") *图 1:systemd 的架构,作者 Shmuel Csaba Otto Traian (CC BY-SA 3.0)* 如果要完整地讲解 systemd 就需要一本书。你不需要了解图 1 中的 systemd 组件是如何组合在一起的细节。只需了解支持各种 Linux 服务管理以及日志文件和日志处理的程序和组件就够了。但是很明显,systemd 并不是某些批评者所宣称的那种单一的庞然大物。 ### 作为 1 号进程的 systemd systemd 是 1 号进程(PID 1)。它的一些功能,比老的 SystemV3 `init` 要广泛得多,用于管理正在运行的 Linux 主机的许多方面,包括挂载文件系统以及启动和管理 Linux 生产主机所需的系统服务。与启动环节无关的任何 systemd 任务都不在本文讨论范围之内(但本系列后面的一些文章将探讨其中的一些任务)。 首先,systemd 挂载 `/etc/fstab` 所定义的文件系统,包括所有交换文件或分区。此时,它可以访问位于 `/etc` 中的配置文件,包括它自己的配置文件。它使用其配置链接 `/etc/systemd/system/default.target` 来确定将主机引导至哪个状态或目标。`default.target` 文件是指向真实目标文件的符号链接。对于桌面工作站,通常是 `graphical.target`,它相当于 SystemV 中的运行级别 5。对于服务器,默认值更可能是 `multi-user.target`,相当于 SystemV 中的运行级别 3。`emergency.target` 类似于单用户模式。<ruby> 目标 <rt> target </rt></ruby>和<ruby> 服务 <rt> service </rt></ruby>是 systemd 的<ruby> 单元 <rt> unit </rt></ruby>。 下表(图 2)将 systemd 目标与老的 SystemV 启动运行级别进行了比较。systemd 提供 systemd 目标别名以便向后兼容。目标别名允许脚本(以及许多系统管理员)使用 SystemV 命令(如 `init 3`)更改运行级别。当然,SystemV 命令被转发给 systemd 进行解释和执行。 | systemd 目标 | SystemV 运行级别 | 目标别名 |
描述 | | --- | --- | --- | --- | | `default.target` | | | 此目标总是通过符号链接的方式成为 `multi-user.target` 或 `graphical.target` 的别名。systemd 始终使用 `default.target` 来启动系统。`default.target` 绝不应该设为 `halt.target`、`poweroff.target` 或 `reboot.target` 的别名。 | | `graphical.target` | 5 | `runlevel5.target` | 带有 GUI 的 `multi-user.target`。 | | | 4 | `runlevel4.target` | 未用。在 SystemV 中运行级别 4 与运行级别 3 相同。可以创建并自定义此目标以启动本地服务,而无需更改默认的 `multi-user.target`。 | | `multi-user.target` | 3 | `runlevel3.target` | 所有服务在运行,但仅有命令行界面(CLI)。 | | | 2 | `runlevel2.target` | 多用户,没有 NFS,其它所有非 GUI 服务在运行。 | | `rescue.target` | 1 | `runlevel1.target` | 基本系统,包括挂载文件系统,运行最基本的服务和主控制台的恢复 shell。 | | `emergency.target` | S | | 单用户模式:没有服务运行;不挂载文件系统。这是最基本的工作级别,只有主控制台上运行的一个紧急 Shell 供用户与系统交互。 | | `halt.target` | | | 停止系统而不关闭电源。 | | `reboot.target` | 6 | `runlevel6.target` | 重启。 | | `poweroff.target` | 0 | `runlevel0.target` | 停止系统并关闭电源。 | *图 2:SystemV 运行级别与 systemd 目标和一些目标别名的比较* 每个目标在其配置文件中都描述了一个依赖集。systemd 启动必需的依赖项,这些依赖项是运行 Linux 主机到特定功能级别所需的服务。当目标配置文件中列出的所有依赖项被加载并运行后,系统就在该目标级别运行了。在图 2 中,功能最多的目标位于表的顶部,从顶向下,功能逐步递减。 systemd 还会检查老的 SystemV `init` 目录,以确认是否存在任何启动文件。如果有,systemd 会将它们作为配置文件以启动它们描述的服务。网络服务是一个很好的例子,在 Fedora 中它仍然使用 SystemV 启动文件。 图 3(如下)是直接从启动手册页复制来的。它显示了 systemd 启动期间一般的事件环节以及确保成功启动的基本顺序要求。 ``` cryptsetup-pre.target | (various low-level v API VFS mounts: (various cryptsetup devices...) mqueue, configfs, | | debugfs, ...) v | | cryptsetup.target | | (various swap | | remote-fs-pre.target | devices...) | | | | | | | | | v | v local-fs-pre.target | | | (network file systems) | swap.target | | v v | | | v | remote-cryptsetup.target | | | (various low-level (various mounts and | | | | | services: udevd, fsck services...) | | remote-fs.target | | tmpfiles, random | | | / | | seed, sysctl, ...)
v | | / | | | local-fs.target | | / | | | | | | / \____|______|_______________ ______|___________/ | / \ / | / v | / sysinit.target | / | | / ______________________/|\_____________________ | / / | | | \ | / | | | | | | / v v | v | | / (various (various | (various | |/ timers...) paths...) | sockets...) | | | | | | | | v v | v | | timers.target paths.target | sockets.target | | | | | | v | v \_______ | _____/ rescue.service | \|/ | | v v | basic.target rescue.target | | | ________v____________________ | / | \ | | | | | v v v | display- (various system (various system | manager.service services services) | | required for | | | graphical UIs) v v | | multi-user.target emergency.service | | | | \_____________ | _____________/ v \|/ emergency.target v graphical.target ``` *图 3: systemd 启动图* `sysinit.target` 和 `basic.target` 目标可以看作启动过程中的检查点。尽管 systemd 的设计目标之一是并行启动系统服务,但是某些服务和功能目标必须先启动,然后才能启动其它服务和目标。直到该检查点所需的所有服务和目标被满足后才能通过这些检查点。 当 `sysinit.target` 所依赖的所有单元都完成时,就会到达 `sysinit.target`。所有这些单元,包括挂载文件系统、设置交换文件、启动 Udev、设置随机数生成器种子、启动低层服务以及配置安全服务(如果一个或多个文件系统是加密的)都必须被完成,但在 `sysinit.target` 中,这些任务可以并行执行。 `sysinit.target` 启动了系统接近正常运行所需的所有低层服务和单元,它们也是进入 `basic.target` 所需的。 在完成 `sysinit.target` 之后,systemd 会启动实现下一个目标所需的所有单元。`basic.target` 通过启动所有下一目标所需的单元来提供一些额外功能。包括设置为各种可执行程序目录的路径、设置通信套接字和计时器之类。 最后,用户级目标 `multi-user.target` 或 `graphical.target` 被初始化。要满足 `graphical.target` 的依赖必须先达到 `multi-user.target`。图 3 中带下划线的目标是通常的启动目标。当达到这些目标之一时,启动就完成了。如果 `multi-user.target` 是默认设置,那么你应该在控制台上看到文本模式的登录界面。如果 `graphical.target` 是默认设置,那么你应该看到图形的登录界面。你看到的具体的 GUI 登录界面取决于你的默认显示管理器。 引导手册页还描述并提供了引导到初始化 RAM 磁盘和 systemd 关机过程的图。 systemd 还提供了一个工具,该工具列出了完整的启动过程或指定单元的依赖项。单元是一个可控的 systemd 资源实体,其范围可以从特定服务(例如 httpd 或 sshd)到计时器、挂载、套接字等。尝试以下命令并滚动查看结果。 ``` systemctl list-dependencies graphical.target ``` 注意,这会完全展开使系统进入 `graphical.target` 运行模式所需的顶层目标单元列表。也可以使用 `--all` 选项来展开所有其它单元。 ``` systemctl list-dependencies --all graphical.target ``` 你可以使用 `less` 命令来搜索诸如 `target`、`slice` 和 `socket` 之类的字符串。 现在尝试下面的方法。 ``` systemctl 
list-dependencies multi-user.target ``` 和 ``` systemctl list-dependencies rescue.target ``` 和 ``` systemctl list-dependencies local-fs.target ``` 和 ``` systemctl list-dependencies dbus.service ``` 这个工具帮助我可视化我正在用的主机的启动依赖细节。继续花一些时间探索一个或多个 Linux 主机的启动树。但是要小心,因为 systemctl 手册页包含以下注释: > > “请注意,此命令仅列出当前被服务管理器加载到内存的单元。尤其是,此命令根本不适合用于获取特定单元的全部反向依赖关系列表,因为它不会列出被单元声明了但是未加载的依赖项。” > > > ### 结尾语 即使尚未深入研究 systemd,也能明显看出它既强大又复杂。显然,systemd 不是单一、庞大、独体且不可知的二进制文件。相反,它是由许多较小的组件和旨在执行特定任务的子命令组成的。 本系列的下一篇文章将更详细地探讨 systemd 的启动,以及 systemd 的配置文件、更改默认的目标以及如何创建简单的服务单元。 ### 资源 互联网上有大量关于 systemd 的信息,但是很多都很简短、晦涩甚至带有误导性。除了本文提到的资源外,以下网页还提供了有关 systemd 启动的更详细和可靠的信息。 * Fedora 项目有一份很实用的 [systemd 指南](https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html)。它包含了通过 systemd 来配置、管理和维护 Fedora 主机所需的几乎所有知识。 * Fedora 项目还有一个不错的[速查表](https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet),将老的 SystemV 命令与对应的 systemd 命令相互关联。 * 有关 systemd 的详细技术信息及创建它的原因,请查看 [Freedesktop.org](http://Freedesktop.org) 上的 [systemd 描述](http://www.freedesktop.org/wiki/Software/systemd)。 * [Linux.com](http://Linux.com) 的“systemd 的更多乐趣”提供了更高级的 systemd [信息和技巧](https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/)。 还有针对 Linux 系统管理员的一系列技术性很强的文章,作者是 systemd 的设计师和主要开发者 Lennart Poettering。这些文章是在 2010 年 4 月至 2011 年 9 月之间撰写的,但它们现在和那时一样有用。关于 systemd 及其生态的其它许多好文都基于这些文章。 * [重新思考 1 号进程](http://0pointer.de/blog/projects/systemd.html) * [systemd 系统管理员篇 I](http://0pointer.de/blog/projects/systemd-for-admins-1.html) * [systemd 系统管理员篇 II](http://0pointer.de/blog/projects/systemd-for-admins-2.html) * [systemd 系统管理员篇 III](http://0pointer.de/blog/projects/systemd-for-admins-3.html) * [systemd 系统管理员篇 IV](http://0pointer.de/blog/projects/systemd-for-admins-4.html) * [systemd 系统管理员篇 V](http://0pointer.de/blog/projects/three-levels-of-off.html) * [systemd 系统管理员篇 VI](http://0pointer.de/blog/projects/changing-roots) * [systemd 系统管理员篇
VII](http://0pointer.de/blog/projects/blame-game.html) * [systemd 系统管理员篇 VIII](http://0pointer.de/blog/projects/the-new-configuration-files.html) * [systemd 系统管理员篇 IX](http://0pointer.de/blog/projects/on-etc-sysinit.html) * [systemd 系统管理员篇 X](http://0pointer.de/blog/projects/instances.html) * [systemd 系统管理员篇 XI](http://0pointer.de/blog/projects/inetd.html) --- via: <https://opensource.com/article/20/4/systemd> 作者:[David Both](https://opensource.com/users/dboth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[messon007](https://github.com/messon007) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
systemd—yes, all lower-case, even at the beginning of a sentence—is the modern replacement for init and SystemV init scripts. It is also much more. Like most sysadmins, when I think of the init program and SystemV, I think of Linux startup and shutdown and not really much else, like managing services once they are up and running. Like init, systemd is the mother of all processes, and it is responsible for bringing the Linux host up to a state in which productive work can be done. Some of the functions assumed by systemd, which is far more extensive than the old init program, are to manage many aspects of a running Linux host, including mounting filesystems, managing hardware, handling timers, and starting and managing the system services that are required to have a productive Linux host. This series of articles, which is based in part on excerpts from my three-volume Linux training course, [ Using and administering Linux: zero to sysadmin](http://www.both.org/?page_id=1183), explores systemd's functions both at startup and beginning after startup finishes. ## Linux boot The complete process that takes a Linux host from an off state to a running state is complex, but it is open and knowable. Before getting into the details, I'll give a quick overview from when the host hardware is turned on until the system is ready for a user to log in. Most of the time, "the boot process" is discussed as a single entity, but that is not accurate. There are, in fact, three major parts to the full boot and startup process: **Hardware boot:**Initializes the system hardware**Linux boot:**Loads the Linux kernel and then systemd**Linux startup:**Where systemd prepares the host for productive work The Linux startup sequence begins after the kernel has loaded either init or systemd, depending upon whether the distribution uses the old or new startup, respectively. 
The init and systemd programs start and manage all the other processes and are both known as the "mother of all processes" on their respective systems. It is important to separate the hardware boot from the Linux boot from the Linux startup and to explicitly define the demarcation points between them. Understanding these differences and what part each plays in getting a Linux system to a state where it can be productive makes it possible to manage these processes and better determine where a problem is occurring during what most people refer to as "boot." The startup process follows the three-step boot process and brings the Linux computer up to an operational state in which it is usable for productive work. The startup process begins when the kernel transfers control of the host to systemd. ## systemd controversy systemd can evoke a wide range of reactions from sysadmins and others responsible for keeping Linux systems up and running. The fact that systemd is taking over so many tasks in many Linux systems has engendered pushback and discord among certain groups of developers and sysadmins. SystemV and systemd are two different methods of performing the Linux startup sequence. SystemV start scripts and the init program are the old methods, and systemd using targets is the new method. Although most modern Linux distributions use the newer systemd for startup, shutdown, and process management, there are still some that do not. One reason is that some distribution maintainers and some sysadmins prefer the older SystemV method over the newer systemd. I think both have advantages. ### Why I prefer SystemV I prefer SystemV because it is more open. Startup is accomplished using Bash scripts. After the kernel starts the init program, which is a compiled binary, init launches the **rc.sysinit** script, which performs many system initialization tasks. 
After **rc.sysinit** completes, init launches the **/etc/rc.d/rc** script, which in turn starts the various services defined by the SystemV start scripts in the **/etc/rc.d/rcX.d**, where "X" is the number of the runlevel being started. Except for the init program itself, all these programs are open and easily knowable scripts. It is possible to read through these scripts and learn exactly what is taking place during the entire startup process, but I don't think many sysadmins actually do that. Each start script is numbered so that it starts its intended service in a specific sequence. Services are started serially, and only one service starts at a time. systemd, developed by Red Hat's Lennart Poettering and Kay Sievers, is a complex system of large, compiled binary executables that are not understandable without access to the source code. It is open source, so "access to the source code" isn't hard, just less convenient. systemd appears to represent a significant refutation of multiple tenets of the Linux philosophy. As a binary, systemd is not directly open for the sysadmin to view or make easy changes. systemd tries to do everything, such as managing running services, while providing significantly more status information than SystemV. It also manages hardware, processes, and groups of processes, filesystem mounts, and much more. systemd is present in almost every aspect of the modern Linux host, making it the one-stop tool for system management. All of this is a clear violation of the tenets that programs should be small and that each program should do one thing and do it well. ### Why I prefer systemd I prefer systemd as my startup mechanism because it starts as many services as possible in parallel, depending upon the current stage in the startup process. This speeds the overall startup and gets the host system to a login screen faster than SystemV. systemd manages almost every aspect of a running Linux system. 
It can manage running services while providing significantly more status information than SystemV. It also manages hardware, processes and groups of processes, filesystem mounts, and much more. systemd is present in almost every aspect of the modern Linux operating system, making it the one-stop tool for system management. (Does this sound familiar?)

The systemd tools are compiled binaries, but the tool suite is open because all the configuration files are ASCII text files. Startup configuration can be modified through various GUI and command-line tools, as well as adding or modifying various configuration files to suit the needs of the specific local computing environment.

### The real issue

Did you think I could not like both startup systems? I do, and I can work with either one.

In my opinion, the real issue and the root cause of most of the controversy between SystemV and systemd is that there is [no choice](http://www.osnews.com/story/28026/Editorial_Thoughts_on_Systemd_and_the_Freedom_to_Choose) on the sysadmin level. The choice of whether to use SystemV or systemd has already been made by the developers, maintainers, and packagers of the various distributions—but with good reason. Scooping out and replacing an init system, by its extreme, invasive nature, has a lot of consequences that would be hard to tackle outside the distribution design process.

Despite the fact that this choice is made for me, my Linux hosts boot up and work, which is what I usually care the most about. As an end user and even as a sysadmin, my primary concern is whether I can get my work done, work such as writing my books and this article, installing updates, and writing scripts to automate everything. So long as I can do my work, I don't really care about the start sequence used on my distro.

I do care when there is a problem during startup or service management.
Regardless of which startup system is used on a host, I know enough to follow the sequence of events to find the failure and fix it.

### Replacing SystemV

There have been previous attempts at replacing SystemV with something a bit more modern. For about two releases, Fedora used a thing called Upstart to replace the aging SystemV, but it did not replace init and provided no changes that I noticed. Because Upstart provided no significant changes to the issues surrounding SystemV, efforts in this direction were quickly dropped in favor of systemd.

Despite the fact that most Linux developers agree that replacing the old SystemV startup is a good idea, many developers and sysadmins dislike systemd for that. Rather than rehash all the so-called issues that people have—or had—with systemd, I will refer you to two good, if somewhat old, articles that should cover most everything. Linus Torvalds, the creator of the Linux kernel, seems disinterested. In a 2014 ZDNet article, *Linus Torvalds and others on Linux's systemd*, Linus is clear about his feelings. "I don't actually have any particularly strong opinions on systemd itself. I've had issues with some of the core developers that I think are much too cavalier about bugs and compatibility, and I think some of the design details are insane (I dislike the binary logs, for example), but those are details, not big issues."

In case you don't know much about Linus, I can tell you that if he does not like something, he is very outspoken, explicit, and quite clear about that dislike. He has become more socially acceptable in his manner of addressing his dislike about things.

In 2013, Poettering wrote a long blog post in which he debunks the [myths about systemd](http://0pointer.de/blog/projects/the-biggest-myths.html) while providing insight into some of the reasons for creating it. This is a very good read, and I highly recommend it.
## systemd tasks

Depending upon the options used during the compile process (which are not considered in this series), systemd can have as many as 69 binary executables that perform the following tasks, among others:

- The systemd program runs as PID 1 and provides system startup of as many services in parallel as possible, which, as a side effect, speeds overall startup times. It also manages the shutdown sequence.
- The systemctl program provides a user interface for service management.
- Support for SystemV and LSB start scripts is offered for backward compatibility.
- Service management and reporting provide more service status data than SystemV.
- It includes tools for basic system configuration, such as hostname, date, locale, lists of logged-in users, running containers and virtual machines, system accounts, runtime directories and settings, daemons to manage simple network configuration, network time synchronization, log forwarding, and name resolution.
- It offers socket management.
- systemd timers provide advanced cron-like capabilities to include running a script at times relative to system boot, systemd startup, the last time the timer was started, and more.
- It provides a tool to analyze dates and times used in timer specifications.
- Mounting and unmounting of filesystems with hierarchical awareness allows safer cascading of mounted filesystems.
- It enables the positive creation and management of temporary files, including deletion.
- An interface to D-Bus provides the ability to run scripts when devices are plugged in or removed. This allows all devices, whether pluggable or not, to be treated as plug-and-play, which considerably simplifies device handling.
- Its tool to analyze the startup sequence can be used to locate the services that take the most time.
- It includes journals for storing system log messages and tools for managing the journals.
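As a sketch of the timer capability mentioned in the list above, the pair of hypothetical unit files below would run a backup script five minutes after every boot. The names (`backup.timer`, `backup.service`) and the script path are invented for illustration; on a real host, the files would go in `/etc/systemd/system/` and be enabled with `systemctl enable --now backup.timer`:

```shell
# Write a hypothetical timer/service pair into a scratch directory so the
# example is safe to run anywhere; /etc/systemd/system/ is the real location.
dir=$(mktemp -d)

cat > "$dir/backup.timer" <<'EOF'
[Unit]
Description=Run backup five minutes after boot

[Timer]
OnBootSec=5min
Unit=backup.service

[Install]
WantedBy=timers.target
EOF

cat > "$dir/backup.service" <<'EOF'
[Unit]
Description=One-shot backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
EOF

# Confirm the schedule line landed in the timer unit.
grep OnBootSec "$dir/backup.timer"
```

An `OnCalendar=` setting can replace `OnBootSec=` when cron-style wall-clock scheduling is wanted instead of boot-relative timing.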
## Architecture

Those tasks and more are supported by a number of daemons, control programs, and configuration files. Figure 1 shows many of the components that belong to systemd. This is a simplified diagram designed to provide a high-level overview, so it does not include all of the individual programs or files. Nor does it provide any insight into data flow, which is so complex that it would be a useless exercise in the context of this series of articles.

![systemd architecture](https://opensource.com/sites/default/files/uploads/systemd-architecture.png)

Fig 1: Architecture of systemd, by Shmuel Csaba Otto Traian (CC BY-SA 3.0)

A full exposition of systemd would take a book on its own. You do not need to understand the details of how the systemd components in Figure 1 fit together; it's enough to know about the programs and components that enable managing various Linux services and deal with log files and journals. But it's clear that systemd is not the monolithic monstrosity it is purported to be by some of its critics.

## systemd as PID 1

systemd is PID 1. Some of its functions, which are far more extensive than the old SystemV init program, are to manage many aspects of a running Linux host, including mounting filesystems and starting and managing system services required to have a productive Linux host. Any of systemd's tasks that are not related to the startup sequence are outside the scope of this article (but some will be explored later in this series).

First, systemd mounts the filesystems defined by **/etc/fstab**, including any swap files or partitions. At this point, it can access the configuration files located in **/etc**, including its own. It uses its configuration link, **/etc/systemd/system/default.target**, to determine which state or target it should boot the host into. The **default.target** file is a symbolic link to the true target file.
For a desktop workstation, this is typically going to be the **graphical.target**, which is equivalent to runlevel 5 in SystemV. For a server, the default is more likely to be the **multi-user.target**, which is like runlevel 3 in SystemV. The **emergency.target** is similar to single-user mode. Targets and services are systemd units.

The table below (Figure 2) compares the systemd targets with the old SystemV startup runlevels. systemd provides the systemd target aliases for backward compatibility. The target aliases allow scripts—and many sysadmins—to use SystemV commands like **init 3** to change runlevels. Of course, the SystemV commands are forwarded to systemd for interpretation and execution.

| systemd targets | SystemV runlevel | target aliases | Description |
|---|---|---|---|
| default.target | | | This target is always aliased with a symbolic link to either multi-user.target or graphical.target. systemd always uses the default.target to start the system. The default.target should never be aliased to halt.target, poweroff.target, or reboot.target. |
| graphical.target | 5 | runlevel5.target | Multi-user.target with a GUI |
| | 4 | runlevel4.target | Unused. Runlevel 4 was identical to runlevel 3 in the SystemV world. This target could be created and customized to start local services without changing the default multi-user.target. |
| multi-user.target | 3 | runlevel3.target | All services running, but command-line interface (CLI) only |
| | 2 | runlevel2.target | Multi-user, without NFS, but all other non-GUI services running |
| rescue.target | 1 | runlevel1.target | A basic system, including mounting the filesystems with only the most basic services running and a rescue shell on the main console |
| emergency.target | S | | Single-user mode—no services are running; filesystems are not mounted. This is the most basic level of operation with only an emergency shell running on the main console for the user to interact with the system. |
| halt.target | | | Halts the system without powering it down |
| reboot.target | 6 | runlevel6.target | Reboot |
| poweroff.target | 0 | runlevel0.target | Halts the system and turns the power off |

Fig. 2: Comparison of SystemV runlevels with systemd targets and some target aliases

Each target has a set of dependencies described in its configuration file. systemd starts the required dependencies, which are the services required to run the Linux host at a specific level of functionality. When all the dependencies listed in the target configuration files are loaded and running, the system is running at that target level. In Figure 2, the targets with the most functionality are at the top of the table, with functionality declining towards the bottom of the table.

systemd also looks at the legacy SystemV init directories to see if any startup files exist there. If so, systemd uses them as configuration files to start the services described by the files. The deprecated network service is a good example of one that still uses SystemV startup files in Fedora.

Figure 3 (below) summarizes the map from the bootup man page. It shows the general sequence of events during systemd startup and the basic ordering requirements to ensure a successful startup.

```
 (local filesystem mounts, swap, cryptsetup
  devices, low-level services, and API filesystems)
                      |
                      v
               sysinit.target ----------------> rescue.service
                      |                               |
                      v                               v
 (various timers, paths, and sockets, then      rescue.target
  timers.target, paths.target, sockets.target)
                      |
                      v
                basic.target
                      |
                      v
 (various system services; remote-fs.target)    emergency.service
                      |                               |
                      v                               v
             multi-user.target                  emergency.target
                      |
                      v
 (display-manager.service and services
  required for graphical UIs)
                      |
                      v
              graphical.target
```

Fig 3: The systemd startup map (simplified from the bootup man page)

The **sysinit.target** and **basic.target** targets can be considered checkpoints in the startup process. Although one of systemd's design goals is to start system services in parallel, certain services and functional targets must be started before other services and targets can start. These checkpoints cannot be passed until all of the services and targets required by that checkpoint are fulfilled.

The **sysinit.target** is reached when all of the units it depends on are completed. All of those units, mounting filesystems, setting up swap files, starting udev, setting the random generator seed, initiating low-level services, and setting up cryptographic services (if one or more filesystems are encrypted), must be completed but, within the **sysinit.target**, those tasks can be performed in parallel. The **sysinit.target** starts up all of the low-level services and units required for the system to be marginally functional and that are required to enable moving onto the **basic.target**.

After the **sysinit.target** is fulfilled, systemd then starts all the units required to fulfill the next target.
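The target systemd ultimately aims for is chosen through the **default.target** symbolic link described earlier. The sketch below demonstrates just the symlink mechanics in a scratch directory; on a real host you would use `systemctl get-default` and `systemctl set-default`, which manage that link under `/etc/systemd/system/`:

```shell
# Simulate how default.target aliases the true target file via a symlink.
d=$(mktemp -d)     # stands in for /etc/systemd/system
touch "$d/graphical.target" "$d/multi-user.target"

ln -s "$d/graphical.target" "$d/default.target"     # like: systemctl set-default graphical.target
basename "$(readlink "$d/default.target")"          # like: systemctl get-default -> graphical.target

ln -sfn "$d/multi-user.target" "$d/default.target"  # switch the default target
basename "$(readlink "$d/default.target")"          # -> multi-user.target
```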
The basic target provides some additional functionality by starting units that are required for all of the next targets. These include setting up things like paths to various executable directories, communication sockets, and timers. Finally, the user-level targets, **multi-user.target** or **graphical.target**, can be initialized. The **multi-user.target** must be reached before the graphical target dependencies can be met.

The usual startup targets in Figure 3 are **multi-user.target** and **graphical.target**. When one of these targets is reached, startup has completed. If the **multi-user.target** is the default, then you should see a text-mode login on the console. If **graphical.target** is the default, then you should see a graphical login; the specific GUI login screen you see depends on your default display manager.

The bootup man page also describes and provides maps of the boot into the initial RAM disk and the systemd shutdown process.

systemd also provides a tool that lists dependencies of a complete startup or for a specified unit. A unit is a controllable systemd resource entity that can range from a specific service, such as httpd or sshd, to timers, mounts, sockets, and more. Try the following command and scroll through the results.

`systemctl list-dependencies graphical.target`

Notice that this fully expands the top-level target units list required to bring the system up to the graphical target run mode. Use the **--all** option to expand all of the other units as well.

`systemctl list-dependencies --all graphical.target`

You can search for strings such as "target," "slice," and "socket" using the search tools of the **less** command. So now, try the following:

- `systemctl list-dependencies multi-user.target`
- `systemctl list-dependencies rescue.target`
- `systemctl list-dependencies local-fs.target`
- `systemctl list-dependencies dbus.service`

This tool helps me visualize the specifics of the startup dependencies for the host I am working on.
Go ahead and spend some time exploring the startup tree for one or more of your Linux hosts. But be careful because the systemctl man page contains this note:

"Note that this command only lists units currently loaded into memory by the service manager. In particular, this command is not suitable to get a comprehensive list at all reverse dependencies on a specific unit, as it won't list the dependencies declared by units currently not loaded."

## Final thoughts

Even before getting very deep into systemd, it's obvious that it is both powerful and complex. It is also apparent that systemd is not a single, huge, monolithic, and unknowable binary file. Rather, it is composed of a number of smaller components and subcommands that are designed to perform specific tasks.

The next article in this series will explore systemd startup in more detail, as well as systemd configuration files, changing the default target, and how to create a simple service unit.

## Resources

There is a great deal of information about systemd available on the internet, but much is terse, obtuse, or even misleading. In addition to the resources mentioned in this article, the following webpages offer more detailed and reliable information about systemd startup.

- The Fedora Project has a good, practical [guide to systemd](https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html). It has pretty much everything you need to know in order to configure, manage, and maintain a Fedora computer using systemd.
- The Fedora Project also has a good [cheat sheet](https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet) that cross-references the old SystemV commands to comparable systemd ones.
- For detailed technical information about systemd and the reasons for creating it, check out [Freedesktop.org](http://Freedesktop.org)'s [description of systemd](http://www.freedesktop.org/wiki/Software/systemd).
- [Linux.com](http://Linux.com)'s "More systemd fun" offers more advanced systemd [information and tips](https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/).

There is also a series of deeply technical articles for Linux sysadmins by Lennart Poettering, the designer and primary developer of systemd. These articles were written between April 2010 and September 2011, but they are just as relevant now as they were then. Much of everything else good that has been written about systemd and its ecosystem is based on these papers.

- [Rethinking PID 1](http://0pointer.de/blog/projects/systemd.html)
- [systemd for Administrators, Part I](http://0pointer.de/blog/projects/systemd-for-admins-1.html)
- [systemd for Administrators, Part II](http://0pointer.de/blog/projects/systemd-for-admins-2.html)
- [systemd for Administrators, Part III](http://0pointer.de/blog/projects/systemd-for-admins-3.html)
- [systemd for Administrators, Part IV](http://0pointer.de/blog/projects/systemd-for-admins-4.html)
- [systemd for Administrators, Part V](http://0pointer.de/blog/projects/three-levels-of-off.html)
- [systemd for Administrators, Part VI](http://0pointer.de/blog/projects/changing-roots)
- [systemd for Administrators, Part VII](http://0pointer.de/blog/projects/blame-game.html)
- [systemd for Administrators, Part VIII](http://0pointer.de/blog/projects/the-new-configuration-files.html)
- [systemd for Administrators, Part IX](http://0pointer.de/blog/projects/on-etc-sysinit.html)
- [systemd for Administrators, Part X](http://0pointer.de/blog/projects/instances.html)
- [systemd for Administrators, Part XI](http://0pointer.de/blog/projects/inetd.html)
12,216
修复 Ubuntu 中的 “Unable to parse package file” 错误
https://itsfoss.com/unable-to-parse-package-file/
2020-05-13T16:20:33
[ "更新", "错误" ]
https://linux.cn/article-12216-1.html
过去,我已经讨论了许多 [Ubuntu 更新错误](https://itsfoss.com/ubuntu-update-error/)。如果你[使用命令行更新 Ubuntu](https://itsfoss.com/update-ubuntu/),那可能会遇到一些“错误”。

其中一些“错误”基本上是内置功能,可防止对系统进行不必要的更改。在本教程中,我不会涉及那些细节。

在本文中,我将向你展示如何解决在更新系统或安装新软件时可能遇到的以下错误:

```
Reading package lists… Error!
E: Unable to parse package file /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_InRelease
E: The package lists or status file could not be parsed or opened.
```

在 Debian 中可能会遇到类似的错误:

```
E: Unable to parse package file /var/lib/apt/extended_states (1)
```

即使遇到 `The package cache file is corrupted` 也完全不必惊慌。这真的很容易“修复”。

### 在基于 Ubuntu 和 Debian 的 Linux 发行版中处理 “Unable to parse package file” 错误

![](/data/attachment/album/202005/13/162038ovbjb3vhv5hh5fh2.png)

以下是你需要做的。仔细查看 [Ubuntu](https://ubuntu.com/) 报错文件的名称和路径。

```
Reading package lists… Error!
E: Unable to parse package file /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_InRelease
E: The package lists or status file could not be parsed or opened.
```

例如,上面的错误是在报 `/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_InRelease` 文件错误。

这让你想到这个文件不正确。现在,你需要做的就是删除该文件并重新生成缓存。

```
sudo rm <file_that_is_not_parsed>
```

因此,这里我可以使用以下命令:`sudo rm /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_InRelease`,然后使用 `sudo apt update` 命令重建缓存。

#### 给初学者的分步指导

如果你熟悉 Linux 命令,那么可能知道如何使用绝对路径删除文件。对于新手用户,让我指导你安全删除文件。

首先,你应该进入文件目录:

```
cd /var/lib/apt/lists/
```

现在删除无法解析的文件:

```
sudo rm archive.ubuntu.com_ubuntu_dists_bionic_InRelease
```

现在,如果你再次运行更新,将重新生成 apt 缓存。

```
sudo apt update
```

#### 有很多文件无法解析?
如果你在更新系统时有一个或两个文件无法解析,那么问题不大。但是,如果系统报错有十个或二十个此类文件,那么一一删除它们就太累了。

在这种情况下,你可以执行以下操作来删除整个缓存,然后再次生成它:

```
sudo rm -r /var/lib/apt/lists/*
sudo apt update
```

#### 解释这为何能解决问题

`/var/lib/apt` 是与 apt 软件包管理器相关的文件和数据的存储目录。`/var/lib/apt/lists` 是用于保存系统 `source.list` 中指定的每个软件包资源信息的目录。

简单点来说,`/var/lib/apt/lists` 保存软件包信息缓存。当你要安装或更新程序时,系统会在此目录中检查该软件包中的信息。如果找到了该包的详细信息,那么它将进入远程仓库并实际下载程序或其更新。

当你运行 `sudo apt update` 时,它将构建缓存。这就是为什么即使删除 `/var/lib/apt/lists` 目录中的所有内容,运行更新也会建立新的缓存的原因。

这就是处理文件无法解析问题的方式。你的系统报某个软件包或仓库信息以某种方式损坏(下载失败或手动更改 `sources.list`)。删除该文件(或所有文件)并重建缓存即可解决此问题。

#### 仍然有错误?

这应该能解决你的问题。但是,如果问题仍然存在,或者你还有其他相关问题,请在评论栏告诉我,我将尽力帮助你。

---

via: <https://itsfoss.com/unable-to-parse-package-file/>

作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
I have discussed a number of [Ubuntu update errors](https://itsfoss.com/ubuntu-update-error/) in the past. If you [use the command line to update Ubuntu](https://itsfoss.com/update-ubuntu/), you might run into some ‘errors’.

Some of these ‘errors’ are basically built-in features to prevent unwarranted changes to your system. I am not going into those details in this quick tutorial.

In this quick tip, I’ll show you how to tackle the following error that you could encounter while updating your system or installing new software:

**Reading package lists… Error!**
**E: Unable to parse package file /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_InRelease**
**E: The package lists or status file could not be parsed or opened.**

A similar error can be encountered in Debian:

**E: Unable to parse package file /var/lib/apt/extended_states (1)**

There is absolutely no need to panic even though it says ‘**The package cache file is corrupted**‘. This is really easy to ‘fix’.

## Handling “Unable to parse package file” error in Ubuntu and Debian-based Linux distributions

![Unable To Parse Package File](https://itsfoss.com/content/images/wordpress/2020/05/Unable-to-parse-package-file.png)

Here’s what you need to do. Take a closer look at the name and path of the file that [Ubuntu](https://ubuntu.com/) is complaining about.

Reading package lists… Error!
**E: Unable to parse package file /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_InRelease**
E: The package lists or status file could not be parsed or opened.

For example, in the above error, it was complaining about /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_InRelease

This gives you the idea that something is not right with this file. Now all you need to do is to remove this file and regenerate the cache.
`sudo rm <file_that_is_not_parsed>`

So in my case, I could use this command: **sudo rm /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_InRelease** and then rebuild the cache with the `sudo apt update` command.

### Step by step for beginners

If you are familiar with Linux commands, you may know how to delete the file with its absolute path. For novice users, let me guide you to delete the file safely.

First, you should go to the directory where the file is stored:

`cd /var/lib/apt/lists/`

Now delete the file which is not being parsed:

`sudo rm archive.ubuntu.com_ubuntu_dists_bionic_InRelease`

Now if you run the update again, the [apt cache](https://itsfoss.com/apt-cache-command/) will be regenerated.

`sudo apt update`

### Too many files cannot be parsed?

This is fine if you have one or two files that are not being parsed while updating the system. But if the system complains about ten or twenty such files, removing them one by one is too tiring.

What you can do in such a case is to remove the entire cache and then generate it again:

```
sudo rm -r /var/lib/apt/lists/*
sudo apt update
```

### Explanation of how it fixed your problem

The /var/lib/apt is the directory where files and data related to the apt package manager are stored. The /var/lib/apt/lists is the directory that is used for storing information for each package resource specified in your system’s sources.list.

In slightly noncomplicated terms, this /var/lib/apt/lists stores the package information cache. When you want to install or update a program, your system checks in this directory for the information on the said package. If it finds the detail on the package, then it goes to the remote repository and actually downloads the program or its update.

When you run `sudo apt update`, it builds the cache. This is why even when you remove everything in the /var/lib/apt/lists directory, running the update will build a fresh cache.

This is how it handles the issue of files not being parsed.
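The clear-and-rebuild step above can be rehearsed end to end. The snippet below uses a throwaway directory so it is safe to run anywhere; on a real system you would point it at /var/lib/apt/lists (with sudo) and finish with `sudo apt update`:

```shell
# Rehearse clearing the apt lists cache in a scratch directory.
LISTS=$(mktemp -d)   # stands in for /var/lib/apt/lists
touch "$LISTS/archive.ubuntu.com_ubuntu_dists_bionic_InRelease" \
      "$LISTS/security.ubuntu.com_ubuntu_dists_bionic-security_InRelease"

rm -r "$LISTS"/*     # the equivalent of: sudo rm -r /var/lib/apt/lists/*

ls -A "$LISTS"       # prints nothing: the cache is empty, ready to be rebuilt
```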
Your system complained about a particular package or repository information that somehow got corrupted (either a failed download or manual change to the [sources.list](https://itsfoss.com/sources-list-ubuntu/)). Removing that file (or everything) and rebuilding the cache solves the issue.

### Still facing error?

This should fix the issue for you. But if the problem still persists or if you have some other related issue, let me know in the comment section and I’ll try to help you out.
12,217
如何利用 SSL/TLS 保护你的 Linux 邮件服务
https://opensource.com/article/20/4/securing-linux-email
2020-05-13T21:57:00
[ "TLS", "邮件", "SSL" ]
https://linux.cn/article-12217-1.html
>
> 通过理解安全证书来保护你的 Linux 邮件服务。
>

![](/data/attachment/album/202005/13/215637khaogmririavnrlk.jpg)

通常,不管你是通过<ruby> 简单邮件传输协议 <rt> Simple Mail Transport Protocol </rt></ruby>(SMTP)或者<ruby> 互联网消息访问协议 <rt> Internet Message Access Protocol </rt></ruby>(IMAP)或<ruby> 邮局协议 <rt> Post Office Protocol </rt></ruby>(POP)发送或者接受邮件,邮件服务默认都是以无保护的明文来传输数据。近来随着数据加密成为越来越多程序的共识,你需要<ruby> 安全套接层 <rt> Secure Sockets Layer </rt></ruby>/<ruby> 传输层安全性 <rt> Transport Layer Security </rt></ruby>(SSL/TLS)的安全证书来保护你的邮件服务。

首先,快速回顾一下邮件服务和协议的基本流程。邮件通过 SMTP 从 TCP 端口 25 发出。这个协议依靠 DNS <ruby> 邮件交换服务器 <rt> Mail eXchanger </rt></ruby>(MX)记录的地址信息来传输邮件。当邮件到达邮件服务器后,可以被以下两种服务中的任意一种检索:使用 TCP 端口 143 的 IMAP,或者使用 TCP 端口 110 的 POP3(邮局协议第 3 版)。然而,以上服务都默认使用明文传输邮件和认证信息。这非常的不安全!

为了保护电子邮件数据和认证,这些服务都增加了一个安全功能,使它们可以利用 SSL/TLS 证书对数据流和通讯进行加密封装。SSL/TLS 是如何加密数据的细节不在本文讨论范围,有兴趣的话可以阅读 [Bryant Son 关于互联网安全的文章](/article-11699-1.html)了解更多细节。概括的说,SSL/TLS 加密是一种基于公钥和私钥的算法。

通过加入这些安全功能后,这些服务将监听在新的 TCP 端口:

| 服务 | 默认 TCP 端口 | SSL/TLS 端口 |
| --- | --- | --- |
| SMTP | 25 | 587 |
| IMAP | 143 | 993 |
| POP3 | 110 | 995 |

### 生成 SSL/TLS 证书

[OpenSSL](https://www.openssl.org/) 可以生成免费的 SSL/TLS 证书,或者你也可以从公共<ruby> 证书颁发机构 <rt> Certificate Authoritie </rt></ruby>(CA)购买。过去,生成自签发证书十分简单而且通用,但是由于安全被日益重视,大部分的邮件客户端是不信任自签发证书的,除非手动设置。

如果你只是自己使用或者做做测试,那就使用自签发证书省点钱吧。但是如果很多人或者客户也需要使用的话,那最好还是从受信任的证书颁发机构购买。

不管是哪种情况,开始请求新证书的过程是使用 Linux 系统上的 OpenSSL 工具来创建一个<ruby> 证书签发请求 <rt> Certificate Signing Request </rt></ruby>(CSR):

```
$ openssl req -new -newkey rsa:2048 -nodes -keyout mail.mydomain.key -out mail.mydomain.csr
```

这个命令会为你想保护的服务同时生成一个新的 CSR 文件和一个私匙。它会询问你一些证书相关的问题,如:位置、服务器的<ruby> 完全合规域名 <rt> Fully Qualified Domain Name </rt></ruby>(FQDN)、邮件联系信息等等。当你输入完这些信息后,私钥和 CSR 文件就生成完毕了。

#### 如果你想生成自签发证书

如果你想要生成自签发证书的话,在运行以上 CSR 命令之前,你必须先创建一个[自己的根 CA](https://en.wikipedia.org/wiki/Root_certificate)。你可以通过以下方法创建自己的根 CA。

```
$ openssl genrsa -des3 -out myCA.key 2048
```

命令行会提示你输入一个密码。请输入一个复杂点的密码而且不要弄丢了,因为这将会是根 CA 私钥的密码,正如其名称所示,它是你的证书中所有信任关系的根。

接下来,生成根 CA 证书:

```
$ openssl req -x509 -new -nodes -key myCA.key -sha256 -days 1825 -out myCA.pem
```

在回答完一些问题后,你就拥有一个有效期为 5 年的根 CA 证书了。

用之前生成的 CSR 文件,你可以请求生成一个新证书,并由您刚才创建的根 CA 签名。

```
$ openssl x509 -req -in mail.mydomain.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out mail.mydomain.pem -days 1825 -sha256
```

输入你的根 CA 私钥的密码来创建和签署证书。

现在你有了配置电子邮件服务以增强安全性所需的两个文件:私匙文件 `mail.mydomain.key` 和公开证书文件 `mail.mydomain.pem`。

#### 如果你愿意购买证书

如果你愿意从机构购买证书,则需要上传 CSR 文件到证书颁发机构的系统中,它将会被用于生成 SSL/TLS 证书。证书可作为文件下载,比如 `mail.mydomain.pem`。很多 SSL 机构也需要你下载一个中间证书。如果是这样的话,你必须把这个两个证书合并成一个,这样电子邮件服务就可以将这两个证书结合起来处理。可以使用以下命令把你的证书和第三方中间证书合并在一起:

```
$ cat mail.mydomain.pem gd_bundle-g2-g1.crt > mail.mydomain.pem
```

值得一提的是 `.pem` 文件后缀代表<ruby> 隐私增强邮件 <rt> Privacy-Enhanced Mail </rt></ruby>。

现在你就有全部的设置邮件服务安全所需文件了:私匙文件 `mail.mydomain.key` 和组合的公开证书文件 `mail.mydomain.pem`。

### 为你的文件生成一个安全的文件夹

不管你的证书是自签发的或者从机构购买,你都需要生成一个安全的,管理员拥有的文件夹用于保存这两个文件。可以使用以下命令来生成:

```
$ mkdir /etc/pki/tls
$ chown root:root /etc/pki/tls
$ chmod 700 /etc/pki/tls
```

在复制文件到 `/etc/pki/tls` 后,再次设置这些文件的权限:

```
$ chmod 600 /etc/pki/tls/*
```

### 配置你的 SMTP 和 IMAP 服务

接下来,让 SMTP 和 IMAP 服务使用新的安全证书。我们用 `postfix` 和 `dovecot` 来作为例子。

用你顺手的编辑器来编辑 `/etc/postfix/main.cf` 文件。添加以下几行:

```
smtpd_use_tls = yes
smtpd_tls_cert_file = /etc/pki/tls/mail.mydomain.pem
smtpd_tls_key_file = /etc/pki/tls/mail.mydomain.key
```

### 自定义选项

以下选项可以启用或禁用各种加密算法,协议等等:

```
smtpd_tls_eecdh_grade = strong
smtpd_tls_protocols= !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_mandatory_protocols= !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_mandatory_ciphers = high
smtpd_tls_security_level=may
smtpd_tls_ciphers = high
tls_preempt_cipherlist = yes
smtpd_tls_mandatory_exclude_ciphers = aNULL, MD5 , DES, ADH, RC4, PSD, SRP, 3DES, eNULL
smtpd_tls_exclude_ciphers = aNULL, MD5 , DES, ADH, RC4, PSD, SRP, 3DES, eNULL
smtp_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtp_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
```

编辑 `/etc/dovecot/dovecot.conf` 文件,添加以下三行:

```
ssl = required
ssl_cert = </etc/pki/tls/mail.mydomain.pem
ssl_key = </etc/pki/tls/mail.mydomain.key
```

添加下列更多选项来启用或禁用各种加密算法、协议等等(我把这些留给你来理解):

```
ssl_cipher_list = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:ALL:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SSLv2
ssl_prefer_server_ciphers = yes
ssl_protocols = !SSLv2 !SSLv3 !TLSv1 !TLSv1.1
ssl_min_protocol = TLSv1.2
```

### 设置 SELinux 上下文

如果你的 Linux 发行版启用了 SELinux,请为你的新证书文件设置正确的 SELinux 上下文。

对于 Postfix 设置 SELinux:

```
$ chcon -u system_u -t cert_t mail.mydomain.*
```

对于 Dovecot 设置 SELinux:

```
$ chcon -u system_u -t dovecot_cert_t mail.mydomain.*
```

重启这些服务,并与你相应更新过的电子邮件客户端配置连接。有些电子邮件客户端会自动探测到新的端口,有些则需要你手动更新。

### 测试配置

用 `openssl` 命令行和 `s_client` 插件来简单测试一下:

```
$ openssl s_client -connect mail.mydomain.com:993
$ openssl s_client -starttls imap -connect mail.mydomain.com:143
$ openssl s_client -starttls smtp -connect mail.mydomain.com:587
```

这些测试命令会打印出很多信息,关于你使用的连接、证书、加密算法、会话和协议。这不仅是一个验证新设置的好方法,也可以确认你使用了适当的证书,以及在 postfix 或 dovecot 配置文件中定义的安全设置正确。

保持安全!

---

via: <https://opensource.com/article/20/4/securing-linux-email>

作者:[Marc Skinner](https://opensource.com/users/marc-skinner) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Acceleratorrrr](https://github.com/Acceleratorrrr) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Traditionally, email services send data in an unprotected way—whether you are sending emails via SMTP or receiving them via IMAP or POP, the defaults are in cleartext. With more online applications enforcing encryption and the general consensus to protect your data, it's best to secure your email services with a Secure Sockets Layer/Transport Layer Security (SSL/TLS) security certificate.

First, a quick review of email services and protocols. Email is sent via a service called Simple Mail Transport Protocol (SMTP) using TCP port 25. This protocol sends emails from server to server based on DNS mail exchanger (MX) record lookups. Once an email is on the email server, it is retrieved using one of two services: Internet Message Access Protocol (IMAP) using port TCP 143, or Post Office Protocol (POP3) using port TCP 110. All of these services, by default, send your email and authentication to/from these services in plain text—thus, it's very unprotected!

To protect the email data and authentication, these services have added a security feature in which they can utilize an SSL/TLS certificate to wrap the data flow and communication with encryption. How SSL/TLS encryption secures information is beyond the scope of this article, but [Bryant Son's internet security article](https://opensource.com/article/19/11/internet-security-tls-ssl-certificate-authority) covers it in great detail. At a high level, SSL/TLS encryption is a public/private encryption algorithm.

By adding these security features into the services, they can listen on new TCP ports:

| Service | Default TCP Port | SSL/TLS Port |
|---|---|---|
| SMTP | 25 | 587 |
| IMAP | 143 | 993 |
| POP3 | 110 | 995 |

## Generate SSL/TLS certificates

SSL/TLS certificates can be generated for free using tools like [OpenSSL](https://www.openssl.org/), or they can be purchased for a range of prices from public certificate authorities (CAs).
In the past, generating your own certificate was easy and worked in most cases, but with the increasing demand for better security, most email clients don't trust self-generated SSL/TLS certificates without a manual exception. If your use case is private or for testing, then saving money with a self-generated certificate makes sense. But if you're rolling this out to a large group or have paying customers, then you're better served by purchasing a certificate from a public, trusted company that sells them. In either case, the process to start requesting a new certificate is to use the OpenSSL tooling on your Linux system to create a certificate signing request (CSR): `$ openssl req -new -newkey rsa:2048 -nodes -keyout mail.mydomain.key -out mail.mydomain.csr` This command will create a new CSR and private key at the same time for the service you are trying to secure. The process will ask you a number of questions associated with the certificate: location details, server fully qualified domain name (FQDN), email contact information, etc. Once you have filled out the information, the key and CSR will be generated. ### If you generate your own certificate If you want to generate your own certificate, you must create your own [root CA](https://en.wikipedia.org/wiki/Root_certificate) before issuing the CSR command above. You can create your own root CA with: `$ openssl genrsa -des3 -out myCA.key 2048` It will prompt you to add a passphrase. Please give it a secure passphrase and don't lose it—this is your private root CA key, and as the name states, it's the root of all trust in your certificates. Next, generate the root CA certificate: `$ openssl req -x509 -new -nodes -key myCA.key -sha256 -days 1825 -out myCA.pem` After answering a few more questions, you will generate a root CA certificate with a five-year lifespan. 
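Before moving on, it is worth sanity-checking what the commands above produced. The sketch below is a non-interactive rehearsal of the same steps in a throwaway scratch directory — the `-subj` flag answers the certificate questionnaire up front, and the CA key is generated without `-des3` here so no passphrase prompt blocks the script. The names (`mail.example.com`, "My Root CA", the temp paths) are placeholders; substitute your real file names when inspecting your actual CSR and root CA:

```shell
# Rehearse key/CSR and root CA generation in a scratch directory
tmp=$(mktemp -d)

# Server key + CSR (non-interactive thanks to -subj)
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=mail.example.com" \
    -keyout "$tmp/mail.key" -out "$tmp/mail.csr"

# Root CA key + self-signed CA certificate (no -des3, so no passphrase in this sketch)
openssl genrsa -out "$tmp/myCA.key" 2048
openssl req -x509 -new -nodes -key "$tmp/myCA.key" -sha256 -days 1825 \
    -subj "/CN=My Root CA" -out "$tmp/myCA.pem"

# Inspect the results: the CSR should carry the mail server's name,
# and a root CA certificate is self-signed (subject == issuer)
openssl req  -noout -subject -in "$tmp/mail.csr"
openssl x509 -noout -subject -issuer -dates -in "$tmp/myCA.pem"
```

If the CSR's subject line does not show the mail server's FQDN, clients will later reject the final certificate with a hostname mismatch, so this is worth catching before anything gets signed.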
Using the CSR file from the steps above, you can request a new certificate to be generated and signed by the root CA you just created: `$ openssl x509 -req -in mail.mydomain.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out mail.mydomain.pem -days 1825 -sha256` Enter your private root CA key passphrase to create and sign the certificate. Now you have the two files needed to configure your email services for enhanced security: the private key file, **mail.mydomain.key**, and the public certificate file, **mail.mydomain.pem**. ### If you purchase a certificate If you purchase a certificate from a vendor, it will ask you to upload that CSR to its system, as it is used as the input to generate the SSL/TLS certificate. The certificate will be accessible as a file (such as **mail.mydomain.pem**). Many SSL vendors also require you to download an intermediate certificate. If this is the case, you must combine the two certificate files into one, so the email service can process them both in combination. You can combine your certificate with a third-party intermediate certificate with: `$ cat mail.mydomain.pem gd_bundle-g2-g1.crt > mail.mydomain.pem` Notice that the output's file extension is **.pem**, which stands for Privacy-Enhanced Mail. Now you have the two files you need to configure your email services for enhanced security: the private key file, **mail.mydomain.key**, and the public combined certificate file, **mail.mydomain.pem**. ## Create a safe directory for your files Whether you created your own key or bought one from a vendor, create a safe, root-owned directory for the two files you created above. 
An example workflow to create a safe place would be:

```
$ mkdir /etc/pki/tls
$ chown root:root /etc/pki/tls
$ chmod 700 /etc/pki/tls
```

Make sure to set the permissions on your files after you copy them into **/etc/pki/tls** with:

`$ chmod 600 /etc/pki/tls/*`

## Configure your SMTP and IMAP services

Next, configure both the SMTP and the IMAP services to use the new security certificates. The programs used in this example for SMTP and IMAP are **postfix** and **dovecot**.

Edit **/etc/postfix/main.cf** in your preferred text editor. Add the following lines:

```
smtpd_use_tls = yes
smtpd_tls_cert_file = /etc/pki/tls/mail.mydomain.pem
smtpd_tls_key_file = /etc/pki/tls/mail.mydomain.key
```

## Customize your config

The following options allow you to disable/enable different ciphers, protocols, etc.:

```
smtpd_tls_eecdh_grade = strong
smtpd_tls_protocols= !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_mandatory_protocols= !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_mandatory_ciphers = high
smtpd_tls_security_level=may
smtpd_tls_ciphers = high
tls_preempt_cipherlist = yes
smtpd_tls_mandatory_exclude_ciphers = aNULL, MD5 , DES, ADH, RC4, PSD, SRP, 3DES, eNULL
smtpd_tls_exclude_ciphers = aNULL, MD5 , DES, ADH, RC4, PSD, SRP, 3DES, eNULL
smtp_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtp_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
```

Edit **/etc/dovecot/dovecot.conf** by adding these three lines:

```
ssl = required
ssl_cert = </etc/pki/tls/mail.mydomain.pem
ssl_key = </etc/pki/tls/mail.mydomain.key
```

Add the following options to disable/enable different ciphers, protocols, and more (I'll leave understanding and considering these up to you):

```
ssl_cipher_list = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:ALL:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SSLv2
ssl_prefer_server_ciphers = yes
ssl_protocols = !SSLv2 !SSLv3 !TLSv1 !TLSv1.1
ssl_min_protocol = TLSv1.2
```

## Set context for SELinux

If your Linux distribution has SELinux enabled, set the correct SELinux context for your new certificate files.

For Postfix SELinux:

`$ chcon -u system_u -t cert_t mail.mydomain.*`

For Dovecot SELinux:

`$ chcon -u system_u -t dovecot_cert_t mail.mydomain.*`

Restart both services and connect with your updated email client configurations. Some email clients will auto-detect the new port numbers; others will require you to update them.

## Test your setup

Quickly test from the command line with **openssl** and the **s_client** plugin:

```
$ openssl s_client -connect mail.mydomain.com:993
$ openssl s_client -starttls imap -connect mail.mydomain.com:143
$ openssl s_client -starttls smtp -connect mail.mydomain.com:587
```

These test commands will show a plethora of data about the connection, certificate, cipher, session, and protocol you're using. This is not only a good way to validate that the new configuration is working but also to confirm you're using the appropriate certificate and security settings you defined in the **postfix** or **dovecot** configuration files.

Stay secure!
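The `s_client` checks above need the live services; two more checks can be run offline against the certificate files themselves. The sketch below rehearses them on a throwaway CA and certificate (the file names and "mail.example.com" are placeholders — on your server, point the same two final commands at your real `myCA.pem` and `mail.mydomain.pem`): `openssl verify` confirms the signed certificate actually chains to your root CA, and `-checkend` reports whether it expires within a given number of seconds, which is handy in a cron job to warn you ahead of a renewal deadline.

```shell
# Offline checks in a scratch directory: chain verification and expiry
tmp=$(mktemp -d)

# Throwaway root CA, CSR, and CA-signed server certificate (same commands as above)
openssl genrsa -out "$tmp/myCA.key" 2048
openssl req -x509 -new -nodes -key "$tmp/myCA.key" -sha256 -days 365 \
    -subj "/CN=Test Root CA" -out "$tmp/myCA.pem"
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=mail.example.com" \
    -keyout "$tmp/mail.key" -out "$tmp/mail.csr"
openssl x509 -req -in "$tmp/mail.csr" -CA "$tmp/myCA.pem" -CAkey "$tmp/myCA.key" \
    -CAcreateserial -out "$tmp/mail.pem" -days 365 -sha256

# 1) Chain check: prints "<path>/mail.pem: OK" when the cert chains to the CA
openssl verify -CAfile "$tmp/myCA.pem" "$tmp/mail.pem"

# 2) Expiry check: exit status 0 means the cert is still valid 30 days from now
if openssl x509 -checkend $((30 * 24 * 3600)) -noout -in "$tmp/mail.pem"; then
    echo "certificate good for at least another 30 days"
else
    echo "certificate expires within 30 days -- renew it"
fi
```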
12,219
用于各种用途的最佳树莓派操作系统
https://itsfoss.com/raspberry-pi-os/
2020-05-15T10:54:44
[ "树莓派" ]
https://linux.cn/article-12219-1.html
![](/data/attachment/album/202005/15/105040b17d6v7gdmj63k3k.jpg) [树莓派](https://www.raspberrypi.org/) 是一款不可缺少的单板电脑,在很多工作中都能派上用场。不相信?只要[看看这个树莓派项目列表](https://itsfoss.com/raspberry-pi-projects/),就能了解这个小小的设备能做什么。 考虑到树莓派用途这么多,为它选择一个合适的操作系统就极其重要。当然,你可以用 Linux 做很多事,但专门为特定目的配置的操作系统可以为你节省大量的时间和精力。 因此,本文中我要介绍一些专门为树莓派量身定制的流行且实用的操作系统。 ### 由于有树莓派镜像工具,安装任何操作系统到树莓派上都很容易 [在 SD 卡上安装树莓派操作系统](/article-12136-1.html)比以前容易得多。你只需下载[树莓派镜像](https://www.raspberrypi.org/downloads/)就可以快速地安装任何树莓派操作系统。请看下面的官方视频,你就知道有多简单。 你也可以使用 [NOOBS](https://www.raspberrypi.org/downloads/noobs/)(<ruby> 新开箱即用软件 <rt> New Out Of the Box Software </rt></ruby>)在树莓派上轻松安装各种的操作系统。你还可以从他们的 [NOOBS 官方下载页面](https://www.raspberrypi.org/downloads/noobs/)提到的支持的零售商列表中获得预装 SD 卡。 欢迎在他们的[官方文档](https://www.raspberrypi.org/documentation/installation/installing-images/README.md)中了解更多关于安装操作系统的信息。 * [下载树莓派操作系统](https://www.raspberrypi.org/downloads/) 现在你知道了怎么安装它(以及从哪儿获取),让我来重点介绍几个有用的树莓派操作系统,希望对你有所帮助。 ### 适用于树莓派的各种操作系统 请注意,我花了一些精力筛选出了那些被积极维护的树莓派操作系统项目。如果某个项目在不久的将来会停止维护,请在评论区告诉我,我会更新本文。 另一件事是,我关注到现在最新的版本是树莓派 4,但是下面的列表不应被认为是树莓派 4 的操作系统列表,这些系统应该也能用于树莓派 3、3B+ 和其他变种,但是请参照项目的官方网站了解详细信息。 **注意:** 排名不分先后。 #### 1、Raspbian OS:官方的树莓派操作系统 ![](/data/attachment/album/202005/15/105447tbv3kv2ipbidkfhi.jpg) Raspbian OS 是官方支持的树莓派板卡操作系统。它集成了很多工具,用于教育、编程以及其他广泛的用途。具体来说,它包含了 Python、Scratch、Sonic Pi、Java 和其他一些重要的包。 最初,Raspbian OS 是基于 Debian 的,并预装了大量有用的包。因此,当你安装 Raspbian OS 后,你可能就不需要特意安装基本工具了 — 你会发现大部分工具已经提前安装好了。 Raspbian OS 是被积极地维护着的,它也是最流行的树莓派操作系统之一。你可以使用 [NOOBS](https://www.raspberrypi.org/downloads/noobs/) 或参照[官方文档](https://www.raspberrypi.org/documentation/installation/installing-images/README.md)来安装它。 * [Raspbian OS](https://www.raspbian.org/) #### 2、Ubuntu MATE:适合通用计算需求 ![](/data/attachment/album/202005/15/105448j27hkhchhs77srf7.jpg) 尽管 Raspbian 是官方支持的操作系统,但它的特点不是最新、最大的软件包。因此,如果你想更快的更新,想用最新的包,你可以试试 Ubuntu MATE 的树莓派版本。 Ubuntu MATE 的树莓派定制版是值得安装的非常不错的轻量级发行版。它还被广泛用于 [NVIDIA 的 Jetson 
Nano](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/)。换言之,你可以在树莓派的很多场景下使用它。 为了更好地帮助你,我们还有一份详细的教程:[怎样在树莓派上安装 Ubuntu MATE](/article-10817-1.html)。 * [Ubuntu MATE for Raspberry Pi](https://ubuntu-mate.org/raspberry-pi/) #### 3、Ubuntu Server:把树莓派作为一台 Linux 服务器来使用 ![](/data/attachment/album/202005/15/105449z0wzb0x06us6sbdu.png) 如果你计划把你的树莓派当作项目的某个服务器来使用,那么安装 Ubuntu Server 会是一个不错的选择。 Ubuntu Server 有 32 位和 64 位的镜像。你可以根据你的板卡类型(是否支持 64 位)来选择对应的操作系统。 然而,值得注意的一点是 Ubuntu Server 不是为桌面用户定制的。因此,你需要留意 Ubuntu Server 默认不会安装图形用户界面。 * [Ubuntu Server](https://ubuntu.com/download/raspberry-pi) #### 4、LibreELEC:适合做媒体服务器 ![](/data/attachment/album/202005/15/105451fwddzzc7g1fg11oo.jpg) 我们已经有一个 [Linux 下可用的媒体服务器软件](https://itsfoss.com/best-linux-media-server/),LibreELEC 在列表中。 它是一个很棒的轻量级操作系统,让你可以在树莓派上安装 [KODI](https://kodi.tv/)。你可以尝试使用树莓派镜像工具来安装它。 你可以很容易地找到他们的[官方下载页面](https://libreelec.tv/downloads_new/),并找到适合你板卡的安装镜像。 * [LibreELEC](https://libreelec.tv/) #### 5、OSMC:适合做媒体服务器 ![](/data/attachment/album/202005/15/105453w5mqpecpi5pe4mch.jpg) OSMC 是另一个 Linux 下[流行的媒体服务器软件](https://itsfoss.com/best-linux-media-server/)。如果要把树莓派板作为媒体中心设备,那么 OSMC 是你可以向他人推荐的操作系统之一。 类似 LibreELEC,OSMC 也运行 KODI,可以帮助你管理你的媒体文件和欣赏你已有的素材。 OSMC 没有正式提及对树莓派 4 的支持。因此,如果你的树莓派是树莓派 3 或更早的版本,那么应该没有问题。 * [OSMC](https://osmc.tv/) #### 6、RISC OS:最初的 ARM 操作系统 ![](/data/attachment/album/202005/15/105456ne7ggldp7p141mug.jpg) RISC OS 最初是为 ARM 设备打造的,至今已有近 30 年左右的历史。 如果你想了解,我们也有篇详细介绍 [RISC OS](https://itsfoss.com/risc-os-is-now-open-source/) 的文章。简而言之,RISC OS 也是为诸如树莓派的现代基于 ARM 的单板计算机定制的。它的用户界面很简单,更专注于性能。 同样的,这并不是为树莓派 4 量身定做的。因此,如果你的树莓派是 3 或更早的版本,你可以试一下。 * [RISC OS](https://www.riscosopen.org/content/) #### 7、Mozilla WebThings Gateway:适合 IoT 项目 ![](/data/attachment/album/202005/15/105456n6ehl5i4qs94ecqx.png) 作为 Mozilla 的 [IoT 设备的开源实现](https://iot.mozilla.org/about/)的一部分,WebThings Gateway 让你可以监控和控制所有连接的 IoT 设备。 你可以参考[官方文档](https://iot.mozilla.org/docs/gateway-getting-started-guide.html)来检查所需的环境,遵照指导把安装到树莓派上。它确实是适合 
IoT 应用的最有用的树莓派操作系统之一。 * [WebThings Gateway](https://iot.mozilla.org/gateway/) #### 8、Ubuntu Core:适合 IoT 项目 Ubuntu Core 是又一个树莓派操作系统,适用于潜在的 [IoT](https://en.wikipedia.org/wiki/Internet_of_things) 应用,或者只是测试一下 Snap。 Ubuntu Core 是专门为 IoT 设备或者具体来说是树莓派定制的。我不会刻意宣传它 —— 但是 Ubuntu Core 是一款适合树莓派板卡的安全操作系统。你可以自己尝试一下! * [Ubuntu Core](https://ubuntu.com/download/raspberry-pi-core) #### 9、DietPi:轻量级树莓派操作系统 ![](/data/attachment/album/202005/15/105457nwp7hwouw57sqb5z.jpg) DietPi 是一款轻量级的 [Debian](https://www.debian.org/) 操作系统,它还宣称比 “Raspbian Lite” 操作系统更轻量。 虽然它被视作一款轻量级的树莓派操作系统,但它提供了很多功能,可以在多个使用场景中派上用场。从简单的软件安装包到备份解决方案,还有很多功能值得探索。 如果你想安装一个低内存占用而性能相对更好的操作系统,你可以尝试一下 DietPi。 * [DietPi](https://dietpi.com/) #### 10、Lakka Linux:打造复古的游戏主机 ![](/data/attachment/album/202005/15/105500xyfmvllfvvetftii.jpg) 想让你的树莓派变成一个复古的游戏主机? Lakka Linux 发行版本最初是建立在 RetroArch 模拟器上的。因此,你可以立刻在树莓派上获得所有的复古游戏。 如果你想了解,我们也有一篇介绍 [Lakka Linux](/article-10158-1.html) 的文章。或者直接上手吧! * [Lakka](http://www.lakka.tv/) #### 11、RetroPie:适合复古游戏 ![](/data/attachment/album/202005/15/105502u9zb0mmgsofamsb9.png) RetroPie 是另一款可以让树莓派变成复古游戏主机的树莓派操作系统。它提供了几个配置工具,让你可以自定义主题,或者调整模拟器即可拥有最好的复古游戏。 值得注意的是它不包含任何有版权的游戏。你可以试一下,看看它是怎么工作的! * [RetroPie](https://retropie.org.uk/) #### 12、Kali Linux:适合低成本渗透 ![](/data/attachment/album/202005/15/105504s7jf4ee74y9ob6fq.png) 想要在你的树莓派上尝试和学习一些道德黑客技巧吗?[Kali Linux](/article-10690-1.html) 会是最佳选择。是的,Kali Linux 通常在最新的树莓派一发布就会支持它。 Kali Linux 不仅适合树莓派,它也支持很多其他设备。尝试一下,玩得愉快! * [Kali Linux](https://www.offensive-security.com/kali-linux-arm-images/) #### 13、OpenMediaVault:适合网络附加存储(NAS) ![](/data/attachment/album/202005/15/105505wxkrycg4gxrfkbyg.jpg) 如果你想在极简的硬件上搭建 [NAS](https://en.wikipedia.org/wiki/Network-attached_storage) 解决方案,树莓派可以帮助你。 OpenMediaVault 最初是基于 Debian Linux 的,提供了大量功能,如基于 Web 的管理能力、插件支持,等等。它支持大多数树莓派型号,因此你可以尝试下载并安装它! 
* [OpenMediaVault](https://www.openmediavault.org/) #### 14、ROKOS:适合加密挖矿 ![](/data/attachment/album/202005/15/105507woz72h8th4tdxthd.jpg) 如果你对加密货币和比特币很感兴趣,那么 ROKOS 会吸引你。 ROKOS 是基于 Debian 的操作系统,基本上可以让你把你的树莓派变成一个节点,同时预装了相应的驱动程序和软件包。当然,在安装之前你需要了解它是怎么工作的。因此,如果你对此不太了解,我建议你先调研下。 * [ROKOS](https://rokos.space/) #### 15、Alpine Linux:专注于安全的轻量级 Linux 当今年代,很多用户都在寻找专注于安全和[隐私的发行版本](https://itsfoss.com/privacy-focused-linux-distributions/)。如果你也是其中一员,你可以试试在树莓派上安装 Alpine Linux。 如果你是个树莓派新手,它可能不像你想象的那样对用户友好(或者说对初学者来说容易上手)。但是,如果你想尝试一些不一样的东西,那么你可以试试 Alpine Linux 这个专注于安全的 Linux 发行版本。 * [Alpine Linux](https://alpinelinux.org/) #### 16、Kano OS:适合儿童教育的操作系统 ![](/data/attachment/album/202005/15/105509hnqwc5q43hswq9ew.jpg) 如果你在寻找一款能让学习变得有趣还能教育儿童的树莓派操作系统,那么 Kano OS 是个不错的选择。 它正在积极维护中,而且 Kano OS 上的桌面集成的用户体验相当简单,玩起来也很有趣,可以让孩子们从中学习。 * [Kano OS](https://kano.me/row/downloadable) #### 17、KDE Plasma Bigscreen:把普通电视转换为智能电视 ![](/data/attachment/album/202005/15/105510xaftfq8qyaagyza4.jpg) 这是 KDE 一个正在开发中的项目。在树莓派上安装 [KDE “等离子大屏”](/article-12063-1.html) 后,你可以把普通电视变成智能电视。 你不需要特殊的遥控器来控制电视,你可以使用普通的遥控器。 “等离子大屏”也集成了 [MyCroft 开源 AI](https://itsfoss.com/mycroft-mark-2/) 作为声控。 这个项目还在测试阶段,所以如果你想尝试,可能会有一些错误和问题。 * [Plasma Bigscreen](https://plasma-bigscreen.org/#download-jumpto) #### 18、Manjaro Linux:为你提供多功能的桌面体验 ![](/data/attachment/album/202005/15/105512xxkz49e4lysxmv41.jpg) 如果你想在树莓派上寻找一个基于 Arch 的 Linux 发行版,那么 Manjaro Linux 应该是一个很好的补充,它可以做很多事情,适合一般的计算任务。 Manjaro Linux ARM 版也支持最新的树莓派 4。它为你的树莓派或任何[树莓派替代品](/article-10823-1.html)提供了 XFCE 和 KDE Plasma 变体。 此外,它似乎还提供了树莓派设备上最快/最好的体验之一。如果你还没试过,那就试试吧! 
* [Manjaro ARM](https://manjaro.org/download/#raspberry-pi-4) #### 19、Volumio:作为一个开源音乐播放器使用 ![](/data/attachment/album/202005/15/105515nwwoun6x6b99jwhn.jpg) 想做一个廉价的音乐发烧友系统?Volumio 应该可以帮到你。 它是一个自由而开源的操作系统([GitHub](https://github.com/volumio)),还支持集成多个设备的能力。你可以通过一个简单的 Web 控制界面,对所有连接的设备进行管理。除了免费版之外,它还提供了一个高级版,可以让你获得独家功能。 它也确实支持最新的树莓派 4。所以,如果你对调整已有的家庭立体声系统有一定的兴趣,想要获得最佳的音质,不妨试试这个。 * [Volumio](https://volumio.org/) #### 20、FreeBSD 不想使用 Linux 发行版?不用担心,你也可以用 FreeBSD 在树莓派上安装一个类 UNIX 操作系统。 如果你不知道的话,我们有一篇关于 [FreeBSD 项目](https://itsfoss.com/freebsd-interview-deb-goodkin/)的详细文章。 一旦你按照他们的[官方说明](https://www.freebsdfoundation.org/freebsd/how-to-guides/installing-freebsd-for-raspberry-pi/)安装好之后,你可以利用它来进行任何 DIY 实验,或者只是把它作为一个轻量级的桌面系统来完成特定的任务。 * [FreeBSD](https://www.freebsd.org/) #### 21、NetBSD NetBSD 是另一个令人印象深刻的类 UNIX 操作系统,你可以在树莓派上安装。它的目标是成为一个跨多个系统的便携式操作系统。 如果你在其他系统中使用过它,你可能已经知道它的好处了。然而,它不仅仅是一个轻量级的便携式操作系统,它的特点是拥有一套有用的功能,可以完成各种任务。 * [NetBSD](https://www.netbsd.org/) ### 结语 我相信还有很多为树莓派定制的操作系统 — 但是我尽力列出了被积极维护的最流行的或最有用的操作系统。 如果你觉得我遗漏了最合适树莓派的操作系统,尽情在下面的评论去告诉我吧! --- via: <https://itsfoss.com/raspberry-pi-os/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lxbwolf](https://github.com/lxbwolf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
[Raspberry Pi](https://www.raspberrypi.org/?ref=itsfoss.com) is an indispensable single-board computer that comes in handy for a lot of work. Don’t believe me? Just [go through this list of Raspberry Pi projects](https://itsfoss.com/raspberry-pi-projects/) to get a gist of what this tiny device is capable of.

Considering how useful a Raspberry Pi is – **it is an important task to choose the right operating system** for it. Of course, you can do plenty of things with Linux, but an OS specially configured for Raspberry Pi can save you considerable time and effort.

So, in this article, I will be listing out some popular and useful operating systems tailored for Raspberry Pi. Also, **stay tuned until the end** for installation info and some of my personal thoughts.

## Installing any Operating System on Raspberry Pi Easily

[Installing a Raspberry PI operating system on an SD card](https://itsfoss.com/tutorial-how-to-install-raspberry-pi-os-raspbian-wheezy/) is easier than ever before. You can simply use the [Raspberry Pi Imager](https://www.raspberrypi.com/software/) tool and install any operating system. You can also refer to the official video linked below to learn how to install any OS:

Previously, you could use **NOOBS** (New Out Of the Box Software), but that has been retired in favor of the new imager tool. Click on the button below to get started with the imager tool.

## Various Operating Systems for Raspberry Pi

Please keep in mind that I have taken an effort to list only those Raspberry Pi operating system projects **that are being actively maintained**. If a project has been discontinued or will be, in the near future, do let me know in the comments section.

Another thing is that I have **focused on the latest Raspberry Pi 4 and Pi 5 boards**, but this should not be considered just as a list of OSes for those.
You should be able to use most of the options listed below on Raspberry Pi 3, 3 B+ and other variants as well. But, **make sure to check out the official websites** of the projects for more info on compatibility. ## 1. Raspberry Pi OS: The Official One ![a screenshot of raspberry pi os](https://itsfoss.com/content/images/2023/12/RaspberryPi_OS.jpg) Formerly known as “*Raspbian*”, [Raspberry Pi OS](https://www.raspberrypi.com/software/) is **the officially supported OS for Raspberry Pi boards**. It comes baked in with several tools for education, programming, and general use. Specifically, it includes Python, Scratch, Sonic Pi, Java, and numerous other important packages. Originally, Raspberry Pi OS is based on Debian and comes pre-installed with loads of useful packages. So, when you get this installed, you probably don’t need to install essentials separately – you should find almost everything pre-installed. Raspberry Pi OS **is actively maintained**, and it is one of the most popular operating systems for Raspberry Pi boards out there. ## 2. Ubuntu MATE: For General Purpose Computing ![a screenshot of ubuntu mate for raspberry pi](https://itsfoss.com/content/images/2023/12/Ubuntu_MATE_Rpi.jpg) Even though Raspberry Pi OS is the officially supported OS, sometimes you may need something different, so, you can try Ubuntu MATE for Raspberry Pi. [Ubuntu MATE](https://ubuntu-mate.org/raspberry-pi/?ref=itsfoss.com) is an incredibly lightweight distribution to have installed. It’s also popularly used on [NVIDIA’s Jetson Nano](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/?ref=itsfoss.com). In other words, you can utilize it for several use-cases with the Raspberry Pi. To help you out, we also have a detailed guide on [how to install Ubuntu MATE on Raspberry Pi](https://itsfoss.com/ubuntu-mate-raspberry-pi/). ## 3. 
Ubuntu Server: For Use as a Linux Server

![a screenshot of ubuntu server installer](https://itsfoss.com/content/images/wordpress/2020/03/ubunt-server.png)

If you’re planning to use your Raspberry Pi as some sort of server for a project, [Ubuntu Server](https://ubuntu.com/download/raspberry-pi?ref=itsfoss.com) can be a great choice to have installed.

At the time of writing, you can get 64-bit images of the OS based on Ubuntu 22.04.3 LTS and Ubuntu 23.10 releases.

However, it is worth noting that **Ubuntu Server isn’t tailored for desktop usage**. So, you need to keep in mind that **you will have no proper graphical user interface installed** by default.

## 4. LibreELEC: For Media Server

![a screenshot of libreelec 9.0 about info](https://itsfoss.com/content/images/wordpress/2019/02/libreelec-800x600.jpg)

We already have a list of the [best Linux media server software](https://itsfoss.com/best-linux-media-server/) for Linux, and [LibreELEC](https://libreelec.tv/?ref=itsfoss.com) is one of them.

You can **run a fully-fledged media server** on a Raspberry Pi board by using this. It’s **a great lightweight OS system **capable enough to have [KODI](https://kodi.tv/?ref=itsfoss.com) on your Raspberry Pi. You can head over to the [official download page](https://libreelec.tv/downloads/) to find a suitable installer image for your board.

## 5. OSMC: For Media Server

![a screenshot of osmc](https://itsfoss.com/content/images/2023/12/OSMC.png)

[OSMC](https://osmc.tv/?ref=itsfoss.com) is yet another popular media server software for Linux.
While considering the use of Raspberry Pi boards as media center devices, **this is one of the good ones that you can recommend to someone**. Similar to LibreELEC, OSMC** also runs KODI** to help you manage your media files and enjoy watching the content you already have. OSMC officially supports **Raspberry Pi 4 and Raspberry Pi 5**. So, even if you have a Raspberry Pi 3 or lower, you should be good to go. ## 6. RISC OS: The original ARM OS ![a screenshot of risc os](https://itsfoss.com/content/images/wordpress/2018/10/riscos5.1-800x600.jpg) Originally crafted for ARM devices, [RISC OS](https://www.riscosopen.org/content/?ref=itsfoss.com)** has been around for almost 30 years** or so. We also have a separate detailed article on [RISC OS](https://itsfoss.com/risc-os-is-now-open-source/), if you’re curious to know more about it. Long story short, RISC OS is **tailored for modern ARM-based single-board computers like the Raspberry Pi**. It presents a simple user interface with a focus on performance. This is meant for the Raspberry Pi 4, with no news of support for the Raspberry Pi 5. Anyway, if you have a Raspberry Pi 4 or lower, you can give it a try. ## 7. Mozilla WebThings Gateway: For IoT Projects ![a screenshot of webthings gateway](https://itsfoss.com/content/images/wordpress/2020/03/web-things-gateway.png) As part of **Mozilla’s open-source implementation for IoT devices**, [WebThings Gateway](https://webthings.io/gateway/?ref=itsfoss.com) lets you monitor and control all your connected IoT devices. You can follow the [official documentation](https://iot.mozilla.org/docs/gateway-getting-started-guide.html?ref=itsfoss.com) to check installation instructions and other requirements to get it installed on a Raspberry Pi. This is, definitely, one of the more utility-focused operating systems for IoT applications. ## 8. 
Ubuntu Core: For IoT projects Yet another operating system for potential [IoT](https://en.wikipedia.org/wiki/Internet_of_things?ref=itsfoss.com) applications or maybe to simply test snaps – [Ubuntu Core](https://ubuntu.com/download/raspberry-pi?ref=itsfoss.com). Ubuntu core is **specifically tailored for IoT devices **and is a suitable and secure OS for Raspberry Pi boards. You can give this a try for yourself! ## 9. DietPi: A Lightweight OS ![a screenshot of dietpi](https://itsfoss.com/content/images/2023/12/DietPi.jpg) [DietPi](https://dietpi.com/?ref=itsfoss.com) is **a lightweight Debian-based operating system** that claims to be lighter than “*Raspberry Pi OS Lite*”. While considering it as a lightweight operating system for the Raspberry Pi, it offers plenty of features that could be useful for several use-cases. Ranging from easy installers for software packages to a backup solution, there’s a lot to explore. If you’re aiming to get an OS with a low footprint but potentially better performance, you could give this a try. ## 10. Lakka: For Retro Gaming ![a screenshot of lakka](https://itsfoss.com/content/images/wordpress/2016/08/lakkaos-1024x640.jpg) Looking for **a way to turn your Raspberry Pi into a retro gaming console?** Well, then the [Lakka](http://www.lakka.tv/?ref=itsfoss.com) Linux distribution can be the one for you. Originally built on top of the [RetroArch](https://www.retroarch.com) emulator, it **lets you run retro games** on your Raspberry Pi effortlessly. ## 11. RetroPie: For Retro Gaming ![a screenshot of the emulationstation frontend on retropie](https://itsfoss.com/content/images/wordpress/2020/03/retro-pie.png) [RetroPie](https://retropie.org.uk/?ref=itsfoss.com) is yet another popular OS option that turns a Raspberry Pi into a retro gaming console. It **features various configuration tools** so that you can customize the theme or just tweak the emulator to have the best retro gaming experience possible. 
It is worth noting that it does not include any copyrighted games. You can give it a try and see how it works! ## 12. Kali Linux: Hacking On a Budget ![a screenshot of kali linux on raspberry pi 4](https://itsfoss.com/content/images/wordpress/2020/03/kali-linux-pi.png) Want to try to **learn some ethical hacking skills** on your Raspberry Pi? [Kali Linux](https://www.kali.org) can be a perfect fit for it. And, yes, it usually supports the latest Raspberry Pi as soon as it launches. Not just limited to Raspberry Pi, you can get support for a long list of other supported devices as well. Try it out and have fun! ## 13. openmediavault: For Network Attached Storage (NAS) ![a screenshot of openmediavault](https://itsfoss.com/content/images/2023/12/openmediavault.png) If you’re trying to set up a [NAS](https://en.wikipedia.org/wiki/Network-attached_storage?ref=itsfoss.com) (Network Attached Storage) solution on minimal hardware, the Raspberry Pi is a great choice. Originally, based on Debian, [openmediavault](https://www.openmediavault.org/?ref=itsfoss.com) offers a number of features that include web-based administration capabilities, plugin support, and more. It supports most Raspberry Pi models – so you can try downloading and installing it! ## 14. piCore (Tiny Core Linux): For a Minimal Experience ![a screenshot of tiny core](https://itsfoss.com/content/images/2023/12/Tiny_Core.png) Derived on the popular community-built Tiny Core Linux distro, [piCore](http://tinycorelinux.net/5.x/armv6/releases/README) is a Raspberry Pi port that features an independent system architecture built by a handful of developers. Its** main highlight is the amount of flexibility on offer**, you can create a customized system that is tailored according to your preferences. It also features a recent kernel and a handy set of applications. Moreover, **the system entirely runs on the RAM** of a Raspberry Pi board. ## 15. 
Alpine Linux: Lightweight and Security-Focused ![a screenshot of the alpine linux homepage](https://itsfoss.com/content/images/2023/12/Alpine_Linux.png) Nowadays, many users are usually looking for security-focused and [privacy-focused distributions](https://itsfoss.com/privacy-focused-linux-distributions/). And, if you are one of them, you might as well try [Alpine Linux](https://alpinelinux.org/?ref=itsfoss.com) on a Raspberry Pi. It may not be as user-friendly as you’d expect (or beginner-friendly) if you’re just getting started with Raspberry Pi. But, if you want something different to start with, you can try this as a security-focused Linux distribution. ## 16. Slackware ARM: For a Slackware Experience ![a screenshot of the slackware arm homepage](https://itsfoss.com/content/images/2023/12/Slackware_ARM-1.png) For those unfamiliar, Slackware Linux is **one of the oldest Linux distributions around** that has a pretty dedicated user base. [Slackware ARM](https://arm.slackware.com) was created as a hobby project spinoff that has evolved a lot over the years. It can be a great fit for your Raspberry Pi board, you can follow the [official documentation](https://docs.slackware.com/slackwarearm:inst) to get started with Slackware ARM. ## 17. KDE Plasma Bigscreen: Convert TV into Smart TV ![a screenshot of plasma bigscreen](https://itsfoss.com/content/images/2023/12/Plasma_Bigscreen.jpg) This is a regularly updated project from KDE. With [Plasma Bigscreen](https://plasma-bigscreen.org/?ref=itsfoss.com) installed on the Raspberry Pi, you can convert your regular TV into a smart TV. It is an open-source user interface for TVs that is based on a Linux distribution, it also integrates [Mycroft AI](https://mycroft.ai/about-mycroft/) for **voice assistant features**. There are two main ways of installing it on a Raspberry Pi, one is by using the **PostmarketOS distribution **(based on Alpine Linux). 
The other is by using the Raspberry Pi images with Plasma Bigscreen from **Manjaro ARM**. ## 18. Manjaro ARM: A Versatile Desktop Experience ![a screenshot of manjaro arm](https://itsfoss.com/content/images/wordpress/2020/04/manjaro-arm.jpg) If you were looking for** an Arch-based Linux distro for the Raspberry Pi**, [Manjaro ARM](https://wiki.manjaro.org/index.php/Manjaro-ARM) should be a great addition for general computing tasks, with the ability to do plenty of things. Manjaro ARM **is the SBC-focused edition of Manjaro Linux **that supports many Raspberry Pi boards. It offers **XFCE**, **KDE Plasma**, **GNOME**,** SWAY,** **MATE**,** **and **Minimal** variants for your Raspberry Pi or any [Raspberry Pi alternatives](https://itsfoss.com/raspberry-pi-alternatives/). Moreover, it seems to offer one of the fastest/best experiences on a Raspberry Pi device. Give it a try if you haven’t! ## 19. Volumio: To use it as an open-source music player ![a screenshot of volumio on different devices](https://itsfoss.com/content/images/wordpress/2020/04/volumio.jpg) Want to make **an inexpensive audiophile system**? [Volumio](https://volumio.com/en/) should be able to help with that. It is a free and open-source OS ([GitHub](https://github.com/volumio?ref=itsfoss.com)) which also supports the ability to integrate multiple devices. You can manage all the devices connected through a simple web-based control interface. In addition to the free edition, it also has a premium edition as well that gives you access to exclusive features. It supports many Raspberry Pi boards as well. So, if you’ve got some interest in converting an existing home stereo system for better quality, you could try this out. ## 20. FreeBSD: The Do-it-All OS ![a screenshot of freebsd homepage](https://itsfoss.com/content/images/2023/12/FreeBSD.png) Don’t want to utilize a Linux distribution? 
Fret not, you can also have a UNIX-like OS installed on Raspberry Pi thanks to [FreeBSD](https://www.freebsd.org/?ref=itsfoss.com).

Once you get it installed following their [official instructions](https://www.freebsdfoundation.org/freebsd/how-to-guides/installing-freebsd-for-raspberry-pi/?ref=itsfoss.com), you **can use it for any DIY experiment with your Raspberry Pi**, or just use it as a lightweight desktop setup for specific tasks.

In case you didn’t know, in the past, we had an insightful chat with Deb Goodkin, the Executive Director of the [FreeBSD Foundation](https://freebsdfoundation.org/?ref=itsfoss.com). You can go through [our article](https://itsfoss.com/freebsd-interview-deb-goodkin/) to know more about the FreeBSD project.

## 21. NetBSD

![a screenshot of netbsd running on x11](https://itsfoss.com/content/images/2023/12/NetBSD.png)

[NetBSD](https://www.netbsd.org/?ref=itsfoss.com) is yet another impressive UNIX-like OS that you can install on your Raspberry Pi. It **aims to be a portable OS** that can be used across multiple systems.

If you’ve used it previously, you might already know its benefits. However, it is not just limited to being a lightweight and portable OS, it has a useful set of features to get various tasks done.
## Wrapping Up I’m sure there are a lot of other operating systems tailored for Raspberry Pi – but I’ve tried to list the most popular and useful ones that are actively maintained. *💬 If you think we missed any operating system, feel free to let us know in the comments below!*
12,220
无法在 Ubuntu 20.04 上安装 Deb 文件?这是你需要做的!
https://itsfoss.com/cant-install-deb-file-ubuntu/
2020-05-15T11:22:44
[ "deb" ]
https://linux.cn/article-12220-1.html
> > 双击.deb 文件后无法通过 Ubuntu 20.04 的软件中心安装?你不是唯一遇到此问题的人。本教程展示了解决方法。 > > > ![](/data/attachment/album/202005/15/112149cfdyg556upv6vd66.jpg) 在“[安装 Ubuntu 20.04 之后要做的事](/article-12183-1.html)”一文中,一些读者提到他们[用 .deb 文件安装软件](https://itsfoss.com/install-deb-files-ubuntu/)遇到了麻烦。 我发现这很奇怪,因为使用 deb 文件安装程序是最简单的方法之一。你要做的就是双击下载的文件,它会在软件中心中打开(默认情况下)。单击安装,它要求你输入密码,并在几秒钟/分钟内安装了该软件。 我[从 19.10 升级到 Ubuntu 20.04](https://itsfoss.com/upgrade-ubuntu-version/)直到今天都没有遇到这个问题。 我下载了 .deb 文件来安装 [Rocket Chat Messenger](https://rocket.chat/),然后双击该文件安装时,文件用存档管理器打开。这不是我所期望的。 ![DEB files opened with Archive Manager instead of Software Center](/data/attachment/album/202005/15/112245karnndgrbt5avqru.png) “修复”很简单,我将在本教程中向你展示。 ### 在 Ubuntu 20.04 中安装 deb 文件 由于某些原因,在 Ubuntu 20.04 中 deb 文件的默认打开程序被设置为存档管理器。存档管理器是用于[解压 zip](https://itsfoss.com/unzip-linux/) 和其他压缩文件。 解决此问题的方法非常简单。[在 Ubuntu 中更改默认应用](https://itsfoss.com/change-default-applications-ubuntu/),将打开 DEB 文件从“存档管理器”改到“软件安装”。让我告诉你步骤。 **步骤 1:**右键单击下载的 .deb 文件,然后选择**属性**: ![](/data/attachment/album/202005/15/112246vc6c9lj5gj5jpp9m.png) **步骤 2:**进入“**打开方式**”标签,选择“**软件安装**”,然后点击“**设置为默认**”。 ![](/data/attachment/album/202005/15/112248zpiwyiciyiqwl99y.png) 这样,以后所有的 .deb 文件都将通过“软件安装”即软件中心打开。 双击 .deb 文件确认,看看是否在软件中心中打开。 ### 忽视的 bug 还是愚蠢的功能? 为什么会用存档管理器打开 deb 文件让人无法理解。我确实希望这是一个 bug,而不是像[在 Ubuntu 20.04 中不允许在桌面上拖放文件](https://itsfoss.com/add-files-on-desktop-ubuntu/)这样的怪异功能。 既然我们在讨论 deb 文件的安装,就让我告诉你一个不错的工具 [gdebi](https://launchpad.net/gdebi)。它是一个轻量级应用,其唯一目的是安装 DEB 文件。有时它也可以处理依赖关系。 你可以了解更多有关[使用 gdebi 并默认设为安装 deb 文件的工具](https://itsfoss.com/gdebi-default-ubuntu-software-center/)。 --- via: <https://itsfoss.com/cant-install-deb-file-ubuntu/> 作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
In the “[things to do after installing Ubuntu 20.04](https://itsfoss.com/things-to-do-after-installing-ubuntu-20-04/)” article, a few readers mentioned that they had trouble [installing software from the Deb file](https://itsfoss.com/install-deb-files-ubuntu/). I found that strange because installing a program using the deb file is one of the simplest methods. All you have to do is double-click the downloaded file and it opens (by default) with the Software Center program. You click on install, it asks for your password and within a few seconds/minutes, the software is installed. I had [upgraded to Ubuntu 20.04 from 19.10](https://itsfoss.com/upgrade-ubuntu-version/) and hadn’t faced this issue with it until today. I downloaded the .deb file for installing [Rocket Chat messenger](https://rocket.chat/) and when I double-clicked on it to install this software, the file was opened with the archive manager. This is not what I expected. ![Error Opening Deb File](https://itsfoss.com/content/images/wordpress/2020/05/error-opening-deb-file.png) The “fix” is simple, and I am going to show it to you in this quick tutorial. ## Installing deb files in Ubuntu 20.04 and 22.04 For some reason, the default application for opening deb files has been set to the Archive Manager tool in Ubuntu 20.04. The Archive Manager tool is used for [extracting zip](https://itsfoss.com/unzip-linux/) and other compressed files. The solution for this problem is pretty simple. You [change the default application in Ubuntu](https://itsfoss.com/change-default-applications-ubuntu/) for opening DEB files from Archive Manager to Software Install. Let me show you the steps.
**Step 1:** Right click on the downloaded DEB file and select **Properties**: ![Open Deb Files](https://itsfoss.com/content/images/wordpress/2020/05/open-deb-files.png) **Step 2:** Go to the “**Open With**” tab, select the “**Software Install**” app and click on “**Set as default**”. ![How to fix deb file not opening with software center](https://itsfoss.com/content/images/2023/09/deb-file-install-fix-ubuntu.webp) This way, all the deb files in the future will be opened with Software Install, i.e. the software center application. Confirm it by double-clicking the DEB file and seeing whether it opens with the software center application. ## Ignorant bug or stupid feature? Why deb files are supposed to be opened with Archive Manager is beyond comprehension. I do hope that this is a bug, not a weird feature like [not allowing drag and drop files on the desktop in Ubuntu 20.04](https://itsfoss.com/add-files-on-desktop-ubuntu/). Since we are discussing deb file installation, let me tell you about a nifty tool, [gdebi](https://launchpad.net/gdebi). It’s a lightweight application with the sole purpose of installing DEB files. Not always, but sometimes it can also handle the dependencies. You can learn more about [using gdebi and making it default for installing deb files here](https://itsfoss.com/gdebi-default-ubuntu-software-center/). On a related note, you may also want to read about [deleting deb packages](https://itsfoss.com/uninstall-deb-ubuntu/).
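The two GUI steps above simply record a default handler for the Debian-package MIME type. If you prefer the terminal, the same association can usually be set with `xdg-mime` from xdg-utils. The sketch below is a hedged illustration, not the article's own method: it builds the entry in a scratch file so it is safe to run anywhere, and the `org.gnome.Software.desktop` name is an assumption (it is the id GNOME Software commonly uses, but verify it on your system with `xdg-mime query default application/vnd.debian.binary-package` first).

```shell
# What the "Set as default" click stores: a MIME-type -> .desktop mapping.
# On a real Ubuntu desktop the one-line equivalent is (assuming GNOME
# Software is installed under this .desktop id):
#   xdg-mime default org.gnome.Software.desktop application/vnd.debian.binary-package
# Here we reproduce the resulting entry in a scratch file to show the
# format that ~/.config/mimeapps.list uses:
cfg=$(mktemp)
{
  printf '[Default Applications]\n'
  printf 'application/vnd.debian.binary-package=org.gnome.Software.desktop\n'
} > "$cfg"
grep '^application/vnd.debian.binary-package=' "$cfg"
rm -f "$cfg"
```

After the real `xdg-mime default` command, double-clicking a `.deb` file should open the software center again, just as with the GUI steps.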
12,223
Ubuntu Studio 将用 KDE Plasma 桌面环境替换 Xfce
https://itsfoss.com/ubuntu-studio-opts-for-kde/
2020-05-16T11:37:08
[ "Ubnutu", "Studio" ]
https://linux.cn/article-12223-1.html
[Ubuntu Studio](https://ubuntustudio.org/) 是一个流行的 [Ubuntu 官方变种](https://itsfoss.com/which-ubuntu-install/),它是为从事音频制作、视频、图形、摄影和书籍出版的创意内容创建者量身定制的。它提供了许多多媒体内容创建应用,开箱即用,体验极佳。 在最近的 20.04 LTS 版本发布之后,Ubuntu Studio 团队在其[官方公告](https://ubuntustudio.org/2020/04/ubuntu-studio-20-04-lts-released/)中强调了一些非常重要的内容。而且,也许不是每个人都注意到关键信息,即 Ubuntu Studio 的未来。 Ubuntu Studio 20.04 将是带有 [Xfce 桌面环境](https://xfce.org)的最后一个版本。将来的所有发行版都将改用 [KDE Plasma](https://kde.org/plasma-desktop)。 ### 为什么 Ubuntu Studio 放弃 XFCE? ![](/data/attachment/album/202005/16/113713pl190wbu8788rq9q.jpg) 据他们的澄清,Ubuntu Studio 并不专注于任何特定的外观/感受,而是旨在提供最佳的用户体验。同时,KDE 被证明是一个更好的选择。 > > 事实证明,Plasma 为图形艺术家和摄影师提供了更好的工具,例如在 Gwenview、Krita 甚至文件管理器 Dolphin 中都可以看出。此外,它对 Wacom 平板的支持比其他任何桌面环境都更好。 > > > 它已经变得如此之好,以至于现在大部分 Ubuntu Studio 团队成员都在使用通过 Ubuntu Studio 安装程序添加了 Ubuntu Studio 的 Kubuntu 作为日常工作使用。既然我们中的这么多人都在使用 Plasma,因此在我们的下一个版本中过渡到 Plasma 似乎是个好时机。 > > > 当然,每一个桌面环境都是针对不同的用户量身定制的。在此,他们认为 KDE Plasma 是最适合取代 XFCE 的桌面环境,可以为所有用户提供更好的用户体验。 尽管我不确定用户对此会有何反应,因为每个用户都有不同的偏好。如果现有用户对 KDE 没有问题,那就没什么大问题。 值得注意的是,Ubuntu Studio 还提到了为什么 KDE 可能是它们的更好选择: > > 在没有 Akonadi 的情况下,Plasma 桌面环境的资源使用与 Xfce 一样轻量级,甚至更轻。Fedora Jam 和 KXStudio 等其他以音频为重点的 Linux 发行版在历史上一直使用 KDE Plasma 桌面环境,并且在音频方面做得很好。 > > > 此外,他们还强调了 [Jason Evangelho 在福布斯杂志上的一篇文章](https://www.forbes.com/sites/jasonevangelho/2019/10/23/bold-prediction-kde-will-steal-the-lightweight-linux-desktop-crown-in-2020),其中一些基准测试表明 KDE 几乎与 Xfce 一样轻量级。尽管这是一个好的迹象,我们仍然需要等待用户试驾 KDE 驱动的 Ubuntu Studio。只有这样,我们才能观察到 Ubuntu Studio 放弃 XFCE 桌面环境的决定是否正确。 ### 更改后,Ubuntu Studio 用户将发生什么变化? 
在带有 KDE 的 Ubuntu Studio 20.10 及更高版本上的整体工作流程可能会受到影响(或改善)。 但是,升级过程(从 20.04 到 20.10)会导致系统损坏。因此,全新安装 Ubuntu Studio 20.10 或更高版本将是唯一的方法。 他们还提到,他们还会不断评估与预安装的应用是否存在重复。所以,相信在未来的日子里,更多的细节也会随之而来。 Ubuntu Studio 是最近一段时间内第二个改变了主桌面环境的发行版。先前,[Lubuntu](https://itsfoss.com/lubuntu-20-04-review/) 从 LXDE 切换到 LXQt。 你如何看待这种变化?欢迎在下面的评论中分享你的想法。 --- via: <https://itsfoss.com/ubuntu-studio-opts-for-kde/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
[Ubuntu Studio](https://ubuntustudio.org/) is a popular [official flavour of Ubuntu](https://itsfoss.com/which-ubuntu-install/) tailored for creative content creators involved in audio production, video, graphics, photography, and book publishing. It offers a lot of multimedia content creation applications out of the box with the best possible experience. After the recent 20.04 LTS release, the Ubuntu Studio team highlighted something very important in their [official announcement](https://ubuntustudio.org/2020/04/ubuntu-studio-20-04-lts-released/). And, probably not everyone noticed the key information i.e Ubuntu Studio’s future. Ubuntu Studio 20.04 will be the last version to ship with the [Xfce desktop environment](https://xfce.org). All the future releases will be using [KDE Plasma](https://kde.org/plasma-desktop) instead. ## Why is Ubuntu Studio ditching XFCE? ![Ubuntu Studio Kde Xfce](https://itsfoss.com/content/images/wordpress/2020/05/ubuntu-studio-kde-xfce.jpg) As per their clarification, Ubuntu Studio isn’t focused on any particular look/feel but aims to provide the best user experience possible. And, KDE proves to be a better option. Plasma has proven to have better tools for graphics artists and photographers, as can be seen in Gwenview, Krita, and even the file manager Dolphin. Additionally, it has Wacom tablet support better than any other desktop environment. It has become so good that the majority of the Ubuntu Studio team is now using Kubuntu with Ubuntu Studio added-on via Ubuntu Studio Installer as their daily driver. With so many of us using Plasma, the timing just seems right to focus on a transition to Plasma with our next release. Of course every desktop environment has been tailored for something different. And, here, they think that KDE Plasma will be the most suitable desktop environment replacing XFCE to provide a better user experience to all the users. 
I’m not sure how users will react to this, as every user has a different set of preferences. If the existing users won’t have a problem with KDE, it isn’t going to be a big deal. It is worth noting that Ubuntu Studio also mentioned why KDE is potentially a superior choice for them: The Plasma desktop environment has, without Akonadi, become just as light in resource usage as Xfce, perhaps even lighter. Other audio-focused Linux distributions, such as Fedora Jam and KXStudio, have historically used the KDE Plasma desktop environment and done well with the audio. Also, they’ve highlighted [an article by Jason Evangelho at Forbes](https://www.forbes.com/sites/jasonevangelho/2019/10/23/bold-prediction-kde-will-steal-the-lightweight-linux-desktop-crown-in-2020) where some benchmarks reveal that KDE is almost as light as Xfce. Even though that’s a good sign, we still have to wait for users to test-drive the KDE-powered Ubuntu Studio. Only then will we be able to observe whether Ubuntu Studio’s decision to ditch the XFCE desktop environment was the right thing to do. ## What will change for Ubuntu Studio users after this change? The overall workflow may get affected (or improve) moving forward with KDE on Ubuntu Studio 20.10 and later. However, the upgrade process (from 20.04 to 20.10) will result in broken systems. So, a fresh install of Ubuntu Studio 20.10 or later versions will be the only way to go. They’ve also mentioned that they will be constantly evaluating for any duplication with the pre-installed apps. So, I believe more details will follow in the coming days. Ubuntu Studio is the second distribution that has changed its main desktop environment in recent times. Earlier, [Lubuntu](https://itsfoss.com/lubuntu-20-04-review/) switched to LXQt from LXDE. What do you think about this change? Feel free to share your thoughts in the comments below.
12,225
使用 FreeBSD 作为桌面操作系统
https://opensource.com/article/20/5/furybsd-linux
2020-05-17T11:02:24
[ "FuryBSD", "FreeBSD" ]
https://linux.cn/article-12225-1.html
> > FuryBSD 的实时桌面环境能让你在实际使用之前先尝试。 > > > ![FuryBSD Post-Install XFCE Desktop](/data/attachment/album/202005/17/110252itt3prgiiesebr54.png "FuryBSD Post-Install XFCE Desktop") [FreeBSD](https://www.freebsd.org) 是一个很棒的操作系统,但是从设计上讲,它并没有自带桌面环境。如果不从 FreeBSD 的 [ports 和软件包集](https://www.freebsd.org/ports/)安装其他软件,那么 FreeBSD 仅能体验命令行。下面的截图显示了在安装过程中选择了每个“可选系统组件”后,登录 FreeBSD 12.1 的样子。 ![FreeBSD](/data/attachment/album/202005/17/110300jgqbbmmbbvlb733l.png "FreeBSD") FreeBSD 可以用各种桌面环境中的任何一种来变成桌面操作系统,但是这需要时间、精力和[遵循大量书面说明](https://www.freebsdfoundation.org/freebsd/how-to-guides/installing-a-desktop-environment-on-freebsd/)。使用 desktop-installer 包(为用户提供基于文本的菜单并帮助自动执行大部分过程)仍然非常耗时。这两种方法的最大问题是,用户在花了很多时间进行设置之后,可能会发现他们的硬件系统与 FreeBSD 不完全兼容。 [FuryBSD](https://www.furybsd.org) 通过提供实时桌面镜像来解决此问题,用户可以在安装之前对其进行评估。目前,FuryBSD 提供 Xfce 镜像和 KDE 镜像。每个镜像都提供了一个已预安装桌面环境的 FreeBSD。如果用户试用该镜像并发现其硬件工作正常,那么他们可以安装 FuryBSD,并拥有一个由 FreeBSD 驱动的即用桌面操作系统。在本文中,我会使用 Xfce 镜像,但 KDE 镜像的工作原理完全一样。 对于安装过 Linux 发行版、BSD 或任何其他类 Unix 的开源操作系统的人,FuryBSD 的上手过程应该很熟悉。从 FuryBSD 网站下载 ISO,将它复制到闪存盘,然后从闪存盘启动计算机。如果从闪存盘引导失败,请确保“安全引导”已禁用。 ![FuryBSD Live XFCE Desktop](/data/attachment/album/202005/17/110327kz9tc7q79itt76kn.png "FuryBSD Live XFCE Desktop") 从闪存盘启动后,桌面环境将自动加载。除了“家”、“文件系统”和“回收站”图标外,实时桌面还有用于配置 Xorg 的工具、入门指南、FuryBSD 安装程序和系统信息程序的图标。除了这些额外功能以及一些自定义的 Xfce 设置和壁纸外,桌面环境除了基本的 Xfce 应用和 Firefox 之外并没有其他功能。 ![FuryBSD Xorg Tool](/data/attachment/album/202005/17/110342c7w8aeawwmez9cje.png "FuryBSD Xorg Tool") 此时仅加载基本的图形驱动,但足以检查 FuryBSD 是否支持系统的有线和无线网络接口。如果网络接口没有一个能自动工作,那么 `Getting Started.txt` 文件包含有关尝试配置网络接口和其他配置任务的说明。如果至少有一个网络接口有效,那么可以使用 Configure Xorg 应用安装 Intel、NVidia 或 VirtualBox 图形驱动。它将下载并安装驱动,并且需要重新启动 Xorg。如果系统未自动重新登录到实时镜像用户,那么密码为 `furybsd`(你可以使用它来登录)。配置后,图形驱动将转移到已安装的系统中。 ![FuryBSD Installer - ZFS Configuration](/data/attachment/album/202005/17/110404i1szsya19a7a58au.png "FuryBSD Installer - ZFS Configuration") 如果一切都可以在实时环境中正常运行,那么 FuryBSD 安装程序可以将 FuryBSD 配置并安装到计算机上。该安装程序在终端中运行,但提供与大多数其他 BSD 和 Linux 安装程序相同的选项。系统将要求用户设置系统的主机名、配置 ZFS
存储、设置 root 密码,添加至少一个非 root 用户以及配置时间和日期。完成后,系统可以引导到预装有 Xfce (或 KDE)的 FreeBSD 中。FuryBSD 完成了所有困难的工作,甚至还花了很多精力使桌面看起来更漂亮。 如上所述,桌面环境没有大量预装软件,因此几乎肯定要安装额外的软件包。最快的方法是在终端中使用 `pkg` 命令。该命令的行为很像 `dnf` 和 `apt`,因此使用过其中之一的来自 Linux 发行版的用户在查找和安装软件包时应该感到很熟悉。FreeBSD 的软件包集合非常多,因此大多数知名的开源软件包都可用。 如果用户在没有太多 FreeBSD 经验的情况下尝试 FuryBSD,应查阅 [FreeBSD 手册](https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/),以了解有关如何以 FreeBSD 的方式操作。有任何 Linux 发行版或其他 BSD 使用经验的用户应该能弄清很多事情,但是手册可以帮助你弄清一些差异。进一步了解 FreeBSD 的一个很好的资源是 Michael W. Lucas 的 《[Absolute FreeBSD,第三版](https://nostarch.com/absfreebsd3)》。 --- via: <https://opensource.com/article/20/5/furybsd-linux> 作者:[Joshua Allen Holm](https://opensource.com/users/holmja) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
[FreeBSD](https://www.freebsd.org) is a great operating system, but, by design, it does not come with a desktop environment. Without installing additional software from FreeBSD's [ports and packages collection](https://www.freebsd.org/ports/), FreeBSD is a command-line only experience. The screenshot below shows what logging into FreeBSD 12.1 looks like when every one of the "optional system components" is selected during installation. ![FreeBSD](https://opensource.com/sites/default/files/uploads/freebsd.png) FreeBSD can be turned into a desktop operating system with any of a wide selection of desktop environments, but it takes time, effort, and [following a lot of written instructions](https://www.freebsdfoundation.org/freebsd/how-to-guides/installing-a-desktop-environment-on-freebsd/). Using the **desktop-installer** package, which provides the user with options in a text-based menu and helps automate much of the process, is still time-consuming. The biggest problem with either of these methods is that users might find out that their system is not fully compatible with FreeBSD after they have taken all the time to set things up. [FuryBSD](https://www.furybsd.org) solves that problem by providing a live desktop image that users can evaluate before installing. Currently, FuryBSD provides an Xfce image and a KDE image. Each of these images provides an installation of FreeBSD that has a desktop environment pre-installed. If users try out the image and find that their hardware works, they can install FuryBSD and have a ready-to-go desktop operating system powered by FreeBSD. For the purposes of this article, I will be using the Xfce image, but the KDE image works the exact same way. Getting started with FuryBSD should be a familiar process to anyone who has installed a Linux distribution, any of the BSDs, or any other Unix-like open source operating system.
Download the ISO from the FuryBSD website, copy it to a flash drive, and boot a computer from the flash drive. If booting from the flash drive fails, make sure Secure Boot is disabled. ![FuryBSD Live XFCE Desktop](https://opensource.com/sites/default/files/uploads/furybsd_live_xfce_desktop.png) After booting from the flash drive, the desktop environment loads automatically. In addition to the Home, File System, and Trash icons, the live desktop has icons for a tool to configure Xorg, getting started instructions, the FuryBSD installer, and a system information utility. Other than these extras and some custom Xfce settings and wallpaper, the desktop environment does not come with much beyond the basic Xfce applications and Firefox. ![FuryBSD Xorg Tool](https://opensource.com/sites/default/files/uploads/furybsd_xorg_tool.png) Only basic graphics drivers are loaded at this point, but it is enough to check to see if the system's wired and wireless network interfaces are supported by FuryBSD. If none of the network interfaces is working automatically, the **Getting Started.txt** file contains instructions for attempting to configure network interfaces and other configuration tasks. If at least one of the network interfaces works, the **Configure Xorg** application can be used to install Intel, NVidia, or VirtualBox graphics drivers. The drivers will be downloaded and installed, and Xorg will need to be restarted. If the system does not automatically re-login to the live image user, the password is **furybsd**. Once they are configured, the graphics drivers will carry over to an installed system. ![FuryBSD Installer - ZFS Configuration](https://opensource.com/sites/default/files/uploads/furybsd_installer_-_zfs_configuration.png) If everything works well in the live environment, the FuryBSD installer can configure and install FuryBSD onto the computer.
This installer runs in a terminal, but it provides the same options found in most other BSD and Linux installers. The user will be asked to set the system's hostname, configure ZFS storage, set the root password, add at least one non-root user, and configure the time and date settings. Once the process is complete, the system can be rebooted into a pre-configured FreeBSD with an Xfce (or KDE) desktop. FuryBSD did all the hard work and even took the extra effort to make the desktop look nice. ![FuryBSD Post-Install XFCE Desktop](https://opensource.com/sites/default/files/uploads/furybsd_post-install_xfce_desktop.png) As noted above, the desktop environment does not come with a lot of pre-installed software, so installing additional packages is almost certainly necessary. The quickest way to do this is by using the **pkg** command in the terminal. This command behaves much like **dnf** and **apt**, so users coming from a Linux distribution that uses one of those should feel right at home when it comes to finding and installing packages. FreeBSD's package collection is large, so most of the big-name open source software packages are available. Users trying out FuryBSD without having much FreeBSD experience should consult the [FreeBSD Handbook](https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/) to learn more about how to do things the FreeBSD way. Users with experience using any Linux distribution or one of the other BSDs should be able to figure out a lot of things, but there are differences that the handbook can help clarify. Another great resource for learning more about the FreeBSD way of doing things is *Absolute FreeBSD, 3rd Edition* by Michael W. Lucas.
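To make the **pkg**/**apt**/**dnf** comparison above concrete, here is a hedged sketch of the correspondence. Since `pkg` only exists on FreeBSD, the helper just assembles the command string so it runs anywhere; on an actual FuryBSD/FreeBSD install you would run the resulting `pkg` line as root, and `htop` is only an example package name.

```shell
# Map an OS family to its "install a package" command line, mirroring the
# point that FreeBSD's pkg feels much like apt or dnf.
install_cmd() {
    os=$1; pkgname=$2
    case "$os" in
        freebsd)        echo "pkg install -y $pkgname" ;;
        debian|ubuntu)  echo "apt install -y $pkgname" ;;
        fedora)         echo "dnf install -y $pkgname" ;;
        *)              echo "unsupported: $os" >&2; return 1 ;;
    esac
}

install_cmd freebsd htop   # prints: pkg install -y htop
install_cmd fedora  htop   # prints: dnf install -y htop
```

The same pattern carries over to searching (`pkg search`, `apt search`, `dnf search`) and removing packages, which is why users of either Linux family tend to feel at home with `pkg`.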
12,227
9 个用于前端 Web 开发的开源 CSS 框架
https://opensource.com/article/20/4/open-source-css-frameworks
2020-05-18T09:49:44
[ "css" ]
/article-12227-1.html
> > 探索开源 CSS 框架,找到适合你的项目的框架。 > > > ![](/data/attachment/album/202005/18/094922of81rqfiei8x78xi.jpg) 当大多数人想到 Web 开发时,通常会想到 HTML 或 JavaScript。他们通常会忘记对网站的欣赏能力有更大影响的技术:<ruby> <a href="https://en.wikipedia.org/wiki/Cascading_Style_Sheets"> 级联样式表 </a> <rt> cascading style sheets </rt></ruby>(简称 CSS)。据维基百科的说法,CSS 既是网页中最重要的部分,也是最常被遗忘的部分,尽管它是万维网的三大基石技术之一。 本文将探讨九种流行的、强大的、开源的框架,是这些框架让构建漂亮的网站前端的 CSS 开发变得简单明了。 | 名称 | 介绍 | 许可证 | | --- | --- | --- | | [Bootstrap](https://github.com/twbs/bootstrap) | 最流行的 CSS 框架,来自 Twitter | MIT | | [PatternFly](https://github.com/patternfly/patternfly) | 开源框架,来自 Red Hat | MIT | | [MDC Web](https://github.com/material-components/material-components-web) | Material Design 组件开源框架,来自 Google | MIT | | [Pure](https://github.com/pure-css/pure) | 开源框架,来自 Yahoo | BSD | | [Foundation](https://github.com/foundation/foundation-sites) | 前端框架,来自 Zurb 基金会 | MIT | | [Bulma](https://github.com/jgthms/bulma) | 现代 CSS 框架,基于 Flexbox | MIT | | [Skeleton](https://github.com/dhg/Skeleton) | 轻量级 CSS 框架 | MIT | | [Materialize](https://github.com/Dogfalo/materialize) | 基于 Material Design 的 CSS 框架 | MIT | | [Bootflat](https://github.com/bootflat/bootflat.github.io) | 开源 Flat UI 工具,基于 Bootstrap 3.3.0 | MIT | ### Bootstrap [Bootstrap](https://en.wikipedia.org/wiki/Cascading_Style_Sheets) 无疑是最流行的 CSS 框架,它是所有前端 Web 设计的开端。Bootstrap 由 Twitter 开发,提供了可用性、功能性和可扩展性。 ![Bootstrap homepage](/data/attachment/album/202005/18/094946us2a8ig2qw8g8d9a.jpg "Bootstrap homepage") Bootstrap 还提供了大量的[例子](https://getbootstrap.com/docs/4.4/examples/)来帮助你入门。 ![Bootstrap examples](/data/attachment/album/202005/18/094948j0kff84cccklpb0y.jpg "Bootstrap examples") 使用 Bootstrap,你可以将不同的组件和布局拼接在一起,创造出有趣的页面设计。它还提供了大量详细的文档。 ![Bootstrap documentation](/data/attachment/album/202005/18/094949ww3aznwwgpl5w4qw.jpg "Bootstrap documentation") Bootstrap 的 [GitHub](https://github.com/twbs/bootstrap) 仓库有超过 19000 个提交和 1100 个贡献者。它基于 MIT 许可证,所以(和这个列表中的所有框架一样)你也可以加入并贡献。 ![Bootstrap 
GitHub](/data/attachment/album/202005/18/094950wg03bc5gi4vccz8o.jpg "Bootstrap GitHub") ### PatternFly [PatternFly](https://www.patternfly.org) 是由 Red Hat 开发的一个开源的(MIT 许可证)CSS 框架。PatternFly 采取了与 Bootstrap 不同的方法:Bootstrap 是为任何对创建一个漂亮网站感兴趣的人而设计的,而 PatternFly 主要针对企业级应用开发者,它提供的组件,如条形图、图表和导航,对于创建强大的、指标驱动的仪表盘非常有吸引力。事实上,Red Hat 在其产品(如 OpenShift)的设计中也使用了这个 CSS 框架。 ![PatternFly homepage](/data/attachment/album/202005/18/094951jq9s9oaa9z92znla.jpg "PatternFly homepage") 除了静态 HTML 之外,PatternFly 还支持 ReactJS 框架,ReactJS 是 Facebook 开发的一个流行的 JavaScript 框架。 ![PatternFly ReactJS support](/data/attachment/album/202005/18/094952mtgcsvcmm5aa5pap.jpg "PatternFly ReactJS support") PatternFly 有许多高级组件,如条形图、图表、[模态窗口](https://en.wikipedia.org/wiki/Modal_window)和布局等,适用于企业级应用。 ![PatternFly chart component](/data/attachment/album/202005/18/094952v4i3hah4oz3inhaa.jpg "PatternFly chart component") PatternFly 的 [GitHub](https://github.com/patternfly/patternfly) 页面列出了超过 1050 个提交和 44 个贡献者。PatternFly 得到了很多人的关注,欢迎大家踊跃贡献。 ![PatternFly GitHub](/data/attachment/album/202005/18/094953z7bvlv4g89eeq8kl.jpg "PatternFly GitHub") ### MDC Web 凭借其大获成功的安卓平台,谷歌以一个名为 [Material Design](https://material.io) 的概念制定了自己的标准设计准则。Material Design 标准旨在体现在所有谷歌的产品中,这些标准也可以面向大众,并且在 MIT 许可证下开源。 ![Material Design homepage](/data/attachment/album/202005/18/094953fyz0yngo1mt0zx01.jpg "Material Design homepage") Material Design 有许多“用于创建用户界面的交互式构建块”的[组件](https://material.io/components/)。这些按钮、卡片、背景等可用于创建网站或移动应用程序的任何类型的用户界面。 ![Material Components webpage](/data/attachment/album/202005/18/094954olni3lala020qiz0.jpg "Material Components webpage") 维护人员为不同的平台提供了详尽的文档。 ![Material Design documentation](/data/attachment/album/202005/18/094955k8j4dvugarms2z8a.jpg "Material Design documentation") 还有分步教程,其中包含用于实现不同目标的练习。 ![Material Design tutorial](/data/attachment/album/202005/18/094956n2x774bcggznb77x.jpg "Material Design tutorial") Material 组件的 GitHub 页面承载了面向不同平台的存储库,包括用于网站开发的 [Material Web 组件(MDC 
Web)](https://github.com/material-components/material-components-web)。MDC Web 有超过 5700 个提交和 349 个贡献者。 ![MDC Web GitHub](/data/attachment/album/202005/18/095001fi2salug8qql2ka8.jpg "MDC Web GitHub") ### Pure Bootstrap、Patternfly 和 MDC Web 都是非常强大的 CSS 框架,但是它们可能相当的笨重和复杂。如果你想要一个轻量级的 CSS 框架,它更接近于自己编写 CSS,但又能帮助你建立一个漂亮的网页,可以试试 [Pure.css](https://purecss.io)。Pure 是一个轻量级的 CSS 框架,它的体积很小。它是由 Yahoo 开发的,在 BSD 许可证下开源。 ![Pure.css homepage](/data/attachment/album/202005/18/095003wasgsqmafqa9fnc5.jpg "Pure.css homepage") 尽管体积小,但 Pure 提供了建立一个漂亮网页的很多必要的组件。 ![Pure.css components](/data/attachment/album/202005/18/095005h0kttxtguz3wgxk3.jpg "Pure.css components") Pure 的 [GitHub](https://github.com/pure-css/pure) 页面显示它有超过 565 个提交和 59 个贡献者。 ![Pure.css GitHub](/data/attachment/album/202005/18/095008ew9i9a9zqpjhyqaf.jpg "Pure.css GitHub") ### Foundation [Foundation](https://get.foundation) 号称是世界上最先进的响应式前端框架。它提供了先进的功能和教程,用于构建专业网站。 ![Foundation homepage](/data/attachment/album/202005/18/095011km82sdxsrbcskc33.jpg "Foundation homepage") 该框架被许多公司、组织甚至政客[使用](https://zurb.com/responsive),并且有大量的文档可用。 ![Foundation documentation](/data/attachment/album/202005/18/095013kabduevhvmbeq5ed.jpg "Foundation documentation") Foundation 的 [GitHub](https://github.com/foundation/foundation-sites) 页面显示有近 17000 个提交和 1000 个贡献者。和这个列表中的大多数其他框架一样,它也是在 MIT 许可证下提供的。 ![Foundation GitHub](/data/attachment/album/202005/18/095018q7yakuiikllvz1vq.jpg "Foundation GitHub") ### Bulma [Bulma](https://bulma.io) 是一个基于 Flexbox 的开源框架,在 MIT 许可证下提供。Bulma 是一个相当轻量级的框架,因为它只需要一个 CSS 文件。 ![Bulma homepage](/data/attachment/album/202005/18/095021xgvrnwwfvobbivz7.jpg "Bulma homepage") Bulma 有简洁明快的文档,让你可以很容易地选择你想要探索的主题。它也有很多网页组件,你可以直接拿起来在设计中使用。 ![Bulma documentation](/data/attachment/album/202005/18/095022ib1ujq5xuxhtxujj.jpg "Bulma documentation") Bulma 的 [GitHub](https://github.com/jgthms/bulma) 页面列出了 1400 多个提交和 300 多个贡献者。 ![Bulma GitHub](/data/attachment/album/202005/18/095025ycn2cn888snn2x48.jpg "Bulma GitHub") ### Skeleton 如果连 Pure 
都觉得太重了,那么还有一个叫 [Skeleton](http://getskeleton.com) 的更轻量级框架。Skeleton 库只有 400 行左右的长度,而且这个框架只提供了开始你的 CSS 框架之旅的基本组件。 ![Skeleton homepage](/data/attachment/album/202005/18/095026ceyylpuyzfvuelr7.jpg "Skeleton homepage") 尽管它很简单,但 Skeleton 提供了详细的文档,可以帮助你马上上手。 ![Skeleton documentation](/data/attachment/album/202005/18/095029j8q20nqri8kkx8tr.jpg "Skeleton documentation") Skeleton 的 [GitHub](https://github.com/dhg/Skeleton) 列出了 167 个提交和 22 个贡献者。然而,它不是很活跃,它的最后一次更新是在 2014 年,所以在使用之前可能需要一些维护。由于它是在 MIT 许可证下发布的,你可以自行维护。 ![Skeleton GitHub](/data/attachment/album/202005/18/095032l4o3bwc39bz3uuga.jpg "Skeleton GitHub") ### Materialize [Materialize](https://materializecss.com) 是一个基于 Google 的 Material Design 的响应式前端框架,带有由 Materialize 的贡献者开发的附加主题和组件。 ![Materialize homepage](/data/attachment/album/202005/18/095033kha6kis3rsraqwct.jpg "Materialize homepage") Materialize 的文档页面非常全面,而且相当容易理解。它的组件页面包括按钮、卡片、导航等等。 ![Materialize documentation](/data/attachment/album/202005/18/095035or00l5bf65jbksrr.jpg "Materialize documentation") Materialize 是 MIT 许可证下的开源项目,它的 [GitHub](https://github.com/Dogfalo/materialize) 列出了超过 3800 个提交和 250 个贡献者。 ![Materialize GitHub](/data/attachment/album/202005/18/095037fqb2ebzu2kwzz9yk.jpg "Materialize GitHub") ### Bootflat [Bootflat](http://bootflat.github.io) 是由 Twitter 的 Bootstrap 衍生出来的一个开源 CSS 框架。与 Bootstrap 相比, Bootflat 更简单,框架组件更轻量级。 ![Bootflat homepage](/data/attachment/album/202005/18/095039c813elddde867hk9.jpg "Bootflat homepage") Bootflat 的[文档](http://bootflat.github.io/documentation.html)几乎像是受到了宜家的启发 —— 它显示的是每个组件的图片,没有太多的文字。 ![Bootflat docs](/data/attachment/album/202005/18/095041x437z9uh7uo5252q.jpg "Bootflat docs") Bootflat 是在 MIT 许可证下提供的,其 [GitHub](https://github.com/bootflat/bootflat.github.io) 页面包括 159 个提交和 8 个贡献者。 ![Bootflat GitHub](/data/attachment/album/202005/18/095044rdcc0353bfyhsa6h.jpg "Bootflat GitHub") ### 你应该选择哪个 CSS 框架? 
对于开源的 CSS 框架,你有很多选择,这取决于你想要的工具功能有多丰富或简单。就像所有的技术决定一样,没有一个正确的答案,只有在给定的时间和项目中才有正确的选择。 尝试一下其中的一些,看看要在下一个项目中使用哪个。另外,我有没有错过任何有趣的开源 CSS 框架?请在下面的评论中分享你的反馈和想法。 --- via: <https://opensource.com/article/20/4/open-source-css-frameworks> 作者:[Bryant Son](https://opensource.com/users/brson) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
12,228
我的 Linux 故事:从 8 位发烧友到 Unix 系统管理员
https://opensource.com/article/20/4/linux-story
2020-05-18T13:27:44
[ "Linux" ]
https://linux.cn/article-12228-1.html
> > 我是如何从一个电脑爱好者成为职业系统管理员和 Linux 粉丝的。 > > > ![](/data/attachment/album/202005/18/132731pnnzy7t5tz7hvc6z.jpg) 故事得从 1980 年中期我父母给家里购买[苹果 ][c](https://en.wikipedia.org/wiki/Apple_IIc) 开始。尽管很喜欢打游戏,但我还是很快被实用又好玩的 BASIC 编程迷住了。那个年代的人们还是把电脑当作小一点的打字机对待,所以拥有“高级电脑技能”的人可以轻松使用他们的魔法。 以用 BASIC 和点阵打印机自动生成惩罚作业来举个例子。被罚写两百遍道歉时,我问老师我可不可以用打字代替手写。经过同意后,我写了 5 行 BASIC 语句来自动生成作业。另外一个小技巧是用非可视化文本编辑器,比如用 AppleWorks 微调字体、行距和边距,把学期论文“拉长”到要求的篇幅。 对电脑的痴迷很快让我得到了带有内存驱动卡和 x86 协处理器的苹果 ][gs。那时候,调制解调器和 BBS 刚开始火起来,有了这样的双处理器系统后,我就可以安装各种琳琅满目的软件。但是由于调制解调器 2400bps 的速度限制,对我每天都要下载几 KB 的有趣东西形成了阻碍。我对苹果痴迷一段时间,不久之后就换了。 ### 探索 Unix 我的本科专业是计算机信息系统,研究生专业是计算机科学。本科教育主要使用个人电脑,很少涉及大型分时系统。研究生的时候才开始真正有意思起来,拨号进入带有互联网连接的 Unix 简直打开了新世界的大门。尽管我依然用着我的双处理器 ][gs 来使用调制解调器还有写写论文,不过 Unix 系统真正吸引了我的注意力,因为它可以访问通用的 Telnet 游戏、文件传输协议(FTP)、在线邮箱和进行 C 语言编程。当时 Gopher 非常受欢迎,特别是在我们这群终端用户当中。 被分到学院计算机部门是我研究生命运的转折点,这个部门主管学校的计算机服务。学生们可以使用 X Window 终端来登录基于 [Ultrix](https://en.wikipedia.org/wiki/Ultrix) 的系统。大部分都是灰度的黑白界面,彩色处理在当时非常占用 CPU,也很影响系统性能。也有一些彩色系统还不错,但是这些机器都很慢。 我很喜欢那个时候,我有系统管理员权限而且工作是维护系统和网络。我有一些很好的导师,他们对我选择从事系统管理员而不是程序员起了关键作用(尽管我至今仍然热爱编程)。 ### 从 Unix 到 Linux 稀缺是创造之母,当需要分享匮乏的学校电脑系统资源的时候,我们学生们变得富有创造力。需要用电脑的学生是 Ultrix 工作站承受量的三到五倍,所以寻找资源往往是个难题(特别是要交付项目的时候)。在不需要图形化显示的时候,我们有一个 56k 的点对点协议的调制解调器池可供远程系统访问接入。但是找到一个有空余资源的机器并共享系统进行源码编译通常会导致进度缓慢。和大部分人一样,我发现晚上工作通常会有所帮助,但我还需要其它一些东西让我的项目迭代快一点。 后来学校的一个系统管理员建议我去看一个免费提供的 Unix 系统。那就是 Linux,它被装在 3.5 英寸的软盘里。多亏我们学校超快的 T1 线路,我很容易就搜索到新闻组和其他资源来学习如何下载它。它全是基于 32 位的英特尔 PC 机的,而我并没有这一类的设备。 幸运的是,我在学校的工作让我有机会接触到堆积如山的废旧电脑,所以命运的齿轮又开始旋转起来。 我找到了足够多的废旧 PC 组装了一个可靠的 80386 PC,带有足够内存(我确定不到 1GB),它有一个能用的显卡、一个细缆(同轴)以太网卡和一个硬盘。我所用的镜像是 Linux 内核 0.98,我不记得它是不是正式发行版的一部分了(可能是 SLS)。我所记得的是,它有一系列的软盘镜像,第一张软盘启动内核和一个最小安装程序,然后格式化硬盘,接着要求插入每个后续的软盘来安装 GNU 核心实用程序。在核心实用程序装好并引导系统之后,你可以下载和安装其他的软件包镜像,比如编译器之类的。 这是我学术道路上巨大的福音。在没有运行 X Window 显示服务器的情况下,这台电脑性能比学校的 Ultrix 工作站强很多。学校允许我把这台机器连到校园网络,挂载学校的学生网络文件系统(NFS)共享,并且能直接访问互联网。因为我的研究生课程用 [GCC](https://en.wikipedia.org/wiki/GNU_Compiler_Collection)(还有 Perl 4)来完成大部分学生作业,所以我可以在本地进行开发工作。这使得我可以独享关键资源,从而使我能够更快速地迭代我的项目。 
但是,这个方案不是完美的。硬件有时会有点不稳定(这可能就是它们被丢弃的原因),但我都能搞定。真正让我感受到的是 Linux 和 Ultrix 在操作系统和系统库层面的差异。我开始理解移植软件到其他操作系统的意义,我可以自由地在任何地方开发,但是我必须以 Ultrix 编译的二进制文件交付项目。在一个平台上完美运行的 C 语言代码可能在另一个平台出错。这非常令人沮丧,但是我可能本能的察觉到了早期 Linux 解引用空指针的方法。Linux 倾向于把它作为空操作处理,但是 Ultrix 会立即触发核心转储和段错误 [SIGSEGV](https://en.wikipedia.org/wiki/Segmentation_fault)。这是我第一次程序移植时的重大发现,正好在要交作业的几天之前。这同时对我研究 C++ 造成了一些麻烦,因为我粗心地同时使用了 `malloc()`/`free()` 和自动[构造函数和析构函数](https://www.tutorialspoint.com/cplusplus/cpp_constructor_destructor.htm)处理,让我的项目到处都是空指针炸弹。 研究生课程快结束的时候,我升级到了一台性能野兽工作站:一颗英特尔 486DX2 66MHz 芯片、一块 SCSI 硬盘、一块光驱和一个 1024x768 RGB 显示器,并且还用一个 16550 UART 串口卡完美地匹配了我的新 US Robotics V.Everything 牌调制解调器。它可以双启动 Windows 和 Linux 系统,但更重要的是显卡和 CPU 的速度让我的开发环境幸福感倍增。那台旧的 386 依然在学校服役,不过我我现在大部分繁重的功课和钻研都转移到了家里。 和 [Mike Harris](/article-11831-1.html) 关于 90 年代的 Linux 故事类似,我真的对当时流行的 CD 集合很着迷。我住的附近有家新开的 Micro Center 计算机商店,这个宝库充满了电脑配件、高级专业书籍和你能想到的各种 Linux(以及免费的 Unix)CD。我还记得 [Yggdrasil](https://en.wikipedia.org/wiki/Yggdrasil_Linux/GNU/X) 和 [Slackware](http://slackware.com) 是我最喜欢的发行版。真正让人难以置信的是 CD 存储空间的巨大容量 —— 650MB!使它成为获得软件的必不可少的载体。是的,你可以用 56k 的速度下载,但是真的很慢。更别提大部分人负担不起存档这么多供以后使用的闲置数据。 ### 而到了今天 就是这些开启了我长达 25 年的系统管理员的职业生涯和开源软件的乐趣。Linux 一直是我事业和个人开发中的重要组成部分。最近我依旧醉心于 Linux(主要是 CentOS、RedHat 和 Ubuntu),但也经常从 [FreeBSD](https://www.freebsd.org/) 和其他炫酷开源软件中得到乐趣。 Linux 让我来到了 Opensource.com,我希望在这里能回馈社区,为新一代电脑爱好者出一份力。 --- via: <https://opensource.com/article/20/4/linux-story> 作者:[James Farrell](https://opensource.com/users/jamesf) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Accelerator](https://github.com/Acceleratorrrr) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
It all started in the mid-1980s with an [Apple ][c](https://en.wikipedia.org/wiki/Apple_IIc) that my parents purchased for our family. Although I enjoyed playing games, I quickly became fascinated with BASIC programming and how useful it could be for work and fun. This was an era when computers were viewed as little more than typewriters, so people with "advanced computer skills" could easily use them to their advantage. One example was using BASIC and a dot matrix printer to auto-generate punishment assignments. When I was assigned to write out 200 times some apologetic statements, I asked my teacher if it could be typed out. On confirmation, I wrote a 5 line BASIC program to generate it for me. Another example of subtle trickery was using non-WYSIWYG word processors, such as AppleWorks for micro-manipulation of fonts, line spacing, and margins to "stretch" term papers out to the required length. My obsession with computers quickly lead to an Apple ][gs with a RAM drive card and an x86 PC co-processor card. Modems and BBSs were getting hot, and having a dual-hardware system like this gave me all sorts of options for software. However, modem speeds of 2400bps put a real damper on getting anything more than a few KBs of fun downloads per day. I stuck with Apple as a hobby for some time, but that was soon to change. ## Venturing into Unix My undergraduate program was BS in Computer Information Systems (CIS) and my graduate degree was MS in Computer Science. My undergraduate education program put me mostly into PCs and a little into timeshare mainframes. The real fun began in my graduate programs, where dial-in access to Unix machines with internet connections opened a whole new world of exploration. Although I still used my dual-processor ][gs for modem work and writing and printing papers, Unix systems really grabbed my attention with their general-access Telnet-based games, FTP archives, online email, and C programming. 
Gopher was popular and growing with people like me who were bound to plain terminal interfaces. My graduate program took a fateful turn for the better when I was assigned to the academic computing department, which was charged with running computer services for the school. The students had access to [Ultrix](https://en.wikipedia.org/wiki/Ultrix)-based systems with X Window terminals. Most were grayscale, as color processing was then a CPU intensive task and really affected system performance. The few color systems were nice, but those machines just dragged. This was a really fun time for me, as I was given root access to systems and assigned to system and network maintenance. I had some excellent mentors, and this strongly influenced my decision to get into system administration rather than programming (although I still really love programming to this day). ## From Unix to Linux Scarcity is the mother of invention, and we students often got creative when we had to share the scant resources of the school's computer systems. We had three to five times more students than we had Ultrix workstations, so finding resources (especially at project delivery time) was often a challenge. There was a bank of 56k [PPP](https://en.wikipedia.org/wiki/Point-to-Point_Protocol) modems available for remote system access when graphical displays were not needed. However, finding a machine with spare resources and sharing the system for source compilation often resulted in slow progress. Like most, I found working at night often helped, but I needed something else to let me iterate more quickly. Then one of the school's sysadmins suggested I check out a Unix system that was freely available. This was Linux, made available as 3.5" floppy images. Given our school's blazing fast T1 line, it was easy for me to search newsgroups and other sources to learn how to download it. It was all 32-bit Intel PC-based, a class of equipment that I did not own. 
Luckily, my work at the school gave me access to junk piles of old computers, so the wheels started turning. I found enough discarded PCs to build a solid 80386 PC with some decent RAM (I am sure well under 1GB), a workable graphic display, a thin-net (coax) Ethernet card, and a hard disk. The images I had were Linux kernel 0.98, and I don't recall it being part of an official distribution (it might have been SLS). What I do remember is that it came on a series of floppy images—the first booted the kernel and a minimal installer, next it formatted the drive, and then it asked for each successive floppy image to install the core GNU utilities. After the core was installed and the system bootable, you would download and install other package images, like compilers and such. This was a serious boon to me in my academic career. With no X Window server display running, this PC seriously outperformed the Ultrix workstations I had access to at school. I was allowed to connect this machine to the academic network, mount the school's student Network File System (NFS) shares, and access the internet directly. Since my graduate program used [GCC](https://en.wikipedia.org/wiki/GNU_Compiler_Collection) (and sometimes Perl 4) for most student work, I could do my development work locally. This gave me exclusive access to a key resource that enabled me to iterate more quickly on my projects. All was not perfect, however. The hardware was a tiny bit unstable (likely why it was discarded), but I could deal with that. What really got me was how much Linux and Ultrix differed at the OS and system library level. I began to appreciate what it meant to port software to other platforms; I was free to develop wherever I wanted, but I had to deliver my projects as Ultrix compiled binaries. The C code that ran perfectly on one platform would crash on the other. This was very frustrating, but probably my rudest awakening was early Linux's handling of null-pointer dereferencing. 
Linux seemed happy to pass over these as a virtual no-op, but Ultrix promptly dumped core on [SIGSEGV](https://en.wikipedia.org/wiki/Segmentation_fault). This was quite a thing to find out when my first port to the target platform happened days before my project was due! This also made my exploration of C++ quite challenging, as my careless use of malloc()/free() along with automatic [constructor and destructor](https://www.tutorialspoint.com/cplusplus/cpp_constructor_destructor.htm) processing peppered my projects with null pointer bombs all over the place. Toward the end of my graduate program, I upgraded to a complete beast of a workstation—an Intel 486DX2 66MHz with SCSI hard drives, a CD-ROM drive, a 1024x768 RGB monitor, and a 16550 UART serial card perfectly matched to my new US Robotics V.Everything modem. It could dual-boot Windows and Linux, but more importantly, the graphics card and processor allowed a much more pleasant (and faster) development environment. The old 386 was still in service back at the school, but most of my heavy work and hacking now happened at home. Similar to [Mike Harris' story](https://opensource.com/article/19/11/learning-linux-90s) about Linux in the '90s, I really got into those CD bundles that were popular at the time. There was a new Micro Center computer store close to where I lived, and it was a goldmine of hobby PC parts, phenomenal technical books, and every conceivable Linux (and free Unix) CD archive. I remember [Yggdrasil](https://en.wikipedia.org/wiki/Yggdrasil_Linux/GNU/X) and [Slackware](http://slackware.com) being some of my favorite distributions. What was really incredible was the enormous size of CD storage—650MB! This was an essential resource for getting access to software. Yes, you could download the bits at 56k, but that was quite limiting. Not to mention the fact that most people could not afford to archive that much idle data for later perusal. 
## And on to today This is what kicked off my more than 25 years of system administration and open source software fun. Linux has been an important part of both my career and personal development. Nowadays, I am still heavily into Linux (mostly CentOS, RedHat, and Ubuntu), but often have fun with the likes of [FreeBSD](https://www.freebsd.org/) and other cool open source offerings. My forays into Linux led me to Opensource.com, where I hope to give back a little and help bootstrap new generations of hands-on computer fun. ## 5 Comments
12,230
COPR 仓库中 4 个很酷的新项目(2020.05)
https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-april-2020/
2020-05-18T17:49:26
[ "COPR" ]
https://linux.cn/article-12230-1.html
![](/data/attachment/album/202005/18/174929vv14oo60jtqjh93n.jpg) COPR 是软件的个人仓库[集合](https://copr.fedorainfracloud.org/),这些软件并不包含在 Fedora 中。这是因为某些软件不符合轻松打包的标准;或者它可能不符合其他 Fedora 标准,尽管它是自由而开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也没有由该项目签名。但是,这是一种尝试新软件或实验性软件的巧妙方式。 本文介绍了 COPR 中一些有趣的新项目。如果你第一次使用 COPR,请参阅 [COPR 用户文档](https://docs.pagure.org/copr.copr/user_documentation.html#)。 ### Ytop [ytop](https://github.com/cjbassi/ytop) 是类似于 `htop` 的命令行系统监视器。它们之间的主要区别是,`ytop` 除了显示进程及其 CPU 和内存使用率之外,还会显示系统的 CPU、内存和网络使用率随时间变化的图表。此外,`ytop` 还显示磁盘使用情况和计算机温度。最后,`ytop` 支持多种配色方案以及创建新配色的选项。 ![](/data/attachment/album/202005/18/174930v2d72oszdvs8kt7w.png) #### 安装说明 [该仓库](https://copr.fedorainfracloud.org/coprs/atim/ytop/)当前为 Fedora 30、31、32 和 Rawhide 以及 EPEL 7 提供了 `ytop`。要安装 `ytop`,请[带上 sudo](https://fedoramagazine.org/howto-use-sudo/) 使用以下命令: ``` sudo dnf copr enable atim/ytop sudo dnf install ytop ``` ### Ctop [ctop](https://github.com/bcicen/ctop) 是另一个命令行系统监视器。但是,与 `htop` 和 `ytop` 不同,`ctop` 专注于显示容器的资源使用情况。`ctop` 同时显示计算机上运行的所有容器的 CPU、内存、网络和磁盘使用情况的概要,以及单个容器的更全面的信息,包括一段时间内资源使用情况的图表。当前,`ctop` 支持 Docker 和 runc 容器。 ![](/data/attachment/album/202005/18/174931pnbiet5f50b5met1.png) #### 安装说明 [该仓库](https://copr.fedorainfracloud.org/coprs/fuhrmann/ctop/)当前为 Fedora 31、32 和 Rawhide 以及 EPEL 7 还有其他发行版提供了安装包。要安装 `ctop`,请使用以下命令: ``` sudo dnf copr enable fuhrmann/ctop sudo dnf install ctop ``` ### Shortwave [shortwave](https://github.com/ranfdev/shortwave) 是用于收听广播电台的程序。`shortwave` 使用广播电台的社区数据库 [www.radio-browser.info](http://www.radio-browser.info/gui/#!/)。在此数据库中,你可以发现或搜索广播电台,将它们添加到库中,然后收听。此外,`shortwave` 还提供有关当前播放歌曲的信息,并且还可以记录这些歌曲。 ![](/data/attachment/album/202005/18/174931gqk8ezk9dki8f1yi.png) #### 安装说明 [该仓库](https://copr.fedorainfracloud.org/coprs/atim/shortwave/) 当前为 Fedora 31、32 和 Rawhide 提供了 shortwave。要安装 `shortwave`,请使用以下命令: ``` sudo dnf copr enable atim/shortwave sudo dnf install shortwave ``` ### Setzer [setzer](https://www.cvfosammmm.org/setzer/) 是 LaTeX 编辑器,它可以构建 pdf
文档并查看它们。它提供了各种类型文档(例如文章或幻灯片)的模板。此外,`setzer` 还有许多特殊符号、数学符号和希腊字母的按钮。 ![](/data/attachment/album/202005/18/174932x5aekfuicaqphn5z.png) #### 安装说明 [该仓库](https://copr.fedorainfracloud.org/coprs/lyessaadi/setzer/) 当前为 Fedora 30、31、32 和 Rawhide 提供了 `setzer`。要安装 `setzer`,请使用以下命令: ``` sudo dnf copr enable lyessaadi/setzer sudo dnf install setzer ``` --- via: <https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-april-2020/> 作者:[Dominik Turecek](https://fedoramagazine.org/author/dturecek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
COPR is a [collection](https://copr.fedorainfracloud.org/) of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software. This article presents a few new and interesting projects in COPR. If you’re new to using COPR, see the [COPR User Documentation](https://docs.pagure.org/copr.copr/user_documentation.html#) for how to get started. ### Ytop [Ytop](https://github.com/cjbassi/ytop) is a command-line system monitor similar to *htop*. The main difference between them is that *ytop*, on top of showing processes and their CPU and memory usage, shows graphs of system CPU, memory, and network usage over time. Additionally, *ytop* shows disk usage and temperatures of the machine. Finally, *ytop* supports multiple color schemes as well as an option to create new ones. ![](https://fedoramagazine.org/wp-content/uploads/2020/04/ytop.png) #### Installation instructions The [repo](https://copr.fedorainfracloud.org/coprs/atim/ytop/) currently provides *ytop* for Fedora 30, 31, 32, and Rawhide, as well as EPEL 7. To install *ytop*, use these commands [with sudo](https://fedoramagazine.org/howto-use-sudo/): sudo dnf copr enable atim/ytop sudo dnf install ytop ### Ctop [Ctop](https://github.com/bcicen/ctop) is yet another command-line system monitor. However, unlike *htop* and *ytop*, *ctop* focuses on showing resource usage of containers. *Ctop* shows both an overview of CPU, memory, network and disk usage of all containers running on your machine, and more comprehensive information about a single container, including graphs of resource usage over time. Currently, *ctop* has support for Docker and runc containers. 
![](https://fedoramagazine.org/wp-content/uploads/2020/04/ctop.png) #### Installation instructions The [repo](https://copr.fedorainfracloud.org/coprs/fuhrmann/ctop/) currently provides *ctop* for Fedora 31, 32 and Rawhide, EPEL 7, as well as for other distributions. To install *ctop*, use these commands: sudo dnf copr enable fuhrmann/ctop sudo dnf install ctop ### Shortwave [Shortwave](https://github.com/ranfdev/shortwave) is a program for listening to radio stations. Shortwave uses a community database of radio stations [www.radio-browser.info](http://www.radio-browser.info/gui/#!/). In this database, you can discover or search for radio stations, add them to your library, and listen to them. Additionally, Shortwave provides information about currently playing song and can record the songs as well. ![](https://fedoramagazine.org/wp-content/uploads/2020/04/shortwave.png) #### Installation instructions The [repo](https://copr.fedorainfracloud.org/coprs/atim/shortwave/) currently provides Shortwave for Fedora 31, 32, and Rawhide. To install Shortwave, use these commands: sudo dnf copr enable atim/shortwave sudo dnf install shortwave ### Setzer [Setzer](https://www.cvfosammmm.org/setzer/) is a LaTeX editor that can build pdf documents and view them as well. It provides templates for various types of documents, such as articles or presentation slides. Additionally, Setzer has buttons for a lot of special symbols, math symbols and greek letters. ![](https://fedoramagazine.org/wp-content/uploads/2020/04/setzer.png) #### Installation instructions The [repo](https://copr.fedorainfracloud.org/coprs/lyessaadi/setzer/) currently provides Setzer for Fedora 30, 31, 32, and Rawhide. To install Setzer, use these commands: sudo dnf copr enable lyessaadi/setzer sudo dnf install setzer ## red what’s the difference of ytop and htop? 
## shy ytop is written in Rust which is a more secure language than C ## jfnklstrm The main difference between them is that ytop, on top of showing processes and their CPU and memory usage, shows graphs of system CPU, memory, and network usage over time. Additionally, ytop shows disk usage and temperatures of the machine. Finally, ytop supports multiple color schemes as well as an option to create new ones. ## Splebd Great Job!!!!!… Thank you ## Hylke if you like ytop you’re gonna love bashtop. ## juxtapoz bashtop is really slow compared to ytop ## Jasper Vinkenvleugel I’m not entirely sure if the repos for Shortwave and Setzer are official ones, but I would guess not. I would advise to install the Flatpak-versions for either, they are on Flathub and officially supported so there’s no risk of a maintainer that neglects the Copr. ## Lyes Saadi Hi! I’m the maintainer of the Setzer COPR. I can confirm it is not official! But the spec file is available here: https://gitlab.com/LyesSaadi/spec/-/blob/master/setzer/setzer.spec ! If you find an issue or that I forgot to update the repo, don’t hesitate to remind me by opening an issue or mailing me directly: fedora [at] lyes [dot] eu ! I tend to watch the repositories of the packages I maintain, but it might happen that some updates’ notification is lost, so I’ll be grateful for any reminder :D! I do also trust Atim for any package he maintains! I cannot speak for him, but I’m 100% confident that Shortwave’s COPR repository is in good hands :)! ## Mark Liked the look of ytop for the temperatures, so tried it. 
But in the temperatures window I only get acpitiz 28C acpitiz 30C On the same machine “sensors” (from lmsensors package) gives me acpitz-acpi-0 Adapter: ACPI interface temp1: +27.8C (crit = +105.0C) temp2: +29.8C (crit = +105.0C) coretemp-isa-0000 Adapter: ISA adapter Package id 0: +35.0C (high = +80.0C, crit = +100.0C) Core 0: +33.0C (high = +80.0C, crit = +100.0C) Core 1: +35.0C (high = +80.0C, crit = +100.0C) Core 2: +33.0C (high = +80.0C, crit = +100.0C) Core 3: +30.0C (high = +80.0C, crit = +100.0C) So ytop does not monitor the temps I am interested in, as it is the core temps that go over 80C when ffmpeg is running. Interestingly typing ? (question mark) in ytop to see if there is any help results in the below, so I’m not sure about the earlier comment that rust is more secure than C, Backtrace omitted. Run with RUST_BACKTRACE=1 to display it. Run with RUST_BACKTRACE=full to include source snippets. The application panicked (crashed). index out of bounds: the len is 3657 but the index is 32346 in /builddir/build/BUILD/rustc-1.42.0-src/src/libcore/slice/mod.rs, line 2797 thread: main But certainly an interesting project to keep an eye on as it evolves. ## Constantine No offense, but Ytop looks like nmon ## Marco Gómez I tried Setzer to compile some of my work (which is separated in various .tex files and including in a master one) and it didn’t work. There is no error log or messages related to this problem. However, for single .tex file and for beamer it works pretty well. Great job! and Thank you ! ## Yazan Al Monshed Reboot your OS and it’s will work Good. ## Alberto Patino Thanks for article!!! I’ve tried ytop, I even downloaded source code and a little Rust learnt. I also tried shirtwave and radio listening right now. ## Mattia I don’t think COPR is necessary for ‘ytop’. I could install it with just . ## SZ Quadri COPR is NOT necessary for ‘ytop’ as it’s now part of “updates” repo. Simply install using dnf
12,231
完美生活:git rebase -i
https://opensource.com/article/20/4/git-rebase-i
2020-05-18T18:59:46
[ "git", "变基" ]
https://linux.cn/article-12231-1.html
> > 让大家觉得你一次就能写出完美的代码,并让你的补丁更容易审核和合并。 > > > ![](/data/attachment/album/202005/18/185911fvwztwyp4lvbzkw4.jpg) 软件开发是混乱的。有很多错误的转折、有需要修复的错别字、有需要修正的错误、有需要稍后纠正的临时和粗陋的代码,还有在以后的开发过程中发现一次又一次的问题。有了版本控制,在创建“完美”的最终产品(即准备提交给上游的补丁)的过程中,你会有一个记录着每一个错误转折和修正的原始记录。就像电影中的花絮一样,它们会让人有点尴尬,有时也会让人觉得好笑。 如果你使用版本控制来定期保存你的工作线索,然后当你准备提交审核的东西时,又可以隐藏所有这些私人草稿工作,并只提交一份单一的、完美的补丁,那不是很好吗?`git rebase -i`,是重写历史记录的完美方法,可以让大家觉得你一次就写出了完美的代码! ### git rebase 的作用是什么? 如果你不熟悉 Git 的复杂性,这里简单介绍一下。在幕后,Git 将项目的不同版本与唯一标识符关联起来,这个标识符由父节点的唯一标识符的哈希以及新版本与其父节点的差异组成。这样就形成了一棵修订树,每个签出项目的人都会得到自己的副本。不同的人可以把项目往不同的方向发展,每个方向都可能从不同的分支点开始。 ![Master branch vs. private branch](/data/attachment/album/202005/18/185954e6u9qgo89fm1iqus.png "Master branch vs. private branch") *左边是 origin 版本库中的主分支,右边是你个人副本中的私有分支。* 有两种方法可以将你的工作与原始版本库中的主分支整合起来:一种是使用合并:`git merge`,另一种是使用变基:`git rebase`。它们的工作方式非常不同。 当你使用 `git merge` 时,会在主分支(`master`)上创建一个新的提交,其中包括所有来自原始位置(`origin`)的修改和所有本地的修改。如果有任何冲突(例如,如果别人修改了你也在修改的文件),则将这些冲突标记出来,并且你有机会在将这个“合并提交”提交到本地版本库之前解决这些冲突。当你将更改推送回父版本库时,所有的本地工作都会以分支的形式出现在 Git 版本库的其他用户面前。 但是 `git rebase` 的工作方式不同。它会回滚你的提交,并从主分支(`master`)的顶端再次重放这些提交。这导致了两个主要的变化。首先,由于你的提交现在从一个不同的父节点分支出来,它们的哈希值会被重新计算,并且任何克隆了你的版本库的人都可能得到该版本库的一个残破副本。第二,你没有“合并提交”,所以在将更改重放到主分支上时会识别出任何合并冲突,因此,你需要在进行<ruby> 变基 <rt> rebase </rt></ruby>之前先修复它们。现在,当你现在推送你的修改时,你的工作不会出现在分支上,并且看起来像是你是在主分支的最新的提交上写入了所有的修改。 ![Merge commits preserve history, and rebase rewrites history.](/data/attachment/album/202005/18/190001rh770g6a6r7hra0z.png "Merge commits preserve history, and rebase rewrites history.") *合并提交(左)保留了历史,而变基(右)重写历史。* 然而,这两种方式都有一个缺点:在你准备好分享代码之前,每个人都可以看到你在本地处理问题时的所有涂鸦和编辑。这就是 `git rebase` 的 `--interactive`(或简写 `-i`)标志发挥作用的地方。 ### git rebase -i 登场 `git rebase` 的最大优点是它可以重写历史。但是,为什么仅止于假装你从后面的点分支出来呢?有一种更进一步方法可以重写你是如何准备就绪这些代码的:`git rebase -i`,即交互式的 `git rebase`。 这个功能就是 Git 中的 “魔术时光机” 功能。这个标志允许你在做变基时对修订历史记录进行复杂的修改。你可以隐藏你的错误! 将许多小的修改合并到一个崭新的功能补丁中! 重新排列修改历史记录中的显示顺序! 
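在逐项介绍这些操作之前,先看一个可以完整照跑的小演示(其中 `/tmp/rebase-demo` 路径和提交信息都是为演示而虚构的):它先制造四个零散的“草稿”提交,再用 `GIT_SEQUENCE_EDITOR` 代替人工编辑 todo 列表,以非交互方式完成一次 `git rebase -i`,把后面三个提交全部 `squash` 进第一个:

```shell
# 新建一个演示仓库,制造四个零散的“草稿”提交
rm -rf /tmp/rebase-demo && mkdir /tmp/rebase-demo && cd /tmp/rebase-demo
git init -q
git config user.email you@example.com && git config user.name You
for msg in "New header for docs website" "D'oh - typo. Fixed" \
           "One-liner bug fix" "Integrate new header everywhere"; do
    echo "$msg" >> notes.txt
    git add notes.txt && git commit -qm "$msg"
done
git log --oneline    # 此时是四个独立的提交

# 用 sed 脚本代替人工编辑:把 todo 列表第 2 行起的 pick 改成 squash;
# GIT_EDITOR=true 表示直接接受合并后的默认提交信息
GIT_SEQUENCE_EDITOR='sed -i "1!s/^pick/squash/"' GIT_EDITOR=true \
    git rebase -i --root
git log --oneline    # 现在只剩一个合并后的提交
```

实际使用时,Git 会打开编辑器让你手工把 `pick` 改成 `squash`、`reword` 或 `edit`;这里用脚本只是为了让演示可以自动复现。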
![output of git rebase -i](/data/attachment/album/202005/18/190007un3ink2h5nyciz3p.png "output of git rebase -i") 当你运行 `git rebase -i` 时,你会进入一个编辑器会话,其中列出了所有正在被变基的提交,以及可以对其执行的操作的多个选项。默认的选项是 `Pick`。 * `Pick`:会在你的历史记录中保留该提交。 * `Reword`:允许你修改提交信息,可能是修复一个错别字或添加其它注释。 * `Edit`:允许你在重放分支的过程中对提交进行修改。 * `Squash`:可以将多个提交合并为一个。 * 你可以通过在文件中移动来重新排序提交。 当你完成后,只需保存最终结果,变基操作就会执行。在你选择修改提交的每个阶段(无论是用 `reword`、`edit`、`squash` 还是发生冲突时),变基都会停止,并允许你在继续提交之前进行适当的修改。 上面这个例子的结果是 “One-liner bug fix” 和 “Integrate new header everywhere” 被合并到一个提交中,而 “New header for docs website” 和 “D'oh - typo. Fixed” 合并到另一个提交中。就像变魔术一样,其他提交的工作还在你的分支中,但相关的提交已经从你的历史记录中消失了! 这使得通过 `git send-email` 向上游项目提交一个干净的补丁,或者用你新整理好的补丁集在父版本库中创建一个拉取请求,都变得很容易。这有很多好处,包括让你的代码更容易审核,更容易接受,也更容易合并。 --- via: <https://opensource.com/article/20/4/git-rebase-i> 作者:[Dave Neary](https://opensource.com/users/dneary) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Software development is messy. So many wrong turns, typos to fix, quick hacks and kludges to correct later, off-by-one errors you find late in the process. With version control, you have a pristine record of every wrong turn and correction made during the process of creating the "perfect" final product—a patch ready to submit upstream. Like the outtakes from movies, they are a little embarrassing and sometimes amusing. Wouldn't it be great if you could use version control to save your work regularly at waypoints, and then when you have something you are ready to submit for review, you could hide all of that private drafting work and just submit a single, perfect patch? Meet **git rebase -i**, the perfect way to rewrite history and make everyone think that you produce perfect code the first time! ## What does git rebase do? In case you're not familiar with the intricacies of Git, here is a brief overview. Under the covers, Git associates different versions of your project with a unique identifier, which is made up of a hash of the parent node's unique identifier, and the difference between the new version and its parent node. This creates a tree of revisions, and each person who checks out the project gets their own copy. Different people can take the project in different directions, each starting from potentially different branch points. ![Master branch vs. private branch Master branch vs. private branch](https://opensource.com/sites/default/files/uploads/master-private-branches.png) The master branch in the "origin" repo on the left and the private branch on your personal copy on the right. There are two ways to integrate your work back with the master branch in the original repository: one is to use **git merge**, and the other is to use **git rebase**. They work in very different ways. When you use **git merge**, a new commit is created on the master branch that includes all of the changes from origin plus all of your local changes. 
If there are any conflicts (for example, if someone else has changed a file you are also working with), these will be marked, and you have an opportunity to resolve the conflicts before committing this merge commit to your local repository. When you push your changes back to the parent repository, all of your local work will appear as a branch for other users of the Git repository. But **git rebase** works differently. It rewinds your commits and replays those commits again from the tip of the master branch. This results in two main changes. First, since your commits are now branching off a different parent node, their hashes will be recalculated, and anyone who has cloned your repository may now have a broken copy of the repository. Second, you do not have a merge commit, so any merge conflicts are identified as your changes are being replayed onto the master branch, and you need to fix them before proceeding with the rebase. When you push your changes now, your work does not appear on a branch, and it looks as though you wrote all of your changes off the very latest commit to the master branch. ![Merge commits preserve history, and rebase rewrites history. Merge commits preserve history, and rebase rewrites history.](https://opensource.com/sites/default/files/uploads/merge-commit-vs-rebase.png) Merge commits (left) preserve history, while rebase (right) rewrites history. However, both of these options come with a downside: everyone can see all your scribbles and edits as you worked through problems locally before you were ready to share your code. This is where the **--interactive** (or **-i** for short) flag to **git rebase** comes into the picture. ## Introducing git rebase -i The big advantage of **git rebase** is that it rewrites history. But why stop at just pretending you branched off a later point? There is a way to go even further and rewrite how you arrived at your ready-to-propose code: **git rebase -i**, an interactive **git rebase**. 
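Before turning to the interactive variant, the difference between these two integration strategies is easy to see in a throwaway repository (the /tmp path, branch names, and commit messages below are invented for illustration):

```shell
# A tiny repo: one commit on master, one on a feature branch,
# then master moves ahead so the two branches diverge
rm -rf /tmp/merge-vs-rebase && mkdir /tmp/merge-vs-rebase && cd /tmp/merge-vs-rebase
git init -q
git config user.email you@example.com && git config user.name You
echo base > file.txt && git add file.txt && git commit -qm "base"
git branch -M master
git checkout -qb feature
echo feature >> file.txt && git commit -qam "feature work"
git checkout -q master
echo more > other.txt && git add other.txt && git commit -qm "master moves on"

# Strategy 1: merge. History keeps the fork and adds a merge commit.
git checkout -qb merged master
git merge -q --no-edit feature
git log --oneline --graph

# Strategy 2: rebase. "feature work" is replayed on the tip of master,
# so it gets a new hash and the history ends up linear.
git checkout -q feature
git rebase -q master
git log --oneline --graph
```

The first log shows a forked graph joined by a merge commit; the second is a straight line, with a recomputed hash for "feature work", as if it had been written on top of the latest master commit all along. The interactive form of git rebase builds on this same replay mechanism.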
This feature is the "magic time machine" function in Git. The flag allows you to make sophisticated changes to revision history while doing a rebase. You can hide your mistakes! Merge many small changes into one pristine feature patch! Reorder how things appear in revision history! ![output of git rebase -i output of git rebase -i](https://opensource.com/sites/default/files/uploads/git-rebase-i.png) When you run **git rebase -i**, you get an editor session listing all of the commits that are being rebased and a number of options for what you can do to them. The default choice is **pick**. **Pick**maintains the commit in your history.**Reword**allows you to change a commit message, perhaps to fix a typo or add additional commentary.**Edit**allows you to make changes to the commit while in the process of replaying the branch.**Squash**merges multiple commits into one.- You can reorder commits by moving them around in the file. When you are finished, simply save the final result, and the rebase will execute. At each stage where you have chosen to modify a commit (either with **reword**, **edit**, **squash**, or when there is a conflict), the rebase stops and allows you to make the appropriate changes before continuing. The example above results in "One-liner bug fix" and "Integrate new header everywhere" being merged into one commit, and "New header for docs website" and "D'oh - typo. Fixed" into another. Like magic, the work that went into the other commits is still there on your branch, but the associated commits have disappeared from your history! This makes it easy to submit a clean patch to an upstream project using **git send-email** or by creating a pull request against the parent repository with your newly tidied up patchset. This has a number of advantages, including that it makes your code easier to review, easier to accept, and easier to merge. ## 5 Comments
12,232
对构建系统进行容器化的指南
https://opensource.com/article/20/4/how-containerize-build-system
2020-05-20T08:00:00
[ "容器", "构建" ]
https://linux.cn/article-12232-1.html
> > 搭建一个通过容器分发应用的可复用系统可能很复杂,但这儿有个好方法。 > > > ![](/data/attachment/album/202005/19/085248ausakkjfu05akqr2.jpg) 一个用于将源代码转换成可运行的应用的构建系统是由工具和流程共同组成。在转换过程中还涉及到代码的受众从软件开发者转变为最终用户,无论最终用户是运维的同事还是部署的同事。 在使用容器搭建了一些构建系统后,我觉得有一个不错的可复用的方法值得分享。虽然这些构建系统被用于编译机器学习算法和为嵌入式硬件生成可加载的软件镜像,但这个方法足够抽象,可用于任何基于容器的构建系统。 这个方法是以一种易于使用和维护的方式搭建或组织构建系统,但并不涉及处理特定编译器或工具容器化的技巧。它适用于软件开发人员构建软件,并将可维护镜像交给其他技术人员(无论是系统管理员、运维工程师或者其他一些头衔)的常见情况。该构建系统被从终端用户中抽象出来,这样他们就可以专注于软件。 ### 为什么要容器化构建系统? 搭建基于容器的可复用构建系统可以为软件团队带来诸多好处: * **专注**:我希望专注于应用的开发。当我调用一个工具进行“构建”时,我希望这个工具集能生成一个随时可用的二进制文件。我不想浪费时间在构建系统的查错上。实际上,我宁愿不了解,或者说不关心构建系统。 * **一致的构建行为**:无论在哪种使用情况下,我都想确保整个团队使用相同版本的工具集并在构建时得到相同的结果。否则,我就得不断地处理“我这咋就是好的”的麻烦。在团队项目中,使用相同版本的工具集并对给定的输入源文件集产生一致的输出是非常重要。 * **易于部署和升级**:即使向每个人都提供一套详细说明来安装一个项目的工具集,也可能会有人翻车。问题也可能是由于每个人对自己的 Linux 环境的个性化修改导致的。在团队中使用不同的 Linux 发行版(或者其他操作系统),情况可能还会变得更复杂。当需要将工具集升级到下一版本时,问题很快就会变得更糟糕。使用容器和本指南将使得新版本升级非常简单。 对我在项目中使用的构建系统进行容器化的这些经验显然很有价值,因为它可以缓解上述问题。我倾向于使用 Docker 作为容器工具,虽然在相对特殊的环境中安装和网络配置仍可能出现问题,尤其是当你在一个使用复杂代理的企业环境中工作时。但至少现在我需要解决的构建系统问题已经很少了。 ### 漫步容器化的构建系统 我创建了一个[教程存储库](https://github.com/ravi-chandran/dockerize-tutorial),随后你可以克隆并检查它,或者按照本文内容进行操作。我将逐个介绍存储库中的文件。这个构建系统非常简单(它运行 `gcc`),从而可以让你专注于这个构建系统结构上。 ### 构建系统需求 我认为构建系统中有两个关键点: * **标准化构建调用**:我希望能够指定一些形如 `/path/to/workdir` 的工作目录来构建代码。我希望以如下形式调用构建: ``` ./build.sh /path/to/workdir ``` 为了使得示例的结构足够简单(以便说明),我将假定输出也在 `/path/to/workdir` 路径下的某处生成。(否则,将增加容器中显示的卷的数量,虽然这并不困难,但解释起来比较麻烦。) * **通过 shell 自定义构建调用**:有时,工具集会以出乎意料的方式被调用。除了标准的工具集调用 `build.sh` 之外,如果需要还可以为 `build.sh` 添加一些选项。但我一直希望能够有一个可以直接调用工具集命令的 shell。在这个简单的示例中,有时我想尝试不同的 `gcc` 优化选项并查看效果。为此,我希望调用: ``` ./shell.sh /path/to/workdir ``` 这将让我得到一个容器内部的 Bash shell,并且可以调用工具集和访问我的工作目录(`workdir`),从而我可以根据需要尝试使用这个工具集。 ### 构建系统的架构 为了满足上述基本需求,这是我的构架系统架构: ![Container build system architecture](/data/attachment/album/202005/19/085620czamgvs3hpzzyzy3.jpg "Container build system architecture") 在底部的 `workdir` 代表软件开发者用于构建的任意软件源码。通常,这个 `workdir` 是一个源代码的存储库。在构建之前,最终用户可以通过任何方式来操纵这个存储库。例如,如果他们使用 `git` 作为版本控制工具的话,可以使用 `git checkout` 
切换到他们正在工作的功能分支上并添加或修改文件。这样可以使得构建系统独立于 `workdir` 之外。 顶部的三个模块共同代表了容器化的构建系统。最左边的黄色模块代表最终用户与构建系统交互的脚本(`build.sh` 和 `shell.sh`)。 在中间的红色模块是 Dockerfile 和相关的脚本 `build_docker_image.sh`。开发运营者(在这个例子中指我)通常将执行这个脚本并生成容器镜像(事实上我多次执行它直到一切正常为止,但这是另一回事)。然后我将镜像分发给最终用户,例如通过<ruby> 容器信任注册库 <rt> container trusted registry </rt></ruby>进行分发。最终用户将需要这个镜像。另外,他们将克隆构建系统的存储库(即一个与[教程存储库](https://github.com/ravi-chandran/dockerize-tutorial)等效的存储库)。 当最终用户调用 `build.sh` 或者 `shell.sh` 时,容器内将执行右边的 `run_build.sh` 脚本。接下来我将详细解释这些脚本。这里的关键是最终用户不需要为了使用而去了解任何关于红色或者蓝色模块或者容器工作原理的知识。 ### 构建系统细节 把教程存储库的文件结构映射到这个系统结构上。我曾将这个原型结构用于相对复杂构建系统,因此它的简单并不会造成任何限制。下面我列出存储库中相关文件的树结构。文件夹 `dockerize-tutorial` 能用构建系统的其他任何名称代替。在这个文件夹下,我用 `workdir` 的路径作参数调用 `build.sh` 或 `shell.sh`。 ``` dockerize-tutorial/ ├── build.sh ├── shell.sh └── swbuilder ├── build_docker_image.sh ├── install_swbuilder.dockerfile └── scripts └── run_build.sh ``` 请注意,我上面特意没列出 `example_workdir`,但你能在教程存储库中找到它。实际的源码通常存放在单独的存储库中,而不是构建工具库中的一部分;本教程为了不必处理两个存储库,所以我将它包含在这个存储库中。 如果你只对概念感兴趣,本教程并非必须的,因为我将解释所有文件。但是如果你继续本教程(并且已经安装 Docker),首先使用以下命令来构建容器镜像 `swbuilder:v1`: ``` cd dockerize-tutorial/swbuilder/ ./build_docker_image.sh docker image ls # resulting image will be swbuilder:v1 ``` 然后调用 `build.sh`: ``` cd dockerize-tutorial ./build.sh ~/repos/dockerize-tutorial/example_workdir ``` 下面是 [build.sh](https://github.com/ravi-chandran/dockerize-tutorial/blob/master/build.sh) 的代码。这个脚本从容器镜像 `swbuilder:v1` 实例化一个容器。而这个容器实例映射了两个卷:一个将文件夹 `example_workdir` 挂载到容器内部路径 `/workdir` 上,第二个则将容器外的文件夹 `dockerize-tutorial/swbuilder/scripts` 挂载到容器内部路径 `/scripts` 上。 ``` docker container run \ --volume $(pwd)/swbuilder/scripts:/scripts \ --volume $1:/workdir \ --user $(id -u ${USER}):$(id -g ${USER}) \ --rm -it --name build_swbuilder swbuilder:v1 \ build ``` 另外,`build.sh` 还会用你的用户名(以及组,本教程假设两者一致)去运行容器,以便在访问构建输出时不出现文件权限问题。 请注意,[shell.sh](https://github.com/ravi-chandran/dockerize-tutorial/blob/master/shell.sh) 和 `build.sh` 大体上是一致的,除了两点不同:`build.sh` 会创建一个名为 `build_swbuilder` 的容器,而 `shell.sh` 
则会创建一个名为 `shell_swbuilder` 的容器。这样一来,当其中一个脚本运行时另一个脚本被调用也不会产生冲突。 两个脚本之间的另一处关键不同则在于最后一个参数:`build.sh` 传入参数 `build` 而 `shell.sh` 则传入 `shell`。如果你看了用于构建容器镜像的 [Dockerfile](https://github.com/ravi-chandran/dockerize-tutorial/blob/master/swbuilder/install_swbuilder.dockerfile),就会发现最后一行包含了下面的 `ENTRYPOINT` 语句。这意味着上面的 `docker container run` 调用将使用 `build` 或 `shell` 作为唯一的输入参数来执行 `run_build.sh` 脚本。 ``` # run bash script and process the input command ENTRYPOINT [ "/bin/bash", "/scripts/run_build.sh"] ``` [run\_build.sh](https://github.com/ravi-chandran/dockerize-tutorial/blob/master/swbuilder/scripts/run_build.sh) 使用这个输入参数来选择启动 Bash shell 还是调用 `gcc` 来构建 `helloworld.c` 项目。一个真正的构建系统通常会使用 Makefile 而非直接运行 `gcc`。 ``` cd /workdir if [ $1 = "shell" ]; then echo "Starting Bash Shell" /bin/bash elif [ $1 = "build" ]; then echo "Performing SW Build" gcc helloworld.c -o helloworld -Wall fi ``` 在使用时,如果你需要传入多个参数,当然也是可以的。我处理过的构建系统,构建通常是对给定的项目调用 `make`。如果一个构建系统有非常复杂的构建调用,则你可以让 `run_build.sh` 调用 `workdir` 下最终用户编写的特定脚本。 ### 关于 scripts 文件夹的说明 你可能想知道为什么 `scripts` 文件夹位于目录树深处而不是位于存储库的顶层。两种方法都是可行的,但我不想鼓励最终用户到处乱翻并修改里面的脚本。将它放到更深的地方是一个让他们更难乱翻的方法。另外,我也可以添加一个 `.dockerignore` 文件去忽略 `scripts` 文件夹,因为它不是容器必需的部分。但因为它很小,所以我没有这样做。 ### 简单而灵活 尽管这一方法很简单,但我在几个相当不同的构建系统中使用过,发现它相当灵活。相对稳定的部分(例如,一年仅修改数次的给定工具集)被固定在容器镜像内。较为灵活的部分则以脚本的形式放在镜像外。这使我能够通过修改脚本并将更改推送到构建系统存储库中,轻松修改调用工具集的方式。用户所需要做的是将更改拉到本地的构建系统存储库中,这通常是非常快的(与更新 Docker 镜像不同)。这种结构使其能够拥有尽可能多的卷和脚本,同时使最终用户摆脱复杂性。 --- via: <https://opensource.com/article/20/4/how-containerize-build-system> 作者:[Ravi Chandran](https://opensource.com/users/ravichandran) 选题:[lujun9972](https://github.com/lujun9972) 译者:[LazyWolfLin](https://github.com/LazyWolfLin) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
A build system comprises the tools and processes used to transition from source code to a running application. This transition also involves changing the code's audience from the software developer to the end user, whether the end user is a colleague in operations or a deployment system. After creating a few build systems using containers, I think I have a decent, repeatable approach that's worth sharing. These build systems were used for generating loadable software images for embedded hardware and compiling machine learning algorithms, but the approach is abstract enough to be used in any container-based build system. This approach is about creating or organizing the build system in a way that makes it easy to use and maintain. It's not about the tricks needed to deal with containerizing any particular software compilers or tools. It applies to the common use case of software developers building software to hand off a maintainable image to other technical users (whether they are sysadmins, DevOps engineers, or some other title). The build system is abstracted away from the end users so that they can focus on the software. ## Why containerize a build system? Creating a repeatable, container-based build system can provide a number of benefits to a software team: **Focus:** I want to focus on writing my application. When I call a tool to "build," I want the toolset to deliver a ready-to-use binary. I don't want to spend time troubleshooting the build system. In fact, I'd rather not know or care about the build system. **Identical build behavior:** Whatever the use case, I want to ensure that the entire team uses the same versions of the toolset and gets the same results when building. Otherwise, I am constantly dealing with the case of "it works on my PC but not yours."
Using the same toolset version and getting identical output for a given input source file set is critical in a team project. **Easy setup and future migration:** Even if a detailed set of instructions is given to everyone to install a toolset for a project, chances are someone will get it wrong. Or there could be issues due to how each person has customized their Linux environment. This can be further compounded by the use of different Linux distributions across the team (or other operating systems). The issues can get uglier quickly when it comes time for moving to the next version of the toolset. Using containers and the guidelines in this article will make migration to newer versions much easier. Containerizing the build systems that I use on my projects has certainly been valuable in my experience, as it has alleviated the problems above. I tend to use Docker for my container tooling, but there can still be issues due to the installation and network configuration being unique from environment to environment, especially if you work in a corporate environment involving some complex proxy settings. But at least now I have fewer build system problems to deal with. ## Walking through a containerized build system I created a [tutorial repository](https://github.com/ravi-chandran/dockerize-tutorial) you can clone and examine at a later time or follow along through this article. I'll be walking through all the files in the repository. The build system is deliberately trivial (it runs **gcc**) to keep the focus on the build system architecture. ## Build system requirements Two key aspects that I think are desirable in a build system are: **Standard build invocation:** I want to be able to build code by pointing to some work directory whose path is **/path/to/workdir**.
I want to invoke the build as: `./build.sh /path/to/workdir` To keep the example architecture simple (for the sake of explanation), I'll assume that the output is also generated somewhere within **/path/to/workdir**. (Otherwise, it would increase the number of volumes exposed to the container, which is not difficult, but more cumbersome to explain.) **Custom build invocation via shell:** Sometimes, the toolset needs to be used in unforeseen ways. In addition to the standard **build.sh** to invoke the toolset, some of these could be added as options to **build.sh**, if needed. But I always want to be able to get to a shell where I can invoke toolset commands directly. In this trivial example, say I sometimes want to try out different **gcc** optimization options to see the effects. To achieve this, I want to invoke: `./shell.sh /path/to/workdir` This should get me to a Bash shell inside the container with access to the toolset and to my **workdir**, so I can experiment as I please with the toolset. ## Build system architecture To comply with the basic requirements above, here is how I architect the build system: ![Container build system architecture](https://opensource.com/sites/default/files/uploads/build_sys_arch.jpg) At the bottom, the **workdir** represents any software source code that needs to be built by the software developer end users. Typically, this **workdir** will be a source-code repository. The end users can manipulate this source code repository in any way they want before invoking a build. For example, if they're using **git** for version control, they could **git checkout** the feature branch they are working on and add or modify files. This keeps the build system independent of the **workdir**. The three blocks at the top collectively represent the containerized build system.
The left-most (yellow) block at the top represents the scripts (**build.sh** and **shell.sh**) that the end user will use to interact with the build system. In the middle (the red block) is the Dockerfile and the associated script **build_docker_image.sh**. The development operations people (me, in this case) will typically execute this script and generate the container image. (In fact, I'll execute this many, many times until I get everything working right, but that's another story.) And then I would distribute the image to the end users, such as through a container trusted registry. The end users will need this image. In addition, they will clone the build system repository (i.e., one that is equivalent to the [tutorial repository](https://github.com/ravi-chandran/dockerize-tutorial)). The **run_build.sh** script on the right is executed inside the container when the end user invokes either **build.sh** or **shell.sh**. I'll explain these scripts in detail next. The key here is that the end user does not need to know anything about the red or blue blocks or how a container works in order to use any of this. ## Build system details The tutorial repository's file structure maps to this architecture. I've used this prototype structure for relatively complex build systems, so its simplicity is not a limitation in any way. Below, I've listed the tree structure of the relevant files from the repository. The **dockerize-tutorial** folder could be replaced with any other name corresponding to a build system. From within this folder, I invoke either **build.sh** or **shell.sh** with the one argument that is the path to the **workdir**. ``` dockerize-tutorial/ ├── build.sh ├── shell.sh └── swbuilder ├── build_docker_image.sh ├── install_swbuilder.dockerfile └── scripts └── run_build.sh ``` Note that I've deliberately excluded the **example_workdir** above, which you'll find in the tutorial repository. 
Actual source code would typically reside in a separate repository and not be part of the build tool repository; I included it in this repository, so I didn't have to deal with two repositories in the tutorial. Doing the tutorial is not necessary if you're only interested in the concepts, as I'll explain all the files. But if you want to follow along (and have Docker installed), first build the container image **swbuilder:v1** with: ``` cd dockerize-tutorial/swbuilder/ ./build_docker_image.sh docker image ls # resulting image will be swbuilder:v1 ``` Then invoke **build.sh** as: ``` cd dockerize-tutorial ./build.sh ~/repos/dockerize-tutorial/example_workdir ``` The code for [build.sh](https://github.com/ravi-chandran/dockerize-tutorial/blob/master/build.sh) is below. This script instantiates a container from the container image **swbuilder:v1**. It performs two volume mappings: one from the **example_workdir** folder to a volume inside the container at path **/workdir**, and the second from **dockerize-tutorial/swbuilder/scripts** outside the container to **/scripts** inside the container. ``` docker container run \ --volume $(pwd)/swbuilder/scripts:/scripts \ --volume $1:/workdir \ --user $(id -u ${USER}):$(id -g ${USER}) \ --rm -it --name build_swbuilder swbuilder:v1 \ build ``` In addition, the **build.sh** also invokes the container to run with your username (and group, which the tutorial assumes to be the same) so that you will not have issues with file permissions when accessing the generated build output. Note that [ shell.sh](https://github.com/ravi-chandran/dockerize-tutorial/blob/master/shell.sh) is identical except for two things: **build.sh**creates a container named **build_swbuilder**while **shell.sh**creates one named **shell_swbuilder**. This is so that there are no conflicts if either script is invoked while the other one is running. 
The other key difference between the two scripts is the last argument: **build.sh** passes in the argument **build** while **shell.sh** passes in the argument **shell**. If you look at the [Dockerfile](https://github.com/ravi-chandran/dockerize-tutorial/blob/master/swbuilder/install_swbuilder.dockerfile) that is used to create the container image, the last line contains the following **ENTRYPOINT**. This means that the **docker container run** invocation above will result in executing the **run_build.sh** script with either **build** or **shell** as the sole input argument. ``` # run bash script and process the input command ENTRYPOINT [ "/bin/bash", "/scripts/run_build.sh"] ``` [run_build.sh](https://github.com/ravi-chandran/dockerize-tutorial/blob/master/swbuilder/scripts/run_build.sh) uses this input argument to either start the Bash shell or invoke **gcc** to perform the build of the trivial **helloworld.c** project. A real build system would typically invoke a Makefile and not run **gcc** directly. ``` cd /workdir if [ $1 = "shell" ]; then echo "Starting Bash Shell" /bin/bash elif [ $1 = "build" ]; then echo "Performing SW Build" gcc helloworld.c -o helloworld -Wall fi ``` You could certainly pass more than one argument if your use case demands it. For the build systems I've dealt with, the build is usually for a given project with a specific **make** invocation. In the case of a build system where the build invocation is complex, you can have **run_build.sh** call a specific script inside **workdir** that the end user has to write.
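Since the dispatch in **run_build.sh** is the contract between the scripts and the container, it can help to see it restated outside Bash. The following Go sketch is my own illustration of the same two-way dispatch; the error branch for unrecognized modes is an addition (the shell script simply falls through and does nothing):

```go
package main

import "fmt"

// commandFor mirrors run_build.sh: "shell" starts Bash, "build" runs
// gcc on the trivial helloworld.c project. Both commands are meant to
// be executed from /workdir inside the container.
func commandFor(mode string) ([]string, error) {
	switch mode {
	case "shell":
		return []string{"/bin/bash"}, nil
	case "build":
		return []string{"gcc", "helloworld.c", "-o", "helloworld", "-Wall"}, nil
	default:
		return nil, fmt.Errorf("unknown mode %q", mode)
	}
}

func main() {
	for _, mode := range []string{"shell", "build"} {
		cmd, _ := commandFor(mode)
		fmt.Println(mode, "->", cmd)
	}
}
```

Making the unknown-mode case an explicit error, rather than silent fall-through, is the one behavioral change worth considering for the real script.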
Also, I could have added a **.dockerignore** file to ignore the **scripts** folder, as it doesn't need to be part of the container context. But since it's tiny, I didn't bother. ## Simple yet flexible While the approach is simple, I've used it for a few rather different build systems and found it to be quite flexible. The aspects that are going to be relatively stable (e.g., a given toolset that changes only a few times a year) are fixed inside the container image. The aspects that are more fluid are kept outside the container image as scripts. This allows me to easily modify how the toolset is invoked by updating the script and pushing the changes to the build system repository. All the user needs to do is to pull the changes to their local build system repository, which is typically quite fast (unlike updating a Docker image). The structure lends itself to having as many volumes and scripts as are needed while abstracting the complexity away from the end user.
12,235
自定义用于 Web 开发的开源 PHP 框架 Codeigniter
https://opensource.com/article/20/5/codeigniter
2020-05-21T10:27:24
[ "PHP", "Codeigniter" ]
https://linux.cn/article-12235-1.html
> > Codeigniter 是一个 PHP 框架,可以使公司开发具有灵活性和便捷性的高性能网站。 > > > ![](/data/attachment/album/202005/21/102637vslj5zqk52x98a52.jpg) PHP Codeigniter 是一个开源框架,为商业应用提供易于使用的 PHP 编程语言和强大的编码工具。它还提供商务智能、服务器监视、开发和应用集成功能。这是一个相对冷清的项目,你很少听到它,但它功能强大,许多刚接触的开发人员都对此感到惊讶和耳目一新。 我在新加坡的一家在线学习服务提供商处使用 [Codeigniter](https://codeigniter.com/)。我们提供的服务并不算常见,没有可以作为模板的默认功能集或现有后台管理系统,所以我需要一个能提供良好的、可靠的、可以建立在此基础上的原始材料。最初,我考虑用其他平台(如 Wordpress)用于我们的网站。但是,我决定使用 Codeigniter,因为它的灵活性,以及集成了在我们的补课匹配过程中需要的功能。 以下是打动我使用 Codeigniter 的原因: * 与 MySQL 数据库的集成 —— 主要功能是允许客户端浏览导师的数据库并添加导师,例如类似于电子商务平台的“购物车”。 * 客户端界面系统 —— 用户可以登录来管理偏好并编辑详细信息,修改所教的科目、旅游的地区、手机号码、地址等。 * 定制的管理员面板 —— 管理员可以使用定制的管理面板访问客户提交的资料,它与客户服务功能集成在一起,因此管理员可以单独跟进。 * 付款系统 —— 管理面板带有与 Paypal 集成的发票和付款网关。 * CMS 编辑器界面 —— 管理员能够编辑博客和文章中的文本和图像,以及添加新页面。 该项目花费了大约六个月的时间来完成,另外花了两个月的调试时间。如果我需要从头开始构建所有,或者尝试重新设计现有的框架以满足我们的需求,那将花费更长的时间,而且可能最终无法满足客户需求。 ### 功能和优点 PHP Codeigniter 还有很多吸引开发者的功能,包括错误处理和代码格式化,这些功能在各种编码情景下都非常有用。它支持模板,可用于向现有网站添加功能或生成新网站。有许多基于 web 系统商业需要的功能,包括使用自定义标签。即使没有编程经验的普通开发人员也可以使用大多数工具。 Codeigniter 的主要功能是: * XML 核心服务, * HTTP/FTP 核心服务 * AppData 和 PHP 沙箱功能 * XSLT 和 HTML 模板 * 加密的信息传输 * PCM Codeigniter 服务器监控 * 应用集成 * 文件传输协议(FTP) * 服务台支持 * Apache POI(用于托管网站的内容管理基础架构) #### 兼容性 Codeigniter 与许多领先的软件程序兼容,例如 PHP、MySQL、[MariaDB](http://mariadb.org/)、[phpMyAdmin](https://www.phpmyadmin.net/)、[Apache](http://apache.org/)、OpenBSD、XSLT、[SQLite](http://sqlite.org/) 等。许多公司更喜欢使用 Codeigniter 产品来满足网站要求,因为它们易于使用和集成。如果你不想创建自己的网站,你可以找到许多提供自定义 Web 开发服务的开发人员和设计机构。 #### 安全 Codeigniter 还通过 SSL 加密提供数据安全性。加密可以保护数据免受入侵者和防火墙外部威胁的侵害。数据存储功能还允许对公司网站进行安全审核。 #### 其它功能 一家优秀的 PHP Web 开发公司会使用几种高级技术和第三方技术,例如 XML 和 PHP。它为企业提供了一个完整的平台,可以开发出看起来专业的、好用的商业网站。Codeigniter 使得第三方技术的使用变得容易,并可以与常见的 Web 开发软件一起使用。这使得 Web 公司可以轻松地使用所选模块创建网站。大多数 PHP 开发者也为个人提供支持和培训服务。 ### 使用 PHP 框架 Codeigniter Codeigniter 给企业提供了完整的 PHP 开发包,它将强大的功能、灵活性和性能完美结合在一起。到目前为止,我很满意我们的网站,并不断地升级和添加新的功能。我期待着发现我们的网站还能用 Codeigniter 做些什么。你也是这样么?
--- via: <https://opensource.com/article/20/5/codeigniter> 作者:[Wee Ben Sen](https://opensource.com/users/bswee14) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
PHP Codeigniter is an open source framework providing business applications with the easy-to-use PHP programming language and powerful tools for coding. It also provides business intelligence, server monitoring, development, and application integration facilities. It's a relatively quiet project that you don't hear much about, but it's got a lot going for it that many developers new to it find surprising and refreshing. I use [Codeigniter](https://codeigniter.com/) at my job working for an online tuition service provider in Singapore. We offer services that aren't common enough to be the default feature set for templates or existing back-ends, so I need something that provides good, solid, raw materials I can build upon. Initially, I was considering other platforms such as Wordpress for our website; however, I arrived at Codeigniter due to its flexibility and integration of functions needed in the tuition-matching process. Here are the points that sold me on Codeigniter: - Database integration with MySQL—A major functionality is allowing clients to browse the tutor database and add tutors like a "shopping cart" similar to an e-commerce platform. - Client interface system—Users can log in to manage preferences and edit their particulars, modify subject taught, areas traveled, mobile number, address, etc. - Customized administrator panel—The administrator can access the client's submission with a customized admin panel, which is integrated with a customer service feature so the administrator can follow up individually. - Payment system—The admin panel comes with an invoice and payments gateway, which is integrated with Paypal. - CMS editor interface—The administrator is able to edit text and images in the blog and subject pages, as well as add new pages. The project took around six months to complete and another two months of debugging work. 
If I'd had to build all of it from scratch or try to rework an existing framework to suit our needs, it would have taken longer, and I probably wouldn't have ended up with what I needed for the demands of our customers. ## Features and benefits There are many more features that draw developers to PHP Codeigniter, including error handling and code formatting, which are useful in every coding situation. It supports templates, which can be used to add functionality to an existing website or to generate new ones. There are many features available for a business that needs to use a web-based system, including the ability to use custom tags. Most can be used by even an average developer who does not have any prior experience in programming. The key features of Codeigniter are: - XML core services, - HTTP/FTP core services - AppData and PHP sandbox features - XSLT and HTML templates - Encrypted information transfer - PCM Codeigniter server monitoring - Application integration - File Transfer Protocol (FTP) - Help desk support - Apache POI (content management infrastructure used for hosting a website) ### Compatibility Codeigniter is compatible with many leading software applications like PHP, MySQL, [MariaDB](http://mariadb.org/), [phpMyAdmin](https://www.phpmyadmin.net/), [Apache](http://apache.org/), OpenBSD, XSLT, [SQLite](http://sqlite.org/), and more. A number of companies prefer to use Codeigniter products for their website requirements because they are easy to work with and integrate. If you're not comfortable creating your own website, you can find many developers and design agencies that provide custom web development services. ### Security Codeigniter also provides data security through SSL encryption. The encryption protects the data from external threats such as intruders and firewalls. The data storage facility also allows for security audits of the company's website. 
### Other features A good PHP web development company uses several advanced and third-party technologies such as XML and PHP. It provides organizations with a complete platform to develop professional-looking, useful websites with a business application. Codeigniter makes it easy to use third party technology, and works with common web development software. This allows web agencies to easily create websites with their chosen modules. Most PHP developers offer support and training services for individuals, as well. ## Using PHP framework Codeigniter Codeigniter allows businesses to have a complete package for PHP development that will offer the right combination of power, flexibility, and performance. So far, I am very pleased with our website and I have continuously upgraded and added new features along the way. I look forward to discovering what else I can do with our website using Codeigniter. Could it be right for you too?
12,236
5 种拆分 Linux 终端的方法
https://opensource.com/article/20/5/split-terminal
2020-05-21T13:25:00
[ "终端", "终端复用器", "tmux" ]
/article-12236-1.html
> > 本文介绍了 Linux 提供的拆分终端的方法,它能够帮助你完成多任务工作。那么,你最喜欢哪一款终端复用工具呢? > > > ![](/data/attachment/album/202005/21/132437ypzpqqppqh1qfznh.jpg) 没有什么问题是不能用一个 Linux 终端解决的,如果不行,那就用两个。 很早以前,[终端其实是一个物理设备](https://www.redhat.com/sysadmin/terminals-shells-consoles),而现在的终端实际上是在计算机上被模拟出来的一个应用程序。当你使用终端和计算机进行交互的时候,就会发现,只打开一个终端是不够用的。在进行编译、数据处理等长时间任务的时候,你不得不打开一个新终端或新<ruby> 选项卡 <rt> tab </rt></ruby>来同时进行其它工作。 如果你是系统管理员,你就需要更多的终端窗口,以便连接到多个不同的主机上并行工作了。 在 Linux 系统中,终端应用程序在很久之前就已经开始带有选项卡功能了。而现在的终端应用程序里,选项卡已经是标配功能了,这是非常流行的趋势。尽管如此,工作的时候在多个选项卡之间来回切换,或多或少也会分散我们的注意力,甚至带来不便。 而最好的解决方案就是将整个屏幕划分为多个部分,这样多个终端就可以在同一个终端应用程序窗口中同时存在。Linux 发行版中也有很多相关的工具可以实现这一功能。 ### Shell、终端和控制台 在此之前,我们首先要明确 Shell、<ruby> 终端 <rt> terminal </rt></ruby>、<ruby> 控制台 <rt> console </rt></ruby>这三个概念。想要详细了解的话,请参阅 [Enable Sysadmin](https://www.redhat.com/sysadmin/terminals-shells-consoles) 博客上的相关文章。 简而言之: * **Shell** 是带有<ruby> 命令提示符 <rt> prompt </rt></ruby>的用于输入、输出的界面。准确地说,[POSIX](https://opensource.com/article/19/7/what-posix-richard-stallman-explains) 桌面底层也运行着一个 Shell,即使这个 Shell 对用户不可见,因为用户会话就是由这个 Shell 启动的。 * **终端**是在图形界面服务器(例如 X11 或 Wayland)中运行的应用程序,其中加载了一个 Shell。只有在终端窗口启动之后,才算是运行了一个终端。终端可以认为是操作 Shell 的一个入口。 * **控制台**(或称“虚拟控制台”)通常表示在桌面环境以外使用的 Shell,你可以通过 `Alt+Ctrl+F2` 进入控制台,通常情况下从 `F3` 到 `F7` 都是不同的控制台,其中桌面环境有可能是 `F1` 或者 `F7`,这在不同的发行版中可能会有所不同。 因此,有些应用程序提供的功能是拆分 Shell 或者控制台,有些应用程序的功能则是拆分终端。 ### tmux ![tmux terminal](/data/attachment/album/202005/21/132609upfsopddjaaadkjd.png "tmux terminal") [tmux](https://github.com/tmux/tmux) 可以说是最灵活、最强大的屏幕拆分工具了,它通过键盘控制对多个终端的复用,因此你可以将一个控制台叠放在另一个控制台上面,并在两个控制台之间切换。你还可以将整个屏幕等分为多个控制台,以便同时观察不同控制台上的状况。 `tmux` 的所有操作都是通过键盘完成的,这就意味着你的手不需要离开键盘去寻找鼠标。为此,你需要记住一些按键组合。 如果你只用 `tmux` 来做屏幕拆分,那你只需要记住一下这些命令: * `Ctrl-B %` 竖直拆分屏幕(两个 Shell 分别位于左右) * `Ctrl-B "` 水平拆分屏幕(两个 Shell 分别位于上下) * `Ctrl-B O` 切换到另一个 Shell * `Ctrl-B ?` 查看帮助 * `Ctrl-B d` 断开 `tmux` 并让其在后台运行(可以使用 `tmux attach` 重新进入) `tmux` 的一大好处是,在一台计算机上启动 `tmux` 会话之后,也可以从另一台计算机上进入到这个会话,由此可以看出,`tmux` 对 Shell 进行了<ruby> 守护进程化 <rt> daemonize </rt></ruby>。 例如,当我在树莓派上运行 
`tmux`,我就可以从计算机上连接到树莓派并登录 IRC,当我断开连接时,树莓派上的 `tmux` 会继续运行,并等待我的下一次连接,在此期间 IRC 是处于持续登录状态的。 ### GNU Screen ![GNU Screen terminal](/data/attachment/album/202005/21/132542q2a10vnvyupo09u5.png "GNU Screen terminal") [GNU Screen](https://www.gnu.org/software/screen/) 也是一个 Shell 复用工具,类似于 `tmux`,你可以在断开一个活动会话后重连到其中,它也支持竖直或水平拆分屏幕。 `screen` 的灵活性比 `tmux` 要弱一些。它默认的绑定按键组合是 `Ctrl-A`,和 Bash 中光标移动到行首的快捷键是一样的。因此,当你正在运行 `screen` 的时候,如果想要将光标移动到行首,就需要多按一次 `Ctrl-A`。而我自己的做法是,在 `$HOME/.screenrc` 文件中将绑定按键组合重新设置为 `Ctrl-J`。 ``` escape ^jJ ``` 尽管 `screen` 在屏幕拆分功能上做得很好,但 `tmux` 上的一些缺点在 Screen 上也同样存在。例如在拆分 Shell 时,在一个新的面板中不会启动新的 Shell ,而是需要使用 `Ctrl-A Tab` 导航到另一个面板(如果你按照我的方式重新设置了按键组合,需要对应地把 `Ctrl-A` 改为 `Ctrl-J`),然后通过 `Ctrl-A C` 手动创建一个新的 Shell。 和 `tmux` 不同的是,`screen` 在退出一个 Shell 的时候,屏幕拆分状态不会改变,这样的设计在某些情况下是比较适合的,但麻烦之处在于需要手动管理屏幕拆分状态。 尽管如此,`screen` 还是一个相当可靠灵活的应用程序,在无法使用 `tmux` 的时候,你可以选择 `screen` 作为备选方案。 在默认按键方案下,`screen` 常用的基本命令包括: * `Ctrl-A |` 竖直拆分屏幕(两个 Shell 分别位于左右) * `Ctrl-A S` 水平拆分屏幕(两个 Shell 分别位于上下) * `Ctrl-A Tab` 切换到另一个 Shell * `Ctrl-A ?` 查看帮助 * `Ctrl-A d` 断开 `screen` 并让其在后台运行(可以使用 `screen -r` 重新进入) ### Konsole ![Konsole screen](/data/attachment/album/202005/21/132546d07pbn70p2nbbp0b.jpg "Konsole screen") [Konsole](https://konsole.kde.org) 是 KDE Plasma 桌面使用的终端应用程序。和 KDE 一样,Konsole 也以高度可定制、功能强大的特点而著称。 和 `tmux`、GNU Screen 类似,Konsole 也具有拆分屏幕的功能。由于 Konsole 是图形界面的终端,因此还可以用鼠标来控制它的屏幕拆分。 Konsole 的屏幕拆分功能在“<ruby> 查看 <rt> View </rt></ruby>”菜单中。它也支持竖直和水平方向的拆分,只要点击鼠标就可以切换到另一个面板上。每个面板都是一个独立的终端,因此都可以拥有独立的主题和标签页。 Konsole 和 `tmux`、GNU Screen 最大的不同之处在于不能断开和重新连接 Konsole。除非使用远程桌面软件,否则只能在打开 Konsole 时使用,这一点和大多数图形界面应用程序是一样的。 ### Emacs ![Emacs rpg](/data/attachment/album/202005/21/132549hh9czonx0jc8l49r.jpg "Emacs rpg") 严格来说,Emacs 并不算是一个终端复用工具,但它的使用界面支持拆分和调整大小,同时还带有一个内建的终端。 如果 Emacs 是你日常使用的文本编辑器,你就可以在不关闭编辑器的情况下,在不同的应用程序之间轻松互相切换。由于 Emacs eshell 模块是通过 eLISP 实现的,因此你可以在 Emacs 中使用相同的命令进行交互,让一些繁琐的操作变得更为简单。 如果你是在图形界面中使用 Emacs,还可以使用鼠标进行操作。例如通过点击切换面板、用鼠标调整拆分屏幕的的大小等等。尽管如此,键盘的操作速度还是更快,因此记住一些键盘快捷键还是很有必要的。 Emacs 的一些重要快捷键包括: * 
`Ctrl-X 3` 竖直拆分屏幕(两个 Shell 分别位于左右) * `Ctrl-X 2` 水平拆分屏幕(两个 Shell 分别位于上下) * `Ctrl-X O` (大写字母 `O`)切换到另一个 Shell(你也可以使用鼠标操作) * `Ctrl-X 0` (数字 `0`)关闭当前面板 如果你运行了 emacs-client 的话,就可以像 tmux 和 GNU Screen 一样断开和重新连接到 Emacs 了。 ### 窗口管理器 ![Ratpoison split screen](/data/attachment/album/202005/21/132556rbqd7dujnmddud7d.jpg "Ratpoison split screen") 除了文本编辑器之外,一些 Linux 桌面也同样具有拆分屏幕、加载终端这样的功能。例如 [Ratpoison](https://opensource.com/article/19/12/ratpoison-linux-desktop)、[Herbsluftwm](https://opensource.com/article/19/12/herbstluftwm-linux-desktop)、i3、Awesome,甚至是启用了特定设置的 KDE Plasma 桌面,都可以将多个应用程序在桌面上分块显示。 这些桌面可以让各个应用程序占据屏幕的固定位置,而不是浮在你的桌面“之上”,因此你可以在多个应用程序窗口之间轻松切换。你还可以打开多个终端,排布成网格,就像终端复用工具一样。更进一步,你还可以在你的桌面复用工具中加载一个终端复用工具。 而且,没有什么可以阻止你在里面载入 Emacs 并分割缓冲区。没有人知道,如果你把它更进一步,会发生什么,大多数 Linux 用户不会外传这种秘密。 和 `tmux`、GNU Screen 不同,你在断开与桌面的连接后无法重新连接到同一个桌面会话,除非你使用了远程桌面软件进行连接。 ### 更多选择 除了上面介绍到的工具以外,还有诸如 [Tilix](https://gnunn1.github.io/tilix-web/)、Terminator 这样的终端模拟器,它们同样可以实现屏幕拆分、嵌入终端组件等功能。欢迎在评论区分享你喜欢的终端拆分工具。 --- via: <https://opensource.com/article/20/5/split-terminal> 作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
12,238
通过禁止比较让 Go 二进制文件变小
https://dave.cheney.net/2020/05/09/ensmallening-go-binaries-by-prohibiting-comparisons
2020-05-22T10:17:46
[ "Go", "相等" ]
https://linux.cn/article-12238-1.html
![](/data/attachment/album/202005/22/101617lcha7vvqzhh7d565.jpg) 大家常规的认知是,Go 程序中声明的类型越多,生成的二进制文件就越大。这个符合直觉,毕竟如果你写的代码不去操作定义的类型,那么定义一堆类型就没有意义了。然而,链接器的部分工作就是检测没有被程序引用的函数(比如说它们是一个库的一部分,其中只有一个子集的功能被使用),然后把它们从最后的编译产出中删除。常言道,“类型越多,二进制文件越大”,对于多数 Go 程序还是正确的。 本文中我会深入讲解在 Go 程序的上下文中“相等”的意义,以及为什么[像这样](https://github.com/golang/net/commit/e0ff5e5a1de5b859e2d48a2830d7933b3ab5b75f)的修改会对 Go 程序的大小有重大的影响。 ### 定义两个值相等 Go 的语法定义了“赋值”和“相等”的概念。赋值是把一个值赋给一个标识符的行为。并不是所有声明的标识符都可以被赋值,如常量和函数就不可以。相等是通过检查标识符的内容是否相等来比较两个标识符的行为。 作为强类型语言,“相同”的概念从根源上被植入标识符的类型中。两个标识符只有在是相同类型的前提下,才有可能相同。除此之外,值的类型定义了如何比较该类型的两个值。 例如,整型是用算术方法进行比较的。对于指针类型,是否相等是指它们指向的地址是否相同。映射和通道等引用类型,跟指针类似,如果它们指向相同的地址,那么就认为它们是相同的。 上面都是按位比较相等的例子,即值占用的内存的位模式是相同的,那么这些值就相等。这就是所谓的 memcmp,即内存比较,相等是通过比较两个内存区域的内容来定义的。 记住这个思路,我过会儿再来谈。 ### 结构体相等 除了整型、浮点型和指针等标量类型,还有复合类型:结构体。所有的结构体以程序中的顺序被排列在内存中。因此下面这个声明: ``` type S struct { a, b, c, d int64 } ``` 会占用 32 字节的内存空间;`a` 占用 8 个字节,`b` 占用 8 个字节,以此类推。Go 的规则说如果结构体所有的字段都是可以比较的,那么结构体的值就是可以比较的。因此如果两个结构体所有的字段都相等,那么它们就相等。 ``` a := S{1, 2, 3, 4} b := S{1, 2, 3, 4} fmt.Println(a == b) // 输出 true ``` 编译器在底层使用 memcmp 来比较 `a` 的 32 个字节和 `b` 的 32 个字节。 ### 填充和对齐 然而,在下面的场景下过分简单化的按位比较的策略会返回错误的结果: ``` type S struct { a byte b uint64 c int16 d uint32 } func main() { a := S{1, 2, 3, 4} b := S{1, 2, 3, 4} fmt.Println(a == b) // 输出 true } ``` 编译代码后,这个比较表达式的结果还是 `true`,但是编译器在底层并不能仅依赖比较 `a` 和 `b` 的位模式,因为结构体有*填充*。 Go 要求结构体的所有字段都对齐。2 字节的值必须从偶数地址开始,4 字节的值必须从 4 的倍数地址开始,以此类推 <sup id="fnref1"> <a href="#fn1" rel="footnote"> 1 </a></sup>。编译器根据字段的类型和底层平台加入了填充来确保字段都*对齐*。在填充之后,编译器实际上看到的是 <sup id="fnref2"> <a href="#fn2" rel="footnote"> 2 </a></sup>: ``` type S struct { a byte _ [7]byte // 填充 b uint64 c int16 _ [2]int16 // 填充 d uint32 } ``` 填充的存在保证了字段正确对齐,而填充确实占用了内存空间,但是填充字节的内容是未知的。你可能会认为在 Go 中填充字节都是 0,但实际上并不是 — 填充字节的内容是未定义的。由于它们并不是被定义为某个确定的值,因此按位比较会因为分布在 `s` 的 24 字节中的 9 个填充字节不一样而返回错误结果。 Go 通过生成所谓的相等函数来解决这个问题。在这个例子中,`s` 的相等函数只比较结构体中的字段、略过填充部分,这样就能正确比较类型 `s` 的两个值。 ### 类型算法 呵,这是个很大的设置,说明了为什么,对于 Go
程序中定义的每种类型,编译器都会生成几个支持函数,编译器内部把它们称作类型的算法。如果类型是一个映射的键,那么除相等函数外,编译器还会生成一个哈希函数。为了维持稳定,哈希函数在计算结果时也会像相等函数一样考虑诸如填充等因素。 凭直觉判断编译器什么时候生成这些函数实际上很难,有时并不明显,(因为)这超出了你的预期,而且链接器也很难消除没有被使用的函数,因为反射往往导致链接器在裁剪类型时变得更保守。 ### 通过禁止比较来减小二进制文件的大小 现在,我们来解释一下 Brad 的修改。向类型添加一个不可比较的字段 <sup id="fnref3"> <a href="#fn3" rel="footnote"> 3 </a></sup>,结构体也随之变成不可比较的,从而强制编译器不再生成相等函数和哈希函数,规避了链接器对那些类型的消除,在实际应用中减小了生成的二进制文件的大小。作为这项技术的一个例子,下面的程序: ``` package main import "fmt" func main() { type t struct { // _ [0][]byte // 取消注释以阻止比较 a byte b uint16 c int32 d uint64 } var a t fmt.Println(a) } ``` 用 Go 1.14.2(darwin/amd64)编译,大小从 2174088 降到了 2174056,节省了 32 字节。单独看节省的这 32 字节似乎微不足道,但是考虑到你的程序中每个类型及其传递闭包都会生成相等和哈希函数,还有它们的依赖,这些函数的大小随类型大小和复杂度的不同而不同,禁止它们会大大减小最终的二进制文件的大小,效果比之前使用 `-ldflags="-s -w"` 还要好。 最后总结一下,如果你不想把类型定义为可比较的,可以在源码层级强制实现像这样的奇技淫巧,会使生成的二进制文件变小。 --- 附录:在 Brad 的推动下,[Cherry Zhang](https://go-review.googlesource.com/c/go/+/231397) 和 [Keith Randall](https://go-review.googlesource.com/c/go/+/191198) 已经在 Go 1.15 做了大量的改进,修复了最严重的故障,消除了无用的相等和哈希函数(虽然我猜想这也是为了避免这类 CL 的扩散)。 #### 相关文章: 1. [Go 运行时如何高效地实现映射(不使用泛型)](https://dave.cheney.net/2018/05/29/how-the-go-runtime-implements-maps-efficiently-without-generics "How the Go runtime implements maps efficiently (without generics)") 2. [空结构体](https://dave.cheney.net/2014/03/25/the-empty-struct "The empty struct") 3. [填充很难](https://dave.cheney.net/2015/10/09/padding-is-hard "Padding is hard") 4. [Go 中有类型的 nil(2)](https://dave.cheney.net/2017/08/09/typed-nils-in-go-2 "Typed nils in Go 2") --- 1. 在 32 位平台上 `int64` 和 `unit64` 的值可能不是按 8 字节对齐的,因为平台原生的是以 4 字节对齐的。查看 [议题 599](https://github.com/golang/go/issues/599) 了解内部详细信息。 [↩](#fnref1) 2. 32 位平台会在 `a` 和 `b` 的声明中填充 `_ [3]byte`。参见前一条。 [↩](#fnref2) 3. 
Brad 使用的是`[0]func()`,但是所有能限制和禁止比较的类型都可以。添加了一个有 0 个元素的数组的声明后,结构体的大小和对齐不会受影响。 [↩](#fnref3) --- via: <https://dave.cheney.net/2020/05/09/ensmallening-go-binaries-by-prohibiting-comparisons> 作者:[Dave Cheney](https://dave.cheney.net/author/davecheney) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lxbwolf](https://github.com/lxbwolf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Conventional wisdom dictates that the larger the number of types declared in a Go program, the larger the resulting binary. Intuitively this makes sense, after all, what's the point in defining a bunch of types if you're not going to write code that operates on them. However, part of the job of a linker is to detect functions which are not referenced by a program–say they are part of a library of which only a subset of functionality is used–and remove them from the final output. Yet, the adage mo' types, mo' binary holds true for the majority of Go programs. In this post I'll dig into what equality, in the context of a Go program, means and why changes [like this](https://github.com/golang/net/commit/e0ff5e5a1de5b859e2d48a2830d7933b3ab5b75f) have a measurable impact on the size of a Go program. ## Defining equality between two values The Go spec defines the concepts of assignability and equality. Assignability is the act of assigning a value to an identifier. Not everything which is declared can be assigned, for example constants and functions. Equality is the act of comparing two identifiers by asking *are their contents the same?* Being a strongly typed language, the notion of sameness is fundamentally rooted in the identifier's type. Two things can only be the same if they are of the same type. Beyond that, the type of the values defines how they are compared. For example, integers are compared arithmetically. For pointer types, equality is determining if the addresses they point to are the same. Reference types like maps and channels, like pointers, are considered to be the same if they have the same address. These are all examples of bitwise equality, that is, if the bit patterns of the memory that value occupies are the same, those values are equal. This is known as memcmp, short for memory comparison, as equality is defined by comparing the contents of two areas of memory. Hold on to this idea, I'll come back to it in a second.
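These rules are easy to confirm for yourself. The snippet below is my illustration rather than part of the original post; note that it uses channels for the reference-type case, since Go only permits `==` between two maps when one of them is the literal `nil`:

```go
package main

import "fmt"

func main() {
	// Integers: compared arithmetically.
	fmt.Println(2+2 == 4) // true

	// Pointers: equal only when they hold the same address.
	x, y := 42, 42
	p, q, r := &x, &x, &y
	fmt.Println(p == q) // true: same address
	fmt.Println(p == r) // false: equal contents, different addresses

	// Channels behave like pointers: two channel values are equal
	// when they refer to the same object created by make.
	c1 := make(chan int)
	c2 := c1
	fmt.Println(c1 == c2)             // true
	fmt.Println(c1 == make(chan int)) // false
}
```

Each of these comparisons is decided purely by the bit pattern of the value itself, which is the memcmp idea the post builds on.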
## Struct equality Beyond scalar types like integers, floats, and pointers is the realm of compound types; structs. All structs are laid out in memory in program order, thus this declaration: ``` type S struct { a, b, c, d int64 } ``` will consume 32 bytes of memory; 8 bytes for `a`, then 8 bytes for `b`, and so on. The spec says that *struct values are comparable if all their fields are comparable*. Thus two structs are equal iff each of their fields are equal. ``` a := S{1, 2, 3, 4} b := S{1, 2, 3, 4} fmt.Println(a == b) // prints true ``` Under the hood the compiler uses memcmp to compare the 32 bytes of `a` and `b`. ## Padding and alignment However the simplistic bitwise comparison strategy will fail in situations like this: ``` type S struct { a byte b uint64 c int16 d uint32 } func main() { a := S{1, 2, 3, 4} b := S{1, 2, 3, 4} fmt.Println(a == b) // prints true } ``` The code compiles, the comparison is still true, but under the hood the compiler cannot rely on comparing the bit patterns of `a` and `b` because the structure contains *padding*. Go requires each field in a struct to be naturally aligned. 2 byte values must start on an even address, four byte values on an address divisible by 4, and so on[1](#easy-footnote-bottom-1-4116). The compiler inserts padding to ensure the fields are *aligned* according to their type and the underlying platform. In effect, after padding, this is what the compiler sees[2](#easy-footnote-bottom-2-4116): ``` type S struct { a byte _ [7]byte // padding b uint64 c int16 _ [2]int16 // padding d uint32 } ``` Padding exists to ensure the correct field alignments, and while it does take up space in memory, the contents of those padding bytes are unknown. You might assume that, being Go, the padding bytes are always zero, but it turns out that's not the case–the contents of padding bytes are simply not defined.
Because they’re not defined to always be a certain value, doing a bitwise comparison may return false because the nine bytes of padding spread throughout the 24 bytes of `S` may not be the same.

The Go compiler solves this problem by generating what is known as an equality function. In this case `S`’s equality function knows how to compare two values of type `S` by comparing only the fields in the struct while skipping over the padding.

## Type algorithms

Phew, that was a lot of setup to illustrate why, for each type defined in a Go program, the compiler may generate several supporting functions, known inside the compiler as the type’s algorithms. In addition to the equality function the compiler will generate a hash function if the type is used as a map key. Like the equality function, the hash function must consider factors like padding when computing its result to ensure it remains stable.

It turns out that it can be hard, and sometimes non-obvious, to intuit when the compiler will generate these functions–it’s more than you’d expect–and it can be hard for the linker to eliminate the ones that are not needed as reflection often causes the linker to be more conservative when trimming types.

## Reducing binary size by prohibiting comparisons

Now we’re at a point to explain Brad’s change. By adding an incomparable field[3](#easy-footnote-bottom-3-4116) to the type, the resulting struct is by extension incomparable, thus forcing the compiler to elide the generation of eq and hash algorithms, short-circuiting the linker’s elimination of those types and, in practice, reducing the size of the final binary. As an example of this technique, this program:

```
package main

import "fmt"

func main() {
	type t struct {
		// _ [0][]byte uncomment to prevent comparison
		a byte
		b uint16
		c int32
		d uint64
	}
	var a t
	fmt.Println(a)
}
```

when compiled with Go 1.14.2 (darwin/amd64), decreased from 2174088 to 2174056, a saving of 32 bytes.
In isolation this 32 byte saving may seem like small beer, but consider that equality and hash functions can be generated for every type in the transitive closure of your program and all its dependencies, and the size of these functions varies depending on the size of the type and its complexity; prohibiting them can have a sizeable impact on the final binary over and above the old saw of `-ldflags="-s -w"`.

The bottom line: if you don’t wish to make your types comparable, a hack like this enforces it at the source level while contributing to a small reduction in the size of your binary.

Addendum: thanks to Brad’s prodding, Go 1.15 already has a bunch of improvements by [Cherry Zhang](https://go-review.googlesource.com/c/go/+/231397) and [Keith Randall](https://go-review.googlesource.com/c/go/+/191198) that fix the most egregious of the failures to eliminate unnecessary equality and hash functions (although I suspect it was also to avoid the proliferation of this class of CLs).

1. On 32-bit platforms `int64` and `uint64` values may not be 8 byte aligned as the natural alignment of the platform is 4 bytes. See [issue 599](https://github.com/golang/go/issues/599) for the gory details.
2. 32-bit platforms would see `_ [3]byte` padding between the declaration of `a` and `b`. See previous.
3. Brad used `[0]func()`, but any type that the spec limits or prohibits comparisons on will do. By declaring the array has zero elements the type has no impact on the size or alignment of the struct.
12,239
在 Linux 文件系统中导航的技巧
https://www.networkworld.com/article/3533421/tricks-for-getting-around-your-linux-file-system.html
2020-05-22T11:41:35
[ "cd", "导航", "文件系统" ]
https://linux.cn/article-12239-1.html
> > cd 命令可能是任何 Linux 用户学习的前 10 个命令之一,但这并不是在 Linux 文件系统中导航的唯一方法,这里还有其他一些方法。 > > > ![](/data/attachment/album/202005/22/114058yrzlx94rz9lbx974.jpg) 无论你是在文件系统中四处查看、寻找文件还是尝试进入重要目录,Linux 都可以提供很多帮助。在本文中,我们将介绍一些技巧,使你可以在文件系统中移动,查找和使用所需的命令也更加轻松。 ### 添加到 $PATH 确保你不必花费大量时间在 Linux 系统上查找命令的最简单、最有用的方法之一就是在 `$PATH` 变量中添加适当的目录。但是,添加到 `$PATH` 变量中的目录顺序非常重要。它们确定系统在目录中查找要运行命令的目录顺序–在找到第一个匹配项时停止。 例如,你可能希望将家目录放在第一个,这样,如果你创建的脚本与其他可执行文件有相同的名称,那么只要输入该脚本的名称,它便会运行。 要将家目录添加到 `$PATH` 变量中,可以执行以下操作: ``` $ export PATH=~:$PATH ``` `~` 字符代表家目录。 如果将脚本保存在 `bin` 目录中,下面的会有效: ``` $ export PATH=~/bin:$PATH ``` 然后,你可以运行位于家目录中的脚本,如下所示: ``` $ myscript Good morning, you just ran /home/myacct/bin/myscript ``` **重要提示:**上面显示的命令会添加到你的搜索路径中,因为 `$PATH`(当前路径)被包含在内。它们不会覆盖它。你的搜索路径应该在你的 `.bashrc` 文件中配置,任何你打算永久化的更改也应该添加到那里。 ### 使用符号链接 符号链接提供了一种简单而明显的方式来记录可能经常需要使用的目录的位置。例如,如果你管理网站的内容,那么可能需要通过创建如下链接来使你的帐户“记住”网页文件的位置: ``` ln -s /var/www/html www ``` 参数的顺序很重要。第一个(`/var/www/html`)是目标,第二个是你创建的链接的名称。如果你当前不在家目录中,那么以下命令将执行相同的操作: ``` ln -s /var/www/html ~/www ``` 设置好之后,你可以使用 `cd www` 进入 `/var/www/html`。 ### 使用 shopt `shopt` 命令还提供了一种让移动到其他目录更加容易的方法。当你使用 `shopt` 的 `autocd` 选项时,只需输入名称即可转到目录。例如: ``` $ shopt -s autocd $ www cd -- www /home/myacct/www $ pwd -P /var/www/html $ ~/bin cd -- /home/myacct/bin $ pwd /home/myacct/bin ``` 在上面的第一组命令中,启用了 `shopt` 命令的 `autocd` 选项。输入 `www`,就会调用 `cd www` 命令。由于此符号链接是在上面的 `ln` 命令示例之一中创建的,因此将我们移至 `/var/www/html`。 `pwd -P` 命令显示实际位置。 在第二组中,键入 `~/bin` 会调用 `cd` 进入在用户家目录的 `bin` 目录。 请注意,当你输入的是命令时,`autocd` 行为将*不会*生效,即使它也是目录的名称。 `shopt` 是 bash 内置命令,它有很多选项。这只是意味着你不必在要进入每个目录的名称之前输入 `cd`。 要查看 `shopt` 的其他选项,只需输入 `shopt`。 ### 使用 $CDPATH 可能进入特定目录的最有用技巧之一,就是将你希望能够轻松进入的路径添加到 `$CDPATH` 中。这将创建一个目录列表,只需输入完整路径名的一部分即可进入。 一方面,这可能有点棘手。你的 `$CDPATH` 需要包含要移动到的目录的父目录,而不是目录本身。 例如,假设你希望仅通过输入 `cd html` 就可以移至 `/var/www/html` 目录,并仅使用 `cd` 和简单目录名即可移至 `/var/log` 中的子目录。在这种情况下,此 `$CDPATH` 就可以起作用: ``` $ CDPATH=.:/var/log:/var/www ``` 你将看到: ``` $ cd journal /var/log/journal $ cd html /var/www/html ``` 当你输入的不是完整路径时,`$CDPATH` 
就会生效。它向下查看其目录列表,以查看指定的目录是否存在于其中一个目录中。找到匹配项后,它将带你到那里。 在 `$CDPATH` 开头保持 `.` 意味着你可以进入本地目录,而不必在 `$CDPATH` 中定义它们。 ``` $ export CDPATH=".:$CDPATH" $ Videos cd -- Videos /home/myacct/Videos ``` 在 Linux 文件系统键切换并不难,但是如果你使用一些方便的技巧轻松地到达各个位置,那你可以节省一些大脑细胞。 --- via: <https://www.networkworld.com/article/3533421/tricks-for-getting-around-your-linux-file-system.html> 作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
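作为补充(这是原文之外的示例),上文 `cd journal`、`cd html` 能够生效,是因为 shell 会按顺序在 `$CDPATH` 列出的每个父目录下查找目标目录,命中第一个匹配项就停止。下面用一小段 Python 脚本模拟这一查找逻辑:

```python
import os
import tempfile

def cdpath_lookup(name, cdpath):
    """模拟 shell 对 $CDPATH 的处理:依次在每个父目录下查找
    名为 name 的子目录,返回第一个匹配项的绝对路径。"""
    for parent in cdpath.split(":"):
        candidate = os.path.join(parent, name)
        if os.path.isdir(candidate):
            return os.path.abspath(candidate)
    return None

# 搭建一个演示用的目录结构,模拟文中的 /var/log/journal
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "var", "log", "journal"))

cdpath = ".:" + os.path.join(base, "var", "log")
print(cdpath_lookup("journal", cdpath))      # 打印 .../var/log/journal
print(cdpath_lookup("no_such_dir", cdpath))  # 打印 None(没有匹配)
```

脚本里开头的 `.` 对应上文提到的“在 `$CDPATH` 开头保持 `.`”,它保证本地目录总是最先被检查。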
301
Moved Permanently
null
12,241
如何在 Mac 上使用 pyenv 运行多个版本的 Python
https://opensource.com/article/20/4/pyenv
2020-05-23T11:20:54
[ "pyenv", "Python" ]
/article-12241-1.html
> > 如果你在 macOS 上运行的项目需要没有安装的 Python 版本,请试试 pyenv。 > > > ![](/data/attachment/album/202005/23/112041pawp65alw6tmea6l.jpg) 即使对于有经验的开发人员,管理本地 Python 开发环境仍然是一个挑战。尽管有详细的[软件包管理策略](https://opensource.com/article/19/4/managing-python-packages),但仍需要采取另外的步骤来确保你在需要时运行所需的 Python 版本。 ### 为什么 Python 版本重要? 起初这是一个奇怪的概念,但是编程语言会像其他任何软件一样发生变化。它们有错误、修复和更新,就像你喜欢的 [API](https://opensource.com/article/19/5/api-evolution-right-way) 和任何其他软件一样。同样,不同的发行版由称为[语义化版本](https://semver.org/)的三位数标识。 > > ??? [pic.twitter.com/yt1Z2439W8](https://t.co/yt1Z2439W8) > > > — Denny Perez (@dennyperez18) [May 28, 2019](https://twitter.com/dennyperez18/status/1133505310516232203?ref_src=twsrc%5Etfw) > > > 多年来,Python 2 是该语言的常用主要版本。在 2020 年 1 月,Python 2 [到达最后寿命](https://opensource.com/article/19/11/end-of-life-python-2),此后,Python 的核心维护者将仅支持 Python 3。Python 3 稳步发展,并定期发布新更新。对我来说定期获取这些更新很重要。 最近,我试图在 macOS 上运行一个依赖于 Python 3.5.9 的项目,而我的系统上并没有安装这个版本。我认为 Python 包管理器 `pip` 可以安装它,但事实并非如此: ``` $ pip install python3.5.9 Collecting python3.5.9 ERROR: Could not find a version that satisfies the requirement python3.5.9 (from versions: none) ERROR: No matching distribution found for python3.5.9 ``` 或者,我也可以从官方 Python 网站下载该版本,但我如何在我的 Mac 上与现有的 Python 版本一起运行?每次运行时指定 Python 解释器版本(例如 python3.7 或 python3.5)似乎很容易出错。一定会有更好的方法。 (说明:我知道这对经验丰富的 Python 开发人员没有意义,但对当时的我来说是有意义的。我很乐意谈一谈为什么我仍然认为它应该这样做。) ### 安装和设置 pyenv 值得庆幸的是,`pyenv` 可以绕开这一系列复杂的问题。首先,我需要安装 `pyenv`。我可以[从源码](https://github.com/pyenv/pyenv)克隆并编译它,但是我更喜欢通过 Homebrew 包管理器来管理软件包: ``` $ brew install pyenv ``` 为了通过 `pyenv` 使用 Python 版本,必须了解 shell 的 `PATH` 变量。`PATH` 决定了 shell 通过命令的名称来搜索文件的位置。你必须确保 shell 程序能够找到通过 `pyenv` 运行的 Python 版本,而不是默认安装的版本(通常称为*系统版本*)。如果不更改路径,那么结果如下: ``` $ which python /usr/bin/python ``` 这是 Python 的系统版本。 要正确设置 `pyenv`,可以在 Bash 或 zsh 中运行以下命令: ``` $ PATH=$(pyenv root)/shims:$PATH ``` 现在,如果你检查 Python 的版本,你会看到它是 `pyenv` 管理的版本: ``` $ which python /Users/my_username/.pyenv/shims/python ``` 该导出语句(`PATH=`)仅会对该 shell 实例进行更改,为了使更改永久生效,你需要将它添加到点文件当中。由于 zsh 是 macOS 的默认 
shell,因此我将重点介绍它。将相同的语法添加到 `~/.zshrc` 文件中: ``` $ echo 'PATH=$(pyenv root)/shims:$PATH' >> ~/.zshrc ``` 现在,每次我们在 zsh 中运行命令时,它将使用 `pyenv` 版本的 Python。请注意,我在 `echo` 中使用了单引号,因此它不会评估和扩展命令。 `.zshrc` 文件仅管理 zsh 实例,因此请确保检查你的 shell 程序并编辑关联的点文件。如果需要再次检查默认 shell 程序,可以运行 `echo $SHELL`。如果是 zsh,请使用上面的命令。如果你使用 Bash,请将 `~/.zshrc` 更改为 `~/.bashrc`。如果你想了解更多信息,可以在 `pyenv` 的 `README` 中深入研究[路径设置](https://github.com/pyenv/pyenv#understanding-path)。 ### 使用 pyenv 管理 Python 版本 现在 `pyenv` 已经可用,我们可以看到它只有系统 Python 可用: ``` $ pyenv versions system ``` 如上所述,你绝对不想使用此版本([阅读更多有关信息](https://opensource.com/article/19/5/python-3-default-mac))。现在 `pyenv` 已正确设置,我希望它能有我经常使用的几个不同版本的 Python。 有一种方法可以通过运行 `pyenv install --list` 来查看 pyenv 可以访问的所有仓库中的所有 Python 版本。这是一个很长的列表,将来回顾的时候可能会有所帮助。目前,我决定在 [Python 下载页面](https://www.python.org/downloads/)找到的每个最新的“点版本”(3.5.x 或 3.6.x,其中 x 是最新的)。因此,我将安装 3.5.9 和 3.8.0: ``` $ pyenv install 3.5.9 $ pyenv install 3.8.0 ``` 这将需要一段时间,因此休息一会(或阅读上面的链接之一)。有趣的是,输出中显示了该版本的 Python 的下载和构建。例如,输出显示文件直接来自 [Python.org](http://python.org)。 安装完成后,你可以设置默认值。我喜欢最新的,因此将全局默认 Python 版本设置为最新版本: ``` $ pyenv global 3.8.0 ``` 该版本立即在我的 shell 中设置完成。确认一下: ``` $ python -V Python 3.8.0 ``` 我要运行的项目仅适于 Python 3.5,因此我将在本地设置该版本并确认: ``` $ pyenv local 3.5.9 $ python -V Python 3.5.9 ``` 因为我在 `pyenv` 中使用了 `local` 选项,所以它向当前目录添加了一个文件来跟踪该信息。 ``` $ cat .python-version 3.5.9 ``` 现在,我终于可以为想要的项目设置虚拟环境,并确保运行正确版本的 Python。 ``` $ python -m venv venv $ source ./venv/bin/activate (venv) $ which python /Users/mbbroberg/Develop/my_project/venv/bin/python ``` 要了解更多信息,请查看有关[在 Mac 上管理虚拟环境](/article-11086-1.html)的教程。 ### 总结 默认情况下,运行多个 Python 版本可能是一个挑战。我发现 `pyenv` 可以确保在我需要时可以有我需要的 Python 版本。 你还有其他初学者或中级 Python 问题吗? 
请发表评论,我们将在以后的文章中考虑介绍它们。 --- via: <https://opensource.com/article/20/4/pyenv> 作者:[Matthew Broberg](https://opensource.com/users/mbbroberg) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
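作为补充(这是原文之外的示例),`pyenv local` 写入的 `.python-version` 只是一个纯文本版本号,pyenv 会从当前目录逐级向上查找它。下面这段 Python 脚本模拟了这个查找过程,并把结果与当前解释器版本做比较:

```python
import sys
from pathlib import Path

def required_version(start_dir):
    """模拟 pyenv 的查找方式:从 start_dir 逐级向上查找 .python-version,
    返回其中记录的版本号;一直找不到则返回 None。"""
    directory = Path(start_dir).resolve()
    for candidate in [directory, *directory.parents]:
        marker = candidate / ".python-version"
        if marker.is_file():
            return marker.read_text().strip()
    return None

wanted = required_version(".")
current = ".".join(map(str, sys.version_info[:3]))
if wanted is None:
    print(f"未找到 .python-version,使用全局版本(当前 {current})")
elif wanted == current:
    print(f"当前解释器 {current} 与项目要求一致")
else:
    print(f"项目要求 Python {wanted},当前是 {current}")
```

在项目的构建脚本里加上这样一个检查,可以在版本不匹配时尽早给出提示,而不是等到运行时才报错。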
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
12,242
Lubuntu 20.04 点评:轻量、简约、文雅
https://itsfoss.com/lubuntu-20-04-review/
2020-05-23T13:03:00
[ "Lubuntu" ]
https://linux.cn/article-12242-1.html
> > Lubuntu 20.04 LTS 与之前的 LTS 版本有很大的不同。它旨在给你一个更完善的体验,而不仅仅是关注旧电脑。请阅读更多关于 Lubuntu 20.04 的内容。 > > > ### Lubuntu 20.04 点评:第一个基于 LXQt 的长期支持版 我在 Lubuntu 20.04 发行前几天就已经开始使用它了。我通常使用 Arch 阵营中 Manjaro 和 Cinnamon 桌面,所以使用 Lubuntu 对我来说是一个愉快的改变。 以下是我在使用 Lubuntu 20.04.时的一些感受和注记。 #### 再见 LXDE,你好 LXQt! 长期以来,[Lubuntu](https://lubuntu.me/) 都依靠 [LXDE](https://github.com/lxde) 来提供轻量级的 Linux 体验。但现在,它使用的是 LXQt 桌面环境。 [LXDE](https://lxde.org/) 是基于 GTK(GNOME 所使用的库),更具体地说是基于 2020 年的 GTK+ 2。由于对 GTK+ 3 不满意,LXDE 开发人员 Hong Jen Yee 决定将整个桌面移植到 Qt(KDE 所使用的库)。LXDE 的 Qt 移植版本和 [Razor-qt](https://web.archive.org/web/20160220061334/http://razor-qt.org/) 项目合并形成 [LXQt](https://lxqt.org/)。所以现在,LXDE 和 LXQt 作为单独的项目而共存。 既然 LXDE 开发者本身专注于 LXQt,那么 Lubuntu 坚持使用三年多前上一次稳定发布版的桌面环境 LXDE 是没有意义的。 因此,Lubuntu 18.04 是使用 [LXDE](https://lxde.org/) 的最后一个版本。幸运的是,这是一个长期支持版本。Lubuntu 团队将提供支持直到 2021 年。 ![](/data/attachment/album/202005/23/133454ngy7a35ja5a73z5e.jpg) #### 不仅适于老机器 随着在 2020 年“老机器”的定义发生了变化,Lubuntu 18.04 成为了最后一个 32 位版本。现在,即使是一台 10 年前的老机器也至少有 2G 的内存和一个双核 64 位处理器。 因此,Lubuntu 团队将不再设置最低的系统需求,也不再主要关注旧硬件。尽管 LXQt 仍然是一个轻量级的、经典而不失精致的、功能丰富的桌面环境。 在 Lubuntu 20.04 LTS 发布之前,Lubuntu 的第一个 LXQt 发行版是 18.10,开发人员经历了三个标准发行版来完善 LXQt 桌面,这是一个很好的开发策略。 #### 不用常规的 Ubiquity,Lubuntu 20.04 使用的是 Calamares 安装程序 ![](/data/attachment/album/202005/23/133509wg8nmgfx9pnaugcg.jpg) 在新版本中使用了全新的 [Calamares](https://calamares.io/) 安装程序,取代了其它 [Ubuntu 官方版本](https://itsfoss.com/which-ubuntu-install/)使用的 Ubiquity 安装程序。 整个安装过程在大约能在 10 分钟内完成,比之前 Lubuntu 的版本稍微快一些。 由于镜像文件附带了预先安装的基本应用程序,所以你可以很快就可以完成系统的完全配置。 #### 不要直接从 Lubuntu 18.04 升级到 Lubuntu 20.04 通常,你可以[将 Ubuntu 从一个 LTS 版本升级到另一个 LTS 版本](https://itsfoss.com/upgrade-ubuntu-version/)。但是 Lubuntu 团队建议不要从 Lubuntu 18.04 升级到 20.04。他们建议重新安装,这才是正确的。 Lubuntu 18.04 使用 LXDE 桌面,20.04 使用 LXQt。由于桌面环境的巨大变化,从 18.04 升级到 20.04 将导致系统崩溃。 #### 更多的 KDE 和 Qt 应用程序 ![](/data/attachment/album/202005/23/133546ytjfqniuacatucir.gif) 下面是在这个新版本中默认提供的一些应用程序,正如我们所看到的,并非所有应用程序都是轻量级的,而且大多数应用程序都是基于 Qt 的。 甚至使用的软件中心也是 KDE 的 Discover,而不是 Ubuntu 的 GNOME 
软件中心。 * Ark – 归档文件管理器 * Bluedevil – 蓝牙连接管理 * Discover 软件中心 – 包管理系统 * FeatherPad – 文本编辑器 * FireFox – 浏览器 * K3b – CD/DVD 刻录器 * Kcalc – 计算器 * KDE 分区管理器 – 分区管理工具 * LibreOffice – 办公套件(Qt 界面版本) * LXimage-Qt – 图片查看器及截图制作 * Muon – 包管理器 * Noblenote – 笔记工具 * PCManFM-Qt – 文件管理器 * Qlipper – 剪贴板管理工具 * qPDFview – PDF 阅读器 * PulseAudio – 音频控制器 * Qtransmission – BT 下载工具(Qt 界面版本) * Quassel – IRC 客户端 * ScreenGrab – 截屏制作工具 * Skanlite – 扫描工具 * 启动盘创建工具 – USB 启动盘制作工具 * Trojita – 邮件客户端 * VLC – 媒体播放器 * [MPV 视频播放器](https://itsfoss.com/mpv-video-player/) #### 测试 Lubuntu 20.04 LTS LXQt 版 Lubuntu 的启动时间不到一分钟,虽然是从 SSD 启动的。 LXQt 目前需要的内存比基于 Gtk+ 2 的 LXDE 稍微多一点,但是另一种 Gtk+ 3 工具包也需要更多的内存。 在重新启动之后,系统以非常低的内存占用情况运行,大约只有 340 MB(按照现代标准),比 LXDE 多 100 MB。 ![](/data/attachment/album/202005/23/133558asy8t83t74763a7a.jpg) LXQt 不仅适用于硬件较旧的用户,也适用于那些希望在新机器上获得简约经典体验的用户。 桌面布局看起来类似于 KDE 的 Plasma 桌面,你觉得呢? ![](/data/attachment/album/202005/23/133612eszrsd3oocoendpu.jpg) 在左下角有一个应用程序菜单,一个用于显示固定和活动的应用程序的任务栏,右下角有一个系统托盘。 Lubuntu 的 LXQt 版本可以很容易的定制,所有的东西都在菜单的首选项下,大部分的关键项目都在 LXQt “设置”中。 值得一提的是,LXQt 在默认情况下使用流行的 [Openbox 窗口管理器](https://en.wikipedia.org/wiki/Openbox)。 与前三个发行版一样,20.04 LTS 附带了一个默认的黑暗主题 Lubuntu Arc,但是如果不适合你的口味,可以快速更换,也很方便。 就日常使用而言,事实证明,Lubuntu 20.04 向我证明,其实每一个 Ubuntu 的分支版本都完全没有问题。 #### 结论 Lubuntu 团队已经成功地过渡到一个现代的、依然轻量级的、极简的桌面环境。LXDE 看起来被遗弃了,迁移到一个活跃的项目也是一件好事。 我希望 Lubuntu 20.04 能够让你和我一样热爱,如果是这样,请在下面的评论中告诉我。请继续关注! --- via: <https://itsfoss.com/lubuntu-20-04-review/> 作者:[Dimitrios Savvopoulos](https://itsfoss.com/author/dimitrios/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[qfzy1233](https://github.com/qfzy1233) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
*Lubuntu 20.04 LTS is significantly different than its previous LTS version. It is aiming to give you a more polished experience rather than just focusing on older computer. Read more about it as I review Lubuntu 20.04.* ## Lubuntu 20.04 Review: First LTS release with LXQt I have been using Lubuntu 20.04 from a few days before the release. I usually dwell in Arch world with Manjaro and Cinnamon desktop so using Lubuntu was a pleasant change for me. Here’s what I have noticed and felt about Lubuntu 20.04. ### Bye bye LXDE, Hello LXQt! For a long time, [Lubuntu](https://lubuntu.me/) relied on [LXDE](https://github.com/lxde) to provide a lightweight Linux experience. It now uses LXQt desktop environment. [LXDE](http://lxde.org/) is based on GTK (the libraries used by GNOME) and more specifically on GTK+ 2 which is dated in 2020. Dissatisfied with GTK+ 3, LXDE developer Hong Jen Yee decided to port the entire desktop to Qt (the libraries used by KDE). LXDE, the Qt port of it, and the [Razor-qt](https://web.archive.org/web/20160220061334/http://razor-qt.org/) project were combined to form [LXQt](http://lxqt.org/). Although today, LXDE and LXQt coexist as separate projects. Since LXDE developer itself is focusing on LXQt, it makes no sense for Lubuntu to stick with a desktop environment that had its last stable release more than three years ago. Lubuntu 18.04 is the last version of with [LXDE](https://lxde.org/). Fortunately it’s a long term support edition. It will be supported officially by Lubuntu team till 2021. ![Lubuntu 20.04 Review](https://itsfoss.com/content/images/wordpress/2020/04/Lubuntu-20-04-review.jpg) ### Not exclusively for older machines As the definition of “older machine” has changed in 2020 Lubuntu 18.04 is the last 32bit version. Nowadays even a 10 year old machine comes with at least 2 gigabytes of ram and a dual-core 64bit processor. 
As per that, [Lubuntu Team will no longer provide minimum system requirements and will no longer primarily focus on older hardware](https://itsfoss.com/lubuntu-no-more-old-distro/). Although LXQt is still a lightweight, classic yet polished and feature rich desktop environment. The First Lubuntu release with LXQt was 18.10, giving the developers three standard releases to perfect the LXQt desktop before the Lubuntu 20.04 LTS release, which is a good development strategy. ### Not the regular Ubiquity, Lubuntu 20.04 uses Calamares installer ![Lubuntu 20 04 Installer](https://itsfoss.com/content/images/wordpress/2020/04/lubuntu-20-04-installer.jpg) A fresh installation begins with the new [Calamares](https://calamares.io/) installer, in place of the Ubiquity installer that other [official Ubuntu flavors](https://itsfoss.com/which-ubuntu-install/) use. The whole process is done in approximately 10 minutes, slightly faster than the previous Lubuntu releases. As the .iso comes with the essential applications pre-installed you can get your system fully configured pretty fast too. ### No upgrade from Lubuntu 18.04 to Lubuntu 20.04 Normally, you can [upgrade Ubuntu from one LTS to another LTS release](https://itsfoss.com/upgrade-ubuntu-version/). But Lubuntu team advises not to upgrade from Lubuntu 18.04 to 20.04. They recommend a fresh install and rightly so. Lubuntu 18.04 used LXDE desktop while 20.04 uses LXQt. Due to the extensive changes in the desktop environments, upgrading to 20.04 from 18.04 will result in a broken system. **More KDE and Qt applications** ![Lubuntu 20.04](https://itsfoss.com/content/images/wordpress/2020/04/Lubuntu-20.04.gif) Here are some of the applications that are available by default in this new release and as I can see, not all of them are lightweight and most of them are Qt-based. Even the software center used is KDE’s Discover instead of Ubuntu’s GNOME software center. 
- Ark – archive manager - Bluedevil – bluetooth connector - Discover Software Center – package management system - FeatherPad – text editor - FireFox – web browser - K3b – CD/DVD burner - Kcalc – calculator - KDE partition manager – partition manager - LibreOffice – Office suite (Qt interface version) - LXimage-Qt – image viewer and screenshot tool - Muon – package manager - Noblenote – note taker - PCManFM-Qt – File manager - Qlipper – clipboard manager - qPDFview – PDF viewer - PulseAudio – audio controller - Qtransmission – bittorrent client (Qt interface version) - Quassel – IRC client - ScreenGrab – Screenshot creator - Skanlite – scanning - Startup Disk Creator – USB boot disk maker - Trojita – email client - VLC – media player [MPV video player](https://itsfoss.com/mpv-video-player/) ### Testing Lubuntu 20.04 LTS Boot times on the LXQt version of Lubuntu are under a minute, booting from an SSD though. LXQt currently requires slightly more memory than the Gtk+ v2-based LXDE, but the alternative Gtk+ v3 toolkit would also have required more memory. After a reboot the system runs approximately at a very low of 340 MB for the modern standards, 100 MB more than LXDE. ![Htop](https://itsfoss.com/content/images/wordpress/2020/04/htop.jpg?fit=800%2C629&ssl=1) LXQt is not only for users with an older hardware but also for those who are seeking a simple and classic experience at their new machine. The desktop layout looks similar to KDE’s Plasma desktop, don’t you think? ![Lubuntu 20.04 Desktop](https://itsfoss.com/content/images/wordpress/2020/04/Lubuntu-20.04-desktop.jpg?fit=800%2C450&ssl=1) There’s an application menu in the lower-left corner, a taskbar for pinned and active applications, and a system tray in the lower-right corner. Lubuntu in its LXQt version can be easily customized and everything is in the menu under preferences, with most key items under LXQt Settings. 
It is worth-mentioning that LXQt uses the popular [Openbox window manager](https://en.wikipedia.org/wiki/Openbox) by default. Like the last three releases, 20.04 LTS comes with a default dark theme Lubuntu Arc, but it is quick and easy to change it if it doesn’t suit your taste. In daily use, Lubuntu 20.04 has proven to me completely trouble-free as every Ubuntu flavour in fact. ### Conclusion Lubuntu team has successfully made the transition to a modern, still lightweight and minimal desktop environment. LXDE looks like abandoned and it is a good thing to move away to an active project. I hope that Lubuntu 20.04 makes you as much enthusiastic as I am, and if so don’t hesitate to let me know at the comments below. Stay tuned!
12,244
使用子模块和子树来管理 Git 项目
https://opensource.com/article/20/5/git-submodules-subtrees
2020-05-23T20:14:44
[ "Git" ]
https://linux.cn/article-12244-1.html
> > 使用子模块和子树来帮助你管理多个存储库中共有的子项目。 > > > ![](/data/attachment/album/202005/23/201323myyhob22eg2y2jqt.jpg) 如果你参与了开源项目的开发,那么你很可能已经用了 Git 来管理你的源码。你可能遇到过有很多依赖和/或子项目的项目。你是如何管理它们的? 对于一个开源组织,要实现社区**和**产品的单一来源文档和依赖管理比较棘手。文档和项目往往会碎片化和变得冗余,这致使它们很难维护。 ### 必要性 假设你想把单个项目作为一个存储库内的子项目,传统的方法是把该项目复制到父存储库中,但是,如果你想要在多个父项目中使用同一个子项目呢?如果把子项目复制到所有父项目中,当有更新时,你都要在每个父项目中做修改,这是不太可行的。这会导致父项目中的冗余和数据不一致,使更新和维护子项目变得很困难。 ### Git 子模块和子树 如果你可以用一条命令把一个项目放进另一个项目中,会怎样呢?如果你随时可以把一个项目作为子项目添加到任意数目的项目中,并可以同步更新修改呢?Git 提供了这类问题的解决方案:Git <ruby> 子模块 <rt> submodule </rt></ruby>和 Git <ruby> 子树 <rt> subtree </rt></ruby>。创建这些工具的目的是以更加模块化的水平来支持共用代码的开发工作流,旨在 Git 存储库<ruby> 源码管理 <rt> source-code management </rt></ruby>(SCM)与它下面的子树之间架起一座桥梁。 ![Cherry tree growing on a mulberry tree](/data/attachment/album/202005/23/201448jcxlcci1f1z4c2l2.jpg "Cherry tree growing on a mulberry tree") *生长在桑树上的樱桃树* 下面是本文要详细介绍的概念的一个真实应用场景。如果你已经很熟悉树形结构,这个模型看起来是下面这样: ![Tree with subtrees](/data/attachment/album/202005/23/201451xllv5o14lc4344tp.png "Tree with subtrees") ### Git 子模块是什么? 
Git 在它默认的包中提供了子模块,子模块可以把 Git 存储库嵌入到其他存储库中。确切地说,Git 子模块指向子树中的某次提交。下面是我 [Docs-test](https://github.com/manaswinidas/Docs-test/) GitHub 存储库中的 Git 子模块的样子:

![Git submodules screenshot](/data/attachment/album/202005/23/201452dliztziziialcmbq.png "Git submodules screenshot")

`folder@commitId`(即“文件夹@提交 Id”)格式表明这个存储库是一个子模块,你可以直接点击文件夹进入该子树。名为 `.gitmodules` 的配置文件包含所有子模块存储库的详细信息。我的存储库的 `.gitmodules` 文件如下:

![Screenshot of .gitmodules file](/data/attachment/album/202005/23/201454khhen8n8cpe698hp.png "Screenshot of .gitmodules file")

你可以用下面的命令在你的存储库中使用 Git 子模块:

#### 克隆一个存储库并加载子模块

克隆一个含有子模块的存储库:

```
$ git clone --recursive <URL to Git repo>
```

如果你之前已经克隆了存储库,现在想加载它的子模块:

```
$ git submodule update --init
```

如果有嵌套的子模块:

```
$ git submodule update --init --recursive
```

#### 下载子模块

串行地连续下载多个子模块是很枯燥的工作,所以 `clone` 和 `submodule update` 支持 `--jobs`(或 `-j`)参数。例如,想一次下载 8 个子模块,使用:

```
$ git submodule update --init --recursive -j 8
$ git clone --recursive --jobs 8 <URL to Git repo>
```

#### 拉取子模块

在运行或构建父项目之前,你需要确保依赖的子项目都是最新的。

拉取子模块的所有修改:

```
$ git submodule update --remote
```

#### 使用子模块创建存储库

向一个父存储库添加一个子模块:

```
$ git submodule add <URL to Git repo>
```

初始化一个已存在的 Git 子模块:

```
$ git submodule init
```

你也可以通过为 `submodule update` 命令添加 `--remote` 参数在子模块中创建分支和追踪提交:

```
$ git submodule update --remote
```

#### 更新子模块的提交

上面提到过,一个子模块就是一个指向子树中某次提交的链接。如果你想更新子模块的提交,不要担心。你不需要显式地指定最新的提交,只需要使用通用的 `submodule update` 命令:

```
$ git submodule update
```

然后就像平时那样添加、提交,并把父存储库推送到 GitHub 就可以了。

#### 从一个父存储库中删除一个子模块

仅仅手动删除一个子项目文件夹并不会从父项目中移除这个子项目。想要删除名为 `childmodule` 的子模块,使用:

```
$ git rm -f childmodule
```

虽然 Git 子模块看起来很容易上手,但是对于初学者来说,有一定的使用门槛。

### Git 子树是什么?
Git <ruby> 子树 <rt> subtree </rt></ruby>,是在 Git 1.7.11 引入的,让你可以把任何存储库的副本作为子目录嵌入另一个存储库中。它是 Git 项目可以注入和管理项目依赖的几种方法之一。它在常规的提交中保存了外部依赖信息。Git 子树提供了整洁的集成点,因此很容易复原它们。 如果你参考 [GitHub 提供的子树教程](https://help.github.com/en/github/using-git/about-git-subtree-merges)来使用子树,那么无论你什么时候添加子树,在本地都不会看到 `.gittrees` 配置文件。这让我们很难分辨哪个是子树,因为它们看起来很像普通的文件夹,但是它们却是子树的副本。默认的 Git 包中不提供带 `.gittrees` 配置文件的 Git 子树版本,因此如果你想要带 `.gittrees` 配置文件的 git-subtree 命令,必须从 Git 源码存储库的 [/contrib/subtree 文件夹](https://github.com/git/git/tree/master/contrib/subtree) 下载 git-subtree。 你可以像克隆其他常规的存储库那样克隆任何含有子树的存储库,但由于在父存储库中有整个子树的副本,因此克隆过程可能会持续很长时间。 你可以用下面的命令在你的存储库中使用 Git 子树。 #### 向父存储库中添加一个子树 想要向父存储库中添加一个子树,首先你需要执行 `remote add`,之后执行 `subtree add` 命令: ``` $ git remote add remote-name <URL to Git repo> $ git subtree add --prefix=folder/ remote-name <URL to Git repo> subtree-branchname ``` 上面的命令会把整个子项目的提交历史合并到父存储库。 #### 向子树推送修改以及从子树拉取修改 ``` $ git subtree push-all ``` 或者 ``` $ git subtree pull-all ``` ### 你应该使用哪个? 任何工具都有优缺点。下面是一些可能会帮助你决定哪种最适合你的特性: * Git 子模块的存储库占用空间更小,因为它们只是指向子项目的某次提交的链接,而 Git 子树保存了整个子项目及其提交历史。 * Git 子模块需要在服务器中可访问,但子树是去中心化的。 * Git 子模块大量用于基于组件的开发,而 Git 子树多用于基于系统的开发。 Git 子树并不是 Git 子模块的直接可替代项。有明确的说明来指导我们该使用哪种。如果有一个归属于你的外部存储库,使用场景是向它回推代码,那么就使用 Git 子模块,因为推送代码更容易。如果你有第三方代码,且不会向它推送代码,那么使用 Git 子树,因为拉取代码更容易。 自己尝试使用 Git 子树和子模块,然后在评论中留下你的使用感想。 --- via: <https://opensource.com/article/20/5/git-submodules-subtrees> 作者:[Manaswini Das](https://opensource.com/users/manaswinidas) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lxbwolf](https://github.com/lxbwolf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
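顺带一提(这是原文之外的补充示例),上文提到的 `.gitmodules` 其实就是 INI 风格的文本文件,去掉行首缩进后可以直接用 Python 标准库 `configparser` 解析,便于脚本化地列出一个存储库的所有子模块:

```python
import configparser

# 一个典型 .gitmodules 文件的内容(示例数据,名字为虚构)
sample = '''\
[submodule "childmodule"]
\tpath = childmodule
\turl = https://github.com/example/childmodule.git
'''

parser = configparser.ConfigParser()
# 去掉每行行首的制表符/空格,避免缩进行被 configparser 当作续行处理
parser.read_string("\n".join(line.strip() for line in sample.splitlines()))

for section in parser.sections():
    name = section.split('"')[1]   # 'submodule "childmodule"' -> childmodule
    print(name, "->", parser[section]["url"])
```

当然,更稳妥的做法是用 `git config -f .gitmodules --list` 让 Git 自己解析,这里只是演示该文件的格式有多简单。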
200
OK
If you are into open source development, you have probably worked with Git to manage source code. You might have come across projects with numerous dependencies and/or sub-projects. How do you manage them? For an open source organization, it can be tricky to achieve single-source documentation and dependency management for the community *and* the product. The documentation and project often end up fragmented and redundant, which makes them difficult to maintain. ## The need Suppose you want to use a single project as a child project inside a repository. The traditional method is just to copy the project to the parent repository. But, what if you want to use the same child project in many parent repositories? It wouldn't be feasible to copy the child project into every parent and have to make changes in all of them whenever you update it. This would create redundancy and inconsistency in the parent repositories and make it difficult to update and maintain the child project. ## Git submodules and subtrees What if you could put one project within another using a single command? What if you could just add the project as a child to any number of projects and push changes on the go, whenever you want to? Git provides solutions for this: Git submodules and Git subtrees. These tools were created to support code-sharing development workflows on a more modular level, aspiring to bridge the gap between the Git repository's source-code management (SCM) and the sub-repos within it. ![Cherry tree growing on a mulberry tree Cherry tree growing on a mulberry tree](https://opensource.com/sites/default/files/uploads/640px-bialbero_di_casorzo.jpg) Cherry tree growing on a mulberry tree This is a real-life scenario of the concepts this article will cover in detail. 
If you're already familiar with trees, here is what this model will look like: ![Tree with subtrees Tree with subtrees](https://opensource.com/sites/default/files/subtree_0.png) CC BY-SA opensource.com ## What are Git submodules? Git provides submodules in its default package that enable Git repositories to be nested within other repositories. To be precise, the Git submodule points to a specific commit on the child repository. Here is what Git submodules look like in my [Docs-test](https://github.com/manaswinidas/Docs-test/) GitHub repo: ![Git submodules screenshot Git submodules screenshot](https://opensource.com/sites/default/files/uploads/git-submodules_github.png) The format **folder@commitId** indicates that the repository is a submodule, and you can directly click on the folder to go to the child repository. The config file called **.gitmodules** contains all the submodule repository details. My repo's **.gitmodules** file looks like this: ![Screenshot of .gitmodules file Screenshot of .gitmodules file](https://opensource.com/sites/default/files/uploads/gitmodules.png) You can use the following commands to use Git submodules in your repositories. ### Clone a repository and load submodules To clone a repository containing submodules: `$ git clone --recursive <URL to Git repo>` If you have already cloned a repository and want to load its submodules: `$ git submodule update --init` If there are nested submodules: `$ git submodule update --init --recursive` ### Download submodules Downloading submodules sequentially can be a tedious task, so **clone** and **submodule update** will support the **--jobs** or **-j** parameter. For example, to download eight submodules at once, use: ``` $ git submodule update --init --recursive -j 8 $ git clone --recursive --jobs 8 <URL to Git repo> ``` ### Pull submodules Before running or building the parent repository, you have to make sure that the child dependencies are up to date. 
To pull all changes in submodules:

`$ git submodule update --remote`

### Create repositories with submodules

To add a child repository to a parent repository:

`$ git submodule add <URL to Git repo>`

To initialize an existing Git submodule:

`$ git submodule init`

You can also create branches and track commits in your submodules by adding **--remote** to your **submodule update** command:

`$ git submodule update --remote`

### Update submodule commits

As explained above, a submodule is a link that points to a specific commit in the child repository. If you want to update the commit of the submodule, don't worry. You don't need to specify the latest commit explicitly. You can just use the general **submodule update** command:

`$ git submodule update`

Just add and commit as you normally would to create and push the parent repository to GitHub.

### Delete a submodule from a parent repository

Merely deleting a child project folder manually won't remove the child project from the parent repository. To delete a submodule named **childmodule**, use:

`$ git rm -f childmodule`

Although Git submodules may appear easy to work with, it can be difficult for beginners to find their way around them.

## What are Git subtrees?

Git subtrees, introduced in Git 1.7.11, allow you to insert a copy of any repository as a subdirectory of another one. It is one of several ways Git projects can inject and manage project dependencies. It stores the external dependencies in regular commits. Git subtrees provide clean integration points, so they're easier to revert.

If you use the [subtrees tutorial provided by GitHub](https://help.github.com/en/github/using-git/about-git-subtree-merges) to use subtrees, you won't see a **.gittrees** config file locally whenever you add a subtree. This makes it difficult to recognize subtrees because subtrees look like general folders, but they are copies of the child repository.
The version of Git subtree with the **.gittrees** config file is not available with the default Git package, so to get the git-subtree with **.gittrees** config file, you must download git-subtree from the [ /contrib/subtree folder](https://github.com/git/git/tree/master/contrib/subtree) in the Git source repository. You can clone any repository containing subtrees, just like any other general repository, but it may take longer because entire copies of the child repository reside in the parent repository. You can use the following commands to use Git subtrees in your repositories. ### Add a subtree to a parent repository To add a new subtree to a parent repository, you first need to **remote add** it and then run the **subtree add** command, like: ``` $ git remote add remote-name <URL to Git repo> $ git subtree add --prefix=folder/ remote-name <URL to Git repo> subtree-branchname ``` This merges the whole child project's commit history to the parent repository. ### Push and pull changes to and from the subtree `$ git subtree push-all` or `$ git subtree pull-all` ## Which should you use? Every tool has pros and cons. Here are some features that may help you decide which is best for your use case. - Git submodules have a smaller repository size since they are just links that point to a specific commit in the child project, whereas Git subtrees house the entire child project along with its history. - Git submodules need to be accessible in a server, but subtrees are decentralized. - Git submodules are mostly used in component-based development, whereas Git subtrees are used in system-based development. A Git subtree isn't a direct alternative to a Git submodule. There are certain caveats that guide where each can be used. If there is an external repository you own and are likely to push code back to, use Git submodule since it is easier to push. If you have third-party code that you are unlikely to push to, use Git subtree since it is easier to pull. 
Give Git subtrees and submodules a try and let me know how it goes in the comments.
12,247
k9s:你没看错,这是一个加速 k8s 集群管理的工具
https://opensource.com/article/20/5/kubernetes-administration
2020-05-25T10:48:44
[ "Kubernetes", "k8s", "k9s" ]
https://linux.cn/article-12247-1.html
> > 看看这个很酷的 Kubernetes 管理的终端 UI。 > > > ![](/data/attachment/album/202005/25/104742pqjmiroc44honcs5.jpg) 通常情况下,我写的关于 Kubernetes 管理的文章中用的都是做集群管理的 `kubectl` 命令。然而最近,有人给我介绍了 [k9s](https://github.com/derailed/k9s) 项目,可以让我快速查看并解决 Kubernetes 中的日常问题。这极大地改善了我的工作流程,我会在这篇教程中告诉你如何上手它。 它可以安装在 Mac、Windows 和 Linux 中,每种操作系统的说明可以在[这里](https://github.com/derailed/k9s)找到。请先完成安装,以便能够跟上本教程。 我会使用 Linux 和 Minikube,这是一种在个人电脑上运行 Kubernetes 的轻量级方式。按照[此教程](https://opensource.com/article/18/10/getting-started-minikube)或使用[该文档](https://kubernetes.io/docs/tasks/tools/install-minikube/)来安装它。 ### 设置 k9s 配置文件 安装好 `k9s` 应用后,从帮助命令开始总是很好的起点。 ``` $ k9s help ``` 正如你在帮助信息所看到的,我们可以用 `k9s` 来配置很多功能。我们唯一需要进行的步骤就是编写配置文件。而 `info` 命令会告诉我们该应用程序要在哪里找它的配置文件。 ``` $ k9s info ____ __.________ | |/ _/ __ \______ | < \____ / ___/ | | \ / /\___ \ |____|__ \ /____//____ > \/ \/ Configuration: /Users/jess/.k9s/config.yml Logs: /var/folders/5l/c1y1gcw97szdywgf9rk1100m0000gn/T/k9s-jess.log Screen Dumps: /var/folders/5l/c1y1gcw97szdywgf9rk1100m0000gn/T/k9s-screens-jess ``` 如果要添加配置文件,该配置目录不存在的话就创建它,然后添加一个配置文件。 ``` $ mkdir -p ~/.k9s/ $ touch ~/.k9s/config.yml ``` 在这篇介绍中,我们将使用 `k9s` 版本库中推荐的默认 `config.yml`。维护者请注意,这种格式可能会有变化,可以[在这里查看](https://github.com/derailed/k9s#k9s-configuration)最新版本。 ``` k9s: refreshRate: 2 headless: false readOnly: false noIcons: false logger: tail: 200 buffer: 500 sinceSeconds: 300 fullScreenLogs: false textWrap: false showTime: false currentContext: minikube currentCluster: minikube clusters: minikube: namespace: active: "" favorites: - all - kube-system - default view: active: dp thresholds: cpu: critical: 90 warn: 70 memory: critical: 90 warn: 70 ``` 我们设置了 `k9s` 寻找本地的 minikube 配置,所以我要确认 minikube 已经上线可以使用了。 ``` $ minikube status host: Running kubelet: Running apiserver: Running kubeconfig: Configured ``` ### 运行 k9s 来探索一个 Kubernetes 集群 有了配置文件,并指向我们的本地集群,我们现在可以运行 `k9s` 命令了。 ``` $ k9s ``` 启动后,会弹出 `k9s` 的基于文本的用户界面。在没有指定命名空间标志的情况下,它会向你显示默认命名空间中的 Pod。 ![K9s 
screenshot](/data/attachment/album/202005/25/104848zllsmdm3ql647l4r.png "K9s screenshot") 如果你运行在一个有很多 Pod 的环境中,默认视图可能会让人不知所措。或者,我们可以将注意力集中在给定的命名空间上。退出该应用程序,运行 `k9s -n <namespace>`,其中 `<namespace>` 是已存在的命名空间。在下图中,我运行了 `k9s -n minecraft`,它显示了我损坏的 Pod: ![K9s screenshot](/data/attachment/album/202005/25/104849bl11m9jlhhm4lw4h.png "K9s screenshot") 所以,一旦你有了 `k9s` 后,有很多事情你可以更快地完成。 通过快捷键来导航 `k9s`,我们可以随时使用方向键和回车键来选择列出的项目。还有不少其他的通用快捷键可以导航到不同的视图。 * `0`:显示在所有命名空间中的所有 Pod ![K9s screenshot](/data/attachment/album/202005/25/104855hnzmdbpuzq75z5l2.png "K9s screenshot") * `d`:描述所选的 Pod ![K9s screenshot](/data/attachment/album/202005/25/104859k6cc2z2gb9wddcgd.png "K9s screenshot") * `l`:显示所选的 Pod 的日志 ![Using k9s to show Kubernetes pod logs](/data/attachment/album/202005/25/104909zpcjjepmjm4oe499.png "Using k9s to show Kubernetes pod logs") 你可能会注意到 `k9s` 设置为使用 [Vim 命令键](https://opensource.com/article/19/3/getting-started-vim),包括使用 `J` 和 `K` 键上下移动等。Emacs 用户们,败退吧 :) ### 快速查看不同的 Kubernetes 资源 需要去找一个不在 Pod 里的东西吗?是的,我也需要。当我们输入冒号(`:`)键时,可以使用很多快捷方式。从那里,你可以使用下面的命令来导航。 * `:svc`:跳转到服务视图 ![K9s screenshot](/data/attachment/album/202005/25/104911lzjkeeisqkkgoj42.png "K9s screenshot") * `:deploy`:跳转到部署视图 ![K9s screenshot](/data/attachment/album/202005/25/104914bqd6vddnjt1zzqqz.png "K9s screenshot") * `:rb`:跳转到角色绑定视图,用于 [基于角色的访问控制(RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)管理 ![K9s screenshot](/data/attachment/album/202005/25/104920po5nov990nzvz599.png "K9s screenshot") * `:namespace`:跳转到命名空间视图 ![K9s screenshot](/data/attachment/album/202005/25/104922yvooon0ro1bn0or0.png "K9s screenshot") * `:cj`:跳转到 cronjob 视图,查看集群中计划了哪些作业。 ![K9s screenshot](/data/attachment/album/202005/25/104924cnrggikg3x7nrbii.png "K9s screenshot") 这个应用最常用的工具是键盘;要在任何页面往上或下翻页,请使用方向键。如果你需要退出,记得使用 Vim 绑定键,键入 `:q`,然后按回车键离开。 ### 用 k9s 排除 Kubernetes 的故障示例 当出现故障的时候,`k9s` 怎么帮忙?举个例子,我让几个 Pod 由于配置错误而死亡。下面你可以看到我那个可怜的 “hello” 部署死了。当我们将其高亮显示出来,可以按 `d` 运行 `describe` 命令,看看是什么原因导致了故障。 ![K9s 
screenshot](/data/attachment/album/202005/25/104927sixjpcxs5osjx11t.png "K9s screenshot") ![K9s screenshot](/data/attachment/album/202005/25/104936lpk81mzhmu8qw1kc.png "K9s screenshot") 草草掠过那些事件并不能告诉我们故障原因。接下来,我按了 `esc` 键,然后通过高亮显示 Pod 并输入`shift-l` 来检查日志。 ![K9s screenshot](/data/attachment/album/202005/25/104938uvie7qyvs9qsbvyv.png "K9s screenshot") 不幸的是,日志也没有提供任何有用的信息(可能是因为部署从未正确配置过),而且 Pod 也没有出现。 然后我使用 `esc` 退了出来,我看看删除 Pod 是否能解决这个问题。要做到这一点,我高亮显示该 Pod,然后使用 `ctrl-d`。幸好 `k9s` 在删除前会提示用户。 ![K9s screenshot](/data/attachment/album/202005/25/104941vi0jsc80ckd1los1.png "K9s screenshot") 虽然我确实删除了这个 Pod,但部署资源仍然存在,所以新的 Pod 会重新出现。无论什么原因(我们还不知道),它还会继续重启并死掉。 在这里,我会重复查看日志,描述资源,甚至使用 `e` 快捷方式来编辑运行中的 Pod 以排除故障行为。在这个特殊情况下,失败的 Pod 是因为没有配置在这个环境下运行。因此,让我们删除部署来停止崩溃接着重启的循环。 我们可以通过键入 `:deploy` 并点击回车进入部署。从那里我们高亮显示并按 `ctrl-d` 来删除。 ![K9s screenshot](/data/attachment/album/202005/25/104943aj9hnjzn0s111nr0.png "K9s screenshot") ![K9s screenshot](/data/attachment/album/202005/25/104946qg4jntnufntanb21.png "K9s screenshot") 这个有问题的部署被干掉了! 
只用了几个按键就把这个失败的部署给清理掉了。 ### k9s 是极其可定制的 这个应用有很多自定义选项、乃至于 UI 的配色方案。这里有几个可编辑的选项,你可能会感兴趣。 * 调整你放置 `config.yml` 文件的位置(这样你就可以把它存储在[版本控制](https://opensource.com/article/19/3/move-your-dotfiles-version-control)中)。 * 在 `alias.yml` 文件中添加[自定义别名](https://k9scli.io/topics/aliases/)。 * 在 `hotkey.yml` 文件中创建[自定义热键](https://k9scli.io/topics/hotkeys/)。 * 探索现有的[插件](https://github.com/derailed/k9s/tree/master/plugins)或编写自己的插件。 整个应用是在 YAML 文件中配置的,所以定制化对于任何 Kubernetes 管理员来说都会觉得很熟悉。 ### 用 k9s 简化你的生活 我倾向于以一种非常手动的方式来管理我团队的系统,更多的是为了锻炼脑力,而不是别的。当我第一次听说 `k9s` 的时候,我想,“这只是懒惰的 Kubernetes 而已。”于是我否定了它,然后回到了到处进行人工干预的状态。实际上,当我在处理积压工作时就开始每天使用它,而让我震惊的是它比单独使用 `kubectl` 快得多。现在,我已经皈依了。 了解你的工具并掌握做事情的“硬道理”很重要。还有一点很重要的是要记住,就管理而言,重要的是要更聪明地工作,而不是更努力。使用 `k9s`,就是我践行这个目标的方法。我想,我们可以把它叫做懒惰的 Kubernetes 管理,也没关系。 --- via: <https://opensource.com/article/20/5/kubernetes-administration> 作者:[Jessica Cherry](https://opensource.com/users/cherrybomb) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Usually, my articles about Kubernetes administration are full of kubectl commands for administration for your clusters. Recently, however, someone pointed me to the [k9s](https://github.com/derailed/k9s) project for a fast way to review and resolve day-to-day issues in Kubernetes. It's been a huge improvement to my workflow and I'll show you how to get started in this tutorial. Installation can be done on a Mac, in Windows, and Linux. Instructions for each operating system can be found [here](https://github.com/derailed/k9s). Be sure to complete installation to be able to follow along. I will be using Linux and Minikube, which is a lightweight way to run Kubernetes on a personal computer. Install it following [this tutorial](https://opensource.com/article/18/10/getting-started-minikube) or by using the [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/). ## Setting the k9s configuration file Once you've installed the k9s app, it's always good to start with the help command. ``` $ k9s help K9s is a CLI to view and manage your Kubernetes clusters. 
Usage: k9s [flags] k9s [command] Available Commands: help Help about any command info Print configuration info version Print version/build info Flags: -A, --all-namespaces Launch K9s in all namespaces --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use -c, --command string Specify the default command to view when the application launches --context string The name of the kubeconfig context to use --demo Enable demo mode to show keyboard commands --headless Turn K9s header off -h, --help help for k9s --insecure-skip-tls-verify If true, the server's caCertFile will not be checked for validity --kubeconfig string Path to the kubeconfig file to use for CLI requests -l, --logLevel string Specify a log level (info, warn, debug, error, fatal, panic, trace) (default "info") -n, --namespace string If present, the namespace scope for this CLI request --readonly Disable all commands that modify the cluster -r, --refresh int Specify the default refresh rate as an integer (sec) (default 2) --request-timeout string The length of time to wait before giving up on a single server request --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use Use "k9s [command] --help" for more information about a command. ``` As you can see, there is a lot of functionality we can configure with k9s. The only step we need to take place to get off the ground is to write a configuration file. The **info **command will point us to where the application is looking for it. 
``` $ k9s info ____ __.________ | |/ _/ __ \______ | < \____ / ___/ | | \ / /\___ \ |____|__ \ /____//____ > \/ \/ Configuration: /Users/jess/.k9s/config.yml Logs: /var/folders/5l/c1y1gcw97szdywgf9rk1100m0000gn/T/k9s-jess.log Screen Dumps: /var/folders/5l/c1y1gcw97szdywgf9rk1100m0000gn/T/k9s-screens-jess ``` To add a file, make the directory if it doesn't already exist and then add one. ``` $ mkdir -p ~/.k9s/ $ touch ~/.k9s/config.yml ``` For this introduction, we will use the default config.yml recommendations from the k9s repository. The maintainers note that this format is subject to change, so we can [check here](https://github.com/derailed/k9s#k9s-configuration) for the latest version. ``` k9s: refreshRate: 2 headless: false readOnly: false noIcons: false logger: tail: 200 buffer: 500 sinceSeconds: 300 fullScreenLogs: false textWrap: false showTime: false currentContext: minikube currentCluster: minikube clusters: minikube: namespace: active: "" favorites: - all - kube-system - default view: active: dp thresholds: cpu: critical: 90 warn: 70 memory: critical: 90 warn: 70 ``` We set k9s to look for a local minikube configuration, so I'm going to confirm minikube is online and ready to go. ``` $ minikube status host: Running kubelet: Running apiserver: Running kubeconfig: Configured ``` ## Running k9s to explore a Kubernetes cluster ## With a configuration file set and pointing at our local cluster, we can now run the **k9s** command. `$ k9s` Once you start it up, the k9s text-based user interface (UI) will pop up. With no flag for a namespace, it will show you the pods in the default namespace. ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_1.png) If you run in an environment with a lot of pods, the default view can be overwhelming. Alternatively, we can focus on a given namespace. Exit the application and run **k9s -n <namespace>** where *<namespace>* is an existing namespace. 
In the picture below, I ran **k9s -n minecraft**, and it shows my broken pod. ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_2.png) So once you have k9s up and running, there are a bunch of things you can do quickly. Navigating k9s happens through shortcut keys. We can always use arrow keys and the enter key to choose items listed. There are quite a few other universal keystrokes to navigate to different views: **0**—Show all pods in all namespaces ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_3.png) **d**—Describe the selected pod ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_5_0.png) **l**—Show logs for the selected pod ![Using k9s to show Kubernetes pod logs Using k9s to show Kubernetes pod logs](https://opensource.com/sites/default/files/uploads/k9s-show-logs-opensourcedotcom.png) You may notice that k9s is set to use [Vim command keys](https://opensource.com/article/19/3/getting-started-vim), including moving up and down using **J** and **K** keys. Good luck exiting, emacs users :) ## Viewing different Kubernetes resources quickly Need to get to something that's not a pod? Yeah, I do too. There are a number of shortcuts available when we enter a colon (":") key. From there, you can use the following commands to navigate around. **:svc**—Jump to a services view. ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_5.png) **:deploy**—Jump to a deployment view. ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_6.png) **:rb**—Jump to a Rolebindings view for [role-based access control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) management. ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_7.png) **:namespace**—Jump back to the namespaces view.
![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_8.png) **:cj**—Jump to the cronjobs view to see what jobs are scheduled in the cluster. ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_9.png) The most used tool for this application will be the keyboard; to go up or down on any page, use the arrow keys. If you need to quit, remember to use Vim keybindings. Type **:q** and hit enter to leave. ## Example of troubleshooting Kubernetes with k9s How does k9s help when something goes wrong? To walk through an example, I let several pods die due to misconfiguration. Below you can see my terrible hello deployment that's crashing. Once we highlight it, we press **d** to run a *describe* command to see what is causing the failure. ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_10.png) ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_11.png) Skimming the events does not tell us a reason for the failure. Next, I hit the **esc** key and go check the logs by highlighting the pod and entering **<shift-l>**. ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_12.png) Unfortunately, the logs don't offer anything helpful either (probably because the deployment was never correctly configured), and the pod will not come up. I then **esc** to step back out, and I will see if deleting the pod will take care of this issue. To do so, I highlight the pod and use **<ctrl-d>**. Thankfully, k9s prompts users before deletion. ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_13.png) While I did delete the pod, the deployment resource still exists, so a new pod will come back up. It will also continue to restart and crash for whatever reason (we don't know yet). 
Here is the point where I would repeat reviewing logs, describing resources, and use the **e** shortcut to even edit a running pod to troubleshoot the behavior. In this particular case, the failing pod is not configured to run in this environment. So let's delete the deployment to stop the crash-then-reboot loop we are in. We can get to deployments by typing **:deploy** and pressing enter. From there we highlight and press **<ctrl-d>** to delete. ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_14.png) ![K9s screenshot K9s screenshot](https://opensource.com/sites/default/files/uploads/k9s_15.png) And poof, the deployment is gone! It only took a few keystrokes to clean up this failed deployment. ## k9s is incredibly customizable This application has a ton of customization options, down to the color scheme of the UI. Here are a few editable options you may be interested in: - Adjust where you put the config.yml file (so you can store it in [version control](https://opensource.com/article/19/3/move-your-dotfiles-version-control)) - Add [custom aliases](https://k9scli.io/topics/aliases/) to an **alias.yml** file - Create [custom hotkeys](https://k9scli.io/topics/hotkeys/) in a **hotkey.yml** file - Explore available [plugins](https://github.com/derailed/k9s/tree/master/plugins) or write your own The entire application is configured in YAML files, so customization will feel familiar to any Kubernetes administrator. ## Simplify your life with k9s I'm prone to administrating my team's systems in a very manual way, more for brain training than anything else. When I first heard about k9s, I thought, "This is just lazy Kubernetes," so I dismissed it and went back to doing my manual intervention everywhere. I actually started using it daily while working through my backlog, and I was blown away at how much faster it was to use than kubectl alone. Now I'm a convert.
It's important to know your tools and master the "hard way" of doing something. It's just as important to remember that, as far as administration goes, you should work smarter, not harder. Using k9s is the way I live up to that objective. I guess we can call it lazy Kubernetes administration, and that's okay.
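For reference, the configuration bootstrap from the start of the article can be scripted in one go. This sketch uses the default config path quoted above, with a `HOME` override purely to keep the demo self-contained:

```shell
#!/bin/sh
# Recreate the k9s config bootstrap: make the config directory and drop
# in a minimal config.yml (keys taken from the example earlier in the article).
set -e
HOME=$(mktemp -d)            # demo only: avoid touching a real setup
export HOME
mkdir -p "$HOME/.k9s"
cat > "$HOME/.k9s/config.yml" <<'EOF'
k9s:
  refreshRate: 2
  headless: false
EOF
ls "$HOME/.k9s"              # prints: config.yml
```

With the file in place, `k9s info` should report that same path under "Configuration".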
12,250
如何安装使用 Qt 5 的音频播放器 Audacious 4.0
https://itsfoss.com/audacious-4-release/
2020-05-25T22:35:16
[ "Audacious", "音乐" ]
https://linux.cn/article-12250-1.html
[Audacious](https://audacious-media-player.org) 是一个开源音频播放器,可用于包括 Linux 在内的多个平台。继上次发布主版本将近 2 年后,Audacious 4.0 带来了一些重大变化。 最新版本的 Audacious 4.0 默认带 [Qt 5](https://doc.qt.io/qt-5/qt5-intro.html) 用户界面。你仍然可以和以前一样使用旧的 GTK2 UI,但是,新功能仅会添加到 Qt UI 中。 让我们看下发生了什么变化,以及如何在 Linux 系统上安装最新的 Audacious。 ### Audacious 4.0 关键变化和功能 ![](/data/attachment/album/202005/25/223338q0oaxa5ukiwdx6mr.jpg) 当然,主要的变化是默认使用 Qt 5 UI。除此之外,他们的[官方公告](https://audacious-media-player.org/news/45-audacious-4-0-released)中提到了许多改进和功能补充,它们是: * 单击播放列表列头可对播放列表进行排序 * 拖动播放列表列头会更改列顺序 * 应用中的音量和时间步长设置 * 隐藏播放列表标签的新选项 * 按路径对播放列表排序,现在将文件夹排序在文件后面 * 实现了额外的 MPRIS 调用,以与 KDE 5.16+ 兼容 * 新的基于 OpenMPT 的跟踪器模块插件 * 新的 VU Meter 可视化插件 * 添加了使用 SOCKS 网络代理的选项 * 换歌插件现在可在 Windows 上使用 * 新的“下一张专辑”和“上一张专辑”命令 * Qt UI 中的标签编辑器现在可以一次编辑多个文件 * 为 Qt UI 实现均衡器预设窗口 * 歌词插件获得了在本地保存和加载歌词的能力 * 模糊范围和频谱分析器可视化已移植到 Qt * MIDI 插件 “SoundFont 选择”已移植到 Qt * JACK 输出插件获得了一些新选项 * 添加了无限循环 PSF 文件的选项 如果你以前不了解它,你可以轻松安装它,并使用均衡器和 [LADSPA](https://www.ladspa.org/) 效果器来调整音乐体验。 ![](/data/attachment/album/202005/25/223349ti89t8eyz3rnq3vt.jpg)
--- via: <https://itsfoss.com/audacious-4-release/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
[Audacious](https://audacious-media-player.org) is an open-source audio player available for multiple platforms that include Linux. Almost after 2 years of its last major release, Audacious 4.0 has arrived with some big changes. The latest release Audacious 4.0 comes with [Qt 5](https://doc.qt.io/qt-5/qt5-intro.html) UI by default. You can still go for the old GTK2 UI from the source – however, the new features will be added to the Qt UI only. Let’s take a look at what has changed and how to install the latest Audacious on your Linux system. ## Audacious 4.0 Key Changes & Features ![Audacious 4 Release](https://itsfoss.com/content/images/wordpress/2020/03/audacious-4-release.jpg) Of course, the major change would be the use of Qt 5 UI as the default. In addition to that, there are a lot of improvements and feature additions mentioned in their [official announcement post](https://audacious-media-player.org/news/45-audacious-4-0-released), here they are: - Clicking on playlist column headers sorts the playlist - Dragging playlist column headers changes the column order - Application-wide settings for volume and time step sizes - New option to hide playlist tabs - Sorting playlist by path now sorts folders after files - Implemented additional MPRIS calls for compatibility with KDE 5.16+ - New OpenMPT-based tracker module plugin - New VU Meter visualization plugin - Added option to use a SOCKS network proxy - The Song Change plugin now works on Windows - New “Next Album” and “Previous Album” commands - The tag editor in Qt UI can now edit multiple files at once - Implemented equalizer presets window for Qt UI - Lyrics plugin gained the ability to save and load lyrics locally - Blur Scope and Spectrum Analyzer visualizations ported to Qt - MIDI plugin SoundFont selection ported to Qt - JACK output plugin gained some new options - Added option to endlessly loop PSF files If you didn’t know about it previously, you can easily get it installed and use the equalizer coupled 
with [LADSPA](https://www.ladspa.org/) effects to tweak your music experience. ![Audacious Winamp](https://itsfoss.com/content/images/wordpress/2020/03/audacious-winamp.jpg) ## How to Install Audacious 4.0 on Ubuntu It is worth noting that the [unofficial PPA](https://itsfoss.com/ppa-guide/) is made available by [UbuntuHandbook](http://ubuntuhandbook.org/index.php/2020/03/audacious-4-0-released-qt5-ui/). You can simply follow the instructions below to install it on Ubuntu 16.04, 18.04, 19.10, and 20.04. 1. First, you have to add the PPA to your system by typing in the following command in the terminal: `sudo add-apt-repository ppa:ubuntuhandbook1/apps` 2. Next, you need to update/refresh the package information from the repositories/sources you have and proceed to install the app. Here’s how to do that: ``` sudo apt update sudo apt install audacious audacious-plugins ``` That’s it. You don’t have to do anything else. In either case, if you want to [remove the PPA and the software](https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/), just type in the following commands in order: ``` sudo add-apt-repository --remove ppa:ubuntuhandbook1/apps sudo apt remove --autoremove audacious audacious-plugins ``` You can also check out their GitHub page for more information on the source and potentially install it on other Linux distros as well, if that’s what you’re looking for. ## Wrapping Up The new features and the Qt 5 UI switch should be a good thing to improve the user experience and the functionality of the audio player. If you’re a fan of the classic Winamp interface, it works just fine as well – though it is missing a few features, as mentioned in their announcement post. You can try it out and let me know your thoughts in the comments below!
12,251
为什么 strace 在 Docker 中不起作用?
https://jvns.ca/blog/2020/04/29/why-strace-doesnt-work-in-docker/
2020-05-26T10:19:27
[ "strace", "容器" ]
https://linux.cn/article-12251-1.html
![](/data/attachment/album/202005/26/101909a5wsvedzw5svawa2.jpg) 在编辑“容器如何工作”爱好者杂志的能力页面时,我想试着解释一下为什么 `strace` 在 Docker 容器中无法工作。 这里的问题是 —— 如果我在笔记本上的 Docker 容器中运行 `strace`,就会出现这种情况: ``` $ docker run -it ubuntu:18.04 /bin/bash $ # ... install strace ... root@e27f594da870:/# strace ls strace: ptrace(PTRACE_TRACEME, ...): Operation not permitted ``` `strace` 通过 `ptrace` 系统调用起作用,所以如果不允许使用 `ptrace`,它肯定是不能工作的! 这个问题很容易解决 —— 在我的机器上,是这样解决的: ``` docker run --cap-add=SYS_PTRACE -it ubuntu:18.04 /bin/bash ``` 但我对如何修复它不感兴趣,我想知道为什么会出现这种情况。为什么 `strace` 不能工作,为什么 `--cap-add=SYS_PTRACE` 可以解决这个问题? ### 假设 1:容器进程缺少 `CAP_SYS_PTRACE` 能力 我一直以为原因是 Docker 容器进程默认不具备 `CAP_SYS_PTRACE` 能力。这与它能被 `--cap-add=SYS_PTRACE` 修复是一致的,对吧? 但这实际上是不合理的,原因有两个。 原因 1:在实验中,作为一个普通用户,我可以对我的用户运行的任何进程进行 `strace`。但如果我检查我的当前进程是否有 `CAP_SYS_PTRACE` 能力,则没有: ``` $ getpcaps $$ Capabilities for `11589': = ``` 原因 2:`capabilities` 的手册页对 `CAP_SYS_PTRACE` 的介绍是: ``` CAP_SYS_PTRACE * Trace arbitrary processes using ptrace(2); ``` 所以,`CAP_SYS_PTRACE` 的作用是让你像 root 一样,可以对任何用户拥有的**任意**进程进行 `ptrace`。你不需要用它来对一个只是由你的用户拥有的普通进程进行 `ptrace`。 我用第三种方法测试了一下 —— 我用 `docker run --cap-add=SYS_PTRACE -it ubuntu:18.04 /bin/bash` 运行了一个 Docker 容器,去掉了 `CAP_SYS_PTRACE` 能力,但我仍然可以跟踪进程,虽然我已经没有这个能力了。什么?为什么?! ### 假设 2:关于用户命名空间的事情? 我的下一个(没有那么充分的依据的)假设是“嗯,也许这个进程是在不同的用户命名空间里,而 `strace` 由于某种原因而行不通?”这个问题其实并不相关,但这是我观察时想到的。 容器进程是否在不同的用户命名空间中?嗯,在容器中: ``` root@e27f594da870:/# ls /proc/$$/ns/user -l ... /proc/1/ns/user -> 'user:[4026531837]' ``` 在宿主机: ``` bork@kiwi:~$ ls /proc/$$/ns/user -l ... /proc/12177/ns/user -> 'user:[4026531837]' ``` 因为用户命名空间 ID(`4026531837`)是相同的,所以容器中的 root 用户和主机上的 root 用户是完全相同的用户。所以,绝对没有理由不能够对它创建的进程进行 `strace`!
这个假设并没有什么意义,但我(之前)没有意识到 Docker 容器中的 root 用户和主机上的 root 用户是同一个用户,所以我觉得这很有意思。 ### 假设 3:ptrace 系统调用被 seccomp-bpf 规则阻止了 我也知道 Docker 使用 seccomp-bpf 来阻止容器进程运行许多系统调用。而 `ptrace` 在[被 Docker 默认的 seccomp 配置文件阻止的系统调用列表](https://docs.docker.com/engine/security/seccomp/)中!(实际上,允许的系统调用列表是一个白名单,所以只是 `ptrace` 不在默认的白名单中。但得出的结果是一样的。) 这很容易解释为什么 `strace` 在 Docker 容器中不能工作 —— 如果 `ptrace` 系统调用完全被屏蔽了,那么你当然不能调用它,`strace` 就会失败。 让我们来验证一下这个假设 —— 如果我们禁用了所有的 seccomp 规则,`strace` 能在 Docker 容器中工作吗? ``` $ docker run --security-opt seccomp=unconfined -it ubuntu:18.04 /bin/bash $ strace ls execve("/bin/ls", ["ls"], 0x7ffc69a65580 /* 8 vars */) = 0 ... it works fine ... ``` 是的,很好用!很好。谜底解开了,除了…… ### 为什么 `--cap-add=SYS_PTRACE` 能解决问题? 我们还没有解释的是:为什么 `--cap-add=SYS_PTRACE` 可以解决这个问题? `docker run` 的手册页是这样解释 `--cap-add` 参数的。 ``` --cap-add=[] Add Linux capabilities ``` 这跟 seccomp 规则没有任何关系! 怎么回事? ### 我们来看看 Docker 源码 当文档没有帮助的时候,唯一要做的就是去看源码。 Go 语言的好处是,因为依赖关系通常是在一个 Go 仓库里,你可以通过 `grep` 来找出做某件事的代码在哪里。所以我克隆了 `github.com/moby/moby`,然后对一些东西进行 `grep`,比如 `rg CAP_SYS_PTRACE`。 我认为是这样的。在 `containerd` 的 seccomp 实现中,在 [contrib/seccomp/seccomp\_default.go](https://github.com/containerd/containerd/blob/4be98fa28b62e8a012491d655a4d6818ef87b080/contrib/seccomp/seccomp_default.go#L527-L537) 中,有一堆代码来确保如果一个进程有一个能力,那么它也会(通过 seccomp 规则)获得访问权限,以使用与该能力相关的系统调用。 ``` case "CAP_SYS_PTRACE": s.Syscalls = append(s.Syscalls, specs.LinuxSyscall{ Names: []string{ "kcmp", "process_vm_readv", "process_vm_writev", "ptrace", }, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }) ``` 在 moby 中的 [profiles/seccomp/seccomp.go](https://github.com/moby/moby/blob/cc0dfb6e7b22ad120c60a9ce770ea15415767cf9/profiles/seccomp/seccomp.go#L126-L132) 和 [默认的 seccomp 配置文件](https://github.com/moby/moby/blob/master/profiles/seccomp/default.json#L723-L739)中,也有一些其他的代码似乎做了一些非常类似的事情,所以有可能就是这个代码在做这个事情。 所以我想我们有答案了! ### Docker 中的 `--cap-add` 做的事情比它说的要多 结果似乎是,`--cap-add` 并不像手册页里说的那样,它更像是 `--cap-add-and-also-whitelist-some-extra-system-calls-if-required`。这很有意义!
如果你具有一个像 `--CAP_SYS_PTRACE` 这样的能力,可以让你使用 `process_vm_readv` 系统调用,但是该系统调用被 seccomp 配置文件阻止了,那对你没有什么帮助! 所以当你给容器 `CAP_SYS_PTRACE` 能力时,允许使用 `process_vm_readv` 和 `ptrace` 系统调用似乎是一个合理的选择。 ### 就这样! 这是个有趣的小事情,我认为这是一个很好的例子,说明了容器是由许多移动的部件组成的,它们以不完全显而易见的方式一起工作。 --- via: <https://jvns.ca/blog/2020/04/29/why-strace-doesnt-work-in-docker/> 作者:[Julia Evans](https://jvns.ca/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
While editing the capabilities page of the [how containers work](https://wizardzines.com/zines/containers) zine, I found myself trying to explain why `strace` doesn’t work in a Docker container. The problem here is – if I run `strace` in a Docker container on my laptop, this happens: ``` $ docker run -it ubuntu:18.04 /bin/bash $ # ... install strace ... root@e27f594da870:/# strace ls strace: ptrace(PTRACE_TRACEME, ...): Operation not permitted ``` strace works using the `ptrace` system call, so if `ptrace` isn’t allowed, it’s definitely not gonna work! This is pretty easy to fix – on my machine, this fixes it: ``` docker run --cap-add=SYS_PTRACE -it ubuntu:18.04 /bin/bash ``` But I wasn’t interested in fixing it, I wanted to know why it happens. So why does strace not work, and why does `--cap-add=SYS_PTRACE` fix it? ### hypothesis 1: container processes are missing the `CAP_SYS_PTRACE` capability I always thought the reason was that Docker container processes by default didn’t have the `CAP_SYS_PTRACE` capability. This is consistent with it being fixed by `--cap-add=SYS_PTRACE` , right? But this actually doesn’t make sense for 2 reasons. **Reason 1**: Experimentally, as a regular user, I can strace on any process run by my user. But if I check if my current process has the `CAP_SYS_PTRACE` capability, I don’t: ``` $ getpcaps $$ Capabilities for `11589': = ``` **Reason 2**: `man capabilities` says this about `CAP_SYS_PTRACE` : ``` CAP_SYS_PTRACE * Trace arbitrary processes using ptrace(2); ``` So the point of `CAP_SYS_PTRACE` is to let you ptrace **arbitrary** processes owned by any user, the way that root usually can. You shouldn’t need it to just ptrace a regular process owned by your user. And I tested this a third way – I ran a Docker container with `docker run --cap-add=SYS_PTRACE -it ubuntu:18.04 /bin/bash` , dropped the `CAP_SYS_PTRACE` capability, and I could still strace processes even though I didn’t have that capability anymore. What? Why? 
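As a side note, reason 1 can be cross-checked without `getpcaps` by reading the capability bitmasks straight out of `/proc` (a Linux-only sketch; the field names come from proc(5)):

```shell
# CapPrm/CapEff are the permitted and effective capability sets; for an
# unprivileged shell they are typically all zeroes, even though that
# shell can still ptrace its own processes.
grep -E 'CapPrm|CapEff' /proc/$$/status
```

If libcap's `capsh` is installed, `capsh --decode=<mask>` turns the hex mask into capability names.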
### hypothesis 2: something about user namespaces??? My next (much less well-founded) hypothesis was something along the lines of “um, maybe the process is in a different user namespace and strace doesn’t work because of… reasons?” This isn’t really coherent but here’s what happened when I looked into it. Is the container process in a different user namespace? Well, in the container: ``` root@e27f594da870:/# ls /proc/$$/ns/user -l ... /proc/1/ns/user -> 'user:[4026531837]' ``` On the host: ``` bork@kiwi:~$ ls /proc/$$/ns/user -l ... /proc/12177/ns/user -> 'user:[4026531837]' ``` Because the user namespace ID (`4026531837` ) is the same, the root user in the container is the exact same user as the root user on the host. So there’s definitely no reason it shouldn’t be able to strace processes that it created! This hypothesis doesn’t make much sense but I hadn’t realized that the root user in a Docker container is the same as the root user on the host, so I thought that was interesting. ### hypothesis 3: the ptrace system call is being blocked by a seccomp-bpf rule I also knew that Docker uses seccomp-bpf to stop container processes from running a lot of system calls. And ptrace is in the [list of system calls blocked by Docker’s default seccomp profile](https://docs.docker.com/engine/security/seccomp/)! (actually the list of allowed system calls is a whitelist, so it’s just that ptrace is not in the default whitelist. But it comes out to the same thing.) That easily explains why strace wouldn’t work in a Docker container – if the `ptrace` system call is totally blocked, then of course you can’t call it at all and strace would fail. Let’s verify this hypothesis – if we disable all seccomp rules, can we strace in a Docker container? ``` $ docker run --security-opt seccomp=unconfined -it ubuntu:18.04 /bin/bash $ strace ls execve("/bin/ls", ["ls"], 0x7ffc69a65580 /* 8 vars */) = 0 ... it works fine ... ``` Yes! It works! Great. 
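A related sanity check: whether a process is under a seccomp filter at all is visible in its `Seccomp` field in `/proc` (a sketch for reasonably recent Linux kernels; 0 = disabled, 1 = strict, 2 = filter, the mode Docker's default profile uses):

```shell
# Outside a container this is usually "Seccomp: 0"; inside a default
# Docker container it shows mode 2 (filter).
grep Seccomp /proc/$$/status
```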
Mystery solved, except… ### why does `--cap-add=SYS_PTRACE` fix the problem? What we still haven’t explained is: why does `--cap-add=SYS_PTRACE` would fix the problem? The man page for `docker run` explains the `--cap-add` argument this way: ``` --cap-add=[] Add Linux capabilities ``` That doesn’t have anything to do with seccomp rules! What’s going on? ### let’s look at the Docker source code. When the documentation doesn’t help, the only thing to do is go look at the source. The nice thing about Go is, because dependencies are often vendored in a Go repository, you can just grep the repository to figure out where the code that does a thing is. So I cloned `github.com/moby/moby` and grepped for some things, like `rg CAP_SYS_PTRACE` . Here’s what I think is going on. In containerd’s seccomp implementation, in [contrib/seccomp/seccomp_default.go](https://github.com/containerd/containerd/blob/4be98fa28b62e8a012491d655a4d6818ef87b080/contrib/seccomp/seccomp_default.go#L527-L537), there’s a bunch of code that makes sure that if a process has a capability, then it’s also given access (through a seccomp rule) to use the system calls that go with that capability. ``` case "CAP_SYS_PTRACE": s.Syscalls = append(s.Syscalls, specs.LinuxSyscall{ Names: []string{ "kcmp", "process_vm_readv", "process_vm_writev", "ptrace", }, Action: specs.ActAllow, Args: []specs.LinuxSeccompArg{}, }) ``` There’s some other code that seems to do something very similar in [profiles/seccomp/seccomp.go](https://github.com/moby/moby/blob/cc0dfb6e7b22ad120c60a9ce770ea15415767cf9/profiles/seccomp/seccomp.go#L126-L132) in moby and the [default seccomp profile](https://github.com/moby/moby/blob/master/profiles/seccomp/default.json#L723-L739), so it’s possible that that’s what’s doing it instead. So I think we have our answer! 
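The effect of that Go code boils down to a lookup from capability name to the extra syscalls that get whitelisted. Here is a tiny shell sketch of the same idea (the syscall names are taken from the snippet above; everything else is invented for illustration):

```shell
#!/bin/sh
# Each capability granted with --cap-add implies extra entries on the
# seccomp allowlist (mirroring the containerd logic quoted above).
extra_for_cap() {
  case "$1" in
    CAP_SYS_PTRACE) echo "kcmp process_vm_readv process_vm_writev ptrace" ;;
    *) echo "" ;;
  esac
}
allowlist="read write"            # stand-in for the default profile
for cap in CAP_SYS_PTRACE; do
  allowlist="$allowlist $(extra_for_cap "$cap")"
done
echo "$allowlist"
# prints: read write kcmp process_vm_readv process_vm_writev ptrace
```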
`--cap-add` in Docker does a little more than what it says The upshot seems to be that `--cap-add` doesn’t do exactly what it says it does in the man page, it’s more like `--cap-add-and-also-whitelist-some-extra-system-calls-if-required` . Which makes sense! If you have a capability like `CAP_SYS_PTRACE` which is supposed to let you use the `process_vm_readv` system call but that system call is blocked by a seccomp profile, that’s not going to help you much! So allowing the `process_vm_readv` and `ptrace` system calls when you give the container `CAP_SYS_PTRACE` seems like a reasonable choice. ### strace actually does work in newer versions of Docker As of [this commit](https://github.com/moby/moby/commit/1124543ca8071074a537a15db251af46a5189907) (docker 19.03), Docker does actually allow the `ptrace` system calls for kernel versions newer than 4.8. But the Docker version on my laptop is 18.09.7, so it predates that commit. ### that’s all! This was a fun small thing to investigate, and I think it’s a nice example of how containers are made of lots of moving pieces that work together in not-completely-obvious ways. If you liked this, you might like my new zine called [How Containers Work](https://wizardzines.com/zines/containers) that explains the Linux kernel features that make containers work in 24 pages. You can read the pages on [capabilities](https://wizardzines.com/comics/capabilities/) and [seccomp-bpf](https://wizardzines.com/comics/seccomp-bpf/) from the zine.
12,252
3 Python templating languages you (probably) have never used
https://opensource.com/article/20/4/python-templating-languages
2020-05-26T11:02:00
[ "Python", "templates" ]
https://linux.cn/article-12252-1.html
> > 包括这 3 个模板语言在内,Python 积累了许多模板语言。 > > > ![](/data/attachment/album/202005/26/110220lxie9osmd592m5ee.jpg) 当需要使用模板语言来编写 [Python](https://opensource.com/resources/python) Web 应用时,有很多健壮的解决方案。 有 [Jinja2](https://opensource.com/article/20/2/jinja2-cheat-sheet)、[Genshi 和 Mako](https://opensource.com/resources/python/template-libraries)。甚至还有 [Chameleon](https://chameleon.readthedocs.io/en/latest/) 之类的解决方案,虽然有些陈旧,但仍被 [Pyramid](https://opensource.com/article/18/5/pyramid-framework) 框架推荐。 Python 已经存在了很长时间。此时,在系统的深处,它积累了一些几乎被遗忘的模板语言,它们都是值得一试的。 这些语言就像桉树上可爱的考拉一样,在自己的生态圈里快乐地生活着,有时也会有危险的工作,这些都是很少有人听说过的模板语言,使用过的应该更少。 ### 3、string.Template 你是否曾经想过:“如何获得一种没有任何特性的模板语言,而且同时也不需要 `pip install` 安装任何东西?” Python 标准库已经为你提供了答案。虽然没有循环和条件,但 `string.Template` 类是一种最小的模板语言。 使用它很简单。 ``` >>> import string >>> greeting = string.Template("Hello, $name, good $time!") >>> greeting.substitute(name="OpenSource.com", time="afternoon") 'Hello, OpenSource.com, good afternoon!' ``` ### 2、twisted.web.template 你会给一个包罗万象的库送什么礼物? 当然,不是模板语言,因为它已经有了。twisted.web.template 中嵌套了两种模板语言。一种是基于 XML 的,并有一个[很棒的文档](https://twistedmatrix.com/documents/13.1.0/web/howto/twisted-templates.html)。 但是它还有另一种,一种基于使用 Python 作为领域特定语言(DSL)来生成 HTML 文档。 它基于两个原语:包含标签对象的 `twisted.web.template.tags` 和渲染它们的 `twisted.web.template.flattenString`。由于它是 Twisted 的一部分,因此它内置支持高效异步渲染。 此例将渲染一个小页面: ``` async def render(reactor): my_title = "A Fun page" things = ["one", "two", "red", "blue"] template = tags.html( tags.head( tags.title(my_title), ), tags.body( tags.h1(my_title), tags.ul( [tags.li(thing) for thing in things], ), tags.p( task.deferLater(reactor, 3, lambda: "Hello "), task.deferLater(reactor, 3, lambda: "world!"), ) ) ) res = await flattenString(None, template) res = res.decode('utf-8') with open("hello.html", 'w') as fpout: fpout.write(res) ``` 该模板是使用 `tags.<TAGNAME>` 来指示层次结构的常规 Python 代码。原生支持渲染字符串,因此任何字符串都正常。 要渲染它,你需要做的是添加调用: ``` from twisted.internet import task, defer from twisted.web.template import tags, flattenString def 
main(reactor): return defer.ensureDeferred(render(reactor)) ``` 最后写上: ``` task.react(main) ``` 只需 3 秒(而不是 6 秒),它将渲染一个不错的 HTML 页面。在实际中,这些 `deferLater` 可以是对 HTTP API 的调用:它们将并行发送和处理,而无需付出任何努力。我建议你阅读关于[更好地使用 Twisted](https://opensource.com/article/20/3/treq-python)。不过,这已经可以工作了。 ### 1、Quixote 你会说:“但是 Python 并不是针对 HTML 领域而优化的领域特定语言。” 如果有一种语言可以[转化](https://en.wikipedia.org/wiki/Source-to-source_compiler)到 Python,但是更适合定义模板,而不是像 Python 那样按原样解决呢?如果可以的话,请使用“Python 模板语言”(PTL)。 编写自己的语言,有时被说成是一个攻击假想敌人的唐吉坷德项目。当 Quixote(可在 [PyPI](https://pypi.org/project/Quixote/) 中找到)的创造者决定这样做时,并没有受此影响。 以下将渲染与上面 Twisted 相同的模板。*警告:以下不是有效的 Python 代码*: ``` import time def render [html] (): my_title = "A Fun page" things = ["one", "two", "red", "blue"] "<html><head><title>" my_title "</head></title><body><h1>" my_title "</h1>" "<ul>" for thing in things: "<li>" thing "</li>" "<p>" time.sleep(3) (lambda: "Hello ")() time.sleep(3) (lambda: "world!")() "</p>" "</body></html>" def write(): result = render() with open("hello.html", 'w') as fpout: fpout.write(str(result)) ``` 但是,如果将它放到 `template.ptl` 文件中,那么可以将其导入到 Quixote 中,并写出可以渲染模板的版本: ``` >>> from quixote import enable_ptl >>> enable_ptl() >>> import template >>> template.write() ``` Quixote 安装了一个导入钩子,它会将 PTL 文件转换为 Python。请注意,此渲染需要 6 秒,而不是 3 秒。你不再获得自由的异步性。 ### Python 中的模板太多 Python 库的历史悠久且曲折,其中一些库可以或多或少都能达到类似结果(例如,Python [包管理](https://opensource.com/article/19/4/managing-python-packages))。 我希望你喜欢探索这三种*可以*用 Python 创建模板的方式。另外,我建议从[这三个库之一](https://opensource.com/resources/python/template-libraries)开始了解。 你是否有另一种深奥的模板方法?请在下面的评论中分享! --- via: <https://opensource.com/article/20/4/python-templating-languages> 作者:[Moshe Zadka](https://opensource.com/users/moshez) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
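One standard-library detail worth knowing alongside the `string.Template` example above: `substitute()` raises `KeyError` when a placeholder has no value, while `safe_substitute()` simply leaves the placeholder in place. A quick sketch:

```python
import string

greeting = string.Template("Hello, $name, good $time!")

# substitute() raises KeyError if a placeholder is missing...
try:
    greeting.substitute(name="OpenSource.com")
except KeyError as missing:
    print("missing placeholder:", missing)

# ...while safe_substitute() leaves unknown placeholders untouched.
print(greeting.safe_substitute(name="OpenSource.com"))
# → Hello, OpenSource.com, good $time!
```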
200
OK
When reaching for a templating language for writing a [Python](https://opensource.com/resources/python) web application, there are an abundance of robust solutions. There are [Jinja2](https://opensource.com/article/20/2/jinja2-cheat-sheet), [Genshi, and Mako](https://opensource.com/resources/python/template-libraries). There are even solutions like [Chameleon](https://chameleon.readthedocs.io/en/latest/), which are a bit older, but still recommended by the [Pyramid](https://opensource.com/article/18/5/pyramid-framework) framework. Python has been around for a long time. In that time, deep in the corners of its system, it has accumulated some almost forgotten templating languages that are well worth poking at. Like cute koalas on top of a eucalyptus tree, happy in their ecological niche, and sometimes as dangerous to work with, these are the templating languages few have heard of—and even fewer should use. ## 3. string.Template Have you ever wondered, "How can I get a templating language with no features, but also without needing to **pip install** anything?" The Python standard library has you covered. While it does no looping or conditionals, the **string.Template** class is a minimal templating language. Using it is simplicity itself. ``` >>> import string >>> greeting = string.Template("Hello, $name, good $time!") >>> greeting.substitute(name="OpenSource.com", time="afternoon") 'Hello, OpenSource.com, good afternoon!' ``` ## 2. twisted.web.template What gift do you give the library that has everything? Not a templating language, certainly, because it already has one. Nestled in **twisted.web.template** are two templating languages. One is XML-based and has a [great tutorial](https://twistedmatrix.com/documents/13.1.0/web/howto/twisted-templates.html). But there is another one, one that is based on using Python as a domain-specific language to produce HTML documents. 
It is based on two primitives: **twisted.web.template.tags**, which contains tag objects, and **twisted.web.template.flattenString**, which will render them. Because it is part of Twisted, it has built-in support for rendering async results efficiently. This example will render a silly little page: ``` async def render(reactor): my_title = "A Fun page" things = ["one", "two", "red", "blue"] template = tags.html( tags.head( tags.title(my_title), ), tags.body( tags.h1(my_title), tags.ul( [tags.li(thing) for thing in things], ), tags.p( task.deferLater(reactor, 3, lambda: "Hello "), task.deferLater(reactor, 3, lambda: "world!"), ) ) ) res = await flattenString(None, template) res = res.decode('utf-8') with open("hello.html", 'w') as fpout: fpout.write(res) ``` The template is regular Python code that uses the **tags.<TAGNAME>** to indicate the hierarchy. It natively supports strings as renderables, so any string is fine. To render it, the only things you need to do are to add a preamble: ``` from twisted.internet import task, defer from twisted.web.template import tags, flattenString def main(reactor): return defer.ensureDeferred(render(reactor)) ``` and an epilogue to run the whole thing: `task.react(main)` In just *three* seconds (and not *six*), it will render a nice HTML page. In real-life, those **deferLater**s can be, for example, calls to an HTTP API: they will be sent and processed in parallel, without having to put in any effort. I recommend you instead read about a [far better use for Twisted](https://opensource.com/article/20/3/treq-python). But still, this works. ## 1. Quixote You will say, "But Python is not *optimized* for being an HTML-spouting domain-specific language." What if, instead of settling for Python-as-is, there was a language that [transpiles](https://en.wikipedia.org/wiki/Source-to-source_compiler) to Python, but is better at defining templates? A "Python template language" (PTL), if you will. 
Writing your own language is sometimes said to be a dreamer's project for someone who tilts at windmills. Irony was not lost on the creators of Quixote (available on [PyPI](https://pypi.org/project/Quixote/)) when they decided to do exactly that. The following will render an equivalent template to the one done with Twisted above. *Warning: the following is not valid Python*: ``` import time def render [html] (): my_title = "A Fun page" things = ["one", "two", "red", "blue"] "<html><head><title>" my_title "</head></title><body><h1>" my_title "</h1>" "<ul>" for thing in things: "<li>" thing "</li>" "<p>" time.sleep(3) (lambda: "Hello ")() time.sleep(3) (lambda: "world!")() "</p>" "</body></html>" def write(): result = render() with open("hello.html", 'w') as fpout: fpout.write(str(result)) ``` However, if you put it in a file called **template.ptl**, you can make it importable to Quixote and write out the rendered version of the template: ``` >>> from quixote import enable_ptl >>> enable_ptl() >>> import template >>> template.write() ``` Quixote installs an import hook that will cause PTL files to transpile into Python. Note that this render takes *six* seconds, not *three*; you no longer gain free asynchronicity. ## So many templates in Python Python has a long and winding history of libraries, some of which can achieve the same outcomes in more or less similar ways (for example, Python [package management](https://opensource.com/article/19/4/managing-python-packages)). On this April Fools' Day, I hope you enjoyed exploring three ways you *can* create templates in Python. Instead, I recommend starting with [one of these libraries](https://opensource.com/resources/python/template-libraries) for ways you *should *template. Do you have another esoteric way to template? Share it in the comments below! ## Comments are closed.
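If you want the flavor of the `tags` approach without installing Twisted, a toy imitation takes a dozen lines. This is my own sketch, not Twisted's API, and it skips HTML escaping and async rendering entirely:

```python
class Tag:
    """A toy HTML tag object: a name plus child tags or strings."""

    def __init__(self, name, *children):
        self.name, self.children = name, children

    def render(self):
        inner = "".join(
            c.render() if isinstance(c, Tag) else str(c)
            for c in self.children
        )
        return f"<{self.name}>{inner}</{self.name}>"

page = Tag("html",
           Tag("body",
               Tag("h1", "A Fun page"),
               Tag("ul", Tag("li", "one"), Tag("li", "two"))))
print(page.render())
# → <html><body><h1>A Fun page</h1><ul><li>one</li><li>two</li></ul></body></html>
```

The real `twisted.web.template.tags` adds attributes, escaping, and deferred results on top of this basic idea.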
12,254
Now you can run Linux apps in Windows
https://itsfoss.com/run-linux-apps-windows-wsl/
2020-05-27T11:15:00
[ "Microsoft", "WSL" ]
https://linux.cn/article-12254-1.html
![](/data/attachment/album/202005/27/111439z64u19z6ct6r46kb.jpg) 微软最近的 “[Build 2020](https://news.microsoft.com/build2020/)” 开发者大会公布了一些有趣的公告。我不确定这该令人兴奋还是该令人怀疑 —— 但是微软,你现在比以往任何时候都受到我们的关注。 同时,在所有的这些公告中,能够在 WSL(Windows Subsystem for Linux)上运行 GUI 应用程序的功能备受关注。 更不要忘了 [Xamrin.Forms 更名为 MAUI 的尴尬结局](https://itsfoss.com/microsoft-maui-kde-row/),它与 [Nitrux Linux](https://itsfoss.com/nitrux-linux/) 的 Uri Herrera 的现有开源项目([Maui Project](https://mauikit.org/))名字冲突。 以防你不清楚,WSL 是一种环境,可让你在 Windows 10 中获得 Linux 控制台的体验。它也是在 [Windows 中运行 Linux 命令的最佳方法](https://itsfoss.com/run-linux-commands-in-windows/)之一。 正如 [Liam Dawe](https://www.gamingonlinux.com/2020/05/microsoft-build-directx-and-linux-plus-more) 认为的那样,通过博客文章([DirectX ❤ Linux](https://devblogs.microsoft.com/directx/directx-heart-linux/))发布的公告可能是只是个公关诱饵。但是,仍然值得一提。 ### WSL 上对 Linux GUI 应用程序的支持 ![](/data/attachment/album/202005/27/110600xjhgiodn1pn1qghg.png) 最近,Microsoft 在在线开发者大会上宣布了 WSL(即 WSL 2)的一系列新功能。 [Windows 包管理器](https://devblogs.microsoft.com/commandline/windows-package-manager-preview/)、[Windows 终端 1.0](https://devblogs.microsoft.com/commandline/windows-terminal-1-0/),以及其他一些功能的引入是其亮点。 但是, WSL 2 对 GPU 硬件加速的支持非常重要。 那么,是否意味着你可以使用 WSL 在 Windows 上运行 Linux 应用程序呢?看起来像是。 微软计划通过使用全新的 Linux 内核驱动程序 `dxgkrnl` 来实现它。为了给你一个技术性的简报, 我在这里引用他们的公告中的描述: ![](/data/attachment/album/202005/27/110701v6ctmn07w1i8mm0g.png) > > dxgkrnl 是一个全新的 Linux 内核驱动程序,它将 `/dev/dxg` 设备提供给用户模式的 Linux。 `/dev/dxg` 提供了一组 IOCTL,它们与 Winodws 上的原生 WDDM D3DKMT 内核服务层非常相似。Linux 内核中的 dxgkrnl 通过 VM 总线连接到 Windows 主机上,并使用此 VM 总线连接与物理 GPU 进行通讯。 > > > 我不是这方面的专家,但这意味着 WSL 上的 Linux 应用程序将与原生的 Windows 应用程序一样可以访问 GPU。 针对 GUI 应用程序的支持将在今年秋季的晚些时候提供(而不是 2020 年 5 月的更新) —— 所以我们要看看什么时候提供。 微软专门针对的是那些希望在 Windows 上轻松使用 Linux IDE 的开发人员。谷歌也在瞄准同样的用户群,[将 GUI Linux 应用程序引入到 Chromebook](https://itsfoss.com/linux-apps-chromebook/)。 那么,对于那些坚持使用 Windows 的用户来说,这是个好消息。但是,这是真的吗? ### 微软爱上了 Linux —— 真的吗? 
![](/data/attachment/album/202005/27/110730uujjlybefey7s0ea.jpg) 他们在 Windows 上整合 Linux 环境来拥抱 Linux 及其优势的努力,绝对是一件好事。 但是,它真的能给**桌面 Linux 用户**带来什么好处呢?到目前为止,我还没有看到任何实际的好处。 在这里,你可以有不同的看法。但是,我认为 WSL 的开发对于 Linux 桌面用户来说没有真正的价值。至少,到目前为止没有。 有趣的是,[Linux Unplugged podcast](https://linuxunplugged.com/354) 上有人强调了微软的举动,认为这与他们的 EEE(<ruby> 拥抱、延伸和扑灭 <rt> Embrace, extend, and extinguish </rt></ruby>)的思路是一致的。 可能吧,谁知道呢?当然,他们为实现这一目标而付出的努力值得赞赏 —— 同时又令人感到兴奋和神秘。 ### 这是否意味着 Windows 用户将不必再转到 Linux? 微软之所以在其平台上集成 Linux,是因为他们知道 Liunx 的能力,也知道开发人员(或用户)喜欢使用它的原因。 但是,随着 WSL 2 的更新,如果这种情况持续下去,我倾向于同意 Abhishek 的看法: > > 最终,桌面 Linux 将被限制在 Windows 下,成为桌面应用程序…… > > > 好吧,当然,原生的体验暂时还是比较好的。而且,很难看到现有的 Linux 桌面用户会使用 Windows 来将其替代。但是,这仍然值得担心。 你如何看待这一切?我不认为 WSL 对于被迫使用 Windows 的用户有什么好处 —— 但是,从长远来看,你认为微软在 WSL 方面的进展本质上是敌意还是对 Linux 有帮助? 在评论中让我知道你的想法! --- via: <https://itsfoss.com/run-linux-apps-windows-wsl/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lnrCoder](https://github.com/lnrCoder) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
# Run Linux GUI Apps in Windows With WSL 2 The WSL 2 brings the ability to run GUI Linux apps inside Windows. Here's what you need to know and do to achieve that. When Microsoft released WSL first, it was revolutionary as it allowed to run Linux commands on Windows. A few years later, Microsoft took it to the next level by releasing WSL 2. The new version of WSL allowed running GUI Linux apps on Windows. It's not that Windows has a dearth of GUI apps. But when working in the Linux environment inside WSL, using the GUI apps could come in handy. For example, you have to edit a config file and are uncomfortable with the command line text editors like Nano. You use Gedit GUI text editor to make the changes. It simplifies the life of Windows users who have to use Linux for their work. Wondering how to run Linux GUI apps on Windows? There are the main steps for that: - Enable WSL 2 and install a Linux distribution with it - Install appropriate graphics driver for WSL - Install and use GUI Linux apps Let's see about the steps in detail. ## Requirements As stated above, running Linux GUI applications is not available for all Windows versions. To do this, your system should be: - For x64 systems: **Version 1903**or later, with**Build 18362**or later. - For ARM64 systems: **Version 2004**or later, with**Build 19041**or later. - The installed Linux distribution should use WSL2 Remember that, the requirements above are solely for running Linux GUI apps. WSL is supported for some of the Windows 10 versions also. Please refer to the dedicated article detailing [how to install WSL in Windows](https://itsfoss.com/install-bash-on-windows/) for more about WSL and its uses. ## Step1: Installing Linux Distribution with WSL 2 This is a lot easier on Windows 11 which comes with built-in support for WSL 2. ### On Windows 11 You can use the Microsoft Store but I find it easier to use the command line. You need to open PowerShell with admin privileges. 
For this, search for Powershell in the start menu, right-click on Powershell and select **Run as Administrator.** ![Run Powershell as an administrator](https://itsfoss.com/content/images/2023/01/run-powershell-as-an-administrator.webp) *Run Powershell as an administrator*Enter the following command to install WSL. `wsl --install` **By default, Ubuntu will be installed as the Linux distribution**. If you want to install any other distribution, use the command below: `wsl --list --online` This will list all the available Linux distributions. Once you decide on the distribution, use the below command to install it. `wsl --install <Distribution Name>` Once finished downloading and installing, you need to reboot to apply the changes. ### On Windows 10 This is a bit complicated and takes some time and effort to run WSL version 2 on Windows 10. Ensure that the Windows Subsystem for Linux feature is turned on. Execute the following command in Powershell with admin rights: `dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart` Reboot the device once the command is completed. After this, you need to enable the **Virtual Machine Platform** feature. Open the Powershell with admin privileges and enter the following command: `dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart` Once again, restart the device to complete the WSL install and update to WSL 2. Now, download the **Linux Kernel Update Package** for x64 machines from the [official website](https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi). If you are using ARM64 devices, use this link to [download the latest kernel update package](https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_arm64.msi). 
If you are not sure about the device architecture, enter the command below in Powershell to get the type: `systeminfo | find "System Type"` When the file is downloaded, double-click on it and finish the installation of the Kernel update package. Now, open PowerShell and run this command to set WSL 2 as the default version when installing a new Linux distribution: `wsl --set-default-version 2` **Once** **WSL2 is set as the default version**, you can now install the Linux distribution of your choice. **Go to Windows Store and install Ubuntu.** ![Install ubuntu 22.04 LTS version from Microsoft Store](https://itsfoss.com/content/images/2023/01/ubuntu-22-04-wsl-1.jpg) *Install Ubuntu from Microsoft Store*### 🛠️ Configure the newly installed Ubuntu **Whether you installed WSL and Ubuntu using the Microsoft Store or the command line, you need to configure it.** Here's how it is done: Once you rebooted after installing Ubuntu, search for Ubuntu in Start Menu and open it. ![Open the Installed Ubuntu from Windows Start Menu](https://itsfoss.com/content/images/2023/01/open-ubuntu-from-windows-11-start-menu.webp) *Open Ubuntu from Start Menu*It will ask you to enter a **UNIX Username** and **Password**. Enter these details and press enter key. ![When asks for the new UNIX username and password for the newly installed Ubuntu system, enter both details](https://itsfoss.com/content/images/2023/01/enter-unix-username-and-password.webp) *Enter Unix username and Password*You will now be inside the terminal window of Ubuntu. ![Logged into new Ubuntu 22.04 LTS in Windows 11 WSL](https://itsfoss.com/content/images/2023/01/logged-into-new-ubuntu-22.webp) *Logged into new Ubuntu 22.04 LTS in Windows 11 WSL*Once logged in, you need to update the installed Ubuntu. For this, enter the following commands one by one: ``` sudo apt update sudo apt full-upgrade ``` After completing the update, you are good to go with Ubuntu in WSL. 
![Running Ubuntu in WSL](https://itsfoss.com/content/images/2023/01/running-ubuntu-in-wsl-1.webp) *Running Ubuntu in WSL*## Step 2: Installing GUI Applications Once you are all set with the Linux distribution inside WSL, now is the time to install the Linux GUI application. It is done in two parts as described below. ### Download and Install Graphics drivers To run GUI apps, you need to install appropriate graphics drivers. You can use the following link to download the drivers according to your provider. Once installed, you are all done. ### Install some GUI Apps Now, go to your Ubuntu app and install any GUI app using the APT package manager. You should note that running apps from other sources like flatpak are problematic within WSL. For this article, I installed the Gedit text editor using the following command: `sudo apt install gedit -y` This will install several MB of packages including required libraries. Once completed, you can run the following command to start the GUI Gedit app in Windows: `gedit` ![Run Gedit text editor GUI in WSL Ubuntu](https://itsfoss.com/content/images/2023/01/run-gedit-text-editor-gui-in-wsl-ubuntu.webp) *Run Gedit text editor GUI in WSL Ubuntu*Similarly, you can install all the popular applications available to Linux, including Nautilus file manager, GIMP, etc. You can [refer to the official documentation](https://learn.microsoft.com/en-us/windows/wsl/tutorials/gui-apps) for more about running GUI applications in WSL. ## Wrapping Up With WSL, Microsoft has provided a comfortable way of using Linux within Windows. It keeps on improving with each major version. Running GUI Linux apps on Windows feature is proof of the same. As you can see, it is rather simple to use WSL 2 on Windows 11. Running it on Windows 10 takes some effort. I hope you find this tutorial helpful. Let me know if you face any difficulties.
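If you script around WSL, checking which version each installed distro runs comes up often. Here is a small Python sketch that parses `wsl --list --verbose` output; the column layout is an assumption based on typical output, and note that on a real system `wsl.exe` emits UTF-16LE text, so decode it before parsing:

```python
def wsl_versions(listing):
    """Parse `wsl --list --verbose` output into {distro: version}.
    Assumes the usual three-column layout (NAME, STATE, VERSION),
    with `*` marking the default distro."""
    versions = {}
    for line in listing.splitlines()[1:]:      # skip the header row
        parts = line.lstrip("* ").split()      # drop the default marker
        if len(parts) >= 3:
            name, _state, version = parts[0], parts[1], parts[2]
            versions[name] = int(version)
    return versions

sample = (
    "  NAME      STATE      VERSION\n"
    "* Ubuntu    Running    2\n"
    "  Debian    Stopped    1\n"
)
print(wsl_versions(sample))  # → {'Ubuntu': 2, 'Debian': 1}
```

Any distro reporting version 1 can be upgraded with `wsl --set-version <name> 2`.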
12,255
How to build a personal file server on Linux with SSH
https://opensource.com/article/20/3/personal-file-server-ssh
2020-05-27T12:03:52
[ "SSH", "file server" ]
https://linux.cn/article-12255-1.html
> > 通过 SSH 连接远程 Linux 系统很简单。下面是教程。 > > > ![](/data/attachment/album/202005/27/120338v62cakrqnccpjckk.jpg) 树莓派是一个有用且价格低廉的家庭服务器,可用于很多事情。我的树莓派最常用来做[打印服务器](https://opensource.com/article/18/3/print-server-raspberry-pi),可以在我的家庭网络中共享激光打印机,或作为个人文件服务器保存项目副本和其他数据。 我的文件服务器有很多用途。假设说我现在有一个项目,比如一本新书,我想把我的工作和所有相关的文件都复制一份快照。这种场景下,我只需要把 `BookProject` 文件夹复制到文件服务器的 `BookBackup` 文件夹。 或者我现在正在清理我的本地文件时,发现一些我不需要的文件,但是我不确定是否要删除,我会把它们复制到文件服务器的 `KeepForLater` 文件夹。这是我日常 Linux 系统中清除杂乱的文件,并将不常用的文件卸载到个人文件服务器上的方便方法。 用树莓派或其他 Linux 系统搭建个人文件服务器不需要配置 NFS(<ruby> 网络文件系统 <rt> Network File System </rt></ruby>>)或 CIFS(<ruby> 通用互联网文件系统 <rt> Common Internet File System </rt></ruby>)或改造其他的文件共享系统如 WebDAV。你可以很轻松的使用 SSH 来搭建远程文件服务器。下面是教程。 ### 在远程服务器上配置 SSHD 你的 Linux 系统可能已经安装了 SSH 守护进程(`sshd`),甚至它已经默认运行了。如果没有,你可以使用你 Linux 发行版本上的任何控制面板来轻松配置 SSH。我在树莓派上运行了 [Fedora ARM](https://arm.fedoraproject.org/),通过 Web 浏览器访问树莓派的 9090 端口,我可以远程访问控制面板。(在我的家庭网络中,树莓派的 IP 地址是 `10.0.0.11`,因此我连接的是 `10.0.0.11:9090`。)如果 SSH 守护进程没有默认运行,你可以在控制面板的“服务”里把它设置为开机启动。 ![sshd in the list of system services](/data/attachment/album/202005/27/120355xh3jh8g3qz0lyozw.png "sshd in the list of system services") 你可以在系统服务列表里找到 `sshd`。 ![slider to activate sshd](/data/attachment/album/202005/27/120356s0z14p1y2apc1a1n.png "slider to activate sshd") 如果 `sshd` 没有开启,点击切换按钮打开它。 ### 你有账号吗? 
你需要有个远程系统的账号。它可以与你本地系统的账号相同,也可以不同。 在流行的 Raspbian 发行版本上,默认的账号名是 `pi`。但是其他的 Linux 发行版本可能需要你在安装系统时就设置一个唯一的新用户。如果你不知道你的用户名,你可以用系统的控制面板创建一个。在我的树莓派上,我创建了一个 `jhall` 账号,与我日常用的 Linux 桌面机器的用户名相同。 ![Set up a new account on Fedora Server](/data/attachment/album/202005/27/120357aagr4fxnetu7gaj7.png "Set up a new account on Fedora Server") 如果你用的是 Fedora 服务器,你可以点击“创建新账号”按钮。 ![Set password or SSH key](/data/attachment/album/202005/27/120357tjllwx9m7tnwuexj.png "Set password or SSH key") 不要忘记设置密码或添加公钥。 ### 可选:添加公钥 如果你把公钥添加到远程 Linux 系统上,你就可以不使用密码登录。这一步是可选的;如果你愿意,你仍可以用密码登录。 你可以在下面的文章中学到更多关于 SSH 密钥的信息: * [SSH 密钥管理工具](/article-11947-1.html) * [用 Seahorse 对 SSH 密钥进行图形化管理](/article-9451-1.html) * [如何管理多个 SSH 密钥](https://opensource.com/article/19/4/gpg-subkeys-ssh-manage) * [使用 GPG 密钥作为鉴权依据开启 SSH 访问](https://opensource.com/article/19/4/gpg-subkeys-ssh) ### 创建文件管理器的快捷方式 现在你已经在远程系统上启动 SSH 守护进程了,也设置了用户名和密码,最后一步就是在你本地的文件管理器中创建一个快捷方式,地址映射到远程 Linux 系统。我的桌面是 GNOME,但是在其他的 Linux 桌面上的基本操作步骤都是一样的。 #### 建立初始连接 在 GNOME 的文件管理器中,在左边导航栏找到 “+其它位置” 按钮。点击它会出现一个 “连接到服务器” 提示框。在框中输入远程 Linux 服务器的地址,地址以 SSH 连接协议开头。 ![Creating a shortcut in GNOME file manager](/data/attachment/album/202005/27/120358y10ttm3dz4tbtwm0.png "Creating a shortcut in GNOME file manager") GNOME 文件管理器支持多种连接协议。要通过 SSH 进行连接,服务器地址请以 `sftp://` 或 `ssh://` 开头。 如果你远程 Linux 系统的用户名与本地的相同,那么你只需要输入服务器的地址和文件夹路径就可以了。比如要连接到我的树莓派的 `/home/jhall` 目录,我输入: ``` sftp://10.0.0.11/home/jhall ``` ![GNOME file manager Connect to Server](/data/attachment/album/202005/27/120358l37c7hqf7huym7ha.png "GNOME file manager Connect to Server") 如果你远程 Linux 系统的用户名与本地的不同,你可以在远程系统地址前加 `@` 符号来指定远程系统的用户名。要连接到远程的 Raspbian 系统,你可能要输入: ``` sftp://[email protected]/home/pi ``` ![GNOME file manager Connect to Server](/data/attachment/album/202005/27/120359mlizokui0zelopi0.png "GNOME file manager Connect to Server") 如果你没有把公钥添加到远程服务器,那么你需要输入密码。如果你已经添加,GNOME 文件管理器应该会自动打开远程系统上的文件夹来让你跳转到不同的目录。 ![GNOME file manager connection](/data/attachment/album/202005/27/120400eom17cqc7zql7c1c.png 
"GNOME file manager connection") #### 创建一个快捷方式,之后就可以轻松连接服务器 在 GNOME 文件管理器中,这很简单。右击导航栏中远程系统的名字,选择“添加书签”。这一步操作就创建了连接到远程路径的快捷方式。 ![GNOME file manager - adding bookmark](/data/attachment/album/202005/27/120400mtldt6lzjtj1ldlt.png "GNOME file manager - adding bookmark") 如果你想把标签中的快捷方式改成一个更容易记的名字,你可以右击快捷方式选择“重命名”。 ### 总结 通过 SSH 连接到远程 Linux 系统是很简单的事。你可以用相同的方式连接到家庭文件服务器以外的其他系统。我还创建了一个能让我立即访问我的提供商 Web 服务器上的文件的快捷方式,和另一个能迅速打开我的项目服务器的文件夹的快捷方式。SSH 使它成为一个安全的连接;所有的传输都是加密的。当我通过 SSH 打开远程的文件时,我可以像在本地操作一样使用 GNOME 文件管理器轻松打开远程文件。 --- via: <https://opensource.com/article/20/3/personal-file-server-ssh> 作者:[Jim Hall](https://opensource.com/users/jim-hall) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lxbwolf](https://github.com/lxbwolf) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
The Raspberry Pi makes for a useful and inexpensive home server for lots of things. I most often use the [Raspberry Pi as a print server](https://opensource.com/article/18/3/print-server-raspberry-pi) to share a laser printer with other devices in our home or as a personal file server to store copies of projects and other data. I use this file server in various ways. Let's say I'm working on a project, such as a new book, and I want to make a snapshot copy of my work and all my associated files. In that case, I simply copy my **BookProject** folder to a **BookBackup** folder on the file server. Or if I'm cleaning up my local files, and I discover some files that I don't really need but I'm not yet ready to delete, I'll copy them to a **KeepForLater** folder on the file server. That's a convenient way to remove clutter from my everyday Linux system and offload infrequently used files to my personal file server. Setting up a Raspberry Pi—or any Linux system—as a personal file server doesn't require configuring Network File System (NFS) or Common Internet File System (CIFS) or tinkering with other file-sharing systems such as WebDAV. You can easily set up a remote file server using SSH. And here's how. ## Set up SSHD on the remote system Your Linux system probably has the SSH daemon (sshd) installed. It may even be running by default. If not, you can easily set up SSH through whatever control panel you prefer on your Linux distribution. I run [Fedora ARM](https://arm.fedoraproject.org/) on my Raspberry Pi, and I can access the control panel remotely by pointing my Pi's web browser to port 9090. (On my home network, the Raspberry Pi's IP address is **10.0.0.11**, so I connect to **10.0.0.11:9090**.) If the SSH daemon isn't running by default, you can set it to start automatically in Services in the control panel. 
![sshd in the list of system services sshd in the list of system services](https://opensource.com/sites/default/files/uploads/fedora-server-control-panel-sshd.png) You can find sshd in the list of system services. ![slider to activate sshd slider to activate sshd](https://opensource.com/sites/default/files/uploads/fedora-server-control-panel-sshd-service.png) Click the slider to activate sshd if it isn't already ## Do you have an account? Make sure you have an account on the remote system. It might be the same as the username you use on your local system, or it could be something different. On the popular Raspbian distribution, the default account username is **pi**. But other Linux distributions may require you to set up a unique new user when you install it. If you don't know your username, you can use your distribution's control panel to create one. On my Raspberry Pi, I set up a **jhall** account that matches the username on my everyday Linux desktop machine. ![Set up a new account on Fedora Server Set up a new account on Fedora Server](https://opensource.com/sites/default/files/uploads/fedora-server-control-panel-accounts_create-user.png) If you use Fedora Server, click the Create new user button to set up a new account. ![Set up a new account on Fedora Server Set password or SSH key](https://opensource.com/sites/default/files/uploads/fedora-server-control-panel-accounts.png) If you use Fedora Server, click the Create new user button to set up a new account ## Optional: Share your SSH public key If you exchange your public SSH key with the remote Linux system, you can log in without having to enter a password. This step is optional; you can use a password if you prefer. 
You can learn more about SSH keys in these Opensource.com articles: [Tools for SSH key management](https://opensource.com/article/20/2/ssh-tools)[Graphically manage SSH keys with Seahorse](https://opensource.com/article/19/4/ssh-keys-seahorse)[How to manage multiple SSH keys](https://opensource.com/article/19/4/gpg-subkeys-ssh-manage)[How to enable SSH access using a GPG key for authentication](https://opensource.com/article/19/4/gpg-subkeys-ssh) ## Make a file manager shortcut Since you've started the SSH daemon on the remote system and set up your account username and password, all that's left is to map a shortcut to the other Linux system from your file manager. I use GNOME as my desktop, but the steps are basically the same for any Linux desktop. ### Make the initial connection In the GNOME file manager, look for the **+Other Locations** button in the left-hand navigation. Click that to open a **Connect to Server** prompt. Enter the address of the remote Linux server here, starting with the SSH connection protocol. ![GNOME file manager Other Locations Creating a shortcut in GNOME file manager](https://opensource.com/sites/default/files/uploads/gnome-file-manager-other-locations.png) The GNOME file manager supports a variety of connection protocols. To make a connection over SSH, start your server address with **sftp://** or **ssh://**. If your username is the same on your local Linux system and your remote Linux system, you can just enter the server's address and the folder location. To make my connection to the **/home/jhall** directory on my Raspberry Pi, I use: `sftp://10.0.0.11/home/jhall` ![GNOME file manager Connect to Server GNOME file manager Connect to Server](https://opensource.com/sites/default/files/uploads/gnome-file-manager-other-sftp.png) If your username is different, you can specify your remote system's username with an **@** sign before the remote system's address. 
To connect to a Raspbian system on the other end, you might use: `sftp://[email protected]/home/pi` ![GNOME file manager Connect to Server GNOME file manager Connect to Server](https://opensource.com/sites/default/files/uploads/gnome-file-manager-other-sftp-username.png) If you didn't share your public SSH key, you may need to enter a password. Otherwise, the GNOME file manager should automatically open the folder on the remote system and let you navigate. ![GNOME file manager connection GNOME file manager connection](https://opensource.com/sites/default/files/uploads/gnome-file-manager-remote-jhall.png) ### Create a shortcut so you can easily connect to the server later This is easy in the GNOME file manager. Right-click on the remote system's name in the navigation list, and select **Add Bookmark**. This creates a shortcut to the remote location. ![GNOME file manager - adding bookmark GNOME file manager - adding bookmark](https://opensource.com/sites/default/files/uploads/gnome-file-manager-remote-jhall-add-bookmark.png) If you want to give the bookmark a more memorable name, you can right-click on the shortcut and choose **Rename**. ## That's it! Connecting to a remote Linux system over SSH is just plain easy. And you can use the same method to connect to systems other than home file servers. I also have a shortcut that allows me to instantly access files on my provider's web server and another that lets me open a folder on my project server. SSH makes it a secure connection; all of my traffic is encrypted. Once I've opened the remote system over SSH, I can use the GNOME file manager to manage my remote files as easily as I'd manage my local folders. ## 10 Comments