Dataset schema (column, type, observed range):

| Column | Type | Observed range |
| --- | --- | --- |
| id | int64 | 2.05k to 16.6k |
| title | string | lengths 5 to 75 |
| fromurl | string | lengths 19 to 185 |
| date | timestamp[s] | |
| tags | sequence | lengths 0 to 11 |
| permalink | string | lengths 20 to 37 |
| content | string | lengths 342 to 82.2k |
| fromurl_status | int64 | 200 to 526 |
| status_msg | string | 339 classes |
| from_content | string | lengths 0 to 229k |
11,307
使用开源工具构建 DevOps 流水线的初学者指南
https://opensource.com/article/19/4/devops-pipeline
2019-09-05T06:03:37
[ "DevOps" ]
https://linux.cn/article-11307-1.html
> > 如果你是 DevOps 新人,请查看这 5 个步骤来构建你的第一个 DevOps 流水线。 > > > ![](/data/attachment/album/201909/05/060323yizmqwn43zwy13za.jpg) DevOps 已经成为解决软件开发过程中出现的缓慢、孤立或者其他故障的默认方式。但如果你刚接触 DevOps 并且不确定从哪里开始,这些就没有多大意义了。本文探索了什么是 DevOps 流水线,并提供了创建它的 5 个步骤。尽管这个教程并不全面,但可以给你以后上手和扩展打下基础。首先,说一个小故事。 ### 我的 DevOps 之旅 我曾经在花旗集团的云小组工作,开发<ruby>基础设施即服务<rt>Infrastructure as a Service</rt></ruby>(IaaS)网页应用来管理花旗的云基础设施,但我一直对如何让开发流水线更加高效、如何给团队带来积极的文化变革感兴趣。我在时任花旗云架构和基础设施工程 CTO 的 Greg Lavender 推荐的《[凤凰项目](https://www.amazon.com/dp/B078Y98RG8/)》(The Phoenix Project)一书中找到了答案。这本书虽然讲解的是 DevOps 原理,但读起来像一本小说。 书后面的一张表展示了不同公司向发布环境部署的频率: | 公司 | 部署频率 | | --- | --- | | Amazon | 23,000 次/天 | | Google | 5,500 次/天 | | Netflix | 500 次/天 | | Facebook | 1 次/天 | | Twitter | 3 次/周 | | 典型企业 | 1 次/9 个月 | Amazon、Google、Netflix 怎么能做到如此频繁?那是因为这些公司弄清楚了如何去实现一个近乎完美的 DevOps 流水线。 但在花旗实施 DevOps 之前,情况并非如此。那时候,我的团队拥有处于不同<ruby>阶段<rt>stage</rt></ruby>的环境,但是在开发服务器上的部署完全是手动的。所有的开发人员都只能访问一个基于 IBM WebSphere Application Server 社区版的开发服务器。问题是当多个用户同时尝试部署时,服务器就会宕机,因此开发人员在部署前就得互相通知,这一点相当痛苦。此外,还存在代码测试覆盖率低、手动部署过程繁琐,以及无法把代码部署与定义好的任务或用户故事关联起来等问题。 我意识到必须做些事情,同时也找到了一个有同样感受的同事。我们决定合作去构建一个初始的 DevOps 流水线 —— 他设置了一个虚拟机和一个 Tomcat 应用服务器,而我则架设了 Jenkins,集成了 Atlassian Jira、BitBucket 和代码覆盖率测试。这个业余项目非常成功:我们近乎全自动化了开发流水线,并在开发服务器上实现了几乎 100% 的正常运行,我们可以追踪并改进代码覆盖率测试,并且 Git 分支能够与部署任务和 Jira 任务关联在一起。此外,大多数用来构建这个 DevOps 流水线的工具都是开源的。 现在我意识到了我们的 DevOps 流水线是多么的原始,因为我们没有利用像 Jenkins 文件或 Ansible 这样的高级配置。然而,这个简单的过程运作良好,这也许是因为 [Pareto](https://en.wikipedia.org/wiki/Pareto_principle) 原则(也被称作 80/20 法则)。 ### DevOps 和 CI/CD 流水线的简要介绍 如果你问一些人,“什么是 DevOps?”,你或许会得到一些不同的回答。DevOps 和敏捷一样,已经发展到涵盖诸多不同的学科,但大多数人至少会同意这些:DevOps 是一个软件开发实践或一个<ruby>软件开发生命周期<rt>software development lifecycle</rt></ruby>(SDLC),并且它的核心原则是一种文化上的变革 —— 开发人员与非开发人员同处于一个原本手动的事务已实现自动化的环境中;每个人做着自己擅长的事;单位时间内的部署次数增加;吞吐量提升;灵活度增加。 虽然拥有正确的软件工具并非实现 DevOps 环境所需的唯一东西,但一些工具却是必要的。最关键的一个便是持续集成和持续部署(CI/CD)。在这样的流水线中,环境拥有不同的阶段(例如:DEV、INT、TST、QA、UAT、STG、PROD),手动的工作得以自动化,开发人员可以交付高质量的代码,并实现灵活而频繁的部署。 这篇文章描述了一个构建 DevOps 流水线的五步方法,就像下图所展示的那样,使用开源的工具实现。 ![Complete DevOps pipeline](/data/attachment/album/201909/05/060340ilvz8rifkv8rkjjx.jpg "Complete DevOps pipeline") 闲话少说,让我们开始吧。 ### 第一步:CI/CD 框架 首先你需要的是一个 CI/CD 工具。Jenkins 是一个基于 Java、以 MIT 许可证发布的开源 CI/CD 工具,它是推广了 DevOps 运动的工具,并已成为<ruby>事实标准<rt>de facto standard</rt></ruby>。 所以,什么是 Jenkins?想象它是一种神奇的万能遥控,能够和许多不同的服务与工具打交道,并且能够将它们统一编排起来。就其本身而言,像 Jenkins 这样的 CI/CD 工具是没有用的,但接入不同的工具与服务后,它会变得非常强大。 Jenkins 仅是众多可用来构建 DevOps 流水线的开源 CI/CD 工具之一。 | 名称 | 许可证 | | --- | --- | | [Jenkins](https://github.com/jenkinsci/jenkins) | Creative Commons 和 MIT | | [Travis CI](https://github.com/travis-ci/travis-ci) | MIT | | [CruiseControl](http://cruisecontrol.sourceforge.net) | BSD | | [Buildbot](https://github.com/buildbot/buildbot) | GPL | | [Apache Gump](https://gump.apache.org) | Apache 2.0 | | [Cabie](http://cabie.tigris.org) | GNU | 下面就是使用 CI/CD 工具时 DevOps 看起来的样子。 ![CI/CD tool](/data/attachment/album/201909/05/060340o710ecl7eif71jpp.jpg "CI/CD tool") 你的 CI/CD 工具已在本地主机上运行,但目前你还做不了太多别的事情。让我们继续 DevOps 之旅的下一步。 ### 第二步:源代码控制管理 验证 CI/CD 工具可以执行某些魔术的最佳(也可能是最简单)方法是与源代码控制管理(SCM)工具集成。为什么需要源代码控制?假设你在开发一个应用。无论你什么时候构建应用,无论你使用的是 Java、Python、C++、Go、Ruby、JavaScript 或任意一种语言,你都在编程。你所编写的程序代码称为源代码。在一开始,特别是只有你一个人工作时,将所有的东西放进本地文件夹里或许都是可以的。但是当项目变得庞大并且邀请其他人协作后,你就需要一种方式来避免共享代码修改时的合并冲突。你也需要一种方式来恢复一个之前的版本——备份、复制并粘贴的方式已经过时了。你(和你的团队)想要更好的解决方式。 这就是 SCM 变得不可或缺的原因。SCM 工具通过在仓库中保存代码来帮助进行版本控制与多人协作。 尽管这里有许多 SCM 工具,但 Git 是事实上的标准,而且名副其实。我极力推荐使用 Git,但如果你有其他偏好,这里仍有其他的开源工具。 | 名称 | 许可证 | | --- | --- | | [Git](https://git-scm.com) | GPLv2 & LGPL v2.1 | | [Subversion](https://subversion.apache.org) 
| Apache 2.0 | | [Concurrent Versions System](http://savannah.nongnu.org/projects/cvs) (CVS) | GNU | | [Vesta](http://www.vestasys.org) | LGPL | | [Mercurial](https://www.mercurial-scm.org) | GNU GPL v2+ | 拥有 SCM 之后,DevOps 流水线看起来就像这样。 ![Source control management](/data/attachment/album/201909/05/060341m80rh0a8s010dxar.jpg "Source control management") CI/CD 工具能够自动化进行源代码检入检出以及完成成员之间的协作。还不错吧?但是,如何才能把它变成可工作的应用程序,使得数十亿人来使用并欣赏它呢? ### 第三步:自动化构建工具 真棒!现在你可以检出代码并将修改提交到源代码控制,并且可以邀请你的朋友就源代码控制进行协作。但是到目前为止你还没有构建出应用。要想让它成为一个网页应用,必须将其编译并打包成可部署的包或可执行程序(注意,像 JavaScript 或 PHP 这样的解释型编程语言不需要进行编译)。 于是就引出了自动化构建工具。无论你决定使用哪一款构建工具,它们都有一个共同的目标:将源代码构建成某种想要的格式,并且将清理、编译、测试、部署到某个位置这些任务自动化。构建工具会根据你的编程语言而有不同,但这里有一些通常使用的开源工具值得考虑。 | 名称 | 许可证 | 编程语言 | | --- | --- | --- | | [Maven](https://maven.apache.org) | Apache 2.0 | Java | | [Ant](https://ant.apache.org) | Apache 2.0 | Java | | [Gradle](https://gradle.org/) | Apache 2.0 | Java | | [Bazel](https://bazel.build) | Apache 2.0 | Java | | [Make](https://www.gnu.org/software/make) | GNU | N/A | | [Grunt](https://gruntjs.com) | MIT | JavaScript | | [Gulp](https://gulpjs.com) | MIT | JavaScript | | [Buildr](http://buildr.apache.org) | Apache | Ruby | | [Rake](https://github.com/ruby/rake) | MIT | Ruby | | [A-A-P](http://www.a-a-p.org) | GNU | Python | | [SCons](https://www.scons.org) | MIT | Python | | [BitBake](https://www.yoctoproject.org/software-item/bitbake) | GPLv2 | Python | | [Cake](https://github.com/cake-build/cake) | MIT | C# | | [ASDF](https://common-lisp.net/project/asdf) | Expat (MIT) | LISP | | [Cabal](https://www.haskell.org/cabal) | BSD | Haskell | 太棒了!现在你可以将自动化构建工具的配置文件放进源代码控制管理系统中,并让你的 CI/CD 工具构建它。 ![Build automation tool](/data/attachment/album/201909/05/060341rfvzwege306dapue.jpg "Build automation tool") 一切都如此美好,对吧?但是在哪里部署它呢? ### 第四步:网页应用服务器 到目前为止,你有了一个可执行或可部署的打包文件。对任何真正有用的应用程序来说,它必须提供某种服务或者接口,所以你需要一个容器来发布你的应用。 对于网页应用,网页应用服务器就是容器。应用程序服务器提供了环境,让可部署包中的编程逻辑能够被检测到、呈现界面,并通过打开套接字为外部世界提供网页服务。在其他环境下你也需要一个 HTTP 服务器(比如虚拟机)来安装服务应用。现在,我假设你将会自己学习这些东西(尽管我会在下面讨论容器)。 这里有许多开源的网页应用服务器。 | 名称 | 协议 | 编程语言 | | --- | --- | --- | | [Tomcat](https://tomcat.apache.org) | Apache 2.0 | Java | | [Jetty](https://www.eclipse.org/jetty/) | Apache 2.0 | Java | | [WildFly](http://wildfly.org) | GNU Lesser Public | Java | | [GlassFish](https://javaee.github.io/glassfish) | CDDL & GNU Less Public | Java | | [Django](https://www.djangoproject.com/) | 3-Clause BSD | Python | | [Tornado](http://www.tornadoweb.org/en/stable) | Apache 2.0 | Python | | [Gunicorn](https://gunicorn.org) | MIT | Python | | [Python Paste](https://github.com/cdent/paste) | MIT | Python | | [Rails](https://rubyonrails.org) | MIT | Ruby | | [Node.js](https://nodejs.org/en) | MIT | Javascript | 现在 DevOps 流水线差不多能用了,干得好! 
![Web application server](/data/attachment/album/201909/05/060342cd8rboq080odd88u.jpg "Web application server") 尽管你可以在这里停下来并进行进一步的集成,但是代码质量对于应用开发者来说是一件非常重要的事情。 ### 第五步:代码覆盖测试 实现代码测试件可能是另一个麻烦的需求,但是开发者需要尽早地捕捉程序中的所有错误并提升代码质量来保证最终用户满意度。幸运的是,这里有许多开源工具来测试你的代码并提出改善质量的建议。甚至更好的,大部分 CI/CD 工具能够集成这些工具并将测试过程自动化进行。 代码测试分为两个部分:“代码测试框架”帮助进行编写与运行测试,“代码质量改进工具”帮助提升代码的质量。 #### 代码测试框架 | 名称 | 许可证 | 编程语言 | | --- | --- | --- | | [JUnit](https://junit.org/junit5) | Eclipse Public License | Java | | [EasyMock](http://easymock.org) | Apache | Java | | [Mockito](https://site.mockito.org) | MIT | Java | | [PowerMock](https://github.com/powermock/powermock) | Apache 2.0 | Java | | [Pytest](https://docs.pytest.org) | MIT | Python | | [Hypothesis](https://hypothesis.works) | Mozilla | Python | | [Tox](https://github.com/tox-dev/tox) | MIT | Python | #### 代码质量改进工具 | 名称 | 许可证 | 编程语言 | | --- | --- | --- | | [Cobertura](http://cobertura.github.io/cobertura) | GNU | Java | | [CodeCover](http://codecover.org/) | Eclipse Public (EPL) | Java | | [Coverage.py](https://github.com/nedbat/coveragepy) | Apache 2.0 | Python | | [Emma](http://emma.sourceforge.net) | Common Public License | Java | | [JaCoCo](https://github.com/jacoco/jacoco) | Eclipse Public License | Java | | [Hypothesis](https://hypothesis.works) | Mozilla | Python | | [Tox](https://github.com/tox-dev/tox) | MIT | Python | | [Jasmine](https://jasmine.github.io) | MIT | JavaScript | | [Karma](https://github.com/karma-runner/karma) | MIT | JavaScript | | [Mocha](https://github.com/mochajs/mocha) | MIT | JavaScript | | [Jest](https://jestjs.io) | MIT | JavaScript | 注意,之前提到的大多数工具和框架都是为 Java、Python、JavaScript 写的,因为 C++ 和 C# 是专有编程语言(尽管 GCC 是开源的)。 现在你已经运用了代码覆盖测试工具,你的 DevOps 流水线应该就像教程开始那幅图中展示的那样了。 ### 可选步骤 #### 容器 正如我之前所说,你可以在虚拟机(VM)或服务器上发布你的应用,但是容器是一个更好的解决方法。 [什么是容器](/resources/what-are-linux-containers)?简要的介绍就是 VM 需要占用操作系统大量的资源,它提升了应用程序的大小,而容器仅仅需要一些库和配置来运行应用程序。显然,VM 仍有重要的用途,但容器对于发布应用(包括应用程序服务器)来说是一个更为轻量的解决方式。 尽管对于容器来说也有其他的选择,但是 Docker 和 Kubernetes 更为广泛。 | 名称 | 许可证 | | --- | --- | | [Docker](https://www.docker.com) | Apache 2.0 | | [Kubernetes](https://kubernetes.io) | Apache 2.0 | 了解更多信息,请查看 [Opensource.com](http://Opensource.com) 上关于 Docker 和 Kubernetes 的其它文章: * [什么是 Docker?](https://opensource.com/resources/what-docker) * [Docker 简介](https://opensource.com/business/15/1/introduction-docker) * [什么是 Kubernetes?](https://opensource.com/resources/what-is-kubernetes) * [从零开始的 Kubernetes 实践](https://opensource.com/article/17/11/kubernetes-lightning-talk) #### 中间件自动化工具 我们的 DevOps 流水线大部分集中在协作构建与部署应用上,但你也可以用 DevOps 工具完成许多其他的事情。其中之一便是利用它实现<ruby> 基础设施管理 <rt> Infrastructure as Code </rt></ruby>(IaC)工具,这也是熟知的中间件自动化工具。这些工具帮助完成中间件的自动化安装、管理和其他任务。例如,自动化工具可以用正确的配置下拉应用程序,例如网页服务器、数据库和监控工具,并且部署它们到应用服务器上。 这里有几个开源的中间件自动化工具值得考虑: | 名称 | 许可证 | | --- | --- | | [Ansible](https://www.ansible.com) | GNU Public | | [SaltStack](https://www.saltstack.com) | Apache 2.0 | | [Chef](https://www.chef.io) | Apache 2.0 | | [Puppet](https://puppet.com) | Apache or GPL | 获取更多中间件自动化工具,查看 [Opensource.com](http://Opensource.com) 上的其它文章: * [Ansible 快速入门指南](https://opensource.com/article/19/2/quickstart-guide-ansible) * [Ansible 自动化部署策略](https://opensource.com/article/19/1/automating-deployment-strategies-ansible) * [配置管理工具 Top 5](https://opensource.com/article/18/12/configuration-management-tools) ### 之后的发展 这只是一个完整 DevOps 流水线的冰山一角。从 CI/CD 工具开始并且探索其他可以自动化的东西来使你的团队更加轻松的工作。并且,寻找[开源通讯工具](https://opensource.com/alternatives/slack)可以帮助你的团队一起工作的更好。 发现更多见解,这里有一些非常棒的文章来介绍 DevOps : * [什么是 
DevOps](https://opensource.com/resources/devops) * [掌握 5 件事成为 DevOps 工程师](https://opensource.com/article/19/2/master-devops-engineer) * [所有人的 DevOps](https://opensource.com/article/18/11/how-non-engineer-got-devops) * [在 DevOps 中开始使用预测分析](https://opensource.com/article/19/1/getting-started-predictive-analytics-devops) 使用开源 agile 工具来集成 DevOps 也是一个很好的主意: * [什么是 agile ?](https://opensource.com/article/18/10/what-agile) * [4 步成为一个了不起的 agile 开发者](https://opensource.com/article/19/2/steps-agile-developer) --- via: <https://opensource.com/article/19/4/devops-pipeline> 作者:[Bryant Son](https://opensource.com/users/brson/users/milindsingh/users/milindsingh/users/dscripter) 选题:[lujun9972](https://github.com/lujun9972) 译者:[LuMing](https://github.com/LuuMing) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
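(补充一个小示例)为了直观理解 CI/CD 工具在每次构建时大致会做什么,下面用一个极简的 shell 脚本把前面的几个步骤串起来。注意这只是一个示意:仓库地址、构建命令和部署路径均为假设的占位值,并非原文作者的实际配置;真实流水线中这些工作通常由 Jenkins 任务或 Jenkinsfile 来承担。

```
#!/bin/sh
# 示意脚本:一个 CI 任务每次构建时大致执行的步骤(所有地址/路径均为假设值)
set -e
git clone https://scm.example.com/team/app.git app   # 第二步:从 SCM 检出源代码
cd app
mvn -B clean package                                 # 第三步:自动化构建(假设为 Maven/Java 项目,package 阶段会先运行测试)
cp target/app.war /opt/tomcat/webapps/               # 第四步:部署到网页应用服务器(假设本机运行 Tomcat)
```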
200
OK
DevOps has become the default answer to fixing software development processes that are slow, siloed, or otherwise dysfunctional. But that doesn't mean very much when you're new to DevOps and aren't sure where to begin. This article explores what a DevOps pipeline is and offers a five-step process to create one. While this tutorial is not comprehensive, it should give you a foundation to start on and expand later. But first, a story. ## My DevOps journey I used to work for the cloud team at Citi Group, developing an Infrastructure-as-a-Service (IaaS) web application to manage Citi's cloud infrastructure, but I was always interested in figuring out ways to make the development pipeline more efficient and bring positive cultural change to the development team. I found my answer in a book recommended by Greg Lavender, who was the CTO of Citi's cloud architecture and infrastructure engineering, called *The Phoenix Project*. The book reads like a novel while it explains DevOps principles. A table at the back of the book shows how often different companies deploy to the release environment: Company | Deployment Frequency | ---|---| Amazon | 23,000 per day | Google | 5,500 per day | Netflix | 500 per day | Facebook | 1 per day | Twitter | 3 per week | Typical enterprise | 1 every 9 months | How are the frequency rates of Amazon, Google, and Netflix even possible? It's because these companies have figured out how to make a nearly perfect DevOps pipeline. This definitely wasn't the case before we implemented DevOps at Citi. Back then, my team had different staged environments, but deployments to the development server were very manual. All developers had access to just one development server based on IBM WebSphere Application Server Community Edition. The problem was the server went down whenever multiple users simultaneously tried to make deployments, so the developers had to let each other know whenever they were about to make a deployment, which was quite a pain. In addition, there were problems with low code test coverages, cumbersome manual deployment processes, and no way to track code deployments with a defined task or a user story. I realized something had to be done, and I found a colleague who felt the same way. We decided to collaborate to build an initial DevOps pipeline—he set up a virtual machine and a Tomcat application server while I worked on Jenkins, integrating with Atlassian Jira and BitBucket, and code testing coverages. This side project was hugely successful: we almost fully automated the development pipeline, we achieved nearly 100% uptime on our development server, we could track and improve code testing coverage, and the Git branch could be associated with the deployment and Jira task. And most of the tools we used to construct our DevOps pipeline were open source. I now realize how rudimentary our DevOps pipeline was, as we didn't take advantage of advanced configurations like Jenkins files or Ansible. However, this simple process worked well, maybe due to the [Pareto](https://en.wikipedia.org/wiki/Pareto_principle) principle (also known as the 80/20 rule). ## A brief introduction to DevOps and the CI/CD pipeline If you ask several people, "What is DevOps?" you'll probably get several different answers. 
DevOps, like agile, has evolved to encompass many different disciplines, but most people will agree on a few things: DevOps is a software development practice or a software development lifecycle (SDLC) and its central tenet is cultural change, where developers and non-developers all breathe in an environment where formerly manual things are automated; everyone does what they are best at; the number of deployments per period increases; throughput increases; and flexibility improves. While having the right software tools is not the only thing you need to achieve a DevOps environment, some tools are necessary. A key one is continuous integration and continuous deployment (CI/CD). This pipeline is where the environments have different stages (e.g., DEV, INT, TST, QA, UAT, STG, PROD), manual things are automated, and developers can achieve high-quality code, flexibility, and numerous deployments. This article describes a five-step approach to creating a DevOps pipeline, like the one in the following diagram, using open source tools. ![Complete DevOps pipeline Complete DevOps pipeline](https://opensource.com/sites/default/files/uploads/1_finaldevopspipeline.jpg) Without further ado, let's get started. ## Step 1: CI/CD framework The first thing you need is a CI/CD tool. Jenkins, an open source, Java-based CI/CD tool based on the MIT License, is the tool that popularized the DevOps movement and has become the de facto standard. So, what is Jenkins? Imagine it as some sort of a magical universal remote control that can talk to many, many different services and tools and orchestrate them. On its own, a CI/CD tool like Jenkins is useless, but it becomes more powerful as it plugs into different tools and services. Jenkins is just one of many open source CI/CD tools that you can leverage to build a DevOps pipeline. Name | License | ---|---| [Jenkins](https://github.com/jenkinsci/jenkins) | Creative Commons and MIT | [Travis CI](https://github.com/travis-ci/travis-ci) | MIT | [CruiseControl](http://cruisecontrol.sourceforge.net) | BSD | [Buildbot](https://github.com/buildbot/buildbot) | GPL | [Apache Gump](https://gump.apache.org) | Apache 2.0 | [Cabie](http://cabie.tigris.org) | GNU | Here's what a DevOps process looks like with a CI/CD tool. ![CI/CD tool CI/CD tool](https://opensource.com/sites/default/files/uploads/2_runningjenkins.jpg) You have a CI/CD tool running in your localhost, but there is not much you can do at the moment. Let's follow the next step of the DevOps journey. ## Step 2: Source control management The best (and probably the easiest) way to verify that your CI/CD tool can perform some magic is by integrating with a source control management (SCM) tool. Why do you need source control? Suppose you are developing an application. Whenever you build an application, you are programming—whether you are using Java, Python, C++, Go, Ruby, JavaScript, or any of the gazillion programming languages out there. The programming codes you write are called source codes. In the beginning, especially when you are working alone, it's probably OK to put everything in your local directory. But when the project gets bigger and you invite others to collaborate, you need a way to avoid merge conflicts while effectively sharing the code modifications. You also need a way to recover a previous version—and the process of making a backup and copying-and-pasting gets old. You (and your teammates) want something better. This is where SCM becomes almost a necessity. 
A SCM tool helps by storing your code in repositories, versioning your code, and coordinating among project members. Although there are many SCM tools out there, Git is the standard and rightly so. I highly recommend using Git, but there are other open source options if you prefer. Name | License | ---|---| [Git](https://git-scm.com) | GPLv2 & LGPL v2.1 | [Subversion](https://subversion.apache.org) | Apache 2.0 | [Concurrent Versions System](http://savannah.nongnu.org/projects/cvs) (CVS) | GNU | [Vesta](http://www.vestasys.org) | LGPL | [Mercurial](https://www.mercurial-scm.org) | GNU GPL v2+ | Here's what the DevOps pipeline looks like with the addition of SCM. ![Source control management Source control management](https://opensource.com/sites/default/files/uploads/3_sourcecontrolmanagement.jpg) The CI/CD tool can automate the tasks of checking in and checking out source code and collaborating across members. Not bad? But how can you make this into a working application so billions of people can use and appreciate it? ## Step 3: Build automation tool Excellent! You can check out the code and commit your changes to the source control, and you can invite your friends to collaborate on the source control development. But you haven't yet built an application. To make it a web application, it has to be compiled and put into a deployable package format or run as an executable. (Note that an interpreted programming language like JavaScript or PHP doesn't need to be compiled.) Enter the build automation tool. No matter which build tool you decide to use, all build automation tools have a shared goal: to build the source code into some desired format and to automate the task of cleaning, compiling, testing, and deploying to a certain location. The build tools will differ depending on your programming language, but here are some common open source options to consider. Name | License | Programming Language | ---|---|---| [Maven](https://maven.apache.org) | Apache 2.0 | Java | [Ant](https://ant.apache.org) | Apache 2.0 | Java | [Gradle](https://gradle.org/) | Apache 2.0 | Java | [Bazel](https://bazel.build) | Apache 2.0 | Java | [Make](https://www.gnu.org/software/make) | GNU | N/A | [Grunt](https://gruntjs.com) | MIT | JavaScript | [Gulp](https://gulpjs.com) | MIT | JavaScript | [Buildr](http://buildr.apache.org) | Apache | Ruby | [Rake](https://github.com/ruby/rake) | MIT | Ruby | [A-A-P](http://www.a-a-p.org) | GNU | Python | [SCons](https://www.scons.org) | MIT | Python | [BitBake](https://www.yoctoproject.org/software-item/bitbake) | GPLv2 | Python | [Cake](https://github.com/cake-build/cake) | MIT | C# | [ASDF](https://common-lisp.net/project/asdf) | Expat (MIT) | LISP | [Cabal](https://www.haskell.org/cabal) | BSD | Haskell | Awesome! You can put your build automation tool configuration files into your source control management and let your CI/CD tool build it. ![Build automation tool Build automation tool](https://opensource.com/sites/default/files/uploads/4_buildtools.jpg) Everything is good, right? But where can you deploy it? ## Step 4: Web application server So far, you have a packaged file that might be executable or deployable. For any application to be truly useful, it has to provide some kind of a service or an interface, but you need a vessel to host your application. For a web application, a web application server is that vessel. An application server offers an environment where the programming logic inside the deployable package can be detected, render the interface, and offer the web services by opening sockets to the outside world. You need an HTTP server as well as some other environment (like a virtual machine) to install your application server. For now, let's assume you will learn about this along the way (although I will discuss containers below). There are a number of open source web application servers available. 
Name | License | Programming Language | ---|---|---| [Tomcat](https://tomcat.apache.org) | Apache 2.0 | Java | [Jetty](https://www.eclipse.org/jetty/) | Apache 2.0 | Java | [WildFly](http://wildfly.org) | GNU Lesser Public | Java | [GlassFish](https://javaee.github.io/glassfish) | CDDL & GNU Less Public | Java | [Django](https://www.djangoproject.com/) | 3-Clause BSD | Python | [Tornado](http://www.tornadoweb.org/en/stable) | Apache 2.0 | Python | [Gunicorn](https://gunicorn.org) | MIT | Python | [Python Paste](https://github.com/cdent/paste) | MIT | Python | [Rails](https://rubyonrails.org) | MIT | Ruby | [Node.js](https://nodejs.org/en) | MIT | Javascript | Now the DevOps pipeline is almost usable. Good job! ![Web application server Web application server](https://opensource.com/sites/default/files/uploads/5_applicationserver.jpg) Although it's possible to stop here and integrate further on your own, code quality is an important thing for an application developer to be concerned about. ## Step 5: Code testing coverage Implementing code test pieces can be another cumbersome requirement, but developers need to catch any errors in an application early on and improve the code quality to ensure end users are satisfied. Luckily, there are many open source tools available to test your code and suggest ways to improve its quality. Even better, most CI/CD tools can plug into these tools and automate the process. There are two parts to code testing: *code testing frameworks* that help write and run the tests, and *code quality suggestion tools* that help improve code quality. ### Code test frameworks Name | License | Programming Language | ---|---|---| [JUnit](https://junit.org/junit5) | Eclipse Public License | Java | [EasyMock](http://easymock.org) | Apache | Java | [Mockito](https://site.mockito.org) | MIT | Java | [PowerMock](https://github.com/powermock/powermock) | Apache 2.0 | Java | [Pytest](https://docs.pytest.org) | MIT | Python | [Hypothesis](https://hypothesis.works) | Mozilla | Python | [Tox](https://github.com/tox-dev/tox) | MIT | Python | ### Code quality suggestion tools Name | License | Programming Language | ---|---|---| [Cobertura](http://cobertura.github.io/cobertura) | GNU | Java | [CodeCover](http://codecover.org/) | Eclipse Public (EPL) | Java | [Coverage.py](https://github.com/nedbat/coveragepy) | Apache 2.0 | Python | [Emma](http://emma.sourceforge.net) | Common Public License | Java | [JaCoCo](https://github.com/jacoco/jacoco) | Eclipse Public License | Java | [Hypothesis](https://hypothesis.works) | Mozilla | Python | [Tox](https://github.com/tox-dev/tox) | MIT | Python | [Jasmine](https://jasmine.github.io) | MIT | JavaScript | [Karma](https://github.com/karma-runner/karma) | MIT | JavaScript | [Mocha](https://github.com/mochajs/mocha) | MIT | JavaScript | [Jest](https://jestjs.io) | MIT | JavaScript | Note that most of the tools and frameworks mentioned above are written for Java, Python, and JavaScript, since C++ and C# are proprietary programming languages (although GCC is open source). Now that you've implemented code testing coverage tools, your DevOps pipeline should resemble the DevOps pipeline diagram shown at the beginning of this tutorial. ## Optional steps ### Containers As I mentioned above, you can host your application server on a virtual machine or a server, but containers are a popular solution. [What are containers](https://opensource.com/resources/what-are-linux-containers)? The short explanation is that a VM needs the huge footprint of an operating system, which overwhelms the application size, while a container just needs a few libraries and configurations to run the application. There are clearly still important uses for a VM, but a container is a lightweight solution for hosting an application, including an application server. Although there are other options for containers, Docker and Kubernetes are the most popular. 
Name | License | ---|---| [Docker](https://www.docker.com) | Apache 2.0 | [Kubernetes](https://kubernetes.io) | Apache 2.0 | To learn more, check out these other [Opensource.com](http://Opensource.com) articles about Docker and Kubernetes: [What is Docker?](https://opensource.com/resources/what-docker) [An introduction to Docker](https://opensource.com/business/15/1/introduction-docker) [What is Kubernetes?](https://opensource.com/resources/what-is-kubernetes) [Kubernetes from scratch](https://opensource.com/article/17/11/kubernetes-lightning-talk) ### Middleware automation tools Our DevOps pipeline mostly focused on collaboratively building and deploying an application, but there are many other things you can do with DevOps tools. One of them is leveraging Infrastructure as Code (IaC) tools, which are also known as middleware automation tools. These tools help automate the installation, management, and other tasks for middleware software. For example, an automation tool can pull applications, like a web application server, database, and monitoring tool, with the right configurations and deploy them to the application server. Here are several open source middleware automation tools to consider: Name | License | ---|---| [Ansible](https://www.ansible.com) | GNU Public | [SaltStack](https://www.saltstack.com) | Apache 2.0 | [Chef](https://www.chef.io) | Apache 2.0 | [Puppet](https://puppet.com) | Apache or GPL | For more on middleware automation tools, check out these other [Opensource.com](http://Opensource.com) articles: [A quickstart guide to Ansible](https://opensource.com/article/19/2/quickstart-guide-ansible) [Automating deployment strategies with Ansible](https://opensource.com/article/19/1/automating-deployment-strategies-ansible) [Top 5 configuration management tools](https://opensource.com/article/18/12/configuration-management-tools) ## Where can you go from here? This is just the tip of the iceberg for what a complete DevOps pipeline can look like. Start with a CI/CD tool and explore what else you can automate to make your team's job easier. Also, look into [open source communication tools](https://opensource.com/alternatives/slack) that can help your team work better together. For more insight, here are some very good introductory articles about DevOps: [What is DevOps](https://opensource.com/resources/devops) [5 things to master to be a DevOps engineer](https://opensource.com/article/19/2/master-devops-engineer) [DevOps is for everyone](https://opensource.com/article/18/11/how-non-engineer-got-devops) [Getting started with predictive analytics in DevOps](https://opensource.com/article/19/1/getting-started-predictive-analytics-devops) Integrating DevOps with open source agile tools is also a good idea: [What is agile?](https://opensource.com/article/18/10/what-agile) [4 steps to becoming an awesome agile developer](https://opensource.com/article/19/2/steps-agile-developer)
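As a footnote to the optional containers step above, here is a minimal sketch of how the packaged web application might be hosted in a container. The image name and port are hypothetical placeholders, not something prescribed by the original article:

```
# Build an image from a Dockerfile in the current directory (image name is a placeholder)
docker build -t my-webapp .
# Run it detached, mapping the container's web port to the host (8080 assumed here)
docker run -d -p 8080:8080 --name my-webapp my-webapp
```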
11,308
如何在 Ubuntu 中检查你的 IP 地址
https://itsfoss.com/check-ip-address-ubuntu/
2019-09-05T06:15:17
[ "IP" ]
https://linux.cn/article-11308-1.html
不知道你的 IP 地址是什么?以下是在 Ubuntu 和其他 Linux 发行版中检查 IP 地址的几种方法。 ![](/data/attachment/album/201909/05/061519lz58opfcjp4pj0c1.png) ### 什么是 IP 地址? **互联网协议地址**(通常称为 **IP 地址**)是分配给连接到计算机网络的每个设备(使用互联网协议)的数字标签。IP 地址用于识别和定位机器。 **IP 地址**在网络中是*唯一的*,使得所有连接设备能够通信。 你还应该知道有两种**类型的 IP 地址**:**公有**和**私有**。**公有 IP 地址**是用于互联网通信的地址,这与你用于邮件的物理地址相同。但是,在本地网络(例如使用路由器的家庭)的环境中,会为每个设备分配在该子网内唯一的**私有 IP 地址**。这在本地网络中使用,而不直接暴露公有 IP(路由器用它与互联网通信)。 另外还要区分 **IPv4** 和 **IPv6** 协议。**IPv4** 是经典的 IP 格式,它由基本的 4 部分结构组成,四个字节用点分隔(例如 127.0.0.1)。但是,随着设备数量的增加,IPv4 很快就无法提供足够的地址。这就是 **IPv6** 被发明的原因,它使用 **128 位地址**的格式(与 **IPv4** 使用的 **32 位地址**相比)。 ### 在 Ubuntu 中检查你的 IP 地址(终端方式) 检查 IP 地址的最快和最简单的方法是使用 `ip` 命令。你可以按以下方式使用此命令: ``` ip addr show ``` 它将同时显示 IPv4 和 IPv6 地址: ![Display IP Address in Ubuntu Linux](/data/attachment/album/201909/05/061520f07p0z95qz4rxpi5.png) 实际上,你可以进一步把这个命令缩短为 `ip a`。它会给你完全相同的结果。 ``` ip a ``` 如果你希望获得最少的细节,也可以使用 `hostname`: ``` hostname -I ``` 还有一些[在 Linux 中检查 IP 地址的方法](https://linuxhandbook.com/find-ip-address/),但是这两个命令足以满足这个目的。 那么 `ifconfig` 呢? 老用户可能会想要使用 `ifconfig`(net-tools 软件包的一部分),但该程序已被弃用。一些较新的 Linux 发行版不再包含此软件包,如果你尝试运行它,你将看到 ifconfig 命令未找到的错误。 ### 在 Ubuntu 中检查你的 IP 地址(GUI 方式) 如果你对命令行不熟悉,你还可以使用图形方式检查 IP 地址。 打开 Ubuntu 应用菜单(在屏幕左下角**显示应用**)并搜索 **Settings**,然后单击图标: ![Applications Menu Settings](/data/attachment/album/201909/05/061521sx6mepmlolrm66px.jpg) 这应该会打开**设置菜单**。进入**网络**: ![Network Settings Ubuntu](/data/attachment/album/201909/05/061522artoiggzfrbr2oyi.jpg) 按下连接旁边的**齿轮图标**会打开一个窗口,其中包含更多设置和有关你网络链接的信息,其中包括你的 IP 地址: ![IP Address GUI Ubuntu](/data/attachment/album/201909/05/061523cb0wm44bmop313we.png) ### 额外提示:检查你的公共 IP 地址(适用于台式计算机) 首先,要检查你的**公有 IP 地址**(用于与服务器通信),你可以[使用 curl 命令](https://linuxhandbook.com/curl-command-examples/)。打开终端并输入以下命令: ``` curl ifconfig.me ``` 这应该只会返回你的 IP 地址而没有其他多余信息。我建议在分享这个地址时要小心,因为这相当于公布你的个人地址。 **注意:** 如果 `curl` 没有安装,只需使用 `sudo apt install curl -y` 来解决问题,然后再试一次。 另一种可以查看公共 IP 地址的简单方法是在 Google 中搜索 “ip address”。 ### 总结 在本文中,我介绍了在 Ubuntu Linux 中找到 IP 地址的不同方法,并向你概述了 IP 地址的用途以及它们对我们如此重要的原因。 我希望你喜欢这篇文章。如果你觉得文章有用,请在评论栏告诉我们! --- via: <https://itsfoss.com/check-ip-address-ubuntu/> 作者:[Sergiu](https://itsfoss.com/author/sergiu/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
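(补充一个小示例)如果你在脚本中只需要某个接口的 IPv4 地址,可以在 `ip` 命令上加 `-4` 选项并用 `grep` 提取。以下仅为示意:接口名 `enp0s3` 是假设值,请替换为你在 `ip a` 输出中看到的实际接口,且 `-P` 选项需要 GNU grep:

```
# 只显示指定接口的 IPv4 信息,并提取出地址本身
ip -4 addr show enp0s3 | grep -oP '(?<=inet\s)[\d.]+'
```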
200
OK
Wonder what’s your IP address? Here are several ways to check IP addresses in Ubuntu and other Linux distributions. Want to know your Linux system's IP address? You can use the ip command with the option `a` like this: `ip a` The output is extensive and it shows all the internet interfaces available, including loopback. Identifying the IP address could seem challenging if you are new to it. ``` 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enx747827c86d70: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000 link/ether 74:78:27:c8:6d:70 brd ff:ff:ff:ff:ff:ff 3: wlp0s20f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether dc:41:a9:fb:7a:c0 brd ff:ff:ff:ff:ff:ff inet 192.168.1.53/24 brd 192.168.1.255 scope global dynamic noprefixroute wlp0s20f3 valid_lft 82827sec preferred_lft 82827sec inet6 fe80::e1d:d71b:c507:5cc8/64 scope link noprefixroute valid_lft forever preferred_lft forever ``` It really is not, actually. And there are other ways to find the IP address. I'll discuss all this in detail. But first, let's brush up the basics. ## What is an IP Address? An **Internet Protocol address** (commonly referred to as the **IP address**) is a numerical label assigned to each device connected to a computer network (using the Internet Protocol). An IP address serves both the purpose of identification and localisation of a machine. The **IP address** is *unique* within the network, allowing communication between all connected devices. You should also know that there are two **types of IP addresses**: **public** and **private**. The **public IP address** is used to communicate over the Internet, the same way your physical address is used for postal mail. However, in the context of a local network (such as a home where a router is used), each device is assigned a unique private IP address within this sub-network. This is used inside this local network without directly exposing the public IP (which the router uses to communicate with the Internet). Another distinction can be made between **IPv4** and **IPv6** protocols. **IPv4** is the classic IP format, consisting of a basic 4-part structure, with four bytes separated by dots (e.g., 127.0.0.1). However, with the growing number of devices, IPv4 will soon be unable to offer enough addresses. This is why **IPv6** was invented, a format that uses 128-bit addresses (compared to the 32-bit **IPv4**). ## Checking your IP Address in Ubuntu [Terminal Method] The fastest and simplest way to check your IP address is by using the `ip` command. You can use this command in the following fashion: ``` ip a ``` Actually, it’s short for this: ``` ip addr show ``` Both commands show the same output. They will show you both IPv4 and IPv6 addresses: ![Display IP Address in Ubuntu Linux](https://itsfoss.com/content/images/2023/03/ip-addr-show.png) You should identify the correct interface and then look beside inet for IPv4 and inet6 for IPv6. For example, inet 192.168.1.53/24 means the IPv4 address is 192.168.1.53. 
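One more variation that can be handy (a small aside, not covered in the original text): recent iproute2 releases include a brief output mode that prints one compact line per interface, which is easier to scan than the full listing:

```
ip -br -4 addr show
```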
### Just get the IP address If you prefer to get minimal details, you can also use the **hostname** command: ``` hostname -I ``` It will just give the IP address of the system. Nothing else. ![Hostname only shows IP address](https://itsfoss.com/content/images/2023/03/hostname-command.png) There are other [ways to check IP addresses in Linux](https://linuxhandbook.com/find-ip-address/?ref=itsfoss.com) but these two commands are more than enough to serve the purpose. Old-school users might want to use **ifconfig** (part of net-tools), but that [command is deprecated](https://itsfoss.com/deprecated-linux-commands/). Some newer Linux distributions don’t include this package anymore and if you try running it, you’ll see the ifconfig command not found error. ## Checking IP address in Ubuntu [GUI Method] If you are not comfortable with the command line, you can also check the IP address graphically. Open up the Ubuntu Applications Menu (**Show Applications** in the bottom-left corner of the screen) and search for **Settings** and click on the icon: ![Open system settings from Ubuntu Activities Overview](https://itsfoss.com/content/images/2023/03/open-settings-from-activities-overview.png) This should open up the **Settings Menu**. Go to **Network**: ![Open settings for the currently connected network from the gear icon adjacent to the name of that particular connection](https://itsfoss.com/content/images/2023/03/click-gear-icon-adjacent-to-network-connection.png) Pressing on the **gear icon** next to your connection should open up a window with more settings and information about your link to the network, including your IP address: ![IP Address details in Ubuntu System Settings](https://itsfoss.com/content/images/2023/03/ipaddress-details-in-ssytem-settings.png) You can [see the IP address of your router](https://itsfoss.com/router-ip-address-linux/) as well in the above screenshot. It’s displayed with “Default Route”. ## Bonus Tip: Checking your public IP address (for desktop computers) First of all, to check your **public IP address** (used for communicating with servers etc.) you can [use the curl command](https://linuxhandbook.com/curl-command-examples/?ref=itsfoss.com). Open up a terminal and enter the following command: ``` curl ifconfig.me ``` This should simply return your IP address with no additional bulk information. I would recommend being careful when sharing this address since it is equivalent to giving out your personal address. If **curl** isn’t installed on your system, simply use `sudo apt install curl -y` to [install curl on Ubuntu-based Linux distributions](https://itsfoss.com/install-curl-ubuntu/). Another simple way you can see your public IP address is by searching for the **IP address** on Google. ## Summary Here's a summary of the commands you learned: Description | Command | ---|---| Show both IPv4 and IPv6 addresses with `ip` command | ip a or ip addr show | Print only IP address using `hostname` command | hostname -I | To check your public IP address (Need `curl` installed) | curl ifconfig.me | Display IP Address with Network Manager tool | nmcli -p device show | Use the ifconfig command to display the IP address (Need `net-tools` installed) | ifconfig -a | Now that you know your system's IP address, how about [getting the gateway IP](https://itsfoss.com/router-ip-address-linux/)? Boost your Linux networking skills with these essential commands: [21 Basic Linux Networking Commands You Should Know](https://itsfoss.com/basic-linux-networking-commands/) In this article, I went through the different ways you can find your IP address in Ubuntu Linux, as well as gave you a basic overview of what IP addresses are used for and why they are so important to us. I also discussed IPv4 and IPv6 briefly. By the way, have you ever wondered why there is no IPv5? [What Happened to IPv5? Why there is IPv4, IPv6 but no IPv5?](https://itsfoss.com/what-happened-to-ipv5/) I hope you enjoyed this quick guide. Let us know if you found this explanation helpful in the comments section!
11,310
如何更改 Linux 终端颜色主题
https://opensource.com/article/19/8/add-color-linux-terminal
2019-09-06T07:07:03
[ "终端", "主题" ]
https://linux.cn/article-11310-1.html
> > 你可以用丰富的选项来定义你的终端主题。 > > > ![](/data/attachment/album/201909/06/070600ztd434ppd99df99d.jpg) 如果你大部分时间都盯着终端,那么你很自然地希望它看起来能赏心悦目。美与不美,全在观者,自 CRT 串口控制台以来,终端已经经历了很多变迁。因此,你的软件终端窗口有丰富的选项,可以用来定义你看到的主题,不管你如何定义美,这总是件好事。 ### 设置 包括 GNOME、KDE 和 Xfce 在内的流行的软件终端应用,它们都提供了更改其颜色主题的选项。调整主题就像调整应用首选项一样简单。Fedora、RHEL 和 Ubuntu 默认使用 GNOME,因此本文使用该终端作为示例,但对 Konsole、Xfce 终端和许多其他终端的设置流程类似。 首先,进入到应用的“首选项”或“设置”面板。在 GNOME 终端中,你可以通过屏幕顶部或窗口右上角的“应用”菜单访问它。 在“首选项”中,单击“配置文件” 旁边的加号(“+”)来创建新的主题配置文件。在新配置文件中,单击“颜色”选项卡。 ![GNOME Terminal preferences](/data/attachment/album/201909/06/070706cxdvkdnzh0dxka6m.jpg "GNOME Terminal preferences") 在“颜色”选项卡中,取消选择“使用系统主题中的颜色”选项,以使窗口的其余部分变为可选状态。最开始,你可以选择内置的颜色方案。这些包括浅色主题,它有明亮的背景和深色的前景文字;还有深色主题,它有深色背景和浅色前景文字。 当没有其他设置(例如 `dircolors` 命令的设置)覆盖它们时,“默认颜色”色板将同时定义前景色和背景色。“调色板”设置 `dircolors` 命令定义的颜色。这些颜色由终端以 `LS_COLORS` 环境变量的形式使用,以在 [ls](https://opensource.com/article/19/7/master-ls-command) 命令的输出中添加颜色。如果这些颜色不吸引你,请在此更改它们。 如果对主题感到满意,请关闭“首选项”窗口。 要将终端更改为新的配置文件,请单击“应用”菜单,然后选择“配置文件”。选择新的配置文件,接着享受自定义主题。 ![GNOME Terminal profile selection](/data/attachment/album/201909/06/070706gm0zscgzscf0f0qc.jpg "GNOME Terminal profile selection") ### 命令选项 如果你的终端没有合适的设置窗口,它仍然可以在启动命令中提供颜色选项。xterm 和 rxvt 终端(旧的和启用 Unicode 的变体,有时称为 urxvt 或 rxvt-unicode)都提供了这样的选项,因此即使没有桌面环境和大型 GUI 框架,你仍然可以设置终端模拟器的主题。 两个明显的选项是前景色和背景色,分别用 `-fg` 和 `-bg` 定义。每个选项的参数是*颜色名*而不是它的 ANSI 编号。例如: ``` $ urxvt -bg black -fg green ``` 这些会设置默认的前景和背景。如果有任何其他规则会控制特定文件或设备类型的颜色,那么就使用这些颜色。有关如何设置它们的信息,请参阅 [dircolors](http://man7.org/linux/man-pages/man1/dircolors.1.html) 命令。 你还可以使用 `-cr` 设置文本光标(而不是鼠标光标)的颜色: ``` $ urxvt -bg black -fg green -cr teal ``` ![Setting color in urxvt](/data/attachment/album/201909/06/070707w8q8lsllitbjtaoz.jpg "Setting color in urxvt") 你的终端模拟器可能还有更多选项,如边框颜色(rxvt 中的 `-bd`)、光标闪烁(urxvt 中的 `-bc` 和 `+bc`),甚至背景透明度。请参阅终端的手册页,了解更多的功能。 要使用你选择的颜色启动终端,你可以将选项添加到用于启动终端的命令或菜单中(例如,在你的 Fluxbox 菜单文件、`$HOME/.local/share/applications` 目录中的 `.desktop` 或者类似的)。或者,你可以使用 [xrdb](https://www.x.org/releases/X11R7.7/doc/man/man1/xrdb.1.xhtml) 工具来管理与 X 相关的资源(但这超出了本文的范围)。 ### 家是可定制的地方 自定义 Linux 机器并不意味着你需要学习如何编程。你可以而且应该进行小而有意义的更改,来使你的数字家庭感觉更舒适。而且没有比终端更好的起点了! --- via: <https://opensource.com/article/19/8/add-color-linux-terminal> 作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
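(补充一个小示例)上文提到“调色板”与 `dircolors` 命令生成的 `LS_COLORS` 环境变量相关,你可以用下面的示意命令自行试验;其中 `~/.dircolors` 只是一个假设的数据库路径:

```
dircolors -p > ~/.dircolors          # 导出默认颜色数据库,便于编辑
eval "$(dircolors -b ~/.dircolors)"  # 生成并载入 LS_COLORS 环境变量
ls --color=auto                      # 查看修改后的效果
```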
200
OK
If you spend most of your day staring into a terminal, it's only natural that you want it to look pleasing. Beauty is in the eye of the beholder, and terminals have come a long way since the days of CRT serial consoles. So, the chances are good that your software terminal window has plenty of options to theme what you see—however you define beauty. ## Settings Most popular software terminal applications, including GNOME, KDE, and Xfce, ship with the option to change their color theme. Adjusting your theme is as easy as adjusting application preferences. Fedora, RHEL, and Ubuntu ship with GNOME by default, so this article uses that terminal as its example, but the process is similar for Konsole, Xfce terminal, and many others. First, navigate to the application's Preferences or Settings panel. In GNOME terminal, you reach it through the Application menu along the top of the screen or in the right corner of the window. In Preferences, click the plus symbol (+) next to Profiles to create a new theme profile. In your new profile, click the Colors tab. ![GNOME Terminal preferences GNOME Terminal preferences](https://opensource.com/sites/default/files/uploads/gnome-terminal-preferences.jpg) In the Colors tab, deselect the Use Colors From System Theme option so that the rest of the window will become active. As a starting point, you can select a built-in color scheme. These include light themes, with bright backgrounds and dark foreground text, as well as dark themes, with dark backgrounds and light foreground text. The Default Color swatches define both the foreground and background colors when no other setting (such as settings from the dircolors command) overrides them. The Palette sets the colors defined by the dircolors command. These colors are used by your terminal, in the form of the LS_COLORS environment variable, to add color to the output of the [ls](https://opensource.com/article/19/7/master-ls-command) command. If none of them appeal to you, change them on this screen. When you're happy with your theme, close the Preferences window. To change your terminal to your new profile, click on the Application menu, and select Profile. Choose your new profile and enjoy your custom theme. ![GNOME Terminal profile selection GNOME Terminal profile selection](https://opensource.com/sites/default/files/uploads/gnome-terminal-profile-select.jpg) ## Command options If your terminal doesn't have a fancy settings window, it may still provide options for colors in your launch command. The xterm and rxvt terminals (the old one and the Unicode-enabled variant, sometimes called urxvt or rxvt-unicode) provide such options, so you can still theme your terminal emulator—even without desktop environments and big GUI frameworks. The two obvious options are the foreground and background colors, defined by **-fg** and **-bg**, respectively. The argument for each option is the color *name* rather than its ANSI number. For example: `$ urxvt -bg black -fg green ` These settings set the default foreground and background. Should any other rule govern the color of a specific file or device type, those colors are used. See the [dircolors](http://man7.org/linux/man-pages/man1/dircolors.1.html) command for information on how to set those. 
You can also set the color of the text cursor (not the mouse cursor) with **-cr**: `$ urxvt -bg black -fg green -cr teal` ![Setting color in urxvt Setting color in urxvt](https://opensource.com/sites/default/files/uploads/urxvt-color.jpg) Your terminal emulator may have more options, like a border color (**-bd** in rxvt), cursor blink (**-bc** and **+bc** in urxvt), and even background transparency. Refer to your terminal's man page to find out what cool features are available. To launch your terminal with your choice of colors, you can add the options either to the command or the menu you use to launch the terminal (such as your Fluxbox menu file, a **.desktop** file in **$HOME/.local/share/applications**, or similar). Alternatively, you can use the [xrdb](https://www.x.org/releases/X11R7.7/doc/man/man1/xrdb.1.xhtml) tool to manage X-related resources (but that's out of scope for this article). ## Home is where the customization is Customizing your Linux machine doesn't mean you have to learn how to program. You can and should make small but meaningful changes to make your digital home feel that much more comfortable. And there's no better place to start than the terminal!
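As a final aside, the same flags shown for urxvt above also work in plain xterm; this is just an illustrative invocation for comparison, not an additional requirement:

```
$ xterm -bg black -fg green -cr teal
```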
11,312
5 个 Ansible 运维任务
https://opensource.com/article/19/8/ops-tasks-ansible
2019-09-06T13:43:06
[ "DevOps", "Ansible" ]
https://linux.cn/article-11312-1.html
> > 让 DevOps 少一点,OpsDev 多一点。 > > > ![](/data/attachment/album/201909/06/134240khkca18pkqkjkhsk.jpg) 在这个 DevOps 世界中,看起来开发(Dev)这一半成为了关注的焦点,而运维(Ops)则是这个关系中被遗忘的另一半。这几乎就好像是领头的开发告诉尾随的运维做什么,几乎所有的“运维”都是开发说要做的。因此,运维被抛到后面,降级到了替补席上。 我想看到更多的 OpsDev。因此,让我们来看看 Ansible 在日常的运维中可以帮助你什么。 ![Job templates](/data/attachment/album/201909/06/134315p4j9rj85j2ricztj.png "Job templates") 我选择在 [Ansible Tower](https://www.ansible.com/products/tower) 中展示这些方案,因为我认为用户界面 (UI) 可以增色大多数的任务。如果你想模拟测试,你可以在 Tower 的上游开源版本 [AWX](https://github.com/ansible/awx) 中测试它。 ### 管理用户 在大规模环境中,你的用户将集中在活动目录或 LDAP 等系统中。但我敢打赌,仍然存在许多包含大量的静态用户的全负荷环境。Ansible 可以帮助你将这些分散的环境集中到一起。*社区*已为我们解决了这个问题。看看 [Ansible Galaxy](https://galaxy.ansible.com) 中的 [users](https://galaxy.ansible.com/singleplatform-eng/users) 角色。 这个角色的聪明之处在于它允许我们通过*数据*管理用户,而无需更改运行逻辑。 ![User data](/data/attachment/album/201909/06/134319qgx28xmh42kkxd4m.png "User data") 通过简单的数据结构,我们可以在系统上添加、删除和修改静态用户。这很有用。 ### 管理 sudo 提权有[多种形式](https://docs.ansible.com/ansible/latest/plugins/become.html),但最流行的是 [sudo](https://www.sudo.ws/intro.html)。通过每个 `user`、`group` 等离散文件来管理 sudo 相对容易。但一些人对给予特权感到紧张,并倾向于有时限地给予提权。因此[下面是一种方案](https://github.com/phips/ansible-demos/tree/master/roles/sudo),它使用简单的 `at` 命令对授权访问设置时间限制。 ![Managing sudo](/data/attachment/album/201909/06/134321mazkpfkpyk8kvhta.png "Managing sudo") ### 管理服务 给入门级运维团队提供[菜单](https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#surveys)以便他们可以重启某些服务不是很好吗?看下面! ![Managing services](/data/attachment/album/201909/06/134323pz2hh6vhugia6v63.png "Managing services") ### 管理磁盘空间 这有[一个简单的角色](https://github.com/phips/ansible-demos/tree/master/roles/disk),可在特定目录中查找字节大于某个大小的文件。在 Tower 中这么做时,启用[回调](https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#provisioning-callbacks)有额外的好处。想象一下,你的监控方案发现文件系统已超过 X% 并触发 Tower 中的任务以找出是什么文件导致的。 ![Managing disk space](/data/attachment/album/201909/06/134325ss6usssysszws6uy.png "Managing disk space") ### 调试系统性能问题 [这个角色](https://github.com/phips/ansible-demos/tree/master/roles/gather_debug)相当简单:它运行一些命令并打印输出。细节在最后输出,让你 —— 系统管理员快速浏览一眼。另外可以使用 [正则表达式](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#regular-expression-filters) 在输出中找到某些条件(比如说 CPU 占用率超过 80%)。 ![Debugging system performance](/data/attachment/album/201909/06/134332pxb8i0jm0hvjibcb.png "Debugging system performance") ### 总结 我已经录制了这五个任务的简短视频。你也可以在 Github 上找到[所有代码](https://github.com/phips/ansible-demos)! --- via: <https://opensource.com/article/19/8/ops-tasks-ansible> 作者:[Mark Phillips](https://opensource.com/users/markphttps://opensource.com/users/adminhttps://opensource.com/users/alsweigarthttps://opensource.com/users/belljennifer43) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
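(补充一个小示例)如果手头没有 Tower/AWX,上面“管理服务”这类任务用 Ansible 的临时命令同样可以完成。以下仅为示意:主机组 `web` 与服务名 `httpd` 均为假设值,并且需要你已配置好清单文件:

```
# 以提权方式在 web 主机组的所有机器上重启 httpd 服务
ansible web -b -m service -a "name=httpd state=restarted"
```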
200
OK
In this DevOps world, it sometimes appears the Dev half gets all the limelight, with Ops the forgotten half in the relationship. It's almost as if the leading Dev tells the trailing Ops what to do, with almost everything "Ops" being whatever Dev says it should be. Ops, therefore, gets left behind, punted to the back, relegated to the bench. I'd like to see more OpsDev happening. So let's look at a handful of things Ansible can help you do with your day-to-day Ops life. ![Job templates Job templates](https://opensource.com/sites/default/files/uploads/00_templates.png) I've chosen to present these solutions within [Ansible Tower](https://www.ansible.com/products/tower) because I think a user interface (UI) adds value to most of these tasks. If you want to emulate this, you can test it out in [AWX](https://github.com/ansible/awx), the upstream open source version of Tower. ## Manage users In a large-scale environment, your users would be centralised in a system like Active Directory or LDAP. But I bet there are still a whole load of environments with lots of static users in them, too. Ansible can help you centralise that decentralised problem. And *the community* has already solved it for us. Meet the [Ansible Galaxy](https://galaxy.ansible.com) role **users**. What's clever about this role is it allows us to manage users via *data*—no changes to play logic required. ![User data User data](https://opensource.com/sites/default/files/uploads/01_users_data.png) With simple data structures, we can add, remove and modify static users on a system. Very useful. ## Manage sudo Privilege escalation comes [in many forms](https://docs.ansible.com/ansible/latest/plugins/become.html), but one of the most popular is [sudo](https://www.sudo.ws/intro.html). It's relatively easy to manage sudo through discrete files per user, group, etc. But some folk get nervous about giving privilege escalation willy-nilly and prefer it to be time-bound. So [here's a take on that](https://github.com/phips/ansible-demos/tree/master/roles/sudo), using the simple **at** command to put a time limit on the granted access. ![Managing sudo Managing sudo](https://opensource.com/sites/default/files/uploads/02_sudo.png) ## Manage services Wouldn't it be great to give a [menu](https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#surveys) to an entry-level ops team so they could just restart certain services? Voila! ![Managing services Managing services](https://opensource.com/sites/default/files/uploads/03_services.png) ## Manage disk space Here's [a simple role](https://github.com/phips/ansible-demos/tree/master/roles/disk) that can be used to look for files larger than size *N* in a particular directory. Doing this in Tower, we have the bonus of enabling [callbacks](https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#provisioning-callbacks). Imagine your monitoring solution spotting a filesystem going over X% full and triggering a job in Tower to go find out what files are the cause. ![Managing disk space Managing disk space](https://opensource.com/sites/default/files/uploads/04_diskspace.png) ## Debug a system performance problem [This role](https://github.com/phips/ansible-demos/tree/master/roles/gather_debug) is fairly simple: it runs some commands and prints the output. The details are printed at the end of the run for you, sysadmin, to cast your skilled eyes over. 
Bonus homework: use [regexs](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#regular-expression-filters) to find certain conditions in the output (CPU hog over 80%, say). ![Debugging system performance Debugging system performance](https://opensource.com/sites/default/files/uploads/05_debug.png) ## Summary I've recorded a short video of these five tasks in action. You can find all [the code on GitHub](https://github.com/phips/ansible-demos) too!
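For a rough sense of what the disk-space role boils down to, here is a plain-shell equivalent; the path and size threshold are placeholders rather than the role's actual defaults:

```
# List files larger than 100MB under /var/log, with human-readable sizes
find /var/log -type f -size +100M -exec ls -lh {} +
```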
11,314
Bash shell 的诞生
https://opensource.com/19/9/command-line-heroes-bash
2019-09-07T14:23:16
[ "代码英雄", "bash" ]
/article-11314-1.html
> > 本周的《代码英雄》播客深入研究了最广泛使用的、已经成为事实标准的脚本语言,它来自于自由软件基金会及其作者的早期灵感。 > > > ![Listen to the Command Line Heroes Podcast](/data/attachment/album/201909/07/142321vwrwoq0ou0kqu48q.png "Listen to the Command Line Heroes Podcast") 对于任何从事于系统管理员方面的人来说,Shell 脚本编程是一门必不可少的技能,而如今人们编写脚本的主要 shell 是 Bash。Bash 是几乎所有的 Linux 发行版和现代 MacOS 版本的默认配置,也很快就会成为 [Windows 终端](https://devblogs.microsoft.com/commandline/introducing-windows-terminal/)的原生部分。你可以说 Bash 无处不在。 那么它是如何做到这一点的呢?本周的《[代码英雄](https://www.redhat.com/en/command-line-heroes)》播客将通过询问编写那些代码的人来深入研究这个问题。 ### 肇始于 Unix 像所有编程方面的东西一样,我们必须追溯到 Unix。shell 的简短历史是这样的:1971 年,Ken Thompson 发布了第一个 Unix shell:Thompson shell。但是,脚本用户所能做的存在严重限制,这意味着严重制约了自动化以及整个 IT 运营领域。 这个[奇妙的研究](https://developer.ibm.com/tutorials/l-linux-shells/)概述了早期尝试脚本的挑战: > > 类似于它在 Multics 中的前身,这个 shell(`/bin/sh`)是一个在内核外执行的独立用户程序。诸如通配(参数扩展的模式匹配,例如 `*.txt`)之类的概念是在一个名为 `glob` 的单独的实用程序中实现的,就像用于计算条件表达式的 `if` 命令一样。这种分离使 shell 变得更小,才不到 900 行的 C 源代码。 > > > shell 引入了紧凑的重定向(`<`、`>` 和 `>>`)和管道(`|` 或 `^`)语法,它们已经存在于现代 shell 中。你还可以找到对调用顺序命令(`;`)和异步命令(`&`)的支持。 > > > Thompson shell 缺少的是编写脚本的能力。它的唯一目的是作为一个交互式 shell(命令解释器)来调用命令和查看结果。 > > > 随着对终端使用的增长,对自动化的兴趣随之增长。 ### Bourne shell 前进一步 在 Thompson 发布 shell 六年后,1977 年,Stephen Bourne 发布了 Bourne shell,旨在解决Thompson shell 中的脚本限制。(Chet Ramey 是自 1990 年以来 Bash 语言的主要维护者,在这一集的《代码英雄》中讨论了它)。作为 Unix 系统的一部分,这是这个来自贝尔实验室的技术的自然演变。 Bourne 打算做什么不同的事情?[研究员 M. Jones](https://developer.ibm.com/tutorials/l-linux-shells/) 很好地概述了它: > > Bourne shell 有两个主要目标:作为命令解释器以交互方式执行操作系统的命令,和用于脚本编程(编写可通过 shell 调用的可重用脚本)。除了替换 Thompson shell,Bourne shell 还提供了几个优于其前辈的优势。Bourne 将控制流、循环和变量引入脚本,提供了更具功能性的语言来(以交互式和非交互式)与操作系统交互。该 shell 还允许你使用 shell 脚本作为过滤器,为处理信号提供集成支持,但它缺乏定义函数的能力。最后,它结合了我们今天使用的许多功能,包括命令替换(使用后引号)和 HERE 文档(以在脚本中嵌入保留的字符串文字)。 > > > Bourne 在[之前的一篇采访中](https://www.computerworld.com.au/article/279011/-z_programming_languages_bourne_shell_sh)这样描述它: > > 最初的 shell (编程语言)不是一种真正的语言;它是一种记录 —— 一种从文件中线性执行命令序列的方法,唯一的控制流的原语是 `GOTO` 到一个标签。Ken Thompson 所编写的这个最初的 shell 的这些限制非常重要。例如,你无法简单地将命令脚本用作过滤器,因为命令文件本身是标准输入。而在过滤器中,标准输入是你从父进程继承的,不是命令文件。 > > > 最初的 shell 很简单,但随着人们开始使用 Unix 进行应用程序开发和脚本编写,它就太有限了。它没有变量、它没有控制流,而且它的引用能力非常不足。 > > > 对于脚本编写者来说,这个新 shell 是一个巨大的进步,但前提是你可以使用它。 ### 以自由软件来重新构思 Bourne Shell 在此之前,这个占主导地位的 shell 是由贝尔实验室拥有和管理的专有软件。幸运的话,你的大学可能有权访问 Unix shell。但这种限制性访问远非自由软件基金会(FSF)想要实现的世界。 Richard Stallman 和一群志同道合的开发人员那时正在编写所有的 Unix 功能,其带有可以在 GNU 许可证下免费获得的许可。其中一个开发人员的任务是制作一个 shell,那位开发人员是 Brian Fox。他对他的任务的讲述十分吸引我。正如他在播客上所说: > > 它之所以如此具有挑战性,是因为我们必须忠实地模仿 Bourne shell 的所有行为,同时允许扩展它以使其成为一个供人们使用的更好工具。 > > > 而那时也恰逢人们在讨论 shell 标准是什么的时候。在这一历史背景和将来的竞争前景下,流行的 Bourne shell 被重新构想,并再次重生。 ### 重新打造 Bourne Shell 自由软件的使命和竞争这两个催化剂使重制的 Bourne shell(Bash)具有了生命。和之前不同的是,Fox 并没有把 shell 放到自己的名字之后命名,他专注于从 Unix 到自由软件的演变。(虽然 Fox Shell 这个名字看起来要比 Fish shell 更适合作为 fsh 命令 #missedopportunity)。这个命名选择似乎符合他的个性。正如 Fox 在剧集中所说,他甚至对个人的荣耀也不感兴趣;他只是试图帮助编程文化发展。然而,他并不是一个优秀的双关语。 而 Bourne 也并没有因为他命名 shell 的文字游戏而感到被轻视。Bourne 讲述了一个故事,有人走到他面前,并在会议上给了他一件 Bash T 恤,而那个人是 Brian Fox。 | Shell | 发布于 | 创造者 | | --- | --- | --- | | Thompson Shell | 1971 | Ken Thompson | | Bourne Shell | 1977 | Stephen Bourne | | Bourne-Again Shell | 1989 | Brian Fox | 随着时间的推移,Bash 逐渐成长。其他工程师开始使用它并对其设计进行改进。事实上,多年后,Fox 坚定地认为学会放弃控制 Bash 是他一生中最重要的事情之一。随着 Unix 让位于 Linux 和开源软件运动,Bash 成为开源世界的至关重要的脚本语言。这个伟大的项目似乎超出了单一一个人的愿景范围。 ### 我们能从 shell 中学到什么? 
shell 是一项技术,它是笔记本电脑日常使用中的一个组成部分,你很容易忘记它也需要发明出来。从 Thompson 到 Bourne 再到 Bash,shell 的故事为我们描绘了一些熟悉的结论: * 有动力的人可以在正确的使命中取得重大进展。 * 我们今天所依赖的大部分内容都建立在我们行业中仍然活着的那些传奇人物打下的基础之上。 * 能够生存下来的软件超越了其原始创作者的愿景。 代码英雄在全部的第三季中讲述了编程语言,并且正在接近它的尾声。[请务必订阅,来了解你想知道的有关编程语言起源的各种内容](https://www.redhat.com/en/command-line-heroes),我很乐意在下面的评论中听到你的 shell 故事。 --- via: <https://opensource.com/19/9/command-line-heroes-bash> 作者:[Matthew Broberg](https://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberg) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
11,315
从 Yum 更新中排除特定/某些包的三种方法
https://www.2daygeek.com/redhat-centos-yum-update-exclude-specific-packages/
2019-09-07T14:58:28
[ "yum" ]
https://linux.cn/article-11315-1.html
![](/data/attachment/album/201909/07/145817rj7khqkbqwqx7sb9.jpg) 作为系统更新的一部分,你也许需要在基于 Red Hat 系统中由于应用依赖排除一些软件包。 如果是,如何排除?可以采取多少种方式?有三种方式可以做到,我们会在本篇中教你这三种方法。 包管理器是一组工具,它允许用户在 Linux 系统中轻松管理包。它能让用户在 Linux 系统中安装、更新/升级、删除、查询、重新安装和搜索软件包。 对于基于 Red Hat 的系统,我们使用 [yum 包管理器](https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/) 和 [rpm 包管理器](https://www.2daygeek.com/rpm-command-examples/) 进行包管理。 ### 什么是 yum? yum 代表 “Yellowdog Updater, Modified”。Yum 是用于 rpm 系统的自动更新程序和包安装/卸载器。 它在安装包时自动解决依赖关系。 ### 什么是 rpm? rpm 代表 “Red Hat Package Manager”,它是一款用于 Red Hat 系统的功能强大的包管理工具。 RPM 指的是 `.rpm` 文件格式,它包含已编译的软件和必要的库。 你可能有兴趣阅读以下与本主题相关的文章。如果是的话,请进入相应的链接。 * [如何检查 Red Hat(RHEL)和 CentOS 系统上的可用安全更新](https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/) * [在 Red Hat(RHEL)和 CentOS 系统上安装安全更新的四种方法](https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/) * [在 Redhat(RHEL)和 CentOS 系统上检查或列出已安装的安全更新的两种方法](https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/) ### 方法 1:手动或临时用 yum 命令排除包 我们可以在 yum 中使用 `--exclude` 或 `-x` 开关来阻止 yum 命令获取特定包的更新。 我可以说,这是一种临时方法或按需方法。如果你只想将特定包排除一次,那么我们可以使用此方法。 以下命令将更新除 kernel 之外的所有软件包。 要排除单个包: ``` # yum update --exclude=kernel 或者 # yum update -x 'kernel' ``` 要排除多个包。以下命令将更新除 kernel 和 php 之外的所有软件包。 ``` # yum update --exclude=kernel* --exclude=php* 或者 # yum update --exclude httpd,php ``` ### 方法 2:在 yum 命令中永久排除软件包 这是永久性方法,如果你经常执行修补程序更新,那么可以使用此方法。 为此,请在 `/etc/yum.conf` 中添加相应的软件包以永久禁用软件包更新。 添加后,每次运行 `yum update` 命令时都不需要指定这些包。此外,这可以防止任何意外更新这些包。 ``` # vi /etc/yum.conf [main] cachedir=/var/cache/yum/$basearch/$releasever keepcache=0 debuglevel=2 logfile=/var/log/yum.log exactarch=1 obsoletes=1 gpgcheck=1 plugins=1 installonly_limit=3 exclude=kernel* php* ``` ### 方法 3:使用 Yum versionlock 插件排除包 这也是与上面类似的永久方法。Yum versionlock 插件允许用户通过 `yum` 命令锁定指定包的更新。 为此,请运行以下命令。以下命令将从 `yum update` 中排除 freetype 包。 或者,你可以直接在 `/etc/yum/pluginconf.d/versionlock.list` 中添加条目。 ``` # yum versionlock add freetype Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock Adding versionlock on: 0:freetype-2.8-12.el7 versionlock added: 1 ``` 运行以下命令来检查被 versionlock 插件锁定的软件包列表。 ``` # yum versionlock list Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock 0:freetype-2.8-12.el7.* versionlock list done ``` 运行以下命令清空该列表。 ``` # yum versionlock clear ``` --- via: <https://www.2daygeek.com/redhat-centos-yum-update-exclude-specific-packages/> 作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
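(补充一个小示例)无论采用上述哪种方法,都可以在不真正执行更新的情况下先验证排除是否生效,以下命令仅为示意:

```
# 预览将要更新的包;若排除生效,列表中不应再出现 kernel 相关条目
yum check-update --exclude=kernel\*
# 若使用 versionlock 插件,可再次确认锁定列表
yum versionlock list
```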
404
Not Found
null
11,316
5 个开源的速读应用
https://opensource.com/article/19/8/speed-reading-open-source
2019-09-07T15:13:00
[ "速读" ]
/article-11316-1.html
> > 使用这五个应用训练自己更快地阅读文本。 > > >

![](/data/attachment/album/201909/07/151320r39o26onsp3sq1qo.jpg)

英国散文家和政治家 [Joseph Addison](https://en.wikipedia.org/wiki/Joseph_Addison) 曾经说过,“读书益智,运动益体。”如今,我们大多数人(如果不是全部)都是通过在计算机显示器、电视屏幕、移动设备、街道标志、报纸和杂志上阅读,以及在工作场所和学校阅读论文来训练我们的大脑。

鉴于我们每天都会接收大量的书面信息,通过一些特定练习来训练大脑更快地阅读似乎是有益的。这些练习会挑战我们固有的阅读习惯,教会我们吸收更多的内容和数据。学习这些技能的目的不仅仅是浏览文本,因为没有理解的阅读就是浪费精力。目标是提高你的阅读速度,同时仍然保持较高的理解水平。

### 阅读和处理输入

在深入探讨速读之前,让我们先来看看阅读过程。根据法国眼科医生 Louis Emile Javal 的说法,阅读分为三个步骤:

1. 固定
2. 处理
3. <ruby> <a href="https://en.wikipedia.org/wiki/Saccade"> 扫视 </a> <rt> saccade </rt></ruby>

在第一步,我们确定文本中的固定点,称为最佳识别点。在第二步中,我们在眼睛固定的同时引入(处理)新信息。最后,我们改变注视点的位置,这是一种称为扫视的操作,此时不会获取任何新信息。

在实践中,更快的读者与普通读者之间的主要差异在于:固定时间短于平均值、扫视距离更长、重读更少。

### 阅读练习

阅读对人类来说并不是一个自然的过程,因为它在人类历史上是相当晚近才出现的。第一个书写系统大约创建于 5000 年前,这点时间还不足以让人类进化成阅读机器。因此,我们必须锻炼我们的阅读技巧,在这项基本的沟通任务中变得更加娴熟和高效。

第一项练习包括减少默读,也被称为无声发音,这是一种在阅读时在内心发音的习惯。它是一个减慢阅读速度的自然过程,因为阅读速度会受限于语速。减少默读的关键是只读出其中一部分单词。一种方法是用其他任务来占据内部声音,例如嚼口香糖。

第二项练习包括减少回读,或称为重读。回读是一种懒惰的机制,因为我们的大脑可以随时重读任何材料,从而降低了注意力。

### 5 个开源应用来训练你的大脑

有几个有趣的开源应用可用于锻炼你的阅读速度。

一个是 [Gritz](https://github.com/jeffkowalski/gritz),它是一个开源文件阅读器,它一次弹出一个单词,以减少重读。它适用于 Linux、Windows 和 MacOS,并在 GPL 许可证下发布,因此你可以随意使用它。

其他选择包括 [Spray Speed-Reader](https://github.com/chaimpeck/spray),一个用 JavaScript 编写的开源速读应用,以及 [Sprits-it!](https://github.com/the-happy-hippo/sprits-it),一个可以快速阅读网页的开源 Web 应用。

对于 Android 用户,[Comfort Reader](https://github.com/mschlauch/comfortreader) 是一个开源的速读应用。它可以在 [F-droid](https://f-droid.org/packages/com.mschlauch.comfortreader/) 和 [Google Play](https://play.google.com/store/apps/details?id=com.mschlauch.comfortreader) 应用商店中找到。

我最喜欢的应用是 [Speedread](https://github.com/pasky/speedread),它是一个简单的终端程序,可以在最佳阅读点逐词显示文本。要安装它,请在你的设备上克隆它的 GitHub 仓库,然后输入相应的命令,以你喜好的每分钟单词数(WPM)来阅读文档。默认速率为 250 WPM。例如,要以 400 WPM 阅读 `your_text_file.txt`,你应该输入:

```
cat your_text_file.txt | ./speedread -w 400
```

下面是该程序的运行界面:

![Speedread demo](/data/attachment/album/201909/07/151426rm11lgd5jt5x194g.gif "Speedread demo")

由于你可能不会只阅读[纯文本](https://plaintextproject.online/),因此可以使用 [Pandoc](https://opensource.com/article/18/9/intro-pandoc) 将文件从标记格式转换为文本格式(文末附有一个组合两者的补充示例)。你还可以使用 Android 终端模拟器 [Termux](https://termux.com/) 在 Android 设备上运行 Speedread。

### 其他方案

对于开源社区来说,构建一个专门通过特定练习(比如减少默读和重读)来提高阅读速度的解决方案,将是一个有趣的项目。我相信这样的项目会非常有益,因为在当今信息丰富的环境中,提高阅读速度非常有价值。

---

via: <https://opensource.com/article/19/8/speed-reading-open-source>

作者:[Jaouhari Youssef](https://opensource.com/users/jaouhari) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
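补充一个小示意(非原文内容):正文提到可以用 Pandoc 把标记格式的文件转换为纯文本,再交给 Speedread 阅读。两者可以直接通过管道组合,其中的文件名仅为举例:

```
# 将 Markdown 文件转为纯文本,并以 300 WPM 逐词显示
# (假设 speedread 脚本已克隆到当前目录并可执行)
pandoc -t plain your_article.md | ./speedread -w 300
```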
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
11,317
使用 Python 学习面向对象的编程
https://opensource.com/article/19/7/get-modular-python-classes
2019-09-08T09:11:00
[ "Python" ]
https://linux.cn/article-11317-1.html
> > 使用 Python 类使你的代码变得更加模块化。 > > >

![](/data/attachment/album/201909/08/091142y2bdbboctw7xdbjq.jpg)

在我上一篇文章中,我解释了如何通过使用函数、创建模块或者两者一起来[使 Python 代码更加模块化](/article-11295-1.html)。函数对于避免重复多次使用的代码非常有用,而模块可以确保你在不同的项目中复用代码。但是模块化还有另一种方法:类。

如果你已经听过<ruby> 面向对象编程 <rt> object-oriented programming </rt></ruby>(OOP)这个术语,那么你可能会对类的用途有一些概念。程序员倾向于将类视为一个虚拟对象,有时与物理世界中的某些东西直接相关,有时则作为某种编程概念的表现形式。无论哪种理解,核心思想都是:当你想在程序中创建可供你或程序其他部分与之交互的“对象”时,就可以创建一个类。

### 没有类的模板

假设你正在编写一个以幻想世界为背景的游戏,并且你需要这个应用程序能够涌现出各种坏蛋来给玩家的生活带来一些刺激。了解了很多关于函数的知识后,你可能会认为这听起来像是函数的一个教科书案例:需要经常重复使用,但只需编写一次、并能在调用时通过变量产生变化的代码。

下面是一个纯粹基于函数的敌人生成器实现的例子:

```
#!/usr/bin/env python3

import random

def enemy(ancestry,gear):
    enemy=ancestry
    weapon=gear
    hp=random.randrange(0,20)
    ac=random.randrange(0,20)
    return [enemy,weapon,hp,ac]

def fight(tgt):
    print("You take a swing at the " + tgt[0] + ".")
    hit=random.randrange(0,20)
    if hit > tgt[3]:
        print("You hit the " + tgt[0] + " for " + str(hit) + " damage!")
        tgt[2] = tgt[2] - hit
    else:
        print("You missed.")

foe=enemy("troll","great axe")
print("You meet a " + foe[0] + " wielding a " + foe[1])
print("Type the a key and then RETURN to attack.")

while True:
    action=input()

    if action.lower() == "a":
        fight(foe)

    if foe[2] < 1:
        print("You killed your foe!")
    else:
        print("The " + foe[0] + " has " + str(foe[2]) + " HP remaining")
```

`enemy` 函数创造了一个具有多个属性的敌人,例如谱系、武器、生命值和防御等级。它返回一个包含各个属性的列表,代表这个敌人的全部特征。

从某种意义上说,这段代码创建了一个对象,即使它还没有使用类。程序员将这个 `enemy` 称为*对象*,因为该函数的结果(本例中是一个包含字符串和整数的列表)表示游戏中一个单独但复杂的*东西*。也就是说,列表中的字符串和整数不是任意的:它们一起描述了一个虚拟对象。

在编写描述符集合时,你可以使用变量,以便随时使用它们来生成敌人。这有点像模板。

在示例代码中,当需要对象的属性时,会检索相应的列表项。例如,要获取敌人的谱系,代码会查询 `foe[0]`;对于生命值,会查询 `foe[2]`,以此类推。

这种方法没有什么不妥,代码按预期运行。你可以添加更多不同类型的敌人,创建一个敌人类型列表,并在敌人创建期间从列表中随机选择,等等,它工作得很好。实际上,[Lua](https://opensource.com/article/17/4/how-program-games-raspberry-pi) 非常有效地利用这个原理来近似实现了一个面向对象模型。

然而,有时候对象不仅仅是属性列表。

### 使用对象

在 Python 中,一切都是对象。你在 Python 中创建的任何东西都是某个预定义模板的*实例*。甚至基本的字符串和整数都是 Python `type` 类的衍生物。你可以在交互式 Python shell 中亲自验证:

```
>>> foo=3
>>> type(foo)
<class 'int'>
>>> foo="bar"
>>> type(foo)
<class 'str'>
```

当一个对象由一个类定义时,它不仅仅是一个属性的集合,Python 类还有各自的函数。从逻辑上讲,这很方便,因为只涉及某个对象类的操作就包含在该对象的类中。

在示例代码中,`fight` 的代码是主应用程序的一个函数。这对于一个简单的游戏来说是可行的,但对于一个复杂的游戏来说,世界中不仅仅有玩家和敌人,还可能有城镇居民、牲畜、建筑物、森林等等,它们都不需要使用战斗功能。将战斗代码放在敌人的类中意味着你的代码更有条理,在一个复杂的应用程序中,这是一个重要的优势。

此外,每个类都有特权访问自己的本地变量。例如,像敌人的生命值这样的数据,除非通过敌人类自己的某些函数,否则是不应该被改变的。游戏中的一只随机蝴蝶不应该意外地将敌人的生命值降低到 0。理想情况下,即使没有类,也不会发生这种情况。但是在具有大量活动部件的复杂应用程序中,确保不需要相互交互的部件永远不会相互影响,是一个非常有用的技巧。

Python 类也受垃圾收集的影响。当不再使用类的实例时,它将被移出内存。你可能永远不知道这种情况什么时候会发生,但是你往往知道它什么时候没有发生,因为你的应用程序会占用更多的内存,运行得也比应有的速度慢。将数据集隔离到类中可以帮助 Python 跟踪哪些数据正在使用,哪些不再需要了。

### 优雅的 Python

下面是一个同样简单的战斗游戏,使用了 `Enemy` 类:

```
#!/usr/bin/env python3

import random

class Enemy():
    def __init__(self,ancestry,gear):
        self.enemy=ancestry
        self.weapon=gear
        self.hp=random.randrange(10,20)
        self.ac=random.randrange(12,20)
        self.alive=True

    def fight(self,tgt):
        print("You take a swing at the " + self.enemy + ".")
        hit=random.randrange(0,20)

        if self.alive and hit > self.ac:
            print("You hit the " + self.enemy + " for " + str(hit) + " damage!")
            self.hp = self.hp - hit
            print("The " + self.enemy + " has " + str(self.hp) + " HP remaining")
        else:
            print("You missed.")

        if self.hp < 1:
            self.alive=False

# 游戏开始
foe=Enemy("troll","great axe")
print("You meet a " + foe.enemy + " wielding a " + foe.weapon)

# 主函数循环
while True:

    print("Type the a key and then RETURN to attack.")

    action=input()

    if action.lower() == "a":
        foe.fight(foe)

    if foe.alive == False:
        print("You have won...this time.")
        exit()
```

这个版本的游戏将敌人作为一个包含相同属性(谱系、武器、生命值和防御)的对象来处理,并添加了一个新的属性来衡量敌人是否已被击败,以及一个战斗函数。

类的第一个函数是一个特殊的函数,在 Python 中称为 `init` 或初始化函数。这类似于其他语言中的[构造器](https://opensource.com/article/19/6/what-java-constructor),它创建了类的一个实例,你可以通过它的属性和调用类时使用的任何变量来识别它(示例代码中的 `foe`)。

### Self 和类实例

类的函数接受一种你在类之外看不到的新形式的输入:`self`。如果不包含 `self`,那么当你调用类函数时,Python 无法知道要使用类的*哪个*实例。这就像在一间满是兽人的房间里喊“我要和兽人决斗”来向某一个兽人发起挑战:没有人知道你指的是谁,于是所有兽人都冲了上来。

![Image of an Orc, CC-BY-SA by Buch on opengameart.org](/data/attachment/album/201909/08/091202o7lzrliwfprtwilt.jpg "CC-BY-SA by Buch on opengameart.org")

*CC-BY-SA by Buch on opengameart.org*

类中创建的每个属性都以 `self` 符号作为前缀,该符号将变量标识为类的属性。一旦派生出类的实例,就用表示该实例的变量替换掉 `self` 前缀。使用这个技巧,你就可以在一间满是兽人的房间里说“我要和 gorblar.orc 决斗”来挑战某一个兽人。当兽人 Gorblar 听到 “gorblar.orc” 时,它就知道你指的是谁(它自己),所以你得到的是一场公平的决斗而不是一场混战。在 Python 中:

```
gorblar=Enemy("orc","sword")
print("The " + gorblar.enemy + " has " + str(gorblar.hp) + " remaining.")
```

要获取敌人的信息,检索的是类属性(`gorblar.enemy`、`gorblar.hp`,或你需要的对象的任何值),而不是像函数示例中那样查询 `foe[0]` 或 `gorblar[0]`。

### 本地变量

如果类中的变量没有以 `self` 关键字作为前缀,那么它就是一个局部变量,就像在函数中一样。例如,无论你做什么,你都无法访问 `Enemy.fight` 类之外的 `hit` 变量:

```
>>> print(foe.hit)
Traceback (most recent call last):
  File "./enclass.py", line 38, in <module>
    print(foe.hit)
AttributeError: 'Enemy' object has no attribute 'hit'

>>> print(foe.fight.hit)
Traceback (most recent call last):
  File "./enclass.py", line 38, in <module>
    print(foe.fight.hit)
AttributeError: 'function' object has no attribute 'hit'
```

`hit` 变量包含在 Enemy 类中,并且只在战斗中发挥作用的那段时间内“存活”。

### 更模块化

本例中,类与主应用程序位于同一个文本文件中。在一个复杂的游戏中,我们更容易将每个类看作是独立的应用程序。当多个开发人员处理同一个应用程序时,你会看到这一点:一个开发人员负责一个类,另一个开发人员负责主程序,只要他们彼此沟通好这个类必须具有什么属性,就可以并行地开发这两个代码块。

要使这个示例游戏模块化,可以把它拆分为两个文件:一个用于主应用程序,另一个用于类。如果它是一个更复杂的应用程序,你可能每个类都有一个文件,或每个逻辑类组有一个文件(例如,用于建筑物的文件,用于自然环境的文件,用于敌人或 NPC 的文件等)。

将只包含 `Enemy` 类的一个文件保存为 `enemy.py`,将另一个包含其他内容的文件保存为 `main.py`。

以下是 `enemy.py`:

```
import random

class Enemy():
    def __init__(self,ancestry,gear):
        self.enemy=ancestry
        self.weapon=gear
        self.hp=random.randrange(10,20)
        self.stg=random.randrange(0,20)
        self.ac=random.randrange(0,20)
        self.alive=True

    def fight(self,tgt):
        print("You take a swing at the " + self.enemy + ".")
        hit=random.randrange(0,20)

        if self.alive and hit > self.ac:
            print("You hit the " + self.enemy + " for " + str(hit) + " damage!")
            self.hp = self.hp - hit
            print("The " + self.enemy + " has " + str(self.hp) + " HP remaining")
        else:
            print("You missed.")

        if self.hp < 1:
            self.alive=False
```

以下是 `main.py`:

```
#!/usr/bin/env python3

import enemy as en

# game start
foe=en.Enemy("troll","great axe")
print("You meet a " + foe.enemy + " wielding a " + foe.weapon)

# main loop
while True:

    print("Type the a key and then RETURN to attack.")

    action=input()

    if action.lower() == "a":
        foe.fight(foe)

    if foe.alive == False:
        print("You have won...this time.")
        exit()
```

导入模块 `enemy.py` 的语句很特别:引用类文件的名称时不带 `.py` 扩展名,后跟你选择的命名空间指示符(例如 `import enemy as en`)。这个指示符是你在代码中调用类时使用的。也就是说,你需要使用 `en.Enemy`,而不是只使用 `Enemy()`。

所有这些文件名都是任意的,尽管这样的命名在惯例上并不罕见。将充当应用程序中枢的文件命名为 `main.py` 是一个常见约定;而装满类的文件通常以小写字母命名,其中的类都以大写字母开头。是否遵循这些约定不会影响应用程序的运行方式,但它确实使经验丰富的 Python 程序员更容易快速理解应用程序的工作方式。

在如何构建代码方面有一些灵活性。例如,使用该示例代码,两个文件必须位于同一目录中。如果你想将类打包为模块,那么必须创建一个目录(例如名为 `mybad` 的目录),并将你的类移入其中。在 `main.py` 中,你的 `import` 语句稍有变化:

```
from mybad import enemy as en
```

两种方法都会产生相同的结果,但如果你创建的类足够通用,你认为其他开发人员可以在他们的项目中使用它们,那么后者更好。

无论你选择哪种方式,都可以启动游戏的模块化版本:

```
$ python3 ./main.py
You meet a troll wielding a great axe
Type the a key and then RETURN to attack.
a
You take a swing at the troll.
You missed.
Type the a key and then RETURN to attack.
a
You take a swing at the troll.
You hit the troll for 8 damage!
The troll has 4 HP remaining
Type the a key and then RETURN to attack.
a
You take a swing at the troll.
You hit the troll for 11 damage!
The troll has -7 HP remaining
You have won...this time.
```

游戏启动了,它现在更加模块化了(文末另附一个快速验证模块导入的补充示例)。现在你知道了面向对象的应用程序意味着什么,但最重要的是,当你向兽人发起决斗时,你知道要指名道姓。

---

via: <https://opensource.com/article/19/7/get-modular-python-classes>

作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[MjSeven](https://github.com/MjSeven) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
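作为补充,下面是一个快速验证模块化拆分是否成功的命令行示意(非原文内容;假设 `enemy.py` 位于当前目录):

```
# 不运行完整游戏,直接从命令行导入模块并实例化一个敌人
# 输出类似 “orc club 17”,其中生命值是随机的
python3 -c 'import enemy as en; foe = en.Enemy("orc", "club"); print(foe.enemy, foe.weapon, foe.hp)'
```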
200
OK
In my previous article, I explained how to [make Python modular](https://opensource.com/article/19/6/get-modular-python-functions) by using functions, creating modules, or both. Functions are invaluable to avoid repeating code you intend to use several times, and modules ensure that you can use your code across different projects. But there's another component to modularity: the class. If you've heard the term *object-oriented programming*, then you may have some notion of the purpose classes serve. Programmers tend to consider a class as a virtual object, sometimes with a direct correlation to something in the physical world, and other times as a manifestation of some programming concept. Either way, the idea is that you can create a class when you want to create "objects" within a program for you or other parts of the program to interact with. ## Templates without classes Assume you're writing a game set in a fantasy world, and you need this application to be able to drum up a variety of baddies to bring some excitement into your players' lives. Knowing quite a lot about functions, you might think this sounds like a textbook case for functions: code that needs to be repeated often but is written once with allowance for variations when called. Here's an example of a purely function-based implementation of an enemy generator: ``` #!/usr/bin/env python3 import random def enemy(ancestry,gear): enemy=ancestry weapon=gear hp=random.randrange(0,20) ac=random.randrange(0,20) return [enemy,weapon,hp,ac] def fight(tgt): print("You take a swing at the " + tgt[0] + ".") hit=random.randrange(0,20) if hit > tgt[3]: print("You hit the " + tgt[0] + " for " + str(hit) + " damage!") tgt[2] = tgt[2] - hit else: print("You missed.") foe=enemy("troll","great axe") print("You meet a " + foe[0] + " wielding a " + foe[1]) print("Type the a key and then RETURN to attack.") while True: action=input() if action.lower() == "a": fight(foe) if foe[2] < 1: print("You killed your foe!") else: print("The " + foe[0] + " has " + str(foe[2]) + " HP remaining") ``` The **enemy** function creates an enemy with several attributes, such as ancestry, a weapon, health points, and a defense rating. It returns a list of each attribute, representing the sum total of the enemy. In a sense, this code has created an object, even though it's not using a class yet. Programmers call this "enemy" an *object* because the result (a list of strings and integers, in this case) of the function represents a singular but complex *thing* in the game. That is, the strings and integers in the list aren't arbitrary: together, they describe a virtual object. When writing a collection of descriptors, you use variables so you can use them any time you want to generate an enemy. It's a little like a template. In the example code, when an attribute of the object is needed, the corresponding list item is retrieved. For instance, to get the ancestry of an enemy, the code looks at **foe[0]**, for health points, it looks at **foe[2]** for health points, and so on. There's nothing necessarily wrong with this approach. The code runs as expected. You could add more enemies of different types, you could create a list of enemy types and randomly select from the list during enemy creation, and so on. It works well enough, and in fact [Lua](https://opensource.com/article/17/4/how-program-games-raspberry-pi) uses this principle very effectively to approximate an object-oriented model. However, there's sometimes more to an object than just a list of attributes. 
## The way of the object In Python, everything is an object. Anything you create in Python is an *instance* of some predefined template. Even basic strings and integers are derivatives of the Python **type** class. You can witness this for yourself an interactive Python shell: ``` >>> foo=3 >>> type(foo) <class 'int'> >>> foo="bar" >>> type(foo) <class 'str'> ``` When an object is defined by a class, it is more than just a collection of attributes. Python classes have functions all their own. This is convenient, logically, because actions that pertain only to a certain class of objects are contained within that object's class. In the example code, the fight code is a function of the main application. That works fine for a simple game, but in a complex one, there would be more than just players and enemies in the game world. There might be townsfolk, livestock, buildings, forests, and so on, and none of them ever need access to a fight function. Placing code for combat in an enemy class means your code is better organized; and in a complex application, that's a significant advantage. Furthermore, each class has privileged access to its own local variables. An enemy's health points, for instance, isn't data that should ever change except by some function of the enemy class. A random butterfly in the game should not accidentally reduce an enemy's health to 0. Ideally, even without classes, that would never happen, but in a complex application with lots of moving parts, it's a powerful trick of the trade to ensure that parts that don't need to interact with one another never do. Python classes are also subject to garbage collection. When an instance of a class is no longer used, it is moved out of memory. You may never know when this happens, but you tend to notice when it doesn't happen because your application takes up more memory and runs slower than it should. Isolating data sets into classes helps Python track what is in use and what is no longer needed. ## Classy Python Here's the same simple combat game using a class for the enemy: ``` #!/usr/bin/env python3 import random class Enemy(): def __init__(self,ancestry,gear): self.enemy=ancestry self.weapon=gear self.hp=random.randrange(10,20) self.ac=random.randrange(12,20) self.alive=True def fight(self,tgt): print("You take a swing at the " + self.enemy + ".") hit=random.randrange(0,20) if self.alive and hit > self.ac: print("You hit the " + self.enemy + " for " + str(hit) + " damage!") self.hp = self.hp - hit print("The " + self.enemy + " has " + str(self.hp) + " HP remaining") else: print("You missed.") if self.hp < 1: self.alive=False # game start foe=Enemy("troll","great axe") print("You meet a " + foe.enemy + " wielding a " + foe.weapon) # main loop while True: print("Type the a key and then RETURN to attack.") action=input() if action.lower() == "a": foe.fight(foe) if foe.alive == False: print("You have won...this time.") exit() ``` This version of the game handles the enemy as an object containing the same attributes (ancestry, weapon, health, and defense), plus a new attribute measuring whether the enemy has been vanquished yet, as well as a function for combat. The first function of a class is a special function called (in Python) an *init*, or initialization, function. 
This is similar to a [constructor](https://opensource.com/article/19/6/what-java-constructor) in other languages; it creates an instance of the class, which is identifiable to you by its attributes and to whatever variable you use when invoking the class (**foe** in the example code). ## Self and class instances The class' functions accept a new form of input you don't see outside of classes: **self**. If you don't include **self**, then Python has no way of knowing *which* instance of the class to use when you call a class function. It's like challenging a single orc to a duel by saying "I'll fight the orc" in a room full of orcs; nobody knows which one you're referring to, and so bad things happen. ![CC-BY-SA by Buch on opengameart.org Image of an Orc, CC-BY-SA by Buch on opengameart.org](https://opensource.com/sites/default/files/images/orc-buch-opengameart_cc-by-sa.jpg) opensource.com Each attribute created within a class is prepended with the **self** notation, which identifies that variable as an attribute of the class. Once an instance of a class is spawned, you swap out the **self** prefix with the variable representing that instance. Using this technique, you could challenge just one orc to a duel in a room full of orcs by saying "I'll fight the gorblar.orc"; when Gorblar the Orc hears **gorblar.orc**, he knows which orc you're referring to (him*self*), and so you get a fair fight instead of a brawl. In Python: ``` gorblar=Enemy("orc","sword") print("The " + gorblar.enemy + " has " + str(gorblar.hp) + " remaining.") ``` Instead of looking to **foe[0]** (as in the functional example) or **gorblar[0] **for the enemy type, you retrieve the class attribute (**gorblar.enemy** or **gorblar.hp** or whatever value for whatever object you need). ## Local variables If a variable in a class is not prepended with the **self** keyword, then it is a local variable, just as in any function. For instance, no matter what you do, you cannot access the **hit** variable outside the **Enemy.fight** class: ``` >>> print(foe.hit) Traceback (most recent call last): File "./enclass.py", line 38, in <module> print(foe.hit) AttributeError: 'Enemy' object has no attribute 'hit' >>> print(foe.fight.hit) Traceback (most recent call last): File "./enclass.py", line 38, in <module> print(foe.fight.hit) AttributeError: 'function' object has no attribute 'hit' ``` The **hit** variable is contained within the Enemy class, and only "lives" long enough to serve its purpose in combat. ## More modularity This example uses a class in the same text document as your main application. In a complex game, it's easier to treat each class almost as if it were its own self-standing application. You see this when multiple developers work on the same application: one developer works on a class, and the other works on the main program, and as long as they communicate with one another about what attributes the class must have, the two code bases can be developed in parallel. To make this example game modular, split it into two files: one for the main application and one for the class. Were it a more complex application, you might have one file per class, or one file per logical groups of classes (for instance, a file for buildings, a file for natural surroundings, a file for enemies and NPCs, and so on). Save one file containing just the Enemy class as **enemy.py** and another file containing everything else as **main.py**. 
Here's **enemy.py**:

```
import random

class Enemy():
    def __init__(self,ancestry,gear):
        self.enemy=ancestry
        self.weapon=gear
        self.hp=random.randrange(10,20)
        self.stg=random.randrange(0,20)
        self.ac=random.randrange(0,20)
        self.alive=True

    def fight(self,tgt):
        print("You take a swing at the " + self.enemy + ".")
        hit=random.randrange(0,20)

        if self.alive and hit > self.ac:
            print("You hit the " + self.enemy + " for " + str(hit) + " damage!")
            self.hp = self.hp - hit
            print("The " + self.enemy + " has " + str(self.hp) + " HP remaining")
        else:
            print("You missed.")

        if self.hp < 1:
            self.alive=False
```

Here's **main.py**:

```
#!/usr/bin/env python3

import enemy as en

# game start
foe=en.Enemy("troll","great axe")
print("You meet a " + foe.enemy + " wielding a " + foe.weapon)

# main loop
while True:

    print("Type the a key and then RETURN to attack.")

    action=input()

    if action.lower() == "a":
        foe.fight(foe)

    if foe.alive == False:
        print("You have won...this time.")
        exit()
```

Importing the module **enemy.py** is done very specifically with a statement that refers to the file of classes as its name without the **.py** extension, followed by a namespace designator of your choosing (for example, **import enemy as en**). This designator is what you use in the code when invoking a class. Instead of just using **Enemy()**, you preface the class with the designator of what you imported, such as **en.Enemy**.

All of these file names are entirely arbitrary, although not uncommon in principle. It's a common convention to name the part of the application that serves as the central hub **main.py**, and a file full of classes is often named in lowercase with the classes inside it, each beginning with a capital letter. Whether you follow these conventions doesn't affect how the application runs, but it does make it easier for experienced Python programmers to quickly decipher how your application works.

There's some flexibility in how you structure your code. For instance, using the code sample, both files must be in the same directory. If you want to package just your classes as a module, then you must create a directory called, for instance, **mybad** and move your classes into it. In **main.py**, your import statement changes a little:

`from mybad import enemy as en`

Both systems produce the same results, but the latter is best if the classes you have created are generic enough that you think other developers could use them in their projects.

Regardless of which you choose, launch the modular version of the game:

```
$ python3 ./main.py
You meet a troll wielding a great axe
Type the a key and then RETURN to attack.
a
You take a swing at the troll.
You missed.
Type the a key and then RETURN to attack.
a
You take a swing at the troll.
You hit the troll for 8 damage!
The troll has 4 HP remaining
Type the a key and then RETURN to attack.
a
You take a swing at the troll.
You hit the troll for 11 damage!
The troll has -7 HP remaining
You have won...this time.
```

The game works. It's modular. And now you know what it means for an application to be object-oriented. But most importantly, you know to be specific when challenging an orc to a duel.
11,319
Linux 上 5 个最好 CAD 软件
https://itsfoss.com/cad-software-linux/
2019-09-08T10:44:36
[ "CAD" ]
https://linux.cn/article-11319-1.html
[计算机辅助设计(CAD)](https://en.wikipedia.org/wiki/Computer-aided_design) 是很多工程流程必不可少的部分。CAD 专业应用于建筑、汽车零部件设计、航天飞机研究、航空、桥梁施工、室内设计,甚至服装和珠宝设计等领域。

一些专业级 CAD 软件,如 SolidWorks 和 Autodesk AutoCAD,并不原生支持 Linux。因此,今天我们来看看 Linux 上可用的顶级 CAD 软件。欲知详情,请看下文。

### Linux 可用的最好的 CAD 软件

![CAD Software for Linux](/data/attachment/album/201909/08/104441if4smfe1m1zlql54.jpg)

在查看这份 Linux 的 CAD 软件列表前,你应该记住一件事:这里并非所有的应用程序都是开源软件。为了帮助普通的 Linux 用户,我们也收录了一些非自由开源的 CAD 软件。

我们为基于 Ubuntu 的 Linux 发行版提供了安装指南。对于其它发行版,你可以查看相应的网站来了解安装步骤。

该列表没有任何特定顺序。排在第一位的 CAD 应用程序并不能认为比排在第三位的好,以此类推。

#### 1、FreeCAD

对于 3D 建模,FreeCAD 是一个极好的选择,它是自由(免费且自由)的开源软件。FreeCAD 以机械工程和产品设计为目标而构建。FreeCAD 是多平台的,可用于 Windows、Mac OS X 以及 Linux。

![freecad](/data/attachment/album/201909/08/104442icvpcdeaa22g2h72.jpg)

尽管 FreeCAD 已经是很多 Linux 用户的选择,但应该注意到,FreeCAD 仍然是 0.17 版本,因此不适用于重要的部署。不过最近其开发进度加快了。

* [FreeCAD](https://www.freecadweb.org/)

FreeCAD 并不专注于直接的 2D 绘图和有机形状的动画,但是它对机械工程相关的设计极好。FreeCAD 的 0.15 版本在 Ubuntu 存储库中可用。你可以通过运行下面的命令安装。

```
sudo apt install freecad
```

为获取新的每日构建(目前 0.17),打开一个终端(`ctrl+alt+t`),并逐个运行下面的命令。

```
sudo add-apt-repository ppa:freecad-maintainers/freecad-daily

sudo apt update

sudo apt install freecad-daily
```

#### 2、LibreCAD

LibreCAD 是一个自由开源的 2D CAD 解决方案。一般来说,CAD 是一项资源密集型任务,如果你的硬件比较普通,那么我建议你使用 LibreCAD,因为它在资源使用方面真的很轻量。LibreCAD 是几何作图方面的一个极好的候选者。

![librecad](/data/attachment/album/201909/08/104443w4dxnxe7n75yndn1.jpg)

作为一个 2D 工具,LibreCAD 很好,但是它不能处理 3D 模型和渲染。它有时可能不稳定,但是它有一个可靠的自动保存,不会让你的工作白费。

* [LibreCAD](https://librecad.org/)

你可以通过运行下面的命令安装 LibreCAD。

```
sudo apt install librecad
```

#### 3、OpenSCAD

OpenSCAD 是一个自由的 3D CAD 软件。OpenSCAD 非常轻量和灵活。OpenSCAD 不是交互式的:你需要“编程”出模型,由 OpenSCAD 解释这些代码来渲染出一个可视化模型。在某种意义上说,它是一个编译器。你不能直接绘制模型,而是描述模型。

![openscad](/data/attachment/album/201909/08/104444dozlkvh888uouuo9.jpg)

OpenSCAD 是这个列表上最复杂的工具,但是,一旦你了解它,它将提供一段令人愉快的工作体验。

* [OpenSCAD](http://www.openscad.org/)

你可以使用下面的命令来安装 OpenSCAD。

```
sudo apt-get install openscad
```

#### 4、BRL-CAD

BRL-CAD 是最古老的 CAD 工具之一。它也深受 Linux/UNIX 用户喜爱,因为它与 \*nix 的模块化和自由的哲学相一致。

![BRL-CAD rendering by Sean](/data/attachment/album/201909/08/104445urohlv5b57ph0j14.jpg)

BRL-CAD 始于 1979 年,并且仍然在积极开发。诚然,BRL-CAD 不是 AutoCAD,但是对于像热穿透和弹道穿透之类的输运研究,它仍然是一个极好的选择。BRL-CAD 使用 CSG(构造实体几何)而不是边界表示法。在选择 BRL-CAD 时,你可能需要记住这一点。你可以从它的官方网站下载 BRL-CAD。

* [BRL-CAD](https://brlcad.org/)

#### 5、DraftSight(非开源)

如果你习惯于使用 AutoCAD,那么 DraftSight 将是完美的替代品。

DraftSight 是一个在 Linux 上可用的极好的 CAD 工具。它有与 AutoCAD 相当类似的工作流,这使得迁移更容易。它甚至提供类似的外观和感觉。DraftSight 也兼容 AutoCAD 的 .dwg 文件格式。

但是,DraftSight 是一个 2D CAD 软件。截至当前,它还不支持 3D CAD。

![draftsight](/data/attachment/album/201909/08/104446u45oe7y778le7o7o.jpg)

DraftSight 是一款起价 149 美元的商业软件,不过在 [DraftSight 网站](https://www.draftsight2018.com/)上可以获得一个免费版本。你可以下载 .deb 软件包,并在基于 Ubuntu 的发行版上安装它。为了开始使用 DraftSight,你需要使用你的电子邮件 ID 来注册你的免费版本。

* [DraftSight](https://www.draftsight2018.com/)

#### 荣誉提名

* 随着云计算技术的巨大发展,像 [OnShape](https://www.onshape.com/) 这样的云 CAD 解决方案已经日渐流行。
* [SolveSpace](http://solvespace.com/index.pl) 是另一个值得一提的开源软件项目。它支持 3D 建模。
* 西门子 NX 是一个在 Windows、Mac OS 及 Linux 上可用的工业级 CAD 解决方案,但是它贵得离谱,所以在这个列表中被略过。
* 接下来是 [LeoCAD](https://www.leocad.org/),它是一个使用乐高积木来构建东西的 CAD 软件。如何使用这些信息则取决于你。

### 我对 Linux 上的 CAD 的看法

尽管在 Linux 上玩游戏已渐成气候,我总是告诉我的铁杆游戏朋友坚持使用 Windows。类似地,如果你是一名在课程中使用 CAD 的工科学生,我建议你使用学校规定的软件(AutoCAD、SolidEdge、Catia),这些软件通常只在 Windows 上运行。

对于高级专业人士来说,当我们讨论行业标准时,这些工具根本达不到标准。

对于想在 WINE 中运行 AutoCAD 的那些人来说,尽管一些较旧版本的 AutoCAD 可以安装在 WINE 上,但它们根本无法正常工作,小故障和崩溃会严重破坏使用体验。

话虽如此,我高度尊重上述列表中软件的开发者的工作。他们丰富了 FOSS 世界。很高兴看到像 FreeCAD 这样的软件近年来加快了开发速度。

好了,今天到此为止。使用下面的评论区与我们分享你的想法,不要忘记分享这篇文章。谢谢。
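补充一个小示意(非原文内容,适用于基于 Ubuntu 的发行版):在安装前,你可以先用下面的命令确认这几个开源 CAD 软件包在你的仓库中的候选版本:

```
# 查看各软件包的候选版本与来源仓库
apt-cache policy freecad librecad openscad
```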
--- via: <https://itsfoss.com/cad-software-linux/> 作者:[Aquil Roshan](https://itsfoss.com/author/aquil/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[robsean](https://github.com/robsean) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
[Computer Aided Design (CAD)](https://en.wikipedia.org/wiki/Computer-aided_design?ref=itsfoss.com) is an essential part of many engineering streams. CAD is professionally used in architecture, auto parts design, space shuttle research, aeronautics, bridge construction, interior design, and even clothing and jewelry.

Several professional-grade CAD programs like SolidWorks and [Autodesk](https://www.autodesk.com/?ref=itsfoss.com) AutoCAD are not natively supported on the Linux platform. Still, there are some alternative CAD applications available for Linux. You'll learn about them in this article.

## Best CAD Software available for Linux

![CAD Software for Linux](https://itsfoss.com/content/images/wordpress/2018/08/cad-software-linux.jpeg?resize=800%2C450&ssl=1)

**Non-FOSS Warning!** Not all the applications listed here are open-source and free. I have also included some non-FOSS CAD software to help average Linux users. The non-open-source software has been duly indicated.

Installation instructions for Ubuntu-based Linux distributions have been provided. You can check the respective websites to learn the installation procedures for other distributions.

The list is not in any specific order. The CAD application at number one shouldn’t be considered better than the one at number three, and so on.

### 1. FreeCAD

For 3D modelling, FreeCAD is an excellent option that is both free (beer and speech) and open-source. FreeCAD is built with mechanical engineering and product design as its target purposes. FreeCAD is multiplatform and is available on Windows and macOS as well as Linux.

![freecad](https://itsfoss.com/content/images/wordpress/2018/07/freecad.jpg?resize=800%2C450&ssl=1)

Although FreeCAD has been the choice of many Linux users, it should be noted that it’s not a full-fledged solution. However, it’s good to know that it’s being actively developed and you can find the latest releases on [GitHub](https://github.com/FreeCAD/FreeCAD/releases?ref=itsfoss.com) as well.

FreeCAD doesn’t focus on direct 2D drawings and animating organic shapes, but it’s great for design related to mechanical engineering. FreeCAD version 0.15 is available in the Ubuntu repositories. So you can install it directly from your software center. If you don’t find it there, you can install it by running the following command:

`sudo apt install freecad`

To get newer daily builds (currently on 0.19), simply head to the [GitHub releases ](https://github.com/FreeCAD/FreeCAD/releases?ref=itsfoss.com)page to download them.

### 2. LibreCAD

LibreCAD is a free and open-source 2D CAD solution. Generally, CAD tends to be a resource-intensive task, and if you have rather modest hardware, then I’d suggest you go for LibreCAD as it’s really lightweight in terms of resource usage. LibreCAD is a great tool for geometric constructions.

![librecad](https://itsfoss.com/content/images/wordpress/2018/07/librecad.jpg?resize=800%2C450&ssl=1)

As a 2D tool, LibreCAD is good but it doesn’t work on 3D models and renderings. It might be unstable at times but it has a dependable autosave that won’t let your work go to waste.

You can install LibreCAD by running the following command:

`sudo apt install librecad`

### 3. OpenSCAD

OpenSCAD is a free 3D CAD program. It’s very lightweight and flexible. OpenSCAD isn’t interactive: you need to ‘program’ the model and OpenSCAD will interpret that code to render a visual model.
In a sense, it’s like a compiler. You cannot draw the model – you describe the model. ![openscad](https://itsfoss.com/content/images/wordpress/2018/07/openscad.jpg?resize=800%2C450&ssl=1) OpenSCAD is the most complicated tool on this list, but once you get to know it, it provides an enjoyable work environment. You can use the following command to install OpenSCAD. `sudo apt-get install openscad` ### 4. BRL-CAD BRL-CAD is one of the oldest CAD tools out there. It’s also a favorite of Linux/UNIX users as it aligns itself with the *nix philosophies of modularity and freedom. ![BRL-CAD rendering by Sean](https://itsfoss.com/content/images/wordpress/2018/07/brlcad.jpg?resize=800%2C453&ssl=1) The BRL-CAD project started in 1979, and it’s still developed actively. Now, BRL-CAD isn’t AutoCAD, but it’s still a great choice for transport studies such as thermal and ballistic penetration. BRL-CAD uses CSG instead of boundary representation. You might need to keep that in mind if you opt for BRL-CAD. You can download BRL-CAD from its official website. ### 5. QCAD ![Qcad Linux](https://itsfoss.com/content/images/wordpress/2019/12/qcad-linux.jpg) QCAD is a commercially available open-source CAD program based on the [Qt framework](https://www.qt.io/?ref=itsfoss.com). The free community edition is open-source and its [source code is available](https://github.com/qcad/qcad?ref=itsfoss.com). The professional version contains add-ons for advanced DXF support, DWG support and many extra tools and features. In other words, the free community edition is restricted to certain features. QCAD may not be the best CAD software there is, but the UI and the options it provides are good for many uses. So if you’re interested in trying open-source CAD software, you can download the trial version to test-drive it. You can opt for the trial version first, which runs for 15 minutes before you need to restart the session. And if you like using the trial version, you can consider upgrading it. ### 6. BricsCAD (not open-source) Yet another alternative suggested by some of our readers. This may not be a free and open-source solution. However, you will find it available for Linux when you purchase it. It’s a feature-rich CAD program available for Linux users. If you are curious, there’s a [comparison chart](https://www.bricsys.com/en-eu/bricscad/compare/?ref=itsfoss.com) with AutoCAD on its official website that lists its capabilities and features. You need to sign up for a 30-day trial to start with and purchase it later if you like it. ### 7. VariCAD (not open-source) ![Varicad Illustration](https://itsfoss.com/content/images/wordpress/2019/12/varicad-illustration.jpg) VariCAD is another decent CAD program for 2D and 3D designs. Even though it isn’t free, you get a 30-day free trial version to test it out. For Linux, you can download Debian and RPM packages to try it out. It’s actively maintained and supports most of the latest Linux distributions. It also offers a free VariCAD viewer, which you can use to convert DWG to DFX and similar tasks. ### Honorary mentions - With a huge growth in cloud computing technologies, cloud CAD solutions like [OnShape](https://www.onshape.com/?ref=itsfoss.com)have been getting more popular each day. [SolveSpace](http://solvespace.com/index.pl?ref=itsfoss.com)is another open-source project worth mentioning. It supports 3D modeling.- Siemens NX is an industrial-grade CAD solution available on Windows, Mac OS and Linux, but it’s ridiculously expensive, so we’ve omitted it from this list. 
- Then there’s [LeoCAD](https://www.leocad.org/?ref=itsfoss.com), which is a CAD program where you use LEGO blocks to build stuff. What you do with this information is up to you. ## CAD on Linux – my opinion Although gaming on Linux has picked up, I always tell my hardcore gaming friends to stick to Windows in dual boot. Similarly, if you’re an engineering student with CAD on your curriculum, I’d recommend using the software that your college prescribes (AutoCAD, SolidEdge, Catia), which generally tends to run on Windows only. You can always dual boot to keep Windows and Linux on the same computer. And for advanced professionals, these tools might not be up to the mark when we’re talking about industry standards. For those of you thinking about running AutoCAD in [WINE](https://itsfoss.com/install-wine-ubuntu/), although some older versions of AutoCAD can be installed on WINE, they simply do not perform with glitches and crashes ruining the experience. That being said, I highly respect the work that has been put in by the developers of the above-listed software. They’ve enriched the FOSS world. And it’s great to see a program like FreeCAD developing at an accelerated pace in recent years. Do share your thoughts with us using the comments section below and don’t forget to share this article. Cheers!
11,320
用 Git 管理你的每日行程
https://opensource.com/article/19/4/calendar-git
2019-09-09T06:19:16
[ "日历", "Git" ]
https://linux.cn/article-11320-1.html
> > 像源代码一样对待时间并在 Git 的帮助下维护你的日历。 > > >

![](/data/attachment/album/201909/09/061835la7ne9edtlr7kn18.png)

[Git](https://git-scm.com/) 是一个少有的能将如此多的现代计算封装到一个程序之中的应用程序,它可以用作许多其他应用程序的计算引擎。虽然它以跟踪软件开发中的源代码更改而闻名,但它还有许多其他用途,可以让你的生活更轻松、更有条理。在这个 Git 系列中,我们将分享七种鲜为人知的使用 Git 的方法。

今天,我们将使用 Git 来跟踪你的日历。

### 使用 Git 跟踪你的日程安排

如果时间本身只是可以管理和版本控制的源代码呢?虽然证明或反驳这种理论可能超出了本文的范围,但在 Git 的帮助下,你可以将时间视为源代码并管理你的日程安排。

日历的卫冕冠军是 [CalDAV](https://tools.ietf.org/html/rfc4791) 协议,它支撑了如 [NextCloud](http://nextcloud.com) 这样的流行的开源及闭源的日历应用程序。CalDAV 没什么问题(评论者,请注意),但它并不适合所有人,况且,没有什么比单一文化更乏味的了。

因为我对大量使用 GUI 的 CalDAV 客户端没有兴趣(如果你正在寻找一个好的终端 CalDAV 查看器,请参阅 [khal](https://github.com/pimutils/khal)),我开始研究基于文本的替代方案。基于文本的日历具有使用[纯文本](https://plaintextproject.online/)工作的所有常见好处。它很轻巧,非常便携,只要它是结构化的,就很容易解析和美化(无论*美丽*对你意味着什么)。

最重要的是,它正是 Git 旨在管理的内容。

### 并不可怕的 Org 模式

如果你不给你的纯文本添加结构,它很快就会陷入一种天马行空般的混乱,变成恶魔才能懂的符号。幸运的是,有一种用于日历的标记语法,它包含在令人尊敬的生产力 Emacs 模式 —— [Org 模式](https://orgmode.org) 中(承认吧,你其实一直想开始使用它)。

许多人没有意识到,Org 模式的惊人之处在于[你不需要知道甚至不需要使用 Emacs](https://opensource.com/article/19/1/productivity-tool-org-mode) 就能利用 Org 模式建立的约定。如果你使用 Emacs,你会得到许多很棒的功能,但是如果 Emacs 对你来说太难了,那么你可以实现一个基于 Git 的 Org 模式日历系统,而不需要安装 Emacs。

关于 Org 模式,你唯一需要知道的部分是它的语法。Org 模式的语法维护成本低,而且直观。使用 Org 模式而不是 GUI 日历应用程序的最大区别在于工作流程:你不是打开日历、找到要安排任务的日期,而是先创建一个任务列表,然后为每个任务分配日期和时间。

Org 模式中的列表使用星号(`*`)作为项目符号。这是我的游戏任务列表:

```
* Gaming
** Build Stardrifter character
** Read Stardrifter rules
** Stardrifter playtest
** Blue Planet @ Mike's
** Run Rappan Athuk
*** Purchase hard copy
*** Skim Rappan Athuk
*** Build Rappan Athuk maps in maptool
*** Sort Rappan Athuk tokens
```

如果你熟悉 [CommonMark](https://commonmark.org/) 或 Markdown,你会注意到,Org 模式不是使用空格来创建子任务,而是更明确地使用了额外的项目符号。无论你以前用什么方式写列表,这都是一种直观且简单的构建列表的方法,而且它显然与 Emacs 没有内在联系(尽管使用 Emacs 能为你提供快捷方式,让你可以快速地重新排列列表)。

要将列表转换为日历中的计划任务或事件,请回过头来添加关键字 `SCHEDULED` 和(可选的)`:CATEGORY:`。

```
* Gaming
:CATEGORY: Game
** Build Stardrifter character
SCHEDULED: <2019-03-22 18:00-19:00>
** Read Stardrifter rules
SCHEDULED: <2019-03-22 19:00-21:00>
** Stardrifter playtest
SCHEDULED: <2019-03-25 0900-1300>
** Blue Planet @ Mike's
SCHEDULED: <2019-03-18 18:00-23:00 +1w>

and so on...
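# (补充示意,非原文:在 Org 模式中,以 “# ” 开头的行是注释。
# 除 SCHEDULED 外,还可以用语法相同的 DEADLINE 关键字标记截止日期,
# 下文提到的 DEADLINE 通知正对应于此,例如:)
** Turn in Rappan Athuk session notes
   DEADLINE: <2019-03-29 Fri>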
```

`SCHEDULED` 关键字将该条目标记为你希望收到通知的事件,而可选的 `:CATEGORY:` 关键字是一个可供你自己使用的任意标记系统(在 Emacs 中,你可以根据类别对条目使用颜色代码)。

对于重复事件,你可以使用符号(如 `+1w`)创建每周事件,或使用 `+2w` 创建每两周一次的事件,依此类推。

所有可用于 Org 模式的花哨标记都[记录于文档](https://orgmode.org/manual/),所以不要犹豫,找到更多技巧来让它满足你的需求。

### 放进 Git

如果没有 Git,你的 Org 模式日程安排只不过是本地计算机上的一个文件。这是 21 世纪,所以你至少需要能在手机上使用你的日历,即便不是在你所有的个人电脑上。你可以使用 Git 为自己和他人发布日历。

首先,为 `.org` 文件创建一个目录。我将我的存储在 `~/cal` 中。

```
$ mkdir ~/cal
```

转到你的目录并使其成为 Git 存储库:

```
$ cd cal
$ git init
```

将 `.org` 文件移动到你本地的 Git 存储库。在实践中,我为每个类别维护一个 `.org` 文件。

```
$ mv ~/*.org ~/cal
$ ls
Game.org Meal.org Seth.org Work.org
```

暂存并提交你的文件:

```
$ git add *.org
$ git commit -m 'cal init'
```

### 创建一个 Git 远程源

要在任何地方都能使用日历,你必须在互联网上拥有一个 Git 存储库。你的日历是纯文本,因此任何 Git 存储库都可以。你可以将日历放在 [GitLab](http://gitlab.com) 或任何其他公共 Git 托管服务(甚至是专有服务)上,只要你的主机允许,你甚至可以将该存储库标记为私有库。如果你不想将日历发布到你无法控制的服务器,则可以自行托管 Git 存储库,或者为单个用户使用裸存储库,或者使用 [Gitolite](http://gitolite.com/gitolite/index.html) 或 [Gitea](https://gitea.io/en-us/) 等前端服务。

为了简单起见,我将假设一个自托管的 Git 裸存储库。你可以使用 Git 命令在任何具有 SSH 访问权限的服务器上创建一个远程裸存储库:

```
$ ssh -p 22122 [email protected]
[remote]$ mkdir cal.git
[remote]$ cd cal.git
[remote]$ git init --bare
[remote]$ exit
```

这个裸存储库可以作为你日历在互联网上的家。

将其设置为本地 Git 存储库(在你的计算机上,而不是你的服务器上)的远程源:

```
$ git remote add origin [email protected]:/home/seth/cal.git
```

然后推送你的日历到该服务器:

```
$ git push -u origin HEAD
```

将你的日历放在 Git 存储库中,就可以在任何运行 Git 的设备上使用它。这意味着你可以对计划进行更新和更改,并将更改推送到上游,以便在任何地方进行更新。

我使用这种方法使我的日历在我的工作笔记本电脑和家庭工作站之间保持同步。由于我每天大部分时间都在使用 Emacs,因此能够在 Emacs 中查看和编辑我的日历是一个很大的便利。对于大多数使用移动设备的人来说也是如此,因此下一步是在移动设备上设置 Org 模式的日历系统。

### 移动设备上的 Git

由于你的日历数据是纯文本的,严格来说,你可以在任何可以读取文本文件的设备上“使用”它。这是这个系统之美的一部分:你永远不会失去原始数据。但是,要按照你希望的现代日历的工作方式将日历集成到移动设备上,你需要两个组件:移动设备上的 Git 客户端和 Org 模式查看器。

#### 移动设备上的 Git 客户端

[MGit](https://f-droid.org/en/packages/com.manichord.mgit) 是 Android 上的优秀 Git 客户端。同样,iOS 也有 Git 客户端。

一旦安装了 MGit(或类似的 Git 客户端),你必须克隆日历存储库,以便在你的手机上有副本。要从移动设备访问服务器,必须设置 SSH 密钥进行身份验证。MGit 可以为你生成和存储密钥,你必须将其添加到服务器的 `~/.ssh/authorized_keys` 文件,或托管 Git 服务的帐户设置中的 SSH 密钥里。

你必须手动执行此操作。MGit 没有登录你的服务器或托管 Git 帐户的界面。如果你不这样做,你的移动设备将无法访问服务器上的日历数据。

我是通过 [KDE Connect](https://community.kde.org/KDEConnect) 将我在 MGit 中生成的密钥文件复制到我的笔记本电脑来实现的(但你也可以通过蓝牙、SD 卡读卡器或 USB 电缆进行相同操作,具体取决于你访问手机数据的首选方法)。

我用这个命令将密钥(一个名为 `calkey` 的文件)复制到我的服务器:

```
$ cat calkey | ssh [email protected] "cat >> /home/seth/.ssh/authorized_keys"
```

你可能有不同的方法,但如果你曾经将服务器设置为无密码登录,这就是完全相同的过程。如果你使用的是 GitLab 等托管 Git 服务,则必须将密钥文件的内容复制并粘贴到用户帐户的 SSH 密钥面板中。

![Adding key file data to GitLab](/data/attachment/album/201909/09/061918p8yixpqr59pokpup.jpg "Adding key file data to GitLab")

完成后,你的移动设备就可以向你的服务器进行身份验证了,但仍需要知道去哪里查找你的日历数据。不同的应用程序可能使用不同的表示法,但 MGit 使用普通的旧式 Git-over-SSH。这意味着如果你使用的是非标准 SSH 端口,则必须指定要使用的 SSH 端口:

```
$ git clone ssh://[email protected]:22122//home/seth/git/cal.git
```

![Specifying SSH port in MGit](/data/attachment/album/201909/09/061919y79aaosoe8snyza6.jpg "Specifying SSH port in MGit")

如果你使用其他应用程序,它可能会使用不同的语法,允许你在特殊字段中提供端口,或删除 `ssh://` 前缀。如果遇到问题,请参阅应用程序文档。

将存储库克隆到手机。

![Cloned repositories](/data/attachment/album/201909/09/061919dnqb0hcdd08qpyqy.jpg "Cloned repositories")

很少有 Git 应用程序设置为自动更新存储库。有一些应用程序可以用来自动拉取,或者你可以设置 Git 钩子来推送服务器的更新(文末附有一个基于 cron 的自动拉取示例)—— 但我不会在这里讨论这些。目前,在对日历进行更新后,请务必在 MGit 中手动拉取新更改(或者如果在手机上更改了事件,请将更改推送到服务器)。

![MGit push/pull settings](/data/attachment/album/201909/09/061920kjisgu7uugzxw4gc.jpg "MGit push/pull settings")

#### 移动设备上的日历

有一些应用程序可以为移动设备上的 Org 模式提供前端。[Orgzly](https://f-droid.org/en/packages/com.orgzly/) 是一个很棒的开源 Android 应用程序,它为 Org 模式的大多数功能(从日程视图到 TODO 列表)提供了界面。安装并启动它。

从主菜单中,选择“设置同步存储库”,然后选择包含日历文件的目录(即从服务器克隆的 Git 存储库)。
给 Orgzly 一点时间来导入数据,然后使用 Orgzly 的[汉堡包](https://en.wikipedia.org/wiki/Hamburger_button)菜单选择日程视图。

![Orgzly's agenda view](/data/attachment/album/201909/09/061920gzq27kmzkmpamq2p.jpg "Orgzly's agenda view")

在 Orgzly 的“设置提醒”菜单中,你可以选择在手机上触发通知的事件类型。你可以收到 `SCHEDULED` 任务、`DEADLINE` 任务,或任何指定了事件时间的条目的通知。如果你将手机用作任务管理器,那么有了 Org 模式和 Orgzly,你将永远不会错过任何活动。

![Orgzly notification](/data/attachment/album/201909/09/061920r961p8d99q2xdpd9.jpg "Orgzly notification")

Orgzly 不仅仅是一个解析器。你可以编辑和更新事件,甚至将事件标记为 `DONE`。

![Orgzly to-do list](/data/attachment/album/201909/09/061921fryrq8979z8i28ff.jpg "Orgzly to-do list")

### 专为你而设计

关于使用 Org 模式和 Git,重要的一点是这两个应用程序都非常灵活,你可以自定义它们的工作方式和内容,使它们适应你的需求。如果本文中的某些做法与你组织生活或管理每周日程的方式相悖,但你喜欢这个方案的其他部分,那么就丢弃你不喜欢的部分。如果需要,你可以在 Emacs 中使用 Org 模式,或者你可以仅将其用作日历标记。你可以将手机设置为在一天结束时从你的计算机而不是互联网上的服务器拉取 Git 数据,或者你可以将计算机配置为在手机插入时同步日历,或者你可以像每天把工作日所需的所有东西装进手机那样,每天手动管理它。这取决于你,而这正是关于 Git、Org 模式和开源的最重要的事情。

---

via: <https://opensource.com/article/19/4/calendar-git>

作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
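文中提到可以用其他应用或 Git 钩子来自动同步,但没有展开。下面是一个基于 cron 的自动拉取示意(非原文内容;假设日历仓库克隆在 `~/cal`,并且系统提供 cron):

```
# 编辑当前用户的 crontab:
#   crontab -e
# 加入下面一行,每 30 分钟静默拉取一次日历更新:
*/30 * * * * cd "$HOME/cal" && git pull --quiet
```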
200
OK
[Git](https://git-scm.com/) is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at using Git to keep track of your calendar. ## Keep track of your schedule with Git What if time itself was but source code that could be managed and version controlled? While proving or disproving such a theory is probably beyond the scope of this article, it happens that you can treat time like source code and manage your daily schedule with the help of Git. The reigning champion for calendaring is the [CalDAV](https://tools.ietf.org/html/rfc4791) protocol, which drives popular open source calendaring applications like [NextCloud](http://nextcloud.com) as well as popular closed source ones. There's nothing wrong with CalDAV (commenters, take heed). But it's not for everyone, and besides there's nothing less inspiring than a mono-culture. Because I have no interest in becoming invested in largely GUI-dependent CalDAV clients (although if you're looking for a good terminal CalDAV viewer, see [khal](https://github.com/pimutils/khal)), I started investigating text-based alternatives. Text-based calendaring has all the usual benefits of working in [plaintext](https://plaintextproject.online/). It's lightweight, it's highly portable, and as long as it's structured, it's easy to parse and beautify (whatever *beauty* means to you). And best of all, it's exactly what Git was designed to manage. ## Org mode not in a scary way If you don't impose structure on your plaintext, it quickly falls into a pandemonium of off-the-cuff thoughts and devil-may-care notation. Luckily, a markup syntax exists for calendaring, and it's contained in the venerable productivity Emacs mode, [Org mode](https://orgmode.org) (which, admit it, you've been meaning to start using anyway). The amazing thing about Org mode that many people don't realize is [you don't need to know or even use Emacs](https://opensource.com/article/19/1/productivity-tool-org-mode) to take advantage of conventions established by Org mode. You get a lot of great features if you *do* use Emacs, but if Emacs intimidates you, then you can implement a Git-based Org-mode calendaring system without so much as installing Emacs. The only part of Org mode that you need to know is its syntax. Org-mode syntax is low-maintenance and fairly intuitive. The biggest difference in calendaring with Org mode instead of a GUI calendaring app is the workflow: instead of going to a calendar and finding the day you want to schedule a task, you create a list of tasks and then assign each one a day and time. Lists in Org mode use asterisks (*) as bullets. Here's my gaming task list:** ** ``` * Gaming ** Build Stardrifter character ** Read Stardrifter rules ** Stardrifter playtest ** Blue Planet @ Mike's ** Run Rappan Athuk *** Purchase hard copy *** Skim Rappan Athuk *** Build Rappan Athuk maps in maptool *** Sort Rappan Athuk tokens ``` If you're familiar with [CommonMark](https://commonmark.org/) or Markdown, you'll notice that instead of using whitespace to create a subtask, Org mode favors the more explicit use of additional bullets. 
Whatever your background with lists, this is an intuitive and easy way to build a list, and it obviously is not inherently tied to Emacs (although using Emacs provides you with shortcuts so you can rearrange your list quickly). To turn your list into scheduled tasks or events in a calendar, go back through and add the keywords **SCHEDULED** and, optionally, **:CATEGORY:**. ``` * Gaming :CATEGORY: Game ** Build Stardrifter character SCHEDULED: <2019-03-22 18:00-19:00> ** Read Stardrifter rules SCHEDULED: <2019-03-22 19:00-21:00> ** Stardrifter playtest SCHEDULED: <2019-03-25 0900-1300> ** Blue Planet @ Mike's SCHEDULED: <2019-03-18 18:00-23:00 +1w> and so on... ``` The **SCHEDULED** keyword marks the entry as an event that you expect to be notified about and the optional **:CATEGORY:** keyword is an arbitrary tagging system for your own use (and in Emacs, you can color-code entries according to category). For a repeating event, you can use notation such as **+1w** to create a weekly event or **+2w** for a fortnightly event, and so on. All the fancy markup available for Org mode is [documented](https://orgmode.org/manual/), so don't hesitate to find more tricks to help it fit your needs. ## Put it into Git Without Git, your Org-mode appointments are just a file on your local machine. It's the 21st century, though, so you at least need your calendar on your mobile phone, if not on all of your personal computers. You can use Git to publish your calendar for yourself and others. First, create a directory for your **.org** files. I store mine in **~/cal**. `$ mkdir ~/cal` Change into your directory and make it a Git repository: ``` $ cd cal $ git init ``` Move your **.org** file to your local Git repo. In practice, I maintain one **.org** file per category. ``` $ mv ~/*.org ~/cal $ ls Game.org Meal.org Seth.org Work.org ``` Stage and commit your files: ``` $ git add *.org $ git commit -m 'cal init' ``` ## Create a Git remote To make your calendar available from anywhere, you must have a Git repository on the internet. Your calendar is plaintext, so any Git repository will do. You can put your calendar on [GitLab](http://gitlab.com) or any other public Git hosting service (even proprietary ones), and as long as your host allows it, you can even mark the repository as private. If you don't want to post your calendar to a server you don't control, it's easy to host a Git repository yourself, either using a bare repository for a single user or using a frontend service like [Gitolite](http://gitolite.com/gitolite/index.html) or [Gitea](https://gitea.io/en-us/). In the interest of simplicity, I'll assume a self-hosted bare Git repository. You can create a bare remote repository on any server you have SSH access to with one Git command: ``` $ ssh -p 22122 [email protected] [remote]$ mkdir cal.git [remote]$ cd cal.git [remote]$ git init --bare [remote]$ exit ``` This bare repository can serve as your calendar's home on the internet. Set it as the remote source for your local (on your computer, not your server) Git repository: `$ git remote add origin [email protected]:/home/seth/cal.git` And then push your calendar data to the server: `$ git push -u origin HEAD` With your calendar in a Git repository, it's available to you on any device running Git. That means you can make updates and changes to your schedule and push your changes upstream so it updates everywhere. I use this method to keep my calendar in sync between my work laptop and my home workstation. 
Since I use Emacs every day for most of the day, being able to view and edit my calendar in Emacs is a major convenience. The same is true for most people with a mobile device, so the next step is to set up an Org-mode calendaring system on a mobile. ## Mobile Git Since your calendar data is in plaintext, strictly speaking, you can "use" it on any device that can read a text file. That's part of the beauty of this system; you're never without, at the very least, your raw data. But to integrate your calendar on a mobile device the way you'd expect a modern calendar to work, you need two components: a mobile Git client and a mobile Org-mode viewer. ### Git client for mobile [MGit](https://f-droid.org/en/packages/com.manichord.mgit) is a good Git client for Android. There are Git clients for iOS, as well. Once you've installed MGit (or a similar Git client), you must clone your calendar repository so your phone has a copy. To access your server from your mobile device, you must set up an SSH key for authentication. MGit can generate and store a key for you, which you must add to your server's **~/.ssh/authorized_keys** file or to your SSH keys in the settings of your hosted Git account. You must do this manually. MGit does not have an interface to log into your server or hosted Git account. If you do not do this, your mobile device cannot access your server to access your calendar data. I did it by copying the key file I generated in MGit to my laptop over [KDE Connect](https://community.kde.org/KDEConnect) (but you can do the same over Bluetooth, or with an SD card reader, or a USB cable, depending on your preferred method of accessing data on your phone). I copied the key (a file called **calkey** to my server with this command: `$ cat calkey | ssh [email protected] "cat >> /home/seth/.ssh/authorized_keys"` You may have a different way of doing it, but if you ever set your server up for passwordless login, this is exactly the same process. If you're using a hosted Git service like GitLab, you must copy and paste the contents of your key file into your user account's SSH Key panel. ![Adding key file to GitLab Adding key file data to GitLab](https://opensource.com/sites/default/files/uploads/gitlab-add-key.jpg) Once that's done, your mobile device can authorize to your server, but it still needs to know where to go to find your calendar data. Different apps may use different notation, but MGit uses plain old Git-over-SSH. That means if you're using a non-standard SSH port, you must specify the SSH port to use: `$ git clone ssh://[email protected]:22122//home/seth/git/cal.git` ![Specifying SSH port in MGit Specifying SSH port in MGit](https://opensource.com/sites/default/files/uploads/mgit-0.jpg) If you use a different app, it may use a different syntax that allows you to provide a port in a special field or drop the **ssh://** prefix. Refer to the app documentation if you experience issues. Clone the repository to your phone. ![Cloned repositories Cloned repositories](https://opensource.com/sites/default/files/uploads/mgit-1.jpg) Few Git apps are set to automatically update the repository. There are a few apps you can use to automate pulls, or you can set up Git hooks to push updates from your server—but I won't get into that here. For now, after you make an update to your calendar, be sure to pull new changes manually in MGit (or if you change events on your phone, push the changes to your server). 
![MGit push/pull settings MGit push/pull settings](https://opensource.com/sites/default/files/uploads/mgit-2.jpg)

### Mobile calendar

There are a few different apps that provide frontends for Org mode on a mobile device. [Orgzly](https://f-droid.org/en/packages/com.orgzly/) is a great open source Android app that provides an interface for Org mode's greatest features, from the Agenda mode to the TODO lists. Install and launch it.

From the Main menu, choose Setting Sync Repositories and select the directory containing your calendar files (i.e., the Git repository you cloned from your server).

Give Orgzly a moment to import the data, then use Orgzly's [hamburger](https://en.wikipedia.org/wiki/Hamburger_button) menu to select the Agenda view.

![Orgzly's agenda view Orgzly's agenda view](https://opensource.com/sites/default/files/uploads/orgzly-agenda.jpg)

In Orgzly's Settings Reminders menu, you can choose which event types trigger a notification on your phone. You can get notifications for **SCHEDULED** tasks, **DEADLINE** tasks, or anything with an event time assigned to it. If you use your phone as your taskmaster, you'll never miss an event with Org mode and Orgzly.

![Orgzly notification Orgzly notification](https://opensource.com/sites/default/files/uploads/orgzly-cal-notify.jpg)

Orgzly isn't just a parser. You can edit and update events, and even mark events **DONE**.

![Orgzly to-do list Orgzly to-do list](https://opensource.com/sites/default/files/uploads/orgzly-cal-todo.jpg)

## Designed for and by you

The important thing to understand about using Org mode and Git is that both applications are highly flexible, and it's expected that you'll customize how and what they do so they will adapt to your needs. If something in this article is an affront to how you organize your life or manage your weekly schedule, but you like other parts of what this proposal offers, then throw out the part you don't like. You can use Org mode in Emacs if you want, or you can just use it as calendar markup. You can set your phone to pull Git data right off your computer at the end of the day instead of a server on the internet, or you can configure your computer to sync calendars whenever your phone is plugged in, or you can manage it daily as you load up your phone with all the stuff you need for the workday. It's up to you, and that's the most significant thing about Git, about Org mode, and about open source.
11,321
在不使用 mv 命令的情况下移动文件
https://opensource.com/article/19/8/moving-files-linux-without-mv
2019-09-09T06:44:06
[ "mv" ]
https://linux.cn/article-11321-1.html
> > 有时当你需要移动一个文件时,mv 命令似乎不是最佳选项,那么你会如何做呢? > > >

![](/data/attachment/album/201909/09/064313e02mvq28he8fk0mu.jpg)

不起眼的 `mv` 命令是在你见过的每个 POSIX 系统中都能找到的有用工具之一。它的作用有明确的定义,并且完成得很好:将文件从文件系统中的一个位置移动到另一个位置。但是 Linux 非常灵活,还有其他移动文件的办法。使用不同的工具,可以为一些特殊用例带来小小的优势。

在远离 `mv` 之前,先看看这个命令的默认结果。首先,创建一个目录并生成一些权限为 777 的文件:

```
$ mkdir example
$ touch example/{foo,bar,baz}
$ for i in example/*; do ls /bin > "${i}"; done
$ chmod 777 example/*
```

你可能不会这么认为,但是文件是作为条目存在于一个[文件系统](https://opensource.com/article/18/11/partition-format-drive-linux#what-is-a-filesystem)中的,这些条目称为索引节点(通常称为 inode),你可以使用 [ls 命令](https://opensource.com/article/19/7/master-ls-command)及其 `--inode` 选项查看一个文件占用的 inode:

```
$ ls --inode example/foo
7476868 example/foo
```

作为测试,将文件从示例目录移动到当前目录,然后查看文件的属性:

```
$ mv example/foo .
$ ls -l -G -g --inode
7476868 -rwxrwxrwx. 1 29545 Aug 2 07:28 foo
```

如你所见,原始文件及其权限已经被“移动”了,但它的 inode 没有变化。

这就是 `mv` 工具移动文件的方式:保持 inode 不变(除非文件被移动到不同的文件系统),并保留其所有权和权限。

其他工具提供了不同的选项。

### 复制和删除

在某些系统上,移动操作是真的在做移动:比特从文件系统中的某个位置删除,并重新分配给另一个位置。这种行为在很大程度上已经失宠。现在,移动操作要么是属性重新分配(inode 现在指向文件组织中的不同位置),要么是复制和删除操作的组合。这种设计的哲学意图是确保在移动失败时,文件不会支离破碎。

与 `mv` 不同,`cp` 命令会在文件系统中创建一个全新的数据对象,它有一个新的 inode 位置,并受当前 umask 的约束。你可以使用 `cp` 和 `rm`(或者 [trash](https://gitlab.com/trashy),如果你装了它的话 —— LCTT 译注:它是一个命令行回收站工具)命令来模仿 `mv` 命令。

```
$ cp example/foo .
$ ls -l -G -g --inode
7476869 -rwxrwxr-x. 29545 Aug 2 11:58 foo
$ trash example/foo
```

示例中的新 `foo` 文件获得了 775 权限,因为此处的 umask 明确排除了写入权限。

```
$ umask
0002
```

有关 umask 的更多信息,请阅读 Alex Juarez 这篇关于[文件权限](https://opensource.com/article/19/8/linux-permissions-101#umask)的文章。

### 查看和删除

与复制和删除类似,使用 [cat](https://opensource.com/article/19/2/getting-started-cat-command)(或 `tac`)命令在创建“移动”的文件时会分配不同的权限。假设当前目录中是一个没有 `foo` 的新测试环境:

```
$ cat example/foo > foo
$ ls -l -G -g --inode
7476869 -rw-rw-r--. 29545 Aug 8 12:21 foo
$ trash example/foo
```

这次,创建了一个没有事先设置权限的新文件,所以文件的最终权限完全取决于 umask 设置:它不会阻止用户和组的权限位(无论 umask 是什么,都不会为新文件授予可执行权限),但它会阻止其他人的写入(值为 2)。所以结果是一个权限为 664 的文件。

### Rsync

`rsync` 命令是一个强大的多功能工具,用于在主机和文件系统位置之间发送文件。此命令有许多可用选项,包括使目标成为源的镜像。

你可以使用带有 `--remove-source-files` 选项的 `rsync` 来复制然后删除文件,并可以带上你选择的任何其他执行同步的选项(常见的通用选项是 `--archive`):

```
$ rsync --archive --remove-source-files example/foo .
$ ls example
bar baz
$ ls -lGgi
7476870 -rwxrwxrwx. 1 seth users 29545 Aug 8 12:23 foo
```

在这里,你可以看到保留了文件权限和所有权,只是更新了时间戳,并删除了源文件。

警告:不要将此选项与 `--delete` 混淆,后者会从*目标*目录中删除(源目录中不存在的)文件。误用 `--delete` 会清除很多数据,建议你不要使用此选项,除非是在测试环境中。

你可以覆盖其中一些默认值,更改权限和修改时间设置:

```
$ rsync --chmod=666 --times \
--remove-source-files example/foo .
$ ls example
bar baz
$ ls -lGgi
7476871 -rw-rw-r--. 1 seth users 29545 Aug 8 12:55 foo
```

这里,目标的 umask 会生效,因此 `--chmod=666` 选项会产生一个权限为 664 的文件。

好处不仅仅是权限:与简单的 `mv` 命令相比,`rsync` 命令有[很多](https://opensource.com/article/19/5/advanced-rsync)有用的[选项](https://opensource.com/article/17/1/rsync-backup-linux)(其中最重要的是 `--exclude` 选项,它让你可以在一个大型移动操作中排除某些项目),这使它成为一个更强大的工具(文末附有一个将这种用法包装成 shell 函数的补充示例)。例如,要在移动文件集合时排除所有备份文件:

```
$ rsync --chmod=666 --times \
--exclude '*~' \
--remove-source-files example/foo .
```

### 使用 install 设置权限

`install` 命令是一个专门面向开发人员的复制命令,主要是作为软件编译安装例程的一部分调用。它并不为用户所熟知(我经常好奇为什么它得到了这么一个直观的名字,而其余的包管理工具却只能用缩写和昵称),但是 `install` 实际上是一种将文件放到你想要的地方的有用方法。

`install` 命令有很多选项,包括 `--backup` 和 `--compare` 选项(以避免“更新”一个较新的文件副本)。

与 `cp` 和 `cat` 命令不同,但与 `mv` 完全相同,`install` 命令可以在复制文件的同时保留其时间戳:

```
$ install --preserve-timestamp example/foo .
$ ls -l -G -g --inode
7476869 -rwxr-xr-x.
1 29545 Aug 2 07:28 foo $ trash example/foo ``` 在这里,文件被复制到一个新的 inode,但它的 mtime(修改时间)没有改变。但权限被设置为 `install` 的默认值 `755`。 你可以使用 `install` 来设置文件的权限,所有者和组: ``` $ install --preserve-timestamp \ --owner=skenlon \ --group=dialout \ --mode=666 example/foo . $ ls -li 7476869 -rw-rw-rw-. 1 skenlon dialout 29545 Aug 2 07:28 foo $ trash example/foo ``` ### 移动、复制和删除 文件包含数据,而真正重要的文件包含*你的*数据。学会聪明地管理它们是很重要的,现在你有了确保以你想要的方式来处理数据的工具包。 你是否有不同的数据管理方式?在评论中告诉我们你的想法。 --- via: <https://opensource.com/article/19/8/moving-files-linux-without-mv> 作者:[Seth Kenlon](https://opensource.com/users/sethhttps://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[MjSeven](https://github.com/MjSeven) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
The humble **mv** command is one of those useful tools you find on every POSIX box you encounter. Its job is clearly defined, and it does it well: Move a file from one place in a file system to another. But Linux is nothing if not flexible, and there are other options for moving files. Using different tools can provide small advantages that fit perfectly with a specific use case. Before straying too far from **mv**, take a look at this command’s default results. First, create a directory and generate some files with permissions set to 777: ``` $ mkdir example $ touch example/{foo,bar,baz} $ for i in example/*; do ls /bin > "${i}"; done $ chmod 777 example/* ``` You probably don't think about it this way, but files exist as entries, called index nodes (commonly known as **inodes**), in a [filesystem](https://opensource.com/article/18/11/partition-format-drive-linux#what-is-a-filesystem). You can see what inode a file occupies with the [ls command](https://opensource.com/article/19/7/master-ls-command) and its **--inode** option: ``` $ ls --inode example/foo 7476868 example/foo ``` As a test, move that file from the example directory to your current directory and then view the file’s attributes: ``` $ mv example/foo . $ ls -l -G -g --inode 7476868 -rwxrwxrwx. 1 29545 Aug 2 07:28 foo ``` As you can see, the original file—along with its existing permissions—has been "moved", but its inode has not changed. That’s the way the **mv** tool is programmed to move a file: Leave the inode unchanged (unless the file is being moved to a different filesystem), and preserve its ownership and permissions. Other tools provide different options. ## Copy and remove On some systems, the move action is a true move action: Bits are removed from one point in the file system and reassigned to another. This behavior has largely fallen out of favor. Move actions are now either attribute reassignments (an inode now points to a different location in your file organization) or amalgamations of a copy action followed by a remove action. The philosophical intent of this design is to ensure that, should a move fail, a file is not left in pieces. The **cp** command, unlike **mv**, creates a brand new data object in your filesystem. It has a new inode location, and it is subject to your active umask. You can mimic a move using the **cp** and **rm** (or [trash](https://gitlab.com/trashy) if you have it) commands: ``` $ cp example/foo . $ ls -l -G -g --inode 7476869 -rwxrwxr-x. 29545 Aug 2 11:58 foo $ trash example/foo ``` The new **foo** file in this example got 775 permissions because the location’s umask specifically excludes write permissions: ``` $ umask 0002 ``` For more information about umask, read Alex Juarez’s article about [file permissions](https://opensource.com/article/19/8/linux-permissions-101#umask). ## Cat and remove Similar to a copy and remove, using the [cat](https://opensource.com/article/19/2/getting-started-cat-command) (or **tac**, for that matter) command assigns different permissions when your "moved" file is created. Assuming a fresh test environment with no **foo** in the current directory: ``` $ cat example/foo > foo $ ls -l -G -g --inode 7476869 -rw-rw-r--. 29545 Aug 8 12:21 foo $ trash example/foo ``` This time, a new file was created with no prior permissions set. The result is entirely subject to the umask setting, which blocks no permission bit for the user and group (the executable bit is not granted for new files regardless of umask), but it blocks the write (value two) bit from others. 
The result is a file with 664 permissions. ## Rsync The **rsync** command is a robust multipurpose tool to send files between hosts and file system locations. This command has many options available to it, including the ability to make its destination mirror its source. You can copy and then remove a file with **rsync** using the **--remove-source-files** option, along with whatever other option you choose to perform the synchronization (a common, general-purpose one is **--archive**): ``` $ rsync --archive --remove-source-files example/foo . $ ls example bar baz $ ls -lGgi 7476870 -rwxrwxrwx. 1 seth users 29545 Aug 8 12:23 foo ``` Here you can see that file permission and ownership was retained, the timestamp was updated, and the source file was removed. **A word of warning:** Do not confuse this option for **--delete**, which removes files from your *destination* directory. Misusing **--delete** can wipe out most of your data, and it's recommended that you avoid this option except in a test environment. You can override some of these defaults, changing permission and modification settings: ``` $ rsync --chmod=666 --times \ --remove-source-files example/foo . $ ls example bar baz $ ls -lGgi 7476871 -rw-rw-r--. 1 seth users 29545 Aug 8 12:55 foo ``` Here, the destination's umask is respected, so the **--chmod=666** option results in a file with 664 permissions. The benefits go beyond just permissions, though. The **rsync** command has [many](https://opensource.com/article/19/5/advanced-rsync) useful [options](https://opensource.com/article/17/1/rsync-backup-linux) (not the least of which is the **--exclude** flag so you can exempt items from a large move operation) that make it a more robust tool than the simple **mv** command. For example, to exclude all backup files while moving a collection of files: ``` $ rsync --chmod=666 --times \ --exclude '*~' \ --remove-source-files example/foo . ``` ## Set permissions with install The **install** command is a copy command specifically geared toward developers and is mostly invoked as part of the install routine of software compiling. It's not well known among users (and I do often wonder why it got such an intuitive name, leaving mere acronyms and pet names for package managers), but **install** is actually a useful way to put files where you want them. There are many options for the **install** command, including **--backup** and **--compare** command (to avoid "updating" a newer copy of a file). Unlike **cp** and **cat**, but exactly like **mv**, the **install** command can copy a file while preserving its timestamp: ``` $ install --preserve-timestamp example/foo . $ ls -l -G -g --inode 7476869 -rwxr-xr-x. 1 29545 Aug 2 07:28 foo $ trash example/foo ``` Here, the file was copied to a new inode, but its **mtime** did not change. The permissions, however, were set to the **install** default of **755**. You can use **install** to set the file's permissions, owner, and group: ``` $ install --preserve-timestamp \ --owner=skenlon \ --group=dialout \ --mode=666 example/foo . $ ls -li 7476869 -rw-rw-rw-. 1 skenlon dialout 29545 Aug 2 07:28 foo $ trash example/foo ``` ## Move, copy, and remove Files contain data, and the really important files contain *your* data. Learning to manage them wisely is important, and now you have the toolkit to ensure that your data is handled in exactly the way you want. Do you have a different way of managing your data? Tell us your ideas in the comments.
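To make the comparisons above easy to reproduce, here is a small, illustrative test harness (not a standard utility) that prints the inode and octal mode before and after whichever "move" strategy you uncomment; it assumes GNU stat:

```
#!/usr/bin/env bash
# Illustrative harness for the strategies discussed above.
# %i = inode, %a = octal permissions, %n = file name (GNU stat).
set -euo pipefail

show() { stat --format='%i %a %n' "$1"; }

mkdir -p example
touch example/foo
chmod 777 example/foo
echo "before:"; show example/foo

mv example/foo .                        # same inode, same mode
# cp example/foo . && rm example/foo    # new inode, mode masked by umask
# install example/foo .                 # new inode, mode 755 by default

echo "after:"; show foo
```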
11,322
你通常打开多少个浏览器标签页?
https://opensource.com/article/19/6/how-many-browser-tabs
2019-09-09T07:03:53
[ "浏览器" ]
https://linux.cn/article-11322-1.html
> > 外加一些提高浏览器效率的技巧。 > > > ![Browser of things](/data/attachment/album/201909/09/070357cnk8qnqqqq2znlqd.png "Browser of things") 这里有一个颇有深意的问题:你通常一次打开多少个浏览器标签页?你是否有多个窗口,每个窗口都有多个标签页?或者你是一个极简主义者,一次只打开两个标签页。另一种选择是将一个 20 个标签页的浏览器窗口移动到另一个屏幕上去,这样在处理特定任务时它就不会碍事了。你的处理方法在工作、个人和移动浏览器之间有什么不同吗?你的浏览器策略是否与你的[工作习惯](https://enterprisersproject.com/article/2019/1/5-time-wasting-habits-break-new-year)有关? ### 4 个提高浏览器效率的技巧 1. 了解浏览器快捷键以节省单击。无论你使用 Firefox 还是 Chrome,都有很多快捷键可以让你方便地执行包括切换标签页在内的某些功能。例如,Chrome 可以很方便地打开一个空白的谷歌文档。使用快捷键 `Ctrl + t` 打开一个新标签页,然后键入 `doc.new` 即可。电子表格、幻灯片和表单也可以这样做。 2. 用书签文件夹组织最频繁的任务。当开始一项特定的任务时,只需打开文件夹中的所有书签 (`Ctrl + 点击`),就可以快速地从列表中勾选它。 3. 使用正确的浏览器扩展。成千上万的浏览器扩展都声称可以提高工作效率。在安装之前,确定你不是仅仅在屏幕上添加更多的干扰而已。 4. 使用计时器减少看屏幕的时间。无论你使用的是老式的煮蛋计时器还是花哨的浏览器扩展,都没有关系。为了防止眼睛疲劳,执行 20/20/20 规则。每隔 20 分钟,离开屏幕 20 秒,看看 20 英尺以外的东西。 参加我们的投票来分享你一次打开多少个浏览器标签。请务必在评论中告诉我们你最喜欢的浏览器技巧。 生产力有两个组成部分——做正确的事情和高效地做那些事情…… --- via: <https://opensource.com/article/19/6/how-many-browser-tabs> 作者:[Lauren Pritchett](https://opensource.com/users/lauren-pritchett/users/sarahwall/users/ksonney/users/jwhitehurst) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lujun9972](https://github.com/lujun9972) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Here's a potentially loaded question: How many browser tabs do you usually have open at one time? Do you have multiple windows, each with multiple tabs? Or are you a minimalist, and only have a couple of tabs open at once. Another option is to move a 20-tabbed browser window to a different monitor so that it is out of the way while working on a particular task. Does your approach differ between work, personal, and mobile browsers? Is your browser strategy related to your [productivity habits](https://enterprisersproject.com/article/2019/1/5-time-wasting-habits-break-new-year)? ## 4 tips for browser productivity - Know your browser shortcuts to save clicks. Whether you use Firefox or Chrome, there are plenty of keyboard shortcuts to help make switching between tabs and performing certain functions a breeze. For example, Chrome makes it easy to open up a blank Google document. Use the shortcut **"Ctrl + t"** to open a new tab, then type **"doc.new"**. The same can be done for spreadsheets, slides, and forms. - Organize your most frequent tasks with bookmark folders. When it's time to start a particular task, simply open all of the bookmarks in the folder **(Ctrl + click)** to check it off your list quickly. - Get the right browser extensions for you. There are thousands of browser extensions out there all claiming to improve productivity. Before you install, make sure you're not just adding more distractions to your screen. - Reduce screen time by using a timer. It doesn't matter if you use an old-fashioned egg timer or a fancy browser extension. To prevent eye strain, implement the 20/20/20 rule. Every 20 minutes, take a 20-second break from your screen and look at something 20 feet away. Take our poll to share how many browser tabs you like to have open at once. Be sure to tell us about your favorite browser tricks in the comments.
11,324
如何改变你的终端颜色
https://opensource.com/article/19/9/linux-terminal-colors
2019-09-10T11:12:02
[ "终端" ]
/article-11324-1.html
> > 使 Linux 变得丰富多彩(或单色)。 > > > ![](/data/attachment/album/201909/10/111118bnvvcy6sntcs1s3t.jpg) 你可以使用特殊的 ANSI 编码设置为 Linux 终端添加颜色,可以在终端命令或配置文件中动态添加,也可以在终端仿真器中使用现成的主题。无论哪种方式,你都可以在黑色屏幕上找回怀旧的绿色或琥珀色文本。本文演示了如何使 Linux 变得丰富多彩(或单色)的方法。 ### 终端的功能特性 现代系统的终端的颜色配置通常默认至少是 xterm-256color,但如果你尝试为终端添加颜色但未成功,则应检查你的 `TERM` 设置。 从历史上看,Unix 终端从字面上讲是:用户可以输入命令的共享计算机系统上实际的物理端点(终点)。它们专指通常用于远程发出命令的电传打字机(这也是我们今天在 Linux 中仍然使用 `/dev/tty` 设备的原因)。终端内置了 CRT 显示器,因此用户可以坐在办公室的终端上直接与大型机进行交互。CRT 显示器无论是制造还是驱动都很昂贵;与其操心抗锯齿以及现代计算机用户习以为常的各种漂亮效果,不如让计算机直接吐出原始的 ASCII 文本来得容易。然而,即使在那时,技术的发展也很快,很快人们就会发现,随着新的视频显示终端的设计,他们需要新的功能特性来提供可选功能。 例如,1978 年发布的花哨的新 VT100 支持 ANSI 颜色,因此如果用户将终端类型识别为 vt100,则计算机可以提供彩色输出,而基本串行设备可能没有这样的选项。同样的原则适用于今天,它是由 `TERM` [环境变量](https://opensource.com/article/19/8/what-are-environment-variables)设定的。你可以使用 `echo` 检查你的 `TERM` 定义: ``` $ echo $TERM xterm-256color ``` 过时的(但在一些系统上仍然为了向后兼容而维护)`/etc/termcap` 文件定义了终端和打印机的功能特性。现代的版本是 `terminfo`,位于 `/etc` 或 `/usr/share` 中,具体取决于你的发行版。 这些文件列出了不同类型终端中可用的功能特性,其中许多都是由历史上的硬件定义的,如 vt100 到 vt220 的定义,以及 xterm 和 Xfce 等现代软件仿真器。大多数软件并不关心你使用的终端类型;在极少数情况下,登录到检查兼容功能的服务器时,你可能会收到有关错误的终端类型的警告或错误。如果你的终端设置为功能特性很少的配置文件,但你知道你所使用的仿真器能够支持更多功能特性,那么你可以通过定义 `TERM` 环境变量来更改你的设置。你可以通过在 `~/.bashrc` 配置文件中导出 `TERM` 变量来完成此操作: ``` export TERM=xterm-256color ``` 保存文件并重新载入设置: ``` $ source ~/.bashrc ``` ### ANSI 颜色代码 现代终端继承了用于“元”特征的 ANSI 转义序列。这些是特殊的字符序列,终端将其解释为操作而不是字符。例如,此序列将清除屏幕,直到下一个提示符: ``` $ printf '\033[2J' ``` 它不会清除你的历史信息;它只是清除终端仿真器中的屏幕,因此它是一个安全且具有示范性的 ANSI 转义序列。 ANSI 还具有设置终端颜色的序列。例如,键入此代码会将后续文本更改为绿色: ``` $ printf '\033[32m' ``` 只要你为同一台计算机始终使用同一种颜色,就可以借助颜色来帮助你记住你登录的系统。例如,如果你经常通过 SSH 连接到服务器,则可以将服务器的提示符设置为绿色,以帮助你一目了然地将其与本地的提示符区分开来。 要设置绿色提示符,请在提示符前使用 ANSI 代码设置为绿色,并使用代表正常默认颜色的代码结束: ``` export PS1=`printf "\033[32m$ \033[39m"` ``` ### 前景色和背景色 你不仅可以设置文本的颜色。使用 ANSI 代码,你还可以控制文本的背景颜色以及做一些基本的样式。 例如,使用 `\033[4m`,你可以为文本加上下划线,或者使用 `\033[5m` 你可以将其设置为闪烁的文本。起初这可能看起来很愚蠢,因为你可能不会将你的终端设置为所有文本带有下划线并整天闪烁,但它对某些功能很有用。例如,你可以将 shell 脚本生成的紧急错误设置为闪烁(作为对用户的警报),或者你可以为 URL 添加下划线。 作为参考,以下是前景色和背景色的代码。前景色在 30 范围内,而背景色在 40 范围内: | 颜色 | 前景色 | 背景色 | | --- | --- | --- | | 黑色 | \033[30m | \033[40m | | 红色 | \033[31m | \033[41m | | 绿色 | \033[32m | \033[42m | | 橙色 | \033[33m | \033[43m | | 蓝色 | \033[34m | \033[44m | | 品红 | \033[35m | \033[45m | | 青色 | \033[36m | \033[46m | | 浅灰 | \033[37m | \033[47m | | 回退到发行版默认值 | \033[39m | \033[49m | 还有一些可用于背景的其他颜色: | 颜色 | 背景色 | | --- | --- | | 深灰 | \033[100m | | 浅红 | \033[101m | | 浅绿 | \033[102m | | 黄色 | \033[103m | | 浅蓝 | \033[104m | | 浅紫 | \033[105m | | 蓝绿 | \033[106m | | 白色 | \033[107m | ### 持久设置 在终端会话中设置颜色只是暂时的,相对无条件的。有时效果会持续几行;这是因为这种设置颜色的方法依赖于 `printf` 语句来设置一种模式,该模式仅持续到其他设置覆盖它。 终端仿真器通常从 `LS_COLORS` 环境变量的设置中获取使用哪种颜色的指令,该设置又由 `dircolors` 的设置填充。你可以使用 `echo` 语句查看当前设置: ``` $ echo $LS_COLORS rs=0:di=38;5;33:ln=38;5;51:mh=00:pi=40; 38;5;11:so=38;5;13:do=38;5;5:bd=48;5; 232;38;5;11:cd=48;5;232;38;5;3:or=48; 5;232;38;5;9:mi=01;05;37;41:su=48;5; 196;38;5;15:sg=48;5;11;38;5;16:ca=48;5; 196;38;5;226:tw=48;5;10;38;5;16:ow=48;5; [...] ``` 或者你可以直接使用 `dircolors`: ``` $ dircolors --print-database [...] # image formats .jpg 01;35 .jpeg 01;35 .mjpg 01;35 .mjpeg 01;35 .gif 01;35 .bmp 01;35 .pbm 01;35 .tif 01;35 .tiff 01;35 [...]
``` 这看起来很神秘。文件类型后面的第一个数字是属性代码,它有六种选择: * 00 无 * 01 粗体 * 04 下划线 * 05 闪烁 * 07 反白 * 08 暗色 下一个数字是简化形式的颜色代码。你可以通过获取 ANSI 代码的最后一个数字来获取颜色代码(绿色前景为 32,绿色背景为 42;红色为 31 或 41,依此类推)。 你的发行版可能全局设置了 `LS_COLORS`,因此系统上的所有用户都会继承相同的颜色。如果你想要一组自定义的颜色,可以使用 `dircolors`。首先,生成颜色设置的本地副本: ``` $ dircolors --print-database > ~/.dircolors ``` 根据需要编辑本地列表。如果你对自己的选择感到满意,请保存文件。你的颜色设置只是一个数据库,不能由 [ls](https://opensource.com/article/19/7/master-ls-command) 直接使用,但你可以使用 `dircolors` 获取可用于设置 `LS_COLORS` 的 shellcode: ``` $ dircolors --bourne-shell ~/.dircolors LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00: pi=40;33:so=01;35:do=01;35:bd=40;33;01: cd=40;33;01:or=40;31;01:mi=00:su=37;41: sg=30;43:ca=30;41:tw=30;42:ow=34; [...] export LS_COLORS ``` 将输出复制并粘贴到 `~/.bashrc` 文件中并重新加载。或者,你可以将该输出直接转储到 `.bashrc` 文件中并重新加载。 ``` $ dircolors --bourne-shell ~/.dircolors >> ~/.bashrc $ source ~/.bashrc ``` 你也可以在启动时使 Bash 解析 `.dircolors` 而不是手动进行转换。实际上,你可能不会经常改变颜色,所以这可能过于激进,但如果你打算改变你的配色方案,这是一个选择。在 `.bashrc` 文件中,添加以下规则: ``` [[ -e $HOME/.dircolors ]] && eval "`dircolors --sh $HOME/.dircolors`" ``` 如果你的主目录中有 `.dircolors` 文件,Bash 会在启动时对其进行评估并相应地设置 `LS_COLORS`。 ### 颜色 在终端中使用颜色是一种可以为你自己提供特定信息的快速视觉参考的简单方法。但是,你可能不希望过于依赖它们。毕竟,颜色不是通用的,所以如果其他人使用你的系统,他们可能不会像你那样看懂颜色代表的含义。此外,如果你使用各种工具与计算机进行交互,你可能还会发现某些终端或远程连接无法提供你期望的颜色(或根本不提供颜色)。 除了上述警示之外,颜色在某些工作流程中可能很有用且很有趣,因此创建一个 `.dircolors` 数据库并根据你的想法对其进行自定义吧。 --- via: <https://opensource.com/article/19/9/linux-terminal-colors> 作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
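作为上面两张颜色代码表的补充,下面是一个简单的演示脚本(仅为示意,实际效果取决于你的终端仿真器对这些 ANSI 代码的支持),它遍历正文列出的前景色和背景色并打印样本:

```
#!/usr/bin/env bash
# 演示:打印正文表格中列出的 ANSI 前景色与背景色样本。
# \033[39m 和 \033[49m 分别把前景色和背景色恢复为默认值。

for fg in 30 31 32 33 34 35 36 37; do
    printf '\033[%sm 前景 %s \033[39m ' "$fg" "$fg"
done
printf '\n'

for bg in 40 41 42 43 44 45 46 47 100 101 102 103 104 105 106 107; do
    printf '\033[%sm 背景 %s \033[49m ' "$bg" "$bg"
done
printf '\n'
```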
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
11,325
如何使用 GNOME 的 Internet Radio 播放流音乐
https://opensource.com/article/19/6/gnome-internet-radio
2019-09-10T11:40:57
[ "网络广播" ]
/article-11325-1.html
> > 如果你正在寻找一个简单、直观的界面,让你可以播放流媒体,可以尝试一下 GNOME 的 Internet Radio 插件。 > > > ![](/data/attachment/album/201909/10/114049ppzxeug7xx7jm7ko.jpg) 网络广播是收听世界各地电台节目的好方法。和许多开发人员一样,我喜欢在编写代码时打开电台。你可以使用 [MPlayer](https://opensource.com/article/18/12/linux-toy-mplayer) 或 [mpv](https://mpv.io/) 等终端媒体播放器收听网络广播,我就是这样通过 Linux 命令行收听广播的。但是,如果你喜欢使用图形用户界面 (GUI),你可以尝试一下 [GNOME Internet Radio](https://extensions.gnome.org/extension/836/internet-radio/),这是一个用于 GNOME 桌面的漂亮插件。你可以在包管理器中找到它。 ![GNOME Internet Radio plugin](/data/attachment/album/201909/10/114101if8uw9frzhz98rwm.png "GNOME Internet Radio plugin") 使用图形桌面操作系统收听网络广播通常需要启动一个应用程序,比如 [Audacious](https://audacious-media-player.org/) 或 [Rhythmbox](https://help.gnome.org/users/rhythmbox/stable/)。它们有很好的界面,很多选项,以及很酷的音频可视化工具。但如果你只想要一个简单、直观的界面播放你的流媒体,GNOME Internet Radio 就是你的选择。 安装之后,工具栏中会出现一个小图标,你可以在其中进行所有配置和管理。 ![GNOME Internet Radio icons](/data/attachment/album/201909/10/114101zbbkktlye7pxzbfy.png "GNOME Internet Radio icons") 我做的第一件事是进入设置菜单。我启用了以下两个选项:显示标题通知和显示音量调整。 ![GNOME Internet Radio Settings](/data/attachment/album/201909/10/114101f89bt5icmb9tmiq7.png "GNOME Internet Radio Settings") GNOME Internet Radio 包含一些预置的电台,并且很容易添加其他电台。只需点击(“+”)符号即可。你需要输入一个频道名称,它可以是你喜欢的任何内容(包括电台名称)和电台地址。例如,我喜欢听 Synthetic FM。我输入名称(Synthetic FM),以及流地址(<https://mediaserv38.live-streams.nl:2199/tunein/syntheticfm.pls>)。 然后单击流旁边的星号将其添加到菜单中。 不管你听什么音乐,不管你选择什么类型,很明显,程序员需要他们的音乐!GNOME Internet Radio 插件让你可以轻松地排入并收听你喜爱的网络电台。 --- via: <https://opensource.com/article/19/6/gnome-internet-radio> 作者:[Alan Formy-Duval](https://opensource.com/users/alanfdoss/users/r3bl) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lujun9972](https://github.com/lujun9972) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
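顺带一提,如果你更喜欢命令行,也可以用文中提到的 mpv 播放同一个流。下面是一个示意脚本(电台列表的写法只是示例,并非任何工具的固定格式;流地址取自上文):

```
#!/usr/bin/env bash
# 示意:用 mpv 在命令行播放上文提到的 Synthetic FM 流。
declare -A stations=(
    ["synthetic-fm"]="https://mediaserv38.live-streams.nl:2199/tunein/syntheticfm.pls"
)

station="${1:-synthetic-fm}"
exec mpv --no-video "${stations[$station]}"
```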
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
11,327
如何构建 Fedora 容器镜像
https://fedoramagazine.org/how-to-build-fedora-container-images/
2019-09-10T12:23:23
[ "容器", "镜像" ]
https://linux.cn/article-11327-1.html
![](/data/attachment/album/201909/10/122249tpt2f8fti37ie33f.jpg) 随着容器和容器技术的兴起,现在所有主流的 Linux 发行版都提供了容器基础镜像。本文介绍了 Fedora 项目如何构建其基础镜像,同时还展示了如何使用它来创建分层镜像。 ### 基础和分层镜像 在看如何构建 Fedora 容器<ruby> 基础镜像 <rt> base image </rt></ruby>之前,让我们定义基础镜像和<ruby> 分层镜像 <rt> layered image </rt></ruby>。一个简单的定义是:基础镜像就是没有父镜像层的镜像。但这具体意味着什么呢?这意味着基础镜像通常只包含操作系统的根文件系统(rootfs)。基础镜像通常提供安装软件以创建分层镜像所需的工具。 分层镜像在基础镜像上添加了一组层,以便安装、配置和运行应用。分层镜像在 Dockerfile 中使用 `FROM` 指令引用基础镜像: ``` FROM fedora:latest ``` ### 如何构建基础镜像 Fedora 有一整套用于构建容器镜像的工具。[其中包括 podman](/article-10156-1.html),它不需要以 root 身份运行。 #### 构建 rootfs 基础镜像主要由一个 [tarball](https://en.wikipedia.org/wiki/Tar_(computing)) 构成。这个 tarball 包含一个 rootfs。有不同的方法来构建此 rootfs。Fedora 项目使用 [kickstart](https://en.wikipedia.org/wiki/Kickstart_(Linux)) 安装方式以及 [imagefactory](http://imgfac.org/) 来创建这些 tarball。 在创建 Fedora 基础镜像期间使用的 kickstart 文件可以在 Fedora 的构建系统 [Koji](https://koji.fedoraproject.org/koji/) 中找到。[Fedora-Container-Base](https://koji.fedoraproject.org/koji/packageinfo?packageID=26387) 包汇集了所有基础镜像的构建版本。如果选择了一个构建版本,那么可以访问所有相关文件,包括 kickstart 文件。查看 [示例](https://kojipkgs.fedoraproject.org//packages/Fedora-Container-Base/30/20190902.0/images/koji-f30-build-37420478-base.ks),文件末尾的 `%packages` 部分定义了要安装的所有软件包。这就是让软件放在基础镜像中的方法。 #### 使用 rootfs 构建基础镜像 rootfs 完成后,构建基础镜像就很容易了。它只需要一个包含以下指令的 Dockerfile: ``` FROM scratch ADD layer.tar / CMD ["/bin/bash"] ``` 这里的重要部分是 `FROM scratch` 指令,它会创建一个空镜像。然后,接下来的指令将 rootfs 添加到镜像,并设置在运行镜像时要执行的默认命令。 让我们使用 Koji 内置的 Fedora rootfs 构建一个基础镜像: ``` $ curl -o fedora-rootfs.tar.xz https://kojipkgs.fedoraproject.org/packages/Fedora-Container-Base/Rawhide/20190902.n.0/images/Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz $ tar -xJvf fedora-rootfs.tar.xz 51c14619f9dfd8bf109ab021b3113ac598aec88870219ff457ba07bc29f5e6a2/layer.tar $ mv 51c14619f9dfd8bf109ab021b3113ac598aec88870219ff457ba07bc29f5e6a2/layer.tar layer.tar $ printf "FROM scratch\nADD layer.tar /\nCMD [\"/bin/bash\"]" > Dockerfile $ podman build -t my-fedora . $ podman run -it --rm my-fedora cat /etc/os-release ``` 需要从下载的存档中提取包含 rootfs 的 `layer.tar` 文件。之所以需要这一步,是因为 Fedora 生成的镜像已经可以直接被容器运行时使用。 因此,使用 Fedora 生成的镜像,获得基础镜像会更容易。让我们看看它是如何工作的: ``` $ curl -O https://kojipkgs.fedoraproject.org/packages/Fedora-Container-Base/Rawhide/20190902.n.0/images/Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz $ podman load --input Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz $ podman run -it --rm localhost/fedora-container-base-rawhide-20190902.n.0.x86_64:latest cat /etc/os-release ``` ### 构建分层镜像 要构建使用 Fedora 基础镜像的分层镜像,只需在 `FROM` 行指令中指定 `fedora`: ``` FROM fedora:latest ``` `latest` 标记引用了最新的 Fedora 版本(编写本文时是 Fedora 30)。但是可以使用镜像的标签来使用其他版本。例如,`FROM fedora:31` 将使用 Fedora 31 基础镜像。 Fedora 支持将软件作为容器来构建并发布。这意味着你可以维护 Dockerfile 来使其他人可以使用你的软件。关于在 Fedora 中成为容器镜像维护者的更多信息,请查看 [Fedora 容器指南](https://docs.fedoraproject.org/en-US/containers/guidelines/guidelines/)。 --- via: <https://fedoramagazine.org/how-to-build-fedora-container-images/> 作者:[Clément Verna](https://fedoramagazine.org/author/cverna/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
With the rise of containers and container technology, all major Linux distributions nowadays provide a container base image. This article presents how the Fedora project builds its base image. It also shows you how to use it to create a layered image. ## Base and layered images Before we look at how the Fedora container base image is built, let’s define a base image and a layered image. A simple way to define a base image is an image that has no parent layer. But what does that concretely mean? It means a base image usually contains only the root file system (*rootfs*) of an operating system. The base image generally provides the tools needed to install software in order to create layered images. A layered image adds a collections of layers on top of the base image in order to install, configure, and run an application. Layered images reference base images in a *Dockerfile* using the *FROM* instruction: FROM fedora:latest ## How to build a base image Fedora has a full suite of tools available to build container images. [This includes ](https://fedoramagazine.org/running-containers-with-podman/)* podman*, which does not require running as the root user. ### Building a rootfs A base image comprises mainly a [tarball](https://en.wikipedia.org/wiki/Tar_(computing)). This tarball contains a rootfs. There are different ways to build this rootfs. The Fedora project uses the [kickstart](https://en.wikipedia.org/wiki/Kickstart_(Linux)) installation method coupled with [imagefactory](http://imgfac.org/) software to create these tarballs. The kickstart file used during the creation of the Fedora base image is available in Fedora’s build system [Koji](https://koji.fedoraproject.org/koji/). The * Fedora-Container-Base* package regroups all the base image builds. If you select a build, it gives you access to all the related artifacts, including the kickstart files. Looking at an [example](https://kojipkgs.fedoraproject.org//packages/Fedora-Container-Base/30/20190902.0/images/koji-f30-build-37420478-base.ks), the *%packages*section at the end of the file defines all the packages to install. This is how you make software available in the base image. ### Using a rootfs to build a base image Building a base image is easy, once a rootfs is available. It requires only a Dockerfile with the following instructions: FROM scratch ADD layer.tar / CMD ["/bin/bash"] The important part here is the *FROM scratch* instruction, which is creating an empty image. The following instructions then add the rootfs to the image, and set the default command to be executed when the image is run. Let’s build a base image using a Fedora rootfs built in Koji: $ curl -o fedora-rootfs.tar.xz https://kojipkgs.fedoraproject.org/packages/Fedora-Container-Base/Rawhide/20190902.n.0/images/Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz $ tar -xJvf fedora-rootfs.tar.xz 51c14619f9dfd8bf109ab021b3113ac598aec88870219ff457ba07bc29f5e6a2/layer.tar $ mv 51c14619f9dfd8bf109ab021b3113ac598aec88870219ff457ba07bc29f5e6a2/layer.tar layer.tar $ printf "FROM scratch\nADD layer.tar /\nCMD [\"/bin/bash\"]" > Dockerfile $ podman build -t my-fedora . $ podman run -it --rm my-fedora cat /etc/os-release The *layer.tar* file which contains the rootfs needs to be extracted from the downloaded archive. This is only needed because Fedora generates images that are ready to be consumed by a container run-time. So using Fedora’s generated image, it’s even easier to get a base image. 
Let’s see how that works: $ curl -O https://kojipkgs.fedoraproject.org/packages/Fedora-Container-Base/Rawhide/20190902.n.0/images/Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz $ podman load --input Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz $ podman run -it --rm localhost/fedora-container-base-rawhide-20190902.n.0.x86_64:latest cat /etc/os-release ## Building a layered image To build a layered image that uses the Fedora base image, you only need to specify *fedora* in the *FROM* line instruction: FROM fedora:latest The *latest* tag references the latest active Fedora release (Fedora 30 at the time of writing). But it is possible to get other versions using the image tag. For example, *FROM fedora:31* will use the Fedora 31 base image. Fedora supports building and releasing software as containers. This means you can maintain a Dockerfile to make your software available to others. For more information about becoming a container image maintainer in Fedora, check out the [Fedora Containers Guidelines](https://docs.fedoraproject.org/en-US/containers/guidelines/guidelines/). ## Sandra Have it been considered to write an article “docker-compose the podman way”? According to this article and podman docs, there will never be an official podman-compose, so it would be very useful to get an article on how to migrate away from docker-compose. https://mkdev.me/en/posts/dockerless-part-3-moving-development-environment-to-containers-with-podman ## Clément Verna @Sandra I think that this would be a great article. Would you be interested in writing it ? In case you are I ll point you to the documentation on how to start contributing 🙂 https://docs.fedoraproject.org/en-US/fedora-magazine/contributing/ ## 0xSheepdog That would be amazing. One of my biggest frustrations trying to learn “the container way” without drinking the docker kool-aid is nearly everything is offered as a docker container/cluster with docker-compose. I feel like i have to learn a LOT of docker just so I can understand how to do containers WITHOUT docker. ## Blaise Yes!!! Or maybe using ansible to direct the compose settings to podman? ## Raphael G Thanks for sharing the link. It’s a nice one! ## Osqui Why not talking about mkosi? It’s another beast but using it with systemd-nspawn is a charm. Thanks ## Clément Verna @Osqui I did not know about mkosi thanks for pointing me to it, I ll have a look at it ## Ph0zzy What about: # # Generate minimal container image # set -ex # start new container from scratch newcontainer=$(buildah from scratch) scratchmnt=$(buildah mount ${newcontainer}) # install the packages dnf install --installroot ${scratchmnt} bash coreutils --releasever 30 --setopt=tsflags=nodocs --setopt=override_install_langs=en_US.utf8 -y # Clean up dnf cache if [ -d "${scratchmnt}" ]; then rm -rf "${scratchmnt}"/var/cache/dnf fi # configure container label and entrypoint buildah config --label name=f30-minimal ${newcontainer} buildah config --cmd /bin/bash ${newcontainer} # commit the image buildah unmount ${newcontainer} buildah commit ${newcontainer} f30-minimal ? ## Bill Has any thought been given to using a 64 bit base image and adding an image of a current install as a layer to convert from 32-bit to 64 bit Fedora? ## Raphael Groner What about Foreman or SaltStack? There’s definitely a comparison need. Although, I don’t intend to start a flamewar about features. ## Mohnish Nice suggestions everyone, thanks. Please take the lead in writing a write-up/follow-up for future edition.
11,328
Hyperledger Fabric 介绍
https://opensource.com/article/19/9/introduction-hyperledger-fabric
2019-09-11T10:59:52
[ "Hyperledger", "区块链" ]
https://linux.cn/article-11328-1.html
> > Hyperledger (超级账本)是一组开源工具,旨在构建一个强大的、业务驱动的区块链框架。 > > > ![](/data/attachment/album/201909/11/105935hm606vso3fclzso6.jpg) [Hyperledger](https://www.hyperledger.org/) (超级账本)是区块链行业中最大的项目之一,它由一组开源工具和多个子项目组成。该项目是由 Linux 基金会主办的一个全球协作项目,其中包括一些不同领域的领导者们,这些领导者们的目标是建立一个强大的、业务驱动的区块链框架。 区块链网络主要有三种类型:公共区块链、联盟或联合区块链,以及私有区块链。Hyperledger 是一个区块链框架,旨在帮助公司建立私人或联盟许可的区块链网络,在该网络中,多个组织可以共享控制和操作网络内节点的权限。 因为区块链是一个透明的,基于不可变模式的安全的去中心化系统,所以它被认为是传统的供应链行业改变游戏规则的一种解决方案。它可以通过以下方式支持有效的供应链系统: * 跟踪整个区块链中的产品 * 校验和验证区块链中的产品 * 在供应链参与者之间共享整个区块链的信息 * 提供可审核性 本文通过食品供应链的例子来解释 Hyperledger 区块链是如何改变传统供应链系统的。 ### 食品行业供应链 传统供应链效率低下的主要原因是由于缺乏透明度而导致报告不可靠和竞争上的劣势。 在传统的供应链模式中,有关实体的信息对该区块链中的其他人来说并不完全透明,这就导致了不准确的报告和缺乏互操作性问题。电子邮件和印刷文档提供了一些信息,但它们不可能包含完整详细的可见性数据,因为很难在整个供应链中去追踪产品。这也使消费者几乎不可能知道产品的真正价值和来源。 食品行业的供应链环境复杂,多个参与者需要协作将货物运送到最终目的地 —— 客户手中。下图显示了食品供应链(多级)网络中的主要参与者。 ![典型的食品供应链](/data/attachment/album/201909/11/105956kbcic8ctb7ub2cec.png "Typical food supply chain") 该区块链的每个阶段都会引入潜在的安全问题、整合问题和其他低效问题。目前食品供应链中的主要威胁仍然是假冒食品和食品欺诈。 基于 Hyperledger 区块链的食品跟踪系统可实现对食品信息全面的可视性和可追溯性。更重要的是,它以一种不变但可行的方式来记录产品细节,确保食品信息的真实性。最终用户通过在不可变框架上共享产品的详细信息,可以自我验证产品的真实性。 ### Hyperledger Fabric Hyperledger Fabric 是 Hyperledger 项目的基石。它是基于许可的区块链,或者更准确地说是一种分布式分类帐技术(DLT),该技术最初由 IBM 公司和 Digital Asset 创建。分布式分类帐技术被设计为具有不同组件的模块化框架(概述如下)。它也是提供可插入的共识模型的一种灵活的解决方案,尽管它目前仅提供基于投票的许可共识(假设今天的 Hyperledger 网络在部分可信赖的环境中运行)。 鉴于此,无需匿名矿工来验证交易,也无需用作激励措施的相关货币。所有的参与者必须经过身份验证才能参与到该区块链进行交易。与以太坊一样,Hyperledger Fabric 支持智能合约,在 Hyperledger 中称为 <ruby> Chaincodes <rt> 链码 </rt></ruby>,这些合约描述并执行系统的应用程序逻辑。 然而,与以太坊不同,Hyperledger Fabric 不需要昂贵的挖矿计算来提交交易,因此它有助于构建可以在更短的延迟内进行扩展的区块链。 Hyperledger Fabric 不同于以太坊或比特币这样的区块链,不仅在于它们类型不同,或者说是它与货币无关,而且它们在内部机制方面也不同。以下是典型的 Hyperledger 网络的关键要素: * <ruby> 账本 <rt> Ledgers </rt></ruby>:存储了一系列块,这些块保留了所有状态交易的所有不可变历史记录。 * <ruby> 节点 <rt> Nodes </rt></ruby>:区块链的逻辑实体。它有三种类型: + <ruby> 客户端 <rt> Clients </rt></ruby>:是代表用户向网络提交事务的应用程序。 + <ruby> 对等体 <rt> Peers </rt></ruby>:是提交交易并维护分类帐状态的实体。 + <ruby> 排序者 <rt> Orderers </rt></ruby> 在客户端和对等体之间创建共享通信渠道,还将区块链交易打包成块发送给遵从的对等体节点。 除了这些要素,Hyperledger Fabric 还有以下关键设计功能: * <ruby> 链码 <rt> Chaincode </rt></ruby>:类似于其它诸如以太坊的网络中的智能合约。它是用一种更高级的语言编写的程序,在针对分类帐当前状态的数据库执行。 * <ruby> 通道 <rt> Channels </rt></ruby>:用于在多个网络成员之间共享机密信息的专用通信子网。每笔交易都在一个只有经过身份验证和授权的各方可见的通道上执行。 * <ruby> 背书人 <rt> Endorsers </rt></ruby> 验证交易,调用链码,并将背书的交易结果返回给调用应用程序。 * <ruby> 成员服务提供商 <rt> Membership Services Providers </rt></ruby>(MSP)通过颁发和验证证书来提供身份验证和认证过程。MSP 确定信任哪些证书颁发机构(CA)去定义信任域的成员,并确定成员可能扮演的特定角色(成员、管理员等)。 ### 如何验证交易 探究一笔交易是如何通过验证的是理解 Hyperledger Fabric 在底层如何工作的好方法。此图显示了在典型的 Hyperledger 网络中处理交易的端到端系统流程: ![Hyperledger 交易验证流程](/data/attachment/album/201909/11/105957inogr6s4gnfsgsgg.png "Hyperledger transaction validation flow") 首先,客户端通过向基于 Hyperledger Fabric 的应用程序客户端发送请求来启动交易,该客户端将交易提议提交给背书对等体。这些对等体通过执行由交易指定的链码(使用该状态的本地副本)来模拟该交易,并将结果发送回应用程序。此时,应用程序将交易与背书相结合,并将其广播给<ruby> 排序服务 <rt> Ordering Service </rt></ruby>。排序服务检查背书并为每个通道创建一个交易块,然后将其广播给通道中的其它节点,对等体验证该交易并进行提交。 Hyperledger Fabric 区块链可以通过透明的、不变的和共享的食品来源数据记录、处理数据,及运输细节等信息将食品供应链中的参与者们连接起来。链码由食品供应链中的授权参与者来调用。所有执行的交易记录都永久保存在分类帐中,所有参与者都可以查看此信息。 ### Hyperledger Composer 除了 Fabric 或 Iroha 等区块链框架外,Hyperledger 项目还提供了 Composer、Explorer 和 Cello 等工具。 Hyperledger Composer 提供了一个工具集,可帮助你更轻松地构建区块链应用程序。 它包括: * CTO,一种建模语言 * Playground,一种基于浏览器的开发工具,用于快速测试和部署 * 命令行界面(CLI)工具 Composer 支持 Hyperledger Fabric 的运行时和基础架构,在内部,Composer 的 API 使用底层 Fabric 的 API。Composer 在 Fabric 上运行,这意味着 Composer 生成的业务网络可以部署到 Hyperledger Fabric 执行。 --- via: <https://opensource.com/article/19/9/introduction-hyperledger-fabric> 作者:[Matt
Zand](https://opensource.com/users/mattzandhttps://opensource.com/users/ron-mcfarlandhttps://opensource.com/users/wonderchook) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Morisun029](https://github.com/Morisun029) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
One of the biggest projects in the blockchain industry, [Hyperledger](https://www.hyperledger.org/), is comprised of a set of open source tools and subprojects. It's a global collaboration hosted by The Linux Foundation and includes leaders in different sectors who are aiming to build a robust, business-driven blockchain framework. There are three main types of blockchain networks: public blockchains, consortiums or federated blockchains, and private blockchains. Hyperledger is a blockchain framework that aims to help companies build private or consortium permissioned blockchain networks where multiple organizations can share the control and permission to operate a node within the network. Since a blockchain is a transparent, immutable, and secure decentralized system, it is considered a game-changing solution for traditional supply chain industries. It can support an effective supply chain system by: - Tracking the products in the entire chain - Verifying and authenticating the products in the chain - Sharing the entire chain's information between supply chain actors - Providing auditability This article uses the example of a food supply chain to explain how a Hyperledger blockchain can transform a traditional supply chain. ## Food industry supply chains The main reason for classic supply chain inefficiency is lack of transparency, leading to unreliable reporting and competitive disadvantage. In traditional supply chain models, information about an entity is not fully transparent to others in the chain, which leads to inaccurate reports and a lack of interoperability. Emails and printed documents provide some information, but they can't contain fully detailed visibility data because the products are hard to trace across the entire supply chain. This also makes it nearly impossible for a consumer to know the true value and origin of a product. The food industry's supply chain is a difficult landscape, where multiple actors need to coordinate to deliver goods to their final destination, the customers. The following diagram shows the key actors in a food supply chain (multi-echelon) network. ![Typical food supply chain Typical food supply chain](https://opensource.com/sites/default/files/uploads/foodindustrysupplychain.png) Every stage of the chain introduces potential security vulnerabilities, integration problems, and other inefficiency issues. The main growing threat in current food supply chains remains counterfeit food and food fraud. A food-tracking system based on the Hyperledger blockchain enables full visibility, tracking, and traceability. More importantly, it ensures the authenticity of food by recording a product's details in an immutable and viable way. By sharing a product's details over an immutable framework, the end user can self-verify a product's authenticity. ## Hyperledger Fabric Hyperledger Fabric is the cornerstone of the Hyperledger project. It is a permission-based blockchain, or more accurately a distributed ledger technology (DLT), which was originally created by IBM and Digital Asset. It is designed as a modular framework with different components (outlined below). It is also a flexible solution offering a pluggable consensus model, although it currently only provides permissioned, voting-based consensus (with the assumption that today's Hyperledger networks operate in a partially trustworthy environment). Given this, there is no need for anonymous miners to validate transactions nor for an associated currency to act as an incentive. 
All participants must be authenticated to participate and transact on the blockchain. Like with Ethereum, Hyperledger Fabric supports smart contracts, called Chaincodes in Hyperledger, and these contracts describe and execute the system's application logic. Unlike Ethereum, however, Hyperledger Fabric doesn't require expensive mining computations to commit transactions, so it can help build blockchains that can scale up with less latency. Hyperledger Fabric is different from blockchains such as Ethereum or Bitcoin, not only in its type or because it is currency-agnostic, but also in terms of its internal machinery. Following are the key elements of a typical Hyperledger network: **Ledgers**store a chain of blocks that keep all immutable historical records of all state transitions.**Nodes**are the logical entities of the blockchain. There are three types: –**Clients**are applications that act on behalf of a user to submit transactions to the network. –**Peers**are entities that commit transactions and maintain the ledger state. –**Orderers**create a shared communication channel between clients and peers; they also package blockchain transactions into blocks and send them to committing peers Along with these elements, Hyperledger Fabric is based on the following key design features: **Chaincode**is similar to a smart contract in other networks, such as Ethereum. It is a program written in a higher-level language that executes against the ledger's current-state database.**Channels**are private communication subnets for sharing confidential information between multiple network members. Each transaction is executed on a channel that is visible only to the authenticated and authorized parties.**Endorsers**validate transactions, invoke Chaincode, and send the endorsed transaction results back to the calling applications.**Membership Services Providers**(MSPs) provide identity validation and authentication processes by issuing and validating certificates. An MSP identifies which certification authorities (CAs) are trusted to define the members of a trust domain and determines the specific roles an actor might play (member, admin, and so on). ## How transactions are validated Exploring how a transaction gets validated is a good way to understand how Hyperledger Fabric works under the hood. This diagram shows the end-to-end system flow for processing a transaction in a typical Hyperledger network: ![Hyperledger transaction validation flow Hyperledger transaction validation flow](https://opensource.com/sites/default/files/uploads/hyperledger-fabric-transaction-flow.png) First, the client initiates a transaction by sending a request to a Hyperledger Fabric-based application client, which submits the transaction proposal to endorsing peers. These peers simulate the transaction by executing the Chaincode (using a local copy of the state) specified by the transaction and sending the results back to the application. At this point, the application combines the transaction with the endorsements and broadcasts it to the Ordering Service. The Ordering Service checks the endorsements and creates a block of transactions for each channel before broadcasting them to all peers in the channel. Peers then verify the transactions and commit them. The Hyperledger Fabric blockchain can connect food supply chain participants through a transparent, permanent, and shared record of food-origin data, processing data, shipping details, and more. The Chaincode is invoked by authorized participants in the food supply chain. 
All executed transaction records are permanently saved in the ledger, and all entities can look up this information. ## Hyperledger Composer Alongside blockchain frameworks such as Fabric or Iroha, the Hyperledger project provides tools such as Composer, Hyperledger Explorer, and Cello. Hyperledger Composer provides a toolset to help build blockchain applications more easily. It consists of: - CTO, a modeling language - Playground, a browser-based development tool for rapid testing and deployment - A command-line interface (CLI) tool Composer supports the Hyperledger Fabric runtime and infrastructure, and internally the composer's API utilizes the underlying Fabric API. Composer runs on Fabric, meaning the business networks generated by Composer can be deployed to Hyperledger Fabric for execution. To learn more about Hyperledger, visit the [project's website](https://www.hyperledger.org/), where you can view the members, access training and tutorials, or find out how you can contribute. *This article is adapted from Coding Bootcamp's article Building A Blockchain Supply Chain Using Hyperledger Fabric and Composer and is used with permission.*
11,330
USB4 规范获得最终批准,像以太网一样快!
https://www.networkworld.com/article/3435113/usb4-gets-final-approval-offers-ethernet-like-speed.html
2019-09-11T11:23:47
[ "USB" ]
https://linux.cn/article-11330-1.html
> > USB4 会是一个统一的接口,可以淘汰笨重的电缆和超大的插头,并提供满足从笔记本电脑用户到服务器管理员的每个人的吞吐量。 > > > ![](/data/attachment/album/201909/11/112354mnkn99nxprdpa64z.jpg) <ruby> USB 开发者论坛 <rt> USB Implementers Forum </rt></ruby>(USB-IF)是通用串行总线(USB)规范开发背后的行业协会,上周宣布了它已经完成了下一代 USB4 的技术规范。 USB4 最重要的一个方面(它们在此版本中省略了首字母缩略词和版本号之间的空格)是它将 USB 与 Intel 设计的接口 Thunderbolt 3 融合在了一起。Thunderbolt 尽管有潜力,但在除了笔记本之外并未真正流行起来。出于这个原因,Intel 向 USB 联盟提供了 Thunderbolt 规范。 不幸的是,Thunderbolt 3 被列为 USB4 设备的一个可选项,因此有些设备会有,有些则不会。这无疑会让人头疼,希望所有设备制造商都会包括 Thunderbolt 3。 USB4 将使用与 USB type-C 相同的外形尺寸,这个小型插头用在所有现代 Android 手机和 Thunderbolt 3 中。它将向后兼容 USB 3.2、USB 2.0 以及 Thunderbolt。因此,几乎任何现有的USB type-C 设备都可以连接到具有 USB4 总线的机器,但将以连接电缆的额定速度运行。 ### USB4:体积更小,速度更快 因为它支持 Thunderbolt 3,所以这种新连接将同时支持数据和显示协议,因此这可能意味着小型 USB-C 端口将取代显示器上庞大的 DVI 端口,而显示器则带有多个 USB4 端口来作为集线器。 新标准的主要内容:它提供双通道 40Gbps 传输速度,是当前 USB 3.2 规格的两倍,是 USB 3 的八倍。这是以太网的速度,应该足够给你的高清显示器以及其他数据传输提供充足带宽。 USB4 也为视频提供了更好的资源分配,因此如果你使用 USB4 端口同时传输视频和数据,端口将相应地分配带宽。这将允许计算机同时使用外部独立的 GPU(因为有 Thunderbolt 3,它已经上市 )和外部 SSD。 这可能会启发各种新的服务器设计,因为大型、笨重的设备,如 GPU 或其他不易放入 1U 或 2U 机箱的卡板,现在可以外部连接并以与内部设备相当的速度运行。 当然,我们看到配备了 USB4 端口的电脑还需要一段时间,更别说服务器了。将 USB 3 用于 PC 花费了数年时间,而 USB-C 的采用速度非常缓慢。USB 2 的 U 盘仍然是这些设备的主要市场,并且主板仍然带有 USB 2 端口。 尽管如此,USB4 还是有可能成为一个统一的接口,可以摆脱拥有超大插头的笨重电缆,并提供满足从笔记本电脑用户到服务器管理员的每个人的吞吐量。 --- via: <https://www.networkworld.com/article/3435113/usb4-gets-final-approval-offers-ethernet-like-speed.html> 作者:[Andy Patrizio](https://www.networkworld.com/author/Andy-Patrizio/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
11,331
如何在 Ubuntu 上安装和使用 R 语言
https://itsfoss.com/install-r-ubuntu/
2019-09-11T11:52:57
[ "R语言" ]
https://linux.cn/article-11331-1.html
> > 这个教程指导你如何在 Ubuntu 上安装 R 语言。你也将同时学习到如何在 Ubuntu 上用不同方法运行简单的 R 语言程序。 > > > [R](https://www.r-project.org/),和 Python 一样,它是在统计计算和图形处理上最常用的编程语言,易于处理数据。随着数据分析、数据可视化、数据科学(机器学习热)的火热化,对于想深入这一领域的人来说,它是一个很好的工具。 R 语言的优点是它的语法非常简练,你可以找到它的很多实际使用的教程或指南。 本文将介绍包含如何在 Ubuntu 下安装 R 语言,也会介绍在 Linux 下如何运行第一个 R 程序。 ![](/data/attachment/album/201909/11/115259umnrin57vlrrhixl.jpg) ### 如何在 Ubuntu 上安装 R 语言 R 默认在 Ubuntu 的软件库里。用以下命令很容易安装: ``` sudo apt install r-base ``` 请注意这可能会安装一个较老的版本。在我写这篇文字的时候,Ubuntu 提供的是 3.4,但是最新的是3.6。 *我建议除非你必须使用最新版本,否则直接使用 Ubuntu 的配套版本。* #### 如何在 Ubuntu 上安装最新 3.6 版本的 R 环境 如果想安装最新的版本(或特殊情况指定版本),你必须用 [CRAN](https://cran.r-project.org/)(Comprehensive R Archive Network)。这个是 R 最新版本的镜像列表。 如需获取 3.6 的版本,需要添加镜像到你的源索引里。我已经简化其命令如下: ``` sudo add-apt-repository "deb https://cloud.r-project.org/bin/linux/ubuntu $(lsb_release -cs)-cran35/" ``` 下面你需要添加密钥到服务器中: ``` sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9 ``` 然后更新服务器信息并安装R环境: ``` sudo apt update sudo apt install r-base ``` 就这样安装完成。 ### 如何在 Ubuntu 下使用 R 语言编程 R 的用法多样,我将介绍运行多个运行 R 语言的方式。 #### R 语言的交互模式 安装了 R 语言后,你可以在控制台上直接运行: ``` R ``` 这样会打开交互模式: ![R Interactive Mode](/data/attachment/album/201909/11/115303c9i5c3zxw5luwfi3.png) R 语言的控制台与 Python 和 Haskell 的交互模式很类似。你可以输入 R 命令做一些基本的数学运算,例如: ``` > 20+40 [1] 60 > print ("Hello World!") [1] "Hello World!" ``` 你可以测试绘图: ![R Plotting](/data/attachment/album/201909/11/115307gb4wjy3jgmjyokbj.jpg) 如果想退出可以用 `q()`或按下 `CTRL+c`键。接着你会被提示是否保存工作空间镜像;工作空间是创建变量的环境。 #### 用 R 脚本运行程序 第二种运行 R 程序的方式是直接在 Linux 命令行下运行。你可以用 `RScript` 执行,它是一个包含 `r-base` 软件包的工具。 首先,你需要用你在 [Linux 下常用的编辑器](https://itsfoss.com/best-modern-open-source-code-editors-for-linux/)保存 R 程序到文件。文件的扩展名必须是 `.r`。 下面是一个打印 “Hello World” 的 R 程序。你可以保存其为 `hello.r`。 ``` print("Hello World!") a <- rnorm(100) plot(a) ``` 用下面命令运行 R 程序: ``` Rscript hello.r ``` 你会得到如下输出结果: ``` [1] "Hello World!" ``` 结果将会保存到当前工作目录,文件名为 `Rplots.pdf`: ![Rplots.pdf](/data/attachment/album/201909/11/115309u0ra26orxl9p5o0t.png) 小提示:`Rscript` 默认不会加载 `methods` 包。确保在脚本中[显式加载](https://www.dummies.com/programming/r/how-to-install-load-and-unload-packages-in-r/)它。 #### 在 Ubuntu 下用 RStudio 运行 R 语言 最常见的 R 环境是 [RStudio](https://www.rstudio.com/),这是一个强大的跨平台的开源 IDE。你可以用 deb 文件在 Ubuntu 上安装它。下载 deb 文件的链接如下。你需要向下滚动找到 Ubuntu 下的 DEB 文件。 * [下载 Ubuntu 的 Rstudio](https://www.rstudio.com/products/rstudio/download/#download) 下载了 DEB 文件后,直接点击安装。 下载后从菜单搜索启动它。程序主界面会弹出如下: ![RStudio 主界面](/data/attachment/album/201909/11/115311e66zyyfzdfyfczfa.jpg) 现在可以看到和 R 命令终端一样的工作台。 创建一个文件:点击顶栏 “File” 然后选择 “New File > Rscript”(或 `CTRL+Shift+n`): ![RStudio 新建文件](/data/attachment/album/201909/11/115313co2zwcja9166w91c.png) 按下 `CTRL+s` 保存文件选择路径和命名: ![RStudio 保存文件](/data/attachment/album/201909/11/115314zak1eqmqa664uzhk.png) 这样做了后,点击 “Session > Set Working Directory > To Source File Location” 修改工作目录为你的脚本路径: ![RStudio 工作目录](/data/attachment/album/201909/11/115315yj9f9zzblss499f4.png) 现在一切准备就绪!编写代码然后点击运行。你可以在控制台和图形窗口看到结果: ![RStudio 运行](/data/attachment/album/201909/11/115316h7m55ecn6qp6cwwg.jpg) ### 结束语 这篇文章中展示了如何在 Ubuntu 下使用 R 语言。包含了以下几个方面:R 控制台 —— 可用于测试,Rscript —— 终端达人操作,RStudio —— 你想要的 IDE。 无论你正在从事数据科学或只是热爱数据统计,作为一个数据分析的完美工具,R 都是一个比较好的编程装备。 你想使用 R 吗?你入门了吗?让我们了解你是如何学习 R 的以及为什么要学习 R! --- via: <https://itsfoss.com/install-r-ubuntu/> 作者:[Sergiu](https://itsfoss.com/author/sergiu/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[guevaraya](https://github.com/guevaraya) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
*Brief: This tutorial teaches you to install R on Ubuntu. You’ll also learn how to run your first R program in Ubuntu using various methods.* [R](https://www.r-project.org/), together with Python, is the most commonly used programming language for statistical computing and graphics, making it easy to work with data. With the growing interest in data analysis, data visualization, data science (the machine learning craze), it is now more popular than ever and is a great tool for anyone looking to dive into this fields. The good thing about R is that its syntax is pretty straight-forward and you can find many tutorials/guides on how R is used in the real world. In this article, I’ll cover how to install R on Ubuntu Linux. I’ll also show you how to run your first R program in Linux. ![Install R On Ubuntu](https://itsfoss.com/content/images/wordpress/2019/06/install-r-on-ubuntu-800x450.jpg) ## Installing R on Ubuntu **R** is included in the Ubuntu repositories. It can be easily installed using: `sudo apt install r-base` Do note that this may install a slightly older version. At the time of writing this article, Ubuntu offers version 3.4 whereas the latest is version 3.6. *I advise sticking with whichever version Ubuntu provides unless you must use the newer version.* In order to get the latest version (or any specific version for that matter), you must use ** CRAN** (The Comprehensive R Archive Network). This is a list of mirrors for downloading the latest version of R. Click on the next section to learn how to install the latest version of R on Ubuntu. **How to install latest R version 3.6 on Ubuntu (click to expand)** To get the R version 3.6, you need to add the mirror to your sources list. I have simplified it for you in this command: sudo add-apt-repository "deb https://cloud.r-project.org/bin/linux/ubuntu $(lsb_release -cs)-cran35/" Now you should add the key for the repository: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9 And then update the repository information and install R: sudo apt update sudo apt install r-base That’s it. ## Using R programming on Ubuntu R has more than one use. I’ll go over several methods you can use to run R programs. ### Interactive Mode in R After having installed **R**, you can run the console using: `R` This should open up the interactive mode: ![R Interactive Mode](https://itsfoss.com/content/images/wordpress/2019/06/r_interactive_mode.png?fit=800%2C516&ssl=1) This R console is very similar to the **Python** and **Haskell** interactive prompts. You can enter any **R** command and you can do basic mathematical computations. For example: ``` > 20+40 [1] 60 > print ("Hello World!") [1] "Hello World!" ``` You could test plotting too: ![R Plotting](https://itsfoss.com/content/images/wordpress/2019/06/r_plotting.jpg?fit=800%2C434&ssl=1) You can **quit** using **q()** or pressing **CTRL+c**. When doing so, you will be asked if you want to save a workspace** **image; a workspace** **is an environment for created variables. ### Running R program with Rscript The second way to run R programs is in directly on the Linux command line. You can do so using **RScript**, a utility included with **r-base**. First, you have to save your R program to a file using your [favorite code editor on Linux](https://itsfoss.com/best-modern-open-source-code-editors-for-linux/). The file extension should be .r. This is my sample R program printing “Hello World”. I have saved it in a file name hello.r. 
``` print("Hello World!") a <- rnorm(100) plot(a) ``` To run the R program, use the command like this: `Rscript hello.r` You should get back the output: `[1] "Hello World!"` The plot is going to be saved in the working directory, to a file named **Rplots.pdf**: ![Rplots.pdf](https://itsfoss.com/content/images/wordpress/2019/06/rplots_pdf.png?fit=800%2C539&ssl=1) **Note:** *Rscript **doesn’t load the **methods **package by default. Make sure to load it explicitly in your script*. ### Run R scripts with RStudio in Ubuntu The most common way to use **R** is using [RStudio](https://www.rstudio.com/), a great cross-platform open source IDE. You can [install it using deb file in Ubuntu](https://itsfoss.com/install-deb-files-ubuntu/). Download the deb file from the link below. You’ll have to scroll down a bit to locate the DEB files for Ubuntu. Once you download the DEB file, just double click on it to install it. Once installed, search for it in the menu and start it. The home window of the application should pop up: ![RStudio Home](https://itsfoss.com/content/images/wordpress/2019/06/rstudio_home.jpg?fit=800%2C603&ssl=1) Here you have a working console, just like the one you got in the terminal with the **R** command. To create a file, in the top bar click on **File** and select **New File > Rscript** (or **CTRL+Shift+n)**: ![RStudio New File](https://itsfoss.com/content/images/wordpress/2019/06/rstudio_new_file.png?fit=800%2C392&ssl=1) Press **CTRL+s** to save the file and choose a location and a name it: ![RStudio Save File](https://itsfoss.com/content/images/wordpress/2019/06/rstudio_save_file.png?fit=800%2C258&ssl=1) After doing so, click on **Session > Set Working Directory > To Source File Location** to change the working directory to the location of your script: ![RStudio Working Directory](https://itsfoss.com/content/images/wordpress/2019/06/rstudio_working_directory.png?fit=800%2C394&ssl=1) You are now ready to go! Write in your code and click run. You should be able to see output both in the console and in the plotting window: ![RStudio Run](https://itsfoss.com/content/images/wordpress/2019/06/rstudio_run.jpg?fit=800%2C626&ssl=1) **Wrapping Up** In this article, I showed you step by step how to get started using the **R** programming language on an Ubuntu system. I covered several ways you can go about this: **R console** – useful for testing, **Rscript** – for the terminal lover, **RStudio** – the IDE for your needs. Whether you are willing to get into data science or simply love statistics, **R** is a good addition to your programming arsenal, being the perfect tool for analyzing data. If you are absolutely new to R, let me recommend you this excellent book that will teach you fundamentals of R. It’s available on Amazon Kindle. [lasso box=”B00GC2LKOK” link_id=”14655″ ref=”learn-r-in-a-day” id=”101763″] Do you use **R**? Are you just getting into it? Let us know more about how and why you use or want to learn to use **R**!
11,333
使用 HTTPie 进行 API 测试
https://opensource.com/article/19/8/getting-started-httpie
2019-09-12T10:29:59
[ "HTTPie" ]
https://linux.cn/article-11333-1.html
> > 使用 HTTPie 调试 API,这是一个用 Python 写的易用的命令行工具。 > > > ![](/data/attachment/album/201909/12/102919ry1ute1y9h991ftz.jpg) [HTTPie](https://httpie.org/) 是一个非常易用、易于升级的 HTTP 客户端。它的发音为 “aitch-tee-tee-pie” 并以 `http` 命令运行,它是一个用 Python 编写的、用于访问 Web 的命令行工具。 由于这是一篇关于 HTTP 客户端的指导文章,因此你需要一个 HTTP 服务器来试用它。在这里,访问 [httpbin.org](https://github.com/postmanlabs/httpbin),它是一个简单的开源 HTTP 请求和响应服务。httpbin.org 网站是一种测试 Web API 的强大方式,并能仔细管理并显示请求和响应内容,不过现在让我们专注于 HTTPie 的强大功能。 ### Wget 和 cURL 的替代品 你可能听说过古老的 [Wget](https://en.wikipedia.org/wiki/Wget) 或稍微新一些的 [cURL](https://en.wikipedia.org/wiki/CURL) 工具,它们允许你从命令行访问 Web。它们是为访问网站而编写的,而 HTTPie 则用于访问 Web API。 网站请求的设计场景是计算机与最终用户之间的交互:用户阅读并响应自己看到的内容,因此它并不太依赖结构化的响应。但是,API 请求会在两台计算机之间进行*结构化*调用,人并不是该流程内的一部分,像 HTTPie 这样的命令行工具的参数可以有效地处理这个问题。 ### 安装 HTTPie 有几种方法可以安装 HTTPie。你可以通过包管理器安装,无论你使用的是 `brew`、`apt`、`yum` 还是 `dnf`。但是,如果你已配置 [virtualenvwrapper](https://opensource.com/article/19/6/virtual-environments-python-macos),那么你可以用自己的方式安装: ``` $ mkvirtualenv httpie ... (httpie) $ pip install httpie ... (httpie) $ deactivate $ alias http=~/.virtualenvs/httpie/bin/http $ http -b GET https://httpbin.org/get { "args": {}, "headers": { "Accept": "*/*", "Accept-Encoding": "gzip, deflate", "Host": "httpbin.org", "User-Agent": "HTTPie/1.0.2" }, "origin": "104.220.242.210, 104.220.242.210", "url": "https://httpbin.org/get" } ``` 通过将 `http` 别名指向为虚拟环境中的命令,即使虚拟环境在非活动状态,你也可以运行它。你可以将 `alias` 命令放在 `.bash_profile` 或 `.bashrc` 中,这样你就可以使用以下命令升级 HTTPie: ``` $ ~/.virtualenvs/httpie/bin/pip install -U httpie ``` ### 使用 HTTPie 查询网站 HTTPie 可以简化查询和测试 API。上面使用了一个选项,`-b`(即 `--body`)。没有它,HTTPie 将默认打印整个响应,包括响应头: ``` $ http GET https://httpbin.org/get HTTP/1.1 200 OK Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Connection: keep-alive Content-Encoding: gzip Content-Length: 177 Content-Type: application/json Date: Fri, 09 Aug 2019 20:19:47 GMT Referrer-Policy: no-referrer-when-downgrade Server: nginx X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block { "args": {}, "headers": { "Accept": "*/*", "Accept-Encoding": "gzip, deflate", "Host": "httpbin.org", "User-Agent": "HTTPie/1.0.2" }, "origin": "104.220.242.210, 104.220.242.210", "url": "https://httpbin.org/get" } ``` 这在调试 API 服务时非常重要,因为大量信息在响应头中发送。例如,查看发送的 cookie 通常很重要。httpbin.org 提供了通过 URL 路径设置 cookie(用于测试目的)的方式。以下设置一个名为 `opensource`、值为 `awesome` 的 cookie: ``` $ http GET https://httpbin.org/cookies/set/opensource/awesome HTTP/1.1 302 FOUND Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Connection: keep-alive Content-Length: 223 Content-Type: text/html; charset=utf-8 Date: Fri, 09 Aug 2019 20:22:39 GMT Location: /cookies Referrer-Policy: no-referrer-when-downgrade Server: nginx Set-Cookie: opensource=awesome; Path=/ X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <title>Redirecting...</title> <h1>Redirecting...</h1> <p>You should be redirected automatically to target URL: <a href="/cookies">/cookies</a>. If not click the link.
``` 注意 `Set-Cookie: opensource=awesome; Path=/` 的响应头。这表明你预期设置的 cookie 已正确设置,路径为 `/`。另请注意,即使你得到了 `302` 重定向,`http` 也不会遵循它。如果你想要遵循重定向,则需要明确使用 `--follow` 标志请求: ``` $ http --follow GET https://httpbin.org/cookies/set/opensource/awesome HTTP/1.1 200 OK Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Connection: keep-alive Content-Encoding: gzip Content-Length: 66 Content-Type: application/json Date: Sat, 10 Aug 2019 01:33:34 GMT Referrer-Policy: no-referrer-when-downgrade Server: nginx X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block { "cookies": { "opensource": "awesome" } } ``` 但此时你无法看到原来的 `Set-Cookie` 头。为了看到中间响应,你需要使用 `--all`: ``` $ http --headers --all --follow GET https://httpbin.org/cookies/set/opensource/awesome HTTP/1.1 302 FOUND Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Content-Type: text/html; charset=utf-8 Date: Sat, 10 Aug 2019 01:38:40 GMT Location: /cookies Referrer-Policy: no-referrer-when-downgrade Server: nginx Set-Cookie: opensource=awesome; Path=/ X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block Content-Length: 223 Connection: keep-alive HTTP/1.1 200 OK Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Content-Encoding: gzip Content-Type: application/json Date: Sat, 10 Aug 2019 01:38:41 GMT Referrer-Policy: no-referrer-when-downgrade Server: nginx X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block Content-Length: 66 Connection: keep-alive ``` 打印响应体并不有趣,因为你大多数时候只关心 cookie。如果你想看到中间请求的响应头,而不是最终请求中的响应体,你可以使用: ``` $ http --print hb --history-print h --all --follow GET https://httpbin.org/cookies/set/opensource/awesome HTTP/1.1 302 FOUND Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Content-Type: text/html; charset=utf-8 Date: Sat, 10 Aug 2019 01:40:56 GMT Location: /cookies Referrer-Policy: no-referrer-when-downgrade Server: nginx Set-Cookie: opensource=awesome; Path=/ X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block Content-Length: 223 Connection: keep-alive HTTP/1.1 200 OK Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Content-Encoding: gzip Content-Type: application/json Date: Sat, 10 Aug 2019 01:40:56 GMT Referrer-Policy: no-referrer-when-downgrade Server: nginx X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block Content-Length: 66 Connection: keep-alive { "cookies": { "opensource": "awesome" } } ``` 你可以使用 `--print` 精确控制打印的内容(`h`:响应头;`b`:响应体),并使用 `--history-print` 覆盖中间请求的打印内容设置。 ### 使用 HTTPie 下载二进制文件 有时响应体并不是文本形式,它需要发送到可被不同应用打开的文件: ``` $ http GET https://httpbin.org/image/jpeg HTTP/1.1 200 OK Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Connection: keep-alive Content-Length: 35588 Content-Type: image/jpeg Date: Fri, 09 Aug 2019 20:25:49 GMT Referrer-Policy: no-referrer-when-downgrade Server: nginx X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block +-----------------------------------------+ | NOTE: binary data not shown in terminal | +-----------------------------------------+ ``` 要得到正确的图片,你需要保存到文件: ``` $ http --download GET https://httpbin.org/image/jpeg HTTP/1.1 200 OK Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Connection: keep-alive Content-Length: 35588 Content-Type: image/jpeg Date: Fri, 09 Aug 2019 20:28:13 GMT Referrer-Policy: no-referrer-when-downgrade Server: nginx X-Content-Type-Options: 
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block

Downloading 34.75 kB to "jpeg.jpe"
Done. 34.75 kB in 0.00068s (50.05 MB/s)
```

试一下!图片很可爱。

### 使用 HTTPie 发送自定义请求

你可以发送指定的请求头。这对于需要非标准头的自定义 Web API 很有用:

```
$ http GET https://httpbin.org/headers X-Open-Source-Com:Awesome
{
    "headers": {
        "Accept": "*/*",
        "Accept-Encoding": "gzip, deflate",
        "Host": "httpbin.org",
        "User-Agent": "HTTPie/1.0.2",
        "X-Open-Source-Com": "Awesome"
    }
}
```

最后,如果要发送 JSON 字段(虽然也可以手动指定精确的请求内容),对于嵌套不深的输入,你可以使用一种快捷方式:

```
$ http --body PUT https://httpbin.org/anything open-source=awesome author=moshez
{
    "args": {},
    "data": "{\"open-source\": \"awesome\", \"author\": \"moshez\"}",
    "files": {},
    "form": {},
    "headers": {
        "Accept": "application/json, */*",
        "Accept-Encoding": "gzip, deflate",
        "Content-Length": "46",
        "Content-Type": "application/json",
        "Host": "httpbin.org",
        "User-Agent": "HTTPie/1.0.2"
    },
    "json": {
        "author": "moshez",
        "open-source": "awesome"
    },
    "method": "PUT",
    "origin": "73.162.254.113, 73.162.254.113",
    "url": "https://httpbin.org/anything"
}
```

下次在调试 Web API 时,无论是你自己的还是别人的,记得放下 cURL,试试 HTTPie 这个命令行工具。

---

via: <https://opensource.com/article/19/8/getting-started-httpie>

作者:[Moshe Zadka](https://opensource.com/users/moshez) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
[HTTPie](https://httpie.org/) is a delightfully easy to use and easy to upgrade HTTP client. Pronounced "aitch-tee-tee-pie" and run as **http**, it is a command-line tool written in Python to access the web.

Since this how-to is about an HTTP client, you need an HTTP server to try it out; in this case, [httpbin.org](https://github.com/postmanlabs/httpbin), a simple, open source HTTP request-and-response service. The httpbin.org site is a powerful way to test web API clients and carefully manage and show details in requests and responses, but for now we will focus on the power of HTTPie.

## An alternative to Wget and cURL

You might have heard of the venerable [Wget](https://en.wikipedia.org/wiki/Wget) or the slightly newer [cURL](https://en.wikipedia.org/wiki/CURL) tools that allow you to access the web from the command line. They were written to access websites, whereas HTTPie is for accessing *web APIs*.

Website requests are designed to be between a computer and an end user who is reading and responding to what they see. This doesn't depend much on structured responses. However, API requests make *structured* calls between two computers. The human is not part of the picture, and the parameters of a command-line tool like HTTPie handle this effectively.

## Install HTTPie

There are several ways to install HTTPie. You can probably get it as a package for your package manager, whether you use **brew**, **apt**, **yum**, or **dnf**. However, if you have configured [virtualenvwrapper](https://opensource.com/article/19/6/virtual-environments-python-macos), you can own your own installation:

```
$ mkvirtualenv httpie
...
(httpie) $ pip install httpie
...
(httpie) $ deactivate
$ alias http=~/.virtualenvs/httpie/bin/http
$ http -b GET https://httpbin.org/get
{
    "args": {},
    "headers": {
        "Accept": "*/*",
        "Accept-Encoding": "gzip, deflate",
        "Host": "httpbin.org",
        "User-Agent": "HTTPie/1.0.2"
    },
    "origin": "104.220.242.210, 104.220.242.210",
    "url": "https://httpbin.org/get"
}
```

By aliasing **http** directly to the command inside the virtual environment, you can run it even when the virtual environment is not active. You can put the **alias** command in **.bash_profile** or **.bashrc** so you can upgrade HTTPie with the command:

`$ ~/.virtualenvs/httpie/bin/pip install -U httpie`

## Query a website with HTTPie

HTTPie can simplify querying and testing an API. One option for running it, **-b** (also known as **--body**), was used above. Without it, HTTPie will print the entire response, including the headers, by default:

```
$ http GET https://httpbin.org/get
HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 177
Content-Type: application/json
Date: Fri, 09 Aug 2019 20:19:47 GMT
Referrer-Policy: no-referrer-when-downgrade
Server: nginx
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block

{
    "args": {},
    "headers": {
        "Accept": "*/*",
        "Accept-Encoding": "gzip, deflate",
        "Host": "httpbin.org",
        "User-Agent": "HTTPie/1.0.2"
    },
    "origin": "104.220.242.210, 104.220.242.210",
    "url": "https://httpbin.org/get"
}
```

This is crucial when debugging an API service because a lot of information is sent in the headers. For example, it is often important to see which cookies are being sent. Httpbin.org provides options to set cookies (for testing purposes) through the URL path.
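As a quick sanity check (a sketch of mine, relying only on httpbin's **/cookies** endpoint, which echoes back whatever cookies the client sends), you can first confirm that no cookies are in play yet:

```
$ http -b GET https://httpbin.org/cookies
{
    "cookies": {}
}
```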
The following sets a cookie titled **opensource** to the value **awesome**:

```
$ http GET https://httpbin.org/cookies/set/opensource/awesome
HTTP/1.1 302 FOUND
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 223
Content-Type: text/html; charset=utf-8
Date: Fri, 09 Aug 2019 20:22:39 GMT
Location: /cookies
Referrer-Policy: no-referrer-when-downgrade
Server: nginx
Set-Cookie: opensource=awesome; Path=/
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to target URL: <a href="/cookies">/cookies</a>. If not click the link.
```

Notice the **Set-Cookie: opensource=awesome; Path=/** header. This shows the cookie you expected to be set is set correctly and with a **/** path. Also notice that, even though you got a **302** redirect, **http** did not follow it. If you want to follow redirects, you need to ask for it explicitly with the **--follow** flag:

```
$ http --follow GET https://httpbin.org/cookies/set/opensource/awesome
HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 66
Content-Type: application/json
Date: Sat, 10 Aug 2019 01:33:34 GMT
Referrer-Policy: no-referrer-when-downgrade
Server: nginx
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block

{
    "cookies": {
        "opensource": "awesome"
    }
}
```

But now you cannot see the original **Set-Cookie** header. In order to see intermediate replies, you need to use **--all**:

```
$ http --headers --all --follow \
GET https://httpbin.org/cookies/set/opensource/awesome

HTTP/1.1 302 FOUND
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Type: text/html; charset=utf-8
Date: Sat, 10 Aug 2019 01:38:40 GMT
Location: /cookies
Referrer-Policy: no-referrer-when-downgrade
Server: nginx
Set-Cookie: opensource=awesome; Path=/
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Content-Length: 223
Connection: keep-alive

HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Encoding: gzip
Content-Type: application/json
Date: Sat, 10 Aug 2019 01:38:41 GMT
Referrer-Policy: no-referrer-when-downgrade
Server: nginx
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Content-Length: 66
Connection: keep-alive
```

Printing the body is uninteresting because you are mostly interested in the cookies.
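As an aside, here is a sketch of mine rather than part of the original walkthrough (the **session.json** file name is simply one I made up): HTTPie's **--session** flag will store cookies like this one and resend them on later invocations:

```
$ http --session=./session.json GET https://httpbin.org/cookies/set/opensource/awesome
$ http --session=./session.json -b GET https://httpbin.org/cookies
{
    "cookies": {
        "opensource": "awesome"
    }
}
```

That is handy when you are testing an API that expects a cookie-carrying client.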
If you want to see the headers from the intermediate request but the body from the final request, you can do that with: ``` $ http --print hb --history-print h --all --follow \ GET https://httpbin.org/cookies/set/opensource/awesome HTTP/1.1 302 FOUND Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Content-Type: text/html; charset=utf-8 Date: Sat, 10 Aug 2019 01:40:56 GMT Location: /cookies Referrer-Policy: no-referrer-when-downgrade Server: nginx Set-Cookie: opensource=awesome; Path=/ X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block Content-Length: 223 Connection: keep-alive HTTP/1.1 200 OK Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Content-Encoding: gzip Content-Type: application/json Date: Sat, 10 Aug 2019 01:40:56 GMT Referrer-Policy: no-referrer-when-downgrade Server: nginx X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block Content-Length: 66 Connection: keep-alive { "cookies": { "opensource": "awesome" } } ``` You can control exactly what is being printed with **--print** and override what is printed for intermediate requests with **--history-print**. ## Download binary files with HTTPie Sometimes the body is non-textual and needs to be sent to a file that can be opened by a different application: ``` $ http GET https://httpbin.org/image/jpeg HTTP/1.1 200 OK Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Connection: keep-alive Content-Length: 35588 Content-Type: image/jpeg Date: Fri, 09 Aug 2019 20:25:49 GMT Referrer-Policy: no-referrer-when-downgrade Server: nginx X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block +-----------------------------------------+ | NOTE: binary data not shown in terminal | +-----------------------------------------+ ``` To get the right image, you need to save it to a file: ``` $ http --download GET https://httpbin.org/image/jpeg HTTP/1.1 200 OK Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Connection: keep-alive Content-Length: 35588 Content-Type: image/jpeg Date: Fri, 09 Aug 2019 20:28:13 GMT Referrer-Policy: no-referrer-when-downgrade Server: nginx X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block Downloading 34.75 kB to "jpeg.jpe" Done. 34.75 kB in 0.00068s (50.05 MB/s) ``` Try it! The picture is adorable. ## Sending custom requests with HTTPie You can also send specific headers. 
This is useful for custom web APIs that require a non-standard header:

```
$ http GET https://httpbin.org/headers X-Open-Source-Com:Awesome
{
    "headers": {
        "Accept": "*/*",
        "Accept-Encoding": "gzip, deflate",
        "Host": "httpbin.org",
        "User-Agent": "HTTPie/1.0.2",
        "X-Open-Source-Com": "Awesome"
    }
}
```

Finally, if you want to send JSON fields (although it is possible to specify exact content), for many less-nested inputs, you can use a shortcut:

```
$ http --body PUT https://httpbin.org/anything open-source=awesome author=moshez
{
    "args": {},
    "data": "{\"open-source\": \"awesome\", \"author\": \"moshez\"}",
    "files": {},
    "form": {},
    "headers": {
        "Accept": "application/json, */*",
        "Accept-Encoding": "gzip, deflate",
        "Content-Length": "46",
        "Content-Type": "application/json",
        "Host": "httpbin.org",
        "User-Agent": "HTTPie/1.0.2"
    },
    "json": {
        "author": "moshez",
        "open-source": "awesome"
    },
    "method": "PUT",
    "origin": "73.162.254.113, 73.162.254.113",
    "url": "https://httpbin.org/anything"
}
```

The next time you are debugging a web API, whether your own or someone else's, put down your cURL and reach for HTTPie, the command-line client for web APIs.
11,335
如何在 Ubuntu 19.04 中安装 Shutter 截图工具
https://itsfoss.com/install-shutter-ubuntu/
2019-09-12T11:31:33
[ "Shutter", "截屏" ]
https://linux.cn/article-11335-1.html
Shutter 是我在 [Linux 中最喜欢的截图工具](https://itsfoss.com/take-screenshot-linux/)。你可以使用它截图,还可以用它编辑截图或其他图像。它是一个在图像上添加箭头和文本的不错的工具。你也可以使用它在 Ubuntu 或其它你使用的发行版中[调整图像大小](https://itsfoss.com/resize-images-with-right-click/)。It's FOSS 网站上的大多数截图教程都是用 Shutter 编辑的。

![Install Shutter Ubuntu](/data/attachment/album/201909/12/113136ccx3h3xu8uimnme0.jpg)

虽然 [Shutter](http://shutter-project.org/) 一直是一款很棒的工具,但它的开发却停滞了,这几年来一直没有新版本发布。甚至像 [Shutter 中编辑模式被禁用](https://itsfoss.com/shutter-edit-button-disabled/)这样的简单 bug 也没有修复,也完全没有来自开发者的消息。

也许这就是新版本的 Ubuntu 放弃它的原因。直到 Ubuntu 18.04 LTS,你都可以在软件中心,或者[启用 universe 仓库](https://itsfoss.com/ubuntu-repositories/)后[使用 apt-get 命令](https://itsfoss.com/apt-get-linux-guide/)安装它。但是从 Ubuntu 18.10 及更高版本开始,你就不能再这样做了。

抛开这些缺点,Shutter 仍是一个很好的工具,我想继续使用它。也许你也是像我这样的 Shutter 粉丝,并且想要使用它。好消息是,归功于一个非官方 PPA,你仍然可以在 Ubuntu 19.04 中安装 Shutter。

### 在 Ubuntu 19.04 上安装 Shutter

![](/data/attachment/album/201909/12/113136v2jrsggdwjgwh3ra.jpg)

我希望你了解 PPA 的概念。如果不了解,我强烈建议阅读我的指南,以了解更多关于[什么是 PPA 以及如何使用它](https://itsfoss.com/ppa-guide/)。

现在,打开终端并使用以下命令添加新仓库:

```
sudo add-apt-repository -y ppa:linuxuprising/shutter
```

不需要再使用 `apt update`,因为从 Ubuntu 18.04 开始,仓库会在添加新条目后自动更新。

现在使用 `apt` 命令安装 Shutter:

```
sudo apt install shutter
```

完成。现在你应该已经安装好了 Shutter 截图工具,可以从菜单中搜索并启动它。

### 删除通过非官方 PPA 安装的 Shutter

最后,我将以卸载 Shutter 并删除所添加的仓库来结束本教程。

首先,从系统中删除 Shutter:

```
sudo apt remove shutter
```

接下来,从你的仓库列表中删除 PPA:

```
sudo add-apt-repository --remove ppa:linuxuprising/shutter
```

你或许还想了解 [Y PPA Manager](https://itsfoss.com/y-ppa-manager/),这是一款 PPA 图形管理工具。

Shutter 是一个很好的工具,我希望它能被积极开发。我希望它的开发者一切安好,并能抽出时间来维护它;或者,也是时候由其他人把它分叉出来,继续让它变得更棒了。

---

via: <https://itsfoss.com/install-shutter-ubuntu/>

作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Shutter is my favorite tool for [taking screenshots in Linux](https://itsfoss.com/take-screenshot-linux/). You can take screenshots with it, and you can also edit screenshots or other images with it. It's a nifty tool for adding arrows and text to the images. You can also use it to [resize images in Ubuntu](https://itsfoss.com/resize-images-with-right-click/) or whichever Linux distribution you are using. Most of the screenshot tutorials (in the past) at It's FOSS have been edited on Shutter.

![shutter home screen](https://itsfoss.com/content/images/wordpress/2022/02/shutter_home_screen.png)

[Shutter](http://shutter-project.org/) has been a great tool so far, but its development was paused for several years. Even simple bugs like [editing mode being disabled in Shutter](https://itsfoss.com/shutter-edit-button-disabled/) were starting to be an annoyance. Perhaps this was the reason some Ubuntu releases dropped their packages. But, now it's back! Fortunately, it [received an overhaul in 2021](https://news.itsfoss.com/shutter-0-95-release/), and now you can install it without any hassle.

Thanks to [Linux Uprising](https://www.linuxuprising.com/), the official PPA for Ubuntu is available in other versions as well. Since there are different steps for different releases, please [check your Ubuntu version](https://itsfoss.com/how-to-know-ubuntu-unity-version/) first.

## Installing Shutter on Ubuntu 22.04 and higher

Make sure that you have the [universe repository enabled](https://itsfoss.com/ubuntu-repositories/):

`sudo add-apt-repository universe`

Update the [apt cache](https://itsfoss.com/apt-cache-command/) so that your system knows about the availability of software from the Universe repository.

`sudo apt update`

And now you can install it in Ubuntu 22.04 and higher versions:

`sudo apt install shutter`

## Installing Shutter on Ubuntu 20.04 & 18.04

Shutter is not available in Ubuntu 20.04. It is there in 18.04 but has some bugs. This is why installing it from a PPA is the better way here.

I hope you are familiar with the concept of PPA. If not, I highly recommend reading my detailed guide to know more about [what is PPA and how to use it](https://itsfoss.com/ppa-guide/).

Now, open a terminal and use the following commands to add the repo and refresh the repository list:

```
sudo add-apt-repository ppa:linuxuprising/shutter
sudo apt update
```

Once done, you can [use the apt command](https://itsfoss.com/apt-command-guide/) to install Shutter:

`sudo apt install shutter`

That's it. You should have Shutter screenshot tool installed. You can search for it in the menu and start from there.

## Removing Shutter installed via the Official PPA

I'll complete this tutorial by adding the steps to uninstall Shutter and remove the repository you added.

First, remove Shutter from your system:

`sudo apt remove shutter`

Next, remove the PPA from your list of repositories:

`sudo add-apt-repository --remove ppa:linuxuprising/shutter`

You may also want to take a look at [Y PPA Manager](https://itsfoss.com/y-ppa-manager/), a tool for managing PPAs graphically, if the need arises.
11,336
如何在 Linux 中管理日志
https://www.networkworld.com/article/3428361/how-to-manage-logs-in-linux.html
2019-09-13T16:17:57
[ "日志" ]
https://linux.cn/article-11336-1.html
> > Linux 系统上的日志文件包含了**很多**信息——比你有时间查看的还要多。以下是一些建议,告诉你如何正确地使用它们……而不是淹没在其中。
> >
> >

![Greg Lobinski (CC BY 2.0)](/data/attachment/album/201909/13/161842c83egb236wwfe6g4.jpg)

在 Linux 系统上管理日志文件可能非常容易,也可能非常痛苦。这完全取决于你所认为的日志管理是什么。

如果你认为是如何确保日志文件不会耗尽你的 Linux 服务器上的所有磁盘空间,那么这个问题通常很简单。Linux 系统上的日志文件会自动轮换,系统将只维护固定数量的轮换日志。即便如此,一眼看去一组上百个文件可能会让人不知所措。在这篇文章中,我们将看看日志轮换是如何工作的,以及一些最相关的日志文件。

### 自动日志轮换

日志文件是经常轮换的:当前的日志文件会被改成一个稍有不同的文件名,然后建立一个新的日志文件。以 syslog 文件为例,对许多常规的系统消息来说,这个文件是一个包罗万象的去处。如果你 `cd` 转到 `/var/log` 并查看一下,你可能会看到一系列系统日志文件,如下所示:

```
$ ls -l syslog*
-rw-r----- 1 syslog adm   28996 Jul 30 07:40 syslog
-rw-r----- 1 syslog adm   71212 Jul 30 00:00 syslog.1
-rw-r----- 1 syslog adm    5449 Jul 29 00:00 syslog.2.gz
-rw-r----- 1 syslog adm    6152 Jul 28 00:00 syslog.3.gz
-rw-r----- 1 syslog adm    7031 Jul 27 00:00 syslog.4.gz
-rw-r----- 1 syslog adm    5602 Jul 26 00:00 syslog.5.gz
-rw-r----- 1 syslog adm    5995 Jul 25 00:00 syslog.6.gz
-rw-r----- 1 syslog adm   32924 Jul 24 00:00 syslog.7.gz
```

轮换发生在每天午夜,旧的日志文件会保留一周,然后删除最早的系统日志文件。`syslog.7.gz` 文件将被从系统中删除,`syslog.6.gz` 将被重命名为 `syslog.7.gz`。日志文件的其余部分将依次改名,直到 `syslog` 变成 `syslog.1` 并创建一个新的 `syslog` 文件。有些系统日志文件会比其他文件大,但是一般来说,没有一个文件可能会变得非常大,并且你永远不会看到超过八个的文件。这给了你一个多星期的时间来回顾它们收集的任何数据。

某种日志文件所保留的轮换文件数量取决于该日志文件自身的配置,有些文件可能会保留 13 个。请注意 `syslog` 和 `dpkg` 的旧文件是如何压缩以节省空间的。这里的考虑是你对最近的日志最感兴趣,而更旧的日志可以根据需要用 `gunzip` 解压。

```
# ls -t dpkg*
dpkg.log       dpkg.log.3.gz  dpkg.log.6.gz  dpkg.log.9.gz   dpkg.log.12.gz
dpkg.log.1     dpkg.log.4.gz  dpkg.log.7.gz  dpkg.log.10.gz
dpkg.log.2.gz  dpkg.log.5.gz  dpkg.log.8.gz  dpkg.log.11.gz
```

日志文件可以根据时间和大小进行轮换。检查日志文件时请记住这一点。

尽管默认值适用于大多数 Linux 系统管理员,但如果你愿意,可以对日志文件轮换进行不同的配置。查看这些文件,如 `/etc/rsyslog.conf` 和 `/etc/logrotate.conf`。

### 使用日志文件

对日志文件的管理也包括时不时地使用它们。使用日志文件的第一步,也许是让自己熟悉每个日志文件能告诉你哪些关于系统运行状况和潜在问题的信息。从头到尾通读日志文件几乎从来不是一个好的选择,但当你想了解系统的运行情况或需要追查一个问题时,知道如何从日志文件中提取信息会有很大的好处。这也意味着你需要对每个文件中存储的信息有个大致的了解。例如:

```
$ who wtmp | tail -10        显示最近的登录信息
$ who wtmp | grep shark      显示特定用户的最近登录信息
$ grep "sudo:" auth.log      查看谁在使用 sudo
$ tail dmesg                 查看(最近的)内核日志
$ tail dpkg.log              查看最近安装和更新的软件包
$ more ufw.log               查看防火墙活动(假如你使用 ufw)
```

你运行的一些命令也会从日志文件中提取信息。例如,如果你想查看系统重新启动的列表,可以使用如下命令:

```
$ last reboot
reboot   system boot  5.0.0-20-generic Tue Jul 16 13:19   still running
reboot   system boot  5.0.0-15-generic Sat May 18 17:26 - 15:19 (21+21:52)
reboot   system boot  5.0.0-13-generic Mon Apr 29 10:55 - 15:34 (18+04:39)
```

### 使用更高级的日志管理器

虽然你可以编写脚本来更容易地在日志文件中找到感兴趣的信息,但是你也应该知道有一些非常复杂的工具可用于日志文件分析。有些工具可以把来自多个来源的信息联系起来,以便更全面地了解你的网络上发生了什么,它们还可以提供实时监控。[Solarwinds Log & Event Manager](https://www.esecurityplanet.com/products/solarwinds-log-event-manager-siem.html) 和 [PRTG 网络监视器](https://www.paessler.com/prtg)(包括日志监视)就是这类工具的代表。

还有一些免费工具可以帮助分析日志文件。其中包括:

* Logwatch — 用于扫描系统日志中感兴趣的日志行的程序
* Logcheck — 系统日志分析器和报告器

在接下来的文章中,我将提供一些关于这些工具的见解和帮助。

---

via: <https://www.networkworld.com/article/3428361/how-to-manage-logs-in-linux.html>

作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[heguangzhi](https://github.com/heguangzhi) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
11,337
我为什么使用 Java
https://opensource.com/article/19/9/why-i-use-java
2019-09-13T17:12:37
[ "Java" ]
https://linux.cn/article-11337-1.html
> > 根据你的工作需要,可能有比 Java 更好的语言,但是我还没有看到任何能把我拉走的语言。
> >
> >

![](/data/attachment/album/201909/13/171223bf7noo4bbnkxbkdk.jpg)

我记得我是从 1997 年开始使用 Java 的,就在 [Java 1.1 刚刚发布](https://en.wikipedia.org/wiki/Java_version_history)不久之后。从那时起,总的来说,我非常喜欢用 Java 编程;虽然我得承认,这些日子我写 [Groovy](https://en.wikipedia.org/wiki/Apache_Groovy) 脚本的机会和用 Java 写“严肃的代码”的机会一样多。

来自 [FORTRAN](https://en.wikipedia.org/wiki/Fortran)、[PL/1](https://en.wikipedia.org/wiki/PL/I)、[Pascal](https://en.wikipedia.org/wiki/Pascal_(programming_language)) 以及最后的 [C 语言](https://en.wikipedia.org/wiki/C_(programming_language)) 背景,我发现了许多让我喜欢 Java 的东西。Java 是我[面向对象编程](https://en.wikipedia.org/wiki/Object-oriented_programming)的第一次重要实践经验。到那时,我已经编程了大约 20 年,而且可以说我对什么重要、什么不重要有了一些看法。

### 调试是一个关键的语言特性

我真的很讨厌浪费时间追踪由我的代码不小心越过数组末尾而导致的隐晦错误,特别是在 IBM 大型机上的 FORTRAN 编程时代。另一个不时出现的隐晦问题是,用一个四字节的整数实参去调用一个期望两字节参数的子程序;在小端架构上,这通常是一个无害的错误,但在大端机器上,高位两个字节的值通常为零,却并不总是如此。

在那种批处理环境中进行调试也非常不便,只能通过核心转储或插入打印语句来调试,而这些打印语句本身就可能让错误挪动位置,甚至使它们消失。

所以我使用 Pascal 的早期体验,先是在 [MTS](https://en.wikipedia.org/wiki/Michigan_Terminal_System) 上,然后是在 [IBM OS/VS1](https://en.wikipedia.org/wiki/OS/VS1) 上使用相同的 MTS 编译器,让我的生活变得更加轻松。Pascal 的[强类型和静态类型](https://stackoverflow.com/questions/11889602/difference-between-strong-vs-static-typing-and-weak-vs-dynamic-typing)是取得这种胜利的重要组成部分,我使用的每个 Pascal 编译器都会在数组的边界和范围上插入运行时检查,因此错误可以在发生的当下被检测到。当我们在 20 世纪 80 年代早期将大部分工作转移到 Unix 系统时,移植 Pascal 代码是一项简单的任务。

### 适量的语法

但是,尽管我很喜欢 Pascal,我的代码还是很冗长,而且它的语法似乎反而让代码变得有些晦涩;例如,使用:

```
if ... then begin ... end else ... end
```

而不是 C 或类似语言中的:

```
if (...) { ... } else { ... }
```

另外,有些事情在 Pascal 中很难完成,在 C 中更容易。但是,当我开始越来越多地使用 C 时,我发现自己遇到了我曾经在 FORTRAN 中遇到的同样类型的错误,例如越过数组边界。这类错误不会在原始的出错点被检测到,只有在程序执行的后期,它们的不利影响才会显现。幸运的是,我不再生活在那种批处理环境中,并且手头有很好的调试工具。不过,C 对于我来说有点太灵活了。

当我遇到 [awk](https://en.wikipedia.org/wiki/AWK) 时,我发现自己有了一个与 C 形成良好互补的工具。那时,我的很多工作都涉及转换字段数据并创建报告。我发现用 `awk` 加上其他 Unix 命令行工具,如 `sort`、`sed`、`cut`、`join`、`paste`、`comm` 等等,可以做到的事情多得令人吃惊。从本质上讲,这些工具给了我一个像是基于文本文件的关系数据库管理器,这种文本文件具有列式结构,是我们很多字段数据的保存方式。或者,即便不是这种格式,大部分时候也可以从关系数据库或某种二进制格式导出到列式结构中。

`awk` 支持的字符串处理、[正则表达式](https://en.wikipedia.org/wiki/Regular_expression)和[关联数组](https://en.wikipedia.org/wiki/Associative_array),以及 `awk` 的基本特性(它实际上是一个数据转换管道),非常符合我的需求。当面对二进制数据文件、复杂的数据结构和关键性能需求时,我仍然会转回到 C;但随着我越来越多地使用 `awk`,我发现 C 的非常基础的字符串支持越来越令人沮丧。随着时间的推移,我越来越只在必须时才使用 C,而在其余的时间里大概是在过度使用 `awk`。

### Java 的抽象层级合适

然后是 Java。它看起来相当不错 —— 相对简洁的语法,让人联想到 C,或者说这种相似性至少要比 Pascal 或其他任何早期的语言更为明显。它是强类型的,因此很多编程错误会在编译时被捕获。它似乎并不需要过多的面向对象的知识就能起步,这是一件好事,因为我当时对 [OOP 设计模式](https://opensource.com/article/19/7/understanding-software-design-patterns)毫不熟悉。但即使在刚刚开始,我也喜欢它的简化[继承模型](https://www.w3schools.com/java/java_inheritance.asp)背后的思想。(Java 只允许单继承,但提供了接口,以在一定程度上丰富这一范式。)

它似乎带有丰富的功能库(即“自备电池”的概念),在适当的层级上直接满足了我的需求。最后,我发现自己很快就喜欢上了将数据和行为在对象中组合在一起的想法。这似乎是明确控制数据之间交互的好方法 —— 比大量的参数列表或对全局变量的不受控制的访问要好得多。

从那以后,Java 就成了我的编程工具箱中的瑞士军刀。我仍然偶尔会用 `awk` 编写程序,或者使用 Linux 命令行实用程序(如 `cut`、`sort` 或 `sed`),因为它们显然是解决手头问题的直接方法。我怀疑过去 20 年我可能没写过 50 行的 C 语言代码;Java 完全满足了我的需求。

此外,Java 一直在不断改进。首先,它的性能变得好多了。并且它添加了一些非常有用的功能,例如 [try-with-resources(带资源的 try 语句)](https://www.baeldung.com/java-try-with-resources),它能很好地收拾掉文件 I/O 期间冗长而有点混乱的错误处理代码;或 [lambda](https://www.baeldung.com/java-8-lambda-expressions-tips),它提供了声明函数并将其作为参数传递的能力,而旧方法需要创建类或接口来“托管”这些函数;或[流](https://www.tutorialspoint.com/java8/java8_streams),它在函数中封装了迭代行为,可以创建以链式函数调用形式实现的高效数据转换管道。

### Java 越来越好

许多语言设计者研究了从根本上改善 Java 体验的方法。对我来说,其中大部分没有引起我的太多兴趣;同样,这更多是因为我的典型工作流程,而(远)不是因为那些语言所带来的特性。但其中一个演化步骤已经成为我的编程工具中不可或缺的一部分:[Groovy](https://groovy-lang.org/)。当我遇到一个小问题,需要一个简单的解决方案时,Groovy 已经成为了我的首选。而且,它与 Java 高度兼容。
对我来说,Groovy 填补了 Python 为许多其他人所填补的位置 —— 它紧凑、DRY(不要重复自己)和具有表达性(列表和词典有完整的语言支持)。我还使用了 [Grails](https://grails.org/),它基于 Groovy 提供了一个精简的 Web 框架,用于构建高性能、实用的 Java Web 应用程序。

### Java 仍然开源吗?

最近,对 [OpenJDK](https://openjdk.java.net/) 越来越多的支持进一步提高了我对 Java 的舒适度。许多公司以各种方式支持 OpenJDK,包括 [AdoptOpenJDK、Amazon 和 Red Hat](https://en.wikipedia.org/wiki/OpenJDK)。在我的一个更大、更长期的项目中,我们使用 AdoptOpenJDK [来在几个桌面平台上生成自定义的运行时环境](https://opensource.com/article/19/4/java-se-11-removing-jnlp)。

有没有比 Java 更好的语言?我确信有,这取决于你的工作需要。但我一直是一个非常满意的 Java 用户,还没有遇到任何有可能把我从它身边拉走的东西。

---

via: <https://opensource.com/article/19/9/why-i-use-java>

作者:[Chris Hermansen](https://opensource.com/users/clhermansen) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
I believe I started using Java in 1997, not long after [Java 1.1 saw the light of day](https://en.wikipedia.org/wiki/Java_version_history). Since that time, by and large, I've really enjoyed programming in Java; although I confess these days, I'm as likely to be found writing [Groovy](https://en.wikipedia.org/wiki/Apache_Groovy) scripts as "serious code" in Java. Coming from a background in [FORTRAN](https://en.wikipedia.org/wiki/Fortran), [PL/1](https://en.wikipedia.org/wiki/PL/I), [Pascal](https://en.wikipedia.org/wiki/Pascal_(programming_language)), and finally [C](https://en.wikipedia.org/wiki/C_(programming_language)), I found a lot of things to like about Java. Java was my first significant hands-on experience with [object-oriented programming](https://en.wikipedia.org/wiki/Object-oriented_programming). By then, I had been programming for about 20 years, and it's probably safe to say I had some ideas about what mattered and what didn't. ## Debugging as a key language feature I really hated wasting time tracking down obscure bugs caused by my code carelessly iterating off the end of an array, especially back in the days of programming in FORTRAN on IBM mainframes. Another subtle problem that cropped up from time to time was calling a subroutine with a four-byte integer argument that was expecting two bytes; on small-endian architecture, this was often a benign bug, but on big-endian machines, the value of the top two bytes was usually, but not always, zero. Debugging in that batch environment was pretty awkward, too—poring through core dumps or inserting print statements, which themselves could move bugs around or even make them disappear. So my early experiences with Pascal, first on [MTS](https://en.wikipedia.org/wiki/Michigan_Terminal_System), then using the same MTS compiler on [IBM OS/VS1](https://en.wikipedia.org/wiki/OS/VS1), made my life a lot easier. Pascal's [strong and static typing](https://stackoverflow.com/questions/11889602/difference-between-strong-vs-static-typing-and-weak-vs-dynamic-typing) were a big part of the win here, and every Pascal compiler I have used inserts run-time checks on array bounds and ranges, so bugs are detected at the point of occurrence. When we moved most of our work to a Unix system in the early 1980s, porting the Pascal code was a straightforward task. ## Finding the right amount of syntax But for all the things I liked about Pascal, my code was wordy, and the syntax seemed to have a tendency to slightly obscure the code; for example, using: `if … then begin … end else … end` instead of: `if (…) { … } else { … }` in C and similar languages. Also, some things were quite hard to do in Pascal and much easier to do in C. But, as I began to use C more and more, I found myself running into the same kind of errors I used to commit in FORTRAN—running off the end of arrays, for example—that were not detected at the point of the original error, but only through their adverse effects later in the program's execution. Fortunately, I was no longer living in the batch environment and had great debugging tools at hand. Still, C gave me a little too much flexibility for my own good. When I discovered [awk](https://en.wikipedia.org/wiki/AWK), I found I had a nice counterpoint to C. At that time, a lot of my work involved transforming field data and creating reports. I found I could do a surprising amount of that with awk, coupled with other Unix command-line tools like sort, sed, cut, join, paste, comm, and so on. 
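To give a flavor of that, here is a contrived sketch with two hypothetical whitespace-delimited files, **sites.txt** (site, region) and **readings.txt** (site, date, value), both pre-sorted on the first column as join requires:

```
# total the value column of readings.txt for each site
$ awk '{ sum[$1] += $3 } END { for (s in sum) print s, sum[s] }' readings.txt

# attach each site's region, then keep just site, region, and value
$ join sites.txt readings.txt | cut -d' ' -f1,2,4
```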
Essentially, these tools gave me something a lot like a relational database manager for text files that had a column-oriented structure, which was the way a lot of our field data came in. Or, if not exactly in that format, most of the time the data could be unloaded from a relational database or from some kind of binary format into that column-oriented structure. String handling, [regular expressions](https://en.wikipedia.org/wiki/Regular_expression), and [associative arrays](https://en.wikipedia.org/wiki/Associative_array) supported by awk, as well as the basic nature of awk (it's really a data-transformation pipeline), fit my needs very well. When confronted with binary data files, complicated data structuring, and absolute performance needs, I would still revert to C; but as I used awk more and more, I found C's very basic string support more and more frustrating. As time went on, more and more often I would end up using C only when I had to—and probably overusing awk the rest of the time. ## Java is the right level of abstraction And then along came Java. It looked pretty good right out of the gate—a relatively terse syntax reminiscent of C, or at least, more so than Pascal or any of those other earlier experiences. It was strongly typed, so a lot of programming errors would get caught at compile time. It didn't seem to require too much object-oriented learning to get going, which was a good thing, as I was barely familiar with [OOP design patterns](https://opensource.com/article/19/7/understanding-software-design-patterns) at the time. But even in the earliest days, I liked the ideas behind its simplified [inheritance model](https://www.w3schools.com/java/java_inheritance.asp). (Java allows for single inheritance with interfaces provided to enrich the paradigm somewhat.) And it seemed to come with a rich library of functionality (the concept of "batteries included") that worked at the right level to directly meet my needs. Finally, I found myself rapidly coming to like the idea of both data and behavior being grouped together in objects. This seemed like a great way to explicitly control interactions among data—much better than enormous parameter lists or uncontrolled access to global variables. Since then, Java has grown to be the Helvetic military knife in my programming toolbox. I will still write stuff occasionally in awk or use Linux command-line utilities like cut, sort, or sed when they're obviously and precisely the straightforward way to solve the problem at hand. I doubt if I've written 50 lines of C in the last 20 years, though; Java has completely replaced C for my needs. In addition, Java has been improving over time. First of all, it's become much more performant. And it's added some really useful capabilities, like [try with resources](https://www.baeldung.com/java-try-with-resources), which very nicely cleans up verbose and somewhat messy code dealing with error handling during file I/O, for example; or [lambdas](https://www.baeldung.com/java-8-lambda-expressions-tips), which provide the ability to declare functions and pass them as parameters, instead of the old approach, which required creating classes or interfaces to "host" those functions; or [streams](https://www.tutorialspoint.com/java8/java8_streams), which encapsulate iterative behavior in functions, creating an efficient data-transformation pipeline materialized in the form of chained function calls. 
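As a minimal sketch of those three features working together (the example is mine and assumes nothing beyond a plain text file named **data.txt**):

```
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PipelineDemo {
    public static void main(String[] args) throws IOException {
        // try with resources: the stream, and the file handle beneath it,
        // is closed automatically, with no try/finally boilerplate
        try (Stream<String> lines = Files.lines(Paths.get("data.txt"))) {
            List<String> words = lines
                .flatMap(line -> Stream.of(line.split("\\s+"))) // a lambda passed as an argument
                .filter(word -> word.length() > 4)
                .map(String::toLowerCase)
                .distinct()
                .sorted()
                .collect(Collectors.toList()); // the chained calls form the pipeline
            words.forEach(System.out::println);
        }
    }
}
```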
## Java is getting better and better

A number of language designers have looked at ways to radically improve the Java experience. For me, most of these aren't yet of great interest; again, that's more a reflection of my typical workflow and (much) less a function of the features those languages bring. But one of these evolutionary steps has become an indispensable part of my programming arsenal: [Groovy](https://groovy-lang.org/). Groovy has become my go-to solution when I run into a small problem that needs a small solution. Moreover, it's highly compatible with Java. For me, Groovy fills the same niche that Python fills for a lot of other people—it's compact, DRY (don't repeat yourself), and expressive (lists and dictionaries have full language support). I also make use of [Grails](https://grails.org/), which uses Groovy to provide a streamlined web framework for very performant and useful Java web applications.

## But is Java still open source?

Recently, growing support for [OpenJDK](https://openjdk.java.net/) has further improved my comfort level with Java. A number of companies are supporting OpenJDK in various ways, including [AdoptOpenJDK, Amazon, and Red Hat](https://en.wikipedia.org/wiki/OpenJDK). In one of my bigger and longer-term projects, we use AdoptOpenJDK to [generate customized runtimes on several desktop platforms](https://opensource.com/article/19/4/java-se-11-removing-jnlp).

Are there better languages than Java? I'm sure there are, depending on your work needs. But I'm still a very happy Java user, and I haven't seen anything yet that threatens to pull me away.
11,339
为什么 const 无法让 C 代码跑得更快?
https://theartofmachinery.com/2019/08/12/c_const_isnt_for_performance.html
2019-09-14T18:16:34
[ "常量" ]
https://linux.cn/article-11339-1.html
![](/data/attachment/album/201909/14/181535lsrt9t93k1c1n0mt.jpg)

在几个月前的一篇文章里,我曾说过“[有一个流行的传言,`const` 有助于编译器优化 C 和 C++ 代码](https://theartofmachinery.com/2019/04/05/d_as_c_replacement.html#const-and-immutable)”。我觉得我需要解释一下,尤其是曾经我自己也以为这是显然对的。我将会用一些理论并构造一些例子来论证,然后在一个真实的代码库 `Sqlite` 上做一些实验和基准测试。

### 一个简单的测试

让我们从一个我曾以为最简单、最明显地表明 `const` 能让 C 代码更快的例子开始。首先,假设我们有如下两个函数声明:

```
void func(int *x);
void constFunc(const int *x);
```

然后假设我们有如下两份代码:

```
void byArg(int *x)
{
  printf("%d\n", *x);
  func(x);
  printf("%d\n", *x);
}

void constByArg(const int *x)
{
  printf("%d\n", *x);
  constFunc(x);
  printf("%d\n", *x);
}
```

调用 `printf()` 时,CPU 会通过指针从 RAM 中取得 `*x` 的值。很显然,`constByArg()` 会稍微快一点,因为编译器知道 `*x` 是常量,因此不需要在调用 `constFunc()` 之后再次获取它的值。它仅是打印相同的东西。没问题吧?让我们来看下 GCC 在如下编译选项下生成的汇编代码:

```
$ gcc -S -Wall -O3 test.c
$ view test.s
```

以下是函数 `byArg()` 的完整汇编代码:

```
byArg:
.LFB23:
    .cfi_startproc
    pushq   %rbx
    .cfi_def_cfa_offset 16
    .cfi_offset 3, -16
    movl    (%rdi), %edx
    movq    %rdi, %rbx
    leaq    .LC0(%rip), %rsi
    movl    $1, %edi
    xorl    %eax, %eax
    call    __printf_chk@PLT
    movq    %rbx, %rdi
    call    func@PLT  # constByArg() 中唯一不同的指令
    movl    (%rbx), %edx
    leaq    .LC0(%rip), %rsi
    xorl    %eax, %eax
    movl    $1, %edi
    popq    %rbx
    .cfi_def_cfa_offset 8
    jmp __printf_chk@PLT
    .cfi_endproc
```

函数 `byArg()` 和函数 `constByArg()` 生成的汇编代码中唯一的不同之处是 `constByArg()` 中的那一句是 `call constFunc@PLT`,这正是源代码中的调用。关键字 `const` 本身并没有造成任何字面上的不同。

好了,这是 GCC 的结果。或许我们需要一个更聪明的编译器。Clang 会有更好的表现吗?

```
$ clang -S -Wall -O3 -emit-llvm test.c
$ view test.ll
```

这是 IR 代码(LCTT 译注:LLVM 的中间语言)。它比汇编代码更加紧凑,所以我可以把两个函数都导出来,让你可以看清楚我所说的“除了调用外,没有任何字面上的不同”是什么意思:

```
; Function Attrs: nounwind uwtable
define dso_local void @byArg(i32*) local_unnamed_addr #0 {
  %2 = load i32, i32* %0, align 4, !tbaa !2
  %3 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %2)
  tail call void @func(i32* %0) #4
  %4 = load i32, i32* %0, align 4, !tbaa !2
  %5 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %4)
  ret void
}

; Function Attrs: nounwind uwtable
define dso_local void @constByArg(i32*) local_unnamed_addr #0 {
  %2 = load i32, i32* %0, align 4, !tbaa !2
  %3 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %2)
  tail call void @constFunc(i32* %0) #4
  %4 = load i32, i32* %0, align 4, !tbaa !2
  %5 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %4)
  ret void
}
```
### 某些(算是)有作用的东西

接下来是一组 `const` 能够真正产生作用的代码:

```
void localVar()
{
  int x = 42;
  printf("%d\n", x);
  constFunc(&x);
  printf("%d\n", x);
}

void constLocalVar()
{
  const int x = 42;  // 对本地变量使用 const
  printf("%d\n", x);
  constFunc(&x);
  printf("%d\n", x);
}
```

下面是 `localVar()` 的汇编代码,其中有两条指令在 `constLocalVar()` 中会被优化掉:

```
localVar:
.LFB25:
    .cfi_startproc
    subq    $24, %rsp
    .cfi_def_cfa_offset 32
    movl    $42, %edx
    movl    $1, %edi
    movq    %fs:40, %rax
    movq    %rax, 8(%rsp)
    xorl    %eax, %eax
    leaq    .LC0(%rip), %rsi
    movl    $42, 4(%rsp)
    call    __printf_chk@PLT
    leaq    4(%rsp), %rdi
    call    constFunc@PLT
    movl    4(%rsp), %edx  # 在 constLocalVar() 中没有
    xorl    %eax, %eax
    movl    $1, %edi
    leaq    .LC0(%rip), %rsi  # 在 constLocalVar() 中没有
    call    __printf_chk@PLT
    movq    8(%rsp), %rax
    xorq    %fs:40, %rax
    jne .L9
    addq    $24, %rsp
    .cfi_remember_state
    .cfi_def_cfa_offset 8
    ret
.L9:
    .cfi_restore_state
    call    __stack_chk_fail@PLT
    .cfi_endproc
```

这一点在 LLVM 生成的 IR 代码中更明显。在 `constLocalVar()` 中,第二次调用 `printf()` 之前的 `load` 会被优化掉:

```
; Function Attrs: nounwind uwtable
define dso_local void @localVar() local_unnamed_addr #0 {
  %1 = alloca i32, align 4
  %2 = bitcast i32* %1 to i8*
  call void @llvm.lifetime.start.p0i8(i64 4, i8* nonnull %2) #4
  store i32 42, i32* %1, align 4, !tbaa !2
  %3 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 42)
  call void @constFunc(i32* nonnull %1) #4
  %4 = load i32, i32* %1, align 4, !tbaa !2
  %5 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %4)
  call void @llvm.lifetime.end.p0i8(i64 4, i8* nonnull %2) #4
  ret void
}
```

好吧,现在,`constLocalVar()` 成功地省略了对 `*x` 的重新读取,但是可能你已经注意到了一些令人困惑的地方:`localVar()` 和 `constLocalVar()` 在函数体中做了同样的 `constFunc()` 调用。如果编译器能够推断出 `constFunc()` 没有修改 `constLocalVar()` 中的 `*x`,那为什么不能推断出完全一样的函数调用也没有修改 `localVar()` 中的 `*x`?

这个解释触及了 C 语言的 `const` 无法用作优化手段的核心原因。C 语言的 `const` 有两个有效的含义:它可以表示这个变量是某个可能是常数也可能不是常数的数据的一个只读别名,或者它可以表示该变量是真正的常量。如果你移除了一个指向常量的指针的 `const` 属性并写入数据,那结果将是一个未定义行为。另一方面,如果它只是一个指向非常量值的 `const` 指针,那就没有问题。

这份 `constFunc()` 的可能实现揭示了这意味着什么:

```
// x 是一个指向某个可能是常数也可能不是常数的数据的只读指针
void constFunc(const int *x)
{
  // local_var 是一个真正的常数
  const int local_var = 42;

  // C 语言规定的未定义行为
  doubleIt((int*)&local_var);

  // 谁知道这是不是一个未定义行为呢?
  doubleIt((int*)x);
}

void doubleIt(int *x)
{
  *x *= 2;
}
```
`localVar()` 传递给 `constFunc()` 一个指向非 `const` 变量的 `const` 指针。因为这个变量并非常量,`constFunc()` 可以撒个谎并强行修改它而不触发未定义行为。所以,编译器不能断定变量在调用 `constFunc()` 后仍是同样的值。在 `constLocalVar()` 中的变量是真正的常量,因此,编译器可以断定它不会改变 —— 因为这一次,如果 `constFunc()` 去除它的 `const` 属性并写入,那*将会*是一个未定义行为。

第一个例子中的函数 `byArg()` 和 `constByArg()` 是没有可能优化的,因为编译器没有任何方法能知道 `*x` 是否真的是 `const` 常量。

> > 补充(和题外话):相当多的读者已经正确地指出:对于 `const int *x`,被限定为 `const` 的并不是指针本身,而只是它所别名的数据;而 `const int * const extra_const` 则是一个“双向”都限定为 `const` 的指针。但是因为指针本身是否为常量与它所别名的数据是否为常量无关,所以结果是相同的。仅在 `extra_const` 指向使用 `const` 定义的对象时,`*(int*const)extra_const = 0` 才是未定义行为。(实际上,`*(int*)extra_const = 0` 也不会更糟。)要一直区分“完全限定为 `const` 的指针”和“本身可能是也可能不是常量、只是某个可能是也可能不是常量的对象的只读别名的指针”实在太拗口,所以我将继续不严谨地使用“`const` 指针”这个说法。(题外话结束)
> >
> >

但是为什么不一致呢?如果编译器能够推断出 `constLocalVar()` 中调用的 `constFunc()` 不会修改它的参数,那么肯定也能继续在其他 `constFunc()` 的调用上实施相同的优化,是吗?并不。编译器不能假设 `constLocalVar()` 就一定会被运行。如果它没有被运行(例如,它只是代码生成器或者宏产生的一些未使用的额外输出),`constFunc()` 就能偷偷地修改数据而不触发未定义行为。

你可能需要把上述说明和示例重复阅读几次,但如果它听起来很荒谬,也不必担心:它确实很荒谬。不幸的是,对 `const` 变量进行写入是最糟糕的一类未定义行为:大多数情况下,编译器无法知道它是否将会是未定义行为。所以,大多数情况下,编译器看见 `const` 时必须假设别处的代码可能会移除它,这意味着编译器不能利用它进行优化。这在实践中确实如此,因为有足够多的真实 C 代码带着“我知道我在做什么”的自信移除了 `const`。

简而言之,很多事情都可以阻止编译器利用 `const` 进行优化,包括通过指针从另一个作用域接收数据,或者在堆上分配数据。更糟糕的是,在大部分编译器能够利用 `const` 的情况下,它根本就不是必需的。例如,任何像样的编译器都能推断出下面代码中的 `x` 是一个常量,甚至都不需要 `const`:

```
int x = 42, y = 0;
printf("%d %d\n", x, y);
y += x;
printf("%d %d\n", x, y);
```

总结,`const` 对优化而言几乎无用,因为:

1. 除了特殊情况,编译器需要忽略它,因为其他代码可能合法地移除它
2. 在 #1 以外的大多数例外中,编译器无论如何都能推断出该变量是常量

### C++

如果你在使用 C++,那么还有另外一种 `const` 能够影响代码生成的方式:函数重载。你可以用 `const` 和非 `const` 的参数重载同一个函数,而非 `const` 版本的代码可能可以被优化(由程序员优化而不是编译器),减少某些拷贝或者其他事情。

```
void foo(int *p)
{
  // 需要做更多的数据拷贝
}

void foo(const int *p)
{
  // 不需要保护性的拷贝副本
}

int main()
{
  const int x = 42;
  // const 影响被调用的是哪一个版本的重载函数
  foo(&x);
  return 0;
}
```

一方面,我不认为这在实际的 C++ 代码中被大量使用。另一方面,要产生真正的差异,程序员必须做出一些编译器无法做出的假设,因为这些假设并不受语言的保证。

### 用 Sqlite3 进行实验

理论和人为构造的例子已经够多了。那么 `const` 在一个真正的代码库中有多大的影响呢?我将会在代码库 `Sqlite`(版本:3.30.0)上做一个测试,因为:

* 它真正地使用了 `const`
* 它不是一个简单的代码库(超过 20 万行代码)
* 作为一个数据库,它包括了字符串处理、数学计算、日期处理等一系列内容
* 它能够用 CPU 密集型的负载进行测试

此外,作者和贡献者们已经进行了多年的性能优化工作,因此我可以假定他们没有错过任何明显的优化。

#### 配置

我做了两份[源码](https://sqlite.org/src/doc/trunk/README.md)拷贝,并且正常编译其中一份。而对于另一份拷贝,我插入了这个取巧的预处理代码段,将 `const` 变成一个空操作:

```
#define const
```

(GNU) `sed` 可以将一些东西添加到每个文件的顶端,比如 `sed -i '1i#define const' *.c *.h`。

Sqlite 会在构建时使用脚本生成代码,这让事情稍微复杂了一些。幸运的是,当 `const` 代码和非 `const` 代码混合时,编译器会产生大量的警告,因此很容易发现这种情况,并调整脚本来包含我的反 `const` 代码段。

直接比较编译结果毫无意义,因为任意微小的改变就会影响整个内存布局,这可能会改变整个代码中的指针和函数调用。因此,我对反汇编结果(`objdump -d libsqlite3.so.0.8.6`)取了一个“指纹”,由每条指令的二进制长度和助记符组成。举个例子,这个函数:

```
000000000005d570 <sqlite3_blob_read>:
   5d570:       4c 8d 05 59 a2 ff ff    lea    -0x5da7(%rip),%r8        # 577d0 <sqlite3BtreePayloadChecked>
   5d577:       e9 04 fe ff ff          jmpq   5d380 <blobReadWrite>
   5d57c:       0f 1f 40 00             nopl   0x0(%rax)
```

将会变成这样:

```
sqlite3_blob_read 7lea 5jmpq 4nopl
```

在编译时,我保留了所有 `Sqlite` 的编译设置。

#### 分析编译结果

`const` 版本的 `libsqlite3.so` 的大小是 4,740,704 字节,大约比 4,736,712 字节的非 `const` 版本大了 0.1%。两个版本都导出了 1374 个函数(不包括 PLT 里的底层辅助函数之类),其中一共有 13 个函数的指纹不一致。

其中的一些变化是由于那个笨拙的预处理技巧造成的。举个例子,这里有一个发生了更改的函数(已经删去一些 `Sqlite` 特有的定义):

```
#define LARGEST_INT64  (0xffffffff|(((int64_t)0x7fffffff)<<32))
#define SMALLEST_INT64 (((int64_t)-1) - LARGEST_INT64)

static int64_t doubleToInt64(double r){
  /*
  ** Many compilers we encounter do not define constants for the
  ** minimum and maximum 64-bit integers, or they define them
  ** inconsistently.  And many do not understand the "LL" notation.
  ** So we define our own static constants here using nothing
  ** larger than a 32-bit integer constant.
  */
  static const int64_t maxInt = LARGEST_INT64;
  static const int64_t minInt = SMALLEST_INT64;

  if( r<=(double)minInt ){
    return minInt;
  }else if( r>=(double)maxInt ){
    return maxInt;
  }else{
    return (int64_t)r;
  }
}
```
删去 `const` 使得这些常量变成了 `static` 变量。我不明白不关心 `const` 的人为什么会把这些变量声明为 `static`。同时删去 `static` 和 `const` 会让 GCC 再次把它们识别为常量,而我们将得到同样的编译输出。13 个函数中有 3 个就是因为类似这样的局部 `static const` 变量而产生了假的变化,但我没有费心去修复其中任何一个。

`Sqlite` 使用了很多全局变量,而这正是大多数真正的 `const` 优化产生的地方。通常情况下,这些优化类似于把与变量的比较替换成与常量的比较,或者把一个循环部分展开一步。([Radare toolkit](https://rada.re/r/) 可以很方便地找出这些优化。)一些变化则平淡无奇。`sqlite3ParseUri()` 有 487 条指令,但 `const` 造成的唯一区别是把这对比较:

```
test %al, %al
je <sqlite3ParseUri+0x717>
cmp $0x23, %al
je <sqlite3ParseUri+0x717>
```

的顺序交换了一下:

```
cmp $0x23, %al
je <sqlite3ParseUri+0x717>
test %al, %al
je <sqlite3ParseUri+0x717>
```

#### 基准测试

`Sqlite` 自带了一个性能回归测试,因此我对每个版本的代码各运行了一百次,仍然使用默认的 `Sqlite` 编译设置。以秒为单位的测试结果如下:

| | const | 非 const |
| --- | --- | --- |
| 最小值 | 10.658s | 10.803s |
| 中间值 | 11.571s | 11.519s |
| 最大值 | 11.832s | 11.658s |
| 平均值 | 11.531s | 11.492s |

就我个人看来,我没有发现足够的证据来说明这个差异值得关注。我是说,我从整个程序中删掉了 `const`,如果它真有显著的影响,那差异应该是显而易见的。但也许你关心任何微小的差异,因为你正在做一些对绝对性能要求非常高的事。那让我们试一下统计分析。

我喜欢使用类似 Mann-Whitney U 检验这样的工具。它类似于更著名的 T 检验,但对在计算机上做计时时产生的那种复杂的随机波动(由于不可预测的上下文切换、页错误等)更加健壮。以下是结果:

| | const | 非 const |
| --- | --- | --- |
| N | 100 | 100 |
| Mean rank | 121.38 | 79.62 |

| | |
| --- | --- |
| Mann-Whitney U | 2912 |
| Z | -5.10 |
| 2-sided p value | <10^-6 |
| HL median difference | -0.056s |
| 95% confidence interval | -0.077s – -0.038s |

U 检验发现了统计意义上显著的性能差异。但是,令人惊讶的是,实际上是非 `const` 版本更快——大约 60ms,即 0.5%。似乎 `const` 启用的少量“优化”不值得额外代码的开销,毕竟 `const` 并没有启用任何类似自动矢量化那样的重要优化。当然,你的结果可能因编译器配置、编译器版本或者代码库等等而有所不同,但我觉得可以公平地说:如果 `const` 真能有效地提高 C 的性能,我们现在早就应该看到了。

### 那么,const 有什么用呢?

尽管存在缺陷,C/C++ 的 `const` 仍有助于类型安全。特别是,结合 C++ 的移动语义和 `std::unique_ptr`,`const` 可以使指针所有权显式化。在超过十万行代码的 C++ 旧代码库里,指针所有权模糊是一个大难题,单凭这一点我就对它心存感激。

但是,我过去对 `const` 的使用远远超出了有意义的类型安全的范畴。我曾听说过基于性能上的原因,最好是尽可能多地使用 `const`。我曾听说过当性能很重要时,重构代码并添加更多的 `const` 非常重要,即使以降低代码可读性的方式。**当时觉得这没问题,但后来我才知道这并不对。**

---

via: <https://theartofmachinery.com/2019/08/12/c_const_isnt_for_performance.html>

作者:[Simon Arneaud](https://theartofmachinery.com) 选题:[lujun9972](https://github.com/lujun9972) 译者:[LazyWolfLin](https://github.com/LazyWolfLin) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
In a post a few months back I said [it's a popular myth that const is helpful for enabling compiler optimisations in C and C++](/2019/04/05/d_as_c_replacement.html#const-and-immutable). I figured I should explain that one, especially because I used to believe it was obviously true, myself. I'll start off with some theory and artificial examples, then I'll do some experiments and benchmarks on a real codebase: Sqlite.

## A simple test

Let's start with what I used to think was the simplest and most obvious example of how `const` can make C code faster. First, let's say we have these two function declarations:

```
void func(int *x);
void constFunc(const int *x);
```

And suppose we have these two versions of some code:

```
void byArg(int *x)
{
  printf("%d\n", *x);
  func(x);
  printf("%d\n", *x);
}

void constByArg(const int *x)
{
  printf("%d\n", *x);
  constFunc(x);
  printf("%d\n", *x);
}
```

To do the `printf()`, the CPU has to fetch the value of `*x` from RAM through the pointer. Obviously, `constByArg()` can be made slightly faster because the compiler knows that `*x` is constant, so there's no need to load its value a second time after `constFunc()` does its thing. It's just printing the same thing. Right? Let's see the assembly code generated by GCC with optimisations cranked up:

```
$ gcc -S -Wall -O3 test.c
$ view test.s
```

Here's the full assembly output for `byArg()`:

```
byArg:
.LFB23:
    .cfi_startproc
    pushq   %rbx
    .cfi_def_cfa_offset 16
    .cfi_offset 3, -16
    movl    (%rdi), %edx
    movq    %rdi, %rbx
    leaq    .LC0(%rip), %rsi
    movl    $1, %edi
    xorl    %eax, %eax
    call    __printf_chk@PLT
    movq    %rbx, %rdi
    call    func@PLT  # The only instruction that's different in constByArg
    movl    (%rbx), %edx
    leaq    .LC0(%rip), %rsi
    xorl    %eax, %eax
    movl    $1, %edi
    popq    %rbx
    .cfi_def_cfa_offset 8
    jmp __printf_chk@PLT
    .cfi_endproc
```

The only difference between the generated assembly code for `byArg()` and `constByArg()` is that `constByArg()` has a `call constFunc@PLT`, just like the source code asked. The `const` itself has literally made zero difference.

Okay, that's GCC. Maybe we just need a sufficiently smart compiler. Is Clang any better?

```
$ clang -S -Wall -O3 -emit-llvm test.c
$ view test.ll
```

Here's the IR. It's more compact than assembly, so I'll dump both functions so you can see what I mean by "literally zero difference except for the call":

```
; Function Attrs: nounwind uwtable
define dso_local void @byArg(i32*) local_unnamed_addr #0 {
  %2 = load i32, i32* %0, align 4, !tbaa !2
  %3 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %2)
  tail call void @func(i32* %0) #4
  %4 = load i32, i32* %0, align 4, !tbaa !2
  %5 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %4)
  ret void
}

; Function Attrs: nounwind uwtable
define dso_local void @constByArg(i32*) local_unnamed_addr #0 {
  %2 = load i32, i32* %0, align 4, !tbaa !2
  %3 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %2)
  tail call void @constFunc(i32* %0) #4
  %4 = load i32, i32* %0, align 4, !tbaa !2
  %5 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %4)
  ret void
}
```
## Something that (sort of) works

Here's some code where `const` actually does make a difference:

```
void localVar()
{
  int x = 42;
  printf("%d\n", x);
  constFunc(&x);
  printf("%d\n", x);
}

void constLocalVar()
{
  const int x = 42;  // const on the local variable
  printf("%d\n", x);
  constFunc(&x);
  printf("%d\n", x);
}
```

Here's the assembly for `localVar()`, which has two instructions that have been optimised out of `constLocalVar()`:

```
localVar:
.LFB25:
    .cfi_startproc
    subq    $24, %rsp
    .cfi_def_cfa_offset 32
    movl    $42, %edx
    movl    $1, %edi
    movq    %fs:40, %rax
    movq    %rax, 8(%rsp)
    xorl    %eax, %eax
    leaq    .LC0(%rip), %rsi
    movl    $42, 4(%rsp)
    call    __printf_chk@PLT
    leaq    4(%rsp), %rdi
    call    constFunc@PLT
    movl    4(%rsp), %edx  # not in constLocalVar()
    xorl    %eax, %eax
    movl    $1, %edi
    leaq    .LC0(%rip), %rsi  # not in constLocalVar()
    call    __printf_chk@PLT
    movq    8(%rsp), %rax
    xorq    %fs:40, %rax
    jne .L9
    addq    $24, %rsp
    .cfi_remember_state
    .cfi_def_cfa_offset 8
    ret
.L9:
    .cfi_restore_state
    call    __stack_chk_fail@PLT
    .cfi_endproc
```

The LLVM IR is a little clearer. The `load` just before the second `printf()` call has been optimised out of `constLocalVar()`:

```
; Function Attrs: nounwind uwtable
define dso_local void @localVar() local_unnamed_addr #0 {
  %1 = alloca i32, align 4
  %2 = bitcast i32* %1 to i8*
  call void @llvm.lifetime.start.p0i8(i64 4, i8* nonnull %2) #4
  store i32 42, i32* %1, align 4, !tbaa !2
  %3 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 42)
  call void @constFunc(i32* nonnull %1) #4
  %4 = load i32, i32* %1, align 4, !tbaa !2
  %5 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %4)
  call void @llvm.lifetime.end.p0i8(i64 4, i8* nonnull %2) #4
  ret void
}
```

Okay, so, `constLocalVar()` has successfully elided the reloading of `*x`, but maybe you've noticed something a bit confusing: it's the same `constFunc()` call in the bodies of `localVar()` and `constLocalVar()`. If the compiler can deduce that `constFunc()` didn't modify `*x` in `constLocalVar()`, why can't it deduce that the exact same function call didn't modify `*x` in `localVar()`?

The explanation gets closer to the heart of why C `const` is impractical as an optimisation aid. C `const` effectively has two meanings: it can mean the variable is a read-only alias to some data that may or may not be constant, or it can mean the variable is actually constant. If you cast away `const` from a pointer to a constant value and then write to it, the result is undefined behaviour. On the other hand, it's okay if it's just a `const` pointer to a value that's not constant.

This possible implementation of `constFunc()` shows what that means:

```
// x is just a read-only pointer to something that may or may not be a constant
void constFunc(const int *x)
{
  // local_var is a true constant
  const int local_var = 42;

  // Definitely undefined behaviour by C rules
  doubleIt((int*)&local_var);

  // Who knows if this is UB?
  doubleIt((int*)x);
}

void doubleIt(int *x)
{
  *x *= 2;
}
```

`localVar()` gave `constFunc()` a `const` pointer to non-`const` variable. Because the variable wasn't originally `const`, `constFunc()` can be a liar and forcibly modify it without triggering UB. So the compiler can't assume the variable has the same value after `constFunc()` returns.
The variable in `constLocalVar()` really is `const`, though, so the compiler can assume it won't change — because this time it *would* be UB for `constFunc()` to cast `const` away and write to it.

The `byArg()` and `constByArg()` functions in the first example are hopeless because the compiler has no way of knowing if `*x` really is `const`.

**Update (and digression):** Quite a few readers have correctly pointed out that with `const int *x`, the pointer itself isn't qualified `const`, just the data being aliased, and that `const int * const extra_const` is a pointer that's qualified `const` "both ways". But because the constness of the pointer itself is independent of the constness of the data being aliased, the result is the same. `*(int*const)extra_const = 0` is still UB only if `extra_const` points to an object that's defined with `const`. (In fact, `*(int*)extra_const = 0` wouldn't be any worse.) Because it's a mouthful to keep distinguishing between a fully `const` pointer and a pointer that may or may not be itself constant but is a read-only alias to an object that may or may not be constant, I'll just keep referring loosely to "`const` pointers". (End of digression.)

But why the inconsistency? If the compiler can assume that `constFunc()` doesn't modify its argument when called in `constLocalVar()`, surely it can go ahead and apply the same optimisations to other `constFunc()` calls, right? Nope. The compiler can't assume `constLocalVar()` is ever run at all. If it isn't (say, because it's just some unused extra output of a code generator or macro), `constFunc()` can sneakily modify data without ever triggering UB.

You might want to read the above explanation and examples a few times, but don't worry if it sounds absurd: it is. Unfortunately, writing to `const` variables is the worst kind of UB: most of the time the compiler can't know if it even would be UB. So most of the time the compiler sees `const`, it has to assume that someone, somewhere could cast it away, which means the compiler can't use it for optimisation. This is true in practice because enough real-world C code has "I know what I'm doing" casting away of `const`.

In short, a whole lot of things can prevent the compiler from using `const` for optimisation, including receiving data from another scope using a pointer, or allocating data on the heap. Even worse, in most cases where `const` can be used by the compiler, it's not even necessary. For example, any decent compiler can figure out that `x` is constant in the following code, even without `const`:

```
int x = 42, y = 0;
printf("%d %d\n", x, y);
y += x;
printf("%d %d\n", x, y);
```

TL;DR: `const` is almost useless for optimisation because

- Except for special cases, the compiler has to ignore it because other code might legally cast it away
- In most of the exceptions to #1, the compiler can figure out a variable is constant, anyway

## C++

There's another way `const` can affect code generation if you're using C++: function overloads. You can have `const` and non-`const` overloads of the same function, and maybe the non-`const` can be optimised (by the programmer, not the compiler) to do less copying or something.

```
void foo(int *p)
{
  // Needs to do more copying of data
}

void foo(const int *p)
{
  // Doesn't need defensive copies
}

int main()
{
  const int x = 42;
  // const-ness affects which overload gets called
  foo(&x);
  return 0;
}
```

On the one hand, I don't think this is exploited much in practical C++ code.
On the other hand, to make a real difference, the programmer has to make assumptions that the compiler can’t make because they’re not guaranteed by the language. ## An experiment with Sqlite3 That’s enough theory and contrived examples. How much effect does `const` have on a real codebase? I thought I’d do a test on the Sqlite database (version 3.30.0) because - It actually uses `const` - It’s a non-trivial codebase (over 200KLOC) - As a database, it includes a range of things from string processing to arithmetic to date handling - It can be tested with CPU-bound loads Also, the author and contributors have put years of effort into performance optimisation already, so I can assume they haven’t missed anything obvious. ### The setup I made two copies of [the source code](https://sqlite.org/src/doc/trunk/README.md) and compiled one normally. For the other copy, I used this hacky preprocessor snippet to turn `const` into a no-op: `#define const` (GNU) `sed` can add that to the top of each file with something like `sed -i '1i#define const' *.c *.h` . Sqlite makes things slightly more complicated by generating code using scripts at build time. Fortunately, compilers make a lot of noise when `const` and non-`const` code are mixed, so it was easy to detect when this happened, and tweak the scripts to include my anti-`const` snippet. Directly diffing the compiled results is a bit pointless because a tiny change can affect the whole memory layout, which can change pointers and function calls throughout the code. Instead I took a fingerprint of the disassembly (`objdump -d libsqlite3.so.0.8.6` ), using the binary size and mnemonic for each instruction. For example, this function: ``` 000000000005d570 <sqlite3_blob_read>: 5d570: 4c 8d 05 59 a2 ff ff lea -0x5da7(%rip),%r8 # 577d0 <sqlite3BtreePayloadChecked> 5d577: e9 04 fe ff ff jmpq 5d380 <blobReadWrite> 5d57c: 0f 1f 40 00 nopl 0x0(%rax) ``` would turn into something like this: ``` sqlite3_blob_read 7lea 5jmpq 4nopl ``` I left all the Sqlite build settings as-is when compiling anything. ### Analysing the compiled code The `const` version of libsqlite3.so was 4,740,704 bytes, about 0.1% larger than the 4,736,712 bytes of the non-`const` version. Both had 1374 exported functions (not including low-level helpers like stuff in the PLT), and a total of 13 had any difference in fingerprint. A few of the changes were because of the dumb preprocessor hack. For example, here’s one of the changed functions (with some Sqlite-specific definitions edited out): ``` #define LARGEST_INT64 (0xffffffff|(((int64_t)0x7fffffff)<<32)) #define SMALLEST_INT64 (((int64_t)-1) - LARGEST_INT64) static int64_t doubleToInt64(double r){ /* ** Many compilers we encounter do not define constants for the ** minimum and maximum 64-bit integers, or they define them ** inconsistently. And many do not understand the "LL" notation. ** So we define our own static constants here using nothing ** larger than a 32-bit integer constant. */ static const int64_t maxInt = LARGEST_INT64; static const int64_t minInt = SMALLEST_INT64; if( r<=(double)minInt ){ return minInt; }else if( r>=(double)maxInt ){ return maxInt; }else{ return (int64_t)r; } } ``` Removing `const` makes those constants into `static` variables. I don’t see why anyone who didn’t care about `const` would make those variables `static` . Removing both `static` and `const` makes GCC recognise them as constants again, and we get the same output. 
Returning to the diff results: three of the 13 functions had spurious changes because of local `static const` variables like the one above, but I didn’t bother fixing any of them.

Sqlite uses a lot of global variables, and that’s where most of the real `const` optimisations came from. Typically they were things like a comparison with a variable being replaced with a constant comparison, or a loop being partially unrolled a step. (The [Radare toolkit](https://rada.re/r/) was handy for figuring out what the optimisations did.) A few changes were underwhelming. `sqlite3ParseUri()` is 487 instructions, but the only difference `const` made was taking this pair of comparisons:

```
test %al, %al
je <sqlite3ParseUri+0x717>
cmp $0x23, %al
je <sqlite3ParseUri+0x717>
```

And swapping their order:

```
cmp $0x23, %al
je <sqlite3ParseUri+0x717>
test %al, %al
je <sqlite3ParseUri+0x717>
```

### Benchmarking

Sqlite comes with a performance regression test, so I tried running it a hundred times for each version of the code, still using the default Sqlite build settings. Here are the timing results in seconds:

|         | const   | No const |
| ------- | ------- | -------- |
| Minimum | 10.658s | 10.803s  |
| Median  | 11.571s | 11.519s  |
| Maximum | 11.832s | 11.658s  |
| Mean    | 11.531s | 11.492s  |

Personally, I’m not seeing enough evidence of a difference worth caring about. I mean, I removed `const` from the entire program, so if it made a significant difference, I’d expect it to be easy to see. But maybe you care about any tiny difference because you’re doing something absolutely performance critical. Let’s try some statistical analysis.

I like using the Mann-Whitney U test for stuff like this. It’s similar to the more-famous t test for detecting differences in groups, but it’s more robust to the kind of complex random variation you get when timing things on computers (thanks to unpredictable context switches, page faults, etc). Here’s the result:

|           | const  | No const |
| --------- | ------ | -------- |
| N         | 100    | 100      |
| Mean rank | 121.38 | 79.62    |

| Mann-Whitney U          | 2912              |
| ----------------------- | ----------------- |
| Z                       | -5.10             |
| 2-sided p value         | <10⁻⁶             |
| HL median difference    | -0.056s           |
| 95% confidence interval | -0.077s – -0.038s |

The U test has detected a statistically significant difference in performance. But, surprise, it’s actually the non-`const` version that’s faster — by about 60ms, or 0.5%. It seems like the small number of “optimisations” that `const` enabled weren’t worth the cost of extra code. It’s not like `const` enabled any major optimisations like auto-vectorisation. Of course, your mileage may vary with different compiler flags, or compiler versions, or codebases, or whatever, but I think it’s fair to say that if `const` were effective at improving C performance, we’d have seen it by now.

## So, what’s `const` for?

For all its flaws, C/C++ `const` is still useful for type safety. In particular, combined with C++ move semantics and `std::unique_ptr`s, `const` can make pointer ownership explicit. Pointer ownership ambiguity was a huge pain in old C++ codebases over ~100KLOC, so personally I’m grateful for that alone.

However, I used to go beyond using `const` for meaningful type safety. I’d heard it was best practice to use `const` literally as much as possible for performance reasons. I’d heard that when performance really mattered, it was important to refactor code to add more `const`, even in ways that made it less readable. That made sense at the time, but I’ve since learned that it’s just not true.
11,341
如何在 Linux 上创建和使用交换文件
https://itsfoss.com/create-swap-file-linux/
2019-09-14T19:07:00
[]
https://linux.cn/article-11341-1.html
![](/data/attachment/album/201909/14/190637uggjgsjoogxg3vh0.jpg)

本教程讨论了 Linux 中交换文件的概念,为什么使用它以及它相对于传统交换分区的优势。你将学习如何创建交换文件和调整其大小。

### 什么是 Linux 的交换文件?

交换文件允许 Linux 将磁盘空间模拟为内存。当你的系统开始耗尽内存时,它会使用交换空间将内存的一些内容交换到磁盘空间上。这样释放了内存,为更重要的进程服务。当内存再次空闲时,它会从磁盘交换回数据。我建议[阅读这篇文章,了解 Linux 上的交换空间的更多内容](https://itsfoss.com/swap-size/)。

传统上,交换空间是磁盘上的一个独立分区。安装 Linux 时,只需创建一个单独的分区进行交换。但是这种趋势在最近几年发生了变化。

使用交换文件,你不再需要单独的分区。你会在根目录下创建一个文件,并告诉你的系统将其用作交换空间就行了。

使用专用的交换分区,在许多情况下,调整交换空间的大小是一个可怕而不可能的任务。但是有了交换文件,你可以随意调整它们的大小。

最新版本的 Ubuntu 和其他一些 Linux 发行版已经开始[默认使用交换文件](https://help.ubuntu.com/community/SwapFaq)。甚至如果你没有创建交换分区,Ubuntu 也会自己创建一个 1GB 左右的交换文件。

让我们看看交换文件的更多信息。

### 检查 Linux 的交换空间

在你开始添加交换空间之前,最好检查一下你的系统中是否已经有了交换空间。

你可以用 [Linux 上的 free 命令](https://linuxhandbook.com/free-command/)检查它。就我而言,我的[戴尔 XPS](https://itsfoss.com/dell-xps-13-ubuntu-review/)有 14GB 的交换容量。

```
free -h
              total        used        free      shared  buff/cache   available
Mem:           7.5G        4.1G        267M        971M        3.1G        2.2G
Swap:           14G          0B         14G
```

`free` 命令给出了交换空间的大小,但它并没有告诉你它是真实的交换分区还是交换文件。`swapon` 命令在这方面会更好。

```
swapon --show
NAME           TYPE      SIZE  USED  PRIO
/dev/nvme0n1p4 partition 14.9G    0B    -2
```

如你所见,我有 14.9GB 的交换空间,它在一个单独的分区上。如果是交换文件,类型应该是 `file` 而不是 `partition`。

```
swapon --show
NAME      TYPE  SIZE  USED  PRIO
/swapfile file    2G    0B    -2
```

如果你的系统上没有交换空间,它应该显示如下内容:

```
free -h
              total        used        free      shared  buff/cache   available
Mem:           7.5G        4.1G        267M        971M        3.1G        2.2G
Swap:            0B          0B          0B
```

而 `swapon` 命令不会显示任何输出。

### 在 Linux 上创建交换文件

如果你的系统没有交换空间,或者你认为交换空间不足,你可以在 Linux 上创建交换文件。你也可以创建多个交换文件。

让我们看看如何在 Linux 上创建交换文件。我在本教程中使用 Ubuntu 18.04,但它也应该适用于其他 Linux 发行版本。

#### 步骤 1:创建一个新的交换文件

首先,创建一个具有所需交换空间大小的文件。假设我想给我的系统增加 1GB 的交换空间。使用 `fallocate` 命令创建大小为 1GB 的文件。

```
sudo fallocate -l 1G /swapfile
```

建议只允许 `root` 用户读写该交换文件。当你尝试将此文件用于交换区域时,你甚至会看到类似“不安全权限 0644,建议 0600”的警告。

```
sudo chmod 600 /swapfile
```

请注意,交换文件的名称可以是任意的。如果你需要多个交换空间,你可以给它任何合适的名称,如 `swap_file_1`、`swap_file_2` 等。它们只是一个预定义大小的文件。

#### 步骤 2:将新文件标记为交换空间

你需要告诉 Linux 系统该文件将被用作交换空间。你可以用 [mkswap](http://man7.org/linux/man-pages/man8/mkswap.8.html) 工具做到这一点。

```
sudo mkswap /swapfile
```

你应该会看到这样的输出:

```
Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=7e1faacb-ea93-4c49-a53d-fb40f3ce016a
```

#### 步骤 3:启用交换文件

现在,你的系统知道文件 `swapfile` 可以用作交换空间。但是还没有完成。你需要启用该交换文件,以便系统可以开始使用该文件作为交换。

```
sudo swapon /swapfile
```

现在,如果你检查交换空间,你应该会看到你的 Linux 系统会识别并使用它作为交换空间:

```
swapon --show
NAME      TYPE  SIZE   USED  PRIO
/swapfile file  1024M    0B    -2
```

#### 步骤 4:让改变持久化

迄今为止你所做的一切都是暂时的。重新启动系统,所有更改都将消失。

你可以通过将新创建的交换文件添加到 `/etc/fstab` 文件来使更改持久化。

对 `/etc/fstab` 文件进行任何更改之前,最好先进行备份。

```
sudo cp /etc/fstab /etc/fstab.back
```

现在将以下行添加到 `/etc/fstab` 文件的末尾:

```
/swapfile none swap sw 0 0
```

你可以使用[命令行文本编辑器](https://itsfoss.com/command-line-text-editors-linux/)手动操作,或者使用以下命令:

```
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

现在一切都准备好了。即使在重新启动你的 Linux 系统后,你的交换文件也会被使用。

### 调整 swappiness 参数

`swappiness` 参数决定了交换空间的使用频率。`swappiness` 值的范围从 0 到 100。较高的值意味着交换空间将被更频繁地使用。

Ubuntu 桌面的默认 `swappiness` 是 60,而服务器的默认 `swappiness` 是 1。你可以使用以下命令检查 `swappiness`:

```
cat /proc/sys/vm/swappiness
```

为什么服务器应该使用低的 `swappiness` 值?因为交换空间比内存慢,为了获得更好的性能,应该尽可能多地使用内存。在服务器上,性能因素至关重要,因此 `swappiness` 应该尽可能低。

你可以使用以下系统命令动态更改 `swappiness`:

```
sudo sysctl vm.swappiness=25
```

这种改变只是暂时的。如果要使其永久化,可以编辑 `/etc/sysctl.conf` 文件,并在文件末尾添加 `swappiness` 值:

```
vm.swappiness=25
```

### 在 Linux 上调整交换空间的大小

在 Linux 上有几种方法可以调整交换空间的大小。但是在你看到这一点之前,你应该了解一些关于它的事情。

当你要求系统停止将交换文件用于交换空间时,它会将所有数据(确切地说是内存页)传输回内存。所以你应该有足够的空闲内存,然后再停止交换。
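在执行 `swapoff` 之前,可以先粗略估算一下可用内存是否装得下已用的交换数据。下面是一个示意性的小脚本(并非本文原作者提供;字段位置基于常见的 `free` 输出格式,使用前请自行核实):

```bash
#!/bin/bash
# 示意脚本:在执行 swapoff 之前,检查可用内存是否足以容纳已用的交换数据
swap_used=$(free -b | awk '/^Swap:/ {print $3}')   # 已用交换空间(字节)
mem_avail=$(free -b | awk '/^Mem:/ {print $7}')    # 可用内存(字节,第 7 列为 available)

if [ "$swap_used" -lt "$mem_avail" ]; then
    echo "可用内存足够,可以比较安全地执行 swapoff"
else
    echo "可用内存不足,建议先创建并启用一个临时交换文件"
fi
```

这只是一个粗略的检查,实际的内存需求还取决于系统当时的负载。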
这就是为什么创建和启用另一个临时交换文件是一个好的做法的原因。这样,当你关闭原来的交换空间时,你的系统将使用临时交换文件。现在你可以调整原来的交换空间的大小。你可以手动删除临时交换文件,也可以留着不管:重启后系统不会再使用它(除非将其写入 `/etc/fstab`),但文件本身不会被自动删除,在你删除它之前仍会占用磁盘空间。

如果你有足够的可用内存或者创建了临时交换空间,那就关闭你原来的交换文件。

```
sudo swapoff /swapfile
```

现在你可以使用 `fallocate` 命令来更改文件的大小。比方说,你将其大小更改为 2GB:

```
sudo fallocate -l 2G /swapfile
```

现在再次将文件标记为交换空间:

```
sudo mkswap /swapfile
```

并再次启用交换文件:

```
sudo swapon /swapfile
```

你也可以选择同时拥有多个交换文件。

### 删除 Linux 中的交换文件

你可能有不在 Linux 上使用交换文件的原因。如果你想删除它,该过程类似于你刚才看到的调整交换大小的过程。

首先,确保你有足够的空闲内存。现在关闭交换文件:

```
sudo swapoff /swapfile
```

下一步是从 `/etc/fstab` 文件中删除相应的条目。

最后,你可以删除该文件来释放空间:

```
sudo rm /swapfile
```

### 你用了交换空间了吗?

我想你现在已经很好地理解了 Linux 中的交换文件概念。现在,你可以根据需要轻松创建交换文件或调整它们的大小。

如果你对这个话题有什么要补充的或者有任何疑问,请在下面留下评论。

---

via: <https://itsfoss.com/create-swap-file-linux/>

作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[heguangzhi](https://github.com/heguangzhi) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
## What is a swap file in Linux?

A swap file allows Linux to simulate disk space as RAM. When your system starts running out of RAM, it uses the swap space and swaps some content of the RAM onto the disk. This frees up the RAM to serve more important processes. When the RAM is free again, it swaps back the data from the disk. I recommend [reading this article to learn more about swap on Linux](https://itsfoss.com/swap-size/).

Traditionally, swap space is used as a separate partition on the disk. When you install Linux, you create a separate partition just for swap. But this trend has changed in recent years.

With a swap file, you don’t need a separate partition anymore. You create a file under root and tell your system to use it as the swap space.

With a dedicated swap partition, resizing the swap space is a nightmare and an impossible task in many cases. But with swap files, you can resize them as you like.

Recent versions of Ubuntu and some other Linux distributions have started [using the swap file by default](https://help.ubuntu.com/community/SwapFaq?ref=its-foss). Even if you don’t create a swap partition, Ubuntu creates a swap file of around 1 GB on its own.

Let’s see some more on swap files.

![Swap File Linux](https://itsfoss.com/content/images/wordpress/2019/08/swap-file-linux-800x450.png)

## Check swap space in Linux

Before you go and start adding swap space, it would be a good idea to check whether you have swap space already available in your system.

You can check it with the [free command in Linux](https://linuxhandbook.com/free-command/?ref=its-foss). In my case, my [Dell XPS](https://itsfoss.com/dell-xps-13-ubuntu-review/) has 14GB of swap.

```
free -h
              total        used        free      shared  buff/cache   available
Mem:           7.5G        4.1G        267M        971M        3.1G        2.2G
Swap:           14G          0B         14G
```

The free command gives you the size of the swap space but it doesn’t tell you if it’s a real swap partition or a swap file. The swapon command is better in this regard.

```
swapon --show
NAME           TYPE      SIZE  USED  PRIO
/dev/nvme0n1p4 partition 14.9G    0B    -2
```

As you can see, I have 14.9 GB of swap space and it’s on a separate partition. If it was a swap file, the type would have been file instead of partition.

```
swapon --show
NAME      TYPE  SIZE  USED  PRIO
/swapfile file    2G    0B    -2
```

If you don’t have a swap space on your system, it should show something like this:

```
free -h
              total        used        free      shared  buff/cache   available
Mem:           7.5G        4.1G        267M        971M        3.1G        2.2G
Swap:            0B          0B          0B
```

The swapon command won’t show any output.

## Create swap file on Linux

If your system doesn’t have swap space or if you think the swap space is not adequate, you can create a swap file on Linux. You can create multiple swap files as well.

Let’s see how to create a swap file on Linux. I am using Ubuntu 18.04 in this tutorial but it should work on other Linux distributions as well.

### Step 1: Make a new swap file

First things first, create a file with the size of swap space you want. Let’s say that I want to add 1 GB of swap space to my system. Use the fallocate command to create a file of size 1 GB.

`sudo fallocate -l 1G /swapfile`

It is recommended to allow only root to read and write to the swap file. You’ll even see a warning like “insecure permissions 0644, 0600 suggested” when you try to use this file for swap area.

`sudo chmod 600 /swapfile`

Do note that the name of the swap file could be anything.
If you need multiple swap spaces, you can give it any appropriate name like swap_file_1, swap_file_2 etc. It’s just a file with a predefined size.

### Step 2: Mark the new file as swap space

You need to tell the Linux system that this file will be used as swap space. You can do that with the [mkswap](http://man7.org/linux/man-pages/man8/mkswap.8.html?ref=its-foss) tool.

`sudo mkswap /swapfile`

You should see an output like this:

```
Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=7e1faacb-ea93-4c49-a53d-fb40f3ce016a
```

### Step 3: Enable the swap file

Now your system knows that the file swapfile can be used as swap space. But it is not done yet. You need to enable the swap file so that your system can start using this file as swap.

`sudo swapon /swapfile`

Now if you check the swap space, you should see that your Linux system recognizes and uses it as the swap area:

```
swapon --show
NAME      TYPE  SIZE   USED  PRIO
/swapfile file  1024M    0B    -2
```

### Step 4: Make the changes permanent

Whatever you have done so far is temporary. Reboot your system and all the changes will disappear.

You can make the changes permanent by adding the newly created swap file to the /etc/fstab file.

It’s always a good idea to make a backup before you make any changes to the /etc/fstab file.

`sudo cp /etc/fstab /etc/fstab.back`

Now you can add the following line to the end of the /etc/fstab file:

`/swapfile none swap sw 0 0`

You can do it manually using a [command line text editor](https://itsfoss.com/command-line-text-editors-linux/) or just use the following command:

`echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab`

Now you have everything in place. Your swap file will be used even after you reboot your Linux system.

## Adjust swappiness

The swappiness parameter determines how often the swap space should be used. The swappiness value ranges from 0 to 100. A higher value means the swap space will be used more frequently.

The default swappiness in Ubuntu desktop is 60 while in server it is 1. You can check the swappiness with the following command:

`cat /proc/sys/vm/swappiness`

Why should servers use a low swappiness? Because swap is slower than RAM and for better performance, the RAM should be utilized as much as possible. On servers, the performance factor is crucial and hence the swappiness is kept as low as possible.

You can change the swappiness on the fly using the following sysctl command:

`sudo sysctl vm.swappiness=25`

This change is only temporary though. If you want to make it permanent, you can edit the /etc/sysctl.conf file and add the swappiness value at the end of the file:

`vm.swappiness=25`

## Resizing swap space on Linux

There are a couple of ways you can resize the swap space on Linux. But before you see that, you should learn a few things around it.

When you ask your system to stop using a swap file for swap area, it transfers all the data (pages to be precise) back to RAM. So you should have enough free RAM before you swap off.

This is why a good practice is to create and enable another temporary swap file. This way, when you swap off the original swap area, your system will use the temporary swap file. Now you can resize the original swap space. You can manually remove the temporary swap file or leave it as it is: it simply won’t be used after the next boot (swapon settings don’t persist across reboots unless the file is added to /etc/fstab), though the file itself will keep taking up disk space until you delete it.

If you have enough free RAM or if you created a temporary swap space, swapoff your original file.

`sudo swapoff /swapfile`

Now you can use the fallocate command to change the size of the file.
Let’s say, you change it to 2 GB in size: `sudo fallocate -l 2G /swapfile` Now mark the file as swap space again: `sudo mkswap /swapfile` And turn the swap on again: `sudo swapon /swapfile` You may also choose to have multiple swap files at the same time. ## Removing swap file in Linux You may have your reasons for not using swap file on Linux. If you want to remove it, the process is similar to what you just saw in resizing the swap. First, make sure that you have enough free RAM. Now swap off the file: `sudo swapoff /swapfile` The next step is to remove the respective entry from the /etc/fstab file. And in the end, you can remove the file to free up the space: `sudo rm /swapfile` **Do you swap?** I think you now have a good understanding of swap file concept in Linux. You can now easily create swap file or resize them as per your need. If you have anything to add on this topic or if you have any doubts, please leave a comment below.
11,342
用 Git 作为聊天应用的后端
https://opensource.com/article/19/4/git-based-chat
2019-09-15T10:09:00
[ "Git", "聊天" ]
https://linux.cn/article-11342-1.html
> GIC 是一个聊天应用程序的原型,展示了一种使用 Git 的新方法。

![](/data/attachment/album/201909/15/100905euzi3l5xgslsgx7i.png)

[Git](https://git-scm.com/) 是一个少有的能将如此多的现代计算封装到一个程序之中的应用程序,它可以用作许多其他应用程序的计算引擎。虽然它以跟踪软件开发中的源代码更改而闻名,但它还有许多其他用途,可以让你的生活更轻松、更有条理。在这个 Git 系列中,我们将分享七种鲜为人知的使用 Git 的方法。

今天我们来看看 GIC,它是一个基于 Git 的聊天应用。

### 初识 GIC

虽然 Git 的作者们可能期望会为 Git 创建前端,但毫无疑问他们从未预料到 Git 会成为某种后端,如聊天客户端的后端。然而,这正是开发人员 Ephi Gabay 用他的实验性的概念验证应用 [GIC](https://github.com/ephigabay/GIC) 所做的事情:用 [Node.js](https://nodejs.org/en/) 编写的聊天客户端,使用 Git 作为其后端数据库。

GIC 并没有打算用于生产用途。这纯粹是一种编程练习,但它证明了开源技术的灵活性。令人惊讶的是,除了 Node 库和 Git 本身,该客户端只包含 300 行代码。这是这个聊天客户端和开源所反映出来的最好的地方之一:建立在现有工作基础上的能力。眼见为实,你应该自己亲自来了解一下 GIC。

### 架设起来

GIC 使用 Git 作为引擎,因此你需要一个空的 Git 存储库为聊天室和记录器提供服务。存储库可以托管在任何地方,只要你和需要访问聊天服务的人可以访问该存储库就行。例如,你可以在 GitLab 等免费 Git 托管服务上设置 Git 存储库,并授予聊天用户对该 Git 存储库的贡献者访问权限。(他们必须能够提交到存储库,因为每个聊天消息都是一个文本的提交。)

如果你自己托管,请创建一个中心化的裸存储库。聊天中的每个用户必须在裸存储库所在的服务器上拥有一个帐户。你可以使用如 [Gitolite](http://gitolite.com) 或 [Gitea](http://gitea.io) 这样的 Git 托管软件创建特定于 Git 的帐户,或者你可以在服务器上为他们提供个人用户帐户,可以使用 `git-shell` 来限制他们只能访问 Git。

自托管实例的性能最好。无论你是自己托管还是使用托管服务,你创建的 Git 存储库都必须具有一个活跃分支,否则 GIC 将无法在用户聊天时进行提交,因为没有 Git HEAD。确保分支初始化和活跃的最简单方法是在创建存储库时提交 `README` 或许可证文件。如果你没有这样做,你可以在事后创建并提交一个:

```
$ echo "chat logs" > README
$ git add README
$ git commit -m 'just creating a HEAD ref'
$ git push -u origin HEAD
```

### 安装 GIC

由于 GIC 基于 Git 并使用 Node.js 编写,因此必须首先安装 Git、Node.js 和 Node 包管理器 npm(它应该与 Node 捆绑在一起)。安装它们的命令因 Linux 或 BSD 发行版而异,这是 Fedora 上的一个示例命令:

```
$ sudo dnf install git nodejs
```

如果你没有运行 Linux 或 BSD,请按照 [git-scm.com](http://git-scm.com) 和 [nodejs.org](http://nodejs.org) 上的安装说明进行操作。

GIC 本身并没有安装过程。每个用户(在此示例中为 Alice 和 Bob)必须将存储库克隆到其硬盘驱动器:

```
$ git clone https://github.com/ephigabay/GIC GIC
```

将目录更改为 GIC 目录并使用 `npm` 安装 Node.js 依赖项:

```
$ cd GIC
$ npm install
```

等待 Node 模块下载并安装。

### 配置 GIC

GIC 唯一需要的配置是 Git 聊天存储库的位置。编辑 `config.js` 文件:

```
module.exports = {
  gitRepo: '[email protected]:/home/gitchat/chatdemo.git',
  messageCheckInterval: 500,
  branchesCheckInterval: 5000
};
```

在尝试 GIC 之前测试你与 Git 存储库的连接,以确保你的配置是正确的:

```
$ git clone --quiet [email protected]:/home/gitchat/chatdemo.git > /dev/null
```

假设你没有收到任何错误,就可以开始聊天了。

### 用 Git 聊天

在 GIC 目录中启动聊天客户端:

```
$ npm start
```

客户端首次启动时,必须克隆聊天存储库。由于它几乎是一个空的存储库,因此不会花费很长时间。输入你的消息,然后按回车键发送消息。

![GIC](/data/attachment/album/201909/15/100928gb4iykuyezrkayie.jpg "GIC")

*基于 Git 的聊天客户端。他们接下来会怎么想?*

正如问候消息所说,Git 中的分支在 GIC 中就是聊天室或频道。无法在 GIC 的 UI 中创建新分支,但如果你在另一个终端会话或 Web UI 中创建一个分支,它将立即显示在 GIC 中。将一些 IRC 式的命令加到 GIC 中并不需要太多工作。

聊了一会儿之后,可以看看你的 Git 存储库。由于聊天发生在 Git 中,因此存储库本身也是聊天日志:

```
$ git log --pretty=format:"%p %cn %s"
4387984 Seth Kenlon Hey Chani, did you submit a talk for All Things Open this year?
36369bb Chani No I didn't get a chance. Did you?
[...]
```

### 退出 GIC

自 Vim 以来,还没有一个应用程序像 GIC 那么难以退出。你看,没有办法停止 GIC。它会一直运行,直到它被杀死。当你准备停止 GIC 时,打开另一个终端选项卡或窗口并发出以下命令:

```
$ kill `pgrep npm`
```

GIC 是一个新奇的事物。这是一个很好的例子,说明开源生态系统如何鼓励和促进创造力和探索,并挑战我们从不同角度审视应用程序。尝试下 GIC,也许它会给你一些思路。至少,它可以让你与 Git 度过一个下午。

---

via: <https://opensource.com/article/19/4/git-based-chat>

作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
[Git](https://git-scm.com/) is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at GIC, a Git-based chat application ## Meet GIC While the authors of Git probably expected frontends to be created for Git, they undoubtedly never expected Git would become the backend for, say, a chat client. Yet, that's exactly what developer Ephi Gabay did with his experimental proof-of-concept [GIC](https://github.com/ephigabay/GIC): a chat client written in [Node.js](https://nodejs.org/en/) using Git as its backend database. GIC is by no means intended for production use. It's purely a programming exercise, but it's one that demonstrates the flexibility of open source technology. What's astonishing is that the client consists of just 300 lines of code, excluding the Node libraries and Git itself. And that's one of the best things about the chat client and about open source; the ability to build upon existing work. Seeing is believing, so you should give GIC a look for yourself. ## Get set up GIC uses Git as its engine, so you need an empty Git repository to serve as its chatroom and logger. The repository can be hosted anywhere, as long as you and anyone who needs access to the chat service has access to it. For instance, you can set up a Git repository on a free Git hosting service like GitLab and grant chat users contributor access to the Git repository. (They must be able to make commits to the repository, because each chat message is a literal commit.) If you're hosting it yourself, create a centrally located bare repository. Each user in the chat must have an account on the server where the bare repository is located. You can create accounts specific to Git with Git hosting software like [Gitolite](http://gitolite.com) or [Gitea](http://gitea.io), or you can give them individual user accounts on your server, possibly using **git-shell** to restrict their access to Git. Performance is best on a self-hosted instance. Whether you host your own or you use a hosting service, the Git repository you create must have an active branch, or GIC won't be able to make commits as users chat because there is no Git HEAD. The easiest way to ensure that a branch is initialized and active is to commit a README or license file upon creation. If you don't do that, you can create and commit one after the fact: ``` $ echo "chat logs" > README $ git add README $ git commit -m 'just creating a HEAD ref' $ git push -u origin HEAD ``` ## Install GIC Since GIC is based on Git and written in Node.js, you must first install Git, Node.js, and the Node package manager, npm (which should be bundled with Node). The command to install these differs depending on your Linux or BSD distribution, but here's an example command on Fedora: `$ sudo dnf install git nodejs` If you're not running Linux or BSD, follow the installation instructions on [git-scm.com](http://git-scm.com) and [nodejs.org](http://nodejs.org). There's no install process, as such, for GIC. 
Each user (Alice and Bob, in this example) must clone the repository to their hard drive:

`$ git clone https://github.com/ephigabay/GIC GIC`

Change directory into the GIC directory and install the Node.js dependencies with **npm**:

```
$ cd GIC
$ npm install
```

Wait for the Node modules to download and install.

## Configure GIC

The only configuration GIC requires is the location of your Git chat repository. Edit the **config.js** file:

```
module.exports = {
  gitRepo: '[email protected]:/home/gitchat/chatdemo.git',
  messageCheckInterval: 500,
  branchesCheckInterval: 5000
};
```

Test your connection to the Git repository before trying GIC, just to make sure your configuration is sane:

`$ git clone --quiet [email protected]:/home/gitchat/chatdemo.git > /dev/null`

Assuming you receive no errors, you're ready to start chatting.

## Chat with Git

From within the GIC directory, start the chat client:

`$ npm start`

When the client first launches, it must clone the chat repository. Since it's nearly an empty repository, it won't take long. Type your message and press Enter to send a message.

![GIC](https://opensource.com/sites/default/files/uploads/gic.jpg)

A Git-based chat client. What will they think of next?

As the greeting message says, a branch in Git serves as a chatroom or channel in GIC. There's no way to create a new branch from within the GIC UI, but if you create one in another terminal session or in a web UI, it shows up immediately in GIC. It wouldn't take much to patch some IRC-style commands into GIC.

After chatting for a while, take a look at your Git repository. Since the chat happens in Git, the repository itself is also a chat log:

```
$ git log --pretty=format:"%p %cn %s"
4387984 Seth Kenlon Hey Chani, did you submit a talk for All Things Open this year?
36369bb Chani No I didn't get a chance. Did you?
[...]
```

## Exit GIC

Not since Vim has there been an application as difficult to stop as GIC. You see, there is no way to stop GIC. It will continue to run until it is killed. When you're ready to stop GIC, open another terminal tab or window and issue this command:

```
$ kill `pgrep npm`
```

GIC is a novelty. It's a great example of how an open source ecosystem encourages and enables creativity and exploration and challenges us to look at applications from different angles. Try GIC out. Maybe it will give you ideas. At the very least, it's a great excuse to spend an afternoon with Git.
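One practical aside: since branches double as chatrooms and GIC can't create them from its own UI, a new channel could be opened from a second terminal. A minimal sketch, assuming you run it inside a clone of the chat repository (the channel name here is made up):

```bash
# Create a new "chatroom" by pushing a new branch to the chat repository.
git checkout -b watercooler       # hypothetical channel name
git push -u origin watercooler
```

Because the client polls for branches (the `branchesCheckInterval` setting in config.js), the new room should show up in GIC within a few seconds.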
11,344
在 Linux 中使用变量
https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html
2019-09-15T10:51:00
[ "变量" ]
https://linux.cn/article-11344-1.html
> 变量通常看起来像 $var 这样,但它们也有 $1、$\*、$? 和 $$ 这种形式。让我们来看看所有这些 $ 值可以告诉你什么。

![](/data/attachment/album/201909/15/105140faf2jzyybubu1d0c.jpg)

有许多重要的值都存储在 Linux 系统中,我们称为“变量”,但实际上变量有几种类型,并且一些有趣的命令可以帮助你使用它们。在上一篇文章中,我们研究了[环境变量](/article-10916-1.html)以及它们定义在何处。在本文中,我们来看一看在命令行和脚本中使用的变量。

### 用户变量

虽然在命令行中设置变量非常容易,但是有一些有趣的技巧。要设置变量,你只需这样做:

```
$ myvar=11
$ myvar2="eleven"
```

要显示这些值,只需这样做:

```
$ echo $myvar
11
$ echo $myvar2
eleven
```

你也可以使用这些变量。例如,要递增一个数字变量,使用以下任意一个命令:

```
$ myvar=$((myvar+1))
$ echo $myvar
12
$ ((myvar=myvar+1))
$ echo $myvar
13
$ ((myvar+=1))
$ echo $myvar
14
$ ((myvar++))
$ echo $myvar
15
$ let "myvar=myvar+1"
$ echo $myvar
16
$ let "myvar+=1"
$ echo $myvar
17
$ let "myvar++"
$ echo $myvar
18
```

使用其中的一些,你可以增加一个变量的值。例如:

```
$ myvar0=0
$ ((myvar0++))
$ echo $myvar0
1
$ ((myvar0+=10))
$ echo $myvar0
11
```

通过这些选项,你可能会发现它们是容易记忆、使用方便的。

你也可以*删除*一个变量 – 这意味着没有定义它。

```
$ unset myvar
$ echo $myvar
```

另一个有趣的选项是,你可以设置一个变量并将其设为**只读**。换句话说,变量一旦设置为只读,它的值就不能改变(除非一些非常复杂的命令行魔法才可以)。这意味着你也不能删除它。

```
$ readonly myvar3=1
$ echo $myvar3
1
$ ((myvar3++))
-bash: myvar3: readonly variable
$ unset myvar3
-bash: unset: myvar3: cannot unset: readonly variable
```

你可以使用这些设置和递增选项来赋值和操作脚本中的变量,但也有一些非常有用的*内部变量*可以用在脚本中。注意,你无法重新赋值或增加它们的值。

### 内部变量

在脚本中可以使用很多变量来计算参数并显示有关脚本本身的信息。

* `$1`、`$2`、`$3` 等表示脚本的第一个、第二个、第三个等参数。
* `$#` 表示参数的数量。
* `$*` 表示所有参数。
* `$0` 表示脚本的名称。
* `$?` 表示先前运行的命令的返回码(0 代表成功)。
* `$$` 显示脚本的进程 ID。
* `$PPID` 显示 shell 的进程 ID(脚本的父进程)。

其中一些变量也适用于命令行,但显示相关信息:

* `$0` 显示你正在使用的 shell 的名称(例如,-bash)。
* `$$` 显示 shell 的进程 ID。
* `$PPID` 显示 shell 的父进程的进程 ID(对我来说,是 sshd)。

为了查看它们的结果,如果我们将所有这些变量都放入一个脚本中,比如:

```
#!/bin/bash

echo $0
echo $1
echo $2
echo $#
echo $*
echo $?
echo $$
echo $PPID
```

当我们调用这个脚本时,我们会看到如下内容:

```
$ tryme one two three
/home/shs/bin/tryme     <== 脚本名称
one                     <== 第一个参数
two                     <== 第二个参数
3                       <== 参数的个数
one two three           <== 所有的参数
0                       <== 上一条 echo 命令的返回码
10410                   <== 脚本的进程 ID
10109                   <== 父进程 ID
```

如果我们在脚本运行完毕后检查 shell 的进程 ID,我们可以看到它与脚本中显示的 PPID 相匹配:

```
$ echo $$
10109                   <== shell 的进程 ID
```

当然,比起简单地显示它们的值,更有用的方式是使用它们。我们来看一看它们可能的用处。

检查是否已提供参数:

```
if [ $# == 0 ]; then
    echo "$0 filename"
    exit 1
fi
```

检查特定进程是否正在运行:

```
ps -ef | grep apache2 > /dev/null
if [ $? != 0 ]; then
    echo Apache is not running
    exit
fi
```

在尝试访问文件之前验证文件是否存在:

```
if [ $# -lt 2 ]; then
    echo "Usage: $0 lines filename"
    exit 1
fi

if [ ! -f $2 ]; then
    echo "Error: File $2 not found"
    exit 2
else
    head -$1 $2
fi
```

在下面的小脚本中,我们检查是否提供了正确数量的参数、第一个参数是否为数字,以及第二个参数代表的文件是否存在。

```
#!/bin/bash

if [ $# -lt 2 ]; then
    echo "Usage: $0 lines filename"
    exit 1
fi

if [[ $1 != [0-9]* ]]; then
    echo "Error: $1 is not numeric"
    exit 2
fi

if [ ! -f $2 ]; then
    echo "Error: File $2 not found"
    exit 3
else
    echo top of file
    head -$1 $2
fi
```

### 重命名变量

在编写复杂的脚本时,为脚本的参数指定名称通常很有用,而不是继续将它们称为 `$1`、`$2` 等。等到第 35 行,阅读你脚本的人可能已经忘了 `$2` 表示什么。如果你将一个重要参数的值赋给 `$filename` 或 `$numlines`,那么他就不容易忘记。

```
#!/bin/bash

if [ $# -lt 2 ]; then
    echo "Usage: $0 lines filename"
    exit 1
else
    numlines=$1
    filename=$2
fi

if [[ $numlines != [0-9]* ]]; then
    echo "Error: $numlines is not numeric"
    exit 2
fi

if [ ! -f $filename ]; then
    echo "Error: File $filename not found"
    exit 3
else
    echo top of file
    head -$numlines $filename
fi
```

当然,这个示例脚本只是运行 `head` 命令来显示文件中的前 x 行,但它的目的是显示如何在脚本中使用内部参数来帮助确保脚本运行良好,或在失败时清晰地知道失败原因。

---

via: <https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html#tk.rss_all>

作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[MjSeven](https://github.com/MjSeven) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
11,345
GNOME 3.34 发布
https://itsfoss.com/gnome-3-34-release/
2019-09-15T11:33:18
[ "GNOME" ]
https://linux.cn/article-11345-1.html
![](/data/attachment/album/201909/15/113154i3bcp9p3md3mc3bk.jpg) 最新版本的 GNOME 代号为“<ruby> 塞萨洛尼基 <rt> Thessaloniki </rt></ruby>”。考虑到这个版本经过了 6 个月的开发,这应该是对 [GNOME 3.32](https://www.gnome.org/news/2019/03/gnome-3-32-released/) 的一次令人印象深刻的升级。 在此版本中,有许多新功能和显著的性能改进。除了新功能外,可定制的程度也得到了提升。 以下是新的变化: ### GNOME 3.34 的关键改进 你可以观看此视频,了解 GNOME 3.34 中的新功能: #### 拖放图标到文件夹 新的 shell 主题允许你拖放应用程序抽屉中的图标以重新排列它们,或将它们组合到一个文件夹中。你可能已经在 Android 或 iOS 智能手机中使用过此类功能。 ![You can now drag and drop icons into a folder](/data/attachment/album/201909/15/113322d7ynnbb4p5bdiipn.png) #### 改进的日历管理器 改进的日历管理器可以轻松地与第三方服务集成,使你能够直接从 Linux 系统管理日程安排,而无需单独使用其他应用程序。 ![GNOME Calendar Improvements](/data/attachment/album/201909/15/113323qb7oo8i7vo1sfbvo.jpg) #### 背景选择的设置 现在,更容易为主屏幕和锁定屏幕选择自定义背景,因为它在同一屏幕中显示所有可用背景。为你节省至少一次鼠标点击。 ![It’s easier to select backgrounds now](/data/attachment/album/201909/15/113327izp709j7ipcvit1s.png) #### 重新排列搜索选项 搜索选项和结果可以手动重新排列。因此,当你要搜索某些内容时,可以决定哪些内容先出现。 #### 响应式设计的“设置”应用 设置菜单 UI 现在具有响应性,因此无论你使用何种类型(或尺寸)的设备,都可以轻松访问所有选项。这肯定对 [Linux 智能手机(如 Librem 5)](https://itsfoss.com/librem-linux-phone/) 上的 GNOME 有所帮助。 除了所有这些之外,[官方公告](https://www.gnome.org/press/2019/09/gnome-3-34-released/)还提到到开发人员的有用补充(增加了系统分析器和虚拟化改进): > > 对于开发人员,GNOME 3.34 在 Sysprof 中包含更多数据源,使应用程序的性能分析更加容易。对 Builder 的多项改进中包括集成的 D-Bus 检查器。 > > > ![Improved Sysprof tool in GNOME 3.34](/data/attachment/album/201909/15/113329uc5qpqkkcjcklaiq.jpg) ### 如何获得GNOME 3.34? 虽然新版本已经发布,但它还没有进入 Linux 发行版的官方存储库。所以,我们建议等待它,并在它作为更新包提供时进行升级。不管怎么说,如果你想构建它,你都可以在这里找到[源代码](https://download.gnome.org/)。 嗯,就是这样。如果你感兴趣,可以查看[完整版本说明](https://help.gnome.org/misc/release-notes/3.34/)以了解技术细节。 你如何看待新的 GNOME 3.34? --- via: <https://itsfoss.com/gnome-3-34-release/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
The latest version of GNOME dubbed “Thessaloniki” is here. It is an impressive upgrade over [GNOME 3.32](https://www.gnome.org/news/2019/03/gnome-3-32-released/) considering 6 months of work there. With this release, there’s a lot of new features and significant performance improvements. In addition to the new features, the level of customization has also improved. Here’s what’s new: ## GNOME 3.34 Key Improvements You may watch this video to have a look at what’s new in GNOME 3.34: ### Drag and drop app icons into a folder The new shell theme lets you drag and drop the icons in the app drawer to re-arrange them or compile them into a folder. You may have already used a feature like this in your Android or iOS smartphone. ![Icon Grid Drag Gnome](https://itsfoss.com/content/images/wordpress/2019/09/icon-grid-drag-gnome.png) ### Improved Calendar Manager The improved calendar manager integrates easily with 3rd party services and gives you the ability to manage your schedule right from your Linux system – without utilizing another app separately. ![Gnome Calendar Improvements](https://itsfoss.com/content/images/wordpress/2019/09/gnome-calendar-improvements.jpg) ### Background selection settings It’s now easier to select a custom background for the main screen and lock screen as it displays all the available backgrounds in the same screen. Saves you at least one mouse click. ![Background Panel Gnome](https://itsfoss.com/content/images/wordpress/2019/09/background-panel-GNOME-800x555.png) ### Re-arranging search options The search options/results can be re-arranged manually. So, you can decide what comes first when you head to search something. ### Responsive design for ‘Settings’ app The settings menu UI is now responsive – so that you can easily access all the options no matter what type (or size) of device you’re on. This is surely going to help GNOME on [Linux smartphones like Librem 5](https://itsfoss.com/librem-linux-phone/). In addition to all these, the [official announcement](https://www.gnome.org/press/2019/09/gnome-3-34-released/) also notes useful additions for developers (additions to system profiler and virtualization improvements): For developers, GNOME 3.34 includes more data sources in Sysprof, making performance profiling an application even easier. Multiple improvements to Builder include an integrated D-Bus inspector. ![Sysprof Gnome](https://itsfoss.com/content/images/wordpress/2019/09/sysprof-gnome-800x493.jpg) ## How to get GNOME 3.34? Even though the new release is live – it hasn’t yet reached the official repositories of your Linux distros. So, we recommend to wait it out and upgrade it when it’s available as update packages. In either case, you can explore the [source code](https://download.gnome.org/) – if you want to build it. Well, that’s about it. If you’re curious, you may check out the [full release notes](https://help.gnome.org/misc/release-notes/3.34/) for technical details. What do you think about the new GNOME 3.34?
11,346
Firefox 69 默认阻拦第三方 Cookie、自动播放的视频和加密矿工
https://itsfoss.com/firefox-69/
2019-09-15T21:27:46
[ "Firefox" ]
https://linux.cn/article-11346-1.html
![](/data/attachment/album/201909/15/212659s3p37i4i4qb366tf.jpg) 如果你使用的是 [Mozilla Firefox](https://itsfoss.com/why-firefox/) 并且尚未更新到最新版本,那么你将错过许多新的重要功能。 ### Firefox 69 版本中的一些新功能 首先,Mozilla Firefox 69 会默认强制执行更强大的安全和隐私选项。以下是新版本的一些主要亮点。 #### Firefox 69 阻拦视频自动播放 ![](/data/attachment/album/201909/15/212750mneaie1mi3gzixcj.png) 现在很多网站都提供了自动播放视频。无论是弹出视频还是嵌入在文章中设置为自动播放的视频,默认情况下,Firefox 69 都会阻止它(或者可能会提示你)。 这个[阻拦自动播放](https://support.mozilla.org/en-US/kb/block-autoplay)功能可让用户自动阻止任何视频播放。 #### 禁止第三方跟踪 cookie 默认情况下,作为<ruby> 增强型跟踪保护 <rt> Enhanced Tracking Protection </rt></ruby>功能的一部分,它现在将阻止第三方跟踪 Cookie 和加密矿工。这是 Mozilla Firefox 的增强隐私保护功能的非常有用的改变。 Cookie 有两种:第一方的和第三方的。第一方 cookie 由网站本身拥有。这些是“好的 cookie”,可以让你保持登录、记住你的密码或输入字段等来改善浏览体验。第三方 cookie 由你访问的网站以外的域所有。广告服务器使用这些 Cookie 来跟踪你,并在你访问的所有网站上跟踪广告。Firefox 69 旨在阻止这些。 当它发挥作用时,你将在地址栏中看到盾牌图标。你可以选择为特定网站禁用它。 ![Firefox Blocking Tracking](/data/attachment/album/201909/15/212751eunea7jeurton8o1.png) #### 禁止加密矿工消耗你的 CPU ![](/data/attachment/album/201909/15/212755osg2l1dumz22us1s.png) 对加密货币的欲望一直困扰着这个世界。GPU 的价格已经高企,因为专业的加密矿工们使用它们来挖掘加密货币。 人们使用工作场所的计算机秘密挖掘加密货币。当我说工作场所时,我不一定是指 IT 公司。就在今年,[人们在乌克兰的一家核电站抓住了偷挖加密货币的活动](https://thenextweb.com/hardfork/2019/08/22/ukrainian-nuclear-powerplant-mine-cryptocurrency-state-secrets/)。 不仅如此。如果你访问某些网站,他们会运行脚本并使用你的计算机的 CPU 来挖掘加密货币。这在 IT 术语中被称为 <ruby> <a href="https://hackernoon.com/cryptojacking-in-2019-is-not-dead-its-evolving-984b97346d16"> 挖矿攻击 </a> <rt> cryptojacking </rt></ruby>。 好消息是 Firefox 69 会自动阻止这些加密矿工脚本。因此,网站不再能利用你的系统资源进行挖矿攻击了。 #### Firefox 69 带来的更强隐私保护 ![](/data/attachment/album/201909/15/212800yoa5hfmem1zm8hff.jpg) 如果你把隐私保护设置得更严格,那么它也会阻止指纹。因此,当你在 Firefox 69 中选择严格的隐私设置时,你不必担心通过[指纹](https://clearcode.cc/blog/device-fingerprinting/)共享计算机的配置信息。 在[关于这次发布的官方博客文章](https://blog.mozilla.org/blog/2019/09/03/todays-firefox-blocks-third-party-tracking-cookies-and-cryptomining-by-default/)中,Mozilla 提到,在此版本中,他们希望默认情况下为 100% 的用户提供保护。 #### 性能改进 尽管在更新日志中没有提及 Linux,但它提到了在 Windows 10/mac OS 上运行性能、UI 和电池寿命有所改进。如果你发现任何性能改进,请在评论中提及。 ### 总结 除了所有这些之外,还有很多底层的改进。你可以查看[发行说明](https://www.mozilla.org/en-US/firefox/69.0/releasenotes/)中的详细信息。 Firefox 69 对于关注其隐私的用户来说是一个令人印象深刻的更新。与我们最近对某些[安全电子邮件服务](https://itsfoss.com/secure-private-email-services/)的建议类似,我们建议你更新浏览器以充分受益。新版本已在大多数 Linux 发行版中提供,你只需要更新你的系统即可。 如果你对阻止广告和跟踪 Cookie 的浏览器感兴趣,请尝试[开源的 Brave 浏览器](https://itsfoss.com/brave-web-browser/),他们甚至给你提供了加密货币以让你使用他们的浏览器,你可以使用这些加密货币来奖励你最喜爱的发布商。 你觉得这个版本怎么样?请在下面的评论中告诉我们你的想法。 --- via: <https://itsfoss.com/firefox-69/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
11,348
区块链能如何补充开源
https://opensource.com/article/18/9/barter-currency-system
2019-09-16T11:15:42
[ "区块链", "开源" ]
https://linux.cn/article-11348-1.html
> > 了解区块链如何成为去中心化的开源补贴模型。 > > > ![](/data/attachment/album/201909/16/111521od1yn9r1nr1eii9o.jpg) 《<ruby> <a href="http://catb.org/"> 大教堂与集市 </a> <rt> The Cathedral and The Bazaar </rt></ruby>》是 20 年前由<ruby> 埃里克·史蒂文·雷蒙德 <rt> Eric Steven Raymond </rt> <rt> </rt></ruby>(ESR)撰写的经典开源故事。在这个故事中,ESR 描述了一种新的革命性的软件开发模型,其中复杂的软件项目是在没有(或者很少的)集中管理的情况下构建的。这个新模型就是<ruby> 开源 <rt> open source </rt></ruby>。 ESR 的故事比较了两种模式: * 经典模型(由“大教堂”所代表),其中软件由一小群人在封闭和受控的环境中通过缓慢而稳定的发布制作而成。 * 以及新模式(由“集市”所代表),其中软件是在开放的环境中制作的,个人可以自由参与,但仍然可以产生一个稳定和连贯的系统。 开源如此成功的一些原因可以追溯到 ESR 所描述的创始原则。尽早发布、经常发布,并接受许多头脑必然比一个更好的事实,让开源项目进入全世界的人才库(很少有公司能够使用闭源模式与之匹敌)。 在 ESR 对黑客社区的反思分析 20 年后,我们看到开源成为占据主导地位的的模式。它不再仅仅是为了满足开发人员的个人喜好,而是创新发生的地方。甚至是全球[最大](http://oss.cash/)软件公司也正在转向这种模式,以便继续占据主导地位。 ### 易货系统 如果我们仔细研究开源模型在实践中的运作方式,我们就会意识到它是一个封闭系统,只对开源开发者和技术人员开放。影响项目方向的唯一方法是加入开源社区,了解成文和不成文的规则,学习如何贡献、编码标准等,并自己亲力完成。 这就是集市的运作方式,也是这个易货系统类比的来源。易货系统是一种交换服务和货物以换取其他服务和货物的方法。在市场中(即软件的构建地)这意味着为了获取某些东西,你必须自己也是一个生产者并回馈一些东西——那就是通过交换你的时间和知识来完成任务。集市是开源开发者与其他开源开发者交互并以开源方式生成开源软件的地方。 易货系统向前迈出了一大步,从自给自足的状态演变而来,而在自给自足的状态下,每个人都必须成为所有行业的杰出人选。使用易货系统的集市(开源模式)允许具有共同兴趣和不同技能的人们收集、协作和创造个人无法自行创造的东西。易货系统简单,没有现代货币系统那么复杂,但也有一些局限性,例如: * 缺乏可分性:在没有共同的交换媒介的情况下,不能将较大的不可分割的商品/价值兑换成较小的商品/价值。例如,如果你想在开源项目中进行一些哪怕是小的更改,有时你可能仍需要经历一个高进入门槛。 * 存储价值:如果一个项目对贵公司很重要,你可能需要投入大量投资/承诺。但由于它是开源开发者之间的易货系统,因此拥有强大发言权的唯一方法是雇佣许多开源贡献者,但这并非总是可行的。 * 转移价值:如果你投资了一个项目(受过培训的员工、雇用开源开发者)并希望将重点转移到另一个项目,却不可能快速转移(你在上一个项目中拥有的)专业知识、声誉和影响力。 * 时间脱钩:易货系统没有为延期或提前承诺提供良好的机制。在开源世界中,这意味着用户无法提前或在未来期间以可衡量的方式表达对项目的承诺或兴趣。 下面,我们将探讨如何使用集市的后门解决这些限制。 ### 货币系统 人们因为不同的原因勾连于集市上:有些人在那里学习,有些是出于满足开发者个人的喜好,有些人为大型软件工厂工作。因为在集市中拥有发言权的唯一方法是成为开源社区的一份子并加入这个易货系统,为了在开源世界获得信誉,许多大型软件公司雇用这些开发者并以货币方式支付薪酬。这代表可以使用货币系统来影响集市,开源不再只是为了满足开发者个人的喜好,它也占据全球整体软件生产的重要部分,并且有许多人想要施加影响。 开源设定了开发人员交互的指导原则,并以分布式方式构建一致的系统。它决定了项目的治理方式、软件的构建方式以及其成果如何分发给用户。它是分散的实体共同构建高质量软件的开放共识模型。但是开源模型并没有包括如何补贴开源的部分,无论是直接还是间接地,通过内在或外在动机的赞助,都与集市无关。 ![](/data/attachment/album/201909/16/111546fak9ksakscuxck3e.png) 目前,没有相当于以补贴为目的的去中心化式开源开发模型。大多数开源补贴都是集中式的,通常一家公司通过雇用该项目的主要开源开发者来主导该项目。说实话,这是目前最好的状况,因为它保证了开发人员将长期获得报酬,项目也将继续蓬勃发展。 项目垄断情景也有例外情况:例如,一些云原生计算基金会(CNCF)项目是由大量的竞争公司开发的。此外,Apache 软件基金会(ASF)旨在通过鼓励不同的贡献者来使他们管理的项目不被单一供应商所主导,但实际上大多数受欢迎的项目仍然是单一供应商项目。 我们缺少的是一个开放的、去中心化的模式,就像一个没有集中协调和所有权的集市一样,消费者(开源用户)和生产者(开源开发者)在市场力量和开源价值的驱动下相互作用。为了补充开源,这样的模型也必须是开放和去中心化的,这就是为什么我认为区块链技术[最适合](https://opensource.com/article/18/8/open-source-tokenomics)的原因。 旨在补贴开源开发的大多数现有区块链(和非区块链)平台主要针对的是漏洞赏金、小型和零碎的任务。少数人还专注于资助新的开源项目。但并没有多少平台旨在提供维持开源项目持续开发的机制 —— 基本上,这个系统可以模仿开源服务提供商公司或开放核心、基于开源的 SaaS 产品公司的行为:确保开发人员可以获得持续和可预测的激励,并根据激励者(即用户)的优先事项指导项目开发。这种模型将解决上面列出的易货系统的局限性: * 允许可分性:如果你想要一些小的修复,你可以支付少量费用,而不是成为项目的开源开发者的全部费用。 * 存储价值:你可以在项目中投入大量资金,并确保其持续发展和你的发言权。 * 转移价值:在任何时候,你都可以停止投资项目并将资金转移到其他项目中。 * 时间脱钩:允许定期定期付款和订阅。 还有其他好处,纯粹是因为这种基于区块链的系统是透明和去中心化的:根据用户的承诺、开放的路线图承诺、去中心化决策等来量化项目的价值/实用性。 ### 总结 一方面,我们看到大公司雇用开源开发者并收购开源初创公司甚至基础平台(例如微软收购 GitHub)。许多(甚至大多数)能够长期成功运行的开源项目都集中在单个供应商周围。开源的重要性及其集中化是一个事实。 另一方面,[维持开源软件](https://www.youtube.com/watch?v=VS6IpvTWwkQ)的挑战正变得越来越明显,许多人正在更深入地研究这个领域及其基本问题。有一些项目具有很高的知名度和大量的贡献者,但还有许多其他也重要的项目缺乏足够的贡献者和维护者。 有[许多努力](https://opensource.com/article/18/8/open-source-tokenomics)试图通过区块链来解决开源的挑战。这些项目应提高透明度、去中心化和补贴,并在开源用户和开发人员之间建立直接联系。这个领域还很年轻,但是进展很快,随着时间的推移,集市将会有一个加密货币系统。 如果有足够的时间和足够的技术,去中心化就会发生在很多层面: * 互联网是一种去中心化的媒介,它释放了全球分享和获取知识的潜力。 * 开源是一种去中心化的协作模式,它释放了全球的创新潜力。 * 同样,区块链可以补充开源,成为去中心化的开源补贴模式。 请在[推特](http://twitter.com/bibryam)上关注我在这个领域的其他帖子。 --- via: <https://opensource.com/article/18/9/barter-currency-system> 作者:[Bilgin lbryam](https://opensource.com/users/bibryam) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 
校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
[The Cathedral and The Bazaar](http://catb.org/) is a classic open source story, written 20 years ago by Eric Steven Raymond. In the story, Eric describes a new revolutionary software development model where complex software projects are built without (or with a very little) central management. This new model is open source. Eric's story compares two models: - The classic model (represented by the cathedral), in which software is crafted by a small group of individuals in a closed and controlled environment through slow and stable releases. - And the new model (represented by the bazaar), in which software is crafted in an open environment where individuals can participate freely but still produce a stable and coherent system. Some of the reasons open source is so successful can be traced back to the founding principles Eric describes. Releasing early, releasing often, and accepting the fact that many heads are inevitably better than one allows open source projects to tap into the world’s pool of talent (and few companies can match that using the closed source model). Two decades after Eric's reflective analysis of the hacker community, we see open source becoming dominant. It is no longer a model only for scratching a developer’s personal itch, but instead, the place where innovation happens. Even the world's [largest](http://oss.cash/) software companies are transitioning to this model in order to continue dominating. ## A barter system If we look closely at how the open source model works in practice, we realize that it is a closed system, exclusive only to open source developers and techies. The only way to influence the direction of a project is by joining the open source community, understanding the written and the unwritten rules, learning how to contribute, the coding standards, etc., and doing it yourself. This is how the bazaar works, and it is where the barter system analogy comes from. A barter system is a method of exchanging services and goods in return for other services and goods. In the bazaar—where the software is built—that means in order to take something, you must also be a producer yourself and give something back in return. And that is by exchanging your time and knowledge for getting something done. A bazaar is a place where open source developers interact with other open source developers and produce open source software the open source way. The barter system is a great step forward and an evolution from the state of self-sufficiency where everybody must be a jack of all trades. The bazaar (open source model) using the barter system allows people with common interests and different skills to gather, collaborate, and create something that no individual can create on their own. The barter system is simple and lacks complex problems of the modern monetary systems, but it also has some limitations, such as: - Lack of divisibility: In the absence of a common medium of exchange, a large indivisible commodity/value cannot be exchanged for a smaller commodity/value. For example, if you want to do even a small change in an open source project, you may sometimes still need to go through a high entry barrier. - Storing value: If a project is important to your company, you may want to have a large investment/commitment in it. But since it is a barter system among open source developers, the only way to have a strong say is by employing many open source committers, and that is not always possible. 
- Transferring value: If you have invested in a project (trained employees, hired open source developers) and want to move focus to another project, it is not possible to transfer expertise, reputation, and influence quickly. - Temporal decoupling: The barter system does not provide a good mechanism for deferred or advance commitments. In the open source world, that means a user cannot express commitment or interest in a project in a measurable way in advance, or continuously for future periods. Below, we will explore how to address these limitations using the back door to the bazaar. ## A currency system People are hanging at the bazaar for different reasons: Some are there to learn, some are there to scratch a personal developer's itch, and some work for large software farms. Because the only way to have a say in the bazaar is to become part of the open source community and join the barter system, in order to gain credibility in the open source world, many large software companies employ these developers and pay them in monetary value. This represents the use of a currency system to influence the bazaar. Open source is no longer only for scratching the personal developer itch. It also accounts for a significant part of the overall software production worldwide, and there are many who want to have an influence. Open source sets the guiding principles through which developers interact and build a coherent system in a distributed way. It dictates how a project is governed, how software is built, and how the output distributed to users. It is an open consensus model for decentralized entities for building quality software together. But the open source model does not cover how open source is subsidized. Whether it is sponsored, directly or indirectly, through intrinsic or extrinsic motivators is irrelevant to the bazaar. ![Tokenomics, cryptocurrency chart Tokenomics, cryptocurrency chart](https://opensource.com/sites/default/files/uploads/tokenomics_-_page_4.png) Currently, there is no equivalent of the decentralized open source development model for subsidization purposes. The majority of open source subsidization is centralized, where typically one company dominates a project by employing the majority of the open source developers of that project. And to be honest, this is currently the best-case scenario, as it guarantees that the developers will be paid for a long period and the project will continue to flourish. There are also exceptions for the project monopoly scenario: For example, some Cloud Native Computing Foundation projects are developed by a large number of competing companies. Also, the Apache Software Foundation aims for their projects not to be dominated by a single vendor by encouraging diverse contributors, but most of the popular projects, in reality, are still single-vendor projects. What we are missing is an open and decentralized model that works like the bazaar without a central coordination and ownership, where consumers (open source users) and producers (open source developers) interact with each other, driven by market forces and open source value. In order to complement open source, such a model must also be open and decentralized, and this is why I think the blockchain technology would [fit best here](https://opensource.com/article/18/8/open-source-tokenomics). Most of the existing blockchain (and non-blockchain) platforms that aim to subsidize open source development are targeting primarily bug bounties, small and piecemeal tasks. 
A few also focus on funding new open source projects. But not many aim to provide mechanisms for sustaining continued development of open source projects—basically, a system that would emulate the behavior of an open source service provider company, or open core, open source-based SaaS product company: ensuring developers get continued and predictable incentives and guiding the project development based on the priorities of the incentivizers; i.e., the users. Such a model would address the limitations of the barter system listed above:

- Allow divisibility: If you want something small fixed, you can pay a small amount rather than the full premium of becoming an open source developer for a project.
- Storing value: You can invest a large amount into a project and ensure both its continued development and that your voice is heard.
- Transferring value: At any point, you can stop investing in the project and move funds into other projects.
- Temporal decoupling: Allow regular recurring payments and subscriptions.

There would be also other benefits, purely from the fact that such a blockchain-based system is transparent and decentralized: to quantify a project’s value/usefulness based on its users’ commitment, open roadmap commitment, decentralized decision making, etc.

## Conclusion

On the one hand, we see large companies hiring open source developers and acquiring open source startups and even foundational platforms (such as Microsoft buying GitHub). Many, if not most, long-running successful open source projects are centralized around a single vendor. The significance of open source and its centralization is a fact.

On the other hand, the challenges around [sustaining open source](https://www.youtube.com/watch?v=VS6IpvTWwkQ) software are becoming more apparent, and many are investigating this space and its foundational issues more deeply. There are a few projects with high visibility and a large number of contributors, but there are also many other still-important projects that lack enough contributors and maintainers.

There are [many efforts](https://opensource.com/article/18/8/open-source-tokenomics) trying to address the challenges of open source through blockchain. These projects should improve the transparency, decentralization, and subsidization and establish a direct link between open source users and developers. This space is still very young, but it is progressing quickly, and with time, the bazaar is going to have a cryptocurrency system.

Given enough time and adequate technology, decentralization is happening at many levels:

- The internet is a decentralized medium that has unlocked the world’s potential for sharing and acquiring knowledge.
- Open source is a decentralized collaboration model that has unlocked the world’s potential for innovation.
- Similarly, blockchain can complement open source and become the decentralized open source subsidization model.

Follow me on [Twitter](http://twitter.com/bibryam) for other posts in this space.
11,349
Manjaro Linux 从业余爱好项目成长为专业项目
https://itsfoss.com/manjaro-linux-business-formation/
2019-09-16T18:27:46
[ "Manjaro" ]
https://linux.cn/article-11349-1.html
> > Manjaro 正在走专业化路线。虽然 Manjaro 社区将负责项目的开发和其他相关活动,但该团队已成立了一家公司作为其法人实体处理商业协议和专业服务。 > > > Manjaro 是一个相当流行的 Linux 发行版,而它只是由三个人(Bernhard、Jonathan 和 Phili)于 2011 年激情之下创建的项目。现在,它是目前[最好的 Linux 发行版](https://itsfoss.com/best-linux-distributions/)之一,所以它不能真的一直还只是个业余爱好项目了,对吧。 嗯,现在有个好消息:Manjaro 已经建立了一家新公司“[Manjaro GmbH & Co. KG]”,以 [Blue Systems](https://www.blue-systems.com/) 为顾问,以便能够全职雇佣维护人员,并探索未来的商业机会。 ![](/data/attachment/album/201909/16/182749ic43iw3cxiiw2t11.jpg) ### 具体有什么变化? 根据[官方公告](https://forum.manjaro.org/t/manjaro-is-taking-the-next-step/102105),Manjaro 项目将保持不变。但是,成立了一家新公司来保护该项目,以允许他们制定法律合同、官方协议和进行其他潜在的商业活动。因此,这使得这个“业余爱好项目”成为了一项专业工作。 除此之外,捐赠资金将转给非营利性的[财政托管](https://en.wikipedia.org/wiki/Fiscal_sponsorship)([CommunityBridge](https://communitybridge.org/) 和 [OpenCollective](https://opencollective.com/)),让他们来代表项目接受和管理资金。请注意,这些捐赠没有被用于创建这个公司,因此,将资金转移给非营利的财务托管将在确保捐赠的同时也确保透明度。 ### 这会有何改善? 随着这个公司的成立,(如开发者所述)新结构将以下列方式帮助 Manjaro: * 使开发人员能够全职投入 Manjaro 及其相关项目; * 在 Linux 相关的比赛和活动中与其他开发人员进行互动; * 保护 Manjaro 作为一个社区驱动项目的独立性,并保护其品牌; * 提供更快的安全更新,更有效地响应用户需求; * 提供在专业层面上作为公司行事的手段。 Manjaro 团队还阐明了它将如何继续致力于社区: > > Manjaro 的使命和目标将与以前一样 —— 支持 Manjaro 的协作开发及其广泛使用。这项工作将继续通过捐赠和赞助来支持,这些捐赠和赞助在任何情况下都不会被这个成立的公司使用。 > > > ### 关于 Manjaro 公司的更多信息 尽管他们提到该项目将独立于公司,但并非所有人都清楚当有了一家具有商业利益的公司时 Manjaro 与“社区”的关系。因此,该团队还在公告中澄清了他们作为一家公司的计划。 > > Manjaro GmbH & Co.KG 的成立旨在有效地参与商业协议、建立合作伙伴关系并提供专业服务。有了这个,Manjaro 开发者 Bernhard 和 Philip 现在可以全职工作投入到 Manjaro,而 Blue Systems 将担任顾问。 > > > 公司将能够正式签署合同并承担职责和保障,而社区不能承担或承担责任。 > > > ### 总结 因此,通过这一举措以及商业机会,他们计划全职工作并聘请贡献者。 当然,现在他们的意思是“业务”(我希望不是作为坏人)。对此公告的大多数反应都是积极的,我们都祝他们好运。虽然有些人可能对具有“商业”利益的“社区”项目持怀疑态度(还记得 [FreeOffice 和 Manjaro 的挫败](https://itsfoss.com/libreoffice-freeoffice-manjaro-linux/)吗?),但我认为这是一个有趣的举措。 你怎么看?请在下面的评论中告诉我们你的想法。 --- via: <https://itsfoss.com/manjaro-linux-business-formation/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
*Brief: Manjaro is taking things professionally. While the Manjaro community will be responsible for the development of the project and other related activities, a company has been formed to work as its legal entity and handle the commercial agreements and professional services. * Manjaro is a quite popular Linux distribution considering that it was just a passion project by three people, Bernhard, Jonathan and Philip, which came into existence in 2011. Now that it’s one of the [best Linux distros](https://itsfoss.com/best-linux-distributions/) out there, this can’t really remain a hobby project, right? Well, here’s good news: Manjaro has established a new company “**Manjaro GmbH & Co. KG**” with [Blue Systems](https://www.blue-systems.com/) as an advisor to enable full-time employment of maintainers and exploration of future commercial opportunities. ![Manjaro Gmbh](https://itsfoss.com/content/images/wordpress/2019/09/manjaro-gmbh.jpg) ## What is exactly the change here? As per the [official announcement](https://forum.manjaro.org/t/manjaro-is-taking-the-next-step/102105), the Manjaro project will stay as-is. However, a new company has been formed to secure the project and allow them to make legal contracts, official agreements, and other potential commercial activities. So, this makes the “hobby project” a professional endeavor. In addition to this, the donation funds will be transferred to non-profit [fiscal hosts](https://en.wikipedia.org/wiki/Fiscal_sponsorship) ([CommunityBridge](https://communitybridge.org/) and [OpenCollective](https://opencollective.com/)) which will then accept and administer the funds on behalf of the project. Do note, that the donations haven’t been used to create the company – so the transfer of funds to a non-profit fiscal host will ensure transparency while securing the donations. ## How does this improve things? With the company formed, the new structure will help Manjaro in the following ways (as mentioned by the devlopers): - enable developers to commit full time to Manjaro and its related projects; - interact with other developers in sprints and events around Linux; - protect the independence of Manjaro as a community-driven project, as well as protect its brand; - provide faster security updates and a more efficient reaction to the needs of users; - provide the means to act as a company on a professional level. The Manjaro team also shed some light on how it’s going to stay committed to the community: The mission and goals of Manjaro will remain the same as before – to support the collaborative development of Manjaro and its widespread use. This effort will continue to be supported through donations and sponsorship and these will not, under any circumstances, be used by the established company. ## More about Manjaro as a company Even though they mentioned that the project will remain independent of the company, not everyone is clear about the involvement of Manjaro with the “community” while having a company with commercial interests. So, the team also clarified about their plans as a company in the announcement. Manjaro GmbH & Co. KG has been formed to effectively engage in commercial agreements, form partnerships, and offer professional services. With this, Manjaro devs Bernhard and Philip will now be able to commit full-time to Manjaro, while Blue Systems will take a role as an advisor. The company will be able to sign contracts and cover duties and guarantees officially, which the community cannot take or be held responsible for. 
**Wrapping Up** So, with this move, along with commercial opportunities, they plan to go full-time and also hire contributors. Of course, now they mean – “business” (not as the bad guys, I hope). Most of the reactions to this announcement are positive and we all wish them good luck with this. While some might be skeptical about a “community” project having “commercial” interests (remember the [FreeOffice and Manjaro fiasco](https://itsfoss.com/libreoffice-freeoffice-manjaro-linux/)?), I see this as an interesting move. What do you think? Feel free to let us know your thoughts in the comments below.
11,351
开源新闻综述:五角大楼、好莱坞和 Sandboxie 的开源
https://opensource.com/article/19/9/news-september-15
2019-09-17T12:23:00
[ "开源" ]
https://linux.cn/article-11351-1.html
> > 不要错过两周以来最大的开源头条新闻。 > > > ![Weekly news roundup with TV](/data/attachment/album/201909/17/122416ry11111hiy11bt4x.png "Weekly news roundup with TV") 在本期我们的开源新闻综述中有 Sandboxie 的开源之路、五角大楼开源计划的进一步变化、好莱坞开源等等！ ### 五角大楼不符合白宫对开源软件的要求 2016 年，美国白宫要求每个美国政府机构必须在三年内开放至少 20% 的定制软件。2017 年有一篇关于这一倡议的[有趣文章](https://medium.com/@DefenseDigitalService/code-mil-an-open-source-initiative-at-the-pentagon-5ae4986b79bc)，其中列出了一些令人激动的事情和面临的挑战。 根据美国政府问责局（GAO）的说法，[美国五角大楼做得还远远不足](https://www.nextgov.com/analytics-data/2019/09/pentagon-needs-make-more-software-open-source-watchdog-says/159832/)。 在一篇关于 Nextgov 的文章中，Jack Corrigan 写道，截至 2019 年 7 月，美国五角大楼仅将 10% 的代码作为开源代码发布。他们还没有实施的其它白宫任务包括要求制定开源软件政策和定制代码的清单。 根据该报告，一些美国政府官员告诉 GAO，他们担心美国政府部门间共享代码的安全风险。他们还承认没有创建衡量开源工作成功的指标。美国五角大楼的首席技术官将五角大楼的规模列为不执行白宫的开源任务的原因。在周二发布的一份报告中，GAO 表示，“在（美国国防部）完全实施其试点计划并确定完成行政管理和预算局（OMB）要求的里程碑之前，该部门将无法达成显著的成本节约和效率的目的。” ### Sandboxie 在开源的过程中变成了免费软件 一家英国安全公司 Sophos Group plc 发布了[其流行的 Sandboxie 工具的免费版本](https://www.sandboxie.com/DownloadSandboxie)，它用作 Windows 的隔离操作环境（[可在此下载](https://www.sandboxie.com/DownloadSandboxie)）。 Sophos 表示，由于 Sandboxie 不是其业务的核心，因此更容易做出的决定就是关闭它。但 Sandboxie 因为无需让用户的操作系统冒风险就可以在安全的环境中运行未知软件而[广受赞誉](https://betanews.com/2019/09/13/sandboxie-free-open-source/)，因此该团队正在投入额外的工作将其作为开源软件发布。这个免费但非开源的中间阶段似乎与当前的系统设计有关，因为它需要激活密钥： > > Sandboxie 目前使用许可证密钥来激活和授予仅针对付费客户开放的高级功能的访问权限（与使用免费版本的用户相比）。我们修改了代码，并发布了一个不限制任何功能的免费版本的更新版。换句话说，新的免费许可证将可以访问之前仅供付费客户使用的所有功能。 > > > 鉴于此工具的社区影响力，Sophos 的高级领导人宣布发布 Sandboxie 版本 5.31.4，这个不受限制的程序版本将保持免费，直到该工具完全开源。 > > “Sandboxie 用户群代表了一些最热情、前瞻性和知识渊博的安全社区成员，我们不想让你失望，”[Sophos 的博文说道](https://community.sophos.com/products/sandboxie/f/forum/115109/major-sandboxie-news-sandboxie-is-now-a-free-tool-with-plans-to-transition-it-to-an-open-source-tool/414522)。“经过深思熟虑后，我们认为让 Sandboxie 走下去的最佳方式是将其交还给用户，将其转换为开源工具。” > > > ### 志愿者团队致力于查找和数字化无版权书籍 1924 年以前在美国出版的所有书籍都是[公有的、可以自由使用/复制的](https://www.vice.com/en_us/article/a3534j/libraries-and-archivists-are-scanning-and-uploading-books-that-are-secretly-in-the-public-domain)。1964 年及之后出版的图书在出版日期后将保留 95 年的版权。但由于版权漏洞，1923 年至 1964 年间出版的书籍中有高达 75% 可以免费阅读和复制。现在剩下的就是确认究竟是哪些书，而这项工作非常耗时。 因此，一些图书馆、志愿者和档案管理员们联合起来了解哪些图书没有版权，然后将其数字化并上传到互联网。由于版权续期记录已经数字化，因此很容易判断 1923 年至 1964 年间出版的书籍是否续期了版权。但是，由于这相当于要证明一件事“没有发生”，寻找版权未曾续期的书籍要困难得多。 参与者包括纽约公共图书馆（NYPL），它[最近解释了](https://www.nypl.org/blog/2019/09/01/historical-copyright-records-transparency)为什么这个耗时的项目是值得的。为了帮助更快地找到更多书籍，NYPL 将许多记录转换为 XML 格式。这样可以更轻松地自动执行查找可以将哪些书籍添加到公共域的过程。 ### 好莱坞的学院软件基金会获得新成员 微软和苹果公司宣布计划以<ruby> 学院软件基金会 <rt> Academy Software Foundation </rt></ruby>（ASWF）高级会员的身份做出贡献。他们将加入[创始董事会成员](https://variety.com/2019/digital/news/microsoft-apple-academy-software-foundation-1203334675/)，其它成员还包括 Netflix、Google Cloud、Disney Studios 和 Sony Pictures。 学院软件基金会于 2018 年作为[电影艺术与科学学院](https://www.oscars.org/)和 [Linux 基金会](http://www.linuxfoundation.org/)的联合项目而启动。 > > 学院软件基金会（ASWF）的使命是提高贡献到内容创作行业的开源软件库的质量和数量；提供一个中立的论坛来协调跨项目的工作；提供通用的构建和测试基础架构；并为个人和组织提供参与推进我们的开源生态系统的明确途径。 > > > 在第一年内，该基金会构建了 [OpenTimelineIO](https://github.com/PixarAnimationStudios/OpenTimelineIO)，这是一种开源 API 和交换格式，可帮助工作室团队跨部门协作。OpenTimelineIO 于去年 7 月被该[基金会技术咨询委员会](https://www.linuxfoundation.org/press-release/2019/07/opentimelineio-joins-aswf/)正式接受为第五个托管项目。他们现在将它与 [OpenColorIO](https://opencolorio.org/)、[OpenCue](https://www.opencue.io/)、[OpenEXR](https://www.openexr.com/) 和 [OpenVDB](https://www.openvdb.org/) 并列维护。 ### 其它新闻 * [Comcast 将开源网络软件投入生产环境](https://www.fiercetelecom.com/operators/comcast-puts-open-source-networking-software-into-production) * [SD Times 
本周开源项目:Ballerina](https://sdtimes.com/os/sd-times-open-source-project-of-the-week-ballerina/) * [美国国防部努力实施开源计划](https://www.fedscoop.com/open-source-software-dod-struggles/) * [Kong 开源通用服务网格 Kuma](https://sdtimes.com/micro/kong-open-sources-universal-service-mesh-kuma/) * [Eclipse 推出 Jakarta EE 8](https://devclass.com/2019/09/11/hey-were-open-source-again-eclipse-unveils-jakarta-ee-8/) 一如既往地感谢 Opensource.com 的工作人员和主持人本周的帮助。 --- via: <https://opensource.com/article/19/9/news-september-15> 作者:[Lauren Maffeo](https://opensource.com/users/lmaffeo) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
In this edition of our open source news roundup, Sandboxie's path to open source, update on the Pentagon's adoption of open source, open source in Hollywood, and more! ## Sandboxie becomes freeware on its way to open source Sophos Group plc, a British security company, released a [free version of its popular Sandboxie tool](https://www.sandboxie.com/DownloadSandboxie), used as an isolated operating environment for Windows ([downloadable here](https://www.sandboxie.com/DownloadSandboxie)). Sophos said that since Sandboxie isn't a core aspect of its business, the easier decision would've been to shut it down. But Sandboxie has [earned a reputation](https://betanews.com/2019/09/13/sandboxie-free-open-source/) for letting users run unknown software in a safe environment without risking their systems, so the team is putting in the additional work to release it as open source software. This intermediate phase of free-but-not-open-source appears to be related to the current system design, which requires an activation key: Sandboxie currently uses a license key to activate and grant access to premium features only available to paid customers (as opposed to those using a free version). We have modified the code and have released an updated free version that does not restrict any features. In other words, the new free license will have access to all the features previously only available to paid customers. Citing this tool's community impact, senior leaders at Sophos announced the release of Sandboxie version 5.31.4–an unrestricted version of the program–will remain free until the tool is fully open sourced. "The Sandboxie user base represents some of the most passionate, forward thinking, and knowledgeable members of the security community and we didn’t want to let you down," [Sophos' blog post read](https://community.sophos.com/products/sandboxie/f/forum/115109/major-sandboxie-news-sandboxie-is-now-a-free-tool-with-plans-to-transition-it-to-an-open-source-tool/414522). "After thoughtful consideration we decided that the best way to keep Sandboxie going was to give it back to its users -- transitioning it to an open source tool." ## The Pentagon doesn't meet White House mandate for more open source software In 2016, the White House mandated that each government agency had to open source at least 20 percent of its custom software within three years. There is an [interesting article](https://medium.com/@DefenseDigitalService/code-mil-an-open-source-initiative-at-the-pentagon-5ae4986b79bc) about this initiative from 2017 that laid out some of the excitement and challenges. According to the Government Accountability Office, [the Pentagon's not even halfway there](https://www.nextgov.com/analytics-data/2019/09/pentagon-needs-make-more-software-open-source-watchdog-says/159832/). In an article for Nextgov, Jack Corrigan wrote that as of July 2019, the Pentagon had released just 10 percent of its code as open source. They've also not yet implemented other aspects of the White House mandate, including the directive to build an open source software policy and inventories of custom code. According to the report, some government officials told the GAO that they worry about security risks of sharing code across government departments. They also admitted to not creating metrics that could measure their open source efforts' successes. The Pentagon's Chief Technology Officer cited the Pentagon's size as the reason for not implementing the White House's open source mandate. 
In a report published Tuesday, the GAO said, “Until [the Defense Department] fully implements its pilot program and establishes milestones for completing the OMB requirements, the department will not be positioned to take advantage of significant cost savings and efficiencies." ## A team of volunteers works to find and digitize copyright-free books All books published in the U.S. before 1924 are [publicly owned and can be freely used/copied](https://www.vice.com/en_us/article/a3534j/libraries-and-archivists-are-scanning-and-uploading-books-that-are-secretly-in-the-public-domain). Books published in and after 1964 will stay under copyright for 95 years after their publication dates. But thanks to a copyright loophole, up to 75 percent of books published between 1923 and 1964 are free to read and copy. The time-consuming trick is confirming which books those are. So, a group of libraries, volunteers, and archivists have united to learn which books are copyright-free, then digitize and upload them to the Internet. Since renewal records were already digitized, it's been easy to tell if books published between 1923 and 1964 had their copyrights renewed. But looking for a lack of copyright renewal is much harder since you're trying to prove a negative. Participants include the New York Public Library, [which recently explained](https://www.nypl.org/blog/2019/09/01/historical-copyright-records-transparency) why the time-consuming project is worthwhile. To help find more books faster, the NYPL converted many records to XML format. This makes it easier to automate the process of finding which books can be added to the public domain. ## Hollywood's Academy Software Foundation gains new members Microsoft and Apple announced plans to contribute at the premier membership level of the ASF. They'll join [founding board members](https://variety.com/2019/digital/news/microsoft-apple-academy-software-foundation-1203334675/) including Netflix, Google Cloud, Disney Studios, and Sony Pictures. The Academy Software Foundation launched in 2018 as a joint project of the [Academy of Motion Picture Arts and Sciences](https://www.oscars.org/) and the [Linux Foundation](http://www.linuxfoundation.org/). The mission of the Academy Software Foundation (ASWF) is to increase the quality and quantity of contributions to the content creation industry’s open source software base; to provide a neutral forum to coordinate cross-project efforts; to provide a common build and test infrastructure; and to provide individuals and organizations a clear path to participation in advancing our open source ecosystem. Within its first year, the Foundation built [OpenTimelineIO](https://github.com/PixarAnimationStudios/OpenTimelineIO), an open source API and interchange format that helps studio teams collaborate across departments. OpenTImelineIO was formally accepted by [the Foundation's Technical Advisory Council](https://www.linuxfoundation.org/press-release/2019/07/opentimelineio-joins-aswf/) as its fifth hosted project last July. They now maintain it alongside [OpenColorIO](https://opencolorio.org/), [OpenCue](https://www.opencue.io/), [OpenEXR](https://www.openexr.com/), and [OpenVDB](https://www.openvdb.org/). 
### In other news [Comcast puts open source networking software into production](https://www.fiercetelecom.com/operators/comcast-puts-open-source-networking-software-into-production)[SD Times open source project of the week: Ballerina](https://sdtimes.com/os/sd-times-open-source-project-of-the-week-ballerina/)[DOD struggles to implement open source pilots](https://www.fedscoop.com/open-source-software-dod-struggles/)[Kong open sources universal service mesh Kuma](https://sdtimes.com/micro/kong-open-sources-universal-service-mesh-kuma/)[Eclipse unveils Jakarta EE 8](https://devclass.com/2019/09/11/hey-were-open-source-again-eclipse-unveils-jakarta-ee-8/) *Thanks, as always, to Opensource.com staff members and moderators for their help this week.*
11,352
如何使用 Bash 脚本从 SAR 报告中获取 CPU 和内存使用情况
https://www.2daygeek.com/linux-get-average-cpu-memory-utilization-from-sar-data-report/
2019-09-17T12:51:47
[ "sar" ]
https://linux.cn/article-11352-1.html
![](/data/attachment/album/201909/17/125134fm0vsssppybnnmvp.jpg) 大多数 Linux 管理员使用 [SAR 报告](https://www.2daygeek.com/sar-system-performance-monitoring-command-tool-linux/)监控系统性能,因为它会收集一周的性能数据。但是,你可以通过更改 `/etc/sysconfig/sysstat` 文件轻松地将其延长到四周。同样,这段时间可以延长一个月以上。如果超过 28,那么日志文件将放在多个目录中,每月一个。 要将覆盖期延长至 28 天,请对 `/etc/sysconfig/sysstat` 文件做以下更改。 编辑 `sysstat` 文件并将 `HISTORY=7` 更改为 `HISTORY=28`。 在本文中,我们添加了三个 bash 脚本,它们可以帮助你在一个地方轻松查看每个数据文件的平均值。 我们过去加过许多有用的 shell 脚本。如果你想查看它们,请进入下面的链接。 * [如何使用 shell 脚本自动化日常操作](https://www.2daygeek.com/category/shell-script/) 这些脚本简单明了。出于测试目的,我们仅包括两个性能指标,即 CPU 和内存。你可以修改脚本中的其他性能指标以满足你的需求。 ### 脚本 1:从 SAR 报告中获取平均 CPU 利用率的 Bash 脚本 该 bash 脚本从每个数据文件中收集 CPU 平均值并将其显示在一个页面上。 由于是月末,它显示了 2019 年 8 月的 28 天数据。 ``` # vi /opt/scripts/sar-cpu-avg.sh #!/bin/sh echo "+----------------------------------------------------------------------------------+" echo "|Average: CPU %user %nice %system %iowait %steal %idle |" echo "+----------------------------------------------------------------------------------+" for file in `ls -tr /var/log/sa/sa* | grep -v sar` do dat=`sar -f $file | head -n 1 | awk '{print $4}'` echo -n $dat sar -f $file | grep -i Average | sed "s/Average://" done echo "+----------------------------------------------------------------------------------+" ``` 运行脚本后,你将看到如下输出。 ``` # sh /opt/scripts/sar-cpu-avg.sh +----------------------------------------------------------------------------------+ |Average: CPU %user %nice %system %iowait %steal %idle | +----------------------------------------------------------------------------------+ 08/01/2019 all 0.70 0.00 1.19 0.00 0.00 98.10 08/02/2019 all 1.73 0.00 3.16 0.01 0.00 95.10 08/03/2019 all 1.73 0.00 3.16 0.01 0.00 95.11 08/04/2019 all 1.02 0.00 1.80 0.00 0.00 97.18 08/05/2019 all 0.68 0.00 1.08 0.01 0.00 98.24 08/06/2019 all 0.71 0.00 1.17 0.00 0.00 98.12 08/07/2019 all 1.79 0.00 3.17 0.01 0.00 95.03 08/08/2019 all 1.78 0.00 3.14 0.01 0.00 95.08 08/09/2019 all 1.07 0.00 1.82 0.00 0.00 97.10 08/10/2019 all 0.38 0.00 0.50 0.00 0.00 99.12 . . . 
08/29/2019 all 1.50 0.00 2.33 0.00 0.00 96.17 08/30/2019 all 2.32 0.00 3.47 0.01 0.00 94.20 +----------------------------------------------------------------------------------+ ``` ### 脚本 2:从 SAR 报告中获取平均内存利用率的 Bash 脚本 该 bash 脚本从每个数据文件中收集内存平均值并将其显示在一个页面上。 由于是月末,它显示了 2019 年 8 月的 28 天数据。 ``` # vi /opt/scripts/sar-memory-avg.sh #!/bin/sh echo "+-------------------------------------------------------------------------------------------------------------------+" echo "|Average: kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty |" echo "+-------------------------------------------------------------------------------------------------------------------+" for file in `ls -tr /var/log/sa/sa* | grep -v sar` do dat=`sar -f $file | head -n 1 | awk '{print $4}'` echo -n $dat sar -r -f $file | grep -i Average | sed "s/Average://" done echo "+-------------------------------------------------------------------------------------------------------------------+" ``` 运行脚本后,你将看到如下输出。 ``` # sh /opt/scripts/sar-memory-avg.sh +--------------------------------------------------------------------------------------------------------------------+ |Average: kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty | +--------------------------------------------------------------------------------------------------------------------+ 08/01/2019 1492331 2388461 61.55 29888 1152142 1560615 12.72 1693031 380472 6 08/02/2019 1493126 2387666 61.53 29888 1147811 1569624 12.79 1696387 373346 3 08/03/2019 1489582 2391210 61.62 29888 1147076 1581711 12.89 1701480 370325 3 08/04/2019 1490403 2390389 61.60 29888 1148206 1569671 12.79 1697654 373484 4 08/05/2019 1484506 2396286 61.75 29888 1152409 1563804 12.75 1702424 374628 4 08/06/2019 1473593 2407199 62.03 29888 1151137 1577491 12.86 1715426 371000 8 08/07/2019 1467150 2413642 62.19 29888 1155639 1596653 13.01 1716900 372574 13 08/08/2019 1451366 2429426 62.60 29888 1162253 1604672 13.08 1725931 376998 5 08/09/2019 1451191 2429601 62.61 29888 1158696 1582192 12.90 1728819 371025 4 08/10/2019 1450050 2430742 62.64 29888 1160916 1579888 12.88 1729975 370844 5 . . . 08/29/2019 1365699 2515093 64.81 29888 1198832 1593567 12.99 1781733 376157 15 08/30/2019 1361920 2518872 64.91 29888 1200785 1595105 13.00 1784556 375641 8 +-------------------------------------------------------------------------------------------------------------------+ ``` ### 脚本 3:从 SAR 报告中获取 CPU 和内存平均利用率的 Bash 脚本 该 bash 脚本从每个数据文件中收集 CPU 和内存平均值并将其显示在一个页面上。 该脚本与上面相比稍微不同。它在同一位置同时显示两者(CPU 和内存)平均值,而不是其他数据。 ``` # vi /opt/scripts/sar-cpu-mem-avg.sh #!/bin/bash for file in `ls -tr /var/log/sa/sa* | grep -v sar` do sar -f $file | head -n 1 | awk '{print $4}' echo "-----------" sar -u -f $file | awk '/Average:/{printf("CPU Average: %.2f%\n"), 100 - $8}' sar -r -f $file | awk '/Average:/{printf("Memory Average: %.2f%\n"),(($3-$5-$6)/($2+$3)) * 100 }' printf "\n" done ``` 运行脚本后,你将看到如下输出。 ``` # sh /opt/scripts/sar-cpu-mem-avg.sh 08/01/2019 ----------- CPU Average: 1.90% Memory Average: 31.09% 08/02/2019 ----------- CPU Average: 4.90% Memory Average: 31.18% 08/03/2019 ----------- CPU Average: 4.89% Memory Average: 31.29% 08/04/2019 ----------- CPU Average: 2.82% Memory Average: 31.24% 08/05/2019 ----------- CPU Average: 1.76% Memory Average: 31.28% . . . 
08/29/2019 ----------- CPU Average: 3.83% Memory Average: 33.15% 08/30/2019 ----------- CPU Average: 5.80% Memory Average: 33.19% ``` --- via: <https://www.2daygeek.com/linux-get-average-cpu-memory-utilization-from-sar-data-report/> 作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
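补充一个小示例（非原文内容，仅供参考）：文章开头提到要将 `/etc/sysconfig/sysstat` 中的 `HISTORY=7` 改为 `HISTORY=28`。下面是一个最小的假设性写法，用 `sed` 自动完成该修改；修改系统配置前请先备份：

```
# 先备份原配置文件
sudo cp /etc/sysconfig/sysstat /etc/sysconfig/sysstat.bak

# 将 HISTORY 的值改为 28（假设文件中存在以 HISTORY= 开头的行）
sudo sed -i 's/^HISTORY=.*/HISTORY=28/' /etc/sysconfig/sysstat

# 确认修改已经生效
grep '^HISTORY=' /etc/sysconfig/sysstat
```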
404
Not Found
null
11,354
Firefox 69 已可在 Fedora 中获取
https://fedoramagazine.org/firefox-69-available-in-fedora/
2019-09-18T09:54:18
[ "Firefox" ]
https://linux.cn/article-11354-1.html
![](/data/attachment/album/201909/18/095421jv602uqkrwlrwu53.jpg) 当你安装 Fedora Workstation 时,你会发现它包括了世界知名的 Firefox 浏览器。Mozilla 基金会以开发 Firefox 以及其他促进开放、安全和隐私的互联网项目为己任。Firefox 有快速的浏览引擎和大量的隐私功能。 开发者社区不断改进和增强 Firefox。最新版本 Firefox 69 于最近发布,你可在稳定版 Fedora 系统(30 及更高版本)中获取它。继续阅读以获得更多详情。 ### Firefox 69 中的新功能 最新版本的 Firefox 包括<ruby> <a href="https://blog.mozilla.org/blog/2019/09/03/todays-firefox-blocks-third-party-tracking-cookies-and-cryptomining-by-default/"> 增强跟踪保护 </a> <rt> Enhanced Tracking Protection </rt></ruby>(ETP)。当你使用带有新(或重置)配置文件的 Firefox 69 时,浏览器会使网站更难以跟踪你的信息或滥用你的计算机资源。 例如,不太正直的网站使用脚本让你的系统进行大量计算来产生加密货币,这称为<ruby> <a href="https://www.webopedia.com/TERM/C/cryptocurrency-mining.html"> 加密挖矿 </a> <rt> cryptomining </rt></ruby>。加密挖矿在你不知情或未经许可的情况下发生,因此是对你的系统的滥用。Firefox 69 中的新标准设置可防止网站遭受此类滥用。 Firefox 69 还有其他设置,可防止识别或记录你的浏览器指纹,以供日后使用。这些改进为你提供了额外的保护,免于你的活动被在线追踪。 另一个常见的烦恼是在没有提示的情况下播放视频。视频播放也会占用更多的 CPU,你可能不希望未经许可就在你的笔记本上发生这种情况。Firefox 使用<ruby> <a href="https://support.mozilla.org/kb/block-autoplay"> 阻止自动播放 </a> <rt> Block Autoplay </rt></ruby>这个功能阻止了这种情况的发生。而 Firefox 69 还允许你停止静默开始播放的视频。此功能可防止不必要的突然的噪音。它还解决了更多真正的问题 —— 未经许可使用计算机资源。 新版本中还有许多其他新功能。在 [Firefox 发行说明](https://www.mozilla.org/en-US/firefox/69.0/releasenotes/)中阅读有关它们的更多信息。 ### 如何获得更新 Firefox 69 存在于稳定版 Fedora 30、预发布版 Fedora 31 和 Rawhide 仓库中。该更新由 Fedora 的 Firefox 包维护者提供。维护人员还确保更新了 Mozilla 的网络安全服务(nss 包)。我们感谢 Mozilla 项目和 Firefox 社区在提供此新版本方面的辛勤工作。 如果你使用的是 Fedora 30 或更高版本,请在 Fedora Workstation 上使用*软件中心*,或在任何 Fedora 系统上运行以下命令: ``` $ sudo dnf --refresh upgrade firefox ``` 如果你使用的是 Fedora 29,请[帮助测试更新](https://bodhi.fedoraproject.org/updates/FEDORA-2019-89ae5bb576),这样它可以变得稳定,让所有用户可以轻松使用。 Firefox 可能会提示你升级个人设置以使用新设置。要使用新功能,你应该这样做。 --- via: <https://fedoramagazine.org/firefox-69-available-in-fedora/> 作者:[Paul W. Frields](https://fedoramagazine.org/author/pfrields/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
When you install the Fedora Workstation, you’ll find the world-renowned Firefox browser included. The Mozilla Foundation underwrites work on Firefox, as well as other projects that promote an open, safe, and privacy respecting Internet. Firefox already features a fast browsing engine and numerous privacy features. A community of developers continues to improve and enhance Firefox. The latest version, Firefox 69, was released recently and you can get it for your stable Fedora system (30 and later). Read on for more details. ## New features in Firefox 69 The newest version of Firefox includes [Enhanced Tracking Protection](https://blog.mozilla.org/blog/2019/09/03/todays-firefox-blocks-third-party-tracking-cookies-and-cryptomining-by-default/) (or ETP). When you use Firefox 69 with a new (or reset) settings profile, the browser makes it harder for sites to track your information or misuse your computer resources. For instance, less scrupulous websites use scripts that cause your system to do lots of intense calculations to produce cryptocurrency results, called *cryptomining*. Cryptomining happens without your knowledge or permission and is therefore a misuse of your system. The new standard setting in Firefox 69 prevents sites from this kind of abuse. Firefox 69 has additional settings to prevent sites from identifying or fingerprinting your browser for later use. These improvements give you additional protection from having your activities tracked online. Another common annoyance is videos that start in your browser without warning. Video playback also uses extra CPU power and you may not want this happening on your laptop without permission. Firefox already stops this from happening using the [Block Autoplay](https://support.mozilla.org/kb/block-autoplay) feature. But Firefox 69 also lets you stop videos from playing even if they start without sound. This feature prevents unwanted sudden noise. It also solves more of the real problem — having your computer’s power used without permission. There are numerous other new features in the new release. Read more about them in the [Firefox release notes](https://www.mozilla.org/en-US/firefox/69.0/releasenotes/). ## How to get the update Firefox 69 is available in the stable Fedora 30 and pre-release Fedora 31 repositories, as well as Rawhide. The update is provided by Fedora’s maintainers of the Firefox package. The maintainers also ensured an update to Mozilla’s Network Security Services (the nss package). We appreciate the hard work of the Mozilla project and Firefox community in providing this new release. If you’re using Fedora 30 or later, use the *Software* tool on Fedora Workstation, or run the following command on any Fedora system: $ sudo dnf --refresh upgrade firefox If you’re on Fedora 29, [help test the update](https://bodhi.fedoraproject.org/updates/FEDORA-2019-89ae5bb576) for that release so it can become stable and easily available for all users. Firefox may prompt you to upgrade your profile to use the new settings. To take advantage of new features, you should do this.
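One small, hypothetical addition that is not part of the original article: after running the upgrade, you may want to confirm which Firefox build actually landed on your system. Assuming an RPM-based Fedora install, something like this should work:

```
# Ask the package manager which Firefox package is installed
rpm -q firefox

# Ask the browser itself; it should report 69.x after the update
firefox --version
```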
11,355
每周开源点评:Linux Plumbers、Appwrite
https://opensource.com/article/19/9/conferences-industry-trends
2019-09-18T11:34:36
[ "Linux" ]
https://linux.cn/article-11355-1.html
> > 了解每周的开源社区和行业趋势。 > > > ![Person standing in front of a giant computer screen with numbers, data](/data/attachment/album/201909/18/113440ao2ox4rqpxlz7o4o.png "Person standing in front of a giant computer screen with numbers, data") 作为采用开源开发模式的企业软件公司的高级产品营销经理，这是我为产品营销人员、经理和其他相关人员发布的有关开源社区、市场和行业趋势的定期更新。以下是本次更新中我最喜欢的五篇文章。 ### 《在 Linux Plumbers 会议上解决 Linux 具体细节》 * [文章地址](https://www.zdnet.com/article/working-on-linuxs-nuts-and-bolts-at-linux-plumbers/) > > Linux 的创建者 Linus Torvalds 告诉我，<ruby> 内核维护者峰会 <rt> Kernel Maintainers Summit </rt></ruby>是顶级 Linux 内核开发人员的邀请制聚会。但是，虽然你可能认为这是关于规划 Linux 内核的未来的会议，但事实并非如此。“这个维护者峰会真的与众不同，因为它甚至不谈论技术问题。”相反，“全都谈的是关于创建和维护 Linux 内核的过程。” > > > **影响**：这就像技术版的 Bilderberg 会议：你们举办的都是各种华丽的流行语会议，而在这里我们做出的才是真正的决定。不过我觉得，可能不太会涉及到私人飞机吧。（LCTT 译注：有关 Bilderberg 请自行搜索） ### 《微软主办第一个 WSL 会议》 * [文章地址](https://www.zdnet.com/article/microsoft-hosts-first-windows-subsystem-for-linux-conference/) > > [Whitewater Foundry](https://github.com/WhitewaterFoundry) 是一家专注于 [Windows 的 Linux 子系统(WSL)](https://docs.microsoft.com/en-us/windows/wsl/install-win10)的创业公司，它的创始人 Hayden Barnes [宣布举办 WSLconf 1](https://www.linkedin.com/feed/update/urn:li:activity:6574754435518599168/)，这是 WSL 的第一次社区会议。该活动将于 2020 年 3 月 10 日至 11 日在华盛顿州雷德蒙市的微软总部 20 号楼举行。会议仍在筹备之中。我们已经知道将有来自 [Pengwin（Whitewater 的 Linux for Windows）](https://www.zdnet.com/article/pengwin-a-linux-specifically-for-windows-subsystem-for-linux/)、微软 WSL 和 Canonical 的 Ubuntu on WSL 开发人员的演讲和研讨会。 > > > **影响**：微软正在培育围绕其日益增多的开源软件采用和贡献而成长起来的社区的种子。这足以让我感动落泪。 ### 《Appwrite 简介：面向移动和 Web 开发人员的开源后端服务器》 * [文章链接](https://medium.com/@eldadfux/introducing-appwrite-an-open-source-backend-server-for-mobile-web-developers-4be70731575d) > > [Appwrite](https://appwrite.io) 是一个新的[开源软件](https://github.com/appwrite/appwrite)，是面向前端和移动开发人员的端到端后端服务器，可以让你更快地构建应用程序。[Appwrite](https://medium.com/@eldadfux/introducing-appwrite-an-open-source-backend-server-for-mobile-web-developers-4be70731575d?source=friends_link&sk=b6a2be384aafd1fa5b1b6ff12906082c) 的目标是抽象和简化 REST API 和工具背后的常见开发任务，以帮助开发人员更快地构建高级应用程序。 > > > 在这篇文章中，我将简要介绍一些主要的 [Appwrite](https://appwrite.io/) 服务，并解释它们的主要功能以及它们的设计方式，相比从头开始编写所有后端 API，这可以帮助你更快地构建下一个项目。 > > > **影响**：随着更多开源中间件变得更易于使用，软件开发越来越容易。Appwrite 声称可将开发时间和成本降低 70%。想象一下这对小型移动开发机构或个人开发者意味着什么。我很好奇他们将如何通过这种方式赚钱。 ### 《“不只是 IT”：开源技术专家说协作文化是政府转型的关键》 * [文章链接](https://medium.com/agile-government-leadership/more-than-just-it-open-source-technologist-says-collaborative-culture-is-key-to-government-c46d1489f822) > > AGL（<ruby> 敏捷的政府领导 <rt> agile government leadership </rt></ruby>）正在为那些帮助政府更好地为公众工作的人们提供有价值的支持网络。该组织专注于我非常热衷的事情：DevOps、数字化转型、开源以及许多政府 IT 领导者最关心的类似主题。AGL 为我提供了一个社区，可以了解当今最优秀和最聪明的人所做的事情，并与整个行业的同行分享这些知识。 > > > **影响**：不管你的政治信仰如何，对政府都很容易愤世嫉俗。我发现令人耳目一新的是，政府也是由一个个实际的人组成的，他们大多在尽力将相关技术应用于公益事业。特别是当该技术是开源的！ 
### 《彭博社如何通过 Kubernetes 实现接近 90-95% 的硬件利用率》 * [文章链接](https://www.cncf.io/blog/2019/09/12/how-bloomberg-achieves-close-to-90-95-hardware-utilization-with-kubernetes/) > > 2016 年,彭博社采用了 Kubernetes(当时仍处于 alpha 阶段中),自使用该项目的上游代码以来,取得了显著成果。Rybka 说:“借助 Kubernetes,我们能够非常高效地使用我们的硬件,使利用率接近 90% 到 95%。”Kubernetes 中的自动缩放使系统能够更快地满足需求。此外,Kubernetes “为我们提供了标准化我们构建和管理服务的方法的能力,这意味着我们可以花费更多时间专注于实际使用我们支持的开源工具,”数据和分析基础架构主管 Steven Bower 说,“如果我们想要在世界的另一个位置建立一个新的集群,那么这样做真的非常简单。一切都只是代码。配置就是代码。” > > > **影响**:没有什么能像利用率统计那样穿过营销的迷雾。我听说过关于 Kube 的一件事是,当人们运行它时,他们不知道用它做什么。像这样的用例可以给他们(和你)一些想要的东西。 *我希望你喜欢这个上周重要内容的清单,请下周回来了解有关开源社区、市场和行业趋势的更多信息。* --- via: <https://opensource.com/article/19/9/conferences-industry-trends> 作者:[Tim Hildred](https://opensource.com/users/thildred) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update. [Working on Linux's nuts and bolts at Linux Plumbers](https://www.zdnet.com/article/working-on-linuxs-nuts-and-bolts-at-linux-plumbers/) The Kernel Maintainers Summit, Linux creator Linus Torvalds told me, is an invitation-only gathering of the top Linux kernel developers. But, while you might think it's about planning on the Linux kernel's future, that's not the case. "The maintainer summit is really different because it doesn't even talk about technical issues." Instead, "It's all about the process of creating and maintaining the Linux kernel." **The impact**: This is like the technical version of the Bilderberg meeting: you can have your flashy buzzword conferences, but we'll be over here making the real decisions. Or so I imagine. Probably less private jets involved though. [Microsoft hosts first Windows Subsystem for Linux conference](https://www.zdnet.com/article/microsoft-hosts-first-windows-subsystem-for-linux-conference/) Hayden Barnes, founder of [Whitewater Foundry], a startup focusing on[Windows Subsystem for Linux (WSL)][announced WSLconf 1], the first community conference for WSL. This event will be held on March 10-11, 2020 at Building 20 on the Microsoft HQ campus in Redmond, WA. The conference is still coming together. But we already know it will have presentations and workshops from[Pengwin, Whitewater's Linux for Windows,]Microsoft WSL, and[Canonical]'s[Ubuntu]on WSL developers. **The impact**: Microsoft is nurturing the seeds of community growing up around its increasing adoption of and contribution to open source software. It's enough to bring a tear to my eye. [Introducing Appwrite: An open source backend server for mobile and web developers](https://medium.com/@eldadfux/introducing-appwrite-an-open-source-backend-server-for-mobile-web-developers-4be70731575d) [Appwrite]is a new[open source], end to end backend server for frontend and mobile developers that allows you to build apps a lot faster.[Appwrite]goal is to abstract and simplify common development tasks behind REST APIs and tools, to help developers build advanced apps way faster.In this post I will shortly cover some of the main [Appwrite]services and explain about their main features and how they are designed to help you build your next project way faster than you would when writing all your backend APIs from scratch. **The impact**: Software development is getting more and more accessible as more open source middleware gets easier to use. Appwrite claims to reduce the time and cost of development by 70%. Imagine what that would mean to a small mobile development agency or citizen developer. I'm curious about how they'll monetize this. AGL (agile government leadership) is providing a valuable support network for people who are helping government work better for the public. The organization is focused on things that I am very passionate about — DevOps, digital transformation, open source, and similar topics that are top-of-mind for many government IT leaders. AGL provides me with a community to learn about what the best and brightest are doing today, and share those learnings with my peers throughout the industry. 
**The impact**: It is easy to be cynical about the government no matter your political persuasion. I found it refreshing to have a reminder that the government is comprised of real people who are mostly doing their best to apply relevant technology to the public good. Especially when that technology is open source! [How Bloomberg achieves close to 90-95% hardware utilization with Kubernetes](https://www.cncf.io/blog/2019/09/12/how-bloomberg-achieves-close-to-90-95-hardware-utilization-with-kubernetes/) In 2016, Bloomberg adopted Kubernetes—when it was still in alpha—and has seen remarkable results ever since using the project’s upstream code. “With Kubernetes, we’re able to very efficiently use our hardware to the point where we can get close to 90 to 95% utilization rates,” says Rybka. Autoscaling in Kubernetes allows the system to meet demands much faster. Furthermore, Kubernetes “offered us the ability to standardize our approach to how we build and manage services, which means that we can spend more time focused on actually working on the open source tools that we support,” says Steven Bower, Data and Analytics Infrastructure Lead. “If we want to stand up a new cluster in another location in the world, it’s really very straightforward to do that. Everything is all just code. Configuration is code.” **The impact**: Nothing cuts through the fog of marketing like utilization stats. One of the things that I've heard about Kube is that people don't know what to do with it when they have it running. Use cases like this give them (and you) something to aspire to. *I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends.*
11,356
使用 Conda 管理 MacOS 上的 Ansible 环境
https://opensource.com/article/19/8/using-conda-ansible-administration-macos
2019-09-18T12:39:11
[ "Conda", "Ansible" ]
https://linux.cn/article-11356-1.html
> > Conda 将 Ansible 所需的一切都收集到虚拟环境中并将其与其他项目分开。 > > > ![](/data/attachment/album/201909/18/123838m1bcmke570kl6kzm.jpg) 如果你是一名使用 MacOS 并涉及到 Ansible 管理的 Python 开发人员，你可能希望使用 Conda 包管理器将 Ansible 的工作内容与核心操作系统和其他本地项目分开。 Ansible 基于 Python。要让 Ansible 在 MacOS 上工作，Conda 并不是必需的，但是它确实让你管理 Python 版本和包依赖变得更加容易。这允许你在 MacOS 上使用升级的 Python 版本，并在你的系统中、Ansible 和其他编程项目之间保持 Python 包的依赖性相互独立。 在 MacOS 上安装 Ansible 还有其他方法。你可以使用 [Homebrew](https://brew.sh/)，但是如果你对 Python 开发（或 Ansible 开发）感兴趣，你可能会发现在一个独立 Python 虚拟环境中管理 Ansible 可以减少一些混乱。我觉得这更简单；与其试图将 Python 版本和依赖项加载到系统或 `/usr/local` 目录中，还不如使用 Conda 帮助我将 Ansible 所需的一切都收集到一个虚拟环境中，并将其与其他项目完全分开。 本文着重于使用 Conda 作为 Python 项目来管理 Ansible，以保持它的干净并与其他项目分开。请继续阅读，并了解如何安装 Conda、创建新的虚拟环境、安装 Ansible 并对其进行测试。 ### 序幕 最近，我想学习 [Ansible](https://docs.ansible.com/?extIdCarryOver=true&sc_cid=701f2000001OH6uAAG)，所以我需要找到安装它的最佳方法。 我通常对在我的日常工作站上安装东西很谨慎。我尤其不喜欢对供应商的默认操作系统安装应用手动更新（这是我多年担任 Unix 系统管理员养成的习惯）。我真的很想使用 Python 3.7，但是 MacOS 的 Python 包是旧的 2.7，我不会安装任何可能干扰核心 MacOS 系统的全局 Python 包。 所以，我在一个本地的 Ubuntu 18.04 虚拟机上开始了我的 Ansible 工作。这提供了真正意义上的安全隔离，但我很快发现管理它是非常乏味的。所以我着手研究如何在本机 MacOS 上获得一个灵活但独立的 Ansible 系统。 由于 Ansible 基于 Python，Conda 似乎是理想的解决方案。 ### 安装 Conda Conda 是一个开源软件，它提供方便的包和环境管理功能。它可以帮助你管理多个版本的 Python、安装软件包依赖关系、执行升级和维护项目隔离。如果你手动管理 Python 虚拟环境，Conda 将有助于简化和管理你的工作。浏览 [Conda 文档](https://conda.io/projects/conda/en/latest/index.html)可以了解更多细节。 我选择了 [Miniconda](https://docs.conda.io/en/latest/miniconda.html) Python 3.7 安装在我的工作站中，因为我想要最新的 Python 版本。无论选择哪个版本，你都可以使用其他版本的 Python 安装新的虚拟环境。 要安装 Conda，请下载 PKG 格式的文件，进行通常的双击，并选择 “Install for me only” 选项。安装在我的系统上占用了大约 158 兆的空间。 安装完成后，调出一个终端来看看你得到了什么。你应该看到： * 在你的家目录中的 `miniconda3` 目录 * shell 提示符前面加上了 `(base)` * `.bash_profile` 文件更新了一些 Conda 特有的设置内容 现在基础已经安装好了，你有了第一个 Python 虚拟环境。运行 Python 版本检查可以证明这一点，你的 `PATH` 将指向新的位置： ``` (base) $ which python /Users/jfarrell/miniconda3/bin/python (base) $ python --version Python 3.7.1 ``` 现在安装了 Conda，下一步是建立一个虚拟环境，然后安装 Ansible 并运行。 ### 为 Ansible 创建虚拟环境 我想将 Ansible 与我的其他 Python 项目分开，所以我创建了一个新的虚拟环境并切换到它： ``` (base) $ conda create --name ansible-env --clone base (base) $ conda activate ansible-env (ansible-env) $ conda env list ``` 第一个命令将 Conda 库克隆到一个名为 `ansible-env` 的新虚拟环境中。克隆引入了 Python 3.7 版本和一系列默认的 Python 模块，你可以根据需要添加、删除或升级这些模块。 第二个命令将 shell 上下文更改为这个新的环境。它为 Python 及其包含的模块设置了正确的路径。请注意，在 `conda activate ansible-env` 命令后，你的 shell 提示符会发生变化。 第三个命令不是必需的；它列出了安装了哪些 Python 模块及其版本和其他数据。 你可以随时使用 Conda 的 `activate` 命令切换到另一个虚拟环境。这条命令将带你回到基本环境：`conda activate base`。 ### 安装 Ansible 安装 Ansible 有多种方法，但是使用 Conda 可以将 Ansible 版本和所有需要的依赖项打包在一个地方。Conda 提供了灵活性，既可以将所有内容分开，又可以根据需要添加其他新环境（我将在后面演示）。 要安装 Ansible 的相对较新版本，请使用： ``` (base) $ conda activate ansible-env (ansible-env) $ conda install -c conda-forge ansible ``` 由于 Ansible 不是 Conda 默认通道的一部分，因此 `-c` 用于从备用通道搜索和安装。Ansible 现已安装到 `ansible-env` 虚拟环境中，可以使用了。 ### 使用 Ansible 既然你已经安装了 Conda 虚拟环境，就可以使用它了。首先，确保要控制的节点已将工作站的 SSH 密钥安装到正确的用户帐户。 调出一个新的 shell 并运行一些基本的 Ansible 命令： ``` (base) $ conda activate ansible-env (ansible-env) $ ansible --version ansible 2.8.1 config file = None configured module search path = ['/Users/jfarrell/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/jfarrell/miniconda3/envs/ansibleTest/lib/python3.7/site-packages/ansible executable location = /Users/jfarrell/miniconda3/envs/ansibleTest/bin/ansible python version = 3.7.1 (default, Dec 14 2018, 13:28:58) [Clang 4.0.1 (tags/RELEASE_401/final)] (ansible-env) $ ansible all -m ping -u ansible 192.168.99.200 | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python" }, "changed": false, "ping": "pong" } ``` 现在 Ansible 可以工作了，你可以从源代码控制系统中取出你的剧本（playbook），从你的 MacOS 工作站上使用它们。 ### 克隆新的 Ansible 进行 Ansible 开发 这部分完全是可选的；只有当你想要额外的虚拟环境来修改 Ansible 或者安全地试用有问题的 Python 模块时，才需要它。你可以通过以下方式将主 Ansible 环境克隆到开发副本中： ``` (ansible-env) $ conda create --name ansible-dev --clone ansible-env (ansible-env) $ conda activate ansible-dev (ansible-dev) $ ``` ### 需要注意的问题 偶尔你可能遇到使用 Conda 的麻烦。你通常可以通过以下方式删除不良环境： ``` $ conda activate base $ conda remove --name ansible-dev --all ``` 如果出现无法解决的错误，通常可以通过在 `~/miniconda3/envs` 中找到该环境并删除整个目录来直接删除环境。如果基础环境损坏了，你可以删除整个 `~/miniconda3`，然后从 PKG 文件中重新安装。只要确保保留 `~/miniconda3/envs`，或使用 Conda 工具导出环境配置并在以后重新创建即可。 MacOS 上不包括 `sshpass` 程序。只有当你的 Ansible 工作要求你向 Ansible 提供 SSH 登录密码时，才需要它。你可以在 SourceForge 上找到当前的 [sshpass 源代码](https://sourceforge.net/projects/sshpass/)。 最后，基础的 Conda Python 模块列表可能缺少你工作所需的一些 Python 模块。如果你需要安装一个模块，首选命令是 `conda install package`，但是需要的话也可以使用 `pip`，Conda 会识别用它安装的模块。 ### 结论 Ansible 是一个强大的自动化工具，值得我们去学习。Conda 是一个简单有效的 Python 虚拟环境管理工具。 在你的 MacOS 环境中保持软件安装分离是保持日常工作环境的稳定性和健全性的谨慎方法。Conda 尤其有助于升级你的 Python 版本，将 Ansible 从其他项目中分离出来，并安全地对 Ansible 进行探索和修改。 --- via: <https://opensource.com/article/19/8/using-conda-ansible-administration-macos> 作者：[James Farrell](https://opensource.com/users/jamesf) 选题：[lujun9972](https://github.com/lujun9972) 译者：[heguangzhi](https://github.com/heguangzhi) 校对：[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出
200
OK
If you are a Python developer using MacOS and involved with Ansible administration, you may want to use the Conda package manager to keep your Ansible work separate from your core OS and other local projects. Ansible is based on Python. Conda is not required to make Ansible work on MacOS, but it does make managing Python versions and package dependencies easier. This allows you to use an upgraded Python version on MacOS and keep Python package dependencies separate between your system, Ansible, and other programming projects. There are other ways to install Ansible on MacOS. You could use [Homebrew](https://brew.sh/), but if you are into Python development (or Ansible development), you might find managing Ansible in a Python virtual environment reduces some confusion. I find this to be simpler; rather than trying to load a Python version and dependencies into the system or in **/usr/local**, Conda helps me corral everything I need for Ansible into a virtual environment and keep it all completely separate from other projects. This article focuses on using Conda to manage Ansible as a Python project to keep it clean and separated from other projects. Read on to learn how to install Conda, create a new virtual environment, install Ansible, and test it. ## Prelude Recently, I wanted to learn [Ansible](https://docs.ansible.com/?extIdCarryOver=true&sc_cid=701f2000001OH6uAAG), so I needed to figure out the best way to install it. I am generally wary of installing things into my daily use workstation. I especially dislike applying manual updates to the vendor's default OS installation (a preference I developed from years of Unix system administration). I really wanted to use Python 3.7, but MacOS packages the older 2.7, and I was not going to install any global Python packages that might interfere with the core MacOS system. So, I started my Ansible work using a local Ubuntu 18.04 virtual machine. This provided a real level of safe isolation, but I soon found that managing it was tedious. I set out to see how to get a flexible but isolated Ansible system on native MacOS. Since Ansible is based on Python, Conda seemed to be the ideal solution. ## Installing Conda Conda is an open source utility that provides convenient package- and environment-management features. It can help you manage multiple versions of Python, install package dependencies, perform upgrades, and maintain project isolation. If you are manually managing Python virtual environments, Conda will help streamline and manage your work. Surf on over to the [Conda documentation](https://conda.io/projects/conda/en/latest/index.html) for all the details. I chose the [Miniconda](https://docs.conda.io/en/latest/miniconda.html) Python 3.7 installation for my workstation because I wanted the latest Python version. Regardless of which version you select, you can always install new virtual environments with other versions of Python. To install Conda, download the PKG format file, do the usual double-click, and select the "Install for me only" option. The install took about 158MB of space on my system. After the installation, bring up a terminal to see what you have. You should see: - A new **miniconda3**directory in your**home** - The shell prompt modified to prepend the word "(base)" **.bash_profile**updated with Conda-specific settings Now that the base is installed, you have your first Python virtual environment. 
Running the usual Python version check should prove this, and your PATH will point to the new location: ``` (base) $ which python /Users/jfarrell/miniconda3/bin/python (base) $ python --version Python 3.7.1 ``` Now that Conda is installed, the next step is to set up a virtual environment, then get Ansible installed and running. ## Creating a virtual environment for Ansible I want to keep Ansible separate from my other Python projects, so I created a new virtual environment and switched over to it: ``` (base) $ conda create --name ansible-env --clone base (base) $ conda activate ansible-env (ansible-env) $ conda env list ``` The first command clones the Conda base into a new virtual environment called **ansible-env**. The clone brings in the Python 3.7 version and a bunch of default Python modules that you can add to, remove, or upgrade as needed. The second command changes the shell context to this new **ansible-env** environment. It sets the proper paths for Python and the modules it contains. Notice that your shell prompt changes after the **conda activate ansible-env** command. The third command is not required; it lists what Python modules are installed with their version and other data. You can always switch out of a virtual environment and into another with Conda's **activate** command. This will bring you back to the base: **conda activate base**. ## Installing Ansible There are various ways to install Ansible, but using Conda keeps the Ansible version and all desired dependencies packaged in one place. Conda provides the flexibility both to keep everything separated and to add in other new environments as needed (as I'll demonstrate later). To install a relatively recent version of Ansible, use: ``` (base) $ conda activate ansible-env (ansible-env) $ conda install -c conda-forge ansible ``` Since Ansible is not part of Conda's default channels, the **-c** is used to search and install from an alternate channel. Ansible is now installed into the **ansible-env** virtual environment and is ready to use. ## Using Ansible Now that you have installed a Conda virtual environment, you're ready to use it. First, make sure the node you want to control has your workstation's SSH key installed to the right user account. Bring up a new shell and run some basic Ansible commands: ``` (base) $ conda activate ansible-env (ansible-env) $ ansible --version ansible 2.8.1 config file = None configured module search path = ['/Users/jfarrell/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/jfarrell/miniconda3/envs/ansibleTest/lib/python3.7/site-packages/ansible executable location = /Users/jfarrell/miniconda3/envs/ansibleTest/bin/ansible python version = 3.7.1 (default, Dec 14 2018, 13:28:58) [Clang 4.0.1 (tags/RELEASE_401/final)] (ansible-env) $ ansible all -m ping -u ansible 192.168.99.200 | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python" }, "changed": false, "ping": "pong" } ``` Now that Ansible is working, you can pull your playbooks out of source control and start using them from your MacOS workstation. ## Cloning the new Ansible for Ansible development This part is purely optional; it's only needed if you want additional virtual environments to modify Ansible or to safely experiment with questionable Python modules. 
You can clone your main Ansible environment into a development copy with: ``` (ansible-env) $ conda create --name ansible-dev --clone ansible-env (ansible-env) $ conda activate ansible-dev (ansible-dev) $ ``` ## Gotchas to look out for Occasionally you may get into trouble with Conda. You can usually delete a bad environment with: ``` $ conda activate base $ conda remove --name ansible-dev --all ``` If you get errors that you cannot resolve, you can usually delete the environment directly by finding it in **~/miniconda3/envs** and removing the entire directory. If the base becomes corrupt, you can remove the entire **~/miniconda3** directory and reinstall it from the PKG file. Just be sure to preserve any desired environments you have in **~/miniconda3/envs**, or use the Conda tools to dump the environment configuration and recreate it later. The **sshpass** program is not included on MacOS. It is needed only if your Ansible work requires you to supply Ansible with an SSH login password. You can find the current [sshpass source](https://sourceforge.net/projects/sshpass/) on SourceForge. Finally, the base Conda Python module list may lack some Python modules you need for your work. If you need to install one, the **conda install <package>** command is preferred, but **pip** can be used where needed, and Conda will recognize the installed modules. ## Conclusion Ansible is a powerful automation utility that's worth all the effort to learn. Conda is a simple and effective Python virtual environment management tool. Keeping software installs separated on your MacOS environment is a prudent approach to maintain stability and sanity with your daily work environment. Conda can be especially helpful to upgrade your Python version, separate Ansible from your other projects, and safely hack on Ansible.
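One detail the article leaves implicit is where the `192.168.99.200` host in the ping example comes from: Ansible reads managed hosts from an inventory. As a minimal sketch (the file name, group name, and address below are illustrative, not from the article), you could keep a small static inventory next to your playbooks and point Ansible at it explicitly:

```
# Create a minimal static inventory (group and host are hypothetical)
cat > hosts.ini <<'EOF'
[testservers]
192.168.99.200 ansible_user=ansible
EOF

# Use it explicitly instead of the default /etc/ansible/hosts
ansible all -i hosts.ini -m ping
```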
11,358
Richard Stallman 被迫辞去 FSF 主席的职务
https://itsfoss.com/richard-stallman-controversy/
2019-09-19T09:39:24
[ "FSF", "RMS" ]
https://linux.cn/article-11358-1.html
> > Richard Stallman，自由软件基金会的创建者以及主席，已经辞去主席及董事会职务。此前，因为 Stallman 对于爱泼斯坦事件中的受害者的观点，一小撮活动家以及媒体人发起了清除 Stallman 的运动。这一声明正是在这些活动之后做出的。阅读全文以获得更多信息。 > > > ![](/data/attachment/album/201909/19/093929osnl5488ns1x8ehb.png) ### Stallman 事件的背景概述 如果你不知道这次事件发生的前因后果，请看本段的详细信息。 [Richard Stallman](https://en.wikipedia.org/wiki/Richard_Stallman)，66 岁，是就职于 [MIT](https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology) 的计算机科学家。他最著名的成就就是在 1983 年发起了[自由软件运动](https://en.wikipedia.org/wiki/Free_software_movement)。他也开发了 GNU 项目旗下的部分软件，比如 GCC 和 Emacs。受自由软件运动影响选择使用 GPL 开源协议的项目不计其数。Linux 是其中最出名的项目之一。 [Jeffrey Epstein](https://en.wikipedia.org/wiki/Jeffrey_Epstein)（爱泼斯坦），美国亿万富翁，金融大佬。其涉嫌为社会上流精英提供性交易服务（其中有未成年少女）而被指控成为性犯罪者。在受审期间，爱泼斯坦在监狱中自杀身亡。 [Marvin Lee Minsky](https://en.wikipedia.org/wiki/Marvin_Minsky)，MIT 知名计算机科学家。他在 MIT 建立了人工智能实验室。2016 年，88 岁的 Minsky 逝世。在 Minsky 逝世后，爱泼斯坦事件的一位受害者指认 Minsky 是她未成年时在爱泼斯坦的私人岛屿上被“引导”与之发生性关系的人之一。 但是这些与 Richard Stallman 有什么关系？这要从 Stallman 发到 MIT 计算机科学与人工智能实验室（CSAIL）邮件列表的一封邮件说起，该邮件针对的是 MIT 学生及附属人员就爱泼斯坦的捐款提议进行的抗议。邮件全文翻译如下： > > 周五事件的公告对 Marvin Minsky 来说是不公正的。 > > > “已故的人工智能 ‘先锋’ Marvin Minsky （被控告侵害了爱泼斯坦事件的受害者之一[2]）” > > > 不公正之处在于 “<ruby> 侵害 <rt> assaulting </rt></ruby>” 这个用语。“<ruby> 性侵犯 <rt> sexual assault </rt></ruby>” 这个用语非常模糊，夸大了指控的严重性：宣称某人做了 X 但误导别人，让别人觉得这个人做了 Y，Y 远远比 X 严重。 > > > 上面引用的指控显然就是夸大。报道声称 Minsky 与爱泼斯坦的<ruby> 女眷 <rt> harem </rt></ruby>之一发生了性关系（详见 <https://www.theverge.com/2019/8/9/20798900/marvin-minsky-jeffrey-epstein-sex-trafficking-island-court-records-unsealed>）。我们假设这是真的（我找不到理由不相信）。 > > > “<ruby> 侵害 <rt> assaulting </rt></ruby>” 这个词，意味着他使用了某种暴力。但那篇报道并没有提到这个，只说了他们发生了性关系。 > > > 我们可以想像很多种情况，但最合理的情况是，她在 Marvin 面前表现得像是完全自愿的。假设她是被爱泼斯坦强迫的，那爱泼斯坦有充足的理由让她对大多数人守口如瓶。 > > > 从各种的指控夸大事例中，我总结出，在指控时使用“<ruby> 性侵犯 <rt> sexual assault </rt></ruby>”是绝对错误的。 > > > 无论你想要批判什么行为，你都应该使用特定的词汇来描述，以此避免在批评的实质上产生道德模糊。 > > > ### “清除 Stallman” 的呼吁 ‘爱泼斯坦’在美国是颇具争议的‘话题’。Stallman 对该敏感事件做出如此鲁莽的 “知识陈述” 不会有好结果，事实也是如此。 一位机器人学工程师从她的朋友那里收到了转发的邮件并发起了一个[清除 Stallman 的活动](https://medium.com/@selamie/remove-richard-stallman-fec6ec210794)。她要的不是澄清或者道歉，她只想要清除 Stallman，就算这意味着 “将 MIT 夷为平地” 也在所不惜。 > > 是，至少 Stallman 没有被控强奸任何人。但这就是我们的最高标准吗？这所声望极高的学院坚持的标准就是这样的吗？如果这是麻省理工学院想要捍卫的、想要代表的标准的话，还不如把这破学校夷为平地… > > > 如果有必要的话，就把所有人都清除出去，之后从废墟中建立出更好的秩序。 > > > —— Salem，发起“清除 Stallman”运动的机器人学专业学生 > > > Salem 的声讨最初没有被主流媒体重视。但它还是被反对软件行业内的精英崇拜以及性别偏见的积极分子发现了。 > > [#epstein](https://twitter.com/hashtag/epstein?src=hash&ref_src=twsrc%5Etfw) [#MIT](https://twitter.com/hashtag/MIT?src=hash&ref_src=twsrc%5Etfw) 嗨 记者没有回复我我很生气就自己写了这么个故事。作为 MIT 的校友我还真是高兴啊 <https://t.co/D4V5L5NzPA> > > > — SZJG (@selamjie) [September 12, 2019](https://twitter.com/selamjie/status/1172244207978897408?ref_src=twsrc%5Etfw) > > > > > 是不是对于性侵儿童的 “杰出混蛋” 我们也可以辩护说 “万一这是你情我愿的” <https://t.co/gSYPJ3WOfp> > > > — Tracy Chou 
(@triketora) [September 13, 2019](https://twitter.com/triketora/status/1172443389536555009?ref_src=twsrc%5Etfw) > > > > > 多年来我就一直发推说 Richard RMS Stallman 这人有多恶心 —— 恋童癖、厌女症、还残障歧视 > > > 不可避免的是，每次我这样做，都会有老哥检查我的数据来源，然后说 “这都是几年前的事了！他现在变了！” > > > 变个屁。 <https://t.co/ti2SrlKObp> > > > — Sarah Mei (@sarahmei) [September 12, 2019](https://twitter.com/sarahmei/status/1172283772428906496?ref_src=twsrc%5Etfw) > > > 下面是 Sage Sharp 发起的一个推文串，讲的是 Stallman 的行为如何对科技从业者产生负面影响： > > 大家说下 Richard Stallman 对科技从业者的影响吧，尤其是女性。 [例如: 强奸、乱伦、残障歧视、性交易] > > > [@fsf](https://twitter.com/fsf?ref_src=twsrc%5Etfw) 有必要永久禁止 Richard Stallman 担任自由软件基金会董事会主席。 > > > — Sage Sharp (@\_sagesharp\_) [September 16, 2019](https://twitter.com/_sagesharp_/status/1173637138413318144?ref_src=twsrc%5Etfw) > > > Stallman 从来也算不上一个圣人。他粗鲁、不合时宜，多年来一直在开带有性别歧视的笑话。你可以在[这里](https://geekfeminism.wikia.org/wiki/Richard_Stallman)和[这里](https://medium.com/@selamie/remove-richard-stallman-appendix-a-a7e41e784f88)读到相关内容。 很快这个消息就被 [The Vice](https://www.vice.com/en_us/article/9ke3ke/famed-computer-scientist-richard-stallman-described-epstein-victims-as-entirely-willing)、[每日野兽](https://www.thedailybeast.com/famed-mit-computer-scientist-richard-stallman-defends-epstein-victims-were-entirely-willing)、[未来主义](https://futurism.com/richard-stallman-epstein-scandal)等大媒体报道。他们把 Stallman 描绘成爱泼斯坦的捍卫者。在强烈的抗议声中，[GNOME 执行董事威胁要结束 GNOME 和 FSF 之间的关系](https://blog.halon.org.uk/2019/09/gnome-foundation-relationship-gnu-fsf/)。 最后，Stallman 先是从 MIT 辞职，现在又从 [自由软件基金会](https://www.fsf.org/news/richard-m-stallman-resigns) 辞职。 ![](/data/attachment/album/201909/19/093930wi000kkng4y0qhhm.png) ### 危险的先例？ 我们见识到了，把一个人从他创建并为之工作了三十多年的组织中驱逐出去仅仅需要五天。这甚至还是在 Stallman 没有参与性交易丑闻的情况下。 其中一些 “活动家” 过去也曾[针对过 Linux 的作者 Linus Torvalds](https://www.newyorker.com/science/elements/after-years-of-abusive-e-mails-the-creator-of-linux-steps-aside)。Linux 基金会背后的管理层预见到了科技行业激进主义的增长趋势，因此他们制定了[适用于 Linux 内核开发的行为准则](https://itsfoss.com/linux-code-of-conduct/)并[强制 Torvalds 接受培训以改善他的行为](https://itsfoss.com/torvalds-takes-a-break-from-linux/)。如果他们没有采取纠正措施，可能 Torvalds 也已经被批倒批臭了。 忽视科技界大佬们的鲁莽行为和性别歧视是不可接受的，但是对任何不同意某种流行观点的人就群起声讨、施以私刑，同样是不道德的做法。我不支持 Stallman 和他过去的言论，但我也不能接受他以这种方式（被迫？）辞职。 Techrights 对此有一些有趣的评论，你可以在[这里](http://techrights.org/2019/09/15/media-attention-has-been-shifted/)和[这里](http://techrights.org/2019/09/16/stallman-removed/)看到。 *你对此事有何看法？请文明分享你的观点和意见。过激评论将不会公布。* --- via: <https://itsfoss.com/richard-stallman-controversy/> 作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[name1e5s](https://github.com/name1e5s) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出
200
OK
*Richard Stallman, founder and president of the Free Software Foundation, has resigned as the president and from its board of directors. The announcement has come after a relentless campaign by a few activists and media persons to remove Stallman for his views on the Epstein victims. Read more to get the details.* ![Stallman Controversy](https://itsfoss.com/content/images/wordpress/2019/09/stallman-conroversy.png) ## A little background to the Stallman controversy If you are not aware of the context, let me provide some details. [Richard Stallman](https://en.wikipedia.org/wiki/Richard_Stallman), a 66-year-old computer scientist at [MIT](https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology), is best known for founding the [free software movement](https://en.wikipedia.org/wiki/Free_software_movement) in 1983. He also developed several pieces of software, like GCC and Emacs, under the GNU project. The free software movement inspired a number of projects to choose the open source GPL license. Linux is one of those projects. [Jeffrey Epstein](https://en.wikipedia.org/wiki/Jeffrey_Epstein) was a billionaire American financier. He was convicted as a sex offender for running an escort service (including underage girls) for the rich and elites in his social circle. He committed suicide in his prison cell while still being tried for sex trafficking charges. [Marvin Lee Minsky](https://en.wikipedia.org/wiki/Marvin_Minsky) was an eminent computer scientist at MIT. He founded the Artificial Intelligence lab at MIT. He died at the age of 88 in 2016. After his death, an Epstein victim named Minsky as one of the people she was “directed to have sex” with on Jeffrey Epstein’s private island while she was a minor. So what does all this have to do with Richard Stallman? It all started with an email Stallman sent to the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) mailing list over a proposed protest by MIT students and affiliates regarding Jeffrey Epstein’s donation (to MIT’s AI lab). The announcement of the Friday event does an injustice to Marvin Minsky: “deceased AI ‘pioneer’ Marvin Minsky (who is accused of assaulting one of Epstein’s victims [2])” The injustice is in the word “assaulting”. The term “sexual assault” is so vague and slippery that it facilitates accusation inflation: taking claims that someone did X and leading people to think of it as Y, which is much worse than X. The accusation quoted is a clear example of inflation. The reference reports the claim that Minsky had sex with one of Epstein’s harem. (See https://www.theverge.com/2019/8/9/20798900/marvin-minsky-jeffrey-epstein-sex-trafficking-island-court-records-unsealed.) Let’s presume that was true (I see no reason to disbelieve it). The word “assaulting” presumes that he applied force or violence, in some unspecified way, but the article itself says no such thing. Only that they had sex. We can imagine many scenarios, but the most plausible scenario is that she presented herself to him as entirely willing. Assuming she was being coerced by Epstein, he would have had every reason to tell her to conceal that from most of his associates. I’ve concluded from various examples of accusation inflation that it is absolutely wrong to use the term “sexual assault” in an accusation. Whatever conduct you want to criticize, you should describe it with a specific term that avoids moral vagueness about the nature of the criticism. ## The call for removing Stallman ‘Epstein’ is an extremely controversial ‘topic’ in the USA. 
Stallman’s reckless ‘intellectual discourse’ on a sensitive matter like this would not have gone well, and it didn’t go well.

A robotics engineer received this forwarded email from her friend and started a [campaign to remove Stallman](https://medium.com/@selamie/remove-richard-stallman-fec6ec210794). She didn’t want a clarification or apology. All she wanted was to remove Stallman, even if it meant ‘burning MIT to the ground’.

At least Richard Stallman is not accused of raping anyone. But is that our highest standard? The standard that this prestigious institution holds itself to? If this is what MIT wants to defend; if this is what MIT wants to stand for, then, yes, burn it to the ground…

Salem, Robotics student who started Remove Stallman campaign

…Remove everyone, if we must, and let something much better be built from the ashes.

Salem’s rant was initially ignored by mainstream digital media. But it was picked up by activists who fight against meritocracy and gender bias in the software industry.

A Twitter thread by Sage Sharp described how Stallman’s behavior negatively impacts people in tech:

It’s not that Stallman was ever a saint. His crude, insensitive and sexist jokes have been doing the rounds for years. You can read about it [here](https://geekfeminism.wikia.org/wiki/Richard_Stallman) and [here](https://medium.com/@selamie/remove-richard-stallman-appendix-a-a7e41e784f88).

Soon the news was picked up by big media houses like [The Vice](https://www.vice.com/en_us/article/9ke3ke/famed-computer-scientist-richard-stallman-described-epstein-victims-as-entirely-willing), [The Daily Beast](https://www.thedailybeast.com/famed-mit-computer-scientist-richard-stallman-defends-epstein-victims-were-entirely-willing), [Futurism](https://futurism.com/richard-stallman-epstein-scandal) etc. They painted Stallman as a defender of Jeffrey Epstein. Amidst the outcry, the [executive director of GNOME threatened to end the relationship between GNOME and FSF](https://blog.halon.org.uk/2019/09/gnome-foundation-relationship-gnu-fsf/).

Eventually, Stallman resigned first from MIT and now [from the Free Software Foundation](https://www.fsf.org/news/richard-m-stallman-resigns).

![Richard Stallman](https://itsfoss.com/content/images/wordpress/2019/09/richard-stallman-800x94.png)

## A dangerous precedent?

All it took was five days of activism to remove a person from an organization he had created and worked for, for more than thirty years. And this happened even though Stallman wasn’t even remotely involved in the sex trafficking scandal.

Some of these ‘activists’ have also targeted [Linux creator Linus Torvalds in the past](https://www.newyorker.com/science/elements/after-years-of-abusive-e-mails-the-creator-of-linux-steps-aside). The management behind the Linux Foundation foresaw the growing trend of activism in the tech industry, and hence they put a [code of conduct for Linux kernel development](https://itsfoss.com/linux-code-of-conduct/) in place and forced [Torvalds to undergo training to improve his behavior](https://itsfoss.com/torvalds-takes-a-break-from-linux/). If they had not taken that corrective step, Torvalds would probably have been a goner by now.

Ignoring the reckless and sexist behavior of tech stalwarts is not acceptable, but neither is the mob mentality of lynching anyone who disagrees with a certain popular view. I don’t agree with Stallman and his past remarks, but I am also not happy that he has been (forced to?) resign in this manner. 
Techrights has some interesting takes on it that you can read [here](http://techrights.org/2019/09/15/media-attention-has-been-shifted/) and [here](http://techrights.org/2019/09/16/stallman-removed/).

*What do you think of the entire episode? Please share your views and opinions, but in a civilized manner. Abusive comments will not be published; arguments and discussion must remain civil.*
11,359
如何在 Linux Mint 中更换主题
https://itsfoss.com/install-themes-linux-mint/
2019-09-19T10:03:56
[ "Cinnamon" ]
https://linux.cn/article-11359-1.html
![](/data/attachment/album/201909/19/100317ixxp3y1l7lljl47a.jpg)

一直以来,使用 Cinnamon 桌面环境的 Linux Mint 都是一种卓越的体验。这也是[为何我喜爱 Linux Mint](https://itsfoss.com/tiny-features-linux-mint-cinnamon/)的主要原因之一。

自从 Mint 的开发团队[开始更为严肃地对待设计](https://itsfoss.com/linux-mint-new-design/),“桌面主题” 应用便成为了更换新主题、图标、按钮样式、窗口边框以及鼠标指针的重要方式,当然你也可以直接通过它安装新的主题。感兴趣么?让我们开始吧。

### 如何在 Linux Mint 中更换主题

在菜单中搜索主题并打开主题应用。

![Theme Applet provides an easy way of installing and changing themes](/data/attachment/album/201909/19/100358jrpo6nt5k9trhrhn.jpg)

在应用中有一个“添加/删除”按钮,非常简单吧。点击它,我们可以看到按流行程度排序的 Cinnamon Spices(Cinnamon 的官方插件库)的主题。

![Installing new themes in Linux Mint Cinnamon](/data/attachment/album/201909/19/100401w51vuff01z8bb8t0.jpg)

要安装主题,你所要做的就是点击你喜欢的主题,然后等待它下载。之后,主题将在应用第一页的“Desktop”选项中显示可用。只需双击已安装的主题之一就可以开始使用它。

![Changing themes in Linux Mint Cinnamon](/data/attachment/album/201909/19/100404dogcebhher0b7zb4.jpg)

下面是默认的 Linux Mint 外观:

![Linux Mint Default Theme](/data/attachment/album/201909/19/100407s2cn5lrrldd5d55j.jpg)

这是在我更换主题之后:

![Linux Mint with Carta Theme](/data/attachment/album/201909/19/100408n1feoypwfw6ec1yu.jpg)

所有的主题都可以在 Cinnamon Spices 网站上获得更多的信息和更大的截图,这样你就可以更好地了解你的系统的外观。

* [浏览 Cinnamon 主题](https://cinnamon-spices.linuxmint.com/themes)

### 在 Linux Mint 中安装第三方主题

> “我在另一个网站上看到了这个优异的主题,但 Cinnamon Spices 网站上没有……”

Cinnamon Spices 集成了许多优秀的主题,但你仍然会发现,你看到的主题并没有被 Cinnamon Spices 官方网站收录。

这时你可能会想:要是有别的办法就好了,对么?确实有(我的意思是……当然有)。我们可以在其他网站上找到一些很酷的主题。

我推荐你去 Cinnamon Look 浏览一下那儿的主题。如果你喜欢什么,就下载吧。

* [在 Cinnamon Look 中获取更多主题](https://www.cinnamon-look.org/)

下载了首选主题之后,你现在将得到一个压缩文件,其中包含安装所需的所有内容。提取它并保存到 `~/.themes`。迷糊么?`~` 代表你的家目录,所以完整路径是:`/home/{YOURUSER}/.themes`。

然后跳转到你的家目录。按 `Ctrl+H` 来[显示 Linux 中的隐藏文件](https://itsfoss.com/hide-folders-and-show-hidden-files-in-ubuntu-beginner-trick/)。如果没有看到 `.themes` 文件夹,创建一个新文件夹并命名为 `.themes`。记住,文件夹名称开头的点很重要。

将提取的主题文件夹从下载目录复制到你的家目录中的 `.themes` 文件夹中。

最后,在上面提到的应用中查找已安装的主题。(这些安装步骤的命令行示意见文末。)

> 注记
>
> 请记住,主题必须支持 Cinnamon 才能使用 —— 尽管 Cinnamon 是从 GNOME 复刻而来,但并不是所有的 GNOME 主题都适用于 Cinnamon。

改变主题是 Cinnamon 定制工作的一部分。你还可以[通过更改图标来更改 Linux Mint 的外观](https://itsfoss.com/install-icon-linux-mint/)。

我希望你现在已经知道如何在 Linux Mint 中更改主题了。快去选取你喜欢的主题吧。

--- via: <https://itsfoss.com/install-themes-linux-mint/> 作者:[It’s FOSS](https://itsfoss.com/author/itsfoss/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[qfzy1233](https://github.com/qfzy1233) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
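作为对上文手动安装方法的补充,下面是一段示意性的 shell 片段(其中 `theme.zip` 是假设的文件名,请替换为你实际下载的压缩包):

```
mkdir -p ~/.themes                        # 如果 .themes 目录不存在则创建(注意开头的点)
unzip ~/Downloads/theme.zip -d ~/.themes  # 将主题解压到 .themes 目录
# 若下载的是 tar 包,可改用:tar -xf ~/Downloads/theme.tar.xz -C ~/.themes
ls ~/.themes                              # 确认主题文件夹已就位,然后到主题应用中启用
```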
200
OK
![Warp Terminal](/assets/images/warp-terminal.webp) ![Warp Terminal](/assets/images/warp-terminal.webp) Using Linux Mint is, from the start, a unique experience, because of its flagship [desktop environment](https://itsfoss.com/what-is-desktop-environment/): Cinnamon. This is one of the [main reasons why I love Linux Mint over Ubuntu](https://itsfoss.com/linux-mint-vs-ubuntu/). Since Linux Mint’s dev team started to take design more seriously, the “Themes” option has become simpler, but the advance settings still exist to customize the look and feel with greater control. You can change the theme, cursor icons, app icons, and more. So, how do you do that? Let me tell you about it. ## Steps to change themes in Linux Mint In Linux Mint, you can use the default Themes app to get started. Here, you can either select from the default preset themes or install new third-party themes. Search for themes in the menu and open the Themes applet. ![Open Themes app from the Cinnamon Panel Menu button](https://itsfoss.com/content/images/2024/02/select-themes-from-application-menu.png) Latest Cinnamon versions will show you a minimal themes settings window. Here, you can change the themes like Light/Dark mode, Accent Colors, etc. ![Simplistic Theme Settings on Cinnamon Themes Application. You can click on “Advanced Settings” to access detailed theme settings.](https://itsfoss.com/content/images/2024/02/default-theme-changes-1.png) These settings are enough for minimal tweaks. But if you are into more customizations, you can select the “**Advanced Settings**” button at the bottom of the screen. This will open the full theming application. ![Selecting a theme from the themes tab of the Themes application. Here, various application themes are listed.](https://itsfoss.com/content/images/2024/02/various-additional-themes-available.png) Here, you can change the Application themes, Panel (Desktop) themes, Icons and Cursor themes. **Suggested Read 📖** [8 Reasons Why Linux Mint is Better Than UbuntuLinux Mint is better for beginners, but why so? Here are the reasons behind it.](https://itsfoss.com/linux-mint-vs-ubuntu/)![](https://itsfoss.com/content/images/2023/07/why-mint-is-better-than-ubuntu.png) ![](https://itsfoss.com/content/images/2023/07/why-mint-is-better-than-ubuntu.png) ## fInstalling third-party themes in Linux Mint from within Themes App Inside the application, there’s an **“Add/Remove”** section, pretty simple, right? Just click on it, and you can see Cinnamon Spices (Cinnamon’s official add-ons' repository) themes ordered first by popularity. ![Download the desktop theme from the Cinnamon themes app Add/Remove section.](https://itsfoss.com/content/images/2024/02/themes-as-add-ons-1.png) To install one, all you need to do is click on your preferred one and wait for it to download. Thereafter, the theme will be available at the “Desktop” option under "Themes" section. And, then, click on one of the installed themes to select it as the new theme. 
![Select the downloaded theme from the list to apply it.](https://itsfoss.com/content/images/2024/02/apply-an-addon-desktop-theme-from-desktop-tab.png) Here’s the default Linux Mint look: ![Default theme on Linux Mint](https://itsfoss.com/content/images/2024/02/default-theme-on-linux-mint.png) And here’s after I changed the theme (Semabe Azure Less Transparent): ![A new theme is applied to Cinnamon.](https://itsfoss.com/content/images/2024/02/applied-a-desktop-theme.png) All the themes are also available at the Cinnamon Spices site for more information and bigger screenshots so you can take a better look at how your system will look before you download/apply them. ## Installing third-party themes in Linux Mint from Cinnamon Look *“I saw this amazing theme on another site, and it is not available at Cinnamon Spices…”* Cinnamon Spices has a good collection of themes, but you will still find that the theme you saw some place else is not available on the official Cinnamon website. Well, it would be nice if there was another way, no? You might imagine that there is and there is. You can visit Cinnamon Look (*you can also use GNOME-Look, more on that below*) and browse themes there. If you like something, download it. ![Download a theme from Cinnamon Look Website](https://itsfoss.com/content/images/2024/02/download-themes-from-cinnamon-look.webp) After the preferred theme is downloaded, you will have a compressed file now with all you need for the installation. Extract it and save at `~/.themes` . The “~” file path is actually your home folder: `/home/{YOURUSER}/.themes` . So, go to your Home directory and press Ctrl+H to [show hidden files in Linux](https://itsfoss.com/hide-folders-and-show-hidden-files-in-ubuntu-beginner-trick/). If you don’t see a `.themes` folder, create a new folder and name **.themes**. Remember that the dot at the beginning of the folder name is important. Copy the extracted theme folder from your **Downloads** directory to the *.themes* folder in your Home directory. ![The extracted theme folders are placed inside the themes directory on Home.](https://itsfoss.com/content/images/2024/02/themes-folder-with-contents.png) After that, look for the installed theme on the application tab of the themes app. ![Newly added themes are accessible from the respective section of the themes app.](https://itsfoss.com/content/images/2024/02/newly-installed-themes-in-applications-tab-of-themes-app.png) You can see that the new theme just changes the entire look of Cinnamon! ![A New theme is applied to the desktop and applications in Cinnamon Desktop](https://itsfoss.com/content/images/2024/02/a-new-theme-applied-1.png) ## Installing third-party themes in Linux Mint using GNOME-Look Often the themes found on Cinnamon Looks are a bit outdated or less frequently developed. Developers of many modern GNOME themes also release a theme variant for Cinnamon. While not all themes have it, you can download and check those that have Cinnamon support. ![Fluent Round, a modern GNOME theme has a Cinnamon folder, which means, it will work more or less good on Cinnamon.](https://itsfoss.com/content/images/2024/02/cinnamon-folder-inside-a-popular-gnome-theme.png) In the above screenshot, you can see that there is a “Cinnamon” folder inside the popular GNOME theme “**Fluent Round**”. Thus, if you copy such themes (entire theme folder) to *~/.themes* you can apply it to both desktop and applications. 
![The Fluent Round Light theme is applied to the Cinnamon; both desktop and application themes.](https://itsfoss.com/content/images/2024/02/fluent-theme-applied-to-cinnamon.png) Changing theme is one part of Cinnamon customization. You can also [change the look of Linux Mint by changing the icons](https://itsfoss.com/install-icon-linux-mint/). [How To Install Icon Themes In Linux Mint Cinnamon [Beginner Tip]Brief: This quick tutorial for beginners shows how to install and change icon themes in Linux Mint. If you think the default Mint themes and icons are not good enough for you, why not change it? In this quick tip for beginners, we shall see how to install icon themes](https://itsfoss.com/install-icon-linux-mint/)![](https://itsfoss.com/content/images/wordpress/2014/01/Moka_Linux_Mint_16.jpeg) ![](https://itsfoss.com/content/images/wordpress/2014/01/Moka_Linux_Mint_16.jpeg) ## Revert Back to Default Didn’t like the applied setup? You need not worry! You can simply revert it back to the default using one single click. Open the themes app. By default, you should be on the Simplistic view. Now, from “Custom”, make it “Mint Y”. That’s it. You are back to the default Linux Mint themes. Reverting the themes back to the default Cinnamon theme. ## Wrapping Up I hope now you know how to change themes in Linux Mint. Which theme are you going to use? You can also apply themes to Flatpak apps, so that the system get a uniform look everywhere! [Flatpak Apps Look Out of Place? Here’s How to Apply GTK Themes on Flatpak ApplicationsFlatpak applications don’t play along well with system themes because of their sandbox nature. With a little effort, you can make them work with system themes.](https://itsfoss.com/flatpak-app-apply-theme/)![](https://itsfoss.com/content/images/wordpress/2021/11/apply-gtk-themes-on-flatpak-applications.png) ![](https://itsfoss.com/content/images/wordpress/2021/11/apply-gtk-themes-on-flatpak-applications.png) For more customize tips, here is a concise guide on customizing the Cinnamon Desktop: [7 Ways to Customize Cinnamon Desktop in Linux MintThe traditional Cinnamon desktop can be tweaked to look different and customized for your needs. Here’s how to do that.](https://itsfoss.com/customize-cinnamon-desktop/)![](https://itsfoss.com/content/images/wordpress/2021/02/customize-cinnamon.png) ![](https://itsfoss.com/content/images/wordpress/2021/02/customize-cinnamon.png) **back in 2020. It was further edited and enhanced by our author to reflect latest information.** **João Gondim**
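As a rough companion to the manual steps above, here is a minimal shell sketch; the archive name `theme.zip` is a made-up placeholder, so substitute your actual download:

```
mkdir -p ~/.themes                        # create the hidden .themes folder if it does not exist
unzip ~/Downloads/theme.zip -d ~/.themes  # extract the downloaded theme into it
# for a tar archive, use instead: tar -xf ~/Downloads/theme.tar.xz -C ~/.themes
ls ~/.themes                              # verify the theme folder is in place, then enable it in the Themes app
```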
11,360
如何查看 Linux Mint 版本号和代号
https://itsfoss.com/check-linux-mint-version/
2019-09-19T10:30:48
[ "Mint", "版本" ]
https://linux.cn/article-11360-1.html
Linux Mint 每两年发布一次主版本(如 Mint 19),每六个月左右发布一次次版本(如 Mint 19.1、19.2 等)。 你可以自己升级 Linux Mint 版本,而次版本也会自动更新。

在所有这些版本中,你可能想知道你正在使用的是哪个版本。了解 Linux Mint 版本号可以帮助你确定某个特定软件是否适用于你的系统,或者检查你的系统是否已到达支持周期的终点。

你可能需要 Linux Mint 版本号有多种原因,你也有多种方法可以获取此信息。让我向你展示用图形和命令行的方式获取 Mint 版本信息。

* 使用命令行查看 Linux Mint 版本信息
* 使用 GUI(图形用户界面)查看 Linux Mint 版本信息

### 使用终端查看 Linux Mint 版本号的方法

![](/data/attachment/album/201909/19/103049kgb34obr4sggy06l.png)

我将介绍几种使用非常简单的命令查看 Linux Mint 版本号和代号的方法。

你可以从 “菜单” 中打开终端,或按 `CTRL+ALT+T`(默认热键)打开。

本文中的最后两个命令还会输出你当前的 Linux Mint 版本所基于的 Ubuntu 版本。

#### 1、/etc/issue

从最简单的 CLI 方法开始,你可以打印出 `/etc/issue` 的内容来检查你的版本号和代号:

```
mint@mint:~$ cat /etc/issue
Linux Mint 19.2 Tina \n \l
```

#### 2、hostnamectl

![hostnamectl](/data/attachment/album/201909/19/103050nsqsf8jjrl3fi8ld.jpg)

这个命令(`hostnamectl`)打印的信息几乎与“系统信息”中的信息相同。 你可以看到你的操作系统(带有版本号)以及你的内核版本。

#### 3、lsb\_release

`lsb_release` 是一个非常简单的 Linux 实用程序,用于查看有关你的发行版本的基本信息:

```
mint@mint:~$ lsb_release -a
No LSB modules are available.
Distributor ID: LinuxMint
Description: Linux Mint 19.2 Tina
Release: 19.2
Codename: tina
```

**注:** 我使用 `-a` 选项打印所有参数,但你也可以使用 `-s` 作为简写格式,`-d` 用于描述等(用 `man lsb_release` 查看所有选项)。

#### 4、/etc/linuxmint/info

![/etc/linuxmint/info](/data/attachment/album/201909/19/103051hn7kpaina2v32k3r.jpg)

这不是命令,而是 Linux Mint 系统上的文件。只需使用 `cat` 命令将其内容打印到终端,然后查看你的版本号和代号。

#### 5、使用 /etc/os-release 文件也可以获取到 Ubuntu 代号

![/etc/os-release](/data/attachment/album/201909/19/103054dvablfs9mmvknyg1.jpg)

Linux Mint 基于 Ubuntu。每个 Linux Mint 版本都基于不同的 Ubuntu 版本。了解你的 Linux Mint 版本所基于的 Ubuntu 版本,有助于你在必须使用 Ubuntu 版本号的情况下使用(比如你需要在 [Linux Mint 中安装最新的 Virtual Box](https://itsfoss.com/install-virtualbox-ubuntu/) 并添加仓库时)。

`os-release` 则是另一个类似于 `info` 的文件,向你展示 Linux Mint 所基于的 Ubuntu 版本代号。

#### 6、使用 /etc/upstream-release/lsb-release 只获取 Ubuntu 的基本信息

如果你只想要查看有关 Ubuntu 的基本信息,请输出 `/etc/upstream-release/lsb-release`:

```
mint@mint:~$ cat /etc/upstream-release/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04 LTS"
```

特别提示:[你可以使用 uname 命令查看 Linux 内核版本](https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/)。

```
mint@mint:~$ uname -r
4.15.0-54-generic
```

**注:** `-r` 代表 release,你可以使用 `man uname` 查看其他信息。

### 使用 GUI 查看 Linux Mint 版本信息

如果你不习惯使用终端和命令行,可以使用图形方法。如你所料,这种方法非常直观明了。

打开“菜单” (左下角),然后转到“偏好设置 > 系统信息”:

![Linux Mint 菜单](/data/attachment/album/201909/19/103055vag6cx6ihzc6lihg.jpg)

或者,在菜单中,你可以搜索“System Info”:

![Menu Search System Info](/data/attachment/album/201909/19/103056ghtppyht0adhiptt.jpg)

在这里,你可以看到你的操作系统(包括版本号),内核和桌面环境的版本号:

![System Info](/data/attachment/album/201909/19/103057gun3aadnrrrrahrn.png)

### 总结

我已经介绍了一些不同的方法,用这些方法你可以快速查看你正在使用的 Linux Mint 的版本和代号(以及所基于的 Ubuntu 版本和内核)。另外,文末附有一段把这些方法整合进脚本的示意片段。我希望这个初学者教程对你有所帮助。请在评论中告诉我们你最喜欢哪个方法!

--- via: <https://itsfoss.com/check-linux-mint-version/> 作者:[Sergiu](https://itsfoss.com/author/sergiu/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Morisun029](https://github.com/Morisun029) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
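如果想在脚本中复用上面介绍的方法,下面是一个示意片段,把几条命令的输出汇总打印(仅作演示,假设在 Linux Mint 上运行):

```
#!/bin/bash
# 示意脚本:汇总打印 Mint 版本、Ubuntu 基础版本和内核版本
echo "Mint 版本:$(lsb_release -sd)"
echo "Ubuntu 基础:$(grep DISTRIB_DESCRIPTION /etc/upstream-release/lsb-release | cut -d'"' -f2)"
echo "内核:$(uname -r)"
```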
200
OK
![Warp Terminal](/assets/images/warp-terminal.webp) ![Warp Terminal](/assets/images/warp-terminal.webp) Linux Mint has a major release (like Mint 19) every two years and minor releases (like Mint 19.1, 19.2 etc) every six months or so. You can upgrade Linux Mint version on your own or it may get automatically update for the minor releases. Between all these release, you may wonder which Linux Mint version you are using. Knowing the version number is also helpful in determining whether a particular software is available for your system or if your system has reached end of life. There could be a number of reasons why you might require the Linux Mint version number and there are various ways you can obtain this information. Let me show you both graphical and command line ways to get the Mint release information. ## Ways to check Linux Mint version number using terminal I’ll go over several ways you can check your Linux Mint version number and codename using very simple commands. You can open up a **terminal **from the **Menu** or by pressing **CTRL+ALT+T** (default hotkey). The **last two entries** in this list also output the **Ubuntu release** your current Linux Mint version is based on. ### 1. /etc/issue Starting out with the simplest CLI method, you can print out the contents of **/etc/issue **to check your **Version Number** and **Codename**: ``` mint@mint:~$ cat /etc/issue Linux Mint 19.2 Tina \n \l ``` ### 2. hostnamectl ![hostnamectl](https://itsfoss.com/content/images/wordpress/2019/09/hostnamectl.jpg) This single command (**hostnamectl**) prints almost the same information as that found in **System Info**. You can see your **Operating System** (with** version number**), as well as your **kernel version**.3. ### 3. lsb_release **lsb_release** is a very simple Linux utility to check basic information about your distribution: ``` mint@mint:~$ lsb_release -a No LSB modules are available. Distributor ID: LinuxMint Description: Linux Mint 19.2 Tina Release: 19.2 Codename: tina ``` **Note:** *I used the –* *a* *tag to print all parameters, but you can also use***-s**for short form,**-d**for description etc. (check**man lsb_release**for all tags).### 4. /etc/linuxmint/info ![Check Linux Mint version number with /etc/linuxmint/info](https://itsfoss.com/content/images/wordpress/2019/09/linuxmint_info.jpg) This isn’t a command, but rather a file on any Linux Mint install. Simply use cat command to print it’s contents to your terminal and see your **Release Number** and **Codename**. ### 5. Use /etc/os-release to get Ubuntu codename as well ![Check Linux Mint version number with /etc/os-release](https://itsfoss.com/content/images/wordpress/2019/09/os_release.jpg) Linux Mint is based on Ubuntu. Each Linux Mint release is based on a different Ubuntu release. Knowing which Ubuntu version your Linux Mint release is based on is helpful in cases where you’ll have to use Ubuntu codename while adding a repository like when you need to [install the latest Virtual Box in Linux Mint](https://itsfoss.com/install-virtualbox-ubuntu/). **os-release **is yet another file similar to **info**, showing you the codename for the **Ubuntu** release your Linux Mint is based on. ### 6. 
Use /etc/upstream-release/lsb-release to get only Ubuntu base info If you only** **want to see information about the **Ubuntu** base, output **/etc/upstream-release/lsb-release**: ``` mint@mint:~$ cat /etc/upstream-release/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=18.04 DISTRIB_CODENAME=bionic DISTRIB_DESCRIPTION="Ubuntu 18.04 LTS" ``` Bonus Tip: [You can just check Linux kernel version](https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/) with the **uname** command: ``` mint@mint:~$ uname -r 4.15.0-54-generic ``` **Note:** **-r** stands for **release**, however you can check the other flags with **man uname**. ## Check Linux Mint version information using GUI If you are not comfortable with the terminal and commands, you can use the graphical method. As you would expect, this one is pretty straight-forward. Open up the **Menu** (bottom-left corner) and then go to **Preferences > System Info**: ![Linux Mint Menu](https://itsfoss.com/content/images/wordpress/2019/09/linux_mint_menu.jpg) Alternatively, in the Menu you can search for **System Info**: ![Menu Search System Info](https://itsfoss.com/content/images/wordpress/2019/09/menu_search_system_info.jpg) Here you can see both your operating system (including version number), your kernel and the version number of your DE: ![Linux Mint System Info](https://itsfoss.com/content/images/wordpress/2019/09/system_info.png) **Wrapping Up** I have covered some different ways you can quickly check the version and name (as well as the Ubuntu base and kernel) of the Linux Mint release you are running. I hope you found this beginner tutorial helpful. Let us know in the comments which one is your favorite method!
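As a small addendum, the files documented above are also easy to consume from a script. A minimal sketch (the field and key names are the ones typically found on recent Mint releases):

```
#!/bin/bash
# Sketch: read the release info programmatically from the files shown above
. /etc/os-release && echo "$NAME $VERSION (Ubuntu base: $UBUNTU_CODENAME)"
grep -E '^(RELEASE|CODENAME)=' /etc/linuxmint/info   # Mint's own release file
uname -r                                             # running kernel release
```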
11,362
用 Bash 脚本发送新用户帐户创建的邮件
https://www.2daygeek.com/linux-shell-script-to-monitor-user-creation-send-email/
2019-09-20T09:33:33
[ "用户" ]
https://linux.cn/article-11362-1.html
![](/data/attachment/album/201909/20/093308a615tcuiopctvp5t.jpg)

出于某些原因,你可能需要跟踪 Linux 上的新用户创建信息。同时,你可能需要通过邮件发送详细信息。这或许是审计工作的一部分,或者安全团队出于跟踪目的可能希望对此进行监控。

我们可以通过其他方式进行此操作,正如我们在上一篇文章中已经描述的那样。

* [在系统中创建新用户帐户时发送邮件的 Bash 脚本](https://www.2daygeek.com/linux-bash-script-to-monitor-user-creation-send-email/)

Linux 有许多开源监控工具可以使用。但我不认为它们有办法跟踪新用户创建过程,并在发生时提醒管理员。

那么我们怎样才能做到这一点?

我们可以编写自己的 Bash 脚本来实现这一目标。我们过去写过许多有用的 shell 脚本。如果你想了解,请进入下面的链接。

* [如何使用 shell 脚本自动化日常活动?](https://www.2daygeek.com/category/shell-script/)

### 这个脚本做了什么?

这个脚本会每天两次(一天的开始和结束)备份 `/etc/passwd` 文件,这将使你能够获取指定日期的新用户创建详细信息。

我们需要添加以下两个 cron 任务来复制 `/etc/passwd` 文件。

```
# crontab -e

1 0 * * * cp /etc/passwd /opt/scripts/passwd-start-$(date +"%Y-%m-%d")
59 23 * * * cp /etc/passwd /opt/scripts/passwd-end-$(date +"%Y-%m-%d")
```

它使用 `diff` 命令来检测文件之间的差异,如果发现与昨日有任何差异,脚本将向指定 email 发送新用户详细信息。

我们不用经常运行此脚本,因为用户创建不经常发生。但是,我们计划每天运行一次此脚本。这样,你可以获得有关新用户创建的综合报告。

**注意:**我们在脚本中使用了我们的电子邮件地址进行演示。因此,请将其替换为你自己的电子邮件地址。(文末还附有一段手动验证脚本管道的示意步骤。)

```
# vi /opt/scripts/new-user-detail.sh

#!/bin/bash
mv /opt/scripts/passwd-start-$(date --date='yesterday' '+%Y-%m-%d') /opt/scripts/passwd-start
mv /opt/scripts/passwd-end-$(date --date='yesterday' '+%Y-%m-%d') /opt/scripts/passwd-end
ucount=$(diff /opt/scripts/passwd-start /opt/scripts/passwd-end | grep ">" | cut -d":" -f6 | cut -d"/" -f3 | wc -l)
if [ $ucount -gt 0 ]
then
    SUBJECT="ATTENTION: New User Account is created on server : `date --date='yesterday' '+%b %e'`"
    MESSAGE="/tmp/new-user-logs.txt"
    TO="[email protected]"
    echo "Hostname: `hostname`" >> $MESSAGE
    echo -e "\n" >> $MESSAGE
    echo "The New User Details are below." >> $MESSAGE
    echo "+------------------------------+" >> $MESSAGE
    diff /opt/scripts/passwd-start /opt/scripts/passwd-end | grep ">" | cut -d":" -f6 | cut -d"/" -f3 >> $MESSAGE
    echo "+------------------------------+" >> $MESSAGE
    mail -s "$SUBJECT" "$TO" < $MESSAGE
    rm $MESSAGE
fi
```

给 `new-user-detail.sh` 文件添加可执行权限。

```
$ chmod +x /opt/scripts/new-user-detail.sh
```

最后添加一个 cron 任务来自动执行此操作。它在每天早上 7 点运行。

```
# crontab -e

0 7 * * * /bin/bash /opt/scripts/new-user-detail.sh
```

**注意:**你每天早上 7 点都会收到一封关于昨日详情的邮件提醒。

**输出:**输出类似于下面这样。

```
# cat /tmp/new-user-logs.txt

Hostname: CentOS.2daygeek.com

The New User Details are below.
+------------------------------+
tuser3
+------------------------------+
```

--- via: <https://www.2daygeek.com/linux-shell-script-to-monitor-user-creation-send-email/> 作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
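在把脚本交给 cron 之前,可以先手动验证其中的 `diff` 管道。下面是一个示意测试(假设以 root 权限执行,`/opt/scripts` 目录已存在;`tuser4` 是临时的测试用户名):

```
cp /etc/passwd /opt/scripts/passwd-start   # 模拟一天开始时的快照
useradd tuser4                             # 创建一个测试用户
cp /etc/passwd /opt/scripts/passwd-end     # 模拟一天结束时的快照
diff /opt/scripts/passwd-start /opt/scripts/passwd-end | grep ">" | cut -d":" -f6 | cut -d"/" -f3
# 预期输出:tuser4
userdel -r tuser4                          # 清理测试用户
```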
404
Not Found
null
11,364
虚拟机管理器(Virtual Machine Manager)简介
https://opensource.com/article/19/9/introduction-virtual-machine-manager
2019-09-20T11:34:00
[ "虚拟机" ]
https://linux.cn/article-11364-1.html
> > virt-manager 为 Linux 虚拟化提供了全方位的选择。 > > > ![](/data/attachment/album/201909/20/113434dxbbp3ttmxbhmnnm.jpg) 在我关于 [GNOME Boxes](https://wiki.gnome.org/Apps/Boxes) 的[系列文章](https://opensource.com/sitewide-search?search_api_views_fulltext=GNOME%20Box)中,我已经解释了 Linux 用户如何能够在他们的桌面上快速启动虚拟机。当你只需要简单的配置时,Box 可以轻而易举地创建虚拟机。 但是,如果你需要在虚拟机中配置更多详细信息,那么你就需要一个工具,为磁盘、网卡(NIC)和其他硬件提供全面的选项。这时就需要 [虚拟机管理器(Virtual Machine Manager)](https://virt-manager.org/)(virt-manager)了。如果在应用菜单中没有看到它,你可以从包管理器或命令行安装它: * 在 Fedora 上:`sudo dnf install virt-manager` * 在 Ubuntu 上:`sudo apt install virt-manager` 安装完成后,你可以从应用菜单或在命令行中输入 `virt-manager` 启动。 ![Virtual Machine Manager's main screen](/data/attachment/album/201909/20/113502hmwwmlaaww5ojxm0.png "Virtual Machine Manager's main screen") 为了演示如何使用 virt-manager 创建虚拟机,我将设置一个 Red Hat Enterprise Linux 8 虚拟机。 首先,单击 “<ruby> 文件 <rt> File </rt></ruby>” 然后点击 “<ruby> 新建虚拟机 <rt> New Virtual Machine </rt></ruby>”。Virt-manager 的开发者已经标记好了每一步(例如,“<ruby> 第 1 步,共 5 步 <rt> Step 1 of 5 </rt></ruby>”)来使其变得简单。单击 “<ruby> 本地安装介质 <rt> Local install media </rt></ruby>” 和 “<ruby> 下一步 <rt> Forward </rt></ruby>”。 ![Step 1 virtual machine creation](/data/attachment/album/201909/20/113503ew9gey9m9gy9k0oq.png "Step 1 virtual machine creation") 在下个页面中,选择要安装的操作系统的 ISO 文件。(RHEL 8 镜像位于我的下载目录中。)Virt-manager 自动检测操作系统。 ![Step 2 Choose the ISO File](/data/attachment/album/201909/20/113504sntswo8naw8arngq.png "Step 2 Choose the ISO File") 在步骤 3 中,你可以指定虚拟机的内存和 CPU。默认值为内存 1,024MB 和一个 CPU。 ![Step 3 Set CPU and Memory](/data/attachment/album/201909/20/113505dhehrv0e44747z2v.png "Step 3 Set CPU and Memory") 我想给 RHEL 充足的配置来运行,我使用的硬件配置也充足,所以我将它们(分别)增加到 4,096MB 和两个 CPU。 下一步为虚拟机配置存储。默认设置是 10GB 硬盘。(我保留此设置,但你可以根据需要进行调整。)你还可以选择现有磁盘镜像或在自定义位置创建一个磁盘镜像。 ![Step 4 Configure VM Storage](/data/attachment/album/201909/20/113507tfipllbzlpvkk299.png "Step 4 Configure VM Storage") 步骤 5 是命名虚拟机并单击“<ruby> 完成 <rt> Finish </rt></ruby>”。这相当于创建了一台虚拟机,也就是 GNOME Boxes 中的一个 Box。虽然技术上讲是最后一步,但你有几个选择(如下面的截图所示)。由于 virt-manager 的优点是能够自定义虚拟机,因此在单击“<ruby> 完成 <rt> Finish </rt></ruby>”之前,我将选中“<ruby> 在安装前定制配置 <rt> Customize configuration before install </rt></ruby>”的复选框。 因为我选择了自定义配置,virt-manager 打开了一个有一组设备和设置的页面。这里是重点! 
这里你也可以命名该虚拟机。在左侧列表中,你可以查看各个方面的详细信息,例如 CPU、内存、磁盘、控制器和许多其他项目。例如,我可以单击 “CPU” 来验证我在步骤 3 中所做的更改。 ![Changing the CPU count](/data/attachment/album/201909/20/113508u65xx676zfmomtzs.png "Changing the CPU count") 我也可以确认我设置的内存量。 当虚拟机作为服务器运行时,我通常会禁用或删除声卡。为此,请选择 “<ruby> 声卡 <rt> Sound </rt></ruby>” 并单击 “<ruby> 移除 <rp> ( </rp> <rt> Remove </rt> <rp> ) </rp></ruby>” 或右键单击 “<ruby> 声卡 <rt> Sound </rt></ruby>” 并选择 “<ruby> 移除硬件 <rp> ( </rp> <rt> Remove Hardware </rt> <rp> ) </rp></ruby>”。 你还可以使用底部的 “<ruby> 添加硬件 <rp> ( </rp> <rt> Add Hardware </rt> <rp> ) </rp></ruby>” 按钮添加硬件。这会打开 “<ruby> 添加新的虚拟硬件 <rp> ( </rp> <rt> Add New Virtual Hardware </rt> <rp> ) </rp></ruby>” 页面,你可以在其中添加其他存储设备、内存、声卡等。这就像可以访问一个库存充足的(虚拟)计算机硬件仓库。 ![The Add New Hardware screen](/data/attachment/album/201909/20/113510o77sdxy7as5nnsna.png "The Add New Hardware screen") 对 VM 配置感到满意后,单击 “<ruby> 开始安装 <rt> Begin Installation </rt></ruby>”,系统将启动并开始从 ISO 安装指定的操作系统。 ![Begin installing the OS](/data/attachment/album/201909/20/113511lbrhiwblh5lrf55b.png) 完成后,它会重新启动,你的新虚拟机就可以使用了。 ![Red Hat Enterprise Linux 8 running in VMM](/data/attachment/album/201909/20/113514uk44br4yfogu7gg4.png "Red Hat Enterprise Linux 8 running in VMM") Virtual Machine Manager 是桌面 Linux 用户的强大工具。它是开源的,是专有和封闭虚拟化产品的绝佳替代品。 --- via: <https://opensource.com/article/19/9/introduction-virtual-machine-manager> 作者:[Alan Formy-Duval](https://opensource.com/users/alanfdoss) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
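顺带一提,virt-manager 家族还提供了命令行工具 `virt-install`,可以完成与上述 GUI 流程大致等价的创建操作。下面是一个示意命令(ISO 路径、虚拟机名称和 `--os-variant` 的取值均为假设,请按实际环境调整):

```
sudo virt-install \
  --name rhel8-demo \
  --memory 4096 --vcpus 2 \
  --disk size=10 \
  --cdrom ~/Downloads/rhel-8.0-x86_64-dvd.iso \
  --os-variant rhel8.0
```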
200
OK
In my series about [GNOME Boxes](https://wiki.gnome.org/Apps/Boxes), I explained how Linux users can quickly spin up virtual machines on their desktop without much fuss. Boxes is ideal for creating virtual machines in a pinch when a simple configuration is all you need. But if you need to configure more detail in your virtual machine, you need a tool that provides a full range of options for disks, network interface cards (NICs), and other hardware. This is where [Virtual Machine Manager](https://virt-manager.org/) (virt-manager) comes in. If you don't see it in your applications menu, you can install it from your package manager or via the command line: - On Fedora: **sudo dnf install virt-manager** - On Ubuntu: **sudo apt install virt-manager** Once it's installed, you can launch it from its application menu icon or from the command line by entering **virt-manager**. ![Virtual Machine Manager's main screen](https://opensource.com/sites/default/files/1-vmm_main_0.png) opensource.com To demonstrate how to create a virtual machine using virt-manager, I'll go through the steps to set one up for Red Hat Enterprise Linux 8. To start, click **File** then **New Virtual Machine**. Virt-manager's developers have thoughtfully titled each step of the process (e.g., Step 1 of 5) to make it easy. Click **Local install media** and **Forward**. ![Step 1 virtual machine creation](https://opensource.com/sites/default/files/2-vmm_step1_0.png) opensource.com On the next screen, browse to select the ISO file for the operating system you want to install. (My RHEL 8 image is located in my Downloads directory.) Virt-manager automatically detects the operating system. ![Step 2 Choose the ISO File](https://opensource.com/sites/default/files/3-vmm_step2.png) opensource.com In Step 3, you can specify the virtual machine's memory and CPU. The defaults are 1,024MB memory and one CPU. ![Step 3 Set CPU and Memory](https://opensource.com/sites/default/files/4-vmm_step3default.png) opensource.com I want to give RHEL ample room to run—and the hardware I'm using can accommodate it—so I'll increase them (respectively) to 4,096MB and two CPUs. The next step configures storage for the virtual machine; the default setting is a 10GB disk image. (I'll keep this setting, but you can adjust it for your needs.) You can also choose an existing disk image or create one in a custom location. ![Step 4 Configure VM Storage](https://opensource.com/sites/default/files/6-vmm_step4.png) opensource.com Step 5 is the place to name your virtual machine and click Finish. This is equivalent to creating a virtual machine or a Box in GNOME Boxes. While it's technically the last step, you have several options (as you can see in the screenshot below). Since the advantage of virt-manager is the ability to customize a virtual machine, I'll check the box labeled **Customize configuration before install** before I click **Finish**. Since I chose to customize the configuration, virt-manager opens a screen displaying a bunch of devices and settings. This is the fun part! Here you have another chance to name the virtual machine. In the list on the left, you can view details on various aspects, such as CPU, memory, disks, controllers, and many other items. For example, I can click on **CPUs** to verify the change I made in Step 3. ![Changing the CPU count](https://opensource.com/sites/default/files/9-vmm_customizecpu.png) opensource.com I can also confirm the amount of memory I set. 
When installing a VM to run as a server, I usually disable or remove its sound capability. To do so, select **Sound** and click **Remove** or right-click on **Sound** and choose **Remove Hardware**.

You can also add hardware with the **Add Hardware** button at the bottom. This brings up the **Add New Virtual Hardware** screen where you can add additional storage devices, memory, sound, etc. It's like having access to a very well-stocked (if virtual) computer hardware warehouse.

![The Add New Hardware screen](https://opensource.com/sites/default/files/11-vmm_addnewhardware.png)

opensource.com

Once you are happy with your VM configuration, click **Begin Installation**, and the system will boot and begin installing your specified operating system from the ISO.

![Begin installing the OS](https://opensource.com/sites/default/files/12-vmm_rhelbegininstall.png)

opensource.com

Once it completes, it reboots, and your new VM is ready for use.

![Red Hat Enterprise Linux 8 running in VMM](https://opensource.com/sites/default/files/13-vmm_rhelinstalled_0.png)

opensource.com

Virtual Machine Manager is a powerful tool for desktop Linux users. It is open source and an excellent alternative to proprietary and closed virtualization products.
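Once a machine is defined, you can also drive it from the command line with `virsh`, which ships with libvirt (the layer virt-manager itself uses). A quick sketch, with the VM name assumed for illustration:

```
virsh list --all           # show all defined VMs and their state
virsh start rhel8-demo     # boot the VM (name assumed)
virsh shutdown rhel8-demo  # request a clean guest shutdown
```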
11,366
Oracle 发布全球最快的数据库机器 Exadata X8M
https://opensourceforu.com/2019/09/oracle-unleashes-worlds-fastest-database-machine-exadata-x8m/
2019-09-20T19:16:03
[ "数据库" ]
https://linux.cn/article-11366-1.html
> > Exadata X8M 是第一台具有集成持久内存和 RoCE 的数据库机器。Oracle 还宣布推出 Oracle 零数据丢失恢复设备 X8M(ZDLRA)。 > > > ![](/data/attachment/album/201909/20/191530qiyvvxl8qqcov8xq.jpg) Oracle 发布了新的 Exadata 数据库机器 X8M,旨在为数据库基础架构市场树立新的标杆。 Exadata X8M 结合了英特尔 Optane DC 持久存储器和通过融合以太网(RoCE)的 100 千兆的远程直接内存访问(RDMA)来消除存储瓶颈,并显著提高性能,其适用于最苛刻的工作负载,如在线事务处理(OLTP)、分析、物联网、欺诈检测和高频交易。 “借助 Exadata X8M,我们可以提供内存级的性能,同时为 OLTP 和分析提供共享存储的所有优势,”Oracle 任务关键型数据库技术执行副总裁 Juan Loaiza 说。 “使用对共享持久存储器的直接数据库访问将响应时间减少一个数量级,可加速每个 OLTP 应用程序,它是需要实时访问大量数据的应用程序的游戏规则改变者,例如欺诈检测和个性化购物,”官方补充。 ### 它有什么独特之处? Oracle Exadata X8M 使用 RDMA 让数据库直接访问智能存储服务器中的持久内存,从而绕过整个操作系统、IO 和网络软件堆栈。这导致更低的延迟和更高的吞吐量。使用 RDMA 绕过软件堆栈还可以释放存储服务器上的 CPU 资源,以执行更多智能扫描查询来支持分析工作负载。 ### 更少的存储瓶颈 “高性能 OLTP 应用需要高的每秒输入/输出操作(IOPS)和低延迟。直接数据库访问共享持久性内存可将SQL 读取的峰值性能提升至 1600 万 IOPS,比行业领先的 Exadata X8 高出 2.5 倍,“Oracle 在一份声明中表示。 此外,Exadata X8M 通过使远程 IO 延迟低于 19 微秒,大大减少了关键数据库 IO 的延迟 —— 这比 Exadata X8 快 10 倍以上。即使对于每秒需要数百万 IO 的工作负载,也可实现这些超低延迟。 ### 比 AWS 和 Azure 更高效 该公司声称,与 Oracle 最快的 Amazon RDS 存储相比,Exadata X8M 的延迟降低了 50 倍,IOPS 提高了 200 倍,容量提高了 15 倍。 与 Azure SQL 数据库服务存储相比,Exadata X8M 的延迟降低了 100 倍,IOPS 提高了 150 倍,容量提高了 300 倍。 据 Oracle 称,单机架 Exadata X8M 可提供高达 2 倍的 OLTP 读取 IOPS,3 倍的吞吐量和比具有持久性内存的共享存储系统(如 Dell EMC PowerMax 8000 的单机架)低 5 倍的延迟。 “通过同时支持更快的 OLTP 查询和更高的分析工作负载吞吐量,Exadata X8M 是融合混合工作负载环境以降低 IT 成本和复杂性的理想平台,”该公司说。 ### Oracle 零数据丢失恢复设备 X8 Oracle 当天还宣布推出 Oracle 零数据丢失恢复设备 X8M(ZDLRA),它使用新的 100Gb RoCE,用于计算和存储服务器之间的高吞吐量内部数据传输。 Exadata 和 ZDLRA 客户现在可以在 RoCE 或基于 InfiniBand 的工程系统之间进行选择,以在其架构部署中实现最佳灵活性。 --- via: <https://opensourceforu.com/2019/09/oracle-unleashes-worlds-fastest-database-machine-exadata-x8m/> 作者:[Longjam Dineshwori](https://opensourceforu.com/author/dineshwori-longjam/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
11,367
使用 sed 命令查找和替换文件中的字符串的 16 个示例
https://www.2daygeek.com/linux-sed-to-find-and-replace-string-in-files/
2019-09-20T21:07:35
[ "sed" ]
https://linux.cn/article-11367-1.html
![](/data/attachment/album/201909/20/210723xf884pafyzf9zf4a.jpg) 当你在使用文本文件时,很可能需要查找和替换文件中的字符串。`sed` 命令主要用于替换一个文件中的文本。在 Linux 中这可以通过使用 `sed` 命令和 `awk` 命令来完成。 在本教程中,我们将告诉你使用 `sed` 命令如何做到这一点,然后讨论讨论 `awk` 命令相关的。 ### sed 命令是什么 `sed` 命令表示 Stream Editor(流编辑器),用来在 Linux 上执行基本的文本操作。它可以执行各种功能,如搜索、查找、修改、插入或删除文件。 此外,它也可以执行复杂的正则表达式匹配。 它可用于以下目的: * 查找和替换匹配给定的格式的内容。 * 在指定行查找和替换匹配给定的格式的内容。 * 在所有行查找和替换匹配给定的格式的内容。 * 搜索并同时替换两种不同的模式。 本文列出的十五个例子可以帮助你掌握 `sed` 命令。 如果要使用 `sed` 命令删除文件中的行,去下面的文章。 注意:由于这是一篇演示文章,我们使用不带 `-i` 选项的 `sed` 命令,该选项会在 Linux 终端中删除行并打印文件内容。 但是,在实际环境中如果你想删除源文件中的行,使用带 `-i` 选项的 `sed` 命令。 常见的 `sed` 替换字符串的语法。 ``` sed -i 's/Search_String/Replacement_String/g' Input_File ``` 首先我们需要了解 `sed` 语法来做到这一点。请参阅有关的细节。 * `sed`:这是一个 Linux 命令。 * `-i`:这是 `sed` 命令的一个选项,它有什么作用?默认情况下,`sed` 打印结果到标准输出。当你使用 `sed` 添加这个选项时,那么它会在适当的位置修改文件。当你添加一个后缀(比如,`-i.bak`)时,就会创建原始文件的备份。 * `s`:字母 `s` 是一个替换命令。 * `Search_String`:搜索一个给定的字符串或正则表达式。 * `Replacement_String`:替换的字符串。 * `g`:全局替换标志。默认情况下,`sed` 命令替换每一行第一次出现的模式,它不会替换行中的其他的匹配结果。但是,提供了该替换标志时,所有匹配都将被替换。 * `/`:分界符。 * `Input_File`:要执行操作的文件名。 让我们来看看文件中用sed命令来搜索和转换文本的一些常用例子。 我们已经创建了用于演示的以下文件。 ``` # cat sed-test.txt 1 Unix unix unix 23 2 linux Linux 34 3 linuxunix UnixLinux linux /bin/bash CentOS Linux OS Linux is free and opensource operating system ``` ### 1) 如何查找和替换一行中“第一次”模式匹配 下面的 `sed` 命令用 `linux` 替换文件中的 `unix`。这仅仅改变了每一行模式的第一个实例。 ``` # sed 's/unix/linux/' sed-test.txt 1 Unix linux unix 23 2 linux Linux 34 3 linuxlinux UnixLinux linux /bin/bash CentOS Linux OS Linux is free and opensource operating system ``` ### 2) 如何查找和替换每一行中“第 N 次”出现的模式 在行中使用`/1`、`/2`……`/n` 等标志来代替相应的匹配。 下面的 `sed` 命令在一行中用 `linux` 来替换 `unix` 模式的第二个实例。 ``` # sed 's/unix/linux/2' sed-test.txt 1 Unix unix linux 23 2 linux Linux 34 3 linuxunix UnixLinux linux /bin/bash CentOS Linux OS Linux is free and opensource operating system ``` ### 3) 如何搜索和替换一行中所有的模式实例 下面的 `sed` 命令用 `linux` 替换 `unix` 格式的所有实例,因为 `g` 是一个全局替换标志。 ``` # sed 's/unix/linux/g' sed-test.txt 1 Unix linux linux 23 2 linux Linux 34 3 linuxlinux UnixLinux linux /bin/bash CentOS Linux OS Linux is free and opensource operating system ``` ### 4) 如何查找和替换一行中从“第 N 个”开始的所有匹配的模式实例 下面的 `sed` 命令在一行中替换从模式的“第 N 个”开始的匹配实例。 ``` # sed 's/unix/linux/2g' sed-test.txt 1 Unix unix linux 23 2 linux Linux 34 3 linuxunix UnixLinux linux /bin/bash CentOS Linux OS Linux is free and opensource operating system ``` ### 5) 在特定的行号搜索和替换模式 你可以替换特定行号中的字符串。下面的 `sed` 命令用 `linux` 仅替换第三行的 `unix` 模式。 ``` # sed '3 s/unix/linux/' sed-test.txt 1 Unix unix unix 23 2 linux Linux 34 3 linuxlinux UnixLinux linux /bin/bash CentOS Linux OS Linux is free and opensource operating system ``` ### 6) 在特定范围行号间搜索和替换模式 你可以指定行号的范围,以替换字符串。 下面的 `sed` 命令在 1 到 3 行间用 `linux` 替换 `Unix` 模式。 ``` # sed '1,3 s/unix/linux/' sed-test.txt 1 Unix linux unix 23 2 linux Linux 34 3 linuxlinux UnixLinux linux /bin/bash CentOS Linux OS Linux is free and opensource operating system ``` ### 7) 如何查找和修改最后一行的模式 下面的 sed 命令允许你只在最后一行替换匹配的字符串。 下面的 `sed` 命令只在最后一行用 `Unix` 替换 `Linux` 模式。 ``` # sed '$ s/Linux/Unix/' sed-test.txt 1 Unix unix unix 23 2 linux Linux 34 3 linuxunix UnixLinux linux /bin/bash CentOS Linux OS Unix is free and opensource operating system ``` ### 8) 在一行中如何只查找和替换正确的模式匹配 你可能已经注意到,子串 `linuxunix` 被替换为在第 6 个示例中的 `linuxlinux`。如果你只想更改正确的匹配词,在搜索串的两端用这个边界符 `\b`。 ``` # sed '1,3 s/\bunix\b/linux/' sed-test.txt 1 Unix linux unix 23 2 linux Linux 34 3 linuxunix UnixLinux linux /bin/bash CentOS Linux OS Linux is free and opensource operating system ``` ### 9) 如何以不区分大小写来搜索与替换模式 大家都知道,Linux 是区分大小写的。为了与不区分大小写的模式匹配,使用 `I` 标志。 ``` # sed 
's/unix/linux/gI' sed-test.txt 1 linux linux linux 23 2 linux Linux 34 3 linuxlinux linuxLinux linux /bin/bash CentOS Linux OS Linux is free and opensource operating system ``` ### 10) 如何查找和替换包含分隔符的字符串 当你搜索和替换含分隔符的字符串时,我们需要用反斜杠 `\` 来取消转义。 在这个例子中,我们将用 `/usr/bin/fish` 来替换 `/bin/bash`。 ``` # sed 's/\/bin\/bash/\/usr\/bin\/fish/g' sed-test.txt 1 Unix unix unix 23 2 linux Linux 34 3 linuxunix UnixLinux linux /usr/bin/fish CentOS Linux OS Linux is free and opensource operating system ``` 上述 `sed` 命令按预期工作,但它看起来来很糟糕。 为了简化,大部分的人会用竖线 `|` 作为正则表达式的定位符。 所以,我建议你用它。 ``` # sed 's|/bin/bash|/usr/bin/fish/|g' sed-test.txt 1 Unix unix unix 23 2 linux Linux 34 3 linuxunix UnixLinux linux /usr/bin/fish/ CentOS Linux OS Linux is free and opensource operating system ``` ### 11) 如何以给定的模式来查找和替换数字 类似地,数字可以用模式来代替。下面的 `sed` 命令以 `[0-9]` 替换所有数字为 `number`。 ``` # sed 's/[0-9]/number/g' sed-test.txt number Unix unix unix numbernumber number linux Linux numbernumber number linuxunix UnixLinux linux /bin/bash CentOS Linux OS Linux is free and opensource operating system ``` ### 12) 如何用模式仅查找和替换两个数字 如果你想用模式来代替两位数字,使用下面的 `sed` 命令。 ``` # sed 's/\b[0-9]\{2\}\b/number/g' sed-test.txt 1 Unix unix unix number 2 linux Linux number 3 linuxunix UnixLinux linux /bin/bash CentOS Linux OS Linux is free and opensource operating system ``` ### 13) 如何用 sed 命令仅打印被替换的行 如果你想显示仅更改的行,使用下面的 `sed` 命令。 * `p` - 它在终端上输出替换的行两次。 * `-n` - 它抑制由 `p` 标志所产生的重复行。 ``` # sed -n 's/Unix/Linux/p' sed-test.txt 1 Linux unix unix 23 3 linuxunix LinuxLinux ``` ### 14) 如何同时运行多个 sed 命令 以下 `sed` 命令同时检测和置换两个不同的模式。 下面的 `sed` 命令搜索 `linuxunix` 和 `CentOS` 模式,用 `LINUXUNIX` 和 `RHEL8` 一次性更换它们。 ``` # sed -e 's/linuxunix/LINUXUNIX/g' -e 's/CentOS/RHEL8/g' sed-test.txt 1 Unix unix unix 23 2 linux Linux 34 3 LINUXUNIX UnixLinux linux /bin/bash RHEL8 Linux OS Linux is free and opensource operating system ``` 下面的 `sed` 命令搜索替换两个不同的模式,并一次性替换为一个字符串。 以下 `sed` 的命令搜索 `linuxunix` 和 `CentOS` 模式,用 `Fedora30` 替换它们。 ``` # sed -e 's/\(linuxunix\|CentOS\)/Fedora30/g' sed-test.txt 1 Unix unix unix 23 2 linux Linux 34 3 Fedora30 UnixLinux linux /bin/bash Fedora30 Linux OS Linux is free and opensource operating system ``` ### 15) 如果给定的模式匹配,如何查找和替换整个行 如果模式匹配,可以使用 `sed` 命令用新行来代替整行。这可以通过使用 `c` 标志来完成。 ``` # sed '/OS/ c\ New Line ' sed-test.txt 1 Unix unix unix 23 2 linux Linux 34 3 linuxunix UnixLinux New Line Linux is free and opensource operating system ``` ### 16) 如何搜索和替换相匹配的模式行 在 `sed` 命令中你可以为行指定适合的模式。在匹配该模式的情况下,`sed` 命令搜索要被替换的字符串。 下面的 `sed` 命令首先查找具有 `OS` 模式的行,然后用 `ArchLinux` 替换单词 `Linux`。 ``` # sed '/OS/ s/Linux/ArchLinux/' sed-test.txt 1 Unix unix unix 23 2 linux Linux 34 3 linuxunix UnixLinux linux /bin/bash CentOS ArchLinux OS Linux is free and opensource operating system ``` --- via: <https://www.2daygeek.com/linux-sed-to-find-and-replace-string-in-files/> 作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Asche910](https://github.com/asche910) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
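最后补充一点:上文所有示例都没有使用 `-i` 选项,因此源文件并未被改动。如果要真正写回文件并保留一份备份,可以参考下面的示意用法:

```
# 就地修改文件,同时把原始内容备份为 sed-test.txt.bak
sed -i.bak 's/unix/linux/g' sed-test.txt
```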
404
Not Found
null
11,368
Linux 内核的五大创新
https://opensource.com/article/19/8/linux-kernel-top-5-innovations
2019-09-21T09:39:15
[ "Linux" ]
https://linux.cn/article-11368-1.html
> > 想知道什么是 Linux 内核上真正的(不是那种时髦的)创新吗? > > > ![](/data/attachment/album/201909/21/093858no01oh78v111r3zt.jpg) 在科技行业,*创新*这个词几乎和*革命*一样到处泛滥,所以很难将那些夸张的东西与真正令人振奋的东西区分开来。Linux 内核被称为创新,但它又被称为现代计算中最大的奇迹,一个微观世界中的庞然大物。 撇开营销和模式不谈,Linux 可以说是开源世界中最受欢迎的内核,它在近 30 年的生命时光当中引入了一些真正的规则改变者。 ### Cgroups(2.6.24) 早在 2007 年,Paul Menage 和 Rohit Seth 就在内核中添加了深奥的[控制组(cgroups)](https://en.wikipedia.org/wiki/Cgroups)功能(cgroups 的当前实现是由 Tejun Heo 重写的)。这种新技术最初被用作一种方法,从本质上来说,是为了确保一组特定任务的服务质量。 例如,你可以为与你的 WEB 服务相关联的所有任务创建一个控制组定义(cgroup),为例行备份创建另一个 cgroup ,再为一般操作系统需求创建另一个 cgroup。然后,你可以控制每个组的资源百分比,这样你的操作系统和 WEB 服务就可以获得大部分系统资源,而你的备份进程可以访问剩余的资源。 然而,cgroups 如今变得这么著名是因其作为驱动云技术的角色:容器。事实上,cgroups 最初被命名为[进程容器](https://lkml.org/lkml/2006/10/20/251)。当它们被 [LXC](https://linuxcontainers.org)、[CoreOS](https://coreos.com/) 和 Docker 等项目采用时,这并不奇怪。 就像闸门打开后一样,“容器” 一词就像成为了 Linux 的同义词一样,微服务风格的基于云的“应用”概念很快成为了规范。如今,已经很难摆脱 cgroups 了,它们是如此普遍。每一个大规模的基础设施(如果你运行 Linux 的话,可能还有你的笔记本电脑)都以一种合理的方式使用了 cgroups,这使得你的计算体验比以往任何时候都更加易于管理和灵活。 例如,你可能已经在电脑上安装了 [Flathub](http://flathub.org) 或 [Flatpak](http://flatpak.org),或者你已经在工作中使用 [Kubernetes](http://kubernetes.io) 和/或 [OpenShift](https://www.redhat.com/sysadmin/learn-openshift-minishift)。不管怎样,如果“容器”这个术语对你来说仍然模糊不清,则可以 [通过 Linux 容器从背后](https://opensource.com/article/18/11/behind-scenes-linux-containers)获得对容器的实际理解。 ### LKMM(4.17) 2018 年,Jade Alglave、Alan Stern、Andrea Parri、Luc Maranget、Paul McKenney 以及其他几个人的辛勤工作的成果被合并到主线 Linux 内核中,以提供正式的内存模型。Linux 内核内存[一致性]模型(LKMM)子系统是一套描述 Linux 内存一致性模型的工具,同时也产生用于测试的用例(特别命名为 klitmus)。 随着系统在物理设计上变得越来越复杂(增加了更多的中央处理器内核,高速缓存和内存增长,等等),它们就越难知道哪个中央处理器需要哪个地址空间,以及何时需要。例如,如果 CPU0 需要将数据写入内存中的共享变量,并且 CPU1 需要读取该值,那么 CPU0 必须在 CPU1 尝试读取之前写入。类似地,如果值是以一种顺序方式写入内存的,那么期望它们也以同样的顺序被读取,而不管哪个或哪些 CPU 正在读取。 即使在单个处理器上,内存管理也需要特定的任务顺序。像 `x = y` 这样的简单操作需要处理器从内存中加载 `y` 的值,然后将该值存储在 `x` 中。在处理器从内存中读取值之前,是不能将存储在 `y` 中的值放入 `x` 变量的。此外还有地址依赖:`x[n] = 6` 要求在处理器能够存储值 `6` 之前加载 `n`。 LKMM 可以帮助识别和跟踪代码中的这些内存模式。它部分是通过一个名为 `herd` 的工具来实现的,该工具(以逻辑公式的形式)定义了内存模型施加的约束,然后列举了与这些约束一致性的所有可能的结果。 ### 低延迟补丁(2.6.38) 很久以前,在 2011 年之前,如果你想[在 Linux 上进行多媒体工作](http://slackermedia.info),你必须得有一个低延迟内核。这主要适用于[录音](https://opensource.com/article/17/6/qtractor-audio)时添加了许多实时效果(如对着麦克风唱歌和添加混音,以及在耳机中无延迟地听到你的声音)。有些发行版,如 [Ubuntu Studio](http://ubuntustudio.org),可靠地提供了这样一个内核,所以实际上这没有什么障碍,这只不过是当艺术家选择发行版时的一个重要提醒。 然而,如果你没有使用 Ubuntu Studio,或者你需要在你的发行版提供之前更新你的内核,你必须跳转到 rt-patches 网页,下载内核补丁,将它们应用到你的内核源代码,编译,然后手动安装。 后来,随着内核版本 2.6.38 的发布,这个过程结束了。Linux 内核突然像变魔术一样默认内置了低延迟代码(根据基准测试,延迟至少降低了 10 倍)。不再需要下载补丁,不用编译。一切都很顺利,这都是因为 Mike Galbraith 编写了一个 200 行的小补丁。 对于全世界的开源多媒体艺术家来说,这是一个规则改变者。从 2011 年开始事情变得如此美好,到 2016 年我自己做了一个挑战,[在树莓派 v1(型号 B)上建造一个数字音频工作站(DAW)](https://opensource.com/life/16/3/make-music-raspberry-pi-milkytracker),结果发现它运行得出奇地好。 ### RCU(2.5) RCU,即<ruby> 读-拷贝-更新 <rt> Read-Copy-Update </rt></ruby>,是计算机科学中定义的一个系统,它允许多个处理器线程从共享内存中读取数据。它通过延迟更新但也将它们标记为已更新来做到这一点,以确保数据读取为最新内容。实际上,这意味着读取与更新同时发生。 典型的 RCU 循环有点像这样: 1. 删除指向数据的指针,以防止其他读操作引用它。 2. 等待读操作完成它们的关键处理。 3. 回收内存空间。 将更新阶段划分为删除和回收阶段意味着更新程序会立即执行删除,同时推迟回收直到所有活动读取完成(通过阻止它们或注册一个回调以便在完成时调用)。 虽然 RCU 的概念不是为 Linux 内核发明的,但它在 Linux 中的实现是该技术的一个定义性的例子。 ### 合作(0.01) 对于 Linux 内核创新的问题的最终答案永远是协作。你可以说这是一个好时机,也可以称之为技术优势,称之为黑客能力,或者仅仅称之为开源,但 Linux 内核及其支持的许多项目是协作与合作的光辉范例。 它远远超出了内核范畴。各行各业的人都对开源做出了贡献,可以说都是因为 Linux 内核。Linux 曾经是,现在仍然是[自由软件](http://fsf.org)的主要力量,激励人们把他们的代码、艺术、想法或者仅仅是他们自己带到一个全球化的、有生产力的、多样化的人类社区中。 ### 你最喜欢的创新是什么? 
这个列表偏向于我自己的兴趣:容器、非统一内存访问(NUMA)和多媒体。你最喜欢的内核创新几乎肯定没有被我列入其中,请在评论中告诉我吧!

--- via: <https://opensource.com/article/19/8/linux-kernel-top-5-innovations> 作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[heguangzhi](https://github.com/heguangzhi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
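作为上文 cgroups 一节的一个动手补充,下面是一段示意性的 shell 片段,演示如何通过 cgroup v2 接口把一组进程的 CPU 用量限制在大约 50%(假设系统将 cgroup v2 挂载于 /sys/fs/cgroup 且拥有 root 权限;组名 `demo` 是随意取的示例名):

```
echo "+cpu" | sudo tee /sys/fs/cgroup/cgroup.subtree_control  # 确保根组向子组开放 cpu 控制器
sudo mkdir /sys/fs/cgroup/demo                                # 创建一个新的控制组
echo "50000 100000" | sudo tee /sys/fs/cgroup/demo/cpu.max    # 每 100000 微秒的周期内最多运行 50000 微秒
echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs           # 把当前 shell 及其子进程加入该组
```

在仍使用 cgroup v1 的系统上,接口路径和文件名会有所不同。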
200
OK
The word *innovation* gets bandied about in the tech industry almost as much as *revolution*, so it can be difficult to differentiate hyperbole from something that’s actually exciting. The Linux kernel has been called innovative, but then again it’s also been called the biggest hack in modern computing, a monolith in a micro world. Setting aside marketing and modeling, Linux is arguably the most popular kernel of the open source world, and it’s introduced some real game-changers over its nearly 30-year life span. ## Cgroups (2.6.24) Back in 2007, Paul Menage and Rohit Seth got the esoteric [ control groups (cgroups)](https://en.wikipedia.org/wiki/Cgroups) feature added to the kernel (the current implementation of cgroups is a rewrite by Tejun Heo.) This new technology was initially used as a way to ensure, essentially, quality of service for a specific set of tasks. For example, you could create a control group definition (cgroup) for all tasks associated with your web server, another cgroup for routine backups, and yet another for general operating system requirements. You could then control a percentage of resources for each cgroup, such that your OS and web server gets the bulk of system resources while your backup processes have access to whatever is left. What cgroups has become most famous for, though, is its role as the technology driving the cloud today: containers. In fact, cgroups were originally named [process containers](https://lkml.org/lkml/2006/10/20/251). It was no great surprise when they were adopted by projects like [LXC](https://linuxcontainers.org), [CoreOS](https://coreos.com/), and Docker. The floodgates being opened, the term *containers* justly became synonymous with Linux, and the concept of microservice-style cloud-based “apps” quickly became the norm. These days, it’s hard to get away from cgroups, they’re so prevalent. Every large-scale infrastructure (and probably your laptop, if you run Linux) takes advantage of cgroups in a meaningful way, making your computing experience more manageable and more flexible than ever. For example, you might already have installed [Flathub](http://flathub.org) or [Flatpak](http://flatpak.org) on your computer, or maybe you’ve started using [Kubernetes](http://kubernetes.io) and/or [OpenShift](https://www.redhat.com/sysadmin/learn-openshift-minishift) at work. Regardless, if the term “containers” is still hazy for you, you can gain a hands-on understanding of containers from [Behind the scenes with Linux containers](https://opensource.com/article/18/11/behind-scenes-linux-containers). ## LKMM (4.17) In 2018, the hard work of Jade Alglave, Alan Stern, Andrea Parri, Luc Maranget, Paul McKenney, and several others, got merged into the mainline Linux kernel to provide formal memory models. The Linux Kernel Memory [Consistency] Model (LKMM) subsystem is a set of tools describing the Linux memory coherency model, as well as producing *litmus tests* (**klitmus**, specifically) for testing. As systems become more complex in physical design (more CPU cores added, cache and RAM grow, and so on), the harder it is for them to know which address space is required by which CPU, and when. For example, if CPU0 needs to write data to a shared variable in memory, and CPU1 needs to read that value, then CPU0 must write before CPU1 attempts to read. Similarly, if values are written in one order to memory, then there’s an expectation that they are also read in that same order, regardless of which CPU or CPUs are doing the reading. 
Even on a single CPU, memory management requires a specific task order. A simple action such as **x = y** requires a CPU to load the value of **y** from memory, and then store that value in **x**. Placing the value stored in **y** into the **x** variable cannot occur *before* the CPU has read the value from memory. There are also address dependencies: **x[n] = 6** requires that **n** is loaded before the CPU can store the value of six. LKMM helps identify and trace these memory patterns in code. It does this in part with a tool called **herd**, which defines the constraints imposed by a memory model (in the form of logical axioms), and then enumerates all possible outcomes consistent with these constraints. ## Low-latency patch (2.6.38) Long ago, in the days before 2011, if you wanted to do "serious" [multimedia work on Linux](http://slackermedia.info), you had to obtain a low-latency kernel. This mostly applied to [audio recording](https://opensource.com/article/17/6/qtractor-audio) while adding lots of real-time effects (such as singing into a microphone and adding reverb, and hearing your voice in your headset with no noticeable delay). There were distributions, such as [Ubuntu Studio](http://ubuntustudio.org), that reliably provided such a kernel, so in practice it wasn't much of a hurdle, just a significant caveat when choosing your distribution as an artist. However, if you weren’t using Ubuntu Studio, or you had some need to update your kernel before your distribution got around to it, you had to go to the rt-patches web page, download the kernel patches, apply them to your kernel source code, compile, and install manually. And then, with the release of kernel version 2.6.38, this process was all over. The Linux kernel suddenly, as if by magic, had low-latency code (according to benchmarks, latency decreased by a factor of 10, at least) built-in by default. No more downloading patches, no more compiling. Everything just worked, and all because of a small 200-line patch implemented by Mike Galbraith. For open source multimedia artists the world over, it was a game-changer. Things got so good from 2011 on that in 2016, I challenged myself to [build a Digital Audio Workstation (DAW) on a Raspberry Pi v1 (model B)](https://opensource.com/life/16/3/make-music-raspberry-pi-milkytracker) and found that it worked surprisingly well. ## RCU (2.5) RCU, or Read-Copy-Update, is a system defined in computer science that allows multiple processor threads to read from shared memory. It does this by deferring updates, but also marking them as updated, to ensure that the data’s consumers read the latest version. Effectively, this means that reads happen concurrently with updates. The typical RCU cycle is a little like this: - Remove pointers to data to prevent other readers from referencing it. - Wait for readers to complete their critical processes. - Reclaim the memory space. Dividing the update stage into removal and reclamation phases means the updater performs the removal immediately while deferring reclamation until all active readers are complete (either by blocking them or by registering a callback to be invoked upon completion). While the concept of read-copy-update was not invented for the Linux kernel, its implementation in Linux is a defining example of the technology. ## Collaboration (0.01) The final answer to the question of what the Linux kernel innovated will always be, above all else, collaboration. 
Call it good timing, call it technical superiority, call it hackability, or just call it open source, but the Linux kernel and the many projects that it enabled are a glowing example of collaboration and cooperation.

And it goes well beyond just the kernel. People from all walks of life have contributed to open source, arguably *because* of the Linux kernel. Linux was, and remains to this day, a major force of [Free Software](http://fsf.org), inspiring users to bring their code, art, ideas, or just themselves, to a global, productive, and diverse community of humans.

## What’s your favorite innovation?

This list is biased toward my own interests: containers, non-uniform memory access (NUMA), and multimedia. I’ve surely left your favorite kernel innovation off the list. Tell me about it in the comments!
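As a hands-on footnote to the low-latency section above, here is a small sketch for checking which preemption options your running kernel was built with (the /boot path is an assumption that holds on most mainstream distributions; some expose the config at /proc/config.gz instead):

```
uname -r                                      # the running kernel release
grep CONFIG_PREEMPT /boot/config-$(uname -r)  # preemption-related options baked into this kernel
```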
11,370
Oracle Autonomous Linux:用于云计算的自我更新、自我修补的 Linux 发行版
https://itsfoss.com/oracle-autonomous-linux/
2019-09-22T09:38:52
[ "Oracle" ]
https://linux.cn/article-11370-1.html
自动化是 IT 行业的增长趋势,其目的是消除重复任务中的手动干扰。Oracle 通过推出 Oracle Autonomous Linux 向自动化世界迈出了又一步,这无疑将使 IoT 和云计算行业受益。

### Oracle Autonomous Linux:减少人工干扰,增多自动化

![](/data/attachment/album/201909/22/093857pn9k69e9fn5x9969.png)

周一,Oracle 联合创始人<ruby> 拉里·埃里森 <rt> Larry Ellison </rt></ruby>参加了在旧金山举行的 Oracle OpenWorld 全球大会。[他宣布了](https://www.zdnet.com/article/oracle-announces-oracle-autonomous-linux/)一个新产品:世界上第一个自治 Linux。这是 Oracle 向第二代云迈进的第二步。第一步是两年前发布的 [Autonomous Database](https://www.oracle.com/in/database/what-is-autonomous-database.html)。

Oracle Autonomous Linux 的最大特性是降低了维护成本。根据 [Oracle 网站](https://www.oracle.com/corporate/pressrelease/oow19-oracle-autonomous-linux-091619.html) 所述,Autonomous Linux “使用先进的机器学习和自治功能来提供前所未有的成本节省、安全性和可用性,并释放关键的 IT 资源来应对更多的战略计划”。

Autonomous Linux 可以无需人工干预就安装更新和补丁。这些自动更新包括 “Linux 内核和关键用户空间库”的补丁。“不需要停机,而且可以免受外部攻击和内部恶意用户的攻击。”它们也可以在系统运行时进行,以减少停机时间。Autonomous Linux 还会自动处理伸缩,以确保满足所有计算需求。

埃里森强调了新的自治系统将如何提高安全性。他特别提到了 [Capital One 数据泄露](https://www.zdnet.com/article/100-million-americans-and-6-million-canadians-caught-up-in-capital-one-breach/)是由于配置错误而发生的。他说:“一个防止数据被盗的简单规则:将数据放入自治系统。没有人为错误,没有数据丢失。那是我们与 AWS 之间的最大区别。”

有趣的是,Oracle 还瞄准了这一新产品以与 IBM 竞争。埃里森说:“如果你付钱给 IBM,可以停了。”所有 Red Hat 应用程序都应该能够在 Autonomous Linux 上运行而无需修改。有趣的是,Oracle Linux 是从 Red Hat Enterprise Linux 的源代码中[构建](https://distrowatch.com/table.php?distribution=oracle)的。

看起来,Oracle Autonomous Linux 不会用于企业市场以外。

### 关于 Oracle Autonomous Linux 的思考

Oracle 是云服务市场的重要参与者。这种新的 Linux 产品将使其能够与 IBM 竞争。IBM 会作何反应值得关注,特别是他们刚刚通过收购 Red Hat 获得了一批开源方面的人才和专长。

如果你看一下市场数字,那么情况对 IBM 和 Oracle 来说都不乐观。大多数云业务由 [Amazon Web Services、Microsoft Azure 和 Google Cloud Platform](https://www.zdnet.com/article/top-cloud-providers-2019-aws-microsoft-azure-google-cloud-ibm-makes-hybrid-move-salesforce-dominates-saas/) 所占据。IBM 和 Oracle 落后于他们。[IBM 收购 Red Hat](https://itsfoss.com/ibm-red-hat-acquisition/) 正是为了试图获得发展。这项新的自治云计划是 Oracle 争取统治地位(或至少试图获得更大市场份额)的举动。到底会有多少公司买入 Oracle 的系统,以求在互联网的狂野西部变得更加安全,值得关注。

我必须简单提一下:当我第一次读到该公告时,我的第一反应就是“好吧,我们离天网更近了一步。”如果我们让技术自己思考,那无异于自招机器人末日。请恕我失陪,我要去买些罐头食品了。

你对 Oracle 的新产品感兴趣吗?你认为它能帮助 Oracle 赢得云战争吗?在下面的评论中告诉我们。

如果你觉得这篇文章有趣,请花一点时间在社交媒体、Hacker News 或 [Reddit](https://reddit.com/r/linuxusersgroup) 上分享。

--- via: <https://itsfoss.com/oracle-autonomous-linux/> 作者:[John Paul](https://itsfoss.com/author/john/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Automation is the growing trend in the IT industry. The aim is to remove manual interference from repetitive tasks. Oracle has taken another step into the automation world by launching Oracle Autonomous Linux, which is surely going to benefit the IoT and cloud computing industries. ## Oracle Autonomous Linux: Less Human Intervention, More Automation ![Oracle Autonomous Linux](https://itsfoss.com/content/images/wordpress/2019/09/oracle-autonomous-linux-800x450.png) On Monday, Larry Ellison, the legendary co-founder of Oracle, took the stage at the Oracle OpenWorld conference in San Francisco. [He announced](https://www.zdnet.com/article/oracle-announces-oracle-autonomous-linux/) a new product: the world’s first autonomous Linux. This is the second step in Oracle’s march towards a second-generation cloud. The first step was the [Autonomous Database](https://www.oracle.com/in/database/what-is-autonomous-database.html) released two years ago. The biggest feature of Oracle Autonomous Linux is reduced maintenance costs. According to [Oracle’s site](https://www.oracle.com/corporate/pressrelease/oow19-oracle-autonomous-linux-091619.html), Autonomous Linux “uses advanced machine learning and autonomous capabilities to deliver unprecedented cost savings, security, and availability and frees up critical IT resources to tackle more strategic initiatives”. Autonomous Linux can install updates and patches without human interference. These automatic updates include patches for the “Linux kernel and key user space libraries”. “This requires no downtime along with protection from both external attacks and malicious internal users.” They can also take place while the system is running to reduce downtime. Autonomous Linux also handles scaling automatically to ensure that all computing needs are handled. Ellison highlighted how the new autonomous system would improve security. He mentioned in particular how the [Capital One data breach](https://www.zdnet.com/article/100-million-americans-and-6-million-canadians-caught-up-in-capital-one-breach/) occurred because of a bad configuration. He said “One simple rule to prevent data theft: Put your data in an autonomous system. No human error, no data loss. That’s the big difference between us and AWS.” Interestingly, Oracle is also positioning this new product to compete with IBM. Ellison said, “If you’re paying IBM, you can stop.” All Red Hat applications should be able to run on Autonomous Linux without modification. Interestingly, Oracle Linux is [built](https://distrowatch.com/table.php?distribution=oracle) from the sources of Red Hat Enterprise Linux. It does not appear that Oracle Autonomous Linux will be available for anyone outside of the enterprise market. ## Thoughts on Oracle Autonomous Linux Oracle is a big player in the cloud services market. This new Linux product will allow it to compete with IBM. It will be interesting to see how IBM responds, especially since they have a new influx of open-source smarts from Red Hat. If you look at the numbers, things are not looking good for either IBM or Oracle. The majority of the cloud business is controlled by [Amazon Web Services, Microsoft Azure, and Google Cloud Platform](https://www.zdnet.com/article/top-cloud-providers-2019-aws-microsoft-azure-google-cloud-ibm-makes-hybrid-move-salesforce-dominates-saas/). IBM and Oracle are somewhere behind them. [IBM bought Red Hat](https://itsfoss.com/ibm-red-hat-acquisition/) in an attempt to gain ground. 
This new Autonomous Cloud initiative is Oracle’s move for dominance (or at least an attempt to gain a larger market share). It will be interesting to see how many companies buy into Oracle’s system to become more secure in the wild west of the internet. I have to mention this quickly: when I first read about the announcement, all I could think was “Well, we are one step closer to Skynet.” If we let technology think for itself, we are just inviting an android apocalypse. If you’ll excuse me, I’m going to buy some canned goods. Are you interested in Oracle’s new product? Do you think it will help them win the cloud wars? Let us know in the comments below. If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit](http://reddit.com/r/linuxusersgroup).
11,371
如何在 Fedora 上建立一个 TFTP 服务器
https://fedoramagazine.org/how-to-set-up-a-tftp-server-on-fedora/
2019-09-22T11:14:41
[ "tftp" ]
https://linux.cn/article-11371-1.html
![](/data/attachment/album/201909/22/111433ar23l5gp2igsz2d3.jpg) TFTP 即<ruby> 简单文本传输协议 <rt> Trivial File Transfer Protocol </rt></ruby>,允许用户通过 [UDP](https://en.wikipedia.org/wiki/User_Datagram_Protocol) 协议在系统之间传输文件。默认情况下,协议使用的是 UDP 的 69 号端口。TFTP 协议广泛用于无盘设备的远程启动。因此,在你的本地网络建立一个 TFTP 服务器,这样你就可以对 [安装好的 Fedora](https://docs.fedoraproject.org/en-US/fedora/f30/install-guide/advanced/Network_based_Installations/) 和其他无盘设备做一些操作,这将非常有趣。 TFTP 仅仅能够从远端系统读取数据或者向远端系统写入数据,而没有列出远端服务器上文件的能力。它也没提供用户身份验证。由于安全隐患和缺乏高级功能,TFTP 通常仅用于局域网内部(LAN)。 ### 安装 TFTP 服务器 首先你要做的事就是安装 TFTP 客户端和 TFTP 服务器: ``` dnf install tftp-server tftp -y ``` 上述的这条命令会在 `/usr/lib/systemd/system` 目录下为 [systemd](https://fedoramagazine.org/systemd-getting-a-grip-on-units/) 创建 `tftp.service` 和 `tftp.socket` 文件。 ``` /usr/lib/systemd/system/tftp.service /usr/lib/systemd/system/tftp.socket ``` 接下来,将这两个文件复制到 `/etc/systemd/system` 目录下,并重新命名。 ``` cp /usr/lib/systemd/system/tftp.service /etc/systemd/system/tftp-server.service cp /usr/lib/systemd/system/tftp.socket /etc/systemd/system/tftp-server.socket ``` ### 修改文件 当你把这些文件复制和重命名后,你就可以去添加一些额外的参数,下面是 `tftp-server.service` 刚开始的样子: ``` [Unit] Description=Tftp Server Requires=tftp.socket Documentation=man:in.tftpd [Service] ExecStart=/usr/sbin/in.tftpd -s /var/lib/tftpboot StandardInput=socket [Install] Also=tftp.socket ``` 在 `[Unit]` 部分添加如下内容: ``` Requires=tftp-server.socket ``` 修改 `[ExecStart]` 行: ``` ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot ``` 下面是这些选项的意思: * `-c` 选项允许创建新的文件 * `-p` 选项用于指明在正常系统提供的权限检查之上没有其他额外的权限检查 * `-s` 建议使用该选项以确保安全性以及与某些引导 ROM 的兼容性,这些引导 ROM 在其请求中不容易包含目录名。 默认的上传和下载位置位于 `/var/lib/tftpboot`。 下一步,修改 `[Install]` 部分的内容 ``` [Install] WantedBy=multi-user.target Also=tftp-server.socket ``` 不要忘记保存你的修改。 下面是 `/etc/systemd/system/tftp-server.service` 文件的完整内容: ``` [Unit] Description=Tftp Server Requires=tftp-server.socket Documentation=man:in.tftpd [Service] ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot StandardInput=socket [Install] WantedBy=multi-user.target Also=tftp-server.socket ``` ### 启动 TFTP 服务器 重新启动 systemd 守护进程: ``` systemctl daemon-reload ``` 启动服务器: ``` systemctl enable --now tftp-server ``` 要更改 TFTP 服务器允许上传和下载的权限,请使用此命令。注意 TFTP 是一种固有的不安全协议,因此不建议你在与其他人共享的网络上这样做。 ``` chmod 777 /var/lib/tftpboot ``` 配置防火墙让 TFTP 能够使用: ``` firewall-cmd --add-service=tftp --perm firewall-cmd --reload ``` ### 客户端配置 安装 TFTP 客户端 ``` yum install tftp -y ``` 运行 `tftp` 命令连接服务器。下面是一个启用详细信息选项的例子: ``` [client@thinclient:~ ]$ tftp 192.168.1.164 tftp> verbose Verbose mode on. tftp> get server.logs getting from 192.168.1.164:server.logs to server.logs [netascii] Received 7 bytes in 0.0 seconds [inf bits/sec] tftp> quit [client@thinclient:~ ]$ ``` 记住,因为 TFTP 没有列出服务器上文件的能力,因此,在你使用 `get` 命令之前需要知道文件的具体名称。 --- via: <https://fedoramagazine.org/how-to-set-up-a-tftp-server-on-fedora/> 作者:[Curt Warfield](https://fedoramagazine.org/author/rcurtiswarfield/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[amwps290](https://github.com/amwps290) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
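One addendum to the client section above: the same transfer can be scripted rather than typed at the interactive prompt. A hedged sketch, assuming the tftp-hpa client that Fedora ships (its `-c` flag runs a single command; the address and file names reuse the article's examples, and the upload relies on the server-side `-c` create option configured earlier):

```
# Download a file non-interactively; -v prints transfer details
tftp -v 192.168.1.164 -c get server.logs

# Upload works symmetrically
tftp -v 192.168.1.164 -c put notes.txt
```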
200
OK
**TFTP**, or Trivial File Transfer Protocol, allows users to transfer files between systems using the [UDP protocol](https://en.wikipedia.org/wiki/User_Datagram_Protocol). By default, it uses UDP port 69. The TFTP protocol is extensively used to support remote booting of diskless devices. So, setting up a TFTP server on your own local network can be an interesting way to do [Fedora installations](https://docs.fedoraproject.org/en-US/fedora/f30/install-guide/advanced/Network_based_Installations/), or other diskless operations. TFTP can only read and write files to or from a remote system. It doesn’t have the capability to list files or make any changes on the remote server. There are also no provisions for user authentication. Because of security implications and the lack of advanced features, TFTP is generally only used on a local area network (LAN). ## TFTP server installation The first thing you will need to do is install the TFTP client and server packages:

dnf install tftp-server tftp -y

This creates a *tftp* service and socket file for [systemd](https://fedoramagazine.org/systemd-getting-a-grip-on-units/) under */usr/lib/systemd/system*.

/usr/lib/systemd/system/tftp.service
/usr/lib/systemd/system/tftp.socket

Next, copy and rename these files to */etc/systemd/system*:

cp /usr/lib/systemd/system/tftp.service /etc/systemd/system/tftp-server.service
cp /usr/lib/systemd/system/tftp.socket /etc/systemd/system/tftp-server.socket

## Making local changes You need to edit these files from the new location after you’ve copied and renamed them, to add some additional parameters. Here is what the *tftp-server.service* file initially looks like:

[Unit]
Description=Tftp Server
Requires=tftp.socket
Documentation=man:in.tftpd

[Service]
ExecStart=/usr/sbin/in.tftpd -s /var/lib/tftpboot
StandardInput=socket

[Install]
Also=tftp.socket

Make the following changes to the *[Unit]* section:

Requires=tftp-server.socket

Make the following changes to the *ExecStart* line:

ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot

Here are what the options mean:

- The *-c* option allows new files to be created.
- The *-p* option is used to have no additional permissions checks performed above the normal system-provided access controls.
- The *-s* option is recommended for security as well as compatibility with some boot ROMs which cannot be easily made to include a directory name in its request.

The default upload/download location for transferring the files is */var/lib/tftpboot*. Next, make the following changes to the *[Install]* section:

[Install]
WantedBy=multi-user.target
Also=tftp-server.socket

Don’t forget to save your changes! Here is the completed */etc/systemd/system/tftp-server.service* file:

[Unit]
Description=Tftp Server
Requires=tftp-server.socket
Documentation=man:in.tftpd

[Service]
ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
StandardInput=socket

[Install]
WantedBy=multi-user.target
Also=tftp-server.socket

## Starting the TFTP server Reload the systemd daemon:

systemctl daemon-reload

Now start and enable the server:

systemctl enable --now tftp-server

To change the permissions of the TFTP server to allow upload and download functionality, use this command. Note TFTP is an inherently insecure protocol, so this may not be advised on a network you share with other people. 
chmod 777 /var/lib/tftpboot

Configure your firewall to allow TFTP traffic:

firewall-cmd --add-service=tftp --perm
firewall-cmd --reload

## Client Configuration Install the TFTP client:

yum install tftp -y

Run the *tftp* command to connect to the TFTP server. Here is an example that enables the verbose option:

[client@thinclient:~ ]$ tftp 192.168.1.164
tftp> verbose
Verbose mode on.
tftp> get server.logs
getting from 192.168.1.164:server.logs to server.logs [netascii]
Received 7 bytes in 0.0 seconds [inf bits/sec]
tftp> quit
[client@thinclient:~ ]$

Remember, TFTP does not have the ability to list file names. So you’ll need to know the file name before running the *get* command to download any files. *Photo by **Laika Notebooks** on Unsplash*. ## Edgar Hoch What crazy description for tftp configuration do you release into the world? Why should anyone be allowed to upload any files to the server without any restrictions? What application is there that requires this and you can’t use a more secure method (with authentication and authorization)? You don’t need this to boot devices over the network. The only thing you need to do, apart from installing the packages, is to enable the socket with “systemctl enable --now tftpd.socket” and place the files needed for booting over the network in /var/lib/tftpboot/ or a subdirectory in it, preferably as owner and group root and only writeable for root and readable for all. You should NOT make /var/lib/tftpboot/ writeable for all. You should NOT use the -c option. You don’t need to make a copy of tftpd.server and tftpd.socket in /etc/systemd/system/; if you want to make local changes, create a directory /etc/systemd/system/tftpd.server.d/ and create a file in it with the extension “.conf”, where you just enter the change – see “man systemd.unit”. This could be used, for example, to make changes to the options when calling the service:

[Service]
ExecStart=/usr/sbin/in.tftpd -s /var/lib/tftpboot

Do NOT enter “WantedBy=multi-user.target”! You also don’t activate the tftpd.service, but tftpd.socket (see above). This has the advantage that the service only runs and occupies resources when it is needed (and after some time of inactivity (default 15 minutes) it stops itself). Why should someone use the tftp client to download a file like server.logs (as in your example)? Somebody has to put the file there first. TFTP is only needed for booting devices over the network, usually with PXE – to load a boot kernel, grub, or similar. Everything else the device should do via other services. People should only use the tftp client to test the connection. Upload or download files to a server should only be done via secure services, e.g. ssh / scp / sftp / rsync via ssh or via network file systems. ## Curt Warfield Hi Edgar, Thank you for taking the time to bring up some valid concerns. The intent of the article was not meant to try to ask anyone to embrace tftp or to even suggest it should be anyone’s first choice. I would not expect any enterprise environments to ever use this in production. This article was written as more of a way to just show how to configure a legacy application such as this. It was even noted in the article that it is not a secure method of uploading files: “There are also no provisions for user authentication. Because of security implications and the lack of advanced features, TFTP is generally only used on a local area network (LAN). 
” “Note TFTP is an inherently insecure protocol, so this may not be advised on a network you share with other people.” But I have come across occasions where I’ve been asked how to set this up even with my recommendation to use a more secure method. ## stee I agree with Edgar, the config here seems unnecessarily complicated. It would have been better to divide this article into two parts, first the default config to get up and running and then how to enable uploading as a “bonus”. Other articles on here have done similar in the past. ## Felix Kaechele At times where I just quickly require an ad-hoc TFTP server to host some firmware files I’m pushing to devices I use tftpy:

dnf install python2-tftpy
sudo firewall-cmd --add-service=tftp
sudo tftpy_server.py -r .

You can also change the path you’d like to share via TFTP by replacing the . after the -r parameter. When done Ctrl-C out of the TFTPy server and close the port again:

sudo firewall-cmd --remove-service=tftp

Hint: If you’re on Rawhide (or for that matter: after the release of Fedora 32) you’d have to use python3-tftpy and tftpy_server3.py respectively. The Python 3 capable update was not pushed to earlier releases. ## Einer Folks, There are reasons you would still use a TFTP server (also known as a server providing bootp services) …… an example is booting and running a diskless Unix/Linux/some network routers and switches/Xterms ……… basically a networked machine that has no hard disk or other local OS storage device ……… I have even used TFTP boot to boot and run other “Thin Client” machines running Windows. So, not as “Obsolete” as you might think 🙂 Good security practice for this kind of a setup is: 1) Make sure your TFTP server is read only 2) Make sure the segment/VLAN the TFTP server is on is properly firewalled from any other network 3) Make sure your site’s Physical security/access is well controlled …AND ….. IF you can find another way, client machines that you can actually load the base OS on, do it 🙂 Einer ## Justin Don’t forget, Cisco IP phones use a TFTP server as well. I am not a telephony administrator, so I don’t know if the enterprise that I am with uses this specific program, but the hosts themselves do use the TFTP protocol to get an OS. If you consider how many desktop phones are on the campus of a large enterprise, that is a considerable amount of hosts using the TFTP protocol. It is still relevant. ## Quine The tftp client won’t work as you describe with the standard firewall rules on a Fedora system, due to the TFTP protocol’s stateless nature. You’d have to allow traffic inbound from the TFTP server on the client’s firewall. ## Mehdi Interesting. Never heard of TFTP before. ## GW For me it is much more common to use tftp to backup Cisco router and switch config files as well as to store Cable Modem config files, so not much booting. However it is imperative in backing up router and switch config files that the create option be available for writing the file onto the tftpboot directory. Generally in a tftp config spec there is some content regarding how to spin up tftp as a non-privileged entity using nobody or similar without a login shell and ensuring the file ownership matches the tftpboot file. And I was surprised recently when I was not able to use tftp localhost in another OS but had to use the local IP address for server testing. ## Antti N. I’ve been unable to get tftp-server to bind to 255.255.255.255:69. It’s mandatory for flashing Cisco devices, which also happens to be one of the very few use cases I have. 
Tftpd-hpa handles this without any issue and Tftpd32/64 on Windows does it as well. ## r44 How make mastodon server e-mail server but tftp? ## Mark In the past apart from kickstart installs my main use for network booting was having a config file that instead of starting a kickstart would boot into ‘linux rescue’ mode, so when a machine became unbootable it was just a case of powering on the kickstart server and rebooting the failed machine to get it into rescue mode; saved hours of trying to figure out where I last put the install DVD and eventually downloading another just to get into rescue mode. Looks like I last needed that in F16 however, and quickly googling ‘rescue mode’ found only documentation for F16 and F17, I hope F30 still has it. ## Jasper Hartline I used tftpd to load up 10 nodes all diskless.. served them up a Kerrighed clusterring kernel and joined them all so we had a 10 node thin client clustering network. The process migration worked just fine with Gigabit ethernet.
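To make Edgar Hoch's drop-in suggestion above concrete: rather than copying unit files into /etc/systemd/system, you can override just the option you care about. A hedged sketch, assuming the stock tftp.service and tftp.socket units from the tftp-server package (systemctl edit creates the drop-in directory and override.conf for you):

```
# Opens an editor on /etc/systemd/system/tftp.service.d/override.conf
sudo systemctl edit tftp.service
```

In the editor, enter only the changed setting; the empty ExecStart= line is required to clear the inherited command before defining a new one:

```
[Service]
ExecStart=
ExecStart=/usr/sbin/in.tftpd -s /var/lib/tftpboot
```

Then enable the socket rather than the service, so in.tftpd only runs on demand:

```
sudo systemctl enable --now tftp.socket
```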
11,373
使用 Java 框架 Scipio ERP 创建一个在线商店
https://opensource.com/article/19/1/scipio-erp
2019-09-22T13:33:08
[ "在线商店" ]
https://linux.cn/article-11373-1.html
> > Scipio ERP 具有包罗万象的应用程序和功能。 > > > ![](/data/attachment/album/201909/22/133258hqvwax5w1zvq5ffa.jpg) 如果,你想在网上销售产品或服务,但要么找不到合适的软件,要么觉得定制成本太高?那么,[Scipio ERP](https://www.scipioerp.com) 也许正是你想要的。 Scipio ERP 是一个基于 Java 的开源的电子商务框架,具有包罗万象的应用程序和功能。这个项目于 2014 年从 [Apache OFBiz](https://ofbiz.apache.org/) 分叉而来,侧重于更好的定制和更现代的吸引力。这个电子商务组件非常丰富,可以在多商店环境中工作,同时支持国际化,具有琳琅满目的产品配置,而且它还兼容现代 HTML 框架。该软件还为许多其他业务场景提供标准应用程序,例如会计、仓库管理或销售团队自动化。它都是高度标准化的,因此易于定制,如果你想要的不仅仅是一个虚拟购物车,这是非常棒的。 该系统也使得跟上现代 Web 标准变得非常容易。所有界面都是使用系统的“[模板工具包](https://www.scipioerp.com/community/developer/freemarker-macros/)”构建的,这是一个易于学习的宏集,可以将 HTML 与所有应用程序分开。正因为如此,每个应用程序都已经标准化到核心。听起来令人困惑?它真的不是 HTML——它看起来很像 HTML,但你写的内容少了很多。 ### 初始安装 在你开始之前,请确保你已经安装了 Java 1.8(或更高版本)的 SDK 以及一个 Git 客户端。完成了?太棒了!接下来,切换到 Github 上的主分支: ``` git clone https://github.com/ilscipio/scipio-erp.git cd scipio-erp git checkout master ``` 要安装该系统,只需要运行 `./install.sh` 并从命令行中选择任一选项。在开发过程中,最好一直使用 “installation for development”(选项 1),它还将安装一系列演示数据。对于专业安装,你可以修改初始配置数据(“种子数据”),以便自动为你设置公司和目录数据。默认情况下,系统将使用内部数据库运行,但是它[也可以配置](https://www.scipioerp.com/community/developer/installation-configuration/configuration/#database-configuration)使用各种关系数据库,比如 PostgreSQL 和 MariaDB 等。 ![安装向导](/data/attachment/album/201909/22/133311l37pcpchf3s77f3v.jpg "Setup wizard") *按照安装向导完成初始配置* 通过命令 `./start.sh` 启动系统然后打开链接 <https://localhost:8443/setup/> 完成配置。如果你安装了演示数据, 你可以使用用户名 `admin` 和密码 `scipio` 进行登录。在安装向导中,你可以设置公司简介、会计、仓库、产品目录、在线商店和额外的用户配置信息。暂时在产品商店配置界面上跳过网站实体的配置。系统允许你使用不同的底层代码运行多个在线商店;除非你想这样做,一直选择默认值是最简单的。 祝贺你,你刚刚安装了 Scipio ERP!在界面上操作一两分钟,感受一下它的功能。 ### 捷径 在你进入自定义之前,这里有一些方便的命令可以帮助你: * 创建一个 shop-override:`./ant create-component-shop-override` * 创建一个新组件:`./ant create-component` * 创建一个新主题组件:`./ant create-theme` * 创建管理员用户:`./ant create-admin-user-login` * 各种其他实用功能:`./ant -p` * 用于安装和更新插件的实用程序:`./git-addons help` 另外,请记下以下位置: * 将 Scipio 作为服务运行的脚本:`/tools/scripts/` * 日志输出目录:`/runtime/logs` * 管理应用程序:`<https://localhost:8443/admin/>` * 电子商务应用程序:`<https://localhost:8443/shop/>` 最后,Scipio ERP 在以下五个主要目录中构建了所有代码: * `framework`: 框架相关的源,应用程序服务器,通用界面和配置 * `applications`: 核心应用程序 * `addons`: 第三方扩展 * `themes`: 修改界面外观 * `hot-deploy`: 你自己的组件 除了一些配置,你将在 `hot-deploy` 和 `themes` 目录中进行开发。 ### 在线商店定制 要真正使系统成为你自己的系统,请开始考虑使用[组件](https://www.scipioerp.com/community/developer/architecture/components/)。组件是一种模块化方法,可以覆盖、扩展和添加到系统中。你可以将组件视为独立 Web 模块,可以捕获有关数据库([实体](https://www.scipioerp.com/community/developer/entities/))、功能([服务](https://www.scipioerp.com/community/developer/services/))、界面([视图](https://www.scipioerp.com/community/developer/views-requests/))、[事件和操作](https://www.scipioerp.com/community/developer/events-actions/)和 Web 应用程序等的信息。由于组件功能,你可以添加自己的代码,同时保持与原始源兼容。 运行命令 `./ant create-component-shop-override` 并按照步骤创建你的在线商店组件。该操作将会在 `hot-deploy` 目录内创建一个新目录,该目录将扩展并覆盖原始的电子商务应用程序。 ![组件目录结构](/data/attachment/album/201909/22/133317n930l8lnl49a4old.jpg "component directory structure") *一个典型的组件目录结构。* 你的组件将具有以下目录结构: * `config`: 配置 * `data`: 种子数据 * `entitydef`: 数据库表定义 * `script`: Groovy 脚本的位置 * `servicedef`: 服务定义 * `src`: Java 类 * `webapp`: 你的 web 应用程序 * `widget`: 界面定义 此外,`ivy.xml` 文件允许你将 Maven 库添加到构建过程中,`ofbiz-component.xml` 文件定义整个组件和 Web 应用程序结构。除了一些在当前目录所能够看到的,你还可以在 Web 应用程序的 `WEB-INF` 目录中找到 `controller.xml` 文件。这允许你定义请求实体并将它们连接到事件和界面。仅对于界面来说,你还可以使用内置的 CMS 功能,但优先要坚持使用核心机制。在引入更改之前,请熟悉 `/applications/shop/`。 #### 添加自定义界面 还记得[模板工具包](https://www.scipioerp.com/community/developer/freemarker-macros/)吗?你会发现它在每个界面都有使用到。你可以将其视为一组易于学习的宏,它用来构建所有内容。下面是一个例子: ``` <@section title="Title"> <@heading id="slider">Slider</@heading> <@row> <@cell columns=6> <@slider id="" 
class="" controls=true indicator=true> <@slide link="#" image="https://placehold.it/800x300">Just some content…</@slide> <@slide title="This is a title" link="#" image="https://placehold.it/800x300"></@slide> </@slider> </@cell> <@cell columns=6>Second column</@cell> </@row> </@section> ``` 不是很难,对吧?同时,主题包含 HTML 定义和样式。这将权力交给你的前端开发人员,他们可以定义每个宏的输出,并坚持使用自己的构建工具进行开发。 我们快点试试吧。首先,在你自己的在线商店上定义一个请求。你将修改此代码。一个内置的 CMS 系统也可以通过 <https://localhost:8443/cms/> 进行访问,它允许你以更有效的方式创建新模板和界面。它与模板工具包完全兼容,并附带可根据你的喜好采用的示例模板。但是既然我们试图在这里理解系统,那么首先让我们采用更复杂的方法。 打开你商店 `webapp` 目录中的 [controller.xml](https://www.scipioerp.com/community/developer/views-requests/request-controller/) 文件。控制器会跟踪请求事件并相应地执行操作。下面的操作将会在 `/shop/test` 下创建一个新的请求: ``` <!-- Request Mappings --> <request-map uri="test"> <security https="true" auth="false"/> <response name="success" type="view" value="test"/> </request-map> ``` 你可以定义多个响应,如果需要,可以在请求中使用事件或服务调用来确定你可能要使用的响应。我选择了“视图”类型的响应。视图是渲染的响应;其他类型是请求重定向、转发等。系统附带各种渲染器,可让你稍后确定输出;为此,请添加以下内容: ``` <!-- View Mappings --> <view-map name="test" type="screen" page="component://mycomponent/widget/CommonScreens.xml#test"/> ``` 用你自己的组件名称替换 `my-component`。然后,你可以通过在 `widget/CommonScreens.xml` 文件的标签内添加以下内容来定义你的第一个界面: ``` <screen name="test"> <section> <actions> </actions> <widgets> <decorator-screen name="CommonShopAppDecorator" location="component://shop/widget/CommonScreens.xml"> <decorator-section name="body"> <platform-specific><html><html-template location="component://mycomponent/webapp/mycomponent/test/test.ftl"/></html></platform-specific> </decorator-section> </decorator-screen> </widgets> </section> </screen> ``` 商店界面实际上非常模块化,由多个元素组成([小部件、动作和装饰器](https://www.scipioerp.com/community/developer/views-requests/screen-widgets-decorators/))。为简单起见,请暂时保留原样,并通过添加第一个模板工具包文件来完成新网页。为此,创建一个新的 `webapp/mycomponent/test/test.ftl` 文件并添加以下内容: ``` <@alert type="info">Success!</@alert> ``` ![自定义的界面](/data/attachment/album/201909/22/133319lt99ylywl99at8l9.jpg "Custom screen") *一个自定义的界面。* 打开 <https://localhost:8443/shop/control/test/> 并惊叹于你自己的成就。 #### 自定义主题 通过创建自己的主题来修改商店的界面外观。所有主题都可以作为组件在 `themes` 文件夹中找到。运行命令 `./ant create-theme` 来创建你自己的主题。 ![主题组件布局](/data/attachment/album/201909/22/133322imewmwj1kste3psp.jpg "theme component layout") *一个典型的主题组件布局。* 以下是最重要的目录和文件列表: * 主题配置:`data/*ThemeData.xml` * 特定主题封装的 HTML:`includes/*.ftl` * 模板工具包 HTML 定义:`includes/themeTemplate.ftl` * CSS 类定义:`includes/themeStyles.ftl` * CSS 框架: `webapp/theme-title/` 快速浏览工具包中的 Metro 主题;它使用 Foundation CSS 框架并且充分利用了这个框架。然后,然后,在新构建的 `webapp/theme-title` 目录中设置自己的主题并开始开发。Foundation-shop 主题是一个非常简单的特定于商店的主题实现,你可以将其用作你自己工作的基础。 瞧!你已经建立了自己的在线商店,准备个性化定制吧! ![搭建完成的 Scipio ERP 在线商店](/data/attachment/album/201909/22/133339ckf90t90wfco9ffk.jpg "Finished Scipio ERP shop") *一个搭建完成的基于 Scipio ERP的在线商店。* ### 接下来是什么? Scipio ERP 是一个功能强大的框架,可简化复杂的电子商务应用程序的开发。为了更完整的理解,请查看项目[文档](https://www.scipioerp.com/community/developer/architecture/components/),尝试[在线演示](https://www.scipioerp.com/demo/),或者[加入社区](https://forum.scipioerp.com/). --- via: <https://opensource.com/article/19/1/scipio-erp> 作者:[Paul Piper](https://opensource.com/users/madppiper) 选题:[lujun9972](https://github.com/lujun9972) 译者:[laingke](https://github.com/laingke) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
So you want to sell products or services online, but either can't find a fitting software or think customization would be too costly? [Scipio ERP](https://www.scipioerp.com) may just be what you are looking for. Scipio ERP is a Java-based open source e-commerce framework that comes with a large range of applications and functionality. The project was forked from [Apache OFBiz](https://ofbiz.apache.org/) in 2014 with a clear focus on better customization and a more modern appeal. The e-commerce component is quite extensive and works in a multi-store setup, internationally, and with a wide range of product configurations, and it's also compatible with modern HTML frameworks. The software also provides standard applications for many other business cases, such as accounting, warehouse management, or sales force automation. It's all highly standardized and therefore easy to customize, which is great if you are looking for more than a virtual cart. The system makes it very easy to keep up with modern web standards, too. All screens are constructed using the system's "[templating toolkit](https://www.scipioerp.com/community/developer/freemarker-macros/)," an easy-to-learn macro set that separates HTML from all applications. Because of it, every application is already standardized to the core. Sounds confusing? It really isn't—it all looks a lot like HTML, but you write a lot less of it. ## Initial setup Before you get started, make sure you have Java 1.8 (or greater) SDK and a Git client installed. Got it? Great! Next, check out the master branch from GitHub: ``` git clone https://github.com/ilscipio/scipio-erp.git cd scipio-erp git checkout master ``` To set up the system, simply run **./install.sh** and select either option from the command line. Throughout development, it is best to stick to an **installation for development** (Option 1), which will also install a range of demo data. For professional installations, you can modify the initial config data ("seed data") so it will automatically set up the company and catalog data for you. By default, the system will run with an internal database, but it [can also be configured](https://www.scipioerp.com/community/developer/installation-configuration/configuration/#database-configuration) with a wide range of relational databases such as PostgreSQL and MariaDB. ![Setup wizard Setup wizard](https://opensource.com/sites/default/files/uploads/setup_step5_sm.jpg) Follow the setup wizard to complete your initial configuration, Start the system with **./start.sh** and head over to ** https://localhost:8443/setup/** to complete the configuration. If you installed with demo data, you can log in with username **admin**and password **scipio**. During the setup wizard, you can set up a company profile, accounting, a warehouse, your product catalog, your online store, and additional user profiles. Keep the website entries on the product store configuration screen for now. The system allows you to run multiple webstores with different underlying code; unless you want to do that, it is easiest to stick to the defaults. Congratulations, you just installed Scipio ERP! Play around with the screens for a minute or two to get a feel for the functionality. 
## Shortcuts Before you jump into the customization, here are a few handy commands that will help you along the way: - Create a shop-override: **./ant create-component-shop-override** - Create a new component: **./ant create-component** - Create a new theme component: **./ant create-theme** - Create admin user: **./ant create-admin-user-login** - Various other utility functions: **./ant -p** - Utility to install & update add-ons: **./git-addons help** Also, make a mental note of the following locations: - Scripts to run Scipio as a service: **/tools/scripts/** - Log output directory: **/runtime/logs** - Admin application: [https://localhost:8443/admin/](https://localhost:8443/admin/) - E-commerce application: [https://localhost:8443/shop/](https://localhost:8443/shop/) Last, Scipio ERP structures all code in the following five major directories: - Framework: framework-related sources, the application server, generic screens, and configurations - Applications: core applications - Addons: third-party extensions - Themes: modifies the look and feel - Hot-deploy: your own components Aside from a few configurations, you will be working within the hot-deploy and themes directories. ## Webstore customizations To really make the system your own, start thinking about [components](https://www.scipioerp.com/community/developer/architecture/components/). Components are a modular approach to override, extend, and add to the system. Think of components as self-contained web modules that capture information on databases ([entity](https://www.scipioerp.com/community/developer/entities/)), functions ([services](https://www.scipioerp.com/community/developer/services/)), screens ([views](https://www.scipioerp.com/community/developer/views-requests/)), [events and actions](https://www.scipioerp.com/community/developer/events-actions/), and web applications. Thanks to components, you can add your own code while remaining compatible with the original sources. Run **./ant create-component-shop-override** and follow the steps to create your webstore component. A new directory will be created inside of the hot-deploy directory, which extends and overrides the original e-commerce application. ![component directory structure component directory structure](https://opensource.com/sites/default/files/uploads/component_structure.jpg) A typical component directory structure. Your component will have the following directory structure: - config: configurations - data: seed data - entitydef: database table definitions - script: Groovy script location - servicedef: service definitions - src: Java classes - webapp: your web application - widget: screen definitions Additionally, the **ivy.xml** file allows you to add Maven libraries to the build process and the **ofbiz****-component.xml** file defines the overall component and web application structure. Apart from the obvious, you will also find a **controller.xml** file inside the web apps' **WEB-INF** directory. This allows you to define request entries and connect them to events and screens. For screens alone, you can also use the built-in CMS functionality, but stick to the core mechanics first. Familiarize yourself with **/applications/shop/** before introducing changes. ### Adding custom screens Remember the [templating toolkit](https://www.scipioerp.com/community/developer/freemarker-macros/)? You will find it used on every screen. Think of it as a set of easy-to-learn macros that structure all content. 
Here's an example: ``` <@section title="Title"> <@heading id="slider">Slider</@heading> <@row> <@cell columns=6> <@slider id="" class="" controls=true indicator=true> <@slide link="#" image="https://placehold.it/800x300">Just some content…</@slide> <@slide title="This is a title" link="#" image="https://placehold.it/800x300"></@slide> </@slider> </@cell> <@cell columns=6>Second column</@cell> </@row> </@section> ``` Not too difficult, right? Meanwhile, themes contain the HTML definitions and styles. This hands the power over to your front-end developers, who can define the output of each macro and otherwise stick to their own build tools for development. Let's give it a quick try. First, define a request on your own webstore. You will modify the code for this. A built-in CMS is also available at ** https://localhost:8443/cms/**, which allows you to create new templates and screens in a much more efficient way. It is fully compatible with the templating toolkit and comes with example templates that can be adopted to your preferences. But since we are trying to understand the system here, let's go with the more complicated way first. Open the ** controller.xml** file inside of your shop's webapp directory. The controller keeps track of request events and performs actions accordingly. The following will create a new request under **/shop/test**: ``` <!-- Request Mappings --> <request-map uri="test"> <security https="true" auth="false"/> <response name="success" type="view" value="test"/> </request-map> ``` You can define multiple responses and, if you want, you could use an event or a service call inside the request to determine which response you may want to use. I opted for a response of type "view." A view is a rendered response; other types are request-redirects, forwards, and alike. The system comes with various renderers and allows you to determine the output later; to do so, add the following: ``` <!-- View Mappings --> <view-map name="test" type="screen" page="component://mycomponent/widget/CommonScreens.xml#test"/> ``` Replace **my-component** with your own component name. Then you can define your very first screen by adding the following inside the tags within the **widget/CommonScreens.xml** file: ``` <screen name="test"> <section> <actions> </actions> <widgets> <decorator-screen name="CommonShopAppDecorator" location="component://shop/widget/CommonScreens.xml"> <decorator-section name="body"> <platform-specific><html><html-template location="component://mycomponent/webapp/mycomponent/test/test.ftl"/></html></platform-specific> </decorator-section> </decorator-screen> </widgets> </section> </screen> ``` Screens are actually quite modular and consist of multiple elements ([widgets, actions, and decorators](https://www.scipioerp.com/community/developer/views-requests/screen-widgets-decorators/)). For the sake of simplicity, leave this as it is for now, and complete the new webpage by adding your very first templating toolkit file. For that, create a new **webapp/mycomponent/test/test.ftl** file and add the following: `<@alert type="info">Success!</@alert>` ![Custom screen Custom screen](https://opensource.com/sites/default/files/uploads/success_screen_sm.jpg) A custom screen. Open ** https://localhost:8443/shop/control/test/** and marvel at your own accomplishments. ### Custom themes Modify the look and feel of the shop by creating your very own theme. All themes can be found as components inside of the themes folder. Run ** ./ant create-theme** to add your own. 
![theme component layout theme component layout](https://opensource.com/sites/default/files/uploads/theme_structure.jpg) A typical theme component layout. Here's a list of the most important directories and files: - Theme configuration: **data/*ThemeData.xml** - Theme-specific wrapping HTML: **includes/*.ftl** - Templating Toolkit HTML definition: **includes/themeTemplate.ftl** - CSS class definition: **includes/themeStyles.ftl** - CSS framework: **webapp/theme-title/*** Take a quick look at the Metro theme in the toolkit; it uses the Foundation CSS framework and makes use of all the things above. Afterwards, set up your own theme inside your newly constructed **webapp/theme-title** directory and start developing. The Foundation-shop theme is a very simple shop-specific theme implementation that you can use as a basis for your own work. Voila! You have set up your own online store and are ready to customize! ![Finished Scipio ERP shop Finished Scipio ERP shop](https://opensource.com/sites/default/files/uploads/finished_shop_1_sm.jpg) A finished shop based on Scipio ERP. ## What's next? Scipio ERP is a powerful framework that simplifies the development of complex e-commerce applications. For a more complete understanding, check out the project [documentation](https://www.scipioerp.com/community/developer/architecture/components/), try the [online demo](https://www.scipioerp.com/demo/), or [join the community](https://forum.scipioerp.com/). ## 2 Comments
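One more detail worth a sketch: the **ivy.xml** file mentioned in the component walkthrough pulls Maven artifacts into the build. Below is a hedged example using standard Apache Ivy syntax; the organisation and module values and the chosen dependency are purely illustrative, not Scipio requirements:

```
<ivy-module version="2.0">
    <info organisation="com.example" module="mycomponent"/>
    <dependencies>
        <!-- fetched from a Maven repository at build time -->
        <dependency org="org.apache.commons" name="commons-csv" rev="1.6"/>
    </dependencies>
</ivy-module>
```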
11,374
如何在互联网放置 HTML 页面
https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/
2019-09-22T23:50:49
[ "网站" ]
https://linux.cn/article-11374-1.html
![](/data/attachment/album/201909/22/234957mmzoie1imufsuwea.jpg) 我喜欢互联网的一点是在互联网放置静态页面是如此简单。今天有人问我该怎么做,所以我想我会快速地写下来! ### 只是一个 HTML 页面 我的所有网站都只是静态 HTML 和 CSS。我的网页设计技巧相对不高(<https://wizardzines.com> 是我自己开发的最复杂的网站),因此保持我所有的网站相对简单意味着我可以做一些改变/修复,而不会花费大量时间。 因此,我们将在此文章中采用尽可能简单的方式 —— 只需一个 HTML 页面。 ### HTML 页面 我们要放在互联网上的网站只是一个名为 `index.html` 的文件。你可以在 <https://github.com/jvns/website-example> 找到它,它是一个 Github 仓库,其中只包含一个文件。 HTML 文件中包含一些 CSS,使其看起来不那么无聊,部分复制自 <https://example.com>。 ### 如何将 HTML 页面放在互联网上 有以下几步: 1. 注册 [Neocities](https://neocities.org/) 帐户 2. 将 index.html 复制到你自己 neocities 站点的 index.html 中 3. 完成 上面的 `index.html` 页面位于 [julia-example-website.neocities.com](https://julia-example-website.neocities.org/) 中,如果你查看源代码,你将看到它与 github 仓库中的 HTML 相同。 我认为这可能是将 HTML 页面放在互联网上的最简单的方法(这是一次回归 Geocities,它是我在 2003 年制作我的第一个网站的方式):)。我也喜欢 Neocities (像 [glitch](https://glitch.com),我也喜欢)它能实验、学习,并有乐趣。 ### 其他选择 这绝不是唯一简单的方式,在你推送 Git 仓库时,Github pages 和 Gitlab pages 以及 Netlify 都将会自动发布站点,并且它们都非常易于使用(只需将它们连接到你的 GitHub 仓库即可)。我个人使用 Git 仓库的方式,因为 Git 不会让我感到紧张,我想知道我实际推送的页面发生了什么更改。但我想你如果第一次只想将 HTML/CSS 制作的站点放到互联网上,那么 Neocities 就是一个非常好的方法。 如果你不只是玩,而是要将网站用于真实用途,那么你或许会需要买一个域名,以便你将来可以更改托管服务提供商,但这有点不那么简单。 ### 这是学习 HTML 的一个很好的起点 如果你熟悉在 Git 中编辑文件,同时想练习 HTML/CSS 的话,我认为将它放在网站中是一个有趣的方式!我真的很喜欢它的简单性 —— 实际上这只有一个文件,所以没有其他花哨的东西需要去理解。 还有很多方法可以复杂化/扩展它,比如这个博客实际上是用 [Hugo](https://gohugo.io/) 生成的,它生成了一堆 HTML 文件并放在网络中,但从基础开始总是不错的。 --- via: <https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/> 作者:[Julia Evans](https://jvns.ca/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
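For reference, the entire “website” this post talks about can be as small as the sketch below. This is an illustrative stand-in, not the exact file from the github repository:

```
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>My first site</title>
    <style>
      /* a little CSS so the page looks less boring */
      body { font-family: sans-serif; max-width: 40em; margin: 2em auto; }
    </style>
  </head>
  <body>
    <h1>Hello, internet!</h1>
    <p>This whole site is a single static HTML file.</p>
  </body>
</html>
```

Copy it into the index.html of a Neocities site and the steps above are literally all there is to it.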
200
OK
null
11,375
Skytap 和微软将 IBM 机器搬到了 Azure
https://www.networkworld.com/article/3438904/microsoft-brings-ibm-iron-to-azure-for-on-premises-migrations.html
2019-09-23T00:52:40
[ "IBM", "微软" ]
https://linux.cn/article-11375-1.html
> > 微软再次证明了其摒弃了“非我发明”这一态度来支持客户。 > > > ![](/data/attachment/album/201909/23/005251s2z9exdlyk9k9t93.jpg) 当微软将 Azure 作为其 Windows 服务器操作系统的云计算版本发布时,它并没有使其成为仅支持 Windows 系统的版本,它还支持 Linux 系统,并且在短短几年内[其 Linux 实例的数量现在已经超过了Windows 实例的数量](https://www.openwall.com/lists/oss-security/2019/06/27/7)。 很高兴看到微软终于摆脱了这种长期以来非常有害的“非我发明”态度,该公司的最新举动确实令人惊讶。 微软与一家名为 Skytap 的公司合作,以在 Azure 云服务上提供 IBM Power9 实例,可以在 Azure 云内运行基于 Power 的系统,该系统将与其已有的 Xeon 和 Epyc 实例一同作为 Azure 的虚拟机(VM)。 Skytap 是一家有趣的公司。它由华盛顿大学的三位教授创立,专门研究本地遗留硬件的云迁移,如 IBM System I 或 Sparc 的云迁移。该公司在西雅图拥有一个数据中心,以 IBM 的硬件运行 IBM 的 PowerVM 管理程序,并且对在美国和英格兰的 IBM 数据中心提供主机托管。 该公司的座右铭是快速迁移,然后按照自己的节奏进行现代化。因此,它专注于帮助一些企业将遗留系统迁移到云,然后实现应用程序的现代化,这也是它与微软合作的目的。Azure 将通过为企业提供平台来提高传统应用程序的价值,而无需花费巨额费用重写一个新平台。 Skytap 提供了预览,可以看到使用 Skytap 上的 DB2 提升和扩展原有的 IBM i 应用程序以及通过 Azure 的物联网中心进行扩展时可能发生的情况。该应用程序无缝衔接新旧架构,并证明了不需要完全重写可靠的 IBM i 应用程序即可从现代云功能中受益。 ### 迁移到 Azure 根据协议,微软将把 IBM 的 Power S922 服务器部署在一个未声明的 Azure 区域。这些机器可以运行 PowerVM 管理程序,这些管理程序支持老式 IBM 操作系统以及 Linux 系统。 Skytap 首席执行官<ruby> 布拉德·希克 <rt> Brad Schick </rt></ruby>在一份声明中说道:“通过先替换旧技术来迁移上云既耗时又冒险。……Skytap 的愿景一直是通过一些小小的改变和较低的风险实现企业系统到云平台的迁移。与微软合作,我们将为各种遗留应用程序迁移到 Azure 提供本地支持,包括那些在 IBM i、AIX 和 Power Linux 上运行的程序。这将使企业能够通过使用 Azure 服务进行现代化来延长传统系统的寿命并增加其价值。” 随着基于 Power 应用程序的现代化,Skytap 随后将引入 DevOps CI/CD 工具链来加快软件的交付。迁移到 Azure 的 Skytap 上后,客户将能够集成 Azure DevOps,以及 Power 的 CI/CD 工具链,例如 Eradani 和 UrbanCode。 这些听起来像是迈出了第一步,但这意味着以后将会实现更多,尤其是在应用程序迁移方面。如果它仅在一个 Azure 区域中,听起来好像它们正在对该项目进行测试和验证,并可能在今年晚些时候或明年进行扩展。 --- via: <https://www.networkworld.com/article/3438904/microsoft-brings-ibm-iron-to-azure-for-on-premises-migrations.html> 作者:[Andy Patrizio](https://www.networkworld.com/author/Andy-Patrizio/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Morisun029](https://github.com/Morisun029) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
11,378
Zsh 入门
https://opensource.com/article/19/9/getting-started-zsh
2019-09-23T16:40:00
[ "zsh" ]
https://linux.cn/article-11378-1.html
> > 从 Bash 进阶到 Z-shell,改进你的 shell 体验。 > > > ![](/data/attachment/album/201909/23/163910imr1z1qw1ruo9uqs.jpg) Z-shell(Zsh)是一种 Bourne 式的交互式 POSIX shell,以其丰富的创新功能而著称。Z-Shell 用户经常会提及它的许多便利之处,赞誉它对效率的提高和丰富的自定义支持。 如果你刚接触 Linux 或 Unix,但你的经验足以让你可以打开终端并运行一些命令的话,那么你可能使用的就是 Bash shell。Bash 可能是最具有代表意义的自由软件 shell,部分是因为它具有的先进的功能,部分是因为它是大多数流行的 Linux 和 Unix 操作系统上的默认 shell。但是,随着使用的次数越多,你可能会开始发现一些细节可能能够做的更好。开源有一个众所周知的地方,那就是选择。所以,许多人选择从 Bash “毕业”到 Z。 ### Zsh 介绍 Shell 只是操作系统的接口。交互式 shell 程序允许你通过称为*标准输入*(stdin)的某个东西键入命令,并通过*标准输出*(stdout)和*标准错误*(stderr)获取输出。有很多种 shell,如 Bash、Csh、Ksh、Tcsh、Dash 和 Zsh。每个都有其开发者所认为最适合于 Shell 的功能。而这些功能的好坏,则取决于最终用户。 Zsh 具有交互式制表符补全、自动文件搜索、支持正则表达式、用于定义命令范围的高级速记符,以及丰富的主题引擎等功能。这些功能也包含在你所熟悉的其它 Bourne 式 shell 环境中,这意味着,如果你已经了解并喜欢 Bash,那么你也会熟悉 Zsh,除此以外,它还有更多的功能。你可能会认为它是一种 Bash++。 ### 安装 Zsh 用你的包管理器安装 Zsh。 在 Fedora、RHEL 和 CentOS 上: ``` $ sudo dnf install zsh ``` 在 Ubuntu 和 Debian 上: ``` $ sudo apt install zsh ``` 在 MacOS 上你可以使用 MacPorts 安装它: ``` $ sudo port install zsh ``` 或使用 Homebrew: ``` $ brew install zsh ``` 在 Windows 上也可以运行 Zsh,但是只能在 Linux 层或类似 Linux 的层之上运行,例如 [Windows 的 Linux 子系统](https://devblogs.microsoft.com/commandline/category/bash-on-ubuntu-on-windows/)(WSL)或 [Cygwin](https://www.cygwin.com/)。这类安装超出了本文的范围,因此请参考微软的文档。 ### 设置 Zsh Zsh 不是终端模拟器。它是在终端仿真器中运行的 shell。因此,要启动 Zsh,必须首先启动一个终端窗口,例如 GNOME Terminal、Konsole、Terminal、iTerm2、rxvt 或你喜欢的其它终端。然后,你可以通过键入以下命令启动 Zsh: ``` $ zsh ``` 首次启动 Zsh 时,会要求你选择一些配置选项。这些都可以在以后更改,因此请按 `1` 继续。 ``` This is the Z Shell configuration function for new users, zsh-newuser-install. (q) Quit and do nothing. (0) Exit, creating the file ~/.zshrc (1) Continue to the main menu. ``` 偏好设置分为四类,因此请从顶部开始。 1. 第一个类使你可以选择在 shell 历史记录文件中保留多少个命令。默认情况下,它设置为 1,000 行。 2. Zsh 补全是其最令人兴奋的功能之一。为了简单起见,请考虑使用其默认选项激活它,直到你习惯了它的工作方式。按 `1` 使用默认选项,按 `2` 手动设置选项。 3. 选择 Emacs 式键绑定或 Vi 式键绑定。Bash 使用 Emacs 式绑定,因此你可能已经习惯了。 4. 
最后,你可以了解(以及设置或取消设置)Zsh 的一些精妙的功能。例如,当你提供不带命令的非可执行路径时,可以通过让 Zsh 来改变目录而无需你使用 `cd` 命令。要激活这些额外选项之一,请输入选项号并输入 `s` 进行设置。请尝试打开所有选项以获得完整的 Zsh 体验。你可以稍后通过编辑 `~/.zshrc` 取消设置它们。 要完成配置,请按 `0`。 ### 使用 Zsh 刚开始,Zsh 的使用感受就像使用 Bash 一样,这无疑是其众多功能之一。例如,Bash 和 Tcsh 之间就存在严重的差异,因此如果你必须在工作中或在服务器上使用 Bash,而 Zsh 就可以在家里轻松尝试和使用,这样在 Bash 和 Zsh 之间轻松切换就是一种便利。 #### 在 Zsh 中改变目录 正是这些微小的差异使 Zsh 变得好用。首先,尝试在没有 `cd` 命令的情况下,将目录更改为 `Documents` 文件夹。简直太棒了,难以置信。如果你输入的是目录路径而没有进一步的指令,Zsh 会更改为该目录: ``` % Documents % pwd /home/seth/Documents ``` 而这会在 Bash 或任何其他普通 shell 中导致错误。但是 Zsh 却根本不是普通的 shell,而这仅仅才是开始。 #### 在 Zsh 中搜索 当你想使用普通 shell 程序查找文件时,可以使用 `find` 或 `locate` 命令。最起码,你可以使用 `ls -R` 来递归地列出一组目录。Zsh 内置有允许它在当前目录或任何其他子目录中查找文件的功能。 例如,假设你有两个名为 `foo.txt` 的文件。一个位于你的当前目录中,另一个位于名为 `foo` 的子目录中。在 Bash Shell 中,你可以使用以下命令列出当前目录中的文件: ``` $ ls foo.txt ``` 你可以通过明确指明子目录的路径来列出另一个目录: ``` $ ls foo foo.txt ``` 要同时列出这两者,你必须使用 `-R` 开关,并结合使用 `grep`: ``` $ ls -R | grep foo.txt foo.txt foo.txt ``` 但是在 Zsh 中,你可以使用 `**` 速记符号: ``` % ls **/foo.txt foo.txt foo.txt ``` 你可以在任何命令中使用此语法,而不仅限于 `ls`。想象一下在这样的场景中提高的效率:将特定文件类型从一组目录中移动到单个位置、将文本片段串联到一个文件中,或对日志进行抽取。 ### 使用 Zsh 的制表符补全 制表符补全是 Bash 和其他一些 Shell 中的高级用户功能,它变得司空见惯,席卷了 Unix 世界。Unix 用户不再需要在输入冗长而乏味的路径时使用通配符(例如输入 `/h*/s*h/V*/SCS/sc*/comp*/t*/a*/*9/04/LS*boat*v`,比输入 `/home/seth/Videos/SCS/scenes/composite/takes/approved/109/04/LS_boat-port-cargo-mover.mkv` 要容易得多)。相反,他们只要输入足够的唯一字符串即可按 `Tab` 键。例如,如果你知道在系统的根目录下只有一个以 `h` 开头的目录,则可以键入 `/h`,然后单击 `Tab`。快速、简单、高效。它还会确认路径存在;如果 `Tab` 无法完成任何操作,则说明你在错误的位置或输入了错误的路径部分。 但是,如果你有许多目录有五个或更多相同的首字母,`Tab` 会坚决拒绝进行补全。尽管在大多数现代终端中,它将(至少会)显示阻止其进行猜测你的意思的文件,但通常需要按两次 `Tab` 键才能显示它们。因此,制表符补全通常会变成来回按下键盘上字母和制表符,以至于你好像在接受钢琴独奏会的训练。 Zsh 通过循环可能的补全来解决这个小问题。如果键入 `ls ~/D` 并按 `Tab`,则 Zsh 首先使用 `Documents` 来完成命令;如果再次按 `Tab`,它将提供 `Downloads`,依此类推,直到找到所需的选项。 ### Zsh 中的通配符 在 Zsh 中,通配符的行为不同于 Bash 中用户所习惯的行为。首先,可以对其进行修改。例如,如果要列出当前目录中的所有文件夹,则可以使用修改后的通配符: ``` % ls dir0 dir1 dir2 file0 file1 % ls *(/) dir0 dir1 dir2 ``` 在此示例中,`(/)` 限定了通配符的结果,因此 Zsh 仅显示目录。要仅列出文件,请使用 `(.)`。要列出符号链接,请使用 `(@)`。要列出可执行文件,请使用 `(*)`。 ``` % ls ~/bin/*(*) fop exify tt ``` Zsh 不仅仅知道文件类型。它也可以使用相同的通配符修饰符约定根据修改时间列出。例如,如果要查找在过去八个小时内修改的文件,请使用 `mh` 修饰符(即 “modified hours” 的缩写)和小时的负整数: ``` % ls ~/Documents/*(mh-8) cal.org game.org home.org ``` 要查找超过(例如)两天前修改过的文件,修饰符更改为 `md`(即 “modified day” 的缩写),并带上天数的正整数: ``` % ls ~/Documents/*(+2) holiday.org ``` 通配符修饰符和限定符还可以做很多事情,因此,请阅读 [Zsh 手册页](https://linux.die.net/man/1/zsh),以获取全部详细信息。 #### 通配符的副作用 要像在 Bash 中使用通配符一样使用它,有时必须在 Zsh 中对通配符进行转义。例如,如果要在 Bash 中将某些文件复制到服务器上,则可以使用如下通配符: ``` $ scp IMG_*.JPG [email protected]:~/www/ph*/*19/09/14 ``` 这在 Bash 中有效,但是在 Zsh 中会返回错误,因为它在发出 `scp` 命令之前尝试在远程端扩展该变量(通配符)。为避免这种情况,必须转义远程变量(通配符): ``` % scp IMG_*.JPG [email protected]:~/www/ph\*/\*19/09/14 ``` 当你切换到新的 shell 时,这些小异常可能会使你感到沮丧。使用 Zsh 时会遇到的问题不多(体验过 Zsh 后切换回 Bash 的可能遇到更多),但是当它们发生时,请保持镇定且坦率。严格遵守 POSIX 的情况很少会出错,但是如果失败了,请查找问题以解决并继续。对于许多在工作中困在一个 shell 上而在家中困在另一个 shell 上的用户来说,[hyperpolyglot.org](http://hyperpolyglot.org/unix-shells) 已被证明其是无价的。 在我的下一篇 Zsh 文章中,我将向你展示如何安装主题和插件以定制你的 Z-Shell 甚至 Z-ier。 --- via: <https://opensource.com/article/19/9/getting-started-zsh> 作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
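To recap the glob qualifiers from this article in one runnable block (a sketch; the matched names depend on what is in your directory). Note that for “older than N days” the unit defaults to days, so the qualifier is written with a bare m:

```
setopt interactive_comments  # only needed if you paste these at the prompt
ls *(/)            # directories only
ls *(.)            # plain files only
ls **/*.txt(.)     # recursive: plain .txt files at any depth
ls *(.mh-8)        # plain files modified within the last 8 hours
ls *(.m+2)         # plain files modified more than 2 days ago
ls -l *(.om[1,3])  # the 3 most recently modified files
```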
200
OK
Z-shell (or Zsh) is an interactive Bourne-like POSIX shell known for its abundance of innovative features. Z-Shell users often cite its many conveniences and credit it for increased efficiency and extensive customization. If you're relatively new to Linux or Unix but experienced enough to have opened a terminal and run a few commands, you have probably used the Bash shell. Bash is arguably the definitive free software shell, partly because of its progressive features and partly because it ships as the default shell on most of the popular Linux and Unix operating systems. However, the more you use a shell, the more you start to find small things that might be better for the way you want to use it. If there's one thing open source is famous for, it's *choice*. Many people choose to "graduate" from Bash to Z. ## What is Zsh? A shell is just an interface to your operating system. An interactive shell allows you to type in commands through what is called *standard input*, or **stdin**, and get output through *standard output* and *standard error*, or **stdout** and **stderr**. There are many shells, including Bash, Csh, Ksh, Tcsh, Dash, and Zsh. Each has features based on what its programmers thought would be best for a shell. Whether those features are good or bad is up to you, the end user. Zsh has features like interactive Tab completion, automated file searching, regex integration, advanced shorthand for defining command scope, and a rich theme engine. These features are included in an otherwise familiar Bourne-like shell environment, meaning that if you already know and love Bash, you'll find Zsh familiar—except with more features. You might think of it as a kind of Bash++. ## Installing Zsh Install Zsh with your package manager. On Fedora, RHEL, and CentOS: `$ sudo dnf install zsh` On Ubuntu and Debian: `$ sudo apt install zsh` On MacOS, you can install it using MacPorts: `$ sudo port install zsh` Or with Homebrew: `$ brew install zsh` It's possible to run Zsh on Windows, but only on top of a Linux or Linux-like layer such as [Windows Subsystem for Linux](https://devblogs.microsoft.com/commandline/category/bash-on-ubuntu-on-windows/) (WSL) or [Cygwin](https://www.cygwin.com/). That installation is out of scope for this article, so refer to Microsoft documentation. ## Setting up Zsh Zsh is not a terminal emulator; it's a shell that runs inside a terminal emulator. So, to launch Zsh, you must first launch a terminal window such as GNOME Terminal, Konsole, Terminal, iTerm2, rxvt, or another terminal of your preference. Then you can launch Zsh by typing: `$ zsh` The first time you launch Zsh, you're asked to choose some configuration options. These can all be changed later, so press **1** to continue. ``` This is the Z Shell configuration function for new users, zsh-newuser-install. (q) Quit and do nothing. (0) Exit, creating the file ~/.zshrc (1) Continue to the main menu. ``` There are four categories of preferences, so just start at the top. - The first category lets you choose how many commands are retained in your shell history file. By default, it's set to 1,000 lines. - Zsh completion is one of its most exciting features. To keep things simple, consider activating it with its default options until you get used to how it works. Press **1**for default options,**2**to set options manually. - Choose Emacs or Vi key bindings. Bash uses Emacs bindings, so you may be used to that already. - Finally, you can learn about (and set or unset) some of Zsh's subtle features. 
For instance, you can stop using the **cd**command by allowing Zsh to initiate a directory change when you provide a non-executable path with no command. To activate one of these extra options, type the option number and enter**s**to*set*it. Try turning on all options to get the full Zsh experience. You can unset them later by editing**~/.zshrc**. To complete configuration, press **0**. ## Using Zsh At first, Zsh feels a lot like using Bash, which is unmistakably one of its many features. There are serious differences between, for instance, Bash and Tcsh, so being able to switch between Bash and Zsh is a convenience that makes Zsh easy to try and easy to use at home if you have to use Bash at work or on your server. ### Change directory with Zsh It's the small differences that make Zsh nice. First, try changing the directory to your Documents folder *without the cd command*. It seems too good to be true; but if you enter a directory path with no further instruction, Zsh changes to that directory: ``` % Documents % pwd /home/seth/Documents ``` That renders an error in Bash or any other normal shell. But Zsh is far from normal, and this is just the beginning. ### Search with Zsh When you want to find a file using a normal shell, you probably resort to the **find** or **locate** command. At the very least, you may have used **ls -R** for a recursive listing of a set of directories. Zsh has a built-in feature allowing it to find a file in the current or any other subdirectory. For instance, assume you have two files called **foo.txt**. One is located in your current directory, and the other is in a subdirectory called **foo**. In a Bash shell, you can list the file in the current directory with: ``` $ ls foo.txt ``` and you can list the other one by stating the subdirectory's path explicitly: ``` $ ls foo foo.txt ``` To list both, you must use the **-R** switch, maybe combined with **grep**: ``` $ ls -R | grep foo.txt foo.txt foo.txt ``` But in Zsh, you can use the ****** shorthand: ``` % ls **/foo.txt foo.txt foo.txt ``` And you can use this syntax with any command, not just with **ls**. Imagine your increased efficiency when moving specific file types from one collection of directories to a single location, or concatenating snippets of text into a file, or grepping through logs. ## Using Zsh Tab completion Tab completion is a power-user feature in Bash and some other shells, and it took the Unix world by storm when it became commonplace. No longer did Unix users have to resort to wildcards when typing long and tedious paths (such as **/h*/s*h/V*/SCS/sc*/comp*/t*/a*/*9/04/LS*boat*v**, which is a lot easier than typing **/home/seth/Videos/SCS/scenes/composite/takes/approved/109/04/LS_boat-port-cargo-mover.mkv**). Instead, they could just press the Tab key when they entered enough of a unique string. For example, if you know there's only one directory starting with an **h** at the root level of your system, you might type **/h** and then hit Tab. It's fast, it's simple, it's efficient. It also confirms a path exists; if Tab doesn't complete anything, you know you're looking in the wrong place or you mistyped part of the path. However, if you have many directories that share five or more of the same first letters, Tab staunchly refuses to complete. 
While in most modern terminals it will (at least) reveal the files blocking it from guessing what you mean, it usually takes two Tab presses to reveal them; therefore, Tab completion often becomes such an interplay of letters and Tabs across your keyboard that you feel like you're training for a piano recital. Zsh solves this minor annoyance by cycling through possible completions. If you type **ls ~/D** and press Tab, Zsh completes your command with **Documents** first; if you press Tab again, it offers **Downloads**, and so on until you find the one you want. ## Wildcards in Zsh Wildcards behave differently in Zsh than what Bash users are used to. First of all, they can be modified. For example, if you want to list all folders in your current directory, you can use a modified wildcard: ``` % ls dir0 dir1 dir2 file0 file1 % ls *(/) dir0 dir1 dir2 ``` In this example, the **(/)** qualifies the results of the wildcard so Zsh will display only directories. To list just the files, use **(.)**. To list symlinks, use **(@)**. To list executable files, use **(*)**. ``` % ls ~/bin/*(*) fop exify tt ``` Zsh isn't aware of only file types. It can also list according to modification time, using the same wildcard modifier convention. For example, if you want to find a file that was modified within the past eight hours, use the **mh** modifier (for **modified** and **hours**) and the negative integer of hours: ``` % ls ~/Documents/*(mh-8) cal.org game.org home.org ``` To find a file modified more than (for instance) two days ago, the unit defaults to days, so the modifier is just **m** (for **modified**) with a positive integer: ``` % ls ~/Documents/*(m+2) holiday.org ``` There's a lot more you can do with wildcard modifiers and qualifiers, so read the [Zsh man page](https://linux.die.net/man/1/zsh) for full details. ### The wildcard side effect To use wildcards the way you would use them in Bash, sometimes they must be escaped in Zsh. For instance, if you're copying some files to your server in Bash, you might use a wildcard like this: `$ scp IMG_*.JPG [email protected]:~/www/ph*/*19/09/14` That works in Bash, but Zsh returns an error because it tries to expand the wildcards in the remote path before issuing the **scp** command. To avoid this, you must escape the remote wildcards: `% scp IMG_*.JPG [email protected]:~/www/ph\*/\*19/09/14` It's these types of little exceptions that can frustrate you when you're switching to a new shell. There aren't many when using Zsh (there are probably more when switching back to Bash after experiencing Zsh) but when they happen, remain calm and be explicit. Rarely will you go wrong to adhere strictly to POSIX—but if that fails, look up the problem to solve it and move on. [Hyperpolyglot.org](http://hyperpolyglot.org/unix-shells) has proven invaluable to many users stuck on one shell at work and another at home. In my next Zsh article, I'll show you how to install themes and plugins to make your Z-Shell even Z-ier. ## 2 Comments
11,379
Git 练习:存储库导航
https://jvns.ca/blog/2019/08/30/git-exercises--navigate-a-repository/
2019-09-23T22:42:05
[ "Git" ]
https://linux.cn/article-11379-1.html
![](/data/attachment/album/201909/23/224146fo79str9bxs8wb6s.jpg)

我觉得前几天的 [curl 练习](https://jvns.ca/blog/2019/08/27/curl-exercises/)进展顺利,所以今天我醒来后,想尝试编写一些 Git 练习。Git 是一大块需要学习的技能,可能几个小时是学不完的,所以我分解练习的第一个思路是从“导航”一个存储库开始的。

我本来打算使用一个玩具测试库,但后来我想,为什么不使用真正的存储库呢?这样更有趣!因此,我们将浏览 Ruby 编程语言的存储库。你无需了解任何 C 即可完成此练习,只需熟悉一下存储库中的文件随时间变化的方式即可。

### 克隆存储库

开始之前,需要克隆存储库:

```
git clone https://github.com/ruby/ruby
```

与实际使用的大多数存储库相比,该存储库的最大不同之处在于它没有分支,但是它有很多标签,它们与分支相似,因为它们都只是指向一个提交的指针而已。因此,我们将使用标签而不是分支进行练习。*改变*标签的方式和分支非常不同,但*查看*标签和分支的方式完全相同。

### Git SHA 总是引用同一份代码

执行这些练习时要记住的最重要的一点是,如本页面所述,像 `9e3d9a2a009d2a0281802a84e1c5cc1c887edc71` 这样的 Git SHA 始终引用同一份代码。下图摘自我与凯蒂·西勒·米勒撰写的一本杂志,名为《[Oh shit, git!](https://wizardzines.com/zines/oh-shit-git/)》。(她还有一个名为 <https://ohshitgit.com/> 的很棒的网站,启发了该杂志。)

![](/data/attachment/album/201909/23/224212gedldsaen5u4qzzl.png)

我们将在练习中大量使用 Git SHA,以使你习惯于使用它们,并帮助你了解它们与标签和分支的对应关系。

### 我们将要使用的 Git 子命令

所有这些练习仅使用这 5 个 Git 子命令:

```
git checkout
git log (--oneline, --author, and -S will be useful)
git diff (--stat will be useful)
git show
git status
```

### 练习

1. 查看 matz 从 1998 年开始的 Ruby 提交。提交 ID 为 `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4`。找出当时 Ruby 的代码行数。
2. 检出当前的 master 分支。
3. 查看文件 `hash.c` 的历史记录。更改该文件的最后一个提交 ID 是什么?
4. 了解最近 20 年来 `hash.c` 的变化:将 master 分支上的文件与提交 `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4` 的文件进行比较。
5. 查找最近更改了 `hash.c` 的提交,并查看该提交的差异。
6. 对于每个 Ruby 版本,该存储库都有一堆**标签**。获取所有标签的列表。
7. 找出在标签 `v1_8_6_187` 和标签 `v1_8_6_188` 之间更改了多少文件。
8. 查找 2015 年的提交(任何一个提交)并将其检出,简单地查看一下文件,然后返回 master 分支。
9. 找出标签 `v1_8_6_187` 对应的提交。
10. 列出目录 `.git/refs/tags`。运行 `cat .git/refs/tags/v1_8_6_187` 来查看其中一个文件的内容。
11. 找出当前 `HEAD` 对应的提交 ID。
12. 找出已经对 `test/` 目录进行了多少次提交。
13. 提交 `65a5162550f58047974793cdc8067a970b2435c0` 和 `9e3d9a2a009d2a0281802a84e1c5cc1c887edc71` 之间的 `lib/telnet.rb` 的差异。该文件更改了几行?
14. 在 Ruby 2.5.1 和 2.5.2 之间进行了多少次提交(标记为 `v2_5_1` 和 `v2_5_3`)(这一步有点棘手,步骤不只一步)
15. “matz”(Ruby 的创建者)作了多少提交?
16. 最近包含 “tkutil” 一词的提交是什么?
17. 检出提交 `e51dca2596db9567bd4d698b18b4d300575d3881` 并创建一个指向该提交的新分支。
18. 运行 `git reflog` 以查看你到目前为止完成的所有存储库导航操作。

---

via: [https://jvns.ca/blog/2019/08/30/git-exercises--navigate-a-repository/](https://jvns.ca/blog/2019/08/30/git-exercises--navigate-a-repository/)

作者:[Julia Evans](https://jvns.ca/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
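补充一个简单的参考示例(假设你已按上文克隆好了仓库):练习 9 和练习 11 都可以用 `git rev-parse` 把标签或 `HEAD` 解析成提交 ID。它不在上面列出的 5 个子命令之内,仅作为另一种思路:

```
cd ruby
# 查看当前 HEAD 对应的提交 ID(练习 11)
git rev-parse HEAD
# 把标签解析为它指向的提交(练习 9);^{commit} 确保得到的是提交对象而非标签对象
git rev-parse "v1_8_6_187^{commit}"
```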
200
OK
I think the [curl exercises](https://jvns.ca/blog/2019/08/27/curl-exercises/) the other day went well, so today I woke up and wanted to try writing some Git exercises. Git is a big thing to learn, probably too big to learn in a few hours, so my first idea for how to break it down was by starting by **navigating** a repository.

I was originally going to use a toy test repository, but then I thought – why not a real repository? That’s way more fun! So we’re going to navigate the repository for the Ruby programming language. You don’t need to know any C to do this exercise, it’s just about getting comfortable with looking at how files in a repository change over time.

### clone the repository

To get started, clone the repository:

```
git clone https://github.com/ruby/ruby
```

The big different thing about this repository (as compared to most of the repositories you’ll work with in real life) is that it doesn’t have branches, but it DOES have lots of tags, which are similar to branches in that they’re both just pointers to a commit. So we’ll do exercises with tags instead of branches. The way you *change* tags and branches are very different, but the way you *look at* tags and branches is exactly the same.

### a git SHA always refers to the same code

The most important thing to keep in mind while doing these exercises is that a git SHA like `9e3d9a2a009d2a0281802a84e1c5cc1c887edc71` always refers to the same code, as explained in this page. This page is from a zine I wrote with Katie Sylor-Miller called [Oh shit, git!](https://wizardzines.com/zines/oh-shit-git/). (She also has a great site called [https://ohshitgit.com/](https://ohshitgit.com/) that inspired the zine).

![](https://wizardzines.com/zines/oh-shit-git/samples/ohshit-commit.png)

We’ll be using git SHAs really heavily in the exercises to get you used to working with them and to help understand how they correspond to tags and branches.

### git subcommands we’ll be using

All of these exercises only use 5 git subcommands:

```
git checkout
git log (--oneline, --author, and -S will be useful)
git diff (--stat will be useful)
git show
git status
```

### exercises

- Check out matz’s commit of Ruby from 1998. The commit ID is `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4`. Find out how many lines of code Ruby was at that time.
- Check out the current master branch
- Look at the history for the file `hash.c`. What was the last commit ID that changed that file?
- Get a diff of how `hash.c` has changed in the last 20ish years: compare that file on the master branch to the file at commit `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4`.
- Find a recent commit that changed `hash.c` and look at the diff for that commit
- This repository has a bunch of **tags** for every Ruby release. Get a list of all the tags.
- Find out how many files changed between tag `v1_8_6_187` and tag `v1_8_6_188`
- Find a commit (any commit) from 2015 and check it out, look at the files very briefly, then go back to the master branch.
- Find out what commit the tag `v1_8_6_187` corresponds to.
- List the directory `.git/refs/tags`. Run `cat .git/refs/tags/v1_8_6_187` to see the contents of one of those files.
- Find out what commit ID `HEAD` corresponds to right now.
- Find out how many commits have been made to the `test/` directory
- Get a diff of `lib/telnet.rb` between the commits `65a5162550f58047974793cdc8067a970b2435c0` and `9e3d9a2a009d2a0281802a84e1c5cc1c887edc71`. How many lines of that file were changed?
- How many commits were made between Ruby 2.5.1 and 2.5.2 (tags `v2_5_1` and `v2_5_3`)
- How many commits were authored by `matz` (Ruby’s creator)?
- What’s the most recent commit that included the word `tkutil`?
- Check out the commit `e51dca2596db9567bd4d698b18b4d300575d3881` and create a new branch that points at that commit.
- Run `git reflog` to see all the navigating of the repository you’ve done so far

**Question #1:** Check out matz's commit of Ruby from 1998. The commit ID is `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4`. Find out how many lines of code Ruby was at that time.

**Solution #1:**

```
git checkout 3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4
find . -name '*.c' | xargs wc -l
```

**Question #2:** Check out the current master branch

**Solution #2:**

```
git checkout master
```

**Question #3:** Look at the history for the file `hash.c`. What was the last commit ID that changed that file?

**Solution #3:**

```
git log hash.c
# look at the first line to get the commit ID.
# I got 3df37259d81d9fc71f8b4f0b8d45dc9d0af81ab4.
```

**Question #4:** Get a diff of how `hash.c` has changed in the last 20ish years: compare that file on the master branch to the file at commit `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4`.

**Solution #4:**

```
git diff 3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4 hash.c
```

**Question #5:** Find a recent commit that changed `hash.c` and look at the diff for that commit

**Solution #5:**

```
git log hash.c
# look at the first line to get the commit ID.
# I got 3df37259d81d9fc71f8b4f0b8d45dc9d0af81ab4.
git show 3df37259d81d9fc71f8b4f0b8d45dc9d0af81ab4
```

**Question #6:** This repository has a bunch of **tags** for every Ruby release. Get a list of all the tags.

**Solution #6:**

```
git tag
```

**Question #7:** Find out how many files changed between tag `v1_8_6_187` and tag `v1_8_6_188`

**Solution #7:**

```
git diff v1_8_6_187 v1_8_6_188 --stat
# 5 files!
```

**Question #8:** Find a commit (any commit) from 2015 and check it out, look at the files very briefly, then go back to the master branch.

**Solution #8:**

```
git log | grep -C 2 ' 2015 ' | head
git checkout bd5d443a56ee4bcb59a0a08776c07dea3ee60121
ls
git checkout master
```

**Question #9:** Find out what commit the tag `v1_8_6_187` corresponds to.

**Solution #9:**

```
git show v1_8_6_187
```

**Question #10:** List the directory `.git/refs/tags`. Run `cat .git/refs/tags/v1_8_6_187` to see the contents of one of those files.

**Solution #10:**

```
$ cat .git/refs/tags/v1_8_6_187
928e6916b25aee5b2b379999a3fa8816d40db714
```

**Question #11:** Find out what commit ID `HEAD` corresponds to right now.

**Solution #11:**

```
git show HEAD
```

**Question #12:** Find out how many commits have been made to the `test/` directory

**Solution #12:**

```
git log --oneline test/ | wc
```

**Question #13:** Get a diff of `lib/telnet.rb` between the commits `f2a91397fd7f9ca5bb3d296ec6df2de6f9cfc7cb` and `e44c9b11475d0be2f63286c1332a48da1b4d8626`. How many lines of that file were changed?

**Solution #13:**

```
git diff f2a91397fd7f9..e44c9b11475d0 lib/telnet.rb
```

**Question #14:** How many commits were made between Ruby 2.5.1 and 2.5.2 (tags `v2_5_1` and `v2_5_3`)

**Solution #14:**

```
git log v2_5_1..v2_5_3 --oneline | wc
```

**Question #15:** How many commits were authored by `matz` (Ruby's creator)?

**Solution #15:**

```
git log --oneline --author matz | wc -l
```

**Question #16:** What's the most recent commit that included the word `tkutil`?
**Solution #16:** git log -S tkutil # result is 6c5f5233db596c2c7708d5807d9a925a3a0ee73a **Question #17:** Check out the commit `e51dca2596db9567bd4d698b18b4d300575d3881` and create a new branch that points at that commit. **Solution #17:** git checkout e51dca2596db9567bd4d698b18b4d300575d3881 git branch my-branch **Question #18:** Run `git reflog` to see all the navigating of the repository you've done so far **Solution #18:** git reflog
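A hedged side note on the counting exercises (#12, #14, and #15): piping `git log --oneline` into `wc` works fine, but `git rev-list --count` prints the number directly. A minimal sketch, assuming the same Ruby checkout:

```
# commits touching test/ (exercise 12)
git rev-list --count HEAD -- test/

# commits between the two release tags (exercise 14)
git rev-list --count v2_5_1..v2_5_3

# commits authored by matz (exercise 15)
git rev-list --count --author=matz HEAD
```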
11,380
在 Linux 中如何移动文件
https://opensource.com/article/19/8/moving-files-linux-depth
2019-09-24T16:29:38
[ "移动" ]
https://linux.cn/article-11380-1.html
> > 无论你是刚接触 Linux 的文件移动的新手还是已有丰富的经验,你都可以通过此深入的文章中学到一些东西。 > > > ![](/data/attachment/album/201909/24/162919ygppgeevgrj0ppgv.jpg) 在 Linux 中移动文件看似比较简单,但是可用的选项却比大多数人想象的要多。本文介绍了初学者如何在 GUI 和命令行中移动文件,还介绍了底层实际上发生了什么,并介绍了许多有一定经验的用户也很少使用的命令行选项。 ### 移动什么? 在研究移动文件之前,有必要仔细研究*移动*文件系统对象时实际发生的情况。当文件创建后,会将其分配给一个<ruby> 索引节点 <rt> inode </rt></ruby>,这是文件系统中用于数据存储的固定点。你可以使用 [ls](https://opensource.com/article/19/7/master-ls-command) 命令看到文件对应的索引节点: ``` $ ls --inode example.txt 7344977 example.txt ``` 移动文件时,实际上并没有将数据从一个索引节点移动到另一个索引节点,只是给文件对象分配了新的名称或文件路径而已。实际上,文件在移动时会保留其权限,因为移动文件不会更改或重新创建文件。(LCTT 译注:在不跨卷、分区和存储器时,移动文件是不会重新创建文件的;反之亦然) 文件和目录的索引节点并没有暗示这种继承关系,而是由文件系统本身决定的。索引节点的分配是基于文件创建时的顺序分配的,并且完全独立于你组织计算机文件的方式。一个目录“内”的文件的索引节点号可能比其父目录的索引节点号更低或更高。例如: ``` $ mkdir foo $ mv example.txt foo $ ls --inode 7476865 foo $ ls --inode foo 7344977 example.txt ``` 但是,将文件从一个硬盘驱动器移动到另一个硬盘驱动器时,索引节点基本上会更改。发生这种情况是因为必须将新数据写入新文件系统。因此,在 Linux 中,移动和重命名文件的操作实际上是相同的操作。无论你将文件移动到另一个目录还是在同一目录使用新名称,这两个操作均由同一个底层程序执行。 本文重点介绍将文件从一个目录移动到另一个目录。 ### 用鼠标移动文件 图形用户界面是大多数人都熟悉的友好的抽象层,位于复杂的二进制数据集合之上。这也是在 Linux 桌面上移动文件的首选方法,也是最直观的方法。从一般意义上来说,如果你习惯使用台式机,那么你可能已经知道如何在硬盘驱动器上移动文件。例如,在 GNOME 桌面上,将文件从一个窗口拖放到另一个窗口时的默认操作是移动文件而不是复制文件,因此这可能是该桌面上最直观的操作之一: ![Moving a file in GNOME.](/data/attachment/album/201909/24/162941j3bvbvvxnc5bxvvc.jpg "Moving a file in GNOME.") 而 KDE Plasma 桌面中的 Dolphin 文件管理器默认情况下会提示用户以执行不同的操作。拖动文件时按住 `Shift` 键可强制执行移动操作: ![Moving a file in KDE.](/data/attachment/album/201909/24/162943i4px4s5mz48saswv.jpg "Moving a file in KDE.") ### 在命令行移动文件 用于在 Linux、BSD、Illumos、Solaris 和 MacOS 上移动文件的 shell 命令是 `mv`。不言自明,简单的命令 `mv <source> <destination>` 会将源文件移动到指定的目标,源和目标都由[绝对](https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them)或[相对](https://opensource.com/article/19/7/navigating-filesystem-relative-paths)文件路径定义。如前所述,`mv` 是 [POSIX](https://opensource.com/article/19/7/what-posix-richard-stallman-explains) 用户的常用命令,其有很多不为人知的附加选项,因此,无论你是新手还是有经验的人,本文都会为你带来一些有用的选项。 但是,不是所有 `mv` 命令都是由同一个人编写的,因此取决于你的操作系统,你可能拥有 GNU `mv`、BSD `mv` 或 Sun `mv`。命令的选项因其实现而异(BSD `mv` 根本没有长选项),因此请参阅你的 `mv` 手册页以查看支持的内容,或安装你的首选版本(这是开源的奢侈之处)。 #### 移动文件 要使用 `mv` 将文件从一个文件夹移动到另一个文件夹,请记住语法 `mv <source> <destination>`。 例如,要将文件 `example.txt` 移到你的 `Documents` 目录中: ``` $ touch example.txt $ mv example.txt ~/Documents $ ls ~/Documents example.txt ``` 就像你通过将文件拖放到文件夹图标上来移动文件一样,此命令不会将 `Documents` 替换为 `example.txt`。相反,`mv` 会检测到 `Documents` 是一个文件夹,并将 `example.txt` 文件放入其中。 你还可以方便地在移动文件时重命名该文件: ``` $ touch example.txt $ mv example.txt ~/Documents/foo.txt $ ls ~/Documents foo.txt ``` 这很重要,这使你不用将文件移动到另一个位置,也可以重命名文件,例如: ``` $ touch example.txt $ mv example.txt foo2.txt $ ls foo2.txt` ``` #### 移动目录 不像 [cp](https://opensource.com/article/19/7/copying-files-linux) 命令,`mv` 命令处理文件和目录没有什么不同,你可以用同样的格式移动目录或文件: ``` $ touch file.txt $ mkdir foo_directory $ mv file.txt foo_directory $ mv foo_directory ~/Documents ``` #### 安全地移动文件 如果你移动一个文件到一个已有同名文件的地方,默认情况下,`mv` 会用你移动的文件替换目标文件。这种行为被称为<ruby> 清除 <rt> clobbering </rt></ruby>,有时候这就是你想要的结果,而有时则不是。 一些发行版将 `mv` 别名定义为 `mv --interactive`(你也可以[自己写一个](https://opensource.com/article/19/7/bash-aliases)),这会提醒你确认是否覆盖。而另外一些发行版没有这样做,那么你可以使用 `--interactive` 或 `-i` 选项来确保当两个文件有一样的名字而发生冲突时让 `mv` 请你来确认。 ``` $ mv --interactive example.txt ~/Documents mv: overwrite '~/Documents/example.txt'? 
```

如果你不想手动干预,那么可以使用 `--no-clobber` 或 `-n`。该选项会在发生冲突时静默拒绝移动操作。在这个例子当中,一个名为 `example.txt` 的文件已经存在于 `~/Documents`,所以它不会如命令要求从当前目录移走。

```
$ mv --no-clobber example.txt ~/Documents
$ ls
example.txt
```

#### 带备份的移动

如果你使用 GNU `mv`,有一个备份选项提供了另外一种安全移动的方式。要为任何冲突的目标文件创建备份文件,可以使用 `-b` 选项。

```
$ mv -b example.txt ~/Documents
$ ls ~/Documents
example.txt example.txt~
```

这个选项可以确保 `mv` 完成移动操作,但是也会保护目录位置的已有文件。

另外的 GNU 备份选项是 `--backup`,它带有一个定义了备份文件如何命名的参数。

* `existing`:如果在目标位置已经存在了编号备份文件,那么会创建编号备份。否则,会使用 `simple` 方式。
* `none`:即使设置了 `--backup`,也不会创建备份。当 `mv` 被别名定义为带有备份选项时,这个选项可以覆盖这种行为。
* `numbered`:给目标文件名附加一个编号。
* `simple`:给目标文件附加一个 `~`,当你日常使用带有 `--ignore-backups` 选项的 [ls](https://opensource.com/article/19/7/master-ls-command) 时,这些文件可以很方便地隐藏起来。

简单来说:

```
$ mv --backup=numbered example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug 1 17:23 example.txt
-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
```

可以使用环境变量 `VERSION_CONTROL` 设置默认的备份方案。你可以在 `~/.bashrc` 文件中设置该环境变量,也可以在命令前动态设置:

```
$ VERSION_CONTROL=numbered mv --backup example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug 1 17:23 example.txt
-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug 1 17:22 example.txt.~2~
```

`--backup` 选项仍然遵循 `--interactive` 或 `-i` 选项,因此即使它会在覆盖目标文件前先创建备份,它仍会提示你是否覆盖目标文件:

```
$ mv --backup=numbered example.txt ~/Documents
mv: overwrite '~/Documents/example.txt'? y
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug 1 17:24 example.txt
-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug 1 17:22 example.txt.~2~
-rw-rw-r--. 1 seth users 128 Aug 1 17:23 example.txt.~3~
```

你可以使用 `--force` 或 `-f` 选项覆盖 `-i`。

```
$ mv --backup=numbered --force example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug 1 17:26 example.txt
-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug 1 17:22 example.txt.~2~
-rw-rw-r--. 1 seth users 128 Aug 1 17:24 example.txt.~3~
-rw-rw-r--. 1 seth users 128 Aug 1 17:25 example.txt.~4~
```

`--backup` 选项在 BSD `mv` 中不可用。

#### 一次性移动多个文件

移动多个文件时,`mv` 会将最终目录视为目标:

```
$ mv foo bar baz ~/Documents
$ ls ~/Documents
foo bar baz
```

如果最后一个项目不是目录,则 `mv` 返回错误:

```
$ mv foo bar baz
mv: target 'baz' is not a directory
```

GNU `mv` 的语法相当灵活。如果无法把目标目录作为提供给 `mv` 命令的最终参数,请使用 `--target-directory` 或 `-t` 选项:

```
$ mv --target-directory=~/Documents foo bar baz
$ ls ~/Documents
foo bar baz
```

当从某些其他命令的输出构造 `mv` 命令时(例如 `find` 命令、`xargs` 或 [GNU Parallel](https://opensource.com/article/18/5/gnu-parallel)),这特别有用。(文末附有一个简单的配套示例。)

#### 基于修改时间移动

使用 GNU `mv`,你可以根据要移动的文件是否比要替换的目标文件新来定义移动动作。该方式可以通过 `--update` 或 `-u` 选项使用,在 BSD `mv` 中不可用:

```
$ ls -l ~/Documents
-rw-rw-r--. 1 seth users 128 Aug 1 17:32 example.txt
$ ls -l
-rw-rw-r--. 1 seth users 128 Aug 1 17:42 example.txt
$ mv --update example.txt ~/Documents
$ ls -l ~/Documents
-rw-rw-r--. 1 seth users 128 Aug 1 17:42 example.txt
$ ls -l
```

此结果仅基于文件的修改时间,而不是两个文件的差异,因此请谨慎使用。只需使用 `touch` 命令即可愚弄 `mv`:

```
$ cat example.txt
one
$ cat ~/Documents/example.txt
one
two
$ touch example.txt
$ mv --update example.txt ~/Documents
$ cat ~/Documents/example.txt
one
```

显然,这不是最智能的更新功能,但是它提供了防止覆盖最新数据的基本保护。

### 移动

除了 `mv` 命令以外,还有更多的移动数据的方法,但是作为这项任务的默认程序,`mv` 是一个很好的通用选择。现在你知道了有哪些可以使用的选项,可以比以前更智能地使用 `mv` 了。

---

via: <https://opensource.com/article/19/8/moving-files-linux-depth>

作者:[Seth Kenlon](https://opensource.com/users/sethhttps://opensource.com/users/doni08521059) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
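针对上文提到的 `--target-directory`(`-t`)与 `find`、`xargs` 配合的用法,这里补充一个简单的示例(假设使用 GNU `mv` 与 GNU `find`,目标目录 `~/archive` 为虚构示例):

```
# 把当前目录树下所有 .log 文件移动到 ~/archive
# -print0 与 -0 用 NUL 分隔文件名,可以安全处理带空格的文件名
find . -type f -name '*.log' -print0 | xargs -0 mv -t ~/archive
```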
200
OK
Moving files in Linux can seem relatively straightforward, but there are more options available than most realize. This article teaches beginners how to move files in the GUI and on the command line, but also explains what’s actually happening under the hood, and addresses command line options that many experienced users have rarely explored.

## Moving what?

Before delving into moving files, it’s worth taking a closer look at what actually happens when *moving* file system objects. When a file is created, it is assigned to an *inode*, which is a fixed point in a file system that’s used for data storage. You can find what inode maps to a file with the `ls` command:

```
$ ls --inode example.txt
7344977 example.txt
```

When you move a file, you don’t actually move the data from one inode to another, you only assign the file object a new name or file path. In fact, a file retains its permissions when it’s moved, because moving a file doesn’t change or re-create it.

File and directory inodes never imply inheritance and are dictated by the filesystem itself. Inode assignment is sequential based on when the file was created and is entirely independent of how you organize your computer. A file "inside" a directory may have a lower inode number than its parent directory, or a higher one. For example:

```
$ mkdir foo
$ mv example.txt foo
$ ls --inode
7476865 foo
$ ls --inode foo
7344977 example.txt
```

When moving a file from one hard drive to another, however, the inode is very likely to change. This happens because the new data has to be written onto a new filesystem. For this reason, in Linux the act of moving and renaming files is literally the same action. Whether you move a file to another directory or to the same directory with a new name, both actions are performed by the same underlying program.

This article focuses on moving files from one directory to another.

## Moving with a mouse

The GUI is a friendly and, to most people, familiar layer of abstraction on top of a complex collection of binary data. It’s also the first and most intuitive way to move files on Linux. If you’re used to the desktop experience, in a generic sense, then you probably already know how to move files around your hard drive. In the GNOME desktop, for instance, the default action when dragging and dropping a file from one window to another is to move the file rather than to copy it, so it’s probably one of the most intuitive actions on the desktop:

![Moving a file in GNOME.](https://opensource.com/sites/default/files/uploads/gnome-mv.jpg)

The Dolphin file manager in the KDE Plasma desktop defaults to prompting the user for an action. Holding the **Shift** key while dragging a file forces a move action:

![Moving a file in KDE.](https://opensource.com/sites/default/files/uploads/kde-mv.jpg)

## Moving on the command line

The shell command intended for moving files on Linux, BSD, Illumos, Solaris, and MacOS is **mv**. A simple command with a predictable syntax, **mv <source> <destination>** moves a source file to the specified destination, each defined by either an [absolute](https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them) or [relative](https://opensource.com/article/19/7/navigating-filesystem-relative-paths) file path.
As mentioned before, **mv** is such a common command for [POSIX](https://opensource.com/article/19/7/what-posix-richard-stallman-explains) users that many of its additional modifiers are generally unknown, so this article brings a few useful modifiers to your attention whether you are new or experienced.

Not all **mv** commands were written by the same people, though, so you may have GNU **mv**, BSD **mv**, or Sun **mv**, depending on your operating system. Command options differ from implementation to implementation (BSD **mv** has no long options at all) so refer to your **mv** man page to see what’s supported, or install your preferred version instead (that’s the luxury of open source).

### Moving a file

To move a file from one folder to another with **mv**, remember the syntax **mv <source> <destination>**. For instance, to move the file **example.txt** into your **Documents** directory:

```
$ touch example.txt
$ mv example.txt ~/Documents
$ ls ~/Documents
example.txt
```

Just like when you move a file by dragging and dropping it onto a folder icon, this command doesn’t replace **Documents** with **example.txt**. Instead, **mv** detects that **Documents** is a folder, and places the **example.txt** file into it.

You can also, conveniently, rename the file as you move it:

```
$ touch example.txt
$ mv example.txt ~/Documents/foo.txt
$ ls ~/Documents
foo.txt
```

That’s important because it enables you to rename a file even when you don’t want to move it to another location, like so:

```
$ touch example.txt
$ mv example.txt foo2.txt
$ ls
foo2.txt
```

### Moving a directory

The **mv** command doesn’t differentiate a file from a directory the way [cp](https://opensource.com/article/19/7/copying-files-linux) does. You can move a directory or a file with the same syntax:

```
$ touch file.txt
$ mkdir foo_directory
$ mv file.txt foo_directory
$ mv foo_directory ~/Documents
```

### Moving a file safely

If you copy a file to a directory where a file of the same name already exists, the **mv** command replaces the destination file with the one you are moving, by default. This behavior is called *clobbering*, and sometimes it’s exactly what you intend. Other times, it is not.

Some distributions *alias* (or you might [write your own](https://opensource.com/article/19/7/bash-aliases)) **mv** to **mv --interactive**, which prompts you for confirmation. Some do not. Either way, you can use the **--interactive** or **-i** option to ensure that **mv** asks for confirmation in the event that two files of the same name are in conflict:

```
$ mv --interactive example.txt ~/Documents
mv: overwrite '~/Documents/example.txt'?
```

If you do not want to manually intervene, use **--no-clobber** or **-n** instead. This flag silently rejects the move action in the event of conflict. In this example, a file named **example.txt** already exists in **~/Documents**, so it doesn't get moved from the current directory as instructed (a short script sketch at the end of this article builds on this behavior):

```
$ mv --no-clobber example.txt ~/Documents
$ ls
example.txt
```

### Moving with backups

If you’re using GNU **mv**, there are backup options offering another means of safe moving. To create a backup of any conflicting destination file, use the **-b** option:

```
$ mv -b example.txt ~/Documents
$ ls ~/Documents
example.txt example.txt~
```

This flag ensures that **mv** completes the move action, but also protects any pre-existing file in the destination location.
Another GNU backup option is **--backup**, which takes an argument defining how the backup file is named:

- **existing**: If numbered backups already exist in the destination, then a numbered backup is created. Otherwise, the **simple** scheme is used.
- **none**: Does not create a backup even if **--backup** is set. This option is useful to override a **mv** alias that sets the backup option.
- **numbered**: Appends the destination file with a number.
- **simple**: Appends the destination file with a **~**, which can conveniently be hidden from your daily view with the **--ignore-backups** option for [ls](https://opensource.com/article/19/7/master-ls-command).

For example:

```
$ mv --backup=numbered example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug 1 17:23 example.txt
-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
```

A default backup scheme can be set with the environment variable VERSION_CONTROL. You can set environment variables in your **~/.bashrc** file or dynamically before your command:

```
$ VERSION_CONTROL=numbered mv --backup example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug 1 17:23 example.txt
-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug 1 17:22 example.txt.~2~
```

The **--backup** option still respects the **--interactive** or **-i** option, so it still prompts you to overwrite the destination file, even though it creates a backup before doing so:

```
$ mv --backup=numbered example.txt ~/Documents
mv: overwrite '~/Documents/example.txt'? y
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug 1 17:24 example.txt
-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug 1 17:22 example.txt.~2~
-rw-rw-r--. 1 seth users 128 Aug 1 17:23 example.txt.~3~
```

You can override **-i** with the **--force** or **-f** option.

```
$ mv --backup=numbered --force example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug 1 17:26 example.txt
-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug 1 17:22 example.txt.~2~
-rw-rw-r--. 1 seth users 128 Aug 1 17:24 example.txt.~3~
-rw-rw-r--. 1 seth users 128 Aug 1 17:25 example.txt.~4~
```

The **--backup** option is not available in BSD **mv**.

### Moving many files at once

When moving multiple files, **mv** treats the final directory named as the destination:

```
$ mv foo bar baz ~/Documents
$ ls ~/Documents
foo bar baz
```

If the final item is not a directory, **mv** returns an error:

```
$ mv foo bar baz
mv: target 'baz' is not a directory
```

The syntax of GNU **mv** is fairly flexible. If you are unable to provide the **mv** command with the destination as the final argument, use the **--target-directory** or **-t** option:

```
$ mv --target-directory=~/Documents foo bar baz
$ ls ~/Documents
foo bar baz
```

This is especially useful when constructing **mv** commands from the output of some other command, such as the **find** command, **xargs**, or [GNU Parallel](https://opensource.com/article/18/5/gnu-parallel).

### Moving based on mtime

With GNU **mv**, you can define a move action based on whether the file being moved is newer than the destination file it would replace. This option is possible with the **--update** or **-u** option, and is not available in BSD **mv**:

```
$ ls -l ~/Documents
-rw-rw-r--. 1 seth users 128 Aug 1 17:32 example.txt
$ ls -l
-rw-rw-r--. 1 seth users 128 Aug 1 17:42 example.txt
$ mv --update example.txt ~/Documents
$ ls -l ~/Documents
-rw-rw-r--. 1 seth users 128 Aug 1 17:42 example.txt
$ ls -l
```

This result is exclusively based on the files’ modification time, not on a diff of the two files, so use it with care. It’s easy to fool **mv** with a mere **touch** command:

```
$ cat example.txt
one
$ cat ~/Documents/example.txt
one
two
$ touch example.txt
$ mv --update example.txt ~/Documents
$ cat ~/Documents/example.txt
one
```

Obviously, this isn’t the most intelligent update function available, but it offers basic protection against overwriting recent data.

## Moving

There are more ways to move data than just the **mv** command, but as the default program for the job, **mv** is a good universal option. Now that you know what options you have available, you can use **mv** smarter than ever before.
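One hedged footnote on **--no-clobber**: with GNU **mv**, a skipped move is silent and (at least on the coreutils versions this article describes) still exits with status 0, so a script cannot rely on the exit code alone. A minimal sketch of one workaround, checking whether the source file is still in place afterward; the script name is made up:

```
#!/bin/bash
# move-safely.sh -- assumes GNU mv
src=$1
dest=$2

mv -n "$src" "$dest" || exit
if [ -e "$src" ]; then
    echo "not moved: something with the same name already exists in $dest" >&2
    exit 1
fi
```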
11,382
在 Linux 中怎样移除(删除)符号链接
https://www.2daygeek.com/remove-delete-symbolic-link-softlink-linux/
2019-09-24T17:07:00
[ "符号链接" ]
https://linux.cn/article-11382-1.html
![](/data/attachment/album/201909/24/170625no4babjqq7jmm3qb.jpg)

你可能有时需要在 Linux 上创建或者删除符号链接。如果有,你知道该怎样做吗?之前你做过吗?你踩坑没有?如果你踩过坑,那没什么问题。如果还没有,别担心,我们将在这里帮助你。

使用 `rm` 和 `unlink` 命令就能完成移除(删除)符号链接的操作。

### 什么是符号链接?

符号链接(symlink)又称软链接,它是一种特殊的文件类型,在 Linux 中该文件指向另一个文件或者目录。它类似于 Windows 中的快捷方式。它能在相同或者不同的文件系统或分区中指向一个文件或者目录。

符号链接通常用来链接库文件。它也可用于链接日志文件和挂载的 NFS(网络文件系统)上的文件夹。

### 什么是 rm 命令?

[rm 命令](https://www.2daygeek.com/linux-remove-files-directories-folders-rm-command/) 被用来移除文件和目录。它非常危险,你每次使用 `rm` 命令的时候要非常小心。

### 什么是 unlink 命令?

`unlink` 命令被用来移除特殊的文件。它是作为 GNU Coreutils 的一部分安装的。

### 1) 使用 rm 命令怎样移除符号链接文件

`rm` 命令是在 Linux 中使用最频繁的命令,它允许我们像下列描述那样去移除符号链接。

```
# rm symlinkfile
```

始终将 `rm` 命令与 `-i` 一起使用以了解正在执行的操作。

```
# rm -i symlinkfile1
rm: remove symbolic link ‘symlinkfile1’? y
```

它允许我们一次移除多个符号链接:

```
# rm -i symlinkfile2 symlinkfile3
rm: remove symbolic link ‘symlinkfile2’? y
rm: remove symbolic link ‘symlinkfile3’? y
```

#### 1a) 使用 rm 命令怎样移除符号链接目录

这和移除符号链接文件一样。使用下列命令移除符号链接目录。

```
# rm -i symlinkdir
rm: remove symbolic link ‘symlinkdir’? y
```

使用下列命令移除多个符号链接目录。

```
# rm -i symlinkdir1 symlinkdir2
rm: remove symbolic link ‘symlinkdir1’? y
rm: remove symbolic link ‘symlinkdir2’? y
```

如果你在结尾加上 `/`,将无法删除这个符号链接目录,而是会得到一个错误。

```
# rm -i symlinkdir/
rm: cannot remove ‘symlinkdir/’: Is a directory
```

你可以增加 `-r` 去处理上述问题。**但如果你增加这个参数,它将会删除目标目录下的内容,并且它不会删除这个符号链接文件。**(LCTT 译注:这可能不是你的原意。)

```
# rm -ri symlinkdir/
rm: descend into directory ‘symlinkdir/’? y
rm: remove regular file ‘symlinkdir/file4.txt’? y
rm: remove directory ‘symlinkdir/’? y
rm: cannot remove ‘symlinkdir/’: Not a directory
```

### 2) 使用 unlink 命令怎样移除符号链接

`unlink` 命令删除指定文件。它一次仅接受一个文件。

删除符号链接文件:

```
# unlink symlinkfile
```

删除符号链接目录:

```
# unlink symlinkdir2
```

如果你在结尾加上 `/`,你不能使用 `unlink` 命令删除符号链接目录。

```
# unlink symlinkdir3/
unlink: cannot unlink ‘symlinkdir3/’: Not a directory
```

---

via: <https://www.2daygeek.com/remove-delete-symbolic-link-softlink-linux/>

作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[arrowfeng](https://github.com/arrowfeng) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
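删除符号链接之前,先找出哪些链接已经失效往往很有帮助。下面是一个简单的补充示例(假设使用 GNU `find`,其 `-xtype l` 可以匹配指向不存在目标的失效符号链接):

```
# 列出当前目录树下所有失效的符号链接
find . -xtype l

# 确认无误后再删除,-i 会在删除每个链接前询问
find . -xtype l -exec rm -i {} +
```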
404
Not Found
null
11,383
Go 语言在极小硬件上的运用(一)
https://ziutek.github.io/2018/03/30/go_on_very_small_hardware.html
2019-09-24T21:03:09
[ "Go" ]
https://linux.cn/article-11383-1.html
![](/data/attachment/album/201909/24/210256yihkuy8kcigugr2h.png)

Go 语言,能在多低下的配置上运行并发挥作用呢?

我最近购买了一个特别便宜的开发板:

![STM32F030F4P6](/data/attachment/album/201909/24/210325sk2snn6u6hs82tu7.jpg)

我购买它的理由有三个。首先,我(作为程序员)从未接触过 STM32F0 系列的开发板。其次,STM32F10x 系列也有些过时了。STM32F0 系列的 MCU 一样便宜,有更新一些的外设,对系列产品进行了改进,问题修复也做得更好了。最后,为了这篇文章,我选用了这一系列中最低配置的开发板,整件事情就变得有趣起来了。

### 硬件部分

[STM32F030F4P6](http://www.st.com/content/st_com/en/products/microcontrollers/stm32-32-bit-arm-cortex-mcus/stm32-mainstream-mcus/stm32f0-series/stm32f0x0-value-line/stm32f030f4.html) 给人留下了很深的印象:

* CPU: [Cortex M0](https://en.wikipedia.org/wiki/ARM_Cortex-M#Cortex-M0) 48 MHz(最低配置,只有 12000 个逻辑门电路)
* RAM: 4 KB,
* Flash: 16 KB,
* ADC、SPI、I2C、USART 和几个定时器

以上这些采用了 TSSOP20 封装。正如你所见,这是一个很小的 32 位系统。

### 软件部分

如果你想知道如何在这块开发板上使用 [Go](https://golang.org/) 编程,你需要反复阅读硬件规范手册。你必须面对这样的真实情况:在 Go 编译器中给 Cortex-M0 提供支持的可能性很小。而且,这还仅仅是要解决的第一个问题。

我会使用 [Emgo](https://github.com/ziutek/emgo),但别担心,之后你会看到,它如何让 Go 在如此小的系统上尽可能发挥作用。

在我拿到这块开发板之前,[stm32/hal](https://github.com/ziutek/emgo/tree/master/egpath/src/stm32/hal) 对 F0 系列 MCU 没有任何支持。在简单研究[参考手册](http://www.st.com/resource/en/reference_manual/dm00091010.pdf)后,我发现 STM32F0 系列是 STM32F3 的削减版,这让在新端口上开发的工作变得容易了一些。

如果你想接着本文的步骤做下去,需要先安装 Emgo

```
cd $HOME
git clone https://github.com/ziutek/emgo/
cd emgo/egc
go install
```

然后设置一下环境变量

```
export EGCC=path_to_arm_gcc      # eg. /usr/local/arm/bin/arm-none-eabi-gcc
export EGLD=path_to_arm_linker   # eg. /usr/local/arm/bin/arm-none-eabi-ld
export EGAR=path_to_arm_archiver # eg. /usr/local/arm/bin/arm-none-eabi-ar

export EGROOT=$HOME/emgo/egroot
export EGPATH=$HOME/emgo/egpath

export EGARCH=cortexm0
export EGOS=noos
export EGTARGET=f030x6
```

更详细的说明可以在 [Emgo](https://github.com/ziutek/emgo) 官网上找到。

要确保 `egc` 在你的 `PATH` 中。你可以使用 `go build` 来代替 `go install`,然后把 `egc` 复制到你的 `$HOME/bin` 或 `/usr/local/bin` 中。

现在,为你的第一个 Emgo 程序创建一个新文件夹,随后把示例中的链接器脚本复制过来:

```
mkdir $HOME/firstemgo
cd $HOME/firstemgo
cp $EGPATH/src/stm32/examples/f030-demo-board/blinky/script.ld .
```

### 最基本程序

在 `main.go` 文件中创建一个最基本的程序:

```
package main

func main() {
}
```

文件编译没有出现任何问题:

```
$ egc
$ arm-none-eabi-size cortexm0.elf
   text    data     bss     dec     hex filename
   7452     172     104    7728    1e30 cortexm0.elf
```

第一次编译可能会花点时间。编译后产生的二进制占用了 7624 个字节的 Flash 空间(文本 + 数据)。对于一个什么都没做的程序来说,占用的空间有些大。还剩下 8760 字节,可以用来做些有用的事。

不妨试试传统的 “Hello, World!” 程序:

```
package main

import "fmt"

func main() {
	fmt.Println("Hello, World!")
}
```

不幸的是,这次结果有些糟糕:

```
$ egc
/usr/local/arm/bin/arm-none-eabi-ld: /home/michal/P/go/src/github.com/ziutek/emgo/egpath/src/stm32/examples/f030-demo-board/blog/cortexm0.elf section `.text' will not fit in region `Flash'
/usr/local/arm/bin/arm-none-eabi-ld: region `Flash' overflowed by 10880 bytes
exit status 1
```

“Hello, World!” 至少需要 STM32F030x6 及其 32 KB 的 Flash 空间。

`fmt` 包强制包含整个 `strconv` 和 `reflect` 包。这三个包,即使是 Emgo 中的精简版本,占用空间也很大。我们不能使用这个例子了。有很多的应用不需要好看的文本输出。通常,一个或多个 LED,或者七段数码管显示就足够了。不过,在第二部分,我会尝试使用 `strconv` 包来格式化,并在 UART 上显示一些数字和文本。

### 闪烁

我们的开发板上有一个与 PA4 引脚和 VCC 相连的 LED。这次我们的代码稍稍长了一些:

```
package main

import (
	"delay"

	"stm32/hal/gpio"
	"stm32/hal/system"
	"stm32/hal/system/timer/systick"
)

var led gpio.Pin

func init() {
	system.SetupPLL(8, 1, 48/8)
	systick.Setup(2e6)

	gpio.A.EnableClock(false)
	led = gpio.A.Pin(4)

	cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
	led.Setup(cfg)
}

func main() {
	for {
		led.Clear()
		delay.Millisec(100)
		led.Set()
		delay.Millisec(900)
	}
}
```

按照惯例,`init` 函数用来初始化和配置外设。

`system.SetupPLL(8, 1, 48/8)` 用来配置 RCC,将外部的 8 MHz 振荡器的 PLL 作为系统时钟源。PLL 分频器设置为 1,倍频数设置为 48/8 = 6,这样系统时钟频率为 48 MHz。

`systick.Setup(2e6)` 将 Cortex-M SYSTICK 定时器设置为系统定时器,它每隔 2e6 纳秒(每秒 500 次)运行一次调度器。

`gpio.A.EnableClock(false)` 开启了 GPIO A 口的时钟。`false` 意味着这一时钟在低功耗模式下会被禁用,但在 STM32F0 系列中并未实现这一功能。

`led.Setup(cfg)` 设置 PA4 引脚为开漏输出。

`led.Clear()` 将 PA4 引脚设为低电平,在开漏配置中,这会打开 LED。

`led.Set()` 将 PA4 设为高阻态,关掉 LED。

编译这个代码:

```
$ egc
$ arm-none-eabi-size cortexm0.elf
   text    data     bss     dec     hex filename
   9772     172     168   10112    2780 cortexm0.elf
```

正如你所看到的,这个闪烁程序比最基本的程序多占用了 2320 字节。还有 6440 字节的剩余空间。

看看代码是否能运行:

```
$ openocd -d0 -f interface/stlink.cfg -f target/stm32f0x.cfg -c 'init; program cortexm0.elf; reset run; exit'
Open On-Chip Debugger 0.10.0+dev-00319-g8f1f912a (2018-03-07-19:20)
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
debug_level: 0
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
none separate
adapter speed: 950 kHz
target halted due to debug-request, current mode: Thread
xPSR: 0xc1000000 pc: 0x0800119c msp: 0x20000da0
adapter speed: 4000 kHz
** Programming Started **
auto erase enabled
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x2000003a msp: 0x20000da0
wrote 10240 bytes from file cortexm0.elf in 0.817425s (12.234 KiB/s)
** Programming Finished **
adapter speed: 950 kHz
```

在这篇文章中,这是我第一次,将一个短视频转换成[动画 PNG](https://en.wikipedia.org/wiki/APNG)。我对此印象很深,再见了 YouTube。对于 IE 用户,我很抱歉,更多信息请看 [apngasm](http://apngasm.sourceforge.net/)。我本应该学习 HTML5,但现在,APNG 是我最喜欢的,用来播放循环短视频的方法了。

![STM32F030F4P6](/data/attachment/album/201909/24/210408vdq9h6qh6cwxtu0z.png)

### 更多的 Go 语言编程

如果你不是一个 Go 程序员,但你已经听说过一些关于 Go 语言的事情,你可能会说:“Go 语法很好,但跟 C 比起来,并没有明显的提升。让我看看 Go 语言的通道和协程!”

接下来我会一一展示:

```
import (
	"delay"

	"stm32/hal/gpio"
	"stm32/hal/system"
	"stm32/hal/system/timer/systick"
)

var led1, led2 gpio.Pin

func init() {
	system.SetupPLL(8, 1, 48/8)
	systick.Setup(2e6)

	gpio.A.EnableClock(false)
	led1 = gpio.A.Pin(4)
	led2 = gpio.A.Pin(5)

	cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
	led1.Setup(cfg)
	led2.Setup(cfg)
}

func blinky(led gpio.Pin, period int) {
	for {
		led.Clear()
		delay.Millisec(100)
		led.Set()
		delay.Millisec(period - 100)
	}
}

func main() {
	go blinky(led1, 500)
	blinky(led2, 1000)
}
```

代码改动很小:添加了第二个 LED,上一个例子中的 `main` 函数被重命名为 `blinky` 并且需要提供两个参数。`main` 在新的协程中先调用 `blinky`,所以两个 LED 灯在并行使用。值得一提的是,`gpio.Pin` 类型支持同时访问同一 GPIO 口的不同引脚。

Emgo 还有很多不足。其中之一就是你需要提前规定协程(任务)的最大数量。是时候修改 `script.ld` 了:

```
ISRStack = 1024;
MainStack = 1024;
TaskStack = 1024;
MaxTasks = 2;

INCLUDE stm32/f030x4
INCLUDE stm32/loadflash
INCLUDE noos-cortexm
```

栈的大小需要靠猜,现在还不用关心这一点。

```
$ egc
$ arm-none-eabi-size cortexm0.elf
   text    data     bss     dec     hex filename
  10020     172     172   10364    287c cortexm0.elf
```

另一个 LED 和协程一共占用了 248 字节的 Flash 空间。

![STM32F030F4P6](/data/attachment/album/201909/24/210519jkopyikil0z2al7r.png)

### 通道

通道是 Go 语言中协程之间相互通信的一种[推荐方式](https://blog.golang.org/share-memory-by-communicating)。Emgo 甚至允许在*中断处理程序*中使用缓冲通道。下一个例子就展示了这种情况。

```
package main

import (
	"delay"
	"rtos"

	"stm32/hal/gpio"
	"stm32/hal/irq"
	"stm32/hal/system"
	"stm32/hal/system/timer/systick"
	"stm32/hal/tim"
)

var (
	leds  [3]gpio.Pin
	timer *tim.Periph
	ch    = make(chan int, 1)
)

func init() {
	system.SetupPLL(8, 1, 48/8)
	systick.Setup(2e6)

	gpio.A.EnableClock(false)
	leds[0] = gpio.A.Pin(4)
	leds[1] = gpio.A.Pin(5)
	leds[2] = gpio.A.Pin(9)

	cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
	for _, led := range leds {
		led.Set()
		led.Setup(cfg)
	}

	timer = tim.TIM3
	pclk := timer.Bus().Clock()
	if pclk < system.AHB.Clock() {
		pclk *= 2
	}
	freq := uint(1e3) // Hz
	timer.EnableClock(true)
	timer.PSC.Store(tim.PSC(pclk/freq - 1))
	timer.ARR.Store(700) // ms
	timer.DIER.Store(tim.UIE)
	timer.CR1.Store(tim.CEN)

	rtos.IRQ(irq.TIM3).Enable()
}

func blinky(led gpio.Pin, period int) {
	for range ch {
		led.Clear()
		delay.Millisec(100)
		led.Set()
		delay.Millisec(period - 100)
	}
}

func main() {
	go blinky(leds[1], 500)
	blinky(leds[2], 500)
}

func timerISR() {
	timer.SR.Store(0)
	leds[0].Set()
	select {
	case ch <- 0:
		// Success
	default:
		leds[0].Clear()
	}
}

//c:__attribute__((section(".ISRs")))
var ISRs = [...]func(){
	irq.TIM3: timerISR,
}
```

与之前例子相比较下的不同:

1. 添加了第三个 LED,并连接到 PA9 引脚(UART 头的 TXD 引脚)。
2. 引入了定时器(`TIM3`)作为中断源。
3. 新函数 `timerISR` 用来处理 `irq.TIM3` 的中断。
4. 新增容量为 1 的缓冲通道是为了 `timerISR` 和 `blinky` 协程之间的通信。
5. `ISRs` 数组作为*中断向量表*,是更大的*异常向量表*的一部分。
6. `blinky` 中的 `for` 语句被替换成 `range` 语句。

为了方便起见,所有的 LED,或者说它们的引脚,都被放在 `leds` 这个数组里。另外,所有引脚在被配置为输出之前,都设置为一种已知的初始状态(高电平状态)。

在这个例子里,我们想让定时器以 1 kHz 的频率计数。为了配置 TIM3 预分频器,我们需要知道它的输入时钟频率。通过参考手册我们知道,输入时钟频率在 `APBCLK = AHBCLK` 时,与 `APBCLK` 相同,反之等于 2 倍的 `APBCLK`。

如果 CNT 寄存器以 1 kHz 的频率递增,那么 ARR 寄存器的值就对应于以毫秒为单位的*更新事件*(重载事件)周期。

为了让更新事件产生中断,必须要设置 DIER 寄存器中的 UIE 位。CEN 位用于启动定时器。

定时器外设在低功耗模式下应保持启用,以便在 CPU 处于休眠时保持运行:`timer.EnableClock(true)`。这在 STM32F0 中无关紧要,但对代码可移植性却十分重要。

`timerISR` 函数处理 `irq.TIM3` 的中断请求。`timer.SR.Store(0)` 会清除 SR 寄存器里的所有事件标志,撤销向 [NVIC](http://infocenter.arm.com/help/topic/com.arm.doc.ddi0432c/Cihbecee.html) 发出的中断请求。凭借经验,由于中断请求撤销存在延时,需要在中断处理程序的一开始就马上清除中断标志。这避免了无意间再次调用处理程序。为了确保万无一失,应该先清除标志再读取,但是在我们的例子中,清除标志就已经足够了。

下面的这几行代码:

```
select {
case ch <- 0:
	// Success
default:
	leds[0].Clear()
}
```

是 Go 语言中在通道上非阻塞地发送消息的方法。中断处理程序无法一直等待通道中的空余空间。如果通道已满,则执行 `default`,开发板上的 LED 就会点亮,直到下一次中断。

`ISRs` 数组包含了中断向量表。`//c:__attribute__((section(".ISRs")))` 会让链接器将该数组插入到 `.ISRs` 节中。

`blinky` 的 `for` 循环的新写法:

```
for range ch {
	led.Clear()
	delay.Millisec(100)
	led.Set()
	delay.Millisec(period - 100)
}
```

等价于:

```
for {
	_, ok := <-ch
	if !ok {
		break // Channel closed.
	}
	led.Clear()
	delay.Millisec(100)
	led.Set()
	delay.Millisec(period - 100)
}
```

注意,在这个例子中,我们不在意通道中收到的值,我们只对接收到消息这件事本身感兴趣。我们可以在声明时,将通道元素类型中的 `int` 用空结构体 `struct{}` 来代替,发送消息时,用 `struct{}{}` 结构体的值代替 0,但这部分对新手来说可能会有些陌生。

让我们来编译一下代码:

```
$ egc
$ arm-none-eabi-size cortexm0.elf
   text    data     bss     dec     hex filename
  11096     228     188   11512    2cf8 cortexm0.elf
```

新的例子占用了 11324 字节的 Flash 空间,比上一个例子多占用了 1132 字节。

采用现在的时序,两个闪烁协程从通道中获取数据的速度,比 `timerISR` 发送数据的速度要快。所以它们在同时等待新数据,你还能观察到 `select` 的随机性,这也是 [Go 规范](https://golang.org/ref/spec#Select_statements)所要求的。

![STM32F030F4P6](/data/attachment/album/201909/24/210528f1226mspt6m61642.png)

开发板上的 LED 一直没有亮起,说明通道从未出现过溢出。

我们可以加快消息发送的速度,将 `timer.ARR.Store(700)` 改为 `timer.ARR.Store(200)`。现在 `timerISR` 每秒钟发送 5 条消息,但是两个接收者加起来,每秒也只能接收 4 条消息。

![STM32F030F4P6](/data/attachment/album/201909/24/210533bwf6oxaokxwtiwk6.png)

正如你所看到的,`timerISR` 点亮黄色 LED 灯,意味着通道上已经没有剩余空间了。

第一部分到这里就结束了。你应该知道,这一部分并未展示 Go 中最重要的部分,接口。

协程和通道只是一些方便好用的语法。你可以用自己的代码来替换它们,这并不容易,但也可以实现。接口是 Go 语言的基础,这是文章 [第二部分](https://ziutek.github.io/2018/04/14/go_on_very_small_hardware2.html) 所要讲的内容。

在 Flash 上我们还有些剩余空间。

---

via: <https://ziutek.github.io/2018/03/30/go_on_very_small_hardware.html>

作者:[Michał Derkacz](https://ziutek.github.io/) 译者:[wenwensnow](https://github.com/wenwensnow) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
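文中每一步都用 `arm-none-eabi-size` 手工计算 Flash 的占用和剩余空间,这里给出一个简单的辅助脚本示意(假设就是文中这块 16 KB Flash 的芯片,即 16384 字节,占用按 text + data 计算,脚本名为虚构):

```
#!/bin/sh
# flash-left.sh -- 估算 cortexm0.elf 的 Flash 占用与剩余空间
arm-none-eabi-size cortexm0.elf | awk 'NR == 2 {
    used = $1 + $2   # text + data
    printf "used: %d bytes, left: %d bytes\n", used, 16384 - used
}'
```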
200
OK
# Go on very small hardware (Part 1)

How low can we *Go* and still do something useful?

I recently bought this ridiculously cheap board:

I bought it for three reasons. First, I have never dealt (as a programmer) with the STM32F0 series. Second, the STM32F10x series is getting old. MCUs belonging to the STM32F0 family are just as cheap if not cheaper and have newer peripherals, with many improvements and bugs fixed. Thirdly, I chose the smallest member of the family for the purpose of this article, to make the whole thing a little more intriguing.

## The Hardware

The [STM32F030F4P6](http://www.st.com/content/st_com/en/products/microcontrollers/stm32-32-bit-arm-cortex-mcus/stm32-mainstream-mcus/stm32f0-series/stm32f0x0-value-line/stm32f030f4.html) is an impressive piece of hardware:

- CPU: [Cortex M0](https://en.wikipedia.org/wiki/ARM_Cortex-M#Cortex-M0) 48 MHz (only 12000 logic gates, in minimal configuration),
- RAM: 4 KB,
- Flash: 16 KB,
- ADC, SPI, I2C, USART and a couple of timers,

all enclosed in a TSSOP20 package. As you can see, it is a very small 32-bit system.

## The software

If you hoped to see how to use [genuine Go](https://golang.org/) to program this board, you need to read the hardware specification one more time. You must face the truth: there is a negligible chance that someone will ever add support for Cortex-M0 to the Go compiler, and this is just the beginning of the work.

I’ll use [Emgo](https://github.com/ziutek/emgo), but don’t worry, you will see that it gives you as much Go as it can on such a small system.

There was no support for any F0 MCU in [stm32/hal](https://github.com/ziutek/emgo/tree/master/egpath/src/stm32/hal) before this board arrived to me. After a brief study of the [RM](http://www.st.com/resource/en/reference_manual/dm00091010.pdf), the STM32F0 series appeared to be a stripped-down STM32F3 series, which made work on the new port easier.

If you want to follow subsequent steps of this post, you need to install Emgo

```
cd $HOME
git clone https://github.com/ziutek/emgo/
cd emgo/egc
go install
```

and set a couple of environment variables

```
export EGCC=path_to_arm_gcc # eg. /usr/local/arm/bin/arm-none-eabi-gcc
export EGLD=path_to_arm_linker # eg. /usr/local/arm/bin/arm-none-eabi-ld
export EGAR=path_to_arm_archiver # eg. /usr/local/arm/bin/arm-none-eabi-ar

export EGROOT=$HOME/emgo/egroot
export EGPATH=$HOME/emgo/egpath

export EGARCH=cortexm0
export EGOS=noos
export EGTARGET=f030x6
```

A more detailed description can be found on the [Emgo website](https://github.com/ziutek/emgo).

Ensure that egc is on your PATH. You can use `go build` instead of `go install` and copy egc to your *$HOME/bin* or */usr/local/bin*.

Now create a new directory for your first Emgo program and copy the example linker script there:

```
mkdir $HOME/firstemgo
cd $HOME/firstemgo
cp $EGPATH/src/stm32/examples/f030-demo-board/blinky/script.ld .
```

## Minimal program

Let's create a minimal program in the *main.go* file:

```
package main

func main() {
}
```

It’s actually minimal and compiles without any problem:

```
$ egc
$ arm-none-eabi-size cortexm0.elf
   text    data     bss     dec     hex filename
   7452     172     104    7728    1e30 cortexm0.elf
```

The first compilation can take some time. The resulting binary takes 7624 bytes of Flash (text+data), quite a lot for a program that does nothing. There are 8760 free bytes left to do something useful.
What about traditional *Hello, World!* code:

```
package main

import "fmt"

func main() {
	fmt.Println("Hello, World!")
}
```

Unfortunately, this time it went worse:

```
$ egc
/usr/local/arm/bin/arm-none-eabi-ld: /home/michal/P/go/src/github.com/ziutek/emgo/egpath/src/stm32/examples/f030-demo-board/blog/cortexm0.elf section `.text' will not fit in region `Flash'
/usr/local/arm/bin/arm-none-eabi-ld: region `Flash' overflowed by 10880 bytes
exit status 1
```

*Hello, World!* requires at least an STM32F030x6, with its 32 KB of Flash.

The *fmt* package forces the inclusion of the whole *strconv* and *reflect* packages. All three are pretty big, even the slimmed-down versions in Emgo. We must forget about it. There are many applications that don’t require fancy formatted text output. Often one or more LEDs or a seven segment display are enough. However, in Part 2, I’ll try to use the *strconv* package to format and print some numbers and text over UART.

## Blinky

Our board has one LED connected between the PA4 pin and VCC. This time we need a bit more code:

```
package main

import (
	"delay"

	"stm32/hal/gpio"
	"stm32/hal/system"
	"stm32/hal/system/timer/systick"
)

var led gpio.Pin

func init() {
	system.SetupPLL(8, 1, 48/8)
	systick.Setup(2e6)

	gpio.A.EnableClock(false)
	led = gpio.A.Pin(4)

	cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
	led.Setup(cfg)
}

func main() {
	for {
		led.Clear()
		delay.Millisec(100)
		led.Set()
		delay.Millisec(900)
	}
}
```

By convention, the *init* function is used to initialize the basic things and configure peripherals.

`system.SetupPLL(8, 1, 48/8)` configures RCC to use the PLL with an external 8 MHz oscillator as the system clock source. The PLL divider is set to 1, the multiplier to 48/8 = 6, which gives a 48 MHz system clock.

`systick.Setup(2e6)` sets up the Cortex-M SYSTICK timer as the system timer, which runs the scheduler every 2e6 nanoseconds (500 times per second).

`gpio.A.EnableClock(false)` enables the clock for GPIO port A. *False* means that this clock should be disabled in low-power mode, but this is not implemented in the STM32F0 series.

`led.Setup(cfg)` sets up the PA4 pin as an open-drain output.

`led.Clear()` sets the PA4 pin low, which in open-drain configuration turns the LED on.

`led.Set()` sets PA4 to the high-impedance state, which turns the LED off.

Let's compile this code:

```
$ egc
$ arm-none-eabi-size cortexm0.elf
   text    data     bss     dec     hex filename
   9772     172     168   10112    2780 cortexm0.elf
```

As you can see, blinky takes 2320 bytes more than the minimal program. There are still 6440 bytes left for more code.

Let’s see if it works:

```
$ openocd -d0 -f interface/stlink.cfg -f target/stm32f0x.cfg -c 'init; program cortexm0.elf; reset run; exit'
Open On-Chip Debugger 0.10.0+dev-00319-g8f1f912a (2018-03-07-19:20)
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
debug_level: 0
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
none separate
adapter speed: 950 kHz
target halted due to debug-request, current mode: Thread
xPSR: 0xc1000000 pc: 0x0800119c msp: 0x20000da0
adapter speed: 4000 kHz
** Programming Started **
auto erase enabled
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x2000003a msp: 0x20000da0
wrote 10240 bytes from file cortexm0.elf in 0.817425s (12.234 KiB/s)
** Programming Finished **
adapter speed: 950 kHz
```

For this article, the first time in my life, I converted a short video to an [animated PNG](https://en.wikipedia.org/wiki/APNG) sequence. I’m impressed, goodbye YouTube and sorry IE users. See [apngasm](http://apngasm.sourceforge.net/) for more info.
I should study an HTML5 based alternative, but for now, APNG is my preferred way for short looped videos.

## More Go

If you aren’t a Go programmer but you’ve heard something about the Go language, you can say: “This syntax is nice, but not a significant improvement over C. Show me the *Go language*, give me *channels* and *goroutines!*”. Here you are:

```
import (
	"delay"

	"stm32/hal/gpio"
	"stm32/hal/system"
	"stm32/hal/system/timer/systick"
)

var led1, led2 gpio.Pin

func init() {
	system.SetupPLL(8, 1, 48/8)
	systick.Setup(2e6)

	gpio.A.EnableClock(false)
	led1 = gpio.A.Pin(4)
	led2 = gpio.A.Pin(5)

	cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
	led1.Setup(cfg)
	led2.Setup(cfg)
}

func blinky(led gpio.Pin, period int) {
	for {
		led.Clear()
		delay.Millisec(100)
		led.Set()
		delay.Millisec(period - 100)
	}
}

func main() {
	go blinky(led1, 500)
	blinky(led2, 1000)
}
```

Code changes are minor: the second LED was added and the previous *main* function was renamed to *blinky* and now requires two parameters. *Main* starts the first *blinky* in a new goroutine, so both LEDs are handled *concurrently*. It is worth mentioning that the *gpio.Pin* type supports concurrent access to different pins of the same GPIO port.

Emgo still has several shortcomings. One of them is that you have to specify a maximum number of goroutines (tasks) in advance. It’s time to edit *script.ld*:

```
ISRStack = 1024;
MainStack = 1024;
TaskStack = 1024;
MaxTasks = 2;

INCLUDE stm32/f030x4
INCLUDE stm32/loadflash
INCLUDE noos-cortexm
```

The sizes of the stacks are set by guess, and we’ll not care about them at the moment.

```
$ egc
$ arm-none-eabi-size cortexm0.elf
   text    data     bss     dec     hex filename
  10020     172     172   10364    287c cortexm0.elf
```

Another LED and goroutine costs 248 bytes of Flash.

## Channels

Channels are the [preferred way](https://blog.golang.org/share-memory-by-communicating) in Go to communicate between goroutines. Emgo goes even further and allows *buffered* channels to be used by *interrupt handlers*. The next example actually shows such a case.

```
package main

import (
	"delay"
	"rtos"

	"stm32/hal/gpio"
	"stm32/hal/irq"
	"stm32/hal/system"
	"stm32/hal/system/timer/systick"
	"stm32/hal/tim"
)

var (
	leds  [3]gpio.Pin
	timer *tim.Periph
	ch    = make(chan int, 1)
)

func init() {
	system.SetupPLL(8, 1, 48/8)
	systick.Setup(2e6)

	gpio.A.EnableClock(false)
	leds[0] = gpio.A.Pin(4)
	leds[1] = gpio.A.Pin(5)
	leds[2] = gpio.A.Pin(9)

	cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
	for _, led := range leds {
		led.Set()
		led.Setup(cfg)
	}

	timer = tim.TIM3
	pclk := timer.Bus().Clock()
	if pclk < system.AHB.Clock() {
		pclk *= 2
	}
	freq := uint(1e3) // Hz
	timer.EnableClock(true)
	timer.PSC.Store(tim.PSC(pclk/freq - 1))
	timer.ARR.Store(700) // ms
	timer.DIER.Store(tim.UIE)
	timer.CR1.Store(tim.CEN)

	rtos.IRQ(irq.TIM3).Enable()
}

func blinky(led gpio.Pin, period int) {
	for range ch {
		led.Clear()
		delay.Millisec(100)
		led.Set()
		delay.Millisec(period - 100)
	}
}

func main() {
	go blinky(leds[1], 500)
	blinky(leds[2], 500)
}

func timerISR() {
	timer.SR.Store(0)
	leds[0].Set()
	select {
	case ch <- 0:
		// Success
	default:
		leds[0].Clear()
	}
}

//c:__attribute__((section(".ISRs")))
var ISRs = [...]func(){
	irq.TIM3: timerISR,
}
```

Changes compared to the previous example:

- A third LED was added and connected to the PA9 pin (TXD pin on the UART header).
- The timer (TIM3) has been introduced as a source of interrupts.
- The new *timerISR* function handles the *irq.TIM3* interrupt.
- The new buffered channel with capacity 1 is intended for communication between the *timerISR* and *blinky* goroutines.
- The *ISRs* array acts as the *interrupt vector table*, a part of the bigger *exception vector table*.
- The *blinky's for statement* was replaced with a *range statement*.

For convenience, all LEDs, or rather their pins, have been collected in the *leds* array. Additionally, all pins have been set to a known initial state (high), just before they were configured as outputs.

In this case, we want the timer to tick at 1 kHz. To configure the TIM3 prescaler, we need to know its input clock frequency. According to the RM, the input clock frequency is equal to APBCLK when APBCLK = AHBCLK, otherwise it is equal to 2 x APBCLK.

If the CNT register is incremented at 1 kHz, then the value of the ARR register corresponds to the period of the counter *update event* (reload event) expressed in milliseconds. To make the update event generate interrupts, the UIE bit in the DIER register must be set. The CEN bit enables the timer.

The timer peripheral should stay enabled in low-power mode, to keep ticking when the CPU is put to sleep: `timer.EnableClock(true)`. It doesn’t matter in the case of STM32F0 but it’s important for code portability.

The *timerISR* function handles *irq.TIM3* interrupt requests. `timer.SR.Store(0)` clears all event flags in the SR register to deassert the IRQ to the [NVIC](http://infocenter.arm.com/help/topic/com.arm.doc.ddi0432c/Cihbecee.html). The rule of thumb is to clear the interrupt flags immediately at the beginning of their handler, because of the IRQ deassert latency. This prevents an unjustified re-call of the handler. For absolute certainty, the clear-read sequence should be performed, but in our case, just clearing is enough.

The following code:

```
select {
case ch <- 0:
	// Success
default:
	leds[0].Clear()
}
```

is the Go way to perform a non-blocking send on a channel. No interrupt handler can afford to wait for free space in the channel. If the channel is full, the default case is taken, and the onboard LED is set on, until the next interrupt.

The *ISRs* array contains the interrupt vectors. The `//c:__attribute__((section(".ISRs")))` causes the linker to insert it into the .ISRs section.

The new form of *blinky's for* loop:

```
for range ch {
	led.Clear()
	delay.Millisec(100)
	led.Set()
	delay.Millisec(period - 100)
}
```

is the equivalent of:

```
for {
	_, ok := <-ch
	if !ok {
		break // Channel closed.
	}
	led.Clear()
	delay.Millisec(100)
	led.Set()
	delay.Millisec(period - 100)
}
```

Note that in this case we aren’t interested in the value received from the channel. We’re interested only in the fact that there is something to receive. We can express this by declaring the channel’s element type as the empty struct `struct{}` instead of *int* and sending `struct{}{}` values instead of 0, but it can look strange to a newcomer’s eyes.

Let's compile this code:

```
$ egc
$ arm-none-eabi-size cortexm0.elf
   text    data     bss     dec     hex filename
  11096     228     188   11512    2cf8 cortexm0.elf
```

This new example takes 11324 bytes of Flash, 1132 bytes more than the previous one.

With the current timings, both *blinky* goroutines consume from the channel much faster than the *timerISR* sends to it. So they both wait for new data simultaneously and you can observe the randomness of *select*, required by the [Go specification](https://golang.org/ref/spec#Select_statements). The onboard LED is always off, so the channel overrun never occurs.

Let’s speed up sending, by changing `timer.ARR.Store(700)` to `timer.ARR.Store(200)`. Now the *timerISR* sends 5 messages per second but both recipients together can receive only 4 messages per second.
As you can see, the *timerISR* lights the yellow LED which means there is no space in the channel. This is where I finish the first part of this article. You should know that this part didn’t show you the most important thing in Go language, *interfaces*. Goroutines and channels are only nice and convenient syntax. You can replace them with your own code - not easy but feasible. Interfaces are the essence of Go, and that’s what I will start with in the [second part](/2018/04/14/go_on_very_small_hardware2.html) of this article. We still have some free space on Flash.
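Since the same long **openocd** invocation from the *Blinky* section has to be retyped after every build, here is a minimal wrapper sketch; it assumes the ST-LINK interface and target config files used above, and the script name is made up:

```
#!/bin/sh
# flash.sh -- build with egc, then program the board over ST-LINK
set -e
egc
openocd -d0 -f interface/stlink.cfg -f target/stm32f0x.cfg \
    -c "init; program ${1:-cortexm0.elf}; reset run; exit"
```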
11,384
如何冻结和锁定你的 Linux 系统
https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html
2019-09-24T23:10:13
[ "冻结", "锁定" ]
https://linux.cn/article-11384-1.html
> > 冻结终端窗口并锁定屏幕意味着什么 - 以及如何在 Linux 系统上管理这些活动。 > > > ![](/data/attachment/album/201909/24/230938vgxzv3nrakk0wxnw.jpg) 如何在 Linux 系统上冻结和“解冻”屏幕,很大程度上取决于这些术语的含义。有时“冻结屏幕”可能意味着冻结终端窗口,以便该窗口内的活动停止。有时它意味着锁定屏幕,这样就没人可以在你去拿一杯咖啡时,走到你的系统旁边代替你输入命令了。 在这篇文章中,我们将研究如何使用和控制这些操作。 ### 如何在 Linux 上冻结终端窗口 你可以输入 `Ctrl+S`(按住 `Ctrl` 键和 `s` 键)冻结 Linux 系统上的终端窗口。把 `s` 想象成“<ruby> 开始冻结 <rt> start the freeze </rt></ruby>”。如果在此操作后继续输入命令,那么你不会看到输入的命令或你希望看到的输出。实际上,命令将堆积在一个队列中,并且只有在通过输入 `Ctrl+Q` 解冻时才会运行。把它想象成“<ruby> 退出冻结 <rt> quit the freeze </rt></ruby>”。 查看其工作的一种简单方式是使用 `date` 命令,然后输入 `Ctrl+S`。接着再次输入 `date` 命令并等待几分钟后再次输入 `Ctrl+Q`。你会看到这样的情景: ``` $ date Mon 16 Sep 2019 06:47:34 PM EDT $ date Mon 16 Sep 2019 06:49:49 PM EDT ``` 这两次时间显示的差距表示第二次的 `date` 命令直到你解冻窗口时才运行。 无论你是坐在计算机屏幕前还是使用 PuTTY 等工具远程运行,终端窗口都可以冻结和解冻。 这有一个可以派上用场的小技巧。如果你发现终端窗口似乎处于非活动状态,那么可能是你或其他人无意中输入了 `Ctrl+S`。那么,输入 `Ctrl+Q` 来尝试解决不妨是个不错的办法。 ### 如何锁定屏幕 要在离开办公桌前锁定屏幕,请按住 `Ctrl+Alt+L` 或 `Super+L`(即按住 `Windows` 键和 `L` 键)。屏幕锁定后,你必须输入密码才能重新登录。 ### Linux 系统上的自动屏幕锁定 虽然最佳做法建议你在即将离开办公桌时锁定屏幕,但 Linux 系统通常会在一段时间没有活动后自动锁定。 “消隐”屏幕(使其变暗)并实际锁定屏幕(需要登录才能再次使用)的时间取决于你个人首选项中的设置。 要更改使用 GNOME 屏幕保护程序时屏幕变暗所需的时间,请打开设置窗口并选择 “Power” 然后 “Blank screen”。你可以选择 1 到 15 分钟或从不变暗。要选择屏幕变暗后锁定所需时间,请进入设置,选择 “Privacy”,然后选择 “Blank screen”。设置应包括 1、2、3、5 和 30 分钟或一小时。 ### 如何在命令行锁定屏幕 如果你使用的是 GNOME 屏幕保护程序,你还可以使用以下命令从命令行锁定屏幕: ``` gnome-screensaver-command -l ``` 这里是小写的 L,代表“锁定”。 ### 如何检查锁屏状态 你还可以使用 `gnome-screensaver` 命令检查屏幕是否已锁定。使用 `--query` 选项,该命令会告诉你屏幕当前是否已锁定(即处于活动状态)。使用 `--time` 选项,它会告诉你锁定生效的时间。这是一个示例脚本: ``` #!/bin/bash gnome-screensaver-command --query gnome-screensaver-command --time ``` 运行脚本将会输出: ``` $ ./check_lockscreen The screensaver is active The screensaver has been active for 1013 seconds. ``` #### 总结 如果你记住了正确的控制方式,那么锁定终端窗口是很简单的。对于屏幕锁定,它的效果取决于你自己的设置,或者你是否习惯使用默认设置。 --- via: <https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html> 作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
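在上面查询脚本的基础上,也可以把“锁屏”和“确认锁定已生效”合并成一个小脚本(同样假设使用 GNOME 屏幕保护程序,两条命令均出自上文):

```
#!/bin/bash
# 先锁定屏幕,稍等片刻后确认锁定状态
gnome-screensaver-command -l
sleep 2
gnome-screensaver-command --query
```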
301
Moved Permanently
null
11,387
用于测量磁盘活动的 Linux 命令
https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html
2019-09-25T09:21:00
[ "磁盘" ]
https://linux.cn/article-11387-1.html
> > Linux 发行版提供了几个度量磁盘活动的有用命令。让我们了解一下其中的几个。 > > > ![](/data/attachment/album/201909/25/092250rlzdj83cjbddvoud.jpg) Linux 系统提供了一套方便的命令,帮助你查看磁盘有多忙,而不仅仅是磁盘有多满。在本文中,我们将研究五个非常有用的命令,用于查看磁盘活动。其中两个命令(`iostat` 和 `ioping`)可能必须添加到你的系统中,这两个命令同样要求你使用 `sudo` 特权。所有这五个命令都提供了查看磁盘活动的有用方法。 这些命令中最简单、最直观的一个可能是 `dstat` 了。 ### dstat 尽管 `dstat` 命令以字母 “d” 开头,但它提供的统计信息远远不止磁盘活动。如果你只想查看磁盘活动,可以使用 `-d` 选项。如下所示,你将得到一个磁盘读/写测量值的连续列表,直到使用 `CTRL-c` 停止显示为止。注意,在第一个报告信息之后,显示中的每个后续行将在接下来的时间间隔内报告磁盘活动,缺省值仅为一秒。 ``` $ dstat -d -dsk/total- read writ 949B 73k 65k 0 <== first second 0 24k <== second second 0 16k 0 0 ^C ``` 在 `-d` 选项后面包含一个数字将把间隔设置为该秒数。 ``` $ dstat -d 10 -dsk/total- read writ 949B 73k 65k 81M <== first five seconds 0 21k <== second five second 0 9011B ^C ``` 请注意,报告的数据可能以许多不同的单位显示——例如,M(Mb)、K(Kb)和 B(字节)。 如果没有选项,`dstat` 命令还将显示许多其他信息——指示 CPU 如何使用时间、显示网络和分页活动、报告中断和上下文切换。 ``` $ dstat You did not select any stats, using -cdngy by default. --total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system-- usr sys idl wai stl| read writ| recv send| in out | int csw 0 0 100 0 0| 949B 73k| 0 0 | 0 3B| 38 65 0 0 100 0 0| 0 0 | 218B 932B| 0 0 | 53 68 0 1 99 0 0| 0 16k| 64B 468B| 0 0 | 64 81 ^C ``` `dstat` 命令提供了关于整个 Linux 系统性能的有价值的见解。它灵活而功能强大,结合了 `vmstat`、`netstat`、`iostat` 和 `ifstat` 等较旧工具的功能,几乎可以取代这一整套旧工具。要深入了解 `dstat` 命令可以提供的其它信息,请参阅这篇关于 [dstat](https://www.networkworld.com/article/3291616/linux/examining-linux-system-performance-with-dstat.html) 命令的文章。 ### iostat `iostat` 命令通过观察设备活动的时间与其平均传输速率之间的关系,帮助监视系统输入/输出设备的加载情况。它有时用于评估磁盘之间的活动平衡。 ``` $ iostat Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 0.07 0.01 0.03 0.05 0.00 99.85 Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn loop0 0.00 0.00 0.00 1048 0 loop1 0.00 0.00 0.00 365 0 loop2 0.00 0.00 0.00 1056 0 loop3 0.00 0.01 0.00 16169 0 loop4 0.00 0.00 0.00 413 0 loop5 0.00 0.00 0.00 1184 0 loop6 0.00 0.00 0.00 1062 0 loop7 0.00 0.00 0.00 5261 0 sda 1.06 0.89 72.66 2837453 232735080 sdb 0.00 0.02 0.00 48669 40 loop8 0.00 0.00 0.00 1053 0 loop9 0.01 0.01 0.00 18949 0 loop10 0.00 0.00 0.00 56 0 loop11 0.00 0.00 0.00 7090 0 loop12 0.00 0.00 0.00 1160 0 loop13 0.00 0.00 0.00 108 0 loop14 0.00 0.00 0.00 3572 0 loop15 0.01 0.01 0.00 20026 0 loop16 0.00 0.00 0.00 24 0 ``` 当然,当你只想关注磁盘时,Linux 回环设备上提供的所有统计信息都会使结果显得杂乱无章。不过,该命令也确实提供了 `-p` 选项,该选项使你可以仅查看磁盘——如以下命令所示。 ``` $ iostat -p sda Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 0.07 0.01 0.03 0.05 0.00 99.85 Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn sda 1.06 0.89 72.54 2843737 232815784 sda1 1.04 0.88 72.54 2821733 232815784 ``` 请注意,`tps` 是指每秒的传输次数。 你还可以让 `iostat` 提供重复的报告。在下面的示例中,我们使用 `-d` 选项每五秒钟进行一次测量。 ``` $ iostat -p sda -d 5 Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn sda 1.06 0.89 72.51 2843749 232834048 sda1 1.04 0.88 72.51 2821745 232834048 Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn sda 0.80 0.00 11.20 0 56 sda1 0.80 0.00 11.20 0 56 ``` 如果你希望省略第一个(自启动以来的统计信息)报告,请在命令中添加 `-y`。 ``` $ iostat -p sda -d 5 -y Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn sda 0.80 0.00 11.20 0 56 sda1 0.80 0.00 11.20 0 56 ``` 接下来,我们看第二个磁盘驱动器。 ``` $ iostat -p sdb Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 0.07 0.01 0.03 0.05 0.00 99.85 Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn sdb 0.00 0.02
0.00 48669 40 sdb2 0.00 0.00 0.00 4861 40 sdb1 0.00 0.01 0.00 35344 0 ``` ### iotop `iotop` 命令是类似 `top` 的实用程序,用于查看磁盘 I/O。它收集 Linux 内核提供的 I/O 使用信息,以便你了解哪些进程在磁盘 I/O 方面的要求最高。在下面的示例中,循环时间被设置为 5 秒。显示将自动更新,覆盖前面的输出。 ``` $ sudo iotop -d 5 Total DISK READ: 0.00 B/s | Total DISK WRITE: 1585.31 B/s Current DISK READ: 0.00 B/s | Current DISK WRITE: 12.39 K/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND 32492 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.12 % [kworker/u8:1-ev~_power_efficient] 208 be/3 root 0.00 B/s 1585.31 B/s 0.00 % 0.11 % [jbd2/sda1-8] 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init splash 2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd] 3 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_gp] 4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_par_gp] 8 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [mm_percpu_wq] ``` ### ioping `ioping` 命令是一种完全不同的工具,但是它可以报告磁盘延迟——也就是磁盘响应请求需要多长时间,而这有助于诊断磁盘问题。 ``` $ sudo ioping /dev/sda1 4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=1 time=960.2 us (warmup) 4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=2 time=841.5 us 4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=3 time=831.0 us 4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=4 time=1.17 ms ^C --- /dev/sda1 (block device 111.8 GiB) ioping statistics --- 3 requests completed in 2.84 ms, 12 KiB read, 1.05 k iops, 4.12 MiB/s generated 4 requests in 3.37 s, 16 KiB, 1 iops, 4.75 KiB/s min/avg/max/mdev = 831.0 us / 947.9 us / 1.17 ms / 158.0 us ``` ### atop `atop` 命令,像 `top` 一样提供了大量有关系统性能的信息,包括有关磁盘活动的一些统计信息。 ``` ATOP - butterfly 2018/12/26 17:24:19 37d3h13m------ 10ed PRC | sys 0.03s | user 0.01s | #proc 179 | #zombie 0 | #exit 6 | CPU | sys 1% | user 0% | irq 0% | idle 199% | wait 0% | cpu | sys 1% | user 0% | irq 0% | idle 99% | cpu000 w 0% | CPL | avg1 0.00 | avg5 0.00 | avg15 0.00 | csw 677 | intr 470 | MEM | tot 5.8G | free 223.4M | cache 4.6G | buff 253.2M | slab 394.4M | SWP | tot 2.0G | free 2.0G | | vmcom 1.9G | vmlim 4.9G | DSK | sda | busy 0% | read 0 | write 7 | avio 1.14 ms | NET | transport | tcpi 4 | tcpo stall 8 | udpi 1 | udpo 0swout 2255 | NET | network | ipi 10 | ipo 7 | ipfrw 0 | deliv 60.67 ms | NET | enp0s25 0% | pcki 10 | pcko 8 | si 1 Kbps | so 3 Kbp0.73 ms | PID SYSCPU USRCPU VGROW RGROW ST EXC THR S CPUNR CPU CMD 1/1673e4 | 3357 0.01s 0.00s 672K 824K -- - 1 R 0 0% atop 3359 0.01s 0.00s 0K 0K NE 0 0 E - 0% <ps> 3361 0.00s 0.01s 0K 0K NE 0 0 E - 0% <ps> 3363 0.01s 0.00s 0K 0K NE 0 0 E - 0% <ps> 31357 0.00s 0.00s 0K 0K -- - 1 S 1 0% bash 3364 0.00s 0.00s 8032K 756K N- - 1 S 1 0% sleep 2931 0.00s 0.00s 0K 0K -- - 1 I 1 0% kworker/u8:2-e 3356 0.00s 0.00s 0K 0K -E 0 0 E - 0% <sleep> 3360 0.00s 0.00s 0K 0K NE 0 0 E - 0% <sleep> 3362 0.00s 0.00s 0K 0K NE 0 0 E - 0% <sleep> ``` 如果你*只*想查看磁盘统计信息,则可以使用以下命令轻松进行管理: ``` $ atop | grep DSK DSK | sda | busy 0% | read 122901 | write 3318e3 | avio 0.67 ms | DSK | sdb | busy 0% | read 1168 | write 103 | avio 0.73 ms | DSK | sda | busy 2% | read 0 | write 92 | avio 2.39 ms | DSK | sda | busy 2% | read 0 | write 94 | avio 2.47 ms | DSK | sda | busy 2% | read 0 | write 99 | avio 2.26 ms | DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms | DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms | DSK | sda | busy 2% | read 0 | write 92 | avio 2.43 ms | ^C ``` ### 了解磁盘 I/O Linux 提供了足够的命令,可以让你很好地了解磁盘的工作强度,并帮助你关注潜在的问题或减缓。希望这些命令中的一个可以告诉你何时需要质疑磁盘性能。偶尔使用这些命令将有助于确保当你需要检查磁盘,特别是忙碌或缓慢的磁盘时可以显而易见地发现它们。 --- via: <https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html> 作者:[Sandra 
Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[laingke](https://github.com/laingke) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
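在上面这些命令的基础上,下面给出一个示意性的包装脚本(脚本名、默认磁盘与间隔都是假设值),它用文中介绍过的 `iostat` 选项持续报告某个磁盘的活动,并跳过自启动以来的首次统计:

```
#!/bin/bash
# 示意脚本:watch_disk.sh(名称为假设)
# 用法:./watch_disk.sh [磁盘名] [间隔秒数],例如 ./watch_disk.sh sda 5
disk=${1:-sda}        # 要监测的磁盘,默认 sda
interval=${2:-5}      # 采样间隔(秒),默认 5
echo "每 ${interval} 秒报告一次 ${disk} 的活动,按 Ctrl+C 停止"
# -p 仅显示指定磁盘,-d 设定间隔,-y 省略自启动以来的首次报告
iostat -p "$disk" -d "$interval" -y
```

这个脚本只是把文中已经演示过的 `iostat -p sda -d 5 -y` 参数化而已,便于换用不同的磁盘和间隔。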
404
Not Found
null
11,389
如何在 Linux 中删除文本中的回车字符
https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html
2019-09-25T21:42:38
[ "回车" ]
https://linux.cn/article-11389-1.html
> > 当回车字符(`Ctrl+M`)让你紧张时,别担心。有几种简单的方法消除它们。 > > > ![](/data/attachment/album/201909/25/214211xenk2dqfepx3xemm.jpg) “回车”字符可以往回追溯很长一段时间 —— 早在打字机上就有一个机械装置或杠杆将承载纸滚筒的机架移到右边,以便可以重新在左侧输入字母。它在 Windows 的文本文件中保留了下来,但在 Linux 系统上从未使用过。当你尝试在 Linux 上处理在 Windows 上创建的文件时,这种不兼容性有时会导致问题,但这是一个非常容易解决的问题。 如果你使用 `od`(八进制转储)命令查看文件,那么回车(也用 `Ctrl+M` 代表)字符将显示为八进制的 15。字符 `CRLF` 通常用于表示 Windows 文本文件中的一行结束的回车符和换行符序列。注意看八进制转储的人会看到 `\r\n`。相比之下,Linux 文本仅以换行符结束。 这有一个 `od` 输出的示例,高亮显示了行中的 `CRLF` 字符,以及它的八进制。 ``` $ od -bc testfile.txt 0000000 124 150 151 163 040 151 163 040 141 040 164 145 163 164 040 146 T h i s i s a t e s t f 0000020 151 154 145 040 146 162 157 155 040 127 151 156 144 157 167 163 i l e f r o m W i n d o w s 0000040 056 015 012 111 164 047 163 040 144 151 146 146 145 162 145 156 <== . \r \n I t ' s d i f f e r e n <== 0000060 164 040 164 150 141 156 040 141 040 125 156 151 170 040 164 145 t t h a n a U n i x t e 0000100 170 164 040 146 151 154 145 015 012 167 157 165 154 144 040 142 <== x t f i l e \r \n w o u l d b <== ``` 虽然这些字符不是大问题,但当你想要以某种方式解析文本,又不希望为它们是否存在而编写额外的处理代码时,它们有时会造成干扰。 ### 3 种从文本中删除回车符的方法 幸运的是,有几种方法可以轻松删除回车符。这有三个选择: #### dos2unix 你可能会在安装时遇到麻烦,但 `dos2unix` 可能是将 Windows 文本转换为 Unix/Linux 文本的最简单方法。一个命令带上一个参数就行了。不需要第二个文件名。该文件会被直接更改。 ``` $ dos2unix testfile.txt dos2unix: converting file testfile.txt to Unix format... ``` 你应该会发现文件长度减少,具体取决于它包含的行数。包含 100 行的文件可能会缩小 99 个字符,因为只有最后一行不会以 `CRLF` 字符结尾。 之前: ``` -rw-rw-r-- 1 shs shs 121 Sep 14 19:11 testfile.txt ``` 之后: ``` -rw-rw-r-- 1 shs shs 118 Sep 14 19:12 testfile.txt ``` 如果你需要转换大量文件,不用每次修复一个。相反,将它们全部放在一个目录中并运行如下命令: ``` $ find . -type f -exec dos2unix {} \; ``` 在此命令中,我们使用 `find` 查找常规文件,然后运行 `dos2unix` 命令一次转换一个。命令中的 `{}` 将被替换为文件名。运行时,你应该处于包含文件的目录中。此命令可能会损坏其他类型的文件,例如那些在类似上下文中包含八进制 15 的非文本文件(如,二进制镜像文件中的字节)。 #### sed 你还可以使用流编辑器 `sed` 来删除回车符。但是,你必须提供第二个文件名。以下是例子: ``` $ sed -e "s/^M//" before.txt > after.txt ``` 需要注意的一件重要的事情是,请不要输入你看到的字符。你必须按下 `Ctrl+V` 后跟 `Ctrl+M` 来输入 `^M`。`s` 是替换命令。斜杠将我们要查找的文本(`Ctrl + M`)和要替换的文本(这里为空)分开。 #### vi 你甚至可以使用 `vi` 删除回车符(`Ctrl+M`),但这里假设你没有打开数百个文件,或许也在做一些其他的修改。你可以键入 `:` 进入命令行,然后输入下面的字符串。与 `sed` 一样,命令中的 `^M` 需要先按 `Ctrl+V` 再按 `Ctrl+M` 来输入。`%s` 是替换操作,斜杠再次将我们要删除的字符和我们想要替换它的文本(空)分开。 `g`(全局)意味着在所有行上执行。 ``` :%s/^M//g ``` ### 总结 `dos2unix` 命令可能是最容易记住的,也是从文本中删除回车的最可靠的方法。其他选择使用起来有点困难,但它们提供相同的基本功能。 --- via: <https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html> 作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
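把上述思路写进脚本时,不必再手工输入 `Ctrl+V` + `Ctrl+M`:GNU sed 支持 `\r` 转义,可以直接在脚本里表示回车符。下面是一个示意性的批量转换脚本(脚本名为假设;`-i` 会就地修改文件,使用前请先做好备份):

```
#!/bin/bash
# 示意脚本:strip_cr.sh(名称为假设),依赖 bash 和 GNU sed
# 对每个参数文件:若含有回车符则去掉,否则跳过
for f in "$@"; do
    if grep -q $'\r' "$f"; then
        echo "发现回车符,正在转换:$f"
        sed -i 's/\r$//' "$f"    # 仅删除行尾的 \r,效果与 dos2unix 类似
    else
        echo "无需转换:$f"
    fi
done
```

其中 `$'\r'` 是 bash 的 ANSI-C 引用语法,用来在命令行中表示回车字符。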
301
Moved Permanently
null
11,390
在 RHEL8 配置静态 IP 地址的不同方法
https://www.linuxtechi.com/configure-static-ip-address-rhel8/
2019-09-25T22:28:00
[ "IP" ]
https://linux.cn/article-11390-1.html
在 Linux 服务器上工作时,在网卡/以太网卡上分配静态 IP 地址是每个 Linux 工程师的常见任务之一。如果一个人在 Linux 服务器上正确配置了静态地址,那么他/她就可以通过网络远程访问它。在本文中,我们将演示在 RHEL 8 服务器网卡上配置静态 IP 地址的不同方法。 ![](/data/attachment/album/201909/25/222737dx94bbl9qbhzlfe4.jpg) 以下是在网卡上配置静态 IP 的方法: * `nmcli`(命令行工具) * 网络脚本文件(`ifcfg-*`) * `nmtui`(基于文本的用户界面) ### 使用 nmcli 命令行工具配置静态 IP 地址 安装 RHEL 8 服务器时会自动安装命令行工具 `nmcli`,它由网络管理器(NetworkManager)使用,可以让我们在以太网卡上配置静态 IP 地址。 运行下面的 `ip addr` 命令,列出 RHEL 8 服务器上的以太网卡: ``` [root@linuxtechi ~]# ip addr ``` 正如我们在上面的命令输出中看到的,我们有两个网卡 `enp0s3` 和 `enp0s8`。当前分配给网卡的 IP 地址是通过 DHCP 服务器获得的。 假设我们希望在第一个网卡 (`enp0s3`) 上分配静态 IP 地址,具体内容如下: * IP 地址 = 192.168.1.4 * 网络掩码 = 255.255.255.0 * 网关 = 192.168.1.1 * DNS = 8.8.8.8 依次运行以下 `nmcli` 命令来配置静态 IP。 使用 `nmcli connection` 命令列出当前活动的以太网卡: ``` [root@linuxtechi ~]# nmcli connection NAME UUID TYPE DEVICE enp0s3 7c1b8444-cb65-440d-9bf6-ea0ad5e60bae ethernet enp0s3 virbr0 3020c41f-6b21-4d80-a1a6-7c1bd5867e6c bridge virbr0 [root@linuxtechi ~]# ``` 使用下面的 `nmcli` 命令给 `enp0s3` 分配静态 IP。 **命令语法:** ``` # nmcli connection modify <interface_name> ipv4.address <ip/prefix> ``` **注意:** 为了简化语句,在 `nmcli` 命令中,我们通常用 `con` 关键字替换 `connection`,并用 `mod` 关键字替换 `modify`。 将 IPv4 地址(192.168.1.4)分配给 `enp0s3` 网卡: ``` [root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.addresses 192.168.1.4/24 ``` 使用下面的 `nmcli` 命令设置网关: ``` [root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.gateway 192.168.1.1 ``` 设置手动配置(从 dhcp 到 static): ``` [root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.method manual ``` 设置 DNS 值为 “8.8.8.8”: ``` [root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.dns "8.8.8.8" [root@linuxtechi ~]# ``` 要保存上述更改并重新加载,请执行如下 `nmcli` 命令: ``` [root@linuxtechi ~]# nmcli con up enp0s3 Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4) ``` 以上命令显示网卡 `enp0s3` 已成功配置。我们使用 `nmcli` 命令做的那些更改都将永久保存在文件 `/etc/sysconfig/network-scripts/ifcfg-enp0s3` 里。 ``` [root@linuxtechi ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s3 ``` ![](/data/attachment/album/201909/25/223405resufmm3ujr9ucnm.jpg) 要确认 IP 地址是否分配给了 `enp0s3` 网卡了,请使用以下 `ip` 命令查看: ``` [root@linuxtechi ~]# ip addr show enp0s3 ``` ### 使用网络脚本文件(ifcfg-\*)手动配置静态 IP 地址 我们可以使用配置以太网卡的网络脚本或 `ifcfg-*` 文件来配置以太网卡的静态 IP 地址。假设我们想在第二个以太网卡 `enp0s8` 上分配静态 IP 地址: * IP 地址 = 192.168.1.91 * 前缀 = 24 * 网关 = 192.168.1.1 * DNS1 = 4.2.2.2 转到目录 `/etc/sysconfig/network-scripts`,查找文件 `ifcfg-enp0s8`,如果它不存在,则使用以下内容创建它: ``` [root@linuxtechi ~]# cd /etc/sysconfig/network-scripts/ [root@linuxtechi network-scripts]# vi ifcfg-enp0s8 TYPE="Ethernet" DEVICE="enp0s8" BOOTPROTO="static" ONBOOT="yes" NAME="enp0s8" IPADDR="192.168.1.91" PREFIX="24" GATEWAY="192.168.1.1" DNS1="4.2.2.2" ``` 保存并退出文件,然后重新启动网络管理器服务以使上述更改生效: ``` [root@linuxtechi network-scripts]# systemctl restart NetworkManager ``` 现在使用下面的 `ip` 命令来验证 IP 地址是否分配给网卡: ``` [root@linuxtechi ~]# ip add show enp0s8 3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 08:00:27:7c:bb:cb brd ff:ff:ff:ff:ff:ff inet 192.168.1.91/24 brd 192.168.1.255 scope global noprefixroute enp0s8 valid_lft forever preferred_lft forever inet6 fe80::a00:27ff:fe7c:bbcb/64 scope link valid_lft forever preferred_lft forever [root@linuxtechi ~]# ``` 以上输出内容确认静态 IP 地址已在网卡 `enp0s8` 上成功配置了。 ### 使用 nmtui 实用程序配置静态 IP 地址 `nmtui` 是一个基于文本用户界面的网络管理器控制工具。当我们执行 `nmtui` 时,它将打开一个基于文本的用户界面,通过它我们可以添加、修改和删除连接。除此之外,`nmtui` 还可以用来设置系统的主机名。 假设我们希望通过以下细节将静态 IP 地址分配给网卡 `enp0s3`: * IP 地址 = 10.20.0.72 * 前缀 = 24 * 网关 = 10.20.0.1 * DNS1 = 4.2.2.2 运行 `nmtui` 并按照屏幕说明操作,示例如下所示: ``` [root@linuxtechi ~]# nmtui ``` 
![](/data/attachment/album/201909/25/223430jpikvncdovq7ov7a.jpg) 选择第一个选项 “Edit a connection”,然后选择接口为 “enp0s3”, ![](/data/attachment/album/201909/25/223452dk23z2ok0l2v85c2.jpg) 选择 “Edit”,然后指定 IP 地址、前缀、网关和域名系统服务器 IP, ![](/data/attachment/album/201909/25/223519n3eyh7nsbxbfhdyz.jpg) 选择确定,然后点击回车。在下一个窗口中,选择 “Activate a connection”, ![](/data/attachment/album/201909/25/223542lxmdivknodlb87n8.jpg) 选择 “enp0s3”,选择 “Deactivate” 并点击回车, ![](/data/attachment/album/201909/25/223610n54cbdsbps55dsz4.jpg) 现在选择 “Activate” 并点击回车, ![](/data/attachment/album/201909/25/223654yr0lpwylyg902r98.jpg) 选择 “Back”,然后选择 “Quit”, ![](/data/attachment/album/201909/25/223716mazhfggoi3ajkkz0.jpg) 使用下面的 `ip` 命令验证 IP 地址是否已分配给接口 `enp0s3`, ``` [root@linuxtechi ~]# ip add show enp0s3 2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 08:00:27:53:39:4d brd ff:ff:ff:ff:ff:ff inet 10.20.0.72/24 brd 10.20.0.255 scope global noprefixroute enp0s3 valid_lft forever preferred_lft forever inet6 fe80::421d:5abf:58bd:c47e/64 scope link noprefixroute valid_lft forever preferred_lft forever [root@linuxtechi ~]# ``` 以上输出内容显示我们已经使用 `nmtui` 实用程序成功地将静态 IP 地址分配给接口 `enp0s3`。 以上就是本教程的全部内容,我们已经介绍了在 RHEL 8 系统上为以太网卡配置 IPv4 地址的三种不同方法。请在下面的评论部分分享反馈和评论。 --- via: <https://www.linuxtechi.com/configure-static-ip-address-rhel8/> 作者:[Pradeep Kumar](https://www.linuxtechi.com/author/pradeep/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[heguangzhi](https://github.com/heguangzhi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
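如果你需要在多台服务器上重复第一种方法,可以把文中那几条 `nmcli` 命令组合成一个示意性的小脚本(脚本名与参数顺序均为假设)。该示例假设连接名与网卡名相同(与文中 `enp0s3` 的情况一致):

```
#!/bin/bash
# 示意脚本:set_static_ip.sh(名称为假设)
# 用法:./set_static_ip.sh <网卡名> <IP/前缀> <网关> <DNS>
# 例如:./set_static_ip.sh enp0s3 192.168.1.4/24 192.168.1.1 8.8.8.8
set -e                      # 任一命令失败则立即退出
nic=$1; addr=$2; gw=$3; dns=$4
nmcli con mod "$nic" ipv4.addresses "$addr"
nmcli con mod "$nic" ipv4.gateway "$gw"
nmcli con mod "$nic" ipv4.dns "$dns"
nmcli con mod "$nic" ipv4.method manual
nmcli con up "$nic"         # 保存更改并重新加载连接
```

脚本只是按文中的顺序依次执行同样的 `nmcli` 命令,没有增加新的配置项。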
200
OK
While Working on** Linux Servers**, assigning Static IP address on NIC / Ethernet cards is one of the common tasks that every Linux engineer do. If one configures the **Static IP address** correctly on a Linux server then he/she can access it remotely over network. In this article we will demonstrate what are different ways to assign or configure Static IP address on RHEL 8 / CentOS 8 Server’s NIC. Following are the ways to configure Static IP on a NIC, - nmcli (command line tool) - Network Scripts files(ifcfg-*) - nmtui (text based user interface) #### Configure Static IP Address using nmcli command line tool Whenever we install RHEL 8 / CentOS 8 server then ‘**nmcli**’, a command line tool is installed automatically, nmcli is used by network manager and allows us to configure static ip address on Ethernet cards. Run the below ip addr command to list Ethernet cards on your server [root@linuxtechi-rhel8 ~]# ip addr As we can see in above command output, we have two NICs enp0s3 & enp0s8. Currently ip address assigned to the NIC is via dhcp server. Let’s assume we want to assign the static IP address on first NIC (enp0s3) with the following details, - IP address = 192.168.1.4 - Netmask = 255.255.255.0 - Gateway= 192.168.1.1 - DNS = 8.8.8.8 Run the following nmcli commands one after the another to configure static ip, List currently active Ethernet cards using “**nmcli connection**” command, [root@linuxtechi-rhel8 ~]# nmcli connection NAME UUID TYPE DEVICE enp0s3 7c1b8444-cb65-440d-9bf6-ea0ad5e60bae ethernet enp0s3 virbr0 3020c41f-6b21-4d80-a1a6-7c1bd5867e6c bridge virbr0 [root@linuxtechi-rhel8 ~]# Use beneath nmcli command to assign static ip on enp0s3, **Syntax:** # nmcli connection modify <interface_name> ipv4.address <ip/prefix> **Note:** In short form, we usually replace connection with ‘con’ keyword and modify with ‘mod’ keyword in nmcli command. Assign ipv4 (192.168.1.4) to enp0s3 interface, [root@linuxtechi-rhel8 ~]# nmcli con mod enp0s3 ipv4.addresses 192.168.1.4/24 [root@linuxtechi-rhel8 ~]# Set the gateway using below nmcli command, [root@linuxtechi-rhel8 ~]# nmcli con mod enp0s3 ipv4.gateway 192.168.1.1 [root@linuxtechi-rhel8 ~]# Set the manual configuration (from dhcp to static), [root@linuxtechi-rhel8 ~]# nmcli con mod enp0s3 ipv4.method manual [root@linuxtechi-rhel8 ~]# Set DNS value as “8.8.8.8”, [root@linuxtechi-rhel8 ~]# nmcli con mod enp0s3 ipv4.dns "8.8.8.8" [root@linuxtechi-rhel8 ~]# To save the above changes and to reload the interface execute the beneath nmcli command, [root@linuxtechi-rhel8 ~]# nmcli con up enp0s3 Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4) [root@linuxtechi-rhel8 ~]# Above command output confirms that interface enp0s3 has been configured successfully.Whatever the changes we have made using above nmcli commands, those changes is saved permanently under the file “etc/sysconfig/network-scripts/ifcfg-enp0s3” [root@linuxtechi-rhel8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s3 To Confirm whether IP address has been to enp0s3 interface use the below [ip command](https://www.linuxtechi.com/ip-command-examples-for-linux-users/), [root@linuxtechi-rhel8 ~]#ip addr show enp0s3 #### Configure Static IP Address using network-scripts (ifcfg-) files We can configure the static ip address to an ethernet card using its network-script or ‘ifcfg-‘ files. Let’s assume we want to assign the static ip address on our second Ethernet card ‘enp0s8’. 
- IP= 192.168.1.91 - Netmask / Prefix = 24 - Gateway=192.168.1.1 - DNS1=4.2.2.2 Go to the directory “/etc/sysconfig/network-scripts” and look for the file ‘ifcfg- enp0s8’, if it does not exist then create it with following content, [root@linuxtechi-rhel8 ~]# cd /etc/sysconfig/network-scripts/ [root@linuxtechi-rhel8 network-scripts]# vi ifcfg-enp0s8 TYPE="Ethernet" DEVICE="enp0s8" BOOTPROTO="static" ONBOOT="yes" NAME="enp0s8" IPADDR="192.168.1.91" PREFIX="24" GATEWAY="192.168.1.1" DNS1="4.2.2.2" Save and exit the file and then restart network manager service to make above changes into effect, [root@linuxtechi-rhel8 network-scripts]# systemctl restart NetworkManager [root@linuxtechi-rhel8 network-scripts]# Now use below ip command to verify whether ip address is assigned to nic or not, [root@linuxtechi-rhel8 ~]# ip add show enp0s8 3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 08:00:27:7c:bb:cb brd ff:ff:ff:ff:ff:ff inet 192.168.1.91/24 brd 192.168.1.255 scope global noprefixroute enp0s8 valid_lft forever preferred_lft forever inet6 fe80::a00:27ff:fe7c:bbcb/64 scope link valid_lft forever preferred_lft forever [root@linuxtechi-rhel8 ~]# Above output confirms that static ip address has been configured successfully on the NIC ‘enp0s8’ #### Configure Static IP Address using ‘nmtui’ utility nmtui is a text based user interface for controlling network manager, when we execute nmtui, it will open a text base user interface through which we can add, modify and delete connections. Apart from this nmtui can also be used to set hostname of your system. Let’s assume we want to assign static ip address to interface enp0s3 with following details, - IP address = 10.20.0.72 - Prefix = 24 - Gateway= 10.20.0.1 - DNS1=4.2.2.2 Run nmtui and follow the screen instructions, example is show [root@linuxtechi-rhel8 ~]# nmtui Select the first option ‘**Edit a connection**‘ and then choose the interface as ‘enp0s3’ Choose Edit and then specify the IP address, Prefix, Gateway and DNS Server ip, Choose OK and hit enter. In the next window Choose ‘**Activate a connection**’ Select **enp0s3**, Choose **Deactivate** & hit enter Now choose **Activate** & hit enter, Select Back and then select Quit, Use below IP command to verify whether ip address has been assigned to interface enp0s3 [root@linuxtechi-rhel8 ~]# ip add show enp0s3 2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 08:00:27:53:39:4d brd ff:ff:ff:ff:ff:ff inet 10.20.0.72/24 brd 10.20.0.255 scope global noprefixroute enp0s3 valid_lft forever preferred_lft forever inet6 fe80::421d:5abf:58bd:c47e/64 scope link noprefixroute valid_lft forever preferred_lft forever [root@linuxtechi-rhel8 ~]# Above output confirms that we have successfully assign the static IP address to interface enp0s3 using nmtui utility. That’s all from this tutorial, we have covered three different ways to configure ipv4 address to an Ethernet card on RHEL 8 / CentOS 8 system. Please do not hesitate to share feedback and comments in comments section below. **Read Also** : [How to Install and Configure Nagios Core on CentOS 8 / RHEL 8](https://www.linuxtechi.com/install-nagios-core-rhel-8-centos-8/) ScottThe ip tool can be used to add and remove connections, bridges, addresses, etc. Pradeep KumarHi Scott, Yes, IP tool can be used to assigned the ip address but it will not persistent across the reboot. Xavi AznarThank you! Very good article! Best regards! 
Xavi Sumon Paul: It’s so usefull. here have lot of content and very easy ways this content and solution. Thank You
Mauro: Very good. This article helped me a lot
Eben: Why do i lose my IP configuration after reboot in redhat 8? regardless of the method used, please i need help
11,392
技术如何改变敏捷的规则
https://enterprisersproject.com/article/2018/1/how-technology-changes-rules-doing-agile
2019-09-26T11:39:40
[ "容器" ]
https://linux.cn/article-11392-1.html
> > 当我们开始推行敏捷时,还没有容器和 Kubernetes。但是它们改变了过去最困难的部分:将敏捷性从小团队应用到整个组织。 > > > ![](/data/attachment/album/201909/26/113910ytmoosx5tt79gan5.jpg) 越来越多的企业正因为一个非常明显的原因开始尝试敏捷和 [DevOps](https://enterprisersproject.com/tags/devops): 企业需要更快的速度和更多的实验,以此获得创新和竞争优势。而 DevOps 将帮助我们得到所需的创新速度。但是,在小团队或初创企业中实践 DevOps 与进行大规模实践完全是两码事。我们都明白这样的一个事实,那就是在十个人的跨职能团队中能够很好地解决问题的方案,当将相同的模式应用到一百个人的团队中时就可能无法奏效。这条道路是如此艰难,以至于 IT 领导者最简单的应对就是将敏捷方法的推行再推迟一年。 但那样的时代已经结束了。如果你已经尝试过,但是没有成功,那么现在是时候重新开始了。 到目前为止,DevOps 需要为许多组织提供个性化的解决方案,因此往往需要进行大量的调整以及付出额外的工作。但在今天,[Linux 容器](https://www.redhat.com/en/topics/containers?intcmp=701f2000000tjyaAAA)和 Kubernetes 正在推动 DevOps 工具和过程的标准化。而这样的标准化将会加速整个软件开发过程。因此,我们用来实践 DevOps 工作方式的技术最终能够满足我们加快软件开发速度的愿望。 Linux 容器和 [Kubernetes](https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA) 正在改变团队交互的方式。此外,你可以在 Kubernetes 平台上运行任何能够在 Linux 上运行的应用程序。这意味着什么呢?你可以运行大量的企业级应用程序(甚至可以解决以前令人烦恼的 Windows 和 Linux 之间的协调问题)。最后,容器和 Kubernetes 能够满足你未来将要运行的几乎所有工作。它们具备面向未来的能力,可以应对机器学习、人工智能和分析等下一代问题解决工具的工作负载。 让我们以机器学习为例来思考一下。今天,企业大量数据中的模式仍然主要靠人来发现。当机器发现这些模式时(想想机器学习),你的员工就能更快地采取行动。随着人工智能的加入,机器不仅可以发现模式,还可以对模式进行操作。如今,一个激进的软件开发冲刺周期也就是三个星期而已。有了人工智能,机器每秒可以多次修改代码。创业公司会利用这种能力来“颠覆你”。 考虑一下你需要多快才能参与到竞争当中。如果你现在还无法下定决心采用 DevOps 和每周一次的迭代周期,那么想一想,当那个创业公司将 AI 驱动的过程指向你时会发生什么?现在是时候转向 DevOps 的工作方式了,否则就会被你的竞争对手甩在后面。 ### 容器技术如何改变团队的工作? DevOps 使得许多试图将这种工作方式扩展到更大范围的团队感到沮丧。即使许多 IT(和业务)人员之前都听说过敏捷相关的语言、框架、模型(如 DevOps),而这些都有望彻底改变应用程序开发和 IT 流程,但他们还是对此持怀疑态度。 向你的利益相关者“推销”快速开发冲刺也不是一件容易的事情。想象一下,如果你以这种方式买了一栋房子 —— 你将不再需要向开发商支付固定的金额,而是会得到这样的信息:“我们将在 4 周内浇筑完地基,其成本是 X,之后再搭建房屋框架和铺设电路,但是我们现在只能够知道地基完成的时间表。”人们已经习惯了买房子的时候有一个预先的价格和交付时间表。 挑战在于构建软件与构建房屋不同。同一个建筑商往往建造了成千上万个完全相同的房子,而软件项目从来都各不相同。这是你要克服的第一个障碍。 开发和运维团队的工作方式确实不同,我之所以知道这一点是因为我曾经从事过这两方面的工作。企业往往会用不同的方式来激励他们,开发人员会因为更改和创建而获得奖励,而运维专家则会因降低成本和确保安全性而获得奖励。我们会把他们分成不同的小组,并且尽量减少互动。而这些角色通常会吸引那些思维方式完全不同的技术人员。但是这样的安排注定会失败,你必须打破横亘在开发和运维之间的藩篱。 想想传统情况下会发生什么。业务会把需求扔过墙,这是因为他们在“买房”模式下运作,并且说上一句“我们 9 个月后见。”开发人员根据这些需求进行开发,并根据技术约束的需要进行更改。然后,他们把它扔过墙传递给运维人员,并说一句“搞清楚如何运行这个软件”。然后,运维人员就会勤奋地进行大量更改,使软件与基础设施保持一致。然而,最终的结果是什么呢? 通常情况下,当业务人员看到需求实现的最终结果时甚至根本辨认不出。在过去 20 年的大部分时间里,我们一次又一次地目睹了这种模式在软件行业中上演。而现在,是时候改变了。 Linux 容器能够真正地解决这样的问题,这是因为容器弥合了开发和运维之间的鸿沟。容器技术允许两个团队共同理解和设计所有的关键需求,但仍然独立地履行各自团队的职责。基本上,我们去掉了开发人员和运维人员之间的传话游戏。 有了容器技术,我们可以使得运维团队的规模更小,但依旧能够承担起数百万应用程序的运维工作,并且能够使得开发团队可以更加快速地根据需要更改软件。(在较大的组织中,所需的速度可能比运维人员的响应速度更快。) 有了容器技术,你可以将所需要交付的内容与它运行的位置分开。你的运维团队只需要负责运行容器的主机以及安全方面的工作,仅此而已。这意味着什么呢? 
首先,这意味着你现在可以和团队一起实践 DevOps 了。没错,只需要让团队专注于他们已经拥有的专业知识,而对于容器,只需让团队了解所需集成依赖关系的必要知识即可。 如果你想要重新训练每个人,没有人会精通所有事情。容器技术允许团队之间进行交互,但同时也会为每个团队提供一个围绕该团队优势而构建的强大边界。开发人员会知道需要消耗什么资源,但不需要知道如何使其大规模运行。运维团队了解核心基础设施,但不需要了解应用程序的细节。此外,运维团队也可以通过更新应用程序来解决新的安全问题,以免你成为下一个数据泄露的热门话题。 想要为一个大型 IT 组织,比如 30000 人的团队教授运维和开发技能?那或许需要花费你十年的时间,而你可能并没有那么多时间。 当人们谈论“构建新的云原生应用程序将帮助我们摆脱这个问题”时,请批判性地进行思考。你可以在 10 个人的团队中构建云原生应用程序,但这对《财富》杂志前 1000 强的企业而言或许并不适用。除非你不再需要依赖现有的团队,否则你无法一个接一个地构建新的微服务:你最终将成为一个孤立的组织。这是一个诱人的想法,但你不能指望这些应用程序来重新定义你的业务。我还没见过哪家公司能在如此大规模的并行开发中获得成功。IT 预算已经受到限制;在很长时间内,将预算翻倍甚至三倍是不现实的。 ### 当奇迹发生时:你好,速度 Linux 容器就是为扩容而生的。一旦你开始这样做,[Kubernetes 之类的编制工具就会发挥作用](https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity),这是因为你将需要运行数千个容器。应用程序将不仅仅由一个容器组成,它们将依赖于许多不同的部分,所有的部分都会作为一个单元运行在容器上。如果不这样做,你的应用程序将无法在生产环境中很好地运行。 思考一下有多少小滑轮和杠杆组合在一起来支撑你的业务,对于任何应用程序都是如此。开发人员负责应用程序中的所有滑轮和杠杆。(如果开发人员没有这些组件,你可能会在集成时做噩梦。)与此同时,无论是在线下还是在云上,运维团队都会负责构成基础设施的所有滑轮和杠杆。做一个较为抽象的比喻,使用Kubernetes,你的运维团队就可以为应用程序提供运行所需的燃料,但又不必成为所有方面的专家。 开发人员进行实验,运维团队则保持基础设施的安全和可靠。这样的组合使得企业敢于承担小风险,从而实现创新。不同于打几个孤注一掷的赌,公司中真正的实验往往是循序渐进的和快速的。 从个人经验来看,这就是组织内部发生的显著变化:因为人们说:“我们如何通过改变计划来真正地利用这种实验能力?”它会强制执行敏捷计划。 举个例子,使用 DevOps 模型、容器和 Kubernetes 的 KeyBank 如今每天都会部署代码。(观看[视频](https://www.redhat.com/en/about/videos/john-rzeszotarski-keybank-red-hat-summit-2017?intcmp=701f2000000tjyaAAA),其中主导了 KeyBank 持续交付和反馈的 John Rzeszotarski 将解释这一变化。)类似地,Macquarie 银行也借助 DevOps 和容器技术每天将一些东西投入生产环境。 一旦你每天都推出软件,它就会改变你计划的每一个方面,并且会[加速业务的变化速度](https://enterprisersproject.com/article/2017/11/dear-cios-stop-beating-yourselves-being-behind-transformation)。Macquarie 银行和金融服务集团的 CDO,Luis Uguina 表示:“创意可以在一天内触达客户。”(参见对 Red Hat 与 Macquarie 银行合作的[案例研究](https://www.redhat.com/en/resources/macquarie-bank-case-study?intcmp=701f2000000tjyaAAA))。 ### 是时候去创造一些伟大的东西了 Macquarie 的例子说明了速度的力量。这将如何改变你的经营方式?记住,Macquarie 不是一家初创企业。这是 CIO 们所面临的颠覆性力量,它不仅来自新的市场进入者,也来自老牌同行。 开发人员的自由还改变了运营敏捷商店的 CIO 们的人才方程式。突然之间,大公司里的个体(即使不是在最热门的行业或地区)也可以产生巨大的影响。Macquarie 利用这一变动作为招聘工具,并向开发人员承诺,所有新招聘的员工将会在第一周内推出新产品。 与此同时,在这个基于云的计算和存储能力的时代,我们比以往任何时候都拥有更多可用的基础设施。考虑到[机器学习和人工智能工具将很快实现的飞跃](https://enterprisersproject.com/article/2018/1/4-ai-trends-watch),这是幸运的。 所有这些都说明现在正是打造伟大事业的好时机。考虑到市场创新的速度,你需要不断地创造伟大的东西来保持客户的忠诚度。因此,如果你一直在等待将赌注押在 DevOps 上,那么现在就是正确的时机。容器技术和 Kubernetes 改变了规则,并且对你有利。 --- via: <https://enterprisersproject.com/article/2018/1/how-technology-changes-rules-doing-agile> 作者:[Matt Hicks](https://enterprisersproject.com/user/matt-hicks) 译者:[JayFrank](https://github.com/JayFrank) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
More companies are trying agile and [DevOps](https://enterprisersproject.com/taxonomy/term/76) for a clear reason: Businesses want more speed and more experiments – which lead to innovations and competitive advantage. DevOps helps you gain that speed. But doing DevOps in a small group or startup and doing it at scale are two very different things. Any of us who’ve worked in a cross-functional group of 10 people, come up with a great solution to a problem, and then tried to apply the same patterns across a team of 100 people know the truth: It often doesn’t work. This path has been so hard, in fact, that it has been easy for IT leaders to put off agile methodology for another year. But that time is over. If you’ve tried and stalled, it’s time to jump back in. Until now, DevOps required customized answers for many organizations – lots of tweaks and elbow grease. But today, [Linux containers ](https://www.redhat.com/en/topics/containers?intcmp=701f2000000tjyaAAA)and Kubernetes are fueling standardization of DevOps tools and processes. That standardization will only accelerate. The technology we are using to practice the DevOps way of working has finally caught up with our desire to move faster. Linux containers and [Kubernetes](https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA) are changing the way teams interact. Moreover, on the Kubernetes platform, you can run any application you now run on Linux. What does that mean? You can run a tremendous number of enterprise apps (and handle even previously vexing coordination issues between Windows and Linux.) Finally, containers and Kubernetes will handle almost all of what you’ll run tomorrow. They’re being future-proofed to handle machine learning, AI, and analytics workloads – the next wave of problem-solving tools. **[ See our related article, 4 container adoption patterns: What you need to know. ] ** Think about machine learning, for example. Today, people still find the patterns in much of an enterprise’s data. When machines find the patterns (think machine learning), your people will be able to act on them faster. With the addition of AI, machines can not only find but also act on patterns. Today, with people doing everything, three weeks is an aggressive software development sprint cycle. With AI, machines can change code multiple times per second. Startups will use that capability – to disrupt you. Consider how fast you have to be to compete. If you can’t make a leap of faith now to DevOps and a one week cycle, think of what will happen when that startup points its AI-fueled process at you. It’s time to move to the DevOps way of working now, or get left behind as your competitors do. ## How are containers changing how teams work? DevOps has frustrated many groups trying to scale this way of working to a bigger group. Many IT (and business) people are suspicious of agile: They’ve heard it all before – languages, frameworks, and now models (like DevOps), all promising to revolutionize application development and IT process. **[ Want DevOps advice from other CIOs? See our comprehensive resource, DevOps: The IT Leader's Guide. ]** It’s not easy to “sell” quick development sprints to your stakeholders, either. Imagine if you bought a house this way. You’re not going to pay a fixed amount to your builder anymore. Instead, you get something like: “We’ll pour the foundation in 4 weeks and it will cost x. Then we’ll frame. Then we’ll do electrical. 
But we only know the timing on the foundation right now.” People are used to buying homes with a price up front and a schedule. The challenge is that building software is not like building a house. The same builder builds thousands of houses that are all the same. Software projects are never the same. This is your first hurdle to get past. Dev and operations teams really do work differently: I know because I’ve worked on both sides. We incent them differently. Developers are rewarded for changing and creating, while operations pros are rewarded for reducing cost and ensuring security. We put them in different groups and generally minimize interaction. And the roles typically attract technical people who think quite differently. This situation sets IT up to fail. You have to be willing to break down these barriers. Think of what has traditionally happened. You throw pieces over the wall, then the business throws requirements over the wall because they are operating in "house-buying" mode: “We’ll see you in 9 months.” Developers build to those requirements and make changes as needed for technical constraints. Then they throw it over the wall to operations to "figure out how to run this." Operations then works diligently to make a slew of changes to align the software with their infrastructure. And what’s the end result? More often than not, the end result isn’t even recognizable to the business when they see it in its final glory. We’ve watched this pattern play out time and time again in our industry for the better part of two decades. It’s time for a change. It’s Linux containers that truly crack the problem – because containers close the gap between development and operations. They allow both teams to understand and design to all of the critical requirements, but still uniquely fulfill their team’s responsibilities. Basically, we take out the telephone game between developers and operations. With containers, we can have smaller operations teams, even teams responsible for millions of applications, but development teams that can change software as quickly as needed. (In larger organizations, the desired pace may be faster than humans can respond on the operations side.) With containers, you’re separating what is delivered from where it runs. Your operations teams are responsible for the host that will run the containers and the security footprint, and that’s all. What does this mean? First, it means you can get going on DevOps now, with the team you have. That’s right. Keep teams focused on the expertise they already have: With containers, just teach them the bare minimum of the required integration dependencies. If you try and retrain everyone, no one will be that good at anything. Containers let teams interact, but alongside a strong boundary, built around each team’s strengths. Your devs know what needs to be consumed, but don’t need to know how to make it run at scale. Ops teams know the core infrastructure, but don’t need to know the minutiae of the app. Also, Ops teams can update apps to address new security implications, before you become the next trending data breach story. Teaching a large IT organization of say 30,000 people both ops and devs skills? It would take you a decade. You don’t have that kind of time. When people talk about “building new, cloud-native apps will get us out of this problem,” think critically. You can build cloud-native apps in 10-person teams, but that doesn’t scale for a Fortune 1000 company. 
You can’t just build new microservices one by one until you’re somehow not reliant on your existing team: You’ll end up with a siloed organization. It’s an alluring idea, but you can’t count on these apps to redefine your business. I haven’t met a company that could fund parallel development at this scale and succeed. IT budgets are already constrained; doubling or tripling them for an extended period of time just isn’t realistic. ## When the remarkable happens: Hello, velocity Linux containers were made to scale. Once you start to do so, [orchestration tools like Kubernetes come into play](https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity) – because you’ll need to run thousands of containers. Applications won’t consist of just a single container, they will depend on many different pieces, all running on containers, all running as a unit. If they don’t, your apps won’t run well in production. Think of how many small gears and levers come together to run your business: The same is true for any application. Developers are responsible for all the pulleys and levers in the application. (You could have an integration nightmare if developers don’t own those pieces.) At the same time, your operations team is responsible for all the pulleys and levers that make up your infrastructure, whether on-premises or in the cloud. With Kubernetes as an abstraction, your operations team can give the application the fuel it needs to run – without being experts on all those pieces. Developers get to experiment. The operations team keeps infrastructure secure and reliable. This combination opens up the business to take small risks that lead to innovation. Instead of having to make only a couple of bet-the-farm size bets, real experimentation happens inside the company, incrementally and quickly. In my experience, this is where the remarkable happens inside organizations: Because people say “How do we change planning to actually take advantage of this ability to experiment?” It forces agile planning. For example, KeyBank, which uses a DevOps model, containers, and Kubernetes, now deploys code every day. (Watch this [video](https://www.redhat.com/en/about/videos/john-rzeszotarski-keybank-red-hat-summit-2017?intcmp=701f2000000tjyaAAA) in which John Rzeszotarski, director of Continuous Delivery and Feedback at KeyBank, explains the change.) Similarly, Macquarie Bank uses DevOps and containers to put something in production every day. Once you push software every day, it changes every aspect of how you plan – and [accelerates the rate of change to the business](https://enterprisersproject.com/article/2017/11/dear-cios-stop-beating-yourselves-being-behind-transformation). “An idea can get to a customer in a day,” says Luis Uguina, CDO of Macquarie’s banking and financial services group. (See this [case study](https://www.redhat.com/en/resources/macquarie-bank-case-study?intcmp=701f2000000tjyaAAA) on Red Hat’s work with Macquarie Bank). ## The right time to build something great The Macquarie example demonstrates the power of velocity. How would that change your approach to your business? Remember, Macquarie is not a startup. This is the type of disruptive power that CIOs face, not only from new market entrants but also from established peers. The developer freedom also changes the talent equation for CIOs running agile shops. Suddenly, individuals within huge companies (even those not in the hottest industries or geographies) can have great impact. 
Macquarie uses this dynamic as a recruiting tool, promising developers that all new hires will push something live within the first week. At the same time, in this day of cloud-based compute and storage power, we have more infrastructure available than ever. That’s fortunate, considering the [leaps that machine learning and AI tools will soon enable](https://enterprisersproject.com/article/2018/1/4-ai-trends-watch). This all adds up to this being the right time to build something great. Given the pace of innovation in the market, you need to keep building great things to keep customers loyal. So if you’ve been waiting to place your bet on DevOps, now is the right time. Containers and Kubernetes have changed the rules – in your favor. **[ Kubernetes terminology, demystified: Get our ****Kubernetes glossary**** cheat sheet for IT and business leaders. ]** ## Comments Hey! Thanks for the post. I think you might find this article very interesting: https://kanbantool.com/kanban-library/devops-kanban-basics/what-is-kanban-in-software-development . It talks about implementing Kanban as a way to manage your processes. Enjoy!
11,393
Elvish Shell 速览
https://itsfoss.com/elvish-shell/
2019-09-26T20:26:48
[ "shell" ]
https://linux.cn/article-11393-1.html
![](/data/attachment/album/201909/26/202622wefefhfrfr75i5ws.jpg) 每个来到这里的人都会对许多系统中默认的 Bash shell 有所了解(无论多少)。过去这些年已经有一些新的 shell 出现来解决 Bash 中的一些缺点。Elvish 就是其中之一,我们将在今天讨论它。 ### 什么是 Elvish Shell? ![Pipelines In Elvish](/data/attachment/album/201909/26/202656izgaddcdvgl8tem4.png) [Elvish](https://elv.sh/) 不仅仅是一个 shell。它[也是](https://github.com/elves/elvish)“一种表达性编程语言”。它有许多有趣的特性,包括: * 它是由 Go 语言编写的 * 内置文件管理器,灵感来自 [Ranger 文件管理器](https://ranger.github.io/)(`Ctrl + N`) * 可搜索的命令历史记录(`Ctrl + R`) * 访问过的目录的历史记录(`Ctrl + L`) * 强大的管道,支持列表、字典和函数等结构化数据 * 包含“一组标准的控制结构:有 `if` 条件控制、`for` 和 `while` 循环,还有 `try` 的异常处理” * 通过包管理器支持[第三方模块扩展 Elvish](https://github.com/elves/awesome-elvish) * BSD 两条款(2-Clause)许可证 你肯定在喊,“为什么叫 Elvish?”。好吧,根据[他们的网站](https://elv.sh/ref/name.html),他们之所以选择当前的名字,是因为: > > 在 Roguelike 游戏中,精灵制造的物品质量很高。它们通常被称为“精灵物品”。但是之所以选择 “elvish” 是因为它以 “sh” 结尾,这是 Unix shell 的久远传统。这个与 fish 押韵,它是影响 Elvish 哲学的 shell 之一。 > > > ### 如何安装 Elvish Shell Elvish 在几种主流发行版中都有。 请注意,该软件还很年轻。最新版本是 0.12。根据该项目的 [GitHub 页面](https://github.com/elves/elvish):“尽管还处在 1.0 之前,但它已经适合大多数日常交互使用。” ![Elvish Control Structures](/data/attachment/album/201909/26/202700mh0v8dr7rk0d1d5o.png) #### Debian 和 Ubuntu Elvish 包已引入 Debian Buster 和 Ubuntu 17.10。不幸的是,这些包已经过时,你需要使用 [PPA](https://launchpad.net/%7Ezhsj/+archive/ubuntu/elvish) 安装最新版本。你需要使用以下命令: ``` sudo add-apt-repository ppa:zhsj/elvish sudo apt update sudo apt install elvish ``` #### Fedora Elvish 在 Fedora 的主仓库中没有。你需要添加 [FZUG 仓库](https://github.com/FZUG/repo/wiki/Add-FZUG-Repository)安装 Elvish。为此,你需要使用以下命令: ``` sudo dnf config-manager --add-repo=http://repo.fdzh.org/FZUG/FZUG.repo sudo dnf install elvish ``` #### Arch Elvish 在 [Arch 用户仓库](https://aur.archlinux.org/packages/elvish/)中可用。 我相信你知道该[如何在 Linux 中更改 Shell](https://linuxhandbook.com/change-shell-linux/),因此安装后可以切换到 Elvish 来使用它。 ### 对 Elvish Shell 的想法 就个人而言,我没有理由在任何系统上安装 Elvish。我可以通过安装几个小的命令行程序或使用已经安装的程序来获得它的大多数功能。 例如,Bash 中已经存在“搜索历史命令”功能,并且效果很好。如果想增强搜索历史命令的能力,我建议安装 [fzf](https://github.com/junegunn/fzf)。`fzf` 使用模糊搜索,因此你无需记住要查找的确切命令。`fzf` 还允许你预览和打开文件。 我认为 Elvish 作为一种编程语言是不错的,但是我会坚持使用 Bash shell 脚本,直到 Elvish 变得更成熟。 你用过 Elvish 么?你认为安装 Elvish 是否值得?你最喜欢的 Bash 替代品是什么?请在下面的评论中告诉我们。 如果你发现这篇文章有趣,请花一点时间在社交媒体、Hacker News 或 Reddit 上分享它。 --- via: <https://itsfoss.com/elvish-shell/> 作者:[John Paul](https://itsfoss.com/author/john/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
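在按上述方式安装好 Elvish 之后,如果你想把它设为默认 shell,下面是一个示意性的做法(假设系统上有 `chsh`,且新 shell 需要先登记在 `/etc/shells` 中):

```
#!/bin/bash
# 示意脚本:找到 elvish 的路径,登记到 /etc/shells,再设为当前用户的默认 shell
elvish_path=$(command -v elvish) || { echo "未找到 elvish,请先安装"; exit 1; }
# 多数 Linux 系统要求新 shell 出现在 /etc/shells 中,否则 chsh 会拒绝
grep -qx "$elvish_path" /etc/shells || echo "$elvish_path" | sudo tee -a /etc/shells
chsh -s "$elvish_path"   # 更改默认 shell,重新登录后生效
```

如果只是想试用而不改默认 shell,直接在终端里运行 `elvish` 即可。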
200
OK
Everyone who comes to this site has some knowledge (no matter how slight) of the Bash shell that comes default of so many systems. There have been several attempts to create shells that solve some of the shortcomings of Bash that have appeared over the years. One such shell is Elvish, which we will look at today. ## What is Elvish Shell? ![Pipelines In Elvish](https://itsfoss.com/content/images/wordpress/2019/05/pipelines-in-elvish.png?fit=800%2C421&ssl=1) [Elvish](https://elv.sh/) is more than just a shell. It is [also](https://github.com/elves/elvish) “an expressive programming language”. It has a number of interesting features including: - Written in Go - Built-in file manager, inspired by the [Ranger file manager](https://ranger.github.io/)(`Ctrl + N` ) - Searchable command history ( `Ctrl + R` ) - History of directories visited ( `Ctrl + L` ) - Powerful pipelines that support structured data, such as lists, maps, and functions - Includes a “standard set of control structures: conditional control with `if` , loops with`for` and`while` , and exception handling with`try` “ - Support for [third-party modules via a package manager to extend Elvish](https://github.com/elves/awesome-elvish) - Licensed under the BSD 2-Clause license “Why is it named Elvish?” I hear you shout. Well, according to [their website](https://elv.sh/ref/name.html), they chose their current name because: In roguelikes, items made by the elves have a reputation of high quality. These are usually called elven items, but “elvish” was chosen because it ends with “sh”, a long tradition of Unix shells. It also rhymes with fish, one of the shells that influenced the philosophy of Elvish. ## How to Install Elvish Shell Elvish is available in several mainstream distributions. Note that the software is very young. The most recent version is 0.12. According to the project’s [GitHub page](https://github.com/elves/elvish): “Despite its pre-1.0 status, it is already suitable for most daily interactive use.” ![Elvish Control Structures](https://itsfoss.com/content/images/wordpress/2019/05/Elvish-control-structures.png?fit=800%2C425&ssl=1) ### Debian and Ubuntu Elvish packages were introduced into Debian Buster and Ubuntu 17.10. Unfortunately, those packages are out of date and you will need to use a [PPA](https://launchpad.net/%7Ezhsj/+archive/ubuntu/elvish) to install the latest version. You will need to use the following commands: sudo add-apt-repository ppa:zhsj/elvish sudo apt update sudo apt install elvish ### Fedora Elvish is not available in the main Fedora repos. You will need to add the [FZUG Repository](https://github.com/FZUG/repo/wiki/Add-FZUG-Repository) to install Evlish. To do so, you will need to use these commands: sudo dnf config-manager --add-repo=http://repo.fdzh.org/FZUG/FZUG.repol sudo dnf install elvish ### Arch Elvish is available in the [Arch User Repository](https://aur.archlinux.org/packages/elvish/). I believe you know [how to change shell in Linux](https://linuxhandbook.com/change-shell-linux/) so after installing you can switch to Elvish to use it. ## Final Thoughts on Elvish Shell Personally, I have no reason to install Elvish on any of my systems. I can get most of its features by installing a couple of small command line programs or using already installed programs. For example, the search past commands feature already exists in Bash and it works pretty well. If you want to improve your ability to search past commands, I would recommend installing [fzf](https://github.com/junegunn/fzf) instead. 
Fzf uses fuzzy search, so you don’t need to remember the exact command you are looking for. Fzf also allows you to preview and open files. I do think that the fact that Elvish is also a programming language is neat, but I’ll stick with Bash shell scripting until Elvish matures a little more. Have you every used Elvish? Do you think it would be worthwhile to install Elvish? What is your favorite Bash replacement? Please let us know in the comments below. If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit](http://reddit.com/r/linuxusersgroup).
11,394
如何在 RHEL8 /CentOS8 上建立多节点 Elastic stack 集群
https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/
2019-09-26T21:24:00
[ "Elastic" ]
https://linux.cn/article-11394-1.html
Elastic stack 俗称 ELK stack,是一组包括 Elasticsearch、Logstash 和 Kibana 在内的开源产品。Elastic Stack 由 Elastic 公司开发和维护。使用 Elastic stack,可以将系统日志发送到 Logstash,它是一个数据收集引擎,接受来自几乎任何来源的日志或数据,并对日志进行归一化,然后将日志转发到 Elasticsearch,用于分析、索引、搜索和存储;最后由 Kibana 将其呈现为可视化数据,使用 Kibana,我们还可以基于用户的查询创建交互式图表。 ![](/data/attachment/album/201909/26/212420byaf0zyrv9z8ak8r.jpg) 在本文中,我们将演示如何在 RHEL 8 / CentOS 8 服务器上设置多节点 elastic stack 集群。以下是我的 Elastic Stack 集群的详细信息: **Elasticsearch:** * 三台服务器,最小化安装 RHEL 8 / CentOS 8 * IP & 主机名 – 192.168.56.40(`elasticsearch1.linuxtechi.local`)、192.168.56.50(`elasticsearch2.linuxtechi.local`)、192.168.56.60(`elasticsearch3.linuxtechi.local`) **Logstash:** * 两台服务器,最小化安装 RHEL 8 / CentOS 8 * IP & 主机名 – 192.168.56.20(`logstash1.linuxtechi.local`)、192.168.56.30(`logstash2.linuxtechi.local`) **Kibana:** * 一台服务器,最小化安装 RHEL 8 / CentOS 8 * IP & 主机名 – 192.168.56.10(`kibana.linuxtechi.local`) **Filebeat:** * 一台服务器,最小化安装 CentOS 7 * IP & 主机名 – 192.168.56.70(`web-server`) 让我们从设置 Elasticsearch 集群开始。 ### 设置 3 个节点的 Elasticsearch 集群 如前所述,我已经为 Elasticsearch 集群准备好了节点。登录到每个节点,设置主机名并配置 yum/dnf 仓库。 使用命令 `hostnamectl` 设置各个节点上的主机名: ``` [root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch1.linuxtechi.local" [root@linuxtechi ~]# exec bash [root@linuxtechi ~]# [root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch2.linuxtechi.local" [root@linuxtechi ~]# exec bash [root@linuxtechi ~]# [root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch3.linuxtechi.local" [root@linuxtechi ~]# exec bash [root@linuxtechi ~]# ``` 对于 CentOS 8 系统,我们不需要配置任何操作系统包库;对于 RHEL 8 服务器,如果你有有效订阅,将其注册到红帽订阅以获取软件包存储库即可。如果你想为操作系统包配置本地 yum/dnf 存储库,请参考以下网址: * [如何使用 DVD 或 ISO 文件在 RHEL 8 服务器上设置本地 Yum / DNF 存储库](https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/) 在所有节点上配置 Elasticsearch 包存储库,在 `/etc/yum.repos.d/` 文件夹下创建一个包含以下内容的 `elastic.repo` 文件: ``` ~]# vi /etc/yum.repos.d/elastic.repo [elasticsearch-7.x] name=Elasticsearch repository for 7.x packages baseurl=https://artifacts.elastic.co/packages/7.x/yum gpgcheck=1 gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch enabled=1 autorefresh=1 type=rpm-md ``` 保存文件并退出。 在所有三个节点上使用 `rpm` 命令导入 Elastic 公共签名密钥。 ``` ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch ``` 在所有三个节点的 `/etc/hosts` 文件中添加以下行: ``` 192.168.56.40 elasticsearch1.linuxtechi.local 192.168.56.50 elasticsearch2.linuxtechi.local 192.168.56.60 elasticsearch3.linuxtechi.local ``` 使用 `yum`/`dnf` 命令在所有三个节点上安装 Java: ``` [root@linuxtechi ~]# dnf install java-openjdk -y [root@linuxtechi ~]# dnf install java-openjdk -y [root@linuxtechi ~]# dnf install java-openjdk -y ``` 使用 `yum`/`dnf` 命令在所有三个节点上安装 Elasticsearch: ``` [root@linuxtechi ~]# dnf install elasticsearch -y [root@linuxtechi ~]# dnf install elasticsearch -y [root@linuxtechi ~]# dnf install elasticsearch -y ``` **注意:** 如果操作系统防火墙已启用并在每个 Elasticsearch 节点中运行,则使用 `firewall-cmd` 命令允许以下端口开放: ``` ~]# firewall-cmd --permanent --add-port=9300/tcp ~]# firewall-cmd --permanent --add-port=9200/tcp ~]# firewall-cmd --reload ``` 配置 Elasticsearch:在所有节点上编辑文件 `/etc/elasticsearch/elasticsearch.yml` 并加入以下内容: ``` ~]# vim /etc/elasticsearch/elasticsearch.yml cluster.name: opn-cluster node.name: elasticsearch1.linuxtechi.local network.host: 192.168.56.40 http.port: 9200 discovery.seed_hosts: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"] cluster.initial_master_nodes: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"] ``` **注意:** 在每个节点上,在 `node.name` 中填写正确的主机名,在 `network.host` 
中填写正确的 IP 地址,其他参数保持不变。 现在使用 `systemctl` 命令在所有三个节点上启动并启用 Elasticsearch 服务: ``` ~]# systemctl daemon-reload ~]# systemctl enable elasticsearch.service ~]# systemctl start elasticsearch.service ``` 使用下面的 `ss` 命令验证 elasticsearch 节点是否开始监听 9200 端口: ``` [root@linuxtechi ~]# ss -tunlp | grep 9200 tcp LISTEN 0 128 [::ffff:192.168.56.40]:9200 *:* users:(("java",pid=2734,fd=256)) [root@linuxtechi ~]# ``` 使用以下 `curl` 命令验证 Elasticsearch 群集状态: ``` [root@linuxtechi ~]# curl http://elasticsearch1.linuxtechi.local:9200 [root@linuxtechi ~]# curl -X GET http://elasticsearch2.linuxtechi.local:9200/_cluster/health?pretty ``` 命令的输出如下所示: ![Elasticsearch-cluster-status-rhel8](/data/attachment/album/201909/26/212753bchmv9icciva11wz.jpg) 以上输出表明我们已经成功创建了 3 节点的 Elasticsearch 集群,集群的状态也是绿色的。 **注意:** 如果你想修改 JVM 堆大小,那么你可以编辑文件 `/etc/elasticsearch/jvm.options`,并根据你的环境更改以下参数: * `-Xms1g` * `-Xmx1g` 现在让我们转到 Logstash 节点。 ### 安装和配置 Logstash 在两个 Logstash 节点上执行以下步骤。 登录到两个节点,使用 `hostnamectl` 命令设置主机名: ``` [root@linuxtechi ~]# hostnamectl set-hostname "logstash1.linuxtechi.local" [root@linuxtechi ~]# exec bash [root@linuxtechi ~]# [root@linuxtechi ~]# hostnamectl set-hostname "logstash2.linuxtechi.local" [root@linuxtechi ~]# exec bash [root@linuxtechi ~]# ``` 在两个 logstash 节点的 `/etc/hosts` 文件中添加以下条目: ``` ~]# vi /etc/hosts 192.168.56.40 elasticsearch1.linuxtechi.local 192.168.56.50 elasticsearch2.linuxtechi.local 192.168.56.60 elasticsearch3.linuxtechi.local ``` 保存文件并退出。 在两个节点上配置 Logstash 存储库,在文件夹 `/etc/yum.repos.d/` 下创建一个包含以下内容的文件 `logstash.repo`: ``` ~]# vi /etc/yum.repos.d/logstash.repo [elasticsearch-7.x] name=Elasticsearch repository for 7.x packages baseurl=https://artifacts.elastic.co/packages/7.x/yum gpgcheck=1 gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch enabled=1 autorefresh=1 type=rpm-md ``` 保存并退出文件,运行 `rpm` 命令导入签名密钥: ``` ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch ``` 使用 `yum`/`dnf` 命令在两个节点上安装 Java OpenJDK: ``` ~]# dnf install java-openjdk -y ``` 从两个节点运行 `yum`/`dnf` 命令来安装 logstash: ``` [root@linuxtechi ~]# dnf install logstash -y [root@linuxtechi ~]# dnf install logstash -y ``` 现在配置 logstash:在两个 logstash 节点上执行以下步骤来创建 logstash 配置文件。首先,我们把 logstash 示例文件复制到 `/etc/logstash/conf.d/` 下: ``` # cd /etc/logstash/ # cp logstash-sample.conf conf.d/logstash.conf ``` 编辑配置文件并更新以下内容: ``` # vi conf.d/logstash.conf input { beats { port => 5044 } } output { elasticsearch { hosts => ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"] index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}" #user => "elastic" #password => "changeme" } } ``` 在 `output` 部分之下,在 `hosts` 参数中指定所有三个 Elasticsearch 节点的 FQDN,其他参数保持不变。 使用 `firewall-cmd` 命令在操作系统防火墙中允许 logstash 端口 “5044”: ``` ~ # firewall-cmd --permanent --add-port=5044/tcp ~ # firewall-cmd --reload ``` 现在,在每个节点上运行以下 `systemctl` 命令,启动并启用 Logstash 服务: ``` ~]# systemctl start logstash ~]# systemctl enable logstash ``` 使用 `ss` 命令验证 logstash 服务是否开始监听 5044 端口: ``` [root@linuxtechi ~]# ss -tunlp | grep 5044 tcp LISTEN 0 128 *:5044 *:* users:(("java",pid=2416,fd=96)) [root@linuxtechi ~]# ``` 以上输出表明 logstash 已成功安装和配置。让我们转到 Kibana 安装。 ### 安装和配置 Kibana 登录 Kibana 节点,使用 `hostnamectl` 命令设置主机名: ``` [root@linuxtechi ~]# hostnamectl set-hostname "kibana.linuxtechi.local" [root@linuxtechi ~]# exec bash [root@linuxtechi ~]# ``` 编辑 `/etc/hosts` 文件并添加以下行: ``` 192.168.56.40 elasticsearch1.linuxtechi.local 192.168.56.50 elasticsearch2.linuxtechi.local 192.168.56.60
elasticsearch3.linuxtechi.local ``` 使用以下命令设置 Kibana 存储库: ``` [root@linuxtechi ~]# vi /etc/yum.repos.d/kibana.repo [elasticsearch-7.x] name=Elasticsearch repository for 7.x packages baseurl=https://artifacts.elastic.co/packages/7.x/yum gpgcheck=1 gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch enabled=1 autorefresh=1 type=rpm-md [root@linuxtechi ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch ``` 执行 `yum`/`dnf` 命令安装 kibana: ``` [root@linuxtechi ~]# yum install kibana -y ``` 通过编辑 `/etc/kibana/kibana.yml` 文件,配置 Kibana: ``` [root@linuxtechi ~]# vim /etc/kibana/kibana.yml ………… server.host: "kibana.linuxtechi.local" server.name: "kibana.linuxtechi.local" elasticsearch.hosts: ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"] ………… ``` 启用并启动 kibana 服务: ``` [root@linuxtechi ~]# systemctl start kibana [root@linuxtechi ~]# systemctl enable kibana ``` 在系统防火墙上允许 Kibana 端口 “5601”: ``` [root@linuxtechi ~]# firewall-cmd --permanent --add-port=5601/tcp success [root@linuxtechi ~]# firewall-cmd --reload success [root@linuxtechi ~]# ``` 使用以下 URL 访问 Kibana 界面:<http://kibana.linuxtechi.local:5601> ![Kibana-Dashboard-rhel8](/data/attachment/album/201909/26/212453uht51r0cd0wcupbh.jpg) 从面板上,我们可以检查 Elastic Stack 集群的状态。 ![Stack-Monitoring-Overview-RHEL8](/data/attachment/album/201909/26/212508s0f88fd788nwk868.jpg) 这证明我们已经在 RHEL 8 /CentOS 8 上成功地安装并设置了多节点 Elastic Stack 集群。 现在让我们通过 `filebeat` 从其他 Linux 服务器发送一些日志到 logstash 节点中,在我的例子中,我有一个 CentOS 7服务器,我将通过 `filebeat` 将该服务器的所有重要日志推送到 logstash。 登录到 CentOS 7 服务器使用 yum/rpm 命令安装 filebeat 包: ``` [root@linuxtechi ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm Retrieving https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm Preparing... ################################# [100%] Updating / installing... 
1:filebeat-7.3.1-1 ################################# [100%] [root@linuxtechi ~]# ``` 编辑 `/etc/hosts` 文件并添加以下内容: ``` 192.168.56.20 logstash1.linuxtechi.local 192.168.56.30 logstash2.linuxtechi.local ``` 现在配置 `filebeat`,以便它可以使用负载平衡技术向 logstash 节点发送日志。编辑文件 `/etc/filebeat/filebeat.yml`,并添加以下参数: 在 `filebeat.inputs:` 部分将 `enabled: false` 更改为 `enabled: true`,并在 `paths` 参数下指定我们可以发送到 logstash 的日志文件的位置;注释掉 `output.elasticsearch` 和 `host` 参数;删除 `output.logstash:` 和 `hosts:` 的注释,并在 `hosts` 参数中添加两个 logstash 节点,以及设置 `loadbalance: true`。 ``` [root@linuxtechi ~]# vi /etc/filebeat/filebeat.yml filebeat.inputs: - type: log enabled: true paths: - /var/log/messages - /var/log/dmesg - /var/log/maillog - /var/log/boot.log #output.elasticsearch: # hosts: ["localhost:9200"] output.logstash: hosts: ["logstash1.linuxtechi.local:5044", "logstash2.linuxtechi.local:5044"] loadbalance: true ``` 使用下面的两个 `systemctl` 命令启动并启用 `filebeat` 服务: ``` [root@linuxtechi ~]# systemctl start filebeat [root@linuxtechi ~]# systemctl enable filebeat ``` 现在转到 Kibana 用户界面,验证新索引是否可见。 从左侧栏中选择管理选项,然后单击 Elasticsearch 下的索引管理: ![Elasticsearch-index-management-Kibana](/data/attachment/album/201909/26/212514m1lziump2ly2zz81.jpg) 正如我们上面看到的,索引现在是可见的,让我们现在创建索引模式。 点击 Kibana 部分的 “Index Patterns”,它将提示我们创建一个新模式,点击 “Create Index Pattern”,并将模式名称指定为 “filebeat”: ![Define-Index-Pattern-Kibana-RHEL8](/data/attachment/album/201909/26/212519soeqwn1emyomum2m.jpg) 点击下一步。 选择 “Timestamp” 作为索引模式的时间过滤器,然后单击 “Create index pattern”: ![Time-Filter-Index-Pattern-Kibana-RHEL8](/data/attachment/album/201909/26/212526cvb1vr31lj3lojkn.jpg) ![filebeat-index-pattern-overview-Kibana](/data/attachment/album/201909/26/212532wlz4albbl2mgccap.jpg) 现在单击 “Discover”,查看实时的 filebeat 索引模式: ![Discover-Kibana-REHL8](/data/attachment/album/201909/26/212548zqw4482qv44q12v7.jpg) 这表明 Filebeat 代理已配置成功,我们能够在 Kibana 仪表盘上看到实时日志。 以上就是本文的全部内容,希望这些步骤能帮助你在 RHEL 8 / CentOS 8 系统上搭建 Elastic Stack 集群,欢迎随时分享你的反馈和意见。 --- via: <https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/> 作者:[Pradeep Kumar](https://www.linuxtechi.com/author/pradeep/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[heguangzhi](https://github.com/heguangzhi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
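最后,基于文中用过的 `curl` 健康检查命令,这里给出一个示意性的小脚本(脚本名为假设,主机名沿用文中的示例环境),用于依次检查三个 Elasticsearch 节点上报的集群健康状态:

```
#!/bin/bash
# 示意脚本:check_es_health.sh(名称为假设)
# 依次向三个节点查询 _cluster/health,任一节点不可达时给出提示
for node in elasticsearch1.linuxtechi.local \
            elasticsearch2.linuxtechi.local \
            elasticsearch3.linuxtechi.local; do
    echo "== ${node} =="
    curl -s "http://${node}:9200/_cluster/health?pretty" || echo "无法连接 ${node}"
done
```

正常情况下,每个节点返回的 `status` 字段都应为 `green`,与上文中看到的集群状态一致。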
200
OK
Elastic Stack, widely known as the **ELK stack**, is a group of open source products: **Elasticsearch**, **Logstash** and **Kibana**. Elastic Stack is developed and maintained by the Elastic company. Using the Elastic Stack, one can feed a system's logs to Logstash, a data collection engine which accepts logs or data from all sources, normalizes the logs, and then forwards them to Elasticsearch for **analyzing**, **indexing**, **searching** and **storing**. Finally, using Kibana one can visualize that data; with Kibana we can also create interactive graphs and diagrams based on user queries.

In this article we will demonstrate how to set up a multi node Elastic Stack (ELK Stack) cluster on RHEL 8 / [CentOS 8](https://www.linuxtechi.com/centos-8-installation-guide-screenshots/) servers.

Following are the details for my Elastic Stack cluster:

##### Elasticsearch:

- Three Servers with Minimal RHEL 8 / CentOS 8
- IPs & Hostname – 192.168.56.40 (elasticsearch1.linuxtechi.local), 192.168.56.50 (elasticsearch2.linuxtechi.local), 192.168.56.60 (elasticsearch3.linuxtechi.local)

##### Logstash:

- Two Servers with minimal RHEL 8 / CentOS 8
- IPs & Hostname – 192.168.56.20 (logstash1.linuxtechi.local), 192.168.56.30 (logstash2.linuxtechi.local)

##### Kibana:

- One Server with minimal RHEL 8 / CentOS 8
- Hostname – kibana.linuxtechi.local
- IP – 192.168.56.10

##### Filebeat:

- One Server with minimal CentOS 7
- IP & hostname – 192.168.56.70 (web-server)

Let's start with the Elasticsearch cluster setup,

#### Setup 3 node Elasticsearch cluster

As I have already stated, I have kept three nodes for the Elasticsearch cluster. Login to each node, set the hostname and configure the yum/dnf repositories.

Use the below hostnamectl command to set the hostname on the respective nodes,

[root@localhost ~]# hostnamectl set-hostname "elasticsearch1.linuxtechi.local"
[root@localhost ~]# exec bash
[root@elasticsearch1 ~]#

[root@localhost ~]# hostnamectl set-hostname "elasticsearch2.linuxtechi.local"
[root@localhost ~]# exec bash
[root@elasticsearch2 ~]#

[root@localhost ~]# hostnamectl set-hostname "elasticsearch3.linuxtechi.local"
[root@localhost ~]# exec bash
[root@elasticsearch3 ~]#

For a CentOS 8 system we don't need to configure any OS package repository; for a RHEL 8 server, if you have a valid subscription, subscribe it with Red Hat to get the package repository.
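In case the RHEL 8 server is not registered yet, a typical subscription-manager sequence looks something like below (a generic illustration of the standard subscription-manager workflow; the credentials are placeholders you would replace with your own Red Hat account details):

[root@localhost ~]# subscription-manager register --username=<your-rhn-username> --password=<your-password>
[root@localhost ~]# subscription-manager attach --auto
[root@localhost ~]# subscription-manager repos --list-enabled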
In case you want to configure a local yum/dnf repository for OS packages, then refer to the below URL:

[How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File](https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/)

Configure the Elasticsearch package repository on all the nodes; create a file elastic.repo under the /etc/yum.repos.d/ folder with the following content,

~]# vi /etc/yum.repos.d/elastic.repo

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Save & exit the file.

Use the below rpm command on all three nodes to import Elastic's public signing key,

~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the following lines in the /etc/hosts file on all three nodes,

192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local

Install Java on all three nodes using the yum / dnf command,

[root@elasticsearch1 ~]# dnf install java-openjdk -y
[root@elasticsearch2 ~]# dnf install java-openjdk -y
[root@elasticsearch3 ~]# dnf install java-openjdk -y

Install Elasticsearch using the beneath dnf command on all three nodes,

[root@elasticsearch1 ~]# dnf install elasticsearch -y
[root@elasticsearch2 ~]# dnf install elasticsearch -y
[root@elasticsearch3 ~]# dnf install elasticsearch -y

**Note:** In case the OS firewall is enabled and running on each Elasticsearch node, then allow the following ports using the beneath firewall-cmd commands,

~]# firewall-cmd --permanent --add-port=9300/tcp
~]# firewall-cmd --permanent --add-port=9200/tcp
~]# firewall-cmd --reload

Configure Elasticsearch, edit the file "**/etc/elasticsearch/elasticsearch.yml**" on all the three nodes and add the following,

~]# vim /etc/elasticsearch/elasticsearch.yml
…………………………………………
cluster.name: opn-cluster
node.name: elasticsearch1.linuxtechi.local
network.host: 192.168.56.40
http.port: 9200
discovery.seed_hosts: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
cluster.initial_master_nodes: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
……………………………………………

**Note:** On each node, add the correct hostname in the node.name parameter and the IP address in the network.host parameter; the other parameters remain the same.

Now start and enable the Elasticsearch service on all three nodes using the following systemctl commands,

~]# systemctl daemon-reload
~]# systemctl enable elasticsearch.service
~]# systemctl start elasticsearch.service

Use the below 'ss' command to verify whether the elasticsearch node has started listening on port 9200,

[root@elasticsearch1 ~]# ss -tunlp | grep 9200
tcp LISTEN 0 128 [::ffff:192.168.56.40]:9200 *:* users:(("java",pid=2734,fd=256))
[root@elasticsearch1 ~]#

Use the following curl commands to verify the Elasticsearch cluster status,

[root@elasticsearch1 ~]# curl http://elasticsearch1.linuxtechi.local:9200
[root@elasticsearch1 ~]# curl -X GET http://elasticsearch2.linuxtechi.local:9200/_cluster/health?pretty

The output of the above commands confirms that we have successfully created a 3 node Elasticsearch cluster and that the status of the cluster is also green.
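For reference, the `_cluster/health?pretty` call on a healthy three-node cluster returns JSON shaped roughly like the following (an illustrative sample rather than output captured from these exact servers, so your counts and shard numbers will differ):

{
  "cluster_name" : "opn-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_a_number" : 100.0
}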
**Note:** If you want to modify the JVM heap size, then edit the file "**/etc/elasticsearch/jvm.options**" and change the below parameters to suit your environment,

- -Xms1g
- -Xmx1g

Now let's move to the Logstash nodes,

#### Install and Configure Logstash

Perform the following steps on both Logstash nodes,

Login to both the nodes and set the hostname using the following hostnamectl command,

[root@localhost ~]# hostnamectl set-hostname "logstash1.linuxtechi.local"
[root@localhost ~]# exec bash
[root@logstash1 ~]#

[root@localhost ~]# hostnamectl set-hostname "logstash2.linuxtechi.local"
[root@localhost ~]# exec bash
[root@logstash2 ~]#

Add the following entries in the /etc/hosts file on both logstash nodes,

~]# vi /etc/hosts

192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local

Save and exit the file.

Configure the Logstash repository on both the nodes; create a file **logstash.repo** under the folder /etc/yum.repos.d/ with the following content,

~]# vi /etc/yum.repos.d/logstash.repo

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Save and exit the file, then run the following rpm command to import the signing key,

~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Install Java OpenJDK on both the nodes using the following dnf command,

~]# dnf install java-openjdk -y

Run the following dnf command from both the nodes to install logstash,

[root@logstash1 ~]# dnf install logstash -y
[root@logstash2 ~]# dnf install logstash -y

Now configure logstash; perform the below steps on both logstash nodes,

Create a logstash conf file; for that, first copy the sample logstash file under '/etc/logstash/conf.d/',

# cd /etc/logstash/
# cp logstash-sample.conf conf.d/logstash.conf

Edit the conf file and update the following content,

# vi conf.d/logstash.conf

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

In the output section, in the hosts parameter, specify the FQDNs of all three Elasticsearch nodes; leave the other parameters as they are.

Allow the logstash port "5044" in the OS firewall using the following firewall-cmd commands,

~ # firewall-cmd --permanent --add-port=5044/tcp
~ # firewall-cmd --reload

Now start and enable the Logstash service; run the following systemctl commands on both the nodes,

~]# systemctl start logstash
~]# systemctl enable logstash

Use the below ss command to verify whether the logstash service has started listening on port 5044,

[root@logstash1 ~]# ss -tunlp | grep 5044
tcp LISTEN 0 128 *:5044 *:* users:(("java",pid=2416,fd=96))
[root@logstash1 ~]#

Above output confirms that logstash has been installed and configured successfully. Let's move to Kibana installation.
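As a side note, logstash can also validate a pipeline file without starting the service, which is handy whenever you edit logstash.conf later. On a standard RPM install, something like the below should work (the binary path shown is the usual RPM layout; adjust it if yours differs):

~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit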
#### Install and Configure Kibana

Login to the Kibana node, set the hostname with the **hostnamectl** command,

[root@localhost ~]# hostnamectl set-hostname "kibana.linuxtechi.local"
[root@localhost ~]# exec bash
[root@kibana ~]#

Edit the /etc/hosts file and add the following lines,

192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local

Set up the Kibana repository using the following,

[root@kibana ~]# vi /etc/yum.repos.d/kibana.repo

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[root@kibana ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Execute the below yum/dnf command to install kibana,

[root@kibana ~]# yum install kibana -y

Configure Kibana by editing the file "**/etc/kibana/kibana.yml**",

[root@kibana ~]# vim /etc/kibana/kibana.yml
…………
server.host: "kibana.linuxtechi.local"
server.name: "kibana.linuxtechi.local"
elasticsearch.hosts: ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
…………

Start and enable the kibana service,

[root@kibana ~]# systemctl start kibana
[root@kibana ~]# systemctl enable kibana

Allow the Kibana port '5601' in the OS firewall,

[root@kibana ~]# firewall-cmd --permanent --add-port=5601/tcp
success
[root@kibana ~]# firewall-cmd --reload
success
[root@kibana ~]#

Access the Kibana portal / GUI using the following URL:

http://kibana.linuxtechi.local:5601

From the dashboard, we can also check our Elastic Stack cluster status.

This confirms that we have successfully set up a multi node Elastic Stack cluster on RHEL 8 / CentOS 8.

Now let's send some logs to the logstash nodes via filebeat from other Linux servers. In my case I have one CentOS 7 server; I will push all important logs of this server to logstash via filebeat.

Login to the CentOS 7 server and install the filebeat package using the following rpm command,

[root@web-server ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Retrieving https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:filebeat-7.3.1-1 ################################# [100%]
[root@web-server ~]#

Edit the /etc/hosts file and add the following entries,

192.168.56.20 logstash1.linuxtechi.local
192.168.56.30 logstash2.linuxtechi.local

Now configure filebeat so that it can send logs to the logstash nodes using a load balancing technique; edit the file "**/etc/filebeat/filebeat.yml**" and add the following parameters,

Under the '**filebeat.inputs:**' section change '**enabled: false**' to '**enabled: true**' and under the "**paths**" parameter specify the location of the log files that we can send to logstash. In the output Elasticsearch section, comment out "**output.elasticsearch**" and the **hosts** parameter. In the Logstash output section, remove the comments for "**output.logstash:**" and "**hosts:**", add both logstash nodes in the hosts parameter and also "**loadbalance: true**".

[root@web-server ~]# vi /etc/filebeat/filebeat.yml
……………………….
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/dmesg
    - /var/log/maillog
    - /var/log/boot.log

#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["logstash1.linuxtechi.local:5044", "logstash2.linuxtechi.local:5044"]
  loadbalance: true
………………………………………

Start and enable filebeat service using beneath systemctl commands,

[root@web-server ~]# systemctl start filebeat
[root@web-server ~]# systemctl enable filebeat

Now go to Kibana GUI, verify whether new indices are visible or not,

Choose Management option from Left side bar and then click on Index Management under Elasticsearch,

As we can see above, indices are visible now, let's create index pattern,

Click on “Index Patterns” from Kibana Section, it will prompt us to create a new pattern, click on “**Create Index Pattern**” and specify the pattern name as “**filebeat**”

Click on Next Step

Choose “**Timestamp**” as time filter for index pattern and then click on “Create index pattern”

Now Click on Discover to see real time filebeat index pattern,

This confirms that Filebeat agent has been configured successfully and we are able to see real time logs on Kibana dashboard.

That's all from this article, please don't hesitate to share your feedback and comments in case these steps help you to setup multi node Elastic Stack Cluster on RHEL 8 / CentOS 8 system.
11,396
构建一个即时消息应用(一):模式
https://nicolasparada.netlify.com/posts/go-messenger-schema/
2019-09-27T21:15:15
[]
https://linux.cn/article-11396-1.html
![](/data/attachment/album/201909/27/211458n44f7jvp77lfxxm0.jpg) 这是一系列关于构建“即时消息”应用的新帖子。你应该对这类应用并不陌生。有了它们的帮助,我们才可以与朋友畅聊无忌。[Facebook Messenger](https://www.messenger.com/)、[WhatsApp](https://www.whatsapp.com/) 和 [Skype](https://www.skype.com/) 就是其中的几个例子。正如你所看到的那样,这些应用允许我们发送图片、传输视频、录制音频、以及和一大帮子人聊天等等。当然,我们的教程应用将会尽量保持简单,只在两个用户之间发送文本消息。 我们将会用 [CockroachDB](https://www.cockroachlabs.com/) 作为 SQL 数据库,用 [Go](https://golang.org/) 作为后端语言,并且用 JavaScript 来制作 web 应用。 这是第一篇帖子,我们将会讲述数据库的设计。 ``` CREATE TABLE users ( id SERIAL NOT NULL PRIMARY KEY, username STRING NOT NULL UNIQUE, avatar_url STRING, github_id INT NOT NULL UNIQUE ); ``` 显然,这个应用需要一些用户。我们这里采用社交登录的形式。由于我选用了 [GitHub](https://github.com/),所以这里需要保存一个对 GitHub 用户 ID 的引用。 ``` CREATE TABLE conversations ( id SERIAL NOT NULL PRIMARY KEY, last_message_id INT, INDEX (last_message_id DESC) ); ``` 每个对话都会引用最近一条消息。每当我们输入一条新消息时,我们都会更新这个字段。我会在后面添加外键约束。 … 你可能会想,我们可以先对对话进行分组,然后再通过这样的方式获取最近一条消息。但这样做会使查询变得更加复杂。 ``` CREATE TABLE participants ( user_id INT NOT NULL REFERENCES users ON DELETE CASCADE, conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE, messages_read_at TIMESTAMPTZ NOT NULL DEFAULT now(), PRIMARY KEY (user_id, conversation_id) ); ``` 尽管之前我提到过对话只会在两个用户之间进行,但我们还是采用了允许向对话中添加多个参与者的设计。因此,在对话和用户之间有一个参与者表。 为了知道用户是否有未读消息,我们在消息表中添加了“读取时间”(`messages_read_at`)字段。每当用户在对话中读取消息时,我们都会更新它的值,这样一来,我们就可以将它与对话中最后一条消息的“创建时间”(`created_at`)字段进行比较。 ``` CREATE TABLE messages ( id SERIAL NOT NULL PRIMARY KEY, content STRING NOT NULL, user_id INT NOT NULL REFERENCES users ON DELETE CASCADE, conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE, created_at TIMESTAMPTZ NOT NULL DEFAULT now(), INDEX(created_at DESC) ); ``` 尽管我们将消息表放在最后,但它在应用中相当重要。我们用它来保存对创建它的用户以及它所出现的对话的引用。而且还可以根据“创建时间”(`created_at`)来创建索引以完成对消息的排序。 ``` ALTER TABLE conversations ADD CONSTRAINT fk_last_message_id_ref_messages FOREIGN KEY (last_message_id) REFERENCES messages ON DELETE SET NULL; ``` 我在前面已经提到过这个外键约束了,不是吗:D 有这四张表就足够了。你也可以将这些查询保存到一个文件中,并将其通过管道传送到 Cockroach CLI。 首先,我们需要启动一个新节点: ``` cockroach start --insecure --host 127.0.0.1 ``` 然后创建数据库和这些表: ``` cockroach sql --insecure -e "CREATE DATABASE messenger" cat schema.sql | cockroach sql --insecure -d messenger ``` 这篇帖子就到这里。在接下来的部分中,我们将会介绍「登录」,敬请期待。 * [源代码](https://github.com/nicolasparada/go-messenger-demo) --- via: <https://nicolasparada.netlify.com/posts/go-messenger-schema/> 作者:[Nicolás Parada](https://nicolasparada.netlify.com/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[PsiACE](https://github.com/PsiACE) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
404
Not Found
null
11,397
如何在 Linux/Windows/MacOS 上使用 .NET 进行开发
https://opensource.com/article/19/9/getting-started-net
2019-09-28T11:11:12
[ ".NET" ]
/article-11397-1.html
> > 了解 .NET 开发平台启动和运行的基础知识。 > > > ![](/data/attachment/album/201909/28/111101n3i43c38tv3j9im4.jpg) .NET 框架由 Microsoft 于 2000 年发布。该平台的开源实现 [Mono](https://www.monodevelop.com/) 在 21 世纪初成为了争议的焦点,因为微软拥有 .NET 技术的多项专利,并且可能使用这些专利来终止 Mono 项目。幸运的是,在 2014 年,微软宣布 .NET 开发平台从此成为 MIT 许可下的开源平台,并在 2016 年收购了开发 Mono 的 Xamarin 公司。 .NET 和 Mono 已经同时可用于 C#、F#、GTK+、Visual Basic、Vala 等的跨平台编程环境。使用 .NET 和 Mono 创建的程序已经应用于 Linux、BSD、Windows、MacOS、Android,甚至一些游戏机。你可以使用 .NET 或 Mono 来开发 .NET 应用。这两个都是开源的,并且都有活跃和充满活力的社区。本文重点介绍微软的 .NET 环境。 ### 如何安装 .NET .NET 下载被分为多个包:一个仅包含 .NET 运行时,另一个 .NET SDK 包含了 .NET Core 和运行时。根据架构和操作系统版本,这些包可能有多个版本。要开始使用 .NET 进行开发,你必须[安装该 SDK](https://dotnet.microsoft.com/download)。它为你提供了 [dotnet](https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet?tabs=netcore21) 终端或 PowerShell 命令,你可以使用它们来创建和生成项目。 #### Linux 要在 Linux 上安装 .NET,首先将微软 Linux 软件仓库添加到你的计算机。 在 Fedora 上: ``` $ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc $ sudo wget -q -O /etc/yum.repos.d/microsoft-prod.repo https://packages.microsoft.com/config/fedora/27/prod.repo ``` 在 Ubuntu 上: ``` $ wget -q https://packages.microsoft.com/config/ubuntu/19.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb $ sudo dpkg -i packages-microsoft-prod.deb ``` 接下来,使用包管理器安装 SDK,将 `<X.Y>` 替换为当前版本的 .NET 版本: 在 Fedora 上: ``` $ sudo dnf install dotnet-sdk-<X.Y> ``` 在 Ubuntu 上: ``` $ sudo apt install apt-transport-https $ sudo apt update $ sudo apt install dotnet-sdk-<X.Y> ``` 下载并安装所有包后,打开终端并输入下面命令确认安装: ``` $ dotnet --version X.Y.Z ``` #### Windows 如果你使用的是微软 Windows,那么你可能已经安装了 .NET 运行时。但是,要开发 .NET 应用,你还必须安装 .NET Core SDK。 首先,[下载安装程序](https://dotnet.microsoft.com/download)。请认准下载 .NET Core 进行跨平台开发(.NET Framework 仅适用于 Windows)。下载 .exe 文件后,双击该文件启动安装向导,然后单击两下进行安装:接受许可证并允许安装继续。 ![Installing dotnet on Windows](/data/attachment/album/201909/28/111125jgsef75jnzcexgff.jpg "Installing dotnet on Windows") 然后,从左下角的“应用程序”菜单中打开 PowerShell。在 PowerShell 中,输入测试命令: ``` PS C:\Users\osdc> dotnet ``` 如果你看到有关 dotnet 安装的信息,那么说明 .NET 已正确安装。 #### MacOS 如果你使用的是 Apple Mac,请下载 .pkg 形式的 [Mac 安装程序](https://dotnet.microsoft.com/download)。下载并双击该 .pkg 文件,然后单击安装程序。你可能需要授予安装程序权限,因为该软件包并非来自 App Store。 下载并安装所有软件包后,请打开终端并输入以下命令来确认安装: ``` $ dotnet --version X.Y.Z ``` ### Hello .NET `dotnet` 命令提供了一个用 .NET 编写的 “hello world” 示例程序。或者,更准确地说,该命令提供了示例应用。 首先,使用 `dotnet` 命令以及 `new` 和 `console` 参数创建一个控制台应用的项目目录及所需的代码基础结构。使用 `-o` 选项指定项目名称: ``` $ dotnet new console -o hellodotnet ``` 这将在当前目录中创建一个名为 `hellodotnet` 的目录。进入你的项目目录并看一下: ``` $ cd hellodotnet $ dir hellodotnet.csproj obj Program.cs ``` `Program.cs` 是一个空的 C# 文件,它包含了一个简单的 Hello World 程序。在文本编辑器中打开查看它。微软的 Visual Studio Code 是一个使用 dotnet 编写的跨平台的开源应用,虽然它不是一个糟糕的文本编辑器,但它会收集用户的大量数据(在它的二进制发行版的许可证中授予了自己权限)。如果要尝试使用 Visual Studio Code,请考虑使用 [VSCodium](https://vscodium.com/),它是使用 Visual Studio Code 的 MIT 许可的源码构建的版本,而*没有*远程收集(请阅读[此文档](https://github.com/VSCodium/vscodium/blob/master/DOCS.md)来禁止此构建中的其他形式追踪)。或者,只需使用现有的你最喜欢的文本编辑器或 IDE。 新控制台应用中的样板代码为: ``` using System; namespace hellodotnet { class Program { static void Main(string[] args) { Console.WriteLine("Hello World!"); } } } ``` 要运行该程序,请使用 `dotnet run` 命令: ``` $ dotnet run Hello World! 
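# 顺带一提:如果只想编译项目而不运行它,可以使用 dotnet CLI 的 build 子命令(通用用法示例,具体输出因环境而异):
$ dotnet build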
```

这是 .NET 和 `dotnet` 命令的基本工作流程。这里有完整的 [.NET C# 指南](https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/intro-to-csharp/),并且都是与 .NET 相关的内容。关于 .NET 实战示例,请关注 [Alex Bunardzic](https://opensource.com/users/alex-bunardzic "View user profile.") 在 opensource.com 中的变异测试文章。

---

via: <https://opensource.com/article/19/9/getting-started-net>

作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
11,399
chgrp 和 newgrp 命令简介
https://opensource.com/article/19/9/linux-chgrp-and-newgrp-commands
2019-09-28T16:02:00
[ "chgrp" ]
/article-11399-1.html
> 
> chgrp 和 newgrp 命令可帮助你管理需要维护组所有权的文件。
> 
> 

![](/data/attachment/album/201909/28/155554aezllilzbedetm43.jpg)

在最近的一篇文章中,我介绍了 [chown](/article-11416-1.html) 命令,它用于修改系统上的文件所有权。回想一下,所有权是分配给一个对象的用户和组的组合。`chgrp` 和 `newgrp` 命令为管理需要维护组所有权的文件提供了帮助。

### 使用 chgrp

`chgrp` 只是更改文件的组所有权。这与 `chown :<group>` 命令相同。你可以使用:

```
$ chown :alan mynotes
```

或者:

```
$ chgrp alan mynotes
```

#### 递归

`chgrp` 和它的一些参数可以用在命令行和脚本中。就像许多其他 Linux 命令一样,`chgrp` 有一个递归参数 `-R`。如下所示,你需要它来对文件夹及其内容进行递归操作。我加了 `-v`(详细)参数,因此 `chgrp` 会告诉我它在做什么:

```
$ ls -l . conf
.:
drwxrwxr-x 2 alan alan 4096 Aug 5 15:33 conf

conf:
-rw-rw-r-- 1 alan alan 0 Aug 5 15:33 conf.xml

# chgrp -vR delta conf
changed group of 'conf/conf.xml' from alan to delta
changed group of 'conf' from alan to delta
```

#### 参考

当你要更改文件的组以匹配特定的配置,或者当你不知道具体的组时(比如你运行一个脚本时),可使用参考文件(`--reference=RFILE`)。你可以复制另外一个作为参考的文件(RFILE)的组。比如,为了撤销上面的更改(请注意,点 `.` 代表当前工作目录):

```
$ chgrp -vR --reference=. conf
```

#### 报告更改

大多数命令都有用于控制其输出的参数。最常见的是 `-v` 来启用详细信息,而且 `chgrp` 命令也拥有详细模式。它还具有 `-c`(`--changes`)参数,指示 `chgrp` 仅在进行了更改时报告。`chgrp` 还会报告其他内容,例如操作不被允许时。

参数 `-f`(`--silent`、`--quiet`)用于禁止显示大部分错误消息。我将在下一节中使用此参数和 `-c` 来显示实际更改。

#### 保持根目录

Linux 文件系统的根目录(`/`)应该受到高度重视。如果命令在此层级犯了一个错误,那么后果可能是可怕的,并会让系统无法使用。尤其是在运行一个会递归修改甚至删除的命令时。`chgrp` 命令有一个可用于保护和保持根目录的参数。它是 `--preserve-root`。如果在根目录中将此参数和递归一起使用,那么什么也不会发生,而是会出现一条消息:

```
[root@localhost /]# chgrp -cfR --preserve-root a+w /
chgrp: it is dangerous to operate recursively on '/'
chgrp: use --no-preserve-root to override this failsafe
```

不与递归(-R)结合使用时,该选项无效。但是,如果该命令由 `root` 用户运行,那么 `/` 的权限将会更改,但其下的其他文件或目录的权限则不会被更改:

```
[alan@localhost /]$ chgrp -c --preserve-root alan /
chgrp: changing group of '/': Operation not permitted
[root@localhost /]# chgrp -c --preserve-root alan /
changed group of '/' from root to alan
```

令人惊讶的是,它似乎不是默认参数。而选项 `--no-preserve-root` 是默认的。如果你在不带“保持”选项的情况下运行上述命令,那么它将默认为“无保持”模式,并可能会更改不应更改的文件的权限:

```
[alan@localhost /]$ chgrp -cfR alan /
changed group of '/dev/pts/0' from tty to alan
changed group of '/dev/tty2' from tty to alan
changed group of '/var/spool/mail/alan' from mail to alan
```

### 关于 newgrp

`newgrp` 命令允许用户覆盖当前的主要组。当你在所有文件必须有相同的组所有权的目录中操作时,`newgrp` 会很方便。假设你的内网服务器上有一个名为 `share` 的目录,不同的团队在其中存储市场活动照片。组名为 `share`。当不同的用户将文件放入目录时,文件的主要组可能会变得混乱。每当添加新文件时,你都可以运行 `chgrp` 将错乱的组纠正为 `share`:

```
$ cd share
ls -l
-rw-r--r--. 1 alan share 0 Aug 7 15:35 pic13
-rw-r--r--. 1 alan alan 0 Aug 7 15:35 pic1
-rw-r--r--. 1 susan delta 0 Aug 7 15:35 pic2
-rw-r--r--. 1 james gamma 0 Aug 7 15:35 pic3
-rw-rw-r--. 1 bill contract 0 Aug 7 15:36 pic4
```

我在 [chmod 命令](https://opensource.com/article/19/8/linux-chmod-command)的文章中介绍了 `setgid` 模式。它是解决此问题的一种方法。但是,假设由于某种原因未设置 `setgid` 位。`newgrp` 命令在此时很有用。在任何用户将文件放入 `share` 目录之前,他们可以运行命令 `newgrp share`。这会将其主要组切换为 `share`,因此他们放入目录中的所有文件都将有 `share` 组,而不是用户自己的主要组。完成后,用户可以使用以下命令切换回常规主要组(例如):

```
newgrp alan
```

### 总结

了解如何管理用户、组和权限非常重要。最好知道一些替代方法来解决可能遇到的问题,因为并非所有环境都以相同的方式设置。

---

via: <https://opensource.com/article/19/9/linux-chgrp-and-newgrp-commands>

作者:[Alan Formy-Duval](https://opensource.com/users/alanfdoss) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
11,400
每周开源点评:云原生 Java、开源安全以及更多行业趋势
https://opensource.com/article/19/8/cloud-native-java-and-more
2019-09-29T10:32:57
[ "开源" ]
https://linux.cn/article-11400-1.html
> > 开源社区和行业趋势的每周总览。 > > > ![Person standing in front of a giant computer screen with numbers, data](/data/attachment/album/201909/29/103316pxhxlh5l9qag7mth.png "Person standing in front of a giant computer screen with numbers, data") 作为我在具有开源开发模型的企业软件公司担任高级产品营销经理的角色的一部分,我为产品营销人员、经理和其他影响者定期发布有关开源社区,市场和行业趋势的定期更新。 以下是该更新中我和他们最喜欢的五篇文章。 ### 《为什么现代 web 开发如此复杂?》 * [文章地址](https://www.vrk.dev/2019/07/11/why-is-modern-web-development-so-complicated-a-long-yet-hasty-explanation-part-1/) > > 现代前端 web 开发带来了一种两极分化的体验:许多人喜欢它,而其他人则鄙视它。 > > > 我是现代Web开发的忠实拥护者,尽管我将其描述为“魔法”——而魔法也有其优点和缺点……。最近,我一直在向那些只具有粗略的原始 web 开发工作流程的人们讲解“现代 web 开发工作流程”……,但我发现需要解释的内容实在是太多了!甚至笼统的解释最终都会变得冗长。因此,在我努力写下更多解释的过程中,这里是对 web 开发演变的一个长期而笼统的解释的开始…… > > > **影响**:足够具体,对前端开发人员非常有用(特别是对新开发人员),且足够简单,解释得足够好,可以帮助非开发人员更好地理解前端开发人员的一些问题。到最后,你将(有点)了解 Javascript 和 WebAPI 之间的区别,以及 2019 年的 Javascript 与 2006 年的 Javascript 有何不同。 ### 开源 Kubernetes 安全审计 * [文章链接](https://www.cncf.io/blog/2019/08/06/open-sourcing-the-kubernetes-security-audit/) > > 去年,云原生计算基金会(CNCF)开始为其项目执行并开源第三方安全审计,以提高我们生态系统的整体安全性。这个想法是从一些项目开始,并从 CNCF 社区收集了关于这个试点项目是否有用的反馈。第一批经历这个过程的项目是 [CoreDNS](https://coredns.io/2018/03/15/cure53-security-assessment/)、[Envoy](https://github.com/envoyproxy/envoy/blob/master/docs/SECURITY_AUDIT.pdf) 和 [Prometheus](https://cure53.de/pentest-report_prometheus.pdf)。这些首次公开审计发现了从一般漏洞到严重漏洞的安全问题。有了这些结果,CoreDNS、Envoy 和 Prometheus 的项目维护者就能够解决已发现的漏洞,并添加文档来帮助用户。 > > > 从这些初始审计中得出的主要结论是,公开安全审计是测试开源项目的质量及其漏洞管理过程的一个很好的方法,更重要的是,测试开源项目的安全实践有多大的弹性。特别是 CNCF 的[毕业项目](https://www.cncf.io/projects/),它们被世界上一些最大的公司广泛应用于生产中,它们必须坚持最高级别的安全最佳实践。 > > > **影响**:就像 Linux 之于数据中心一样,很多公司都把云计算押宝在 Kubernetes 上。从安全的角度来看,看到其中 4 家公司以确保项目正在做应该做的事情,这激发了人们的信心。共享这项研究表明,开源远远不止是仓库中的代码;它是以一种有益于整个社区而不是少数人利益的方式获取和分享专家意见。 ### Quarkus——这个轻量级 Java 框架的下一步是什么? * [文章链接](https://jaxenter.com/quarkus-whats-next-for-the-lightweight-java-framework-160793.html) > > “容器优先”是什么意思?Quarkus 有哪些优势?0.20.0 版本有什么新功能?未来我们可以期待哪些功能?1.0.0 版什么时候发布?我们对 Quarkus 有很多问题,而 Alex Soto 也很耐心地回答了所有问题。 随着 Quarkus 0.20.0 的发布,我们和 [JAX 伦敦演讲者](https://jaxlondon.com/cloud-kubernetes-serverless/java-particle-acceleration-using-quarkus/),Java 拥护者和红帽的开发人员体验总监 Alex Soto 进行了接触。他很好地回答了我们关于 Quarkus 的过去、现在和未来的所有问题。看起来我们对这个令人兴奋的轻量级框架有很多期待! 
> > > **影响**:最近有个聪明的人告诉我,Quarkus 有潜力使 Java “可能成为容器和无服务器环境的最佳语言之一”。不禁使我多看了一眼。尽管 Java 是最流行的编程语言之一([如果不是最流行的](https://opensource.com/article/19/8/possibly%20one%20of%20the%20best%20languages%20for%20containers%20and%20serverless%20environments.)),但当你听到“云原生”一词时,它可能并不是第一个想到的语言。Quarkus 可以通过让开发人员将他们的经验应用到新的挑战中,从而扩展和提高他们所拥有的技能的价值。 ### Julia 编程语言:用户批露他们最喜欢和最讨厌它的地方 * [文章链接](https://www.zdnet.com/article/julia-programming-language-users-reveal-what-they-love-and-hate-the-most-about-it/#ftag=RSSbaffb68) > > Julia 最受欢迎的技术特性是速度和性能,其次是易用性,而最受欢迎的非技术特性是使用者无需付费即可使用它。 > > > 用户还报告了他们对该语言最大的不满。排在首位的是附加功能的包不够成熟,或者维护得不够好,无法满足他们的需求。 > > > **影响**:Julia 1.0 版本已经发布了一年,并且在一系列相关指标(下载、GitHub 星级等)中取得了令人瞩目的增长。它是一种直接针对我们当前和未来最大挑战(“科学计算、机器学习、数据挖掘、大规模线性代数、分布式和并行计算”)的语言,因此,了解用户对它的感受,就可以间接看到有关这些挑战的应对情况。 ### 多云数据解读:11 个有趣的统计数据 * [文章链接](https://enterprisersproject.com/article/2019/8/multi-cloud-statistics) > > 如果你把我们最近对 [Kubernetes 的有趣数据](https://enterprisersproject.com/article/2019/7/kubernetes-statistics-13-compelling)的深入研究归结最基本的一条,它看起来是这样的:[Kubernetes](https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA) 的受欢迎程度在可预见的未来将持续下去。 > > > 剧透警报:当你挖掘有关[多云](https://www.redhat.com/en/topics/cloud-computing/what-is-multicloud?intcmp=701f2000000tjyaAAA)使用情况的最新数据时,他们告诉你一个类似的描述:使用率正在飙升。 > > > 这种一致性是有道理的。也许不是每个组织都将使用 Kubernetes 来管理其多云和/或[混合云](https://enterprisersproject.com/hybrid-cloud)基础架构,但是两者越来越紧密地联系在一起。即使不这样做,它们都反映了向更分散和异构 IT 环境的普遍转变,以及[云原生开发](https://enterprisersproject.com/article/2018/10/how-explain-cloud-native-apps-plain-english)和其他重叠趋势。 > > > **影响**:越来越多地采用“多云战略”的另一种解释是,它们将组织中单独部分未经协商而作出的决策追溯为“战略”,从而使决策合法化。“等等,所以你从谁那里买了几个小时?又从另一个人那里买了几个小时?为什么在会议纪要中没有呢?我想我们现在是一家多云公司!”。当然,我在开玩笑,我敢肯定大多数大公司的协调能力远胜于此,对吗? *我希望你喜欢这张上周让我印象深刻的列表,并在下周一回来了解更多的开放源码社区、市场和行业趋势。* --- via: <https://opensource.com/article/19/8/cloud-native-java-and-more> 作者:[Tim Hildred](https://opensource.com/users/thildred) 选题:[lujun9972](https://github.com/lujun9972) 译者:[laingke](https://github.com/laingke) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update. [Why is modern web development so complicated?](https://www.vrk.dev/2019/07/11/why-is-modern-web-development-so-complicated-a-long-yet-hasty-explanation-part-1/) Modern frontend web development is a polarizing experience: many love it, others despise it. I am a huge fan of modern web development, though I would describe it as "magical"—and magic has its upsides and downsides... Recently I’ve been needing to explain “modern web development workflows” to folks who only have a cursory of vanilla web development workflows and… It is a LOT to explain! Even a hasty explanation ends up being pretty long. So in the effort of writing more of my explanations down, here is the beginning of a long yet hasty explanation of the evolution of web development.. **The impact: **Specific enough to be useful to (especially new) frontend developers, but simple and well explained enough to help non-developers understand better some of the frontend developer problems. By the end, you'll (kinda) know the difference between Javascript and WebAPIs and how 2019 Javascript is different than 2006 Javascript. [Open sourcing the Kubernetes security audit](https://www.cncf.io/blog/2019/08/06/open-sourcing-the-kubernetes-security-audit/) Last year, the Cloud Native Computing Foundation (CNCF) began the process of performing and open sourcing third-party security audits for its projects in order to improve the overall security of our ecosystem. The idea was to start with a handful of projects and gather feedback from the CNCF community as to whether or not this pilot program was useful. The first projects to undergo this process were [CoreDNS],[Envoy]and[Prometheus]. These first public audits identified security issues from general weaknesses to critical vulnerabilities. With these results, project maintainers for CoreDNS, Envoy and Prometheus have been able to address the identified vulnerabilities and add documentation to help users.The main takeaway from these initial audits is that a public security audit is a great way to test the quality of an open source project along with its vulnerability management process and more importantly, how resilient the open source project’s security practices are. With CNCF [graduated projects]especially, which are used widely in production by some of the largest companies in the world, it is imperative that they adhere to the highest levels of security best practices. **The impact: **A lot of companies are placing big bets on Kubernetes being to the cloud what Linux is to that data center. Seeing 4 of those companies working together to make sure the project is doing what it should be from a security perspective inspires confidence. Sharing that research shows that open source is so much more than code in a repository; it is the capturing and sharing of expert opinions in a way that benefits the community at large rather than the interests of a few. [Quarkus—what's next for the lightweight Java framework?](https://jaxenter.com/quarkus-whats-next-for-the-lightweight-java-framework-160793.html) What does “container first” mean? What are the strengths of Quarkus? What’s new in 0.20.0? What features can we look forward to in the future? 
When will version 1.0.0 be released? We have so many questions about Quarkus and Alex Soto was kind enough to answer them all. With the release of Quarkus 0.20.0, we decided to get in touch with[JAX London speaker], Java Champion, and Director of Developer Experience at Red Hat – Alex Soto. He was kind enough to answer all our questions about the past, present, and future of Quarkus. It seems like we have a lot to look forward to with this exciting lightweight framework! **The impact**: Someone clever recently told me that Quarkus has the potential to make Java "possibly one of the best languages for containers and serverless environments". That made me do a double-take; while Java is one of the most popular programming languages ([if not the most popular](https://opensource.com/possibly%20one%20of%20the%20best%20languages%20for%20containers%20and%20serverless%20environments.)) it probably isn't the first one that jumps to mind when you hear the words "cloud native." Quarkus could extend and grow the value of the skills held by a huge chunk of the developer workforce by allowing them to apply their experience to new challenges. [Julia programming language: Users reveal what they love and hate the most about it](https://www.zdnet.com/article/julia-programming-language-users-reveal-what-they-love-and-hate-the-most-about-it/#ftag=RSSbaffb68) The most popular technical feature of Julia is speed and performance followed by ease of use, while the most popular non-technical feature is that users don't have to pay to use it. Users also report their biggest gripes with the language. The top one is that packages for add-on features aren't sufficiently mature or well maintained to meet their needs. **The impact: **The Julia 1.0 release has been out for a year now, and has seen impressive growth in a bunch of relevant metrics (downloads, GitHub stars, etc). It is a language aimed squarely at some of our biggest current and future challenges ("scientific computing, machine learning, data mining, large-scale linear algebra, distributed and parallel computing") so finding out how it's users are feeling about it gives an indirect read on how well those challenges are being addressed. [Multi-cloud by the numbers: 11 interesting stats](https://enterprisersproject.com/article/2019/8/multi-cloud-statistics) If you boil our recent dive into [interesting stats about Kubernetes]down to its bottom line, it looks something like this:[Kubernetes']popularity will continue for the foreseeable future.Spoiler alert: When you dig up recent numbers about [multi-cloud]usage, they tell a similar story: Adoption is soaring.This congruity makes sense. Perhaps not every organization will use Kubernetes to manage its multi-cloud and/or [hybrid cloud]infrastructure, but the two increasingly go hand-in-hand. Even when they don’t, they both reflect a general shift toward more distributed and heterogeneous IT environments, as well as[cloud-native development]and other overlapping trends. **The impact**: Another explanation of increasing adoption of "multi-cloud strategies" is they retroactively legitimize decisions taken in separate parts of an organization without consultation as "strategic." "Wait, so you bought hours from who? And you bought hours from the other one? Why wasn't that in the meeting minutes? I guess we're a multi-cloud company now!" Of course I'm joking, I'm sure most big companies are a lot better coordinated than that, right? 
*I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends.*
11,401
使用 strace 查找 Emacs 启动阻塞的原因
https://www.lujun9972.win/blog/2019/09/26/%E4%BD%BF%E7%94%A8strace%E6%9F%A5%E6%89%BEemacs%E5%90%AF%E5%8A%A8%E9%98%BB%E5%A1%9E%E7%9A%84%E5%8E%9F%E5%9B%A0(exec-path-from-shell)/index.html
2019-09-29T11:53:24
[ "Emacs", "strace" ]
https://linux.cn/article-11401-1.html
![](/data/attachment/album/201909/29/115250bgndkvezdds7q24j.jpg)

之前就觉得我的 Emacs 启动好慢,查看启动日志会发现启动到一半的时候会有一个比较长时间的卡顿。

之前一直没有理会它,今天花了点时间探索了一下,发现罪魁祸首居然是 exec-path-from-shell 这个包。

现将探索的过程记录如下:

由于使用了 spacemacs 的配置,配置上比较复杂,不太想通过实验缩减配置的方式来摸索出问题的地方。刚好最近在学习使用 `strace` 工具,因此决定使用 `strace` 来看看 Emacs 到底卡在哪里。

```
strace emacs --fg-daemon
```

输出的内容特别多,这里只截取卡顿前的部分内容

```
readlinkat(AT_FDCWD, "/home", 0x7ffd1d3abb50, 1024) = -1 EINVAL (无效的参数)
readlinkat(AT_FDCWD, "/home/lujun9972", 0x7ffd1d3abf00, 1024) = -1 EINVAL (无效的参数)
readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d", 0x7ffd1d3ac2b0, 1024) = -1 EINVAL (无效的参数)
readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d/elpa", 0x7ffd1d3ac660, 1024) = -1 EINVAL (无效的参数)
readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d/elpa/exec-path-from-shell-20180323.1904", 0x7ffd1d3aca10, 1024) = -1 EINVAL (无效的参数)
readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d/elpa/exec-path-from-shell-20180323.1904/exec-path-from-shell.elc", 0x7ffd1d3acdc0, 1024) = -1 EINVAL (无效的参数)
lseek(7, -2655, SEEK_CUR) = 1441
read(7, "\n(defvar exec-path-from-shell-de"..., 4096) = 4096
lseek(7, 5537, SEEK_SET) = 5537
lseek(7, 5537, SEEK_SET) = 5537
lseek(7, 5537, SEEK_SET) = 5537
lseek(7, 5537, SEEK_SET) = 5537
lseek(7, 5537, SEEK_SET) = 5537
lseek(7, 5537, SEEK_SET) = 5537
brk(0x7507000) = 0x7507000
lseek(7, 5537, SEEK_SET) = 5537
lseek(7, 5537, SEEK_SET) = 5537
lseek(7, 5537, SEEK_SET) = 5537
read(7, "230\\205\26\0\t\22\\307\\310\t!\vC\\\"\\211\24\\2"..., 4096) = 2430
lseek(7, 7967, SEEK_SET) = 7967
lseek(7, 7967, SEEK_SET) = 7967
lseek(7, 7967, SEEK_SET) = 7967
lseek(7, 7967, SEEK_SET) = 7967
read(7, "", 4096) = 0
close(7) = 0
getpid() = 10818
faccessat(AT_FDCWD, "/home/lujun9972/bin/printf", X_OK) = -1 ENOENT (没有那个文件或目录)
faccessat(AT_FDCWD, "/usr/local/sbin/printf", X_OK) = -1 ENOENT (没有那个文件或目录)
faccessat(AT_FDCWD, "/usr/local/bin/printf", X_OK) = -1 ENOENT (没有那个文件或目录)
faccessat(AT_FDCWD, "/usr/bin/printf", X_OK) = 0
stat("/usr/bin/printf", {st_mode=S_IFREG|0755, st_size=51176, ...}) = 0
openat(AT_FDCWD, "/dev/null", O_RDONLY|O_CLOEXEC) = 7
faccessat(AT_FDCWD, "/proc/5070/fd/.", F_OK) = 0
faccessat(AT_FDCWD, "/proc/5070/fd/.", F_OK) = 0
faccessat(AT_FDCWD, "/bin/bash", X_OK) = 0
stat("/bin/bash", {st_mode=S_IFREG|0755, st_size=903440, ...}) = 0
pipe2([8, 9], O_CLOEXEC) = 0
rt_sigprocmask(SIG_BLOCK, [INT CHLD], [], 8) = 0
vfork() = 10949
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
close(9) = 0
close(7) = 0
read(8, "bash: \346\227\240\346\263\225\350\256\276\345\256\232\347\273\210\347\253\257\350\277\233\347\250\213\347\273"..., 16384) = 74
read(8, "bash: \346\255\244 shell \344\270\255\346\227\240\344\273\273\345\212\241\346\216\247\345"..., 16310) = 35
read(8, "setterm: \347\273\210\347\253\257 xterm-256color \344"..., 16275) = 51
read(8, "Couldn't get a file descriptor r"..., 16224) = 56
read(8, "bash: [: \357\274\232\351\234\200\350\246\201\346\225\264\346\225\260\350\241\250\350\276\276\345\274"..., 16168) = 34
read(8, "Your display number is 0\n", 16134) = 25
read(8, "Test whether fcitx is running co"..., 16109) = 53
read(8, "Fcitx is running correctly.\n\n==="..., 16056) = 87
read(8, "Launch fbterm...\n", 15969) = 17
read(8, "stdin isn't a tty!\n", 15952) = 19
read(8, "__RESULT\0/home/lujun9972/bin:/ho"..., 15933) = 298
read(8, 0x7ffd1d39ce9d, 15635) = ?
ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=10949, si_uid=1000, si_status=0, si_utime=10, si_stime=7} ---
rt_sigreturn({mask=[]}) = -1 EINTR (被中断的系统调用)
read(8, "", 15635) = 0
wait4(10949, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 10949
close(8) = 0
getpid() = 10818
faccessat(AT_FDCWD, "/home/lujun9972/bin/printf", X_OK) = -1 ENOENT (没有那个文件或目录)
faccessat(AT_FDCWD, "/usr/local/sbin/printf", X_OK) = -1 ENOENT (没有那个文件或目录)
faccessat(AT_FDCWD, "/usr/local/bin/printf", X_OK) = -1 ENOENT (没有那个文件或目录)
faccessat(AT_FDCWD, "/usr/bin/printf", X_OK) = 0
stat("/usr/bin/printf", {st_mode=S_IFREG|0755, st_size=51176, ...}) = 0
openat(AT_FDCWD, "/dev/null", O_RDONLY|O_CLOEXEC) = 7
faccessat(AT_FDCWD, "/proc/5070/fd/.", F_OK) = 0
faccessat(AT_FDCWD, "/proc/5070/fd/.", F_OK) = 0
faccessat(AT_FDCWD, "/bin/bash", X_OK) = 0
stat("/bin/bash", {st_mode=S_IFREG|0755, st_size=903440, ...}) = 0
pipe2([8, 9], O_CLOEXEC) = 0
rt_sigprocmask(SIG_BLOCK, [INT CHLD], [], 8) = 0
vfork() = 11679
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
close(9) = 0
close(7) = 0
read(8, "setterm: \347\273\210\347\253\257 xterm-256color \344"..., 16384) = 51
read(8, "Couldn't get a file descriptor r"..., 16333) = 56
read(8, "/home/lujun9972/.bash_profile: \347"..., 16277) = 72
read(8, "Your display number is 0\nTest wh"..., 16205) = 78
read(8, "Fcitx is running correctly.\n\n==="..., 16127) = 104
read(8, "stdin isn't a tty!\n", 16023) = 19
read(8, "__RESULT\0b269cd09e7ec4e8a115188c"..., 16004) = 298
read(8, 0x7ffd1d39cba6, 15706) = ?
ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=11679, si_uid=1000, si_status=0, si_utime=1, si_stime=1} ---
rt_sigreturn({mask=[]}) = -1 EINTR (被中断的系统调用)
read(8,
```

很容易就可以看出,当 Emacs 卡顿时,它在尝试从 8 号文件句柄中读取内容。

那么 8 号文件句柄在哪里定义的呢?往前看可以看到:

```
pipe2([8, 9], O_CLOEXEC) = 0
rt_sigprocmask(SIG_BLOCK, [INT CHLD], [], 8) = 0
vfork() = 11679
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
close(9) = 0
```

可以推测出,Emacs 主进程 `fork` 出一个子进程(进程号为 11679),并通过管道读取子进程的内容。

然而,从

```
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=11679, si_uid=1000, si_status=0, si_utime=1, si_stime=1} ---
rt_sigreturn({mask=[]}) = -1 EINTR (被中断的系统调用)
read(8,
```

可以看出,实际上子进程已经退出了(父进程收到 SIGCHLD 信号),父进程却依然在尝试从管道中读取内容,导致了阻塞。

而且从

```
read(8, "setterm: \347\273\210\347\253\257 xterm-256color \344"..., 16384) = 51
read(8, "Couldn't get a file descriptor r"..., 16333) = 56
read(8, "/home/lujun9972/.bash_profile: \347"..., 16277) = 72
read(8, "Your display number is 0\nTest wh"..., 16205) = 78
read(8, "Fcitx is running correctly.\n\n==="..., 16127) = 104
read(8, "stdin isn't a tty!\n", 16023) = 19
read(8, "__RESULT\0b269cd09e7ec4e8a115188c"..., 16004) = 298
read(8, 0x7ffd1d39cba6, 15706) = ?
ERESTARTSYS (To be restarted if SA_RESTART is set)
```

可以看到,子进程的输出似乎是我的交互式登录 bash 启动时的输出(加载了 `.bash_profile`)。

再往前翻,发现这么一段信息:

```
readlinkat(AT_FDCWD, "/home", 0x7ffd1d3abb50, 1024) = -1 EINVAL (无效的参数)
readlinkat(AT_FDCWD, "/home/lujun9972", 0x7ffd1d3abf00, 1024) = -1 EINVAL (无效的参数)
readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d", 0x7ffd1d3ac2b0, 1024) = -1 EINVAL (无效的参数)
readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d/elpa", 0x7ffd1d3ac660, 1024) = -1 EINVAL (无效的参数)
readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d/elpa/exec-path-from-shell-20180323.1904", 0x7ffd1d3aca10, 1024) = -1 EINVAL (无效的参数)
readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d/elpa/exec-path-from-shell-20180323.1904/exec-path-from-shell.elc", 0x7ffd1d3acdc0, 1024) = -1 EINVAL (无效的参数)
lseek(7, -2655, SEEK_CUR) = 1441
read(7, "\n(defvar exec-path-from-shell-de"..., 4096) = 4096
```

这很明显是跟 `exec-path-from-shell` 有关啊。

通过查看 `exec-path-from-shell` 的实现,发现 `exec-path-from-shell` 的实现原理是通过实际调启一个 shell,然后输出 `PATH` 和 `MANPATH` 的值的。

而且对于 `bash` 来说,默认的启动参数为 `-i -l`(可以通过 `exec-path-from-shell-arguments` 来设置)。也就是说 `bash` 会作为交互式的登录 shell 来启动,因此会加载 `.bash_profile` 和 `.bashrc`。

既然发现跟 `exec-path-from-shell` 这个包有关,而且据说这个包对 Linux 其实意义不大,那不如直接禁用掉好了。

```
dotspacemacs-excluded-packages '(exec-path-from-shell)
```

再次重启 Emacs,发现这次启动速度明显快了许多。
200
OK
# 使用strace查找Emacs启动阻塞的原因(exec-path-from-shell) 之前就觉得我的Emacs启动好慢,查看启动日志会发现启动到一般的时候会有一个比较长时间的卡顿。 之前一直没有理会它,今天花了点时间探索了一下,发现罪魁祸首居然是exec-path-from-shell这个package。 现将探索的过程记录如下: 由于使用了spacemacs的配置,配置上比较复杂,不太想通过实验缩减配置的方式来摸索出问题的地方. 刚好最近在学习使用strace工具,因此决定使用strace来看看Emacs到底卡在哪里。 strace emacs --fg-daemon 输出的内容特别多,这里只截取卡顿前的部分内容 readlinkat(AT_FDCWD, "/home", 0x7ffd1d3abb50, 1024) = -1 EINVAL (无效的参数) readlinkat(AT_FDCWD, "/home/lujun9972", 0x7ffd1d3abf00, 1024) = -1 EINVAL (无效的参数) readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d", 0x7ffd1d3ac2b0, 1024) = -1 EINVAL (无效的参数) readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d/elpa", 0x7ffd1d3ac660, 1024) = -1 EINVAL (无效的参数) readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d/elpa/exec-path-from-shell-20180323.1904", 0x7ffd1d3aca10, 1024) = -1 EINVAL (无效的参数) readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d/elpa/exec-path-from-shell-20180323.1904/exec-path-from-shell.elc", 0x7ffd1d3acdc0, 1024) = -1 EINVAL (无效的参数) lseek(7, -2655, SEEK_CUR) = 1441 read(7, "\n(defvar exec-path-from-shell-de"..., 4096) = 4096 lseek(7, 5537, SEEK_SET) = 5537 lseek(7, 5537, SEEK_SET) = 5537 lseek(7, 5537, SEEK_SET) = 5537 lseek(7, 5537, SEEK_SET) = 5537 lseek(7, 5537, SEEK_SET) = 5537 lseek(7, 5537, SEEK_SET) = 5537 brk(0x7507000) = 0x7507000 lseek(7, 5537, SEEK_SET) = 5537 lseek(7, 5537, SEEK_SET) = 5537 lseek(7, 5537, SEEK_SET) = 5537 read(7, "230\\205\26\0\t\22\\307\\310\t!\vC\\\"\\211\24\\2"..., 4096) = 2430 lseek(7, 7967, SEEK_SET) = 7967 lseek(7, 7967, SEEK_SET) = 7967 lseek(7, 7967, SEEK_SET) = 7967 lseek(7, 7967, SEEK_SET) = 7967 read(7, "", 4096) = 0 close(7) = 0 getpid() = 10818 faccessat(AT_FDCWD, "/home/lujun9972/bin/printf", X_OK) = -1 ENOENT (没有那个文件或目录) faccessat(AT_FDCWD, "/usr/local/sbin/printf", X_OK) = -1 ENOENT (没有那个文件或目录) faccessat(AT_FDCWD, "/usr/local/bin/printf", X_OK) = -1 ENOENT (没有那个文件或目录) faccessat(AT_FDCWD, "/usr/bin/printf", X_OK) = 0 stat("/usr/bin/printf", {st_mode=S_IFREG|0755, st_size=51176, ...}) = 0 openat(AT_FDCWD, "/dev/null", O_RDONLY|O_CLOEXEC) = 7 faccessat(AT_FDCWD, "/proc/5070/fd/.", F_OK) = 0 faccessat(AT_FDCWD, "/proc/5070/fd/.", F_OK) = 0 faccessat(AT_FDCWD, "/bin/bash", X_OK) = 0 stat("/bin/bash", {st_mode=S_IFREG|0755, st_size=903440, ...}) = 0 pipe2([8, 9], O_CLOEXEC) = 0 rt_sigprocmask(SIG_BLOCK, [INT CHLD], [], 8) = 0 vfork() = 10949 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 close(9) = 0 close(7) = 0 read(8, "bash: \346\227\240\346\263\225\350\256\276\345\256\232\347\273\210\347\253\257\350\277\233\347\250\213\347\273"..., 16384) = 74 read(8, "bash: \346\255\244 shell \344\270\255\346\227\240\344\273\273\345\212\241\346\216\247\345"..., 16310) = 35 read(8, "setterm: \347\273\210\347\253\257 xterm-256color \344"..., 16275) = 51 read(8, "Couldn't get a file descriptor r"..., 16224) = 56 read(8, "bash: [: \357\274\232\351\234\200\350\246\201\346\225\264\346\225\260\350\241\250\350\276\276\345\274"..., 16168) = 34 read(8, "Your display number is 0\n", 16134) = 25 read(8, "Test whether fcitx is running co"..., 16109) = 53 read(8, "Fcitx is running correctly.\n\n==="..., 16056) = 87 read(8, "Launch fbterm...\n", 15969) = 17 read(8, "stdin isn't a tty!\n", 15952) = 19 read(8, "__RESULT\0/home/lujun9972/bin:/ho"..., 15933) = 298 read(8, 0x7ffd1d39ce9d, 15635) = ? 
ERESTARTSYS (To be restarted if SA_RESTART is set) --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=10949, si_uid=1000, si_status=0, si_utime=10, si_stime=7} --- rt_sigreturn({mask=[]}) = -1 EINTR (被中断的系统调用) read(8, "", 15635) = 0 wait4(10949, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 10949 close(8) = 0 getpid() = 10818 faccessat(AT_FDCWD, "/home/lujun9972/bin/printf", X_OK) = -1 ENOENT (没有那个文件或目录) faccessat(AT_FDCWD, "/usr/local/sbin/printf", X_OK) = -1 ENOENT (没有那个文件或目录) faccessat(AT_FDCWD, "/usr/local/bin/printf", X_OK) = -1 ENOENT (没有那个文件或目录) faccessat(AT_FDCWD, "/usr/bin/printf", X_OK) = 0 stat("/usr/bin/printf", {st_mode=S_IFREG|0755, st_size=51176, ...}) = 0 openat(AT_FDCWD, "/dev/null", O_RDONLY|O_CLOEXEC) = 7 faccessat(AT_FDCWD, "/proc/5070/fd/.", F_OK) = 0 faccessat(AT_FDCWD, "/proc/5070/fd/.", F_OK) = 0 faccessat(AT_FDCWD, "/bin/bash", X_OK) = 0 stat("/bin/bash", {st_mode=S_IFREG|0755, st_size=903440, ...}) = 0 pipe2([8, 9], O_CLOEXEC) = 0 rt_sigprocmask(SIG_BLOCK, [INT CHLD], [], 8) = 0 vfork() = 11679 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 close(9) = 0 close(7) = 0 read(8, "setterm: \347\273\210\347\253\257 xterm-256color \344"..., 16384) = 51 read(8, "Couldn't get a file descriptor r"..., 16333) = 56 read(8, "/home/lujun9972/.bash_profile: \347"..., 16277) = 72 read(8, "Your display number is 0\nTest wh"..., 16205) = 78 read(8, "Fcitx is running correctly.\n\n==="..., 16127) = 104 read(8, "stdin isn't a tty!\n", 16023) = 19 read(8, "__RESULT\0b269cd09e7ec4e8a115188c"..., 16004) = 298 read(8, 0x7ffd1d39cba6, 15706) = ? ERESTARTSYS (To be restarted if SA_RESTART is set) --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=11679, si_uid=1000, si_status=0, si_utime=1, si_stime=1} --- rt_sigreturn({mask=[]}) = -1 EINTR (被中断的系统调用) read(8, 很容易就可以看出,当Emacs卡顿时,它在尝试从8号fd中读取内容. 那么8号fd在哪里定义的呢?往前看可以看到 pipe2([8, 9], O_CLOEXEC) = 0 rt_sigprocmask(SIG_BLOCK, [INT CHLD], [], 8) = 0 vfork() = 11679 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 close(9) = 0 可以推测出,Emacs主进程fork出一个子进程(进程号为11679),并通过管道读取子进程的内容。 然而,从 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=11679, si_uid=1000, si_status=0, si_utime=1, si_stime=1} --- rt_sigreturn({mask=[]}) = -1 EINTR (被中断的系统调用) read(8, 可以看出,实际上子进程已经退出了(父进程收到SIGCHLD信号),父进程确依然在尝试从管道中读取内容,导致的阻塞。 而且从 read(8, "setterm: \347\273\210\347\253\257 xterm-256color \344"..., 16384) = 51 read(8, "Couldn't get a file descriptor r"..., 16333) = 56 read(8, "/home/lujun9972/.bash_profile: \347"..., 16277) = 72 read(8, "Your display number is 0\nTest wh"..., 16205) = 78 read(8, "Fcitx is running correctly.\n\n==="..., 16127) = 104 read(8, "stdin isn't a tty!\n", 16023) = 19 read(8, "__RESULT\0b269cd09e7ec4e8a115188c"..., 16004) = 298 read(8, 0x7ffd1d39cba6, 15706) = ? 
ERESTARTSYS (To be restarted if SA_RESTART is set) 看到,子进程的输出似乎是我的交互式登录bash启动时的输出(加载了.bash_profile) 在往前翻发现这么一段信息 readlinkat(AT_FDCWD, "/home", 0x7ffd1d3abb50, 1024) = -1 EINVAL (无效的参数) readlinkat(AT_FDCWD, "/home/lujun9972", 0x7ffd1d3abf00, 1024) = -1 EINVAL (无效的参数) readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d", 0x7ffd1d3ac2b0, 1024) = -1 EINVAL (无效的参数) readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d/elpa", 0x7ffd1d3ac660, 1024) = -1 EINVAL (无效的参数) readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d/elpa/exec-path-from-shell-20180323.1904", 0x7ffd1d3aca10, 1024) = -1 EINVAL (无效的参数) readlinkat(AT_FDCWD, "/home/lujun9972/.emacs.d/elpa/exec-path-from-shell-20180323.1904/exec-path-from-shell.elc", 0x7ffd1d3acdc0, 1024) = -1 EINVAL (无效的参数) lseek(7, -2655, SEEK_CUR) = 1441 read(7, "\n(defvar exec-path-from-shell-de"..., 4096) = 4096 这很明显是跟 `exec-path-from-shell` 有关啊。 通过查看 `exec-path-from-shell` 的实现,发现 `exec-path-from-shell` 的实现原理是通过实际调启一个shell,然后输出 `PATH` 和 `MANPATH` 的值的。 而且对于 `bash` 来说,默认的启动参数为 `-i -l(可以通过exec-path-from-shell-arguments来设置)` . 也就是说 `bash` 会作为交互式的登录shell来启动的,因此会加载 `.bash_profile` 和 `.bashrc` . 既然发现跟 `exec-path-from-shell` 这个package有关,而且据说这个package对linux其实意义不大,那不如直接禁用掉好了。 dotspacemacs-excluded-packages '(exec-path-from-shell) 再次重启Emacs,发现这次启动速度明显快了许多了。
11,402
一份 Markdown 简介
https://opensource.com/article/19/9/introduction-markdown
2019-09-29T12:32:44
[ "Markdown" ]
/article-11402-1.html
> 
> 一次编辑便可将文本转换为多种格式。下面是如何开始使用 Markdown。
> 
> 

![](/data/attachment/album/201909/29/123226bjte253n2h44cjjj.jpg)

在很长一段时间里,我发现我在 GitLab 和 GitHub 上看到的所有文件都带有 **.md** 扩展名,这是专门为开发人员编写的文件类型。几周前,当我开始使用 Markdown 时,我的观念发生了变化。它很快成为我日常工作中最重要的工具。

Markdown 使我的生活更简易。我只需要在已经编写的代码中添加一些符号,并且在浏览器扩展或开源程序的帮助下,即可将文本转换为各种常用格式,如 ODT、电子邮件(稍后将详细介绍)、PDF 和 EPUB。

### 什么是 Markdown?

来自 [维基百科](https://en.wikipedia.org/wiki/Markdown)的友情提示:

> 
> Markdown 是一种轻量级标记语言,具有纯文本格式语法。
> 
> 

这意味着通过在文本中使用一些额外的符号,Markdown 可以帮助你创建具有特定结构和格式的文档。当你以纯文本(例如,在记事本应用程序中)做笔记时,没有任何东西表明哪个文本应该是粗体或斜体。在普通文本中,你在写链接时可能将一个链接写为 “<http://example.com>”,或者写为 “example.com”,又或“访问网站(example.com)”。这样没有内在的一致性。

但是如果你按照 Markdown 的方式编写,你的文本就有了内在的一致性。计算机喜欢一致性,因为这使得它们能够遵循严格的指令而不用担心异常。

相信我;一旦你学会使用 Markdown,每一项写作任务在某种程度上都会比以前更容易、更好。让我们开始吧。

### Markdown 基础

以下是使用 Markdown 的基础语法。

1、创建一个以 **.md** 扩展名结尾的文本文件(例如,`example.md`)。你可以使用任何文本编辑器(甚至像 LibreOffice 或 Microsoft Word 这样的文字处理程序亦可),只要记住将其保存为*文本*文件。

![Names of Markdown files](/data/attachment/album/201909/29/123250xzw5vjyvz4dw3y6s.png "Names of Markdown files")

2、想写什么就写什么,就像往常一样:

```
Lorem ipsum

Consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

De Finibus Bonorum et Malorum

Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem.
```

(LCTT 译注:上述这段“Lorem ipsum”,中文又称“乱数假文”,是一篇常用于排版设计领域的拉丁文文章,主要目的为测试文章或文字在不同字型、版型下看起来的效果。)

3、确保在段落之间留有空行。如果你习惯写商务信函或传统散文,这可能会让你觉得不自然,因为那里段落只有一行,甚至在第一个单词前还有一个缩进。对于 Markdown,空行(一些文字处理程序使用 `¶`,称为 Pilcrow 符号)保证在转换为另一种格式(如 HTML)时会创建一个新的段落。

4、指定标题和副标题。对于文档的标题,在文本前面添加一个井号或散列符号(`#`)和一个空格(例如 `# Lorem ipsum`)。第一个副标题级别使用两个(`## De Finibus Bonorum et Malorum`),下一个级别使用三个(`### 第三个副标题`),以此类推。注意,在井号和第一个单词之间有一个空格。

```
# Lorem ipsum

Consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

## De Finibus Bonorum et Malorum

Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem.
```

5、如果你想使用**粗体**字符,只需将字母放在两个星号之间,没有空格:`**对应的文本将以粗体显示**`。

![Bold text in Markdown](/data/attachment/album/201909/29/123301vwbz2f8wttq3bbtk.png "Bold text in Markdown")

6、对于**斜体**,将文本放在没有空格的下划线符号之间:`_我希望这段文本以斜体显示_`。(LCTT 译注:有的 Markdown 流派会将用下划线引起来的字符串视作下划线文本,而单个星号 `*` 引用起来的才视作斜体。从兼容性的角度看,使用星号比较兼容。)

![Italics text in Markdown](/data/attachment/album/201909/29/123317u0ep0m0w2dmz0mlp.png "Italics text in Markdown")

7、要插入一个链接(像 [Markdown Tutorial](https://www.markdowntutorial.com/)),把你想链接的文本放在方括号里,URL 放在圆括号里,中间没有空格:`[Markdown Tutorial](https://www.markdowntutorial.com/)`。

![Hyperlinks in Markdown](/data/attachment/album/201909/29/123337uuyttpuz47h6z5me.png "Hyperlinks in Markdown")

8、块引用是用大于号(`>`)编写的:在你要引用的文本前加上大于号和空格:`> 名言引用`。

![Blockquote text in Markdown](/data/attachment/album/201909/29/123359h63m96mkscm1c9s9.png "Blockquote text in Markdown")

### Markdown 教程和技巧

这些技巧可以帮助你上手 Markdown,但 Markdown 涵盖的功能远不止粗体、斜体和链接。学习 Markdown 的最好方法是使用它,但是我建议你花 15 分钟来学习这篇简单的 [Markdown 教程](https://www.markdowntutorial.com/),学以致用,勤加练习。

由于现代 Markdown 是对结构化文本概念的许多不同解释的融合,[CommonMark](https://commonmark.org/help/) 项目定义了一个规范,其中包含一组严格的规则,以使 Markdown 更加清晰。在编辑时,手边准备一份[符合 CommonMark 规范的速查表](https://opensource.com/downloads/cheat-sheet-markdown)可能会有帮助。

### 你能用 Markdown 做什么

Markdown 可以让你写任何你想写的东西,仅需一次编辑,就可以把它转换成几乎任何你想使用的格式。下面的示例演示如何用 Markdown 编写简单的文本,并将其转换为不同的格式。你不需要维护多种格式的文档:只需编辑一次,便拥有无限可能。

1、**简单的笔记**:你可以用 Markdown 编写你的笔记,并且在保存笔记时,开源笔记应用程序 [Turtl](https://turtlapp.com/) 将解释你的文本文件并显示为对应的格式。你可以把笔记存储在任何地方!

![Turtl application](/data/attachment/album/201909/29/123406kudznzc1uni63ccn.png "Turtl application")

2、**PDF 文件**:使用 [Pandoc](https://opensource.com/article/19/5/convert-markdown-to-word-pandoc) 应用程序,你可以使用一个简单的命令将 Markdown 文件转换为 PDF:

```
pandoc <file.md> -o <file.pdf>
```

![Markdown text converted to PDF with Pandoc](/data/attachment/album/201909/29/123418xeixhzl8b3yzhebf.png "Markdown text converted to PDF with Pandoc")

3、**Email**:你还可以通过安装浏览器扩展 [Markdown Here](https://markdown-here.com/) 将 Markdown 文本转换为 HTML 格式的电子邮件。要使用它,只需选择你的 Markdown 文本,使用 Markdown Here 将其转换为 HTML,然后使用你喜欢的电子邮件客户端发送消息。

![Markdown text converted to email with Markdown Here](/data/attachment/album/201909/29/123424orbu5k9b51psstp9.png "Markdown text converted to email with Markdown Here")

### 现在就开始上手吧

你不需要一个特殊的应用程序来使用 Markdown,你只需要一个文本编辑器和上面的技巧。它与你已有的写作方式兼容;你所需要做的就是使用它,所以试试吧。

---

via: <https://opensource.com/article/19/9/introduction-markdown>

作者:[Juan Islas](https://opensource.com/users/xislas) 选题:[lujun9972](https://github.com/lujun9972) 译者:[qfzy1233](https://github.com/qfzy1233) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
11,404
区块链 2.0 :以太坊(九)
https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
2019-09-29T17:41:00
[ "区块链", "以太坊" ]
https://linux.cn/article-11404-1.html
![Ethereum](/data/attachment/album/201909/29/174151e7apavuw77b3so4a.png) 在本系列的上一指南中,我们讨论了 [Hyperledger 项目(HLP)](/article-11275-1.html),这是一个由 Linux 基金会开发的增长最快的产品。在本指南中,我们将详细讨论什么是“<ruby> 以太坊 <rt> Ethereum </rt></ruby>”及其功能。许多研究人员认为,互联网的未来将基于<ruby> 去中心化计算 <rt> decentralized computing </rt></ruby>的原理。实际上,去中心化计算本来就是互联网诞生之初的宏大目标之一。但是,由于各处可用计算能力的差异,互联网的发展走上了另一条路。尽管现代服务器的能力使服务器端的处理和执行成为可能,但世界上大部分地区缺乏像样的移动网络,使得客户端难以独立承担同样的工作。现在,现代智能手机具有 SoC(片上系统),在客户端本身上也能够处理许多此类操作,但是,由于在安全地检索和存储数据方面的限制,开发人员仍然不得不在服务器端进行计算和数据管理。因此,当前可以观察到数据传输能力方面存在瓶颈。 由于分布式数据存储和程序执行平台的进步,所有这些可能很快就会改变。[区块链](/article-10650-1.html)允许在分布式用户网络(而不是中央服务器)上进行安全的数据管理和程序执行,这在互联网历史上基本上是第一次。 以太坊就是一个这样的区块链平台,使开发人员可以访问用于在这样的去中心化网络上构建和运行应用程序的框架和工具。尽管它以其加密货币而广为人知,以太坊不只是<ruby> 以太币 <rt> ether </rt></ruby>(加密货币)。这是一种完整的<ruby> 图灵完备 <rt> Turing complete </rt></ruby>编程语言,旨在开发和部署 DApp(即<ruby> 分布式应用 <rt> Distributed APPlication </rt></ruby>) <sup id="fnref1"> <a href="#fn1" rel="footnote"> 1 </a></sup>。我们会在接下来的一篇文章中详细介绍 DApp。 以太坊是开源的,默认情况下是一个公共(非许可)区块链,其底层还带有一个应用广泛的智能合约平台(使用 Solidity 语言)。以太坊提供了一个称为“<ruby> 以太坊虚拟机 <rt> Ethereum virtual machine </rt></ruby>(EVM)”的虚拟计算环境,以运行应用程序和[智能合约](/article-10956-1.html) <sup id="fnref2"> <a href="#fn2" rel="footnote"> 2 </a></sup>。以太坊虚拟机运行在世界各地的成千上万个参与节点上,这意味着应用程序数据在保证安全的同时,几乎不可能被篡改或丢失。 ### 以太坊的背后:什么使之不同 在 2017 年,为了推广对以太坊区块链的功能的利用,技术和金融领域的 30 多个团队汇聚一堂。因此,“<ruby> 以太坊企业联盟 <rt> Ethereum Enterprise Alliance </rt></ruby>”(EEA)由众多支持成员组成,包括微软、摩根大通、思科、德勤和埃森哲。摩根大通已经拥有 Quorum,这是一个基于以太坊的去中心化金融服务计算平台,目前已经投入运行;而微软拥有基于以太坊的云服务,通过其 Azure 云业务销售 <sup id="fnref3"> <a href="#fn3" rel="footnote"> 3 </a></sup>。 ### 什么是以太币,它和以太坊有什么关系 以太坊的创建者<ruby> 维塔利克·布特林 <rt> Vitalik Buterin </rt></ruby>深谙去中心化处理平台的真正价值以及为比特币提供动力的底层区块链技术。他曾提议对比特币加以扩展,以支持运行分布式应用程序(DApp)和程序(现在称为智能合约),但这个想法未能获得多数人的同意。 因此,他在 2013 年发表的白皮书中提出了以太坊的想法。原始白皮书仍然保留,[可供](https://github.com/ethereum/wiki/wiki/White-Paper)读者阅读。其理念是开发一个基于区块链的平台来运行智能合约和应用程序,这些合约和应用程序设计为在节点和用户设备上运行,而非服务器上运行。 以太坊系统经常被误认为就是加密货币以太币,但是,必须重申,以太坊是一个用于开发和执行应用程序的全栈平台,自成立以来一直如此,而比特币则不是。**以太币目前是按市值计算的第二大加密货币**,在撰写本文时,其平均交易价格为每个以太币 170 美元 <sup id="fnref4"> <a href="#fn4" rel="footnote"> 4 </a></sup>。 ### 该平台的功能和技术特性 <sup id="fnref5"> <a href="#fn5" rel="footnote"> 5 </a></sup> * 正如我们已经提到的,称为以太币的加密货币只是该平台功能之一。该系统的目的不仅仅是处理金融交易。 实际上,以太坊平台和比特币之间的主要区别在于它们的脚本能力。以太坊是以图灵完备的编程语言开发的,这意味着它具有类似于其他主要编程语言的脚本编程和应用程序功能。开发人员需要此功能才能在平台上创建 DApp 和复杂的智能合约,而该功能是比特币缺失的。 * 以太币的“挖矿”过程更加严格和复杂。尽管可以使用专用的 ASIC 来开采比特币,但以太坊使用的基本哈希算法(EThash)降低了 ASIC 在这方面的优势。 * 为激励矿工和节点运营者运行网络而支付的交易费用本身是使用称为 “<ruby> 燃料 <rt> Gas </rt></ruby>”的计算令牌来计算的。通过要求交易的发起者支付与执行交易所需的计算资源数量成比例的以太币,燃料提高了系统的弹性以及对外部黑客和攻击的抵抗力。这与其他平台(例如比特币)相反,在那些平台上,交易费用是按交易的数据大小来衡量的。因此,以太坊的平均交易成本从根本上低于比特币。这也意味着在以太坊虚拟机上运行的应用程序需要付费,具体取决于应用程序要解决的计算问题。基本上,执行越复杂,费用就越高。 * 以太坊的出块时间估计约为 10 - 15 秒。出块时间是在区块链网络上打时间戳和创建区块所需的平均时间。相比之下,同样的交易在比特币网络上要花费 10 分钟以上,很明显,就交易和区块验证而言,以太坊要快得多。 * *有趣的是,对可开采的以太币数量或开采速度没有硬性限制,这导致其系统设计不像比特币那么激进*。 ### 总结 尽管以太坊远远领先于类似的平台,但在以太坊企业联盟开始推动之前,该平台本身尚缺乏一条明确的发展道路。虽然以太坊平台确实推动了企业发展,但必须注意,以太坊还可以满足小型开发者和个人的需求。 这样一来,这个同时面向最终用户和企业开发的平台,也就缺少了许多面向特定场景的功能。另外,以太坊基金会提出和开发的区块链模型是一种公共模型,而 Hyperledger 项目等项目提出的模型是私有的和需要许可的。 虽然只有时间才能证明以太坊、Hyperledger 和 R3 Corda 等平台中,哪一个平台会在现实场景中找到最多粉丝,但此类系统确实证明了以区块链为动力的未来主张背后的有效性。 --- 1. [Gabriel Nicholas, “Ethereum Is Coding’s New Wild West | WIRED,” Wired , 2017](https://www.wired.com/story/ethereum-is-codings-new-wild-west/). [↩](#fnref1) 2. [What is Ethereum? — Ethereum Homestead 0.1 documentation](http://www.ethdocs.org/en/latest/introduction/what-is-ethereum.html#ethereum-virtual-machine). [↩](#fnref2) 3. 
[Ethereum, a Virtual Currency, Enables Transactions That Rival Bitcoin’s – The New York Times](https://www.nytimes.com/2016/03/28/business/dealbook/ethereum-a-virtual-currency-enables-transactions-that-rival-bitcoins.html). [↩](#fnref3) 4. [Cryptocurrency Market Capitalizations | CoinMarketCap](https://coinmarketcap.com/). [↩](#fnref4) 5. [Introduction — Ethereum Homestead 0.1 documentation](http://www.ethdocs.org/en/latest/introduction/index.html). [↩](#fnref5) --- via: <https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/> 作者:[ostechnix](https://www.ostechnix.com/author/editor/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
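作为补充,为了更直观地理解上文“燃料”一节的计费思路,下面用一个简化的 Python 演算来说明“费用 = 消耗的燃料量 × 燃料单价”。其中 21000 是一笔普通转账大致消耗的燃料量,而燃料单价取 2 gwei 只是为了演示而假设的数值,实际行情会随网络负载波动:

```
# 以太坊交易费用的简化估算:费用 = 消耗的燃料量 × 燃料单价
GAS_USED = 21000            # 一笔普通转账大约消耗的燃料量
GAS_PRICE_GWEI = 2          # 假设的燃料单价(gwei),仅为演示
GWEI_PER_ETHER = 10 ** 9    # 1 以太币 = 10^9 gwei

fee_gwei = GAS_USED * GAS_PRICE_GWEI
fee_ether = fee_gwei / GWEI_PER_ETHER
print('费用约为 {} gwei,即 {:0.6f} 以太币'.format(fee_gwei, fee_ether))
# 输出:费用约为 42000 gwei,即 0.000042 以太币
```

可以看到,费用只取决于执行所需的计算量,而与转账金额无关,这正是上文“执行越复杂,费用就越高”的含义。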
403
Forbidden
null
11,405
IBM 将区块链引入红帽 OpenShift;为混合云客户添加了 Apache CouchDB
https://www.networkworld.com/article/3441362/ibm-brings-blockchain-to-red-hat-openshift-adds-apache-couchdb-for-hybrid-cloud-customers.html
2019-09-29T22:02:11
[ "IBM", "区块链" ]
https://linux.cn/article-11405-1.html
> > IBM 在其区块链平台上增加了红帽 OpenShift 支持,并将用于 Apache CouchDB 的 Kubernetes Operator 引入其混合云服务中。 > > > ![](/data/attachment/album/201909/29/220234zc6zk6i66nt66kk1.jpg) IBM 本周继续推进其红帽和开源集成工作,在其[区块链](https://www.networkworld.com/article/3330937/how-blockchain-will-transform-the-iot.html)平台上添加了红帽 OpenShift 支持,并为其[混合云](https://www.networkworld.com/article/3268448/what-is-hybrid-cloud-really-and-whats-the-best-strategy.html)服务产品引入了用于 Apache CouchDB 的 Kubernetes Operator。 在该公司的旗舰级企业 Kubernetes 平台 [红帽 OpenShift 上部署 IBM 区块链](https://www.ibm.com/blogs/blockchain/2019/09/ibm-blockchain-platform-meets-red-hat-openshift/) 的能力,意味着 IBM 区块链的开发人员将能够在本地、公共云或混合云架构中部署安全软件。 区块链是一个分布式数据库,维护着一个不断增长的记录列表,可以使用哈希技术对其进行验证,并且 IBM 区块链平台包括用于构建、操作、治理和发展受保护的区块链网络的工具。 IBM 表示,其区块链/OpenShift 组合面向的目标客户是这样的公司:希望在自己的基础设施上保留区块链分类帐副本并运行工作负载,以实现安全性、降低风险或满足合规要求;需要将数据存储在特定位置以满足数据驻留要求;需要在多个云或混合云架构中部署区块链组件。 自 7 月份完成对红帽的收购以来,IBM 一直在围绕红帽基于 Kubernetes 的 OpenShift 容器平台构建云开发生态系统。最近,这位蓝色巨人将其[新 z15 大型机与 IBM 的红帽](https://www.networkworld.com/article/3438542/ibm-z15-mainframe-amps-up-cloud-security-features.html)技术融合在一起,称它将为红帽 OpenShift 容器平台提供 IBM z/OS 云代理。该产品将通过连接到 Kubernetes 容器为用户提供 z/OS 计算资源的直接自助访问。 IBM 表示,打算在 IBM z 系列和 LinuxONE 产品上向 Linux 提供 IBM [Cloud Pak 产品](https://www.networkworld.com/article/3429596/ibm-fuses-its-software-with-red-hats-to-launch-hybrid-cloud-juggernaut.html)。Cloud Paks 是由 OpenShift 与 100 多种其他 IBM 软件产品组成的捆绑包。LinuxONE 是 IBM 专为支持 Linux 环境而设计的非常成功的大型机系统。 IBM 表示,愿景是使支持 OpenShift 的 IBM 软件成为客户用来转变其组织的基础构建组件。 IBM 表示:“我们的大多数客户都需要支持混合云工作负载以及可在任何地方运行这些工作负载的灵活性的解决方案,而用于红帽的 z/OS 云代理将成为我们在平台上启用云原生的关键。” 在相关新闻中,IBM 宣布了对开源 Apache CouchDB 的支持,即 [Apache CouchDB](https://www.ibm.com/cloud/learn/couchdb) 的 Kubernetes Operator,并且该 Operator 已通过认证,可与红帽 OpenShift 一起使用。Operator 可以自动部署、管理和维护 Apache CouchDB 部署。Apache CouchDB 是非关系型开源 NoSQL 数据库。 在最近的 [Forrester Wave 报告](https://reprints.forrester.com/#/assets/2/363/RES136481/reports)中,研究人员说:“企业喜欢 NoSQL 这样的能力,可以使用低成本服务器和可以存储、处理和访问任何类型的业务数据的灵活的无模式模型进行横向扩展。NoSQL 平台为企业基础设施专业人士提供了对数据存储和处理的更好控制,并提供了可加速应用程序部署的配置。尽管许多组织使用 NoSQL 来补充其关系数据库,但也有一些组织已开始用它来替换关系数据库,以获得更好的性能、更大的扩展规模并降低数据库成本。” 当前,IBM 云使用 Cloudant Db 服务作为其针对新的云原生应用程序的标准数据库。IBM 表示,对 CouchDB 的强大支持为用户提供了替代方案和后备选项。IBM 表示,能够将它们全部绑定到红帽 OpenShift Kubernetes 部署中,可以使客户在部署应用程序并在多个云环境中移动数据时,使用数据库原生的复制功能来维持对数据的低延迟访问。 “我们的客户正在转向基于容器化和[微服务](https://www.networkworld.com/article/3137250/what-you-need-to-know-about-microservices.html)的架构,以提高速度、敏捷性和运营能力。在云原生应用程序开发中,应用程序需要具有支持可伸缩性、可移植性和弹性的数据层。”IBM 院士兼云数据库副总裁 Adam Kocoloski 写道,“我们相信数据可移植性和 CouchDB 可以大大改善多云架构的功能,使客户能够构建真正可在私有云、公共云和边缘位置之间移动的解决方案。” --- via: <https://www.networkworld.com/article/3441362/ibm-brings-blockchain-to-red-hat-openshift-adds-apache-couchdb-for-hybrid-cloud-customers.html> 作者:[Michael Cooney](https://www.networkworld.com/author/Michael-Cooney/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
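作为补充,上文提到的 Apache CouchDB 是一个通过 HTTP 接口操作的开源 NoSQL 文档数据库。下面是一个极简的示意,假设本地 5984 端口(CouchDB 的默认端口)上运行着一个未开启认证的实例,数据库名 `demo` 只是随意取的例子:

```
# 创建一个名为 demo 的数据库
curl -X PUT http://127.0.0.1:5984/demo

# 向其中写入一个 JSON 文档
curl -X POST http://127.0.0.1:5984/demo \
     -H "Content-Type: application/json" \
     -d '{"type": "note", "text": "hello couchdb"}'
```

这种“一切皆 HTTP + JSON”的接口风格,也是它便于被打包成 Kubernetes Operator、在多云环境之间部署和复制的原因之一。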
301
Moved Permanently
null
11,406
用 Python 入门数据科学
https://opensource.com/article/19/9/get-started-data-science-python
2019-09-30T00:19:00
[ "Python", "数据科学" ]
https://linux.cn/article-11406-1.html
> > 使用 Python 开展数据科学为你提供了无限的潜力,使你能够以有意义和启发性的方式解析、解释和组织数据。 > > > ![](/data/attachment/album/201909/30/001853sfkm07j7wfp94dzp.jpg) 数据科学是计算领域一个令人兴奋的新领域,它围绕分析、可视化和关联以解释我们的计算机收集的有关世界的无限信息而建立。当然,称其为“新”领域有点不诚实,因为该学科是统计学、数据分析和普通而古老的科学观察派生而来的。 但是数据科学是这些学科的形式化分支,拥有自己的流程和工具,并且可以广泛应用于以前从未产生过大量不可管理数据的学科(例如视觉效果)。数据科学是一个新的机会,可以重新审视海洋学、气象学、地理学、制图学、生物学、医学和健康以及娱乐行业的数据,并更好地了解其中的模式、影响和因果关系。 像其他看似包罗万象的大型领域一样,知道从哪里开始探索数据科学可能会令人生畏。有很多资源可以帮助数据科学家使用自己喜欢的编程语言来实现其目标,其中包括最流行的编程语言之一:Python。使用 [Pandas](https://pandas.pydata.org/)、[Matplotlib](https://matplotlib.org/) 和 [Seaborn](https://seaborn.pydata.org/index.html) 这些库,你可以学习数据科学的基本工具集。 如果你对 Python 的基本用法不是很熟悉,请在继续之前先阅读我的 [Python 介绍](https://opensource.com/article/17/10/python-101)。 ### 创建 Python 虚拟环境 程序员有时会忘记在开发计算机上安装了哪些库,这可能导致他们提供了在自己计算机上可以运行,但由于缺少库而无法在所有其它电脑上运行的代码。Python 有一个系统旨在避免这种令人不快的意外:虚拟环境。虚拟环境会故意忽略你已安装的所有 Python 库,从而有效地迫使你一开始使用通常的 Python 进行开发。 为了用 `venv` 激活虚拟环境, 为你的环境取个名字 (我会用 `example`) 并且用下面的指令创建它: ``` $ python3 -m venv example ``` <ruby> 导入 <rt> source </rt></ruby>该环境的 `bin` 目录里的 `activate` 文件以激活它: ``` $ source ./example/bin/activate (example) $ ``` 你现在“位于”你的虚拟环境中。这是一个干净的状态,你可以在其中构建针对该问题的自定义解决方案,但是额外增加了需要有意识地安装依赖库的负担。 ### 安装 Pandas 和 NumPy 你必须在新环境中首先安装的库是 Pandas 和 NumPy。这些库在数据科学中很常见,因此你肯定要时不时安装它们。它们也不是你在数据科学中唯一需要的库,但是它们是一个好的开始。 Pandas 是使用 BSD 许可证的开源库,可轻松处理数据结构以进行分析。它依赖于 NumPy,这是一个提供多维数组、线性代数和傅立叶变换等等的科学库。使用 `pip3` 安装两者: ``` (example) $ pip3 install pandas ``` 安装 Pandas 还会安装 NumPy,因此你无需同时指定两者。一旦将它们安装到虚拟环境中,安装包就会被缓存,这样,当你再次安装它们时,就不必从互联网上下载它们。 这些是你现在仅需的库。接下来,你需要一些样本数据。 ### 生成样本数据集 数据科学都是关于数据的,幸运的是,科学、计算和政府组织可以提供许多免费和开放的数据集。虽然这些数据集是用于教育的重要资源,但它们具有比这个简单示例所需的数据更多的数据。你可以使用 Python 快速创建示例和可管理的数据集: ``` #!/usr/bin/env python3 import random def rgb(): NUMBER=random.randint(0,255)/255 return NUMBER FILE = open('sample.csv','w') FILE.write('"red","green","blue"') for COUNT in range(10): FILE.write('\n{:0.2f},{:0.2f},{:0.2f}'.format(rgb(),rgb(),rgb())) ``` 这将生成一个名为 `sample.csv` 的文件,该文件由随机生成的浮点数组成,这些浮点数在本示例中表示 RGB 值(在视觉效果中通常是数百个跟踪值)。你可以将 CSV 文件用作 Pandas 的数据源。 ### 使用 Pandas 提取数据 Pandas 的基本功能之一是可以提取数据和处理数据,而无需程序员编写仅用于解析输入的新函数。如果你习惯于自动执行此操作的应用程序,那么这似乎不是很特别,但请想象一下在 [LibreOffice](http://libreoffice.org) 中打开 CSV 并且必须编写公式以在每个逗号处拆分值。Pandas 可以让你免受此类低级操作的影响。以下是一些简单的代码,可用于提取和打印以逗号分隔的值的文件: ``` #!/usr/bin/env python3 from pandas import read_csv, DataFrame import pandas as pd FILE = open('sample.csv','r') DATAFRAME = pd.read_csv(FILE) print(DATAFRAME) ``` 一开始的几行导入 Pandas 库的组件。Pandas 库功能丰富,因此在寻找除本文中基本功能以外的功能时,你会经常参考它的文档。 接下来,通过打开你创建的 `sample.csv` 文件创建变量 `FILE`。Pandas 模块 `read_csv`(在第二行中导入)使用该变量来创建<ruby> 数据帧 <rt> dataframe </rt></ruby>。在 Pandas 中,数据帧是二维数组,通常可以认为是表格。数据放入数据帧中后,你可以按列和行进行操作,查询其范围,然后执行更多操作。目前,示例代码仅将该数据帧输出到终端。 运行代码。你的输出会和下面的输出有些许不同,因为这些数字都是随机生成的,但是格式都是一样的。 ``` (example) $ python3 ./parse.py red green blue 0 0.31 0.96 0.47 1 0.95 0.17 0.64 2 0.00 0.23 0.59 3 0.22 0.16 0.42 4 0.53 0.52 0.18 5 0.76 0.80 0.28 6 0.68 0.69 0.46 7 0.75 0.52 0.27 8 0.53 0.76 0.96 9 0.01 0.81 0.79 ``` 假设你只需要数据集中的红色值(`red`),你可以通过声明数据帧的列名称并有选择地仅打印你感兴趣的列来做到这一点: ``` from pandas import read_csv, DataFrame import pandas as pd FILE = open('sample.csv','r') DATAFRAME = pd.read_csv(FILE) # define columns DATAFRAME.columns = [ 'red','green','blue' ] print(DATAFRAME['red']) ``` 现在运行代码,你只会得到红色列: ``` (example) $ python3 ./parse.py 0 0.31 1 0.95 2 0.00 3 0.22 4 0.53 5 0.76 6 0.68 7 0.75 8 0.53 9 0.01 Name: red, dtype: float64 ``` 处理数据表是经常使用 Pandas 解析数据的好方法。从数据帧中选择数据的方法有很多,你尝试的次数越多就越习惯。 ### 可视化你的数据 很多人偏爱可视化信息已不是什么秘密,这是图表和图形成为与高层管理人员开会的主要内容的原因,也是“信息图”在新闻界如此流行的原因。数据科学家的工作之一是帮助其他人理解大量数据样本,并且有一些库可以帮助你完成这项任务。将 Pandas 
与可视化库结合使用可以对数据进行可视化解释。一个流行的可视化开源库是 [Seaborn](https://seaborn.pydata.org/),它基于开源的 [Matplotlib](https://matplotlib.org/)。 #### 安装 Seaborn 和 Matplotlib 你的 Python 虚拟环境还没有 Seaborn 和 Matplotlib,所以用 `pip3` 安装它们。安装 Seaborn 的时候,也会安装 Matplotlib 和很多其它的库。 ``` (example) $ pip3 install seaborn ``` 为了使 Matplotlib 显示图形,你还必须安装 [PyGObject](https://pygobject.readthedocs.io/en/latest/getting_started.html) 和 [Pycairo](https://pycairo.readthedocs.io/en/latest/)。这涉及到编译代码,只要你安装了必需的头文件和库,`pip3` 便可以为你执行此操作。你的 Python 虚拟环境不了解这些依赖库,因此你可以在环境内部或外部执行安装命令。 在 Fedora 和 CentOS 上: ``` (example) $ sudo dnf install -y gcc zlib-devel bzip2 bzip2-devel readline-devel \ sqlite sqlite-devel openssl-devel tk-devel git python3-cairo-devel \ cairo-gobject-devel gobject-introspection-devel ``` 在 Ubuntu 和 Debian 上: ``` (example) $ sudo apt install -y libgirepository1.0-dev build-essential \ libbz2-dev libreadline-dev libssl-dev zlib1g-dev libsqlite3-dev wget \ curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libcairo2-dev ``` 一旦它们安装好了,你可以安装 Matplotlib 需要的 GUI 组件。 ``` (example) $ pip3 install PyGObject pycairo ``` ### 用 Seaborn 和 Matplotlib 显示图形 在你最喜欢的文本编辑器新建一个叫 `vizualize.py` 的文件。要创建数据的线形图可视化,首先,你必须导入必要的 Python 模块 —— 先前代码示例中使用的 Pandas 模块: ``` #!/usr/bin/env python3 from pandas import read_csv, DataFrame import pandas as pd ``` 接下来,导入 Seaborn、Matplotlib 和 Matplotlib 的几个组件,以便你可以配置生成的图形: ``` import seaborn as sns import matplotlib import matplotlib.pyplot as plt from matplotlib import rcParams ``` Matplotlib 可以将其输出导出为多种格式,包括 PDF、SVG 和桌面上的 GUI 窗口。对于此示例,将输出发送到桌面很有意义,因此必须将 Matplotlib 后端设置为 `GTK3Agg`。如果你不使用 Linux,则可能需要使用 `TkAgg` 后端。 设置完 GUI 窗口以后,设置窗口大小和 Seaborn 预设样式: ``` matplotlib.use('GTK3Agg') rcParams['figure.figsize'] = 11,8 sns.set_style('darkgrid') ``` 现在,你的显示已配置完毕,代码已经很熟悉了。使用 Pandas 导入 `sample.csv` 文件,并定义数据帧的列: ``` FILE = open('sample.csv','r') DATAFRAME = pd.read_csv(FILE) DATAFRAME.columns = [ 'red','green','blue' ] ``` 有了适当格式的数据,你可以将其绘制在图形中。将每一列用作绘图的输入,然后使用 `plt.show()` 在 GUI 窗口中绘制图形。`plt.legend()` 参数将列标题与图形上的每一行关联(`loc` 参数将图例放置在图表之外而不是在图表上方): ``` for i in DATAFRAME.columns: DATAFRAME[i].plot() plt.legend(bbox_to_anchor=(1, 1), loc=2, borderaxespad=1) plt.show() ``` 运行代码以获得结果。 ![第一个数据可视化](/data/attachment/album/201909/30/001927sbk9jbqi11s2y2kq.png "First data visualization") 你的图形可以准确显示 CSV 文件中包含的所有信息:值在 Y 轴上,索引号在 X 轴上,并且图形中的线也被标识出来了,以便你知道它们代表什么。然而,由于此代码正在跟踪颜色值(至少是假装),所以线条的颜色不仅不直观,而且违反直觉。如果你永远不需要分析颜色数据,则可能永远不会遇到此问题,但是你一定会遇到类似的问题。在可视化数据时,你必须考虑呈现数据的最佳方法,以防止观看者从你呈现的内容中推断出虚假信息。 为了解决此问题(并展示一些可用的自定义设置),以下代码为每条绘制的线分配了特定的颜色: ``` import matplotlib from pandas import read_csv, DataFrame import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from matplotlib import rcParams matplotlib.use('GTK3Agg') rcParams['figure.figsize'] = 11,8 sns.set_style('whitegrid') FILE = open('sample.csv','r') DATAFRAME = pd.read_csv(FILE) DATAFRAME.columns = [ 'red','green','blue' ] plt.plot(DATAFRAME['red'],'r-') plt.plot(DATAFRAME['green'],'g-') plt.plot(DATAFRAME['blue'],'b-') plt.plot(DATAFRAME['red'],'ro') plt.plot(DATAFRAME['green'],'go') plt.plot(DATAFRAME['blue'],'bo') plt.show() ``` 这使用特殊的 Matplotlib 表示法为每列创建两个图。每列的初始图分配有一种颜色(红色为 `r`,绿色为 `g`,蓝色为 `b`)。这些是内置的 Matplotlib 设置。 `-` 表示实线(双破折号,例如 `r--`,将创建虚线)。为每个具有相同颜色的列创建第二个图,但是使用 `o` 表示点或节点。为了演示内置的 Seaborn 主题,请将 `sns.set_style` 的值更改为 `whitegrid`。 ![改进的数据可视化](/data/attachment/album/201909/30/001932q8nq81e4pq8nefqg.png "Improved data visualization") ### 停用你的虚拟环境 探索完 Pandas 和绘图后,可以使用 `deactivate` 命令停用 Python 虚拟环境: ``` (example) $ deactivate $ ``` 
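顺带一提,在停用环境之前,你还可以用 `pip3 freeze` 把当前环境中已安装库的清单导出到文件里,方便以后重建环境(`requirements.txt` 这个文件名只是约定俗成的习惯):

```
(example) $ pip3 freeze > requirements.txt
```

之后在任何一个新的虚拟环境中运行 `pip3 install -r requirements.txt`,即可一次性重装同样的库。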
当你想重新使用它时,只需像在本文开始时一样重新激活它即可。重新激活虚拟环境时,你必须重新安装模块,但是它们是从缓存安装的,而不是从互联网下载的,因此你不必联网。 ### 无尽的可能性 Pandas、Matplotlib、Seaborn 和数据科学的真正力量是无穷的潜力,使你能够以有意义和启发性的方式解析、解释和组织数据。下一步是使用你在本文中学到的新工具探索简单的数据集。Matplotlib 和 Seaborn 不仅有折线图,还有很多其他功能,因此,请尝试创建条形图或饼图或完全不一样的东西。 一旦你了解了你的工具集并对如何关联数据有了一些想法,则可能性是无限的。数据科学是寻找隐藏在数据中的故事的新方法。让开源成为你的媒介。 --- via: <https://opensource.com/article/19/9/get-started-data-science-python> 作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[GraveAccent](https://github.com/GraveAccent) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
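作为补充,上文提到 Matplotlib 可以将输出导出为 PDF、SVG 等多种格式。如果想把图表直接保存成文件而不是弹出 GUI 窗口(例如在没有图形界面的服务器上运行时),可以改用无界面的 `Agg` 后端并调用 `savefig()`。下面是基于前文示例改写的一个小示意:

```
import matplotlib
matplotlib.use('Agg')       # 无界面后端,不需要 GUI 窗口
import pandas as pd
import matplotlib.pyplot as plt

DATAFRAME = pd.read_csv('sample.csv')
DATAFRAME.columns = [ 'red','green','blue' ]

for i in DATAFRAME.columns:
    DATAFRAME[i].plot()
plt.legend(bbox_to_anchor=(1, 1), loc=2, borderaxespad=1)
plt.savefig('sample.svg')   # 把扩展名换成 .pdf 或 .png 即可得到相应格式
```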
200
OK
Data science is an exciting new field in computing that's built around analyzing, visualizing, correlating, and interpreting the boundless amounts of information our computers are collecting about the world. Of course, calling it a "new" field is a little disingenuous because the discipline is a derivative of statistics, data analysis, and plain old obsessive scientific observation. But data science is a formalized branch of these disciplines, with processes and tools all its own, and it can be broadly applied across disciplines (such as visual effects) that had never produced big dumps of unmanageable data before. Data science is a new opportunity to take a fresh look at data from oceanography, meteorology, geography, cartography, biology, medicine and health, and entertainment industries and gain a better understanding of patterns, influences, and causality. Like other big and seemingly all-inclusive fields, it can be intimidating to know where to start exploring data science. There are a lot of resources out there to help data scientists use their favorite programming languages to accomplish their goals, and that includes one of the most popular programming languages out there: Python. Using the [Pandas](https://pandas.pydata.org/), [Matplotlib](https://matplotlib.org/), and [Seaborn](https://seaborn.pydata.org/index.html) libraries, you can learn the basic toolset of data science. If you're not familiar with the basics of Python yet, read my [introduction to Python](https://opensource.com/article/17/10/python-101) before continuing. ## Creating a Python virtual environment Programmers sometimes forget which libraries they have installed on their development machine, and this can lead them to ship code that worked on their computer but fails on all others for lack of a library. Python has a system designed to avoid this manner of unpleasant surprise: the virtual environment. A virtual environment intentionally ignores all the Python libraries you have installed, effectively forcing you to begin development with nothing more than stock Python. To activate a virtual environment with **venv**, invent a name for your environment (I'll use **example**) and create it with: `$ python3 -m venv example` Source the **activate** file in the environment's **bin** directory to activate it: ``` $ source ./example/bin/activate (example) $ ``` You are now "in" your virtual environment, a clean slate where you can build custom solutions to problems—with the added burden of consciously needing to install required libraries. ## Installing Pandas and NumPy The first libraries you must install in your new environment are Pandas and NumPy. These libraries are common in data science, so this won't be the last time you'll install them. They're also not the only libraries you'll ever need in data science, but they're a good start. Pandas is an open source, BSD-licensed library that makes it easy to process data structures for analysis. It depends on NumPy, a scientific library that provides multi-dimensional arrays, linear algebra, Fourier transforms, and much more. Install both using **pip3**: `(example) $ pip3 install pandas` Installing Pandas also installs NumPy, so you don't need to specify both. Once you have installed them to your virtual environment once, the installation packages are cached so that when you install them again, you don't have to download them from the internet. Those are the only libraries you need for now. Next, you need some sample data. 
## Generating a sample dataset Data science is all about data, and luckily there are lots of free and open datasets available from scientific, computing, and government organizations. While these datasets are a great resource for education, they have a lot more data than necessary for this simple example. You can create a sample and manageable dataset quickly with Python: ``` #!/usr/bin/env python3 import random def rgb(): NUMBER=random.randint(0,255)/255 return NUMBER FILE = open('sample.csv','w') FILE.write('"red","green","blue"') for COUNT in range(10): FILE.write('\n{:0.2f},{:0.2f},{:0.2f}'.format(rgb(),rgb(),rgb())) ``` This produces a file called **sample.csv**, consisting of randomly generated floats representing, in this example, RGB values (a commonly tracked value, among hundreds, in visual effects). You can use a CSV file as a data source for Pandas. ## Ingesting data with Pandas One of Pandas' basic features is its ability to ingest data and process it without the programmer writing new functions just to parse input. If you're used to applications that do that automatically, this might not seem like it's very special—but imagine opening a CSV in [LibreOffice](http://libreoffice.org) and having to write formulas to split the values at each comma. Pandas shields you from low-level operations like that. Here's some simple code to ingest and print out a file of comma-separated values: ``` #!/usr/bin/env python3 from pandas import read_csv, DataFrame import pandas as pd FILE = open('sample.csv','r') DATAFRAME = pd.read_csv(FILE) print(DATAFRAME) ``` The first few lines import components of the Pandas library. The Pandas library is extensive, so you'll refer to its documentation frequently when looking for functions beyond the basic ones in this article. Next, a variable **f** is created by opening the **sample.csv** file you created. That variable is used by the Pandas module **read_csv** (imported in the second line) to create a *dataframe*. In Pandas, a dataframe is a two-dimensional array, commonly thought of as a table. Once your data is in a dataframe, you can manipulate it by column and row, query it for ranges, and do a lot more. The sample code, for now, just prints the dataframe to the terminal. Run the code. Your output will differ slightly from this sample output because the numbers are randomly generated, but the format is the same: ``` (example) $ python3 ./parse.py red green blue 0 0.31 0.96 0.47 1 0.95 0.17 0.64 2 0.00 0.23 0.59 3 0.22 0.16 0.42 4 0.53 0.52 0.18 5 0.76 0.80 0.28 6 0.68 0.69 0.46 7 0.75 0.52 0.27 8 0.53 0.76 0.96 9 0.01 0.81 0.79 ``` Assume you need only the red values from your dataset. You can do this by declaring your dataframe's column names and selectively printing only the column you're interested in: ``` from pandas import read_csv, DataFrame import pandas as pd FILE = open('sample.csv','r') DATAFRAME = pd.read_csv(FILE) # define columns DATAFRAME.columns = [ 'red','green','blue' ] print(DATAFRAME['red']) ``` Run the code now, and you get just the red column: ``` (example) $ python3 ./parse.py 0 0.31 1 0.95 2 0.00 3 0.22 4 0.53 5 0.76 6 0.68 7 0.75 8 0.53 9 0.01 Name: red, dtype: float64 ``` Manipulating tables of data is a great way to get used to how data can be parsed with Pandas. There are many more ways to select data from a dataframe, and the more you experiment, the more natural it becomes. ## Visualizing your data It's no secret that many humans prefer to visualize information. 
It's the reason charts and graphs are staples of meetings with upper management and why "infographics" are popular in the news business. Part of a data scientist's job is to help others understand large samples of data, and there are libraries to help with this task. Combining Pandas with a visualization library can produce visual interpretations of your data. One popular open source library for visualization is [Seaborn](https://seaborn.pydata.org/), which is based on the open source [Matplotlib](https://matplotlib.org/). ### Installing Seaborn and Matplotlib Your Python virtual environment doesn't yet have Seaborn and Matplotlib, so install them with pip3. Seaborn also installs Matplotlib along with many other libraries: `(example) $ pip3 install seaborn` For Matplotlib to display graphics, you must also install [PyGObject](https://pygobject.readthedocs.io/en/latest/getting_started.html) and [Pycairo](https://pycairo.readthedocs.io/en/latest/). This involves compiling code, which pip3 can do for you as long as you have the necessary header files and libraries installed. Your Python virtual environment has no awareness of these support libraries, so you can execute the installation command inside or outside the environment. On Fedora and CentOS: ``` (example) $ sudo dnf install -y gcc zlib-devel bzip2 bzip2-devel readline-devel \ sqlite sqlite-devel openssl-devel tk-devel git python3-cairo-devel \ cairo-gobject-devel gobject-introspection-devel ``` On Ubuntu and Debian: ``` (example) $ sudo apt install -y libgirepository1.0-dev build-essential \ libbz2-dev libreadline-dev libssl-dev zlib1g-dev libsqlite3-dev wget \ curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libcairo2-dev ``` Once they are installed, you can install the GUI components needed by Matplotlib: `(example) $ pip3 install PyGObject pycairo` ## Displaying a graph with Seaborn and Matplotlib Open a file called **vizualize.py** in your favorite text editor. To create a line graph visualization of your data, first, you must import the necessary Python modules: the Pandas modules you used in the previous code examples: ``` #!/usr/bin/env python3 from pandas import read_csv, DataFrame import pandas as pd ``` Next, import Seaborn, Matplotlib, and several components of Matplotlib so you can configure the graphics you produce: ``` import seaborn as sns import matplotlib import matplotlib.pyplot as plt from matplotlib import rcParams ``` Matplotlib can export its output to many formats, including PDF, SVG, or just a GUI window on your desktop. For this example, it makes sense to send your output to the desktop, so you must set the Matplotlib backend to GTK3Agg. If you're not using Linux, you may need to use the TkAgg backend instead. After setting the backend for the GUI window, set the size of the window and the Seaborn preset style: ``` matplotlib.use('GTK3Agg') rcParams['figure.figsize'] = 11,8 sns.set_style('darkgrid') ``` Now that your display is configured, the code is familiar. Ingest your **sample.csv** file with Pandas, and define the columns of your dataframe: ``` FILE = open('sample.csv','r') DATAFRAME = pd.read_csv(FILE) DATAFRAME.columns = [ 'red','green','blue' ] ``` With the data in a useful format, you can plot it out in a graph. Use each column as input for a plot, then use **plt.show()** to draw the graph in a GUI window. 
The **plt.legend()** parameter associates the column header with each line on your graph (the **loc** parameter places the legend outside the chart rather than over it): ``` for i in DATAFRAME.columns: DATAFRAME[i].plot() plt.legend(bbox_to_anchor=(1, 1), loc=2, borderaxespad=1) plt.show() ``` Run the code to display the results. ![First data visualization First data visualization](https://opensource.com/sites/default/files/uploads/seaborn-matplotlib-graph_0.png) Your graph accurately displays all the information contained in your CSV file: values are on the Y-axis, index numbers are on the X-axis, and the lines of the graph are identified so that you know what they represent. However, since this code is tracking color values (at least, it's pretending to), the colors of the lines are not just non-intuitive, but counterintuitive. If you never need to analyze color data, you may never run into this problem, but you're sure to run into something analogous. When visualizing data, you must consider the best way to present it to prevent the viewer from extrapolating false information from what you're presenting. To fix this problem (and show off some of the customization available), the following code assigns each plotted line a specific color: ``` import matplotlib from pandas import read_csv, DataFrame import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from matplotlib import rcParams matplotlib.use('GTK3Agg') rcParams['figure.figsize'] = 11,8 sns.set_style('whitegrid') FILE = open('sample.csv','r') DATAFRAME = pd.read_csv(FILE) DATAFRAME.columns = [ 'red','green','blue' ] plt.plot(DATAFRAME['red'],'r-') plt.plot(DATAFRAME['green'],'g-') plt.plot(DATAFRAME['blue'],'b-') plt.plot(DATAFRAME['red'],'ro') plt.plot(DATAFRAME['green'],'go') plt.plot(DATAFRAME['blue'],'bo') plt.show() ``` This uses special Matplotlib notation to create two plots per column. The initial plot of each column is assigned a color (**r** for red, **g** for green, and **b** for blue). These are built-in Matplotlib settings. The **-** notation indicates a solid line (a double dash, such as **r--**, creates a dashed line). A second plot is created for each column with the same colors but using **o** to denote dots or nodes. To demonstrate built-in Seaborn themes, change the value of **sns.set_style** to **whitegrid**. ![Improved data visualization Improved data visualization](https://opensource.com/sites/default/files/uploads/seaborn-matplotlib-graph_1.png) ## Deactivating your virtual environment When you're finished exploring Pandas and plotting, you can deactivate your Python virtual environment with the **deactivate** command: ``` (example) $ deactivate $ ``` When you want to get back to it, just reactivate it as you did at the start of this article. You'll have to reinstall your modules when you reactivate your virtual environment, but they'll be installed from cache rather than downloaded from the internet, so you don't have to be online. ## Endless possibilities The true power of Pandas, Matplotlib, Seaborn, and data science is the endless potential for you to parse, interpret, and structure data in a meaningful and enlightening way. Your next step is to explore simple datasets with the new tools you've learned in this article. There's a lot more to Matplotlib and Seaborn than just line graphs, so try creating a bar graph or a pie chart or something else entirely. The possibilities are limitless once you understand your toolset and have some idea of how to correlate your data. 
Data science is a new way to find stories hidden within data; let open source be your medium.
11,409
使用 Terminator 在一个窗口中运行多个终端
https://www.networkworld.com/article/3436784/how-to-use-terminator-on-linux-to-run-multiple-terminals-in-one-window.html
2019-09-30T23:38:00
[ "终端", "Terminator" ]
https://linux.cn/article-11409-1.html
![](/data/attachment/album/201909/30/233732j9jjx3xxuujopiuu.jpg) > > Terminator 为在单窗口中运行多个 GNOME 终端提供了一个选择,让你可以灵活地调整工作空间来适应你的需求。 > > > ![](/data/attachment/album/201909/30/233819d9hn3n4l25dnng59.jpg) 如果你曾经希望可以排列多个终端并将它们组织在一个窗口中,那么我们可能会给你带来一个好消息。 Linux 的 Terminator 可以为你做到这一点。没有问题! ### 分割窗口 Terminator 初次打开时看起来就像一个普通的单窗口终端。但是,一旦在该窗口中单击鼠标,它将弹出一个选项,让你可以灵活地进行更改。你可以选择“水平分割”或“垂直分割”,将你当前所在的窗口分为两个较小的窗口。实际上,菜单旁会有小的分割结果图示(类似于 `=` 和 `||`),你可以根据需要重复拆分窗口。当然,如果你将整个窗口分为六个、九个甚至更多的小窗口,那么你可能会发现它们太小而无法有效使用。 使用 ASCII 艺术来说明分割窗口的过程,你可能会看到类似以下的样子: ``` +-------------------+ +-------------------+ +-------------------+ | | | | | | | | | | | | | | ==> |-------------------| ==> |-------------------| | | | | | | | | | | | | | | +-------------------+ +-------------------+ +-------------------+ 原始终端 水平分割 垂直分割 ``` 另一种拆分窗口的方法是使用控制键组合,例如,使用 `Ctrl+Shift+e` 垂直分割窗口,使用 `Ctrl+Shift+o`(“o” 表示“打开”)水平分割窗口。 在 Terminator 分割完成后,你可以点击任意窗口使用,并根据工作需求在窗口间移动。 ### 最大化窗口 如果你想暂时忽略其他窗口而只关注其中一个,你可以单击该窗口,然后从菜单中选择“最大化”选项。接着该窗口会撑满所有空间。再次单击并选择“还原所有终端”可以返回到多窗口显示。使用 `Ctrl+Shift+x` 将在正常和最大化设置之间切换。 窗口标签上的窗口大小指示(例如 80x15)显示了每行的字符数以及每个窗口的行数。 ### 关闭窗口 要关闭任何窗口,请打开 Terminator 菜单,然后选择“关闭”。其他窗口将自行调整占用空间,直到你关闭最后一个窗口。 ### 保存你的自定义设置 将窗口分为多个部分后,将自定义的 Terminator 设置设为默认配置非常容易。从弹出菜单中选择“首选项”,然后从打开的窗口顶部的选项卡中选择“布局”。接着你应该看到列出了“新布局”。只需单击底部的“保存”,然后单击右下角的“关闭”。Terminator 会将你的设置保存在 `~/.config/terminator/config` 中,之后每次启动时都会使用该文件。 你也可以通过使用鼠标拉伸来扩大整个窗口。再说一次,如果要保留更改,请从菜单中依次选择“首选项”、“布局”,接着选择“保存”和“关闭”。 ### 在保存的配置之间进行选择 如果愿意,你可以通过维护多个配置文件来设置多种 Terminator 窗口布局,重命名每个配置文件(如 `config-1`、`config-2`),接着在你想使用它时将它移动到 `~/.config/terminator/config`。下面是一个执行此类任务的示例脚本,它让你在 3 个预配置的窗口布局之间进行选择。 ``` #!/bin/bash PS3='Terminator options: ' options=("Split 1" "Split 2" "Split 3" "Quit") select opt in "${options[@]}" do case $opt in "Split 1") config=config-1 break ;; "Split 2") config=config-2 break ;; "Split 3") config=config-3 break ;; *) exit ;; esac done cd ~/.config/terminator cp config config- cp $config config cd terminator & ``` 如果有用的话,你可以给选项一个比 `config-1` 更有意义的名称。 ### 总结 Terminator 是设置多窗口处理相关任务的不错选择。如果你从未使用过它,那么可能需要先使用 `sudo apt install terminator` 或 `sudo yum install -y terminator` 之类的命令进行安装。 希望你喜欢使用 Terminator。还有,如另一个同名角色所说,“我会回来的!” --- via: <https://www.networkworld.com/article/3436784/how-to-use-terminator-on-linux-to-run-multiple-terminals-in-one-window.html> 作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
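作为补充:除了上面那个手工替换 `config` 文件的脚本之外,较新版本的 Terminator 本身也支持按名称直接加载已保存的布局。假设你在“首选项”的“布局”选项卡中把布局保存为 `work` 这样的名字(名字可以随意取),就可以这样启动:

```
$ terminator -l work
```

`-l`(即 `--layout`)会让 Terminator 直接以该布局打开,免去了来回复制配置文件的麻烦;具体可用 `terminator --help` 确认你的版本是否支持该选项。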
301
Moved Permanently
null
11,411
2019 年最好的 Linux 发行版
https://itsfoss.com/best-linux-distributions/
2019-10-01T14:08:00
[ "发行版" ]
https://linux.cn/article-11411-1.html
> > 哪个是最好的 Linux 发行版呢?这个问题是没有明确的答案的。这就是为什么我们按各种分类汇编了这个最佳 Linux 列表的原因。 > > > ![](/data/attachment/album/201910/01/140807rlo3cz2l99tdoc6d.jpg) 有许多 Linux 发行版,我甚至想不出一个确切的数量,因为你会发现很多不同的 Linux 发行版。 其中有些只是另外一个的复制品,而有些往往是独一无二的。这虽然有点混乱——但这也是 Linux 的优点。 不用担心,尽管有成千上万的发行版,在这篇文章中,我已经列出了目前最好的 Linux 发行版。当然,这个列表是主观的。但是,在这里,我们试图对发行版进行分类——每个发行版本都有自己的特点的。 * 面向初学者的 Linux 用户的最佳发行版 * 最佳 Linux 服务器发行版 * 可以在旧计算机上运行的最佳 Linux 发行版 * 面向高级 Linux 用户的最佳发行版 * 最佳常青树 Linux 发行版 **注:** 该列表没有特定的排名顺序。 ### 面向初学者的最佳 Linux 发行版 在这个分类中,我们的目标是列出开箱即用的易用发行版。你不需要深度学习,你可以在安装后马上开始使用,不需要知道任何命令或技巧。 #### Ubuntu ![](/data/attachment/album/201910/01/140821dfhfh2sesc9abfma.jpg) Ubuntu 无疑是最流行的 Linux 发行版之一。你甚至可以发现它已经预装在很多笔记本电脑上了。 用户界面很容易适应。如果你愿意,你可以根据自己的要求轻松定制它的外观。无论哪种情况,你都可以选择安装一个主题。你可以从了解更多关于[如何在 Ubuntu 安装主题的](https://itsfoss.com/install-themes-ubuntu/)的信息来起步。 除了它本身提供的功能外,你会发现一个巨大的 Ubuntu 用户在线社区。因此,如果你有问题——可以去任何论坛(或版块)寻求帮助。如果你想直接寻找解决方案,你应该看看我们对 [Ubuntu](https://itsfoss.com/tag/ubuntu/) 的报道(我们有很多关于 Ubuntu 的教程和建议)。 * [Ubuntu](https://ubuntu.com/download/desktop) #### Linux Mint ![](/data/attachment/album/201910/01/140821t96h5bzhh5xvqvqv.jpg) Linux Mint Cinnamon 是另一个受初学者欢迎的 Linux 发行版。默认的 Cinnamon 桌面类似于 Windows XP,这就是为什么当 Windows XP 停止维护时许多用户选择它的原因。 Linux Mint 基于 Ubuntu,因此它具有适用于 Ubuntu 的所有应用程序。简单易用是它成为 Linux 新用户首选的原因。 * [Linux Mint](https://www.linuxmint.com/) #### elementary OS ![](/data/attachment/album/201910/01/140826viyz1ut1u116urrr.jpg) elementary OS 是我用过的最漂亮的 Linux 发行版之一。用户界面类似于苹果操作系统——所以如果你已经使用了苹果系统,则很容易适应。 该发行版基于 Ubuntu,致力于提供一个用户友好的 Linux 环境,该环境在考虑性能的同时尽可能美观。如果你选择安装 elementary OS,这份[在安装 elementary OS 后要做的 11 件事的清单](https://itsfoss.com/things-to-do-after-installing-elementary-os-5-juno/)会派上用场。 * [elementary OS](https://elementary.io/) #### MX Linux ![](/data/attachment/album/201910/01/140827firvujjzfpwjwpjb.jpg) 大约一年前,MX Linux 成为众人瞩目的焦点。现在(在发表这篇文章的时候),它是 [DistroWatch.com](https://distrowatch.com/) 上最受欢迎的 Linux 发行版。如果你还没有使用过它,那么当你开始使用它时,你会感到惊讶。 与 Ubuntu 不同,MX Linux 是一个基于 Debian 的日益流行的发行版,采用 Xfce 作为其桌面环境。除了无与伦比的稳定性之外,它还配备了许多图形用户界面工具,这使得任何习惯了 Windows/Mac 的用户易于使用它。 此外,软件包管理器还专门针对一键安装进行了量身定制。你甚至可以搜索 [Flatpak](https://flatpak.org/) 软件包并立即安装它(默认情况下,Flathub 在软件包管理器中是可用的来源之一)。 * [MX Linux](https://mxlinux.org/) #### Zorin OS ![](/data/attachment/album/201910/01/140827a8qpk88q5335pee8.png) Zorin OS 是又一个基于 Ubuntu 的发行版,它又是桌面上最漂亮、最直观的操作系统之一。尤其是在[Zorin OS 15 发布](https://itsfoss.com/zorin-os-15-release/)之后——我绝对会向没有任何 Linux 经验的用户推荐它。它也引入了许多基于图形用户界面的应用程序。 你也可以将其安装在旧电脑上,但是,请确保选择“Lite”版本。此外,你还有“Core”、“Education”和 “Ultimate”版本可以选择。你可以选择免费安装 Core 版,但是如果你想支持开发人员并帮助改进 Zorin,请考虑获得 Ultimate 版。 Zorin OS 是由两名爱尔兰的青少年创建的。你可以[在这里阅读他们的故事](https://itsfoss.com/zorin-os-interview/)。 * [Zorin OS](https://zorinos.com/) #### Pop!\_OS ![](/data/attachment/album/201910/01/140829wftaafiqjlm224ag.jpg) Sytem76 的 Pop!\_OS 是开发人员或计算机科学专业人员的理想选择。当然,不仅限于编码人员,如果你刚开始使用 Linux,这也是一个很好的选择。它基于 Ubuntu,但是其 UI 感觉更加直观和流畅。除了 UI 外,它还强制执行全盘加密。 你可以通过文章下面的评论看到,我们的许多读者似乎都喜欢(并坚持使用)它。如果你对此感到好奇,也应该查看一下我们关于 Phillip Prado 的 [Pop!\_OS 的动手实践](https://itsfoss.com/pop-os-linux-review/)的文章。 (LCTT 译注:这段推荐是原文后来补充的,因为原文下面很多人在评论推荐。) * [Pop!\_OS](https://system76.com/pop) #### 其他选择 [深度操作系统](https://www.deepin.org/en/) 和其他的 Ubuntu 变种(如 Kubuntu、Xubuntu)也是初学者的首选。如果你想寻求更多的选择,你可以看看。(LCTT 译注:我知道你们肯定对将深度操作系统列入其它不满意——这个锅归原作者。) 如果你想要挑战自己,你可以试试 Ubuntu 之外的 Fedora —— 但是一定要看看我们关于 [Ubuntu 和 Fedora 对比](https://itsfoss.com/ubuntu-vs-fedora/)的文章,从桌面的角度做出更好的选择。 ### 最好的服务器发行版 对于服务器来说,选择 Linux 发行版取决于稳定性、性能和企业级支持。如果你只是尝试,则可以尝试任何你想要的发行版。 但是,如果你要为 Web 服务器或任何重要的组件安装它,你应该看看我们的一些建议。 #### Ubuntu 服务器 根据你的需要,Ubuntu 为你的服务器提供了不同的选项。如果你正在寻找运行在 
AWS、Azure、谷歌云平台等平台上的优化解决方案,[Ubuntu Cloud](https://ubuntu.com/download/cloud) 是一个很好的选择。 无论是哪种情况,你都可以选择 Ubuntu 服务器包,并将其安装在你的服务器上。然而,Ubuntu 在云上部署时也是最受欢迎的 Linux 发行版(根据数字判断——[来源1](https://w3techs.com/technologies/details/os-linux/all/all)、[来源2](https://thecloudmarket.com/stats))。 请注意,除非你有特殊要求,我们建议你选择 LTS 版。 * [Ubuntu Server](https://ubuntu.com/download/server) #### 红帽企业版 Linux(RHEL) 红帽企业版 Linux(RHEL)是面向企业和组织的顶级 Linux 平台。如果我们按数字来看,红帽可能不是服务器领域最受欢迎的。但是,有相当一部分企业用户依赖于 RHEL(比如联想)。 从技术上讲,Fedora 和红帽企业版是相关联的。红帽打算支持的任何东西,在进入 RHEL 之前,都会先在 Fedora 上进行测试。我不是针对定制需求的服务器发行版方面的专家,所以你一定要查看他们的[官方文档](https://developers.redhat.com/products/rhel/docs-and-apis)以了解它是否适合你。 * [RHEL](https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux) #### SUSE Linux 企业服务器(SLES) ![](/data/attachment/album/201910/01/140830hz8txlzh1tmx2h6o.jpg) 别担心,不要把这和 OpenSUSE 混淆。一切都以一个共同的品牌 “SUSE” 命名 —— 但是 OpenSUSE 是一个开源发行版,目标是社区,并且由社区维护。 SUSE Linux 企业服务器(SLES)是基于云的服务器最受欢迎的解决方案之一。为了获得管理开源解决方案的优先支持和帮助,你必须选择订阅。 * [SLES](https://www.suse.com/products/server/) #### CentOS ![](/data/attachment/album/201910/01/140832v0fitxwxl6wwwx0f.png) 正如我提到的,对于 RHEL 你需要订阅。而 CentOS 更像是 RHEL 的社区版,因为它是从 RHEL 的源代码中派生出来的。而且,它是开源的,也是免费的。尽管与过去几年相比,使用 CentOS 的托管提供商数量明显减少,但这仍然是一个很好的选择。 CentOS 可能没有配备最新的软件包,但它被认为是最稳定的发行版之一,你可以在各种云平台上找到 CentOS 镜像。如果没有,你可以选择 CentOS 提供的自托管镜像。 * [CentOS](https://www.centos.org/) #### 其他选择 你也可以尝试 [Fedora Server](https://getfedora.org/en/server/) 或 [Debian](https://www.debian.org/distrib/) 作为上述发行版的替代品。 ### 旧电脑的最佳 Linux 发行版 如果你有一台旧电脑,或者你真的不需要升级你的系统,你仍然可以尝试一些最好的 Linux 发行版。 我们已经详细讨论了一些[最好的轻量级 Linux 发行版](https://itsfoss.com/lightweight-linux-beginners/)。在这里,我们将只提到那些真正突出的东西(以及一些新的补充)。 #### Puppy Linux ![](/data/attachment/album/201910/01/140834mu66hl2ah6is3eh6.jpg) Puppy Linux 实际上是最小的发行版本之一。刚开始使用 Linux 时,我的朋友建议我尝试一下 Puppy Linux,因为它可以轻松地在较旧的硬件配置上运行。 如果你想在你的旧电脑上享受一次爽快的体验,那就值得去看看。多年来,随着一些新的有用特性的增加,用户体验得到了改善。 * [Puppy Linux](http://puppylinux.com/) #### Solus Budgie ![](/data/attachment/album/201910/01/140837pb38q8zeo8bbomu5.jpg) 在最近的一个主要版本——[Solus 4 Fortitude](https://itsfoss.com/solus-4-release/) 之后,它是一个令人印象深刻的轻量级桌面操作系统。你可以选择像 GNOME 或 MATE 这样的桌面环境。然而,Solus Budgie 恰好是我的最爱之一,它是一款适合初学者的功能齐全的 Linux 发行版,同时对系统资源要求很少。 * [Solus](https://getsol.us/home/) #### Bodhi ![](/data/attachment/album/201910/01/140840s9ohzo1srgc4zvik.png) Bodhi Linux 构建于 Ubuntu 之上。然而,与 Ubuntu 不同,它在较旧的配置上运行良好。 这个发行版的主要亮点是它的 [Moksha 桌面](http://www.bodhilinux.com/moksha-desktop/)(这是 Enlightenment 17 桌面的延续)。用户体验直观且反应极快。即使我个人不用它,你也应该在你的旧系统上试一试。 * [Bodhi Linux](http://www.bodhilinux.com/) #### antiX ![](/data/attachment/album/201910/01/140841vp6sannk0qga403q.jpg) antiX 也是 MX Linux 的部分功臣,它是一个轻量级的 Linux 发行版,为新旧计算机量身定制。其用户界面并不令人印象深刻——但它可以像预期的那样工作。 它基于 Debian,可以作为一个现场版 CD 发行版使用,而不需要安装它。antiX 还提供现场版引导加载程序。与其他发行版相比,你可以保存设置,这样就不会在每次重新启动时丢失设置。不仅如此,你还可以通过其“持久保留”功能将更改保存到根目录中。 因此,如果你正在寻找一个可以在旧硬件上提供快速用户体验的现场版 USB 发行版,antiX 是一个不错的选择。 * [antiX](https://antixlinux.com/) #### Sparky Linux ![](/data/attachment/album/201910/01/140842svk7u7v2307gkn79.jpg) Sparky Linux 基于 Debian,是低端系统的理想 Linux 发行版。伴随着超快的用户体验,Sparky Linux 为不同的用户提供了几个特殊版本(或变种)。 例如,它提供了面向不同用户群体的稳定版本(及其变种)和滚动版本。Sparky Linux GameOver 版非常受游戏玩家欢迎,因为它包含了一堆预装的游戏。你可以查看我们的[最佳 Linux 游戏发行版](https://itsfoss.com/linux-gaming-distributions/) —— 如果你也想在你的系统上玩游戏。 #### 其他选择 你也可以尝试 [Linux Lite](https://www.linuxliteos.com/)、[Lubuntu](https://lubuntu.me/)、[Peppermint](https://peppermintos.com/) 等轻量级 Linux 发行版。 ### 面向高级用户的最佳 Linux 发行版 一旦你习惯了各种软件包管理器和命令来帮助你解决任何问题,你就可以开始找寻只为高级用户量身定制的 Linux 发行版。 当然,如果你是专业人士,你会有一套具体的要求。然而,如果你已经作为普通用户使用了一段时间——以下发行版值得一试。 #### Arch Linux 
![](/data/attachment/album/201910/01/140844k366n0gqob3488nq.jpg) Arch Linux 本身是一个简单而强大的发行版,具有陡峭的学习曲线。不像其它系统,你不会一次就把所有东西都预先安装好。你必须配置系统并根据需要添加软件包。 此外,在安装 Arch Linux 时,必须按照一组命令来进行(没有图形用户界面)。要了解更多信息,你可以按照我们关于[如何安装 Arch Linux](https://itsfoss.com/install-arch-linux/) 的指南进行操作。如果你要安装它,你还应该知道在[安装 Arch Linux 后需要做的一些基本事情](https://itsfoss.com/things-to-do-after-installing-arch-linux/)。这会帮助你快速入门。 除了多才多艺和简便性之外,值得一提的是 Arch Linux 背后的社区非常活跃。所以,如果你遇到问题,你不用担心。 * [Arch Linux](https://www.archlinux.org) #### Gentoo ![](/data/attachment/album/201910/01/140845teav5shle1p11gsl.png) 如果你知道如何编译源代码,Gentoo Linux 是你必须尝试的发行版。这也是一个轻量级的发行版,但是,你需要具备必要的技术知识才能使它发挥作用。 当然,[官方手册](https://wiki.gentoo.org/wiki/Handbook:Main_Page)提供了许多你需要知道的信息。但是,如果你不确定自己在做什么——你需要花很多时间去想如何充分利用它。 * [Gentoo Linux](https://www.gentoo.org) #### Slackware ![](/data/attachment/album/201910/01/140846xdiktbb9c1ied0bd.jpg) Slackware 是仍然重要的最古老的 Linux 发行版之一。如果你愿意编译或开发软件来为自己建立一个完美的环境 —— Slackware 是一个不错的选择。 如果你对一些最古老的 Linux 发行版感到好奇,我们有一篇关于[最早的 Linux 发行版](https://itsfoss.com/earliest-linux-distros/)可以去看看。 尽管使用它的用户/开发人员的数量已经显著减少,但对于高级用户来说,它仍然是一个极好的选择。此外,最近有个新闻是 [Slackware 有了一个 Patreon 捐赠页面](https://distrowatch.com/dwres.php?resource=showheadline&story=8743),我们希望 Slackware 继续作为最好的 Linux 发行版之一存在。 * [Slackware](http://www.slackware.com/) ### 最佳多用途 Linux 发行版 有些 Linux 发行版既可以作为初学者友好的桌面又可以作为高级操作系统的服务器。因此,我们考虑为这样的发行版编辑一个单独的部分。 如果你不同意我们的观点(或者有建议要补充),请在评论中告诉我们。我们认为,这对于每个用户都可以派上用场: #### Fedora ![](/data/attachment/album/201910/01/140849h700m0y79q7hynq2.png) Fedora 提供两个独立的版本:一个用于台式机/笔记本电脑(Fedora 工作站),另一个用于服务器(Fedora 服务器)。 因此,如果你正在寻找一款时髦的桌面操作系统,带有一点学习曲线但又对用户友好,那么 Fedora 是一个选择。无论是哪种情况,如果你正在为你的服务器寻找一个 Linux 操作系统,这也是一个不错的选择。 * [Fedora](https://getfedora.org/) #### Manjaro ![](/data/attachment/album/201910/01/140901p2qafahqqcyryzyr.jpg) Manjaro 基于 [Arch Linux](https://www.archlinux.org/)。不用担心,虽然 Arch Linux 是为高级用户量身定制的,但 Manjaro 让新手更容易上手。这是一个简单且对初学者友好的 Linux 发行版。用户界面足够好,并且内置了一系列有用的图形用户界面应用程序。 下载时,你可以为 Manjaro 选择[桌面环境](https://itsfoss.com/glossary/desktop-environment/)。就个人而言,我喜欢 Manjaro 的 KDE 桌面。 * [Manjaro Linux](https://manjaro.org/) #### Debian ![](/data/attachment/album/201910/01/140902u8ch14h4ubrj8ja1.png) 嗯,Ubuntu 是基于 Debian 的——所以它本身是一个非常好的发行版本。Debian 是台式机和服务器的理想选择。 这可能不是对初学者最友好的操作系统——但你可以通过阅读[官方文档](https://www.debian.org/releases/stable/installmanual)轻松开始。最新发布的 [Debian 10 Buster](https://itsfoss.com/debian-10-buster/) 引入了许多变化和必要的改进。所以,你必须试一试! ### 总结 总的来说,这些是我们推荐你去尝试的最好的 Linux 发行版。是的,还有许多其他的 Linux 发行版值得一提,但是发行版各有所长,最终取决于个人喜好,这种选择是主观的。 但是,我们也为 [Windows 用户](https://itsfoss.com/windows-like-linux-distributions/)、[黑客和渗透测试人员](https://itsfoss.com/linux-hacking-penetration-testing/)、[游戏玩家](https://itsfoss.com/linux-gaming-distributions/)、[程序员](https://itsfoss.com/best-linux-distributions-progammers/)和[偏重隐私者](https://itsfoss.com/privacy-focused-linux-distributions/)提供了单独的发行版列表。所以,如果你感兴趣的话请仔细阅读。 如果你认为我们遗漏了你最喜欢的 Linux 发行版,请在下面的评论中告诉我们你的想法,我们将更新这篇文章。 --- via: <https://itsfoss.com/best-linux-distributions/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[heguangzhi](https://github.com/heguangzhi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
![Warp Terminal](/assets/images/warp-terminal.webp) ![Warp Terminal](/assets/images/warp-terminal.webp) There are many Linux distributions. I can’t even think of coming up with an exact number because you would find loads of Linux distros that differ from one another in one way or the other. Some of them turn out to be a clone of one another, while some tend to be unique. So, it’s a mess—but that is the beauty of Linux. Fret not; even though thousands of distributions are around, in this article, I have compiled a list of the best Linux distributions available. Of course, the list can be subjective. But, here, we try to categorize the distros—so there’s something for everyone. **Best distribution for new Linux users****Best Linux distros for servers****Best Linux distros that can run on old computers****Best distributions for advanced Linux users****Best evergreen Linux distributions** *The list is in no particular order of ranking.*## Best Linux Distributions for Beginners In this category, we aim to list the distros which are easy to use out of the box. You do not need to dig deeper; you can start using it immediately after installation without knowing any commands or tips ### Ubuntu ![ubuntu 23.04 with gnome 44 screenshot](https://itsfoss.com/content/images/2024/01/ubuntu-23-04-gnome-44.jpg) Ubuntu is undoubtedly one of the most popular Linux distributions. You can even find it pre-installed on a lot of laptops available. The user interface is easy to get comfortable with. If you play around, you can easily customize the look of it as per your requirements. In either case, you can opt to install a theme as well. You can learn more about [how to install themes in Ubuntu](https://itsfoss.com/install-themes-ubuntu/) to get started. In addition to what it offers, you will find a vast online community of Ubuntu users. So, if you face an issue – head to any forums (or a subreddit) to ask for help. If you are looking for direct solutions in no time, you should check out our coverage on [Ubuntu](https://itsfoss.com/tag/ubuntu/) (where we have numerous tutorials and recommendations for Ubuntu). ### Linux Mint ![linux mint 21.2 screenshot](https://itsfoss.com/content/images/2024/01/linux-mint-21-2-home.jpg) Linux Mint Cinnamon is another popular Linux distribution suitable for beginners. The default Cinnamon desktop resembles the layout of the Windows system, and this is why it is one of the [best Windows-like Linux distros](https://itsfoss.com/windows-like-linux-distributions/). Not just limited to that, [Linux Mint also does a few things better than Ubuntu](https://itsfoss.com/linux-mint-vs-ubuntu/). Linux Mint is based on Ubuntu, and thus, it has all the applications available for Ubuntu. Its simplicity and ease of use make it a prominent choice for new Linux users. ### elementary OS ![elementary os 7 screenshot](https://itsfoss.com/content/images/2024/01/elementary-os-7.jpg) elementary OS is one of the [most beautiful Linux distros](https://itsfoss.com/beautiful-linux-distributions/) I’ve ever used. The UI resembles that of macOS, and things have evolved with newer updates. But, if you have already used a Mac-powered system, it’s easy to get comfortable with. This distribution is based on Ubuntu and focuses on delivering a user-friendly Linux environment that looks as pretty as possible while keeping performance in mind. ### MX Linux ![mx linux 23.2 screenshot](https://itsfoss.com/content/images/2024/01/MX_Linux_23.2.jpg) MX Linux is a quite popular Debian-based Linux distribution. 
It is known for its performance, simplicity, and a great out-of-the-box experience with its included MX tools. Unlike Ubuntu, MX Linux features Xfce as its desktop environment. In addition to its impeccable stability—it comes packed with numerous GUI tools (MX Tools), which makes it easier for any user comfortable with Windows/Mac originally. Furthermore, the package manager is perfectly tailored to facilitate one-click installations. You can even search for [Flatpak](https://flatpak.org/) packages and install them in no time (Flathub is available by default in the package manager as one of the sources). ### Zorin OS ![zorin os 17 screenshot](https://itsfoss.com/content/images/2024/01/ZorinOS_17_beta_1.jpg) Zorin OS is yet another Ubuntu-based distribution that happens to be one of the most good-looking and intuitive OS for desktop. A lot of GUI-based applications come baked in as well. It is almost a perfect desktop experience, technically and aesthetically. You can also install it on older PCs –, however, make sure to choose the “Lite” edition. In addition, you have “Core”, “Education” & “Ultimate” editions. You can decide to install the Core edition for free – but if you want to support the developers and help improve Zorin, consider getting the Ultimate edition. Zorin OS was started by two teenagers based in Ireland. You may [read their story here](https://itsfoss.com/zorin-os-interview/). ### Pop!_OS ![pop os 22.04 screenshot](https://itsfoss.com/content/images/2024/01/pop-os-22-04-lts-release.jpg) Pop!_OS by [Sytem76](https://system76.com/) is an excellent pick for developers and other working professionals. Of course, not just limited to coders—it is also an excellent choice for any user looking for a unique desktop experience. It is based on Ubuntu—but the UI feels much more intuitive and smooth. Not to forget, its [COSMIC Desktop environment](https://news.itsfoss.com/system76-cosmic-panel/) will soon be built using Rust from scratch, so you should keep an eye on it if you’re already impressed with its latest version. **Other Options** [Deepin](https://www.deepin.org/en/) and other flavors of Ubuntu (like Kubuntu, Xubuntu) could also be some preferred choices for beginners. You can take a look at them if you are keen to explore more options. If you want a challenge, you can indeed try Fedora over Ubuntu—but make sure to follow our article on [Ubuntu vs Fedora](https://itsfoss.com/ubuntu-vs-fedora/) to make a better decision from the desktop perspective. [Ubuntu or Fedora: Which One Should You Use and WhyUbuntu or Fedora? What’s the difference? Which is better? Which one should you use? Read this comparison of Ubuntu and Fedora.](https://itsfoss.com/ubuntu-vs-fedora/)![](https://itsfoss.com/content/images/wordpress/2019/07/ubuntu-vs-fedora.png) ![](https://itsfoss.com/content/images/wordpress/2019/07/ubuntu-vs-fedora.png) ## Best Linux Server Distributions For servers, the choice of a Linux distro comes down to stability, performance, and enterprise support. If you are experimenting, you can try any distro you want. But, if you are installing it for a web server or anything vital – you should take a look at some of our recommendations. ### Ubuntu Server Depending on where you want it, Ubuntu provides different options for your server. If you are looking for an optimized solution to run on AWS, Azure, Google Cloud Platform, etc., [Ubuntu Cloud](https://ubuntu.com/download/cloud) is the way to go. 
In either case, you can opt for Ubuntu Server packages and have it installed on your server. Nevertheless, Ubuntu is the most popular Linux distro when it comes to deployment on the cloud (judging by the numbers— [source ](https://thecloudmarket.com/stats)). Do note that we recommend you go for the LTS editions—unless you have specific requirements. ### Red Hat Enterprise Linux Red Hat Enterprise Linux is a top-notch Linux platform for businesses and organizations. If we go by the numbers, Red Hat may not be the most popular choice for servers. But, there’s a significant group of enterprise users who rely on RHEL (like Lenovo). Technically, Fedora and Red Hat are related. Whatever Red Hat supports—gets tested on Fedora before making it available for RHEL. I’m not an expert on server distributions for tailored requirements—so you should check out their [official documentation](https://developers.redhat.com/products/rhel/docs-and-apis) to know if it’s suitable for you. ### SUSE Linux Enterprise Server ![Suse Linux Enterprise](https://itsfoss.com/content/images/wordpress/2019/08/SUSE-Linux-Enterprise.jpg) Fret not, do not confuse this with OpenSUSE. Everything comes under a common brand “SUSE”—but OpenSUSE is an open-source distro targeted and yet, maintained by the community. SUSE Linux Enterprise Server is one of the most popular solutions for cloud-based servers. You will have to opt for a subscription to get priority support and assistance to manage your open-source solution. ### CentOS Stream (or CentOS Alternatives) ![Centos](https://itsfoss.com/content/images/wordpress/2019/08/centos.png) Yes, you can get [RHEL subscription for free up to 16 servers without technical support](https://news.itsfoss.com/rhel-no-cost-option/). But, CentOS was more like a community edition of RHEL because it was derived from the sources of Red Hat Enterprise Linux. But, now that [CentOS has been replaced by CentOS Stream](https://itsfoss.com/centos-stream-fiasco/), you can either try CentOS Stream, which is upstream to Red Hat Enterprise Linux, or look for [CentOS alternatives](https://itsfoss.com/rhel-based-server-distributions/). [Here are the Worthy Replacements of CentOS 8 for Your Production Linux ServersCentOS is one of the most popular server distributions in the world. It is an open source fork of Red Hat Enterprise Linux (RHEL) and provides the goodness of RHEL without the cost associated with RHEL. However, things have changed recently. Red Hat is converting the stable CentOS to a](https://itsfoss.com/rhel-based-server-distributions/)![](https://itsfoss.com/content/images/wordpress/2020/12/Replace-centos.png) ![](https://itsfoss.com/content/images/wordpress/2020/12/Replace-centos.png) Cent OS 7 will be supported until 2024 and CentOS 8 has already reached the early end of life in 2021. So, you can try it as an experiment before trying CentOS alternatives or CentOS Stream. [Fedora Server](https://getfedora.org/en/server/)or [Debian](https://www.debian.org/distrib/)as alternatives to some of the distros mentioned above. **Suggested Read 📖** [10 Best Linux Distributions For ProgrammersThere are hundreds of Linux distributions. A lot of them are customized for specific usage such as robotics, mathematics etc. Does this mean there are specific Linux distributions for programming as well? Yes and no. 
When Linux was originally created, it was mainly used by programmers at t…](https://itsfoss.com/best-linux-distributions-progammers/)![](https://itsfoss.com/content/images/wordpress/2017/03/best_distros_for_programming.jpg) ![](https://itsfoss.com/content/images/wordpress/2017/03/best_distros_for_programming.jpg) ## Best Linux Distributions for Older Computers If you have an old PC lying around or if you don’t really need to upgrade your system – you can still try some of the best Linux distros available. We’ve already discussed some of the [best lightweight Linux distributions](https://itsfoss.com/lightweight-linux-beginners/) in detail and [top Linux distributions to support 32-bit computers](https://itsfoss.com/32-bit-linux-distributions/). Here, we shall only mention what really stands out from that list (and some new additions). ### Puppy Linux ![Puppy Linux Bionic](https://itsfoss.com/content/images/wordpress/2019/08/puppy-linux-bionic.jpg) Puppy Linux is literally one of the smallest distributions there is. When I started to explore Linux, my friend recommended me to experiment with Puppy Linux because it can run on older hardware configurations with ease. It’s worth checking it out if you want a snappy experience on your good old PC. Over the years, the user experience has improved along with the addition of several new useful features. ### Solus Budgie ![solus](https://itsfoss.com/content/images/wordpress/2019/03/solus-4-featured-800x450.jpg) Solus offers the best of both worlds with its Budgie desktop edition. It is an impressive lightweight desktop OS. You can opt for desktop environments like GNOME or MATE as well. The Budgie desktop is its key offering with interesting GUI tweaks while offering a unique desktop experience. ### Bodhi ![bodhi linux](https://itsfoss.com/content/images/2024/01/Bodhi_Linux_7.0_1.jpg) Bodhi Linux is built on top of Ubuntu. However, unlike Ubuntu – it does run well on older configurations. The main highlight of this distro is its [Moksha Desktop](http://www.bodhilinux.com/moksha-desktop/) (which is a continuation of Enlightenment 17 desktop). The user experience is intuitive and screaming fast. Even though it’s not something for my personal use—you should give it a try on your older systems. ### antiX ![antiX Linux desktop](https://itsfoss.com/content/images/wordpress/2017/10/antix-linux-screenshot.jpg) antiX – which is also partially responsible for MX Linux, is a lightweight Linux distribution tailored for old and new computers. The UI isn’t impressive—but it works as expected. It is based on Debian and can be utilized as a live CD distribution without needing to install it. antiX also provides live bootloaders. In contrast to some other distros, you get to save the settings so that you don’t lose them with every reboot. Not just that, you can also save changes to the root directory with its “Live persistence” feature. So, if you are looking for a live-USB distro to provide a snappy user experience on old hardware – antiX is the way to go. ### Sparky Linux ![sparkylinux screenshot](https://itsfoss.com/content/images/2024/01/SparkyLinux_2024.01.png) Sparky Linux is based on Debian which turns out to be a perfect Linux distro for low-end systems. Along with a screaming fast experience, Sparky Linux offers several special editions (or varieties) for different users. For example, it provides a stable release (with varieties) and rolling releases specific to a group of users. 
Sparky Linux GameOver edition is quite popular for gamers because it includes a number of pre-installed games. You can check out our list of [best Linux Gaming distributions—if](https://itsfoss.com/linux-gaming-distributions/) you also want to play games on your system. [Linux Lite](https://www.linuxliteos.com/), [Lubuntu](https://lubuntu.me/), and [Peppermint](https://peppermintos.com/)as some of the lightweight Linux distributions. **Suggested Read 📖** [Best Linux Distributions for Hacking and Penetration TestingLooking for the best Linux distro to learn hacking? Whether you want to pursue a career in information security, are already working as a security professional, or are just interested in the field, a decent Linux distro that suits your purposes is a must. There are countless Linux distros…](https://itsfoss.com/linux-hacking-penetration-testing/)![](https://itsfoss.com/content/images/wordpress/2016/07/best-linux-distributions-hacking.jpg) ![](https://itsfoss.com/content/images/wordpress/2016/07/best-linux-distributions-hacking.jpg) ## Best Linux Distro for Advanced Users Once you get comfortable with the variety of package managers and commands to help troubleshoot your way to resolve any issue, you can start exploring Linux distros that are tailored for advanced users only. Of course, if you are a professional – you will have a set of specific requirements. However, if you’ve been using Linux for a while as a common user – these distros are worth checking out. ### Arch Linux ![arch linux with kde desktop screenshot](https://itsfoss.com/content/images/2024/01/arch-linux-with-kde-plasma-desktop-environment.jpg) Arch Linux is itself a simple yet powerful distribution with a huge learning curve. Unlike others, you won’t have everything pre-installed in one go. You have to configure the system and add packages as needed. Also, when installing Arch Linux, you will have to follow a set of commands (without GUI). To know more about it, you can follow our guide on [how to install Arch Linux](https://itsfoss.com/install-arch-linux/). If you are going to install it, you should also know about some of the [essential things to do after installing Arch Linux](https://itsfoss.com/things-to-do-after-installing-arch-linux/). It will help you get a jump start. In addition to all the versatility and simplicity, it’s worth mentioning that the community behind Arch Linux is very active. So, if you run into a problem, you don’t have to worry. ### Gentoo ![Gentoo Linux](https://itsfoss.com/content/images/wordpress/2019/08/gentoo-linux.png) If you know how to compile the source code, Gentoo Linux is a must-try for you. It is also a lightweight distribution – however, you need to have the required technical knowledge to make it work. Of course, the [official handbook](https://wiki.gentoo.org/wiki/Handbook:Main_Page) provides a lot of information that you need to know. But, if you aren’t sure what you’re doing – it will take a lot of your time to figure out how to make the most out of it. ### Slackware ![slackware with kde screenshot](https://itsfoss.com/content/images/2024/01/slackware-15-screenshot-1.png) Slackware is one of the oldest Linux distributions that still matters. If you are willing to compile or develop software to set up a perfect environment for yourself—Slackware is the way to go. If you’re curious about some of the oldest Linux distros, we have an article on the [earliest linux distributions](https://itsfoss.com/earliest-linux-distros/) – go check it out. 
Even though the number of users/developers utilizing it has significantly decreased, it is still a fantastic choice for advanced users. Not to forget, Slackware is still active.

## Best Multi-purpose Linux Distribution

There are certain Linux distros that you can utilize as a beginner-friendly / advanced OS for both desktops and servers. Hence, we thought of compiling a separate section for such distributions.

If you don’t agree with us (or have suggestions to add here) – feel free to let us know in the comments. Here’s what we think could come in handy for every user:

### Fedora

![fedora 39 screenshot](https://itsfoss.com/content/images/2024/01/Fedora_39_3.png)

Fedora offers two separate editions – one for desktops/laptops and the other for servers (Fedora Workstation and Fedora Server, respectively).

So, if you are looking for a snappy desktop OS – with a potential learning curve while still being user-friendly – Fedora is an option. In either case, if you are looking for a Linux OS for your server—that’s a good choice as well.

### Manjaro

![manjaro linux](https://itsfoss.com/content/images/2024/01/Manjaro_Linux_GNOME.png)

Manjaro is based on [Arch Linux](https://www.archlinux.org/). Fret not: while Arch Linux is tailored for advanced users, Manjaro makes it easy for newcomers. It is a simple and beginner-friendly Linux distro. The user interface is good enough and offers a bunch of useful GUI applications built-in.

You get options to choose a [desktop environment](https://itsfoss.com/what-is-desktop-environment/) for Manjaro while downloading it. Personally, I like the KDE desktop for Manjaro.

### Debian

![debian 12 screenshot](https://itsfoss.com/content/images/2024/01/debian-12.png)

Well, Ubuntu’s based on Debian—so it must be a darn good distribution itself. Debian is an ideal choice for both desktops and servers.

It may not be the most beginner-friendly OS – but you can easily get started by going through the [official documentation](https://www.debian.org/releases/stable/installmanual).

The recent release of [Debian 12 ‘Bookworm’](https://news.itsfoss.com/debian-12-release/) introduces many changes and necessary improvements. So, you must give it a try!

## Wrapping Up

Overall, these are the best Linux distributions that we recommend you try. Yes, there are a lot of other Linux distributions that deserve mention—but to each their own; depending on personal preferences, the choices will be subjective.

But we also have a separate list of distros for [Windows users](https://itsfoss.com/windows-like-linux-distributions/), [hackers and pen testers](https://itsfoss.com/linux-hacking-penetration-testing/), [gamers](https://itsfoss.com/linux-gaming-distributions/), [programmers](https://itsfoss.com/best-linux-distributions-progammers/), and [privacy buffs](https://itsfoss.com/privacy-focused-linux-distributions/). So, if that interests you—do go through them.

💬 *If you think we missed listing one of your favorites that deserves a spot among the best Linux distributions out there, let us know your thoughts in the comments below, and we’ll keep the article up-to-date accordingly.*
11,412
滚动版 CentOS Stream 和 Fedora 的关系
https://fedoramagazine.org/fedora-and-centos-stream/
2019-10-02T09:35:00
[ "RHEL", "CentOS", "Fedora" ]
https://linux.cn/article-11412-1.html
![](/data/attachment/album/201910/02/093523xigcv6mx7ximcmzc.jpg)

*一封来自 Fedora 项目负责人办公室的信件:*

(LCTT 译注:背景介绍 —— 红帽宣布与 CentOS 同步构建一个 CentOS Stream 滚动构建版。我们知道 Fedora 是红帽企业版 Linux [RHEL] 的上游,经过 Fedora 验证的特性才会放入 RHEL;而 RHEL 发布后,其源代码开放出来形成了 CentOS。而新的 CentOS Stream 则位于 Fedora 和 RHEL 之间,会滚动添加新的实验特性、更新的软件包等。)

嗨,大家好!你可能已经看到有关 [CentOS 项目变更](https://wiki.centos.org/Manuals/ReleaseNotes/CentOSStream)的[公告](http://redhat.com/en/blog/transforming-development-experience-within-centos)。(如果没有,请花一些时间阅读它,我等你看完回来!)现在你可能想知道:如果 CentOS 现在位于 RHEL 的上游,那么 Fedora 会发生什么?那不是 Fedora 在 Red Hat 生态系统中的角色吗?

首先,不用担心。整体有一些变化,但是一切都变得更好。

![](/data/attachment/album/201910/02/093405qx085a0pu535dui5.jpg)

如果你一直在关注 RHEL 领导者关于 Fedora、CentOS 和 RHEL 之间关系的会议讨论,那么你就听说过 “<ruby> <a href="https://www.youtube.com/watch?v=1JmgOkEznjw"> 彭罗斯三角 </a> <rt> Penrose Triangle </rt></ruby>”。形状就像 M. C. Escher 绘图中的形状:在现实生活中这是不可能的!

我们已经思考了一段时间:*也许*这种不可能的几何结构实际上并不是最好的模型。

一方面,设想中“最终的贡献会流回 Fedora、形成‘良性循环’”的流动从来没有真正实现过。真可惜,因为有一个庞大而强大的 CentOS 社区,并且有很多伟大的人在为此工作,而且 Fedora 社区也有很多重叠之处。我们因此错失了很多。

但是,这个缺口并不是唯一的空隙:在该项目与产品之间并没有真正一致的流程。到目前为止,该过程如下:

1. 在上一版 RHEL 发布之后的某个时间,红帽突然会比以往更加关注 Fedora。
2. 几个月后,红帽将分拆出一个内部开发的 RHEL 新版本。
3. 几个月后,它便被带到了世界各地,成为所有包括 CentOS 在内的下游发行版的来源。
4. 这些源持续向下更新,有时这些更新包括 Fedora 中的修补程序,但没有明确的路径。

这里的每个步骤都有其问题:间歇性注意力、闭门开发、盲目下发以及几乎没有持续的透明度。但是现在红帽和 CentOS 项目正在解决此问题,这对 Fedora 也是个好消息。

**Fedora 仍将是 RHEL 的[第一个](https://docs.fedoraproject.org/en-US/project/#_first)上游**。这是每个 RHEL 的来源,也是 RHEL 9 的来源。但是在 RHEL 分支之后,*CentOS* 将成为上游,以继续进行那些 RHEL 版本的工作。我喜欢称其为“中游”,但营销人员却不这样称呼,因此将其称为 “CentOS Stream”。

我们(Fedora、CentOS 和红帽)仍需要解决各种技术细节,但是我们的想法是这些分支将存在于同一软件包源存储库中。(目前的计划是制作一个 “src.centos.org”,它具有与 [src.fedoraproject.org](https://src.fedoraproject.org/) 相同数据的并行视图)。这项更改使公众可以看到已经发布的 RHEL 上正在进行的工作,并为开发人员和红帽合作伙伴在该级别进行协作提供了场所。

[CentOS SIG](https://wiki.centos.org/SpecialInterestGroup)(虚拟化、存储、配置管理等特殊兴趣小组)将在 Fedora 分支旁边的共享空间中开展工作。这将使项目之间的协作和共享更加容易,我希望我们甚至能够合并一些类似的 SIG,以直接协同工作。在有用的情况下,可以将 Fedora 软件包中的修补程序挑选到 CentOS “中游”中,反之亦然。

最终,Fedora、CentOS 和 RHEL 属于同一大型项目家族。这种新的、更自然的流程为协作提供了可能性,这些协作被锁定在人为(和超维度!)障碍的后面。我们现在可以一起做,我感到非常兴奋!

*—— Matthew Miller, Fedora 项目负责人*

---

via: <https://fedoramagazine.org/fedora-and-centos-stream/>

作者:[Matthew Miller](https://fedoramagazine.org/author/mattdm/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
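补充说明:文中提到,修补程序可以在 Fedora 分支与 CentOS “中游”分支之间互相挑选。下面是一个假设性的示意(仓库地址与分支名均为举例,并非官方最终确定的方案),大致展示这种工作流可能的样子:

```
# 假设性示意:在共享的软件包源码仓库中,把 Fedora 分支上的某个修复
# 挑选(cherry-pick)到 CentOS Stream 分支。仓库名与分支名仅为举例。
git clone https://src.fedoraproject.org/rpms/example-package.git
cd example-package
git checkout c8s              # 假设的 CentOS Stream 分支
git log --oneline master      # 在 Fedora 分支上找到要带过来的修复提交
git cherry-pick <commit-id>   # 把该修复挑选到“中游”分支
git push origin c8s
```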
200
OK
*From the desk of the Fedora Project Leader:*

Hi everyone! You may have seen the [announcement](http://redhat.com/en/blog/transforming-development-experience-within-centos) about [changes over at the CentOS Project](https://wiki.centos.org/Manuals/ReleaseNotes/CentOSStream). (If not, please go ahead and take a few minutes and read it — I’ll wait!)

And now you may be wondering: if CentOS is now upstream of RHEL, what happens to Fedora? Isn’t that Fedora’s role in the Red Hat ecosystem?

First, don’t worry. There are changes to the big picture, but they’re all for the better.

If you’ve been following the conference talks from Red Hat Enterprise Linux leadership about the relationship between Fedora, CentOS, and RHEL, you have heard about “the [Penrose Triangle](https://www.youtube.com/watch?v=1JmgOkEznjw)”. That’s a shape like something from an M. C. Escher drawing: it’s impossible in real life!

We’ve been thinking for a while that *maybe* impossible geometry is not actually the best model.

For one thing, the imagined flow where contributions at the end would flow back into Fedora and grow in a “virtuous cycle” never actually worked that way. That’s a shame, because there’s a huge, awesome CentOS community and many great people working on it — and there’s a lot of overlap with the Fedora community too. We’re missing out.

But that gap isn’t the only one: there’s not really been a consistent flow between the projects and product at all. So far, the process has gone like this:

- Some time after the previous RHEL release, Red Hat would suddenly turn more attention to Fedora than usual.
- A few months later, Red Hat would split off a new RHEL version, developed internally.
- After some months, that’d be put into the world, including all of the source — from which CentOS is built.
- Source drops continue for updates, and sometimes those updates include patches that were in Fedora — but there’s no visible connection.

Each step here has its problems: intermittent attention, closed-door development, blind drops, and little ongoing transparency. But now Red Hat and CentOS Project are fixing that, and that’s good news for Fedora, too.

**Fedora will remain the *first* upstream of RHEL.** It’s where every RHEL came from, and is where RHEL 9 will come from, too. But after RHEL branches off, *CentOS* will be upstream for ongoing work on those RHEL versions. I like to call it “the midstream”, but the marketing folks somehow don’t, so that’s going to be called “CentOS Stream”.

We — Fedora, CentOS, and Red Hat — still need to work out all of the technical details, but the idea is that these branches will live in the same package source repository. (The current plan is to make a “src.centos.org” with a parallel view of the same data as [src.fedoraproject.org](https://src.fedoraproject.org/)). This change gives public visibility into ongoing work on released RHEL, and a place for developers and Red Hat’s partners to collaborate at that level.

[CentOS SIGs](https://wiki.centos.org/SpecialInterestGroup) — the special interest groups for virtualization, storage, config management and so on — will do their work in shared space right next to Fedora branches. This will allow much easier collaboration and sharing between the projects, and I’m hoping we’ll even be able to merge some of our similar SIGs to work together directly. Fixes from Fedora packages can be cherry-picked into the CentOS “midstream” ones — and where useful, vice versa.
Ultimately, Fedora, CentOS, and RHEL are part of the same big project family. This new, more natural flow opens possibilities for collaboration which were locked behind artificial (and extra-dimensional!) barriers. I’m very excited for what we can now do together! *— Matthew Miller, Fedora Project Leader* ## David This is a good news, unite knowledge and developers to evolve together on the same road is very important. 3 distributions together will be more robust and versatile for the next versions. long life to both 3! ## Karl Schneider Stop me if you’ve heard this one: A bleeding edge mad scientist distro project, a super stable all powerful enterprise distro and it’s community driven continuously rolling beta version of itself walk in to a bar…… Does this mean Fedora project will be freed up a little to innovate and explore? Does this also mean more devs will move over to and contribute to CentOS Stream vs. the Fedora project? And if so would this even matter? ## John Can I say it can be compared to Debian unstable (Fedora), testing (Centos Stream) and stable (Red hat), the same distro. Great vision! ## svsv sarma IBM steps in and CentOS pulled in to enhance RHEL pet “Fedora Project”. Good news indeed as Fedora is becoming the SuperOS. ## Paul W. Frields @svsv: IBM has nothing to do with this event. Please check your assumptions. ## svsv sarma Noted. Thank you. ## zmks12 What would be stream – standalone development instance of CentOS? Just a repo CentOS is subscribed to? ## Creak So, if I understand correctly, in Debian terms: * Fedora = Debian Unstable * CentOS = Debian Testing * RHEL = Debian Stable (Except Fedora is not “unstable” per se, it’s just an analogy 😉 ) ## Eric I find that Fedora can very much be unstable. It crashes and burns when I try to use it with a Vega 56 GPU, for instance, whether I’m trying Fedora 30 or the 31 beta, and it is all thanks to the version of Mesa they are using. ## Creak I Interesting, I really don’t have this experience. I have an RX 480 and everything works out of the box and each version of mesa is a bit better or, at least, the same. ## r2 I think more like * Fedora = Debian Testing (maybe more stable than it?) * [ RHEL->CentOS ] + EPEL(Backport from Fedora) = Debian Stable + Backports ## Julen Rubio “CentOS Stream will be a rolling-release Linux distro that exists as a midstream between the upstream development in Fedora Linux and the downstream development for Red Hat Enterprise Linux (RHEL). ” [https://wiki.centos.org/Manuals/ReleaseNotes/CentOSStream]. Sounds good! Red Hat doesn’t forget Fedora. ## thrix So, when we can expect copr chroots for Centos Stream? 🙂 ## Andrea Is CentOS now basically Fedora LTS? ## Vakho OK.. -Fedora – Debian Stable 5x -CentOS – Debian Stable 10x -RHEL – Debian Stable 10000x ## Nicola Musatti So, if I understand correctly, CentOS Stream would still be tied to the current RHEL release, i.e. what just came out is really CentOS Stream 8, is that correct? How does CentOS Stream relates with EPEL? ## Dokter This is really interesting. I’m looking forward to see what this partnership will do. As much as I want a rolling distro with RPM and DNF I’m hesitant as I would at least want to run the latest Firefox, without too much hassle. Or maybe this will push for what might be known as Fedora Stream? 
## Sebastiaan Franken Uhm, Fedora already contains the latest Firefox version in it’s default install, and it’s kept up-to-date quite nicely with ‘upstream’ (as in: I read about a new Firefox version, I check my updates and so far it’s always been there). ## Dokter I know how Fedora works. I’ve used it since it was called Fedora Core 1. Maybe the wording is a bit off, but I’m talking about the Centos Stream repo, and even if you use EPEL. I might be missing something, but when I look at the EPEL repo I don’t find the latest Firefox. I don’t want to run Firefox 60, which is the latest in Centos. And even if I’ve used Linux since ’99 I want to avoid to do a personal solution. ## Creak I don’t know if it works on CentOS, but Flatpak might be the solution to your problem here. ## Eric Firefox 60 is probably the ESR version. Just install the regular version. ## olahaye74 Good to know that fedora and CentOS will remain in the ecosystem, though, despite the effort to make things more easy to understand, I think it won’t address things like docbook-utils-pdf is dropped while docbook-utils is kepst (this break my systemimager doc build). This will also not address all the dropped package in RHEL while they exists in fedora. (this also breaks another part of SystemImager) As SystemImager main developper (and also OSCAR Cluster main developper), I used to make builds for fedora rawhide to make sure that my code remains compatible with latests packages versions. Right now, the code builds for most debian, ubuntu, rhel-6 and 7, fedora up to 31 SuSE. What was my surprise to discover that centos-8 + epel-8 + elrepo-8 was lacking so much things like xmlstarlet, docbook2pdf, perl-Qt that port will be a very long task. Even more difficult to handle, /usr/bin/python doesn’t exists anymore in RHEL-8 and you have to fix all python code shebang. Unfortunately, this breaks portability with distrib that ships /usr/bin/phyton as python v2. Those decisions may have logic when thinking about fedora+rhel+centos ecosystem, but on developper side, it’s makes things really difficult in terms of portability. what shebang should I use to make sure all python v2 scripts runs from RHEL-6 to RHEL-8 or MacOS-X or ubuntu or any other OS with python2 installed as /usr/bin/python? What is the benefit of such change? The /usr/bin/python exists in fedora but not on RHEL-8. Where is the ecosystem? I must admit That I’m a little bit lost trying to find interrest of this ecosystem which make builds between rhel/centos/fedora inconsistent. ## Hugh It’s pretty clear that Python 2 is about to hit EoL. It’s past time to switch away from it. Especially with new deployments (RHEL 8). https://pythonclock.org/ It’s unfortunate that so many people have stuck with Python 2 until the bitter end. My discomfort with RHEL/CentOS is that components get very obsolete before the next RHEL release comes out. Ubuntu LTS seems to be more comfortable moving forward. But I really value the maintenance done by Red Hat. Flatpak might be the solution but I don’t really get it. ## Eric Nicholls IMHO, CentOS should join the Fedora project. It would be nice to have everything under one brand, just like Ubuntu. Together we’re Stronger. Fedora CoreOS Fedora SilverBlue -Fedora Fedora Stream (CentOS Stream) Fedora Enterprise (CentOS) etc… ## Gustavo Domínguez So, let me see if I got it, If both Fedora and CentOS are upstreams of Red Hat, that means Red Hat is at the bottom of the chain don’t it? 
CentOS and RHEL sort of switched places except CentOS also has its own Netflix Originals to put it some way. It does make sense to put RHEL at the bottom to receive all the revised stuff. Am I close? ## Hugh There are now two CentOS things: CentOS and CentOS Stream. I think that the flows go: Fedora => CentOS Stream // not 100% because RHEL and Fedora have different policies CentOS Stream => RHEL // process not clear to me RHEL => CentOS // as fast as CentOS can manage; as little change as possible/legal ## Matthew Miller Just like “Fedora” is the project, not the operating system, “CentOS Project” has two outputs: “CentOS Linux”, the traditional rebuild, and “CentOS Stream”, the new thing. ## Aaron Earl The way I understand the new paradigm is that Red Hat will do what it has been doing for major releases, which is distilling a new major release from some version of Fedora. For the incremental releases, however, it will look to CentOS to fill the upstream role that Fedora fills for major releases. So, RHEL has now has an upstream that will move between the two in a sequence like: Fedora, CentOS, CentOS, CentOS, . . . Fedora, CentOS, CentOS, CentOS, . . . My question is what happens to the version of CentOS that is compiled from RHEL sources, i.e., the one that exists now and is, essentially, a clone of RHEL? I have enjoyed knowing that CentOS has all the strengths of RHEL. I wonder if this new CentOS be some kind of beta that will, likely, not as stable as RHEL? ## Matthew Miller “Stable” is a loaded word with a lot of different meanings. The changes that will land in CentOS Stream are intended for the next RHEL minor release, and so should not be more drastic than one might expect from that kind of update. Note that CentOS Linux 8 as a traditional rebuild is still a thing.
11,414
CutiePi:正在开发中的基于树莓派的开源平板
https://itsfoss.com/cutiepi-open-source-tab/
2019-10-02T12:53:30
[ "树莓派" ]
https://linux.cn/article-11414-1.html
![](/data/attachment/album/201910/02/125301wkbvgz1n7zv7j55e.jpg)

CutiePi 是一款 8 英寸的构建在树莓派上的开源平板。正如他们在[树莓派论坛](https://www.raspberrypi.org/forums/viewtopic.php?t=247380)上宣布的,目前它还只是一台原型机。

在本文中,你将了解有关 CutiePi 的规格、价格和可用性的更多详细信息。

他们使用一款定制的计算模块(CM3)载板来制造平板。[官网](https://cutiepi.io/)提到使用定制 CM3 载板的目的是:

> 定制 CM3/CM3+ 载板是专为便携使用而设计,拥有增强的电源管理和锂聚合物电池电量监控功能。还可与指定的 HDMI 或 MIPI DSI 显示器配合使用。

因此,这使得该平板足够薄而且便携。

### CutiePi 规格

![CutiePi Board](/data/attachment/album/201910/02/125334rux1xsyli3gguffz.png)

我惊讶地了解到它有 8 英寸的 IPS LCD 显示屏,这首先就是个亮点。然而,你不会有一个真正高清的屏幕,因为官方宣称它的分辨率是 1280×800。

它还计划配备 4800 mAh 锂电池(原型机的电池为 5000 mAh)。嗯,对于平板来说,这不算坏。

连接性上包括支持 Wi-Fi 和蓝牙 4.0。此外,还有一个 USB Type-A 插口、6 个 GPIO 引脚和 microSD 卡插槽。

![CutiePi Specifications](/data/attachment/album/201910/02/125336o7cc7n7k87vfvfiz.jpg)

硬件与 [Raspbian OS](https://itsfoss.com/raspberry-pi-os-desktop/) 官方兼容,用户界面采用 [Qt](https://en.wikipedia.org/wiki/Qt_%28software%29) 构建,以获得快速直观的用户体验。此外,除了内置应用外,它还将通过 XWayland 支持 Raspbian PIXEL 应用。

### CutiePi 源码

你可以通过分析所用材料的清单来猜测此平板的定价。CutiePi 遵循 100% 的开源硬件设计。因此,如果你觉得好奇,可以查看它的 [GitHub 页面](https://github.com/cutiepi-io/cutiepi-board),了解有关硬件设计和内容的详细信息。

### CutiePi 价格、发布日期和可用性

CutiePi 计划在 8 月制作[设计验证测试](https://en.wikipedia.org/wiki/Engineering_validation_test#Design_verification_test)批次的 PCB。他们的目标是在 2019 年底推出最终产品。

官方预计发售价大约在 $150-$250 之间。这只是一个大致的范围,应当持保留态度。

显然,即使产品听上去挺有希望,但价格将是它成功的一个主要因素。

### 总结

CutiePi 并不是第一个使用[像树莓派这样的单板计算机](https://itsfoss.com/raspberry-pi-alternatives/)来制作平板的项目。我们有即将推出的 [PineTab](https://www.pine64.org/pinetab/),它基于 Pine64 单板电脑。Pine 还有一种笔记本电脑,名为 [Pinebook](https://itsfoss.com/pinebook-pro/)。

从原型来看,它确实是一个我们可以期望使用的产品。但是,预安装的应用和它将支持的应用可能会扭转局面。此外,考虑到价格估计,这听起来很有希望。

你觉得怎么样?让我们在下面的评论中知道你的想法,或者投个票。

---

via: <https://itsfoss.com/cutiepi-open-source-tab/>

作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
11,415
使用 rsync 复制大文件的一些误解
https://fedoramagazine.org/copying-large-files-with-rsync-and-some-misconceptions/
2019-10-02T13:40:00
[ "rsync" ]
https://linux.cn/article-11415-1.html
![](/data/attachment/album/201910/02/134003xeowu3qppvl6fz3j.jpg)

有一种观点认为,在 IT 行业工作的许多人经常从网络帖子里复制和粘贴。我们都干过,复制粘贴本身不是问题。问题是当我们在不理解它们的情况下这样干。

几年前,一个曾经在我团队中工作的朋友需要将虚拟机模板从站点 A 复制到站点 B。他们无法理解为什么复制的文件在站点 A 上为 10GB,但是在站点 B 上却变为 100GB。

这位朋友认为 `rsync` 是一个神奇的工具,应该仅“同步”文件本身。但是,我们大多数人所忘记的是了解 `rsync` 的真正含义、用法,以及我认为最重要的是它原本是用来做什么的。本文提供了有关 `rsync` 的更多信息,并解释了那件事中发生了什么。

### 关于 rsync

`rsync` 是由 Andrew Tridgell 和 Paul Mackerras 创建的工具,其动机是以下问题:

假设你有两个文件,`file_A` 和 `file_B`。你希望将 `file_B` 更新为与 `file_A` 相同。显而易见的方法是将 `file_A` 复制到 `file_B`。

现在,假设这两个文件位于通过慢速通信链接(例如,拨号 IP 链接)连接的两个不同的服务器上。如果 `file_A` 很大,将其复制到 `file_B` 将会很慢,有时甚至是不可能完成的。为了提高效率,你可以在发送前压缩 `file_A`,但这通常只会获得 2 到 4 倍的效率提升。

现在假设 `file_A` 和 `file_B` 非常相似,并且为了加快处理速度,你可以利用这种相似性。一种常见的方法是仅通过链接发送 `file_A` 和 `file_B` 之间的差异,然后使用这个差异列表在远程端重建文件。

问题在于,用于在两个文件之间创建一组差异的常规方法依赖于能够读取两个文件。因此,它们要求链接的一端预先提供两个文件。如果它们在同一台计算机上不是同时可用的,则无法使用这些算法。(一旦将文件复制过来,就不需要做对比差异了)。而这是 `rsync` 解决的问题。

`rsync` 算法有效地计算源文件的哪些部分与现有目标文件的部分匹配。这样,匹配的部分就不需要通过链接发送了;所需要的只是对目标文件部分的引用。只有源文件中不匹配的部分才需要发送。

然后,接收者可以使用对现有目标文件各个部分的引用和原始素材来构造源文件的副本。

另外,可以使用一系列常用压缩算法中的任何一种来压缩发送到接收器的数据,以进一步提高速度。

我们都知道,`rsync` 算法以一种漂亮的方式解决了这个问题。

在 `rsync` 的介绍之后,回到那件事!

### 问题 1:自动精简配置

有两件事可以帮助那个朋友了解正在发生的事情。

该文件在其他地方的大小变得越来越大的问题是由源系统上启用了<ruby> 自动精简配置 <rt> Thin Provisioning </rt></ruby>(TP)引起的,这是一种优化存储区域网络(SAN)或网络连接存储(NAS)中可用空间效率的方法。

由于启用了 TP,源文件只有 10GB,并且在不使用任何其他配置的情况下使用 `rsync` 进行传输时,目标位置将接收到全部 100GB 的大小。`rsync` 无法自动完成该(TP)操作,必须对其进行配置。

进行此工作的选项是 `-S`(或 `--sparse`),它告诉 `rsync` 有效地处理稀疏文件。它会按照它说的做!它只会发送该稀疏数据,因此源和目标将有一个 10GB 的文件。

### 问题 2:更新文件

当发送一个更新的文件时会出现第二个问题。现在目标仅接收 10GB 了,但始终传输的是整个文件(包含虚拟磁盘),即使只是在该虚拟磁盘上更改了一个配置文件。换句话说,只是该文件的一小部分发生了更改。

用于此传输的命令是:

```
rsync -avS vmdk_file syncuser@host1:/destination
```

同样,了解 `rsync` 的工作方式也将有助于解决此问题。

上面是关于 `rsync` 的最大误解。我们许多人认为 `rsync` 只会发送文件的增量更新,并且只会自动更新需要更新的内容。**但这不是 `rsync` 的默认行为**。

如手册页所述,`rsync` 的默认行为是在目标位置创建文件的新副本,并在传输完成后将其移动到正确的位置。

要更改 `rsync` 的默认行为,你必须设置以下标志,然后 `rsync` 将仅发送增量:

```
--inplace 原地更新目标文件
--partial 保留部分传输的文件
--append 附加数据到更短的文件
--progress 在传输时显示进度条
```

因此,可以确切地执行我那个朋友想要的功能的完整命令是:

```
rsync -av --partial --inplace --append --progress vmdk_file syncuser@host1:/destination
```

注意,出于两个原因,这里必须删除稀疏选项 `-S`。首先是通过网络发送文件时,不能同时使用 `--sparse` 和 `--inplace`。其次,当你以前使用过 `--sparse` 发送文件时,就无法再使用 `--inplace` 进行更新。请注意,低于 3.1.3 的 `rsync` 版本将拒绝 `--sparse` 和 `--inplace` 的组合。

因此,即使那个朋友最终通过网络复制了 100GB,那也只需发生一次。以下所有更新仅复制差异,从而使复制非常高效。

---

via: <https://fedoramagazine.org/copying-large-files-with-rsync-and-some-misconceptions/>

作者:[Daniel Leite de Abreu](https://fedoramagazine.org/author/dabreu/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
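补充示例:为了直观地感受上文 `-S` 选项的作用,下面是一个假设性的本地演示(文件名与目标主机均为举例),它用 `truncate` 创建一个稀疏文件,再对比两种传输方式:

```
# 创建一个 10GB 的稀疏文件:逻辑大小是 10GB,实际磁盘占用接近 0
truncate -s 10G disk.img
du -h --apparent-size disk.img   # 显示逻辑大小:10G
du -h disk.img                   # 显示实际磁盘占用:接近 0

# 不带 -S:目标端会被写入完整的 10GB 数据
rsync -av disk.img syncuser@host1:/destination/

# 带 -S:目标端同样得到稀疏文件,只传输实际存在的数据
rsync -avS disk.img syncuser@host1:/destination/
```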
200
OK
There is a notion that a lot of people working in the IT industry often copy and paste from internet howtos. We all do it, and the copy-and-paste itself is not a problem. The problem is when we run things without understanding them.

Some years ago, a friend who used to work on my team needed to copy virtual machine templates from site A to site B. They could not understand why the file they copied was 10GB on site A but it became 100GB on site B.

The friend believed that *rsync* is a magic tool that should just “sync” the file as it is. However, what most of us forget is to understand what *rsync* really is, how it is used, and, most important in my opinion, where it comes from. This article provides some further information about rsync, and an explanation of what happened in that story.

## About rsync

*rsync* is a tool that was created by Andrew Tridgell and Paul Mackerras, who were motivated by the following problem:

Imagine you have two files, *file_A* and *file_B*. You wish to update *file_B* to be the same as *file_A*. The obvious method is to copy *file_A* onto *file_B*.

Now imagine that the two files are on two different servers connected by a slow communications link, for example, a dial-up IP link. If *file_A* is large, copying it onto *file_B* will be slow, and sometimes not even possible. To make it more efficient, you could compress *file_A* before sending it, but that would usually only gain a factor of 2 to 4.

Now assume that *file_A* and *file_B* are quite similar, and to speed things up, you take advantage of this similarity. A common method is to send just the differences between *file_A* and *file_B* down the link and then use that list of differences to reconstruct the file on the remote end.

The problem is that the normal methods for creating a set of differences between two files rely on being able to read both files. Thus they require that both files are available beforehand at one end of the link. If they are not both available on the same machine, these algorithms cannot be used. (Once you had copied the file over, you don’t need the differences). This is the problem that *rsync* addresses.

The *rsync* algorithm efficiently computes which parts of a source file match parts of an existing destination file. Matching parts then do not need to be sent across the link; all that is needed is a reference to the part of the destination file. Only parts of the source file which are not matching need to be sent over.

The receiver can then construct a copy of the source file using the references to parts of the existing destination file and the original material.

Additionally, the data sent to the receiver can be compressed using any of a range of common compression algorithms for further speed improvements.

The rsync algorithm addresses this problem in a lovely way as we all might know.

After this introduction on *rsync*, back to the story!

## Problem 1: Thin provisioning

There were two things that would help the friend understand what was going on.

The problem with the file getting significantly bigger on the other side was caused by Thin Provisioning (TP) being enabled on the source system — a method of optimizing the efficiency of available space in Storage Area Networks (SAN) or Network Attached Storages (NAS).

The source file was only 10GB because of TP being enabled, and when transferred over using *rsync* without any additional configuration, the target destination was receiving the full 100GB of size.
*rsync* could not do the magic automatically, it had to be configured.

The flag that does this work is *-S* or *--sparse* and it tells *rsync* to handle sparse files efficiently. And it will do what it says! It will only send the sparse data so source and destination will have a 10GB file.

## Problem 2: Updating files

The second problem appeared when sending over an updated file. The destination was now receiving just the 10GB, but the whole file (containing the virtual disk) was always transferred. Even when a single configuration file was changed on that virtual disk. In other words, only a small portion of the file changed.

The command used for this transfer was:

rsync -avS vmdk_file syncuser@host1:/destination

Again, understanding how *rsync* works would help with this problem as well.

The above is the biggest misconception about rsync. Many of us think *rsync* will simply send the delta updates of the files, and that it will automatically update only what needs to be updated. But this is not the default behaviour of *rsync*.

As the man page says, the default behaviour of *rsync* is to create a new copy of the file in the destination and to move it into the right place when the transfer is completed.

To change this default behaviour of *rsync*, you have to set the following flags and then rsync will send only the deltas:

--inplace update destination files in-place
--partial keep partially transferred files
--append append data onto shorter files
--progress show progress during transfer

So the full command that would do exactly what the friend wanted is:

rsync -av --partial --inplace --append --progress vmdk_file syncuser@host1:/destination

Note that the sparse flag *-S* had to be removed, for two reasons. The first is that you cannot use *--sparse* and *--inplace* together when sending a file over the wire. And second, once you have sent a file over with *--sparse*, you can’t update it with *--inplace* anymore. Note that versions of rsync older than 3.1.3 will reject the combination of *--sparse* and *--inplace*.

So even when the friend ended up copying 100GB over the wire, that only had to happen once. All the following updates were only copying the difference, making the copy extremely efficient.

## 4e4
Hm… beter wrote about tox vpn tox are communicator https://github.com/cleverca22/toxvpn or iodine, put files trought dns

## RG
Could someone provide a package or build for the official Fedora repository? Is there any Copr at least?

## Kathirvel Rajendran
Hi Daniel, Very useful article!

## Garrett Nievin
Verynice article, thank you!Tiny spelling correction: ‘–inaplce’ should be ‘–inplace’.

## Adam Šamalík
Thank you for noticing! I’ve updated it.

## Night Romantic
Daniel, thanks for the article! There’s a small typo here:

## Adam Šamalík
Thank you for noticing! I’ve updated it.

## Rakshith
I use rsync alot…. Thank u for letting us know the nitty-gritty…. 👍

## Jake
I thought the delta-xfer algorithm is always used, unless you use the –whole-file option to disable it? So, I would’ve thought that the large file should be being transferred “efficiently” over the network; just that the receiver side is chewing up twice the space if not using –inplace (the existing local destination file is used as a “basis” and matching blocks can be copied from there to the temporary file). I could be wrong, but don’t know really how to check this… or this could be what you meant and I wasn’t understanding on my end.
## Mohammed El-Afifi I’ve reviewed rsync man page and I can conclude the same. Rsync by default uses delta-xfer algorithm for the network transfer, but it’s only the handling of the transferred delta chunks on the destination end that’s by default applied to a temporary file before being copied to the target file. The options the author has listed overwrites the target file reconstruction step by dropping the use of a temporary file altogether. ## Linux Dummy rsync is one of those commands I have never really understood, but it gets recommended by people a lot when you ask about making backups. I’ll bet have asked some variation of this question a dozen times, and have had rsync recommended every single time, but no one seems to know what are the correct options to use. The question is this: Assume I am running a Linux server with no desktop using something like the “lite” version of Raspbian, Debian, or Ubuntu. This means I cannot use anything that requires a GUI configuration utility. What I want to be able to do is backup in a manner similar to how Apple’s Time Machine does it, but again without the GUI. Specifically, on a regular schedule (every hour or every day) I want to do a full backup of all files to a different drive on the machine or elsewhere on the local network, but of course I only want it to write the data that has changed. My understanding is that Apple uses “sparse” files of some kind in Time Machine, so that’s fine, but what I really need is two things: The ability to easily recover a single file, group of files, or entire directory from the backup (not necessarily the most recent backup; I should be able to view or recover files from earlier backups in addition to the most recent, as is possible in Time Machine). But more important, and this is the thing that nobody seems to be able to explain how to do, is that I need to be able to reformat the hard drive and then put everything (system files AND user files) back as they were prior to the last update. A big bonus would be the ability to roll back the last kernel update and get back to how everything was just before that kernel update. In other words something similar to taking a snapshot of a virtual machine, then restoring from that snapshot, but without actually using a virtual machine. Everybody seems to think rsync can do all this but no one seems to know how, and if there’s a guide that explains it I haven’t found it yet. Typically the next thing people will suggest is some program that uses a GUI (if only for configuration) but of course that doesn’t work if your only access is via ssh. I wish someone would make a very basic guide for noobs that basically says that if you want to do this type of backup or recovery, these are the options you should use. Sometimes I get the feeling that nobody but the developers truly understand rsync, but a lot of people know a little bit about it. ## Night Romantic @Linux Dummy, as far as I know, rsync isn’t the solution for you. Rsync is for file syncing, and although it can be used (and is used) as a light backup solution, it isn’t well-suited to do everything you ask for (without some heavy-duty scripting).I think, you should look at backupoptions available. One possibility would be this:https://fedoramagazine.org/backup-on-fedora-silverblue-with-borg/ The article is about Silverblue, but Borg itself is generic, not Siverblue-specific. There are other good and time-tested cli backup options, but I don’t remember them from the top of my head. 
You can ask for advice on Ask Fedora, I’m sure someone there will provide more options for you. ## Paul W. Frields @Linux Dummy: You could also look into Deja Dup. ## tallship Yes Deja Dup is a friendly front end for Duplicity (http://duplicity.nongnu.org/), something I incorporate into a lot of DR and backup solutions on a regular basis. They both take a lot of the guess work out of using tar and rsync together, and offer the flexibility to use in a myriad of ways. Great article @danbreuu, by the way 🙂 ## mcshoggoth Another option is Veeam Agent for Linux. There’s a free edition that will do what you want and you can set it up from the command line. There is not an ARM version of it yet, but it does exist for debian, rpm, openSUSE, and several other flavors. If offers file level recovery and Forever Forward Incremental backups. Scheduling of the job occurs using crontab. https://www.veeam.com/linux-backup-free.html ## RG What about backintime (in official repository)? It’s a GUI frontend for automation of jobs with rsync as the backend and it supports various targets. ## cmurf Time Machine has gone through many revisions, but the central way it works is the snapshots are all hardlink based, and changed files tracked by FSEvents are written over “stale” hardlinks. And to this day it’s still based on Journaled HFS+, even if the source is APFS. Some time ago I found this approximate idea with rsync, specifically the first answer. https://serverfault.com/questions/489289/handling-renamed-files-or-directories-in-rsync A more modern approach is ZFS/Btrfs snapshots with incremental send/receive. That’s not a generic solution, of course. Both source and destination must be the same filesystem to take advantage of that approach. ## Gregory Bartholomew @Linux Dummy, FWIW, I do something kind of similar and I’ve been very happy with using ZFS as a backup/rollback solution for my VMs. I have several VM “root” filesystems that I share out from my ZFS pool over NFS. I can snapshot them, roll them back, perform incremental backups, live migrate them between hypervisors — all the good stuff. And because ZFS is a log-based filesystem, the snapshots, backups, and clones are all extremely efficient. The destination of the backup doesn’t haveto be ZFS BTW (as someone else stated earlier). You can write the backup out (incremental or otherwise) as a regular file on any target filesystem. I wouldn’t recommend that though as it might be difficult to re-assemble the incremental backup later on.## Stuart D Gathman The way that is done is usually to organize the backup directories into one per date and using –link-dest to share unchanged files with the last backup. While this can certainly be done manually, and I sometimes do so, at the very least, I will capture the command in a backup.sh script in the backup directory. Here are the internal scripts we use: https://github.com/sdgathman/rbackup The root backup destination has a flag file and directory per system: l /mnt/unilit total 24 -rw-r–r–. 1 root root 0 Sep 5 2017 BMS_BACKUP_V1 drwxr-xr-x. 3 root root 4096 Sep 7 2017 C6 drwxr-xr-x. 4 root root 4096 Sep 7 2017 c6mail drwxr-xr-x. 4 root root 4096 Sep 4 22:07 c7mail drwxr-xr-x. 10 root root 4096 Sep 12 21:15 cumplir7 drwxr-xr-x. 35 root root 4096 Sep 7 2017 current drwxr-xr-x. 4 root root 4096 Sep 7 2017 hulk And each directory looks like this: l /mnt/unilit/cumplir7 total 44 dr-xr-xr-x. 25 root root 4096 Jun 28 2017 17Sep07a dr-xr-xr-x. 20 root root 4096 Jan 1 2019 19Jan05 dr-xr-xr-x. 
20 root root 4096 Sep 4 18:13 19Sep03 dr-xr-xr-x. 20 root root 4096 Jan 7 2019 19Sep04 dr-xr-xr-x. 20 root root 4096 Sep 5 02:08 19Sep06 dr-xr-xr-x. 20 root root 4096 Sep 5 02:08 19Sep12 -rw-r–r–. 1 root root 0 Sep 12 21:15 BMS_BACKUP_COMPLETE drwxr-xr-x. 2 root root 4096 Sep 12 21:15 current lrwxrwxrwx. 1 root root 7 Sep 12 21:15 last -> 19Sep12 drwx——. 2 root root 16384 Sep 5 2017 lost+found ## Raghav Rao Oh wow this whole time I’ve been confusing the –update option to be what the –inplace option is. Thanks! ## svsv sarma Now I understand the Rsync. A good practical example of Rsync is Timeshift. ## RG Maybe you’re interested in backintime, it’s a GUI frontend for rsync as the backend to automate backup to various targets. ## Davide Just one important note about one option that you suggested: –append This causes rsync to update a file by appending data onto the end of the file … The use of –append can be dangerous if you aren’t 100% sure that the files that are longer have only grown by the appending of data onto the end So if you use the “–append” then modify the source file by changing the content on the middle of it (and not only append the content), you will get a different file on the destination! you can check it with md5sum. This article is very interesting, it’s a big misunderstood! thank you ## evgeniusx rsync with inotifywait – make powerful synchronization on action ## Diego Roberto Nice! ## Sam Tuke Interesting and useful article, thanks! ## pamsu With faster and faster wifi ac, I’m seeing a lot of stuff on file transfers….or maybe its just me…Love this!
11,416
chown 命令简介
https://opensource.com/article/19/8/linux-chown-command
2019-10-03T00:01:11
[ "chown" ]
https://linux.cn/article-11416-1.html
> 
> 学习如何使用 chown 命令更改文件或目录的所有权。
> 
> 

![](/data/attachment/album/201910/03/000014mfrxrxi5rej75mjs.jpg)

Linux 系统上的每个文件和目录均由某个人拥有,拥有者可以完全控制更改或删除他们拥有的文件。除了有一个*拥有用户*外,文件还有一个*拥有组*。

你可以使用 `ls -l` 命令查看文件的所有权:

```
[pablo@workstation Downloads]$ ls -l
total 2454732
-rw-r--r--. 1 pablo pablo 1934753792 Jul 25 18:49 Fedora-Workstation-Live-x86_64-30-1.2.iso
```

该输出的第三和第四列是拥有用户和组,它们一起称为*所有权*。上面的那个 ISO 文件这两者都是 `pablo`。

所有权设置(由 [chmod 命令](https://opensource.com/article/19/8/introduction-linux-chmod-command)设置)控制允许谁执行读取、写入或运行的操作。你可以使用 `chown` 命令更改所有权(一个或两者)。

所有权经常需要更改。文件和目录会长期存在于系统中,但用户不断变来变去。当文件和目录在系统中移动时,或从一个系统移动到另一个系统时,所有权也可能需要更改。

我的主目录中的文件和目录的所有权是我的用户和我的主要组,以 `user:group` 的形式表示。假设 Susan 正在管理 Delta 组,该组需要编辑一个名为 `mynotes` 的文件。你可以使用 `chown` 命令将该文件的用户更改为 `susan`,组更改为 `delta`:

```
$ chown susan:delta mynotes
ls -l
-rw-rw-r--. 1 susan delta 0 Aug 1 12:04 mynotes
```

当 Delta 组用完该文件后,可以把它分配回给我:

```
$ chown alan mynotes
$ ls -l mynotes
-rw-rw-r--. 1 alan delta 0 Aug 1 12:04 mynotes
```

在用户名后面添加冒号(`:`),可以将用户和组都分配回给我:

```
$ chown alan: mynotes
$ ls -l mynotes
-rw-rw-r--. 1 alan alan 0 Aug 1 12:04 mynotes
```

通过在组前面加一个冒号,可以只更改组。现在,`gamma` 组的成员可以编辑该文件:

```
$ chown :gamma mynotes
$ ls -l
-rw-rw-r--. 1 alan gamma 0 Aug 1 12:04 mynotes
```

`chown` 的一些附加参数在命令行和脚本中都很有用。就像许多其他 Linux 命令一样,`chown` 有一个递归参数(`-R`),它告诉该命令进入目录以对其中的所有文件进行操作。没有 `-R` 标志,你就只能更改文件夹的权限,而不会更改其中的文件。在此示例中,假定目的是更改目录及其所有内容的权限。这里我添加了 `-v`(详细)参数,以便 `chown` 报告其工作情况:

```
$ ls -l . conf
.:
drwxrwxr-x 2 alan alan 4096 Aug 5 15:33 conf

conf:
-rw-rw-r-- 1 alan alan 0 Aug 5 15:33 conf.xml

$ chown -vR susan:delta conf
changed ownership of 'conf/conf.xml' from alan:alan to susan:delta
changed ownership of 'conf' from alan:alan to susan:delta
```

根据你的角色,你可能需要使用 `sudo` 来更改文件的所有权。

在更改文件的所有权以匹配特定配置时,或者在你不知道所有权时(例如运行脚本时),可以使用参考文件(`--reference=RFILE`)。例如,你可以复制另一个文件(`RFILE`,称为参考文件)的用户和组,以撤消上面所做的更改。回想一下,点(`.`)表示当前的工作目录。

```
$ chown -vR --reference=. conf
```

### 报告更改

大多数命令都有用于控制其输出的参数。最常见的是 `-v`(`--verbose`)以启用详细信息,但是 `chown` 还具有 `-c`(`--changes`)参数来指示 `chown` 仅在进行更改时报告。`chown` 还会报告其他情况,例如不允许进行的操作。

参数 `-f`(`--silent`、`--quiet`)用于禁止显示大多数错误消息。在下一节中,我将使用 `-f` 和 `-c`,以便仅显示实际更改。

### 保持根目录

Linux 文件系统的根目录(`/`)应该受到高度重视。如果命令在此层级上犯了一个错误,则后果可能会使系统完全无用。尤其是在运行一个会递归修改甚至删除的命令时。`chown` 命令具有一个可用于保护和保持根目录的参数,它是 `--preserve-root`。如果在根目录中将此参数和递归一起使用,那么什么也不会发生,而是会出现一条消息:

```
$ chown -cfR --preserve-root alan /
chown: it is dangerous to operate recursively on '/'
chown: use --no-preserve-root to override this failsafe
```

如果不与 `--recursive` 结合使用,则该选项无效。但是,如果该命令由 `root` 用户运行,则 `/` 本身的权限将被更改,但其下的其他文件或目录的权限则不会更改:

```
$ chown -c --preserve-root alan /
chown: changing ownership of '/': Operation not permitted
[root@localhost /]# chown -c --preserve-root alan /
changed ownership of '/' from root to alan
```

### 所有权即安全

文件和目录所有权是良好的信息安全性的一部分,因此,偶尔检查和维护文件所有权以防止不必要的访问非常重要。`chown` 命令是 Linux 安全命令集中最常见和最重要的命令之一。

---

via: <https://opensource.com/article/19/8/linux-chown-command>

作者:[Alan Formy-Duval](https://opensource.com/users/alanfdosshttps://opensource.com/users/sethhttps://opensource.com/users/alanfdosshttps://opensource.com/users/sethhttps://opensource.com/users/greg-phttps://opensource.com/users/alanfdoss) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
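补充示例:下面是一个假设性的小例子(目录与用户名均为举例),演示如何先找出所有权不符合预期的文件,再用上文介绍的 `chown` 参数统一修复:

```
# 审计:找出 /srv/www 下不属于 webadmin 用户的文件,先看清楚再动手
find /srv/www -not -user webadmin -exec ls -ld {} +

# 确认无误后递归修复所有权;-c 只报告实际发生的更改
sudo chown -cR webadmin:webdev /srv/www
```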
200
OK
Every file and directory on a Linux system is owned by someone, and the owner has complete control to change or delete the files they own. In addition to having an owning *user*, a file has an owning *group*.

You can view the ownership of a file using the **ls -l** command:

```
[pablo@workstation Downloads]$ ls -l
total 2454732
-rw-r--r--. 1 pablo pablo 1934753792 Jul 25 18:49 Fedora-Workstation-Live-x86_64-30-1.2.iso
```

The third and fourth columns of the output are the owning user and group, which together are referred to as *ownership*. Both are **pablo** for the ISO file above.

The ownership settings, set by the [chmod command](https://opensource.com/article/19/8/introduction-linux-chmod-command), control who is allowed to perform read, write, or execute actions. You can change ownership (one or both) with the **chown** command.

It is often necessary to change ownership. Files and directories can live a long time on a system, but users can come and go. Ownership may also need to change when files and directories are moved around the system or from one system to another.

The ownership of the files and directories in my home directory are my user and my primary group, represented in the form **user:group**. Suppose Susan is managing the Delta group, which needs to edit a file called **mynotes**. You can use the **chown** command to change the user to **susan** and the group to **delta**:

```
$ chown susan:delta mynotes
ls -l
-rw-rw-r--. 1 susan delta 0 Aug 1 12:04 mynotes
```

Once the Delta group is finished with the file, it can be assigned back to me:

```
$ chown alan mynotes
$ ls -l mynotes
-rw-rw-r--. 1 alan delta 0 Aug 1 12:04 mynotes
```

Both the user and group can be assigned back to me by appending a colon (**:**) to the user:

```
$ chown alan: mynotes
$ ls -l mynotes
-rw-rw-r--. 1 alan alan 0 Aug 1 12:04 mynotes
```

By prepending the group with a colon, you can change just the group. Now members of the **gamma** group can edit the file:

```
$ chown :gamma mynotes
$ ls -l
-rw-rw-r--. 1 alan gamma 0 Aug 1 12:04 mynotes
```

A few additional arguments to chown can be useful at both the command line and in a script. Just like many other Linux commands, chown has a recursive argument (**-R**) which tells the command to descend into the directory to operate on all files inside. Without the **-R** flag, you change permissions of the folder only, leaving the files inside it unchanged. In this example, assume that the intent is to change permissions of a directory and all its contents. Here I have added the **-v** (verbose) argument so that chown reports what it is doing:

```
$ ls -l . conf
.:
drwxrwxr-x 2 alan alan 4096 Aug 5 15:33 conf

conf:
-rw-rw-r-- 1 alan alan 0 Aug 5 15:33 conf.xml

$ chown -vR susan:delta conf
changed ownership of 'conf/conf.xml' from alan:alan to susan:delta
changed ownership of 'conf' from alan:alan to susan:delta
```

Depending on your role, you may need to use **sudo** to change ownership of a file.

You can use a reference file (**--reference=RFILE**) when changing the ownership of files to match a certain configuration or when you don't know the ownership (as might be the case when running a script). You can duplicate the user and group of another file (**RFILE**, known as a reference file), for example, to undo the changes made above. Recall that a dot (**.**) refers to the present working directory.

`$ chown -vR --reference=. conf`

## Report Changes

Most commands have arguments for controlling their output.
The most common is **-v** (**--verbose**) to enable verbose, but chown also has a **-c** (**--changes**) argument to instruct chown to only report when a change is made. Chown still reports other things, such as when an operation is not permitted.

The argument **-f** (**--silent**, **--quiet**) is used to suppress most error messages. I will use **-f** and **-c** in the next section so that only actual changes are shown.

## Preserve Root

The root (**/**) of the Linux filesystem should be treated with great respect. If a mistake is made at this level, the consequences could leave a system completely useless. Particularly when you are running a recursive command that makes any kind of change or worse: deletions. The chown command has an argument that can be used to protect and preserve the root. The argument is **--preserve-root**. If this argument is used with a recursive chown command on the root, nothing is done and a message appears instead.

```
$ chown -cfR --preserve-root alan /
chown: it is dangerous to operate recursively on '/'
chown: use --no-preserve-root to override this failsafe
```

The option has no effect when not used in conjunction with **--recursive**. However, if the command is run by the root user, the permissions of the **/** itself will be changed, but not of other files or directories within.

```
$ chown -c --preserve-root alan /
chown: changing ownership of '/': Operation not permitted
[root@localhost /]# chown -c --preserve-root alan /
changed ownership of '/' from root to alan
```

## Ownership is security

File and directory ownership is part of good information security, so it's important to occasionally check and maintain file ownership to prevent unwanted access. The chown command is one of the most common and important in the set of Linux security commands.
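To tie these options together, here is a hypothetical cleanup that combines `find` with chown (the path and account names are made up for illustration):

```
# Audit: list anything under /srv/app not owned by appuser
find /srv/app ! -user appuser -exec ls -ld {} +

# Fix: re-own only those files; -c reports each actual change
find /srv/app ! -user appuser -exec sudo chown -c appuser:appgroup {} +
```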
11,417
Shell 点文件可以为你做点什么
https://opensource.com/article/18/9/shell-dotfile
2019-10-03T12:35:39
[ "点文件" ]
https://linux.cn/article-11417-1.html
> 
> 了解如何使用配置文件来改善你的工作环境。
> 
> 

![](/data/attachment/album/201910/03/123528x3skwqwb8sz8qo8s.jpg)

不要问你可以为你的 shell <ruby> 点文件 <rt> dotfile </rt></ruby>做什么,而是要问一个 shell 点文件可以为你做什么!

我一直在操作系统领域里面打转,但是在过去的几年中,我的日常使用的一直是 Mac。很长一段时间,我都在使用 Bash,但是当几个朋友开始把 [zsh](http://www.zsh.org/) 当成宗教信仰时,我也试了试它。我没用太长时间就喜欢上了它,几年后,我越发喜欢它做的许多小事情。

我一直在使用 zsh(通过 [Homebrew](https://brew.sh/) 提供,而不是由操作系统安装的)和 [Oh My Zsh 增强功能](https://github.com/robbyrussell/oh-my-zsh)。

本文中的示例是我的个人 `.zshrc`。大多数都可以直接用在 Bash 中,而且我认为它们都不依赖于 Oh My Zsh,不过具体效果可能因人而异。曾经有一段时间,我同时为 zsh 和 Bash 维护一个 shell 点文件,但是最终我还是放弃了我的 `.bashrc`。

### 不偏执不行

如果你希望在各个操作系统上使用相同的点文件,则需要让你的点文件聪明点。

```
### Mac 专用
if [[ "$OSTYPE" == "darwin"* ]]; then
  # Mac 专用内容在此
fi
```

例如,我希望 `Alt + 箭头键` 将光标按单词移动而不是单个空格。为了在 [iTerm2](https://www.iterm2.com/)(我的首选终端)中实现这一目标,我将此代码段添加到了 `.zshrc` 的 Mac 专用部分:

```
### Mac 专用
if [[ "$OSTYPE" == "darwin"* ]]; then
  ### Mac 用于 iTerm2 的光标命令;映射 ctrl+arrows 或 alt+arrows 来快速移动
  bindkey -e
  bindkey '^[[1;9C' forward-word
  bindkey '^[[1;9D' backward-word
  bindkey '\e\e[D' backward-word
  bindkey '\e\e[C' forward-word
fi
```

(LCTT 译注:标题 “We’re all mad here” 是电影《爱丽丝梦游仙境》中,微笑猫对爱丽丝讲的一句话:“我们这儿全都是疯的”。)

### 在家不工作

虽然我开始喜欢我的 Shell 点文件了,但我并不总是想要家用计算机上的东西与工作的计算机上的东西一样。解决此问题的一种方法是让补充的点文件在家中使用,而不是在工作中使用。以下是我的实现方式:

```
if [[ `egrep 'dnssuffix1|dnssuffix2' /etc/resolv.conf` ]]; then
  if [ -e $HOME/.work ]; then
    source $HOME/.work
  else
    echo "This looks like a work machine, but I can't find the ~/.work file"
  fi
fi
```

在这种情况下,我根据我的工作 dns 后缀(或多个后缀,具体取决于你的情况)来提供(`source`)一个可以使我的工作环境更好的单独文件。

(LCTT 译注:标题 “What about Bob?” 是 1991 年的美国电影《天才也疯狂》。)

### 你该这么做

现在可能是放弃使用波浪号(`~`)表示编写脚本时的主目录的好时机。你会发现在某些上下文中无法识别它。养成使用环境变量 `$HOME` 的习惯,这将为你节省大量的故障排除时间和以后的工作。

如果你愿意,合乎逻辑的扩展是应该包括特定于操作系统的点文件。

(LCTT 译注:标题 “That thing you do” 是 1996 年由汤姆·汉克斯执导的喜剧片《挡不住的奇迹》。)

### 别指望记忆

我写了那么多 shell 脚本,我真的再也不想写脚本了。并不是说 shell 脚本不能满足我大部分时间的需求,而是我发现写 shell 脚本,可能只是拼凑了一个胶带式解决方案,而不是永久地解决问题。

同样,我讨厌记住事情,在我的整个职业生涯中,我经常不得不在一天之中就彻彻底底地改换环境。实际的结果是这些年来,我不得不一再重新学习很多东西。(“等等……这种语言使用哪种 for 循环结构?”)

因此,每隔一段时间我就会觉得自己厌倦了再次寻找做某事的方法。我改善生活的一种方法是添加别名。

对于任何一个和系统打交道的人来说,一个常见的情况是找出占用了所有磁盘的内容。不幸的是,我从来没有记住过这个咒语,所以我做了一个 shell 别名,创造性地叫做 `bigdirs`:

```
alias bigdirs='du --max-depth=1 2> /dev/null | sort -n -r | head -n20'
```

虽然我可以不那么懒,真的把它记下来,但是,那不太 Unix ……

(LCTT 译注:标题 “Memory, all alone in the moonlight” 是一首英文老歌。)

### 输错的人们

使用 shell 别名改善我的生活的另一种方法是使我免于输入错误。我不知道为什么,但是我已经养成了这种讨厌的习惯,在序列 `ea` 之后输入 `w`,所以如果我想清除终端,我经常会输入 `cleawr`。不幸的是,这对我的 shell 没有任何意义。直到我添加了这个小东西:

```
alias cleawr='clear'
```

Windows 中有一个等效但更好的命令 `cls`,我发现自己在 shell 里也会输入它。看到你的 shell 表示抗议真令人沮丧,因此我添加:

```
alias cls='clear'
```

是的,我知道 `ctrl + l`,但是我从不使用它。

(LCTT 译注:标题 “Typos, and the people who love them” 可能来自某部电影。)

### 要自娱自乐

工作压力很大。有时你需要找点乐子。如果你的 shell 不认识一个它显然就该执行的命令,也许你想对它耸耸肩!你可以使用下面这个函数来实现:

```
shrug() { echo "¯\_(ツ)_/¯"; }
```

如果还不行,也许你需要掀桌不干了:

```
fliptable() { echo "(╯°□°)╯ ┻━┻"; } # 掀桌,用法示例: fsck -y /dev/sdb1 || fliptable
```

想想看,当我想掀桌子却不记得给它起了什么名字时,我会有多沮丧和失望,所以我添加了更多的 shell 别名:

```
alias flipdesk='fliptable'
alias deskflip='fliptable'
alias tableflip='fliptable'
```

而有时你需要庆祝一下:

```
disco() {
echo "(•_•)"
echo "<) )╯"
echo " / \ "
echo ""
echo "\(•_•)"
echo " ( (>"
echo " / \ "
echo ""
echo " (•_•)"
echo "<) )>"
echo " / \ "
}
```

通常,我会将这些命令的输出通过管道传递到 `pbcopy`,并将其粘贴到我正在使用的相关聊天工具中。

我从我关注的一个叫 “Command Line Magic”([@climagic](https://twitter.com/climagic))的 Twitter 帐户得到了下面这个有趣的函数。由于我现在住在佛罗里达州,我很高兴这是我这辈子唯一会见到的雪:

```
snow() {
clear;while :;do echo $LINES $COLUMNS $(($RANDOM%$COLUMNS));sleep 0.1;done|gawk '{a[$3]=0;for(x in a) {o=a[x];a[x]=a[x]+1;printf "\033[%s;%sH ",o,x;printf "\033[%s;%sH*\033[0;0H",a[x],x;}}'
}
```

(LCTT 译注:标题 “Amuse yourself” 是 1936 年的美国电影《自娱自乐》。)

### 函数的乐趣

我们已经看到了一些我使用的函数示例。由于这些示例几乎都不需要参数,它们其实也可以用别名来实现。当内容超过一条简短语句时,我出于个人喜好使用函数。

在我职业生涯的很多时期我都运行过 [Graphite](https://github.com/graphite-project/),这是一个开源、可扩展的时间序列指标解决方案。 在很多情况下,我需要将度量路径(用句点表示)转换到文件系统路径(用斜杠表示),反之亦然,拥有专用于这些任务的函数就变得很有用:

```
# 在 Graphite 指标和文件路径之间转换很有用
function dottoslash() {
echo $1 | sed 's/\./\//g'
}
function slashtodot() {
echo $1 | sed 's/\//\./g'
}
```

在我的另外一段职业生涯里,我运行了很多 Kubernetes。如果你对运行 Kubernetes 不熟悉,你需要编写很多 YAML。不幸的是,一不小心就会写出无效的 YAML。更糟糕的是,Kubernetes 不会在尝试应用 YAML 之前对其进行验证,因此,除非你应用它,否则你不会发现它是无效的。除非你先进行验证:

```
function yamllint() {
for i in $(find . -name '*.yml' -o -name '*.yaml'); do echo $i; ruby -e "require 'yaml';YAML.load_file(\"$i\")"; done
}
```

因为我厌倦了偶尔破坏客户的设置而让自己感到尴尬,所以我写了这个小片段并将其作为提交前挂钩添加到我所有相关的存储库中。在持续集成过程中,类似的内容将非常有帮助,尤其是在你作为团队成员的情况下。

(LCTT 译注:哦抱歉,我不知道这个标题的出处。)

### 手指不听话

我曾经是一位出色的盲打打字员。但那些日子已经一去不回。我的打字错误超出了我的想象。

在各种时期,我多次用过 Chef 或 Kubernetes。对我来说幸运的是,我从未同时使用过这两者。

Chef 生态系统的一部分是 Test Kitchen,它是加快测试的一组工具,可通过命令 `kitchen test` 来调用。Kubernetes 使用 CLI 工具 `kubectl` 进行管理。这两个命令都带有好几个子命令,而且敲起来手指都不太顺畅。

我没有创建一堆“输错别名”,而是将这两个命令别名为 `k`:

```
alias k='kitchen test $@'
```

或

```
alias k='kubectl $@'
```

(LCTT 译注:标题 “Oh, fingers, where art thou?” 演绎自《O Brother, Where Art Thou?》,这是 2000 年美国的一部电影《逃狱三王》。)

### 分裂与合并

我职业生涯的后半截涉及与其他人一起编写更多代码。我曾在许多环境中工作过,在这些环境中,我们在帐户中复刻了存储库副本,并将拉取请求用作审核过程的一部分。当我想确保给定存储库的复刻与父版本保持最新时,我使用 `fetchupstream`:

```
alias fetchupstream='git fetch upstream && git checkout master && git merge upstream/master && git push'
```

(LCTT 译注:标题 “Timesplitters” 是一款视频游戏《时空分裂者》。)

### 颜色之荣耀

我喜欢颜色。它可以使 `diff` 之类的东西更易于使用。

```
alias diff='colordiff'
```

我觉得彩色的手册页是个巧妙的技巧,因此我合并了以下函数:

```
# 彩色化手册页,来自:
# http://boredzo.org/blog/archives/2016-08-15/colorized-man-pages-understood-and-customized
man() {
env \
LESS_TERMCAP_md=$(printf "\e[1;36m") \
LESS_TERMCAP_me=$(printf "\e[0m") \
LESS_TERMCAP_se=$(printf "\e[0m") \
LESS_TERMCAP_so=$(printf "\e[1;44;33m") \
LESS_TERMCAP_ue=$(printf "\e[0m") \
LESS_TERMCAP_us=$(printf "\e[1;32m") \
man "$@"
}
```

我喜欢 `which` 命令,它会直接告诉你正在运行的命令位于文件系统的哪个位置,除非它是一个 shell 函数。在多个级联的点文件之后,有时会不清楚函数的定义位置或作用。事实证明,`whence` 和 `type` 命令可以帮助解决这一问题。

```
# 函数定义在哪里?
whichfunc() {
whence -v $1
type -a $1
}
```

(LCTT 译注:标题“Mine eyes have seen the glory of the coming of color” 演绎自歌曲 《Mine Eyes Have Seen The Glory Of The Coming Of The Lord》)

### 总结

希望本文对你有所帮助,并能激发你找到改善日常使用 Shell 的方法。这些方法不必庞大、新颖或复杂。它们可能会解决一些微小但频繁的摩擦、创建捷径,甚至提供减少常见输入错误的解决方案。

欢迎你浏览我的 [dotfiles 存储库](https://github.com/gwaldo/dotfiles),但我要警示你,这样做可能会花费很多时间。请随意使用你认为有帮助的任何东西,并请善待彼此。

---

via: <https://opensource.com/article/18/9/shell-dotfile>

作者:[H.Waldo Grunenwald](https://opensource.com/users/gwaldo) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Ask not what you can do for your shell dotfile, but what a shell dotfile can do for you!

I've been all over the OS map, but for the past several years my daily drivers have been Macs. For a long time, I used Bash, but when a few friends started proselytizing [zsh](http://www.zsh.org/), I gave it a shot. It didn't take long for me to appreciate it, and several years later, I strongly prefer it for many of the little things that it does.

I've been using zsh (provided via [Homebrew](https://brew.sh/), not the system installed), and the [Oh My Zsh enhancement](https://github.com/robbyrussell/oh-my-zsh).

The examples in this article are for my personal `.zshrc`. Most will work directly in Bash, and I don't believe that any rely on Oh My Zsh, but your mileage may vary. There was a period when I was maintaining a shell dotfile for both zsh and Bash, but I did eventually give up on my `.bashrc`.

## We're all mad here

If you want the possibility of using the same dotfile across OS's, you'll want to give your dotfile a little smarts.

```
### Mac Specifics
if [[ "$OSTYPE" == "darwin"* ]]; then
  # Mac-specific stuff here.
fi
```

For instance, I expect the Alt + arrow keys to move the cursor by the word rather than by a single space. To make this happen in [iTerm2](https://www.iterm2.com/) (my preferred shell), I add this snippet to the Mac-specific portion of my .zshrc:

```
### Mac Specifics
if [[ "$OSTYPE" == "darwin"* ]]; then
  ### Mac cursor commands for iTerm2; map ctrl+arrows or alt+arrows to fast-move
  bindkey -e
  bindkey '^[[1;9C' forward-word
  bindkey '^[[1;9D' backward-word
  bindkey '\e\e[D' backward-word
  bindkey '\e\e[C' forward-word
fi
```

## What about Bob?

While I came to love my shell dotfile, I didn't always want the same things available on my home machines as on my work machines. One way to solve this is to have supplementary dotfiles to use at home but not at work. Here's how I accomplished this:

```
if [[ `egrep 'dnssuffix1|dnssuffix2' /etc/resolv.conf` ]]; then
  if [ -e $HOME/.work ]; then
    source $HOME/.work
  else
    echo "This looks like a work machine, but I can't find the ~/.work file"
  fi
fi
```

In this case, I key off of my work dns suffix (or multiple suffixes, depending on your situation) and source a separate file that makes my life at work a little better.

## That thing you do

Now is probably a good time to quit using the tilde (`~`) to represent your home directory when writing scripts. You'll find that there are some contexts where it's not recognized. Getting in the habit of using the environment variable `$HOME` will save you a lot of troubleshooting time and headaches later on.

The logical extension would be to have OS-specific dotfiles to include if you are so inclined.

## Memory, all alone in the moonlight

I've written embarrassing amounts of shell, and I've come to the conclusion that I really don't want to write more. It's not that shell can't do what I need most of the time, but I find that if I'm writing shell, I'm probably slapping together a duct-tape solution rather than permanently solving the problem.

Likewise, I hate memorizing things, and throughout my career, I have had to do radical context shifting during the course of a day. The practical consequence is that I've had to re-learn many things several times over the years. ("Wait... which for-loop structure does this language use?")

So, every so often I decide that I'm tired of looking up how to do something again. One way that I improve my life is by adding aliases.
A common scenario for anyone who works with systems is finding out what's taking up all of the disk. Unfortunately, I have never been able to remember this incantation, so I made a shell alias, creatively called `bigdirs` : `alias bigdirs='du --max-depth=1 2> /dev/null | sort -n -r | head -n20'` While I could be less lazy and actually memorize it, well, that's just not the Unix way... ## Typos, and the people who love them Another way that using shell aliases improves my life is by saving me from typos. I don't know why, but I've developed this nasty habit of typing a `w` after the sequence `ea` , so if I want to clear my terminal, I'll often type `cleawr` . Unfortunately, that doesn't mean anything to my shell. Until I add this little piece of gold: `alias cleawr='clear'` In one instance of Windows having an equivalent, but better, command, I find myself typing `cls` . It's frustrating to see your shell throw up its hands, so I add: `alias cls='clear'` Yes, I'm aware of `ctrl + l` , but I never use it. ## Amuse yourself Work can be stressful. Sometimes you just need to have a little fun. If your shell doesn't know the command that it clearly should *just do*, maybe you want to shrug your shoulders right back at it! You can do this with a function: `shrug() { echo "¯\_(ツ)_/¯"; }` If that doesn't work, maybe you need to flip a table: `fliptable() { echo "(╯°□°)╯ ┻━┻"; } # Flip a table. Example usage: fsck -y /dev/sdb1 || fliptable` Imagine my chagrin and frustration when I needed to flip a desk and I couldn't remember what I had called it. So I added some more shell aliases: ``` alias flipdesk='fliptable' alias deskflip='fliptable' alias tableflip='fliptable' ``` And sometimes you need to celebrate: ``` disco() { echo "(•_•)" echo "<) )╯" echo " / \ " echo "" echo "\(•_•)" echo " ( (>" echo " / \ " echo "" echo " (•_•)" echo "<) )>" echo " / \ " } ``` Typically, I'll pipe the output of these commands to `pbcopy ` and paste it into the relevant chat tool I'm using. I got this fun function from a Twitter account that I follow called "Command Line Magic:" [@climagic](https://twitter.com/climagic). Since I live in Florida now, I'm very happy that this is the only snow in my life: ``` snow() { clear;while :;do echo $LINES $COLUMNS $(($RANDOM%$COLUMNS));sleep 0.1;done|gawk '{a[$3]=0;for(x in a) {o=a[x];a[x]=a[x]+1;printf "\033[%s;%sH ",o,x;printf "\033[%s;%sH*\033[0;0H",a[x],x;}}' } ``` ## Fun with functions We've seen some examples of functions that I use. Since few of these examples require an argument, they could be done as aliases. I use functions out of personal preference when it's more than a single short statement. At various times in my career, I've run [Graphite](https://github.com/graphite-project/), an open-source, scalable, time-series metrics solution. There have been enough instances where I needed to transpose a metric path (delineated with periods) to a filesystem path (delineated with slashes), or vice versa, that it became useful to have dedicated functions for these tasks: ``` # Useful for converting between Graphite metrics and file paths function dottoslash() { echo $1 | sed 's/\./\//g' } function slashtodot() { echo $1 | sed 's/\//\./g' } ``` During another time in my career, I was running a lot of Kubernetes. If you aren't familiar with running Kubernetes, you need to write a lot of YAML. Unfortunately, it's not hard to write invalid YAML. Worse, Kubernetes doesn't validate YAML before trying to apply it, so you won't find out it's invalid until you apply it. 
Unless you validate it first: ``` function yamllint() { for i in $(find . -name '*.yml' -o -name '*.yaml'); do echo $i; ruby -e "require 'yaml';YAML.load_file(\"$i\")"; done } ``` Because I got tired of embarrassing myself and occasionally breaking a customer's setup, I wrote this little snippet and added it as a pre-commit hook to all of my relevant repos. Something similar would be very helpful as part of your continuous integration process, especially if you're working as part of a team. ## Oh, fingers, where art thou? I was once an excellent touch-typist. Those days are long gone. I typo more than I would have believed possible. At different times, I have used a fair amount of either Chef or Kubernetes. Fortunately for me, I never used both at the same time. Part of the Chef ecosystem is Test Kitchen, a suite of tools that facilitate testing, which is invoked with the commands `kitchen test` . Kubernetes is managed with a CLI tool `kubectl` . Both commands require several subcommands, and neither rolls off the fingers particularly fluidly. Rather than create a bunch of "typo aliases," I aliased those commands to `k` : `alias k='kitchen test $@'` or `alias k='kubectl $@'` ## Timesplitters The last half of my career has involved writing more code with other people. I've worked in many environments where we have forked copies of repos on our account and use pull requests as part of the review process. When I want to make sure that my fork of a given repo is up to date with the parent, I use `fetchupstream` : `alias fetchupstream='git fetch upstream && git checkout master && git merge upstream/master && git push'` ## Mine eyes have seen the glory of the coming of color I like color. It can make things like diffs easier to use. `alias diff='colordiff'` I thought that colorized man pages was a neat trick, so I incorporated this function: ``` # Colorized man pages, from: # http://boredzo.org/blog/archives/2016-08-15/colorized-man-pages-understood-and-customized man() { env \ LESS_TERMCAP_md=$(printf "\e[1;36m") \ LESS_TERMCAP_me=$(printf "\e[0m") \ LESS_TERMCAP_se=$(printf "\e[0m") \ LESS_TERMCAP_so=$(printf "\e[1;44;33m") \ LESS_TERMCAP_ue=$(printf "\e[0m") \ LESS_TERMCAP_us=$(printf "\e[1;32m") \ man "$@" } ``` I love the command `which` . It simply tells you where in the filesystem the command you're running comes from—unless it's a shell function. After multiple cascading dotfiles, sometimes it's not clear where a function is defined or what it does. It turns out that the `whence` and `type` commands can help with that. ``` # Where is a function defined? whichfunc() { whence -v $1 type -a $1 } ``` ## Conclusion I hope this article helps and inspires you to find ways to improve your daily shell-using experience. They don't need to be huge, novel, or complex. They might solve a minor but frequent bit of friction, create a shortcut, or even offer a solution to reducing common typos. You're welcome to look through my [dotfiles repo](https://github.com/gwaldo/dotfiles), but I warn you that it could use a lot of cleaning up. Feel free to use anything that you find helpful, and please be excellent to one another.
11,419
把“点文件”放到版本控制中
https://opensource.com/article/19/3/move-your-dotfiles-version-control
2019-10-03T20:52:42
[ "点文件" ]
/article-11419-1.html
> > 通过在 GitLab 或 GitHub 上分享你的点文件,可以在整个系统上备份或同步你的自定义配置。 > > > ![](/data/attachment/album/201910/03/205222yzo1rbck6accccvo.jpg) 通过隐藏文件集(称为<ruby> 点文件 <rt> dotfile </rt></ruby>)来定制操作系统是个非常棒的想法。在《[Shell 点文件可以为你做点什么](/article-11417-1.html)》一文中,H. Waldo Grunenwald 详细介绍了设置点文件的原因和方法。现在让我们深入探讨分享它们的原因和方式。 ### 什么是点文件? “<ruby> 点文件 <rt> dotfile </rt></ruby>”是指散布在我们计算机中的那些配置文件。这些文件的文件名通常以 `.` 开头,例如 `.gitconfig`,并且操作系统通常在默认情况下将其隐藏。例如,在 MacOS 上,只有当我使用 `ls -a` 时,才会显示所有这些可爱的点文件,否则它们就不会出现。 ``` dotfiles on master ➜ ls README.md Rakefile bin misc profiles zsh-custom dotfiles on master ➜ ls -a . .gitignore .oh-my-zsh README.md zsh-custom .. .gitmodules .tmux Rakefile .gemrc .global_ignore .vimrc bin .git .gvimrc .zlogin misc .gitconfig .maid .zshrc profiles ``` 如果看一下用于 Git 配置的 `.gitconfig`,我能看到大量的自定义配置。我设置了帐户信息、终端颜色首选项和大量别名,这些别名让我的命令行界面用起来有我自己的风格。这是 `[alias]` 块的摘录: ``` 87 # Show the diff between the latest commit and the current state 88 d = !"git diff-index --quiet HEAD -- || clear; git --no-pager diff --patch-with-stat" 89 90 # `git di $number` shows the diff between the state `$number` revisions ago and the current state 91 di = !"d() { git diff --patch-with-stat HEAD~$1; }; git diff-index --quiet HEAD -- || clear; d" 92 93 # Pull in remote changes for the current repository and all its submodules 94 p = !"git pull; git submodule foreach git pull origin master" 95 96 # Checkout a pull request from origin (of a github repository) 97 pr = !"pr() { git fetch origin pull/$1/head:pr-$1; git checkout pr-$1; }; pr" ``` 由于我的 `.gitconfig` 有 200 多行的自定义设置,我无意于在我使用的每一台新计算机或系统上重写它,其他人肯定也不想这样。这是分享点文件变得越来越流行的原因之一,尤其是随着社交编码网站 GitHub 的兴起。正式提倡分享点文件的文章是 Zach Holman 在 2008 年发表的《[点文件意味着被复刻](https://zachholman.com/2010/08/dotfiles-are-meant-to-be-forked/)》。其前提时至今日依然成立:我想与我自己、与点文件新手,以及那些分享了他们的自定义配置从而教会了我很多知识的人分享它们。 ### 分享点文件 我们中的许多人拥有多个系统,或者深知硬盘并不可靠,因此我们希望备份我们精心策划的自定义设置。那么我们如何在环境之间同步这些精彩的文件? 我最喜欢的答案是分布式版本控制,最好是可以为我处理繁重任务的服务。我经常使用 GitHub,随着我对 GitLab 的使用经验越来越丰富,我肯定会一如既往地继续喜欢它。任何一个这样的服务都是共享你的信息的理想场所。要自己设置的话可以这样做: 1. 登录到你首选的基于 Git 的服务。 2. 创建一个名为 `dotfiles` 的存储库。(将其设置为公开!分享即关爱。) 3. 将其克隆到你的本地环境。(你可能需要设置 Git 配置命令来克隆存储库。GitHub 和 GitLab 都会提示你需要运行的命令。) 4. 将你的点文件复制到该文件夹中。 5. 将它们符号链接回到其目标文件夹(最常见的是 `$HOME`)。 6. 将它们推送到远程存储库。 ![](/data/attachment/album/201910/03/205253y5e5ya4y556u5m35.png) 上面的步骤 5 是这项工作的关键,可能有些棘手。无论是使用脚本还是手动执行,工作流程都是从 `dotfiles` 文件夹符号链接到点文件的目标位置,以便对点文件的任何更新都可以轻松地推送到远程存储库。要对我的 `.gitconfig` 文件执行此操作,我要输入: ``` $ cd dotfiles/ $ ln -nfs .gitconfig $HOME/.gitconfig ``` 添加到符号链接命令的这些标志各有用处: * `-s` 创建符号链接而不是硬链接。 * `-f` 在发生错误时继续做其他符号链接(此处不需要,但在循环中很有用) * `-n` 避免符号链接到一个符号链接文件(等同于其他版本的 `ln` 的 `-h` 标志) 如果要更深入地研究可用参数,可以查看 IEEE 和开放小组的 [ln 规范](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/ln.html)以及 [MacOS 10.14.3](https://www.unix.com/man-page/FreeBSD/1/ln/) 上的版本。这些标志是我在别人的点文件中看到后才挖掘出来的。 你还可以使用一些其他代码来简化更新,例如我从 [Brad Parbs](https://github.com/bradp/dotfiles) 复刻的 [Rakefile](https://github.com/mbbroberg/dotfiles/blob/master/Rakefile)。另外,你也可以像 Jeff Geerling [在其点文件中](https://github.com/geerlingguy/dotfiles)那样,使它保持极其简单的状态。他使用[此 Ansible 剧本](https://github.com/geerlingguy/mac-dev-playbook)对文件进行符号链接。这样使所有内容保持同步很容易:你可以设置一个 cron 作业,或者偶尔在点文件的文件夹中执行一次 `git push`。 ### 简单旁注:什么不能分享 在继续之前,值得注意的是你不应该添加到共享的点文件存储库中的内容 —— 即使它以点开头。任何有安全风险的东西,例如 `.ssh/` 文件夹中的文件,都不是使用此方法分享的好选择。确保在把配置文件发布到网上之前仔细检查它们,并再三检查文件中没有 API 令牌。 ### 我应该从哪里开始? 
如果你不熟悉 Git,那么我[有关 Git 术语的文章](https://opensource.com/article/19/2/git-terminology)和常用命令[备忘清单](https://opensource.com/downloads/cheat-sheet-git)将会帮助你继续前进。 还有其他超棒的资源可帮助你开始使用点文件。多年前,我就发现了 [dotfiles.github.io](http://dotfiles.github.io/),并继续使用它来更广泛地了解人们在做什么。在其他人的点文件中隐藏了许多秘传知识。花时间浏览一些,大胆地将它们添加到自己的内容中。 我希望这是让你在计算机上拥有一致的点文件的快乐开端。 你最喜欢的点文件技巧是什么?添加评论或在 Twitter 上找我 [@mbbroberg](https://twitter.com/mbbroberg?lang=en)。 --- via: <https://opensource.com/article/19/3/move-your-dotfiles-version-control> 作者:[Matthew Broberg](https://opensource.com/users/mbbroberg) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
11,420
主流发行版之前的那些最早的 Linux 发行版
https://itsfoss.com/earliest-linux-distros/
2019-10-04T10:00:00
[ "发行版" ]
https://linux.cn/article-11420-1.html
> > 在这篇回溯历史的文章中,我们尝试回顾一些最早的 Linux 发行版是如何演变的,并形成我们今天所知道的发行版的。 > > > ![](/data/attachment/album/201910/04/100019oc79ftxhuw91x7rc.png) 在这里,我们尝试探讨了第一个 Linux 内核问世后,诸如 Red Hat、Debian、Slackware、SUSE、Ubuntu 等诸多流行的发行版的想法是如何产生的。 1991 年,Linux 最初以内核的形式发布。我们今天所知道的发行版,是在世界各地众多协作者创建了 shell、库、编译器和相关软件包之后,才得以成为一个完整的操作系统的。 ### 1、第一个已知的“发行版”是由 HJ Lu 创建的 我们今天所知的 Linux 发行版的形式可以追溯到 1992 年,当时可以用来访问 Linux 的第一个已知的类似发行版的工具是由 HJ Lu 发布的。它由两个 5.25 英寸软盘组成: ![Linux 0.12 Boot and Root Disks | Photo Credit](/data/attachment/album/201910/04/100021xb3xz61oozxbcbqq.jpg) * LINUX 0.12 BOOT DISK:“启动”磁盘用于首先启动系统。 * LINUX 0.12 ROOT DISK:第二个“根”磁盘,用于在启动后获取命令提示符以访问 Linux 文件系统。 要在硬盘上安装 LINUX 0.12,必须使用十六进制编辑器来编辑其主启动记录(MBR),这是一个非常复杂的过程,尤其是在那个时代。 > > 感觉太怀旧了? > > > 你可以[安装 cool-retro-term 应用程序](https://itsfoss.com/cool-retro-term/),它可以为你提供 90 年代计算机的复古外观的 Linux 终端。 > > > ### 2、MCC Interim Linux ![MCC Linux 0.99.14, 1993 | Image Credit](/data/attachment/album/201910/04/100028sbt9ujbjx29xuxpg.jpg) MCC Interim Linux 最初由英格兰曼彻斯特计算中心的 Owen Le Blanc 与 “LINUX 0.12” 同年发布,它是第一个面向新手用户的 Linux 发行版,它具有菜单驱动的安装程序和最终用户/编程工具。它也是以软盘集的形式,可以将其安装在系统上以提供基于文本的基本环境。 MCC Interim Linux 比 0.12 更加易于使用,并且在硬盘驱动器上的安装过程更加轻松,也更接近现代方式。它不需要使用十六进制编辑器来编辑 MBR。 尽管它于 1992 年 2 月首次发布,但自当年 11 月以来也可以通过 FTP 下载。 ### 3、TAMU Linux ![TAMU Linux | Image Credit](/data/attachment/album/201910/04/100029kz397iwlonxoyxr9.jpg) TAMU Linux 由 Texas A&M 的 Aggies 与 Texas A&M Unix & Linux 用户组于 1992 年 5 月开发,被称为 TAMU 1.0A。它是第一个提供 X Window System 的 Linux 发行版,而不仅仅是基于文本的操作系统。 ### 4、Softlanding Linux System (SLS) ![SLS Linux 1.05, 1994 | Image Credit](/data/attachment/album/201910/04/100031cromfglvzipyyrqg.jpg) 他们的口号是“DOS 伞降的温柔救援”!SLS 由 Peter McDonald 于 1992 年 5 月发布。SLS 在其时代得到了广泛的使用和流行,并极大地推广了 Linux 的思想。但是由于开发人员决定更改发行版中的可执行格式,因此用户停止使用它。 当今社区最熟悉的许多流行发行版是通过 SLS 演变而成的。其中两个是: * Slackware:由 Patrick Volkerding 于 1993 年创建,基于 SLS,是最早的 Linux 发行版之一。 * Debian:由 Ian Murdock 发起,在脱离 SLS 模式之后于 1993 年发布。我们今天知道的非常流行的 Ubuntu 发行版基于 Debian。 ### 5、Yggdrasil ![LGX Yggdrasil Fall 1993 | Image Credit](/data/attachment/album/201910/04/100034o21yhz60az10dg88.jpg) Yggdrasil 于 1992 年 12 月发行,是第一个产生 Live Linux CD 想法的发行版。它是由 Yggdrasil 计算公司开发的,该公司由位于加利福尼亚州伯克利的 Adam J. Richter 创立。它可以在系统硬件上自动配置自身,即“即插即用”功能,这是当今非常普遍且众所周知的功能。Yggdrasil 后来的版本包括一个用于在 Linux 中运行任何专有 MS-DOS CD-ROM 驱动程序的黑科技。 ![Yggdrasil’s Plug-and-Play Promo | Image Credit](/data/attachment/album/201910/04/100035rf14o70x5jt9l590.jpg) 他们的座右铭是“<ruby> 我们其余人的自由软件 <rp> ( </rp> <rt> Free Software For The Rest of Us </rt> <rp> ) </rp></ruby>”。 ### 6、Mandriva 在 90 年代后期,有一个非常受欢迎的发行版 [Mandriva](https://en.wikipedia.org/wiki/Mandriva_Linux),该发行版于 1998 年首次发行,是通过将法国的 Mandrake Linux 发行版与巴西的 Conectiva Linux 发行版统一起来形成的。它的每个版本的支持周期为 18 个月,期间提供 Linux 和系统软件的更新;基于桌面的更新则每年发布一次。它还有带有 5 年支持的服务器版本。现在是 [Open Mandriva](https://www.openmandriva.org/)。 如果你在 Linux 发行之初用过其他令人怀念的发行版,请在下面的评论中与我们分享。 --- via: <https://itsfoss.com/earliest-linux-distros/> 作者:[Avimanyu Bandyopadhyay](https://itsfoss.com/author/avimanyu/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
In this throwback history article, we’ve tried to look back into how some of the earliest Linux distributions evolved and came into being as we know them today. ![The Earliest Linux distributions](https://itsfoss.com/content/images/wordpress/2019/02/earliest-linux-distros-800x450.png) In here we have tried to explore how the idea of popular distros such as Red Hat, Debian, Slackware, SUSE, Ubuntu and many others came into being after the first Linux kernel became available. As Linux was initially released in the form of a kernel in 1991, the distros we know today was made possible with the help of numerous collaborators throughout the world with the creation of shells, libraries, compilers and related packages to make it a complete Operating System. ## 1. The first known “distro” by HJ Lu The way we know Linux distributions today goes back to 1992, when the first known distro-like tools to get access to Linux were released by HJ Lu. It consisted of two 5.25” floppy diskettes: ![](https://itsfoss.com/content/images/wordpress/2019/01/Linux-0.12-Floppies.jpg) [Photo Credit](https://en.wikipedia.org/wiki/File:Linux_0_12.jpg) **LINUX 0.12 BOOT DISK**: The “boot” disk was used to boot the system first.**LINUX 0.12 ROOT DISK**: The second “root” disk for getting a command prompt for access to the Linux file system after booting. To install 0.12 on a hard drive, one had to use a hex editor to edit its master boot record (MBR) and that was quite a complex process, especially during that era. Feeling too nostalgic? You can [install cool-retro-term application](https://itsfoss.com/cool-retro-term/) that gives you a Linux terminal in the vintage looks of the 90’s computers. ## 2. MCC Interim Linux ![](https://itsfoss.com/content/images/wordpress/2019/01/MCC-Interim-Linux-0.99.14-1993.jpg?fit=800%2C600&ssl=1) [Image Credit](https://www.reddit.com/r/unixporn/comments/9wv8c3/) Initially released in the same year as “LINUX 0.12” by Owen Le Blanc of Manchester Computing Centre in England, MCC Interim Linux was the first Linux distribution for novice users with a menu driven installer and end user/programming tools. Also in the form of a collection of diskettes, it could be installed on a system to provide a basic text-based environment. MCC Interim Linux was much more user-friendly than 0.12 and the installation process on a hard drive was much easier and similar to modern ways. It did not require using a hex editor to edit the MBR. Though it was first released in February 1992, it was also available for download through FTP since November that year. ## 3. TAMU Linux ![](https://itsfoss.com/content/images/wordpress/2019/01/TAMU-Linux.jpg) [Image Credit](https://archiveos.org/tamu/) TAMU Linux was developed by Aggies at Texas A&M with the Texas A&M Unix & Linux Users Group in May 1992 and was called TAMU 1.0A. It was the first Linux distribution to offer the X Window System instead of just a text based operating system. ## 4. Softlanding Linux System (SLS) ![](https://itsfoss.com/content/images/wordpress/2019/01/SLS-1.05-1994.jpg) [Image Credit](https://commons.wikimedia.org/wiki/File:Sls-linux_1.05.png) “Gentle Touchdowns for DOS Bailouts” was their slogan! SLS was released by Peter McDonald in May 1992. SLS was quite widely used and popular during its time and greatly promoted the idea of Linux. But due to a decision by the developers to change the executable format in the distro, users stopped using it. Many of the popular distros the present community is most familiar with, evolved via SLS. 
Two of them are: **Slackware**: One of the earliest Linux distros, Slackware was created by Patrick Volkerding in 1993. Slackware is based on SLS and was one of the very first Linux distributions.**Debian**: An initiative by Ian Murdock, Debian was also released in 1993 after moving on from the SLS model. The very popular Ubuntu distro we know today is based on Debian. ## 5. Yggdrasil ![](https://itsfoss.com/content/images/wordpress/2019/01/LGX_Yggdrasil_CD_Fall_1993.jpg?fit=781%2C800&ssl=1) [Image Credit](https://en.wikipedia.org/wiki/File:Lgx_yggdrasil_fall_1993.jpg) Released on December 1992, Yggdrasil was the first distro to give birth to the idea of Live Linux CDs. It was developed by Yggdrasil Computing, Inc., founded by Adam J. Richter in Berkeley, California. It could automatically configure itself on system hardware as “Plug-and-Play”, which is a very regular and known feature in today’s time. The later versions of Yggdrasil included a hack for running any proprietary MS-DOS CD-ROM driver within Linux. ![](https://itsfoss.com/content/images/wordpress/2019/01/Yggdrasil-Linux-Summer-1994.jpg) [Image Credit](https://en.wikipedia.org/wiki/File:Yggdrasil-linux-summer-94.JPG) Their motto was “Free Software For The Rest of Us”. In the late 90s, one very popular distro was [Mandriva](https://en.wikipedia.org/wiki/Mandriva_Linux), first released in 1998, by unifying the French *Mandrake Linux* distribution with the Brazilian *Conectiva Linux* distribution. It had a release lifetime of 18 months for updates related to Linux and system software and desktop based updates were released every year. It also had server versions with 5 years of support. Now we have [Open Mandriva](https://www.openmandriva.org/). If you have more nostalgic distros to share from the earliest days of Linux release, please share with us in the comments below.
11,422
用 Linux 命令显示硬件信息
https://opensource.com/article/19/9/linux-commands-hardware-information
2019-10-04T12:06:00
[ "硬件" ]
https://linux.cn/article-11422-1.html
> > 通过命令行获取计算机硬件详细信息。 > > > ![](/data/attachment/album/201910/04/120618q2k1fflrsy1bgbwp.jpg) 你可能会有很多的原因需要查清计算机硬件的详细信息。例如,你需要修复某些问题并在论坛上发出请求,人们可能会立即询问你的计算机具体的信息。或者当你想要升级计算机配置时,你需要知道现有的硬件型号和能够升级的型号。这些都需要查询你的计算机具体规格信息。 最简单的方法是使用标准的 Linux GUI 程序之一: * [i-nex](http://sourceforge.net/projects/i-nex/) 收集硬件信息,其显示方式类似于 Windows 下流行的 [CPU-Z](https://www.cpuid.com/softwares/cpu-z.html)。 * [HardInfo](http://sourceforge.net/projects/hardinfo.berlios/) 显示硬件具体信息,甚至包括一组八个流行的性能基准程序,你可以用它们评估你的系统性能。 * [KInfoCenter](https://userbase.kde.org/KInfoCenter) 和 [Lshw](http://www.binarytides.com/linux-lshw-command/) 也能够显示硬件的详细信息,并且可以从许多软件仓库中获取。 或者,你也可以拆开计算机机箱去查看硬盘、内存和其他设备上的标签信息。或者你可以在系统启动时,按下[相应的按键](http://www.disk-image.com/faq-bootmenu.htm)进入 UEFI 和 BIOS 界面获得信息。这两种方式都会向你显示硬件信息但省略软件信息。 你也可以使用命令行获取硬件信息。等一下… 这听起来有些困难。为什么你会要这样做? 有时候通过使用一条针对性强的命令可以很轻松的找到特定信息。也可能你没有可用的 GUI 程序或者只是不想安装这样的程序。 使用命令行的主要原因可能是编写脚本。无论你是使用 Linux shell 还是其他编程语言来编写脚本,通常都需要用到命令行。 很多检测硬件信息的命令都需要使用 root 权限。所以要么切换到 root 用户,要么使用 `sudo` 在普通用户状态下发出命令: ``` sudo <the_line_command> ``` 并按提示输入你的密码。 这篇文章介绍了很多用于发现系统信息的有用命令。文章最后的快速查询表对它们作出了总结。 ### 硬件概述 下面几条命令可以全面概述计算机硬件信息。 `inxi` 命令能够列出包括 CPU、图形、音频、网络、驱动器、分区、传感器等详细信息。当论坛里的人尝试帮助其他人解决问题的时候,他们常常询问此命令的输出。这是解决问题的标准诊断程序: ``` inxi -Fxz ``` `-F` 参数意味着你将得到完整的输出,`x` 增加细节信息,`z` 参数隐藏像 MAC 和 IP 等私人身份信息。 `hwinfo` 和 `lshw` 命令以不同的格式显示大量相同的信息: ``` hwinfo --short ``` 或 ``` lshw -short ``` 这两条命令的长格式输出非常详细,但也有点难以阅读: ``` hwinfo ``` 或 ``` lshw ``` ### CPU 详细信息 通过命令你可以了解关于你的 CPU 的任何信息。使用 `lscpu` 命令或与它相近的 `lshw` 命令查看 CPU 的详细信息: ``` lscpu ``` 或 ``` lshw -C cpu ``` 在这两个例子中,输出的最后几行都列出了所有 CPU 的功能。你可以查看你的处理器是否支持特定的功能。 使用这些命令的时候,你可以通过使用 `grep` 命令过滤复杂的信息,并缩小所需信息范围。例如,只查看 CPU 品牌和型号: ``` lshw -C cpu | grep -i product ``` 仅查看 CPU 的速度(兆赫兹): ``` lscpu | grep -i mhz ``` 或其 [BogoMips](https://en.wikipedia.org/wiki/BogoMips) 性能评分: ``` lscpu | grep -i bogo ``` `grep` 命令的 `-i` 参数代表搜索结果忽略大小写。 ### 内存 Linux 命令行使你能够收集关于你的计算机内存的所有可能的详细信息。你甚至可以不拆开计算机机箱就能确定是否可以为计算机添加额外的内存条。 使用 `dmidecode` 命令列出每根内存条和其容量: ``` dmidecode -t memory | grep -i size ``` 使用以下命令获取系统内存更多的信息,包括类型、容量、速度和电压: ``` lshw -short -C memory ``` 你肯定想知道的一件事是你的计算机可以安装的最大内存: ``` dmidecode -t memory | grep -i max ``` 现在检查一下计算机是否有空闲的插槽可以插入额外的内存条。你无需打开计算机机箱,使用命令就可以做到: ``` lshw -short -C memory | grep -i empty ``` 输出为空则意味着所有的插槽都在使用中。 确定你的计算机拥有多少显存需要用到两条命令。首先使用 `lspci` 列出所有设备,然后过滤出你想要的显卡设备信息: ``` lspci | grep -i vga ``` 标识视频控制器的输出行通常如下: ``` 00:02.0 VGA compatible controller: Intel Corporation 82Q35 Express Integrated Graphics Controller (rev 02) ``` 现在再加上视频设备号重新运行 `lspci` 命令: ``` lspci -v -s 00:02.0 ``` 输出信息中 `prefetchable` 那一行显示了系统中的显存大小: ``` ... Memory at f0100000 (32-bit, non-prefetchable) [size=512K] I/O ports at 1230 [size=8] Memory at e0000000 (32-bit, prefetchable) [size=256M] Memory at f0000000 (32-bit, non-prefetchable) [size=1M] ... 
``` 最后使用下面的命令展示当前内存使用量(兆字节): ``` free -m ``` 这条命令告诉你多少内存是空闲的、多少内存正在使用中,以及交换内存的大小和是否正在使用。例如,输出信息如下: ``` total used free shared buff/cache available Mem: 11891 1326 8877 212 1687 10077 Swap: 1999 0 1999 ``` `top` 命令为你提供内存使用更加详细的信息。它显示了当前全部内存和 CPU 使用情况,并按照进程 ID、用户 ID 及正在运行的命令细分。同时这条命令也是全屏输出: ``` top ``` ### 磁盘、文件系统和设备 你可以轻松查明有关磁盘、分区、文件系统和其他设备的信息。 显示每个磁盘设备的描述信息: ``` lshw -short -C disk ``` 通过以下命令获取任何指定的 SATA 磁盘详细信息,例如其型号、序列号以及支持的模式和扇区数量等: ``` hdparm -i /dev/sda ``` 当然,如果需要的话你应该将 `sda` 替换成 `sdb` 或者其他设备号。 要列出所有磁盘及其分区和大小,请使用以下命令: ``` lsblk ``` 使用以下命令获取更多有关扇区数量、大小、文件系统 ID 和类型,以及分区开始和结束扇区的信息: ``` fdisk -l ``` 要启动 Linux,你需要向 [GRUB](https://www.dedoimedo.com/computers/grub.html) 引导程序指明可挂载的分区。你可以使用 `blkid` 命令找到此信息。它列出了每个分区的唯一标识符(UUID)及其文件系统类型(例如 ext3 或 ext4): ``` blkid ``` 使用以下命令列出已挂载的文件系统和它们的挂载点,以及已用的空间和可用的空间(兆字节为单位): ``` df -m ``` 最后,你可以列出所有的 USB 和 PCI 总线以及其他设备的详细信息: ``` lsusb ``` 或 ``` lspci ``` ### 网络 Linux 提供大量的网络相关命令,下面只是几个例子。 查看你的网卡硬件详细信息: ``` lshw -C network ``` `ifconfig` 是显示网络接口的传统命令: ``` ifconfig -a ``` 但是现在许多人使用: ``` ip link show ``` 或 ``` netstat -i ``` 在阅读输出时,了解常见的网络缩写十分有用: | 缩写 | 含义 | | --- | --- | | `lo` | 回环接口 | | `eth0` 或 `enp*` | 以太网接口 | | `wlan0` | 无线网接口 | | `ppp0` | 点对点协议接口(由拨号调制解调器、PPTP VPN 连接或者 USB 调制解调器使用) | | `vboxnet0` 或 `vmnet*` | 虚拟机网络接口 | 表中的星号是通配符,代表各个系统中可能出现的不同字符序列。 使用以下命令显示默认网关和路由表: ``` ip route | column -t ``` 或 ``` netstat -r ``` ### 软件 让我们以显示最底层软件详细信息的两条命令来结束。例如,如果你想知道是否安装了最新的固件该怎么办?这条命令显示了 UEFI 或 BIOS 的日期和版本: ``` dmidecode -t bios ``` 内核版本是多少,以及它是 64 位的吗?网络主机名是什么?使用下面的命令查出结果: ``` uname -a ``` ### 快速查询表 | 用途 | 命令 | | --- | --- | | 显示所有硬件信息 | `inxi -Fxz` 或 `hwinfo --short` 或 `lshw -short` | | CPU 信息 | `lscpu` 或 `lshw -C cpu` | | 显示 CPU 功能(例如 PAE、SSE2) | `lshw -C cpu | grep -i capabilities` | | 报告 CPU 位数 | `lshw -C cpu | grep -i width` | | 显示当前内存大小和配置 | `dmidecode -t memory | grep -i size` 或 `lshw -short -C memory` | | 显示硬件支持的最大内存 | `dmidecode -t memory | grep -i max` | | 确定是否有空闲内存插槽 | `lshw -short -C memory | grep -i empty`(输出为空表示没有可用插槽) | | 确定显卡内存数量 | `lspci | grep -i vga` 然后指定设备号再次使用;例如:`lspci -v -s 00:02.0` 显卡内存数量就是 `prefetchable` 的值 | | 显示当前内存使用情况 | `free -m` 或 `top` | | 列出磁盘驱动器 | `lshw -short -C disk` | | 显示指定磁盘驱动器的详细信息 | `hdparm -i /dev/sda`(需要的话替换掉 `sda`) | | 列出磁盘和分区信息 | `lsblk`(简单) 或 `fdisk -l`(详细) | | 列出分区 ID(UUID) | `blkid` | | 列出已挂载文件系统挂载点以及已用和可用空间 | `df -m` | | 列出 USB 设备 | `lsusb` | | 列出 PCI 设备 | `lspci` | | 显示网卡详细信息 | `lshw -C network` | | 显示网络接口 | `ifconfig -a` 或 `ip link show` 或 `netstat -i` | | 显示路由表 | `ip route | column -t` 或 `netstat -r` | | 显示 UEFI/BIOS 信息 | `dmidecode -t bios` | | 显示内核版本、网络主机名等 | `uname -a` | 你有喜欢的命令被我忽略掉的吗?请添加评论分享给大家。 --- via: <https://opensource.com/article/19/9/linux-commands-hardware-information> 作者:[Howard Fosdick](https://opensource.com/users/howtech) 选题:[lujun9972](https://github.com/lujun9972) 译者:[way-ww](https://github.com/way-ww) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
There are many reasons you might need to find out details about your computer hardware. For example, if you need help fixing something and post a plea in an online forum, people will immediately ask you for specifics about your computer. Or, if you want to upgrade your computer, you'll need to know what you have and what you can have. You need to interrogate your computer to discover its specifications. The easiest way to do that is with one of the standard Linux GUI programs: [i-nex](http://sourceforge.net/projects/i-nex/) collects hardware information and displays it in a manner similar to the popular [CPU-Z](https://www.cpuid.com/softwares/cpu-z.html) under Windows. [HardInfo](http://sourceforge.net/projects/hardinfo.berlios/) displays hardware specifics and even includes a set of eight popular benchmark programs you can run to gauge your system's performance. [KInfoCenter](https://userbase.kde.org/KInfoCenter) and [Lshw](http://www.binarytides.com/linux-lshw-command/) also display hardware details and are available in many software repositories. Alternatively, you could open up the box and read the labels on the disks, memory, and other devices. Or you could enter the boot-time panels—the so-called UEFI or BIOS panels. Just hit [the proper program function key](http://www.disk-image.com/faq-bootmenu.htm) during the boot process to access them. These two methods give you hardware details but omit software information. Or, you could issue a Linux line command. Wait a minute… that sounds difficult. Why would you do this? Sometimes it's easy to find a specific bit of information through a well-targeted line command. Perhaps you don't have a GUI program available or don't want to install one. Probably the main reason to use line commands is for writing scripts. Whether you employ the Linux shell or another programming language, scripting typically requires coding line commands. Many line commands for detecting hardware must be issued under root authority. So either switch to the root user ID, or issue the command under your regular user ID preceded by **sudo**: `sudo <the_line_command>` and respond to the prompt for the root password. This article introduces many of the most useful line commands for system discovery. The quick reference chart at the end summarizes them. ## Hardware overview There are several line commands that will give you a comprehensive overview of your computer's hardware. The **inxi** command lists details about your system, CPU, graphics, audio, networking, drives, partitions, sensors, and more. Forum participants often ask for its output when they're trying to help others solve problems. It's a standard diagnostic for problem-solving: `inxi -Fxz` The **-F** flag means you'll get full output, **x** adds details, and **z** masks out personally identifying information like MAC and IP addresses. The **hwinfo** and **lshw** commands display much of the same information in different formats: `hwinfo --short` or `lshw -short` The long forms of these two commands spew out exhaustive—but hard to read—output: `hwinfo` or `lshw` ## CPU details You can learn everything about your CPU through line commands. View CPU details by issuing either the **lscpu** command or its close relative **lshw**: `lscpu` or `lshw -C cpu` In both cases, the last few lines of output list all the CPU's capabilities. Here you can find out whether your processor supports specific features.
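As a hedged illustration (the exact flag name varies by processor and appears in the capabilities output), you can script a check for one specific feature:

```
# Prints whether the CPU advertises SSE4.2; swap in any flag you care about.
if lscpu | grep -qi 'sse4_2'; then
    echo "SSE4.2: supported"
else
    echo "SSE4.2: not reported"
fi
```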
With all these commands, you can reduce verbiage and narrow any answer down to a single detail by parsing the command output with the **grep** command. For example, to view only the CPU make and model: `lshw -C cpu | grep -i product` To view just the CPU's speed in megahertz: `lscpu | grep -i mhz` or its [BogoMips](https://en.wikipedia.org/wiki/BogoMips) power rating: `lscpu | grep -i bogo` The **-i** flag on the **grep** command simply ensures your search ignores whether the output it searches is upper or lower case. ## Memory Linux line commands enable you to gather all possible details about your computer's memory. You can even determine whether you can add extra memory to the computer without opening up the box. To list each memory stick and its capacity, issue the **dmidecode** command: `dmidecode -t memory | grep -i size` For more specifics on system memory, including type, size, speed, and voltage of each RAM stick, try: `lshw -short -C memory` One thing you'll surely want to know is the maximum memory you can install on your computer: `dmidecode -t memory | grep -i max` Now find out whether there are any open slots to insert additional memory sticks. You can do this without opening your computer by issuing this command: `lshw -short -C memory | grep -i empty` A null response means all the memory slots are already in use. Determining how much video memory you have requires a pair of commands. First, list all devices with the **lspci** command and limit the output displayed to the video device you're interested in: `lspci | grep -i vga` The output line that identifies the video controller will typically look something like this: `00:02.0 VGA compatible controller: Intel Corporation 82Q35 Express Integrated Graphics Controller (rev 02)` Now reissue the **lspci** command, referencing the video device number as the selected device: `lspci -v -s 00:02.0` The output line identified as *prefetchable* is the amount of video RAM on your system: ``` ... Memory at f0100000 (32-bit, non-prefetchable) [size=512K] I/O ports at 1230 [size=8] Memory at e0000000 (32-bit, prefetchable) [size=256M] Memory at f0000000 (32-bit, non-prefetchable) [size=1M] ... ``` Finally, to show current memory use in megabytes, issue: `free -m` This tells how much memory is free, how much is in use, the size of the swap area, and whether it's being used. For example, the output might look like this: ``` total used free shared buff/cache available Mem: 11891 1326 8877 212 1687 10077 Swap: 1999 0 1999 ``` The **top** command gives you more detail on memory use. It shows current overall memory and CPU use and also breaks it down by process ID, user ID, and the commands being run. It displays full-screen text output: `top` ## Disks, filesystems, and devices You can easily determine whatever you wish to know about disks, partitions, filesystems, and other devices. To display a single line describing each disk device: `lshw -short -C disk` Get details on any specific SATA disk, such as its model and serial numbers, supported modes, sector count, and more with: `hdparm -i /dev/sda` Of course, you should replace **sda** with **sdb** or another device mnemonic if necessary.
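Since scripting was named earlier as the main reason to use line commands, here is a hedged sketch that loops one of them over every disk; the device glob is an assumption (SATA-style names; NVMe drives live at /dev/nvme* instead):

```
#!/bin/sh
# Print identification details for each SATA-style disk device present.
for disk in /dev/sd[a-z]; do
    [ -b "$disk" ] || continue   # skip if the glob matched no real device
    echo "== $disk =="
    sudo hdparm -i "$disk"
done
```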
To list all disks with all their defined partitions, along with the size of each, issue: `lsblk` For more detail, including the number of sectors, size, filesystem ID and type, and partition starting and ending sectors: `fdisk -l` To start up Linux, you need to identify mountable partitions to the [GRUB](https://www.dedoimedo.com/computers/grub.html) bootloader. You can find this information with the **blkid** command. It lists each partition's unique identifier (UUID) and its filesystem type (e.g., ext3 or ext4): `blkid` To list the mounted filesystems, their mount points, and the space used and available for each (in megabytes): `df -m` Finally, you can list details for all USB and PCI buses and devices with these commands: `lsusb` or `lspci` ## Network Linux offers tons of networking line commands. Here are just a few. To see hardware details about your network card, issue: `lshw -C network` Traditionally, the command to show network interfaces was **ifconfig**: `ifconfig -a` But many people now use: `ip link show` or `netstat -i` In reading the output, it helps to know common network abbreviations:
| Abbreviation | Meaning |
| --- | --- |
| `lo` | Loopback interface |
| `eth0` or `enp*` | Ethernet interface |
| `wlan0` | Wireless interface |
| `ppp0` | Point-to-Point Protocol interface (used by a dial-up modem, PPTP VPN connection, or USB modem) |
| `vboxnet0` or `vmnet*` | Virtual machine interface |
The asterisks in this table are wildcard characters, serving as a placeholder for whatever series of characters appear from system to system. To show your default gateway and routing tables, issue either of these commands: `ip route | column -t` or `netstat -r` ## Software Let's conclude with two commands that display low-level software details. For example, what if you want to know whether you have the latest firmware installed? This command shows the UEFI or BIOS date and version: `dmidecode -t bios` What is the kernel version, and is it 64-bit? And what is the network hostname? To find out, issue: `uname -a` ## Quick reference chart This chart summarizes all the commands covered in this article:
| Use | Command |
| --- | --- |
| Display info about all hardware | `inxi -Fxz` or `hwinfo --short` or `lshw -short` |
| Display all CPU info | `lscpu` or `lshw -C cpu` |
| Show CPU features (e.g., PAE, SSE2) | `lshw -C cpu | grep -i capabilities` |
| Report whether the CPU is 32- or 64-bit | `lshw -C cpu | grep -i width` |
| Show current memory size and configuration | `dmidecode -t memory | grep -i size` or `lshw -short -C memory` |
| Show maximum memory for the hardware | `dmidecode -t memory | grep -i max` |
| Determine whether memory slots are available | `lshw -short -C memory | grep -i empty` (a null answer means no slots available) |
| Determine the amount of video memory | `lspci | grep -i vga` then reissue with the device number, for example: `lspci -v -s 00:02.0`; the VRAM is the `prefetchable` value |
| Show current memory use | `free -m` or `top` |
| List the disk drives | `lshw -short -C disk` |
| Show detailed information about a specific disk drive | `hdparm -i /dev/sda` (replace `sda` if necessary) |
| List information about disks and partitions | `lsblk` (simple) or `fdisk -l` (detailed) |
| List partition IDs (UUIDs) | `blkid` |
| List mounted filesystems, their mount points, and megabytes used and available for each | `df -m` |
| List USB devices | `lsusb` |
| List PCI devices | `lspci` |
| Show network card details | `lshw -C network` |
| Show network interfaces | `ifconfig -a` or `ip link show` or `netstat -i` |
| Display routing tables | `ip route | column -t` or `netstat -r` |
| Display UEFI/BIOS info | `dmidecode -t bios` |
| Show kernel version, network hostname, more | `uname -a` |
Do you have a favorite command that I overlooked? Please add a comment and share it.
11,423
数码文件与文件夹收纳术(以照片为例)
http://karl-voit.at/managing-digital-photographs/
2019-10-05T00:10:00
[ "照片", "相片" ]
https://linux.cn/article-11423-1.html
![](/data/attachment/album/201910/05/000950xsxopomsrs55rrb5.jpg) * 更新 2014-05-14:增加了一些具体实例 * 更新 2015-03-16:根据照片的 GPS 坐标过滤图片 * 更新 2016-08-29:以新的 `filetags --filter` 替换已经过时的 `show-sel.sh` 脚本 * 更新 2017-08-28:geeqie 视频缩略图的邮件评论 * 更新 2018-03-06:增加了 Julian Kahnert 的链接 * 更新 2018-05-06:增加了作者在 2018 Linuxtage Graz 大会上 45 分钟演讲的视频 * 更新 2018-06-05:关于 metadata 的邮件回复 * 更新 2018-07-22:移动文件夹结构的解释到一篇它自己的文章中 * 更新 2019-07-09:关于在文件名中避免使用系谱和字符的邮件回复 每当度假或外出游玩时,我就会化身为一个富有激情的摄影师。所以,过去的几年中我积累了许多的 [JPEG](https://en.wikipedia.org/wiki/Jpeg) 文件。这篇文章中我会介绍我是如何避免 [供应商锁定](http://en.wikipedia.org/wiki/Vendor_lock-in)(LCTT 译注:<ruby> 供应商锁定 <rt> vendor lock-in </rt></ruby>,原为经济学术语,这里引申为避免过于依赖某一服务平台)的,以免受制于那些临时性的解决方案,甚至丢失数据。相反,我更倾向于使用那些可以让我**投入时间和精力打理,并能长久使用**的解决方案。 这一(相当长的)攻略 **并不仅仅适用于图像文件**:我将进一步阐述像是文件夹结构、文件的命名规则等等许多领域的事情。因此,这些规范适用于我所能接触到的所有类型的文件。 在我开始传授我的方法之前,我们应该先就我将要介绍的方法达成一个共识,那就是我们是否有相同的需求。如果你对 [raw 图像格式](https://en.wikipedia.org/wiki/Raw_image_format)十分推崇,将照片存储在云端或其他你信赖的地方(对我而言可能不会),那么你可能不会认同这篇文章将要描述的方式了。请根据你的情况来灵活做出选择。 ### 我的需求 对于 **将照片(或视频)从我的数码相机中导出到电脑里**,我只需要将 SD 卡插到我的电脑里并调用 `fetch-workflow` 软件。这一步也完成了**图像的软件预处理**,使其符合我的文件命名规范(下文会具体论述),同时也可以将图片旋转至正常的方向(而不是横着)。 这些文件将会被存入到我的摄影收藏文件夹 `$HOME/tmp/digicam/`。在这一文件夹中我希望能**遍历我的图像和视频文件**,以便于**整理/删除、重命名、添加/移除标签,以及将一系列相关的文件移动到相应的文件夹中**。 在完成这些以后,我将会**浏览包含图像/电影文件集的文件夹**。在极少数情况下,我希望**在独立的图像处理工具**(比如 [GIMP](http://www.gimp.org/))中打开一个图像文件。如果仅是为了**旋转 JPEG 文件**,我想找到一个快速的方法,不需要图像处理工具,并且是[以无损的方式](http://petapixel.com/2012/08/14/why-you-should-always-rotate-original-jpeg-photos-losslessly/)旋转 JPEG 图像。 我的数码相机支持用 [GPS](https://en.wikipedia.org/wiki/Gps) 坐标标记图像。因此,我需要一种方法来**可视化单个文件或一组文件的 GPS 坐标**,以显示我走过的路径。 我想拥有的另一个好功能是:假设你在威尼斯度假时拍了几百张照片。每一张都很漂亮,所以你每张都舍不得删除。另一方面,你可能想把一组更少的照片送给家里的朋友。而且,在他们嫉妒到爆炸之前,他们可能只希望看到 20 来张照片。因此,我希望能够**定义并显示一组特定的照片子集**。 就独立性和**避免锁定效应**而言,我不想使用那种一旦公司停止产品或服务就无法使用的工具。出于同样的原因,由于我是一个注重隐私的人,**我不想使用任何基于云的服务**。为了让自己对新的可能性保持开放的心态,我不希望只在一个特定的操作系统平台才可行的方案上倾注全部的精力。**基本的东西必须在任何平台上可用**(查看、导航、……),而**全套需求必须可以在 GNU/Linux 上运行**,对我而言,我选择 Debian GNU/Linux。 在我传授当前针对上述大量需求的解决方案之前,我必须解释一下我的一般文件夹结构和文件命名约定,我也使用它来命名数码照片。但首先,你必须认清一个重要的事实: #### iPhoto、Picasa,诸如此类应被认为是有害的 管理照片集的软件工具确实提供了相当酷的功能。它们提供了一个良好的用户界面,并试图为你提供满足各种需求的舒适的工作流程。 对它们我确实遇到了很多大问题。它们几乎对所有东西都使用专有的存储格式:图像文件、元数据等等。当你几年后想换用另一个软件时,这是一个大问题。相信我:总有一天你会因为多种原因而**更换软件**。 如果你现在正打算更换相应的工具,你会意识到 iPhoto 或 Picasa 是分别存储原始图像文件和你对它们所做的所有操作的(旋转图像、向图像文件添加描述/标签、裁剪等等)。如果你不能导出并重新导入到新工具,那么**所有的东西都将永远丢失**。而无损地进行转换和迁移几乎是不可能的。 我不想在一个会锁住我工作的工具上投入任何精力。**我也拒绝把自己绑定在任何专有工具上**。我是一个过来人,希望你们吸取我的经验。 这就是我在文件名中保留时间戳、图像描述或标记的原因。文件名是永久性的,除非我手动更改它们。当我把照片备份或复制到 U 盘或其他操作系统时,它们不会丢失。每个人都能读懂。任何未来的系统都能够处理它们。 ### 我的文件命名规范 这里有一个我在 [2018 Linuxtage Graz 大会](https://glt18.linuxtage.at)上做的[演讲](https://glt18-programm.linuxtage.at/events/321.html),其中详细阐述了我在本文中提到的想法和工作流程。 * [Grazer Linuxtage 2018 - The Advantages of File Name Conventions and Tagging](https://youtu.be/rckSVmYCH90) * [备份视频托管在 media.CCC.de](https://media.ccc.de/v/GLT18_-_321_-_en_-_g_ap147_004_-_201804281550_-_the_advantages_of_file_name_conventions_and_tagging_-_karl_voit) 我所有的文件都与一个特定的日期或时间有关,根据所采用的 [ISO 8601](https://en.wikipedia.org/wiki/Iso_date) 规范,我采用的是**日期戳**或**时间戳**。 带有日期戳和两个标签的示例文件名:`2014-05-09 Budget export for project 42 -- finance company.csv`。 带有时间戳(甚至包括可选秒)和两个标签的示例文件名:`2014-05-09T22.19.58 Susan presenting her new shoes -- family clothing.jpg`。 由于 ISO 时间戳中使用的冒号不适用于 Windows [NTFS 文件系统](https://en.wikipedia.org/wiki/Ntfs),因此,我用点代替冒号,以便将小时与分钟(以及可选的秒)区别开来。 如果是**持续的一段日期或时间**,我会将两个日期戳或时间戳用两个减号分开:`2014-05-09--2014-05-13 Jazz festival Graz -- folder tourism music.pdf`。
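下面是一个极简的示意脚本(并非原文使用的工具;假设系统使用 GNU `date`,并以文件的修改时间充当时间戳),演示如何按上述约定给文件名加上时间戳前缀。作者实际使用的是下文提到的 `date2name`:

```
#!/bin/sh
# 示意:把每个参数文件的修改时间作为 ISO 8601 风格时间戳加到文件名之前
for f in "$@"; do
    ts=$(date -r "$f" +"%Y-%m-%dT%H.%M.%S")   # GNU date 的 -r 取文件修改时间
    mv -i "$f" "$(dirname "$f")/$ts $(basename "$f")"
done
```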
文件名中的时间/日期戳的优点是,除非我手动更改它们,否则它们保持不变。当通过某些不处理这些元数据的软件进行处理时,包含在文件内容本身中的元数据(如 [Exif](https://en.wikipedia.org/wiki/Exif))往往会丢失。此外,使用这样的日期/时间戳开始的文件名可以确保文件按时间顺序显示,而不是按字母顺序显示。字母表是一种[完全人工的排序顺序](http://www.isisinform.com/reinventing-knowledge-the-medieval-controversy-of-alphabetical-order/),对于用户定位文件通常不太实用。 当我想将**标签**关联到文件名时,我将它们放在原始文件名和[文件名扩展名](https://en.wikipedia.org/wiki/File_name_extension)之间,中间用两个减号及其两侧的空格(` -- `)分隔。我的标签是小写的英文单词,不包含空格或特殊字符。有时,我可能会使用 `quantifiedself` 或 `usergenerated` 这样的连接词。我[倾向于选择一般类别](http://karl-voit.at/tagstore/en/papers.shtml),而不是太过具体的描述标签。我在 Twitter [hashtags](https://en.wikipedia.org/wiki/Hashtag)、文件名、文件夹名、书签、博文等各种地方重用这些标签。 标签作为文件名的一部分有几个优点。通过使用常用的桌面搜索引擎,你可以在标签的帮助下定位文件(下文进入具体工作流程之前给出了一个简单的检索示例)。文件名称中的标签不会因为复制到不同的存储介质上而丢失。而当系统把元信息存储在文件名之外的位置(如:元数据数据库、[点文件](https://en.wikipedia.org/wiki/Dot-file)、[备用数据流](https://en.wikipedia.org/wiki/NTFS#Alternate_data_streams_.28ADS.29)等)时,这些信息通常容易丢失。 当然,通常在文件和文件夹名称中,**请避免使用特殊字符**、变音符、冒号等。尤其是在不同操作系统平台之间同步文件时。 我的**文件夹名命名约定**与文件的相应规范相同。 注意:由于 [Memacs](https://github.com/novoid/Memacs) 的 [filenametimestamp](https://github.com/novoid/Memacs/blob/master/docs/memacs_filenametimestamps.org) 模块的聪明之处,所有带有日期/时间戳的文件和文件夹都出现在我的 Org 模式的日历(日程)上的同一天/同一时间。这样,我就能很好地了解当天发生了什么,包括我拍的所有照片。 ### 我的一般文件夹结构 在本节中,我将描述我的主文件夹中最重要的文件夹。注意:这一节将来可能会被移动到一个独立的页面。或许不是。让我们等着瞧 :-) (LCTT 译注:后来这一节已被作者扩展并移动到另外一篇[文章](https://karl-voit.at/folder-hierarchy/)。) 很多东西只有在一定的时间内才会引起人们的兴趣。这些内容包括用于快速浏览其内容的下载文件、解压缩后用于检查所含文件的压缩包、一些有趣的小内容等等。对于**临时的东西**,我有 `$HOME/tmp/` 子层次结构。新照片放在 `$HOME/tmp/digicam/` 中。我从 CD、DVD 或 USB 记忆棒临时复制的东西放在 `$HOME/tmp/fromcd/` 中。每当软件工具需要用户文件夹层次结构中的临时数据时,我就使用 `$HOME/tmp/Tools/` 作为起点。我经常使用的文件夹是 `$HOME/tmp/2del/`:`2del` 的意思是“随时可以删除”。例如,我所有的浏览器都使用这个文件夹作为默认的下载文件夹。如果我需要在机器上腾出空间,我会首先查看这个 `2del` 文件夹,从这里开始删除。 与上面描述的临时文件相比,我当然也想将文件**保存更长的时间**。这些文件被移动到我的 `$HOME/archive/` 子层次结构中。它有几个子文件夹用于备份、我想保留的 web 下载内容、我要存档的二进制文件、可移动媒体(CD、DVD、记忆棒、外部硬盘驱动器)的索引文件,和一个稍后(等找到合适的目标文件夹时)再归档的文件夹。有时,我太忙或没有耐心,顾不上将文件妥善整理。是的,那就是我,我甚至有一个名为“现在不要烦我”的文件夹。这对你而言是否很怪?:-) 我的归档中最重要的子层次结构是 `$HOME/archive/events_memories/` 及其子文件夹 `2014/`、`2013/`、`2012/` 等等。正如你可能已经猜到的,每个年份有一个**子文件夹**。每个年份文件夹中存放着一个个文件和文件夹。这些文件是根据我在前一节中描述的文件名约定命名的。文件夹名称以 [ISO 8601](https://en.wikipedia.org/wiki/Iso_date) 日期标签 “YYYY-MM-DD” 开头,后面跟着一个具有描述性的名称,如 `$HOME/archive/events_memories/2014/2014-05-08 Business marathon with/`。在这些与日期相关的文件夹中,我保存着各种与特定事件相关的文件:照片、(扫描的)pdf 文件、文本文件等等。 对于**共享数据**,我设置了一个 `$HOME/share/` 子层次结构。这是我的 Dropbox 文件夹,我用各种各样的方法(比如 [unison](http://www.cis.upenn.edu/%7Ebcpierce/unison/))来分享数据。我也在我的设备之间共享数据:家里的 Mac Mini、家里的 GNU/Linux 笔记本、Android 手机、root-server(我的个人云)、工作用的 Windows 笔记本。我不想在这里详细说明我的同步设置。如果你想了解相关的设置,可以参考另一篇相关的文章。:-) 在我的 `$HOME/templates_tags/` 子层次结构中,我保存了各种**模板文件**([LaTeX](https://github.com/novoid/LaTeX-KOMA-template)、脚本、…),插图和**徽标**,等等。 我的 **Org 模式** 文件,主要是保存在 `$HOME/org/`。我会克制自己,不在这里解释我有多喜欢 [Emacs/Org 模式](http://orgmode.org/) 以及我从中获益多少。你可能读过或听过我详细描述我用它做的很棒的事情。具体可以在我的博客上查找 [我的 Emacs 标签](http://karl-voit.at/tags/emacs),在 Twitter 上查找 [hashtag #orgmode](https://twitter.com/search?q%3D%2523orgmode&src%3Dtypd)。 以上就是我最重要的文件夹子层次结构设置方式。 ### 我的工作流程 哒哒哒,在你了解了我的文件夹结构和文件名约定之后,下面是我当前的工作流程和工具,我使用它们来满足我前面描述的需求。 请注意,**你必须知道你在做什么**。我这里的示例、文件夹路径等**只适用于我的机器和我的环境**。**你必须采用相应的**路径、文件名等来满足你的需求!
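在进入具体的工作流程之前,先补上前面承诺的检索示例(示意性质;路径与标签仅为举例,假设文件都按上述 ` -- ` 标签约定命名):

```
# 在归档目录下查找所有带有 graz 标签的文件
find "$HOME/archive" -type f -name '* -- *graz*'
```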
#### 工作流程:将文件从 SD 卡移动到笔记本电脑、旋转人像图像,并重命名文件 当我想把数据从我的数码相机移到我的 GNU/Linux 笔记本上时,我拿出它的 mini SD 存储卡,把它插到我的笔记本上。然后它会自动挂载在 `/media/digicam` 上。 然后,调用 [getdigicamdata](https://github.com/novoid/getdigicamdata.sh)。它做了如下几件事:它将文件从 SD 卡移动到一个临时文件夹中进行处理。原始文件名会转换为小写字符。所有的人像照片会使用 [jhead](http://www.sentex.net/~mwandel/jhead/) 旋转。同样使用 jhead,我从 Exif 头的时间戳中生成文件名称中的时间戳。使用 [date2name](https://github.com/novoid/date2name),我也将时间戳添加到电影文件中。处理完所有这些文件后,它们将被移动到新的数码相机文件的目标文件夹: `$HOME/tmp/digicam/tmp/`。 #### 工作流程:文件夹索引、查看、重命名、删除图像文件 为了快速浏览我的图像和电影文件,我喜欢使用 GNU/Linux 上的 [geeqie](http://geeqie.sourceforge.net/)。这是一个相当轻量级的图像浏览器,它具有其他文件浏览器所缺少的一大优势:我可以添加通过键盘快捷方式调用的外部脚本/工具。通过这种方式,我可以通过任意外部命令扩展这个图像浏览器的特性。 基本的图像管理功能是内置在 geeqie 中的:浏览我的文件夹层次结构、以窗口模式或全屏查看图像(快捷键 `f`)、重命名文件名、删除文件、显示 Exif 元数据(快捷键 `Ctrl-e`)。 在 OS X 上,我使用 [Xee](http://xee.c3.cx/)。与 geeqie 不同,它不能通过外部命令进行扩展。不过,基本的浏览、查看和重命名功能也是可用的。 #### 工作流程:添加和删除标签 我创建了一个名为 [filetags](https://github.com/novoid/filetag) 的 Python 脚本,用于向单个文件以及一组文件添加和删除标记。 对于数码照片,我使用标签,例如,`specialL` 用于我认为适合桌面背景的风景图片,`specialP` 用于我想展示给其他人的人像照片,`sel` 用于筛选,等等。 ##### 使用 geeqie 初始设置 filetags 向 geeqie 添加 `filetags` 是一个手动步骤:“Edit > Preferences > Configure Editors …”,然后创建一个附加条目 `New`。在这里,你可以定义一个新的桌面文件,如下所示: ``` [Desktop Entry] Name=filetags GenericName=filetags Comment= Exec=/home/vk/src/misc/vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh %F Icon= Terminal=true Type=Application Categories=Application;Graphics; hidden=false MimeType=image/*;video/*;image/mpo;image/thm Categories=X-Geeqie; ``` *add-tags.desktop* 封装脚本 `vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh` 是必须的,因为我想要弹出一个新的终端,以便添加标签到我的文件: ``` #!/bin/sh /usr/bin/gnome-terminal \ --geometry=85x15+330+5 \ --tab-with-profile=big \ --hide-menubar \ -x /home/vk/src/filetags/filetags.py --interactive "${@}" #end ``` *vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh* 在 geeqie 中,你可以在 “Edit > Preferences > Preferences … > Keyboard” 中设置快捷键。我将 `t` 与 `filetags` 命令相关联。 这个 `filetags` 脚本还能够从单个文件或一组文件中删除标记。它基本上使用与上面相同的方法。唯一的区别是给 `filetags` 脚本额外加上 `--remove` 参数: ``` [Desktop Entry] Name=filetags-remove GenericName=filetags-remove Comment= Exec=/home/vk/src/misc/vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh %F Icon= Terminal=true Type=Application Categories=Application;Graphics; hidden=false MimeType=image/*;video/*;image/mpo;image/thm Categories=X-Geeqie; ``` *remove-tags.desktop* ``` #!/bin/sh /usr/bin/gnome-terminal \ --geometry=85x15+330+5 \ --tab-with-profile=big \ --hide-menubar \ -x /home/vk/src/filetags/filetags.py --interactive --remove "${@}" #end ``` *vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh* 为了删除标签,我创建了一个键盘快捷方式 `T`。 ##### 在 geeqie 中使用 filetags 当我在 geeqie 文件浏览器中浏览图像文件时,我选择要标记的文件(一到多个)并按 `t`。然后,一个小窗口弹出,要求我提供一个或多个标签。用回车确认后,这些标签被添加到文件名中。 删除标签也是一样:选择多个文件,按下 `T`,输入要删除的标签,然后按回车确认。就是这样。几乎没有[比这更简单的给文件添加或删除标签的方法了](http://karl-voit.at/tagstore/)。 #### 工作流程:使用 appendfilename 改进文件重命名 ##### 不使用 appendfilename 重命名一大组文件可能是一个冗长乏味的过程。对于 `2014-04-20T17.09.11_p1100386.jpg` 这样的原始文件名,在文件名中添加描述的过程相当烦人。你可以按 `Ctrl-r`(重命名)在 geeqie 中打开文件重命名对话框。默认情况下,原始名称(不含扩展名的文件名)会被选中。因此,如果不希望删除/覆盖文件名(但要追加),则必须按下光标键 `→`。然后,光标放在基本名称和扩展名之间。输入你的描述(不要忘记以空格字符开始),并用回车进行确认。 ##### 在 geeqie 中使用 appendfilename 使用 [appendfilename](https://github.com/novoid/appendfilename),我的过程得到了简化,可以获得将文本附加到文件名的最佳用户体验:当我在 geeqie 中按下 `a`(附加)时,会弹出一个对话框窗口,询问文本。在回车确认后,输入的文本将放置在时间戳和可选标记之间。 例如,当我在 `2014-04-20T17.09.11_p1100386.jpg` 上按下 `a`,然后键入 `Pick-nick in Graz` 时,文件名变为 
`2014-04-20T17.09.11_p1100386 Pick-nick in Graz with Susan.jpg`。如果文件名中带有标记,附加的文本会被放在标记分隔符之前。 这样,我就不必担心覆盖时间戳或标记。重命名的过程对我来说变得更加有趣! 最好的部分是:当我想要将相同的文本添加到多个选定的文件中时,也可以使用 `appendfilename`。 ##### 在 geeqie 中初始设置 appendfilename 添加一个额外的编辑器到 geeqie: “Edit > Preferences > Configure Editors … > New”。然后输入桌面文件定义: ``` [Desktop Entry] Name=appendfilename GenericName=appendfilename Comment= Exec=/home/vk/src/misc/vk-appendfilename-interactive-wrapper-with-gnome-terminal.sh %F Icon= Terminal=true Type=Application Categories=Application;Graphics; hidden=false MimeType=image/*;video/*;image/mpo;image/thm Categories=X-Geeqie; ``` *appendfilename.desktop* 同样,我也使用了一个封装脚本,它将为我打开一个新的终端: ``` #!/bin/sh /usr/bin/gnome-terminal \ --geometry=90x5+330+5 \ --tab-with-profile=big \ --hide-menubar \ -x /home/vk/src/appendfilename/appendfilename.py "${@}" #end ``` *vk-appendfilename-interactive-wrapper-with-gnome-terminal.sh* #### 工作流程:播放电影文件 在 GNU/Linux 上,我使用 [mplayer](http://www.mplayerhq.hu) 回放视频文件。由于 geeqie 本身不播放电影文件,所以我必须创建一个设置,以便在 mplayer 中打开电影文件。 ##### 在 geeqie 中初始设置 mplayer 我已经使用 [xdg-open](https://wiki.archlinux.org/index.php/xdg-open) 将电影文件扩展名关联到 mplayer。因此,我只需要为 geeqie 创建一个通用的“open”命令,让它使用 `xdg-open` 打开任何文件及其关联的应用程序。 在 geeqie 中,再次访问 “Edit > Preferences > Configure Editors …” 添加“open”的条目: ``` [Desktop Entry] Name=open GenericName=open Comment= Exec=/usr/bin/xdg-open %F Icon= Terminal=true Type=Application hidden=false NOMimeType=*; MimeType=image/*;video/* Categories=X-Geeqie; ``` *open.desktop* 当你在 geeqie 中将快捷键 `o`(见上文)关联好之后,你就能用与之关联的应用程序打开视频文件(和其他文件)了。 ##### 使用 xdg-open 打开电影文件(和其他文件) 在上面的设置过程之后,当你的 geeqie 光标位于文件上方时,你只需按下 `o` 即可。就是如此简洁。 #### 工作流程:在外部图像编辑器中打开 我并不经常需要在 GIMP 中快速编辑图像文件。因此,我添加了一个快捷键 `g`,并将其与 geeqie 默认就已创建的外部编辑器 “GNU Image Manipulation Program (GIMP)” 关联起来。 这样,只需按下 `g` 就可以在 GIMP 中打开当前图像。 #### 工作流程:移动到存档文件夹 现在我已经在我的文件名中添加了注释,我想将单个文件移动到 `$HOME/archive/events_memories/2014/`,或者将一组文件移动到这个文件夹中的新文件夹中,如 `$HOME/archive/events_memories/2014/2014-05-08 business marathon after show - party`。 通常的方法是选择一个或多个文件,并用快捷方式 `Ctrl-m` 将它们移动到文件夹中。 何等繁复无趣之至!
因此,我(再次)编写了一个 Python 脚本,它为我完成了这项工作:[move2archive](https://github.com/novoid/move2archive)(简写为:`m2a`),需要一个或多个文件作为命令行参数。然后,出现一个对话框,我可以在其中输入一个可选文件夹名。当我不输入任何东西、直接按回车时,文件会被移动到相应年份的文件夹。当我输入一个类似 `Business-Marathon After-Show-Party` 的文件夹名称时,第一个图像文件的日期戳会被加到该文件夹名称的开头(`$HOME/archive/events_memories/2014/2014-05-08 Business-Marathon After-Show-Party`),然后创建该文件夹,并移动文件。 再一次,我在 geeqie 中选择一个或多个文件,按 `m`(移动),或者只按回车(没有特殊的子文件夹),或者输入一个描述性文本,这是要创建的子文件夹的名称(不必带日期戳)。 **没有一个图像管理工具像我的带有 appendfilename 和 move2archive 的 geeqie 一样可以通过快捷键快速且有趣地完成工作。** ##### 在 geeqie 里初始化 m2a 的相关设置 同样,向 geeqie 添加 `m2a` 是一个手动步骤:“Edit > Preferences > Configure Editors …”,然后创建一个附加条目“New”。在这里,你可以定义一个新的桌面文件,如下所示: ``` [Desktop Entry] Name=move2archive GenericName=move2archive Comment=Moving one or more files to my archive folder Exec=/home/vk/src/misc/vk-m2a-interactive-wrapper-with-gnome-terminal.sh %F Icon= Terminal=true Type=Application Categories=Application;Graphics; hidden=false MimeType=image/*;video/*;image/mpo;image/thm Categories=X-Geeqie; ``` *m2a.desktop* 封装脚本 `vk-m2a-interactive-wrapper-with-gnome-terminal.sh` 是必要的,因为我想要弹出一个新的终端窗口,以便输入我的文件要进入的目标文件夹: ``` #!/bin/sh /usr/bin/gnome-terminal \ --geometry=157x56+330+5 \ --tab-with-profile=big \ --hide-menubar \ -x /home/vk/src/m2a/m2a.py --pauseonexit "${@}" #end ``` *vk-m2a-interactive-wrapper-with-gnome-terminal.sh* 在 geeqie 中,你可以在 “Edit > Preferences > Preferences … > Keyboard” 将 `m` 与 `m2a` 命令相关联。 #### 工作流程:旋转图像(无损) 通常,我的数码相机会自动将人像照片标记为人像照片。然而,在某些特定的情况下(比如从装饰图案上方拍照),我的相机会出错。在那些**罕见的情况下**,我必须手动修正方向。 你必须知道,JPEG 文件格式是一种有损格式,应该只用于照片,而不是计算机生成的东西,如屏幕截图或图表。以傻瓜方式旋转 JPEG 图像文件通常会解压/可视化图像文件、旋转生成新的图像,然后重新编码结果。这将导致生成的图像[比原始图像质量差得多](http://petapixel.com/2012/08/14/why-you-should-always-rotate-original-jpeg-photos-losslessly/)。 因此,你应该使用无损方法来旋转 JPEG 图像文件。 再一次,我添加了一个“外部编辑器”到 geeqie:“Edit > Preferences > Configure Editors … > New”。在这里,我添加了两个条目:使用 [exiftran](http://manpages.ubuntu.com/manpages/raring/man1/exiftran.1.html),一个用于旋转 270 度(即逆时针旋转 90 度),另一个用于旋转 90 度(顺时针旋转 90 度): ``` [Desktop Entry] Version=1.0 Type=Application Name=Losslessly rotate JPEG image counterclockwise # call the helper script TryExec=exiftran Exec=exiftran -p -2 -i -g %f # Desktop files that are usable only in Geeqie should be marked like this: Categories=X-Geeqie; OnlyShowIn=X-Geeqie; # Show in menu "Edit/Orientation" X-Geeqie-Menu-Path=EditMenu/OrientationMenu MimeType=image/jpeg; ``` *rotate-270.desktop* ``` [Desktop Entry] Version=1.0 Type=Application Name=Losslessly rotate JPEG image clockwise # call the helper script TryExec=exiftran Exec=exiftran -p -9 -i -g %f # Desktop files that are usable only in Geeqie should be marked like this: Categories=X-Geeqie; OnlyShowIn=X-Geeqie; # Show in menu "Edit/Orientation" X-Geeqie-Menu-Path=EditMenu/OrientationMenu # It can be made verbose # X-Geeqie-Verbose=true MimeType=image/jpeg; ``` *rotate-90.desktop* 我创建了 geeqie 快捷键 `[`(逆时针方向)和 `]`(顺时针方向)。 #### 工作流程:可视化 GPS 坐标 我的数码相机有一个 GPS 传感器,它在 JPEG 文件的 Exif 元数据中存储当前的地理位置。位置数据以 [WGS 84](https://en.wikipedia.org/wiki/WGS84#A_new_World_Geodetic_System:_WGS_84) 格式存储,如 `47, 58, 26.73; 16, 23, 55.51`(纬度;经度)。这种格式可读性较差,我期望的是地图或位置名称。因此,我向 geeqie 添加了一些功能,这样我就可以在 [OpenStreetMap](http://www.openstreetmap.org/) 上看到单个图像文件的位置:“Edit > Preferences > Configure Editors … 
> New”。 ``` [Desktop Entry] Name=vkphotolocation GenericName=vkphotolocation Comment= Exec=/home/vk/src/misc/vkphotolocation.sh %F Icon= Terminal=true Type=Application Categories=Application;Graphics; hidden=false MimeType=image/bmp;image/gif;image/jpeg;image/jpg;image/pjpeg;image/png;image/tiff;image/x-bmp;image/x-gray;image/x-icb;image/x-ico;image/x-png;image/x-portable-anymap;image/x-portable-bitmap;image/x-portable-graymap;image/x-portable-pixmap;image/x-xbitmap;image/x-xpixmap;image/x-pcx;image/svg+xml;image/svg+xml-compressed;image/vnd.wap.wbmp; ``` *photolocation.desktop* 这调用了我的名为 `vkphotolocation.sh` 的封装脚本,它使用 [ExifTool](http://www.sno.phy.queensu.ca/%7Ephil/exiftool/) 以 [Marble](http://userbase.kde.org/Marble/Tracking) 能够读取和可视化的适当格式提取该坐标: ``` #!/bin/sh IMAGEFILE="${1}" IMAGEFILEBASENAME=`basename ${IMAGEFILE}` COORDINATES=`exiftool -c %.6f "${IMAGEFILE}" | awk '/GPS Position/ { print $4 " " $6 }'` if [ "x${COORDINATES}" = "x" ]; then zenity --info --title="${IMAGEFILEBASENAME}" --text="No GPS-location found in the image file." else /usr/bin/marble --latlon "${COORDINATES}" --distance 0.5 fi #end ``` *vkphotolocation.sh* 将它映射到键盘快捷键 `G` 后,我可以快速地得到**单个图像文件位置的地图展示**。 当我想将多个 JPEG 图像文件的**位置可视化为路径**时,我使用 [GpsPrune](http://activityworkshop.net/software/gpsprune/)。我没能找到让 GpsPrune 将一组文件作为命令行参数的方法。正因为如此,我必须手动启动 GpsPrune,用 “File > Add photos”选择一组文件或一个文件夹。 通过这种方式,我可以为每个 JPEG 位置在 OpenStreetMap 地图上获得一个点(如果配置为这样)。通过单击这样一个点,我可以得到相应图像的详细信息。 如果你恰好在国外拍摄照片,可视化 GPS 位置对**在文件名中添加描述**大有帮助! #### 工作流程:根据 GPS 坐标过滤照片 这并非我的工作流程。为了完整起见,我列出该工作流对应工具的特性。我想做的就是从一大堆图片中寻找那些在一定区域内(范围或点 + 距离)的照片。 到目前为止,我只找到了 [DigiKam](https://en.wikipedia.org/wiki/DigiKam),它能够[根据矩形区域进行过滤](https://docs.kde.org/development/en/extragear-graphics/digikam/using-kapp.html#idp7659904)。如果你知道其他工具,请将其添加到下面的评论或给我写一封电子邮件。 #### 工作流程:显示给定集合的子集 如上面的需求所述,我希望能够对一个文件夹中的文件定义一个子集,以便将这个小集合呈现给其他人。 工作流程非常简单:我向选择的文件添加一个标记(通过 `t`/`filetags`)。为此,我使用标记 `sel`,它是 “selection” 的缩写。在标记了一组文件之后,我可以按下 `s`,它与一个脚本相关联,该脚本只显示标记为 `sel` 的文件。 当然,这也适用于任何标签或标签组合。因此,用同样的方法,你可以对婚礼照片中所有同时标记了“教堂”和“戒指”的照片一览无余。 很棒的功能,不是吗?:-) ##### 初始设置 filetags 以便在 geeqie 中按标签过滤 你必须定义一个额外的“外部编辑器”,“Edit > Preferences > Configure Editors … > New”: ``` [Desktop Entry] Name=filetag-filter GenericName=filetag-filter Comment= Exec=/home/vk/src/misc/vk-filetag-filter-wrapper-with-gnome-terminal.sh Icon= Terminal=true Type=Application Categories=Application;Graphics; hidden=false MimeType=image/*;video/*;image/mpo;image/thm Categories=X-Geeqie; ``` *filter-tags.desktop* 再次调用我编写的封装脚本: ``` #!/bin/sh /usr/bin/gnome-terminal \ --geometry=85x15+330+5 \ --hide-menubar \ -x /home/vk/src/filetags/filetags.py --filter #end ``` *vk-filetag-filter-wrapper-with-gnome-terminal.sh* 带有参数 `--filter` 的 `filetags` 基本上完成的是:用户被要求输入一个或多个标签。然后,当前文件夹中所有匹配的文件都使用[符号链接](https://en.wikipedia.org/wiki/Symbolic_link)链接到 `$HOME/.filetags_tagfilter/`。然后,启动了一个新的 geeqie 实例,显示链接的文件。 在退出这个新的 geeqie 实例之后,你会回到之前用于选择的旧 geeqie 实例。 #### 用一个真实的案例来总结 哇哦,这是一篇很长的博客文章。你可能已经忘了前面讲过的内容。对于我在(扩展了标准功能集的)geeqie 中可以做的事情,这里有一个很酷的汇总: | 快捷键 | 功能 | | --- | --- | | `m` | 移到归档(m2a) | | `o` | 打开(针对非图像文件) | | `a` | 在文件名里添加字段 | | `t` | 文件标签(添加) | | `T` | 文件标签(删除) | | `s` | 文件标签(过滤) | | `g` | gimp | | `G` | 显示 GPS 信息 | | `[` | 无损的逆时针旋转 | | `]` | 无损的顺时针旋转 | | `Ctrl-e` | EXIF 图像信息 | | `f` | 全屏显示 | 文件名(包括它的路径)的部分及我用来操作该部分的相应工具: ``` /this/is/a/folder/2014-04-20T17.09 Picknick in Graz -- food graz.jpg [ move2archive ] [ date2name ] [appendfilename] [ filetags ] ``` 在实践中,我按照以下步骤将照片从相机保存到存档:我将 SD 存储卡放入计算机的 SD 读卡器中。然后我运行 
[getdigicamdata.sh](https://github.com/novoid/getdigicamdata.sh)。完成之后,我在 geeqie 中打开 `$HOME/tmp/digicam/tmp/`。我浏览了一下照片,把那些不成功的删除了。如果有一个图像的方向错误,我用 `[` 或 `]` 纠正它。 在第二步中,我向我认为值得评论的文件添加描述 (`a`)。每当我想添加标签时,我也这样做:我快速地标记所有应该共享相同标签的文件(`Ctrl + 鼠标点击`),并使用 [filetags](https://github.com/novoid/filetag)(`t`)进行标记。 要归并来自某一事件的文件,我会选中相应的文件,在 [move2archive](https://github.com/novoid/move2archive)(`m`)中键入事件描述,将它们移动到年度归档文件夹下的事件文件夹中;其余(不需要专门事件文件夹的)文件则不必输入事件描述,直接由 move2archive(`m`)移动到年度归档中。 作为工作流程的收尾,我删除了 SD 卡上的所有文件,把它从操作系统上弹出,然后把它放回我的数码相机里。 以上。 因为这种工作流程几乎不需要任何开销,所以评论、标记和归档照片不再是一项乏味的工作。 ### 最后 所以,这是一个详细描述我关于照片和电影的工作流程的叙述。你也许从中发现了一些自己感兴趣的东西。所以请不要犹豫,请使用下面的链接留下评论或电子邮件。 如果我的工作流程也适用于你,我希望能得到你的反馈。并且,如果你已经发布了你的工作流程或者找到了其他人工作流程的描述,也请留下评论! 及时行乐,莫让错误的工具或低效的方法浪费了我们的人生! ### 其他工具 请阅读[这篇关于 gThumb 的文章](http://karl-voit.at/2017/02/19/gthumb)。 如果上文所述符合你的需求,请根据相关建议选择对应的工具。 --- via: <http://karl-voit.at/managing-digital-photographs/> 作者:[Karl Voit](http://karl-voit.at) 译者:[qfzy1233](https://github.com/qfzy1233) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
301
Moved Permanently
null
11,424
Fedora 31 将放弃 32 位 i686 支持
https://fedoramagazine.org/in-fedora-31-32-bit-i686-is-86ed/
2019-10-05T11:15:59
[ "Fedora", "32位" ]
https://linux.cn/article-11424-1.html
![](/data/attachment/album/201910/05/111602sofdze38czrnf58z.jpg) Fedora 31 中[丢弃了](https://fedoraproject.org/wiki/Changes/Stop_Building_i686_Kernels) 32 位 i686 内核,因此也不再提供可启动镜像。虽然可能有一些用户仍然拥有无法与 64 位 x86\_64 内核一起使用的硬件,但数量很少。本文为你讲述这次更改的来龙去脉,以及在 Fedora 31 中仍然可以找到的 32 位元素。 ### 发生了什么? i686 架构实质上从 [Fedora 27 版本](https://fedoramagazine.org/announcing-fedora-27/)就进入了社区支持阶段(LCTT 译注:不再由官方支持)。不幸的是,社区中没有足够的成员愿意做维护该体系结构的工作。不过请放心,Fedora 不会删除所有 32 位软件包,仍在构建许多 i686 软件包,以确保诸如 multilib、wine 和 Steam 之类的东西可以继续工作。 尽管不再对这些存储库进行整体构建和镜像分发,但存在一个 koji i686 存储库,该库可与 mock 一起使用以构建 32 位程序包,并且可以在紧要关头安装不属于 x86\_64 multilib 存储库的 32 位版本。当然,维护人员预计它只会在有限的场景中使用。仅需运行某个 32 位应用程序的用户应该可以在 64 位系统上使用 multilib 来运行。 ### 如果你要运行 32 位应用需要做什么? 如果你仍在运行 32 位 i686 系统,则在 Fedora 30 的生命周期内(直到大约 2020 年 5 月或 6 月)会继续收到受支持的 Fedora 更新。到那时,如果硬件支持,你可以将其重新安装为 64 位 x86\_64,或者如果可能的话,将其替换为支持 64 位的硬件。 社区中有一个用户已经成功地从 32 位 Fedora “升级” 到了 64 位 x86 Fedora。虽然这不是预期或受支持的升级路径,但应该也可行。该项目希望可以为具有 64 位硬件的用户提供一些文档,以便在 Fedora 30 生命周期结束之前说明该升级过程。 如果有 64 位的 CPU,但由于内存不足而运行 32 位 Fedora,请尝试[其他桌面定制版](https://spins.fedoraproject.org)之一。LXDE 和其他桌面在内存受限的环境中往往表现良好。对于仅在手头闲置的旧 32 位硬件上运行简单服务器的用户,请考虑使用较新的 ARM 板之一。在许多情况下,仅节能一项就可以支付新硬件的费用。如果以上皆不可行,[CentOS 7](https://centos.org) 提供了一个 32 位镜像,并对该平台提供长期支持。 ### 安全与你 尽管有些用户可能会在生命周期结束后继续运行旧版本的 Fedora,但强烈建议不要这样做。人们不断研究软件的安全问题。通常,他们发现这些问题已经存在多年了。 一旦 Fedora 维护人员知道了此类问题,他们通常会为它们打补丁,并为支持的发行版提供更新,而不会提供给已结束生命周期的发行版。当然,一旦这些漏洞公开,就会有人尝试利用它们。如果你在生命周期结束后仍运行较旧的发行版,则安全风险会随着时间的推移而增加,从而使你的系统面临不断增长的风险。 --- via: <https://fedoramagazine.org/in-fedora-31-32-bit-i686-is-86ed/> 作者:[Justin Forbes](https://fedoramagazine.org/author/jforbes/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
The release of Fedora 31 [drops](https://fedoraproject.org/wiki/Changes/Stop_Building_i686_Kernels) the 32-bit i686 kernel, and as a result bootable images. While there may be users out there who still have hardware which will not work with the 64-bit x86_64 kernel, there are very few. However, this article gives you the whole story behind the change, and what 32-bit material you’ll still find in Fedora 31. ## What is happening? The i686 architecture essentially entered community support with the [Fedora 27 release](https://fedoramagazine.org/announcing-fedora-27/). Unfortunately, there are not enough members of the community willing to do the work to maintain the architecture. Don’t worry, though — Fedora is not dropping all 32-bit packages. Many i686 packages are still being built to ensure things like multilib, wine, and Steam will continue to work. While the repositories are no longer being composed and mirrored out, there is a koji i686 repository which works with *mock* for building 32-bit packages, and in a pinch to install 32-bit versions which are not part of the x86_64 multilib repository. Of course, maintainers expect this will see limited use. Users who simply need to run a 32-bit application should be able to do so with multilib on a 64-bit system. ## What to do if you’re running 32-bit If you still run 32-bit i686 installations, you’ll continue to receive supported Fedora updates through the Fedora 30 lifecycle. This is until roughly May or June of 2020. At that point, you can either reinstall as 64-bit x86_64 if your hardware supports it, or replace your hardware with 64-bit capable hardware if possible. There is a user in the community who has done a successful “upgrade” from 32-bit Fedora to 64-bit x86 Fedora. While this is not an intended or supported upgrade path, it should work. The Project hopes to have some documentation for users who have 64-bit capable hardware to explain the process before the Fedora 30 end of life. If you have a 64-bit capable CPU running 32-bit Fedora due to low memory, try one of the [alternate desktop spins](https://spins.fedoraproject.org). LXDE and others tend to do fairly well in memory constrained environments. For users running simple servers on old 32-bit hardware that was just lying around, consider one of the newer ARM boards. The power savings alone can more than pay for the new hardware in many instances. And if none of these are on option, [CentOS 7](https://centos.org) offers a 32-bit image with longer term support for the platform. ## Security and you While some users may be tempted to keep running an older Fedora release past end of life, this is highly discouraged. People constantly research software for security issues. Often times, they find these issues which have been around for years. Once Fedora maintainers know about such issues, they typically patch for them, and make updates available to supported releases — but not to end of life releases. And of course, once these vulnerabilities are public, there will be people trying to exploit them. If you run an older release past end of life, your security exposure increases over time as a result, putting your system at ever-growing risk. *Photo by **Alexandre Debiève** on Unsplash*. ## Kamil I don’t really understand the title. I guess “86ed” is supposed to mean “dead”, for some unclear reason? Perhaps I’d get it if I were a native English speaker. But for important announcements, I’d recommend avoiding wordplay that is only clear to native speakers. 
## Ben Cotton @Kamil, thanks for the reminder. We’ll be more careful in future articles. Your understanding of the term is correct enough. It generally means discarded. Wikipedia has a few possible explanations of the term’s origin. ## Jens Petersen Apparently it is American slang. ## Kamil Ah, thanks, Ben. Now it makes much more sense. ## Stuart D Gathman Inside puns and jokes are no problem when there are hyperlinks for the uninitiated. No need to leave out a good pun when clicking on “86ed” can lead to an explanation. ## Mihai I am not a native english speaker and the title is far from a conundrum. The pun was well crafted and it did made me chuckle 🙂 ## Silvia Sanchez Well, I am native speaker and I didn’t get it until I read the comments. So yeah, I think it would be better to avoid puns that are not easily understood by everyone. ## Bryan J Smith https://wikipedia.org/wiki/86_(term) ## Maciej Is there a way to request a lib? Unfortunately I still need libstdc++.so.5 to run “snx SSL Network Extender”, other libs to run this thingie are still present in 31, but this one. Installing rpm from 30 still works though. I know, I know… 🙂 ## Mike Can you give some pointers on how to set up and use multilib for running 32bit applications under 64 bit OS? ## Roger The use of the term ’86ed’ was entirely appropriate. Do not ‘dumb-down’ an article for any particular audience. It does no harm to non-native English speakers when they are introduced to English idioms. English is a big language that has welcomed many other words and expressions from other languages into everyday use. It is a challenge to everyone. ## Silvia Sanchez So if they do not understand, they do not matter? That’s not very community spirit, me thinks. ## J-Money Who’s saying they don’t matter? Have you ever come across a word you don’t know and looked it up? This is the same thing. By your logic nobody should ever use a word that somebody might not know. Ridiculous. ## Anonymous Semi unrelated, but speaking of removal of packages, some packages got removed from Fedora 31 for another reason, python2. PlayOnLinux and Bleachbit are examples of ones I use on a daily basis. It’s a shame, but for now I use the Fedora 30 builds of PlayOnLinux and Bleachbit and they work fine for now. Is there any chance they could get added back to the repos? If not, is there a chance that they will stop working in the future? ## Luya Tshimbalanga Python2 is reaching end of life on January 1, 2020. ## J-Money I gotta say this article title was genius. Bravo! ## Anonymous Coward* Totally agree! This headline is attention-grabbing, funny, and with a cultural background. So in one word, it’s smart! This is what makes us remember a story. This is the best headline I’ve read so far in Fedora Magazine and we’d like to see more like this! I’m not a native speaker and I believe this is a good opportunity to learn more about the language and the culture. We can look at the brilliant tech news site The Register. Plenty of British humor, plenty of idioms, slang and cultural references from all over the English world. How do they solve the comprehension conundrum for their audience? Some writers add asterisks to a footnote, others give a short explanation or a link within the article. Because of course important information has to be clear to everyone. 
Tech news without a spoon of smart fun is boring… the collective anonymous handle for commentards on The Register (https://www.theregister.co.uk/) ## Manfred Where is it possible to have some more info on the koji i686 repository? ## Andrew Marchant Don’t worry about the slang in the title, it means nothing to English-speaking Right-Pondians* either! Shame to drop 32-bit (I still miss 8 bit) but usable 64 bit machines are cheap these days so it’s time to move on. See: https://www.urbandictionary.com/define.php?term=right-pondian ## Vernon Van Steenkist Very disappointing. I wish this was stated when I installed Fedora 30 LXQT on my 32 bit X86 netbook and wasted all the time with configuration – especially with having to select an alternate Network Manager repository. If I had known. I would have stuck with Debian. Also, it would have been great if you suggested alternative 32 bit X86 Linux distributions. ## Manfred I am sad about this too, I wish there were some way to keep i686 going.
11,426
给 Zsh 添加主题和插件
https://opensource.com/article/19/9/adding-plugins-zsh
2019-10-05T12:05:32
[ "Zsh" ]
https://linux.cn/article-11426-1.html
> > 通过 Oh My Zsh 安装的主题和插件来扩展 Zsh 的功能。 > > > ![](/data/attachment/album/201910/05/120457r49mk2l9oelv94bi.jpg) 在我的[前文](/article-11378-1.html)中,我向大家展示了如何安装并使用 [Z-Shell](/article-11378-1.html) (Zsh)。对于某些用户来说,Zsh 最令人激动的是它可以安装主题。Zsh 安装主题非常容易,一方面是因为有非常活跃的社区为 Z-Shell 设计主题,另一方面是因为有 [Oh My Zsh](https://ohmyz.sh/) 这个项目。这使得安装主题变得轻而易举。 主题的变化可能会立刻吸引你的注意力,因此如果你安装了 Zsh 并且将默认的 Shell 替换为 Zsh 时,你可能不喜欢 Shell 默认主题的样子,那么你可以立即更换 Oh My Zsh 自带的 100 多个主题。Oh My Zsh 不仅拥有大量精美的主题,同时还有数以百计的扩展 Zsh 功能的插件。 ### 安装 Oh My Zsh Oh My Zsh 的[官网](https://ohmyz.sh/)建议你使用一个脚本在有网络的情况下来安装这个包。尽管 Oh My Zsh 项目几乎是可以令人信服的,但是盲目地在你的电脑上运行一个脚本这是一个糟糕的建议。如果你想运行这个脚本,你可以把它下载下来,看一下它实现了什么功能,在你确信你已经了解了它的所作所为之后,你就可以运行它了。 如果你下载了脚本并且阅读了它,你就会发现安装过程仅仅只有三步: #### 1、克隆 oh-my-zsh 第一步,克隆 oh-my-zsh 库到 `~/.oh-my-zsh` 目录: ``` % git clone http://github.com/robbyrussell/oh-my-zsh ~/.oh-my-zsh ``` #### 2、切换配置文件 下一步,备份你已有的 `.zshrc` 文件,然后将 oh-my-zsh 自带的配置文件移动到这个地方。这两步操作可以一步完成,只需要你的 `mv` 命令支持 `-b` 这个选项。 ``` % mv -b \ ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc ``` #### 3、编辑配置文件 默认情况下,Oh My Zsh 自带的配置文件是非常简陋的。如果你想将你自己的 `~/.zshrc` 文件合并到 `.oh-my-zsh` 的配置文件中。你可以使用 [cat](https://opensource.com/article/19/2/getting-started-cat-command) 命令将你的旧的配置文件添加到新文件的末尾。 ``` % cat ~/.zshrc~ >> ~/.zshrc ``` 看一下默认的配置文件以及它提供的一些选项。用你最喜欢的编辑器打开 `~/.zshrc` 文件。这个文件有非常良好的注释。这是了解它的一个非常好的方法。 例如,你可以更改 `.oh-my-zsh` 目录的位置。在安装的时候,它默认是位于你的家目录。但是,根据 [Free Desktop](http://freedesktop.org) 所定义的现代 Linux 规范。这个目录应当放置于 `~/.local/share` 。你可以在配置文件中进行修改。如下所示: ``` # Path to your oh-my-zsh installation. export ZSH=$HOME/.local/share/oh-my-zsh ``` 然后将 .oh-my-zsh 目录移动到你新配置的目录下: ``` % mv ~/.oh-my-zsh $HOME/.local/share/oh-my-zsh ``` 如果你使用的是 MacOS,这个目录可能会有点含糊不清,但是最合适的位置可能是在 `$HOME/Library/Application\ Support`。 ### 重新启动 Zsh 编辑配置文件之后,你必须重新启动你的 Shell。在这之前,你必须确定你的任何操作都已正确完成。例如,在你修改了 `.oh-my-zsh` 目录的路径之后。不要忘记将目录移动到新的位置。如果你不想重新启动你的 Shell。你可以使用 `source` 命令来使你的配置文件生效。 ``` % source ~/.zshrc ➜ .oh-my-zsh git:(master) ✗ ``` 你可以忽略任何丢失更新文件的警告;他们将会在重启的时候再次进行解析。 ### 更换你的主题 安装好 oh-my-zsh 之后。你可以将你的 Zsh 的主题设置为 `robbyrussell`,这是一个该项目维护者的主题。这个主题的更改是非常小的,仅仅是改变了提示符的颜色。 你可以通过列出 `.oh-my-zsh` 目录下的所有文件来查看所有安装的主题: ``` ➜ .oh-my-zsh git:(master) ✗ ls ~/.local/share/oh-my-zsh/themes 3den.zsh-theme adben.zsh-theme af-magic.zsh-theme afowler.zsh-theme agnoster.zsh-theme [...] 
```

想在切换主题之前查看一下它的样子,你可以查看 Oh My Zsh 的 [wiki](https://github.com/robbyrussell/oh-my-zsh/wiki/Themes) 页面。要查看更多主题,可以查看 [外部主题](https://github.com/robbyrussell/oh-my-zsh/wiki/External-themes) wiki 页面。

大部分的主题都非常易于安装和使用,只需要改变 `.zshrc` 文件中的主题名,然后重新载入配置文件:

```
➜ ~ sed -i 's/_THEME=\"robbyrussel\"/_THEME=\"linuxonly\"/g' ~/.zshrc
➜ ~ source ~/.zshrc
seth@darkstar:pts/0->/home/skenlon (0) ➜
```

其他的主题可能需要一些额外的配置。例如,为了使用 `agnoster` 主题,你必须先安装 Powerline 字体。这是一个开源字体,如果你使用 Linux 操作系统的话,它很可能就在你的软件库中。使用下面的命令安装这个字体:

```
➜ ~ sudo dnf install powerline-fonts
```

在配置文件中更改你的主题:

```
➜ ~ sed -i 's/_THEME=\"linuxonly\"/_THEME=\"agnoster\"/g' ~/.zshrc
```

然后重新启动你的 Shell(一个简单的 `source` 命令并不会起作用)。重启之后,你就可以看到新的主题:

![agnoster theme](/data/attachment/album/201910/05/120539pnofailixuohniou.jpg "agnoster theme")

### 安装插件

Oh My Zsh 有超过 200 个插件,你可以在 `.oh-my-zsh/plugins` 中看到它们。每个插件的目录下都有一个 `README` 文件,解释了这个插件的作用。

一些插件相当简单。例如,`dnf`、`ubuntu`、`brew` 和 `macports` 插件仅仅是为了简化与 DNF、Apt、Homebrew 和 MacPorts 的交互操作而定义的一些别名。

而其他的一些插件则较为复杂。`git` 插件默认就是激活的,当你的目录是一个 git 仓库的时候,这个插件就会更新你的 Shell 提示符,以显示当前的分支以及是否有未合并的更改。

要激活某个插件,将它添加到 `~/.zshrc` 配置文件的插件列表中。例如,要添加 `dnf` 和 `pass` 插件,可以这样修改:

```
plugins=(git dnf pass)
```

保存修改,重新加载你的 Shell 配置:

```
% source ~/.zshrc
```

这些插件现在就可以使用了。你可以通过使用 `dnf` 插件提供的别名来测试一下:

```
% dnfs fop
====== Name Exactly Matched: fop ======
fop.noarch : XSL-driven print formatter
```

不同的插件做不同的事,因此你可以一次只安装一两个插件,来帮你学习新的特性和功能。

### 兼容性

一些 Oh My Zsh 插件具有通用性。如果你看到一个插件声称它与 Bash 兼容,而它的代码也确实与 Bash 兼容,那么你就可以在自己的 Bash 中使用它。另一些插件需要 Zsh 特有的功能,因此并不是所有插件都能在 Bash 中工作。不过,像 `dnf`、`ubuntu`、`firewalld` 这样的插件,你可以用 `source` 命令把它们加载到 Bash 中使用。例如:

```
if [ -d $HOME/.local/share/oh-my-zsh/plugins ]; then
        source $HOME/.local/share/oh-my-zsh/plugins/dnf/dnf.plugin.zsh
fi
```

### 选择或者不选择 Zsh

Z-shell 的内置功能和它由社区贡献的扩展功能都非常强大。你可以把它当成你的主 Shell 使用,也可以只在休闲娱乐的时候尝试一下,这取决于你自己。

你最喜爱的 Z-shell 主题和插件是什么?请在下方的评论中告诉我们!

---

via: <https://opensource.com/article/19/9/adding-plugins-zsh>

作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[amwps290](https://github.com/amwps290) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
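补充一个小技巧:上文的 `sed` 命令依赖于对旧主题名的精确匹配,一旦拼写不符就不会有任何效果。下面是一个示意写法:不管当前主题是什么,都直接改写 `ZSH_THEME` 这一行,并保留一份 `.bak` 备份(`ZSH_THEME` 是 Oh My Zsh 配置文件中保存主题名的变量):

```
# 直接改写 ZSH_THEME 一行,并保留 ~/.zshrc.bak 备份
% sed -i.bak 's/^ZSH_THEME=.*/ZSH_THEME="agnoster"/' ~/.zshrc
% grep '^ZSH_THEME' ~/.zshrc    # 确认修改结果
```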
200
OK
In my [previous article](https://opensource.com/article/19/9/getting-started-zsh), I explained how to get started with [Z-shell](https://opensource.com/article/19/9/getting-started-zsh) (Zsh). For some users, the most exciting thing about Zsh is its ability to adopt new themes. It's so easy to theme Zsh both because of the active community designing visuals for the shell and also because of the [Oh My Zsh](https://ohmyz.sh/) project, which makes it trivial to install them. Theming is one of those changes you notice immediately, so if you don't feel like you changed shells when you installed Zsh, you'll definitely feel it once you've adopted one of the 100+ themes bundled with Oh My Zsh. There's a lot more to Oh My Zsh than just pretty themes, though; there are also hundreds of plugins that add features to your Z-shell environment. ## Installing Oh My Zsh The [ohmyz.sh](https://ohmyz.sh/) website encourages you to install the framework by running a script over the internet from your computer. While the Oh My Zsh project is almost certainly trustworthy, it's generally ill-advised to blindly run scripts on your system. If you want to run the install script, you can download it, read it, and run it after you're satisfied you understand what it's doing. If you download the script and read it, you may notice that installation is only a three-step process: ### 1. Clone oh-my-zsh First, clone the oh-my-zsh repository into a directory called **~/.oh-my-zsh**: `% git clone http://github.com/robbyrussell/oh-my-zsh ~/.oh-my-zsh` ### 2. Switch the config file Next, back up your existing **.zshrc** file and move the default one from the oh-my-zsh install into its place. You can do this in one command using the **-b** (backup) option for **mv**, as long as your version of the **mv** command includes that option: ``` % mv -b \ ~/.oh-my-zsh/templates/zshrc.zsh-template \ ~/.zshrc ``` ### 3. Edit the config By default, Oh My Zsh's configuration is pretty bland, so you might want to reintegrate your custom **~/.zshrc** into the **.oh-my-zsh** config. To do that, append your old config to the end of the new one using the [cat command](https://opensource.com/article/19/2/getting-started-cat-command): `% cat ~/.zshrc~ >> ~/.zshrc` To see the default configuration and learn about some of the options it provides, open **~/.zshrc** in your favorite text editor. The file is well-commented, so it's a great way to get a good idea of what's possible. For instance, you can change the location of your **.oh-my-zsh** directory. At installation, it resides at the base of your home directory, but modern Linux convention, as defined by the [Free Desktop](http://freedesktop.org) specification, is to place directories that extend the functionality of applications in the **~/.local/share** directory. You can change it in **~/.zshrc** by editing the line: ``` # Path to your oh-my-zsh installation. export ZSH=$HOME/.local/share/oh-my-zsh ``` then moving the directory to that location: ``` % mv ~/.oh-my-zsh \ $HOME/.local/share/oh-my-zsh ``` If you're using MacOS, the specification is less clear, but arguably the most appropriate place for the directory is **$HOME/Library/Application\ Support**. ## Relaunching Zsh After editing the config, you have to relaunch your shell. Before you do that, make sure you've finished any in-progress config changes; for instance, don't change the path of **.oh-my-zsh** then forget to move the directory to its new location. 
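A quick sanity check can save some head-scratching here. This is only a sketch, and it assumes you used the **~/.local/share** location from the example above:

```
# Confirm that the configured path and the actual directory agree
$ grep '^export ZSH=' ~/.zshrc
$ test -d "$HOME/.local/share/oh-my-zsh" && echo "directory in place" || echo "directory missing"
```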
If you don't want to relaunch your shell, you can **source** the config file, just as you can with Bash: ``` % source ~/.zshrc ➜ .oh-my-zsh git:(master) ✗ ``` You can ignore any warnings about missing update files; they will be resolved upon relaunch. ## Changing your theme Installing Oh My Zsh sets your Z-shell theme to **robbyrussell**, a theme by the project's maintainer. This theme's changes are minimal, mostly involving the color of your prompt. To view all the available themes, list the contents of the **.oh-my-zsh** theme directory: ``` ➜ .oh-my-zsh git:(master) ✗ ls \ ~/.local/share/oh-my-zsh/themes 3den.zsh-theme adben.zsh-theme af-magic.zsh-theme afowler.zsh-theme agnoster.zsh-theme [...] ``` To see screenshots of themes before trying them, visit the Oh My Zsh [wiki](https://github.com/robbyrussell/oh-my-zsh/wiki/Themes). For even more themes, visit the [External themes](https://github.com/robbyrussell/oh-my-zsh/wiki/External-themes) wiki page. Most themes are simple to set up and use. Just change the value of the theme name in **.zshrc** and reload the config: ``` ➜ ~ sed -i \ 's/_THEME=\"robbyrussel\"/_THEME=\"linuxonly\"/g' \ ~/.zshrc ➜ ~ source ~/.zshrc seth@darkstar:pts/0->/home/skenlon (0) ➜ ``` Other themes require extra configuration. For example, to use the **agnoster** theme, you must first install the Powerline font. This is an open source font, and it's probably in your software repository if you're running Linux. Install it with: `➜ ~ sudo dnf install powerline-fonts` Set your theme in the config: ``` ➜ ~ sed -i \ 's/_THEME=\"linuxonly\"/_THEME=\"agnoster\"/g' \ ~/.zshrc ``` and then relaunch (a simple **source** won't work). Upon relaunch, you will see the new theme: ![agnoster theme agnoster theme](https://opensource.com/sites/default/files/uploads/zsh-agnoster.jpg) ## Installing plugins Over 200 plugins ship with Oh My Zsh, and you can see them by looking in **.oh-my-zsh/plugins**. Each plugin directory has a README file explaining what the plugin does. Some plugins are relatively simple. For instance, the **dnf**, **ubuntu**, **brew**, and **macports** plugins are collections of aliases to simplify interactions with the DNF, Apt, Homebrew, and MacPorts package managers. Others are more complex. The **git** plugin, active by default, detects when you're working in a [Git repository](https://opensource.com/resources/what-is-git) and updates your shell prompt so that it lists the current branch and even indicates whether there are unmerged changes. To activate a plugin, add it to the plugin setting in **~/.zshrc**. For example, to add the **dnf** and **pass** plugins, open **~/.zshrc** in your favorite text editor: `plugins=(git dnf pass)` Save your changes and reload your Zsh session: `% source ~/.zshrc` The plugins are now active. You can test the **dnf** plugin by using one of the aliases it provides: ``` % dnfs fop ====== Name Exactly Matched: fop ====== fop.noarch : XSL-driven print formatter ``` Different plugins do different things, so you may want to install only one or two at a time to help you learn the new capabilities of your shell. ### Cheating Some Oh My Zsh plugins are pretty generic. If you look at a plugin that claims to be a Z-shell plugin and the code is also compatible with Bash, then you can use it in your Bash shell. Some plugins require Z-shell-specific functions, so this won't work with all of them. 
But you can load plugins like **dnf**, **ubuntu**, **firewalld**, and others into a Bash shell by using **source** to load the plugin of your choice. For example:

```
if [ -d $HOME/.local/share/oh-my-zsh/plugins ]; then
        source $HOME/.local/share/oh-my-zsh/plugins/dnf/dnf.plugin.zsh
fi
```

## To Z or not to Z

Z-shell is a powerful shell, both for its built-in features and for the plugins contributed by its passionate community. Whether you use it as your primary shell or just as a shell you visit on weekends or holidays, you owe it to yourself to try it out.

What are your favorite Z-shell themes and plugins? Tell us in the comments!
11,427
在 21 世纪该怎样编译 Linux 内核
https://opensource.com/article/19/8/linux-kernel-21st-century
2019-10-06T11:40:05
[ "内核" ]
https://linux.cn/article-11427-1.html
> 
> 也许你并不需要编译 Linux 内核,但你可以通过这篇教程快速上手。
> 
> 

![](/data/attachment/album/201910/06/113927vrs6rurljyuza8cy.jpg)

在计算机世界里,<ruby> 内核 <rt> kernel </rt></ruby>是负责处理硬件与系统其余部分之间通信的<ruby> 低阶软件 <rt> low-level software </rt></ruby>。除了烧录在计算机主板上的初始固件之外,当你启动计算机时,是内核让系统意识到它有一块硬盘、一块屏幕、一个键盘和一张网卡。内核还(大致)均等地为每个组件分配时间,使图像、音频、文件系统和网络可以流畅甚至并行地运行。

然而,对硬件支持的追求是永无止境的:发布的硬件越多,内核就必须纳入越多的代码来保证那些硬件正常工作。得到准确的数字很困难,但 Linux 内核无疑是硬件兼容性方面的顶级内核之一。Linux 运行在无数的计算机和移动电话上,也运行在工业和爱好者使用的板级嵌入式系统(SoC)、RAID 卡、缝纫机等等设备上。

回到 20 世纪(甚至是 21 世纪初期),Linux 用户在买到很新的硬件后,预期需要下载最新的内核代码、编译并安装之后才能使用它,这在当时并非不合理的事。而如今,除非是为了乐趣,或者是为了靠高度专业化的定制硬件获利,你已经很难找到自己编译内核的 Linux 用户了。现在,通常已经不需要再自己编译 Linux 内核了。

下面列出了一些原因,以及需要时快速编译内核的教程。

### 更新当前的内核

无论你是买了配备新显卡或 WiFi 芯片组的新电脑,还是给家里添置了一台新打印机,你的操作系统(称为 GNU+Linux 或 Linux,它也是该内核的名字)都需要一个驱动程序,才能打开与新部件(显卡、芯片组、打印机或其他任何东西)的通信信道。有时候,当你插入某个新设备时,你的电脑看起来认出了它,但这可能具有欺骗性。别被骗到了:有时候那确实就够了,但更多的情况是,你的操作系统仅仅是在用通用的协议探测接入的设备。

例如,你的计算机也许能够识别出新的网络打印机,但有时候那仅仅是因为打印机的网卡被设计成通过获取 DHCP 地址在网络上标识自己。这并不意味着你的计算机知道该向打印机发送什么指令才能打印出文档。事实上,你可以认为计算机甚至不“知道”那台设备是一个打印机;它也许只是显示网络上有个设备位于某个特定的地址,并且该设备以一系列字符 “p-r-i-n-t-e-r” 标识自己而已。人类语言的约定对计算机毫无意义,计算机需要的是一个驱动程序。

内核开发者、硬件制造商、技术支持人员和爱好者都知道新的硬件会不断地发布。他们中的许多人都会贡献驱动程序,直接提交给内核开发团队以包含在 Linux 中。例如,英伟达显卡驱动程序通常都会写入 [Nouveau](https://nouveau.freedesktop.org/wiki/) 内核模块中,并且因为英伟达显卡很常用,它的代码通常包含在任何一个面向日常使用的发行版内核中(例如下载 [Fedora](http://fedoraproject.org) 或 [Ubuntu](http://ubuntu.com) 时得到的内核)。而在英伟达显卡不常见的场合,例如嵌入式系统中,Nouveau 模块通常会被移除。对其他设备来说也有类似的模块:打印机得益于 [Foomatic](https://wiki.linuxfoundation.org/openprinting/database/foomatic) 和 [CUPS](https://www.cups.org/),无线网卡有 [b43、ath9k、wl](https://wireless.wiki.kernel.org/en/users/drivers) 模块,等等。

发行版往往会在它们构建的 Linux 内核中包含尽可能多的合理的驱动程序,因为它们想让你在接入新设备时不用安装驱动程序就能立即使用。大多数情况下确实如此,尤其是现在很多设备厂商都在资助自己所售硬件的 Linux 驱动程序开发,并且直接将这些驱动程序提交给内核团队,以用在通常的发行版上。

有时候,或许你正在运行六个月之前安装的内核,却配备了上周刚刚上市的令人兴奋的新设备。在这种情况下,你的内核也许没有那款设备的驱动程序。好消息是,那款设备的驱动程序经常已经存在于最近版本的内核中,这意味着你只要更新运行的内核就可以了。

通常,这些都是通过包管理软件完成的。例如在 RHEL、CentOS 和 Fedora 上:

```
$ sudo dnf update kernel
```

在 Debian 和 Ubuntu 上,首先获取你当前内核的版本:

```
$ uname -r
4.4.186
```

搜索新的版本:

```
$ sudo apt update
$ sudo apt search linux-image
```

安装找到的最新版本。在这个例子中,最新的版本是 5.2.4:

```
$ sudo apt install linux-image-5.2.4
```

内核更新后,你必须[重启](https://opensource.com/article/19/7/reboot-linux)(除非你使用 kpatch 或 kgraft)。这时,如果你需要的设备驱动程序包含在最新的内核中,你的硬件就会正常工作。

### 安装内核模块

有时候,发行版没有预计到用户会使用某个设备(或者该设备的驱动程序至少还不足以包含进 Linux 内核)。Linux 对驱动程序采用模块化方式,因此尽管某个驱动程序没有编译进内核,发行版仍可以推送单独的驱动程序包,让内核去加载。这虽然有些复杂,但是非常有用,尤其是当驱动程序没有包含进内核中却需要在引导过程中加载,或是内核中的驱动程序相比模块化的驱动程序过期时。第一个问题可以用 “initrd”(初始 RAM 磁盘)解决,这一点超出了本文的讨论范围;第二个问题则由 “kmod” 系统解决。

kmod 系统保证了当内核更新后,与之一起安装的所有模块化驱动程序也得到更新。如果你手动安装一个驱动程序,你就体验不到 kmod 提供的自动化,因此只要能用 kmod 安装包,就应该选择它。例如,尽管 Nouveau 驱动程序构建在内核中,但官方的英伟达驱动程序仅由英伟达发布。你可以去英伟达网站手动安装它的官方驱动程序:下载 “.run” 文件,并运行其中提供的 shell 脚本;但在安装了新的内核之后,你必须重复相同的过程,因为没有任何机制告诉包管理软件你手动安装了一个内核驱动程序。由于是英伟达的驱动程序在驱动你的显卡,手动更新英伟达驱动程序通常意味着你需要通过终端来执行更新,因为没有显卡驱动程序就没有图形界面。

![Nvidia configuration application](/data/attachment/album/201910/06/114010jez62e1zmxf1rpfj.jpg "Nvidia configuration application")

然而,如果你通过 kmod 包安装英伟达驱动程序,更新你的内核也会更新你的英伟达驱动程序。在 Fedora 和相关的发行版中:

```
$ sudo dnf install kmod-nvidia
```

在 Debian 和相关发行版上:

```
$ sudo apt update
$ sudo apt install nvidia-kernel-common nvidia-kernel-dkms nvidia-glx nvidia-xconfig nvidia-settings nvidia-vdpau-driver vdpau-va-driver
```

这仅仅是一个例子,如果你真的要安装英伟达驱动程序,你还必须屏蔽掉 Nouveau 驱动程序。请参考你所用发行版的文档以获取最佳的步骤。

### 下载并安装驱动程序

不是所有的东西都包含在内核中,也不是所有的东西都能以内核模块的形式使用。在某些情况下,你需要下载一个由供应商编写并打包好的特殊驱动程序;还有一些情况,你有驱动程序,但是没有配置驱动程序选项的前端界面。

两个常见的例子是 HP 打印机和 [Wacom](https://linuxwacom.github.io) 数位板。如果你有一台 HP
打印机,你可能有能够和打印机通信的通用驱动程序,甚至能够打印出东西。但是通用驱动程序却不能为特定型号的打印机提供定制化的选项,例如双面打印、逐份打印、纸盒选择等等。[HPLIP](https://developers.hp.com/hp-linux-imaging-and-printing)(HP Linux 成像和打印系统)提供了管理打印任务、调整打印设置、选择可用纸盒等选项。

HPLIP 通常包含在包管理软件中;只要搜索 “hplip” 就行了。

![HPLIP in action](/data/attachment/album/201910/06/114011gmkdaqop6zsssro6.jpg "HPLIP in action")

同样的,数字艺术家们广泛使用的 Wacom 数位板,其驱动程序通常也包含在内核中,但是调整压感和按键功能等设置,只能通过 GNOME 默认自带的图形控制面板访问;在 KDE 上,则可以安装额外的软件包 “kde-config-tablet” 来访问。

还有几类不多见的情况:内核中没有驱动程序,但是以 RPM 或 DEB 文件提供了 kmod 版本的驱动程序,可供下载并通过包管理软件安装。

### 打上补丁并编译你的内核

即使在 21 世纪的未来主义乌托邦里,仍有厂商不够了解开源,没有提供可安装的驱动程序。有时候,一些公司为驱动程序提供开源代码,而需要你下载代码、给内核打补丁、编译并手动安装。

这种发布方式和在 kmod 系统之外安装打包的驱动程序有同样的缺点:对内核的更新会破坏驱动程序,因为每次更换新的内核时,都必须手动将它重新集成到内核中。

令人高兴的是,这种事情已经变得少见了,因为 Linux 内核团队在大声呼吁公司们与他们交流方面做得很好,并且公司们也终于接受了开源不会很快消失的事实。但仍有一些新奇的或高度专业化的设备只提供内核补丁。

正式的做法是:每个发行版对如何编译内核都有自己的规范,以便让包管理器也能参与到系统这一关键部分的升级之中。包管理器太多,无法一一涵盖;举个例子,下面的过程大致就是你在 Fedora 上使用 `rpmdev`、在 Debian 上使用 `build-essential` 和 `devscripts` 等工具时幕后发生的事情。

首先,像通常那样,找到你正在运行的内核的版本:

```
$ uname -r
```

在大多数情况下,如果你还没有升级过内核,那么先升级是安全的;毕竟,你的问题也许已经在最新发布的内核中得到解决。如果你尝试后发现不起作用,那么你应该下载你正在运行的内核的源码。大多数发行版提供了特定的命令来完成这件事,但是手动操作的话,可以在 [kernel.org](https://www.kernel.org/) 上找到它的源代码。

你还必须下载内核所需的补丁。有时候,这些补丁只对应具体的内核版本,因此请谨慎选择。

按照传统(至少在人们还经常自己编译内核的年代),源代码和补丁都放在 `/usr/src/linux` 下。

按需解压内核源码和补丁文件:

```
$ cd /usr/src/linux
$ bzip2 --decompress linux-5.2.4.tar.bz2
$ cd linux-5.2.4
$ bzip2 -d ../patch*bz2
```

补丁文件中也许附有使用说明,但它们通常被设计为在内核源码树的顶层目录执行:

```
$ patch -p1 < patch*example.patch
```

当内核代码打上补丁后,你可以沿用旧的配置来配置打了补丁的内核:

```
$ make oldconfig
```

`make oldconfig` 命令有两个作用:它继承了当前的内核配置,并且允许你配置补丁带来的新的选项。

你或许需要运行 `make menuconfig` 命令,它会启动一个基于 ncurses 的菜单界面,列出新内核所有可能的选项。整个菜单可能让人眼花缭乱,但它是以你旧的内核配置为基础的,你可以浏览菜单,禁用掉你没有或不需要的硬件的模块。另外,如果你知道自己有一些硬件没有包含在当前的配置中,你可以选择构建它,当作模块或者直接嵌入内核中。理论上,这些并不是必要的,因为可以推测,你当前的内核本来运行良好,只是缺少补丁;而促使你打补丁的那个设备所需要的选项,在应用补丁时多半已经激活了。

下一步,编译内核和它的模块:

```
$ make bzImage
$ make modules
```

这会产生一个叫作 `vmlinuz` 的文件,它是你的可引导内核的压缩版本。保存旧的版本,并把新的版本放到 `/boot` 文件夹下:

```
$ sudo mv /boot/vmlinuz /boot/vmlinuz.nopatch
$ sudo cat arch/x86_64/boot/bzImage > /boot/vmlinuz
$ sudo mv /boot/System.map /boot/System.map.stock
$ sudo cp System.map /boot/System.map
```

到目前为止,你已经给内核打上了补丁,编译了内核和它的模块,并且安装了内核,但你还没有安装任何模块。这就是最后的构建步骤:

```
$ sudo make modules_install
```

新的内核已经就位,并且它的模块也已经安装。

最后一步是更新你的引导程序,让你的计算机中在内核之前加载的那部分知道去哪里找到 Linux。GRUB 引导程序使这一过程变得相当简单:

```
$ sudo grub2-mkconfig
```

### 现实生活中的编译

当然,现在没有人手动执行这些命令。相反,请参考你的发行版文档,了解如何使用发行版维护人员所用的开发者工具集来修改内核。这些工具集通常会创建一个集成了所有补丁的可安装软件包,告知你的包管理器进行升级,并更新你的引导程序。

### 内核

操作系统和内核都是玄学,但要理解构成它们的组件并不难。下一次你遇到某个技术似乎无法在 Linux 上工作时,深呼吸,调查可用的驱动程序,选择阻力最小的那条路。Linux 比以前简单多了——内核也包括在内。

---

via: <https://opensource.com/article/19/8/linux-kernel-21st-century>

作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[LuMing](https://github.com/LuuMing) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
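补充说明:在很多发行版上,直接运行 `grub2-mkconfig` 只会把生成的配置打印到标准输出;通常还需要用 `-o` 选项把它写入实际的配置文件。下面是一个示意写法(具体文件路径因发行版和 BIOS/UEFI 引导方式而异,这里的路径仅作参考,请以你的发行版文档为准):

```
# Fedora / CentOS / RHEL(BIOS 引导)上常见的写法
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg

# Debian / Ubuntu 上对应的命令叫 grub-mkconfig(或者直接用 update-grub)
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
```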
200
OK
In computing, a kernel is the low-level software that handles communication with hardware and general system coordination. Aside from some initial firmware built into your computer's motherboard, when you start your computer, the kernel is what provides awareness that it has a hard drive and a screen and a keyboard and a network card. It's also the kernel's job to ensure equal time (more or less) is given to each component so that your graphics and audio and filesystem and network all run smoothly, even though they're running concurrently. The quest for hardware support, however, is ongoing, because the more hardware that gets released, the more stuff a kernel must adopt into its code to make the hardware work as expected. It's difficult to get accurate numbers, but the Linux kernel is certainly among the top kernels for hardware compatibility. Linux operates innumerable computers and mobile phones, embedded system on a chip (SoC) boards for hobbyist and industrial uses, RAID cards, sewing machines, and much more. Back in the 20th century (and even in the early years of the 21st), it was not unreasonable for a Linux user to expect that when they purchased a very new piece of hardware, they would need to download the very latest kernel source code, compile it, and install it so that they could get support for the device. Lately, though, you'd be hard-pressed to find a Linux user who compiles their own kernel except for fun or profit by way of highly specialized custom hardware. It generally isn't required these days to compile the Linux kernel yourself. Here are the reasons why, plus a quick tutorial on how to compile a kernel when you need to. ## Update your existing kernel Whether you've got a brand new laptop featuring a fancy new graphics card or WiFi chipset or you've just brought home a new printer, your operating system (called either GNU+Linux or just Linux, which is also the name of the kernel) needs a driver to open communication channels to that new component (graphics card, WiFi chip, printer, or whatever). It can be deceptive, sometimes, when you plug in a new device and your computer *appears* to acknowledge it. But don't let that fool you. Sometimes that *is* all you need, but other times your OS is just using generic protocols to probe a device that's attached. For instance, your computer may be able to identify your new network printer, but sometimes that's only because the network card in the printer is programmed to identify itself to a network so it can gain a DHCP address. It doesn't necessarily mean that your computer knows what instructions to send to the printer to produce a page of printed text. In fact, you might argue that the computer doesn't even really "know" that the device is a printer; it may only display that there's a device on the network at a specific address and the device identifies itself with the series of characters *p-r-i-n-t-e-r*. The conventions of human language are meaningless to a computer; what it needs is a driver. Kernel developers, hardware manufacturers, support technicians, and hobbyists all know that new hardware is constantly being released. Many of them contribute drivers, submitted straight to the kernel development team for inclusion in Linux. 
For example, Nvidia graphic card drivers are often written into the [Nouveau](https://nouveau.freedesktop.org/wiki/) kernel module and, because Nvidia cards are common, the code is usually included in any kernel distributed for general use (such as the kernel you get when you download [Fedora](http://fedoraproject.org) or [Ubuntu](http://ubuntu.com). Where Nvidia is less common, for instance in embedded systems, the Nouveau module is usually excluded. Similar modules exist for many other devices: printers benefit from [Foomatic](https://wiki.linuxfoundation.org/openprinting/database/foomatic) and [CUPS](https://www.cups.org/), wireless cards have [b43, ath9k, wl](https://wireless.wiki.kernel.org/en/users/drivers) modules, and so on. Distributions tend to include as much as they reasonably can in their Linux kernel builds because they want you to be able to attach a device and start using it immediately, with no driver installation required. For the most part, that's what happens, especially now that many device vendors are now funding Linux driver development for the hardware they sell and submitting those drivers directly to the kernel team for general distribution. Sometimes, however, you're running a kernel you installed six months ago with an exciting new device that just hit the stores a week ago. In that case, your kernel may not have a driver for that device. The good news is that very often, a driver for that device may exist in a very recent edition of the kernel, meaning that all you have to do is update what you're running. Generally, this is done through a package manager. For instance, on RHEL, CentOS, and Fedora: `$ sudo dnf update kernel` On Debian and Ubuntu, first get your current kernel version: ``` $ uname -r 4.4.186 ``` Search for newer versions: ``` $ sudo apt update $ sudo apt search linux-image ``` Install the latest version you find. In this example, the latest available is 5.2.4: `$ sudo apt install linux-image-5.2.4` After a kernel upgrade, you must [reboot](https://opensource.com/article/19/7/reboot-linux) (unless you're using kpatch or kgraft). Then, if the device driver you need is in the latest kernel, your hardware will work as expected. ## Install a kernel module Sometimes a distribution doesn't expect that its users often use a device (or at least not enough that the device driver needs to be in the Linux kernel). Linux takes a modular approach to drivers, so distributions can ship separate driver packages that can be loaded by the kernel even though the driver isn't compiled into the kernel itself. This is useful, although it can get complicated when a driver isn't included in a kernel but is needed during boot, or when the kernel gets updated out from under the modular driver. The first problem is solved with an **initrd** (initial RAM disk) and is out of scope for this article, and the second is solved by a system called **kmod**. The kmod system ensures that when a kernel is updated, all modular drivers installed alongside it are also updated. If you install a driver manually, you miss out on the automation that kmod provides, so you should opt for a kmod package whenever it is available. For instance, while Nvidia drivers are built into the kernel as the Nouveau driver, the official Nvidia drivers are distributed only by Nvidia. 
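If you're not sure which kernel module is currently driving your graphics card, **lspci** can tell you. This is just a quick sketch (the device strings vary from machine to machine):

```
# Show the VGA/3D devices along with the "Kernel driver in use" line
$ lspci -k | grep -EA3 'VGA|3D'
```

If the driver in use is **nouveau**, you're running the in-kernel module; if it's **nvidia**, you're on the vendor's driver.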
You can install Nvidia-branded drivers manually by going to the website, downloading the **.run** file, and running the shell script it provides, but you must repeat that same process after you install a new kernel, because nothing tells your package manager that you manually installed a kernel driver. Because Nvidia drives your graphics, updating the Nvidia driver manually usually means you have to perform the update from a terminal, because you have no graphics without a functional graphics driver. ![Nvidia configuration application Nvidia configuration application](https://opensource.com/sites/default/files/uploads/nvidia.jpg) However, if you install the Nvidia drivers as a kmod package, updating your kernel also updates your Nvidia driver. On Fedora and related: `$ sudo dnf install kmod-nvidia` On Debian and related: ``` $ sudo apt update $ sudo apt install nvidia-kernel-common nvidia-kernel-dkms nvidia-glx nvidia-xconfig nvidia-settings nvidia-vdpau-driver vdpau-va-driver ``` This is only an example, but if you're installing Nvidia drivers in real life, you must also blacklist the Nouveau driver. See your distribution's documentation for the best steps. ## Download and install a driver Not everything is included in the kernel, and not everything *else* is available as a kernel module. In some cases, you have to download a special driver written and bundled by the hardware vendor, and other times, you have the driver but not the frontend to configure driver options. Two common examples are HP printers and [Wacom](https://linuxwacom.github.io) illustration tablets. If you get an HP printer, you probably have generic drivers that can communicate with your printer. You might even be able to print. But the generic driver may not be able to provide specialized options specific to your model, such as double-sided printing, collation, paper tray choices, and so on. [HPLIP](https://developers.hp.com/hp-linux-imaging-and-printing) (the HP Linux Imaging and Printing system) provides options to manage jobs, adjust printing options, select paper trays where applicable, and so on. HPLIP is usually bundled in package managers; just search for "hplip." ![HPLIP in action HPLIP in action](https://opensource.com/sites/default/files/uploads/hplip.jpg) Similarly, drivers for Wacom tablets, the leading illustration tablet for digital artists, are usually included in your kernel, but options to fine-tune settings, such as pressure sensitivity and button functionality, are only accessible through the graphical control panel included by default with GNOME but installable as the extra package **kde-config-tablet** on KDE. There are likely some edge cases that don't have drivers in the kernel but offer kmod versions of driver modules as an RPM or DEB file that you can download and install through your package manager. ## Patching and compiling your own kernel Even in the futuristic utopia that is the 21st century, there are vendors that don't understand open source enough to provide installable drivers. Sometimes, such companies provide source code for a driver but expect you to download the code, patch a kernel, compile, and install manually. This kind of distribution model has the same disadvantages as installing packaged drivers outside of the kmod system: an update to your kernel breaks the driver because it must be re-integrated into your kernel manually each time the kernel is swapped out for a new one. 
This has become rare, happily, because the Linux kernel team has done an excellent job of pleading loudly for companies to communicate with them, and because companies are finally accepting that open source isn't going away any time soon. But there are still novelty or hyper-specialized devices out there that provide only kernel patches. Officially, there are distribution-specific preferences for how you should compile a kernel to keep your package manager involved in upgrading such a vital part of your system. There are too many package managers to cover each; as an example, here is what happens behind the scenes when you use tools like **rpmdev** on Fedora or **build-essential** and **devscripts** on Debian. First, as usual, find out which kernel version you're running: `$ uname -r` In most cases, it's safe to upgrade your kernel if you haven't already. After all, it's possible that your problem will be solved in the latest release. If you tried that and it didn't work, then you should download the source code of the kernel you are running. Most distributions provide a special command for that, but to do it manually, you can find the source code on [kernel.org](https://www.kernel.org/). You also must download whatever patch you need for your kernel. Sometimes, these patches are specific to the kernel release, so choose carefully. It's traditional, or at least it was back when people regularly compiled their own kernels, to place the source code and patches in **/usr/src/linux**. Unarchive the kernel source and the patch files as needed: ``` $ cd /usr/src/linux $ bzip2 --decompress linux-5.2.4.tar.bz2 $ cd linux-5.2.4 $ bzip2 -d ../patch*bz2 ``` The patch file may have instructions on how to do the patch, but often they're designed to be executed from the top level of your tree: `$ patch -p1 < patch*example.patch` Once the kernel code is patched, you can use your old configuration to prepare the patched kernel config: `$ make oldconfig` The **make oldconfig** command serves two purposes: it inherits your current kernel's configuration, and it allows you to configure new options introduced by the patch. You may need to run the **make menuconfig** command, which launches an ncurses-based, menu-driven list of possible options for your new kernel. The menu can be overwhelming, but since it starts with your old config as a foundation, you can look through the menu and disable modules for hardware that you know you do not have and do not anticipate needing. Alternately, if you know that you have some piece of hardware and see it is not included in your current configuration, you may choose to build it, either as a module or directly into the kernel. In theory, this isn't necessary because presumably, your current kernel was treating you well but for the missing patch, and probably the patch you applied has activated all the necessary options required by whatever device prompted you to patch your kernel in the first place. Next, compile the kernel and its modules: ``` $ make bzImage $ make modules ``` This leaves you with a file named **vmlinuz**, which is a compressed version of your bootable kernel. Save your old version and place the new one in your **/boot** directory: ``` $ sudo mv /boot/vmlinuz /boot/vmlinuz.nopatch $ sudo cat arch/x86_64/boot/bzImage > /boot/vmlinuz $ sudo mv /boot/System.map /boot/System.map.stock $ sudo cp System.map /boot/System.map ``` So far, you've patched and built a kernel and its modules, you've installed the kernel, but you haven't installed any modules. 
That's the final build step:

`$ sudo make modules_install`

The new kernel is in place, and its modules are installed.

The final step is to update your bootloader so that the part of your computer that loads before the kernel knows where to find Linux. The GRUB bootloader makes this process relatively simple:

`$ sudo grub2-mkconfig`

## Real-world compiling

Of course, nobody runs those manual commands now. Instead, refer to your distribution for instructions on modifying a kernel using the developer toolset that your distribution's maintainers use. This toolset will probably create a new installable package with all the patches incorporated, alert the package manager of the upgrade, and update your bootloader for you.

## Kernels

Operating systems and kernels are mysterious things, but it doesn't take much to understand what components they're built upon. The next time you get a piece of tech that appears to not work on Linux, take a deep breath, investigate driver availability, and go with the path of least resistance. Linux is easier than ever—and that includes the kernel.
11,429
在 Linux 上记录和重放终端会话活动
https://www.linuxtechi.com/record-replay-linux-terminal-sessions-activity/
2019-10-06T12:27:00
[ "记录" ]
https://linux.cn/article-11429-1.html
通常,Linux 管理员们都使用 `history` 命令来跟踪在先前的会话中执行过哪些命令,但是 `history` 命令的局限性在于它不存储命令的输出。在某些情况下,我们要检查上一个会话的命令输出,并希望将其与当前会话进行比较。除此之外,在某些情况下,我们正在对 Linux 生产环境中的问题进行故障排除,并希望保存所有终端会话活动以供将来参考,因此在这种情况下,`script` 命令就变得很方便。 ![](/data/attachment/album/201910/06/122659mmi64z8ryr4z2n8a.jpg) `script` 是一个命令行工具,用于捕获/记录你的 Linux 服务器终端会话活动,以后可以使用 `scriptreplay` 命令重放记录的会话。在本文中,我们将演示如何安装 `script` 命令行工具以及如何记录 Linux 服务器终端会话活动,然后,我们将看到如何使用 `scriptreplay` 命令来重放记录的会话。 ### 安装 script 工具 #### 在 RHEL 7/ CentOS 7 上安装 script 工具 `script` 命令由 RPM 包 `util-linux` 提供,如果你没有在你的 CentOS 7 / RHEL 7 系统上安装它,运行下面的 `yum` 安装它: ``` [root@linuxtechi ~]# yum install util-linux -y ``` #### 在 RHEL 8 / CentOS 8 上安装 script 工具 运行下面的 `dnf` 命令来在 RHEL 8 / CentOS 8 上安装 `script` 工具: ``` [root@linuxtechi ~]# dnf install util-linux -y ``` #### 在基于 Debian 的系统(Ubuntu / Linux Mint)上安装 script 工具 运行下面的 `apt-get` 命令来安装 `script` 工具: ``` root@linuxtechi ~]# apt-get install util-linux -y ``` ### 如何使用 script 工具 直接使用 `script` 命令,在终端上键入 `script` 命令,然后按回车,它将开始在名为 `typescript` 的文件中捕获当前的终端会话活动。 ``` [root@linuxtechi ~]# script Script started, file is typescript [root@linuxtechi ~]# ``` 要停止记录会话活动,请键入 `exit` 命令,然后按回车: ``` [root@linuxtechi ~]# exit exit Script done, file is typescript [root@linuxtechi ~]# ``` `script` 命令的语法格式: ``` ~] # script {options} {file_name} ``` 能在 `script` 命令中使用的不同选项: ![options-script-command](/data/attachment/album/201910/06/122713o4vvjz6xmxljc4vd.png) 让我们开始通过执行 `script` 命令来记录 Linux 终端会话,然后执行诸如 `w`,`route -n`,`df -h` 和 `free -h`,示例如下所示: ![script-examples-linux-server](/data/attachment/album/201910/06/122715ud9bxq9ovq2b0dqa.jpg) 正如我们在上面看到的,终端会话日志保存在文件 `typescript` 中: 现在使用 `cat` / `vi` 命令查看 `typescript` 文件的内容, ``` [root@linuxtechi ~]# ls -l typescript -rw-r--r--. 1 root root 1861 Jun 21 00:50 typescript [root@linuxtechi ~]# ``` ![typescript-file-content-linux](/data/attachment/album/201910/06/122716y7f166757hfl56fs.jpg) 以上内容确认了我们在终端上执行的所有命令都已保存在 `typescript` 文件中。 ### 在 script 命令中使用定制文件名 假设我们要使用自定义文件名来执行 `script` 命令,可以在 `script` 命令后指定文件名。在下面的示例中,我们使用的文件名为 `session-log-(当前日期时间).txt`。 ``` [root@linuxtechi ~]# script sessions-log-$(date +%d-%m-%Y-%T).txt Script started, file is sessions-log-21-06-2019-01:37:39.txt [root@linuxtechi ~]# ``` 现在运行该命令并输入 `exit`: ``` [root@linuxtechi ~]# exit exit Script done, file is sessions-log-21-06-2019-01:37:39.txt [root@linuxtechi ~]# ``` ### 附加命令输出到 script 记录文件 假设 `script` 命令已经将命令输出记录到名为 `session-log.txt` 的文件中,现在我们想将新会话命令的输出附加到该文件中,那么可以在 `script` 命令中使用 `-a` 选项。 ``` [root@linuxtechi ~]# script -a sessions-log.txt Script started, file is sessions-log.txt [root@linuxtechi ~]# xfs_info /dev/mapper/centos-root meta-data=/dev/mapper/centos-root isize=512 agcount=4, agsize=2746624 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=0 spinodes=0 data = bsize=4096 blocks=10986496, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 ftype=1 log =internal bsize=4096 blocks=5364, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 [root@linuxtechi ~]# exit exit Script done, file is sessions-log.txt [root@linuxtechi ~]# ``` 要查看更新的会话记录,使用 `cat session-log.txt` 命令。 ### 无需 shell 交互而捕获命令输出到 script 记录文件 假设我们要捕获命令的输出到会话记录文件,那么使用 `-c` 选项,示例如下所示: ``` [root@linuxtechi ~]# script -c "uptime && hostname && date" root-session.txt Script started, file is root-session.txt 01:57:40 up 2:30, 3 users, load average: 0.00, 0.01, 0.05 linuxtechi Fri Jun 21 01:57:40 EDT 2019 Script done, file is root-session.txt [root@linuxtechi ~]# ``` ### 以静默模式运行 script 命令 要以静默模式运行 `script` 命令,请使用 `-q` 
选项,该选项会抑制 `script` 的启动和结束消息,示例如下所示:

```
[root@linuxtechi ~]# script -c "uptime && date" -q root-session.txt
 02:01:10 up 2:33, 3 users, load average: 0.00, 0.01, 0.05
Fri Jun 21 02:01:10 EDT 2019
[root@linuxtechi ~]#
```

要把时序信息记录到一个文件中,并把命令输出捕获到另一个单独的文件中,可以给 `script` 命令传递一个时序文件(`--timing`),示例如下所示:

语法格式:

```
~ ]# script -t <timing-file-name> {file_name}
```

```
[root@linuxtechi ~]# script --timing=timing.txt session.log
Script started, file is session.log
[root@linuxtechi ~]# uptime
 02:27:59 up 3:00, 3 users, load average: 0.00, 0.01, 0.05
[root@linuxtechi ~]# date
Fri Jun 21 02:28:02 EDT 2019
[root@linuxtechi ~]# free -h
       total used free shared buff/cache available
Mem:    3.9G 171M 2.0G   8.6M       1.7G      3.3G
Swap:   3.9G   0B 3.9G
[root@linuxtechi ~]# whoami
root
[root@linuxtechi ~]# exit
exit
Script done, file is session.log
[root@linuxtechi ~]#
[root@linuxtechi ~]# ls -l session.log timing.txt
-rw-r--r--. 1 root root 673 Jun 21 02:28 session.log
-rw-r--r--. 1 root root 414 Jun 21 02:28 timing.txt
[root@linuxtechi ~]#
```

### 重放记录的 Linux 终端会话活动

现在,使用 `scriptreplay` 命令重放录制的终端会话活动。

注意:`scriptreplay` 也由 RPM 包 `util-linux` 提供。`scriptreplay` 命令需要时序文件才能工作。

```
[root@linuxtechi ~]# scriptreplay --timing=timing.txt session.log
```

上面命令的输出将如下所示:

![](/data/attachment/album/201910/06/122718q7riaibd8a7rr8r7.gif)

### 记录所有用户的 Linux 终端会话活动

在某些关键业务的 Linux 服务器上,我们希望跟踪所有用户的活动,这可以使用 `script` 命令来完成,将以下内容放在 `/etc/profile` 文件中:

```
[root@linuxtechi ~]# vi /etc/profile
……………………………………………………
if [ "x$SESSION_RECORD" = "x" ]
then
timestamp=$(date +%d-%m-%Y-%T)
session_log=/var/log/session/session.$USER.$$.$timestamp
SESSION_RECORD=started
export SESSION_RECORD
script -t -f -q 2>${session_log}.timing $session_log
exit
fi
……………………………………………………
```

保存文件并退出。

在 `/var/log` 文件夹下创建 `session` 目录:

```
[root@linuxtechi ~]# mkdir /var/log/session
```

给该文件夹指定权限:

```
[root@linuxtechi ~]# chmod 777 /var/log/session/
[root@linuxtechi ~]#
```

现在来验证以上代码是否有效。以普通用户登录 Linux 服务器,我这里使用的是 `pkumar` 用户:

```
~ ]# ssh pkumar@linuxtechi
pkumar@linuxtechi's password:
[pkumar@linuxtechi ~]$ uptime
 04:34:09 up 5:06, 3 users, load average: 0.00, 0.01, 0.05
[pkumar@linuxtechi ~]$ date
Fri Jun 21 04:34:11 EDT 2019
[pkumar@linuxtechi ~]$ free -h
       total used free shared buff/cache available
Mem:    3.9G 172M 2.0G   8.6M       1.7G      3.3G
Swap:   3.9G   0B 3.9G
[pkumar@linuxtechi ~]$ id
uid=1001(pkumar) gid=1002(pkumar) groups=1002(pkumar) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[pkumar@linuxtechi ~]$ whoami
pkumar
[pkumar@linuxtechi ~]$ exit
```

以 root 用户登录,查看该用户的终端会话活动:

```
[root@linuxtechi ~]# cd /var/log/session/
[root@linuxtechi session]# ls -l | grep pkumar
-rw-rw-r--. 1 pkumar pkumar 870 Jun 21 04:34 session.pkumar.19785.21-06-2019-04:34:05
-rw-rw-r--. 1 pkumar pkumar 494 Jun 21 04:34 session.pkumar.19785.21-06-2019-04:34:05.timing
[root@linuxtechi session]#
```

![Session-output-file-linux](/data/attachment/album/201910/06/122719dexuklun2x9kx2lf.jpg)

我们还可以使用 `scriptreplay` 命令来重放用户的终端会话活动:

```
[root@linuxtechi session]# scriptreplay --timing session.pkumar.19785.21-06-2019-04\:34\:05.timing session.pkumar.19785.21-06-2019-04\:34\:05
```

以上就是本教程的全部内容,请在下面的评论部分中分享你的反馈和评论。

---

via: <https://www.linuxtechi.com/record-replay-linux-terminal-sessions-activity/>

作者:[Pradeep Kumar](https://www.linuxtechi.com/author/pradeep/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
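最后补充一点:如果像上面那样记录所有用户的会话,`/var/log/session` 目录会不断增长。下面是一个示意性的清理命令,假设你只想保留最近 30 天的记录(可以放进 root 的 crontab 定期执行):

```
# 删除 30 天以前的会话记录文件及其时序文件
find /var/log/session -type f -mtime +30 -delete
```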
200
OK
Generally, all Linux administrators use **history** command to track which commands were executed in previous sessions, but there is one limitation of history command is that it doesn’t store the command’s output. There can be some scenarios where we want to check commands output of previous session and want to compare it with current session. Apart from this, there are some situations where we are troubleshooting the issues on Linux production boxes and want to save all terminal session activities for future reference, so in such cases script command become handy. Script is a command line tool which is used to capture or record your Linux server terminal sessions activity and later the recorded session can be replayed using scriptreplay command. In this article we will demonstrate how to install script command line tool and how to record Linux server terminal session activity and then later we will see how the recorded session can be replayed using **scriptreplay** command. #### Installation of Script tool on RHEL 7/ CentOS 7 Script command is provided by the rpm package “**util-linux**”, in case it is not installed on your CentOS 7 / RHEL 7 system , run the following yum command, [root@linuxtechi ~]# yum install util-linux -y **On RHEL 8 / CentOS 8** Run the following dnf command to install script utility on RHEL 8 and CentOS 8 system, [root@linuxtechi-rhel8 ~]# dnf install util-linux -y **Installation of Script tool on Debian based systems (Ubuntu / Linux Mint)** Execute the beneath apt-get command to install script utility root@linuxtechi ~]# apt-get install util-linux -y #### How to Use script utility Use of script command is straight forward, type script command on terminal then hit enter, it will start capturing your current terminal session activities inside a file called “**typescript**” [root@linuxtechi ~]# script Script started, file is typescript [root@linuxtechi ~]# To stop recording the session activities, type exit command and hit enter. [root@linuxtechi ~]# exit exit Script done, file is typescript [root@linuxtechi ~]# Syntax of Script command: ~ ] # script {options} {file_name} Different options used in script command, Let’s start recording of your Linux terminal session by executing script command and then execute couple of command like ‘**w**’, ‘**route -n**’ , ‘[ df -h](https://www.linuxtechi.com/11-df-command-examples-in-linux/)’ and ‘ **free-h**’, example is shown below As we can see above, terminal session logs are saved in the file “typescript” Now view the contents of typescript file using [cat](https://www.linuxtechi.com/cat-command-examples-for-beginners-in-linux/) / vi command, [root@linuxtechi ~]# ls -l typescript -rw-r--r--. 
1 root root 1861 Jun 21 00:50 typescript [root@linuxtechi ~]# Above confirms that whatever commands we execute on terminal that have been saved inside the file “typescript” #### Use Custom File name in script command Let’s assume we want to use our customize file name to script command, so specify the file name after script command, in the below example we are using a file name “session-log-(current-date-time).txt” [root@linuxtechi ~]# script sessions-log-$(date +%d-%m-%Y-%T).txt Script started, file is sessions-log-21-06-2019-01:37:39.txt [root@linuxtechi ~]# Now run the commands and then type exit, [root@linuxtechi ~]# exit exit Script done, file is sessions-log-21-06-2019-01:37:39.txt [root@linuxtechi ~]# #### Append the commands output to script file Let assume script command had already recorded the commands output to a file called session-log.txt file and now we want to append output of new sessions commands output to this file, then use “**-a**” command in script command [root@linuxtechi ~]# script -a sessions-log.txt Script started, file is sessions-log.txt [root@linuxtechi ~]# xfs_info /dev/mapper/centos-root meta-data=/dev/mapper/centos-root isize=512 agcount=4, agsize=2746624 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=0 spinodes=0 data = bsize=4096 blocks=10986496, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 ftype=1 log =internal bsize=4096 blocks=5364, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 [root@linuxtechi ~]# exit exit Script done, file is sessions-log.txt [root@linuxtechi ~]# To view updated session’s logs, use “cat session-log.txt ” #### Capture commands output to script file without interactive shell Let’s assume we want to capture commands output to a script file, then use **-c** option, example is shown below, [root@linuxtechi ~]# script -c "uptime && hostname && date" root-session.txt Script started, file is root-session.txt 01:57:40 up 2:30, 3 users, load average: 0.00, 0.01, 0.05 linuxtechi Fri Jun 21 01:57:40 EDT 2019 Script done, file is root-session.txt [root@linuxtechi ~]# #### Run script command in quiet mode To run script command in quiet mode use **-q** option, this option will suppress the script started and script done message, example is shown below, [root@linuxtechi ~]# script -c "uptime && date" -q root-session.txt 02:01:10 up 2:33, 3 users, load average: 0.00, 0.01, 0.05 Fri Jun 21 02:01:10 EDT 2019 [root@linuxtechi ~]# Record Timing information to a file and capture commands output to a separate file, this can be achieved in script command by passing timing file (**–timing**) , example is shown below, Syntax: ~ ]# script -t <timing-file-name> {file_name} [root@linuxtechi ~]# script --timing=timing.txt session.log Script started, file is session.log [root@linuxtechi ~]# uptime 02:27:59 up 3:00, 3 users, load average: 0.00, 0.01, 0.05 [root@linuxtechi ~]# date Fri Jun 21 02:28:02 EDT 2019 [root@linuxtechi ~]# free -h total used free shared buff/cache available Mem: 3.9G 171M 2.0G 8.6M 1.7G 3.3G Swap: 3.9G 0B 3.9G [root@linuxtechi ~]# whoami root [root@linuxtechi ~]# exit exit Script done, file is session.log [root@linuxtechi ~]# [root@linuxtechi ~]# ls -l session.log timing.txt -rw-r--r--. 1 root root 673 Jun 21 02:28 session.log -rw-r--r--. 
1 root root 414 Jun 21 02:28 timing.txt [root@linuxtechi ~]# #### Replay recorded Linux terminal session activity Now replay the recorded terminal session activities using scriptreplay command, **Note:** Scriptreplay is also provided by rpm package “**util-linux**”. Scriptreplay command requires timing file to work. [root@linuxtechi ~]# scriptreplay --timing=timing.txt session.log Output of above command would be something like below, #### Record all User’s Linux terminal session activities There are some business critical Linux servers where we want keep track on all users activity, so this can be accomplished using script command, place the following content in /etc/profile file , [root@linuxtechi ~]# vi /etc/profile …………………………………………………… if [ "x$SESSION_RECORD" = "x" ] then timestamp=$(date +%d-%m-%Y-%T) session_log=/var/log/session/session.$USER.$$.$timestamp SESSION_RECORD=started export SESSION_RECORD script -t -f -q 2>${session_log}.timing $session_log exit fi …………………………………………………… Save & exit the file. Create the session directory under /var/log folder, [root@linuxtechi ~]# mkdir /var/log/session Assign the permissions to session folder, [root@linuxtechi ~]# chmod 777 /var/log/session/ [root@linuxtechi ~]# Now verify whether above code is working or not. Login to ordinary user to linux server, in my I am using pkumar user, ~ ] # ssh[[email protected]][[email protected]]'s password: [pkumar@linuxtechi ~]$ uptime 04:34:09 up 5:06, 3 users, load average: 0.00, 0.01, 0.05 [pkumar@linuxtechi ~]$ date Fri Jun 21 04:34:11 EDT 2019 [pkumar@linuxtechi ~]$ free -h total used free shared buff/cache available Mem: 3.9G 172M 2.0G 8.6M 1.7G 3.3G Swap: 3.9G 0B 3.9G [pkumar@linuxtechi ~]$ id uid=1001(pkumar) gid=1002(pkumar) groups=1002(pkumar) \ context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 [pkumar@linuxtechi ~]$ whoami pkumar [pkumar@linuxtechi ~]$ exit Login as root and view user’s linux terminal session activity [root@linuxtechi ~]# cd /var/log/session/ [root@linuxtechi session]# ls -l | grep pkumar -rw-rw-r--. 1 pkumar pkumar 870 Jun 21 04:34 \ session.pkumar.19785.21-06-2019-04:34:05 -rw-rw-r--. 1 pkumar pkumar 494 Jun 21 04:34 \ session.pkumar.19785.21-06-2019-04:34:05.timing [root@linuxtechi session]# We can also use scriptreplay command to replay user’s terminal session activities, [root@linuxtechi session]# scriptreplay --timing \ session.pkumar.19785.21-06-2019-04\:34\:05.timing \ session.pkumar.19785.21-06-2019-04\:34\:05 That’s all from this tutorial, please do share your feedback and comments in the comments section below. naveenThis is what I was looking for.It will be very helpful for automation. Is there a way to repeat a subset of commands in a loop Also is there a way to do ssh from there terminal by feeding password kolberThanks for this. I have one issue, after applying this. Users’s profile “bash_profile” variables won’t load anymore. How can I solve this? RickyxVery nice! A suggestion: /var/log/session should be created before editing the profile. If you exit from the root session before session folder exists you get: $ su – Password: -bash: /var/log/session/session.root.1705.08-03-2020-13:37:55.timing: File o directory non esistente …and you can’t be root easily again 😉 jairgreat, well explained and complete! Thx! linuxadmany idea on how to prevent the user from deleting it own session?.. since the file owner is the user itself and /var/log/session is with world writable permission?.. 
Jacob Tuz Poot: This works fine for ssh sessions, but now I cannot use scp to transfer files. I was looking for a solution, but… nothing until now. Did anyone have the same issue? How did you solve it?

Jaerico: Can we include info about who sshed into my server in this setup? Pardon me if anything is wrong, I am new to Linux.
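A closing note on playback speed, assuming the util-linux implementation of scriptreplay used throughout this tutorial: the command also accepts an optional divisor argument that divides every recorded delay, so a session can be replayed faster or slower than it was typed. For example:

[root@linuxtechi ~]# scriptreplay --timing=timing.txt session.log
[root@linuxtechi ~]# scriptreplay timing.txt session.log 2
[root@linuxtechi ~]# scriptreplay timing.txt session.log 0.5

The first command replays in real time, the second at double speed (each recorded pause is divided by 2), and the third at half speed.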
11,430
通过编写扫雷游戏提高你的 Bash 技巧
https://opensource.com/article/19/9/advanced-bash-building-minesweeper
2019-10-07T11:51:53
[ "扫雷" ]
https://linux.cn/article-11430-1.html
> > 那些令人怀念的经典游戏可是提高编程能力的好素材。今天就让我们仔细探索一番,怎么用 Bash 编写一个扫雷程序。 > > > ![](/data/attachment/album/201910/07/115136p51t04j584m4o18z.jpg) 我在编程教学方面不是专家,但当我想更好掌握某一样东西时,会试着找出让自己乐在其中的方法。比方说,当我想在 shell 编程方面更进一步时,我决定用 Bash 编写一个[扫雷](https://en.wikipedia.org/wiki/Minesweeper_(video_game))游戏来加以练习。 如果你是一个有经验的 Bash 程序员,希望在提高技巧的同时乐在其中,那么请跟着我编写一个你的运行在终端中的扫雷游戏。完整代码可以在这个 [GitHub 存储库](https://github.com/abhiTamrakar/playground/tree/master/bash_games)中找到。 ### 做好准备 在我编写任何代码之前,我列出了该游戏所必须的几个部分: 1. 显示雷区 2. 创建游戏逻辑 3. 创建判断单元格是否可选的逻辑 4. 记录可用和已查明(已排雷)单元格的个数 5. 创建游戏结束逻辑 ### 显示雷区 在扫雷中,游戏界面是一个由 2D 数组(列和行)组成的不透明小方格。每一格下都有可能藏有地雷。玩家的任务就是找到那些不含雷的方格,并且在这一过程中,不能点到地雷。这个 Bash 版本的扫雷使用 10x10 的矩阵,实际逻辑则由一个简单的 Bash 数组来完成。 首先,我先生成了一些随机数字。这将是地雷在雷区里的位置。控制地雷的数量,在开始编写代码之前,这么做会容易一些。实现这一功能的逻辑可以更好,但我这么做,是为了让游戏实现保持简洁,并有改进空间。(我编写这个游戏纯属娱乐,但如果你能将它修改的更好,我也是很乐意的。) 下面这些变量在整个过程中是不变的,声明它们是为了随机生成数字。就像下面的 `a` - `g` 的变量,它们会被用来计算可排除的地雷的值: ``` # 变量 score=0 # 会用来存放游戏分数 # 下面这些变量,用来随机生成可排除地雷的实际值 a="1 10 -10 -1" b="-1 0 1" c="0 1" d="-1 0 1 -2 -3" e="1 2 20 21 10 0 -10 -20 -23 -2 -1" f="1 2 3 35 30 20 22 10 0 -10 -20 -25 -30 -35 -3 -2 -1" g="1 4 6 9 10 15 20 25 30 -30 -24 -11 -10 -9 -8 -7" # # 声明 declare -a room # 声明一个 room 数组,它用来表示雷区的每一格。 ``` 接下来,我会用列(0-9)和行(a-j)显示出游戏界面,并且使用一个 10x10 矩阵作为雷区。(`M[10][10]` 是一个索引从 0-99,有 100 个值的数组。) 如想了解更多关于 Bash 数组的内容,请阅读这本书[那些关于 Bash 你所不了解的事: Bash 数组简介](https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays)。 创建一个叫 `plough` 的函数,我们先将标题显示出来:两个空行、列头,和一行 `-`,以示意往下是游戏界面: ``` printf '\n\n' printf '%s' " a b c d e f g h i j" printf '\n %s\n' "-----------------------------------------" ``` 然后,我初始化一个计数器变量,叫 `r`,它会用来记录已显示多少横行。注意,稍后在游戏代码中,我们会用同一个变量 `r`,作为我们的数组索引。 在 [Bash for 循环](https://opensource.com/article/19/6/how-write-loop-bash)中,用 `seq` 命令从 0 增加到 9。我用数字(`d%`)占位,来显示行号(`$row`,由 `seq` 定义): ``` r=0 # 计数器 for row in $(seq 0 9); do printf '%d ' "$row" # 显示 行数 0-9 ``` 在我们接着往下做之前,让我们看看到现在都做了什么。我们先横着显示 `[a-j]` 然后再将 `[0-9]` 的行号显示出来,我们会用这两个范围,来确定用户排雷的确切位置。 接着,在每行中,插入列,所以是时候写一个新的 `for` 循环了。这一循环管理着每一列,也就是说,实际上是生成游戏界面的每一格。我添加了一些辅助函数,你能在源码中看到它的完整实现。 对每一格来说,我们需要一些让它看起来像地雷的东西,所以我们先用一个点(`.`)来初始化空格。为了实现这一想法,我们用的是一个叫 [`is_null_field`](https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L114-L120) 的自定义函数。 同时,我们需要一个存储每一格具体值的数组,这儿会用到之前已定义的全局数组 [`room`](https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L41) , 并用 [变量 `r`](https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L74)作为索引。随着 `r` 的增加,遍历所有单元格,并随机部署地雷。 ``` for col in $(seq 0 9); do ((r+=1)) # 循环完一列行数加一 is_null_field $r # 假设这里有个函数,它会检查单元格是否为空,为真,则此单元格初始值为点(.) 
printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}" # 最后显示分隔符,注意,${room[$r]} 的第一个值为 '.',等于其初始值。 #结束 col 循环 done ``` 最后,为了保持游戏界面整齐好看,我会在每行用一个竖线作为结尾,并在最后结束行循环: ``` printf '%s\n' "|" # 显示出行分隔符 printf ' %s\n' "-----------------------------------------" # 结束行循环 done printf '\n\n' ``` 完整的 `plough` 代码如下: ``` plough() { r=0 printf '\n\n' printf '%s' " a b c d e f g h i j" printf '\n %s\n' "-----------------------------------------" for row in $(seq 0 9); do printf '%d ' "$row" for col in $(seq 0 9); do ((r+=1)) is_null_field $r printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}" done printf '%s\n' "|" printf ' %s\n' "-----------------------------------------" done printf '\n\n' } ``` 我花了点时间来思考,`is_null_field` 的具体功能是什么。让我们来看看,它到底能做些什么。在最开始,我们需要游戏有一个固定的状态。你可以随便选择个初始值,可以是一个数字或者任意字符。我最后决定,所有单元格的初始值为一个点(`.`),因为我觉得,这样会让游戏界面更好看。下面就是这一函数的完整代码: ``` is_null_field() { local e=$1 # 在数组 room 中,我们已经用过循环变量 'r' 了,这次我们用 'e' if [[ -z "${room[$e]}" ]];then room[$r]="." #这里用点(.)来初始化每一个单元格 fi } ``` 现在,我已经初始化了所有的格子,现在只要用一个很简单的函数就能得出当前游戏中还有多少单元格可以操作: ``` get_free_fields() { free_fields=0 # 初始化变量 for n in $(seq 1 ${#room[@]}); do if [[ "${room[$n]}" = "." ]]; then # 检查当前单元格是否等于初始值(.),结果为真,则记为空余格子。 ((free_fields+=1)) fi done } ``` 这是显示出来的游戏界面,`[a-j]` 为列,`[0-9]` 为行。 ![Minefield](/data/attachment/album/201910/07/115159fb5z9b3qn11v111q.png "Minefield") ### 创建玩家逻辑 玩家操作背后的逻辑在于,先从 [stdin](https://en.wikipedia.org/wiki/Standard_streams#Standard_input_(stdin)) 中读取数据作为坐标,然后再找出对应位置实际包含的值。这里用到了 Bash 的[参数扩展](https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html),来设法得到行列数。然后将代表列数的字母传给分支语句,从而得到其对应的列数。为了更好地理解这一过程,可以看看下面这段代码中,变量 `o` 所对应的值。 举个例子,玩家输入了 `c3`,这时 Bash 将其分成两个字符:`c` 和 `3`。为了简单起见,我跳过了如何处理无效输入的部分。 ``` colm=${opt:0:1} # 得到第一个字符,一个字母 ro=${opt:1:1} # 得到第二个字符,一个整数 case $colm in a ) o=1;; # 最后,通过字母得到对应列数。 b ) o=2;; c ) o=3;; d ) o=4;; e ) o=5;; f ) o=6;; g ) o=7;; h ) o=8;; i ) o=9;; j ) o=10;; esac ``` 下面的代码会计算用户所选单元格实际对应的数字,然后将结果储存在变量中。 这里也用到了很多的 `shuf` 命令,`shuf` 是一个专门用来生成随机序列的 [Linux 命令](https://linux.die.net/man/1/shuf)。`-i` 选项后面需要提供需要打乱的数或者范围,`-n` 选项则规定输出结果最多需要返回几个值。Bash 中,可以在两个圆括号内进行[数学计算](https://www.tldp.org/LDP/abs/html/dblparens.html),这里我们会多次用到。 还是沿用之前的例子,玩家输入了 `c3`。 接着,它被转化成了 `ro=3` 和 `o=3`。 之后,通过上面的分支语句代码, 将 `c` 转化为对应的整数,带进公式,以得到最终结果 `i` 的值。 ``` i=$(((ro*10)+o)) # 遵循运算规则,算出最终值 is_free_field $i $(shuf -i 0-5 -n 1) # 调用自定义函数,判断其指向空/可选择单元格。 ``` 仔细观察这个计算过程,看看最终结果 `i` 是如何计算出来的: ``` i=$(((ro*10)+o)) i=$(((3*10)+3))=$((30+3))=33 ``` 最后结果是 33。在我们的游戏界面显示出来,玩家输入坐标指向了第 33 个单元格,也就是在第 3 行(从 0 开始,否则这里变成 4),第 3 列。 ### 创建判断单元格是否可选的逻辑 为了找到地雷,在将坐标转化,并找到实际位置之后,程序会检查这一单元格是否可选。如不可选,程序会显示一条警告信息,并要求玩家重新输入坐标。 在这段代码中,单元格是否可选,是由数组里对应的值是否为点(`.`)决定的。如果可选,则重置单元格对应的值,并更新分数。反之,因为其对应值不为点,则设置变量 `not_allowed`。为简单起见,游戏中[警告消息](https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L143-L177)这部分源码,我会留给读者们自己去探索。 ``` is_free_field() { local f=$1 local val=$2 not_allowed=0 if [[ "${room[$f]}" = "." 
]]; then room[$f]=$val score=$((score+val)) else not_allowed=1 fi } ``` ![Extracting mines](/data/attachment/album/201910/07/115200yun3rw2r9b8fjw69.png "Extracting mines") 如输入坐标有效,且对应位置为地雷,如下图所示。玩家输入 `h6`,游戏界面会出现一些随机生成的值。在发现地雷后,这些值会被加入用户得分。 ![Extracting mines](/data/attachment/album/201910/07/115203umy7fft9vmfrvfto.png "Extracting mines") 还记得我们开头定义的变量,`a` - `g` 吗,我会用它们来确定随机生成地雷的具体值。所以,根据玩家输入坐标,程序会根据(`m`)中随机生成的数,来生成周围其他单元格的值(如上图所示)。之后将所有值和初始输入坐标相加,最后结果放在 `i`(计算结果如上)中。 请注意下面代码中的 `X`,它是我们唯一的游戏结束标志。我们将它添加到随机列表中。在 `shuf` 命令的魔力下,`X` 可以在任意情况下出现,但如果你足够幸运的话,也可能一直不会出现。 ``` m=$(shuf -e a b c d e f g X -n 1) # 将 X 添加到随机列表中,当 m=X,游戏结束 if [[ "$m" != "X" ]]; then # X 将会是我们爆炸地雷(游戏结束)的触发标志 for limit in ${!m}; do # !m 代表 m 变量的值 field=$(shuf -i 0-5 -n 1) # 然后再次获得一个随机数字 index=$((i+limit)) # 将 m 中的每一个值和 index 加起来,直到列表结尾 is_free_field $index $field done ``` 我想要游戏界面中,所有随机显示出来的单元格,都靠近玩家选择的单元格。 ![Extracting mines](/data/attachment/album/201910/07/115204ri4htjh4554yg4gd.png "Extracting mines") ### 记录已选择和可用单元格的个数 这个程序需要记录游戏界面中哪些单元格是可选择的。否则,程序会一直让用户输入数据,即使所有单元格都被选中过。为了实现这一功能,我创建了一个叫 `free_fields` 的变量,初始值为 `0`。用一个 `for` 循环,记录下游戏界面中可选择单元格的数量。 如果单元格所对应的值为点(`.`),则 `free_fields` 加一。 ``` get_free_fields() { free_fields=0 for n in $(seq 1 ${#room[@]}); do if [[ "${room[$n]}" = "." ]]; then ((free_fields+=1)) fi done } ``` 等下,如果 `free_fields=0` 呢? 这意味着,玩家已选择过所有单元格。如果想更好理解这一部分,可以看看这里的[源代码](https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L91)。 ``` if [[ $free_fields -eq 0 ]]; then # 这意味着你已选择过所有格子 printf '\n\n\t%s: %s %d\n\n' "You Win" "you scored" "$score" exit 0 fi ``` ### 创建游戏结束逻辑 对于游戏结束这种情况,我们这里使用了一些很[巧妙的技巧](https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L131-L141),将结果在屏幕中央显示出来。我把这部分留给读者朋友们自己去探索。 ``` if [[ "$m" = "X" ]]; then g=0 # 为了在参数扩展中使用它 room[$i]=X # 覆盖此位置原有的值,并将其赋值为X for j in {42..49}; do # 在游戏界面中央, out="gameover" k=${out:$g:1} # 在每一格中显示一个字母 room[$j]=${k^^} ((g+=1)) done fi ``` 最后,我们显示出玩家最关心的两行。 ``` if [[ "$m" = "X" ]]; then printf '\n\n\t%s: %s %d\n' "GAMEOVER" "you scored" "$score" printf '\n\n\t%s\n\n' "You were just $free_fields mines away." exit 0 fi ``` ![Minecraft Gameover](/data/attachment/album/201910/07/115207h9eshwh1sil1wsho.png "Minecraft Gameover") 文章到这里就结束了,朋友们!如果你想了解更多,具体可以查看我的 [GitHub 存储库](https://github.com/abhiTamrakar/playground/tree/master/bash_games),那儿有这个扫雷游戏的源代码,并且你还能找到更多用 Bash 编写的游戏。 我希望,这篇文章能激起你学习 Bash 的兴趣,并乐在其中。 --- via: <https://opensource.com/article/19/9/advanced-bash-building-minesweeper> 作者:[Abhishek Tamrakar](https://opensource.com/users/tamrakar) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wenwensnow](https://github.com/wenwensnow) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
I am no expert on teaching programming, but when I want to get better at something, I try to find a way to have fun with it. For example, when I wanted to get better at shell scripting, I decided to practice by programming a version of the [Minesweeper](https://en.wikipedia.org/wiki/Minesweeper_(video_game)) game in Bash. If you are an experienced Bash programmer and want to hone your skills while having fun, follow along to write your own version of Minesweeper in the terminal. The complete source code is found in this [GitHub repository](https://github.com/abhiTamrakar/playground/tree/master/bash_games).

## Getting ready

Before I started writing any code, I outlined the ingredients I needed to create my game:

- Print a minefield
- Create the gameplay logic
- Create logic to determine the available minefield
- Keep count of available and discovered (extracted) mines
- Create the endgame logic

## Print a minefield

In Minesweeper, the game world is a 2D array (columns and rows) of concealed cells. Each cell may or may not contain an explosive mine. The player's objective is to reveal cells that contain no mine, and to never reveal a mine. This Bash version of the game uses a 10x10 matrix, implemented using simple Bash arrays.

First, I assign some random variables. These are the locations that mines could be placed on the board. By limiting the number of locations, it will be easy to build on top of this. The logic could be better, but I wanted to keep the game looking simple and a bit immature. (I wrote this for fun, but I would happily welcome your contributions to make it look better.)

The variables below are some default variables, declared so we can draw from them randomly for field placement; we will use the variables a-g to calculate our extractable mines:

```
# variables
score=0 # will be used to store the score of the game
# variables below will be used to randomly get the extract-able cells/fields from our mine.
a="1 10 -10 -1"
b="-1 0 1"
c="0 1"
d="-1 0 1 -2 -3"
e="1 2 20 21 10 0 -10 -20 -23 -2 -1"
f="1 2 3 35 30 20 22 10 0 -10 -20 -25 -30 -35 -3 -2 -1"
g="1 4 6 9 10 15 20 25 30 -30 -24 -11 -10 -9 -8 -7"
#
# declarations
declare -a room # declare an array room, it will represent each cell/field of our mine.
```

Next, I print my board with columns (a-j) and rows (0-9), forming a 10x10 matrix to serve as the minefield for the game. (M[10][10] is a 100-value array with indexes 0-99.) If you want to know more about Bash arrays, read [You don't know Bash: An introduction to Bash arrays](https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays).

Let's call it a function, **plough**. We print the header first: two blank lines, the column headings, and a line to outline the top of the playing field:

```
printf '\n\n'
printf '%s' " a b c d e f g h i j"
printf '\n %s\n' "-----------------------------------------"
```

Next, I establish a counter variable, called **r**, to keep track of how many horizontal rows have been populated. Note that we will use the same counter variable '**r**' as our array index later in the game code. In a [Bash for loop](https://opensource.com/article/19/6/how-write-loop-bash), using the **seq** command to increment from 0 to 9, I print a digit (**%d**) to represent the row number ($row, which is defined by **seq**):

```
r=0 # our counter
for row in $(seq 0 9); do
printf '%d ' "$row" # print the row numbers from 0-9
```

Before we move ahead from here, let's check what we have made till now.
We printed the sequence **[a-j]** horizontally first, and then we printed the row numbers in the range **[0-9]**; we will use these two ranges as the user's input coordinates to locate the mine to extract.

Next, within each row there is a column intersection, so it's time to open a new **for** loop. This one manages each column, so it essentially generates each cell in the playing field. I have added some helper functions that you can see the full definition of in the source code. For each cell, we need something to make the field look like a mine, so we initialize the empty ones with a dot (.), using a custom function called [is_null_field](https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L114-L120). Also, we need an array variable to store the value for each cell; we will use the predefined global array variable [room](https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L41) along with an index [variable](https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L74). As **r** increments, we iterate over the cells, dropping mines along the way.

```
for col in $(seq 0 9); do
  ((r+=1)) # increment the counter as we move forward in column sequence
  is_null_field $r # assume a function which will check, if the field is empty, if so, initialize it with a dot(.)
  printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}" # finally print the separator, note that, the first value of ${room[$r]} will be '.', as it is just initialized.
# close col loop
done
```

Finally, I keep the board well-defined by enclosing the bottom of each row with a line, and then close the row loop:

```
printf '%s\n' "|" # print the line end separator
printf ' %s\n' "-----------------------------------------" # close row for loop
done
printf '\n\n'
```

The full **plough** function looks like:

```
plough() {
  r=0
  printf '\n\n'
  printf '%s' " a b c d e f g h i j"
  printf '\n %s\n' "-----------------------------------------"
  for row in $(seq 0 9); do
    printf '%d ' "$row"
    for col in $(seq 0 9); do
      ((r+=1))
      is_null_field $r
      printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}"
    done
    printf '%s\n' "|"
    printf ' %s\n' "-----------------------------------------"
  done
  printf '\n\n'
}
```

It took me some time to decide on needing **is_null_field**, so let's take a closer look at what it does. We need a dependable state from the beginning of the game. That choice is arbitrary; it could have been a number or any character. I decided to assume everything was declared as a dot (.) because I believe it makes the gameboard look pretty. Here's what that looks like:

```
is_null_field() {
  local e=$1 # we used index 'r' for array room already, let's call it 'e'
  if [[ -z "${room[$e]}" ]]; then
    room[$r]="." # this is where we put the dot(.) to initialize the cell/minefield
  fi
}
```

Now that I have all the cells in our mine initialized, I get a count of all available mines by declaring, and later calling, a simple function shown below:

```
get_free_fields() {
  free_fields=0 # initialize the variable
  for n in $(seq 1 ${#room[@]}); do
    if [[ "${room[$n]}" = "." ]]; then # check if the cell has the initial value dot(.), then count it as a free field.
      ((free_fields+=1))
    fi
  done
}
```

Here is the printed minefield, where [**a-j**] are columns and [**0-9**] are rows.
![Minefield](https://opensource.com/sites/default/files/uploads/minefield.png)

## Create the logic to drive the player

The player logic reads an option from [stdin](https://en.wikipedia.org/wiki/Standard_streams#Standard_input_(stdin)) as a coordinate to the mines and extracts the exact field on the minefield. It uses Bash's [parameter expansion](https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html) to extract the column and row inputs, then feeds the column to a switch that points to its equivalent integer notation on the board. To understand this, see the values getting assigned to the variable '**o**' in the switch-case statement below. For instance, a player might enter **c3**, which Bash splits into two characters: **c** and **3**. For simplicity, I'm skipping over how invalid entry is handled.

```
colm=${opt:0:1} # get the first char, the alphabet
ro=${opt:1:1} # get the second char, the digit
case $colm in
  a ) o=1;; # finally, convert the alphabet to its equivalent integer notation.
  b ) o=2;;
  c ) o=3;;
  d ) o=4;;
  e ) o=5;;
  f ) o=6;;
  g ) o=7;;
  h ) o=8;;
  i ) o=9;;
  j ) o=10;;
esac
```

Then it calculates the exact index and assigns the index of the input coordinates to that field. There is also a lot of use of the **shuf** command here. **shuf** is a [Linux utility](https://linux.die.net/man/1/shuf) designed to provide a random permutation of information, where the **-i** option denotes indexes or possible ranges to shuffle and **-n** denotes the maximum number of outputs given back. Double parentheses allow for [mathematical evaluation](https://www.tldp.org/LDP/abs/html/dblparens.html) in Bash, and we will use them heavily here.

Let's assume our previous example received **c3** via stdin. Then **ro=3**, and the switch-case statement above converted **c** to its equivalent integer, **o=3**; we put these into our formula to calculate the final index '**i**':

```
i=$(((ro*10)+o)) # Follow BODMAS rule, to calculate final index.
is_free_field $i $(shuf -i 0-5 -n 1) # call a custom function that checks if the final index value points to an empty/free cell/field.
```

Walking through this math to understand how the final index '**i**' is calculated:

```
i=$(((ro*10)+o))
i=$(((3*10)+3))=$((30+3))=33
```

The final index value is 33. On our board, printed above, the final index points to the 33rd cell, which is the 3rd row (starting from 0; otherwise the 4th) and the 3rd (c) column.

## Create the logic to determine the available minefield

To extract a mine, after the coordinates are decoded and the index is found, the program checks whether that field is available. If it's not, the program displays a warning, and the player chooses another coordinate. In this code, a cell is available if it contains a dot (**.**) character. Assuming it's available, the value in the cell is reset and the score is updated. If a cell is unavailable because it does not contain a dot, then a variable **not_allowed** is set. For brevity, I leave it to you to look at the source code of the game for the contents of [the warning statement](https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L143-L177) in the game logic.
```
is_free_field() {
  local f=$1
  local val=$2
  not_allowed=0
  if [[ "${room[$f]}" = "." ]]; then
    room[$f]=$val
    score=$((score+val))
  else
    not_allowed=1
  fi
}
```

![Extracting mines](https://opensource.com/sites/default/files/uploads/extractmines.png)

If the coordinate entered is available, the mine is discovered, as shown below. When **h6** is provided as input, some values are populated at random on our minefield; these values are added to the user's score after the mines are extracted.

![Extracting mines](https://opensource.com/sites/default/files/uploads/extractmines2.png)

Now remember the variables we declared at the start, [a-g]; I will now use them here to extract random mines, assigning their values to the variable **m** using Bash indirection. So, depending upon the input coordinates, the program picks a random set of additional numbers (**m**) to calculate the additional fields to be populated (as shown above) by adding them to the original input coordinates, represented here by **i** (calculated above).

Please note that the character **X** in the code snippet below is our sole GAME-OVER trigger. We added it to our shuffle list so it appears at random; with the beauty of the **shuf** command, it can appear after any number of chances, or may not even appear for our lucky winning user.

```
m=$(shuf -e a b c d e f g X -n 1) # add an extra char X to the shuffle, when m=X, its GAMEOVER
if [[ "$m" != "X" ]]; then # X will be our explosive mine(GAME-OVER) trigger
  for limit in ${!m}; do # !m represents the value of m
    field=$(shuf -i 0-5 -n 1) # again get a random number and
    index=$((i+limit)) # add values of m to our index and calculate a new index till m reaches its last element.
    is_free_field $index $field
  done
```

I want all revealed cells to be contiguous to the cell selected by the player.

![Extracting mines](https://opensource.com/sites/default/files/uploads/extractmines3.png)

## Keep a count of available and extracted mines

The program needs to keep track of available cells in the minefield; otherwise, it keeps asking the player for input even after all the cells have been revealed. To implement this, I create a variable called **free_fields**, initially setting it to 0, and increment it in a **for** loop over the remaining available cells/fields in our minefield: if a cell contains a dot (**.**), the count of **free_fields** is incremented.

```
get_free_fields() {
  free_fields=0
  for n in $(seq 1 ${#room[@]}); do
    if [[ "${room[$n]}" = "." ]]; then
      ((free_fields+=1))
    fi
  done
}
```

Wait, what if **free_fields=0**? That means our user has extracted all the mines. Please feel free to look at [the exact code](https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L91) to understand better.

```
if [[ $free_fields -eq 0 ]]; then # well that means you extracted all the mines.
  printf '\n\n\t%s: %s %d\n\n' "You Win" "you scored" "$score"
  exit 0
fi
```

## Create the logic for Gameover

For the Gameover situation, we print to the middle of the terminal using some [nifty logic](https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L131-L141) that I leave to the reader to explore:
```
if [[ "$m" = "X" ]]; then
  g=0 # to use it in parameter expansion
  room[$i]=X # override the index and print X
  for j in {42..49}; do # in the middle of the minefields,
    out="gameover"
    k=${out:$g:1} # print one alphabet in each cell
    room[$j]=${k^^}
    ((g+=1))
  done
fi
```

Finally, we can print the two lines which are most awaited:

```
if [[ "$m" = "X" ]]; then
  printf '\n\n\t%s: %s %d\n' "GAMEOVER" "you scored" "$score"
  printf '\n\n\t%s\n\n' "You were just $free_fields mines away."
  exit 0
fi
```

![Minecraft Gameover](https://opensource.com/sites/default/files/uploads/gameover.png)

That's it, folks! If you want to know more, access the source code for this Minesweeper game and other games in Bash from my [GitHub repo](https://github.com/abhiTamrakar/playground/tree/master/bash_games). I hope it gives you some inspiration to learn more Bash and to have fun while doing so.
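The article skips over invalid-entry handling for brevity; as a rough sketch of how that gap could be filled (the helper name read_move and the regex are my own additions, not part of the game's repository), a guard like this could run before the coordinate is split and converted:

```
#!/usr/bin/env bash
# Hypothetical helper, not from the game's source: keep prompting until the
# player enters one column letter (a-j) followed by one row digit (0-9).
read_move() {
  local opt
  while true; do
    read -r -p "Enter a move (column a-j, row 0-9, e.g. c3): " opt
    if [[ "$opt" =~ ^[a-j][0-9]$ ]]; then
      printf '%s\n' "$opt"   # hand the validated move back to the caller
      return 0
    fi
    echo "Invalid move '$opt', try again." >&2
  done
}

# Usage, mirroring the article's parsing:
opt=$(read_move)
colm=${opt:0:1}
ro=${opt:1:1}
echo "column=$colm row=$ro"
```

Because read -p prints its prompt on standard error, the command substitution captures only the validated move.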
11,433
现在你可以借助 Insync 在 Linux 中原生使用 OneDrive
https://itsfoss.com/use-onedrive-on-linux/
2019-10-08T11:54:22
[ "Insync", "OneDrive" ]
https://linux.cn/article-11433-1.html
[OneDrive](https://onedrive.live.com) 是微软的一项云存储服务,它为每个用户提供 5GB 的免费存储空间。它已与微软帐户集成,如果你使用 Windows,那么已在其中预安装了 OneDrive。 OneDrive 无法在 Linux 中作为桌面应用使用。你可以通过网页访问已存储的文件,但无法像在文件管理器中那样使用云存储。 好消息是,你现在可以使用一个非官方工具,它可让你在 Ubuntu 或其他 Linux 发行版中使用 OneDrive。 当 [Insync](https://www.insynchq.com) 在 Linux 上支持 Google Drive 时,它变成了 Linux 上非常流行的高级第三方同步工具。我们有篇对 [Insync 支持 Google Drive](https://itsfoss.com/insync-linux-review/) 的详细点评文章。 而最近[发布的 Insync 3](https://www.insynchq.com/blog/insync-3/) 支持了 OneDrive。因此在本文中,我们将看下如何在 Insync 中使用 OneDrive 以及它的新功能。 > > 非 FOSS 警告 > > > 少数开发者会对非 FOSS 软件引入 Linux 感到痛苦。作为专注于桌面 Linux 的门户,即使不是 FOSS,我们也会在此介绍此类软件。 > > > Insync 3 既不是开源软件,也不免费使用。你只有 15 天的试用期进行测试。如果你喜欢它,那么可以按每个帐户终生 29.99 美元的费用购买。 > > > 我们不会拿钱来推广它们(以防你这么想)。我们不会在这里这么做。 > > > ### 在 Linux 中通过 Insync 获得原生 OneDrive 体验 ![](/data/attachment/album/201910/08/115426ho4nonbbn8nbxgbn.png) 尽管它是一个付费工具,但依赖 OneDrive 的用户或许希望在他们的 Linux 系统中获得同步 OneDrive 的无缝体验。 首先,你需要从[官方页面](https://www.insynchq.com/downloads?start=true)下载适合你 Linux 发行版的软件包。 * [下载 Insync](https://www.insynchq.com/downloads) 你也可以选择添加仓库并进行安装。你将在 Insync 的[官方网站](https://www.insynchq.com/downloads)看到说明。 安装完成后,只需启动并选择 OneDrive 选项。 ![](/data/attachment/album/201910/08/115431y5i8lecf0c8w5fe6.png) 另外,要注意的是,你添加的每个 OneDrive 或 Google Drive 帐户都需要单独的许可证。 现在,在授权 OneDrive 帐户后,你必须选择一个用于同步所有内容的基础文件夹,这是 Insync 3 中的一项新功能。 ![Insync 3 Base Folder](/data/attachment/album/201910/08/115431mpnsnd1dpnx2r2rn.png) 除此之外,设置完成后,你还可以选择性地同步本地或云端的文件/文件夹。 ![Insync Selective Sync](/data/attachment/album/201910/08/115436vztr1oi9iciokwpl.png) 你还可以通过添加自己的规则来自定义同步选项,以忽略/同步所需的文件夹和文件,这完全是可选的。 ![Insync Customize Sync Preferences](/data/attachment/album/201910/08/115436es2fwjaq3jzx8ljd.png) 最后,就这样完成了。 ![Insync 3](/data/attachment/album/201910/08/115437wgsswxllgwwigwle.png) 你现在可以在包括带有 Insync 的 Linux 桌面在内的多个平台使用 OneDrive 开始同步文件/文件夹。除了上面所有新功能/更改之外,你还可以在 Insync 上获得更快/更流畅的体验。 此外,借助 Insync 3,你可以查看同步进度: ![](/data/attachment/album/201910/08/115437u7fvbtsw3t3fbt03.png) ### 总结 总的来说,对于希望在 Linux 系统上同步 OneDrive 的用户而言,Insync 3 是令人印象深刻的升级。如果你不想付款,你可以尝试其他 [Linux 的免费云服务](https://itsfoss.com/cloud-services-linux/)。 你如何看待 Insync?如果你已经在使用它,到目前为止的体验如何?在下面的评论中让我们知道你的想法。 --- via: <https://itsfoss.com/use-onedrive-on-linux/> 作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
[OneDrive](https://onedrive.live.com/?ref=itsfoss.com) is a cloud storage service from Microsoft, and it provides 5 GB of free storage to every user. It is integrated with your Microsoft account, and if you use Windows, you already have OneDrive preinstalled.

OneDrive as a desktop application is not available on Linux. You can access your stored files via the web interface, but you won't get the native feel of using the cloud storage in the file manager. You can [use the rclone CLI tool to sync OneDrive in Linux](https://itsfoss.com/use-onedrive-linux-rclone/), but it's not easy to do and doesn't give the native desktop application experience.

The good news is that you can now use an unofficial tool that lets you use OneDrive on Ubuntu or other Linux distributions.

[Insync](https://itsfoss.com/insync-linux-review/) is a quite popular premium third-party sync tool when it comes to accessing Google Drive or OneDrive on Linux. We already have a detailed review of the Insync tool if you are keen to explore more: [Insync: The Hassleless Way of Using Google Drive on Linux](https://itsfoss.com/insync-linux-review/) (Insync can be your all-in-one solution to sync files to the cloud for Google Drive, OneDrive, and Dropbox).

Here, we are going to focus on accessing and using a OneDrive account via Insync on Linux. (This article contains affiliate links; see It's FOSS's [affiliate policy](https://itsfoss.com/affiliate-policy/).)

## Get A Native OneDrive Experience on Linux With Insync

Even though it is a premium tool, users who rely on OneDrive may want it for a seamless experience syncing OneDrive on their Linux system.

To get started, you have to download the suitable package for your Linux distribution from the [official download page](https://www.insynchq.com/?fp_ref=chmod777&ref=itsfoss.com). You can also choose to add the repository and install it that way; you will find the instructions at [Insync's official website](https://www.insynchq.com/?fp_ref=chmod777&ref=itsfoss.com).

*Insync is not FOSS. If you like it, you can purchase a license as per your requirements. We cover it here because it is a useful tool for Linux users.*

Once you have it installed, just launch it and pick the OneDrive option.

![insync homescreen screenshot on ubuntu](https://itsfoss.com/content/images/2023/10/insync-home-screen.png)

Next, log in to your OneDrive account using your system's default web browser:

![insync login microsoft](https://itsfoss.com/content/images/2023/10/insync-onedrive-login.png)

Now, after authorizing the OneDrive account, it asks you to select the cloud folders/files that you want to sync to your computer.

![](https://itsfoss.com/content/images/2023/09/insync-start-sync.png)

You can decide to do it later or proceed with the desired selection. Next, you will be asked to tweak your sync preferences as shown below:

![insync preferences](https://itsfoss.com/content/images/2023/09/insync-sync-preferences.png)

You can also customize the sync preferences by adding your own rules to ignore/sync the folders and files that you want; it is totally optional (and is available under a different subscription plan for businesses/developers).
Here's how that would look:

![Insync Customize Sync](https://itsfoss.com/content/images/wordpress/2019/09/insync-customize-sync.png)

Finally, you have it ready:

![insync homescreen ui](https://itsfoss.com/content/images/2023/09/insync-home-screen-ui.png)

You can now start syncing files/folders using OneDrive across multiple platforms, including your Linux desktop with Insync. As per your sync requirements, you can choose to perform:

- **1-way sync**
- **2-way sync**
- **Local Selective Sync**
- **Cloud Selective Sync**

Furthermore, you can also integrate Insync with your file manager. For instance, if you are using the Nautilus file manager on Ubuntu or Fedora, get the package from [Insync's download page](https://www.insynchq.com/downloads?fp_ref=chmod777&ref=itsfoss.com).

![insync download page screenshot](https://itsfoss.com/content/images/2023/10/insync-file-manager-add-on.jpg)

Once done, you can check the sync status right from the file manager for the folders synced to the cloud:

![insync status on file manager](https://itsfoss.com/content/images/2023/09/insync-file-manager-status.png)

**Suggested Read 📖** [Top 10 Best Free Cloud Storage Services for Linux](https://itsfoss.com/cloud-services-linux/)

## Wrapping Up

Overall, Insync is an impressive tool for those looking to sync OneDrive on their Linux system. In case you would rather not pay, you can try other [free cloud services for Linux](https://itsfoss.com/cloud-services-linux/).

*What do you think about Insync? If you're already using it, how's the experience so far? Let us know your thoughts in the comments below.*
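For reference, installing a downloaded Insync package manually usually looks like the sketch below; the exact file names are placeholders, since the download page serves versioned, distribution-specific packages:

```
# Ubuntu/Debian: install a downloaded .deb (file name is a placeholder)
sudo apt install ./insync_<version>_amd64.deb

# Fedora: install a downloaded .rpm (file name is a placeholder)
sudo dnf install ./insync-<version>.x86_64.rpm
```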
11,434
使用 guiscrcpy 将你的安卓手机的屏幕投射到你的电脑
https://opensource.com/article/19/9/mirror-android-screen-guiscrcpy
2019-10-08T12:32:23
[ "安卓" ]
https://linux.cn/article-11434-1.html
> > 使用这个基于 scrcpy 的开源应用从你的电脑上访问你的安卓设备。 > > > ![](/data/attachment/album/201910/08/123143nlz718152v5nf5n8.png) 在未来,你所需的一切信息皆触手可及,并且全部会以全息的形式出现在空中,即使你在驾驶汽车时也可以与之交互。不过,那是未来,在那一刻到来之前,我们所有人都只能将信息分散在笔记本电脑、手机、平板电脑和智能冰箱上。不幸的是,这意味着当我们需要来自该设备的信息时,我们通常必须查看该设备。 虽然不完全是像全息终端或飞行汽车那样酷炫,但 [srevin saju](http://opensource.com/users/srevinsaju) 开发的 [guiscrcpy](https://github.com/srevinsaju/guiscrcpy) 是一个可以在一个地方整合多个屏幕,让你有一点未来感觉的应用程序。 Guiscrcpy 是一个基于屡获殊荣的一个开源引擎 [scrcpy](https://github.com/Genymobile/scrcpy) 的一个开源项目(GUN GPLv3 许可证)。使用 Guiscrcpy 可以将你的安卓手机的屏幕投射到你的电脑,这样你就可以查看手机上的一切东西。Guiscrcpy 支持 Linux、Windows 和 MacOS。 不像其他 scrcpy 的替代软件一样,Guiscrcpy 并不仅仅是 scrcpy 的一个简单的复制品。该项目优先考虑了与其他开源项目的协作。因此,Guiscrcpy 对 scrcpy 来说是一个扩展,或者说是一个用户界面层。将 Python 3 GUI 与 scrcpy 分开可以确保没有任何东西干扰 scrcpy 后端的效率。你可以投射到 1080P 分辨率的屏幕,因为它的超快的渲染速度和超低的 CPU 使用,即使在低端的电脑上也可以运行的很顺畅。 Scrcpy 是 Guiscrcpy 项目的基石。它是一个基于命令行的应用,因此它没有处理你的手势操作的用户界面。它也没有提供返回按钮和主页按钮,而且它需要你对 [Linux 终端](https://www.redhat.com/sysadmin/navigating-filesystem-linux-terminal)比较熟悉。Guiscrcpy 给 scrcpy 添加了图形面板。因此,任何用户都可以使用它,而且不需要通过网络发送任何信息就可以投射和控制他的设备。Guiscrcpy 同时也为 Windows 用户和 Linux 用户提供了编译好的二进制文件,以方便你的使用。 ### 安装 Guiscrcpy 在你安装 Guiscrcpy 之前,你需要先安装它的依赖包。尤其是要安装 scrcpy。安装 scrcpy 最简单的方式可能就是使用对于大部分 Linux 发行版都安装了的 [snap](https://snapcraft.io/) 工具。如果你的电脑上安装并使用了 snap,那么你就可以使用下面的命令来一步安装 scrcpy。 ``` $ sudo snap install scrcpy ``` 当你安装完 scrcpy,你就可以安装其他的依赖包了。[Simple DirectMedia Layer](https://www.libsdl.org/)(SDL 2.0) 是一个显示和控制你设备屏幕的工具包。[Android Debug Bridge](https://developer.android.com/studio/command-line/adb) (ADB) 命令可以连接你的安卓手机到电脑。 在 Fedora 或者 CentOS: ``` $ sudo dnf install SDL2 android-tools ``` 在 Ubuntu 或者 Debian: ``` $ sudo apt install SDL2 android-tools-adb ``` 在另一个终端中,安装 Python 依赖项: ``` $ python3 -m pip install -r requirements.txt --user ``` ### 设置你的手机 为了能够让你的手机接受 adb 连接。必须让你的手机开启开发者选项。为了打开开发者选项,打开“设置”,然后选择“关于手机”,找到“版本号”(它也可能位于“软件信息”面板中)。不敢置信,只要你连续点击“版本号”七次,你就可以打开开发者选项。(LCTT 译注:显然这里是以 Google 原生的 Android 作为说明的,你的不同品牌的安卓手机打开开发者选项的方式或有不同。) ![Enabling Developer Mode](/data/attachment/album/201910/08/123228p6so9wwzwi703ow7.jpg "Enabling Developer Mode") 更多更全面的连接手机的方式,请参考[安卓开发者文档](https://developer.android.com/studio/debug/dev-options)。 一旦你设置好了你的手机,将你的手机通过 USB 线插入到你的电脑中(或者通过无线的方式进行连接,确保你已经配置好了无线连接)。 ### 使用 Guiscrcpy 当你启动 guiscrcpy 的时候,你就能看到一个主控制窗口。点击窗口里的 “Start scrcpy” 按钮。只要你设置好了开发者模式并且通过 USB 或者 WiFi 将你的手机连接到电脑。guiscrcpy 就会连接你的手机。 ![Guiscrcpy main screen](/data/attachment/album/201910/08/123231nljld9jnkyj3dny3.png "Guiscrcpy main screen") 它还包括一个可写入的配置系统,你可以将你的配置文件写入到 `~/.config` 目录。可以在使用前保存你的首选项。 guiscrcpy 底部的面板是一个浮动的窗口,可以帮助你执行一些基本的控制动作。它包括了主页按钮、返回按钮、电源按钮以及一些其他的按键。这些按键在安卓手机上都非常常用。值得注意的是,这个模块并不是与 scrcpy 的 SDL 进行交互。因此,它可以毫无延迟的执行。换句话说,这个操作窗口是直接通过 adb 与你的手机进行交互而不是通过 scrcpy。 ![guiscrcpy's bottom panel](/data/attachment/album/201910/08/123232gk25dzbcg32in3k2.png "guiscrcpy's bottom panel") 这个项目目前十分活跃,不断地有新的特性加入其中。最新版本的具有了手势操作和通知界面。 有了这个 guiscrcpy,你不仅仅可以在你的电脑屏幕上看到你的手机,你还可以就像操作你的实体手机一样点击 SDL 窗口,或者使用浮动窗口上的按钮与之进行交互。 ![guiscrcpy running on Fedora 30](/data/attachment/album/201910/08/123236ccvjc41z1pw4vjds.jpg "guiscrcpy running on Fedora 30") Guiscrcpy 是一个有趣且实用的应用程序,它提供的功能应该是任何现代设备(尤其是 Android 之类的平台)的正式功能。自己尝试一下,为当今的数字生活增添一些未来主义的感觉。 --- via: <https://opensource.com/article/19/9/mirror-android-screen-guiscrcpy> 作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[amwps290](https://github.com/amwps290) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
In the future, all the information you need will be just one gesture away, and it will all appear in midair as a hologram that you can interact with even while you're driving your flying car. That's the future, though, and until that arrives, we're all stuck with information spread across a laptop, a phone, a tablet, and a smart refrigerator. Unfortunately, that means when we need information from a device, we generally have to look at that device. While not quite holographic terminals or flying cars, [guiscrcpy](https://github.com/srevinsaju/guiscrcpy) by developer [Srevin Saju](http://opensource.com/users/srevinsaju) is an application that consolidates multiple screens in one location and helps to capture that futuristic feeling. Guiscrcpy is an open source (GNU GPLv3 licensed) project based on the award-winning [scrcpy](https://github.com/Genymobile/scrcpy) open source engine. With guiscrcpy, you can cast your Android screen onto your computer screen so you can view it along with everything else. Guiscrcpy supports Linux, Windows, and MacOS. Unlike many scrcpy alternatives, Guiscrcpy is not a fork of scrcpy. The project prioritizes collaborating with other open source projects, so Guiscrcpy is an extension, or a graphical user interface (GUI) layer, for scrcpy. Keeping the Python 3 GUI separate from scrcpy ensures that nothing interferes with the efficiency of the scrcpy backend. You can screencast up to 1080p resolution and, because it uses ultrafast rendering and surprisingly little CPU, it works even on a relatively low-end PC. Scrcpy, Guiscrcpy's foundation, is a command-line application, so it doesn't have GUI buttons to handle gestures, it doesn't provide a Back or Home button, and it requires familiarity with the [Linux terminal](https://www.redhat.com/sysadmin/navigating-filesystem-linux-terminal). Guiscrcpy adds GUI panels to scrcpy, so any user can run it—and cast and control their device—without sending any information over the internet. Everything works over USB or WiFi (using only a local network). Guiscrcpy also adds a desktop launcher to Linux and Windows systems and provides compiled binaries for Linux and Windows. ## Installing Guiscrcpy Before installing Guiscrcpy, you must install its dependencies, most notably scrcpy. Possibly the easiest way to install scrcpy is with [snap](https://snapcraft.io/), which is available for most major Linux distributions. If you have snap installed and active, then you can install scrcpy with one easy command: `$ sudo snap install scrcpy` While it's installing, you can install the other dependencies. The [Simple DirectMedia Layer](https://www.libsdl.org/) (SDL 2.0) toolkit is required to display and interact with the phone screen, and the [Android Debug Bridge](https://developer.android.com/studio/command-line/adb) (adb) command connects your computer to your Android phone. On Fedora or CentOS: `$ sudo dnf install SDL2 android-tools` On Ubuntu or Debian: `$ sudo apt install SDL2 android-tools-adb` In another terminal, install the Python dependencies: `$ python3 -m pip install -r requirements.txt --user` ## Setting up your phone For your phone to accept an adb connection, it must have Developer Mode enabled. To enable Developer Mode on Android, go to **Settings **and select **About phone**. In **About phone**, find the **Build number** (it may be in the **Software information** panel). Believe it or not, to enable Developer Mode, tap **Build number** seven times in a row. 
![Enabling Developer Mode](https://opensource.com/sites/default/files/uploads/developer-mode.jpg)

For full instructions on all the many ways you can configure your phone for access from your computer, read the [Android developer documentation](https://developer.android.com/studio/debug/dev-options).

Once that's set up, plug your phone into a USB port on your computer (or ensure that you've configured it correctly to connect over WiFi).

## Using guiscrcpy

When you launch guiscrcpy, you see its main control window. In this window, click the **Start scrcpy** button. This connects to your phone, as long as it's set up in Developer Mode and connected to your computer over USB or WiFi.

![Guiscrcpy main screen](https://opensource.com/sites/default/files/uploads/guiscrcpy-main.png)

It also includes a configuration-writing system, where you can write a configuration file to your **~/.config** directory to preserve your preferences between uses.

The bottom panel of guiscrcpy is a floating window that helps you perform basic controlling actions. It has buttons for Home, Back, Power, and more. These are common functions on Android devices, but an important feature of this module is that it doesn't interact with scrcpy's SDL, so it can function with no lag. In other words, this panel communicates directly with your connected device through adb rather than scrcpy.

![guiscrcpy's bottom panel](https://opensource.com/sites/default/files/uploads/guiscrcpy-bottompanel.png)

The project is in active development and new features are still being added. The latest build has an interface for gestures and notifications.

With guiscrcpy, you not only *see* your phone on your screen, but you can also interact with it, either by clicking the SDL window itself, just as you would tap your physical phone, or by using the buttons on the panels.

![guiscrcpy running on Fedora 30](https://opensource.com/sites/default/files/uploads/guiscrcpy-screenshot.jpg)

Guiscrcpy is a fun and useful application that provides features that ought to be official features of any modern device, especially a platform like Android. Try it out yourself, and add some futuristic pragmatism to your present-day digital life.
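A supplementary note on the WiFi option mentioned above: these are the standard adb commands for switching a USB-attached phone to TCP/IP, not guiscrcpy-specific ones, and the IP address below is a placeholder for your phone's address on the local network:

```
# With the phone attached over USB and USB debugging enabled:
adb tcpip 5555            # restart adbd on the device, listening on TCP port 5555

# Unplug the cable, then connect over the local network:
adb connect 192.168.1.20:5555

# Verify the device is listed before launching guiscrcpy:
adb devices
```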
11,436
我买了一台 Linux 笔记本
https://opensource.com/article/19/7/linux-laptop
2019-10-08T13:40:00
[ "Linux", "笔记本" ]
https://linux.cn/article-11436-1.html
> > Tuxedo 让买一台开箱即用的 Linux 笔记本变得容易。 > > > ![](/data/attachment/album/201910/08/133924vnmbklqh5jkshkmj.jpg) 最近,我开始使用我买的 Linux 笔记本计算机 Tuxedo Book BC1507。十年前,如果有人告诉我,十年后我可以从 [System76](https://system76.com/)、[Slimbook](https://slimbook.es/en/) 和 [Tuxedo](https://www.tuxedocomputers.com/) 等公司购买到高质量的“企鹅就绪”的笔记本电脑。我可能会发笑。好吧,现在我也在笑,但是很开心! 除了为免费/自由开源软件(FLOSS)设计计算机之外,这三家公司最近[宣布](https://www.tuxedocomputers.com/en/Infos/News/Tuxedo-Computers-stands-for-Free-Software-and-Security-.tuxedo)都试图通过切换到[Coreboot](https://coreboot.org/)来消除专有的 BIOS 软件。 ### 买一台 Tuxedo Computers 是一家德国公司,生产支持 Linux 的笔记本电脑。实际上,如果你要使用其他的操作系统,则它的价格会更高。 购买他们的计算机非常容易。Tuxedo 提供了许多付款方式:不仅包括信用卡,而且还包括 PayPal 甚至银行转帐(LCTT 译注:我们需要支付宝和微信支付……此外,要国际配送,还需要支付运输费和清关费用等)。只需在 Tuxedo 的网页上填写银行转帐表格,公司就会给你发送银行信息。 Tuxedo 可以按需构建每台计算机,只需选择基本模型并浏览下拉菜单以选择不同的组件,即可轻松准确地选择所需内容。页面上有很多信息可以指导你进行购买。 如果你选择的 Linux 发行版与推荐的发行版不同,则 Tuxedo 会进行“网络安装”,因此请准备好网络电缆以完成安装,也可以将你首选的镜像文件刻录到 USB 盘上。我通过外部 DVD 阅读器来安装刻录了 openSUSE Leap 15.1 安装程序的 DVD,但是你可用你自己的方式。 我选择的型号最多可以容纳两个磁盘:一个 SSD,另一个可以是 SSD 或常规硬盘。由于已经超出了我的预算,因此我决定选择传统的 1TB 磁盘并将 RAM 增加到 16GB。该处理器是具有四个内核的第八代 i5。我选择了背光西班牙语键盘、1920×1080/96dpi 屏幕和 SD 卡读卡器。总而言之,这是一个很棒的系统。 如果你对默认的英语或德语键盘感觉满意,甚至可以要求在 Meta 键上印上一个企鹅图标!我需要的西班牙语键盘则不提供此选项。 ### 收货并开箱使用 付款完成后仅六个工作日,完好包装的计算机就十分安全地到达了我家。打开计算机包装并解锁电池后,我准备好开始浪了。 ![Tuxedo Book BC1507](/data/attachment/album/201910/08/134049x7m8vlvfqmxl8x38.jpg "Tuxedo Book BC1507") *我的(物理)桌面上的新玩具。* 该电脑的设计确实很棒,而且感觉扎实。即使此型号的外壳不是铝制的(LCTT 译注:他们有更好看的铝制外壳的型号),也可以保持凉爽。风扇真的很安静,气流像许多其他笔记本电脑一样导向后边缘,而不是流向侧面。电池可提供数小时的续航时间。BIOS 中的一个名为 FlexiCharger 的选项会在达到一定百分比后停止为电池充电,因此在插入电源长时间工作时,无需卸下电池。 键盘真的很舒适,而且非常安静。甚至触摸板按键也很安静!另外,你可以轻松调整背光键盘上的光强度。 最后,很容易访问笔记本电脑中的每个组件,因此可以毫无问题地对计算机进行更新或维修。Tuxedo 甚至送了几个备用螺丝! ### 结语 经过一个月的频繁使用,我对该系统感到非常满意。它完全满足了我的要求,并且一切都很完美。 因为它们通常是高端系统,所以包含 Linux 的计算机往往比较昂贵。如果你将 Tuxedo 或 Slimbook 电脑的价格与更知名品牌的类似规格的价格进行比较,价格相差无几。如果你想要一台使用自由软件的强大系统,请毫不犹豫地支持这些公司:他们所提供的物有所值。 请在评论中让我们知道你在 Tuxedo 和其他“企鹅友好”公司的经历。 --- 本文基于 Ricardo 的博客 [From Mind to Type](https://frommindtotype.wordpress.com/) 上发表的“ [我的新企鹅笔记本电脑:Tuxedo-Book-BC1507](https://frommindtotype.wordpress.com/2019/06/17/my-new-penguin-ready-laptop-tuxedo-book-bc1507/)”。 --- via: <https://opensource.com/article/19/7/linux-laptop> 作者:[Ricardo Berlasso](https://opensource.com/users/rgb-es) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
Recently, I bought and started using a Tuxedo Book BC1507, a Linux laptop computer. Ten years ago, if someone had told me that, by the end of the decade, I could buy top-quality, "penguin-ready" laptops from companies such as [System76](https://system76.com/), [Slimbook](https://slimbook.es/en/), and [Tuxedo](https://www.tuxedocomputers.com/), I probably would have laughed. Well, now I'm laughing, but with joy! Going beyond designing computers for free/libre open source software (FLOSS), all three companies recently [announced](https://www.tuxedocomputers.com/en/Infos/News/Tuxedo-Computers-stands-for-Free-Software-and-Security-.tuxedo) they are trying to eliminate proprietary BIOS software by switching to [Coreboot](https://coreboot.org/). ## Buying it Tuxedo Computers is a German company that builds Linux-ready laptops. In fact, if you want a different operating system, it costs more. Buying the computer was incredibly easy. Tuxedo offers many payment methods: not only credit cards but also PayPal and even bank transfers. Just fill out the bank transfer form on Tuxedo's web page, and the company will send you the bank coordinates. Tuxedo builds every computer on demand, and picking exactly what you want is as easy as selecting the basic model and exploring the drop-down menus to select different components. There is a lot of information on the page to guide you in the purchase. If you pick a different Linux distribution from the recommended one, Tuxedo does a "net install," so have a network cable ready to finish the installation, or you can burn your preferred image onto a USB key. I used a DVD with the openSUSE Leap 15.1 installer through an external DVD reader instead, but you get the idea. The model I chose accepts up to two disks: one SSD and the other either an SSD or a conventional hard drive. As I was already over budget, I decided to pick a conventional 1TB disk and increase the RAM to 16GB. The processor is an 8th Generation i5 with four cores. I selected a back-lit Spanish keyboard, a 1920×1080/96dpi screen, and an SD card reader—all in all, a great system. If you're fine with the default English or German keyboard, you can even ask for a penguin icon on the Meta key! I needed a Spanish keyboard, which doesn't offer this option. ## Receiving and using it The perfectly packaged computer arrived in total safety to my door just six working days after the payment was registered. After unpacking the computer and unlocking the battery, I was ready to roll. ![Tuxedo Book BC1507 Tuxedo Book BC1507](https://opensource.com/sites/default/files/uploads/tuxedo-600_0.jpg) The new toy on top of my (physical) desktop. The computer's design is really nice and feels solid. Even though the chassis on this model is not aluminum, it stays cool. The fan is really quiet, and the airflow goes to the back edge, not to the sides, as in many other laptops. The battery provides several hours of autonomy from an electrical outlet. An option in the BIOS called FlexiCharger stops charging the battery after it reaches a certain percentage, so you don't need to remove the battery when you work for a long time while plugged in. The keyboard is really comfortable and surprisingly quiet. Even the touchpad keys are quiet! Also, you can easily adjust the light intensity on the back-lit keyboard. Finally, it's easy to access every component in the laptop so the computer can be updated or repaired without problems. Tuxedo even sends spare screws! 
## Conclusion

After a month of heavy use, I'm really happy with the system. I got exactly what I asked for, and everything works perfectly.

Because they are usually high-end systems, Linux-included computers tend to be on the expensive side of the spectrum. If you compare the price of a Tuxedo or Slimbook computer with something with similar specifications from a more established brand, the prices are not that different. If you are after a powerful system to use with free software, don't hesitate to support these companies: what they offer is worth the price.

Let us know in the comments about your experience with Tuxedo and other "penguin-friendly" companies.

*This article is based on "My new 'penguin ready' laptop: Tuxedo-Book-BC1507," published on Ricardo's blog, From Mind to Type.*
11,438
CentOS 8 安装图解
https://www.linuxtechi.com/centos-8-installation-guide-screenshots/
2019-10-09T12:12:00
[ "CentOS" ]
https://linux.cn/article-11438-1.html
![](/data/attachment/album/201910/09/121153tj05o5t2ee79jl63.png) 继 RHEL 8 发布之后,CentOS 社区也发布了让人期待已久的 CentOS 8,并发布了两种模式: * CentOS stream:滚动发布的 Linux 发行版,适用于需要频繁更新的开发者 * CentOS:类似 RHEL 8 的稳定操作系统,系统管理员可以用其部署或配置服务和应用 在这篇文章中,我们会使用图解的方式演示 CentOS 8 的安装方法。 ### CentOS 8 的新特性 * DNF 成为了默认的软件包管理器,同时 yum 仍然是可用的 * 使用网络管理器(`nmcli` 和 `nmtui`)进行网络配置,移除了网络脚本 * 使用 Podman 进行容器管理 * 引入了两个新的包仓库:BaseOS 和 AppStream * 使用 Cockpit 作为默认的系统管理工具 * 默认使用 Wayland 作为显示服务器 * `iptables` 将被 `nftables` 取代 * 使用 Linux 内核 4.18 * 提供 PHP 7.2、Python 3.6、Ansible 2.8、VIM 8.0 和 Squid 4 ### CentOS 8 所需的最低硬件配置: * 2 GB RAM * 64 位 x86 架构、2 GHz 或以上的 CPU * 20 GB 硬盘空间 ### CentOS 8 安装图解 #### 第一步:下载 CentOS 8 ISO 文件 在 CentOS 官方网站 <https://www.centos.org/download/> 下载 CentOS 8 ISO 文件。 #### 第二步: 创建 CentOS 8 启动介质(USB 或 DVD) 下载 CentOS 8 ISO 文件之后,将 ISO 文件烧录到 USB 移动硬盘或 DVD 光盘中,作为启动介质。 然后重启系统,在 BIOS 中设置为从上面烧录好的启动介质启动。 #### 第三步:选择“安装 CentOS Linux 8.0”选项 当系统从 CentOS 8 ISO 启动介质启动之后,就可以看到以下这个界面。选择“Install CentOS Linux 8.0”(安装 CentOS Linux 8.0)选项并按回车。 ![Choose-Install-CentOS8](/data/attachment/album/201910/09/121203kwt4yvjxmnvzrjrp.jpg) #### 第四步:选择偏好语言 选择想要在 CentOS 8 **安装过程**中使用的语言,然后继续。 ![Select-Language-CentOS8-Installation](/data/attachment/album/201910/09/121203dhfnlfyh8p0ajbqz.jpg) #### 第五步:准备安装 CentOS 8 这一步我们会配置以下内容: * 键盘布局 * 日期和时间 * 安装来源 * 软件选择 * 安装目标 * Kdump ![Installation-Summary-CentOS8](/data/attachment/album/201910/09/121204djyz9iwhf7re9nrn.jpg) 如上图所示,安装向导已经自动提供了“<ruby> 键盘布局 <rt> Keyboard </rt></ruby>”、“<ruby> 时间和日期 <rt> Time &amp; Date </rt></ruby>”、“<ruby> 安装来源 <rt> Installation Source </rt></ruby>”和“<ruby> 软件选择 <rt> Software Selection </rt></ruby>”的选项。 如果你需要修改以上设置,点击对应的图标就可以了。例如修改系统的时间和日期,只需要点击“<ruby> 时间和日期 <rt> Time &amp; Date </rt></ruby>”,选择正确的时区,然后点击“<ruby> 完成 <rt> Done </rt></ruby>”即可。 ![TimeZone-CentOS8-Installation](/data/attachment/album/201910/09/121204ed2al52p527zbzbp.jpg) 在软件选择选项中选择安装的模式。例如“<ruby> 包含图形界面 <rt> Server with GUI </rt></ruby>”选项会在安装后的系统中提供图形界面,而如果想安装尽可能少的额外软件,可以选择“<ruby> 最小化安装 <rt> Minimal Install </rt></ruby>”。 ![Software-Selection-CentOS8-Installation](/data/attachment/album/201910/09/121205adk7zvsxxfh1av25.jpg) 这里我们选择“<ruby> 包含图形界面 <rt> Server with GUI </rt></ruby>”,点击“<ruby> 完成 <rt> Done </rt></ruby>”。 Kdump 功能默认是开启的。尽管这是一个强烈建议开启的功能,但也可以点击对应的图标将其关闭。 如果想要在安装过程中对网络进行配置,可以点击“<ruby> 网络与主机名 <rt> Network &amp; Host Name </rt></ruby>”选项。 ![Networking-During-CentOS8-Installation](/data/attachment/album/201910/09/121205com66etz1eql6v2q.jpg) 如果系统连接到启用了 DHCP 功能的调制解调器上,就会在启动网络接口的时候自动获取一个 IP 地址。如果需要配置静态 IP,点击“<ruby> 配置 <rt> Configure </rt></ruby>”并指定 IP 的相关信息。除此以外我们还将主机名设置为 “linuxtechi.com”。 完成网络配置后,点击“<ruby> 完成 <rt> Done </rt></ruby>”。 最后我们要配置“<ruby> 安装目标 <rt> Installation Destination </rt></ruby>”,指定 CentOS 8 将要安装到哪一个硬盘,以及相关的分区方式。 ![Installation-Destination-Custom-CentOS8](/data/attachment/album/201910/09/121206cs7p325477n6wdyp.jpg) 点击“<ruby> 完成 <rt> Done </rt></ruby>”。 如图所示,我为 CentOS 8 分配了 40 GB 的硬盘空间。有两种分区方案可供选择:如果由安装向导进行自动分区,可以从“<ruby> 存储配置 <rt> Storage Configuration </rt></ruby>”中选择“<ruby> 自动 <rt> Automatic </rt></ruby>”选项;如果想要自己手动进行分区,可以选择“<ruby> 自定义 <rt> Custom </rt></ruby>”选项。 在这里我们选择“<ruby> 自定义 <rt> Custom </rt></ruby>”选项,并按照以下的方式创建基于 LVM 的分区: * `/boot` – 2 GB (ext4 文件系统) * `/` – 12 GB (xfs 文件系统) * `/home` – 20 GB (xfs 文件系统) * `/tmp` – 5 GB (xfs 文件系统) * Swap – 1 GB (xfs 文件系统) 首先创建 `/boot` 标准分区,设置大小为 2GB,如下图所示: ![boot-partition-CentOS8-Installation](/data/attachment/album/201910/09/121206q9t4oto94am49szt.jpg) 点击“<ruby> 添加挂载点 <rt> Add mount point </rt></ruby>”。 再创建第二个分区 `/`,并设置大小为 12GB。点击加号,指定挂载点和分区大小,点击“<ruby> 添加挂载点 <rt> 
Add mount point </rt></ruby>”即可。 ![slash-root-partition-centos8-installation](/data/attachment/album/201910/09/121207oyi4oogd4dcphczd.jpg) 然后在页面上将 `/` 分区的分区类型从标准更改为 LVM,并点击“<ruby> 更新设置 <rt> Update Settings </rt></ruby>”。 ![Change-Partition-Type-CentOS8](/data/attachment/album/201910/09/121207fyxo93878mi7i7xy.jpg) 如上图所示,安装向导已经自动创建了一个卷组。如果想要更改卷组的名称,只需要点击“<ruby> 卷组 <rt> Volume Group </rt></ruby>”标签页中的“<ruby> 修改 <rt> Modify </rt></ruby>”选项。 同样地,创建 `/home` 分区和 `/tmp` 分区,分别将大小设置为 20GB 和 5GB,并设置分区类型为 LVM。 ![home-partition-CentOS8-Installation](/data/attachment/album/201910/09/121208q8kzhhuyuz3ui8l8.jpg) ![tmp-partition-centos8-installation](/data/attachment/album/201910/09/121208k20ilel7qon2iddq.jpg) 最后创建<ruby> 交换分区 <rt> Swap Partition </rt></ruby>。 ![Swap-Partition-CentOS8-Installation](/data/attachment/album/201910/09/121209lrjngsgiganzttr2.jpg) 点击“<ruby> 添加挂载点 <rt> Add mount point </rt></ruby>”。 在完成所有分区设置后,点击“<ruby> 完成 <rt> Done </rt></ruby>”。 ![Choose-Done-after-manual-partition-centos8](/data/attachment/album/201910/09/121209u930opgp386p3p9m.jpg) 在下一个界面,点击“<ruby> 应用更改 <rt> Accept changes </rt></ruby>”,以上做的更改就会写入到硬盘中。 ![Accept-changes-CentOS8-Installation](/data/attachment/album/201910/09/121211uwkri3puh3dhuh8u.jpg) #### 第六步:选择“开始安装” 完成上述的所有更改后,回到先前的安装概览界面,点击“<ruby> 开始安装 <rt> Begin Installation </rt></ruby>”以开始安装 CentOS 8。 ![Begin-Installation-CentOS8](/data/attachment/album/201910/09/121211nk6hnlccu2ahc2xw.jpg) 下面这个界面表示安装过程正在进行中。 ![Installation-progress-centos8](/data/attachment/album/201910/09/121211nt8jejvnjb9xnbht.jpg) 要设置 root 用户的口令,只需要点击 “<ruby> root 口令 <rt> Root Password </rt></ruby>”选项,输入一个口令,然后点击“<ruby> 创建用户 <rt> User Creation </rt></ruby>”选项创建一个本地用户。 ![Root-Password-CentOS8-Installation](/data/attachment/album/201910/09/121212i6b635335of6h3up.jpg) 填写新创建的用户的详细信息。 ![Local-User-Details-CentOS8](/data/attachment/album/201910/09/121212id94d9k9kju9j0tz.jpg) 在安装完成后,安装向导会提示重启系统。 ![CentOS8-Installation-Progress](/data/attachment/album/201910/09/121213zl22jvs619sq6p6w.jpg) #### 第七步:完成安装并重启系统 安装完成后要重启系统。只需点击“<ruby> 重启 <rt> Reboot </rt></ruby>”按钮。 ![Installation-Completed-CentOS8](/data/attachment/album/201910/09/121213zo84qfefagpr38q2.jpg) 注意:重启完成后,记得要把安装介质断开,并将 BIOS 的启动介质设置为硬盘。 #### 第八步:启动新安装的 CentOS 8 并接受许可协议 在 GRUB 引导菜单中,选择 CentOS 8 进行启动。 ![Grub-Boot-CentOS8](/data/attachment/album/201910/09/121213rr1ys3h7r7r57x08.jpg) 同意 CentOS 8 的许可证,点击“<ruby> 完成 <rt> Done </rt></ruby>”。 ![Accept-License-CentOS8-Installation](/data/attachment/album/201910/09/121214x4dif5d5g64uu1su.jpg) 在下一个界面,点击“<ruby> 完成配置 <rt> Finish Configuration </rt></ruby>”。 ![Finish-Configuration-CentOS8-Installation](/data/attachment/album/201910/09/121214koqwuozbb9brogw8.jpg) #### 第九步:配置完成后登录 同意 CentOS 8 的许可证以及完成配置之后,会来到登录界面。 ![Login-screen-CentOS8](/data/attachment/album/201910/09/121215td78fmhb661ffl17.jpg) 使用刚才创建的用户以及对应的口令登录,按照提示进行操作,就可以看到以下界面。 ![CentOS8-Ready-Use-Screen](/data/attachment/album/201910/09/121216fi9anhzio4pnnpnp.jpg) 点击“<ruby> 开始使用 CentOS Linux <rt> Start Using CentOS Linux </rt></ruby>”。 ![Desktop-Screen-CentOS8](/data/attachment/album/201910/09/121216c344z3khvh04kphe.jpg) 以上就是 CentOS 8 的安装过程,至此我们已经完成了 CentOS 8 的安装。 欢迎给我们发送评论。 --- via: <https://www.linuxtechi.com/centos-8-installation-guide-screenshots/> 作者:[Pradeep Kumar](https://www.linuxtechi.com/author/pradeep/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
After the **RHEL 8** release, the **CentOS** community released its much-awaited Linux distribution as **CentOS 8**. It is released in two forms:

- **CentOS Stream** – designed for developers, who will get updates quite frequently.
- **CentOS** – a stable, RHEL 8-like OS on which sysadmins can install and configure servers and applications.

In this article, we will demonstrate how to install CentOS 8 Server step by step with screenshots.

#### New features in CentOS 8

- DNF is the default package manager, though yum can also be used.
- Network configuration is controlled by NetworkManager (nmcli & nmtui), as the network scripts are removed.
- Podman utility to manage containers
- Introduction of two new package repositories: BaseOS and AppStream
- Cockpit available as the default server management tool
- Wayland is the default display server
- iptables is replaced by nftables
- Linux kernel 4.18
- PHP 7.2, Python 3.6, Ansible 2.8, VIM 8.0 and Squid 4

#### Minimum System Requirements for CentOS 8

- 2 GB RAM
- 2 GHz or higher processor
- 20 GB hard disk
- 64-bit x86 system

#### CentOS 8 Installation Steps with Screenshots

#### Step:1) Download CentOS 8 ISO File

Download the CentOS 8 ISO file from its official site, [https://www.centos.org/download/](https://www.centos.org/download/)

#### Step:2) Create CentOS 8 bootable media (USB / DVD)

Once you have downloaded the CentOS 8 ISO file, burn it onto either a USB stick or a DVD to make it bootable. Then reboot the system on which you want to install CentOS 8 and set the boot medium to USB or DVD in the BIOS settings.

#### Step:3) Choose the "Install CentOS Linux 8.0" option

When the system boots up from the CentOS 8 bootable media, we will get the following screen. Choose "**Install CentOS Linux 8.0**" and hit Enter.

#### Step:4) Select your preferred language

Choose the language that suits your CentOS 8 installation and then click Continue.

#### Step:5) Preparing the CentOS 8 Installation

In this step we will configure the following:

- Keyboard Layout
- Date / Time
- Installation Source
- Software Selection
- Installation Destination
- Kdump

As we can see in the window above, the installer has automatically picked the '**Keyboard**' layout, '**Time & Date**', '**Installation Source**' and '**Software Selection**'.

If you want to change any of these settings, click on the respective icon. For example, to change the system's time and date, click on '**Time & Date**', choose the time zone that suits your installation, and then click **Done**.

Choose your preferred option from "**Software Selection**": if you want to install a server with a GUI, choose the "**Server with GUI**" option, and if you want to do a minimal installation, choose "**Minimal Install**". In this tutorial we will go with the "**Server with GUI**" option; click Done.

**Kdump** is enabled by default. If you wish to disable it, click on its icon and disable it, but it is strongly recommended to keep kdump enabled.

If you wish to configure networking during the installation, click on "**Network & Host Name**".

If your system is connected to a modem running DHCP, it will automatically pick up an IP whenever we enable the interface; if you wish to configure a static IP instead, click on '**Configure**' and specify the IP details there. Apart from this, we have also set the host name as "**linuxtechi.com**".
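As an aside: since CentOS 8 manages networking with NetworkManager, the same static IP setup can also be done after installation with nmcli. The connection name and addresses below are examples, not values from this walkthrough:

nmcli connection modify ens33 ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns 8.8.8.8 ipv4.method manual
nmcli connection up ens33

The first command stores the static settings on the connection profile; the second re-activates the connection so they take effect.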
Once you are done with network changes, click on Done, Now finally configure ‘**Installation Destination**‘, in this step we will specify on which disk we will install CentOS 8 and what would be its partition scheme. Click on Done As we can see I have 40 GB disk space for CentOS 8 installation, here we have two options to create partition scheme, if you want installer to create automatic partition on 40 GB disk space then choose “**Automatic**” from **Storage Configuration** and if you want to create partitions manually then choose “**Custom**” option. In this tutorial I will create custom partitions by choosing “Custom” option. I will create following LVM based partitions, - /boot – 2 GB (ext4 file system) - / – 12 GB (xfs file system) - /home – 20 GB (xfs file system) - /tmp – 5 GB (xfs file system) - Swap – 1 GB First create /boot as standard partition of size 2 GB, steps are shown below, Click on “**Add mount point**” Create second partition as / of size 12 GB on LVM, click on ‘+’ symbol and specify mount point and size and then click on “Add mount point” In next screen change Partition Type from standard to LVM for / partition and click on update settings As we can see above, installer has automatically created a volume group, if want to change the name of that volume group then click on “**Modify**” option from “**Volume Group**” Tab Similarly create next partitions as /home and /tmp of size 20 GB and 5 GB respectively and also change their partition type from standard to **LVM**, Finally create swap partition, Click on “Add mount point” Once you are done with all partition creations then click on Done, In the next window, click on “**Accept changes**“, it will write the changes to disk, #### Step:6) Choose “Begin Installation” Once we Accept the changes in above window then we will move back to installation summary screen, there click on “**Begin Installation**” to start the installation Below screen confirms that installation has been started, To set root password, click on “**Root Password**” option and then specify the password string and Click on “**User creation**” option to create a local user Local User details, Installation is progress and once it is completed, installer will prompt us to reboot the system #### Step:7) Installation Completed and reboot system Once the installation is completed, reboot your system, Click on Reboot **Note:** After the reboot, don’t forget to remove the installation media and set the boot medium as disk from bios. #### Step:8) Boot newly installed CentOS 8 and Accept License From the grub menu, select the first option to boot CentOS 8, Accept CentOS 8 License and then click on Done, In the next screen, click on “**Finish Configuration**” #### Step:9) Login Screen after finishing the configuration We will get the following login screen after accepting CentOS 8 license and finishing the configuration Use the same credentials of the user that you created during the installation. Follow the screen instructions and then finally we will get the following screen, Click on “**Start Using CentOS Linux**” That’s all from this tutorial, this confirms we have successfully installed CentOS 8. Please do share your valuable feedback and comments. **Read More** : **How to Install and Use Cockpit on CentOS 8 / RHEL 8** **Also Read** : [Top 7 Security Hardening Tips for CentOS 8 / RHEL 8 Server](https://www.linuxtechi.com/harden-secure-centos-8-rhel-8-server/)
11,440
如何通过 SSH 在远程 Linux 系统上运行命令
https://www.2daygeek.com/execute-run-linux-commands-remote-system-over-ssh/
2019-10-09T12:55:00
[ "SSH" ]
https://linux.cn/article-11440-1.html
![](/data/attachment/album/201910/09/125518dcrqxqr5zb1q1qhf.jpg) 我们有时可能需要在远程机器上运行一些命令。如果只是偶尔进行的操作,要实现这个目的,可以登录到远程系统上直接执行命令。但是每次都这么做的话,就有点烦人了。既然如此,有没有摆脱这种麻烦操作的更佳方案? 是的,你可以从你本地系统上执行这些操作,而不用登录到远程系统上。这有什么好处吗?毫无疑问。这会为你节省很多好时光。 这是怎么实现的?SSH 允许你无需登录到远程计算机就可以在它上面运行命令。 **通用语法如下所示:** ``` $ ssh [用户名]@[远程主机名或 IP] [命令或脚本] ``` ### 1) 如何通过 SSH 在远程 Linux 系统上运行命令 下面的例子允许用户通过 ssh 在远程 Linux 机器上运行 [df 命令](https://www.2daygeek.com/linux-check-disk-space-usage-df-command/)。 ``` $ ssh [email protected] df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/centos-root 27G 4.4G 23G 17% / devtmpfs 903M 0 903M 0% /dev tmpfs 920M 0 920M 0% /dev/shm tmpfs 920M 9.3M 910M 2% /run tmpfs 920M 0 920M 0% /sys/fs/cgroup /dev/sda1 1014M 179M 836M 18% /boot tmpfs 184M 8.0K 184M 1% /run/user/42 tmpfs 184M 0 184M 0% /run/user/1000 ``` ### 2) 如何通过 SSH 在远程 Linux 系统上运行多条命令 下面的例子允许用户通过 ssh 在远程 Linux 机器上一次运行多条命令。 同时在远程 Linux 系统上运行 `uptime` 命令和 `free` 命令。 ``` $ ssh [email protected] "uptime && free -m" 23:05:10 up 10 min, 0 users, load average: 0.00, 0.03, 0.03 total used free shared buffers cached Mem: 1878 432 1445 1 100 134 -/+ buffers/cache: 197 1680 Swap: 3071 0 3071 ``` ### 3) 如何通过 SSH 在远程 Linux 系统上运行带 sudo 权限的命令 下面的例子允许用户通过 ssh 在远程 Linux 机器上运行带有 [sudo 权限](https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/) 的 `fdisk` 命令。 普通用户不允许执行系统二进制(`/usr/sbin/`)目录下提供的命令。用户需要 root 权限来运行它。 所以你需要 root 权限,好在 Linux 系统上运行 [fdisk 命令](https://www.2daygeek.com/linux-fdisk-command-to-manage-disk-partitions/)。`which` 命令返回给定命令的完整可执行路径。 ``` $ which fdisk /usr/sbin/fdisk ``` ``` $ ssh -t [email protected] "sudo fdisk -l" [sudo] password for daygeek: Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk label type: dos Disk identifier: 0x000bf685 Device Boot Start End Blocks Id System /dev/sda1 * 2048 2099199 1048576 83 Linux /dev/sda2 2099200 62914559 30407680 8e Linux LVM Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk /dev/mapper/centos-root: 29.0 GB, 28982640640 bytes, 56606720 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Connection to centos7.2daygeek.com closed. ``` ### 4) 如何通过 SSH 在远程 Linux 系统上运行带 sudo 权限的服务控制命令 下面的例子允许用户通过 ssh 在远程 Linux 机器上运行带有 sudo 权限的服务控制命令。 ``` $ ssh -t [email protected] "sudo systemctl restart httpd" [sudo] password for daygeek: Connection to centos7.2daygeek.com closed. 
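# 附:重启服务之后,可以用同样的方式立即确认它是否恢复正常运行(这里假设重启的是 httpd 服务):
# $ ssh -t [用户名]@[远程主机名或 IP] "sudo systemctl is-active httpd"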
``` ### 5) 如何通过非标准端口 SSH 在远程 Linux 系统上运行命令 下面的例子允许用户通过 ssh 在使用了非标准端口的远程 Linux 机器上运行 [hostnamectl 命令](https://www.2daygeek.com/four-methods-to-change-the-hostname-in-linux/)。 ``` $ ssh -p 2200 [email protected] hostnamectl Static hostname: Ubuntu18.2daygeek.com Icon name: computer-vm Chassis: vm Machine ID: 27f6c2febda84dc881f28fd145077187 Boot ID: bbeccdf932be41ddb5deae9e5f15183d Virtualization: oracle Operating System: Ubuntu 18.04.2 LTS Kernel: Linux 4.15.0-60-generic Architecture: x86-64 ``` ### 6) 如何将远程系统的输出保存到本地系统 下面的例子允许用户通过 ssh 在远程 Linux 机器上运行 [top 命令](https://www.2daygeek.com/understanding-linux-top-command-output-usage/),并将输出保存到本地系统。 ``` $ ssh [email protected] "top -bc | head -n 35" > /tmp/top-output.txt ``` ``` cat /tmp/top-output.txt top - 01:13:11 up 18 min, 1 user, load average: 0.01, 0.05, 0.10 Tasks: 168 total, 1 running, 167 sleeping, 0 stopped, 0 zombie %Cpu(s): 0.0 us, 6.2 sy, 0.0 ni, 93.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 1882300 total, 1176324 free, 342392 used, 363584 buff/cache KiB Swap: 2097148 total, 2097148 free, 0 used. 1348140 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 4943 daygeek 20 0 162052 2248 1612 R 10.0 0.1 0:00.07 top -bc 1 root 20 0 128276 6936 4204 S 0.0 0.4 0:03.08 /usr/lib/sy+ 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kthreadd] 3 root 20 0 0 0 0 S 0.0 0.0 0:00.25 [ksoftirqd/+ 4 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:+ 5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:+ 7 root rt 0 0 0 0 S 0.0 0.0 0:00.00 [migration/+ 8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcu_bh] 9 root 20 0 0 0 0 S 0.0 0.0 0:00.77 [rcu_sched] 10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [lru-add-dr+ 11 root rt 0 0 0 0 S 0.0 0.0 0:00.01 [watchdog/0] 13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kdevtmpfs] 14 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [netns] 15 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [khungtaskd] 16 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [writeback] 17 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kintegrity+ 18 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset] 19 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset] 20 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset] ``` 或者你也可以使用以下格式在远程系统上运行多条命令: ``` $ ssh [email protected] << EOF hostnamectl free -m grep daygeek /etc/passwd EOF ``` 上面命令的输出如下: ``` Pseudo-terminal will not be allocated because stdin is not a terminal. 
Static hostname: CentOS7.2daygeek.com Icon name: computer-vm Chassis: vm Machine ID: 002f47b82af248f5be1d67b67e03514c Boot ID: dca9a1ba06374d7d96678f9461752482 Virtualization: kvm Operating System: CentOS Linux 7 (Core) CPE OS Name: cpe:/o:centos:centos:7 Kernel: Linux 3.10.0-957.el7.x86_64 Architecture: x86-64 total used free shared buff/cache available Mem: 1838 335 1146 11 355 1314 Swap: 2047 0 2047 daygeek:x:1000:1000:2daygeek:/home/daygeek:/bin/bash ``` ### 7) 如何在远程系统上运行本地 Bash 脚本 下面的例子允许用户通过 ssh 在远程 Linux 机器上运行本地 [bash 脚本](https://www.2daygeek.com/understanding-linux-top-command-output-usage/) `remote-test.sh`。 创建一个 shell 脚本并执行它。 ``` $ vi /tmp/remote-test.sh #!/bin/bash #Name: remote-test.sh #-------------------- uptime free -m df -h uname -a hostnamectl ``` 上面命令的输出如下: ``` $ ssh [email protected] 'bash -s' < /tmp/remote-test.sh 01:17:09 up 22 min, 1 user, load average: 0.00, 0.02, 0.08 total used free shared buff/cache available Mem: 1838 333 1148 11 355 1316 Swap: 2047 0 2047 Filesystem Size Used Avail Use% Mounted on /dev/mapper/centos-root 27G 4.4G 23G 17% / devtmpfs 903M 0 903M 0% /dev tmpfs 920M 0 920M 0% /dev/shm tmpfs 920M 9.3M 910M 2% /run tmpfs 920M 0 920M 0% /sys/fs/cgroup /dev/sda1 1014M 179M 836M 18% /boot tmpfs 184M 12K 184M 1% /run/user/42 tmpfs 184M 0 184M 0% /run/user/1000 Linux CentOS7.2daygeek.com 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux Static hostname: CentOS7.2daygeek.com Icon name: computer-vm Chassis: vm Machine ID: 002f47b82af248f5be1d67b67e03514c Boot ID: dca9a1ba06374d7d96678f9461752482 Virtualization: kvm Operating System: CentOS Linux 7 (Core) CPE OS Name: cpe:/o:centos:centos:7 Kernel: Linux 3.10.0-957.el7.x86_64 Architecture: x86-64 ``` 或者也可以使用管道。如果你觉得输出不太好看,再做点修改让它更优雅些。 ``` $ vi /tmp/remote-test-1.sh #!/bin/bash #Name: remote-test.sh echo "---------System Uptime--------------------------------------------" uptime echo -e "\n" echo "---------Memory Usage---------------------------------------------" free -m echo -e "\n" echo "---------Disk Usage-----------------------------------------------" df -h echo -e "\n" echo "---------Kernel Version-------------------------------------------" uname -a echo -e "\n" echo "---------HostName Info--------------------------------------------" hostnamectl echo "------------------------------------------------------------------" ``` 上面脚本的输出如下: ``` $ cat /tmp/remote-test.sh | ssh [email protected] Pseudo-terminal will not be allocated because stdin is not a terminal. 
---------System Uptime-------------------------------------------- 03:14:09 up 2:19, 1 user, load average: 0.00, 0.01, 0.05 ---------Memory Usage--------------------------------------------- total used free shared buff/cache available Mem: 1838 376 1063 11 398 1253 Swap: 2047 0 2047 ---------Disk Usage----------------------------------------------- Filesystem Size Used Avail Use% Mounted on /dev/mapper/centos-root 27G 4.4G 23G 17% / devtmpfs 903M 0 903M 0% /dev tmpfs 920M 0 920M 0% /dev/shm tmpfs 920M 9.3M 910M 2% /run tmpfs 920M 0 920M 0% /sys/fs/cgroup /dev/sda1 1014M 179M 836M 18% /boot tmpfs 184M 12K 184M 1% /run/user/42 tmpfs 184M 0 184M 0% /run/user/1000 tmpfs 184M 0 184M 0% /run/user/0 ---------Kernel Version------------------------------------------- Linux CentOS7.2daygeek.com 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux ---------HostName Info-------------------------------------------- Static hostname: CentOS7.2daygeek.com Icon name: computer-vm Chassis: vm Machine ID: 002f47b82af248f5be1d67b67e03514c Boot ID: dca9a1ba06374d7d96678f9461752482 Virtualization: kvm Operating System: CentOS Linux 7 (Core) CPE OS Name: cpe:/o:centos:centos:7 Kernel: Linux 3.10.0-957.el7.x86_64 Architecture: x86-64 ``` ### 8) 如何同时在多个远程系统上运行多条指令 下面的 bash 脚本允许用户同时在多个远程系统上运行多条指令。使用简单的 `for` 循环实现。 为了实现这个目的,你可以尝试 [PSSH 命令](https://www.2daygeek.com/pssh-parallel-ssh-run-execute-commands-on-multiple-linux-servers/) 或 [ClusterShell 命令](https://www.2daygeek.com/clustershell-clush-run-commands-on-cluster-nodes-remote-system-in-parallel-linux/) 或 [DSH 命令](https://www.2daygeek.com/dsh-run-execute-shell-commands-on-multiple-linux-servers-at-once/)。 ``` $ vi /tmp/multiple-host.sh for host in CentOS7.2daygeek.com CentOS6.2daygeek.com do ssh daygeek@${host} "uname -a;uptime;date;w" done ``` 上面脚本的输出如下: ``` $ sh multiple-host.sh Linux CentOS7.2daygeek.com 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux 01:33:57 up 39 min, 1 user, load average: 0.07, 0.06, 0.06 Wed Sep 25 01:33:57 CDT 2019 01:33:57 up 39 min, 1 user, load average: 0.07, 0.06, 0.06 USER TTY FROM [email protected] IDLE JCPU PCPU WHAT daygeek pts/0 192.168.1.6 01:08 23:25 0.06s 0.06s -bash Linux CentOS6.2daygeek.com 2.6.32-754.el6.x86_64 #1 SMP Tue Jun 19 21:26:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux 23:33:58 up 39 min, 0 users, load average: 0.00, 0.00, 0.00 Tue Sep 24 23:33:58 MST 2019 23:33:58 up 39 min, 0 users, load average: 0.00, 0.00, 0.00 USER TTY FROM [email protected] IDLE JCPU PCPU WHAT ``` ### 9) 如何使用 sshpass 命令添加一个密码 如果你觉得每次输入密码很麻烦,我建议你视你的需求选择以下方法中的一项来解决这个问题。 如果你经常进行类似的操作,我建议你设置 [免密码认证](https://www.2daygeek.com/configure-setup-passwordless-ssh-key-based-authentication-linux/),因为它是标准且永久的解决方案。 如果你一个月只是执行几次这些任务,我推荐你使用 `sshpass` 工具。只需要使用 `-p` 参数选项提供你的密码即可。 ``` $ sshpass -p '在这里输入你的密码' ssh -p 2200 [email protected] ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 08:00:27:18:90:7f brd ff:ff:ff:ff:ff:ff inet 192.168.1.12/24 brd 192.168.1.255 scope global dynamic eth0 valid_lft 86145sec preferred_lft 86145sec inet6 fe80::a00:27ff:fe18:907f/64 scope link tentative dadfailed valid_lft forever preferred_lft forever ``` --- via: 
<https://www.2daygeek.com/execute-run-linux-commands-remote-system-over-ssh/> 作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
404
Not Found
null
11,441
GNU binutils 里的九种武器
https://opensource.com/article/19/10/gnu-binutils
2019-10-10T11:54:54
[ "二进制" ]
/article-11441-1.html
> > 二进制分析是计算机行业中最被低估的技能。 > > > ![](/data/attachment/album/201910/10/115409g9nkdm2omutduw7u.jpg) 想象一下,在无法访问某个软件的源代码时,仍然能够理解该软件的实现方式,在其中找到漏洞,并且更厉害的是还能修复错误。所有这些都是在只有二进制文件时做到的。这听起来就像是超能力,对吧? 你也可以拥有这样的超能力,GNU 二进制实用程序(binutils)就是一个很好的起点。[GNU binutils](https://en.wikipedia.org/wiki/GNU_Binutils) 是一个二进制工具集,默认情况下所有 Linux 发行版中都会安装这些二进制工具。 二进制分析是计算机行业中最被低估的技能。它主要由恶意软件分析师、反向工程师和使用底层软件的人使用。 本文探讨了 binutils 可用的一些工具。我使用的是 RHEL,但是这些示例应该在任何 Linux 发行版上可以运行。 ``` [~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.6 (Maipo) [~]# [~]# uname -r 3.10.0-957.el7.x86_64 [~]# ``` 请注意,某些打包命令(例如 `rpm`)在基于 Debian 的发行版中可能不可用,因此请使用等效的 `dpkg` 命令替代。 ### 软件开发的基础知识 在开源世界中,我们很多人都专注于源代码形式的软件。当软件的源代码随时可用时,很容易获得源代码的副本,打开喜欢的编辑器,喝杯咖啡,然后就可以开始探索了。 但是源代码不是在 CPU 上执行的代码,在 CPU 上执行的是二进制或者说是机器语言指令。二进制或可执行文件是编译源代码时获得的。熟练的调试人员通常深谙这种差异。 ### 编译的基础知识 在深入研究 binutils 软件包本身之前,最好先了解编译的基础知识。 编译是将程序从某种编程语言(如 C/C++)的源代码(文本形式)转换为机器代码的过程。 机器代码是 CPU(或一般而言,硬件)可以理解的 1 和 0 的序列,因此可以由 CPU 执行或运行。该机器码以特定格式保存到文件,通常称为可执行文件或二进制文件。在 Linux(和使用 [Linux 兼容二进制](https://www.freebsd.org/doc/handbook/linuxemu.html)的 BSD)上,这称为 [ELF](https://en.wikipedia.org/wiki/Executable_and_Linkable_Format)(<ruby> 可执行和可链接格式 <rt> Executable and Linkable Format </rt></ruby>)。 在生成给定的源文件的可执行文件或二进制文件之前,编译过程将经历一系列复杂的步骤。以这个源程序(C 代码)为例。打开你喜欢的编辑器,然后键入以下程序: ``` #include <stdio.h> int main(void) { printf("Hello World\n"); return 0; } ``` #### 步骤 1:用 cpp 预处理 [C 预处理程序(cpp)](https://en.wikipedia.org/wiki/C_preprocessor)用于扩展所有宏并将头文件包含进来。在此示例中,头文件 `stdio.h` 将被包含在源代码中。`stdio.h` 是一个头文件,其中包含有关程序内使用的 `printf` 函数的信息。对源代码运行 `cpp`,其结果指令保存在名为 `hello.i` 的文件中。可以使用文本编辑器打开该文件以查看其内容。打印 “hello world” 的源代码在该文件的底部。 ``` [testdir]# cat hello.c #include <stdio.h> int main(void) { printf("Hello World\n"); return 0; } [testdir]# [testdir]# cpp hello.c > hello.i [testdir]# [testdir]# ls -lrt total 24 -rw-r--r--. 1 shs shs 76 Sep 13 03:20 hello.c -rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i [testdir]# ``` #### 步骤 2:用 gcc 编译 在此阶段,无需创建目标文件就将步骤 1 中生成的预处理源代码转换为汇编语言指令。这个阶段使用 [GNU 编译器集合(gcc)](https://gcc.gnu.org/onlinedocs/gcc/)。对 `hello.i` 文件运行带有 `-S` 选项的 `gcc` 命令后,它将创建一个名为 `hello.s` 的新文件。该文件包含该 C 程序的汇编语言指令。 你可以使用任何编辑器或 `cat` 命令查看其内容。 ``` [testdir]# [testdir]# gcc -Wall -S hello.i [testdir]# [testdir]# ls -l total 28 -rw-r--r--. 1 root root 76 Sep 13 03:20 hello.c -rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i -rw-r--r--. 1 root root 448 Sep 13 03:25 hello.s [testdir]# [testdir]# cat hello.s .file "hello.c" .section .rodata .LC0: .string "Hello World" .text .globl main .type main, @function main: .LFB0: .cfi_startproc pushq %rbp .cfi_def_cfa_offset 16 .cfi_offset 6, -16 movq %rsp, %rbp .cfi_def_cfa_register 6 movl $.LC0, %edi call puts movl $0, %eax popq %rbp .cfi_def_cfa 7, 8 ret .cfi_endproc .LFE0: .size main, .-main .ident "GCC: (GNU) 4.8.5 20150623 (Red Hat 4.8.5-36)" .section .note.GNU-stack,"",@progbits [testdir]# ``` #### 步骤 3:用 as 汇编 汇编器的目的是将汇编语言指令转换为机器语言代码,并生成扩展名为 `.o` 的目标文件。此阶段使用默认情况下在所有 Linux 平台上都可用的 GNU 汇编器。 ``` [testdir]# as hello.s -o hello.o [testdir]# [testdir]# ls -l total 32 -rw-r--r--. 1 root root 76 Sep 13 03:20 hello.c -rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i -rw-r--r--. 1 root root 1496 Sep 13 03:39 hello.o -rw-r--r--.
1 root root 448 Sep 13 03:25 hello.s [testdir]# ``` 现在,你有了第一个 ELF 格式的文件;但是,还不能执行它。稍后,你将看到“<ruby> 目标文件 <rt> object file </rt></ruby>”和“<ruby> 可执行文件 <rt> executable file </rt></ruby>”之间的区别。 ``` [testdir]# file hello.o hello.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped ``` #### 步骤 4:用 ld 链接 这是编译的最后阶段,将目标文件链接以创建可执行文件。可执行文件通常需要外部函数,这些外部函数通常来自系统库(`libc`)。 你可以使用 `ld` 命令直接调用链接器;但是,此命令有些复杂。相反,你可以使用带有 `-v`(详细)标志的 `gcc` 编译器,以了解链接是如何发生的。(使用 `ld` 命令进行链接作为一个练习,你可以自行探索。) ``` [testdir]# gcc -v hello.o Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man [...] --build=x86_64-redhat-linux Thread model: posix gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) COMPILER_PATH=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/:/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/:[...]:/usr/lib/gcc/x86_64-redhat-linux/ LIBRARY_PATH=/usr/lib/gcc/x86_64-redhat-linux/4.8.5/:/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/:/lib/../lib64/:/usr/lib/../lib64/:/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../:/lib/:/usr/lib/ COLLECT_GCC_OPTIONS='-v' '-mtune=generic' '-march=x86-64' /usr/libexec/gcc/x86_64-redhat-linux/4.8.5/collect2 --build-id --no-add-needed --eh-frame-hdr --hash-style=gnu [...]/../../../../lib64/crtn.o [testdir]# ``` 运行此命令后,你应该看到一个名为 `a.out` 的可执行文件: ``` [testdir]# ls -l total 44 -rwxr-xr-x. 1 root root 8440 Sep 13 03:45 a.out -rw-r--r--. 1 root root 76 Sep 13 03:20 hello.c -rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i -rw-r--r--. 1 root root 1496 Sep 13 03:39 hello.o -rw-r--r--. 1 root root 448 Sep 13 03:25 hello.s ``` 对 `a.out` 运行 `file` 命令,结果表明它确实是 ELF 可执行文件: ``` [testdir]# file a.out a.out: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=48e4c11901d54d4bf1b6e3826baf18215e4255e5, not stripped ``` 运行该可执行文件,看看它是否如源代码所示工作: ``` [testdir]# ./a.out Hello World ``` 工作了!在幕后发生了很多事情它才在屏幕上打印了 “Hello World”。想象一下在更复杂的程序中会发生什么。 ### 探索 binutils 工具 上面这个练习为使用 binutils 软件包中的工具提供了良好的背景。我的系统带有 binutils 版本 2.27-34;你的 Linux 发行版上的版本可能有所不同。 ``` [~]# rpm -qa | grep binutils binutils-2.27-34.base.el7.x86_64 ``` binutils 软件包中提供了以下工具: ``` [~]# rpm -ql binutils-2.27-34.base.el7.x86_64 | grep bin/ /usr/bin/addr2line /usr/bin/ar /usr/bin/as /usr/bin/c++filt /usr/bin/dwp /usr/bin/elfedit /usr/bin/gprof /usr/bin/ld /usr/bin/ld.bfd /usr/bin/ld.gold /usr/bin/nm /usr/bin/objcopy /usr/bin/objdump /usr/bin/ranlib /usr/bin/readelf /usr/bin/size /usr/bin/strings /usr/bin/strip ``` 上面的编译练习已经探索了其中的两个工具:用作汇编器的 `as` 命令,用作链接器的 `ld` 命令。继续阅读以了解上述 GNU binutils 软件包工具中的其他七个。 #### readelf:显示 ELF 文件信息 上面的练习提到了术语“目标文件”和“可执行文件”。使用该练习中的文件,通过带有 `-h`(标题)选项的 `readelf` 命令,以将文件的 ELF 标题转储到屏幕上。请注意,以 `.o` 扩展名结尾的目标文件显示为 `Type: REL (Relocatable file)`(可重定位文件): ``` [testdir]# readelf -h hello.o ELF Header: Magic: 7f 45 4c 46 02 01 01 00 [...] [...] Type: REL (Relocatable file) [...] ``` 如果尝试执行此目标文件,会收到一条错误消息,指出无法执行。这仅表示它尚不具备在 CPU 上执行所需的信息。 请记住,你首先需要使用 `chmod` 命令在对象文件上添加 `x`(可执行位),否则你将得到“权限被拒绝”的错误。 ``` [testdir]# ./hello.o bash: ./hello.o: Permission denied [testdir]# chmod +x ./hello.o [testdir]# [testdir]# ./hello.o bash: ./hello.o: cannot execute binary file ``` 如果对 `a.out` 文件尝试相同的命令,则会看到其类型为 `EXEC (Executable file)`(可执行文件)。 ``` [testdir]# readelf -h a.out ELF Header: Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 Class: ELF64 [...] 
Type: EXEC (Executable file) ``` 如上所示,该文件可以直接由 CPU 执行: ``` [testdir]# ./a.out Hello World ``` `readelf` 命令可提供有关二进制文件的大量信息。在这里,它会告诉你它是 ELF 64 位格式,这意味着它只能在 64 位 CPU 上执行,而不能在 32 位 CPU 上运行。它还告诉你它应在 X86-64(Intel/AMD)架构上执行。该二进制文件的入口点是地址 `0x400430`,它就是 C 源程序中 `main` 函数的地址。 在你知道的其他系统二进制文件上尝试一下 `readelf` 命令,例如 `ls`。请注意,在 RHEL 8 或 Fedora 30 及更高版本的系统上,由于安全原因改用了<ruby> 位置无关可执行文件 <rt> position independent executable </rt></ruby>([PIE](https://en.wikipedia.org/wiki/Position-independent_code#Position-independent_executables)),因此你的输出(尤其是 `Type:`)可能会有所不同。 ``` [testdir]# readelf -h /bin/ls ELF Header: Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 Class: ELF64 Data: 2's complement, little endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: EXEC (Executable file) ``` 使用 `ldd` 命令了解 `ls` 命令所依赖的系统库,如下所示: ``` [testdir]# ldd /bin/ls linux-vdso.so.1 => (0x00007ffd7d746000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f060daca000) libcap.so.2 => /lib64/libcap.so.2 (0x00007f060d8c5000) libacl.so.1 => /lib64/libacl.so.1 (0x00007f060d6bc000) libc.so.6 => /lib64/libc.so.6 (0x00007f060d2ef000) libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f060d08d000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f060ce89000) /lib64/ld-linux-x86-64.so.2 (0x00007f060dcf1000) libattr.so.1 => /lib64/libattr.so.1 (0x00007f060cc84000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f060ca68000) ``` 对 `libc` 库文件运行 `readelf` 以查看它是哪种文件。正如它指出的那样,它是一个 `DYN (Shared object file)`(共享对象文件),这意味着它不能直接执行;必须由内部使用了该库提供的任何函数的可执行文件使用它。 ``` [testdir]# readelf -h /lib64/libc.so.6 ELF Header: Magic: 7f 45 4c 46 02 01 01 03 00 00 00 00 00 00 00 00 Class: ELF64 Data: 2's complement, little endian Version: 1 (current) OS/ABI: UNIX - GNU ABI Version: 0 Type: DYN (Shared object file) ``` #### size:列出节的大小和全部大小 `size` 命令仅适用于目标文件和可执行文件,因此,如果尝试在简单的 ASCII 文件上运行它,则会抛出错误,提示“文件格式无法识别”。 ``` [testdir]# echo "test" > file1 [testdir]# cat file1 test [testdir]# file file1 file1: ASCII text [testdir]# size file1 size: file1: File format not recognized ``` 现在,在上面的练习中,对目标文件和可执行文件运行 `size` 命令。请注意,根据 `size` 命令的输出可以看出,可执行文件(`a.out`)的信息要比目标文件(`hello.o`)多得多: ``` [testdir]# size hello.o text data bss dec hex filename 89 0 0 89 59 hello.o [testdir]# size a.out text data bss dec hex filename 1194 540 4 1738 6ca a.out ``` 但是这里的 `text`、`data` 和 `bss` 节是什么意思? 
`text` 节是指二进制文件的代码部分,其中包含所有可执行指令。`data` 节是所有初始化数据所在的位置,`bss` 节是所有未初始化数据的存储位置。(LCTT 译注:一般来说,在静态的映像文件中,各个部分称之为<ruby> 节 <rt> section </rt></ruby>,而在运行时的各个部分称之为<ruby> 段 <rt> segment </rt></ruby>,有时统称为段。) 比较其他一些可用的系统二进制文件的 `size` 结果。 对于 `ls` 命令: ``` [testdir]# size /bin/ls text data bss dec hex filename 103119 4768 3360 111247 1b28f /bin/ls ``` 只需查看 `size` 命令的输出,你就可以看到 `gcc` 和 `gdb` 是比 `ls` 大得多的程序: ``` [testdir]# size /bin/gcc text data bss dec hex filename 755549 8464 81856 845869 ce82d /bin/gcc [testdir]# size /bin/gdb text data bss dec hex filename 6650433 90842 152280 6893555 692ff3 /bin/gdb ``` #### strings:打印文件中的可打印字符串 在 `strings` 命令中添加 `-d` 标志以仅显示 `data` 节中的可打印字符通常很有用。 `hello.o` 是一个目标文件,其中包含打印出 `Hello World` 文本的指令。因此,`strings` 命令的唯一输出是 `Hello World`。 ``` [testdir]# strings -d hello.o Hello World ``` 另一方面,在 `a.out`(可执行文件)上运行 `strings` 会显示在链接阶段该二进制文件中包含的其他信息: ``` [testdir]# strings -d a.out /lib64/ld-linux-x86-64.so.2 !^BU libc.so.6 puts __libc_start_main __gmon_start__ GLIBC_2.2.5 UH-0 UH-0 =( []A\A]A^A_ Hello World ;*3$" ``` #### objdump:显示目标文件信息 另一个可以从二进制文件中转储机器语言指令的 binutils 工具称为 `objdump`。使用 `-d` 选项,可从二进制文件中反汇编出所有汇编指令。 回想一下,编译是将源代码指令转换为机器代码的过程。机器代码仅由 1 和 0 组成,人类难以阅读。因此,它有助于将机器代码表示为汇编语言指令。汇编语言是什么样的?请记住,汇编语言是特定于体系结构的;由于我使用的是 Intel(x86-64)架构,因此如果你使用 ARM 架构编译相同的程序,指令将有所不同。 ``` [testdir]# objdump -d hello.o hello.o: file format elf64-x86-64 Disassembly of section .text: 0000000000000000 <main>: 0: 55 push %rbp 1: 48 89 e5 mov %rsp,%rbp 4: bf 00 00 00 00 mov $0x0,%edi 9: e8 00 00 00 00 callq e <main+0xe> e: b8 00 00 00 00 mov $0x0,%eax 13: 5d pop %rbp 14: c3 retq ``` 该输出乍一看似乎令人生畏,但请花一点时间来理解它,然后再继续。回想一下,`.text` 节包含所有的机器代码指令。汇编指令可以在第四列中看到(即 `push`、`mov`、`callq`、`pop`、`retq` 等)。这些指令作用于寄存器,寄存器是 CPU 内置的存储器位置。本示例中的寄存器是 `rbp`、`rsp`、`edi`、`eax` 等,并且每个寄存器都有特殊的含义。 现在对可执行文件(`a.out`)运行 `objdump` 并查看得到的内容。可执行文件的 `objdump` 的输出可能很大,因此我使用 `grep` 命令将其缩小到 `main` 函数: ``` [testdir]# objdump -d a.out | grep -A 9 main\> 000000000040051d <main>: 40051d: 55 push %rbp 40051e: 48 89 e5 mov %rsp,%rbp 400521: bf d0 05 40 00 mov $0x4005d0,%edi 400526: e8 d5 fe ff ff callq 400400 <puts@plt> 40052b: b8 00 00 00 00 mov $0x0,%eax 400530: 5d pop %rbp 400531: c3 retq ``` 请注意,这些指令与目标文件 `hello.o` 相似,但是其中包含一些其他信息: * 目标文件 `hello.o` 具有以下指令:`callq e` * 可执行文件 `a.out` 由以下指令组成,该指令带有一个地址和函数:`callq 400400 <puts@plt>` 上面的汇编指令正在调用 `puts` 函数。请记住,你在源代码中使用了一个 `printf` 函数。编译器插入了对 `puts` 库函数的调用,以将 `Hello World` 输出到屏幕。 查看 `puts` 上方一行的说明: * 目标文件 `hello.o` 有个指令 `mov`:`mov $0x0,%edi` * 可执行文件 `a.out` 的 `mov` 指令带有实际地址(`$0x4005d0`)而不是 `$0x0`:`mov $0x4005d0,%edi` 该指令将二进制文件中地址 `$0x4005d0` 处存在的内容移动到名为 `edi` 的寄存器中。 这个存储位置的内容还能是别的什么吗?是的,你猜对了:它就是文本 `Hello World`。你是如何确定的? `readelf` 命令使你可以将二进制文件(`a.out`)的任何节转储到屏幕上。以下要求它将 `.rodata`(这是只读数据)转储到屏幕上: ``` [testdir]# readelf -x .rodata a.out Hex dump of section '.rodata': 0x004005c0 01000200 00000000 00000000 00000000 .... 0x004005d0 48656c6c 6f20576f 726c6400 Hello World. ``` 你可以在右侧看到文本 `Hello World`,在左侧可以看到其二进制格式的地址。它是否与你在上面的 `mov` 指令中看到的地址匹配?是的,确实匹配。 #### strip:从目标文件中剥离符号 该命令通常用于在将二进制文件交付给客户之前减小二进制文件的大小。 请记住,由于重要信息已从二进制文件中删除,因此它会妨碍调试。但是,这个二进制文件可以完美地执行。 对 `a.out` 可执行文件运行该命令,并注意会发生什么。首先,通过运行以下命令确保二进制文件没有被剥离(`not stripped`): ``` [testdir]# file a.out a.out: ELF 64-bit LSB executable, x86-64, [......] not stripped ``` 另外,在运行 `strip` 命令之前,请记下二进制文件中最初的字节数: ``` [testdir]# du -b a.out 8440 a.out ``` 现在对该可执行文件运行 `strip` 命令,并使用 `file` 命令以确保正常完成: ``` [testdir]# strip a.out [testdir]# file a.out a.out: ELF 64-bit LSB executable, x86-64, [......] stripped ``` 剥离该二进制文件后,此小程序的大小从之前的 `8440` 字节减小为 `6296` 字节。对于这样小的一个程序都能有这么大的空间节省,难怪大型程序经常被剥离。 ``` [testdir]# du -b a.out 6296 a.out ``` #### addr2line:转换地址到文件名和行号 `addr2line` 工具只是在二进制文件中查找地址,并将其与 C 源代码程序中的行进行匹配。很酷,不是吗? 为此编写另一个测试程序;只是这一次确保使用 `gcc` 的 `-g` 标志进行编译,这将为二进制文件添加其它调试信息,并包含有助于调试的行号(由源代码中提供): ``` [testdir]# cat -n atest.c 1 #include <stdio.h> 2 3 int globalvar = 100; 4 5 int function1(void) 6 { 7 printf("Within function1\n"); 8 return 0; 9 } 10 11 int function2(void) 12 { 13 printf("Within function2\n"); 14 return 0; 15 } 16 17 int main(void) 18 { 19 function1(); 20 function2(); 21 printf("Within main\n"); 22 return 0; 23 } ``` 用 `-g` 标志编译并执行它。正如预期: ``` [testdir]# gcc -g atest.c [testdir]# ./a.out Within function1 Within function2 Within main ``` 现在使用 `objdump` 来标识函数开始的内存地址。你可以使用 `grep` 命令来过滤出所需的特定行。函数的地址在下面突出显示(`55 push %rbp` 前的地址): ``` [testdir]# objdump -d a.out | grep -A 2 -E 'main>:|function1>:|function2>:' 000000000040051d <function1>: 40051d: 55 push %rbp 40051e: 48 89 e5 mov %rsp,%rbp -- 0000000000400532 <function2>: 400532: 55 push %rbp 400533: 48 89 e5 mov %rsp,%rbp -- 0000000000400547 <main>: 400547: 55 push %rbp 400548: 48 89 e5 mov %rsp,%rbp ``` 现在,使用 `addr2line` 工具从二进制文件中的这些地址映射到 C 源代码匹配的地址: ``` [testdir]# addr2line -e a.out 40051d /tmp/testdir/atest.c:6 [testdir]# [testdir]# addr2line -e a.out 400532 /tmp/testdir/atest.c:12 [testdir]# [testdir]# addr2line -e a.out 400547 /tmp/testdir/atest.c:18 ``` 它说 `40051d` 从源文件 `atest.c` 中的第 `6` 行开始,这是 `function1` 的起始大括号(`{`)开始的行。`function2` 和 `main` 的输出也匹配。 #### nm:列出目标文件的符号 使用上面的 C 程序测试 `nm` 工具。使用 `gcc` 快速编译并执行它。 ``` [testdir]# gcc atest.c [testdir]# ./a.out Within function1 Within function2 Within main ``` 现在运行 `nm` 和 `grep` 获取有关函数和变量的信息: ``` [testdir]# nm a.out | grep -Ei 'function|main|globalvar' 000000000040051d T function1 0000000000400532 T function2 000000000060102c D globalvar U __libc_start_main@@GLIBC_2.2.5 0000000000400547 T main ``` 你可以看到函数被标记为 `T`,它表示 `text` 节中的符号,而变量标记为 `D`,表示初始化的 `data` 节中的符号。 想象一下在没有源代码的二进制文件上运行此命令有多大用处?这使你可以窥视内部并了解使用了哪些函数和变量。当然,除非二进制文件已被剥离,这种情况下它们将不包含任何符号,因此 `nm` 命令就不会很有用,如你在此处看到的: ``` [testdir]# strip a.out [testdir]# nm a.out | grep -Ei 'function|main|globalvar' nm: a.out: no symbols ``` ### 结论 GNU binutils 工具为有兴趣分析二进制文件的人提供了许多选项,这只是它们可以为你做的事情的冰山一角。请阅读每种工具的手册页,以了解有关它们以及如何使用它们的更多信息。 --- via: <https://opensource.com/article/19/10/gnu-binutils> 作者:[Gaurav Kamathe](https://opensource.com/users/gkamathe) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
null
HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10)
null
11,443
IceWM:一个非常酷的桌面
https://fedoramagazine.org/icewm-a-really-cool-desktop/
2019-10-10T12:25:00
[ "IceWM" ]
https://linux.cn/article-11443-1.html
![](/data/attachment/album/201910/10/123525yb1vombkfb3uo6uk.jpg) IceWM 是一款非常轻量的桌面。它已经出现 20 多年了,它今天的目标仍然与当时相同:速度、简单性以及不妨碍用户。 我曾经将 IceWM 添加到 Scientific Linux 中作为轻量级桌面。当时它只是 0.5 兆的 rpm 包。运行时,它仅使用 5 兆的内存。这些年来,IceWM 有所增长。rpm 包现在为 1 兆。运行时,IceWM 现在使用 10 兆的内存。尽管在过去十年中它的大小增加了一倍,但它仍然非常小。 这么小的包,你能得到什么?确切地说,就是一个窗口管理器。没有其他东西。你有一个带有菜单或图标的工具栏来启动程序。速度很快。最后,还有主题和选项。除了工具栏中的一些小东西,就只有这些了。 ![](/data/attachment/album/201910/10/122532n0rjkhhchcxrnlpw.png) ### 安装 因为 IceWM 很小,你只需安装主软件包。 ``` $ sudo dnf install icewm ``` 如果要节省磁盘空间,许多依赖项都是可选的。没有它们,IceWM 也可以正常工作。 ``` $ sudo dnf install icewm --setopt install_weak_deps=false ``` ### 选项 IceWM 默认已经设置完毕,以使普通的 Windows 用户也能感到舒适。这是一件好事,因为选项是通过配置文件手动完成的。 我希望你不会因此而止步,因为它并没有听起来那么糟糕。它只有 8 个配置文件,大多数人只使用其中几个。主要的三个配置文件是 `keys`(键绑定),`preferences`(总体首选项)和 `toolbar`(工具栏上显示的内容)。默认配置文件位于 `/usr/share/icewm/`。 要进行更改,请将默认配置复制到 IceWM 家目录(`~/.icewm`),编辑文件,然后重新启动 IceWM。第一次做可能会有点害怕,因为在 “Logout” 菜单项下可以找到 “Restart Icewm”。但是,当你重启 IceWM 时,你只会看到闪烁一下,更改就生效了。任何打开的程序均不受影响,并保持原样。 ### 主题 ![IceWM in the NanoBlue theme](/data/attachment/album/201910/10/122533urm1k1khg02a1ku1.png) 如果安装 icewm-themes 包,那么会得到很多主题。与常规选项不同,你无需重启 IceWM 即可更改为新主题。通常我不会谈论主题,但是由于其他功能很少,因此我想提下。 ### 工具栏 工具栏是为 IceWM 添加了更多的功能的地方。你可以看到它可以切换不同的工作区。工作区有时称为虚拟桌面。单击工作区图标以移动到它。右键单击一个窗口的任务栏项目,可以在工作区之间移动它。如果你喜欢工作区,它拥有你想要的所有功能。如果你不喜欢工作区,那么可以选择关闭它。 工具栏还有网络/内存/CPU 监控图。将鼠标悬停在图标上可获得详细信息。单击图标可以打开一个拥有完整监控功能的窗口。这些小图形曾经出现在每个窗口管理器上。但是,随着桌面环境的成熟,它们都将这些图形去除了。我很高兴 IceWM 留下了这个不错的功能。 ### 总结 如果你想要轻量但功能强大的桌面,IceWM 适合你。它已经设置好了,因此新的 Linux 用户也可以立即使用它。它是灵活的,因此 Unix 用户可以根据自己的喜好进行调整。最重要的是,IceWM 可以让你的程序不受阻碍地运行。 --- via: <https://fedoramagazine.org/icewm-a-really-cool-desktop/> 作者:[tdawson](https://fedoramagazine.org/author/tdawson/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
IceWM is a very lightweight desktop. It’s been around for over 20 years, and its goals today are still the same as back then: speed, simplicity, and getting out of the users way. I used to add IceWM to Scientific Linux, for a lightweight desktop. At the time, it was only a .5 Meg rpm. When running, it used only 5 Meg of memory. Over the years, IceWM has grown a little bit. The rpm package is now 1 Meg. When running, IceWM now uses 10 Meg of memory. Even though it literally doubled in size in the past 10 years, it is still extremely small. What do you get in such a small package? Exactly what it says, a Window Manager. Not much else. You have a toolbar with a menu or icons to launch programs. You have speed. And finally you have themes and options. Besides the few goodies in the toolbar, that’s about it. ![](https://fedoramagazine.org/wp-content/uploads/2019/09/icewm.2-1024x768.png) ### Installation Because IceWM is so small, you just install the main package, and default theme. In Fedora 31, the default theme will be part of the main package. #### Fedora 30 / IceWM 1.3.8 $ sudo dnf install icewm icewm-clearlooks #### Fedora 31/ IceWM 1.6.2 $ sudo dnf install icewm In Fedora 31, the IceWM package will allow you to save disk space. Many of the dependencies are soft options. $ sudo dnf install icewm --setopt install_weak_deps=false ### Options The defaults for IceWM are set so that your average windows user feels comfortable. This is a good thing, because options are done manually, through configuration files. I hope I didn’t lose you there, because it’s not as bad as it sounds. There are only 8 configuration files, and most people only use a couple. The main three config files are keys (keybinding), preferences (overall preferences), and toolbar (what is shown on the toolbar). The default config files are found in /usr/share/icewm/ To make a change, you copy the default config to you home icewm directory (~/.icewm), edit the file, and then restart IceWM. The first time you do this might be a little scary because “Restart Icewm” is found under the “Logout” menu entry. But when you restart IceWM, you just see a single flicker, and your changes are there. Any open programs are unaffected and stay as they were. ### Themes ![](https://fedoramagazine.org/wp-content/uploads/2019/09/icewm.3-1024x771.png) If you install the icewm-themes package, you get quite a few themes. Unlike regular options, you do not need to restart IceWM to change into a new theme. Usually I wouldn’t talk much about themes, but since there are so few other features, I figured I’m mention them. ### Toolbar The toolbar is the one place where a few extra features have been added to IceWM. You will see that you can switch between workplaces. Workspaces are sometimes called Virtual Desktops. Click on the workspace to move to it. Right clicking on a windows taskbar entry allows you to move it between workspaces. If you like workspaces, this has all the functionality you will like. If you don’t like workspaces, it’s an option and can be turned off. The toolbar also has Network/Memory/CPU monitoring graphs. Hover your mouse over the graph to get details. Click on the graph to get a window with full monitoring. These little graphs used to be on every window manager. But as those desktops matured, they have all taken the graphs out. I’m very glad that IceWM has left this nice feature alone. ## Summary If you want something lightweight, but functional, IceWM is the desktop for you. 
It is setup so that new linux users can use it out of the box. It is flexible so that unix users can tweak it to their liking. Most importantly, IceWM lets your programs run without getting in the way.
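To make the configuration workflow described above concrete, here is a minimal sketch of the copy-edit-restart cycle (it assumes the stock files under /usr/share/icewm/ and vi as the editor; any editor works):

```
# copy a default config file into your home IceWM directory
mkdir -p ~/.icewm
cp /usr/share/icewm/preferences ~/.icewm/
# edit the copy, then apply it from the menu: Logout > Restart Icewm
vi ~/.icewm/preferences
```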
11,444
安全强化你的 Linux 服务器的七个步骤
https://opensource.com/article/19/10/linux-server-security
2019-10-11T09:41:21
[ "安全" ]
https://linux.cn/article-11444-1.html
> > 通过七个简单的步骤来加固你的 Linux 服务器。 > > > ![](/data/attachment/album/201910/11/094107k8skl8wwxq62pzld.jpg) 这篇入门文章将向你介绍基本的 Linux 服务器安全知识。虽然主要针对 Debian/Ubuntu,但是你可以将此处介绍的所有内容应用于其他 Linux 发行版。我也鼓励你研究这份材料,并在适用的情况下进行扩展。 ### 1、更新你的服务器 保护服务器安全的第一件事是更新本地存储库,并通过应用最新的修补程序来升级操作系统和已安装的应用程序。 在 Ubuntu 和 Debian 上: ``` $ sudo apt update && sudo apt upgrade -y ``` 在 Fedora、CentOS 或 RHEL: ``` $ sudo dnf upgrade ``` ### 2、创建一个新的特权用户 接下来,创建一个新的用户帐户。永远不要以 root 身份登录服务器,而是创建你自己的帐户(用户),赋予它 `sudo` 权限,然后使用它登录你的服务器。 首先创建一个新用户: ``` $ adduser <username> ``` 通过将 `sudo` 组(`-G`)附加(`-a`)到用户的组成员身份里,从而授予新用户帐户 `sudo` 权限: ``` $ usermod -a -G sudo <username> ``` ### 3、上传你的 SSH 密钥 你应该使用 SSH 密钥登录到新服务器。你可以使用 `ssh-copy-id` 命令将[预生成的 SSH 密钥](https://opensource.com/article/19/4/ssh-keys-seahorse)上传到你的新服务器: ``` $ ssh-copy-id <username>@ip_address ``` 现在,你无需输入密码即可登录到新服务器。 ### 4、安全强化 SSH 接下来,进行以下三个更改: * 禁用 SSH 密码认证 * 限制 root 远程登录 * 限制对 IPv4 或 IPv6 的访问 使用你选择的文本编辑器打开 `/etc/ssh/sshd_config` 并确保以下行: ``` PasswordAuthentication yes PermitRootLogin yes ``` 改成这样: ``` PasswordAuthentication no PermitRootLogin no ``` 接下来,通过修改 `AddressFamily` 选项将 SSH 服务限制为 IPv4 或 IPv6。要将其更改为仅使用 IPv4(对大多数人来说应该没问题),请进行以下更改: ``` AddressFamily inet ``` 重新启动 SSH 服务以启用你的更改。请注意,在重新启动 SSH 服务之前,与服务器建立两个活动连接是一个好主意。有了这些额外的连接,你可以在重新启动 SSH 服务出错的情况下修复所有问题。 在 Ubuntu 上: ``` $ sudo service sshd restart ``` 在 Fedora 或 CentOS 或任何使用 Systemd 的系统上: ``` $ sudo systemctl restart sshd ``` ### 5、启用防火墙 现在,你需要安装防火墙、启用防火墙并对其进行配置,以仅允许你指定的网络流量通过。(Ubuntu 上的)[简单的防火墙](https://launchpad.net/ufw)(UFW)是一个易用的 iptables 界面,可大大简化防火墙的配置过程。 你可以通过以下方式安装 UFW: ``` $ sudo apt install ufw ``` 默认情况下,UFW 拒绝所有传入连接,并允许所有传出连接。这意味着服务器上的任何应用程序都可以访问互联网,但是任何尝试访问服务器的内容都无法连接。 首先,确保你可以通过启用对 SSH、HTTP 和 HTTPS 的访问来登录: ``` $ sudo ufw allow ssh $ sudo ufw allow http $ sudo ufw allow https ``` 然后启用 UFW: ``` $ sudo ufw enable ``` 你可以通过以下方式查看允许和拒绝了哪些服务: ``` $ sudo ufw status ``` 如果你想禁用 UFW,可以通过键入以下命令来禁用: ``` $ sudo ufw disable ``` 你还可以(在 RHEL/CentOS 上)使用 [firewall-cmd](https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd),它已经安装并集成到某些发行版中。 ### 6、安装 Fail2ban [Fail2ban](https://www.fail2ban.org/wiki/index.php/Main_Page) 是一种用于检查服务器日志以查找重复或自动攻击的应用程序。如果找到任何攻击,它会更改防火墙以永久地或在指定的时间内阻止攻击者的 IP 地址。 你可以通过键入以下命令来安装 Fail2ban: ``` $ sudo apt install fail2ban -y ``` 然后复制随附的配置文件: ``` $ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local ``` 重启 Fail2ban: ``` $ sudo service fail2ban restart ``` 这样就行了。该软件将不断检查日志文件以查找攻击。一段时间后,该应用程序将建立相当多的封禁的 IP 地址列表。你可以通过以下方法查询 SSH 服务的当前状态来查看此列表: ``` $ sudo fail2ban-client status ssh ``` ### 7、移除无用的网络服务 几乎所有 Linux 服务器操作系统都启用了一些面向网络的服务。你可能希望保留其中大多数,然而,有一些你或许希望删除。你可以使用 `ss` 命令查看所有正在运行的网络服务:(LCTT 译注:应该是只保留少部分,而所有确认无关的、无用的服务都应该停用或删除。) ``` $ sudo ss -atpu ``` `ss` 的输出取决于你的操作系统。下面是一个示例,它显示 SSH(`sshd`)和 Ngnix(`nginx`)服务正在侦听网络并准备连接: ``` tcp LISTEN 0 128 *:http *:* users:(("nginx",pid=22563,fd=7)) tcp LISTEN 0 128 *:ssh *:* users:(("sshd",pid=685,fd=3)) ``` 删除未使用的服务的方式因你的操作系统及其使用的程序包管理器而异。 要在 Debian / Ubuntu 上删除未使用的服务: ``` $ sudo apt purge <service_name> ``` 要在 Red Hat/CentOS 上删除未使用的服务: ``` $ sudo yum remove <service_name> ``` 再次运行 `ss -atup` 以确认这些未使用的服务没有安装和运行。 ### 总结 本教程介绍了加固 Linux 服务器所需的最起码的措施。你应该根据服务器的使用方式启用其他安全层。这些安全层可以包括诸如各个应用程序配置、入侵检测软件(IDS)以及启用访问控制(例如,双因素身份验证)之类的东西。 --- via: <https://opensource.com/article/19/10/linux-server-security> 作者:[Patrick H. Mullins](https://opensource.com/users/pmullins) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
200
OK
This primer will introduce you to basic Linux server security. While it focuses on Debian/Ubuntu, you can apply everything presented here to other Linux distributions. I also encourage you to research this material and extend it where applicable. ## 1. Update your server The first thing you should do to secure your server is to update the local repositories and upgrade the operating system and installed applications by applying the latest patches. On Ubuntu and Debian: `$ sudo apt update && sudo apt upgrade -y` On Fedora, CentOS, or RHEL: `$ sudo dnf upgrade` ## 2. Create a new privileged user account Next, create a new user account. You should never log into your server as **root**. Instead, create your own account ("**<user>**"), give it **sudo** rights, and use it to log into your server. Start out by creating a new user: `$ adduser <username>` Give your new user account **sudo** rights by appending (**-a**) the **sudo** group (**-G**) to the user's group membership: `$ usermod -a -G sudo <username>` ## 3. Upload your SSH key You'll want to use an SSH key to log into your new server. You can upload your [pre-generated SSH key](https://opensource.com/article/19/4/ssh-keys-seahorse) to your new server using the **ssh-copy-id** command: `$ ssh-copy-id <username>@ip_address` Now you can log into your new server without having to type in a password. ## 4. Secure SSH Next, make these three changes: - Disable SSH password authentication - Restrict **root**from logging in remotely - Restrict access to IPv4 or IPv6 Open **/etc/ssh/sshd_config** using your text editor of choice and ensure these lines: ``` PasswordAuthentication yes PermitRootLogin yes ``` look like this: ``` PasswordAuthentication no PermitRootLogin no ``` Next, restrict the SSH service to either IPv4 or IPv6 by modifying the **AddressFamily** option. To change it to use only IPv4 (which should be fine for most folks) make this change: `AddressFamily inet` Restart the SSH service to enable your changes. Note that it's a good idea to have two active connections to your server before restarting the SSH server. Having that extra connection allows you to fix anything should the restart go wrong. On Ubuntu: `$ sudo service sshd restart` On Fedora or CentOS or anything using Systemd: `$ sudo systemctl restart sshd` ## 5. Enable a firewall Now you need to install a firewall, enable it, and configure it only to allow network traffic that you designate. [Uncomplicated Firewall](https://launchpad.net/ufw) (UFW) is an easy-to-use interface to **iptables** that greatly simplifies the process of configuring a firewall. You can install UFW with: `$ sudo apt install ufw` By default, UFW denies all incoming connections and allows all outgoing connections. This means any application on your server can reach the internet, but anything trying to reach your server cannot connect. First, make sure you can log in by enabling access to SSH, HTTP, and HTTPS: ``` $ sudo ufw allow ssh $ sudo ufw allow http $ sudo ufw allow https ``` Then enable UFW: `$ sudo ufw enable` You can see what services are allowed and denied with: `$ sudo ufw status` If you ever want to disable UFW, you can do so by typing: `$ sudo ufw disable` You can also use [firewall-cmd](https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd), which is already installed and integrated into some distributions. ## 6. Install Fail2ban [Fail2ban](https://www.fail2ban.org/wiki/index.php/Main_Page) is an application that examines server logs looking for repeated or automated attacks. 
If any are found, it will alter the firewall to block the attacker's IP address either permanently or for a specified amount of time. You can install Fail2ban by typing: `$ sudo apt install fail2ban -y` Then copy the included configuration file: `$ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local` And restart Fail2ban: `$ sudo service fail2ban restart` That's all there is to it. The software will continuously examine the log files looking for attacks. After a while, the app will build up quite a list of banned IP addresses. You can view this list by requesting the current status of the SSH service with: `$ sudo fail2ban-client status ssh` ## 7. Remove unused network-facing services Almost all Linux server operating systems come with a few network-facing services enabled. You'll want to keep most of them. However, there are a few that you might want to remove. You can see all running network services by using the **ss** command: `$ sudo ss -atpu` The output from **ss** will differ depending on your operating system. This is an example of what you might see. It shows that the SSH (sshd) and Nginx (nginx) services are listening and ready for connection: ``` tcp LISTEN 0 128 *:http *:* users:(("nginx",pid=22563,fd=7)) tcp LISTEN 0 128 *:ssh *:* users:(("sshd",pid=685,fd=3)) ``` How you go about removing an unused service ("**<service_name>**") will differ depending on your operating system and the package manager it uses. To remove an unused service on Debian/Ubuntu: `$ sudo apt purge <service_name>` To remove an unused service on Red Hat/CentOS: `$ sudo yum remove <service_name>` Run **ss -atpu** again to verify that the unused services are no longer installed and running. ## Final thoughts This tutorial presents the bare minimum needed to harden a Linux server. Additional security layers can and should be enabled depending on how a server is used. These layers can include things like individual application configurations, intrusion detection software, and enabling access controls, e.g., two-factor authentication.
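To tie these steps together, here is a minimal, illustrative sketch for Debian/Ubuntu (the username is a placeholder and this is a rough outline, not a hardened production script). Copy your SSH key over and confirm key-based login works *before* running the part that disables password authentication:

```
#!/usr/bin/env bash
# Rough sketch of the hardening steps above for Debian/Ubuntu.
# Run as root; replace "alice" with your own username.
set -euo pipefail
NEW_USER="alice"                          # placeholder -- change this

apt update && apt upgrade -y              # 1. update the server
adduser --gecos "" "$NEW_USER"            # 2. create a user (prompts for a password)
usermod -a -G sudo "$NEW_USER"            #    ...and grant it sudo rights

# 3. upload your SSH key from your workstation first: ssh-copy-id alice@ip_address

# 4. secure SSH (keep a second session open in case something goes wrong)
cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sshd -t                                   # validate the config before restarting
systemctl restart sshd

# 5. firewall
apt install -y ufw
ufw allow ssh && ufw allow http && ufw allow https
ufw --force enable

# 6. Fail2ban
apt install -y fail2ban
cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
systemctl restart fail2ban
```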
11,445
处理 Linux 文件的 3 个技巧
https://www.networkworld.com/article/3440035/3-quick-tips-for-working-with-linux-files.html
2019-10-11T10:11:51
[ "文件" ]
https://linux.cn/article-11445-1.html
> > Linux 提供了许多用于查找、计数和重命名文件的命令。这里有一些有用的选择。 > > > ![](/data/attachment/album/201910/11/101136ei4sslezne7esyis.jpg) Linux 提供了多种用于处理文件的命令，这些命令可以节省你的时间，并使你的工作不那么繁琐。 ### 查找文件 当你查找文件时，`find` 可能会是第一个想到的命令，但是有时精心设计的 `ls` 命令会更好。想知道你昨天离开办公室回家前调用的脚本么？简单！使用 `ls` 命令并加上 `-ltr` 选项。最后一个列出的将是最近创建或更新的文件。 ``` $ ls -ltr ~/bin | tail -3 -rwx------ 1 shs shs 229 Sep 22 19:37 checkCPU -rwx------ 1 shs shs 285 Sep 22 19:37 ff -rwxrw-r-- 1 shs shs 1629 Sep 22 19:37 test2 ``` 像这样的命令将仅列出今天更新的文件： ``` $ ls -al --time-style=+%D | grep `date +%D` drwxr-xr-x 60 shs shs 69632 09/23/19 . drwxrwxr-x 2 shs shs 8052736 09/23/19 bin -rw-rw-r-- 1 shs shs 506 09/23/19 stats ``` 如果你要查找的文件可能不在当前目录中，那么 `find` 将比 `ls` 提供更好的选项，但它可能会输出比你想要的更多结果。在下面的命令中，我们*不*搜索以点开头的目录（它们很多一直在更新），指定我们要查找的是文件（即不是目录），并要求仅显示最近一天 (`-mtime -1`)更新过的文件。 ``` $ find . -not -path '*/\.*' -type f -mtime -1 -ls 917517 0 -rwxrw-r-- 1 shs shs 683 Sep 23 11:00 ./newscript ``` 注意 `-not` 选项反转了 `-path` 的行为，因此我们不会搜索以点开头的子目录。 如果只想查找最大的文件和目录，那么可以使用类似 `du` 这样的命令，它会按大小列出当前目录的内容。将输出通过管道传输到 `tail`，仅查看最大的几个。 ``` $ du -kx | egrep -v "\./.+/" | sort -n | tail -5 918984 ./reports 1053980 ./notes 1217932 ./.cache 31470204 ./photos 39771212 . ``` `-k` 选项让 `du` 以块列出文件大小，而 `-x` 可防止其遍历其他文件系统上的目录（例如，通过符号链接引用）。事实上，`du` 会先列出文件大小，这样可以按照大小排序（`sort -n`）。 ### 文件计数 使用 `find` 命令可以很容易地计数任何特定目录中的文件。你只需要记住，`find` 会递归到子目录中，并将这些子目录中的文件与当前目录中的文件一起计数。在此命令中，我们计数一个特定用户（`username`）的家目录中的文件。根据家目录的权限，这可能需要使用 `sudo`。请记住，第一个参数是搜索的起点。这里指定的是用户的家目录。 ``` $ find ~username -type f 2>/dev/null | wc -l 35624 ``` 请注意，我们正在将上面 `find` 命令的错误输出发送到 `/dev/null`，以避免搜索类似 `~username/.cache` 这类无法搜索并且对它的内容也不感兴趣的文件夹。 必要时，你可以使用 `-maxdepth 1` 选项将 `find` 限制在单个目录中： ``` $ find /home/shs -maxdepth 1 -type f | wc -l 387 ``` ### 重命名文件 使用 `mv` 命令可以很容易地重命名文件，但是有时你会想重命名大量文件，并且不想花费大量时间。例如，要将你在当前目录的文件名中找到的所有空格更改为下划线，你可以使用如下命令： ``` $ rename 's/ /_/g' * ``` 如你所料，此命令中的 `g` 表示“全局”。这意味着该命令会将文件名中的*所有*空格更改为下划线，而不仅仅是第一个。 要从文本文件中删除 `.txt` 扩展名，可以使用如下命令（注意转义点号并锚定到文件名结尾，以免误改文件名中间的字符）： ``` $ rename 's/\.txt$//' * ``` ### 总结 Linux 命令行提供了许多用于处理文件的有用选择。请提出你认为特别有用的其他命令。 --- via: <https://www.networkworld.com/article/3440035/3-quick-tips-for-working-with-linux-files.html> 作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出
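在批量重命名前先做一次“彩排”通常更稳妥。下面是一个小示例（假设你用的是 Debian/Ubuntu 上常见的 Perl 版 `rename`，其它实现的选项可能不同）：

```
$ rename -n 's/ /_/g' *    # -n 只打印将要执行的重命名，并不真正改名
$ rename 's/ /_/g' *       # 预览确认无误后，去掉 -n 正式执行
```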
301
Moved Permanently
null
11,447
如何用 GVM 管理 Go 项目
https://opensource.com/article/19/10/introduction-gvm
2019-10-11T11:22:41
[ "Go" ]
https://linux.cn/article-11447-1.html
> > 使用 Go 版本管理器管理多个版本的 Go 语言环境及其模块。 > > > ![](/data/attachment/album/201910/11/112215m48u4zocc7p48okn.png) Go 语言版本管理器（[GVM](https://github.com/moovweb/gvm)）是管理 Go 语言环境的开源工具。GVM “pkgsets” 支持安装多个版本的 Go 并管理每个项目的模块。它最初由 [Josh Bussdieker](https://github.com/jbussdieker) 开发，GVM（就像 Ruby 的同类工具 RVM 一样）允许你为每个项目或一组项目创建一个开发环境，分离不同的 Go 版本和包依赖关系，以提供更大的灵活性，防止不同版本造成的问题。 管理 Go 包的方式有好几种，包括 Go 1.11 起内置的 Go Modules。我发现 GVM 简单直观；即使不用它来管理包，我也会用它来管理不同版本的 Go。 ### 安装 GVM 安装 GVM 很简单。[GVM 存储库](https://github.com/moovweb/gvm#installing)安装文档指示你下载安装程序脚本并将其通过管道传给 Bash 来安装： ``` bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer) ``` 尽管我们经常这样安装软件，但在安装之前先看看安装程序在做什么仍然是一个很好的想法。以 GVM 为例，该安装程序脚本： 1. 检查一些相关依赖性 2. 克隆 GVM 存储库 3. 使用 shell 脚本： * 安装 Go 语言 * 管理 `GOPATH` 环境变量 * 向 `bashrc`、`zshrc` 或配置文件中添加一行内容 如果你想确认它在做什么，你可以克隆该存储库并查看 shell 脚本，然后运行 `./binscripts/gvm-installer` 这个本地脚本进行设置。 **注意**：因为 GVM 可以用来下载和编译新的 Go 版本，所以有一些预期的依赖关系，如 Make、Git 和 Curl。你可以在 [GVM 的自述文件](https://github.com/moovweb/gvm/blob/master/README.md)中找到完整的发行版列表。 ### 使用 GVM 安装和管理 Go 版本 一旦安装了 GVM，你就可以使用它来安装和管理不同版本的 Go。`gvm listall` 命令显示可下载和编译的可用版本的 Go： ``` [chris@marvin ]$ gvm listall gvm gos (available) go1 go1.0.1 go1.0.2 go1.0.3 <输出截断> ``` 安装特定的 Go 版本就像 `gvm install <版本>` 一样简单，其中 `<版本>` 是 `gvm listall` 命令返回的版本之一。 假设你正在进行一个使用 Go1.12.8 版本的项目。你可以使用 `gvm install go1.12.8` 安装这个版本： ``` [chris@marvin]$ gvm install go1.12.8 Installing go1.12.8... * Compiling... go1.12.8 successfully installed! ``` 输入 `gvm list`，你会看到 Go 版本 1.12.8 与系统 Go 版本（使用操作系统的软件包管理器打包的版本）一起并存： ``` [chris@marvin]$ gvm list gvm gos (installed) go1.12.8 => system ``` GVM 仍在使用系统版本的 Go，由 `=>` 符号表示。你可以使用 `gvm use` 命令切换你的环境以使用新安装的 go1.12.8： ``` [chris@marvin]$ gvm use go1.12.8 Now using version go1.12.8 [chris@marvin]$ go version go version go1.12.8 linux/amd64 ``` GVM 使管理已安装版本的 Go 变得极其简单，但它不止于此！ ### 使用 GVM pkgset 开箱即用，Go 有一种出色而令人沮丧的管理包和模块的方式。默认情况下，如果你 `go get` 获取一个包，它将被下载到 `$GOPATH` 目录中的 `src` 和 `pkg` 目录下，然后可以使用 `import` 将其包含在你的 Go 程序中。这使得获得软件包变得很容易，特别是对于非特权用户，而不需要 `sudo` 或 root 特权（很像 Python 中的 `pip install --user`）。然而，在不同的项目中管理相同包的不同版本是非常困难的。 有许多方法可以尝试修复或缓解这个问题，包括实验性 Go Modules（Go 1.11 版中增加了初步支持）和 [Go dep](https://golang.github.io/dep/)（Go Modules 的“官方实验”并且持续迭代）。在我发现 GVM 之前，我会在一个 Go 项目自己的 Docker 容器中构建和测试它，以确保分离。 GVM 通过使用 “pkgsets” 将项目的新目录附加到安装的 Go 版本的默认 `$GOPATH` 上，很好地实现了项目之间包的管理和隔离，就像 `$PATH` 在 Unix/Linux 系统上工作一样。 来看看它是如何运行的。首先，安装新版 Go 1.12.9： ``` [chris@marvin]$ echo $GOPATH /home/chris/.gvm/pkgsets/go1.12.8/global [chris@marvin]$ gvm install go1.12.9 Installing go1.12.9... * Compiling...
go1.12.9 successfully installed [chris@marvin]$ gvm use go1.12.9 Now using version go1.12.9 ``` 当 GVM 被告知使用新版本时，它会切换到新的 `$GOPATH`，并对该版本应用默认的 `global` pkgset： ``` [chris@marvin]$ echo $GOPATH /home/chris/.gvm/pkgsets/go1.12.9/global [chris@marvin]$ gvm pkgset list gvm go package sets (go1.12.9) => global ``` 尽管默认情况下没有安装额外的包，但是全局 pkgset 中的包对于使用该特定版本的 Go 的任何项目都是可用的。 现在，假设你正在启动一个新项目，它需要一个特定的包。首先，使用 GVM 创建一个新的 pkgset，名为 `introToGvm`： ``` [chris@marvin]$ gvm pkgset create introToGvm [chris@marvin]$ gvm pkgset use introToGvm Now using version go1.12.9@introToGvm [chris@marvin]$ gvm pkgset list gvm go package sets (go1.12.9) global => introToGvm ``` 如上所述，pkgset 的一个新目录被添加到 `$GOPATH`： ``` [chris@marvin]$ echo $GOPATH /home/chris/.gvm/pkgsets/go1.12.9/introToGvm:/home/chris/.gvm/pkgsets/go1.12.9/global ``` 切换到前面加入 `$GOPATH` 的 `introToGvm` 路径并检查其目录结构，这里借助 `awk` 和 `bash` 来完成。 ``` [chris@marvin]$ cd $( awk -F':' '{print $1}' <<< $GOPATH ) [chris@marvin]$ pwd /home/chris/.gvm/pkgsets/go1.12.9/introToGvm [chris@marvin]$ ls overlay pkg src ``` 请注意，新目录看起来很像普通的 `$GOPATH`。新的 Go 包使用同样的 `go get` 命令下载并正常使用，且添加到 pkgset 中。 例如，使用以下命令获取 `gorilla/mux` 包，然后检查 pkgset 的目录结构： ``` [chris@marvin]$ go get github.com/gorilla/mux [chris@marvin introToGvm ]$ tree . ├── overlay │ ├── bin │ └── lib │ └── pkgconfig ├── pkg │ └── linux_amd64 │ └── github.com │ └── gorilla │ └── mux.a src/ └── github.com └── gorilla └── mux ├── AUTHORS ├── bench_test.go ├── context.go ├── context_test.go ├── doc.go ├── example_authentication_middleware_test.go ├── example_cors_method_middleware_test.go ├── example_route_test.go ├── go.mod ├── LICENSE ├── middleware.go ├── middleware_test.go ├── mux.go ├── mux_test.go ├── old_test.go ├── README.md ├── regexp.go ├── route.go └── test_helpers.go ``` 如你所见，`gorilla/mux` 已按预期添加到 pkgset `$GOPATH` 目录中，现在任何使用此 pkgset 的项目都可以使用它了。 ### GVM 让 Go 管理变得轻而易举 GVM 是一种直观且非侵入性的管理 Go 版本和包的方式。它可以单独使用，也可以与其他 Go 模块管理技术结合使用，同时利用 GVM 的 Go 版本管理功能。按 Go 版本和包依赖来分离项目使得开发更加容易，并且减少了管理版本冲突的复杂性，GVM 让这变得轻而易举。 --- via: <https://opensource.com/article/19/10/introduction-gvm> 作者:[Chris Collins](https://opensource.com/users/clcollins) 选题:[lujun9972](https://github.com/lujun9972) 译者:[heguangzhi](https://github.com/heguangzhi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出
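如果你经常在项目之间切换，可以把“切换 Go 版本”和“切换 pkgset”合并成一个 shell 函数。下面是一个最小示意（函数名 `gvm_env` 是本文虚构的，并非 GVM 自带命令），需要在已经加载了 GVM 的交互式 shell 中使用：

```
# 用法：gvm_env go1.12.9 introToGvm
gvm_env() {
    local version="$1" pkgset="$2"
    gvm use "$version" || return 1
    # pkgset 不存在时先创建（gvm pkgset create 的用法见上文）
    gvm pkgset list | grep -qw "$pkgset" || gvm pkgset create "$pkgset"
    gvm pkgset use "$pkgset"
}
```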
301
Moved Permanently
null
11,448
Bash 学习的快乐之旅:3 个命令行游戏
https://opensource.com/article/19/10/learn-bash-command-line-games
2019-10-12T09:03:00
[ "Bash", "游戏" ]
https://linux.cn/article-11448-1.html
> > 通过这些命令行游戏，学习有用的 Bash 技能也是一件乐事。 > > > ![](/data/attachment/album/201910/12/090304jdb7u6yi1p202v61.jpg) 学习是件艰苦的工作，然而没有人喜欢工作。这意味着无论学习 Bash 多么容易，它仍然对你来说就像工作一样。当然，除非你通过游戏来学习。 你不会觉得会有很多游戏可以教你如何使用 Bash 终端吧，这是对的。严肃的 PC 游戏玩家知道，《<ruby> 辐射 <rt> Fallout </rt></ruby>》系列在金库中配备了基于终端的计算机，这可以帮你理解通过文本与计算机进行交互是什么样子，但是尽管其功能或多或少地类似于 [Alpine](https://opensource.com/article/17/10/alpine-email-client) 或 [Emacs](http://www.gnu.org/software/emacs)，可是玩《辐射》并不会教给你可以在现实生活中使用的命令或应用程序。《辐射》系列从未直接移植到 Linux（尽管可以通过 Steam 开源的 [Proton](https://github.com/ValveSoftware/Proton/) 来玩）。曾是《辐射》前身的《<ruby> <a href="https://www.gog.com/game/wasteland_the_classic_original"> 废土 </a> <rt> Wasteland </rt></ruby>》系列的最新作品倒是面向 Linux 的，因此，如果你想体验游戏中的终端，可以在你的 Linux 游戏计算机上玩《[废土 2](https://www.inxile-entertainment.com/wasteland2)》和《[废土 3](https://www.inxile-entertainment.com/wasteland3)》。《<ruby> <a href="http://harebrained-schemes.com/games/"> 暗影狂奔 </a> <rt> Shadowrun </rt></ruby>》系列也有面向 Linux 的版本，它有许多基于终端的交互，尽管公认 [hot sim](https://forums.shadowruntabletop.com/index.php?topic=21804.0) 序列常常使它黯然失色。 虽然这些游戏中采用了有趣的操作计算机终端的方式，并且可以在开源的系统上运行，但它们本身都不是开源的。不过，至少有两个游戏采用了严肃且非常有趣的方法来教人们如何通过文本命令与系统进行交互。最重要的是，它们是开源的。 ### Bashcrawl 你可能听说过《<ruby> <a href="https://opensource.com/article/18/12/linux-toy-adventure"> 巨洞探险 </a> <rt> Colossal Cave Adventure </rt></ruby>》游戏，这是一款古老的基于文本的交互式游戏，其风格为“自由冒险”类。早期的计算机爱好者们在 DOS 或 ProDOS 命令行上痴迷地玩这些游戏，他们努力寻找有效语法和（如一个讽刺黑客所解释的）滑稽幻想逻辑的正确组合来击败游戏。想象一下，如果除了探索虚拟的中世纪地下城之外，挑战还在于回忆起有效的 Bash 命令，那么这样的挑战会多么有成效。这就是 [Bashcrawl](https://gitlab.com/slackermedia/bashcrawl) 的基调，这是一个基于 Bash 的地下城探险游戏，你可以通过学习和使用 Bash 命令来玩这个游戏。 在 Bashcrawl 中，“地下城”是以目录和文件的形式创建在你的计算机上的。你可以通过使用 `cd` 命令更改目录进入地下城的每个房间来探索它。当你[穿行目录](https://opensource.com/article/19/8/understanding-file-paths-linux)时，你可以用 [ls -F](https://opensource.com/article/19/7/master-ls-command) 来查看文件，用 [cat](https://opensource.com/article/19/2/getting-started-cat-command) 读取文件，[设置变量](https://opensource.com/article/19/8/using-variables-bash)来收集宝藏，并运行脚本来与怪物战斗。你在游戏中所做的一切操作都是有效的 Bash 命令，你可以稍后在现实生活中使用它，玩这个游戏提供了 Bash 体验，因为这个“游戏”是由计算机上的实际目录和文件组成的。 ``` $ cd entrance/ $ ls cellar scroll $ cat scroll It is pitch black in these catacombs. You have a magical spell that lists all items in a room. To see in the dark, type: ls To move around, type: cd <directory> Try looking around this room. Then move into one of the next rooms. EXAMPLE: $ ls $ cd cellar Remember to cast ``ls`` when you get into the next room!
$ ``` #### 安装 Bashcrawl 在玩 Bashcrawl 之前，你的系统上必须有 Bash 或 [Zsh](https://opensource.com/article/19/9/getting-started-zsh)。Linux、BSD 和 MacOS 都附带了 Bash。Windows 用户可以下载并安装 [Cygwin](https://www.cygwin.com/) 或 [WSL](https://docs.microsoft.com/en-us/windows/wsl/wsl2-about) 或[试试 Linux](https://opensource.com/article/19/7/ways-get-started-linux)。 要安装 Bashcrawl，请在 Firefox 或你选择的 Web 浏览器中导航到这个 [GitLab 存储库](https://gitlab.com/slackermedia/bashcrawl)。在页面的右侧，单击“下载”图标（位于“Find file”按钮右侧）。在“下载”弹出菜单中，单击“zip”按钮以下载最新版本的游戏。 ![Download a zip from Gitlab](/data/attachment/album/201910/12/090307sm3z88c3ici1cn86.png "Download a zip from Gitlab") 下载完成后，解压缩该存档文件。 另外，如果你想从终端中开始安装，则可以使用 [Git](https://opensource.com/life/16/7/stumbling-git) 命令： ``` $ git clone https://gitlab.com/slackermedia/bashcrawl.git bashcrawl ``` #### 游戏入门 与你下载的几乎所有新的软件包一样，你必须做的第一件事是阅读 README 文件。你可以通过双击 `bashcrawl` 目录中的 `README.md` 文件来阅读。在 Mac 上，你的计算机可能不知道要使用哪个应用程序打开该文件；你也可以使用任何文本编辑器或 LibreOffice 打开它。`README.md` 这个文件会具体告诉你如何开始玩游戏，包括如何在终端上进入游戏以及要开始游戏必须发出的第一条命令。如果你无法阅读 README 文件，那游戏就不战自胜了（尽管由于你没有玩而无法告诉你）。 Bashcrawl 的设计并不刻意追求花哨或高深。相反，为了对新用户透明，它尽可能地简单。理想情况下，新的 Bash 用户可以从游戏中学习 Bash 的一些基础知识，然后会偶然发现一些游戏机制，包括使游戏运行起来的简单脚本，并学习到更多的 Bash 知识。此外，新的 Bash 用户可以按照 Bashcrawl 现有内容的示例设计自己的地下城，没有比编写游戏更好的学习编码的方法了。 ### 命令行英雄：BASH Bashcrawl 适用于绝对初学者。如果你经常使用 Bash，则很有可能会尝试通过以初学者尚不了解的方式查看 Bashcrawl 的文件，从而找到胜过它的秘径。如果你是中高级的 Bash 用户，则应尝试一下 [命令行英雄：BASH](https://www.redhat.com/en/command-line-heroes/bash/index.html?extIdCarryOver=true&sc_cid=701f2000001OH79AAG)。 这个游戏很简单：在给定的时间内输入尽可能多的有效命令（LCTT 译注：BASH 也有“猛击”的意思）。听起来很简单。作为 Bash 用户，你每天都会使用许多命令。对于 Linux 用户来说，你知道在哪里可以找到命令列表。仅 util-linux 软件包就包含一百多个命令！问题是，在倒计时的压力下，你的指尖是否忙得过来输入这些命令？ ![Command Line Heroes: BASH](/data/attachment/album/201910/12/090308kl3o3ezp6hngpdhz.jpg "Command Line Heroes: BASH") 这个游戏听起来很简单，它确实也很简单！原则上，它与<ruby> 闪卡 <rt> flashcard </rt></ruby>相似，只是反过来而已。在实践中，这是测试你的知识和回忆的一种有趣方式。当然，它是开源的，是由 [Open Jam](http://openjam.io/) 的开发者开发的。 #### 安装 你可以[在线](https://www.redhat.com/en/command-line-heroes/bash/index.html)玩“命令行英雄：BASH”，或者你也可以从 [GitHub](https://github.com/CommandLineHeroes/clh-bash/) 下载它的源代码。 这个游戏是用 Node.js 编写的，因此除非你想帮助开发该游戏，否则在线进行游戏就够了。 ### 在 Bash 中扫雷 如果你是高级 Bash 用户，并且已经编写了多个 Bash 脚本，那么你可能不仅仅想学习 Bash。你可以尝试编写游戏而不是玩游戏，这才是真的挑战。稍加思考，花上一两个下午，便可以在 Bash 中完整实现流行的游戏《扫雷》。你可以先尝试自己编写这个游戏，然后参阅 Abhishek Tamrakar 的[文章](/article-11430-1.html)，以了解他是如何完成该游戏的。 ![](/data/attachment/album/201910/12/090309kpckkoohanhfazbc.png) 有时编程没有什么别的目的，只是为了学习。在 Bash 中编写的游戏可能不是可以让你在网上赢得声誉的项目，但是该过程可能会很有趣且很有启发性。面对一个你从未想到的问题，这是学习新技巧的好方法。 ### 学习 Bash，玩得开心 不管你如何学习它，Bash 都是一个功能强大的界面，因为它使你能够指示计算机执行所需的操作，而无需通过图形界面的应用程序的“中间人”界面。有时，图形界面很有帮助，但有时你想离开那些已经非常了解的东西，然后转向可以快速或通过自动化来完成的事情。由于 Bash 基于文本，因此易于编写脚本，使其成为自动化作业的理想起点。 了解 Bash 以开始走向高级用户之路，但是请确保你乐在其中。 --- via: <https://opensource.com/article/19/10/learn-bash-command-line-games> 作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出
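如果想先体会一下“命令行英雄：BASH”这种玩法，下面是一个几行代码的 Bash 小脚本（只是演示思路，并非该游戏的实际代码）：在限定时间内输入命令名，脚本用 `command -v` 判断它们是否真实存在：

```
#!/usr/bin/env bash
# 迷你“限时命令回忆”游戏：30 秒内输入尽可能多的、互不重复的有效命令名。
declare -A seen
score=0
end=$(( SECONDS + 30 ))
while (( SECONDS < end )); do
    read -r -t $(( end - SECONDS )) -p "命令> " cmd || break
    [[ -z $cmd || -n ${seen[$cmd]:-} ]] && continue
    if command -v "$cmd" >/dev/null 2>&1; then
        seen[$cmd]=1
        (( ++score ))
        echo "有效！当前得分：$score"
    else
        echo "无效命令：$cmd"
    fi
done
echo "时间到！你共想起了 $score 条有效命令。"
```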
200
OK
Learning is hard work, and nobody likes work. That means no matter how easy it is to learn Bash, it still might feel like work to you. Unless, of course, you learn through gaming. You wouldn't think there would be many games out there to teach you how to use a Bash terminal, and you'd be right. Serious PC gamers know that the Fallout series features terminal-based computers in the vaults, which helps normalize the idea of interfacing with a computer through text, but in spite of featuring applications more or less like [Alpine](https://opensource.com/article/17/10/alpine-email-client) or [Emacs](http://www.gnu.org/software/emacs), playing Fallout doesn't teach you commands or applications you can use in real life. The Fallout series was never ported to Linux directly (although it is playable through Steam's open source [Proton](https://github.com/ValveSoftware/Proton/)). The modern entries into the [Wasteland series](https://www.gog.com/game/wasteland_the_classic_original) that served as predecessors to Fallout, however, do target Linux, so if you want to experience in-game terminals, you can play [Wasteland 2](https://www.inxile-entertainment.com/wasteland2) and [Wasteland 3](https://www.inxile-entertainment.com/wasteland3) on your Linux gaming PC. The [Shadowrun](http://harebrained-schemes.com/games/) series also targets Linux, and it features a lot of terminal-based interactions, although it's admittedly often overshadowed by blazing [hot sim](https://forums.shadowruntabletop.com/index.php?topic=21804.0) sequences. While those games take a fun approach to computer terminals, and they run on open source systems, none are open source themselves. There are, however, at least two games that take a serious, and seriously fun, approach to teaching people how to interact with systems through text commands. And best of all, they're open source. ## Bashcrawl You may have heard of [Colossal Cave Adventure](https://opensource.com/article/18/12/linux-toy-adventure), an old text-based, interactive game in the style of "choose your own adventure" books. Early computerists played these obsessively at the DOS or ProDOS command line, struggling to find the right combination of valid syntax and zany fantasy logic (as interpreted by a sardonic hacker) to beat the game. Imagine how productive such a struggle could be if the challenge, aside from exploring a virtual medieval dungeon, was to recall valid Bash commands. That's the pitch for **Bashcrawl**, a Bash-based dungeon crawl you play by learning and using Bash commands. In Bashcrawl, a "dungeon" is created in the form of directories and files on your computer. You explore the dungeon by using the **cd** command to change directory into each room of the dungeon. As you [proceed through directories](https://opensource.com/article/19/8/understanding-file-paths-linux), you examine files with **ls -F**, read files with [cat](https://opensource.com/article/19/2/getting-started-cat-command), [set variables](https://opensource.com/article/19/8/using-variables-bash) to collect treasure, and run scripts to fight monsters. Everything you do in the game is a valid Bash command that you can use later in real life, and playing the game provides Bash practice because the "game" is made out of actual directories and files on your computer. ``` $ cd entrance/ $ ls cellar scroll $ cat scroll It is pitch black in these catacombs. You have a magical spell that lists all items in a room.
To see in the dark, type: ls To move around, type: cd <directory> Try looking around this room. Then move into one of the next rooms. EXAMPLE: $ ls $ cd cellar Remember to cast ``ls`` when you get into the next room! $ ``` ### Install Bashcrawl Before you can play Bashcrawl, you must have Bash or [Zsh](https://opensource.com/article/19/9/getting-started-zsh) on your system. Linux, BSD, and MacOS ship with Bash included. Windows users can download and install [Cygwin](https://www.cygwin.com/) or [WSL](https://docs.microsoft.com/en-us/windows/wsl/wsl2-about) or [try Linux](https://opensource.com/article/19/7/ways-get-started-linux). To install Bashcrawl, navigate to [GitLab](https://gitlab.com/slackermedia/bashcrawl) in Firefox or your web browser of choice. On the right side of the page, click the **Download** icon (to the right of the **Find file** button). In the **Download** pop-up menu, click the **Zip** button to download the latest version of the game. ![Download a zip from Gitlab](https://opensource.com/sites/default/files/images/education/screenshot_from_2019-09-28_10-49-49.png) Once it's downloaded, unzip the archive. Alternatively, if you want to start working in the terminal right away, you can use [Git](https://opensource.com/life/16/7/stumbling-git): `$ git clone https://gitlab.com/slackermedia/bashcrawl.git bashcrawl` ### Getting started As with almost any new software package you download, the first thing you must do is read the README file. You can do this by double-clicking on the **README.md** file in the **bashcrawl** directory. On a Mac, your computer may not know what application to use to open the file; you can use any text editor or LibreOffice. **README.md** tells you exactly how to start playing the game, including how to get to the game in your terminal and the first command you must issue to start the game. If you fail to read the README file, the game wins by default (although it can't tell you that because you won't have played it). Bashcrawl isn't meant to be overly clever or advanced. On the contrary, it's as simple as it possibly can be in the interest of being transparent to new users. Ideally, a new Bash user can learn some of the basics of Bash from the game, and then stumble upon the mechanics of the game, including the simple scripts that make it run, and learn still more Bash. Additionally, new Bash users can design their own dungeon by following the examples of Bashcrawl's existing content, and there's no better way to learn to code than to make a game. ## Command Line Heroes: BASH Bashcrawl is meant for absolute beginners. If you use Bash on a regular basis, you'll very likely try to outsmart Bashcrawl by looking at files in ways that a beginner doesn't know yet. If you're an intermediate or advanced Bash user, then you should try **Command Line Heroes: BASH**. The game is simple: Type in as many valid commands you can think of during a given amount of time. It sounds simple. As a Bash user, you use lots of commands every day. As a Linux user, you know where to find lists of commands; the **util-linux** package alone contains over 100 commands! The question is, are any of those commands available at your fingertips under the pressure of a countdown? ![Command Line Heroes: BASH](https://opensource.com/sites/default/files/uploads/commandlineheroes-bash.jpg) This game sounds simple because it is! In principle, it's similar to flashcards, only in reverse.
In practice, it's a fun way to test your knowledge and recall. And of course, it's open source, having been developed by the creators of [Open Jam](http://openjam.io/). ### Installing You can play Command Line Heroes: Bash [online](https://www.redhat.com/en/command-line-heroes/bash/index.html), or you can download the source code from [GitHub](https://github.com/CommandLineHeroes/clh-bash/). The game is written in Node.js, so unless you want to help develop the game, it makes sense to just play it online. ## Minesweeper in Bash If you're an advanced Bash user, and you've written several Bash scripts, then you're probably beyond just learning Bash. For a real challenge, you might try programming a game instead of playing one. With a little thought and an afternoon or three of work, the popular game **Minesweeper** can be implemented entirely in Bash. You can try writing the game yourself first, then refer to [Abhishek Tamrakar's](https://opensource.com/article/19/9/advanced-bash-building-minesweeper) article for an overview of how he accomplished it. Sometimes programming doesn't have a purpose but to educate. Programming a game in Bash may not be the project you'll stake your online reputation on, but the process can be fun and enlightening. And facing a problem you never expected to face is a great way to learn new tricks. ## Learn Bash; have fun Regardless of how you approach learning it, Bash is a powerful interface because it gives you the ability to direct your computer to do what you want without going through the "middleman" interface of a GUI application. Sometimes a GUI is very helpful, but other times you want to graduate from something you know all too well and move to something you can do quickly or through automation. Because Bash is text-based, it's easy to script, making it a great starting point for automated jobs. Learn Bash to start becoming a power user. But make sure you have fun doing it.
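As a taste of how little machinery such a dungeon needs, here is a minimal, hypothetical sketch of a "room" built from plain directories and files (the real game's layout may differ; see its repository for the actual content):

```
# Build a tiny two-room dungeon out of ordinary files.
mkdir -p dungeon/entrance/cellar
cat > dungeon/entrance/scroll <<'EOF'
You are standing at the entrance of a damp dungeon.
Type "ls" to look around, and "cd cellar" to descend.
EOF
cat > dungeon/entrance/cellar/treasure <<'EOF'
You found 10 gold pieces! Collect them with:
    TREASURE=10
EOF
cd dungeon/entrance && ls   # start exploring
```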
11,449
如何在 CentOS 8/RHEL 8 上安装和使用 Cockpit
https://www.linuxtechi.com/install-use-cockpit-tool-centos8-rhel8/
2019-10-12T09:34:27
[ "Cockpit" ]
https://linux.cn/article-11449-1.html
![](/data/attachment/album/201910/12/093405gb8hv3exdbsdyfda.jpg) Cockpit 是一个基于 Web 的服务器管理工具，可用于 CentOS 和 RHEL 系统。在最近发布的 CentOS 8 和 RHEL 8 中，cockpit 是默认的服务器管理工具。它的软件包在默认的 CentOS 8 和 RHEL 8 仓库中就有。Cockpit 是一个有用的基于 Web 的 GUI 工具，系统管理员可以通过该工具监控和管理 Linux 服务器，它还可用于管理服务器、容器、虚拟机中的网络和存储，以及检查系统和应用的日志。 在本文中，我们将演示如何在 CentOS 8 和 RHEL 8 中安装和设置 Cockpit。 ### 在 CentOS 8/RHEL 8 上安装和设置 Cockpit 登录你的 CentOS 8/RHEL 8，打开终端并执行以下 `dnf` 命令： ``` [root@linuxtechi ~]# dnf install cockpit -y ``` 运行以下命令启用并启动 cockpit 服务： ``` [root@linuxtechi ~]# systemctl start cockpit.socket [root@linuxtechi ~]# systemctl enable cockpit.socket ``` 使用以下命令在系统防火墙中允许 Cockpit 端口： ``` [root@linuxtechi ~]# firewall-cmd --permanent --add-service=cockpit [root@linuxtechi ~]# firewall-cmd --reload ``` 验证 cockpit 服务是否已启动和运行，执行以下命令： ``` [root@linuxtechi ~]# systemctl status cockpit.socket [root@linuxtechi ~]# ss -tunlp | grep cockpit [root@linuxtechi ~]# ps auxf|grep cockpit ``` ![cockpit-status-centos8-rhel8](/data/attachment/album/201910/12/093431e1qgq904y96xnz55.jpg) ### 在 CentOS 8/RHEL 8 上访问 Cockpit 正如我们在上面命令的输出中看到的，cockpit 正在监听 tcp 9090 端口，打开你的 Web 浏览器并输入 URL：`https://<Your-CentOS8/RHEL8-System-IP>:9090`。 ![CentOS8-cockpit-login-screen](/data/attachment/album/201910/12/093432n448mj7zb8817x5y.jpg) RHEL 8 中的 Cockpit 登录页面： ![RHEL8-Cockpit-Login-Screen](/data/attachment/album/201910/12/093433l77b74lq1ba76a6a.jpg) 使用有管理员权限的用户名，或者我们也可以使用 root 用户的凭据登录。如果要将管理员权限分配给任何本地用户，请执行以下命令： ``` [root@linuxtechi ~]# usermod -aG wheel pkumar ``` 这里 `pkumar` 是我的本地用户。在输入用户密码后，选择 “Reuse my password for privileged tasks”，然后单击 “Log In”，然后我们看到以下页面： ![cockpit-dashboard-centos8](/data/attachment/album/201910/12/093434qznbnpkaavni47kv.jpg) 在左侧栏上，我们可以看到可以通过 cockpit GUI 监控和配置的内容。假设你要检查 CentOS 8/RHEL 8 中是否有任何可用更新，请单击 “System Updates”： ![Software-Updates-Cockpit-GUI-CentOS8-RHEL8](/data/attachment/album/201910/12/093436x559d52tm4s0ei7b.jpg) 要安装所有更新，点击 “Install All Updates”： ![Install-Software-Updates-CentOS8-RHEL8](/data/attachment/album/201910/12/093437ldid422d4d67fsks.jpg) 如果想要修改网络并要添加 Bond 接口和网桥，请单击 “Networking”： ![Networking-Cockpit-Dashboard-CentOS8-RHEL8](/data/attachment/album/201910/12/093438kp6mcw8777r7y82w.jpg) 如上所见，我们有创建 Bond 接口、网桥和 VLAN 标记接口的选项。 假设我们想创建一个 `br0` 网桥，并要为它添加 `enp0s3` 端口，单击 “Add Bridge”： 将网桥名称指定为 `br0`，将端口指定为 `enp0s3`，然后单击“Apply”。 ![Add-Bridge-Cockpit-CentOS8-RHEL8](/data/attachment/album/201910/12/093440y2fec8vllvcs8dre.jpg) 在下个页面，我们将看到该网桥处于活动状态，并且获得了与 enp0s3 接口相同的 IP： ![Bridge-Details-Cockpit-Dashboard-CentOS8-RHEL8](/data/attachment/album/201910/12/093442wigngfcfeqhi00cl.jpg) 如果你想检查系统日志，单击 “Logs”，我们可以根据严重性查看日志： ![System-Logs-Cockpit-Dashboard-CentOS8-RHEL8](/data/attachment/album/201910/12/093443vxm4989d8j3j6r43.jpg) 本文就是这些了，类似地，系统管理员可以使用 cockpit 的其他功能来监控和管理 CentOS 8 和 RHEL 8 服务器。如果这些步骤可以帮助你在 Linux 服务器上设置 cockpit，请在下面的评论栏分享你的反馈和意见。 --- via: <https://www.linuxtechi.com/install-use-cockpit-tool-centos8-rhel8/> 作者:[Pradeep Kumar](https://www.linuxtechi.com/author/pradeep/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出
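在 Cockpit 里创建网桥之后，也可以回到终端交叉验证一下（下面的接口名沿用上文示例，实际名称以你的系统为准）：

```
[root@linuxtechi ~]# ip addr show br0      # 查看网桥是否拿到了 IP 地址
[root@linuxtechi ~]# nmcli device status   # 确认 enp0s3 已作为端口挂接到 br0
[root@linuxtechi ~]# bridge link           # 列出网桥端口（由 iproute2 提供）
```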
200
OK
**Cockpit** is a Web based server management tool available for CentOS and RHEL systems; recently **CentOS 8** and **RHEL 8** were released, where cockpit is kept as the default server management tool. Its package is available in the default CentOS 8 and RHEL 8 package repositories. Cockpit is a useful Web based GUI tool through which sysadmins can monitor and manage their Linux servers; it can also be used to manage networking and storage on servers, containers, virtual machines and inspections of system and application’s logs. In this article we will demonstrate how to install and setup Cockpit on CentOS 8 and RHEL 8 system. #### Installation and setup of Cockpit on CentOS 8 / RHEL 8 System Login to your CentOS8 / RHEL 8 system and open the terminal and execute the following dnf command, [root@linuxtechi ~]# dnf install cockpit -y Run the following command to enable and start cockpit service, [root@linuxtechi ~]# systemctl start cockpit.socket [root@linuxtechi ~]# systemctl enable cockpit.socket Allow Cockpit ports in OS firewall using following command, [root@linuxtechi ~]# firewall-cmd --permanent --add-service=cockpit [root@linuxtechi ~]# firewall-cmd --reload Verify whether cockpit service is up and running or not, execute the following commands, [root@linuxtechi ~]# systemctl status cockpit.socket [root@linuxtechi ~]# ss -tunlp | grep cockpit [root@linuxtechi ~]# ps auxf|grep cockpit #### Access Cockpit on CentOS 8 / RHEL 8 system As we can see in above command’s output that cockpit is listening on tcp port 9090, open your system web browser and type url : https://<Your-CentOS8/RHEL8-System-IP>:9090 Cockpit Login Screen for RHEL 8, Use the user name which has admin rights, or we can also use the root user’s credentials to login. In case you want to assign admin rights to any local user, execute the following command, [root@linuxtechi ~]# usermod -aG wheel pkumar here pkumar is my local user, Once you enter the user’s credentials, choose “**Reuse my password for privileged tasks**” and then click on “**Log In**” option after that we will get following screen, On the Left-hand side bar, we can see what things can be monitored and configured via cockpit GUI, Let’s assume you wish to check whether there are any updates available for your CentOS 8 / RHEL 8 system, click on “**System Updates**” option, To Install all updates, click on “**Install All Updates**” If you wish to modify networking and want to add Bond interface and Bridge, then click on “**Networking**” As we can see in the above window, we have the options to create Bond interface, Bridge and VLAN tagged interface. Let’s assume we want to create a bridge as “**br0**” and want to add **enp0s3** as port to it, click on “**Add Bridge**” option, Specify the bridge name as “br0” and port as “enp0s3” and then click on apply In the next screen we will see that our bridge is active and got the same IP as of enp0s3 interface, If you wish to inspect system logs then click on “**Logs**” options, there we can view logs based on their severity. That’s all from this article, similarly sysadmins can use remaining other options of cockpit to monitor and manage their CentOS 8 and RHEL 8 server. In case these steps help you to setup cockpit on your Linux server then please do share your feedback and comments in the comments section below.
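Before moving on, a couple of optional terminal-side checks can be handy (the unit name and certificate path shown here are the usual defaults and may vary by distribution):

```
[root@linuxtechi ~]# journalctl -u cockpit -n 20   # recent logs of the Cockpit web service
[root@linuxtechi ~]# ls /etc/cockpit/ws-certs.d/   # the self-signed TLS cert Cockpit serves by default
```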
11,452
RPM 包初窥
https://fedoramagazine.org/rpm-packages-explained/
2019-10-13T10:22:44
[ "RPM" ]
https://linux.cn/article-11452-1.html
![](/data/attachment/album/201910/13/102247leuu032pxggy17gz.jpg) 也许，Fedora 社区追求其[促进自由和开源的软件及内容的使命](https://docs.fedoraproject.org/en-US/project/#_what_is_fedora_all_about)的最著名的方式就是开发 [Fedora 软件发行版](https://getfedora.org)了。因此，我们将很大一部分的社区资源用于此任务也就不足为奇了。这篇文章总结了这些软件是如何“打包”的，以及使之成为可能的基础工具，如 `rpm` 之类。 ### RPM：最小的软件单元 可供用户选择的“版本”和“风味版”（[spins](https://spins.fedoraproject.org/) / [labs](https://labs.fedoraproject.org/) / [silverblue](https://silverblue.fedoraproject.org/)）其实非常相似。它们都是由各种软件组成的，这些软件经过混合和搭配，可以很好地协同工作。它们之间的不同之处在于放入其中的具体工具不同。这种选择取决于它们所针对的用例。所有这些“版本”和“风味版”的基本组成单位都是 RPM 软件包文件。 RPM 文件是类似于 ZIP 文件或 tarball 的存档文件。实际上，它们使用了压缩来减小存档文件的大小。但是，除了文件之外，RPM 存档中还包含有关软件包的元数据。可以使用 `rpm` 工具查询： ``` $ rpm -q fpaste fpaste-0.3.9.2-2.fc30.noarch $ rpm -qi fpaste Name : fpaste Version : 0.3.9.2 Release : 2.fc30 Architecture: noarch Install Date: Tue 26 Mar 2019 08:49:10 GMT Group : Unspecified Size : 64144 License : GPLv3+ Signature : RSA/SHA256, Thu 07 Feb 2019 15:46:11 GMT, Key ID ef3c111fcfc659b9 Source RPM : fpaste-0.3.9.2-2.fc30.src.rpm Build Date : Thu 31 Jan 2019 20:06:01 GMT Build Host : buildhw-07.phx2.fedoraproject.org Relocations : (not relocatable) Packager : Fedora Project Vendor : Fedora Project URL : https://pagure.io/fpaste Bug URL : https://bugz.fedoraproject.org/fpaste Summary : A simple tool for pasting info onto sticky notes instances Description : It is often useful to be able to easily paste text to the Fedora Pastebin at http://paste.fedoraproject.org and this simple script will do that and return the resulting URL so that people may examine the output. This can hopefully help folks who are for some reason stuck without X, working remotely, or any other reason they may be unable to paste something into the pastebin $ rpm -ql fpaste /usr/bin/fpaste /usr/share/doc/fpaste /usr/share/doc/fpaste/README.rst /usr/share/doc/fpaste/TODO /usr/share/licenses/fpaste /usr/share/licenses/fpaste/COPYING /usr/share/man/man1/fpaste.1.gz ``` 安装 RPM 软件包后，`rpm` 工具可以知道具体哪些文件被添加到了系统中。因此，删除该软件包也会删除这些文件，并使系统保持一致状态。这就是为什么要尽可能地使用 `rpm` 安装软件，而不是从源代码安装软件的原因。 ### 依赖关系 如今，完全独立的软件已经非常罕见。甚至连 [fpaste](https://src.fedoraproject.org/rpms/fpaste) 这样一个简单的单文件 Python 脚本，都需要安装 Python 解释器。因此，如果系统未安装 Python（几乎不可能，但有可能），则无法使用 `fpaste`。用打包者的术语来说，“Python 是 `fpaste` 的**运行时依赖项**。” 构建 RPM 软件包时（本文不讨论构建 RPM 的过程），生成的归档文件中包括了所有这些元数据。这样，与 RPM 软件包归档文件交互的工具就知道必须要安装其它的什么东西，以便 `fpaste` 可以正常工作： ``` $ rpm -q --requires fpaste /usr/bin/python3 python3 rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 rpmlib(PayloadIsXz) <= 5.2-1 $ rpm -q --provides fpaste fpaste = 0.3.9.2-2.fc30 $ rpm -qi python3 Name : python3 Version : 3.7.3 Release : 3.fc30 Architecture: x86_64 Install Date: Thu 16 May 2019 18:51:41 BST Group : Unspecified Size : 46139 License : Python Signature : RSA/SHA256, Sat 11 May 2019 17:02:44 BST, Key ID ef3c111fcfc659b9 Source RPM : python3-3.7.3-3.fc30.src.rpm Build Date : Sat 11 May 2019 01:47:35 BST Build Host : buildhw-05.phx2.fedoraproject.org Relocations : (not relocatable) Packager : Fedora Project Vendor : Fedora Project URL : https://www.python.org/ Bug URL : https://bugz.fedoraproject.org/python3 Summary : Interpreter of the Python programming language Description : Python is an accessible, high-level, dynamically typed, interpreted programming language, designed with an emphasis on code readability. It includes an extensive standard library, and has a vast ecosystem of third-party libraries.
The python3 package provides the "python3" executable: the reference interpreter for the Python language, version 3. The majority of its standard library is provided in the python3-libs package, which should be installed automatically along with python3. The remaining parts of the Python standard library are broken out into the python3-tkinter and python3-test packages, which may need to be installed separately. Documentation for Python is provided in the python3-docs package. Packages containing additional libraries for Python are generally named with the "python3-" prefix. $ rpm -q --provides python3 python(abi) = 3.7 python3 = 3.7.3-3.fc30 python3(x86-64) = 3.7.3-3.fc30 python3.7 = 3.7.3-3.fc30 python37 = 3.7.3-3.fc30 ``` ### 解决 RPM 依赖关系 虽然 `rpm` 知道每个归档文件所需的依赖关系，但不知道在哪里找到它们。这是设计使然：`rpm` 仅适用于本地文件，必须具体告知它们的位置。因此，如果你尝试安装单个 RPM 软件包，则 `rpm` 找不到该软件包的运行时依赖项时就会出错。本示例尝试安装从 Fedora 软件包集中下载的软件包： ``` $ ls python3-elephant-0.6.2-3.fc30.noarch.rpm $ rpm -qpi python3-elephant-0.6.2-3.fc30.noarch.rpm Name : python3-elephant Version : 0.6.2 Release : 3.fc30 Architecture: noarch Install Date: (not installed) Group : Unspecified Size : 2574456 License : BSD Signature : (none) Source RPM : python-elephant-0.6.2-3.fc30.src.rpm Build Date : Fri 14 Jun 2019 17:23:48 BST Build Host : buildhw-02.phx2.fedoraproject.org Relocations : (not relocatable) Packager : Fedora Project Vendor : Fedora Project URL : http://neuralensemble.org/elephant Bug URL : https://bugz.fedoraproject.org/python-elephant Summary : Elephant is a package for analysis of electrophysiology data in Python Description : Elephant - Electrophysiology Analysis Toolkit Elephant is a package for the analysis of neurophysiology data, based on Neo. $ rpm -qp --requires python3-elephant-0.6.2-3.fc30.noarch.rpm python(abi) = 3.7 python3.7dist(neo) >= 0.7.1 python3.7dist(numpy) >= 1.8.2 python3.7dist(quantities) >= 0.10.1 python3.7dist(scipy) >= 0.14.0 python3.7dist(six) >= 1.10.0 rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PartialHardlinkSets) <= 4.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 rpmlib(PayloadIsXz) <= 5.2-1 $ sudo rpm -i ./python3-elephant-0.6.2-3.fc30.noarch.rpm error: Failed dependencies: python3.7dist(neo) >= 0.7.1 is needed by python3-elephant-0.6.2-3.fc30.noarch python3.7dist(quantities) >= 0.10.1 is needed by python3-elephant-0.6.2-3.fc30.noarch ``` 理论上，你可以下载 `python3-elephant` 所需的所有软件包，并告诉 `rpm` 它们都在哪里，但这并不方便。如果 `python3-neo` 和 `python3-quantities` 还有其它的运行时要求怎么办？很快，这种“依赖链”就会变得相当复杂。 #### 存储库 幸运的是，`dnf` 和它的伙伴们可以帮助解决此问题。与 `rpm` 不同，`dnf` 能感知到**存储库**。存储库是程序包的集合，带有告诉 `dnf` 这些存储库包含什么内容的元数据。所有 Fedora 系统都带有默认启用的默认 Fedora 存储库： ``` $ sudo dnf repolist repo id repo name status fedora Fedora 30 - x86_64 56,582 fedora-modular Fedora Modular 30 - x86_64 135 updates Fedora 30 - x86_64 - Updates 8,573 updates-modular Fedora Modular 30 - x86_64 - Updates 138 updates-testing Fedora 30 - x86_64 - Test Updates 8,458 ``` 在 Fedora 快速文档中有[这些存储库](https://docs.fedoraproject.org/en-US/quick-docs/repositories/)以及[如何管理](https://docs.fedoraproject.org/en-US/quick-docs/adding-or-removing-software-repositories-in-fedora/)它们的更多信息。 `dnf` 可用于查询存储库以获取有关它们包含的软件包信息。它还可以在这些存储库中搜索软件，或从中安装/卸载/升级软件包： ``` $ sudo dnf search elephant Last metadata expiration check: 0:05:21 ago on Sun 23 Jun 2019 14:33:38 BST.
============================================================================== Name & Summary Matched: elephant ============================================================================== python3-elephant.noarch : Elephant is a package for analysis of electrophysiology data in Python python3-elephant.noarch : Elephant is a package for analysis of electrophysiology data in Python $ sudo dnf list \*elephant\* Last metadata expiration check: 0:05:26 ago on Sun 23 Jun 2019 14:33:38 BST. Available Packages python3-elephant.noarch 0.6.2-3.fc30 updates-testing python3-elephant.noarch 0.6.2-3.fc30 updates ``` #### 安装依赖项 现在使用 `dnf` 安装软件包时，它将*解决*所有必需的依赖项，然后调用 `rpm` 执行该事务操作： ``` $ sudo dnf install python3-elephant Last metadata expiration check: 0:06:17 ago on Sun 23 Jun 2019 14:33:38 BST. Dependencies resolved. ============================================================================================================================================================================================== Package Architecture Version Repository Size ============================================================================================================================================================================================== Installing: python3-elephant noarch 0.6.2-3.fc30 updates-testing 456 k Installing dependencies: python3-neo noarch 0.8.0-0.1.20190215git49b6041.fc30 fedora 753 k python3-quantities noarch 0.12.2-4.fc30 fedora 163 k Installing weak dependencies: python3-igor noarch 0.3-5.20150408git2c2a79d.fc30 fedora 63 k Transaction Summary ============================================================================================================================================================================================== Install 4 Packages Total download size: 1.4 M Installed size: 7.0 M Is this ok [y/N]: y Downloading Packages: (1/4): python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch.rpm 222 kB/s | 63 kB 00:00 (2/4): python3-elephant-0.6.2-3.fc30.noarch.rpm 681 kB/s | 456 kB 00:00 (3/4): python3-quantities-0.12.2-4.fc30.noarch.rpm 421 kB/s | 163 kB 00:00 (4/4): python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch.rpm 840 kB/s | 753 kB 00:00 ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 884 kB/s | 1.4 MB 00:01 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : python3-quantities-0.12.2-4.fc30.noarch 1/4 Installing : python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch 2/4 Installing : python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch 3/4 Installing : python3-elephant-0.6.2-3.fc30.noarch 4/4 Running scriptlet: python3-elephant-0.6.2-3.fc30.noarch 4/4 Verifying : python3-elephant-0.6.2-3.fc30.noarch 1/4 Verifying : python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch 2/4 Verifying : python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch 3/4 Verifying : python3-quantities-0.12.2-4.fc30.noarch 4/4 Installed: python3-elephant-0.6.2-3.fc30.noarch python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch python3-quantities-0.12.2-4.fc30.noarch Complete!
``` 请注意，`dnf` 甚至还安装了 `python3-igor`，而它不是 `python3-elephant` 的直接依赖项。 ### DnfDragora：DNF 的一个图形界面 尽管技术用户可能会发现 `dnf` 易于使用，但并非所有人都这样认为。[Dnfdragora](https://src.fedoraproject.org/rpms/dnfdragora) 通过为 `dnf` 提供图形化前端来解决此问题。 ![dnfdragora (version 1.1.1-2 on Fedora 30) listing all the packages installed on a system.](/data/attachment/album/201910/13/102248utuo4govbd8wvouw.png) 从上面可以看到，dnfdragora 似乎提供了 `dnf` 的所有主要功能。 Fedora 中还有其他工具也可以管理软件包，GNOME 的“<ruby> 软件 <rt> Software </rt></ruby>”和“<ruby> 发现 <rt> Discover </rt></ruby>”就是其中两个。GNOME “软件”仅专注于图形应用程序。你无法使用这个图形化前端来安装命令行或终端工具，例如 `htop` 或 `weechat`。但是，GNOME “软件”支持安装 `dnf` 所不支持的 [Flatpak](https://fedoramagazine.org/getting-started-flatpak/) 和 Snap 应用程序。它们是针对不同目标受众的不同工具，因此提供了不同的功能。 这篇文章仅触及到了 Fedora 软件的生命周期的冰山一角。本文介绍了什么是 RPM 软件包，以及使用 `rpm` 和 `dnf` 的主要区别。 在以后的文章中，我们将详细介绍： * 创建这些程序包所需的过程 * 社区如何测试它们以确保它们正确构建 * 社区用来将软件包交付给社区用户的基础设施 --- via: <https://fedoramagazine.org/rpm-packages-explained/> 作者:[Ankur Sinha FranciscoD](https://fedoramagazine.org/author/ankursinha/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出
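沿着本文的思路，再补充几个排查依赖关系时很顺手的查询（以上文出现过的 fpaste 为例；`dnf repoquery` 子命令由 dnf-plugins-core 软件包提供）：

```
$ rpm -qf /usr/bin/fpaste                # 这个文件属于哪个软件包？
$ dnf repoquery --whatrequires fpaste    # 仓库里有哪些软件包依赖 fpaste？
$ dnf history                            # 回顾最近的安装/升级/卸载事务
```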
200
OK
Perhaps the best known way the Fedora community pursues its [mission of promoting free and open source software and content](https://docs.fedoraproject.org/en-US/project/#_what_is_fedora_all_about) is by developing the [Fedora software distribution](https://getfedora.org). So it’s not a surprise at all that a very large proportion of our community resources are spent on this task. This post summarizes how this software is “packaged” and the underlying tools such as *rpm* that make it all possible. ## RPM: the smallest unit of software The editions and flavors ([spins](https://spins.fedoraproject.org/)/[labs](https://labs.fedoraproject.org/)/[silverblue](https://silverblue.fedoraproject.org/)) that users get to choose from are all very similar. They’re all composed of various software that is mixed and matched to work well together. What differs between them is the exact list of tools that goes into each. That choice depends on the use case that they target. The basic unit of all of these is an RPM package file. RPM files are archives that are similar to ZIP files or tarballs. In fact, they uses compression to reduce the size of the archive. However, along with files, RPM archives also contain metadata about the package. This can be queried using the *rpm* tool: $ rpm -q fpaste fpaste-0.3.9.2-2.fc30.noarch $ rpm -qi fpaste Name : fpaste Version : 0.3.9.2 Release : 2.fc30 Architecture: noarch Install Date: Tue 26 Mar 2019 08:49:10 GMT Group : Unspecified Size : 64144 License : GPLv3+ Signature : RSA/SHA256, Thu 07 Feb 2019 15:46:11 GMT, Key ID ef3c111fcfc659b9 Source RPM : fpaste-0.3.9.2-2.fc30.src.rpm Build Date : Thu 31 Jan 2019 20:06:01 GMT Build Host : buildhw-07.phx2.fedoraproject.org Relocations : (not relocatable) Packager : Fedora Project Vendor : Fedora Project URL : https://pagure.io/fpaste Bug URL : https://bugz.fedoraproject.org/fpaste Summary : A simple tool for pasting info onto sticky notes instances Description : It is often useful to be able to easily paste text to the Fedora Pastebin at http://paste.fedoraproject.org and this simple script will do that and return the resulting URL so that people may examine the output. This can hopefully help folks who are for some reason stuck without X, working remotely, or any other reason they may be unable to paste something into the pastebin $ rpm -ql fpaste /usr/bin/fpaste /usr/share/doc/fpaste /usr/share/doc/fpaste/README.rst /usr/share/doc/fpaste/TODO /usr/share/licenses/fpaste /usr/share/licenses/fpaste/COPYING /usr/share/man/man1/fpaste.1.gz When an RPM package is installed, the *rpm* tools know exactly what files were added to the system. So, removing a package also removes these files, and leaves the system in a consistent state. This is why installing software using *rpm* is preferred over installing software from source whenever possible. ## Dependencies Nowadays, it is quite rare for software to be completely self-contained. Even [fpaste](https://src.fedoraproject.org/rpms/fpaste), a simple one file Python script, requires that the Python interpreter be installed. So, if the system does not have Python installed (highly unlikely, but possible), *fpaste* cannot be used. In packager jargon, we say that “Python is a **run-time dependency** of *fpaste*“. When RPM packages are built (the process of building RPMs is not discussed in this post), the generated archive includes all of this metadata. 
That way, the tools interacting with the RPM package archive know what else must be installed so that fpaste works correctly: $ rpm -q --requires fpaste /usr/bin/python3 python3 rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 rpmlib(PayloadIsXz) <= 5.2-1 $ rpm -q --provides fpaste fpaste = 0.3.9.2-2.fc30 $ rpm -qi python3 Name : python3 Version : 3.7.3 Release : 3.fc30 Architecture: x86_64 Install Date: Thu 16 May 2019 18:51:41 BST Group : Unspecified Size : 46139 License : Python Signature : RSA/SHA256, Sat 11 May 2019 17:02:44 BST, Key ID ef3c111fcfc659b9 Source RPM : python3-3.7.3-3.fc30.src.rpm Build Date : Sat 11 May 2019 01:47:35 BST Build Host : buildhw-05.phx2.fedoraproject.org Relocations : (not relocatable) Packager : Fedora Project Vendor : Fedora Project URL : https://www.python.org/ Bug URL : https://bugz.fedoraproject.org/python3 Summary : Interpreter of the Python programming language Description : Python is an accessible, high-level, dynamically typed, interpreted programming language, designed with an emphasis on code readability. It includes an extensive standard library, and has a vast ecosystem of third-party libraries. The python3 package provides the "python3" executable: the reference interpreter for the Python language, version 3. The majority of its standard library is provided in the python3-libs package, which should be installed automatically along with python3. The remaining parts of the Python standard library are broken out into the python3-tkinter and python3-test packages, which may need to be installed separately. Documentation for Python is provided in the python3-docs package. Packages containing additional libraries for Python are generally named with the "python3-" prefix. $ rpm -q --provides python3 python(abi) = 3.7 python3 = 3.7.3-3.fc30 python3(x86-64) = 3.7.3-3.fc30 python3.7 = 3.7.3-3.fc30 python37 = 3.7.3-3.fc30 ## Resolving RPM dependencies While *rpm* knows the required dependencies for each archive, it does not know where to find them. This is by design: *rpm* only works on local files and must be told exactly where they are. So, if you try to install a single RPM package, you get an error if *rpm* cannot find the package’s run-time dependencies. This example tries to install a package downloaded from the Fedora package set: $ ls python3-elephant-0.6.2-3.fc30.noarch.rpm $ rpm -qpi python3-elephant-0.6.2-3.fc30.noarch.rpm Name : python3-elephant Version : 0.6.2 Release : 3.fc30 Architecture: noarch Install Date: (not installed) Group : Unspecified Size : 2574456 License : BSD Signature : (none) Source RPM : python-elephant-0.6.2-3.fc30.src.rpm Build Date : Fri 14 Jun 2019 17:23:48 BST Build Host : buildhw-02.phx2.fedoraproject.org Relocations : (not relocatable) Packager : Fedora Project Vendor : Fedora Project URL : http://neuralensemble.org/elephant Bug URL : https://bugz.fedoraproject.org/python-elephant Summary : Elephant is a package for analysis of electrophysiology data in Python Description : Elephant - Electrophysiology Analysis Toolkit Elephant is a package for the analysis of neurophysiology data, based on Neo.
$ rpm -qp --requires python3-elephant-0.6.2-3.fc30.noarch.rpm python(abi) = 3.7 python3.7dist(neo) >= 0.7.1 python3.7dist(numpy) >= 1.8.2 python3.7dist(quantities) >= 0.10.1 python3.7dist(scipy) >= 0.14.0 python3.7dist(six) >= 1.10.0 rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PartialHardlinkSets) <= 4.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 rpmlib(PayloadIsXz) <= 5.2-1 $ sudo rpm -i ./python3-elephant-0.6.2-3.fc30.noarch.rpm error: Failed dependencies: python3.7dist(neo) >= 0.7.1 is needed by python3-elephant-0.6.2-3.fc30.noarch python3.7dist(quantities) >= 0.10.1 is needed by python3-elephant-0.6.2-3.fc30.noarch In theory, one could download all the packages that are required for *python3-elephant*, and tell *rpm* where they all are, but that isn’t convenient. What if *python3-neo* and *python3-quantities* have other run-time requirements and so on? Very quickly, the **dependency chain** can get quite complicated. ### Repositories Luckily, *dnf* and friends exist to help with this issue. Unlike *rpm*, *dnf* is aware of **repositories**. Repositories are collections of packages, with metadata that tells *dnf* what these repositories contain. All Fedora systems come with the default Fedora repositories enabled by default: $ sudo dnf repolist repo id repo name status fedora Fedora 30 - x86_64 56,582 fedora-modular Fedora Modular 30 - x86_64 135 updates Fedora 30 - x86_64 - Updates 8,573 updates-modular Fedora Modular 30 - x86_64 - Updates 138 updates-testing Fedora 30 - x86_64 - Test Updates 8,458 There’s more information on [these repositories](https://docs.fedoraproject.org/en-US/quick-docs/repositories/), and how they [can be managed](https://docs.fedoraproject.org/en-US/quick-docs/adding-or-removing-software-repositories-in-fedora/) on the Fedora quick docs. *dnf* can be used to query repositories for information on the packages they contain. It can also search them for software, or install/uninstall/upgrade packages from them: $ sudo dnf search elephant Last metadata expiration check: 0:05:21 ago on Sun 23 Jun 2019 14:33:38 BST. ============================================================================== Name & Summary Matched: elephant ============================================================================== python3-elephant.noarch : Elephant is a package for analysis of electrophysiology data in Python python3-elephant.noarch : Elephant is a package for analysis of electrophysiology data in Python $ sudo dnf list \*elephant\* Last metadata expiration check: 0:05:26 ago on Sun 23 Jun 2019 14:33:38 BST. Available Packages python3-elephant.noarch 0.6.2-3.fc30 updates-testing python3-elephant.noarch 0.6.2-3.fc30 updates ### Installing dependencies When installing the package using *dnf* now, it *resolves* all the required dependencies, then calls *rpm* to carry out the *transaction*: $ sudo dnf install python3-elephant Last metadata expiration check: 0:06:17 ago on Sun 23 Jun 2019 14:33:38 BST. Dependencies resolved. 
============================================================================================================================================================================================== Package Architecture Version Repository Size ============================================================================================================================================================================================== Installing: python3-elephant noarch 0.6.2-3.fc30 updates-testing 456 k Installing dependencies: python3-neo noarch 0.8.0-0.1.20190215git49b6041.fc30 fedora 753 k python3-quantities noarch 0.12.2-4.fc30 fedora 163 k Installing weak dependencies: python3-igor noarch 0.3-5.20150408git2c2a79d.fc30 fedora 63 k Transaction Summary ============================================================================================================================================================================================== Install 4 Packages Total download size: 1.4 M Installed size: 7.0 M Is this ok [y/N]: y Downloading Packages: (1/4): python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch.rpm 222 kB/s | 63 kB 00:00 (2/4): python3-elephant-0.6.2-3.fc30.noarch.rpm 681 kB/s | 456 kB 00:00 (3/4): python3-quantities-0.12.2-4.fc30.noarch.rpm 421 kB/s | 163 kB 00:00 (4/4): python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch.rpm 840 kB/s | 753 kB 00:00 ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 884 kB/s | 1.4 MB 00:01 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : python3-quantities-0.12.2-4.fc30.noarch 1/4 Installing : python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch 2/4 Installing : python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch 3/4 Installing : python3-elephant-0.6.2-3.fc30.noarch 4/4 Running scriptlet: python3-elephant-0.6.2-3.fc30.noarch 4/4 Verifying : python3-elephant-0.6.2-3.fc30.noarch 1/4 Verifying : python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch 2/4 Verifying : python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch 3/4 Verifying : python3-quantities-0.12.2-4.fc30.noarch 4/4 Installed: python3-elephant-0.6.2-3.fc30.noarch python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch python3-quantities-0.12.2-4.fc30.noarch Complete! Notice how dnf even installed *python3-igor*, which isn’t a direct dependency of *python3-elephant*. ## DnfDragora: a graphical interface to DNF While technical users may find *dnf* straightforward to use, it isn’t for everyone. [Dnfdragora](https://src.fedoraproject.org/rpms/dnfdragora) addresses this issue by providing a graphical front end to *dnf*. ![](https://fedoramagazine.org/wp-content/uploads/2019/06/dnfdragora-1024x558.png) From a quick look, dnfdragora appears to provide all of *dnf*‘s main functions. There are other tools in Fedora that also manage packages. GNOME Software, and Discover are two examples. GNOME Software is focused on graphical applications only. You can’t use the graphical front end to install command line or terminal tools such as *htop* or *weechat*. However, GNOME Software does support the installation of [Flatpaks](https://fedoramagazine.org/getting-started-flatpak/) and Snap applications which *dnf* does not. So, they are different tools with different target audiences, and so provide different functions. 
This post only touches the tip of the iceberg that is the life cycle of software in Fedora. This article explained what RPM packages are, and the main differences between using *rpm* and using *dnf*. In future posts, we’ll speak more about: - The processes that are needed to create these packages - How the community tests them to ensure that they are built correctly - The infrastructure that the community uses to get them to community users
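Incidentally, the "download everything and point rpm at it" approach described earlier can itself be automated. Here is a rough sketch using the download plugin (shipped in the dnf-plugins-core package); treat it as an illustration rather than a recommended workflow:

```
$ sudo dnf install dnf-plugins-core         # provides the "dnf download" subcommand
$ dnf download --resolve python3-elephant   # fetch the RPM plus its not-yet-installed deps
$ sudo rpm -i ./*.rpm                       # now rpm has everything it needs locally
```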
The “infrastructure” in the article refers to all the servers, web applications and tools that Fedora users/developers use to maintain/update/build everything from individual packages up to the Fedora distribution itself. I don’t have a good link to an overview, but this page can be used as a list of sorts, as it shows the status of all the major parts of the Fedora infrastructure: https://status.fedoraproject.org/

## H

Great post! Looking forward to the next one in the series!
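As a practical footnote to the .deb-to-.rpm question raised in the comments above, here is a minimal sketch of the alien workflow Chicken Sandwich describes. The package file name is a placeholder, the sketch assumes alien is available in your configured repositories, and, as noted in the comment, the result may still need manual fixing.

```bash
# Install alien, the Debian-to-RPM package converter
sudo dnf install alien

# Convert a .deb to an .rpm; -r (--to-rpm) selects RPM output
# (the file name below is a placeholder, not a real package)
alien -r some-tool_1.0-1_amd64.deb

# Inspect the generated RPM's file list before trusting it;
# alien bumps the release number, so expect something like 1.0-2
rpm -qpl some-tool-1.0-2.x86_64.rpm
```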
11,453
每周开源点评:Java 还有用吗、Linux 桌面以及更多的行业趋势
https://opensource.com/article/19/9/java-relevant-and-more-industry-trends
2019-10-13T11:17:00
[ "Java" ]
https://linux.cn/article-11453-1.html
> > 开源社区和行业趋势的每周总览。 > > >

![Person standing in front of a giant computer screen with numbers, data](/data/attachment/album/201910/13/111738qz4h2tqt44tim43h.png "Person standing in front of a giant computer screen with numbers, data")

作为在一家采用开源开发模型的企业软件公司担任高级产品营销经理的工作的一部分,我会为产品营销人员、经理和其他影响者定期发布有关开源社区、市场和行业趋势的更新。以下是该更新中我和他们最喜欢的五篇文章。

### 《Java 还有用吗?》

* [文章地址](https://sdtimes.com/java/is-java-still-relevant/)

> > 负责 Java Enterprise Edition(现为 Jakarta EE)的 Eclipse 基金会执行董事 Mike Milinkovich 也认为 Java 本身将不断发展以支持这些技术。“我认为 Java 将发生从 JVM 一直到 Java 本身的变化,”Milinkovich 表示,“因此,JVM 中任何有助于将 JVM 与 Docker 容器集成在一起、并能更好地在 Kubernetes 中对 Docker 容器进行监测的新特性,都将是一个巨大的帮助。因此,我们期待 Java SE 朝着这个方向发展。” > > >

**影响**:Jakarta EE 是 Java Enterprise Edition 的完全开源版本,它为 Java 未来多年的发展奠定了基础。Java 的价值部分来自于业界在 Java 开发上投入的惊人资金,以及软件开发人员多年来用它解决问题所积累的经验。再加上生态系统中的创新(例如,请参见 [Quarkus](https://github.com/quarkusio/quarkus) 或 GraalVM),答案必然是“是”。

### 《GraalVM:多语言 JVM 的圣杯?》

* [文章地址](https://www.transposit.com/blog/2019.01.02-graalvm-holy/?c=hn)

> > 虽然大多数关于 GraalVM 的宣传都是围绕着将 JVM 项目编译成原生程序,但是我们发现它的 Polyglot API 也很有价值。GraalVM 是一个引人注目的、已经完全可用的 Nashorn 替代品,尽管迁移的路径仍然有一些坎坷,主要原因是缺乏文档。希望这篇文章能帮助其他人找到离开 Nashorn 通往圣杯之路。 > > >

**影响**:对于开源项目来说,最好的事情之一就是用户开始对某些新奇的用法赞不绝口,即使这些用法并不是主打的用例。“是的,听起来不错,但我们甚至没有启用那个主打功能(指编译为原生程序)……反而是另一个功能(指 Polyglot API)帮了大忙!”

### 《你可以说我疯了,但 Windows 11 或可以在 Linux 上运行》

* [文章链接](https://www.computerworld.com/article/3438856/call-me-crazy-but-windows-11-could-run-on-linux.html#tk.rss_operatingsystems)

> > 微软已经做了一些必要的工作。[Windows 的 Linux 子系统](https://blogs.msdn.microsoft.com/wsl/)(WSL)的开发人员一直在致力于将 Linux API 调用映射到 Windows 中,反之亦然。在 WSL 的第一个版本中,微软将 Windows 本地库、程序以及 Linux 之间的关键点连接起来了。当时,[Carmen Crincoli 发推文称](https://twitter.com/CarmenCrincoli/status/862714516257226752):“2017 年归根结底还是 Linux 桌面年。只不过这个桌面是 Windows。”Carmen Crincoli 是什么人?微软与存储和独立硬件供应商的合作伙伴经理。 > > >

**影响**:[Hieroglyph 项目](https://hieroglyph.asu.edu/2016/04/what-is-the-purpose-of-science-fiction-stories/)的前提是“一部好的科幻作品会提出一种对未来的愿景……它建立在现实主义的基础上……(而这)引发我们思考自己的选择和互动以何种复杂的方式为创造未来做出贡献。”微软的选择以及它与更广泛的开源社区的互动,是否会通向一个科幻般的未来?敬请关注!

### 《Python 正在吞噬世界:一个开发人员的业余项目如何成为地球上最热门的编程语言》

* [文章链接](https://www.techrepublic.com/article/python-is-eating-the-world-how-one-developers-side-project-became-the-hottest-programming-language-on-the-planet/)

> > 还有一个问题是,监督语言开发的机构(Python 核心开发人员和 Python 指导委员会)的组成,能否更好地反映 2019 年 Python 用户群的多样性。 > > >

> > Wijaya 称:“我希望看到在所有不同指标上都有更好的代表性,不仅在性别平衡方面,而且在种族和其它所有方面。” > > >

> > “在 PyCon 上,我与来自印度和非洲的 [PyLadies](https://www.pyladies.com/) 成员进行了交谈。他们评论说:‘当我们听说 Python 或 PyLadies 时,我们想到的是北美或加拿大的人,而实际上,世界其它地区的用户群很大。为什么我们看不到更多?’我认为这很有意义。因此,我绝对希望看到这种情况发生,我认为我们都需要尽自己的一份力量。” > > >

**影响**:在这个动荡的时代,谁不想听到一位仁慈的独裁者(指 Python 创始人)把项目的缰绳交到最经常使用它的人手中的消息呢?

*我希望你喜欢这张上周让我印象深刻的列表,并在下周一回来了解更多的开源社区、市场和行业趋势。*

---

via: <https://opensource.com/article/19/9/java-relevant-and-more-industry-trends>

作者:[Tim Hildred](https://opensource.com/users/thildred) 选题:[lujun9972](https://github.com/lujun9972) 译者:[laingke](https://github.com/laingke) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
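作为对上面 GraalVM 一节的补充,下面的 shell 片段粗略勾勒了文中所说的“将 JVM 项目编译成原生程序”的典型步骤。这只是一个基于假设的示意:它假定你已安装 GraalVM 并配置好了 PATH,其中的 jar 文件名纯属示例。

```bash
# gu 是 GraalVM 自带的组件管理器,用它安装 native-image 组件
gu install native-image

# 把一个普通的可执行 jar 编译为独立的原生二进制文件(第二个参数是输出名)
native-image -jar myapp.jar myapp

# 生成的 ./myapp 不再需要 JVM,可直接运行
./myapp
```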
200
OK
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

[Is Java still relevant?](https://sdtimes.com/java/is-java-still-relevant/)

Mike Milinkovich, executive director of the Eclipse Foundation, which oversees Java Enterprise Edition (now Jakarta EE), also believes Java itself is going to evolve to support these technologies. “I think that there’s going to be changes to Java that go from the JVM all the way up,” said Milinkovich. “So any new features in the JVM which will help integrate the JVM with Docker containers and be able to do a better job of instrumenting Docker containers within Kubernetes is definitely going to be a big help. So we are going to be looking for Java SE to evolve in that direction.”

**The impact**: A completely open source release of Java Enterprise Edition as Jakarta EE lays the groundwork for years of Java development to come. Some of Java's relevance comes from the mind-boggling sums that have been spent developing in it and the years of experience that software developers have in solving problems with it. Combine that with the innovation in the ecosystem (for example, see [Quarkus](https://github.com/quarkusio/quarkus) or GraalVM), and the answer has to be "yes."

[GraalVM: The holy graal of polyglot JVM?](https://www.transposit.com/blog/2019.01.02-graalvm-holy/?c=hn)

While most of the hype around GraalVM has been around compiling JVM projects to native, we found plenty of value in its Polyglot APIs. GraalVM is a compelling and already fully usable alternative to Nashorn, though the migration path is still a little rocky, mostly due to a lack of documentation. Hopefully this post helps others find their way off of Nashorn and on to the holy graal.

**The impact**: One of the best things that can happen with an open source project is if users start raving about some novel application of the technology that isn't even the headline use case. "Yeah yeah, sounds great but we don't even turn that thing on... this other piece though!"

[Call me crazy, but Windows 11 could run on Linux](https://www.computerworld.com/article/3438856/call-me-crazy-but-windows-11-could-run-on-linux.html#tk.rss_operatingsystems)

Microsoft has already been doing some of the needed work. [Windows Subsystem for Linux](https://blogs.msdn.microsoft.com/wsl/) (WSL) developers have been working on mapping Linux API calls to Windows, and vice versa. With the first version of WSL, Microsoft connected the dots between Windows-native libraries and programs and Linux. At the time, [Carmen Crincoli tweeted](https://twitter.com/CarmenCrincoli/status/862714516257226752): “2017 is finally the year of Linux on the Desktop. It’s just that the Desktop is Windows.” Who is Carmen Crincoli? Microsoft’s manager of partnerships with storage and independent hardware vendors.

**The impact**: [Project Hieroglyph](https://hieroglyph.asu.edu/2016/04/what-is-the-purpose-of-science-fiction-stories/) builds on the premise that "a good science fiction work posits one vision for the future... that is built on a foundation of realism [that]... invites us to consider the complex ways our choices and interactions contribute to generating the future." Could Microsoft's choices and interactions with the broader open source community lead to a sci-fi future? Stay tuned!
[Python is eating the world: How one developer's side project became the hottest programming language on the planet](https://www.techrepublic.com/article/python-is-eating-the-world-how-one-developers-side-project-became-the-hottest-programming-language-on-the-planet/)

There are also questions over whether the makeup of bodies overseeing the development of the language — Python core developers and the Python Steering Council — could better reflect the diverse user base of Python users in 2019.

"I would like to see better representation across all the diverse metrics, not just in terms of gender balance, but also race and everything else," says Wijaya.

"At PyCon I spoke to [PyLadies](https://www.pyladies.com/) members from India and Africa. They commented that, 'When we hear about Python or PyLadies, we think about people in North America or Canada, where in reality there are big user bases in other parts of the world. Why aren't we seeing more of them?' I think it makes so much sense. So I definitely would like to see that happening, and I think we all need to do our part."

**The impact**: In these troubled times who doesn't want to hear about a benevolent dictator turning the reins of their project over to the people who are using it the most?

*I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends.*
11,455
生成 Linux 运行时间报告的 Bash 脚本
https://www.2daygeek.com/bash-script-generate-linux-system-uptime-reports/
2019-10-13T11:47:35
[ "运行时间" ]
https://linux.cn/article-11455-1.html
![](/data/attachment/album/201910/13/114727no80y3hzg3og2rhg.jpg)

出于一些原因,你可能需要每月收集一次 [Linux 系统运行时间](https://www.2daygeek.com/linux-system-server-uptime-check/)报告。如果是这样,你可以根据需要使用以下 [bash 脚本](https://www.2daygeek.com/category/shell-script/) 之一。

我们为什么要收集这份报告?在一段时间后重启 Linux 服务器是解决某些未解决问题的好方法。(LCTT 译注:本文这些观点值得商榷,很多服务器可以稳定运行几千天,尤其是有了内核热补丁之后,重启并不是必须的。)

一般建议每 180 天重新启动一次,但具体周期也许取决于你公司的政策。如果服务器已经运行了很长时间而没有重启,这可能导致服务器上出现一些性能或内存问题,我在许多服务器上都注意到了这一点。

这些脚本可以一次性提供所有服务器的运行时间报告。

### 什么是 uptime 命令

`uptime` 命令会告诉你系统已经运行了多长时间。它在一行中显示以下信息:当前时间、系统运行了多长时间、当前登录了多少用户,以及过去 1、5 和 15 分钟的平均系统负载。

### 什么是 tuptime?

[tuptime](https://www.2daygeek.com/linux-tuptime-check-historical-uptime/) 是用于报告系统的历史运行时间和统计信息的工具,其记录可以跨重启保存。它类似于 `uptime` 命令,但输出更有趣。

### 1)检查 Linux 系统运行时间的 Bash 脚本

该 bash 脚本会收集所有服务器的运行时间,并将报告发送到指定的电子邮箱地址。

请把脚本中的电子邮箱地址替换为你自己的,而不是用我们的,否则你将不会收到邮件。

```
# vi /opt/scripts/system-uptime-script.sh

#!/bin/bash
> /tmp/uptime-report.out
for host in $(cat /tmp/servers.txt)
do
    echo -n "$host: "
    ssh $host uptime | awk '{print $3,$4}' | sed 's/,//'
done | column -t >> /tmp/uptime-report.out
cat /tmp/uptime-report.out | mail -s "Linux Servers Uptime Report" "[email protected]"
```

给 `system-uptime-script.sh` 设置可执行权限。

```
$ chmod +x /opt/scripts/system-uptime-script.sh
```

最后运行 bash 脚本获取输出。

```
# sh /opt/scripts/system-uptime-script.sh
```

你将收到类似以下的报告。

```
# cat /tmp/uptime-report.out
192.168.1.5:   2 days
192.168.1.6:   15 days
192.168.1.7:   30 days
192.168.1.8:   7 days
192.168.1.9:   67 days
192.168.1.10:  130 days
192.168.1.11:  23 days
```

### 2)检查 Linux 系统是否运行了 30 天以上的 Bash 脚本

此 bash 脚本会找出运行 30 天以上的服务器,并将报告发送到指定的邮箱地址。你可以根据需要更改天数。

```
# vi /opt/scripts/system-uptime-script-1.sh

#!/bin/bash
> /tmp/uptime-report-1.out
for host in $(cat /tmp/servers.txt)
do
    echo -n "$host: "
    ssh $host uptime | awk '{print $3,$4}' | sed 's/,//'
done | column -t >> /tmp/uptime-report-1.out
cat /tmp/uptime-report-1.out | awk '$2 >= 30' > /tmp/uptime-report-2.out
cat /tmp/uptime-report-2.out | mail -s "Linux Servers Uptime Report" "[email protected]"
```

给 `system-uptime-script-1.sh` 设置可执行权限。

```
$ chmod +x /opt/scripts/system-uptime-script-1.sh
```

最后添加一条 [cronjob](https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/) 来自动执行。它会在每天早上 7 点运行。

```
# crontab -e
0 7 * * * /bin/bash /opt/scripts/system-uptime-script-1.sh
```

**注意:** 你每天早上 7 点会收到一封电子邮件提醒,内容是前一天的详情。

你将收到类似下面的报告。

```
# cat /tmp/uptime-report-2.out
192.168.1.7:   30 days
192.168.1.9:   67 days
192.168.1.10:  130 days
```

---

via: <https://www.2daygeek.com/bash-script-generate-linux-system-uptime-reports/>

作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
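作为对上文脚本的补充,下面是一个带有连接超时与失败提示的更稳健的循环写法草稿。这只是一个示意:它沿用上文假设的 /tmp/servers.txt(每行一个服务器地址),超时秒数可按需调整。

```bash
#!/bin/bash
# 更稳健的运行时间收集循环草稿:避免个别服务器无响应时脚本长时间挂起
> /tmp/uptime-report.out

for host in $(cat /tmp/servers.txt)
do
    # -o ConnectTimeout=5:连接 5 秒未成功即放弃该主机
    if up=$(ssh -o ConnectTimeout=5 "$host" uptime 2>/dev/null)
    then
        echo "$host: $(echo "$up" | awk '{print $3,$4}' | sed 's/,//')"
    else
        echo "$host: unreachable"
    fi
done | column -t >> /tmp/uptime-report.out
```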
404
Not Found
null