id (int64, 2.05k–16.6k) | title (string, 5–75) | fromurl (string, 19–185) | date (timestamp[s]) | tags (sequence, 0–11) | permalink (string, 20–37) | content (string, 342–82.2k) | fromurl_status (int64, 200–526, ⌀) | status_msg (string, 339 classes) | from_content (string, 0–229k, ⌀) |
---|---|---|---|---|---|---|---|---|---|
8,773 | 响应式编程与响应式系统 | https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems | 2017-08-12T18:30:00 | [
"响应式"
] | https://linux.cn/article-8773-1.html |
>
> 在恒久的迷惑与过多期待的海洋中,登上一组简单响应式设计原则的小岛。
>
>
>

>
> 下载 Konrad Malawski 的免费电子书[《为什么选择响应式?企业应用中的基本原则》](http://www.oreilly.com/programming/free/why-reactive.csp?intcmp=il-webops-free-product-na_new_site_reactive_programming_vs_reactive_systems_text_cta),深入了解更多响应式技术的知识与好处。
>
>
>
自从 2013 年一起合作写了[《响应式宣言》](http://www.reactivemanifesto.org/)之后,我们看着响应式从一种几乎无人知晓的软件构建技术——当时只有少数几个公司的边缘项目使用了这一技术——最后成为<ruby> 中间件领域 <rt> middleware field </rt></ruby>大佬们全平台战略中的一部分。本文旨在定义和澄清响应式各个方面的概念,方法是比较在*响应式编程*风格下和把*响应式系统*视作一个紧密整体的设计方法下编写代码的不同之处。
### 响应式是一组设计原则
响应式技术目前成功的标志之一是“<ruby> 响应式 <rt> reactive </rt></ruby>”成为了一个热词,并且跟一些不同的事物与人联系在了一起——常常伴随着像“<ruby> 流 <rt> streaming </rt></ruby>”、“<ruby> 轻量级 <rt> lightweight </rt></ruby>”和“<ruby> 实时 <rt> real-time </rt></ruby>”这样的词。
举个例子:当我们看到一支运动队时(像棒球队或者篮球队),我们一般会把他们看成一个个单独个体的组合,但是当他们之间碰撞不出火花,无法像一个团队一样高效地协作时,他们就会输给一个“更差劲”的队伍。从这篇文章的角度来看,响应式是一组设计原则,一种关于系统架构与设计的思考方式,一种关于在一个分布式环境下,当<ruby> 实现技术 <rp> ( </rp> <rt> implementation techniques </rt> <rp> ) </rp></ruby>、工具和设计模式都只是一个更大系统的一部分时如何设计的思考方式。
这个例子展示了不经考虑地将一堆软件拼凑在一起——尽管单独来看,这些软件都很优秀——和响应式系统之间的不同。在一个响应式系统中,正是*不同<ruby> 组件 <rp> ( </rp> <rt> parts </rt> <rp> ) </rp></ruby>间的相互作用*让响应式系统如此不同,它使得不同组件能够独立地运作,同时又一致协作从而达到最终想要的结果。
*一个响应式系统* 是一种<ruby> 架构风格 <rp> ( </rp> <rt> architectural style </rt> <rp> ) </rp></ruby>,它允许许多独立的应用结合在一起成为一个单元,共同响应它们所处的环境,同时保留着对单元内其它应用的“感知”——这能够表现为它能够做到<ruby> 放大/缩小规模 <rp> ( </rp> <rt> scale up/down </rt> <rp> ) </rp></ruby>,负载平衡,甚至能够主动地执行这些步骤。
以响应式的风格(或者说,通过响应式编程)写一个软件是可能的;然而,那也不过是拼图中的一块罢了。虽然上面提到的各个方面似乎都足以称其为“响应式的”,但仅就它们自身而言,还不足以让一个*系统*成为响应式的。
当人们在软件开发与设计的语境下谈论“响应式”时,他们的意思通常是以下三者之一:
* 响应式系统(架构与设计)
* 响应式编程(声明式的、基于事件的)
* 函数响应式编程(FRP)
我们将调查这些做法与技术的意思,特别是前两个。更明确地说,我们会在使用它们的时候讨论它们,例如它们是怎么联系在一起的,从它们身上又能得到什么样的好处——特别是在为多核、云或移动架构搭建系统的情境下。
让我们先来说一说函数响应式编程吧,以及我们在本文后面不再讨论它的原因。
### 函数响应式编程(FRP)
<ruby> 函数响应式编程 <rt> Functional reactive programming </rt></ruby>,通常被称作 *FRP*,是最常被误解的。FRP 在二十年前就被 Conal Elliott [精确地定义过](http://conal.net/papers/icfp97/)了。但是最近这个术语却被错误地<sup> 脚注1</sup> 用来描述一些像 Elm、Bacon.js 的技术以及其它技术中的响应式插件(RxJava、Rx.NET、 RxJS)。许多的<ruby> 库 <rp> ( </rp> <rt> libraries </rt> <rp> ) </rp></ruby>声称他们支持 FRP,但他们实际上谈论的几乎都是*响应式编程*而非 FRP,因此我们不会再进一步讨论 FRP。
### 响应式编程
<ruby> 响应式编程 <rt> Reactive programming </rt></ruby>,不要把它跟*函数响应式编程*混淆了,它是异步编程下的一个子集,也是一种范式,在这种范式下,由新信息的<ruby> 有效性 <rp> ( </rp> <rt> availability </rt> <rp> ) </rp></ruby>推动逻辑的前进,而不是让<ruby> 一条执行线程 <rp> ( </rp> <rt> a thread-of-execution </rt> <rp> ) </rp></ruby>去推动<ruby> 控制流 <rp> ( </rp> <rt> control flow </rt> <rp> ) </rp></ruby>。
它能够把问题分解为多个独立的步骤,这些独立的步骤可以以异步且<ruby> 非阻塞 <rp> ( </rp> <rt> non-blocking </rt> <rp> ) </rp></ruby>的方式被执行,最后再组合在一起产生一条<ruby> 工作流 <rp> ( </rp> <rt> workflow </rt> <rp> ) </rp></ruby>——它的输入和输出可能是<ruby> 非绑定的 <rp> ( </rp> <rt> unbounded </rt> <rp> ) </rp></ruby>。
<ruby> <a href="http://www.reactivemanifesto.org/glossary#Asynchronous"> “异步地” </a> <rp> ( </rp> <rt> Asynchronous </rt> <rp> ) </rp></ruby>被牛津词典定义为“不在同一时刻存在或发生”,在我们的语境下,它意味着一条消息或者一个事件可发生在任何时刻,也有可能是在未来。这在响应式编程中是非常重要的一项技术,因为响应式编程允许[<ruby> 非阻塞式 <rp> ( </rp> <rt> non-blocking </rt> <rp> ) </rp></ruby>]的执行方式——执行线程在竞争一块共享资源时不会因为<ruby> 阻塞 <rp> ( </rp> <rt> blocking </rt> <rp> ) </rp></ruby>而陷入等待(为了防止执行线程在当前的工作完成之前执行任何其它操作),而是在共享资源被占用的期间转而去做其它工作。<ruby> 阿姆达尔定律 <rp> ( </rp> <rt> Amdahl's Law </rt> <rp> ) </rp></ruby> <sup> 脚注2</sup> 告诉我们,竞争是<ruby> 可伸缩性 <rp> ( </rp> <rt> scalability </rt> <rp> ) </rp></ruby>最大的敌人,所以一个响应式系统应当在极少数的情况下才不得不做阻塞工作。
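为了更直观地感受“非阻塞”执行的含义,下面给出一个极简的补充示例(并非原文内容,这里借用 Python 的 asyncio 仅作示意,`fetch` 等名称均为虚构):两个耗时操作并发进行,总耗时大约等于较慢的那一个,而不是两者之和。

```
import asyncio
import time

async def fetch(name, seconds):
    # 用 asyncio.sleep() 模拟一次耗时的 I/O,等待期间不会阻塞执行线程
    await asyncio.sleep(seconds)
    return name

async def main():
    start = time.perf_counter()
    # 两个“请求”并发进行,线程在等待时可以转而去做其它工作
    results = await asyncio.gather(fetch("a", 1), fetch("b", 2))
    print(results, "耗时约", round(time.perf_counter() - start, 1), "秒")  # 约 2 秒,而非 3 秒

asyncio.run(main())
```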
响应式编程一般是<ruby> 事件驱动 <rp> ( </rp> <rt> event-driven </rt> <rp> ) </rp></ruby>,相比之下,响应式系统则是<ruby> 消息驱动 <rp> ( </rp> <rt> message-driven </rt> <rp> ) </rp></ruby>的——事件驱动与消息驱动之间的差别会在文章后面阐明。
响应式编程库的应用程序接口(API)一般是以下二者之一:
* <ruby> 基于回调的 <rp> ( </rp> <rt> Callback-based </rt> <rp> ) </rp></ruby>—匿名的<ruby> 间接作用 <rp> ( </rp> <rt> side-effecting </rt> <rp> ) </rp></ruby>回调函数被绑定在<ruby> 事件源 <rp> ( </rp> <rt> event sources </rt> <rp> ) </rp></ruby>上,当事件被放入<ruby> 数据流 <rp> ( </rp> <rt> dataflow chain </rt> <rp> ) </rp></ruby>中时,回调函数被调用。
* <ruby> 声明式的 <rp> ( </rp> <rt> Declarative </rt> <rp> ) </rp></ruby>——通过函数的组合,通常是使用一些固定的函数,像 *map*、 *filter*、 *fold* 等等。
大部分的库会混合这两种风格,一般还带有<ruby> 基于流 <rp> ( </rp> <rt> stream-based </rt> <rp> ) </rp></ruby>的<ruby> 操作符 <rp> ( </rp> <rt> operators </rt> <rp> ) </rp></ruby>,像 windowing、 counts、 triggers。
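下面用一小段补充的 Python 代码示意这两种 API 风格的差别(并非原文内容,`on_new_link`、`emit` 等名称均为虚构):前者把带副作用的回调注册到事件源上,后者用 *map*、*filter* 这类固定函数做声明式组合。

```
# 基于回调的风格:把(带有副作用的)匿名回调函数绑定到事件源上
handlers = []

def on_new_link(callback):
    handlers.append(callback)

def emit(link):
    for callback in handlers:
        callback(link)

on_new_link(lambda link: print("收到新链接:", link))
emit("https://linux.cn")

# 声明式的风格:通过 map、filter 等固定函数的组合来描述数据变换
values = [1, 2, 3, 4, 5]
result = list(map(lambda x: x * 10, filter(lambda x: x % 2 == 0, values)))
print(result)  # [20, 40]
```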
说响应式编程跟<ruby> <a href="https://en.wikipedia.org/wiki/Dataflow_programming"> 数据流编程 </a> <rp> ( </rp> <rt> dataflow programming </rt> <rp> ) </rp></ruby>有关是很合理的,因为它强调的是*数据流*而不是*控制流*。
举几个为这种编程技术提供支持的编程抽象概念:
* [Futures/Promises](https://en.wikipedia.org/wiki/Futures_and_promises)——一个值的容器,具有<ruby> 读共享/写独占 <rp> ( </rp> <rt> many-read/single-write </rt> <rp> ) </rp></ruby>的语义,即使变量尚不可用也能够添加异步的值转换操作(列表之后附有一个简单示例)。
* <ruby> 流 <rp> ( </rp> <rt> streams </rt> <rp> ) </rp></ruby> - [响应式流](http://reactive-streams.org/)——无限制的数据处理流,支持异步,非阻塞式,支持多个源与目的的<ruby> 反压转换管道 <rp> ( </rp> <rt> back-pressured transformation pipelines </rt> <rp> ) </rp></ruby>。
* [数据流变量](https://en.wikipedia.org/wiki/Oz_(programming_language)#Dataflow_variables_and_declarative_concurrency)——依赖于输入、<ruby> 过程 <rp> ( </rp> <rt> procedures </rt> <rp> ) </rp></ruby>或者其它单元的<ruby> 单赋值变量 <rp> ( </rp> <rt> single assignment variables </rt> <rp> ) </rp></ruby>(存储单元),它能够自动更新值的改变。其中一个应用例子是表格软件——一个单元的值的改变会像涟漪一样荡开,影响到所有依赖于它的函数,顺流而下地使它们产生新的值。
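以 Futures/Promises 为例,下面是一个基于 Python 标准库 `concurrent.futures` 的简单补充示例(并非原文内容,`slow_square` 为虚构的函数名):即使结果尚未就绪,也可以先登记好“拿到结果之后要做的事”。

```
from concurrent.futures import ThreadPoolExecutor
import time

def slow_square(x):
    time.sleep(1)   # 模拟一个耗时的计算
    return x * x

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_square, 6)
    # 值还没有就绪,但已经可以给这个 future 挂上后续操作
    future.add_done_callback(lambda f: print("结果是:", f.result()))
    print("主流程无需原地等待,可以继续做其它事情")
```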
在 JVM 中,支持响应式编程的流行库有 Akka Streams、Ratpack、Reactor、RxJava 和 Vert.x 等等。这些库实现了[响应式流](http://reactive-streams.org/)规范,它是 JVM 上响应式编程库之间的<ruby> 互通标准 <rp> ( </rp> <rt> standard for interoperability </rt> <rp> ) </rp></ruby>,并且根据它自身的叙述是“……一个为如何处理非阻塞式反压异步流提供标准的倡议”。
响应式编程的基本好处是:提高多核和多 CPU 硬件的计算资源利用率;根据阿姆达尔定律以及引申的 <ruby> Günther 的通用可伸缩性定律 <rp> ( </rp> <rt> Günther’s Universal Scalability Law </rt> <rp> ) </rp></ruby> <sup> 脚注3</sup> ,通过减少<ruby> 序列化点 <rp> ( </rp> <rt> serialization points </rt> <rp> ) </rp></ruby>来提高性能。
另一个好处是开发者生产效率,传统的编程范式都尽力想提供一个简单直接的可持续的方法来处理异步非阻塞式计算和 I/O。在响应式编程中,因活动(active)组件之间通常不需要明确的协作,从而也就解决了其中大部分的挑战。
响应式编程真正的发光点在于组件的创建跟工作流的组合。为了在异步执行上取得最大的优势,把<ruby> <a href="http://www.reactivemanifesto.org/glossary#Back-Pressure"> 反压 </a> <rp> ( </rp> <rt> back-pressure </rt> <rp> ) </rp></ruby>加进来是很重要,这样能避免过度使用,或者确切地说,避免无限度的消耗资源。
尽管如此,响应式编程在搭建现代软件上仍然非常有用,为了在更高层次上<ruby> 理解 <rp> ( </rp> <rt> reason about </rt> <rp> ) </rp></ruby>一个系统,那么必须要使用到另一个工具:<ruby> 响应式架构 <rt> reactive architecture </rt></ruby>——设计响应式系统的方法。此外,要记住编程范式有很多,而响应式编程仅仅只是其中一个,所以如同其它工具一样,响应式编程并不是万金油,它不意图适用于任何情况。
### 事件驱动 vs. 消息驱动
如上面提到的,响应式编程——专注于短时间的数据流链条上的计算——因此倾向于*事件驱动*,而响应式系统——关注于通过分布式系统的通信和协作所得到的弹性和韧性——则是[*消息驱动的*](http://www.reactivemanifesto.org/glossary#Message-Driven) <sup> 脚注4</sup>(或者称之为 <ruby> 消息式 <rp> ( </rp> <rt> messaging </rt> <rp> ) </rp></ruby> 的)。
一个拥有<ruby> 长期存活的可寻址 <rp> ( </rp> <rt> long-lived addressable </rt> <rp> ) </rp></ruby>组件的消息驱动系统跟一个事件驱动的数据流驱动模型的不同在于,消息具有固定的导向,而事件则没有。消息会有明确的(一个)去向,而事件则只是一段等着被<ruby> 观察 <rp> ( </rp> <rt> observe </rt> <rp> ) </rp></ruby>的信息。另外,<ruby> 消息式 <rp> ( </rp> <rt> messaging </rt> <rp> ) </rp></ruby>更适用于异步,因为消息的发送与接收和发送者和接收者是分离的。
响应式宣言中的术语表定义了两者之间[概念上的不同](http://www.reactivemanifesto.org/glossary#Message-Driven):
>
> 一条消息就是一则被送往一个明确目的地的数据。一个事件则是达到某个给定状态的组件发出的一个信号。在一个消息驱动系统中,可寻址到的接收者等待消息的到来然后响应它,否则保持休眠状态。在一个事件驱动系统中,通知的监听者被绑定到消息源上,这样当消息被发出时它就会被调用。这意味着一个事件驱动系统专注于可寻址的事件源而消息驱动系统专注于可寻址的接收者。
>
>
>
分布式系统需要通过消息在网络上传输进行交流,以实现其沟通基础,与之相反,事件的发出则是本地的。在底层通过发送包裹着事件的消息来搭建跨网络的事件驱动系统的做法很常见。这样能够维持在分布式环境下事件驱动编程模型的相对简易性,并且在某些特殊的和合理的范围内的使用案例上工作得很好。
然而,这是有利有弊的:在编程模型的抽象性和简易性上得一分,在控制上就减一分。消息强迫我们去拥抱分布式系统的真实性和一致性——像<ruby> 局部错误 <rp> ( </rp> <rt> partial failures </rt> <rp> ) </rp></ruby>,<ruby> 错误侦测 <rp> ( </rp> <rt> failure detection </rt> <rp> ) </rp></ruby>,<ruby> 丢弃/复制/重排序 <rp> ( </rp> <rt> dropped/duplicated/reordered </rt> <rp> ) </rp></ruby>消息,最后还有一致性,管理多个并发真实性等等——然后直面它们,去处理它们,而不是像过去无数次一样,藏在一个蹩脚的抽象面罩后——假装网络并不存在(例如EJB、 [RPC](https://christophermeiklejohn.com/pl/2016/04/12/rpc.html)、 [CORBA](https://queue.acm.org/detail.cfm?id=1142044) 和 [XA](https://cs.brown.edu/courses/cs227/archives/2012/papers/weaker/cidr07p15.pdf))。
这些在语义学和适用性上的不同在应用设计中有着深刻的含义,包括分布式系统的<ruby> 复杂性 <rp> ( </rp> <rt> complexity </rt> <rp> ) </rp></ruby>中的 <ruby> 弹性 <rp> ( </rp> <rt> resilience </rt> <rp> ) </rp></ruby>、 <ruby> 韧性 <rp> ( </rp> <rt> elasticity </rt> <rp> ) </rp></ruby>、<ruby> 移动性 <rp> ( </rp> <rt> mobility </rt> <rp> ) </rp></ruby>、<ruby> 位置透明性 <rp> ( </rp> <rt> location transparency </rt> <rp> ) </rp></ruby>和 <ruby> 管理 <rp> ( </rp> <rt> management </rt> <rp> ) </rp></ruby>,这些在文章后面再进行介绍。
在一个响应式系统中,特别是使用了响应式编程技术的,这样的系统中就既有事件也有消息——一个是用于沟通的强大工具(消息),而另一个则呈现现实(事件)。
### 响应式系统和架构
*响应式系统* —— 如同在《响应式宣言》中定义的那样——是一组用于搭建现代系统——已充分准备好满足如今应用程序所面对的不断增长的需求的现代系统——的架构设计原则。
响应式系统的原则绝对不是什么新东西,它可以被追溯到 70 和 80 年代 Jim Gray 和 Pat Helland 在<ruby> <a href="http://www.hpl.hp.com/techreports/tandem/TR-86.2.pdf"> 串级系统 </a> <rp> ( </rp> <rt> Tandem System </rt> <rp> ) </rp></ruby>上和 Joe Armstrong 和 Robert Virding 在 [Erlang](http://erlang.org/download/armstrong_thesis_2003.pdf) 上做出的重大工作。然而,这些人在当时都超越了时代,只有到了最近 5 - 10 年,技术行业才不得不反思当前企业系统最好的开发实践活动并且学习如何将来之不易的响应式原则应用到今天这个多核、云计算和物联网的世界中。
响应式系统的基石是<ruby> 消息传递 <rp> ( </rp> <rt> message-passing </rt> <rp> ) </rp></ruby>,消息传递为两个组件之间创建一条暂时的边界,使得它们能够在 *时间* 上分离——实现并发性——和 <ruby> 空间 <rp> ( </rp> <rt> space </rt> <rp> ) </rp></ruby> ——实现<ruby> 分布式 <rp> ( </rp> <rt> distribution </rt> <rp> ) </rp></ruby>与<ruby> 移动性 <rp> ( </rp> <rt> mobility </rt> <rp> ) </rp></ruby>。这种分离是两个组件完全<ruby> <a href="http://www.reactivemanifesto.org/glossary#Isolation"> 隔离 </a> <rp> ( </rp> <rt> isolation </rt> <rp> ) </rp></ruby>以及实现<ruby> 弹性 <rp> ( </rp> <rt> resilience </rt> <rp> ) </rp></ruby>和<ruby> 韧性 <rp> ( </rp> <rt> elasticity </rt> <rp> ) </rp></ruby>基础的必需条件。
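下面用一个非常简化的补充示例(并非原文内容,仅用 Python 的线程和队列示意概念;真实的响应式系统中组件往往还分布在不同进程和机器上)来说明“通过消息传递实现时间上的解耦”:发送方把消息投入信箱后立即返回,接收方在自己方便的时候取出处理。

```
import queue
import threading

mailbox = queue.Queue()

def receiver():
    while True:
        message = mailbox.get()   # 可寻址的接收者等待消息到来,否则保持休眠
        if message is None:       # 这里约定 None 表示结束
            break
        print("处理消息:", message)

worker = threading.Thread(target=receiver)
worker.start()

mailbox.put({"to": "billing", "body": "charge user 42"})  # 发送方投递后立即返回
print("发送者与接收者在时间上是解耦的")

mailbox.put(None)
worker.join()
```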
### 从程序到系统
这个世界的连通性正在变得越来越高。我们不再构建 *程序* ——为单个操作子来计算某些东西的端到端逻辑——而更多地在构建 *系统* 了。
系统从定义上来说是复杂的——每一部分都包含多个组件,每个组件的自身或其子组件也可以是一个系统——这意味着软件要正常工作已经越来越依赖于其它软件。
我们今天构建的系统会在多个计算机上操作,小型的或大型的,或少或多,相近的或远隔半个地球的。同时,由于人们的生活正变得越来越依赖于系统顺畅运行的有效性,用户的期望也变得越来越难以满足。
为了实现用户——和企业——能够依赖的系统,这些系统必须是 <ruby> 灵敏的 <rp> ( </rp> <rt> responsive </rt> <rp> ) </rp></ruby> ,因为如果在需要响应的时候得不到响应,那么即使某个东西能给出正确的响应也毫无意义。为了达到这一点,我们必须保证在错误( *弹性* )和负载( *韧性* )下,系统仍然能够保持灵敏性。为了实现这一点,我们把系统设计为 *消息驱动的* ,我们称其为 *响应式系统* 。
### 响应式系统的弹性
弹性是与 *错误下* 的<ruby> 灵敏性 <rp> ( </rp> <rt> responsiveness </rt> <rp> ) </rp></ruby>有关的,它是系统内在的功能特性,是需要被设计的东西,而不是能够被动的加入系统中的东西。弹性是大于容错性的——弹性无关于<ruby> 故障退化 <rp> ( </rp> <rt> graceful degradation </rt> <rp> ) </rp></ruby>——虽然故障退化对于系统来说是很有用的一种特性——与弹性相关的是与从错误中完全恢复达到 *自愈* 的能力。这就需要组件的隔离以及组件对错误的包容,以免错误散播到其相邻组件中去——否则,通常会导致灾难性的连锁故障。
因此构建一个弹性的、<ruby> 自愈 <rp> ( </rp> <rt> self-healing </rt> <rp> ) </rp></ruby>系统的关键是允许错误被:容纳、具体化为消息,发送给其他(担当<ruby> 监管者 <rp> ( </rp> <rt> supervisors </rt> <rp> ) </rp></ruby>)的组件,从而能够在出错组件之外的安全环境中对错误进行管理。在这里,消息驱动是其促成因素:远离高度耦合的、脆弱的深层嵌套的同步调用链——大家长期以来要么学会忍受其煎熬,要么直接忽略它。解决的想法是将错误管理从调用链中分离出来,将客户端从处理服务端错误的责任中解放出来。
### 响应式系统的韧性
<ruby> <a href="http://www.reactivemanifesto.org/glossary#Elasticity"> 韧性 </a> <rp> ( </rp> <rt> Elasticity </rt> <rp> ) </rp></ruby>是关于 *负载下的<ruby> 灵敏性 <rp> ( </rp> <rt> responsiveness </rt> <rp> ) </rp></ruby>* 的——意味着一个系统的吞吐量在资源增加或减少时能够自动地相应<ruby> 增加或减少 <rp> ( </rp> <rt> scales up or down </rt> <rp> ) </rp></ruby>(同样能够<ruby> 向内或外扩展 <rp> ( </rp> <rt> scales in or out </rt> <rp> ) </rp></ruby>)以满足不同的需求。这是利用云计算承诺的特性所必需的因素:使系统利用资源更加有效,成本效益更佳,对环境友好以及实现按次付费。
系统必须能够在不重写甚至不重新配置的情况下,适应性地完成自动伸缩(无需人工介入)、状态与行为的复制、通信的负载均衡、<ruby> 故障转移 <rp> ( </rp> <rt> failover </rt> <rp> ) </rp></ruby>,以及升级。实现这些的就是 <ruby> 位置透明性 <rp> ( </rp> <rt> location transparency </rt> <rp> ) </rp></ruby>:使用同一个方法,同样的编程抽象,同样的语义,在所有向度中<ruby> 伸缩 <rp> ( </rp> <rt> scaling </rt> <rp> ) </rp></ruby>系统的能力——从 CPU 核心到数据中心。
如同《响应式宣言》所述:
>
> 一个极大地简化问题的关键洞见在于意识到我们都在使用分布式计算。无论我们的系统是运行在一个单一结点上(拥有多个独立的 CPU,并通过 QPI 链接进行交流),还是在一个<ruby> 节点集群 <rp> ( </rp> <rt> cluster of nodes </rt> <rp> ) </rp></ruby>(独立的机器,通过网络进行交流)上,都是如此。拥抱这个事实意味着在垂直方向上多核的伸缩与在水平方面上集群的伸缩并无概念上的差异。在空间上的解耦 [...],是通过异步消息传送以及运行时实例与其引用解耦从而实现的,这就是我们所说的位置透明性。
>
>
>
因此,不论接收者在哪里,我们都以同样的方式与它交流。唯一能够在语义上等同实现的方式是消息传送。
### 响应式系统的生产效率
既然大多数的系统生来即是复杂的,那么其中一个最重要的点即是保证一个系统架构在组件的开发和维护上尽可能少地损耗生产效率,同时将运维上的 <ruby> 偶发复杂性 <rp> ( </rp> <rt> accidental complexity </rt> <rp> ) </rp></ruby>降到最低。
这一点很重要,因为在一个系统的生命周期中——如果系统的设计不正确——系统的维护会变得越来越困难,理解、定位和解决问题所需要花费时间和精力会不断地上涨。
响应式系统是我们所知的最具 *生产效率* 的系统架构(在多核、云及移动架构的背景下):
* 错误的隔离为组件与组件之间裹上[舱壁](http://skife.org/architecture/fault-tolerance/2009/12/31/bulkheads.html)(LCTT 译注:当船遭到损坏进水时,舱壁能够防止水从损坏的船舱流入其他船舱),防止引发连锁错误,从而限制住错误的波及范围以及严重性。
* 监管者的层级制度提供了多个等级的防护,搭配以自我修复能力,免除了大量<ruby> 瞬时故障 <rp> ( </rp> <rt> transient failures </rt> <rp> ) </rp></ruby>原本需要人工排查(investigate)所带来的运维<ruby> 代价 <rp> ( </rp> <rt> cost </rt> <rp> ) </rp></ruby>。
* 消息传送和位置透明性允许组件被卸载下线、代替或<ruby> 重新布线 <rp> ( </rp> <rt> rerouted </rt> <rp> ) </rp></ruby>同时不影响终端用户的使用体验,并降低中断的代价、它们的相对紧迫性以及诊断和修正所需的资源。
* 复制减少了数据丢失的风险,并减轻了故障对数据<ruby> 检索 <rp> ( </rp> <rt> retrieval </rt> <rp> ) </rp></ruby>和存储可用性的影响。
* 韧性允许在使用率波动时节省资源:在负载很低时,最小化运营开销;在负载增加时,最小化因伸缩能力不足而导致<ruby> 运行中断 <rp> ( </rp> <rt> outage </rt> <rp> ) </rp></ruby>或需要<ruby> 紧急投入 <rp> ( </rp> <rt> urgent investment </rt> <rp> ) </rp></ruby>的风险。
因此,响应式系统使我们能够创建出可以很好地应对错误和随时间变化的负载的系统——同时还能长期保持较低的运营成本。
### 响应式编程与响应式系统的关联
响应式编程是一种管理<ruby> 内部逻辑 <rp> ( </rp> <rt> internal logic </rt> <rp> ) </rp></ruby>和<ruby> 数据流转换 <rp> ( </rp> <rt> dataflow transformation </rt> <rp> ) </rp></ruby>的好技术,在本地的组件中,做为一种优化代码清晰度、性能以及资源利用率的方法。响应式系统,是一组架构上的原则,旨在强调分布式信息交流并为我们提供一种处理分布式系统弹性与韧性的工具。
只使用响应式编程常遇到的一个问题,是一个事件驱动的基于回调的或声明式的程序中两个计算阶段的<ruby> 高度耦合 <rp> ( </rp> <rt> tight coupling </rt> <rp> ) </rp></ruby>,使得 *弹性* 难以实现,因为此时它的转换链通常存活时间短,并且它的各个阶段——回调函数或<ruby> 组合子 <rp> ( </rp> <rt> combinator </rt> <rp> ) </rp></ruby>——是匿名的,也就是不可寻址的。
这意味着,它通常在内部处理成功与错误的状态而不会向外界发送相应的信号。这种寻址能力的缺失导致单个<ruby> 阶段 <rp> ( </rp> <rt> stages </rt> <rp> ) </rp></ruby>很难恢复,因为它通常并不清楚异常应该,甚至不清楚异常可以,发送到何处去。
另一个与响应式系统方法的不同之处在于单纯的响应式编程允许 *时间* 上的<ruby> 解耦 <rp> ( </rp> <rt> decoupling </rt> <rp> ) </rp></ruby>,但不允许 *空间* 上的(除非是如上面所述的,在底层通过网络传送消息来<ruby> 分发 <rp> ( </rp> <rt> distribute </rt> <rp> ) </rp></ruby>数据流)。正如叙述的,在时间上的解耦使 *并发性* 成为可能,但是是空间上的解耦使 <ruby> 分布 <rp> ( </rp> <rt> distribution </rt> <rp> ) </rp></ruby>和<ruby> 移动性 <rp> ( </rp> <rt> mobility </rt> <rp> ) </rp></ruby> (使得不仅仅静态拓扑可用,还包括了动态拓扑)成为可能的——而这些正是 *韧性* 所必需的要素。
位置透明性的缺失使得很难以韧性方式对一个基于适应性响应式编程技术的程序进行向外扩展,因为这样就需要附加的工具,例如<ruby> 消息总线 <rp> ( </rp> <rt> message bus </rt> <rp> ) </rp></ruby>,<ruby> 数据网格 <rp> ( </rp> <rt> data grid </rt> <rp> ) </rp></ruby>或者在顶层的<ruby> 定制网络协议 <rp> ( </rp> <rt> bespoke network protocol </rt> <rp> ) </rp></ruby>。而这点正是响应式系统的消息驱动编程的闪光的地方,因为它是一个包含了其编程模型和所有伸缩向度语义的交流抽象概念,因此降低了复杂性与认知超载。
对于基于回调的编程,常会被提及的一个问题是写这样的程序或许相对来说会比较简单,但最终会引发一些真正的后果。
例如,对于基于匿名回调的系统,当你想理解它们,维护它们或最重要的是在<ruby> 生产供应中断 <rp> ( </rp> <rt> production outages </rt> <rp> ) </rp></ruby>或错误行为发生时,你想知道到底发生了什么、发生在哪以及为什么发生,但此时它们只提供极少的内部信息。
为响应式系统设计的库与平台(例如 [Akka](http://akka.io/) 项目和 [Erlang](https://www.erlang.org/) 平台)学到了这一点,它们依赖于那些更容易理解的长期存活的可寻址组件。当错误发生时,根据导致错误的消息可以找到唯一的组件。当可寻址的概念存在组件模型的核心中时,<ruby> 监控方案 <rp> ( </rp> <rt> monitoring solution </rt> <rp> ) </rp></ruby>就有了一个 *有意义* 的方式来呈现它收集的数据——利用<ruby> 传播 <rp> ( </rp> <rt> propagated </rt> <rp> ) </rp></ruby>的身份标识。
一个好的编程范式的选择,一个选择实现像可寻址能力和错误管理这些东西的范式,已经被证明在生产中是无价的,因它在设计中承认了现实并非一帆风顺,*接受并拥抱错误的出现* 而不是毫无希望地去尝试避免错误。
总而言之,响应式编程是一个非常有用的实现技术,可以用在响应式架构当中。但是记住这只能帮助管理一部分:异步且非阻塞执行下的数据流管理——通常只在单个结点或服务中。当有多个结点时,就需要开始认真地考虑像<ruby> 数据一致性 <rp> ( </rp> <rt> data consistency </rt> <rp> ) </rp></ruby>、<ruby> 跨结点沟通 <rp> ( </rp> <rt> cross-node communication </rt> <rp> ) </rp></ruby>、<ruby> 协调 <rp> ( </rp> <rt> coordination </rt> <rp> ) </rp></ruby>、<ruby> 版本控制 <rp> ( </rp> <rt> versioning </rt> <rp> ) </rp></ruby>、<ruby> 编制 <rp> ( </rp> <rt> orchestration </rt> <rp> ) </rp></ruby>、<ruby> 错误管理 <rp> ( </rp> <rt> failure management </rt> <rp> ) </rp></ruby>、<ruby> 关注与责任 <rp> ( </rp> <rt> concerns and responsibilities </rt> <rp> ) </rp></ruby>分离等等的东西——也即是:系统架构。
因此,要最大化响应式编程的价值,就把它作为构建响应式系统的工具来使用。构建一个响应式系统需要的不仅是在一个已存在的遗留下来的<ruby> 软件栈 <rp> ( </rp> <rt> software stack </rt> <rp> ) </rp></ruby>上抽象掉特定的操作系统资源和少量的异步 API 和<ruby> <a href="http://martinfowler.com/bliki/CircuitBreaker.html"> 断路器 </a> <rp> ( </rp> <rt> circuit breakers </rt> <rp> ) </rp></ruby>。此时应该拥抱你在创建一个包含多个服务的分布式系统这一事实——这意味着所有东西都要共同合作,提供一致性与灵敏的体验,而不仅仅是如预期工作,但同时还要在发生错误和不可预料的负载下正常工作。
### 总结
企业和中间件供应商在目睹了应用响应式所带来的企业利润增长后,同样开始拥抱响应式。在本文中,我们把响应式系统做为企业最终目标进行描述——假设了多核、云和移动架构的背景——而响应式编程则从中担任重要工具的角色。
响应式编程在内部逻辑及数据流转换的组件层次上为开发者提高了生产率——通过性能与资源的有效利用实现。而响应式系统在构建 <ruby> 原生云 <rp> ( </rp> <rt> cloud native </rt> <rp> ) </rp></ruby> 和其它大型分布式系统的系统层次上为架构师及 DevOps 从业者提高了生产率——通过弹性与韧性。我们建议在响应式系统设计原则中结合响应式编程技术。
### 脚注
>
> 1. 参考 Conal Elliott,FRP 的发明者,见[这个演示](https://begriffs.com/posts/2015-07-22-essence-of-frp.html)。
> 2. [Amdahl 定律](https://en.wikipedia.org/wiki/Amdahl%2527s_law)揭示了系统理论上的加速会被一系列的子部件限制,这意味着系统在新的资源加入后会出现<ruby> 收益递减 <rp> ( </rp> <rt> diminishing returns </rt> <rp> ) </rp></ruby>。
> 3. Neil Günter 的<ruby> <a href="http://www.perfdynamics.com/Manifesto/USLscalability.html"> 通用可伸缩性定律 </a> <rp> ( </rp> <rt> Universal Scalability Law </rt> <rp> ) </rp></ruby>是理解并发与分布式系统的竞争与协作的重要工具,它揭示了当新资源加入到系统中时,保持一致性的开销会导致不好的结果。
> 4. 消息可以是同步的(要求发送者和接受者同时存在),也可以是异步的(允许他们在时间上解耦)。其语义上的区别超出本文的讨论范围。
>
>
>
---
via: <https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems>
作者:[Jonas Bonér](https://www.oreilly.com/people/e0b57-jonas-boner), [Viktor Klang](https://www.oreilly.com/people/f96106d4-4ce6-41d9-9d2b-d24590598fcd) 译者:[XLCYun](https://github.com/XLCYun) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,774 | 一篇缺失的 TypeScript 介绍 | https://toddmotto.com/typescript-the-missing-introduction | 2017-08-13T13:24:07 | [
"JavaScript",
"TypeScript"
] | https://linux.cn/article-8774-1.html | 
**下文是 James Henry([@MrJamesHenry](https://twitter.com/MrJamesHenry))所提交的内容。我是 ESLint 核心团队的一员,也是 TypeScript 布道师。我正在和 Todd 在 [UltimateAngular](https://ultimateangular.com/courses) 平台上合作发布 Angular 和 TypeScript 的精品课程。**
>
> 本文的主旨是为了介绍我们是如何看待 TypeScript 的以及它在加强 JavaScript 开发中所起的作用。
>
>
> 我们也将尽可能地给出那些类型和编译方面的那些时髦词汇的准确定义。
>
>
>
TypeScript 强大之处远远不止这些,本篇文章无法涵盖,想要了解更多请阅读[官方文档](http://www.typescriptlang.org/docs),或者学习 [UltimateAngular 上的 TypeScript 课程](https://ultimateangular.com/courses#typescript) ,从初学者成为一位 TypeScript 高手。
### 背景
TypeScript 是个出乎意料强大的工具,而且它真的很容易掌握。
然而,TypeScript 可能比 JavaScript 要更为复杂一些,因为 TypeScript 可能向我们同时引入了一系列以前没有考虑过的 JavaScript 程序相关的技术概念。
每当我们谈论到类型、编译器等这些概念的时候,你会发现很快就会变得不知所云起来。
这篇文章就是一篇为你解释这些必须了解、却常常令人困惑的概念,帮助你快速入门 TypeScript 的教程,可以让你轻松自如地应对这些概念。
### 关键知识的掌握
在 Web 浏览器中运行我们的代码这件事或许使我们对它是如何工作的产生一些误解,“它不用经过编译,是吗?”,“我敢肯定这里面是没有类型的...”
更有意思的是,上述的说法既是正确的也是不正确的,这取决于上下文环境和我们是如何定义这些概念的。
首先,我们要作的是明确这些。
#### JavaScript 是解释型语言还是编译型语言?
传统意义上,程序员经常将自己的程序编译之后运行出结果就认为这种语言是编译型语言。
>
> 从初学者的角度来说,编译的过程就是将我们自己编辑好的高级语言程序转换成机器实际运行的格式。
>
>
>
就像 Go 语言,可以使用 `go build` 的命令行工具编译 .go 的文件,将其编译成代码的低级形式,它可以直接执行、运行。
```
# We manually compile our .go file into something we can run
# using the command line tool "go build"
go build ultimate-angular.go
# ...then we execute it!
./ultimate-angular
```
作为一个 JavaScript 程序员(这一刻,请先忽略我们对新一代构建工具和模块加载程序的热爱),我们在日常的 JavaScript 开发中并没有编译的这一基本步骤,
我们写一些 JavaScript 代码,把它放在浏览器的 `<script>` 标签中,它就能运行了(或者在服务端环境运行,比如:node.js)。
**好吧,因此 JavaScript 没有进行过编译,那它一定是解释型语言了,是吗?**
实际上,我们能够确定的一点是,JavaScript 不是我们自己编译的,现在让我们简单的回顾一个简单的解释型语言的例子,再来谈 JavaScript 的编译问题。
>
> 解释型计算机语言的执行的过程就像人们看书一样,从上到下、一行一行的阅读。
>
>
>
我们所熟知的解释型语言的典型例子是 bash 脚本。我们终端中的 bash 解释器逐行读取我们的命令并且执行它。
现在我们回到 JavaScript 是解释执行还是编译执行的讨论中,我们要将逐行读取和逐行执行程序分开理解(对“解释型”的简单理解),不要混在一起。
以此代码为例:
```
hello();
function hello(){
    console.log("Hello")
}
```
这是真正意义上 JavaScript 输出 Hello 单词的程序代码,但是,`hello()` 这个函数在我们定义它之前就已经被使用了,这是简单的逐行执行办不到的,因为 `hello()` 在第一行没有任何意义,直到我们在第二行声明了它。
像这样的情况在 JavaScript 中是存在的,因为我们的代码实际上在执行之前就被所谓的“JavaScript 引擎”或者是“特定的编译环境”编译过,这个编译的过程取决于具体的实现(比如,使用 V8 引擎的 node.js 和 Chrome 就和使用 SpiderMonkey 的 FireFox 有所不同)。
在这里,我们不会再进一步讲解编译型执行和解释型执行的微妙之处(这里的定义已经很好了)。
>
> 请务必记住,我们编写的 JavaScript 代码已经不是我们的用户实际执行的代码了,即使是我们简单地将其放在 HTML 中的 `<script>` ,也是不一样的。
>
>
>
#### 运行时间 VS 编译时间
现在我们已经正确理解了编译和运行是两个不同的阶段,那“<ruby> 运行阶段 <rt> Run Time </rt></ruby>”和“<ruby> 编译阶段 <rt> Compile Time </rt></ruby>”理解起来也就容易多了。
编译阶段,就是我们在我们的编辑器或者 IDE 当中的代码转换成其它格式的代码的阶段。
运行阶段,就是我们程序实际执行的阶段,例如:上面的 `hello()` 函数就执行在“运行阶段”。
#### TypeScript 编译器
现在我们了解了程序的生命周期中的关键阶段,接下来我们可以介绍 TypeScript 编译器了。
TypeScript 编译器是帮助我们编写代码的关键。比如,我们不需将 JavaScript 代码包含到 `<script>` 标签当中,只需要通过 TypeScript 编译器传递它,就可以在运行程序之前得到改进程序的建议。
>
> 我们可以将这个新的步骤作为我们自己的个人“编译阶段”,这将在我们的程序抵达 JavaScript 主引擎之前,确保我们的程序是以我们预期的方式编写的。
>
>
>
它与上面 Go 语言的实例类似,但是 TypeScript 编译器只是基于我们编写程序的方式提供提示信息,并不会将其转换成低级的可执行文件,它只会生成纯 JavaScript 代码。
```
# One option for passing our source .ts file through the TypeScript
# compiler is to use the command line tool "tsc"
tsc ultimate-angular.ts
# ...this will produce a .js file of the same name
# i.e. ultimate-angular.js
```
在[官方文档](http://www.typescriptlang.org/docs)中,有许多关于将 TypeScript 编译器以各种方式融入到你的现有工作流程中的文章。这些已经超出本文范围。
#### 动态类型与静态类型
就像对比编译程序与解释程序一样,动态类型与静态类型的对比在现有的资料中也是极其模棱两可的。
让我们先回顾一下我们在 JavaScript 中对于类型的理解。
我们的代码如下:
```
var name = 'James';
var sum = 1 + 2;
```
我们如何给别人描述这段代码?
“我们声明了一个变量 `name`,它被分配了一个 “James” 的**字符串**,然后我们又声明了一个变量 `sum`,它被分配了一个**数字** 1 和**数字** 2 的求和的数值结果。”
即使在这样一个简单的程序中,我们也使用了两个 JavaScript 的基本类型:`String` 和 `Number`。
就像上面我们讲编译一样,我们不会陷入编程语言类型的学术细节当中,关键是要理解在 JavaScript 中类型表示的是什么,并扩展到 TypeScript 的类型的理解上。
从每夜拜读的最新 ECMAScript 规范中我们可以学到(LOL, JK - “wat’s an ECMA?”),它大量引用了 JavaScript 的类型及其用法。
直接引自官方规范:
>
> ECMAScript 语言的类型取决于使用 ECMAScript 语言的 ECMAScript 程序员所直接操作的值。
>
>
> ECMAScript 语言的类型有 Undefined、Null、Boolean、String、Symbol、Number 和 Object。
>
>
>
我们可以看到,JavaScript 语言有 7 种正式类型,其中我们在我们现在程序中使用了 6 种(Symbol 首次在 ES2015 中引入,也就是 ES6)。
现在我们来深入一点看上面的 JavaScript 代码中的 “name 和 sum”。
我们可以把我们当前被分配了字符串“James”的变量 `name` 重新赋值为我们的第二个变量 sum 的当前值,目前是数字 3。
```
var name = 'James';
var sum = 1 + 2;
name = sum;
```
该 `name` 变量开始“存有”一个字符串,但现在它“存有”一个数字。这凸显了 JavaScript 中变量和类型的基本特性:
“James” 值一直是字符串类型,而 `name` 变量可以分配任何类型的值。和 `sum` 赋值的情况相同,1 是一个数字类型,`sum` 变量可以分配任何可能的值。
>
> 在 JavaScript 中,值是具有类型的,而变量是可以随时保存任何类型的值。
>
>
>
这也恰好是一个“动态类型语言”的定义。
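(补充示例,非原文内容)Python 同样是动态类型语言,用内置的 `type()` 就能直观地看到“值有类型,而变量可以随时保存任何类型的值”:

```
name = 'James'
print(type(name))   # <class 'str'>

name = 1 + 2
print(type(name))   # <class 'int'> —— 同一个变量,此刻保存的是另一种类型的值
```

(顺带一提,Python 也支持可选的类型注解,配合 mypy 这类外部静态检查工具,可以获得与下文 TypeScript 类似的“检查阶段”体验,而运行时行为保持不变。)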
相比之下,我们可以将“静态类型语言”视为我们可以(也必须)将类型信息与特定变量相关联的语言:
```
var name: string = 'James';
```
在这段代码中,我们能够更好地显式声明我们对变量 `name` 的意图,我们希望它总是用作一个字符串。
你猜怎么着?我们刚刚看到我们的第一个 TypeScript 程序。
当我们<ruby> 反思 <rt> reflect </rt></ruby>我们自己的代码(非编程方面的双关语“反射”)时,我们可以得出这样的结论:即使我们使用动态语言(如 JavaScript),在几乎所有的情况下,当我们初次定义变量和函数参数时,我们应该有非常明确的使用意图。如果这些变量和参数被重新赋值为与我们原先赋值不同类型的值,那么有可能某些东西并不是我们预期的那样工作的。
>
> 作为 JavaScript 开发者,TypeScript 的静态类型注释给我们的一个巨大的帮助,它能够清楚地表达我们对变量的意图。
>
>
> 这种改进不仅有益于 TypeScript 编译器,还可以让我们的同事和将来的自己明白我们的代码。代码是用来读的。
>
>
>
### TypeScript 在我们的 JavaScript 工作流程中的作用
我们已经开始看到“为什么经常说 TypeScript 只是 JavaScript + 静态类型”的说法了。`: string` 对于我们的 `name` 变量就是我们所谓的“类型注释”,在编译时被使用(换句话说,当我们让代码通过 TypeScript 编译器时),以确保其余的代码符合我们原来的意图。
我们再来看看我们的程序,并添加显式注释,这次是我们的 `sum` 变量:
```
var name: string = 'James';
var sum: number = 1 + 2;
name = sum;
```
如果我们使用 TypeScript 编译器编译这个代码,我们现在就会收到一个在 `name = sum` 这行的错误: `Type 'number' is not assignable to type 'string'`,我们的这种“偷渡”被警告,我们执行的代码可能有问题。
>
> 重要的是,如果我们想要继续执行,我们可以选择忽略 TypeScript 编译器的错误,因为它只是在将 JavaScript 代码发送给我们的用户之前给我们反馈的工具。
>
>
>
TypeScript 编译器为我们输出的最终 JavaScript 代码将与上述原始源代码完全相同:
```
var name = 'James';
var sum = 1 + 2;
name = sum;
```
类型注释全部为我们自动删除了,现在我们可以运行我们的代码了。
>
> 注意:在此示例中,即使我们没有提供显式类型注释的 `: string` 和 `: number` ,TypeScript 编译器也可以为我们提供完全相同的错误 。
>
>
> TypeScript 通常能够从我们使用它的方式推断变量的类型!
>
>
>
#### 我们的源文件是我们的文档,TypeScript 是我们的拼写检查
对于 TypeScript 与我们的源代码的关系来说,一个很好的类比,就是拼写检查与我们在 Microsoft Word 中写的文档的关系。
这两个例子有三个关键的共同点:
1. **它能告诉我们写的东西的客观的、直接的错误:**
* *拼写检查*:“我们已经写了字典中不存在的字”
* *TypeScript*:“我们引用了一个符号(例如一个变量),它没有在我们的程序中声明”
2. **它可以提醒我们写的可能是错误的:**
* *拼写检查*:“该工具无法完全推断特定语句的含义,并建议重写”
* *TypeScript*:“该工具不能完全推断特定变量的类型,并警告不要这样使用它”
3. **我们的来源可以用于其原始目的,无论工具是否存在错误:**
* *拼写检查*:“即使您的文档有很多拼写错误,您仍然可以打印出来,并把它当成文档使用”
* *TypeScript*:“即使您的源代码具有 TypeScript 错误,它仍然会生成您可以执行的 JavaScript 代码”
### TypeScript 是一种可以启用其它工具的工具
TypeScript 编译器由几个不同的部分或阶段组成。我们将通过查看这些部分之一 The Parser(语法分析程序)来结束这篇文章,除了 TypeScript 已经为我们做的以外,它为我们提供了在其上构建其它开发工具的机会。
编译过程的“解析器步骤”的结果是所谓的抽象语法树,简称为 AST。
#### 什么是抽象语法树(AST)?
我们以普通文本形式编写我们的程序,因为这是我们人类与计算机交互的最好方式,让它们能够做我们想要的东西。我们并不是很擅长于手工编写复杂的数据结构!
然而,不管在哪种情况下,普通文本在编译器里面实际上是一个非常棘手的事情。它可能包含程序运作不必要的东西,例如空格,或者可能存在有歧义的部分。
因此,我们希望将我们的程序转换成数据结构,将数据结构全部映射为我们所使用的所谓“标记”,并将其插入到我们的程序中。
这个数据结构正是 AST!
AST 可以通过多种不同的方式表示,我使用 JSON 来看一看。
我们从这个极其简单的基本源代码来看:
```
var a = 1;
```
TypeScript 编译器的 Parser(语法分析程序)阶段的(简化后的)输出将是以下 AST:
```
{
  "pos": 0,
  "end": 10,
  "kind": 256,
  "text": "var a = 1;",
  "statements": [
    {
      "pos": 0,
      "end": 10,
      "kind": 200,
      "declarationList": {
        "pos": 0,
        "end": 9,
        "kind": 219,
        "declarations": [
          {
            "pos": 3,
            "end": 9,
            "kind": 218,
            "name": {
              "pos": 3,
              "end": 5,
              "text": "a"
            },
            "initializer": {
              "pos": 7,
              "end": 9,
              "kind": 8,
              "text": "1"
            }
          }
        ]
      }
    }
  ]
}
```
我们的 AST 中的对象称为节点。
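(补充示例,非原文内容)顺带一提,Python 也自带了类似的能力:标准库的 `ast` 模块可以把源代码解析成抽象语法树,我们可以在它之上构建自己的分析或重构工具,和上面 TypeScript 的做法是同一个思路。

```
import ast

# 把与上文等价的一行源代码解析成 AST
tree = ast.parse("a = 1")

# dump() 会打印出类似 Module(body=[Assign(...)], ...) 的节点结构,
# 具体字段因 Python 版本而异
print(ast.dump(tree))
```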
#### 示例:在 VS Code 中重命名符号
在内部,TypeScript 编译器将使用 Parser 生成的 AST 来提供一些非常重要的事情,例如,发生在编译程序时的类型检查。
但它不止于此!
>
> 我们可以使用 AST 在 TypeScript 之上开发自己的工具,如代码美化工具、代码格式化工具和分析工具。
>
>
>
建立在这个 AST 代码之上的工具的一个很好的例子是:<ruby> 语言服务器 <rt> Language Server </rt></ruby>。
深入了解语言服务器的工作原理超出了本文的范围,但是当我们编写程序时,它能为我们提供一个绝对重量级别功能,就是“重命名符号”。
假设我们有以下源代码:
```
// The name of the author is James
var first_name = 'James';
console.log(first_name);
```
经过代码审查和对完美的适当追求,我们决定应该改换我们的变量命名惯例;使用驼峰式命名方式,而不是我们当前正在使用的这种蛇式命名。
在我们的代码编辑器中,我们一直以来可以选择多个相同的文本,并使用多个光标来一次更改它们。

当我们把程序也视作文本这样继续操作时,我们已经陷入了一个典型的陷阱中。
那个注释中我们不想修改的“name”单词,在我们的手动匹配中却被误选中了。我们可以看到在现实世界的应用程序中这样更改代码是有多危险。
正如我们在上面学到的那样,像 TypeScript 这样的东西在幕后生成一个 AST 的时候,与我们的程序不再像普通文本那样可以交互,每个标记在 AST 中都有自己的位置,而且它有很清晰的映射关系。
当我们右键单击我们的 `first_name` 变量时,我们可以在 VS Code 中直接“重命名符号”(TypeScript 语言服务器插件也可用于其他编辑器)。

非常好!现在我们的 `first_name` 变量是唯一需要改变的东西,如果需要的话,这个改变甚至会发生在我们项目中的多个文件中(与导出和导入的值一样)!
### 总结
哦,我们在这篇文章中已经讲了很多的内容。
我们把有关学术方面的规避开,围绕编译器和类型还有很多专业术语给出了通俗的定义。
我们对比了编译语言与解释语言、运行阶段与编译阶段、动态类型与静态类型,以及抽象语法树(AST)如何为我们的程序构建工具提供了更为优化的方法。
重要的是,我们提供了 TypeScript 作为我们 JavaScript 开发工具的一种思路,以及如何在其上构建更棒的工具,比如说作为重构代码的一种方式的重命名符号。
快来 UltimateAngular 平台上学习从初学者到 TypeScript 高手的课程吧,开启你的学习之旅!
---
via: <https://toddmotto.com/typescript-the-missing-introduction>
作者:James Henry 译者:[MonkeyDEcho](https://github.com/MonkeyDEcho) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,775 | 如何在 CentOS 上安装 Apache Hadoop | https://www.unixmen.com.hijacked/setup-apache-hadoop-centos/ | 2017-08-13T21:05:34 | [
"Hadoop"
] | /article-8775-1.html | 
Apache Hadoop 软件库是一个框架,它允许使用简单的编程模型在计算机集群上对大型数据集进行分布式处理。Apache™ Hadoop® 是可靠、可扩展、分布式计算的开源软件。
该项目包括以下模块:
* Hadoop Common:支持其他 Hadoop 模块的常用工具。
* Hadoop 分布式文件系统 (HDFS™):分布式文件系统,可提供对应用程序数据的高吞吐量访问支持。
* Hadoop YARN:作业调度和集群资源管理框架。
* Hadoop MapReduce:一个基于 YARN 的大型数据集并行处理系统。
本文将帮助你逐步在 CentOS 上安装 hadoop 并配置单节点 hadoop 集群。
### 安装 Java
在安装 hadoop 之前,请确保你的系统上安装了 Java。使用此命令检查已安装 Java 的版本。
```
java -version
java version "1.7.0_75"
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)
```
要安装或更新 Java,请参考下面逐步的说明。
第一步是从 [Oracle 官方网站](http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html)下载最新版本的 java。
```
cd /opt/
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.tar.gz"
tar xzf jdk-7u79-linux-x64.tar.gz
```
需要设置使用更新版本的 Java 作为替代。使用以下命令来执行此操作。
```
cd /opt/jdk1.7.0_79/
alternatives --install /usr/bin/java java /opt/jdk1.7.0_79/bin/java 2
alternatives --config java
```
```
There are 3 programs which provide 'java'.
Selection Command
-----------------------------------------------
* 1 /opt/jdk1.7.0_60/bin/java
+ 2 /opt/jdk1.7.0_72/bin/java
3 /opt/jdk1.7.0_79/bin/java
Enter to keep the current selection[+], or type selection number: 3 [Press Enter]
```
现在你可能还需要使用 `alternatives` 命令设置 `javac` 和 `jar` 命令路径。
```
alternatives --install /usr/bin/jar jar /opt/jdk1.7.0_79/bin/jar 2
alternatives --install /usr/bin/javac javac /opt/jdk1.7.0_79/bin/javac 2
alternatives --set jar /opt/jdk1.7.0_79/bin/jar
alternatives --set javac /opt/jdk1.7.0_79/bin/javac
```
下一步是配置环境变量。使用以下命令正确设置这些变量。
设置 `JAVA_HOME` 变量:
```
export JAVA_HOME=/opt/jdk1.7.0_79
```
设置 `JRE_HOME` 变量:
```
export JRE_HOME=/opt/jdk1.7.0_79/jre
```
设置 `PATH` 变量:
```
export PATH=$PATH:/opt/jdk1.7.0_79/bin:/opt/jdk1.7.0_79/jre/bin
```
### 安装 Apache Hadoop
设置好 java 环境后。开始安装 Apache Hadoop。
第一步是创建用于 hadoop 安装的系统用户帐户。
```
useradd hadoop
passwd hadoop
```
现在你需要配置用户 `hadoop` 的 ssh 密钥。使用以下命令启用无需密码的 ssh 登录。
```
su - hadoop
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
exit
```
现在从官方网站 [hadoop.apache.org](https://hadoop.apache.org/) 下载 hadoop 最新的可用版本。
```
cd ~
wget http://apache.claz.org/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
tar xzf hadoop-2.6.0.tar.gz
mv hadoop-2.6.0 hadoop
```
下一步是设置 hadoop 使用的环境变量。
编辑 `~/.bashrc`,并在文件末尾添加以下这些值。
```
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
```
在当前运行环境中应用更改。
```
source ~/.bashrc
```
编辑 `$HADOOP_HOME/etc/hadoop/hadoop-env.sh` 并设置 `JAVA_HOME` 环境变量。
```
export JAVA_HOME=/opt/jdk1.7.0_79/
```
现在,先从配置基本的 hadoop 单节点集群开始。
首先编辑 hadoop 配置文件并进行以下更改。
```
cd /home/hadoop/hadoop/etc/hadoop
```
让我们编辑 `core-site.xml`。
```
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
```
接着编辑 `hdfs-site.xml`:
```
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
</configuration>
```
并编辑 `mapred-site.xml`:
```
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
```
最后编辑 `yarn-site.xml`:
```
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
```
现在使用以下命令格式化 namenode:
```
hdfs namenode -format
```
要启动所有 hadoop 服务,请使用以下命令:
```
cd /home/hadoop/hadoop/sbin/
start-dfs.sh
start-yarn.sh
```
要检查所有服务是否正常启动,请使用 `jps` 命令:
```
jps
```
你应该看到这样的输出。
```
26049 SecondaryNameNode
25929 DataNode
26399 Jps
26129 JobTracker
26249 TaskTracker
25807 NameNode
```
现在,你可以在浏览器中访问 Hadoop 服务:http://your-ip-address:8088/ 。
[](http://www.unixmen.com/wp-content/uploads/2015/06/hadoop.png)
谢谢阅读!!!
---
via: [https://www.unixmen.com/setup-apache-hadoop-centos/](https://www.unixmen.com.hijacked/setup-apache-hadoop-centos/)
作者:[anismaj](https://www.unixmen.com/author/anis/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='www.unixmen.com.hijacked', port=443): Max retries exceeded with url: /setup-apache-hadoop-centos/ (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7b83275821d0>: Failed to resolve 'www.unixmen.com.hijacked' ([Errno -2] Name or service not known)")) | null |
8,776 | 4 个 Linux 桌面上的轻量级图像浏览器 | https://opensource.com/article/17/7/4-lightweight-image-viewers-linux-desktop | 2017-08-14T11:48:00 | [
"图像"
] | https://linux.cn/article-8776-1.html |
>
> 当你需要的不仅仅是一个基本的图像浏览器,而是一个完整的图像编辑器,请查看这些程序。
>
>
>

像大多数人一样,你计算机上可能有些照片和其他图像。而且,像大多数人一样,你可能想要经常查看那些图像和照片。
而启动一个 [GIMP](https://www.gimp.org/) 或者 [Pinta](https://pinta-project.com/pintaproject/pinta/) 这样的图片编辑器对于简单的浏览图片来说太笨重了。
另一方面,大多数 Linux 桌面环境中包含的基本图像查看器可能不足以满足你的需要。如果你想要一些更多的功能,但仍然希望它是轻量级的,那么看看这四个 Linux 桌面中的图像查看器,如果还不能满足你的需要,还有额外的选择。
### Feh
[Feh](https://feh.finalrewind.org/) 是我以前在老旧计算机上最喜欢的软件。它简单、朴实、用起来很好。
你可以从命令行启动 Feh:只将其指向图像或者包含图像的文件夹之后就行了。Feh 会快速加载,你可以通过鼠标点击或使用键盘上的向左和向右箭头键滚动图像。不能更简单了。
Feh 可能很轻量级,但它提供了一些选项。例如,你可以控制 Feh 的窗口是否具有边框,设置要查看的图像的最小和最大尺寸,并告诉 Feh 你想要从文件夹中的哪个图像开始浏览。

*Feh 的使用*
### Ristretto
如果你将 Xfce 作为桌面环境,那么你会熟悉 [Ristretto](https://docs.xfce.org/apps/ristretto/start)。它很小、简单、并且非常有用。
怎么简单?你打开包含图像的文件夹,单击左侧的缩略图之一,然后单击窗口顶部的导航键浏览图像。Ristretto 甚至有幻灯片功能。
Ristretto 也可以做更多的事情。你可以使用它来保存你正在浏览的图像的副本,将该图像设置为桌面壁纸,甚至在另一个应用程序中打开它,例如,当你需要修改一下的时候。

*在 Ristretto 中浏览照片*
### Mirage
表面上,[Mirage](http://mirageiv.sourceforge.net/)有点平常,没什么特色,但它做着和其他优秀图片浏览器一样的事:打开图像,将它们缩放到窗口的宽度,并且可以使用键盘滚动浏览图像。它甚至可以使用幻灯片。
不过,Mirage 将让需要更多功能的人感到惊喜。除了其核心功能,Mirage 还可以调整图像大小和裁剪图像、截取屏幕截图、重命名图像,甚至生成文件夹中图像的 150 像素宽的缩略图。
如果这还不够,Mirage 还可以显示 [SVG 文件](https://en.wikipedia.org/wiki/Scalable_Vector_Graphics)。你甚至可以从[命令行](http://mirageiv.sourceforge.net/docs-advanced.html#cli)中运行。

*使用 Mirage*
### Nomacs
[Nomacs](http://nomacs.org/) 显然是本文中最重量级的图像浏览器。它所呈现的那么多功能让人忽视了它的速度。它快捷而易用。
Nomacs 不仅仅可以显示图像。你还可以查看和编辑图像的[元数据](https://iptc.org/standards/photo-metadata/photo-metadata/),向图像添加注释,并进行一些基本的编辑,包括裁剪、调整大小、并将图像转换为灰度。Nomacs 甚至可以截图。
一个有趣的功能是你可以在桌面上运行程序的两个实例,并在这些实例之间同步图像。当需要比较两个图像时,[Nomacs 文档](http://nomacs.org/synchronization/)中推荐这样做。你甚至可以通过局域网同步图像。我没有尝试通过网络进行同步,如果你做过可以分享下你的经验。

*Nomacs 中的照片及其元数据*
### 其他一些值得一看的浏览器
如果这四个图像浏览器不符合你的需求,这里还有其他一些你可能感兴趣的。
**[Viewnior](http://siyanpanayotov.com/project/viewnior/)** 自称是 “GNU/Linux 中的快速简单的图像查看器”,它很适合这个用途。它的界面干净整洁,Viewnior 甚至可以进行一些基本的图像处理。
如果你喜欢在命令行中使用,那么 **display** 可能是你需要的浏览器。 **[ImageMagick](https://www.imagemagick.org/script/display.php)** 和 **[GraphicsMagick](http://www.graphicsmagick.org/display.html)** 这两个图像处理软件包都有一个名为 display 的应用程序,这两个版本都有查看图像的基本和高级选项。
**[Geeqie](http://geeqie.org/)** 是更轻和更快的图像浏览器之一。但是,不要让它的简单误导你。它包含的功能有元数据编辑功能和其它浏览器所缺乏的查看相机 RAW 图像格式的功能。
**[Shotwell](https://wiki.gnome.org/Apps/Shotwell)** 是 GNOME 桌面的照片管理器。然而它不仅仅能浏览图像,而且 Shotwell 非常快速,并且非常适合显示照片和其他图形。
*在 Linux 桌面中你有最喜欢的一款轻量级图片浏览器么?请在评论区随意分享你的喜欢的浏览器。*
---
作者简介:
我是一名长期使用自由/开源软件的用户,并因为乐趣和收获写各种东西。我不会很严肃。你可以在这些网站上找到我:Twitter、Mastodon、GitHub。
via: <https://opensource.com/article/17/7/4-lightweight-image-viewers-linux-desktop>
作者:[Scott Nesbitt](https://opensource.com/users/scottnesbitt) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Like most people, you probably have more than a few photos and other images on your computer. And, like most people, you probably like to take a peek at those images and photos every so often.
Firing up an editor like [GIMP](https://www.gimp.org/) or [Pinta](https://pinta-project.com/pintaproject/pinta/) is overkill for simply viewing images.
On the other hand, the basic image viewer included with most Linux desktop environments might not be enough for your needs. If you want something with a few more features, but still want it to be lightweight, then take a closer look at these four image viewers for the Linux desktop, plus a handful of bonus options if they don't meet your needs.
## Feh
[Feh](https://feh.finalrewind.org/) is an old favorite from the days when I computed on older, slower hardware. It's simple, unadorned, and does what it's designed to do very well.
You drive Feh from the command line: just point it at an image or a folder containing images and away you go. Feh loads quickly, and you can scroll through a set of images with a mouse click or by using the left and right arrow keys on your keyboard. What could be simpler?
Feh might be light, but it offers some options. You can, for example, control whether Feh's window has a border, set the minimum and maximum sizes of the images you want to view, and tell Feh at which image in a folder you want to start viewing.

opensource.com
## Ristretto
If you've used Xfce as a desktop environment, you'll be familiar with [Ristretto](https://docs.xfce.org/apps/ristretto/start). It's small, simple, and very useful.
How simple? You open a folder containing images, click on one of the thumbnails on the left, and move through the images by clicking the navigation keys at the top of the window. Ristretto even has a slideshow feature.
Ristretto can do a bit more, too. You can use it to save a copy of an image you're viewing, set that image as your desktop wallpaper, and even open it in another application, for example, if you need to touch it up.

opensource.com
## Mirage
On the surface, [Mirage](http://mirageiv.sourceforge.net/) is kind of plain and nondescript. It does the same things just about every decent image viewer does: opens image files, scales them to the width of the window, and lets you scroll through a collection of images using your keyboard. It even runs slideshows.
Still, Mirage will surprise anyone who needs a little more from their image viewer. In addition to its core features, Mirage lets you resize and crop images, take screenshots, rename an image file, and even generate 150-pixel-wide thumbnails of the images in a folder.
If that wasn't enough, Mirage can display [SVG files](https://en.wikipedia.org/wiki/Scalable_Vector_Graphics). You can even drive it [from the command line](http://mirageiv.sourceforge.net/docs-advanced.html#cli).

opensource.com
## Nomacs
[Nomacs](http://nomacs.org/) is easily the heaviest of the image viewers described in this article. Its perceived bulk belies Nomacs' speed. It's quick and easy to use.
Nomacs does more than display images. You can also view and edit an image's [metadata](https://iptc.org/standards/photo-metadata/photo-metadata/), add notes to an image, and do some basic editing—including cropping, resizing, and converting the image to grayscale. Nomacs can even take screenshots.
One interesting feature is that you can run two instances of the application on your desktop and synchronize an image across those instances. The [Nomacs documentation](http://nomacs.org/synchronization/) recommends this when you need to compare two images. You can even synchronize an image across a local area network. I haven't tried synchronizing across a network, but please share your experiences if you have.

opensource.com
## A few other viewers worth looking at
If these four image viewers don't suit your needs, here are some others that might interest you.
**[Viewnior](http://siyanpanayotov.com/project/viewnior/)** bills itself as a "fast and simple image viewer for GNU/Linux," and it fits that bill nicely. Its interface is clean and uncluttered, and Viewnior can even do some basic image manipulation.

If the command line is more your thing, then **display** might be the viewer for you. Both the **[ImageMagick](https://www.imagemagick.org/script/display.php)** and **[GraphicsMagick](http://www.graphicsmagick.org/display.html)** image manipulation packages have an application named display, and both versions have basic and advanced options for viewing images.

**[Geeqie](http://geeqie.org/)** is one of the lighter and faster image viewers out there. Don't let its simplicity fool you, though. It packs features, like metadata editing and viewing camera RAW image formats, that other viewers lack.

**[Shotwell](https://wiki.gnome.org/Apps/Shotwell)** is the photo manager for the GNOME desktop. While it does more than just view images, Shotwell is quite speedy and does a great job of displaying photos and other graphics.
*Do you have a favorite lightweight image viewer for the Linux desktop? Feel free to share your preferences by leaving a comment.*
|
8,777 | Samba 系列(十四):在命令行中将 CentOS 7 与 Samba4 AD 集成 | https://www.tecmint.com/integrate-centos-7-to-samba4-active-directory/ | 2017-08-14T17:58:03 | [
"Samba"
] | https://linux.cn/article-8777-1.html | 
本指南将向你介绍如何使用 Authconfig 在命令行中将无图形界面的 CentOS 7 服务器集成到 [Samba4 AD 域控制器](/article-8065-1.html)中。
这类设置提供了由 Samba 持有的单一集中式帐户数据库,允许 AD 用户通过网络基础设施对 CentOS 服务器进行身份验证。
#### 要求
1. [在 Ubuntu 上使用 Samba4 创建 AD 基础架构](/article-8065-1.html)
2. [CentOS 7.3 安装指南](/article-8048-1.html)
### 步骤 1:为 Samba4 AD DC 配置 CentOS
1、 在开始将 CentOS 7 服务器加入 Samba4 DC 之前,你需要确保网络接口被正确配置为通过 DNS 服务查询域。
运行 `ip address` 命令列出你机器网络接口,选择要编辑的特定网卡,通过针对接口名称运行 `nmtui-edit` 命令(如本例中的 ens33),如下所示。
```
# ip address
# nmtui-edit ens33
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/List-Network-Interfaces.jpg)
*列出网络接口*
2、 打开网络接口进行编辑后,添加最适合 LAN 的静态 IPv4 配置,并确保为 DNS 服务器设置 Samba AD 域控制器 IP 地址。
另外,在搜索域中追加你的域的名称,并使用 [TAB] 键跳到确定按钮来应用更改。
当你仅对域 dns 记录使用短名称时, 已提交的搜索域保证域对应项会自动追加到 dns 解析 (FQDN) 中。
[](https://www.tecmint.com/wp-content/uploads/2017/07/Configure-Network-Interface.png)
*配置网络接口*
3、最后,重启网络守护进程以应用更改,并通过对域名和域控制器 ping 来测试 DNS 解析是否正确配置,如下所示。
```
# systemctl restart network.service
# ping -c2 tecmint.lan
# ping -c2 adc1
# ping -c2 adc2
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/Verify-DNS-Resolution-on-Domain.png)
*验证域上的 DNS 解析*
4、 另外,使用下面的命令配置你的计算机主机名并重启机器应用更改。
```
# hostnamectl set-hostname your_hostname
# init 6
```
使用以下命令验证主机名是否正确配置。
```
# cat /etc/hostname
# hostname
```
5、 最后,使用 root 权限运行以下命令,与 Samba4 AD DC 同步本地时间。
```
# yum install ntpdate
# ntpdate domain.tld
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/Sync-Time-with-Samba4-AD-DC.png)
*与 Samba4 AD DC 同步时间*
### 步骤 2:将 CentOS 7 服务器加入到 Samba4 AD DC
6、 要将 CentOS 7 服务器加入到 Samba4 AD 中,请先用具有 root 权限的帐户在计算机上安装以下软件包。
```
# yum install authconfig samba-winbind samba-client samba-winbind-clients
```
7、 为了将 CentOS 7 服务器与域控制器集成,可以使用 root 权限运行 `authconfig-tui`,并使用下面的配置。
```
# authconfig-tui
```
首屏选择:
* 在 User Information 中:
+ Use Winbind
* 在 Authentication 中使用[空格键]选择:
+ Use Shadow Password
+ Use Winbind Authentication
+ Local authorization is sufficient
[](https://www.tecmint.com/wp-content/uploads/2017/07/Authentication-Configuration.png)
*验证配置*
8、 点击 Next 进入 Winbind 设置界面并配置如下:
* Security Model: ads
* Domain = YOUR\_DOMAIN (use upper case)
* Domain Controllers = domain machines FQDN (comma separated if more than one)
* ADS Realm = YOUR\_DOMAIN.TLD
* Template Shell = /bin/bash
[](https://www.tecmint.com/wp-content/uploads/2017/07/Winbind-Settings.png)
*Winbind 设置*
9、 要加入域,使用 [tab] 键跳到 “Join Domain” 按钮,然后按[回车]键加入域。
在下一个页面,添加具有提升权限的 Samba4 AD 帐户的凭据,以将计算机帐户加入 AD,然后单击 “OK” 应用设置并关闭提示。
请注意,当你输入用户密码时,凭据将不会显示在屏幕中。在下面再次点击 OK,完成 CentOS 7 的域集成。
[](https://www.tecmint.com/wp-content/uploads/2017/07/Join-Domain-to-Samba4-AD-DC.png)
*加入域到 Samba4 AD DC*
[](https://www.tecmint.com/wp-content/uploads/2017/07/Confirm-Winbind-Settings.png)
*确认 Winbind 设置*
要强制将机器添加到特定的 Samba AD OU 中,请使用 hostname 命令获取计算机的完整名称,并使用机器名称在该 OU 中创建一个新的计算机对象。
将新对象添加到 Samba4 AD 中的最佳方法是已经集成到[安装了 RSAT 工具](/article-8097-1.html)的域的 Windows 机器上使用 ADUC 工具。
重要:加入域的另一种方法是使用 `authconfig` 命令行,它可以对集成过程进行广泛的控制。
但是,这种方法很容易因为其众多参数造成错误,如下所示。该命令必须输入一条长命令行。
```
# authconfig --enablewinbind --enablewinbindauth --smbsecurity ads --smbworkgroup=YOUR_DOMAIN --smbrealm YOUR_DOMAIN.TLD --smbservers=adc1.yourdomain.tld --krb5realm=YOUR_DOMAIN.TLD --enablewinbindoffline --enablewinbindkrb5 --winbindtemplateshell=/bin/bash --winbindjoin=domain_admin_user --update --enablelocauthorize --savebackup=/backups
```
10、 机器加入域后,通过使用以下命令验证 winbind 服务是否正常运行。
```
# systemctl status winbind.service
```
11、 接着检查是否在 Samba4 AD 中成功创建了 CentOS 机器对象。从安装了 RSAT 工具的 Windows 机器使用 AD 用户和计算机工具,并进入到你的域计算机容器。一个名为 CentOS 7 Server 的新 AD 计算机帐户对象应该在右边的列表中。
12、 最后,使用文本编辑器打开 samba 主配置文件(`/etc/samba/smb.conf`)来调整配置,并在 `[global]` 配置块的末尾附加以下行,如下所示:
```
winbind use default domain = true
winbind offline logon = true
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/Configure-Samba.jpg)
*配置 Samba*
13、 为了在 AD 帐户首次登录时在机器上创建本地家目录,请运行以下命令:
```
# authconfig --enablemkhomedir --update
```
14、 最后,重启 Samba 守护进程使更改生效,并使用一个 AD 账户登陆验证域加入。AD 帐户的家目录应该会自动创建。
```
# systemctl restart winbind
# su - domain_account
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/Verify-Domain-Joining.jpg)
*验证域加入*
15、 通过以下命令之一列出域用户或域组。
```
# wbinfo -u
# wbinfo -g
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/List-Domain-Users-and-Groups.png)
*列出域用户和组*
16、 要获取有关域用户的信息,请运行以下命令。
```
# wbinfo -i domain_user
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/List-Domain-User-Info.jpg)
*列出域用户信息*
17、 要显示域摘要信息,请使用以下命令。
```
# net ads info
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/List-Domain-Summary.jpg)
*列出域摘要*
### 步骤 3:使用 Samba4 AD DC 帐号登录CentOS
18、 要在 CentOS 中与域用户进行身份验证,请使用以下命令语法之一。
```
# su - ‘domain\domain_user’
# su - domain\\domain_user
```
或者在 samba 配置文件中设置了 `winbind use default domain = true` 参数的情况下,使用下面的语法。
```
# su - domain_user
# su - domain_user@domain.tld
```
19、 要为域用户或组添加 root 权限,请使用 `visudo` 命令编辑 `sudoers` 文件,并添加以下截图所示的行。
```
YOUR_DOMAIN\\domain_username ALL=(ALL:ALL) ALL #For domain users
%YOUR_DOMAIN\\your_domain\ group ALL=(ALL:ALL) ALL #For domain groups
```
或者在 samba 配置文件中设置了 `winbind use default domain = true` 参数的情况下,使用下面的语法。
```
domain_username ALL=(ALL:ALL) ALL #For domain users
%your_domain\ group ALL=(ALL:ALL) ALL #For domain groups
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/Grant-Root-Privileges-on-Domain-Users.jpg)
*授予域用户 root 权限*
20、 针对 Samba4 AD DC 的以下一系列命令也可用于故障排除:
```
# wbinfo -p #Ping domain
# wbinfo -n domain_account #Get the SID of a domain account
# wbinfo -t #Check trust relationship
```
21、 要离开该域,请使用具有提升权限的域帐户对你的域名运行以下命令。从 AD 中删除计算机帐户后,重启计算机,以便将更改还原到加入域之前的状态。
```
# net ads leave -w DOMAIN -U domain_admin
# init 6
```
就是这样了!尽管此过程主要集中在将 CentOS 7 服务器加入到 Samba4 AD DC 中,但这里描述的相同步骤也适用于将 CentOS 服务器集成到 Microsoft Windows Server 2012 AD 中。
---
作者简介:
Matei Cezar - 我是一个电脑上瘾的家伙,开源和基于 linux 的系统软件的粉丝,在 Linux 发行版桌面、服务器和 bash 脚本方面拥有大约 4 年的经验。
---
via: <https://www.tecmint.com/integrate-centos-7-to-samba4-active-directory/>
作者:[Matei Cezar](https://www.tecmint.com/author/cezarmatei/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | This guide will show you how you can integrate a **CentOS 7** Server with no Graphical User Interface to [Samba4 Active Directory Domain Controller](https://www.tecmint.com/install-samba4-active-directory-ubuntu/) from command line using Authconfig software.
This type of setup provides a single centralized account database held by Samba and allows the AD users to authenticate to CentOS server across the network infrastructure.
#### Requirements
### Step 1: Configure CentOS for Samba4 AD DC
**1.** Before starting to join **CentOS 7** Server into a **Samba4 DC** you need to assure that the network interface is properly configured to query domain via DNS service.
Run [ip address](https://www.tecmint.com/ip-command-examples/) command to list your machine network interfaces and choose the specific NIC to edit by issuing **nmtui-edit** command against the interface name, such as **ens33** in this example, as illustrated below.
# ip address # nmtui-edit ens33

**2.** Once the network interface is opened for editing, add the static IPv4 configurations best suited for your LAN and make sure you setup Samba AD Domain Controllers IP addresses for the DNS servers.
Also, append the name of your domain in search domains filed and navigate to **OK** button using **[TAB]** key to apply changes.
The search domains filed assures that the domain counterpart is automatically appended by DNS resolution (FQDN) when you use only a short name for a domain DNS record.

**3.** Finally, restart the network daemon to apply changes and test if DNS resolution is properly configured by issuing series of **ping** commands against the domain name and domain controllers short names as shown below.
# systemctl restart network.service # ping -c2 tecmint.lan # ping -c2 adc1 # ping -c2 adc2

**4.** Also, configure your machine hostname and reboot the machine to properly apply the settings by issuing the following commands.
# hostnamectl set-hostname your_hostname # init 6
Verify if hostname was correctly applied with the below commands.
# cat /etc/hostname # hostname
**5.** Finally, sync local time with Samba4 AD DC by issuing the below commands with root privileges.
# yum install ntpdate # ntpdate domain.tld

### Step 2: Join CentOS 7 Server to Samba4 AD DC
**6.** To join CentOS 7 server to Samba4 Active Directory, first install the following packages on your machine from an account with root privileges.
# yum install authconfig samba-winbind samba-client samba-winbind-clients
**7.** In order to integrate CentOS 7 server to a domain controller run **authconfig-tui** graphical utility with root privileges and use the below configurations as described below.
# authconfig-tui
At the first prompt screen choose:
- On
**User Information**:- Use Winbind
- On
**Authentication**tab select by pressing**[Space]**key:- Use
**Shadow Password** - Use
**Winbind Authentication** - Local authorization is sufficient
- Use

**8.** Hit **Next** to continue to the Winbind Settings screen and configure as illustrated below:
- Security Model:
**ads** - Domain =
**YOUR_DOMAIN**(use upper case) - Domain Controllers =
**domain machines FQDN**(comma separated if more than one) - ADS Realm =
**YOUR_DOMAIN.TLD** - Template Shell =
**/bin/bash**

**9.** To perform domain joining navigate to **Join Domain** button using **[tab]** key and hit **[Enter]** key to join domain.
At the next screen prompt, add the credentials for a **Samba4 AD** account with elevated privileges to perform the machine account joining into AD and hit **OK** to apply settings and close the prompt.
Be aware that when you type the user password, the credentials won’t be shown in the password screen. On the remaining screen hit **OK** again to finish domain integration for CentOS 7 machine.


To force adding a machine into a specific **Samba AD Organizational Unit**, get your machine exact name using hostname command and create a new Computer object in that OU with the name of your machine.
The best way to add a new object into a Samba4 AD is by using **ADUC** tool from a Windows machine integrated into the domain with [RSAT tools installed](https://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/) on it.
**Important**: An alternate method of joining a domain is by using **authconfig** command line which offers extensive control over the integration process.
However, this method is prone to errors do to its numerous parameters as illustrated on the below command excerpt. The command must be typed into a single long line.
# authconfig --enablewinbind --enablewinbindauth --smbsecurity ads --smbworkgroup=YOUR_DOMAIN --smbrealm YOUR_DOMAIN.TLD --smbservers=adc1.yourdomain.tld --krb5realm=YOUR_DOMAIN.TLD --enablewinbindoffline --enablewinbindkrb5 --winbindtemplateshell=/bin/bash --winbindjoin=domain_admin_user --update --enablelocauthorize --savebackup=/backups
**10.** After the machine has been joined to domain, verify if winbind service is up and running by issuing the below command.
# systemctl status winbind.service
**11.** Then, check if CentOS machine object has been successfully created in Samba4 AD. Use AD Users and Computers tool from a Windows machine with RSAT tools installed and navigate to your domain Computers container. A new AD computer account object with name of your CentOS 7 server should be listed in the right plane.
**12.** Finally, tweak the configuration by opening samba main configuration file (**/etc/samba/smb.conf**) with a text editor and append the below lines at the end of the **[global]** configuration block as illustrated below:
winbind use default domain = true winbind offline logon = true

**13.** In order to create local homes on the machine for AD accounts at their first logon run the below command.
# authconfig --enablemkhomedir --update
**14.** Finally, restart Samba daemon to reflect changes and verify domain joining by performing a logon on the server with an AD account. The home directory for the AD account should be automatically created.
# systemctl restart winbind # su - domain_account

**15.** List the domain users or domain groups by issuing one of the following commands.
# wbinfo -u # wbinfo -g

**16.** To get info about a domain user run the below command.
# wbinfo -i domain_user

**17.** To display summary domain info issue the following command.
# net ads info

### Step 3: Login to CentOS with a Samba4 AD DC Account
**18.** To authenticate with a domain user in CentOS, use one of the following command line syntaxes.
# su - ‘domain\domain_user’ # su - domain\\domain_user
Or use the below syntax in case winbind use default domain = true parameter is set to samba configuration file.
# su - domain_user # su -[[email protected]]
**19.** In order to add root privileges for a domain user or group, edit **sudoers** file using **visudo** command and add the following lines as illustrated on the below screenshot.
YOUR_DOMAIN\\domain_username ALL=(ALL:ALL) ALL #For domain users %YOUR_DOMAIN\\your_domain\ group ALL=(ALL:ALL) ALL #For domain groups
Or use the below excerpt in case winbind use default domain = true parameter is set to samba configuration file.
domain_username ALL=(ALL:ALL) ALL #For domain users %your_domain\ group ALL=(ALL:ALL) ALL #For domain groups

**20.** The following series of commands against a Samba4 AD DC can also be useful for troubleshooting purposes:
# wbinfo -p #Ping domain # wbinfo -n domain_account #Get the SID of a domain account # wbinfo -t #Check trust relationship
**21.** To leave the domain run the following command against your domain name using a domain account with elevated privileges. After the machine account has been removed from the AD, reboot the machine to revert changes before the integration process.
# net ads leave -w DOMAIN -U domain_admin # init 6
That’s all! Although this procedure is mainly focused on joining a CentOS 7 server to a Samba4 AD DC, the same steps described here are also valid for integrating a CentOS server into a Microsoft Windows Server 2012 Active Directory.
|
8,778 | OCI 发布容器运行时和镜像格式规范 V1.0 | https://blog.docker.com/2017/07/oci-release-of-v1-0-runtime-and-image-format-specifications/ | 2017-08-15T08:30:00 | [
"OCI",
"Docker"
] | https://linux.cn/article-8778-1.html | 
7 月 19 日是<ruby> 开放容器计划 <rt> Open Container Initiative </rt></ruby>(OCI)的一个重要里程碑,OCI 发布了容器运行时和镜像规范的 1.0 版本,而 Docker 在这过去两年中一直充当着推动和引领的核心角色。我们的目标是为社区、客户以及更广泛的容器行业提供底层的标准。要了解这一里程碑的意义,我们先来看看 Docker 在开发容器技术行业标准方面的成长和发展历史。
### Docker 将运行时和镜像捐赠给 OCI 的历史回顾
Docker 的镜像格式和容器运行时在 2013 年作为开源项目发布后,迅速成为事实上的标准。我们认识到将其转交给中立管理机构管理,以加强创新和防止行业碎片化的重要性。我们与广泛的容器技术人员和行业领导者合作,成立了<ruby> 开放容器项目 <rt> Open Container Project </rt></ruby>来制定了一套容器标准,并在 Linux 基金会的支持下,于 2015 年 6 月在 <ruby> Docker 大会 <rp> ( </rp> <rt> DockerCon </rt> <rp> ) </rp></ruby>上推出。最终在那个夏天演变成为<ruby> 开放容器计划 <rt> Open Container Initiative </rt></ruby> (OCI)。
Docker 贡献了 runc ,这是从 Docker 员工 [Michael Crosby](https://github.com/crosbymichael) 的 libcontainer 项目中发展而来的容器运行时参考实现。 runc 是描述容器生命周期和运行时行为的运行时规范的基础。runc 被用在数千万个节点的生产环境中,这比任何其它代码库都要大一个数量级。runc 已经成为运行时规范的参考实现,并且随着项目的进展而不断发展。
在运行时规范制定工作开始近一年后,我们组建了一个新的工作组来制定镜像格式的规范。 Docker 将 Docker V2 镜像格式捐赠给 OCI 作为镜像规范的基础。通过这次捐赠,OCI 定义了构成容器镜像的数据结构(原始镜像)。定义容器镜像格式是一个至关重要的步骤,但它需要一个像 Docker 这样的平台通过定义和提供构建、管理和发布镜像的工具来实现它的价值。 例如,Dockerfile 等内容并不包括在 OCI 规范中。

### 开放容器标准化之旅
这个规范已经持续开发了两年。随着代码的重构,更小型的项目已经从 runc 参考实现中脱颖而出,并支持即将发布的认证测试工具。
有关 Docker 参与塑造 OCI 的详细信息,请参阅上面的时间轴,其中包括:创建 runc ,和社区一起更新、迭代运行时规范,创建 containerd 以便于将 runc 集成到 Docker 1.11 中,将 Docker V2 镜像格式贡献给 OCI 作为镜像格式规范的基础,并在 [containerd](https://containerd.io/) 中实现该规范,使得该核心容器运行时同时涵盖了运行时和镜像格式标准,最后将 containerd 捐赠给了<ruby> 云计算基金会 <rt> Cloud Native Computing Foundation </rt></ruby>(CNCF),并于本月发布了更新的 1.0 alpha 版本。
维护者 [Michael Crosby](https://github.com/crosbymichael) 和 [Stephen Day](https://github.com/stevvooe) 引导了这些规范的发展,并且为 v1.0 版本的实现提供了极大的帮助,另外 Alexander Morozov,Josh Hawn,Derek McGown 和 Aaron Lehmann 也贡献了代码,以及 Stephen Walli 参加了认证工作组。
Docker 仍然致力于推动容器标准化进程,在每个人都认可的层面建立起坚实的基础,使整个容器行业能够在依旧十分差异化的层面上进行创新。
### 开放标准只是一小块拼图
Docker 是一个完整的平台,用于创建、管理、保护和编排容器以及镜像。该项目的愿景始终是致力于成为支持开源组件的行业规范的基石,或者是容器解决方案的校准铅锤。Docker 平台正位于此层之上 -- 为客户提供从开发到生产的安全的容器管理解决方案。
OCI 运行时和镜像规范成为一个可靠的标准基础,允许和鼓励多样化的容器解决方案,同时它们不限制产品创新或遏制主要开发者。打一个比方,TCP/IP、HTTP 和 HTML 成为过去 25 年来建立万维网的可靠标准,其他公司可以继续通过这些标准的新工具、技术和浏览器进行创新。 OCI 规范也为容器解决方案提供了类似的规范基础。
开源项目也在为产品开发提供组件方面发挥着作用。containerd 项目就使用了 OCI 的 runc 参考实现,它负责镜像的传输和存储,容器运行和监控,以及支持存储和网络附件的等底层功能。containerd 项目已经被 Docker 捐赠给了 CNCF ,与其他重要项目一起支持云计算解决方案。
Docker 使用了 containerd 和其它自己的核心开源基础设施组件,如 LinuxKit,InfraKit 和 Notary 等项目来构建和保护 Docker 社区版容器解决方案。正在寻找一个能提供容器管理、安全性、编排、网络和更多功能的完整容器平台的用户和组织可以了解下 Docker Enterprise Edition 。

>
> 这张图强调了 OCI 规范提供了一个由容器运行时实现的标准层:containerd 和 runc。 要组装一个完整的像 Docker 这样具有完整容器生命周期和工作流程的容器平台,需要和许多其他的组件集成在一起:管理基础架构的 InfraKit,提供操作系统的 LinuxKit,交付编排的 SwarmKit,确保安全性的 Notary。
>
>
>
### OCI 下一步该干什么
随着运行时和镜像规范的发布,我们应该庆祝开发者的努力。开放容器计划的下一个关键工作是提供认证计划,以验证实现者的产品和项目确实符合运行时和镜像规范。[认证工作组](https://github.com/opencontainers/certification) 已经组织了一个程序,结合了<ruby> 开发套件 <rp> ( </rp> <rt> developing suite </rt> <rp> ) </rp></ruby>的[运行时](https://github.com/opencontainers/runtime-tools)和[镜像](https://github.com/opencontainers/image-tools)规范测试工具将展示产品应该如何参照标准进行实现。
同时,当前规范的开发者们正在考虑下一个最重要的容器技术领域。云计算基金会的通用容器网络接口开发工作已经正在进行中,支持镜像签署和分发的工作正也在 OCI 的考虑之中。
除了 OCI 及其成员,Docker 仍然致力于推进容器技术的标准化。 OCI 的使命是为用户和公司提供在开发者工具、镜像分发、容器编排、安全、监控和管理等方面进行创新的基准。Docker 将继续引领创新,不仅提供提高生产力和效率的工具,而且还通过授权用户,合作伙伴和客户进行创新。
**在 Docker 学习更过关于 OCI 和开源的信息:**
* 阅读 [OCI 规范的误区](/article-8763-1.html)
* 访问 [开放容器计划的网站](https://www.opencontainers.org/join)
* 访问 [Moby 项目网站](http://mobyproject.org/)
* 参加 [DockerCon Europe 2017](https://europe-2017.dockercon.com/)
* 参加 [Moby Summit LA](https://www.eventbrite.com/e/moby-summit-los-angeles-tickets-35930560273)
(题图:vox-cdn.com)
---
作者简介:
Patrick Chanezon 是 Docker Inc. 的技术人员。他的工作是帮助构建 Docker。他是一名程序员,也是一个讲故事的人(storyteller),曾在 Netscape 和 Sun 工作了 10 年,又在 Google、VMware 和微软工作了 10 年。他的主要职业兴趣是为这些奇特的双边市场“平台”建立和推动网络效应。他曾在门户网站、广告、电商、社交、Web、分布式应用和云平台上工作过。有关更多信息,请访问 linkedin.com/in/chanezon 和他的推特 @chanezon。
---
via: <https://blog.docker.com/2017/07/oci-release-of-v1-0-runtime-and-image-format-specifications/>
作者:[Patrick Chanezon](https://blog.docker.com/author/chanezon/) 译者:[rieonke](https://github.com/rieonke) 校对:[校对者ID](https://github.com/%E6%A0%A1%E5%AF%B9%E8%80%85ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,780 | 学习用 Python 编程时要避免的 3 个错误 | https://opensource.com/article/17/6/3-things-i-did-wrong-learning-python | 2017-08-16T08:58:00 | [
"Python"
] | https://linux.cn/article-8780-1.html |
>
> 这些错误会造成很麻烦的问题,需要数小时才能解决。
>
>
>

当你做错事时,承认错误并不是一件容易的事,但是犯错是任何学习过程中的一部分,无论是学习走路,还是学习一种新的编程语言都是这样,比如学习 Python。
为了让初学 Python 的程序员避免犯同样的错误,以下列出了我学习 Python 时犯的三种错误。这些错误要么是我长期以来经常犯的,要么是造成了需要几个小时解决的麻烦。
年轻的程序员们可要注意了,这些错误是会浪费一下午的!
### 1、 可变数据类型作为函数定义中的默认参数
这似乎是对的?你写了一个小函数,比如,搜索当前页面上的链接,并可选将其附加到另一个提供的列表中。
```
def search_for_links(page, add_to=[]):
new_links = page.search_for_links()
add_to.extend(new_links)
return add_to
```
从表面看,这像是十分正常的 Python 代码,事实上它也是,而且是可以运行的。但是,这里有个问题。如果我们给 `add_to` 参数提供了一个列表,它将按照我们预期的那样工作。但是,如果我们让它使用默认值,就会出现一些神奇的事情。
试试下面的代码:
```
def fn(var1, var2=[]):
var2.append(var1)
print var2
fn(3)
fn(4)
fn(5)
```
可能你认为我们将看到:
```
[3]
[4]
[5]
```
但实际上,我们看到的却是:
```
[3]
[3, 4]
[3, 4, 5]
```
为什么呢?如你所见,每次都使用的是同一个列表,输出为什么会是这样?在 Python 中,当我们编写这样的函数时,这个列表被实例化为函数定义的一部分。当函数运行时,它并不是每次都被实例化。这意味着,这个函数会一直使用完全一样的列表对象,除非我们提供一个新的对象:
```
fn(3, [4])
```
```
[4, 3]
```
答案正如我们所想的那样。要想得到这种结果,正确的方法是:
```
def fn(var1, var2=None):
if not var2:
var2 = []
var2.append(var1)
```
或是在第一个例子中:
```
def search_for_links(page, add_to=None):
if not add_to:
add_to = []
new_links = page.search_for_links()
add_to.extend(new_links)
return add_to
```
这样就把实例化从模块加载时推迟到了函数每次运行时,即每次调用函数都会新建一个列表。请注意,对于不可变数据类型,比如[**元组**](https://docs.python.org/2/library/functions.html?highlight=tuple#tuple)、[**字符串**](https://docs.python.org/2/library/string.html)、[**整型**](https://docs.python.org/2/library/functions.html#int),是不需要考虑这种情况的。这意味着,像下面这样的代码是非常可行的:
```
def func(message="my message"):
print message
```
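补充一点:如果想直观地看到“默认参数对象只在函数定义时创建一次”这件事,可以查看函数对象的 `__defaults__` 属性。下面是一个小演示(Python 2.6 及以上版本可用,函数名 `fn` 沿用前文的例子):

```
def fn(var1, var2=[]):
    var2.append(var1)
    return var2

print(fn.__defaults__)   # ([],) —— 定义函数时创建的那个默认列表
fn(3)
fn(4)
print(fn.__defaults__)   # ([3, 4],) —— 一直是同一个列表,被反复修改
```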
### 2、 可变数据类型作为类变量
这和上面提到的最后一个错误很相像。思考以下代码:
```
class URLCatcher(object):
urls = []
def add_url(self, url):
self.urls.append(url)
```
这段代码看起来非常正常。我们有一个储存 URL 的对象。当我们调用 `add_url` 方法时,它会添加一个给定的 URL 到存储中。看起来非常正确吧?让我们看看实际是怎样的:
```
a = URLCatcher()
a.add_url('http://www.google.com')
b = URLCatcher()
b.add_url('http://www.bbc.co.uk')
```
b.urls:
```
['http://www.google.com', 'http://www.bbc.co.uk']
```
a.urls:
```
['http://www.google.com', 'http://www.bbc.co.uk']
```
等等,怎么回事?!我们想的不是这样啊。我们实例化了两个单独的对象 `a` 和 `b`。把一个 URL 给了 `a`,另一个给了 `b`。这两个对象怎么会都有这两个 URL 呢?
这和第一个错例是同样的问题。创建类定义时,URL 列表将被实例化。该类所有的实例使用相同的列表。在有些时候这种情况是有用的,但大多数时候你并不想这样做。你希望每个对象有一个单独的储存。为此,我们修改代码为:
```
class URLCatcher(object):
def __init__(self):
self.urls = []
def add_url(self, url):
self.urls.append(url)
```
现在,当创建对象时,URL 列表被实例化。当我们实例化两个单独的对象时,它们将分别使用两个单独的列表。
### 3、 可变的分配错误
这个问题困扰了我一段时间。让我们做出一些改变,并使用另一种可变数据类型 - [**字典**](https://docs.python.org/2/library/stdtypes.html?highlight=dict#dict)。
```
a = {'1': "one", '2': 'two'}
```
现在,假设我们想把这个字典用在别的地方,且保持它的初始数据完整。
```
b = a
b['3'] = 'three'
```
简单吧?
现在,让我们看看原来那个我们不想改变的字典 `a`:
```
{'1': "one", '2': 'two', '3': 'three'}
```
哇等一下,我们再看看 **b**?
```
{'1': "one", '2': 'two', '3': 'three'}
```
等等,什么?有点乱……让我们回想一下,看看其它不可变类型在这种情况下会发生什么,例如一个**元组**:
```
c = (2, 3)
d = c
d = (4, 5)
```
现在 `c` 是 `(2, 3)`,而 `d` 是 `(4, 5)`。
这个结果正如我们所料。那么,在之前的例子中到底发生了什么?当使用可变类型时,其行为有点像 **C** 语言的指针。在上面的代码中,我们令 `b = a`,我们真正表达的意思是:`b` 成为 `a` 的一个引用。它们都指向 Python 内存中的同一个对象。听起来有些熟悉?那是因为这个问题与先前的相似。其实,这篇文章应该被称为「可变引发的麻烦」。
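我们可以用 `is` 和 `id()` 来直观地验证这一点,下面的小例子说明 `b` 和 `a` 其实是同一个对象:

```
a = {'1': 'one', '2': 'two'}
b = a
print(b is a)           # True:两个名字指向同一个对象
print(id(a) == id(b))   # True:id 完全相同
b['3'] = 'three'
print('3' in a)         # True:改动 b 的同时 a 也“变”了
```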
列表也会发生同样的事吗?是的。那么我们如何解决呢?这必须非常小心。如果我们真的需要复制一个列表进行处理,我们可以这样做:
```
b = a[:]
```
这将遍历并复制列表中的每个对象的引用,并且把它放在一个新的列表中。但是要注意:如果列表中的每个对象都是可变的,我们将再次获得它们的引用,而不是完整的副本。
假设在一张纸上列清单。在原来的例子中相当于,A 某和 B 某正在看着同一张纸。如果有个人修改了这个清单,两个人都将看到相同的变化。当我们复制引用时,每个人现在有了他们自己的清单。但是,我们假设这个清单包括寻找食物的地方。如果“冰箱”是列表中的第一个,即使它被复制,两个列表中的条目也都指向同一个冰箱。所以,如果冰箱被 A 修改,吃掉了里面的大蛋糕,B 也将看到这个蛋糕的消失。这里没有简单的方法解决它。只要你记住它,并编写代码的时候,使用不会造成这个问题的方式。
字典以相同的方式工作,并且你可以通过以下方式创建一个昂贵副本:
```
b = a.copy()
```
再次说明,这只会创建一个新的字典,指向原来存在的相同的条目。因此,如果我们有两个相同的列表,并且我们修改字典 `a` 的一个键指向的可变对象,那么在字典 b 中也将看到这些变化。
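如果确实需要把嵌套在里面的可变对象也一并复制,可以使用标准库 `copy` 模块中的 `deepcopy`。下面是一个小示例(其中的键和列表内容只是沿用前文比喻的举例):

```
import copy

a = {'fridge': ['gateaux', 'milk']}
b = copy.deepcopy(a)
b['fridge'].remove('gateaux')   # 只动 b 的“冰箱”
print(a['fridge'])              # ['gateaux', 'milk'] —— 原字典不受影响
print(b['fridge'])              # ['milk']
```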
可变数据类型的麻烦也是它们强大的地方。以上都不是实际中的问题;它们是一些要注意防止出现的问题。在第三个项目中使用昂贵复制操作作为解决方案在 99% 的时候是没有必要的。你的程序或许应该被改改,所以在第一个例子中,这些副本甚至是不需要的。
*编程快乐!在评论中可以随时提问。*
---
作者简介:
Pete Savage - Peter 是一位充满激情的开源爱好者,在过去十年里一直在推广和使用开源产品。他从 Ubuntu 社区开始,在许多不同的领域自愿参与音频制作领域的研究工作。在职业经历方面,他起初作为公司的系统管理员,大部分时间在管理和建立数据中心,之后在 Red Hat 担任 CloudForms 产品的主要测试工程师。
---
via: <https://opensource.com/article/17/6/3-things-i-did-wrong-learning-python>
作者:[Pete Savage](https://opensource.com/users/psav) 译者:[polebug](https://github.com/polebug) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | It's never easy to admit when you do things wrong, but making errors is part of any learning process, from learning to walk to learning a new programming language, such as Python.
Here's a list of three things I got wrong when I was learning Python, presented so that newer Python programmers can avoid making the same mistakes. These are errors that either I got away with for a long time or that created big problems that took hours to solve.
Take heed young coders, some of these mistakes are afternoon wasters!
## 1. Mutable data types as default arguments in function definitions
It makes sense right? You have a little function that, let's say, searches for links on a current page and optionally appends it to another supplied list.
```
``````
def search_for_links(page, add_to=[]):
new_links = page.search_for_links()
add_to.extend(new_links)
return add_to
```
On the face of it, this looks like perfectly normal Python, and indeed it is. It works. But there are issues with it. If we supply a list for the **add_to** parameter, it works as expected. If, however, we let it use the default, something interesting happens.
Try the following code:
```
``````
def fn(var1, var2=[]):
var2.append(var1)
print var2
fn(3)
fn(4)
fn(5)
```
You may expect that we would see:
**[3]
[4]
[5]**
But we actually see this:
**[3]
[3, 4]
[3, 4, 5]**
Why? Well, you see, the same list is used each time. In Python, when we write the function like this, the list is instantiated as part of the function's definition. It is not instantiated each time the function is run. This means that the function keeps using the exact same list object again and again, unless of course we supply another one:
```
````fn(3, [4])`
**[4, 3]**
Just as expected. The correct way to achieve the desired result is:
```
``````
def fn(var1, var2=None):
if not var2:
var2 = []
var2.append(var1)
```
Or, in our first example:
```
``````
def search_for_links(page, add_to=None):
if not add_to:
add_to = []
new_links = page.search_for_links()
add_to.extend(new_links)
return add_to
```
This moves the instantiation from module load time so that it happens every time the function runs. Note that for immutable data types, like [tuples](https://docs.python.org/2/library/functions.html?highlight=tuple#tuple), [strings](https://docs.python.org/2/library/string.html), or [ints](https://docs.python.org/2/library/functions.html#int), this is not necessary. That means it is perfectly fine to do something like:

```
def func(message="my message"):
print message
```
## 2. Mutable data types as class variables
Hot on the heels of the last error is one that is very similar. Consider the following:
```
``````
class URLCatcher(object):
urls = []
def add_url(self, url):
self.urls.append(url)
```
This code looks perfectly normal. We have an object with a storage of URLs. When we call the **add_url** method, it adds a given URL to the store. Perfect right? Let's see it in action:
```
``````
a = URLCatcher()
a.add_url('http://www.google.com')
b = URLCatcher()
b.add_url('http://www.bbc.co.uk')
```
**b.urls
[' http://www.google.com', 'http://www.bbc.co.uk']**
**a.urls
[' http://www.google.com', 'http://www.bbc.co.uk']**
Wait, what?! We didn't expect that. We instantiated two separate objects, **a** and **b**. **A** was given one URL and **b** the other. How is it that both objects have both URLs?
Turns out it's kinda the same problem as in the first example. The URLs list is instantiated when the class definition is created. All instances of that class use the same list. Now, there are some cases where this is advantageous, but the majority of the time you don't want to do this. You want each object to have a separate store. To do that, we would modify the code like:
```
``````
class URLCatcher(object):
def __init__(self):
self.urls = []
def add_url(self, url):
self.urls.append(url)
```
Now the URLs list is instantiated when the object is created. When we instantiate two separate objects, they will be using two separate lists.
## 3. Mutable assignment errors
This one confused me for a while. Let's change gears a little and use another mutable datatype, the [ dict](https://docs.python.org/2/library/stdtypes.html?highlight=dict#dict).
```
``````
a = {'1': "one", '2': 'two'}
```
Now let's assume we want to take that **dict** and use it someplace else, leaving the original intact.
```
``````
b = a
b['3'] = 'three'
```
Simple eh?
Now let's look at our original dict, **a**, the one we didn't want to modify:
```
````{'1': "one", '2': 'two', '3': 'three'}`
Whoa, hold on a minute. What does **b** look like then?
```
````{'1': "one", '2': 'two', '3': 'three'}`
Wait what? But… let's step back and see what happens with our other immutable types, a **tuple** for instance:
```
``````
c = (2, 3)
d = c
d = (4, 5)
```
Now **c** is:
**(2, 3)**
While **d** is:
**(4, 5)**
That functions as expected. So what happened in our example? When using mutable types, we get something that behaves a little more like a pointer from C. When we said **b = a** in the code above, what we really meant was: **b** is now also a reference to **a**. They both point to the same object in Python's memory. Sound familiar? That's because it's similar to the previous problems. In fact, this post should really have been called, "The Trouble with Mutables."
Does the same thing happen with lists? Yes. So how do we get around it? Well, we have to be very careful. If we really need to copy a list for processing, we can do so like:
```
````b = a[:]`
This will go through and copy a reference to each item in the list and place it in a new list. But be warned: If any objects in the list are mutable, we will again get references to those, rather than complete copies.
Imagine having a list on a piece of paper. In the original example, Person A and Person B are looking at the same piece of paper. If someone changes that list, both people will see the same changes. When we copy the references, each person now has their own list. But let's suppose that this list contains places to search for food. If "fridge" is first on the list, even when it is copied, both entries in both lists point to the same fridge. So if the fridge is modified by Person A, by say eating a large gateaux, Person B will also see that the gateaux is missing. There is no easy way around this. It is just something that you need to remember and code in a way that will not cause an issue.
Dicts function in the same way, and you can create this expensive copy by doing:
```
````b = a.copy()`
Again, this will only create a new dictionary pointing to the same entries that were present in the original. Thus, if we have two lists that are identical and we modify a mutable object that is pointed to by a key from dict 'a', the dict object present in dict 'b' will also see those changes.
The trouble with mutable data types is that they are powerful. None of the above are real problems; they are things to keep in mind to prevent issues. The expensive copy operations presented as solutions in the third item are unnecessary 99% of the time. Your program can and probably should be modified so that those copies are not even required in the first place.
*Happy coding! And feel free to ask questions in the comments.*
8,782 | Linux 包管理基础:apt、yum、dnf 和 pkg | https://www.digitalocean.com/community/tutorials/package-management-basics-apt-yum-dnf-pkg | 2017-08-16T11:45:00 | [
"apt",
"包管理",
"yum"
] | https://linux.cn/article-8782-1.html | 
### 介绍
大多数现代的类 Unix 操作系统都提供了一种中心化的机制用来搜索和安装软件。软件通常都是存放在存储库中,并通过包的形式进行分发。处理包的工作被称为包管理。包提供了操作系统的基本组件,以及共享的库、应用程序、服务和文档。
包管理系统除了安装软件外,它还提供了工具来更新已经安装的包。包存储库有助于确保你的系统中使用的代码是经过审查的,并且软件的安装版本已经得到了开发人员和包维护人员的认可。
在配置服务器或开发环境时,我们最好了解下包在官方存储库之外的情况。某个发行版的稳定版本中的包有可能已经过时了,尤其是那些新的或者快速迭代的软件。然而,包管理无论对于系统管理员还是开发人员来说都是至关重要的技能,而已打包的软件对于主流 Linux 发行版来说也是一笔巨大的财富。
本指南旨在快速地介绍下在多种 Linux 发行版中查找、安装和升级软件包的基础知识,并帮助您将这些内容在多个系统之间进行交叉对比。
### 包管理系统:简要概述
大多数包系统都是围绕包文件的集合构建的。包文件通常是一个存档文件,它包含已编译的二进制文件和软件的其他资源,以及安装脚本。包文件同时也包含有价值的元数据,包括它们的依赖项,以及安装和运行它们所需的其他包的列表。
虽然这些包管理系统的功能和优点大致相同,但打包格式和工具却因平台而异:
| 操作系统 | 格式 | 工具 |
| --- | --- | --- |
| Debian | `.deb` | `apt`, `apt-cache`, `apt-get`, `dpkg` |
| Ubuntu | `.deb` | `apt`, `apt-cache`, `apt-get`, `dpkg` |
| CentOS | `.rpm` | `yum` |
| Fedora | `.rpm` | `dnf` |
| FreeBSD | Ports, `.txz` | `make`, `pkg` |
Debian 及其衍生版,如 Ubuntu、Linux Mint 和 Raspbian,它们的包格式是 `.deb`。APT 这款先进的包管理工具提供了大多数常见的操作命令:搜索存储库、安装软件包及其依赖项,并管理升级。在本地系统中,我们还可以使用 `dpkg` 程序来安装单个的 `deb` 文件,APT 命令作为底层 `dpkg` 的前端,有时也会直接调用它。
最近发布的 debian 衍生版大多数都包含了 `apt` 命令,它提供了一个简洁统一的接口,可用于通常由 `apt-get` 和 `apt-cache` 命令处理的常见操作。这个命令是可选的,但使用它可以简化一些任务。
CentOS、Fedora 和其它 Red Hat 家族成员使用 RPM 文件。在 CentOS 中,通过 `yum` 来与单独的包文件和存储库进行交互。
在最近的 Fedora 版本中,`yum` 已经被 `dnf` 取代,`dnf` 是它的一个现代化的分支,它保留了大部分 `yum` 的接口。
FreeBSD 的二进制包系统由 `pkg` 命令管理。FreeBSD 还提供了 `Ports` 集合,这是一个存在于本地的目录结构和工具,它允许用户获取源码后使用 Makefile 直接从源码编译和安装包。
### 更新包列表
大多数系统在本地都会有一个和远程存储库对应的包数据库,在安装或升级包之前最好更新一下这个数据库。另外,`yum` 和 `dnf` 在执行一些操作之前也会自动检查更新。当然你可以在任何时候对系统进行更新。
| 系统 | 命令 |
| --- | --- |
| Debian / Ubuntu | `sudo apt-get update` |
| | `sudo apt update` |
| CentOS | `yum check-update` |
| Fedora | `dnf check-update` |
| FreeBSD Packages | `sudo pkg update` |
| FreeBSD Ports | `sudo portsnap fetch update` |
### 更新已安装的包
在没有包系统的情况下,想确保机器上所有已安装的软件都保持在最新的状态是一个很艰巨的任务。你将不得不跟踪数百个不同包的上游更改和安全警报。虽然包管理器并不能解决升级软件时遇到的所有问题,但它确实使你能够使用一些命令来维护大多数系统组件。
在 FreeBSD 上,升级已安装的 ports 可能会引入破坏性的改变,有些步骤还需要进行手动配置,所以在通过 `portmaster` 更新之前最好阅读下 `/usr/ports/UPDATING` 的内容。
| 系统 | 命令 | 说明 |
| --- | --- | --- |
| Debian / Ubuntu | `sudo apt-get upgrade` | 只更新已安装的包 |
| | `sudo apt-get dist-upgrade` | 可能会增加或删除包以满足新的依赖项 |
| | `sudo apt upgrade` | 和 `apt-get upgrade` 类似 |
| | `sudo apt full-upgrade` | 和 `apt-get dist-upgrade` 类似 |
| CentOS | `sudo yum update` | |
| Fedora | `sudo dnf upgrade` | |
| FreeBSD Packages | `sudo pkg upgrade` | |
| FreeBSD Ports | `less /usr/ports/UPDATING` | 使用 `less` 来查看 ports 的更新提示(使用上下光标键滚动,按 q 退出)。 |
| | `cd /usr/ports/ports-mgmt/portmaster && sudo make install && sudo portmaster -a` | 安装 `portmaster` 然后使用它更新已安装的 ports |
### 搜索某个包
大多数发行版都提供针对包集合的图形化或菜单驱动的工具,我们可以分类浏览软件,这也是一个发现新软件的好方法。然而,查找包最快和最有效的方法是使用命令行工具进行搜索。
| 系统 | 命令 | 说明 |
| --- | --- | --- |
| Debian / Ubuntu | `apt-cache search search_string` | |
| | `apt search search_string` | |
| CentOS | `yum search search_string` | |
| | `yum search all search_string` | 搜索所有的字段,包括描述 |
| Fedora | `dnf search search_string` | |
| | `dnf search all search_string` | 搜索所有的字段,包括描述 |
| FreeBSD Packages | `pkg search search_string` | 通过名字进行搜索 |
| | `pkg search -f search_string` | 通过名字进行搜索并返回完整的描述 |
| | `pkg search -D search_string` | 搜索描述 |
| FreeBSD Ports | `cd /usr/ports && make search name=package` | 通过名字进行搜索 |
| | `cd /usr/ports && make search key=search_string` | 搜索评论、描述和依赖 |
### 查看某个软件包的信息
在安装软件包之前,我们可以通过仔细阅读包的描述来获得很多有用的信息。除了人类可读的文本之外,这些内容通常包括像版本号这样的元数据和包的依赖项列表。
| 系统 | 命令 | 说明 |
| --- | --- | --- |
| Debian / Ubuntu | `apt-cache show package` | 显示有关包的本地缓存信息 |
| | `apt show package` | |
| | `dpkg -s package` | 显示包的当前安装状态 |
| CentOS | `yum info package` | |
| | `yum deplist package` | 列出包的依赖 |
| Fedora | `dnf info package` | |
| | `dnf repoquery --requires package` | 列出包的依赖 |
| FreeBSD Packages | `pkg info package` | 显示已安装的包的信息 |
| FreeBSD Ports | `cd /usr/ports/category/port && cat pkg-descr` | |
### 从存储库安装包
知道包名后,通常可以用一个命令来安装它及其依赖。你也可以一次性安装多个包,只需将它们全部列出来即可。
| 系统 | 命令 | 说明 |
| --- | --- | --- |
| Debian / Ubuntu | `sudo apt-get install package` | |
| | `sudo apt-get install package1 package2 ...` | 安装所有列出来的包 |
| | `sudo apt-get install -y package` | 在 `apt` 提示是否继续的地方直接默认 `yes` |
| | `sudo apt install package` | 显示一个彩色的进度条 |
| CentOS | `sudo yum install package` | |
| | `sudo yum install package1 package2 ...` | 安装所有列出来的包 |
| | `sudo yum install -y package` | 在 `yum` 提示是否继续的地方直接默认 `yes` |
| Fedora | `sudo dnf install package` | |
| | `sudo dnf install package1 package2 ...` | 安装所有列出来的包 |
| | `sudo dnf install -y package` | 在 `dnf` 提示是否继续的地方直接默认 `yes` |
| FreeBSD Packages | `sudo pkg install package` | |
| | `sudo pkg install package1 package2 ...` | 安装所有列出来的包 |
| FreeBSD Ports | `cd /usr/ports/category/port && sudo make install` | 从源码构建安装一个 port |
### 从本地文件系统安装一个包
对于一个给定的操作系统,有时有些软件官方并没有提供相应的包,那么开发人员或供应商将需要提供包文件的下载。你通常可以通过 web 浏览器检索这些包,或者通过命令行 `curl` 来检索这些信息。将包下载到目标系统后,我们通常可以通过单个命令来安装它。
在 Debian 派生的系统上,`dpkg` 用来处理单个的包文件。如果一个包有未满足的依赖项,那么我们可以使用 `gdebi` 从官方存储库中检索它们。
在 CentOS 和 Fedora 系统上,`yum` 和 `dnf` 用于安装单个的文件,并且会处理需要的依赖。
| 系统 | 命令 | 说明 |
| --- | --- | --- |
| Debian / Ubuntu | `sudo dpkg -i package.deb` | |
| | `sudo apt-get install -y gdebi && sudo gdebi package.deb` | 安装 `gdebi`,然后使用 `gdebi` 安装 `package.deb` 并处理缺失的依赖 |
| CentOS | `sudo yum install package.rpm` | |
| Fedora | `sudo dnf install package.rpm` | |
| FreeBSD Packages | `sudo pkg add package.txz` | |
| | `sudo pkg add -f package.txz` | 即使已经安装的包也会重新安装 |
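对于上面提到的 Debian/Ubuntu 场景,除了 `gdebi` 之外,另一种常见做法是先用 `dpkg` 安装本地包,如果因为缺少依赖而失败,再让 `apt-get` 补齐依赖(下面只是一个示意,`package.deb` 沿用表中的占位名):

```
sudo dpkg -i package.deb
sudo apt-get install -f    # -f 即 --fix-broken,自动安装缺失的依赖
```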
### 删除一个或多个已安装的包
由于包管理器知道给定的软件包提供了哪些文件,因此如果某个软件不再需要了,它通常可以干净利落地从系统中清除这些文件。
| 系统 | 命令 | 说明 |
| --- | --- | --- |
| Debian / Ubuntu | `sudo apt-get remove package` | |
| | `sudo apt remove package` | |
| | `sudo apt-get autoremove` | 删除不需要的包 |
| CentOS | `sudo yum remove package` | |
| Fedora | `sudo dnf erase package` | |
| FreeBSD Packages | `sudo pkg delete package` | |
| | `sudo pkg autoremove` | 删除不需要的包 |
| FreeBSD Ports | `sudo pkg delete package` | |
| | `cd /usr/ports/path_to_port && make deinstall` | 卸载 port |
### `apt` 命令
Debian 家族发行版的管理员通常熟悉 `apt-get` 和 `apt-cache`。较少为人所知的是简化的 `apt` 接口,它是专为交互式使用而设计的。
| 传统命令 | 等价的 `apt` 命令 |
| --- | --- |
| `apt-get update` | `apt update` |
| `apt-get dist-upgrade` | `apt full-upgrade` |
| `apt-cache search string` | `apt search string` |
| `apt-get install package` | `apt install package` |
| `apt-get remove package` | `apt remove package` |
| `apt-get purge package` | `apt purge package` |
虽然 `apt` 通常是一个特定操作的快捷方式,但它并不能完全替代传统的工具,它的接口可能会随着版本的不同而发生变化,以提高可用性。如果你在脚本或 shell 管道中使用包管理命令,那么最好还是坚持使用 `apt-get` 和 `apt-cache`。
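下面是一个在脚本里使用 `apt-get` 的小示例(假设以 root 身份在 Debian/Ubuntu 上运行,包名 `nginx` 只是举例):`-y` 和 `DEBIAN_FRONTEND=noninteractive` 用来避免交互式提示,`--no-install-recommends` 则可以少装一些推荐包。

```
#!/bin/sh
set -e                                  # 出错立即退出
export DEBIAN_FRONTEND=noninteractive   # 关闭交互式配置提示
apt-get update
apt-get install -y --no-install-recommends nginx
```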
### 获取帮助
除了基于 web 的文档,请记住我们可以通过 shell 从 Unix 手册页(通常称为 man 页面)中获得大多数的命令。比如要阅读某页,可以使用 `man`:
```
man page
```
在 `man` 中,你可以用箭头键导航。按 `/` 搜索页面内的文本,使用 `q` 退出。
| 系统 | 命令 | 说明 |
| --- | --- | --- |
| Debian / Ubuntu | `man apt-get` | 更新本地包数据库以及与包一起工作 |
| | `man apt-cache` | 在本地的包数据库中搜索 |
| | `man dpkg` | 和单独的包文件一起工作以及能查询已安装的包 |
| | `man apt` | 通过更简洁,用户友好的接口进行最基本的操作 |
| CentOS | `man yum` | |
| Fedora | `man dnf` | |
| FreeBSD Packages | `man pkg` | 和预先编译的二进制包一起工作 |
| FreeBSD Ports | `man ports` | 和 Ports 集合一起工作 |
### 结论和进一步的阅读
本指南通过对多个系统间进行交叉对比概述了一下包管理系统的基本操作,但只涉及了这个复杂主题的表面。对于特定系统更详细的信息,可以参考以下资源:
* [这份指南](https://www.digitalocean.com/community/tutorials/ubuntu-and-debian-package-management-essentials) 详细介绍了 Ubuntu 和 Debian 的软件包管理。
* 这里有一份 CentOS 官方的指南 [使用 yum 管理软件](https://www.centos.org/docs/5/html/yum/)
* 这里有一个有关 Fedora 的 `dnf` 的 [wiki 页面](https://fedoraproject.org/wiki/Dnf) 以及一份有关 `dnf` [官方的手册](https://dnf.readthedocs.org/en/latest/index.html)
* [这份指南](https://www.digitalocean.com/community/tutorials/how-to-manage-packages-on-freebsd-10-1-with-pkg) 讲述了如何使用 `pkg` 在 FreeBSD 上进行包管理
* 这本 [FreeBSD Handbook](https://www.freebsd.org/doc/handbook/) 有一节讲述了[如何使用 Ports 集合](https://www.freebsd.org/doc/handbook/ports-using.html)
---
via: <https://www.digitalocean.com/community/tutorials/package-management-basics-apt-yum-dnf-pkg>
译者后记:
从经典的 `configure` && `make` && `make install` 三部曲到 `dpkg`,从需要手处理依赖关系的 `dpkg` 到全自动化的 `apt-get`,恩~,你有没有想过接下来会是什么?译者只能说可能会是 `Snaps`,如果你还没有听过这个东东,你也许需要关注下这个公众号了:**Snapcraft**
作者:[Brennen Bearnes](https://www.digitalocean.com/community/users/bpb) 译者:[Snapcrafter](https://github.com/Snapcrafter) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Most modern Unix-like operating systems offer a centralized mechanism for finding and installing software. Software is usually distributed in the form of **packages**, kept in **repositories**. Working with packages is known as **package management**. Packages provide the core components of an operating system, along with shared libraries, applications, services, and documentation.
A package management system does much more than one-time installation of software. It also provides tools for upgrading already-installed packages. Package repositories help to ensure that code has been vetted for use on your system, and that the installed versions of software have been approved by developers and package maintainers.
When configuring servers or development environments, it’s often necessary to look beyond official repositories. Packages in the stable release of a distribution may be out of date, especially where new or rapidly-changing software is concerned. Nevertheless, package management is a vital skill for system administrators and developers, and the wealth of packaged software for major distributions is a tremendous resource.
This guide is intended as a quick reference for the fundamentals of finding, installing, and upgrading packages on a variety of distributions, and should help you translate that knowledge between systems.
Most package systems are built around collections of package files. A package file is usually an archive which contains compiled applications and other resources used by the software, along with installation scripts. Packages also contain valuable metadata, including their **dependencies**, a list of other packages required to install and run them.
While their functionality and benefits are broadly similar, packaging formats and tools vary by platform:
`.deb`
packages installed by `apt`
and `dpkg`
`.rpm`
packages installed by `yum`
`.txz`
packages installed by `pkg`
In Debian and systems based on it, like Ubuntu, Linux Mint, and Raspbian, the package format is the `.deb`
file. `apt`
, the Advanced Packaging Tool, provides commands used for most common operations: Searching repositories, installing collections of packages and their dependencies, and managing upgrades. `apt`
commands operate as a front-end to the lower-level `dpkg`
utility, which handles the installation of individual `.deb`
files on the local system, and is sometimes invoked directly.
Recent releases of most Debian-derived distributions include a single `apt`
command, which offers a concise and unified interface to common operations that have traditionally been handled by the more-specific `apt-get`
and `apt-cache`
.
Rocky Linux, Fedora, and other members of the Red Hat family use RPM files. These used to use a package manager called `yum`
. In recent versions of Fedora and its derivatives, `yum`
has been supplanted by `dnf`
, a modernized fork which retains most of `yum`
’s interface.
FreeBSD’s binary package system is administered with the `pkg`
command. FreeBSD also offers the Ports Collection, a local directory structure and tools which allow the user to fetch, compile, and install packages directly from source using Makefiles. It’s usually much more convenient to use `pkg`
, but occasionally a pre-compiled package is unavailable, or you may need to change compile-time options.
Most systems keep a local database of the packages available from remote repositories. It’s best to update this database before installing or upgrading packages. As a partial exception to this pattern, `dnf`
will check for updates before performing some operations, but you can ask at any time whether updates are available.
`sudo apt update`
`dnf check-update`
`sudo pkg update`
`sudo portsnap fetch update`
Making sure that all of the installed software on a machine stays up to date would be an enormous undertaking without a package system. You would have to track upstream changes and security alerts for hundreds of different packages. While a package manager doesn’t solve every problem you’ll encounter when upgrading software, it does enable you to maintain most system components with a few commands.
On FreeBSD, upgrading installed ports can introduce breaking changes or require manual configuration steps. It’s best to read `/usr/ports/UPDATING`
before upgrading with `portmaster`
.
`sudo apt upgrade`
`sudo dnf upgrade`
`sudo pkg upgrade`
Most distributions offer a graphical or menu-driven front end to package collections. These can be a good way to browse by category and discover new software. Often, however, the quickest and most effective way to locate a package is to search with command-line tools.
`apt search search_string`
`dnf search search_string`
`pkg search search_string`
**Note:** On Rocky, Fedora, or RHEL, you can search package titles and descriptions together by using `dnf search all`
. On FreeBSD, you can search descriptions by using `pkg search -D`
When deciding what to install, it’s often helpful to read detailed descriptions of packages. Along with human-readable text, these often include metadata like version numbers and a list of the package’s dependencies.
`apt show package`
`dnf info package`
`pkg info package`
`cd /usr/ports/category/port && cat pkg-descr`
Once you know the name of a package, you can usually install it and its dependencies with a single command. In general, you can supply multiple packages to install at once by listing them all.
`sudo apt install package`
`sudo dnf install package`
`sudo pkg install package`
Sometimes, even though software isn’t officially packaged for a given operating system, a developer or vendor will offer package files for download. You can usually retrieve these with your web browser, or via `curl`
on the command line. Once a package is on the target system, it can often be installed with a single command.
On Debian-derived systems, `dpkg`
handles individual package files. If a package has unmet dependencies, `gdebi`
can often be used to retrieve them from official repositories.
On On Rocky Linux, Fedora, or RHEL, `dnf`
is used to install individual files, and will also handle needed dependencies.
`sudo dpkg -i package.deb`
`sudo dnf install package.rpm`
`sudo pkg add package.txz`
Since a package manager knows what files are provided by a given package, it can usually remove them cleanly from a system if the software is no longer needed.
`sudo apt remove package`
`sudo dnf erase package`
`sudo pkg delete package`
In addition to web-based documentation, keep in mind that Unix manual pages (usually referred to as **man pages**) are available for most commands from the shell. To read a page, use `man`
:
- man page
In `man`
, you can navigate with the arrow keys. Press **/** to search for text within the page, and **q** to quit.
`man apt`
`man dnf`
`man pkg`
`man ports`
This guide provides an overview of operations that can be cross-referenced between systems, but only scratches the surface of a complex topic. For greater detail on a given system, you can consult the following resources:
- A [wiki page about `dnf`](https://fedoraproject.org/wiki/Dnf) and the [manual for `dnf`](https://dnf.readthedocs.org/en/latest/index.html) itself
- The [FreeBSD Handbook](https://www.freebsd.org/doc/handbook/), including its section on [using the Ports Collection](https://www.freebsd.org/doc/handbook/ports-using.html), for more on `pkg` and ports
8,783 | 如何建模可以帮助你避免在 OpenStack 中遇到问题 | https://insights.ubuntu.com/2017/07/18/stuckstack-how-modelling-helps-you-avoid-getting-a-stuck-openstack/ | 2017-08-16T14:19:35 | [
"OpenStack"
] | https://linux.cn/article-8783-1.html | 
OpenStack 部署完就是一个 “<ruby> 僵栈 <rt> StuckStack </rt></ruby>”,一般出于技术原因,但有时是商业上的原因,它是无法在没有明显中断,也不花费时间和成本的情况下升级的。在关于这个话题的最后一篇文章中,我们讨论了这些云中有多少陷入僵局,以及当时是怎么决定的与如今的大部分常识相符。现在 OpenStack 已经有 7 年了,最近随着容器编排系统的增长以及更多企业开始利用公共和私有的云平台,OpenStack 正面临着压力。
### 没有魔法解决方案
如果你仍在寻找一个可以没有任何问题地升级你现有的 <ruby> 僵栈 <rt> StuckStack </rt></ruby> 的解决方案,那么我有坏消息给你:没有魔法解决方案,你最好集中精力建立一个标准化的平台,它可以有效地运营和轻松地升级。
廉价航空业已经表明,虽然乘客可能渴望最好的体验,可以坐在头等舱或者商务舱喝香槟,有足够的空间放松,但是大多数人会选择乘坐最便宜的,最终价值等式不要让他们付出更多的代价。工作负载是相同的。长期而言,工作负载将运行在最经济的平台上,因为在高价硬件或软件上运行的业务实际上并没有受益。
Amazon、Microsoft、Google 等大型公共云企业都知道,这就是为什么他们建立了高效的数据中心,并使用模型来构建、操作和扩展基础设施。长期以来,企业一直奉行以设计、制造、市场、定价、销售、实施为一体的最优秀的硬件和软件基础设施。现实可能并不总是符合承诺,但它现在还不重要,因为<ruby> 成本模式 <rt> cost model </rt></ruby>在当今世界无法生存。一些组织试图通过改用免费软件替代,而不改变自己的行为来解决这一问题。因此,他们发现,他们只是将成本从获取软件变到运营软件上。好消息是,那些高效运营的大型运营商使用的技术,现在可用于所有类型的组织。
### 什么是软件模型?
虽然许多年来,软件程序由许多对象、进程和服务而组成,但近年来,程序是普遍由许多单独的服务组成,它们高度分布在数据中心的不同服务器以及跨越数据中心的服务器上。

*OpenStack 服务的简单演示*
许多服务意味着许多软件需要配置、管理并跟踪许多物理机器。以成本效益的方式规模化地进行这一工作需要一个模型,即所有组件如何连接以及它们如何映射到物理资源。为了构建模型,我们需要有一个软件组件库,这是一种定义它们如何彼此连接以及将其部署到平台上的方法,无论是物理的还是虚拟的。在 Canonical 公司,我们几年前就认识到这一点,并建立了一个通用的软件建模工具 [Juju](https://www.ubuntu.com/cloud/juju),使得运营商能够从 100 个通用软件服务目录中组合灵活的拓扑结构、架构和部署目标。

*Juju 建模 OpenStack 服务*
在 Juju 中,软件服务被定义为一种叫做 Charm 的东西。 Charms 是代码片段,它通常用 python 或 bash 编写,其中提供有关服务的信息 - 声明的接口、服务的安装方式、可连接的其他服务等。
Charms 可以简单或者复杂,具体取决于你想要赋予的功能。对于 OpenStack,Canonical 在上游 OpenStack 社区的帮助下,为主要 OpenStack 服务开发了一套完整的 Charms。Charms 代表了模型的说明,使其可以轻松地部署、操作扩展和复制。Charms 还定义了如何升级自身,包括在需要时执行升级的顺序以及如何在需要时优雅地暂停和恢复服务。通过将 Juju 连接到诸如 [裸机即服务(MAAS)](https://www.ubuntu.com/server/maas) 这样的裸机配置系统,其中 OpenStack 的逻辑模型可以部署到物理硬件上。默认情况下,Charms 将在 LXC 容器中部署服务,从而根据云行为的需要,提供更大的灵活性来重新定位服务。配置在 Charms 中定义,或者在部署时由第三方工具(如 Puppet 或 Chef)注入。
这种方法有两个不同的好处:1 - 通过创建一个模型,我们从底层硬件抽象出每个云服务。2 - 使用已知来源的标准化组件,通过迭代组合新的架构。这种一致性使我们能够使用相同的工具部署非常不同的云架构,运行和升级这些工具是安全的。
通过全面自动化的配置工具和软件程序来管理硬件库存,运营商可以比使用传统企业技术或构建偏离核心的定制系统更有效地扩展基础架构。有价值的开发资源可以集中在创新应用领域,使新的软件服务更快上线,而不是改变标准的商品基础设施,这将会导致进一步的兼容性问题。
在下一篇文章中,我将介绍部署完全建模的 OpenStack 的一些最佳实践,以及如何快速地进行操作。如果你有一个现有的 <ruby> 僵栈 <rt> StuckStack </rt></ruby>,那么虽然我们不能很容易地拯救它,但是与公有云相比,我们将能够让你走上一条完全支持的、高效的基础架构以及运营成本的道路。
### 即将举行的网络研讨会
如果你在旧版本的 OpenStack 中遇到问题,并且想要轻松升级 OpenStack 云并且无需停机,请观看我们的[在线点播研讨会](http://ubunt.eu/Bwe7kQ),从 Newton 升级到 Ocata 的现场演示。
### 联系我们
如果你想了解有关迁移到 Canonical OpenStack 云的更多信息,请[联系这里](http://ubunt.eu/3OYs5s)。
(题图:乐高的空客 A380-800模型。空客运行 OpenStack)
---
作者简介:
专注于 Ubuntu OpenStack 的云产品经理。以前在 MySQL 和 Red Hat 工作。喜欢摩托车,遇见使用 Ubuntu 和 Openstack 做有趣事的人。
---
via: <https://insights.ubuntu.com/2017/07/18/stuckstack-how-modelling-helps-you-avoid-getting-a-stuck-openstack/>
作者:[Mark Baker](https://insights.ubuntu.com/author/markbaker/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | null |
|
8,786 | Debian 庆祝 24 岁生日 | http://linux.softpedia.com/blog/happy-24th-birthday-debian-517413.shtml | 2017-08-17T10:14:13 | [
"Debian"
] | https://linux.cn/article-8786-1.html | ****
2017 年 8 月 16 日,Debian 操作系统度过了它的第 24 个生日,这个由 Linux 内核所驱动的操作系统,由其创始人 Ian Murdock 发布于 1993 年的同一天。
每年的 8 月 16 日这一天,被 [Debian Project](https://www.debian.org/) 确定为 Debian 日,全世界各地的 Debian 爱好者会在这一天[举办各种庆祝活动](https://wiki.debian.org/DebianDay/2017)。
“如果你附近有城市在举办 2017 Debian 日 ,非常欢迎你前往参加,”Laura Arjona Reina 在[博文](https://bits.debian.org/2017/08/debian-turns-24.html)中说到,“如果没有的话,你来组织一个也很好!”
Debian GNU/Linux 是 Linux 社区里最受欢迎的 Linux 操作系统之一,也有大量的发行版衍生自此,这包括 Ubuntu 和 Linux Mint。
Debian GNU/Linux 的最新发布版本是 9.1.0 (Stretch),你可以从 Debian 官方网站下载安装镜像或者现场版镜像。这个操作系统支持十种之多的硬件架构。
我们对 Debian 的生日送上我们的良好祝福,我们已经迫不及待地想要看到即将来到的 Debian GNU/Linux 10 "Buster" 了。
生日快乐! Debian!
| 301 | Moved Permanently | null |
8,787 | 用 R 收集和映射推特数据的初学者向导 | https://opensource.com/article/17/6/collecting-and-mapping-twitter-data-using-r | 2017-08-17T15:02:07 | [
"R语言",
"Twitter"
] | /article-8787-1.html |
>
> 学习使用 R 的 twitteR 和 leaflet 包, 你就可以把任何话题的推文定位画在地图上。
>
>
>

当我开始学习 R ,我也需要学习如何出于研究的目的地收集推特数据并对其进行映射。尽管网上关于这个话题的信息很多,但我发觉难以理解什么与收集并映射推特数据相关。我不仅是个 R 新手,而且对各种教程中技术名词不熟悉。但尽管困难重重,我成功了!在这个教程里,我将以一种新手程序员都能看懂的方式来攻略如何收集推特数据并将至展现在地图中。
### 创建应用程序
如果你没有推特帐号,首先你需要 [注册一个](https://twitter.com/signup)。然后,到 [apps.twitter.com](https://apps.twitter.com/) 创建一个允许你收集推特数据的应用程序。别担心,创建应用程序极其简单。你创建的应用程序会与推特应用程序接口(API)相连。 想象 API 是一个多功能电子个人助手。你可以使用 API 让其它程序帮你做事。这样一来,你可以接入推特 API 令其收集数据。只需确保不要请求太多,因为推特数据请求次数是有[限制](https://dev.twitter.com/rest/public/rate-limiting) 的。
收集推文有两个可用的 API 。你若想做一次性的推文收集,那么使用 **REST API**. 若是想在特定时间内持续收集,可以用 **streaming API**。教程中我主要使用 REST API。
创建应用程序之后,前往 **Keys and Access Tokens** 标签。你需要 Consumer Key (API key)、 Consumer Secret (API secret)、 Access Token 和 Access Token Secret 才能在 R 中访问你的应用程序。
### 收集推特数据
下一步是打开 R 准备写代码。对于初学者,我推荐使用 [RStudio](https://www.rstudio.com/),这是 R 的集成开发环境 (IDE) 。我发现 RStudio 在解决问题和测试代码时很实用。 R 有访问该 REST API 的包叫 **[twitteR](https://cran.r-project.org/web/packages/twitteR/twitteR.pdf)**。
打开 RStudio 并新建 RScript。做好这些之后,你需要安装和加载 **twitteR** 包:
```
install.packages("twitteR")
#安装 TwitteR
library (twitteR)
#载入 TwitteR
```
安装并载入 **twitteR** 包之后,你得输入上文提及的应用程序的 API 信息:
```
api_key <- ""
#在引号内放入你的 API key
api_secret <- ""
#在引号内放入你的 API secret token
token <- ""
#在引号内放入你的 token
token_secret <- ""
#在引号内放入你的 token secret
```
接下来,连接推特访问 API:
```
setup_twitter_oauth(api_key, api_secret, token, token_secret)
```
我们来试试让推特搜索有关社区花园和农夫市场:
```
tweets <- searchTwitter("community garden OR #communitygarden OR farmers market OR #farmersmarket", n = 200, lang = "en")
```
这个代码意思是搜索前 200 篇 `(n = 200)` 英文 `(lang = "en")` 的推文, 包括关键词 `community garden` 或 `farmers market` 或任何提及这些关键词的话题标签。
推特搜索完成之后,在数据框中保存你的结果:
```
tweets.df <-twListToDF(tweets)
```
为了用推文创建地图,你需要收集的导出为 **.csv** 文件:
```
write.csv(tweets.df, "C:\\Users\\YourName\\Documents\\ApptoMap\\tweets.csv")
#保存 .csv 文件的目标文件夹路径示例(R 字符串中的反斜杠要写成 \\,或者改用正斜杠 /)
```
运行前确保 **R** 代码已保存然后继续进行下一步。.
### 生成地图
现在你有了可以展示在地图上的数据。在此教程中,我们将用一个 R 包 **[Leaflet](https://rstudio.github.io/leaflet)** 做一个基本的应用程序,这是一个生成交互式地图的热门 JavaScript 库。 Leaflet 使用 [magrittr](https://github.com/smbache/magrittr) 管道运算符 (`%>%`), 因为其语法自然,易于写代码。刚接触可能有点奇怪,但它确实降低了写代码的工作量。
为了清晰起见,在 RStudio 打开一个新的 R 脚本安装这些包:
```
install.packages("leaflet")
install.packages("maps")
library(leaflet)
library(maps)
```
现在需要一个路径让 Leaflet 访问你的数据:
```
mymap <- read.csv("C:\\Users\\YourName\\Documents\\ApptoMap\\tweets.csv", stringsAsFactors = FALSE)
```
`stringAsFactors = FALSE` 意思是保留信息,不将它转化成 factors。 (想了解 factors,读这篇文章["stringsAsFactors: An unauthorized biography"](http://simplystatistics.org/2015/07/24/stringsasfactors-an-unauthorized-biography/), 作者 Roger Peng)
是时候制作你的 Leaflet 地图了。我们将使用 **OpenStreetMap**基本地图来做你的地图:
```
m <- leaflet(mymap) %>% addTiles()
```
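顺带一提,上面的管道写法和普通的嵌套调用是等价的,两种写法可以互换:

```
# 管道写法
m <- leaflet(mymap) %>% addTiles()
# 等价的嵌套写法
m <- addTiles(leaflet(mymap))
```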
我们在基本地图上加个圈。对于 `lng` 和 `lat`,输入包含推文的经纬度的列名,并在前面加个`~`。 `~longitude` 和 `~latitude` 指向你的 **.csv** 文件中与列名:
```
m %>% addCircles(lng = ~longitude, lat = ~latitude, popup = mymap$type, weight = 8, radius = 40, color = "#fb3004", stroke = TRUE, fillOpacity = 0.8)
```
运行你的代码。会弹出网页浏览器并展示你的地图。这是我前面收集的推文的地图:

带定位的推文地图,使用了 Leaflet 和 OpenStreetMap [CC-BY-SA](https://creativecommons.org/licenses/by-sa/2.0/)
虽然你可能会对地图上的图文数量如此之小感到惊奇,通常只有 1% 的推文记录了地理编码。我收集了总数为 366 的推文,但只有 10(大概总推文的 3%)是记录了地理编码的。如果你为得到记录了地理编码的推文而困扰,改变搜索关键词看看能不能得到更好的结果。
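在绘图之前,也可以先把没有地理编码的推文过滤掉,顺便数一数到底有多少条推文带有定位信息。下面是一个小示例(假设数据框中的经纬度列名就是前面用到的 `longitude` 和 `latitude`):

```
geocoded <- mymap[!is.na(mymap$longitude) & !is.na(mymap$latitude), ]
nrow(geocoded)   # 带定位信息的推文数量
# 之后把 leaflet(mymap) 换成 leaflet(geocoded) 即可
```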
### 总结
对于初学者,把以上所有碎片结合起来,从推特数据生成一个 Leaflet 地图可能很艰难。 这个教程基于我完成这个任务的经验,我希望它能让你的学习过程变得更轻松。
(题图:[琼斯·贝克](https://opensource.com/users/jason-baker). [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/). 来源: [Cloud](https://pixabay.com/en/clouds-sky-cloud-dark-clouds-1473311/), [Globe](https://pixabay.com/en/globe-planet-earth-world-1015311/). Both [CC0](https://creativecommons.org/publicdomain/zero/1.0/).)
---
作者简介:
Dorris Scott - Dorris Scott 是佐治亚大学的地理学博士生。她的研究重心是地理信息系统(GIS)、 地理数据科学、可视化和公共卫生。她的论文是在一个 GIS 系统接口将退伍军人福利医院的传统和非传统数据结合起来,帮助病人为他们的健康状况作出更为明朗的决定。
---
via: <https://opensource.com/article/17/6/collecting-and-mapping-twitter-data-using-r>
作者:[Dorris Scott](https://opensource.com/users/dorrisscott) 译者:[XYenChi](https://github.com/XYenChi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
8,788 | 使用 Snapcraft 构建、测试并发布 Snap 软件包 | https://insights.ubuntu.com/2017/06/28/build-test-and-publish-snap-packages-using-snapcraft/ | 2017-08-18T08:42:00 | [
"snap"
] | https://linux.cn/article-8788-1.html | 
snapcraft 是一个正在为其在 Linux 中的地位而奋斗的包管理系统,它为你重新设想了分发软件的方式。这套新的跨发行版的工具可以用来帮助你构建和发布 snap 软件包。接下来我们将会讲述怎么使用 CircleCI 2.0 来加速这个过程以及一些在这个过程中的可能遇到的问题。
### snap 软件包是什么?snapcraft 又是什么?
snap 是用于 Linux 发行版的软件包,它们在设计的时候吸取了像 Android 这样的移动平台和物联网设备上分发软件的经验教训。snapcraft 这个名字涵盖了 snap 和用来构建它们的命令行工具、这个 [snapcraft.io](https://snapcraft.io/) 网站,以及在这些技术的支撑下构建的几乎整个生态系统。
snap 软件包被设计成用来隔离并封装整个应用程序。这些概念使得 snapcraft 提高软件安全性、稳定性和可移植性的目标得以实现,其中可移植性允许单个 snap 软件包不仅可以在 Ubuntu 的多个版本中安装,而且也可以在 Debian、Fedora 和 Arch 等发行版中安装。snapcraft 网站对其的描述如下:
>
> 为每个 Linux 桌面、服务器、云端或设备打包任何应用程序,并且直接交付更新。
>
>
>
### 在 CircleCI 2.0 上构建 snap 软件包
在 CircleCI 上使用 [CircleCI 2.0 语法](https://circleci.com/docs/2.0/) 来构建 snap 和在本地机器上基本相同。在本文中,我们将会讲解一个示例配置文件。如果您对 CircleCI 还不熟悉,或者想了解更多有关 2.0 的入门知识,您可以从 [这里](https://circleci.com/docs/2.0/first-steps/) 开始。
### 基础配置
```
version: 2
jobs:
build:
machine: true
working_directory: ~/project
steps:
- checkout
- run:
command: |
sudo apt update && sudo apt install -y snapd
sudo snap install snapcraft --edge --classic
/snap/bin/snapcraft
```
这个例子使用了 `machine` 执行器来安装用于管理运行 snap 的可执行程序 `snapd` 和制作 snap 的 `snapcraft` 工具。
由于构建过程需要使用比较新的内核,所以我们使用了 `machine` 执行器而没有用 `docker` 执行器。在这里,Linux v4.4 已经足够满足我们的需求了。
### 用户空间的依赖关系
上面的例子使用了 `machine` 执行器,它实际上是一个内核为 Linux v4.4 的 [Ubuntu 14.04 (Trusty) 虚拟机](https://circleci.com/docs/1.0/differences-between-trusty-and-precise/)。如果 Trusty 仓库可以满足你的 project/snap 构建依赖,那就没问题。如果你的构建依赖需要其他版本,比如 Ubuntu 16.04 (Xenial),我们仍然可以在 `machine` 执行器中使用 Docker 来构建我们的 snap 软件包 。
```
version: 2
jobs:
build:
machine: true
working_directory: ~/project
steps:
- checkout
- run:
command: |
sudo apt update && sudo apt install -y snapd
docker run -v $(pwd):$(pwd) -t ubuntu:xenial sh -c "apt update -qq && apt install snapcraft -y && cd $(pwd) && snapcraft"
```
这个例子中,我们再次在 `machine` 执行器的虚拟机中安装了 `snapd`,但是我们决定将 snapcraft 安装在 Ubuntu Xenial 镜像构建的 Docker 容器中,并使用它来构建我们的 snap。这样,在 `snapcraft` 运行的过程中就可以使用在 Ubuntu 16.04 中可用的所有 `apt` 包。
### 测试
在我们的[博客](https://circleci.com/blog/)、[文档](https://circleci.com/docs/)以及互联网上已经有很多讲述如何对软件代码进行单元测试的内容。搜索你的语言或者框架和单元测试或者 CI 可以找到大量相关的信息。在 CircleCI 上构建 snap 软件包,我们最终会得到一个 `.snap` 的文件,这意味着除了创造它的代码外我们还可以对它进行测试。
### 工作流
假设我们构建的 snap 软件包是一个 webapp,我们可以通过测试套件来确保构建的 snap 可以正确的安装和运行,我们也可以试着安装它或者使用 [Selenium](http://www.seleniumhq.org/) 来测试页面加载、登录等功能正常工作。但是这里有一个问题,由于 snap 是被设计成可以在多个 Linux 发行版上运行,这就需要我们的测试套件可以在 Ubuntu 16.04、Fedora 25 和 Debian 9 等发行版中可以正常运行。这个问题我们可以通过 CircleCI 2.0 的工作流来有效地解决。
工作流是在最近的 CircleCI 2.0 测试版中加入的,它允许我们通过特定的逻辑流程来运行离散的任务。这样,使用单个任务构建完 snap 后,我们就可以开始并行的运行 snap 的发行版测试任务,每个任务对应一个不同的发行版的 [Docker 镜像](https://circleci.com/docs/2.0/building-docker-images/) (或者在将来,还会有其他可用的执行器)。
这里有一个简单的例子:
```
workflows:
version: 2
build-test-and-deploy:
jobs:
- build
- acceptance_test_xenial:
requires:
- build
- acceptance_test_fedora_25:
requires:
- build
- acceptance_test_arch:
requires:
- build
- publish:
requires:
- acceptance_test_xenial
- acceptance_test_fedora_25
- acceptance_test_arch
```
在这个例子中首先构建了 snap,然后在三个不同的发行版上运行验收测试。如果所有的发行版都通过测试了,那么我们就可以运行发布 `job`,以便在将其推送到 snap 商店之前完成剩余的 snap 任务。
### 留着 .snap 包
为了测试我们在工作流示例中使用的 .snap 软件包,我们需要一种在构建的时候持久保存 snap 的方法。在这里我将提供两种方法:
1. **artifact** —— 在运行 `build` 任务的时候我们可以将 snaps 保存为一个 CircleCI 的 artifact(LCTT 译注:artifact 是 `snapcraft.yaml` 中的一个 `Plugin-specific` 关键字),然后在接下来的任务中检索它。CircleCI 工作流有自己处理共享 artifact 的方式,相关信息可以在 [这里](https://circleci.com/docs/2.0/workflows/#using-workspaces-to-share-artifacts-among-jobs) 找到。
2. **snap 商店通道** —— 当发布 snap 软件包到 snap 商店时,有多种通道可供我们选择。将 snap 的主分支发布到 edge 通道以供内部或者用户测试已经成为一种常见做法。我们可以在 `build` 任务中完成这些工作,然后接下来的的任务就可以从 edge 通道来安装构建好的 snap 软件包。
第一种方法速度更快,并且它还可以在 snap 软件包上传到 snap 商店供用户甚至是测试用户使用之前对 snap 进行验收测试。第二种方法的好处是我们可以从 snap 商店安装 snap,这也是 CI 运行期间的测试项之一。
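如果采用第一种方法,可以借助 CircleCI 2.0 的工作区(workspace)在任务之间传递 `.snap` 文件。下面是一个简化的片段,仅作示意,路径需要按你自己的项目调整:

```
# 在 build 任务的 steps 末尾保存 .snap 文件
- persist_to_workspace:
    root: .
    paths:
      - "*.snap"

# 在后续验收测试任务的 steps 开头取回
- attach_workspace:
    at: .
```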
### snap 商店的身份验证
[snapcraft-config-generator.py](https://gist.github.com/3v1n0/479ad142eccdd17ad7d0445762dea755) 脚本可以生成商店证书并将其保存到 `.snapcraft/snapcraft.cfg` 中(注意:在运行公共脚本之前一定要对其进行检查)。如果觉得在你仓库中使用明文来保存这个文件不安全,你可以用 `base64` 编码该文件,并将其存储为一个[私有环境变量](https://circleci.com/docs/1.0/environment-variables/#setting-environment-variables-for-all-commands-without-adding-them-to-git),或者你也可以对文件 [进行加密](https://github.com/circleci/encrypted-files),并将密钥存储在一个私有环境变量中。
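如果选择 base64 的方式,大致做法如下(仅作示意,其中的环境变量名 `SNAPCRAFT_CONFIG_B64` 是假设的,需要换成你在 CircleCI 中实际设置的私有环境变量):

```
# 本地:把商店证书编码成一行文本,粘贴到 CircleCI 的私有环境变量中
base64 -w0 .snapcraft/snapcraft.cfg

# CI 任务中:从环境变量还原出证书文件
mkdir -p .snapcraft
echo "$SNAPCRAFT_CONFIG_B64" | base64 --decode > .snapcraft/snapcraft.cfg
```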
下面是一个示例,将商店证书放在一个加密的文件中,并在 `deploy` 环节中使用它将 snap 发布到 snap 商店中。
```
- deploy:
name: Push to Snap Store
command: |
openssl aes-256-cbc -d -in .snapcraft/snapcraft.encrypted -out .snapcraft/snapcraft.cfg -k $KEY
/snap/bin/snapcraft push *.snap
```
除了 `deploy` 任务之外,工作流示例同之前的一样, `deploy` 任务只有当验收测试任务通过时才会运行。
### 更多的信息
* Alan Pope 在 [论坛中发的帖子](https://forum.snapcraft.io/t/building-and-pushing-snaps-using-circleci/789):“popey” 是 Canonical 的员工,他在 snapcraft 的论坛上写了这篇文章,并启发作者写了这篇博文。
* [snapcraft 网站](https://snapcraft.io/): snapcraft 官方网站。
* [snapcraft 的 CircleCI Bug 报告](https://bugs.launchpad.net/snapcraft/+bug/1693451):在 Launchpad 上有一个开放的 bug 报告页面,用来改善 CircleCI 对 snapcraft 的支持。同时这将使这个过程变得更简单并且更“正式”。期待您的支持。
* 怎么使用 CircleCI 构建 [Nextcloud](https://nextcloud.com/) 的 snap:这里有一篇题为 [“复杂应用的持续验收测试”](https://kyrofa.com/posts/continuous-acceptance-tests-for-complex-applications) 的博文,它同时也影响了这篇博文。
这篇客座文章的作者是 Ricardo Feliciano —— CircleCi 的开发者传道士。如果您也有兴趣投稿,请联系 [[email protected]](mailto:[email protected])。原始文章可以从 [这里](https://circleci.com/blog/build-test-publish-snap-packages?utm_source=insightsubuntu&utm_medium=syndicatedblogpost) 找到。
---
via: <https://insights.ubuntu.com/2017/06/28/build-test-and-publish-snap-packages-using-snapcraft/>
译者简介:
>
> 常年混迹于 snapcraft.io,对 Ubuntu Core、snaps 和 snapcraft 有浓厚的兴趣,并致力于将这些还在快速发展的新技术通过翻译或原创的方式介绍到中文世界。有兴趣的小伙伴也可以关注译者个人的公众号: `Snapcraft`
>
>
>
作者:Ricardo Feliciano 译者:[Snapcrafter](https://github.com/Snapcrafter) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | null |
|
8,790 | Fedora 26 助力云、服务器、工作站系统 | http://www.linuxinsider.com/story/84674.html | 2017-08-20T10:09:31 | [
"Fedora"
] | https://linux.cn/article-8790-1.html | 
[Fedora 项目](https://getfedora.org/) 7 月份宣布推出 Fedora 26, 它是全面开放源代码的 Fedora 操作系统的最新版本。
Fedora Linux 是 Red Hat Enterprise Linux(RHEL)的社区版本。Fedora 26 包含一组基础包,形成针对不同用户的三个不同版本的基础。
Fedora <ruby> 原子主机版 <rt> Atomic Host Edition </rt></ruby> 是用于运行基于容器的工作的操作系统。Fedora <ruby> 服务器版 <rt> Server </rt></ruby>将 Fedora Server OS 安装在硬盘驱动器上。Fedora <ruby> 工作站版 <rt> Workstation </rt></ruby>是一款用于笔记本电脑和台式机的用户友好操作系统,它适用于广泛的用户 - 从业余爱好者和学生到企业环境中的专业人士。
所有这三个版本都有共同的基础和一些共同的优点。所有 Fedora 版本每年发行两次。
Fedora 项目是创新和新功能的测试基地。Fedora 项目负责人 Matthew Miller 说,有些特性将在即将发布的 RHEL 版本中实现。
他告诉 LinuxInsider:“Fedora 并没有直接参与这些产品化决策。Fedora 提供了许多想法和技术,它是 Red Hat Enterprise Linux 客户参与并提供反馈的好地方。”
### 强力的软件包
Fedora 开发人员更新和改进了所有三个版本的软件包。他们在 Fedora 26 中进行了许多错误修复和性能调整,以便在 Fedora 的用例中提供更好的用户体验。
这些安装包包括以下改进:
* 更新的编译器和语言,包括 GCC 7、Go 1.8、Python 3.6 和 Ruby 2.4;
* DNF 2.0 是 Fedora 下一代包管理系统的最新版本,它与 Yum 的向后兼容性得到改善;
* Anaconda 安装程序新的存储配置界面,可从设备和分区进行自下而上的配置;
* Fedora Media Writer 更新,使用户可以为基于 ARM 的设备(如 Raspberry Pi)创建可启动 SD 卡。
[Endpoint Technologies Associates](http://www.ndpta.com/) 的总裁 Roger L. Kay 指出,云工具对于使用云的用户必不可少,尤其是程序员。
他对 LinuxInsider 表示:“Kubernetes 对于在混合云中编程感兴趣的程序员来说是至关重要的,这可能是目前业界更重要的发展之一。云,无论是公有云、私有云还是混合云 - 都是企业计算未来的关键。”
### Fedora 26 原子主机亮相
Linux 容器和容器编排引擎一直在普及。Fedora 26 原子主机提供了一个最小占用的操作系统,专门用于在裸机到云端的环境中运行基于容器的工作任务。
Fedora 26 原子主机更新大概每两周发布一次,这个时间表可以让用户及时跟上游创新。
Fedora 26 原子主机可用于 Amazon EC2 。OpenStack、Vagrant 镜像和标准安装程序 ISO 镜像可在 [Fedora 项目](https://getfedora.org/)网站上找到。
最小化的 Fedora 原子的容器镜像也在 Fedora 26 上首次亮相。
### 云托管
最新版本为 Fedora 26 原子主机提供了新功能和特性:
* 容器化的 Kubernetes 作为内置的 Kubernetes 二进制文件的替代品,使用户更容易地运行不同版本的容器编排引擎;
* 最新版本的 rpm-ostree,其中包括支持直接 RPM 安装,重新加载命令和清理命令;
* 系统容器,它提供了一种在容器中的 Fedora 原子主机上安装系统基础设施软件(如网络或 Kubernetes)的方法;
* 更新版本的 Docker、Atomic和 Cockpit,用于增强容器构建,系统支持和负载监控。
根据 Fedora 项目的 Miller 所言,容器化的 Kubernetes 对于 Fedora 原子主机来说是重要的,有两个重要原因。
他解释说:“首先,它可以让我们从基础镜像中删除它,减小大小和复杂度。第二,在容器中提供它可以轻松地在不同版本中切换,而不会破环基础,或者为尚未准备好进行改变的人造成麻烦。”
### 服务器端服务
Fedora 26 服务器版为数据中心运营提供了一个灵活的多角色平台。它还允许用户自定义此版本的 Fedora 操作系统以满足其独特需求。
Fedora 26 服务器版的新功能包括 FreeIPA 4.5,它可以改进容器中运行的安全信息管理解决方案,以及 SSSD 文件缓存,以加快用户和组查询的速度。
Fedora 26 服务器版月底将增加称为 “Boltron” 的 Fedora 模块化技术预览。作为模块化操作系统,Boltron 使不同版本的不同应用程序能够在同一个系统上运行,这实质上允许将前沿运行时与稳定的数据库配合使用。
### 打磨工作站版
对于一般用户的新工具和功能之一是更新的 GNOME 桌面功能。开发将获得增强的生产力工具。
Fedora 26 工作站版附带 GNOME 3.24 和众多更新的功能调整。夜光根据时间细微地改变屏幕颜色,以减少对睡眠模式的影响。[LibreOffice](http://www.libreoffice.org/) 5.3 是开源办公生产力套件的最新更新。
GNOME 3.24 提供了 Builder 和 Flatpak 的成熟版本,它为开发人员提供了更好的应用程序开发工具,它可以方便地访问各种系统,包括 Rust 和 Meson。
### 不只是为了开发
根据 [Azul Systems](https://www.azul.com/) 的首席执行官 Scott Sellers 的说法,更新的云工具将纳入针对企业用户的 Linux 发行版中。
他告诉 LinuxInsider:“云是新兴公司以及地球上一些最大的企业的主要开发和生产平台。”
Sellers说:“鉴于 Fedora 社区的前沿性质,我们预计在任何 Fedora 版本中都会强烈关注云技术,Fedora 26 不会不令人失望。”
他指出,Fedora 开发人员和用户社区的另一个特点就是 Fedora 团队在模块化方面所做的工作。
Sellers 说:“我们将密切关注这些实验功能。”
### 支持的升级方式
Sellers 说 Fedora 的用户超过其他 Linux 发行版的用户,很多都有兴趣升级到 Fedora 26,即使他们不是重度云端用户。
他说:“这个发行版的主要优点之一就是能提前看到先进的生产级别技术,这些最终将被整合到 RHEL 中。Fedora 26 的早期评论表明它非常稳定,修复了许多错误以及提升了性能。”
Fedora 的 Miller 指出,有兴趣从早期 Fedora 版本升级的用户可能比擦除现有系统安装 Fedora 26 更容易。Fedora 一次维护两个版本,中间还有一个月的重叠。
他说:“所以,如果你在用 Fedora 24,你应该在下个月升级。幸运的 Fedora 25 用户可以随时升级,这是 Fedora 快速滚动版本的优势之一。”
### 更快的发布
用户可以安排自己升级,而不是在发行版制作出来时进行升级。
也就是说,Fedora 23 或更早版本的用户应该尽快升级。社区不再为这些版本发布安全更新
---
作者简介:
Jack M. Germain 自 2003 年以来一直是 ECT 新闻网记者。他的主要重点领域是企业IT、Linux、和开源技术。他撰写了许多关于 Linux 发行版和其他开源软件的评论。发邮件联系 Jack
---
via: <http://www.linuxinsider.com/story/84674.html>
作者:[Jack M. Germain]([email protected]) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,791 | 极客漫画:没特别的理由,别用 SIGKILL | http://turnoff.us/geek/dont-sigkill-2/ | 2017-08-18T16:08:00 | [
"漫画",
"SIGKILL"
] | https://linux.cn/article-8791-1.html | 
为线程们想想吧,不要随便用 SIGKILL!
---
via: <http://turnoff.us/geek/dont-sigkill-2/>
作者:[Daniel Stori](http://turnoff.us/about/) 点评:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,792 | 混合云的那些事 | https://opensource.com/article/17/7/what-is-hybrid-cloud | 2017-08-19T10:03:00 | [
"云计算",
"混合云"
] | https://linux.cn/article-8792-1.html |
>
> 了解混合云的细节,包括它是什么以及如何使用它
>
>
>

在过去 10 年出现的众多技术中,云计算因其快速发展而引人注目,从一个细分领域的技术而成为了全球热点。就其本身来说,云计算已经造成了许多困惑、争论和辩论,而混合了多种类型的云计算的"混合"云计算也带来了更多的不确定性。阅读下文可以了解有关混合云的一些最常见问题的答案。
### 什么是混合云
基本上,混合云是本地基础设施、私有云和公共云(例如,第三方云服务)的灵活和集成的组合。尽管公共云和私有云服务在混合云中是绑定在一起的,但实际上,它们是独立且分开的服务实体,而可以编排在一起服务。使用公共和私有云基础设施的选择基于以下几个因素,包括成本、负载灵活性和数据安全性。
高级的特性,如<ruby> 扩展 <rt> scale-up </rt></ruby>和<ruby> 延伸 <rt> scale-out </rt></ruby>,可以快速扩展云应用程序的基础设施,使混合云成为具有季节性或其他可变资源需求的服务的流行选择。(<ruby> 扩展 <rt> scale-up </rt></ruby>意味着在特定的 Linux 实例上增加计算资源,例如 CPU 内核和内存,而<ruby> 延伸 <rt> scale-out </rt></ruby>则意味着提供具有相似配置的多个实例,并将它们分布到一个集群中)。
处于混合云解决方案中心的是开源软件,如 [OpenStack](https://opensource.com/resources/openstack),它用于部署和管理虚拟机组成的大型网络。自 2010 年 10 月发布以来,OpenStack 一直在全球蓬勃发展。它的一些集成项目和工具处理核心的云计算服务,比如计算、网络、存储和身份识别,而其他数十个项目可以与 OpenStack 捆绑在一起,创建独特的、可部署的混合云解决方案。
### 混合云的组成部分
如下图所示,混合云由私有云、公有云组成,并通过内部网络连接,由编排系统、系统管理工具和自动化工具进行管理。

*混合云模型*
#### 公共云基础设施
* <ruby> 基础设施即服务 <rt> Infrastructure as a Service </rt></ruby>(IaaS) 从一个远程数据中心提供计算资源、存储、网络、防火墙、入侵预防服务(IPS)等。可以使用图形用户界面(GUI)或命令行接口(CLI)对这些服务进行监视和管理。公共 IaaS 用户不需要购买和构建自己的基础设施,而是根据需要使用这些服务,并根据使用情况付费。
* <ruby> 平台即服务 <rt> Platform as a Service </rt></ruby>(PaaS)允许用户在其上开发、测试、管理和运行应用程序和服务器。这些包括操作系统、中间件、web 服务器、数据库等等。公共 PaaS 以模板形式为用户提供了可以轻松部署和复制的预定义服务,而不是手动实现和配置基础设施。
* <ruby> 软件即服务 <rt> Software as a Service </rt></ruby>(SaaS)通过互联网交付软件。用户可以根据订阅或许可模型或帐户级别使用这些服务,在这些服务中,他们按活跃用户计费。SaaS 软件是低成本、低维护、无痛升级的,并且降低了购买新硬件、软件或带宽以支持增长的负担。
#### 私有云基础设施
* 私有 **IaaS** 和 **PaaS** 托管在孤立的数据中心中,并与公共云集成在一起,这些云可以使用远程数据中心中可用的基础设施和服务。这使私有云所有者能够在全球范围内利用公共云基础设施来扩展应用程序,并利用其计算、存储、网络等功能。
* **SaaS** 是由公共云提供商完全监控、管理和控制的。SaaS 一般不会在公共云和私有云基础设施之间共享,并且仍然是通过公共云提供的服务。
#### 云编排和自动化工具
要规划和协调私有云和公共云实例,云编排工具是必要的。该工具应该具有智能,包括简化流程和自动化重复性任务的能力。此外,集成的自动化工具负责在设置阈值时自动扩展和延伸,以及在发生任何部分损坏或宕机时执行自修复。
#### 系统和配置管理工具
在混合云中,系统和配置工具,如 [Foreman](https://github.com/theforeman),管理着私有云和公共云数据中心提供的虚拟机的完整生命周期。这些工具使系统管理员能够轻松地控制用户、角色、部署、升级和实例,并及时地应用补丁、bug 修复和增强功能。包括Foreman 工具中的 [Puppet](https://github.com/theforeman/puppet-foreman),使管理员能够管理配置,并为所有供给的和注册的主机定义一个完整的结束状态。
### 混合云的特性
对于大多数组织来说,混合云是有意义的,因为这些关键特性:
* **可扩展性:** 在混合云中,集成的私有云和公共云实例共享每个可配置的实例的计算资源池。这意味着每个实例都可以在需要时按需扩展和延伸。
* **快速响应:** 当私有云资源超过其阈值时,混合云的弹性支持公共云中的实例快速爆发增长。当需求高峰对运行中的应用程序需要显著的动态提升负载和容量时,这是特别有价值的。(例如,电商在假日购物季期间)
* **可靠性:** 组织可以根据需要的成本、效率、安全性、带宽等来选择公共云服务提供商。在混合云中,组织还可以决定存储敏感数据的位置,以及是在私有云中扩展实例,还是通过公共基础设施跨地域进行扩展。另外,混合模型在多个站点上存储数据和配置的能力提供了对备份、灾难恢复和高可用性的支持。
* **管理:** 在非集成的云环境中,管理网络、存储、实例和/或数据可能是乏味的。与混合工具相比,传统的编排工具非常有限,因此限制了决策制定和对完整的端到端进程和任务的自动化。使用混合云和有效的管理应用程序,您可以跟踪每个组件的数量增长,并通过定期优化这些组件,使年度费用最小化。
* **安全性:** 在评估是否在云中放置应用程序和数据时,安全性和隐私是至关重要的。IT 部门必须验证所有的合规性需求和部署策略。公共云的安全性正在改善,并将继续成熟。而且,在混合云模型中,组织可以将高度敏感的信息存储在私有云中,并将其与存储在公共云中的不敏感数据集成在一起。
* **定价:** 云定价通常基于所需的基础设施和服务水平协议(SLA)的要求。在混合云模型中,用户可以在计算资源(CPU/内存)、带宽、存储、网络、公共 IP 地址等粒度上进行比较,价格要么是固定的,要么是可变的,可以按月、小时、甚至每秒钟计量。因此,用户总是可以在公共云提供商中购买最好的价位,并相应地部署实例。
### 混合云如今的发展
尽管对公共云服务的需求很大且不断增长,并且从本地到公共云的迁移系统,仍然是大多数大型组织关注的问题。大多数人仍然在企业数据中心和老旧系统中保留关键的应用程序和数据。他们担心在公共基础设施中面临失去控制、安全威胁、数据隐私和数据真实性。因为混合云将这些问题最小化并使收益最大化,对于大多数大型组织来说,这是最好的解决方案。
### 预测五年后的发展
我预计混合云模型将在全球范围内被广泛接受,而公司的“无云”政策将在短短几年内变得非常罕见。这是我想我们会看到的:
* 由于混合云作为一种共担的责任,企业和公共云提供商之间将加强协作,以实施安全措施来遏制网络攻击、恶意软件、数据泄漏和其他威胁。
* 实例的爆发性增长将会很快,因此客户可以自发地满足负载需求或进行自我修复。
* 此外,编排或自动化工具(如 [Ansible](https://opensource.com/life/16/8/cloud-ansible-gateway))将通过继承用于解决关键问题的能力来发挥重要作用。
* 计量和“量入为出”的概念对客户来说是透明的,并且工具将使用户能够通过监控价格波动,安全地销毁现有实例,并提供新的实例以获得最佳的可用定价。
---
作者简介:
Amit Das 是一名 Red Hat 的工程师,他对 Linux、云计算、DevOps 等充满热情,他坚信新的创新和技术,将以一种开放的方式的让世界更加开放,可以对社会产生积极的影响,改变许多人的生活。
---
via: <https://opensource.com/article/17/7/what-is-hybrid-cloud>
作者:[Amit Das](https://opensource.com/users/amit-das) 译者:[LHRchina](https://github.com/LHRchina) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Of the many technologies that have emerged over the past decade, cloud computing is notable for its rapid advance from a niche technology to global domination. On its own, cloud computing has created a lot of confusion, arguments, and debates, and "hybrid" cloud, which blends several types of cloud computing, has created even more uncertainty. Read on for answers to some of the most common questions about hybrid cloud.
## What is a hybrid cloud?
Basically, a hybrid cloud is a flexible and integrated combination of on-premises infrastructure, private cloud, and public (i.e., third-party) cloud platforms. Even though public and private cloud services are bound together in a hybrid cloud, in practice they remain unique and separate entities with services that can be orchestrated together. The choice to use both public and private cloud infrastructure is based on several factors, including cost, load flexibility, and data security.
Advanced features, such as scale-up and scale-out, can quickly expand a cloud application's infrastructure on demand, making hybrid cloud a popular choice for services with seasonal or other variable resource demands. (Scaling up means to increase compute resources, such as CPU cores and memory, on a specific Linux instance, whereas scaling out means to provision multiple instances with similar configurations and distribute them into a cluster.)
At the center of hybrid cloud solutions sits open source software, such as [OpenStack](https://opensource.com/resources/openstack), that deploys and manages large networks of virtual machines. Since its initial release in October 2010, OpenStack has been thriving globally. Some of its integrated projects and tools handle core cloud computing services, such as compute, networking, storage, and identity, while dozens of other projects can be bundled together with OpenStack to create unique and deployable hybrid cloud solutions.
## Components of the hybrid cloud
As illustrated in the graphic below, a hybrid cloud consists of private cloud, public cloud, and the internal network connected and managed through orchestration, system management, and automation tools.

opensource.com
Public cloud infrastructure:
**Infrastructure as a Service (IaaS)**provides compute resources, storage, networking, firewall, intrusion prevention services (IPS), etc. from a remote data center. These services can be monitored and managed using a graphical user interface (GUI) or a command line interface (CLI). Rather than purchasing and building their own infrastructure, public IaaS users consume these services as needed and pay based on usage.**Platform as a Service (PaaS)**allows users to develop, test, manage, and run applications and servers. These include the operating system, middleware, web servers, database, and so forth. Public PaaS provides users with predefined services in the form of templates that can be easily deployed and replicated, instead of manually implementing and configuring infrastructure.**Software as a Service (SaaS)**delivers software through the internet. Users can consume these services under a subscription or license model or at the account level, where they are billed as active users. SaaS software is low cost, low maintenance, painless to upgrade, and reduces the burden of buying new hardware, software, or bandwidth to support growth.
### Private cloud infrastructure:
- Private
**IaaS and PaaS**are hosted in isolated data centers and integrated with public clouds that can consume the infrastructure and services available in remote data centers. This enables a private cloud owner to leverage public cloud infrastructure to expand applications and utilize their compute, storage, networking, and so forth across the globe. **SaaS**is completely monitored, managed, and controlled by public cloud providers. SaaS is generally not shared between public and private cloud infrastructure and remains a service provided through a public cloud.
### Cloud orchestration and automation tools:
A cloud orchestration tool is necessary for planning and coordinating private and public cloud instances. This tool should inherit intelligence, including the capability to streamline processes and automate repetitive tasks. Further, an integrated automation tool is responsible for automatically scaling up and scaling out when a set threshold is crossed, as well as performing self-healing if any fractional damage or downtime occurs.
### System and configuration management tools:
In a hybrid cloud, system and configuration tools, such as [Foreman](https://github.com/theforeman), manage the complete lifecycles of the virtual machines provisioned in private and public cloud data centers. These tools give system administrators the power to easily control users, roles, deployments, upgrades, and instances and to apply patches, bugfixes, and enhancements in a timely manner. Including [Puppet](https://github.com/theforeman/puppet-foreman) in the Foreman tool enables administrators to manage configurations and define a complete end state for all provisioned and registered hosts.
## Hybrid cloud features
The hybrid cloud makes sense for most organizations because of these key features:
- **Scalability:** In a hybrid cloud, integrated private and public cloud instances share a pool of compute resources for each provisioned instance. This means each instance can scale up or out anytime, as needed.
- **Rapid response:** Hybrid clouds' elasticity supports rapid bursting of instances in the public cloud when private cloud resources exceed their threshold. This is especially valuable when peaks in demand produce significant and variable increases in load and capacity for a running application (e.g., online retailers during the holiday shopping season).
- **Reliability:** Organizations can choose among public cloud providers based on the cost, efficiency, security, bandwidth, etc. that match their needs. In a hybrid cloud, organizations can also decide where to store sensitive data and whether to expand instances in a private cloud or to expand geographically through public infrastructure. Also, the hybrid model's ability to store data and configurations across multiple sites supports backup, disaster recovery, and high availability.
- **Management:** Managing networking, storage, instances, and/or data can be tedious in non-integrated cloud environments. Traditional orchestration tools, in comparison to hybrid tools, are extremely modest and consequently limit decision making and automation for complete end-to-end processes and tasks. With hybrid cloud and an effective management application, you can keep track of every component as their numbers grow and, by regularly optimizing those components, minimize annual expense.
- **Security:** Security and privacy are critical when evaluating whether to place applications and data in the cloud. The IT department must verify all compliance requirements and deployment policies. Security in the public cloud is improving and continues to mature. And, in the hybrid cloud model, organizations can store highly sensitive information in the private cloud and integrate it with less sensitive data stored in the public cloud.
- **Pricing:** Cloud pricing is generally based on the infrastructure and service level agreement required. In the hybrid cloud model, users can compare costs at a granular level for compute resources (CPU/memory), bandwidth, storage, networking, public IP address, etc. Prices are either fixed or variable and can be metered monthly, hourly, or even per second. Therefore, users can always shop for the best pricing among public cloud providers and deploy their instances accordingly.
## Where hybrid cloud is today
Although there is a large and growing demand for public cloud offerings and migrating systems from on-premises to the public cloud, most large organizations remain concerned. Most still keep critical applications and data in corporate data centers and legacy systems. They fear losing control, security threats, data privacy, and data authenticity in public infrastructure. Because hybrid cloud minimizes these problems and maximizes benefits, it's the best solution for most large organizations.
## Where we'll be five years from now
I expect that the hybrid cloud model will be highly accepted globally, and corporate "no-cloud" policies will be rare, within only a handful of years. Here is what else I think we will see:
- Since hybrid cloud acts as a shared responsibility, there will be increased coordination between corporate and public cloud providers for implementing security measures to curb cyber attacks, malware, data leakage, and other threats.
- Bursting of instances will be rapid, so customers can spontaneously meet load requirements or perform self-healing.
- Further, orchestration or automation tools (such as [Ansible](https://opensource.com/life/16/8/cloud-ansible-gateway)) will play a significant role by inheriting intelligence for solving critical situations.
- Metering and the concept of "pay-as-you-go" will be transparent to customers, and tools will enable users to make decisions by monitoring fluctuating prices, safely destroy existing instances, and provision new instances to get the best available pricing.
What predictions do you have for hybrid cloud—and cloud computing in general—over the next five years? Please share your opinions in the comments.
|
8,793 | 在 Wireshark 中过滤数据包 | https://linuxconfig.org/filtering-packets-in-wireshark-on-kali-linux | 2017-08-21T09:34:00 | [
"Wireshark"
] | https://linux.cn/article-8793-1.html | 
### 介绍
数据包过滤可让你专注于你感兴趣的确定数据集。如你所见,Wireshark 默认会抓取*所有*数据包。这可能会妨碍你寻找具体的数据。 Wireshark 提供了两个功能强大的过滤工具,让你简单而无痛地获得精确的数据。
Wireshark 可以通过两种方式过滤数据包。它可以通过只收集某些数据包来过滤,或者在抓取数据包后进行过滤。当然,这些可以彼此结合使用,并且它们各自的用处取决于收集的数据和信息的多少。
### 布尔表达式和比较运算符
Wireshark 有很多很棒的内置过滤器。当开始输入任何一个过滤器字段时,你将看到它们会自动补完。这些过滤器大多数对应于用户对数据包的常见分组方式,比如仅过滤 HTTP 请求就是一个很好的例子。
对于其他的,Wireshark 使用布尔表达式和/或比较运算符。如果你曾经做过任何编程,你应该熟悉布尔表达式。它们使用 `and`、`or`、`not` 来判断语句或表达式的真假。比较运算符要简单得多,它们只是确定两件或更多件事情是否彼此相等、大于或小于。
### 过滤抓包
在深入自定义抓包过滤器之前,请先查看 Wireshark 已经内置的内容。单击顶部菜单上的 “Capture” 选项卡,然后点击 “Options”。可用接口下面是可以编写抓包过滤器的行。直接移到左边一个标有 “Capture Filter” 的按钮上。点击它,你将看到一个新的对话框,其中包含内置的抓包过滤器列表。看看里面有些什么。

在对话框的底部,有一个用于创建并保存抓包过滤器的表单。按左边的 “New” 按钮。它将创建一个填充有默认数据的新的抓包过滤器。要保存新的过滤器,只需将实际需要的名称和表达式替换原来的默认值,然后单击“Ok”。过滤器将被保存并应用。使用此工具,你可以编写并保存多个不同的过滤器,以便它们将来可以再次使用。
抓包有自己的过滤语法。对于比较,它不使用等于号,并使用 `>` 和 `<` 来用于大于或小于。对于布尔值来说,它使用 `and`、`or` 和 `not`。
例如,如果你只想监听 80 端口的流量,你可以使用这样的表达式:`port 80`。如果你只想从特定的 IP 监听端口 80,你可以使用 `port 80 and host 192.168.1.20`。如你所见,抓包过滤器有特定的关键字。这些关键字用于告诉 Wireshark 如何监控数据包以及哪一个数据是要找的。例如,`host` 用于查看来自 IP 的所有流量。`src` 用于查看源自该 IP 的流量。与之相反,`dst` 只监听目标到这个 IP 的流量。要查看一组 IP 或网络上的流量,请使用 `net`。
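作为补充,下面再给出几个可以直接填入抓包过滤器的表达式示例(其中的 IP 地址和网段只是假设,用来说明写法):

```
port 80 and host 192.168.1.20
src net 10.0.0.0/8 and not port 22
dst host 192.168.1.1 and tcp
```

第一行只抓取与某台主机之间 80 端口的流量,第二行抓取来自某个网段但排除 SSH 端口的流量,第三行只抓取发往某台主机的 TCP 流量。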
### 过滤结果
界面的底部菜单栏是专门用于过滤结果的菜单栏。此过滤器不会更改 Wireshark 收集的数据,它只允许你更轻松地对其进行排序。有一个文本字段用于输入新的过滤器表达式,并带有一个下拉箭头以查看以前输入的过滤器。旁边是一个标为 “Expression” 的按钮,另外还有一些用于清除和保存当前表达式的按钮。
点击 “Expression” 按钮。你将看到一个小窗口,其中包含多个选项。左边一栏有大量的条目,每个都有附加的折叠子列表。你可以用这些来过滤所有不同的协议、字段和信息。你不可能看完所有,所以最好是大概看下。你应该注意到了一些熟悉的选项,如 HTTP、SSL 和 TCP。

子列表包含可以过滤的不同部分和请求方法。你可以看到通过 GET 和 POST 请求过滤 HTTP 请求。
你还可以在中间看到运算符列表。通过从每列中选择条目,你可以使用此窗口创建过滤器,而不用记住 Wireshark 可以过滤的每个条目。对于过滤结果,比较运算符使用一组特定的符号。 `==` 用于确定是否相等。`>` 用于确定一件东西是否大于另一个东西,`<` 找出是否小一些。 `>=` 和 `<=` 分别用于大于等于和小于等于。它们可用于确定数据包是否包含正确的值或按大小过滤。使用 `==` 仅过滤 HTTP GET 请求的示例如下:`http.request.method == "GET"`。
布尔运算符基于多个条件将小的表达式串到一起。不像是抓包所使用的单词,它使用三个基本的符号来做到这一点。`&&` 代表 “与”。当使用时,`&&` 两边的两个语句都必须为真值才行,以便 Wireshark 来过滤这些包。`||` 表示 “或”。只要两个表达式任何一个为真值,它就会被过滤。如果你正在查找所有的 GET 和 POST 请求,你可以这样使用 `||`:`(http.request.method == "GET") || (http.request.method == "POST")`。`!` 是 “非” 运算符。它会寻找除了指定的东西之外的所有东西。例如,`!http` 将展示除了 HTTP 请求之外的所有东西。
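把这些比较运算符和布尔运算符组合起来,下面是几个可以直接粘贴到结果过滤框中的表达式示例(IP 地址仅为假设):

```
http.request.method == "GET"
(http.request.method == "GET") || (http.request.method == "POST")
ip.addr == 192.168.1.20 && tcp.port == 80
!dns
```

前两行按 HTTP 请求方法过滤,第三行只显示与某台主机在 80 端口上的 TCP 流量,最后一行则显示除 DNS 之外的所有数据包。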
### 总结思考
过滤 Wireshark 可以让你有效监控网络流量。熟悉可以使用的选项并习惯你可以创建过滤器的强大表达式需要一些时间。然而一旦你学会了,你将能够快速收集和查找你要的网络数据,而无需梳理长长的数据包或进行大量的工作。
---
via: <https://linuxconfig.org/filtering-packets-in-wireshark-on-kali-linux>
作者:[Nick Congleton](https://linuxconfig.org/filtering-packets-in-wireshark-on-kali-linux) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
8,795 | 了解 7z 命令开关(一) | https://www.howtoforge.com/tutorial/understanding-7z-command-switches/ | 2017-08-19T14:32:59 | [
"7z"
] | https://linux.cn/article-8795-1.html | 
7z 无疑是一个功能丰富且强大的归档工具(号称提供最高的压缩比)。在 HowtoForge 中,我们[已经讨论过](https://www.howtoforge.com/tutorial/how-to-install-and-use-7zip-file-archiver-on-ubuntu-linux/)如何安装和使用它。但讨论仅限于使用该工具提供的“功能字母”所能实现的基本功能。
在本教程中,我们将扩展对这个工具的说明,我们会讨论一些 7z 提供的“开关”。 但在继续之前,需要说明的是,本教程中提到的所有说明和命令都已在 Ubuntu 16.04 LTS 上进行了测试。
**注意**:我们将使用以下截图中显示的文件来执行使用 7zip 的各种操作。
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/ls.png)
### 包含文件
7z 工具允许你有选择地将文件包含在归档中。可以使用 `-i` 开关来使用此功能。
语法:
```
-i[r[-|0]]{@listfile|!wildcard}
```
比如,如果你想在归档中只包含 “.txt” 文件,你可以使用下面的命令:
```
$ 7z a ‘-i!*.txt’ include.7z
```
这是输出:
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/include.png)
现在,检查新创建的归档是否只包含 “.txt” 文件,你可以使用下面的命令:
```
$ 7z l include.7z
```
这是输出:
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/includelist.png)
在上面的截图中,你可以看到 “testfile.txt” 已经包含到归档中了。
### 排除文件
如果你想要,你可以排除不想要的文件。可以使用 `-x` 开关做到。
语法:
```
-x[r[-|0]]]{@listfile|!wildcard}
```
比如,如果你想在要创建的归档中排除 “abc.7z” ,你可以使用下面的命令:
```
$ 7z a ‘-x!abc.7z’ exclude.7z
```
这是输出:
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/exclude.png)
要检查最后的归档是否排除了 “abc.7z”, 你可以使用下面的命令:
```
$ 7z l exclude.7z
```
这是输出:
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/excludelist.png)
上面的截图中,你可以看到 “abc.7z” 已经从新的归档中排除了。
**专业提示**:假设任务是排除以 “t” 开头的所有 .7z 文件,并且包含以字母 “a” 开头的所有 .7z 文件。这可以通过以下方式组合 `-i` 和 `-x` 开关来实现:
```
$ 7z a '-x!t*.7z' '-i!a*.7z' combination.7z
```
### 设置归档密码
7z 同样也支持用密码保护你的归档文件。这个功能可以使用 `-p` 开关来实现。
```
$ 7z a [archive-filename] -p[your-password] -mhe=[on/off]
```
**注意**:`-mhe` 选项用来启用或者禁用归档头加密(默认是“off”)。
例子:
```
$ 7z a password.7z -pHTF -mhe=on
```
无需多说,当你解压密码保护的归档时,工具会向你询问密码。要解压一个密码保护的文件,使用 `e` 功能字母。下面是例子:
```
$ 7z e password.7z
```
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/password.png)
### 设置输出目录
工具同样支持解压文件到你选择的目录中。这可以使用 `-o` 开关。无需多说,这个开关只在含有 `e` 或者 `x` 功能字母的时候有用。
```
$ 7z [e/x] [existing-archive-filename] -o[path-of-directory]
```
比如,假设下面命令工作在当前的工作目录中:
```
$ 7z e output.7z -ohow/to/forge
```
如 `-o` 开关的值所指的那样,它的目标是解压文件到 ./how/to/forge 中。
这是输出:
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/output.png)
在上面的截图中,你可以看到归档文件的所有内容都已经解压了。但是在哪里?要检查文件是否被解压到 ./how/to/forge,我们可以使用 `ls -R` 命令。
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/ls_-R.png)
在上面的截图中,我们可以看到 .7z 中的内容都被解压到 ./how/to/forge 中。
### 创建多个卷
借助 7z 工具,你可以为归档创建多个卷(较小的子档案)。当通过网络或 USB 传输大文件时,这是非常有用的。可以使用 `-v` 开关使用此功能。这个开关需要指定子档案的大小。
我们可以以字节(b)、千字节(k)、兆字节(m)和千兆字节(g)指定子档案大小。
```
$ 7z a [archive-filename] [files-to-archive] -v[size-of-sub-archive1] -v[size-of-sub-archive2] ....
```
让我们用一个例子来理解这个。请注意,我们将使用一个新的目录来执行 `-v` 开关的操作。
这是目录内容的截图:
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/volumels.png)
现在,我们运行下面的命令来为一个归档文件创建多个卷(每个大小 100b):
```
7z a volume.7z * -v100b
```
这是截图:
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/volume.png)
现在,要查看创建的子归档,使用 `ls` 命令。
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/volumels2.png)
如下截图所示,一共创建四个卷 - volume.7z.001、volume.7z.002、volume.7z.003 和 volume.7z.004
**注意**:你可以使用 .7z.001 归档文件来解压。但是,要这么做,其他所有的卷都应该在同一个目录内。
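比如,假设所有卷都在当前目录下,从第一个卷开始解压大致是这样(这里沿用上文示例的文件名):

```
$ 7z x volume.7z.001
```

7z 会自动依次读取其余的卷,并还原出完整的内容。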
### 设置归档的压缩级别
7z 允许你设置归档的压缩级别。这个功能可以使用 `-m` 开关。7z 中有不同的压缩级别,比如:`-mx0`、`-mx1`、`-mx3`、`-mx5`、`-mx7` 和 `-mx9`。
这是这些压缩级别的简要说明:
* `mx0` = 完全不压缩 - 只是复制文件到归档中。
* `mx1` = 消耗最少时间,但是压缩最小。
* `mx3` = 比 `-mx1` 好。
* `mx5` = 这是默认级别 (常规压缩)。
* `mx7` = 最大化压缩。
* `mx9` = 极端压缩。
**注意**:关于这些压缩级别的更多信息,阅读[这里](http://askubuntu.com/questions/491223/7z-ultra-settings-for-zip-format)。
```
$ 7z a [archive-filename] [files-to-archive] -mx=[0,1,3,5,7,9]
```
例如,我们在目录中有一堆文件和文件夹,我们每次尝试使用不同的压缩级别进行压缩。作为一个例子,这是当使用压缩级别 “0” 时创建存档时使用的命令。
```
$ 7z a compression(-mx0).7z * -mx=0
```
相似地,其他命令也这样执行。
以下是输出档案(使用 “ls” 命令生成)的列表,其名称表示其创建中使用的压缩级别,输出中的第五列显示压缩级别对其大小的影响。
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/compression.png)
### 显示归档的技术信息
如果需要,7z 还可以在标准输出中显示归档的技术信息 - 类型、物理大小、头大小等。可以使用 `-slt` 开关使用此功能。 此开关仅适用于带有 `l` 功能字母的情况下。
```
$ 7z l -slt [archive-filename]
```
比如:
```
$ 7z l -slt abc.7z
```
这是输出:
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/slt.png)
### 指定创建归档的类型
如果你想要创建一个非 7z 的归档文件(这是默认的创建类型),你可以使用 `-t` 开关来指定。
```
$ 7z a -t[specify-type-of-archive] [archive-filename] [file-to-archive]
```
下面的例子展示创建了一个 .zip 文件:
```
7z a -tzip howtoforge *
```
输出的文件是 “howtoforge.zip”。要交叉验证它的类型,使用 `file` 命令:
[](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/type.png)
因此,howtoforge.zip 的确是一个 ZIP 文件。相似地,你可以创建其他 7z 支持的归档。
### 总结
你将会认识到, 7z 的 “功能字母” 以及 “开关” 的知识可以让你充分利用这个工具。我们还没有完成开关的部分 - 其余部分将在第 2 部分中讨论。
---
via: <https://www.howtoforge.com/tutorial/understanding-7z-command-switches/>
作者:[Himanshu Arora](https://www.howtoforge.com/tutorial/understanding-7z-command-switches/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | # Understanding 7z command switches - part I
7z is no doubt a feature-rich and powerful archiver (claimed to offer the highest compression ratio). Here at HowtoForge, we have [already discussed](https://www.howtoforge.com/tutorial/how-to-install-and-use-7zip-file-archiver-on-ubuntu-linux/) how you can install and use it. But the discussion was limited to basic features that you can access using the 'function letters' the tool provides.
Expanding our coverage on the tool, here in this tutorial, we will be discussing some of the 'switches' 7z offers. But before we proceed, it's worth sharing that all the instructions and commands mentioned in this tutorial have been tested on Ubuntu 16.04 LTS.
**Note**: We will be using the files displayed in the following screenshot for performing various operations using 7zip.
## Include files
The 7z tool allows you selectively include files in an archive. This feature can be accessed using the *-i* switch.
Syntax:
-i[r[-|0]]{@listfile|!wildcard}
For example, if you want to include only ‘.txt’ files in your archive, you can use the following command:
$ 7z a ‘-i!*.txt’ include.7z
Here is the output:
Now, to check whether the newly-created archive file contains only ‘.txt’ file or not, you can use the following command:
$ 7z l include.7z
Here is the output:
In the above screenshot, you can see that only ‘testfile.txt’ file has been added to the archive.
## Exclude files
If you want, you can also exclude the files that you don’t need. This can be done using the *-x* switch.
Syntax:
-x[r[-|0]]]{@listfile|!wildcard}
For example, if you want to exclude a file named ‘abc.7z’ from the archive that you are going to create, then you can use the following command:
$ 7z a ‘-x!abc.7z’ exclude.7z
Here is the output:
To check whether the resulting archive file has excluded ‘abc.7z’ or not, you can use the following command:
$ 7z l exclude.7z
Here is the output:
In the above screenshot, you can see that ‘abc.7z’ file has been excluded from the new archive file.
**Pro tip**: Suppose the task is to exclude all the .7z files with names starting with letter ‘t’ and include all .7z files with names starting with letter ‘a’ . This can be done by combining both ‘-i’ and ‘-x’ switches in the following way:
$ 7z a '-x!t*.7z' '-i!a*.7z' combination.7z
## Set password for your archive
7z also lets you password protect your archive file. This feature can be accessed using the *-p* switch.
$ 7z a [archive-filename] -p[your-password] -mhe=[on/off]
**Note**: *The -mhe option enables or disables archive header encryption (default is off)*.
For example:
$ 7z a password.7z -pHTF -mhe=on
Needless to say, when you will extract your password protected archive, the tool will ask you for the password. To extract a password-protected file, use the 'e' function letter. Following is an example:
$ 7z e password.7z
## Set output directory
The tool also lets you extract an archive file in the directory of your choice. This can be done using the *-o* switch. Needless to say, the switch only works when the command contains either the ‘e’ function letter or the ‘x’ function letter.
$ 7z [e/x] [existing-archive-filename] -o[path-of-directory]
For example, suppose the following command is run in the present working directory:
$ 7z e output.7z -ohow/to/forge
And, as the value passed to the *-o* switch suggests, the aim is to extract the archive in the *./how/to/forge* directory.
Here is the output:
In the above screenshot, you can see that all the contents of existing archive file has been extracted. But where? To check whether or not the archive file has been extracted in the *./how/to/forge* directory or not, we can use the ‘ls -R’ command.
In the above screenshot, we can see that all the contents of output.7z have indeed been extracted to ./how/to/forge.
## Creating multiple volumes
With the help of the 7z tool, you can create multiple volumes (smaller sub-archives) of your archive file. This is very useful when transferring large files over a network or in a USB. This feature can be accessed using the *-v* switch. The switch requires you to specify size of sub-archives.
We can specify size of sub-archives in bytes (b), kilobytes (k), megabytes (m) and gigabytes (g).
$ 7z a [archive-filename] [files-to-archive] -v[size-of-sub-archive1] -v[size-of-sub-archive2] ....
Let's understand this using an example. Please note that we will be using a new directory for performing operations on the *-v* switch.
Here is the screenshot of the directory contents:
Now, we can run the following command for creating multiple volumes (sized 100b each) of an archive file:
7z a volume.7z * -v100b
Here is the screenshot:
Now, to see the list of sub-archives that were created, use the ‘ls’ command.
As seen in the above screenshot, a total of four multiple volumes have been created - volume.7z.001, volume.7z.002, volume.7z.003, and volume.7z.004
**Note**: You can extract files using the .7z.001 archive. But, for that, all the other sub-archive volumes should be present in the same directory.
## Set compression level of archive
7z also allows you to set compression levels of your archives. This feature can be accessed using the *-m* switch. There are various compression levels in 7z, such as -mx0, -mx1, -mx3, -mx5, -mx7 and -mx9.
Here's a brief summary about these levels:
-**mx0** = Don't compress at all - just copy the contents to archive.
-**mx1** = Consumes least time, but compression is low.
-**mx3** = Better than -mx1.
-**mx5** = This is default (compression is normal).
-**mx7** = Maximum compression.
-**mx9** = Ultra compression.
**Note**: For more information on these compression levels, head [here](http://askubuntu.com/questions/491223/7z-ultra-settings-for-zip-format).
$ 7z a [archive-filename] [files-to-archive] -mx=[0,1,3,5,7,9]
For example, we have a bunch of files and folders in a directory, which we tried compressing using a different compression level each time. Just to give you an idea, here's the command used when the archive was created with compression level '0'.
$ 7z a compression(-mx0).7z * -mx=0
Similarly, other commands were executed.
Here is the list of output archives (produced using the 'ls' command), with their names suggesting the compression level used in their creation, and the fifth column in the output revealing the effect of compression level on their size.
## Display technical information of archive
If you want, 7z also lets you display technical information of an archive - it's type, physical size, header size, and so on - on the standard output. This feature can be accessed using the *-slt* switch. This switch only works with the ‘l’ function letter.
$ 7z l -slt [archive-filename]
For example:
$ 7z l -slt abc.7z
Here is the output:
## Specify type of archive to create
If you want to create a non 7zip archive (which gets created by default), you can specify your choice using the *-t* switch.
$ 7z a -t[specify-type-of-archive] [archive-filename] [file-to-archive]
The following example shows a command to create a .zip file:
7z a -tzip howtoforge *
The output file produced is 'howtoforge.zip'. To cross verify its type, use the 'file' command:
So, howtoforge.zip is indeed a ZIP file. Similarly, you can create other kind of archives that 7z supports.
## Conclusion
As you would agree, the knowledge of 7z 'function letters' along with 'switches' lets you make the most out of the tool. We aren't yet done with switches - there are some more that will be discussed in part 2. |
8,796 | 极客漫画:最大之数 | http://turnoff.us/geek/big-numbers/ | 2017-08-21T10:15:00 | [
"漫画"
] | https://linux.cn/article-8796-1.html | 
人们总是在说 Java 要完了,然后 Java 年复一年的还继续活着……
---
via: <http://turnoff.us/geek/big-numbers/>
作者:[Daniel Stori](http://turnoff.us/about/) 译者:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,797 | Linux 容器轻松应对性能工程 | https://opensource.com/article/17/2/performance-container-world | 2017-08-20T18:42:00 | [
"性能",
"容器"
] | https://linux.cn/article-8797-1.html | 
应用程序的性能决定了软件能多快完成预期任务。这回答有关应用程序的几个问题,例如:
* 峰值负载下的响应时间
* 与替代方案相比,它易于使用,受支持的功能和用例
* 运营成本(CPU 使用率、内存需求、数据吞吐量、带宽等)
该性能分析的价值超出了服务负载所需的计算资源或满足峰值需求所需的应用实例数量的估计。性能显然与成功企业的基本要素挂钩。它揭示了用户的总体体验,包括确定什么会拖慢客户预期的响应时间,通过设计满足带宽要求的内容交付来提高客户粘性,选择最佳设备,最终帮助企业发展业务。
### 问题
当然,这是对业务服务的性能工程价值的过度简化。为了理解在完成我刚刚所描述事情背后的挑战,让我们把它放到一个真实的稍微有点复杂的场景中。
现实世界的应用程序可能托管在云端。应用程序可以利用非常大(或概念上是无穷大)的计算资源。在硬件和软件方面的需求将通过云来满足。从事开发工作的开发人员将使用云交付功能来实现更快的编码和部署。云托管不是免费的,但成本开销与应用程序的资源需求成正比。
除了<ruby> 搜索即服务 <rt> Search as a Service </rt></ruby>(SaaS)、<ruby> 平台即服务 <rt> Platform as a Service </rt></ruby>(PaaS)、<ruby> 基础设施即服务 <rt> Infrastructure as a Service </rt></ruby>(IaaS)以及<ruby> 负载平衡即服务 <rt> Load Balancing as a Service </rt></ruby>(LBaaS)之外,当云端管理托管程序的流量时,开发人员可能还会使用这些快速增长的云服务中的一个或多个:
* <ruby> 安全即服务 <rt> Security as a Service </rt></ruby> (SECaaS),可满足软件和用户的安全需求
* <ruby> 数据即服务 <rt> Data as a Service </rt></ruby> (DaaS),为应用提供了用户需求的数据
* <ruby> 日志即服务 <rt> Logging as a Service </rt></ruby> (LaaS),DaaS 的近亲,提供关于日志交付和使用情况的分析指标
* <ruby> 搜索即服务 <rt> Search as a Service </rt></ruby> (SaaS),用于应用程序的分析和大数据需求
* <ruby> 网络即服务 <rt> Network as a Service </rt></ruby> (NaaS),用于通过公共网络发送和接收数据
云服务也呈指数级增长,因为它们使得开发人员更容易编写复杂的应用程序。除了软件复杂性之外,所有这些分布式组件的相互作用变得越来越多。用户群变得更加多元化。该软件的需求列表变得更长。对其他服务的依赖性变大。由于这些因素,这个生态系统的缺陷会引发性能问题的多米诺效应。
例如,假设你有一个精心编写的应用程序,它遵循安全编码实践,旨在满足不同的负载要求,并经过彻底测试。另外假设你已经将基础架构和分析工作结合起来,以支持基本的性能要求。在系统的实现、设计和架构中建立性能标准需要做些什么?软件如何跟上不断变化的市场需求和新兴技术?如何测量关键参数以调整系统以获得最佳性能?如何使系统具有弹性和自我恢复能力?你如何更快地识别任何潜在的性能问题,并尽早解决?
### 进入容器
软件[容器](https://opensource.com/resources/what-are-linux-containers)以[微服务](https://opensource.com/resources/what-are-microservices)设计或面向服务的架构(SoA)的优点为基础,提高了性能,因为包含更小的、自足的代码块的系统更容易编码,对其它系统组件有更清晰、定义良好的依赖。测试也更容易,诸如资源利用和内存过度消耗之类的问题,也比在庞大的单体架构中更容易定位。
当扩容系统以增加负载能力时,容器应用程序的复制快速而简单。安全漏洞能更好地隔离。补丁可以独立版本化并快速部署。性能监控更有针对性,测量更可靠。你还可以重写和“改版”资源密集型代码,以满足不断变化的性能要求。
容器启动快速,停止也快速。它比虚拟机(VM)有更高效的资源利用和更好的进程隔离。容器没有空闲内存和 CPU 闲置。它们允许多个应用程序共享机器,而不会丢失数据或性能。容器使应用程序可移植,因此开发人员可以构建并将应用程序发送到任何支持容器技术的 Linux 服务器上,而不必担心性能损失。容器在自身的资源限额之内运行,并遵守其集群管理器(如 Cloud Foundry 的 Diego、[Kubernetes](https://opensource.com/resources/what-is-kubernetes)、Apache Mesos 和 Docker Swarm)所规定的配额(比如存储、计算和对象计数配额)。
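举个例子,如果用 Kubernetes 这类集群管理器托管容器化应用,扩容通常只是一条命令的事(下面的 Deployment 名称 my-app 只是示意):

```
kubectl scale deployment my-app --replicas=10
```

集群管理器会负责把新的副本调度到有空余资源的机器上,这正是上文所说的“复制快速而简单”。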
容器在性能方面表现出色,而即将到来的 “serverless” 计算(也称为<ruby> 功能即服务 <rt> Function as a Service </rt></ruby>(FaaS))的浪潮将扩大容器的优势。在 FaaS 时代,这些临时性或短期的容器将带来超越应用程序性能的优势,直接转化为在云中托管的间接成本的节省。如果容器的工作更快,那么它的寿命就会更短,而且计算量负载纯粹是按需的。
---
作者简介:
Garima 是 Red Hat 的工程经理,专注于 OpenShift 容器平台。在加入 Red Hat 之前,Garima 帮助 Akamai Technologies&MathWorks Inc. 开创了创新。
---
via: <https://opensource.com/article/17/2/performance-container-world>
作者:[Garima](https://opensource.com/users/garimavsharma) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Performance for an application determines how quickly your software can complete the intended task. It answers questions about the application, such as:
- Response time under peak load
- Ease of use, supported functionality, and use cases compared to an alternative
- Operational costs (CPU usage, memory needs, data throughput, bandwidth, etc.)
The value of this performance analysis extends beyond the estimation of the compute resources needed to serve the load or the number of application instances needed to meet the peak demand. Performance is clearly tied to the fundamentals of a successful business. It informs the overall user experience, including identifying what slows down customer-expected response times, improving customer stickiness by designing content delivery optimized to their bandwidth, choosing the best device, and ultimately helping enterprises grow their business.
## The problem
Of course, this is an oversimplification of the value of performance engineering for business services. To understand the challenges behind accomplishing what I've just described, let's make this real and just a little bit complicated.
Real-world applications are likely hosted on the cloud. An application could avail to very large (or conceptually infinite) amounts of compute resources. Its needs in terms of both hardware and software would be met via the cloud. The developers working on it would use the cloud-offered features for enabling faster coding and deployment. Cloud hosting doesn't come free, but the cost overhead is proportional to the resource needs of the application.
Outside of Search as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Load Balancing as a Service (LBaaS), which is when the cloud takes care of traffic management for this hosted app, a developer probably may also use one or more of these fast-growing cloud services:
- Security as a Service (SECaaS), which meets security needs for software and the user
- Data as a Service (DaaS), which provides a user's data on demand for application
- Logging as a Service (LaaS), DaaS's close cousin, which provides analytic metrics on delivery and usage of logs
- Search as a Service (SaaS), which is for the analytics and big data needs of the app
- Network as a Service (NaaS), which is for sending and receiving data across public networks
Cloud-powered services are also growing exponentially because they make writing complex apps easier for developers. In addition to the software complexity, the interplay of all these distributed components becomes more involved. The user base becomes more diverse. The list of requirements for the software becomes longer. The dependencies on other services becomes larger. Because of these factors, the flaws in this ecosystem can trigger a domino effect of performance problems.
For example, assume you have a well-written application that follows secure coding practices, is designed to meet varying load requirements, and is thoroughly tested. Assume also that you have the infrastructure and analytics work in tandem to support the basic performance requirements. What does it take to build performance standards into the implementation, design, and architecture of your system? How can the software keep up with evolving market needs and emerging technologies? How do you measure the key parameters to tune a system for optimal performance as it ages? How can the system be made resilient and self-recovering? How can you identify any underlying performance problems faster and resolved them sooner?
## Enter containers
Software [containers](https://opensource.com/resources/what-are-linux-containers) backed with the merits of [microservices](https://opensource.com/resources/what-are-microservices) design, or Service-oriented Architecture (SoA), improves performance because a system comprising of smaller, self-sufficient code blocks is easier to code and has cleaner, well-defined dependencies on other system components. It is easier to test and problems, including those around resource utilization and memory over-consumption, are more easily identified than in a giant monolithic architecture.
When scaling the system to serve increased load, the containerized applications replicate fast and easy. Security flaws are better isolated. Patches can be versioned independently and deployed fast. Performance monitoring is more targeted and the measurements are more reliable. You can also rewrite and "facelift" resource-intensive code pieces to meet evolving performance requirements.
Containers start fast and stop fast. They enable efficient resource utilization and far better process isolation than Virtual Machines (VMs). Containers do not have idle memory and CPU overhead. They allow for multiple applications to share a machine without the loss of data or performance. Containers make applications portable, so developers can build and ship apps to any server running Linux that has support for container technology, without worrying about performance penalties. Containers live within their means and abide by the quotas (examples include storage, compute, and object count quotas) as imposed by their cluster manager, such as Cloud Foundry's Diego, [Kubernetes](https://opensource.com/resources/what-is-kubernetes), Apache Mesos, and Docker Swarm.
While containers show merit in performance, the coming wave of "serverless" computing, also known as Function as a Service (FaaS), is set to extend the benefits of containers. In the FaaS era, these ephemeral or short-lived containers will drive the benefits beyond application performance and translate directly to savings in overhead costs of hosting in the cloud. If the container does its job faster, then it lives for a shorter time, and the computation overload is purely on demand.
|
8,798 | 如何解决 VLC 视频嵌入字幕中遇到的错误 | http://www.dedoimedo.com/computers/vlc-subtitles-errors.html | 2017-08-21T09:00:12 | [
"VLC",
"视频",
"字幕"
] | https://linux.cn/article-8798-1.html | 这会是一个有点奇怪的教程。背景故事如下。最近,我用 [Risitas y las paelleras](https://www.youtube.com/watch?v=cDphUib5iG4) 的素材创作了一堆[有趣的](https://www.youtube.com/watch?v=MpDdGOKZ3dg)[恶搞](https://www.youtube.com/watch?v=KHG6fXEba0A)[片段](https://www.youtube.com/watch?v=TXw5lRi97YY),以主角 Risitas 疯狂的笑声而闻名。和往常一样,我把它们上传到了 Youtube,但是从我决定使用字幕起,到最终在网上可以观看时,我经历了一个漫长而曲折的历程。
在本指南中,我想介绍几个你可能会在创作自己的媒体时会遇到的典型问题,主要是使用字幕方面,然后上传到媒体共享门户网站,特别是 Youtube 中,以及如何解决这些问题。跟我来。
### 背景故事
我选择的视频编辑软件是 Kdenlive,当我创建那愚蠢的 [Frankenstein](http://www.dedoimedo.com/computers/frankenstein-media.html) 片段时开始使用这个软件,从那以后它一直是我的忠实伙伴。通常,我将文件交给带有 VP8 视频编解码器和 Vorbis 音频编解码器的 WebM 容器来渲染,因为这是 Google 所喜欢的格式。事实上,我在过去七年里上传的大约 40 个不同的片段中都没有问题。


但是,在完成了我的 Risitas&Linux 项目之后,我遇到了一个困难。视频文件和字幕文件仍然是两个独立的实体,我需要以某种方式将它们放在一起。我最初关于字幕的文章提到了 Avidemux 和 Handbrake,这两个都是有效的选项。
但是,我对它们任何一个的输出都并不满意,而且由于种种原因,有些东西有所偏移。 Avidemux 不能很好处理视频编码,而 Handbrake 在最终输出中省略了几行字幕,而且字体是丑陋的。这个可以解决,但这不是今天的话题。
因此,我决定使用 VideoLAN(VLC) 将字幕嵌入视频。有几种方法可以做到这一点。你可以使用 “Media > Convert/Save” 选项,但这不能达到我们需要的。相反,你应该使用 “Media > Stream”,它带有一个更完整的向导,它还提供了一个我们需要的可编辑的代码转换选项 - 请参阅我的[教程](http://www.dedoimedo.com/computers/vlc-subtitles.html)关于字幕的部分。
### 错误!
嵌入字幕的过程并没那么简单的。你有可能遇到几个问题。本指南应该能帮助你解决这些问题,所以你可以专注于你的工作,而不是浪费时间调试怪异的软件错误。无论如何,在使用 VLC 中的字幕时,你将会遇到一小部分可能会遇到的问题。尝试以及出错,还有书呆子的设计。
### 没有可播放的流
你可能选择了奇怪的输出设置。你要仔细检查你是否选择了正确的视频和音频编解码器。另外,请记住,一些媒体播放器可能没有所有的编解码器。此外,确保在所有要播放的系统中都测试过了。

### 字幕叠加两次
如果在第一步的流媒体向导中选择了 “Use a subtitle file”,则可能会发生这种情况。只需选择所需的文件,然后单击 “Stream”。取消选中该框。

### 字幕没有输出
这可能是两个主要原因。一、你选择了错误的封装格式。在进行编辑之前,请确保在配置文件页面上正确标记了字幕。如果格式不支持字幕,它可能无法正常工作。

二、你可能已经在最终输出中启用了字幕编解码器渲染功能。你不需要这个。你只需要将字幕叠加到视频片段上。在单击 “Stream” 按钮之前,请检查生成的流输出字符串并删除 “scodec=” 的选项。

### 缺少编解码器的解决方法
这是一个常见的 [bug](https://trac.videolan.org/vlc/ticket/6184),源于编码器实现的实验性质。如果你选择了以下配置文件,你将很有可能会遇到它:“Video - H.264 + AAC (MP4)”。该文件会被渲染出来,如果你选择了字幕,它们也会被叠加上去,但没有任何音频。不过,我们可以用一些技巧来解决这个问题。


一个可能的技巧是从命令行使用 “--sout-ffmpeg-strict=-2” 选项(可能有用)启动 VLC。另一个更安全的解决方法是采用无音频视频,但是带有字幕叠加,并将不带字幕的原始项目作为音频源用 Kdenlive 渲染。听上去很复杂,下面是详细步骤:
* 将现有片段(包含音频)从视频移动到音频。删除其余的。
* 或者,使用渲染过的 WebM 文件作为你的音频源。
* 添加新的片段 - 带有字幕,并且没有音频。
* 将片段放置为新视频。
* 再次渲染为 WebM。

使用其他类型的音频编解码器将很有可能可用(例如 MP3),你将拥有一个包含视频、音频和字幕的完整项目。如果你很高兴没有遗漏,你可以现在上传到 Youtube 上。但是之后 ...
### Youtube 视频管理器和未知格式
如果你尝试上传非 WebM 片段(例如 MP4),则可能会收到未指定的错误,你的片段不符合媒体格式要求。我不知道为什么 VLC 会生成一个不符合 YouTube 规定的文件。但是,修复很容易。使用 Kdenlive 重新创建视频,将会生成带有所有正确的元字段和 Youtube 喜欢的文件。回到我原来的故事,我有 40 多个片段使用 Kdenlive 以这种方式创建。
P.S. 如果你的片段有有效的音频,则只需通过 Kdenlive 重新运行它。如果没有,重做视频/音频。根据需要将片段静音。最终,这就像叠加一样,除了你使用的视频来自于一个片段,而音频来自于另一个片段。工作完成。
### 更多阅读
我不想用链接重复自己或垃圾信息。在“软件与安全”部分,我有 VLC 上的片段,因此你可能需要咨询。前面提到的关于 VLC 和字幕的文章已经链接到大约六个相关教程,涵盖了其他主题,如流媒体、日志记录、视频旋转、远程文件访问等等。我相信你可以像专业人员一样使用搜索引擎。
### 总结
我希望你觉得本指南有帮助。它涵盖了很多,我试图使其直接而简单,并解决流媒体爱好者和字幕爱好者在使用 VLC 时可能遇到的许多陷阱。这都与容器和编解码器相关,而且媒体世界几乎没有标准的事实,当你从一种格式转换到另一种格式时,有时你可能会遇到边际情况。
如果你遇到了一些错误,这里的提示和技巧应该可以至少帮助你解决一些,包括无法播放的流、丢失或重复的字幕、缺少编解码器和 Kdenlive 解决方法、YouTube 上传错误、隐藏的 VLC 命令行选项,还有一些其他东西。是的,这些对于一段文字来说是很多的。幸运的是,这些都是好东西。保重,互联网的孩子们。如果你有任何其他要求,我将来的 VLC 文章应该会涵盖,请随意给我发邮件。
干杯。
---
via: <http://www.dedoimedo.com/computers/vlc-subtitles-errors.html>
作者:[Dedoimedo](http://www.dedoimedo.com/faq.html) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,799 | 11 个使用 GNOME 3 桌面环境的理由 | https://opensource.com/article/17/5/reasons-gnome | 2017-08-22T11:43:32 | [
"GNOME"
] | https://linux.cn/article-8799-1.html |
>
> GNOME 3 桌面的设计目的是简单、易于访问和可靠。GNOME 的受欢迎程度证明达成了这些目标。
>
>
>

去年年底,在我升级到 Fedora 25 后,新版本 [KDE](https://opensource.com/life/15/4/9-reasons-to-use-kde) Plasma 出现了一些问题,这使我难以完成任何工作。所以我决定尝试其他的 Linux 桌面环境有两个原因。首先,我需要完成我的工作。第二,多年来一直使用 KDE,我想可能是尝试一些不同的桌面的时候了。
我尝试了几个星期的第一个替代桌面是我在 1 月份文章中写到的 [Cinnamon](/article-8606-1.html),然后我写了用了大约八个星期的[LXDE](/article-8434-1.html),我发现这里有很多事情我都喜欢。我用了 [GNOME 3](https://www.gnome.org/gnome-3/) 几个星期来写了这篇文章。
像几乎所有的网络世界一样,GNOME 是缩写;它代表 “GNU 网络对象模型”(GNU Network Object Model)。GNOME 3 桌面设计的目的是简单、易于访问和可靠。GNOME 的受欢迎程度证明达成了这些目标。
GNOME 3 在需要大量屏幕空间的环境中非常有用。这意味着两个具有高分辨率的大屏幕,并最大限度地减少桌面小部件、面板和用来启动新程序之类任务的图标所需的空间。GNOME 项目有一套人机接口指南(HIG),用来定义人类应该如何与计算机交互的 GNOME 哲学。
### 我使用 GNOME 3 的十一个原因
1、 **诸多选择:** GNOME 以多种形式出现在我个人喜爱的 Fedora 等一些发行版上。你可以选择的桌面登录选项有 GNOME Classic、Xorg 上的 GNOME、GNOME 和 GNOME(Wayland)。从表面上看,启动后这些都是一样的,但它们使用不同的 X 服务器,或者使用不同的工具包构建。Wayland 在小细节上提供了更多的功能,例如动态滚动,拖放和中键粘贴。
2、 **入门教程:** 在用户第一次登录时会显示入门教程。它向你展示了如何执行常见任务,并提供了大量的帮助链接。教程在首次启动时被关闭后,之后仍可以随时轻松打开。教程非常简单直观,为 GNOME 新用户提供了一个简单明了的起点。之后要返回本教程,请点击 **Activities**,然后点击会显示程序的有九个点的正方形,再找到并点击标为救生员图标的 **Help**。
3、 **桌面整洁:** 对桌面环境采用极简方法以减少杂乱,GNOME 设计为仅提供具备可用环境所必需的最低限度。你应该只能看到顶部栏(是的,它就叫这个),其他所有的都被隐藏,直到需要才显示。目的是允许用户专注于手头的任务,并尽量减少桌面上其他东西造成的干扰。
4、 **顶部栏:** 无论你想做什么,顶部栏总是开始的地方。你可以启动应用程序、注销、关闭电源、启动或停止网络等。不管你想做什么都很简单。除了当前应用程序之外,顶栏通常是桌面上唯一的其他对象。
5、 **dash:** 如下所示,在默认情况下, dash 包含三个图标。在开始使用应用程序时,会将它们添加到 dash 中,以便在其中显示最常用的应用程序。你也可以从应用程序查看器中将应用程序图标添加到 dash 中。

6、 **应用程序浏览器:** 我真的很喜欢这个可以从位于 GNOME 桌面左侧的垂直条上访问应用程序浏览器。除非有一个正在运行的程序,GNOME 桌面通常没有任何东西,所以你必须点击顶部栏上的 **Activities** 选区,点击 dash 底部的九个点组成的正方形,它是应用程序浏览器的图标。

如上所示,浏览器本身是一个由已安装的应用程序的图标组成的矩阵。矩阵下方有一对互斥的按钮,**Frequent** 和 **All**。默认情况下,应用程序浏览器会显示所有安装的应用。点击 **Frequent** 按钮,它会只显示最常用的应用程序。向上和向下滚动以找到要启动的应用程序。应用程序按名称按字母顺序显示。
[GNOME](https://www.gnome.org/gnome-3/) 官网和内置的帮助有更多关于浏览器的细节。
7、 **应用程序就绪通知:** 当新启动的应用程序的窗口打开并准备就绪时,GNOME 会在屏幕顶部显示一个整齐的通知。只需点击通知即可切换到该窗口。与在其他桌面上搜索新打开的应用程序窗口相比,这节省了一些时间。
8、 **应用程序显示:** 为了访问不可见的其它运行的应用程序,点击 **Activities** 菜单。这将在桌面上的矩阵中显示所有正在运行的应用程序。点击所需的应用程序将其带到前台。虽然当前应用程序显示在顶栏中,但其他正在运行的应用程序不会。
9、 **最小的窗口装饰:** 桌面上打开窗口也很简单。标题栏上唯一显示的按钮是关闭窗口的 “X”。所有其他功能,如最小化、最大化、移动到另一个桌面等,可以通过在标题栏上右键单击来访问。
10、 **自动创建新桌面:** 当下一个空桌面被使用时,系统会自动创建一个新的空桌面。这意味着总是有一个空桌面可以在需要时使用。我用过的其他桌面系统也都可以让你设置桌面数量,但必须通过系统设置手动完成。
11、 **兼容性:** 与我所使用的所有其他桌面一样,为其他桌面创建的应用程序可在 GNOME 上正常工作。这功能让我有可能测试这些桌面,以便我可以写出它们。
### 最后的思考
GNOME 不像我以前用过的桌面。它的主要指导是“简单”。其他一切都要以简单易用为前提。如果你从入门教程开始,学习如何使用 GNOME 需要很少的时间。这并不意味着 GNOME 有所不足。它是一款始终保持不变的功能强大且灵活的桌面。
(题图:[Gunnar Wortmann](https://pixabay.com/en/users/karpartenhund-3077375/) 通过 [Pixabay](https://pixabay.com/en/garden-gnome-black-and-white-f%C3%B6hr-1584401/)。由 Opensource.com 修改。[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/))
---
作者简介:
David Both - David Both 是位于北卡罗来纳州罗利的 Linux 和开源倡导者。他已经在 IT 行业工作了四十多年,并为 IBM 教授 OS/2 超过 20 年。在 IBM,他在 1981 年为初始的 IBM PC 写了第一个培训课程。他为红帽教授 RHCE 课程,曾在 MCI Worldcom、思科和北卡罗来纳州工作。他一直在使用 Linux 和开源软件近 20 年。
---
via: <https://opensource.com/article/17/5/reasons-gnome>
作者:[David Both](https://opensource.com/users/dboth) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Late last year, an upgrade to Fedora 25 caused issues with the new version of [KDE](https://opensource.com/life/15/4/9-reasons-to-use-kde) Plasma that made it difficult for me to get any work done. So I decided to try other Linux desktop environments for two reasons. First, I needed to get my work done. Second, having been using KDE exclusively for many years, I thought it might be time to try some different desktops.
The first alternate desktop I tried for several weeks was [Cinnamon](https://opensource.com/article/17/1/cinnamon-desktop-environment) which I wrote about in January, and then I wrote about [LXDE](https://opensource.com/article/17/3/8-reasons-use-lxde) which I used for about eight weeks and I have found many things about it that I like. I have used [GNOME 3](https://www.gnome.org/gnome-3/) for a few weeks to research this article.
Like almost everything else in the cyberworld, GNOME is an acronym; it stands for GNU Network Object Model. The GNOME 3 desktop was designed with the goals of being simple, easy to access, and reliable. GNOME's popularity attests to the achievement of those goals.
GNOME 3 is useful in environments where lots of screen real-estate is needed. That means both large screens with high resolution, and minimizing the amount of space needed by the desktop widgets, panels, and icons to allow access to tasks like launching new programs. The GNOME project has a set of Human Interface Guidelines (HIG) that are used to define the GNOME philosophy for how humans should interface with the computer.
## My eleven reasons for using GNOME 3
-
**Choice:**GNOME is available in many forms on some distributions like my personal favorite, Fedora. The login options for your desktop of choice are GNOME Classic, GNOME on Xorg, GNOME, and GNOME (Wayland). On the surface, these all look the same once they are launched but they use different X servers or are built with different toolkits. Wayland provides more functionality for the little niceties of the desktop such as kinetic scrolling, drag-and-drop, and paste with middle click. -
**Getting started tutorial:**The getting started tutorial is displayed the first time a user logs into the desktop. It shows how to perform common tasks and provides a link to more extensive help. The tutorial is also easily accessible after it is dismissed on first boot so it can be accessed at any time. It is very simple and straightforward and provides users new to GNOME an easy and obvious starting point. To return to the tutorial later, click on**Activities**, then click on the square of nine dots which displays the applications. Then find and click on the life preserver icon labeled,**Help**. -
**Clean deskto****p:**With a minimalist approach to a desktop environment in order to reduce clutter, GNOME is designed to present only the minimum necessary to have a functional environment. You should see only the top bar (yes, that is what it is called) and all else is hidden until needed. The intention is to allow the user to focus on the task at hand and to minimize the distractions caused by other stuff on the desktop. -
**The top bar:**The top bar is always the place to start, no matter what you want to do. You can launch applications, log out, power off, start or stop the network, and more. This makes life simple when you want to do anything. Aside from the current application, the top bar is usually the only other object on the desktop. -
**The dash:**The dash contains three icons by default, as shown below. As you start using applications, they are added to the dash so that your most frequently used applications are displayed there. You can also add application icons to the dash yourself from the application viewer. -
**A****pplication****v****iewer:**I really like the application viewer that is accessible from the vertical bar on the left side of the GNOME desktop,, above. The GNOME desktop normally has nothing on it unless there is a running program so you must click on the**Activities**selection on the top bar, click on the square consisting of nine dots at the bottom of the dash, which is the icon for the viewer.The viewer itself is a matrix consisting of the icons of the installed applications as shown above. There is a pair of mutually exclusive buttons below the matrix,
**Frequent**and**All**. By default, the application viewer shows all installed applications. Click on the**Frequent**button and it shows only the applications used most frequently. Scroll up and down to locate the application you want to launch. The applications are displayed in alphabetical order by name.The
[GNOME](https://www.gnome.org/gnome-3/)website and the built-in help have more detail on the viewer. -
**Application ready n****otification****s:**GNOME has a neat notifier that appears at top of screen when the window for a newly launched app is open and ready. Simply click on the notification to switch to that window. This saved me some time compared to searching for the newly opened application window on some other desktops. -
**A****pplication****display****:**In order to access a different running application that is not visible you click on the activity menu. This displays all of the running applications in a matrix on the desktop. Click on the desired application to bring it to the foreground. Although the current application is displayed in the Top Bar, other running applications are not. -
**Minimal w****indow decorations:**Open windows on the desktop are also quite simple. The only button apparent on the title bar is the "**X**" button to close a window. All other functions such as minimize, maximize, move to another desktop, and so on, are accessible with a right-click on the title bar. -
**New d****esktops are automatically created:**New empty desktops created automatically when the next empty one down is used. This means that there will always be one empty desktop and available when needed. All of the other desktops I have used allow you to set the number of desktops while the desktop is active, too, but it must be done manually using the system settings. -
**Compatibility:**As with all of the other desktops I have used, applications created for other desktops will work correctly on GNOME. This is one of the features that has made it possible for me to test all of these desktops so that I can write about them.
## Final thoughts
GNOME is a desktop unlike any other I have used. Its prime directive is "simplicity." Everything else takes a back seat to simplicity and ease of use. It takes very little time to learn how to use GNOME if you start with the getting started tutorial. That does not mean that GNOME is deficient in any way. It is a powerful and flexible desktop that stays out of the way at all times.
|
8,800 | 一文了解 Kubernetes 是什么? | https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/ | 2017-08-22T13:47:00 | [
"Kubernetes"
] | https://linux.cn/article-8800-1.html | 
这是一篇 Kubernetes 的概览。
Kubernetes 是一个[自动化部署、伸缩和操作应用程序容器的开源平台](http://www.slideshare.net/BrianGrant11/wso2con-us-2015-kubernetes-a-platform-for-automating-deployment-scaling-and-operations)。
使用 Kubernetes,你可以快速、高效地满足用户以下的需求:
* 快速精准地部署应用程序
* 即时伸缩你的应用程序
* 无缝展现新特征
* 限制硬件用量仅为所需资源
我们的目标是培育一个工具和组件的生态系统,以减缓在公有云或私有云中运行的程序的压力。
#### Kubernetes 的优势
* **可移动**: 公有云、私有云、混合云、多态云
* **可扩展**: 模块化、插件化、可挂载、可组合
* **自修复**: 自动部署、自动重启、自动复制、自动伸缩
Google 公司于 2014 年启动了 Kubernetes 项目。Kubernetes 是在 [Google 的长达 15 年的成规模的产品级任务的经验下](https://research.google.com/pubs/pub43438.html)构建的,结合了来自社区的最佳创意和实践经验。
### 为什么选择容器?
想要知道你为什么要选择使用 [容器](https://aucouranton.com/2014/06/13/linux-containers-parallels-lxc-openvz-docker-and-more/)?

程序部署的*传统方法*是指通过操作系统包管理器在主机上安装程序。这样做的缺点是,容易混淆程序之间以及程序和主机系统之间的可执行文件、配置文件、库、生命周期。为了达到精准展现和精准回撤,你可以搭建一台不可变的虚拟机镜像。但是虚拟机体量往往过于庞大而且不可转移。
容器部署的*新的方式*是基于操作系统级别的虚拟化,而非硬件虚拟化。容器彼此是隔离的,与宿主机也是隔离的:它们有自己的文件系统,彼此之间不能看到对方的进程,分配到的计算资源都是有限制的。它们比虚拟机更容易搭建。并且由于和基础架构、宿主机文件系统是解耦的,它们可以在不同类型的云上或操作系统上转移。
正因为容器又小又快,每一个容器镜像都可以打包装载一个程序。这种一对一的“程序 - 镜像”联系带给了容器诸多便捷。有了容器,静态容器镜像可以在编译/发布时期创建,而非部署时期。因此,每个应用不必再等待和整个应用栈其它部分进行整合,也不必和产品基础架构环境之间进行妥协。在编译/发布时期生成容器镜像建立了一个持续地把开发转化为产品的环境。相似地,容器远比虚拟机更加透明,尤其在设备监控和管理上。这一点,在容器的进程生命周期被基础架构管理而非被容器内的进程监督器隐藏掉时,尤为显著。最终,随着每个容器内都装载了单一的程序,管理容器就等于管理或部署整个应用。
容器优势总结:
* **敏捷的应用创建与部署**:相比虚拟机镜像,容器镜像的创建更简便、更高效。
* **持续的开发、集成,以及部署**:在快速回滚下提供可靠、高频的容器镜像编译和部署(基于镜像的不可变性)。
* **开发与运营的关注点分离**:由于容器镜像是在编译/发布期创建的,因此整个过程与基础架构解耦。
* **跨开发、测试、产品阶段的环境稳定性**:在笔记本电脑上的运行结果和在云上完全一致。
* **在云平台与 OS 上分发的可转移性**:可以在 Ubuntu、RHEL、CoreOS、预置系统、Google 容器引擎,乃至其它各类平台上运行。
* **以应用为核心的管理**: 从在虚拟硬件上运行系统,到在利用逻辑资源的系统上运行程序,从而提升了系统的抽象层级。
* **松散耦联、分布式、弹性、无拘束的[微服务](https://martinfowler.com/articles/microservices.html)**:整个应用被分散为更小、更独立的模块,并且这些模块可以被动态地部署和管理,而不再是存储在大型的单用途机器上的臃肿的单一应用栈。
* **资源隔离**:增加程序表现的可预见性。
* **资源利用率**:高效且密集。
#### 为什么我需要 Kubernetes,它能做什么?
至少,Kubernetes 能在实体机或虚拟机集群上调度和运行程序容器。而且,Kubernetes 也能让开发者斩断联系着实体机或虚拟机的“锁链”,从**以主机为中心**的架构跃至**以容器为中心**的架构。该架构最终提供给开发者诸多内在的优势和便利。Kubernetes 提供给基础架构以真正的**以容器为中心**的开发环境。
Kubernetes 满足了一系列产品内运行程序的普通需求,诸如:
* [协调辅助进程](https://kubernetes.io/docs/concepts/workloads/pods/pod/),协助应用程序整合,维护一对一“程序 - 镜像”模型。
* [挂载存储系统](https://kubernetes.io/docs/concepts/storage/volumes/)
* [分布式机密信息](https://kubernetes.io/docs/concepts/configuration/secret/)
* [检查程序状态](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)
* [复制应用实例](https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/)
* [使用横向荚式自动缩放](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
* [命名与发现](https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/)
* [负载均衡](https://kubernetes.io/docs/concepts/services-networking/service/)
* [滚动更新](https://kubernetes.io/docs/tasks/run-application/rolling-update-replication-controller/)
* [资源监控](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
* [访问并读取日志](https://kubernetes.io/docs/concepts/cluster-administration/logging/)
* [程序调试](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application-introspection/)
* [提供验证与授权](https://kubernetes.io/docs/admin/authorization/)
以上兼具平台即服务(PaaS)的简化和基础架构即服务(IaaS)的灵活,并促进了在平台服务提供商之间的迁移。
#### Kubernetes 是一个什么样的平台?
虽然 Kubernetes 提供了非常多的功能,总会有更多受益于新特性的新场景出现。针对特定应用的工作流程,能被流水线化以加速开发速度。起初,针对具体场景的临时编排是可以接受的,但随着规模扩大,往往需要健壮的自动化机制。这也是为什么 Kubernetes 也被设计为一个构建组件和工具的生态系统的平台,使其更容易地部署、缩放、管理应用程序。
<ruby> <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/"> 标签 </a> <rp> ( </rp> <rt> label </rt> <rp> ) </rp></ruby>可以让用户按照自己的喜好组织资源。 <ruby> <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/"> 注释 </a> <rp> ( </rp> <rt> annotation </rt> <rp> ) </rp></ruby>让用户在资源里添加客户信息,以优化工作流程,为管理工具提供一个标示调试状态的简单方法。
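下面用 kubectl 给出一个小示例,演示如何给资源打标签、加注释并按标签筛选(其中的 Pod 名称 my-pod 和各键值均为假设,仅作说明):

```
kubectl label pod my-pod app=my-app tier=frontend
kubectl annotate pod my-pod example.com/notes='仅用于调试'
kubectl get pods -l app=my-app
```

前两条命令分别添加标签与注释,第三条命令则按标签筛选出对应的资源。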
此外,[Kubernetes 控制面板](https://kubernetes.io/docs/concepts/overview/components/)是由开发者和用户均可使用的同样的 [API](https://kubernetes.io/docs/reference/api-overview/) 构建的。用户可以编写自己的控制器,比如 <ruby> <a href="https://git.k8s.io/community/contributors/devel/scheduler.md"> 调度器 </a> <rp> ( </rp> <rt> scheduler </rt> <rp> ) </rp></ruby>,使用可以被通用的[命令行工具](https://kubernetes.io/docs/user-guide/kubectl-overview/)识别的[他们自己的 API](https://git.k8s.io/community/contributors/design-proposals/extending-api.md)。
这种[设计](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/principles.md)让大量的其它系统也能构建于 Kubernetes 之上。
#### Kubernetes 不是什么?
Kubernetes 不是传统的、全包容的平台即服务(Paas)系统。它尊重用户的选择,这很重要。
Kubernetes:
* 并不限制支持的程序类型。它并不限定程序所用的框架 (例如,[Wildfly](http://wildfly.org/)),也不限制运行时支持的语言集合 (比如, Java、Python、Ruby),也不仅仅迎合 [12 因子应用程序](https://12factor.net/),也不区分 *应用* 与 *服务* 。Kubernetes 旨在支持尽可能多种类的工作负载,包括无状态的、有状态的和处理数据的工作负载。如果某程序在容器内运行良好,它在 Kubernetes 上只可能运行得更好。
* 不提供中间件(例如消息总线)、数据处理框架(例如 Spark)、数据库(例如 mysql),也不把集群存储系统(例如 Ceph)作为内置服务。但是以上程序都可以在 Kubernetes 上运行。
* 没有“点击即部署”这类的服务市场存在。
* 不部署源代码,也不编译程序。持续集成 (CI) 工作流程是不同的用户和项目拥有其各自不同的需求和表现的地方。所以,Kubernetes 支持分层 CI 工作流程,却并不监听每层的工作状态。
* 允许用户自行选择日志、监控、预警系统。( Kubernetes 提供一些集成工具以保证这一概念得到执行)
* 不提供也不管理一套完整的应用程序配置语言/系统(例如 [jsonnet](https://github.com/google/jsonnet))。
* 不提供也不配合任何完整的机器配置、维护、管理、自我修复系统。
另一方面,大量的 PaaS 系统运行*在* Kubernetes 上,诸如 [Openshift](https://www.openshift.org/)、[Deis](http://deis.io/),以及 [Eldarion](http://eldarion.cloud/)。你也可以开发你的自定义 PaaS,整合上你自选的 CI 系统,或者只在 Kubernetes 上部署容器镜像。
因为 Kubernetes 运营在应用程序层面而不是在硬件层面,它提供了一些 PaaS 所通常提供的常见的适用功能,比如部署、伸缩、负载平衡、日志和监控。然而,Kubernetes 并非铁板一块,这些默认的解决方案是可供选择,可自行增加或删除的。
而且, Kubernetes 不只是一个*编排系统* 。事实上,它消除了对编排的需要。 *编排* 的技术定义是,一个定义好的工作流程的执行:先做 A,再做 B,最后做 C。相反地, Kubernetes 囊括了一系列独立、可组合的控制流程,它们持续驱动当前状态向需求的状态发展。从 A 到 C 的具体过程并不唯一。集中化控制也并不是必须的;这种方式更像是*编舞*。这将使系统更易用、更高效、更健壮、复用性、扩展性更强。
#### Kubernetes 这个单词的含义?k8s?
**Kubernetes** 这个单词来自于希腊语,含义是 *舵手* 或 *领航员* 。其词根是 *governor* 和 [cybernetic](http://www.etymonline.com/index.php?term=cybernetics)。 *K8s* 是它的缩写,用 8 字替代了“ubernete”。
---
via: <https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/>
作者:[kubernetes.io](https://kubernetes.io/) 译者:[songshuang00](https://github.com/songsuhang00) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,801 | 免费学习 Docker 的最佳方法:Play-with-docker(PWD) | https://blog.docker.com/2017/07/best-way-learn-docker-free-play-docker-pwd/ | 2017-08-22T14:18:00 | [
"Docker"
] | https://linux.cn/article-8801-1.html | 
去年在柏林的分布式系统峰会上,Docker 社区的活跃成员 [Marcos Nils](https://www.twitter.com/marcosnils) 和 [Jonathan Leibiusky](https://www.twitter.com/xetorthio) 宣称已经开始研究浏览器内置 Docker 的方案,帮助人们学习 Docker。 几天后,[Play-with-docker](http://play-with-docker.com/)(PWD)就诞生了。
PWD 像是一个 Docker 游乐场,用户在几秒钟内就可以运行 Docker 命令。 还可以在浏览器中安装免费的 Alpine Linux 虚拟机,然后在虚拟机里面构建和运行 Docker 容器,甚至可以使用 [Docker 集群模式](https://docs.docker.com/engine/swarm/)创建集群。 有了 Docker-in-Docker(DinD)引擎,甚至可以体验到多个虚拟机/个人电脑的效果。 除了 Docker 游乐场外,PWD 还包括一个培训站点 [training.play-with-docker.com](http://training.play-with-docker.com/),该站点提供大量的难度各异的 Docker 实验和测验。
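举个例子,在 PWD 的终端里可以直接运行标准的 Docker 命令,下面是创建一个小集群的最简示意(IP 地址以 PWD 实际分配的为准,这里只是占位):

```
docker swarm init --advertise-addr 192.168.0.8
docker service create --name web --replicas 3 -p 80:80 nginx
docker service ls
```

第一条命令把当前节点初始化为管理节点,随后两条命令创建并查看一个有三个副本的服务。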
如果你错过了峰会,Marcos 和 Jonathan 在最后一场 DockerCon Moby Cool Hack 会议中展示了 PWD。 观看下面的视频,深入了解其基础结构和发展路线图。
在过去几个月里,Docker 团队与 Marcos、Jonathan,还有 Docker 社区的其他活跃成员展开了密切合作,为项目添加了新功能,为培训部分增加了 Docker 实验室。
### PWD: 游乐场
以下快速的概括了游乐场的新功能:
#### 1、 PWD Docker Machine 驱动和 SSH
随着 PWD 成功的成长,社区开始问他们是否可以使用 PWD 来运行自己的 Docker 研讨会和培训。 因此,对项目进行的第一次改进之一就是创建 [PWD Docker Machine 驱动](https://github.com/play-with-docker/docker-machine-driver-pwd/releases/tag/v0.0.5),从而用户可以通过自己喜爱的终端轻松创建管理 PWD 主机,包括使用 SSH 相关命令的选项。 下面是它的工作原理:

#### 2、 支持文件上传
Marcos 和 Jonathan 还带来了另一个炫酷的功能就是可以在 PWD 实例中通过拖放文件的方式将 Dockerfile 直接上传到 PWD 窗口。

#### 3、 模板会话
除了文件上传之外,PWD 还有一个功能,可以使用预定义的模板在几秒钟内启动 5 个节点的群集。

#### 4、 一键使用 Docker 展示你的应用程序
PWD 附带的另一个很酷的功能是它的内嵌按钮,你可以在你的站点中使用它来设置 PWD 环境,并快速部署一个构建好的堆栈,另外还有一个 [chrome 扩展](https://chrome.google.com/webstore/detail/play-with-docker/kibbhpioncdhmamhflnnmfonadknnoan) ,可以将 “Try in PWD” 按钮添加 DockerHub 最流行的镜像中。 以下是扩展程序的一个简短演示:

### PWD 培训站点
[training.play-with-docker.com](http://training.play-with-docker.com/) 站点提供了大量新的实验。有一些值得注意的两点,包括两个来源于奥斯丁召开的 DockerCon 中的动手实践的实验,还有两个是 Docker 17.06CE 版本中亮眼的新功能:
* [可以动手实践的 Docker 网络实验](http://training.play-with-docker.com/docker-networking-hol/)
* [可以动手实践的 Docker 编排实验](http://training.play-with-docker.com/orchestration-hol/)
* [多阶段构建](http://training.play-with-docker.com/multi-stage/)
* [Docker 集群配置文件](http://training.play-with-docker.com/swarm-config/)
总而言之,现在有 36 个实验,而且一直在增加。 如果你想贡献实验,请从查看 [GitHub 仓库](https://github.com/play-with-docker/play-with-docker.github.io)开始。
### PWD 用例
根据网站访问量和我们收到的反馈,可以很有把握地说,PWD 现在有很大的吸引力。下面是一些最常见的用例:
* 紧跟最新开发版本,尝试新功能。
* 快速建立集群并启动复制服务。
* 通过互动教程学习: [training.play-with-docker.com](http://training.play-with-docker.com/)。
* 在会议和集会上做演讲。
* 召开需要复杂配置的高级研讨会,例如 Jérôme’的 [Docker 编排高级研讨会](https://github.com/docker/labs/tree/master/Docker-Orchestration)。
* 和社区成员协作诊断问题检测问题。
参与 PWD:
* 通过[向 PWD 提交 PR](https://github.com/play-with-docker/) 做贡献
* 向 [PWD 培训站点](https://github.com/play-with-docker/training)贡献
---
作者简介:
Victor 是 Docker, Inc. 的高级社区营销经理。他喜欢优质的葡萄酒、象棋和足球,上述爱好不分先后顺序。 Victor 的 tweet:@vcoisne。
---
via: <https://blog.docker.com/2017/07/best-way-learn-docker-free-play-docker-pwd/>
作者:[Victor](https://blog.docker.com/author/victor_c/) 译者:[Flowsnow](https://github.com/Flowsnow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,802 | 物联网助长了 Linux 恶意软件 | http://www.linuxinsider.com/story/84652.html | 2017-08-23T08:31:00 | [
"物联网",
"恶意软件"
] | https://linux.cn/article-8802-1.html | 
针对 Linux 系统的恶意软件正在增长,这主要是由于连接到物联网的设备的激增。
这是网络安全设备制造商 [WatchGuard Technologies](http://www.watchguard.com/) 上周发布的一篇报告中所披露的。该报告分析了从全球 26,000 多台设备收集到的数据,在今年第一季度的前 10 名恶意软件中发现了三个针对 Linux 的恶意软件,而上一季度仅有一个。
WatchGuard 的 CTO Corey Nachreiner 和安全威胁分析师 Marc Laliberte 写道:“Linux 上的攻击和恶意软件正在兴起。我们相信这是因为 IoT 设备的系统性弱点与其快速增长相结合的结果,它正在引导僵尸网络的作者们转向 Linux 平台。”
他们建议“阻止入站的 Telnet 和 SSH,以及使用复杂的管理密码,可以防止绝大多数潜在的攻击”。
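以“阻止入站的 Telnet 和 SSH”为例,在 Linux 设备上大致可以用类似下面的 iptables 规则来实现(其中的管理网段只是假设,请按自己的环境调整):

```
iptables -A INPUT -p tcp --dport 23 -j DROP
iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```

第一条规则直接丢弃入站的 Telnet 连接,后两条规则只允许来自管理网段的 SSH 连接。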
### 黑客的新大道
Laliberte 观察到,Linux 恶意软件在去年年底随着 Mirai 僵尸网络开始增长。Mirai 在九月份曾经用来攻击部分互联网的基础设施,迫使数百万用户断线。
他告诉 LinuxInsider,“现在,随着物联网设备的飞速发展,一条全新的大道正在向攻击者们开放。我们相信,随着互联网上新目标的出现,Linux 恶意软件会逐渐增多。”Laliberte 继续说,物联网设备制造商并没有对安全性表现出很大的关注。他们的目标是让设备易于使用、价格便宜,并且能够快速生产。
他说:“开发过程中他们真的不关心安全。”
### 轻易捕获
[Alert Logic](http://www.alertlogic.com/) 的网络安全布道师 Paul Fletcher 说,大多数物联网制造商都使用 Linux 的裁剪版本,因为操作系统需要最少的系统资源来运行。
他告诉 LinuxInsider,“当你将大量与互联网连接的物联网设备结合在一起时,这相当于在线大量的 Linux 系统,它们可用于攻击。”
为了使设备易于使用,制造商使用的协议对黑客来说也是用户友好的。Fletcher 说:“攻击者可以访问这些易受攻击的接口,然后上传并执行他们选择的恶意代码。”
他指出,厂商经常给他们的设备很差的默认设置。Fletcher 说:“通常,管理员帐户是空密码或易于猜测的默认密码,例如 ‘password123’。”
[SANS 研究所](http://www.sans.org/) 首席研究员 Johannes B. Ullrich 表示,安全问题通常“本身不是 Linux 特有的”。他告诉 LinuxInsider,“制造商对他们如何配置这些设备不屑一顾,所以他们使这些设备的利用变得非常轻易。”
### 10 大恶意软件
这些 Linux 恶意软件在 WatchGuard 的第一季度的统计数据中占据了前 10 名的位置:
* Linux/Exploit,它使用几种木马来扫描可以加入僵尸网络的设备。
* Linux/Downloader,它使用恶意的 Linux shell 脚本。Linux 可以运行在许多不同的架构上,如 ARM、MIPS 和传统的 x86 芯片组。报告解释说,一个为某个架构编译的可执行文件不能在不同架构的设备上运行。因此,一些 Linux 攻击利用 dropper shell 脚本下载并安装适合它们所要感染的体系架构的恶意组件。
* Linux/Flooder,它使用了 Linux 分布式拒绝服务工具,如 Tsunami,用于执行 DDoS 放大攻击,以及 Linux 僵尸网络(如 Mirai)使用的 DDoS 工具。报告指出:“正如 Mirai 僵尸网络向我们展示的,基于 Linux 的物联网设备是僵尸网络军队的主要目标。
### Web 服务器战场
WatchGuard 报告指出,敌人攻击网络的方式发生了变化。
公司发现,到 2016 年底,73% 的 Web 攻击针对客户端 - 浏览器和配套软件。今年头三个月发生了彻底改变,82% 的 Web 攻击集中在 Web 服务器或基于 Web 的服务上。报告合著者 Nachreiner 和 Laliberte 写道:“我们不认为下载式的攻击将会消失,但似乎攻击者已经集中力量和工具来试图利用 Web 服务器攻击。”
他们也发现,自 2006 年底以来,杀毒软件的有效性有所下降。Nachreiner 和 Laliberte 报道说:“连续的第二个季度,我们看到使用传统的杀毒软件解决方案漏掉了使用我们更先进的解决方案可以捕获的大量恶意软件,实际上已经从 30% 上升到了 38% 漏掉了。”
他说:“如今网络犯罪分子使用许多精妙的技巧来重新包装恶意软件,从而避免了基于签名的检测。这就是为什么使用基本的杀毒软件的许多网络成为诸如赎金软件之类威胁的受害者。”
---
作者简介:
John P. Mello Jr.自 2003 年以来一直是 ECT 新闻网记者。他的重点领域包括网络安全、IT问题、隐私权、电子商务、社交媒体、人工智能、大数据和消费电子。 他撰写和编辑了众多出版物,包括“波士顿商业杂志”、“波士顿凤凰”、“Megapixel.Net” 和 “政府安全新闻”。
---
via: <http://www.linuxinsider.com/story/84652.html>
作者:[John P. Mello Jr]([email protected]) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,803 | 一个时代的结束:Solaris 系统的那些年,那些事 | https://www.phoronix.com/scan.php?page=news_item&px=Solaris-2017-Look-Back | 2017-08-23T13:37:00 | [
"Oracle",
"Solaris"
] | /article-8803-1.html | 
现在看来,Oracle 公司正在通过取消 Solaris 12 而[终止 Solaris 的功能开发](http://www.phoronix.com/scan.php?page=news_item&px=No-Solaris-12),这里我们要回顾下多年来在 Phoronix 上最受欢迎的 Solaris 重大事件和新闻。
这里有许多关于 Solaris 的有趣/重要的回忆。

在 Sun Microsystems 时期,我真的对 Solaris 很感兴趣。在 Phoronix 上我们一直重点关注 Linux 的同时,经常也有 Solaris 的文章出现。 Solaris 玩起来很有趣,OpenSolaris/SXCE 是伟大的产物,我将 Phoronix 测试套件移植到 Solaris 上,我们与 Sun Microsystems 人员有密切的联系,也出现在 Sun 的许多活动中。

*在那些日子里 Sun 有一些相当独特的活动...*
不幸的是,自从 Oracle 公司收购了 Sun 公司, Solaris 就如坠入深渊一样。最大的打击大概是 Oracle 结束了 OpenSolaris ,并将所有 Solaris 的工作转移到专有模式...

在 Sun 时代的 Solaris 有很多美好的回忆,所以 Oracle 在其计划中抹去了 Solaris 12 之后,我经常在 Phoronix 上翻回去看一些之前 Solaris 的经典文章,期待着能从 Oracle 听到 “Solaris 11” 下一代的消息,重启 Solaris 项目的开发。

虽然在后 Solaris 的世界中,看到 Oracle 对 ZFS 所做的事情以及他们在基于 RHEL 的 Oracle Enterprise Linux 上下的重注将会很有趣,但时间将会告诉我们一切。

无论如何,这是回顾自 2004 年以来我们最受欢迎的 Solaris 文章:
### 2016/12/1 [Oracle 或许会罐藏 Solaris](http://www.phoronix.com/scan.php?page=news_item&px=Oracle-Solaris-Demise-Rumors)
Oracle 可能正在拔掉 Solaris 的电源插头,据一些新的传闻说。
### 2013/6/9 [OpenSXCE 2013.05 拯救 Solaris 社区](http://www.phoronix.com/scan.php?page=news_item&px=MTM4Njc)
作为 Solaris 社区版的社区复兴,OpenSXCE 2013.05 出现在网上。
### 2013/2/2 [Solaris 12 可能最终带来 Radeon KMS 驱动程序](http://www.phoronix.com/scan.php?page=news_item&px=MTI5MTU)
看起来,Oracle 可能正在准备发布自己的 AMD Radeon 内核模式设置(KMS)驱动程序,并引入到 Oracle Solaris 12 中。
### 2012/10/4 [Oracle Solaris 11.1 提供 300 个以上增强功能](http://www.phoronix.com/scan.php?page=news_item&px=MTE5OTQ)
Oracle昨天在旧金山的 Oracle OpenWorld 会议上发布了 Solaris 11.1 。

### 2012/1/9 [Oracle 尚未澄清 Solaris 11 内核来源](http://www.phoronix.com/scan.php?page=news_item&px=MTAzOTc)
一个月前,Phoronix 是第一个注意到 Solaris 11 内核源代码通过 Torrent 站点泄漏到网上的信息。一个月后,甲骨文还没有正式评论这个情况。
### 2011/12/19 [Oracle Solaris 11 内核源代码泄漏](http://www.phoronix.com/scan.php?page=news_item&px=MTAzMDE)
似乎 Solaris 11的内核源代码在过去的一个周末被泄露到了网上。
### 2011/8/25 [对于 BSD,Solaris 的 GPU 驱动程序的悲惨状态](http://www.phoronix.com/scan.php?page=news_item&px=OTgzNA)
昨天在邮件列表上出现了关于干掉所有旧式 Mesa 驱动程序的讨论。这些旧驱动程序没有被积极维护,支持复古的图形处理器,并且没有更新支持新的 Mesa 功能。英特尔和其他开发人员正在努力清理 Mesa 核心,以将来增强这一开源图形库。这种清理 Mesa,对 BSD 和 Solaris 用户也有一些影响。
### 2010/8/13 [告别 OpenSolaris,Oracle 刚刚把它干掉](http://www.phoronix.com/scan.php?page=news_item&px=ODUwNQ)
Oracle 终于宣布了他们对 Solaris 操作系统和 OpenSolaris 平台的计划,而且不是好消息。OpenSolaris 实际上将宣告死亡,未来将不会再有新的 OpenSolaris 版本出现,包括长期延期的 2010 年版本。Solaris 仍然会继续存在,Oracle 正在忙于准备明年发布的 Solaris 11,但只有在面向 Oracle 企业客户的版本发布之后,才会推出类似于 OpenSolaris 的 “Solaris 11 Express” 版本。
### 2010/2/22 [Oracle 仍然要对 OpenSolaris 进行更改](http://www.phoronix.com/scan.php?page=news_item&px=ODAwNg)
自从 Oracle 完成对 Sun Microsystems 的收购以来,已经有了许多变化:一些原本由 Sun 支持的开源项目现在已经不再被 Oracle 支持,其余的开源产品也发生了重大变化。OpenSolaris 正是 Oracle 不太愿意明确表态其打算的开源项目之一。Solaris Express 社区版(SXCE)上个月已经关闭,而且预计 3 月份发布的下一个 OpenSolaris 版本(OpenSolaris 2010.03)也没有任何信息流出。
### 2007/9/10 [Solaris Express 社区版 Build 72](http://www.phoronix.com/scan.php?page=news_item&px=NjA0Nw)
对于那些想要在 “印第安纳项目” 发布之前尝试 OpenSolaris 软件中最新最好的软件的人来说,现在可以使用 Solaris Express 社区版 Build 72。Solaris Express 社区版(SXCE)Build 72 可以从 OpenSolaris.org 下载。同时,预计将在下个月推出 Sun 的 “印第安纳项目” 项目的预览版。
### 2007/9/6 [ATI R500/600 驱动要支持 Solaris 了?](http://www.phoronix.com/scan.php?page=news_item&px=NjA0Mg)
虽然没有可用于 Solaris/OpenSolaris 或 *BSD 的 ATI fglrx 驱动程序,但现在 AMD 将向 X.Org 开发人员交付用于开发开源驱动程序的规范,对于任何使用 ATI 的 Radeon X1000 “R500” 或者 HD 2000 “R600” 系列的 Solaris 用户来说,这肯定是有希望的。将于下周发布的开源 X.Org 驱动程序距离成熟尚远,但应该能够相对容易地移植到使用 X.Org 的 Solaris 和其他操作系统上。AMD 今天的声明针对的是 Linux 社区,但它也可以帮助使用 ATI 硬件的 Solaris/OpenSolaris 用户。特别是随着印第安纳项目的即将推出,开源 R500/600 驱动程序移植就只是时间问题了。
### 2007/9/5 [Solaris Express 社区版 Build 71](http://www.phoronix.com/scan.php?page=news_item&px=NjAzNQ)
Solaris Express 社区版(SXCE)现已推出 Build 71。您可以在 OpenSolaris.org 中找到有关 Solaris Express 社区版 Build 71 的更多信息。另外,在 Linux 内核峰会上,AMD 将提供 GPU 规格的消息,由此产生的 X.Org 驱动程序将来可能会导致 ATI 硬件上 Solaris/OpenSolaris 有所改善。
### 2007/8/27 [Linux 的 Solaris 容器](http://www.phoronix.com/scan.php?page=news_item&px=NjAxMQ)
Sun Microsystems 已经宣布,他们将很快支持适用于 Linux 应用程序的 Solaris 容器。这样可以在 Solaris 下运行 Linux 应用程序,而无需对二进制包进行任何修改。适用于 Linux 的 Solaris 容器将允许从 Linux 到 Solaris 的平滑迁移、协助跨平台开发,并带来其他好处。这一支持据称“很快”就会到来。
### 2007/8/23 [OpenSolaris 开发者峰会](http://www.phoronix.com/scan.php?page=news_item&px=NjAwNA)
今天早些时候在 OpenSolaris 论坛上发布了第一次 OpenSolaris 开发人员峰会的消息。这次峰会将在十月份在加州大学圣克鲁斯分校举行。 Sara Dornsife 将这次峰会描述为“不是与演示文稿或参展商举行会议,而是一个亲自参与的协作工作会议,以计划下一期的印第安纳项目。” 伊恩·默多克(Ian Murdock) 将在这个“印第安纳项目”中进行主题演讲,但除此之外,该计划仍在计划之中。 Phoronix 可能会继续跟踪此事件,您可以在 Solaris 论坛上讨论此次峰会。
### 2007/8/18 [Solaris Express 社区版 Build 70](http://www.phoronix.com/scan.php?page=news_item&px=NTk4Nw)
名叫 “Nevada” 的 Solaris Express 社区版 Build 70 (SXCE snv_70) 现在已经发布。有关下载链接的通知可以在 OpenSolaris 论坛中找到。同时还公布了其网络存储的 Build 71 版本,包括来自 Qlogic 的光纤通道 HBA 驱动程序的源代码。
### 2007/8/16 [IBM 使用 Sun Solaris 的系统](http://www.phoronix.com/scan.php?page=news_item&px=NTk4NA)
Sun Microsystems 和 IBM正在举行电话会议,他们刚刚宣布,IBM 将开始在服务器上使用 Sun 的 Solaris 操作系统。这些 IBM 服务器包括基于 x86 的服务器系统以及 Blade Center 服务器。官方新闻稿刚刚发布,可以在 sun 新闻室阅读。
### 2007/8/9 [OpenSolaris 不会与 Linux 合并](http://www.phoronix.com/scan.php?page=news_item&px=NTk2Ng)
在旧金山的 LinuxWorld 2007 上,Andrew Morton 在主题演讲中表示,OpenSolaris 的关键组件不会出现在 Linux 内核中。事实上,Morton 甚至表示“OpenSolaris 还活着实在令人非常遗憾”。OpenSolaris 的一些关键组件包括 Zones、ZFS 和 DTrace。虽然印第安纳项目有可能将这些组件转变为 GPLv3 项目... 更多信息参见 ZDNET。
### 2007/7/27 [Solaris Xen 已经更新](http://www.phoronix.com/scan.php?page=news_item&px=NTkzMQ)
已经有一段时间了,Solaris Xen 终于更新了。约翰·莱文(John Levon)表示,这一最新版本基于 Xen 3.0.4 和 Solaris “Nevada” Build 66。这一最新版本的改进包括 PAE 支持、HVM 支持、新的 virt-manager 工具、改进的调试支持以及管理域支持。可以在 Sun 的网站上找到 2007 年 7 月 Solaris Xen 更新的下载。
### 2007/7/25 [Solaris 10 7/07 HW 版本](http://www.phoronix.com/scan.php?page=news_item&px=NTkyMA)
Solaris 10 7/07 HW 版本的文档已经上线。如 Solaris 发行注记中所述,Solaris 10 7/07 仅适用于 SPARC Enterprise M4000-M9000 服务器,并且没有 x86/x64 版本可用。所有平台的最新 Solaris 更新是 Solaris 10 11/06 。您可以在 Phoronix 论坛中讨论 Solaris 7/07。
### 2007/7/16 [来自英特尔的 Solaris 电信服务器](http://www.phoronix.com/scan.php?page=news_item&px=NTg5Nw)
今天宣布推出符合 NEBS、ETSI 和 ATCA 合规性的英特尔体系的 Sun Solaris 电信机架服务器和刀片服务器。在这些新的运营商级平台中,英特尔运营商级机架式服务器 TIGW1U 支持 Linux 和 Solaris 10,而 Intel NetStructure MPCBL0050 SBC 也将支持这两种操作系统。今天的新闻稿可以在这里阅读。
然后是 Solaris 分类中最受欢迎的特色文章:
### [Ubuntu vs. OpenSolaris vs. FreeBSD 基准测试](http://www.phoronix.com/vr.php?view=13149)
在过去的几个星期里,我们提供了几篇关于 Ubuntu Linux 性能的深入文章。我们已经开始提供 Ubuntu 7.04 到 8.10 的基准测试,并且发现这款受欢迎的 Linux 发行版的性能随着时间的推移而变慢,随之而来的是 Mac OS X 10.5 对比 Ubuntu 8.10 的基准测试和其他文章。在本文中,我们正在比较 Ubuntu 8.10 的 64 位性能与 OpenSolaris 2008.11 和 FreeBSD 7.1 的最新测试版本。
### [NVIDIA 的性能:Windows vs. Linux vs. Solaris](http://www.phoronix.com/vr.php?view=11968)
本周早些时候,我们预览了 Quadro FX1700,它是 NVIDIA 的中端工作站显卡之一,基于 G84GL 内核,而 G84GL 内核又源于消费级 GeForce 8600 系列。该 PCI Express 显卡提供 512MB 的视频内存,具有两个双链路 DVI 连接,并支持 OpenGL 2.1 ,同时保持最大功耗仅为 42 瓦。正如我们在预览文章中提到的,我们将不仅在 Linux 下查看此显卡的性能,还要在 Microsoft Windows 和 Sun 的 Solaris 中测试此工作站解决方案。在今天的这篇文章中,我们正在这样做,因为我们测试了 NVIDIA Quadro FX1700 512MB 与这些操作系统及其各自的二进制显示驱动程序。
### [FreeBSD 8.0 对比 Linux、OpenSolaris](http://www.phoronix.com/vr.php?view=14407)
在 FreeBSD 8.0 的稳定版本发布的上周,我们终于可以把它放在测试台上,并用 Phoronix 测试套件进行了全面的了解。我们将 FreeBSD 8.0 的性能与早期的 FreeBSD 7.2 版本以及 Fedora 12 和 Ubuntu 9.10 还有 Sun OS 端的 OpenSolaris 2010.02 b127 快照进行了比较。
### [Fedora、Debian、FreeBSD、OpenBSD、OpenSolaris 基准测试](http://www.phoronix.com/vr.php?view=14533)
上周我们发布了第一个 Debian GNU/kFreeBSD 基准测试,将 FreeBSD 内核捆绑在 Debian GNU 用户的 Debian GNU/Linux 上,比较了这款 Debian 系统的 32 位和 64 位性能。 我们现在扩展了这个比较,使许多其他操作系统与 Debian GNU/Linux 和 Debian GNU/kFreeBSD 的 6.0 Squeeze 快照直接进行比较,如 Fedora 12,FreeBSD 7.2,FreeBSD 8.0,OpenBSD 4.6 和 OpenSolaris 2009.06 。
### [AMD 上海皓龙:Linux vs. OpenSolaris 基准测试](http://www.phoronix.com/vr.php?view=13475)
1月份,当我们研究了四款皓龙 2384 型号时,我们在 Linux 上发布了关于 AMD 上海皓龙 CPU 的综述。与早期的 AMD 巴塞罗那处理器 Ubuntu Linux 相比,这些 45nm 四核工作站服务器处理器的性能非常好,但是在运行 Sun OpenSolaris 操作系统时,性能如何?今天浏览的是 AMD 双核的基准测试,运行 OpenSolaris 2008.11、Ubuntu 8.10 和即将推出的 Ubuntu 9.04 版本。
### [OpenSolaris vs. Linux 内核基准](http://www.phoronix.com/vr.php?view=13826)
本周早些时候,我们提供了 Ubuntu 9.04 与 Mac OS X 10.5.6 的基准测试,发现 Leopard 操作系统(Mac)在大多数测试中的表现要优于 Jaunty Jackalope (Ubuntu),至少在 Ubuntu 32 位是这样的。我们今天又回过来进行更多的操作系统基准测试,但这次我们正在比较 Linux 和 Sun OpenSolaris 内核的性能。我们使用的 Nexenta Core 2 操作系统将 OpenSolaris 内核与 GNU/Ubuntu 用户界面组合在同一个 Ubuntu 软件包中,但使用了 Linux 内核的 32 位和 64 位 Ubuntu 服务器安装进行测试。
### [Netbook 性能:Ubuntu vs. OpenSolaris](http://www.phoronix.com/vr.php?view=14039)
过去,我们已经发布了 OpenSolaris vs. Linux Kernel 基准测试以及类似的文章,关注 Sun 的 OpenSolaris 与流行的 Linux 发行版的性能。我们已经看过高端 AMD 工作站的性能,但是我们从来没有比较上网本上的 OpenSolaris 和 Linux 性能。直到今天,在本文中,我们将比较戴尔 Inspiron Mini 9 上网本上的 OpenSolaris 2009.06 和 Ubuntu 9.04 的结果。
### [NVIDIA 图形:Linux vs. Solaris](http://www.phoronix.com/vr.php?view=10301)
在 Phoronix,我们不断探索 Linux 下的不同显示驱动程序,在我们评估了 Sun 的检查工具并测试了 Solaris 主板以及覆盖其他几个领域之后,我们还没有执行图形驱动程序 Linux 和 Solaris 之间的比较。直到今天。由于印第安纳州项目,我们对 Solaris 更感兴趣,我们决定终于通过 NVIDIA 专有驱动程序提供我们在 Linux 和 Solaris 之间的第一次定量图形比较。
### [OpenSolaris 2008.05 向 Solaris 提供了一个新面孔](http://www.phoronix.com/vr.php?view=12269)
2 月初,Sun Microsystems 发布了印第安纳项目的第二个预览版本。对于不了解情况的人来说,印第安纳项目是 Sun 的 Ian Murdock 所领导项目的代号,旨在通过解决 Solaris 的长期可用性问题,将 OpenSolaris 推向更多的台式机和笔记本电脑。我们对预览 2 没有留下什么深刻印象,因为它相比普通用户感兴趣的 GNU/Linux 桌面并没有优势。然而,随着印第安纳项目的 OpenSolaris 2008.05 版本将于 5 月份推出,Sun Microsystems 今天发布了该操作系统的最终测试版。与上次接触印第安纳项目相比,我们对这个新的 OpenSolaris 版本的最初体验要远远好于不到三个月前的那次。
### [快速概览 Oracle Solaris 11](http://www.phoronix.com/vr.php?view=16681)
Solaris 11 在周三发布,是七年来这个前 Sun 操作系统的第一个主要更新。在过去七年中,Solaris 家族发生了很大变化,OpenSolaris 在那个时候已经到来,但在本文中,简要介绍了全新的 Oracle Solaris 11 版本。
### [OpenSolaris、BSD & Linux 的新基准测试](http://www.phoronix.com/vr.php?view=15476)
今天早些时候,我们对以原生的内核模块支持的 Linux 上的 ZFS 进行了基准测试,该原生模块将被公开提供,以将这个 Sun/Oracle 文件系统覆盖到更多的 Linux 用户。现在,尽管作为一个附加奖励,我们碰巧有了基于 OpenSolaris 的最新发行版的新基准,包括 OpenSolaris、OpenIndiana 和 Augustiner-Schweinshaxe,与 PC-BSD、Fedora 和 Ubuntu相比。
### [FreeBSD/PC-BSD 9.1 针对 Linux、Solaris、BSD 的基准](http://www.phoronix.com/vr.php?view=18291)
虽然 FreeBSD 9.1 尚未正式发布,但是基于 FreeBSD 的 PC-BSD 9.1 “Isotope”版本本月已经可用。本文中的性能指标是 64 位版本的 PC-BSD 9.1 与 DragonFlyBSD 3.0.3、Oracle Solaris Express 11.1、CentOS 6.3、Ubuntu 12.10 以及 Ubuntu 13.04 开发快照的比较。
---
作者简介:
Michael Larabel 是 Phoronix.com 的作者,并于 2004 年创立了该网站,该网站重点是丰富多样的 Linux 硬件体验。 Michael 撰写了超过10,000 篇文章,涵盖了 Linux 硬件支持,Linux 性能,图形驱动程序等主题。 Michael 也是 Phoronix 测试套件、 Phoromatic 和 OpenBenchmarking.org 自动化基准测试软件的主要开发人员。可以通过 Twitter 关注他或通过 MichaelLarabel.com 联系他。
---
via: <https://www.phoronix.com/scan.php?page=news_item&px=Solaris-2017-Look-Back>
作者:[Michael Larabel](http://www.michaellarabel.com/) 译者:[MonkeyDEcho](https://github.com/MonkeyDEcho) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='www.phoronix.com', port=443): Read timed out. (read timeout=10) | null |
8,806 | 极客漫画:Deadline | http://turnoff.us/geek/deadline/ | 2017-08-23T14:52:00 | [
"漫画"
] | https://linux.cn/article-8806-1.html | 
DeLorean:美国经典科幻电影系列《回到未来》中的时光汽车。
Tardis:英国科幻电视剧《Doctor Who》中的时间机器和宇宙飞船,它是时间和空间的相对维度(Time And Relative Dimension(s) In Space)的缩写。
---
via: <http://turnoff.us/geek/deadline/>
作者:[Daniel Stori](http://turnoff.us/about/) 译者:[zionfuo](https://github.com/zionfuo) 合成:[zionfuo](https://github.com/zionfuo)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,807 | Linux 开机引导和启动过程详解 | https://opensource.com/article/17/2/linux-boot-and-startup | 2017-08-24T12:37:00 | [
"Linux",
"启动",
"引导",
"GRUB2"
] | https://linux.cn/article-8807-1.html |
>
> 你是否曾经对操作系统为何能够执行应用程序而感到疑惑?那么本文将为你揭开操作系统引导与启动的面纱。
>
>
>

理解操作系统开机引导和启动过程对于配置操作系统和解决相关启动问题是至关重要的。该文章陈述了 [GRUB2 引导装载程序](https://en.wikipedia.org/wiki/GNU_GRUB)开机引导装载内核的过程和 [systemd 初始化系统](https://en.wikipedia.org/wiki/Systemd)执行开机启动操作系统的过程。
事实上,操作系统的启动分为两个阶段:<ruby> 引导 <rt> boot </rt></ruby>和<ruby> 启动 <rt> startup </rt></ruby>。引导阶段开始于打开电源开关,结束于内核初始化完成和 systemd 进程成功运行。启动阶段接管了剩余工作,直到操作系统进入可操作状态。
总体来说,Linux 的开机引导和启动过程是相当容易理解,下文将分节对于不同步骤进行详细说明。
* BIOS 上电自检(POST)
* 引导装载程序 (GRUB2)
* 内核初始化
* 启动 systemd,其是所有进程之父。
注意,本文以 GRUB2 和 systemd 为载体讲述操作系统的开机引导和启动过程,是因为这二者是目前主流的 linux 发行版本所使用的引导装载程序和初始化软件。当然另外一些过去使用的相关软件仍然在一些 Linux 发行版本中使用。
### 引导过程
引导过程能以两种方式之一初始化。其一,如果系统处于关机状态,那么打开电源按钮将开启系统引导过程。其二,如果操作系统已经运行在一个本地用户(该用户可以是 root 或其他非特权用户),那么用户可以借助图形界面或命令行界面通过编程方式发起一个重启操作,从而触发系统引导过程。重启包括了一个关机和重新开始的操作。
#### BIOS 上电自检(POST)
上电自检过程其实与 Linux 本身毫无关系,它主要由硬件部分来完成,这对于所有操作系统都一样。当电脑接通电源,电脑开始执行 BIOS(<ruby> 基本输入输出系统 <rt> Basic I/O System </rt></ruby>)的 POST(<ruby> 上电自检 <rt> Power On Self Test </rt></ruby>)过程。
在 1981 年,IBM 设计的第一台个人电脑中,BIOS 被设计为用来初始化硬件组件。POST 作为 BIOS 的组成部分,用于检验电脑硬件基本功能是否正常。如果 POST 失败,那么这个电脑就不能使用,引导过程也将就此中断。
BIOS 上电自检确认硬件的基本功能正常,然后产生一个 BIOS [中断](https://en.wikipedia.org/wiki/BIOS_interrupt_call) INT 13H,该中断指向某个接入的可引导设备的引导扇区。它所找到的包含有效的引导记录的第一个引导扇区将被装载到内存中,并且控制权也将从引导扇区转移到此段代码。
引导扇区是引导加载器真正的第一阶段。大多数 Linux 发行版本使用的引导加载器有三种:GRUB、GRUB2 和 LILO。GRUB2 是最新的,也是相对于其他老的同类程序使用最广泛的。
#### GRUB2
GRUB2 全称是 GRand Unified BootLoader,Version 2(第二版大一统引导装载程序)。它是目前流行的大部分 Linux 发行版本的主要引导加载程序。GRUB2 是一个用于计算机寻找操作系统内核并加载其到内存的智能程序。由于 GRUB 这个单词比 GRUB2 更易于书写和阅读,在下文中,除特殊指明以外,GRUB 将代指 GRUB2。
GRUB 被设计为兼容操作系统[多重引导规范](https://en.wikipedia.org/wiki/Multiboot_Specification),它能够用来引导不同版本的 Linux 和其他的开源操作系统;它还能链式加载专有操作系统的引导记录。
GRUB 允许用户从任何给定的 Linux 发行版本的几个不同内核中选择一个进行引导。这个特性使得操作系统,在因为关键软件不兼容或其它某些原因升级失败时,具备引导到先前版本的内核的能力。GRUB 能够通过文件 `/boot/grub/grub.conf` 进行配置。(LCTT 译注:此处指 GRUB1)
GRUB1 现在已经逐步被弃用,在大多数现代发行版上它已经被 GRUB2 所替换,GRUB2 是在 GRUB1 的基础上重写完成的。基于 Red Hat 的发行版大约是在 Fedora 15 和 CentOS/RHEL 7 时升级到 GRUB2 的。GRUB2 提供了与 GRUB1 同样的引导功能,但它同时也是一个类似于大型机(mainframe)系统上那种基于命令行的预启动操作系统(pre-OS)环境,使得预引导阶段的配置更为方便和灵活。GRUB2 通过 `/boot/grub2/grub.cfg` 进行配置。
两个 GRUB 的最主要作用都是将内核加载到内存并运行。两个版本的 GRUB 的基本工作方式一致,其主要阶段也保持相同,都可分为 3 个阶段。在本文将以 GRUB2 为例进行讨论其工作过程。GRUB 或 GRUB2 的配置,以及 GRUB2 的命令使用均超过本文范围,不会在文中进行介绍。
虽然 GRUB2 并未在其三个引导阶段中正式使用这些<ruby> 阶段 <rt> stage </rt></ruby>名词,但是为了讨论方便,我们在本文中使用它们。
##### 阶段 1
如上文 POST(上电自检)阶段提到的,在 POST 阶段结束时,BIOS 将在接入的磁盘中查找引导记录,其通常位于 MBR(<ruby> 主引导记录 <rt> Master Boot Record </rt></ruby>)中。BIOS 会把它找到的第一个引导记录加载到内存中,并开始执行这段代码。引导代码(即阶段 1 代码)必须非常小,因为它必须连同分区表一起放进硬盘的第一个 512 字节的扇区中。在[传统的常规 MBR](https://en.wikipedia.org/wiki/Master_boot_record) 中,引导代码实际所占用的空间大小为 446 字节。这个阶段 1 的 446 字节的文件通常被叫做引导镜像(boot.img),其中不包含设备的分区信息,分区表一般是单独添加到引导记录中的。
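如果想直观地感受一下这个 512 字节的引导扇区,可以用下面这组只读命令做一个简单的示意(这里假设引导磁盘是 `/dev/sda`,`file` 的输出会因版本不同而略有差异):

```
# 以只读方式把磁盘的第一个 512 字节扇区复制出来
$ sudo dd if=/dev/sda of=/tmp/mbr.bin bs=512 count=1
# file 工具能够识别出这是一个 MBR/DOS 引导扇区
$ file /tmp/mbr.bin
```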
由于引导记录必须非常的小,它不可能非常智能,且不能理解文件系统结构。因此阶段 1 的唯一功能就是定位并加载阶段 1.5 的代码。为了完成此任务,阶段 1.5 的代码必须位于引导记录与设备第一个分区之间的位置。在加载阶段 1.5 代码进入内存后,控制权将由阶段 1 转移到阶段 1.5。
##### 阶段 1.5
如上所述,阶段 1.5 的代码必须位于引导记录与设备第一个分区之间的位置。该空间由于历史上的技术原因而空闲。第一个分区的开始位置在扇区 63 和 MBR(扇区 0)之间遗留下 62 个 512 字节的扇区(共 31744 字节),该区域用于存储阶段 1.5 的代码镜像 core.img 文件。该文件大小为 25389 字节,故此区域有足够大小的空间用来存储 core.img。
因为有更大的存储空间用于阶段 1.5,且该空间足够容纳一些通用的文件系统驱动程序,如标准的 EXT 和其它的 Linux 文件系统,如 FAT 和 NTFS 等。GRUB2 的 core.img 远比更老的 GRUB1 阶段 1.5 更复杂且更强大。这意味着 GRUB2 的阶段 2 能够放在标准的 EXT 文件系统内,但是不能放在逻辑卷内。故阶段 2 的文件可以存放于 `/boot` 文件系统中,一般在 `/boot/grub2` 目录下。
注意 `/boot` 目录必须放在一个 GRUB 所支持的文件系统(并不是所有的文件系统均可)。阶段 1.5 的功能是开始执行存放阶段 2 文件的 `/boot` 文件系统的驱动程序,并加载相关的驱动程序。
##### 阶段 2
GRUB 阶段 2 所有的文件都已存放于 `/boot/grub2` 目录及其几个子目录之下。该阶段没有一个类似于阶段 1 与阶段 1.5 的镜像文件。相应地,该阶段主要需要从 `/boot/grub2/i386-pc` 目录下加载一些内核运行时模块。
GRUB 阶段 2 的主要功能是定位和加载 Linux 内核到内存中,并转移控制权到内核。内核的相关文件位于 `/boot` 目录下,这些内核文件可以通过其文件名进行识别,其文件名均带有前缀 vmlinuz。你可以列出 `/boot` 目录中的内容来查看操作系统中当前已经安装的内核。
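例如,可以用类似下面的命令列出已安装的内核镜像和对应的初始内存盘(具体文件名因发行版和内核版本而异,这里仅作示意):

```
# 列出内核镜像与初始内存盘
$ ls -l /boot/vmlinuz-* /boot/initramfs-*
# 查看当前正在运行的内核版本
$ uname -r
```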
GRUB2 跟 GRUB1 类似,支持从 Linux 内核选择之一引导启动。Red Hat 包管理器(DNF)支持保留多个内核版本,以防最新版本内核发生问题而无法启动时,可以恢复老版本的内核。默认情况下,GRUB 提供了一个已安装内核的预引导菜单,其中包括问题诊断菜单(recuse)以及恢复菜单(如果配置已经设置恢复镜像)。
阶段 2 加载选定的内核到内存中,并转移控制权到内核代码。
#### 内核
内核文件都是以一种自解压的压缩格式存储以节省空间,它与一个初始化的内存映像和存储设备映射表都存储于 `/boot` 目录之下。
在选定的内核加载到内存中并开始执行后,在其进行任何工作之前,内核文件首先必须从压缩格式解压自身。一旦内核自解压完成,则加载 [systemd](https://en.wikipedia.org/wiki/Systemd) 进程(其是老式 System V 系统的 [init](https://en.wikipedia.org/wiki/Init#SysV-style) 程序的替代品),并转移控制权到 systemd。
这就是引导过程的结束。此刻,Linux 内核和 systemd 处于运行状态,但是由于没有其他任何程序在执行,故其不能执行任何有关用户的功能性任务。
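顺带一提,在一台已经启动完成、采用 systemd 的机器上,可以用下面的命令确认 1 号进程正是 systemd(输出仅为示意):

```
# 查看 PID 为 1 的进程名
$ ps -p 1 -o comm=
systemd
```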
### 启动过程
启动过程紧随引导过程之后,启动过程使 Linux 系统进入可操作状态,并能够执行用户功能性任务。
#### systemd
systemd 是所有进程的父进程。它负责将 Linux 主机带到一个用户可操作状态(可以执行功能任务)。systemd 的一些功能远较旧式 init 程序更丰富,可以管理运行中的 Linux 主机的许多方面,包括挂载文件系统,以及开启和管理 Linux 主机的系统服务等。但是 systemd 的任何与系统启动过程无关的功能均不在此文的讨论范围。
首先,systemd 挂载在 `/etc/fstab` 中配置的文件系统,包括内存交换文件或分区。据此,systemd 必须能够访问位于 `/etc` 目录下的配置文件,包括它自己的。systemd 借助其配置文件 `/etc/systemd/system/default.target` 决定 Linux 系统应该启动达到哪个状态(或<ruby> 目标态 <rt> target </rt></ruby>)。`default.target` 是一个真实的 target 文件的符号链接。对于桌面系统,其链接到 `graphical.target`,该文件相当于旧式 systemV init 方式的 **runlevel 5**。对于一个服务器操作系统来说,`default.target` 更多是默认链接到 `multi-user.target`, 相当于 systemV 系统的 **runlevel 3**。 `emergency.target` 相当于单用户模式。
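在运行中的系统上,可以用下面几条命令查看或修改这个默认目标态(输出仅为示意):

```
# 查看当前的默认目标态
$ systemctl get-default
graphical.target
# default.target 实际上只是一个符号链接
$ ls -l /etc/systemd/system/default.target
# 把默认目标态改为多用户(命令行)模式
$ sudo systemctl set-default multi-user.target
```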
(LCTT 译注:“target” 是 systemd 新引入的概念,目前尚未发现有官方的准确译名,考虑到其作用和使用的上下文环境,我们认为翻译为“目标态”比较贴切。以及,“unit” 是指 systemd 中服务和目标态等各个对象/文件,在此依照语境译作“单元”。)
注意,所有的<ruby> 目标态 <rt> target </rt></ruby>和<ruby> 服务 <rt> service </rt></ruby>均是 systemd 的<ruby> 单元 <rt> unit </rt></ruby>。
如下表 1 是 systemd 启动的<ruby> 目标态 <rt> target </rt></ruby>和老版 systemV init 启动<ruby> 运行级别 <rt> runlevel </rt></ruby>的对比。这个 **systemd 目标态别名** 是为了 systemd 向前兼容 systemV 而提供。这个目标态别名允许系统管理员(包括我自己)用 systemV 命令(例如 `init 3`)改变运行级别。当然,该 systemV 命令是被转发到 systemd 进行解释和执行的。
| SystemV 运行级别 | systemd 目标态 | systemd 目标态别名 | 描述 |
| --- | --- | --- | --- |
| | `halt.target` | | 停止系统运行但不切断电源。 |
| 0 | `poweroff.target` | `runlevel0.target` | 停止系统运行并切断电源. |
| S | `emergency.target` | | 单用户模式,没有服务进程运行,文件系统也没挂载。这是一个最基本的运行级别,仅在主控制台上提供一个 shell 用于用户与系统进行交互。 |
| 1 | `rescue.target` | `runlevel1.target` | 挂载了文件系统,仅运行了最基本的服务进程的基本系统,并在主控制台启动了一个 shell 访问入口用于诊断。 |
| 2 | | `runlevel2.target` | 多用户,没有挂载 NFS 文件系统,但是所有的非图形界面的服务进程已经运行。 |
| 3 | `multi-user.target` | `runlevel3.target` | 所有服务都已运行,但只支持命令行接口访问。 |
| 4 | | `runlevel4.target` | 未使用。 |
| 5 | `graphical.target` | `runlevel5.target` | 多用户,且支持图形界面接口。 |
| 6 | `reboot.target` | `runlevel6.target` | 重启。 |
| | `default.target` | | 这个<ruby> 目标态 <rt> target </rt></ruby>是总是 `multi-user.target` 或 `graphical.target` 的一个符号链接的别名。systemd 总是通过 `default.target` 启动系统。`default.target` 绝不应该指向 `halt.target`、 `poweroff.target` 或 `reboot.target`。 |
*表 1 老版本 systemV 的 运行级别与 systemd 与<ruby> 目标态 <rt> target </rt></ruby>或目标态别名的比较*
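举例来说,表中“目标态别名”一列所描述的兼容方式,使得下面两种写法在运行中的系统上效果相同;切换目标态会停止相关服务,请勿随意在生产环境上操作(命令仅作示意):

```
# 旧式 SystemV 写法,由 systemd 代为解释执行
$ sudo init 3
# 等价的 systemd 原生写法
$ sudo systemctl isolate multi-user.target
# 查看某个目标态依赖哪些单元
$ systemctl list-dependencies multi-user.target
```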
每个<ruby> 目标态 <rt> target </rt></ruby>有一个在其配置文件中描述的依赖集,systemd 需要首先启动其所需依赖,这些依赖服务是 Linux 主机运行在特定的功能级别所要求的服务。当配置文件中所有的依赖服务都加载并运行后,即说明系统运行于该目标级别。
systemd 也会查看老式的 systemV init 目录中是否存在相关启动文件,若存在,则 systemd 根据这些配置文件的内容启动对应的服务。在 Fedora 系统中,过时的网络服务就是通过该方式启动的一个实例。
如下图 1 是直接从 bootup 的 man 页面拷贝而来。它展示了在 systemd 启动过程中一般的事件序列和确保成功的启动的基本的顺序要求。
`sysinit.target` 和 `basic.target` 目标态可以被视作启动过程中的状态检查点。尽管 systemd 的设计初衷是并行启动系统服务,但是部分服务或功能目标态是其它服务或目标态的启动的前提。系统将暂停于检查点直到其所要求的服务和目标态都满足为止。
`sysinit.target` 状态的到达是以其所依赖的所有资源模块都正常启动为前提的,所有其它的单元,如文件系统挂载、交换文件设置、设备管理器的启动、随机数生成器种子设置、低级别系统服务初始化、加解密服务启动(如果一个或者多个文件系统加密的话)等都必须完成,但是在 **sysinit.target** 中这些服务与模块是可以并行启动的。
`sysinit.target` 启动所有的低级别服务和系统初具功能所需的单元,这些都是进入下一阶段 basic.target 的必要前提。

*图 1:systemd 的启动流程*
在 `sysinit.target` 的条件满足以后,systemd 接下来启动 `basic.target`,启动其所要求的所有单元。 `basic.target` 通过启动下一目标态所需的单元而提供了更多的功能,这包括各种可执行文件的目录路径、通信 sockets,以及定时器等。
最后,用户级目标态(`multi-user.target` 或 `graphical.target`) 可以初始化了,应该注意的是 `multi-user.target` 必须在满足图形化目标态 `graphical.target` 的依赖项之前先达成。
图 1 中,以 `*` 开头的目标态是通用的启动状态。当到达其中的某一目标态,则说明系统已经启动完成了。如果 `multi-user.target` 是默认的目标态,则成功启动的系统将以命令行登录界面呈现于用户。如果 `graphical.target` 是默认的目标态,则成功启动的系统将以图形登录界面呈现于用户,界面的具体样式将根据系统所配置的[显示管理器](https://opensource.com/article/16/12/yearbook-best-couple-2016-display-manager-and-window-manager)而定。
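系统启动完成后,可以用下面的命令核对实际达到了哪些目标态,以及各个启动阶段的耗时(输出仅为示意):

```
# 列出当前处于活动状态的目标态
$ systemctl list-units --type=target --state=active
# 查看内核与用户空间各自的启动耗时
$ systemd-analyze
# 按耗时排序列出启动最慢的服务
$ systemd-analyze blame
```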
### 故障讨论
最近我需要改变一台使用 GRUB2 的 Linux 电脑的默认引导内核。我发现一些 GRUB2 的命令在我的系统上不能用,也可能是我使用方法不正确。至今,我仍然不知道是何原因导致,此问题需要进一步探究。
`grub2-set-default` 命令没能在配置文件 `/etc/default/grub` 中成功地设置默认内核索引,以至于期望的替代内核并没有被引导启动。故在该配置文件中我手动更改 `GRUB_DEFAULT=saved` 为 `GRUB_DEFAULT=2`,2 是我需要引导的安装好的内核文件的索引。然后我执行命令 `grub2-mkconfig > /boot/grub2/grub.cfg` 创建了新的 GRUB 配置文件,该方法如预期的规避了问题,并成功引导了替代的内核。
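把上面这个变通办法整理成命令大致如下。这只是基于上文做法的一个示意,而非权威实现:假设系统采用 BIOS/MBR 方式引导、配置文件位于 `/boot/grub2/grub.cfg`,且目标内核的菜单索引为 2(实际索引需要先确认),修改引导配置前请务必做好备份:

```
# 列出 GRUB 菜单项及其索引(从 0 开始计数)
$ sudo awk -F\' '/^menuentry/ {print i++ " : " $2}' /boot/grub2/grub.cfg
# 把 /etc/default/grub 中的 GRUB_DEFAULT=saved 改为所需的索引
$ sudo sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=2/' /etc/default/grub
# 重新生成 GRUB 配置文件(等价于文中使用的重定向写法)
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```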
### 结论
GRUB2、systemd 初始化系统是大多数现代 Linux 发行版引导和启动的关键组件。尽管在实际中,systemd 的使用还存在一些争议,但是 GRUB2 与 systemd 可以密切地配合先加载内核,然后启动一个业务系统所需要的系统服务。
尽管 GRUB2 和 systemd 都比其前任要更加复杂,但是它们更加容易学习和管理。在 man 页面有大量关于 systemd 的帮助说明,freedesktop.org 也在线收录了完整的此[帮助说明](https://www.freedesktop.org/software/systemd/man/index.html)。下面有更多相关信息链接。
### 附加资源
* [GNU GRUB](https://en.wikipedia.org/wiki/GNU_GRUB) (Wikipedia)
* [GNU GRUB Manual](https://www.gnu.org/software/grub/manual/grub.html) (GNU.org)
* [Master Boot Record](https://en.wikipedia.org/wiki/Master_boot_record) (Wikipedia)
* [Multiboot specification](https://en.wikipedia.org/wiki/Multiboot_Specification) (Wikipedia)
* [systemd](https://en.wikipedia.org/wiki/Systemd) (Wikipedia)
* [systemd bootup process](https://www.freedesktop.org/software/systemd/man/bootup.html) (Freedesktop.org)
* [systemd index of man pages](https://www.freedesktop.org/software/systemd/man/index.html) (Freedesktop.org)
---
作者简介:
David Both 居住在美国北卡罗来纳州的首府罗利,是一个 Linux 开源贡献者。他已经从事 IT 行业 40 余年,在 IBM 教授 OS/2 长达 20 余年。1981 年,他在 IBM 开发了第一个关于最初的 IBM 个人电脑的培训课程。他也曾在 Red Hat 教授 RHCE 课程,也曾供职于 MCI Worldcom、Cisco 以及北卡罗来纳州政府等。他已经为 Linux 开源社区工作近 20 年。
---
via: <https://opensource.com/article/17/2/linux-boot-and-startup>
作者:[David Both](https://opensource.com/users/dboth) 译者: [penghuster](https://github.com/penghuster) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Understanding the Linux boot and startup processes is important to being able to both configure Linux and to resolving startup issues. This article presents an overview of the bootup sequence using the [GRUB2 bootloader](https://en.wikipedia.org/wiki/GNU_GRUB) and the startup sequence as performed by the [systemd initialization system](https://en.wikipedia.org/wiki/Systemd).
In reality, there are two sequences of events that are required to boot a Linux computer and make it usable: *boot* and *startup*. The *boot* sequence starts when the computer is turned on, and is completed when the kernel is initialized and systemd is launched. The *startup* process then takes over and finishes the task of getting the Linux computer into an operational state.
Overall, the Linux boot and startup process is fairly simple to understand. It is comprised of the following steps which will be described in more detail in the following sections.
- BIOS POST
- Boot loader (GRUB2)
- Kernel initialization
- Start systemd, the parent of all processes.
Note that this article covers GRUB2 and systemd because they are the current boot loader and initialization software for most major distributions. Other software options have been used historically and are still found in some distributions.
## The boot process
The boot process can be initiated in one of a couple ways. First, if power is turned off, turning on the power will begin the boot process. If the computer is already running a local user, including root or an unprivileged user, the user can programmatically initiate the boot sequence by using the GUI or command line to initiate a reboot. A reboot will first do a shutdown and then restart the computer.
### BIOS POST
The first step of the Linux boot process really has nothing whatever to do with Linux. This is the hardware portion of the boot process and is the same for any operating system. When power is first applied to the computer it runs the POST (Power On Self Test) which is part of the BIOS (Basic I/O System).
When IBM designed the first PC back in 1981, BIOS was designed to initialize the hardware components. POST is the part of BIOS whose task is to ensure that the computer hardware functioned correctly. If POST fails, the computer may not be usable and so the boot process does not continue.
BIOS POST checks the basic operability of the hardware and then it issues a BIOS [interrupt](https://en.wikipedia.org/wiki/BIOS_interrupt_call), INT 13H, which locates the boot sectors on any attached bootable devices. The first boot sector it finds that contains a valid boot record is loaded into RAM and control is then transferred to the code that was loaded from the boot sector.
The boot sector is really the first stage of the boot loader. There are three boot loaders used by most Linux distributions, GRUB, GRUB2, and LILO. GRUB2 is the newest and is used much more frequently these days than the other older options.
### GRUB2
GRUB2 stands for "GRand Unified Bootloader, version 2" and it is now the primary bootloader for most current Linux distributions. GRUB2 is the program which makes the computer just smart enough to find the operating system kernel and load it into memory. Because it is easier to write and say GRUB than GRUB2, I may use the term GRUB in this document but I will be referring to GRUB2 unless specified otherwise.
GRUB has been designed to be compatible with the [multiboot specification](https://en.wikipedia.org/wiki/Multiboot_Specification) which allows GRUB to boot many versions of Linux and other free operating systems; it can also chain load the boot record of proprietary operating systems.
GRUB can also allow the user to choose to boot from among several different kernels for any given Linux distribution. This affords the ability to boot to a previous kernel version if an updated one fails somehow or is incompatible with an important piece of software. GRUB can be configured using the /boot/grub/grub.conf file.
GRUB1 is now considered to be legacy and has been replaced in most modern distributions with GRUB2, which is a rewrite of GRUB1. Red Hat based distros upgraded to GRUB2 around Fedora 15 and CentOS/RHEL 7. GRUB2 provides the same boot functionality as GRUB1 but GRUB2 is also a mainframe-like command-based pre-OS environment and allows more flexibility during the pre-boot phase. GRUB2 is configured with /boot/grub2/grub.cfg.
The primary function of either GRUB is to get the Linux kernel loaded into memory and running. Both versions of GRUB work essentially the same way and have the same three stages, but I will use GRUB2 for this discussion of how GRUB does its job. The configuration of GRUB or GRUB2 and the use of GRUB2 commands is outside the scope of this article.
Although GRUB2 does not officially use the stage notation for the three stages of GRUB2, it is convenient to refer to them in that way, so I will in this article.
#### Stage 1
As mentioned in the BIOS POST section, at the end of POST, BIOS searches the attached disks for a boot record, usually located in the Master Boot Record (MBR), it loads the first one it finds into memory and then starts execution of the boot record. The bootstrap code, i.e., GRUB2 stage 1, is very small because it must fit into the first 512-byte sector on the hard drive along with the partition table. The total amount of space allocated for the actual bootstrap code in a [classic generic MBR](https://en.wikipedia.org/wiki/Master_boot_record) is 446 bytes. The 446 Byte file for stage 1 is named boot.img and does not contain the partition table which is added to the boot record separately.
Because the boot record must be so small, it is also not very smart and does not understand filesystem structures. Therefore the sole purpose of stage 1 is to locate and load stage 1.5. In order to accomplish this, stage 1.5 of GRUB must be located in the space between the boot record itself and the first partition on the drive. After loading GRUB stage 1.5 into RAM, stage 1 turns control over to stage 1.5.
#### Stage 1.5
As mentioned above, stage 1.5 of GRUB must be located in the space between the boot record itself and the first partition on the disk drive. This space was left unused historically for technical reasons. The first partition on the hard drive begins at sector 63 and with the MBR in sector 0, that leaves 62 512-byte sectors—31,744 bytes—in which to store the core.img file which is stage 1.5 of GRUB. The core.img file is 25,389 Bytes so there is plenty of space available between the MBR and the first disk partition in which to store it.
Because of the larger amount of code that can be accommodated for stage 1.5, it can have enough code to contain a few common filesystem drivers, such as the standard EXT and other Linux filesystems, FAT, and NTFS. The GRUB2 core.img is much more complex and capable than the older GRUB1 stage 1.5. This means that stage 2 of GRUB2 can be located on a standard EXT filesystem but it cannot be located on a logical volume. So the standard location for the stage 2 files is in the /boot filesystem, specifically /boot/grub2.
Note that the /boot directory must be located on a filesystem that is supported by GRUB. Not all filesystems are. The function of stage 1.5 is to begin execution with the filesystem drivers necessary to locate the stage 2 files in the /boot filesystem and load the needed drivers.
#### Stage 2
All of the files for GRUB stage 2 are located in the /boot/grub2 directory and several subdirectories. GRUB2 does not have an image file like stages 1 and 2. Instead, it consists mostly of runtime kernel modules that are loaded as needed from the /boot/grub2/i386-pc directory.
The function of GRUB2 stage 2 is to locate and load a Linux kernel into RAM and turn control of the computer over to the kernel. The kernel and its associated files are located in the /boot directory. The kernel files are identifiable as they are all named starting with vmlinuz. You can list the contents of the /boot directory to see the currently installed kernels on your system.
GRUB2, like GRUB1, supports booting from one of a selection of Linux kernels. The Red Hat package manager, DNF, supports keeping multiple versions of the kernel so that if a problem occurs with the newest one, an older version of the kernel can be booted. By default, GRUB provides a pre-boot menu of the installed kernels, including a rescue option and, if configured, a recovery option.
Stage 2 of GRUB2 loads the selected kernel into memory and turns control of the computer over to the kernel.
### Kernel
All of the kernels are in a self-extracting, compressed format to save space. The kernels are located in the /boot directory, along with an initial RAM disk image, and device maps of the hard drives.
After the selected kernel is loaded into memory and begins executing, it must first extract itself from the compressed version of the file before it can perform any useful work. Once the kernel has extracted itself, it loads [systemd](https://en.wikipedia.org/wiki/Systemd), which is the replacement for the old [SysV init](https://en.wikipedia.org/wiki/Init#SysV-style) program, and turns control over to it.
This is the end of the boot process. At this point, the Linux kernel and systemd are running but unable to perform any productive tasks for the end user because nothing else is running.
## The startup process
The startup process follows the boot process and brings the Linux computer up to an operational state in which it is usable for productive work.
### systemd
systemd is the mother of all processes and it is responsible for bringing the Linux host up to a state in which productive work can be done. Some of its functions, which are far more extensive than the old init program, are to manage many aspects of a running Linux host, including mounting filesystems, and starting and managing system services required to have a productive Linux host. Any of systemd's tasks that are not related to the startup sequence are outside the scope of this article.
First, systemd mounts the filesystems as defined by **/etc/fstab**, including any swap files or partitions. At this point, it can access the configuration files located in /etc, including its own. It uses its configuration file, **/etc/systemd/system/default.target**, to determine which state or target, into which it should boot the host. The **default.target** file is only a symbolic link to the true target file. For a desktop workstation, this is typically going to be the graphical.target, which is equivalent to **runlevel 5** in the old SystemV init. For a server, the default is more likely to be the **multi-user.target** which is like **runlevel 3** in SystemV. The **emergency.target** is similar to single user mode.
Note that targets and services are systemd units.
Table 1, below, is a comparison of the systemd targets with the old SystemV startup runlevels. The **systemd target aliases** are provided by systemd for backward compatibility. The target aliases allow scripts—and many sysadmins like myself—to use SystemV commands like **init 3** to change runlevels. Of course, the SystemV commands are forwarded to systemd for interpretation and execution.
| SystemV Runlevel | systemd target | systemd target aliases | Description |
| --- | --- | --- | --- |
|  | halt.target |  | Halts the system without powering it down. |
| 0 | poweroff.target | runlevel0.target | Halts the system and turns the power off. |
| S | emergency.target |  | Single user mode. No services are running; filesystems are not mounted. This is the most basic level of operation with only an emergency shell running on the main console for the user to interact with the system. |
| 1 | rescue.target | runlevel1.target | A base system including mounting the filesystems with only the most basic services running and a rescue shell on the main console. |
| 2 |  | runlevel2.target | Multiuser, without NFS but all other non-GUI services running. |
| 3 | multi-user.target | runlevel3.target | All services running but command line interface (CLI) only. |
| 4 |  | runlevel4.target | Unused. |
| 5 | graphical.target | runlevel5.target | multi-user with a GUI. |
| 6 | reboot.target | runlevel6.target | Reboot |
|  | default.target |  | This target is always aliased with a symbolic link to either multi-user.target or graphical.target. systemd always uses the default.target to start the system. The default.target should never be aliased to halt.target, poweroff.target, or reboot.target. |
*Table 1: Comparison of SystemV runlevels with systemd targets and some target aliases.*
Each target has a set of dependencies described in its configuration file. systemd starts the required dependencies. These dependencies are the services required to run the Linux host at a specific level of functionality. When all of the dependencies listed in the target configuration files are loaded and running, the system is running at that target level.
systemd also looks at the legacy SystemV init directories to see if any startup files exist there. If so, systemd used those as configuration files to start the services described by the files. The deprecated network service is a good example of one of those that still use SystemV startup files in Fedora.
Figure 1, below, is copied directly from the **bootup** [man page](http://man7.org/linux/man-pages/man7/bootup.7.html). It shows the general sequence of events during systemd startup and the basic ordering requirements to ensure a successful startup.
The **sysinit.target** and **basic.target** targets can be considered as checkpoints in the startup process. Although systemd has as one of its design goals to start system services in parallel, there are still certain services and functional targets that must be started before other services and targets can be started. These checkpoints cannot be passed until all of the services and targets required by that checkpoint are fulfilled.
So the **sysinit.target** is reached when all of the units on which it depends are completed. All of those units, mounting filesystems, setting up swap files, starting udev, setting the random generator seed, initiating low-level services, and setting up cryptographic services if one or more filesystems are encrypted, must be completed, but within the **sysinit.target** those tasks can be performed in parallel.
The **sysinit.target** starts up all of the low-level services and units required for the system to be marginally functional and that are required to enable moving on to the basic.target.
```
local-fs-pre.target
|
v
(various mounts and (various swap (various cryptsetup
fsck services...) devices...) devices...) (various low-level (various low-level
| | | services: udevd, API VFS mounts:
v v v tmpfiles, random mqueue, configfs,
local-fs.target swap.target cryptsetup.target seed, sysctl, ...) debugfs, ...)
| | | | |
\__________________|_________________ | ___________________|____________________/
\|/
v
sysinit.target
|
____________________________________/|\________________________________________
/ | | | \
| | | | |
v v | v v
(various (various | (various rescue.service
timers...) paths...) | sockets...) |
| | | | v
v v | v rescue.target
timers.target paths.target | sockets.target
| | | |
v \_________________ | ___________________/
\|/
v
basic.target
|
____________________________________/| emergency.service
/ | | |
| | | v
v v v emergency.target
display- (various system (various system
manager.service services services)
| required for |
| graphical UIs) v
| | multi-user.target
| | |
\_________________ | _________________/
\|/
v
graphical.target
```
*Figure 1: The systemd startup map.*
After the **sysinit.target** is fulfilled, systemd next starts the **basic.target**, starting all of the units required to fulfill it. The basic target provides some additional functionality by starting units that are required for the next target. These include setting up things like paths to various executable directories, communication sockets, and timers.
Finally, the user-level targets, **multi-user.target** or **graphical.target**, can be initialized. Notice that the **multi-user.target** must be reached before the graphical target dependencies can be met.
The underlined targets in Figure 1, are the usual startup targets. When one of these targets is reached, then startup has completed. If the **multi-user.target** is the default, then you should see a text mode login on the console. If **graphical.target** is the default, then you should see a graphical login; the specific GUI login screen you see will depend on the default [display manager](https://opensource.com/article/16/12/yearbook-best-couple-2016-display-manager-and-window-manager) you use.
## Issues
I recently had a need to change the default boot kernel on a Linux computer that used GRUB2. I found that some of the commands did not seem to work properly for me, or that I was not using them correctly. I am not yet certain which was the case, and need to do some more research.
The grub2-set-default command did not properly set the default kernel index for me in the **/etc/default/grub** file so that the desired alternate kernel did not boot. So I manually changed /etc/default/grub **GRUB_DEFAULT=saved** to **GRUB_DEFAULT=2** where 2 is the index of the installed kernel I wanted to boot. Then I ran the command **grub2-mkconfig > /boot/grub2/grub.cfg** to create the new grub configuration file. This circumvention worked as expected and booted to the alternate kernel.
## Conclusions
GRUB2 and the systemd init system are the key components in the boot and startup phases of most modern Linux distributions. Despite the fact that there has been controversy surrounding systemd especially, these two components work together smoothly to first load the kernel and then to start up all of the system services required to produce a functional Linux system.
Although I do find both GRUB2 and systemd more complex than their predecessors, they are also just as easy to learn and manage. The man pages have a great deal of information about systemd, and freedesktop.org has the complete set of [systemd man pages](https://www.freedesktop.org/software/systemd/man/index.html) online. Refer to the resources, below, for more links.
## Additional resources
- [GNU GRUB](https://en.wikipedia.org/wiki/GNU_GRUB) (Wikipedia)
- [GNU GRUB Manual](https://www.gnu.org/software/grub/manual/grub.html) (GNU.org)
- [Master Boot Record](https://en.wikipedia.org/wiki/Master_boot_record) (Wikipedia)
- [Multiboot specification](https://en.wikipedia.org/wiki/Multiboot_Specification) (Wikipedia)
- [systemd](https://en.wikipedia.org/wiki/Systemd) (Wikipedia)
- [systemd bootup process](https://www.freedesktop.org/software/systemd/man/bootup.html) (Freedesktop.org)
- [systemd index of man pages](https://www.freedesktop.org/software/systemd/man/index.html) (Freedesktop.org)
|
8,808 | 给中级 Meld 用户的有用技巧 | https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux-part-2/ | 2017-08-25T07:20:00 | [
"Meld"
] | https://linux.cn/article-8808-1.html | 
Meld 是 Linux 上功能丰富的可视化比较和合并工具。如果你是第一次接触,你可以进入我们的[初学者指南](/article-8402-1.html),了解该程序的工作原理,如果你已经阅读过或正在使用 Meld 进行基本的比较/合并任务,你将很高兴了解本教程的东西,在本教程中,我们将讨论一些非常有用的技巧,这将让你使用工具的体验更好。
*但在我们跳到安装和解释部分之前,值得一提的是,本教程中介绍的所有说明和示例已在 Ubuntu 14.04 上进行了测试,而我们使用的 Meld 版本为 3.14.2*。
### 1、 跳转
你可能已经知道(我们也在初学者指南中也提到过这一点),标准滚动不是在使用 Meld 时在更改之间跳转的唯一方法 - 你可以使用向上和向下箭头键轻松地从一个更改跳转到另一个更改位于编辑区域上方的窗格中:
[](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-2/big/meld-go-next-prev-9.png)
但是,这需要你将鼠标指针移动到这些箭头,然后再次单击其中一个(取决于你要去哪里 - 向上或向下)。你会很高兴知道,存在另一种更简单的方式来跳转:只需使用鼠标的滚轮即可在鼠标指针位于中央更改栏上时进行滚动。
[](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-2/big/meld-center-area-scrolling.png)
这样,你就可以在视线不离开或者分心的情况下进行跳转,
### 2、 可以对更改进行的操作
看下上一节的最后一个屏幕截图。你知道那些黑箭头做什么吧?默认情况下,它们允许你执行合并/更改操作 - 当没有冲突时进行合并,并在同一行发生冲突时进行更改。
但是你知道你可以根据需要删除个别的更改么?是的,这是可能的。为此,你需要做的是在处理更改时按下 Shift 键。你会观察到箭头变成了十字图标。
[](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-2/big/meld-delete-changes.png)
只需点击其中任何一个,相应的更改将被删除。
不仅是删除,你还可以确保冲突的更改不会在合并时更改行。例如,以下是一个冲突变化的例子:
[](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-2/big/meld-conflicting-change.png)
现在,如果你点击任意两个黑色箭头,箭头指向的行将被改变,并且将变得与其他文件的相应行相似。只要你想这样做,这是没问题的。但是,如果你不想要更改任何行呢?相反,目的是将更改的行在相应行的上方或下方插入到其他文件中。
我想说的是,例如,在上面的截图中,需要在 “test23” 之上或之下添加 “test 2”,而不是将 “test23” 更改为 “test2”。你会很高兴知道在 Meld 中这是可能的。就像你按下 Shift 键删除更改一样,在这种情况下,你必须按下 Ctrl 键。
你会观察到当前操作将被更改为插入 - 双箭头图标将确认这一点 。
[](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-2/big/meld-ctrl-insert.png)
从箭头的方向看,此操作可帮助用户将当前更改插入到其他文件中的相应更改 (如所选择的)。
### 3、 自定义文件在 Meld 的编辑器区域中显示的方式
有时候,你希望 Meld 的编辑区域中的文字大小变大(为了更好或更舒适的浏览),或者你希望文本行被包含而不是脱离视觉区域(意味着你不要想使用底部的水平滚动条)。
Meld 在 *Editor* 选项卡(*Edit->Preferences->Editor*)的 *Preferences* 菜单中提供了一些显示和字体相关的自定义选项,你可以进行这些调整:
[](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-2/big/meld-editor-tab.png)
在这里你可以看到,默认情况下,Meld 使用系统定义的字体宽度。只需取消选中 *Font* 类别下的框,你将有大量的字体类型和大小选项可供选择。
然后在 *Display* 部分,你将看到我们正在讨论的所有自定义选项:你可以设置 Tab 宽度、告诉工具是否插入空格而不是 tab、启用/禁用文本换行、使 Meld 显示行号和空白(在某些情况下非常有用)以及使用语法突出显示。
### 4、 过滤文本
有时候,并不是所有的修改都是对你很重要的。例如,在比较两个 C 编程文件时,你可能不希望 Meld 显示注释中的更改,因为你只想专注于与代码相关的更改。因此,在这种情况下,你可以告诉 Meld 过滤(或忽略)与注释相关的更改。
例如,这里是 Meld 中的一个比较,其中由工具高亮了注释相关更改:
[](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-2/big/meld-changes-with-comments.png)
而在这种情况下,Meld 忽略了相同的变化,仅关注与代码相关的变更:
[](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-2/big/meld-changes-without-comments.png)
很酷,不是吗?那么这是怎么回事?为此,我是在 “*Edit->Preferences->Text Filters*” 标签中启用了 “C comments” 文本过滤器:
[](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-2/big/meld-text-filters.png)
如你所见,除了 “C comments” 之外,你还可以过滤掉 C++ 注释、脚本注释、引导或所有的空格等。此外,你还可以为你处理的任何特定情况定义自定义文本过滤器。例如,如果你正在处理日志文件,并且不希望 Meld 高亮显示特定模式开头的行中的更改,则可以为该情况定义自定义文本过滤器。
但是,请记住,要定义一个新的文本过滤器,你需要了解 Python 语言以及如何使用该语言创建正则表达式。
### 总结
这里讨论的这四个技巧都不难理解和使用(当然,除非你想立即动手创建自定义文本过滤器),一旦你开始使用它们,你会发现它们确实很有用。这里的关键是要持续练习,否则你学到的任何技巧不久后都会被忘记。
你还知道或者使用其他任何中级 Meld 的贴士和技巧么?如果有的话,欢迎你在下面的评论中分享。
---
via: <https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux-part-2/>
作者:[Ansh](https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux-part-2/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | # Useful Meld tips/tricks for intermediate users
Meld is a feature-rich visual comparison and merging tool available for Linux. If you're new to the tool, you can head to our [beginner's guide](https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/) to get a quick know-how of how the utility works. However, if you've already read that, or are already using Meld for basic comparison/merging tasks, you'll be glad to know that in this tutorial, we will be discussing some really useful tips/tricks that will make your experience with the tool even better.
*But before we jump onto the installation and explanation part, it'd be worth sharing that all the instructions and examples presented in this tutorial have been tested on Ubuntu 14.04 and the Meld version we've used is 3.14.2*.
## Meld tips/tricks for intermediate users
### 1. Navigation
As you might already know (and we've also mentioned this in our beginner's guide), standard scrolling is not the only way to navigate between changes while using Meld - you can easily switch from one change to another using the up and down arrow keys located in the pane that sits above the edit area:
However, this requires you to move your mouse pointer to these arrows and then click one of them (depending on where you want to go - up or down) repeatedly. You'll be glad to know that there exists an even easier way to jump between changes: just use your mouse's scroll wheel to perform scrolling when mouse pointer is on the central change bar.
This way, you can navigate between changes without taking your eyes off them, or getting distracted.
### 2. Things you can do with changes
Just look at the last screenshot in the previous section. You know what those black arrows do, right? By default, they let you perform the merge/change operation - merge when there's no confliction, and change when there's a conflict in the same line.
But do you know you can delete individual changes if you want? Yes, that's possible. For this, all you have to do is to press the Shift key when dealing with changes. You'll observe that arrows get converted into crosses.
Just click any of them, and the corresponding change will get deleted.
Not only delete, you can also make sure that conflicting changes do not change the lines when merged. For example, here's an example of a conflicting change:
Now, if you click any of the two black arrows, the line where the arrow points will get changed, and will become similar to the corresponding line of the other file. That's fine as long as you want this to happen. But what if you don't want any of the lines to get changed? Instead, the aim is to insert the changed line above or below the corresponding line in the other file.
What I am trying to say is that, for example, in the screenshot above, the need is to add 'test 2' above or below 'test23', rather than changing 'test23' to 'test2'. You'll be glad to know that even that's possible with Meld. Just like you press the Shift key to delete changes, in this case, you'll have to press the Ctrl key.
And you'll observe that the current action will be changed to insert - the dual arrow icons will confirm this.
As clear from the direction of arrows, this action helps users to insert the current change above or below (as selected) the corresponding change in other file.
### 3. Customize the way files are displayed in Meld's editor area
There might be times when you would want the text size in Meld's editor area to be a bit large (for better or more comfortable viewing), or you would want the text lines to wrap instead of going out of visual area (meaning you don't want to use the horizontal scroll bar at the bottom).
Meld provides some display- and font-related customization options in its *Preferences* menu under the *Editor* tab (*Edit->Preferences->Editor*) where you'll be able to make these kind of tweaks:
So here you can see that, by default, Meld uses the system defined font width. Just uncheck that box under the *Font* category, and you'll have a plethora of font type and size options to select from.
Then in the *Display* section, you'll see all the customization options we were talking about: you can set Tab width, tell the tool whether or not to insert spaces instead of tabs, enable/disable text wrapping, make Meld show line numbers and whitespaces (very useful in some cases) as well as use syntax highlighting.
### 4. Filtering text
There are times when not all the changes that Meld shows are important to you. For example, while comparing two C programming files, you may not want changes in comments to be shown by Meld as you only want to focus on code-related changes. So, in that case, you can tell Meld to filter (or ignore) comment-related changes.
For example, here's a Meld comparison where comment-related changes are highlighted by the tool:
And here's the case where Meld has ignored the same changes, focusing only on the code-related changes:
Cool, isn't it? So, how did that happen? Well, for this, what I did was, I enabled the 'C comments' text filter in *Edit->Preferences->Text Filters* tab:
As you can see, aside from 'C comments', you can also filter out C++ comments, Script comments, leading or all whitespaces, and more. What's more, you can also define custom text filters for any specific case you are dealing with. For example, if you are dealing with log-files and don't want changes in lines that begin with a particular pattern to be highlighted by Meld, then you can define a custom text filter for that case.
However, keep in mind that in order to define a new text filter, you need to know Python language as well as how to create regular expressions in that language.
## Conclusion
All the four tips/tricks discussed here aren't very difficult to understand and use (except, of course, if you want to create custom text filters right away), and once you start using them, you'll agree that they are really beneficial. The key here is to keep practicing, otherwise any tip/trick you learn will slip out of your mind in no time.
Do you know or use any other intermediate level Meld tip or trick? If yes, then you are welcome to share that in comments below. |
8,809 | 开源优先:私营公司宣言 | https://opensource.com/article/17/2/open-source-first | 2017-08-26T08:59:00 | [
"开源"
] | https://linux.cn/article-8809-1.html | 
这是一个宣言,任何私人组织都可以用来构建其协作转型。请阅读并让我知道你的看法。
我[在 Linux TODO 小组中作了一个演讲](https://sarob.com/2017/01/todo-open-source-presentation-17-january-2017/)使用了这篇文章作为我的材料。对于那些不熟悉 TODO 小组的人,他们是在商业公司支持开源领导力的组织。相互依赖是很重要的,因为法律、安全和其他共享的知识对于开源社区向前推进是非常重要的。尤其是因为我们需要同时代表商业和公共社区的最佳利益。
“开源优先”意味着我们在考虑供应商出品的产品以满足我们的需求之前,首先考虑开源。要正确使用开源技术,你需要做的不仅仅是消费,还需要你的参与,以确保开源技术长期存在。要参与开源工作,你需要将工程师的工作时间分别分配给你的公司和开源项目。我们期望将开源贡献意图以及内部协作带到私营公司。我们需要定义、建立和维护一种贡献、协作和择优工作的文化。
### 开放花园开发
我们的私营公司致力于通过对技术界的贡献,成为技术的领导者。这不仅仅是使用开源代码,成为领导者需要参与。成为领导者还需要与公司以外的团体(社区)进行各种类型的参与。这些社区围绕一个特定的研发项目进行组织。每个社区的参与就像为公司工作一样。重大成果需要大量的参与。
### 编码更多,生活更好
我们必须对计算资源慷慨,对空间吝啬,并鼓励由此产生的凌乱而有创造力的结果。允许人们使用他们的业务的这些工具将改变他们。我们必须有自发的互动。我们必须通过协作来构建鼓励创造性的线上以及线下空间。无法实时联系对方,协作就不能进行。
### 通过精英体制创新
我们必须建立一种精英体制。想法的质量必须胜过群体结构和资历。按业绩晋升鼓励每个人都成为更好的人和雇员。在我们各自努力成为最出色的自己的同时,充满激情的人之间必然会发生激烈的争论。我们的文化应该鼓励“提出异议的义务”。强烈的意见和想法会带来充满激情的职业精神。这些想法和意见可以而且应该来自所有人。重要的不是你是谁,而是你做了什么。随着精英体制的落地,我们会投资那些无需许可就能把事情做对的团队。
### 项目到产品
由于我们的私营公司拥抱开源贡献,我们还必须在研发项目中的上游工作和实现最终产品之间实现明确的分离。项目是研发工作,快速失败以及开发功能是常态。产品是你投入生产,拥有 SLA,并使用研发项目的成果。分离至少需要分离项目和产品的仓库。正常的分离包含在项目和产品上工作的不同社区。每个社区都需要大量的贡献和参与。为了使这些活动保持独立,需要有一个客户功能以及项目到产品的 bug 修复请求的工作流程。
接下来,我们会强调在私营公司创建、支持和扩展开源中的主要步骤。
### 技术上有天赋的人的学校
高手必须指导没有经验的人。当你学习新技能时,你将它们传给下一个人。当你训练下一个人时,你会面临新的挑战。永远不要期望长期停留在同一个职位上。学习技能,变得出色,把所学传授下去,然后继续前进。
### 找到最适合你的人
我们热爱我们的工作。我们非常喜欢它,以至于想和我们的朋友一起工作。我们是一个比我们公司更大的社区的一部分。我们应该时刻记得招募最好的人与我们一起工作。我们会为周围的人找到很棒的工作,即使那份工作并不在我们公司。这样的想法使雇用优秀的人成为一种生活方式。当招聘成为常态,审查和帮助新员工也就变得容易了。
### 即将写的
我将在我的博客上发布关于每个宗旨的[更多细节](https://sarob.com/2017/02/open-source-first-project-product/),敬请关注。
*这篇文章最初发表在 [Sean Robert 的博客](https://sarob.com/2017/01/open-source-first/)上。CC BY 许可。*
(题图: opensource.com)
---
作者简介:
Sean A Roberts - 以同理心为主导,同时专注于结果。我践行精英体制。智慧汇聚于此。
---
via: <https://opensource.com/article/17/2/open-source-first>
作者:[Sean A Roberts](https://opensource.com/users/sarob) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | This is a manifesto that any private organization can use to frame their collaboration transformation. Take a read and let me know what you think.
I presented [a talk at the Linux TODO group](https://sarob.com/2017/01/todo-open-source-presentation-17-january-2017/) using this article as my material. For those of you who are not familiar with the TODO group, they support open source leadership at commercial companies. It is important to lean on each other because legal, security, and other shared knowledge is so important for the open source community to move forward. This is especially true because we need to represent both the commercial and public community best interests.
"Open source first" means that we look to open source before we consider vendor-based products to meet our needs. To use open source technology correctly, you need to do more than just consume, you need to participate to ensure the open source technology survives long term. To participate in open source requires your engineer's time be split between working for your company and the open source project. We expect to bring the open source contribution intent and collaboration internal to our private company. We need to define, build, and maintain a culture of contribution, collaboration, and merit-based work.
## Open garden development
Our private company strives to be a leader in technology through its contributions to the technology community. This requires more than just the use of open source code. To be a leader requires participation. To be a leader also requires various types of participation with groups (communities) outside of the company. These communities are organized around a specific R&D project. Participation in each of these communities is much like working for a company. Substantial results require substantial participation.
## Code more, live better
We must be generous with computing resources, stingy with space, and encourage the messy, creative stew that results from this. Allowing people access to the tools of their business will transform them. We must have spontaneous interactions. We must build the online and physical spaces that encourage creativity through collaboration. Collaboration doesn't happen without access to each other in real time.
## Innovation through meritocracy
We must create a meritocracy. The quality of ideas has to overcome the group structure and tenure of those in it. Promotion by merit encourages everyone to be better people and employees. While we are being the best badasses we can be, hardy debates between passionate people will happen. Our culture should encourage the obligation to dissent. Strong opinions and ideas lead to a passionate work ethic. The ideas and opinions can and should come from all. It shouldn't make difference who you are, rather it should matter what you do. As meritocracy takes hold, we need to invest in teams that are going to do the right thing without permission.
## Project to product
As our private company embraces open source contribution, we must also create clearer separation between working upstream on an R&D project and implementing the resulting product in production. A project is R&D where failing fast and developing features is the status quo. A product is what you put into production, has SLAs, and is using the results of the R&D project. The separation requires at least separate repositories for projects and products. Normal separation consists of different communities working on the projects and products. Each of the communities require substantial contribution and participation. In order to keep these activities separate, there needs to be a workflow of customer feature and bug fix requests from project to product.
Next, we highlight the major steps in creating, supporting, and expanding open source at our private company.
## A school for the technically gifted
The seniors must mentor the inexperienced. As you learn new skills, you pass them on to the next person. As you train the next person, you move on to new challenges. Never expect to stay in one position for very long. Get skills, become awesome, pass learning on, and move on.
## Find the best people for your family
We love our work. We love it so much that we want to work with our friends. We are part of a community that is larger than our company. Recruiting the best people to work with us, should always be on our mind. We will find awesome jobs for the people around us, even if that isn't with our company. Thinking this way makes hiring great people a way of life. As hiring becomes common, then reviewing and helping new hires becomes easy.
## More to come
I will be posting [more details](https://sarob.com/2017/02/open-source-first-project-product/) about each tenet on my blog, stay tuned.
*This article was originally posted on Sean Robert's blog. Licensed CC BY.*
|
8,810 | 使用 snapcraft 将 snap 包发布到商店 | https://insights.ubuntu.com/2016/11/15/making-your-snaps-available-to-the-store-using-snapcraft/ | 2017-08-26T10:07:32 | [
"snapcraft"
] | https://linux.cn/article-8810-1.html | 
Ubuntu Core 已经正式发布(LCTT 译注:指 2016 年 11 月发布的 Ubuntu Snappy Core 16 ),也许是时候让你的 snap 包进入商店了!
### 交付和商店的概念
首先回顾一下我们是怎么通过商店管理 snap 包的吧。
每次你上传 snap 包,商店都会为其分配一个修订版本号,这个修订版本号对于商店中特定的 snap 包来说是唯一的。
但是第一次上传 snap 包的时候,我们首先要为其注册一个还没有被使用的名字,这很容易。
商店中所有的修订版本都可以释放到多个通道中,这些通道只是概念上定义的,以便给用户一个稳定或风险等级的参照,这些通道有:
* 稳定(stable)
* 候选(candidate)
* 测试(beta)
* 边缘(edge)
理想情况下,如果我们设置了 CI/CD 过程,那么每天或在每次更新源码时都会将其推送到边缘通道。在此过程中有两件事需要考虑。
首先在开始的时候,你最好制作一个不受限制的 snap 包,因为在这种新范例下,snap 包的大部分功能都能不受限制地工作。考虑到这一点,你的项目开始时 `confinement` 将被设置为 `devmode`(LCTT 译注:这是 `snapcraft.yaml` 中的一个键及其可选值)。这使得你在开发的早期阶段,仍然可以让你的 snap 包进入商店。一旦所有的东西都得到了 snap 包运行的安全模型的充分支持,那么就可以将 `confinement` 修改为 `strict`。
好了,假设你在限制方面已经做好了,并且也开始了一个对应边缘通道的 CI/CD 过程,但是如果你也想确保在某些情况下,master 分支的早期迭代版本永远也不会进入稳定或候选通道,那么我们可以使用 `grade` 设置。如果 snap 包的 `grade` 设置为 `devel` (LCTT 译注:这是 `snapcraft.yaml` 中的一个键及其可选值),商店将会永远禁止你将 snap 包释放到稳定和候选通道。
在这个过程中,我们有时可能想把某个修订版本发布到测试通道,让一些用户更乐于跟踪它(一个经过发布管理的版本应该比随机的日常构建更有用)。这个阶段结束后,如果希望人们仍然能保持更新,我们可以选择关闭测试通道,因为从某个时间点开始我们只计划发布到候选和稳定通道。关闭测试通道后,该通道将跟随稳定性列表中的下一个开放通道,在这里就是候选通道;而如果候选通道又跟随着稳定通道,那么这些用户最终得到的就是稳定通道中的内容。
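上面提到的这两个设置都写在 `snapcraft.yaml` 里。作为一个简单的示意(键值仅为举例,并非某个真实项目的配置),可以这样确认当前项目所声明的限制方式和风险等级:

```
$ grep -E '^(confinement|grade):' snapcraft.yaml
confinement: devmode
grade: devel
```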
### 进入 Snapcraft
那么所有这些给定的概念是如何在 snapcraft 中配合使用的?首先我们需要登录:
```
$ snapcraft login
Enter your Ubuntu One SSO credentials.
Email: [email protected]
Password: **************
Second-factor auth: 123456
```
在登录之后,我们就可以开始注册 snap 了。例如,我们想要注册一个虚构的 snap 包 awesome-database:
```
$ snapcraft register awesome-database
We always want to ensure that users get the software they expect
for a particular name.
If needed, we will rename snaps to ensure that a particular name
reflects the software most widely expected by our community.
For example, most people would expect ‘thunderbird’ to be published by
Mozilla. They would also expect to be able to get other snaps of
Thunderbird as 'thunderbird-sergiusens'.
Would you say that MOST users will expect 'a' to come from
you, and be the software you intend to publish there? [y/N]: y
You are now the publisher for 'awesome-database'
```
假设我们已经构建了 snap 包,接下来我们要做的就是把它上传到商店。我们可以在同一个命令中使用快捷方式和 `--release` 选项:
```
$ snapcraft push awesome-database_0.1_amd64.snap --release edge
Uploading awesome-database_0.1_amd64.snap [=================] 100%
Processing....
Revision 1 of 'awesome-database' created.
Channel Version Revision
stable - -
candidate - -
beta - -
edge 0.1 1
The edge channel is now open.
```
如果我们试图将其发布到稳定通道,商店将会阻止我们:
```
$ snapcraft release awesome-database 1 stable
Revision 1 (devmode) cannot target a stable channel (stable, grade: devel)
```
这样我们不会搞砸,也不会让我们的忠实用户使用它。现在,我们将最终推出一个值得发布到稳定通道的修订版本:
```
$ snapcraft push awesome-database_0.1_amd64.snap
Uploading awesome-database_0.1_amd64.snap [=================] 100%
Processing....
Revision 10 of 'awesome-database' created.
```
注意,<ruby> 版本号 <rt> version </rt></ruby>(LCTT 译注:这里指的是 snap 包名中 `0.1` 这个版本号)只是一个友好的标识符,真正重要的是商店为我们生成的<ruby> 修订版本号 <rt> Revision </rt></ruby>(LCTT 译注:这里生成的修订版本号为 `10`)。现在让我们把它释放到稳定通道:
```
$ snapcraft release awesome-database 10 stable
Channel Version Revision
stable 0.1 10
candidate ^ ^
beta ^ ^
edge 0.1 10
The 'stable' channel is now open.
```
从最终的通道映射视图(针对我们正在使用的架构)中可以看到,边缘通道将会被固定在修订版本 10 上,而测试和候选通道将跟随当前处于修订版本 10 的稳定通道。出于某些原因,我们决定专注于稳定性,并让我们的 CI/CD 推送到测试通道。这意味着我们的边缘通道将会略微过时,为了避免这种情况,我们可以关闭这个通道:
```
$ snapcraft close awesome-database edge
Arch Channel Version Revision
amd64 stable 0.1 10
candidate ^ ^
beta ^ ^
edge ^ ^
The edge channel is now closed.
```
在当前状态下,所有通道都跟随着稳定通道,因此订阅了候选、测试和边缘通道的人也将跟踪稳定通道的改动。比如就算修订版本 11 只发布到稳定通道,其他通道的人们也能看到它。
这个清单还提供了完整的体系结构视图,在本例中,我们只使用了 amd64。
### 获得更多的信息
有时过了一段时间,我们想知道商店中的某个 snap 包的历史记录和现在的状态是什么样的,这里有两个命令,一个是直截了当输出当前的状态,它会给我们一个熟悉的结果:
```
$ snapcraft status awesome-database
Arch Channel Version Revision
amd64 stable 0.1 10
candidate ^ ^
beta ^ ^
edge ^ ^
```
我们也可以通过下面的命令获得完整的历史记录:
```
$ snapcraft history awesome-database
Rev. Uploaded Arch Version Channels
3 2016-09-30T12:46:21Z amd64 0.1 stable*
...
...
...
2 2016-09-30T12:38:20Z amd64 0.1 -
1 2016-09-30T12:33:55Z amd64 0.1 -
```
### 结束语
希望这篇文章能让你对 Snap 商店能做的事情有一个大概的了解,并让更多的人开始使用它!
---
via: <https://insights.ubuntu.com/2016/11/15/making-your-snaps-available-to-the-store-using-snapcraft/>
*译者简介:*
>
> snapcraft.io 的钉子户,对 Ubuntu Core、Snaps 和 Snapcraft 有着浓厚的兴趣,并致力于将这些还在快速发展的新技术通过翻译或原创的方式介绍到中文世界。有兴趣的小伙伴也可以关注译者个人的公众号: `Snapcraft`,近期会在上面连载几篇有关 Core snap 发布策略、交付流程和验证流程的文章,欢迎围观 :)
>
>
>
作者:[Sergio Schvezov](https://insights.ubuntu.com/author/sergio-schvezov/) 译者:[Snapcrafter](https://github.com/Snapcrafter) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | null |
|
8,811 | Linux 容器演化史 | https://opensource.com/article/17/7/how-linux-containers-evolved | 2017-08-26T16:10:00 | [
"容器",
"OCI",
"Docker"
] | https://linux.cn/article-8811-1.html |
>
> 容器在过去几年内取得很大的进展。现在我们来回顾它发展的时间线。
>
>
>

### Linux 容器是如何演变的
在过去几年内,容器不仅成为了开发者们热议的话题,还受到了企业的关注。持续增长的关注使得在它的安全性、可扩展性以及互用性等方面的需求也得以增长。满足这些需求需要很大的工程量,下面我们讲讲在红帽这样的企业里,这些工程工作是如何展开的。
我在 2013 年秋季第一次遇到 Docker 公司(Docker.io)的代表,那时我们在设法使 Red Hat Enterprise Linux (RHEL) 支持 Docker 容器(现在 Docker 项目的一部分已经更名为 *Moby*)的运行。在移植过程中,我们遇到了一些问题。处理容器镜像分层所需的写时拷贝(COW)文件系统成了我们第一个重大阻碍。Red Hat 最终贡献了一些 COW 文件系统实现,包括 [Device Mapper](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/device_mapper.html)、[btrfs](https://btrfs.wiki.kernel.org/index.php/Main_Page),以及 [OverlayFS](https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt) 的第一个版本。在 RHEL 上,我们默认使用 Device Mapper,但是我们在 OverlayFS 上也已经取得了很大进展。
我们在用于启动容器的工具上遇到了第二个主要障碍。那时的上游 docker 使用 [LXC](https://linuxcontainers.org/) 工具来启动容器,然而我们不想在 RHEL 上支持 LXC 工具集。而且在与上游 docker 合作之前,我们已经与 [libvirt](https://libvirt.org/) 团队携手构建了 [virt-sandbox](http://sandbox.libvirt.org/) 工具,它使用 `libvirt-lxc` 来启动容器。
在那时,红帽的一些员工认为,换掉 LXC 工具集并添加一个桥接器,让 docker 守护进程通过 `libvirt-lxc` 与 libvirt 通讯来启动容器,是个不错的主意。但这个方案存在一些严重的顾虑。考虑下面这个例子:使用 Docker 客户端(`docker-cli`)来启动容器,在容器进程(`pid1OfContainer`)启动之前,各层调用会依次进行:
>
> **docker-cli → docker-daemon → libvirt-lxc → pid1OfContainer**
>
>
>
我不是很喜欢这个方案,因为它在启动容器的工具与最终的容器进程之间有两个守护进程。
我的团队与上游 docker 开发者合作实现了一个原生的 [Go 编程语言](https://opensource.com/article/17/6/getting-started-go) 版本的容器运行时,叫作 [libcontainer](https://github.com/opencontainers/runc/tree/master/libcontainer)。这个库作为 [OCI 运行时规范](https://github.com/opencontainers/runtime-spec/blob/master/spec.md) 的最初版实现与 runc 一同发布。
>
> **docker-cli → docker-daemon @ pid1OfContainer**
>
>
>
大多数人误认为当他们执行一个容器时,容器进程是作为 `docker-cli` 的子进程运行的。实际上他们执行的是一个客户端/服务端请求操作,容器进程是在一个完全单独的环境作为子进程运行的。这个客户端/服务端请求会导致不稳定性和潜在的安全问题,而且会阻碍一些实用特性的实现。举个例子,[systemd](https://opensource.com/business/15/10/lisa15-interview-alison-chaiken-mentor-graphics) 有个叫做套接字唤醒的特性,你可以将一个守护进程设置成仅当相应的套结字被连接时才启动。这意味着你的系统可以节约内存并按需执行服务。套结字唤醒的工作原理是 systemd 代为监听 TCP 套结字,并在数据包到达套结字时启动相应的服务。一旦服务启动完毕,systemd 将套结字交给新启动的守护进程。如果将守护进程运行在基于 docker 的容器中就会出现问题。systemd 的 unit 文件通过 Docker CLI 执行容器,然而这时 systemd 却无法简单地经由 Docker CLI 将套结字转交给 Docker 守护进程。
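为了说明这里提到的套接字唤醒,下面给出一对示意性的 systemd unit 文件(服务名 `myapp` 纯属假设;服务本身还需要能接收 systemd 交来的套接字,例如通过 `sd_listen_fds()`):

```
# /etc/systemd/system/myapp.socket(示意):由 systemd 代为监听端口
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# /etc/systemd/system/myapp.service(示意):有连接到来时才会被启动
[Service]
ExecStart=/usr/local/bin/myapp
```

而一旦 `ExecStart` 换成 `docker run ...`,套接字就必须先经过 Docker CLI 和 docker 守护进程,systemd 无法直接把它交给容器里的服务,这正是上面所说的问题。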
类似这样的问题让我们意识到我们需要一个运行容器的替代方案。
#### 容器编排问题
上游的 docker 项目简化了容器的使用过程,同时也是一个绝佳的 Linux 容器学习工具。你可以通过一条简单的命令快速地体验如何启动一个容器,例如运行 `docker run -ti fedora sh` 然后你就立即处于一个容器之中。
当开始把许多容器组织成一个功能更为强大的应用时,你才能体会到容器真正的能力。但是问题在于伴随多容器应用而来的高复杂度使得简单的 Docker 命令无法胜任编排工作。你要如何管理容器应用在有限资源的集群节点间的布局与编排?如何管理它们的生命周期等等?
在第一届 DockerCon,至少有 7 种不同的公司/开源项目展示了其容器的编排方案。红帽演示了 [OpenShift](https://www.openshift.com/) 的 [geard](https://openshift.github.io/geard/) 项目,它基于 OpenShift v2 的容器(叫作 gears)。红帽觉得我们需要重新审视容器编排,而且可能要与开源社区的其他人合作。
Google 则演示了 Kubernetes 容器编排工具,它来源于 Google 对其自内部架构进行编排时所积累的知识经验。OpenShift 决定放弃 Gear 项目,开始和 Google 一同开发 Kubernetes。 现在 Kubernetes 是 GitHub 上最大的社区项目之一。
#### Kubernetes
Kubernetes 原先被设计成使用 Google 的 [lmctfy](https://github.com/google/lmctfy) 容器运行时环境来完成工作。在 2014 年夏天,lmctfy 兼容了 docker。Kubernetes 还会在 kubernetes 集群的每个节点运行一个 [kubelet](https://kubernetes.io/docs/admin/kubelet/) 守护进程,这意味着原先使用 docker 1.8 的 kubernetes 工作流看起来是这样的:
>
> **kubelet → dockerdaemon @ PID1**
>
>
>
回退到了双守护进程的模式。
然而更糟糕的是,每次 docker 的新版本发布都使得 kubernetes 无法工作。Docker 1.10 切换镜像底层存储方案导致所有镜像重建。而 Docker 1.11 开始使用 `runc` 来启动镜像:
>
> **kubelet → dockerdaemon @ runc @PID1**
>
>
>
Docker 1.12 则增加了一个容器守护进程用于启动容器。其主要目的是为了支持 Docker Swarm (Kubernetes 的竞争者之一):
>
> **kubelet → dockerdaemon → containerd @runc @ pid1**
>
>
>
如上所述,*每一次* docker 发布都破坏了 Kubernetes 的功能,这也是为什么 Kubernetes 和 OpenShift 请求我们为他们提供老版本 Docker 的原因。
现在我们有了一个三守护进程的系统,只要任何一个出现问题,整个系统都将崩溃。
### 走向容器标准化
#### CoreOS、rkt 和其它替代运行时
因为 docker 运行时带来的问题,几个组织都在寻求一个替代的运行时。CoreOS 就是其中之一。他们提供了一个 docker 容器运行时的替代品,叫 *rkt* (rocket)。他们同时还引入一个标准容器规范,称作 *appc* (App Container)。从根本上讲,他们是希望能使得所有人都使用一个标准规范来管理容器镜像中的应用。
这一行为为标准化工作树立了一面旗帜。当我第一次开始和上游 docker 合作时,我最大的担忧就是最终我们会分裂出多个标准。我不希望类似 RPM 和 DEB 之间的战争影响接下来 20 年的 Linux 软件部署。appc 的一个成果是它说服了上游 docker 与开源社区合作创建了一个称作<ruby> <a href="https://www.opencontainers.org/"> 开放容器计划 </a> <rp> ( </rp> <rt> Open Container Initiative </rt> <rp> ) </rp></ruby>(OCI) 的标准团体。
OCI 已经着手制定两个规范:
[OCI 运行时规范](https://github.com/opencontainers/runtime-spec/blob/master/spec.md):OCI 运行时规范“旨在规范容器的配置、执行环境以及生命周期”。它定义了容器的磁盘存储,描述容器内运行的应用的 JSON 文件,容器的生成和执行方式。上游 docker 贡献了 libcontainer 并构建了 runc 作为 OCI 运行时规范的默认实现。
[OCI 镜像文件格式规范](https://github.com/opencontainers/image-spec/blob/master/spec.md):镜像文件格式规范主要基于上游 docker 所使用的镜像格式,定义了容器仓库中实际存储的容器镜像格式。该规范使得应用开发者能为应用使用单一的标准化格式。一些 appc 中描述的概念被加入到 OCI 镜像格式规范中得以保留。这两份规范 1.0 版本的发布已经临近(LCTT 译注:[已经发布](/article-8778-1.html))。上游 docker 已经同意在 OCI 镜像规范定案后支持该规范。Rkt 现在既支持运行 OCI 镜像也支持传统的上游 docker 镜像。
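为了对这里所说的 JSON 文件有个直观印象,下面是一个经过大量删节的运行时 `config.json` 示意(完整的样例文件可以用 `runc spec` 生成):

```
{
  "ociVersion": "1.0.0",
  "root": { "path": "rootfs", "readonly": true },
  "process": {
    "terminal": true,
    "cwd": "/",
    "args": [ "sh" ],
    "env": [ "PATH=/usr/sbin:/usr/bin:/sbin:/bin" ]
  },
  "hostname": "demo"
}
```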
OCI 通过为工业界提供容器镜像与运行时标准化的环境,帮助在工具与编排领域解放创新的力量。
#### 抽象运行时接口
得益于标准化工作, Kubernetes 编排领域也有所创新。作为 Kubernetes 的一大支持者,CoreOS 提交了一堆补丁,使 Kubernetes 除了 docker 引擎外还能通过 rkt 运行容器并且与容器通讯。Google 和 Kubernetes 上游预见到增加这些补丁和将来可能添加的容器运行时接口将给 Kubernetes 带来的代码复杂度,他们决定实现一个叫作<ruby> 容器运行时接口 <rp> ( </rp> <rt> Container Runtime Interface </rt> <rp> ) </rp></ruby>(CRI) 的 API 协议规范。于是他们将 Kubernetes 由原来的直接调用 docker 引擎改为调用 CRI,这样任何人都可以通过实现服务器端的 CRI 来创建支持 Kubernetes 的容器运行时。Kubernetes 上游还为 CRI 开发者们创建了一个大型测试集以验证他们的运行时对 Kubernetes 的支持情况。开发者们还在努力地移除 Kubernetes 对 docker 引擎的调用并将它们隐藏在一个叫作 docker-shim 的薄抽象层后。
### 容器工具的创新
#### 伴随 skopeo 而来的容器仓库创新
几年前我们正与 Atomic 项目团队合作构建 [atomic CLI](https://github.com/projectatomic/atomic)。我们希望实现一个功能,在镜像还在镜像仓库时查看它的细节。在那时,查看仓库中的容器镜像相关 JSON 文件的唯一方法是将镜像拉取到本地服务器再通过 `docker inspect` 来查看 JSON 文件。这些镜像可能会很大,上至几个 GiB。为了允许用户在不拉取镜像的情况下查看镜像细节,我们希望在 `docker inspect` 接口添加新的 `--remote` 参数。上游 docker 拒绝了我们的代码拉取请求(PR),告知我们他们不希望将 Docker CLI 复杂化,我们可以构建我们自己的工具去实现相同的功能。
我们的团队在 [Antonio Murdaca](https://twitter.com/runc0m) 的领导下执行这个提议,构建了 [skopeo](https://github.com/projectatomic/skopeo)。Antonio 没有止步于拉取镜像相关的 JSON 文件,而是决定实现一个完整的协议,用于在容器仓库与本地主机之间拉取与推送容器镜像。
skopeo 现在被 atomic CLI 大量用于类似检查容器更新的功能以及 [atomic 扫描](https://developers.redhat.com/blog/2016/05/02/introducing-atomic-scan-container-vulnerability-detection/) 当中。Atomic 也使用 skopeo 取代上游 docker 守护进程拉取和推送镜像的功能。
#### Containers/image
我们也曾和 CoreOS 讨论过在 rkt 中使用 skopeo 的可能,然而他们表示不希望运行一个外部的协助程序,但是会考虑使用 skopeo 所使用的代码库。于是我们决定将 skopeo 分离为一个代码库和一个可执行程序,创建了 [image](https://github.com/containers/image) 代码库。
[containers/images](https://github.com/containers/image) 代码库和 skopeo 被几个其它上游项目和云基础设施工具所使用。Skopeo 和 containers/image 已经支持 docker 和多个存储后端,而且能够在容器仓库之间移动容器镜像,还拥有许多酷炫的特性。[skopeo 的一个优点](http://rhelblog.redhat.com/2017/05/11/skopeo-copy-to-the-rescue/)是它不需要任何守护进程的协助来完成任务。Containers/image 代码库的诞生使得类似[容器镜像签名](https://access.redhat.com/articles/2750891)等增强功能得以实现。
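举两个例子(镜像名随意选取,仅为示意),不需要任何守护进程,skopeo 就能检查远端镜像的元数据,或者在不同位置之间搬运镜像:

```
# 直接查看仓库中镜像的 JSON 元数据,无需先把几个 GiB 的镜像拉到本地
$ skopeo inspect docker://docker.io/library/fedora:latest

# 把镜像从容器仓库复制到本地目录(也可以复制到另一个仓库)
$ skopeo copy docker://docker.io/library/fedora:latest dir:/tmp/fedora
```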
#### 镜像处理与扫描的创新
我在前文提到 atomic CLI。我们构建这个工具是为了给容器添加不适合 docker CLI 或者我们无法在上游 docker 中实现的特性。我们也希望获得足够灵活性,将其用于开发额外的容器运行时、工具和存储系统。Skopeo 就是一例。
我们想要在 atomic 实现的一个功能是 `atomic mount`。从根本上讲,我们希望从 Docker 镜像存储(上游 docker 称之为 graph driver)中获取内容,把镜像挂在到某处,以便用工具来查看该镜像。如果你使用上游的 docker,查看镜像内容的唯一方法就是启动该容器。如果其中有不可信的内容,执行容器中的代码来查看它会有潜在危险。通过启动容器查看镜像内容的另一个问题是所需的工具可能没有被包含在容器镜像当中。
大多数容器镜像扫描器遵循以下流程:它们连接到 Docker 的套结字,执行一个 `docker save` 来创建一个 tar 打包文件,然后在磁盘上分解这个打包文件,最后查看其中的内容。这是一个很慢的过程。
通过 `atomic mount`,我们希望直接使用 Docker graph driver 挂载镜像。如果 docker 守护进程使用 device mapper,我们将挂载这个设备。如果它使用 overlay,我们会挂载 overlay。这个操作很快而且满足我们的需求。现在你可以执行:
```
# atomic mount fedora /mnt
# cd /mnt
```
然后开始探查内容。你完成相应工作后,执行:
```
# atomic umount /mnt
```
我们在 `atomic scan` 中使用了这一特性,实现了一个快速的容器扫描器。
#### 工具协作的问题
其中一个严重的问题是 `atomic mount` 隐式地执行这些工作。Docker 守护进程不知道有另一个进程在使用这个镜像。这会导致一些问题(例如,如果你先挂载了 Fedora 镜像,然后某个人执行了 `docker rmi fedora` 命令,docker 守护进程移除镜像时就会产生奇怪的操作失败,同时报告说相应的资源忙碌)。Docker 守护进程可能因此进入一个奇怪的状态。
#### 容器存储系统
为了解决这个问题,我们开始尝试将从上游 docker 守护进程剥离出来的 graph driver 代码拉取到我们的代码库中。Docker 守护进程在内存中为 graph driver 完成所有锁的获取。我们想要将这些锁操作转移到文件系统中,这样我们可以支持多个不同的进程来同时操作容器的存储系统,而不用通过单一的守护进程。
我们创建了 [containers/storage](https://github.com/containers/storage) 项目,实现了容器运行、构建、存储所需的所有写时拷贝(COW)特性,同时不再需要一个单一进程来控制和监控这个过程(也就是不需要守护进程)。现在 skopeo 以及其它工具和项目可以直接利用镜像的存储系统。其它开源项目也开始使用 containers/storage,在某些时候,我们也会把这些项目合并回上游 docker 项目。
### 驶向创新
当 Kubernetes 在一个节点上使用 docker 守护进程运行容器时会发生什么?首先,Kubernetes 执行一条类似如下的命令:
```
kubelet run nginx -image=nginx
```
这个命令告诉 kubelet 在节点上运行 NGINX 应用程序。kubelet 调用 CRI 请求启动 NGINX 应用程序。在这时,实现了 CRI 规范的容器运行时必须执行以下步骤:
1. 检查本地是否存在名为 `nginx` 的容器。如果没有,容器运行时会在容器仓库中搜索标准的容器镜像。
2. 如果镜像不存在于本地,从容器仓库下载到本地系统。
3. 使用容器存储系统(通常是写时拷贝存储系统)解析下载的容器镜像并挂载它。
4. 使用标准的容器运行时执行容器。
让我们看看上述过程使用到的特性:
1. OCI 镜像格式规范定义了容器仓库存储的标准镜像格式。
2. Containers/image 代码库实现了从容器仓库拉取镜像到容器主机所需的所有特性。
3. Containers/storage 提供了在写时拷贝的存储系统上探查并处理 OCI 镜像格式的代码库。
4. OCI 运行时规范以及 `runc` 提供了执行容器的工具(同时也是 docker 守护进程用来运行容器的工具)。
这意味着我们可以利用这些工具来使用容器,而无需一个大型的容器守护进程。
在中等到大规模的基于 DevOps 的持续集成/持续交付环境下,效率、速度和安全性至关重要。只要你的工具遵循 OCI 规范,开发者和执行者就能在持续集成、持续交付到生产环境的自动化中自然地使用最佳的工具。大多数的容器工具被隐藏在容器编排或上层容器平台技术之下。我们预想着有朝一日,运行时和镜像工具的选择会变成容器平台的一个安装选项。
#### 系统(独立)容器
在 Atomic 项目中我们引入了<ruby> 原子主机 <rt> atomic host </rt></ruby>,一种新的操作系统构建方式:所有的软件可以被“原子地”升级并且大多数应用以容器的形式运行在操作系统中。这个平台的目的是证明将来所有的软件都能部署在 OCI 镜像格式中并且使用标准协议从容器仓库中拉取,然后安装到系统上。用容器镜像的形式发布软件允许你以不同的速度升级应用程序和操作系统。传统的 RPM/yum/DNF 包分发方式把应用更新锁定在操作系统的生命周期中。
在以容器部署基础设施时多数会遇到一个问题——有时一些应用必须在容器运行时执行之前启动。我们看一个使用 docker 的 Kubernetes 的例子:Kubernetes 为了将 pods 或者容器部署在独立的网络中,要求先建立一个网络。现在默认用于创建网络的守护进程是 [flanneld](https://github.com/coreos/flannel),而它必须在 docker 守护进程之前启动,以支持 docker 网络接口来运行 Kubernetes 的 pods。而且,flanneld 使用 [etcd](https://github.com/coreos/etcd) 来存储数据,这个守护进程必须在 flanneld 启动之前运行。
如果你想把 etcd 和 flanneld 部署到容器镜像中,那就陷入了鸡与鸡蛋的困境中。我们需要容器运行时来启动容器化的应用,但这些应用又需要在容器运行时之前启动。我见过几个取巧的方法尝试解决这个问题,但这些方法都不太干净利落。而且 docker 守护进程当前没有合适的方法来配置容器启动的优先级顺序。我见过一些提议,但它们看起来和 SysVInit 所使用的启动服务的方式相似(我们知道它带来的复杂度)。
#### systemd
用 systemd 替代 SysVInit 的原因之一就是为了处理服务启动的优先级和顺序,我们为什么不充分利用这种技术呢?在 Atomic 项目中,我们决定让主机在没有容器运行时守护进程的情况下也能启动容器,尤其是在系统启动早期。我们增强了 atomic CLI 的功能,让用户可以安装容器镜像。当你执行 `atomic install --system etcd` 时,它将利用 skopeo 从外部的容器仓库拉取 etcd 的 OCI 镜像,然后把它分解(扩展)到 OSTree 底层存储中。因为 etcd 运行在生产环境中,我们把镜像处理为只读。接着 `atomic` 命令抓取容器镜像中的 systemd 的 unit 文件模板,用它在磁盘上创建 unit 文件来启动镜像。这个 unit 文件实际上使用 `runc` 来在主机上启动容器(虽然 `runc` 不是必需的)。
执行 `atomic install --system flanneld` 时会进行相似的过程,但是这时 flanneld 的 unit 文件中会指明它依赖 etcd。
在系统引导时,systemd 会保证 etcd 先于 flanneld 运行,并且直到 flanneld 启动完毕后再启动容器运行时。这样我们就能把 docker 守护进程和 Kubernetes 部署到系统容器当中。这也意味着你可以启动一台原子主机或者使用传统的基于 rpm 的操作系统,让整个容器编排工具栈运行在容器中。这是一个强大的特性,因为用户往往希望改动容器主机时不受这些组件影响。而且,它保持了主机的操作系统的占用最小化。
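下面用一个示意性的 unit 文件来说明这种机制(路径和具体内容均为假设,实际文件由 `atomic` 根据镜像内的模板生成,会有所不同):

```
# /etc/systemd/system/etcd.service(示意)
[Unit]
Description=Etcd system container

[Service]
ExecStart=/usr/bin/runc run --bundle /var/lib/containers/atomic/etcd etcd
ExecStop=/usr/bin/runc kill etcd
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

而 flanneld 的 unit 只需在 `[Unit]` 小节中写上 `Requires=etcd.service` 和 `After=etcd.service`,systemd 就会保证 etcd 先于 flanneld 启动。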
大家甚至讨论把传统的应用程序部署到独立/系统容器或者被编排的容器中。设想一下,可以用 `atomic install --system httpd` 命令安装一个 Apache 容器,这个容器可以和用 RPM 安装的 httpd 服务以相同的方式启动(`systemctl start httpd` ,区别是这个容器 httpd 运行在一个容器中)。存储系统可以是本地的,换言之,`/var/www` 是从宿主机挂载到容器当中的,而容器监听着本地网络的 80 端口。这表明了我们可以在不使用容器守护进程的情况下将传统的负载组件部署到一个容器中。
### 构建容器镜像
在我看来,在过去 4 年来容器发展方面最让人失落的是缺少容器镜像构建机制上的创新。容器镜像不过是将一些 tar 包文件与 JSON 文件一起打包形成的文件。基础镜像则是一个 rootfs 与一个描述该基础镜像的 JSON 文件。然后当你增加镜像层时,层与层之间的差异会被打包,同时 JSON 文件会做出相应修改。这些镜像层与基础文件一起被打包,共同构成一个容器镜像。
现在几乎所有人都使用 `docker build` 与 Dockerfile 格式来构建镜像。上游 docker 已经在几年前停止了接受修改或改进 Dockerfile 格式的拉取请求(PR)了。Dockerfile 在容器的演进过程中扮演了重要角色,开发者和管理员/运维人员可以通过简单直接的方式来构建镜像;然而我觉得 Dockerfile 就像一个简陋的 bash 脚本,还带来了一些尚未解决的问题,例如:
* 使用 Dockerfile 创建容器镜像要求运行着 Docker 守护进程。
+ 没有可以独立于 docker 命令的标准工具用于创建 OCI 镜像。
+ 甚至类似 `ansible-containers` 和 OpenShift S2I (Source2Image) 的工具也在底层使用 `docker-engine`。
* Dockerfile 中的每一行都会创建一个新的镜像,这有助于创建容器的开发过程,这是因为构建工具能够识别 Dockerfile 中的未改动行,复用已经存在的镜像从而避免了未改动行的重复执行。但这个特性会产生*大量*的镜像层。
+ 因此,不少人希望构建机制能压制镜像消除这些镜像层。我猜想上游 docker 最后应该接受了一些提交满足了这个需求。
* 要从受保护的站点拉取内容到容器镜像,你往往需要某种密钥。比如你为了添加 RHEL 的内容到镜像中,就需要访问 RHEL 的证书和订阅。
+ 这些密钥最终会被以层的方式保存在镜像中。开发者要费很大工夫去移除它们。
+ 为了允许在 docker 构建过程中挂载数据卷,我们在我们维护的 projectatomic/docker 中加入了 `-v volume` 选项,但是这些修改没有被上游 docker 接受。
* 构建过程的中间产物最终会保留在容器镜像中,所以尽管 Dockerfile 易于学习,当你想要了解你要构建的镜像时甚至可以在笔记本上构建容器,但它在大规模企业环境下还不够高效。然而在自动化容器平台下,你应该不会关心用于构建 OCI 镜像的方式是否高效。
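上面提到的“每一行都会创建一个新的镜像层”的问题,用一个很小的 Dockerfile 就能说明(仅为示意):

```
FROM fedora
RUN dnf -y install httpd     # 产生一层
RUN dnf -y install php       # 又产生一层
RUN dnf clean all            # 清理动作本身也占一层,并且删不掉前面层里已有的内容

# 因此实践中常常把命令用 && 串成一行,以减少层数:
# RUN dnf -y install httpd php && dnf clean all
```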
### Buildah 起航
在 DevConf.cz 2017,我让我们团队的 [Nalin Dahyabhai](https://twitter.com/nalind) 考虑构建被我称为 `containers-coreutils` 的工具,它基本上就是基于 containers/storage 和 containers/image 库构建的一系列可以使用类似 Dockerfile 语法的命令行工具。Nalin 为了取笑我的波士顿口音,决定把它叫做 [buildah](https://github.com/projectatomic/buildah)。我们只需要少量的 buildah 原语就可以构建一个容器镜像:
* 最小化 OS 镜像、消除不必要的工具是主要的安全原则之一。因为黑客在攻击应用时需要一些工具,如果类似 `gcc`,`make`,`dnf` 这样的工具根本不存在,就能阻碍攻击者的行动。
* 减小容器的体积总是有益的,因为这些镜像会通过互联网拉取与推送。
* 使用 Docker 进行构建的基本原理是在容器构建的根目录下利用命令安装或编译软件。
* 执行 `run` 命令要求所有的可执行文件都包含在容器镜像内。只是在容器镜像中使用 `dnf` 就需要完整的 Python 栈,即使在应用中从未使用到 Python。
* `ctr=$(buildah from fedora)`:
+ 使用 containers/image 从容器仓库拉取 Fedora 镜像。
+ 返回一个容器 ID (`ctr`)。
* `mnt=$(buildah mount $ctr)`:
+ 挂载新建的容器镜像(`$ctr`).
+ 返回挂载点路径。
+ 现在你可以使用挂载点来写入内容。
* `dnf install httpd --installroot=$mnt`:
+ 你可以使用主机上的命令把内容重定向到容器中,这样你可以把密钥保留在主机而不导入到容器内,同时构建所用的工具也仅仅存在于主机上。
+ 容器内不需要包含 `dnf` 或者 Python 栈,除非你的应用用到它们。
* `cp foobar $mnt/dir`:
+ 你可以使用任何 bash 中可用的命令来构造镜像。
* `buildah commit $ctr`:
+ 你可以随时创建一个镜像层,镜像的分层由用户而不是工具来决定。
* `buildah config --env container=oci --entrypoint /usr/bin/httpd $ctr`:
+ Buildah 支持所有 Dockerfile 的命令。
* `buildah run $ctr dnf -y install httpd`:
+ Buildah 支持 `run` 命令,但它是在一个锁定的容器内利用 `runc` 执行命令,而不依赖容器运行时守护进程。
* `buildah build-using-dockerfile -f Dockerfile .`:
+ 我们希望将移植类似 `ansible-containers` 和 OpenShift S2I 这样的工具,改用 `buildah` 以去除对容器运行时守护进程的依赖。
+ 使用与生产环境相同的容器运行时构建容器镜像会遇到另一个大问题。为了保证安全性,我们需要把权限限制到支持容器构建与运行所需的最小权限。构建容器比起运行容器往往需要更多额外的权限。举个例子,我们默认允许 `mknod` 权限,这会允许进程创建设备节点。有些包的安装会尝试创建设备节点,然而在生产环境中的应用几乎都不会这么做。如果默认移除生产环境中容器的 `mknod` 特权会让系统更为安全。
+ 另一个例子是,容器镜像默认是可读写的,因为安装过程意味着向 `/usr` 存入软件包。然而在生产环境中,我强烈建议把所有容器设为只读模式,仅仅允许它们写入 tmpfs 或者是挂载了数据卷的目录。通过分离容器的构建与运行环境,我们可以更改这些默认设置,提供一个更为安全的环境。
+ 当然,buildah 可以使用 Dockerfile 构建容器镜像。
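把上面这些原语串在一起,就是一个不依赖任何容器守护进程、在普通 shell 里即可执行的构建脚本(以下内容仅为示意,拷贝的文件名与入口路径均为假设):

```
ctr=$(buildah from fedora)
mnt=$(buildah mount $ctr)

# 软件包、订阅密钥和构建工具都留在主机上,只把结果写进容器
dnf -y install httpd --installroot=$mnt
cp index.html $mnt/var/www/html/

buildah config --entrypoint /usr/sbin/httpd $ctr
buildah commit $ctr my-httpd-image

buildah umount $ctr
```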
### CRI-O :一个 Kubernetes 的运行时抽象
Kubernetes 添加了<ruby> 容器运行时接口 <rt> Container Runtime Interface </rt></ruby>(CRI)接口,使 pod 可以在任何运行时上工作。虽然我不是很喜欢在我的系统上运行太多的守护进程,然而我们还是加了一个。我的团队在 [Mrunal Patel](https://twitter.com/mrunalp) 的领导下于 2016 年后期开始构建 [CRI-O](https://github.com/Kubernetes-incubator/cri-o) 守护进程。这是一个用来运行基于 OCI 的应用程序的 CRI 守护进程。理论上,将来我们能够把 CRI-O 的代码直接并入 kubelet 中从而消除这个多余的守护进程。
不像其它容器运行时,CRI-O 的唯一目的就只是为了满足 Kubernetes 的需求。记得前文描述的 Kubernetes 运行容器的条件。
Kubernetes 传递消息给 kubelet 告知其运行 NGINX 服务器:
1. kubelet 唤醒 CRI-O 并告知它运行 NGINX。
2. CRI-O 回应 CRI 请求。
3. CRI-O 在容器仓库查找 OCI 镜像。
4. CRI-O 使用 containers/image 从仓库拉取镜像到主机。
5. CRI-O 使用 containers/storage 解压镜像到本地磁盘。
6. CRI-O 按照 OCI 运行时规范(通常使用 `runc`)启动容器。如前文所述,Docker 守护进程也同样使用 `runc` 启动它的容器。
7. 按照需要,kubelet 也可以使用替代的运行时启动容器,例如 Clear Containers `runv`。
CRI-O 旨在成为稳定的 Kubernetes 运行平台。只有通过完整的 Kubernetes 测试集后,新版本的 CRI-O 才会被推出。所有提交到 <https://github.com/Kubernetes-incubator/cri-o> 的拉取请求都会运行完整的 Kubernetes 测试集。没有通过测试集的拉取请求都不会被接受。CRI-O 是完全开放的,我们已经收到了来自 Intel、SUSE、IBM、Google、Hyper.sh 等公司的代码贡献。即使不是红帽想要的特性,只要通过一定数量维护者的同意,提交给 CRI-O 的补丁就会被接受。
### 小结
我希望这份深入的介绍能够帮助你理解 Linux 容器的演化过程。Linux 容器曾经陷入一种各自为营的困境,Docker 建立起了镜像创建的事实标准,简化了容器的使用工具。OCI 则意味着业界在核心镜像格式与运行时方面的合作,这促进了工具在自动化效率、安全性、高可扩展性、易用性方面的创新。容器使我们能够以一种新奇的方式部署软件——无论是运行于主机上的传统应用还是部署在云端的微服务。而在许多方面,这一切还仅仅是个开始。
---
作者简介:
Daniel J Walsh - Daniel 有将近 30 年的计算机安全领域工作经验。他在 2001 年 8 月加入 Red Hat。
via: <https://opensource.com/article/17/7/how-linux-containers-evolved>
作者:[Daniel J Walsh](https://opensource.com/users/rhatdan) 译者:[haoqixu](https://github.com/haoqixu) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In the past few years, containers have become a hot topic among not just developers, but also enterprises. This growing interest has caused an increased need for security improvements and hardening, and preparing for scaleability and interoperability. This has necessitated a lot of engineering, and here's the story of how much of that engineering has happened at an enterprise level at Red Hat.
When I first met up with representatives from Docker Inc. (Docker.io) in the fall of 2013, we were looking at how to make Red Hat Enterprise Linux (RHEL) use Docker containers. (Part of the Docker project has since been rebranded as *Moby*.) We had several problems getting this technology into RHEL. The first big hurdle was getting a supported Copy On Write (COW) file system to handle container image layering. Red Hat ended up contributing a few COW implementations, including [Device Mapper](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/device_mapper.html), [btrfs](https://btrfs.wiki.kernel.org/index.php/Main_Page), and the first version of [OverlayFS](https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt). For RHEL, we defaulted to Device Mapper, although we are getting a lot closer on OverlayFS support.
The next major hurdle was on the tooling to launch the container. At that time, upstream docker was using [LXC](https://linuxcontainers.org/) tools for launching containers, and we did not want to support LXC tools set in RHEL. Prior to working with upstream docker, I had been working with the [libvirt](https://libvirt.org/) team on a tool called [virt-sandbox](http://sandbox.libvirt.org/), which used **libvirt-lxc** for launching containers.
At the time, some people at Red Hat thought swapping out the LXC tools and adding a bridge so the Docker daemon would communicate with libvirt using **libvirt-lxc** to launch containers was a good idea. There were serious concerns with this approach. Consider the following example of starting a container with the Docker client (**docker-cli**) and the layers of calls before the container process (**pid1OfContainer**) is started:
**docker-cli → docker-daemon → libvirt-lxc → pid1OfContainer**
I did not like the idea of having two daemons between your tool to launch containers and the final running container.
My team worked hard with the upstream docker developers on a native [Go programming language](https://opensource.com/article/17/6/getting-started-go) implementation of the container runtime, called [libcontainer](https://github.com/opencontainers/runc/tree/master/libcontainer). This library eventually got released as the initial implementation of the [OCI Runtime Specification](https://github.com/opencontainers/runtime-spec) along with runc.
**docker-cli → docker-daemon @ pid1OfContainer**
Although most people mistakenly think that when they execute a container, the container process is a child of the **docker-cli**, they actually have executed a client/server operation and the container process is running as a child of a totally separate environment. This client/server operation can lead to instability and potential security concerns, and it blocks useful features. For example, [systemd](https://opensource.com/business/15/10/lisa15-interview-alison-chaiken-mentor-graphics) has a feature called socket activation, where you can set up a daemon to run only when a process connects to a socket. This means your system uses less memory and only has services executing when they are needed. The way socket activation works is systemd listens at a TCP socket, and when a packet arrives for the socket, systemd activates the service that normally listens on the socket. Once the service is activated, systemd hands the socket to the newly started daemon. Moving this daemon into a Docker-based container causes issues. The unit file would start the container using the Docker CLI and there was no easy way for systemd to pass the connected socket to the Docker daemon through the Docker CLI.
Problems like this made us realize that we needed alternate ways to run containers.
### The container orchestration problem
The upstream docker project made using containers easy, and it continues to be a great tool for learning about Linux containers. You can quickly experience launching a container by running a simple command like **docker run -ti fedora sh** and instantly you are in a container.
The real power of containers comes about when you start to run many containers simultaneously and hook them together into a more powerful application. The problem with setting up a multi-container application is the complexity quickly grows and wiring it up using simple Docker commands falls apart. How do you manage the placement or orchestration of container applications across a cluster of nodes with limited resources? How does one manage their lifecycle, and so on?
At the first DockerCon, at least seven different companies/open source projects showed how you could orchestrate containers. Red Hat's [OpenShift](https://www.openshift.com/) had a project called [geard](https://openshift.github.io/geard/), loosely based on OpenShift v2 containers (called "gears"), which we were demonstrating. Red Hat decided that we needed to re-look at orchestration and maybe partner with others in the open source community.
Google was demonstrating [Kubernetes](https://opensource.com/resources/what-is-kubernetes) container orchestration based on all of the knowledge Google had developed in orchestrating their own internal architecture. OpenShift decided to drop our Gear project and start working with Google on Kubernetes. Kubernetes is now one of the largest community projects on GitHub.
#### Kubernetes
Kubernetes was developed to use Google's [lmctfy](https://github.com/google/lmctfy) container runtime. Lmctfy was ported to work with Docker during the summer of 2014. Kubernetes runs a daemon on each node in the Kubernetes cluster called a [kubelet](https://kubernetes.io/docs/admin/kubelet/). This means the original Kubernetes with Docker 1.8 workflow looked something like:
**kubelet → dockerdaemon @ PID1**
Back to the two-daemon system.
But it gets worse. With every release of Docker, Kubernetes broke. Docker 1.10 switched the backing store, causing a rebuilding of all images. Docker 1.11 started using **runc** to launch containers:
**kubelet → dockerdaemon @ runc @PID1**
Docker 1.12 added a container daemon to launch containers. Its main purpose was to satisfy Docker Swarm (a Kubernetes competitor):
**kubelet → dockerdaemon → containerd @runc @ pid1**
As was stated previously, *every* Docker release has broken Kubernetes functionality, which is why Kubernetes and OpenShift require us to ship older versions of Docker for their workloads.
Now we have a three-daemon system, where if anything goes wrong on any of the daemons, the entire house of cards falls apart.
## Toward container standardization
### CoreOS, rkt, and the alternate runtime
Due to the issues with the Docker runtime, several organizations were looking at alternative runtimes. One such organization was CoreOS. CoreOS had offered an alternative container runtime to upstream docker, called *rkt* (rocket). They also introduced a standard container specification called *appc* (App Container). Basically, they wanted to get everyone to use a standard specification for how you store applications in a container image bundle.
This threw up red flags. When I first started working on containers with upstream docker, my biggest fear is that we would end up with multiple specifications. I did not want an RPM vs. Debian-like war to affect the next 20 years of shipping Linux software. One good outcome from the appc introduction was that it convinced upstream docker to work with the open source community to create a standards body called the [Open Container Initiative](https://www.opencontainers.org/) (OCI).
The OCI has been working on two specifications:
** OCI Runtime Specification**: The OCI Runtime Specification "aims to specify the configuration, execution environment, and lifecycle of a container." It defines what a container looks like on disk, the JSON file that describes the application(s) that will run within the container, and how to spawn and execute the container. Upstream docker contributed the libcontainer work and built runc as a default implementation of the OCI Runtime Specification.
** OCI Image Format Specification**: The Image Format Specification is based mainly on the upstream docker image format and defines the actual container image bundle that sits at container registries. This specification allows application developers to standardize on a single format for their applications. Some of the ideas described in appc, although it still exists, have been added to the OCI Image Format Specification. Both of these OCI specifications are nearing 1.0 release. Upstream docker has agreed to support the OCI Image Specification once it is finalized. Rkt now supports running OCI images as well as traditional upstream docker images.
The Open Container Initiative, by providing a place for the industry to standardize around the container image and the runtime, has helped free up innovation in the areas of tooling and orchestration.
### Abstracting the runtime interface
One of the innovations taking advantage of this standardization is in the area of Kubernetes orchestration. As a big supporter of the Kubernetes effort, CoreOS submitted a bunch of patches to Kubernetes to add support for communicating and running containers via rkt in addition to the upstream docker engine. Google and upstream Kubernetes saw that adding these patches and possibly adding new container runtime interfaces in the future was going to complicate the Kubernetes code too much. The upstream Kubernetes team decided to implement an API protocol specification called the Container Runtime Interface (CRI). Then they would rework Kubernetes to call into CRI rather than to the Docker engine, so anyone who wants to build a container runtime interface could just implement the server side of the CRI and they could support Kubernetes. Upstream Kubernetes created a large test suite for CRI developers to test against to prove they could service Kubernetes. There is an ongoing effort to remove all of Docker-engine calls from Kubernetes and put them behind a shim called the docker-shim.
## Innovations in container tooling
### Container registry innovations with skopeo
A few years ago, we were working with the Project Atomic team on the [atomic CLI](https://github.com/projectatomic/atomic) . We wanted the ability to examine a container image when it sat on a container registry. At that time, the only way to look at the JSON data associated with a container image at a container registry was to pull the image to the local server and then you could use **docker inspect** to read the JSON files. These images can be huge, up to multiple gigabytes. Because we wanted to allow users to examine the images and decide not to pull them, we wanted to add a new **--remote** interface to **docker inspect**. Upstream docker rejected the pull request, telling us that they did not want to complicate the Docker CLI, and that we could easily build our own tooling to do the same.
My team, led by [Antonio Murdaca](https://twitter.com/runc0m), ran with the idea and created [skopeo](https://github.com/projectatomic/skopeo). Antonio did not stop at just pulling the JSON file associated with the image—he decided to implement the entire protocol for pulling and pushing container images from container registries to/from the local host.
Skopeo is now used heavily within the atomic CLI for things such as checking for new updates for containers and inside of [atomic scan](https://developers.redhat.com/blog/2016/05/02/introducing-atomic-scan-container-vulnerability-detection/). Atomic also uses skopeo for pulling and pushing images, instead of using the upstream docker daemon.
### Containers/image
We had been talking to CoreOS about potentially using skopeo with rkt, and they said that they did not want to **exec** out to a helper application, but would consider using the library that skopeo used. We decided to split skopeo apart into a library and executable and created [containers/image](https://github.com/containers/image).
The [containers/image](https://github.com/containers/image) library and skopeo are used in several other upstream projects and cloud infrastructure tools. Skopeo and containers/image have evolved to support multiple storage backends in addition to Docker, and it has the ability to move container images between container registries and many cool features. A [nice thing about skopeo](http://rhelblog.redhat.com/2017/05/11/skopeo-copy-to-the-rescue/) is it does not require any daemons to do its job. The breakout of containers/image library has also allowed us to add enhancements such as [container image signing](https://access.redhat.com/articles/2750891).
### Innovations in image handling and scanning
I mentioned the **atomic** CLI command earlier in this article. We built this tool to add features to containers that did not fit in with the Docker CLI, and things that we did not feel we could get into the upstream docker. We also wanted to allow flexibility to support additional container runtimes, tools, and storage as they developed. Skopeo is an example of this.
One feature we wanted to add to atomic was **atomic mount**. Basically we wanted to take content that was stored in the Docker image store (upstream docker calls this a graph driver), and mount the image somewhere, so that tools could examine the image. Currently if you use upstream docker, the only way to look at an image is to start the container. If you have untrusted content, executing code inside of the container to look at the image could be dangerous. The second problem with examining an image by starting it is that the tools to examine the container are probably not in the container image.
Most container image scanners seem to have the following pattern: They connect to the Docker socket, do a **docker save** to create a tarball, then explode the tarball on disk, and finally examine the contents. This is a slow operation.
With **atomic mount**, we wanted to go into the Docker graph driver and mount the image. If the Docker daemon was using device mapper, we would mount the device. If it was using overlay, we would mount the overlay. This is an incredibly quick operation and satisfies our needs. You can now do:
# atomic mount fedora /mnt
# cd /mnt
And start examining the content. When you are done, do a:
# atomic umount /mnt
We use this feature inside of **atomic scan**, which allows you to have some of the fastest container scanners around.
**Issues with tool coordination**
One big problem is that **atomic mount** is doing this under the covers. The Docker daemon does not know that another process is using the image. This could cause problems (for example, if you mounted the Fedora image above and then someone went and executed **docker rmi fedora**, the Docker daemon would fail weirdly when trying to remove the Fedora image saying it was busy). The Docker daemon could get into a weird state.
### Containers storage
To solve this issue, we started looking at pulling the graph driver code out of the upstream docker daemon into its own repository. The Docker daemon did all of its locking in memory for the graph driver. We wanted to move this locking into the file system so that we could have multiple distinct processes able to manipulate the container storage at the same time, without having to go through a single daemon process.
We created a project called [container/storage](https://github.com/containers/storage), which can do all of the COW features required for running, building, and storing containers, without requiring one process to control and monitor it (i.e., no daemon required). Now skopeo and other tools and projects can take advantage of the storage. Other open source projects have begun to use containers/storage, and at some point we would like to merge this project back into the upstream docker project.
## Undock and let's innovate
If you think about what happens when Kubernetes runs a container on a node with the Docker daemon, first Kubernetes executes a command like:
kubelet run nginx –image=nginx
This command tells the kubelet to run the NGINX application on the node. The kubelet calls into the CRI and asks it to start the NGINX application. At this point, the container runtime that implemented the CRI must do the following steps:
- Check local storage for a container named **nginx**. If not local, the container runtime will search for a standardized container image at a container registry.
- If the image is not in local storage, download it from the container registry to the local system.
- Explode the downloaded container image on top of container storage—usually a COW storage—and mount it up.
- Execute the container using a standardized container runtime.
Let's look at the features described above:
- OCI Image Format Specification defines the standard image format for images stored at container registries.
- Containers/image is the library that implements all features needed to pull a container image from a container registry to a container host.
- Containers/storage provides a library to exploding OCI Image Formats onto COW storage and allows you to work with the image.
- OCI Runtime Specification and
**runc**provide tools for executing the containers (the same tool that the Docker daemon uses for running containers).
This means we can use these tools to implement the ability to use containers without requiring a big container daemon.
In a moderate- to large-scale DevOps-based CI/CD environment, efficiency, speed, and security are important. And as long as your tools conform to the OCI specifications, then a developer or an operator should be using the best tools for automation through the CI/CD pipeline and into production. Most of the container tooling is hidden beneath orchestration or higher-up container platform technology. We envision a time in which runtime or image bundle tool selection perhaps becomes an installation option of the container platform.
### System (standalone) containers
On Project Atomic we introduced the **atomic host**, a new way of building an operating system in which the software can be "atomicly" updated and most of the applications that run on it will be run as containers. Our goal with this platform is to prove that most software can be shipped in the future in OCI Image Format, and use standard protocols to get images from container registries and install them on your system. Providing software as container images allows you to update the host operating system at a different pace than the applications that run on it. The traditional RPM/yum/DNF way of distributing packages locks the applications to the live cycle of the host operating systems.
One problem we see with shipping most of the infrastructure as containers is that sometimes you must run an application before the container runtime daemon is executing. Let's look at our Kubernetes example running with the Docker daemon: Kubernetes requires a network to be set up so that it can put its pods/containers into isolated networks. The default daemon we use for this currently is **flanneld**, which must be running before the Docker daemon is started in order to hand the Docker daemon the network interfaces to run the Kubernetes pods. Also, flanneld uses [etcd](https://github.com/coreos/etcd) for its data store. This daemon is required to be run before flanneld is started.

If we want to ship etcd and flanneld as container images, we have a chicken and egg situation. We need the container runtime daemon to start the containerized applications, but these applications need to be running before the container runtime daemon is started. I have seen several hacky setups to try to handle this situation, but none of them are clean. Also, the Docker daemon currently has no decent way to configure the priority order that containers start. I have seen suggestions on this, but they all look like the old SysVInit way of starting services (and we know the complexities that caused).
### systemd
One reason for replacing SysVInit with systemd was to handle the priority and ordering of starting services, so why not take advantage of this technology? In Project Atomic, we decided that we wanted to run containers on the host without requiring a container runtime daemon, especially for early boot. We enhanced the atomic CLI to allow you to install container images. If you execute** atomic install --system etcd**, it uses skopeo to go out to a container registries and pulls down the etcd OCI Image. Then it explodes (or expands) the image onto an OSTree backing store. Because we are running etcd in production, we treat the image as read-only. Next the **atomic** command grabs the systemd unit file template from the container image and creates a unit file on disk to start the image. The unit file actually uses **runc** to start the container on the host (although **runc** is not necessary).
Similar things happen if you execute **atomic install --system flanneld**, except this time the flanneld unit file specifies that it needs etcd unit running before it starts.
When the system boots up, systemd ensures that etcd is running before flanneld, and that the container runtime is not started until after flanneld is started. This allows you to move the Docker daemon and Kubernetes into system containers. This means you can boot up an atomic host or a traditional rpm-based operating system that runs the entire container orchestration stack as containers. This is powerful because we know customers want to continue to patch their container hosts independently of these components. Furthermore, it keeps the host's operating system footprint to a minimum.
There even has been discussion about putting traditional applications into containers that can run either as standalone/system containers or as an orchestrated container. Consider an Apache container that you could install with the **atomic install --system httpd** command. This container image would be started the same way you start an rpm-based httpd service (**systemctl start httpd** except httpd will be started in a container). The storage could be local, meaning /var/www from the host gets mounted into the container, and the container listens on the local network at port 80. This shows that you could run traditional workloads on a host inside of a container without requiring a container runtime daemon.
## Building container images
From my perspective, one of the saddest things about container innovation over the past four years has been the lack of innovation on mechanisms to build container images. A container image is nothing more than a tarball of tarballs and some JSON files. The base image of a container is a rootfs along with an JSON file describing the base image. Then as you add layers, the difference between the layers gets tar’d up along with changes to the JSON file. These layers and the base file get tar'd up together to form the container image.
Almost everyone is building with the **docker build** and the Dockerfile format. Upstream docker stopped accepting pull requests to modify or improve Dockerfile format and builds a couple of years ago. The Dockerfile played an important part in the evolution of containers. Developers or administrators/operators could build containers in a simple and straightforward manner; however, in my opinion, the Dockerfile is really just a poor man’s bash script and creates several problems that have never been solved. For example:
- To build a container image, Dockerfile requires a Docker daemon to be running.
- No one has built standard tooling to create the OCI image outside of executing Docker commands.
- Even tools such as
**ansible-containers**and OpenShift S2I (Source2Image) use**docker-engine**under the covers.
- Each line in a Dockerfile creates a new image, which helps in the development process of creating the container because the tooling is smart enough to know that the lines in the Dockerfile have not changed, so the existing images can be used and the lines do not need to be reprocessed. This can lead to a
*huge*number of layers.- Because of these, several people have requested mechanisms to squash the images eliminating the layers. I think upstream docker finally has accepted something to satisfy the need.
- To pull content from secured sites to put into your container image, often you need some form of secrets. For example you need access to the RHEL certificates and subscriptions in order to add RHEL content to an image.
- These secrets can end up in layers stored in the image. And the developer needs to jump through hoops to remove the secrets.
- To allow volumes to be mounted in during Docker build, we have added a
**-v**volume switch to the projectatomic/docker package that we ship, but upstream docker has not accepted these patches.
- Build artifacts end up inside of the container image. So although Dockerfiles are great for getting started or building containers on a laptop while trying to understand the image you may want to build, they really are not an effective or efficient means to build images in a high-scaled enterprise environment. And behind an automated container platform, you shouldn't care if you are using a more efficient means to build OCI-compliant images.
## Undock with Buildah
At DevConf.cz 2017, I asked [Nalin Dahyabhai](https://twitter.com/nalind) on my team to look at building what I called **containers-coreutils**, basically, to use the containers/storage and containers/image libraries and build a series of command-line tools that could mimic the syntax of the Dockerfile. Nalin decided to call it [buildah](https://github.com/projectatomic/buildah), making fun of my Boston accent. With a few buildah primitives, you can build a container image:
- One of the main concepts of security is to keep the amount of content inside of an OS image as small as possible to eliminate unwanted tools. The idea is that a hacker might need tools to break through an application, and if the tools such as
**gcc**,**make**,**dnf**are not present, the attacker can be stopped or confined. - Because these images are being pulled and pushed over the internet, shrinking the size of the container is always a good idea.
- How Docker build works is commands to install software or compile software have to be in the
**uildroot**of the container. - Executing the
**run**command requires all of the executables to be inside of the container image. Just using**dnf**inside of the container image requires that the entire Python stack be present, even if you never use Python in the application. **ctr=$(buildah from fedora)**:- Uses containers/image to pull the Fedora image from a container registry.
- Returns a container ID (
**ctr**).
**mnt=$(buildah mount $ctr)**:- Mounts up the newly created container image (
**$ctr**). - Returns the path to the mount point.
- You can now use this mount point to write content.
- Mounts up the newly created container image (
**dnf install httpd –installroot=$mnt**:- You can use commands on the host to redirect content into the container, which means you can keep your secrets on the host, you don't have to put them inside of the container, and your build tools can be kept on the host.
- You don't need
**dnf**inside of the container or the Python stack unless your application is going to use it.
**cp foobar $mnt/dir**:- You can use any command available in bash to populate the container.
**buildah commit $ctr**:- You can create a layer whenever you decide. You control the layers rather than the tool.
**buildah config --env container=oci --entrypoint /usr/bin/httpd $ctr**:- All of the commands available inside of Dockerfile can be specified.
**buildah run $ctr dnf -y install httpd**:- Buildah
**run**is supported, but instead of relying on a container runtime daemon, buildah executes**runc**to run the command inside of a locked down container.
- Buildah
**buildah build-using-dockerfile -f Dockerfile .**:We want to move tools like
**ansible-containers**and OpenShift S2I to use**buildah**rather than requiring a container runtime daemon.Another big issue with building in the same container runtime that is used to run containers in production is that you end up with the lowest common denominator when it comes to security. Building containers tends to require a lot more privileges than running containers. For example, we allow the
**mknod**capability by default. The**mknod**capability allows processes to create device nodes. Some package installs attempt to create device nodes, yet in production almost no applications do. Removing the**mknod**capability from your containers in production would make your systems more secure.Another example is that we default container images to read/write because the install process means writing packages to
**/usr**. Yet in production, I argue that you really should run all of your containers in read-only mode. Only allow the containers to write to**tmpfs**or directories that have been volume mounted into the container. By splitting the running of containers from the building, we could change the defaults and make for a much more secure environment.- And yes, buildah can build a container image using a Dockerfile.
## CRI-O a runtime abstraction for Kubernetes
Kubernetes added an API to plug in any runtime for the pods called Container Runtime Interface (CRI). I am not a big fan of having lots of daemons running on my system, but we have added another. My team led by [Mrunal Patel](https://twitter.com/mrunalp) started working on [CRI-O](https://github.com/Kubernetes-incubator/cri-o) daemon in late 2016. This is a Container Runtime Interface daemon for running OCI-based applications. Theoretically, in the future we could compile in the CRI-O code directly into the kubelet to eliminate the second daemon.
Unlike other container runtimes, CRI-O's only purpose in life is satisfying Kubernetes' needs. Remember the steps described above for what Kubernetes need to run a container.
Kubernetes sends a message to the kubelet that it wants it to run the NGINX server:
- The kubelet calls out to the CRI-O to tell it to run NGINX.
- CRI-O answers the CRI request.
- CRI-O finds an OCI Image at a container registry.
- CRI-O uses containers/image to pull the image from the registry to the host.
- CRI-O unpacks the image onto local storage using containers/storage.
- CRI-O launches a OCI Runtime Specification, usually
**runc**, and starts the container. As I stated previously, the Docker daemon launches its containers using**runc**, in exactly the same way. - If desired, the kubelet could also launch the container using an alternate runtime, such as Clear Containers
**runv**.
CRI-O is intended to be a stable platform for running Kubernetes, and we will not ship a new version of CRI-O unless it passes the entire Kubernetes test suite. All pull requests that go to [https://github.com/Kubernetes-incubator/cri-o](https://github.com/Kubernetes-incubator/cri-o) run against the entire Kubernetes test suite. You can not get a pull request into CRI-O without passing the tests. CRI-O is fully open, and we have had contributors from several different companies, including Intel, SUSE, IBM, Google, Hyper.sh. As long as a majority of maintainers agree to a patch to CRI-O, it will get accepted, even if the patch is not something that Red Hat wants.
## Conclusion
I hope this deep dive helps you understand how Linux containers have evolved. At one point, Linux containers were an every-vendor-for-themselves situation. Docker helped focus on a de facto standard for image creation and simplifying the tools used to work with containers. The Open Container Initiative now means that the industry is working around a core image format and runtime, which fosters innovation around making tooling more efficient for automation, more secure, highly scalable, and easier to use. Containers allow us to examine installing software in new and novel ways—whether they are traditional applications running on a host, or orchestrated micro-services running in the cloud. In many ways, this is just the beginning.
|
8,812 | 开发一个 Linux 调试器(五):源码和信号 | https://blog.tartanllama.xyz/c++/2017/04/24/writing-a-linux-debugger-source-signal/ | 2017-08-26T17:55:00 | [
"调试器"
] | https://linux.cn/article-8812-1.html | 
在上一部分我们学习了关于 DWARF 的信息,以及它如何被用于读取变量和将被执行的机器码与我们的高级语言的源码联系起来。在这一部分,我们将进入实践,实现一些我们调试器后面会使用的 DWARF 原语。我们也会利用这个机会,使我们的调试器可以在命中一个断点时打印出当前的源码上下文。
### 系列文章索引
随着后面文章的发布,这些链接会逐渐生效。
1. [准备环境](/article-8626-1.html)
2. [断点](/article-8645-1.html)
3. [寄存器和内存](/article-8663-1.html)
4. [Elves 和 dwarves](/article-8719-1.html)
5. [源码和信号](https://blog.tartanllama.xyz/c++/2017/04/24/writing-a-linux-debugger-source-signal/)
6. [源码级逐步执行](https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/)
7. 源码级断点
8. 调用栈展开
9. 读取变量
10. 下一步
### 设置我们的 DWARF 解析器
正如我在这系列文章开始时备注的,我们会使用 [libelfin](https://github.com/TartanLlama/libelfin/tree/fbreg) 来处理我们的 DWARF 信息。希望你已经在[第一部分](/article-8626-1.html)设置好了这些,如果没有的话,现在做吧,确保你使用我仓库的 `fbreg` 分支。
一旦你构建好了 `libelfin`,就可以把它添加到我们的调试器。第一步是解析我们的 ELF 可执行程序并从中提取 DWARF 信息。使用 `libelfin` 可以轻易实现,只需要对`调试器`作以下更改:
```
class debugger {
public:
debugger (std::string prog_name, pid_t pid)
: m_prog_name{std::move(prog_name)}, m_pid{pid} {
auto fd = open(m_prog_name.c_str(), O_RDONLY);
m_elf = elf::elf{elf::create_mmap_loader(fd)};
m_dwarf = dwarf::dwarf{dwarf::elf::create_loader(m_elf)};
}
//...
private:
//...
dwarf::dwarf m_dwarf;
elf::elf m_elf;
};
```
我们使用了 `open` 而不是 `std::ifstream`,因为 elf 加载器需要传递一个 UNIX 文件描述符给 `mmap`,从而可以将文件映射到内存而不是每次读取一部分。
### 调试信息原语
下一步我们可以实现从程序计数器的值中提取行条目(line entry)以及函数 DWARF 信息条目(function DIE)的函数。我们从 `get_function_from_pc` 开始:
```
dwarf::die debugger::get_function_from_pc(uint64_t pc) {
for (auto &cu : m_dwarf.compilation_units()) {
if (die_pc_range(cu.root()).contains(pc)) {
for (const auto& die : cu.root()) {
if (die.tag == dwarf::DW_TAG::subprogram) {
if (die_pc_range(die).contains(pc)) {
return die;
}
}
}
}
}
throw std::out_of_range{"Cannot find function"};
}
```
这里我采用了朴素的方法,迭代遍历编译单元直到找到一个包含程序计数器的,然后迭代遍历它的子节点直到我们找到相关函数(`DW_TAG_subprogram`)。正如我在上一篇中提到的,如果你想要的话你可以处理类似的成员函数或者内联等情况。
接下来是 `get_line_entry_from_pc`:
```
dwarf::line_table::iterator debugger::get_line_entry_from_pc(uint64_t pc) {
for (auto &cu : m_dwarf.compilation_units()) {
if (die_pc_range(cu.root()).contains(pc)) {
auto &lt = cu.get_line_table();
auto it = lt.find_address(pc);
if (it == lt.end()) {
throw std::out_of_range{"Cannot find line entry"};
}
else {
return it;
}
}
}
throw std::out_of_range{"Cannot find line entry"};
}
```
同样,我们可以简单地找到正确的编译单元,然后查询行表获取相关的条目。
### 打印源码
当我们命中一个断点或者逐步执行我们的代码时,我们会想知道处于源码中的什么位置。
```
void debugger::print_source(const std::string& file_name, unsigned line, unsigned n_lines_context) {
std::ifstream file {file_name};
//获得一个所需行附近的窗口
auto start_line = line <= n_lines_context ? 1 : line - n_lines_context;
auto end_line = line + n_lines_context + (line < n_lines_context ? n_lines_context - line : 0) + 1;
char c{};
auto current_line = 1u;
//跳过 start_line 之前的行
while (current_line != start_line && file.get(c)) {
if (c == '\n') {
++current_line;
}
}
//如果我们在当前行则输出光标
std::cout << (current_line==line ? "> " : " ");
//输出行直到 end_line
while (current_line <= end_line && file.get(c)) {
std::cout << c;
if (c == '\n') {
++current_line;
//如果我们在当前行则输出光标
std::cout << (current_line==line ? "> " : " ");
}
}
//输出换行确保恰当地清空了流
std::cout << std::endl;
}
```
现在我们可以打印出源码了,我们需要将这些通过钩子添加到我们的调试器。实现这个的一个好地方是当调试器从一个断点或者(最终)逐步执行得到一个信号时。到了这里,我们可能想要给我们的调试器添加一些更好的信号处理。
### 更好的信号处理
我们希望能够得知什么信号被发送给了进程,同样我们也想知道它是如何产生的。例如,我们希望能够得知是否由于命中了一个断点从而获得一个 `SIGTRAP`,还是由于逐步执行完成、或者是产生了一个新线程等等导致的。幸运的是,我们可以再一次使用 `ptrace`。可以给 `ptrace` 的一个命令是 `PTRACE_GETSIGINFO`,它会给你被发送给进程的最后一个信号的信息。我们类似这样使用它:
```
siginfo_t debugger::get_signal_info() {
siginfo_t info;
ptrace(PTRACE_GETSIGINFO, m_pid, nullptr, &info);
return info;
}
```
这会给我们一个 `siginfo_t` 对象,它能提供以下信息:
```
siginfo_t {
int si_signo; /* 信号编号 */
int si_errno; /* errno 值 */
int si_code; /* 信号代码 */
int si_trapno; /* 导致生成硬件信号的陷阱编号
(大部分架构中都没有使用) */
pid_t si_pid; /* 发送信号的进程 ID */
uid_t si_uid; /* 发送信号进程的用户 ID */
int si_status; /* 退出值或信号 */
clock_t si_utime; /* 消耗的用户时间 */
clock_t si_stime; /* 消耗的系统时间 */
sigval_t si_value; /* 信号值 */
int si_int; /* POSIX.1b 信号 */
void *si_ptr; /* POSIX.1b 信号 */
int si_overrun; /* 计时器 overrun 计数;
POSIX.1b 计时器 */
int si_timerid; /* 计时器 ID; POSIX.1b 计时器 */
void *si_addr; /* 导致错误的内存地址 */
long si_band; /* Band event (在 glibc 2.3.2 和之前版本中是 int 类型) */
int si_fd; /* 文件描述符 */
short si_addr_lsb; /* 地址的最不重要位
(自 Linux 2.6.32) */
void *si_lower; /* 出现地址违规的下限 (自 Linux 3.19) */
void *si_upper; /* 出现地址违规的上限 (自 Linux 3.19) */
int si_pkey; /* PTE 上导致错误的保护键 (自 Linux 4.6) */
void *si_call_addr; /* 系统调用指令的地址
(自 Linux 3.5) */
int si_syscall; /* 系统调用尝试次数
(自 Linux 3.5) */
unsigned int si_arch; /* 尝试系统调用的架构
(自 Linux 3.5) */
}
```
我只需要使用 `si_signo` 就可以找到被发送的信号,使用 `si_code` 来获取更多关于信号的信息。放置这些代码的最好位置是我们的 `wait_for_signal` 函数:
```
void debugger::wait_for_signal() {
int wait_status;
auto options = 0;
waitpid(m_pid, &wait_status, options);
auto siginfo = get_signal_info();
switch (siginfo.si_signo) {
case SIGTRAP:
handle_sigtrap(siginfo);
break;
case SIGSEGV:
std::cout << "Yay, segfault. Reason: " << siginfo.si_code << std::endl;
break;
default:
std::cout << "Got signal " << strsignal(siginfo.si_signo) << std::endl;
}
}
```
现在再来处理 `SIGTRAP`。知道当命中一个断点时会发送 `SI_KERNEL` 或 `TRAP_BRKPT`,而逐步执行结束时会发送 `TRAP_TRACE` 就足够了:
```
void debugger::handle_sigtrap(siginfo_t info) {
switch (info.si_code) {
//如果命中了一个断点其中的一个会被设置
case SI_KERNEL:
case TRAP_BRKPT:
{
set_pc(get_pc()-1); //将程序计数器的值设置为它应该指向的地方
std::cout << "Hit breakpoint at address 0x" << std::hex << get_pc() << std::endl;
auto line_entry = get_line_entry_from_pc(get_pc());
print_source(line_entry->file->path, line_entry->line);
return;
}
//如果信号是由逐步执行发送的,这会被设置
case TRAP_TRACE:
return;
default:
std::cout << "Unknown SIGTRAP code " << info.si_code << std::endl;
return;
}
}
```
这里有一大堆不同风格的信号你可以处理。查看 `man sigaction` 获取更多信息。
由于当我们收到 `SIGTRAP` 信号时我们已经修正了程序计数器的值,我们可以从 `step_over_breakpoint` 中移除这些代码,现在它看起来类似:
```
void debugger::step_over_breakpoint() {
if (m_breakpoints.count(get_pc())) {
auto& bp = m_breakpoints[get_pc()];
if (bp.is_enabled()) {
bp.disable();
ptrace(PTRACE_SINGLESTEP, m_pid, nullptr, nullptr);
wait_for_signal();
bp.enable();
}
}
}
```
### 测试
现在你应该可以在某个地址设置断点,运行程序然后看到打印出了源码,而且正在被执行的行被光标标记了出来。
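如果手头没有合适的被调试程序,可以用类似下面这样的小程序来练手(纯属示意),记得加上 `-g` 编译以生成 DWARF 调试信息,例如 `g++ -g -O0 hello.cpp -o hello`:

```
#include <iostream>

void greet() {
    std::cout << "Hello, debugger!" << std::endl; // 在这一行对应的地址上设置断点
}

int main() {
    greet();
    return 0;
}
```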
后面我们会添加设置源码级别断点的功能。同时,你可以从[这里](https://github.com/TartanLlama/minidbg/tree/tut_source)获取该博文的代码。
---
via: <https://blog.tartanllama.xyz/c++/2017/04/24/writing-a-linux-debugger-source-signal/>
作者:[TartanLlama](https://www.twitter.com/TartanLlama) 译者:[ictlyh](https://github.com/ictlyh) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Redirecting…
Click here if you are not redirected. |
8,813 | 开发一个 Linux 调试器(六):源码级逐步执行 | https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/ | 2017-08-28T10:21:00 | [
"调试器"
] | https://linux.cn/article-8813-1.html | 
在前几篇博文中我们学习了 DWARF 信息以及它如何使我们将机器码和上层源码联系起来。这一次我们通过为我们的调试器添加源码级逐步调试将该知识应用于实际。
### 系列文章索引
随着后面文章的发布,这些链接会逐渐生效。
1. [准备环境](/article-8626-1.html)
2. [断点](/article-8645-1.html)
3. [寄存器和内存](/article-8579-1.html)
4. [Elves 和 dwarves](/article-8719-1.html)
5. [源码和信号](/article-8812-1.html)
6. [源码级逐步执行](https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/)
7. 源码级断点
8. 调用栈展开
9. 读取变量
10. 下一步
### 提供指令级逐步执行
我们的步子迈得有点大了。首先,让我们通过用户接口提供指令级的单步执行。我决定将它拆分成两个函数:一个是可以被代码其它部分复用的 `single_step_instruction`,另一个是 `single_step_instruction_with_breakpoint_check`,它会检查当前位置是否设置了断点,并在需要时先正确地跳过断点。
```
void debugger::single_step_instruction() {
ptrace(PTRACE_SINGLESTEP, m_pid, nullptr, nullptr);
wait_for_signal();
}
void debugger::single_step_instruction_with_breakpoint_check() {
//首先,检查我们是否需要停用或者启用某个断点
if (m_breakpoints.count(get_pc())) {
step_over_breakpoint();
}
else {
single_step_instruction();
}
}
```
正如以往,另一个命令被集成到我们的 `handle_command` 函数:
```
else if(is_prefix(command, "stepi")) {
single_step_instruction_with_breakpoint_check();
auto line_entry = get_line_entry_from_pc(get_pc());
print_source(line_entry->file->path, line_entry->line);
}
```
利用新增的这些函数我们可以开始实现我们的源码级逐步执行函数。
### 实现逐步执行
我们将为这些函数编写非常简单的版本,而真正的调试器则有 *thread plan* 的概念,它封装了所有的单步信息。例如,调试器可能有一些复杂的逻辑去决定断点的位置,然后用一些回调函数来判断单步操作是否完成。这其中牵涉非常多的基础设施,我们只采用一种朴素的方法。我们可能会意外地跳过断点,但如果你愿意,可以花一些时间把所有的细节都处理好。
对于跳出 `step_out`,我们只是在函数的返回地址处设一个断点然后继续执行。我暂时还不想考虑调用栈展开的细节 - 这些都会在后面的部分介绍 - 但可以说返回地址就保存在栈帧起始位置之后 8 个字节处。因此我们会读取帧指针,然后读取内存中相应地址上保存的值:
```
void debugger::step_out() {
auto frame_pointer = get_register_value(m_pid, reg::rbp);
auto return_address = read_memory(frame_pointer+8);
bool should_remove_breakpoint = false;
if (!m_breakpoints.count(return_address)) {
set_breakpoint_at_address(return_address);
should_remove_breakpoint = true;
}
continue_execution();
if (should_remove_breakpoint) {
remove_breakpoint(return_address);
}
}
```
`remove_breakpoint` 是一个小的帮助函数:
```
void debugger::remove_breakpoint(std::intptr_t addr) {
if (m_breakpoints.at(addr).is_enabled()) {
m_breakpoints.at(addr).disable();
}
m_breakpoints.erase(addr);
}
```
接下来是跳入 `step_in`。一个简单的算法是继续逐步执行指令直到新的一行。
```
void debugger::step_in() {
auto line = get_line_entry_from_pc(get_pc())->line;
while (get_line_entry_from_pc(get_pc())->line == line) {
single_step_instruction_with_breakpoint_check();
}
auto line_entry = get_line_entry_from_pc(get_pc());
print_source(line_entry->file->path, line_entry->line);
}
```
跳过 `step_over` 对于我们来说是三个中最难的。理论上,解决方法就是在下一行源码中设置一个断点,但下一行源码是什么呢?它可能不是当前行后续的那一行,因为我们可能处于一个循环、或者某种条件结构之中。真正的调试器一般会检查当前正在执行什么指令,然后计算出所有可能的分支目标,并在所有分支目标上设置断点。对于一个小的项目,我不打算实现或者集成一个 x86 指令模拟器,因此我们要想一个更简单的解决办法。有几个可怕的选择:一是一直逐步执行,直到执行到当前函数中新的一行;二是在当前函数的每一行都设置一个断点。如果我们要跳过的是一个函数调用,前者会相当低效,因为我们需要逐步执行整个调用图中的每条指令,因此我会采用第二种方法。
```
void debugger::step_over() {
auto func = get_function_from_pc(get_pc());
auto func_entry = at_low_pc(func);
auto func_end = at_high_pc(func);
auto line = get_line_entry_from_pc(func_entry);
auto start_line = get_line_entry_from_pc(get_pc());
std::vector<std::intptr_t> to_delete{};
while (line->address < func_end) {
if (line->address != start_line->address && !m_breakpoints.count(line->address)) {
set_breakpoint_at_address(line->address);
to_delete.push_back(line->address);
}
++line;
}
auto frame_pointer = get_register_value(m_pid, reg::rbp);
auto return_address = read_memory(frame_pointer+8);
if (!m_breakpoints.count(return_address)) {
set_breakpoint_at_address(return_address);
to_delete.push_back(return_address);
}
continue_execution();
for (auto addr : to_delete) {
remove_breakpoint(addr);
}
}
```
这个函数有一点复杂,我们将它拆开来看。
```
auto func = get_function_from_pc(get_pc());
auto func_entry = at_low_pc(func);
auto func_end = at_high_pc(func);
```
`at_low_pc` 和 `at_high_pc` 是 `libelfin` 中的函数,它们能给我们指定函数 DWARF 信息条目的最小和最大程序计数器值。
```
auto line = get_line_entry_from_pc(func_entry);
auto start_line = get_line_entry_from_pc(get_pc());
std::vector<std::intptr_t> to_delete{};
while (line->address < func_end) {
if (line->address != start_line->address && !m_breakpoints.count(line->address)) {
set_breakpoint_at_address(line->address);
to_delete.push_back(line->address);
}
++line;
}
```
我们之后需要移除自己设置的这些断点,以免它们在这个逐步执行函数结束后仍然残留,因此我们把它们记录到一个 `std::vector` 中。为了设置所有断点,我们循环遍历行表条目,直到遇到一个超出函数范围的条目为止。对于每一个条目,我们都要确保它不是我们当前所在的行,而且这个位置还没有设置过断点。
```
auto frame_pointer = get_register_value(m_pid, reg::rbp);
auto return_address = read_memory(frame_pointer+8);
if (!m_breakpoints.count(return_address)) {
set_breakpoint_at_address(return_address);
to_delete.push_back(return_address);
}
```
这里我们在函数的返回地址处设置一个断点,正如跳出 `step_out`。
```
continue_execution();
for (auto addr : to_delete) {
remove_breakpoint(addr);
}
```
最后,我们继续执行直到命中它们中的其中一个断点,然后移除所有我们设置的临时断点。
它并不美观,但暂时先这样吧。
当然,我们还需要将这个新功能添加到用户界面:
```
else if(is_prefix(command, "step")) {
step_in();
}
else if(is_prefix(command, "next")) {
step_over();
}
else if(is_prefix(command, "finish")) {
step_out();
}
```
### 测试
我通过实现一个调用一系列不同函数的简单函数来进行测试:
```
void a() {
int foo = 1;
}
void b() {
int foo = 2;
a();
}
void c() {
int foo = 3;
b();
}
void d() {
int foo = 4;
c();
}
void e() {
int foo = 5;
d();
}
void f() {
int foo = 6;
e();
}
int main() {
f();
}
```
你应该可以在 `main` 地址处设置一个断点,然后在整个程序中跳入、跳过、跳出函数。如果你尝试跳出 `main` 函数或者跳入任何动态链接库,就会出现意料之外的事情。
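编者补充一点提示:把上面的测试代码保存为 `stepping.cpp`(文件名为假设),以 `-g` 编译并关闭优化,这样行表信息才能与源码对应:

```
g++ -g -O0 -o stepping stepping.cpp
./minidbg ./stepping
```

然后在调试器提示符下先用 `break` 和 `continue` 停在 `main` 处,再依次尝试 `step`、`next` 和 `finish` 命令即可。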
你可以在[这里](https://github.com/TartanLlama/minidbg/tree/tut_dwarf_step)找到这篇博文的相关代码。下次我们会利用我们新的 DWARF 技巧来实现源码级断点。
---
via: <https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/>
作者:[Simon Brand](https://www.twitter.com/TartanLlama) 译者:[ictlyh](https://github.com/ictlyh) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Redirecting…
Click here if you are not redirected. |
8,814 | GitHub 简易入门指南 | http://www.linuxandubuntu.com/home/getting-started-with-github | 2017-08-28T08:00:00 | [
"GitHub"
] | https://linux.cn/article-8814-1.html | [](http://www.linuxandubuntu.com/home/getting-started-with-github)
[GitHub](https://github.com/) 是一个在线平台,旨在促进在一个共同项目上工作的个人之间的代码托管、版本控制和协作。通过该平台,无论何时何地,都可以对项目进行操作(托管和审查代码,管理项目和与世界各地的其他开发者共同开发软件)。**GitHub 平台**为开源项目和私人项目都提供了项目处理功能。
关于团队项目处理的功能包括:GitHub <ruby> 流 <rt> Flow </rt></ruby>和 GitHub <ruby> 页 <rt> Pages </rt></ruby>。这些功能可以让需要定期部署的团队轻松处理工作流程。另一方面,GitHub 页提供了页面用于展示开源项目、展示简历、托管博客等。
GitHub 也为个人项目提供了必要的工具,使得个人项目可以轻松地处理。它也使得个人可以更轻松地与世界分享他们的项目。
### 注册 GitHub 并启动一个项目
在 GitHub 上启动新项目时,您必须先使用您的电子邮件地址创建一个帐户。
[](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/github-homepage_orig.jpg)
然后,在验证邮箱的时候,用户将自动登录到他们的 GitHub 帐户。
#### 1、 创建仓库
之后,我们会被带到一个用于创建<ruby> 仓库 <rt> repository </rt></ruby>的页面。仓库存储着包括修订历史记录在内的所有项目文件。仓库可以是公开的或者是私有的。公开的仓库可以被任何人查看,但是,只有项目所有者授予权限的人才可以提交修改到这个仓库。另一方面,私有仓库提供了额外的控制,可以将项目设置为对谁可见。因此,公开仓库适用于开源软件项目,而私有仓库主要适用于私有或闭源项目。
* 填写 “<ruby> 仓库名称 <rt> Repository Name </rt></ruby>” 和 “<ruby> 简短描述 <rt> Short Description </rt></ruby>”。
* 选中 “<ruby> 以一个 README 文件初始化 <rt> Initialize this repository with a README </rt></ruby>”。
* 最后,点击底部的 “<ruby> 创建仓库 <rt> Create Repository </rt></ruby>” 按钮。
[](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/create-a-github-repository_orig.jpg)
#### 2、 添加分支
在 GitHub 中,<ruby> 分支 <rt> branch </rt></ruby>是一种同时操作单个仓库的多个版本的方式。默认情况下,每个新建的仓库都会被分配一个名为 “MASTER” 的分支,它被视为最终的权威分支。在 GitHub 中,其它分支在被合并到<ruby> 主干 <rt> master </rt></ruby>(即最终分支)之前,可以用来对仓库进行实验和编辑。
为了让项目满足各种不同的需求,通常总是需要添加几个额外的分支。从主分支创建一个新分支,相当于复制了主分支在那一时刻的状态。
[](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/add-a-branch-to-github-repository_orig.jpg)
创建分支与在不同版本中保存单个文件是类似的。它通过在特定仓库上执行的任务重命名来实现。
分支在保持错误修复和功能添加工作中同样被证明是有效。在进行必要的修改后,这些分支会被合并到主分支中。
在创建仓库后创建一个分支:
* 在这个例子中,点击仓库名称 “Hello-World” 跳转到你的新仓库。
* 点击顶部的 “Branch:Master” 按钮,会看到一个下拉菜单,菜单里有填写分支名称的空白字段。
* 输入分支名称,在这个例子中我们输入 “readme-edits”。
* 按下回车键或者点击蓝色的 “<ruby> 创建分支 <rt> create branch </rt></ruby>” 框。
这样就成功创建了两个分支:master 和 readme-edits。
#### 3、 修改项目文件并提交
此步骤提供了关于如何更改仓库并保存修改的指导。在 GitHub 上,<ruby> 提交 <rt> commit </rt></ruby>被定义为保存的修改的意思。每一次提交都与一个<ruby> 提交信息 <rt> commit message </rt></ruby>相关联,该提交信息包含了保存的修改的历史记录,以及为何进行这些更改。这使得其他贡献者可以很轻松地知道你做出的更改以及更改的原因。
要对仓库进行更改和提交更改,请执行以下步骤:
* 点击仓库名称 “Hello-World”。
* 点击右上角的铅笔图标查看和编辑文件。 [](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/commit-changes-to-github-repository_orig.jpg)
* 在编辑器中,写一些东西来确定你可以进行更改。
* 在<ruby> 提交消息 <rt> commit message </rt></ruby>字段中做简要的总结,以解释为什么以及如何进行更改。
* 点击<ruby> 提交更改 <rt> commit changes </rt></ruby>按钮保存更改。
请注意,这些更改仅仅影响到 readme-edits 分支,而不影响主分支。
[](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/commit-branch-to-master_orig.jpg)
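上面第 2、3 步的操作也可以在本地用 git 命令行完成,效果与网页端相同(编者补充的示例,仓库名沿用 Hello-World,用户名为占位符):

```
git clone https://github.com/<你的用户名>/Hello-World.git
cd Hello-World
git checkout -b readme-edits        # 创建并切换到 readme-edits 分支
# ……编辑 README.md……
git add README.md
git commit -m "Update README"       # 提交更改,并附上提交信息
git push -u origin readme-edits     # 将该分支推送到 GitHub
```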
#### 4、 开启一个拉取请求
<ruby> 拉取请求 <rt> pull request </rt></ruby>是一个允许贡献者提出并请求某人审查和合并某些更改到他们的分支的功能。拉取请求还显示了几个分支的差异(diffs)。更改、添加和删减通常以红色和绿色来表示。一旦提交完成就可以开启拉取请求,即使代码还未完成。
开启一个拉取请求:
* 点击<ruby> 拉取请求 <rt> pull requests </rt></ruby>选项卡。 [](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/github-pull-request_orig.jpg)
* 点击<ruby> 新建拉取请求 <rt> new pull requests </rt></ruby>按钮。
* 选择 readme-edits 分支与 master 分支进行比较。 [](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/compare-commit-changes-github_orig.jpg)
* 确定请求,并确定这是您要提交的内容。
* 点击创建拉取请求绿色按钮并输入一个标题。 [](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/open-a-pull-request-in-github-repository_orig.jpg)
* 按下回车键。
用户可以通过尝试创建并保存拉取请求来证实这些操作。
#### 5、 合并拉取请求
最后一步是将 readme-edits 分支和 master 分支合并到一起。如果 readme-edits 分支和 master 分支不会产生冲突,则会显示<ruby> 合并拉取请求 <rt> merge pull request </rt></ruby>的按钮。
[](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/merge-the-pull-request-github_orig.jpg)
当合并拉取时,有必要确保<ruby> 评论 <rt> comment </rt></ruby>和其他字段被正确填写。合并拉取:
* 点击<ruby> 合并拉取请求 <rt> merge pull request </rt></ruby>的按钮。
* 确认合并。
* 按下紫色的删除分支按钮,删除 readme-edits 分支,因为它已经被包含在 master 分支中。(LCTT 译注:如果是合并他人提交的拉取请求,则无需也无法删除合并过来的他人的分支。)
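同样地,合并与删除分支也可以在命令行里完成(编者补充的示例):

```
git checkout master
git merge readme-edits                  # 将 readme-edits 合并进 master
git push origin master
git branch -d readme-edits              # 删除本地分支
git push origin --delete readme-edits   # 删除远程分支
```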
本文提供了 GitHub 平台从注册到使用的基本操作,接下来由大家尽情探索吧。
---
via: <http://www.linuxandubuntu.com/home/getting-started-with-github>
作者:[LinuxAndUbuntu](http://www.linuxandubuntu.com) 译者:[firmianay](https://github.com/firmianay) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,816 | 在 Snap 中玩转 OpenStack | https://insights.ubuntu.com/2017/07/06/openstack-in-a-snap/ | 2017-08-28T08:30:00 | [
"Snapcraft",
"snap"
] | https://linux.cn/article-8816-1.html | 
OpenStack 非常复杂,许多社区成员都在努力使 OpenStack 的部署和操作更加容易。其中大部分时间都用来改善相关工具,如:Ansible、Puppet、Kolla、Juju、Triple-O 和 Chef (仅举几例)。但是,如果我们降低一下标准,并且还能使包的体验更加简单,将会怎样呢?
我们正在努力通过 snap 包来实现这一点。snap 包是一种新兴的软件分发方式,这段来自 [snapcraft.io](http://snapcraft.io/) 的介绍很好地总结了它的主要优点:*snap 包可以快速安装、易于创建、安全运行而且能自动地事务化更新,因此你的应用程序总是能保持最新的状态并且永远不会被破坏。*
### 捆绑软件
单个 snap 包可以内嵌多个不同来源的软件,从而提供一个能够快速启动和运行的解决方案。当你安装 snap 包时,你会发现安装速度是很快的,这是因为单个 snap 包捆绑了所有它需要的依赖。这和安装 deb 包有些不同,因为它需要下载所有的依赖然后分别进行安装。
### Snap 包制作简单
在 Ubuntu 工作的时候,我花了很多时间为 Debian 制作 OpenStack 的安装包。这是一种很特殊的技能,需要花很长时间才能理解其中的细微差别。deb 包和 snap 包在复杂性上的差异有天壤之别:snap 包制作起来简单易行,并且相当有趣。
### Snap 包的其它特性
* 每个 snap 包都安装在其独有的只读 squashfs 文件系统中。
* 每个 snap 包都运行在一个由 AppArmor 和 seccomp 策略构建的严格沙箱环境中。
* snap 包能事务更新。新版本的 snap 包会安装到一个新的只读 squashfs 文件系统中。如果升级失败,它将回滚到旧版本。
* 当有新版本可用时,snap 包将自动更新。
* OpenStack 的 snap 包能保证与 OpenStack 的上游约束保持一致。打包的人不需要再为 OpenStack 依赖链维护单独的包。这真是太爽了!
### OpenStack snap 包介绍
现在,下面这些项目已经有了相应的 snap 包:
* `Keystone` —— 这个 snap 包为 OpenStack 提供了身份鉴证服务。
* `Glance` —— 这个 snap 包为 OpenStack 提供了镜像服务。
* `Neutron` —— 这个 snap 包专门提供了 neutron-server 过程,作为 OpenStack 部署过程的一个 snap 包。
* `Nova` —— 这个 snap 包提供 OpenStack 部署过程中的 Nova 控制器组件。
* `Nova-hypervisor` —— 这个 snap 包提供 OpenStack 部署过程中的 hypervisor 组件,并且配置使用通过 deb 包安装的 Libvirt/KVM + Open vSwitch 组合。这个 snap 包同时也包含 nova-lxd,这允许我们使用 nova-lxd 而不用 KVM。
这些 snap 包已经能让我们部署一个简单可工作的 OpenStack 云。你可以在 [github](https://github.com/openstack?utf8=%E2%9C%93&q=snap-&type=&language=) 上找到所有这些 OpenStack snap 包的源码。有关 OpenStack snap 包更多的细节,请参考上游存储库中各自的 README。在那里,你可以找到更多有关管理 snap 包的信息,比如覆盖默认配置、重启服务、设置别名等等。
### 想要创建自己的 OpenStack snap 包吗?
查看 [snap cookie 工具](https://github.com/openstack-snaps/snap-cookiecutter/blob/master/README.rst)。我很快就会写一篇博文,告诉你如何使用 snap cookie 工具。它非常简单,并且能帮助你在任何时候创建一个新的 OpenStack snap 包。
### 测试 OpenStack snap 包
我们已经用简单的脚本初步测试了 OpenStack snap 包。这个脚本会在单个节点上安装 sanp 包,还会在安装后提供额外的配置服务。来尝试下吧:
```
git clone https://github.com/openstack-snaps/snap-test
cd snap-test
./snap-deploy
```
目前,我们所有的测试都是在 Ubuntu Xenial(16.04)上进行的。要注意的是,这将在你的系统上安装和配置相当多的软件,因此你最好在可自由使用的机器上运行它。
### 追踪 OpenStack
现在,你可以从 snap 商店的边缘通道来安装 snap 包,比如:
```
sudo snap install --edge keystone
```
OpenStack 团队正在努力使 CI/CD 配置到位,以便让 snap 包的发布能够交叉追踪 OpenStack 的发布(比如一个追踪 Ocata,另一个追踪 Pike 等)。每个<ruby> 轨道 <rt> track </rt></ruby>都有 4 个不同的通道。每个轨道的边缘通道将包含 OpenStack 项目对应分支最近的内容,测试、候选和稳定通道被保留用于已发布的版本。这样我们将看到如下的用法:
```
sudo snap install --channel=ocata/stable keystone
sudo snap install --channel=pike/edge keystone
```
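如果想查看某个 snap 在各个轨道和通道中分别提供了哪些版本,可以用 `snap info`(编者补充的示例;需要较新版本的 snapd):

```
snap info keystone
```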
### 其它
我们可以使用多个环境变量来简化 snap 包的制作。[这里](https://snapcraft.io/docs/reference/env) 有相关的说明。实际上,你无需深入的研究他们,但是在安装完 snap 包后,你也许会想要了解这些位置:
#### `$SNAP == /snap/<snap-name>/current`
这是 snap 包和它所有的文件挂载的位置。所有东西都是只读的。比如我当前安装的 keystone,$SNAP 就是 `/snap/keystone/91`。幸好,你不需要知道当前版本号,因为在 `/snap/keystone/` 中有一个软链接(LCTT 译注:`/snap/keystone/current/`)指向当前正在使用版本对应的文件夹。
```
$ ls /snap/keystone/current/
bin etc pysqlite2-doc usr
command-manage.wrapper include snap var
command-nginx.wrapper lib snap-openstack.yaml
command-uwsgi.wrapper meta templates
$ ls /snap/keystone/current/bin/
alembic oslo-messaging-send-notification
convert-json oslo-messaging-zmq-broker
jsonschema oslo-messaging-zmq-proxy
keystone-manage oslopolicy-checker
keystone-wsgi-admin oslopolicy-list-redundant
keystone-wsgi-public oslopolicy-policy-generator
lockutils-wrapper oslopolicy-sample-generator
make_metadata.py osprofiler
mako-render parse_xsd2.py
mdexport.py pbr
merge_metadata.py pybabel
migrate snap-openstack
migrate-repository sqlformat
netaddr uwsgi
oslo-config-generator
$ ls /snap/keystone/current/usr/bin/
2to3 idle pycompile python2.7-config
2to3-2.7 pdb pydoc python2-config
cautious-launcher pdb2.7 pydoc2.7 python-config
compose pip pygettext pyversions
dh_python2 pip2 pygettext2.7 run-mailcap
easy_install pip2.7 python see
easy_install-2.7 print python2 smtpd.py
edit pyclean python2.7
$ ls /snap/keystone/current/lib/python2.7/site-packages/
...
```
#### `$SNAP_COMMON == /var/snap/<snap-name>/common`
这个目录用于存放系统数据,对于 snap 包的多个修订版本这些数据是共用的。在这里,你可以覆盖默认配置文件和访问日志文件。
```
$ ls /var/snap/keystone/common/
etc fernet-keys lib lock log run
$ sudo ls /var/snap/keystone/common/etc/
keystone nginx uwsgi
$ ls /var/snap/keystone/common/log/
keystone.log nginx-access.log nginx-error.log uwsgi.log
```
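排查问题时,直接跟踪这些日志文件即可,例如(编者补充的示例):

```
sudo tail -f /var/snap/keystone/common/log/keystone.log
```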
### 严格限制
每个 snap 包都是在一个由 seccomp 和 AppArmor 策略构建的严格限制的环境中运行的。更多关于 snap 约束的细节可以在 [这里](https://snapcraft.io/docs/reference/confinement) 查看。
### snap 包即将到来的新特性和更新
我正在期待 snap 包一些即将到来的新特性和更新(LCTT 译注:此文发表于 7 月 6 日):
* 我们正在致力于实现 libvirt AppArmor 策略,这样 nova-hypervisor 的 snap 包就能够访问 qcow2 的<ruby> 支持文件 <rt> backing files </rt></ruby>。
+ 现在,作为一种变通方法,你可以将 virt-aa-helper 放在 complain 模式下:`sudo aa-complain /usr/lib/libvirt/virt-aa-helper`。
* 我们还在为 snapd 开发额外的接口策略,以便为部署的实例启用网络连接。
+ 现在你可以在 devmode 模式下安装 nova-hypervisor snap 包,它会禁用安全限制:`snap install --devmode --edge nova-hypervisor`。
* 自动连接 nova-hypervisor 的接口。我们正在努力实现在安装时自动定义 nova-hypervisor 接口。
+ 定义 AppArmor 和 seccomp 策略的接口可以允许 snap 包访问系统的资源。
+ 现在,你可以手动连接需要接口,在 nova-hypervisor snap 包的 README 中有相关的描述。
* 命令自动定义别名。我们正在努力实现 snap 包在安装时为命令自动定义别名。
+ 这使得我们可以使用传统的命令名。安装 snap 包后,你将可以使用 `nova-manage db sync` 而无需再用 `nova.manage db sync`。
+ 现在,你可以在安装 snap 包后手动设置别名,比如:`snap alias nova.manage nova-manage`。如想获取更多细节请查看 snap 包的 README 。
* 守护进程自动定义别名。当前 snappy 仅支持为命令(非守护进程)定义别名。一旦针对守护进程的别名可用了,我们将设置它们在安装的时候自动配置。
+ 这使得我们可以使用额外的单元文件名。我们可以使用 `systemctl restart nova-compute` 而无需再用 `systemctl restart snap.nova.nova-compute`。
* snap 包资产跟踪。这使得我们可以追踪用来构建 snap 包的版本以便在将来构建时重复使用。
如果你想多聊一些关于 snap 包的内容,你可以在 freenode 的 #openstack-snaps 这样的 IRC 上找到我们。我们欢迎你的反馈和贡献!感谢并祝你玩得开心!Corey
---
作者简介:
Corey Bryant 是 Ubuntu 的核心开发者和 Canonical 公司 OpenStack 工程团队的软件工程师,他主要专注于为 Ubuntu 提供 OpenStack 的安装包以及为 Juju 进行 OpenStack 的魅力开发。他对开源软件充满热情,喜欢与来自世界各地的人一起工作。
译者简介:
>
> snapcraft.io 的钉子户,对 Ubuntu Core、Snaps 和 Snapcraft 有着浓厚的兴趣,并致力于将这些还在快速发展的新技术通过翻译或原创的方式介绍到中文世界。有兴趣的小伙伴也可以关注译者个人的公众号: `Snapcraft`,最近会在上面连载几篇有关 Core snap 发布策略、交付流程和验证流程的文章,欢迎围观 :)
>
>
>
---
via: <https://insights.ubuntu.com/2017/07/06/openstack-in-a-snap/>
作者:[Corey Bryant](https://insights.ubuntu.com/author/corey-bryant/) 译者:[Snapcrafter](https://github.com/Snapcrafter) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | null |
|
8,817 | 公钥加密之外 | https://blog.cryptographyengineering.com/2017/07/02/beyond-public-key-encryption/ | 2017-08-28T11:10:00 | [
"加密",
"密码学"
] | https://linux.cn/article-8817-1.html | 关于应用密码学最令人扼腕也最引人入胜的一件事就是*我们在现实中实际使用的密码学是多么的少*。这并不是指密码学在业界没有被广泛的应用————事实上它的应用很广泛。我想指出的是,迄今为止密码学研究人员开发了如此多实用的技术,但工业界平常使用的却少之又少。实际上,除了少数个别情况,我们现今使用的绝大部分密码学技术是在 21 世纪初<sup> (注1)</sup> 就已经存在的技术。

大多数人并不在意这点,但作为一个工作在研究与应用交汇领域的密码学家,这让我感到不开心。我不能完全解决这个问题,我*能*做的,就是谈论一部分这些新的技术。在这个夏天里,这就是我想要做的:谈论。具体来说,在接下来的几个星期里,我将会写一系列讲述这些没有被看到广泛使用的前沿密码学技术的文章。
今天我要从一个非常简单的问题开始:在公钥加密之外还有什么(可用的加密技术)?具体地说,我将讨论几个过去 20 年里开发出的技术,它们可以让我们走出传统的公钥加密的概念的局限。
这是一篇专业的技术文章,但是不会有太困难的数学内容。对于涉及方案的实际定义,我会提供一些原论文的链接,以及一些背景知识的参考资料。在这里,我们的关注点是解释这些方案在做什么————以及它们在现实中可以怎样被应用。
### 基于身份的加密
在 20 世纪 80 年代中期,一位名叫<ruby> 阿迪·萨莫尔 <rt> Adi Shamir </rt></ruby>的密码学家提出了一个 [全新的想法](https://discovery.csc.ncsu.edu/Courses/csc774-S08/reading-assignments/shamir84.pdf) 。这个想法,简单来说,就是*摒弃公钥*。
为了理解 萨莫尔 的想法从何而来,我们最好先了解一些关于公钥加密的东西。在公钥加密的发明之前,所有的加密技术都牵涉到密钥。处理这样的密钥是相当累赘的工作。在你可以安全地通信之前,你需要和你的伙伴交换密钥。这一过程非常的困难,而且当通信规模增大时不能很好地运作。
公钥加密(由 [Diffie-Hellman](https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange) 和萨莫尔的 [RSA](https://en.wikipedia.org/wiki/RSA_(cryptosystem)) 密码系统发展而来的)通过极大地简化密钥分配的过程给密码学带来了革命性的改变。比起分享密钥,用户现在只要将他们的*公共*密钥发送给其他使用者。有了公钥,公钥的接收者可以加密给你的信息(或者验证你的数字签名),但是又不能用该公钥来进行解密(或者产生数字签名)。这一部分要通过你自己保存的私有密钥来完成。
尽管公钥的使用改进了密码学应用的许多方面,它也带来了一系列新的挑战。从实践中的情况来看,拥有公钥往往只是成功的一半————人们通常还需要安全地分发这些公钥。
举一个例子,想象一下我想要给你发送一封 PGP 加密的电子邮件。在我可以这么做之前,我需要获得一份你的公钥的拷贝。我要怎么获得呢?显然我们可以亲自会面,然后当面交换这个密钥————但(由于面基的麻烦)没有人会愿意这样做。通过电子的方式获得你的公钥会更理想。在现实中,这意味着要么(1)我们必须通过电子邮件交换公钥, 要么(2)我必须通过某个第三方基础设施,比如一个 [网站](https://keybase.io/) 或者 [密钥服务器](https://pgp.mit.edu/) ,来获得你的密钥。现在我们面临这样的问题:如果电子邮件或密钥服务器是*不值得信赖的*(或者简单的来说允许任何人以 [你的名义](https://motherboard.vice.com/en_us/article/bmvdwd/wave-of-spoofed-encryption-keys-shows-weakness-in-pgp) [上传密钥](https://motherboard.vice.com/en_us/article/bmvdwd/wave-of-spoofed-encryption-keys-shows-weakness-in-pgp) ),我就可能会意外下载到恶意用户的密钥。当我给“你”发送一条消息的时候,也许我实际上正在将消息加密发送给 Mallory.

*Mallory*
解决这个问题——关于交换公钥和验证它们的来源的问题——激励了*大量的*实践密码工程,包括整个 [web PKI](https://en.wikipedia.org/wiki/Certificate_authority) (网络公钥基础设施)。在大部分情况下,这些系统非常奏效。但是萨莫尔并不满意。如果,他这样问道,我们能做得更好吗?更具体地说,他这样思考:*我们是否可以用一些更好的技术去替换那些麻烦的公钥?*
萨莫尔的想法非常令人激动。他提出的是一个新的公钥加密形式,在这个方案中用户的“公钥”可以就是他们的*身份*。这个身份可以是一个名字(比如 “Matt Green”)或者某些诸如电子邮箱地址这样更准确的信息。事实上,“身份”是什么并不重要。重要的是这个公钥可以是一个任意的字符串————而*不是*一大串诸如“ 7cN5K4pspQy3ExZV43F6pQ6nEKiQVg6sBkYPg1FG56Not ”这样无意义的字符组合。
当然,使用任意字符串作为公钥会造成一个严重的问题。有意义的身份听起来很棒————但我们无法拥有它们。如果我的公钥是 “Matt Green” ,我要怎么得到对应的私钥?如果*我*能获得那个私钥,又有谁来阻止*其他的某些 Matt Green* 获得同样的私钥,进而读取我的消息?再进一步想,又有谁来阻止某个根本*不*叫 Matt Green 的人获得它?啊,我们现在陷入了 [Zooko 三难困境](https://en.wikipedia.org/wiki/Zooko%27s_triangle) 。
萨莫尔的想法因此要求稍微更多一点的手段。相比期望身份可以全世界范围使用,他提出了一个名为“<ruby> 密钥生成机构 <rt> key generation authority </rt></ruby>”的特殊服务器,负责产生私钥。在设立初期,这个机构会产生一个<ruby> 最高公共密钥 <rt> master public key </rt></ruby>(MPK),这个公钥将会向全世界公布。如果你想要加密一条消息给“Matt Green”(或者验证我的签名),你可以用我的身份和我们达成一致使用的权威机构的唯一 MPK 来加密。要*解密*这则消息(或者制作签名),我需要访问同一个密钥机构,然后请求一份我的密钥的拷贝。密钥机构将会基于一个秘密保存的<ruby> 最高私有密钥 <rt> master secret key </rt></ruby>(MSK)来计算我的密钥。
加上上述所有的算法和参与者,整个系统看起来是这样的:

*一个<ruby> 基于身份加密 <rt> Identity-Based Encryption </rt></ruby>(IBE)系统的概览。<ruby> 密钥生成机构 <rt> Key Generation Authority </rt></ruby>的 Setup 算法产生最高公共密钥(MPK)和最高私有密钥(MSK)。该机构可以使用 Extract 算法来根据指定的 ID 生成对应的私钥。加密器(左)仅使用身份和 MPK 来加密。消息的接受者请求对应她身份的私钥,然后用这个私钥解密。(图标由 [Eugen Belyakoff](https://thenounproject.com/eugen.belyakoff/) 制作)*
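把图中各个算法的输入输出罗列出来,大致如下(编者补充的示意,记号并非原文所有):

```
Setup()              → (MPK, MSK)   # 密钥生成机构运行一次:公布 MPK,保密 MSK
Extract(MSK, ID)     → sk_ID        # 机构为身份 ID 计算对应的私钥
Encrypt(MPK, ID, m)  → c            # 发送者只需要 MPK 和接收者的身份
Decrypt(sk_ID, c)    → m            # 接收者取得 sk_ID 之后即可解密
```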
这个设计有一些重要的优点————并且胜过少数明显的缺点。在好的方面,它*完全*摆脱了任何和你发送消息的对象进行密钥交换的必要。一旦你选择了一个主密钥机构(然后下载了它的 MPK),你就可以加密给整个世界上任何一个人的消息。甚至更酷炫地,在你加密的时候,*你的通讯对象甚至还不需要联系密钥机构*。她可以在你给她发送消息*之后*再取得她的私钥。
当然,这个“特性”也同时是一个漏洞。因为密钥机构产生所有的私钥,它拥有相当大权力。一个不诚实的机构可以轻易生成你的私钥然后解密你的消息。用更得体的方式来说就是标准的 IBE 系统有效地“包含” [密钥托管机制](https://en.wikipedia.org/wiki/Key_escrow)。<sup> (注2)</sup>
### 基于身份加密(IBE)中的“加密(E)”
所有这些想法和额外的思考都是萨莫尔在他 1984 年的论文中提出来的。其中有一个小问题:萨莫尔只能解决问题的一半。
具体地说,萨莫尔提出了一个<ruby> 基于身份签名 <rt> identity-based signature </rt></ruby>(IBS)的方案—— 一个公共验证密钥是身份、而签名密钥由密钥机构生成的签名方案。他尽力尝试了,但仍然不能找到一个建立基于身份*加密*的解决方案。这成为了一个悬而未决的问题。<sup> (注3)</sup>
到有人能解决萨莫尔的难题等了 16 年。令人惊讶的是,当解答出现的时候,它出现了不只一次,*而是三次*。
第一个,或许也是最负盛名的 IBE 的实现,是由<ruby> 丹·博奈 <rt> Dan Boneh </rt></ruby>和<ruby> 马太·富兰克林 <rt> Matthew Franklin </rt></ruby>在多年以后开发的。博奈和富兰克林作出这一发现的时机十分有意义。<ruby> 博奈富兰克林方案 <rt> Boneh-Franklin scheme </rt></ruby> 根本上依赖于能支持有效的 “<ruby> <a href="http://people.csail.mit.edu/alinush/6.857-spring-2015/papers/bilinear-maps.pdf"> 双线性映射 </a> <rt> bilinear map </rt></ruby>” (或者“<ruby> 配对 <rt> pairing </rt></ruby>”)<sup> (注4)</sup> 的椭圆曲线。计算这类配对所需的 [算法](https://crypto.stanford.edu/miller/) 在萨莫尔撰写他的那篇论文时还不为人知晓,因此没有被*建设性地*使用——即被作为比起 [一种攻击](http://ieeexplore.ieee.org/document/259647/) 更有用的东西使用——直至 [2000年](https://pdfs.semanticscholar.org/845e/96c20e5a5ff3b03f4caf72c3cb817a7fa542.pdf)。
*(关于博奈富兰克林 IBE 方案的简短教学,请查看 [这个页面](https://blog.cryptographyengineering.com/boneh-franklin-ibe/))*
第二个被称为 [Sakai-Kasahara](https://en.wikipedia.org/wiki/Sakai%E2%80%93Kasahara_scheme) 的方案的情况也大抵类似,这个方案将在与第一个大约同一时间被另外一组学者独立发现。
第三个 IBE 的实现并不如前二者有效,但却要令人吃惊得多。[这个方案](https://pdfs.semanticscholar.org/8289/821325781e2f0ce83cfbfc1b62c44be799ee.pdf) 由<ruby> 克利福德·柯克斯 <rt> Clifford Cocks </rt></ruby>,一位英国国家通信总局的资深密码学家开发。它因为两个原因而引人注目。第一,柯克斯的 IBE 方案完全不需要用到双线性映射——都是建立在以往的 RSA 的基础上的,这意味着*原则上*这个算法这么多年来仅仅是没有被人们发现(而非在等待相应的理论基础)而已。第二,柯克斯本人近期因为一些甚至更令人惊奇的东西而闻名:在 RSA 算法被提出之前将近 5 年 [发现 RSA 加密系统](https://cryptome.org/jya/ellisdoc.htm)(LCTT 译注:即公钥加密算法)。以公钥加密领域的又一项重大进展来为前一项成就收尾,实在令人印象深刻。
自 2001 年起,许多另外的 IBE 构造涌现出来,用到了各种各样的密码学背景知识。尽管如此,博奈和富兰克林早期的实现仍然是这些算法之中最为简单和有效的。
即使你并不因为 IBE 自身而对它感兴趣,事实证明它的基本元素对密码学家来说在许许多多单纯地加密之外的领域都十分有用。事实上,如果我们把 IBE 看作是一种由单一的主公/私钥对来产生数以亿计的相关联的密钥对的方式,它将会显得意义非凡。这让 IBE 对于诸如<ruby> <a href="https://www.cs.umd.edu/%7Ejkatz/papers/id-cca.pdf"> 选择密文攻击 </a> <rt> chosen ciphertext attacks </rt></ruby>, <ruby> <a href="https://eprint.iacr.org/2003/083.pdf"> 前向安全的公钥加密 </a> <rt> forward-secure public key encryption </rt></ruby> 和<ruby> <a href="https://en.wikipedia.org/wiki/Boneh%E2%80%93Lynn%E2%80%93Shacham"> 短签名方案 </a> <rt> short signature schemes </rt></ruby> 这样各种各样的应用来说非常有用。
### 基于特征加密
当然,如果你给密码学家以一个类似 IBE 的工具,那么首先他们要做的将是找到一种~~让事情更复杂~~改进它的方法。
最大的改进之一要归功于 [<ruby> 阿密特·萨海 <rt> Amit Sahai </rt></ruby>和<ruby> 布伦特·沃特世 <rt> Brent Waters </rt></ruby>](https://eprint.iacr.org/2004/086.pdf)。我们称之为<ruby> 基于特征加密 <rt> Attribute-Based Encryption </rt></ruby>,或者 ABE。
这个想法最初并不是为了用特征来加密。相反,萨海和沃特世试图开发一种使用生物辨识特征来加密的*基于身份的*加密方案。为了理解这个问题,想象一下我决定使用某种生物辨识特征,比如你的 [虹膜扫描影像](https://en.wikipedia.org/wiki/Iris_recognition),来作为你的“身份”来加密一则给你的密文。然后你将向权威机构请求一个对应你的虹膜的解密密钥————如果一切都匹配得上,你就可以解密信息了。
问题就在于这几乎不能奏效。

*告诉我这不会给你带来噩梦*
因为生物辨识特征的读取(比如虹膜扫描或者指纹模板)本来就是易出错的。这意味着每一次的读取通常都是十分接近的,但却总是会有几个对不上的比特。在标准的 IBE 系统中这是灾难性的:如果加密使用的身份和你的密钥身份有哪怕是一个比特的不同,解密都会失效。你就不走运了。
萨海和沃特世决定通过开发一种包含“阈值门”的 IBE 形式来解决这个问题。在这个背景下,一个身份的每一个字节都被表示为一个不同的“特征”。把每一个这种特征看作是你用于加密的一个元件——譬如“你的虹膜扫描的 5 号字节是 1”和“你的虹膜扫描的 23 号字节是 0”。加密的一方罗列出所有这些字节,然后将它们中的每一个都用于加密中。权威机构生成的解密密钥也嵌入了一连串相似的字节值。根据这个方案的定义,当且仅当(你的身份密钥与密文解密密钥之间)配对的特征数量超过某个预先定义的阈值时,才能顺利解密:*比如*为了能解密,2048 个字节中的(至少) 2024 个要是对应相同的。
这个想法的优美之处并不在于模糊 IBE 本身,而在于一旦你有了一个阈值门和一个“特征”的概念,你就能做更有趣的事情。[主要的观察结论](https://eprint.iacr.org/2006/309.pdf) 是,阈值门可以用来实现布尔逻辑中的 AND 门和 OR 门,就像这样:

甚至你还可以将这些逻辑闸门*堆叠*起来,一些在另一些之上,来表示一些相当复杂的布尔表达式——这些表达式本身就用于判定在什么情况下你的密文可以被解密。举个例子,考虑一组更为现实的特征,你可以这样加密一份医学记录,使医院的儿科医生*或者*保险理算员都可以阅读它。你所需要做的只不过是保证人们可以得到正确描述*他们的*特征的密钥(就是一些任意的字符串,如同身份那样)。

*一个简单的“密文规定”。在这个规定中当且仅当一个密钥与一组特定的特征匹配时,密文才能被解密。在这个案例中,密钥满足该公式的条件,因此密文将被解密。其余用不到的特征在这里忽略掉。*
其他的条件判断也能实现。通过一长串特征,比如文件创建时间、文件名,甚至指示文件创建位置的 GPS 坐标,来加密密文也是有可能的。于是你可以让权威机构分发只对应你数据集中某个非常精确切片的密钥————比如说,“该密钥用于解密所有在 11 月 3 号和 12 月 12 号之间在芝加哥被加密的包含‘小儿科’或者‘肿瘤科’标记的放射科文件”。
### 函数式加密
一旦拥有一个相关的基础工具,像 IBE 和 ABE,研究人员的本能是去扩充和一般化它。为什么要止步于简单的布尔表达式?我们能不能制作嵌入了*任意的计算机程序*的<ruby> 密钥 <rt> key </rt></ruby>(或者<ruby> 密文 <rt> ciphertext </rt></ruby>)?答案被证明是肯定的——尽管不是非常高效。一组 [近几年的](https://eprint.iacr.org/2013/337.pdf) [研究](https://arxiv.org/abs/1210.5287) 显示可以根据各种各样的<ruby> 基于格 <rt> lattice-based </rt></ruby>的密码假设,构建在<ruby> 任意多项式大小线路 <rt> arbitrary polynomial-size circuits </rt></ruby>运作的 ABE。所以这一方向毫无疑问非常有发展潜力。
这一潜力启发了研究人员将所有以上的想法一般化成为单独一类被称作 <ruby> <a href="https://eprint.iacr.org/2010/543.pdf"> “函数式加密” </a> <rt> functional encryption </rt></ruby> 的加密方式。函数式加密更多是一种抽象的概念而没有具体所指——它不过是一种将所有这些系统看作是一个特定的类的实例的方式。它基本的想法是,用一种依赖于(1)明文,和(2)嵌入在密钥中的数据 的任意函数 F 的算法来代表解密过程。
(LCTT 译注:上面函数 F 的 (1) 原文是“the plaintext inside of a ciphertext”,但译者认为应该是密文,其下的公式同。)
这个函数大概是这样的:
```
输出 = F(密钥数据,密文数据)
```
在这一模型中,IBE 可以表达为这样一个系统:其加密算法为 *加密(身份,明文)*,而函数 F 的定义是:如果“*密钥输入 == 身份*”,则输出对应的明文,否则输出空字符串。相似地,ABE 可以表达为一个稍微更复杂的函数。依照这一范式,我们可以展望在将来,各类有趣的功能都可以由计算不同的函数得到,并在未来的方案中被实现。
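沿用上面的记号,IBE 对应的函数 F 可以粗略写成(编者补充的示意):

```
F(密钥中的身份, (目标身份, 明文)) = 明文   (当 密钥中的身份 == 目标身份)
                                 = 空串   (其它情况)
```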
但这些都必须要等到以后了。今天我们谈的已经足够多了。
### 所以这一切的重点是什么?
对于我来说,重点不过是证明密码学可以做到一些十分优美惊人的事。当谈及工业与“应用”密码学时,我们鲜有见到这些出现在日常应用中,但我们都可以等待着它们被广泛使用的一天的到来。
也许完美的应用就在某个地方,也许有一天我们会发现它。
*注:*
* 注 1:最初在这片博文里我写的是 “20 世纪 90 年代中期”。在文章的评论里,Tom Ristenpart 提出了异议,并且非常好地论证了很多重要的发展都是在这个时间之后发生的。所以我把时间再推进了大约 5 年,而我也在考虑怎样将这表达得更好一些。
* 注 2:我们知道有一种叫作 [“无证书加密”](http://eprint.iacr.org/2003/126.pdf) 的加密的中间形式。这个想法由 Al-Riyami 和 Paterson 提出,并且使用到标准公钥加密和 IBE 的结合。基本的思路是用一个(消息接受者生成的)传统密钥和一个 IBE 身份*共同*加密每则消息。然后接受者必须从 IBE 权威机构处获得一份私钥的拷贝来解密。这种方案的优点是两方面的:(1)IBE 密钥机构不能独自解密消息,因为它没有对应的(接受者)私钥,这就解决了“托管”问题(即权威机构完全具备解密消息的能力);(2)发送者不必验证公钥的确属于接收者(LCTT 译注:原文为 sender,但译者认为应该是笔误,应为 recipient),因为 IBE 方面会防止伪装者解密这则消息。但不幸的是,这个系统更像是传统的公钥加密系统,而缺少 IBE 简洁的实用特性。
* 注 3:开发 IBE 的一部分挑战在于构建一个面临不同密钥持有者的“勾结”安全的系统。譬如说,想象一个非常简单的只有 2 比特的身份鉴定系统。这个系统只提供四个可能的身份:“00”,“01”,“10”,“11”。如果我分配给你对应 “01” 身份的密钥,分配给 Bob 对应 “10” 的密钥,我需要保证你们不能合谋生成对应 “00” 和 “11” 身份的密钥。一些早期提出的解决方法尝试通过用不同方式将标准公共加密密钥拼接到一起来解决这个问题(比如,为身份的每一个字节保留一个独立的公钥,然后将对应的多个私钥合并成一个分发)。但是,当仅仅只有少量用户合谋(或者他们的密钥被盗)时,这些系统就往往会出现灾难性的失败。因而基本上这个问题的解决就是真正的 IBE 与它的仿造近亲之间的区别。
* 注 4: 博奈和富兰克林方案的完整描述可以在 [这里](https://en.wikipedia.org/wiki/Boneh%E2%80%93Franklin_scheme) 看到,或者在他们的 [原版论文](https://crypto.stanford.edu/%7Edabo/papers/bfibe.pdf) 中。[这里](http://go-search.org/view?id=github.com%2Fvanadium%2Fgo.lib%2Fibe)、[这里](https://github.com/relic-toolkit/relic) 和 [这里](https://github.com/JHUISI/charm) 有一部分代码。除了指出这个方案十分高效之外,我不希望在这上面花太多的篇幅。它由 [Voltage Security](https://www.voltage.com/)(现属于惠普) 实现并占有专利。
---
via: <https://blog.cryptographyengineering.com/2017/07/02/beyond-public-key-encryption/>
作者:[Matthew Green](https://blog.cryptographyengineering.com/author/matthewdgreen/) 译者:[Janzen\_Liu](https://github.com/JanzenLiu) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | One of the saddest and most fascinating things about applied cryptography is *how little cryptography we actually use. *This is not to say that cryptography isn’t widely used in industry — it is. Rather, what I mean is that cryptographic researchers have developed so many useful technologies, and yet industry on a day to day basis barely uses any of them. In fact, with a few minor exceptions, the vast majority of the cryptography we use was settled by the early-2000s.*
Most people don’t sweat this, but as a cryptographer who works on the boundary of research and deployed cryptography it makes me unhappy. So while I can’t solve the problem entirely, what I *can* do is talk about some of these newer technologies. And over the course of this summer that’s what I intend to do: talk. Specifically, in the next few weeks I’m going to write a series of posts that describe some of the advanced cryptography that we *don’t* generally see used.
Today I’m going to start with a very simple question: what lies beyond public key cryptography? Specifically, I’m going to talk about a handful of technologies that were developed in the past 20 years, each of which allows us to go beyond the traditional notion of public keys.
This is a wonky post, but it won’t be mathematically-intense. For actual definitions of the schemes, I’ll provide links to the original papers, and references to cover some of the background. The point here is to explain what these new schemes do — and how they can be useful in practice.
### Identity Based Cryptography
In the mid-1980s, a cryptographer named Adi Shamir proposed a [radical new idea.](https://discovery.csc.ncsu.edu/Courses/csc774-S08/reading-assignments/shamir84.pdf) The idea, put simply, was *to get rid of public keys*.
To understand where Shamir was coming from, it helps to understand a bit about public key encryption. You see, prior to the invention of public key crypto, all cryptography involved secret keys. Dealing with such keys was a huge drag. Before you could communicate securely, you needed to exchange a secret with your partner. This process was fraught with difficulty and didn’t scale well.
Public key encryption (beginning with [Diffie-Hellman](https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange) and Shamir’s [RSA](https://en.wikipedia.org/wiki/RSA_(cryptosystem)) cryptosystem) hugely revolutionized cryptography by dramatically simplifying this key distribution process. Rather than sharing secret keys, users could now transmit their *public* key to other parties. This public key allowed the recipient to encrypt to you (or verify your signature) but it could not be used to perform the corresponding decryption (or signature generation) operations. That part would be done with a secret key you kept to yourself.
While the use of public keys improved many aspects of using cryptography, it also gave rise to a set of new challenges. In practice, it turns out that having public keys is only half the battle — people still need to use distribute them securely.
For example, imagine that I want to send you a PGP-encrypted email. Before I can do this, I need to obtain a copy of your public key. How do I get this? Obviously we could meet in person and exchange that key on physical media — but nobody wants to do this. It would much more desirable to obtain your public key electronically. In practice this means either *(1)* we have to exchange public keys by email, or *(2)* I have to obtain your key from a third piece of infrastructure, such as a [website](https://keybase.io/) or [key server](https://pgp.mit.edu/). And now we come to the problem: if that email or key server is *untrustworthy *(or simply allows anyone to [upload a key in y](https://motherboard.vice.com/en_us/article/bmvdwd/wave-of-spoofed-encryption-keys-shows-weakness-in-pgp)[our name](https://motherboard.vice.com/en_us/article/bmvdwd/wave-of-spoofed-encryption-keys-shows-weakness-in-pgp)),* *I might end up downloading a malicious party’s key by accident. When I send a message to “you”, I’d actually be encrypting it to Mallory.

Solving this problem — of exchanging public keys and verifying their provenance — has motivated a *huge* amount of practical cryptographic engineering, including the entire [web PKI.](https://en.wikipedia.org/wiki/Certificate_authority) In most cases, these systems work well. But Shamir wasn’t satisfied. What if, he asked, we could do it better? More specifically, he asked: *could we replace those pesky public keys with something better?*
Shamir’s idea was exciting. What he proposed was a new form of public key cryptography in which the user’s “public key” could simply be their *identity*. This identity could be a name (e.g., “Matt Green”) or something more precise like an email address. Actually, it didn’t realy matter. What did matter was that the public key would be some arbitrary string — and *not* a big meaningless jumble of characters like “7cN5K4pspQy3ExZV43F6pQ6nEKiQVg6sBkYPg1FG56Not”.
Of course, using an arbitrary string as a public key raises a big problem. Meaningful identities sound great — but I don’t own them. If my public key is “Matt Green”, how do I get the corresponding private key? And if* **I* can get out that private key, what stops *some other Matt Green* from doing the same, and thus reading my messages? And ok, now that I think about this, what stops some random person who *isn’t* named Matt Green from obtaining it? Yikes. We’re headed straight into [Zooko’s triangle](https://en.wikipedia.org/wiki/Zooko%27s_triangle).
Shamir’s idea thus requires a bit more finesse. Rather than expecting identities to be global, he proposed a special server called a “key generation authority” that would be responsible for generating the private keys. At setup time, this authority would generate a single *master public key (MPK), *which it would publish to the world. If you wanted to encrypt a message to “Matt Green” (or verify my signature), then you could do so using my identity and the single MPK of an authority we’d both agree to use. To *decrypt *that message (or sign one), I would have to visit the same key authority and ask for a copy of my secret key. The key authority would compute my key based on a *master secret key (MSK)*, which it would keep very secret.
With all algorithms and players specified, whole system looks like this:

**Setup**algorithm of the Key Generation Authority generates the master public key (MPK) and master secret key (MSK). The authority can use the
**Extract**algorithm to derive the secret key corresponding to a specific ID. The encryptor (left) encrypts using only the identity and MPK. The recipient requests the secret key for her identity, and then uses it to decrypt. (Icons by
[Eugen Belyakoff](https://thenounproject.com/eugen.belyakoff/))
This design has some important advantages — and more than a few obvious drawbacks. On the plus side, it removes the need for any key exchange *at all* with the person you’re sending the message to. Once you’ve chosen a master key authority (and downloaded its MPK), you can encrypt to anyone in the entire world. Even cooler: at the time you encrypt, *your recipient doesn’t even need to have contacted the key authority yet*. She can obtain her secret key *after* I send her a message.
Of course, this “feature” is also a bug. Because the key authority generates all the secret keys, it has an awful lot of power. A dishonest authority could easily generate your secret key and decrypt your messages. The polite way to say this is that standard IBE systems effectively “bake in” [key escrow](https://en.wikipedia.org/wiki/Key_escrow).**
**Putting the “E” in IBE**
All of these ideas and more were laid out by Shamir in his 1984 paper. There was just one small problem: Shamir could only figure out half the problem.
Specifically, Shamir’s proposed a scheme for *identity-based signature* (IBS) — a signature scheme where the public verification key is an identity, but the signing key is generated by the key authority. Try as he might, he could not find a solution to the problem of building identity-based* encryption *(IBE). This was left as an open problem.***
It would take more than 16 years before someone answered Shamir’s challenge. Surprisingly, when the answer came it came not once *but three times*.
The first, and probably most famous realization of IBE was developed by Dan Boneh and Matthew Franklin much later — in 2001. The timing of Boneh and Franklin’s discovery makes a great deal of sense. The Boneh-Franklin scheme relies fundamentally on elliptic curves that support an efficient “[bilinear map](http://people.csail.mit.edu/alinush/6.857-spring-2015/papers/bilinear-maps.pdf)” (or “pairing”).**** The [algorithms](https://crypto.stanford.edu/miller/) needed to compute such pairings were not known when Shamir wrote his paper, and weren’t employed *constructively* — that is, as a useful thing rather than [an attack](http://ieeexplore.ieee.org/document/259647/) — until about [2000](https://pdfs.semanticscholar.org/845e/96c20e5a5ff3b03f4caf72c3cb817a7fa542.pdf). The same can be said about a second scheme called [Sakai-Kasahara](https://en.wikipedia.org/wiki/Sakai%E2%80%93Kasahara_scheme), which would be independently discovered around the same time.
*(For a brief tutorial on the Boneh-Franklin IBE scheme, see this page.)*
The third realization of IBE was not as efficient as the others, but was much more surprising. [This scheme](https://pdfs.semanticscholar.org/8289/821325781e2f0ce83cfbfc1b62c44be799ee.pdf) was developed by Clifford Cocks, a senior cryptologist at Britain’s GCHQ. It’s noteworthy for two reasons. First, Cocks’ IBE scheme does not require bilinear pairings at all — it is based in the much older RSA setting, which means *in principle *it spent all those years just waiting to be found. Second, Cocks himself had recently become known for something even more surprising: [discovering the RSA cryptosystem,](https://cryptome.org/jya/ellisdoc.htm) nearly five years before RSA did. To bookend that accomplishment with a second major advance in public key cryptography was a pretty impressive accomplishment.
In the years since 2001, a number of additional IBE constructions have been developed, using all sorts of cryptographic settings. Nonetheless, Boneh and Franklin’s early construction remains among the simplest and most efficient.
Even if you’re not interested in IBE for its own sake, it turns out that this primitive is really useful to cryptographers for many things beyond simple encryption. In fact, it’s more helpful to think of IBE as a way of “pluralizing” a single public/secret master keypair into billions of related keypairs. This makes it useful for applications as diverse as blocking [chosen ciphertext attacks,](https://www.cs.umd.edu/~jkatz/papers/id-cca.pdf) [forward-secure public key encryption](https://eprint.iacr.org/2003/083.pdf), and short [signature schemes](https://en.wikipedia.org/wiki/Boneh%E2%80%93Lynn%E2%80%93Shacham).
**Attribute Based Encryption**
Of course, if you leave cryptographers alone with a tool like IBE, the first thing they’re going to do is find a way to ~~make things more complicated~~ improve on it.
One of the biggest such improvements is due to [Sahai and Waters](https://eprint.iacr.org/2004/086.pdf). It’s called Attribute-Based Encryption, or ABE.
The origin of this idea was not actually to encrypt with attributes. Instead Sahai and Waters were attempting to develop an *Identity-Based* encryption scheme that could encrypt using biometrics. To understand the problem, imagine I decide to use a biometric like your [iris scan](https://en.wikipedia.org/wiki/Iris_recognition) as the “identity” to encrypt you a ciphertext. Later on you’ll ask the authority for a decryption key that corresponds to your own iris scan — and if everything matches up and you’ll be able to decrypt.
The problem is that this will almost never work.
The issue here is that biometric readings (like iris scans or fingerprint templates) are inherently error-prone. This means every scan will typically be very *close*, but often there will be a few bits that disagree. With standard IBE

this is *fatal*: if the encryption identity differs from your key identity by even a single bit, decryption will not work. You’re out of luck.
Sahai and Waters decided that the solution to this problem was to develop a form of IBE with a “threshold gate”. In this setting, each bit of the identity is represented as a different “attribute”. Think of each of these as components you’d encrypt under — something like “bit 5 of your iris scan is a 1” and “bit 23 of your iris scan is a 0”. The encrypting party lists all of these bits and encrypts under each one. The decryption key generated by the authority embeds a similar list of bit values. The scheme is defined so that decryption will work if and only if the number of matching attributes (between your key and the ciphertext) exceeds some pre-defined threshold: *e.g.,* any 2024 out of 2048 bits must be identical in order to decrypt.
The beautiful thing about this idea is not fuzzy IBE. It’s that once you have a threshold gate and a concept of “attributes”, you can more interesting things. The [main observation](https://eprint.iacr.org/2006/309.pdf) is that a threshold gate can be used to implement the boolean AND and OR gates, like so:
Even better, you can *stack* these gates on top of one another to assign a fairly complex boolean formula — which will itself determine what conditions your ciphertext can be decrypted under. For example, switching to a more realistic set of attributes, you could encrypt a medical record so that either a pediatrician in a hospital could read it, *or* an insurance adjuster could. All you’d need is to make sure people received keys that correctly described *their* attributes (which are just arbitrary strings, like identities).

The other direction can be implemented as well. It’s possible to encrypt a ciphertext under a long list of attributes, such as creation time, file name, and even GPS coordinates indicating where the file was created. You can then have the authority hand out keys that correspond to a very precise slice of your dataset — for example, “this key decrypts any radiology file encrypted in Chicago between November 3rd and December 12th that is tagged with ‘pediatrics’ or ‘oncology'”.
**Functional Encryption**
Once you have a related of primitives like IBE and ABE, the researchers’ instinct is to both extend and generalize. Why stop at simple boolean formulae? Can we make keys (or ciphertexts) that embed *arbitrary computer programs? *The answer, it turns out, is yes — though not terribly efficiently. A set of [recent](https://eprint.iacr.org/2013/337.pdf) [works](https://arxiv.org/abs/1210.5287) show that it is possible to build ABE that works over arbitrary polynomial-size circuits, using various lattice-based assumptions. So there is certainly a lot of potential here.
This potential has inspired researchers to generalize all of the above ideas into a single class of encryption called “[functional encryption](https://eprint.iacr.org/2010/543.pdf)“. Functional encryption is more conceptual than concrete — it’s just a way to look at all of these systems as instances of a specific class. The basic idea is to represent the decryption procedure as an algorithm that computes an arbitary function *F* over (1) the plaintext inside of a ciphertext, and (2) the data embedded in the key. This function has the following profile:
*output = F(key data, plaintext data)*
In this model *IBE* can be expressed by having the encryption algorithm encrypt* (identity, plaintext) *and defining the function *F *such that, if “*key input == identity”, it *outputs the plaintext, and outputs an empty string otherwise. Similarly, ABE can be expressed by a slightly more complex function. Following this paradigm, once can envision all sorts of interesting functions that might be computed by different functions and realized by future schemes.
But those will have to wait for another time. We’ve gone far enough for today.
### So what’s the point of all this?
For me, the point is just to show that cryptography can do some pretty amazing things. We rarely see this on a day-to-day basis when it comes to industry and “applied” cryptography, but it’s all there waiting to be used.
Perhaps the perfect application is out there. Maybe you’ll find it.
*Notes:*
* An earlier version of this post said “mid-1990s”. In comments below, Tom Ristenpart takes issue with that and makes the excellent point that many important developments post-date that. So I’ve moved the date forward about five years, and I’m thinking about how to say this in a nicer way.
** There is also an intermediate form of encryption known as “[certificateless encryption](http://eprint.iacr.org/2003/126.pdf)“. Proposed by Al-Riyami and Paterson, this idea uses a *combination* of standard public key encryption and IBE. The basic idea is to encrypt each message using *both* a traditional public key (generated by the recipient) and an IBE identity. The recipient must then obtain a copy of the secret key from the IBE authority to decrypt. The advantages here are twofold: (1) the IBE key authority can’t decrypt the message by itself, since it does not have the corresponding secret key, which solves the “escrow” problem. And (2) the sender does not need to verify that the public key really belongs to the sender (e.g., by checking a certificate), since the IBE portion prevents imposters from decrypting the resulting message. Unfortunately this system is more like traditional public key cryptography than IBE, and does not have the neat usability features of IBE.
*** A part of the challenge of developing IBE is the need to make a system that is secure against “collusion” between different key holders. For example, imagine a very simple system that has only 2-bit identities. This gives four possible identities: “00”, “01”, “10”, “11”. If I give you a key for the identity “01” and I give Bob a key for “10”, we need to ensure that you two cannot collude to produce a key for identities “00” and “11”. Many earlier proposed solutions have tried to solve this problem by gluing together standard public encryption keys in various ways (e.g., by having a separate public key for each bit of the identity and giving out the secret keys as a single “key”). However these systems tend to fail catastrophically when just a few users collude (or their keys are stolen). Solving the collusion problem is fundamentally what separates real IBE from its faux cousins.
**** A full description of Boneh and Franklin’s scheme can be found [here](https://en.wikipedia.org/wiki/Boneh%E2%80%93Franklin_scheme), or in the [original paper](https://crypto.stanford.edu/~dabo/papers/bfibe.pdf). Some code is [here](http://go-search.org/view?id=github.com%2Fvanadium%2Fgo.lib%2Fibe) and [here](https://github.com/relic-toolkit/relic) and [here](https://github.com/JHUISI/charm). I won’t spend more time on it, except to note that the scheme is very efficient. It was patented and implemented by [Voltage Security](https://www.voltage.com/), now part of HPE.
Hi Matthew, enjoyed reading your latest post. Fascinating! Took Dan Boneh’s crypto class couple years back and now looking forward to his next Coursera class which will focus on various IBE schemes, some of which Dan helped introduce.
Given advent of quantum computing power and useful Block Chain technology via crypto currencies, I believe these protocols (ZK, IBE, etc) will play a significant role in promoting better privacy and security environments (data in motion) especially in cyber space. Note: Boot strapping IBE schemes with biometrics is interesting, conceptually, but from a privacy point of view could be problematic, given LE can compel one to unlock their device if they use their unique biometric fingerprint, but cannot if you hold the “key” in your head. Your unique biometric identity can be used as witness testimony. Of course that does not mean methods cannot be used to force one to eventually give-up he key – being incarcerate for an” indefinite” period of time might be incentive enough for most people.
Appreciate your post
Suggestion: Google Search Identity Based Encryption
A Google search for someone will bring up information about them that is publicly available. Trying to fake this information would be difficult. You would have to ensure that Alice and Bob both got the same bogus search results at different times (encrypt/decrypt) and different locations. Since Google search results are fed from sites across the planet this is a formidable challenge.
Any attempt to fake these results “at the source” within Google would be publicly obvious.
So the trick is to leverage public search data rather than Iris scans.
It’s not surprising IBE has not had any adoption, if it basically requires entrusting the Key Generation Authority with the ability to decrypt your messages. In an era where even the NSA can’t keep its secrets, RSA can’t keep SecurID, well, secure, and certificate authorities fail at their only job, the idea that any such authority can be trusted is simply laughable, even before we consider their susceptibility to government coercion.
Hi
I love your blog; thank you for always being able to explain stuff to non-mathematicians!
I would like to offer my thoughts on why the world at large has not moved beyond PKI. Apologies for the rambling!
I’ll start with saying: none of the below means that these no-doubt great crypto ideas are not applicable in specific, possibly un-common, circumstances; just that in most systems that are in use today, they don’t work well enough, if at all.
My background is I work for TCS, which is a large-ish IT services company. I used to be (many years ago) a development manager type person but I haven’t done that for a while now. I like to think of what I do now as a bit of research, plus a lot of internal consulting on security matters.
Firstly, speaking from an IT services point of view, we almost never offer to use anything that has not been standardised in some fashion (for example by NIST; mis-adventures like Dual EC DRBG notwithstanding!)
Secondly, if I again don my “development manager” hat, there is no way I will allow my team to *write* their own crypto — I don’t have any Dan Bernsteins on my team! Equally, there is no way I will let them use any algorithm that does not have at least two public, open source (GPL, Apache, whatever…) implementations, preferably in two different languages.
These are just basic hygiene for enterprise use of crypto. In fact, it would be a hugely optimistic (or knowledge-lacking) customer who knowingly agreed to let us violate these constraints.
IBE:
To your specific algorithms now, starting with IBE.
I have two issues with IBE. First, with standard PKI, I do the identification once, when I sign the key he presents. After that if he has the key, he’s who he claims to be. The key *is* the identity proof, from that point on; I’m done with that problem.
IBE passes the problem of proving identity to something else, and then hands out a private key based on that proof. That’s always struck me as particularly problematic.
Worse, since so many people are fond of predicting the death of the password (no comment!), proving identity is often **best** done with keys of some kind — ssh keys, 2-factor keys, yubikeys, keys in some other sort of crypto device, whatever.
Catch-22!
In fact, since you and I have to agree on a server anyway, it may as well be my **PKI** server that you ask for my public key. What’s the difference? I see little!
(Literally the only advantage I see is the ability for me to receive an encrypted message *before* I have had a chance to download my private key, but you have to admit that’s hardly a “killer use case” in the big picture!)
Back to my PKI server, maybe I can even **choose to** allow it to keep my *private* key safe, and give it only to me in case I ever lose it! That replicates that behaviour of the IBE server also!
But that brings me to my second problem. You called it “key escrow”, but, in my humble opinion, that does not go far enough in making sure people get the true depth of the problem. I prefer to call it “lack of non-repudiation”, or, “anyone can claim the server impersonated them!”
And honestly, trusting a third party seems a bit optimistic in 2017.
(Finally, look at the phrasing you used: “pluralizing a single public/secret master keypair into billions of related keypairs”. That already implies that all those billions of related keypairs somehow “belong” to whoever owns the single master pair. It certainly does not lend itself to an interpretation where the billions of keypairs belong to a number of *independent users*, such as in an enterprise environment)
ABE:
ABE also sounds interesting, until you get into the weeds. (I appreciate your honesty, of writing “make things more complicated” in a strike-through font!). It is indeed complicated, and yet seems to offer no real benefits — when you consider deployment issues, over an ACL based system baked into the server side application.
If you assume that the bulk of the information is “in the cloud”, and the end point accessing it is just used as a reasonably dumb accessor device that only asks for their identity in some fashion, and passes it to the server, ABE does not add any value whatsoever. (The hospital example you used could fit this model quite well, and indeed I think most hospital systems are heavily server based).
The second problem with ABE is that monitoring bad behaviour is much harder — any failed attempts to decrypt are not always known to the server, unlike with a server-mediated access control mechanism that stops, and counts (and reports), the attempted **download** itself, if it is in violation.
And that point is much more important when you consider that revocations have always been a problem in ABE. Someone who saves old keys, can definitely read all documents he is no longer allowed to. Even if some expensive re-encryption, or proxy re-encryption, is performed, that won’t save documents he downloaded *before* his rights were revoked. (He can do that in an server-side access managed system too, but that’s an active system that’s counting his downloads and may be better able to report that “John in Finance has downloaded 320 documents last night, which is unusual for him; please check!”).
Basically, I’d rather control the initial download, because after that I have no real control!
Anyway, I think I’ve rambled enough, so I’ll give FE a miss 🙂 Thanks for listening.
I also have encountered the “lack-of-standard” problem with anything newer than 1990’s crypto.
If there isn’t an accepted specification from a reputable standards body, that guarantees at least some level of interoperability, then some organisations will not consider going further. This is partly for the valid reason that implementing a non-interoperable standard locks the company into maintaining that code forever more. When a company, as many do, outsource development and maintenance, the contractor who implemented the system has freedom to charge what they like. I hit this obstacle when working on a system for which a simple hash-chain would have been the ideal solution, but where’s the NIST standard for a hash-chain?
I think this is one reason that blockchains have become so popular. The vast majority of users don’t need all they offer, but they are at least a standard, to some extent.
Interesting ideas! Thank you for sharing!
Interesting article but it doesn’t explain how the new IBE systems avoid the problem of depending on a central authority.
I regularly enjoy reading your blog, Matt, but with all due respect, I couldn’t read past the first paragraph this time! I was floored by the following statement: “In fact, with a few minor exceptions, the vast majority of the cryptography we use was settled by the mid-1990s.”
The examples refuting this statement are everywhere: hash function design (SHA-2, SHA-3), block cipher design (AES), authenticated encryption, format-preserving encryption, disk encryption schemes, memory hard password hashing (scrypt, argon2), OTR (and in turn Signal), side-channel resistant implementations …. Actually the more I think about it the harder it is to find examples of in-use crypto primitives that have *not* seen significant research developments since the mid 1990s. According to your post we live in a world of mid 1990s crypto: one would be using MD5 for hashing, DES for encryption, all deployed encryption would be malleable, ad nauseam. Of course there is still broken legacy crypto around, but the point is that we know how to do better, thanks to a ton of amazing crypto research that your blog implies never existed.
By seemingly (and probably unintentionally) dismissing all these topics as not worthy of research (having been settled so long ago), I worry that your comments will discourage people from working on all the important problems in applied crypto we still face. This would be a huge tragedy! Sure, the research community should work on speculative primitives that may (or may not) be practically relevant in the future (who knows where breakthroughs will come from), but let’s not forget all the fantastic, fundamental research that has already had, and continues to have, tangible impact on security in practice.
Tom, the post is about “beyond public key crypto”, all your examples are about symmetric key crypto. Don’t worry, Matt is not dismissing your fantastic research.
Even just considering public-key crypto, his statement is not really supported by the evidence. Modern forward-secure key exchange protocols weren’t developed until the mid-2000s. Proper padding modes for RSA encryption and signing came at the end of the 90s and early 2000s. Many of the most widely-used elliptic curves (e.g. Curve25519) are quite recent as well.
I think this is probably a good point. I responded in part on Twitter and I’ll summarize here. I’m including some of Anon1’s points as well:
* OAEP padding for RSA: 1994 (and it’s still not used in TLS 1.2)
* AES (Rijndael): 1998
* IAPM (one of the earlier one-pass AE modes): 2000
* SHA-2: 2001
* Modern forward-secure KE protocols: tough to say, maybe 2000/1 for MQV, but TLS and IKE still do their own thing (we can argue about TLS 1.3)
* Disk encryption schemes (I dunno that any of the more modern wide-pipe modes are heavily used. I checked and Bitlocker uses CBC mode, Apple iOS uses CTR or something. A few implementations use XTS mode but that’s not something to be proud of.)
Based on that list it looks like the cryptographic “drop dead line” is actually about 2001, so I was off by 5 years or so and I’ve updated the post. There are still a few exceptions: memory-hard hash functions (2007), OTR (2004) and (software) side-channel resistant implementations (who knows) are more recent (i.e., 2000s) inventions. You could also throw in Curve25519 and Salsa/ChaCha, although these also leverage techniques known earlier.
What’s notable about this list is that very few of these are fundamentally new *types* of primitive, in the sense that IBE/IBS was a fundamentally different type of primitive from PKE. Inventing new hash functions (that fit known profiles) and new ciphers (that fit known applications) is very useful and important — but it’s mostly “replacing windows” rather than “building new houses”.
However, with all that said: my point in this series is not to crap on the developments that people have made and deployed, but to point out some of the neater technologies that we haven’t. So if my intro gives you this reaction I’ll try to figure out how to make my point a bit better.
Accessing a key authority every time one needs to encrypt/decrypt may be a big issue for dissidents in authoritarian regimes due to meta data interception.
This is in contrast to standard PKE where one can collect public keys ahead of any actual use, and get them from a variety of sources (and not only from key servers).
To clarify, one doesn’t need to get the key *every* time, just the first time. Once you request the key you can use it to decrypt any message sent to your identity.
> Actually, it didn’t really matter.
> you can do more interesting things. |
8,818 | 在 Linux 中分割和重组文件 | https://www.linux.com/learn/intro-to-linux/2017/8/splitting-and-re-assembling-files-linux | 2017-08-29T07:59:14 | [
"csplit",
"split"
] | https://linux.cn/article-8818-1.html | 
>
> Carla Schroder 为你讲解非常有用的 `csplit` 命令,它可以将单个文件分割成多个文件。
>
>
>
Linux 有几个用于分割文件的工具程序。那么你为什么要分割文件呢?一个用例是将大文件分割成更小的尺寸,以便它适用于比较小的存储介质,比如 U 盘。当您遇到 FAT32(最大文件大小为 4GB),且您的文件大于此时,通过 U 盘传输文件也是一个很好的技巧。另一个用例是加速网络文件传输,因为小文件的并行传输通常更快。
我们将学习如何使用 `csplit`,`split` 和 `cat` 来重新整理文件,然后再将文件合并在一起。这些操作在任何文件类型下都有用:文本、图片、音频文件、ISO 镜像文件等。
### 使用 csplit 分割文件
`csplit` 是这些有趣的小命令中的一个,它永远伴你左右,一旦开始用它就离不开了。`csplit` 将单个文件分割成多个文件。这个示例演示了最简单的使用方法,它将文件 foo.txt 分为三个文件,以行号 17 和 33 作为分割点:
```
$ csplit foo.txt 17 33
2591
3889
2359
```
`csplit` 在当前目录下创建了三个新文件,并以字节为单位打印出新文件的大小。默认情况下,每个新文件名为 `xxnn`,即以 `xx` 为前缀,后跟从 00 开始的两位数字编号:
```
$ ls
xx00
xx01
xx02
```
您可以使用 `head` 命令查看每个新文件的前十行:
```
$ head xx*
==> xx00 <==
Foo File
by Carla Schroder
Foo text
Foo subheading
More foo text
==> xx01 <==
Foo text
Foo subheading
More foo text
==> xx02 <==
Foo text
Foo subheading
More foo text
```
如果要将文件分割成包含相同行数的多个文件怎么办?可以指定行数,然后将重复次数放在花括号中。此示例重复分割 4 次,并将剩下的内容转储到最后一个文件中:
```
$ csplit foo.txt 5 {4}
57
1488
249
1866
3798
```
您可以使用星号通配符来告诉 `csplit` 尽可能多地重复分割。这听起来很酷,但是如果文件不能等分,则可能会失败(LCTT 译注:低版本的 `csplit` 不支持此参数):
```
$ csplit foo.txt 10 {*}
1545
2115
1848
1901
csplit: '10': line number out of range on repetition 4
1430
```
默认的行为是删除发生错误时的输出文件。你可以用 `-k` 选项来解决这个问题,当有错误时,它就不会删除输出文件。另一个行为是每次运行 `csplit` 时,它将覆盖之前创建的文件,所以你需要使用新的文件名来分别保存它们。使用 `--prefix=前缀` 来设置一个不同的文件前缀:
```
$ csplit -k --prefix=mine foo.txt 5 {*}
57
1488
249
1866
993
csplit: '5': line number out of range on repetition 9
437
$ ls
mine00
mine01
mine02
mine03
mine04
mine05
```
选项 `-n` 可用于改变对文件进行编号的数字位数(默认是 2 位):
```
$ csplit -n 3 --prefix=mine foo.txt 5 {4}
57
1488
249
1866
1381
3798
$ ls
mine000
mine001
mine002
mine003
mine004
mine005
```
`csplit` 中的 “c” 是上下文(context)的意思。这意味着你可以根据任意匹配的方式或者巧妙的正则表达式来分割文件。下面的例子将文件分为两部分。第一个文件在包含第一次出现 “fie” 的前一行处结束,第二个文件则以包含 “fie” 的行开头。
```
$ csplit foo.txt /fie/
```
在每次出现 “fie” 时分割文件:
```
$ csplit foo.txt /fie/ {*}
```
在 “fie” 前五次出现的地方分割文件:
```
$ csplit foo.txt /fie/ {5}
```
仅当内容以包含 “fie” 的行开始时才复制,并且省略前面的所有内容:
```
$ csplit myfile %fie%
```
### 将文件分割成不同大小
`split` 与 `csplit` 类似。它将文件分割成特定的大小,当您将大文件分割成小的多媒体文件或者使用网络传送时,这就非常棒了。默认的大小为 1000 行:
```
$ split foo.mv
$ ls -hl
266K Aug 21 16:58 xaa
267K Aug 21 16:58 xab
315K Aug 21 16:58 xac
[...]
```
它们分割出来的大小相似,但你可以指定任何你想要的大小。这个例子中是 20M 字节:
```
$ split -b 20M foo.mv
```
尺寸单位缩写为 K,M,G,T,P,E,Z,Y(1024 的幂)或者 KB,MB,GB 等等(1000 的幂)。
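例如,下面两条命令分别按 1024 和 1000 的倍数来指定块大小(仍以上文的 foo.mv 为例,输出文件名使用默认的 xaa、xab 等):

```
$ split -b 100K foo.mv    # 每块为 100 × 1024 字节
$ split -b 100KB foo.mv   # 每块为 100 × 1000 字节
```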
为文件名选择你自己的前缀和后缀:
```
$ split -a 3 --numeric-suffixes=9 --additional-suffix=mine foo.mv SB
240K Aug 21 17:44 SB009mine
214K Aug 21 17:44 SB010mine
220K Aug 21 17:44 SB011mine
```
`-a` 选项控制编号的数字位数(即用几位数字来编号)。`--numeric-suffixes` 设置编号的开始值。默认前缀为 `x`,你也可以通过在文件名后输入它来设置一个不同的前缀。
### 将分割后的文件合并
你可能想在某个时候重组你的文件。常用的 `cat` 命令就用在这里:
```
$ cat SB0* > foo2.txt
```
示例中的星号通配符将匹配到所有以 SB0 开头的文件,这可能不会得到您想要的结果。您可以使用问号通配符进行更精确的匹配,每个字符使用一个问号:
```
$ cat SB0?????? > foo2.txt
```
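重组完成后,可以比较一下校验和,确认重组出来的文件与原始文件完全一致(这里以 `sha256sum` 为例,文件名沿用上文示例;如果两行输出的哈希值相同,就说明重组没有出错):

```
$ sha256sum foo.mv foo2.txt
```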
和往常一样,请查阅相关的手册和信息页面以获取完整的命令选项。
---
via: <https://www.linux.com/learn/intro-to-linux/2017/8/splitting-and-re-assembling-files-linux>
作者:[CARLA SCHRODER](https://www.linux.com/users/cschroder) 译者:[firmianay](https://github.com/firmianay) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,819 | 如何恢复丢弃的 git stash 数据 | https://opensource.com/article/17/8/recover-dropped-data-stash | 2017-08-29T08:15:00 | [
"git"
] | /article-8819-1.html |
>
> 不要让 git 命令中的错误抹去你数天的工作
>
>
>

今天我的同事几乎失去了他在四天工作中所做的一切。由于不正确的 `git` 命令,他把保存在 [stash](https://www.git-scm.com/docs/git-stash) 中的更改删除了。在这悲伤的情节之后,我们试图寻找一种恢复他所做工作的方法,而且我们做到了!
首先警告一下:当你在实现一个大功能时,请将它分成小块并定期提交。长时间工作而不做提交并不是一个好主意。
现在我们已经搞定了那个错误,下面就演示一下怎样从 stash 中恢复误删的更改。
我用作示例的仓库中,只有一个源文件 “main.c”,如下所示:

它只有一次提交,即 “Initial commit”:

该文件的第一个版本是:

我将在文件中写一些代码。对于这个例子,我并不需要做什么大的改动,只需要有什么东西放进 stash 中即可,所以我们仅仅增加一行。“git diff” 的输出如下:

现在,假设我想从远程仓库中拉取一些新的更改,当时还不打算提交我自己的更改。于是,我决定先 stash 它,等拉取远程仓库中的更改后,再把我的更改恢复应用到主分支上。我执行下面的命令将我的更改移动到 stash 中:
```
git stash
```
使用命令 `git stash list` 查看 stash,在这里能看到我的更改:

我的代码已经在一个安全的地方,而且主分支目前是干净的(使用命令 `git status` 检查)。现在我只需要拉取远程仓库的更改,然后把我的更改恢复应用到主分支上,而且我也应该是这么做的。
但是我错误地执行了命令:
```
git stash drop
```
它删除了 stash,而不是执行了下面的命令:
```
git stash pop
```
这条命令会在把 stash 从栈中删除之前先应用它。如果我再次执行命令 `git stash list`,就能看到:我还没有把更改恢复到主分支上,就把它从栈中删除了。OMG!接下来怎么办?
好消息是:`git` 并没有删除包含了我的更改的对象,它只是移除了对它的引用。为了证明这一点,我使用命令 `git fsck`,它会验证数据库中对象的连接和有效性。这是我对该仓库执行了 `git fsck` 之后的输出:

由于使用了参数 `--unreachable`,我让 `git-fsck` 显示出所有不可访问的对象。正如你看到的,它显示并没有不可访问的对象。而当我从 stash 中删除了我的更改之后,再次执行相同的指令,得到了一个不一样的输出:

现在有三个不可访问对象。那么哪一个才是我的更改呢?实际上,我不知道。我需要通过执行命令 `git show` 来搜索每一个对象。

就是它!ID 号 `95ccbd927ad4cd413ee2a28014c81454f4ede82c` 对应了我的更改。现在我已经找到了丢失的更改,我可以恢复它。其中一种方法是将此 ID 取出来放进一个新的分支,或者直接提交它。如果你得到了你的更改对象的 ID 号,就可以决定以最好的方式,将更改再次恢复应用到主分支上。对于这个例子,我使用 `git stash` 将更改恢复到我的主分支上。
```
git stash apply 95ccbd927ad4cd413ee2a28014c81454f4ede82c
```
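如果你更愿意像上文提到的另一种做法那样,把找回的更改放进一个新分支,也可以基于该对象 ID 创建分支再检出(分支名 `recovered-stash` 仅为示例):

```
$ git branch recovered-stash 95ccbd927ad4cd413ee2a28014c81454f4ede82c
$ git checkout recovered-stash
```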
另外需要重点记住的是 `git` 会周期性地执行它的垃圾回收程序(`gc`),它执行之后,使用 `git fsck` 就不能再看到不可访问对象了。
*本文[最初发表](http://jvanz.com/recovering-missed-data-from-stash.html#recovering-missed-data-from-stash)于作者的博客,并得到了转载授权。*
(题图:opensource.com,附图:José Guilherme Vanz, [CC BY](https://creativecommons.org/licenses/by/4.0/))
---
via: <https://opensource.com/article/17/8/recover-dropped-data-stash>
作者:[Jose Guilherme Vanz](https://opensource.com/users/jvanz) 译者:[firmianay](https://github.com/firmianay) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
8,820 | 六个优雅的 Linux 命令行技巧 | https://www.networkworld.com/article/3219684/linux/half-a-dozen-clever-linux-command-line-tricks.html | 2017-08-30T08:13:00 | [
"命令行"
] | https://linux.cn/article-8820-1.html |
>
> 一些非常有用的命令能让命令行的生活更满足
>
>
>

使用 Linux 命令工作可以获得许多乐趣,但是如果您使用一些命令,它们可以减少您的工作或以有趣的方式显示信息时,您将获得更多的乐趣。在今天的文章中,我们将介绍六个命令,它们可能会使你用在命令行上的时间更加值当。
### watch
`watch` 命令会重复运行您给出的任何命令,并显示输出。默认情况下,它每两秒运行一次命令。命令的每次运行都将覆盖上一次运行时显示的内容,因此您始终可以看到最新的数据。
您可能会在等待某人登录时使用它。在这种情况下,您可以使用 `watch who` 命令或者 `watch -n 15 who` 命令使每 15 秒运行一次,而不是两秒一次。另外终端窗口的右上角会显示日期和时间。
```
$ watch -n 5 who
Every 5.0s: who stinkbug: Wed Aug 23 14:52:15 2017
shs pts/0 2017-08-23 14:45 (192.168.0.11)
zoe pts/1 2017-08-23 08:15 (192.168.0.19)
```
您也可以使用它来查看日志文件。如果您显示的数据没有任何变化,则只有窗口角落里的日期和时间会发生变化。
```
$ watch tail /var/log/syslog
Every 2.0s: tail /var/log/syslog stinkbug: Wed Aug 23 15:16:37 2017
Aug 23 14:45:01 stinkbug CRON[7214]: (root) CMD (command -v debian-sa1 > /dev/nu
ll && debian-sa1 1 1)
Aug 23 14:45:17 stinkbug systemd[1]: Started Session 179 of user shs.
Aug 23 14:55:01 stinkbug CRON[7577]: (root) CMD (command -v debian-sa1 > /dev/nu
ll && debian-sa1 1 1)
Aug 23 15:05:01 stinkbug CRON[7582]: (root) CMD (command -v debian-sa1 > /dev/nu
ll && debian-sa1 1 1)
Aug 23 15:08:48 stinkbug systemd[1]: Starting Cleanup of Temporary Directories...
Aug 23 15:08:48 stinkbug systemd-tmpfiles[7584]: [/usr/lib/tmpfiles.d/var.conf:1
4] Duplicate line for path "/var/log", ignoring.
Aug 23 15:08:48 stinkbug systemd[1]: Started Cleanup of Temporary Directories.
Aug 23 15:13:41 stinkbug systemd[1]: Started Session 182 of user shs.
Aug 23 15:14:29 stinkbug systemd[1]: Started Session 183 of user shs.
Aug 23 15:15:01 stinkbug CRON[7828]: (root) CMD (command -v debian-sa1 > /dev/nu
ll && debian-sa1 1 1)
```
这里的输出和使用命令 `tail -f /var/log/syslog` 的输出相似。
### look
这个命令的名字 `look` 可能会让我们以为它和 `watch` 做类似的事情,但其实是不同的。`look` 命令用于搜索以某个特定字符串开头的单词。
```
$ look ecl
eclectic
eclectic's
eclectically
eclecticism
eclecticism's
eclectics
eclipse
eclipse's
eclipsed
eclipses
eclipsing
ecliptic
ecliptic's
```
`look` 命令通常有助于单词的拼写,它使用 `/usr/share/dict/words` 文件,除非你使用如下的命令指定了文件名:
```
$ look esac .bashrc
esac
esac
esac
```
在这种情况下,它的作用就像先用 `grep` 找出匹配行,再接上一个只打印每行第一个单词的 `awk` 命令。
### man -k
`man -k` 命令列出包含指定单词的手册页。它的工作基本上和 `apropos` 命令一样。
```
$ man -k logrotate
dh_installlogrotate (1) - install logrotate config files
logrotate (8) - rotates, compresses, and mails system logs
logrotate.conf (5) - rotates, compresses, and mails system logs
```
### help
当你完全绝望的时候,您可能会试图使用此命令,`help` 命令实际上是显示一个 shell 内置命令的列表。最令人惊讶的是它有相当多的参数变量。你可能会看到这样的东西,然后开始想知道这些内置功能可以为你做些什么:
```
$ help
GNU bash, version 4.4.7(1)-release (i686-pc-linux-gnu)
These shell commands are defined internally. Type `help' to see this list.
Type `help name' to find out more about the function `name'.
Use `info bash' to find out more about the shell in general.
Use `man -k' or `info' to find out more about commands not in this list.
A star (*) next to a name means that the command is disabled.
job_spec [&] history [-c] [-d offset] [n] or hist>
(( expression )) if COMMANDS; then COMMANDS; [ elif C>
. filename [arguments] jobs [-lnprs] [jobspec ...] or jobs >
: kill [-s sigspec | -n signum | -sigs>
[ arg... ] let arg [arg ...]
[[ expression ]] local [option] name[=value] ...
alias [-p] [name[=value] ... ] logout [n]
bg [job_spec ...] mapfile [-d delim] [-n count] [-O or>
bind [-lpsvPSVX] [-m keymap] [-f file> popd [-n] [+N | -N]
break [n] printf [-v var] format [arguments]
builtin [shell-builtin [arg ...]] pushd [-n] [+N | -N | dir]
caller [expr] pwd [-LP]
case WORD in [PATTERN [| PATTERN]...)> read [-ers] [-a array] [-d delim] [->
cd [-L|[-P [-e]] [-@]] [dir] readarray [-n count] [-O origin] [-s>
command [-pVv] command [arg ...] readonly [-aAf] [name[=value] ...] o>
compgen [-abcdefgjksuv] [-o option] [> return [n]
complete [-abcdefgjksuv] [-pr] [-DE] > select NAME [in WORDS ... ;] do COMM>
compopt [-o|+o option] [-DE] [name ..> set [-abefhkmnptuvxBCHP] [-o option->
continue [n] shift [n]
coproc [NAME] command [redirections] shopt [-pqsu] [-o] [optname ...]
declare [-aAfFgilnrtux] [-p] [name[=v> source filename [arguments]
dirs [-clpv] [+N] [-N] suspend [-f]
disown [-h] [-ar] [jobspec ... | pid > test [expr]
echo [-neE] [arg ...] time [-p] pipeline
enable [-a] [-dnps] [-f filename] [na> times
eval [arg ...] trap [-lp] [[arg] signal_spec ...]
exec [-cl] [-a name] [command [argume> true
exit [n] type [-afptP] name [name ...]
export [-fn] [name[=value] ...] or ex> typeset [-aAfFgilnrtux] [-p] name[=v>
false ulimit [-SHabcdefiklmnpqrstuvxPT] [l>
fc [-e ename] [-lnr] [first] [last] o> umask [-p] [-S] [mode]
fg [job_spec] unalias [-a] name [name ...]
for NAME [in WORDS ... ] ; do COMMAND> unset [-f] [-v] [-n] [name ...]
for (( exp1; exp2; exp3 )); do COMMAN> until COMMANDS; do COMMANDS; done
function name { COMMANDS ; } or name > variables - Names and meanings of so>
getopts optstring name [arg] wait [-n] [id ...]
hash [-lr] [-p pathname] [-dt] [name > while COMMANDS; do COMMANDS; done
help [-dms] [pattern ...] { COMMANDS ; }
```
### stat -c
`stat` 命令用于显示文件的大小、所有者、用户组、索引节点号、权限、修改和访问时间等重要的统计信息。这是一个非常有用的命令,可以显示比 `ls -l` 更多的细节。
```
$ stat .bashrc
File: .bashrc
Size: 4048 Blocks: 8 IO Block: 4096 regular file
Device: 806h/2054d Inode: 421481 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ shs) Gid: ( 1000/ shs)
Access: 2017-08-23 15:13:41.781809933 -0400
Modify: 2017-06-21 17:37:11.875157790 -0400
Change: 2017-06-21 17:37:11.899157791 -0400
Birth: -
```
使用 `-c` 选项,您可以指定要查看的字段。例如,如果您只想查看一个文件或一系列文件的文件名和访问权限,则可以这样做:
```
$ stat -c '%n %a' .bashrc
.bashrc 644
```
在此命令中, `%n` 表示每个文件的名称,而 `%a` 表示访问权限。`%u` 表示数字类型的 UID,而 `%U` 表示用户名。
```
$ stat -c '%n %a' bin/*
bin/loop 700
bin/move2nohup 700
bin/nohup.out 600
bin/show_release 700
$ stat -c '%n %a %U' bin/*
bin/loop 700 shs
bin/move2nohup 700 shs
bin/nohup.out 600 root
bin/show_release 700 shs
```
### TAB
如果你没有使用过 tab 键来补全文件名,你真的错过了一个非常有用的命令行技巧。tab 键提供文件名补全功能(包括使用 `cd` 时的目录名)。它会在出现歧义之前尽可能多地补全文件名(所谓歧义,即有多个文件以相同的字母开头)。如果你有一个名为 `bigplans` 的文件,还有一个名为 `bigplans2017` 的文件,就会出现歧义:你将听到一声提示音,然后需要决定是直接按下回车键,还是先输入 `2` 再按下 tab 键来选择第二个文件。
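下面用这两个文件做一个简单的示意(`<TAB>` 表示按下 tab 键,输出仅为演示):

```
$ ls big<TAB><TAB>      # 第一次补全到公共前缀 bigplans 并发出提示音,再按一次列出候选
bigplans  bigplans2017
$ ls bigplans2<TAB>     # 此时不再有歧义,会直接补全为 bigplans2017
```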
(题图:[Micah Elizabeth Scott](https://www.flickr.com/photos/micahdowty/4630801442/in/photolist-84d4Wb-p29iHU-dscgLx-pXKT7a-pXKT7v-azMz3V-azMz7M-4Amp2h-6iyQ51-4nf4VF-5C1gt6-6P4PwG-po6JEA-p6C5Wg-6RcRbH-7GAmbK-dCkRnT-7ETcBp-4Xbhrw-dXrN8w-dXm83Z-dXrNvQ-dXrMZC-dXrMPN-pY4GdS-azMz8X-bfNoF4-azQe61-p1iUtm-87i3vj-7enNsv-6sqvJy-dXm8aD-6smkyX-5CFfGm-dXm8dD-6sqviw-6sqvVU-dXrMVd-6smkXc-dXm7Ug-deuxUg-6smker-Hd15p-6squyf-aGtnxn-6smjRX-5YtTUN-nynqYm-ea5o3c) [(CC BY 2.0)](https://creativecommons.org/licenses/by/2.0/legalcode))
---
via: <https://www.networkworld.com/article/3219684/linux/half-a-dozen-clever-linux-command-line-tricks.html>
作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 译者:[firmianay](https://github.com/firmianay) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
8,821 | 通过开源书籍学习 Ruby 编程 | https://www.ossblog.org/study-ruby-programming-with-open-source-books/ | 2017-08-30T09:52:00 | [
"Ruby",
"书籍"
] | https://linux.cn/article-8821-1.html | 
### 开源的 Ruby 书籍
Ruby 是由 Yukihiro “Matz” Matsumoto 开发的一门通用目的、脚本化、结构化、灵活且完全面向对象的编程语言。它具有一个完全动态类型系统,这意味着它的大多数类型检查是在运行的时候进行,而非编译的时候。因此程序员不必过分担心是整数类型还是字符串类型。Ruby 会自动进行内存管理,它具有许多和 Python、Perl、Lisp、Ada、Eiffel 和 Smalltalk 相同的特性。
Ruby on Rails 框架对于 Ruby 的流行起到了重要作用,它是一个全栈 Web 框架,目前已被用来创建许多受欢迎的应用,包括 Basecamp、GitHub、Shopify、Airbnb、Twitch、SoundCloud、Hulu、Zendesk、Square 和 Highrise 。
Ruby 具有很高的可移植性,在 Linux、Windows、Mac OS X、Cygwin、FreeBSD、NetBSD、OpenBSD、BSD/OS、Solaris、Tru64 UNIX、HP-UX 以及其他许多系统上均可运行。目前,Ruby 在 TIOBE 编程社区排名 12 。
这篇文章有 9 本很优秀的推荐书籍,有针对包括初学者、中级程序员和高级程序员的书籍。当然,所有的书籍都是在开源许可下发布的。
这篇文章是 [OSSBlog 的系列文章开源编程书籍](https://www.ossblog.org/opensourcebooks/)的一部分。
### 《[Ruby Best Practices](https://github.com/practicingruby/rbp-book/tree/gh-pages/pdfs)》

作者: Gregory Brown (328 页)
《Ruby Best Practices》适合那些希望像有经验的 Ruby 专家一样使用 Ruby 的程序员。本书是由 Ruby 项目 Prawn 的开发者所撰写的,它阐述了如何使用 Ruby 设计美丽的 API 和特定领域语言,以及如何利用函数式编程想法和技术,从而简化代码,提高效率。
《Ruby Best Practices》 更多的内容是关于如何使用 Ruby 来解决问题,它阐述的是你应该使用的最佳解决方案。这本书不是针对 Ruby 初学者的,所以对于编程新手也不会有太多帮助。这本书的假想读者应该对 Ruby 的相应技术有一定理解,并且拥有一些使用 Ruby 来开发软件的经验。
这本书分为两部分,前八章组成本书的核心部分,后三章附录作为补充材料。
这本书提供了大量的信息:
* 通过测试驱动代码 - 涉及了大量的测试哲学和技术。使用 mocks 和 stubs
* 通过利用 Ruby 神秘的力量来设计漂亮的 API:灵活的参数处理和代码块
* 利用动态工具包向开发者展示如何构建灵活的界面,实现单对象行为,扩展和修改已有代码,以及程序化地构建类和模块
* 文本处理和文件管理集中于正则表达式,文件、临时文件标准库以及文本处理策略实战
* 函数式编程技术强调模块化的代码组织、记忆化(memoization)、无穷列表以及高阶过程
* 理解代码如何出错以及为什么会出错,阐述如何处理日志记录
* 通过利用 Ruby 的多语言能力削弱文化屏障
* 熟练的项目维护
本书为开源书籍,在 CC NC-SA 许可证下发布。
[在此下载《Ruby Best Practices》](https://github.com/practicingruby/rbp-book/tree/gh-pages/pdfs)。
### 《[I Love Ruby](https://mindaslab.github.io/I-Love-Ruby/)》

作者: Karthikeyan A K (246 页)
《I Love Ruby》以比传统的介绍更高的深度阐述了基本概念和技术。该方法为编写有用、正确、易维护和高效的 Ruby 代码提供了一个坚实的基础。
章节内容涵盖:
* 变量
* 字符串
* 比较和逻辑
* 循环
* 数组
* 哈希和符号
* Ranges
* 函数
* 变量作用域
* 类 & 对象
* Rdoc
* 模块和 Mixins
* 日期和时间
* 文件
* Proc、Lambda 和块
* 多线程
* 异常处理
* 正则表达式
* Gems
* 元编程
在 GNU 自由文档许可证之下,你可以复制、发布和修改本书,1.3 或任何之后版本由自由软件基金会发布。
[点此下载《I Love Ruby》](https://mindaslab.github.io/I-Love-Ruby/)。
### [Programming Ruby – The Pragmatic Programmer’s Guide](http://ruby-doc.com/docs/ProgrammingRuby/)

作者: David Thomas, Andrew Hunt (HTML)
《Programming Ruby – The Pragmatic Programmer’s Guide》是一本 Ruby 编程语言的教程和参考书。使用 Ruby,你将能够写出更好的代码,更加有效率,并且使编程变成更加享受的体验。
内容涵盖以下部分:
* 类、对象和变量
* 容器、块和迭代器
* 标准类型
* 更多方法
* 表达式
* 异常、捕获和抛出
* 模块
* 基本输入和输出
* 线程和进程
* 当遇到麻烦时
* Ruby 和它的世界、Web、Tk 和 微软 Windows
* 扩展 Ruby
* 反射、ObjectSpace 和分布式 Ruby
* 标准库
* 面向对象设计库
* 网络和 Web 库
* 嵌入式文档
* 交互式 Ruby shell
这本书的第一版在开放发布许可证 1.0 版或更新版的许可下发布。本书更新后的第二版涉及 Ruby 1.8 ,并且包括所有可用新库的描述,但是它不是在免费发行许可证下发布的。
[点此下载《Programming Ruby – The Pragmatic Programmer’s Guide》](http://ruby-doc.com/docs/ProgrammingRuby/)。
### 《[Why’s (Poignant) Guide to Ruby](http://poignant.guide/)》

作者:why the lucky stiff (176 页)
《Why’s (poignant) Guide to Ruby》是一本 Ruby 编程语言的介绍书籍。该书包含一些冷幽默,偶尔也会出现一些和主题无关的内容。书中既有在 Ruby 社区内广为人知的笑话,也有各种卡通角色。
本书的内容包括:
* 关于本书
* Kon’nichi wa, Ruby
* 一个快速(希望是无痛苦的)的 Ruby 浏览(伴随卡通角色):Ruby 核心概念的基本介绍
* 代码浮动小叶:评估和值,哈希和列表
* 组成规则的核心部分:case/when、while/until、变量作用域、块、方法、类定义、类属性、对象、模块、IRB 中的内省、dup、self 和 rbconfig 模块
* 中心:元编程、正则表达式
* 当你打算靠近胡须时:send 方法、在已有类中添加新方法
* 天堂演奏
本书在 CC-SA 许可证许可下可用。
[点此下载《Why’s (poignant) Guide to Ruby》](http://poignant.guide/)。
### 《[Ruby Hacking Guide](http://ruby-hacking-guide.github.io/)》

作者: Minero Aoki,由 Vincent Isambart 和 Clifford Escobar Caoille 翻译 (HTML)
通过阅读本书可以达成下面的目标:
* 拥有关于 Ruby 结构的知识
* 掌握一般语言处理的知识
* 收获阅读源代码的技能
本书分为四个部分:
* 对象
* 动态分析
* 评估
* 外部评估
要想从本书中收获最多的东西,需要具备一定 C 语言的知识和基本的面向对象编程知识。本书在 CC-NC-SA 许可证许可下发布。
原书的官方支持网站为 [i.loveruby.net/ja/rhg/](http://i.loveruby.net/ja/rhg/)
[点此下载《Ruby Hacking Guide》](http://ruby-hacking-guide.github.io/)
### 《[The Book Of Ruby](http://www.sapphiresteel.com/ruby-programming/The-Book-Of-Ruby.html)》

作者: How Collingbourne (425 页)
《The Book Of Ruby》是一本免费的 Ruby 编程高级教程。
《The Book Of Ruby》以 PDF 文件格式提供,并且每一个章节的所有例子都伴有可运行的源代码。同时,也有一个介绍来阐述如何在 Steel 或其他任何你喜欢的编辑器/IDE 中运行这些 Ruby 代码。它主要集中于 Ruby 语言的 1.8.x 版本。
本书被分成很小的块。每一个章节介绍一个主题,并且分成几个不同的子话题。每一个编程主题由一个或多个小的自包含、可运行的 Ruby 程序构成。
* 字符串、数字、类和对象 - 获取输入和输出、字符串和内嵌求值、数字和条件测试:if ... then、局部变量和全局变量、类和对象、实例变量、消息、方法、多态性、构造器以及检查对象
* 类等级、属性和类变量 - 超类和子类,超类传参,访问器方法,’set‘ 访问器,属性读写器、超类的方法调用,以及类变量
* 字符串和 Ranges - 用户自定义字符串定界符、反引号等更多
* 数组和哈希 - 展示如何创建一系列对象
* 循环和迭代器 - for 循环、代码块、while 循环、while 修改器以及 until 循环
* 条件语句 - If..Then..Else、And..Or..Not、If..Elsif、unless、if 和 unless 修改器、以及 case 语句
* 方法 - 类方法、类变量、类方法是用来干什么的、Ruby 构造器、单例方法、单例类、重载方法以及更多
* 传递参数和返回值 - 实例方法、类方法、单例方法、返回值、返回多重值、默认参数和多重参数、赋值和参数传递以及更多
* 异常处理 - 涉及 rescue、ensure、else、错误数量、retry 和 raise
* 块、Procs 和 匿名 - 阐述为什么它们对 Ruby 来说很特殊
* 符号 - 符号和字符串、符号和变量以及为什么应该使用符号
* 模块和 Mixins
* 文件和 IO - 打开和关闭文件、文件和目录、复制文件、目录询问、一个关于递归的讨论以及按大小排序
* YAML - 包括嵌套序列,保存 YAML 数据以及更多
* Marshal - 提供一个保存和加载数据的可选择方式
* 正则表达式 - 进行匹配、匹配群组以及更多
* 线程 - 向你展示如何同时运行多个任务
* 调试和测试 - 涉及交互式 Ruby shell(IRB.exe)、debugging 和 单元测试
* Ruby on Rails - 浏览一个创建博客的实践指南
* 动态编程 - 自修改程序、eval 魔法、特殊类型的 eval、添加变量和方法以及更多
本书由 SapphireSteel Software 发布,SapphireSteel Software 是用于 Visual Studio 的 Ruby In Steel 集成开发环境的开发者。读者可以复制和发布本书的文本和代码(免费版)
[点此下载《The Book Of Ruby》](http://www.sapphiresteel.com/ruby-programming/The-Book-Of-Ruby.html)
### 《[The Little Book Of Ruby](http://www.sapphiresteel.com/ruby-programming/The-Book-Of-Ruby.html)》

作者: Huw Collingbourne (87 页)
《The Little Book of Ruby》是一本一步接一步的 Ruby 编程教程。它指导读者浏览 Ruby 的基础。另外,它分享了《The Book of Ruby》一书的内容,但是它旨在作为一个简化的教程来阐述 Ruby 的主要特性。
章节内容涵盖:
* 字符串和方法 - 包括外部评估。详细描述了 Ruby 方法的语法
* 类和对象 - 阐述如何创建一个新类型的对象
* 类等级 - 某个类是另一个类的“特殊类型”,它会“继承”那个类的特性
* 访问器、属性、类变量 - 访问器方法,属性读写器,属性创建变量,调用超类方法以及类变量探索
* 数组 - 学习如何创建一系列对象:数组包括多维数组
* 哈希 - 涉及创建哈希表,为哈希表建立索引以及哈希操作等
* 循环和迭代器 - for 循环、块、while 循环、while 修饰器以及 until 循环
* 条件语句 - If..Then..Else、And..Or..Not、If..Elsif、unless、if 和 unless 修饰器以及 case 语句
* 模块和 Mixins - 包括模块方法、模块作为名字空间模块实例方法、模块或 'mixins'、来自文件的模块和预定义模块
* 保存文件以及更多内容
本书可免费复制和发布,只需保留原始文本且注明版权信息。
[点此下载《The Little Book of Ruby》](http://www.sapphiresteel.com/ruby-programming/The-Book-Of-Ruby.html)
### 《[Kestrels, Quirky Birds, and Hopeless Egocentricity](https://leanpub.com/combinators)》

作者: Reg “raganwald” Braithwaite (123 页)
《Kestrels, Quirky Birds, and Hopeless Egocentricity》是通过收集 “Raganwald” Braithwaite 的关于组合逻辑、Method Combinators 以及 Ruby 元编程的系列文章而形成的一本方便的电子书。
本书提供了通过使用 Ruby 编程语言来应用组合逻辑的一个基本介绍。组合逻辑是一种数学表示方法,它足够强大,从而用于解决集合论问题以及计算中的问题。
在这本书中,读者会会探讨到一些标准的 Combinators,并且对于每一个 Combinators,书中都用 Ruby 编程语言写程序探讨了它的一些结果。在组合逻辑上,Combinators 之间组合并相互改变,书中的 Ruby 例子注重组合和修改 Ruby 代码。通过像 K Combinator 和 .tap 方法这样的简单例子,本书阐述了元编程的理念和递归 Combinators 。
本书在 MIT 许可证许可下发布。
[点此下载《Kestrels, Quirky Birds, and Hopeless Egocentricity》](https://leanpub.com/combinators)
### 《[Ruby Programming](https://en.wikibooks.org/wiki/Ruby_Programming)》

作者: Wikibooks.org (261 页)
Ruby 是一种解释性、面向对象的编程语言。
本书被分为几个部分,从而方便按顺序阅读。
* 开始 - 向读者展示如何在其中一个操作系统环境中安装并开始使用 Ruby
* Ruby 基础 - 阐述 Ruby 语法的主要特性。它涵盖了字符串、编码、写方法、类和对象以及异常等内容
* Ruby 语义参考
* 内建类
* 可用模块,涵盖一些标准库
* 中级 Ruby 涉及一些稍微高级的话题
本书在 CC-SA 3.0 本地化许可证许可下发布。
[点此下载《Ruby Programming》](https://en.wikibooks.org/wiki/Ruby_Programming)
---
无特定顺序,我将在结束前推荐一些没有在开源许可证下发布但可以免费下载的 Ruby 编程书籍。
* [Mr. Neighborly 的 Humble Little Ruby Book](http://www.humblelittlerubybook.com/) – 一个易读易学的 Ruby 完全指南。
* [Introduction to Programming with Ruby](https://launchschool.com/books/ruby) – 学习编程的基础知识,一切从零开始。
* [Object Oriented Programming with Ruby](https://launchschool.com/books/oo_ruby) – 学习编程的基础知识,一切从零开始。
* [Core Ruby Tools](https://launchschool.com/books/core_ruby_tools) – 对 Ruby 的四个核心工具 Gems、Ruby Version Managers、Bundler 和 Rake 进行了简短的概述。
* [Learn Ruby the Hard Way, 3rd Edition](https://learnrubythehardway.org/book/) – 一本适合初学者的入门书籍。
* [Learn to Program](https://pine.fm/LearnToProgram) – 来自 Chris Pine。
* [Ruby Essentials](http://www.techotopia.com/index.php/Ruby_Essentials) – 一个准确且简单易学的 Ruby 学习指南。
---
via: <https://www.ossblog.org/study-ruby-programming-with-open-source-books/>
作者:[Steve Emms](https://www.ossblog.org/author/steve/) 译者:[ucasFL](https://github.com/ucasFL) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK |  |
By Gregory Brown (328 pages)
Ruby Best Practices is for programmers who want to use Ruby as experienced Rubyists do. Written by the developer of the Ruby project Prawn, this book explains how to design beautiful APIs and domain-specific languages with Ruby, as well as how to work with functional programming ideas and techniques that can simplify your code and make you more productive.
Ruby Best Practices is much more about how to go about solving problems in Ruby than it is about the exact solution you should use. The book is not targeted at the Ruby beginner, and will be of little use to someone new to programming. The book assumes a reasonable technical understanding of Ruby, and some experience in developing software with it.
The book is split into two parts, with eight chapters forming its core and three appendixes included as supplementary material.
This book provides a wealth of information on:
- Driving Code Through Tests – covers a number testing philosophies and techniques. Use mocks and stubs
- Designing Beautiful APIs with special focus on Ruby’s secret powers: Flexible argument processing and code blocks
- Mastering the Dynamic Toolkit showing developers how to build flexible interfaces, implementing per-object behaviour, extending and modifying pre-existing code, and building classes and modules programmatically
- Text Processing and File Management focusing on regular expressions, working with files, the tempfile standard library, and text-processing strategies
- Functional Programming Techniques highlighting modular code organisation, memoization, infinite lists, and higher-order procedures
- Understand how and why things can go wrong explaining how to work with logger
- Reduce Cultural Barriers by leveraging Ruby’s multilingual capabilities
- Skillful Project Maintenance
The book is open source, released under the Creative Commons NC-SA license. |
 |
By Karthikeyan A K (246 pages)
I Love Ruby explains fundamental concepts and techniques in greater depth than traditional introductions. This approach provides a solid foundation for writing useful, correct, maintainable, and efficient Ruby code.
Chapters cover:
- Variables
- Strings
- Comparison and Logic
- Loops
- Arrays
- Hashes and Symbols
- Ranges
- Functions
- Variable Scope
- Classes & Objects
- Rdoc
- Modules and Mixins
- Date and Time
- Files
- Proc, Lambdas and Blocks
- Multi Threading
- Exception Handling
- Regular Expressions
- Gems
- Meta Programming
Permission is granted to copy, distribute and/or modify the book under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation. |
 |
By David Thomas, Andrew Hunt (HTML)
Programming Ruby is a tutorial and reference for the Ruby programming language. Use Ruby, and you will write better code, be more productive, and make programming a more enjoyable experience.
Topics covered include:
- Classes, Objects and Variables
- Containers, Blocks and Iterators
- Standard Types
- More about Methods
- Expressions
- Exceptions, Catch and Throw
- Modules
- Basic Input and Output
- Threads and Processes
- When Trouble Strikes
- Ruby and its World, the Web, Tk, and Microsoft Windows
- Extending Ruby
- Reflection, ObjectSpace and Distributed Ruby
- Standard Library
- Object-Oriented Design Libraries
- Network and Web Libraries
- Embedded Documentation
- Interactive Ruby Shell
The first edition of this book is released under the Open Publication License, v1.0 or later. An updated Second Edition of this book, covering Ruby 1.8 and including descriptions of all the new libraries is available, but is not released under a freely distributable license. |
 |
By why the lucky stiff (176 pages)
Why’s (poignant) Guide to Ruby is an introductory book to the Ruby programming language. The book includes some wacky humour and goes off-topic on occasions. The book includes jokes that are known within the Ruby community as well as cartoon characters.
The contents of the book:
- About this book
- Kon’nichi wa, Ruby
- A Quick (and Hopefully Painless) Ride Through Ruby (with Cartoon Foxes): basic introduction to central Ruby concepts
- Floating Little Leaves of Code: evaluation and values, hashes and lists
- Them What Make the Rules and Them What Live the Dream: case/when, while/until, variable scope, blocks, methods, class definitions, class attributes, objects, modules, introspection in IRB, dup, self, rbconfig module
- Downtown: metaprogramming, regular expressions
- When You Wish Upon a Beard: send method, new methods in existing classes
- Heaven’s Harp
This book is made available under the Creative Commons Attribution-ShareAlike License. |
 |
By Minero Aoki – translated by Vincent Isambart and Clifford Escobar Caoille (HTML)
This book has the following goals:
- To have knowledge of the structure of Ruby
- To gain knowledge about language processing systems in general
- To acquire skills in reading source code
This book has four main parts:
- Objects
- Syntactic analysis
- Evaluation
- Peripheral around the evaluator
Knowledge about the C language and the basics of object-oriented programming is needed to get the most from the book. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike2.5 license.
The official support site of the original book is [i.loveruby.net/ja/rhg/](http://i.loveruby.net/ja/rhg/) |
 |
By How Collingbourne (425 pages)
The Book Of Ruby is a free in-depth tutorial to Ruby programming.
The Book Of Ruby is provided in the form of a PDF document in which each chapter is accompanied by ready-to-run source code for all the examples. There is also an Introduction which explains how to use the source code in Ruby In Steel or any other editor/IDE of your choice plus appendices and an index. It concentrates principally on version 1.8.x of the Ruby language.
The book is divided up into bite-sized chunks. Each chapter introduces a theme which is subdivided into sub-topics. Each programming topic is accompanied by one or more small self-contained, ready-to-run Ruby programs.
- Strings, Numbers, Classes, and Objects – getting and putting input, strings and embedded evaluation, numbers, testing a condition: if … then, local and global variables, classes and objects, instance variables, messages, methods and polymorphism, constructors, and inspecting objects
- Class Hierarchies, Attributes, and Class Variables – superclasses and subclasses, passing arguments to the superclass, accessor methods, ‘set’ accessors, attribute readers and writers, calling methods of a superclass, and class variables
- Strings and Ranges – user-defined string delimiters, backquotes, and more
- Arrays and Hashes – shows how to create a list of objects
- Loops and Iterators – for loops, blocks, while loops, while modifiers, and until loops
- Conditional Statements – If..Then..Else, And..Or..Not, If..Elsif, unless, if and unless modifiers, and case statements
- Methods – class methods, class variables, what are class methods for, ruby constructors, singleton methods, singleton classes, overriding methods and more
- Passing Arguments and Returning Values – instance methods, class methods, singleton methods, returning values, returning multiple values, default and multiple arguments, assignment and parameter passing, and more
- Exception Handling – covers rescue, ensure, else, error numbers, retry, and raise
- Blocks, Procs, and Lambdas – explains why they are special to Ruby
- Symbols – symbols and strings, symbols and variables, and why symbols should be used
- Modules and Mixins
- Files and IO – opening and closing files, files and directories, copying files, directory enquiries, a discursion into recursion, and sorting by size
- YAML – includes nested sequences, saving YAML data and more
- Marshal – offers an alternative way of saving and loading data
- Regular Expressions – making matches, match groups, and more
- Threads – shows you how to run more than one task at a time
- Debugging and Testing – covers the interactive ruby shell (IRB.exe), debugging, and unit testing
- Ruby on Rails – goes through a hands-on guide to create a blog
- Dynamic Programming – self-modifying programs, eval magic, special types of eval, adding variables and methods, and more
The book is distributed by SapphireSteel Software – developers of the Ruby In Steel IDE for Visual Studio. Readers may copy or distribute the text and programs of The Book Of Ruby (free edition). |
 |
By Huw Collingbourne (87 pages)
The Little Book of Ruby is a step-by-step tutorial to programming in Ruby. It guides the reader through the fundamentals of Ruby. It shares content with The Book of Ruby, but aims to be a simpler guide to the main features of Ruby.
Chapters cover:
- Strings and Methods – including embedded evaluation. Details the syntax to Ruby methods
- Classes and Objects – explains how to create new types of objects
- Class Hierarchies – a class which is a ‘special type ’ of some other class simply ‘inherits’ the features of that other class
- Accessors, Attributes, Class Variables – accessor methods, attribute readers and writers, attributes create variables, calling methods of a superclass, and class variables are explored
- Arrays – learn how to create a list of objects: arrays including multi-dimensional arrays,
- Hashes – create, indexing into a hash, and hash operations are covered
- Loops and Iterators – for loops, blocks, while loops, while modifiers, and until loops
- Conditional Statements – If..Then..Else, And..Or..Not, If..Elsif, unless, if and unless modifiers, and case statements
- Modules and Mixins – including module methods, modules as namespaces, module ‘instance methods’, included modules or ‘mixins’, including modules from files, and pre-defined modules
- Saving Files, Moving on..
This book can be copied and distributed freely as long as the text is not modified and the copyright notice is retained. |
 |
By Reg “raganwald” Braithwaite (123 pages)
Kestrels, Quirky Birds, and Hopeless Egocentricity collects Reg “Raganwald” Braithwaite’s series of essays about Combinatory Logic, Method Combinators, and Ruby Meta-Programing into a convenient e-book.
The book provides a gentle introduction to Combinatory Logic, applied using the Ruby programming language. Combinatory Logic is a mathematical notation that is powerful enough to handle set theory and issues in computability.
In this book, the reader meets some of the standard combinators, and for each one the book explores some of its ramifications when writing programs using the Ruby programming language. In Combinatory Logic, combinators combine and alter each other, and the book’s Ruby examples focus on combining and altering Ruby code. From simple examples like the K Combinator and Ruby’s .tap method, the books works up to meta-programming with aspects and recursive combinators.
The book is published under the MIT license. |
 |
By Wikibooks.org (261 pages)
Ruby is an interpreted, object-oriented programming language.
The book is broken down into several sections and is intended to be read sequentially.
- Getting started – shows users how to install and begin using Ruby in an environment
- Basic Ruby – explains the main features of the syntax of Ruby. It covers, amongst other things, strings, encoding, writing methods, classes and objects, and exceptions
- Ruby Semantic reference
- Built in classes
- Available modules covers some of the standard library
- Intermediate Ruby covers a selection of slightly more advanced topics
This book is published under the Creative Commons Attribution-ShareAlike 3.0 Unported license. | |
8,822 | 在标准建立之前,软件所存在的问题 | https://opensource.com/article/17/7/software-standards | 2017-08-30T08:50:15 | [
"标准",
"开源"
] | https://linux.cn/article-8822-1.html |
>
> 开源项目需要认真对待交付成果中所包含的标准
>
>
>

无论以何种标准来衡量,开源软件作为传统的专有软件的替代品而崛起,取得了不错的效果。 如今,仅 Github 中就有着数以千万计的代码仓库,其中重要项目的数量也在快速增长。在本文撰写的时候,[Apache 软件基金会](https://www.apache.org/) 开展了超过 [300 个项目](https://projects.apache.org/), [Linux 基金会](https://www.linuxfoundation.org/) 支持的项目也超过了 60 个。与此同时,[OpenStack 基金会](https://www.linuxfoundation.org/projects/directory) 在 180 多个国家拥有超过 60,000 名成员。
这样说来,这种情景下有什么问题么?
开源软件在面对用户的众多需求时,由于缺少足够的意识,而无法独自去解决全部需求。 更糟糕的是,许多开源软件社区的成员(业务主管以及开发者)对利用最合适的工具解决这一问题并不感兴趣。
让我们开始找出那些有待解决的问题,看看这些问题在过去是如何被处理的。
问题存在于:通常许多项目都在试图解决一个大问题当中重复的一小部分,而客户希望能够在竞争产品之间做出选择,不满意的话还能够轻松选择其他产品。但是现在看来都是不可能的,在这个问题被解决之前它将会阻碍开源软件的使用。
这已经不是一个新的问题或者没有传统解决方案的问题了。在一个半世纪以来,用户期望有更多的选择和自由来变换厂商,而这一直是通过标准的制定来实现的。在现实当中,你可以对螺丝钉、灯泡、轮胎、延长线的厂商做出无数多的选择,甚至于对独特形状的红酒杯也可以专注选择。因为标准为这里的每一件物品都提供了物理规格。而在健康和安全领域,我们的幸福也依赖于成千上万的标准,这些标准是由私营行业制定的,以确保在最大化的竞争中能够有完美的结果。
随着信息与通信技术(ICT)的发展,以同样类似的方式形成了一些重要的组织机构,例如:国际电信联盟(ITU)、国际电工委员会(IEC),以及电气与电子工程师学会标准协会(IEEE-SA)。近千家财团遵循 ICT 标准来进行开发、推广以及测试。
虽然并非是所有的 ICT 标准都形成了无缝对接,但如今在我们生活的科技世界里,成千上万的基本标准履行着这一承诺,这些标准包含了计算机、移动设备、Wi-Fi 路由器以及其他一切依赖电力来运行的东西。
关键的一点,在很长的一段时间里,由于客户对拥有种类丰富的产品、避免受制于供应商,并且享受全球范围内的服务的渴望,逐渐演变出了这一体系。
现在让我们来看看开源软件是如何演进的。
好消息是伟大的软件已经被创造出来了。坏消息是对于像云计算和虚拟化网络这样的关键领域,没有任何单独的基金会在开发整个堆栈。取而代之的是,单个项目开发单独的一层或者多层,依靠需要时才建立的善意的合作,这些项目最终堆叠成栈。当这一过程运行良好时,结果是好的,但也有可能形成与传统的专有产品同样的锁定。相反,当这一过程运行不良时,坏的结果就是它会浪费开发商、社区成员的时间和努力,同时也会辜负客户的期望。
最明确的解决方法的创建标准,允许客户避免被锁定,鼓励多个解决方案通过对附加服务和功能进行有益的竞争。当然也存在着例外,但这不是开源世界正在发生的情况。
这背后的主要原因在于,开源社区的主流观点是:标准意味着限制、落后和多余。对于一个完整的堆栈中的单独一层来说,可能就是这样。但客户想要选择的自由、激烈的竞争,这就导致回到了之前的坏结果上,尽管多个厂商提供相似的集成堆栈,但却被锁定在一个技术上。
在 Yaron Haviv 于 2017 年 6 月 14 日所写的 “[除非我们协作,否则我们将被困在专有云上](https://www.enterprisetech.com/2017/06/14/well-enslaved-proprietary-clouds-unless-collaborate/)” 一文中,就对这一问题有着很好的描述。
>
> 在今天的开源生态系统当中存在一个问题,跨项目整合并不普遍。开源项目能够进行大型合作,构建出分层的模块化的架构,比如说 Linux — 已经一次又一次的证明了它的成功。但是与 Linux 的意识形成鲜明对比的就是如今许多开源社区的日常状态。
>
>
> 举个例子:大数据生态系统,就是依赖众多共享组件或通用 API 和层的堆叠来实现的。这一过程同样缺少标准的线路协议,同时,每个处理框架(看看 Spark、Presto 和 Flink)都拥有独立的数据源 API。
>
>
> 这种合作的缺乏正在造成担忧。缺少了合作,项目之间就无法相互替换,结果对客户产生了负面影响。因为每个人都不得不从头开始,重新开发,这基本上就锁定了客户,减缓了项目的发展。
>
>
>
Haviv 提出了两种解决方法:
* 项目之间更紧密的合作,联合多个项目消除重叠的部分,使堆栈内的整合更加密切;
* 开发 API ,使切换更加容易。
这两种方法都说得通。但除非情况有所改变,否则我们只会看到第一种方法,而那正是可能形成技术锁定的地方。其结果将是业界重演过去 WinTel 世界或者苹果历史上的局面:以牺牲竞争产品的选择为代价,换取紧密的整合。

如果开源项目继续忽视对标准的需求(而标准正是让竞争得以存在于各层之内、甚至各个堆栈之间的前提),同样的事情也可能、而且很可能会发生在新的开源世界里。而以目前的状况来看,标准几乎没有被建立起来的可能。

原因在于,虽然有些项目口头上说先开发软件、后制定标准,但并没有真正落实标准的意愿。更主要的原因是,大多数的商业人士和开发者对标准知之甚少。不幸的是,这种情况不难理解,而且很可能会变得更糟。原因有以下几个:
* 大学几乎很少对标准进行培训;
* 过去拥有专业的标准人员的公司遣散了这些部门,现在的部署工程师接受标准组织的培训又远远不够;
* 在代表雇主参与标准工作方面积累专业知识,几乎没有什么职业价值;
* 参与标准活动的工程师可能不得不以他们认为最佳的技术方案为代价,去推进雇主的战略利益;
* 在许多公司内部,专业的标准人员与开源开发者之间鲜有交流;
* 许多软件工程师将标准视为与 FOSS 定义的“四大自由”有着直接冲突。
现在,让我们来看看在开源界正在发生什么:
* 今天大多数的软件工程师鲜有不知道开源的;
* 工程师们每天都在享受着开源工具所带来的便利;
* 许多令人激动的最前沿的工作正是在开源项目中完成的;
* 在热门的开源领域,有经验的开发者广受欢迎,并获得了大量实质性的奖励;
* 在备受好评的项目中,开发者在软件开发过程中享受到了空前的自主权;
* 事实上,几乎所有的大型 ICT 公司都参与了多个开源项目,最高级别的成员当中,通常每个公司每年的合并成本(会费加上投入的雇员)都超过了一百万美元。
如果脱离实际的话,这个比喻似乎暗示着标准是走向 ICT 历史的灰烬。但现实却有很大差别。一个被忽视的事实是,开源开发是比常人所认为的更为娇嫩的花朵。这样比喻的原因是:
* 项目的主要支持者们可能撤出项目(有时确实发生过),这将导致项目的失败;
* 社区内的个性和文化冲突会导致社区的瓦解;
* 重要项目更加紧密的整合能力有待观察;
* 专有化的博弈手段有时会削弱资金雄厚的开源项目,在某些情况下甚至直接导致其失败;
* 随着时间的推移,可能个别公司认为其开源策略没能给他们带来预期的回报;
* 少数几个广为人知的关键开源项目失败案例,就可能导致厂商收缩对新项目的投资,并让客户对采用开源方案心存顾虑。
奇怪的是,最积极解决这些问题的协作组织是标准组织,部分原因是,他们已经感受到了开源协作崛起所带来的威胁。他们的回应包括:更新知识产权政策,允许各种类型的协作在同一框架下进行,其中包括开发开源工具、在标准中纳入开源代码,以及为标准开发开源参考实现等多种工作形式。
结果就是,这些标准组织正在把自己重塑为一个对开发方式保持中立的场所,为完整方案的开发提供平台。这些方案可以包含市场所需要的任何类型的协作成果,包括混合型的工作产物。随着这一过程的继续,厂商们很可能会把一些原本会流向开源基金会的计划,转而放到标准组织中去推进。
重要的是,由于这些原因,开源项目必须开始认真对待在其交付成果中纳入标准,或者与合适的标准制定者合作,共同提供完整的方案。这样不仅会带来更多的产品选择、更少的客户锁定,还会让客户对开源方案更有信心,从而带来对开源产品和服务的更大需求。

倘若这一切没有发生,那将是极大的遗憾,因为损失最大的正是开源事业本身。现在要靠各个项目自己来决定:是满足市场的需求,还是甘心接受一个影响力日渐下降而非持续成功的未来。
*本文源自 ConsortiumInfo.org的 [Standards Blog](http://www.consortiuminfo.org/standardsblog/article.php?story=20170616133415179),并已获得出版许可*
(题图:opensource.com)
---
作者简介:
Andy Updegrove - Andy 帮助 CEO、管理团队及其投资者建立成功的组织。自 1979 年起,他就作为一名先驱,为高科技公司提供兼具商业头脑的法律顾问与战略建议。在全球舞台上,他参与帮助组建和推动了超过 135 个全球性的标准制定、开源及倡导联盟的工作,其中包括一些世界上最大、最具影响力的标准制定机构。
---
via: <https://opensource.com/article/17/7/software-standards>
作者:[Andy Updegrove](https://opensource.com/users/andrewupdegrove) 译者:[softpaopao](https://github.com/softpaopao) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | By any measure, the rise of open source software as an alternative to the old, proprietary ways has been remarkable. Today, there are tens of millions of libraries hosted at GitHub alone, and the number of major projects is growing rapidly. As of this writing, the [Apache Software Foundation](https://www.apache.org/) hosts over [300 projects](https://projects.apache.org/), while the [Linux Foundation](https://www.linuxfoundation.org/) supports over 60. Meanwhile, the more narrowly focused [OpenStack Foundation](https://www.linuxfoundation.org/projects/directory) boasts 60,000 members living in more than 180 countries.
So, what could possibly be wrong with this picture?
What's missing is enough awareness that, while open source software can meet the great majority of user demands, standing alone it can't meet all of them. Worse yet, too many members of the open source community (business leads as well as developers) have no interest in making use of the most appropriate tools available to close the gap.
Let's start by identifying the problem that needs to be solved, and then see how that problem used to be solved in the past.
The problem is that there are often many projects trying to solve the same small piece of a larger problem. Customers want to be able to have a choice among competing products and to easily switch among products if they're not satisfied. That's not possible right now, and until this problem is solved, it will hold back open source adoption.
It's also not a new problem or a problem without traditional solutions. Over the course of a century and a half, user expectations of broad choice and freedom to switch vendors were satisfied through the development of standards. In the physical world, you can choose between myriad vendors of screws, light bulbs, tires, extension cords, and even of the proper shape wine glass for the pour of your choice, because standards provide the physical specifications for each of these goods. In the world of health and safety, our well-being relies on thousands of standards developed by the private sector that ensure proper results while maximizing competition.
When information and communications technology (ICT) came along, the same approach was taken with the formation of major organizations such as the International Telecommunication Union (ITU), International Electrotechnical Commission (IEC), and the Standards Association of the Institute of Electrical and Electronics Engineers (IEEE-SA). Close to 1,000 consortia followed to develop, promote, or test compliance with ICT standards.
While not all ICT standards resulted in seamless interoperability, the technology world we live in today exists courtesy of the tens of thousands of essential standards that fulfill that promise, as implemented in computers, mobile devices, Wi-Fi routers, and indeed everything else that runs on electricity.
The point here is that, over a very long time, a system evolved that could meet customers' desires to have broad product offerings, avoid vendor lock-in, and enjoy services on a global basis.
Now let's look at how open software is evolving.
The good news is that great software is being created. The bad news is that in many key areas, like cloud computing and network virtualization, no single foundation is developing the entire stack. Instead, discrete projects develop individual layers, or parts of layers, and then rely on real-time, goodwill-based collaboration up and down the stack among peer projects. When this process works well, the results are good but have the potential to create lock-in the same way that traditional, proprietary products could. When the process works badly, it can result in much wasted time and effort for vendors and community members, as well as disappointed customer expectations.
The clear way to provide a solution is to create standards that allow customers to avoid lock-in, along with encouraging the availability of multiple solutions competing through value-added features and services. But, with rare exceptions, that's not what's happening in the world of open source.
The main reason behind this is the prevailing opinion in the open source community is that standards are limiting, irrelevant, and unnecessary. Within a single, well-integrated stack, that may be the case. But for customers that want freedom of choice and ongoing, robust competition, the result could be a return to the bad old days of being locked into a technology, albeit with multiple vendors offering similarly integrated stacks.
A good description of the problem can be found in a June 14, 2017, article written by Yaron Haviv, "[We'll Be Enslaved to Proprietary Clouds Unless We Collaborate](https://www.enterprisetech.com/2017/06/14/well-enslaved-proprietary-clouds-unless-collaborate/)":
Cross-project integration is not exactly prevalent in today's open source ecosystem, and it's a problem. Open source projects that enable large-scale collaboration and are built on a layered and modular architecture—such as Linux—have proven their success time and again. But the Linux ideology stands in stark contrast to the general state of much of today's open source community.
Case in point: big data ecosystems, where numerous overlapping implementations rarely share components or use common APIs and layers. They also tend to lack standard wire protocols, and each processing framework (think Spark, Presto, and Flink) has its own data source API.
This lack of collaboration is causing angst. Without it, projects are not interchangeable, resulting in negative repercussions for customers by essentially locking them in and slowing down the evolution of projects because each one has to start from scratch and re-invent the wheel.
Haviv proposes two ways to resolve the situation:
- Closer collaboration among projects, leading to consolidation, the elimination of overlaps between multiple projects, and tighter integration within a stack;
- The development of APIs to make switching easier.
Both these approaches make sense. But unless something changes, we'll see only the first, and that's where the prospect for lock-in is found. The result would be where the industry found itself in the WinTel world of the past or throughout Apple's history, where competing product choice is sacrificed in exchange for tight integration.
The same thing can, and likely will, happen in the new open source world if open source projects continue to ignore the need for standards so that competition can exist within layers, and even between stacks. Where things stand today, there's almost no chance of that happening.
The reason is that while some projects pay lip service to develop software first and standards later, there is no real interest in following through with the standards. The main reason is that most business people and developers don't know much about standards. Unfortunately, that's all too understandable and likely to get worse. The reasons are several:
- Universities dedicate almost no training time to standards;
- Companies that used to have staffs of standards professionals have disbanded those departments and now deploy engineers with far less training to participate in standards organizations;
- There is little career value in establishing expertise in representing an employer in standards work;
- Engineers participating in standards activities may be required to further the strategic interests of their employer at the cost of what they believe to be the best technical solution;
- There is little to no communication between open source developers and standards professionals within many companies;
- Many software engineers view standards as being in direct conflict with the "four freedoms" underlying the FOSS definition.
Now let's look at what's going on in the world of open source:
- It would be difficult for any software engineer today to not know about open source;
- It's a tool engineers are comfortable with and often use on a daily basis;
- Much of the sexiest, most cutting-edge work is being done in open source projects;
- Developers with expertise in hot open source areas are much sought after and command substantial compensation premiums;
- Developers enjoy unprecedented autonomy in developing software within well-respected projects;
- Virtually all of the major ICT companies participate in multiple open source projects, often with a combined cost (dues plus dedicated employees) of over $1 million per year per company at the highest membership level.
When viewed in a vacuum, this comparison would seem to indicate that standards are headed for the ash heap of history in ICT. But the reality is more nuanced. It also ignores the reality that open source development can be a more delicate flower than many might assume. The reasons include the following:
- Major supporters of projects can decommit (and sometimes have done so), leading to the failure of a project;
- Personality and cultural conflicts within communities can lead to disruptions;
- The ability of key projects to more tightly integrate remains to be seen;
- Proprietary game playing has sometimes undercut, and in some cases caused the failure of, highly funded open source projects;
- Over time, individual companies may decide that their open source strategies have failed to bring the rewards they anticipated;
- A few well-publicized failures of key open source projects could lead vendors to back off from investing in new projects and persuade customers to be wary of committing to open source solutions.
Curiously enough, the collaborative entities that are addressing these issues most aggressively are standards organizations, in part because they feel (rightly) threatened by the rise of open source collaboration. Their responses include upgrading their intellectual property rights policies to allow all types of collaboration to occur under the same umbrella, including development of open source tools, inclusion of open source code in standards, and development of open source reference implementations of standards, among other types of work projects.
The result is that standards organizations are retooling themselves to provide an approach-neutral venue for the development of complete solutions. Those solutions can incorporate whatever type of collaborative work product, or hybrid work product, the marketplace may need. As this process continues, it is likely that vendors will begin to pursue some initiatives within standards organizations that might otherwise have made their way to open source foundations.
For all these reasons, it's crucial that open source projects get serious about including standards in their deliverables or otherwise partner with appropriate standards-developers to jointly provide complete solutions. The result will not only be greater product choice and less customer lock-in, but far greater confidence by customers in open source solutions, and therefore far greater demand for and use of open source products and services.
If that doesn't happen it will be a great shame, because the open source cause has the most to lose. It's up to the projects now to decide whether to give the market what it wants and needs or reconcile themselves to a future of decreasing influence, rather than continuing success.
*This was originally published on ConsortiumInfo.org's Standards Blog and is republished with permission.*
8,824 | 听说过时间表,但是你是否知道“哈希表” | http://www.zeroequalsfalse.press/2017/02/20/hashtables/ | 2017-08-31T08:00:00 | [
"哈希"
] | https://linux.cn/article-8824-1.html | 
探索<ruby> 哈希表 <rt> hash table </rt></ruby>的世界并理解其底层的机制是非常有趣的,并且将会受益匪浅。所以,让我们了解它,并从头开始探索吧。
哈希表是许多现代软件应用程序中一种常见的数据结构。它提供了类似字典的功能,使你能够在其中执行插入、查找和删除等操作。这么说吧,比如我想找出“苹果”的定义是什么,并且我知道该定义被存储在了我定义的哈希表中。我将查询我的哈希表来得到定义。它在哈希表内的记录看起来可能像:`"苹果" => "一种拥有水果之王之称的绿色水果"`。这里,“苹果”是我的关键字,而“一种拥有水果之王之称的水果”是与之关联的值。
还有一个例子可以让我们更清楚,哈希表的内容如下:
```
"面包" => "固体"
"水" => "液体"
"汤" => "液体"
"玉米片" => "固体"
```
我想知道*面包*是固体还是液体,所以我将查询哈希表来获取与之相关的值,该哈希表将返回“固体”给我。现在,我们大致了解了哈希表是如何工作的。使用哈希表需要注意的另一个重要概念是每一个关键字都是唯一的。如果到了明天,我拥有一个面包奶昔(它是液体),那么我们需要更新哈希表,把“固体”改为“液体”来反映哈希表的改变。所以,我们需要添加一条记录到字典中:关键字为“面包”,对应的值为“液体”。你能发现下面的表发生了什么变化吗?(LCTT 译注:不知道这个“面包奶昔”是一种什么食物,大约是一种面包做的奶昔,总之你就理解成作者把液体的“面包奶昔”当成一种面包吧。)
```
"面包" => "液体"
"水" => "液体"
"汤" => "液体"
"玉米片" => "固体"
```
没错,“面包”对应的值被更新为了“液体”。
**关键字是唯一的**,我的面包不能既是液体又是固体。但是,是什么使得该数据结构与其他数据结构相比如此特殊呢?为什么不使用一个[数组](https://en.wikipedia.org/wiki/Array_data_type)来代替呢?它取决于问题的本质。对于某一个特定的问题,使用数组来描述可能会更好,因此,我们需要注意的关键点就是,**我们应该选择最适合问题的数据结构**。例如,如果你需要做的只是存储一个简单的杂货列表,那么使用数组会很适合。考虑下面的两个问题,两个问题的本质完全不同。
1. 我需要一个水果的列表
2. 我需要一个水果的列表以及各种水果的价格(每千克)
正如你在下面所看到的,用数组来存储水果的列表可能是更好的选择。但是,用哈希表来存储每一种水果的价格看起来是更好的选择。
```
//示例数组
["苹果", "桔子", "梨子", "葡萄"]
//示例哈希表
{ "苹果" : 3.05,
"桔子" : 5.5,
"梨子" : 8.4,
"葡萄" : 12.4
}
```
实际上,有许多的机会需要[使用](https://en.wikipedia.org/wiki/Hash_table#Uses)哈希表。
### 时间以及它对你的意义
[这里有篇对时间复杂度和空间复杂度的一个复习](https://www.hackerearth.com/practice/basic-programming/complexity-analysis/time-and-space-complexity/tutorial/)。
平均情况下,在哈希表中进行搜索、插入和删除记录的时间复杂度均为 `O(1)` 。实际上,`O(1)` 读作“大 O 1”,表示常数时间。这意味着执行每一种操作的运行时间不依赖于数据集中数据的数量。我可以保证,查找、插入和删除项目均只花费常数时间,“当且仅当”哈希表的实现方式正确时。如果实现不正确,可能需要花费很慢的 `O(n)` 时间,尤其是当所有的数据都映射到了哈希表中的同一位置/点。
### 构建一个好的哈希表
到目前为止,我们已经知道如何使用哈希表了,但是如果我们想**构建**一个哈希表呢?本质上我们需要做的就是把一个字符串(比如 “狗”)映射到一个哈希代码(一个生成的数),即映射到一个数组的索引。你可能会问,为什么不直接使用索引呢?为什么要这么麻烦呢?因为通过这种方式我们可以直接查询 “狗” 并立即得到 “狗” 所在的位置,`String name = Array["狗"] // 名字叫拉斯`。而使用索引查询名称时,可能出现的情况是我们不知道名称所在的索引。比如,`String name = Array[10] // 该名字现在叫鲍勃` - 那不是我的狗的名字。这就是把一个字符串映射到一个哈希代码的益处(对应于一个数组的索引而言)。我们可以通过使用模运算符和哈希表的大小来计算出数组的索引:`index = hash_code % table_size`。
我们需要避免的另一种情况是两个关键字映射到同一个索引,这叫做**哈希碰撞**,如果哈希函数实现的不好,这很容易发生。实际上,每一个输入比输出多的哈希函数都有可能发生碰撞。通过下面的同一个函数的两个输出来展示一个简单的碰撞:
```
int cat_idx = hashCode("猫") % table_size; //cat_idx 现在等于 1
int dog_idx = hashCode("狗") % table_size; //dog_idx 也等于 1
```
我们可以看到,现在两个数组的索引均是 1 。这样将会出现两个值相互覆盖,因为它们被写到了相同的索引中。如果我们查找 “猫” 的值,将会返回 “拉斯” ,但是这并不是我们想要的。有许多可以[解决哈希碰撞](https://en.wikipedia.org/wiki/Hash_table#Collision_resolution)的方法,但是更受欢迎的一种方法叫做**链接**。链接的想法就是对于数组的每一个索引位置都有一个链表,如果碰撞发生,值就被存到链表中。因此,在前面的例子中,我们将会得到我们需要的值,但是我们需要搜索数组中索引为 1 的位置上的链表。伴有链接的哈希实现需要 `O(1 + α)` 时间,其中 α 是装载因子,它可以表示为 n/k,其中 n 是哈希表中的记录数目,k 是哈希表中可用位置的数目。但是请记住,只有当你给出的关键字非常随机时,这一结论才正确(依赖于 [SUHA](https://en.wikipedia.org/wiki/SUHA_(computer_science))。
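用具体的数字感受一下装载因子的影响:假设哈希表有 k = 2000 个可用位置,其中存放了 n = 1000 条记录,那么 α = 1000/2000 = 0.5,每次操作平均只需要 O(1 + 0.5) 的时间,仍然是常数级别;而如果记录数增长到 n = 20000,α 就变成 10,平均每次操作都要在链表中多走大约 10 步,性能会明显下降。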
假定关键字足够随机(即 SUHA)是一个很大的假设,因为总是有可能出现若干不相等的关键字散列到同一个位置的情况。这一问题的一个解决方法是去除哈希表对关键字随机性的依赖,转而把随机性放在“关键字如何被散列”这件事上,从而降低碰撞发生的可能性。这被称为……
### 通用散列
这个观念很简单,从<ruby> 通用散列 <rt> universal hash </rt></ruby>家族集合随机选择一个哈希函数 h 来计算哈希代码。换句话来说,就是选择任何一个随机的哈希函数来散列关键字。通过这种方法,两个不同的关键字的散列结果相同的可能性将非常低(LCTT 译注:原文是“not be the same”,应是笔误)。我只是简单的提一下,如果不相信我那么请相信[数学](https://en.wikipedia.org/wiki/Universal_hashing#Mathematical_guarantees)。实现这一方法时需要注意的另一件事是如果选择了一个不好的通用散列家族,它会把时间和空间复杂度拖到 `O(U)`,其中 U 是散列家族的大小。而其中的挑战就是找到一个不需要太多时间来计算,也不需要太多空间来存储的哈希家族。
### 上帝哈希函数
追求完美是人的天性。我们是否能够构建一个*完美的哈希函数*,从而能够把关键字映射到整数集中,并且几乎*没有碰撞*。好消息是我们能够在一定程度上做到,但是我们的数据必须是静态的(这意味着在一定时间内没有插入/删除/更新)。一个实现完美哈希函数的方法就是使用 <ruby> 2 级哈希 <rt> 2-Level Hashing </rt></ruby>,它基本上是我们前面讨论过的两种方法的组合。它使用*通用散列*来选择使用哪个哈希函数,然后通过*链接*组合起来,但是这次不是使用链表数据结构,而是使用另一个哈希表。让我们看一看下面它是怎么实现的:
[](http://www.zeroequalsfalse.press/2017/02/20/hashtables/Diagram.png)
**但是这是如何工作的以及我们如何能够确保无需关心碰撞?**
它的工作方式与[生日悖论](https://en.wikipedia.org/wiki/Birthday_problem)相反。它指出,在随机选择的一堆人中,会有一些人生日相同。但是如果一年中的天数远远大于人数(平方以上),那么有极大的可能性所有人的生日都不相同。所以这二者是如何相关的?对于每一个链接哈希表,其大小均为第一级哈希表大小的平方。那就是说,如果有两个元素被散列到同一个点,那么链接哈希表的大小将为 4 。大多数时候,链接哈希表将会非常稀疏/空。
重复下面两步来确保无需担心碰撞:
* 从通用散列家族中选择一个哈希函数来计算
* 如果发生碰撞,那么继续从通用散列家族中选择另一个哈希函数来计算
字面上看就是这样(这是一个 `O(n^2)` 空间的解)。如果需要考虑空间问题,那么显然需要另一个不同的方法。但是值得庆幸的是,该过程平均只需要进行**两次**。
### 总结
只有具有一个好的哈希函数才能算得上是一个好的哈希表。在同时保证功能实现、时间和空间的提前下构建一个完美的哈希函数是一件很困难的事。我推荐你在解决问题的时候首先考虑哈希表,因为它能够为你提供巨大的性能优势,而且它能够对应用程序的可用性产生显著差异。哈希表和完美哈希函数常被用于实时编程应用中,并且在各种算法中都得到了广泛应用。你见或者不见,哈希表就在这儿。
---
via: <http://www.zeroequalsfalse.press/2017/02/20/hashtables/>
作者:[Marty Jacobs](http://www.zeroequalsfalse.press/about) 译者:[ucasFL](https://github.com/ucasFL) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,825 | 如何管理开源产品的安全漏洞 | https://www.linux.com/news/event/elcna/2017/2/how-manage-security-vulnerabilities-your-open-source-product | 2017-08-31T10:29:39 | [
"CVE",
"安全漏洞"
] | https://linux.cn/article-8825-1.html | 
在 ELC + OpenIoT 峰会上,英特尔安全架构师 Ryan Ware 将会解释如何应对漏洞洪流,并管理你产品的安全性。
在开发开源软件时, 你需要考虑的安全漏洞也许会将你吞没。<ruby> 常见漏洞及曝光 <rt> Common Vulnerabilities and Exposures </rt></ruby>(CVE)ID、零日漏洞和其他漏洞似乎每天都在公布。随着这些信息洪流,你怎么能保持不掉队?
英特尔安全架构师 Ryan Ware 表示:“如果你发布了基于 Linux 内核 4.4.1 的产品,截止今日,已经有 9 个针对该内核的 CVE。这些都会影响你的产品,尽管在你发布它的时候你还并不知道。”
在 [ELC](http://events.linuxfoundation.org/events/embedded-linux-conference) + [OpenIoT 峰会](http://events.linuxfoundation.org/events/openiot-summit)上,英特尔安全架构师 Ryan Ware 的演讲将介绍如何实施并成功管理产品的安全性的策略。在他的演讲中,Ware 讨论了最常见的开发者错误,跟上最新的漏洞的策略等等。
**Linux.com:让我们从头开始。你能否简要介绍一下常见漏洞和曝光(CVE),零日以及其他漏洞么?它们是什么,为什么重要?**
Ryan Ware:好问题。<ruby> 常见漏洞及曝光 <rt> Common Vulnerabilities and Exposures </rt></ruby>(CVE)是按美国政府的要求由 MITRE Corporation(一个非营利组织)维护的数据库。其目前由美国国土安全部资助。它是在 1999 年创建的,以包含有关所有公布的安全漏洞的信息。这些漏洞中的每一个都有自己的标识符(CVE-ID),并且可以被引用。 CVE 这个术语,已经从指整个数据库逐渐演变成代表一个单独的安全漏洞: 一个 CVE 漏洞。
出现于 CVE 数据库中的许多漏洞最初是零日漏洞。这些漏洞出于种种原因,没有遵循像“<ruby> 负责任的披露 <rt> Responsible Disclosure </rt></ruby>”这样更有序的披露过程。关键在于,在软件供应商还来不及以某种修复方式(通常是软件补丁)做出响应之前,它们就已经被公开并且可以被利用了。这些和其他未打补丁的软件漏洞至关重要,因为在软件被修补之前,漏洞都是可以被利用的。在许多方面,发布一个 CVE 或者零日漏洞就像是鸣响发令枪:在你跑完这场比赛之前,你的客户都很容易受到伤害。
**Linux.com:有多少漏洞?你如何确定那些与你的产品相关?**
Ryan:在探讨有多少漏洞之前,以任何形式发布软件的人都应该记住一点:即使你尽了一切努力确保发布的软件没有已知漏洞,你的软件*也会*存在漏洞,只是目前还不为人所知而已。例如,如果你发布了一个基于 Linux 内核 4.4.1 的产品,那么截止今日,已经有了 9 个针对该内核的 CVE。这些都会影响你的产品,尽管在你发布它的时候你还并不知道。

此时,CVE 数据库包含 80,957 个条目(截止至 2017 年 1 月 30 日),包括最早可追溯到 1999 年的所有记录,当时有 894 个已记录问题。迄今为止,单年数字最大的是 2014 年,当时记录了 7,946 个问题。也就是说,我并不认为过去两年该数字的减少意味着安全漏洞变少了。这是我将在我的演讲中谈到的东西。
**Linux.com:开发人员可以使用哪些策略来跟上这些信息?**
Ryan:开发人员可以通过各种方式跟上这些如洪水般涌来的漏洞信息。我最喜欢的工具之一是 [CVE Details](http://www.cvedetails.com/)。它以一种非常容易理解的方式展示了来自 MITRE 的信息。它最好的功能是创建自定义 RSS 源的能力,以便你可以跟踪你关心的组件的漏洞。那些具有更复杂的追踪需求的人可以从下载 MITRE CVE 数据库(免费提供)开始,并定期更新。其他优秀工具,如 cvechecker,可以让你检查软件中已知的漏洞。
对于软件栈中的关键部分,我还推荐一个非常有用的工具:参与到上游社区中。这些是最理解你所使用的软件的人。世界上没有比他们更好的专家。与他们一起合作。
**Linux.com:你怎么知道你的产品是否解决了所有漏洞?有推荐的工具吗?**
Ryan:不幸的是,正如我上面所说,你永远无法从你的产品中移除所有的漏洞。上面提到的一些工具是关键。但是,我还没有提到一个对你发布的任何产品来说都至关重要的部分:软件更新机制。如果你无法对已部署的产品进行现场软件更新,那么当客户受到影响时,你就无法解决安全问题。你的软件必须能够更新,更新过程越容易,你的客户受到的保护就越好。
**Linux.com:开发人员还需要知道什么才能成功管理安全漏洞?**
Ryan:有一个我反复看到的错误。开发人员总是需要牢记将攻击面最小化的想法。这是什么意思?在实践中,这意味着只包括你的产品实际需要的东西!这不仅包括确保你不将无关的软件包加入到你的产品中,还包括在编译项目时通过配置关闭不需要的功能。
这有什么帮助?想象这是 2014 年,你刚刚上班就看到了 Heartbleed 漏洞的技术新闻。你知道你的产品中包含 OpenSSL,因为你需要执行一些基本的加密功能,但并没有使用与该漏洞相关的 TLS 心跳功能。你是愿意:
a. 花费时间与客户和合作伙伴合作,通过关键的软件更新来修复这个高危安全问题?
b. 还是只需要告诉你的客户和合作伙伴,你的产品在编译 OpenSSL 时使用了 `-DOPENSSL_NO_HEARTBEATS` 标志,因此他们不会受到影响,然后你就可以专注于新功能和其他生产活动?
最简单解决漏洞的方法是你不包含这个漏洞。
(题图:[Creative Commons Zero](https://www.linux.com/licenses/category/creative-commons-zero) Pixabay)
---
via: <https://www.linux.com/news/event/elcna/2017/2/how-manage-security-vulnerabilities-your-open-source-product>
作者:[AMBER ANKERHOLZ](https://www.linux.com/users/aankerholz) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,826 | Ubuntu Linux 的不同安装类型:服务器 vs 桌面 | http://www.radiomagonline.com/deep-dig/0005/linux-installation-types-server-vs-desktop/39123 | 2017-08-31T11:04:20 | [
"服务器",
"桌面"
] | https://linux.cn/article-8826-1.html |
>
> 内核是任何 Linux 机器的核心
>
>
>

之前我已经讲了获取与安装 Ubuntu Linux,这次我将讲桌面和服务器这两种安装。两类安装各自满足不同的需求。两种安装镜像需要分别从 Ubuntu 网站下载,你可以从 [Ubuntu.com/downloads](https://www.ubuntu.com/download) 选择你需要的。
无论安装类型如何,都有一些相似之处。

*可以从桌面系统图形用户界面或从服务器系统命令行添加安装包。*
两者都使用相同的内核和软件包管理系统。软件包管理系统所用的仓库中存放的是预编译好的程序,它们几乎可以在任何 Ubuntu 系统上运行。程序被分组打包成软件包,然后以软件包的形式安装。软件包可以通过桌面系统的图形用户界面添加,也可以通过服务器系统的命令行添加。
程序安装使用一个名为 `apt-get` 的程序。这是一个包管理器系统或程序管理器系统。最终用户只需输入命令行 `apt-get install (package-name)`,Ubuntu 就会自动获取软件包并进行安装。
软件包安装的命令通常附带可以通过手册页访问的文档(这本身就是一个单独的话题)。这些手册页可以通过输入 `man (command)` 来访问,它会打开一个描述该命令详细用法的页面。终端用户还可以 Google 任何 Linux 命令或软件包,找到大量关于它的信息。
例如,在安装网络连接存储套件后,可以通过命令行、GUI 或使用名为 Webmin 的程序进行管理。Webmin 安装了一个基于 Web 的管理界面,用于配置大多数 Linux 软件包,它受到了仅安装服务器版本的人群的欢迎,因为它安装为网页,不需要 GUI。它还允许远程管理服务器。
大多数(如果不是全部)基于 Linux 的软件包都有专门帮助你如何运行该软件包的视频和网页。只需在 YouTube 上搜索 “Linux Ubuntu NAS”,你就会找到一个指导你如何设置和配置此服务的视频。还有专门指导 Webmin 的设置和操作的视频。
内核是任何 Linux 安装的核心。由于内核是模块化的,它是非常小的(顾名思义)。我在一个 32MB 的小型闪存上运行 Linux 服务器。我没有打错 - 32MB 的空间!Linux 系统使用的大部分空间都是由安装的软件包使用的。
**服务器**
服务器安装 ISO 镜像是 Ubuntu 提供的最小的下载。它是针对服务器操作优化的操作系统的精简版本。此版本没有 GUI。默认情况下,它完全从命令行运行。
移除 GUI 和其他组件可简化系统并最大限度地提高性能。最初没有安装的必要软件包可以稍后通过命令行程序包管理器添加。由于没有 GUI,因此必须从命令行完成所有配置、故障排除和包管理。许多管理员将使用服务器安装来获取一个干净或最小的系统,然后只添加他们需要的某些包。这包括添加桌面 GUI 系统并制作精简桌面系统。
广播电台可以使用 Linux 服务器作为 Apache Web 服务器或数据库服务器。这些是真实需要消耗处理能力的程序,这就是为什么它们通常使用服务器形式安装以及没有 GUI 的原因。SNORT 和 Cacti 是可以在你的 Linux 服务器上运行的其他程序(这两个应用程序都在上一篇文章中介绍,可以在这里找到:[*http://tinyurl.com/yd8dyegu*](http://tinyurl.com/yd8dyegu))。
**桌面**
桌面安装 ISO 镜像相当大,并且有多个在服务器安装 ISO 镜像上没有的软件包。此安装用于工作站或日常桌面使用。此安装类型允许自定义安装包(程序),或者可以选择默认的桌面配置。

*桌面安装 ISO 镜像相当大,并且有多个在服务器安装 ISO 镜像上没有的软件包。此安装包专为工作站或日常桌面使用设计。*
软件包通过 apt-get 包管理器系统安装,就像服务器安装一样。两者之间的区别在于,在桌面安装中,apt-get 包管理器具有不错的 GUI 前端。这允许通过点击鼠标轻松地从系统安装或删除软件包!桌面安装将设置一个 GUI 以及许多与桌面操作系统相关的软件包。

*通过 apt-get 包管理器系统安装软件包,就像服务器安装一样。两者之间的区别在于,在桌面安装中,apt-get 包管理器具有不错的 GUI 前端。*
这个系统安装后随时可用,可以很好的替代你的 Windows 或 Mac 台式机。它有很多包,包括 Office 套件和 Web 浏览器。
Linux 是一个成熟而强大的操作系统。无论哪种安装类型,它都可以配置为适合几乎所有需要。从功能强大的数据库服务器到用于网页浏览和给奶奶写信的基本台式机操作系统,其可能性没有上限,而可用的软件包几乎取之不竭。如果你遇到一个需要计算机化解决方案的问题,Linux 可能都会提供免费或低成本的软件来解决它。
通过提供两个安装版本,Ubuntu 做得很好,这让人们开始朝着正确的方向前进。
*Cottingham 是前无线电总工程师,现在从事流媒体工作。*
---
via: <http://www.radiomagonline.com/deep-dig/0005/linux-installation-types-server-vs-desktop/39123>
作者:[Chris Cottingham](http://www.radiomagonline.com/author/chris-cottingham) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
8,828 | 从这开始了解 OPNFV | https://www.linux.com/blog/opnfv/2017/8/understanding-opnfv-starts-here | 2017-09-01T08:33:00 | [
"NFV",
"OPNFV"
] | https://linux.cn/article-8828-1.html | 
如果电信运营商或企业今天从头开始构建网络,那么他们可能用软件定义资源的方式构建,这与 Google 或 Facebook 的基础设施类似。这是网络功能虚拟化 (NFV) 的前提。
NFV 是颠覆的一代,其将彻底改变网络的建设和运营。而且,[OPNFV](https://www.opnfv.org/) 是一个领先的开源 NFV 项目,旨在加速这项技术的采用。
你是想要知道有哪些开源项目可能会帮助你进行 NFV 转换计划的电信运营商或者相关的企业员工么?还是要将你的产品和服务推向新的 NFV 世界的技术提供商?或者,也许是一名想使用开源项目来发展你事业的工程师、网络运维或商业领袖?(例如 2013 年 Rackspace [提到](https://blog.rackspace.com/solving-the-openstack-talent-gap) 拥有 OpenStack 技能的网络工程师的平均工资比他们的同行高 13%)?如果这其中任何一个适用于你,那么 *理解 OPNFV* 一书是你的完美资源。

*“理解 OPNFV”一书高屋建瓴地提供了 OPNFV 的理解以及它如何帮助你和你们的组织。*
本书(由 Mirantis 的 Nick Chase 和我撰写)用 11 个易于阅读的章节、超过 144 页的篇幅,涵盖了从 NFV 概述、NFV 转型、OPNFV 项目的各个方面到 VNF 入门的一系列主题。阅读本书后,你将对 OPNFV 是什么以及它如何帮助你或你们的组织有一个高屋建瓴的理解。这本书不是专门面向开发人员的,虽然有开发背景会有所帮助。如果你是希望作为贡献者参与 OPNFV 项目的开发人员,那么 [wiki.opnfv.org](https://wiki.opnfv.org/) 仍然是你的最佳资源。
在本博客系列中,我们会向你展示本书的一部分内容 - 就是有些什么内容,以及你可能会学到的。
让我们从第一章开始。第 1 章,毫不奇怪,是对 NFV 的介绍。它从业务驱动因素(需要差异化服务、成本压力和敏捷需求)、NFV 是什么,以及你可以从 NFV 获得什么好处的角度做了简要概述。
简而言之,NFV 可以在数据中心的计算节点上执行复杂的网络功能。在计算节点上执行的网络功能称为虚拟网络功能 (VNF)。因此,VNF 可以作为网络运行,NFV 还会添加机制来确定如何将它们链接在一起,以提供对网络中流量的控制。
虽然大多数人认为 NFV 只用于电信领域,但它实际涵盖了广泛的使用场景:从基于应用或流量类型的基于角色的访问控制(RBAC),到在需要的时间和地点管理网络内容的内容分发网络(CDN),再到更明显的电信相关用例,如演进分组核心(EPC)和 IP 多媒体子系统(IMS)。
此外,一些主要收益包括增加收入、改善客户体验、减少运营支出 (OPEX)、减少资本支出 (CAPEX)和为新项目腾出资源。本节还提供了具体的 NFV 总体拥有成本 (TCO) 分析。这些话题的处理很简单,因为我们假设你有一些 NFV 背景。然而,如果你刚接触 NFV ,不要担心 - 介绍材料足以理解本书的其余部分。
本章总结了 NFV 要求 - 安全性、性能、互操作性、易操作性以及某些具体要求,如服务保证和服务功能链。不符合这些要求,没有 NFV 架构或技术可以真正成功。
阅读本章后,你将对为什么 NFV 非常重要、NFV是什么,以及 NFV 成功的技术要求有一个很好的概念。我们将在今后的博客文章中浏览下面的章节。
这本书已被证明是行业活动上最受欢迎的赠品,中文版正在进行之中!但是你现在可以[下载 PDF 格式的电子书](https://www.opnfv.org/resources/download-understanding-opnfv-ebook),或者在亚马逊上下载[打印版本](https://www.amazon.com/dp/B071LQY724/ref=cm_sw_r_cp_ep_dp_pgFMzbM8YHJA9)。
(题图:[Creative Commons Zero](https://www.linux.com/licenses/category/creative-commons-zero)Pixabay)
---
via: <https://www.linux.com/blog/opnfv/2017/8/understanding-opnfv-starts-here>
作者:[AMAR KAPADIA](https://www.linux.com/users/akapadia) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,829 | 在树莓派中开启激动人心的 Perl 之旅 | https://opensource.com/article/17/3/perl-raspberry-pi | 2017-09-01T11:00:30 | [
"树莓派",
"Perl"
] | https://linux.cn/article-8829-1.html |
>
> 树莓派,随心所欲。
>
>
>

我最近在 SVPerl (硅谷 Perl 聚会)谈到在树莓派上运行 Perl 语言的时候,有人问我,“我听说树莓派应该使用 Python ,是这样吗?”。我非常乐意回答他,这是个常见误解。树莓派可以支持任何语言: Python、Perl 和其他树莓派官方软件 Raspbian Linux 初始安装的语言。
这个误解的起源很简单。树莓派的创造者、英国的计算机科学教授 Eben Upton 曾经说过,树莓派名字中的‘派’(pi),是为了听起来像 Python,因为他喜欢这门语言。他选择了这门语言作为孩子们的启蒙语言。但是他和他的团队做的是一台通用计算机。开源软件没给树莓派任何限制。我们想运行什么就运行什么,全凭自己心意。
我在 SVPerl 和这篇文章中还想讲第二点,就是介绍我的 “PiFlash” 脚本。虽然它是用 Perl 写的,但是不需要你有多了解 Perl 就可以在 Linux 下将树莓派系统自动化烧录到 SD 卡。这样对初学者就比较友好,避免他们在烧录 SD 卡时候,偶然擦除了整个硬盘。即使是高级用户也可以从它的自动化工作中受益,包括我,这也是我开发这个工具的原因。在 Windows 和 Mac 下也有类似的工具,但是树莓派网站没有介绍类似工具给 Linux 用户。不过,现在有了。
开源软件早就有新项目源自开发者“自痒自挠”、解决自己问题的传统。Eric S. Raymond 在 1997 年的论文和 1999 年的书籍《[大教堂与集市](http://www.catb.org/%7Eesr/writings/cathedral-bazaar/)》中就是这样描述的,该书定义了开源软件的开发方法论。我也是为了满足像我这样的 Linux 用户的需要,所以写了这个脚本。
### 下载系统镜像
想要开启树莓派之旅,你首先需要为它下载一个操作系统,我们称之为“系统镜像”文件。一旦你把它下载到你的台式机、笔记本电脑,甚至是另一个树莓派中,你就需要把它写入(或者说“烧录”)到你的 SD 卡。详细情况可以看在线文档。手动做这件事情需要一些功底,因为你要把系统镜像烧录到整个 SD 卡,而不是其中某个分区。系统镜像本身必须包含至少一个分区,因为树莓派的引导过程需要一个 FAT32 文件系统分区,系统从这里开始引导。除了引导分区,其他分区可以是操作系统内核支持的任何分区类型。
在大部分树莓派中,我们都运行的是某些使用 Linux 内核的发行版。已经有一系列树莓派中常用的系统镜像你可以下载使用。(当然,没什么能阻止你自己造轮子)
树莓派基金会向新手推荐的是“[NOOBS](https://www.raspberrypi.org/downloads/noobs/)”系统。它代表了 “New Out of the Box System”(新鲜出炉即开即用系统),显然它好像听起来像术语 “noob"”(小白),通俗点说就是 “newbie”(菜鸟)。NOOBS 是一个基于树莓派的 Linux 系统,它会给你一个菜单可以在你的树莓派上自动下载安装几个其它的系统镜像。
[Raspbian Linux](https://www.raspberrypi.org/downloads/raspbian/) 是 Debian Linux 发行版的树莓派定制版。它是为树莓派开发的正式 Linux 发行版,并且由树莓派基金会维护。几乎所有树莓派驱动和软件都会在 Raspbian 上先试用,然后才会放到其它发行版上。其默认安装包含 Perl。
Ubuntu Linux (还有其社区版的 Ubuntu MATE)也将树莓派作为其支持 ARM (Advanced RISC Machines)处理器的平台之一。RISC(Reduced Instruction Set Computer)Ubuntu 是一个 Debian Linux 的商业化支持的开源分支,它也使用 DEB 包管理器。Perl 也在其中。它仅仅支持 32 位 ARM7 或者 64 位 ARM8 处理器的树莓派 2 和 3。ARM6 的树莓派 1 和 Zero 从未被 Ubuntu 构建过程支持。
[Fedora Linux](https://fedoraproject.org/wiki/Raspberry_Pi#Downloading_the_Fedora_ARM_image) 支持树莓派2 ,而 Fedora 25 支持 3。 Fedora 是一个隶属于红帽(Red Hat)的开源项目。Fedora 是个基础,商业版的 RHEL(Red Hat Enterprise Linux)在其上增加了商业软件包和支持,所以其软件像所有的兼容红帽的发行版一样来自 RPM(Red Hat Package Manager) 软件包。就像其它发行版一样,也包括 Perl。
[RISC OS](https://www.riscosopen.org/content/downloads/raspberry-pi) 是一个特别针对 ARM 处理器的单用户操作系统。如果你想要一个比 Linux 系统更加简洁的小型桌面(功能更少),你可以考虑一下。它同样支持 Perl。
[RaspBSD](http://www.raspbsd.org/raspberrypi.html) 是一个 FreeBSD 的树莓派发行版。它是一个基于 Unix 的系统,而不是 Linux。作为开源 Unix 的一员,它延续了 Unix 的功能,而且和 Linux 有着众多相似之处。包括有类似的开源软件带来的相似的系统环境,包括 Perl。
[OSMC](https://osmc.tv/),即开源多媒体中心,以及 [LibreElec](https://libreelec.tv/) 电视娱乐中心,它们都基于运行 Linux 内核之上的 Kodi 娱乐中心。它是一个小巧、特化的 Linux 系统,所以不要期望它能支持 Perl。
[Microsoft Windows IoT Core](http://ms-iot.github.io/content/en-US/Downloads.htm) 是仅运行在树莓派3上的新成员。你需要微软开发者身份才能下载。而作为一个 Linux 极客,我根本不看它。我的 PiFlash 脚本还不支持它,但如果你找的是它,你可以去看看。
### PiFlash 脚本
如果你想看看[树莓派 SD 卡烧录指导](https://www.raspberrypi.org/documentation/installation/installing-images/README.md),你可以找到在 Windows 或者 Mac 系统下需要下载的工具来完成烧录任务。但是对于 Linux 系统,只有一系列手工操作建议。我已经手工做过这个太多次,这很容易引发一个开发者的本能去自动化这个过程,这就是 PiFlash 脚本的起源。这有点难,因为 Linux 有太多方法可以配置,但是它们都是基于 Linux 内核的。
我总是觉得,手工操作潜在最大的失误恐怕就是偶然错误地擦除了某个设备,而不是擦除了 SD 卡,然后彻底清除了我本想保留在硬盘的东西。我在 SVPerl 演讲中也说了,我很惊讶地发现在听众中有犯了这种错误(而且不害怕承认)的人。因此,PiFlash 其中一个目的就是保护新手的安全,不会擦除 SD 卡之外的设备。PiFlash 脚本还会拒绝覆写包含了已经挂载的文件系统的设备。
对于有经验的用户,包括我,PiFlash 脚本还提供提供一个简便的自动化服务。下载完系统镜像之后,我不需要必须从 zip格式中解压缩或者提取出系统镜像。PiFlash 可以直接提取它,不管是哪种格式,并且直接烧录到 SD 卡中。
我把 [PiFlash 及其指导](https://github.com/ikluft/ikluft-tools/tree/master/piflash)发布在了 GitHub 上。
命令行用法如下:
```
piflash [--verbose] input-file output-device
piflash [--verbose] --SDsearch
```
`input-file` 参数是你要写入的系统镜像文件,只要是你从树莓派发行版网站下载的镜像都行。`output-device` 参数是你要写入的 SD 卡的块设备路径。
你也可以使用 `--SDsearch` 参数列出挂载在系统中 SD 卡设备名称。
可选项 `--verbose` 可以输出所有的程序状态数据,它在你需要帮助时或者递送 bug 报告和自行排错时很有用。它就是我开发时用的。
下面的例子是我使用该脚本写入仍是 zip 存档的 Raspbian 镜像到位于 `/dev/mmcblk0` 的 SD 卡:
```
piflash 2016-11-25-raspbian-jessie.img.zip /dev/mmcblk0
```
如果你已经指定了 `/dev/mmcblk0p1` (SD 卡的第一分区),它会识别到这个分区不是一个正确的位置,并拒绝写入。
在不同的 Linux 系统中怎样去识别哪个设备是 SD 卡是一个技术活。像 mmcblk0 这种在我的笔记本上是基于 PCI 的 SD卡接口。如果我使用了 USB SD 卡接口,它就是 `/dev/sdb`,这在多硬盘的系统中不好区分。然而,只有少量的 Linux 块设备支持 SD 卡。PiFlash 在这两种情况下都会检查块设备的参数。如果全部失败,它会认为可写入、可移动的,并有着正确物理扇区数量的 USB 驱动器是 SD 卡。
我想这应该能涵盖大部分情况。但是,如果你使用了我不知道的 SD 卡接口呢?我乐意看到你的来信。请在来信中附上 `--verbose --SDsearch` 的输出信息,以便让我可以知道你系统目前的环境。理想情况下,如果 PiFlash 脚本可以被广泛利用,我们可以构建一个开源社区去尽可能的帮助更多的树莓派用户。
### 树莓派的 CPAN 模块
[CPAN](http://www.cpan.org/)(Comprehensive Perl Archive Network)是一个世界范围内包含各种 Perl 模块的的下载镜像。它们都是开源的。大量 CPAN 中的模块都是历久弥坚。对于成千上百的任务,你不需要重复造轮子,只要利用别人已经发布的代码就可以了。然后,你还可以提交你的新功能。
尽管树莓派是个五脏俱全的 Linux 系统,支持大部分 CPAN 模块,但是这里我想强调一下专为树莓派硬件开发的东西。一般来说它们都用在测量、控制、机器人方面的嵌入式系统中。你可以通过 GPIO (General-Purpose Input/Output)针脚将你的树莓派连接到外部电子设备。
可以使用树莓派 GPIO 针脚的模块如下:[Device::SMBus](https://metacpan.org/pod/Device::SMBus)、[Device::I2C](https://metacpan.org/pod/Device::I2C)、[Rpi::PIGPIO](https://metacpan.org/pod/RPi::PIGPIO)、[Rpi::SPI](https://metacpan.org/pod/RPi::SPI)、[Rpi::WiringPi](https://metacpan.org/pod/RPi::WiringPi)、[Device::WebIO::RaspberryPI](https://metacpan.org/pod/Device::WebIO::RaspberryPi) 和 [Device::PiGlow](https://metacpan.org/pod/Device::PiGlow)。树莓派支持的嵌入式模块如下:[UAV::Pilot::Wumpus::Server::Backend::RaspberryPiI2C](https://metacpan.org/pod/UAV::Pilot::Wumpus::Server::Backend::RaspberryPiI2C)、[RPI::DHT11](https://metacpan.org/pod/RPi::DHT11)(温度/湿度)、[RPI::HCSR04](https://metacpan.org/pod/RPi::HCSR04)(超声波)、[App::RPI::EnvUI](https://metacpan.org/pod/App::RPi::EnvUI)、[RPi::DigiPot::MCP4XXXX](https://metacpan.org/pod/RPi::DigiPot::MCP4XXXX)、[RPI::ADC::ADS](https://metacpan.org/pod/RPi::ADC::ADS)、[Device::PaPiRus](https://metacpan.org/pod/Device::PaPiRus) 和 [Device::BCM2835::Timer](https://metacpan.org/pod/Device::BCM2835::Timer)。
### 例子
这里有些我们在树莓派上可以用 Perl 做的事情的例子。
#### 例一:在 OSMC 使用 PiFlash 播放视频
本例中,你将练习如何设置并运行使用 OSMC 操作系统的树莓派。
* 到 [RaspberryPi.Org](http://raspberrypi.org/) 下载区,下载最新的 OSMC 版本。
* 将空 SD 卡插入你的 Linux 电脑或者笔记本。树莓派第一代是全尺寸的 SD 卡,除此以外都在使用 microSD,你也许需要一个通用适配器才能插入它。
* 在插入前后分别运行 `cat /proc/partitions` 命令来看看系统分给硬件的设备名称。它可能像这样 `/dev/mmcblk0` 或者 `/dev/sdb`, 用如下命令将正确的系统镜像烧录到 SD 卡:`piflash OSMC_TGT_rbp2_20170210.img.gz /dev/mmcblk0`。
* 弹出 SD 卡,将它插入树莓派中,接上 HDMI 显示器,开机。
* 当 OSMC 设置完毕,插入一个 USB 设备,在里面放点视频。出于示范目的,我将使用 `youtube-dl` 程序下载两个视频。运行 `youtube-dl OHF2xDrq8dY` (彭博关于英国高新产业,包括树莓派的介绍)还有 `youtube-dl nAvZMgXbE9c` (CNet 发表的“排名前五的树莓派项目”) 。将它们下载到 USB 中,然后卸载移除设备。
* 将 USB 设备插入到 OSMC 树莓派。点击视频选项进入到外部设备。
* 只要你能在树莓派中播放视频,那么恭喜你,你已经完成了本次练习。玩的愉快。
#### 例二:随机播放目录中的视频的脚本
这个例子将使用一个脚本在树莓派上的目录中乱序播放视频。根据视频的不同和设备的摆放位置,这可以用作信息亭显示的用途。我写这个脚本用来展示室内体验视频。
* 设置树莓派引导 Raspbian Linux。连接到 HDMI 监视器。
* 从 GitHub 上下载 [do-video 脚本](https://github.com/ikluft/ikluft-tools/tree/master/perl-on-pi)。把它放到树莓派中。
* 跟随该页面的安装指导。最主要的事情就是安装 omxplayer 包,它可以使用树莓派硬件视频加速功能平滑地播放视频。
* 在家目录的 Videos 目录下放一些视频。
* 运行 `do-video` ,这样,应该就可以播放视频了
#### 例三:读取 GPS 数据的脚本
这个例子更加深入,也更有针对性。它展示了 Perl 怎么从外部设备中读取数据。在先前例子中提到的我的 GitHub “[Perl on Pi](https://github.com/ikluft/ikluft-tools/tree/master/perl-on-pi)” 页面上有一个 gps-read.pl 脚本。它可以通过串行端口从 GPS 读取 NMEA(国家海洋电子协会)数据。页面上还有教程,包括构建它所使用的 AdaFruit Industries 的部件,但是你可以使用任何能输出 NMEA 数据的 GPS。
通过这些任务,我想你应该可以在树莓派上像使用其他语言一样使用 Perl了。希望你喜欢。
---
作者简介:
Ian Kluft - 上学开始,Ian 就对喜欢编程和飞行。他一直致力于 Unix 的工作。在 Linux 内核发布后的六个月他转向了 Linux。他有计算机科学硕士学位,并且拥有 CSSLP 资格证(认证规范开发流程专家),另一方面,他还是引航员和认证的飞机指令长。作为一个超过二十五年的认证的无线电爱好者,在近些年,他在一些电子设备上陆续做了实验,包括树莓派。
---
via: <https://opensource.com/article/17/3/perl-raspberry-pi>
作者:[Ian Kluft](https://opensource.com/users/ikluft) 译者:[Taylor1024](https://github.com/Taylor1024) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | When I spoke recently at SVPerl (Silicon Valley Perl) about Perl on the Raspberry Pi, someone asked, "I heard the Raspberry Pi is supposed to use Python. Is that right?" I was glad he asked because it's a common misconception. The Raspberry Pi can run any language. Perl, Python, and others are part of the initial installation of Raspbian Linux, the official software for the board.
The origin of the myth is simple. The Raspberry Pi's creator, UK Computer Science professor Eben Upton, has told the story that the "Pi" part of the name was intended to sound like Python because he likes the language. He chose it as his emphasis for kids to learn coding. But he and his team made a general-purpose computer. The open source software on the Raspberry Pi places no restrictions on us. We're all free to pick what we want to run and make each Raspberry Pi our own.
The second point to my presentation at SVPerl and this article is to introduce my "PiFlash" script. It was written in Perl, but it doesn't require any knowledge of Perl to automate your task of flashing SD cards for a Raspberry Pi from a Linux system. It provides safety for beginners, so they won't accidentally erase a hard drive while trying to flash an SD card. It offers automation and convenience for power users, which includes me and is why I wrote it. Similar tools already existed for Windows and Macs, but the instructions on the Raspberry Pi website oddly have no automated tools for Linux users. Now one exists.
Open source software has a long tradition of new projects starting because an author wanted to "scratch their own itch," or to solve their own problems. That's the way Eric S. Raymond described it in his 1997 paper and 1999 book "[The Cathedral and the Bazaar](http://www.catb.org/~esr/writings/cathedral-bazaar/)," which defined the open source software development methodology. I wrote PiFlash to fill a need for Linux users like myself.
## Downloadable system images
When setting up a Raspberry Pi, you first need to download an operating system for it. We call it a "system image" file. Once you download it to your desktop, laptop, or even another Raspberry Pi, you have to write or "flash" it to an SD card. The details are covered online already. It can be a bit tricky to do manually because getting the system image on the whole SD card and not on a partition matters. The system image will actually contain at least one partition of its own because the Raspberry Pi's boot procedure needs a FAT32 filesystem partition from which to start. Other partitions after the boot partition can be any filesystem type supported by the OS kernel.
In most cases on the Raspberry Pi, we're running some distribution with a Linux kernel. Here's a list of common system images that you can download for the Raspberry Pi (but there's nothing to stop you from building your own from scratch).
The ["NOOBS"](https://www.raspberrypi.org/downloads/noobs/) system from the Raspberry Pi Foundation is their recommended system for new users. It stands for "New Out of the Box System." It's obviously intended to sound like the term "noob," short for "newbie." NOOBS starts a Raspbian-based Linux system, which presents a menu that you can use to automatically download and install several other system images on your Raspberry Pi.
[Raspbian ](https://www.raspberrypi.org/downloads/raspbian/)[Linux](https://www.raspberrypi.org/downloads/raspbian/) is Debian Linux specialized for the Raspberry Pi. It's the official Linux distribution for the Raspberry Pi and is maintained by the Raspberry Pi Foundation. Nearly all Raspberry Pi software and drivers start with Raspbian before going to other Linux distributions. It runs on all models of the Raspberry Pi. The default installation includes Perl.
Ubuntu Linux (and the community edition Ubuntu MATE) includes the Raspberry Pi as one of its supported platforms for the ARM (Advanced RISC Machines) processor. [RISC (Reduced Instruction Set Computer) architecture] Ubuntu is a commercially supported open source variant of Debian Linux, so its software comes as DEB packages. Perl is included. It only works on the Raspberry Pi 2 and 3 models with their 32-bit ARM7 and 64-bit ARM8 processors. The ARM6 processor of the Raspberry Pi 1 and Zero was never supported by Ubuntu's build process.
[Fedora Linux](https://fedoraproject.org/wiki/Raspberry_Pi#Downloading_the_Fedora_ARM_image) supports the Raspberry Pi 2 and 3 as of Fedora 25. Fedora is the open source project affiliated with Red Hat. Fedora serves as the base that the commercial RHEL (Red Hat Enterprise Linux) adds commercial packages and support to, so its software comes as RPM (Red Hat Package Manager) packages like all Red Hat-compatible Linux distributions. Like the others, it includes Perl.
[RISC OS](https://www.riscosopen.org/content/downloads/raspberry-pi) is a single-user operating system made specifically for the ARM processor. If you want to experiment with a small desktop that is more compact than Linux (due to fewer features), it's an option. Perl runs on RISC OS.
[RaspBSD](http://www.raspbsd.org/raspberrypi.html) is the Raspberry Pi distribution of FreeBSD. It's a Unix-based system, but isn't Linux. As an open source Unix, form follows function and it has many similarities to Linux, including that the operating system environment is made from a similar set of open source packages, including Perl.
[OSMC](https://osmc.tv/), the Open Source Media Center, and [LibreElec](https://libreelec.tv/) are TV entertainment center systems. They are both based on the Kodi entertainment center, which runs on a Linux kernel. It's a really compact and specialized Linux system, so don't expect to find Perl on it.
[Microsoft ](http://ms-iot.github.io/content/en-US/Downloads.htm)[Windows IoT Core](http://ms-iot.github.io/content/en-US/Downloads.htm) is a new entrant that runs only on the Raspberry Pi 3. You need Microsoft developer access to download it, so as a Linux geek, that deterred me from looking at it. My PiFlash script doesn't support it, but if that's what you're looking for, it's there.
## The PiFlash script
If you look at the Raspberry Pi 's [SD card flashing](https://www.raspberrypi.org/documentation/installation/installing-images/README.md)[ instructions](https://www.raspberrypi.org/documentation/installation/installing-images/README.md), you'll see the instructions to do that from Windows or Mac involve downloading a tool to write to the SD card. But for Linux systems, it's a set of instructions to do manually. I've done that manual procedure so many times that it triggered my software-developer instinct to automate the process, and that's where the PiFlash script came from. It's tricky because there are many ways a Linux system can be set up, but they are all based on the Linux kernel.
I always imagined one of the biggest potential errors of the manual procedure is accidentally erasing the wrong device, instead of the SD card, and destroying the data on a hard drive that I wanted to keep. In my presentation at SVPerl, I was surprised to find someone in the audience who has made that mistake (and wasn't afraid to admit it). Therefore, one of the purposes of the PiFlash script, to provide safety for new users by refusing to erase a device that isn't an SD card, is even more needed than I expected. PiFlash will also refuse to overwrite a device that contains a mounted filesystem.
For experienced users, including me, the PiFlash script offers the convenience of automation. After downloading the system image, I don't have to uncompress it or extract the system image from a zip archive. PiFlash will extract it from whichever format it's in and directly flash the SD card.
I posted [PiFlash and its instructions](https://github.com/ikluft/ikluft-tools/tree/master/piflash) on GitHub.
It's a command-line tool with the following usages:
**piflash [--verbose] input-file output-device**
**piflash [--verbose] --SDsearch**
The **input-file** parameter is the system image file, whatever you downloaded from the Raspberry Pi software distribution sites. The **output-device** parameter is the path of the block device for the SD card you want to write to.
Alternatively, use **--SDsearch** to print a list of the device names of SD cards on the system.
The optional **--verbose** parameter is useful for printing out all of the program's state data in case you need to ask for help, submit a bug report, or troubleshoot a problem yourself. That's what I used for developing it.
This example of using the script writes a Raspbian image, still in its zip archive, to the SD card at **/dev/mmcblk0**:
**piflash 2016-11-25-raspbian-jessie.img.zip /dev/mmcblk0**
If you had specified **/dev/mmcblk0p1** (the first partition on the SD card), it would have recognized that a partition is not the correct location and refused to write to it.
One tricky aspect is recognizing which devices are SD cards on various Linux systems. The example with **mmcblk0** is from the PCI-based SD card interface on my laptop. If I used a USB SD card interface, it would be **/dev/sdb**, which is harder to distinguish from hard drives present on many systems. However, there are only a few Linux block drivers that support SD cards. PiFlash checks the parameters of the block devices in both those cases. If all else fails, it will accept USB drives which are writable, removable and have the right physical sector count for an SD card.
I think that covers most cases. However, what if you have another SD card interface I haven't seen? I'd like to hear from you. Please include the **--verbose --SDsearch** output, so I can see what environment was present on your system when it tried. Ideally, if the PiFlash script becomes widely used, we should build up an open source community around maintaining it for as many Raspberry Pi users as we can.
## CPAN modules for Raspberry Pi
CPAN is the [Comprehensive Perl Archive Network](http://www.cpan.org/), a worldwide network of download mirrors containing a wealth of Perl modules. All of them are open source. The vast quantity of modules on CPAN has been a huge strength of Perl over the years. For many thousands of tasks, there is no need to re-invent the wheel, you can just use the code someone else already posted, then submit your own once you have something new.
As Raspberry Pi is a full-fledged Linux system, most CPAN modules will run normally on it, but I'll focus on some that are specifically for the Raspberry Pi's hardware. These would usually be for embedded systems projects like measurement, control, or robotics. You can connect your Raspberry Pi to external electronics via its GPIO (General-Purpose Input/Output) pins.
Modules specifically for accessing the Raspberry Pi's GPIO pins include [Device::SMBus](https://metacpan.org/pod/Device::SMBus), [Device::I2C](https://metacpan.org/pod/Device::I2C), [Rpi::PIGPIO](https://metacpan.org/pod/RPi::PIGPIO), [Rpi::SPI](https://metacpan.org/pod/RPi::SPI), [Rpi::WiringPi](https://metacpan.org/pod/RPi::WiringPi), [Device::WebIO::RaspberryPi](https://metacpan.org/pod/Device::WebIO::RaspberryPi) and [Device::PiGlow](https://metacpan.org/pod/Device::PiGlow). Modules for other embedded systems with Raspberry Pi support include [UAV::Pilot::Wumpus::Server::Backend::RaspberryPiI2C](https://metacpan.org/pod/UAV::Pilot::Wumpus::Server::Backend::RaspberryPiI2C), [RPi::DHT11](https://metacpan.org/pod/RPi::DHT11) (temperature/humidity), [RPi::HCSR04](https://metacpan.org/pod/RPi::HCSR04) (ultrasonic), [App::RPi::EnvUI](https://metacpan.org/pod/App::RPi::EnvUI) (lights for growing plants), [RPi::DigiPot::MCP4XXXX](https://metacpan.org/pod/RPi::DigiPot::MCP4XXXX) (potentiometer), [RPi::ADC::ADS](https://metacpan.org/pod/RPi::ADC::ADS) (A/D conversion), [Device::PaPiRus](https://metacpan.org/pod/Device::PaPiRus) and [Device::BCM2835::Timer](https://metacpan.org/pod/Device::BCM2835::Timer) (the on-board timer chip).
## Examples
Here are some examples of what you can do with Perl on a Raspberry Pi.
### Example 1: Flash OSMC with PiFlash and play a video
For this example, you'll practice setting up and running a Raspberry Pi using the OSMC (Open Source Media Center).
- Go to [RaspberryPi.Org](http://raspberrypi.org/). In the downloads area, get the latest version of OSMC.
- Check "cat /proc/partitions" before and after inserting the SD card to see which device name it was assigned by the system. It could be something like
**/dev/mmcblk0**or**/dev/sdb**. Substitute your correct system image file and output device in a command that looks like this:
** piflash OSMC_TGT_rbp2_20170210.img.gz /dev/mmcblk0**
- Eject the SD card. Put it in the Raspberry Pi and boot it connected to an HDMI monitor.
- While OSMC is setting up, get a USB stick and put some videos on it. For purposes of the demonstration, I suggest using the "youtube-dl" program to download two videos. Run "youtube-dl OHF2xDrq8dY" (The Bloomberg "Hello World" episode about UK tech including Raspberry Pi) and "youtube-dl nAvZMgXbE9c" (CNet's Top 5 Raspberry Pi projects). Move them to the USB stick, then unmount and remove it.
- Insert the USB stick in the OSMC Raspberry Pi. Follow the Videos menu to the external device.
- When you can play the videos on the Raspberry Pi, you have completed the exercise. Have fun.
### Example 2: A script to play random videos from a directory
This example uses a script to shuffle-play videos from a directory on the Raspberry Pi. Depending on the videos and where it's installed, this could be a kiosk display. I wrote it to display videos while using indoor exercise equipment.
- Set up a Raspberry Pi to boot Raspbian Linux. Connect it to an HDMI monitor.
- Download my ["do-video" script](https://github.com/ikluft/ikluft-tools/tree/master/perl-on-pi) from GitHub and put it on the Raspberry Pi.
- Follow the installation instructions on the page. The main thing is to install the **omxplayer** package, which plays videos smoothly using the Raspberry Pi's hardware video acceleration.
### Example 3: A script to read GPS data
This example is more advanced and optional, but it shows how Perl can read from external devices. At my "Perl on Pi" page on GitHub from the previous example, there is also a **gps-read.pl** script. It reads NMEA (National Marine Electronics Association) data from a GPS via the serial port. Instructions are on the page, including parts I used from AdaFruit Industries to build it, but any GPS that outputs NMEA data could be used.
With these tasks, I've made the case that you really can use Perl as well as any other language on a Raspberry Pi. I hope you enjoy it.
8,830 | OpenStack 上的 OpenShift:更好地交付应用程序 | https://blog.openshift.com/openshift-on-openstack-delivering-applications-better-together/ | 2017-09-01T11:44:00 | [
"OpenStack",
"OpenShift"
] | https://linux.cn/article-8830-1.html | 
你有没有问过自己,我应该在哪里运行 OpenShift?答案是任何地方 - 它可以在裸机、虚拟机、私有云或公共云中很好地运行。但是,这里有一些为什么人们正迁移到围绕全栈和资源消耗自动化相关的私有云和公有云的原因。传统的操作系统一直是关于[硬件资源的展示和消耗](https://docs.google.com/presentation/d/139_dxpiYc5JR8yKAP8pl-FcZmOFQCuV8RyDxZqOOcVE/edit) - 硬件提供资源,应用程序消耗它们,操作系统一直是交通警察。但传统的操作系统一直局限于单机<sup> 注1</sup> 。
那么,在原生云的世界里,现在意味着这个概念扩展到包括多个操作系统实例。这就是 OpenStack 和 OpenShift 所在。在原生云世界,虚拟机、存储卷和网段都成为动态配置的构建块。我们从这些构建块构建我们的应用程序。它们通常按小时或分钟付费,并在不再需要时被取消配置。但是,你需要将它们视为应用程序的动态配置能力。 OpenStack 在动态配置能力(展示)方面非常擅长,OpenShift 在动态配置应用程序(消费)方面做的很好,但是我们如何将它们结合在一起来提供一个动态的、高度可编程的多节点操作系统呢?
要理解这个,让我们来看看如果我们在传统的环境中安装 OpenShift 会发生什么 - 想象我们想要为开发者提供动态访问来创建新的应用程序,或者想象我们想要为业务线提供现有应用程序的新副本,以满足合同义务。每个应用程序都需要访问持久存储。持久存储不是临时的,在传统的环境中,这要通过提交一张工单来实现。没关系,我们可以这样使用 OpenShift:每次需要存储时就提交一张工单。存储管理员可以登录企业存储阵列,根据需要划分出卷,然后将其交回 OpenShift 以满足应用程序。但这将是一个非常慢的手动过程,而且存储管理员很可能会辞职不干。

在原生云的世界里,我们应该将其视为一个策略驱动的自动化流程。存储管理员变得更加战略性、设置策略、配额和服务级别(银、黄金等),但实际配置变得动态。

动态过程可扩展到多个应用程序 - 这可能是开发者测试的业务线甚至新应用程序。从 10 多个应用程序到 1000 个应用程序,动态配置提供原生云体验。

下面的演示视频展示了动态存储配置如何与 Red Hat OpenStack 平台(Cinder 卷)以及 Red Hat OpenShift 容器平台配合使用,但动态配置并不限于存储。想象一下这样的环境:当某个 OpenShift 实例需要更多容量时,节点能够自动扩展。再想象一下,在推送一个敏感的程序变更之前,为用于负载测试的特定 OpenShift 实例划分出一个网段。这些就是你需要动态配置 IT 构建块的原因,而 OpenStack 正是以 API 驱动的方式来实现这一点的。
OpenShift 和 OpenStack 一起更好地交付应用程序。OpenStack 动态提供资源,而 OpenShift 会动态地消耗它们。它们一起为你所有的容器和虚拟机需求提供灵活的原生云解决方案。
注1:高可用性集群和一些专门的操作系统在一定程度上弥合了这一差距,但在计算中通常是一个边缘情况。
---
via: <https://blog.openshift.com/openshift-on-openstack-delivering-applications-better-together/>
作者:[SCOTT MCCARTY](https://blog.openshift.com/author/smccartyredhat-com/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,831 | 使用 LibreOffice Calc 管理你的财务 | https://opensource.com/article/17/8/budget-libreoffice-calc | 2017-09-01T13:49:47 | [
"LibreOffice"
] | https://linux.cn/article-8831-1.html |
>
> 你想知道你的钱花在哪里?这个精心设计的电子表格可以一目了然地回答这个问题。
>
>
>

如果你像大多数人一样,没有一个取之不尽的银行账户,那么你可能需要仔细留意你的每月支出。
有很多方法可以做到这一点,但是最快最简单的方法是使用电子表格。许多人会创建一个非常基本的电子表格来完成这项工作:它由两个长列组成,总计位于底部。这是可行的,但有点乏味。
我将通过使用 LibreOffice Calc 创建一个更便于细查的(我认为)以及更具视觉吸引力的个人消费电子表格。
你说不用 LibreOffice?没关系。你可以使用 [Gnumeric](http://www.gnumeric.org/)、[Calligra Sheets](https://www.calligra.org/sheets/) 或 [EtherCalc](https://ethercalc.net/) 等电子表格工具使用本文中的信息。
### 首先列出你的费用
先别费心 LibreOffice 了。坐下来用笔和纸,列出你的每月日常开支。花时间,翻遍你的记录,记下所有的事情,无论它多么渺小。不要担心你花了多少钱。重点放在你把钱花在哪里。
完成之后,将你的费用分组到最有意义的标题下。例如,将你的燃气、电气和水费放在“水电费”下。你也可能想要为我们每个月都会遇到的意外费用,使用一组名为“种种”。
### 创建电子表格
启动 LibreOffice Calc 并创建一个空的电子表格。在电子表格的顶部留下三个空白行。之后我们会回来。
你把你的费用归类是有原因的:这些组将成为电子表格上的块。我们首先将最重要的花费组(例如 “家庭”)放在电子表格的顶部。
在工作表顶部第四行的第一个单元格中输入该花费组的名称。将它放大(可以是 12 号字体)、加粗使得它显眼。
在该标题下方的行中,添加以下三列:
* 花费
* 日期
* 金额
在“花费”列下的单元格中输入该组内花费的名称。
接下来,选择日期标题下的单元格。单击 **Format** 菜单,然后选择 **Number Format > Date**。对“金额”标题下的单元格重复此操作,然后选择 **Number Format > Currency**。
你会看到这样:

这样,一组开支就完成了。不必为每个花费组都从头创建新块,只需复制你创建的内容并将其粘贴到第一个块旁边。我建议一行放三个块,块与块之间留一个空列。
你会看到这样:

对所有你的花费组做重复操作。
### 总计所有
查看所有个人费用是一回事,但你也可以一起查看每组费用的总额和所有费用。
我们首先总计每个费用组的金额。你可以让 LibreOffice Calc 自动做这些。高亮显示“金额”列底部的单元格,然后单击 “Formula” 工具栏上的 “Sum” 按钮。

单击金额列中的第一个单元格,然后将光标拖动到列中的最后一个单元格。然后按下 Enter。

现在让我们用你顶部留下的两三行空白行做一些事。这就是你所有费用的总和。我建议把它放在那里,这样无论何时你打开文件时它都是可见的。
在表格左上角的其中一个单元格中,输入类似“月总计”。然后,在它旁边的单元格中,输入 `=SUM()`。这是 LibreOffice Calc 函数,它可以在电子表格中添加特定单元格的值。
不要手动输入要添加的单元格的名称,请按住键盘上的 Ctrl。然后在电子表格上单击你在每组费用中总计的单元格。
### 完成
你有一张追踪一个月花费的表。拥有单个月花费的电子表格有点浪费。为什么不用它跟踪全年的每月支出呢?
右键单击电子表格底部的选项卡,然后选择 **Move or Copy Sheet**。在弹出的窗口中,单击 **-move to end position-**,然后按下 Enter 键。一直重复到你有 12 张表 - 每月一张。以月份重命名表格,然后使用像 *Monthly Expenses 2017.ods* 这样的描述性名称保存电子表格。
现在设置完成了,你可以开始使用电子表格了。使用电子表格跟踪花费本身并不能让你的财务基础更稳固,但它可以帮助你掌控每个月的花费。
(题图: opensource.com)
---
作者简介:
Scott Nesbitt - 我是一名长期使用自由/开源软件的用户,并为了乐趣和收益写了各种软件。我不会太严肃。你可以在网上这些地方找到我:Twitter、Mastodon、GitHub。
---
via: <https://opensource.com/article/17/8/budget-libreoffice-calc>
作者:[Scott Nesbitt](https://opensource.com/users/scottnesbitt) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | If you're like most people, you don't have a bottomless bank account. You probably need to watch your monthly spending carefully.
There are many ways to do that, but that quickest and easiest way is to use a spreadsheet. Many folks create a very basic spreadsheet to do the job, one that consists of two long columns with a total at the bottom. That works, but it's kind of blah.
I'm going to walk you through creating a more scannable and (I think) more visually appealing personal expense spreadsheet using LibreOffice Calc.
Say you don't use LibreOffice? That's OK. You can use the information in this article with spreadsheet tools like [Gnumeric](http://www.gnumeric.org/), [Calligra Sheets](https://www.calligra.org/sheets/), or [EtherCalc](https://ethercalc.net/).
## Start by making a list of your expenses
Don't bother firing up LibreOffice Calc just yet. Sit down with pen and paper and list your regular monthly expenses. Take your time, go through your records, and note everything, no matter how small. Don't worry about how much you're spending. Focus on where you're putting your money.
Once you've done that, group your expenses under headings that make the most sense to you. For example, group your gas, electric, and water bills under the heading Utilities. You might also want to have a group of expenses with a name like Various for those unexpected expenses we all run into each month.
## Create the spreadsheet
Start LibreOffice Calc and create an empty spreadsheet. Leave three blank rows at the top of the spreadsheet. We'll come back to them.
There's a reason you grouped your expenses: Those groups will become blocks on the spreadsheet. Let's start by putting your most important expense group (e.g., Home) at the top of the spreadsheet.
Type that expense group's name in the first cell of the fourth row from the top of sheet. Make it stand out by putting it in a larger (12 points is good), bold font.
In the row below that heading, add the following three columns:
- Expense
- Date
- Amount
Type the names of the expenses within that group into the cells under the Expense column.
Next, select the cells under the Date heading. Click the **Format** menu and select **Number Format > Date**. Repeat that for the cells under the Amount heading, and choose **Number Format > Currency**.
You'll have something that looks like this:

opensource.com
That's one group of expenses out of the way. Instead of creating a new block for each expense group, copy what you created and paste it beside the first block. I recommend having rows of three blocks, with an empty column between them.
You'll have something like this:

opensource.com
Repeat that for all your expense groups.
## Total it all up
It's one thing to see all your individual expenses, but you'll also want to view totals for each group of expenses and for all of your expenses together.
Let's start by totaling the amounts for each expense group. You can get LibreOffice Calc to do that automatically. Highlight a cell at the bottom of the Amount column and then click the **Sum** button on the Formula toolbar.

opensource.com
Click the first cell in the Amount column and drag the cursor to the last cell in the column. Then, press Enter.

opensource.com
Now let's do something with the two or three blank rows you left at the top of the spreadsheet. That's where you'll put the grand total of all your expenses. I advise putting it up there so it's visible whenever you open the file.
In one of the cells at the top left of the sheet, type something like Grand Total or Total for the Month. Then, in the cell beside it, type **=SUM()**. That's the LibreOffice Calc function that adds the values of specific cells on a spreadsheet.
Instead of manually entering the names of the cells to add, press and hold Ctrl on your keyboard. Then click the cells where you totaled each group of expenses on your spreadsheet.
## Finishing up
You have a sheet for a tracking a month's expenses. Having a spreadsheet for a single month's expenses is a bit of a waste. Why not use it to track your monthly expenses for the full year instead?
Right-click on the tab at the bottom of the spreadsheet and select **Move or Copy Sheet**. In the window that pops up, click **-move to end position-** and press Enter. Repeat that until you have 12 sheets—one for each month. Rename each sheet for each month of the year, then save the spreadsheet with a descriptive name like *Monthly Expenses 2017.ods*.
Now that your setup is out of the way, you're ready to use the spreadsheet. While using a spreadsheet to track your expenses won't, by itself, put you on firmer financial footing, it can help you keep on top of and control what you're spending each month.
8,832 | 为什么开源应该是云原生环境的首选 | https://opensource.com/article/17/8/open-sourcing-infrastructure | 2017-09-02T20:27:22 | [
"开源",
"OpenStack"
] | https://linux.cn/article-8832-1.html |
>
> 基于与 Linux 当年击败专有软件相同的原因,开源应该成为云原生环境的首选。
>
>
>

让我们回溯到上世纪 90 年代,当时专有软件大行其道,而开源才刚开始进入它自己的时代。是什么导致了这种转变?更重要的是,而今天我们转到云原生环境时,我们能从中学到什么?
### 基础设施的历史经验
我将以一个高度主观的、开源的视角开始,来看看基础设施过去 30 年的历史。在上世纪 90 年代,Linux 只是大多数组织视野中一个微不足道的小光点而已——如果他们听说过它的话。那些早早接纳 Linux 的公司很快就发现了它的好处,主要是把它作为专有 Unix 的廉价替代品,而当时部署服务器的标准方式是使用专有的 Unix,或者日渐增多地使用 Microsoft Windows NT。
这种模式的专有本性为更专有的软件提供了一个肥沃的生态系统。软件被装在盒子里面放在商店出售。甚至开源软件也参与了这种装盒游戏;你可以在货架上买到 Linux,而不是用你的互联网连接免费下载。去商店和从你的软件供应商那里只是你得到软件的不同方式而已。

*Ubuntu 包装盒出现在百思买的货架上*
我认为,随着 LAMP 栈(Linux、Apache、MySQL 和 PHP / Perl / Python)的崛起,情况发生了变化。LAMP 栈非常成功:它稳定、可伸缩,而且相对用户友好。与此同时,我开始看到人们对专有解决方案的不满。一旦客户在 LAMP 栈中尝过了开源的甜头,他们就会改变对软件的期望,包括:
* 不愿被供应商绑架,
* 关注安全,
* 希望自己来修复 bug ,以及
* 孤立开发的软件意味着创新被扼杀。
在技术方面,我们也看到了各种组织在如何使用软件上的巨大变化。忽然有一天,网站的宕机变得不可接受了。这就带来了对扩展性和自动化的更多依赖。特别是在过去的十年里,我们看到了基础设施从传统的“宠物”模式到“牛群”模式的转变:在这种模式中,服务器可以被换下和替换,而不是长期保有并逐一命名。公司要处理海量数据,因此更加注重数据留存,以及处理这些数据并将其返回给用户的速度。
开源和开源社区,以及来自大公司的日益增多的投入,为我们改变软件的使用方式提供了基础。系统管理员的岗位描述开始要求具备 Linux 技能,并熟悉开源技术和理念。通过开源类似 Chef cookbook 和 Puppet 模块这样的东西,管理员可以分享他们的工具配置。我们不再各自孤立地配置和调优 MySQL;我们创建了一个处理这些基础部分的体系,从而可以专注于更有趣的、能给雇主带来更高价值的工程工作。
开源现在无处不在,围绕它的模式也无处不在。曾经仇视这个想法的公司不仅通过协同项目与外界拥抱开源,而且进一步地,还发布了他们自己的开源软件项目并且围绕它们构建了社区。

### 转向云端
今天,我们生活在一个 DevOps 和云端的世界里。我们收获了开源运动带来的创新成果。Tim O'Reilly 所称的“[内部开源](https://opensource.com/life/16/11/create-internal-innersource-community)”,即在公司内部采用开源软件开发实践,有了明显增长。我们为云平台共享部署配置。像 Terraform 这样的工具甚至允许我们编写和分享部署特定平台的方式。
但这些平台本身呢?
>
> “大多数人想都不想就使用了云……许多用户将钱投入到根本不属于他们的基础设施中,而对放弃他们的数据和信息毫无顾虑。" —Edward Snowden, OpenStack Summit, May 9, 2017
>
>
>
现在是时候认真反思一下我们那种不假思索就迁移或扩展到云端的本能反应了。
就像 Snowden 强调的那样,现在我们正面临着失去对用户和客户数据的控制的风险。抛开安全不谈,如果我们回顾一下当初转向开源的原因,其中重要的几条正是对被厂商绑架的担忧、创新难以推动、甚至无法自行修复 bug。
在把你自己和/或你的公司锁定在一个专有平台之前,考虑以下问题:
* 我使用的服务是遵循开放标准,还是被厂商绑架的?
* 如果服务供应商破产或被竞争对手收购,我有什么应对之道?
* 在停机、安全等问题上,供应商是否有与客户进行清晰而坦诚的沟通的历史记录?
* 供应商是否响应 bug 和特性请求,即使那是来自小客户?
* 供应商是否会在我不知情的情况下使用我们的数据(或者更糟,即便我们的客户协议所不同意)?
* 供应商是否有一个计划来处理长期的,不断上升的增长成本,特别是如果最初的成本很低呢?
您可以通过这个问卷,讨论每个要点,而仍然决定使用专有的解决方案。这很好,很多公司一直都在这么做。然而,如果你像我一样,宁愿找到一个更开放的解决方案而仍然受益于云,你确实有的选择。
### 超越专有云
当您把目光投向专有云解决方案之外时,走向开源的第一个选择是投资一个核心运行在开源软件上的云提供商。[OpenStack](https://www.openstack.org/) 是行业领袖,在其 7 年的历史中,有 100 多个参与组织和成千上万的贡献者(包括我)。OpenStack 项目已经证明,与多个基于 OpenStack 的云对接不仅是可行的,而且相对简单。云公司之间的 API 是相似的,所以您不必被锁定在特定的 OpenStack 供应商上。作为一个开源项目,您仍然可以影响该基础设施的特性、bug 请求和发展方向。
第二种选择是在基础层面上继续使用专有云,但在其上运行一个开源的容器编排系统。无论您选择 [DC/OS](https://dcos.io/)(基于 [Apache Mesos](http://mesos.apache.org/))、[Kubernetes](https://kubernetes.io/) 还是 [Docker Swarm 模式](https://docs.docker.com/engine/swarm/),这些平台都允许您将专有云系统提供的虚拟机视为独立的 Linux 机器,并在此之上安装您的平台。您所需要的只是 Linux 而已,不会立即被锁定在特定云的工具或平台上。可以根据具体情况决定是否使用特定的专有后端,但如果这样做,请为将来可能的迁移留有余地。
有了这两种选择,你也可以选择完全离开云服务商。您可以部署自己的 OpenStack 云,或者将容器平台内部架构移动到您自己的数据中心。
### 做一个登月计划
最后,我想谈一谈开源项目的基础设施。今年 3 月,在 [南加州 Linux 展会](https://www.socallinuxexpo.org/) 上,多个开源项目的参与者讨论了如何为他们的项目运行开源的基础设施。(欲知详情,请阅读我的 [关于该会议的总结](https://opensource.com/article/17/3/growth-open-source-project-infrastructures)。)我认为这些项目正在做的工作是基础设施开源的最后一步。除了我们现在正在做的基本分享之外,我相信公司和组织可以在不放弃与竞争对手相区分的“独门秘方”的情况下,将更多的基础设施开源。
那些开源了自身基础设施的开源项目,已经证明了允许多个公司和组织向其基础设施提交有见地的 bug 报告、甚至补丁和特性的价值。突然之间,你可以邀请兼职的贡献者。你的客户可以通过了解你的基础设施、“深入引擎盖之下”,从而获得信心。
想要更多的证据吗?访问 [开源基础设施](https://opensourceinfra.org/) 的网站了解开源基础设施的项目(以及他们已经发布的大量基础设施)。
可以在 8 月 26 日在费城举办的 FOSSCON 大会上 Elizabeth K. Joseph 的演讲“[基础架构开源](https://fosscon.us/node/12637)”上了解更多。
(题图:[Jason Baker](https://opensource.com/users/jason-baker). [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/). Source: [Cloud](https://pixabay.com/en/clouds-sky-cloud-dark-clouds-1473311/), [Globe](https://pixabay.com/en/globe-planet-earth-world-1015311/). Both [CC0](https://creativecommons.org/publicdomain/zero/1.0/).)
---
via: <https://opensource.com/article/17/8/open-sourcing-infrastructure>
作者:[Elizabeth K. Joseph](https://opensource.com/users/pleia2) 译者:[wenzhiyi](https://github.com/wenzhiyi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Let's take a trip back in time to the 1990s, when proprietary software reigned, but open source was starting to come into its own. What caused this switch, and more importantly, what can we learn from it today as we shift into cloud-native environments?
## An infrastructure history lesson
I'll begin with a highly opinionated, open source view of infrastructure's history over the past 30 years. In the 1990s, Linux was merely a blip on most organizations' radar, if they knew anything about it. You had early buy-in from companies that quickly saw the benefits of Linux, mostly as a cheap replacement for proprietary Unix, but the standard way of deploying a server was with a proprietary form of Unix or—increasingly—by using Microsoft Windows NT.
The proprietary nature of this tooling provided a fertile ecosystem for even more proprietary software. Software was boxed up to be sold in stores. Even open source got in on the packaging game; you could buy Linux on the shelf instead of tying up your internet connection downloading it from free sources. Going to the store or working with your software vendor was just how you got software.

opensource.com
Where I think things changed was with the rise of the LAMP stack (Linux, Apache, MySQL, and PHP/Perl/Python).
The LAMP stack is a major success story. It was stable, scalable, and relatively user-friendly. At the same time, I started seeing dissatisfaction with proprietary solutions. Once customers had this taste of open source in the LAMP stack, they changed what they expected from software, including:
- reluctance to be locked in by a vendor,
- concern over security,
- desire to fix bugs themselves, and
- recognition that innovation is stifled when software is developed in isolation.
On the technical side, we also saw a massive change in how organizations use software. Suddenly, downtime for a website was unacceptable. There was a move to a greater reliance on scaling and automation. In the past decade especially, we've seen a move from the traditional "pet" model of infrastructure to a "cattle" model, where servers can be swapped out and replaced, rather than kept and named. Companies work with massive amounts of data, causing a greater focus on data retention and the speed of processing and returning that data to users.
Open source, with open communities and increasing investment from major companies, provided the foundation to satisfy this change in how we started using software. Systems administrators' job descriptions began requiring skill with Linux and familiarity with open source technologies and methodologies. Through the open sourcing of things like Chef cookbooks and Puppet modules, administrators could share the configuration of their tooling. No longer were we individually configuring and tuning MySQL in silos; we created a system for handling
the basic parts so we could focus on the more interesting engineering work that brought specific value to our employers.

Open source is ubiquitous today, and so is the tooling surrounding it. Companies once hostile to the idea are not only embracing open source through interoperability programs and outreach, but also by releasing their own open source software projects and building communities around it.

opensource.com
## Turning to the cloud
Today, we're living in a world of DevOps and clouds. We've reaped the rewards of the innovation that open source movements brought. There's a sharp rise in what Tim O'Reilly called "[inner-sourcing](https://opensource.com/life/16/11/create-internal-innersource-community)," where open source software development practices are adopted inside of companies. We're sharing deployment configurations for cloud platforms. Tools like Terraform are even allowing us to write and share how we deploy to specific platforms.
"Most people just consume the cloud without thinking ... many users are sinking cost into infrastructure that is not theirs, and they are giving up data and information about themselves without thinking."
—Edward Snowden, OpenStack Summit, May 9, 2017
It's time to put more thought into our knee-jerk reaction to move or expand to the cloud.
As Snowden highlighted, now we risk of losing control of the data that we maintain for our users and customers. Security aside, if we look back at our list of reasons for switching to open source, high among them were also concerns about vendor lock-in and the inability to drive innovation or even fix bugs.
Before you lock yourself and/or your company into a proprietary platform, consider the following questions:
- Is the service I'm using adhering to open standards, or am I locked in?
- What is my recourse if the service vendor goes out of business or is bought by a competitor?
- Does the vendor have a history of communicating clearly and honestly with its customers about downtime, security, etc.?
- Does the vendor respond to bugs and feature requests, even from smaller customers?
- Will the vendor use our data in a way that I'm not comfortable with (or worse, isn't allowed by our own customer agreements)?
- Does the vendor have a plan to handle long-term, escalating costs of growth, particularly if initial costs are low?
You may go through this questionnaire, discuss each of the points, and still decide to use a proprietary solution. That's fine; companies do it all the time. However, if you're like me and would rather find a more open solution while still benefiting from the cloud, you do have options.
## Beyond the proprietary cloud
As you look beyond proprietary cloud solutions, your first option to go open source is by investing in a cloud provider whose core runs on open source software. [OpenStack](https://www.openstack.org/) is the industry leader, with more than 100 participating organizations and thousands of contributors in its seven-year history (including me for a time). The OpenStack project has proven that interfacing with multiple OpenStack-based clouds is not only possible, but relatively trivial. The APIs are similar between cloud companies, so you're not necessarily locked in to a specific OpenStack vendor. As an open source project, you can still influence the features, bug requests, and direction of the infrastructure.
The second option is to continue to use proprietary clouds at a basic level, but within an open source container orchestration system. Whether you select [DC/OS](https://dcos.io/) (built on [Apache Mesos](http://mesos.apache.org/)), [Kubernetes](https://kubernetes.io/), or [Docker in swarm mode](https://docs.docker.com/engine/swarm/), these platforms allow you to treat the virtual machines served up by proprietary cloud systems as independent Linux machines and install your platform on top of that. All you need is Linux—and don't get immediately locked into the cloud-specific tooling or platforms. Decisions can be made on a case-by-case basis about whether to use specific proprietary backends, but if you do, try to keep an eye toward the future should a move be required.
With either option, you also have the choice to depart from the cloud entirely. You can deploy your own OpenStack cloud or move your container platform in-house to your own data center.
## Making a moonshot
To conclude, I'd like to talk a bit about open source project infrastructures. Back in March, participants from various open source projects convened at the [Southern California Linux Expo](https://www.socallinuxexpo.org/) to talk about running open source infrastructures for their projects. (For more, read my [summary of this event](https://opensource.com/article/17/3/growth-open-source-project-infrastructures).) I see the work these projects are doing as the final step in the open sourcing of infrastructure. Beyond the basic sharing that we're doing now, I believe companies and organizations can make far more of their infrastructures open source without giving up the "secret sauce" that distinguishes them from competitors.
The open source projects that have open sourced their infrastructures have proven the value of allowing multiple companies and organizations to submit educated bug reports, and even patches and features, to their infrastructure. Suddenly you can invite part-time contributors. Your customers can derive confidence by knowing what your infrastructure looks like "under the hood."
Want more evidence? Visit [Open Source Infrastructure](https://opensourceinfra.org/)'s website to learn more about the projects making their infrastructures open source (and the extensive amount of infrastructure they've released).
*Learn more in Elizabeth K. Joseph's talk, The Open Sourcing of Infrastructure, at FOSSCON August 26th in Philadelphia.*
8,833 | Fedora 的 Yum 或将在一两年内退休 | http://www.phoronix.com/scan.php?page=news_item&px=Fedora-Yum-Retirement | 2017-09-03T11:38:00 | [
"DNF",
"Yum"
] | https://linux.cn/article-8833-1.html | 
随着 DNF 软件包管理器在最近的 Fedora 版本里日益工作良好,我们可以预见到 Yum 将在之后的 Fedora 版本中谢幕。
当然,Yum 还一直广泛用在 RHEL 7 中,而在 Fedora 这边,估计在大约一年后的 Fedora 28 乃至 29 中正式退休。
在 Fedora 开发者邮件列表中有一个讨论 [Yum 退休](https://lists.fedoraproject.org/archives/list/[email protected]/thread/GF6THFF5FXCNTKHVLVRRFHS46BTDPO5Y/)的新话题。看起来在 Fedora 28 或 29 的时候会移除 Yum。DNF 已经提供了与 Yum 一样的能力。Fedora 也在开发一个“富依赖”的支持,而这个功能 Yum 不支持,所以这也表明了 Yum 将在以后的 Fedora 系统中消失。
邮件列表中也提到了 Yum 和 DNF 还存在一些差异需要解决,但是看起来在 2018 年应该可以看到希望。
| 301 | Moved Permanently | null |
8,834 | GNU GPL 许可证常见问题解答(二) | https://www.gnu.org/licenses/gpl-faq.html | 2017-09-04T09:26:00 | [
"GPL",
"许可证"
] | https://linux.cn/article-8834-1.html | 
本文由高级咨询师薛亮据自由软件基金会(FSF)的[英文原文](https://www.gnu.org/licenses/gpl-faq.html)翻译而成,这篇常见问题解答澄清了在使用 GNU 许可证中遇到许多问题,对于企业和软件开发者在实际应用许可证和解决许可证问题时具有很强的实践指导意义。
1. [关于 GNU 项目、自由软件基金会(FSF)及其许可证的基本问题](/article-8761-1.html)
2. 对于 GNU 许可证的一般了解
3. 在您的程序中使用 GNU 许可证
4. 依据 GNU 许可证分发程序
5. 在编写其他程序时采用依据 GNU 许可证发布的程序
6. 将作品与依据 GNU 许可证发布的代码相结合
7. 关于违反 GNU 许可证的问题
### **2、****对于** **GNU** **许可证的一般了解**
#### 2.1 为什么 GPL 允许用户发布其修改版本?
自由软件的一个关键特点是用户可以自由合作。绝对有必要允许希望彼此帮助的用户与其他用户分享他们对错误的修复和改进。
有些人提出了 GPL 的替代方案,需要原作者批准修改版本。只要原作者持续进行维护,这种做法在实践中可能会不错,但是如果作者停止维护(或多或少会)去做别的事情,或并不打算去满足所有用户的需求,这种替代方案就会失败。除了实践上的问题之外,该方案也不允许用户之间互相帮助。
有时候对修改版本的控制,是为了防止用户制作的各种版本之间造成混淆。根据我们的经验,这种混乱不是一个大问题。在 GNU 项目之外出现了许多版本的 Emacs,但用户仍可以将它们区分开。GPL 要求版本创造者将他/她的名字标注其上,以区别于其他版本,并保护其他维护者的声誉。
#### 2.2 GPL 是否要求将修改版本的源代码公开发布?
GPL 不要求您发布修改后的版本或其中的任何一部分。您可以自由地进行修改并私人使用它们,而无需进行发布。这也适用于组织(包括企业);组织可以制作修改版本,在内部使用它,并且绝不将修改版本发布到组织之外。
但是,*如果*您以某种方式向公众发布修改后的版本,依据 GPL 许可证,GPL 要求您保证程序的用户可以获得修改后源代码。
因此,GPL 给你以某些特定方式发布修改后的程序的授权,而不是以其他方式发布;但是,是否发布修改版本的决定取决于您。
#### 2.3 我可以在同一台电脑上安装一个遵循 GPL 许可证的程序和一个不相关的非自由程序吗?
可以。
#### 2.4 如果我知道某些人有一个遵循 GPL 许可证的程序的副本,我可以要求他们给我一个副本吗?
不可以,GPL 给人们制作和再分发程序副本的授权,倘若有人选择这样做的话。那个人也有权选择不去再分发该程序。
#### 2.5 GPL v2 中的“<ruby> 书面文件 <rp> ( </rp> <rt> written offer </rt> <rp> ) </rp></ruby>对任何第三方有效”是什么意思? 这是否意味着世界上每个人都可以获得任何遵循 GPL 许可证的程序的源代码?
如果您选择通过书面文件提供源代码,那么任何向您索求源代码的人都有权收到。
如果您对不附带源代码的二进制文件进行商业分发,GPL 要求您必须提供一份书面文件,表明将稍后分发源代码。当用户非商业性地再分发他们从您那里获取的二进制文件时,他们必须传递这份书面文件的副本。这意味着不能直接从您那里获取二进制文件的人,仍然可以收到源代码副本以及该书面文件。
我们要求书面文件对任何第三方有效的原因是,以这种方式间接收到二进制代码的人可以从您那里订购源代码。
#### 2.6 GPL v2 中规定,发布后的修改版本必须“授予…所有第三方许可”。这些第三方是谁?
第 2 节中规定,依据 GPL,您分发的修改版本必须授权给所有第三方。 “所有第三方”当然是指所有人,但这并不要求您为他们*做*任何事情。这仅仅意味着他们依据 GPL 从您那里获得了您的修改版本的许可。
#### 2.7 GPL 允许我出售程序的副本以获利吗?
是的,GPL 允许每个人这样做。[销售副本的权利](https://www.gnu.org/philosophy/selling.html)是自由软件定义的一部分。除了一种特殊情况之外,对您可以收取什么样的价格是没有限制的。(这种例外情况是:二进制版本必须附有表明将提供源代码的书面文件。)
#### 2.8 GPL 允许我从我的分发网站收取下载程序的费用吗?
允许。您可以对您分发的程序副本收取任何您想收取的费用。如果您通过下载分发二进制文件,则必须提供“等同的访问权限”来让人们下载源代码,因此,下载源代码的费用不得超过下载二进制文件的费用。
#### 2.9 GPL 允许我要求任何接收软件的人必须向我支付费用和/或通知我吗?
不允许。实际上,这样的要求会使程序变成非自由软件。如果人们在获得程序副本时必须支付费用,或者他们必须特别通知任何人,那么这个程序就不是自由软件。请参阅[自由软件的定义](https://www.gnu.org/philosophy/free-sw.html)。
GPL 是自由软件许可证,因此允许人们使用甚至再分发软件,而不需要向任何人支付费用。
您可以向用户收取费用,[以获取您的副本](https://www.gnu.org/licenses/gpl-faq.html#DoesTheGPLAllowMoney)。当他们*从别人那里*获得副本时,您不能要求人们向您支付费用。
#### 2.10 如果我收费分发遵循 GPL 的软件,我是否还需要向公众免费提供?
不需要。不过,如果有人向您支付费用并获得副本,GPL 给予他们免费或收费向公众发布的自由。例如,有人可以向您支付费用,然后把他的副本放在一个网站上,让公众去获取。
#### 2.11 GPL 允许我根据保密协议分发副本吗?
不允许。GPL 规定,任何从您那里获取副本的人都有权再分发已修改或未修改的副本。您不得在任何更严格的基础上分发作品。
如果有人要求您签署保密协议(NDA),以获取来自 FSF 的遵循 GPL 的软件,请立即通知我们 <[email protected]>。
如果违规行为涉及其他版权所有者的遵循 GPL 的代码,请通知该版权所有者,就像您对其他任何类型的 GPL 违规行为所做的工作一样。
#### 2.12 GPL 允许我根据保密协议分发程序的修改版或测试版吗?
不可以。GPL 规定,您的修改版本必须具备 GPL 中规定的所有自由。因此,从您那里获取您的版本副本的任何人都有权再分发该版本的副本(已修改或未修改)。您不得在更严格的基础上分发该作品的任何版本。
#### 2.13 GPL 是否允许我根据保密协议开发程序的修改版本?
可以。例如,您可以接受一份合同来开发修改版本,并同意不得发布*您的修改版本*,直到客户同意才可以发布。这是允许的,因为这种情况下不处于 GPL 协议之下的代码是依据 NDA 分发的。
您还可以依据 GPL 将您的修改版本发布给客户,但须同意不将其发布给其他任何人,除非客户同意。也同样在这种情况下,不处于 GPL 协议之下的代码是以保密协议或任何其他限制条款分发的。
GPL 将给予客户再分发您的版本的权利。在这种情况下,客户可能会选择不行使这项权利,但他确实*拥有*这项权利。
#### 2.14 为什么 GPL 要求程序的每个副本必须包含 GPL 许可证副本?
作品包含许可证副本至关重要,因此获得程序副本的每个人都可以知道他们的权利是什么。
包括一个指向许可证的 URL,而不是将许可证本身包含在内,这是一种看起来很诱人的做法。但是您不能确定该 URL 在五年或十年后仍然有效。二十年后,我们今天所知道的 URL 们可能已不复存在。
不管网络将发生什么样的变化,确保拥有该程序副本的人员能够继续看到 GPL 许可证的唯一方法是,将许可证的副本包含在该程序中。
#### 2.15 如果作品不是很长,那该怎么办?
如果整个软件包中只有很少的代码——我们使用的基准是不到 300 行,那么您可以使用一个宽松的许可证,而不是像 GNU GPL这样的 Copyleft 许可证(除非代码特别重要)。我们[建议这种情况使用 Apache License 2.0](https://www.gnu.org/licenses/license-recommendations.html#software)。
#### 2.16 我是否需要将我对遵循 GPL 的程序所做的修改声明版权?
您不需要对您的修改声明版权。不过,在大多数国家/地区,默认情况下会自动获得版权,因此,如果您不希望修改受到版权限制,您需要将修改明显地放置于公有领域。
无论您是否对您的修改声明版权,只要您发布了修改版本,依据 GPL,您都必须将其作为整体依据 GPL 发布(参见:2.2 GPL 是否要求将修改版本的源代码公开发布?)。
#### 2.17 GPL 对于将某些代码翻译成不同的编程语言是如何规定的?
根据著作权法,翻译工作被认为是一种修改。因此,GPL 对修改版本的规定也适用于翻译版本。
#### 2.18 如果一个程序将公有领域代码与遵循 GPL 的代码相结合,我可以取出公有领域的部分,并作为公有领域代码来使用吗?
您可以这样做,如果您能弄清楚哪个部分是公有领域的部分,并将其与其他部分区分开。如果代码曾经由其开发人员放置在公有领域,那么它就是公有领域代码,无论它现在究竟在哪里。
#### 2.19 我想要因我的作品获得声誉。我想让人们知道我写了什么,如果我使用 GPL,我还能获得声誉吗?
您一定能获得这份作品的声誉。遵循 GPL 发布的程序的一部分是以您自己名义撰写的版权声明(假设您是版权所有者)。GPL 要求所有副本携带恰当的版权声明。
#### 2.20 GPL 允许我添加条款,要求在使用遵循 GPL 的软件或其输出物的研究论文中包含引用或致谢吗?
不可以,根据 GPL 的规定,这是不允许的。虽然我们认识到适当的引用是学术出版物的重要组成部分,但引用不能作为对 GPL 的附加要求。对使用 GPL 软件的研究论文要求包含引用,超出了 GPL v3 第 7(b)条中可接受的附加要求,因此将被视为对 GPL 第 7 节的额外限制。而且版权法也不允许您在[软件的输出物](https://www.gnu.org/licenses/gpl-faq.html#GPLOutput)中设置这样的要求,无论该软件是依据 GPL 还是其他许可证的条款获得许可。
#### 2.21 为了节省空间,我是否可以省略 GPL 的引言部分,或者省略如何在自己的程序上使用GPL的<ruby> 指导 <rp> ( </rp> <rt> instructions </rt> <rp> ) </rp></ruby>部分吗?
引言和指导是 GNU GPL 的组成部分,不能省略。事实上,GPL 是受版权保护的,其许可证规定只能逐字复制整个 GPL。(您可以使用法律条款制作[另一个许可证](https://www.gnu.org/licenses/gpl-faq.html#ModifyGPL),但该许可证不再是 GNU GPL。)
引言和指导部分共约 1000 字,不到 GPL 总文字数量的 1/5。除非软件包本身很小,否则引言和指导不会对软件包的大小产生大幅度的改变。在软件包很小的情况下,您可以使用一个简单的<ruby> 全权 <rp> ( </rp> <rt> all-permissive </rt> <rp> ) </rp></ruby>许可证,而不是 GNU GPL。
#### 2.22 两个许可证“兼容”是指什么?
为了将两个程序(或它们的实质部分)组合成一个更大的作品,您需要有以组合方式使用这两个程序的权限。如果两个程序的许可证允许这种使用方式,则它们是兼容的。如果没有办法同时满足这两个许可证,则它们是不兼容的。
对于一些许可证,组合的方式可能会影响它们是否兼容,例如,它们可能允许将两个模块链接在一起,但不允许将其代码合并到一个模块中。
如果您只想在同一个系统中安装两个独立的程序,那么它们的许可证并不是必须兼容的,因为它们没有组合成更大的作品。
#### 2.23 许可证与 GPL 兼容是什么意思?
这意味着其他许可证和 GNU GPL 兼容;在一个更大的程序中,您可以将根据其他许可证发布的代码与根据 GNU GPL 发布的代码进行组合。
所有 GNU GPL 版本都允许进行这种组合;它们还允许分发这些组合,只要该组合在相同的 GNU GPL 版本下发布。其他许可证如果允许这样做,则其与 GPL 兼容。
与 GPL v2 相比,GPL v3 与更多的许可证兼容:它允许您与具有特定类型附加要求(GPL v3 本身不包含)的代码进行组合。GPL v3 第 7 节中有关于此问题的更多信息,包括了允许的附加要求的列表。
#### 2.24 为什么原始的 BSD 许可证与 GPL 不兼容?
因为它规定了 GPL 不包含的具体要求,即对程序广告的要求。GPL v2 第 6 节规定:
>
> *您不得对接受者行使本协议授予的权利施加进一步的限制。*
>
>
>
GPL v3 在第 10 节中提到类似的内容。广告条款正好提供了这样一个限制,因此与 GPL 不兼容。
修订的 BSD 许可证没有广告条款,从而消除了这个问题。
#### 2.25 “<ruby> 聚合 <rp> ( </rp> <rt> aggregate </rt> <rp> ) </rp></ruby>”与其他类型的“修改版本”有什么区别?
“聚合”由多个单独的程序组成,它们被一起分发于同一张 CD-ROM 或其他媒介中。GPL 允许您创建和分发聚合,即使其他软件的许可证不是自由许可证或与 GPL 不兼容。唯一的条件是,发布“聚合”所使用的许可证不能禁止用户行使“聚合”中每个程序对应的许可证所赋予他们的权利。
两个单独的程序还是一个程序有两个部分,区分的界限在哪里?这是一个法律问题,最终由法官决定。我们认为,适当的判断标准取决于通信机制(exec、管道、rpc、共享地址空间内的函数调用等)和通信的语义(哪些信息被互换)。
如果这些模块被包含在同一个可执行文件中,则它们肯定是组合在一个程序中。如果这些模块被设计为在共享地址空间中链接在一起运行,那么几乎可以肯定它们组合成了一个程序。
相比之下,管道、套接字和命令行参数是通常在两个独立程序之间使用的通信机制。所以当使用它们进行通信时,这些模块通常是单独的程序。但是,如果通信的语义足够紧密,交换了复杂的内部数据结构,那么也可以认为这两个部分合并成了一个更大的程序。
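下面用一小段示意代码对比这两种组合方式(Python 只是演示用的语言,其中的 `wc` 命令与 `other_tool` 模块名均为假设的例子,并非任何权威判定标准):

```python
import subprocess

# 方式一:通过管道调用另一个独立的可执行程序(这里借用系统自带的 wc),
# 双方只交换简单的字节流,通常被视为两个单独的程序。
result = subprocess.run(["wc", "-c"], input="hello\n",
                        capture_output=True, text=True)
print("来自独立程序的输出:", result.stdout.strip())

# 方式二(对比):如果改为把对方作为模块导入,在同一进程的共享地址空间里
# 直接调用函数、交换复杂的内部数据结构,那么几乎可以肯定构成一个组合程序:
#
#     import other_tool
#     count = other_tool.count(complex_internal_structure)
```

当然,最终的界限仍如上文所说,由通信机制和通信语义共同决定,必要时由法官裁定。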
#### 2.26 为什么 FSF 要求为 FSF 拥有版权的程序做出贡献的贡献者将版权<ruby> 分配 <rp> ( </rp> <rt> assign </rt> <rp> ) </rp></ruby>给 FSF?如果我持有 GPL 程序的版权,我也应该这样做吗?如果是,怎么做?(同 1.11)
我们的律师告诉我们,为了在法庭上处于最有利的位置来针对违规者执行 GPL,我们应该让程序的版权状况尽可能简单。为了做到这一点,我们要求每个贡献者将贡献部分的版权分配给 FSF,或者放弃对贡献部分的版权要求。
我们也要求个人贡献者从雇主那里获得版权放弃声明(如果有的话),以确保雇主不会声称拥有这部分贡献的版权。
当然,如果所有的贡献者都把他们的代码放在公有领域,也就没有可用来执行 GPL 许可证的版权了。所以我们鼓励人们为大规模的代码贡献分配版权,只把小规模的修改放在公有领域。
如果您想要在您的程序中执行 GPL,遵循类似的策略可能是一个好主意。如果您需要更多信息,请联系<[email protected]>。
#### 2.27 如果我使用遵循 GNU GPL 的软件,那么允许我将原始代码修改为新程序,然后在商业上分发和销售新程序吗?
您被允许在商业上出售修改程序的副本,但仅限于在 GNU GPL 的条款之下这么做。因此,例如,您必须依照 GPL 所述,使得程序用户可获取源代码,并且,用户也依照 GPL 所述,被允许再分发和修改程序。
这些要求是将遵循 GPL 的代码包含在您自己的程序中的条件。
#### 2.28 我可以将 GPL 应用于软件以外的其他作品吗?(同 1.6)
您可以将 GPL 应用于任何类型的作品,只需明确该作品的“源代码”构成即可。GPL 将“源代码”定义为作品的首选形式,以便在其中进行修改。
不过,对于手册和教科书,或更一般地,任何旨在教授某个主题的作品,我们建议使用 GFDL,而非 GPL。
#### 2.29 我想依据 GPL 授权我的代码,但我也想明确指出,它不能用于军事和/或商业用途。我可以这样做吗?
不可以,因为这两个目标相互矛盾。GNU GPL 被专门设计为防止添加进一步的限制。GPL v3 在第 7 节允许非常有限的一组附加限制,但是用户可以删除任何其他添加的限制。
#### 2.30 我可以依据 GPL 来对硬件进行许可吗?
任何可以受版权保护的<ruby> 材料 <rp> ( </rp> <rt> material </rt> <rp> ) </rp></ruby>都可以依据 GPL 进行许可。GPL v3 也可以用于受其他类似版权法的法律保护的材料,如半导体掩模。因此,作为一个例子,您可以依据 GPL 发布物理对象的图纸或电路。
在许多情况下,版权并不涵盖依照图纸制作物理硬件的行为。在这种情况下,无论您使用什么许可证,您对图纸的许可都不能对物理硬件的制造或销售施加任何控制。而当版权确实涵盖硬件制作时(例如 IC 掩模),GPL 就能以有效的方式处理这种情况。
#### 2.31 将遵循 GPL 的二进制文件<ruby> 预链接 <rp> ( </rp> <rt> prelinking </rt> <rp> ) </rp></ruby>到系统上的各种库,以优化其性能,将被视为修改吗?
不。<ruby> 预链接 <rp> ( </rp> <rt> prelinking </rt> <rp> ) </rp></ruby>是编译过程的一部分;与编译过程所涉及的其他各个方面相比,预链接没有引入更多的许可证要求。如果您被允许将程序链接到各种库,那么也可以预链接到各种库。如果您分发预链接后的目标码,则需要遵循第 6 节的条款。
#### 2.32 LGPL 如何与 Java 配合使用?
详情请参阅[这篇文章](https://www.gnu.org/licenses/lgpl-java.html)。LGPL 与 Java 的配合正如其设计、预想和期待的那样工作。
#### 2.33 为什么你们在 GPL v3 中发明新的术语“<ruby> 传播 <rp> ( </rp> <rt> propagate </rt> <rp> ) </rp></ruby>”和“<ruby> 传递 <rp> ( </rp> <rt> convey </rt> <rp> ) </rp></ruby>”?
(译者注:convey 这个词汇在一个 GPL 译本中被翻译为“转发”,此处,我们认为“转发”一词在计算机领域存在一定的歧义,因此,采用“传递”的翻译。)
GPL v2 中使用的“分发”一词来自美国版权法。多年来,我们了解到,一些司法管辖区在自己的版权法中使用了同一个词,但却给出了不同的含义。我们发明了这些新术语,无论在何处对许可证进行解释,都使我们的意图尽可能清楚。这些新术语没有使用在世界上的任何版权法中,我们直接在许可证中提供了它们的定义。
#### 2.34 在 GPL v3 中的“<ruby> 传递 <rp> ( </rp> <rt> convey </rt> <rp> ) </rp></ruby>”与 GPL v2 中的“<ruby> 分发 <rp> ( </rp> <rt> distribute </rt> <rp> ) </rp></ruby>”意思一样吗?
是的,差不多是一个意思。在执行 GPL v2 的过程中,我们了解到一些司法管辖区在自己的版权法中使用了“分发”这个词,但给出了不同的含义。我们发明了一个新的术语,以使我们的意图清晰,避免这些差异可能引起的任何问题。
#### 2.35 如果我只复制遵循 GPL 的程序并运行它们,而不分发或传递给其他人,许可证对我施加什么要求?
没有要求。GPL 不对此活动附加任何条件。
#### 2.36 GPL v3 将“向公众提供”作为<ruby> 传播 <rp> ( </rp> <rt> propagate </rt> <rp> ) </rp></ruby>的一个例子。这是什么意思?“向公众提供”是“传递”的一种形式吗?
“向公众提供”的一个例子是将该软件放在公共网页或 FTP 服务器上。在您这样做之后,在任何人从您那里真正获取软件之前,可能会需要一段时间,但是由于这种情况可能会立即发生,您也需要能够立即履行 GPL 的义务。因此,我们将“<ruby> 传递 <rp> ( </rp> <rt> convey </rt> <rp> ) </rp></ruby>”定义为包括这一活动。
#### 2.37 鉴于分发和向公众提供作为传播形式,同样也构成了 GPL v3 中的“<ruby> 传递 <rp> ( </rp> <rt> conveying </rt> <rp> ) </rp></ruby>”,那么有哪些传播的例子不构成传递吗?
为自己制作软件的副本是不构成“传递”的主要传播形式。您可以依此在多台计算机上安装软件,或进行备份。
#### 2.38 GPL v3 如何让 BitTorrent 分发变得更容易?
因为 GPL v2 是在软件的点对点分发普及之前编写的,所以当您利用这种方式分享代码时,很难满足 GPL v2 的要求。在 BitTorrent 上分发 GPL v2 目标代码时,确保您合规的最佳方法是将所有相应的源代码包含在相同的<ruby> 种子文件 <rp> ( </rp> <rt> Torrent </rt> <rp> ) </rp></ruby>中,但这种方式代价高昂。
GPL v3 以两种方式解决了这个问题。首先,作为该过程的一部分,下载此种子文件并将数据发送给其他人的人不需要做任何事情。因为第 9 节规定:“受保护作品的辅助传播如果仅仅是使用点对点传输来接收副本,不需要接受[本许可证]。”
第二,通过告知接收人在公共网络服务器上哪里可获取,GPL v3 的第 6(e)节旨在给予分发者(最初制作种子文件的人)一种清晰、直观的方式来提供源代码。这样可以确保每个想要获取源代码的人都可以如此获取,而且分发者几乎不用担心。
#### 2.39 什么是 <ruby> TiVo化 <rp> ( </rp> <rt> tivoization </rt> <rp> ) </rp></ruby>? GPL v3 对此如何防止?
一些设备使用可升级的自由软件,但被设计为用户不能修改该软件。有很多不同的方式可以做到这一点;例如,有时硬件校验所安装的软件,如果与预期签名不匹配,则关闭软件。制造商通过提供源代码来遵循 GPL v2,但是您仍然无法自由修改您使用的软件。我们称这种做法为<ruby> TiVo 化 <rp> ( </rp> <rt> tivoization </rt> <rp> ) </rp></ruby>。
当人们分发包含遵循 GPL v3 软件的“<ruby> 用户产品 <rp> ( </rp> <rt> User Products </rt> <rp> ) </rp></ruby>”时,第 6 节要求他们为您提供修改该软件所需的信息。“用户产品”是该许可证特别定义的术语;“用户产品”的示例包括便携式音乐播放器、数字录像机和家庭安全系统。
#### 2.40 GPL v3 是否禁止 DRM?
不禁止,您可以使用遵循 GPL v3 发布的代码来开发您喜欢的任何类型的 DRM 技术。不过,如果您这样做,第 3 节规定,该系统不会被视为一种有效的技术“保护”措施,这意味着如果有人破坏了 DRM,他也可以自由分发他的软件,不受《美国数字千年版权法》(DMCA)和类似法律的限制。
像往常一样,GNU GPL 并不限制人们在软件中怎么做,只是阻止他们限制他人。
#### 2.41 GPL v3 是否要求投票人能够修改在投票机中运行的软件?
不要求。企业分发包含遵循 GPL v3 软件的设备,最多只需要为拥有目标代码副本的人提供软件的源代码和安装信息。使用投票机(如同任何其他信息亭一样)的选民不能拥有它,甚至不能暂时拥有,所以选民也不能拥有二进制软件。
不过,请注意,投票是一个非常特殊的情况。仅仅因为计算机中的软件是自由软件,并不意味着您可以信任计算机,并进行投票。我们认为电脑不值得信任,不能被用作投票。投票应在纸上进行。
#### 2.42 GPL v3 是否包含“专利报复条款”?
实际上,是的。第 10 节禁止传递该软件的人向其他被许可人发起专利诉讼。如果有人这样做,第 8 节解释他们将如何丧失许可证权益以及与之伴随的所有专利许可。
#### 2.43 在 GPL v3 和 AGPL v3中,当说到“尽管有本许可证的其他规定”时,是什么意思?
这仅仅意味着以下条款胜过许可证中可能与之冲突的其他任何内容。例如,如果没有该文本,有些人可能声称,您不能将遵循 GPL v3 的代码与遵循 AGPL v3 的代码结合在一起,因为 AGPL 的附加要求将被归类为 GPL v3 第 7 节下的“进一步限制”。该文本明确表示我们的预期解释是正确的,您可以进行组合。
该文本仅解决许可证不同条款之间的冲突。当两个条件之间没有冲突的时候,您必须同时满足它们。这些段落不允许您轻率地忽略许可证的其余部分,它们只是强调了一些非常有限的例外。
#### 2.44 在 AGPL v3 中,怎么才算是“通过计算机网络远程与[软件]进行交互”?
如果程序被明确设计为接受用户请求并通过网络发送响应,则它符合这些标准。属于此类程序的常见示例包括网络和邮件服务器、交互式网络应用程序和在线游戏的服务器。
如果程序没有被明确设计为通过网络与用户进行交互,而只是恰好运行在这样的环境中,那么它不属于此类。例如,一个程序不会仅仅因为用户通过 SSH 或远程 X 会话运行它,就需要提供源代码。
#### 2.45 GPL v3 中“<ruby> 您 <rp> ( </rp> <rt> you </rt> <rp> ) </rp></ruby>”的概念如何与 Apache License 2.0 中“<ruby> 法律实体 <rp> ( </rp> <rt> Legal Entity </rt> <rp> ) </rp></ruby>”的定义相比较?
它们是完全相同的。Apache License 2.0 中“法律实体”的定义在各种法律协议中是非常标准的,因此,如果法院在缺乏明确定义的情况下没有以同样的方式解释该术语,将会令人非常惊讶。我们完全期待他们在看 GPL v3 时也会如此,并考虑到谁有资格作为被许可人。
#### 2.46 在 GPL v3 中,“<ruby> 该程序 <rp> ( </rp> <rt> the Program </rt> <rp> ) </rp></ruby>”是指什么? 是每个依据 GPL v3 发布的程序?
术语“该程序”是指依据 GPL v3 进行许可、由特定被许可人从上游许可人或分发者那里接收的特定作品。“该程序”就是您在某次特定的 GPL v3 许可授权中接收到的那个特定软件作品。
“该程序”并不意味着“依据 GPL v3 进行许可的所有作品”;由于一些原因,这种解释没有任何意义。针对那些想要了解更多的人,我们已经发表了对“该程序”一词的[分析](https://www.gnu.org/licenses/gplv3-the-program.html)。
#### 2.47 如果某些网络客户端软件依据 AGPL v3 发布,是否必须能够向与之交互的服务器提供源代码?
AGPL v3 需要该程序向“所有通过计算机网络进行远程交互的用户”提供源代码。至于您将程序称为“客户端”还是“服务器”,那无关紧要。您需要询问的问题是,是否存在合理的期望让一个人通过网络远程与该程序交互。
#### 2.48 对于一个运行在代理服务器上的依据 AGPL 进行许可的软件来说,如何向与该软件交互的用户提供源代码书面文件?
对于代理服务器上的软件,您可以通过向此类代理的用户传递消息的常规方法来提供源代码书面文件。例如,Web 代理可以使用登录页。当用户最初开始使用代理时,您可以将他们引导到包含源代码书面文件以及您选择提供的任何其他信息的页面。
AGPL 规定,您必须向“所有用户”提供书面文件。如果您知道某个用户已经被展示过书面文件,对于当前版本的软件,您不必再重复向该用户提供。
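下面是一个最简示意(基于 Python 标准库;其中的源代码下载地址与端口均为假设,仅作演示,也并非 AGPL 规定的唯一做法),展示此类服务如何用一个登录页向用户提供源代码书面文件:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# 登录页内容:告知用户软件依据 AGPL v3 许可,并给出获取对应源代码的途径。
LANDING_PAGE = """\
<html><body>
<p>本代理服务所运行的软件依据 GNU AGPL v3 许可发布。</p>
<p>您可以在此获取其完整的对应源代码:
<a href="https://example.com/proxy-source.tar.gz">proxy-source.tar.gz</a></p>
</body></html>"""

class LandingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 用户最初开始使用代理时,先被引导到这个包含源代码书面文件的页面
        body = LANDING_PAGE.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), LandingHandler).serve_forever()
```

对于已经看到过该页面的用户,正如上文所说,在软件版本不变的情况下无需重复提供。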
| 200 | OK | ## Frequently Asked Questions about the GNU Licenses
You can use our publications to understand how GNU licenses work or help you advocate for free software, but they are not legal advice. The FSF cannot give legal advice. Legal advice is personalized advice from a lawyer who has agreed to work for you. Our answers address general questions and may not apply in your specific legal situation.
- What does “GPL” stand for?
(
[#WhatDoesGPLStandFor](#WhatDoesGPLStandFor)) “GPL” stands for “General Public License”. The most widespread such license is the GNU General Public License, or GNU GPL for short. This can be further shortened to “GPL”, when it is understood that the GNU GPL is the one intended.
- Does free software mean using
the GPL?
(
[#DoesFreeSoftwareMeanUsingTheGPL](#DoesFreeSoftwareMeanUsingTheGPL)) Not at all—there are many other free software licenses. We have an
[incomplete list](/licenses/license-list.html). Any license that provides the user[certain specific freedoms](/philosophy/free-sw.html)is a free software license.- Why should I use the GNU GPL rather than other
free software licenses?
(
[#WhyUseGPL](#WhyUseGPL)) Using the GNU GPL will require that all the
[released improved versions be free software](/philosophy/pragmatic.html). This means you can avoid the risk of having to compete with a proprietary modified version of your own work. However, in some special situations it can be better to use a[more permissive license](/licenses/why-not-lgpl.html).- Does all GNU
software use the GNU GPL as its license?
(
[#DoesAllGNUSoftwareUseTheGNUGPLAsItsLicense](#DoesAllGNUSoftwareUseTheGNUGPLAsItsLicense)) Most GNU software packages use the GNU GPL, but there are a few GNU programs (and parts of programs) that use looser licenses, such as the Lesser GPL. When we do this, it is a matter of
[strategy](/licenses/why-not-lgpl.html).- Does using the
GPL for a program make it GNU software?
(
[#DoesUsingTheGPLForAProgramMakeItGNUSoftware](#DoesUsingTheGPLForAProgramMakeItGNUSoftware)) Anyone can release a program under the GNU GPL, but that does not make it a GNU package.
Making the program a GNU software package means explicitly contributing to the GNU Project. This happens when the program's developers and the GNU Project agree to do it. If you are interested in contributing a program to the GNU Project, please write to
[<[email protected]>](mailto:[email protected]).- What should I do if I discover a possible
violation of the GPL?
(
[#ReportingViolation](#ReportingViolation)) You should
[report it](/licenses/gpl-violation.html). First, check the facts as best you can. Then tell the publisher or copyright holder of the specific GPL-covered program. If that is the Free Software Foundation, write to[<[email protected]>](mailto:[email protected]). Otherwise, the program's maintainer may be the copyright holder, or else could tell you how to contact the copyright holder, so report it to the maintainer.- Why
does the GPL permit users to publish their modified versions?
(
[#WhyDoesTheGPLPermitUsersToPublishTheirModifiedVersions](#WhyDoesTheGPLPermitUsersToPublishTheirModifiedVersions)) A crucial aspect of free software is that users are free to cooperate. It is absolutely essential to permit users who wish to help each other to share their bug fixes and improvements with other users.
Some have proposed alternatives to the GPL that require modified versions to go through the original author. As long as the original author keeps up with the need for maintenance, this may work well in practice, but if the author stops (more or less) to do something else or does not attend to all the users' needs, this scheme falls down. Aside from the practical problems, this scheme does not allow users to help each other.
Sometimes control over modified versions is proposed as a means of preventing confusion between various versions made by users. In our experience, this confusion is not a major problem. Many versions of Emacs have been made outside the GNU Project, but users can tell them apart. The GPL requires the maker of a version to place his or her name on it, to distinguish it from other versions and to protect the reputations of other maintainers.
- Does the GPL require that
source code of modified versions be posted to the public?
(
[#GPLRequireSourcePostedPublic](#GPLRequireSourcePostedPublic)) The GPL does not require you to release your modified version, or any part of it. You are free to make modifications and use them privately, without ever releasing them. This applies to organizations (including companies), too; an organization can make a modified version and use it internally without ever releasing it outside the organization.
But
*if*you release the modified version to the public in some way, the GPL requires you to make the modified source code available to the program's users, under the GPL.Thus, the GPL gives permission to release the modified program in certain ways, and not in other ways; but the decision of whether to release it is up to you.
- Can I have a GPL-covered
program and an unrelated nonfree program on the same computer?
(
[#GPLAndNonfreeOnSameMachine](#GPLAndNonfreeOnSameMachine)) Yes.
- If I know someone has a copy of a GPL-covered
program, can I demand they give me a copy?
(
[#CanIDemandACopy](#CanIDemandACopy)) No. The GPL gives a person permission to make and redistribute copies of the program
*if and when that person chooses to do so*. That person also has the right not to choose to redistribute the program.- What does “written offer
valid for any third party” mean in GPLv2? Does that mean
everyone in the world can get the source to any GPLed program
no matter what?
(
[#WhatDoesWrittenOfferValid](#WhatDoesWrittenOfferValid)) If you choose to provide source through a written offer, then anybody who requests the source from you is entitled to receive it.
If you commercially distribute binaries not accompanied with source code, the GPL says you must provide a written offer to distribute the source code later. When users non-commercially redistribute the binaries they received from you, they must pass along a copy of this written offer. This means that people who did not get the binaries directly from you can still receive copies of the source code, along with the written offer.
The reason we require the offer to be valid for any third party is so that people who receive the binaries indirectly in that way can order the source code from you.
- GPLv2 says that modified
versions, if released, must be “licensed … to all third
parties.” Who are these third parties?
(
[#TheGPLSaysModifiedVersions](#TheGPLSaysModifiedVersions)) Section 2 says that modified versions you distribute must be licensed to all third parties under the GPL. “All third parties” means absolutely everyone—but this does not require you to
*do*anything physically for them. It only means they have a license from you, under the GPL, for your version.- Am I required to claim a copyright
on my modifications to a GPL-covered program?
(
[#RequiredToClaimCopyright](#RequiredToClaimCopyright)) You are not required to claim a copyright on your changes. In most countries, however, that happens automatically by default, so you need to place your changes explicitly in the public domain if you do not want them to be copyrighted.
Whether you claim a copyright on your changes or not, either way you must release the modified version, as a whole, under the GPL (
[if you release your modified version at all](#GPLRequireSourcePostedPublic)).- What does the GPL say about translating
some code to a different programming language?
(
[#TranslateCode](#TranslateCode)) Under copyright law, translation of a work is considered a kind of modification. Therefore, what the GPL says about modified versions applies also to translated versions. The translation is covered by the copyright on the original program.
If the original program carries a free license, that license gives permission to translate it. How you can use and license the translated program is determined by that license. If the original program is licensed under certain versions of the GNU GPL, the translated program must be covered by the same versions of the GNU GPL.
- If a program combines
public-domain code with GPL-covered code, can I take the
public-domain part and use it as public domain code?
(
[#CombinePublicDomainWithGPL](#CombinePublicDomainWithGPL)) You can do that, if you can figure out which part is the public domain part and separate it from the rest. If code was put in the public domain by its developer, it is in the public domain no matter where it has been.
- Does the GPL allow me to sell copies of
the program for money?
(
[#DoesTheGPLAllowMoney](#DoesTheGPLAllowMoney)) Yes, the GPL allows everyone to do this. The
[right to sell copies](/philosophy/selling.html)is part of the definition of free software. Except in one special situation, there is no limit on what price you can charge. (The one exception is the required written offer to provide source code that must accompany binary-only release.)- Does the GPL allow me to charge a
fee for downloading the program from my distribution site?
(
[#DoesTheGPLAllowDownloadFee](#DoesTheGPLAllowDownloadFee)) Yes. You can charge any fee you wish for distributing a copy of the program. Under GPLv2, if you distribute binaries by download, you must provide “equivalent access” to download the source—therefore, the fee to download source may not be greater than the fee to download the binary. If the binaries being distributed are licensed under the GPLv3, then you must offer equivalent access to the source code in the same way through the same place at no further charge.
- Does the GPL allow me to require
that anyone who receives the software must pay me a fee and/or
notify me?
(
[#DoesTheGPLAllowRequireFee](#DoesTheGPLAllowRequireFee)) No. In fact, a requirement like that would make the program nonfree. If people have to pay when they get a copy of a program, or if they have to notify anyone in particular, then the program is not free. See the
[definition of free software](/philosophy/free-sw.html).The GPL is a free software license, and therefore it permits people to use and even redistribute the software without being required to pay anyone a fee for doing so.
You
*can*charge people a fee to[get a copy](#DoesTheGPLAllowMoney). You can't require people to pay you when they get a copy*from you**from someone else*.- If I
distribute GPLed software for a fee, am I required to also make
it available to the public without a charge?
(
[#DoesTheGPLRequireAvailabilityToPublic](#DoesTheGPLRequireAvailabilityToPublic)) No. However, if someone pays your fee and gets a copy, the GPL gives them the freedom to release it to the public, with or without a fee. For example, someone could pay your fee, and then put her copy on a web site for the general public.
- Does the GPL allow me to distribute copies
under a nondisclosure agreement?
(
[#DoesTheGPLAllowNDA](#DoesTheGPLAllowNDA)) No. The GPL says that anyone who receives a copy from you has the right to redistribute copies, modified or not. You are not allowed to distribute the work on any more restrictive basis.
If someone asks you to sign an NDA for receiving GPL-covered software copyrighted by the FSF, please inform us immediately by writing to
[[email protected]](mailto:[email protected]).If the violation involves GPL-covered code that has some other copyright holder, please inform that copyright holder, just as you would for any other kind of violation of the GPL.
- Does the GPL allow me to distribute a
modified or beta version under a nondisclosure agreement?
(
[#DoesTheGPLAllowModNDA](#DoesTheGPLAllowModNDA)) No. The GPL says that your modified versions must carry all the freedoms stated in the GPL. Thus, anyone who receives a copy of your version from you has the right to redistribute copies (modified or not) of that version. You may not distribute any version of the work on a more restrictive basis.
- Does the GPL allow me to develop a
modified version under a nondisclosure agreement?
(
[#DevelopChangesUnderNDA](#DevelopChangesUnderNDA)) Yes. For instance, you can accept a contract to develop changes and agree not to release
*your changes*until the client says ok. This is permitted because in this case no GPL-covered code is being distributed under an NDA.You can also release your changes to the client under the GPL, but agree not to release them to anyone else unless the client says ok. In this case, too, no GPL-covered code is being distributed under an NDA, or under any additional restrictions.
The GPL would give the client the right to redistribute your version. In this scenario, the client will probably choose not to exercise that right, but does
*have*the right.- I want to get credit
for my work. I want people to know what I wrote. Can I still get
credit if I use the GPL?
(
[#IWantCredit](#IWantCredit)) You can certainly get credit for the work. Part of releasing a program under the GPL is writing a copyright notice in your own name (assuming you are the copyright holder). The GPL requires all copies to carry an appropriate copyright notice.
- Does the GPL allow me to add terms
that would require citation or acknowledgment in research papers
which use the GPL-covered software or its output?
(
[#RequireCitation](#RequireCitation)) No, this is not permitted under the terms of the GPL. While we recognize that proper citation is an important part of academic publications, citation cannot be added as an additional requirement to the GPL. Requiring citation in research papers which made use of GPLed software goes beyond what would be an acceptable additional requirement under section 7(b) of GPLv3, and therefore would be considered an additional restriction under Section 7 of the GPL. And copyright law does not allow you to place such a
[requirement on the output of software](#GPLOutput), regardless of whether it is licensed under the terms of the GPL or some other license.- Why does the GPL
require including a copy of the GPL with every copy of the program?
(
[#WhyMustIInclude](#WhyMustIInclude)) Including a copy of the license with the work is vital so that everyone who gets a copy of the program can know what their rights are.
It might be tempting to include a URL that refers to the license, instead of the license itself. But you cannot be sure that the URL will still be valid, five years or ten years from now. Twenty years from now, URLs as we know them today may no longer exist.
The only way to make sure that people who have copies of the program will continue to be able to see the license, despite all the changes that will happen in the network, is to include a copy of the license in the program.
- Is it enough just to put a copy
of the GNU GPL in my repository?
(
[#LicenseCopyOnly](#LicenseCopyOnly)) Just putting a copy of the GNU GPL in a file in your repository does not explicitly state that the code in the same repository may be used under the GNU GPL. Without such a statement, it's not entirely clear that the permissions in the license really apply to any particular source file. An explicit statement saying that eliminates all doubt.
A file containing just a license, without a statement that certain other files are covered by that license, resembles a file containing just a subroutine which is never called from anywhere else. The resemblance is not perfect: lawyers and courts might apply common sense and conclude that you must have put the copy of the GNU GPL there because you wanted to license the code that way. Or they might not. Why leave an uncertainty?
This statement should be in each source file. A clear statement in the program's README file is legally sufficient
*as long as that accompanies the code*, but it is easy for them to get separated. Why take a risk of[uncertainty about your code's license](#NoticeInSourceFile)?This has nothing to do with the specifics of the GNU GPL. It is true for any free license.
- Why should I put a license notice in each
source file?
(
[#NoticeInSourceFile](#NoticeInSourceFile)) You should put a notice at the start of each source file, stating what license it carries, in order to avoid risk of the code's getting disconnected from its license. If your repository's README says that source file is under the GNU GPL, what happens if someone copies that file to another program? That other context may not show what the file's license is. It may appear to have some other license, or
[no license at all](/licenses/license-list.html#NoLicense)(which would make the code nonfree).Adding a copyright notice and a license notice at the start of each source file is easy and makes such confusion unlikely.
This has nothing to do with the specifics of the GNU GPL. It is true for any free license.
- What if the work is not very long?
(
[#WhatIfWorkIsShort](#WhatIfWorkIsShort)) If a whole software package contains very little code—less than 300 lines is the benchmark we use—you may as well use a lax permissive license for it, rather than a copyleft license like the GNU GPL. (Unless, that is, the code is specially important.) We
[recommend the Apache License 2.0](/licenses/license-recommendations.html#software)for such cases.- Can I omit the preamble of the GPL, or the
instructions for how to use it on your own programs, to save space?
(
[#GPLOmitPreamble](#GPLOmitPreamble)) The preamble and instructions are integral parts of the GNU GPL and may not be omitted. In fact, the GPL is copyrighted, and its license permits only verbatim copying of the entire GPL. (You can use the legal terms to make
[another license](#ModifyGPL)but it won't be the GNU GPL.)The preamble and instructions add up to some 1000 words, less than 1/5 of the GPL's total size. They will not make a substantial fractional change in the size of a software package unless the package itself is quite small. In that case, you may as well use a simple all-permissive license rather than the GNU GPL.
- What does it
mean to say that two licenses are “compatible”?
(
[#WhatIsCompatible](#WhatIsCompatible)) In order to combine two programs (or substantial parts of them) into a larger work, you need to have permission to use both programs in this way. If the two programs' licenses permit this, they are compatible. If there is no way to satisfy both licenses at once, they are incompatible.
For some licenses, the way in which the combination is made may affect whether they are compatible—for instance, they may allow linking two modules together, but not allow merging their code into one module.
If you just want to install two separate programs in the same system, it is not necessary that their licenses be compatible, because this does not combine them into a larger work.
- What does it mean to say a license is
“compatible with the GPL?”
(
[#WhatDoesCompatMean](#WhatDoesCompatMean)) It means that the other license and the GNU GPL are compatible; you can combine code released under the other license with code released under the GNU GPL in one larger program.
All GNU GPL versions permit such combinations privately; they also permit distribution of such combinations provided the combination is released under the same GNU GPL version. The other license is compatible with the GPL if it permits this too.
GPLv3 is compatible with more licenses than GPLv2: it allows you to make combinations with code that has specific kinds of additional requirements that are not in GPLv3 itself. Section 7 has more information about this, including the list of additional requirements that are permitted.
- Can I write
free software that uses nonfree libraries?
(
[#FSWithNFLibs](#FSWithNFLibs)) If you do this, your program won't be fully usable in a free environment. If your program depends on a nonfree library to do a certain job, it cannot do that job in the Free World. If it depends on a nonfree library to run at all, it cannot be part of a free operating system such as GNU; it is entirely off limits to the Free World.
So please consider: can you find a way to get the job done without using this library? Can you write a free replacement for that library?
If the program is already written using the nonfree library, perhaps it is too late to change the decision. You may as well release the program as it stands, rather than not release it. But please mention in the README that the need for the nonfree library is a drawback, and suggest the task of changing the program so that it does the same job without the nonfree library. Please suggest that anyone who thinks of doing substantial further work on the program first free it from dependence on the nonfree library.
Note that there may also be legal issues with combining certain nonfree libraries with GPL-covered free software. Please see
[the question on GPL software with GPL-incompatible libraries](#GPLIncompatibleLibs)for more information.- Can I link a GPL program with a
proprietary system library? (
[#SystemLibraryException](#SystemLibraryException)) Both versions of the GPL have an exception to their copyleft, commonly called the system library exception. If the GPL-incompatible libraries you want to use meet the criteria for a system library, then you don't have to do anything special to use them; the requirement to distribute source code for the whole program does not include those libraries, even if you distribute a linked executable containing them.
The criteria for what counts as a “system library” vary between different versions of the GPL. GPLv3 explicitly defines “System Libraries” in section 1, to exclude it from the definition of “Corresponding Source.” GPLv2 deals with this issue slightly differently, near the end of section 3.
- In what ways can I link or combine
AGPLv3-covered and GPLv3-covered code?
(
[#AGPLGPL](#AGPLGPL)) Each of these licenses explicitly permits linking with code under the other license. You can always link GPLv3-covered modules with AGPLv3-covered modules, and vice versa. That is true regardless of whether some of the modules are libraries.
- What legal issues
come up if I use GPL-incompatible libraries with GPL software?
(
[#GPLIncompatibleLibs](#GPLIncompatibleLibs)) -
If you want your program to link against a library not covered by the system library exception, you need to provide permission to do that. Below are two example license notices that you can use to do that; one for GPLv3, and the other for GPLv2. In either case, you should put this text in each file to which you are granting this permission.
Only the copyright holders for the program can legally release their software under these terms. If you wrote the whole program yourself, then assuming your employer or school does not claim the copyright, you are the copyright holder—so you can authorize the exception. But if you want to use parts of other GPL-covered programs by other authors in your code, you cannot authorize the exception for them. You have to get the approval of the copyright holders of those programs.
When other people modify the program, they do not have to make the same exception for their code—it is their choice whether to do so.
If the libraries you intend to link with are nonfree, please also see
[the section on writing Free Software which uses nonfree libraries](#FSWithNFLibs).If you're using GPLv3, you can accomplish this goal by granting an additional permission under section 7. The following license notice will do that. You must replace all the text in brackets with text that is appropriate for your program. If not everybody can distribute source for the libraries you intend to link with, you should remove the text in braces; otherwise, just remove the braces themselves.
Copyright (C)
`[years]``[name of copyright holder]`This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, see <https://www.gnu.org/licenses>.
Additional permission under GNU GPL version 3 section 7
If you modify this Program, or any covered work, by linking or combining it with
`[name of library]`(or a modified version of that library), containing parts covered by the terms of`[name of library's license]`, the licensors of this Program grant you additional permission to convey the resulting work. {Corresponding Source for a non-source form of such a combination shall include the source code for the parts of`[name of library]`used as well as that of the covered work.}If you're using GPLv2, you can provide your own exception to the license's terms. The following license notice will do that. Again, you must replace all the text in brackets with text that is appropriate for your program. If not everybody can distribute source for the libraries you intend to link with, you should remove the text in braces; otherwise, just remove the braces themselves.
Copyright (C)
`[years]``[name of copyright holder]`This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, see <https://www.gnu.org/licenses>.
Linking
`[name of your program]` statically or dynamically with other modules is making a combined work based on `[name of your program]`. Thus, the terms and conditions of the GNU General Public License cover the whole combination.
In addition, as a special exception, the copyright holders of `[name of your program]` give you permission to combine `[name of your program]` with free software programs or libraries that are released under the GNU LGPL and with code included in the standard release of `[name of library]` under the `[name of library's license]` (or modified versions of such code, with unchanged license). You may copy and distribute such a system following the terms of the GNU GPL for `[name of your program]` and the licenses of the other code concerned{, provided that you include the source code of that other code when and as the GNU GPL requires distribution of source code}.
Note that people who make modified versions of `[name of your program]` are not obligated to grant this special exception for their modified versions; it is their choice whether to do so. The GNU General Public License gives permission to release a modified version without this exception; this exception also makes it possible to release a modified version which carries forward this exception.
- How do I get a copyright on my program
in order to release it under the GPL?
(
[#HowIGetCopyright](#HowIGetCopyright)) Under the Berne Convention, everything written is automatically copyrighted from whenever it is put in fixed form. So you don't have to do anything to “get” the copyright on what you write—as long as nobody else can claim to own your work.
However, registering the copyright in the US is a very good idea. It will give you more clout in dealing with an infringer in the US.
The case when someone else might possibly claim the copyright is if you are an employee or student; then the employer or the school might claim you did the job for them and that the copyright belongs to them. Whether they would have a valid claim would depend on circumstances such as the laws of the place where you live, and on your employment contract and what sort of work you do. It is best to consult a lawyer if there is any possible doubt.
If you think that the employer or school might have a claim, you can resolve the problem clearly by getting a copyright disclaimer signed by a suitably authorized officer of the company or school. (Your immediate boss or a professor is usually NOT authorized to sign such a disclaimer.)
- What if my school
might want to make my program into its own proprietary software product?
(
[#WhatIfSchool](#WhatIfSchool)) Many universities nowadays try to raise funds by restricting the use of the knowledge and information they develop, in effect behaving little different from commercial businesses. (See “The Kept University”, Atlantic Monthly, March 2000, for a general discussion of this problem and its effects.)
If you see any chance that your school might refuse to allow your program to be released as free software, it is best to raise the issue at the earliest possible stage. The closer the program is to working usefully, the more temptation the administration might feel to take it from you and finish it without you. At an earlier stage, you have more leverage.
So we recommend that you approach them when the program is only half-done, saying, “If you will agree to releasing this as free software, I will finish it.” Don't think of this as a bluff. To prevail, you must have the courage to say, “My program will have liberty, or never be born.”
- Could
you give me step by step instructions on how to apply the GPL to my program?
(
[#CouldYouHelpApplyGPL](#CouldYouHelpApplyGPL)) See the page of
[GPL instructions](/licenses/gpl-howto.html).- I heard that someone got a copy
of a GPLed program under another license. Is this possible?
(
[#HeardOtherLicense](#HeardOtherLicense)) The GNU GPL does not give users permission to attach other licenses to the program. But the copyright holder for a program can release it under several different licenses in parallel. One of them may be the GNU GPL.
The license that comes in your copy, assuming it was put in by the copyright holder and that you got the copy legitimately, is the license that applies to your copy.
- I would like to release a program I wrote
under the GNU GPL, but I would
like to use the same code in nonfree programs.
(
[#ReleaseUnderGPLAndNF](#ReleaseUnderGPLAndNF)) To release a nonfree program is always ethically tainted, but legally there is no obstacle to your doing this. If you are the copyright holder for the code, you can release it under various different non-exclusive licenses at various times.
- Is the
developer of a GPL-covered program bound by the GPL? Could the
developer's actions ever be a violation of the GPL?
(
[#DeveloperViolate](#DeveloperViolate)) Strictly speaking, the GPL is a license from the developer for others to use, distribute and change the program. The developer itself is not bound by it, so no matter what the developer does, this is not a “violation” of the GPL.
However, if the developer does something that would violate the GPL if done by someone else, the developer will surely lose moral standing in the community.
- Can the developer of a program who distributed
it under the GPL later license it to another party for exclusive use?
(
[#CanDeveloperThirdParty](#CanDeveloperThirdParty)) No, because the public already has the right to use the program under the GPL, and this right cannot be withdrawn.
- Can I use GPL-covered editors such as
GNU Emacs to develop nonfree programs? Can I use GPL-covered tools
such as GCC to compile them?
(
[#CanIUseGPLToolsForNF](#CanIUseGPLToolsForNF)) Yes, because the copyright on the editors and tools does not cover the code you write. Using them does not place any restrictions, legally, on the license you use for your code.
Some programs copy parts of themselves into the output for technical reasons—for example, Bison copies a standard parser program into its output file. In such cases, the copied text in the output is covered by the same license that covers it in the source code. Meanwhile, the part of the output which is derived from the program's input inherits the copyright status of the input.
As it happens, Bison can also be used to develop nonfree programs. This is because we decided to explicitly permit the use of the Bison standard parser program in Bison output files without restriction. We made the decision because there were other tools comparable to Bison which already permitted use for nonfree programs.
- Do I have “fair use”
rights in using the source code of a GPL-covered program?
(
[#GPLFairUse](#GPLFairUse)) Yes, you do. “Fair use” is use that is allowed without any special permission. Since you don't need the developers' permission for such use, you can do it regardless of what the developers said about it—in the license or elsewhere, whether that license be the GNU GPL or any other free software license.
Note, however, that there is no world-wide principle of fair use; what kinds of use are considered “fair” varies from country to country.
- Can the US Government release a program under the GNU GPL?
(
[#GPLUSGov](#GPLUSGov)) If the program is written by US federal government employees in the course of their employment, it is in the public domain, which means it is not copyrighted. Since the GNU GPL is based on copyright, such a program cannot be released under the GNU GPL. (It can still be
[free software](/philosophy/free-sw.html), however; a public domain program is free.)However, when a US federal government agency uses contractors to develop software, that is a different situation. The contract can require the contractor to release it under the GNU GPL. (GNU Ada was developed in this way.) Or the contract can assign the copyright to the government agency, which can then release the software under the GNU GPL.
- Can the US Government
release improvements to a GPL-covered program?
(
[#GPLUSGovAdd](#GPLUSGovAdd)) Yes. If the improvements are written by US government employees in the course of their employment, then the improvements are in the public domain. However, the improved version, as a whole, is still covered by the GNU GPL. There is no problem in this situation.
If the US government uses contractors to do the job, then the improvements themselves can be GPL-covered.
- Does the GPL have different requirements
for statically vs dynamically linked modules with a covered
work? (
[#GPLStaticVsDynamic](#GPLStaticVsDynamic)) No. Linking a GPL covered work statically or dynamically with other modules is making a combined work based on the GPL covered work. Thus, the terms and conditions of the GNU General Public License cover the whole combination. See also
[What legal issues come up if I use GPL-incompatible libraries with GPL software?](#GPLIncompatibleLibs)
- Does the LGPL have different requirements
for statically vs dynamically linked modules with a covered
work? (
[#LGPLStaticVsDynamic](#LGPLStaticVsDynamic)) For the purpose of complying with the LGPL (any extant version: v2, v2.1 or v3):
(1) If you statically link against an LGPLed library, you must also provide your application in an object (not necessarily source) format, so that a user has the opportunity to modify the library and relink the application.
(2) If you dynamically link against an LGPLed library
*already present on the user's computer*, you need not convey the library's source. On the other hand, if you yourself convey the executable LGPLed library along with your application, whether linked statically or dynamically, you must also convey the library's sources, in one of the ways for which the LGPL provides.
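To make the build-level difference concrete, here is a minimal sketch in C, assuming a hypothetical LGPLed library `libfoo` with a header `foo.h` and a function `foo_version()`; the compiler commands in the comments only illustrate the two linking modes and are not a statement of the LGPL's exact terms.

```c
/* app.c -- an application using a hypothetical LGPLed library, libfoo. */
#include <stdio.h>
#include "foo.h"              /* assumed header shipped with libfoo */

int main(void)
{
    /* foo_version() is a made-up function exported by libfoo. */
    printf("built against libfoo %s\n", foo_version());
    return 0;
}

/* Case (2) above -- dynamic linking against a libfoo already on the
 * user's system, e.g.:
 *     gcc app.c -lfoo -o app
 * The user can swap in a modified libfoo.so without your help, so you
 * need not convey the library's source.
 *
 * Case (1) above -- static linking, e.g.:
 *     gcc -c app.c               (produces app.o)
 *     gcc app.o libfoo.a -o app
 * Here you would also make app.o (or equivalent object files) available,
 * so a user can rebuild libfoo and relink the application against it.
 */
```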
- Is there some way that I can GPL the output people get from use of my program? For example,
if my program is used to develop hardware designs, can I require that
these designs must be free?
(
[#GPLOutput](#GPLOutput)) In general this is legally impossible; copyright law does not give you any say in the use of the output people make from their data using your program. If the user uses your program to enter or convert her own data, the copyright on the output belongs to her, not you. More generally, when a program translates its input into some other form, the copyright status of the output inherits that of the input it was generated from.
So the only way you have a say in the use of the output is if substantial parts of the output are copied (more or less) from text in your program. For instance, part of the output of Bison (see above) would be covered by the GNU GPL, if we had not made an exception in this specific case.
You could artificially make a program copy certain text into its output even if there is no technical reason to do so. But if that copied text serves no practical purpose, the user could simply delete that text from the output and use only the rest. Then he would not have to obey the conditions on redistribution of the copied text.
- In what cases is the output of a GPL
program covered by the GPL too?
(
[#WhatCaseIsOutputGPL](#WhatCaseIsOutputGPL)) The output of a program is not, in general, covered by the copyright on the code of the program. So the license of the code of the program does not apply to the output, whether you pipe it into a file, make a screenshot, screencast, or video.
The exception would be when the program displays a full screen of text and/or art that comes from the program. Then the copyright on that text and/or art covers the output. Programs that output audio, such as video games, would also fit into this exception.
If the art/music is under the GPL, then the GPL applies when you copy it no matter how you copy it. However,
[fair use](#GPLFairUse) may still apply.
Keep in mind that some programs, particularly video games, can have artwork/audio that is licensed separately from the underlying GPLed game. In such cases, the license on the artwork/audio would dictate the terms under which video/streaming may occur. See also: [Can I use the GPL for something other than software?](#GPLOtherThanSoftware)
- If I add a module to a GPL-covered program,
do I have to use the GPL as the license for my module?
(
[#GPLModuleLicense](#GPLModuleLicense)) The GPL says that the whole combined program has to be released under the GPL. So your module has to be available for use under the GPL.
But you can give additional permission for the use of your code. You can, if you wish, release your module under a license which is more lax than the GPL but compatible with the GPL. The
[license list page](/licenses/license-list.html) gives a partial list of GPL-compatible licenses.
- If a library is released under the GPL
(not the LGPL), does that mean that any software which uses it
has to be under the GPL or a GPL-compatible license?
(
[#IfLibraryIsGPL](#IfLibraryIsGPL)) Yes, because the program actually links to the library. As such, the terms of the GPL apply to the entire combination. The software modules that link with the library may be under various GPL compatible licenses, but the work as a whole must be licensed under the GPL. See also:
[What does it mean to say a license is “compatible with the GPL”?](#WhatDoesCompatMean)
- If a programming language interpreter
is released under the GPL, does that mean programs written to be
interpreted by it must be under GPL-compatible licenses?
(
[#IfInterpreterIsGPL](#IfInterpreterIsGPL)) When the interpreter just interprets a language, the answer is no. The interpreted program, to the interpreter, is just data; a free software license like the GPL, based on copyright law, cannot limit what data you use the interpreter on. You can run it on any data (interpreted program), any way you like, and there are no requirements about licensing that data to anyone.
However, when the interpreter is extended to provide “bindings” to other facilities (often, but not necessarily, libraries), the interpreted program is effectively linked to the facilities it uses through these bindings. So if these facilities are released under the GPL, the interpreted program that uses them must be released in a GPL-compatible way. The JNI or Java Native Interface is an example of such a binding mechanism; libraries that are accessed in this way are linked dynamically with the Java programs that call them. These libraries are also linked with the interpreter. If the interpreter is linked statically with these libraries, or if it is designed to
[link dynamically with these specific libraries](#GPLPluginsInNF), then it too needs to be released in a GPL-compatible way.Another similar and very common case is to provide libraries with the interpreter which are themselves interpreted. For instance, Perl comes with many Perl modules, and a Java implementation comes with many Java classes. These libraries and the programs that call them are always dynamically linked together.
A consequence is that if you choose to use GPLed Perl modules or Java classes in your program, you must release the program in a GPL-compatible way, regardless of the license used in the Perl or Java interpreter that the combined Perl or Java program will run on.
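For readers unfamiliar with what such a binding looks like, here is a minimal JNI sketch; the class name `Counter`, the method `increment`, and the library name `counterjni` are invented for the example. The Java program declares a `native` method and loads a shared library, and the C implementation below is linked into the running Java program dynamically through that binding.

```c
/* counter_jni.c -- native side of a hypothetical JNI binding.
 *
 * Java side, shown here only as a comment:
 *     public class Counter {
 *         static { System.loadLibrary("counterjni"); }
 *         public native int increment(int n);
 *     }
 */
#include <jni.h>

/* Called from Java as counter.increment(n); the Java code and this
 * shared library become dynamically linked at run time. */
JNIEXPORT jint JNICALL
Java_Counter_increment(JNIEnv *env, jobject obj, jint n)
{
    (void)env;   /* unused in this trivial example */
    (void)obj;
    return n + 1;
}
```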
- I'm writing a Windows application with
Microsoft Visual C++ (or Visual Basic) and I will be releasing it
under the GPL. Is dynamically linking my program with the Visual
C++ (or Visual Basic) runtime library permitted under the GPL?
(
[#WindowsRuntimeAndGPL](#WindowsRuntimeAndGPL)) You may link your program to these libraries, and distribute the compiled program to others. When you do this, the runtime libraries are “System Libraries” as GPLv3 defines them. That means that you don't need to worry about including their source code with the program's Corresponding Source. GPLv2 provides a similar exception in section 3.
You may not distribute these libraries in compiled DLL form with the program. To prevent unscrupulous distributors from trying to use the System Library exception as a loophole, the GPL says that libraries can only qualify as System Libraries as long as they're not distributed with the program itself. If you distribute the DLLs with the program, they won't be eligible for this exception anymore; then the only way to comply with the GPL would be to provide their source code, which you are unable to do.
It is possible to write free programs that only run on Windows, but it is not a good idea. These programs would be “
[trapped](/philosophy/java-trap.html)” by Windows, and therefore contribute zero to the Free World.
- Why is the original BSD
license incompatible with the GPL?
(
[#OrigBSD](#OrigBSD)) Because it imposes a specific requirement that is not in the GPL; namely, the requirement on advertisements of the program. Section 6 of GPLv2 states:
You may not impose any further restrictions on the recipients' exercise of the rights granted herein.
GPLv3 says something similar in section 10. The advertising clause provides just such a further restriction, and thus is GPL-incompatible.
The revised BSD license does not have the advertising clause, which eliminates the problem.
- When is a program and its plug-ins considered a single combined program?
(
[#GPLPlugins](#GPLPlugins)) It depends on how the main program invokes its plug-ins. If the main program uses fork and exec to invoke plug-ins, and they establish intimate communication by sharing complex data structures, or shipping complex data structures back and forth, that can make them one single combined program. A main program that uses simple fork and exec to invoke plug-ins and does not establish intimate communication between them results in the plug-ins being a separate program.
If the main program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single combined program, which must be treated as an extension of both the main program and the plug-ins. If the main program dynamically links plug-ins, but the communication between them is limited to invoking the ‘main’ function of the plug-in with some options and waiting for it to return, that is a borderline case.
Using shared memory to communicate with complex data structures is pretty much equivalent to dynamic linking.
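As a rough illustration of the two invocation styles this answer distinguishes, here is a C sketch; the plug-in names `./plugin` and `./plugin.so` and the entry point `do_work` are hypothetical. The first pattern runs the plug-in as a separate program and passes it only simple command-line arguments; the second loads it into the main program's address space and hands it a shared data structure, which is the kind of intimate communication described above.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <dlfcn.h>

/* Pattern 1: simple fork and exec -- the plug-in remains a separate
 * program, communicating only through its command line and exit status. */
static void run_plugin_as_separate_program(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        execlp("./plugin", "plugin", "--input", "data.txt", (char *)NULL);
        _exit(127);                       /* exec failed */
    }
    if (pid > 0)
        waitpid(pid, NULL, 0);
}

/* Pattern 2: dynamic loading -- the plug-in is mapped into the same
 * address space and exchanges complex data structures by direct calls. */
static void run_plugin_in_same_address_space(void)
{
    void *handle = dlopen("./plugin.so", RTLD_NOW);
    if (!handle)
        return;

    int (*do_work)(void *) = (int (*)(void *))dlsym(handle, "do_work");
    if (do_work) {
        struct { int id; const char *name; } shared = { 1, "example" };
        do_work(&shared);                 /* shared, in-memory data */
    }
    dlclose(handle);
}

int main(void)
{
    run_plugin_as_separate_program();
    run_plugin_in_same_address_space();
    return 0;
}
```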
- If I write a plug-in to use with a GPL-covered
program, what requirements does that impose on the licenses I can
use for distributing my plug-in?
(
[#GPLAndPlugins](#GPLAndPlugins)) Please see this question
[for determining when plug-ins and a main program are considered a single combined program and when they are considered separate works](#GPLPlugins).If the main program and the plugins are a single combined program then this means you must license the plug-in under the GPL or a GPL-compatible free software license and distribute it with source code in a GPL-compliant way. A main program that is separate from its plug-ins makes no requirements for the plug-ins.
- Can I apply the
GPL when writing a plug-in for a nonfree program?
(
[#GPLPluginsInNF](#GPLPluginsInNF)) Please see this question
[for determining when plug-ins and a main program are considered a single combined program and when they are considered separate programs](#GPLPlugins).If they form a single combined program this means that combination of the GPL-covered plug-in with the nonfree main program would violate the GPL. However, you can resolve that legal problem by adding an exception to your plug-in's license, giving permission to link it with the nonfree main program.
See also the question
[I am writing free software that uses a nonfree library.](#FSWithNFLibs)- Can I release a nonfree program
that's designed to load a GPL-covered plug-in?
(
[#NFUseGPLPlugins](#NFUseGPLPlugins)) Please see this question
[for determining when plug-ins and a main program are considered a single combined program and when they are considered separate programs](#GPLPlugins).If they form a single combined program then the main program must be released under the GPL or a GPL-compatible free software license, and the terms of the GPL must be followed when the main program is distributed for use with these plug-ins.
However, if they are separate works then the license of the plug-in makes no requirements about the main program.
See also the question
[I am writing free software that uses a nonfree library.](#FSWithNFLibs)- You have a GPLed program that I'd like
to link with my code to build a proprietary program. Does the fact
that I link with your program mean I have to GPL my program?
(
[#LinkingWithGPL](#LinkingWithGPL)) Not exactly. It means you must release your program under a license compatible with the GPL (more precisely, compatible with one or more GPL versions accepted by all the rest of the code in the combination that you link). The combination itself is then available under those GPL versions.
- If so, is there
any chance I could get a license of your program under the Lesser GPL?
(
[#SwitchToLGPL](#SwitchToLGPL)) You can ask, but most authors will stand firm and say no. The idea of the GPL is that if you want to include our code in your program, your program must also be free software. It is supposed to put pressure on you to release your program in a way that makes it part of our community.
You always have the legal alternative of not using our code.
- Does distributing a nonfree driver
meant to link with the kernel Linux violate the GPL?
(
[#NonfreeDriverKernelLinux](#NonfreeDriverKernelLinux)) Linux (the kernel in the GNU/Linux operating system) is distributed under GNU GPL version 2. Does distributing a nonfree driver meant to link with Linux violate the GPL?
Yes, this is a violation, because effectively this makes a larger combined work. The fact that the user is expected to put the pieces together does not really change anything.
Each contributor to Linux who holds copyright on a substantial part of the code can enforce the GPL and we encourage each of them to take action against those distributing nonfree Linux-drivers.
- How can I allow linking of
proprietary modules with my GPL-covered library under a controlled
interface only?
(
[#LinkingOverControlledInterface](#LinkingOverControlledInterface)) Add this text to the license notice of each file in the package, at the end of the text that says the file is distributed under the GNU GPL:
Linking ABC statically or dynamically with other modules is making a combined work based on ABC. Thus, the terms and conditions of the GNU General Public License cover the whole combination.
As a special exception, the copyright holders of ABC give you permission to combine ABC program with free software programs or libraries that are released under the GNU LGPL and with independent modules that communicate with ABC solely through the ABCDEF interface. You may copy and distribute such a system following the terms of the GNU GPL for ABC and the licenses of the other code concerned, provided that you include the source code of that other code when and as the GNU GPL requires distribution of source code and provided that you do not modify the ABCDEF interface.
Note that people who make modified versions of ABC are not obligated to grant this special exception for their modified versions; it is their choice whether to do so. The GNU General Public License gives permission to release a modified version without this exception; this exception also makes it possible to release a modified version which carries forward this exception. If you modify the ABCDEF interface, this exception does not apply to your modified version of ABC, and you must remove this exception when you distribute your modified version.
This exception is an additional permission under section 7 of the GNU General Public License, version 3 (“GPLv3”)
This exception enables linking with differently licensed modules over the specified interface (“ABCDEF”), while ensuring that users would still receive source code as they normally would under the GPL.
Only the copyright holders for the program can legally authorize this exception. If you wrote the whole program yourself, then assuming your employer or school does not claim the copyright, you are the copyright holder—so you can authorize the exception. But if you want to use parts of other GPL-covered programs by other authors in your code, you cannot authorize the exception for them. You have to get the approval of the copyright holders of those programs.
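In practice, pinning down the “ABCDEF interface” usually means publishing a specific, documented set of entry points, most commonly as a header file. The sketch below is only an illustration of that idea; the library name `abc` and the function names are invented, and the exception text itself is what carries the legal weight.

```c
/* abcdef.h -- hypothetical declaration of the controlled ABCDEF
 * interface of the GPL-covered library ABC.  Under the exception above,
 * independent modules may communicate with ABC solely through the
 * functions declared here, and the exception stops applying if this
 * interface is modified. */
#ifndef ABCDEF_H
#define ABCDEF_H

#include <stddef.h>

typedef struct abc_session abc_session;   /* opaque handle */

abc_session *abc_open(const char *config_path);
int          abc_process(abc_session *s, const void *buf, size_t len);
void         abc_close(abc_session *s);

#endif /* ABCDEF_H */
```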
- I have written an application that links
with many different components, that have different licenses. I am
very confused as to what licensing requirements are placed on my
program. Can you please tell me what licenses I may use?
(
[#ManyDifferentLicenses](#ManyDifferentLicenses)) To answer this question, we would need to see a list of each component that your program uses, the license of that component, and a brief description (a few sentences for each should suffice) of how your program uses that component. Two examples would be:
- To make my software work, it must be linked to the FOO library, which is available under the Lesser GPL.
- My software makes a system call (with a command line that I built) to run the BAR program, which is licensed under “the GPL, with a special exception allowing for linking with QUUX”.
- What is the difference between an
“aggregate” and other kinds of “modified versions”?
(
[#MereAggregation](#MereAggregation)) An “aggregate” consists of a number of separate programs, distributed together on the same CD-ROM or other media. The GPL permits you to create and distribute an aggregate, even when the licenses of the other software are nonfree or GPL-incompatible. The only condition is that you cannot release the aggregate under a license that prohibits users from exercising rights that each program's individual license would grant them.
Where's the line between two separate programs, and one program with two parts? This is a legal question, which ultimately judges will decide. We believe that a proper criterion depends both on the mechanism of communication (exec, pipes, rpc, function calls within a shared address space, etc.) and the semantics of the communication (what kinds of information are interchanged).
If the modules are included in the same executable file, they are definitely combined in one program. If modules are designed to run linked together in a shared address space, that almost surely means combining them into one program.
By contrast, pipes, sockets and command-line arguments are communication mechanisms normally used between two separate programs. So when they are used for communication, the modules normally are separate programs. But if the semantics of the communication are intimate enough, exchanging complex internal data structures, that too could be a basis to consider the two parts as combined into a larger program.
- When it comes to determining
whether two pieces of software form a single work, does the fact
that the code is in one or more containers have any effect?
(
[#AggregateContainers](#AggregateContainers)) No, the analysis of whether they are a
[single work or an aggregate](#MereAggregation) is unchanged by the involvement of containers.
- Why does
the FSF require that contributors to FSF-copyrighted programs assign
copyright to the FSF? If I hold copyright on a GPLed program, should
I do this, too? If so, how?
(
[#AssignCopyright](#AssignCopyright)) Our lawyers have told us that to be in the
[best position to enforce the GPL](/licenses/why-assign.html)in court against violators, we should keep the copyright status of the program as simple as possible. We do this by asking each contributor to either assign the copyright on contributions to the FSF, or disclaim copyright on contributions.We also ask individual contributors to get copyright disclaimers from their employers (if any) so that we can be sure those employers won't claim to own the contributions.
Of course, if all the contributors put their code in the public domain, there is no copyright with which to enforce the GPL. So we encourage people to assign copyright on large code contributions, and only put small changes in the public domain.
If you want to make an effort to enforce the GPL on your program, it is probably a good idea for you to follow a similar policy. Please contact
[<[email protected]>](mailto:[email protected])if you want more information.- Can I modify the GPL
and make a modified license?
(
[#ModifyGPL](#ModifyGPL)) It is possible to make modified versions of the GPL, but it tends to have practical consequences.
You can legally use the GPL terms (possibly modified) in another license provided that you call your license by another name and do not include the GPL preamble, and provided you modify the instructions-for-use at the end enough to make it clearly different in wording and not mention GNU (though the actual procedure you describe may be similar).
If you want to use our preamble in a modified license, please write to
[<[email protected]>](mailto:[email protected]) for permission. For this purpose we would want to check the actual license requirements to see if we approve of them.
Although we will not raise legal objections to your making a modified license in this way, we hope you will think twice and not do it. Such a modified license is almost certainly [incompatible with the GNU GPL](#WhatIsCompatible), and that incompatibility blocks useful combinations of modules. The mere proliferation of different free software licenses is a burden in and of itself.
Rather than modifying the GPL, please use the exception mechanism offered by GPL version 3.
- If I use a
piece of software that has been obtained under the GNU GPL, am I
allowed to modify the original code into a new program, then
distribute and sell that new program commercially?
(
[#GPLCommercially](#GPLCommercially)) You are allowed to sell copies of the modified program commercially, but only under the terms of the GNU GPL. Thus, for instance, you must make the source code available to the users of the program as described in the GPL, and they must be allowed to redistribute and modify it as described in the GPL.
These requirements are the condition for including the GPL-covered code you received in a program of your own.
- Can I use the GPL for something other than
software?
(
[#GPLOtherThanSoftware](#GPLOtherThanSoftware)) You can apply the GPL to any kind of work, as long as it is clear what constitutes the “source code” for the work. The GPL defines this as the preferred form of the work for making changes in it.
However, for manuals and textbooks, or more generally any sort of work that is meant to teach a subject, we recommend using the GFDL rather than the GPL.
- How does the LGPL work with Java?
(
[#LGPLJava](#LGPLJava)) [See this article for details.](/licenses/lgpl-java.html) It works as designed, intended, and expected.
- Consider this situation:
1) X releases V1 of a project under the GPL.
2) Y contributes to the development of V2 with changes and new code
based on V1.
3) X wants to convert V2 to a non-GPL license.
Does X need Y's permission?
(
[#Consider](#Consider)) Yes. Y was required to release its version under the GNU GPL, as a consequence of basing it on X's version V1. Nothing required Y to agree to any other license for its code. Therefore, X must get Y's permission before releasing that code under another license.
- I'd like to incorporate GPL-covered
software in my proprietary system. I have no permission to use
that software except what the GPL gives me. Can I do this?
(
[#GPLInProprietarySystem](#GPLInProprietarySystem)) You cannot incorporate GPL-covered software in a proprietary system. The goal of the GPL is to grant everyone the freedom to copy, redistribute, understand, and modify a program. If you could incorporate GPL-covered software into a nonfree system, it would have the effect of making the GPL-covered software nonfree too.
A system incorporating a GPL-covered program is an extended version of that program. The GPL says that any extended version of the program must be released under the GPL if it is released at all. This is for two reasons: to make sure that users who get the software get the freedom they should have, and to encourage people to give back improvements that they make.
However, in many cases you can distribute the GPL-covered software alongside your proprietary system. To do this validly, you must make sure that the free and nonfree programs communicate at arms length, that they are not combined in a way that would make them effectively a single program.
The difference between this and “incorporating” the GPL-covered software is partly a matter of substance and partly form. The substantive part is this: if the two programs are combined so that they become effectively two parts of one program, then you can't treat them as two separate programs. So the GPL has to cover the whole thing.
If the two programs remain well separated, like the compiler and the kernel, or like an editor and a shell, then you can treat them as two separate programs—but you have to do it properly. The issue is simply one of form: how you describe what you are doing. Why do we care about this? Because we want to make sure the users clearly understand the free status of the GPL-covered software in the collection.
If people were to distribute GPL-covered software calling it “part of” a system that users know is partly proprietary, users might be uncertain of their rights regarding the GPL-covered software. But if they know that what they have received is a free program plus another program, side by side, their rights will be clear.
- Using a certain GNU program under the
GPL does not fit our project to make proprietary software. Will you
make an exception for us? It would mean more users of that program.
(
[#WillYouMakeAnException](#WillYouMakeAnException)) Sorry, we don't make such exceptions. It would not be right.
Maximizing the number of users is not our aim. Rather, we are trying to give the crucial freedoms to as many users as possible. In general, proprietary software projects hinder rather than help the cause of freedom.
We do occasionally make license exceptions to assist a project which is producing free software under a license other than the GPL. However, we have to see a good reason why this will advance the cause of free software.
We also do sometimes change the distribution terms of a package, when that seems clearly the right way to serve the cause of free software; but we are very cautious about this, so you will have to show us very convincing reasons.
- I'd like to incorporate GPL-covered software in
my proprietary system. Can I do this by putting a “wrapper”
module, under a GPL-compatible lax permissive license (such as the X11
license) in between the GPL-covered part and the proprietary part?
(
[#GPLWrapper](#GPLWrapper)) No. The X11 license is compatible with the GPL, so you can add a module to the GPL-covered program and put it under the X11 license. But if you were to incorporate them both in a larger program, that whole would include the GPL-covered part, so it would have to be licensed
*as a whole*under the GNU GPL.The fact that proprietary module A communicates with GPL-covered module C only through X11-licensed module B is legally irrelevant; what matters is the fact that module C is included in the whole.
- Where can I learn more about the GCC
Runtime Library Exception?
(
[#LibGCCException](#LibGCCException)) The GCC Runtime Library Exception covers libgcc, libstdc++, libfortran, libgomp, libdecnumber, and other libraries distributed with GCC. The exception is meant to allow people to distribute programs compiled with GCC under terms of their choice, even when parts of these libraries are included in the executable as part of the compilation process. To learn more, please read our
[FAQ about the GCC Runtime Library Exception](/licenses/gcc-exception-faq.html).- I'd like to
modify GPL-covered programs and link them with the portability
libraries from Money Guzzler Inc. I cannot distribute the source code
for these libraries, so any user who wanted to change these versions
would have to obtain those libraries separately. Why doesn't the
GPL permit this?
(
[#MoneyGuzzlerInc](#MoneyGuzzlerInc)) There are two reasons for this. First, a general one. If we permitted company A to make a proprietary file, and company B to distribute GPL-covered software linked with that file, the effect would be to make a hole in the GPL big enough to drive a truck through. This would be carte blanche for withholding the source code for all sorts of modifications and extensions to GPL-covered software.
Giving all users access to the source code is one of our main goals, so this consequence is definitely something we want to avoid.
More concretely, the versions of the programs linked with the Money Guzzler libraries would not really be free software as we understand the term—they would not come with full source code that enables users to change and recompile the program.
- If the license for a module Q has a
requirement that's incompatible with the GPL,
but the requirement applies only when Q is distributed by itself, not when
Q is included in a larger program, does that make the license
GPL-compatible? Can I combine or link Q with a GPL-covered program?
(
[#GPLIncompatibleAlone](#GPLIncompatibleAlone)) If a program P is released under the GPL, that means *any and every part of it* can be used under the GPL. If you integrate module Q, and release the combined program P+Q under the GPL, that means any part of P+Q can be used under the GPL. One part of P+Q is Q. So releasing P+Q under the GPL says that Q, or any part of it, can be used under the GPL. Putting it in other words, a user who obtains P+Q under the GPL can delete P, so that just Q remains, still under the GPL.
If the license of module Q permits you to give permission for that, then it is GPL-compatible. Otherwise, it is not GPL-compatible.
If the license for Q says in no uncertain terms that you must do certain things (not compatible with the GPL) when you redistribute Q on its own, then it does not permit you to distribute Q under the GPL. It follows that you can't release P+Q under the GPL either. So you cannot link or combine P with Q.
- Can I release a modified
version of a GPL-covered program in binary form only?
(
[#ModifiedJustBinary](#ModifiedJustBinary)) No. The whole point of the GPL is that all modified versions must be
[free software](/philosophy/free-sw.html)—which means, in particular, that the source code of the modified version is available to the users.- I
downloaded just the binary from the net. If I distribute copies,
do I have to get the source and distribute that too?
(
[#UnchangedJustBinary](#UnchangedJustBinary)) Yes. The general rule is, if you distribute binaries, you must distribute the complete corresponding source code too. The exception for the case where you received a written offer for source code is quite limited.
- I want to distribute
binaries via physical media without accompanying sources. Can I provide
source code by FTP?
(
[#DistributeWithSourceOnInternet](#DistributeWithSourceOnInternet)) Version 3 of the GPL allows this; see option 6(b) for the full details. Under version 2, you're certainly free to offer source via FTP, and most users will get it from there. However, if any of them would rather get the source on physical media by mail, you are required to provide that.
If you distribute binaries via FTP,
[you should distribute source via FTP.](#AnonFTPAndSendSources)- My friend got a GPL-covered
binary with an offer to supply source, and made a copy for me.
Can I use the offer myself to obtain the source?
(
[#RedistributedBinariesGetSource](#RedistributedBinariesGetSource)) Yes, you can. The offer must be open to everyone who has a copy of the binary that it accompanies. This is why the GPL says your friend must give you a copy of the offer along with a copy of the binary—so you can take advantage of it.
- Can I put the binaries on my
Internet server and put the source on a different Internet site?
(
[#SourceAndBinaryOnDifferentSites](#SourceAndBinaryOnDifferentSites)) Yes. Section 6(d) allows this. However, you must provide clear instructions people can follow to obtain the source, and you must take care to make sure that the source remains available for as long as you distribute the object code.
- I want to distribute an extended
version of a GPL-covered program in binary form. Is it enough to
distribute the source for the original version?
(
[#DistributeExtendedBinary](#DistributeExtendedBinary)) No, you must supply the source code that corresponds to the binary. Corresponding source means the source from which users can rebuild the same binary.
Part of the idea of free software is that users should have access to the source code for
*the programs they use*. Those using your version should have access to the source code for your version.
A major goal of the GPL is to build up the Free World by making sure that improvements to a free program are themselves free. If you release an improved version of a GPL-covered program, you must release the improved source code under the GPL.
- I want to distribute
binaries, but distributing complete source is inconvenient. Is it ok if
I give users the diffs from the “standard” version along with
the binaries?
(
[#DistributingSourceIsInconvenient](#DistributingSourceIsInconvenient)) This is a well-meaning request, but this method of providing the source doesn't really do the job.
A user that wants the source a year from now may be unable to get the proper version from another site at that time. The standard distribution site may have a newer version, but the same diffs probably won't work with that version.
So you need to provide complete sources, not just diffs, with the binaries.
- Can I make binaries available
on a network server, but send sources only to people who order them?
(
[#AnonFTPAndSendSources](#AnonFTPAndSendSources)) If you make object code available on a network server, you have to provide the Corresponding Source on a network server as well. The easiest way to do this would be to publish them on the same server, but if you'd like, you can alternatively provide instructions for getting the source from another server, or even a
[version control system](#SourceInCVS). No matter what you do, the source should be just as easy to access as the object code, though. This is all specified in section 6(d) of GPLv3.The sources you provide must correspond exactly to the binaries. In particular, you must make sure they are for the same version of the program—not an older version and not a newer version.
- How can I make sure each
user who downloads the binaries also gets the source?
(
[#HowCanIMakeSureEachDownloadGetsSource](#HowCanIMakeSureEachDownloadGetsSource)) You don't have to make sure of this. As long as you make the source and binaries available so that the users can see what's available and take what they want, you have done what is required of you. It is up to the user whether to download the source.
Our requirements for redistributors are intended to make sure the users can get the source code, not to force users to download the source code even if they don't want it.
- Does the GPL require
me to provide source code that can be built to match the exact
hash of the binary I am distributing?
(
[#MustSourceBuildToMatchExactHashOfBinary](#MustSourceBuildToMatchExactHashOfBinary)) Complete corresponding source means the source that the binaries were made from, but that does not imply your tools must be able to make a binary that is an exact hash of the binary you are distributing. In some cases it could be (nearly) impossible to build a binary from source with an exact hash of the binary being distributed — consider the following examples: a system might put timestamps in binaries; or the program might have been built against a different (even unreleased) compiler version.
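A small, common example of why two builds of identical source can hash differently: if the source embeds a build timestamp, as in the sketch below, every rebuild produces a byte-for-byte different binary even though the corresponding source is exactly the same.

```c
/* version.c -- rebuilding this identical file at a different moment
 * normally yields a different executable, because the compiler expands
 * the standard __DATE__ and __TIME__ macros at build time. */
#include <stdio.h>

int main(void)
{
    printf("built on %s at %s\n", __DATE__, __TIME__);
    return 0;
}
```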
- A company
is running a modified version of a GPLed program on a web site.
Does the GPL say they must release their modified sources?
(
[#UnreleasedMods](#UnreleasedMods)) The GPL permits anyone to make a modified version and use it without ever distributing it to others. What this company is doing is a special case of that. Therefore, the company does not have to release the modified sources. The situation is different when the modified program is licensed under the terms of the
[GNU Affero GPL](#UnreleasedModsAGPL).Compare this to a situation where the web site contains or links to separate GPLed programs that are distributed to the user when they visit the web site (often written in
[JavaScript](/philosophy/javascript-trap.html), but other languages are used as well). In this situation the source code for the programs being distributed must be released to the user under the terms of the GPL.- A company is running a modified
version of a program licensed under the GNU Affero GPL (AGPL) on a
web site. Does the AGPL say they must release their modified
sources?
(
[#UnreleasedModsAGPL](#UnreleasedModsAGPL)) The
[GNU Affero GPL](/licenses/agpl.html)requires that modified versions of the software offer all users interacting with it over a computer network an opportunity to receive the source. What the company is doing falls under that meaning, so the company must release the modified source code.- Is making and using multiple copies
within one organization or company “distribution”?
(
[#InternalDistribution](#InternalDistribution)) No, in that case the organization is just making the copies for itself. As a consequence, a company or other organization can develop a modified version and install that version through its own facilities, without giving the staff permission to release that modified version to outsiders.
However, when the organization transfers copies to other organizations or individuals, that is distribution. In particular, providing copies to contractors for use off-site is distribution.
- If someone steals
a CD containing a version of a GPL-covered program, does the GPL
give the thief the right to redistribute that version?
(
[#StolenCopy](#StolenCopy)) If the version has been released elsewhere, then the thief probably does have the right to make copies and redistribute them under the GPL, but if thieves are imprisoned for stealing the CD, they may have to wait until their release before doing so.
If the version in question is unpublished and considered by a company to be its trade secret, then publishing it may be a violation of trade secret law, depending on other circumstances. The GPL does not change that. If the company tried to release its version and still treat it as a trade secret, that would violate the GPL, but if the company hasn't released this version, no such violation has occurred.
- What if a company distributes a copy of
some other developers' GPL-covered work to me as a trade secret?
(
[#TradeSecretRelease](#TradeSecretRelease)) The company has violated the GPL and will have to cease distribution of that program. Note how this differs from the theft case above; the company does not intentionally distribute a copy when a copy is stolen, so in that case the company has not violated the GPL.
- What if a company distributes a copy
of its own GPL-covered work to me as a trade secret?
(
[#TradeSecretRelease2](#TradeSecretRelease2)) If the program distributed does not incorporate anyone else's GPL-covered work, then the company is not violating the GPL (see “
[Is the developer of a GPL-covered program bound by the GPL?](#DeveloperViolate)” for more information). But it is making two contradictory statements about what you can do with that program: that you can redistribute it, and that you can't. It would make sense to demand clarification of the terms for use of that program before you accept a copy.- Why are some GNU libraries released under
the ordinary GPL rather than the Lesser GPL?
(
[#WhySomeGPLAndNotLGPL](#WhySomeGPLAndNotLGPL)) Using the Lesser GPL for any particular library constitutes a retreat for free software. It means we partially abandon the attempt to defend the users' freedom, and some of the requirements to share what is built on top of GPL-covered software. In themselves, those are changes for the worse.
Sometimes a localized retreat is a good strategy. Sometimes, using the LGPL for a library might lead to wider use of that library, and thus to more improvement for it, wider support for free software, and so on. This could be good for free software if it happens to a large extent. But how much will this happen? We can only speculate.
It would be nice to try out the LGPL on each library for a while, see whether it helps, and change back to the GPL if the LGPL didn't help. But this is not feasible. Once we use the LGPL for a particular library, changing back would be difficult.
So we decide which license to use for each library on a case-by-case basis. There is a
[long explanation](/licenses/why-not-lgpl.html)of how we judge the question.- Why should programs say
“Version 3 of the GPL or any later version”?
(
[#VersionThreeOrLater](#VersionThreeOrLater)) From time to time, at intervals of years, we change the GPL—sometimes to clarify it, sometimes to permit certain kinds of use not previously permitted, and sometimes to tighten up a requirement. (The last two changes were in 2007 and 1991.) Using this “indirect pointer” in each program makes it possible for us to change the distribution terms on the entire collection of GNU software, when we update the GPL.
If each program lacked the indirect pointer, we would be forced to discuss the change at length with numerous copyright holders, which would be a virtual impossibility. In practice, the chance of having uniform distribution terms for GNU software would be nil.
Suppose a program says “Version 3 of the GPL or any later version” and a new version of the GPL is released. If the new GPL version gives additional permission, that permission will be available immediately to all the users of the program. But if the new GPL version has a tighter requirement, it will not restrict use of the current version of the program, because it can still be used under GPL version 3. When a program says “Version 3 of the GPL or any later version”, users will always be permitted to use it, and even change it, according to the terms of GPL version 3—even after later versions of the GPL are available.
If a tighter requirement in a new version of the GPL need not be obeyed for existing software, how is it useful? Once GPL version 4 is available, the developers of most GPL-covered programs will release subsequent versions of their programs specifying “Version 4 of the GPL or any later version”. Then users will have to follow the tighter requirements in GPL version 4, for subsequent versions of the program.
However, developers are not obligated to do this; developers can continue allowing use of the previous version of the GPL, if that is their preference.
- Is it a good idea to use a license saying
that a certain program can be used only under the latest version
of the GNU GPL?
(
[#OnlyLatestVersion](#OnlyLatestVersion)) The reason you shouldn't do that is that it could some day result in automatically withdrawing permissions that the users previously had.
Suppose a program was released in 2000 under “the latest GPL version”. At that time, people could have used it under GPLv2. The day we published GPLv3 in 2007, everyone would have been suddenly compelled to use it under GPLv3 instead.
Some users may not even have known about GPL version 3—but they would have been required to use it. They could have violated the program's license unintentionally just because they did not get the news. That's a bad way to treat people.
We think it is wrong to take back permissions already granted, except due to a violation. If your freedom could be revoked, then it isn't really freedom. Thus, if you get a copy of a program version under one version of a license, you should
*always*have the rights granted by that version of the license. Releasing under “GPL version N or any later version” upholds that principle.- Why don't you use the GPL for manuals?
(
[#WhyNotGPLForManuals](#WhyNotGPLForManuals)) It is possible to use the GPL for a manual, but the GNU Free Documentation License (GFDL) is much better for manuals.
The GPL was designed for programs; it contains lots of complex clauses that are crucial for programs, but that would be cumbersome and unnecessary for a book or manual. For instance, anyone publishing the book on paper would have to either include machine-readable “source code” of the book along with each printed copy, or provide a written offer to send the “source code” later.
Meanwhile, the GFDL has clauses that help publishers of free manuals make a profit from selling copies—cover texts, for instance. The special rules for Endorsements sections make it possible to use the GFDL for an official standard. This would permit modified versions, but they could not be labeled as “the standard”.
Using the GFDL, we permit changes in the text of a manual that covers its technical topic. It is important to be able to change the technical parts, because people who change a program ought to change the documentation to correspond. The freedom to do this is an ethical imperative.
Our manuals also include sections that state our political position about free software. We mark these as “invariant”, so that they cannot be changed or removed. The GFDL makes provisions for these “invariant sections”.
- How does the GPL apply to fonts?
(
[#FontException](#FontException)) Font licensing is a complex issue which needs serious consideration. The following license exception is experimental but approved for general use. We welcome suggestions on this subject—please see this [explanatory essay](http://www.fsf.org/blogs/licensing/20050425novalis) and write to [[email protected]](mailto:[email protected]).
To use this exception, add this text to the license notice of each file in the package (to the extent possible), at the end of the text that says the file is distributed under the GNU GPL:
As a special exception, if you create a document which uses this font, and embed this font or unaltered portions of this font into the document, this font does not by itself cause the resulting document to be covered by the GNU General Public License. This exception does not however invalidate any other reasons why the document might be covered by the GNU General Public License. If you modify this font, you may extend this exception to your version of the font, but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version.
- I am writing a website maintenance system
(called a “
[content management system](/philosophy/words-to-avoid.html#Content)” by some), or some other application which generates web pages from templates. What license should I use for those templates? ([#WMS](#WMS)) Templates are minor enough that it is not worth using copyleft to protect them. It is normally harmless to use copyleft on minor works, but templates are a special case, because they are combined with data provided by users of the application and the combination is distributed. So, we recommend that you license your templates under simple permissive terms.
Some templates make calls into JavaScript functions. Since JavaScript is often nontrivial, it is worth copylefting. Because the templates will be combined with user data, it's possible that template+user data+JavaScript would be considered one work under copyright law. A line needs to be drawn between the JavaScript (copylefted) and the user code (usually under incompatible terms).
Here's an exception for JavaScript code that does this:
As a special exception to the GPL, any HTML file which merely makes function calls to this code, and for that purpose includes it by reference shall be deemed a separate work for copyright law purposes. In addition, the copyright holders of this code give you permission to combine this code with free software libraries that are released under the GNU LGPL. You may copy and distribute such a system following the terms of the GNU GPL for this code and the LGPL for the libraries. If you modify this code, you may extend this exception to your version of the code, but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version.
- Can I release
a program under the GPL which I developed using nonfree tools?
(
[#NonFreeTools](#NonFreeTools)) Which programs you used to edit the source code, or to compile it, or study it, or record it, usually makes no difference for issues concerning the licensing of that source code.
However, if you link nonfree libraries with the source code, that would be an issue you need to deal with. It does not preclude releasing the source code under the GPL, but if the libraries don't fit under the “system library” exception, you should affix an explicit notice giving permission to link your program with them.
[The FAQ entry about using GPL-incompatible libraries](#GPLIncompatibleLibs)provides more information about how to do that.- Are there translations
of the GPL into other languages?
(
[#GPLTranslations](#GPLTranslations)) It would be useful to have translations of the GPL into languages other than English. People have even written translations and sent them to us. But we have not dared to approve them as officially valid. That carries a risk so great we do not dare accept it.
A legal document is in some ways like a program. Translating it is like translating a program from one language and operating system to another. Only a lawyer skilled in both languages can do it—and even then, there is a risk of introducing a bug.
If we were to approve, officially, a translation of the GPL, we would be giving everyone permission to do whatever the translation says they can do. If it is a completely accurate translation, that is fine. But if there is an error in the translation, the results could be a disaster which we could not fix.
If a program has a bug, we can release a new version, and eventually the old version will more or less disappear. But once we have given everyone permission to act according to a particular translation, we have no way of taking back that permission if we find, later on, that it had a bug.
Helpful people sometimes offer to do the work of translation for us. If the problem were a matter of finding someone to do the work, this would solve it. But the actual problem is the risk of error, and offering to do the work does not avoid the risk. We could not possibly authorize a translation written by a non-lawyer.
Therefore, for the time being, we are not approving translations of the GPL as globally valid and binding. Instead, we are doing two things:
Referring people to unofficial translations. This means that we permit people to write translations of the GPL, but we don't approve them as legally valid and binding.
An unapproved translation has no legal force, and it should say so explicitly. It should be marked as follows:
This translation of the GPL is informal, and not officially approved by the Free Software Foundation as valid. To be completely sure of what is permitted, refer to the original GPL (in English).
But the unapproved translation can serve as a hint for how to understand the English GPL. For many users, that is sufficient.
However, businesses using GNU software in commercial activity, and people doing public ftp distribution, should check the real English GPL to make sure of what it permits.
Publishing translations valid for a single country only.
We are considering the idea of publishing translations which are officially valid only for one country. This way, if there is a mistake, it will be limited to that country, and the damage will not be too great.
It will still take considerable expertise and effort from a sympathetic and capable lawyer to make a translation, so we cannot promise any such translations soon.
- If a programming language interpreter has a
license that is incompatible with the GPL, can I run GPL-covered
programs on it?
(
[#InterpreterIncompat](#InterpreterIncompat)) When the interpreter just interprets a language, the answer is yes. The interpreted program, to the interpreter, is just data; the GPL doesn't restrict what tools you process the program with.
However, when the interpreter is extended to provide “bindings” to other facilities (often, but not necessarily, libraries), the interpreted program is effectively linked to the facilities it uses through these bindings. The JNI or Java Native Interface is an example of such a facility; libraries that are accessed in this way are linked dynamically with the Java programs that call them.
So if these facilities are released under a GPL-incompatible license, the situation is like linking in any other way with a GPL-incompatible library. Which implies that:
- If you are writing code and releasing it under the GPL, you can state an explicit exception giving permission to link it with those GPL-incompatible facilities.
- If you wrote and released the program under the GPL, and you designed it specifically to work with those facilities, people can take that as an implicit exception permitting them to link it with those facilities. But if that is what you intend, it is better to say so explicitly.
- You can't take someone else's GPL-covered code and use it that way, or add such exceptions to it. Only the copyright holders of that code can add the exception.
- Who has the power to enforce the GPL?
(
[#WhoHasThePower](#WhoHasThePower)) Since the GPL is a copyright license, it can be enforced by the copyright holders of the software. If you see a violation of the GPL, you should inform the developers of the GPL-covered software involved. They either are the copyright holders, or are connected with the copyright holders.
In addition, we encourage the use of any legal mechanism available to users for obtaining complete and corresponding source code, as is their right, and enforcing full compliance with the GNU GPL. After all, we developed the GNU GPL to make software free for all its users.
- In an object-oriented language such as Java,
if I use a class that is GPLed without modifying, and subclass it,
in what way does the GPL affect the larger program?
(
[#OOPLang](#OOPLang)) Subclassing is creating a derivative work. Therefore, the terms of the GPL affect the whole program where you create a subclass of a GPLed class.
- If I port my program to GNU/Linux,
does that mean I have to release it as free software under the GPL
or some other Free Software license?
(
[#PortProgramToGPL](#PortProgramToGPL)) In general, the answer is no—this is not a legal requirement. In specific, the answer depends on which libraries you want to use and what their licenses are. Most system libraries either use the
[GNU Lesser GPL](/licenses/lgpl.html), or use the GNU GPL plus an exception permitting linking the library with anything. These libraries can be used in nonfree programs; but in the case of the Lesser GPL, it does have some requirements you must follow.
Some libraries are released under the GNU GPL alone; you must use a GPL-compatible license to use those libraries. But these are normally the more specialized libraries, and you would not have had anything much like them on another platform, so you probably won't find yourself wanting to use these libraries for simple porting.
Of course, your software is not a contribution to our community if it is not free, and people who value their freedom will refuse to use it. Only people willing to give up their freedom will use your software, which means that it will effectively function as an inducement for people to lose their freedom.
If you hope some day to look back on your career and feel that it has contributed to the growth of a good and free society, you need to make your software free.
- I just found out that a company has a
copy of a GPLed program, and it costs money to get it. Aren't they
violating the GPL by not making it available on the Internet?
(
[#CompanyGPLCostsMoney](#CompanyGPLCostsMoney)) No. The GPL does not require anyone to use the Internet for distribution. It also does not require anyone in particular to redistribute the program. And (outside of one special case), even if someone does decide to redistribute the program sometimes, the GPL doesn't say he has to distribute a copy to you in particular, or any other person in particular.
What the GPL requires is that he must have the freedom to distribute a copy to you
*if he wishes to*. Once the copyright holder does distribute a copy of the program to someone, that someone can then redistribute the program to you, or to anyone else, as he sees fit.
- Can I release a program with a license which
says that you can distribute modified versions of it under the GPL
but you can't distribute the original itself under the GPL?
(
[#ReleaseNotOriginal](#ReleaseNotOriginal)) No. Such a license would be self-contradictory. Let's look at its implications for me as a user.
Suppose I start with the original version (call it version A), add some code (let's imagine it is 1000 lines), and release that modified version (call it B) under the GPL. The GPL says anyone can change version B again and release the result under the GPL. So I (or someone else) can delete those 1000 lines, producing version C which has the same code as version A but is under the GPL.
If you try to block that path, by saying explicitly in the license that I'm not allowed to reproduce something identical to version A under the GPL by deleting those lines from version B, in effect the license now says that I can't fully use version B in all the ways that the GPL permits. In other words, the license does not in fact allow a user to release a modified version such as B under the GPL.
- Does moving a copy to a majority-owned,
and controlled, subsidiary constitute distribution?
(
[#DistributeSubsidiary](#DistributeSubsidiary)) Whether moving a copy to or from this subsidiary constitutes “distribution” is a matter to be decided in each case under the copyright law of the appropriate jurisdiction. The GPL does not and cannot override local laws. US copyright law is not entirely clear on the point, but appears not to consider this distribution.
If, in some country, this is considered distribution, and the subsidiary must receive the right to redistribute the program, that will not make a practical difference. The subsidiary is controlled by the parent company; rights or no rights, it won't redistribute the program unless the parent company decides to do so.
- Can software installers ask people
to click to agree to the GPL? If I get some software under the GPL,
do I have to agree to anything?
(
[#ClickThrough](#ClickThrough)) Some software packaging systems have a place which requires you to click through or otherwise indicate assent to the terms of the GPL. This is neither required nor forbidden. With or without a click through, the GPL's rules remain the same.
Merely agreeing to the GPL doesn't place any obligations on you. You are not required to agree to anything to merely use software which is licensed under the GPL. You only have obligations if you modify or distribute the software. If it really bothers you to click through the GPL, nothing stops you from hacking the GPLed software to bypass this.
- I would
like to bundle GPLed software with some sort of installation software.
Does that installer need to have a GPL-compatible license?
(
[#GPLCompatInstaller](#GPLCompatInstaller)) No. The installer and the files it installs are separate works. As a result, the terms of the GPL do not apply to the installation software.
- Some distributors of GPLed software
require me in their umbrella EULAs or as part of their downloading
process to “represent and warrant” that I am located in
the US or that I intend to distribute the software in compliance with
relevant export control laws. Why are they doing this and is it a
violation of those distributors' obligations under GPL?
(
[#ExportWarranties](#ExportWarranties)) This is not a violation of the GPL. Those distributors (almost all of whom are commercial businesses selling free software distributions and related services) are trying to reduce their own legal risks, not to control your behavior. Export control law in the United States
*might* make them liable if they knowingly export software into certain countries, or if they give software to parties they know will make such exports. By asking for these statements from their customers and others to whom they distribute software, they protect themselves in the event they are later asked by regulatory authorities what they knew about where software they distributed was going to wind up. They are not restricting what you can do with the software, only preventing themselves from being blamed with respect to anything you do. Because they are not placing additional restrictions on the software, they do not violate section 10 of GPLv3 or section 6 of GPLv2.
The FSF opposes the application of US export control laws to free software. Not only are such laws incompatible with the general objective of software freedom, they achieve no reasonable governmental purpose, because free software is currently and should always be available from parties in almost every country, including countries that have no export control laws and which do not participate in US-led trade embargoes. Therefore, no country's government is actually deprived of free software by US export control laws, while no country's citizens *should* be deprived of free software, regardless of their governments' policies, as far as we are concerned. Copies of all GPL-licensed software published by the FSF can be obtained from us without making any representation about where you live or what you intend to do. At the same time, the FSF understands the desire of commercial distributors located in the US to comply with US laws. They have a right to choose to whom they distribute particular copies of free software; exercise of that right does not violate the GPL unless they add contractual restrictions beyond those permitted by the GPL.
- Can I use
GPLed software on a device that will stop operating if customers do
not continue paying a subscription fee?
(
[#SubscriptionFee](#SubscriptionFee)) No. In this scenario, the requirement to keep paying a fee limits the user's ability to run the program. This is an additional requirement on top of the GPL, and the license prohibits it.
- How do I upgrade from (L)GPLv2 to (L)GPLv3?
(
[#v3HowToUpgrade](#v3HowToUpgrade)) First, include the new version of the license in your package. If you're using LGPLv3 in your project, be sure to include copies of both GPLv3 and LGPLv3, since LGPLv3 is now written as a set of additional permissions on top of GPLv3.
Second, replace all your existing v2 license notices (usually at the top of each file) with the new recommended text available on
[the GNU licenses howto](/licenses/gpl-howto.html). It's more future-proof because it no longer includes the FSF's postal mailing address.
Of course, any descriptive text (such as in a README) which talks about the package's license should also be updated appropriately.
- How does GPLv3 make BitTorrent distribution easier?
(
[#BitTorrent](#BitTorrent)) Because GPLv2 was written before peer-to-peer distribution of software was common, it is difficult to meet its requirements when you share code this way. The best way to make sure you are in compliance when distributing GPLv2 object code on BitTorrent would be to include all the corresponding source in the same torrent, which is prohibitively expensive.
GPLv3 addresses this problem in two ways. First, people who download this torrent and send the data to others as part of that process are not required to do anything. That's because section 9 says “Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance [of the license].”
Second, section 6(e) of GPLv3 is designed to give distributors—people who initially seed torrents—a clear and straightforward way to provide the source, by telling recipients where it is available on a public network server. This ensures that everyone who wants to get the source can do so, and it's almost no hassle for the distributor.
- What is tivoization? How does GPLv3 prevent it?
(
[#Tivoization](#Tivoization)) Some devices utilize free software that can be upgraded, but are designed so that users are not allowed to modify that software. There are lots of different ways to do this; for example, sometimes the hardware checksums the software that is installed, and shuts down if it doesn't match an expected signature. The manufacturers comply with GPLv2 by giving you the source code, but you still don't have the freedom to modify the software you're using. We call this practice tivoization.
When people distribute User Products that include software under GPLv3, section 6 requires that they provide you with information necessary to modify that software. User Products is a term specially defined in the license; examples of User Products include portable music players, digital video recorders, and home security systems.
- Does GPLv3 prohibit DRM?
(
[#DRMProhibited](#DRMProhibited)) It does not; you can use code released under GPLv3 to develop any kind of DRM technology you like. However, if you do this, section 3 says that the system will not count as an effective technological “protection” measure, which means that if someone breaks the DRM, she will be free to distribute her software too, unhindered by the DMCA and similar laws.
As usual, the GNU GPL does not restrict what people do in software, it just stops them from restricting others.
- Can I use the GPL to license hardware?
(
[#GPLHardware](#GPLHardware)) Any material that can be copyrighted can be licensed under the GPL. GPLv3 can also be used to license materials covered by other copyright-like laws, such as semiconductor masks. So, as an example, you can release a drawing of a physical object or circuit under the GPL.
In many situations, copyright does not cover making physical hardware from a drawing. In these situations, your license for the drawing simply can't exert any control over making or selling physical hardware, regardless of the license you use. When copyright does cover making hardware, for instance with IC masks, the GPL handles that case in a useful way.
- I use public key cryptography to sign my code to
assure its authenticity. Is it true that GPLv3 forces me to release
my private signing keys?
(
[#GiveUpKeys](#GiveUpKeys)) No. The only time you would be required to release signing keys is if you conveyed GPLed software inside a User Product, and its hardware checked the software for a valid cryptographic signature before it would function. In that specific case, you would be required to provide anyone who owned the device, on demand, with the key to sign and install modified software on the device so that it will run. If each instance of the device uses a different key, then you need only give each purchaser a key for that instance.
- Does GPLv3 require that voters be able to
modify the software running in a voting machine?
(
[#v3VotingMachine](#v3VotingMachine)) No. Companies distributing devices that include software under GPLv3 are at most required to provide the source and Installation Information for the software to people who possess a copy of the object code. The voter who uses a voting machine (like any other kiosk) doesn't get possession of it, not even temporarily, so the voter also does not get possession of the binary software in it.
Note, however, that voting is a very special case. Just because the software in a computer is free does not mean you can trust the computer for voting. We believe that computers cannot be trusted for voting. Voting should be done on paper.
- Does GPLv3 have a “patent retaliation
clause”?
(
[#v3PatentRetaliation](#v3PatentRetaliation)) In effect, yes. Section 10 prohibits people who convey the software from filing patent suits against other licensees. If someone did so anyway, section 8 explains how they would lose their license and any patent licenses that accompanied it.
- Can I use snippets of GPL-covered
source code within documentation that is licensed under some license
that is incompatible with the GPL?
(
[#SourceCodeInDocumentation](#SourceCodeInDocumentation)) If the snippets are small enough that you can incorporate them under fair use or similar laws, then yes. Otherwise, no.
- The beginning of GPLv3 section 6 says that I can
convey a covered work in object code form “under the terms of
sections 4 and 5” provided I also meet the conditions of
section 6. What does that mean?
(
[#v3Under4and5](#v3Under4and5)) This means that all the permissions and conditions you have to convey source code also apply when you convey object code: you may charge a fee, you must keep copyright notices intact, and so on.
- My company owns a lot of patents.
Over the years we've contributed code to projects under “GPL
version 2 or any later version”, and the project itself has
been distributed under the same terms. If a user decides to take the
project's code (incorporating my contributions) under GPLv3, does
that mean I've automatically granted GPLv3's explicit patent license
to that user?
(
[#v2OrLaterPatentLicense](#v2OrLaterPatentLicense)) No. When you convey GPLed software, you must follow the terms and conditions of one particular version of the license. When you do so, that version defines the obligations you have. If users may also elect to use later versions of the GPL, that's merely an additional permission they have—it does not require you to fulfill the terms of the later version of the GPL as well.
Do not take this to mean that you can threaten the community with your patents. In many countries, distributing software under GPLv2 provides recipients with an implicit patent license to exercise their rights under the GPL. Even if it didn't, anyone considering enforcing their patents aggressively is an enemy of the community, and we will defend ourselves against such an attack.
- If I distribute a proprietary
program that links against an LGPLv3-covered library that I've
modified, what is the “contributor version” for purposes of
determining the scope of the explicit patent license grant I'm
making—is it just the library, or is it the whole
combination?
(
[#LGPLv3ContributorVersion](#LGPLv3ContributorVersion)) The “contributor version” is only your version of the library.
- Is GPLv3 compatible with GPLv2?
(
[#v2v3Compatibility](#v2v3Compatibility)) No. Many requirements have changed from GPLv2 to GPLv3, which means that the precise requirement of GPLv2 is not present in GPLv3, and vice versa. For instance, the Termination conditions of GPLv3 are considerably more permissive than those of GPLv2, and thus different from the Termination conditions of GPLv2.
Due to these differences, the two licenses are not compatible: if you tried to combine code released under GPLv2 with code under GPLv3, you would violate section 6 of GPLv2.
However, if code is released under GPL “version 2 or later,” that is compatible with GPLv3 because GPLv3 is one of the options it permits.
- Does GPLv2 have a requirement about delivering installation
information?
(
[#InstInfo](#InstInfo)) GPLv3 explicitly requires redistribution to include the full necessary “Installation Information.” GPLv2 doesn't use that term, but it does require redistribution to include
scripts used to control compilation and installation of the executable
with the complete and corresponding source code. This covers part, but not all, of what GPLv3 calls “Installation Information.” Thus, GPLv3's requirement about installation information is stronger.
- What does it mean to “cure” a violation of GPLv3?
(
[#Cure](#Cure)) To cure a violation means to adjust your practices to comply with the requirements of the license.
- The warranty and liability
disclaimers in GPLv3 seem specific to U.S. law. Can I add my own
disclaimers to my own code?
(
[#v3InternationalDisclaimers](#v3InternationalDisclaimers)) Yes. Section 7 gives you permission to add your own disclaimers, specifically 7(a).
- My program has interactive user
interfaces that are non-visual in nature. How can I comply with the
Appropriate Legal Notices requirement in GPLv3?
(
[#NonvisualLegalNotices](#NonvisualLegalNotices)) All you need to do is ensure that the Appropriate Legal Notices are readily available to the user in your interface. For example, if you have written an audio interface, you could include a command that reads the notices aloud.
- If I give a copy of a GPLv3-covered
program to a coworker at my company, have I “conveyed” the
copy to that coworker?
(
[#v3CoworkerConveying](#v3CoworkerConveying)) As long as you're both using the software in your work at the company, rather than personally, then the answer is no. The copies belong to the company, not to you or the coworker. This copying is propagation, not conveying, because the company is not making copies available to others.
- If I distribute a GPLv3-covered
program, can I provide a warranty that is voided if the user modifies
the program?
(
[#v3ConditionalWarranty](#v3ConditionalWarranty)) Yes. Just as devices do not need to be warranted if users modify the software inside them, you are not required to provide a warranty that covers all possible activities someone could undertake with GPLv3-covered software.
- Why did you decide to write the GNU Affero GPLv3
as a separate license?
(
[#SeparateAffero](#SeparateAffero)) Early drafts of GPLv3 allowed licensors to add an Affero-like requirement to publish source in section 7. However, some companies that develop and rely upon free software consider this requirement to be too burdensome. They want to avoid code with this requirement, and expressed concern about the administrative costs of checking code for this additional requirement. By publishing the GNU Affero GPLv3 as a separate license, with provisions in it and GPLv3 to allow code under these licenses to link to each other, we accomplish all of our original goals while making it easier to determine which code has the source publication requirement.
- Why did you invent the new terms
“propagate” and “convey” in GPLv3?
(
[#WhyPropagateAndConvey](#WhyPropagateAndConvey)) The term “distribute” used in GPLv2 was borrowed from United States copyright law. Over the years, we learned that some jurisdictions used this same word in their own copyright laws, but gave it different meanings. We invented these new terms to make our intent as clear as possible no matter where the license is interpreted. They are not used in any copyright law in the world, and we provide their definitions directly in the license.
- I'd like to license my code under the GPL, but I'd
also like to make it clear that it can't be used for military and/or
commercial uses. Can I do this?
(
[#NoMilitary](#NoMilitary)) No, because those two goals contradict each other. The GNU GPL is designed specifically to prevent the addition of further restrictions. GPLv3 allows a very limited set of them, in section 7, but any other added restriction can be removed by the user.
More generally, a license that limits who can use a program, or for what, is
[not a free software license](/philosophy/programs-must-not-limit-freedom-to-run.html).
- Is “convey” in GPLv3 the same
thing as what GPLv2 means by “distribute”?
(
[#ConveyVsDistribute](#ConveyVsDistribute)) Yes, more or less. During the course of enforcing GPLv2, we learned that some jurisdictions used the word “distribute” in their own copyright laws, but gave it different meanings. We invented a new term to make our intent clear and avoid any problems that could be caused by these differences.
- GPLv3 gives “making available to the
public” as an example of propagation. What does this mean?
Is making available a form of conveying?
(
[#v3MakingAvailable](#v3MakingAvailable)) One example of “making available to the public” is putting the software on a public web or FTP server. After you do this, some time may pass before anybody actually obtains the software from you—but because it could happen right away, you need to fulfill the GPL's obligations right away as well. Hence, we defined conveying to include this activity.
- Since distribution and making
available to the public are forms of propagation that are also
conveying in GPLv3, what are some examples of propagation that do not
constitute conveying?
(
[#PropagationNotConveying](#PropagationNotConveying)) Making copies of the software for yourself is the main form of propagation that is not conveying. You might do this to install the software on multiple computers, or to make backups.
- Does prelinking a
GPLed binary to various libraries on the system, to optimize its
performance, count as modification?
(
[#Prelinking](#Prelinking)) No. Prelinking is part of a compilation process; it doesn't introduce any license requirements above and beyond what other aspects of compilation would. If you're allowed to link the program to the libraries at all, then it's fine to prelink with them as well. If you distribute prelinked object code, you need to follow the terms of section 6.
- If someone installs GPLed software on a laptop, and
then lends that laptop to a friend without providing source code for
the software, have they violated the GPL?
(
[#LaptopLoan](#LaptopLoan)) No. In the jurisdictions where we have investigated this issue, this sort of loan would not count as conveying. The laptop's owner would not have any obligations under the GPL.
- Suppose that two companies try to
circumvent the requirement to provide Installation Information by
having one company release signed software, and the other release a
User Product that only runs signed software from the first company. Is
this a violation of GPLv3?
(
[#TwoPartyTivoization](#TwoPartyTivoization)) Yes. If two parties try to work together to get around the requirements of the GPL, they can both be pursued for copyright infringement. This is especially true since the definition of convey explicitly includes activities that would make someone responsible for secondary infringement.
- Am I complying with GPLv3 if I offer binaries on an
FTP server and sources by way of a link to a source code repository
in a version control system, like CVS or Subversion?
(
[#SourceInCVS](#SourceInCVS)) This is acceptable as long as the source checkout process does not become burdensome or otherwise restrictive. Anybody who can download your object code should also be able to check out source from your version control system, using a publicly available free software client. Users should be provided with clear and convenient instructions for how to get the source for the exact object code they downloaded—they may not necessarily want the latest development code, after all.
- Can someone who conveys GPLv3-covered
software in a User Product use remote attestation to prevent a user
from modifying that software?
(
[#RemoteAttestation](#RemoteAttestation)) No. The definition of Installation Information, which must be provided with source when the software is conveyed inside a User Product, explicitly says: “The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.” If the device uses remote attestation in some way, the Installation Information must provide you some means for your modified software to report itself as legitimate.
- What does “rules and protocols for
communication across the network” mean in GPLv3?
(
[#RulesProtocols](#RulesProtocols)) This refers to rules about traffic you can send over the network. For example, if there is a limit on the number of requests you can send to a server per day, or the size of a file you can upload somewhere, your access to those resources may be denied if you do not respect those limits.
These rules do not include anything that does not pertain directly to data traveling across the network. For instance, if a server on the network sent messages for users to your device, your access to the network could not be denied merely because you modified the software so that it did not display the messages.
- Distributors that provide Installation Information
under GPLv3 are not required to provide “support service”
for the product. What kind of “support service” do you mean?
(
[#SupportService](#SupportService)) This includes the kind of service many device manufacturers provide to help you install, use, or troubleshoot the product. If a device relies on access to web services or similar technology to function properly, those should normally still be available to modified versions, subject to the terms in section 6 regarding access to a network.
- In GPLv3 and AGPLv3, what does it mean when it
says “notwithstanding any other provision of this License”?
(
[#v3Notwithstanding](#v3Notwithstanding)) This simply means that the following terms prevail over anything else in the license that may conflict with them. For example, without this text, some people might have claimed that you could not combine code under GPLv3 with code under AGPLv3, because the AGPL's additional requirements would be classified as “further restrictions” under section 7 of GPLv3. This text makes clear that our intended interpretation is the correct one, and you can make the combination.
This text only resolves conflicts between different terms of the license. When there is no conflict between two conditions, then you must meet them both. These paragraphs don't grant you carte blanche to ignore the rest of the license—instead they're carving out very limited exceptions.
- Under AGPLv3, when I modify the Program
under section 13, what Corresponding Source does it have to offer?
(
[#AGPLv3CorrespondingSource](#AGPLv3CorrespondingSource)) “Corresponding Source” is defined in section 1 of the license, and you should provide what it lists. So, if your modified version depends on libraries under other licenses, such as the Expat license or GPLv3, the Corresponding Source should include those libraries (unless they are System Libraries). If you have modified those libraries, you must provide your modified source code for them.
The last sentence of the first paragraph of section 13 is only meant to reinforce what most people would have naturally assumed: even though combinations with code under GPLv3 are handled through a special exception in section 13, the Corresponding Source should still include the code that is combined with the Program this way. This sentence does not mean that you
*only* have to provide the source that's covered under GPLv3; instead it means that such code is *not* excluded from the definition of Corresponding Source.
- In AGPLv3, what counts as
“interacting with [the software] remotely through a computer
network?”
(
[#AGPLv3InteractingRemotely](#AGPLv3InteractingRemotely)) If the program is expressly designed to accept user requests and send responses over a network, then it meets these criteria. Common examples of programs that would fall into this category include web and mail servers, interactive web-based applications, and servers for games that are played online.
If a program is not expressly designed to interact with a user through a network, but is being run in an environment where it happens to do so, then it does not fall into this category. For example, an application is not required to provide source merely because the user is running it over SSH, or a remote X session.
- How does GPLv3's concept of
“you” compare to the definition of “Legal Entity”
in the Apache License 2.0?
(
[#ApacheLegalEntity](#ApacheLegalEntity)) They're effectively identical. The definition of “Legal Entity” in the Apache License 2.0 is very standard in various kinds of legal agreements—so much so that it would be very surprising if a court did not interpret the term in the same way in the absence of an explicit definition. We fully expect them to do the same when they look at GPLv3 and consider who qualifies as a licensee.
- In GPLv3, what does “the Program”
refer to? Is it every program ever released under GPLv3?
(
[#v3TheProgram](#v3TheProgram)) The term “the Program” means one particular work that is licensed under GPLv3 and is received by a particular licensee from an upstream licensor or distributor. The Program is the particular work of software that you received in a given instance of GPLv3 licensing, as you received it.
“The Program” cannot mean “all the works ever licensed under GPLv3”; that interpretation makes no sense for a number of reasons. We've published an
[analysis of the term “the Program”](/licenses/gplv3-the-program.html) for those who would like to learn more about this.
- If I only make copies of a
GPL-covered program and run them, without distributing or conveying them to
others, what does the license require of me?
(
[#NoDistributionRequirements](#NoDistributionRequirements)) Nothing. The GPL does not place any conditions on this activity.
- If some network client software is
released under AGPLv3, does it have to be able to provide source to
the servers it interacts with?
(
[#AGPLv3ServerAsUser](#AGPLv3ServerAsUser))
AGPLv3 requires a program to offer source code to “all users interacting with it remotely through a computer network.” It doesn't matter if you call the program a “client” or a “server,” the question you need to ask is whether or not there is a reasonable expectation that a person will be interacting with the program remotely over a network.
- For software that runs a proxy server licensed
under the AGPL, how can I provide an offer of source to users
interacting with that code?
(
[#AGPLProxy](#AGPLProxy)) For software on a proxy server, you can provide an offer of source through a normal method of delivering messages to users of that kind of proxy. For example, a Web proxy could use a landing page. When users initially start using the proxy, you can direct them to a page with the offer of source along with any other information you choose to provide.
The AGPL says you must make the offer to “all users.” If you know that a certain user has already been shown the offer, for the current version of the software, you don't have to repeat it to that user again.
- How are the various GNU licenses
compatible with each other?
(
[#AllCompatibility](#AllCompatibility)) The various GNU licenses enjoy broad compatibility between each other. The only time you may not be able to combine code under two of these licenses is when you want to use code that's
*only* under an older version of a license with code that's under a newer version.
Below is a detailed compatibility matrix for various combinations of the GNU licenses, to provide an easy-to-use reference for specific cases. It assumes that someone else has written some software under one of these licenses, and you want to somehow incorporate code from that into a project that you're releasing (either your own original work, or a modified version of someone else's software). Find the license for your project in a column at the top of the table, and the license for the other code in a row on the left. The cell where they meet will tell you whether or not this combination is permitted.
When we say “copy code,” we mean just that: you're taking a section of code from one source, with or without modification, and inserting it into your own program, thus forming a work based on the first section of code. “Use a library” means that you're not copying any source directly, but instead interacting with it through linking, importing, or other typical mechanisms that bind the sources together when you compile or run the code.
Each place that the matrix states GPLv3, the same statement about compatibility is true for AGPLv3 as well.
[Compatibility matrix not reproduced here. Columns: the license you want for your own code — GPLv2 only, GPLv2 or later, GPLv3 or later, LGPLv2.1 only, LGPLv2.1 or later, LGPLv3 or later. Rows: the license of the code you want to copy, or of the library you want to use (the same six options). Each cell states whether the combination is permitted, in many cases subject to one of the footnotes below.]
1: You must follow the terms of GPLv2 when incorporating the code in this case. You cannot take advantage of terms in later versions of the GPL.
2: While you may release under GPLv2-or-later both your original work, and/or modified versions of work you received under GPLv2-or-later, the GPLv2-only code that you're using must remain under GPLv2 only. As long as your project depends on that code, you won't be able to upgrade the license of your own code to GPLv3-or-later, and the work as a whole (any combination of both your project and the other code) can only be conveyed under the terms of GPLv2.
3: If you have the ability to release the project under GPLv2 or any later version, you can choose to release it under GPLv3 or any later version—and once you do that, you'll be able to incorporate the code released under GPLv3.
4: If you have the ability to release the project under LGPLv2.1 or any later version, you can choose to release it under LGPLv3 or any later version—and once you do that, you'll be able to incorporate the code released under LGPLv3.
5: You must follow the terms of LGPLv2.1 when incorporating the code in this case. You cannot take advantage of terms in later versions of the LGPL.
6: If you do this, as long as the project contains the code released under LGPLv2.1 only, you will not be able to upgrade the project's license to LGPLv3 or later.
7: LGPLv2.1 gives you permission to relicense the code under any version of the GPL since GPLv2. If you can switch the LGPLed code in this case to using an appropriate version of the GPL instead (as noted in the table), you can make this combination.
8: LGPLv3 is GPLv3 plus extra permissions that you can ignore in this case.
9: Because GPLv2 does not permit combinations with LGPLv3, you must convey the project under GPLv3's terms in this case, since it will allow that combination. |
8,835 | Linux 系统开机启动项清理 | https://www.linux.com/learn/cleaning-your-linux-startup-process | 2017-09-03T16:37:49 | [
"Systemd",
"启动",
"服务"
] | https://linux.cn/article-8835-1.html | 
一般情况下,常规用途的 Linux 发行版在开机启动时拉起各种相关服务进程,包括许多你可能无需使用的服务,例如<ruby> 蓝牙 <rt> bluetooth </rt></ruby>、Avahi、 <ruby> 调制解调管理器 <rt> ModemManager </rt></ruby>、ppp-dns(LCTT 译注:此处作者笔误 ppp-dns 应该为 pppd-dns) 等服务进程,这些都是什么东西?用于哪里,有何功能?
Systemd 提供了许多很好的工具用于查看系统启动情况,也可以控制在系统启动时运行什么。在这篇文章中,我将说明在 Systemd 类发行版中如何关闭一些令人讨厌的进程。
### 查看开机启动项
在过去,你能很容易通过查看 `/etc/init.d` 了解到哪些服务进程会在引导时启动。Systemd 以不同的方式展现,你可以使用如下命令罗列允许开机启动的服务进程。
```
$ systemctl list-unit-files --type=service | grep enabled
accounts-daemon.service enabled
anacron-resume.service enabled
anacron.service enabled
bluetooth.service enabled
brltty.service enabled
[...]
```
在此列表顶部,对我来说,蓝牙服务是冗余项,因为在该电脑上我不需要使用蓝牙功能,故无需运行此服务。下面的命令将停止该服务进程,并且使其开机不启动。
```
$ sudo systemctl stop bluetooth.service
$ sudo systemctl disable bluetooth.service
```
你可以通过下面命令确定是否操作成功。
```
$ systemctl status bluetooth.service
bluetooth.service - Bluetooth service
Loaded: loaded (/lib/systemd/system/bluetooth.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:bluetoothd(8)
```
停用的服务进程仍然能够被另外一个服务进程启动。如果你真的不想让它在任何情况下随系统启动,无需卸载它,只需要把它<ruby> 掩盖 <rt> mask </rt></ruby>起来,就可以阻止该进程在任何情况下开机启动。
```
$ sudo systemctl mask bluetooth.service
Created symlink from /etc/systemd/system/bluetooth.service to /dev/null.
```
一旦你对禁用该进程启动而没有出现负面作用感到满意,你也可以选择卸载该程序。
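如果决定卸载,具体的包名因发行版而异。下面是一个示意性的小例子(假设是 Debian/Ubuntu 系发行版,且蓝牙服务由 `bluez` 软件包提供;包名和路径请先用查询命令自行确认):

```
$ dpkg -S /lib/systemd/system/bluetooth.service    # 查询该服务文件属于哪个软件包
bluez: /lib/systemd/system/bluetooth.service
$ sudo apt remove bluez                            # 卸载提供该服务的软件包
```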
通过执行命令可以获得如下服务列表:
```
$ systemctl list-unit-files --type=service
UNIT FILE STATE
accounts-daemon.service enabled
acpid.service disabled
alsa-restore.service static
alsa-utils.service masked
```
你不能启用或禁用静态(static)服务,因为静态服务是被其他进程作为依赖使用的,并不会自己独立运行。
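如果想按状态单独查看某一类服务,可以使用 `systemctl list-unit-files` 的 `--state` 选项。下面是一个示意(输出内容因系统而异):

```
$ systemctl list-unit-files --type=service --state=static
UNIT FILE                STATE
alsa-restore.service     static
[...]
$ systemctl list-unit-files --type=service --state=masked
```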
### 哪些服务能够禁止?
如何知道你需要哪些服务,而哪些又是可以安全地禁用的呢?它总是依赖于你的个性化需求。
这里举例了几个服务进程的作用。许多服务进程都是发行版特定的,所以你应该看看你的发行版文档(比如通过 google 或 StackOverflow)。
* **accounts-daemon.service** 是一个潜在的安全风险。它是 AccountsService 的一部分,AccountsService 允许程序获得或操作用户账户信息。我不认为有好的理由能使我允许这样的后台操作,所以我选择<ruby> 掩盖 <rt> mask </rt></ruby>该服务进程。
* **avahi-daemon.service** 用于零配置网络发现,使电脑超容易发现网络中打印机或其他的主机。我总是禁用它,也从未因此感到不便。
* **brltty.service** 提供布莱叶盲文设备支持,例如布莱叶盲文显示器。
* **debug-shell.service** 开放了一个巨大的安全漏洞(该服务提供了一个无密码的 root shell,用于帮助调试 systemd 问题),除非你正在使用该服务,否则永远不要启用它。
* **ModemManager.service** 该服务是一个被 dbus 激活的守护进程,用于提供移动<ruby> 宽频 <rt> broadband </rt></ruby>(2G/3G/4G)接口,如果你没有该接口,无论是内置接口,还是通过如蓝牙配对的电话,以及 USB 适配器,那么你也无需该服务。
* **pppd-dns.service** 是一个计算机发展的遗物,如果你使用拨号接入互联网的话,保留它,否则你不需要它。
* **rtkit-daemon.service** 听起来很可怕,像是某种 rootkit。但是你需要该服务,因为它是一个<ruby> 实时内核调度器 <rt> real-time kernel scheduler </rt></ruby>。
* **whoopsie.service** 是 Ubuntu 错误报告服务。它用于收集 Ubuntu 系统崩溃报告,并发送报告到 <https://daisy.ubuntu.com> 。 你可以放心地禁止其启动,或者永久的卸载它。
* **wpa\_supplicant.service** 仅在你使用 Wi-Fi 连接时需要。
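对于上面列出的、你确认自己不需要的服务,可以用一个简单的 shell 循环一次性停止并禁用。下面只是一个示意(列表中的服务名请换成你自己核实过的,有些服务在你的系统上可能并不存在):

```
$ for svc in bluetooth.service ModemManager.service whoopsie.service; do
    sudo systemctl stop $svc       # 先停止正在运行的服务
    sudo systemctl disable $svc    # 再取消其开机启动
  done
```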
### 系统启动时发生了什么?
Systemd 提供了一些命令帮助调试系统开机启动问题。该命令会重演你的系统启动的所有消息。
```
$ journalctl -b
-- Logs begin at Mon 2016-05-09 06:18:11 PDT,
end at Mon 2016-05-09 10:17:01 PDT. --
May 16 06:18:11 studio systemd-journal[289]:
Runtime journal (/run/log/journal/) is currently using 8.0M.
Maximum allowed usage is set to 157.2M.
Leaving at least 235.9M free (of currently available 1.5G of space).
Enforced usage limit is thus 157.2M.
[...]
```
通过命令 `journalctl -b -1` 可以复审前一次启动,`journalctl -b -2` 可以复审倒数第 2 次启动,以此类推。
该命令会打印出大量的信息,你可能并不关注所有信息,只是关注其中问题相关部分。为此,系统提供了几个过滤器,用于帮助你锁定目标。让我们以进程号为 1 的进程为例,该进程是所有其它进程的父进程。
```
$ journalctl _PID=1
May 08 06:18:17 studio systemd[1]: Starting LSB: Raise network interfaces....
May 08 06:18:17 studio systemd[1]: Started LSB: Raise network interfaces..
May 08 06:18:17 studio systemd[1]: Reached target System Initialization.
May 08 06:18:17 studio systemd[1]: Started CUPS Scheduler.
May 08 06:18:17 studio systemd[1]: Listening on D-Bus System Message Bus Socket
May 08 06:18:17 studio systemd[1]: Listening on CUPS Scheduler.
[...]
```
这些打印消息显示了什么被启动,或者是正在尝试启动。
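除了按进程号过滤,`journalctl` 也可以用 `-u` 按单元(unit)过滤,在排查某个具体服务的启动问题时往往更直接。下面是一个示意性的例子(以前文提到的蓝牙服务为例):

```
$ journalctl -b -u bluetooth.service
```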
最有用的命令工具之一是 `systemd-analyze blame`,它用于帮助查看哪个服务进程启动耗时最长。
```
$ systemd-analyze blame
8.708s gpu-manager.service
8.002s NetworkManager-wait-online.service
5.791s mysql.service
2.975s dev-sda3.device
1.810s alsa-restore.service
1.806s systemd-logind.service
1.803s irqbalance.service
1.800s lm-sensors.service
1.800s grub-common.service
```
这个特定的例子没有出现任何异常,但是如果存在系统启动瓶颈,则该命令将能发现它。
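需要注意的是,很多服务是并行启动的,单个服务耗时长并不一定就是瓶颈。想进一步确认,可以试试 `systemd-analyze` 的其它用法,例如(示意):

```
$ systemd-analyze                   # 显示内核和用户空间各自的总启动耗时
$ systemd-analyze critical-chain    # 显示位于启动关键路径上的单元链
```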
你也能通过如下资源了解 Systemd 如何工作:
* [理解和使用 Systemd](https://www.linux.com/learn/understanding-and-using-systemd)
* [介绍 Systemd 运行级别和服务管理命令](https://www.linux.com/learn/intro-systemd-runlevels-and-service-management-commands)
* [再次前行,另一个 Linux 初始化系统:Systemd 介绍](https://www.linux.com/learn/here-we-go-again-another-linux-init-intro-systemd)
---
via: <https://www.linux.com/learn/cleaning-your-linux-startup-process>
作者:[Carla Schroder](https://www.linux.com/users/cschroder) 译者:[penghuster](https://github.com/penghuster) 校对:[wxy](https://github.com/wxy)
本文由 LCTT 原创编译,Linux中国 荣誉推出
| 301 | Moved Permanently | null |
8,836 | Linux 1.0 之旅:回顾这一切的开始 | https://opensource.com/article/17/8/linux-anniversary | 2017-09-03T20:02:29 | [
"Linux",
"SLS"
] | https://linux.cn/article-8836-1.html |
>
> 通过安装 SLS 1.05 展示了 Linux 内核在这 26 年间走过了多远。
>
>
>

我第一次安装 Linux 是在 1993 年。那时我跑的是 MS-DOS,但我真的很喜欢学校机房电脑的 Unix 系统,就在那里度过了我大学本科时光。 当我听说了 Linux,一个 Unix 的免费版本,可以在我家的 386 电脑上运行的时候,我立刻就想要试试。我的第一个 Linux 发行版是 [Softlanding Linux System](https://en.wikipedia.org/wiki/Softlanding_Linux_System) (SLS) 1.03,带有 11 级补丁的 0.99 alpha 版本的 Linux 内核。它要求高达 2 MB 的内存,如果你想要编译项目需要 4 MB,运行 X windows 则需要 8 MB。
我认为 Linux 相较于 MS-DOS 世界是一个巨大的进步。 尽管 Linux 缺乏运行在 MS-DOS 上的广泛的应用及游戏,但我发现 Linux 带给我的是巨大的灵活性。不像 MS-DOS ,现在我可以进行真正的多任务,同时运行不止一个程序。并且 Linux 提供了丰富的工具,包括一个 C 语言编译器,让我可以构建自己的项目。
一年后,我升级到了 SLS 1.05,它带有全新的 Linux 内核 1.0。更重要的是,Linux 1.0 引入了内核模块。有了内核模块,你不再需要为支持新硬件而重新编译整个内核;取而代之,只需要从 Linux 内核附带的 63 个模块里加载一个就行。SLS 1.05 的发行自述文件中包含这样一段关于模块的说明:
>
> 内核的模块化明确旨在减少并最终消除重新编译内核的需求,无论是为了变更、修改设备驱动,还是为了动态使用不常用的驱动。也许更为重要的是,各个工作小组的工作不再会影响内核本身的开发。事实上,现在以二进制形式发布官方内核应该已经成为可能。
>
>
>
在 8 月 25 日,Linux 内核将迎来它的第 26 周年(LCTT 译注:已经过去了 =.= )。为了庆祝,我重新安装了 SLS 1.05 来提醒自己 Linux 1.0 内核是什么样子,去认识 Linux 自二十世纪 90 年代以来走了多远。和我一起踏上 Linux 的怀旧之旅吧!
### 安装
SLS 是第一个真正的 “发行版”,因为它包含一个安装程序。 尽管安装过程并不像现代发行版一样顺畅。 不能从 CD-ROM 启动安装,我需要从安装软盘启动我的系统,然后从 **login** 提示中运行安装程序。

在 SLS 1.05 中引入的一个漂亮的功能是支持彩色的文本模式安装器。当我选择彩色模式时,安装器切换到一个带有黑色文字的亮蓝色背景,不再是我们祖祖辈辈们使用的原始的普通黑白文本。

SLS 安装器是个简单的东西,文本从屏幕底部滚动而上,显示其做的工作。通过响应一些简单的提示,我就能够创建一个 Linux 分区,在上面建立 ext2 文件系统,并安装 Linux。安装包含了 X windows 和开发工具的 SLS 1.05,需要大约 85 MB 的磁盘空间。依照今天的标准这听起来可能不是很多,但在 Linux 1.0 出来的时候,120 MB 的硬盘才是主流配置。


### 系统级别
当我第一次启动到 Linux 时,让我想起来了一些关于这个早期版本 Linux 系统的事情。首先,Linux 没有占据很多的空间。在启动系统之后运行一些程序来检查的时候,Linux 占用了不到 4 MB 的内存。在一个拥有 16MB 内存的系统中,这就意味着节省了很多内存用来运行程序。

熟悉的 `/proc` 元文件系统在 Linux 1.0 就存在了,尽管对比我们今天在现代系统上看到的,它并不能提供许多信息。在 Linux 1.0, `/proc` 包含一些接口来探测类似 `meminfo` 和 `stat` 之类的基本系统状态。

在这个系统上的 `/etc` 文件目录非常简单。值得一提的是,SLS 1.05 借用了来自 [BSD Unix](https://en.wikipedia.org/wiki/Berkeley_Software_Distribution) 的 **rc** 脚本来控制系统启动。 初始化是通过 **rc** 脚本进行的,由 `rc.local` 文件来定义本地系统的调整。后来,许多 Linux 发行版采用了来自 [Unix System V](https://en.wikipedia.org/wiki/UNIX_System_V) 的很相似的 **init** 脚本,后来又是 [systemd](https://en.wikipedia.org/wiki/Systemd) 初始化系统。

### 你能做些什么
随着我的系统启动运行起来,接下来就可以使用了。那么,在这样的早期 Linux 系统上你能做些什么?
让我们从基本的文件管理开始。每次登录的时候,SLS 都会提示你使用 Softlanding 菜单界面(MESH),这是一个文件管理程序,现代的用户们可能觉得它和 [Midnight Commander](https://midnight-commander.org/) 很相似。而二十世纪 90 年代的用户们则更可能拿 MESH 与 [Norton Commander](https://en.wikipedia.org/wiki/Norton_Commander) 相比,后者可以说是 MS-DOS 上最流行的第三方文件管理程序。
")
除了 MESH 之外,在 SLS 1.05 中还少量包含了一些全屏应用程序。你可以找到熟悉的用户工具,包括 Elm 邮件阅读器、GNU Emacs 可编程编辑器,以及古老的 Vim 编辑器。


SLS 1.05 甚至包含了一个可以让你在终端玩的俄罗斯方块版本。

在二十世纪 90 年代,多数住宅的网络接入是通过拨号连接的,所以 SLS 1.05 包含了 Minicom 调制解调器拨号程序。Minicom 提供一个与调制解调器的直接连接,并需要用户通过贺氏调制解调器的 **AT** 命令来完成一些像是拨号或挂电话这样的基础功能。Minicom 同样支持宏和其他简单功能来使连接你的本地调制解调器池更容易。

但如果你想要写一篇文档时怎么办? SLS 1.05 的存在要比 LibreOffice 或者 OpenOffice 早很长时间。在二十世纪 90 年代,Linux 还没有这些应用。相反,如果你想要使用一个文字处理器,可能需要引导你的系统进入 MS-DOS,然后运行你喜欢的文字处理器程序,如 WordPerfect 或者共享软件 GalaxyWrite。
但是所有的 Unix 系统都包含一套简单的文本格式化程序,叫做 nroff 和 troff。在 Linux 系统中,他们被合并成 GNU groff 包,而 SLS 1.05 包含了 groff 的一个版本。我在 SLS 1.05 上的一项测试就是用 nroff 生成一个简单的文本文档。


### 运行 X windows
获取安装 X windows 并不特别容易,如 SLS 安装文件承诺的那样:
>
> 在你的 PC 上获取安装 X windows 可能会有一些发人深省的体验,主要是因为 PC 的显示卡类型太多。Linux X11 仅支持 VGA 类型的显示卡,但在许多类型的 VGA 中仅有个别的某些类型是完全支持的。SLS 存在两种 X windows 服务器。全彩的 XFree86,支持一些或所有 ET3000、ET400、PVGA1、GVGA、Trident、S3、8514、Accelerated cards、ATI plus 等。
>
>
> 另一个服务器 XF86\_Mono,能够工作在几乎所有的 VGA 卡上,但只提供单色模式。因此,相比于彩色服务器,它会占用更少的内存并拥有更快的速度。当然就是看起来不怎么漂亮。
>
>
> X windows 的配置信息都堆放在目录 “/usr/X386/lib/X11/”。需要注意的是,“Xconfig” 文件为监视器和显示卡定义了时序。默认情况下,X windows 设置使用彩色服务器,如果彩色服务器出现问题,你可以切换到单色服务器 x386mono,因为它已经支持各种标准的 VGA。本质上,这只是将 /usr/X386/bin/X 链接到它。
>
>
> 只需要编辑 Xconfig 来设置鼠标驱动类型和时序,然后键入 “startx” 即可。
>
>
>
这些听起来令人困惑,但它就是这样。手工配置 X windows 真的可以是一个发人深省的体验。幸好,SLS 1.05 包含了 syssetup 程序来帮你确定系统组件的种类,包括了 X windows 的显示设置。在一些提示过后,经过一些实验和调整,最终我成功启动了 X windows!

但这是来自于 1994 年的 X windows,它仍然并没有桌面的概念。我可以从 FVWM (一个虚拟窗口管理器)或 TWM (选项卡式的窗口管理器)中选择。TWM 直观地设置提供一个功能简单的图形环境。

### 关机
我已经在这趟 Linux 寻根之旅中沉浸许久,终于到了回到现代桌面的时候了。最初我跑 Linux 的是一台仅有 8MB 内存和一个 120MB 硬盘驱动器的 32 位 386 电脑,而我现在的系统已经强大得多了。拥有双核 64 位 Intel Core i5 处理器,4 GB 内存和一个 128 GB 的固态硬盘,我可以在我的运行着 Linux 内核 4.11.11 的系统上做更多事情。那么,在我的 SLS 1.05 的实验结束之后,是时候离开了。

再见,Linux 1.0。很高兴看到你的茁壮成长。
(题图:图片来源:[litlnemo](https://www.flickr.com/photos/litlnemo/19777182/)。由 Opnesource.com 修改。[CC BY-SA 2.0.](https://creativecommons.org/licenses/by-sa/2.0/))
---
via: <https://opensource.com/article/17/8/linux-anniversary>
作者:[Jim Hall](https://opensource.com/users/jim-hall) 译者:[softpaopao](https://github.com/softpaopao) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | I first installed Linux in 1993. I ran MS-DOS at the time, but I really liked the Unix systems in our campus computer lab, where I spent much of my time as an undergraduate university student. When I heard about Linux, a free version of Unix that I could run on my 386 computer at home, I immediately wanted to try it out. My first Linux distribution was [Softlanding Linux System](https://en.wikipedia.org/wiki/Softlanding_Linux_System) (SLS) 1.03, with Linux kernel 0.99 alpha patch level 11. That required a whopping 2MB of RAM, or 4MB if you wanted to compile programs, and 8MB to run X windows.
I thought Linux was a huge step up from the world of MS-DOS. While Linux lacked the breadth of applications and games available on MS-DOS, I found Linux gave me a greater degree of flexibility. Unlike MS-DOS, I could now do true multi-tasking, running more than one program at a time. And Linux provided a wealth of tools, including a C compiler that I could use to build my own programs.
A year later, I upgraded to SLS 1.05, which sported the brand-new Linux kernel 1.0. More importantly, Linux 1.0 introduced kernel modules. With modules, you no longer needed to completely recompile your kernel to support new hardware; instead you loaded one of the 63 included Linux kernel modules. SLS 1.05 included this note about modules in the distribution's README file:
Modularization of the kernel is aimed squarely at reducing, and eventually eliminating, the requirements for recompiling the kernel, either for changing/modifying device drivers or for dynamic access to infrequently required drivers. More importantly, perhaps, the efforts of individual working groups need no longer affect the development of the kernel proper. In fact, a binary release of the official kernel should now be possible.
On August 25, the Linux kernel will reach its 26th anniversary. To celebrate, I reinstalled SLS 1.05 to remind myself what the Linux 1.0 kernel was like and to recognize how far Linux has come since the 1990s. Join me on this journey into Linux nostalgia!
## Installation
Softlanding Linux System was the first true "distribution" that included an install program. Yet the install process isn't the same smooth process you find in modern distributions. Instead of booting from an install CD-ROM, I needed to boot my system from an install floppy, then run the install program from the **login** prompt.

opensource.com
A neat feature introduced in SLS 1.05 was the color-enabled text-mode installer. When I selected color mode, the installer switched to a light blue background with black text, instead of the plain white-on-black text used by our primitive forbearers.

opensource.com
The SLS installer is a simple affair, scrolling text from the bottom of the screen, but it does the job. By responding to a few simple prompts, I was able to create a partition for Linux, put an ext2 filesystem on it, and install Linux. Installing SLS 1.05, including X windows and development tools, required about 85MB of disk space. That may not sound like much space by today's standards, but when Linux 1.0 came out, 120MB hard drives were still common.

opensource.com

opensource.com
## System level
When I first booted into Linux, my memory triggered a few system things about this early version of Linux. First, Linux doesn't take up much space. After booting the system and running a few utilities to check it out, Linux occupied less than 4MB of memory. On a system with 16MB of memory, that meant lots left over to run programs.

opensource.com
The familiar **/proc** meta filesystem exists in Linux 1.0, although it doesn't provide much information compared to what you see in modern systems. In Linux 1.0, **/proc** includes interfaces to probe basic system statistics like **meminfo** and **stat**.

opensource.com
The **/etc** directory on this system is pretty bare. Notably, SLS 1.05 borrows the **rc** scripts from [BSD Unix](https://en.wikipedia.org/wiki/Berkeley_Software_Distribution) to control system startup. Everything gets started via **rc** scripts, with local system changes defined in the **rc.local** file. Later, most Linux distributions would adopt the more familiar **init** scripts from [Unix System V](https://en.wikipedia.org/wiki/UNIX_System_V), then the [systemd](https://en.wikipedia.org/wiki/Systemd) initialization system.

opensource.com
## What you can do
With my system up and running, it was time to get to work. So, what can you do with this early Linux system?
Let's start with basic file management. Every time you log in, SLS reminds you about the Softlanding menu shell (MESH), a file-management program that modern users might recognize as similar to [Midnight Commander](https://midnight-commander.org/). Users in the 1990s would have compared MESH more closely to [Norton Commander](https://en.wikipedia.org/wiki/Norton_Commander), arguably the most popular third-party file manager available on MS-DOS.

opensource.com
Aside from MESH, there are relatively few full-screen applications included with SLS 1.05. But you can find the familiar user tools, including the Elm mail reader, the GNU Emacs programmable editor, and the venerable Vim editor.

opensource.com

opensource.com
SLS 1.05 even included a version of Tetris that you could play at the terminal.

opensource.com
In the 1990s, most residential internet access was via dial-up connections, so SLS 1.05 included the Minicom modem-dialer application. Minicom provided a direct connection to the modem and required users to navigate the Hayes modem **AT** commands to do basic functions like dial a number or hang up the phone. Minicom also supported macros and other neat features to make it easier to connect to your local modem pool.

opensource.com
But what if you wanted to write a document? SLS 1.05 existed long before the likes of LibreOffice or OpenOffice. Linux just didn't have those applications in the early 1990s. Instead, if you wanted to use a word processor, you likely booted your system into MS-DOS and ran your favorite word processor program, such as WordPerfect or the shareware GalaxyWrite.
But all Unix systems include a set of simple text formatting programs, called nroff and troff. On Linux systems, these are combined into the GNU groff package, and SLS 1.05 includes a version of groff. One of my tests with SLS 1.05 was to generate a simple text document using nroff.
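If you've never used these tools, here is a small, purely illustrative sketch of what plain nroff input and output look like; the file name and text are made up for this example, not the actual document from my test:
```
$ cat hello.tr
.\" .ce centers the next input line; .sp inserts a blank line
.ce
My SLS 1.05 test document
.sp
nroff fills and adjusts this running text by default,
so ragged input lines come out as tidy paragraphs.
$ nroff hello.tr | head
```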

opensource.com

opensource.com
## Running X windows
Getting X windows to perform was not exactly easy, as the SLS install file promised:
Getting X windows to run on your PC can sometimes be a bit of a sobering experience, mostly because there are so many types of video cards for the PC. Linux X11 supports only VGA type video cards, but there are so many types of VGAs that only certain ones are fully supported. SLS comes with two X windows servers. The full color one, XFree86, supports some or all ET3000, ET4000, PVGA1, GVGA, Trident, S3, 8514, Accelerated cards, ATI plus, and others.
The other server, XF86_Mono, should work with virtually any VGA card, but only in monochrome mode. Accordingly, it also uses less memory and should be faster than the color one. But of course it doesn't look as nice.
The bulk of the X windows configuration information is stored in the directory "/usr/X386/lib/X11/". In particular, the file "Xconfig" defines the timings for the monitor and the video card. By default, X windows is set up to use the color server, but you can switch to using the monochrome server x386mono, if the color one gives you trouble, since it should support any standard VGA. Essentially, this just means making /usr/X386/bin/X a link to it.
Just edit Xconfig to set the mouse device type and timings, and enter "startx".
If that sounds confusing, it is. Configuring X windows by hand really can be a sobering experience. Fortunately, SLS 1.05 included the syssetup program to help you define various system components, including display settings for X windows. After a few prompts, and some experimenting and tweaking, I was finally able to launch X windows!

opensource.com
But this is X windows from 1994, and the concept of a desktop didn't exist yet. My options were either FVWM (a virtual window manager) or TWM (the tabbed window manager). TWM was straightforward to set up and provided a simple, yet functional, graphical environment.

opensource.com
## Shutdown
As much as I enjoyed exploring my Linux roots, eventually it was time to return to my modern desktop. I originally ran Linux on a 32-bit 386 computer with just 8MB of memory and a 120MB hard drive, and my system today is much more powerful. I can do so much more on my dual-core, 64-bit Intel Core i5 CPU with 4GB of memory and a 128GB solid-state drive running Linux kernel 4.11.11. So, after my experiments with SLS 1.05 were over, it was time to leave.

opensource.com
So long, Linux 1.0. It's good to see how well you've grown up.
|
8,839 | Oracle 终于干掉了 Sun! | https://meshedinsights.com/2017/09/03/oracle-finally-killed-sun/ | 2017-09-05T09:48:00 | [
"Sun",
"Oracle"
] | https://linux.cn/article-8839-1.html |
>
> **随着 Solaris 团队的彻底完蛋,看起来 Sun 微系统公司最终连块骨头都没剩下。**
>
>
>

来自前 Sun 社区的消息表明,[一月份的传闻](http://www.mercurynews.com/2017/01/20/oracle-lays-off-450-employees/)(Oracle 裁员 450 人)成为了现实,上周五,[Oracle 裁掉了 Solaris 和 SPARC 团队的核心员工](https://twitter.com/drewfisher314/status/903804762373537793),选择这个时候(<ruby> 美国劳动节 <rp> ( </rp> <rt> Labor Day </rt> <rp> ) </rp></ruby>前的周末)或许是希望这个消息被大众所忽略。

可以肯定,这意味着对于这些产品只会剩下骨架级的维护,特别是, [Solaris 12 也取消了](https://arstechnica.com/information-technology/2017/01/oracle-sort-of-confirms-demise-of-solaris-12-effort/)。这是一个典型的 Oracle 式的“<ruby> 无声结局 <rp> ( </rp> <rt> silent EOL </rt> <rp> ) </rp></ruby>”,无论他们怎么声称保证履行对富士通和其它客户的合同条款。
随着该硬件的淘汰,我估计这是最后一个被 Oracle 收购处置的 Sun 资产了。在收购 Sun 微系统公司之后,[Oracle 的埃里森对 Sun 的管理毫不留情](http://www.businessinsider.com/wow-larry-ellison-just-tore-ex-sun-ceo-jonathan-schwartz-a-new-one-2010-5?IR=T),而且最大化地利用其手中的机会。那么,Sun 的资产都在 Oracle 手里都有什么遭遇呢?我并不是很关注 Oracle 的业务,但是可以从下列报告中一窥:
* Java 被称之为“皇冠上的宝石”,但是其买下 Java SE 的[真实原因](https://www.theguardian.com/technology/2012/apr/18/oracle-google-court-smartphone)其实是——试图起诉 Google 赔偿 80 亿美金,而且还两次败诉。
* 埃里森说 [Java 是中间件成功的关键](https://betanews.com/2009/04/20/industry-in-a-box-sun-acquisition-will-lead-to-oracle-java/),但是现在 Java EE 正准备[交给一个开源基金会](https://www.redhat.com/en/blog/java-ee-moving-open-source-foundation)。
* Oracle 批评 Sun 没能从 Java 上挣到钱(忽略了 1996 - 2000 年 Java 使硬件市场获利的事实)并提出了一个根本不会产生收入的[免费模型](http://www.theregister.co.uk/2010/11/06/oracle_dueling_jvms/)。
* 他们 [拥抱了 NetBeans](https://www.infoworld.com/article/2627862/application-development/oracle-hails-java-but-kills-sun-cloud.html),而现在它被捐献给了 Apache 基金会。
* 对 [MySQL 安全修复](http://www.bytebot.net/blog/archives/2012/07/20/security-fixes-in-mysql-critical-patch-updates)的官僚主义导致一部分社区成员转而和 MySQL 创始人 Monty 创建了 MariaDB 分支,而新分支已经足以支撑起[一家公司](http://mariadb.com/)。
* 埃里森说要[重建 Sun 的硬件业务](https://web.archive.org/web/20100516032914/http://abcnews.go.com:80/Business/wireStory?id=10630034),但是[其负责人已经在一个月前离职了](https://www.theregister.co.uk/2017/08/02/oracle_john_fowler_bails/),而剩下的员工也属于裁员的一部分。
* 尽管 Sun 的前老板 [McNealy 明白 Solaris 只有开源才能赢得市场](https://www.theregister.co.uk/2010/12/07/mcnealy_sun_and_open_source/?page=3),Oracle 表示“你说的没错”,然后把 Solaris [弄死了](http://www.osnews.com/story/23683/Oracle_Kills_OpenSolaris_Moves_Development_Behind_Closed_Doors)。其结果就是,上周末的裁员——而这在今年一月份就已经有了传闻。
* [Hudson 处理失当](https://www.infoq.com/news/2011/01/hudson-jenkins2),意味着其 CI 业务只能跟在 CloudBees 从 Hubson 分支而来的 Jenkins 后面。
* Oracle 放弃了 Sun 的身份管理项目，而现在 Forgerock 用它撑起了一个价值五亿美金的业务，为那些被 Oracle 疏远的客户提供服务。
* Oracle 决定 [取消 Sun Cloud](https://www.infoworld.com/article/2627862/application-development/oracle-hails-java-but-kills-sun-cloud.html) ,并拆除了 Solaris 中的云服务功能,然后现在是云服务的天下了。
* Oracle 重命名了 StarOffice,然后[宣布了一个云端版本](https://web.archive.org/web/20101217212955/http://www.oracle.com/us/corporate/press/195766),但是没卵用。感觉该项目将走向末日,遭受到了严厉对待的社区决定另起锅灶去做 LibreOffice。
* 只有 VirtualBox 看起来还好。
埃里森并没有去理解 Sun 失败的真正原因——[在开源 Solaris 上花了太长时间](https://www.theregister.co.uk/2010/12/07/mcnealy_sun_and_open_source/?page=3)，以及在 2000 - 2002 年间试图用市场为主导的方式取代 Sun 一贯的以工程为主导的方式——反而把责任归咎于那位受命从 McNealy、Zander、Tolliver 及其团队的领导失败所留下的废墟中尽力抢救残局的人。埃里森从来没有理解过 Schwartz 所采取的开创性做法，而是在[博客中嘲讽](https://web.archive.org/web/20100516032944/http://abcnews.go.com:80/Business/wirestory?id=10630034&page=2)并[取消了所有正在进行中的“科学项目”](https://web.archive.org/web/20100516032949/http://abcnews.go.com:80/Business/wirestory?id=10630034&page=3)，拆除合作伙伴渠道，疏远开源社区。
与本周 HPE 将放弃的旧产品[卖给 Micro Focus](https://www.ft.com/content/16ce31c4-8d5e-11e7-9084-d0c17942ba93) ,让其照料这些产品的做法相反,Oracle 的做法更残酷。Oracle [说过它将“重振 Sun 品牌”](https://www.infoworld.com/article/2627785/m-a/oracle-s-ambitious-plans-for-integrating-sun-s-technology.html),但是事实上他们杀死的产品比 Sun 管理层曾经管理过的都要多——毫无疑问,这是“交易的艺术”。今天,和许多前 Sun 同事一样,这件事使我非常悲伤。
*(本文由 [Patreon 赞助支持](https://patreon.com/webmink), 欢迎你也成为支持者之一!)*
| 200 | OK | **With the Solaris team gutted, it looks like the Sun skeleton has finally been picked clean.**

The news from the ex-Sun community jungle drums is that the [January rumours](http://www.mercurynews.com/2017/01/20/oracle-lays-off-450-employees/) were true and Oracle [laid off the core talent of the Solaris and SPARC teams](https://twitter.com/drewfisher314/status/903804762373537793) on Friday (perhaps hoping to get the news lost in the Labor Day weekend). With [90% gone](http://dtrace.org/blogs/bmc/2017/09/04/the-sudden-death-and-eternal-life-of-solaris/) according to Bryan Cantrill that surely has to mean either a skeleton-staffed maintenance-only future for the product range, especially with [Solaris 12 cancelled](https://arstechnica.com/information-technology/2017/01/oracle-sort-of-confirms-demise-of-solaris-12-effort/), or an attempt to [force Solaris workloads onto Oracle’s SPARC Cloud offering](https://www.theregister.co.uk/2017/09/05/solaris_update_plan_is_real_but_its_future_looks_cloudy_by_design/). A classic Oracle “silent EOL”, no matter what they claim as they satisfy their contractual commitments to Fujitsu and others.
On acquisition, [Ellison was scathing about Sun’s management](http://www.businessinsider.com/wow-larry-ellison-just-tore-ex-sun-ceo-jonathan-schwartz-a-new-one-2010-5?IR=T) and sure he was going to max out the opportunity. So just how good were Oracle’s decisions with Sun’s assets? I’m not really following Oracle’s business day-to-day, but here’s what seems obvious from reports:
- Java was described as the “crown jewels”, but the
[real reason](https://www.theguardian.com/technology/2012/apr/18/oracle-google-court-smartphone)for buying Java SE – trying to sue $8bn from Google – has failed twice. Maybe it will still yield on SCOTUS appeal? - Ellison said
[Java’s role in middleware](https://betanews.com/2009/04/20/industry-in-a-box-sun-acquisition-will-lead-to-oracle-java/)was the key to success, but Java EE is now[headed to a Foundation](https://www.redhat.com/en/blog/java-ee-moving-open-source-foundation). - Oracle criticised Sun for “failing to monetise” Java (ignoring the fact Java made the market which Sun monetised with hardware in 1996-2000) and proposed a
[freemium model](http://www.theregister.co.uk/2010/11/06/oracle_dueling_jvms/)that’s not resulted in revenue. - They
[embraced NetBeans](https://www.infoworld.com/article/2627862/application-development/oracle-hails-java-but-kills-sun-cloud.html), which is now[donated to Apache](https://meshedinsights.com/2016/09/15/oracle-gets-it-right-netbeans-heads-to-apache/)(a good thing). - Bureaucracy over
[MySQL security fixes](http://www.bytebot.net/blog/archives/2012/07/20/security-fixes-in-mysql-critical-patch-updates)led to a decent portion of the user community going over to Monty’s MariaDB fork, enough to start[a company](http://mariadb.com/)around. - Ellison said he would
[rebuild Sun’s hardware business](https://web.archive.org/web/20100516032914/http://abcnews.go.com:80/Business/wireStory?id=10630034), but[its boss quit a month ago](https://www.theregister.co.uk/2017/08/02/oracle_john_fowler_bails/)and the team behind it was part of the lay-off. - Despite
[McNealy understanding that Solaris had to be open to win in the market](https://www.theregister.co.uk/2010/12/07/mcnealy_sun_and_open_source/?page=3), Oracle hyped it up and[closed it down](http://www.osnews.com/story/23683/Oracle_Kills_OpenSolaris_Moves_Development_Behind_Closed_Doors). The result was this week’s layoffs, foreshadowed extensively in January. [Mishandling Hudson](https://www.infoq.com/news/2011/01/hudson-jenkins2)meant the CI business followed the Jenkins fork to CloudBees.- Oracle abandoned Sun’s identity management projects and now Forgerock uses them in a business valued around a half billion dollars powered by the customers Oracle alienated.
- Oracle decided to
[cancel Sun Cloud](https://www.infoworld.com/article/2627862/application-development/oracle-hails-java-but-kills-sun-cloud.html)and dismantled the ready-for-cloud features of Solaris, then the market went Cloud. - Oracle renamed StarOffice and
[announced a cloud version](https://web.archive.org/web/20101217212955/http://www.oracle.com/us/corporate/press/195766)but couldn’t make it fly. Sensing the impending EOL of the project and alienated by heavy-handed treatment the community jumped ship to LibreOffice. - Just VirtualBox seems unscathed.
Instead of understanding the real failures at Sun – [taking too long to open source Solaris](https://www.theregister.co.uk/2010/12/07/mcnealy_sun_and_open_source/?page=3) and attempting a marketing-led approach in 2000-2002 instead of Sun’s traditional engineering-led approach – Ellison blamed the man who was landed with the task of rescuing whatever he could from the smouldering ruins left by [McNealy, Zander, Tolliver and their clan](https://www.itworld.com/article/2771904/business/where-did-sun-go-wrong-.html) and their leadership failure. Ellison never understood the pioneering approach Schwartz was taking, instead [sneering at blogging](https://web.archive.org/web/20100516032944/http://abcnews.go.com:80/Business/wirestory?id=10630034&page=2) and [calling all the work-in-progress “science projects”](https://web.archive.org/web/20100516032949/http://abcnews.go.com:80/Business/wirestory?id=10630034&page=3) while dismantling the partner channels and alienating the open source community.
The contrast with the approach HPE completed this week with its unwanted legacy products, doing [a deal with Micro Focus](https://www.ft.com/content/16ce31c4-8d5e-11e7-9084-d0c17942ba93) to look after them, could not be more stark. Oracle [said it was going to “reinvigorate the Sun brand”](https://www.infoworld.com/article/2627785/m-a/oracle-s-ambitious-plans-for-integrating-sun-s-technology.html) but instead has killed it more dead than any Sun executive managed – the “art of the deal” no doubt. Along with many former Sun staff today, that makes me very sad.
*(This post made possible by Patreon patrons. You could be one too!)*
In my opinion Oracle has always been very similar to Microsoft, with only difference, that somehow Oracle managed to present its image much better than Microsoft.
Don’t you feel Microsoft image is getting a lot better now? With Azure and O365 plus Win 10. New CEO has changed the direction.
The refocus on cloud has led to changes of public stance that value community and especially developer reputation more, sure. The fundamentals in the desktop business (not respecting individual control of systems and using patents to shake down innovators for unearned revenue) is still in place, just not advertised.
Don’t forget the sad OpenOffice.org story…
I admit I was avoiding mentioning it, but as everyone has asked have added it to the list.
@Libre, you beat me to OpenOffice. When MariaDB came out, I moved there. Who in their right mind would deal with Larry when there is a good alternative.
I also want to give a thumbs up to Judge William Haskell Alsup. He did a great job of educating himself about how software has been developed from epoch. Without his dedication, things could be much worse in the software copyright/patent space.
The solaris legacy though has born fruit – http://www.omniosce.org
It’s so sad that the “about” page doesn’t even tell what it’s about. 😦 How would I know that’s a solaris or illumos fork without a lot more googling.
I agree with Libre, you are missing the OpenOffice.org story, which had to be forked to LibreOffice.org as Oracle kept ignoring the contributions being made by the community, until in the end it just got forked.
As one of the laid-off, it’s a sad end of a long road. Solaris 12 has been in development for over 5 years, with many features backported to Solaris 11.1, 11.2, and 11.3. But technically 12.0 is not cancelled but rebranded as 11.4. The open question: can the remaining skeleton Solaris staff get a quality 11.4 product out the door.
What do you think of the theory that Oracle is attempting to move Solaris workloads onto the SPARC Cloud offering? Presumably a skeleton team would be able to maintain just “one” instance like that.
Netbeans lives on. JDeveloper was migrated to Netbeans Platform so if Oracle wants to continue enhancing JDeveloper the Netbeans devs need to continue employed. Virtualbox is used widely as a platform for devs, won’ t go away anytime soon. Yes Solaris 12 was removed from the roadmap, but replaced with Solaris 11.next. So, what is the hurry, what pressing features does solaris need? Everyone and their mother is moving to “the cloud” FAST. Nobody except Google-Amazon-IBM is buying blades to run in-house anymore. The market and sales are hence slowing. It all comes down to MAKING MONEY which is something Ellison understands and Sun did not, as it was hemorraging money during its last few decades.
Another innacuracy in your article, Oracle did not sue Google for DESKTOP Java but for
MOBILE JAVA
Desktop Java is alive and well… in fact Java 9 will be released soon, with OpenJDK made the reference implementation by none other than Oracle.
Did I mention that Oracle Linux is also doing well? Oracle’s cloud business is doing well too.
SPARC is a though call. Competing with AMD and Intel on the top and ARM from below is EXPENSIVE. Again, Oracle wants to make money from services, much like IBM which also de-emphasized its PowerPC business.
Hi Max,
Yes, there are current-market justifications for various decisions, and things like Netbeans do live on. What I’m discussing in this article is not that. When Oracle bought Sun, they brutally criticised its current leadership and bragged about how they were going to do really great things, much better things than the awful management team before could do, tremendous things. Having a long memory, I’m thus comparing that bragging with the actual outcomes.
You’re wrong about the Google lawsuit – it is actually about Java SE.
S.
Sad news … I don`t understand these games.
Thanks for the thoughtful write-up. It was becoming clearer to me in last few months when Oracle kept on pushing on moving to a Linux IaaS service rather than Solaris based infrastructure. Whatever the game, I believe companies should come forward and clarify their stand and road map instead of pushing customers for their personal gains. Folks like me will always remember the quality time we had with Sun Solaris..
If they buy or takeovers any company that means that company & its employees are going to garbage.
Reblogged this on dave levy online and commented:
Simon was close to the politics of Sun’s “Dash to Open”, my feeling is that Sun had failed before Schwartz was appointed, there was no longer room for differentiated hardware company; Oracle’s failure to monetise the SPARC product line may have been caused by management hubris, but the long term economics of microprocessors and the establishment of distributed & collaborative software technology was key. Sun were late to adopt Plan 9 and making Solaris, an SMP/UMA big iron solution was a cul-de-sac it couldn’t escape from.
I had promised not to write about Sun’s failure, but I am better now.
The decision to kill off x86 Solaris in the early 2000’s and the bad press, bad customer feelings that followed was the beginning of the end. That and not taking Linux seriously. Buying Cobalt was odd. I don’t agree with you about Swartz. He wasn’t serious enough, he really didn’t care. He could always hightail back to Kinsey with his golden parachute and his wine obsession. Most of us thought he was a plant to take the company down & kill it off. . Giving away IP for the POSSIBILITY of a hardware sale was, and still is, just plain nuts! He seemed to go off the rails a bit in his blog, especially toward the end. There is a reason that no one on Earth has those posts archived.
The blog was deleted by Oracle when they realised it harmed their case against Google according to Mike Masnik:
https://www.techdirt.com/articles/20110724/11263315224/oracle-deletes-jonathan-schwartzs-old-blog-which-excitedly-celebrated-googles-use-java-android.shtml
The SPARCstation may be gone, but Forth will live on for eternity. Forth is to computer science what math is to physics.
Forth was also in the bootloader of the PowerPC-based macs. And it’s still in the bootloader of FreeBSD.
That might be overstating things just a tad. I’ve been a programmer since the 80’s (mostly embedded, including bootloaders) and have never used Forth or even come across it. The wiki page on Forth lists certain projects it is used in – something not even attempted with a language such as C, because it is used everywhere.
How Forth is related here? It was invented by Chuck Moore who was not working for Sun nor Oracle. Sure, it is used in bootloaders. Maybe Sun engineers did their implementation but Forth has more implemetations than apps.
Thanks for posting Simon. We just celebrated 20 years of Tech Titan luncheons here in Dallas this year and you were one of the thought leaders who came to educate us about Open Source in the early days. Oracle may have killed Sun but the spirit of the people who grew up there will never die. Thought leadership was in our DNA. The company cared about it employees and it’s customers. We had high ethical standards. That’s the culture that makes great companies and Sun was one of those. Oracle doesn’t come close. #sun #java
The sun has finally set. Sad, but also a relief. Nothing left to be destroyed by Oracle, pain is gone now.
What about the Unified Communications System (Sun’s iPlanet/Sun One) Carrier Grade eMail, Calendar, Chat product line?
I’d forgotten all about that. I wonder if anyone can tell us.
I am hoping to hear that one. 😞. thanks for the article
What is going to be the impact on the illumos ecosystem? How long for stuff like VMware to stop support for Solaris based kernels? What would be the best move for an illumos distro maintainer?
Your personal thoughts will be very appreciated.
Those are all great questions. The open source Solaris community has been cut off for quite some time so I’d not expect them to be negatively affected.
VirtualBox “seems unscathed”?
It seems already being left to die off.
In particular, after 3 years (see https://www.virtualbox.org/ticket/13471), there’s still no support for Wayland.
As a dev that’s been using Virtualbox for years and currently locked into Fedora, I’m looking for alternatives…
Try Proxmox
That is where we are going from VirtualBox
I’ve been saying for years that Oracle must not realize they own VirtualBox, other than rebranding it as Oracle VirtualBox, because they hadn’t ruined it yet. Good to know that they even screwed that up. I’ll look into Proxmox, maybe even get them for FLOSS Weekly.
Don’t kid yourself, HP’s is where acquisitions go to die. Just look at Mercury, ArcSight, and others. While it’s too bad about Sun ( disclaimer I worked there from ’87-94 ) as Scott would say, ” It’s lunch or be lunch.”
The Carrier Grade eMail product was licensed to Sun. They did not own it.
I’m Roger Faulkner’s (RAF) daughter. I don’t understand a lot of this article since I’m not a hacker. I just want to know if this means that PROCFS will be dead in the water. Will his disappear?
Bethany Faulkner
Just so you know, I have seen your father’s name mentioned more times as a role model in the comments from the people laid off than any other.
Bethany – your Dad’s legacy is going NOWHERE! It lives on not only in any Oracle Solaris deployment that still moves bits, but it also lives on in illumos – the still and forever open-source successor to OpenSolaris. Here’s procfs, for the world to see, for example:
http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/fs/proc/
I had the pleasure of working at least a little with Roger. He was a champion not only of his own technology, but others’ as well, including some things I did.
Roger’s legacy will never EVER disappear.
Another aspect of Roger’s legacy which is very, very much alive, is the attitude towards software architecture and engineering which he fostered. We all carry a bit of that with us.
#WWRD
Bethany, Roger’s baby /proc lives on not only in Illimos, but also in Linux. They extended it to do many more things than just expose the process model. Roger hated that on grounds of purity, but it’s a mark of a good idea when it gets expanded far beyond its creator’s original intentions.
Thank you so much, gentlemen. It’s been a year of gradually losing little bits of him. I’m relieved his baby lives on. I’m so sorry for all those who’ve lost their jobs. I’m sure there would have been much loud swearing coming from his upstairs office, had Roger lived to see this.
My time at Sun was one of the best learning experiences I ever had. It was definitely a trial by fire. A trial made easier by the great people I had the pleasure of knowing and working with. Sun became great by placing the customer first. Sun’s downfall started when they lost sight of their customers’ needs and refused to see the potential in the average consumer user. There is a lesson that can be learned from Sun’s sunset.
After having been in the IT business for 40 years i can tell that this lesson will never be learned. L’histoire se repete, but not the same way.
It took Ellison 9 years to kill Sun, that in my opinion, was the biggest insult, it should have been swift and painless.
It seemed to me that the Sun business was doomed from the beginning, as Oracle exists on high software margins. The low margin systems business was a drag on the business. It was justified on strategic factors, such as developing on Sparc and leveraging the huge Java following. They were always a top down company, both under McNeally and Ellison. They suffrred the same fate as other top down companies such as DEC, IBM, and HP. Sun was always stocked with engineering talent, even at the end. They should find jobs easy enough.
Sun always put HW before SW and especially services. Man cannot live by HW alone it sayeth somewhere. I worked for Sun for 5 years and here are some of the SW backtracks and executions;
• Genesys, a futuristic view of linked nodes with dynamic changes to almost everything in the way of resources one desired. This was touted widely for a while, and even I went on the road ‘selling’ the vision but it suddenly disappeared
• JOE (Object Request Broker)
• NEO (renamed from Distributed Objects Everywhere)
• Enterprise Manager
• Domain Manager
• Site Manager
• Job Scheduler
• Java Management API (JMAPI)
• Javastations 1 and 2
• A7000 disk subsystem after the Encore acquisition (I project managed the UK side of this)
• Scott McNealy’s ‘auction’ plans
• Numerous ‘unproductive acquisitions’ over the years , the latest being Tarantella (old SCO)
• The demise of the ‘Sparc and Solaris’ is all philosophy and the acceptance of Linux
• Utility grid computing
• Senior manager exits, compounding the earlier Ed Zander exit – unannounced and only visible by searching the Internet.
The old Communist empire has nothing on Sun when it comes to covering things up.
Having said all that, they were a great company to work for, unlike Oracle, who I also worked for. Oracle were a ‘hire and fire a will’ company and probably still are. People disappeared as quickly as they did in Nazi Germany.
So sad to see a such waste ….
And what of VDI? Sun Ray? Ellison coined the term “The Network Computer” back in the 90’s and had a brilliant offering, staffed by really knowledgeable, enthusiastic, brilliant people and he killed it. I guess because although it was a profitable business, it just didn’t make enough money.
The beginning of the end for Sun was the dot-com crash. Sun servers powered the dot-coms then when that all came tumbling down, all that equipment went on the used market and Sun could no longer sell new product with all that nearly-new stuff going for a deep discount.
… sadly very true. I was part of that epoch moment, having previously enjoyed a successful career in eCommerce implementations, made possible with SUN engineering.
Totally agree
Sun was dedicated to and driven by its channel partners. The direct sales force was great to partner with even in the largest of accounts. Oracle has always been a direct sales force company. They claimed to be channel friendly but hot air is hot air. They really lacked top down leadership in sales. Thats what McNealy got and Ellison didnt.
If you had an Oracle oportunity from 2008 on then 3 different internal Oracle sales groups pounced. None of them knowing or understanding the role of the other.
leadershipsalesdirection.
To be fair, Oracle kept Sun going longer than folks expected, and much longer than the life it would have had at IBM, it’s first suitor. Scott probably had trouble sleeping seeing it go to IBM, and better for a silicon valley home with closer culture match. Oracle made a kickbutt DB stack sitting on Sun HW with it’s Exa line, even running on Solaris, that still runs circles around other stacks.
|
8,843 | 使用 Ansible 部署无服务(serverless)应用 | https://opensource.com/article/17/8/ansible-serverless-applications | 2017-09-06T08:25:00 | [
"serverless",
"无服务",
"Ansible"
] | https://linux.cn/article-8843-1.html |
>
> <ruby> 无服务 <rt> serverless </rt></ruby>是<ruby> 托管服务 <rt> managed service </rt></ruby>发展方向的又一步,并且与 Ansible 的无代理体系结构相得益彰。
>
>
>

[Ansible](https://www.ansible.com/) 被设计为实际工作中的最简化的部署工具。这意味着它不是一个完整的编程语言。你需要编写定义任务的 YAML 模板,并列出任何需要自动完成的任务。
大多数人认为 Ansible 是一种更强大的“处于 for 循环中的 SSH”,在简单的使用场景下这是真的。但其实 Ansible 是*任务*,而非 SSH。在很多情况下,我们通过 SSH 进行连接,但它也支持 Windows 机器上的 Windows 远程管理(WinRM),以及作为云服务的通用语言的 HTTPS API 之类的东西。
在云中,Ansible 可以在两个独立的层面上操作:<ruby> 控制面 <rt> control plane </rt></ruby>和<ruby> 实例资源 <rt> on-instance resource </rt></ruby>。控制面由所有*没有*运行在操作系统上的东西组成。包括设置网络、新建实例、供给更高级别的服务,如亚马逊的 S3 或 DynamoDB,以及保持云基础设施安全和服务客户所需的一切。
实例上的工作是你已经知道 Ansible 可以做的:启动和停止服务、配置文件<ruby> 模版化 <rt> templating </rt></ruby>、安装软件包以及通过 SSH 执行的所有与操作系统相关的操作。
现在,什么是<ruby> <a href="https://en.wikipedia.org/wiki/Serverless_computing"> 无服务 </a> <rt> serverless </rt></ruby>呢?这要看你问谁,无服务要么是对公有云的无限延伸,或者是一个全新的范例,其中所有的东西都是 API 调用,以前从来没有这样做过。
Ansible 采取第一种观点。在 “无服务” 是专门术语之前,用户不得不管理和配置 EC2 实例、虚拟私有云 (VPC) 网络以及其他所有内容。无服务是托管服务方向迈出的另一步,并且与 Ansible 的无代理体系结构相得益彰。
在我们开始 [Lambda](https://aws.amazon.com/lambda/) 示例之前,让我们来看一个简单的配置 CloudFormation 栈任务:
```
- name: Build network
  cloudformation:
    stack_name: prod-vpc
    state: present
    template: base_vpc.yml
```
编写这样的任务只需要几分钟，但它把构建基础设施过程中最后一个半手动的步骤——点击 “Create Stack”——也纳入了 playbook，与其它所有内容放在了一起。现在你的 VPC 只是在搭建新区域时可以调用的另一项任务了。
由于云提供商是你帐户中发生些什么的真相来源,因此 Ansible 有许多方法来取回并使用 ID、名称和其他参数来过滤和查询运行的实例或网络。以 `cloudformation_facts` 模块为例,我们可以从我们刚刚创建的模板中得到子网 ID、网络范围和其他数据。
```
- name: Pull all new resources back in as a variable
  cloudformation_facts:
    stack_name: prod-vpc
  register: network_stack
```
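拿到这些 facts 之后，就可以在后续任务中直接引用它们了。下面是一个简单的示意（其中 `SubnetId` 这个输出名只是假设，取决于你的模板 `Outputs` 里实际定义了什么；facts 的具体结构请以 `cloudformation_facts` 模块文档为准）：
```
- name: Use an output from the prod-vpc stack in a later task
  debug:
    msg: "Subnet is {{ network_stack.ansible_facts.cloudformation['prod-vpc'].stack_outputs.SubnetId }}"
```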
对于无服务应用，除了 DynamoDB 表、S3 存储桶以及其它各类资源之外，你肯定还需要一组 Lambda 函数。幸运的是，通过使用 `lambda` 模块，Lambda 函数可以用与上一个任务中创建栈相同的方式来创建：
```
- lambda:
    name: sendReportMail
    zip_file: "{{ deployment_package }}"
    runtime: python3.6
    handler: report.send
    memory_size: 1024
    role: "{{ iam_exec_role }}"
  register: new_function
```
如果你有其他想用来交付无服务应用的工具,这也是可以的。开源的[无服务框架](https://serverless.com/)有自己的 Ansible 模块,它也可以工作:
```
- serverless:
    service_path: '{{ project_dir }}'
    stage: dev
  register: sls

- name: Serverless uses CloudFormation under the hood, so you can easily pull info back into Ansible
  cloudformation_facts:
    stack_name: "{{ sls.service_name }}"
  register: sls_facts
```
这还不是全部，因为无服务项目本身也必须存在——定义函数和事件源这些主要工作都是在那里完成的。在这个例子中，我们将编写一个响应 HTTP 请求的函数。无服务框架使用 YAML 作为其配置语言（和 Ansible 一样），所以看起来应该很熟悉。
```
# serverless.yml
service: fakeservice

provider:
  name: aws
  runtime: python3.6

functions:
  main:
    handler: test_function.handler
    events:
      - http:
          path: /
          method: get
```
在 [AnsibleFest](https://www.ansible.com/ansiblefest?intcmp=701f2000000h4RcAAI) 中，我将介绍这个例子以及其他深入的部署策略，帮助你最大限度地利用已有的 playbook 和基础设施，并结合新的无服务实践。无论你能否到场，我都希望这些例子可以帮你开始使用 Ansible——无论你手上是否还有服务器需要管理。
*AnsibleFest 是一个单日会议,汇集了数百名 Ansible 用户、开发人员和行业合作伙伴。加入我们吧,这里有产品更新、鼓舞人心的交谈、技术深度潜水,动手演示和整天的网络。*
(题图: opensource.com)
---
via: <https://opensource.com/article/17/8/ansible-serverless-applications>
作者:[Ryan Scott Brown](https://opensource.com/users/ryansb) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | [Ansible](https://www.ansible.com/) is designed as the simplest deployment tool that actually works. What that means is that it's not a full programming language. You write YAML templates that define tasks and list whatever tasks you need to automate your job.
Most people think of Ansible as a souped-up version of "SSH in a 'for' loop," and that's true for simple use cases. But really Ansible is about *tasks*, not about SSH. For a lot of use cases, we connect via SSH but also support things like Windows Remote Management (WinRM) for Windows machines, different protocols for network devices, and the HTTPS APIs that are the lingua franca of cloud services.
In a cloud, Ansible can operate on two separate layers: the control plane and the on-instance resources. The control plane consists of everything *not* running on the OS. This includes setting up networks, spawning instances, provisioning higher-level services like Amazon's S3 or DynamoDB, and everything else you need to keep your cloud infrastructure secure and serving customers.
On-instance work is what you already know Ansible for: starting and stopping services, templating config files, installing packages, and everything else OS-related that you can do over SSH.
Now, what about [serverless](https://en.wikipedia.org/wiki/Serverless_computing)? Depending who you ask, serverless is either the ultimate extension of the continued rush to the public cloud or a wildly new paradigm where everything is an API call, and it's never been done before.
Ansible takes the first view. Before "serverless" was a term of art, users had to manage and provision EC2 instances, virtual private cloud (VPC) networks, and everything else. Serverless is another step in the direction of managed services and plays nice with Ansible's agentless architecture.
Before we go into a [Lambda](https://aws.amazon.com/lambda/) example, let's look at a simpler task for provisioning a CloudFormation stack:
```
- name: Build network
  cloudformation:
    stack_name: prod-vpc
    state: present
    template: base_vpc.yml
```
Writing a task like this takes just a couple minutes, but it brings the last semi-manual step involved in building your infrastructure—clicking "Create Stack"—into a playbook with everything else. Now your VPC is just another task you can call when building up a new region.
Since cloud providers are the real source of truth when it comes to what's really happening in your account, Ansible has a number of ways to pull that back and use the IDs, names, and other parameters to filter and query running instances or networks. Take for example the **cloudformation_facts** module that we can use to get the subnet IDs, network ranges, and other data back out of the template we just created.
```
- name: Pull all new resources back in as a variable
  cloudformation_facts:
    stack_name: prod-vpc
  register: network_stack
```
For serverless applications, you'll definitely need a complement of Lambda functions in addition to any other DynamoDB tables, S3 buckets, and whatever else. Fortunately, by using the **lambda** modules, Lambda functions can be created in the same way as the stack from the last tasks:
```
- lambda:
    name: sendReportMail
    zip_file: "{{ deployment_package }}"
    runtime: python3.6
    handler: report.send
    memory_size: 1024
    role: "{{ iam_exec_role }}"
  register: new_function
```
If you have another tool that you prefer for shipping the serverless parts of your application, that works as well. The open source [Serverless Framework](https://serverless.com/) has its own Ansible module that will work just as well:
```
- serverless:
    service_path: '{{ project_dir }}'
    stage: dev
  register: sls

- name: Serverless uses CloudFormation under the hood, so you can easily pull info back into Ansible
  cloudformation_facts:
    stack_name: "{{ sls.service_name }}"
  register: sls_facts
```
That's not quite everything you need, since the serverless project also must exist, and that's where you'll do the heavy lifting of defining your functions and event sources. For this example, we'll make a single function that responds to HTTP requests. The Serverless Framework uses YAML as its config language (as does Ansible), so this should look familiar.
```
# serverless.yml
service: fakeservice

provider:
  name: aws
  runtime: python3.6

functions:
  main:
    handler: test_function.handler
    events:
      - http:
          path: /
          method: get
```
At [AnsibleFest](https://www.ansible.com/ansiblefest?intcmp=701f2000000h4RcAAI), I'll be covering this example and other in-depth deployment strategies to take the best advantage of the Ansible playbooks and infrastructure you already have, along with new serverless practices. Whether you're able to be there or not, I hope these examples can get you started using Ansible—whether or not you have any servers to manage.
*AnsibleFest is a **day-long** conference bringing together hundreds of Ansible users, developers, and industry partners. Join us for product updates, inspirational talks, tech deep dives, hands-on demos and a day of networking. Get your tickets to AnsibleFest in San Francisco on September 7. Save 25% on registration with the discount code OPENSOURCE.*
|
8,844 | 我对 Go 的错误处理有哪些不满,以及我是如何处理的 | https://opencredo.com/why-i-dont-like-error-handling-in-go | 2017-09-06T09:36:00 | [
"Go",
"错误处理"
] | https://linux.cn/article-8844-1.html | 
写 Go 的人往往对它的错误处理模式有一定的看法。按不同的语言经验,人们可能有不同的习惯处理方法。这就是为什么我决定要写这篇文章,尽管有点固执己见,但我认为听取我的经验是有用的。我想要讲的主要问题是,很难去强制执行良好的错误处理实践,错误经常没有堆栈追踪,并且错误处理本身太冗长。不过,我已经看到了一些潜在的解决方案,或许能帮助解决一些问题。
### 与其他语言的快速比较
[在 Go 中,所有的错误都是值](https://blog.golang.org/errors-are-values)。因为这点,相当多的函数最后会返回一个 `error`, 看起来像这样:
```
func (s *SomeStruct) Function() (string, error)
```
因此这导致调用代码通常会使用 `if` 语句来检查它们:
```
bytes, err := someStruct.Function()
if err != nil {
// Process error
}
```
另外一种方法,是在其他语言中,如 Java、C#、Javascript、Objective C、Python 等使用的 `try-catch` 模式。如下你可以看到与先前的 Go 示例类似的 Java 代码,声明 `throws` 而不是返回 `error`:
```
public String function() throws Exception
```
它使用的是 `try-catch` 而不是 `if err != nil`:
```
try {
String result = someObject.function()
// continue logic
}
catch (Exception e) {
// process exception
}
```
当然,还有其他的不同。例如,`error` 不会使你的程序崩溃,然而 `Exception` 会。还有其他的一些,在本篇中会专门提到这些。
### 实现集中式错误处理
退一步,让我们看看为什么要在一个集中的地方处理错误,以及如何做到。
大多数人或许会熟悉的一个例子是 web 服务 - 如果出现了一些未预料的的服务端错误,我们会生成一个 5xx 错误。在 Go 中,你或许会这么实现:
```
func init() {
http.HandleFunc("/users", viewUsers)
http.HandleFunc("/companies", viewCompanies)
}
func viewUsers(w http.ResponseWriter, r *http.Request) {
user // some code
if err := userTemplate.Execute(w, user); err != nil {
http.Error(w, err.Error(), 500)
}
}
func viewCompanies(w http.ResponseWriter, r *http.Request) {
companies = // some code
if err := companiesTemplate.Execute(w, companies); err != nil {
http.Error(w, err.Error(), 500)
}
}
```
这并不是一个好的解决方案,因为我们不得不重复地在所有的处理函数中处理错误。为了能更好地维护,最好能在一处地方处理错误。幸运的是,[在 Go 语言的官方博客中,Andrew Gerrand 提供了一个替代方法](https://blog.golang.org/error-handling-and-go),可以完美地实现。我们可以创建一个处理错误的 Type:
```
type appHandler func(http.ResponseWriter, *http.Request) error
func (fn appHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
if err := fn(w, r); err != nil {
http.Error(w, err.Error(), 500)
}
}
```
这可以作为一个封装器来修饰我们的处理函数:
```
func init() {
http.Handle("/users", appHandler(viewUsers))
http.Handle("/companies", appHandler(viewCompanies))
}
```
接着我们需要做的是修改处理函数的签名来使它们返回 `errors`。这个方法很好,因为我们做到了 [DRY](https://en.wikipedia.org/wiki/Don't_repeat_yourself) 原则,并且没有重复使用不必要的代码 - 现在我们可以在单独一个地方返回默认错误了。
### 错误上下文
在先前的例子中,我们可能会收到许多潜在的错误,它们中的任何一个都可能在调用堆栈的许多环节中生成。这时候事情就变得棘手了。
为了演示这点,我们可以扩展我们的处理函数。它可能看上去像这样,因为模板执行并不是唯一一处会发生错误的地方:
```
func viewUsers(w http.ResponseWriter, r *http.Request) error {
user, err := findUser(r.FormValue("id"))
if err != nil {
return err;
}
return userTemplate.Execute(w, user);
}
```
调用链可能会相当深,在整个过程中,各种错误可能在不同的地方实例化。[Russ Cox](https://research.swtch.com/go2017)的这篇文章解释了如何避免遇到太多这类问题的最佳实践:
>
> “在 Go 中错误报告的部分约定是函数包含相关的上下文,包括正在尝试的操作(比如函数名和它的参数)。”
>
>
>
这个给出的例子是对 OS 包的一个调用:
```
err := os.Remove("/tmp/nonexist")
fmt.Println(err)
```
它会输出:
```
remove /tmp/nonexist: no such file or directory
```
总结一下,执行后,输出的是被调用的函数、给定的参数、特定的出错信息。当在其他语言中创建一个 `Exception` 消息时,你也可以遵循这个实践。如果我们在 `viewUsers` 处理中坚持这点,那么几乎总是能明确错误的原因。
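在自己的代码里遵循这个约定并不难：返回错误之前，把正在执行的操作和相关参数附在错误信息前面即可。下面是一个示意片段（`findUser`、`db.Get` 等名字是沿用上文或假设出来的，并非真实 API）：
```
func findUser(id string) (*User, error) {
    user, err := db.Get(id) // db.Get 只是一个假设的查询调用
    if err != nil {
        // 附上操作名和参数，这样错误向上传递时仍然保留上下文
        return nil, fmt.Errorf("findUser %q: %v", id, err)
    }
    return user, nil
}
```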
问题来自于那些不遵循这个最佳实践的人,并且你经常会在第三方的 Go 库中看到这些消息:
```
Oh no I broke
```
这没什么帮助 - 你无法了解上下文,这使得调试很困难。更糟糕的是,当这些错误被忽略或返回时,这些错误会被备份到堆栈中,直到它们被处理为止:
```
if err != nil {
return err
}
```
这意味着错误何时发生并没有被传递出来。
应该注意的是,所有这些错误都可以在 `Exception` 驱动的模型中发生 - 糟糕的错误信息、隐藏异常等。那么为什么我认为该模型更有用?
即便我们面对的是一条糟糕的异常消息，*凭借堆栈追踪，我们仍然能够知道它发生在调用堆栈中的什么位置*。这也引出了一点我不理解 Go 的地方——你知道，Go 的 `panic` 带有堆栈追踪，但是 `error` 没有。我推测可能是因为 `panic` 会使你的程序崩溃，因此需要堆栈追踪；而错误则假定你会在它发生的地方就去处理它，所以不需要。
所以让我们回到之前的例子 - 一个有糟糕错误信息的第三方库,它只是输出了调用链。你认为调试会更容易吗?
```
panic: Oh no I broke
[signal 0xb code=0x1 addr=0x0 pc=0xfc90f]
goroutine 1103 [running]:
panic(0x4bed00, 0xc82000c0b0)
/usr/local/go/src/runtime/panic.go:481 +0x3e6
github.com/Org/app/core.(_app).captureRequest(0xc820163340, 0x0, 0x55bd50, 0x0, 0x0)
/home/ubuntu/.go_workspace/src/github.com/Org/App/core/main.go:313 +0x12cf
github.com/Org/app/core.(_app).processRequest(0xc820163340, 0xc82064e1c0, 0xc82002aab8, 0x1)
/home/ubuntu/.go_workspace/src/github.com/Org/App/core/main.go:203 +0xb6
github.com/Org/app/core.NewProxy.func2(0xc82064e1c0, 0xc820bb2000, 0xc820bb2000, 0x1)
/home/ubuntu/.go_workspace/src/github.com/Org/App/core/proxy.go:51 +0x2a
github.com/Org/app/core/vendor/github.com/rusenask/goproxy.FuncReqHandler.Handle(0xc820da36e0, 0xc82064e1c0, 0xc820bb2000, 0xc5001, 0xc820b4a0a0)
/home/ubuntu/.go_workspace/src/github.com/Org/app/core/vendor/github.com/rusenask/goproxy/actions.go:19 +0x30
```
我认为这可能是 Go 的设计中被忽略的东西 - 不是所有语言都不会忽视的。
如果我们使用 Java 作为一个随意的例子,其中人们犯的一个最愚蠢的错误是不记录堆栈追踪:
```
LOGGER.error(ex.getMessage()) // 不记录堆栈追踪
LOGGER.error(ex.getMessage(), ex) // 记录堆栈追踪
```
但是 Go 似乎在设计中就没有这个信息。
在获取上下文信息方面 - Russ 还提到了社区正在讨论一些潜在的接口用于剥离上下文错误。关于这点,了解更多或许会很有趣。
### 堆栈追踪问题解决方案
幸运的是,在做了一些查找后,我发现了这个出色的 [Go 错误](https://github.com/go-errors/errors)库来帮助解决这个问题,来给错误添加堆栈跟踪:
```
if errors.Is(err, crashy.Crashed) {
fmt.Println(err.(*errors.Error).ErrorStack())
}
```
不过,我认为这个功能如果能成为语言的<ruby> 第一类公民 <rt> first class citizenship </rt></ruby>将是一个改进,这样你就不必做一些类型修改了。此外,如果我们像先前的例子那样使用第三方库,它可能没有使用 `crashy` - 我们仍有相同的问题。
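顺带一提，即便第三方库返回的只是普通的 `error`，你也可以在自己代码的边界处用这个库把它包装一层，让堆栈从包装点开始被记录下来。下面是一个示意（`thirdparty.Do` 是假设的调用；`Wrap` 的用法基于该库的 README，具体签名请以其文档为准）：
```
_, err := thirdparty.Do() // 假设的第三方调用
if err != nil {
    // 第二个参数是要跳过的栈帧数，堆栈追踪从这里开始记录
    return errors.Wrap(err, 1)
}
```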
### 我们对错误应该做什么?
我们还必须考虑错误发生时应该做什么。[有观点认为这很有用，因为错误不会让你的程序崩溃](https://davidnix.io/post/error-handling-in-go/)，而且通常会在发生处立即处理它们：
```
err := method()
if err != nil {
// some logic that I must do now in the event of an error!
}
```
如果我们想要调用大量方法,它们会产生错误,然后在一个地方处理所有错误,这时会发生什么?看上去像这样:
```
err := doSomething()
if err != nil {
// handle the error here
}
func doSomething() error {
err := someMethod()
if err != nil {
return err
}
err = someOther()
if err != nil {
return err
}
return someOtherMethod()
}
```
这感觉有点冗余,在其他语言中你可以将多条语句作为一个整体处理。
```
try {
someMethod()
someOther()
someOtherMethod()
}
catch (Exception e) {
// process exception
}
```
或者只要在方法签名中传递错误:
```
public void doSomething() throws SomeErrorToPropogate {
someMethod()
someOther()
someOtherMethod()
}
```
我个人认为这两个例子实现了一件事情,只是 `Exception` 模式更少冗余,更加弹性。如果有什么的话,我觉得 `if err!= nil` 感觉像样板。也许有一种方法可以清理?
### 将失败的多条语句做为一个整体处理错误
首先,我做了更多的阅读,并[在 Rob Pike 写的 Go 博客中](https://blog.golang.org/errors-are-values)发现了一个比较务实的解决方案。
他定义了一个封装了错误的方法的结构体:
```
type errWriter struct {
w io.Writer
err error
}
func (ew *errWriter) write(buf []byte) {
if ew.err != nil {
return
}
_, ew.err = ew.w.Write(buf)
}
```
让我们这么做:
```
ew := &errWriter{w: fd}
ew.write(p0[a:b])
ew.write(p1[c:d])
ew.write(p2[e:f])
// and so on
if ew.err != nil {
return ew.err
}
```
这也是一个很好的方案,但是我感觉缺少了点什么 - 因为我们不能重复使用这个模式。如果我们想要一个含有字符串参数的方法,我们就不得不改变函数签名。或者如果我们不想执行写操作会怎样?我们可以尝试使它更通用:
```
type errWrapper struct {
err error
}
```
```
func (ew *errWrapper) do(f func() error) {
if ew.err != nil {
return
}
ew.err = f();
}
```
但是我们有一个相同的问题,如果我们想要调用含有不同参数的函数,它就无法编译了。然而你可以简单地封装这些函数调用:
```
w := &errWrapper{}
w.do(func() error {
return someFunction(1, 2);
})
w.do(func() error {
return otherFunction("foo");
})
err := w.err
if err != nil {
// process error here
}
```
这可以用,但是并没有太大帮助,因为它最终比标准的 `if err != nil` 检查带来了更多的冗余。如果有人能提供其他解决方案,我会很有兴趣听。或许这个语言本身需要一些方法来以不那么臃肿的方式传递或者组合错误 - 但是感觉似乎是特意设计成不那么做。
### 总结
看完这些之后,你可能会认为我在对 `error` 挑刺儿,由此推论我反对 Go。事实并非如此,我只是将它与我使用 `try catch` 模型的经验进行比较。它是一个用于系统编程很好的语言,并且已经出现了一些优秀的工具。仅举几例,有 [Kubernetes](https://kubernetes.io/)、[Docker](https://www.docker.com/)、[Terraform](https://www.terraform.io/)、[Hoverfly](http://hoverfly.io/en/latest/) 等。还有小型、高性能、本地二进制的优点。但是,`error` 难以适应。 我希望我的推论是有道理的,而且一些方案和解决方法可能会有帮助。
---
作者简介:
Andrew 是 OpenCredo 的顾问,于 2015 年加入公司。Andrew 在多个行业工作多年,开发基于 Web 的企业应用程序。
---
via: <https://opencredo.com/why-i-dont-like-error-handling-in-go>
作者:[Andrew Morgan](https://opencredo.com/author/andrew/) 译者:[geekpi](https://github.com/geekpi) 校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,845 | 编译器简介: 在 Siri 前时代如何与计算机对话 | https://nicoleorchard.com/blog/compilers | 2017-09-07T09:41:00 | [
"LLVM",
"编译器"
] | https://linux.cn/article-8845-1.html | 
简单说来,一个<ruby> 编译器 <rt> compiler </rt></ruby>不过是一个可以翻译其他程序的程序。传统的编译器可以把源代码翻译成你的计算机能够理解的可执行机器代码。(一些编译器将源代码翻译成别的程序语言,这样的编译器称为源到源翻译器或<ruby> 转化器 <rt> transpilers </rt></ruby>。)[LLVM](http://llvm.org/) 是一个广泛使用的编译器项目,包含许多模块化的编译工具。
传统的编译器设计包含三个部分:

* <ruby> 前端 <rt> Frontend </rt></ruby>将源代码翻译为<ruby> 中间表示 <rt> intermediate representation </rt></ruby> (IR)\* 。[clang](http://clang.llvm.org/) 是 LLVM 中用于 C 家族语言的前端工具。
* <ruby> 优化器 <rt> Optimizer </rt></ruby>分析 IR 然后将其转化为更高效的形式。[opt](http://llvm.org/docs/CommandGuide/opt.html) 是 LLVM 的优化工具。
* <ruby> 后端 <rt> Backend </rt></ruby>通过将 IR 映射到目标硬件指令集从而生成机器代码。[llc](http://llvm.org/docs/CommandGuide/llc.html) 是 LLVM 的后端工具。
注:LLVM 的 IR 是一种和汇编类似的低级语言。然而,它抽离了特定硬件信息。
### Hello, Compiler
下面是一个打印 “Hello, Compiler!” 到标准输出的简单 C 程序。C 语法是人类可读的,但是计算机却不能理解,不知道该程序要干什么。我将通过三个编译阶段使该程序变成机器可执行的程序。
```
// compile_me.c
// Wave to the compiler. The world can wait.
#include <stdio.h>
int main() {
printf("Hello, Compiler!\n");
return 0;
}
```
### 前端
正如我在上面所提到的,`clang` 是 LLVM 中用于 C 家族语言的前端工具。Clang 包含 <ruby> C 预处理器 <rt> C preprocessor </rt></ruby>、<ruby> 词法分析器 <rt> lexer </rt></ruby>、<ruby> 语法解析器 <rt> parser </rt></ruby>、<ruby> 语义分析器 <rt> semantic analyzer </rt></ruby>和 <ruby> IR 生成器 <rt> IR generator </rt></ruby>。
**C 预处理器**在将源程序翻译成 IR 前修改源程序。预处理器处理外部包含文件,比如上面的 `#include <stdio.h>`。 它将会把这一行替换为 `stdio.h` C 标准库文件的完整内容,其中包含 `printf` 函数的声明。
通过运行下面的命令来查看预处理步骤的输出:
```
clang -E compile_me.c -o preprocessed.i
```
**词法分析器**(或<ruby> 扫描器 <rt> scanner </rt></ruby>或<ruby> 分词器 <rt> tokenizer </rt></ruby>)将一串字符转化为一串单词。每一个单词或<ruby> 记号 <rt> token </rt></ruby>,被归并到五种语法类别之一:标点符号、关键字、标识符、文字或注释。
compile\_me.c 的分词过程:

**语法分析器**确定源程序中的单词流是否组成了合法的句子。在分析记号流的语法后,它会输出一个<ruby> 抽象语法树 <rt> abstract syntax tree </rt></ruby>(AST)。Clang 的 AST 中的节点表示声明、语句和类型。
compile\_me.c 的语法树:

**语义分析器**会遍历抽象语法树,从而确定代码语句是否有正确意义。这个阶段会检查类型错误。如果 `compile_me.c` 的 main 函数返回 `"zero"`而不是 `0`, 那么语义分析器将会抛出一个错误,因为 `"zero"` 不是 `int` 类型。
**IR 生成器**将抽象语法树翻译为 IR。
对 compile\_me.c 运行 clang 来生成 LLVM IR:
```
clang -S -emit-llvm -o llvm_ir.ll compile_me.c
```
在 `llvm_ir.ll` 中的 main 函数:
```
; llvm_ir.ll
@.str = private unnamed_addr constant [18 x i8] c"Hello, Compiler!\0A\00", align 1
define i32 @main() {
%1 = alloca i32, align 4 ; <- memory allocated on the stack
store i32 0, i32* %1, align 4
%2 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([18 x i8], [18 x i8]* @.str, i32 0, i32 0))
ret i32 0
}
declare i32 @printf(i8*, ...)
```
### 优化程序
优化程序的工作是基于其对程序运行时行为的理解来提高代码效率。优化程序将 IR 作为输入，然后生成改进后的 IR 作为输出。LLVM 的优化工具 `opt` 可以通过标记 `-O2`（大写字母 O，数字 2）来针对处理器速度进行优化，或者通过标记 `-Os`（大写字母 O，小写字母 s）来减少指令数目。
看一看上面的前端工具生成的 LLVM IR 代码和运行下面的命令生成的结果之间的区别:
```
opt -O2 -S llvm_ir.ll -o optimized.ll
```
在 `optimized.ll` 中的 main 函数:
```
optimized.ll
@str = private unnamed_addr constant [17 x i8] c"Hello, Compiler!\00"
define i32 @main() {
%puts = tail call i32 @puts(i8* getelementptr inbounds ([17 x i8], [17 x i8]* @str, i64 0, i64 0))
ret i32 0
}
declare i32 @puts(i8* nocapture readonly)
```
优化后的版本中, main 函数没有在栈中分配内存,因为它不使用任何内存。优化后的代码中调用 `puts` 函数而不是 `printf` 函数,因为程序中并没有使用 `printf` 函数的格式化功能。
当然,优化程序不仅仅知道何时可以把 `printf` 函数用 `puts` 函数代替。优化程序也能展开循环并内联简单计算的结果。考虑下面的程序,它将两个整数相加并打印出结果。
```
// add.c
#include <stdio.h>
int main() {
int a = 5, b = 10, c = a + b;
printf("%i + %i = %i\n", a, b, c);
}
```
下面是未优化的 LLVM IR:
```
@.str = private unnamed_addr constant [14 x i8] c"%i + %i = %i\0A\00", align 1
define i32 @main() {
%1 = alloca i32, align 4 ; <- allocate stack space for var a
%2 = alloca i32, align 4 ; <- allocate stack space for var b
%3 = alloca i32, align 4 ; <- allocate stack space for var c
store i32 5, i32* %1, align 4 ; <- store 5 at memory location %1
store i32 10, i32* %2, align 4 ; <- store 10 at memory location %2
%4 = load i32, i32* %1, align 4 ; <- load the value at memory address %1 into register %4
%5 = load i32, i32* %2, align 4 ; <- load the value at memory address %2 into register %5
%6 = add nsw i32 %4, %5 ; <- add the values in registers %4 and %5\. put the result in register %6
store i32 %6, i32* %3, align 4 ; <- put the value of register %6 into memory address %3
%7 = load i32, i32* %1, align 4 ; <- load the value at memory address %1 into register %7
%8 = load i32, i32* %2, align 4 ; <- load the value at memory address %2 into register %8
%9 = load i32, i32* %3, align 4 ; <- load the value at memory address %3 into register %9
%10 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([14 x i8], [14 x i8]* @.str, i32 0, i32 0), i32 %7, i32 %8, i32 %9)
ret i32 0
}
declare i32 @printf(i8*, ...)
```
下面是优化后的 LLVM IR:
```
@.str = private unnamed_addr constant [14 x i8] c"%i + %i = %i\0A\00", align 1
define i32 @main() {
%1 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([14 x i8], [14 x i8]* @.str, i64 0, i64 0), i32 5, i32 10, i32 15)
ret i32 0
}
declare i32 @printf(i8* nocapture readonly, ...)
```
优化后的 main 函数本质上是未优化版本的第 17 行和 18 行,伴有变量值内联。`opt` 计算加法,因为所有的变量都是常数。很酷吧,对不对?
### 后端
LLVM 的后端工具是 `llc`。它分三个阶段将 LLVM IR 作为输入生成机器代码。
* **指令选择**是将 IR 指令映射到目标机器的指令集。这个步骤使用虚拟寄存器的无限名字空间。
* **寄存器分配**是将虚拟寄存器映射到目标体系结构的实际寄存器。我的 CPU 是 x86 结构,它只有 16 个寄存器。然而,编译器将会尽可能少的使用寄存器。
* **指令安排**是重排操作,从而反映出目标机器的性能约束。
运行下面这个命令将会产生一些机器代码:
```
llc -o compiled-assembly.s optimized.ll
```
```
_main:
pushq %rbp
movq %rsp, %rbp
leaq L_str(%rip), %rdi
callq _puts
xorl %eax, %eax
popq %rbp
retq
L_str:
.asciz "Hello, Compiler!"
```
这个程序是用 x86 汇编语言写的，汇编语言是计算机真正“说”的那种语言的人类可读形式。某些人最后也许能理解我。

---
相关资源:
1. [设计一个编译器](https://www.amazon.com/Engineering-Compiler-Second-Keith-Cooper/dp/012088478X)
2. [开始探索 LLVM 核心库](https://www.amazon.com/Getting-Started-LLVM-Core-Libraries/dp/1782166920)
(题图:deviantart.net)
---
via: <https://nicoleorchard.com/blog/compilers>
作者:[Nicole Orchard](https://nicoleorchard.com/) 译者:[ucasFL](https://github.com/ucasFL) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,847 | Minikube:使用 Kubernetes 进行本地开发 | https://medium.com/devopslinks/using-kubernetes-minikube-for-local-development-c37c6e56e3db | 2017-09-07T11:46:32 | [
"Docker",
"Kubernetes"
] | https://linux.cn/article-8847-1.html | 如果你的运维团队在使用 Docker 和 Kubernetes,那么建议开发上采用相同或相似的技术。这将减少不兼容性和可移植性问题的数量,并使每个人都会认识到应用程序容器是开发和运维团队的共同责任。

这篇博客文章介绍了 Kubernetes 在开发模式中的用法,它的灵感来自于一个视频教程,你可以在“[无痛 Docker 教程](http://painlessdocker.com/)”中找到它。

Minikube 是一个允许开发人员在本地使用和运行 Kubernetes 集群的工具,从而使开发人员的生活变得轻松。
在这篇博客中,对于我测试的例子,我使用的是 Linux Mint 18,但其它 Linux 发行版在安装部分没有区别。
```
cat /etc/lsb-release
```
```
DISTRIB_ID=LinuxMint
DISTRIB_RELEASE=18.1
DISTRIB_CODENAME=serena
DISTRIB_DESCRIPTION=”Linux Mint 18.1 Serena”
```

### 先决条件
为了使用 Minikube，我们应该安装 Kubectl、Minikube 和一些虚拟化驱动程序。
* 对于 OS X，安装 [xhyve 驱动](https://git.k8s.io/minikube/docs/drivers.md#xhyve-driver)、[VirtualBox](https://www.virtualbox.org/wiki/Downloads) 或者 [VMware Fusion](https://www.vmware.com/products/fusion)，然后再安装 Kubectl 和 Minikube。
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
```
* 对于 Windows，安装 [VirtualBox](https://www.virtualbox.org/wiki/Downloads) 或者 [Hyper-V](https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install)，然后再安装 Kubectl 和 Minikube。
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/windows/amd64/kubectl.exe
```
将二进制文件添加到你的 PATH 中(这篇[文章](https://www.windows-commandline.com/set-path-command-line/)解释了如何修改 PATH)
下载 `minikube-windows-amd64.exe`,将其重命名为 `minikube.exe`,并将其添加到你的 PATH 中。[在这](https://github.com/kubernetes/minikube/releases)可以找到最新版本。
* 对于 Linux，安装 [VirtualBox](https://www.virtualbox.org/wiki/Downloads) 或者 [KVM](http://www.linux-kvm.org/)，然后再安装 Kubectl 和 Minikube。
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
```
### 使用 Minikube
我们先从这个 Dockerfile 创建一个镜像:
```
FROM busybox
ADD index.html /www/index.html
EXPOSE 8000
CMD httpd -p 8000 -h /www; tail -f /dev/null
```
添加你希望在 index.html 中看到的内容。
构建镜像:
```
docker build -t eon01/hello-world-web-server .
```
我们来运行容器来测试它:
```
docker run -d --name webserver -p 8000:8000 eon01/hello-world-web-server
```
这是 `docker ps` 的输出:
```
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2ad8d688d812 eon01/hello-world-web-server "/bin/sh -c 'httpd..." 3 seconds ago Up 2 seconds 0.0.0.0:8000->8000/tcp webserver
```
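在提交镜像之前，也可以顺手用 curl 验证一下容器是否在正常返回页面（假设本机的 8000 端口没有被其它程序占用）：
```
curl http://localhost:8000
```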
让我们提交镜像并将其上传到公共 Docker Hub 中。你也可以使用自己的私有仓库:
```
docker commit webserver
docker push eon01/hello-world-web-server
```
删除容器,因为我们将与 Minikube 一起使用它。
```
docker rm -f webserver
```
启动 Minikube:
```
minikube start
```
检查状态:
```
minikube status
```
我们运行一个单一节点:
```
kubectl get node
```
运行 webserver:
```
kubectl run webserver --image=eon01/hello-world-web-server --port=8000
```
webserver 应该会暴露它的端口:
```
kubectl expose deployment webserver --type=NodePort
```
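如果想查看 Kubernetes 给这个 Service 随机分配的 NodePort，可以先看一下该 Service（`PORT(S)` 一列中 `8000:` 之后的高位端口即是）：
```
kubectl get service webserver
```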
为了得到服务 url 输入:
```
minikube service webserver --url
```
使用下面的命令得到 Web 页面的内容:
```
curl $(minikube service webserver --url)
```
显示运行中集群的摘要:
```
kubectl cluster-info
```
更多细节:
```
kubectl cluster-info dump
```
我们还可以使用以下方式列出 pod:
```
kubectl get pods
```
使用下面的方式访问面板:
```
minikube dashboard
```
如果你想访问 Web 程序的前端,输入:
```
kubectl proxy
```
如果我们要在容器内部执行一个命令,请使用以下命令获取 pod id:
```
kubectl get pods
```
然后像这样使用:
```
kubectl exec webserver-2022867364-0v1p9 -it -- /bin/sh
```
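进入容器之后，可以执行一些简单的命令确认服务状态，比如（BusyBox 镜像自带 wget；pod 名称请以你自己 `kubectl get pods` 的输出为准）：
```
/ # wget -qO- http://localhost:8000
/ # exit
```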
最后完成了,请删除所有部署:
```
kubectl delete deployments --all
```
删除所有 pod:
```
kubectl delete pods --all
```
并且停止 Minikube。
```
minikube stop
```
我希望你享受这个介绍。
### 更加深入
如果你对本文感到共鸣,您可以在[无痛 Docker 教程](http://painlessdocker.com/)中找到更多有趣的内容。
我们 [Eralabs](http://eralabs.io/) 将很乐意为你的 Docker 和云计算项目提供帮助,[联系我们](http://eralabs.io/),我们将很乐意听到你的项目。
请订阅 [DevOpsLinks](http://devopslinks.com/):成千上万的 IT 专家和 DevOps 爱好者在线社区。
你可能也有兴趣加入我们的新闻订阅 [Shipped](http://shipped.devopslinks.com/),一个专注于容器,编排和无服务技术的新闻订阅。
你可以在 [Twitter](https://twitter.com/eon01)、[Clarity](https://clarity.fm/aymenelamri/) 或我的[网站](http://aymenelamri.com/)上找到我,你也可以看看我的书:[SaltStack For DevOps](http://saltstackfordevops.com/)。
不要忘记加入我的最后一个项目 [DevOps 的职位](http://jobsfordevops.com/)!
如果你喜欢本文,请推荐它,并与你的关注者分享。
---
作者简介:
Aymen El Amri - 云和软件架构师、企业家、作者、www.eralabs.io 的 CEO、www.devopslinks.com 的创始人,个人页面:www.aymenelamri.com
---
via: <https://medium.com/devopslinks/using-kubernetes-minikube-for-local-development-c37c6e56e3db>
作者:[Aymen El Amri](https://medium.com/@eon01) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,849 | 一个开源软件许可证合规的经济高效模式 | https://opensource.com/article/17/9/economically-efficient-model | 2017-09-07T12:45:23 | [
"开源",
"许可证"
] | https://linux.cn/article-8849-1.html |
>
> 使用开源的方式有利于你的盈亏底线以及开源生态系统。
>
>
>

“<ruby> 合规性工业联合体 <rt> The Compliance Industrial Complex </rt></ruby>”这个说法,会让人联想到组织为了遵守开源许可条款而执行精心设计、成本高昂的流程的反乌托邦景象。所谓“生活经常模仿艺术”,许多组织确实采用了这种做法,遗憾的是,这也让它们失去了开源模式的许多好处。本文介绍了一种经济高效的开源软件许可证合规方法。
开源许可证通常对从第三方授权的代码分发者有三个要求:
1. 提供开源许可证的副本
2. 包括版权声明
3. 对于 copyleft 许可证(如 GPL),将相应的源代码提供给接受者。
*(与任何一般性声明一样,可能会有例外情况,因此始终建议审查许可条款,如有需要,请咨询律师的意见。)*
因为源代码(以及任何相关的文件,例如:许可证、README)通常都包含所有这些信息,所以最简单的遵循方法就是随着二进制/可执行程序一起提供源代码。
替代方案更加困难并且昂贵,因为在大多数情况下,你仍然需要提供开源许可证的副本并保留版权声明。提取这些信息来结合你的二进制/可执行版本并不简单。你需要流程、系统和人员来从源代码和相关文件中复制此信息,并将其插入到单独的文本文件或文档中。
不要低估创建此文件的时间和费用。虽然有工具也许可以自动化部分流程,但这些工具通常需要人力资源(例如工程师、质量经理、发布经理)来准备代码来扫描并对结果进行评估(没有完美的工具,几乎总是需要审查)。你的组织资源有限,将其转移到此活动会增加机会成本。考虑到这笔费用,每个后续版本(主要或次要)的成本将需要进行新的分析和修订。
选择不发布源代码还会带来一些不太为人注意的其他成本。这些成本源于没有把源代码回馈给开源项目的原始作者和/或维护者,这种回馈行为称为“上游化”(upstreaming)。仅靠上游化一般无法满足大多数开源许可证的要求,这也是本文主张随二进制/可执行文件一起发布源代码的原因。不过,上游化和随二进制/可执行文件一起提供源代码,都能带来额外的经济效益。这是因为你的组织不再需要维护一个私有分支,也就不必在每次发布时把内部修改与开源代码重新合并;随着你的内部代码库与社区项目渐行渐远,这项工作只会越来越费力、越来越混乱。上游化还能增强开源生态系统,鼓励社区创新,你的组织或许也能从中受益。
那么为什么大量的组织不会为其产品发布源代码来简化其合规性工作?在许多情况下,这是因为他们认为这可能会暴露他们竞争优势的信息。考虑到这些专有产品中的大量代码可能是开源代码的直接副本,以支持诸如 WiFi 或云服务这些当代产品的基础功能,这种信念可能是错误的。
即使对这些开源作品进行了修改来适配其专有产品,这些更改也往往是微不足道的,并包含了很少的新的版权部分或可用来专利的内容。因此,任何组织都应该通过这种方式来查看其代码,因为它可能会发现其代码库中绝大部分是开源的,只有一小部分是真正专有的、与竞争对手区分开来的部分。那么为什么不分发和向上游提交这些没有差别的代码呢?
不妨拒绝“合规性工业联合体”式的思维方式,以降低成本并大大简化合规工作。按照开源本来的方式使用开源,体验发布源代码的乐趣,这既有利于你的盈亏底线,也有利于开源生态系统,而你还将从中持续获得越来越多的收益。
---
作者简介
Jeffrey Robert Kaufman - Jeffrey R. Kaufman 是全球领先的开源软件解决方案提供商红帽公司的开源知识产权律师。Jeffrey 还担任着 Thomas Jefferson 法学院的兼职教授。 在加入红帽前,Jeffrey 在高通担任专利法律顾问,为首席科学家办公室提供开源顾问。 Jeffrey 在 RFID、条形码、图像处理和打印技术方面拥有多项专利。[更多关于我](https://opensource.com/users/jkaufman)
(题图: opensource.com)
---
via: <https://opensource.com/article/17/9/economically-efficient-model>
作者:[Jeffrey Robert Kaufman](https://opensource.com/users/jkaufman) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | "The Compliance Industrial Complex" is a term that evokes dystopian imagery of organizations engaging in elaborate and highly expensive processes to comply with open source license terms. As life often imitates art, many organizations engage in this practice, sadly robbing them of the many benefits of the open source model. This article presents an economically efficient approach to open source software license compliance.
Open source licenses generally impose three requirements on a distributor of code licensed from a third party:
- Provide a copy of the open source license(s)
- Include copyright notices
- For copyleft licenses (like GPL), make the corresponding source code available to the distributees
*(As with any general statement, there may be exceptions, so it is always advised to review license terms and, if necessary, seek the advice of an attorney.)*
Because the source code (and any associated files, e.g. license/README) generally contains all of this information, the easiest way to comply is to simply provide the source code along with your binary/executable application.
The alternative is more difficult and expensive, because, in most situations, you are still required to provide a copy of the open source licenses and retain copyright notices. Extracting this information to accompany your binary/executable release is not trivial. You need processes, systems, and people to copy this information out of the sources and associated files and insert them into a separate text file or document.
The amount of time and expense to create this file is not to be underestimated. Although there are software tools that may be used to partially automate the process, these tools often require resources (e.g., engineers, quality managers, release managers) to prepare code for scan and to review the results for accuracy (no tool is perfect and review is almost always required). Your organization has finite resources, and diverting them to this activity leads to opportunity costs. Compounding this expense, each subsequent release—major or minor—will require a new analysis and revision.
There are also other costs resulting from not choosing to release sources that are not well recognized. These stem from not releasing source code back to the original authors and/or maintainers of the open source project, an activity known as upstreaming. Upstreaming alone seldom meets the requirements of most open source licenses, which is why this article advocates releasing sources along with your binary/executable; however, both upstreaming and providing the source code along with your binary/executable affords additional economic benefits. This is because your organization will no longer be required to keep a private fork of your code changes that must be internally merged with the open source bits upon every release—an increasingly costly and messy endeavor as your internal code base diverges from the community project. Upstreaming also enhances the open source ecosystem, which encourages further innovations from the community from which your organization may benefit.
So why do a significant number of organizations not release source code for their products to simplify their compliance efforts? In many cases, this is because they are under the belief that it may reveal information that gives them a competitive edge. This belief may be misplaced in many situations, considering that substantial amounts of code in these proprietary products are likely direct copies of open source code to enable functions such as WiFi or cloud services, foundational features of most contemporary products.
Even if changes are made to these open source works to adapt them for proprietary offerings, such changes are often de minimis and contain little new copyright expression or patentable content. As such, any organization should look at its code through this lens, as it may discover that an overwhelming percentage of its code base is open source, with only a small percentage truly proprietary and enabling differentiation from its competitors. So why then not distribute and upstream the source to those non-differentiating bits?
Consider rejecting the Compliance Industrial Complex mindset to lower your cost and drastically simplify compliance. Use open source the way it was intended and experience the joy of releasing your source code to benefit your bottom line and the open source ecosystem from which you will continue to reap increasing benefits.
|
8,850 | Headless Chrome 入门 | https://developers.google.com/web/updates/2017/04/headless-chrome | 2017-09-08T11:34:00 | [
"浏览器",
"Chrome",
"Headless"
] | https://linux.cn/article-8850-1.html | 
### 摘要
在 Chrome 59 中开始搭载 [Headless Chrome](https://chromium.googlesource.com/chromium/src/+/lkgr/headless/README.md)。这是一种在<ruby> 无需显示 <rt> headless </rt></ruby>的环境下运行 Chrome 浏览器的方式。从本质上来说,就是不用 chrome 浏览器来运行 Chrome 的功能!它将 Chromium 和 Blink 渲染引擎提供的所有现代 Web 平台的功能都带入了命令行。
它有什么用?
<ruby> 无需显示 <rt> headless </rt></ruby>的浏览器对于自动化测试和不需要可视化 UI 界面的服务器环境是一个很好的工具。例如,你可能需要对真实的网页运行一些测试,创建一个 PDF,或者只是检查浏览器如何呈现 URL。
>
> **注意:** Mac 和 Linux 上的 Chrome 59 都可以运行无需显示模式。[对 Windows 的支持](https://bugs.chromium.org/p/chromium/issues/detail?id=686608)将在 Chrome 60 中提供。要检查你使用的 Chrome 版本,请在浏览器中打开 `chrome://version`。
>
>
>
### 开启<ruby> 无需显示 <rt> headless </rt></ruby>模式(命令行界面)
开启<ruby> 无需显示 <rt> headless </rt></ruby>模式最简单的方法是从命令行打开 Chrome 二进制文件。如果你已经安装了 Chrome 59 以上的版本,请使用 `--headless` 标志启动 Chrome:
```
chrome \
--headless \ # Runs Chrome in headless mode.
--disable-gpu \ # Temporarily needed for now.
--remote-debugging-port=9222 \
https://www.chromestatus.com # URL to open. Defaults to about:blank.
```
>
> **注意:**目前你仍然需要使用 `--disable-gpu` 标志。但它最终会不需要的。
>
>
>
`chrome` 二进制文件应该指向你安装 Chrome 的位置。确切的位置会因平台差异而不同。当前我在 Mac 上操作,所以我为安装的每个版本的 Chrome 都创建了方便使用的别名。
如果您使用 Chrome 的稳定版,并且无法获得测试版,我建议您使用 `chrome-canary` 版本:
```
alias chrome="/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome"
alias chrome-canary="/Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ Canary"
alias chromium="/Applications/Chromium.app/Contents/MacOS/Chromium"
```
在[这里](https://www.google.com/chrome/browser/canary.html)下载 Chrome Cannary。
### 命令行的功能
在某些情况下,你可能不需要[以脚本编程的方式](https://developers.google.com/web/updates/2017/04/headless-chrome#node)操作 Headless Chrome。可以使用一些[有用的命令行标志](https://cs.chromium.org/chromium/src/headless/app/headless_shell_switches.cc)来执行常见的任务。
#### 打印 DOM
`--dump-dom` 标志将打印 `document.body.innerHTML` 到标准输出:
```
chrome --headless --disable-gpu --dump-dom https://www.chromestatus.com/
```
#### 创建一个 PDF
`--print-to-pdf` 标志将页面转出为 PDF 文件:
```
chrome --headless --disable-gpu --print-to-pdf https://www.chromestatus.com/
```
#### 截图
要捕获页面的屏幕截图,请使用 `--screenshot` 标志:
```
chrome --headless --disable-gpu --screenshot https://www.chromestatus.com/
# Size of a standard letterhead.
chrome --headless --disable-gpu --screenshot --window-size=1280,1696 https://www.chromestatus.com/
# Nexus 5x
chrome --headless --disable-gpu --screenshot --window-size=412,732 https://www.chromestatus.com/
```
使用 `--screenshot` 标志运行 Headless Chrome 将在当前工作目录中生成一个名为 `screenshot.png` 的文件。如果你正在寻求整个页面的截图,那么会涉及到很多事情。来自 David Schnurr 的一篇很棒的博文已经介绍了这一内容。请查看 [使用 headless Chrome 作为自动截屏工具](https://medium.com/@dschnr/using-headless-chrome-as-an-automated-screenshot-tool-4b07dffba79a)。
#### REPL 模式 (read-eval-print loop)
`--repl` 标志可以使 Headless Chrome 运行在一个你可以使用浏览器评估 JS 表达式的模式下。执行下面的命令:
```
$ chrome --headless --disable-gpu --repl https://www.chromestatus.com/
[0608/112805.245285:INFO:headless_shell.cc(278)] Type a Javascript expression to evaluate or "quit" to exit.
>>> location.href
{"result":{"type":"string","value":"https://www.chromestatus.com/features"}}
>>> quit
```
### 在没有浏览器界面的情况下调试 Chrome
当你使用 `--remote-debugging-port=9222` 运行 Chrome 时,它会启动一个支持 [DevTools 协议](https://chromedevtools.github.io/devtools-protocol/)的实例。该协议用于与 Chrome 进行通信,并且驱动 Headless Chrome 浏览器实例。Sublime、VS Code 和 Node 这类工具也正是借助它来对应用程序进行远程调试的。#协同效应
由于你没有浏览器用户界面可用来查看网页,请在另一个浏览器中输入 `http://localhost:9222`,以检查一切是否正常。你将会看到一个<ruby> 可检查的 <rt> inspectable </rt></ruby>页面的列表,可以点击它们来查看 Headless Chrome 正在呈现的内容:

*DevTools 远程调试界面*
从这里,你就可以像往常一样使用熟悉的 DevTools 来检查、调试和调整页面了。如果你以编程方式使用 Headless Chrome,这个页面也是一个功能强大的调试工具,用于查看所有通过网络与浏览器交互的原始 DevTools 协议命令。
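除了在另一个浏览器中打开这个页面,也可以直接用 `curl` 访问 DevTools 协议暴露的 HTTP 端点,例如(输出内容因环境而异,下面只是示意):

```
curl http://localhost:9222/json/version
curl http://localhost:9222/json
```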
### 使用编程模式 (Node)
#### Puppeteer 库 API
[Puppeteer](https://github.com/GoogleChrome/puppeteer) 是一个由 Chrome 团队开发的 Node 库。它提供了一个高层次的 API 来控制无需显示版(或 完全版)的 Chrome。它与其他自动化测试库,如 Phantom 和 NightmareJS 相类似,但是只适用于最新版本的 Chrome。
除此之外,Puppeteer 还可用于轻松截取屏幕截图,创建 PDF,页面间导航以及获取有关这些页面的信息。如果你想快速地自动化进行浏览器测试,我建议使用该库。它隐藏了 DevTools 协议的复杂性,并可以处理诸如启动 Chrome 调试实例等繁冗的任务。
安装:
```
yarn add puppeteer
```
**例子** - 打印用户代理:
```
const puppeteer = require('puppeteer');
(async() => {
const browser = await puppeteer.launch();
console.log(await browser.version());
browser.close();
})();
```
**例子** - 获取页面的屏幕截图:
```
const puppeteer = require('puppeteer');
(async() => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://www.chromestatus.com', {waitUntil: 'networkidle'});
await page.pdf({path: 'page.pdf', format: 'A4'});
browser.close();
})();
```
查看 [Puppeteer 的文档](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md),了解完整 API 的更多信息。
#### CRI 库
[chrome-remote-interface](https://www.npmjs.com/package/chrome-remote-interface) 是一个比 Puppeteer API 更低层次的库。如果你想要更接近原始信息和更直接地使用 [DevTools 协议](https://chromedevtools.github.io/devtools-protocol/)的话,我推荐使用它。
**启动 Chrome**
chrome-remote-interface 不会为你启动 Chrome,所以你要自己启动它。
在前面的 CLI 章节中,我们使用 `--headless --remote-debugging-port=9222` [手动启动了 Chrome](https://developers.google.com/web/updates/2017/04/headless-chrome#cli)。但是,要想做到完全自动化测试,你可能希望从你的应用程序中启动 Chrome。
其中一种方法是使用 `child_process`:
```
const execFile = require('child_process').execFile;
function launchHeadlessChrome(url, callback) {
// Assuming MacOSx.
const CHROME = '/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome';
execFile(CHROME, ['--headless', '--disable-gpu', '--remote-debugging-port=9222', url], callback);
}
launchHeadlessChrome('https://www.chromestatus.com', (err, stdout, stderr) => {
...
});
```
但是如果你想要在多个平台上运行可移植的解决方案,事情会变得很棘手。请注意 Chrome 的硬编码路径:
**使用 ChromeLauncher**
[Lighthouse](https://developers.google.com/web/tools/lighthouse/) 是一个令人称奇的网络应用的质量测试工具。Lighthouse 内部开发了一个强大的用于启动 Chrome 的模块,现在已经被提取出来单独使用。[chrome-launcher NPM 模块](https://www.npmjs.com/package/chrome-launcher) 可以找到 Chrome 的安装位置,设置调试实例,启动浏览器和在程序运行完之后将其杀死。它最好的一点是可以跨平台工作,感谢 Node!
默认情况下,**chrome-launcher 会尝试启动 Chrome Canary**(如果已经安装),但是你也可以更改它,手动选择使用的 Chrome 版本。要想使用它,首先从 npm 安装:
```
yarn add chrome-launcher
```
**例子** - 使用 `chrome-launcher` 启动 Headless Chrome:
```
const chromeLauncher = require('chrome-launcher');
// Optional: set logging level of launcher to see its output.
// Install it using: yarn add lighthouse-logger
// const log = require('lighthouse-logger');
// log.setLevel('info');
/**
* Launches a debugging instance of Chrome.
* @param {boolean=} headless True (default) launches Chrome in headless mode.
* False launches a full version of Chrome.
* @return {Promise<ChromeLauncher>}
*/
function launchChrome(headless=true) {
return chromeLauncher.launch({
// port: 9222, // Uncomment to force a specific port of your choice.
chromeFlags: [
'--window-size=412,732',
'--disable-gpu',
headless ? '--headless' : ''
]
});
}
launchChrome().then(chrome => {
console.log(`Chrome debuggable on port: ${chrome.port}`);
...
// chrome.kill();
});
```
运行这个脚本没有做太多的事情,但你应该能在任务管理器中看到启动了一个 Chrome 的实例,它加载了页面 `about:blank`。记住,它不会有任何的浏览器界面,我们是无需显示的。
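如果想从终端确认这一点,也可以在另一个终端里查找无界面运行的 Chrome 进程(命令仅作示例):

```
ps aux | grep -i "[h]eadless"
```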
为了控制浏览器,我们需要 DevTools 协议!
#### 检索有关页面的信息
>
> **警告:** DevTools 协议可以做一些有趣的事情,但是起初可能有点令人生畏。我建议先花点时间浏览 [DevTools 协议查看器](https://chromedevtools.github.io/devtools-protocol/)。然后,转到 `chrome-remote-interface` 的 API 文档,看看它是如何包装原始协议的。
>
>
>
我们来安装该库:
```
yarn add chrome-remote-interface
```
**例子** - 打印用户代理:
```
const CDP = require('chrome-remote-interface');
...
launchChrome().then(async chrome => {
const version = await CDP.Version({port: chrome.port});
console.log(version['User-Agent']);
});
```
结果是类似这样的东西:`HeadlessChrome/60.0.3082.0`。
**例子** - 检查网站是否有 [Web 应用程序清单](https://developers.google.com/web/fundamentals/engage-and-retain/web-app-manifest/):
```
const CDP = require('chrome-remote-interface');
...
(async function() {
const chrome = await launchChrome();
const protocol = await CDP({port: chrome.port});
// Extract the DevTools protocol domains we need and enable them.
// See API docs: https://chromedevtools.github.io/devtools-protocol/
const {Page} = protocol;
await Page.enable();
Page.navigate({url: 'https://www.chromestatus.com/'});
// Wait for window.onload before doing stuff.
Page.loadEventFired(async () => {
const manifest = await Page.getAppManifest();
if (manifest.url) {
console.log('Manifest: ' + manifest.url);
console.log(manifest.data);
} else {
console.log('Site has no app manifest');
}
protocol.close();
chrome.kill(); // Kill Chrome.
});
})();
```
**例子** - 使用 DOM API 提取页面的 `<title>`:
```
const CDP = require('chrome-remote-interface');
...
(async function() {
const chrome = await launchChrome();
const protocol = await CDP({port: chrome.port});
// Extract the DevTools protocol domains we need and enable them.
// See API docs: https://chromedevtools.github.io/devtools-protocol/
const {Page, Runtime} = protocol;
await Promise.all([Page.enable(), Runtime.enable()]);
Page.navigate({url: 'https://www.chromestatus.com/'});
// Wait for window.onload before doing stuff.
Page.loadEventFired(async () => {
const js = "document.querySelector('title').textContent";
// Evaluate the JS expression in the page.
const result = await Runtime.evaluate({expression: js});
console.log('Title of page: ' + result.result.value);
protocol.close();
chrome.kill(); // Kill Chrome.
});
})();
```
### 使用 Selenium、WebDriver 和 ChromeDriver
现在,Selenium 开启了 Chrome 的完整实例。换句话说,这是一个自动化的解决方案,但不是完全无需显示的。但是,Selenium 只需要进行小小的配置即可运行 Headless Chrome。如果你想要关于如何自己设置的完整说明,我建议你阅读“[使用 Headless Chrome 来运行 Selenium](https://intoli.com/blog/running-selenium-with-headless-chrome/)”,不过你可以从下面的一些示例开始。
#### 使用 ChromeDriver
[ChromeDriver](https://sites.google.com/a/chromium.org/chromedriver/) 2.3.0 支持 Chrome 59 及更新版本,可与 Headless Chrome 配合使用。在某些情况下,你可能需要等到 Chrome 60 以解决 bug。例如,Chrome 59 中屏幕截图已知存在问题。
安装:
```
yarn add selenium-webdriver chromedriver
```
例子:
```
const fs = require('fs');
const webdriver = require('selenium-webdriver');
const chromedriver = require('chromedriver');
// This should be the path to your Canary installation.
// I'm assuming Mac for the example.
const PATH_TO_CANARY = '/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary';
const chromeCapabilities = webdriver.Capabilities.chrome();
chromeCapabilities.set('chromeOptions', {
  binary: PATH_TO_CANARY, // Screenshots require Chrome 60. Force Canary.
'args': [
'--headless',
]
});
const driver = new webdriver.Builder()
.forBrowser('chrome')
.withCapabilities(chromeCapabilities)
.build();
// Navigate to google.com, enter a search.
driver.get('https://www.google.com/');
driver.findElement({name: 'q'}).sendKeys('webdriver');
driver.findElement({name: 'btnG'}).click();
driver.wait(webdriver.until.titleIs('webdriver - Google Search'), 1000);
// Take screenshot of results page. Save to disk.
driver.takeScreenshot().then(base64png => {
fs.writeFileSync('screenshot.png', new Buffer(base64png, 'base64'));
});
driver.quit();
```
#### 使用 WebDriverIO
[WebDriverIO](http://webdriver.io/) 是一个在 Selenium WebDrive 上构建的更高层次的 API。
安装:
```
yarn add webdriverio chromedriver
```
例子:过滤 chromestatus.com 上的 CSS 功能:
```
const webdriverio = require('webdriverio');
const chromedriver = require('chromedriver');
// This should be the path to your Canary installation.
// I'm assuming Mac for the example.
const PATH_TO_CANARY = '/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary';
const PORT = 9515;
chromedriver.start([
'--url-base=wd/hub',
`--port=${PORT}`,
'--verbose'
]);
(async () => {
const opts = {
port: PORT,
desiredCapabilities: {
browserName: 'chrome',
chromeOptions: {
      binary: PATH_TO_CANARY, // Screenshots require Chrome 60. Force Canary.
args: ['--headless']
}
}
};
const browser = webdriverio.remote(opts).init();
await browser.url('https://www.chromestatus.com/features');
const title = await browser.getTitle();
console.log(`Title: ${title}`);
await browser.waitForText('.num-features', 3000);
let numFeatures = await browser.getText('.num-features');
console.log(`Chrome has ${numFeatures} total features`);
await browser.setValue('input[type="search"]', 'CSS');
console.log('Filtering features...');
await browser.pause(1000);
numFeatures = await browser.getText('.num-features');
console.log(`Chrome has ${numFeatures} CSS features`);
const buffer = await browser.saveScreenshot('screenshot.png');
console.log('Saved screenshot...');
chromedriver.stop();
browser.end();
})();
```
### 更多资源
以下是一些可以带你入门的有用资源:
文档
* [DevTools Protocol Viewer](https://chromedevtools.github.io/devtools-protocol/) - API 参考文档
工具
* [chrome-remote-interface](https://www.npmjs.com/package/chrome-remote-interface) - 基于 DevTools 协议的 node 模块
* [Lighthouse](https://github.com/GoogleChrome/lighthouse) - 测试 Web 应用程序质量的自动化工具;大量使用了协议
* [chrome-launcher](https://github.com/GoogleChrome/lighthouse/tree/master/chrome-launcher) - 用于启动 Chrome 的 node 模块,可以自动化
样例
* "[The Headless Web](https://paul.kinlan.me/the-headless-web/)" - Paul Kinlan 发布的使用了 Headless 和 api.ai 的精彩博客
### 常见问题
**我需要 `--disable-gpu` 标志吗?**
目前是需要的。`--disable-gpu` 标志在处理一些 bug 时是需要的。在未来版本的 Chrome 中就不需要了。查看 [https://crbug.com/546953#c152](https://bugs.chromium.org/p/chromium/issues/detail?id=546953#c152) 和 [https://crbug.com/695212](https://bugs.chromium.org/p/chromium/issues/detail?id=695212) 获取更多信息。
**所以我仍然需要 Xvfb 吗?**
不。Headless Chrome 不使用窗口,所以不需要像 Xvfb 这样的显示服务器。没有它你也可以愉快地运行你的自动化测试。
什么是 Xvfb?Xvfb 是一个用于类 Unix 系统的运行于内存之内的显示服务器,可以让你运行图形应用程序(如 Chrome),而无需附加的物理显示器。许多人使用 Xvfb 运行早期版本的 Chrome 进行 “headless” 测试。
**如何创建一个运行 Headless Chrome 的 Docker 容器?**
查看 [lighthouse-ci](https://github.com/ebidel/lighthouse-ci)。它有一个使用 Ubuntu 作为基础镜像的 [Dockerfile 示例](https://github.com/ebidel/lighthouse-ci/blob/master/builder/Dockerfile.headless),并且在 App Engine Flexible 容器中安装和运行了 Lighthouse。
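构建出这样的镜像之后,运行方式大致如下。注意其中的镜像名 `my-headless-chrome` 只是一个假设的示例名称,并不是现成可拉取的镜像:

```
docker build -t my-headless-chrome .
docker run --rm -p 9222:9222 my-headless-chrome
```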
**我可以把它和 Selenium / WebDriver / ChromeDriver 一起使用吗?**
是的。查看 [Using Selenium, WebDrive, or ChromeDriver](https://developers.google.com/web/updates/2017/04/headless-chrome#drivers)。
**它和 PhantomJS 有什么关系?**
Headless Chrome 和 [PhantomJS](http://phantomjs.org/) 是类似的工具。它们都可以用来在无需显示的环境中进行自动化测试。两者的主要不同在于 Phantom 使用了一个较老版本的 WebKit 作为它的渲染引擎,而 Headless Chrome 使用了最新版本的 Blink。
目前,Phantom 提供了比 [DevTools protocol](https://chromedevtools.github.io/devtools-protocol/) 更高层次的 API。
**我在哪儿提交 bug?**
对于 Headless Chrome 的 bug,请提交到 [crbug.com](https://bugs.chromium.org/p/chromium/issues/entry?components=Blink&blocking=705916&cc=skyostil%40chromium.org&Proj=Headless)。
对于 DevTools 协议的 bug,请提交到 [github.com/ChromeDevTools/devtools-protocol](https://github.com/ChromeDevTools/devtools-protocol/issues/new)。
---
作者简介
[Eric Bidelman](https://developers.google.com/web/resources/contributors#ericbidelman) 谷歌工程师,Lighthouse 开发,Web 和 Web 组件开发,Chrome 开发
---
via: <https://developers.google.com/web/updates/2017/04/headless-chrome>
作者:[Eric Bidelman](https://developers.google.com/web/resources/contributors#ericbidelman) 译者:[firmianay](https://github.com/firmianay) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,851 | ImageMagick 入门:使用命令行来编辑图片 | https://opensource.com/article/17/8/imagemagick | 2017-09-08T13:29:37 | [
"ImageMagick",
"图像查看"
] | https://linux.cn/article-8851-1.html |
>
> 了解使用此轻量级图像编辑器查看和修改图像的常见方法。
>
>
>

在最近一篇关于[轻量级图像查看器](https://opensource.com/article/17/7/4-lightweight-image-viewers-linux-desktop)的文章中,作者 Scott Nesbitt 提到了 `display`,它是 [ImageMagick](https://www.imagemagick.org/script/index.php) 中的一个组件。ImageMagick 不仅仅是一个图像查看器,它还提供了大量的图像编辑工具和选项。本教程将详细介绍如何在 ImageMagick 中使用 `display` 命令和其他命令行工具。
现在有许多优秀的图像编辑器可用,你可能会想知道为什么有人会选择一个非 GUI 的、基于命令行的程序,如 ImageMagick。一方面,它非常可靠。但更大的好处是,它允许你建立一个以特定的方式编辑大量图像的方式。
这篇对于常见的 ImageMagick 命令的介绍应该让你入门。
### display 命令
让我们从 Scott 提到的命令开始:`display`。假设你有一个目录,其中有很多想要查看的图像。使用以下命令开始 `display`:
```
cd Pictures
display *.JPG
```
这将按照字母数字顺序顺序加载你的 JPG 文件,每张放在一个简单的窗口中。左键单击图像可以打开一个简单的独立菜单(ImageMagick 中唯一的 GUI 功能)。

你可以在 **display** 菜单中找到以下内容:
* **File** 包含选项 Open、Next、Former、Select、Save、Print、Delete、New、Visual Directory 和 Quit。 *Select* 来选择要显示的特定文件,*Visual Directory* 显示当前工作目录中的所有文件(而不仅仅是图像)。如果要滚动显示所有选定的图像,你可以使用 *Next* 和 *Former*,但使用键盘快捷键(下一张图像用空格键,上一张图像用退格)更容易。
* **Edit** 提供 Undo、Redo、Cut、Copy 和 Paste,它们是配合更具体的编辑操作使用的辅助命令。当你尝试各种编辑功能、想看看它们的效果时,*Undo* 特别有用。
* **View** 有 Half Size、Original Size、Double Size、Resize、Apply、Refresh 和 Restore。这些大多是不用说明的,除非你在应用其中之一后保存图像,否则图像文件不会更改。*Resize* 会打开一个对话框,以像素为单位,带有或者不带尺寸限制,或者是百分比指定图片大小。我不知道 *Apply* 会做什么。
* **Transform** 显示 Crop、Chop、Flop、Flip、Rotate Right、Rotate Left、Rotate、Shear、Roll 和 Trim Edges。*Chop* 使用点击拖动操作剪切图像的垂直或水平部分,将边缘粘贴在一起。了解这些功能如何工作的最佳方法是亲自试一试,而不是光看文字说明。
* **Enhance** 提供 Hue、Saturation、Brightness、Gamma、Spiff、Dull、Contrast Stretch、Sigmoidal Contrast、Normalize、Equalize、Negate、Grayscale、Map 和 Quantize。这些是用于颜色和调整亮度和对比度的操作。
* **Effects** 有 Despeckle、Emboss、Reduce Noise、Add Noise、Sharpen、Blur、Threshold、Edge Detect、Spread、Shade、Raise 和 Segment。这些是相当标准的图像编辑效果。
* **F/X** 选项有 Solarize、Sepia Tone、Swirl、Implode、Vignette、Wave、Oil Paint 和 Charcoal Draw,在图像编辑器中也是非常常见的效果。
* **Image Edit** 包含 Annotate、Draw、Color、Matte、Composite、Add Border、Add Frame、Comment、Launch 和 Region of Interest。*Launch* 将打开 GIMP 中的当前图像(至少在我的 Fedora 中是这样)。*Region of Interest* 允许你选择一个区域来应用编辑。按下 Esc 取消选择该区域。
* **Miscellany** 提供 Image Info、Zoom Image、Show Preview、Show Histogram、Show Matte、Background、Slide Show 和 Preferences。*Show Preview* 看起来很有趣,但我费了不少劲也没能让它正常工作。
* **Help** 有 Overview、Browse Documentation 和 About Display。 *Overview* 提供了大量关于 display 的基本信息,并且包含大量内置的键盘快捷键,用于各种命令和操作。在我的 Fedora 中,*Browse Documentation* 没有作用。
虽然 `display` 的 GUI 界面提供了一个称职的图像编辑器,但 ImageMagick 还提供了 89 个命令行选项,其中许多与上述菜单项相对应。例如,如果我显示的数码相片目录中的图像大于我的屏幕尺寸,我不用在显示后单独调整大小,我可以指定:
```
display -resize 50% *.JPG
```
上面菜单中的许多操作都可以通过在命令行中添加一个选项来完成。但是还有其他的选项在菜单中没有,包括 `-monochrome`,将图像转换为黑白(不是灰度),还有 `-colors`,你可以指定在图像中使用多少种颜色。例如,尝试这些:
```
display -resize 50% -monochrome *.JPG
```
```
display -resize 50% -colors 8 *.JPG
```
这些操作会创建有趣的图像。试试增强颜色或进行其他编辑后减少颜色。记住,除非你保存并覆盖它们,否则原始文件保持不变。
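例如,借助下一节将介绍的 `convert` 命令,可以先用 `-modulate` 提高亮度和饱和度,再把颜色数降到 8,输出为新文件进行对比(参数取值仅为示例):

```
convert DSC_0001.JPG -modulate 110,140 -colors 8 enhanced_reduced_demo.jpg
```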
### convert 命令
`convert` 命令有 237 个选项 - 是的, 237 个! - 它提供了你可以做的各种各样的事情(其中一些 `display` 也可以做)。我只会覆盖其中的几个,主要是图像操作。你可以用 `convert` 做的两件简单的事情是:
```
convert DSC_0001.JPG dsc0001.png
```
```
convert *.bmp *.png
```
第一个命令将单个文件(DSC\_0001)从 JPG 转换为 PNG 格式,而不更改原始文件。第二个将对目录中的所有 BMP 图像执行此操作。
如果要查看 ImageMagick 可以使用的格式,请输入:
```
identify -list format
```
我们来看几个用 `convert` 命令来处理图像的有趣方法。以下是此命令的一般格式:
```
convert inputfilename [options] outputfilename
```
你有多个选项,它们按照从左到右排列的顺序完成。
以下是几个简单的选项:
```
convert monochrome_source.jpg -monochrome monochrome_example.jpg
```

```
convert DSC_0008.jpg -charcoal 1.2 charcoal_example.jpg
```

`-monochrome` 选项没有关联的设置,但 `-charcoal` 变量需要一个相关因子。根据我的经验,它需要一个小的数字(甚至小于 1)来实现类似于炭笔绘画的东西,否则你会得到很大的黑色斑点。即使如此,图像中的尖锐边缘也是非常明显的,与炭笔绘画不同。
现在来看看这些:
```
convert DSC_0032.JPG -edge 3 edge_demo.jpg
```
```
convert DSC_0032.JPG -colors 4 reduced4_demo.jpg
```
```
convert DSC_0032.JPG -colors 4 -edge 3 reduced+edge_demo.jpg
```

原始图像位于左上方。在第一个命令中,我使用了一个 `-edge` 选项,设置为 3(见右上角的图像) - 对于我的喜好而言小于它的数字都太精细了。在第二个命令(左下角的图像)中,我们将颜色的数量减少到了 4 个,与原来没有什么不同。但是看看当我们在第三个命令中组合这两个时,会发生什么(右下角的图像)!也许这有点大胆,但谁能预期到从原始图像或任何一个选项变成这个结果?
`-canny` 选项提供了另外一个惊喜。这是另一种边缘检测器,称为“多阶算法”。单独使用 `-canny` 可以产生基本黑色的图像和一些白线。我后面跟着一个 `-negate` 选项:
```
convert DSC_0049.jpg -canny 0x1 -negate canny_egret.jpg
convert DSC_0023.jpg -canny 0x1 -negate canny_ship.jpg
```

这有点极简主义,但我认为它类似于一种笔墨绘画,与原始照片有相当显著的差异。它并不能用于所有图片。一般来说,它对有锐利线条的图像效果最好。不是焦点的元素可能会消失。注意白鹭图片中的背景沙滩没有显示,因为它是模糊的。同样注意下船舶图片,虽然大多数边缘显示得非常好,因为没有颜色,我们失去了图片的整体形象,所以也许这可以作为一些数字着色,甚至在印后着色的基础。
### montage 命令
最后,我想谈一下 `montage` (蒙太奇)命令。我已经在上面展示了这个例子,我将单个图像组合成复合图片。
这是我如何生成炭笔的例子(请注意,它们都在一行):
```
montage -label %f DSC_0008.jpg charcoal_example.jpg -geometry +10+10
-resize 25% -shadow -title 'charcoal demo' charcoal_demo.jpg
```
`-label` 选项会在每个图像下方标记它的文件名(`%f`)。不用 `-geometry` 选项,所有的图像将是缩略图大小(120 像素宽),`+10+10` 负责边框大小。接下来,我调整了整个最终组合的大小(`-resize 25%`),并添加了一个阴影(没有设置,因此是默认值),最后为这次 montage 操作创建了一个标题(`-title`)。
你可以将所有图像名称放在最后,最后一个图像的名称将是 `montage` 操作所保存的文件名。这可用于为命令及其所有选项创建别名,然后我可以简单地键入该别名、输入适当的文件名即可。我偶尔会这么做来减少 `montage` 操作需要输入的命令长度。
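比如,可以在 shell 中为常用的选项组合定义一个别名(别名 `mkmontage` 和这里的选项组合都只是示例),之后只需在别名后面依次给出输入文件和最后的输出文件名:

```
alias mkmontage="montage -label %f -geometry +10+10 -shadow"
mkmontage DSC_0008.jpg charcoal_example.jpg charcoal_demo.jpg
```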
在 `-canny` 的例子中,我对 4 张图像进行了蒙太奇操作。我添加了 `-tile` 选项,确切地说是 `-tile 2x`,它创建了有两列的蒙太奇。我可以指定一个 `matrix`、`-tile 2x2` 或 `-tile x2` 来产生相同的结果。
ImageMagick 还有更多可以了解,所以我打算写更多关于它的文章,甚至可能使用 [Perl](https://opensource.com/sitewide-search?search_api_views_fulltext=perl) 脚本运行 ImageMagick 命令。ImageMagick 具有丰富的[文档](https://imagemagick.org/script/index.php),尽管该网站在示例或者显示结果上还不足,我认为最好的学习方式是通过实验和更改各种设置和选项来学习。
(题图: opensource.com)
---
作者简介:
Greg Pittman - Greg 是肯塔基州路易斯维尔的一名退休的神经科医生,对计算机和程序设计有着长期的兴趣,从 1960 年代的 Fortran IV 开始。当 Linux 和开源软件相继出现时,他开始学习更多,并最终做出贡献。他是 Scribus 团队的成员。
---
via: <https://opensource.com/article/17/8/imagemagick>
作者:[Greg Pittman](https://opensource.com/users/greg-p) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In a recent article about [lightweight image viewers](https://opensource.com/article/17/7/4-lightweight-image-viewers-linux-desktop), author Scott Nesbitt mentioned display, one of the components in [ImageMagick](https://www.imagemagick.org/script/index.php). ImageMagick is not merely an image viewer—it offers a large number of utilities and options for image editing. This tutorial will explain more about using the **display** command and other command-line utilities in ImageMagick.
With a number of excellent image editors available, you may be wondering why someone would choose a mainly non-GUI, command-line based program like ImageMagick. For one thing, it is rock-solid dependable. But an even bigger benefit is that it allows you to set up methods to edit a large number of images in a particular way.
This introduction to common ImageMagick commands should get you started.
## The display command
Let's start with the command Scott mentioned: **display**. Say you have a directory with a lot of images you want to look at. Start **display** with the following command:
```
cd Pictures
display *.JPG
```
This will load your JPG files sequentially in alphanumeric order, one at a time in a simple window. Left-clicking on an image brings up a simple, standalone menu (the only GUI feature you'll see in ImageMagick).

opensource.com
Here's what you'll find in the **display** menu:
**File**contains the options*Open, Next, Former, Select, Save, Print, Delete, New, Visual Directory*, and*Quit*.*Select*picks a specific image file to display,*Visual Directory*shows all of the files (not just the images) in the current working directory. If you want to scroll through all the selected images, you can use*Next*and*Former*, but it's easier to use their keyboard shortcuts (Spacebar for the next image and Backspace for the previous).**Edit**offers*Undo, Redo, Cut, Copy*, and*Paste*, which are just auxiliary commands to more specific editing process.*Undo*is especially useful when you're playing around with different edits to see what they do.**View**has*Half Size, Original Size, Double Size, Resize, Apply, Refresh*, and*Restore*. These are mostly self-explanatory and, unless you save the image after applying one of them, the image file isn't changed.*Resize*brings up a dialog to name a specific size either in pixels, with or without constrained dimensions, or a percentage. I'm not sure what*Apply*does.**Transform**shows*Crop, Chop, Flop, Flip, Rotate Right, Rotate Left, Rotate, Shear, Roll*, and*Trim Edges*.*Chop*uses a click-drag operation to cut out a vertical or horizontal section of the image, pasting the edges together. The best way to learn how these features work is to play with them, rather than reading about them.**Enhance**provides*Hue, Saturation, Brightness, Gamma, Spiff, Dull, Contrast Stretch, Sigmoidal Contrast, Normalize, Equalize, Negate, Grayscale, Map*, and*Quantize*. These are operations for color manipulation and adjusting brightness and contrast.**Effects**has*Despeckle, Emboss, Reduce Noise, Add Noise, Sharpen, Blur, Threshold, Edge Detect, Spread, Shade, Raise*, and*Segment*. These are fairly standard image editing effects.**F/X**options are*Solarize, Sepia Tone, Swirl, Implode, Vignette, Wave, Oil Paint*, and*Charcoal Draw*, also very common effects in image editors.**Image Edit**contains*Annotate, Draw, Color, Matte, Composite, Add Border, Add Frame, Comment, Launch*, and*Region of Interest*.*Launch*will open the current image in GIMP (in my Fedora at least).*Region of Interest*allows you to select an area to apply editing; press Esc to deselect the region.**Miscellany**offers*Image Info, Zoom Image, Show Preview, Show Histogram, Show Matte, Background, Slide Show*, and*Preferences*.*Show Preview*seems interesting, but I struggled to get it to work.**Help**shows*Overview, Browse Documentation*, and*About Display*.*Overview*gives a lot of basic information about display and includes a large number of built-in keyboard equivalents for various commands and operations. In my Fedora,*Browse Documentation*took me nowhere.
Although **display**'s GUI interface provides a reasonably competent image editor, ImageMagick also provides 89 command-line options, many of which correspond to the menu items above. For example, if I'm displaying a directory of digital images that are larger than my screen size, rather than resizing them individually after they appear on my screen, I can specify:
```
display -resize 50% *.JPG
```
Many of the operations in the menus above can also be done by adding an option in the command line. But there are others that aren't available from the menu, including **‑monochrome**, which converts the image to black and white (not grayscale), and **‑colors**, where you can specify how many colors to use in the image. For example, try these out:
```
display -resize 50% -monochrome *.JPG
```
```
display -resize 50% -colors 8 *.JPG
```
These operations create interesting images. Try enhancing colors or making other edits after reducing colors. Remember, unless you save and overwrite them, the original files remain unchanged.
## The convert command
The **convert** command has 237 options—yes 237—that provide a wide range of things you can do (some of which display can also do). I'll only cover a few of them, mostly sticking with image manipulation. Two simple things you can do with **convert** would be:
```
convert DSC_0001.JPG dsc0001.png
```
```
convert *.bmp *.png
```
The first command would convert a single file (DSC_0001) from JPG to PNG format without changing the original. The second would do this operation on all the BMP images in a directory.
If you want to see the formats ImageMagick can work with, type:
```
identify -list format
```
Let's pick through a few interesting ways we can use the **convert** command to manipulate images. Here is the general format for this command:
```
convert inputfilename [options] outputfilename
```
You can have multiple options, and they are done in the order they are arranged, from left to right.
Here are a couple of simple options:
```
convert monochrome_source.jpg -monochrome monochrome_example.jpg
```

opensource.com
```
convert DSC_0008.jpg -charcoal 1.2 charcoal_example.jpg
```

opensource.com
The **‑monochrome** option has no associated setting, but the **‑charcoal** variable needs an associated factor. In my experience, it needs to be a small number (even less than 1) to achieve something that resembles a charcoal drawing, otherwise you get pretty heavy blobs of black. Even so, the sharp edges in an image are quite distinct, unlike in a charcoal drawing.
Now let's look at these:
```
convert DSC_0032.JPG -edge 3 edge_demo.jpg
```
```
convert DSC_0032.JPG -colors 4 reduced4_demo.jpg
```
```
convert DSC_0032.JPG -colors 4 -edge 3 reduced+edge_demo.jpg
```

opensource.com
The original image is in the upper left. In the first command, I applied an **‑edge** option with a setting of 3 (see the upper-right image)—anything less than that was too subtle for my liking. In the second command (the lower-left image), we have reduced the number of colors to four, which doesn't look much different from the original. But look what happens when we combine these two in the third command (lower-right image)! Perhaps it's a bit garish, but who would have expected this result from the original image or either option on its own?
The **‑canny** command provided another surprise. This is another kind of edge detector, called a "multi-stage algorithm." Using **‑canny** alone produces a mostly black image and some white lines. I followed that with a **‑negate** command:
```
convert DSC_0049.jpg -canny 0x1 -negate canny_egret.jpg
convert DSC_0023.jpg -canny 0x1 -negate canny_ship.jpg
```

opensource.com
It's a bit minimalist, but I think it resembles a pen-and-ink drawing, a rather remarkable difference from the original photos. It doesn't work well with all images; generally, it works best with images with sharp lines. Elements that are out of focus are likely to disappear; notice how the background sandbar in the egret picture doesn't show up because it is blurred. Also notice in the ship picture, while most edges show up very well, without colors we lose the gestalt of the picture, so perhaps this could be the basis for some digital coloration or even coloring after printing.
## The montage command
Finally, I want to talk about the **montage** command. I've already shown examples of it above, where I have combined single images into composites.
Here's how I generated the charcoal example (note that it would all be on one line):
```
montage -label %f DSC_0008.jpg charcoal_example.jpg -geometry +10+10
-resize 25% -shadow -title 'charcoal demo' charcoal_demo.jpg
```
The **-label** option labels each image with its filename (**%f**) underneath. Without the **‑geometry** option, all the images would be thumbnail size (120 pixels wide), and **+10+10** manages the border size. Next, I resized the entire final composite (**‑resize 25%**) and added a shadow (with no settings, so it's the default), and finally created a **title** for the montage.
You can place all the image names at the end, with the last image name the file where the montage is saved. This might be useful to create an alias for the command and all its options, then I can simply type the alias followed by the appropriate filenames. I've done this on occasion to reduce the typing needed to create a series of montages.
In the **‑canny** examples, I had four images in the montage. I added the **‑tile** option, specifically **‑tile 2x**, which created a montage of two columns. I could have specified a **matrix**, **‑tile 2x2**, or **‑tile x2** to produce the same result.
There is a lot more to learn about ImageMagick, so I plan to write more about it, maybe even about using [Perl](https://opensource.com/sitewide-search?search_api_views_fulltext=perl) to script ImageMagick commands. ImageMagick has extensive [documentation](https://imagemagick.org/script/index.php), although the site is short on examples or showing results, and I think the best way to learn is by experimenting and changing various settings and options.
|
8,852 | GitHub 的 DNS 基础设施 | https://githubengineering.com/dns-infrastructure-at-github/ | 2017-09-09T16:59:10 | [
"GitHub",
"DNS"
] | https://linux.cn/article-8852-1.html | 
在 GitHub,我们最近从头改进了 DNS。这包括了我们[如何与外部 DNS 提供商交互](https://githubengineering.com/enabling-split-authority-dns-with-octodns/)以及我们如何在内部向我们的主机提供记录。为此,我们必须设计和构建一个新的 DNS 基础设施,它可以随着 GitHub 的增长扩展并跨越多个数据中心。
以前,GitHub 的 DNS 基础设施相当简单直接。它包括每台服务器上本地的、只具备转发功能的 DNS 缓存服务器,以及一对被所有这些主机使用的缓存服务器和权威服务器主机。这些主机在内部网络以及公共互联网上都可用。我们在缓存守护程序中配置了<ruby> 区域 <rt> zone </rt></ruby><ruby> 存根 <rt> stub </rt></ruby>,以在本地进行查询,而不是在互联网上进行递归。我们还在我们的 DNS 提供商处设置了 NS 记录,它们将特定的内部<ruby> 域 <rt> domain </rt></ruby>指向这对主机的公共 IP,以便我们网络外部的查询。
这个配置使用了很多年,但它并非没有缺点。许多程序对于解析 DNS 查询非常敏感,我们遇到的任何性能或可用性问题在最好的情况下也会导致服务排队和性能降级,而最坏情况下客户会遭遇服务中断。配置和代码的更改可能会导致查询率发生大幅度的意外变化。因此超出这两台主机的扩展成为了一个问题。由于这些主机的网络配置,如果我们只是继续添加 IP 和主机的话存在一些本身的问题。在试图解决和补救这些问题的同时,由于缺乏测量指标和可见性,老旧的系统难以识别问题的原因。在许多情况下,我们使用 `tcpdump` 来识别有问题的流量和查询。另一个问题是在公共 DNS 服务器上运行,我们处于泄露内部网络信息的风险之下。因此,我们决定建立更好的东西,并开始确定我们对新系统的要求。
我们着手设计一个新的 DNS 基础设施,以改善上述包括扩展和可见性在内的运维问题,并引入了一些额外的需求。我们希望通过外部 DNS 提供商继续运行我们的公共 DNS 域,因此我们构建的系统需要与供应商无关。此外,我们希望该系统能够服务于我们的内部和外部域,这意味着内部域仅在我们的内部网络上可用,除非另有特别配置,而外部域也不用离开我们的内部网络就可解析。我们希望新的 DNS 架构不但可以[基于部署的工作流进行更改](https://githubengineering.com/enabling-split-authority-dns-with-octodns/),并可以通过我们的仓库和配置系统使用 API 自动更改 DNS 记录。新系统不能有任何外部依赖,太依赖于 DNS 功能将会陷入级联故障,这包括连接到其他数据中心和其中可能有的 DNS 服务。我们的旧系统将缓存服务器和权威服务器在同一台主机上混合使用。我们想转到具有独立角色的分层设计。最后,我们希望系统能够支持多数据中心环境,无论是 EC2 还是裸机。
### 实现

为了构建这个系统,我们确定了三类主机:<ruby> 缓存主机 <rt> cache </rt></ruby>、<ruby> 边缘主机 <rt> edge </rt></ruby>和<ruby> 权威主机 <rt> authority </rt></ruby>。缓存主机作为<ruby> 递归解析器 <rt> recursive resolver </rt></ruby>和 DNS “路由器” 缓存来自边缘层的响应。边缘层运行 DNS 权威守护程序,用于响应缓存层对 DNS <ruby> 区域 <rt> zone </rt></ruby>的请求,其被配置为来自权威层的<ruby> 区域传输 <rt> zone transfer </rt></ruby>。权威层作为隐藏的 DNS <ruby> 主服务器 <rt> master </rt></ruby>,作为 DNS 数据的规范来源,为来自边缘主机的<ruby> 区域传输 <rt> zone transfer </rt></ruby>提供服务,并提供用于创建、修改或删除记录的 HTTP API。
在我们的新配置中,缓存主机存在于每个数据中心中,这意味着应用主机不需要穿过数据中心边界来检索记录。缓存主机被配置为将<ruby> 区域 <rt> zone </rt></ruby>映射到其<ruby> 地域 <rt> region </rt></ruby>内的边缘主机,以便将我们的内部<ruby> 区域 <rt> zone </rt></ruby>路由到我们自己的主机。未明确配置的任何<ruby> 区域 <rt> zone </rt></ruby>将通过互联网递归解析。
边缘主机是地域性的主机,存在我们的网络边缘 PoP(<ruby> 存在点 <rt> Point of Presence </rt></ruby>)内。我们的 PoP 有一个或多个依赖于它们进行外部连接的数据中心,没有 PoP 数据中心将无法访问互联网,互联网也无法访问它们。边缘主机对所有的权威主机执行<ruby> 区域传输 <rt> zone transfer </rt></ruby>,无论它们存在什么<ruby> 地域 <rt> region </rt></ruby>或<ruby> 位置 <rt> location </rt></ruby>,并将这些区域存在本地的磁盘上。
我们的权威主机也是地域性的主机,只包含适用于其所在<ruby> 地域 <rt> region </rt></ruby>的<ruby> 区域 <rt> zone </rt></ruby>。我们的仓库和配置系统决定一个<ruby> 区域 <rt> zone </rt></ruby>存放在哪个<ruby> 地域性权威主机 <rt> regional authority </rt></ruby>,并通过 HTTP API 服务来创建和删除记录。 OctoDNS 将区域映射到地域性权威主机,并使用相同的 API 创建静态记录,以及确保动态源处于同步状态。对于外部域 (如 github.com),我们有另外一个单独的权威主机,以允许我们可以在连接中断期间查询我们的外部域。所有记录都存储在 MySQL 中。
### 可运维性

迁移到更现代的 DNS 基础设施的巨大好处是可观察性。我们的旧 DNS 系统几乎没有指标,只有有限的日志。决定使用哪些 DNS 服务器的一个重要因素是它们所产生的指标的广度和深度。我们最终用 [Unbound](https://unbound.net/) 作为缓存主机,[NSD](https://www.nlnetlabs.nl/projects/nsd/) 作为边缘主机,[PowerDNS](https://powerdns.com/) 作为权威主机,所有这些都已在比 GitHub 大得多的 DNS 基础架构中得到了证实。
当在我们的裸机数据中心运行时,缓存通过私有的<ruby> <a href="https://en.wikipedia.org/wiki/Anycast"> 任播 </a> <rt> anycast </rt></ruby> IP 访问,从而使之可以到达最近的可用缓存主机。缓存主机已经以机架感知的方式部署,在它们之间提供了一定程度的平衡负载,并且与一些电源和网络故障模式相隔离。当缓存主机出现故障时,通常将用其进行 DNS 查询的服务器现在将自动路由到下一个最接近的缓存主机,以保持低延迟并提供对某些故障模式的容错。任播允许我们扩展单个 IP 地址后面的缓存数量,这与先前的配置不同,使得我们能够按 DNS 需求量运行尽可能多的缓存主机。
无论地域或位置如何,边缘主机都从权威层进行区域传输。我们的<ruby> 区域 <rt> zone </rt></ruby>数据量并不大,因此在每个<ruby> 地域 <rt> region </rt></ruby>都保留所有<ruby> 区域 <rt> zone </rt></ruby>的副本并不是问题。(LCTT 译注:此处原文“Our zones are not large enough that keeping a copy of all of them in every region is a problem.”,根据上下文理解而翻译。)这意味着对于每个区域而言,即使某个地域处于脱机状态,或者上游服务提供商出现连接问题,所有缓存服务器仍然可以访问本地边缘服务器,而后者保存有所有区域的本地副本。事实证明,这一变化在面对连接问题时相当有弹性,并且在不久前一次原本会导致客户服务中断的故障期间,帮助 GitHub 保持了可用。
那些区域传输包括了内部和外部域从它们相应的权威服务器进行的传输。正如你可能会猜想像 github.com 这样的区域是外部的,像 github.net 这样的区域通常是内部的。它们之间的区别仅在于我们使用的类型和存储在其中的数据。了解哪些区域是内部和外部的,为我们在配置中提供了一些灵活性。
```
$ dig +short github.com
192.30.253.112
192.30.253.113
```
公共<ruby> 区域 <rt> zone </rt></ruby>被[同步](https://githubengineering.com/enabling-split-authority-dns-with-octodns/)到外部 DNS 提供商,并且是 GitHub 用户每天使用的 DNS 记录。另外,公共区域在我们的网络中是完全可解析的,而不需要与我们的外部提供商进行通信。这意味着需要查询 `api.github.com` 的任何服务都可以这样做,而无需依赖外部网络连接。我们还使用了 Unbound 的 `stub-first` 配置选项,它给了我们第二次查询的机会,如果我们的内部 DNS 服务由于某些原因在外部查询失败,则可以进行第二次查找。
```
$ dig +short time.github.net
10.127.6.10
```
大部分的 `github.net` 区域是完全私有的,无法从互联网访问,它只包含 [RFC 1918](http://www.faqs.org/rfcs/rfc1918.html) 中规定的 IP 地址。每个地域和站点都划分了私有区域。每个地域和/或站点都具有适用于该位置的一组子区域,子区域用于管理网络、服务发现、特定的服务记录,并且还包括在我们仓库中的配置主机。私有区域还包括 PTR 反向查找区域。
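可以用一次反向(PTR)查询来说明这类反向区域的作用。下面沿用上文 `time.github.net` 的例子,输出仅为示意:

```
$ dig +short -x 10.127.6.10
time.github.net.
```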
### 总结
用一个新系统替换可以为数百万客户提供服务的旧系统并不容易。使用实用的、基于需求的方法来设计和实施我们的新 DNS 系统,才能打造出一个能够迅速有效地运行、并有望与 GitHub 一起成长的 DNS 基础设施。
想帮助 GitHub SRE 团队解决有趣的问题吗?我们很乐意你加入我们。[在这申请](https://boards.greenhouse.io/github/jobs/669805#.WPVqJlPyvUI)。
---
via: <https://githubengineering.com/dns-infrastructure-at-github/>
作者:[Joe Williams](https://github.com/joewilliams) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Redirecting… |
8,853 | Samba 系列(十五):用 SSSD 和 Realm 集成 Ubuntu 到 Samba4 AD DC | https://www.tecmint.com/integrate-ubuntu-to-samba4-ad-dc-with-sssd-and-realm/ | 2017-09-09T18:14:00 | [
"Samba",
"SSSD"
] | https://linux.cn/article-8853-1.html | 
本教程将告诉你如何借助 SSSD 和 Realmd 服务,将一台 Ubuntu 桌面版机器加入到 Samba4 活动目录域中,以便在活动目录中认证用户。
### 要求:
1. [在 Ubuntu 上用 Samba4 创建一个活动目录架构](/article-8065-1.html)
### 第 1 步:初始配置
1、 在把 Ubuntu 加入活动目录前确保主机名被正确设置了。使用 `hostnamectl` 命令设置机器名字或者手动编辑 `/etc/hostname` 文件。
```
$ sudo hostnamectl set-hostname your_machine_short_hostname
$ cat /etc/hostname
$ hostnamectl
```
2、 接下来,编辑机器网络接口设置并且添加合适的 IP 设置,并将正确的 DNS IP 服务器地址指向 Samba 活动目录域控制器,如下图所示。
如果你已经配置了 DHCP 服务来为局域网机器自动分配包括合适的 AD DNS IP 地址的 IP 设置,那么你可以跳过这一步。
[](https://www.tecmint.com/wp-content/uploads/2017/07/Configure-Network-Interface.jpg)
*设置网络接口*
上图中,`192.168.1.254` 和 `192.168.1.253` 代表 Samba4 域控制器的 IP 地址。
3、 用 GUI(图形用户界面)或命令行重启网络服务来应用修改,并且对你的域名发起一系列 ping 请求,测试 DNS 解析是否如预期工作。也可以用 `host` 命令来测试 DNS 解析。
```
$ sudo systemctl restart networking.service
$ host your_domain.tld
$ ping -c2 your_domain_name
$ ping -c2 adc1
$ ping -c2 adc2
```
4、 最后, 确保机器时间和 Samba4 AD 同步。安装 `ntpdate` 包并用下列指令和 AD 同步时间。
```
$ sudo apt-get install ntpdate
$ sudo ntpdate your_domain_name
```
### 第 2 步:安装需要的包
5、 这一步将安装将 Ubuntu 加入 Samba4 活动目录域控制器所必须的软件和依赖:Realmd 和 SSSD 服务。
```
$ sudo apt install adcli realmd krb5-user samba-common-bin samba-libs samba-dsdb-modules sssd sssd-tools libnss-sss libpam-sss packagekit policykit-1
```
6、 输入大写的默认 realm 名称,然后按下回车继续安装。
[](https://www.tecmint.com/wp-content/uploads/2017/07/Set-realm-name.png)
*输入 Realm 名称*
7、 接着,创建包含以下内容的 SSSD 配置文件。
```
$ sudo nano /etc/sssd/sssd.conf
```
加入下面的内容到 `sssd.conf` 文件。
```
[nss]
filter_groups = root
filter_users = root
reconnection_retries = 3
[pam]
reconnection_retries = 3
[sssd]
domains = tecmint.lan
config_file_version = 2
services = nss, pam
default_domain_suffix = TECMINT.LAN
[domain/tecmint.lan]
ad_domain = tecmint.lan
krb5_realm = TECMINT.LAN
realmd_tags = manages-system joined-with-samba
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = True
fallback_homedir = /home/%d/%u
access_provider = ad
auth_provider = ad
chpass_provider = ad
access_provider = ad
ldap_schema = ad
dyndns_update = true
dyndns_refresh_interval = 43200
dyndns_update_ptr = true
dyndns_ttl = 3600
```
确保你对应地替换了下列参数的域名:
```
domains = tecmint.lan
default_domain_suffix = TECMINT.LAN
[domain/tecmint.lan]
ad_domain = tecmint.lan
krb5_realm = TECMINT.LAN
```
8、 接着,用下列命令给 SSSD 配置文件适当的权限:
```
$ sudo chmod 700 /etc/sssd/sssd.conf
```
9、 现在,打开并编辑 Realmd 配置文件,输入下面这行:
```
$ sudo nano /etc/realmd.conf
```
`realmd.conf` 文件摘录:
```
[active-directory]
os-name = Linux Ubuntu
os-version = 17.04
[service]
automatic-install = yes
[users]
default-home = /home/%d/%u
default-shell = /bin/bash
[tecmint.lan]
user-principal = yes
fully-qualified-names = no
```
10、 最后需要修改的文件属于 Samba 守护进程。 打开 `/etc/samba/smb.conf` 文件编辑,然后在文件开头加入下面这块代码,在 `[global]` 之后的部分如下图所示。
```
workgroup = TECMINT
client signing = yes
client use spnego = yes
kerberos method = secrets and keytab
realm = TECMINT.LAN
security = ads
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/Configure-Samba-Server.jpg)
*配置 Samba 服务器*
确保你替换了域名值,特别是对应域名的 realm 值,并运行 `testparm` 命令检验设置文件是否包含错误。
```
$ sudo testparm
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/Test-Samba-Configuration.jpg)
*测试 Samba 配置*
11、 在做完所有必需的修改之后,用 AD 管理员帐号验证 Kerberos 认证并用下面的命令列出票据。
```
$ sudo kinit ad_admin_user@TECMINT.LAN
$ sudo klist
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/Check-Kerberos-Authentication.jpg)
*检验 Kerberos 认证*
### 第 3 步: 加入 Ubuntu 到 Samba4 Realm
12、 键入下列命令将 Ubuntu 机器加入到 Samba4 活动目录。用有管理员权限的 AD DC 账户名字,以便绑定 realm 可以如预期般工作,并替换对应的域名值。
```
$ sudo realm discover -v DOMAIN.TLD
$ sudo realm list
$ sudo realm join TECMINT.LAN -U ad_admin_user -v
$ sudo net ads join -k
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/Join-Ubuntu-to-Samba4-Realm.jpg)
*加入 Ubuntu 到 Samba4 Realm*
[](https://www.tecmint.com/wp-content/uploads/2017/07/List-Realm-Domain-Info.jpg)
*列出 Realm Domain 信息*
[](https://www.tecmint.com/wp-content/uploads/2017/07/Add-User-to-Realm-Domain.jpg)
*添加用户到 Realm Domain*
[](https://www.tecmint.com/wp-content/uploads/2017/07/Add-Domain-to-Realm.jpg)
*添加 Domain 到 Realm*
13、 区域绑定好了之后,运行下面的命令确保所有域账户允许在这台机器上认证。
```
$ sudo realm permit --all
```
然后你可以使用下面举例的 `realm` 命令允许或者禁止域用户帐号或群组访问。
```
$ sudo realm deny -a
$ realm permit --groups 'domain.tld\Linux Admins'
$ realm permit user1@domain.tld
$ realm permit DOMAIN\\User2
```
14、 从一个 [安装了 RSAT 工具的](/article-8097-1.html) Windows 机器上你可以打开 AD UC 并浏览“<ruby> 电脑 <rt> computers </rt></ruby>”容器,并检验是否有一个使用你机器名的对象帐号已经创建。
[](https://www.tecmint.com/wp-content/uploads/2017/07/Confirm-Domain-Added.jpg)
*确保域被加入 AD DC*
### 第 4 步:配置 AD 账户认证
15、 为了在 Ubuntu 机器上用域账户认证,你需要用 root 权限运行 `pam-auth-update` 命令并启用所有 PAM 配置文件,包括为每个域账户在第一次登录时自动创建家目录的选项。
按 [空格] 键检验所有配置项并点击 ok 来应用配置。
```
$ sudo pam-auth-update
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/PAM-Configuration.jpg)
*PAM 配置*
16、 在系统上手动编辑 `/etc/pam.d/common-account` 文件,下面这几行是为了给认证过的域用户自动创建家目录。
```
session required pam_mkhomedir.so skel=/etc/skel/ umask=0022
```
17、 如果活动目录用户不能用 linux 命令行修改他们的密码,打开 `/etc/pam.d/common-password` 文件并在 `password` 行移除 `use_authtok` 语句,最后如下:
```
password [success=1 default=ignore] pam_winbind.so try_first_pass
```
18、 最后,用下面的命令重启并启用以应用 Realmd 和 SSSD 服务的修改:
```
$ sudo systemctl restart realmd sssd
$ sudo systemctl enable realmd sssd
```
19、 为了测试 Ubuntu 机器是否成功集成到 realm,安装 winbind 包并运行 `wbinfo` 命令列出域账户和群组,如下所示。
```
$ sudo apt-get install winbind
$ wbinfo -u
$ wbinfo -g
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/List-Domain-Accounts.jpg)
*列出域账户*
20、 同样,也可以针对特定的域用户或群组使用 `getent` 命令检验 Winbind nsswitch 模块。
```
$ sudo getent passwd your_domain_user
$ sudo getent group 'domain admins'
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/check-Winbind-nsswitch.jpg)
*检验 Winbind Nsswitch*
21、 你也可以用 Linux `id` 命令获取 AD 账户的信息,命令如下:
```
$ id tecmint_user
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/Check-AD-User-Info.jpg)
*检验 AD 用户信息*
22、 用 `su -` 后跟上域用户名参数来认证 Ubuntu 主机的一个 Samba4 AD 账户。运行 `id` 命令获取该 AD 账户的更多信息。
```
$ su - your_ad_user
```
[](https://www.tecmint.com/wp-content/uploads/2017/07/AD-User-Authentication.jpg)
*AD 用户认证*
用 `pwd` 命令查看你的域用户当前工作目录,和用 `passwd` 命令修改密码。
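例如(下面的家目录路径取决于 `sssd.conf` 中 `fallback_homedir` 的设置,输出仅为示意):

```
$ pwd
/home/tecmint.lan/your_ad_user
$ passwd
```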
23、 在 Ubuntu 上使用有 root 权限的域账户,你需要用下面的命令添加 AD 用户名到 sudo 系统群组:
```
$ sudo usermod -aG sudo your_domain_user@your_domain.tld
```
用域账户登录 Ubuntu 并运行 `apt update` 命令来更新你的系统以检验 root 权限。
24、 给一个域群组 root 权限,用 `visudo` 命令打开并编辑 `/etc/sudoers` 文件,并加入如下行:
```
%domain\ admins@your_domain.tld ALL=(ALL:ALL) ALL
```
25、 要在 Ubuntu 桌面使用域账户认证,通过编辑 `/usr/share/lightdm/lightdm.conf.d/50-ubuntu.conf` 文件来修改 LightDM 显示管理器,增加以下两行并重启 lightdm 服务或重启机器应用修改。
```
greeter-show-manual-login=true
greeter-hide-users=true
```
域账户用“你的域用户”或“你的域用户@你的域” 格式来登录 Ubuntu 桌面。
26、 为使用 Samba AD 账户的简称格式,编辑 `/etc/sssd/sssd.conf` 文件,在 `[sssd]` 块加入如下几行命令。
```
full_name_format = %1$s
```
并重启 SSSD 守护进程应用改变。
```
$ sudo systemctl restart sssd
```
你会注意到 bash 提示符会变成了没有附加域名部分的 AD 用户名。
27、 万一你因为 `sssd.conf` 里的 `enumerate=true` 参数设定而不能登录,你得用下面的命令清空 sssd 缓存数据:
```
$ rm /var/lib/sss/db/cache_tecmint.lan.ldb
```
这就是全部了!虽然这个教程主要集中于集成 Samba4 活动目录,同样的步骤也能被用于把使用 Realm 和 SSSD 服务的 Ubuntu 整合到微软 Windows 服务器活动目录。
---
作者简介:
Matei Cezar - 我是一名网瘾少年,开源和基于 linux 系统软件的粉丝,有4年经验在 linux 发行版桌面、服务器和 bash 脚本。
---
via: <https://www.tecmint.com/integrate-ubuntu-to-samba4-ad-dc-with-sssd-and-realm/>
作者:[Matei Cezar](https://www.tecmint.com/author/cezarmatei/) 译者:[XYenChi](https://github.com/XYenChi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | This tutorial will guide you on how to join an **Ubuntu Desktop** machine into a **Samba4 Active Directory** domain with **SSSD** and **Realmd** services in order to authenticate users against an Active Directory.
#### Requirements:
### Step 1: Initial Configurations
**1.** Before starting to join Ubuntu into an Active Directory make sure the hostname is properly configured. Use **hostnamectl** command to set the machine name or manually edit **/etc/hostname** file.
$ sudo hostnamectl set-hostname your_machine_short_hostname $ cat /etc/hostname $ hostnamectl
**2.** On the next step, edit machine network interface settings and add the proper IP configurations and the correct DNS IP server addresses to point to the Samba AD domain controller as illustrated in the below screenshot.
If you have configured a DHCP server at your premises to automatically assign IP settings for your LAN machines with the proper AD DNS IP addresses then you can skip this step and move forward.

On the above screenshot, **192.168.1.254** and **192.168.1.253** represents the IP addresses of the Samba4 Domain Controllers.
**3.** Restart the network services to apply the changes using the GUI or from command line and issue a series of **ping command** against your domain name in order to test if DNS resolution is working as expected. Also, use **host** command to test DNS resolution.
$ sudo systemctl restart networking.service $ host your_domain.tld $ ping -c2 your_domain_name $ ping -c2 adc1 $ ping -c2 adc2
**4.** Finally, make sure that machine time is in sync with Samba4 AD. Install **ntpdate** package and sync time with the AD by issuing the below commands.
$ sudo apt-get install ntpdate $ sudo ntpdate your_domain_name
### Step 2: Install Required Packages
**5.** On this step install the necessary software and required dependencies in order to join Ubuntu into Samba4 AD DC: **Realmd** and **SSSD** services.
$ sudo apt install adcli realmd krb5-user samba-common-bin samba-libs samba-dsdb-modules sssd sssd-tools libnss-sss libpam-sss packagekit policykit-1
**6.** Enter the name of the default realm with uppercases and press **Enter** key to continue the installation.

**7.** Next, create the **SSSD** configuration file with the following content.
$ sudo nano /etc/sssd/sssd.conf
Add following lines to **sssd.conf** file.
[nss] filter_groups = root filter_users = root reconnection_retries = 3 [pam] reconnection_retries = 3 [sssd] domains = tecmint.lan config_file_version = 2 services = nss, pam default_domain_suffix = TECMINT.LAN [domain/tecmint.lan] ad_domain = tecmint.lan krb5_realm = TECMINT.LAN realmd_tags = manages-system joined-with-samba cache_credentials = True id_provider = ad krb5_store_password_if_offline = True default_shell = /bin/bash ldap_id_mapping = True use_fully_qualified_names = True fallback_homedir = /home/%d/%u access_provider = ad auth_provider = ad chpass_provider = ad access_provider = ad ldap_schema = ad dyndns_update = true dyndns_refresh_interval = 43200 dyndns_update_ptr = true dyndns_ttl = 3600
Make sure you replace the domain name in following parameters accordingly:
domains = tecmint.lan default_domain_suffix = TECMINT.LAN [domain/tecmint.lan] ad_domain = tecmint.lan krb5_realm = TECMINT.LAN
**8.** Next, add the proper permissions for SSSD file by issuing the below command:
$ sudo chmod 700 /etc/sssd/sssd.conf
**9.** Now, open and edit **Realmd** configuration file and add the following lines.
$ sudo nano /etc/realmd.conf
**Realmd.conf** file excerpt:
[active-directory] os-name = Linux Ubuntu os-version = 17.04 [service] automatic-install = yes [users] default-home = /home/%d/%u default-shell = /bin/bash [tecmint.lan] user-principal = yes fully-qualified-names = no
**10.** The last file you need to modify belongs to Samba daemon. Open **/etc/samba/smb.conf** file for editing and add the following block of code at the beginning of the file, after the **[global]** section as illustrated on the image below.
workgroup = TECMINT client signing = yes client use spnego = yes kerberos method = secrets and keytab realm = TECMINT.LAN security = ads

Make sure you replace the **domain name** value, especially the **realm value** to match your domain name and run **testparm** command in order to check if the configuration file contains no errors.
$ sudo testparm

**11.** After you’ve made all the required changes, test Kerberos authentication using an AD administrative account and list the ticket by issuing the below commands.
$ sudo kinit[[email protected]]$ sudo klist

### Step 3: Join Ubuntu to Samba4 Realm
**12.** To join Ubuntu machine to Samba4 Active Directory issue following series of commands as illustrated below. Use the name of an AD DC account with administrator privileges in order for the binding to realm to work as expected and replace the domain name value accordingly.
$ sudo realm discover -v DOMAIN.TLD $ sudo realm list $ sudo realm join TECMINT.LAN -U ad_admin_user -v $ sudo net ads join -k




**13.** After the domain binding took place, run the below command to assure that all domain accounts are permitted to authenticate on the machine.
$ sudo realm permit --all
Subsequently, you can allow or deny access for a domain user account or a group using realm command as presented on the below examples.
$ sudo realm deny -a $ realm permit --groups ‘domain.tld\Linux Admins’ $ realm permit[[email protected]]$ realm permit DOMAIN\\User2
**14.** From a Windows machine with [RSAT tools installed](https://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/) you can open **AD UC** and navigate to **Computers** container and check if an object account with the name of your machine has been created.

### Step 4: Configure AD Accounts Authentication
**15.** In order to authenticate on Ubuntu machine with domain accounts you need to run **pam-auth-update** command with root privileges and enable all PAM profiles including the option to automatically create home directories for each domain account at the first login.
Check all entries by pressing **[space]** key and hit **ok** to apply configuration.
$ sudo pam-auth-update

**16.** On systems where the home directories are not created automatically, manually edit the **/etc/pam.d/common-account** file and add the following line in order to automatically create home directories for authenticated domain users.
session required pam_mkhomedir.so skel=/etc/skel/ umask=0022
**17.** If Active Directory users can’t change their password from command line in Linux, open **/etc/pam.d/common-password** file and remove the **use_authtok** statement from password line to finally look as on the below excerpt.
password [success=1 default=ignore] pam_winbind.so try_first_pass
**18.** Finally, restart and enable Realmd and SSSD service to apply changes by issuing the below commands:
$ sudo systemctl restart realmd sssd
$ sudo systemctl enable realmd sssd
**19.** In order to test if the Ubuntu machine was successfully integrated into the realm, install the winbind package and run the **wbinfo** command to list domain accounts and groups, as illustrated below.
$ sudo apt-get install winbind
$ wbinfo -u
$ wbinfo -g

**20.** Also, check Winbind nsswitch module by issuing the **getent** command against a specific domain user or group.
$ sudo getent passwd your_domain_user
$ sudo getent group 'domain admins'

**21.** You can also use Linux **id** command to get info about an AD account as illustrated on the below command.
$ id tecmint_user

**22.** To authenticate on Ubuntu host with a Samba4 AD account use the domain username parameter after **su** – command. Run **id** command to get extra info about the AD account.
$ su - your_ad_user

Use the **pwd** command to see the domain user's current working directory, and the **passwd** command if you want to change the password.
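For example, a quick check from the domain user's shell might look like this (the exact home directory path depends on your fallback_homedir setting and user name, so treat the output below as a hypothetical illustration):

$ pwd
/home/tecmint.lan/your_domain_user
$ passwd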
**23.** To use a domain account with root privileges on your Ubuntu machine, you need to add the AD username to the sudo system group by issuing the below command:
$ sudo usermod -aG sudo your_domain_user@tecmint.lan
Login to Ubuntu with the domain account and update your system by running **apt update** command to check root privileges.
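For example (assuming the fully qualified login name is still required on your setup — both the user and domain below are placeholder values):

$ su - your_domain_user@tecmint.lan
$ sudo apt update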
**24.** To add root privileges for a domain group, open and edit the **/etc/sudoers** file using the **visudo** command and add the following line as illustrated.
%domain\ admins@tecmint.lan ALL=(ALL:ALL) ALL
**25.** To use domain account authentication for Ubuntu Desktop, modify the **LightDM** display manager by editing the **/usr/share/lightdm/lightdm.conf.d/50-ubuntu.conf** file, append the following two lines, and restart the lightdm service or reboot the machine to apply the changes.
greeter-show-manual-login=true
greeter-hide-users=true
Log in to Ubuntu Desktop with a domain account using either **your_domain_username** or **your_domain_username@your_domain.tld** syntax.
**26.** To use the short name format for Samba AD accounts, edit the **/etc/sssd/sssd.conf** file and add the following line to the **[sssd]** block as illustrated below.
full_name_format = %1$s
and restart SSSD daemon to apply changes.
$ sudo systemctl restart sssd
You will notice that the bash prompt will change to the short name of the AD user without appending the domain name counterpart.
**27.** In case you cannot log in due to the **enumerate=true** argument set in **sssd.conf**, you must clear the SSSD cached database by issuing the below command:
$ rm /var/lib/sss/db/cache_tecmint.lan.ldb
That’s all! Although this guide is mainly focused on integration with a Samba4 Active Directory, the same steps can be applied in order to integrate Ubuntu with Realmd and SSSD services into a Microsoft Windows Server Active Directory.
When I try to add Ubuntu 18.04 into the AD domain it says:
realm: couldn't join realm: Insufficient permissions to join the domain. Command issued:
`realm join --computer-ou="OU=Servers_Linux" --user=`
Hi, all in Step 3.
I got this error:
realm: couldn't change permitted logins: the samba provider cannot restrict permitted logins. Anyone can help me?
What about GPO policy? How will it apply to Ubuntu domain users?
ReplyWhen I run command
`sudo testparm`
Do you have any idea?
Hello, thank you very much for such a good tutorial. Everything went well and I could join my computer to the company AD, but in step #20 I cannot get any response with the
getent command, and therefore I cannot log in with the users of the domain. Any help here, am I stuck? I get the same error,
getent passwd does not get domain user info. Thank you so much for this tutorial.
I tried so many different forum to troubleshoot my problem to join my ubuntu to my AD and I always failed until I found this one.
For sure you found a new reader of your forum and I will share it with other.
Thanks again!
Thank you very much. Worked without problems.
Hi,
Everything is set, I can also see all users when I type:
However, when I try to login, it only authenticates via
pam_unix; pam_sssauthentication is not at all checked.How can I fix that? SSSD is enabled and running.
ReplyRun sudo pam-auth-update ans make sure you check sss authentication method in pam modules.
Yes, it's already enabled.
ReplyGood Evening,
When I restart the
sssdservice with this commandsystemctl restart sssd, i have this error.Do you have an idea of the problem please?
ReplyThanks
Excellent post!! Thank you very much!
Just a typo:
“sudo realm permit -all” should be “sudo realm permit --all” (that’s a double dash instead of just one)
Also, in
sssd.conf, shouldn’t “dyndsn_refresh_interval = 43200” be “dyndns_refresh…”?
Thanks for pointing out those typos, corrected in the article..:)
Is it possible to change the user desktop background from the Domain Controller?
What if we want to set a particular image for all Linux users?
All the settings for new users is in the skel dir.
Check this out
https://askubuntu.com/questions/62927/how-do-i-set-the-default-icon-set-and-wallpaper-for-new-users
|
8,855 | 如何使用拉取请求(PR)来改善你的代码审查 | https://www.oreilly.com/ideas/how-to-use-pull-requests-to-improve-your-code-reviews | 2017-09-10T14:57:02 | [
"PR",
"拉取请求",
"GitHub"
] | https://linux.cn/article-8855-1.html |
>
> 通过使用 GitHub 的<ruby> 拉取请求 <rt> Pull Request </rt></ruby>正确地进行代码审核,把时间更多的花在构建上,而在修复上少用点时间。
>
>
>

如果你不是每天编写代码,你可能不知道软件开发人员日常面临的一些问题。
* 代码中的安全漏洞
* 导致应用程序崩溃的代码
* 被称作 “技术债务” 和之后需要重写的代码
* 在某处你所不知道地方的代码已经被重写
<ruby> 代码审查 <rt> Code review </rt></ruby>可以让其他的人或工具来检查代码,帮助我们改善所编写的软件。这种审查可以借助自动化代码分析或者测试覆盖工具来进行——这是软件开发过程中两个重要的部分,能够节省数小时的手工工作——也可以通过<ruby> 同行评审 <rt> peer review </rt></ruby>来进行。同行评审是开发人员审查彼此工作的一个过程。在软件开发的过程中,速度和紧迫性是两个经常提及的问题。如果你没有尽快发布,你的竞争对手可能会率先发布新功能。如果你不能够经常发布新的版本,你的用户可能会怀疑你是否仍然关心改进你的应用程序。
### 权衡时间:代码审查与缺陷修复
如果有人能够以最小争议的方式汇集多种类型的代码审查,那么随着时间的推移,该软件的质量将会得到改善。如果认为引入新的工具或流程最先导致的不是延迟,那未免太天真了。但是代价更高昂的是:修复生产环境中的错误花费的时间,或者在放到生产环境之前改进软件所花费的时间。即使新工具延迟了新功能的发布和得到客户欣赏的时间,但随着软件开发人员提高自己的技能,该延迟会缩短,软件开发周期将会回升到以前的水平,而同时缺陷将会减少。
通过代码审查实现提升代码质量目标的关键之一就是使用一个足够灵活的平台,允许软件开发人员快速编写代码,置入他们熟悉的工具,并对彼此进行同行评审。 GitHub 就是这样的平台的一个很好的例子。然而,只是把你的代码放在 [GitHub](https://github.com/about) 上并不会魔术般地使代码审查发生;你必须使用<ruby> 拉取请求 <rt> Pull Request </rt></ruby>来开始这个美妙的旅程。
### 拉取请求:关于代码的现场讨论
<ruby> <a href="https://help.github.com/articles/about-pull-requests/"> 拉取请求 </a> <rt> Pull Request </rt></ruby>是 Github 上的一个工具,允许软件开发人员讨论并提出对项目的主要代码库的更改,这些更改稍后可以部署给所有用户看到。这个功能创建于 2008 年 2 月,其目的是在接受(合并)之前,对某人的建议进行更改,然后在部署到生产环境中,供最终用户看到这种变化。
拉取请求开始是以一种松散的方式让你为某人的项目提供更改,但是它们已经演变成:
* 关于你想要合并的代码的现场讨论
* 提升了所更改内容的可视功能
* 整合了你最喜爱的工具
* 作为受保护的分支工作流程的一部分可能需要显式的拉取请求评审
### 对于代码:URL 是永久的
看看上述的前两个点,拉取请求促成了一个正在进行的代码讨论,使代码变更可以更醒目,并且使您很容易在审查的过程中找到所需的代码。无论是对于新人还是有经验的开发人员,能够回顾以前的讨论,了解一个功能为什么以这种方式开发出来,或者与另一个相关功能的讨论相联系起来是无价的。当跨多个项目协调,并使每个人尽可能接近代码时,前后讨论的内容也非常重要。如果这些功能仍在开发中,重要的是能够看到上次审查以来更改了哪些内容。毕竟,[审查小的更改要比大的容易得多](https://blog.skyliner.io/ship-small-diffs-741308bec0d1),但不可能全都是小功能。因此,重要的是能够找到你上次审查,并只看到从那时以来的变化。
### 集成工具:软件开发人员的偏执
再看下上述第三点,GitHub 上的拉取请求有很多功能,但开发人员总是偏好第三方工具。代码质量是个完整的代码审查领域,它涉及到其它组件的代码评审,而这些评审不一定是由人完成的。检测“低效”或缓慢的代码、具有潜在安全漏洞或不符合公司标准的代码是留给自动化工具的任务。类似 [SonarQube](https://github.com/integrations/sonarqube) 和 [Code Climatecan](https://github.com/integrations/code-climate) 这样工具可以分析你的代码,而像 [Codecov](https://github.com/integrations/codecov) 和 [Coveralls](https://github.com/integrations/coveralls) 的这样工具可以告诉你刚刚写的新代码还没有得到很好的测试。这些工具最令人称奇的是,它们可以集成到 GitHub 中,并把它们的发现汇报到拉取请求当中!这意味着该过程中不仅是人们在审查代码,而且工具也在会在那里报告情况。这样每个人都可以完全了解一个功能是如何开发的。
最后,根据您的团队的偏好,您可以利用[受保护的分支工作流](https://help.github.com/articles/about-protected-branches/)的<ruby> 必需状态 <rt> required status </rt></ruby>功能来要求进行工具审查和同行评审。
虽然您可能只是刚刚开始您的软件开发之旅,或者是一位希望知道项目正在做什么的业务利益相关者,抑或是一位想要确保项目的及时性和质量的项目经理,你都可以通过设置批准流程来参与到拉取请求中,并考虑集成更多的工具以确保质量,这在任何级别的软件开发中都很重要。
无论是为您的个人网站,贵公司的在线商店,还是想用最新的组合以获得最大的收获,编写好的软件都需要进行良好的代码审查。良好的代码审查涉及到正确的工具和平台。要了解有关 GitHub 和软件开发过程的更多信息,请参阅 O'Reilly 的 《[GitHub 简介](https://www.safaribooksonline.com/library/view/introducing-github/9781491949801/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=how-to-use-pull-requests-to-improve-your-code-reviews-lower)》 一书, 您可以在其中了解创建项目、启动拉取请求以及概要了解团队的软件开发流程。
---
作者简介:
**Brent Beer**
Brent Beer 通过大学的课程、对开源项目的贡献,以及担任专业网站开发人员使用 Git 和 GitHub 已经超过五年了。在担任 GitHub 上的培训师时,他也成为 O’Reilly 的 《GitHub 简介》的出版作者。他在阿姆斯特丹担任 GitHub 的解决方案工程师,帮助 Git 和 GitHub 向世界各地的开发人员提供服务。
**Peter Bell**
Peter Bell 是 Ronin 实验室的创始人以及 CTO。培训是存在问题的,我们通过技术提升培训来改进它!他是一位有经验的企业家、技术专家、敏捷教练和 CTO,专门从事 EdTech 项目。他为 O'Reilly 撰写了 《GitHub 简介》,为代码学校创建了“精通 GitHub ”课程,为 Pearson 创建了“ Git 和 GitHub 现场课”课程。他经常在国内和国际会议上发表关于 ruby、nodejs、NoSQL(尤其是 MongoDB 和 neo4j)、云计算、软件工艺、java、groovy、JavaScript 等的演讲。
---
via: <https://www.oreilly.com/ideas/how-to-use-pull-requests-to-improve-your-code-reviews>
作者:[Brent Beer](https://www.oreilly.com/people/acf937de-cdf4-4b0e-85bd-b559404c580e), [Peter Bell](https://www.oreilly.com/people/2256f119-7ea0-440e-99e8-65281919e952) 译者:[MonkeyDEcho](https://github.com/MonkeyDEcho) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,857 | 漫谈传统的 Linux 初始化系统的运行级别 | https://www.networkworld.com/article/3222070/linux/maneuvering-around-run-levels-on-linux.html | 2017-09-11T08:26:00 | [
"runlevel"
] | https://linux.cn/article-8857-1.html |
>
> 了解运行级别是如何配置的,如何改变系统运行级别以及修改对应状态下运行的服务。
>
>
>

在 Linux 系统中,<ruby> 运行级别 <rt> run level </rt></ruby>是指运维的级别,用于描述一种表明什么服务是可用的系统运行状态。
运行级别 1 是严格限制的,仅仅用于系统维护;该级别下,网络连接将不可操作,但是管理员可以通过控制台连接登录系统。
其他运行级别的系统允许任何人登录和使用,但是不同级别中可使用的服务不同。本文将探索如何配置运行级别,如何交互式改变系统运行级别以及修改该状态下可用的服务。
Linux 系统的默认运行状态是一个在系统开机时使用的运行级别(除非有其他的指示),它通常是在 `/etc/inittab` 文件中进行配置的,该文件内容通常如下:
```
id:3:initdefault:
```
包括 Debian 系统在内的一些系统,默认运行级别为 2,而不是上述文件中的 3,甚至都没有 `/etc/inittab` 文件。
运行级别在默认情况下是如何被配置,其配置依赖于你所运行的 Linux 操作系统的具体发行版本。 例如,在某些系统中, 运行级别 2 是多用户模式,运行级别 3 是多用户模式并支持 NFS (网络文件系统)。 在另外一些系统,运行级别 2 - 5 基本相同,运行级别 1 是单用户模式。例如,Debian 系统的所用运行级别如下:
```
0 = 停机
1 = 单用户(维护模式)
2 = 多用户模式
3-5 = 同 2 一样
6 = 重启
```
在 Linux 系统上,运行级别 3 用于共享文件系统给其它系统,可以方便地只通过改变系统的运行级别来启动和停止文件系统共享。系统从运行级别 2 改变到 3 系统将允许文件系统共享,反之从运行级别 3 改变到 2 则系统不支持文件系统共享。
在某个运行级别中,系统运行哪些进程依赖于目录 `/etc/rc?.d` 目录的内容,其中 `?` 可以是 2、 3、 4 或 5 (对应于相应的运行级别)。
在以下示例中(Ubuntu 系统),由于这些目录的配置是相同的,我们将看见上述 4 个级别对应的目录中的内容是一致的。
```
/etc/rc2.d$ ls
README S20smartmontools S50saned S99grub-common
S20kerneloops S20speech-dispatcher S70dns-clean S99ondemand
S20rsync S20sysstat S70pppd-dns S99rc.local
/etc/rc2.d$ cd ../rc3.d
/etc/rc3.d$ ls
README S20smartmontools S50saned S99grub-common
S20kerneloops S20speech-dispatcher S70dns-clean S99ondemand
S20rsync S20sysstat S70pppd-dns S99rc.local
/etc/rc3.d$ cd ../rc4.d
/etc/rc4.d$ ls
README S20smartmontools S50saned S99grub-common
S20kerneloops S20speech-dispatcher S70dns-clean S99ondemand
S20rsync S20sysstat S70pppd-dns S99rc.local
/etc/rc4.d$ cd ../rc5.d
/etc/rc5.d$ ls
README S20smartmontools S50saned S99grub-common
S20kerneloops S20speech-dispatcher S70dns-clean S99ondemand
S20rsync S20sysstat S70pppd-dns S99rc.local
```
这些都是什么文件?它们都是指向 `/etc/init.d` 目录下用于启动服务的脚本符号连接。 这些文件的文件名是至关重要的, 因为它们决定了这些脚本文件的执行顺序,例如, S20 脚本是在 S50 脚本前面运行的。
```
$ ls -l
total 4
-rw-r--r-- 1 root root 677 Feb 16 2016 README
lrwxrwxrwx 1 root root 20 Aug 30 14:40 S20kerneloops -> ../init.d/kerneloops
lrwxrwxrwx 1 root root 15 Aug 30 14:40 S20rsync -> ../init.d/rsync
lrwxrwxrwx 1 root root 23 Aug 30 16:10 S20smartmontools -> ../init.d/smartmontools
lrwxrwxrwx 1 root root 27 Aug 30 14:40 S20speech-dispatcher -> ../init.d/speech-dispatcher
lrwxrwxrwx 1 root root 17 Aug 31 14:12 S20sysstat -> ../init.d/sysstat
lrwxrwxrwx 1 root root 15 Aug 30 14:40 S50saned -> ../init.d/saned
lrwxrwxrwx 1 root root 19 Aug 30 14:40 S70dns-clean -> ../init.d/dns-clean
lrwxrwxrwx 1 root root 18 Aug 30 14:40 S70pppd-dns -> ../init.d/pppd-dns
lrwxrwxrwx 1 root root 21 Aug 30 14:40 S99grub-common -> ../init.d/grub-common
lrwxrwxrwx 1 root root 18 Aug 30 14:40 S99ondemand -> ../init.d/ondemand
lrwxrwxrwx 1 root root 18 Aug 30 14:40 S99rc.local -> ../init.d/rc.local
```
如你所想,目录 `/etc/rc1.d` 因运行级别 1 的特殊而不同。它包含的符号链接指向非常不同的一套脚本。 同样也要注意到其中一些脚本以 `K` 开头命名,而另一些与其它运行级别脚本一样以 `S` 开头命名。这是因为当系统进入单用户模式时, 一些服务需要**停止**。 然而,当这些以 K 开头的符号链接与其它运行级别中以 S 开头的符号链接指向同一个文件时,K(kill)表示这个脚本将以指示其停止的参数执行,而不是以启动的参数执行。
```
/etc/rc1.d$ ls -l
total 4
lrwxrwxrwx 1 root root 20 Aug 30 14:40 K20kerneloops -> ../init.d/kerneloops
lrwxrwxrwx 1 root root 15 Aug 30 14:40 K20rsync -> ../init.d/rsync
lrwxrwxrwx 1 root root 15 Aug 30 14:40 K20saned -> ../init.d/saned
lrwxrwxrwx 1 root root 23 Aug 30 16:10 K20smartmontools -> ../init.d/smartmontools
lrwxrwxrwx 1 root root 27 Aug 30 14:40 K20speech-dispatcher -> ../init.d/speech-dispatcher
-rw-r--r-- 1 root root 369 Mar 12 2014 README
lrwxrwxrwx 1 root root 19 Aug 30 14:40 S30killprocs -> ../init.d/killprocs
lrwxrwxrwx 1 root root 19 Aug 30 14:40 S70dns-clean -> ../init.d/dns-clean
lrwxrwxrwx 1 root root 18 Aug 30 14:40 S70pppd-dns -> ../init.d/pppd-dns
lrwxrwxrwx 1 root root 16 Aug 30 14:40 S90single -> ../init.d/single
```
你可以改变系统的默认运行级别,尽管这很少被用到。例如,通过修改前文中提到的 `/etc/inittab` 文件,你能够配置 Debian 系统的默认运行级别为 3 (而不是 2),以下是该文件示例:
```
id:3:initdefault:
```
一旦你修改完成并重启系统, `runlevel` 命令将显示如下:
```
$ runlevel
N 3
```
另外一种可选方式,使用 `init 3` 命令,你也能改变系统运行级别(且无需重启立即生效), `runlevel` 命令的输出为:
```
$ runlevel
2 3
```
当然,除非你修改了系统默认级别的 `/etc/rc?.d` 目录下的符号链接,使得系统默认运行在一个修改的运行级别之下,否则很少需要通过创建或修改 `/etc/inittab` 文件改变系统的运行级别。
### 在 Linux 系统中如何使用运行级别?
为了扼要重述在系统中如何使用运行级别,下面有几个关于运行级别的快速问答问题:
**如何查询系统当前的运行级别?**
使用 `runlevel` 命令。
**如何查看特定运行级别所关联的服务进程?**
查看与该运行级别关联的运行级别开始目录(例如, `/etc/rc2.d` 对应于运行级别 2)。
**如何查看系统的默认运行级别?**
首先,查看 `/etc/inittab` 文件是否存在。如果不存在,就执行 `runlevel` 命令查询,你一般就已经处在该运行级别。
**如何改变系统运行级别?**
用 `init` 命令(例如 `init 3`)临时改变运行级别,通过修改或创建 `/etc/inittab` 文件永久改变其运行级别。
**能改变特定运行级别下运行的服务么?**
当然,通过改变对应的 `/etc/rc?.d` 目录下的符号连接即可。
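例如,要让某个服务不在运行级别 2 下启动,可以把对应的以 S 开头的符号链接重命名为以 K 开头;在 Debian/Ubuntu 系统上也可以用 `update-rc.d` 完成同样的事情。下面只是一个示意(脚本名称以你系统上 `/etc/rc2.d` 目录中的实际内容为准):

```
# 手动方式:把“启动”链接改名为“停止”链接
$ cd /etc/rc2.d
$ sudo mv S20rsync K20rsync

# 或者使用 update-rc.d 管理这些符号链接
$ sudo update-rc.d rsync disable 2
```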
**还有一些其他的什么需要考虑?**
当改变系统运行级别时,你应该特别小心,确保不影响到系统上正在运行的服务或者正在使用的用户。
(题图:[Vincent Desjardins](https://www.flickr.com/photos/endymion120/4824696883/in/photolist-8mkQi2-8vtyRx-8vvYZS-i31xQj-4TXTS2-S7VRNC-azimYK-dW8cYu-Sb5b7S-S7VRES-fpSVvo-61Zpn8-WxFwGi-UKKq3x-q6NSnC-8vsBLr-S3CPxn-qJUrLr-nDnpNu-8d7a6Q-T7mGpN-RE26wj-SeEXRa-5mZ7LG-Vp7t83-fEG5HS-Vp7sU7-6JpNBi-RCuR8P-qLzCL5-6WsfZx-5nU1tF-6ieGFi-3P5xwh-8mnxpo-hBXwSj-i3iCur-9dmrST-6bXk8d-8vtDb4-i2KLwU-5jhfU6-8vwbrN-ShAtNm-XgzXmb-8rad18-VfXm4L-8tQTrh-Vp7tcb-UceVDB) [(CC BY 2.0)](https://creativecommons.org/licenses/by/2.0/legalcode))
---
via: <https://www.networkworld.com/article/3222070/linux/maneuvering-around-run-levels-on-linux.html>
作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 译者:[penghuster](https://github.com/penghuster) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
8,858 | Kubernetes 是什么? | https://www.redhat.com/en/containers/what-is-kubernetes | 2017-09-11T09:33:00 | [
"Kubernetes"
] | https://linux.cn/article-8858-1.html | 
Kubernetes,简称 k8s(k,8 个字符,s——明白了?)或者 “kube”,是一个开源的 [Linux 容器](https://www.redhat.com/en/containers/whats-a-linux-container)自动化运维平台,它消除了容器化应用程序在部署、伸缩时涉及到的许多手动操作。换句话说,你可以将多台主机组合成集群来运行 Linux 容器,而 Kubernetes 可以帮助你简单高效地管理那些集群。构成这些集群的主机还可以跨越[公有云](https://www.redhat.com/en/topics/cloud-computing/what-is-public-cloud)、[私有云](https://www.redhat.com/en/topics/cloud-computing/what-is-private-cloud)以及混合云。
Kubernetes 最开始是由 Google 的工程师设计开发的。Google 作为 [Linux 容器技术的早期贡献者](https://en.wikipedia.org/wiki/Cgroups)之一,曾公开演讲介绍 [Google 如何将一切都运行于容器之中](https://speakerdeck.com/jbeda/containers-at-scale)(这是 Google 的云服务背后的技术)。Google 一周内的容器部署超过 20 亿次,全部的工作都由内部平台 [Borg](http://blog.kubernetes.io/2015/04/borg-predecessor-to-kubernetes.html) 支撑。Borg 是 Kubernetes 的前身,几年来开发 Borg 的经验教训也成了影响 Kubernetes 中许多技术的主要因素。
*趣闻: Kubernetes logo 中的七个辐条来源于项目原先的名称, “[Seven of Nine 项目](https://cloudplatform.googleblog.com/2016/07/from-Google-to-the-world-the-Kubernetes-origin-story.html)”(LCTT 译注:Borg 是「星际迷航」中的一个宇宙种族,Seven of Nine 是该种族的一名女性角色)。*

红帽作为最早与 Google 合作开发 Kubernetes 的公司之一(甚至早于 Kubernetes 的发行),已经是 Kubernetes 上游项目的[第二大贡献者](http://stackalytics.com/?project_type=kubernetes-group&metric=commits)。Google 在 2015 年把 Kubernetes 项目捐献给了新成立的 <ruby> <a href="https://www.cncf.io/"> 云计算基金会 </a> <rt> Cloud Native Computing Foundation </rt></ruby>(CNCF)。
### 为什么你需要 Kubernetes ?
真实的生产环境应用会包含多个容器,而这些容器还很可能会跨越多个服务器主机部署。Kubernetes 提供了为那些工作负载大规模部署容器的编排与管理能力。Kubernetes 编排让你能够构建多容器的应用服务,在集群上调度或伸缩这些容器,以及管理它们随时间变化的健康状态。
Kubernetes 也需要与网络、存储、安全、监控等其它服务集成才能提供综合性的容器基础设施。

当然,这取决于你如何在你的环境中使用容器。一个初步的 Linux 容器应用程序把容器视作高效、快速的虚拟机。一旦把它部署到生产环境或者扩展为多个应用,很显然你需要许多组托管在相同位置的容器合作提供某个单一的服务。随着这些容器的累积,你的运行环境中容器的数量会急剧增加,复杂度也随之增长。
Kubernetes 通过将容器分类组成 “pod” 来解决了容器增殖带来的许多常见问题。pod 为容器分组提供了一层抽象,以此协助你调度工作负载以及为这些容器提供类似网络与存储这类必要的服务。Kubernetes 的其它组件帮助你对 pod 进行负载均衡,以保证有合适数量的容器支撑你的工作负载。
正确实施的 Kubernetes,结合类似 [Atomic Registry](http://www.projectatomic.io/registry/)、[Open vSwitch](http://openvswitch.org/)、[heapster](https://github.com/kubernetes/heapster)、[OAuth](https://oauth.net/) 和 [SELinux](https://selinuxproject.org/page/Main_Page) 的开源项目,让你可以管理你自己的整个容器基础设施。
### Kubernetes 能做些什么?
在生产环境中使用 Kubernetes 的主要优势在于它提供了在物理机或虚拟机集群上调度和运行容器的平台。更宽泛地说,它能帮你在生产环境中实现可以依赖的基于容器的基础设施。而且,由于 Kubernetes 本质上就是运维任务的自动化平台,你可以执行一些其它应用程序平台或管理系统支持的操作,只不过操作对象变成了容器。
有了 Kubernetes,你可以:
* 跨主机编排容器。
* 更充分地利用硬件资源来最大化地满足企业应用的需求。
* 控制与自动化应用的部署与升级。
* 为有状态的应用程序挂载和添加存储器。
* 线上扩展或裁剪容器化应用程序与它们的资源。
* 声明式的容器管理,保证所部署的应用按照我们部署的方式运作。
* 通过自动布局、自动重启、自动复制、自动伸缩实现应用的状态检查与自我修复。
然而 Kubernetes 依赖其它项目来提供完整的编排服务。结合其它开源项目作为其组件,你才能充分感受到 Kubernetes 的能力。这些必要组件包括:
* 仓库:Atomic Registry、Docker Registry 等。
* 网络:OpenvSwitch 和智能边缘路由等。
* 监控:heapster、kibana、hawkular 和 elastic。
* 安全:LDAP、SELinux、 RBAC 与 支持多租户的 OAUTH。
* 自动化:通过 Ansible 的 playbook 进行集群的安装和生命周期管理。
* 服务:大量事先创建好的常用应用模板。
[红帽 OpenShift 为容器部署预先集成了上面这些组件。](https://www.redhat.com/en/technologies/cloud-computing/openshift)
### Kubernetes 入门
和其它技术一样,大量的专有名词有可能成为入门的障碍。下面解释一些通用的术语,希望帮助你理解 Kubernetes。
* **Master(主节点):** 控制 Kubernetes 节点的机器,也是创建作业任务的地方。
* **Node(节点):** 这些机器在 Kubernetes 主节点的控制下执行被分配的任务。
* **Pod:** 由一个或多个容器构成的集合,作为一个整体被部署到一个单一节点。同一个 pod 中的容器共享 IP 地址、进程间通讯(IPC)、主机名以及其它资源。Pod 将底层容器的网络和存储抽象出来,使得集群内的容器迁移更为便捷。
* **Replication controller(复制控制器):** 控制一个 pod 在集群上运行的实例数量。
* **Service(服务):** 将服务内容与具体的 pod 分离。Kubernetes 服务代理负责自动将服务请求分发到正确的 pod 处,不管 pod 移动到集群中的什么位置,甚至可以被替换掉。
* **Kubelet:** 这个守护进程运行在各个工作节点上,负责获取容器列表,保证被声明的容器已经启动并且正常运行。
* **kubectl:** 这是 Kubernetes 的命令行配置工具。
[上面这些知识就足够了吗?不,这仅仅是一小部分,更多内容请查看 Kubernetes 术语表。](https://kubernetes.io/docs/reference/)
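为了更直观地看到这些术语是如何配合的,下面给出一个极简的示意性 YAML(其中的名称、镜像和端口都是随意假设的演示值,并非任何真实环境的配置):它定义了一个带标签的 pod,以及一个通过选择器指向该标签的服务。

```
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
  - name: web
    image: nginx:1.13
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
```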
### 生产环境中使用 Kubernetes
Kubernetes 是开源的,所以没有正式的技术支持机构为你的商业业务提供支持。如果在生产环境使用 Kubernetes 时遇到问题,你恐怕不会太愉快,当然你的客户也不会太高兴。
这就是[红帽 OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) 要解决的问题。OpenShift 是为企业提供的 Kubernetes ——并且集成了更多的组件。OpenShift 包含了强化 Kubernetes 功能、使其更适用于企业场景的额外部件,包括仓库、网络、监控、安全、自动化和服务在内。OpenShift 使得开发者能够在具有伸缩性、控制和编排能力的云端开发、托管和部署容器化的应用,快速便捷地把想法转变为业务。
而且,OpenShift 还是由头号开源领导公司红帽支持和开发的。
### Kubernetes 如何适用于你的基础设施

Kubernetes 运行在操作系统(例如 [Red Hat Enterprise Linux Atomic Host](https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux/options))之上,操作着该节点上运行的容器。Kubernetes 主节点(master)从管理员(或者 DevOps 团队)处接受命令,再把指令转交给附属的节点。这种交接会与大量服务协作,自动决定最适合该任务的节点,然后在该节点上分配资源,并指派该节点上的 pod 来完成任务请求。
所以从基础设施的角度,管理容器的方式发生了一点小小的变化。对容器的控制在更高的层次进行,提供了更佳的控制方式,而无需用户微观管理每个单独的容器或者节点。必要的工作则主要集中在如何指派 Kubernetes 主节点、定义节点和 pod 等问题上。
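在日常操作中,管理员通常就是通过 `kubectl` 命令行工具向主节点下达这类指令的。下面是几个示意性的命令(其中的资源名称只是假设的例子):

```
$ kubectl get nodes                              # 查看集群中的节点
$ kubectl apply -f my-app.yaml                   # 按声明式配置创建或更新资源
$ kubectl get pods -o wide                       # 查看 pod 被调度到了哪个节点
$ kubectl scale deployment my-app --replicas=3   # 在线扩展应用
```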
#### docker 在 Kubernetes 中的角色
[Docker](https://www.redhat.com/en/containers/what-is-docker) 技术依然执行它原本的任务。当 kubernetes 把 pod 调度到节点上,节点上的 kubelet 会指示 docker 启动特定的容器。接着,kubelet 会通过 docker 持续地收集容器的信息,然后提交到主节点上。Docker 如往常一样拉取容器镜像、启动或停止容器。不同点仅仅在于这是由自动化系统控制而非管理员在每个节点上手动操作的。
---
via: <https://www.redhat.com/en/containers/what-is-kubernetes>
作者:[www.redhat.com](https://www.redhat.com/) 译者:[haoqixu](https://github.com/haoqixu) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,861 | 使用 Headless Chrome 进行自动化测试 | https://developers.google.com/web/updates/2017/06/headless-karma-mocha-chai | 2017-09-12T15:08:00 | [
"Chrome",
"Headless"
] | https://linux.cn/article-8861-1.html | 
如果你想使用 Headless Chrome 进行自动化测试,那么就往下!这篇文章将让你完全使用 Karma 作为<ruby> 运行器 <rt> runner </rt></ruby>,并且使用 Mocha+Chai 来编撰测试。
**这些东西是什么?**
Karma、Mocha、Chai、Headless Chrome,哦,我的天哪!
[Karma](https://karma-runner.github.io/) 是一个测试工具,可以和所有最流行的测试框架([Jasmine](https://jasmine.github.io/)、[Mocha](https://mochajs.org/)、 [QUnit](https://qunitjs.com/))配合使用。
[Chai](http://chaijs.com/) 是一个断言库,可以与 Node 和浏览器一起使用。这里我们需要后者。
[Headless Chrome](https://developers.google.com/web/updates/2017/04/headless-chrome) 是一种在没有浏览器用户界面的无需显示环境中运行 Chrome 浏览器的方法。使用 Headless Chrome(而不是直接在 Node 中测试) 的一个好处是 JavaScript 测试将在与你的网站用户相同的环境中执行。Headless Chrome 为你提供了真正的浏览器环境,却没有运行完整版本的 Chrome 一样的内存开销。
### 设置
#### 安装
使用 `yarn` 安装 Karma、相关插件和测试用例:
```
yarn add --dev karma karma-chrome-launcher karma-mocha karma-chai
yarn add --dev mocha chai
```
或者使用 `npm`:
```
npm i --save-dev karma karma-chrome-launcher karma-mocha karma-chai
npm i --save-dev mocha chai
```
在这篇文章中我使用 [Mocha](https://mochajs.org/) 和 [Chai](http://chaijs.com/),但是你也可以选择自己最喜欢的在浏览器中工作的断言库。
#### 配置 Karma
创建一个使用 `ChromeHeadless` 启动器的 `karma.config.js` 文件。
**karma.conf.js**:
```
module.exports = function(config) {
config.set({
frameworks: ['mocha', 'chai'],
files: ['test/**/*.js'],
reporters: ['progress'],
port: 9876, // karma web server port
colors: true,
logLevel: config.LOG_INFO,
browsers: ['ChromeHeadless'],
autoWatch: false,
// singleRun: false, // Karma captures browsers, runs the tests and exits
concurrency: Infinity
})
}
```
>
> **注意:** 运行 `./node_modules/karma/bin/karma init karma.conf.js` 生成 Karma 的配置文件。
>
>
>
### 写一个测试
在 `/test/test.js` 中写一个测试:
**/test/test.js**:
```
describe('Array', () => {
describe('#indexOf()', () => {
it('should return -1 when the value is not present', () => {
assert.equal(-1, [1,2,3].indexOf(4));
});
});
});
```
### 运行你的测试
在我们设置好用于运行 Karma 的 `package.json` 中添加一个测试脚本。
**package.json**:
```
"scripts": {
"test": "karma start --single-run --browsers ChromeHeadless karma.conf.js"
}
```
当你运行你的测试(`yarn test`)时,Headless Chrome 会启动并将运行结果输出到终端:

### 创建你自己的 Headless Chrome 启动器
`ChromeHeadless` 启动器非常棒,因为它可以在 Headless Chrome 上进行测试。它包含了适合你的 Chrome 标志,并在端口 `9222` 上启动 Chrome 的远程调试版本。
但是,有时你可能希望将自定义的标志传递给 Chrome 或更改启动器使用的远程调试端口。要做到这一点,可以通过创建一个 `customLaunchers` 字段来扩展基础的 `ChromeHeadless` 启动器:
**karma.conf.js**:
```
module.exports = function(config) {
...
config.set({
browsers: ['Chrome', 'ChromeHeadless', 'MyHeadlessChrome'],
customLaunchers: {
MyHeadlessChrome: {
base: 'ChromeHeadless',
flags: ['--disable-translate', '--disable-extensions', '--remote-debugging-port=9223']
}
},
}
};
```
### 完全在 Travis CI 上运行它
在 Headless Chrome 中配置 Karma 运行测试是比较困难的部分,而在 Travis 中做持续集成只需要几行配置!
要在 Travis 中运行测试,请使用 `dist: trusty` 并安装稳定版 Chrome 插件:
**.travis.yml**:
```
language: node_js
node_js:
- "7"
dist: trusty # needs Ubuntu Trusty
sudo: false # no need for virtualization.
addons:
chrome: stable # have Travis install chrome stable.
cache:
yarn: true
directories:
- node_modules
install:
- yarn
script:
- yarn test
```
---
作者简介
[Eric Bidelman](https://developers.google.com/web/resources/contributors#ericbidelman) 谷歌工程师,Lighthouse 开发,Web 和 Web 组件开发,Chrome 开发
---
via: <https://developers.google.com/web/updates/2017/06/headless-karma-mocha-chai>
作者:[Eric Bidelman](https://developers.google.com/web/resources/contributors#ericbidelman) 译者:[firmianay](https://github.com/firmianay) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,863 | 使用 OctoDNS 启用 DNS 分割权威 | https://githubengineering.com/enabling-split-authority-dns-with-octodns/ | 2017-09-13T08:11:00 | [
"DNS",
"GItHub"
] | https://linux.cn/article-8863-1.html | 构建一个健壮的系统需要为故障而设计。作为 GitHub 的网站可靠性工程师(SRE),我们一直在寻求通过冗余来帮助缓解问题,今天将讨论最近我们所做的工作,以便支持你通过 DNS 来查找我们的服务器。
大型 [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) 提供商在其服务中构建了多级冗余,出现导致中断的问题时,可以采取措施来减轻其影响。最佳选择之一是把你的<ruby> 区域 <rt> zone </rt></ruby>的权威服务分割到多个服务提供商中。启用<ruby> 分割权威 <rt> split authority </rt></ruby>很简单,你只需在域名注册商配置两套或多套你区域的[名称服务器](https://en.wikipedia.org/wiki/Name_server),然后 DNS 请求将分割到整个列表中。但是,你必须在多个提供商之间对这些区域的记录保持同步,并且,根据具体情况这可能要么设置复杂,要么是完全手动的过程。
```
$ dig NS github.com. @a.gtld-servers.net.
...
;; QUESTION SECTION:
;github.com. IN NS
;; AUTHORITY SECTION:
github.com. 172800 IN NS ns4.p16.dynect.net.
github.com. 172800 IN NS ns-520.awsdns-01.net.
github.com. 172800 IN NS ns1.p16.dynect.net.
github.com. 172800 IN NS ns3.p16.dynect.net.
github.com. 172800 IN NS ns-421.awsdns-52.com.
github.com. 172800 IN NS ns-1283.awsdns-32.org.
github.com. 172800 IN NS ns2.p16.dynect.net.
github.com. 172800 IN NS ns-1707.awsdns-21.co.uk.
...
```
上面的查询是向 [TLD 名称服务器](https://en.wikipedia.org/wiki/Top-level_domain) 询问 `github.com.` 的 `NS` 记录。它返回了在我们在域名注册商中配置的值,在本例中,一共有两个 DNS 服务提供商,每个四条记录。如果其中一个提供商发生中断,那么其它的仍有希望可以服务请求。我们在各个地方同步记录,并且可以安全地修改它们,而不必担心数据陈旧或状态不正确。
完整地配置分割权威的最后一部分是在两个 DNS 服务提供商中将所有名称服务器作为顶层 `NS` 记录添加到区域的根中。
```
$ dig NS github.com. @ns1.p16.dynect.net.
...
;; QUESTION SECTION:
;github.com. IN NS
;; ANSWER SECTION:
github.com. 551 IN NS ns1.p16.dynect.net.
github.com. 551 IN NS ns2.p16.dynect.net.
github.com. 551 IN NS ns-520.awsdns-01.net.
github.com. 551 IN NS ns3.p16.dynect.net.
github.com. 551 IN NS ns-421.awsdns-52.com.
github.com. 551 IN NS ns4.p16.dynect.net.
github.com. 551 IN NS ns-1283.awsdns-32.org.
github.com. 551 IN NS ns-1707.awsdns-21.co.uk.
```
在 GitHub,我们有几十个区域和数千条记录,而大多数这些区域并没有关键到需要冗余,因此我们只需要处理一部分。我们希望有能够在多个 DNS 服务提供商中保持这些记录同步的方案,并且更一般地管理内部和外部的所有 DNS 记录。所以今天我们宣布了 [OctoDNS](https://github.com/github/octodns/)。

### 配置
OctoDNS 能够让我们重新打造我们的 DNS 工作流程。我们的区域和记录存储在 Git 仓库的配置文件中。对它们的变更使用 [GitHub 流](https://guides.github.com/introduction/flow/),并[像个站点一样用分支部署](https://githubengineering.com/deploying-branches-to-github-com/)。我们甚至可以做个 “空” 部署来预览哪些记录将在变更中修改。配置文件是 yaml 字典,每个区域一个,它的顶层的键名是记录名称,键值是 ttl、类型和类型特定的数据。例如,当包含在区域文件 `github.com.yaml` 中时,以下配置将创建 `octodns.github.com.` 的 `A` 记录。
```
octodns:
type: A
values:
- 1.2.3.4
- 1.2.3.5
```
配置的第二部分将记录数据的源映射到 DNS 服务提供商。下面的代码片段告诉 OctoDNS 从 `config` 提供程序加载区域 `github.com`,并将其结果同步到 `dyn` 和 `route53`。
```
zones:
github.com.:
sources:
- config
targets:
- dyn
- route53
```
### 同步
一旦我们的配置完成,OctoDNS 就可以评估当前的状态,并建立一个计划,其中列出将需要将目标状态与源相匹配的一组更改。在下面的例子中,`octodns.github.com` 是一个新的记录,所以所需的操作是在两者中创建记录。
```
$ octodns-sync --config-file=./config/production.yaml
...
********************************************************************************
* github.com.
********************************************************************************
* route53 (Route53Provider)
* Create <ARecord A 60, octodns.github.com., [u'1.2.3.4', '1.2.3.5']>
* Summary: Creates=1, Updates=0, Deletes=0, Existing Records=0
* dyn (DynProvider)
* Create <ARecord A 60, octodns.github.com., [u'1.2.3.4', '1.2.3.5']>
* Summary: Creates=1, Updates=0, Deletes=0, Existing Records=0
********************************************************************************
...
```
默认情况下 `octodns-sync` 处于模拟运行模式,因此不会采取任何行动。一旦我们审阅了变更,并对它们感到满意,我们可以添加 `--doit` 标志并再次运行命令。OctoDNS 将继续它的处理流程,这一次将在 Route53 和 Dynect 中进行必要的更改,以便创建新的记录。
```
$ octodns-sync --config-file=./config/production.yaml --doit
...
```
此刻,在两个 DNS 服务提供商里我们有了相同的数据记录,并可以轻松地将我们的 DNS 请求分给它们,并且知道它们将提供准确的结果。虽然我们可以直接运行上面的 OctoDNS 命令,但我们的内部工作流程依赖于部署脚本和 chatops。你可以在 [README 的工作流程部分](https://github.com/github/octodns#workflow)中找到更多信息。
### 总结
我们认为大多数网站可以从分割权威中受益,而有了 [OctoDNS](https://github.com/github/octodns/),我们希望其中最大的障碍已被扫除。即使对分割权威不感兴趣,OctoDNS 仍然值得一看,因为它将[基础设施即代码](https://en.wikipedia.org/wiki/Infrastructure_as_Code)的好处带给了 DNS。
想帮助 GitHub SRE 团队解决有趣的问题吗?我们很乐意加入我们。[在这里申请](https://boards.greenhouse.io/github/jobs/669805#.WPVqJlPyvUI)。
---
via: <https://githubengineering.com/enabling-split-authority-dns-with-octodns/>
作者:[Ross McFarland](https://github.com/ross) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Redirecting…
Click here if you are not redirected. |
8,865 | Stack Overflow 报告:Python 正在令人难以置信地增长! | https://stackoverflow.blog/2017/09/06/incredible-growth-python/ | 2017-09-13T12:11:07 | [
"Python"
] | https://linux.cn/article-8865-1.html | 
我们[最近探讨](https://stackoverflow.blog/2017/08/29/tale-two-industries-programming-languages-differ-wealthy-developing-countries/?utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python)了那些世界银行定义为[高收入](https://en.wikipedia.org/wiki/World_Bank_high-income_economy)的富裕国家是如何倾向于使用与世界上其它地区不同的技术。这其中我们看到的最大的差异在于 Python 编程语言。就高收入国家而言,Python 的增长甚至要比 [Stack Overflow Trends](https://insights.stackoverflow.com/trends?tags=python%2Cjavascript%2Cjava%2Cc%23%2Cphp%2Cc%2B%2B&utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python) 等工具展现的或其他针对全球的软件开发的排名更高。
在本文中,我们将探讨在过去五年中 Python 编程语言的非凡增长,就如在高收入国家的 Stack Overflow 流量所示那样。“增长最快”一词[很难准确定义](https://xkcd.com/1102/),但是我们认为 Python 确实可以称得上增长最快的主流编程语言。
这篇文章中讨论的所有数字都是针对高收入国家的。它们一般指的是美国、英国、德国、加拿大等国家的趋势,他们加起来占了 Stack Overflow 大约 64% 的流量。许多其他国家,如印度、巴西、俄罗斯和中国,也为全球软件开发生态系统做出了巨大贡献,尽管我们也将看到 Python 在这方面有所增长,但本文对这些经济体的描述较少。
值得强调的是,一种语言的用户数量并不能衡量语言的品质:我们是在*描述*开发人员使用的语言,但没有规定任何东西。(完全披露:我[曾经](https://stackoverflow.com/search?tab=newest&q=user%3a712603%20%5bpython%5d)主要使用 Python 编程,尽管我已经完全切换到 R 了)。
### Python 在高收入国家的增长
你可以在 [Stack Overflow Trends](https://insights.stackoverflow.com/trends?tags=python%2Cjavascript%2Cjava%2Cc%23%2Cphp%2Cc%2B%2B&utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python) 中看到,Python 在过去几年中一直在快速增长。但是对于本文,我们将重点关注高收入国家,考虑的是问题的浏览量而不是提出的问题数量(这基本上结果是类似的,但是每个月都有所波动,特别是对于较小的标签分类)。
我们有关于 Stack Overflow 问题的查看数据可以追溯到 2011 年底,在这段时间内,我们可以研究下 Python 相对于其他五种主要编程语言的增长。(请注意,这比 Stack Overflow Trends 的时间范围更短,它可追溯到 2008 年)。这些目前是高收入国家里十大访问最高的 Stack Overflow 标签中的六个。我们没有包括的四个是 CSS、HTML、Android 和 JQuery。

2017 年 6 月是 Python 成为高收入国家里 Stack Overflow 访问量最高的标签的第一个月。这也是美国和英国最受欢迎的标签,以及几乎所有其他高收入国家的前两名(接着就是 Java 或 JavaScript)。这特别令人印象深刻,因为在 2012 年,它的访问量比其它 5 种语言中的任何一种都要小,而从那时起已经增长了 2.5 倍。
部分原因是因为 Java 流量的季节性。由于它[在本科课程中有很多课程](https://stackoverflow.blog/2017/02/15/how-do-students-use-stack-overflow/),Java 流量在秋季和春季会上升,夏季则下降。到年底,它会再次赶上 Python 吗?我们可以尝试用一个叫做 [“STL” 的模型](http://otexts.org/fpp2/sec-6-stl.html)来预测未来两年的增长, 它将增长与季节性趋势结合起来,来预测将来的变化。

根据这个模型,Python 可能会在秋季保持领先地位或被 Java 取代(大致在模型预测的变化范围之内),但是 Python 显然会在 2018 年成为浏览最多的标签。STL 还表明,与过去两年一样,JavaScript 和 Java 在高收入国家中的流量水平将保持相似水平。
### 什么标签整体上增长最快?
上面只看了六个最受欢迎的编程语言。在其他重大技术中,哪些是目前在高收入国家中增长最快的技术?
我们以 2017 年至 2016 年流量的比例来定义增长率。在此分析中,我们决定仅考虑编程语言(如 Java 和 Python)和平台(如 iOS、Android、Windows 和 Linux),而不考虑像 [Angular](https://stackoverflow.com/questions/tagged/angular) 或 [TensorFlow](https://stackoverflow.com/questions/tagged/tensorflow) 这样的框架(虽然其中许多有显著的增长,可能在未来的文章中分析)。

由于上面[这个漫画](https://xkcd.com/1102/)中所描述的“最快增长”定义的激励,我们将增长与[平均差异图](https://en.wikipedia.org/wiki/Bland%E2%80%93Altman_plot)中的整体平均值进行比较。

Python 以 27% 的年增长率成为了规模大、增长快的标签。下一个类似增长的最大标签是 R。我们看到,大多数其他大型标签的流量在高收入国家中保持稳定,浏览 Android、iOS 和 PHP 则略有下降。我们以前在 [Flash 之死这篇文章](https://stackoverflow.blog/2017/08/01/flash-dead-technologies-might-next/?utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python)中审查过一些正在衰减的标签,如 Objective-C、Perl 和 Ruby。我们还注意到,在函数式编程语言中,Scala 是最大的并且不断增长的,而 F# 和 Clojure 较小并且正在衰减,Haskell 则保持稳定。
上面的图表中有一个重要的遗漏:去年,有关 TypeScript 的问题流量增长了惊人的 142%,这使得我们需要去除它以避免压扁比例尺。你还可以看到,其他一些较小的语言的增长速度与 Python 类似或更快(例如 R、Go 和 Rust),而且还有许多标签,如 Swift 和 Scala,这些标签也显示出惊人的增长。它们随着时间的流量相比 Python 如何?

像 R 和 Swift 这样的语言的发展确实令人印象深刻,而 TypeScript 在更短的时间内显示出特别快速的扩张。这些较小的语言中,有许多从很少的流量成为软件生态系统中引人注目的存在。但是如图所示,当标签开始相对较小时,显示出快速增长更容易。
请注意,我们并不是说这些语言与 Python “竞争”。相反,这只是解释了为什么我们要把它们的增长分成一个单独的类别,这些是始于较低流量的标签。Python 是一个不寻常的案例,**既是 Stack Overflow 中最受欢迎的标签之一,也是增长最快的其中之一**。(顺便说一下,它也在加速!自 2013 年以来,每年的增长速度都会更快)。
### 世界其他地区
在这篇文章中,我们一直在分析高收入国家的趋势。Python 在世界其他地区,如印度、巴西、俄罗斯和中国等国家的增长情况是否类似?
确实如此。

在高收入国家之外,Python *仍旧*是增长最快的主要编程语言。它从较低的水平开始,两年后才开始增长(2014 年而不是 2012 年)。事实上,非高收入国家的 Python 同比增长率高于高收入国家。我们不会在这里研究它,但是 R ([其它语言的使用与 GDP 正相关](https://stackoverflow.blog/2017/08/29/tale-two-industries-programming-languages-differ-wealthy-developing-countries/?utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python)) 在这些国家也在增长。
在这篇文章中,许多关于高收入国家标签 (相对于绝对排名) 的增长和下降的结论,对世界其他地区都是正确的。两个部分增长率之间有一个 0.979 Spearman 相关性。在某些情况下,你可以看到类似于 Python 上发生的 “滞后” 现象,其中一个技术在高收入国家被广泛采用,一年或两年才能在世界其他地区扩大。(这是一个有趣的现象,这可能是未来文章的主题!)
### 下一次
我们不打算为任何“语言战争”提供弹药。一种语言的用户数量并不意味着它的质量,而且肯定不会让你知道哪种语言[更适合某种特定情况](https://stackoverflow.blog/2011/08/16/gorilla-vs-shark/?utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python)。不过,考虑到这点,我们认为值得了解什么语言构成了开发者生态系统,以及生态系统会如何变化。
本文表明 Python 在过去五年中,特别是在高收入国家,显示出惊人的增长。在我们的下一篇文章中,我们将开始研究*“为什么”*。我们将按国家和行业划分增长情况,并研究有哪些其他技术与 Python 一起使用(例如,估计多少增长是由于 Python 用于 Web 开发而不是数据科学)。
在此期间,如果你使用 Python 工作,并希望你的职业生涯中进入下一阶段,那么[在 Stack Overflow Jobs 上有些公司正在招聘 Python 开发](https://stackoverflow.com/jobs/developer-jobs-using-python?utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python)。
---
via: <https://stackoverflow.blog/2017/09/06/incredible-growth-python/>
作者:[David Robinson](https://stackoverflow.blog/authors/drobinson/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | null |
8,867 | 使用 Docker 和 Kubernetes 将 MongoDB 作为微服务运行 | https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes | 2017-09-14T10:15:00 | [
"MongoDB",
"容器",
"Kubernetes"
] | https://linux.cn/article-8867-1.html | 
### 介绍
想在笔记本电脑上尝试 MongoDB?只需执行一个命令,你就会有一个轻量级的、独立的沙箱。完成后可以删除你所做的所有痕迹。
想在多个环境中使用相同的<ruby> 程序栈 <rt> application stack </rt></ruby>副本?构建你自己的容器镜像,让你的开发、测试、运维和支持团队使用相同的环境克隆。
容器正在彻底改变整个软件生命周期:从最早的技术性实验和概念证明,贯穿了开发、测试、部署和支持。
编排工具用来管理如何创建、升级多个容器,并使之高可用。编排还控制容器如何连接,以从多个微服务容器构建复杂的应用程序。
丰富的功能、简单的工具和强大的 API 使容器和编排功能成为 DevOps 团队的首选,将其集成到连续集成(CI) 和连续交付 (CD) 的工作流程中。
这篇文章探讨了在容器中运行和编排 MongoDB 时遇到的额外挑战,并说明了如何克服这些挑战。
### MongoDB 的注意事项
使用容器和编排运行 MongoDB 有一些额外的注意事项:
* MongoDB 数据库节点是有状态的。如果容器发生故障并被重新编排,数据则会丢失(能够从副本集的其他节点恢复,但这需要时间),这是不合需要的。为了解决这个问题,可以使用诸如 Kubernetes 中的<ruby> 数据卷 <rt> volume </rt></ruby> 抽象等功能来将容器中临时的 MongoDB 数据目录映射到持久位置,以便数据在容器故障和重新编排过程中存留。
* 一个副本集中的 MongoDB 数据库节点必须能够相互通信 - 包括重新编排后。副本集中的所有节点必须知道其所有对等节点的地址,但是当重新编排容器时,可能会使用不同的 IP 地址重新启动。例如,Kubernetes Pod 中的所有容器共享一个 IP 地址,当重新编排 pod 时,IP 地址会发生变化。使用 Kubernetes,可以通过将 Kubernetes 服务与每个 MongoDB 节点相关联来处理,该节点使用 Kubernetes DNS 服务提供“主机名”,以保持服务在重新编排中保持不变。
* 一旦每个单独的 MongoDB 节点运行起来(每个都在自己的容器中),则必须初始化副本集,并添加每个节点到其中。这可能需要在编排工具之外提供一些额外的处理。具体来说,必须使用目标副本集中的一个 MongoDB 节点来执行 `rs.initiate` 和 `rs.add` 命令(本列表之后给出了一个简单的示意)。
* 如果编排框架提供了容器的自动化重新编排(如 Kubernetes),那么这将增加 MongoDB 的弹性,因为这可以自动重新创建失败的副本集成员,从而在没有人为干预的情况下恢复完全的冗余级别。
* 应该注意的是,虽然编排框架可能监控容器的状态,但是不太可能监视容器内运行的应用程序或备份其数据。这意味着使用 [MongoDB Enterprise Advanced](https://www.mongodb.com/products/mongodb-enterprise-advanced) 和 [MongoDB Professional](https://www.mongodb.com/products/mongodb-professional) 中包含的 [MongoDB Cloud Manager](https://www.mongodb.com/cloud/) 等强大的监控和备份解决方案非常重要。可以考虑创建自己的镜像,其中包含你首选的 MongoDB 版本和 [MongoDB Automation Agent](https://docs.cloud.mongodb.com/tutorial/nav/install-automation-agent/)。
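上面第三点提到的副本集初始化,大致就是连接到其中一个 MongoDB 节点后,在 mongo shell 中执行类似下面的命令(主机名只是假设的示例,实际取决于你为各个成员暴露的服务地址):

```
// 在目标副本集的某个节点上执行
rs.initiate()
rs.add("mongo-svc-b.example.internal:27017")
rs.add("mongo-svc-c.example.internal:27017")
rs.status()
```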
### 使用 Docker 和 Kubernetes 实现 MongoDB 副本集
如上节所述,分布式数据库(如 MongoDB)在使用编排框架(如 Kubernetes)进行部署时,需要稍加注意。本节将介绍详细介绍如何实现。
我们首先在单个 Kubernetes 集群中创建整个 MongoDB 副本集(通常在一个数据中心内,这显然不能提供地理冗余)。实际上,要改为跨多个集群运行,需要改动的地方非常少,这些步骤将在后面描述。
副本集的每个成员将作为自己的 pod 运行,并提供一个公开 IP 地址和端口的服务。这个“固定”的 IP 地址非常重要,因为外部应用程序和其他副本集成员都可以依赖于它在重新编排 pod 的情况下保持不变。
下图说明了其中一个 pod 以及相关的复制控制器和服务。

*图 1:MongoDB 副本集成员被配置为 Kubernetes Pod 并作为服务公开*
逐步介绍该配置中描述的资源(列表之后还给出了一个对应的 YAML 示意):
* 从核心开始,有一个名为 `mongo-node1` 的容器。`mongo-node1` 包含一个名为 `mongo` 的镜像,这是一个在 [Docker Hub](https://hub.docker.com/_/mongo/) 上托管的一个公开可用的 MongoDB 容器镜像。容器在集群中暴露端口 `27017`。
* Kubernetes 的数据卷功能用于将连接器中的 `/data/db` 目录映射到名为 `mongo-persistent-storage1` 的永久存储上,这又被映射到在 Google Cloud 中创建的名为 `mongodb-disk1` 的磁盘中。这是 MongoDB 存储其数据的地方,这样它可以在容器重新编排后保留。
* 容器保存在一个 pod 中,该 pod 带有名为 `mongo-node` 的标签,以及一个(任意指定的)实例名 `rod`。
* 配置 `mongo-node1` 复制控制器以确保 `mongo-node1` pod 的单个实例始终运行。
* 名为 `mongo-svc-a` 的 `负载均衡` 服务给外部开放了一个 IP 地址以及 `27017` 端口,它被映射到容器相同的端口号上。该服务使用选择器来匹配 pod 标签来确定正确的 pod。外部 IP 地址和端口将用于应用程序以及副本集成员之间的通信。每个容器也有本地 IP 地址,但是当容器移动或重新启动时,这些 IP 地址会变化,因此不会用于副本集。
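下面是一个与上述描述大致对应的简化 YAML 示意(字段取值只是演示用的假设,并非白皮书中的完整配置;完整文件请参考文末链接的白皮书):

```
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc-a
spec:
  type: LoadBalancer
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    instance: rod
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-rc1
spec:
  replicas: 1
  selector:
    instance: rod
  template:
    metadata:
      labels:
        name: mongo-node
        instance: rod
    spec:
      containers:
      - name: mongo-node1
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage1
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage1
        gcePersistentDisk:
          pdName: mongodb-disk1
          fsType: ext4
```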
下一个图显示了副本集的第二个成员的配置。

*图 2:第二个 MongoDB 副本集成员配置为 Kubernetes Pod*
90% 的配置是一样的,只有这些变化:
* 磁盘和卷名必须是唯一的,因此使用的是 `mongodb-disk2` 和 `mongo-persistent-storage2`
* Pod 被分配了一个 `instance: jane` 和 `name: mongo-node2` 的标签,以便新的服务可以使用选择器与图 1 所示的 `rod` Pod 相区分。
* 复制控制器命名为 `mongo-rc2`
* 该服务名为`mongo-svc-b`,并获得了一个唯一的外部 IP 地址(在这种情况下,Kubernetes 分配了 `104.1.4.5`)
第三个副本成员的配置遵循相同的模式,下图展示了完整的副本集:

*图 3:配置为 Kubernetes 服务的完整副本集成员*
请注意,即使在三个或更多节点的 Kubernetes 群集上运行图 3 所示的配置,Kubernetes 可能(并且经常会)在同一主机上编排两个或多个 MongoDB 副本集成员。这是因为 Kubernetes 将三个 pod 视为属于三个独立的服务。
为了在区域内增加冗余,可以创建一个附加的 *headless* 服务。新服务不向外界提供任何功能(甚至不会有 IP 地址),但是它可以让 Kubernetes 通知三个 MongoDB pod 形成一个服务,所以 Kubernetes 会尝试在不同的节点上编排它们。

*图 4:用于避免多个 MongoDB 副本集成员被调度到同一节点的 Headless 服务*
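这样的 headless 服务本身非常简单,大致类似下面这样(仅为示意,标签名是假设的,前提是三个 MongoDB pod 都带有同一个共同标签):

```
apiVersion: v1
kind: Service
metadata:
  name: mongo-rs
spec:
  clusterIP: None        # headless:不分配集群 IP
  ports:
  - port: 27017
  selector:
    role: mongo-rs       # 假设三个 MongoDB pod 都带有这个共同标签
```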
配置和启动 MongoDB 副本集所需的实际配置文件和命令可以在白皮书《[启用微服务:阐述容器和编排](https://www.mongodb.com/collateral/microservices-containers-and-orchestration-explained)》中找到。特别的是,需要一些本文中描述的特殊步骤来将三个 MongoDB 实例组合成具备功能的、健壮的副本集。
#### 多个可用区 MongoDB 副本集
上面创建的副本集存在风险,因为所有内容都在相同的 GCE 集群中运行,因此都在相同的<ruby> 可用区 <rt> availability zone </rt></ruby>中。如果有一个重大事件使可用区离线,那么 MongoDB 副本集将不可用。如果需要地理冗余,则三个 pod 应该在三个不同的可用区或地区中运行。
令人惊奇的是,为了创建在三个区域之间分割的类似的副本集(需要三个集群),几乎不需要改变。每个集群都需要自己的 Kubernetes YAML 文件,该文件仅为该副本集中的一个成员定义了 pod、复制控制器和服务。那么为每个区域创建一个集群,永久存储和 MongoDB 节点是一件很简单的事情。

*图 5:在多个可用区域上运行的副本集*
### 下一步
要了解有关容器和编排的更多信息 - 所涉及的技术和所提供的业务优势 - 请阅读白皮书《[启用微服务:阐述容器和编排](https://www.mongodb.com/collateral/microservices-containers-and-orchestration-explained)》。该文件提供了获取本文中描述的副本集,并在 Google Container Engine 中的 Docker 和 Kubernetes 上运行的完整的说明。
---
作者简介:
Andrew 是 MongoDB 的产品营销总经理。他在去年夏天离开 Oracle 加入 MongoDB,在 Oracle 他花了 6 年多的时间在产品管理上,专注于高可用性。他可以通过 @andrewmorgan 或者在他的博客(clusterdb.com)评论联系他。
---
via: <https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes>
作者:[Andrew Morgan](http://www.clusterdb.com/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,869 | 函数式编程简介 | https://opensource.com/article/17/4/introduction-functional-programming | 2017-09-15T13:53:59 | [
"函数式编程",
"Haskell"
] | https://linux.cn/article-8869-1.html |
>
> 我们来解释函数式编程的什么,它的优点是哪些,并且给出一些函数式编程的学习资源。
>
>
>

这要看您问的是谁, <ruby> 函数式编程 <rt> functional programming </rt></ruby>(FP)要么是一种理念先进的、应该广泛传播的程序设计方法;要么是一种偏学术性的、实际用途不多的编程方式。在这篇文章中我将讲解函数式编程,探究其优点,并推荐学习函数式编程的资源。
### 语法入门
本文的代码示例使用的是 [Haskell](https://wiki.haskell.org/Introduction) 编程语言。在这篇文章中你只需要了解的基本函数语法:
```
even :: Int -> Bool
even = ... -- 具体的实现放在这里
```
上述示例定义了含有一个参数的函数 `even` ,第一行是 *类型声明*,具体来说就是 `even` 函数接受一个 Int 类型的参数,返回一个 Bool 类型的值,其实现跟在后面,由一个或多个等式组成。在这里我们将忽略具体实现方法(名称和类型已经足够了):
```
map :: (a -> b) -> [a] -> [b]
map = ...
```
这个示例,`map` 是一个有两个参数的函数:
1. `(a -> b)` :将 `a` 转换成 `b` 的函数
2. `[a]`:一个 `a` 的列表;该函数返回一个 `b` 的列表。(LCTT 译注: 将函数作用到 `[a]` (List 序列对应于其它语言的数组)的每一个元素上,将每次所得结果放到另一个 `[b]` ,最后返回这个结果 `[b]`。)
同样我们不去关心要如何实现,我们只感兴趣它的定义类型。`a` 和 `b` 是任何一种的的 <ruby> 类型变量 <rt> type variable </rt></ruby> 。就像上一个示例中, `a` 是 `Int` 类型, `b` 是 `Bool` 类型:
```
map even [1,2,3]
```
这个是一个 Bool 类型的序列:
```
[False,True,False]
```
如果你看到你不理解的其他语法,不要惊慌;对语法的充分理解不是必要的。
### 函数式编程的误区
我们先来解释一下常见的误区:
* 函数式编程不是命令行编程或者面向对象编程的竞争对手或对立面,这并不是非此即彼的。
* 函数式编程不仅仅用在学术领域。的确,在函数式编程的历史中,像 Haskell 和 OCaml 这样的语言是最流行的研究语言。但是今天许多公司使用函数式编程来构建大型的系统、小型专业程序,以及种种不同场合的软件。甚至还有一个[面向函数式编程的商业用户](http://cufp.org/)的年度会议;以前的会议议程让我们了解了函数式编程在工业中的用途,以及谁在使用它。
* 函数式编程与 [monad](https://www.haskell.org/tutorial/monads.html) 无关,也与任何其他特定的抽象无关。尽管围绕这个话题有许多争论,monad 只是一种带有定律的抽象。有些东西是 monad,有些不是。
* 函数式编程不是特别难学的。某些语言可能与您已经知道的语法或求值语义不同,但这些差异是浅显的。函数式编程中有大量的概念,但其他语言也是如此。
### 什么是函数式编程?
函数式编程的核心就是只使用*纯粹*的数学函数来编程:函数的结果仅取决于参数,而没有像 I/O 或者状态变更这样的副作用。程序是通过 <ruby> 组合函数 <rt> function composition </rt></ruby> 的方法构建的:
```
(.) :: (b -> c) -> (a -> b) -> (a -> c)
(g . f) x = g (f x)
```
这个<ruby> 中缀 <rt> infix </rt></ruby>函数 `(.)` 把两个函数组合成一个,将 `g` 作用到 `f` 的输出上。我们将在下一个示例中看到它的使用。作为比较,我们看看在 Python 中同样的函数:
```
def compose(g, f):
return lambda x: g(f(x))
```
函数式编程的优点在于:由于函数是确定的、没有副作用的,所以总是可以用函数调用的结果来替换函数调用本身,这种等量替换带来了<ruby> 等式推理 <rt> equational reasoning </rt></ruby>。每个程序员都需要对自己的代码和别人的代码进行推理,而等式推理正是做这件事的一个好工具。来看一个示例。假设你遇到了这个表达式:
```
map even . map (+1)
```
这段代码是做什么的?可以简化吗?通过等式推理,可以通过一系列替换来分析代码:
```
map even . map (+1)
map (even . (+1)) -- 来自 'map' 的定义
map (\x -> even (x + 1)) -- lambda 抽象
map odd -- 来自 'even' 的定义
```
我们可以使用等式推理来理解程序并优化可读性。Haskell 编译器使用等式推理进行多种程序优化。没有纯函数,等式推理是不可能的,或者需要程序员付出更多的努力。
### 函数式编程语言
你需要一种编程语言来做函数式编程吗?
在没有<ruby> 高阶函数 <rt> higher-order function </rt></ruby>(传递函数作为参数和返回函数的能力)、lambdas (匿名函数)和<ruby> 泛型 <rt> generics </rt></ruby>的语言中进行有意义的函数式编程是困难的。 大多数现代语言都有这些,但在不同语言中支持函数式编程方面存在差异。 具有最佳支持的语言称为<ruby> 函数式编程语言 <rt> functional programming language </rt></ruby>。 这些包括静态类型的 *Haskell*、*OCaml*、*F#* 和 *Scala* ,以及动态类型的 *Erlang* 和 *Clojure*。
即使是在函数式语言里,可以在多大程度上利用函数编程有很大差异。有一个<ruby> 类型系统 <rt> type system </rt></ruby>会有很大的帮助,特别是它支持 <ruby> 类型推断 <rt> type inference </rt></ruby> 的话(这样你就不用总是必须键入类型)。这篇文章中没有详细介绍这部分,但足以说明,并非所有的类型系统都是平等的。
与所有语言一样,不同的函数的语言强调不同的概念、技术或用例。选择语言时,考虑它支持函数式编程的程度以及是否适合您的用例很重要。如果您使用某些非 FP 语言,你仍然会受益于在该语言支持的范围内的函数式编程。
### 不要打开陷阱之门
回想一下,函数的结果只取决于它的输入。但是,几乎所有的编程语言都有破坏这一原则的“功能”。空值、<ruby> 实例类型判断 <rt> type case </rt></ruby>(`instanceof`)、类型转换、异常、<ruby> 副作用 <rt> side-effect </rt></ruby>,以及无限递归的可能性都是陷阱,它们破坏了等式推理,并削弱程序员对程序行为正确性的理解能力。(没有任何陷阱之门的<ruby> 全函数式语言 <rt> total language </rt></ruby>包括 Agda、Idris 和 Coq。)
幸运的是,作为程序员,我们可以选择避免这些陷阱,如果我们有足够的自律,我们可以假装陷阱不存在。这个方法叫做<ruby> 轻率推理 <rt> fast and loose reasoning </rt></ruby>。这样做没有任何代价——几乎任何程序都可以在不使用这些陷阱的情况下编写出来,并且通过避免这些可以获得等式推理、可组合性和可重用性。
让我们以异常为例详细讨论一下。这个陷阱破坏了等式推理,因为异常终止的可能性没有反映在类型中。(如果文档中提到了可能抛出的异常,你就算是幸运的了。)但是我们没有理由不能用一个涵盖所有失败模式的返回类型。
避开陷阱是各语言特性差异很大的一个领域。为了避免异常,<ruby> 代数数据类型 <rt> algebraic data type </rt></ruby>可以用来对错误条件进行建模,就像这样:
```
-- new data type for results of computations that can fail
--
data Result e a = Error e | Success a
-- new data type for three kinds of arithmetic errors
--
data ArithError = DivByZero | Overflow | Underflow
-- integer division, accounting for divide-by-zero
--
safeDiv :: Int -> Int -> Result ArithError Int
safeDiv x y =
if y == 0
then Error DivByZero
else Success (div x y)
```
在这个例子中的权衡是,你现在必须使用 Result ArithError Int 类型,而不是以前的 Int 类型,不过也有专门处理这种值的抽象。你不再需要处理异常,而能够使用轻率推理,总体来说这是一个胜利。
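下面是一个使用 `safeDiv` 的小例子,用模式匹配同时处理成功与失败两种结果(只是一个演示性的片段,沿用上文定义的类型):

```
describeDiv :: Int -> Int -> String
describeDiv x y =
  case safeDiv x y of
    Error DivByZero -> "cannot divide by zero"
    Error _         -> "arithmetic error"
    Success n       -> "result is " ++ show n
```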
### 自由定理
大多数现代静态类型语言具有<ruby> 泛型 <rt> generics </rt></ruby>(也称为<ruby> 参数多态性 <rt> parametric polymorphism </rt></ruby> ),其中函数是通过一个或多个抽象类型定义的。 例如,看看这个 List(序列)函数:
```
f :: [a] -> [a]
f = ...
```
Java 中的相同函数如下所示:
```
static <A> List<A> f(List<A> xs) { ... }
```
该编译的程序证明了这个函数适用于类型 `a` 的*任意*选择。考虑到这一点,采用轻率推理的方法,你能够弄清楚该函数的作用吗?知道类型有什么帮助?
在这种情况下,该类型并不能告诉我们函数的功能(它可以逆转序列、删除第一个元素,或许多其它的操作),但它确实告诉了我们很多信息。只是从该类型,我们可以推演出该函数的定理:
* 定理 1 :输出中的每个元素也出现于输入中;不可能在输入的序列 `a` 中添加值,因为你不知道 `a` 是什么,也不知道怎么构造一个。
* 定理 2 :如果你映射某个函数到列表上,然后对其应用 `f`,其等同于对映射应用 `f`。
定理 1 帮助我们了解代码的作用,定理 2 对于程序优化提供了帮助。我们从类型中学到了这一切!其结果,即从类型中获取有用的定理的能力,称之为<ruby> 参数化 <rt> parametricity </rt></ruby>。因此,类型是函数行为的部分(有时是完整的)规范,也是一种经过机器检查的文档。
现在你可以利用参数化了。你可以从 `map` 和 `(.)` 的类型或者下面的这些函数中发现什么呢?
* `foo :: a -> (a, a)`
* `bar :: a -> a -> a`
* `baz :: b -> a -> a`
### 学习函数式编程的资源
也许你已经相信函数式编程是编写软件的不错方式,你想知道如何开始?学习函数式编程有好几种途径;这里有一些我推荐的(我承认,我对 Haskell 有所偏爱):
* UPenn 的 [CIS 194: 介绍 Haskell](https://www.cis.upenn.edu/%7Ecis194/fall16/) 是函数式编程概念和 Haskell 实际开发的不错选择。有课程材料,但是没有讲座(您可以用几年前 Brisbane 函数式编程小组的 [CIS 194 系列讲座](https://github.com/bfpg/cis194-yorgey-lectures)。
* 不错的入门书籍有 《[Scala 的函数式编程](https://www.manning.com/books/functional-programming-in-scala)》 、 《[Haskell 函数式编程思想](http://www.cambridge.org/gb/academic/subjects/computer-science/programming-languages-and-applied-logic/thinking-functionally-haskell)》 , 和 《[Haskell 编程原理](http://haskellbook.com/)》。
* [Data61 FP 课程](https://github.com/data61/fp-course) (即 *NICTA* 课程)通过<ruby> 类型驱动开发 <rt> type-driven development </rt></ruby>来教授基础的抽象概念和数据结构。它在设计上就十分困难,但收获也是丰富的;它起源于培训研讨会,因此只有当你认识一位愿意指导你的函数式编程程序员时,再去尝试它。
* 在你的工作学习中使用函数式编程书写代码:写一些纯函数(避免不确定性和可变状态),使用高阶函数和递归而不是循环,利用参数化来提高可读性和重用性。许多人就是通过在各种语言中进行试验并体验函数式编程的好处,而走上了函数式编程之旅。
* 加入到你的地区中的一些函数式编程小组或者学习小组中,或者创建一个,也可以是参加一些函数式编程的会议(新的会议总是不断的出现)。
### 总结
在本文中,我讨论了函数式编程是什么以及不是什么,并了解到了函数式编程的优势,包括等式推理和参数化。我们了解到在大多数编程语言中都有一些函数式编程功能,但是语言的选择会影响受益的程度,而 Haskell 是函数式编程中语言最受欢迎的语言。我也推荐了一些学习函数式编程的资源。
函数式编程是一个丰富的领域,还有许多更深入(更神秘)的主题正在等待探索。如果不提一下其中一些具有实际意义的主题,那会是我的疏忽,比如:
* lenses 和 prisms (是一流的设置和获取值的方式;非常适合使用嵌套数据);
* 定理证明(当你可以证明你的代码正确时,为什么还要测试你的代码?);
* 惰性求值(让你能够处理潜在无限大的数据结构);
* 范畴论(函数式编程中许多美丽实用的抽象的起源);
我希望你喜欢这个函数式编程的介绍,并且启发你走上这个有趣和实用的软件开发之路。
*本文根据 [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) 许可证发布。*
(题图: opensource.com)
---
作者简介:
红帽软件工程师。对函数式编程、范畴论、数学感兴趣。对墨西哥辣椒(jalapeño)十分狂热。
---
via: <https://opensource.com/article/17/4/introduction-functional-programming>
作者:[Fraser Tweedale](https://opensource.com/users/frasertweedale) 译者:[MonkeyDEcho](https://github.com/MonkeyDEcho) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Depending on whom you ask, *functional programming* (FP) is either an enlightened approach to programming that should be spread far and wide, or an overly academic approach to programming with few real-world benefits. In this article, I will explain what functional programming is, explore its benefits, and recommend resources for learning functional programming.
## Syntax primer
Code examples in this article are in the [Haskell](https://wiki.haskell.org/Introduction) programming language. All that you need to understand for this article is the basic function syntax:
```
``````
even :: Int -> Bool
even = ... -- implementation goes here
```
This defines a one-argument function named **even**. The first line is the *type declaration*, which says that **even** takes an **Int** and returns a **Bool**. The implementation follows and consists of one or more *equations*. We'll ignore the implementation (the name and type tell us enough):
```
``````
map :: (a -> b) -> [a] -> [b]
map = ...
```
In this example, **map** is a function that takes two arguments:
**(a -> b)**: a functions that turns an**a**into a**b****[a]**: a list of**a**
and returns a list of **b**. Again, we don't care about the definition—the type is more interesting! **a** and **b** are *type variables* that could stand for any type. In the expression below, **a** is **Int** and **b** is **Bool**:
```
```` map even [1,2,3]`
It evaluates to a **[Bool]**:
```
```` [False,True,False]`
If you see other syntax that you do not understand, don't panic; full comprehension of the syntax is not essential.
## Myths about functional programming
Let's begin by dispelling common misconceptions:
- Functional programming is not the rival or antithesis of imperative or object-oriented programming. This is a false dichotomy.
- Functional programming is not just the domain of academics. It is true that the history of functional programming is steeped in academia, and languages such as like Haskell and OCaml are popular research languages. But today many companies use functional programming for large-scale systems, small specialized programs, and everything in between. There's even an annual conference for
[Commercial Users of Functional Programming](http://cufp.org/); past programs give an insight into how functional programming is being used in industry, and by whom. - Functional programming has nothing to do with
[monads](https://www.haskell.org/tutorial/monads.html), nor any other particular abstraction. For all the hand-wringing around this topic, monad is just an abstraction with laws. Some things are monads, others are not. - Functional programming is not especially hard to learn. Some languages may have different syntax or evaluation semantics from those you already know, but these differences are superficial. There are dense concepts in functional programming, but this is also true of other approaches.
## What is functional programming?
At its core, functional programming is just programming with functions—*pure* mathematical functions. The result of a function depends only on the arguments, and there are no side effects, such as I/O or mutation of state. Programs are built by combining functions together. One way of combining functions is *function composition*:
```
``````
(.) :: (b -> c) -> (a -> b) -> (a -> c)
(g . f) x = g (f x)
```
This *infix* function combines two functions into one, applying **g** to the output of **f**. We'll see it used in an upcoming example. For comparison, the same function in Python looks like:
```
``````
def compose(g, f):
return lambda x: g(f(x))
```
The beauty of functional programming is that because functions are deterministic and have no side effects, you can always replace a function application with the result of the application. This substitution of equals for equals enables *equational reasoning*. Every programmer has to reason about their code and others', and equational reasoning is a great tool for doing that. Let's look at an example. You encounter the expression:
```
```` map even . map (+1)`
What does this program do? Can it be simplified? Equational reasoning lets you analyze the code through a series of substitutions:
```
``````
map even . map (+1)
map (even . (+1)) -- from definition of 'map'
map (\x -> even (x + 1)) -- lambda abstraction
map odd -- from definition of 'even'
```
We can use equational reasoning to understand programs and optimize for readability. The Haskell compiler uses equational reasoning to perform many kinds of program optimizations. Without pure functions, equational reasoning either isn't possible, or requires an inordinate effort from the programmer.
## Functional programming languages
What do you need from a programming language to be able to do functional programming?
Doing functional programming meaningfully in a language without *higher-order functions* (the ability to pass functions as arguments and return functions), *lambdas* (anonymous functions), and *generics* is difficult. Most modern languages have these, but there are differences in *how well* different languages support functional programming. The languages with the best support are called *functional programming languages*. These include *Haskell*, *OCaml*, *F#*, and *Scala*, which are statically typed, and the dynamically typed *Erlang* and *Clojure*.
Even among functional languages there are big differences in how far you can exploit functional programming. Having a type system helps a lot, especially if it supports *type inference* (so you don't always have to type the types). There isn't room in this article to go into detail, but suffice it to say, not all type systems are created equal.
As with all languages, different functional languages emphasize different concepts, techniques, or use cases. When choosing a language, considering how well it supports functional programming and whether it fits your use case is important. If you're stuck using some non-FP language, you will still benefit from applying functional programming to the extent the language supports it.
## Don't open that trap door!
Recall that the result of a function depends only on its inputs. Alas, almost all programming languages have "features" that break this assumption. Null values, type case (**instanceof**), type casting, exceptions, side-effects, and the possibility of infinite recursion are trap doors that break equational reasoning and impair a programmer's ability to reason about the behavior or correctness of a program. (*Total languages*, which do not have any trap doors, include Agda, Idris, and Coq.)
Fortunately, as programmers, we can choose to avoid these traps, and if we are disciplined, we can pretend that the trap doors do not exist. This idea is called *fast and loose reasoning*. It costs nothing—almost any program can be written without using the trap doors—and by avoiding them you win back equational reasoning, composability and reuse.
Let's discuss exceptions in detail. This trap door breaks equational reasoning because the possibility of abnormal termination is not reflected in the type. (Count yourself lucky if the documentation even mentions the exceptions that could be thrown.) But there is no reason why we can't have a return type that encompasses all the failure modes.
Avoiding trap doors is an area in which language features can make a big difference. For avoiding exceptions, *algebraic data types* can be used to model error conditions, like so:
```
``````
-- new data type for results of computations that can fail
--
data Result e a = Error e | Success a
-- new data type for three kinds of arithmetic errors
--
data ArithError = DivByZero | Overflow | Underflow
-- integer division, accounting for divide-by-zero
--
safeDiv :: Int -> Int -> Result ArithError Int
safeDiv x y =
if y == 0
then Error DivByZero
else Success (div x y)
```
The trade-off in this example is that you must now work with values of type **Result ArithError Int** instead of plain old **Int**, but there are abstractions for dealing with this. You no longer need to handle exceptions and can use fast and loose reasoning, so overall it's a win.
## Theorems for free
Most modern statically typed languages have *generics* (also called *parametric polymorphism*), where functions are defined over one or more abstract types. For example, consider a function over lists:
```
``````
f :: [a] -> [a]
f = ...
```
The same function in Java looks like:
```
```` static <A> List<A> f(List<A> xs) { ... }`
The compiled program is a proof that this function will work with *any* choice for the type **a**. With that in mind, and employing fast and loose reasoning, can you work out what the function does? Does knowing the type help?
In this case, the type doesn't tell us exactly what the function does (it could reverse the list, drop the first element, or many other things), but it does tell us a lot. Just from the type, we can derive theorems about the function:
**Theorem 1**: Every element in the output appears in the input; it couldn't possibly add an**a**to the list because it has no knowledge of what**a**is or how to construct one.**Theorem 2**: If you map any function over the list then apply**f**, the result is the same as applying**f**then mapping.
Theorem 1 helps us understand what the code is doing, and Theorem 2 is useful for program optimization. We learned all this just from the type! This result—the ability to derive useful theorems from types—is called *parametricity*. It follows that a type is a partial (sometimes complete) specification of a function's behavior, and a kind of machine-checked documentation.
Now it's your turn to exploit parametricity. What can you conclude from the types of **map** and **(.)**, or the following functions?
**foo :: a -> (a, a)****bar :: a -> a -> a****baz :: b -> a -> a**
## Resources for learning functional programming
Perhaps you have been convinced that functional programming is a better way to write software, and you are wondering how to get started? There are several approaches to learning functional programming; here are some I recommend (with, I admit, a strong bias toward Haskell):
- UPenn's [CIS 194: Introduction to Haskell](https://www.cis.upenn.edu/~cis194/fall16/) is a solid introduction to functional programming concepts and real-world Haskell development. The course material is available, but the lectures are not (you could view Brisbane Functional Programming Group's [series of talks covering CIS 194](https://github.com/bfpg/cis194-yorgey-lectures) from a few years ago instead).
- Good introductory books include [Haskell Programming from first principles](http://haskellbook.com/), [Functional Programming in Scala](https://www.manning.com/books/functional-programming-in-scala), and [Thinking Functionally with Haskell](http://www.cambridge.org/gb/academic/subjects/computer-science/programming-languages-and-applied-logic/thinking-functionally-haskell).
- The [Data61 FP course](https://github.com/data61/fp-course) (f.k.a., *NICTA* course) teaches foundational abstractions and data structures through *type-driven development*. The payoff is huge, but it is *difficult by design*, having its origins in training workshops, so only attempt it if you know a functional programmer who is willing to mentor you.
- Start practicing functional programming in whatever code you're working on. Write pure functions (avoid non-determinism and mutation), use higher-order functions and recursion instead of loops, and exploit parametricity for improved readability and reuse. Many people start out in functional programming by experimenting and experiencing the benefits in all kinds of languages.
- Join a functional programming user group or study group in your area—or start one—and look out for functional programming conferences (new ones are popping up all the time).
## Conclusion
In this article, I discussed what functional programming is and is not, and looked at advantages of functional programming, including equational reasoning and parametricity. We learned that you can do *some* functional programming in most programming languages, but the choice of language affects how much you can benefit, with *functional programming languages*, such as Haskell, having the most to offer. I also recommended resources for learning functional programming.
Functional programming is a rich field and there are many deeper (and denser) topics awaiting exploration. I would be remiss not to mention a few that have practical implications, such as:
- lenses and prisms (first-class, composable getters and setters; great for working with nested data);
- theorem proving (why test your code when you could *prove it correct* instead?);
- lazy evaluation (lets you work with potentially infinite data structures);
- and category theory (the origin of many beautiful and practical abstractions in functional programming).
I hope that you have enjoyed this introduction to functional programming and are inspired to dive into this fun and practical approach to software development.
*This article is published under the CC BY 4.0 license.*
|
8,870 | 减少 curl 中内存分配操作(malloc) | https://daniel.haxx.se/blog/2017/04/22/fewer-mallocs-in-curl/ | 2017-09-16T11:55:51 | [
"curl",
"malloc"
] | https://linux.cn/article-8870-1.html | 
今天我在 libcurl 内部又做了[一个小改动](https://github.com/curl/curl/commit/cbae73e1dd95946597ea74ccb580c30f78e3fa73),使其做更少的 malloc。这一次,泛型链表函数被转换成更少的 malloc (这才是链表函数应有的方式,真的)。
### 研究 malloc
几周前我开始研究内存分配。这很容易,因为多年前我们 curl 中就已经有内存调试和日志记录系统了。使用 curl 的调试版本,并在我的构建目录中运行此脚本:
```
#!/bin/sh
export CURL_MEMDEBUG=$HOME/tmp/curlmem.log
./src/curl http://localhost
./tests/memanalyze.pl -v $HOME/tmp/curlmem.log
```
对于 curl 7.53.1,这大约有 115 次内存分配。这算多还是少?
内存日志非常基础。为了让你有所了解,这是一个示例片段:
```
MEM getinfo.c:70 free((nil))
MEM getinfo.c:73 free((nil))
MEM url.c:294 free((nil))
MEM url.c:297 strdup(0x559e7150d616) (24) = 0x559e73760f98
MEM url.c:294 free((nil))
MEM url.c:297 strdup(0x559e7150d62e) (22) = 0x559e73760fc8
MEM multi.c:302 calloc(1,480) = 0x559e73760ff8
MEM hash.c:75 malloc(224) = 0x559e737611f8
MEM hash.c:75 malloc(29152) = 0x559e737a2bc8
MEM hash.c:75 malloc(3104) = 0x559e737a9dc8
```
### 检查日志
然后,我对日志进行了更深入的研究,我意识到在相同的代码行做了许多小内存分配。我们显然有一些相当愚蠢的代码模式,我们分配一个结构体,然后将该结构添加到链表或哈希,然后该代码随后再添加另一个小结构体,如此这般,而且经常在循环中执行。(我在这里说的是*我们*,不是为了责怪某个人,当然大部分的责任是我自己……)
这两种分配操作将总是成对地出现,并被同时释放。我决定解决这些问题。做非常小的(小于 32 字节)的分配也是浪费的,因为非常多的数据将被用于(在 malloc 系统内)跟踪那个微小的内存区域。更不用说堆碎片了。
因此,将该哈希和链表代码修复为不使用 malloc 是快速且简单的方法,对于最简单的 “curl http://localhost” 传输,它可以消除 20% 以上的 malloc。
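这类改动的思路可以用下面这段示意性的 C 代码来说明(这不是 curl 的真实实现,`struct item`、`list_append` 等名字都是为演示而虚构的):链表节点直接内嵌在业务结构体里,节点随着结构体一次分配、一次释放,链表操作本身完全不需要 malloc。

```
#include <stdlib.h>

/* 示意用的双向链表节点与链表头(非 curl 真实定义) */
struct list_node { struct list_node *prev, *next; };
struct list_head { struct list_node *head, *tail; };

/* 节点内嵌在业务结构体中:item 和它的链表节点一次 malloc、一次 free */
struct item {
    struct list_node node;
    int payload;
};

/* 追加到链表尾部:只修改指针,不做任何内存分配 */
static void list_append(struct list_head *l, struct list_node *n)
{
    n->next = NULL;
    n->prev = l->tail;
    if(l->tail)
        l->tail->next = n;
    else
        l->head = n;
    l->tail = n;
}

static struct item *item_new(struct list_head *l, int payload)
{
    struct item *it = malloc(sizeof(*it));   /* 整个模式里唯一的一次分配 */
    if(it) {
        it->payload = payload;
        list_append(l, &it->node);
    }
    return it;
}
```

哈希表的表项也可以用同样的内嵌方式处理,从而把原本成对出现的两次分配合并成一次。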
此时,我根据大小对所有的内存分配操作进行排序,并检查所有最小的分配操作。一个突出的部分是在 `curl_multi_wait()` 中,它是一个典型的在 curl 传输主循环中被反复调用的函数。对于大多数典型情况,我将其转换为[使用堆栈](https://github.com/curl/curl/commit/5f1163517e1597339d)。在大量重复的调用函数中避免 malloc 是一件好事。
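这类“典型情况走栈、超出上限才退回堆”的写法大致如下(仅为示意草图,并非 `curl_multi_wait()` 的真实代码,`SMALL_FDS`、`wait_on_fds` 等名字和阈值都是假设的):

```
#include <stdlib.h>
#include <string.h>

#define SMALL_FDS 16   /* 假设的阈值:绝大多数调用不会超过它 */

static int wait_on_fds(const int *fds, size_t count)
{
    int small[SMALL_FDS];
    int *work = small;                       /* 典型情况:只用栈,零次 malloc */

    if(count > SMALL_FDS) {
        work = malloc(count * sizeof(*work));
        if(!work)
            return -1;                       /* 内存不足 */
    }
    memcpy(work, fds, count * sizeof(*work));

    /* ...... 在 work 数组上完成实际的等待/轮询工作 ...... */

    if(work != small)
        free(work);                          /* 只有走了堆分配才需要释放 */
    return 0;
}
```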
### 重新计数
现在,如上面的脚本所示,同样的 `curl localhost` 命令从 curl 7.53.1 的 115 次分配操作下降到 80 个分配操作,而没有牺牲任何东西。轻松地有 26% 的改善。一点也不差!
由于我修改了 `curl_multi_wait()`,我也想看看它实际上是如何改进一些稍微更高级一些的传输。我使用了 [multi-double.c](https://github.com/curl/curl/commit/5f1163517e1597339d) 示例代码,添加了初始化内存记录的调用,让它使用 `curl_multi_wait()`,并且并行下载了这两个 URL:
```
http://www.example.com/
http://localhost/512M
```
第二个文件是 512 兆字节的零,第一个文件是一个 600 字节的公共 html 页面。这是 [count-malloc.c 代码](https://gist.github.com/bagder/dc4a42cb561e791e470362da7ef731d3)。
首先,我使用 7.53.1 来测试上面的例子,并使用 `memanalyze` 脚本检查:
```
Mallocs: 33901
Reallocs: 5
Callocs: 24
Strdups: 31
Wcsdups: 0
Frees: 33956
Allocations: 33961
Maximum allocated: 160385
```
好了,所以它总共使用了 160KB 的内存,分配操作次数超过 33900 次。而它下载超过 512 兆字节的数据,所以它每 15KB 数据有一次 malloc。是好是坏?
回到 git master,现在是 7.54.1-DEV 的版本 - 因为我们不太确定当我们发布下一个版本时会变成哪个版本号。它可能是 7.54.1 或 7.55.0,它还尚未确定。我离题了,我再次运行相同修改的 multi-double.c 示例,再次对内存日志运行 memanalyze,报告来了:
```
Mallocs: 69
Reallocs: 5
Callocs: 24
Strdups: 31
Wcsdups: 0
Frees: 124
Allocations: 129
Maximum allocated: 153247
```
我不敢置信地反复看了两遍。发生什么了吗?为了仔细检查,我最好再运行一次。无论我运行多少次,结果还是一样的。
### 33961 vs 129
在典型的传输中 `curl_multi_wait()` 被调用了很多次,并且在传输过程中至少要正常进行一次内存分配操作,因此删除那个单一的微小分配操作对计数器有非常大的影响。正常的传输也会做一些将数据移入或移出链表和散列操作,但是它们现在也大都是无 malloc 的。简单地说:剩余的分配操作不会在传输循环中执行,所以它们的重要性不大。
以前的 curl 是当前示例分配操作数量的 263 倍。换句话说:新的是旧的分配操作数量的 0.37% 。
另外还有一点好处,新的内存分配量更少,总共减少了 7KB(4.3%)。
### malloc 重要吗?
在几个 G 内存的时代里,在传输中有几个 malloc 真的对于普通人有显著的区别吗?对 512MB 数据进行的 33832 个额外的 malloc 有什么影响?
为了衡量这些变化的影响,我决定比较 localhost 的 HTTP 传输,看看是否可以看到任何速度差异。localhost 对于这个测试是很好的,因为没有网络速度限制,更快的 curl 下载也越快。服务器端也会相同的快/慢,因为我将使用相同的测试集进行这两个测试。
我相同方式构建了 curl 7.53.1 和 curl 7.54.1-DEV,并运行这个命令:
```
curl http://localhost/80GB -o /dev/null
```
下载的 80GB 的数据会尽可能快地写到空设备中。
我获得的确切数字可能不是很有用,因为它将取决于机器中的 CPU、使用的 HTTP 服务器、构建 curl 时的优化级别等,但是相对数字仍然应该是高度相关的。新代码对决旧代码!
7.54.1-DEV 反复地表现出快了 30%!在我的构建中,早期版本的 2200 MB/秒提升到了当前版本的超过 2900 MB/秒。
这里的要点当然不是说在我的机器上用单个内核就能以超过 20 Gbit/秒的速度进行 HTTP 传输,因为实际上很少有用户会用 curl 做这样快速的传输。关键在于 curl 现在传输每个字节所用的 CPU 更少了,这将把更多的 CPU 留给系统的其余部分去执行任何需要做的事情;如果设备是便携式设备,还可以省电。
关于 malloc 的成本:在 512MB 的测试中,旧代码多做了 33832 次分配。旧代码以大约 2200 MB/秒的速率进行 HTTP 传输,这相当于每秒 145827 次 malloc,而它们现在都被消除了!600 MB/秒的改进意味着,curl 每少做一次 malloc,每秒就能多传输 4300 字节。
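如果想自己直观感受一次 malloc/free 往返大致值多少,可以用类似下面的小程序粗略测一下(这只是示意,并不是文中基准测试的一部分,结果会因 CPU、内存分配器和编译选项而差别很大):

```
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    enum { N = 1000000 };
    struct timespec t0, t1;
    volatile char sink = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for(long i = 0; i < N; i++) {
        char *p = malloc(32);                /* 和文中讨论的小块分配同一量级 */
        if(!p)
            return 1;
        p[0] = (char)i;                      /* 触碰内存,避免整个循环被优化掉 */
        sink ^= p[0];
        free(p);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("平均每次 malloc/free 约 %.1f 纳秒 (sink=%d)\n", ns / N, sink);
    return 0;
}
```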
### 去掉这些 malloc 难吗?
一点也不难,非常简单。然而,有趣的是,在这个旧项目中,仍然有这样的改进空间。我有这个想法已经好几年了,我很高兴我终于花点时间来实现。感谢我们的测试套件,我可以有相当大的信心做这个“激烈的”内部变化,而不会引入太可怕的回归问题。由于我们的 API 很好地隐藏了内部,所以这种变化可以完全不改变任何旧的或新的应用程序……
(是的,我还没在版本中发布该变更,所以这还有风险,我有点后悔我的“这很容易”的声明……)
### 注意数字
curl 的 git 仓库从 7.53.1 到今天已经有 213 个提交。即使我没有别的想法,可能还会有一次或多次的提交,而不仅仅是内存分配对性能的影响。
### 还有吗?
还有其他类似的情况么?
也许。我们不会做很多性能测量或比较,所以谁知道呢,我们也许会做更多的愚蠢事情,我们可以收手并做得更好。有一个事情是我一直想做,但是从来没有做,就是添加所使用的内存/malloc 和 curl 执行速度的每日“监视” ,以便更好地跟踪我们在这些方面不知不觉的回归问题。
### 补遗,4/23
(关于我在 hacker news、Reddit 和其它地方读到的关于这篇文章的评论)
有些人让我再次运行那个 80GB 的下载,给出时间。我运行了三次新代码和旧代码,其运行“中值”如下:
旧代码:
```
real 0m36.705s
user 0m20.176s
sys 0m16.072s
```
新代码:
```
real 0m29.032s
user 0m12.196s
sys 0m12.820s
```
承载这个 80GB 文件的服务器是标准的 Apache 2.4.25,文件存储在 SSD 上,我的机器的 CPU 是 i7 3770K 3.50GHz 。
有些人也提到 `alloca()` 作为该补丁之一也是个解决方案,但是 `alloca()` 移植性不够,只能作为一个孤立的解决方案,这意味着如果我们要使用它的话,需要写一堆丑陋的 `#ifdef`。
---
via: <https://daniel.haxx.se/blog/2017/04/22/fewer-mallocs-in-curl/>
作者:[DANIEL STENBERG](https://daniel.haxx.se/blog/author/daniel/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Today I landed yet [another small change](https://github.com/curl/curl/commit/cbae73e1dd95946597ea74ccb580c30f78e3fa73) to libcurl internals that further reduces the number of small mallocs we do. This time the generic linked list functions got converted to become malloc-less (the way linked list functions should behave, really).
## Instrument mallocs
I started out my quest a few weeks ago by instrumenting our memory allocations. This is easy since we have our own memory debug and logging system in curl since many years. Using a debug build of curl I run this script in my build dir:
#!/bin/sh export CURL_MEMDEBUG=$HOME/tmp/curlmem.log ./src/curl http://localhost ./tests/memanalyze.pl -v $HOME/tmp/curlmem.log
For curl 7.53.1, this counted about 115 memory allocations. Is that many or a few?
The memory log is very basic. To give you an idea what it looks like, here’s an example snippet:
MEM getinfo.c:70 free((nil)) MEM getinfo.c:73 free((nil)) MEM url.c:294 free((nil)) MEM url.c:297 strdup(0x559e7150d616) (24) = 0x559e73760f98 MEM url.c:294 free((nil)) MEM url.c:297 strdup(0x559e7150d62e) (22) = 0x559e73760fc8 MEM multi.c:302 calloc(1,480) = 0x559e73760ff8 MEM hash.c:75 malloc(224) = 0x559e737611f8 MEM hash.c:75 malloc(29152) = 0x559e737a2bc8 MEM hash.c:75 malloc(3104) = 0x559e737a9dc8
## Check the log
I then studied the log closer and I realized that there were many small memory allocations done from the same code lines. We clearly had some rather silly code patterns where we would allocate a struct and then add that struct to a linked list or a hash and that code would then subsequently add yet another small struct and similar – and then often do that in a loop. (I say *we* here to avoid blaming anyone, but of course I myself am to blame for most of this…)
Those two allocations would always happen in pairs and they would be freed at the same time. I decided to address those. Doing very small (less than say 32 bytes) allocations is also wasteful just due to the very large amount of data in proportion that will be used just to keep track of that tiny little memory area (within the malloc system). Not to mention fragmentation of the heap.
So, fixing the hash code and the linked list code to not use mallocs were immediate and easy ways to remove over 20% of the mallocs for a plain and simple ‘curl http://localhost’ transfer.
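One common way to collapse such a pair into a single allocation is to give the owning struct a C99 flexible array member, so the fixed-size header and its variable-sized payload are allocated and freed together. The sketch below is generic C with made-up names, not actual curl source:

```
#include <stdlib.h>
#include <string.h>

/* One object, one malloc: the header and its variable-sized payload
   share a single heap block and are freed together (illustrative only). */
struct entry {
    size_t len;
    char data[];               /* C99 flexible array member */
};

static struct entry *entry_new(const char *src, size_t len)
{
    struct entry *e = malloc(sizeof(*e) + len + 1);
    if(e) {
        e->len = len;
        memcpy(e->data, src, len);
        e->data[len] = '\0';
    }
    return e;
}
```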
At this point I sorted all allocations based on size and checked all the smallest ones. One that stood out was one we made in *curl_multi_wait(),* a function that is called over and over in a typical curl transfer main loop. I converted it over to [use the stack](https://github.com/curl/curl/commit/5f1163517e1597339d) for most typical use cases. Avoiding mallocs in very repeatedly called functions is a good thing.
## Recount
Today, the script from above shows that the same “curl localhost” command is **down to 80 allocations from the 115** curl 7.53.1 used. Without sacrificing anything really. An easy 26% improvement. Not bad at all!
But okay, since I modified curl_multi_wait() I wanted to also see how it actually improves things for a slightly more advanced transfer. I took the [multi-double.c](https://github.com/curl/curl/commit/5f1163517e1597339d) example code, added the call to initiate the memory logging, made it uses curl_multi_wait() and had it download these two URLs in parallel:
http://www.example.com/ http://localhost/512M
The second one being just 512 megabytes of zeroes and the first being a 600 bytes something public html page. Here’s the [count-malloc.c code](https://gist.github.com/bagder/dc4a42cb561e791e470362da7ef731d3).
First, I brought out 7.53.1 and built the example against that and had the memanalyze script check it:
Mallocs: 33901 Reallocs: 5 Callocs: 24 Strdups: 31 Wcsdups: 0 Frees: 33956 Allocations: 33961 Maximum allocated: 160385
Okay, so it used 160KB of memory totally and it did over 33,900 allocations. But ok, it downloaded over 512 megabytes of data so it makes one malloc per 15KB of data. Good or bad?
Back to git master, the version we call 7.54.1-DEV right now – since we’re not quite sure which version number it’ll become when we release the next release. It can become 7.54.1 or 7.55.0, it has not been determined yet. But I digress, I ran the same modified multi-double.c example again, ran memanalyze on the memory log again and it now reported…
Mallocs: 69 Reallocs: 5 Callocs: 24 Strdups: 31 Wcsdups: 0 Frees: 124 Allocations: 129 Maximum allocated: 153247
I had to look twice. Did I do something wrong? I better run it again just to double-check. The results are the same no matter how many times I run it…
## 33,961 vs 129
curl_multi_wait() is called a lot of times in a typical transfer, and it had at least one of the memory allocations we normally did during a transfer so removing that single tiny allocation had a pretty dramatic impact on the counter. A normal transfer also moves things in and out of linked lists and hashes a bit, but they too are mostly malloc-less now. Simply put: the remaining allocations are not done in the transfer loop so they’re way less important.
The old curl did **263 times** the number of allocations the current does for this example. Or the other way around: the new one does **0.37%** the number of allocations the old one did…
As an added bonus, the new one also allocates less memory in total as it decreased that amount by 7KB (4.3%).
## Are mallocs important?
In the day and age with many gigabytes of RAM and all, does a few mallocs in a transfer really make a notable difference for mere mortals? What is the impact of 33,832 extra mallocs done for 512MB of data?
To measure what impact these changes have, I decided to compare HTTP transfers from localhost and see if we can see any speed difference. localhost is fine for this test since there’s no network speed limit, but the faster curl is the faster the download will be. The server side will be equally fast/slow since I’ll use the same set for both tests.
I built curl 7.53.1 and curl 7.54.1-DEV identically and ran this command line:
curl http://localhost/80GB -o /dev/null
80 gigabytes downloaded as fast as possible written into the void.
The exact numbers I got for this may not be totally interesting, as it will depend on CPU in the machine, which HTTP server that serves the file and optimization level when I build curl etc. But the relative numbers should still be highly relevant. The old code vs the new.
7.54.1-DEV repeatedly performed **30% faster!** The 2200MB/sec in my build of the earlier release increased to over 2900 MB/sec with the current version.
The point here is of course not that it easily can transfer HTTP over 20 Gigabit/sec using a single core on my machine – since there are very few users who actually do that speedy transfers with curl. The point is rather that curl now uses less CPU per byte transferred, which leaves more CPU over to the rest of the system to perform whatever it needs to do. Or to save battery if the device is a portable one.
On the cost of malloc: The 512MB test I did resulted in 33832 more allocations using the old code. The old code transferred HTTP at a rate of about 2200MB/sec. That equals **145,827 mallocs/second** – that are now removed! A 600 MB/sec improvement means that curl managed to transfer 4300 bytes extra for each malloc it didn't do, each second.
## Was removing these mallocs hard?
Not at all, it was all straight forward. It is however interesting that there’s still room for changes like this in a project this old. I’ve had this idea for some years and I’m glad I finally took the time to make it happen. Thanks to our test suite I could do this level of “drastic” internal change with a fairly high degree of confidence that I don’t introduce too terrible regressions. Thanks to our APIs being good at hiding internals, this change could be done completely without changing anything for old or new applications.
(Yeah I haven’t shipped the entire change in a release yet so there’s of course a risk that I’ll have to regret my “this was easy” statement…)
## Caveats on the numbers
There have been 213 commits in the curl git repo from 7.53.1 till today. There’s a chance one or more other commits than just the pure alloc changes have made a performance impact, even if I can’t think of any.
## More?
Are there more “low hanging fruits” to pick here in the similar vein?
Perhaps. We don’t do a lot of performance measurements or comparisons so who knows, we might do more silly things that we could stop doing and do even better. One thing I’ve always wanted to do, but never got around to, was to add daily “monitoring” of memory/mallocs used and how fast curl performs in order to better track when we unknowingly regress in these areas.
## Addendum, April 23rd
(Follow-up on some comments on this article that I’ve read on [hacker news](https://news.ycombinator.com/item?id=14177739), [Reddit](https://www.reddit.com/r/programming/comments/671ucd/curl_reducing_malloc3_calls_lead_to_30_faster/) and elsewhere.)
Someone asked and I ran the 80GB download again with ‘time’. Three times each with the old and the new code, and the “middle” run of them showed these timings:
Old code:
real 0m36.705s user 0m20.176s sys 0m16.072s
New code:
real 0m29.032s user 0m12.196s sys 0m12.820s
The server that hosts this 80GB file is a standard Apache 2.4.25, and the 80GB file is stored on an SSD. The CPU in my machine is a core-i7 3770K 3.50GHz.
Someone also mentioned alloca() as a solution for one of the patches, but alloca() is not portable enough to work as the sole solution, meaning we would have to do ugly #ifdef if we would want to use alloca() there.
Have you tried heaptrack? It is pretty neat tool for memory debugging. It doesn’t require custom allocator functions. It has a gui that makes it easy to compare two builds/runs.
No I haven’t, but it looks like a nifty tool I should investigate closer…
When do you do that on Firefox ? 😉
Hehe… I’d say that one of the beauties with working on curl and libcurl is the simplicity. No processes/threads (with ownership, locking, IPC etc), no C++ and not the least: a relatively small code base with a rather focused and narrow use case. It just transfers data…
Wow, 2.9GB/s !? I just come to ~330MB/s here (i3, 3.1GHz, SSD apache Debian stock installation, 5.6GB gzipped test file). The bottleneck here seems to be the SSD. So the test basically measures the speed of my SSD (why the heck is your SSD so much faster ?)
Anyways, a quick comparison with wget2 using ‘time’ shows the same ‘real’ for both tools, but the other differ:
curl 7.54.1-DEV (current master):
user 0m1.272s
sys 0m2.296s
Wget2 (current master):
user 0m0.176s
sys 0m1.844s
So curl has quite some air for further optimizations before it equals out Wget2.
Clearly, and I will never claim that we are ever “done” optimizing or improving.
But a comparison of numbers like that between different projects is not really fair nor easy. I mean, curl uses and libcurl provides an API that sets limitations on how curl can and will perform, for example. In a quite different manner than wget2 does or needs. Not to mention other portability ambitions. Still, curious that the difference was that notable. Makes me wonder what the main explanation is…
Regarding the 80GB file, it is simply a sparse file made with “truncate -s 80G” stored on the SSD. Nothing magic at all.
Thanks for the sparse file trick. Here come the numbers for a 80GB file (best out of several runs). And yes, fair comparisons are difficult. Maybe someone likes to find out about the differences.
Curl (-s and -o /dev/null):
real 0m38.788s
user 0m9.244s
sys 0m19.028s
Wget2 (-O /dev/null):
real 0m38.503s
user 0m0.972s
sys 0m13.052s
One basic thing: I believe wget2 uses a 100KB buffer while curl uses a 16KB one (by default), which explains some of the differences. My numbers for a curl using 100KB buffer:
real 0m26.039s (down from 29)
user 0m4.404s (down from 12)
sys 0m8.156s (down from 12)
That’s an awesome speedup ! You should consider using larger buffers in the curl tool (maybe not in the library).
Yes, agreed!
2200MB/sec and 2900 MB/sec are not 20 Gigabit/s.
2900 megabyte is 23200 megabits. That is more than 20 gigabits. Math!
That sounds great, I wonder if we care. Not only do large file transfers like that rarely happen, as the one user discovered the bottleneck is usually the hard drive.
Even if, lets say, you are doing only processing on the files after you retrieve them, such as passing html to a parser, I would think that parsing process would be pretty slow as well.
My point is, I wonder if the slowness of the malloc makes any real-world difference that we care about?
Sebastian: I already addressed that in the post. I doubt there will be many users noticing a difference, but spending less CPU is still valuable to save power and to allow other parts to use more.
Don’t get me wrong, I don’t think you should revert these changes – they do save battery life.
Put another way though, the slowness of real world operations like disk ops or payload processing also has a corresponding power cost, so again, compared to real world applications, do we really care about the couple microwatt hours we save here compared to the power cost of writing to a drive, or doing complex dom processing?
I would think that would be where the real focus on power conservation would be, the savings this change produces are probably a rounding error in comparison. I’d be interested in seeing the numbers, though.
Hey Daniel,
it’s really nice that you’re doing this. I’m constantly fighting against anyone introducing malloc() calls in haproxy’s fast path for this exact reason. I know that sometimes it’s difficult to avoid them. Often for most objects, using some fixed-size pools significantly helps without changing the code too much. Good luck with the 80 remaining ones, you’re on the right direction! |
8,871 | Oracle 要将 Java EE 移交给 Eclipse 基金会 | https://adtmag.com/articles/2017/09/12/java-ee-moving-to-eclipse.aspx | 2017-09-16T17:44:41 | [
"Oracle",
"Java"
] | https://linux.cn/article-8871-1.html | 
Oracle 日前宣布,选择将 [Eclipse 基金会](https://eclipse.org/org/foundation/)作为 Java EE(Java 平台企业版)的新家。Oracle 是与 Java EE 的两个最大的贡献者 IBM 和 Red Hat 一同做出的该决定。
Oracle 软件布道师 David Delabassee 在[博文](https://blogs.oracle.com/theaquarium/opening-up-ee-update)中说,“…… Eclipse 基金会积极参与了 Java EE 及相关技术的发展,具有丰富的经验。这能帮助我们快速移交 Java EE,创建社区友好的流程来推进该平台的发展,并充分利用如 MicroProfile 这样的互补项目。我们期待这一合作。”
Eclipse 基金会的执行总监 Mike Milinkovich 对这次移交持乐观态度,他说,这正是企业级 Java 所需要的,也是社区所期望的。
他说,“开源模式已经一再被时间所证实是成功创新和协作的最佳方式。随着企业更多地转向以云为中心的模式,很显然 Java EE 需要有更快速的创新步伐。移交给 Eclipse 基金会对于供应商来说是一次巨大的机会,他们并不总是有最好的合作机会。我们为个人、小型公司、企业和大型供应商提供开放合作的机会。这将为他们提供一个可靠的平台,让他们可以协作前进,并将支持 Java EE 所需的更快的创新步伐。”
Milinkovich 说,Java EE 成为获准项目也将经历所有的 Eclipse 项目的同样的获准流程。他期待 “Java EE” 融合为一个包含大量子项目的顶级项目。该平台现在包含近 40 个 Java JSR。
Delabassee 说,Oracle 计划将其主导的 Java EE 技术和相关的 GlassFish 技术重新授权给 Eclipse 基金会,包括参考实现、技术兼容性工具包(TCK)和“相关项目文档”。并计划给该平台“重新定名”,但此事尚未确定。
这一移交何时进行还未确定,但 Oracle 希望在 “Java EE 8 完成后尽快进行,以促进快速转型”,Delabassee 承诺,在移交期间,Oracle 将继续支持现有的 Java EE 许可用户,包括升级到 Java EE 8 的许可用户。该公司也将继续支持现有的 WebLogic 服务器版本中的 Java EE,包括之后的 WebLogic 服务器版本中的 Java EE 8。
Delabassee 写道,“我们相信这一计划将使我们可以继续支持现有的 Java EE 标准,同时将其演进为更开放的环境。还有许多工作需要去做,但我们相信正走在一条正确的道路上。”
| 200 | OK | [News](https://adtmag.com/Articles/List/News.aspx)
### Java EE Is Moving to the Eclipse Foundation
- By
[John K. Waters](https://adtmag.com/Forms/EmailToAuthor.aspx?AuthorItem={6D0D3D88-A361-4921-9BD9-EA24D4266205}&ArticleItem={5FF980B3-9984-419C-A9F0-AA3A1FB2D559}) - September 12, 2017
Oracle has chosen the [Eclipse Foundation](https://eclipse.org/org/foundation/) to be the new home of the Java Platform Enterprise Edition (Java EE), the company announced today. Oracle made the decision in collaboration with IBM and Red Hat, the two other largest contributors to the platform.
"…The Eclipse Foundation has strong experience and involvement with Java EE and related technologies," wrote Oracle software evangelist David Delabassee in a [blog post](https://blogs.oracle.com/theaquarium/opening-up-ee-update). This will help us transition Java EE rapidly, create community-friendly processes for evolving the platform, and leverage complementary projects such as MicroProfile. We look forward to this collaboration."
Mike Milinkovich, executive director of the Eclipse Foundation, is optimistic about this move, which he said is exactly what the enterprise Java needs and what the community has been hoping for.
"The open source model has been shown time and again to be the best way to innovate and collaborate successfully," he told *ADTmag*. "As enterprises move to a more cloud-centric model, it's pretty clear that Java EE requires a more rapid pace of innovation. Also, moving Java to the Eclipse Foundation is going to be a great opportunity for vendors, who haven't always had the best time collaborating. We're all about enabling open collaboration among individuals, small companies, enterprises, and large vendors. This will give them a reliable platform on which to collaborate going forward, and it will support the more rapid pace of innovation that Java EE needs."
Java EE will go through the same project approval process all proposed Eclipse projects go through, Milinkovich said. He expects "Java EE" to emerge as a top-level project over a large collection of subprojects. The platform comprises nearly 40 Java JSRs.
Oracle plans to re-license Oracle-led Java EE technologies and related GlassFish technologies to the Eclipse Foundation, including reference implementations, technology compatibility kits (TCKs,) and "associated project documentation," Delabassee said. And it plans to "rebrand" the platform with a new name, yet to be determined.
When the move will happen is also yet to be determined, but Oracle wants to get started "as soon as possible after completion of Java EE 8 to facilitate a rapid transition." During the transition period, Oracle will continue to support existing Java EE licensees, Delabassee promised, including licensees moving to Java EE 8. The company will also continue to support its existing WebLogic Server versions, as well as Java EE 8 in a future WebLogic Server version.
"We believe this plan will enable us to continue to support existing Java EE standards, while enabling the evolution to a more open environment," Delabassee wrote. "There's more work to do, but we believe we're on the right track."
About the Author
[John K. Waters](https://twitter.com/johnkwaters) is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film *Silicon Valley: A 100 Year Renaissance*, which aired on PBS. |
8,872 | 微软在 Windows 10 上支持 Ubuntu 容器 | http://news.softpedia.com/news/canonical-microsoft-enable-ubuntu-containers-with-hyper-v-isolation-on-windows-517734.shtml | 2017-09-17T09:34:00 | [
"Windows",
"Canonical"
] | https://linux.cn/article-8872-1.html | 
Canonical 的 Dustin Kirkland 宣布该公司最近与微软合作让 Ubuntu 容器可以运行在带有 Hyper-V 隔离的 Windows 系统上。
如果你曾经想象过在 Windows 机器上使用你喜欢的 GNU/Linux 发行版(比如 Ubuntu)来运行 Linux 应用,那么现在有个好消息,你可以在 Windows 10 和 Windows 服务器上运行 Docker 容器了。
该技术利用 Ubuntu Linux 操作系统作为宿主基础,通过 Docker 容器镜像和 Hyper-V 虚拟化在 Windows 上运行 Linux 应用。你所需的只是一台 8GB 内存的 64 位 x86 PC,以及加入了 Windows Insider 计划。
“Canonical 和微软合作交付了一种真正特别的体验——在 Windows 10 和 Windows 服务器上运行带有 Hyper-V 隔离的 Ubuntu 容器,”Canonical 的 Ubuntu 产品与战略副总裁 Dustin Kirkland 说,“只需要一分钟就能跑起来!”
在他最近写的一篇[博客文章](https://insights.ubuntu.com/2017/09/13/running-ubuntu-containers-with-hyper-v-isolation/)中,Dustin Kirkland 分享了一篇教程,提供了简单易行的指南和截屏,对这种技术感兴趣的人可以去看看。不过该技术目前还只能运行在 Windows 10 和 Windows 服务器上。
根据这篇指南,你只需要在 Windows PowerShell 中运行 docker run -it ubuntu bash 即可启动带有 Hyper-V 隔离的 Ubuntu 容器。如果你在该教程中遇到了困难,你可以加入官方的 [Ubuntu Forums](https://ubuntuforums.org/) 或 [Ask Ubuntu](https://askubuntu.com/) 寻求支持。此外,在 Windows 10 上,[Ubuntu 也可以作为 app 从 Windows 商店上得到](http://news.softpedia.com/news/here-s-how-to-upgrade-your-old-ubuntu-on-windows-install-to-the-app-version-517332.shtml)。
| 301 | Moved Permanently | null |
8,873 | 如何在 Windows 上运行 Linux 容器 | https://tutorials.ubuntu.com/tutorial/tutorial-windows-ubuntu-hyperv-containers | 2017-09-17T10:56:00 | [
"Ubuntu",
"Windows",
"Docker"
] | https://linux.cn/article-8873-1.html | ### 1、概述
现在能够在 Windows 10 和 Windows 服务器上运行 Docker 容器了,它是以 Ubuntu 作为宿主基础的。
想象一下,使用你喜欢的 Linux 发行版——比如 Ubuntu——在 Windows 上运行你自己的 Linux 应用。
现在,借助 Docker 技术和 Windows 上的 Hyper-V 虚拟化的力量,这一切成为了可能。

### 2、前置需求
你需要一个 8GB 内存的 64 位 x86 PC,运行 Windows 10 或 Windows Server。
只有加入了 [Windows 预览体验计划(Insider)](https://insider.windows.com/zh-cn/),才能运行带有 Hyper-V 支持的 Linux 容器。该计划可以让你测试预发布软件和即将发布的 Windows。
如果你特别在意稳定性和隐私(Windows 预览体验计划允许微软收集使用信息),你可以考虑等待 2017 年 10 月发布的[Windows 10 Fall Creator update](https://www.microsoft.com/zh-cn/windows/upcoming-features),这个版本可以让你无需 Windows 预览体验身份即可使用带有 Hyper-V 支持的 Docker 技术。
你也需要最新版本的 Docker,它可以从 [http://dockerproject.org](http://dockerproject.org/) 下载得到。
最后,你还需要确认你安装了 [XZ 工具](https://tukaani.org/xz/),解压 Ubuntu 宿主容器镜像时需要它。
### 3、加入 Windows 预览体验计划(Insider)
如果你已经是 Windows 预览体验计划(Insider)成员,你可以跳过此步。否则在浏览器中打开如下链接:
<https://insider.windows.com/zh-cn/getting-started/>

要注册该计划,使用你在 Windows 10 中的微软个人账户登录,并在预览体验计划首页点击“注册”,接受条款并完成注册。
然后你需要打开 Windows 开始菜单中的“更新和安全”菜单,并在菜单左侧选择“Windows 预览体验计划”。

如果需要的话,在 Windows 提示“你的 Windows 预览体验计划账户需要关注”时,点击“修复”按钮。
### 4、 Windows 预览体验(Insider)的内容
从 Windows 预览体验计划面板,选择“开始使用”。如果你的微软账户没有关联到你的 Windows 10 系统,当提示时使用你要关联的账户进行登录。
然后你可以选择你希望从 Windows 预览体验计划中收到何种内容。要得到 Docker 技术所需要的 Hyper-V 隔离功能,你需要加入“快圈”,两次确认后,重启 Windows。重启后,你需要等待你的机器安装各种更新后才能进行下一步。

### 5、安装 Docker for Windows
从 [Docker Store](https://store.docker.com/editions/community/docker-ce-desktop-windows) 下载 Docker for Windows。

下载完成后,安装,并在需要时重启。

重启后,Docker 就已经启动了。Docker 要求启用 Hyper-V 功能,因此它会提示你启用并重启。点击“OK”来为 Docker 启用它并重启系统。

### 6、下载 Ubuntu 容器镜像
从 [Canonical 合作伙伴镜像网站](https://partner-images.canonical.com/hyper-v/linux-containers/xenial/current/)下载用于 Windows 的最新的 Ubuntu 容器镜像。
下载后,使用 XZ 工具解压:
```
C:\Users\mathi\> .\xz.exe -d xenial-container-hyper-v.vhdx.xz
C:\Users\mathi\>
```
### 7、准备容器环境
首先创建两个目录:

创建 `C:\lcow`,它将用于 Docker 准备容器时的临时空间。

再创建一个 `C:\Program Files\Linux Containers` ,这是存放 Ubuntu 容器镜像的地方。
你需要给这个目录额外的权限以允许 Docker 在其中使用镜像。在管理员权限的 Powershell 窗口中运行如下 Powershell 脚本:
```
param(
[string] $Root
)
# Give the virtual machines group full control
$acl = Get-Acl -Path $Root
$vmGroupRule = new-object System.Security.AccessControl.FileSystemAccessRule("NT VIRTUAL MACHINE\Virtual Machines", "FullControl","ContainerInherit,ObjectInherit", "None", "Allow")
$acl.SetAccessRule($vmGroupRule)
Set-Acl -AclObject $acl -Path $Root
```
将其保存为`set_perms.ps1`并运行它。
**提示**:你也许需要运行 `Set-ExecutionPolicy -Scope process unrestricted` 来允许运行未签名的 Powershell 脚本。

```
C:\Users\mathi\> .\set_perms.ps1 "C:\Program Files\Linux Containers"
C:\Users\mathi\>
```
现在,将上一步解压得到的 Ubuntu 容器镜像(.vhdx)复制到 `C:\Program Files\Linux Containers` 下的 `uvm.vhdx`。
### 8、更多的 Docker 准备工作
Docker for Windows 要求一些预发布的功能才能与 Hyper-V 隔离相配合工作。这些功能在之前的 Docker CE 版本中还不可用,这些所需的文件可以从 [master.dockerproject.org](https://master.dockerproject.org/) 下载。

从 [master.dockerproject.org](https://master.dockerproject.org/) 下载 `dockerd.exe` 和 `docker.exe`,并将其放到安全的地方,比如你自己的文件夹中。它们用于在下一步中启动 Ubuntu 容器。
### 9、 在 Hyper-V 上运行 Ubuntu 容器
你现在已经准备好启动你的容器了。首先以管理员身份打开命令行(`cmd.exe`),然后以正确的环境变量启动 `dockerd.exe`。
```
C:\Users\mathi\> set LCOW_SUPPORTED=1
C:\Users\mathi\> .\dockerd.exe -D --data-root C:\lcow
```
然后,以管理员身份启动 Powershell 窗口,并运行 `docker.exe` 为你的容器拉取镜像:
```
C:\Users\mathi\> .\docker.exe pull ubuntu
```

现在你终于启动了容器,再次运行 `docker.exe`,让它运行这个新镜像:
```
C:\Users\mathi\> .\docker.exe run -it ubuntu
```

恭喜你!你已经成功地在 Windows 上让你的系统运行了带有 Hyper-V 隔离的容器,并且跑的是你非常喜欢的 Ubuntu 容器。
### 10、获取帮助
如果你需要一些 Hyper-V Ubuntu 容器的起步指导,或者你遇到一些问题,你可以在这里寻求帮助:
* [Ask Ubuntu](https://askubuntu.com/)
* [Ubuntu Forums](https://ubuntuforums.org/)
* [IRC-based support](https://wiki.ubuntu.com/IRC/ChannelList)
| 301 | null |
|
8,874 | 开发者定义的应用交付 | https://www.oreilly.com/learning/developer-defined-application-delivery | 2017-09-17T17:10:23 | [
"负载均衡",
"原生云应用"
] | https://linux.cn/article-8874-1.html |
>
> 负载均衡器如何帮助你解决分布式系统的复杂性。
>
>
>

原生云应用旨在利用分布式系统的性能、可扩展性和可靠性优势。不幸的是,分布式系统往往以额外的复杂性为代价。由于你程序的各个组件跨网络分布,并且这些网络有通信障碍或者性能降级,因此你的分布式程序组件需要能够继续独立运行。
为了避免程序状态的不一致,分布式系统在设计上就要接受一个前提:组件可能失效,而这一点在网络中表现得最为突出。因此,分布式系统在其核心上很大程度地依赖于负载均衡:把请求分布到两个或多个系统上,以便在面临网络中断时保持弹性,并在系统负载波动时进行水平伸缩。
随着分布式系统在原生云程序的设计和交付中越来越普及,负载平衡器在现代应用程序体系结构的各个层次都影响了基础设施设计。在大多数常见配置中,负载平衡器部署在应用程序前端,处理来自外部世界的请求。然而,微服务的出现意味着负载平衡器可以在幕后发挥关键作用:即管理*服务*之间的流。
因此,当你使用原生云程序和分布式系统时,负载均衡器将承担其他角色:
* 作为提供缓存和增加安全性的**反向代理**,因为它成为外部客户端的中间人。
* 作为通过提供协议转换(例如 REST 到 AMQP)的 **API 网关**。
* 它可以处理**安全性**(即运行 Web 应用程序防火墙)。
* 它可能承担应用程序管理任务,如速率限制和 HTTP/2 支持。
鉴于它们的扩展能力远大于平衡流量,<ruby> 负载平衡器 <rt> load balancer </rt></ruby>可以更广泛地称为<ruby> 应用交付控制器 <rt> Application Delivery Controller </rt></ruby>(ADC)。
### 开发人员定义基础设施
从历史上看,ADC 是由 IT 专业人员购买、部署和管理的,最常见运行企业级架构的应用程序。对于物理负载平衡器设备(如 F5、Citrix、Brocade等),这种情况在很大程度上仍然存在。具有分布式系统设计和临时基础设施的云原生应用要求负载平衡器与它们运行时的基础设施 (如容器) 一样具有动态特性。这些通常是软件负载均衡器(例如来自公共云提供商的 NGINX 和负载平衡器)。云原生应用通常是开发人员主导的计划,这意味着开发人员正在创建应用程序(例如微服务器)和基础设施(Kubernetes 和 NGINX)。开发人员越来越多地对负载平衡 (和其他) 基础设施的决策做出或产生重大影响。
作为决策者,云原生应用的开发人员通常不会意识到企业基础设施需求或现有部署的影响,同时要考虑到这些部署通常是新的,并且经常在公共或私有云环境中进行部署。云技术将基础设施抽象为可编程 API,开发人员正在定义应用程序在该基础设施的每一层的构建方式。在有负载平衡器的情况下,开发人员会选择要使用的类型、部署方式以及启用哪些功能。它们以编程的方式对负载平衡器的行为进行编码 —— 随着程序在部署的生存期内增长、收缩和功能上进化时,它如何动态响应应用程序的需要。开发人员将基础设施定义为代码 —— 包括基础设施配置及其运维。
### 开发者为什么定义基础设施?
编写如何构建和部署应用程序的代码实践已经发生了根本性的转变,它体现在很多方面。简而言之,这种根本性的转变是由两个因素推动的:将新的应用功能推向市场所需的时间(*上市时间*)以及应用用户从产品中获得价值所需的时间(*获益时间*)。因此,新的程序写出来就被持续地交付(作为服务),无需下载和安装。
上市时间和获益时间的压力并不是新的,但由于其他因素的加剧,这些因素正在加强开发者的决策权力:
* 云:通过 API 将基础设施定义为代码的能力。
* 伸缩:需要在大型环境中高效运维。
* 速度:马上需要交付应用功能,为企业争取竞争力。
* 微服务:抽象框架和工具选择,进一步赋予开发人员基础设施决策权力。
除了上述因素外,值得注意的是开源的影响。随着开源软件的普及和发展,开发人员手中掌握了许多应用程序基础设施:语言、运行时环境、框架、数据库、负载均衡器、托管服务等。微服务的兴起使应用程序基础设施的选择民主化,允许开发人员选择最佳的工具。在负载均衡器的选择上,那些能与云原生应用的动态特质紧密集成并及时响应的方案将更胜一筹。
### 总结
当你在仔细考虑你的云原生应用设计时,请与我一起讨论“[在云中使用 NGINX 和 Kubernetes 进行负载平衡](http://www.oreilly.com/pub/e/3864?intcmp=il-webops-webcast-reg-webcast_new_site_developer_defined_application_delivery_body_text_cta)”。我们将检测不同公共云和容器平台的负载平衡功能,并通过一个宏应用的案例研究。我们将看看它是如何被变成较小的、独立的服务,以及 NGINX 和 Kubernetes 的能力是如何拯救它的。
---
作者简介:
Lee Calcote 是一位创新思想领袖,对开发者平台和云、容器、基础设施和应用的管理软件充满热情。先进的和新兴的技术一直是 Calcote 在 SolarWinds、Seagate、Cisco 和 Pelco 时的关注重点。他是技术会议和聚会的组织者、写作者、作家、演讲者,经常活跃在技术社区。
---
via: <https://www.oreilly.com/learning/developer-defined-application-delivery>
作者:[Lee Calcote](https://www.oreilly.com/people/7f693-lee-calcote) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,876 | React 许可证的五宗罪 | https://opensource.com/article/17/9/5-reasons-facebooks-react-license-was-mistake | 2017-09-18T10:43:00 | [
"Facebook",
"React"
] | /article-8876-1.html |
>
> Facebook 公司的 BSD+专利许可证失败的原因不是因为许可证本身,而是因为它忽略了开源软件更深层次的本质。
>
>
>

2017 年 7 月,Facebook 公司应用于 react 等项目的许可证组合[被 Apache 软件基金会禁止使用](https://meshedinsights.com/2017/07/16/apache-bans-facebooks-license-combo/)。该许可证组合曾被 Facebook 应用于其所有作为开源软件发布的项目,它使用了被 OSI 批准的广泛使用的[非互惠](https://meshedinsights.com/2017/04/04/permissive-and-copyleft-are-not-antonyms/)许可证 BSD 3-Clause,并且加入了宽泛的、非互惠的专利授权条款,但为了应对挑衅者,该许可证组合的终止规则同样宽泛。这种组合代表了一种新的开源许可证,我称之为“Facebook BSD+专利许可证”(FB + PL),在我看来,该许可证试图与 GPL v2 和 Apache v2 许可证保持兼容,以规避这些许可证所指称的不兼容性。
[使用 FB + PL 许可证的 Apache 项目仍在发挥作用](https://meshedinsights.com/2017/07/16/apache-bans-facebooks-license-combo/),但是我认为 Facebook 所犯错误的原因可能不会立即被一贯秉持实用主义态度的软件开发人员们所注意到。例如,[由律师转行做程序员的 Dennis Walsh 针对这个问题表示](https://medium.com/@dwalsh.sdlr/react-facebook-and-the-revokable-patent-license-why-its-a-paper-25c40c50b562):“它没有什么实质意义”。他的观点是,FB + PL 许可证仅对你所使用的适用该许可证的特定软件项目产生影响,专利授权撤回的后果对另一个专利持有者来说并不严重。他得出结论说:
>
> “Facebook 想要推广开源软件同时不被起诉——这是一个崇高的目标。为此,它可以使用一些苛刻的条款。但在这种情况下,由于上述实践和法律方面的原因,很难在被攻击之后发现是谁干的。”
>
>
>
Dennis 也许是对的,但这并不重要。Facebook 自出机杼地发明自己的开源许可证是一个坏主意。有一系列非直接风险或具体情境的重要因素需要考虑,Facebook 的许可证几乎将它们完全忽略了。
1. 许可证审批很重要
使用非 OSI 批准的许可证意味着,就像任何专有许可证一样,将其应用于企业用途时总是需要法律审查。[OSI 许可证审批之所以重要](https://meshedinsights.com/2017/07/12/why-osi-license-approval-matters/),是因为它为开发人员提供了一种指示,即社区评估已经认可了该许可证,并认为它以不会产生不可接受风险的方式提供了软件自由。如果 Facebook 采取了寻求 OSI 批准的路线,那么很有可能一开始就会发现并且回避了他造成的问题。实际上有一种 OSI 批准的许可证能够实现 Facebook 的明确目标([BSD + 专利许可证](https://opensource.org/licenses/BSDplusPatent))。该许可证最近刚被提交和批准,这看起来是 Facebook 可以考虑采用的合理替代方案。
2. 较少的提前许可
不仅仅是许可证批准很重要。任何有关创新自由的不确定性都会阻碍开发人员使用代码。与使用相关的含糊条款在使用前需要消除歧义——该步骤相当于为了继续推进而寻求许可。出于同样的原因,[公共领域对于软件开发者来说是不利的](https://meshedinsights.com/2017/03/16/public-domain-is-not-open-source/),因为使用条款规定了在采用之前需要寻求许可。由于需要向项目添加一个与 Facebook 相关的法律文件,每个企业开发人员现在都需要与法律顾问联系,才能继续推进。即使答案是“是的,可以”,他们仍然不得不停下来寻求许可。[正如 Aaron Williamson 指出的那样](https://github.com/facebook/react/issues/10191#issuecomment-316380810),这可能不是他们的反应。
3. 不公平的比赛场
Facebook 使用的专利授权条款为其提供了特殊权利和保护,而项目中的其他人并不享有。这使得开源开发人员感到紧张,原因与<ruby> <a href="https://webmink.com/2010/09/01/copyright-aggregation/"> 贡献者协议 </a> <rp> ( </rp> <rt> contributor agreements </rt> <rp> ) </rp></ruby>一样。开源项目的软件自由的全部价值在于它创造了一个[安全的空间](https://meshedinsights.com/2017/05/03/is-the-gpl-really-declining/#safe),在其中每个人都是平等的,它保护每个人免受他人动机的影响。刻意为自己设置额外权利是反社会的行为,可悲的是,开源社区有很多不平等权利最终被滥用的例子。
4. 隐含的授权无效
许多开源法律机构认为,通过对用于任何目的的软件使用授予许可,即使没有提及专利的许可证也隐含授予专利许可。通过添加任意形式的外部专利授权,Facebook 表示它不认为隐含授权足够(最好的情形)或存在(最差的情形)。专注于开源软件的律师们认为,这是 Facebook 毫无根据的扩大化解释。这也是[为什么 OSI 放弃了对 CC0 许可协议批准的原因](https://opensource.org/faq#cc-zero)。
5. Apache 基金会的规则
虽然我没有找到清晰和完整的理由,但 Apache 基金会似乎已经对 Facebook 的许可证组合做出了裁定,因为 Apache 基金会认为,Facebook 的专利授权比 Apache v2 许可证中的专利条款限制性更强。Apache 基金会希望自身是一个中立的软件来源——一个“万能供血者”,而对此产生妨害的条款很有可能违背其规则。对于旨在成为 Apache 生态系统一部分的组件,与 Apache 的许可证产生混乱的做法看起来不太明智。
所以争论这个问题没有什么意义,因为风险很小,问题微不足道。以一种忽视我上面列出的五个社区规范的方式行动无益于任何人,甚至也帮不了 Facebook。虽然 [Facebook 目前正在坚持己见](https://code.facebook.com/posts/112130496157735/explaining-react-s-license/),但我希望它能够改正,我愿意提供帮助。
---
作者简介:Simon Phipps 是<ruby> “公开软件” <rp> ( </rp> <rt> Public Software </rt> <rp> ) </rp></ruby>开源项目的发起人,以志愿者身份担任 OSI 和<ruby> 文档基金会 <rp> ( </rp> <rt> The Document Foundation </rt> <rp> ) </rp></ruby>的理事。
**译者简介:**薛亮,集慧智佳知识产权咨询公司高级咨询师,擅长专利检索、专利分析、竞争对手跟踪、FTO分析、开源软件知识产权风险分析,致力于为互联网企业、高科技公司提供知识产权咨询服务。
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
8,877 | WordPress 和 React 分手后,你支持使用哪种 JavaScript 框架替代? | https://ahmadawais.com/wordpress-react-vue-preact-development/ | 2017-09-18T13:01:00 | [
"WordPress",
"React"
] | https://linux.cn/article-8877-1.html | 
WordPress 和 ReactJS 分道扬镳了,WordPress 的共同创始人 Matt Mullenweg 在其博客中[宣布](https://ma.tt/2017/09/on-react-and-wordpress/)了这一消息。
关于 WordPress 之后将采用何种 JavaScript 框架,Matt 并未宣布,目前几个选择:
1. [VueJS](https://vuejs.org/)
2. [Preact](https://preactjs.com/)
3. 其它框架([Angular](https://angularjs.org/)、 [Ember](https://www.emberjs.com/)、 [Polymer](https://www.polymer-project.org/)、 [Aurelia](http://aurelia.io/) 等等)
其中, VueJS 和 Preact 的呼声最高。关于它们的优劣势分析如下:
**[VueJS](https://vuejs.org/):**
* **优势:易于学习**
* **优势:与 Laravel 一贯协作良好**
* **优势:比 Preact 更流行,支持的社区更多**
* **优势:贡献者比 Preact 更多**
* **劣势:依赖于关键人物**
* **状态:**Github 上有 [133](https://github.com/vuejs/vue/graphs/contributors) 个核心贡献者,[67152](https://github.com/vuejs/vue/stargazers) 个星标,做了 [209](https://github.com/vuejs/vue/releases) 次发布
* **资金支持:截止至本文写作,** 在社区的支持下,[VueJS 在 OpenCollective](https://opencollective.com/vuejs) 得到了每年 $9,895 的捐助,作者尤雨溪在 [Patreon](https://www.patreon.com/evanyou) 得到了每月 $8,815 的捐助。
我确信 WordPress 使用 VueJS 能够更好。VueJS 有大量的拥护者,而且初学者易于上手。如果采用 VueJS,这对于 WordPress 是极好的。我自己也在几个项目中使用 VueJS,我喜欢它。
此外,这个框架也可以用在 WordPress 之外的项目(比如 Vue 与 Laravel 的集成),这可以让开发者在 WordPress 项目和非 WordPress 项目中发挥其经验。 有很多开发者都同时参与 Laravel 和 WordPress 项目,所以如果使用同一个框架,有助于同时推动 Laravel、 VueJS 和 WordPress 的发展。
**[PreactJS](https://preactjs.com/):**
* **优势:易于过渡**
* **优势:与 VueJS 大致相同的资金支持,不断推进的社区**
* **优势:基于 React 的子集库仍然可以通过 Preact 和 compat 得到良好支持**
* **劣势:过渡也许导致代码混乱和困扰(针对初学者)**
* **劣势:依赖于关键人物**
* **状态:在 GitHub 上有** [100](https://github.com/developit/preact/graphs/contributors) 个核心贡献者, [14319](https://github.com/developit/preact/stargazers) 个星标,做了 [114](https://github.com/developit/preact/releases) 次发布
* **资金支持:**截止至本文写作,在社区的支持下, Preact 在 [OpenCollective](https://opencollective.com/preact) 得到了 $16,087 的捐助
PreactJS 有其优势,但我不是评价它的合适人选(我仅在两个小项目中稍微用过它)。不过看起来从 React 过渡到 Preact 非常容易,这也许会促使开发者选择 Preact,但我认为这是个错误的理由。这只会让开发者在适应这一整套新的 JavaScript 框架生态、node 模块、Webpack,再加上把 Preact 作为 React 的别名来使用时更加困惑,还可能带来代码异味,让代码变得更乱。
本文作者在几个平台上做了投票,欢迎参与你的意见:
* [Twitter 投票](https://twitter.com/MrAhmadAwais/status/908551927264305152)
* Facebook 讨论:Advanced WordPress Fb Group 的 [讨论](https://ahmda.ws/2h6skDa) & [投票](https://ahmda.ws/2h5ZPFD)
* GitHub Issue: [Choosing the JavaScript Framework for Gutenberg (~WordPress)](https://github.com/WordPress/gutenberg/issues/2733)
那么你的意见呢?
(题图:maxprog.net.pl)
| 200 | OK | ReactJS and WordPress are breaking up. Matt Mullenweg (co-founder of WordPress) [announced](https://ma.tt/2017/09/on-react-and-wordpress/) is it today.
I was half asleep when I read the announcement and since then I have [commented](https://ma.tt/2017/09/on-react-and-wordpress/#comment-587782), created a [Twitter Poll](https://twitter.com/MrAhmadAwais/status/908551927264305152), a Facebook [poll/thread](https://www.facebook.com/groups/advancedwp/permalink/1624383604290514/), started a Gutenberg issue [Choosing the JavaScript Framework for Gutenberg (~WordPress)](https://github.com/WordPress/gutenberg/issues/2733), and now I am writing this post. It’s an exciting news.
Since I believe the community is moving in the right direction here — this [issue](https://github.com/WordPress/gutenberg/issues/2733) is where one could share their thoughts about different JavaScript Frameworks for Gutenberg (that goes into the WordPress Core).
## 🚢 JavaScript Frameworks[#](#%f0%9f%9a%a2-javascript-frameworks)
IMHO there are two prominent contenders here.
[VueJS](https://vuejs.org/)
[Preact](https://preactjs.com/)
- Other options (
[Angular](https://angularjs.org/), [Ember](https://www.emberjs.com/), [Polymer](https://www.polymer-project.org/), [Aurelia](http://aurelia.io/), etc.)
Just to kick-start the discussion, here’re a few thoughts from the top of my head.
**PRO**: Beginner friendly.
**PRO**: Proven track-record of success with Laravel.
**PRO**: Way more popular as compared to Preact with a great amount of community support.
**PRO**: More contributors than Preact.
**CONS**: Key person dependency.
**STATS**: [133](https://github.com/vuejs/vue/graphs/contributors) Core Contributors on GitHub, [207772](https://github.com/vuejs/vue/stargazers) Stargazers, and [209](https://github.com/vuejs/vue/releases) Releases.
**MONETARY BACKING: **At the time of writing, [VueJS OpenCollective](https://opencollective.com/vuejs) ($9,895/year — New campaign only four days old) and [Evan You’s Patreon page](https://www.patreon.com/evanyou) ($8,815/month) monetary backing from the community. Sören in comments pointed out that OpenCollective of Vue is only four days old.
🎯 I truly believe that WordPress can do a lot better with VueJS. VueJS has a huge set of followers and it’s easier for beginners to adopt. This can also turn into a big win for WordPress if done right. I have used VueJS myself, in several projects, and I love it.
Also, a framework that’s used outside of WP (such as Vue and its integration with Laravel), allows developers to use their experience in WP projects and non-WP projects.
There’s already a large cross-over of Laravel/WP devs, so having the same js framework makes a lot of sense as those devs can contribute to help drive Laravel, Vue, and WP forward all at the same time. — *Jason Bahl.*
**PRO**: Easier transition.
**PRO**: Evolving community with about the same amount of monetary support as of VueJS.
**PRO**: A subset of React based libraries would still be well supported with Preact and with compat.
**CON**: Transition could lead to messy code and confusion (for beginners).
**CONS**: Key person dependency.
**STATS**: [100](https://github.com/developit/preact/graphs/contributors) Core Contributors on GitHub, [36666](https://github.com/developit/preact/stargazers) Stargazers, and [114](https://github.com/developit/preact/releases) Releases.
**MONETARY BACKING: **At the time of writing, [Preact OpenCollective](https://opencollective.com/preact) ($16,087) monetary backing from the community.
While PreactJS has its benefits, I am not the right person to ask for an opinion about it (since I have only slightly used Preact in two small projects). Though, it does look like that transitioning from React to Preact is very easy. That can motivate developers to chose Preact. I think this would be the wrong reason to chose it.
🤔 I think this would be the wrong reason to chose it. It will only confuse the developers trying to adapt to this whole new eco-system of JavaScript frameworks, node modules, Webpack, and now aliasing Preact over React? Which could also lead to the code smells. Messier code.
### Resources:[#](#resources)
Or you can [Tweet](https://twitter.com/MrAhmadAwais/status/908551927264305152) your thoughts, share your explanation on [Facebook](https://www.facebook.com/groups/advancedwp/permalink/1624383604290514/), and drop by in [this issue](https://github.com/WordPress/gutenberg/issues/2733) at Gutenberg’s GitHub repository.
Hey ahmad, thanks for your writeup.
One addition: the OpenCollective is an annual budget and patreon is per month. Vue.js is quite new on OpenCollective, they announced it 4 days ago.
Thanks for the info, I knew something was wrong there. I’ll add that to the article.
Yep, I agree. +1 for Vue.
The best thing to do, rather than the **** 1+ reply, would be to vote in the Twitter Pole. Thanks. |
8,879 | 创建更好的灾难恢复计划 | https://www.oreilly.com/ideas/creating-better-disaster-recovery-plans | 2017-09-19T09:09:00 | [
"灾难恢复",
"备份"
] | https://linux.cn/article-8879-1.html | 
>
> Tanya Reilly 的五个问题:相互依赖的服务如何使恢复更加困难,为什么有意并预先管理依赖是个好主意。
>
>
>
我最近请 Google 的网站可靠性工程师 Tanya Reilly 分享了她关于如何制定更好的灾难恢复计划的想法。Tanya 将在 10 月 1 日到 4 日在纽约举行的 O'Reilly Velocity Conference 上发表了一个题为《[你有没有试着把它关闭之后再打开?](https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/61400?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta)》的演讲。
### 1、 在计划备份系统策略时,人们最常犯的错误是什么?
经典的一条是“**你不需要备份策略,你需要一个恢复策略**”。如果你有备份,但你尚未测试恢复它们,那么你没有真正的备份。测试不仅仅意味着知道你可以获得数据,还意味着知道如何把它放回数据库,如何处理增量更改,甚至如果你需要的话,如何重新安装整个系统。这意味着确保你的恢复路径不依赖于与数据同时丢失的某些系统。
但测试恢复是枯燥的。这是人们在忙碌时会偷工减料的那类事情。这值得花时间使其尽可能简单、无痛、自动化,永远不要靠任何人的意志力!同时,你必须确保有关人员知道该怎么做,所以定期进行大规模的灾难测试是很好的。恢复演练是个好方法,可以找出该过程的文档是否缺失或过期,或者你是否没有足够的资源(磁盘、网络等)来传输和重新插入数据。
### 2、 创建<ruby> 灾难恢复 <rt> disaster recovery </rt></ruby> (DR) 计划最常见的挑战是什么?
我认为很多 DR 是一种事后的想法:“我们有这个很棒的系统,我们的业务依赖它……我猜我们应该为它做 DR?”而且到那时,系统会非常复杂,充满相互依赖关系,很难复制。
第一次安装的东西,它通常是由人手动调整才正常工作的,有时那是个具体特定的版本。当你构建*第二*个时,很难确定它是完全一样的。即使在具有严格的配置管理的站点中,你也可能丢了某些东西,或者过期了。
例如,如果你已经失去对解密密钥的访问权限,那么加密备份没有太多用处。而且任何只在灾难中使用的部分都可能从你上次检查它们过后就破环了。确保你已经涵盖了所有东西的唯一方法做认真地故障切换。当你准备好了的,就计划一下你的灾难(演练)吧!
如果你可以设计系统,以使灾难恢复模式成为正常运行的一部分,那么情况会更好。如果你的服务从一开始就被设计为可复制的,添加更多的副本就是一个常规的操作并可能是自动化的。没有新的方法,这只是一个容量问题。但是,系统中仍然存在一些只能在一个或两个地方运行的组件。偶然计划中的假灾难能够很好地将它们暴露出来。
顺便说一句,那些被遗忘的组件可能包括仅在一个人的大脑中的信息,所以如果你自己发现说:“我们不能在 X 休假回来前进行 DR 故障切换测试”,那么那个人是一个危险的单点失败。
仅在灾难中使用的部分系统需要最多的测试,否则在需要时会失败。这个部分越少越安全,且辛苦的测试工作也越少。
### 3、 为什么服务相互依赖使得灾难恢复更加困难?
如果你只有一个二进制文件,那么恢复它是比较容易的:你做个二进制备份就行。但是我们越来越多地将通用功能分解成单独的服务。微服务意味着我们有更多的灵活性和更少地重新发明轮子:如果我们需要一个后端做一些事情,并且有一个已经存在,那么很好,我们就可以使用它。但是一些需要保留很大的依赖关系,因为它很快会变得纠缠。
你可能知道你直接使用的后端,但是你可能不会注意到有新的后端添加到你使用的库中。你可能依赖于某个东西,它也间接依赖于你。在依赖中断之后,你可能会遇到一个死锁:两个系统都不能启动,直到另一个运行并提供一些功能。这是一个困难的恢复情况!
你甚至可以最终遇到间接依赖于自身的东西,例如你需要配置启动网络的设备,但在网络关闭时无法访问该设备。人们通常会提前考虑这些循环依赖,并且有某种后备计划,但是这些本质上是不太行得通的方式:它们只适用于极端情况,并且以不同的方式使用你的系统、进程或代码。这意味着,它们很可能有一个不会被发现的问题,直到你真的,真的需要它们的工作的时候才发现。
### 4、 你建议人们在感觉需要之前就开始有意管理其依赖关系,以防止潜在的灾难性系统故障。为什么这很重要,你有什么建议有效地做到这一点?
管理你的依赖关系对于确保你可以从灾难中恢复至关重要。它使操作系统更容易。如果你的依赖不可靠,那么你就不可靠,所以你需要知道它们是什么。
虽然在它们变得混乱后也可以开始管理依赖关系,但是如果你早点开始,它会变得更容易一些。你可以设置使用各种服务策略——例如,你必须在堆栈中的这一层依赖于这组系统。你可以通过使其成为设计文件审查的常规部分,引入考虑依赖关系的习惯。但请记住,依赖关系列表将很快变得陈旧。如果你有程序化的发现依赖关系的方式,甚至强制实施依赖,这是最好的。 [我的 Velocity 谈话](https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/61400?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta)涵盖了我们如何做到这一点。
早期开始的另一个优点是,你可以将服务拆分为垂直“层”,每个层中的功能必须能够在下一个层启动之前完全在线。所以,例如,你可以说网络必须能够完全启动而不借助任何其他服务。然后说,你的存储系统应该仅仅依赖于网络,程序后端应该仅仅依赖于网络和存储,等等。不同的层次对于不同的架构是有意义的。
如果你提前计划,新服务更容易选择依赖关系。每个服务应该只依赖堆栈中较低的服务。你仍然可以结束循环,在相同的层次服务上批次依赖 —— 但是它们可以更加紧密地包含,并且在逐个基础上处理更容易。
### 5、 你对 Velocity NY 的其他部分感兴趣么?
我整个星期二和星期三的时间表都完成了!正如你可能收集的那样,我非常关心大型相互依赖的系统的可管理性,所以我期待听到 [Carin Meier 关于管理系统复杂性的想法](https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/62779?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta)、[Sarah Wells 的微服务](https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/61597?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta)和 [Baron 的可观察性](https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/61630?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta) 的谈话。我非常着迷听到 [Jon Moore 关于 Comcast 如何从年度发布到每天发布的故事](https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/62733?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta)。作为一个前系统管理员,我很期待听到 [Bryan Liles 对这个职位走向的看法](https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/62893?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta)。
---
作者简介:
Nikki McDonald 是 O'Reilly Media,Inc. 的内容总监。她住在密歇根州的安娜堡市。
Tanya Reilly 自 2005 年以来一直是 Google 的系统管理员和站点可靠性工程师,致力于分布式锁、负载均衡和引导等底层基础架构。在加入 Google 之前,她是爱尔兰最大的 ISP eircom.net 的系统管理员,在这之前她担当了一个小型软件公司的整个 IT 部门。
---
via: <https://www.oreilly.com/ideas/creating-better-disaster-recovery-plans>
作者:[Nikki McDonald](https://www.oreilly.com/people/nikki-mcdonald), [Tanya Reilly](https://www.oreilly.com/people/5c97a-tanya-reilly) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,880 | WordPress 弃用 React,并将进行重写 | https://ma.tt/2017/09/on-react-and-wordpress/ | 2017-09-19T08:46:00 | [
"WordPress",
"React"
] | https://linux.cn/article-8880-1.html | 
开源网络出版软件 WordPress 的联合创始人 Matt Mullenweg 日前[表示](https://ma.tt/2017/09/on-react-and-wordpress/),出于对 Facebook 开源许可证中专利条款的担忧,WordPress 社区将不再使用 Facebook 的 React JavaScript 库。
Mullenweg 在一篇[博客文章](https://ma.tt/2017/09/on-react-and-wordpress/)中对其决定做出了解释。几个星期之前,即便是在 Apache 基金会表示了[不再允许其项目使用 Facebook 许可证](https://issues.apache.org/jira/browse/LEGAL-303)后,Facebook 还是[决定保留](https://code.facebook.com/posts/112130496157735/explaining-react-s-license/)其在 React 许可证中附加的专利条款。Mullenweg 认为,试图去删除该专利条款将“增加他们在对抗无事实根据的诉讼方面所花费的时间和费用”。
Mullenweg 表示,他不是在评论 Facebook 或者认为 Facebook 错了。Facebook 的决定对于 Facebook 来说是正确的,这是他们的工作,Facebook 可以决定以任何他们想要的方式来授权其软件。但对于 Mullenweg 来说,Facebook 的意图已经非常明确了。
几年之前,[Automattic 将 React 作为基础来重写 WordPress.com 的前端 Calypso](https://developer.wordpress.com/calypso/),这应该是基于 React 的最大的开源项目之一。正如 Automattic 的法律顾问所写,[Automattic 做出了最好不要牵涉到专利问题的决定](https://github.com/Automattic/wp-calypso/issues/650#issuecomment-235086367),在今天仍是如此。
总体来说,Mullenweg 过去一直对 React 很满意。最近,WordPress 社区开始将 React 用于 [Gutenberg](https://make.wordpress.org/core/2017/02/17/dev-chat-summary-february-15th-4-7-3-week-3/) 项目,[这是该社区多年以来最大的核心项目](https://ma.tt/2017/08/we-called-it-gutenberg-for-a-reason/)。人们在 React 方面的经验以及 React 社区(包括 Calypso)的规模,是 WordPress 将 React 用于 Gutenberg 项目的考虑因素之一。这使得 React 成为 WordPress 以及为 WordPress 编写的数以万计插件的事实上的标准。
Mullenweg 在博客里表示,WordPress 曾准备了一份几千字的公告,阐述了 React 是多么伟大,WordPress 如何正式采用了 React,以及鼓励插件同样采用 React。在该公告中,Mullenweg 一直希望专利问题能够以一种让用户放心使用的方式解决。
但这份公告现在不会被公布了。Mullenweg 表示,Gutenberg 项目将会退而采用另外的库来进行重写。这使得 Gutenberg 项目至少要延迟几个星期,其发布日期可能要推到明年。
Automattic 还将采用另外的同样的库来重写 Calypso,这将需要更长的时间。Automattic 目前还未牵涉到专利条款的问题当中。虽然会对业务造成短期影响,但是从内核的长期一致性来考虑,重写是值得的。WordPress 的内核升级涉及到全部网页的四分之一以上,让它们全部继承专利条款问题令人担忧。
Mullenweg 认为,Facebook 的专利条款实际上比许多公司可以采取的其他方法更清晰,Facebook 已经成为不错的开源贡献者之一。让全世界夸赞 Facebook 专利条款不是 Automattic 的工作,而是 Facebook 自己的战斗。
Mullenweg 在博客中表示,采用哪个库来重写将会在另外的帖子中公布,这主要是技术性的决定。Automattic 将会寻求具备 React 的大部分优势,同时又没有给许多人造成混淆和威胁的专利条款包袱的替代选择,并请大家就这些问题提供反馈。目前针对替换的库已经有了[几个建议](/article-8877-1.html),包括 VueJS 、Preact 等。
另据 [TechCrunch 报道](https://techcrunch.com/2017/09/15/wordpress-to-ditch-react-library-over-facebook-patent-clause-risk/),针对 Facebook 公司的专利条款,一些最激烈的批评家认为其将“特洛伊木马”引入了开源社区。对于 Mullenweg 在博客中发布的决定,一位评论家称之为“艰难但重要的决定”,其他评论家则将其称为“明智”和“有益”的决定。
而另外一些评论则发出了警告:“不要过度反应。过去五六年来,WordPress 生态系统的动荡和流失已经足够多了。Facebook 的业务规模和范围使得该条款变得非常可怕。它们最终必须放弃。”
---
**译者简介:**薛亮,集慧智佳知识产权咨询公司高级咨询师,擅长专利检索、专利分析、竞争对手跟踪、FTO分析、开源软件知识产权风险分析,致力于为互联网企业、高科技公司提供知识产权咨询服务。
| 200 | OK | Big companies like to bury unpleasant news on Fridays: A few weeks ago, [Facebook announced](https://code.facebook.com/posts/112130496157735/explaining-react-s-license/) they have decided to dig in on their patent clause addition to the React license, even after Apache had said [it’s no longer allowed for Apache.org projects](https://issues.apache.org/jira/browse/LEGAL-303). In their words, removing the patent clause would "increase the amount of time and money we have to spend fighting meritless lawsuits."
I'm not judging Facebook or saying they're wrong, it's not my place. They have decided it's right for them — it's their work and they can decide to license it however they wish. I appreciate that they've made their intentions going forward clear.
A few years ago, Automattic used React as the basis for the [ground-up rewrite of WordPress.com we called Calypso](https://developer.wordpress.com/calypso/), I believe it's one of the larger React-based open source projects. As our general counsel wrote, [we made the decision that we'd never run into the patent issue](https://github.com/Automattic/wp-calypso/issues/650#issuecomment-235086367). That is still true today as it was then, and overall, we’ve been really happy with React. More recently, the WordPress community started to use React for [Gutenberg](https://make.wordpress.org/core/2017/02/17/dev-chat-summary-february-15th-4-7-3-week-3/), [the largest core project we've taken on in many years](https://ma.tt/2017/08/we-called-it-gutenberg-for-a-reason/). People's experience with React and the size of the React community — including Calypso — was a factor in trying out React for Gutenberg, and that made React the new de facto standard for WordPress and the tens of thousands of plugins written for WordPress.
We had a many-thousand word announcement talking about how great React is and how we're officially adopting it for WordPress, and encouraging plugins to do the same. I’ve been sitting on that post, hoping that the patent issue would be resolved in a way we were comfortable passing down to our users.
That post won't be published, and instead I'm here to say that **the Gutenberg team is going to take a step back and rewrite Gutenberg using a different library**. It will likely delay Gutenberg at least a few weeks, and may push the release into next year.
Automattic will also use whatever we choose for Gutenberg to rewrite Calypso — that will take a lot longer, and Automattic still has no issue with the patents clause, but the *long-term consistency* with core is worth more than a *short-term hit* to Automattic’s business from a rewrite. Core WordPress updates go out to over a quarter of all websites, having them all inherit the patents clause isn’t something I’m comfortable with.
I think Facebook’s clause is actually clearer than many other approaches companies could take, and Facebook has been one of the better open source contributors out there. But we have a lot of problems to tackle, and convincing the world that Facebook’s patent clause is fine isn’t ours to take on. It’s their fight.
The decision on which library to use going forward will be another post; it’ll be primarily a technical decision. We’ll look for something with most of the benefits of React, but without the baggage of a patents clause that’s confusing and threatening to many people. Thank you to everyone who took time to share their thoughts and give feedback on these issues thus far — we're always listening.
**Update:** This post received an incredible response from the wider web and open source community, and [a few days later Facebook reversed their position](https://ma.tt/2017/09/facebook-dropping-patent-clause/).
We think this is a wise decision, Matt. We look forward to seeing what framework/library is chosen moving forward.
Wonder if they will look at Vue.
Exactly what I’ve first thought of too! Vue is awesome.
Agreed! Vue would be a friendly alternative that is powerful, yet similar to React.
Seconded (or thirded?) – Vue would be my hands-down go-to.
Um… yeah, my first thought too… VueJS!
I think Vue.js is prepping to grow https://medium.com/the-vue-point/vue-is-now-on-opencollective-1ef89ca1334b
It’s likely that Vue would infringe on patents Facebook may hold on React related tech, so if you’re afraid of Facebook React patents, Vue would be a very bad choice.
But Vue uses MIT Licence, which is worse than the React licence, no? The MIT license doesn’t grant any patents at ALL. They don’t have a revocation clause because they are not giving any patents away (or ensuring you they don’t have/won’t get any) so MIT-only is leaving devs in the dark – at least Facebooks license is clear on intent.
Vue is awesome, I hope they decide to use it.
Is this going to be a problem with Vue? https://github.com/vuejs/vue/blob/dev/package.json#L85
Have you guys evaluated Oracle JET?
Honestly that hasn’t come up yet, but I’ll note it.
Then, please, burn the note…
Really? Lol
I personally feel Vue would be a better long-term option and already has buy-in in the PHP community through Laravel. It’s easier to learn and doesn’t require much outside the core to build most projects. We’re already using it internally on quite a few projects. Would love to talk about it a bit more next month in Seattle!
That’s been a frequently suggested one and the team has met with Vue’s lead developer.
Excellent! I’m biased, but I love it!
Yes! Vue all the things! I’m currently in the process of converting all of our React components to Vue because they’re so much easier to work with, more flexible and progressive.
Yeah, if a ReactNative equivalent isn’t critical, Vue or Marko are both great choices.
+1 on Vue, I was a previous Angular 1 Dev but Angular 2 is over engineered and has a greater learning curve. Vue also has great SSR support which we are using as well with our PHP code. PHP -> Node back to PHP to output the HTML.
That’s so cool. And moving away from React is a wise choice.
Vue +1
Already used by the Laravel community, currently the go to PHP MVC framework
Will this allow for plugins be developed in Vue – or the new language chosen?
+1 for vue
Vue please
Oh happy day!
Respect the decision Matt. Not easy but that’s leadership.
Any reason why the team wouldn’t go with something like Preact? https://preactjs.com/
Preact is definitely one that’s being considered.
Also check out Inferno (MIT Licensed).
I was going to say the same thing. Preact is pretty much drop-in replacement (using preact-compat), fully migrating is relatively straightforward too.
Preact and Vue are likely to infringe any Facebook patents related to React, so that seems like a dumb move if the whole motivation of moving away from React is protection against patents.
With React, at least Facebook can’t sue you for patent infringement if you are meeting the license conditions. With Vue and Preact you have no such protection.
That’s not the motivation — take another read of the post.
Wow! Big decision. This point is particularly resonant:
Kudos on making a tough but important decision.
Are you considering API-compatible alternatives like Preact?
I wonder if the complete lack of a patent grant is better or worse than an explicitly revokable one.
The lack of an explicit and revokable grant is technically worse, as any given tech might infringe, regardless of copyright license. Getting people to understand that is easier said than done though.
I would probably lean towards Preact, as I feel the model of React components combined with Redux or similar is much better than the alternatives for testable and composable code.
Here comes Vue 🙂 But excellent decision by Automattic on the rewrite. Logical decisions shock us as much as outrageous ones these days
This is great news! +1 for Vue.
I would go for Vue because it plays nice with PHP. The best alternative to React is Preact, many companies are switching to Preact right now as it’s almost identical but way smaller and even more performant.
Hi Matt,
Please consider Vuejs as the JS library for Gutenberg. It’s open source and having a huge community.
A bit worried on “it’ll be primarily a technical decision”. I hope the new framework is not just have a good architecture but also friendly for the WordPress developers.
You’re not worried at all. Just trying to bring toxic liberal politics into the mix.
Well that was quite the non-sequitur.
mustache.js is probably a better choice for WordPress given that it is the only JS compatible with AMP HTML. Many well know sites are using mustache.js https://github.com/janl/mustache.js/wiki/beard-competition
Ghost went with Ember and I think they’re pretty happy with it 🙂
I though I was alone thinking in Ember 😀 Currently working in a React project and I just keep missing Ember so much.
Have you and the team looked into Preact https://github.com/developit/preact? Same API wrappers we expect from React, MIT license and smaller overhead. Without sparking debate, React teaches sane Javascript fundamentals. It’s a great tool and I would hate to see WordPress rewrite, from the ground up, two great projects that up the ante when it comes to how great WordPress can be.
I’ve switched to Preact (MIT) instead of React.
3Kb is impactful for our mobile users, and it doesn’t require hacks to use custom HTML attributes.
I’m very curious to see what happens there. I feel like Vuejs has a pretty good run at taking over the React pieces.
While the rewrite of the code will take a while, this is likely to be a blessing in disguise.
Considering how large the WordPress Development Community is the onboarding of another lib such as Vue (hint hint) is an easier path for developers.
Good decision. I’m for Vue as well.
Vue.js would be good alternative have been using it and would love to see that being used with WordPress 🙂
Hi Matt,
I have something that will blow away React and all other cutting edge web frameworks. It’s brand new and based on a simple but consequential new mathematical theory. I use it to power ohayo.computer. I’d say odds are 50-50 it could be a viable option for your use immediately, but even if you don’t use it now I’d bet money you’ll be using it (or someone else’s derivative fork of it) in 2 years time. Happy to explain more to someone on your team over email or phone. I do everything over MIT license or public domain, if that works for you all.
Thanks for WordPress! WP is how I first got into developing seriously a decade ago.
Cheers,
Breck
Oh please. Don’t over-react.
There has been enough turmoil & churn in the WP ecosystem in the last 5, 6 years.
Fb’s business size & range makes the clause scary. They will have to give it up, eventually.
One thing to note for commenters discussing Preact vs Vue.js is Vue.js appears to have a more active community, more commits (2x) and more individuals contributing to it’s Github project than Preact does. Developer community, and not just basing a decision on something having the same syntax as React, will be extremely important for any framework chosen by WordPress. But I don’t doubt that these are the types of things that will be factored into the decision.
+1
Preact benefits directly from React’s ecosystem. Vue’s ecosystem is tiny in comparison: https://npmcharts.com/compare/react,angular,@angular/core,ember-cli,vue Same applies to the number of OSS packages/components available. The reason is obvious, Vue is the younger technology. Not a step back as large as with Angular for instance, but a regression nonetheless.
Thanks for the update, Matt. I’ve always appreciated your candor.
Casting my vote for Vue.js.
I love React! But definitely agree that their license should not be pushed out to ~28% of the web.
Vue definitely looks like the best option! Can’t wait to see how the Gutenberg team takes what they’ve learned so far building on React and applies their learnings to something new. It’s been a fun project to follow.
Happy to read it, Matt. Thanks a lot for taking the step.
One option is to not rewrite, and use preact https://github.com/developit/preact. Same API re-implemented with a clean MIT license.
That’s another one the team has been spending a lot of time with.
+1 for Preact
Another +1 for Vue! 🙂
I know there’s too many already but had to add my +1 for Vue.
Matt, I know this couldn’t have been an easy decision and I commend you on it, as well as for announcing it on a Thursday 🙂 The patent clause really is onerous to those of us building web apps for clients that someday plan on selling them to another party… I know that’s not the 96% of WP sites with a single user… but it’s an important component of WP being “the OS of the web”. It’s got to stay unencumbered.
I think the vue ahead is clear! hehe…
I certainly vote for Vue, but appreciate taking a step back and really considering the landscape. I suspect a lot of the React components can be reused or salvage into Vue, but leave that to those that have been toiling tirelessly on Gutenberg.
Very happy to see this since React has never felt true to the spirit of GPL. But I think that begs the question. The next library can’t be considered only for it’s parity with React, you have to consider the licensing as well. That alone seems to make Vue.jsthe obvious front-runner, right?
Preact is MIT licensed and a drop in replacement
I appreciate that you gave the Gutenberg developers lots of room and freedom and that they have done a great job. Thank you for making the hard decision. I think you said that after the initial work the team would stop and rewrite. It is not that uncommon. Now they can benefit from the experience and progress quickly. 5.0 is going to awesome.
I know this must have been a hard decision. Kudos to you again for making the right call.
Since we are talking also about Gutenberg and large community developers need to jump on board with that then VueJS is that one to use. It has clear syntax and everything needed.
The framework decision should also consider the wide range of people involved: people who love WP because it’s easy to use. React and Preact are way more complex than they need to be.
Happy to read it and Thanks Matt. Vue.js is also one of the good options.
@Dan Orner How Angular is over engineered?
Yes, maybe Angular is not as easy to master as Vue – I agree. The trickiest part of Angular’s learning curve is probably the fact that Angular relies heavily on RxJS, so you have to have a basic understanding of that; the other thing Angular relies on is TypeScript, and some hardcore JavaScript devs just do not want all that “type check system” etc. But knowing RxJS and TypeScript will pay off in the long run anyway.
TypeScript and RxJS were invented to solve a problem, not the other way around. If you do not want to learn that new stack, that doesn’t mean that Angular is over-engineered.
Why bring in the gorilla and a jungle when all you want is a banana?
It seems like most of people think easy learning is the most important factor. And it is really weird that most of people think Angular is hard to learn but the concept of Vue is actually pretty similar with Angular.
I bet if we ask people to use VueJS with TypeScript and RxJS, they will say it is really hard to learn too.
It is a shame that lots of people don’t recognize the benefits of TypeScript for large projects and the power of RxJS.
Anyway, I can’t agree with you more on if you do not want to learn that new stack that doesn’t mean that Angular is over-engineered. It is like a pilot walking into a future Enterprise spaceship and saying it is over-engineered.
What do people think of Mithril?
I suggested using Mithril as a replacement when I posted the original GitHub issue “Replace React with Mithril for licensing reasons” in November 2015 (that Matt linked to for the lawyer’s opinion posted in the issue). Glad to see Automattic is finally seeing the light a couple years later. 🙂
I also outlined in that issue some ideas for developing software tools to support a rapid migration away from React to Mithril, suggesting some small amount of time/money spent by Automattic then on creating React to Mithril automated conversion tools, and perhaps also spent on even better Mithril training materials and even better Mithril third-party libraries, might be a relatively cheap “insurance policy” in case of legal disaster for WordPress developers using React.
Personally, I still feel Mithril.js is one of the best technical and social choices out there. It has only gotten better with time.
Now if only Automattic would also come to see Slack as a competitor for the WordPress community like I explained when I turned down a job interview with Automattic over their insistence of using Slack for the interview: https://web.archive.org/web/20160601144053/https://narrafirma.com/automattic-javascript-engineer-application-by-paul-d-fernhout/
Ironically I use Slack all the time now for my day job writing UIs — but those UIs are not primarily intended to be open source communications tools the way WordPress is. So while I’d rather the company used in-house-hosted MatterMost or Matrix.org services for company communications, there is not so much of an moral conflict there.
I continue to work towards better communications tools in Mithril (e.g. Twirlip7) in my very limited spare time…
Too bad I did not apply for a job at Automattic a couple years earlier or I could have helped steer them away from React and Slack rather than just point out how poisonous those things were after a big commitment had been made.
Matt, it takes a big man to admit the need for a 180 degree course change on such a big project — so kudos for making a tough call. I hope some of the ideas I outlined in that GitHub issue and elsewhere like the link above can help you do the right thing and continue to make Automattic a great success supporting free and open source communication tools like WordPress and beyond.
I posted the following elsewhere before, but this explains the technical reasons why Mithril.js or other vdoms emphasizing the HyperScript API are the best choices.
Personally, I feel templating approaches to making JavaScript-powered UIs like React’s JSX or Angular’s own templating approach or the templating systems in many other UI systems [including Vue] are obsolete.
Modern webapps can use Mithril+Tachyons+JavaScript/TypeScript to write components in single files where all the code is just JavaScript/TypeScript. Such apps don’t need to be partially written in either CSS and some non-standard variant of HTML that reimplements part of a programming language (badly). (Well, there may be a tiny bit of custom CSS needed on top of Tachyons, but very little.)
Here is an example of a coding playground I wrote that way with several examples in it which use that approach: http://rawgit.com/pdfernhout/Twirlip7/master/src/ui/twirlip7.html
So, by writing UIs using HyperScript (plus a vdom library), you can potentially (with some work) replace a backend like Mithril with almost any other vdom or even a non-vdom solution. So, that is another way I mitigate this risk when I have a choice.
Granted, I know many web developers grew up on tweaking HTML and love HTML-looking templates and so they love JSX or whatever and are happy to ignore how hard it is to refactor such non-code stuff in the middle of their applications or validate it (granted, some IDEs are getting better at that). But I came to web development from desktop and embedded development working with systems where you (usually) generated UIs directly from code (e.g. using Swing, Tk, wxWidgets, and so on). I like the idea that standard tools can help me refactor all the code I work on and detect many inconsistencies.
This must have been a tough decision. But when you are doing it right it does matter. I also commend the move.
You folks met with Evan? That’s excellent news. Vue.js makes more sense now than ever. The best thing is it will make the transition easy.
But may I throw in Preact! It has the added benefit of supporting the React ecosystem without having to stick with React itself. Though, it can make our code messier (in a way).
One thing that I think matters the most here, is that we transition the code with coding best practices in mind and not in a way that just saves us time.
✔︎ What I mean to say is, whatever new JavaScript framework we adopt, it’s essential that we do it right. Having Gutenberg implement this new code in the right way will set the foundation of how WordPress developers will learn it.
I have often seen the coding best practices being lost in the midst of converting projects from one framework to another.
I know several developers who had started learning ReactJS because of WordPress. Some even paid to start learning it.
While this is a good move, it’s a frustrating one for many of us. Transition to a new JS framework in the right way and also taking care of documentation (at least relevant to adopting THE new framework) would help mitigate the frustration.
That said, I am thrilled to read that the community is being heard. It’s the high time for us. I’ll play my role in helping with this transition as much as I can.
Gutenberg Boilerplate is already a resource for more than few hundred WordPress developers trying to learn to adapt Gutenberg. I’ll make sure to keep it updated.
I’m supporting the adoption of Vue.js right now, but I must add I have only use Preact in two projects.
Cheers!
Thanks you Matt for taking action. You prevent with the decision a deep split of the WordPress Community! Now we have it behind us and can focus on the next big thing “Gutenberg”!
I just blogged about this topic: WordPress-React Breakup: My Vue on P*react + WordPress Development! — this also includes the links to resources and threads where the community is voting for and discussing different JS frameworks.
Another +1 for Vue!
The React patent issue continues to be a controversial topic among developers and I’m sure this wasn’t an easy decision. I immediately thought of Vue and preact when you mentioned looking for a replacement for React. Good luck and I’m sure whatever direction you end up going will be the best for WordPress and the community.
I’d suggest going down the road of Vue.js. It seems to be the most popular choice. A pain to rewrite but a fresh start moving forward.
Thanks Matt for this decision. My vote is for VueJS as it’s very easy to learn and on the other hand Preact is bit complex and hard for newbies. I have used both ( React and Vue ) and to me Vue is more simpler than react and I hope, WP community can easily adopt vue.
A very honest decision.
Thank you, Matt!
Hi Matt,
even though most of the things I wanted to say have been mentioned (in some way) already, here they are…
If the (only) bigger problem with React is its license, then the next best, and also logical, choice would be Preact.
Having already two quite large React-based applications, Calypso and Gutenberg, and thus having invested a lot of time into learning (and teaching) these concepts and paradigms, something like Preact, with the same API, makes even more sense.
In my opinion, one should not compare the usage of Vue with the one of Preact alone, as there is a huge overlap between React things and Preact things.
What’s more, I suspect that if WordPress were to adopt Preact, one or the other developer and also company might make the move from React to Preact – and thus the Preact-specific community would grow.
So yes, I would like to see some React-ish library make it into Core, but I don’t mind having to spend quite a few hours digging (further) into Vue as well.
Cheers,
Thorsten
@Matt
You have taken a tough decision already. Kudos for it!
Please finish it by choosing Vue.
Preact may be tempting as it will reduce refactoring efforts, but it will be job half done.
Most people who favoured Vue over React, did not do it because of Facebook’s patent clause. Vue has many advantages.
For me #1 is easy to learn. I believe, WordPress, which is so easy to learn itself, should use a JS library which is also equally easy to learn.
I have nothing against Preact, but I wonder what will happen to Preact if, a few years down the line, Facebook steps back on the patent issue!
This.
Vue is much more in line with WordPress philosophy in my opinion. We’ve settled on Vue for our website frontends precisely because of the low overhead of Vue.
Exactly. +1 for Vue.
Licencing is a very important process and when you have a huge community it only gets more complicated. I kind of see why Facebook did that, but it’s going to affect us all.
I see that Vue already been suggested here and Preact as well. I think those 2 are some great alternatives with a supportive community as well.
I’m sure that wasn’t an easy decision but I think it’s the right one! Intresting days laying ahead!
Marko seems to be a better choice.
Very wise and brave decision. Thank you!
Vue +1
+1 for Vue.js
A very hard and brave decision. Thank you!
Vue +1
Awesome news! +1 for Vue of course! Go for it!
I guess I would take a lot of hate comments for this but for the sake of choosing something conceptually clear, be careful with Vue with its angular-like `v-if`, `v-else-if`, `v-*` attrs and web-component like element definitions. Yes it can be used with JSX but something that documents itself by default with meta-language for logic in the template, is a red alert.
I haven’t looked in the patent clause but can’t you just use a react-compatible library and still use the available react-components like `react-select`?
If plugins are not important, a very simple, pure alternative is `inferno.js`. All-functional components. You can go also lower-level but I guess you want to keep it easy for plugin developers and tinkerers of WordPress ecosystem.
Preact enables you to use the available React components. I agree with you about Vue’s angular-like weird meta-language. Preact would be a better alternative than Vue
A valid point, and one I was a little wary about when I was evaluating Vue.
However, I came to the conclusion that my markup is FAR easier to read and maintain with a few basic flow-control directives (if/else and for) rather than context-switching between the template language and the host language (as one does for JSX, ASP, ASP.NET, PHP, etc.).
The context-switching is far more jarring visually, even with no-delimiter switching like JSX, and it encourages unhealthy mixing of template code and business logic.
While someone could certainly abuse the Vue directives by having very complex JS statements in them, having them in those attributes encourages keeping the template simple, describing the intent (e.g., ‘v-for=”button in buttonList”‘), and keeping complex business logic (say, deciding which buttons are in the list) separated into a more appropriate part of the component code.
I’m not “a little wary”. Vue.js wants to draw Angular-like things back in. And yes, people that used and liked and aged with Angular had to change to something like React. These people really want to get back into the “good old Angular-looking thing”. No. Just no.
Logic in template strings is a step back. Built-in data-binding is a step back. `var vm = new Vue({` instead of simple functions for components is a step back. Directives are a step back.
Yes to simple functions. Yes to props/arguments as input. Yes to JSX and babel plugins if you want to extend it. You can do anything with this combo. Super flexible and simple. If you don’t want to switch context for simple ifs and foreach then write a babel plugin. Here is one :https://github.com/AlexGilleran/jsx-control-statements.
Templates are limitations and to make them work there is absolutely unnecessary complexity.
I like that this post is in the “Asides” category 🙂
It would be interesting to see VueJS being used. It would make the adoption of some of my project a little faster.
I’ve seen all the +1s for Vue & Preact however Inferno (https://infernojs.org/) is a contender too in terms of license and parity, not to mention far better performance.
WordPress community is not ready for react yet. People get what they want. Good or bad.
This is something going to change the entire ecosystem of WordPress coding standards, I vote for VueJs.
It’s tough and wise decision.
“the Gutenberg team is going to take a step back and rewrite Gutenberg using a different library”
While you’re at it, make Gutenberg just an editor, not a complete editor screen. Do not kill metaboxes, custom fields and whatever. WordPress is used as a CMS in many projects that need a simple way to insert custom data. We don’t need to use a rocketship like Gutenberg for such simple things. A LOT of my projects don’t even use the editor at all.
Gutenberg is great, I love it since the first time I saw it. I, like many developers before WordPress, created our own custom CMSs back then, and I DID use a blocks concept on my own, so I’m all into this. That being said, we should not kill backward compatibility on such important things like metaboxes just because we’re developing a fantastic new way of editing content. (It’s content, not meta)
The plan from the beginning was to redo the entire editing screen — you can’t make a great experience by taking the old TinyMCE-in-a-box approach to editing and customization. Metaboxes are there, and totally backwards compatible, there’s just FUD saying they won’t be. (I don’t know why this is so persistent, I feel like we’re fighting fake news.) My hope is many (not all) of the plugins using meta boxes upgrade them to Gutenberg boxes, but even if they don’t they’ll still work, it just won’t be as slick and integrated as the ones that update.
Thanks of the clarification. Last time I installed the latest Gutenberg beta, the metaboxes were not there, so I assumed it was decided to kill metaboxes. If this is not the case, my bad.
Perhaps you should be clearer about it Matt: “other things like Metaboxes there will be no problem to provide a legacy interface for a few releases. But I would say that plugin authors should start updating their plugins in late September if they want to benefit from Gutenberg’s launch.” (https://ma.tt/2017/08/we-called-it-gutenberg-for-a-reason/#comment-587619)
But why only now?
License has always been the same.
I had really hoped the license would change, and had hints that it would.
I think license is changing. I hope it is not fake news.
Please check https://code.facebook.com/posts/300798627056246
Great to hear!
Preact benefits directly from React’s large number of OSS packages/components available. Vue’s ecosystem is much smaller in comparison.
Matt, this decision must suuuuck.
But you were mature enough not to flame Facebook.
Kudos to you.
My vote is for Vue
The learning curve of Vue.js and its approachability make it the framework of choice for many developers. The same thing happened with WP, which – in contrast to Joomla or Drupal – allowed us to create simple but customized websites with very limited ‘dev’ knowledge, based only on one or two tutorials. It’s all about products that are easy to start and hard to master. So yeah, +1 for Vue
Worked at places. Migrated to React because, well React. Training provided and everything. Devs didn’t actually like it. Everything is in Vue.js now.
Remember script.aculo.us? Yeah me neither…
Place your bets.
VueJS all the way!
Vue vs Preact.. I am keen to see WordPress team’s analysis and evaluation of these two libraries.
A lot of people are voting for Vue and I can see its growing with new features also.
Vue.js FTW! The usage of Laravel and Vue in various projects are increasing in the WordPress community and a lot of businesses are already working on it. So, if our community chooses Vue as the framework of their choice it would be a lot easier for us to contribute as we are already familiar with it. Just my 2 cents 🙂
Matt, how does using Preact, Vue, etc., address the patent issue? My understanding is that you are still at risk of FB’s patents even if you don’t use React, as explained here:
https://blog.cloudboost.io/3-points-to-consider-before-migrating-away-from-react-because-of-facebooks-bsd-patent-license-b4a32562d268
add the following to your webpack config and you’re done:
alias: {
  'react': 'preact-compat',
  'react-dom': 'preact-compat',
  'create-react-class': 'preact-compat/lib/create-react-class'
},
One thing to consider if you really care about the patents, is to think about what Facebook has patents on, and is there a chance your next choice might infringe on them.
We earlier migrated our projects from React and Angular to Vue, we found Vue is really amazing!
I think a lot of votes here are for Vue not realizing that Preact is also a decent option. Mine was that way like a month ago. As I think most major plugin developers are professional developers, I think that Preact is actually likely the better choice because Preact / React is a highly employable skill and thus developers are likely happy to learn it.
There appear to be about 7x-10x more *high-paying* jobs in React vs. Vue currently,
https://www.indeed.com/jobs?q=react+javascript&l=Chicago%2C+IL
https://www.indeed.com/jobs?q=vue+javascript&l=Chicago%2C+IL
and based on the preliminary 2017 state of JS survey, as React is apparently still more satisfying to developers than Vue, it looks like Vue might never catch up. As someone who had once put in a vote for Vue, I think you should maybe lay out the case for Preact or for each library before taking feedback too seriously. Also, choosing an easy ramp-up library isn’t that important since plugin development isn’t that easy of a ramp up already. In total, Vue vs Preact feels like the difference between having to learn 7 things vs 8 things, but the one with 8 things opens up 10x as many job opportunities and pays you 30% more per year.
I am switching my vote to Preact. I think with WordPress backing, it would be better than Vue, and even better than Vue with WordPress backing.
Thank you for heeding the community feedback.
PS – please show us some under-content meta-boxes in Gutenberg to help the community finish relaxing back to normal
Preact may be great for existing professional developers but Vue is more inline with WordPress philosophy to lower the bar for learning how to extend WordPress.
There are many reasons why WordPress runs greater than 25% of the web, but one of the more important ones is that it makes development easier. Choosing Preact would be a move in the opposite direction.
P.S. I have been a professional developer for over 25 years so clearly I am looking out for what would be best for WordPress as a whole and not just what would pay me more on non-WordPress projects.
Well said Matt! I look forward to seeing where the course correction takes our community.
Also, please have your lawyers do some good research on Google patents to make sure Facebook doesn’t have any JS-related patent filings that might be dangerously incorporated Preact / Vue before deciding.
This is something to look out for. While the core libraries and CLIs for Preact and Vue don’t use any FB licensed dependencies (from what I have found), because of the sharing of popular libraries between React, Vue, and Preact these popular libraries could include/incorporate either React itself or other BSD + Patents dependencies.
+1 for Preact.
Please consider that there are other developers in the WP community who have adopted React because it was adopted by WP. To switch from React to Vue is essentially saying to these developers that they should migrate their codebase as well if they want to be doing things the “WordPress way.”
I agree with Thorsten Frommen, if WP switched to Preact, the Preact community would grow. Here are some other advantages of React/Preact:
– It would also be easy to switch back to React if Facebook revised their license again
– The React ecosystem is much larger
– There are more React developers so businesses have an easier time hiring talent
– It’s pretty much a drop-in replacement!
Vue may be a great language and I sympathize with WP developers who prefer it to React. But React is a great language too. And I believe that switching to Vue, when you’ve already made the commitment to React and there’s a drop-in replacement for React available, would just further fuel business-minded worries of instability in WordPress.
I realize that this change is being prompted by external forces, but I hope that you decide to respond in a way that respects those developers and businesses who try to do things the WordPress way.
+1 for Vue! I’m a Angular to Vue convert, and I have NEVER been happier writing code than I am now with Vue. It is such a breath of fresh air. It is amazing how much you can get done in such a short time, with clean, concise code using Vue. I highly suggest you take a good look at Vue, it won’t disappoint. It’s just a such a joy to use.
Is there any reason you can’t redistribute React without including the PATENTS file?
That would be violating Facebook’s IP rights — IP rights are the basis of all open source including the GPL so it’s something we wouldn’t do.
Nobody has mentioned Elm yet, which I think is the best replacement for React (actually, way better than React): no runtime errors, super friendly compiler, fast and it will make wordpress engineers even happier because programming Elm apps is a delightful joy…
I’d seriously consider it.
Elm would be awesome, but the requirement of a compiler would make it a non-starter for many WordPress developers who do not have a local build system set up nor the skills to do so. Also, being a functional language it has a much higher learning curve than most JavaScript frameworks.
That said I would love to see core use Elm for all its unit-testable functionality.
+1 elm . having amazing debugging, speed, and IMHO helps a team think better together
Inferno.js +1
+1 for Preact.
Preact is a pretty good choice if you have an existing React code base. The API is almost the same and preact-compat makes migration easy.https://preactjs.com/
It seems that Vue will be the champion.
Good move Matt and team, though I’m sure it stings. I don’t have enough experience to put a +1 in any particular bucket… however, the more I’ve become familiar with Vue and seen the projects being build on it the more I’m impressed by it.
I’ll throw my hat in for Elm. I’ve been using it at work for two medium-size apps and it’s great. The interop with JS is good so I can re-use existing libraries. Out of all the not-JS things out there, Elm has the best documentation.
You have the community’s overwhelming support for this decision. Thanks for the humility and excellent response to the issue.
I am pro-Vue, but happy to support whatever framework gets adopted.
I don’t get it. I tried to understand what all this patent jargon even means, and based on what this guy said (https://medium.com/@dwalsh.sdlr/react-facebook-and-the-revokable-patent-license-why-its-a-paper-25c40c50b562) it doesn’t seem to make much of a difference.
Any thoughts on using Polymer? It’s a web component library created by Google https://github.com/Polymer/polymer
Matt, I think it is a strong but difficult decision you and the team are making and I support the decision since the legality of this is tricky at best and you are looking out for many users that could be affected by the BSD + Patents clause.
I would like to recommend the Ember ecosystem including both Ember and Glimmer.js.
I think for smaller web components such as drop in editors and other content behaviors, Glimmer provides a great experience and creates drop in web-components that can run outside of a framework.
For larger projects like Guttenberg and Calypso where routing, complex state management, access control, content management, and more come in to play: Ember provides what I think is the best Ecosystem.
Especially with a large set of contributors; the set patterns, addons, and build system help to keep applications performant and maintainable as the application scales.
Also, the ecosystem with Engines and in-repo addons can help to keep optional pieces of the application modular and installable to end users.
Ember is also being used heavily by other content management systems and that effort can be built upon, learned from, and shared.
As mentioned in an earlier comment, Ghost uses Ember for their admin and editor, but Ember is also used by Drupal headless, Cardstack, and content companies like Conde Naste, Bustle, and more.
This means that common features like tag lists, component based editors (specifically the Mobiledoc editor), and more are available as part of the Ember Addon ecosystem.
From a community and developer experience I think Ember best matches the WordPress ecosystem (as a developer that worked in WordPress for over 5 years).
Ember has many best practices either built in, well documented, or available via addons; this reduces the question of “will this work with my app” and helps to better reduce possible security or performance bugs.
From a newcomer’s perspective, I think Julia Donaldson’s recent article says a lot about Ember’s architecture: https://medium.com/this-dot-labs/ember-mentorship-and-the-confidence-gap-8c0b93dc1ccd.
Ember is also built on customizable abstractions meaning that complexity can be abstracted from end developers and difficult code can be limited in scope.
Ember addons closely match WordPress plugins and themes as they are auto-discovered and have default configuration out of the box, but can be further customized to meet the end user’s needs.
I am part of the Ember.js Learning Team and work closely with core contributors, I would love to talk or set up a conversation with other Ember Team members on how Ember and Glimmer could fit your needs.
Am I the only one that thinks this is complete garbage?
Look, WordPress started developing Calypso in 2014/2015. Back then, the license to React was more restrictive/ambiguous than it is today. It was modified Summer of 2016 after strong push back from the (non-wordpress) community (and pretty sure it was Google, but don’t have the link handy — google it).
No issues from WordPress yet… still promoting Calypso like it’s the best new thing since sliced bread.
At some point through all this they begin work on Gutenberg. Again, React still had the same, clear license that it has today.
STILL no problems raised by WordPress team. And in fact, as @matt himself stated in this article, he had a blog post typed up ready to trumpet how everything should be using React — plugins, themes, etc.
Fast forward to today and, to quote @Matt, “Core WordPress updates go out to over a quarter of all websites, having them all inherit the patents clause isn’t something I’m comfortable with.”
W T F??
Call me crazy, but the way I see it, there’s only 1 of 2 things that caused this about-face:
1. Terrible management/vision. Did you just “hope” Facebook would change their licensing clause just-in-time? You had zero hope of this. Facebook has been adamant about their BSD + Patents license, and have made their reasons clear time & time again. So it’s not like this just appeared out of thin air. If anything, Facebook has been very transparent on this issue.
2. You’re only pulling the usage of React in Gutenberg because of perceived community blow back. And therefore, it’s a marketing decision. Fair, but then be honest about it. Don’t get on some high horse and pretend like you’re taking up some pseudo moral crusade. You aren’t. You’re making a business decision, and I respect that. But again, call it for what it is and don’t hide behind some feel-good rhetoric.
Regardless of whether we’re here today because of #1 or #2 really doesn’t matter. Neither of them inspires confidence, but maybe that’s just me…
Sorry if that comes across as harsh, but truth is truth. And people need to see through the BS that this article spits out.
…or if I missed something, please correct me @matt
You can certainly interpret things in that way if you like, it’s a very cynical and worst-case assumption approach. I don’t think it’s true, but also not sure what I could say to change your mind. The key difference may be that you’re mixing up Automattic, the commercial entity, with WordPress, the open source project, and the respective decisions for each.
Thanks for the reply @matt. Indeed, I was confusing Automattic’s Calypso with Core’s Gutenberg project, assuming they were being worked on together in tandem. Or, at the very least, Core would have thought through these issues before trumpeting the use of React for Gutenberg. So in that sense, the point of my comment remains. But I will admit that my assumption that Calypso & Gutenberg were being developed in tandem led to a bit more of a cynical comment than it should have been.
Again, I do appreciate your reply, and I hope the Core community can learn from this and not make decisions on frameworks until all issues are resolved to prevent so much wasted time & effort, not to mention confusion… because there’s no doubt the WP community is facing a lot of changes in the months/years ahead that alone are difficult for many to handle. Adding uncertainty and a perceived lack of research/leadership by Core before embarking on a major new direction hardly exudes confidence. Especially when people’s businesses are on the line.
But fingers crossed this becomes just a minor misstep along the way, and not indicative of future Core decisions.
Best.
He… didn’t.
Yes, you’re likely in the minority of people that thinks it’s “complete garbage,” assuming you’re not just operating in rhetorical hyperbole. Many might disagree with Matt’s conclusion, but this is a nuanced, complicated and significant issue. Are you arguing that Facebook’s license is merely superficial?
Facebook’s license creates a “toll,” requiring broad indemnity in return for the privilege of utilizing their product. This is well within their rights, but given the fact that Core’s decisions act as an implicit guide and endorsement, it’s a decision that has far reaching implications for the entire ecosystem: both users and devs alike. Because of this, if there are competent alternatives that don’t require materially restricting the community’s legal standing or rights to remedy, shouldn’t it be well considered?
Please take a look and give a chance to aurelia.io
Has no one else here used or like Aurelia? Would you guys consider Aurelia Matt?
Haven’t come across it before but will add it to the list.
Honestly Matt, I see this as a win. Now the developers can take a look back and see if they can do things better.
Please be careful about “few weeks” timeline estimations. I’d rather Gutenberg take into consideration some of the comments of the community rather than just a quick rewrite. We have a chance to make things a bunch more performant here.
An example of that is the customizer. It’s SLOW when things get bigger. I wish that could be completely rewritten to be more performant.
Another plus one for Vue. React, in my view, is overly complex for what it does.
Web Components ftw!
Aurelia JS
Way to go Matt. It’s high time open source teams start putting nails in the coffins of tools that don’t support their cause. Especially ones that have sneakish licenses.
Stay away from Vue, keep it clean from WP hands, P.L.E.A.S.E!
Please give hyperHTML a chance. It has an ISC license (a simplified MIT) and absolutely nothing in common with React and other libraries.
It’s based on latest JS standards so it requires zero tooling.
It’s compatible with IE9+ and older Mobile platforms too.
It can pick-up forms and content, so it plays wonderfully with SSR.
It’s fast by default and it weighs less than 5K with all its capabilities.
It’s just a young project, but it has 100% code coverage (AFAIK the only one out there) and every benchmark and example shows its muscles VS any alternative you can mention.
I’m also fully committed to this project, and as an old Zend Certified Engineer, after my recent experience with WordPress and UK publishers, I think you would rarely regret this choice. Here to help with any extra clarification, example, or improvement.
It’s annoying as hell that React (a really awesome library) has such a crappy licence :/ But they have their own reasons though
Sorry for asking annoying questions.. do you even consider looking into new Angular (4.x)? It’s really good for serious applications, and can be flexible enough with LazyLoading and AOT.
Well Done Matt, very wise decision. Gutenberg will be delayed like you said but happy to wait for it.
Choose Elm!
http://elm-lang.org
Elm! http://elm-lang.org/ It’s what React / Redux has always wanted to be.
Please take into consideration compatibility with Web Components standards. Helpful website tracking various frameworks: https://custom-elements-everywhere.com/ (currently Vue scores there better than Preact)
I knew that Automattic would be choosing a JS framework soon enough. However, I am surprised that React was even on the roster. I began a project over a year ago where my partner and I began replacing the templating engine with Angular. It makes the most sense to me aside from building something in-house…
We eventually gave up when the API was not making strides like we had expected.
Hopefully Automattic builds something amazing in house to replace PHP templating or leverages an existing MVC framework to take advantage of the ongoing API efforts.
Matt will do the right thing, He already has.
+1 for Vue.js if you go this way Matt.
I also wanted to ask a rather naive question: should we expect any future improvement around back-end as well with the help of Node.js or have I just asked a really silly question? lol
I think the server-side of WordPress will remain PHP for a long time to come, especially with the improvements in performance PHP has in the past few years and its continued ubiquity.
Something I personally would really like to see is a direction away from global variables, “the loop” and magic functions like wp_reset_postdata. I realize these would need to be kept for backwards compatibility but a true OOP approach like $post->title() rather than the_title() (etc) as a new standard would be very nice to see.
Not to pile on, Matt, but I’m a huge Vue fan (having come by way of several other libraries), and I’m confident it would be a good choice. I know it has been for my own projects (corporate intranet stuff mostly, and my personal site).
Of course, Web Components are getting close to being a reality without shimming. So, I’d suggest that one strong consideration in making your choice is looking into how easy it is to write a “proper” Web Component and then wrap it in a component for the chosen JS framework.
That way, as web components take over, the component framework is less about being a template language and more about syntactic sugar and state management.
Angular+1
So glad to read this, Matt! I’ll also say Vue would be my choice for using in Core. Cheers!
VueJS.org :]
@matt Have you considered using Polymer by Google as an option? Web components will be the next big thing due to the immense reusability of components
WordPress is so influential that for the long-term welfare of the web and the JS community in particular, I hope your decision is Preact.
React community is 10x bigger than Vue’s and 99% of that can be reabsorbed by Preact.
People say Vue is simpler to learn but that’s not necessarily true anymore. React is moving more and more towards the use of stateless functions. Nothing can be simpler than that. Pure Javascript functions with inputs (props) and a string output (html). Easy to write, reason and test. Redux will be replaced with better state management tools like mobx-state-tree, which are in fact easier to write, reason and test than Vue’s mvvm pattern. That’s really hard to beat in terms of simplicity, performance and developer experience.
WordPress support of Preact would mean the React community would start moving from React to Preact even more (that’s already happening by the way) and FB would have added pressure to remove the patents file from their repo.
And finally, WP support would also mean more and more people would start using a React-like API and help to finally have a JS framework king within we can all (JS devs) collaborate together.
Nice article about the patent stuff:
https://blog.cloudboost.io/3-points-to-consider-before-migrating-away-from-react-because-of-facebooks-bsd-patent-license-b4a32562d268
Why not use Preact instead? The best of both worlds: using React, without this patent issue. Plus it would let you avoid rewriting everything.
Indeed, there are more or less compatible libraries like Preact or Inferno. This would certainly reduce the workload of the change. Not sure if FB’s patents extend to them…
Vue.js is indeed a nice alternative.
I don’t know if you use TypeScript already, but it might be a good idea to adopt it; it is well suited to large projects…
I wonder, will React be discarded for other Automattic projects? I am thinking about Simplenote… 🙂
Great decision Matt.
InfernoJS is indeed a very good alternative for React, if you are thinking of switching because of license issue.
https://github.com/infernojs/inferno
Shiva
https://www.npmjs.com/package/shiva
Elm.
As a huge fan of Aurelia, it’s saddening to see no one has even mentioned it :/
I would cast a vote for Aurelia and AdonisJS. The two together are a fabulous match. Aurelia fully supports JS&Typescript, is fully webCompliant, and will shortly have SSR.
It can easily leverage your existing/re-written React/Preact webComponents.
Hmm, searching for Aurelia again I notice a few people actually _have_ mentioned it 😛
back to angular, maybe?
angular 4 😀
+1 for Vue.js!!! Great opportunity for this now!
AHEM –> https://code.facebook.com/posts/300798627056246/relicensing-react-jest-flow-and-immutable-js/
I’m curious if this will change Matt’s and WordPress position towards moving away from React
Facebook really blundered on this one -wow!
I definitely would drop React because we now know how they believed their heavy-handedness can push around the development world. What’s the future hold when they try this prank again?
But licensing aside, we should look at the long term. I’m thinking the benefits of switching to a better platform still outweigh the challenges.
Vue has my vote for its ease and user base.
Will you use React now after they changed license to MIT?
Hey, Matt!
With the recent announcement from Facebook — it looks like your stance about WordPress-React just might have made them rethink the whole licensing issue.
Which I believe is a good thing? Ain’t it? Does that also mean we get to keep React in the Gutenberg core now?
After a week of research for an alternate JS Framework, I happen to always end up with React. Also, after listening to what folks at Facebook had to say about the new bytecode compiler for React and React 16 — I must say we are in good company with this framework.
Not sure where we end up, but having the WordPress community learn React helps them get far more superior jobs, salaries, and everything than any other JS FW has to offer. And that’s just one concern. I am sure you have many, in the posts that never got published.
I am looking forward to how this all ends being a one week of ups and downs that I’d very much want to forget.
Peace!
I came here to ask the same thing 🙂
With Facebook’s announcement today that they will relicense React under the MIT license, will WordPress reconsider their decision to abandon it?
https://code.facebook.com/posts/300798627056246/relicensing-react-jest-flow-and-immutable-js/
Now what…
They’re relicensing react under MIT only, no more patents. Does this mean back to React then?
https://code.facebook.com/posts/300798627056246/relicensing-react-jest-flow-and-immutable-js/
Thanks for taking a stand.
It took a lot of courage for you and the WordPress community to do what you did. Many of us in the broader web community are happy to see that your actions among others led Facebook to decide to relicense React. Let’s hope that this another step towards diminishing the value of software patents.
With the latest announcement from Facebook, will WordPress consider React again? If React is under MIT, then there is no more concern, right?
Facebook news
https://code.facebook.com/posts/300798627056246 |
8,881 | 在 Kubernetes 集群中运行 WordPress | https://deliciousbrains.com/running-wordpress-kubernetes-cluster/ | 2017-09-19T09:24:31 | [
"Kubernetes",
"WordPress",
"容器"
] | https://linux.cn/article-8881-1.html | 
作为一名开发者,我会尝试留意那些我可能不会每天使用的技术的进步。了解这些技术至关重要,因为它们可能会间接影响到我的工作。比如[由 Docker 推动](http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/)的、近期正在兴起的容器化技术,可用于上规模地托管 Web 应用。从技术层面来讲,我并不是一个 DevOps,但当我每天构建 Web 应用时,多去留意这些技术如何去发展,会对我有所裨益。
这种进步的一个绝佳的例子,是近一段时间高速发展的容器编排平台。它允许你轻松地部署、管理容器化应用,并对它们的规模进行调整。目前看来,容器编排的流行工具有 [Kubernetes (来自 Google)](https://kubernetes.io/),[Docker Swarm](https://docs.docker.com/engine/swarm/) 和 [Apache Mesos](http://mesos.apache.org/)。如果你想较好的了解上面那些技术以及它们的区别,我推荐你看一下[这篇文章](https://mesosphere.com/blog/docker-vs-kubernetes-vs-apache-mesos/)。
在这篇文章中,我们将会从一些简单的操作开始,了解一下 Kubernetes 平台,看看如何将一个 WordPress 网站部署在本地机器上的一个单节点集群中。
### 安装 Kubernetes
在 [Kubernetes 文档](https://kubernetes.io/docs/tutorials/kubernetes-basics/)中有一个很好的互动教程,涵盖了很多东西。但出于本文的目的,我只会介绍在 MacOS 中 Kuberentes 的安装和使用。
我们要做的第一件事是在你的本地主机中安装 Kubernetes。我们将使用一个叫做 [MiniKube](https://kubernetes.io/docs/getting-started-guides/minikube/) 的工具,它专门用于在你的机器上方便地设置一个用于测试的 Kubernetes 集群。
根据 Minikube 文档,在我们开始之前,有一些先决条件。首先要保证你已经安装了一个 Hypervisor (我将会使用 Virtualbox)。接下来,我们需要[安装 Kubernetes 命令行工具](https://kubernetes.io/docs/tasks/tools/install-kubectl/)(也就是 `kubectl`)。如果你在用 Homebrew,这一步非常简单,只需要运行命令:
```
$ brew install kubectl
```
现在我们可以真正 [安装 Minikube](https://github.com/kubernetes/minikube/releases) 了:
```
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
```
最后,我们要[启动 Minikube](https://kubernetes.io/docs/getting-started-guides/minikube/#quickstart) 创建一个虚拟机,来作为我们的单节点 Kubernetes 集群。现在我要说一点:尽管我们在本文中只在本地运行它,但是在[真正的服务器](https://kubernetes.io/docs/tutorials/kubernetes-basics/)上运行 Kubernetes 集群时,后面提到的大多数概念都会适用。在多节点集群上,“主节点”将负责管理其它工作节点(虚拟机或物理服务器),并且 Kubernetes 将会在集群中自动进行容器的分发和调度。
```
$ minikube start --vm-driver=virtualbox
```
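如果想确认这个单节点集群已经正常启动,可以运行下面这些可选的检查命令(输出内容会因环境而异):

```
$ minikube status
$ kubectl cluster-info
$ kubectl get nodes
```

一切正常的话,`kubectl get nodes` 应该会列出一个状态为 `Ready` 的 `minikube` 节点。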
### 安装 Helm
现在,本机中应该有一个正在运行的(单节点)Kubernetes 集群了。我们现在可以用任何方式来与 Kubernetes 交互。如果你想现在可以体验一下,我觉得 [kubernetesbyexample.com](http://kubernetesbyexample.com/) 可以很好地向你介绍 Kubernetes 的概念和术语。
虽然我们可以手动配置这些东西,但实际上我们将会使用另外的工具,来将我们的 WordPress 应用部署到 Kubernetes 集群中。[Helm](https://docs.helm.sh/) 被称为“Kubernetes 的包管理工具”,它可以让你轻松地在你的集群中部署预构建的软件包,也就是“<ruby> 图表 <rt> chart </rt></ruby>”。你可以把图表看做一组专为特定应用(如 WordPress)而设计的容器定义和配置。首先我们在本地主机上安装 Helm:
```
$ brew install kubernetes-helm
```
然后我们需要在集群中安装 Helm。 幸运的是,只需要运行下面的命令就好:
```
$ helm init
```
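`helm init` 会把 Helm 的服务端组件 Tiller 部署到集群的 `kube-system` 命名空间中。如果想确认它已经就绪,可以运行这两条可选的检查命令(输出因环境而异):

```
$ helm version
$ kubectl get pods --namespace kube-system
```

正常情况下,`helm version` 会同时打印客户端和服务端的版本号,pod 列表中也应该能看到一个名字以 `tiller-deploy` 开头的 pod。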
### 安装 WordPress
现在 Helm 已经在我们的集群中运行了,我们可以安装 [WordPress 图表](https://kubeapps.com/charts/stable/wordpress)。运行:
```
$ helm install --namespace wordpress --name wordpress --set serviceType=NodePort stable/wordpress
```
这条命令将会在容器中安装并运行 WordPress,并在容器中运行 MariaDB 作为数据库。它在 Kubernetes 中被称为“Pod”。一个 [Pod](https://kubernetes.io/docs/tutorials/kubernetes-basics/explore-intro/) 基本上可视为一个或多个应用程序容器和这些容器的一些共享资源(例如存储卷,网络等)的组合的抽象。
我们需要给这个部署一个名字和一个命名空间,以将它们组织起来并便于查找。我们同样会将 `serviceType` 设置为 `NodePort` 。这一步非常重要,因为在默认设置中,服务类型会被设置为 `LoadBalancer`。由于我们的集群现在没有负载均衡器,所以我们将无法在集群外访问我们的 WordPress 站点。
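图表的安装可能需要一两分钟。下面这组可选的检查命令可以用来确认发布已经创建,并且服务类型确实是 `NodePort`(输出因环境而异):

```
$ helm ls
$ kubectl get services --namespace wordpress
```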
在输出数据的最后一部分,你会注意到一些关于访问你的 WordPress 站点的有用的命令。运行那些命令,你可以获取到我们的 WordPress 站点的外部 IP 地址和端口:
```
$ export NODE_PORT=$(kubectl get --namespace wordpress -o jsonpath="{.spec.ports[0].nodePort}" services wordpress-wordpress)
$ export NODE_IP=$(kubectl get nodes --namespace wordpress -o jsonpath="{.items[0].status.addresses[0].address}")
$ echo http://$NODE_IP:$NODE_PORT/admin
```
你现在访问刚刚生成的 URL(忽略 `/admin` 部分),就可以看到 WordPress 已经在你的 Kubernetes 集群中运行了!
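登录 WordPress 后台还需要管理员密码。按照这个图表的惯例,密码会保存在一个 Kubernetes secret 中。下面的命令只是一个示例,其中的 secret 名称 `wordpress-wordpress` 是根据我们指定的发布名称推测的,实际名称请以安装输出中的说明为准:

```
$ kubectl get secret --namespace wordpress wordpress-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode
```

默认的管理员用户名通常是 `user`,同样以图表输出的说明为准。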
### 扩展 WordPress
Kubernetes 等服务编排平台的一个伟大之处,在于它将应用的扩展和管理变得易如反掌。我们看一下应用的部署状态:
```
$ kubectl get deployments --namespace=wordpress
```
[](https://cdn.deliciousbrains.com/content/uploads/2017/08/07120711/image4.png)
可以看到,我们有两个部署,一个是 MariaDB 数据库,一个是 WordPress 本身。现在,我们假设你的 WordPress 开始承载大量的流量,所以我们想将这些负载分摊在多个实例上。我们可以通过一个简单的命令来扩展 `wordpress-wordpress` 部署:
```
$ kubectl scale --replicas 2 deployments wordpress-wordpress --namespace=wordpress
```
再次运行 `kubectl get deployments`,我们现在应该会看到下面的场景:
[](https://cdn.deliciousbrains.com/content/uploads/2017/08/07120710/image2.png)
你刚刚扩大了你的 WordPress 站点规模!超级简单,对不对?现在我们有了多个 WordPress 容器,可以在它们之中对流量进行负载均衡。想了解 Kubernetes 扩展的更多信息,参见[这篇指南](https://kubernetes.io/docs/tutorials/kubernetes-basics/scale-intro/)。
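除了手动指定副本数,Kubernetes 还可以根据 CPU 使用率自动伸缩部署。下面只是一个示意性的例子,其中的阈值和副本数是随意取的,请按需调整;另外,自动伸缩依赖集群中的资源指标组件,在 Minikube 中可能需要先启用 `heapster` 插件(视 Minikube 版本而定):

```
$ minikube addons enable heapster
$ kubectl autoscale deployment wordpress-wordpress --namespace=wordpress --min=2 --max=5 --cpu-percent=80
```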
### 高可用
Kubernetes 等平台的的另一大特色在于,它不单单能进行方便的扩展,还可以通过自愈组件来提供高可用性。假设我们的一个 WordPress 部署因为某些原因失效了,那 Kubernetes 会立刻自动替换掉这个部署。我们可以通过删除我们 WordPress 部署的一个 pod 来模拟这个过程。
首先运行命令,获取 pod 列表:
```
$ kubectl get pods --namespace=wordpress
```
[](https://cdn.deliciousbrains.com/content/uploads/2017/08/07120711/image3.png)
然后删除其中一个 pod:
```
$ kubectl delete pod wordpress-wordpress-876183909-jqc8s --namespace=wordpress
```
如果你再次运行 `kubectl get pods` 命令,应该会看到 Kubernetes 立刻换上了新的 pod (`3l167`)。
[](https://cdn.deliciousbrains.com/content/uploads/2017/08/07120709/image1.png)
### 更进一步
我们只是初步领略了 Kubernetes 所能做到的事情的冰山一角。如果你想深入研究,我建议你查看以下功能:
* [平行扩展](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
* [自愈](https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#what-is-a-replicationcontroller)
* [自动更新及回滚](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#what-is-a-deployment)
* [密钥管理](https://kubernetes.io/docs/concepts/configuration/secret/)
你在容器平台上运行过 WordPress 吗?有没有使用过 Kubernetes(或其它容器编排平台),有没有什么好的技巧?你通常会怎么扩展你的 WordPress 站点?请在评论中告诉我们。
---
作者简介:
Gilbert 喜欢构建软件。从 jQuery 脚本到 WordPress 插件,再到完整的 SaaS 应用程序,Gilbert 一直在创造优雅的软件。他创作的最有名的产品,应该是 Nivo Slider。
---
via: <https://deliciousbrains.com/running-wordpress-kubernetes-cluster/>
作者:[Gilbert Pellegrom](https://deliciousbrains.com/author/gilbert-pellegrom/) 译者:[StdioA](https://github.com/StdioA) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 
As a developer I try to keep my eye on the progression of technologies that I might not use every day, but are important to understand as they might indirectly affect my work. For example the recent rise of containerization, [popularized by Docker](http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/), used for hosting web apps at scale. I’m not technically a devops person but as I build web apps on a daily basis it’s good for me to keep my eye on how these technologies are progressing.
A good example of this progression is the rapid development of container orchestration platforms that allow you to easily deploy, scale and manage containerized applications. The main players at the moment seem to be [Kubernetes (by Google)](https://kubernetes.io/), [Docker Swarm](https://docs.docker.com/engine/swarm/) and [Apache Mesos](http://mesos.apache.org/). If you want a good intro to each of these technologies and their differences I recommend giving [this article](https://mesosphere.com/blog/docker-vs-kubernetes-vs-apache-mesos/) a read.
In this article, we’re going to start simple and take a look at the Kubernetes platform and how you can set up a WordPress site on a single node cluster on your local machine.
## Installing Kubernetes
The [Kubernetes docs](https://kubernetes.io/docs/tutorials/kubernetes-basics/) have a great interactive tutorial that covers a lot of this stuff but for the purpose of this article I’m just going to cover installation and usage on macOS.
The first thing we need to do is install Kubernetes on your local machine. We’re going to use a tool called [Minikube](https://kubernetes.io/docs/getting-started-guides/minikube/) which is specifically designed to make it easy to set up a Kubernetes cluster on your local machine for testing.
As per the Minikube docs, there are a few prerequisites before we get going. Make sure you have a Hypervisor installed (I'm going to use Virtualbox). Next we need to [install the Kubernetes command-line tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/) (known as `kubectl`
). If you use Homebrew this is as simple as running:
```
$ brew install kubectl
```
Now we can actually [install Minikube](https://github.com/kubernetes/minikube/releases):
```
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
```
Finally we want to [start Minikube](https://kubernetes.io/docs/getting-started-guides/minikube/#quickstart) which will create a virtual machine which will act as our single-node Kubernetes cluster. At this point I should state that, although we’re running things locally in this article, most of the following concepts will apply when running a full Kubernetes cluster on [real servers](https://kubernetes.io/docs/tutorials/kubernetes-basics/). On a multi-node cluster a “master” node would be responsible for managing the other worker nodes (VM’s or physical servers) and Kubernetes would automate the distribution and scheduling of application containers across the cluster.
```
$ minikube start --vm-driver=virtualbox
```
## Installing Helm
At this point we should now have a (single node) Kubernetes cluster running on our local machine. We can now interact with Kubernetes in any way we want. I found [kubernetesbyexample.com](http://kubernetesbyexample.com/) to be a good introduction to Kubernetes concepts and terms if you want to start playing around.
While we could set things up manually, we’re actually going to use a separate tool to install our WordPress application to our Kubernetes cluster. [Helm](https://docs.helm.sh/) is labelled as a “package manager for Kubernetes” and works by allowing you to easily deploy pre-built software packages to your cluster, known as “Charts”. You can think of a Chart as a group of container definitions and configs that are designed for a specific application (such as WordPress). First let’s install Helm on our local machine:
```
$ brew install kubernetes-helm
```
Next we need to install Helm on our cluster. Thankfully this is as simple as running:
```
$ helm init
```
## Installing WordPress
Now that Helm is running on our cluster we can install the [WordPress chart](https://kubeapps.com/charts/stable/wordpress) by running:
```
$ helm install --namespace wordpress --name wordpress --set serviceType=NodePort stable/wordpress
```
The will install and run WordPress in a container and MariaDB in a container for the database. This is known as a “Pod” in Kubernetes. A [Pod](https://kubernetes.io/docs/tutorials/kubernetes-basics/explore-intro/) is basically an abstraction that represents a group of one or more application containers and some shared resources for those containers (e.g. storage volumes, networking etc.).
We give the release a namespace and a name to keep things organized and make them easy to find. We also set the `serviceType`
to `NodePort`
. This is important because, by default, the service type will be set to `LoadBalancer`
and, as we currently don’t have a load balancer for our cluster, we wouldn’t be able to access our WordPress site from outside the cluster.
In the last part of the output from this command you will notice some helpful instructions on how to access your WordPress site. Run these commands to get the external IP address and port for our WordPress site:
```
$ export NODE_PORT=$(kubectl get --namespace wordpress -o jsonpath="{.spec.ports[0].nodePort}" services wordpress-wordpress)
$ export NODE_IP=$(kubectl get nodes --namespace wordpress -o jsonpath="{.items[0].status.addresses[0].address}")
$ echo http://$NODE_IP:$NODE_PORT/admin
```
You should now be able to visit the resulting URL (ignoring the `/admin`
bit) and see WordPress running on your very own Kubernetes cluster!
## Scaling WordPress
One of the great things about container orchestration platforms such as Kubernetes is that it makes scaling and managing your application really simple. Let’s check the status of our deployments:
```
$ kubectl get deployments --namespace=wordpress
```
We should see that we have 2 deployments, one for the Mariadb database and one for WordPress itself. Now let’s say your WordPress site is starting to see a lot of traffic and we want to split the load over multiple instances. We can scale our `wordpress-wordpress`
deployment by running a simple command:
```
$ kubectl scale --replicas 2 deployments wordpress-wordpress --namespace=wordpress
```
If we run the `kubectl get deployments`
command again we should now see something like this:
You’ve just scaled up your WordPress site! Easy peasy, right? There are now multiple WordPress containers that traffic can be load-balanced across. For more info on Kubernetes scaling check out [this tutorial](https://kubernetes.io/docs/tutorials/kubernetes-basics/scale-intro/).
## High Availability
Another great feature of platforms such as Kubernetes is the ability to not only scale easily, but to provide high availability by implementing self-healing components. Say one of your WordPress deployments fails for some reason. Kubernetes will automatically replace the deployment instantly. We can simulate this by deleting one of the pods running in our WordPress deployment.
First get a list of pods by running:
```
$ kubectl get pods --namespace=wordpress
```
Then delete one of the pods:
```
$ kubectl delete pod {POD-ID} --namespace=wordpress
```
If you run the `kubectl get pods`
command again you should see Kubernetes spinning up the replacement pod straight away.
## Going Further
We’ve only really scratched the surface of what Kubernetes can do. If you want to delve a bit deeper, I would recommend having a look at some of the following features:
Have you ever run WordPress on a container platform? Have you ever used Kubernetes (or another container orchestration platform) and got any good tips? How do you normally scale your WordPress sites? Let us know in the comments. |
8,882 | React 许可证虽严苛,但不必过度 react | https://opensource.com/article/17/9/facebook-patents-license | 2017-09-21T10:14:00 | [
"Facebook",
"许可证",
"React"
] | /article-8882-1.html |
>
> 对 Apache 基金会禁止将 BSD+专利许可证(FB + PL)用于其项目的批评,在理性的审视之下是无法成立的。
>
>
>

最近,Apache 基金会将 Facebook 公司 BSD + 专利许可证下的代码重新分类为 “X 类”,从而有效地阻止了其未来对 Apache 基金会项目的贡献。这一举动再度引发了对专利授权的[争议](/article-8733-1.html),但是像开源社区的许多事件一样,与实际情况相比,这个争议更具倾向性。事实上,这样做不太可能影响 React.js 的采用,对 BSD +专利许可证(FB + PL)的批评大多数不能在理性的审视下成立。
官方名称为“[专利权的补充授权第二版](https://github.com/facebook/react/blob/master/PATENTS)”的 Facebook 专利授权条款已经生效多年。它用于非常受欢迎的 React.js 代码,该代码是一种用于呈现用户界面的 JavaScript 库。[使用该代码的主要技术公司的名单令人印象深刻](http://reactkungfu.com/2015/07/big-names-using-react-js/),其中包括像 Netflix 这样面向消费者的巨头公司,当然还有 Facebook 自身。(LCTT 译注:国内包括百度、阿里云等顶级互联网公司也在使用它,但是据闻这些公司在纷纷考虑更换对该库的依赖。)
### **对旧授权的新反应**
对这个消息的反应是令人惊讶的,因为并行专利许可模式并不是什么新鲜事。 Facebook 在 2013 年发布了 BSD + 专利授权许可证(2015 年进行了修订)。而 Google 2010 年也在 WebM 编解码器上有些高调地使用了类似的模型。该许可模式涉及两个并列和同时授权的权利:软件版权的 BSD 许可,以及单独授予的执行该软件的专利授权。将两者合在一起意味着有两个独立和平行的授权权利。在这方面,它与 Apache 2.0 许可证非常相似,与 BSD 一样,Apache 2.0 是一个许可证,并且还包含与版权许可授权一起存在的防御性终止条款。
对 Apache 基金会通告的大部分反应造成了混乱,例如[有篇文章](https://www.theregister.co.uk/2017/07/17/apache_says_no_to_facebook_code_libraries/)误导性地称之为“陷阱”。**事实上,许多开源许可证都有防御性的终止条款,这些规定大多被认为是阻止专利诉讼的合理机制,而不是陷阱。**它们是规则而不是例外;所有具有专利授权的主要开源许可证也具有防御性的终止条款——虽然每个条款略有不同。在 Apache 所拒绝的 Facebook 专利授权与 Apache 对其项目所采取的 Apache 2.0 许可证之间,其中的区别比争议提出的更为微妙。
### **防御性终止条款有多种风格**
防御性终止条款在两个主要方面有所不同:终止条款的触发和权利终止的范围。
关于权利终止的范围,有两个阵营:仅终止专利授权(包括 Apache 2.0、Eclipse 公共许可证和 Facebook 专利授权)以及也同时终止版权许可(Mozilla 公共许可证和 GPL v3)。换句话说,对于大多数的许可证,提起专利侵权诉讼只能导致专利权的终止;对于其他许可证来说,提起专利诉讼能够同时导致版权许可的终止,即让某人停止使用该代码。版权许可终止是一个更强大的反专利机制,对于私营企业来说风险更大,因此导致一些私营公司拒绝使用 GPL v3 或 MPL 代码。
与大多数其他开源许可证相比,Facebook 专利授权触发终止的阈值不同。例如,在 Apache 2.0 中,专利授权的终止是由对该许可证下的软件提出指控的专利权利主张引发的。这个想法是为软件创建一个<ruby> “专利共同体” <rp> ( </rp> <rt> patent commons </rt> <rp> ) </rp></ruby>。大多数其他开源许可证大致遵循这个推演。(但在 Facebook 许可证中,)如果被许可人向 Facebook 或任何对 Facebook 产品提出指控的第三方提出权利主张,Facebook 专利许可也将终止。在这方面,终止触发机制类似于 IBM 多年前撰写的 Common Public License 1.0 (CPL)中的终止触发机制。(“如果接收者利用适用于本软件的专利对贡献者提出专利诉讼,则该贡献者根据本协议授予该接收者的任何专利许可,将在提起诉讼之日终止。”)
### **天下无新事**
Facebook 授权范围的防御性终止条款在开源场景之外的专利许可中很常见。如果被许可人向许可人提出专利权利主张,大多数专利许可将被终止。原因是许可人不想在专利战争中被单方面“解除武装”。大多数专利只有在竞争对手起诉专利所有人时才被防御性使用。A 起诉 B,然后 B 起诉 A,导致互相伤害。如果 B 在没有广泛的防御性终止条款的情况下以开源许可证发布其软件,则 B 可能没有追索权,并且为其开源代码的发布付出高昂的代价。A 在免费利用 B 的软件进行开发的同时,还起诉 B 专利侵权。
最后,Facebook 专利授权本身并不新鲜。该授权于 2013 年发布,自那时起,React.js 的受欢迎程度一直在增长。与许多开源许可证一样,行业忍受新许可证的意愿取决于其下发布的代码的质量。在 React.js 代码质量非常好的情况下,这个专利许可条款虽然是新的,但合理。
### **还是开源吗?**
有些人认为 BSD + 专利条款违反<ruby> <a href="https://opensource.org/osd-annotated"> “开源定义” </a> <rp> ( </rp> <rt> Open Source Definition </rt> <rp> ) </rp></ruby>。OSD 不接受歧视个人、团体或领域的许可证。但专利授权没有许可范围限制;如果被许可人有不当行为,许可就会终止,并且与其他人相比,针对代码作者的不当行为的触发门槛更低。因此,BSD + 专利许可似乎并不违反 OSD,而且 CPL 已经被 OSI 认可为合规的。如同 BSD + 专利许可一样,CPL 根据针对代码作者的专利诉讼,设定了一个较低的终止门槛。
### **结果是什么?**
Apache 基金会决定的实际结果尚不清楚。 遵循 X 类许可的代码不能包含在 Apache 基金会的存储库中(该类别还包括 GPL 等许可证)。Apache 的重新分类并不意味着任何人都不能使用 React.js——它只是不能在 Apache 项目中被提交。目前甚至不清楚 Apache 项目是否包含对 BSD + 专利许可代码的依赖。
同时,在私营企业中,根据 BSD + 专利条款使用代码几乎没有争议。大多数公司已经检查了该许可证与其他许可证(如 Apache 2.0)相比的边际法律风险,并认为没有需要特别注意的地方。除非公司决定起诉 Facebook(或指控其产品侵权),否则终止触发机制没有实际效果。如果您想要对一家开发并发布了大量代码的公司发起专利权利主张,那么把该公司的代码从您的业务中移除,似乎是一种合理的代价。
有些争议似乎起因于担心 Facebook 在许可条款中比其他人占优。但是,这与伤害开源社区是不一样的。与 Apache 2.0 一样,BSD + 专利授权以“专利共同体”为基准建立,但是为贡献者(Facebook)针对被许可人的软件专利权利主张提供了更多的保护。很奇怪的是,一个如此反对软件专利的社区会发现这是令人反感的,特别是考虑到过去使用的一系列防御性终止规定。
**请注意:此文章是关于 BSD + 专利许可,而不是关于 Facebook 公司。这篇文章仅代表作者的个人观点,而不是 Facebook 的观点。作者在开源事务上代表 Facebook,但作者没有起草 BSD + 专利许可证。**
(题图:techtimes.com)
---
作者简介:Heather Meeker 是 O’Melveny & Myers 硅谷办公室的合伙人,为客户提供技术交易和知识产权方面的建议,是国际知名的开源软件许可专家。Heather 于 2016 年获得加州律师协会知识产权先锋奖。<ruby> 《最佳律师》 <rp> ( </rp> <rt> Best Lawyers </rt> <rp> ) </rp></ruby>将她提名为 2018 年年度 IT 律师。
译者简介:薛亮,集慧智佳知识产权咨询公司高级咨询师,擅长专利检索、专利分析、竞争对手跟踪、FTO 分析、开源软件知识产权风险分析,致力于为互联网企业、高科技公司提供知识产权咨询服务。
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
8,885 | 8 款适合树莓派使用的 IDE | http://opensourceforu.com/2017/06/top-ides-raspberry-pi/ | 2017-09-20T11:19:00 | [
"树莓派",
"IDE"
] | https://linux.cn/article-8885-1.html | 
树莓派是一种微型的单板电脑(SBC),已经在学校的计算机科学教学中掀起了一场革命,但同样,它也给软件开发者带来了福音。目前,树莓派获得的知名度远远超出了它原本的目标市场,而且正在应用于机器人项目中。
树莓派是一个可以运行 Linux 操作系统的微型开发板计算机,由英国树莓派基金会开发,用来在英国和发展中国家促进学校的基础计算机科学教育。树莓派拥有 USB 接口,能够支持多种即插即用外围设备,比如键盘、鼠标、打印机等。它包含了一个 HDMI(高清多媒体界面)端口,可以为用户提供视频输出。信用卡大小的尺寸使得树莓派非常便携且价格便宜。仅需一个 5V 的 micro-USB 电源供电,类似于给手机用的充电器一样。
多年来,树莓派基金会已经推出了几个不同版本的树莓派产品。 第一个版本是树莓派 1B 型,随后是一个相对简单便宜的 A 型。在 2014 年,基金会推出了一个增强版本 —— 树莓派 1B+。在 2015 年,基金会推出了全新设计的版本,售价为 5 美元,命名为树莓派 Zero。
在 2016 年 2 月,树莓派 3B 型发布,这也是现在可用的主要型号。在 2017 年,基金会发布了树莓派 Zero 的新型号树莓派 Zero W (W = wireless 无线)。
在不久的将来,一个提高了技术规格的型号将会到来,为嵌入式系统发烧友、研究员、爱好者和工程师们用其开发多种功能的实时应用提供一个稳健的平台。

*图 1 :树莓派*
### 树莓派是一个高效的编程设备
在给树莓派供电后,启动运行 LXDE 窗口管理器,用户会获得一个完整的基于 Debian 的 Linux 操作系统,即 Raspbian。Raspbian 操作系统为用户提供了众多自由开源的程序,涵盖了程序设计、游戏、应用以及教育方面。
树莓派的官方编程语言是 Python ,并已预装在了 Paspbian 操作系统上。结合树莓派和 Python 的集成开发环境 IDLE3 ,可以让程序员能够开发各种基于 Python 的程序。
除了 Python ,树莓派还支持多种其它语言。并且可以使用一些自由开源的 IDE (集成开发环境)。允许程序员、开发者和应用工程师在树莓派上开发程序和应用。
### 树莓派上的最佳 IDE
作为一名程序员和开发者,你需要的首先就是有一个 IDE ,这是一个集成了开发者和程序员编写、编译和测试软件所需的的基本工具的综合软件套件。IDE 包含了代码编辑器、编译或解释程序和调试器,并允许开发者通过一个图形用户界面(GUI)来访问。IDE 的主要目的之一是提供一个整合单元来统一功能设置,减少组合多个开发工具的必要配置。
IDE 的用户界面与文字处理程序相似,在工具栏提供颜色编码、源代码格式化、错误诊断、报告以及智能代码补全工具。IDE 被设计用来整合第三方版本控制库如 GitHub 或 Apache Subversion 。一些 IDE 专注于特定的编程语言,支持一个匹配该编程语言的功能集,当然也有一些是支持多种语言的。
树莓派上拥有丰富的 IDE ,为程序员提供友好界面来开发源代码、应用程序以及系统程序。
就让我们来探索最适合树莓派的 IDE 吧。
#### BlueJ

*图 2 :BlueJ 的 GUI 界面*
BlueJ 是一款致力于 Java 编程语言的 IDE ,主要是为教育目的而开发的。它也支持小型的软件开发项目。BlueJ 由澳大利亚的莫纳什大学的 Michael Kolling 和 John Rosenburg 在 2000 年作为 Blue 系统的继任者而开发的,后来在 2009 年 3 月成为自由开源软件。
BlueJ 提供一种学习面向对象的编程概念的高效的方式,图形用户界面为应用程序提供像 UML 图一样的类结构。每一个像类、对象和函数调用这样基于 OOPS 的概念,都可以通过基于交互的设计来表示。
**特性:**
* *简单的交互界面:* 与 NetBeans 或 Eclipse 这样的专业界面相比,BlueJ 的用户界面更加简单易学,使开发者可以专注于编程而不是环境。
* *便携:* BlueJ 支持多种平台如 Windows、Linux 以及 Mac OS X , 可以免安装直接运行。
* *新的创新:* BlueJ IDE 在对象工作台、代码块和范围着色方面有着大量的创新,使新手体验到开发的乐趣。
* *强大的技术支持:* BlueJ 拥有一个核心功能团队来解答疑问,并且在 24 小时内为开发者的各种问题提供解决方案。
**最新版本:** 4.0.1
#### Geany IDE

*图 3 : Geany IDE 的 GUI 界面*
Geany IDE 使用了 Scintilla 和 GTK+ 的集成开发环境支持,被认为是一个非常轻量级的基于 GUI 的文本编辑器。 Geany 的独特之处在于它被设计为独立于特定的桌面环境,并且仅需要较少数量的依赖包。只需要 GTK2 运行库就可以运行。Geany IDE 支持多种编程语言如 C、C++、C#、Java、HTML、PHP、Python、Perl、Ruby、Erlang 和 LaTeX 。
**特性:**
* 代码自动补全和简单的代码导航。
* 高效的语法高亮和代码折叠。
* 支持嵌入式终端仿真器,拥有高度可扩展性,可以免费下载大量功能丰富的插件。
* 简单的项目管理并支持多种文件类型,包括 C、Java、PHP、HTML、Python、Perl 等。
* 高度定制的界面,可以添加或删除设置、栏及窗口。
**最新版本:** 1.30.1
#### Adafruit WebIDE

*图 4 :Adafruit WebIDE 的 GUI 界面*
Adafruit WebIDE 为树莓派用户提供一个基于 Web 的界面来执行编程功能,并且允许开发者编译多种语言的源代码如 Python、Ruby、JavaScript 等。
Adafruit IDE 允许开发者把代码放在 GIT 仓库,这样就可以通过 GitHub 在任何地方进行访问。
**特性:**
* 可以通过 Web 浏览器的 8080 端口或 80 端口进行访问。
* 支持源代码的简单编译和运行。
* 配备一个调试器和可视器来进行正确追踪,代码导航以及测试源代码。
#### AlgoIDE

*图 5 :AlgoIDE 的 GUI 界面*
AlgoIDE 结合了一个脚本语言和一个 IDE 环境,它被设计用来将编程与下一步的示例一起来运行。AlgoIDE 包含了一个强大的调试器、 实时范围管理器并且一步一步的执行代码。针对全年龄人群而设计,用来设计程序以及对算法进行大量的研究。
AlgoIDE 支持多种类型的语言如 C、C++、Python、Java、Smalltalk、Objective C、ActionScript 等。
**特性:**
* 代码自动缩进和补全。
* 高效的语法高亮和错误管理。
* 包含了一个调试器、范围管理器和动态帮助系统。
* 支持 GUI 和传统的 Logo 程序语言 Turtle 来进行源代码开发。
**最新版本:** 2016-12-08 (上次更新时间)
#### Ninja IDE

*图 6 :Ninja IDE 的 GUI 界面*
Ninja IDE(“Ninja-IDE Is Not Just Another IDE”的缩写),由 Diego Sarmentero、Horacio Duran、Gabriel Acosta、Pedro Mourelle 和 Jose Rostango 设计,使用纯 Python 编写,并且支持在 Linux、Mac OS X 和 Windows 等多种平台上运行。Ninja IDE 被认为是一个跨平台的 IDE 软件,尤其适合用来设计基于 Python 的应用程序。
Ninja IDE 是非常轻量级的,并能执行多种功能如文件处理、代码定位、跳转行、标签、代码自动缩进和编辑器缩放。除了 Python ,这款 IDE 也支持几种其他语言。
**特性:**
* *高效的代码编辑器:* Ninja-IDE 被认为是最有效的代码编辑器,因为它能执行多种功能如代码补全和缩进,以及助手功能。
* *错误和 PEP8 查找器:* 高亮显示文件中的静态和 PEP8 错误。
* *代码定位器:* 使用此功能,快速直接访问能够访问的文件。用户可以使用快捷键 “CTRL+K” 进行输入,IDE 会找到特定的文本。
* 独特的项目管理功能以及大量的插件使得具有 Ninja-IDE 高度可扩展性。
**最新版本:** 2.3
#### Lazarus IDE

*图 7 :Lazarus IDE 的 GUI 界面*
Lazarus IDE 是由 Cliff Baeseman、Shane Miller 和 Michael A. Hess 于 1999 年 2 月 开发。它被视为是一款用于应用程序快速开发的基于 GUI 的跨平台 IDE ,使用的是 Free Pascal 编译器。Lazarus IDE 继承了 Free Pascal 的三个主要特性 —— 编译速度、执行速度和交叉编译。可以在多种操作系统上对应用程序进行交叉编译,如 Windows 、Linux 、Mac OS X 等。
这款 IDE 由 Lazarus 组件库组成。这些组件库以一个单一和带有不同的特定平台实现的统一接口的形式为开发者提供了多种配套设施。它支持“一次编写,随处编译”的原则。
**特性:**
* 强大而快速的处理各种类型的源代码,同时支持性能测试。
* 易用的 GUI ,支持组件拖拽功能。可以通过 Lazarus 包文件为 IDE 添加附加组件。
* 使用新功能加强的 Free Pascal ,可以用来开发 Android 应用。
* 高可扩展性、开放源代码并支持多种框架来编译其他语言。
**最新版本:** 1.6.4
#### Codeblock IDE

*图 8 : Codeblock IDE 界面*
Codeblock IDE 是用 C++ 编写的,使用了 wxWidgets 作为 GUI 库,发布于 2005 年。它是一款自由开源、跨平台的 IDE ,支持多种类型的编译器如 GCC 、Clang 和 Visual C++ 。
Codeblock IDE 高度智能并且可以支持多种功能,如语法高亮、代码折叠、代码补全和缩进,同时也拥有一些扩展插件来进行定制。它可以在 Windows 、Mac OS X 和 Linux 操作系统上运行。
**特性:**
* 支持多种类型的编译器如 GCC 、Visual C++ 、Borland C++ 、Watcom 、Intel C++ 等。主要针对 C++ 而设计,不过现在也支持其他的一些语言。
* 智能的调试器,允许用户借助本地函数符号和参数显示、用户自定义监视、调用堆栈、自定义内存转储、线程切换以及 GNU 调试器(GDB)接口来调试程序。
* 支持多种功能用来从 Dev-C++ 、Visual C++ 等平台迁移代码。
* 使用自定义系统和 XML 扩展文件来存储信息。
**最新版本:** 16.01
#### Greenfoot IDE

*图 9 : Greenfoot IDE 界面*
Greenfoot IDE 是由肯特大学的 Michael Kolling 设计。它是一款基于 Java 的跨平台 IDE ,针对中学和大学教育目的而设计。Greenfoot IDE 的功能有项目管理、代码自动补全、语法高亮并提供一个简易的 GUI 界面。
在 Greenfoot IDE 中编程,主要是编写两个主类 —— World 和 Actor —— 的子类。World 表示主要执行逻辑发生的类,Actor 则是存在并活动于 World 中的对象。
**特性:**
* 简单易用的 GUI ,比 BlueJ 和其他的 IDE 交互性更强。
* 易于新手和初学者上手。
* 在执行 Java 代码方面非常强大。
* 支持 GNOME/KDE/X11 图形环境。
* 其他功能包括项目管理、自动补全、语法高亮以及错误自动校正。
**最新版本:** 3.1.0
---
作者简介:
Anand Nayyar
作者是位于印度旁遮普邦的贾朗达尔学院计算机应用与 IT 系的教授助理。他热爱开源技术、嵌入式系统、云计算、无线传感器网络以及模拟器。可以在 [anand\[email protected]](mailto:[email protected]) 联系他。
---
via: <http://opensourceforu.com/2017/06/top-ides-raspberry-pi/>
作者:[Anand Nayyar](http://opensourceforu.com/author/anand-nayyar/) 译者:[softpaopao](https://github.com/softpaopao) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,887 | Linux 文件系统概览 | https://opensource.com/life/16/10/introduction-linux-filesystems | 2017-09-21T09:35:00 | [
"文件系统"
] | https://linux.cn/article-8887-1.html | 
本文旨在高屋建瓴地来讨论 Linux 文件系统概念,而不是对某种特定的文件系统,比如 EXT4 是如何工作的进行具体的描述。另外,本文也不是一个文件系统命令的教程。
每台通用计算机都需要将各种数据存储在硬盘驱动器(HDD)或其他类似设备上,比如 USB 存储器。这样做有两个原因。首先,当计算机关闭以后,内存(RAM)会失去存于它里面的内容。尽管存在非易失类型的 RAM,在计算机断电以后还能把数据存储下来(比如采用 USB 闪存和固态硬盘的闪存),但是,闪存和标准的、易失性的 RAM,比如 DDR3 以及其他相似类型的 RAM 相比,要贵很多。
数据需要存储在硬盘驱动器上的另一个原因是,即使是标准的 RAM 也要比普通硬盘贵得多。尽管 RAM 和硬盘的价格都在迅速下降,但以每字节的成本来算,RAM 仍然遥遥领先。让我们基于 16 GB RAM 的价格和 2 TB 硬盘驱动器的价格,做一个每字节成本的快速计算:计算显示 RAM 的单位价格大约比硬盘驱动器贵 71 倍。今天,一个典型的 RAM 的价格大约是 0.0000000043743750 美元/字节。
插一段历史小注,以便更直观地看待如今的 RAM 价格:在计算机发展的极早期,有一种内存是基于 CRT 屏幕上的点来实现的。这种内存非常昂贵,大约 1 美元*每比特*!
### 定义
你可能听过其他人以各种不同和令人迷惑的方式谈论过文件系统。文件系统这个单词本身有多重含义,你需要从一个讨论或文件的上下文中理解它的正确含义。
我将根据我所观察到的在不同情况下使用“文件系统”这个词来定义它的不同含义。注意,尽管我试图遵循标准的“官方”含义,但是我打算基于它的不同用法来定义这个术语(如下)。这就是说我将在本文的后续章节中进行更详细的探讨。
1. 始于顶层 root(`/`)目录的整个 Linux 目录结构。
2. 特定类型的数据存储格式,比如 EXT3、EXT4、BTRFS 以及 XFS 等等。Linux 支持近百种类型的文件系统,包括一些非常老的以及一些最新的。每一种文件系统类型都使用它自己独特的元数据结构来定义数据是如何存储和访问的。
3. 用特定类型的文件系统格式化后的分区或逻辑卷,可以挂载到 Linux 文件系统的指定挂载点上。
### 文件系统的基本功能
磁盘存储是文件系统必须的功能,它与之伴生的有一些有趣而且不可或缺的细节。很明显,文件系统是用来为非易失数据的存储提供空间,这是它的基本功能。然而,它还有许多从需求出发的重要功能。
所有文件系统都需要提供一个名字空间,这是一种命名和组织方法。它定义了文件应该如何命名,特别是文件名的最大长度,以及在全部可用字符中哪些字符可以用于文件名。它也定义了磁盘上数据的逻辑结构,比如使用目录来组织文件,而不是把所有文件聚集成一个单一的、巨大的文件混合体。
定义名字空间以后,还必须有元数据结构来为该名字空间提供逻辑基础。这包括:支持分层目录结构所需的数据结构;用来确定磁盘空间中哪些块已被使用、哪些仍然可用的结构;用来维护文件和目录名字的结构;关于文件的信息,比如文件大小以及创建、修改和最后访问的时间;以及文件的数据在磁盘上所处的一个或多个位置。其他的元数据用来存储关于磁盘细分的高级信息,比如逻辑卷和分区。这种更高层次的元数据以及它所代表的结构包含了描述存储在驱动器或分区中的文件系统的信息,但与文件系统自身的元数据相分离并彼此独立。
文件系统还需要一个应用程序接口(API),它提供了对操作文件系统对象(比如文件和目录)的系统功能调用的访问。API 提供了诸如创建、移动和删除文件之类的功能,也提供了用来决定诸如文件在文件系统中放置位置之类事情的算法。这样的算法可能会兼顾某些目标,比如速度或者最小化磁盘碎片。
现代文件系统还提供一个安全模型,这是一个定义文件和目录的访问权限的方案。Linux 文件系统安全模型确保用户只能访问自己的文件,而不能访问其他用户的文件或操作系统本身。
最后一块组成部分是实现这些所有功能所需要的软件。Linux 使用两层软件实现的方式来提高系统和程序员的效率。

*图片 1:Linux 两层文件系统软件实现。*
这两层中的第一层是 Linux 虚拟文件系统。虚拟文件系统为内核和开发者提供了访问所有类型文件系统的单一命令集。虚拟文件系统软件通过调用所需的特定设备驱动来和不同类型的文件系统进行交互。特定文件系统的设备驱动是第二层实现。设备驱动程序将文件系统命令的标准集解释为分区或逻辑卷上特定类型文件系统的命令。
### 目录结构
作为一个通常来说非常有条理的处女座,我喜欢将东西存储在更小的、有组织的小容器中,而不是存于同一个大容器中。目录的使用使我能够存储文件并在我想要查看这些文件的时候也能够找到它们。目录也被称为文件夹,之所以被称为文件夹,是因为其中的文件被类比存放于物理桌面上。
在 Linux 和其他许多操作系统中,目录可以被组织成树状的分层结构。在 [Linux 文件系统层次标准](http://www.pathname.com/fhs/)中定义了 Linux 的目录结构(LCTT 译注:可参阅[这篇](/article-6132-1.html))。当通过目录引用来访问目录时,更深层目录名字是通过正斜杠(/)来连接,从而形成一个序列,比如 `/var/log` 和 `/var/spool/mail` 。这些被称为路径。
下表提供了标准的、众所周知的、预定义的顶层 Linux 目录及其用途的简要清单。
| 目录 | 描述 |
| --- | --- |
| **/ (root 文件系统)** | root 文件系统是文件系统的顶级目录。它必须包含在挂载其它文件系统前需要用来启动 Linux 系统的全部文件。它必须包含需要用来启动剩余文件系统的全部可执行文件和库。文件系统启动以后,所有其他文件系统作为 root 文件系统的子目录挂载到标准的、预定义好的挂载点上。 |
| **/bin** | `/bin` 目录包含用户的可执行文件。 |
| /boot | 包含启动 Linux 系统所需要的静态引导程序和内核可执行文件以及配置文件。 |
| **/dev** | 该目录包含每一个连接到系统的硬件设备的设备文件。这些文件不是设备驱动,而是代表计算机上的每一个计算机能够访问的设备。 |
| **/etc** | 包含主机计算机的本地系统配置文件。 |
| /home | 主目录存储用户文件,每一个用户都有一个位于 `/home` 目录中的子目录(作为其主目录)。 |
| **/lib** | 包含启动系统所需要的共享库文件。 |
| /media | 一个挂载外部可移动设备的地方,比如主机可能连接了一个 USB 驱动器。 |
| /mnt | 一个普通文件系统的临时挂载点(如不可移动的介质),当管理员对一个文件系统进行修复或在其上工作时可以使用。 |
| /opt | 可选文件,比如供应商提供的应用程序应该安装在这儿。 |
| **/root** | 这不是 root(`/`)文件系统。它是 root 用户的主目录。 |
| **/sbin** | 系统二进制文件。这些是用于系统管理的可执行文件。 |
| /tmp | 临时目录。被操作系统和许多程序用来存储临时文件。用户也可能临时在这儿存储文件。注意,存储在这儿的文件可能在任何时候在没有通知的情况下被删除。 |
| /usr | 该目录里面包含可共享的、只读的文件,包括可执行二进制文件和库、man 文件以及其他类型的文档。 |
| /var | 可变数据文件存储在这儿。这些文件包括日志文件、MySQL 和其他数据库的文件、Web 服务器的数据文件、邮件以及更多。 |
*表 1:Linux 文件系统层次结构的顶层*
这些目录以及它们的子目录如表 1 所示,在所有子目录中,粗体的目录组成了 root 文件系统的必需部分。也就是说,它们不能创建为一个分离的文件系统并且在开机时进行挂载。这是因为它们(特别是它们包含的内容)必须在系统启动的时候出现,从而系统才能正确启动。
`/media` 目录和 `/mnt` 目录是 root 文件系统的一部分,但是它们从来不包含任何数据,因为它们只是一个临时挂载点。
表 1 中剩下的非粗体的目录不需要在引导过程中出现,但会在之后的启动阶段被挂载上来,这个启动阶段会为主机执行有用的工作做好准备。
请参考官方 [Linux 文件系统层次标准](http://www.pathname.com/fhs/)(FHS)网页来了解这些每一个目录以及它们的子目录的更多细节。维基百科上也有关于 [FHS](https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard) 的一个很好的介绍。应该尽可能的遵循这些标准,从而确保操作和功能的一致性。无论在主机上使用什么类型的文件系统,该层次目录结构都是相同的。
### Linux 统一目录结构
在一些非 Linux 操作系统的个人电脑上,如果有多个物理硬盘驱动器或多个分区,每一个硬盘或分区都会分配一个驱动器号。你必须知道文件或程序位于哪一个硬盘驱动器上,比如 `C:` 或 `D:`。之后,你要先把驱动器号作为命令输入,比如输入 `D:` 来切换到 D: 驱动器,然后再使用 `cd` 命令切换到正确的目录,从而定位需要的文件。每一个硬盘驱动器都有自己单独的、完整的目录树。
Linux 文件系统将所有物理硬盘驱动器和分区统一为一个目录结构。它们均从顶层 root 目录(`/`)开始。所有其它目录以及它们的子目录均位于单一的 Linux 根目录下。这意味着只有一棵目录树来搜索文件和程序。
因为只有一个文件系统,所以 `/home`、`/tmp`、`/var`、`/opt` 或 `/usr` 能够创建在和 root(`/`)文件系统不同的物理硬盘驱动器、分区或逻辑分区上,然后挂载到一个挂载点(目录)上,从而作为 root 文件系统树的一部分。甚至可移动驱动器,比如 USB 驱动器或一个外接的 USB 或 ESATA 硬盘驱动器均可以挂载到 root 文件系统上,成为目录树不可或缺的部分。
当从 Linux 发行版的一个版本升级到另一个版本,或从一个发行版更换到另一个发行版的时候,就会很清楚地看到把这些目录创建为不同分区的好处。一般来说,撇开像 Fedora 中的 `dnf-upgrade` 之类的升级工具不谈,在升级时偶尔重新格式化包含操作系统的硬盘驱动器,以彻底清除那些日积月累的垃圾,是一种明智的做法。如果 `/home` 目录是 root 文件系统的一部分(位于同一个硬盘驱动器),那么它也会被格式化,然后需要通过之前的备份恢复。如果 `/home` 目录作为一个分离的文件系统,那么安装程序将会识别到,并跳过它的格式化。对于存储数据库、邮箱、网页和其它可变的用户以及系统数据的 `/var` 目录也是这样的。
将 Linux 系统目录树的某些部分作为一个分离的文件系统还有一些其他原因。比如,在很久以前,我还不知道将所有需要的 Linux 目录均作为 root(`/`)文件系统的一部分可能存在的问题,于是,一些非常大的文件填满了 `/home` 目录。因为 `/home` 目录和 `/tmp` 目录均不是分离的文件系统,而是 root 文件系统的简单子目录,整个 root 文件系统就被填满了。于是就不再有剩余空间可以让操作系统用来存储临时文件或扩展已存在数据文件。首先,应用程序开始抱怨没有空间来保存文件,然后,操作系统也开始异常行动。启动到单用户模式,并清除了 `/home` 目录中的多余文件之后,终于又能够重新工作了。然后,我使用非常标准的多重文件系统设置来重新安装 Linux 系统,从而避免了系统崩溃的再次发生。
我曾经遇到一个情况,Linux 主机还在运行,但是却不允许用户通过 GUI 桌面登录。我可以通过[虚拟控制台](https://en.wikipedia.org/wiki/Virtual_console)之一在本地使用命令行界面(CLI)登录,也可以远程使用 SSH 登录。问题的原因是 `/tmp` 文件系统满了,因此 GUI 桌面登录时所需要的一些临时文件不能被创建。因为命令行界面登录不需要在 `/tmp` 目录中创建文件,所以空间不足并不会阻止我使用命令行界面来登录。在这种情况下,`/tmp` 目录是一个分离的文件系统,而且在 `/tmp` 逻辑卷所属的卷组中还有大量的可用空间。我简单地[扩展了 /tmp 逻辑卷](https://opensource.com/business/16/9/linux-users-guide-lvm)的容量,使其足以容纳主机所需要的临时文件,于是问题便解决了。注意,这个解决方法不需要重启,当 `/tmp` 文件系统扩大以后,用户就可以登录到桌面了。
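作为参考,在 LVM 上扩展一个已挂载的 EXT4 文件系统大致就是下面两步(这里的卷组名 `vg01`、逻辑卷名 `tmp` 以及扩容大小都是假设的示例值;如果是 XFS 文件系统,则应改用 `xfs_growfs`):

```
$ sudo lvextend -L +2G /dev/vg01/tmp
$ sudo resize2fs /dev/vg01/tmp
```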
当我在一家很大的科技公司当实验室管理员的时候,遇到过另外一个故障。开发者将一个应用程序安装到了一个错误的位置(`/var`)。结果该应用程序崩溃了,因为 `/var` 文件系统满了,由于缺乏空间,存储于 `/var/log` 中的日志文件无法附加新的日志消息。然而,系统仍然在运行,因为 root 文件系统和 `/tmp` 文件系统还没有被填满。删除了该应用程序并重新安装在 `/opt` 文件系统后,问题便解决了。
### 文件系统类型
Linux 系统支持大约 100 种分区类型的读取,但是只能对很少的一些进行创建和写操作。但是,可以挂载不同类型的文件系统在同一个 root 文件系统上,并且是很常见的。在这样的背景下,我们所说的文件系统一词是指在硬盘驱动器或逻辑卷上的一个分区中存储和管理用户数据所需要的结构和元数据。能够被 Linux 系统的 `fdisk` 命令识别的文件系统类型的完整列表[在此](https://www.win.tue.nl/%7Eaeb/partitions/partition_types-1.html),你可以感受一下 Linux 系统对许多类型的系统的高度兼容性。
Linux 支持读取这么多类型的分区系统的主要目的是为了提高兼容性,从而至少能够与一些其他计算机系统的文件系统进行交互。下面列出了在 Fedora 中创建一个新的文件系统时的所有可选类型:
* btrfs
* **cramfs**
* **ext2**
* **ext3**
* **ext4**
* fat
* gfs2
* hfsplus
* minix
* **msdos**
* ntfs
* reiserfs
* **vfat**
* xfs
其他发行版支持创建的文件系统类型不同。比如,CentOS 6 只支持创建上表中标为黑体的文件系统类型。
### 挂载
在 Linux 系统上“<ruby> 挂载 <rt> mount </rt></ruby>”文件系统的术语是指在计算机发展的早期,磁带或可移动的磁盘组需要物理地挂载到一个合适的驱动器设备上。当通过物理的方式放置到驱动器上以后,操作系统会逻辑地挂载位于磁盘上的文件系统,从而操作系统、应用程序和用户才能够访问文件系统中的内容。
一个挂载点简单的来说就是一个目录,就像任何其它目录一样,是作为 root 文件系统的一部分创建的。所以,比如,home 文件系统是挂载在目录 `/home` 下。文件系统可以被挂载到其他非 root 文件系统的挂载点上,但是这并不常见。
在 Linux 系统启动阶段的最初阶段,root 文件系统就会被挂载到 root 目录下(`/`)。其它文件系统在之后通过 SystemV 下的 `rc` 或更新一些的 Linux 发行版中的 `systemd` 等 Linux 启动程序挂载。在启动进程中文件系统的挂载是由 `/etc/fstab` 配置文件管理的。一个简单的记忆方法是,fstab 代表“<ruby> 文件系统表 <rt> file system table </rt></ruby>”,它包含了需要挂载的文件系统的列表,这些文件系统均指定了挂载点,以及针对特定文件系统可能需要的选项。
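一条典型的 `/etc/fstab` 条目看起来像下面这样(其中的设备名、挂载点和文件系统类型都只是示例,实际内容取决于你的系统):

```
# <设备>                 <挂载点>   <类型>   <选项>      <dump>  <pass>
/dev/mapper/vg01-home    /home      ext4     defaults    1       2
```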
使用 `mount` 命令可以把文件系统挂载到一个已有的目录/挂载点上。通常情况下,任何作为挂载点的目录都应该是空的且不包含任何其他文件。Linux 系统不会阻止用户挂载一个已被挂载了文件系统的目录或将文件系统挂载到一个包含文件的目录上。如果你将文件系统挂载到一个已有的目录或文件系统上,那么其原始内容将会被隐藏,只有新挂载的文件系统的内容是可见的。
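手动挂载和卸载一个文件系统的基本用法如下(设备名和挂载点同样只是示例):

```
$ sudo mount /dev/sdb1 /mnt
$ mount | grep /mnt      # 确认已经挂载
$ sudo umount /mnt
```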
### 结论
我希望通过这篇文章,阐明了围绕文件系统这个术语的一些可能的模糊之处。我花费了很长的时间,以及在一个良师的帮助下才真正理解和欣赏到 Linux 文件系统的复杂性、优雅性和功能以及它的全部含义。
如果你有任何问题,请写到下面的评论中,我会尽力来回答它们。
### 下个月
Linux 的另一个重要概念是:[万物皆为文件](https://opensource.com/life/15/9/everything-is-a-file)。这个概念对用户和系统管理员来说有一些有趣和重要的实际应用。当我说完这个理由之后,你可能会想阅读我的文章:[万物皆为文件](https://opensource.com/life/15/9/everything-is-a-file),这篇文章会在我下个月计划写的关于 `/dev` 目录的文章之前写完。(LCTT 译注,也可参阅[这篇](/article-7669-1.html))
(题图 : wallup.net)
---
作者简介:
David Both 居住在美国北卡罗来纳州的首府罗利,是一个 Linux 开源贡献者。他已经从事 IT 行业 40 余年,在 IBM 教授 OS/2 20 余年。1981 年,他在 IBM 开发了第一个关于最初的 IBM 个人电脑的培训课程。他也曾在 Red Hat 教授 RHCE 课程,也曾供职于 MCI Worldcom、Cisco 以及北卡罗来纳州政府等。他已经为 Linux 开源社区工作近 20 年。
---
via: <https://opensource.com/life/16/10/introduction-linux-filesystems>
作者:[David Both](https://opensource.com/users/dboth) 译者:[ucasFL](https://github.com/ucasFL) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | This article is intended to be a very high-level discussion of Linux filesystem concepts. It is not intended to be a low-level description of how a particular filesystem type, such as EXT4, works, nor is it intended to be a tutorial of filesystem commands.
Every general-purpose computer needs to store data of various types on a hard disk drive (HDD) or some equivalent, such as a USB memory stick. There are a couple reasons for this. First, RAM loses its contents when the computer is switched off. There are non-volatile types of RAM that can maintain the data stored there after power is removed (such as flash RAM that is used in USB memory sticks and solid state drives), but flash RAM is much more expensive than standard, volatile RAM like DDR3 and other, similar types.
The second reason that data needs to be stored on hard drives is that even standard RAM is still more expensive than disk space. Both RAM and disk costs have been dropping rapidly, but RAM still leads the way in terms of cost per byte. A quick calculation of the cost per byte, based on costs for 16GB of RAM vs. a 2TB hard drive, shows that the RAM is about 71 times more expensive per unit than the hard drive. A typical cost for RAM is around $0.0000000043743750 per byte today.
For a quick historical note to put present RAM costs in perspective, in the very early days of computing, one type of memory was based on dots on a CRT screen. This was very expensive at about $1.00 *per bit*!
## Definitions
You may hear people talk about filesystems in a number of different and confusing ways. The word itself can have multiple meanings, and you may have to discern the correct meaning from the context of a discussion or document.
I will attempt to define the various meanings of the word "filesystem" based on how I have observed it being used in different circumstances. Note that while attempting to conform to standard "official" meanings, my intent is to define the term based on its various usages. These meanings will be explored in greater detail in the following sections of this article.
- The entire Linux directory structure starting at the top (/) root directory.
- A specific type of data storage format, such as EXT3, EXT4, BTRFS, XFS, and so on. Linux supports almost 100 types of filesystems, including some very old ones as well as some of the newest. Each of these filesystem types uses its own metadata structures to define how the data is stored and accessed.
- A partition or logical volume formatted with a specific type of filesystem that can be mounted on a specified mount point on a Linux filesystem.
## Basic filesystem functions
Disk storage is a necessity that brings with it some interesting and inescapable details. Obviously, a filesystem is designed to provide space for non-volatile storage of data; that is its ultimate function. However, there are many other important functions that flow from that requirement.
All filesystems need to provide a namespace—that is, a naming and organizational methodology. This defines how a file can be named, specifically the length of a filename and the subset of characters that can be used for filenames out of the total set of characters available. It also defines the logical structure of the data on a disk, such as the use of directories for organizing files instead of just lumping them all together in a single, huge conglomeration of files.
Once the namespace has been defined, a metadata structure is necessary to provide the logical foundation for that namespace. This includes the data structures required to support a hierarchical directory structure; structures to determine which blocks of space on the disk are used and which are available; structures that allow for maintaining the names of the files and directories; information about the files such as their size and times they were created, modified or last accessed; and the location or locations of the data belonging to the file on the disk. Other metadata is used to store high-level information about the subdivisions of the disk, such as logical volumes and partitions. This higher-level metadata and the structures it represents contain the information describing the filesystem stored on the drive or partition, but is separate from and independent of the filesystem metadata.
Filesystems also require an Application Programming Interface (API) that provides access to system function calls which manipulate filesystem objects like files and directories. APIs provide for tasks such as creating, moving, and deleting files. It also provides algorithms that determine things like where a file is placed on a filesystem. Such algorithms may account for objectives such as speed or minimizing disk fragmentation.
Modern filesystems also provide a security model, which is a scheme for defining access rights to files and directories. The Linux filesystem security model helps to ensure that users only have access to their own files and not those of others or the operating system itself.
The final building block is the software required to implement all of these functions. Linux uses a two-part software implementation as a way to improve both system and programmer efficiency.

Figure 1: The Linux two-part filesystem software implementation.
The first part of this two-part implementation is the Linux virtual filesystem. This virtual filesystem provides a single set of commands for the kernel, and developers, to access all types of filesystems. The virtual filesystem software calls the specific device driver required to interface to the various types of filesystems. The filesystem-specific device drivers are the second part of the implementation. The device driver interprets the standard set of filesystem commands to ones specific to the type of filesystem on the partition or logical volume.
## Directory structure
As a usually very organized Virgo, I like things stored in smaller, organized groups rather than in one big bucket. The use of directories helps me to be able to store and then locate the files I want when I am looking for them. Directories are also known as folders because they can be thought of as folders in which files are kept in a sort of physical desktop analogy.
In Linux and many other operating systems, directories can be structured in a tree-like hierarchy. The Linux directory structure is well defined and documented in the [Linux Filesystem Hierarchy Standard](http://www.pathname.com/fhs/) (FHS). Referencing those directories when accessing them is accomplished by using the sequentially deeper directory names connected by forward slashes (/) such as /var/log and /var/spool/mail. These are called paths.
The following table provides a very brief list of the standard, well-known, and defined top-level Linux directories and their purposes.
Directory | Description |
---|---|
/ (root filesystem) | The root filesystem is the top-level directory of the filesystem. It must contain all of the files required to boot the Linux system before other filesystems are mounted. It must include all of the required executables and libraries required to boot the remaining filesystems. After the system is booted, all other filesystems are mounted on standard, well-defined mount points as subdirectories of the root filesystem. |
/bin | The /bin directory contains user executable files. |
/boot | Contains the static bootloader and kernel executable and configuration files required to boot a Linux computer. |
/dev | This directory contains the device files for every hardware device attached to the system. These are not device drivers, rather they are files that represent each device on the computer and facilitate access to those devices. |
/etc | Contains the local system configuration files for the host computer. |
/home | Home directory storage for user files. Each user has a subdirectory in /home. |
/lib | Contains shared library files that are required to boot the system. |
/media | A place to mount external removable media devices such as USB thumb drives that may be connected to the host. |
/mnt | A temporary mountpoint for regular filesystems (as in not removable media) that can be used while the administrator is repairing or working on a filesystem. |
/opt | Optional files such as vendor supplied application programs should be located here. |
/root | This is not the root (/) filesystem. It is the home directory for the root user. |
/sbin | System binary files. These are executables used for system administration. |
/tmp | Temporary directory. Used by the operating system and many programs to store temporary files. Users may also store files here temporarily. Note that files stored here may be deleted at any time without prior notice. |
/usr | These are shareable, read-only files, including executable binaries and libraries, man files, and other types of documentation. |
/var | Variable data files are stored here. This can include things like log files, MySQL, and other database files, web server data files, email inboxes, and much more. |
Table 1: The top level of the Linux filesystem hierarchy.
The directories and their subdirectories shown in Table 1, along with their subdirectories, that have a teal background are considered an integral part of the root filesystem. That is, they cannot be created as a separate filesystem and mounted at startup time. This is because they (specifically, their contents) must be present at boot time in order for the system to boot properly.
The /media and /mnt directories are part of the root filesystem, but they should never contain any data. Rather, they are simply temporary mount points.
The remaining directories, those that have no background color in Table 1 do not need to be present during the boot sequence, but will be mounted later, during the startup sequence that prepares the host to perform useful work.
Be sure to refer to the official [Linux Filesystem Hierarchy Standard](http://www.pathname.com/fhs/) (FHS) web page for details about each of these directories and their many subdirectories. Wikipedia also has a good description of the [FHS](https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard). This standard should be followed as closely as possible to ensure operational and functional consistency. Regardless of the filesystem types used on a host, this hierarchical directory structure is the same.
## Linux unified directory structure
In some non-Linux PC operating systems, if there are multiple physical hard drives or multiple partitions, each disk or partition is assigned a drive letter. It is necessary to know on which hard drive a file or program is located, such as C: or D:. Then you issue the drive letter as a command, **D:**, for example, to change to the D: drive, and then you use the **cd** command to change to the correct directory to locate the desired file. Each hard drive has its own separate and complete directory tree.
The Linux filesystem unifies all physical hard drives and partitions into a single directory structure. It all starts at the top–the root (/) directory. All other directories and their subdirectories are located under the single Linux root directory. This means that there is only one single directory tree in which to search for files and programs.
This can work only because a filesystem, such as /home, /tmp, /var, /opt, or /usr can be created on separate physical hard drives, a different partition, or a different logical volume from the / (root) filesystem and then be mounted on a mountpoint (directory) as part of the root filesystem tree. Even removable drives such as a USB thumb drive or an external USB or ESATA hard drive will be mounted onto the root filesystem and become an integral part of that directory tree.
One good reason to do this is apparent during an upgrade from one version of a Linux distribution to another, or changing from one distribution to another. In general, and aside from any upgrade utilities like dnf-upgrade in Fedora, it is wise to occasionally reformat the hard drive(s) containing the operating system during an upgrade to positively remove any cruft that has accumulated over time. If /home is part of the root filesystem it will be reformatted as well and would then have to be restored from a backup. By having /home as a separate filesystem, it will be known to the installation program as a separate filesystem and formatting of it can be skipped. This can also apply to /var where database, email inboxes, website, and other variable user and system data are stored.
There are other reasons for maintaining certain parts of the Linux directory tree as separate filesystems. For example, a long time ago, when I was not yet aware of the potential issues surrounding having all of the required Linux directories as part of the / (root) filesystem, I managed to fill up my home directory with a large number of very big files. Since neither the /home directory nor the /tmp directory were separate filesystems but simply subdirectories of the root filesystem, the entire root filesystem filled up. There was no room left for the operating system to create temporary files or to expand existing data files. At first, the application programs started complaining that there was no room to save files, and then the OS itself started to act very strangely. Booting to single-user mode and clearing out the offending files in my home directory allowed me to get going again. I then reinstalled Linux using a pretty standard multi-filesystem setup and was able to prevent complete system crashes from occurring again.
I once had a situation where a Linux host continued to run, but prevented the user from logging in using the GUI desktop. I was able to log in using the command line interface (CLI) locally using one of the [virtual consoles](https://en.wikipedia.org/wiki/Virtual_console), and remotely using SSH. The problem was that the /tmp filesystem had filled up and some temporary files required by the GUI desktop could not be created at login time. Because the CLI login did not require files to be created in /tmp, the lack of space there did not prevent me from logging in using the CLI. In this case, the /tmp directory was a separate filesystem and there was plenty of space available in the volume group the /tmp logical volume was a part of. I simply [expanded the /tmp logical volume](https://opensource.com/business/16/9/linux-users-guide-lvm) to a size that accommodated my fresh understanding of the amount of temporary file space needed on that host and the problem was solved. Note that this solution did not require a reboot, and as soon as the /tmp filesystem was enlarged the user was able to login to the desktop.
Another situation occurred while I was working as a lab administrator at one large technology company. One of our developers had installed an application in the wrong location (/var). The application was crashing because the /var filesystem was full and the log files, which are stored in /var/log on that filesystem, could not be appended with new messages due to the lack of space. However, the system remained up and running because the critical / (root) and /tmp filesystems did not fill up. Removing the offending application and reinstalling it in the /opt filesystem resolved that problem.
## Filesystem types
Linux supports reading around 100 partition types; it can create and write to only a few of these. But it is possible—and very common—to mount filesystems of different types on the same root filesystem. In this context we are talking about filesystems in terms of the structures and metadata required to store and manage the user data on a partition of a hard drive or a logical volume. The complete list of filesystem partition types recognized by the Linux **fdisk** command is provided here, so that you can get a feel for the high degree of compatibility that Linux has with very many types of systems.
```
0 Empty 24 NEC DOS 81 Minix / old Lin bf Solaris
1 FAT12 27 Hidden NTFS Win 82 Linux swap / So c1 DRDOS/sec (FAT-
2 XENIX root 39 Plan 9 83 Linux c4 DRDOS/sec (FAT-
3 XENIX usr 3c PartitionMagic 84 OS/2 hidden or c6 DRDOS/sec (FAT-
4 FAT16 <32M 40 Venix 80286 85 Linux extended c7 Syrinx
5 Extended 41 PPC PReP Boot 86 NTFS volume set da Non-FS data
6 FAT16 42 SFS 87 NTFS volume set db CP/M / CTOS / .
7 HPFS/NTFS/exFAT 4d QNX4.x 88 Linux plaintext de Dell Utility
8 AIX 4e QNX4.x 2nd part 8e Linux LVM df BootIt
9 AIX bootable 4f QNX4.x 3rd part 93 Amoeba e1 DOS access
a OS/2 Boot Manag 50 OnTrack DM 94 Amoeba BBT e3 DOS R/O
b W95 FAT32 51 OnTrack DM6 Aux 9f BSD/OS e4 SpeedStor
c W95 FAT32 (LBA) 52 CP/M a0 IBM Thinkpad hi ea Rufus alignment
e W95 FAT16 (LBA) 53 OnTrack DM6 Aux a5 FreeBSD eb BeOS fs
f W95 Ext'd (LBA) 54 OnTrackDM6 a6 OpenBSD ee GPT
10 OPUS 55 EZ-Drive a7 NeXTSTEP ef EFI (FAT-12/16/
11 Hidden FAT12 56 Golden Bow a8 Darwin UFS f0 Linux/PA-RISC b
12 Compaq diagnost 5c Priam Edisk a9 NetBSD f1 SpeedStor
14 Hidden FAT16 <3 61 SpeedStor ab Darwin boot f4 SpeedStor
16 Hidden FAT16 63 GNU HURD or Sys af HFS / HFS+ f2 DOS secondary
17 Hidden HPFS/NTF 64 Novell Netware b7 BSDI fs fb VMware VMFS
18 AST SmartSleep 65 Novell Netware b8 BSDI swap fc VMware VMKCORE
1b Hidden W95 FAT3 70 DiskSecure Mult bb Boot Wizard hid fd Linux raid auto
1c Hidden W95 FAT3 75 PC/IX bc Acronis FAT32 L fe LANstep
1e Hidden W95 FAT1 80 Old Minix be Solaris boot ff BBT
```
The main purpose in supporting the ability to read so many partition types is to allow for compatibility and at least some interoperability with other computer systems' filesystems. The choices available when creating a new filesystem with Fedora are shown in the following list.
- btrfs
**cramfs****ext2****ext3****ext4**- fat
- gfs2
- hfsplus
- minix
**msdos**- ntfs
- reiserfs
**vfat**- xfs
Other distributions support creating different filesystem types. For example, CentOS 6 supports creating only those filesystems highlighted in bold in the above list.
## Mounting
The term "to mount" a filesystem in Linux refers back to the early days of computing when a tape or removable disk pack would need to be physically mounted on an appropriate drive device. After being physically placed on the drive, the filesystem on the disk pack would be logically mounted by the operating system to make the contents available for access by the OS, application programs and users.
A mount point is simply a directory, like any other, that is created as part of the root filesystem. So, for example, the home filesystem is mounted on the directory /home. Filesystems can be mounted at mount points on other non-root filesystems but this is less common.
The Linux root filesystem is mounted on the root directory (/) very early in the boot sequence. Other filesystems are mounted later, by the Linux startup programs, either **rc** under SystemV or by **systemd** in newer Linux releases. Mounting of filesystems during the startup process is managed by the /etc/fstab configuration file. An easy way to remember that is that fstab stands for "file system table," and it is a list of filesystems that are to be mounted, their designated mount points, and any options that might be needed for specific filesystems.
Filesystems are mounted on an existing directory/mount point using the **mount** command. In general, any directory that is used as a mount point should be empty and not have any other files contained in it. Linux will not prevent users from mounting one filesystem over one that is already there or on a directory that contains files. If you mount a filesystem on an existing directory or filesystem, the original contents will be hidden and only the content of the newly mounted filesystem will be visible.
## Conclusion
I hope that some of the possible confusion surrounding the term filesystem has been cleared up by this article. It took a long time and a very helpful mentor for me to truly understand and appreciate the complexity, elegance, and functionality of the Linux filesystem in all of its meanings.
If you have questions, please add them to the comments below and I will try to answer them.
## Next month
Another important concept is that for Linux, everything is a file. This concept has some interesting and important practical applications for users and system admins. The reason I mention this is that you might want to read my "[Everything is a file](https://opensource.com/life/15/9/everything-is-a-file)" article before the article I am planning for next month on the /dev directory.
|
8,888 | Docker 引擎的 Swarm 模式:入门教程 | http://www.dedoimedo.com/computers/docker-swarm-intro.html | 2017-09-21T08:54:00 | [
"Docker",
"Swarm",
"编排"
] | https://linux.cn/article-8888-1.html | 
Swarm,听起来像是一个朋克摇滚乐队。但它确实是个新的编排机制,或者说,是对 [Docker](http://www.dedoimedo.com/computers/docker-guide.html) 现有编排机制的一种改进。简单来讲,如果你在用一个旧版本的 Docker,你必须手动配置 Swarm 来创建 Docker 集群。从 [1.12 版](https://blog.docker.com/2016/06/docker-1-12-built-in-orchestration/)开始,Docker 引擎集成了一个原生的实现(LCTT 译注:见下文)来支持无缝的集群设置。这也就是为什么会有这篇文章。
在这篇教程中,我将带你体验一下编排后的 Docker 将能做的事情。这篇文章并不是包含所有细节(如 BnB 一般)或是让你对其全知全能,但它能带你踏上你的集群之路。在我的带领下开始吧。

### 技术概要
如果把 Docker 详细而又好用的文档照搬到这里,那未免太丢人了,所以我只对这项技术做一个简要的概括。我们已经有了 Docker,对吧。现在,你想要更多的服务器作为 Docker 主机,但同时你希望它们属于同一个逻辑上的实体。也就是说,你想建立一个集群。

我们先从一个主机组成的集群开始。当你在一个主机上初始化一个 Swarm 集群,这台主机将成为这个集群的<ruby> 管理者 <rp> ( </rp> <rt> manager </rt> <rp> ) </rp></ruby>。从技术角度来讲,它成为了<ruby> 共识组 <rp> ( </rp> <rt> consensus group </rt> <rp> ) </rp></ruby>中的一个<ruby> 节点 <rt> node </rt></ruby>。其背后的数学逻辑建立在 [Raft](https://en.wikipedia.org/wiki/Raft_%28computer_science%29) 算法之上。<ruby> 管理者 <rp> ( </rp> <rt> manager </rt> <rp> ) </rp></ruby>负责调度任务。而具体的任务则会委任给各个加入了 Swarm 集群的<ruby> 工作者 <rp> ( </rp> <rt> worker </rt> <rp> ) </rp></ruby>节点。这些操作将由 Node API 所管理。虽说我讨厌 API 这个词汇,但我必须在这里用到它。
Service API 是这个实现中的第二个组件。它允许<ruby> 管理者 <rp> ( </rp> <rt> manager </rt> <rp> ) </rp></ruby>节点在所有的 Swarm 集群节点上创建一个分布式的服务。这个服务可以<ruby> 被复制 <rp> ( </rp> <rt> replicated </rt> <rp> ) </rp></ruby>,也就是说它们(LCTT 译注:指这些服务)会由平衡机制被分配到集群中(LCTT 译注:指 replicated 模式,多个容器实例将会自动调度任务到集群中的一些满足条件的节点),或者可以分配给全局(LCTT 译注:指 global 模式),也就是说每个节点都会运行一个容器实例。
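如果用命令行来表达,这两种模式大致对应下面两种写法(这里的服务名和镜像名都只是占位示例,后文会用我们自己构建的镜像来实际操作):

```
$ docker service create --replicas 3 --name web my-image    # 复制模式:按指定的副本数调度
$ docker service create --mode global --name agent my-image # 全局模式:每个节点各运行一个实例
```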
此外还有更多的功课需要做,但这些信息已经足够你上路了。现在,我们开始整些实际的。我们的目标平台是 [CentOS 7.2](http://www.dedoimedo.com/computers/lenovo-g50-centos-xfce.html),有趣的是在我写这篇教程的时候,它的软件仓库中只有 1.10 版的 Docker,也就是说我必须手动更新以使用 Swarm。我们将在另一篇教程中讨论这个问题。接下来我们还有一个跟进的指南,其中涵盖了如何将新的节点加入我们现有的集群(LCTT 译注:指刚刚建立的单节点集群),并且我们将使用 [Fedora](http://www.dedoimedo.com/computers/fedora-24-gnome.html) 进行一个非对称的配置。至此,请确保正确的配置已经就位,并有一个工作的集群启动并正在运行(LCTT 译注:指第一个节点的 Docker 已经安装并已进入 Swarm 模式,但到这里笔者并没有介绍如何初始化 Swarm 集群,不过别担心下章会讲)。
### 配置镜像和服务
我将尝试配置一个负载均衡的 [Apache](https://hub.docker.com/_/httpd/) 服务,并使用多个容器实例通过唯一的 IP 地址提供页面内容。挺标准的吧(LCTT 译注:指这个负载均衡的网页服务器)。这个例子同时也突出了你想要使用集群的大多数原因:可用性、冗余、横向扩展以及性能。当然,你同时需要考虑[网络](http://www.dedoimedo.com/computers/docker-networking.html)和[储存](http://www.dedoimedo.com/computers/docker-data-volumes.html)这两块,但它们超出了这篇指南所涉及的范围了。
这个 Dockerfile 模板其实可以在官方镜像仓库里的 httpd 下找到。你只需一个最简单的设置就能起步。至于如何下载或创建自己的镜像,请参考我的入门指南,链接可以在这篇教程的顶部找到。
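官方文档里给出的最小 Dockerfile 大致就是下面这两行(`public-html` 目录中放上你要发布的页面即可):

```
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
```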
```
docker build -t my-apache2 .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM httpd:2.4
Trying to pull repository docker.io/library/httpd ...
2.4: Pulling from docker.io/library/httpd
8ad8b3f87b37: Pull complete
c95e1f92326d: Pull complete
96e8046a7a4e: Pull complete
00a0d292c371: Pull complete
3f7586acab34: Pull complete
Digest: sha256:3ad4d7c4f1815bd1c16788a57f81b413...a915e50a0d3a4
Status: Downloaded newer image for docker.io/httpd:2.4
---> fe3336dd034d
Step 2 : COPY ../public-html/ /usr/local/apache2/htdocs/
...
```

在你继续下面的步骤之前,你应该确保你能无错误的启动一个容器实例并能链接到这个网页服务器上(LCTT 译注:使用下面的命令)。一旦你确保你能连上,我们就可以开始着手创建一个分布式的服务。
```
docker run -dit --name my-running-app my-apache2
```
将这个 IP 地址输入浏览器,看看会出现什么。
### Swarm 初始化和配置
下一步就是启动 Swarm 集群了。你将需要这些最基础的命令来开始,它们与 Docker 博客中的例子非常相似:
```
docker service create --name frontend --replicas 5 -p 80:80/tcp my-apache2:latest
```
这里我们做了什么?我们创建了一个叫做 `frontent` 的服务,它有五个容器实例。同时我们还将主机的 80 端口和这些容器的 80 端口相绑定。我们将使用刚刚新创建的 Apache 镜像来做这个测试。然而,当你在自己的电脑上直接键入上面的指令时,你将看到下面的错误:
```
docker service create --name frontend --replicas 5 -p 80:80/tcp my-apache2:latest
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
```
这意味着你没有将你的主机(节点)配置成一个 Swarm <ruby> 管理者 <rp> ( </rp> <rt> manager </rt> <rp> ) </rp></ruby>。你可以在这台主机上初始化 Swarm 集群或是让它加入一个现有的集群。由于我们目前还没有一个现成的集群,我们将初始化它(LCTT 译注:指初始化 Swarm 集群并使当前节点成为 manager):
```
docker swarm init
Swarm initialized: current node (dm58mmsczqemiikazbfyfwqpd) is now a manager.
```
为了向这个 Swarm 集群添加一个<ruby> 工作者 <rp> ( </rp> <rt> worker </rt> <rp> ) </rp></ruby>,请执行下面的指令:
```
docker swarm join \
--token SWMTKN-1-4ofd46a2nfyvrqwu8w5oeetukrbylyznxla
9srf9vxkxysj4p8-eu5d68pu5f1ci66s7w4wjps1u \
10.0.2.15:2377
```
为了向这个 Swarm 集群添加一个<ruby> 管理者 <rp> ( </rp> <rt> manager </rt> <rp> ) </rp></ruby>,请执行 `docker swarm join-token manager` 并按照指示操作。
操作后的输出不用解释已经很清楚明了。我们成功的创建了一个 Swarm 集群。新的节点们将需要正确的<ruby> 令牌 <rp> ( </rp> <rt> token </rt> <rp> ) </rp></ruby>来加入这个 Swarm 集群。如果你需要配置防火墙,你还需找到它的 IP 地址和端口(LCTT 译注:指 Docker 的 Swarm 模式通讯所需的端口,默认 2377)。此外,你还可以向 Swarm 集群中添加管理者节点。现在,重新执行刚刚的服务创建指令:
```
docker service create --name frontend --replicas 5 -p 80:80/tcp my-apache2:latest
6lrx1vhxsar2i50is8arh4ud1
```
### 测试连通性
现在,我们来验证一下我们的服务是否真的在工作。从某些方面讲,这很像我们在 [Vagrant](http://www.dedoimedo.com/computers/vagrant-intro.html) 和 [CoreOS](http://www.dedoimedo.com/computers/vagrant-coreos.html) 中做过的事情。毕竟它们的原理几乎相同,只是同一指导思想的不同实现罢了(LCTT 译注:笔者观点,无法苟同)。首先需要确保 `docker ps` 能够给出正确的输出。你应该能看到所创建服务的多个容器副本。
```
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
NAMES
cda532f67d55 my-apache2:latest "httpd-foreground"
2 minutes ago Up 2 minutes 80/tcp frontend.1.2sobjfchdyucschtu2xw6ms9a
75fe6e0aa77b my-apache2:latest "httpd-foreground"
2 minutes ago Up 2 minutes 80/tcp frontend.4.ag77qtdeby9fyvif5v6c4zcpc
3ce824d3151f my-apache2:latest "httpd-foreground"
2 minutes ago Up 2 minutes 80/tcp frontend.2.b6fqg6sf4hkeqs86ps4zjyq65
eda01569181d my-apache2:latest "httpd-foreground"
2 minutes ago Up 2 minutes 80/tcp frontend.5.0rmei3zeeh8usagg7fn3olsp4
497ef904e381 my-apache2:latest "httpd-foreground"
2 minutes ago Up 2 minutes 80/tcp frontend.3.7m83qsilli5dk8rncw3u10g5a
```
我也测试了不同的、非常规的端口,它们都能正常工作。在如何连接服务器、如何处理请求方面,你会有很多可以配置的余地。你可以使用 localhost,也可以使用 Docker 网络接口(笔者注:应该是指 Docker 的默认网桥 docker0,其网关为 172.17.0.1)的 IP 地址,再加上正确的端口去访问。下面的例子使用了端口 1080:

至此,这是一个非常粗略、简单的开始。真正的挑战是创建一个优化过的、可扩展的服务,但是它们需要一个准确的技术用例。此外,你还会用到 `docker info` 和 `docker service`(还有 `inspect` 和 `ps`)命令来详细了解你的集群是如何工作的。
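比如下面这几条命令,就可以帮你快速查看服务列表、任务在各节点上的分布以及服务的详细配置(输出内容视你的集群而定):

```
$ docker service ls
$ docker service ps frontend
$ docker service inspect --pretty frontend
```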
### 可能会遇到的问题
你可能会在把玩 Docker 和 Swarm 时遇到一些小的问题(也许没那么小)。比如 SELinux 也许会抱怨你正在执行一些非法的操作(LCTT 译注:指在强制访问控制策略中没有权限的操作)。然而,这些错误和警告应该不会对你造成太多阻碍。

* `docker service` 不是一条命令(`docker service is not a docker command`)
当你尝试执行必须的命令去创建一个<ruby> 复制模式 <rp> ( </rp> <rt> replicated </rt> <rp> ) </rp></ruby>的服务时,你可能会遇到一条错误说 `docker: 'service' is not a docker command`(LCTT 译注:见下面的例子)。这表示你的 Docker 版本不对(使用 `-v` 选项来检查)。我们将在将来的教程讨论如何修复这个问题。
```
docker service create --name frontend --replicas 5 -p 80:80/tcp my-apache2:latest
docker: 'service' is not a docker command.
```
* `docker tag` 无法识别(`docker tag not recognized`)
你也许会看到下面的错误:
```
docker service create -name frontend -replicas 5 -p 80:80/tcp my-apache2:latest
Error response from daemon: rpc error: code = 3 desc = ContainerSpec: "-name" is not a valid repository/tag
```
关于这个错误已经有多个相关的[讨论](https://github.com/docker/docker/issues/24192)和[帖子](http://stackoverflow.com/questions/38618609/docker-swarm-1-12-name-option-not-recognized)了。其实这个错误也许相当无辜。你也许是从浏览器粘贴的命令,在浏览器中的横线也许没被正确解析(笔者注:应该用 `--name` 而不是 `-name`)。就是这么简单的原因所导致的。
### 扩展阅读
关于这个话题还有很多可谈的,包含 1.12 版之前的 Swarm 集群实现(笔者注:旧的 Swarm 集群实现,下文亦作`独立版本`,需要 Consul 等应用提供服务发现),以及当前的 Docker 版本提供的(笔者注:新的 Swarm 集群实现,亦被称为 Docker 引擎的 Swarm 模式)。也就是说,请别偷懒花些时间阅读以下内容:
* Docker Swarm [概述](https://docs.docker.com/swarm/)(独立版本的 Swarm 集群安装)
* [构建](https://docs.docker.com/swarm/install-manual/)一个生产环境的 Swarm 集群(独立版本安装)
* [安装并创建](https://docs.docker.com/swarm/install-w-machine/)一个 Docker Swarm 集群(独立版本安装)
* Docker 引擎 Swarm [概述](https://docs.docker.com/engine/swarm/)(对于 1.12 版)
* [Swarm](https://docs.docker.com/engine/swarm/swarm-tutorial/) 模式入门(对于 1.12 版)
### 总结
你总算看到这里了。到这里仍然无法保证你学到了什么,但我相信你还是会觉得这篇文章有些用的。它涵盖了一些基础的概念,以及一个 Swarm 集群模式是如何工作的以及它能做什么的概述,与此同时我们也成功的下载了并创建了我们的网页服务器的镜像,并且在之后基于它运行了多个集群式的容器实例。虽然我们目前只在单一节点做了以上实验,但是我们会在将来解释清楚(LCTT 译注:以便解释清楚多节点的 Swarm 集群操作)。并且我们解决了一些常见的问题。
我希望你能认为这篇指南足够有趣。结合着我过去所写的关于 Docker 的文章,这些文章应该能给你一个像样的解释,包括:怎么样操作镜像、网络栈、储存、以及现在的集群。就当热身吧。的确,请享受并期待在新的 Docker 教程中与你见面。我控几不住我记几啊。
祝你愉快。
---
via: <http://www.dedoimedo.com/computers/docker-swarm-intro.html>
作者:[Dedoimedo](http://www.dedoimedo.com/computers/docker-swarm-intro.html) 译者:[Viz](https://github.com/vizv) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,889 | 我们为什么爱用 Linux? | https://cyb3rpunk.wordpress.com/2011/04/28/%C2%BFpor-que-gnulinux/ | 2017-09-21T09:04:00 | [
"Linux"
] | https://linux.cn/article-8889-1.html | 
国外有一位 Linux 粉丝做了一张[壁纸](https://cyb3rpunk.files.wordpress.com/2011/04/whylinux.jpg),其中对“我们为什么爱用 Linux”说了大实话:
```
我们告诉人们用 Linux 是因为它很安全,或者因为它是免费的、因为它是可以定制的、因为它是自由的、因为它有一个强大的社区支持着……
但是上面的所有原因都是在扯淡。我们这么跟非 Linux 用户讲是因为他们没法了解真正的原因。而且我们说多了这些借口,我们自己也开始就这么相信了。
但是在我们内心深处,还保留着真正的原因没说。
我们用 Linux 是因为它很有趣。
折腾你的系统很有趣;修改所有的设置,把系统搞挂,然后进入恢复模式去修复它很有趣;有上百个发行版供你选择很有趣;用命令行很有趣。
让我再重申一遍,用命令行很有趣。
难怪非 Linux 用户无法理解。
我是在说 Linux 的爱好者们用 Linux 是因为自己的缘故。当然,我们喜欢做好自己的工作;当然,我们喜欢避免染上病毒;当然,我们喜欢省钱。但是这些仅仅是副产品。我们真正喜欢的是把玩这个系统,瞎胡折腾,并且发现些隐藏在软件深处迷人的真相。
```
| 410 | Gone | null |
8,890 | 开发一个 Linux 调试器(七):源码级断点 | https://blog.tartanllama.xyz/c++/2017/06/19/writing-a-linux-debugger-source-break/ | 2017-09-22T09:39:25 | [
"调试器"
] | https://linux.cn/article-8890-1.html | 
在内存地址上设置断点虽然不错,但它对用户来说并不是最友好的方式。我们希望能够在源代码行和函数入口地址上设置断点,以便我们可以在与代码相同的抽象级别中进行调试。
这篇文章将会添加源码级断点到我们的调试器中。通过所有我们已经支持的功能,这要比起最初听起来容易得多。我们还将添加一个命令来获取符号的类型和地址,这对于定位代码或数据以及理解链接概念非常有用。
### 系列索引
随着后面文章的发布,这些链接会逐渐生效。
1. [准备环境](/article-8626-1.html)
2. [断点](/article-8645-1.html)
3. [寄存器和内存](/article-8663-1.html)
4. [Elves 和 dwarves](/article-8719-1.html)
5. [源码和信号](/article-8812-1.html)
6. [源码级逐步执行](/article-8813-1.html)
7. [源码级断点](https://blog.tartanllama.xyz/c++/2017/06/19/writing-a-linux-debugger-source-break/)
8. [调用栈](https://blog.tartanllama.xyz/c++/2017/06/24/writing-a-linux-debugger-unwinding/)
9. 读取变量
10. 之后步骤
### 断点
#### DWARF
[Elves 和 dwarves](/article-8719-1.html) 这篇文章,描述了 DWARF 调试信息是如何工作的,以及如何用它来将机器码映射到高层源码中。回想一下,DWARF 包含了函数的地址范围和一个允许你在抽象层之间转换代码位置的行表。我们将使用这些功能来实现我们的断点。
#### 函数入口
如果你考虑重载、成员函数等等,那么在函数名上设置断点可能有点复杂,但是我们将遍历所有的编译单元,并搜索与我们正在寻找的名称匹配的函数。DWARF 信息如下所示:
```
< 0><0x0000000b> DW_TAG_compile_unit
DW_AT_producer clang version 3.9.1 (tags/RELEASE_391/final)
DW_AT_language DW_LANG_C_plus_plus
DW_AT_name /super/secret/path/MiniDbg/examples/variable.cpp
DW_AT_stmt_list 0x00000000
DW_AT_comp_dir /super/secret/path/MiniDbg/build
DW_AT_low_pc 0x00400670
DW_AT_high_pc 0x0040069c
LOCAL_SYMBOLS:
< 1><0x0000002e> DW_TAG_subprogram
DW_AT_low_pc 0x00400670
DW_AT_high_pc 0x0040069c
DW_AT_name foo
...
...
<14><0x000000b0> DW_TAG_subprogram
DW_AT_low_pc 0x00400700
DW_AT_high_pc 0x004007a0
DW_AT_name bar
...
```
我们想要匹配 `DW_AT_name` 并使用 `DW_AT_low_pc`(函数的起始地址)来设置我们的断点。
```
void debugger::set_breakpoint_at_function(const std::string& name) {
for (const auto& cu : m_dwarf.compilation_units()) {
for (const auto& die : cu.root()) {
if (die.has(dwarf::DW_AT::name) && at_name(die) == name) {
auto low_pc = at_low_pc(die);
auto entry = get_line_entry_from_pc(low_pc);
++entry; //skip prologue
set_breakpoint_at_address(entry->address);
}
}
}
}
```
这代码看起来有点奇怪的唯一一点是 `++entry`。 问题是函数的 `DW_AT_low_pc` 不指向该函数的用户代码的起始地址,它指向 prologue 的开始。编译器通常会输出一个函数的 prologue 和 epilogue,它们用于执行保存和恢复堆栈、操作堆栈指针等。这对我们来说不是很有用,所以我们将入口行加一来获取用户代码的第一行而不是 prologue。DWARF 行表实际上具有一些功能,用于将入口标记为函数 prologue 之后的第一行,但并不是所有编译器都输出它,因此我采用了原始的方法。
#### 源码行
要在高层源码行上设置一个断点,我们要将这个行号转换成 DWARF 中的一个地址。我们将遍历编译单元,寻找一个名称与给定文件匹配的编译单元,然后查找与给定行对应的入口。
DWARF 看上去有点像这样:
```
.debug_line: line number info for a single cu
Source lines (from CU-DIE at .debug_info offset 0x0000000b):
NS new statement, BB new basic block, ET end of text sequence
PE prologue end, EB epilogue begin
IS=val ISA number, DI=val discriminator value
<pc> [lno,col] NS BB ET PE EB IS= DI= uri: "filepath"
0x004004a7 [ 1, 0] NS uri: "/super/secret/path/a.hpp"
0x004004ab [ 2, 0] NS
0x004004b2 [ 3, 0] NS
0x004004b9 [ 4, 0] NS
0x004004c1 [ 5, 0] NS
0x004004c3 [ 1, 0] NS uri: "/super/secret/path/b.hpp"
0x004004c7 [ 2, 0] NS
0x004004ce [ 3, 0] NS
0x004004d5 [ 4, 0] NS
0x004004dd [ 5, 0] NS
0x004004df [ 4, 0] NS uri: "/super/secret/path/ab.cpp"
0x004004e3 [ 5, 0] NS
0x004004e8 [ 6, 0] NS
0x004004ed [ 7, 0] NS
0x004004f4 [ 7, 0] NS ET
```
所以如果我们想要在 `ab.cpp` 的第五行设置一个断点,我们将查找与行 (`0x004004e3`) 相关的入口并设置一个断点。
```
void debugger::set_breakpoint_at_source_line(const std::string& file, unsigned line) {
for (const auto& cu : m_dwarf.compilation_units()) {
if (is_suffix(file, at_name(cu.root()))) {
const auto& lt = cu.get_line_table();
for (const auto& entry : lt) {
if (entry.is_stmt && entry.line == line) {
set_breakpoint_at_address(entry.address);
return;
}
}
}
}
}
```
我这里做了 `is_suffix` hack,这样你可以输入 `c.cpp` 代表 `a/b/c.cpp` 。当然你实际上应该使用大小写敏感路径处理库或者其它东西,但是我比较懒。`entry.is_stmt` 是检查行表入口是否被标记为一个语句的开头,这是由编译器根据它认为是断点的最佳目标的地址设置的。
### 符号查找
当我们在对象文件层时,符号是王者。函数用符号命名,全局变量用符号命名,你得到一个符号,我们得到一个符号,每个人都得到一个符号。 在给定的对象文件中,一些符号可能引用其他对象文件或共享库,链接器将从符号引用创建一个可执行程序。
可以在正确命名的符号表中查找符号,它存储在二进制文件的 ELF 部分中。幸运的是,`libelfin` 有一个不错的接口来做这件事,所以我们不需要自己处理所有的 ELF 的事情。为了让你知道我们在处理什么,下面是一个二进制文件的 `.symtab` 部分的转储,它由 `readelf` 生成:
```
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000400238 0 SECTION LOCAL DEFAULT 1
2: 0000000000400254 0 SECTION LOCAL DEFAULT 2
3: 0000000000400278 0 SECTION LOCAL DEFAULT 3
4: 00000000004002c8 0 SECTION LOCAL DEFAULT 4
5: 0000000000400430 0 SECTION LOCAL DEFAULT 5
6: 00000000004004e4 0 SECTION LOCAL DEFAULT 6
7: 0000000000400508 0 SECTION LOCAL DEFAULT 7
8: 0000000000400528 0 SECTION LOCAL DEFAULT 8
9: 0000000000400558 0 SECTION LOCAL DEFAULT 9
10: 0000000000400570 0 SECTION LOCAL DEFAULT 10
11: 0000000000400714 0 SECTION LOCAL DEFAULT 11
12: 0000000000400720 0 SECTION LOCAL DEFAULT 12
13: 0000000000400724 0 SECTION LOCAL DEFAULT 13
14: 0000000000400750 0 SECTION LOCAL DEFAULT 14
15: 0000000000600e18 0 SECTION LOCAL DEFAULT 15
16: 0000000000600e20 0 SECTION LOCAL DEFAULT 16
17: 0000000000600e28 0 SECTION LOCAL DEFAULT 17
18: 0000000000600e30 0 SECTION LOCAL DEFAULT 18
19: 0000000000600ff0 0 SECTION LOCAL DEFAULT 19
20: 0000000000601000 0 SECTION LOCAL DEFAULT 20
21: 0000000000601018 0 SECTION LOCAL DEFAULT 21
22: 0000000000601028 0 SECTION LOCAL DEFAULT 22
23: 0000000000000000 0 SECTION LOCAL DEFAULT 23
24: 0000000000000000 0 SECTION LOCAL DEFAULT 24
25: 0000000000000000 0 SECTION LOCAL DEFAULT 25
26: 0000000000000000 0 SECTION LOCAL DEFAULT 26
27: 0000000000000000 0 SECTION LOCAL DEFAULT 27
28: 0000000000000000 0 SECTION LOCAL DEFAULT 28
29: 0000000000000000 0 SECTION LOCAL DEFAULT 29
30: 0000000000000000 0 SECTION LOCAL DEFAULT 30
31: 0000000000000000 0 FILE LOCAL DEFAULT ABS init.c
32: 0000000000000000 0 FILE LOCAL DEFAULT ABS crtstuff.c
33: 0000000000600e28 0 OBJECT LOCAL DEFAULT 17 __JCR_LIST__
34: 00000000004005a0 0 FUNC LOCAL DEFAULT 10 deregister_tm_clones
35: 00000000004005e0 0 FUNC LOCAL DEFAULT 10 register_tm_clones
36: 0000000000400620 0 FUNC LOCAL DEFAULT 10 __do_global_dtors_aux
37: 0000000000601028 1 OBJECT LOCAL DEFAULT 22 completed.6917
38: 0000000000600e20 0 OBJECT LOCAL DEFAULT 16 __do_global_dtors_aux_fin
39: 0000000000400640 0 FUNC LOCAL DEFAULT 10 frame_dummy
40: 0000000000600e18 0 OBJECT LOCAL DEFAULT 15 __frame_dummy_init_array_
41: 0000000000000000 0 FILE LOCAL DEFAULT ABS /super/secret/path/MiniDbg/
42: 0000000000000000 0 FILE LOCAL DEFAULT ABS crtstuff.c
43: 0000000000400818 0 OBJECT LOCAL DEFAULT 14 __FRAME_END__
44: 0000000000600e28 0 OBJECT LOCAL DEFAULT 17 __JCR_END__
45: 0000000000000000 0 FILE LOCAL DEFAULT ABS
46: 0000000000400724 0 NOTYPE LOCAL DEFAULT 13 __GNU_EH_FRAME_HDR
47: 0000000000601000 0 OBJECT LOCAL DEFAULT 20 _GLOBAL_OFFSET_TABLE_
48: 0000000000601028 0 OBJECT LOCAL DEFAULT 21 __TMC_END__
49: 0000000000601020 0 OBJECT LOCAL DEFAULT 21 __dso_handle
50: 0000000000600e20 0 NOTYPE LOCAL DEFAULT 15 __init_array_end
51: 0000000000600e18 0 NOTYPE LOCAL DEFAULT 15 __init_array_start
52: 0000000000600e30 0 OBJECT LOCAL DEFAULT 18 _DYNAMIC
53: 0000000000601018 0 NOTYPE WEAK DEFAULT 21 data_start
54: 0000000000400710 2 FUNC GLOBAL DEFAULT 10 __libc_csu_fini
55: 0000000000400570 43 FUNC GLOBAL DEFAULT 10 _start
56: 0000000000000000 0 NOTYPE WEAK DEFAULT UND __gmon_start__
57: 0000000000400714 0 FUNC GLOBAL DEFAULT 11 _fini
58: 0000000000000000 0 FUNC GLOBAL DEFAULT UND __libc_start_main@@GLIBC_
59: 0000000000400720 4 OBJECT GLOBAL DEFAULT 12 _IO_stdin_used
60: 0000000000601018 0 NOTYPE GLOBAL DEFAULT 21 __data_start
61: 00000000004006a0 101 FUNC GLOBAL DEFAULT 10 __libc_csu_init
62: 0000000000601028 0 NOTYPE GLOBAL DEFAULT 22 __bss_start
63: 0000000000601030 0 NOTYPE GLOBAL DEFAULT 22 _end
64: 0000000000601028 0 NOTYPE GLOBAL DEFAULT 21 _edata
65: 0000000000400670 44 FUNC GLOBAL DEFAULT 10 main
66: 0000000000400558 0 FUNC GLOBAL DEFAULT 9 _init
```
你可以在对象文件中看到用于设置环境的很多符号,最后还可以看到 `main` 符号。
我们对符号的类型、名称和值(地址)感兴趣。我们有一个该类型的 `symbol_type` 枚举,并使用一个 `std::string` 作为名称,`std::uintptr_t` 作为地址:
```
enum class symbol_type {
notype, // No type (e.g., absolute symbol)
object, // Data object
func, // Function entry point
section, // Symbol is associated with a section
file, // Source file associated with the object file
};
std::string to_string (symbol_type st) {
switch (st) {
case symbol_type::notype: return "notype";
case symbol_type::object: return "object";
case symbol_type::func: return "func";
case symbol_type::section: return "section";
case symbol_type::file: return "file";
}
}
struct symbol {
symbol_type type;
std::string name;
std::uintptr_t addr;
};
```
我们需要将从 `libelfin` 获得的符号类型映射到我们的枚举,因为我们不希望依赖关系破环这个接口。幸运的是,我为所有的东西选了同样的名字,所以这样很简单:
```
symbol_type to_symbol_type(elf::stt sym) {
switch (sym) {
case elf::stt::notype: return symbol_type::notype;
case elf::stt::object: return symbol_type::object;
case elf::stt::func: return symbol_type::func;
case elf::stt::section: return symbol_type::section;
case elf::stt::file: return symbol_type::file;
default: return symbol_type::notype;
}
};
```
最后我们要查找符号。为了说明的目的,我循环查找符号表的 ELF 部分,然后收集我在其中找到的任意符号到 `std::vector` 中。更智能的实现可以建立从名称到符号的映射,这样你只需要查看一次数据就行了。
```
std::vector<symbol> debugger::lookup_symbol(const std::string& name) {
std::vector<symbol> syms;
for (auto &sec : m_elf.sections()) {
if (sec.get_hdr().type != elf::sht::symtab && sec.get_hdr().type != elf::sht::dynsym)
continue;
for (auto sym : sec.as_symtab()) {
if (sym.get_name() == name) {
auto &d = sym.get_data();
syms.push_back(symbol{to_symbol_type(d.type()), sym.get_name(), d.value});
}
}
}
return syms;
}
```
### 添加命令
一如往常,我们需要添加一些更多的命令来向用户暴露功能。对于断点,我使用 GDB 风格的接口,其中断点类型是通过你传递的参数推断的,而不用要求显式切换:
* `0x<hexadecimal>` -> 地址断点
* `<filename>:<line>` -> 源码行断点(与下面代码中按 `文件名:行号` 解析的方式一致)
* `<anything else>` -> 函数名断点
```
else if(is_prefix(command, "break")) {
if (args[1][0] == '0' && args[1][1] == 'x') {
std::string addr {args[1], 2};
set_breakpoint_at_address(std::stol(addr, 0, 16));
}
else if (args[1].find(':') != std::string::npos) {
auto file_and_line = split(args[1], ':');
set_breakpoint_at_source_line(file_and_line[0], std::stoi(file_and_line[1]));
}
else {
set_breakpoint_at_function(args[1]);
}
}
```
对于符号,我们将查找符号并打印出我们发现的任何匹配项:
```
else if(is_prefix(command, "symbol")) {
auto syms = lookup_symbol(args[1]);
for (auto&& s : syms) {
std::cout << s.name << ' ' << to_string(s.type) << " 0x" << std::hex << s.addr << std::endl;
}
}
```
### 测试一下
在一个简单的二进制文件上启动调试器,并试着设置源码级断点。在某个 foo 函数上设置断点,然后看到我的调试器真的停在了它上面,这是整个项目中最让我有成就感的时刻之一。
符号查找可以通过在程序中添加一些函数或全局变量并查找它们的名称来进行测试。请注意,如果你正在编译 C++ 代码,你还需要考虑[名称重整](https://en.wikipedia.org/wiki/Name_mangling#C.2B.2B)。
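结合本文实现的命令解析逻辑,一次测试会话大致会是下面这个样子(提示符与具体输出取决于你在前几篇文章中的实现,这里仅作示意;`variable.cpp`、`foo`、`main` 以及地址 `0x400670` 均取自上文的 DWARF 与符号表示例):

```
$ ./minidbg ./examples/variable
minidbg> break foo
minidbg> break variable.cpp:5
minidbg> break 0x400670
minidbg> symbol main
main func 0x400670
minidbg> continue
```

其中 `symbol` 命令的输出格式与上面 `lookup_symbol` 打印代码一致:名称、类型、十六进制地址。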
本文就这些了。下一次我将展示如何向调试器添加堆栈展开支持。
你可以在[这里](https://github.com/TartanLlama/minidbg/tree/tut_source_break)找到这篇文章的代码。
---
via: <https://blog.tartanllama.xyz/c++/2017/06/19/writing-a-linux-debugger-source-break/>
作者:[Simon Brand](https://twitter.com/TartanLlama) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Redirecting…
Click here if you are not redirected. |
8,891 | 安全债务是工程师的问题 | https://thenewstack.io/security-engineers-problem/ | 2017-09-22T11:13:00 | [
"安全"
] | https://linux.cn/article-8891-1.html | 
在上个月旧金山 Twitter 总部举办的 [WomenWhoCode Connect](http://connect2017.womenwhocode.com/) 活动中参会者了解到,就像组织会形成技术债务一样,如果他们不相应地计划,也会形成一个名为“安全债务”的东西。
甲骨文首席安全官 [Mary Ann Davidson](https://www.linkedin.com/in/mary-ann-davidson-235ba/) 在与 [WomenWhoCode](https://www.womenwhocode.com/) 的 [Zassmin Montes de Oca](https://www.linkedin.com/in/zassmin/) 进行的一场面向开发人员的安全主题谈话中强调,安全性必须成为软件开发过程中每一步不可或缺的组成部分。
在过去,除银行外,安全性几乎被所有人忽视。但安全性比以往任何时候都更重要,因为现在有这么多接入点。我们已经进入[物联网](https://www.thenewstack.io/tag/Internet-of-Things)的时代,窃贼可以通过劫持你的冰箱而了解到你不在家的情况。
Davidson 负责 Oracle 的保障,“我们确保为建立的一切构建安全性,无论是内部部署产品、云服务,甚至是设备,我们在客户的网站建立有支持小组并报告数据给我们,帮助我们做诊断 - 每件事情都必须对其进行安全保护。”
AirBnB 的 [Keziah Plattner](https://twitter.com/ittskeziah) 在分组会议中回应了这个看法。她说:“大多数开发者并不认为安全是他们的工作,但这必须改变。”
她分享了工程师的四项基本安全原则。首先,安全债务是昂贵的。现在有很多人在谈论[技术债务](https://martinfowler.com/bliki/TechnicalDebt.html),她认为这些谈话应该也包括安全债务。
Plattner 说:“历史上这个看法是‘我们会稍后考虑安全’”。当公司抓住软件效率和增长的唾手可得的成果时,他们忽视了安全性,但最初的不安全设计可能在未来几年会引发问题。
她说,很难为现有的脆弱系统增加安全性。即使你知道安全漏洞在哪里,并且有进行更改的时间和资源的预算,重新设计一个安全系统也是耗时和困难的。
她说,所以这就是关键,从一开始就建立安全性。将安全性视为技术债务的一部分以避免这个问题,并涵盖所有可能性。
根据 Plattner 的说法,最难的一点是让人们改变行为。她说,即使你指出新的行为更安全,也没有人会自愿改变。听到这里,我们都点头表示赞同。
Davidson 说,工程师们需要开始考虑他们的代码如何被攻击,并从这个角度进行设计。她说她只有两个规则。第一个从不信任任何未验证的数据;规则二参见规则一。
她笑着说:“人们一直这样做。他们说:‘我的客户端给我发送数据,所以没有问题’。千万不要……”。
Plattner 说,安全的第二个关键是“永远不要信任用户”。
Davidson 以另外一种说法表示:“我的工作是做专业的偏执狂。”她一直担心有人或许无意中会破坏她的系统。这不是学术性的考虑,最近已经有通过 IoT 设备的拒绝服务攻击。
### Little Bobby Tables
Plattner 说:“如果你的安全计划中有一部分是信任用户去做正确的事情,那么无论你还有什么其他安全措施,你的系统本质上都是不安全的。”
她解释说,重要的是要净化所有的用户输入,如 [XKCD 漫画](https://xkcd.com/327/)中的那样,一位妈妈干掉整个学校的数据库——因为她的儿子的中间名是 “DropTable Students”(LCTT 译注:看不懂的[点这里](https://www.explainxkcd.com/wiki/index.php/Little_Bobby_Tables))。

所以,要净化所有的用户输入。这一条,务必做到。
她展示了一个 JavaScript 开发者在开源软件中使用 eval 的例子。她警告说:“一个好的基本规则是‘从不使用 eval()’”。 [eval()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval) 函数会执行 JavaScript 代码。“如果你这样做,你正在向任意用户开放你的系统。”
Davidson 警告说,她甚至偏执到将文档中的示例代码的安全测试也包括在内。她笑着说:“我们都知道没有人会去复制示例代码”。她强调指出,任何代码都应进行安全检查。
Plattner 的第三个建议:要使安全容易实施。她建议采取阻力最小的道路。
对外,让安全措施成为用户需要<ruby> 主动退出 <rt> opt out </rt></ruby>的默认设置,而不是需要<ruby> 主动开启 <rt> opt in </rt></ruby>的可选项,或者更好的是,直接使其成为强制性措施。她说,改变人们的行为是科技领域里最难的问题。一旦用户习惯了以不安全的方式使用你的产品,再想让他们改过来就会变得非常困难。
在公司内部,她建议制定安全标准,因此这不是个别开发人员需要考虑的内容。例如,将数据加密作为服务,这样工程师可以只需要调用服务就可以加密或解密数据。
她说,要确保公司有注重安全的氛围,让整个公司都养成良好的安全习惯。
你的安全短板决定了你的安全水准,所以重要的是每个人都有良好的个人安全习惯,并具有良好的企业安全环境。
在 Oracle,他们已经全面覆盖安全的各个环节。Davidson 表示,她厌倦了向没有安全培训的大学毕业的工程师解释安全性,所以她写了 Oracle 的第一个编码标准,现在已经有数百个页面之多以及很多贡献者,还有一些课程是强制性的。它们具有符合安全要求的度量标准。这些课程不仅适用于工程师,也适用于文档作者。她说:“这是一种文化。”
没有提及密码的关于安全性的讨论怎么能是安全的?Plattner 说:“每个人都应该使用一个好的密码管理器,在工作中应该是强制性的,还有双重身份验证。”
她说,基本的密码原则应该是每个工程师日常生活的一部分。密码中最重要的是它们的长度和熵(使按键的集合尽可能地随机)。强健的密码熵检查器对此非常有用。她建议使用 Dropbox 开源的熵检查器 [zxcvbn](https://blogs.dropbox.com/tech/2012/04/zxcvbn-realistic-password-strength-estimation/)。
Plattner 说,另一个诀窍是在验证用户输入时使用一些故意减慢速度的算法,如 [bcrypt](https://en.wikipedia.org/wiki/Bcrypt)。慢速并不困扰大多数合法用户,但会让那些试图强行进行密码尝试的黑客难受。
Davidson 说:“所有这些都为那些想要进入技术安全领域的人提供了就业保障。我们把越来越多的代码放到了越来越多的地方,这就产生了系统性风险。只要我们继续在技术领域做有趣的事情,我认为安全领域的人就永远不会缺少工作。”
---
via: <https://thenewstack.io/security-engineers-problem/>
作者:[TC Currie](https://thenewstack.io/author/tc/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | # Security Debt is an Engineer’s Problem


Keziah Plattner of AirBnBSecurity.
Just like organizations can build up technical debt, so too can they also build up something called “security debt,” if they don’t plan accordingly, attendees learned at the WomenWhoCode Connect event at Twitter headquarters in San Francisco last month.
Security has got to be integral to every step of the software development process, stressed [Mary Ann Davidson](https://www.linkedin.com/in/mary-ann-davidson-235ba/), Oracle’s Chief Security Officer, in a keynote talk with about security for developers with [Zassmin Montes de Oca](https://www.linkedin.com/in/zassmin/) of [WomenWhoCode](https://www.womenwhocode.com/).
In the past, security used to be ignored by pretty much everyone, except banks. But security is more critical than it has ever been because there are so many access points. We’ve entered the era of Internet of Things, where thieves can just hack your fridge to see that you’re not home.
Davidson is in charge of assurance at Oracle, “making sure we build security into everything we build, whether it’s an on-premise product, whether it’s a cloud service, even devices we have that support group builds at customer sites and reports data back to us, helping us do diagnostics — every single one of those things has to have security engineered into it.”

Plattner talking to a capacity crowd at #WWCConnect
AirBnB’s [Keziah Plattner](https://twitter.com/ittskeziah) echoed that sentiment in her breakout session. “Most developers don’t see security as their job,” she said, “but this has to change.”
She shared four basic security principles for engineers. First, security debt is expensive. There’s a lot of talk about [technical debt ](https://martinfowler.com/bliki/TechnicalDebt.html)and she thinks security debt should be included in those conversations.
“This historical attitude is ‘We’ll think about security later,’” Plattner said. As companies grab the low-hanging fruit of software efficiency and growth, they ignore security, but an initial insecure design can cause problems for years to come.
It’s very hard to add security to an existing vulnerable system, she said. Even when you know where the security holes are and have budgeted the time and resources to make the changes, it’s time-consuming and difficult to re-engineer a secure system.
So it’s key, she said, to build security into your design from the start. Think of security as part of the technical debt to avoid. And cover all possibilities.
Most importantly, according to Plattner, is the difficulty in getting to people to change their behavior. No one will change voluntarily, she said, even when you point out that the new behavior is more secure. We all nodded.
Davidson said engineers need to start thinking about how their code could be attacked, and design from that perspective. She said she only has two rules. The first is never trust any unvalidated data and rule two is see rule one.
“People do this all the time. They say ‘My client sent me the data so it will be fine.’ Nooooooooo,” she said, to laughs.
The second key to security, Plattner said, is “never trust users.”
Davidson put it another way: “My job is to be a professional paranoid.” She worries all the time about how someone might breach her systems even inadvertently. This is not academic, there has been recent denial of service attacks through IoT devices.
## Little Bobby Tables
If part of your security plan is trusting users to do the right thing, your system is inherently insecure regardless of whatever other security measures you have in place, said Plattner.
It’s important to properly sanitize all user input, she explained, showing the [XKCD cartoon](https://xkcd.com/327/) where a mom wiped out an entire school database because her son’s middle name was “DropTable Students.”
So sanitize all user input. Check.
She showed an example of JavaScript developers using **Eval** on open source. “A good ground rule is ‘Never use eval(),’” she cautioned. The [eval() ](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval)function evaluates JavaScript code. “You’re opening your system to random users if you do.”
Davidson cautioned that her paranoia extends to including security testing your example code in documentation. “Because we all know no one ever copies sample code,” she said to laughter. She underscored the point that any code should be subject to security checks.

Make it easy
Plattner’s suggestion three: Make security easy. Take the path of least resistance, she suggested.
Externally, make users opt out of security instead of opting in, or, better yet, make it mandatory. Changing people’s behavior is the hardest problem in tech, she said. Once users get used to using your product in a non-secure way, getting them to change in the future is extremely difficult.
Internal to your company, she suggested make tools that standardize security so it’s not something individual developers need to think about. For example, encrypting data as a service so engineers can just call the service to encrypt or decrypt data.
Make sure that your company is focused on good security hygiene, she said. Switch to good security habits across the company.
You’re only secure as your weakest link, so it’s important that each individual also has good personal security hygiene as well as having good corporate security hygiene.
At Oracle, they’ve got this covered. Davidson said she got tired of explaining security to engineers who graduated college with absolutely no security training, so she wrote the first coding standards at Oracle. There are now hundreds of pages with lots of contributors, and there are classes that are mandatory. They have metrics for compliance to security requirements and measure it. The classes are not just for engineers, but for doc writers as well. “It’s a cultural thing,” she said.
And what discussion about security would be secure without a mention of passwords? While everyone should be using a good password manager, Plattner said, but they should be mandatory for work, along with two-factor authentication.
Basic password principles should be a part of every engineer’s waking life, she said. What matters most in passwords is their length and entropy — making the collection of keystrokes as random as possible. A robust password entropy checker is invaluable for this. She recommends [zxcvbn](https://blogs.dropbox.com/tech/2012/04/zxcvbn-realistic-password-strength-estimation/), the Dropbox open-source entropy checker.
Another trick is to use something intentionally slow like [bcrypt](https://en.wikipedia.org/wiki/Bcrypt) when authenticating user input, said Plattner. The slowness doesn’t bother most legit users but irritates hackers who try to force password attempts.
All of this adds up to job security for anyone wanting to get into the security side of technology, said Davidson. We’re putting more code more places, she said, and that creates systemic risk. “I don’t think anybody is not going to have a job in security as long as we keep doing interesting things in technology.”
Feature image via Women Who Code. |
8,892 | Docker 引擎的 Swarm 模式:添加工作者节点教程 | http://www.dedoimedo.com/computers/docker-swarm-adding-worker-nodes.html | 2017-09-23T09:04:00 | [
"Docker",
"Swarm"
] | https://linux.cn/article-8892-1.html | 
让我们继续几周前在 CentOS 7.2 中开始的工作。 在本[指南](/article-8888-1.html)中,我们学习了如何初始化以及启动 Docker 1.12 中内置的原生的集群以及编排功能。但是我们只有<ruby> 管理者 <rp> ( </rp> <rt> manager </rt> <rp> ) </rp></ruby>节点还没有其它<ruby> 工作者 <rp> ( </rp> <rt> worker </rt> <rp> ) </rp></ruby>节点。今天我们会展开讲述这个。
我将向你展示如何将不对称节点添加到 Swarm 中,比如在 CentOS 旁边加入一台 [Fedora 24](http://www.dedoimedo.com/computers/fedora-24-gnome.html),让它们都加入到集群中,并体验其出色的负载均衡等功能。当然,这并不是轻而易举的,我们会遇到一些障碍,所以整个过程应该会非常有趣。

### 先决条件
在将其它节点成功加入 Swarm 之前,我们需要做几件事情。理想情况下,所有节点都应该运行相同版本的 Docker,为了支持原生的编排功能,它的版本至少应该为 1.12。像 CentOS 一样,Fedora 内置的仓库没有最新的构建版本,所以你需要手动构建,或者使用 Docker 仓库手动[添加和安装](http://www.dedoimedo.com/computers/docker-centos-upgrade-latest.html)正确的版本,并修复一些依赖冲突。我已经向你展示了如何在 CentOS 中操作,过程是相同的。
此外,所有节点都需要能够相互通信。这就需要有正确的路由和防火墙规则,这样<ruby> 管理者 <rp> ( </rp> <rt> manager </rt> <rp> ) </rp></ruby>和<ruby> 工作者 <rp> ( </rp> <rt> worker </rt> <rp> ) </rp></ruby>节点才能互相通信。否则,你无法将节点加入 Swarm 中。最简单的解决方法是临时清除防火墙规则 (`iptables -F`),但这可能会损害你的安全。请确保你完全了解你正在做什么,并为你的节点和端口创建正确的规则。
>
> Error response from daemon: Timeout was reached before node was joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.
>
>
> 守护进程的错误响应:节点加入之前已超时。尝试加入 Swarm 的请求将在后台继续进行。使用 “docker info” 命令查看节点的当前 Swarm 状态。
>
>
>
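如果你不想临时清空所有规则,可以只放行 Swarm 所需的端口来避免上面的超时错误。下面是一个基于 firewalld(CentOS/Fedora 默认防火墙)的示意,端口取自 Docker 对 Swarm 模式的一般要求,具体请以你的环境和官方文档为准:

```
# 集群管理通信(工作者节点会连接管理者的这个端口,管理者节点需要放行)
firewall-cmd --permanent --add-port=2377/tcp
# 节点间通信(所有节点)
firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp
# overlay 覆盖网络流量(所有节点)
firewall-cmd --permanent --add-port=4789/udp
# 重新加载规则使之生效
firewall-cmd --reload
```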
你需要在主机上提供相同的 Docker 镜像。在上一个教程中我们创建了一个 Apache 映像,你需要在你的<ruby> 工作者 <rp> ( </rp> <rt> worker </rt> <rp> ) </rp></ruby>节点上执行相同操作,或者分发已创建的镜像。如果你不这样做,你会遇到错误。如果你在设置 Docker 上需要帮助,请阅读我的[介绍指南](http://www.dedoimedo.com/computers/docker-guide.html)和[网络教程](http://www.dedoimedo.com/computers/docker-networking.html)。
```
7vwdxioopmmfp3amlm0ulimcu \_ websky.11 my-apache2:latest
localhost.localdomain Shutdown Rejected 7 minutes ago
"No such image: my-apache2:lat&"
```
### 现在开始
现在我们有一台启动了 CentOS 机器,并成功地创建了容器。你可以使用主机端口连接到该服务,这一切都看起来很好。目前,你的 Swarm 只有<ruby> 管理者 <rp> ( </rp> <rt> manager </rt> <rp> ) </rp></ruby>。

### 加入<ruby> 工作者 <rp> ( </rp> <rt> worker </rt> <rp> ) </rp></ruby>
要添加新的节点,你需要使用 `join` 命令。但是你首先必须提供令牌、IP 地址和端口,以便<ruby> 工作者 <rp> ( </rp> <rt> woker </rt> <rp> ) </rp></ruby>节点能正确地对 Swarm 管理器进行身份验证。接着(在 Fedora 上)执行:
```
[root@localhost ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-0xvojvlza90nrbihu6gfu3qm34ari7lwnza ... \
192.168.2.100:2377
```
如果你不修复防火墙和路由规则,你会得到超时错误。如果你已经加入了 Swarm,重复 `join` 命令会收到错误:
```
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
```
如果有疑问,你可以离开 Swarm,然后重试:
```
[root@localhost ~]# docker swarm leave
Node left the swarm.
docker swarm join --token
SWMTKN-1-0xvojvlza90nrbihu6gfu3qnza4 ... 192.168.2.100:2377
This node joined a swarm as a worker.
```
在<ruby> 工作者 <rp> ( </rp> <rt> worker </rt> <rp> ) </rp></ruby>节点中,你可以使用 `docker info` 来检查状态:
```
Swarm: active
NodeID: 2i27v3ce9qs2aq33nofaon20k
Is Manager: false
Node Address: 192.168.2.103
```

同样,在管理者节点上:

```
Swarm: active
NodeID: cneayene32jsb0t2inwfg5t5q
Is Manager: true
ClusterID: 8degfhtsi7xxucvi6dxvlx1n4
Managers: 1
Nodes: 3
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 192.168.2.100
```
### 创建或缩放服务
现在,我们需要看看 Docker 是否会在节点间分发容器,以及如何分发。我的测试表明,在负载非常轻的情况下,它采用的是一种相当简单的平衡算法。尝试了一两次之后,即使我做了缩放和更新操作,Docker 也没有把已在运行的服务重新分配到新的工作者节点上。不过,有一次它确实在工作者节点上创建了一个新的服务。也许这就是最优的选择吧。




*在新的<ruby> 工作者 <rp> ( </rp> <rt> worker </rt> <rp> ) </rp></ruby>节点上完整创建新的服务。*
过了一段时间,两个容器之间的现有服务有一些重新分配,但这需要一些时间。新服务工作正常。这只是一个前期观察,所以我现在不能说更多。现在是开始探索和调整的新起点。

*负载均衡过了一会工作了。*
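作为参考,上面提到的缩放与查看分布情况的操作大致如下(服务名 `websky` 沿用前文输出中的例子,请换成你自己的服务名):

```
# 将服务的副本数调整为 6
docker service scale websky=6
# 查看该服务的各个任务被调度到了哪些节点上
docker service ps websky
# 查看所有服务及其副本状态
docker service ls
```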
### 总结
Docker 是一只灵巧的小野兽,它仍在继续长大,变得更复杂、更强大,当然也更优雅。它被一个大企业吃掉只是一个时间问题。当它带来了原生的编排功能时,Swarm 模式运行得很好,但是它不只是几个容器而已,而是充分利用了其算法和可扩展性。
我的教程展示了如何将 Fedora 节点添加到由 CentOS 运行的群集中,并且两者能并行工作。关于负载平衡还有一些问题,但这是我将在以后的文章中探讨的。总而言之,我希望这是一个值得记住的一课。我们已经解决了在尝试设置 Swarm 时可能遇到的一些先决条件和常见问题,同时我们启动了一堆容器,我们甚至简要介绍了如何缩放和分发服务。要记住,这只是一个开始。
干杯。
---
作者简介:
我是 Igor Ljubuncic。现在大约 38 岁,已婚但还没有孩子。我现在在一个大胆创新的云科技公司做首席工程师。直到大约 2015 年初时,我还在一个全世界最大的 IT 公司之一中做系统架构工程师,和一个工程计算团队开发新的基于 Linux 的解决方案,优化内核以及攻克 Linux 的问题。在那之前,我是一个为高性能计算环境设计创新解决方案的团队的技术领导。还有一些其他花哨的头衔,包括系统专家、系统程序员等等。所有这些都曾是我的爱好,但从 2008 年开始成为了我的付费工作。还有什么比这更令人满意的呢?
从 2004 年到 2008 年间,我曾通过作为医学影像行业的物理学家来糊口。我的工作专长集中在解决问题和算法开发。为此,我广泛地使用了 Matlab,主要用于信号和图像处理。另外,我得到了几个主要的工程方法学的认证,包括 MEDIC 六西格玛绿带、试验设计以及统计工程学。
我也开始写书,包括奇幻类和 Linux 上的技术性工作。彼此交融。
要查看我开源项目、出版物和专利的完整列表,请滚动到下面。
有关我的奖项,提名和 IT 相关认证的完整列表,请稍等一下。
---
via: <http://www.dedoimedo.com/computers/docker-swarm-adding-worker-nodes.html>
作者:[Igor Ljubuncic](http://www.dedoimedo.com/faq.html) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,898 | 为什么我们比以往更需要开放的领导人 | https://opensource.com/open-organization/17/2/need-open-leaders-more-ever | 2017-09-24T09:19:00 | [
"开放组织"
] | https://linux.cn/article-8898-1.html |
>
> 不断变化的社会和文化条件正促使着开放的领导。
>
>
>

领导力就是力量。更具体地说,领导力是影响他人行动的力量。 关于领导力的神话不仅可以让人联想到人类浪漫的一面而且还有人类境况险恶的一面。 我们最终决定如何领导才能决定其真正的本质。
现代许多对领导力的理解都是在战争中诞生的,在那里,领导力意味着熟练地执行命令和控制思想。 在现代商业的大部分时间里,我们都是以一个到达权力顶峰的伟大的男人或女人作为领导,并通过地位来发挥力量的。 这种传统的通过等级和报告关系的领导方式严重依赖于正式的权威。 这些结构中的权威通过垂直层次结构向下流动,并沿命令链的形式存在。
然而,在 20 世纪后期,一些东西开始改变。 新技术打开了全球化的大门,从而使团队更加分散。 我们投入人力资本的方式开始转变,永远地改变了人们之间的沟通方式。组织内部的人开始感觉得到了责任感,他们要求对自己的成功(和失败)拥有归属感。 领导者不再是权力的唯一拥有者。 21世纪的领导者带领 21 世纪的组织开始了解授权、协作、责任和清晰的沟通是一种新型权力的本质。 这些新领导人开始分享权力——他们无保留地信任他们的追随者。
随着组织继续变得更加开放,即使是没有“领导力”头衔的人也会感到有责任推动变革。 这些组织消除了等级制度的枷锁,让员工以他们认为合适的方式去工作。 历史已经表明,20 世纪的领导人倾向于通过单边决策和单向的信息流来扼杀敏捷性。 而新世纪的领导者衡量一个组织,看的是它授权了多少个体去完成事情。 人多就是力量——坦率地说,一个领导者不可能在任何时候出现在所有的地方,做出所有的决定。
因此,领导人也开始变得开放。
### 控制
当旧式领导人专注于指挥和控制式的职位权力时,开放的领导者则通过新形式的组织治理、新技术和其他减少摩擦的手段,把组织的控制权交给其他人,从而以更有效的方式实现集体行动。 这些领导者了解信任的力量,相信追随者总是会表现出主动性、参与性和独立性。 而这种新的领导方式需要在战术上有所转变——从告诉人们如何去做,到向他们展示如何去做,并在过程中指导他们。开放的领导人很快就会发现,领导力不在于我们为推动进步而施加的权力,而在于我们分配给组织成员的力量和信心。 21 世纪的领导者专注于社区和对他人的培养。最后,开放的领导者并不专注于自我,而是无私的。
### 交流
20 世纪的领导人囤积并控制着整个组织内信息的流动。 然而,开放的领导者试图通过与团队成员共享信息和背景(以及权力)来凝聚一个组织。 这些领导人打破了各自为政的领地,谦逊前行,以前所未有的方式分享权力。 集体赋权和参与式协作创造了灵活性、共担的责任、主人翁意识,尤其是幸福感。 当一个组织的成员被授权去做他们的工作时,他们会比层级制度下的同侪更快乐(因而也更有生产力)。
### 信任
开放的领导者接受不确定性,相信他们的追随者在正确的时间做正确的事情。 他们拥有比传统对手,有更高的吸引人力资本效率的能力。 再说一次:他们不会像命令和控制的微观管理者那样运作。 提高透明度,而不是暗箱操作,他们尽可能的把决策和行动放在公开场合,解释决策的基础,并假设员工对组织内的情况有高度的把握。开放领导者的操作的前提是,如果没有他们的持续干预,该组织的人力资本就更有能力取得成功。
### 自治权
在 20 世纪具有强大指挥和控制力的领导者专注于某些权力的时候,一个开放的领导者更多地关注组织内个人的实际活动。 当领导者专注于个人时,他们就能够更好地训练和指导团队成员。 从这个角度来看,一个开放的领导者关注的是与组织的愿景和使命一致的行为和行动。最后,一个开放的领导者被看作是团队中的一员,而不是团队的领导者。 这并不意味着领导人放弃了权力的地位,而是低估了这一点,以分享权力,并通过自主创造成果赋予个人权力。
### 赋权
开放的领导人把重点放在授予组织成员的权力上。 在这个过程中承认领导者在组织人力资本中的技能、能力和信任,从而为整个团队带来了积极的动力和意愿。 最终,赋权就是帮助追随者相信他们自己的能力。 那些相信自己拥有个人权力的追随者更有可能采取主动行动、制定和实现更高的目标,并在困难的环境下坚持下去。 最终,开放组织的概念是关于包容性,每个人都是属于自己的,个性和不同的观点对于成功是至关重要的。 一个开放的组织及其开放的领导者提供了一种社区的感觉,而成员则受到组织的使命或目的的驱动。 这会产生一种比个人更大的归属感。 个性创造了成员之间的幸福和工作满意度。 反过来,又实现了更高的效率和成功。
我们都应该为 21 世纪领导人所要求的开放性而努力。 这需要自我反省,好奇心,尤其是它正在进行的改变。 通过新的态度和习惯,我们逐渐发现了一个真正的开放领导者,并且希望我们在适应 21 世纪的领导风格的同时,也开始采纳这些理念。
是的,领导力就是力量。我们如何利用这种权力决定了我们组织的成败。 那些滥用权力的人不会持久,但那些分享权力和庆祝他人的人会更持久。 通过阅读 [这本书](https://opensource.com/open-organization/resources/leaders-manual),你可以在开放组织及其领导的持续对话中开始发挥重要作用。 在[本卷](https://opensource.com/open-organization/resources/leaders-manual)的结论中,您将找到与开放组织社区联系的额外资源和机会,以便您也可以与我们聊天、思考和成长。 欢迎来到谈话——欢光临!
*这篇文章最初是作为《开放组织领导手册》的引言出现的,它现在可以[从 Opensource.com 中可获得](https://opensource.com/open-organization/resources/leaders-manual)。*
( 题图:opensource.com)
---
作者简介:
Philip A Foster - Dr. Philip A. Foster 是一名领导/商业教练兼顾问兼兼职教授。 他是企业运营、组织发展、展望和战略领导层的著名思想领袖。 Dr. Foster 通过设计和实施战略、战略预见和规划来促进变革。
---
via: <https://opensource.com/open-organization/17/2/need-open-leaders-more-ever>
作者:[Philip A Foster](https://opensource.com/users/maximumchange) 译者:[TimeBear](https://github.com/TimeBear) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Leadership is power. More specifically, leadership is the power to influence the actions of others. The mythology of leadership can certainly conjure images of not only the romantic but also the sinister side of the human condition. How we ultimately decide to engage in leadership determines its true nature.
Many modern understandings of leadership are born out of warfare, where leadership is the skillful execution of command-and-control thinking. For most of the modern era of business, then, we engaged leadership as some great man or woman arriving at the pinnacle of power and exerting this power through position. Such traditional leadership relies heavily on formal lines of authority through hierarchies and reporting relationships. Authority in these structures flows down through the vertical hierarchy and exists along formal lines in the chain of command.
However, in the late 20th century, something began to change. New technologies opened doors to globalism and thus more dispersed teams. The way we engaged human capital began to shift, forever changing the way people communicate with each other. People inside organizations began to feel empowered, and they demanded a sense of ownership of their successes (and failures). Leaders were no longer the sole owners of power. The 21st century leader leading the 21st century organization began to understand empowerment, collaboration, accountability, and clear communication were the essence of a new kind of power. These new leaders began *sharing* that power—and they implicitly trusted their followers.
As organizations continue becoming more open, even individuals without "leadership" titles feel empowered to drive change. These organizations remove the chains of hierarchy and untether workers to do their jobs in the ways they best see fit. History has exposed 20th century leaders' tendencies to strangle agility through unilateral decision-making and unidirectional information flows. But the new century's leader best defines an organization by the number of individuals it empowers to get something done. There's power in numbers—and, frankly, one leader cannot be in all places at all times, making all the decisions.
So leaders are becoming open, too.
## Control
Where the leaders of old are focused on command-and-control positional power, an open leader cedes organizational control to others via new forms of organizational governance, new technologies, and other means of reducing friction, thereby enabling collective action in a more efficient manner. These leaders understand the power of trust, and believe followers will always show initiative, engagement, and independence. And this new brand of leadership requires a shift in tactics—from *telling people what to do* to *showing them what to do* and *coaching them along the way*. Open leaders quickly discover that leadership is not about the power we exert to influence progress, but the power and confidence we *distribute* among the members of the organization. The 21st century leader is focused on community and the edification of others. In the end, the open leader is not focused on self but is selfless.
## Communication
The 20th century leader hordes and controls the flow of information throughout the organization. The open leader, however, seeks to engage an organization by sharing information and context (as well as authority) with members of a team. These leaders destroy fiefdoms, walk humbly, and share power like never before. The collective empowerment and engaged collaboration they inspire create agility, shared responsibility, ownership—and, above all, happiness. When members of an organization are empowered to do their jobs, they're happier (and thus more productive) than their hierarchical counterparts.
## Trust
Open leaders embrace uncertainty and trust their followers to do the right thing at the right time. They possess an ability to engage human capital at a higher level of efficiency than their traditional counterparts. Again: They don't operate as command-and-control micromanagers. Elevating transparency, they don't operate in hiding, and they do their best to keep decisions and actions out in the open, explaining the basis on which decisions get made and assuming employees have a high level grasp of situations within the organization. Open leaders operate from the premise that the organization's human capital is more than capable of achieving success without their constant intervention.
## Autonomy
Where the powerful command-and-control 20th century leader is focused on some *position* of power, an open leader is more interested in the actual *role* an individual plays within the organization. When a leader is focused on an *individual*, they're better able to coach and mentor members of a team. From this perspective, an open leader is focused on modeling behaviors and actions that are congruent with the organization's vision and mission. In the end, an open leader is very much seen as a member of the team rather than the *head* of the team. This does not mean the leader abdicates a position of authority, but rather understates it in an effort to share power and empower individuals through autonomy to create results.
## Empowerment
Open leaders are focused on granting authority to members of an organization. This process acknowledges the skills, abilities, and trust the leader has in the organization's human capital, and thereby creates positive motivation and willingness for the entire team to take risks. Empowerment, in the end, is about helping followers believe in their own abilities. Followers who believe that they have personal power are more likely to undertake initiatives, set and achieve higher goals, and persist in the face of difficult circumstances. Ultimately the concept of an open organization is about inclusivity, where everyone belongs and individuality and differing opinions are essential to success. An open organization and its open leaders offer a sense of community, and members are motivated by the organization's mission or purpose. This creates a sense of belonging to something bigger than the individual. Individuality creates happiness and job satisfaction among its members. In turn, higher degrees of efficiency and success are achieved.
We should all strive for the openness the 21st century leader requires. This requires self-examination, curiosity—and, above all, it's ongoing process of change. Through new attitudes and habits, we move toward the discovery of what an open leader really *is *and *does,* and hopefully we begin to take on those ideals as we adapt our leadership styles to the 21st century.
Yes, leadership is power. How we use that power determines the success or failure of our organizations. Those who abuse power don't last, but those who share power and celebrate others do. By reading [this book](https://opensource.com/open-organization/resources/leaders-manual), you are beginning to play an important role in the ongoing conversation of the open organization and its leadership. And at the conclusion of [this volume](https://opensource.com/open-organization/resources/leaders-manual), you'll find additional resources and opportunities to connect with the open organization community, so that you too can chat, think, and grow with us. Welcome to the conversation—welcome to the journey!
*This article originally appeared as the introduction to *The Open Organization Leaders Manual*, now available from Opensource.com.* |
8,899 | IoT 边缘计算框架的新进展 | https://www.linux.com/blog/2017/7/iot-framework-edge-computing-gains-ground | 2017-09-24T14:12:13 | [
"物联网"
] | https://linux.cn/article-8899-1.html | 
>
> 开源项目 EdgeX Foundry 旨在开发一个标准化的互操作物联网边缘计算框架。
>
>
>
4 月份时,Linux 基金会[启动](http://linuxgizmos.com/open-source-group-focuses-on-industrial-iot-gateway-middleware/)了一个开源项目 [EdgeX Foundry](https://www.edgexfoundry.org/),用于为物联网边缘计算开发一个标准化的互操作框架。就在最近,EdgeX Foundry 又[宣布](https://www.edgexfoundry.org/announcement/2017/07/17/edgex-foundry-builds-momentum-for-a-iot-interoperability-and-a-unified-marketplace-with-eight-new-members/)新增了 8 个成员,使总成员数达到 58 个。
这些新成员是 Absolute、IoT Impact LABS、inwinStack、Parallel Machines、Queen's University Belfast、RIOT、Toshiba Digital Solutions Corporation 和 Tulip Interfaces。 其原有成员包括 AMD、Analog Devices、Canonical/Ubuntu、Cloud Foundry、Dell、Linaro、Mocana、NetFoundry、 Opto 22、RFMicron 和 VMWare 等其他公司或组织。
EdgeX Foundry 项目构建于戴尔早期的基于 Apache2.0 协议的 [FUSE](https://medium.com/@gigastacey/dell-plans-an-open-source-iot-stack-3dde43f24feb) 物联网中间件框架之上,其中包括十几个微服务和超过 12.5 万行代码。在 FUSE 合并了类同项目 AllJoyn-compliant IoTX 之后,Linux 基金会协同 Dell 创立了 EdgeX Foundry ,后者是由 EdgeX Foundry 现有成员 Two Bulls 和 Beechwood 发起的项目。
EdgeX Foundry 将创造一个互操作性的、即插即用组件的物联网边缘计算的生态系统。开源的 EdgeX 栈将协调各种传感器网络协议与多种云平台及分析平台。该框架旨在充分挖掘横跨边缘计算、安全、系统管理和服务等模块间的互操作性代码。
对于项目成员及其客户来说,其关键的好处是在于能将各种预先认证的软件集成到许多 IoT 网关和智能边缘设备上。 在 Linux.com 的一次采访中,[IoT Impact LABS](https://iotimpactlabs.com/) 的首席工程师 Dan Mahoney 说:“现实中,EdgeX Foundry 降低了我们在部署多供应商解决方案时所面对的挑战。”
在 Linux 基金会仍然在将其 AllSeen Alliance 项目下的 AllJoyn 规范合并进 [IoTivity](https://www.linux.com/news/how-iotivity-and-alljoyn-could-combine) 标准的情况下,为什么又发起了另外一个物联网标准化项目(EdgeX Foundry)呢?原因之一是,EdgeX Foundry 与 IoTivity 不同:IoTivity 主要面向消费级物联网,而 EdgeX Foundry 旨在同时覆盖消费级和工业级物联网。 更具体来说,EdgeX Foundry 旨在成为网关和智能终端的通用中间件。 EdgeX Foundry 与 IoTivity 的另一个不同在于,前者希望借助预认证的终端塑造一种新产品,后者更多解决现存产品之间的互操作性。
Linux 基金会 IoT 高级总监 Philip DesAutels 说:“IoTivity 提供实现设备之间无缝连接的协议, 而 EdgeX Foundry 提供了一个边缘计算框架。EdgeX Foundry 能够兼容如 IoTivity、 BacNet、 EtherCat 等任何协议设备,从而实现集成多协议通信系统的通用边缘计算框架,该项目的目标是为构建互操作组件的生态系统的过程中,降低不确定性,缩短市场化时间,更好地产生规模效应。”
上个月, 由 [Open Connectivity Foundation](https://openconnectivity.org/developer/specifications/international-standards) (OCF)和 Linux 基金组织共同发起的 IoTivity 项目发布了 [IoTivity 1.3](https://wiki.iotivity.org/release_note_1.3.0),该版本增加了与其曾经的对手 AllJoyn spec 的纽带,也增加了对于 OCF 的 UPnP 设备发现标准的接口。 预计在 [IoTivity 2.0](https://www.linux.com/news/iotivity-20-whats-store) 中, IoTivity 和 AllJoyn 将会更进一步深入集成。
DesAutels 告诉 linux.com,IoTivity 和 EdgeX 是“高度互补的”,其“原因是 EdgeX Foundry 项目的几个成员也是 IoTivity 或 OCF 的成员,如此更强化了 IoTivity 和 EdgeX 的合作关系。”
尽管 IoTivity 和 EdgeX 都宣称是跨平台的,包括在 CPU 架构和 OS 方面,但是二者还是存在一定区别。 IoTivity 最初是基于 Linux 平台设计,兼容 Ubuntu、Tizen 和 Android 等 Linux 系列 OS,后来逐步扩展到 Windows 和 iOS 操作系统。与之对应的 EdgeX 设计之初就是基于跨平台的理念,其完美兼容于各种 CPU 架构,支持 Linux、Windows 和 Mac OS 等操作系统,未来还将兼容于实时操作系统(RTOS)。
EdgeX 的新成员 [RIOT](https://riot-os.org/) 提供了一个开源的面向物联网的项目 RIOT RTOS。RIOT 的主要维护者 Thomas Eichinger 在一次表彰讲话中说:“由于 RIOT 初衷就是致力于解决 linux 不太适应的问题, 故对于 RIOT 社区来说,参加和支持类似于 EdgeX Foundry 等边缘计算的开源组织的积极性是自然而然的。”
### 传感器集成的简化
IoT Impact LABS (即 Impact LABS 或直接称为 LABS)是另一个 EdgeX 新成员。 该公司推出了一个独特的业务模式,旨在帮助中小企业度过物联网解决方案的试用阶段。该公司的大部分客户,其中包括几个 EdgeX Foundry 的项目成员,是致力于建设智慧城市、基础设施再利用、提高食品安全,以及解决社会面临的自然资源缺乏的挑战。
Dan Mahoney 说:“在 LABS 我们花费了很多时间来调和试点客户的解决方案之间的差异性。 EdgeX Foundry 可以最小化部署边缘软件系统的工作量,从而使我们能够更快更好地部署高质量的解决方案。”
该框架在涉及多个供应商、多种类型传感器的场景尤其凸显优势。“Edgex Foundry 将为我们提供快速构建可以控制所有部署的传感器的网关的能力。” Mahoney 补充说到。传感器制造商将借助 EdgeX SDK 烧写应用层协议驱动到边缘设备,该协议能够兼容多供应商和解决方案。
### 边缘分析能力的构建
当我们问到, Mahoney 的公司希望见到 EdgeX Foundry 怎样的发展时,他说:“我们喜见乐闻的一个目标是有更多有效的工业协议成为设备服务,这是一个更清晰的边缘计算实现之路。”
在工业物联网和消费级物联网中边缘计算都呈现增长趋势。 在后者,我们已经看到如 Alexa 的智能声控以及录像分析等几个智能家居系统[集成了边缘计算分析](https://www.linux.com/news/smart-linux-home-hubs-mix-iot-ai)技术。 这减轻了云服务平台的计算负荷,但同时也带来了安全、隐私,以及由于供应商中断或延迟问题引起的服务中断问题。
对于工业物联网网关来说,延迟是首要的问题。因此,物联网网关上开始出现一些原本属于云端的功能。 其中一个方案是,借助 [RIOS 与 Ubuntu Core 的 snap 机制](https://www.linux.com/news/future-iot-containers-aim-solve-security-crisis)这类容器技术,把云端应用安全地下放到嵌入式设备上。 另一种方案是,围绕物联网生态系统,把云端功能迁移到边缘。上个月,Amazon 发布了面向基于 Linux 的网关的 [AWS Greengrass](http://linuxgizmos.com/amazon-releases-aws-greengrass-for-local-iot-processing-on-linux-devices/) 物联网软件栈,可在本地以 AWS Lambda 的方式进行处理。 该软件使得 AWS 的计算、消息路由、数据缓存和同步能力可以在物联网网关这样的联网设备上本地运行。
分析能力是 EdgeX Foundry 路线图上的一个关键功能点。 发起成员之一 Cloud Foundry 旨在将其业界领先的云应用平台与边缘设备集成。 另一个新成员 [Parallel Machines](https://www.parallelmachines.com/) 则计划利用 EdgeX 将 AI 带到边缘设备。
EdgeX Foundry 仍然在项目早期, 软件仍然在 α 阶段,其成员在上个月(六月份)才刚刚进行了第一次全体成员大会。同时该项目已经为新开发者准备了一些初始训练课程,另外从[这里](https://wiki.edgexfoundry.org/)也能获取更多的信息。
---
via: <https://www.linux.com/blog/2017/7/iot-framework-edge-computing-gains-ground>
作者: [ERIC BROWN](https://www.linux.com/users/ericstephenbrown) 译者:[penghuster](https://github.com/penghuster) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,900 | 常用 GDB 命令中文速览 | https://sourceware.org/gdb/current/onlinedocs/gdb/ | 2017-09-24T15:04:31 | [
"gdb",
"调试"
] | https://linux.cn/article-8900-1.html | 
### 目录
* break -- 在指定的行或函数处设置断点,缩写为 `b`
* info breakpoints -- 打印未删除的所有断点,观察点和捕获点的列表,缩写为 `i b`
* disable -- 禁用断点,缩写为 `dis`
* enable -- 启用断点
* clear -- 清除指定行或函数处的断点
* delete -- 删除断点,缩写为 `d`
* tbreak -- 设置临时断点,参数同 `break`,但在程序第一次停住后会被自动删除
* watch -- 为表达式(或变量)设置观察点,当表达式(或变量)的值有变化时,暂停程序执行
* step -- 单步跟踪,如果有函数调用,会进入该函数,缩写为 `s`
* reverse-step -- 反向单步跟踪,如果有函数调用,会进入该函数
* next -- 单步跟踪,如果有函数调用,不会进入该函数,缩写为 `n`
* reverse-next -- 反向单步跟踪,如果有函数调用,不会进入该函数
* return -- 使选定的栈帧返回到其调用者
* finish -- 执行直到选择的栈帧返回,缩写为 `fin`
* until -- 执行直到达到当前栈帧中当前行后的某一行(用于跳过循环、递归函数调用),缩写为 `u`
* continue -- 恢复程序执行,缩写为 `c`
* print -- 打印表达式 EXP 的值,缩写为 `p`
* x -- 查看内存
* display -- 每次程序停止时打印表达式 EXP 的值(自动显示)
* info display -- 打印早先设置为自动显示的表达式列表
* disable display -- 禁用自动显示
* enable display -- 启用自动显示
* undisplay -- 删除自动显示项
* help -- 打印命令列表(带参数时查找命令的帮助),缩写为 `h`
* attach -- 挂接到已在运行的进程来调试
* run -- 启动被调试的程序,缩写为 `r`
* backtrace -- 查看程序调用栈的信息,缩写为 `bt`
* ptype -- 打印类型 TYPE 的定义
---
### break
使用 `break` 命令(缩写 `b`)来设置断点。
用法:
* `break` 当不带参数时,在所选栈帧中执行的下一条指令处设置断点。
* `break <function-name>` 在函数体入口处打断点,在 C++ 中可以使用 `class::function` 或 `function(type, ...)` 格式来指定函数名。
* `break <line-number>` 在当前源码文件指定行的开始处打断点。
* `break -N` `break +N` 在当前源码行前面或后面的 `N` 行开始处打断点,`N` 为正整数。
* `break <filename:linenum>` 在源码文件 `filename` 的 `linenum` 行处打断点。
* `break <filename:function>` 在源码文件 `filename` 的 `function` 函数入口处打断点。
* `break <address>` 在程序指令的地址处打断点。
* `break ... if <cond>` 设置条件断点,`...` 代表上述参数之一(或无参数),`cond` 为条件表达式,仅在 `cond` 值非零时暂停程序执行。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Set-Breaks.html)。
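例如,下面是一个设置条件断点的示意会话(文件名、行号、变量名均为假设,地址等输出亦为示意):

```
(gdb) break main.c:42 if count > 10
Breakpoint 1 at 0x4005d2: file main.c, line 42.
(gdb) info breakpoints
Num     Type           Disp Enb Address            What
1       breakpoint     keep y   0x00000000004005d2 in main at main.c:42
        stop only if count > 10
```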
### info breakpoints
查看断点,观察点和捕获点的列表。
用法:
* `info breakpoints [list...]`
* `info break [list...]`
* `list...` 用来指定若干个断点的编号(可省略),可以是 `2`, `1-3`, `2 5` 等。
### disable
禁用一些断点。参数是用空格分隔的断点编号。要禁用所有断点,不加参数。
禁用的断点不会被忘记,但直到重新启用才有效。
用法:
* `disable [breakpoints] [list...]`
* `breakpoints` 是 `disable` 的子命令(可省略),`list...` 同 `info breakpoints` 中的描述。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Disabling.html)。
### enable
启用一些断点。给出断点编号(以空格分隔)作为参数。没有参数时,所有断点被启用。
用法:
* `enable [breakpoints] [list...]` 启用指定的断点(或所有定义的断点)。
* `enable [breakpoints] once list...` 临时启用指定的断点。GDB 在停止您的程序后立即禁用这些断点。
* `enable [breakpoints] delete list...` 使指定的断点启用一次,然后删除。一旦您的程序停止,GDB 就会删除这些断点。等效于用 `tbreak` 设置的断点。
`breakpoints` 同 `disable` 中的描述。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Disabling.html)。
### clear
在指定行或函数处清除断点。参数可以是行号,函数名称或 `*` 跟一个地址。
用法:
* `clear` 当不带参数时,清除所选栈帧在执行的源码行中的所有断点。
* `clear <function>`, `clear <filename:function>` 删除在命名函数的入口处设置的任何断点。
* `clear <linenum>`, `clear <filename:linenum>` 删除在指定的文件指定的行号的代码中设置的任何断点。
* `clear <address>` 清除指定程序指令的地址处的断点。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Delete-Breaks.html)。
### delete
删除一些断点或自动显示表达式。参数是用空格分隔的断点编号。要删除所有断点,不加参数。
用法: `delete [breakpoints] [list...]`
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Delete-Breaks.html)。
### tbreak
设置临时断点。参数形式同 `break` 一样。
除了断点是临时的之外,其他同 `break` 一样,所以在命中时会被删除。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Set-Breaks.html)。
### watch
为表达式设置观察点。
用法: `watch [-l|-location] <expr>` 每当一个表达式的值改变时,观察点就会暂停程序执行。
如果给出了 `-l` 或者 `-location`,则它会对 `expr` 求值并观察它所指向的内存。例如,`watch *(int *)0x12345678` 将在指定的地址处观察一个 4 字节的区域(假设 int 占用 4 个字节)。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Set-Watchpoints.html)。
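一个示意会话如下(假设被调试程序已经启动,`counter` 是一个全局变量,变量名与输出细节均为示意):

```
(gdb) watch counter
Hardware watchpoint 1: counter
(gdb) continue
Continuing.

Hardware watchpoint 1: counter

Old value = 0
New value = 1
```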
### step
单步执行程序,直到到达不同的源码行。
用法: `step [N]` 参数 `N` 表示执行 N 次(或由于另一个原因直到程序停止)。
警告:如果当控制在没有调试信息的情况下编译的函数中使用 `step` 命令,则执行将继续进行,直到控制到达具有调试信息的函数。 同样,它不会进入没有调试信息编译的函数。
要执行没有调试信息的函数,请使用 `stepi` 命令,详见后文。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Continuing-and-Stepping.html)。
### reverse-step
反向单步执行程序,直到到达另一个源码行的开头。
用法: `reverse-step [N]` 参数 `N` 表示执行 N 次(或由于另一个原因直到程序停止)。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Reverse-Execution.html)。
### next
单步执行程序,执行完子程序调用。
用法: `next [N]`
与 `step` 不同,如果当前的源代码行调用子程序,则此命令不会进入子程序,而是将其视为单个源代码行,继续执行。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Continuing-and-Stepping.html)。
### reverse-next
反向步进程序,执行完子程序调用。
用法: `reverse-next [N]`
如果要执行的源代码行调用子程序,则此命令不会进入子程序,调用被视为一个指令。
参数 `N` 表示执行 N 次(或由于另一个原因直到程序停止)。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Reverse-Execution.html)。
### return
您可以使用 `return` 命令取消函数调用的执行。如果你给出一个表达式参数,它的值被用作函数的返回值。
用法: `return <expression>` 将 `expression` 的值作为函数的返回值并使函数直接返回。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Returning.html)。
### finish
执行直到选定的栈帧返回。
用法: `finish` 返回后,返回的值将被打印并放入到值历史记录中。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Continuing-and-Stepping.html)。
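一个示意:在函数内部执行 `finish`,GDB 会一直运行到该函数返回,并打印返回值(函数名、文件名与数值均为假设):

```
(gdb) finish
Run till exit from #0  square (x=5) at calc.c:3
0x00000000004005a1 in main () at calc.c:10
10          int y = square(5);
Value returned is $1 = 25
```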
### until
执行直到程序到达当前栈帧中当前行之后(与 [break](#break) 命令相同的参数)的源码行。此命令用于通过一个多次的循环,以避免单步执行。
用法:`until <location>` 或 `u <location>` 继续运行程序,直到达到指定的位置,或者当前栈帧返回。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Continuing-and-Stepping.html)。
### continue
在信号或断点之后,继续运行被调试的程序。
用法: `continue [N]` 如果从断点开始,可以使用数字 `N` 作为参数,这意味着将该断点的忽略计数设置为 `N - 1`(以便断点在第 N 次到达之前不会中断)。如果启用了非停止模式(使用 `show non-stop` 查看),则仅继续当前线程,否则程序中的所有线程都将继续。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Continuing-and-Stepping.html)。
### print
求值并打印表达式 EXP 的值。可访问的变量是所选栈帧的词法环境,以及范围为全局或整个文件的所有变量。
用法:
* `print [expr]` 或 `print /f [expr]` `expr` 是一个(在源代码语言中的)表达式。
默认情况下,`expr` 的值以适合其数据类型的格式打印;您可以通过指定 `/f` 来选择不同的格式,其中 `f` 是一个指定格式的字母;详见[输出格式](https://sourceware.org/gdb/current/onlinedocs/gdb/Output-Formats.html)。
如果省略 `expr`,GDB 再次显示最后一个值。
要以每行一个成员带缩进的格式打印结构体变量请使用命令 `set print pretty on`,取消则使用命令 `set print pretty off`。
可使用命令 `show print` 查看所有打印的设置。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Data.html)。
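几个常见用法的示意(变量名 `count` 为假设;`/x` 表示以十六进制打印,表达式可以任意组合):

```
(gdb) print count
$1 = 42
(gdb) print/x count
$2 = 0x2a
(gdb) print count * 2 + 1
$3 = 85
```

配合上面提到的 `set print pretty on`,之后打印结构体时会逐成员换行缩进显示。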
### x
检查内存。
用法: `x/nfu <addr>` 或 `x <addr>` `n`、`f` 和 `u` 都是可选参数,用于指定要显示的内存以及如何格式化。`addr` 是要开始显示内存的地址的表达式。
`n` 重复次数(默认值是 1),指定要显示多少个单位(由 `u` 指定)的内存值。
`f` 显示格式(初始默认值是 `x`),显示格式是 `print('x','d','u','o','t','a','c','f','s')` 使用的格式之一,再加 `i`(机器指令)。
`u` 单位大小,`b` 表示单字节,`h` 表示双字节,`w` 表示四字节,`g` 表示八字节。
例如:
`x/3uh 0x54320` 表示从地址 0x54320 开始以无符号十进制整数的格式,双字节为单位来显示 3 个内存值。
`x/16xb 0x7f95b7d18870` 表示从地址 0x7f95b7d18870 开始以十六进制整数的格式,单字节为单位显示 16 个内存值。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Memory.html)。
### display
每次程序暂停时,打印表达式 EXP 的值。
用法: `display <expr>`, `display/fmt <expr>` 或 `display/fmt <addr>` `fmt` 用于指定显示格式。像 [print](#print) 命令里的 `/f` 一样。
对于格式 `i` 或 `s`,或者包括单位大小或单位数量,将表达式 `addr` 添加为每次程序停止时要检查的内存地址。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Auto-Display.html)。
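一个示意(变量名为假设):设置之后,每次程序停下来,这些表达式都会按编号自动打印:

```
(gdb) display counter
1: counter = 0
(gdb) display/x flags
2: /x flags = 0x0
(gdb) next
13          do_work(counter);
1: counter = 1
2: /x flags = 0x0
```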
### info display
打印自动显示的表达式列表,每个表达式都带有项目编号,但不显示其值。
包括被禁用的表达式和不能立即显示的表达式(当前不可用的自动变量)。
### undisplay
取消某些表达式在程序暂停时的自动显示。参数是表达式的编号(使用 `info display` 查询编号)。不带参数表示取消所有自动显示表达式。
`delete display` 具有与此命令相同的效果。
### disable display
禁用某些表达式在程序暂停时的自动显示。禁用的显示项目不会被自动打印,但不会被忘记。 它可能稍后再次被启用。
参数是表达式的编号(使用 `info display` 查询编号)。不带参数表示禁用所有自动显示表达式。
### enable display
启用某些表达式在程序暂停时的自动显示。
参数是重新显示的表达式的编号(使用 `info display` 查询编号)。不带参数表示启用所有自动显示表达式。
### help
打印命令列表。
您可以使用不带参数的 `help`(缩写为 `h`)来显示命令的类别名的简短列表。
使用 `help <class>` 您可以获取该类中的各个命令的列表。使用 `help <command>` 显示如何使用该命令。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Help.html)。
### attach
挂接到 GDB 之外的进程或文件。该命令可以将进程 ID 或设备文件作为参数。
对于进程 ID,您必须具有向进程发送信号的权限,并且必须具有与调试器相同的有效的 uid。
用法: `attach <process-id>` GDB 在安排调试指定的进程之后做的第一件事是暂停该进程。
无论是通过 `attach` 命令挂接的进程还是通过 `run` 命令启动的进程,您都可以使用的 GDB 命令来检查和修改挂接的进程。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Attach.html)。
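一个示意(进程名与 PID 均为假设):

```
$ pidof myserver
12345
$ gdb
(gdb) attach 12345
Attaching to process 12345
...
(gdb) detach
Detaching from program: /usr/local/bin/myserver, process 12345
```

也可以直接运行 `gdb -p 12345` 一步完成挂接;调试结束后用 `detach` 脱离,被挂接的进程会继续运行。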
### run
启动被调试的程序。
可以直接指定参数,也可以用 [set args](https://sourceware.org/gdb/current/onlinedocs/gdb/Arguments.html) 设置(启动所需的)参数。
例如: `run arg1 arg2 ...` 等效于
```
set args arg1 arg2 ...
run
```
还允许使用 `>`、 `<` 或 `>>` 进行输入和输出重定向。
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Starting.html)。
### backtrace
打印整体栈帧信息。
* `bt` 打印整体栈帧信息,每个栈帧一行。
* `bt n` 类似于上,但只打印最内层的 n 个栈帧。
* `bt -n` 类似于上,但只打印最外层的 n 个栈帧。
* `bt full n` 类似于 `bt n`,还打印局部变量的值。
`where` 和 `info stack`(缩写 `info s`) 是 `backtrace` 的别名。调用栈信息类似如下:
```
(gdb) where
#0 vconn_stream_run (vconn=0x99e5e38) at lib/vconn-stream.c:232
#1 0x080ed68a in vconn_run (vconn=0x99e5e38) at lib/vconn.c:276
#2 0x080dc6c8 in rconn_run (rc=0x99dbbe0) at lib/rconn.c:513
#3 0x08077b83 in ofconn_run (ofconn=0x99e8070, handle_openflow=0x805e274 <handle_openflow>) at ofproto/connmgr.c:1234
#4 0x08075f92 in connmgr_run (mgr=0x99dc878, handle_openflow=0x805e274 <handle_openflow>) at ofproto/connmgr.c:286
#5 0x08057d58 in ofproto_run (p=0x99d9ba0) at ofproto/ofproto.c:1159
#6 0x0804f96b in bridge_run () at vswitchd/bridge.c:2248
#7 0x08054168 in main (argc=4, argv=0xbf8333e4) at vswitchd/ovs-vswitchd.c:125
```
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html)。
### ptype
打印类型 TYPE 的定义。
用法: `ptype[/FLAGS] TYPE-NAME | EXPRESSION`
参数可以是由 `typedef` 定义的类型名, 或者 `struct STRUCT-TAG` 或者 `class CLASS-NAME` 或者 `union UNION-TAG` 或者 `enum ENUM-TAG`。
根据所选的栈帧的词法上下文来查找该名字。
类似的命令是 `whatis`,区别在于 `whatis` 不展开由 `typedef` 定义的数据类型,而 `ptype` 会展开,举例如下:
```
/* 类型声明与变量定义 */
typedef double real_t;
struct complex {
real_t real;
double imag;
};
typedef struct complex complex_t;
complex_t var;
real_t *real_pointer_var;
```
这两个命令给出了如下输出:
```
(gdb) whatis var
type = complex_t
(gdb) ptype var
type = struct complex {
real_t real;
double imag;
}
(gdb) whatis complex_t
type = struct complex
(gdb) whatis struct complex
type = struct complex
(gdb) ptype struct complex
type = struct complex {
real_t real;
double imag;
}
(gdb) whatis real_pointer_var
type = real_t *
(gdb) ptype real_pointer_var
type = double *
```
详见[官方文档](https://sourceware.org/gdb/current/onlinedocs/gdb/Symbols.html)。
---
### 参考资料
* [Debugging with GDB](https://sourceware.org/gdb/current/onlinedocs/gdb/)
---
译者:[robot527](https://github.com/robot527) 校对:[mudongliang](https://github.com/mudongliang), [wxy](https://github.com/wxy)
| 301 | Moved Permanently | null |
8,901 | 18 个开源的项目本地化翻译工具 | https://opensource.com/article/17/6/open-source-localization-tools | 2017-09-25T07:47:00 | [
"翻译",
"本地化"
] | https://linux.cn/article-8901-1.html |
>
> <ruby> 本地化 <rt> Localization </rt></ruby>(L10N)在适应项目方面为世界各地的用户发挥着关键作用。
>
>
>

本地化在定制开源项目以适应世界各地用户的需求方面发挥着核心作用。 除了代码之外,语言翻译也是世界各地人们贡献和参与开源项目的主要方式之一。
有专门针对语言服务行业特有的工具(听到这件事是不是很惊讶?),这使得高品质的本地化过程可以很顺畅。 本地化工具的类别包括:
* 计算机辅助翻译工具(CAT)
* 机器翻译引擎(MT)
* 翻译管理系统(TMS)
* 术语管理工具
* 本地化自动化工具
这些工具的专有版本可能相当昂贵。一份 SDL Trados Studio(领先的 CAT 工具)的许可证可能要花费数千欧元,即使这样,它也只能供一个人使用,而且定制功能有限(顺便说一句,定制还要额外收费)。希望本地化成多种语言并简化本地化流程的开源项目,自然会想找开源工具来节省资金,并通过定制获得所需的灵活性。我对许多开源本地化工具项目做了一个概要性的调查,以帮助您决定使用什么。
### 计算机辅助翻译工具(CAT)

*OmegaT CAT 工具。在这里您可以发现翻译记忆(模糊匹配)和术语回顾(术语表)特性。OmegaT 在 GPL v3 许可证之下发布。*
CAT 工具是语言服务行业的主要工具。 顾名思义,CAT 工具可以帮助翻译人员尽快完成翻译、双语审查和单语审查的任务,并通过重用翻译内容(也称为翻译记忆),达到尽可能高的一致性。 <ruby> 翻译记忆 <rt> translation memory </rt></ruby>和<ruby> 术语回忆 <rt> terminology recall </rt></ruby>是 CAT 工具的两个主要特性。它们能够使译者在新项目中重用以前项目中翻译的内容。这使得他们可以在较短的时间内翻译大量的文字,同时通过术语和风格的一致性保持较高水平的质量。这对于本地化特别方便,因为许多软件和 web UI 中的文本在平台和应用程序中通常是相同的。 尽管 CAT 工具是独立的软件,但需要翻译人员在本地使用它们并合并到中央存储库。
**可用工具:**
* [OmegaT](http://www.omegat.org/)
* [OmegaT+](http://omegatplus.sourceforge.net/)
* [OpenTM2](http://opentm2.org/)
* [Anaphraseus](http://anaphraseus.sourceforge.net/)
* [字幕翻译器](http://www.mironto.sk/)
### 机器翻译引擎(MT)

机器翻译引擎自动将文本从一种语言翻译到另一种语言。机器翻译引擎被分成三种主要的方法:基于规则、统计式和神经网络式(这是新技术)。最广泛的机器翻译引擎方法是统计式,简而言之,通过使用 [*n*-gram 模型](https://en.wikipedia.org/wiki/N-gram#n-gram_models) 对带注释的双语语料库数据进行统计分析,得出关于两种语言之间的相互关联性。当将新的源语言短语引入到引擎进行翻译时,它会在其分析的语料库数据中查找与目标语言产生统计相关的对等物。机器翻译引擎可以作为翻译人员的生产力辅助工具,将他们的主要任务从将源文本转换为目标文本,改变为对机器翻译引擎的目标语言输出结果的后期编辑。我不建议在本地化工作中使用原始的机器翻译引擎输出结果,但是如果您的社区接受了后期编辑的培训,那么机器翻译引擎可以成为一个有用的工具,帮助他们做出大量的贡献。
**可用工具:**
* [Apertium](http://www.apertium.org/)
* [Moses](http://www.statmt.org/moses/)
### 翻译管理系统(TMS)

*如上是 Mozilla 的 Pontoon 翻译管理系统用户界面。使用所见即所得编辑方式,您可以在上下文根据语境翻译内容,在翻译的同时保证质量。 Pontoon 在 BSD 3 句版许可证(新款或修订版)之下发布。*
翻译管理系统工具是基于 web 的平台,允许您管理本地化项目,并使翻译人员和审阅人员能够做他们最擅长的事情。 大多数翻译管理系统工具旨在通过包括版本控制系统(VCS)集成、云服务集成、项目报告以及标准的翻译记忆和术语回忆功能,实现本地化过程中的许多手工部分的自动化。这些工具最适合于社区本地化或翻译项目,因为它们允许大量的翻译人员和审阅人员为一个项目做出贡献。一些人还使用所见即所得编辑器为他们的翻译者提供翻译语境。这种增加的语境可以提高翻译的准确性,减少译者在用户界面里翻译和审查翻译之间需要等待的时间。
**可用工具:**
* [Pontoon](http://pontoon.mozilla.org/)
* [Pootle](http://pootle.translatehouse.org/)
* [Weblate](https://weblate.org/)
* [Translate5](http://translate5.net/)
* [GlobalSight](http://www.globalsight.com/)
* [Zanata](http://zanata.org/)
* [Jabylon](http://jabylon.org/)
### 术语管理工具

*杨百翰大学 (Brigham Young University) 的 BaseTerm 工具显示了新术语条目的对话窗口。 BaseTerm 在 Eclipse 公共许可证之下发布。*
术语管理工具为您提供 GUI 来创建术语资源(称为术语库),以添加语境信息并确保翻译的一致性。CAT 工具和 TMS 平台会在翻译过程中使用这些资源来辅助译者。 对于同一个术语可能因语境不同而作名词或动词使用的语言,术语管理工具允许您为术语添加元数据,标记其词性、语法性别、单语定义以及上下文线索。 术语管理通常是本地化过程中被忽视的部分,但其重要性并不因此而减少。 在开源软件和专有软件的生态系统中,这类工具都只有少量可选产品。
**查看工具**
* [BaseTerm](http://certsoftadmin.byu.edu/baseterm/termbase/search_all)
* [Terminator](https://github.com/translate/terminator)
### 自动本地化工具

*Okapi 框架的 Ratel 和 Rainbow 组件。 图片由 Okapi 框架提供。Okapi 框架在 Apache 许可证 2.0 之下发布。*
自动本地化工具便于您处理本地化数据。这可以包括文本提取、文件格式转换、标记化、VCS 同步、术语提取、预翻译和对通用的本地化标准文件格式的各种质量检查。在一些工具套件中,如 Okapi 框架,您可以创建用于执行各种本地化任务的自动化流程。这对于各种情况都非常有用,但是它们的主要功能是通过自动化许多任务来节省时间。它们还可以让你更接近一个根据连续的本地化流程。
**查看工具**
* [Okapi Framework](http://okapiframework.org/)
* [Mojito](http://www.mojito.global/)
### 为什么开源是关键
本地化在开源时是最强力有效的。 这些工具应该让您和您的社区能够将您的项目本地化为尽可能多的语言。
想了解更多吗? 看看这些附加资源:
* [自由/开源的机器翻译软件](http://fosmt.org/) 列表
* *[开放翻译工具](https://booki.flossmanuals.net/open-translation-tools/index)* 电子书
(题图: opensource.com)
---
作者简介:
Jeff Beatty - Jeff Beatty 是 Mozilla 公司本地化的负责人, Mozilla 是流行的开源 web 浏览器 Firefox 的制造商。 他拥有利默里克大学(University of Limerick)多语言计算和本地化专业硕士学位。 Jeff 还在全球知名刊物中担任本地化专家,如<ruby> 《经济学人》 <rp> ( </rp> <rt> The Economist </rt> <rp> ) </rp></ruby>、<ruby> 《世界报》 <rp> ( </rp> <rt> El Universal </rt> <rp> ) </rp></ruby>、多语种杂志等。 Jeff 旨在展示 Mozilla 的本地化程序,创建颠覆性的开源翻译技术,并充当传播桥梁。
---
via: <https://opensource.com/article/17/6/open-source-localization-tools>
作者:[Jeff Beatty](https://opensource.com/users/guerojeff) 译者:[TimeBear](https://github.com/TimeBear) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Localization plays a central role in the ability to customize an open source project to suit the needs of users around the world. Besides coding, language translation is one of the main ways people around the world contribute to and engage with open source projects.
There are tools specific to the language services industry (surprised to hear that's a thing?) that enable a smooth localization process with a high level of quality. Categories that localization tools fall into include:
- Computer-assisted translation (CAT) tools
- Machine translation (MT) engines
- Translation management systems (TMS)
- Terminology management tools
- Localization automation tools
The proprietary versions of these tools can be quite expensive. A single license for SDL Trados Studio (the leading CAT tool) can cost thousands of euros, and even then it is only useful for one individual and the customizations are limited (and psst, they cost more, too). Open source projects looking to localize into many languages and streamline their localization processes will want to look at open source tools to save money and get the flexibility they need with customization. I've compiled this high-level survey of many of the open source localization tool projects out there to help you decide what to use.
## Computer-assisted translation (CAT) tools

opensource.com
CAT tools are a staple of the language services industry. As the name implies, CAT tools help translators perform the tasks of translation, bilingual review, and monolingual review as quickly as possible and with the highest possible consistency through reuse of translated content (also known as translation memory). Translation memory and terminology recall are two central features of CAT tools. They enable a translator to reuse previously translated content from old projects in new projects. This allows them to translate a high volume of words in a shorter amount of time while maintaining a high level of quality through terminology and style consistency. This is especially handy for localization, as text in a lot of software and web UIs is often the same across platforms and applications. CAT tools are standalone pieces of software though, requiring translators that use them to work locally and merge to a central repository.
**Tools to check out:**
## Machine translation (MT) engines
MT engines automate the transfer of text from one language to another. MT is broken up into three primary methodologies: rules-based, statistical, and neural (which is the new player). The most widespread MT methodology is statistical, which (in very brief terms) draws conclusions about the interconnectedness of a pair of languages by running statistical analyses over annotated bilingual corpus data using [ n-gram models](https://en.wikipedia.org/wiki/N-gram#n-gram_models). When a new source language phrase is introduced to the engine for translation, it looks within its analyzed corpus data to find statistically relevant equivalents, which it produces in the target language. MT can be useful as a productivity aid to translators, changing their primary task from translating a source text to a target text to post-editing the MT engine's target language output. I don't recommend using raw MT output in localizations, but if your community is trained in the art of post-editing, MT can be a useful tool to help them make large volumes of contributions.
**Tools to check out:**
## Translation management systems (TMS)

opensource.com
TMS tools are web-based platforms that allow you to manage a localization project and enable translators and reviewers to do what they do best. Most TMS tools aim to automate many manual parts of the localization process by including version control system (VCS) integrations, cloud services integrations, project reporting, as well as the standard translation memory and terminology recall features. These tools are most amenable to community localization or translation projects, as they allow large groups of translators and reviewers to contribute to a project. Some also use a WYSIWYG editor to give translators context for their translations. This added context improves translation accuracy and cuts down on the amount of time a translator has to wait between doing the translation and reviewing the translation within the user interface.
**Tools to check out**
## Terminology management tools

opensource.com
Terminology management tools give you a GUI to create terminology resources (known as termbases) to add context and ensure translation consistency. These resources are consumed by CAT tools and TMS platforms to aid translators in the process of translation. For languages in which a term could be either a noun or a verb based on the context, terminology management tools allows you to add metadata for a term that labels its gender, part of speech, monolingual definition, as well as context clues. Terminology management is often an underserved, but no less important, part of the localization process. In both the open source and proprietary ecosystems, there are only a small handful of options available.
**Tools to check out**
## Localization automation tools

opensource.com
Localization automation tools facilitate the way you process localization data. This can include text extraction, file format conversion, tokenization, VCS synchronization, term extraction, pre-translation, and various quality checks over common localization standard file formats. In some tool suites, like the Okapi Framework, you can create automation pipelines for performing various localization tasks. This can be very useful for a variety of situations, but their main utility is in the time they save by automating many tasks. They can also move you closer to a more continuous localization process.
**Tools to check out**
## Why open source is key
Localization is most powerful and effective when done in the open. These tools should give you and your communities the power to localize your projects into as many languages as humanly possible.
Want to learn more? Check out these additional resources:
*Jeff Beatty will be talking about open source localization tools at OpenWest, which will be held July 12-15 in Salt Lake City.*
|
8,902 | Kubernetes 为什么这么重要? | https://opensource.com/article/17/6/introducing-kubernetes | 2017-09-26T10:50:43 | [
"Kubernetes",
"PaaS"
] | https://linux.cn/article-8902-1.html |
>
> 在开发和部署云原生应用程序时,运行容器化负载的 Kubernetes 平台起到了重大作用。
>
>
>

开发和部署云原生应用程序已经变得非常受欢迎,这是有充分理由的。一个允许快速部署、持续交付 bug 修复和新功能的流程有着明显的优势,但是有一个没人谈及的“先有鸡还是先有蛋”的问题:怎样才能达成这样的目的呢?从头开始构建基础设施和开发流程来开发和维护云原生应用程序,是一项不简单且耗时的任务。
[Kubernetes](https://kubernetes.io/) 是一个相对较新的运行容器化负载的平台,它解决了这些问题。它原本是 Google 内部的一个项目,Kubernetes 在 2015 年被捐赠给了[云原生计算基金会](https://www.cncf.io/),并吸引了来自世界各地开源社区的开发人员。 Kubernetes 的设计基于 Google 15 年的在生产和开发环境运维的经验。由于它是开源的,任何人都可以下载并使用它,并实现其带来的优势。
那么为什么 Kubernetes 会引起这么大的轰动呢?我认为,它在像 OpenStack 这样的基础架构即服务(IaaS)和底层运行时实现完全由供应商控制的完整平台即服务(PaaS)之间,达到了最佳平衡。Kubernetes 兼具两者的优势:既提供了用于管理基础设施的抽象,也提供了深入到裸机进行故障排除的工具和功能。
### IaaS 与 PaaS
OpenStack 被大多数人归类为 IaaS 解决方案,其中物理资源池(如处理器、网络和存储)在不同用户之间分配和共享。它使用传统的基于硬件的虚拟化实现用户之间的隔离。
OpenStack 的 REST API 允许使用代码自动创建基础架构,但是问题就在这里:IaaS 产品的输出仍然是基础设施。基础设施创建出来之后,能用来支持和管理这些额外基础设施的服务并不多。到了一定程度之后,管理 OpenStack 所产生的底层基础架构(如服务器和 IP 地址)就会变成一项繁重的工作。一个众所周知的结果是虚拟机(VM)的无序蔓延,而同样的情况也出现在网络、加密密钥和存储卷上。这样,开发人员用来构建和维护应用程序的时间就更少了。
像其它基于集群的解决方案一样,Kubernetes 以单个服务器为粒度运行,以实现水平伸缩。它可以轻松添加新的服务器,并立即在新硬件上调度负载。类似地,当服务器没有被有效利用或需要维护时,可以将其从集群中移除。Kubernetes 还会自动处理其他编排活动,如作业调度、健康监测和维护高可用性。
网络是另一个可能难以在 IaaS 环境中可靠编排的领域。微服务之间通过 IP 地址通信可能是很棘手的。Kubernetes 实现了 IP 地址管理、负载均衡、服务发现和 DNS 名称注册,以在集群内提供无痛、透明的网络环境。
### 专为部署而设计
一旦创建了运行应用程序的环境,接下来就是部署这件“小事”了。可靠地部署一个应用程序是那种说起来容易做起来难的任务 —— 一点也不简单。Kubernetes 相对其他环境的巨大优势是,部署是一等公民。
使用 Kubernetes 命令行界面(CLI)的一条命令,就可以根据应用程序的描述将其安装到集群上。Kubernetes 实现了应用程序的整个生命周期:从初始部署、推出新版本,到进行回滚(这是出现问题时的一项关键功能)。进行中的部署也可以暂停和恢复。拥有现成的、内置的部署工具和支持,而不用自己构建部署系统,这一优点怎么强调都不为过。Kubernetes 用户既不必重新发明应用程序部署的轮子,也不用亲身体会这是一项多么艰巨的任务。
Kubernetes 还可以监控进行中的部署的状态。虽然你可以像编写部署过程本身一样,在 IaaS 环境中自己实现这个功能,但这是一项出乎意料地困难的任务,其中的极端情况比比皆是。
### 专为 DevOps 而设计
随着你在开发和部署 Kubernetes 应用程序方面获得更多经验,你将沿着与 Google 和其他前行者相同的路径前行。你将发现有几种 Kubernetes 功能对于多服务应用程序的有效开发和故障排除是非常重要的。
首先,Kubernetes 能够轻松检查正在运行的服务的日志、或者通过 SSH(安全 shell)进入其中,这一能力非常重要。通过一条命令行调用,管理员就可以检查在 Kubernetes 下运行的服务的日志。这听起来可能像一个简单的任务,但在 IaaS 环境中,除非你已经为此做了一些工作,否则并不容易。大型应用程序通常需要专门用于日志收集和分析的硬件和人员。Kubernetes 中的日志功能可能无法替代功能完整的日志和指标解决方案,但它足以支持基本的故障排除。
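如果更喜欢以编程方式做类似的事情,也可以使用官方的 Python 客户端(kubernetes 软件包)。下面是一个假设性的小例子(假定本地已有可用的 kubeconfig,Pod 名称 my-app-pod 只是演示用的占位符):

```python
from kubernetes import client, config

# 加载本地 kubeconfig(与 kubectl 使用同一份配置)
config.load_kube_config()
v1 = client.CoreV1Api()

# 列出 default 命名空间中的 Pod 及其状态
for pod in v1.list_namespaced_pod(namespace='default').items:
    print(pod.metadata.name, pod.status.phase)

# 读取某个 Pod 的日志(名称仅为示例)
logs = v1.read_namespaced_pod_log(name='my-app-pod', namespace='default')
print(logs)
```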
第二,Kubernetes 提供内置的密钥管理。从头开发过自己的部署系统的团队知道的另一个问题是,将敏感数据(如密码和 API 令牌)安全地部署到虚拟机上很困难。通过将密钥管理变成一等公民,Kubernetes 可以避免你的团队发明自己的不安全的、错误的密钥分发系统或在部署脚本中硬编码凭据。
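如果想看看这在代码里大致是什么样子,下面是一个同样假设性的小片段,用上面提到的 Python 客户端创建一个 Secret(名称和内容仅为演示):

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# 创建一个名为 app-credentials 的 Secret(内容仅为演示)
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name='app-credentials'),
    string_data={'db_password': 's3cr3t'},
)
v1.create_namespaced_secret(namespace='default', body=secret)

# 之后 Pod 可以通过环境变量或挂载卷来引用这个 Secret,
# 而不必把凭据硬编码在部署脚本中
```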
最后,Kubernetes 有一些用于自动进行缩放、负载均衡和重新启动应用程序的功能。同样,这些功能是开发人员在使用 IaaS 或裸机时要自己编写的。你的 Kubernetes 应用程序的缩放和运行状况检查在服务定义中进行声明,而 Kubernetes 会确保正确数量的实例健康运行。
### 总结
IaaS 和 PaaS 系统之间的差异是巨大的,包括 PaaS 可以节省大量的开发和调试时间。作为一种 PaaS,Kubernetes 实现了强大而有效的功能,可帮助你开发、部署和调试云原生应用程序。它的架构和设计代表了数十年的难得的经验,而你的团队能够免费获得该优势。
(题图:squarespace.com)
---
作者简介:
Tim Potter - Tim 是 Hewlett Packard Enterprise 的高级软件工程师。近二十年来,他一直致力于自由和开源软件的开发工作,其中包括 Samba、Wireshark、OpenPegasus 和 Docker 等多个项目。Tim 博客在 <https://elegantinfrastructure.com/> ,关于 Docker、Kubernetes 和其他基础设施相关主题。
---
via: <https://opensource.com/article/17/6/introducing-kubernetes>
作者:[Tim Potter](https://opensource.com/users/tpot) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Developing and deploying cloud-native applications has become very popular—for very good reasons. There are clear advantages to a process that allows rapid deployment and continuous delivery of bug fixes and new features, but there's a chicken-and-egg problem no one talks about: How do you get there from here? Building the infrastructure and developing processes to develop and maintain cloud-native applications—all from scratch—are non-trivial, time-intensive tasks.
[Kubernetes](https://kubernetes.io/), a relatively new platform for running containerized workloads, addresses these problems. Originally an internal project within Google, Kubernetes was donated to the [Cloud Native Computing Foundation](https://www.cncf.io/) in 2015 and has attracted developers from the open source community around the world. Kubernetes' design is based on 15 years of experience in running both production and development workloads. Since it is open source, anyone can download and use it and realize its benefits.
So why is such a big fuss being made over Kubernetes? I believe that it hits a sweet spot between an Infrastructure as a Service (IaaS) solution, like OpenStack, and a full Platform as a Service (PaaS) resource where the lower-level runtime implementation is completely controlled by a vendor. Kubernetes provides the benefits of both worlds: abstractions to manage infrastructure, as well as tools and features to drill down to bare metal for troubleshooting.
## IaaS vs. PaaS
OpenStack is classified by most people as an IaaS solution, where pools of physical resources, such as processors, networking, and storage, are allocated and shared among different users. Isolation between users is implemented using traditional, hardware-based virtualization.
OpenStack's REST API allows infrastructure to be created automatically using code, but therein lies the problem. The output of the IaaS product is yet more infrastructure. There's not much in the way of services to support and manage the extra infrastructure once it has been created. After a certain point, it becomes a lot of work to manage the low-level infrastructure, such as servers and IP addresses, produced by OpenStack. One well-known outcome is virtual machine (VM) sprawl, but the same concept applies to networks, cryptographic keys, and storage volumes. This leaves less time for developers to work on building and maintaining an application.
Like other cluster-based solutions, Kubernetes operates at the individual server level to implement horizontal scaling. New servers can be added easily and workloads scheduled on the hardware immediately. Similarly, servers can be removed from the cluster when they're not being utilized effectively or when maintenance is needed. Orchestration activities, such as job scheduling, health monitoring, and maintaining high availability, are other tasks automatically handled by Kubernetes.
Networking is another area that can be difficult to reliably orchestrate in an IaaS environment. Communication of IP addresses between services to link microservices can be particularly tricky. Kubernetes implements IP address management, load balancing, service discovery, and DNS name registration to provide a headache-free, transparent networking environment within the cluster.
## Designed for deployment
Once you have created the environment to run your application, there is the small matter of deploying it. Reliably deploying an application is one of those tasks that's easily said, but not easily done—not in the slightest. The huge advantage that Kubernetes has over other environments is that deployment is a first-class citizen.
There is a single command, using the Kubernetes command-line interface (CLI), that takes a description of the application and installs it on the cluster. Kubernetes implements the entire application lifecycle from initial deployment, rolling out new releases as well as rolling them back—a critical feature when things go wrong. In-progress deployments can also be paused and resumed. The advantage of having existing, built-in tools and support for application deployment, rather than building a deployment system yourself, cannot be overstated. Kubernetes users do not have to reinvent the application deployment wheel nor discover what a difficult task it is.
Kubernetes also has the facility to monitor the status of an in-progress deployment. While you can write this in an IaaS environment, like the deployment process itself, it's a surprisingly difficult task where corner cases abound.
## Designed for DevOps
As you gain more experience in developing and deploying applications for Kubernetes, you will be traveling the same path that Google and others have before you. You'll discover there are several Kubernetes features that are essential to effectively developing and troubleshooting a multi-service application.
First, Kubernetes' ability to easily examine the logs or SSH (secure shell) into a running service is vitally important. With a single command line invocation, an administrator can examine the logs of a service running under Kubernetes. This may sound like a simple task, but in an IaaS environment it's not easy unless you have already put some work into it. Large applications often have hardware and personnel dedicated just for log collection and analysis. Logging in Kubernetes may not replace a full-featured logging and metrics solution, but it provides enough to enable basic troubleshooting.
Second, Kubernetes offers built-in secret management. Another hitch known by teams who have developed their own deployment systems from scratch is that deploying sensitive data, such as passwords and API tokens, securely to VMs is hard. By making secrets first-class citizens, Kubernetes stops your team from inventing its own insecure, buggy secret-distribution system or just hardcoding credentials in deployment scripts.
Finally, there is a slew of features in Kubernetes for automatically scaling, load-balancing, and restarting your application. Again, these features are tempting targets for developers to write when using IaaS or bare metal. Scaling and health checks for your Kubernetes application are declared in the service definition, and Kubernetes ensures that the correct number of instances is running and healthy.
## Conclusion
The differences between IaaS and PaaS systems are enormous, including that PaaS can save a vast amount of development and debugging time. As a PaaS, Kubernetes implements a potent and effective set of features to help you develop, deploy, and debug cloud-native applications. Its architecture and design represent decades of hard-won experience that your team can take advantage of—for free.
|
8,907 | 机器学习实践指南 | https://medium.freecodecamp.org/how-machines-learn-a-practical-guide-203aae23cafb | 2017-09-28T06:34:58 | [
"机器学习",
"Python",
"深度学习",
"人工智能"
] | https://linux.cn/article-8907-1.html | 
你可能在各种应用中听说过<ruby> 机器学习 <rt> machine learning </rt></ruby>(ML),比如垃圾邮件过滤、光学字符识别(OCR)和计算机视觉。
开启机器学习之旅是一个涉及多方面的漫长旅途。对于新手,有很多的书籍,有学术论文,有指导练习,有独立项目。在这些众多的选择里面,很容易迷失你最初想学习的目标。
所以在今天的文章中,我会列出 7 个步骤(和 50 多个资源)帮助你开启这个令人兴奋的计算机科学领域的大门,并逐渐成为一个机器学习高手。
请注意,这个资源列表并不详尽,只是为了让你入门。 除此之外,还有更多的资源。
### 1、 学习必要的背景知识
你可能还记得 DataCamp 网站上的[学习数据科学](https://www.datacamp.com/community/tutorials/learn-data-science-infographic)这篇文章里面的信息图:数学和统计学是开始机器学习(ML)的关键。 基础可能看起来很容易,因为它只有三个主题。 但不要忘记这些实际上是三个广泛的话题。
在这里需要记住两件非常重要的事情:
* 首先,你一定会需要一些进一步的指导,以了解开始机器学习需要覆盖哪些知识点。
* 其次,这些是你进一步学习的基础。 不要害怕花时间,有了这些知识你才能构建一切。
第一点很简单:学习线性代数和统计学是个好主意。这两门知识是必须要理解的。但是在你学习的同时,也应该尝试学习诸如最优化和高等微积分等主题。当你越来越深入 ML 的时候,它们就能派上用场。
如果是从零开始的,这里有一些入门指南可供参考:
* [Khan 学院](http://www.khanacademy.org/) 对于初学者是非常好的资源,可以考虑学习他们的线性代数和微积分课程。
* 在 [麻省理工学院 OpenCourseWare](https://ocw.mit.edu/index.htm) 网站上学习[线性代数](https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/) 课程。
* [Coursera course](https://www.coursera.org/learn/basic-statistics) 网站上对描述统计学、概率论和推论统计学的介绍内容。

*统计学是学习 ML 的关键之一*
如果你更多喜欢阅读书籍,请参考以下内容:
* <ruby> <a href="https://www.amazon.com/Linear-Algebra-Its-Applications-4th/dp/0030105676"> 线性代数及其应用 </a> <rt> Linear Algebra and Its Applications </rt></ruby>
* <ruby> <a href="https://www.amazon.com/Applied-Linear-Algebra-3rd-Noble/dp/0130412600"> 应用线性代数 </a> <rt> Applied Linear Algebra </rt></ruby>
* <ruby> <a href="https://www.amazon.de/Solved-Problems-Linear-Algebra-Schaums/dp/0070380236"> 线性代数解决的 3000 个问题 </a> <rt> 3,000 Solved Problems in Linear Algebra </rt></ruby>
* [麻省理工学院在线教材](https://ocw.mit.edu/courses/online-textbooks/)
然而,在大多数情况下,你已经对统计学和数学有了一个初步的了解。很有可能你已经浏览过上面列举的那些资源。
在这种情况下,诚实地回顾和评价你的知识是一个好主意,是否有一些领域是需要复习的,或者现在掌握的比较好的?
如果你一切都准备好了,那么现在是时候使用 R 或者 Python 应用这些知识了。作为一个通用的指导方针,选择一门语言开始是个好主意。另外,你仍然可以将另一门语言加入到你的技能池里。
为什么这些编程知识是必需的?
嗯,你会看到上面列出的课程(或你在学校或大学学习的课程)将为你提供关于数学和统计学主题的更理论性的介绍(而不是应用性的)。 然而,ML 非常便于应用,你需要能够应用你所学到的所有主题。 所以最好再次复习一遍之前的材料,但是这次需要付诸应用。
如果你想掌握 R 和 Python 的基础,可以看以下课程:
* DataCamp 上关于 Python 或者 R 的介绍性课程: [Python 语言数据科学介绍](https://www.datacamp.com/courses/intro-to-python-for-data-science) 或者 [R 语言编程介绍](https://www.datacamp.com/courses/free-introduction-to-r)。
* Edx 上关于 Python 或者 R 的介绍性课程: [Python 语言数据科学介绍](https://www.edx.org/course/introduction-python-data-science-microsoft-dat208x-5) 和 [R 语言数据科学介绍](https://www.edx.org/course/introduction-r-data-science-microsoft-dat204x-4)。
* 还有很多其他免费的课程。查看 [Coursera](http://www.coursera.org/) 或者 [Codeacademy](https://www.codecademy.com/) 了解更多。
当你打牢基础知识后,请查看 DataCamp 上的博客 [Python 统计学:40+ 数据科学资源](https://www.datacamp.com/community/tutorials/python-statistics-data-science)。 这篇文章提供了统计学方面的 40 多个资源,这些资源都是你开始数据科学(以及 ML)需要学习的。
还要确保你查看了关于向量和数组的 [这篇 SciPy 教程](https://www.datacamp.com/community/tutorials/python-scipy-tutorial)文章,以及使用 Python 进行科学计算的[研讨会](http://www.math.pitt.edu/%7Esiam/workshops/python10/python.pdf)。
要使用 Python 和微积分进行实践,你可以了解下 [SymPy 软件包](http://docs.sympy.org/latest/tutorial/calculus.html)。
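如果想马上动手,下面是一个很小的练习示例(假设已经安装了 numpy 和 sympy),把向量运算和微积分都过一遍:

```python
import numpy as np
from sympy import symbols, diff, integrate, sin

# 用 NumPy 练习向量与数组运算
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
print(v.dot(w))           # 点积:32.0
print(np.linalg.norm(v))  # 向量长度(L2 范数)

# 用 SymPy 练习微积分
x = symbols('x')
f = x**2 * sin(x)
print(diff(f, x))         # 对 x 求导
print(integrate(f, x))    # 对 x 求不定积分
```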
### 2、 不要害怕在 ML 的“理论”上浪费时间
很多人并不会花很多精力去浏览理论材料,因为理论是枯燥的、无聊的。但从长远来看,在理论知识上投入时间是至关重要的、非常值得的。 你将会更好地了解机器学习的新进展,也能和背景知识结合起来。 这将有助于你保持学习积极性。
此外,理论并不会多无聊。 正如你在介绍中所看到的,你可以借助非常多的资料深入学习。
书籍是吸收理论知识的最佳途径之一。 它们可以让你停下来想一会儿。 当然,看书是一件非常平静的事情,可能不符合你的学习风格。 不过,请尝试阅读下列书籍,看看它是否适合你:
* <ruby> <a href="http://www.cs.cmu.edu/%7Etom/mlbook.html"> 机器学习教程 </a> <rt> Machine Learning textbook </rt></ruby>, Tom Mitchell 著,书可能比较旧,但是却很经典。这本书很好的解释介绍了机器学习中最重要的课题,步骤详尽,逐层深入。
* <ruby> 机器学习: 使数据有意义的算法艺术和科学 <rt> Machine Learning: The Art and Science of Algorithms that Make Sense of Data </rt></ruby>(你可以在[这里](http://www.cs.bris.ac.uk/%7Eflach/mlbook/materials/mlbook-beamer.pdf)看到这本书的幻灯片版本):这本书对初学者来说非常棒。 里面讨论了许多实践中的应用程序,其中有一些是在 Tom Mitchell 的书中缺少的。
* <ruby> <a href="http://www.mlyearning.org/"> 机器学习之向往 </a> <rt> Machine Learning Yearning </rt></ruby> :这本书由<ruby> 吴恩达 <rt> Andrew Ng </rt></ruby>编写的,仍未完本,但对于那些正在学习 ML 的学生来说,这一定是很好的参考资料。
* <ruby> <a href="https://www.amazon.com/Algorithms-Data-Structures-Applications-Practitioner/dp/0134894286"> 算法与数据结构 </a> <rt> Algorithms and Data Structures </rt></ruby> 由 Jurg Nievergelt 和 Klaus Hinrichs 著。
* 也可以参阅 Matthew North 的<ruby> <a href="https://www.amazon.com/Data-Mining-Masses-Matthew-North/dp/0615684378"> 面向大众的数据挖掘 </a> <rt> Data Mining for the Masses </rt></ruby>。 你会发现这本书引导你完成一些最困难的主题。
* <ruby> <a href="http://alex.smola.org/drafts/thebook.pdf"> 机器学习介绍 </a> <rt> Introduction to Machine Learning </rt></ruby> 由 Alex Smola 和 S.V.N. Vishwanathan 著。

*花些时间看书并研究其中涵盖的资料*
视频和慕课对于喜欢边听边看来学习的人来说非常棒。 慕课和视频非常的多,多到可能你都很难找到适合你的。 下面列出了最知名的几个:
* [这个著名的机器学习慕课](https://www.coursera.org/learn/machine-learning),是<ruby> 吴恩达 <rt> Andrew Ng </rt></ruby>讲的,介绍了机器学习及其理论。 别担心,这个慕课讲的非常好,一步一步深入,所以对初学者来说非常适用。
* [麻省理工学院 Open Courseware 的 6034 课程的节目清单](https://youtu.be/TjZBTDzGeGg?list=PLnvKubj2-I2LhIibS8TOGC42xsD3-liux),已经有点前沿了。 在你开始本系列之前,你需要做一些 ML 理论方面的准备工作,但是你不会后悔的。
在这一点上,重要的是要将各种独立的技术融会贯通,形成整体的结构图。 首先了解关键的概念:<ruby> 监督学习 <rt> supervised learning </rt></ruby>和<ruby> 无监督学习 <rt> unsupervised learning </rt></ruby>的区别、分类和回归等。 手动(书面)练习可以派上用场,能帮你了解算法是如何工作的以及如何应用这些算法。 在大学课程里你经常会找到一些书面练习,可以看看波特兰州立大学的 [ML 课程](http://web.cecs.pdx.edu/%7Emm/MachineLearningSpring2017/)。
### 3、 开始动手
通过看书和看视频了解理论和算法都非常好,但是需要超越这一阶段,就要开始做一些练习。你要学着去实现这些算法,应用学到的理论。
首先,有很多介绍 Python 和 R 方面的机器学习的基础知识。当然最好的方法就是使用交互式教程:
* [Python 机器学习:Scikit-Learn 教程](https://www.datacamp.com/community/tutorials/machine-learning-python),在这篇教程里面,你可以学到使用 Scikit-Learn 构建模型的 KMeans 和支持向量机(SVM)相关的知名算法。
* [给初学者的 R 语言机器学习教程](https://www.datacamp.com/community/tutorials/machine-learning-in-r) 用 R 中的类和 caret 包介绍机器学习。
* [Keras 教程:Python 深度学习](https://www.datacamp.com/community/tutorials/deep-learning-python) 涵盖了如何一步一步地为分类和回归任务构建多层感知器(MLP)。
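上面的 Scikit-Learn 教程里提到的 KMeans 和支持向量机(SVM),用几行代码就能先体验一下。下面是一个最小的示例(假设已安装 scikit-learn;这里使用其内置的鸢尾花数据集,仅作演示):

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# 无监督:KMeans 聚类,不使用标签
kmeans = KMeans(n_clusters=3, random_state=42)
clusters = kmeans.fit_predict(X)
print(clusters[:10])

# 监督:支持向量机分类,使用标签训练并评估
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
svm = SVC(kernel='rbf')
svm.fit(X_train, y_train)
print(svm.score(X_test, y_test))
```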
还请查看以下静态的(非互动的)教程,这些需要你在 IDE 中操作:
* [循序渐进:Python 机器学习](http://machinelearningmastery.com/machine-learning-in-python-step-by-step/): 一步一步地学习 Scikit-Learn。
* [循序渐进:使用 Keras 开发你的第一个神经网络](http://machinelearningmastery.com/tutorial-first-neural-network-python-keras/): 按这个教程一步一步地使用 Keras 开发你的第一个神经网络。
* 你可以考虑看更多的教程,但是[机器学习精要](http://www.machinelearningmastery.com/)这篇教程是非常好的。
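上面不止一次提到了用 Keras 构建第一个神经网络。作为参照,下面是一个用 Keras(2.x 的 Sequential API)搭建简单多层感知器(MLP)做二分类的最小示例(数据是随机生成的占位数据,仅作演示):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# 假设的数据:1000 个样本,每个有 20 个特征,标签为 0/1
X = np.random.random((1000, 20))
y = np.random.randint(2, size=(1000, 1))

# 一个简单的多层感知器(MLP)
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=20))
model.add(Dense(16, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```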
除了教程之外,还有一些课程。参加课程可以帮助你系统性地应用学到的概念。 经验丰富的导师很有帮助。 以下是 Python 和机器学习的一些互动课程:
* [用 scikit-learn 做监督学习](https://www.datacamp.com/courses/supervised-learning-with-scikit-learn): 学习如何构建预测模型,调整参数,并预测在未知数据上执行的效果。你将使用 Scikit-Learn 操作真实世界的数据集。
* [用 Python 做无监督学习](https://www.datacamp.com/courses/unsupervised-learning-in-python): 展示给你如何从未标记的数据集进行聚类、转换、可视化和提取关键信息。 在课程结束时,还会构建一个推荐系统。
* [Python 深度学习](https://www.datacamp.com/courses/deep-learning-in-python): 你将获得如何使用 Keras 2.0 进行深度学习的实践知识,Keras 2.0 是前沿的 Python 深度学习库 Keras 的最新版本。
* [在 Python 中应用机器学习](https://www.coursera.org/learn/python-machine-learning): 将学习者引入到机器学习实践中,更多地关注技术和方法,而不是这些方法背后的统计学知识。

*理论学习之后,花点时间来应用你所学到的知识。*
对于那些正在学习 R 语言机器学习的人,还有这些互动课程:
* [机器学习介绍](https://www.datacamp.com/courses/introduction-to-machine-learning-with-r) 可以让你宏观了解机器学习学科最常见的技术和应用,还可以更多地了解不同机器学习模型的评估和训练。这门课程剩下的部分重点介绍三个最基本的机器学习任务: 分类、回归和聚类。
* [R 语言无监督学习](https://www.datacamp.com/courses/unsupervised-learning-in-r) ,用 R 语言从 ML 角度提供聚类和降维的基本介绍。 可以让你尽快获得数据的关键信息。
* [实操机器学习](https://www.coursera.org/learn/practical-machine-learning)涵盖了构建和应用预测功能的基本组成部分,其重点是实际应用。
最后,还有很多书籍以偏向实践的方式介绍了 ML 主题。 如果你想借助书籍内容和 IDE 来学习,请查看这些书籍:
* <ruby> <a href="https://github.com/rasbt/python-machine-learning-book"> Python 机器学习 </a> <rt> Python Machine Learning Book </rt></ruby>,Sebastian Raschka 著。
* <ruby> <a href="https://github.com/rasbt/deep-learning-book"> 人工神经网络与深度学习导论:Python 应用实用指南 </a> <rt> Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python </rt></ruby>,Sebastian Raschka 著。
* <ruby> <a href="https://books.google.be/books/about/Machine_Learning_with_R.html?id=ZQu8AQAAQBAJ&amp;amp;amp;amp;amp;amp;source=kp_cover&amp;amp;amp;amp;amp;amp;redir_esc=y"> R 语言机器学习 </a> <rt> Machine Learning with R </rt></ruby>,Brett Lantz 著。
### 4、 练习
实践比使用 Python 进行练习和修改材料更重要。 这一步对我来说可能是最难的。 在做了一些练习后看看其他人是如何实现 ML 算法的。 然后,开始你自己的项目,阐述你对 ML 算法和理论的理解。
最直接的方法之一就是将练习的规模做得更大些。 要做一个更大的练习,就需要你做更多的数据清理和特征工程。
* 从 [Kaggle](http://www.kaggle.com/) 开始。 如果你需要额外的帮助来征服所谓的“数据恐惧”,请查看 [Kaggle 的 Python 机器学习教程](https://www.datacamp.com/community/open-courses/kaggle-python-tutorial-on-machine-learning) 和 [Kaggle 的 R 语言机器学习教程](https://www.datacamp.com/community/open-courses/kaggle-tutorial-on-machine-learing-the-sinking-of-the-titanic)。 这些将带给您快速的提升。
* 此后,你也可以自己开始挑战。 查看这些网站,您可以在其中找到大量的 ML 数据集:[UCI 机器学习仓库](http://archive.ics.uci.edu/ml/),[用于机器学习的公开数据集](http://homepages.inf.ed.ac.uk/rbf/IAPR/researchers/MLPAGES/mldat.htm) 和 [data.world](https://data.world/)。
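数据清理和特征工程通常从 pandas 开始。下面是一个假设性的小例子(假定有一个名为 train.csv 的数据集,其中包含 Age、Sex、Fare 等列,类似 Kaggle 的 Titanic 数据,仅作演示):

```python
import pandas as pd

# 读入假设的训练数据
df = pd.read_csv('train.csv')

# 数据清理:查看缺失值,并用中位数填充 Age 列
print(df.isnull().sum())
df['Age'] = df['Age'].fillna(df['Age'].median())

# 简单的特征工程:把类别特征转成数值,再构造一个新特征
df['Sex'] = df['Sex'].map({'male': 0, 'female': 1})
df['FarePerAge'] = df['Fare'] / (df['Age'] + 1)

print(df.head())
```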

*熟能生巧。*
### 5、 项目
虽然做一些小的练习也不错,但是在最后,您需要做一个项目,可以在其中展示您对使用到的 ML 算法的理解。
最好的练习是实现你自己的 ML 算法。 您可以在以下页面中阅读更多关于为什么您应该做这样的练习,以及您可以从中学到什么内容:
* [为什么有许多先进的 API,比如 tensorflow,还需要自己手动实现机器学习的算法?](https://www.quora.com/Why-is-there-a-need-to-manually-implement-machine-learning-algorithms-when-there-are-many-advanced-APIs-like-tensorflow-available)
* [为什么要从头开始实现机器学习算法?](http://www.kdnuggets.com/2016/05/implement-machine-learning-algorithms-scratch.html)
* [使用 Python 从头开始实现一个分类器,我能从中学到什么?](http://www.jeannicholashould.com/what-i-learned-implementing-a-classifier-from-scratch.html)
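“从头实现”听起来吓人,其实可以从很小的算法开始。下面是一个只用 NumPy、以梯度下降从零实现一元线性回归的最小示例(数据是人为构造的,仅作演示):

```python
import numpy as np

# 构造一些带噪声的假数据:y ≈ 3x + 2
rng = np.random.RandomState(0)
x = rng.rand(100)
y = 3 * x + 2 + 0.1 * rng.randn(100)

# 用梯度下降拟合 y = w * x + b
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    y_pred = w * x + b
    error = y_pred - y
    # 对均方误差分别求 w 和 b 的梯度
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # 应接近 3 和 2
```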
接下来,您可以查看以下文章和仓库。 可以从中获得一些灵感,并且了解他们是如何实现 ML 算法的。
* [如何实现机器学习算法](http://machinelearningmastery.com/how-to-implement-a-machine-learning-algorithm/)
* [从头开始学习机器学习](https://github.com/eriklindernoren/ML-From-Scratch)
* [从头开始学习机器学习算法](https://github.com/madhug-nadig/Machine-Learning-Algorithms-from-Scratch)

*开始时项目可能会很难,但是可以极大增加你的理解。*
### 6、 不要停止
对 ML 的学习永远不能停止,即使你在这个领域工作了十年,总是有新的东西要学习,许多人都将会证实这一点。
例如,ML 趋势,比如<ruby> 深度学习 <rt> deep learning </rt></ruby>现在就很受欢迎。你也可以专注于那些现在不怎么火,但是将来会火的话题上。如果你想了解更多,可以看看[这个有趣的问题和答案](https://www.quora.com/Should-I-quit-machine-learning)。
当你苦恼于掌握基础知识时,你最先想到的可能不是论文。 但是它们是你紧跟最新研究的一个途径。 论文并不适合刚刚开始学习的人,但是绝对适合高级人员。
* [20 篇最新的机器学习和深度学习领域的顶级研究论文](http://www.kdnuggets.com/2017/04/top-20-papers-machine-learning.html)
* [机器学习研究杂志](http://www.jmlr.org/)
* [优秀的深度学习论文](https://github.com/terryum/awesome-deep-learning-papers)
* [机器学习的一些最好的研究论文和书籍](https://www.quora.com/What-are-some-of-the-best-research-papers-books-for-Machine-learning)
其他技术也是需要考虑的。 但是当你刚开始学习时,不要担心这些。 例如,您可以专注于 Python 或 R 语言 (取决于你已经知道哪一个),并把它加入到你的技能池里。 你可以通过这篇文章来查找一些感兴趣的资源。
如果您还想转向大数据,您可以考虑研究 Spark。 这里有一些有趣的资源:
* [在 R 语言中使用 sparklyr 来了解 Spark](https://www.datacamp.com/courses/introduction-to-spark-in-r-using-sparklyr)
* [Spark 数据科学与工程](https://www.edx.org/xseries/data-science-engineering-apache-spark)
* [介绍 Apache Spark](https://www.edx.org/course/introduction-apache-spark-uc-berkeleyx-cs105x)
* [Apache Spark 分布式机器学习](https://www.edx.org/course/distributed-machine-learning-apache-uc-berkeleyx-cs120x)
* [用 Apache Spark 进行大数据分析](https://www.edx.org/course/big-data-analysis-apache-spark-uc-berkeleyx-cs110x)
* [初学者指南:用 Python 操作 Apache Spark](https://www.datacamp.com/community/tutorials/apache-spark-python)
* [PySpark RDD 速查表](https://www.datacamp.com/community/blog/pyspark-cheat-sheet-python)
* [PySpark SQL 速查表](https://www.datacamp.com/community/blog/pyspark-sql-cheat-sheet)
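如果想先对 Spark 有个直观感受,可以试试下面这个 PySpark 的最小示例(假设已经安装了 pyspark,并在本地模式下运行,仅作演示):

```python
from pyspark.sql import SparkSession

# 在本地模式下启动一个 Spark 会话
spark = SparkSession.builder.master('local[*]').appName('demo').getOrCreate()

# 构造一个小的 DataFrame 并做简单的聚合
df = spark.createDataFrame(
    [('a', 1), ('b', 2), ('a', 3)],
    ['key', 'value']
)
df.groupBy('key').sum('value').show()

spark.stop()
```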
其他编程语言,比如 Java、JavaScript、C 和 C++ 在 ML 中越来越重要。 从长远来看,您可以考虑将其中一种语言添加到学习列表中。 你可以使用这些博客文章来指导你选择:
* [机器学习和数据科学最流行的编程语言](https://fossbytes.com/popular-top-programming-languages-machine-learning-data-science/)
* [机器学习和数据科学最流行的语言是...](http://www.kdnuggets.com/2017/01/most-popular-language-machine-learning-data-science.html)

*学无止境。*
### 7、 利用一切可以利用的资源
机器学习是一个充满难度的话题,有时候可能会让你失去动力。 或者也许你觉得你需要点改变。 在这种情况下,请记住,有很多资源可以让你打消掉这种想法。 查看以下资源:
**播客**是可以让你继续你的 ML 旅程,紧跟这个领域最新的发展的伟大资源:
* [谈论机器](http://www.thetalkingmachines.com/)
* [数据怀疑论者](https://dataskeptic.com/)
* [线性化](http://lineardigressions.com/)
* [本周的机器学习及 AI](https://twimlai.com/)
* [机器学习 101](http://www.learningmachines101.com/)
当然,还有更多的播客。
**文档和软件包源代码**是深入了解 ML 算法的实现的两种方法。 查看这些仓库:
* [Scikit-Learn](https://github.com/scikit-learn/scikit-learn):知名的 Python ML 软件包
* [Keras](http://www.github.com/fchollet/keras): Python 深度学习软件包
* [caret](https://github.com/topepo/caret): 非常受欢迎的用于分类和回归训练的 R 软件包
**可视化**是深入 ML 理论的最新也是最流行的方式之一。 它们对初学者来说非常棒,但对于更高级的学习者来说也是非常有趣的。 你肯定会被下面这些可视化资源所吸引,它们能让你更加了解 ML 的工作原理:
* [机器学习的可视化介绍](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/)
* [Distill](http://distill.pub/) 使 ML 研究清晰,动态和生动。
* 如果你想玩下神经网络架构,可以看下 [Tensorflow - 神经网络游乐场](http://playground.tensorflow.org/)。
* 更多的看这里:[机器学习算法最佳的可视化方法是什么?](https://www.quora.com/What-are-the-best-visualizations-of-machine-learning-algorithms)
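除了看别人做好的可视化,也可以自己动手画。下面是一个最小的示例(假设已安装 matplotlib 和 scikit-learn,用随机生成的二维数据演示 KMeans 的聚类结果,仅作示意):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# 生成三团二维的模拟数据
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# 聚类并按簇着色绘制散点图
kmeans = KMeans(n_clusters=3, random_state=42).fit(X)
plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_, s=20)
plt.scatter(*kmeans.cluster_centers_.T, c='red', marker='x')  # 簇中心
plt.title('KMeans clusters')
plt.show()
```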

*学习中的一些变化更加能激励你。*
### 现在你可以开始了
现在一切都取决于你自己了。学习机器学习是一个持续的过程,所以开始的越早就会越好。 运用你手边的一切工具开始吧。 祝你好运,并确保让我们知道你的进步。
*这篇文章是我基于 Quora 问题([小白该如何开始机器学习](https://www.quora.com/How-does-a-total-beginner-start-to-learn-machine-learning/answer/Karlijn-Willems-1))给出的答案。*
---
作者简介:
Karlijn Willems,数据科学记者
---
via: <https://medium.freecodecamp.org/how-machines-learn-a-practical-guide-203aae23cafb>
作者:[Karlijn Willems](https://medium.freecodecamp.org/@kacawi) 译者:[Flowsnow](https://github.com/Flowsnow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,908 | 微软成为开源计划(OSI)白金赞助者 | http://news.softpedia.com/news/it-s-official-microsoft-becomes-premium-sponsor-of-the-open-source-initiative-517832.shtml | 2017-09-28T07:03:00 | [
"微软",
"OSI"
] | https://linux.cn/article-8908-1.html | 
旨在推行和保护开源软件的<ruby> 开源计划 <rp> ( </rp> <rt> Open Source Initiative </rt> <rp> ) </rp></ruby>(OSI)26 日[宣布](https://opensource.org/node/901),微软最近成为了其白金赞助者。
OSI 的主要目标是通过培训和合作,以及通过基础设施来促进开源技术和开源软件项目的发展。如果没有像 OSI 这样的开源组织,在专有软件仍然占据有利地位的今天,整个开源运动将没有机会成功地成为软件行业的一等公民。现在,微软的加入将进一步促进开源软件的发展。
OSI 总经理兼董事 Patrick Masson 说,“广义地说,这是 OSI 和开源软件运动的一个重要里程碑。我觉得没有比这个更能证明开源软件的成熟、生存能力、关注和成功,它不仅得到了微软的认可,而且是作为赞助商支持,以及他们作为贡献者参与这么多开放源项目和社区。”
### 微软是开源软件项目的领先贡献者
据 OSI 称,微软是一个领先的贡献者。它对开源社区的贡献是众所周知的,包括与 Canonical 合作为 Azure 定制 Ubuntu 内核,以及 Windows 上的 Linux 子系统(WSL)。
微软也与 Red Hat 和 SUSE 以及 Linux 基金会在各种项目上展开了合作,并与 OSI 成员 FreeBSD 基金会合作,在该公司的 Azure 云平台上支持其 FreeBSD 操作系统。通过加入 OSI,微软会更多地在其产品中集成开源软件。
| 301 | Moved Permanently | null |
8,909 | 现在可以将 Atom 编辑器变成 IDE 啦! | http://news.softpedia.com/news/you-can-now-transform-the-atom-hackable-text-editor-into-an-ide-with-atom-ide-517804.shtml | 2017-09-28T08:41:00 | [
"Atom"
] | https://linux.cn/article-8909-1.html | GitHub 和 Facebook 最近发起了一套工具集,它可以让你将你的可魔改 Atom 文本编辑器变身成为 IDE(集成开发环境),他们将这个项目叫做 Atom-IDE。

上周 [Atom 1.21 Beta 发布](http://blog.atom.io/2017/09/12/atom-1-20.html)之后,GitHub 引入了<ruby> 语言服务器协议 <rp> ( </rp> <rt> Language Server Protocol </rt> <rp> ) </rp></ruby>支持以集成其全新打造的 Atom-IDE 项目,它内置带有 5 个流行的语言服务器,包括 JavaScript、TypeScript、 PHP、Java、 C# 和 Flow,而更多的语言服务器正在赶来……
GitHub 的 Damien Guard [解释](http://blog.atom.io/2017/09/12/announcing-atom-ide.html)说:“该 IDE 的每个软件包都提供了基于底层的语言服务器的功能选择,并在打开它所支持的文件时激活。你至少需要安装两个包:Atom IDE 的用户界面和支持该语言的软件包。”

### 将 Atom 变成 Atom-IDE
如果你想要体验下 Atom 的 IDE 功能,在 Atom-IDE 项目的帮助下这很容易。你只需要在 Atom 的设置窗口中打开安装软件包对话框,并在其中搜索和安装 atom-ide-ui 软件包即可。
这将在你的 Atom 中呈现 IDE 界面,但是要成为一个完全可工作的 IDE ,你还需要安装你的语言服务器支持。目前,你可以从以下五种语言中选择:ide-typescript (TypeScript & JavaScript)、 ide-php (PHP)、 ide-java (Java)、 ide-csharp (C#)以及 ide-flowtype (Flow)。
当然,这些功能需要你安装 Atom 1.21 Beta 才能使用,它目前还是 Beta 版本,下个月才会发布正式版本。
| 301 | Moved Permanently | null |
8,912 | KDE Plasma 5 已经基本准备好移植到 FreeBSD 了 | https://www.phoronix.com/scan.php?page=news_item&px=Plasma-5-Desktop-Ready-FreeBSD | 2017-09-29T08:59:00 | [
"FreeBSD",
"KDE"
] | /article-8912-1.html | 
在 Linux 上的 [KDE Plasma 5 初次发布](/article-3411-1.html)三年之后,其对 FreeBSD 的支持正在逐渐成型。
一直领导该项工作的 KDE 贡献者 Adriaan de Groot 说,他现在已经将其 FreeBSD 桌面上切换到了 Plasma 。这标志着距离将 Plasma 5 软件包移植到 FreeBSD Ports 已经不远了,一旦移植完成,在 FreeBSD 桌面上运行 Plasma 5 就很容易了。
一些 KDE 应用已经移植到了 Ports,而另外一些还需要点工夫。不过有些令人头疼的问题,比如 KDE 4 和 KDE 5 的一些翻译和 Baloo 不能和平共处。此外,如果使用 FreeBSD 12-CURRENT 的话,有一些英特尔显卡在 KDE 下存在图形问题,而英伟达的显卡则没事。
KDE FreeBSD 小组希望这些包可以出现在 2017Q4 的正式 Ports 树之中,渴望尝鲜的伙伴们则现在就可以试试 [51 区](https://community.kde.org/FreeBSD/Setup/Area51)的体验包了。
更多关于 FreeBSD + KDE Plasma 5 的细节,可以看看 [Adriaan 的博客](https://euroquis.nl/bobulate/?p=1725)。
| null | HTTPSConnectionPool(host='www.phoronix.com', port=443): Read timed out. (read timeout=10) | null |
8,915 | 22 天迁移到公共云 | https://www.itnews.com.au/news/a-public-cloud-migration-in-22-days-454186 | 2017-09-29T08:23:00 | [
"迁移",
"公有云"
] | https://linux.cn/article-8915-1.html | 
>
> Lush 说这是可能的。
>
>
>
在不到一个月内将你的核心业务从一个公共云迁移到另一个公共云看起来像是一个遥不可及的目标,但是英国化妆品巨头 Lush 认为可以做到这一点。
去年九月 Lush —— 你也许知道它是那些糖果色的、好闻的沐浴和护肤产品背后的公司 —— 与已有的基础设施供应商 Acquia 的合同快要到期了。
Acquia 已经在 AWS 中托管了 Lush 的基于 Drupal 的电子商务环境好几年了,但该零售商想要退出合作。
根据 Lush 的首席数字官及公司的继承人 Jack Constantine(他的父母在 1995 年成立该公司)的说法,该安排是“尴尬”和僵硬的。
他今天在旧金山举行的 Google Cloud Next 会议上说:“我们不太满意那份合同,我们想看看我们还能做些什么。”
“那是一个非常封闭的环境,这使我们难于看清下一步做什么。”
“(我们) 可以再签署一年,在此期间想出一个有更多的控制权的长期计划,但是(我们)最终结束了这种挣扎。”
在淘遍市场后,Lush 目标放在 Google 的云平台上。该公司已经很熟悉 Google 了,已于 [2013 年底](https://cloud.googleblog.com/2013/12/google-apps-helps-eco-cosmetics-company.html)从 Scalix 迁移到 Google Apps(现称为 G Suite)上。
然而,只有几个月不到的时间进行迁移,要赶在 12 月 22 日现有合同截止前和圣诞节购物的关键时期前。
Constantine 说:“所以这不仅仅是有点关键的业务,我们要考虑高峰交易期,那时会有巨大的交易。”
Lush 摆脱了官僚主义意味着 Constantine 能够在选择供应商上快速决定。他说:“接着团队只要全力进行就行。”
他们还为这次迁移专门优先优化了用于迁移的 “一体化” Drupal 程序,而把 bug 修复推到以后。
Lush 12 月 1 日开始物理迁移,12 月 22 日完成。
团队“像其他迁移一样”遇到了挑战,Constantine 说:“你肯定会担心将数据从一个地方传输到另一个地方,你必须确保一致性,客户、产品数据等需要稳定。”
但是,这位 CDO 表示,让公司采用这个难以置信的紧张时间表是因为团队缺乏备选方案: 没有后备计划。
Constantine 说:“在截止日期前的一个星期,我的同事和我们的 Google 合作伙伴通了电话,对方对能否按期完成有点紧张,问我们的 Plan B 是什么,我的同事说:Plan B 就是让 Plan A 实现,就是这样”。
“当你抛出这样一个听起来有点难以实现的艰难截止日期时,(你需要)把注意力放在那些相信这个目标可以在这个时间范围内实现的人身上,而不是那些设置障碍、说‘我们必须要延期’的人。”
“是的,每个人都很紧张,但你完成了很多事情。你确实挺过来了,并且把它做成了。所有需要完成的事情,都完成了。”
现在的重点是将该电子商务应用转移到微服务架构,同时研究各种 Google 工具,如 Kubernetes 容器管理系统和 Spanner 关系数据库。
Constantine 说,该零售商最近也建立了使用 GCP 和 Android 的原型 POS 系统。
(题图:Lush's Oxford St, UK store. Credit: Lush)
---
via: <https://www.itnews.com.au/news/a-public-cloud-migration-in-22-days-454186>
作者:[Allie Coyne](https://www.itnews.com.au/author/allie-coyne-461593) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Migrating your core operations from one public cloud to another in less than one month may seem like a farfetched goal, but British cosmetics giant Lush reckons it can be done.

Last September Lush - who you might recognise as the company behind the candy-coloured, sweet smelling bath and body products - was nearing the end of its contract with its existing infrastructure provider Acquia.
Acquia had been hosting Lush's Drupal-based commerce environment out of Amazon Web Services for a few years, but the retailer wanted out.
The arrangement was 'awkward' and rigid, according to Lush chief digital officer and heir to the company throne Jack Constantine (his parents founded the business in 1995).
“We were in a contract that we weren’t really comfortable with, and we wanted to have a look and see what else we could go for,” he told the Google Cloud Next conference in San Francisco today.
“It was a very closed environment [which] made it difficult for us to get visibility of everything we wanted to be able to move over.
"[We] could either sign up for another year, and have that commitment and think up a long-term plan where we had more control ... but [we] would have ended up struggling."
After scouring the market Lush landed on Google’s Cloud Platform. The company was already familiar with Google, having migrated from Scalix to Google Apps (now known as G Suite) in [late 2013](https://cloud.googleblog.com/2013/12/google-apps-helps-eco-cosmetics-company.html).
However, it had less than a few months to make the migration, both in time for the end of its existing contract on December 22 as well as the critical Christmas shopping period.
“So it wasn’t just a little bit business critical. We were talking peak trade time. It was a huge deal,” Constantine said.
Lush’s lack of bureaucracy meant Constantine was able to make a quick decision on vendor selection, and “then the team just powered through”, he said.
They also prioritised optimising the "monolithic" Drupal application specifically for the migration, pushing back bug fixes until later.
Lush started the physical migration on December 1 and completed it on December 22.
The team came up against challenges “like with any migration”, Constantine said - “you have to worry about getting your data from one place to another, you have to make sure you have consistency, and customer, product data etc. needs to be up and stable”.
But the CDO said one thing that got the company through the incredibly tight timeframe was the team’s lack of alternatives: there was no fallback plan.
“About a week before the deadline my colleague had a conversation with our Google partner on the phone, they were getting a bit nervous about whether this was going to happen, and they asked us what Plan B was. My colleague said ‘Plan B is to make Plan A happen, that’s it’,” Constantine said.
“When you throw a hard deadline like that it can sound a bit unachieveable, but [you need to keep] that focus on people believing that this is a goal that we can achieve in that timeframe, and not letting people put up the blockers and say ‘we’re going to have to delay this and that’.
“Yes everybody gets very tense but you achieve a lot. You actually get through it and nail it. All the things you need to get done, get done.”
The focus now is on moving the commerce application to a microservices architecture, while looking into various Google tools like the Kubernetes container management system and Spanner relational database.
The retailer also recently built a prototype point-of-sale system using GCP and Android, which it is currently playing around with, Constantine said.
*Allie Coyne travelled to Google Cloud Next as a guest of Google* |
8,916 | 不要浪费时间写完美的代码 | https://dzone.com/articles/dont-waste-time-writing | 2017-09-29T09:15:31 | [
"编程",
"代码"
] | https://linux.cn/article-8916-1.html | 
系统可以持续运行 5 年、10 年甚至 20 年或者更多年。但是,特定的代码行的生命,即使是经过设计,通常要短得多:当你通过各种方式来迭代寻求解决方案时,它会有几个月、几天甚至几分钟的生命。
### 一些代码比其他代码重要
通过研究[代码如何随时间变化](http://www.youtube.com/watch?v=0eAhzJ_KM-Q),Michael Feathers 确定了[一个代码库的幂曲线](http://swreflections.blogspot.ca/2012/10/bad-things-happen-to-good-code.html)。每个系统都有代码,通常有很多是一次性写成,永远都不会改变。但是有少量的代码,包括最重要和最有用的代码,会一次又一次地改变、会有几次重构或者从头重写。
当你在一个系统、或者问题领域、体系结构方法中有更多经验时,会更容易了解并预测什么代码将一直改变,哪些代码将永远不会改变:什么代码重要,什么代码不重要。
### 我们应该尝试编写完美的代码么?
我们知道我们应该写[干净的代码](http://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882),代码应该一致、清晰也要尽可能简单。
有些人把这变成了极端,他们迫使自己写出[美丽](http://www.makinggoodsoftware.com/2011/03/27/the-obsession-with-beautiful-code-the-refactor-syndrome/)、优雅、接近[完美](http://stackoverflow.com/questions/1196405/how-to-keep-yourself-from-perfectionism-when-coding)的代码,[痴迷于重构](http://programmers.stackexchange.com/questions/43506/is-it-bad-to-have-an-obsessive-refactoring-disorder)并且纠结每个细节。
但是,如果代码只写一次而从不改变,或者如果在另一个极端下,它一直在改变的话,就如同尝试去写完美的需求和尝试做完美的前期设计那样,写完美的代码难道不是既浪费又没有必要(也不可能实现)的么?
>
> “你不能写出完美的软件。是不是感到受伤了?其实不必。把它作为生活的公理接受它、拥抱它、庆祝它。因为完美的软件不存在。在计算机的短暂历史中从没有人写过完美的软件。你不可能成为第一个。除非你接受这个事实,否则你最终会浪费时间和精力追逐不可能的梦想。”
>
>
> Andrew Hunt,[务实的程序员: 从熟练工到大师](https://pragprog.com/the-pragmatic-programmer)
>
>
>
一次性写的代码不需要美观优雅。但它必须是正确的、可以理解的 —— 因为绝不会改变的代码在系统的整个生命周期内可能仍然被阅读很多次。它不需要干净而紧凑 —— 足够干净就行了。代码中[复制和粘贴](http://swreflections.blogspot.com/2012/03/is-copy-and-paste-programming-really.html)以及其他小的偷懒做法是允许的,至少在一定程度上是这样。这些是永远不需要打磨的代码。即使周围的其他代码正在更改,这些也是不需要重构的代码(除非你需要更改它)。这是不值得花费额外时间的代码。
你一直在改变的代码怎么样了呢?纠结于代码风格以及追求最优雅的解决方案是浪费时间,因为这段代码可能会再次更改,甚至可能会在几天或几周内被重写。因此,不要每次改动时都[痴迷于重构](http://programmers.stackexchange.com/questions/43506/is-it-bad-to-have-an-obsessive-refactoring-disorder)代码,也不要仅仅因为代码“还可以更好”就去重构那些你并没有改动的代码。代码总是可以更好。但这并不重要。
重要的是:代码是否做了它应该做的 —— 它是正确的、可用的和高效的吗?它可以[处理错误和不良数据](http://swreflections.blogspot.com/2012/03/defensive-programming-being-just-enough.html)而不会崩溃 —— 或者至少可以[安全地失败](https://buildsecurityin.us-cert.gov/articles/knowledge/principles/failing-securely)吗?调试容易吗?改变是否容易且安全?这些不是关于“美”的主观判断,而是区分成功与失败的实际衡量标准。
### 务实编码和重构
<ruby> 精益开发 <rt> Lean Development </rt></ruby>的核心思想是:不要浪费时间在不重要的事情上。这应该提醒我们该如何编写代码,以及我们如何重构它、审查它、测试它。
为了让工作完成,只[重构你需要的](http://swreflections.blogspot.com/2012/04/what-refactoring-is-and-what-it-isnt.html) —— [Martin Fowler](http://martinfowler.com/articles/workflowsOfRefactoring/) 称之为<ruby> 机会主义重构 <rt> opportunistic refactoring </rt></ruby>(理解、清理、[童子军规则](http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule) )和<ruby> 准备重构 <rt> preparatory refactoring </rt></ruby>。足够使改变更加容易和安全,而不是更多。如果你不改变那些代码,那么它并不会如看起来的那么重要。
在代码审查中,只聚焦在[重要的事上](http://randomthoughtsonjavaprogramming.blogspot.com/2014/08/building-real-software-dont-waste-time.html)。代码是否正确?有防御机制吗?是否安全?你能理解么?改变是否安全?
忘记代码风格(除非代码风格变成无法理解)。让你的 IDE 处理代码格式化。不要争议代码是否应该是“更多的 OO”。只要它有意义,它是否适当地遵循这种或那种模式并不重要。无论你喜欢还是不喜欢都没关系。你是否有更好的方式做到这一点并不重要 —— 除非你在教新接触这个平台或者语言的人,而且需要在做代码审查时做一部分指导。
写测试很重要。测试涵盖主要流程和重要的意外情况。测试让你用最少的工作获得最多的信息和最大的信心。[大面积覆盖测试,或小型针对性测试](http://swreflections.blogspot.com/2012/08/whats-better-big-fat-tests-or-little.html) —— 都没关系,只要一直在做这个工作,在编写代码之前或之后编写测试并不重要。
### (不只是)代码无关
建筑和工程方面的隐喻对软件从未有效过。我们不是设计和建造几年或几代将保持基本不变的桥梁或摩天大楼。我们构建的是更加弹性和抽象、更加短暂的东西。代码写来是被修改的 —— 这就是为什么它被称为“软件”。
>
> “经过五年的使用和修改,成功的软件程序的源码通常完全认不出它原来的样子,而一个成功建筑五年后几乎没有变化。”
>
>
> Kevin Tate,[可持续软件开发](http://www.amazon.com/Sustainable-Software-Development-Agile-Perspective/dp/0321286081)
>
>
>
我们需要将代码看作是我们工作的一个暂时的手工制品:
>
> 有时候面对更重要的事情时,我们会迷信代码。我们经常有一个错觉,让卖出的产品有价值的是代码,然而实际上可能是对该问题领域的了解、设计难题的进展甚至是客户反馈。
>
>
> Dan Grover,[Code and Creative Destruction](http://dangrover.com/2013/07/16/code-and-creative-destruction/)
>
>
>
迭代开发教会我们来体验和检验我们工作的结果 —— 我们是否解决了这个问题,如果没有,我们学到了什么,我们如何改进?软件构建从没有止境。即使设计和代码是正确的,它们也可能只是一段时间内正确,直到环境要求再次更改或替换为更好的东西。
我们需要编写好的代码:代码可以理解、正确、安全和可靠。我们需要重构和审查它,并写出好的有用的测试,同时知道这其中一些或者所有的代码,可能会很快被抛弃,或者它可能永远不会被再被查看,或者它可能根本不会用到。我们需要认识到,我们的一些工作必然会被浪费,并为此而进行优化。做需要做的,没有别的了。不要浪费时间尝试编写完美的代码。
---
作者简介:
Jim Bird
我是一名经验丰富的软件开发经理、项目经理和 CTO,专注于软件开发和维护、软件质量和安全性方面的困难问题。在过去 15 年中,我一直在管理建立全球证券交易所和投资银行电子交易平台的团队。我特别感兴趣的是,小团队在构建真正的软件中如何有效率:在可靠性,性能和适应性极限限制下的高质量,安全系统。
---
via: <https://dzone.com/articles/dont-waste-time-writing>
作者:[Jim Bird](https://dzone.com/users/722527/jim.bird.html) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 410 | Gone | null |