On 02/13/2011 05:49 PM, Loren Merritt wrote:
>> +cglobal ac3_max_msb_abs_int16_%1, 2,2,5, src, len
>> +    pxor        m2, m2
>> +    pxor        m3, m3
>> +.loop:
>> +%ifidn %2, min_max
>> +    mova        m0, [srcq]
>> +    mova        m1, [srcq+mmsize]
>> +    pminsw      m2, m0
>> +    pminsw      m2, m1
>> +    pmaxsw      m3, m0
>> +    pmaxsw      m3, m1
>> +%else ; or_abs
>> +%ifidn %1, mmx
>> +    mova        m0, [srcq]
>> +    mova        m1, [srcq+mmsize]
>> +    ABS2        m0, m1, m3, m4
>> +%else ; ssse3
>> +    ; using memory args is faster for ssse3
>> +    pabsw       m0, [srcq]
>> +    pabsw       m1, [srcq+mmsize]
>> +%endif
>> +    por         m2, m0
>> +    por         m2, m1
>> +%endif
>> +    add         srcq, mmsize*2
>> +    sub         lend, mmsize
>> +    ja .loop
>> +%ifidn %2, min_max
>> +    ABS2        m2, m3, m0, m1
>> +    por         m2, m3
>> +%endif
>> +%ifidn mmsize, 16
>> +    mova        m0, m2
>> +    punpckhqdq  m0, m0
>
> movhlps

Ah, I thought there was some instruction like that, but I must have missed it when I searched for it. I'll send a new patch to change this line since the original patch was already committed.

Thanks,
Justin
http://ffmpeg.org/pipermail/ffmpeg-devel/2011-February/104023.html
[Updated Sep 24th 2014] Azure Redis Cache is now Generally Available with an availability SLA of 99.9%.

This new cache service gives customers the ability to use a secure, dedicated Redis cache managed by Microsoft. With this offer, you get to leverage the rich feature set and ecosystem provided by Redis, along with reliable hosting and monitoring from Microsoft. This post introduces the Azure Redis Cache and covers the key features it offers. We plan for this post to be the first of a series on Azure Redis Cache, so stay tuned and on the lookout for blog posts tagged 'Redis'. For a video tutorial that covers this content and more, please check out A look around Azure Redis Cache (Preview).

Redis

Redis.io describes Redis as "... an open source, BSD licensed, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets." Unlike traditional caches, which deal only with key-value pairs, Redis is popular for its highly performant data types. Redis supports running atomic operations on these types, like appending to a string, incrementing a value in a hash, or pushing an element onto a list.

Redis creator Salvatore Sanfilippo (@antirez) writes, "Redis has many different use cases. The simplest way to describe it is something between a traditional database and doing computations in memory. Redis exposes data structures that are accessed in memory via a set of commands". More details can be found in his blog post on typical Redis use cases.

Another key aspect of Redis's success is the healthy, vibrant open source ecosystem built around it. One reflection of this is the diverse set of Redis clients available across multiple languages. The final testament to the success of any technology is the customers who use it in their production scenarios, and here again Redis has had good success.

Redis on Windows

Microsoft Open Technologies has for the last few years maintained a port of Redis on Windows, helping make Redis available to Windows users.
Folks can follow, build, or contribute to the above project using its GitHub repository. You can also download the binaries for the Windows-compatible Redis server (redis-server.exe) and Redis client (redis-cli.exe) using the NuGet package.

Running Redis locally

This section talks about running the Redis server and client locally, which is an invaluable learning, development, and diagnostics resource, even for users primarily developing for Azure. However, a local Redis server or client is not needed to talk to Azure Redis Cache, and users can choose to skip this section.

Redis server

All that is required to launch a Redis server locally is running redis-server.exe. This brings up a local Redis server listening on the default port.

Redis command line client

Run redis-cli.exe to run the Redis client locally. Once the client launches it will automatically connect to the server on the default port. Typing ping on the client should get you a pong response from the server. You are now set to issue your first Redis commands:

set azureblog:firstset "Hello World"
get azureblog:firstset

The full set of Redis commands can be executed from redis-cli, including Redis health monitoring commands like INFO. In addition to being a great development and diagnostic tool, redis-cli can also be a great learning tool to help users get familiar with Redis commands.

Azure Redis Cache (Preview) Sizes, Pricing and SKUs

We are offering the Azure Redis Cache Preview in two tiers:

Basic – A single cache node (ideal for development/test and non-critical workloads)
Standard – A replicated cache (two nodes, a master and a replica)

Azure Redis Cache is available in the following sizes: 250MB, 1GB, 2.5GB, 6GB, 13GB, 26GB, 53GB, and in many regions. More details on pricing can be found on the Azure Cache pricing page.

Azure Redis Cache "Hello World"

In this section we will create an Azure Redis Cache and then have a C# application connect to it.
Getting started with the new Azure Redis Cache is easy. To create a new cache, sign in to the Azure Preview Portal, and click New -> Redis Cache. MSDN documentation provides detailed instructions on how to create an Azure Redis Cache. Once the new cache options are configured, click Create. It can take a few minutes for the cache to be created. After the cache has been created, your new cache has a Running status and is ready for use with default settings. The cache endpoint and key can be obtained from the Properties blade and the Keys blade, respectively, for your cache instance within the Azure Preview Portal. Once you've retrieved these you can create a connection to the cache using redis-cli.exe.

Redis C# client

Redis offers a diverse set of clients across multiple languages. For the example below we shall be using the StackExchange.Redis C# client. Type StackExchange.Redis into the Search Online text box, select it from the results, and click Install. The NuGet package downloads and adds the required assembly references for your client application to access the Azure Redis Cache with the StackExchange.Redis cache client.

In order to programmatically work with a cache, let's start by establishing a connection to the cache. First add the following to the top of any file from which you want to use the StackExchange.Redis client:

using StackExchange.Redis;

The connection to the Redis cache is managed by the ConnectionMultiplexer class. Once connected, you can obtain a database handle and read and write values with methods such as StringSet and StringGet.

In addition to viewing the data, users can also choose to set an alert when a certain metric crosses a user-defined threshold during a user-defined time interval. For example, an alert could notify the cache administrator when the cache is seeing evictions, which in turn might signal that the cache is running hot and needs to be upgraded to a larger size.

This concludes our quick lap around key Azure Redis Cache features.
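As a concrete sketch of the connection flow described above: the endpoint string below is a placeholder you would replace with your own cache endpoint and key from the portal, while ConnectionMultiplexer, GetDatabase, StringSet, and StringGet are the StackExchange.Redis members the text refers to.

```csharp
using StackExchange.Redis;

// Connect to the cache. Replace the endpoint and key placeholders with
// the values from the Properties and Keys blades in the portal.
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(
    "contoso5.redis.cache.windows.net,ssl=true,password=<your-key>");

// Get a handle to the default database, then write and read a value.
IDatabase cache = connection.GetDatabase();
cache.StringSet("azureblog:firstset", "Hello World");
string value = cache.StringGet("azureblog:firstset");
```

The same ConnectionMultiplexer instance is designed to be shared and reused across your application rather than created per request.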
For a sample application using Redis Cache check – MVC Movie App with Azure Redis Cache in 15 minutes The section below lists additional Azure Cache resources. Happy Caching! Learn more For more information, visit the following links: · Channel 9 video: Look around Azure Redis Cache (Preview) · Azure Cache Forum - For answers to all your Redis Cache questions
http://azure.microsoft.com/blog/2014/06/04/lap-around-azure-redis-cache-preview/
# LightIO

LightIO provides green threads for Ruby, like Golang's goroutines or Crystal's fibers. In LightIO they are called beams.

Example:

    require 'lightio'

    start = Time.now

    beams = 1000.times.map do
      # LightIO::Beam is a green thread; use it instead of Thread
      LightIO::Beam.new do
        # do some IO operations in the beam
        LightIO.sleep(1)
      end
    end

    beams.each(&:join)
    seconds = Time.now - start
    puts "1000 beams take #{seconds - 1} seconds to create"

LightIO ships Ruby-stdlib-compatible libraries under the LightIO or LightIO::Library namespace; these libraries provide the ability to schedule LightIO beams when IO operations occur. LightIO also provides a monkey patch that replaces Ruby's Thread with LightIO::Thread and also replaces the IO-related classes.

Example:

    require 'lightio'
    # apply monkey patch at the beginning
    LightIO::Monkey.patch_all!
    require 'net/http'

    host = 'github.com'
    port = 443
    start = Time.now

    10.times.map do
      Thread.new do
        Net::HTTP.start(host, port, use_ssl: true) do |http|
          res = http.request_get('/ping')
          p res.code
        end
      end
    end.each(&:join)

    puts "#{Time.now - start} seconds"

## You Should Know

In fact, the Ruby core team already plans to implement Thread::Green in the core language. If Ruby ships Thread::Green, this library will have no reason to exist. But as a crazy userland green-thread implementation it brings me lots of fun, so I will continue to maintain it, and you are welcome to use it. See the Wiki and Roadmap for more information.

LightIO is built upon nio4r and is heavily inspired by gevent and async-io.

## Installation

Add this line to your application's Gemfile:

    gem 'lightio'

And then execute:

    $ bundle

Or install it yourself as:

    $ gem install lightio

## Documentation

Please see the LightIO Wiki for more information.

## Discussion

The LightIO project's codebases, issue trackers, chat rooms and mailing lists are expected to follow the code of conduct.
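The beams described at the top of this README are cooperatively scheduled: a beam runs until it reaches a blocking IO call, then yields control so other beams can run. That hand-off can be sketched with Ruby's stdlib Fiber class. This is only an illustration of cooperative scheduling, not how LightIO is implemented internally:

```ruby
# Each fiber runs until it yields, then waits to be resumed -- the same
# cooperative hand-off a green-thread scheduler performs around IO calls.
fibers = 3.times.map do |i|
  Fiber.new do
    Fiber.yield "step 1 of fiber #{i}"  # hand control back to the caller
    "step 2 of fiber #{i}"              # final value, returned by the last resume
  end
end

log = []
fibers.each { |f| log << f.resume }  # run each fiber to its first yield
fibers.each { |f| log << f.resume }  # resume each fiber to completion
```

All "step 1" entries land in `log` before any "step 2" entry, because control is interleaved across the fibers instead of running each one to completion.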
https://www.ctolib.com/socketry-lightio.html
The following examples show a variety of ways to create futures and work with their eventual results.

- The import statements bring the code that's needed into scope.
- The ExecutionContext.Implicits.global import statement imports the "default global execution context." You can think of an execution context as being a thread pool, and this is a simple way to get access to a thread pool.
- A Future is created after the second comment. Creating a Future is simple; you just pass it a block of code you want to run. This is the code that will be executed at some point in the future.
- The Await.result method call declares that it will wait for up to one second for the Future to return. If the Future doesn't return within that time, it throws a java.util.concurrent.TimeoutException.
- The sleep statement at the end of the code is used so the program will keep running while the Future is off being calculated. You won't need this in real-world programs, but in small example programs like this, you have to keep the JVM running.

I created the sleep method in my package object while creating my future and concurrency examples, and it just calls Thread.sleep, like this:

def sleep(time: Long) { Thread.sleep(time) }

As mentioned, blocking is bad; you shouldn't write code like this unless you have to. The following examples show better approaches.

The next example demonstrates onComplete:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}
import scala.util.Random

object Example1 extends App {
    println("starting calculation ...")
    val f = Future {
        sleep(Random.nextInt(500))
        42
    }
    println()
    sleep(2000)
}

This example is similar to the previous example, though it just returns the number 42 after a random delay. The important part of this example is the f.onComplete method call and the code that follows it. Here's how that code works:

- The f.onComplete method call sets up the callback.
Whenever the Future completes, it makes a callback to onComplete, at which time that code will be executed.
- The Future will either return the desired result (42) or an exception.
- The println statements with the slight delays represent other work your code can do while the Future is being calculated.

A later example is similar, but its Future is wired to throw an exception instead.

How to use multiple Futures in a for loop

Here's a brief description of how this code works:

- The three calls to Cloud.runAlgorithm create the result1, result2, and result3 variables, which are of type Future[Int].
- When those lines are executed, those futures begin running, just like the web service calls in my stock market application.
- The for-comprehension is used as a way to join the results back together. When all three futures return, their Int values are assigned to the variables r1, r2, and r3, and the sum of those three values is returned from the yield expression and assigned to the result variable.
- Notice that result can't just be printed after the for-comprehension. That's because the for-comprehension returns a new Future, so result has the type Future[Int]. (This makes sense in more complicated examples.) Therefore, the correct way to print the result is with the onSuccess method.

Discussion

Although using a future is straightforward, there are also many concepts behind it. The following sections summarize the most important concepts.

A future and ExecutionContext

The following statements describe the basic concepts of a future and the ExecutionContext that it runs on.

Callback methods

The following statements describe the use of the callback methods that can be used with futures.

- Callback methods are called asynchronously when a future completes. (See the recipe "Creating Partial Functions" for more information on partial functions.)
- The onComplete, onSuccess, and onFailure methods each take a callback.

See Also

- The Scala Futures documentation
- These examples (and more) are available at my GitHub repository.
- As shown in these examples, you can read a result from a future; a promise is a way for some part of your software to put that result in there. I've linked to the best article I can find at alvinalexander.com/bookmarks/scala-futures-and-promises

The Scala Cookbook

This tutorial is sponsored by the Scala Cookbook, which I wrote for O'Reilly. You can find the Scala Cookbook at these locations:
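The onComplete callback discussed in the bullets above was elided from the example source. A hedged sketch in the style of these examples (Success and Failure come from the imports in Example1, and 42 is the value its Future returns) might look like:

```scala
f.onComplete {
  case Success(value) => println(s"Got the callback, value = $value")
  case Failure(e)     => println(s"The future failed: ${e.getMessage}")
}
```

Because onComplete takes a partial function over the future's outcome, both the success and failure cases are handled in one place.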
https://alvinalexander.com/scala/concurrency-with-scala-futures-tutorials-examples
following error: [WARNING] java.lang.reflect.InvocationTargetException What is JSON? Engineer" } Read more tutorials about JSON. Which programming language supports... can check the complete list at Read more tutorials about JSON... What is JSON? In this article we are discussing about the JSON which JSON-RPC JSON-RPC JSON-RPC-Java is a dynamic JSON-RPC implementation in Java. It allows you to transparently call server-side Java code from JavaScript with an included lightweight JSON-RPC Creating Message in JSON with JavaScript Creating Message in JSON with JavaScript... about the JSON in JavaScript's some basic concepts of creating a simple object... a message with JSON in JavaScript. In this example of creating message in JSON JSON array objects retrieval in javascript JSON array objects retrieval in javascript I am fetching some data... box is not populating any value, perhaps i am doing something wrong in json...("application/json"); response.getWriter().write(jsonObj.toString()); want to get above How can I initialize the JSONArray and JSON object with data? How can I initialize the JSONArray and JSON object with data? How can I initialize the JSONArray and JSONObject with data How to Make HTTP Requests Using Curl and Decoding JSON Responses in PHP How to Make HTTP Requests Using Curl and Decoding JSON Responses in PHP Make HTTP Requests Using Curl and Decoding JSON Responses in PHP  ... the required data. you can also decode the JSON result with json_decode() function C++Tutorials benefit to download the source code for the example programs, then compile... other tutorials, such as C++: Annotations by Frank Brokken and Karel Kubat...; The CPlusPlus Language Tutorial These tutorials explain the C++ language "JSONArray" example in Java ; In this part of JSON tutorial you... and String or the JSONObject.NULL objects. To have functionality of JSON in your java program you must have JSON-lib. 
JSON-lib also requires following " JSP Tutorials - Page2 to create and use custom error page in jsp This is detailed java code how...JSP Tutorials page 2 JSP Examples Hello World JSP Page... through the HTML code in the JSP page. You can simply use the <form>< Exception in Java - Java Tutorials you compile the above code, you will get an error something like...Commenting Erroneous Code & Unicode newline Character In this section, you will find an interesting problem related to commenting erroneous code JSP Tutorials Resource - Useful Jsp Tutorials Links and Resources JSP Tutorials  ...-building tools you normally use. You then enclose the code for the dynamic... pages containing a combination of HTML, Java, and scripting code. JSPs Code error Code error package trail; import...) read.close(); } } } While using this it shows error as: run... seconds) Hw can i correct this code???????? Basically, the Exception Ajax Code Libraries and Tools Ajax Code Libraries and Tools Code libraries and loots for the development of your Ajax... on running the code on your own server see Running the jMaki Sample Application AWT Tutorials ;BODY> <APPLET ALIGN="CENTER" CODE="AppletExample.class" width = "260" height Commenting out your code - Java Tutorials you compile the above code, you will get an error something like...Commenting Erroneous Code & Unicode newline Correct In this section, you will find an interesting problem related to commenting erroneous code error error whats the error.............. import java.util.Scanner; public class g { public static void main(String[] args) { Scanner s=new Scanner.... Try the code below : package roseindia.net; import java.util.Scanner; public error "+it); } } this is my program i am getting an error saying cannot find symbol class string... inside the method 'accept()'. Here is your modified code: import java.util. any one help me in alfresco technology - Development process and JSON materials.please any body can u responding my questions. 
Hi friend, Code to give idea bout JSON : Array Object is =>...:// Thanks Error - Struts Tutorials Select the following links...:// for latest tutorials I changed only..., K.Senthuran. Hi friend, Do some changes in code try Java Training and Tutorials, Core Java Training Java Training and Tutorials, Core Java Training Introduction to online Java tutorials for new java programmers. Java is a powerful object-oriented programming language with simple code structure. You can create learn jquery learn jquery is it possible to learn myself jquery,ajax and json Yes, you can learn these technologies by yourself. Go through the following links: Ajax Tutorials JSON Tutorials JQuery Tutorials Upload Code error on deploying Upload Code error on deploying on deploying the above code as it is said it is giving error that " No getter method for property thefile of bean org.apache.struts.taglib.html.BEAN " Error 500--Internal Server Error Submit Tutorials - Submitting Tutorials at RoseIndia.net Submit Tutorials Submitting Tutorials at RoseIndia.net is very easy. We welcome all members to submit their tutorials at RoseIndia.net. We are big tutorial web site error in code - JDBC error in code hi friends i had one problem when i am running the application of jdbc code it is getting that Exception in thread "main" java.lang.NoSuchMethodError: main plz send me the solution for that error   Spring 3.0 Tutorials with example code Spring 3.0 - Tutorials and example code of Spring 3.0 framework... of example code. The Spring 3.0 tutorial explains you different modules... download the example code in the zip format. Then run the code in Eclipse Catching Exceptions in GUI Code - Java Tutorials the given below code to identify the uncaught exception : import...); gui.show(); } } First we compile and run this code using javaw.exe... stay pressed. 
When you run the same code using java.exe you will get Error in Code - Development process Error in Code Hi; This is my code to get all d records from View_Service table but am getting error. I just copied this from net.I am doing mini project plz send me code to get records. What action should i mention code error - JSP-Servlet code error hii this program is not working becoz when the mouse... is error in this progrm. ss function describe() { window.status... complaint Hi friend, Do some changes in your code to solve Java - JDK Tutorials Java - JDK Tutorials This is the list of JDK tutorials which... should learn the Java beginners tutorial before learning these tutorials. View the Java video tutorials, which will help you in learning Java quickly. We Code Error - WebSevices Code Error How to insert checkbox values to mysql using php Hi friend, Code to help in solving the problem : Insert CheckBox Thanks Snippet Code Error Snippet Code Error The following is a snippet of code from a Java application which uses Connector/J to query a MySQL database. Identify possible problems and possible solutions. Connection cn= DriverManager.getConnection Welcome to the MySQL Tutorials MySQL Tutorial - SQL Tutorials  ... in this code we are using JavaScript for validating date in a specified format... Error In was developing a form in Visual Basic and got the error " HTML5 Tutorials HTML 5 Tutorials In this section we have listed the tutorials of HTML 5... HTML5 tutorials. Here are some of the best HTML 5 tutorials: HTML5 Tutorials HTML 5 Introduction Here you will learn BASIC Java - Java Tutorials case. It will compile and run without any error. But for better implementation... compiler error. public static void main(String args[ ]) It is necessary Java Code Color Function Error Java Code Color Function Error Java Code Color Function Error Java HashMap - Java Tutorials implementation without any code compatibility problem. But this is not possible... 
the hash code for the invoking map. boolean isEmpty( ) Returns Thread Deadlocks - Java Tutorials code from deadlock : Lock ordering In this technique all locks are obtainedQuery Tutorials, jQuery Tutorial jQuery Tutorials, jQuery Tutorial The jQuery tutorials listed here will help... to understand examples to making learning path easier. These jQuery tutorials.... In our jQuery tutorials we have covered following topics: Getting Java error code Java error code Java Error code are the set of error that occurs during the compile-time and Run-time. From java error we have given you a sample of code JEE 7 Tutorial support, JSON support, WebSockets and many other features for developing... tutorials explaining you the new features of JEE 7. You can use the latest version.... Here are we are giving Java EE 7 technologies tutorials and examples. Tutorials code code to create the RMI client on the local machine: import java.rmi.*; public...!"); System.out.println("Server: " + h.sayHello()); } catch(Exception e) { System.out.println("Error : "+e); } } } However, when the preceding code is executed it results Can i write JavaScript code in AjaxResponse Code ? Can i write JavaScript code in AjaxResponse Code ? Hai Every Dynamic's We can't write JavaScript code in Ajax Response Code.Why because it takes only html,json,xml response.I tried a lot to create js form in ajax response.It jsp code error - Java Beginners jsp code error Hi, I have a problem with following code... part. Is it possible to display a message box or alert box in jsp code??. plz help...; Hi friend , Try this code: /*out.println("Login Failed error in java code - Java Beginners error in java code Hi Can u say how to communication between systems in a network. Hi friend, A method for controlling an operation of a server system by a client system interconnected with the server code error - JSP-Servlet jsp code error hello, is anyone here who can solve my problem. what happen experts where r u? 
r u not able to do jsp code error - JSP-Servlet jsp code error hello, is anyone here who can solve my problem. what happen experts where r u? or u r not able to do jQuery - jQuery Tutorials and examples jQuery - jQuery Tutorials and examples  ... and jQuery Tutorials on the web. Learn and master jQuery from scratch. jQuery is nice piece of code that provides very good support for ajax. jQuery can be used
http://www.roseindia.net/tutorialhelp/comment/89283
How To Serve Flask Applications with uWSGI and Nginx on Ubuntu 18.04

Introduction

In this guide, you will build a Python application using the Flask microframework on Ubuntu 18.04. The bulk of this article will be about how to set up the uWSGI application server, launch the application, and configure Nginx to act as a front-end reverse proxy.

Prerequisites

Before starting this guide, you should have:

- A server with Ubuntu 18.04 installed and a non-root user with sudo privileges. Follow our initial server setup guide for guidance.
- Nginx installed, following Steps 1 and 2 of How To Install Nginx on Ubuntu 18.04.
- Familiarity with uWSGI, our application server, and the WSGI specification. This discussion of definitions and concepts goes over both in detail.

Step 1 — Installing the Components from the Ubuntu Repositories

Our first step will be to install all of the pieces that we need from the Ubuntu repositories. We will install pip, the Python package manager, to manage our Python components. We will also get the Python development files necessary to build uWSGI. With your virtual environment activated, use the pip command (not pip3).

First, let's install wheel with the local instance of pip to ensure that our packages will install even if they are missing wheel archives:

- pip install wheel

Next, let's install Flask and uWSGI:

- pip install uwsgi flask

You can use Flask to define the functions that should be run when a specific route is requested:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "<h1 style='color:blue'>Hello There!</h1>"

if __name__ == "__main__":
    app.run(host='0.0.0.0')

This basically defines what content to present when the root domain is accessed. Save and close the file when you're finished.

If you followed the initial server setup guide, you should have a UFW firewall enabled; allow access to port 5000 for testing. Next, create an entry point file that will tell the uWSGI server how to interact with the application.

Your application is now written with an entry point established. We can now move on to configuring uWSGI.
Testing uWSGI Serving

Let's test to make sure that uWSGI can serve our application. We can do this by simply passing it the name of our entry point. This is constructed by the name of the module (minus the .py extension) plus the name of the callable within the application. In our case, this is wsgi:app. Let's also specify the socket, so that it will be started on a publicly available interface, as well as the protocol, so that it will use HTTP instead of the uwsgi binary protocol. We'll use the same port number, 5000, that we opened earlier:

- uwsgi --socket 0.0.0.0:5000 --protocol=http -w wsgi:app

Visit your server's IP address with :5000 appended to the end in your web browser again. You should see your application's output again. When you have confirmed that it's functioning properly, press CTRL-C in your terminal window.

We're now done with our virtual environment, so we can deactivate it:

- deactivate

Any Python commands will now use the system's Python environment again.

Creating a uWSGI Configuration File

You have tested that uWSGI is able to serve your application, but ultimately you will want something more robust for long-term usage. You can create a uWSGI configuration file with the relevant options for this. Let's place that file in our project directory and call it myproject.ini:

- nano ~/myproject/myproject.ini

Inside, we will start off with the [uwsgi] header so that uWSGI knows to apply the settings. We'll specify two things: the module itself, by referring to the wsgi.py file minus the extension, and the callable within the file, app:

[uwsgi]
module = wsgi:app

Next, we'll tell uWSGI to start up in master mode and spawn five worker processes to serve actual requests:

[uwsgi]
module = wsgi:app
master = true
processes = 5

When you were testing, you exposed uWSGI on a network port. However, you're going to be using Nginx to handle actual client connections, which will then pass requests to uWSGI.
Since these components are operating on the same computer, a Unix socket is preferable because it is faster and more secure. Let's call the socket myproject.sock and place it in this directory. Let's also change the permissions on the socket and add the vacuum option, which cleans up the socket when the process stops:

[uwsgi]
module = wsgi:app
master = true
processes = 5
socket = myproject.sock
chmod-socket = 660
vacuum = true

The last thing we'll do is set the die-on-term option. This can help ensure that the init system and uWSGI have the same assumptions about what each process signal means. Setting this aligns the two system components, implementing the expected behavior:

[uwsgi]
module = wsgi:app
master = true
processes = 5
socket = myproject.sock
chmod-socket = 660
vacuum = true
die-on-term = true

Step 5 — Creating a systemd Unit File

Creating a systemd unit file will allow Ubuntu's init system to automatically start uWSGI and serve the Flask application whenever the server boots, and to manage the uWSGI processes. Remember to replace the username here with your username:

[Unit]
Description=uWSGI instance to serve myproject
After=network.target

Systemd requires that we give the full path to the uWSGI executable, which is installed within our virtual environment. We will pass the name of the .ini configuration file we created in our project directory. Remember to replace the username and project paths with your own information.

Finally, let's start the service, enable it so that it starts at boot, and check the status:

- sudo systemctl start myproject
- sudo systemctl enable myproject
- sudo systemctl status myproject

You should see output like this:

Output
● myproject.service - uWSGI instance to serve myproject
   Loaded: loaded (/etc/systemd/system/myproject.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2018-07-13 14:28:39 UTC; 46s ago
 Main PID: 30360 (uwsgi)
    Tasks: 6 (limit: 1153)
   CGroup: /system.slice/myproject.service
           ├─30360 /home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
           ├─30378 /home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
           ├─30379 /home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
           ├─30380 /home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
           ├─30381 /home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
           └─30382 /home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini

If you see any errors, be sure to resolve them before continuing with the tutorial.
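Assembling the pieces above, the complete unit file might look like the following sketch. The sammy username and myproject paths come from the status output shown in this step; the Group and Environment values are assumptions you should adapt to your own setup (the www-data group lets Nginx reach the 660-permission socket):

```ini
[Unit]
Description=uWSGI instance to serve myproject
After=network.target

[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myproject
Environment="PATH=/home/sammy/myproject/myprojectenv/bin"
ExecStart=/home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini

[Install]
WantedBy=multi-user.target
```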
Step 6 — Configuring Nginx to Proxy Requests

Our uWSGI application server should now be up and running, waiting for requests on the socket file in the project directory. Let's configure Nginx to pass web requests to that socket using the uwsgi protocol. Within the server block, add a location block that includes the uwsgi_params file and passes requests to the socket:

location / {
    include uwsgi_params;
    uwsgi_pass unix:/home/sammy/myproject/myproject.sock;
}

Save and close the file when you're finished. To enable the Nginx server block configuration, restart the Nginx process to read the new configuration:

- sudo systemctl restart nginx

Finally, let's adjust the firewall again. We no longer need access through port 5000, so we can remove that rule. We can then allow access to the Nginx server:

- sudo ufw delete allow 5000
- sudo ufw allow 'Nginx Full'

You should now be able to navigate to your server's domain name in your web browser, and you should see your application's output. If you encounter any errors, check the Nginx and uWSGI logs.

Step 7 — Securing the Application

To secure traffic to your server, let's obtain an SSL certificate using Let's Encrypt's Certbot on Ubuntu 18.04, for the sake of expediency.

First, add the Certbot Ubuntu repository:

- sudo add-apt-repository ppa:certbot/certbot

You'll need to press ENTER to accept. Next, install Certbot's Nginx package with apt:

- sudo apt install python-certbot-nginx

If you followed the Nginx installation instructions in the prerequisites, you will no longer need the redundant HTTP profile allowance:

- sudo ufw delete allow 'Nginx HTTP'

To verify the configuration, navigate to your server's domain once more, this time using https://.
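For reference, a minimal version of the Nginx server block described in Step 6 might look like this sketch (your_domain is a placeholder; the socket path matches the uWSGI configuration used earlier):

```nginx
server {
    listen 80;
    server_name your_domain www.your_domain;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/sammy/myproject/myproject.sock;
    }
}
```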
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uswgi-and-nginx-on-ubuntu-18-04
Opened 7 years ago
Closed 4 years ago

#7430 closed Bug (needsinfo)

Recursively iterating through a template's nodes

Description

Right now, for node in Template() will only yield the top-level nodes. Is this the way it is intended to work? I was expecting to get all nodes, recursively. The Template.__iter__ code looks like this:

def __iter__(self):
    for node in self.nodelist:
        for subnode in node:
            yield subnode

And Node.__iter__ does:

def __iter__(self):
    yield self

This looks like a recipe to allow nodes to yield their child nodes, without relying on the existence of the nodelist attribute. However, nodes like BlockNode and ExtendsNode do not implement __iter__ - only ForNode and IfNode seem to, and not in the way I would have expected (they don't yield self). Is this a bug, or by design?

Change History (7)

comment:1 Changed 7 years ago by miracle2k
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset

comment:2 Changed 7 years ago by miracle2k

comment:3 Changed 7 years ago by adrian
I don't recall whether this was by design. Does any Django code actually *use* this iter behavior?

comment:4 Changed 7 years ago by miracle2k
Any of Django's code itself? A quick grep didn't reveal anything obvious. Rendering works via recursive render() calls, so that part doesn't need it.

comment:5 Changed 7 years ago by ericholscher
- Needs tests set
- Triage Stage changed from Unreviewed to Design decision needed

comment:6 Changed 4 years ago by lukeplant
- Severity set to Normal
- Type set to Bug

comment:7 Changed 4 years ago by carljm
- Easy pickings unset
- Resolution set to needsinfo
- Status changed from new to closed
- UI/UX unset

Code changes require more practical justification than this. Feel free to reopen with demonstration of a specific use case, the code that is currently required to achieve it, and the code that would be possible with a change to Django.

I'm noticing that both ForNode and IfNode do not provide the nodelist attribute.
Is the idea that you should use Node.nodelist if it exists, and fall back to the iterator otherwise? Wouldn't it make sense then to have the Node class implement a generic iterator that will yield the items in self.nodelist?
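A generic recursive iterator of the sort suggested in the comments could look roughly like this. This is only a sketch with simplified stand-in classes, not Django's actual Node implementation; the child_nodelists hook is an assumption for illustration:

```python
class Node:
    # Simplified stand-in for django.template.Node. Real template nodes
    # may keep child nodes in one or more NodeList attributes; here a
    # node declares them via child_nodelists.
    child_nodelists = ()

    def __iter__(self):
        # Yield self first, then recurse into any child node lists,
        # so iterating a tree flattens it depth-first.
        yield self
        for nodelist in self.child_nodelists:
            for node in nodelist:
                yield from node


class TextNode(Node):
    def __init__(self, text):
        self.text = text


class ForNode(Node):
    def __init__(self, body):
        self.body = body  # a plain list of child nodes

    @property
    def child_nodelists(self):
        return (self.body,)


# Usage: recursively flatten a small tree of nodes.
tree = ForNode([TextNode("a"), ForNode([TextNode("b")])])
flat = list(tree)
# flat contains the outer ForNode plus all of its descendants.
```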
https://code.djangoproject.com/ticket/7430
#include <itkEventObject.h>

Inheritance diagram for itk::EventObject.

Member documentation:

- Constructor and copy constructor. Note that these functions will be called when children are instantiated. (Definitions at lines 63 and 65 of itkEventObject.h.)
- Virtual destructor needed. (Definition at line 68 of itkEventObject.h.)
- Check if a given event matches or derives from this event.
- Return the StringName associated with the event.
- Create an Event of this type. This method works as a factory for creating events of each particular type.
- Print Event information. This method can be overridden by specific Event subtypes. The default is to print out the type of the event.
http://www.itk.org/Doxygen38/html/classitk_1_1EventObject.html
Key Takeaway Points and Lessons Learned from QCon San Francisco 2014

At the start of November around 1,200 attendees descended on the Hyatt Regency in San Francisco for the eighth annual QCon in the city. The conference featured thought-provoking and engaging keynotes from security guru Bruce Schneier, 2013 Turing Award winner Leslie Lamport, Google Engineering Director Melody Meckfessel, and, for the first time at QCon, a set of 6 mini keynotes covering topics such as Password Security, Boosting Cognition, Skeletal I/O Tracking, VCs, Hacking Baby Monitors, and the Mythology of Big Data. QCon SF attendees - software engineers, architects, and project managers from a wide range of industries including some prominent Bay-area companies - attended 108 technical sessions across 6 concurrent tracks, 19 in-depth tutorials, and facilitated open spaces, and, as at last year's conference, had instant access to all filmed presentations from the event on InfoQ, with around half of all attendees making use of that facility. This article summarizes the key takeaways and highlights from QCon San Francisco 2014 as blogged and tweeted by attendees. Over the course of the next 4 months, InfoQ will be publishing most of the conference sessions online, including 10 video interviews that were recorded by the InfoQ editorial team. The publishing schedule can be found on the QCon San Francisco web site, and you can see numerous photos from QCon on Flickr.
Keynotes
- How DevOps and the Cloud Changed Google Engineering
- Programming Should Be More Than Coding
- Security Keynote

Applied Machine Learning and Data Science
- Explore Your Data: the Fundamentals of Network Analysis
- Inside Pandora: Ten Years After
- My Three Ex's: A Data Science Approach for Applied Machine Learning
- Putting the Magic in Data Science @ Facebook
- Too Big to Fail: Running A/B Experiments when You're Betting the Bank

Architectures You've Always Wondered About
- Etsy Search: How We Index and Query 26 Million One-Of-A-Kind Items
- Software Development & Architecture @ Linkedin
- Tumblr - Bits to GIFs
- You Won't Believe How the Biggest Sites Build Scalable and Resilient Systems!

Continuous Delivery: From Heroics To Becoming Invisible
- Continuous Delivery for the Rest of Us
- How We Learned to Stop Worrying and Start Deploying the Netflix API Service
- The Art of the Builds

Engineering Culture
- Building Conscious Engineering Teams
- Careevolution: Building a Company through Ambiguity, Judgment, Trust, and Worklife Fusion
- Growing Up Spotify
- The Evolution of Engineering Culture: Oh, the Places We've Been

Engineering for Product Success
- Engineering the Resolution Center to Drive Success at Airbnb
- Evolution of the PayPal API Platform: Enabling the Future of Money
- Experimenting on Humans
- How Ebay Puts Big Data and Data Science to Work
- Metrics-Driven Prioritization

Modern CS in the Real World
- The Evolution of Testing Methodology at AWS: from Status Quo to Formal Methods with TLA+
- The Quest for the One True Parser

Reactive Service Architecture
- Comparing Elasticity of Reactive Frameworks
- Concurrency at Large-Scale: the Evolution to Reactive Microservices
- Reactive Programming with Rx

Scalable Microservice Architectures
- Building and Deploying Microservices with Event Sourcing, CQRS and Docker
- Organizing Your Company to Embrace Microservices
- Scalable Microservices at Netflix. Challenges and Tools of the Trade

Keynotes

Twitter feedback on mini-keynotes included:

@techbint: You're storing passwords wrong! Use bcrypt. Friends don't let friends use sha1 passwords. #qconsf
@gravanov: #qconsf "I'm not Russian, I'm from Armenia, so it's ok to take security advise from me" LOL
@techbint: Learning how to hack baby monitors/ web cams. Terrifying how easy it is! #qconsf
@philip_pfo: Don't focus on making tech better, focus on how you can make _someone_ better with technology #qconsf #lightningtalks
@techbint: Don't automate the easy things, change behaviour. Lock the Xbox until surfaces are tidy. Lock iPad until I exercise. @jsoverson #qconsf

How DevOps and the Cloud Changed Google Engineering

Philipp Garbe attended this session. Andrew Hao attended this keynote:

- The world is changing. Mobile is up and coming. Cloud platform is a huge market opportunity.
- Google DevOps platform: supports fast builds, caching.
- Every day, Google cranks out 800K builds, 2 petabytes of data…
- Internally: single monolithic code tree
- Code is open to any engineer
- Variety of languages
- Mandatory code review.
- Need good tools to get information about system.
- Deployment resource utilization

Twitter feedback on this keynote included:

@philip_pfo: Biggest cloud benefit - focusing on what you do best, rather than everything else. @mmeckf #qconsf #keynote
@aerabati: Innovation at Google - Tolerate failure and encourage risk-taking! via @mmeckf #qconsf
@charleshumble: Google's cloud infrastructure encourages internal experimentation. @mmeckf #qconsf
@aerabati: Google cloud platform is built on same infrastructure that powers Google! #qconsf #cloud
@aerabati: Devops - Writing code with a better understanding of how that code is operating in production! #qconsf
@aerabati: Google does 800k builds per day! #qconsf
@rchakra1: Everyday at @Google by @mmeckf #qconsf #devops
@dmarsh: Encouraging a testing culture helps us take more risks because testing has our back. @mmeckf #qconsf keynote
@charleshumble: Google has a single monolithic source tree, as opposed to distributed repos - simplifies global refactoring of code. @mmeckf #QConSF
@arburbank: Building better tests allows engineers to move faster. #qconsf
@rundavidrun: .@Google has single code tree, gives devs access to majority of code, allowing easy sharing (and critique) of code across projects.#qconsf
@arburbank: No matter how fast your build is, it's never fast enough. Users who once marveled at sub-minute builds soon ask for sub-second time. #qconsf
@jasondanales: $600 of cloud storage can store ALL of the worlds music. @mmeckf #qconsf keynote
@techbint: We overestimate what can get built in one year and underestimate what can get built in ten. @mmeckf #qconsf
@charleshumble: You think of code in a completely different way when you are up in the middle of the night dealing with an outage. @mmeckf #QConSF
@aerabati: A report that compares performance from release to release will be really nice to have! Google has it. #qconsf
@philip_pfo: Prod debugging is hard to scale. Log scraping doesn't scale; time series trends, low overhead tracing does. @mmeckf #qconsf #devops
@stonse: Google devs can attach to a running host and debug/inspect variables via @mmeckf #qconsf. Possible to block req thread via a breakpoint? :)

Programming Should Be More Than Coding

Andrew Hao attended this keynote:

- We should be thinking harder before we start coding. Clear thinking can prevent errors. Fuzzy/wishful thinking can't.
- How do you think clearly? Write.
- Specifications: help us think clearly.
- Think like a scientist!
- Best place to eliminate code is to think about what you need to do and what you don't need to do.
- Engineers starting to use TLA+ to describe system behaviors - debug 6 lines of specs better than debugging 850+ LOC.
- When you write specs, you should write formal specs.
- Before you write code, write spec.
- Thus, you write better programs.
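The keynote's argument is about formal specifications such as TLA+, but the spec-before-code habit can be loosely illustrated even in plain Python, where the "spec" is an executable set of properties checked against the implementation. This is an analogy only, not TLA+, and the dedupe example is invented for illustration:

```python
import random


def spec_dedupe(xs, ys):
    """The 'spec': properties any correct dedupe of xs must satisfy."""
    # Every element of the output came from the input.
    assert all(y in xs for y in ys)
    # No duplicates remain.
    assert len(ys) == len(set(ys))
    # Order of first occurrences is preserved.
    assert ys == sorted(set(ys), key=xs.index)


def dedupe(xs):
    """Implementation, written after (and checked against) the spec."""
    seen = set()
    out = []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out


# Check the implementation against the spec on many random inputs.
for _ in range(100):
    xs = [random.randrange(5) for _ in range(random.randrange(10))]
    spec_dedupe(xs, dedupe(xs))
```

Writing spec_dedupe first forces you to state what "remove duplicates" means before a single line of implementation exists, which is the spirit (if not the rigor) of the keynote's advice.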
Twitter feedback on this keynote included:

@arburbank: No one just starts writing code and hopes it will implement a web browser. But we still don't spend enough time thinking up front. #qconsf
@arburbank: Clear thinking can prevent errors. Fuzzy thinking can't. And wishful thinking actually *causes* errors. #qconsf
@dtunkelang: Thinking is not just for hard programming problems. Thinking is necessary to determine whether the problem is easy or hard. #qconsf
@aerabati: Best way to learn something is to teach it - 'Programming is not just coding - talk!' - Leslie Lamport keynote on 2nd day of #qconsf
@hbrumleve: Leslie Lamport: incrementing an integer by one is harder than you think. #qconsf
@crichardson: Writing is nature's way of letting you know how sloppy your thinking is.” —Richard Guindon #qconsf
@philip_pfo: Think before (decide what, decide how) then do (implement). Clearer thinking will mean easier implementing. Leslie Lamport #qconsf
@jpetazzo: Productivity: "I wrote N lines of code today." Greater productivity: "I elimitated N lines of code today." — Leslie Lamport at #qconsf
@dtunkelang: The best way to eliminate code is in the spec. --Leslie Lamport #qconsf
@philip_pfo: Productivity isn't increasing code volume, it's reducing complexity without reducing functionality. Leslie Lamport #qconsf
@markmadsen: ~25 years ago at Bell, Mark Ardis introduced formal math to programmers by hiding it in programming language syntax, just like TLA+ #qconsf
@aerabati: 'It was a lot of easier to understand and debug 6 rules than 850 lines of code those rules produced!' - Leslie Lamport at #qconsf
@andrewhao: Leslie Lamport: write specs before you write code (formal specs in TLA+). How does this jive with Agile emergent design? #qconsf
@rundavidrun: The less well you understand what something is supposed to do, the more valuable writing a spec becomes. Lamport at #qconsf
@markmadsen: Every time code is patched, it becomes a little uglier, harder to understand, and harder to maintain Lamport at #qconsf
@vpothnis: if you dont have a spec, every piece of code you write is a patch - #LesllieLamport #qconsf keynote

Security Keynote

Andrew Hao attended this keynote:

- We don't have control over our infrastructure. Handing more data to the cloud
- The future: we might see more vendor control over our OSes — app store updates, etc. Windows 8, Yosemite look more like mobile OSes.
- Nowadays, there are a wider variety of attack(er)s. Hackers now have standard tools.
- Advanced persistent threats: Hackers just trying to grab a block of credit cards. Targeted threats: politically motivated hacking.
- IP theft as well as hacktivism
- "The entire supply chain is now complete." Stealing credentials –> turning account control

Increased gov't involvement in cyberstates:
- infrastructure
- legal involvement
- cyber arms race: people building offensive weapons

We are risk averse when it comes to gains and we are risk seeking when it comes to losses. Evolutionary theory describes: short, guaranteed small gain is better for survival. This makes security hard to sell: we got by OK last time. Why do we need security now? This is the boss flipping the coin. If we get hacked, we all have to leave and get a new job. The economics for prevention are hard to sell. See analogy to selling insurance.

Principle: remove humans from the system as much as possible. Problem: humans cannot be completely eliminated from the process.

Tools and OODA:
- OODA loops: observe, orient, decide, act. Air Force captain. The faster you can eval this loop, the more advantage you have.
- What I want in IR (Instant Response) is tools to get into the attacker's OODA loops.

Twitter feedback on this keynote included:

@charleshumble: We are loosing control of our data. Bruce Schneier #QConSF
@dtunkelang: . @schneierblog: Attacks getting more sophisticated. Cyberwar is hype but escalation/commodification of tactics/capabilities is real #qconsf
@dtunkelang: . @schneierblog: Nation-states are big players in hacking game -- but they're increasingly hacking companies rather than each other. #qconsf
@dserodio: I'm still not on Facebook, but it's affecting my social life - Bruce Schneier #qconsf
@christianralph: #QConSF humans are risk averse when it comes to gains. And risk favourable to loss. Which makes selling security difficult @schneierblog
@dtunkelang: . @schneierblog: Conventional IT security wisdom: people are part of the problem, not part of the solution. Keep humans out of loop. #qconsf
@dtunkelang: . @schneierblog: But humans *have* to be in the loop for instant response, since that's when automated security tools break down. #qconsf
@rchakra1: Protection, Detection and Response are three key aspects of security @schneierblog #qconsf
@rundavidrun: .@Bruce_Schneier at #qconsf: we need real-time technology/tools that support, not replace people, in order to make them more effective.
@dtunkelang: . @schneierblog wants IT security tools to support the OODA loop: observe, orient, decide, act. #qconsf

Applied Machine Learning and Data Science

Explore Your Data: the Fundamentals of Network Analysis by Amy Heineike

Twitter feedback on this session included:

@dtunkelang: How to get started with network viz: force-directed layout, color for community detection, node size based on degree --@aheineike #qconsf
@arburbank: @aheineike points out that companies are partnering with universities on machine learning - as uncovered by network analysis #qconsf
@arburbank: Too much ink can get in the way of learning information -@aheineike #qconsf
@arburbank: To simplify network viz: remove weak nodes, dominant nodes, and weak edges. Most people don't tweet. Some never stop. @aheineike #qconsf
@dtunkelang: Filtering out weak data, varying layout algorithms, removing outliers -- all tactics to deriving value from network viz --@aheineike #qconsf
@dtunkelang: More relevant reading for @aheineike's talk: algorithms for detecting community structure in networks. #qconsf
@arburbank: Cytoscape, gephi among the tools you can use for network viz if you don't work at Quid. @aheineike #qconsf

Inside Pandora: Ten Years After by Oscar Celma

Twitter feedback on this session included:

@seanjtaylor: .@ocelma: Pandora keeps a 1% holdout set of listeners to measure effect of accumulated improvements. #qconsf best practice for experiments.
@seanjtaylor: .@ocelma: Pandora measures per segment treatment effects to see which model/product changes work for which groups. #qconsf #experiments

My Three Ex's: A Data Science Approach for Applied Machine Learning

Alex Handy of SD Times attended this session: Daniel Tunkelang advocated building machine learning with traceable and explainable building blocks first, then optimizing later. As with traditional software development, building machine learning algorithms is an iterative process, said Tunkelang. He added that building a complex system from the start will leave you with only one way to measure its effectiveness: the accuracy of the data that comes out. "Accuracy gives you a very coarse way of evaluating an algorithm," he said. "It's very much like debugging code. I've gotten a lot of value from linear regression and decision trees. The nice thing about these is that they very clearly favor explainability. The most valuable thing about explainability is that you don't have to entirely trust your training data if you can debug in this way. But if you have a black box approach, the only indicator you get is that it's not as accurate as you'd like."

Daniel Tunkelang summarized his session:

Express: Understand your utility and inputs.
- Be careful how you define precision. - Account for non-uniform inputs and costs. - Stratified sampling is your friend. - Express yourself in your feature vectors. Explain: Understand your models and metrics. - Accuracy isn’t everything. - Less is more when it comes to explainability. - Don’t knock linear models and decision trees! - Start with simple models, then upgrade. Experiment: Optimize for the speed of learning. - Kiss lots of frogs: experiments are cheap. - But test in good faith – don’t just flip coins. - Optimize for the speed of learning. - Be disciplined: test one variable at a time. Twitter feedback on this session included: @arburbank: Defining the objective function may be the most important part of setting up your machine learning problem. @dtunkelang #qconsf @arburbank: The precision metric in your objective function should include frequency weighting - but needs to value rare things too. #qconsf @dtunkelang @aerabati: This is the thing with grownups - they always need explanations :) via @dtunkelang #qconsf @ocelma: #qconsf @dtunkelang talking about ML explainability. Black boxes like deep learning don't help much @aerabati: Machine learning algorithms wont tell you that your training data is systematically skewed! via @dtunkelang #qconsf @arburbank: Even if you don't use linear regression or decision trees as your final model, they can be valuable when iterating. @dtunkelang #qconsf @arburbank: Kiss more frogs faster: better to run many experiments than to spend too much time coming up with one perfect idea. @dtunkelang #qconsf @dtunkelang: It's true that iterating with small experiments takes time. But big changes will be slower than you anticipated. --@arburbank #qconsf @COlivier: @dtunkelang on experimenting with ML: "No matter how brilliant you are, your brilliance is often no competition for volume" #qconsf @aerabati: Optimize your search experiments for the speed of learning! 
#failfast via @dtunkelang #qconsf @christianralph: #QConSF Explainability trumps accuracy, start with simple models (linear regression,decision trees) before upgrading @dtunkelang @christianralph: #QConSF corollary to that. Simple models are easier to explain but less accurate. Iterate towards better accuracy @dtunkelang Putting the Magic in Data Science @ Facebook by Sean Taylor Twitter feedback on this session included: @arburbank: Data work is basically counting stuff, figuring out the denominator, and making that process reproducible. -@seanjtaylor at #qconsf @arburbank: data science: finding the niche between analysts and creeping people out in a way that surprises and delights. -@seanjtaylor at #qconsf @dtunkelang: Data scientists should be more than accountants. Data science should drive decision making. --@seanjtaylor #qconsf @arburbank: continuum from "maybe this'll be useful" to "I built it; only I can use it" to "people can use this!" is data science @seanjtaylor #qconsf @dtunkelang: Good question from @seanjtaylor: what is the most magical thing you've seen in data science? Answers span the entire stack. #qconsf @arburbank: Data can't do anything. *People* do things. Communicating with data is a key part of your job. -@seanjtaylor #qconsf @markmadsen: Surprise (to me) basis layer for data science underlying amazing tools at Facebook is SQL access - Hive, Presto, Scuba #qconsf @dtunkelang: Magic: The Gathering of Data -- awesome plug by @seanjtaylor for the value of data collection from novel sources. #qconsf @markmadsen: Novel sources of data are a source of magic. Data collection infrastructure is a key capability #qconsf @dtunkelang: Making your own quality data is better than being a data alchemist. -- @seanjtaylor #qconsf @arburbank: Facebook uses document clustering, dimensionality reduction to understand huge volume of incoming bug reports. 
-@seanjtaylor #qconsf @dtunkelang: Another "trick" @seanjtaylor uses for estimating probabilities of rare and new events: James-Stein estimator #qconsf @dtunkelang: Distributions are much more powerful than point estimates because they communicate uncertainty, so use a bootstrap. --@seanjtaylor #qconsf @arburbank: Use the bootstrap to get a sampling distribution on any statistic. Always use confidence intervals. -@seanjtaylor #qconsf @dtunkelang: More about boostrapping: #qconsf @arburbank: Everything is linear if you use enough features. -@seanjtaylor #qconsf @arburbank: getting data science to be adopted #1: reliability. how do you anticipate when the data will break your product? -@seanjtaylor #qconsf @arburbank: getting data science to be adopted #2: latency & interactivity. Test more things per second if your system moves fast. @seanjtaylor #qconsf @arburbank: yes! the easier you make it to get answers, the more questions people will ask. @Pinterest's data-driven culture exactly. #qconsf @arburbank: getting data science to be adopted #4: unexpectedness. Is your data telling people something they don't already know? -@seanjtaylor #qconsf @dtunkelang: Great idea from @seanjtaylor: test-driven data science -- test-driven development for data scientists. #qconsf @dtunkelang: Speed doesn't just provide more answers per second. It gets people to ask more questions. --@seanjtaylor #qconsf @dtunkelang: Show people the most interesting things. --@seanjtaylor Also see @avinash's 2012 #strataconf keynote #qconsf Too Big to Fail: Running A/B Experiments when You're Betting the Bank Twitter feedback on this session included: @dtunkelang: It's true that iterating with small experiments takes time. But big changes will be slower than you anticipated. --@arburbank #qconsf @dtunkelang: Companies need to recognize & reward incremental change vs over-rewarding big changes. Let's call it "faster learning". 
--@arburbank #qconsf @aerabati: Recognize incremental change is faster than big change! Incremental change should actually be called "faster learning"! #qconsf @dtunkelang: Do incremental changes constrain us to local maxima? Maybe, but big changes can be big losses, not just big wins. --@arburbank #qconsf @dtunkelang: Making a big change all at once is a gamble that things will be better, and it masks negative components of the change. --@arburbank #qconsf @dtunkelang: Argument against running an experiment: "we've already told the press about it". I've lived through that one. :-) from @arburbank at #qconsf @dtunkelang: Opt-in features nice but no substitute for controlled, randomized experiments. Esp since opt-in users are more engaged. --@arburbank #qconsf @dtunkelang: You need to set the goal posts for ship decisions in advance. Otherwise you'll always be tempted to move the goal posts --@arburbank #qconsf @aerabati: Purpose of experimenting is to figure out when we/you/companies can ship! via @aburbank #qconsf @dtunkelang: Shipping a redesign doesn't mean that you're done. Ship and keep iterating. --@arburbank #qconsf Architectures You've Always Wondered About Etsy Search: How We Index and Query 26 Million One-Of-A-Kind Items Twitter feedback on this session included: @aerabati: Etsy dont do/need real time indexing at this point. Good example of not over engineering when not needed! #qconsf @philip_pfo: Downside of microservices - silo risk. Interesting that etsy favors larger services for eng culture benefits. #qconsf @aargard @aerabati: Etsy likes to kill not-so-simple architectures eg: turbocharging solr index proj #qconsf Software Development & Architecture @ Linkedin Andrew Hao attended this session: Startup mode - Systems built during startup lifetimes can’t stand public growth. - 300+ code projects in a single SVN repo - Testing code locally meant deploy every service locally! 
- 24 hrs of integration testing – you find you broke master
- 3 hours writing code, you check it in, 3 days to commit it.

LinkedIn Search
- LinkedIn archives several indices - each search engine has its own search score
- LI tries to guess your intent when you search and query the right indices.
- query rewriting: "sr swe" can be "senior software eng" or "sr soft engineer"
- Galene Architecture: built on lucene. real time indexer, updates indices between hadoop builds

Twitter feedback on this session included:

@aerabati: Partition your user base across the data centers! #linkedinarchitecture #qconsf
@aerabati: Intent detection done by SearchFederator #LinkedinSearch #qconsf
@aerabati: So, what does Linkedin use for their search federation? Galene! #qconsf

Tumblr - Bits to GIFs by John Bunting

Andrew Hao attended this session:

- sharding: jetpants
- php, haproxy, hdfs
- s3 for image storage
- Scala services: 90% of the time. Built in a world where Go didn't exist
- finagle –> colossus
- Thrift
- Protobuf
- hbase stores notifications
- firehose
- job queue: gearman
- Varnish caching: using DJB2 hashing for consistent hashes – balanced against

Deployment
- deployment used to take an hour for 500 servers
- DUI is deployment tool
- Need 2 people from your team + staff/sr engineer
- get into an IRC deploy queue
- Deploy canary first
- Tumblr deploys over 40x a day
- Fibr is a graphing tool, so everyone can feel empowered to see root problems
- your engineers need to be empowered to understand tools.
- func is installed on any server at any point in time. "give me the server"

You Won't Believe How the Biggest Sites Build Scalable and Resilient Systems! by Philip Fisher-Ogden & Jeremy Edberg

Andrew Hao attended this session:

- 20% of org is dedicated to maintaining platform — internal PaaS
- Microservice architectures organized around teams
- Automate everything you can: tools, builds, images.
- You want your revenue to grow faster than your org
- Continuous integration – code deployed as soon as possible.
- Self service: Try to make everything as self-service as possible. Make it easy for engineers to set up their own deployments, their own monitoring.
- Monitoring: Do it from the beginning! Emitting events. See Etsy. Open sourcing the system soon. Make sure that they own the dashboard, so they can
- Break things on production: Embrace failure. See: Netflix Simian Army.
- Incident review: Blameless culture
- Caching. Huge way to give users best experience. Write data to cache + queue.
- Cassandra: used for consistent key hashing multi data center

Think of SSD as cheap ram, not expensive disk.

Lambda/Kappa Architecture
- Lambda: Idea of having two parallel data streams. One is accurate, slow, one is approximate, fast. Have queries combine the two.
- Kappa: Always use the correct stream

Twitter feedback on this session included:

@aerabati: Make it easy to for your teams to Monitor their own stuff. Netflix does this! #qconsf #selfservice
@aerabati: Do you break things in production to test resiliency of your own systems? #architecturetalk #qconsf
@aerabati: Shared state should be stored in a shared service! #notetoself #qconsf
@aerabati: Monitoring should be a firstclass citizen and not a afterthought! #qconsf
@rchakra1: Netflix gen4 architecture -> Stateless #microservices #qconsf
@flaviavanharten: Netflix did 3 redesigns in 6 years. Inspiring talk on scalable architectures at #qconsf
@portixol: .@philip_pfo at #QConSF Netflix organise their teams like microservices, small and with an API contract between teams
@QConSF: .@jedberg @philip_pfo on design patterns the most successful companies use to go from nothing to billions. #qconsf

Beyond Hadoop

Unified Big Data Processing with Apache Spark

Twitter feedback on this track included:

@rchakra1: Map Reduce lacks efficient data sharing @matei_zaharia #ApacheSpark #Qconsf
@rchakra1: Data sharing in #MapReduce is slow due to data distribution and disk I/O @matei_zaharia #ApacheSpark #qconsf
@davidgev: Nice results from Apache Spark, I guess time to rewrite map-reduce jobs #qconsf #ApacheSpark
@rchakra1: MapReduce can emulate any distributed system @matei_zaharia #ApacheSpark #Qconsf
@rchakra1: In Apache Spark, users can control data partitioning and caching @matei_zaharia #qconsf
@rchakra1: #ApacheSpark can leverage most of the latest innovations in databases, graph processing, machine learning. @matei_zahara #qconsf
@vpothnis: Apache #Spark - Unified engine for #batch, #streaming, #interactive processing. Awesome talk by @matei_zaharia at #qconsf

Continuous Delivery: From Heroics To Becoming Invisible

Twitter feedback on this track included:

@bjorn_fb: Continuous Delivery: if you're doing it right, nobody knows you're doing anything - @sdether #QConSF
@dserodio: Continuous Delivery is about removing the bottlenecks that stop you from delivering faster #qconsf

Continuous Delivery for the Rest of Us

Twitter feedback on this session included:

@sdether: Problem w/ many release processes: the person causing the pain isn't the one feeling the pain -- @techbint #qconsf
@sangeetan: A release should be a non-event. @techbint #qconsf #continuousdelivery
@jstnm: Ignoring "blocked" status on your board is like ignoring your tests, it is telling you something is wrong @techbint #qconsf
@QConSF: .@techbint on successful #continuousdelivery: culture and process are as important as tools and automation #qconsf

How We Learned to Stop Worrying and Start Deploying the Netflix API Service

Twitter feedback on this session included:

@sdether: Netflix API team motivation for adopting Continuous Delivery: Stop being the Bottleneck! @sangeetan #qconsf
@aerabati: 3 week major release + weekly incremental release cycle helped netflix gain some sanity! #qconsf
@mshah_navis: Get rid of the code freeze #qconsf
@modethirteen: Weekly release cadence @netflix with every 3rd release for big features and all others for fixes @sangeetan #qconsf
@sdether: Netflix API runs tests for each pull request and if no reviewers were explicitly specified, auto-merges passing pull requests #qconsf
@staffanfrisk: 1500+ metrics compare the new code with the old in the Netflix CI-chain. Impressive! #QConsf
@aerabati: Having an agile architecture makes continuous delivery lot easier at netflix #qconsf
@techbint: How do you test in the cloud when everything is changing all the time? Many levels of canaries. Dependency testing. @sangeetan #qconsf
@portixol: 'We don’t have an Ops team as such, each team does their own ops’ @sangeetan on Netflix continuous deployment #qconsf
@portixol: Having developers doing their own ops doesn't decrease velocity, they'd be involved anyhow, just makes it more efficient #qconsf @sangeetan
@LeneHedeboe: #continuousdelivery: Netflix deploys services to multiple AWS regions many times a week #qconsf @sangeetan

The Art of the Builds by Hans Dockter

Twitter feedback on this session included:

@sdether: Don't build frameworks. Build toolkits. Then you can create lightweight frameworks on top of those toolkits/via @hans_d #qconsf
@techbint: Your build system should be domain driven just as much as the code - don't throw away domain knowledge at build time. #qconsf #gradle
@flaviavanharten: #ddd as an inspiration to modeling n building #gradle. Awesome talk by Hans Dockter at #qconsf on "the art of builds"

Deploying at Scale

Containerization Is More than the New Virtualization

Alex Handy of SD Times attended this session: Petazzoni explained that the benefits of Linux containers extend beyond the ability to host multiple applications on a single server.
“Containers are just processes isolated from each other,” he said. “When I start a container, I am starting a normal process, but it has a stamp that says it belongs to a container. That extra stamp is very similar to the User ID you have in a process. It can belong to root, or UID 1000.” Petazzoni added that in sharing information between containers, the path to communication is much simpler than when using virtual machines. Virtual machines often have to use networking in order to communicate between VMs even when they are hosted on the same machine. Because containers use fine-grained namespaces, Petazzoni said that individual containers can be isolated, or they can share the same namespace as other containers, and thus be allowed to communicate across the namespace. Real-World Docker: Past, Present, and Future by Jeff Lindsay Twitter feedback on this session included: @jsoverson: dokku was the gateway drug for people getting into docker #qconsf via @progrium @sangeetan: docker not all about containers;its a primitive for building modern platforms/architectures @progrium #qconsf @SashaO: #qconsf Jeff Lindsay @progrium : you don't want config managers (Chef, Puppet, Salt) with docker installs. CM is for non-docker systems @vivekvaid: @progrium likes lightweight @docker vs @OpenStack #qconsf Reminds me of the @Gartner_inc hype cycle. Time to ditch VMs already? Engineering Culture Building Conscious Engineering Teams by Rob Cromwell Twitter feedback on this session included: @aerabati: Biggest threat to engineering teams is hiring DIVAs (D.I.V.A - Difficult, Infallible, Victim & Arrogant) - via Rob Cromwell #qconsf @rundavidrun: Having a "gifted" dev is like meth, says @robertcromwell, on it he can do anything! When he comes down you've got a horrible mess. #qconsf @johnscattergood: 3 traits required for high performing teams #qconsf @arburbank: When a pen falls, is it due to gravity or because I dropped it? Choose the answer that will help solve the problem. 
@robertcromwell #qconsf Careevolution: Building a Company through Ambiguity, Judgment, Trust, and Worklife Fusion Andrew Hao attended this keynote: - judgment: Twitter feedback on this session included: @dmarsh: Find simplicity in complex things ... helps self-directed teams make good choices. #qconsf Engineering Culture @dmarsh: Work exists to serve the needs of the individual and their family not the other way around. Vik Kheterpal. #qconsf Engineering Culture. @rchakra1: self directed teams should believe in work-life fusion says Vik Kheterpal #qconsf Engineering culture @dmarsh: Measures of success are independent of size. Vik Kheterpal. #qconsf Engineering Culture. @dmarsh: Trust people to deal with ambiguity. Develop their judgment muscles. #qconsf Engineering Culture. @rchakra1: The traditional link between compensation and title/role is flawed. Vik Kheterpal at his Engineering culture talk #qconsf @johnscattergood: Seek to hire people with these characteristics #qconsf @rundavidrun: As a leader, demand that bias be turned to action and devs will realize that arguing a lot about something is volunteering to do it. #qconsf Growing Up Spotify by Simon Marcus Twitter feedback on this session included: @arburbank: Spotify is an adhocracy. It's specially designed for innovation and fostering dissent. @lycaonmarcus #qconsf @arburbank: Growing up Spotify: while we like incrementalism and testing, we also try to keep an eye on where we are headed. @lycaonmarcus #qconsf @arburbank: Being an adhocracy also makes it very difficult to get things done, and we talk for a long time to achieve consensus. @lycaonmarcus #qconsf @philip_pfo: Ideas are better built together. Innovate ideas require debate to evolve; discouraging dissent risks mediocrity. @lycaonmarcus #qconsf @techbint: You can't build a unified experience with silo'd teams - feature squads help. @lycaonmarcus #qconsf @arburbank: Spotify wanted to be able to spin up new missions quickly. 
A tribe is 70-130 folks, with dev, PM, design leader trio. @lycaonmarcus #qconsf @dmarsh: Mentoring at Spotify is critical given that they hire a large number of junior staffers. @lycaonmarcus #qconsf @rundavidrun: .@lycaonmarcus at #qconsf: @Spotify focuses on long-term effectiveness over short-term efficiency. @ShoReason: When something goes wrong, we don't play the blame game we look to see what in the system allowed it to happen - @lycaonmarcus #qconsf The Evolution of Engineering Culture: Oh, the Places We've Been Andrew Hao attended this keynote: -? Engineering for Product Success Engineering the Resolution Center to Drive Success at Airbnb Twitter feedback on this session included: @philip_pfo: More data -> more opportunity. Bring structure to formerly unstructured data, gather where not gathered. @alvinsng #qconsf Evolution of the PayPal API Platform: Enabling the Future of Money Andrew Hao attended this session: Paypal used to have a fragmented, legacy API ecosystem. Circa 2012, they began an initiative to standardize and modernize their API infrastructure. - API first - API as a product - REST first Architecture: - Built using facades - Facades coordinate between internal services (microservices?) - Tiered approach: experience APIs (facades) and capability APIs (core services) - How did they assess success? Did a maturity model. Level 1-5 - People got into competitions - Used DDD – bounded contexts – shared language - Evolution is more than technology Experimenting on Humans by Talya Gendler & Aviran Mordo Twitter feedback on this session included: @techbint: A/b tests don't mean that new feature has to win - it just has to not do worse than existing features @aviranm #qconsf How Ebay Puts Big Data and Data Science to Work Twitter feedback on this session included: @arburbank: You no longer need a SAS license to do data science. The competitive edge is in how you use your data. 
- Mike Mathieson at #qconsf @arburbank: It's easy to create a lot of data reports without actually creating a product that leverages your data. - Mike Mathieson #qconsf @randyshoup: Data science is relevant iff it is put into practice in a product (not just answering a question) - Mike Mathieson #qconsf @arburbank: Bad data science has long periods of silence. Someone hides in a corner and you don't know when they'll come back. - Mike Mathieson #qconsf @arburbank: People aren't very good at coming up with heuristic weighting schemes when there are more than a dozen variables. -Mike Mathieson #qconsf @arburbank: Advantage of switching from heuristics to machine learning: can devote your efforts to finding new sources of data. -Mike Mathieson #qconsf @arburbank: It's better to start simple and go from there than to try to design the perfect thing up front. - Mike Mathieson #qconsf @arburbank: As infra improves, you spend more time on features and less on experimental tools, modeling, and data acquisition. -Mike Mathieson #qconsf @randyshoup: Real world data science is almost all about data and feature selection instead of algorithm development - Mike Mathieson #qconsf @randyshoup: Need engineering discipline in data science: speed of model execution + experimentation, training data generation - Mike Mathieson #qconsf @arburbank: Celebrate failed experiments - but only if you learn something! -Mike Mathieson #qconsf @dtunkelang: Question for all you data scientists: do you need to ethically justify having a holdout set that doesn't benefit from improvements? #qconsf Metrics-Driven Prioritization by Sam McAfee Twitter feedback on this session included: @rchakra1: Innovate or Die says @sammcafee in his "Metrics driven Prioritization" talk at #qconsf @aerabati: Good product is at the Intersection of Usable, Feasible and Valuable! 
via @sammcafee #qconsf @aerabati: Modern day engineer not only needs to understand the business model of the company but also must understand growth model! @sammcafee #qconsf @aerabati: Information value = Economic impact * uncertainity #qconsf Java at the Cutting Edge Stuff I Learned about Performance by Mike Barker Twitter feedback on this session included: @flaviavanharten: If you cannot talk about trade offs, then you are just a fan person waving a coffee mug... Talk on performance by @mikeb2701 at #qconsf @flaviavanharten: Java vs C/C++ ... Java is fast. Certainly no reason to choose C/C++ over Java for performance reasons. Talk by @mikeb2701 at #qconsf @absynthmind: Let the problem define the solution. Design without bias… And not just for high-performance software #qconsf @rzanner: @mikeb2701: Try to solve problems w/o bias, i.e. w/o bringing your favorite tools #qconsf Modern CS in the Real World The Evolution of Testing Methodology at AWS: from Status Quo to Formal Methods with TLA+ Alex Handy of SD Times attended this session: Tim Rath discussed testing strategies used at the company. He echoed keynote speaker Leslie Lamport by admonishing developers in the audience to write specifications for their code. “Every class I turn in has to have some specification to it,” he said. “You should be writing things that you know about in the code: How does this interact with other parts of the system? What is its purpose in life? I will put the interaction diagrams in there. Ultimately, I am looking for those truths that I can test and describe, and describe them well.” The Quest for the One True Parser by Terence Parr Alex Handy of SD Times attended this session: Terence Parr, professor of computer science at the University of San Francisco, enlightened QCon attendees about the power of his parsing research. 
Parr has devoted his research to increasing the strength of the simple and efficient top-down LL parsers, culminating in the powerful ALL(*) strategy of ANTLR 4. “I’ve made this as powerful as possible, and I’m trading that last bit of generality for performance,” he said. Parr’s ANTLR parser optimizes itself after a successful parse, and given the same file a second time, it can parse it significantly faster than more generalized parsers. Parr said he’s finally completed his work on parsing science after 25 years of work, and he’s not quite sure what he’ll be working on next. Reactive Service Architecture Philipp Garbe attended this track. Comparing Elasticity of Reactive Frameworks by James Ward Twitter feedback on this session included: @hbrumleve: @_JamesWard rocking reactive elasticity @QConSF ... Way more to it than just scaling hardware. #qconsf @hbrumleve: @_JamesWard may have just coined the term "going reactivist" @QConSF #qconsf Concurrency at Large-Scale: the Evolution to Reactive Microservices by Randy Shoup Andrew Hao attended this session: Microservices: loosely-coupled service oriented architecture with bounded contexts - single purpose - simple well-defined interface - modular and independent - more a graph of relationships than tiers - isolated persistence! - each unit is simple - helps your company scale. people can hold it in their heads. Reactive microservices: - first tenet is to be responsive, fail latencies, async nonblocking calls from client - resilient: redundancy, timeouts, retries. hystrix - the “Release It!” book by Michael Nygard - elastic: can scale up and down according to load. - message-driven: message passing - FRP patterns: actor model - scala/akka + rxjava How to do this? - Don’t migrate in one big bang. do it incrementally.
- find your worst scaling bottleneck - wall it off behind an interface - replace it - then do it again Twitter feedback on this session included: @vpothnis: 'no shame in matching architecture to problem scale' - @randyshoup at #qconsf @philip_pfo: Architecture should match the problem - don't over engineer from the start; evolve as you grow. @randyshoup #qconsf @sammcafee: No shame in simple 3 tier app if that's where you currently are. @randyshoup #qconsf @codingjester: If you don't end up regretting your early technology decisions, you probably over-engineered #qconsf @randyshoup @vpothnis: if you get to re-architect, thats a sign of success - @randyshoup talk at #qconsf @SashaO: #qconsf @randyshoup Google Cloud Datastore example: Cloud Datastore (6 people) -> Megastore -> Bigtable -> Colossus -> Cluster mgt infra @sangeetan: Layered microservices allow very small teams to achieve very great things @randyshoup #qconsf @vpothnis: predictable performance at 99th percentile trumps low mean latency - @randyshoup talk on #reactive systems at #qconsf @SashaO: #qconsf @randyshoup Kixeye's microservices framework: ; sync, async using a lot of Netflix OSS stack @philip_pfo: The only thing you are guaranteed to get with a big bang arch evolution is a Big Bang @martinfowler via @randyshoup #qconsf @philip_pfo: Build one to throw away (prototype). Next one will be better, less risk of second-system syndrome. @randyshoup #qconsf @philip_pfo: Set the goal, not the path. 
#leadership @randyshoup #qconsf @SashaO: #qconsf @randyshoup microservices in the org: many relationships become vendor-customer @vpothnis: cost allocation and charging can help motivate both customer and provider to optimize - @randyshoup on building #microservices #qconsf @stonse: Charge users of your #MicroService for usage of your service -> great point for gaining efficiency by @randyshoup at #qconsf @christianralph: #QConSF @randyshoup describes an internal service cost structure at google to create a market economy encouraging fair usage/behaviour Reactive Programming with Rx Andrew Hao attended this session: - Observables vs Iterables. Push vs Pull. - Observable is at the core: abstraction of events over sets. - Prefer Observable over Future because Future is still fragmented in Java ecosystem. - Using Rx services that aggregate granular APIs. Error handling - Flow control: You need backpressure when you hop threads. what do you do when you consume slower than producer? Often necessary in UIs - Hot vs Cold source: Hot: emits whether you’re ready or not. mouse event. Cold: emits when requested: HTTP request. - Approach: block threads. like iterables. - Hot streams: use temporal operators: like sample or debounce - Buffer, debounce, buffer pattern – can group signals by temporal - Reactive push: hot infinite stream: Buffer by time window, drop some samples if appropriate, do map/reduce on windows Mental shift - imperative –> functional - sync –> async - pull –> push - Rx doesn’t trivialize concurrency. You need to reason about what’s going on underneath. Philipp Garbe also attended this session.
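The push-vs-pull duality in the session notes above can be sketched without any Rx library at all. Here is a minimal illustration in plain Java — the class names PullSource/PushSource are invented for this sketch, and real RxJava adds operators, schedulers, and backpressure handling on top of the same idea:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

// Pull (Iterable-style): the consumer drives, asking for the next
// value only when it is ready for one.
class PullSource {
    private final Iterator<Integer> it = List.of(1, 2, 3).iterator();

    Integer next() {
        return it.hasNext() ? it.next() : null;
    }
}

// Push (Observable-style): the producer drives, handing values to
// registered callbacks whether or not anyone asked — a "hot" source
// in Rx terms, like the mouse events mentioned above.
class PushSource {
    private final List<Consumer<Integer>> observers = new ArrayList<>();

    void subscribe(Consumer<Integer> observer) {
        observers.add(observer);
    }

    void emit(int value) {
        observers.forEach(o -> o.accept(value));
    }
}
```

Backpressure enters exactly where the notes say it does: once producer and consumer sit on different threads, a push source can emit faster than the callback consumes, so a real implementation has to buffer, sample/debounce, or drop.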
- Presenter demonstrating how to build a search engine in scala with composable operations - FP helps in “separating mechanics from semantics” – the bind function - A service is an async function: takes a request and returns a Future reply. - “your server as a function”: By modeling apps as composable services, you can reason well about your programs. Example is finagle, which is just a composition of services - “composability is one of the great ideas of functional” - Good tool for enforcing modularity Scalable Microservice Architectures Building and Deploying Microservices with Event Sourcing, CQRS and Docker Andrew Hao attended this keynote: Docker - Jenkins deploy pipeline: Build & test code + build & test docker image + deploy docker to registry - Smoke test the image: - POST /containers/create, POST /containers/{id}/start - ping /health URL - tag image, then push image - takes seconds to build, deploy! - Jenkins is on Docker, too! Deployment to prod - Diffs running dockers against build dockers, then deploys the changed containers. - Mesos + marathon + zookeeper Twitter feedback on this session included: @echinopsii: #microservices : use dedicated DB for each modules according to functional needs. #qconsf @rchakra1: @crichardson explaining Event Sourcing. Before: Update State + Publish Events. Now: Persist and Publish Events #microservices #qconsf Organizing Your Company to Embrace Microservices by Paul Osman Andrew Hao attended this keynote: Twitter feedback on this session included: @aerabati: Failures happen in prod - Optimize for 'Responding to failures' and not 'avoiding failures' ! #micro-services #qconsf @aerabati: If possible - deploy in one command and provision hosts in one command! #oodaloop via @paulosman #microservices #qconsf @aerabati: Embracing Microservices Tip - Dont design around technology layers - frontend, backend etc #qconsf via@paulosman @aerabati: Big benefit of cross functional teams - You can easily move people around when required! 
#embracing-microservices #qconsf @techbint: Need to balance autonomy of team against feeling of neglect. Align tribes & chapters. @lycaonmarcus #qconsf @zimmermatt: Even if you don't (yet) have the size, treat each business functionality as a team @paulosman #QConsf Scalable Microservices at Netflix. Challenges and Tools of the Trade by Sudhir Tonse Twitter feedback on this session included: @philip_pfo: Production outages produce the best learnings. We've converted those into hardened #NetflixOSS offerings. @stonse #qconsf @ShoReason: Netflix supports more than 500 micro services @stonse #qconsf @philip_pfo: Distributed systems are inherently complex - microservices doesn't solve that but does expose it. @stonse #qconsf @techbint: Only three things are certain in life: death, taxes and outages in production. @stonse #qconsf @zimmermatt: Microservice best practice: Reduce/avoid hot spots @stonse #qconsf @zimmermatt: Microservice best practice: Use red/black deployment @stonse @NetflixOSS #asgard #qconsf @rzanner: @stonse monolithic apps like trains: when 1 wagon catches fire, you have to stop the whole train #qconsf The Future of Mobile Building Pinterest's Mobile Apps by Mike Beltzner & Garrett Moon Twitter feedback on this session included: @rundavidrun: .@beltzner at #qconsf: Pinterest builds in special feature for employees dogfood-testing the next mobile release: shake to report bugs. @arburbank: You are the only person who is desperate for your new feature. @beltzner on experimenting before shipping at #qconsf @arburbank: Users who review your mobile app before you ask them to are often angry - but their feedback will help you improve. @beltzner at #qconsf @arburbank: Predictability of the mobile app release schedule ensures that no bug is too costly and helps feature planning. @beltzner #qconsf @rundavidrun: .@beltzner at #qconsf shows @Pinterest's release "train" 3-week cadence. Large features are split over a few cycles. 
@arburbank: Apps hanging may be even worse than apps crashing because users don't know how to escape. @garrettmoon at #qconsf @arburbank: Reducing the app startup time on android tablet increased the number of pins viewed by 30%! @garrettmoon #qconsf @aerabati: Avoid unwanted image requests and API calls on app startup ! Pinit talk at #qconsf! @arburbank: Many crashes are specific to particular device/OS combos, so more specific crash reporting is key. (Also for AB tests!) #qconsf @garrettmoon @aerabati: Pinit guys use cocoapods to manage their external dependencies! #qconsf Facebook’S iOS Architecture Twitter feedback on this session included: @GMilano: iOS Facebook team use a declarative "objc++" to declare feeds in order to avoid maths, threads, etc when programming #qconsf Less, But Better Twitter feedback on this session included: @rundavidrun: .@michaelgarvey at #qconsf: bad design surprises us with disruption, good design surprises us with beauty. Same is true for code. Open Spaces Twitter feedback on open spaces included: @rkasper: 95 people at our first #openspace! #qconsf rocks! @rkasper: Wherever it happens is the right place #openspace principle in full use! #architecture #qconsf @rkasper: All it takes is a little passion... and they rush the center of he circle! Amazing #OpenSpace in the #qconsf #continuousdelivery track! Tutorials Domain Driven Overview by Eric Evans Twitter feedback on this tutorial included: @zimmermatt: Why bother with models? The critical complexity of many software projects is in understanding the domain itself. 
@ericevans0 #qconsf @zimmermatt: DDD Guiding Principle: Focus on the core domain @ericevans0 #qconsf @zimmermatt: DDD Guiding Principle: Explore /models/ in creative collaboration with domain practitioners and software practitioners @ericevans0 #qconsf @zimmermatt: DDD Guiding Principle: Speak a ubiquitous language within an explicitly bounded context @ericevans0 #qconsf @zimmermatt: We want models that allow us to make simple, clear assertions; we can only do this within a bounded context. @ericevans0 #qconsf @zimmermatt: It's not simply having a model, it's having one that fits the scenario. Use @ericevans0 #qconsf @zimmermatt: Not all of a large system will be well designed. @ericevans0 #qconsf @zimmermatt: If the Chief Architect has convinced the CTO there is "the one true model," run away... it's not going to end well. @ericevans0 #qconsf @zimmermatt: Distill the core domain @ericevans0 #qconsf Opinions about QCon Opinions expressed on Twitter included: @philip_pfo: Hacking the conference UX - love the format variation at #qconsf: lunch together or alone, open space, early adopter technology focus @rundavidrun: This year at #qconsf, they have a "hallway track": long breaks to facilitate peer sharing, which is sometimes more valuable than speakers. @eduardk: Woke up to this view. Not a bad way to start the day at #QConSF /C @HyattRegencySF @aerabati: Veg Biryani for lunch? How often do you see that at tech conferences in US? Thanks #qconsf Philipp Garbe’s opinion on QCon was: It seems that people in San Francisco are very healthy. There were no soft drinks, just water, tea and coffee. Also the food was more for vegetarians. I missed some good old American steaks :) Karan Parikh’s opinion on QCon was: QCon was an incredible conference, and I learned a lot. Can’t wait for QCon 2015. Takeaways Andrew Hao’s takeaways were: -. 
Philipp Garbe’s takeaways were: If I had to describe the conference this year in three words it would be: reactive, functional & microservices. Obviously that’s my opinion. There were also a lot of other talks about engineering culture, continuous deployment and architectures and and and. Unfortunately, I couldn’t attend them all. …. And there was also the traditional conference party at Thirsty Bear on Monday evening. The whole bar (2 floors) was crowded by nerds and beer and food was for free. @pgarbe: I have to go to #qconsf again to become gold alumni and get the fancy q-tshirt. Of course that’s not the main reason. When you work as a web developer this conference is a must. Not only do they provide trending topics and interesting insights, but you also have the chance to extend your network with smart people. Conclusion The eighth annual QCon San Francisco brought together 1,200 attendees and more than 100 speakers in what was the largest ever QCon to be held in the US. QConSF 2014 was produced by InfoQ.com. Other upcoming QCons include: • QCon London March 2-6, 2015 • QCon São Paulo March 23-27, 2015 • QCon New York June 8-12, 2015 • QCon Rio De Janeiro August 24-28, 2015 Visit to see a complete list of our upcoming events.
Metaprogramming in Elixir

Usually, we think of a program as something that manipulates data to achieve some result. But what is data? Can we use the programs themselves as data? 🤔

In today’s article, we’ll go down the rabbit hole with the assistance of Elixir, a programming language that is permeated by metaprogramming. I’ll introduce you to metaprogramming in Elixir and show you how to create a macro for defining curried functions.

What is metaprogramming?

Metaprogramming is just writing programs that manipulate programs. It’s a wide term that can include compilers, interpreters, and other kinds of programs. In this article, we will focus on metaprogramming as it is done in Elixir, which involves macros and compile-time code generation.

Metaprogramming in Elixir

To understand how metaprogramming works in Elixir, it is important to understand a few things about how compilers work. During the compilation process, every computer program is transformed into an abstract syntax tree (AST) – a tree structure that enables the computer to understand the contents of the program. In Elixir, each node of the AST (except basic values) is a tuple of three parts: function name, metadata, function arguments. Elixir enables us to access this internal AST representation via quote.

iex(1)> quote do 2 + 2 - 1 end
{:-, [context: Elixir, import: Kernel],
 [{:+, [context: Elixir, import: Kernel], [2, 2]}, 1]}

We can modify these ASTs with macros, which are functions from AST to AST that are executed at compile-time. You can use macros to generate boilerplate code, create new language features, or even build domain-specific languages (DSLs). Actually, a lot of the language constructs that we regularly use in Elixir such as def, defmodule, if, and others are macros. Furthermore, many popular libraries like Phoenix, Ecto, and Absinthe use macros liberally to create convenient developer experiences.
Here’s an example Ecto query from the documentation:

query = from u in "users",
  where: u.age > 18,
  select: u.name

Metaprogramming in Elixir is a powerful tool. It approaches LISP (the OG metaprogramming steam roller) in expressivity but keeps things one level above in abstraction, enabling you to delve into AST only when you need to. In other words, Elixir is basically LISP but readable. 🙃

Getting started

So how do we channel this immense power? 🧙 While metaprogramming can be rather tricky, it is rather simple to start metaprogramming in Elixir. All you need to know are three things.

quote

quote converts Elixir code to its internal AST representation. You can think of the difference between regular and quoted expressions to be the difference in two different requests.

- Say your name, please. Here, the request is to reply with your name.
- Say “your name”, please. Here, the request is to reply with the internal representation of the request in the language – “your name”.

iex(1)> 2 + 2
4
iex(2)> quote do 2 + 2 end
{:+, [context: Elixir, import: Kernel], [2, 2]}

quote makes it a breeze to write Elixir macros since we don’t have to generate or write the AST by hand.

unquote

But what if we want to have access to variables inside quote? The solution is unquote. unquote functions like string interpolation, enabling you to pull variables into quoted blocks from the surrounding context. Here’s how it looks in Elixir:

iex(1)> two = 2
2
iex(2)> quote do 2 + 2 end
{:+, [context: Elixir, import: Kernel], [2, 2]}
iex(3)> quote do two + two end
{:+, [context: Elixir, import: Kernel], [{:two, [], Elixir}, {:two, [], Elixir}]}
iex(4)> quote do unquote(two) + unquote(two) end
{:+, [context: Elixir, import: Kernel], [2, 2]}

If we don’t unquote two, we will get Elixir’s internal representation of some unassigned variable called two. If we unquote it, we get access to the variable inside the quote block.

defmacro

Macros are functions from ASTs to ASTs.
For example, suppose we want to make a new type of expression that checks for the oddity of numbers. We can make a macro for it in just a few lines with defmacro, quote, and unquote.

defmodule My do
  defmacro odd(number, do: do_clause, else: else_clause) do
    quote do
      if rem(unquote(number), 2) == 1,
        do: unquote(do_clause),
        else: unquote(else_clause)
    end
  end
end

iex(1)> require My
My
iex(2)> My.odd 5, do: "is odd", else: "is not odd"
"is odd"
iex(3)> My.odd 6, do: "is odd", else: "is not odd"
"is not odd"

When should you use metaprogramming?

“Rule 1: Don’t Write Macros” – Chris McCord, Metaprogramming in Elixir

While metaprogramming can be an awesome tool, it should be used with caution. Macros can make debugging much harder and increase overall complexity. They should be turned to only when it is necessary – when you run into problems you can’t solve with regular functions or when there is a lot of plumbing behind the scenes that you need to hide. When used correctly, they can be very rewarding, though. To see how they can improve developer life, let’s look at some real-life examples from Phoenix, the main Elixir web framework.

How macros are used in Phoenix

In the following section, we’ll analyze the router submodule of a freshly made Phoenix project as an example of how macros are used in Elixir.

use

If you look at the top of basically any Phoenix file, you will most likely see a use macro. Our router submodule has one.

defmodule HelloWeb.Router do
  use HelloWeb, :router

What this one expands to is:

require HelloWeb
HelloWeb.__using__(:router)

require asks HelloWeb to compile its macros so that they can be used for the module. But what’s __using__? It, as you might have guessed, is another macro!

defmacro __using__(which) when is_atom(which) do
  apply(__MODULE__, which, [])
end

In our case, this macro invokes the router function from the HelloWeb module.
def router do
  quote do
    use Phoenix.Router
    import Plug.Conn
    import Phoenix.Controller
  end
end

router imports two modules and launches another __using__ macro. As you can see, this hides a lot, which can be both a good and a bad thing. But it also gives us access to a magical use HelloWeb, :router to have everything ready for quick webdev action whenever we need.

pipeline

Now, look below use.

pipeline :browser do
  plug :accepts, ["html"]
  plug :fetch_session
  plug :fetch_flash
  plug :protect_from_forgery
  plug :put_secure_browser_headers
end

Yup, more macros. pipeline and plug define pipelines of plugs, which are functions that transform the connection data structure. While the previous macro was used for convenient one-line imports, this one helps to write pipelines in a very clear and natural language.

scope

And, of course, the routing table is a macro as well.

scope "/", HelloWeb do
  pipe_through :browser

  get "/", PageController, :index
end

scope, pipe_through, get – all macros. In fact, the whole module is macros and an if statement (which is a macro) that adds an import statement and executes a macro. I hope that this helps you see how metaprogramming is at the heart of Elixir. Now, let’s try to build our own macro.

Build your own Elixir macro

Elixir and currying don’t really vibe together. But with some effort, you can create a curried function in Elixir. Here’s a regular Elixir sum function:

def sum(a, b), do: a + b

Here’s a curried sum function:

def sum() do
  fn x ->
    fn y -> x + y end
  end
end

Here’s how they both behave:

iex(1)> Example.sum(1,2)
3
iex(2)> plustwo = Example.sum.(2)
#Function<10.76762873/1 in Example.sum/0>
iex(3)> plustwo.(2)
4

Let’s say that we want to use curried functions in Elixir for some reason (for example, we want to create a monad library). Writing out every function in our code like that would be, to say the least, inconvenient.
But with the power of metaprogramming, we can introduce curried functions without a lot of boilerplate. Let’s define our own defc macro that will define curried functions for us. First, we need to take a look at how a regular def looks as an AST:

iex(1)> quote do def sum(a, b), do: a + b end
{:def, [context: Elixir, import: Kernel],
 [
   {:sum, [context: Elixir], [{:a, [], Elixir}, {:b, [], Elixir}]},
   [
     do: {:+, [context: Elixir, import: Kernel],
      [{:a, [], Elixir}, {:b, [], Elixir}]}
   ]
 ]}

It is a macro with two arguments: the function definition (in this case, sum is being defined) and a do: expression. Therefore, our defc (which should take the same data) will be a macro that takes two things:

- A function definition, which consists of the function name, context, and supplied arguments.
- A do: expression, which consists of everything that should be done with these arguments.

defmodule Curry do
  defmacro defc({name, ctx, arguments} = clause, do: expression) do
  end
end

We want the macro to define two functions:

- The function defined in defc.
- A 0-argument function that returns the 1st function, curried.

defmacro defc({name, ctx, arguments} = clause, do: expression) do
  quote do
    def unquote(clause), do: unquote(expression)
    def unquote({name, ctx, []}), do: unquote(body)
  end
end

That’s more or less the macro. Now, we need to generate the main part of it, the body. To do that, we need to go through the whole argument list and, for each argument, wrap the expression in a lambda.

defp create_fun([h | t], expression) do
  rest = create_fun(t, expression)

  quote do
    fn unquote(h) -> unquote(rest) end
  end
end

defp create_fun([], expression) do
  quote do
    unquote(expression)
  end
end

Then, we assign the variable body in the macro to be the result of create_fun, applied to the arguments and the expression.
defmacro defc({name, ctx, arguments} = clause, do: expression) do
  body = create_fun(arguments, expression)

  quote do
    def unquote(clause), do: unquote(expression)
    def unquote({name, ctx, []}), do: unquote(body)
  end
end

That’s it! 🥳 To try it out, let’s define another module with a sum function in it.

defmodule Example do
  import Curry

  defc sum(a, b), do: a + b
end

iex(1)> Example.sum(2,2)
4
iex(2)> Example.sum.(2).(2)
4
iex(3)> plustwo = Example.sum.(2)
#Function<5.100981091/1 in Example.sum/0>
iex(4)> plustwo.(2)
4

You can see the full code here. In our example, the macro provides only the sum() and sum(a,b) functions. But from here, it’s easy to extend our macro to generate partial functions for all arities. In the case of sum, we can make it so that the macro generates sum(), sum(a), and sum(a,b), modifying the function definition to account for the missing arguments. Since it is an awesome exercise to try on your own, I will not spoil the answer. 😊

Further learning

If you want to learn more about macros in Elixir, here are a few resources I suggest to check out:

- Metaprogramming in Elixir. Chris McCord’s book gives a detailed introduction to macros with awesome and practical code examples.
- Understanding Elixir Macros. This article series by Saša Jurić covers the topic in much more detail than I do, and it’s a nice read if you don’t have time to read Chris’s book.
- Don’t Write Macros But Do Learn How They Work. If you prefer to watch videos, this is a nice talk by Jesse Anderson.

For more articles on Elixir, you can go to our Elixir section or follow us on Twitter or Medium to receive updates whenever we publish a new one.
https://serokell.io/blog/elixir-metaprogramming
What am i doing wrong with QFile::rename? - MathSquare @ Qstring oldName = ("/home/myusername/Desktop/11.txt"); Qstring newName = ("/home/myusername/Desktop/22.txt"); bool QFile::rename ( const QString & oldName, const QString & newName ); @ Please tell me how to fix it and post the code since i am a new user to qt and coming from basic.net. [[Moved another thread out of QnA and added code formating, Tobias]] - Code_ReaQtor Seems like the title is different from your goal/link... - Code_ReaQtor This works in windows, just change the oldName and newName to machine-specific uri. Also, it is only a commandline program. @#include <QCoreApplication> #include <QFile> #include <QDebug> int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); QString oldName = "D:\11.txt"; QString newName = "D:\22.txt"; qDebug()<<QFile::rename(oldName,newName); return a.exec(); }@ Note that I used qDebug() to show if the "rename" is successful or not. QFile::rename() will be only true if "oldName" exists! - BelenMuñoz I think the problem is that you are declaring twice the same variables. When you call the function with arguments that are existing variables, just use the name. QFile::rename (oldName, newName ); PD: As somebody told you in another post, you should learn more about C++.
https://forum.qt.io/topic/23476/what-am-i-doing-wrong-with-qfile-rename-63
Originally posted by Arathi Rajashekar: I have a doubt. I modified the above code to following. It is compiling fine class A { protected int i = 10; public int getI() { return i; } } public class B extends A { public void process(A a) { a.i = a.i*2; } public static void main(String[] args) { A a = new B(); B b = new B(); b.process(a); System.out.println( a.getI() ); } } But if I make class A as public its not compiling.Even when i saved it in file A.java, while compling B.java it says class A not found. Am bit confused here. Can anybody give clear idea on this. The above code will be in same package. So if it is in same package can we access protected variables using super class reference. [ January 08, 2002: Message edited by: Arathi Rajashekar ] Originally posted by Seany Iris: class B is subclass of class A,why cannot it accesses the variables of A?
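The behavior the thread is circling around can be checked with a single-file, compilable sketch (mirroring the thread's classes). Both classes end up in the same default package, and in Java `protected` also grants package-level access, which is why `a.i` is reachable here. As for the follow-up question: once `A` is declared `public` it must live in its own `A.java`, and `javac` must be able to find that file — compiling both together (`javac A.java B.java`) or putting the directory on the sourcepath/classpath resolves the "class A not found" error.

```java
class A {
    protected int i = 10;
    public int getI() { return i; }
}

public class B extends A {
    public void process(A a) {
        // Legal: A and B are in the same (default) package, and
        // 'protected' includes package-level access in Java.
        a.i = a.i * 2;
    }

    public static void main(String[] args) {
        A a = new B();
        new B().process(a);
        System.out.println(a.getI()); // prints 20
    }
}
```

Moving `A` into a different package is exactly what breaks this: then `protected` only allows access through a reference of the subclass's own type, from within the subclass.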
http://www.coderanch.com/t/235892/java-programmer-SCJP/certification/isn-compiling
Ruby Array Exercises: Compute the average values of a given array, except the largest and smallest values Ruby Array: Exercise-31 with Solution Write a Ruby program to compute the average values of a given array, except the largest and smallest values. The array length must be 3 or more. Ruby Code: def check_array(nums) min = nums[0] max = nums[0] sum = 0 nums.each do |item| sum = sum + item if(item > max) max = item elsif(item < min) min = item end end return (sum-max-min).to_f/(nums.length - 2) end print check_array([3, 4, 5, 6]),"\n" print check_array([12, 3, 7, 6]),"\n" print check_array([2, 15, 7, 2]),"\n" print check_array([2, 15, 7]) Output: 4.5 6.5 4.5 7.0
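The same computation can be written more compactly with Ruby's built-in `Enumerable` helpers — a sketch (the method name mirrors the exercise; like the loop version, it subtracts one copy each of the minimum and the maximum, so duplicates are handled identically):

```ruby
def check_array(nums)
  min, max = nums.minmax                       # smallest and largest in one pass
  (nums.sum - min - max).to_f / (nums.length - 2)
end

puts check_array([3, 4, 5, 6])   # 4.5
puts check_array([12, 3, 7, 6])  # 6.5
puts check_array([2, 15, 7])     # 7.0
```

`minmax` and `sum` are available on all supported Ruby versions (`Array#sum` since Ruby 2.4), so this is a drop-in replacement for the hand-rolled loop.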
https://www.w3resource.com/ruby-exercises/array/ruby-array-exercise-31.php
textwrap, but savvy to ANSI colors and styles ansiwrap wraps text, like the standard textwrap module. But it also correctly wraps text that contains ANSI control sequences that colorize or style text. Where textwrap is fooled by the raw string length of those control codes, ansiwrap is not; it understands that however much those codes affect color and display style, they have no logical length. The API mirrors the wrap, fill, and shorten functions of textwrap. For example: from __future__ import print_function from colors import * # ansicolors on PyPI from ansiwrap import * s = ' '.join([red('this string'), blue('is going on a bit long'), green('and may need to be'), color('shortened a bit', fg='purple')]) print('-- original string --') print(s) print('-- now filled --') print(fill(s, 20)) print('-- now shortened / truncated --') print(shorten(s, 20, placeholder='...')) It also exports several other functions: - ansilen (giving the effective length of a string, ignoring ANSI control codes) - ansi_terminate_lines (propagates control codes through a list of strings/lines and terminates each line.) - strip_color (removes ANSI control codes from a string) See also the enclosed demo.py.
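The core idea — measuring only the visible characters — can be sketched in a few lines of plain Python with a regular expression. This is an illustration of the concept, not ansiwrap's actual implementation:

```python
import re

# Matches SGR color/style sequences such as "\x1b[31m" (red) or "\x1b[0m" (reset).
ANSI_SGR = re.compile(r'\x1b\[[0-9;]*m')

def strip_ansi(s):
    """Remove ANSI color/style codes, leaving only the visible text."""
    return ANSI_SGR.sub('', s)

def ansi_len(s):
    """Length of the visible text, ignoring ANSI codes."""
    return len(strip_ansi(s))

colored = '\x1b[31mred text\x1b[0m'
print(len(colored))       # 17 -- raw length, what textwrap sees
print(ansi_len(colored))  # 8  -- visible length, what ansiwrap reasons with
```

That 17-versus-8 gap is exactly why plain `textwrap` breaks colored lines in the wrong places.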
https://pypi.org/project/ansiwrap/
Const As we have seen in the last chapter, there are mainly two reasons to pass an argument to a function by reference: It may be faster and we are able to change the original value. We also saw that it is often unnecessary inside the function to be able to change the value of the argument and noted that it is often a bad idea because it makes reasoning about the code harder. The problem boils down to the fact, that we are currently unable to see from the signature of a function whether it will change the value of it's arguments. The solution to this problem is called const. [edit] Immutable values const behaves somewhat similar to references: It is an annotation to an arbitrary type that ensures, that it will not be changed. Let's start by looking at const variables, also called constants: #include <iostream> #include <string> int main() { const int zero = 0; const int one = 1; const std::string str = "some const string"; // reading and printing constants is perfectly fine: std::cout << "zero=" << zero << ", one=" << one << ", str='" << str << "'\n"; // even operations that do not change the values are ok: std::cout << "the third letter in str is '" << str[2] << "'\n"; // doing calculations is no problem: std::cout << "one + one + zero = " << one + one + zero << "\n"; // trying to change the value results in a compiler-error: //zero = 2; //one += 1; } Output: zero=0, one=1, str='some const string' the third letter in str is 'm' one + one + zero = 2 Aside from the possibility that the purpose of restricting what can be done with variables may be unclear at this point, it is probably relatively easy to understand what the above code does and how const works so far. So, why should we use constants instead of variables and literals? 
The answer has to be split into two parts, concerning both alternatives: A constant may be more suitable than a variable if the value will never change, because it may both enable the compiler to produce better code (knowing that a certain multiplication is always by two instead of an arbitrary value will almost certainly result in faster code) and programmers to understand it faster as they don't have to watch for possible changes. On the other hand, constants are almost always better than literal constants. Consider the following example, shown here with a named constant; an equivalent version that repeats the bare literal 9.81 instead produces exactly the same output: #include <iostream> int main() { const double gravity_acceleration = 9.81; for (double mass = 0.0; mass <= 2.0; mass += 0.5) { std::cout << mass << "kg create " << mass * gravity_acceleration << " newton of force.\n"; } } Output: 0kg create 0 newton of force. 0.5kg create 4.905 newton of force. 1kg create 9.81 newton of force. 1.5kg create 14.715 newton of force. 2kg create 19.62 newton of force. Even this pretty small example gets easier to understand, once we give names to constant values. It should also be obvious that the advantage in readability increases even further if we need the value multiple times. In this case there is even another advantage: Should we be interested to change the value (for example because we want to be more precise about it), we just have to change one line in the whole program. [edit] Constant References At this point we understand how constant values work. The next step is constant references. We recall that a reference is an alias for a variable.
If we add constness to it, we annotate that the aliased variable may not be changed through this handle: #include <iostream> int main() { int x = 0; const int y = 1; int& z = x; const int& cref1 = x; const int& cref2 = y; const int& cref3 = z; // int& illegal_ref1 = y; // error // int& illegal_ref2 = cref1; // error std::cout << "x=" << x << ", y=" << y << ", z=" << z << ", cref1=" << cref1 << ", cref2=" << cref2 << ", cref3=" << cref3 << '\n'; x = 10; std::cout << "x=" << x << ", y=" << y << ", z=" << z << ", cref1=" << cref1 << ", cref2=" << cref2 << ", cref3=" << cref3 << '\n'; // ++cref1 // error // ++cref2 // error } Output: x=0, y=1, z=0, cref1=0, cref2=1, cref3=0 x=10, y=1, z=10, cref1=10, cref2=1, cref3=10 We note several things: - It is allowed to create const references to non-const values, but we may not change them through this reference. - References may be constructed from other references. - We may add constness when we create a reference, but we may not remove it. [edit] Functions and Constants With this knowledge it is pretty easy to solve our initial problem of passing arguments to functions by reference: We just pass them by const reference which unites the performance-advantage with the ease of reasoning about possible changes to variables. #include <iostream> #include <vector> //pass by const-reference int smallest_element(const std::vector<int>& vec) { auto smallest_value = vec[0]; for (auto x: vec) { if (x<smallest_value) { smallest_value = x; } } return smallest_value; } int main() { std::vector<int> vec; for(size_t i=0; i < 10000000; ++i) { vec.push_back(i); } // getting a const reference to any variable is trivial, therefore // it is done implicitly: std::cout << "smallest element of vec is " << smallest_element(vec) << std::endl; } Output: smallest element of vec is 0 This leaves us with the question of how to pass arguments into a function.
While they may not be entirely perfect, the following two rules should apply in most cases: - If you just need to look at the argument: Pass by const reference. - If you need to make a copy anyways, pass by value and work on the argument. The rationale for this rule is simple: Big copies are very expensive, so you should avoid them. But if you need to make one anyways, passing by value enables the language to create much faster code if the argument is just a temporary value like in the following code: #include <iostream> #include <locale> // for toupper() #include <string> std::string get_some_string() { return "some very long string"; } std::string make_loud(std::string str) { for(char& c: str){ // toupper converts every character to it's equivalent // uppercase-character c = std::toupper(c, std::locale{}); } return str; } int main() { std::cout << make_loud(get_some_string()) << std::endl; } Output: SOME VERY LONG STRING Let's ignore the details of the function toupper() for a moment and look at the other parts of make_loud. It is pretty obvious that we need to create a complete copy of the argument if we don't want to change the original (often the only reasonable thing). On the other hand: In this special instance changing the original would not be a problem, since it is only a temporary value. The great thing at this point is, that our compiler knows this and will in fact not create a copy for this but just “move” the string in and tell the function “This is as good as a copy; change it as you want.”.
https://en.cppreference.com/book/intro/const
As you may already know, webpack is used to bundle JavaScript modules. First let's create a directory, initialize npm, and install webpack locally: mkdir webpack-demo && cd webpack-demo npm init -y npm install --save-dev webpack Now we'll create the following directory structure and contents: project webpack-demo |- package.json + |- index.html + |- /src + |- index.js src/index.js function component() { var element = document.createElement('div'); // Lodash, currently included via a script, is required for this line to work element.innerHTML = _.join(['Hello', 'webpack'], ' '); return element; } document.body.appendChild(component()); index.html <html> <head> <title>Getting Started</title> <script src=""></script> </head> <body> <script src="./src/index.js"></script> </body> </html> In this example, there are implicit dependencies between the <script> tags. Our index.js file depends on lodash being included in the page before it runs. This is because index.js never explicitly declares that it needs lodash; it just assumes that a global variable _ exists. Let's adjust the project slightly, separating the "source" code (/src) from the "distribution" code (/dist): project webpack-demo |- package.json + |- /dist + |- index.html - |- index.html |- /src |- index.js To bundle the lodash dependency with index.js, we'll need to install the library locally... npm install --save lodash and then import it in our script... src/index.js + import _ from 'lodash'; + function component() { ... } Since we'll now be generating a bundle, we also have to update index.html to load it instead of the raw source file: dist/index.html <html> <head> <title>Getting Started</title> - <script src=""></script> </head> <body> - <script src="./src/index.js"></script> + <script src="bundle.js"></script> </body> </html> With that done, let's run webpack with our script as the entry point and bundle.js as the output. The npx command, which ships with Node 8.2 or higher, runs the webpack binary ( ./node_modules/.bin/webpack) of the webpack package we installed in the beginning: npx webpack src/index.js dist/bundle.js Hash: 857f878815ce63ad5b4f Version: webpack 3.9.1 Time: Your output may vary a bit, but if the build is successful then you are good to go. Open index.html in your browser and, if everything went right, you should see the following text: 'Hello webpack'. The import and export statements have been standardized in ES2015.
Although they are not supported in most browsers yet, webpack does support them out of the box. Behind the scenes, webpack actually "transpiles" the code so that older browsers can also run it. If you inspect dist/bundle.js, you can see how webpack does this. Most projects will need a more complex setup, which is why webpack supports a configuration file. This is much more efficient than having to type in a lot of commands in the terminal, so let's create one to replace the CLI options used above: project webpack-demo |- package.json + |- webpack.config.js |- /dist |- index.html |- /src |- index.js webpack.config.js const path = require('path'); module.exports = { entry: './src/index.js', output: { filename: 'bundle.js', path: path.resolve(__dirname, 'dist') } }; Now, let's run the build again but instead using our new configuration: npx webpack --config webpack.config.js Hash: 857f878815ce63ad5b4f Version: webpack 3.9.1 Time: 298 Note that when calling webpack via its path on Windows, you must use backslashes instead, e.g. node_modules\.bin\webpack --config webpack.config.js. If a webpack.config.js is present, the webpack command picks it up by default. We use the --config option here only to show that you can pass a configuration of any name. Since it is not much fun to run a local copy of webpack from the CLI every time, we can add an npm script as a shortcut in package.json: { ... "scripts": { "build": "webpack" }, ... } Now npm run build can be used in place of the npx command, producing the same result: Hash: 857f878815ce63ad5b4f Version: webpack 3.9.1 Time: The project now contains: |- bundle.js |- index.html |- /src |- index.js |- /node_modules If you're using npm 5, you'll probably also see a package-lock.json file in your directory. If you want to learn more about webpack's design, you can check out the basic concepts and configuration pages. Furthermore, the API section digs into the various interfaces webpack offers. © JS Foundation and other contributors Licensed under the Creative Commons Attribution License 4.0.
http://docs.w3cub.com/webpack/guides/getting-started/
or Join Now! Immediately following is an abridged version of my cleaver story for those that find their time precious. Following that is the unabridged version for an audience that have time to kill and are still struggling to determine the fine line separating sanity from a basket cases. o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o-o When I moved into my current house, there was this rusty old cleaver lying on the ground in my back yard. Every time I went past it and it was on the ground, I picked it up and imbedded it into a treated pine log at the end of a wooden pot-plant bench. After repeating this for about 4 years, I considered throwing it out, but before doing that I thought I would try to see if it could somehow be restored/cleaned-up. I gave the head a thorough sanding and polishing, sharpened the edge, turned a new handle for it and used piece of polished copper pipe as a ferule. I made a presentation case for it and mounted it the wall. It turned out to be quite a reasonable restoration and has since been a good conversation piece whenever visitors notice it.The gallery pictures are designed to show both the before and after conditions. Thanks for reading...... ————-oooooooooooooooooooooooooo000OOO000ooooooooooooooooooooooooooooo————- Hello Boys and Girls, This is the unabridged version for those unfortunates that cannot heed sensible advice and please don’t blame the author for your shortcomings and depriving you of that precious part of your life that you squandered by reading it. In this version, the heading would be replaced by, “Griever” the Cleaver, the birth of an antique heirloom. In flashing neon lights. 
Most of my prior epics have been about T&J models which somehow seems to have a limited audience of people that have a T&J model fetish.For a change of pace, this blog is geared towards that reputable pastime of antique dealing.Actually the subject item may not be an antique… well, not just yet, but it is practically, almost exactly 100% guaranteed to be…, in about 100+ years… so watch this space and set reminders. When I moved into my current residence, every time I went walkabouts (which translated by Wiki: “Walkabout historically refers to a rite of passage during which ’Indigenous male Australians’ would undergo a journey during adolescence, typically ages 10 to 16, (and us oldies) and live in the wilderness for a period as long as six months to make the spiritual and traditional transition into manhood.”) into the wilderness of my back yard I saw this rusty old cleaver lying on the ground. I envisage it being left behind by the previous owners or maybe even Jack the Ripper, during his last Australia visit while taking a sabbatical from his normal daily exploits, or even better still, left by those ’Ingenious male Australian’ many centuries ago when they might have gone kangaroo hunting, jumped my fence and quickly realised it was not a boomerang when it was thrown and it wouldn’t return (being typical adolescents, did not give a rats about littering). The cleaver had some illegible writing on it which I am guessing could be some sort of ancient Egyptian hieroglyphics, hence my allusion to antiquity (hey, it certainly DID NOT have a “Made in China” sticker on its plastic parts). Dutifully I performed that rare activity called exercise, bent over and picked it up. Now that my drinking hand was full and not being ambidextrous (I like all animals), I disposed of the cleaver by deftly implanting it into the top of a good-for-nothing- treated-pine-log lazing around in a corner of my jungle. 
This scenario was repeated several times a year as well as all those times I managed to pick up the cleaver. Four years had passed and the cleaver was growing a better beard than I, so I eventually thought that with all this extreme exercise my right bicep might nurture some unsightly muscle and after all, who wants a six pack on their shoulder, I decided to lay that there cleaver to rest…, at the bottom of a half a ton of garbage (anyone remember Alice’s Restaurant?… and if you do, you are an old bugger like me). As I went exploring for that elusive trash can, I happened to pass my workshop and had a thought… unfortunately I can’t remember that thought… it could have been the making of another great story. Anyway, for some reason I decided to visit the hallowed grounds of my sanctuary and entered the portals of my workshop. I suddenly noticed this prohibited, prohibition weapon dangling from my hand and not having a good-for-nothing- treated-pine-log lazing around in the corner of my workshop to vent my fury on by deftly implanting it, I decided to polish it up to a mirror finish so I could check the status of my lipstick and makeup. Lifting the cleaver resulted in a large hunk of the handle imbedding itself into one of my fingers and seeing as how the handle was so deteriorated that there was not enough of it to distribute splinters to my other 9 fingers (OK, 7 fingers and 2 thumbs if your keeping score), I decided to turn it a new handle on the lathe. Having the face-lift and a new handle, I fell in love and put a copper ring on it as a sign of my everlasting commitment. After the transformation and not having to sign a prenuptial, I didn’t have the heart to release it to the work force, so in an attempt to impress (no idea who), I decided to fabricate it a home of its own by the construction of a presentation box. While making the box, I got the front caught under the drum sander and took a great divot out of it. 
As any good golfer does, I tried to replace the divot but failed as it had been miraculously transformed into shop-vac refuse. Undeterred (no I didn’t fall into the loo), I soldiered on and managed to perform a skin graft by the clever use of a half-moon shaped laminated inset (why half-moon… matched the shape of the divot, why laminated… je ne sais pas). After flocking (no not a typo or a cuss word, the bottom of the box) the back board, it got mounted (the box, not the bottom) on the wall awaiting the ides of time to metamorphosize into the aforementioned antique heirloom. It is now about 2 years old and looking quite old (greatly assisted by a 2 year deposit of dust…) and one doesn’t need a crystal ball to foretell it future in 100 years.. All deposits for its purchase would, correction, will be greatly appreciated and even more greatly accepted. Anticipated delivery 1st. June 2116. Thanks for sharing my insanity. a PS for Dutchy, If you happen to have had the misfortune to blunder across this post, the next transcript of this will be published in four (count it 1 to 4)… (correction 1 to three 4) languages, including pigeon-Netherlandisch (though that may be futile if you got this far). -- There's two ways to do things... My way or the right way.. LBD LittleBlackDuck home | projects | blog 663 posts in 325 days LumberJocks | HTML | URL/IMG Preview this project card cleaver restoration crowie 1582 posts in 1455 days #1 posted 06-01-2016 11:30 AM Nice restoration Ducky… Question please….Have you researched the name etched on the blade sir??? -- Lifes good, Enjoy each new day...... Cheers from "On Top DownUnder" Crowie #2 posted 06-01-2016 12:04 PM ... name etched on the blade… - crowie - crowie While formulating this reply and motivated by your question, we had another closer look at the cleaver and SWMBO did a random search and discovered that it is an Elwell cleaver, Look just like mine but the etching on mine is practically sanded off. 
Still not enthused, I just did a little “look see” myself and Googled this,Edward Elwell started at Wednesbury Forge in Staffordshire in 1817, and stopped around 1930 I think, when it merged with Chillington Tool company. Eventually taken over by Spear and Jackson They don’t make them as good as they used to.So getting on to be atleast 80 years old I reckon. Thanks for the kick up the rrrs, now I am enthused. I may not have to wait 100 years. If I knew this I would have titled the project ”Evil Elwell Cleaver the Forger from Chillington”. ralbuck 2243 posts in 1770 days #3 posted 06-01-2016 04:35 PM Still neat old piece; even re-vitalized! Nice work and (displacement)!—(DISPLAY—) could be embedded in a piece of PINE ) -- just rjR SCOTSMAN 5839 posts in 3089 days #4 posted 06-01-2016 06:53 PM she has better eyes than me cause if I had better eyes I’d still be single… just gagging). HE HE very good one.Alistair I have an old one of these given to me many years ago by our local butcher farmer Superb dog trainer in sheepherding .His was a very dear friend of mine and many’s a time I would spend their going straight into the back shop and making us both a cup of tea then we would spend an hour and I would entertain him with all the dirty jokes I could remember and there were quite a few. Sadly the last time he spoke to Bronwen my wife he just kept repeating himself and has since been put into a home(if he is still alive) I don’t know. Anyway the one I have is a large heavy one and I sharpened it with my hands on the grinder. I realize mow to make a really good job you must have a jig does anyone have one I would like to copy it NOT BUY IT as I am Scottish LOL -- excuse my typing as I have a form of parkinsons disease #5 posted 06-02-2016 12:20 AM Hey Scottie Man, I have and old one too, I have and old one too, I have and old one too, I have and old one too, right between my two big toes. 
The missus would also have put me in a home if it wasn’t for the fact that I know the way back (my phone has a built in GPS)... ... must have a jig does anyone have one I would like to copy it NOT BUY IT as I am Scottish LOL Alistair - SCOTSMAN - SCOTSMAN Anyway to get a good jig wouldn’t you need to turn into an Irishman? If you did that your wife would not recognise you! Would that be good or bad? My email is “[email protected]” in case you remember any of those dirty jokes… the “G” rating of LJ would otherwise veto them. “TEA”??? I thought only Poms drank tea and Scotsmen drank OUZO with a worm in it!! (The missus is saying “that home” is getting closer than the neighbours.) Finally, if you want a copy of my jig (quick lesson in jig copying), download this picture, go into “My computer”... no “My computer” on your computer, right click on the downloaded file, click on “Copy” and voila the cheapest copy of a jig, befitting the best Scotsman in the world. PS. I did design and build a fantastic sharpening jig but when I went to find it a permanent place in my workshop I found this bloody Tormek in its way so I threw it out (the new jig not the Tormek) and unfortunately the plans for it as well. So if the above mentioned copy does not work, you might have to Google
http://lumberjocks.com/projects/249650
How to scroll to an element - danielo515 last edited by Funny, the same question was asked in Discord yesterday and we wanted to upgrade the docs for it. Here is the solution we came up with import { scroll } from 'quasar' const { getScrollTarget, setScrollPosition } = scroll export default { methods: { handleScroll () { const ele = document.getElementById('test') // You need to get your element here const target = getScrollTarget(ele) const offset = ele.offsetTop - ele.scrollHeight const duration = 1000 setScrollPosition(target, offset, duration) } } } - danielo515 last edited by danielo515. - rstoenescu Admin last edited by @danielo515 You don't need anything from Quasar for this. Just think in plain JS terms only. The getScrollTarget() is a helper which looks for the closest DOM element with the scroll CSS class or else returns the window object. @danielo515 @a47ae I had exactly the same problem with this snippet as you. It worked when I just deleted - ele.scrollHeight from the snippet above. It works with a helper function like so: function scrollToElement (el) { let target = getScrollTarget(el) let offset = el.offsetTop // do not subtract the el.scrollHeight here let duration = 1000 setScrollPosition(target, offset, duration) } I also updated the snippet in the Quasar documentation and made my PR here: Why not just use the javascript scrollIntoView()? I still have this problem, and my solution is as follows: <template> <q-page ... // ul->lists ... </q-page> </template> <script> mounted(){ this.scrollToBottom() }, methods: { scrollToBottom () { const el = this.$refs.pageChat.$el // MUST call it in timer setTimeout(() => { window.scrollTo(0, el.scrollHeight) }, 100); }, submitMsg () { // do something ... this.$nextTick(function(){ this.scrollToBottom() }) } } </script> BTW, these two things seem to be the same: const ela = document.getElementById('pageChat') const elb = this.$refs.pageChat.$el // ela === elb console.log(ela); console.log(elb);
https://forum.quasar-framework.org/topic/2008/how-to-scroll-to-an-element
In this tutorial, we will discuss the Typing Library which was introduced in Python 3.5. It's a rather special library that builds on the concept of Type Hinting and Checking, and brings in some additional functionality. If you are not aware of what Type Hinting is in Python, then it would be best to check out our tutorial on Type Checking in Python first. Either way, we will quickly go through the basics of Type Hinting before proceeding. What is Type Hinting/Checking? In statically-typed languages like C++ and Java, upon declaration of any variable or function parameter, it is given a type. However, in Python, which is dynamically-typed, no such concept exists. var = 5 var = "Hello" # Reassignment to a different type is legal Hence these commands in Python are perfectly legal. But this can cause confusion, and can make debugging harder. So we want to bring in the concept of static type checking in Python, which we will do with Type Hinting and one other thing (that we'll mention later). The following format is used to indicate that the variable var is only meant to hold integer values. var: int = 5 Variable name, followed by a colon, then the type of the variable, followed by an optional assignment operator and value. This is the format for using type hinting on variables. Let's take a look at another small example, where we create a function for adding only integers, and only returning integers. Anything that goes against this will raise an error. def add(one: int, two: int) -> int: return (one + two) The arrow sign + type format that you see in the above example is used to define the return type. If you try calling this function with two strings, it will raise an error. print(add("Hello", "World")) # INVALID print(add(5, 10)) # VALID Note: In order for type hinting to actually raise errors, you need to have a static type checker. This may already be present inside an IDE you are using, but if not, then you need to set up a static type checker like mypy.
It may not seem like it at first glance, but there are actually many advantages of using type hinting, that make it worth putting in the extra effort. Introducing the Typing Library As we mentioned earlier, the Python Typing library builds upon the concept of Type Hinting, and gives us even more features and ways to define the type of objects we are using. Previously, we could only declare the type of variables and functions. But now, with the Typing library, we can even define the type of Lists, Dictionaries, Tuples and other Python objects. Let’s take a look at some short examples. from typing import List, Tuple, Dict # List of Strings a: List[str] # Dict with strings as keys, and integers as values b: Dict[str, int] # Tuple of ints c: Tuple[int] Attempting something like this… a: List[str] = ["1", "2"] a.append(3) gives us the following error, as it only accepts string values. PS D:\VSCode_Programs\Python> python -m mypy typehinting.py typehinting.py:8: error: Argument 1 to "append" of "list" has incompatible type "int"; expected "str" Found 1 error in 1 file (checked 1 source file) With Tuples you can even define how many values it can hold. # Valid a: Tuple[str, str] = ("hi", "hello") # Invalid a: Tuple[str, str] = ("hi", "hello", "world") The above Tuple has been defined as a Tuple which holds two strings. Attempting to store more than that will result in an error. Combining together Types with Union Another interesting tool that the typing library gives us, is Union. Union is a special keyword, which allows us to specify multiple allowed datatypes. If for example, we wanted a string that allows both integers and strings, how would we do so? This is where Union helps, as shown in the below example. from typing import List, Dict, Tuple, Union mylist: List[Union[int, str]] = ["a", 1, "b", 2] The above command is perfectly valid, as both int and str are allowed in mylist. We can use Union anywhere, even with variables or in functions as the return type. 
from typing import List, Dict, Tuple, Union

# myVar accepts both integers and strings
myVar: Union[int, str]
myVar = 5
myVar = "Hello"

Other Keywords in the Typing Library

The Typing Library in Python is vast and has extensive documentation; for a complete list of keywords, you can refer to it. Here are two more keywords that might come in handy.

The Any type

As the name implies, this is used to define a variable that accepts "any" type.

from typing import Any

var: Any
var = 5
var = "Hello"
var = 1.23

All three of the above assignments are perfectly valid.

The NoReturn keyword

This keyword is used to indicate that a function never returns normally — for example, a function that always raises an exception. Below is a short code snippet using this keyword.

from typing import NoReturn

def fail(message: str) -> NoReturn:
    raise ValueError(message)

The above function always raises, so it never returns a value; hence we annotate it with NoReturn. Note that a function which simply returns nothing — like one that only prints — should instead be annotated with -> None:

def display(x: int) -> None:
    print(x + 1)

This marks the end of the Python Typing Library Tutorial. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the tutorial content can be asked in the comments section below.
https://coderslegacy.com/python/typing-library/
This is the mail archive of the cygwin mailing list for the Cygwin project. On 09/26/2014 07:36 AM, Mohammad Yaqoob wrote: > When are you releasing 4.1.12-6 > Today. It may be numbered 4.1.13-6, depending on what upstream does in the meantime (Chet has already prepared patch 13 [fixing a parser state leak], but not yet published it), but even without waiting for upstream, I'm already in the middle of building bash with the same patches in use by Fedora (which includes Chet's patch 13, but also an additional patch that Chet is still debating about [avoiding namespace collisions with function exports]), so as to plug CVE-2014-7169. I'm not sure yet if the build will include CVE-2014-7186 and CVE-2014-7187 fixes [both of them a parser buffer overflow], or if there will be a -7 next week. And given the high publicity of the initial CVE-2014-6271, I suspect there may be further fixes coming; needless to say I'm closely following the upstream developments. But I also stand by the Red Hat analysis - the worst exploits are those due to CVE-2014-6271, which is already fixed in 4.1.12-5; the remaining three CVEs are worth fixing, but do not have the same severity, so it is okay to wait a bit longer and get it right than it is to prematurely push something only have to repeat the exercise a day later. -- Eric Blake eblake redhat com +1-919-301-3266 Libvirt virtualization library Attachment: signature.asc Description: OpenPGP digital signature
https://cygwin.com/ml/cygwin/2014-09/msg00400.html
It was back in 2018 we saw the presentation in Vue.js London and later read a post when Evan You (creator of Vue) announced Vue 3. Finally, the beta was released on April 16, 2020, and at the time of writing, we are at version 3.0.0-beta.14. According to the official roadmap, the official release is planned for the end of Q2 2020. That’s why we’ll be reviewing its most important features, those that generated the most commotion within the community, and how we can test this long-awaited beta. Changes and Features In the presentation, Evan You assures that Vue 3 is going to be, among other things: - Faster - Smaller - More maintainable - And ultimately, it’s going to make our lives easier. In this section, we will review the changes and features that make these statements a reality. Complete Rewrite of the Virtual DOM and Optimization of the Rendering It is now twice as fast in both mounting and upgrading using half the memory. In terms of the initial assembly of an application, a test was performed, rendering 3,000 components with state, and the result was as follows: As we can see in the image, it took less than half the time to execute the scripts and used less than half the memory compared to the same test in Vue 2.5. If we focus on the upgrade or patching of the components, where performance was also gained, Vue has become more ‘intelligent’ in discerning which nodes should be re-rendered within the tree. To achieve this, Vue’s team relied on the following: - The generation of slots was optimized to avoid unnecessary re-rendering of child components. - If within a component, there are static and dynamic nodes, only the dynamic ones will be updated, avoiding the update of the whole tree; this was achieved thanks to the hoisting of these static nodes. - If a component has static props but content that is dynamic, now only the dynamic content will be updated, avoiding updating the component itself. This was achieved thanks to the hoisting of static props. 
- If a component has an inline handler, it will now avoid re-rendering if the identity of that function changes. Also, in this context, the need to have a single component as the root has been eliminated. Now we can have several, and automatically the new Virtual DOM will wrap them in a new component called Fragment. Before After A Much Smaller Core and Tree Shaking Vue has always been relatively small; its runtime weight is ~23KB GZipped. For Vue 3, the size has been substantially reduced, thanks to the ‘Tree shaking’ where you can exclude the bundle of code that is not being used. Most of the global API and helpers have been moved to ES module exports. This way, modern bundlers like webpack can analyze the dependencies and not include code that has not been imported. Thanks to these changes, the size of the Vue 3 core is ~10KB GZipped. Goodbye Facebook, Hello Microsoft (Flow > Typescript) Initially, Vue 2 was written in Javascript. After the prototyping stage, they realized that a typing system would be very beneficial for a project of this magnitude, so they decided to start using Flow (javascript superset created by Facebook). Initially, Vue 2 was written in Javascript. For Vue 3, the development team chose to use Typescript (another javascript superset created by Microsoft); this was a very good decision since nowadays the use of typescript in Vue projects is increasing, and since Vue 2 used another typing system, they had to have the typescript declarations separated from the framework source code. Today, with Vue 3, they can generate the declarations automatically, making maintenance much easier. Composition API and Vue Drama Last but not least, we have the so-called Composition API, which comes to move the foundation of the framework. 
In Vue 3, instead of defining a component by specifying a long list of options (Options API), the Composition API allows the user to write and reuse component logic as if they were writing a function, all while enjoying excellent typescript integration. The differences between the two APIs are outlined below: Options API export default { data: function () { return { count: 0 } } methods: { increment: function () { this.count = this.count++ } }, computed: { double: function () { return this.count * 2) } } } Composition API import { reactive, computed } from 'vue' export default { setup() { const state = reactive({ count: 0, double: computed(() => state.count * 2) }) function increment() { state.count++ } return { state, increment } } } Pretty much like React, right? The Darkest Hours for Vue 3 But not everything was rosy for the Vue team. During the first stage of the proposal of this new API, the community was informed that the composition API was going to completely replace the Options API. This triggered what Reddit called ‘The Vue Drama’ or ‘The Vue Darkest Hours.’ A large part of the community opposed this replacement and proposed that both APIs could coexist. The Vue team decided to rework the proposal and reversed those statements, ensuring that both APIs would coexist and, once again, made it clear how much they rely on community and user feedback. How to test this beta To test the beta of Vue 3, we must start with a project in Vue 2. From the terminal, we start a new project. npm install -g @vue/cli vue create my-proyect Then we add the ‘vue-next’ plugin that will install the Vue 3 dependencies in our project and make all the necessary changes. vue add vue-next And that’s it; you can try the new composition API. Easy, right? Conclusion Breaking changes in frameworks can be very stressful. 
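The reactive()/computed() pair in the Composition API example above is powered by JavaScript Proxy objects in Vue 3. As a toy illustration only — this is not Vue's actual implementation — a Proxy can intercept writes and re-run a dependent computation:

```javascript
// Toy sketch of Proxy-based reactivity -- illustrative only, not Vue's code.
function toyReactive(target, onChange) {
  return new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;
      onChange();        // re-run dependents whenever a property changes
      return true;
    }
  });
}

let double = 0;
const state = toyReactive({ count: 0 }, () => { double = state.count * 2; });

state.count = 5;
console.log(double); // 10 -- "double" tracked the change automatically
```

Vue's real implementation also tracks which effects read which properties, so only the affected computations re-run.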
Even though this version of Vue doesn’t feature Breaking Changes in Apiumhub, we think it will be a before and after for this framework since it will change the way we program in Vue. Sooner or later, you will find yourself in front of a Vue 3 project using the new API, better to start now that you have more time to adapt.
https://graphicdon.com/2020/12/17/discovering-vue3-changes-and-features/
Jagged arrays are arrays of arrays, often pictured as two-dimensional arrays whose rows have different lengths. You can think of each element of such an array as holding another array, and these inner arrays can each have a different size — it is not required that all elements hold arrays of the same size. Jagged arrays are also known as ragged arrays in Java. In this tutorial, we will learn different ways to create a Jagged array in Java and different examples to understand it better.

Creating a Jagged Array :

The Jagged Array is nothing but an array of arrays, so we can create it like other arrays. If the elements of the array are known to us, we can declare and initialize the array in one step like below :

int[][] myArray = {{1,2,3},{4,5},{6}};

Here, we have created one Jagged array of three elements. Each element is an array of integers. The first element is {1,2,3}, the second element is {4,5}, and the third element is {6}. In this case, we know the elements of the array, so it becomes easier for us to create it. If we don't know the elements, then we need to declare it first and initialize it later. Suppose we know that the array will hold 3 elements, but don't know what they are. Then we will first declare it like below :

int[][] myArray = new int[3][];

After the declaration, we can initialize its first, second and third elements as below :

myArray[0] = new int[]{1,2,3,4};
myArray[1] = new int[]{5,6};
myArray[2] = new int[]{7};

Read and store elements in a dynamic sized Jagged array :

We can create differently sized or dynamically sized Jagged arrays, i.e. the size can be different for each array element.
Below program will show you how to create one dynamic sized array by taking the size of the array from the user : import java.util.Scanner; class Main { public static void main(String[] args) { int sizeOfArray; //1 Scanner scanner; scanner = new Scanner(System.in); //2 System.out.println("How many elements your array will hold : "); sizeOfArray = scanner.nextInt(); //3 int[][] myArray = new int[sizeOfArray][]; //4 for (int i = 0; i < sizeOfArray; i++) { //5 System.out.println("Enter element count for column " + (i + 1)); int count = scanner.nextInt(); myArray[i] = new int[count]; //6 for (int j = 0; j < count; j++) { System.out.println("Enter element " + (j + 1)); //7 myArray[i][j] = scanner.nextInt(); } } } } Explanation : The commented numbers in the above program denote the step number below : - We are not declaring the size of the array. sizeOfArray variable is initialized to store the size. - Create one Scanner object to read user inputs. - Ask the user to enter the size of the array. Read the number and store it in sizeOfArray. - Now, create the Jagged array same as user input size. - Run one for loop. This loop will run for the size of the array times. Each time ask the user how many elements are there for that specific row. - Read the element count and store it in count variable. Create one integer array of size count for that particular row. - Similarly, run one for loop, ask the user to enter an element for that row and read all the array elements for all the rows. This program will read all elements for a Jagged array and store it in an array object. 
The output of the program will look like this :

How many elements your array will hold :
3
Enter element count for column 1
2
Enter element 1
1
Enter element 2
2
Enter element count for column 2
3
Enter element 1
1
Enter element 2
2
Enter element 3
3
Enter element count for column 3
1
Enter element 1
56

The above example will create the below array :

{{1,2},{1,2,3},{56}}

How to print the elements of a Jagged array :

Since the Jagged array is actually a 2D array, we can use two nested for loops to print all the contents of the array. The below example shows how :

class Main {
    public static void main(String[] args) {
        int[][] myArray = {{1, 2, 3}, {4, 5}, {6, 8, 9}};

        for (int i = 0; i < myArray.length; i++) {
            System.out.println("Elements stored in position " + i);
            for (int j = 0; j < myArray[i].length; j++) {
                System.out.print(myArray[i][j] + " ");
            }
            System.out.print("\n");
        }
    }
}

It will print the below output :

Elements stored in position 0
1 2 3
Elements stored in position 1
4 5
Elements stored in position 2
6 8 9

Conclusion : In the example above, we have learned what a Jagged array is and how to read and print the contents of a Jagged array. The jagged array is useful for organizing similar data in an application. You can try the above example problem, and if you have any question, please drop a comment below. Similar tutorials : - Java Linear Search : search one element in an array - Java program to convert string to byte array and byte array to string - Java program to find the kth smallest number in an unsorted array - Java program to remove element from an ArrayList of a specific index - How to remove elements of Java ArrayList using removeIf() method - Java program to sort an array of integers in ascending order
CC-MAIN-2020-40
refinedweb
862
61.67
Hibernate Infinispan Entity/Query 2nd-Level cacheNuno Gonçalves Jul 31, 2012 11:45 AM I've been banging my head against the wall for a few days with this issue. We are trying to implement Hibernate's second-level cache, using Infinispan. The application is running on JBoss AS 6, and using JTA transactions. On our persistence.xml we have: ... <!-- JTA configurations --> <property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.JBossTransactionManagerLookup" /> <property name="current_session_context_class" value="jta" /> <!-- Infinispan configurations --> <property name="hibernate.cache.use_second_level_cache" value="true" /> <property name="hibernate.cache.use_query_cache" value="true" /> <property name="hibernate.cache.region.factory_class" value="org.hibernate.cache.infinispan.InfinispanRegionFactory"/> <property name="hibernate.cache.infinispan.cachemanager" value="java:CacheManager/entity"/> ... as defined On our understanding this is the kind of cache that we need in order to do the following: Use case one: We have records on a Database which will hold reference data. This data won't be changed for long periods of time (we hope ). We want to cache these records as they are likely to be queried a lot. And as users query this data there won't be a need to go to the DB, since it should be cached. For this case, is the cache type query cache, or entity cache? Being the query always the same, my understanding it's query cache as que query is supposed to return always the same results. My query: List<MyEntity> list = session.createCriteria(MyEntity.class) .add(Restrictions.eq("id", 1)) .setCacheable(true) .list(); Use case two: A user gets a specific record from the DB, and he might update it. We would like this entity (or list of entities) to be saved on the user's session (login session) cache so if he updated this entity on the client, we wouldn't need need to make a select before the update. 
In this case, since we are saving specific entities, it's considered entity caching, right? If we want to store the For that we're using: @Cacheable (true) @Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL) public class MyEntity implements Serializable { ... } Am I making these assumptions correctly? If not, what is the approach here? I gues I'm making a big mess out of this. 1. Re: Hibernate Infinispan Entity/Query 2nd-Level cacheGalder Zamarreño Aug 20, 2012 6:59 AM (in response to Nuno Gonçalves) Re 1. Both entity and query will be used. Query to maintain when the entities involved in the query were last updated. And entity for the actual results. Re 2. Yeah, entity cache. Btw, to make your life easier, I'd suggest using AS7 if using Infinispan 2LC.
https://developer.jboss.org/thread/203379
CC-MAIN-2019-18
refinedweb
439
51.04
Multi-thread with java.nio without file.walk I was following this tutorial and I was wondering if there is a way to multithread the Files.walkFileTree() function?: this isn't a duplicate because I could use files.walk() but the issue with file.walk() is that it failed when there is a AccessDeniedException. I want to use walkFileTree because of visitFileFailed() function in SimpleFileVisitor class.(); } - READ format in FORTRAN I am trying to read a vector components from an external file: imposed.dat (just one line, numbers separated by commas): 0.0585560952144390,0.121244222730239,0.148358440672875,0.169973557163769,0.188894991200908,0.6376108145572 I can't provide a MWE as the code is large, but this is a piece of it: OPEN (91,file= 'imposed.dat',form='formatted',status='old') read(91,*) CU1P(:) write(*,*) 'read once imposed profile...' DO J=2,5 write(*,*) 'printf CU1P(J) ',CU1P(J) END DO where CU1P is defined as REAL*8 CU1P(0:129) The error I get is forrtl: severe (24): end-of-file during read, unit 91, file /data/forcing/imposed.dat I have read files like this in the past, so I don't know what's going on, did I miss anything? - PyTorch dataloader performance slump on HDD I've just built a new PC for DL and I'm testing it on the official Imagenet example from PyTorch. I'm seeing reasonable performance when the dataset resides on my SSD ( GoodRam IRDM Pro 240GB SATA3 (IRP-SSDPR-S25B-240)), but it becomes ridiculously slow on my HDD ( Toshiba P300 (HDWD120UZSVA)). It seems to be all about the DataLoader. Of course, SDD is expected to perform better than HDD, but I don't think this workload should even be bottlenecked by disc read (on other machines I used, it was either GPU bound, or CPU bound due to preprocessing), let alone to this extent. 
To investigate, I wrote a quick iterator wrapper to time the dataloader calls def time_iter(iter): while True: start = time.time() item = next(iter) print('Yielding in', time.time() - start) yield item except StopIteration: break And checked the results by evaluating a pretrained net on Imagenet. The results on HDD: jatentaki@Dzik:~/Programs/pytorch-examples/imagenet$ python main.py --pretrained -e -b 768 -j 10 ~/2tb/Datasets/ILSVRC2012/ => using pre-trained model 'resnet18' Yielding in 94.4384298324585 Test: [0/66] Time 97.014 (97.014) Loss 0.6302 (0.6302) Prec@1 82.943 (82.943) Prec@5 95.573 (95.573) Yielding in 0.00038623809814453125 Yielding in 0.00019431114196777344 Yielding in 0.0001766681671142578 Yielding in 0.0002028942108154297 Yielding in 0.00017595291137695312 Yielding in 0.00017023086547851562 Yielding in 0.000179290771484375 Yielding in 0.00019288063049316406 Yielding in 0.00017714500427246094 Yielding in 85.04550909996033 Test: [10/66] Time 85.408 (16.858) Loss 1.1352 (0.8930) Prec@1 69.661 (77.190) Prec@5 91.927 (92.779) Yielding in 0.15804052352905273 Yielding in 0.00020647048950195312 Yielding in 2.0329136848449707 Yielding in 0.00020360946655273438 The ~90 second spikes consistently appear every 9 iterations. 
On SDD: jatentaki@Dzik:~/Programs/pytorch-examples/imagenet$ python main.py --pretrained -e -b 768 -j 10 ~/FastDatasets/ => using pre-trained model 'resnet18' Yielding in 11.228104829788208 Test: [0/66] Time 14.272 (14.272) Loss 0.6302 (0.6302) Prec@1 82.943 (82.943) Prec@5 95.573 (95.573) Yielding in 0.00038361549377441406 Yielding in 0.00030112266540527344 Yielding in 0.0002224445343017578 Yielding in 0.0002486705780029297 Yielding in 0.00018787384033203125 Yielding in 0.0002593994140625 Yielding in 0.00020194053649902344 Yielding in 0.0003197193145751953 Yielding in 0.00019288063049316406 Yielding in 3.4066810607910156 Test: [10/66] Time 4.013 (1.946) Loss 1.1352 (0.8930) Prec@1 69.661 (77.190) Prec@5 91.927 (92.779) Yielding in 1.5148968696594238 Yielding in 0.0003371238708496094 Yielding in 0.0002467632293701172 Other diagnostics: iotop total disk read peaks at 12-14M/s for HDD and 100-120M/s for SDD. hdparmresults: jatentaki@Dzik:~$ sudo hdparm -Tt /dev/sda1 /dev/sda1: # SSD Timing cached reads: 20976 MB in 2.00 seconds = 10503.39 MB/sec Timing buffered disk reads: 512 MB in 1.02 seconds = 501.77 MB/sec jatentaki@Dzik:~$ sudo hdparm -Tt /dev/sdb1 /dev/sdb1: # HDD Timing cached reads: 19484 MB in 2.00 seconds = 9755.75 MB/sec Timing buffered disk reads: 586 MB in 3.01 seconds = 194.69 MB/sec What is a possible cause of the problems? I am considering improper installation/synergy between the hardware pieces (I'm a total hardware noob) or some issues with how the DataLoader works. Another possibility is improperly configured OS (Xubuntu 18.04 LTS). Apparently there's a lot of caching going on (all the 0.000...s yields), could it go wrong? System: Xubuntu 18.04 LTS PyTorch: 0.4.1, from Anaconda, with CUDA 9.2 and 396 nvidia driver. GPU: GTX 1080 Ti - f.tell() returns an exorbitant number While reading a text file line by line with readline()at some point tell()returns a number that is much, much larger that the total file size. 
Here is my code: import io with open(test_fpath, 'r', encoding='utf8') as f: size = f.seek(0, io.SEEK_END) f.seek(0) while f.tell() < size: previous_position = f.tell() line = f.readline() if f.tell() > size: print('size: {}'.format(size)) print('position: {}'.format(f.tell())) print('previous_position: {}'.format(previous_position)) print('line read: {}'.format(repr(line))) break Which yields: size: 125348811 position: 18446744073804856772 previous_position: 95305153 line read: '\n' What could be the reason for this strange behaviour? I am on Windows 7, Python version is 3.6.6.
http://quabr.com/48757657/multi-thread-with-java-nio
Longest palindromic substring using Palindromic tree

Given a string, we are required to find the longest palindromic substring. Ex: s = "abbbbamsksk". The longest palindromic substring is abbbba. There are many approaches to solve this problem, including DP in O(n^2) time (where n is the length of the string) and the complex Manacher's algorithm, which solves the problem in linear time. In this article we will be looking into solving the problem using a palindromic tree. We need to learn how the tree works in order to get the longest palindromic substring. A palindromic tree is a data structure that was introduced a few years ago. The tree closely resembles a directed graph. The main idea is that a palindrome is a palindrome with a character added before and after it. Ex: babab is a palindrome formed as a result of adding 'b' before and after aba, which, in turn, is a palindrome. One thing that can be inferred here is that for a palindrome of length 'l' there must be a palindrome of length l-2 inside it, and a palindrome of length l+2 can be formed around it. For example, for ababa, it is aba before it and cababac after it. Now you might think, what about "a"? This is still a palindrome. For this, we consider an imaginary string of length -1 (-1 + 2 = 1). There are two important edges that can be defined for the tree: - Insertion Edge (weighted) - Maximum palindromic suffix edge (non-weighted). Insertion Edge An insertion edge from u to v with a weight of X indicates that v is formed by adding X before and after u. Maximum palindromic suffix As the title suggests, an edge from u to v indicates that v is the maximum palindromic suffix of u. Of course, u is trivially a palindromic suffix of itself, but to avoid the complexity of a self loop, we simply omit that. (In the figure, the blue line represents the maximum palindromic suffix edge.) Apart from the ordinary nodes, there are two more which are the root nodes of this entire tree.
The first root node has length =-1, as discussed above (a, where length is -1 + 2). We assume that there is an imaginary string of length -1 and add some character, let's say 'a' on both sides, hence we call it the imaginary root. We also have the special case of an empty string where the length is 0, let's call it real root. The longest/ maximum palindromic suffix of the real root is the imaginary root, because it can't be the node itself. And for the imaginary root, the maximum palindromic suffix will be itself (self-loop) as we can't go any higher. Construction of the palindrome tree We will process the string one character at a time. At the end of the string, we are left with all the distinct palindromes of the given string.Throughout the program we refer to a node called 'current' which refers to the last inserted node. Let's consider our string S, that has the length n. We have inserted upto k characters starting from 0, now for the next k+1 th character, let's suppose s[k+1] = 'a'. To insert this 'a', we need to find an X such that aXa is a palindrome. Also, note that our current node holds the longest palindrome up to the index k. This node in turn holds other nodes which are the longest palindromix suffix. The search for X starts with current node itself, if it is the suffix we are looking for, then good, else we traverse down till we find the X, such that aXa is a palindrome. Look at the below image for better understanding. Example Consider string S="abb". - Start with S[0] = 'a', as mentioned before our tree has two roots. Insertion always starts with the current node, in our case is the imaginary root. Inserting 'a' on an imaginary root with length -1 will yield us a string of length 1. Hence we have an insertion edge from imaginary root to the new node 'a', whose suffix will link to the the empty string, i.e real root. 
- Now for S[1] = 'b', we start with the current node which is 'a', we will traverse the suffix chain till we find X, such that bXb is a palindrome, this brings us back to the imaginary root. And as in the previous one, the suffix of the 'b' is the real root. - Now for the final character S[2]='b', we start from the current node, we will traverse till we find X, in this case X is an empty string, i.e we will stop at the real root. Adding 'b' to an empty string on both sides gives us "bb" which is the longest palindromic substring. Implementation Storing all the palindromes will be really inefficient in terms of memory. The question is do we really have to have the strings stored. The answer is No. We are simply creating a structure that would hold the required information. This structure has start, end that holds the start and end indexes of the current node inclusively. The length stores the length of the substring. We are maintaining an array of integers to store insertion edges whose size is 26 (from a to z). Every time we are creating a new node, the insertionEdge of the weight of the vertex will be updated to ptr which stores the node value to which the edge is pointed to. In addition we are having a suffix edge variable which is called max_suffix, a current_node that has the last inserted character. The ptr value for root1 (imaginary root) is 1 and for root 2(real root) is 2. Both the roots are of type structure and their values are initialized at the beginning of the program. We also have a tree[] which is an array of structure to store the entire data required for the construction of the palindromic tree. Now to the insert(), this function is called in the main() in a loop inserting every single character of the input string. This function can be divided into two parts: - find X - checking if s[current_index] + X + s[current_index] already exists. 
As explained above, we always start with the current node and check whether adding s[current_index] will make it a palindrome or not. For this, we simply have to compare the character just before that palindrome, s[current_index - current_length - 1], with s[current_index]. If they are the same, then we have found our X; else we have to go to the current node's suffix edge and repeat the same process.

int temp = current_node;
while (true) {
    int current_length = tree[temp].length;
    if (current_index - current_length >= 1 &&
        (s[current_index] == s[current_index - current_length - 1]))
        break;
    temp = tree[temp].max_suffix;
}

Now, we check if s[current_index] + X + s[current_index] already exists. We can do so by checking if temp (which holds X) has an insertion edge with the label s[current_index]. If so, we update the current node, else we create a new node.

if (tree[temp].insertionEdge[s[current_index] - 'a'] != 0) {
    current_node = tree[temp].insertionEdge[s[current_index] - 'a'];
    return;
}

For the new node, ptr will be incremented and all the variables in the structure will be filled accordingly. For the maximum suffix of a node, the ptr value of the suffix will be stored in max_suffix. For example, if the string is of length 1, then its max_suffix will be 2; ptr = 2 is basically the real root. For finding max_suffix we repeat the same process of finding X.
Let's see the code implementation:

#include <bits/stdc++.h>
#define MAXN 1000
using namespace std;

struct node {
    int start, end;
    int length;
    int insertionEdge[26];
    int max_suffix;
};

node root1, root2;
node tree[MAXN];
int current_node;
string s;
int ptr;

void insert(int current_index) {
    // Find X such that s[current_index] + X + s[current_index] is a palindrome
    int temp = current_node;
    while (true) {
        int current_length = tree[temp].length;
        if (current_index - current_length >= 1 &&
            (s[current_index] == s[current_index - current_length - 1]))
            break;
        temp = tree[temp].max_suffix;
    }

    // The palindrome already exists as a node; just update the current node
    if (tree[temp].insertionEdge[s[current_index] - 'a'] != 0) {
        current_node = tree[temp].insertionEdge[s[current_index] - 'a'];
        return;
    }

    // Otherwise create a new node for s[current_index] + X + s[current_index]
    ptr++;
    tree[temp].insertionEdge[s[current_index] - 'a'] = ptr;
    tree[ptr].end = current_index;
    tree[ptr].length = tree[temp].length + 2;
    tree[ptr].start = tree[ptr].end - tree[ptr].length + 1;
    current_node = ptr;

    // A single character's maximum palindromic suffix is the empty string (real root)
    if (tree[current_node].length == 1) {
        tree[current_node].max_suffix = 2;
        return;
    }

    // Find the maximum palindromic suffix of the new node
    temp = tree[temp].max_suffix;
    while (true) {
        int current_length = tree[temp].length;
        if (current_index - current_length >= 1 &&
            (s[current_index] == s[current_index - current_length - 1]))
            break;
        temp = tree[temp].max_suffix;
    }
    tree[current_node].max_suffix = tree[temp].insertionEdge[s[current_index] - 'a'];
}

int main() {
    root1.length = -1;
    root1.max_suffix = 1;
    root2.length = 0;
    root2.max_suffix = 1;
    tree[1] = root1;
    tree[2] = root2;
    ptr = 2;
    current_node = 1;

    s = "abb";
    for (int i = 0; i < (int)s.size(); i++)
        insert(i);

    // The longest palindrome is the node with the maximum length,
    // not necessarily the node created last
    int longest = 3;
    for (int i = 3; i <= ptr; i++)
        if (tree[i].length > tree[longest].length)
            longest = i;

    for (int i = tree[longest].start; i <= tree[longest].end; i++)
        cout << s[i];
    return 0;
}

Output

bb

Note that the node created last is not necessarily the longest palindrome in the string (for "abac" the last node created is "c"), which is why the code above scans all the nodes for the maximum length before printing. Time Complexity: O(n), where n is the size of the string. You might have a doubt regarding the process of finding X — does that increase the complexity? The number of iterations needed to find the value of X is roughly constant when compared to the size of the string, hence we can ignore it.
Comparison with other approaches As mentioned in the beginning, there are a number of approaches that vary not only in time complexity but also in core logic and, subsequently, implementation. - We have used a palindromic tree here; we could also go with a brute force approach that finds every substring and then checks whether it satisfies the palindrome condition. The time complexity for this is O(n^3) and the space complexity is O(1). - We can solve this using Dynamic Programming as well; here we completely omit the palindrome-checking procedure. We work bottom-up, maintaining a boolean table entry for every substring. The time complexity is O(n^2) and the space complexity is O(n^2). - This problem can be solved in linear time with Manacher's algorithm. In this algorithm we find a palindrome by starting from a center of the string and comparing characters in both directions one by one; as long as the corresponding characters on both sides match, the palindrome extends. Now that we have at least a gist of the other approaches, we can see the obvious differences in time complexities and core logic. Except for the DP approach (and the palindromic tree itself, which stores its nodes), the other methods don't use extra space. The palindromic tree can also be handy for problems other than the current one, including finding the number of occurrences of each subpalindrome in the string and the number of palindromic substrings. Even the DP table can be used to find the number of palindromic substrings. The remaining approaches have one sole purpose, which is to find the longest palindromic substring — this includes Manacher's too.
Enjoy. Learn more: - Longest Palindromic Substring using Dynamic Programming by K. Sai Drishya - Manacher's Algorithm by Piyush Mittal - Palindromic Tree (Eertree) by Yash Aggarwal - Find minimum number of deletions to make a string palindrome by Abhiram Reddy Duggempudi - List of Dynamic Programming problems at OpenGenus - List of Data Structures at OpenGenus
https://iq.opengenus.org/longest-palindromic-substring-using-palindromic-tree/
CC-MAIN-2021-17
refinedweb
1,968
72.56
29 April 2010 21:51 [Source: ICIS news] HOUSTON (ICIS news)--NYMEX light sweet crude for June delivery settled at $85.17/bbl on Thursday, up $1.95 in response to euro gains against the dollar and expectations of rising gasoline demand. The weekly supply statistics from the Energy Information Administration (EIA) revealed an unexpected drawdown in gasoline inventories in spite of a rise in refinery operations. The rally came despite the same EIA report showing inland supplies of crude oil at the Cushing, Oklahoma, hub. June crude surged to $85.46/bbl, up $2.24, before a portion of the gains was given back ahead of the closing bell. June Brent continued to trade at a steep premium to NYMEX crude, rising to $87.66/bbl before retreating to close at $86.90/bbl, up 74 cents/bbl
http://www.icis.com/Articles/2010/04/29/9355271/us-crude-rises-1.95bbl-on-dollar-exchange-gasoline-demand.html
CC-MAIN-2014-35
refinedweb
137
68.47
ReleaseMutex This function releases ownership of the specified mutex object. - hMutex [in] Handle to the mutex object. The CreateMutex function returns this handle. Nonzero indicates success. Zero indicates failure. To get extended error information, call GetLastError. The ReleaseMutex function fails if the calling thread does not own the mutex object. A thread gets ownership of a mutex by specifying a handle to the mutex in one of the wait functions. The thread that creates a mutex object can also get immediate ownership without using one of the wait functions. When the owning thread no longer needs to own the mutex object, it calls the ReleaseMutex function. While a thread has ownership of a mutex, it can specify the same mutex in additional wait-function calls without blocking its execution. This prevents a thread from deadlocking itself while waiting for a mutex that it already owns. However, to release its ownership, the thread must call ReleaseMutex once for each time that the mutex satisfied a wait function call. Each object type, such as memory maps, semaphores, events, message queues, mutexes, and watchdog timers, has its own separate namespace. Empty strings, "", are handled as named objects. On Windows desktop-based platforms, synchronization objects all share the same namespace.
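A minimal Win32 sketch of the ownership rules described above (illustrative only; error handling is trimmed, and it uses an unnamed mutex so no other process can contend for it):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Create an unnamed mutex; the creating thread does NOT own it
       because bInitialOwner is FALSE. */
    HANDLE hMutex = CreateMutex(NULL, FALSE, NULL);
    if (hMutex == NULL) {
        printf("CreateMutex failed (%lu)\n", GetLastError());
        return 1;
    }

    /* Acquire ownership twice from the same thread; this does not
       deadlock, but each satisfied wait must be balanced by a
       ReleaseMutex call. */
    WaitForSingleObject(hMutex, INFINITE);
    WaitForSingleObject(hMutex, INFINITE);

    /* ... access the protected resource ... */

    ReleaseMutex(hMutex);   /* balances the second wait */
    ReleaseMutex(hMutex);   /* balances the first wait; ownership released */

    /* A further ReleaseMutex now fails, because the calling thread
       no longer owns the mutex. */
    if (!ReleaseMutex(hMutex))
        printf("ReleaseMutex failed as expected (%lu)\n", GetLastError());

    CloseHandle(hMutex);
    return 0;
}
```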
https://msdn.microsoft.com/en-us/library/aa908800.aspx
CC-MAIN-2016-44
refinedweb
206
58.48
Hello, I've been playing with this for some time and it's interesting; I'm in the process of rewriting an app I had in VB. I'm also moving between Windows and Mac as development machines. I moved my code from Windows XP with Flex 3.02 (an Adobe AIR app using AIR 1.5) to the Mac (10.6.2) with Flex 3.01, which is AIR 1.0. The basic code is below. One line works on Windows and not on the Mac, and just the opposite for the Mac code. Could this be because the Mac AIR version is 1.0 and the Windows one is 1.5? I have not updated the Mac to 3.02 as I've read that it has problems with 10.6.2 Snow Leopard. I updated my Mac Flex version to match that of my Windows box (3.02) and now it works... go figure. What do you think? Thanks for any help. Rob

import mx.managers.FocusManager;

private function keyinit():void {
    addEventListener(KeyboardEvent.KEY_DOWN, interceptEnterKey);
}

private function interceptEnterKey(evt:KeyboardEvent):void {
    if (evt.keyCode == 13) { // Enter key
        // Windows code - works; goes to the next textbox as defined by tab index order.
        focusManager.moveFocus(mx.events.FocusRequestDirection.FORWARD);
        // Mac code - will work with this.
        focusManager.getNextFocusManagerComponent(false).setFocus();
    }
}
https://forums.adobe.com/thread/544358
CC-MAIN-2017-30
refinedweb
218
72.22
Judge Somers 11-11425 Schupbach Investments, LLC and Amy Marie Schupbach (Doc. # 461) - Details - Category: Judge Somers - Published on 20 June 2013 - Written by Judge Somers In Re Schupbach Investments, LLC and Amy Marie Schupbach, 11-11425 (Bankr. D. Kan. Jun. 20, 2013) Doc. # 461 SO ORDERED. SIGNED this 20th day of June, 2013. For on-line use but not print publication IN THE UNITED STATES BANKRUPTCY COURT FOR THE DISTRICT OF KANSAS In re: SCHUPBACH INVESTMENTS, LLC, CASE NO. 11-11425 CHAPTER 11 DEBTOR. MEMORANDUM OPINION AND JUDGMENT DENYING DEBTOR'S OBJECTION TO ASSUMPTION AND ASSIGNMENT OF LIFE INSURANCE POLICIES By order filed on November 21, 2012, the Court approved the Creditors’ Plan of Liquidation Dated July 24, 2012 (Plan).1 Generally, the Plan provides for liquidation of the Debtor through transfer of all secured property to the respective secured creditors.2 The primary collateral is approximately 165 parcels of real property, mortgaged to 12 1 Dkt. 355. 2 Dkt. 294. Case 11-11425 Doc# 461 Filed 06/20/13 Page 1 of 8 different secured creditors. As additional collateral, “certain of the secured creditors,”3 hold assignments of insurance policies on the lives of Debtor’s principles, Jonathan I. and Amy Schupbach. As to this collateral, the Plan, in Article 6, Executory Contracts and Unexpired Leases, provides in part: Further, to the extent the Debtor owns any life insurance to which any creditor holds an assignment or lien, such life insurance will also be assumed and assigned to the creditor as of Confirmation and such assignment will be considered part of ARTICLE 5. The Debtor or the Schupbachs may object to the assumption and assignment of any life insurance policies owned by the Debtor within 21 days of Confirmation. 
If such an objection is filed, the Court will determine the issue.4 Article 5 of the Plan provides as to each creditor holding mortgages on real property, confirmation shall vest the real property in the secured creditor free and clear of all rights of Debtor. On December 10, 2012, Debtor’s Objection to Assumption and Assignment of Life Insurance Policies and Memorandum Brief in Support was timely filed.5 Debtor re-filed the same document on February 6, 2013.6 Rose Hill State Bank (Rose Hill) responded to 3 See dkt. 374 (Debtor’s Objection to Assumption and Assignment of Life Insurance Policies and Memorandum Brief in Support). Only Rose Hill State Bank, the holder of an assignment of life insurance, has objected to Debtor’s attempt to invalidate the assignments. 4 Dkt. 294, as amended by dkt. 355. 5 Dkt. 374. 6 Dkt. 414. both filings.7 After a hearing held on March 14, 2013, the matter was placed under advisement.8 FINDINGS OF FACT. Rose Hill’s amended proof of claim9 is for approximately $2.7 million, secured primarily by real estate and by other property, including the assignment of a life insurance policy. By Commercial Security Agreement dated January 25, 2010, Jonathan Schupbach assigned “LIFE INSURANCE POLICY #Z03160011, FROM WEST COAST LIFE INSURANCE COMPANY, DATED 1/25/10, IN THE AMOUNT OF $500,000.00 ON THE LIFE OF JONATHAN I SCHUPBACH” to Rose Hill to secure all sums advanced by the bank to Debtor Schupbach Investments, LLC.
In that assignment document, Jonathan Schupbach agreed to “protect the Property [the pledged rights under the insurance policy] and Secured Party’s [Rose Hill Bank’s] interest against any competing claim.” He also authorized Rose Hill “to do anything Secured Party deems reasonably necessary to protect the Property and Secured Party’s interest in the Property.” The liquidating Chapter 11 Plan confirmed by the Court provides that the life insurance policy owned by the Debtor and pledged to Rose Hill shall be assumed by the 7 Dkts. 417 and 436. 8 This is a core proceeding which this Court may hear and determine as provided in 28 U.S.C. § 157(b)(2)(A) and (L). There is no objection to venue or jurisdiction over the parties. 9 Claim 4-2, part 3. Debtor, be assigned to Rose Hill, and become property of Rose Hill free and clear of all rights of the Debtor. Rose Hill anticipates a substantial deficiency after liquidating its collateral other than the life insurance.10 DEBTOR’S OBJECTION AND ROSE HILL’S RESPONSE. Debtor objects to this treatment of the life insurance collateral and requests the Court to deny the assumption and assignment of the policies. The sole basis for the objection is the creditors’ alleged lack of insurable interest. Debtor relies upon K.S.A. 40-453(a), which provides: (a) Determination of the existence and extent of the insurable interest under any life insurance policy shall be made at the time the contract of insurance becomes effective but need not exist at the time the loss occurs. K.S.A. 40-453(a) has been construed by the Kansas Supreme Court as establishing the Kansas public policy that “ongoing consent” of the insured is required for a life insurance policy for a stated term to remain in effect.11 Rose Hill responds that there is no evidence either that the policy assigned to it comes within the terms of the statute or that the insured has requested termination in writing.
Rose Hill also argues that K.S.A. 40-453(a) does not address assumption and 10 See dkt. 436. 11 In re Marriage of Hall, 295 Kan. 776, 782, 286 P.3d 210, 214 (2012). assignment in bankruptcy, but, to the extent it operates as a restriction on assumption and assignment, it is unenforceable under 11 U.S.C. §365(f)(1). DISCUSSION. Assignments of life insurance policies as collateral for a debt are called conditional assignments, or collateral assignments.12 “Ordinarily only part of the ownership rights are transferred, and seldom the right to change the beneficiary, and always on condition that upon payment of the debt the rights return (or revert) to the owner-assignor.”13 In Kansas, K.S.A. 40-439 preserves the right to assign a life policy by providing “no provision in any . . . law shall be construed as prohibiting a person whose life is insured under a policy of group life . . . or the policyowner of an individual life . . . policy from making an assignment of all or any part of his rights and privileges under such policy. . ..” “Most assignments of life insurance contracts are made for the purpose of providing additional security in a loan transaction.”14 “It is uniformly held that a creditor has an insurable interest in the life of his debtor.”15 “Without the power of assignment life insurance contracts would lose much of their value.”16 12 1 William F. Meyer and Franklin L. Best, Life & Health Insur. Law § 11:13 (2nd ed.), available on Westlaw at LHINSUR §11:13 (database updated Aug. 2012). 13 Id. 14 Id. 15 Butterworth v. Mississippi Valley Trust Co., 362 Mo. 133, 139, 240 S.W.2d 676, 680 (Mo.1951). 16 Id., 362 Mo. at 145, 240 S.W.2d at 684.
Through the liquidating Plan, the Debtor’s secured creditors receive the collateral securing their loans. As to the life insurance collateral, this is accomplished through Debtor’s assumption of the life insurance contracts followed by assignment of the contracts to the secured party. Assumption and assignment of executory contracts are governed by § 365. Only contracts which remain materially unperformed on both sides as of the date of filing are executory contracts.17 A life insurance policy is an executory contract where on the date of filing premiums remain to be paid, the policy has not expired by its own terms, and the insured is living.18 Debtor asks the Court to deny assumption and assignment of the policy. But Debtor cites no bankruptcy law in support. Debtor does not argue that the life policy on the date of filing was outside the scope of executory contracts which may be assumed under § 365, either because all premiums had not been paid by the owner or the policy had expired on the petition date. Rather, Debtor relies exclusively upon K.S.A. 40453( a), quoted above. And there is no evidence that the insured prepetition exercised rights under K.S.A. 40-453(a) by requesting in writing the insurer to terminate or not renew the policy. If the insured, Jonathan Schupbach, as Debtor contends, held the right 17 3 Collier on Bankruptcy ¶365.02[2][a] and [e](Alan N. Resnick & Henry J.Sommer eds.-in-chief, 16th ed. rev. 2013). 18 LifeUSA, Ins. Co. v. Green (In re Green), 241 B.R. 187, 202-03 (Bankr. N.D. Ill. 1999) aff’d 259 B.R. 295 (N.D. Ill 2001) aff’d 42 Fed. Appx. 815 (7th Cir. 2002). 6 Case 11-11425 Doc# 461 Filed 06/20/13 Page 6 of 8 under Kansas law to unilaterally terminate the policy not withstanding the assignment, the Court finds that this unexercised right is insufficient to render the contract nonexecutory. The Court therefore finds that the policy pledged to Rose Hill Bank was an executory contract on the date of filing. 
Even though Debtor concedes that the policy was in force on the date of filing, Debtor nevertheless argues that Jonathan Schupbach’s rights under K.S.A. 40-453(a) can be exercised postpetition, thereby precluding assumption and assignment. But, assuming Debtor’s motion is based upon a valid legal theory, it fails for lack of proof. Debtor has failed to prove that the statute applies to the policy assigned to Rose Hill. The statute governs only policies which are issued or renewed for a specific term, but there is no evidence that the policy satisfied this condition. A copy of the policy has not been provided to the Court, and there is no evidence whether it is for a specific term. Perhaps if the alleged right had been exercised at some time after the petition date, so there no longer is a policy to assume, this would support Debtor’s objection. But again, Debtor has failed to provide evidence to support this position. There is no evidence that Jonathon Schupbach has sent written notice to the insurer and that the insurer has terminated the policy. Absent such termination, the policy exists and is subject to assumption and assignment. Such failure of proof makes it unnecessary for the Court to address the 11 U.S.C. § 365(f)(1) issue raised by Rose Hill or state law issues, such as the law of commercial transactions, which might be relevant if Jonathan Schupbach had attempted to terminate the policy. The foregoing constitute Findings of Fact and Conclusions of Law under Rules 7052 and 9014(c) of the Federal Rules of Bankruptcy Procedure which make Rule 52(a) of the Federal Rules of Civil Procedure applicable to this matter. JUDGMENT. Judgment is hereby entered denying Debtor’s Objection to Assumption and Assignment of Life Insurance Policies. The judgment based on this ruling will become effective when it is entered on the docket for this case, as provided by Federal Rule of Bankruptcy Procedure 9021.
IT IS SO ORDERED. ###
http://www.ksb.uscourts.gov/index.php/kansas-bankruptcy-court-opinions/judge-somers-opinions/2108-11-11425-schupbach-investments-llc-and-amy-marie-schupbach-doc-461?showall=1&limitstart=
CC-MAIN-2014-35
refinedweb
1,970
63.8
28 February 2018 0 comments Python, Django, Javascript

This is a quick-and-dirty how-to on how to use csso to handle the minification/compression of CSS in django-pipeline.

First create a file called compressors.py somewhere in your project. Make it something like this:

import subprocess
from pipeline.compressors import CompressorBase
from django.conf import settings


class CSSOCompressor(CompressorBase):

    def compress_css(self, css):
        proc = subprocess.Popen(
            [
                settings.PIPELINE['CSSO_BINARY'],
                '--restructure-off'
            ],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
        )
        css_out = proc.communicate(
            input=css.encode('utf-8')
        )[0].decode('utf-8')
        # was_size = len(css)
        # new_size = len(css_out)
        # print('FROM {} to {} Saved {} ({!r})'.format(
        #     was_size,
        #     new_size,
        #     was_size - new_size,
        #     css_out[:50]
        # ))
        return css_out

In your settings.py where you configure django-pipeline make it something like this:

PIPELINE = {
    'STYLESHEETS': PIPELINE_CSS,
    'JAVASCRIPT': PIPELINE_JS,

    # These two important lines.
    'CSSO_BINARY': path('node_modules/.bin/csso'),
    # Adjust the dotted path name to where you put your compressors.py
    'CSS_COMPRESSOR': 'peterbecom.compressors.CSSOCompressor',

    'JS_COMPRESSOR': ...

Next, install csso-cli in your project root (where you have the package.json). It's a bit confusing: the main package is called csso, but to get a command-line app you need to install csso-cli, and once that's installed you'll have a command-line app called csso.

$ yarn add csso-cli

or

$ npm i --save csso-cli

Check that it installed:

$ ./node_modules/.bin/csso --version
3.5.0

And that's it!

--restructure-off

So csso has an advanced feature to restructure the CSS and not just remove whitespace and unneeded semicolons. It costs a bit of time to do that so if you want to squeeze the extra milliseconds out, enable it. Trading time for space. See this benchmark for a comparison with and without --restructure-off in csso.
Why csso, you might ask

Check out the latest result from css-minification-benchmark. It's not super easy to read, but it seems the best-performing one in terms of space (bytes) is crass, written by my friend and former colleague @mattbasta. However, by far the fastest is csso when using --restructure-off. Minifying font-awesome.css with crass takes 326.52 ms versus 3.84 ms in csso. But what's great about csso is Roman @lahmatiy Dvornov. I call him a friend too for all the help and work he's done on minimalcss (not a CSS minification tool by the way). Roman really understands CSS, and csso is actively maintained by him and other smart people who actually get into the scary weeds of CSS browser hacks. That gives me more confidence to recommend csso. Also, squeezing a couple extra bytes out of your .min.css files isn't important when gzip comes into play. It's better that the minification tool is solid and stable. Check out Roman's slides which, even if you don't read them all, go to show that CSS minification is so much more than just regex-replacing whitespace. Also, crass admits as one of its disadvantages: "Certain "CSS hacks" that use invalid syntax are unsupported". Follow @peterbe on Twitter
https://api.minimalcss.app/plog/csso-and-django-pipeline
CC-MAIN-2020-16
refinedweb
512
59.9
The Quality of Life for the World’s Poorest Can Be Advanced Farther, Faster, Cheaper and More Surely Through Adaptation than Through Zero-Carbon Technologies Guest Post By Indur M. Goklany A few days ago, Tom Nelson had a link to a blog posted by Mr. Bill Gates titled, Recommended Reading on Climate Change, in which he claims that the risk of “serious warming” from anthropogenic climate change is large enough to justify action. Mr. Gates adds, .” Over the years I have been very impressed by Mr. Gates’ desire and efforts to improve the quality of life for the world’s poorest people and to literally put his money where his mouth is, but the notion that “even moderate warming could cause mass starvation and have other very negative effects on the world’s poorest 2 billion people” is fundamentally flawed. And there are far better and more effective methods of improving their quality of life than through squandering money on zero-carbon technologies. So, to make these points, I fashioned a response to Mr. Gates’ post, but was frustrated in my efforts to post it either on the specific thread or via the General Inquiry form at his website. Accordingly, I decided to write Mr. Gates an open letter to convey my thoughts. The letter follows. I thank Mr. Watts for publishing it on his invaluable blog. ——————————– Dear Mr. Gates, For a long time I had admired your perspicacity and acumen in trying to address some of the world’s truly important problems (such as malaria and hunger) rather than signing on to the latest chic causes (e.g., global warming). But having read your entry, “Recommended Reading on Climate Change” at, on the Gates Notes, I fear my admiration may have been premature. First, the analytical basis for the notion that “even moderate warming could cause mass starvation and have other very negative effects on the world’s poorest 2 billion people” is, to put it mildly, weak. 
Virtually all analyses of the future impacts of global warming impose the hypothetical climate of tomorrow (often for the years 2100 and 2200) on the world of yesterday (most use a baseline of 1990). That is, they assume that future populations’ capacity to cope with or adapt to climate change (also known as “adaptive capacity”) will be little changed from what it was in 1990! Specifically, they fail to consider that future populations, particularly in today’s developing countries, will be far wealthier than they were in the baseline year (1990), per the IPCC’s own emissions scenarios. Thus, developing countries’ adaptive capacity should by 2100 substantially exceed the US’s adaptive capacity today. Figure 1: Net GDP per capita, 1990-2200, after accounting for losses due to global warming for four major IPCC emission and climate scenarios. The net GDP per capita estimates are extremely conservative since the losses from global warming are based on the Stern Review’s 95th percentile estimates. Source: Goklany, Discounting the Future, Regulation 32: 36-40 (Spring 2009). And Figure 1 does not even consider secular technological change, which over the next 100 years would further increase adaptive capacity. [Since you have been in the forefront of technological change for quite some time now, you probably appreciate better than I that no confidence should be placed on the results of any analyses that assume little or no technological change over a period of decades.] For instance, the analyses of food production and hunger ignore the future potential of genetically-modified crops and precision agriculture to reduce hunger, regardless of cause. These technologies should not only be much more advanced in 2100 (or 2200) than they are today, but they should also be a lot more affordable even in the developing world, because those countries will be wealthier (see Figure 1) while the technologies should also become more cost-effective.
In any case, because future increases in adaptive capacity are largely ignored, future impact estimates are grossly exaggerated, including any findings that claim there will be “mass starvation” from “even moderate warming”. Second, even if one uses these flawed analyses that grossly exaggerate global warming impacts, one finds that the contribution of global warming to major problems like cumulative mortality from hunger, malaria and extreme events should be relatively small through the foreseeable future, compared to the contribution of non-global warming related factors. See Figure 2. Figure 2: Deaths in 2085 Due to Hunger, Malaria and Extreme Events, with and without Global Warming (GW). Only upper bound estimates are shown for mortality due to global warming. Average global temperature increase from 1990-2085 for each scenario is shown below the relevant bar. Source: Goklany, Global public health: Global warming in perspective, Journal of American Physicians and Surgeons 14 (3): 69-75 (2009). Figure 2 also tells us that eliminating global warming, even if possible, would reduce mortality in 2085 by, at most, 13% (under the warmest, A1FI, scenario). On the other hand, there are adaptive approaches that could address 100% of the mortality problem (including the contribution of global warming to that problem). The first such approach is focused adaptation, i.e., adaptive measures focused specifically on reducing vulnerability to climate sensitive threats. The rationale behind focused adaptation is that the technologies, practices and systems that would reduce the problems of, say, malaria or hunger, from non-global warming related causes would also help reduce the problems of malaria and hunger due to global warming. See “Climate Change and Malaria”. The second adaptive approach is to remove barriers to and stimulate broad economic development. This would reduce vulnerability to virtually all problems, climate-sensitive or not. 
That this approach would work is suggested by the fact that, by and large, wealthier countries have lower (age-related) mortalities regardless of the cause (and, therefore, higher life expectancies). The fundamental principle behind these adaptive approaches is that global warming mainly exacerbates existing problems rather than creating new ones. If we solve or reduce vulnerability to the underlying problem — think malaria, hunger or extreme events for “focused adaptation” and the general lack of adaptive capacity for “broad economic development” — then we would also be reducing vulnerability to the contribution of global warming to that problem. As shown in Table 1, human well-being would be advanced a lot more cost-effectively through either of the two adaptive approaches than by curbing global warming. Table 1: Comparing costs and benefits of advancing well-being via emission reductions (mitigation), focused adaptation, and broad economic development. MDGs = Millennium Development Goals. Entries in red indicate a worsening of human or environmental well-being. Source: Goklany, Is Climate Change the “Defining Challenge of Our Age”? Energy & Environment 20(3): 279-302 (2009). So, if you want to advance the well-being of the poorest countries, you could advance it farther, more surely and more cheaply through adaptive approaches than through zero-carbon technologies. Adaptive approaches would also advance well-being more rapidly, since curbing warming is necessarily a slow process because of the inertia of the climate system. I also note from your blog posting that you appreciate that quality of life is dependent on energy use. Given this, I would argue that for developing countries, increasing energy use should have a much higher priority than whether it is based on non-zero carbon technologies.
With regards, Indur Goklany Website:; E-mail: [email protected] ———————————————————— REFERENCES (in which the ideas advanced in this letter are more fully developed) 1. Deaths and Death Rates from Extreme Weather Events: 1900-2008. Journal of American Physicians and Surgeons 14 (4): 102-09 (2009). 2. Climate change is not the biggest health threat. Lancet 374: 973-75 (2009). 3. Global public health: Global warming in perspective. Journal of American Physicians and Surgeons 14 (3): 69-75 (2009). 4. Discounting the Future, Regulation 32: 36-40 (Spring 2009). 5. Is Climate Change the “Defining Challenge of Our Age”? Energy & Environment 20(3): 279-302 (2009). 6. What to Do about Global Warming, Policy Analysis, Number 609, Cato Institute, Washington, DC, 5 February 2008. 7. Climate Change and Malaria. Letter. Science 306: 55-57 (2004). 79 thoughts on “An Open Letter to Mr. Bill Gates” Bill Gates writes: “And everybody agrees that CO2 absorbs infrared radiation from the sun, which tends to produce a greenhouse effect.” Afraid not, Bill. Everybody agrees that the surface absorbs shortwave radiation from the sun and the surface then emits it as infrared radiation which is absorbed primarily and overwhelmingly by atmospheric water vapor. I’ve met Bill Gates and seen him speak on several occasions. Statements like this make his genius look like it’s limited to software architecture and monopoly-building.
In order to do this we cannot afford to throttle the extant goose that’s laying the golden eggs (fossil fuel consumption) before we have another, more productive goose. Gates might be ill-informed about the risks (small) and benefits (large) of increased atmospheric CO2 but regardless of that he appears to be focusing on the right path to take in regard to future energy production and distribution. He’s doing the right thing for the wrong reasons. I tend to regard CAGW as having some value in that it tends to light a few fires under efforts to find better ways of providing the energy needed to improve living standards and net global productivity. It’s the abuse of CAGW by the power/money hungry (scientific establishment, politicians, governments, paper traders) and the well intentioned but disastrously naive environmentalists that makes it not something where the ends justify the means. Warm is better than cold. From the Medieval Warm Period, which was hotter than today, we note that that was the time when the great cathedrals of Europe were built because the countries were wealthier than in cooler times like the Little Ice Age – food production is much easier when it is warm. During the LIA was the time for witch burning – people were starving and blamed poor old women because the crops failed. CO2 is having the positive effect of increasing crop yeilds. Another brilliant post by Indur Goklany. His contributions to the debate are always extremely well argued and valuable. Caveat: I believe the ultimate answer to the energy problem isn’t in finding vastly better ways to produce and distribute virtually unlimited amounts energy but rather in finding vastly more efficient ways of utilizing energy to produce the things we need to sustain and improve global living standards. Nano-technology, particularly in the form of modifying and harnessing the molecular machinery in microscopic forms of life, is the next great leap in technology. 
This will provide us with fundamentally new ways of producing things with hugely lower energy and labor costs. The cool thing is there’s very little invention required. It’s all a matter of reverse engineering the molecular machinery in extant living things – a technology that’s been mature, tested, and proven for billions of years. It’s a technology served up for us on a silver platter and we’re just now on the cusp of understanding it well enough to begin exploiting it to its almost unimaginably large potential. This open letter appeals to many of the sentiments of those who regularly come to this site, distracting us from its subtle message (the relentless message of recent days), that man is causing the earth to warm. Is this the end of WUWT? I see no reason to detract from Bill Gates using his own resources to develop zero-carbon energy. Far better for it to be done successfully by Gates than ineptly by government meddling. I only hope he brings it to market quickly before government wastes our money trying to. Claims by a software billionaire are just that: claims. He sees himself as a savior of mankind, now that he has milked the computer world of its cash. The best way to help the world’s poorest is to allow the generation of electricity using cheap fossil fuels, coal being the cheapest. Development of the third world would see the reduction of birth rates, as child mortality rates tumbled given good health care. It is ridiculous to claim that so-called eco-friendly power generation, like wind or solar, will be the future. Neither provides what we require as a reliable system for power generation, and in every country using wind power they have found that not only does it fail to exceed a few percent of total generation, it can cause distribution problems and requires backup by nuclear or fossil fuel. So to save the environment we must forget wind power, due to its resource-hungry nature, and rely on the backup that is in use now.
Solar is just as small a provider, and attempts to exceed 9% have increased costs by many times, making it too expensive for underdeveloped countries. It will only supply power for up to 12 hours a day, and then only when cloud cover is at a minimum. PS: I do not get any money from the fossil fuel companies!
Mr Gates should get a grip. In the third world he is simply treating the symptoms of bad governance and makes no attempt to resolve the cause of the misery there. With AGW he will be funding poor science at the expense of real science. If I were starting up a new business enterprise built on technological advances, Mr Gates would be at the top of my list of people to go to for advice. When it comes to AGW or third world development, I reckon my mate down the street has as much to offer. Let us not forget that charity is always about the donor, never about the recipient.
And best misuse of the word 'literally' goes to: 'Mr Gates' desire … to literally put his money where his mouth is …' Hmm. Hope he's got a big mouth! The day this sceptic wants to adopt a fluffy dream rather than hard reality is the day I curl up my toes.
I agree with Philip Thomas. The insidious AGW propaganda has found its way onto the pages of this site. Lately, too many articles published here assume by default that there is global warming, and that it is substantially anthropogenic. This assumption is false. All Mr. Gates has ever said and written in his life were politically correct platitudes. I'd rather listen to another successful businessman, the head of the famous Ryanair (though I don't necessarily agree with his choice of expressions):
Thank you for writing this letter, Indur. It encourages me to place more faith in my own feelings that Mr Gates has lost his way on the climate trail; a puzzling development in a man I feel is exceptionally admirable in all he has given us from his very first operating system. Even if AGW were happening, the solutions you propose are the reasonable response.
If any warming is taking place quite naturally, this would only enhance the value of the responses you propose. If we are now cooling, again your responses would be of value. The more I consider Mr Gates' quoted words (the risk of "serious warming" from anthropogenic climate change is large enough to justify action), the more puzzled I become. By now I would have expected a man of innovation, business and goodwill (as demonstrated by his charity) to have sensed that the backing away of the carbon freebooters meant scepticism should be built into his thinking; that the continuing changes of terminology used by the lobbies to describe the proposition, from "global warming" down the scale to far less emotive terms, should have sounded a warning that all was not as settled as originally trumpeted; and that the failure of grandiose schemes to save the poor and starving, as pursued by so many of the world's charities, should have been a major caution against doing "good" as they do it. Mr Gates, by his own hand, has created wealth and respect almost beyond imagination. It will be a great shame if he squanders this through a slip in concentration.
Great letter. As all governments are essentially conflicted (by the prospect of new revenue streams and/or captive markets) and the pressure and advocacy groups essentially corrupted ("we know who you are; we know where you live" indeed!), it is difficult to see where to turn for the dispassionate, open-minded analysis that we all in our own way seek – on future energy sources and policies. I for one would be interested in Mr Gates' views, following receipt of this letter.
One of the great benefits of adaptation measures is that they are helpful if the world warms for any reason. Carbon reduction is helpful only in the questionable case that warming is caused by CO2 emissions.
Prince Charles, that well-known scientist and all-round intellectual, gives us his view on the topic: "I would say to sceptics: 'It may be convenient to believe all these greenhouse gases we are pouring into the atmosphere disappear through holes conveniently into space, but it doesn't work like that.'" Thank you for that, Your Royal Highness; I am now convinced and will never fly again. Such an aperçu, such penetration. [snip] Charlie has been the best advert for republicanism for many years now. And, I regret to say, I agree with some other posters here that WUWT seems to be becoming rather tepid re CAGW. Speak to us Anthony, please!
"Philip Thomas says: September 11, 2010 at 1:33 am." Agree with the above. There is plenty of evidence that CO2 either makes no contribution to atmospheric warming or its contribution is swamped by other effects such as clouds. Gerlich & Tscheuschner (2009) have falsified the greenhouse concept in the frame of physics and thermodynamics, and again in March 2010 from hydrodynamics and thermodynamics in deriving barometric formulae; Chilingar et al. in "Cooling of the Atmosphere Due to CO2 Emission" (Energy Sources 30, Jan 2009) considered lapse rates on Earth and Venus using measured temperatures; then there are the numerous articles showing CO2 lags temperature in the long term by 800 ± 200 years, in the shorter term (20 to 100 years) by 1 to 5 years, and daily by two to four hours. The tropical hot spot from models which include CO2 causing warming does not exist. On this website there have been posts about raw temperature data showing no significant increase since 1900. Why accept anything from so-called (but better, pseudo-) scientists who manipulate data, leave out scientific laws, and twist conclusions to suit their own purpose? The truth is that the AGW alarmists do not understand thermodynamics, heat transfer, fluid dynamics, statistics or economics. Bill Gates has been conned.
Maybe he lost things somewhere.
I started his 1995 book "The Road Ahead" years ago when it was first given to me. I note the bookmark at page 136, where I gave up. It happens to many that they achieve early then fizzle out. Einstein did his best work prior to World War 1.
If Gates' energy solutions run as efficiently as his software, which is his specialty, then we're doomed. The "Green" screen of death will become the new standard.
Allanj beat me to it. Adaptation saves the poor from human and/or natural temperature rises/falls and all their consequences. Bill Gates risks wasting a lot of money on the new carbon fashion, resulting in the waste of a lot of poor lives. Please stick with the old-fashioned but very effective war on hunger and malaria for maximum result. Less cool, but so what.
Philip Thomas: Is this the end of WUWT? My thoughts exactly. Has WUWT been got at? Will Al Gore be invited to make a guest post soon?
Dave Springer says: September 11, 2010 at 1:22 am
Just like fusion technology. Science is vastly corrupted by many sources. This is why our knowledge base cannot look at any actual physical evidence, as it interferes with the many careers based on bad science. Do we have any open forums for good technology to be looked at? No. They must go through a very rigorous and expensive process funded by institutions (funded by government) or governments. We have yet to incorporate a circle with motion. By golly, the planets do this.
From Bill Gates: "As I said at TED, my dream is to create zero-carbon technologies that will be cheaper than coal or oil." Statements like that make me nervous. Not the "cheaper than coal or oil" part but the zero-carbon mindset that could be inferred. CO2 is a good thing. I worry about the dunderheads that want to reduce or eliminate it. How will we support all the additional agriculture needed in a more populous world without additional CO2? I'm all for global warming because it sure beats the alternative.
It's amazing what electricity does for third world countries. I wonder why Gates wants to deny them that? Why would you want to deny poor people the ability to have a better life? I wonder what it is that rots people's brains. Do these folks actually think destroying every tree in the forest to cook food and keep warm is a better alternative?
Bill, you stole everything from Gary Arlen Kildall and you know it! Everything you stand for is a sham.
I like Indur's post. And why should we believe Mr. Gates on CAGW? Didn't he say that 640k would be enough memory for anybody? His ego will never let him admit that he may be wrong. Microsoft didn't invent the web browser either. The Mozilla group, working for Netscape, made the original web browser. Neither did Microsoft invent the office suite. Microsoft copies someone else's ideas and attempts to make them better. Bill Gates started that legacy a long time ago. Bill Gates is smart, but just because he made his fortune in software does not make him a brilliant programmer.
Larry Fields (re: "… by 2100 the average inhabitant of developing countries would be more than twice as wealthy as the average US inhabitant in 2006 …"): "Sounds like pie in the sky to me." From 2006 to 2100 is 94 years. One could do a quick reasonableness check. Hong Kong was a developing country 94 years ago (1916). The average income of HK now is $US31,420 (Atlas method) or $US44,070 (PPP method) (actually 2009). The average US taxpayer's income in 1916 was ??? about 300 ??? "in 2005 constant dollars" – maybe the average was a lot less. Since I don't know what a "2005 constant dollar" is, I can't complete the calculation, but the "pie in the sky" isn't glaringly obvious?
The problem with Africa is the NGOs. How can a country advance when the NGOs provide all for free? Try opening a shoe factory or a clothing factory in Africa – impossible. Why?
Because the 'do-gooders' of this world make sure you donate all your used garbage to the poor and starving. A TV program in Italy at the time of the Naples garbage strikes followed a trainload of garbage that was supposed to go to a disposal site in Germany, at a cost to the Italian taxpayer of millions of dollars. The trains all ended up in the port of Hamburg, where the garbage was loaded onto a ship along with 29,000 tonnes of other garbage and headed for the Ivory Coast. When asked, the captain said his company had 11 ships on this garbage run to Africa. I ask: why not save the Italian taxpayer millions of dollars and ship it from Naples? NGOs!
I feel empowered. Even Bill Gates isn't any smarter than the average useful idiot concerning AGW. He doesn't seem to know CO2 increases crop yields (and water-use efficiency), not decreases them.
I am fairly optimistic about the medium-term economic progress of our country, but economists who predict the economies of rather unstable developing countries over periods of 100+ years do not inspire confidence. Effects of climate change are also likely to be highly variable. Some countries may be devastated while others may be unaffected or even benefited. Overall, the idea that we can predict the costs of climate change, the costs of mitigation and economic development throughout the world lacks credibility.
Wade says: September 11, 2010 at 4:46 am: "…" That's because Apple hired the Xerox guy who did the GUI at Palo Alto, Alan Kay. What's your point? Companies buy talent and ideas and refine them; Microsoft bought an OS core called QDOS from Tim Paterson; do you think they shoved it out the door to IBM like that, without refining and improving it? And do you think a "brilliant programmer" never steals an idea? How about Linus Torvalds? He created Linux as a clone of MINIX because he didn't agree with MINIX's licensing terms. He had the expressed goal of creating a drop-in replacement.
"…cheaper than coal or oil.
That way, even climate skeptics will want to adopt them…" At last somebody understands me!
Alberta Slim says: September 11, 2010 at 4:44 am: "[…] Didn't he say that 640k would be enough memory for anybody? His ego will never let him admit that he may be wrong." Do you have a citation? Gates strongly denies ever having said that.
Ziiex Zeburz (September 11, 2010 at 5:16 am): The 'problems' with charities and NGOs are discussed to an extent in this interesting animation looking at the ethical implications of charitable giving. I certainly found it thought-provoking.
DirkH says: September 11, 2010 at 5:37 am: "Citation on Mr. Gates……………" No. Sorry about that. I never checked. I just quoted another internet lie. I will be more cautious in future.
AS: "Statements like this make his genius look like it's limited to software architecture and monopoly-building." His genius is limited to what he can copy from Apple and Steve Jobs. [do you expect a comment where you accuse him of copying to get published? Please be more careful in what you say ~jove, mod]
Philip Thomas is exactly right. I am fully prepared to change my mind, if and when there are quantifiable measurements produced showing conclusively that X emission of anthropogenic CO2 causes X rise in T. But so far there is almost total reliance by the believers in AGW on always-inaccurate computer models, which take the place of the non-existent raw data supporting their conjecture. Looking at the many charts of CO2 and temperature [I have dozens like this], it is clear that whatever the climate sensitivity to CO2 may be, it must be very small, certainly less than 1°C. If it were large, temperature would closely track CO2. We know that on all time scales a rise in temperature causes a rise in CO2. But there is no empirical, testable evidence showing that a rise in CO2 causes a rise in temperature. That hypothesis is more of a conjecture. "Everyone" knows it's true — but where is the real world, replicable, testable evidence?
The 'evidence' is found in climate models, which are, of course, not evidence at all. The late, great John Daly shows here that the trumped-up claim of climate sensitivity to CO2 is unsupportable and easily deconstructed. Purveyors of the CO2=CAGW conjecture have yet to produce solid, testable evidence backing their specious claim. Until/unless they do, they are practicing non-science, pseudo-science, anti-science, and all the other projection-based accusations the alarmist crowd hurls at scientific skeptics — who need to prove nothing, and only ask for convincing evidence that "AGW" even exists in a measurable, testable quantity. AGW could be a fact. Or it may be based on an entirely coincidental, spurious correlation. At this point, AGW is simply a conjecture lacking convincing empirical evidence. So let us hold the climate alarmists' feet to the fire, and demand that they begin to take the Scientific Method seriously: they must provide testable, real-world, replicable evidence, based on raw data, showing that the rise in a minor trace gas is causing global warming.
Years ago, when Gates was called before Congress, he said he had always been non-political in his business. Not a direct quote, but I think close. But after that he changed.
Sacrifice is evil. Altruism is a suicidal impulse. Preach self-sacrifice and you'll lose any intelligent audience.
Bill Gates wants to do the right thing. Therein lies the problem: determining what is right. There's always the "better to teach a man to fish than to give him a fish" way. But that method requires a great deal of careful consideration, and just giving stuff away is much easier, especially if you need something to point to for your accomplishments.
Verity Jones says: September 11, 2010 at 5:46 am: Very thought-provoking, but it didn't seem to offer a solution.
Verity Jones, thanks for the clip – it is spot on!
The "global warming" fraud is adopted and promoted exclusively by rich people and big organizations, and western governments and their various agencies that have tons of cash to spend. The richer they are, the more they shill for it.
In a hundred years I expect countries to be burning coal in specially built furnaces to produce CO2, having converted to nuclear energy (hopefully) in the meantime and hence finding that more CO2 is necessary for food production. The "funny" aspect in this is of course the hell-bent-for-leather drive to reduce CO2 emissions now.
I have often wondered about the effect on Mr. Gates had he been born three years earlier. Despite his intellect, I suspect he'd have become a great professor teaching math. It's not the genes, it's the matching of the genes with the environment. If the environment in three years' time is enough to make that big a difference in the world's social evolutionary development, worrying about infinitesimal (conjectured) climate change over hundreds of years seems odd – the impact of a future "Mr. Gates" being many times more powerful than AGW. Now, in battling that conjectured CAGW notion, the only driving position one needs to address is the claim that it is a scientifically proven fact that CO2 will warm the planet.
Once again, Keith Battye sees the issue most clearly. Mr. Gates would do the most good for inhabitants of the third world if he were to invest his money in the furtherance of good governance, the establishment of rules for and protection of property rights, a shift from the rule of men to the rule of law, and the expansion of micro-capital programs. Everything else follows.
Gates is a thief. His father advocates and funds eugenics.
Philip Thomas, Alexander Feht, RichieP, cementafriend, et al. Uh oh – to be denounced by the true believers in AGW because I don't buy the mitigation myth, only to be disavowed by skeptics because I argue adaptation is the way to go!
Life in the middle — or, as some may say less charitably, "a muddle" – is never easy. More seriously, this is a letter, not a thesis, so I have to be brief and to the point within the context of the post that instigated this. If you want to know about my broader views on AGW, please see the references that were provided, and my website (which is reasonably up-to-date) at. My latest thinking on global warming (and developing countries) is probably best captured in the paper, Trapped Between the Falling Sky and the Rising Seas: The Imagined Terrors of the Impacts of Climate Change, at. I think the title captures my broader views quite succinctly.
Second, implicit in the letter is the argument that even if one accepts the science according to the IPCC and its estimates of future emissions and global warming impacts, there is no policy case that can be made for pushing mitigation in general and zero-carbon emission technologies in particular. Adaptive approaches are much better and more effective methods of advancing human well-being than pushing them would be.
The criticism I have of my own letter is that it fails to point out that additional CO2 can have — as pointed out by H.R. and beng, for instance – positive impacts. In fact, even the IPCC has stated that little to moderate warming could be a net benefit to the world at large. For references, see the first page of the paper at. My reason for not getting into this is that I wanted to keep my letter brief and focused on the main point, namely that, regardless of everything, adaptation, broadly defined, is the way to go, and mitigation, by and large, is a loser for human well-being.
Finally, just because Anthony is kind enough to post my musings from time to time, my views are not necessarily Anthony's. In fact, what I like about Anthony is that he lets others speak without dictating a specific line of thought. Gotta go now, but will have additional responses later on.
Verity Jones – thanks for that clip.
I and my wife have been involved with a charity that is located in Haiti. What is said in that clip, I see there. For too long I have been troubled by the "We can't interfere with their culture" mentality – as if living in a pile of garbage is a culture. Elevation – the lack of want, by development – is the way up. Yet there are those who don't see, either by design or ignorance, Oscar Wilde's point: what good is it if you fix someone's problem yet they still live in a dump? Maybe this is why Gates is teamed up with Toshiba for the 4S reactor design. I think there is a time when you have to say that the "prime directive" goes out the window for the good of the planet and its people. It can be done….
Indur M. Goklany says: September 11, 2010 at 9:56 am
Well said, and thank you, sir, for your post! My gut tells me that efforts at charity by Mr. Gates and others of the super-rich elitist group are 'generally' founded on the following premise: "Gawd!! I have a ton of money, how should I spend it?" These people then distribute impressive amounts of support to people who need it, and again I say 'generally', as a social salve (how can you not feel somewhat guilty knowing you have more money than you can possibly spend in a dozen lifetimes?). 'Impressive' is a relative term, to most of us. Stated as a percentage of their total income, we might not be as impressed (apologies: I do not have the time right now to research the figures … anyone else? :-). I see the advance of charities from the super-rich intelligentsia position as a statement of "I am evolved, I am intelligent, I will give money where it is needed". All right and proper in and of themselves, truly. I could have included in the aforementioned list, "I know what must be done, better than most (if not all), and associating with others that believe as I do, we will get it done". And this without regard to the "betterment" of ALL mankind: the impoverished. They would rather siphon more money into (their?)
industry than expend more to relieve suffering – something they could do … right now. However, Mr. Gates now raises the zero-carbon technology meme, one that we all know has intrinsic weaknesses at current capabilities and technology. But THIS stands to be a money-maker for the long term. He steps from charity for the many, which has no net return (other than social), to an endeavor of industry that will change (save?) the world, and no doubt one that will make a few people richer. Will anyone disagree that these guys are capitalistic / entrepreneurial / adversarial at their core? These people have enough money right now to "fix" the planet: end hunger, poverty, poor sanitation, and disease (for the most part, anyway). I am not just referring to personal income but also the immense resources at their disposal to do what they say they want to do. My mind tells me that this, at least in part, is absolutely NOT what they want to do, regardless of what they say. Otherwise they would shut up and just do it: fix the planet. The information / TRUTH is out there; they have better / quicker access to it than most of us. To think they are not aware of it is just plain naivete. It's an agenda. Mr. Gates et al. need to stop and listen to the likes of Mr. Goklany, to heed his admonitions and drop the pretense. Will they? Hope for the best … plan for the worst.
Indur, yes, sitting on the fence can be uncomfortable. Unfortunately, your letter to Mr. Gates strikes me (and many others) as an obsequious and useless exercise in convenient white lies. Mr. Gates shills for establishment ideology because he is a part of that establishment. He wouldn't know CO2 from H2S. He just signs whatever his staffers prepare to "join the chorus of the moment." You play along with a delusion that Mr. Gates actually has some informed opinions on the AGW subject? Thereby you play along with the AGW crowd, period.
Re: the idea that Anthony is "selling out", I think that is ridiculous.
He is allowing voices with various shades of opinion to speak, and I am not so insecure in my own opinions that I only want to hear messages directed at a certain choir. I'd rather read about those various shades here, and get the reactions to them from an audience I know from experience to be well-informed and thoughtful, than at blogs where I can only count on the audience being robotic followers of propaganda. There is no other blog like this anywhere on the Web. The thousands who frequent it are testimony to its success and effectiveness. Anthony is doing something very right, and without WUWT, God only knows how great the despair we'd all feel at not being able to find an oasis of common sense and conviviality.
Indur Goklany is attempting to predict the future and in doing so (correctly, of course) wants Bill Gates to move away from zero-carbon technology (is that really even possible?) and toward other things, like better agricultural methods. As any "real" skeptic will tell you, you cannot predict the future, either way. For example: today's poor will be far richer 100 years from now. What if they aren't, even in a cooling world? What if there is a serious breakdown within the economies of the rich countries? Who will feed the poor, let alone advance their standards? This letter to Bill Gates (who, yes, seems off-track) is typically well-written and meaningless garbage. "Future populations … will be far wealthier than today". I give that one a 50-50 just because I'm feeling generous. Zero-carbon technology: there is a saying that a fool and his money are soon parted. AGW supporters, meet Mr. Gates.
DirkH: "Linus Torvalds .. created Linux as a clone of MINIX because he didn't agree with MINIX's licensing terms." Linux was never a clone of Minix. There is a fundamental technical difference (monolithic kernel versus microkernel) that was hotly debated by Torvalds and Tanenbaum from the moment that Linux first emerged. Mr.
Gates also said that no one would ever need more than 64kb of memory. So much for the visionary Mr. Gates.
"Since you have been in the forefront of technological change …" Mr. Gates has been on the forefront of technological stagnation and monopolistic business practices that would make the old-style industrial robber barons blush. There is something very wrong with an economic system that results in us spending huge amounts of our time trying to make billionaires' products do what they were designed to do.
CO2 does not drive climate. We need to remind people what the phrase "Climate Optimum" means – healthier and more abundant life forms because of higher temperatures than today's. The conflict is still going today, and "climate change" is one of the battlegrounds.
When Indur Goklany says "Over the years I have been very impressed by Mr. Gates' desire and efforts to improve the quality of life for the world's poorest people" I can agree with him on motive, but Bill Gates' efforts would be much more helpful if they were aimed at increasing freedom rather than authority. There are many ways in which this can be done; the excellent micro-loans programmes are an example.
Larry Fields and Steve from Rockwood:
1. Both of you raise the issue of what if developing countries are not as wealthy in the future as projected — not "predicted" – under the IPCC scenarios? I get this argument quite frequently – usually from folks who want to push emission reductions. [Clearly, Steve, you are not one of those.] My response to this is that if developing countries' economic growth is not as rapid as the IPCC projects, then CO2 emissions will be lower than it projects, as will the amount of anthropogenic warming, and any resulting impact of that warming (as estimated by the flawed impacts models).
2. What if, moreover, the world cools, as Steve from Rockwood asks? RESPONSE: My adaptive approaches are not specific to warming or cooling.
They would help societies adapt regardless of the direction of warming, because both build adaptive capacity and resiliency.
Mike Jonas: "Bill Gates' efforts would be much more helpful if they were aimed at increasing freedom rather than authority." RESPONSE: Couldn't agree with you more. Freedom, particularly economic freedom, is a necessary ingredient of economic growth. And, in my opinion, we cannot be free if we do not have economic freedom. See The Improving State of the World: Why We're Living Longer, Healthier, More Comfortable Lives on a Cleaner Planet.
Bill Gates. Isn't he the one that said the internet wouldn't amount to much? I am sure glad he never touched any of my servers and Unix stuff in my company. It is so much faster and more reliable than his low-quality software.
How about an honest look at the eugenics movement roughly a hundred years ago, and how it has been building up steam once again? The Gates family was never about being kind to others. It's about not wanting to share the planet with what they consider inferior people. Bill and Melinda Gates have the right to make copies of their DNA, because they are so darn special, while his so-called humanitarian work sterilizes women without their consent. Read, for instance, a recent article on Eugenics: The Secret Agenda at
Mike Jonas says: (September 11, 2010 at 5:23 pm) "… more helpful if they were aimed at increasing freedom rather than authority. There are many ways in which this can be done, the excellent micro-loans programmes are an example." Fully endorse that, Mike. Mr. Gates would have done us a greater service were he to devote his energies toward being a venture capital investor, further encouraging new and innovative solutions. Becoming an 'enlightened' businessman and donning the mantle of noblesse oblige is just a short, slippery slope toward an attitude of master toward the lesser mortals he surveys from his high perch.
If Mr Gates wishes to help mankind, he should bankroll the building of a large bio-dome (maybe buy an old football stadium and gut the inside). The dome should be divided into 4 identical wedges, each insulated from the other. Each should be furnished with various vegetation, rocks, soil and water pools (all identical). Two diagonally opposite each other should be filled with plain air, the other two with double the amount of CO2. Then it's a matter of taking T readings throughout the day. Within a few months, we will have empirical evidence of CO2's effect on climate. By the way, whatever happened to those bio-domes built back in the '60s and '70s? I wonder if there is any data from those still in existence.
I'm sure Gates is well aware of the truth concerning CAGW, but being a part of that group, publicly he toes the line and follows the religion. There is more than enough wealth and resources in the world today for the elite to make real change in the third world if they wanted to. But the reality is they just want to appear to be helping while pushing their depopulation/eugenics agenda.
AGW belief is a business kiss of death! From Enron to Gore, by falling into such a focus, Gates and (by proxy) MS's leadership and acumen become suspect. Like the former, MS may soon reap the consequences. Gates/MS could not have done many things more to call into question their grasp on what it takes to focus on their future success. My own speculation is PG&E will be another victim; scratch this a bit, and perhaps you'll find that their embrace of and lobbying for the scam indirectly leads to other bad business practices (like failing to diagnose and fix a neighborhood San Bruno, CA gas leak when multiple customers call in with gas-smell odor reports). Hey, in this case global warming (or at least the embrace thereof) may have led to increased fires?
tarpon says: September 11, 2010 at 4:26 am ……..
____________________________________________________________
Speaking of the human history of genocide, the good old USA was still practicing it in the 1970s when DDT was banned. "…" If those in power in the "greatest free" country, the USA, were sterilizing its own citizens; if the same country just recently funded spermicidal corn and may be shipping it to poor countries in their humanitarian "care packages"; if the current US science adviser coauthored a book advocating forced birth control and stating "…" (source); and if more recent information suggests DDT was not the health hazard it was hyped to be, then there is good reason to think the worldwide removal of DDT was done for reasons of genocide and not for environmental reasons. The occasional peeks behind the curtains of power reveal some really warped thinking. Bill Gates has recently joined that club and has been showing signs that he is aligning his thinking with that of the rest of the group. Remember, if the world's corporate and banking leaders decide he is getting out of line, his company is history. They can essentially shut him down.
It's sad, really. Bill could do so much good with his wealth. Instead he chooses folly over philanthropy. I don't begrudge him his 60,000-square-foot house or other baubles of extreme wealth, but he might fund common sense instead of irrational hysteria.
tarpon said: In 1972, about two million people died from malaria, worldwide. In 2008, about 880,000 people died from malaria, worldwide. That's fewer than half the mortality of the year the U.S. stopped DDT spraying on cotton. If it's cause-effect you were trying to establish, I think you missed.
Gail Combs says: September 12, 2010 at 4:48 am
Gail et al. have offered links to information which support the idea that CAGW does indeed have underpinnings tied to a general belief by the elitist intelligentsia that 'the herd must be thinned', owing to unsupported comments that we have all read about the "sustainability of the planet".
I have tried in the past to find a source concerning comments about the so-called benign / harmless effects of DDT. I found one today, and it is illuminating. It describes DDT as having a number of beneficial health effects on humans, including the reduction of cancers in new-born infants. The writer has himself ingested full tablespoons of commercial DDT when giving talks on the benefits of this pesticide to various groups. The spurious alarms and lies were begun by environmental activists AKA: The World Health Organsization (WHO). Turns out that Mr. Gates is heavily invested with the WHO. Anyone suprised? As I alluded to earlier in this thread, and ‘TWE’ stated more succincly than I: Sorry, hit the ‘Post Comment’ by mistake (I hate it when that happens!), here’s TWE’s post: “TWE says: September 11, 2010 at 11:00 pm think this strengthens the case that Gail has made repeatedly here at WUWT concerning the root of the CAGW alarmism meme, its apparent successes, and further dogged propogation. Gates Foundation programs to fight malaria have achieved remarkable results in decreasing malaria over the past five years, in just waking up the anti-malaria efforts (see also here), and also in the actual decline of malaria: Not only is the death toll half what it was when DDT was legal in the U.S., but total malaria infections have been dramatically reduced in nations who have adopted the Gates Foundation’s suggested strategies. We may not beat malaria by 2014, but it won’t be because the Gates Foundation is on the wrong path. Ed Darrell says: September 13, 2010 at 2:19 pm Ed, Not sure about your numbers, check the link below for data up to 2007 in this 2008 WHO report. In Nigeria malaria cases have grown from just under 1 million in 1990 to almost 3 million in 2007. Your link did cite global locations, the link I post below only cites African locations, so it’s not apples to apples. I do cite the same report as portions of your source link. 
I didn’t see many African countries reporting fewer cases over the last 10 years (quick scan though). Anyway, the issue, the point is that the WHO banned DDT in spite of overwhelming scientific evidence that proved it harmless to vertebrates, and in some instances it was beneficial (lower cancer rates in newborns: see link in my previous post above). This is what the WHO did, this is who the Gateses have aligned themselves with. Right path, wrong path is irrelevant at this point. Gates, if he truly wanted to make a difference should/could have easily started his own relief efforts to ease the suffering. A good first, inexpensive (as far as Gates is concerned), effective, and QUICK way would be to support the use of DDT and buy a couple of those big Army transport planes (C-147?) fill them with the stuff and dump away. In other words the WHO said: let them die. And that is exactly what they did to the tune of over 10 million deaths, mostly children under 5 years old. Just so we might be more “sustainable”. Sound good to you? So…why would anybody do that? Might it be in response to Ehrlich’s book ‘The Population Bomb’? And as it relates to this thread: why/how, would/could, anyone objectively looking at the available data on climate change come to the conclusion that we are all going to burn in the near/distant future due to runaway global warming? Mr. Goklany raises good points. Will Mr. Gates heed him? I don’t think so. As I said before: It’s an AGENDA! Reduce population size to support the sustainability of the planet and Cap & Tax to control energy usage thereby controlling populace. YOU, ME, and all of US. You are aware of the axiom: correlation does not prove causation? Well it equally applies to the thin veneer of “humanitarian” that people such as Mr. Gates wears. Sheep in wolves clothing if ever there were… We need to speak out against these people, take them to task, and shut them down. Period.. 
Eli Rabett : “Attacking the Gates Foundation is fundamentally wrong.” I think that you have misunderstood Indur Golkany’s letter. It begins “For a long time I had admired your perspicacity and acumen in trying to address some of the world’s truly important problems (such as malaria and hunger) rather than signing on to the latest chic causes … ” and continues by expressing disappointment that Bill Gates has now succumbed to a chic cause, namely global warming. Indur Golkany does not challenge Bill Gates’ motivation, merely points out that by misunderstanding global warming the Gates Foundation is taking actions which – although designed with the best of intentions – will damage those it seeks to help. I don’t see this as an “attack”, but as the sort of advice that a truly good friend would give – speaking up rather than let a friend continue with a mistake..” A typical unscientific summary personal attack from the author of this post! Okay, I’ll byte — guess who jumped the shark years ago from scientific reason to political advocacy. Hint: that same person drones on with his personal attacks as part of the advocacy campaign that AGW increases malaria and nearly every other scourge of humanity? Nuts, Eli Rabett! Feel free to delete these posts; I am sure that blog rules were broken.
http://wattsupwiththat.com/2010/09/11/an-open-letter-to-mr-bill-gates/
time.sleep() crashes Pythonista when used in scene, decorated by @ui.in_background

- lachlantula: I know this has been asked before, but I have a different scenario where the solutions suggested won't work. The app crashes when certain functions featuring sleep are run. Pythonista 3. Here's a sample of code that can sometimes crash the app:

```python
import time
import sound

vol = 0.1
for _ in range(3):
    sound.play_effect('rpg:Footstep01', vol)
    vol += 0.1
    time.sleep(1)
    sound.play_effect('rpg:Footstep02', vol)
    vol += 0.1
    time.sleep(1)
    sound.play_effect('rpg:Footstep03', vol)
    vol += 0.1
    time.sleep(1)
```

Any ideas? This function is decorated by @ui.in_background as it is called by the update function built into scene. (No, that doesn't mean there are footsteps playing all the time, it's part of an AI I'm working on :p)

- The main takeaway from that thread... don't use sleep in scene. Use actions. What you have would work with an Action.sequence consisting of an Action.call (to play the sound) and an Action.wait.

- lachlantula: I'll give it a try, thanks.
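The "use actions" advice amounts to scheduling work across frames instead of blocking the scene's update loop with time.sleep(). Below is a plain-Python sketch of that idea; the Timeline class and all of its names are illustrative, not part of Pythonista. In the scene module itself you would express the same sequence with Action.sequence, Action.call, and Action.wait.

```python
import heapq

class Timeline:
    """Runs callbacks at scheduled times instead of blocking with sleep."""
    def __init__(self):
        self._events = []
        self._counter = 0

    def call_after(self, delay, func):
        # The counter breaks ties so callbacks are never compared directly.
        heapq.heappush(self._events, (delay, self._counter, func))
        self._counter += 1

    def update(self, now):
        # Called once per frame; fires every callback whose time has come.
        while self._events and self._events[0][0] <= now:
            _, _, func = heapq.heappop(self._events)
            func()

tl = Timeline()
played = []
for i, name in enumerate(['Footstep01', 'Footstep02', 'Footstep03']):
    tl.call_after(i * 1.0, lambda n=name: played.append(n))

tl.update(0.0)   # a frame at t=0 plays only the first effect
tl.update(2.5)   # a later frame plays the remaining two
print(played)
```

Nothing in this pattern ever blocks, so the UI thread stays responsive between frames, which is exactly what the crashing sleep-based loop fails to do.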
https://forum.omz-software.com/topic/3730/time-sleep-crashes-pythonista-when-used-in-scene-decorated-by-ui-in_background
csGraphics2D Class Reference [Common Plugin Classes]

This is the base class for 2D canvases. More...

#include <csplugincommon/canvas/graph2d.h>

Inherits scfImplementation7< csGraphics2D, iGraphics2D, iComponent, iNativeWindow, iNativeWindowManager, iPluginConfig, iDebugHelper, iEventHandler >.

Inherited by scfImplementationExt2< csGraphics2DGLCommon, csGraphics2D, iEventPlug, iOpenGLDriverDatabase >.

Definition at line 62 of file graph2d.h.

Constructor & Destructor Documentation

- Create csGraphics2D object.
- Destroy csGraphics2D object.

Member Function Documentation

- This routine should be called before any draw operations. It should return true if the graphics context is ready.
- Blit a memory block. Format of the image is RGBA in bytes, row by row.
- Change the depth of the canvas.
- Clear backbuffer.
- Clear all video pages.
- Clip a line against a given rectangle. Function returns true if the line is not visible.
- (*) Close graphics system.
- Enable or disable double buffering; return TRUE if supported.
- Draw a box of given width and height.
- Draw a line. Same but exposed through the iGraphics2D interface. Definition at line 212 of file graph2d.h.
- Draw a pixel in 16-bit modes.
- Draw a pixel in 32-bit modes.
- Default drawing routines for 8-bit and 16-bit modes. If a system port has its own routines, it should assign their addresses to the respective pointers.
- Draw a pixel in 8-bit modes.
- This routine should be called when you have finished drawing.
- Free storage allocated for a subarea of screen.
- Query clipping rectangle.
- Return current double buffering state.
- Get the name of this canvas.
- Return the Native Window interface for this canvas (if it has one). Definition at line 266 of file graph2d.h.
- Query pixel R,G,B at given screen location.
- As GetPixel() above, but with alpha. Same but exposed through the iGraphics2D interface. Definition at line 256 of file graph2d.h.
- Return address of a 16-bit pixel.
- Return address of a 32-bit pixel.
- Return address of an 8-bit pixel.
- Return the number of bytes for every pixel. This function is equivalent to the PixelBytes field that you get from GetPixelFormat(). Definition at line 274 of file graph2d.h.
- Event handler for plugin.
- Initialize the plugin.
- (*) Open graphics system (set video mode, open window, etc.).
- Perform a system-specific extension. Return false if extension not supported.
- Perform a system-specific extension. Return false if extension not supported.
- Resize the canvas.
- Restore a subarea of screen saved with SaveArea().
- Save a subarea of the screen area into the variable Data. Storage is allocated in this call; you should either FreeArea() it after usage or RestoreArea() it.
- Set clipping rectangle.
- Change the fullscreen state of the canvas.
- Set mouse cursor to one of the predefined shape classes (see csmcXXX enum above). If a specific mouse cursor shape is not supported, return 'false'; otherwise return 'true'. If the system supports it and iBitmap != 0, the shape should be set to the bitmap passed as second argument; otherwise the cursor should be set to its nearest system equivalent depending on the iShape argument.
- Set mouse position; return success status.
- (*) Set a color index to given R,G,B (0..255) values.
- Write a text string into the back buffer.
- Write a text string into the back buffer.

Member Data Documentation

- The counter that is incremented inside BeginDraw() and decremented in FinishDraw(). Definition at line 128 of file graph2d.h.

The documentation for this class was generated from the following file:

Generated for Crystal Space 1.4.1 by doxygen 1.7.1
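The "clip a line against a given rectangle" entry above is a classic canvas operation. As a self-contained illustration (not Crystal Space's actual implementation), here is a sketch of the standard Cohen–Sutherland algorithm; note the return sense is inverted for readability, so it returns true when some part of the line IS visible:

```cpp
// Region codes for Cohen–Sutherland clipping.
enum { INSIDE = 0, LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

static int outcode(double x, double y,
                   double xmin, double ymin, double xmax, double ymax) {
    int code = INSIDE;
    if (x < xmin) code |= LEFT; else if (x > xmax) code |= RIGHT;
    if (y < ymin) code |= BOTTOM; else if (y > ymax) code |= TOP;
    return code;
}

// Clips the segment (x0,y0)-(x1,y1) to the rectangle in place.
// Returns false when the segment is entirely outside (i.e. not visible).
bool clip_line(double& x0, double& y0, double& x1, double& y1,
               double xmin, double ymin, double xmax, double ymax) {
    int c0 = outcode(x0, y0, xmin, ymin, xmax, ymax);
    int c1 = outcode(x1, y1, xmin, ymin, xmax, ymax);
    while (true) {
        if (!(c0 | c1)) return true;    // both endpoints inside
        if (c0 & c1) return false;      // both outside on the same side
        int c = c0 ? c0 : c1;           // pick an outside endpoint
        double x = 0, y = 0;
        if (c & TOP)         { x = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0); y = ymax; }
        else if (c & BOTTOM) { x = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0); y = ymin; }
        else if (c & RIGHT)  { y = y0 + (y1 - y0) * (xmax - x0) / (x1 - x0); x = xmax; }
        else                 { y = y0 + (y1 - y0) * (xmin - x0) / (x1 - x0); x = xmin; }
        if (c == c0) { x0 = x; y0 = y; c0 = outcode(x0, y0, xmin, ymin, xmax, ymax); }
        else         { x1 = x; y1 = y; c1 = outcode(x1, y1, xmin, ymin, xmax, ymax); }
    }
}
```

A canvas's SetClipRect()/DrawLine() pair would typically run a routine like this before touching any pixels.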
http://www.crystalspace3d.org/docs/online/api-1.4.1/classcsGraphics2D.html
Getting Started with wxPython

This article shows everything you'll need in order to run wxPython.

Table 1 Everything you'll need to run wxPython on your own computer

Once the installations are complete, get ready to type. We're going to create a program that displays a single image file. This will happen in three steps:

- We'll start with the bare minimum required for a working wxPython program.
- We'll make that code more structured and sophisticated.
- We'll end with a version that can display the wxPython logo.

Figures 1, 2, and 3 illustrate what the final program will look like, depending on your platform.

Figure 1 Running hello.py on Windows
Figure 2 Running hello.py on Linux
Figure 3 Running hello.py on Mac OS X

Creating the bare minimum wxPython program

Let's begin with the simplest possible wxPython program that will run successfully. Create a file named "bare.py" and type in the following code. Remember, in Python, the spacing at the start of each line is significant.

```python
import wx

class App(wx.App):
    def OnInit(self):
        frame = wx.Frame(parent=None, title='Bare')
        frame.Show()
        return True

app = App()
app.MainLoop()
```

There's not much to it, is there? Even at only eight lines of code (not counting blank lines) this program might seem like a waste of space, as it does little more than display an empty frame. But bear with us, as we'll soon revise it, making it something more useful. The real purpose of this program is to make sure you can create a Python source file, verify that wxPython is installed properly, and allow us to introduce more complex aspects of wxPython programming one step at a time. So humor us: create a file, type in the code, save the file with the name "bare.py," run it, and make sure it works for you. The mechanism for running the program depends on your operating system.

You can usually run this program by sending it as a command line argument to the Python interpreter from an operating system prompt, using one of the following commands:

    python bare.py
    pythonw bare.py

Figures 4, 5, and 6 show what the program looks like running on various operating systems.

Figure 4 Running bare.py on Windows.
Figure 5 Running bare.py on Linux.
Figure 6 Running bare.py on Mac OS X.

While this bare-minimum program does little more than create and display an empty frame, all of its code is essential; remove any line of code and the program will not work. This basic wxPython program illustrates the five basic steps you must complete for every wxPython program you develop:

- Import the necessary wxPython package
- Subclass the wxPython application class
- Define an application initialization method
- Create an application class instance
- Enter the application's main event loop

Let's examine this bare minimum program step-by-step to see how each one was accomplished.
https://www.developer.com/open/article.php/3625886/Getting-Started-with-wxPython.htm
1) Do setsockopt() BEFORE bind.
2) Try to bind directly to the multicast address:

    // 224.3.0.15 = FE03000D
    cliAddr.sin_addr.s_addr = htons(0xFE03000D);

224.3.0.15 = 0xE003000D, not 0xFE03000D (0xFE03000D = 254.3.0.15).

I will try your second suggestion first and will let you know. Could you please explain a little bit more about the parameters for setsockopt()? Thank you for your help, Nhuan

man setsockopt

This link may also be useful:

Then, I found out that /sbin/route shows me something like:

    Destination  Gateway        Genmask  Flags Metric Ref Use Iface
    default      172.22.72.254  0.0.0.0  UG    0      0   0   eth1

This route will capture activities on 224.x.x.x. So, I added one more route to the kernel IP routing table so that I have:

    Destination  Gateway  Genmask    Flags Metric Ref Use Iface
    224.0.0.0    *        240.0.0.0  U     0      0   0   eth0

After this addition, when I run the code, I can see:

    ping 224.3.0.15
    PING 224.3.0.15 (224.3.0.15) 56(84) bytes of data.
    64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.020 ms
    64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.013 ms
    64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.009 ms

Which means I can JOIN successfully. However, my problem still exists: I ***DO NOT*** see recvfrom return. Please please help me, experts!!! Nhuan

It helps me with my above findings. My problem is still there, though.

YES, and it has not helped. I also noticed something: tcpdump showed me packets from 224.3.0.15. So data is absolutely there. I tried its open source code and dealt with libpcap. I was able to write some code using this library to get packets from 224.3.0.15!!!

The following link is very useful for me to write my code to get packets from 224.3.0.15, utilizing libpcap:

The problem with using libpcap is that I have to run with root privilege, and of course it is not good to do so. Another thing is that with libpcap, I have to deal with packets with Ethernet, then IP, then UDP headers before getting the payload. I will keep using libpcap to get data from 224.3.0.18 and will fight to use the usual recvfrom. It is irritating, but it is also a good chance to study deeply into libpcap and tcpdump, and the like :-))

Any help is much appreciated. I wonder where all the other experts are. Only Nopius has been trying to help me.

There are sources available for download. In file 'data_link.c' you will find binding to a multicast interface and recvfrom() on that interface.

I suggest a refund. Nopius, what do you think? Nhuan

I did take a look inside data_link.c. You mean functions DL_init_channel and DL_recv, right? I did not see anything special to solve my problem, so I stopped. libpcap works, and the irritation that it brings is the need to use 'root' privilege to run the application. I am exhausted with this issue. Let me know what I should do with this question. Where are the other experts, Venabili? Nhuan

The kernel version is 2.4.27-2-686-smp. Compilation parameters should not cause a problem for me, or else tcpdump could not work. Nhuan

I'll check version 2.4.27 for multicast bugs later.

I have this in my .config file: CONFIG_IP_MULTICAST=y. There are many other kernel compilation parameters that I do not know whether they relate to multicast or not. Please let me know if you need to know the setting for any dedicated compilation parameter. Nhuan

[NETLINK]: Fix multicast bind/autobind race

Also please post 'ifconfig eth0'. Is 'MULTICAST' there?

    netstat -gn

    inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
    inet6 addr: fe80::211:43ff:fefd:4881/6
    UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    IPv6/IPv4 Group Memberships
    Interface       RefCnt Group
    --------------- ------ ---------------------
    lo              1      224.0.0.1
    eth0            1      224.3.0.15
    eth0            1      224.0.0.1
    eth1            1      224.0.0.1
    lo              1      ff02::1
    eth0            1      ff02::1:fffd:4881
    eth0            1      ff02::1
    eth1            1      ff02::1:fff6:9506
    eth1            1      ff02::1

There were two listeners: on Solaris (10/SPARC) and on Linux (2.6.16/i386), and everything works fine for me. I cannot figure out what is wrong with your code, because I don't see the entire picture. You provided just a code snippet without the context of its usage. I recommend you try these simple examples first (sender and listener). But please fix some errors in that code:

- change sizeof(message) to strlen(message) in the line "if (sendto(fd,message,sizeof("
- change puts(message); to puts(msgbuf); in the very last line
- also add bzero(msgbuf, MSGBUFSIZE) before the line "if ((nbytes=recvfrom(fd,msgbu"

When you compile and run this code, we may decide whether it is an error in your code or something wrong in the kernel.

I think you FORGOT about the problem that I am having. In the multicast example that you gave me the link to, I saw the following lines in the listener:

    mreq.imr_multiaddr.s_addr=
    mreq.imr_interface.s_addr=

These lines will NOT BE SUITABLE for my usage, because I have MULTIPLE INTERFACES. So "everything works fine for you" does not mean anything to my case. I am sorry for the irritation that my problem is causing. Nhuan

I want to join the multicast group on a particular interface (eth1), which is not the default interface. So mreq.imr_interface.s_addr = htonl(INADDR_ANY); should not be applied.

Probably you need to bind the socket to a specific address, then join the multicast group on that socket:

    /* before bind to 192.168.1.1 */
    cliAddr.sin_addr.s_addr = htonl(0xC0A80101);
    cliAddr.sin_port = htons(obj.port);
    ...
    /* before join to 224.3.0.15 */
    mreq.imr_multiaddr.s_addr = htonl(0xE003000D);
    mreq.imr_interface.s_addr = htonl(0xC0A80101);

If you look at my original post at the top of the page, you will see these lines:

    mreq.imr_multiaddr.s_addr = remoteServAddr.sin_addr.s_
    mreq.imr_interface.s_addr = localAddr.sin_addr.s_addr;

Probably you have a routing problem on your sending side, or you have a very restrictive firewall. I tested a similar scenario. I have no eth1, so I binded to lo0 and recvfrom also was blocked. My sender program (on the same host as the listener) also didn't work until I fixed the sending flag: sendto(fd,message,strlen(m

    Routing Table: IPv4
    Destination          Gateway              Flags Ref   Use    Interface
    -------------------- -------------------- ----- ----- ------ ---------
    224.0.0.0            127.0.0.1            U     1     0      lo0
    224.0.0.0            10.100.12.110        U     1     0      bge0

Maybe it's better to use a specific gateway IP address instead of *?

I think this is a good clue. The problem with me is that in my usage scenario, the SENDER is from an external source; it's not MY SOFTWARE. I only develop the client to recvfrom from the multicast group. How can we check if the SENDER is in trouble (with the sending flag) or not? Also, please remember that libpcap works. Thanks, Nhuan

For me, ping of a multicast address always works, even if the sender uses an incorrect interface, but changing routing tables might help. If nothing else works, try to add 224.3.0.15 as a host route with 192.168.1.1 as a default gateway. Thanks, Nhuan

Use it for testing _from_the_same_host_ as the listener (at least recvfrom should unblock now, even if the message format is obviously incorrect for you):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <time.h>
    #include <string.h>
    #include <stdio.h>

    /* !!!! FIX PORT NUMBER HERE !!!! */
    #define HELLO_PORT
    #define HELLO_GROUP "224.3.0.15"

    main(int argc, char *argv[])
    {
        struct sockaddr_in addr;
        int fd, cnt;
        struct ip_mreq mreq;
        char *message="Hello, World!";

        /* create what looks like an ordinary UDP socket */
        if ((fd=socket(AF_INET,SOCK_DGRAM,0)) < 0) {
            perror("socket");
            exit(1);
        }

        /* set up destination address */
        memset(&addr,0,sizeof(addr));
        addr.sin_family=AF_INET;
        addr.sin_addr.s_addr=inet_addr(HELLO_GROUP);
        addr.sin_port=htons(HELLO_PORT);

        /* now just sendto() our destination! */
        while (1) {
            if (sendto(fd,message,strlen(message),0,(struct sockaddr *) &addr,
                       sizeof(addr)) < 0) {
                perror("sendto");
                exit(1);
            }
            sleep(1);
        }
    }

"How can we check if the SENDER is in trouble (with the sending flag) or not? Also, please remember that libpcap works."

For testing, run these commands in two separate screens:

    tcpdump -i eth0 -n multicast
    tcpdump -i eth1 -n multicast

then run the SENDER, then see the IP source address of the packet and the interface where the packet really arrives. Probably the SENDER uses the system default IP address (on eth0) and then multicasts to the eth0 segment, while the listener listens on another network card.

With your code above (and adding the port number in, of course), I can sendto/recvfrom on the same host on the correct interface. It shows that the CLIENT that I developed does not cause trouble. Something is wrong with the SENDER and/or with the routing/firewall. We are coming close to a solution, Nopius. Please investigate a little bit more with me. Thanks, Nhuan

"The problem with me is that in my usage scenario, the SENDER is from an external source, it's not MY SOFTWARE. I only develop the client to recvfrom from the multicast group."

You wrote: "Probably SENDER uses system default IP address (on eth0) and then multicasts to eth0 segment, while listener listens on another network card." For sure the SENDER cannot multicast to the eth0 segment, because eth0 connects to a different line. The SENDER MUST be sending data to eth1 because it connects to the correct line.

To be sure that the sender sends on the correct interface, run these commands:

    tcpdump -i eth0 -n multicast
    tcpdump -i eth1 -n multicast

while running the sender, and paste the output here.

Right now the remote SENDER does not work. It will start data transferring around 20:00 my time. I will check it and let you know later. Thanks, Nhuan

Thank you very much for your help. Up to this point, what I can think of is that we have a problem with the SENDER or the routing/firewall. I decided to close the topic because, from a production point of view, I cannot survive for such a long time without answering WHY it is the case. Luckily, libpcap works and I am happy with it at the moment. I appreciate your effort much :-)))) Happy working, nhuanvn
https://www.experts-exchange.com/questions/21819380/Multicast-multiple-interfaces.html
Position Relative to Screen Center

How would I position a UI view relative to the center of the screen, no matter the orientation? Do I use the size_to_fit() function? ui.center? Do I have to make the ui view a subview of a larger view? If so, how will that view be oriented when the iPad is rotated (sounds too complex to be the solution)? I am trying to make the view show up in the center of the screen, and scale to fit horizontally.

The easiest would be to use the auto-resizing inspector in the UI editor (or the flex attribute when you do this from code). For centering a view vertically and horizontally in its parent, it should be LRTB (flexible left, right, top and bottom margins). For a view that stretches horizontally and is vertically centered, it should be WTB. You can experiment with this in the UI editor (there's an animated preview to help you get an idea of the effect of different settings).

Okay, it is flexing when I rotate the iPad, but what do I set the popover_location parameter of the ui.present() function to, to center the UI in the center of the screen when I run the workflow? It would seem that there should be a constant for the screen's center, without having to use another UI element as the parent. This way, if someone is using a different screen size, it will take the dimensions of the screen and divide both the height and width by two, and then center at those coordinates. To center my button in a parent view I had to do:

    button.center = (window.width/2, window.height/2)

But the window itself I want to center in the screen, no matter what screen size or orientation.

Edit: I found ui.get_screen_size(). How do I get the width component separate from the height component?

You can't really do that with a popover because it's currently not possible to get the current orientation. You might want to consider using a sheet instead...

Could I resize the dimensions of a sheet? Also, I tried this:

    popover_location=(ui.get_screen_size()[0]/2, ui.get_screen_size()[1]/2)

And it almost works. I used it along with flex too. The portrait orientation is off, but, well, take a look.

Hmmm! I just saw that the screen size function would locate the arrow of the popover at the center, not the center of the popover. That is why it is off.

While there is not an easy way to get orientation, if the keyboard frame is displayed, you can use ui.get_keyboard_frame to determine the current orientation's width.

Take that back... ui.WebView().eval_js('window.orientation') does give you orientation.

Okay. Thanks. I put this one on hold. I may pick it up later. Thanks!

I have picked this back up:

```python
import ui
import workflow

screen_size = ui.get_screen_size()
screenX = screen_size[0]
screenY = screen_size[1]
window = ui.load_view()
window.flex = "WL"
window.height = 200
window.width = screenX
window.present("popover", False, (screenX, (screenY/2)))
view = window['view2']
view.text = workflow.get_input()
```

Now I have to add the orientation.

This includes a 1-pixel webview, which is used to get orientation; see e.g. line 61 and following for the possible values.
https://forum.omz-software.com/topic/1311/position-relative-to-screen-center
.NET 3.0 and Beyond - Web Service Discovery

The opinions expressed herein are my own personal opinions and do not represent my employer Microsoft's view in any way.

The modification of the message body is a common task required in many applications. The Message Fixer explained in the "Fixed the Messages" article performs body modification after the request is received and before the response is sent on the server side, or before sending the request and after receiving the response on the client side. As explained in that article, for smaller messages one can create the XmlDocument from the message body by getting the Body reader from the message, as shown below:

    // Fix the request
    XmlDocument doc = new XmlDocument();
    doc.Load(request.GetBodyReader());

However, recently while working on an interop application I discovered that this quick and easy way of modifying the body of the messages did not work in cases when the body content had elements or attributes of QName type; specifically, if the namespace for the prefix used in the value of an element or attribute was defined before the Body element. XML processors typically do not poke at the values, and hence they try their best to preserve the context in which the namespace was declared. In my case I was creating the XML document from the Body element onwards. Hence, if the namespace declaration was before the Body element, i.e. at the Envelope level, which the XML document never saw, a re-declaration of the namespace occurred. The namespace was re-declared when it was first used in the name of an element or attribute, and NOT when it was used as the value of an element or attribute. For example, if an XML document was created from the Body reader of the Message object for the following message, the namespace was re-declared after it was used in one of the attribute values. You can see how this can result in failure to parse the values later on. So how does one modify the message?
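The failure mode is easy to reproduce outside WCF. Here is a quick illustration in Python (just to mirror the problem the post describes in .NET; the XML and namespace URIs are made up): the prefix t is declared on the Envelope and referenced only inside values, so serializing just the Body subtree silently drops the declaration those values depend on.

```python
import xml.etree.ElementTree as ET

# 't' is declared on the Envelope but only used inside values.
envelope = (
    '<s:Envelope xmlns:s="urn:example:s" xmlns:t="urn:example:t">'
    '<s:Body><val kind="t:Thing">t:Other</val></s:Body>'
    '</s:Envelope>'
)
root = ET.fromstring(envelope)
body = root[0]                                # the Body element only
fragment = ET.tostring(body, encoding='unicode')
print(fragment)
```

The serialized fragment still contains the QName values "t:Thing" and "t:Other", but no declaration binding the prefix t, so a downstream consumer can no longer resolve them; this is the same trap the Body-only XmlDocument falls into.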
The easiest way to do this would be to preserve the complete message and then perform the modification on the body. Note that this is applicable to small messages that you can reasonably fit in memory.

    StringBuilder builder = new StringBuilder();
    XmlWriter writer = XmlWriter.Create(builder);
    origMsg.WriteMessage(writer);
    writer.Close();
    string origMsgStr = builder.ToString();

    XmlDocument origMsgDoc = new XmlDocument();
    origMsgDoc.LoadXml(origMsgStr);
    // modify the message

    // rebuild the message from the modified document, so the changes are kept
    XmlReader newMsgReader = XmlReader.Create(new StringReader(origMsgDoc.OuterXml));
    Message newMsg = Message.CreateMessage(newMsgReader, int.MaxValue);
    newMsg.Properties.CopyProperties(origMsg.Properties);
http://blogs.msdn.com/vipulmodi/archive/2005/09/16/469070.aspx
Access, Encapsulation, & Scope

The public Keyword

The public and private keywords are very important within Java. These keywords define what parts of our code have access to other parts of our code. We can define the access of many different parts of our code, including instance variables, methods, constructors, and even a class itself. If we choose to declare these as public, this means that any part of our code can interact with them, even if that code is in a different class!

The way we declare something to be public is to use the public keyword in the declaration statement. In the code block below, we have a public class, constructor, instance variables, and method. Notice the five different uses of public.

Since Dog is public, any other class can access anything about a Dog. For example, let's say there was a DogSchool class. Any method of the DogSchool class could make a new Dog using the public Dog constructor, directly access that Dog's instance variables, and directly use that Dog's methods.

Notice that the DogSchool class and the makeADog() method are also public. This means that if some other class created a DogSchool, it would have access to these methods as well! We have public methods calling public methods!

One final thing to note is that for the purposes of this lesson, we'll almost always make our classes and constructors public. While you can set them to private, it's fairly uncommon to do so. Instead, we'll focus on why you might make your instance variables and methods private.

The private Keyword and Encapsulation

When a class' instance variable or method is marked as private, that means that you can only access those structures from elsewhere inside that same class. Let's look back at our DogSchool example: makeADog is trying to directly access Dog's .age variable. It's also trying to use the .speak() method. If those are marked as private in the Dog class, the DogSchool class won't be able to do that.
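The code block referenced above was lost in this copy. Here is a reconstruction of what it plausibly looked like (names follow the lesson's Dog/DogSchool example, but the details are mine; the classes are left package-private so they can share one file):

```java
class Dog {
    public String name;
    public int age;

    // public constructor: any class may create a Dog
    public Dog(String name, int age) {
        this.name = name;
        this.age = age;
    }

    // public method: any class may call it
    public String speak() {
        return name + " says woof";
    }
}

class DogSchool {
    public Dog makeADog() {
        Dog cujo = new Dog("Cujo", 7);
        cujo.age += 1;   // public field, so direct access from another class works
        return cujo;
    }
}
```

If age and speak() were marked private instead, the two lines inside makeADog() that touch them would no longer compile, which is exactly the encapsulation boundary described below.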
Other methods within the Dog class would be able to use .age or .speak() (for example, we could use cujo.age within the Dog class), but other classes won't have access.

Accessor and Mutator Methods

When writing classes, we often make all of our instance variables private. However, we still might want some other classes to have access to them; we just don't want those classes to know the exact variable name. To give other classes access to a private instance variable, we would write an accessor method (sometimes also known as a "getter" method). Even though name is private, other classes could call the public method getName(), which returns the value of that instance variable. Accessor methods will always be public, and will have a return type of the instance variable they're accessing.

Similarly, private instance variables often have mutator methods (sometimes known as "setters"). These methods allow other classes to reset the value stored in private instance variables. Mutator methods are often void methods: they don't return anything, they just reset the value of an existing variable. Similarly, they often have one parameter that is the same type as the variable they're trying to change.

Scope: Local Variables

In addition to access modifiers like public and private, the scope of a variable controls what code can use it. Local variables, declared inside a method, don't take public or private at all; they can only be used inside the block where they're declared. For example, a loop variable i declared in a for loop's header can only be used between the curly braces of the for loop. In general, whenever you see curly braces, be aware of the scope. If a variable is defined inside curly braces, and you try to use that variable outside those curly braces, you will likely see an error!
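A concrete sketch of an accessor/mutator pair (the class is hypothetical, loosely borrowing the lesson's later CheckingAccount example):

```java
class CheckingAccount {
    private double balance;

    public CheckingAccount(double balance) {
        this.balance = balance;
    }

    // Accessor ("getter"): public, returns the type of the private field.
    public double getBalance() {
        return balance;
    }

    // Mutator ("setter"): public void, resets the private field.
    public void setBalance(double balance) {
        this.balance = balance;
    }
}
```

Other classes never touch balance by name; they only go through the two public methods.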
So in this case, the value passed to the parameter will be printed and not the instance variable. If we wanted to access the instance variable and not the local variable, we could use the this keyword. The this keyword is a reference to the current object. We used this.name in our speakNewName() method. This caused the method to print out the value stored in the instance variable name of whatever Dog object called speakNewName(). (Note that in this somewhat contrived example, the local variable name used as a parameter gets completely ignored.) Oftentimes, you'll see constructors have parameters with the same name as the instance variable. You can read such a constructor as "set this Dog's instance variable name equal to the variable passed into the constructor". While this naming is a common convention, it can also be confusing. There's nothing wrong with naming your parameters something else to be more clear. Giving the parameter a different name is a little clearer - we're setting the Dog's instance variable name equal to the name we give the constructor. Finally, mutator methods also usually follow this pattern: we reset the instance variable to the value passed into the parameter. Throughout the rest of this lesson, we'll use this. when referring to an instance variable. This isn't always explicitly necessary - if there's no local variable with the same name, Java will know to use the instance variable with that name. That being said, it is a good habit to use this. when working with your instance variables to avoid potential confusion.
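A sketch of the shadowing example described above. Dog, name, and speakNewName() come from the notes; the printed messages are assumptions:

```java
public class Dog {
  public String name;

  public Dog(String name) {
    // "Set this Dog's instance variable name equal to
    // the variable passed into the constructor."
    this.name = name;
  }

  public void speakNewName(String name) {
    // The parameter shadows the instance variable, so plain "name"
    // refers to the local variable...
    System.out.println("Hello, my new name is " + name);
    // ...and this.name is needed to reach the instance variable.
    System.out.println("But my real name is " + this.name);
  }
}
```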
Consider the code block below: take a look at the resetSettings() method in particular. This method calls other methods from the class. But it needs an object to call those methods! Rather than create a new object (like we did with the Dog named myDog earlier), we use this as the object. What this means is that the object that calls resetSettings() will be used to call setBrightness(0) and setVolume(0). In this example, calling myComputer.resetSettings() is as if we called myComputer.setBrightness(0) and myComputer.setVolume(0). this serves as a placeholder for whatever object was used to call the original method.

Other Private Methods

Now that we've seen how methods can call other methods using this., let's look at a situation where you might want to use private methods. Oftentimes, private methods are helper methods - that is to say that they're methods that other, bigger methods use. For example, for our CheckingAccount example, we might want a public method like getAccountInformation() that prints information like the name of the account owner, the amount of money in the account, and the amount of interest the account will make in a month. That way, another class, like a Bank, could call that public method to get all of that information quickly. Well, in order to get that information, we might want to break that larger method into several helper methods. For example, inside getAccountInformation(), we might want to call a method called calculateNextMonthInterest(). That helper method should probably be private. There's no need for a Bank to call these smaller helper methods - instead, a Bank can call one public method, and rely on that method to do all of the complicated work by calling smaller private methods.

Access, Encapsulation, & Scope Review

The public and private keywords are used to define what parts of code have access to other classes, methods, constructors, and instance variables.
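The Computer example described above might look like the following. The method names resetSettings(), setBrightness(), and setVolume() come from the notes; the instance variable names are assumptions:

```java
public class Computer {
  public int brightness;
  public int volume;

  public void setBrightness(int inputBrightness) {
    this.brightness = inputBrightness;
  }

  public void setVolume(int inputVolume) {
    this.volume = inputVolume;
  }

  public void resetSettings() {
    // "this" stands in for whatever object called resetSettings(), so
    // myComputer.resetSettings() behaves like calling
    // myComputer.setBrightness(0) and myComputer.setVolume(0).
    this.setBrightness(0);
    this.setVolume(0);
  }
}
```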
Encapsulation is a technique used to keep implementation details hidden from other classes. Its aim is to create small bundles of logic. The this keyword can be used to designate the difference between instance variables and local variables. Local variables can only be used within the scope that they were defined in. The this keyword can be used to call methods when writing classes.

Static Methods Refresher

In these notes, we're going to dive into how to create classes with our own static methods and static variables. To begin, let's brush up on static methods. Static methods are methods that belong to an entire class, not a specific object of the class. Static methods are called using the class name and the . operator. We've seen a couple of static methods already! random() is a static method that belongs to the Math class. We didn't need to create a Math object to use it; we called it through the class name. You've also already used a static method every time you've run a program: the main() method of your class, called as YourClassName.main().

Static Variables

We'll begin writing our own static methods soon, but before we do, let's take a look at static variables. Much like static methods, you can think of static variables as belonging to the class itself instead of belonging to a particular object of the class. Just like with static methods, we can access static variables by using the name of the class and the . operator. Finally, we declare static variables by using the static keyword during declaration. This keyword usually comes after the variable's access modifier (public or private). When we put this all together, we might end up with a class that looks something like this: since all dogs share the same genus, we could use a static variable to store that information for the entire class. However, we want each dog to have its own unique name and age, so those aren't static. We could now access this static variable in a main() method, like so. Unlike static methods, you can still access static variables from a specific object of the class.
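The static-variable class described above, sketched minimally; the genus value "Canis" is an assumption:

```java
public class Dog {
  // Shared by every Dog: belongs to the class, not to any one object.
  public static String genus = "Canis";

  // Each Dog still gets its own name and age, so these aren't static.
  public String name;
  public int age;

  public Dog(String name, int age) {
    this.name = name;
    this.age = age;
  }

  public static void main(String[] args) {
    // Access the static variable through the class name.
    System.out.println(Dog.genus);
  }
}
```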
However, no matter what object you use to access the variable, the value will always be the same. You can think of it as all objects of the class sharing the same variable.

Modifying Static Variables

Now that we've created a couple of static variables, let's start to edit them. The good news is that editing static variables is similar to editing any other variable. Whether you're writing code in a constructor, a non-static method, or a static method, you have access to static variables. Oftentimes, you'll see static variables used to keep track of information about all objects of a class. For example, our variable numATMs keeps track of the total number of ATMs in the system. Therefore, every time an ATM is created (using the constructor), we should increase that variable by 1. If we could somehow destroy an ATM, the method that destroys it should decrease the numATMs static variable by 1. Similarly, we have a variable named totalMoney. This variable keeps track of all money across all ATMs. Whenever we remove money from an ATM using the non-static withdrawMoney() method, we should modify the money instance variable for that particular ATM as well as the totalMoney variable. In doing so, all ATMs will know how much money is in the system.

Writing Your Own Static Methods

Now that we have seen static variables in action, let's write our own static methods. One important rule to note is that static methods can't interact with non-static instance variables. To wrap our minds around this, consider why: a static method is called on the class itself, so there is no specific object - and no this reference - whose instance variables it could use.

Static Variables Review

Static methods and variables are associated with the class as a whole, not objects of the class. Static methods and variables are declared as static by using the static keyword upon declaration. Static methods cannot interact with non-static instance variables. This is due to static methods not having a this reference. Both static methods and non-static methods can interact with static variables.
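A sketch of the ATM described above. numATMs, totalMoney, money, and withdrawMoney() come from the notes; the constructor parameter is an assumption:

```java
public class ATM {
  // Class-wide counters shared by every ATM.
  public static int numATMs = 0;
  public static int totalMoney = 0;

  // Each ATM tracks its own money.
  public int money;

  public ATM(int startingMoney) {
    this.money = startingMoney;
    // The constructor can modify static variables: every new ATM
    // updates the class-wide counters.
    numATMs += 1;
    totalMoney += startingMoney;
  }

  public void withdrawMoney(int amount) {
    // A non-static method updates both the instance variable
    // and the class-wide total.
    this.money -= amount;
    totalMoney -= amount;
  }
}
```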
https://docs.nicklyss.com/java-access-scope/
footboydog Senior Member Last Activity: 4th April 2015 09:52 PM About Me - About footboydog - Home country - United Kingdom - Signature - Black HTC Dream with standard Donut (rooted). Motorola Xoom 32Gb Wi-fi Nexus 5 32gb stock Nexus 7 2013 32Gb stock For the latest on my SMS Bot Widget and Paperless List app (UK only): ubikapps.net Most Thanked Thanks Post Summary 5 Don't think anyone else has posted this yet so here goes: A Google employee called Rich Hyndman posted this on his Google+: With Android 4.0 you can backup and restore app data to ... 3 Got a notification for a system update. Pulled the link from logcat: Sorry if this has already been posted. 2 my sgs2 one does not work with my nexus :( I got the SGS 2 MHL adapter too and it didn't work at first. I rebooted the phone and then it worked. 2 I had to do this. Override the onJSAlert() method in the WebChromeClient class: public class MyWebChromeClient extends WebChromeClient { @Override public boolean onJsAlert(WebView view, String url, String message, JsResult result) {...
http://forum.xda-developers.com/member.php?u=1517851
How to Integrate jQuery Plugins into an Ember Application This article was peer reviewed by Craig Bilner. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be! With its ubiquity, jQuery still plays a vital role in the web development world. Its regular employment shouldn’t be a surprise especially when using a framework like Ember. This framework has components that are similar to jQuery plugins in that they are both designed to have a single responsibility in your project. In this article, we will develop a simple Ember component. This tutorial will showcase how to integrate a jQuery plugin into an Ember application. The component acts as a wrapper for the plugin, which shows a list of picture thumbnails. Whenever we click a thumbnail, a bigger version of it is displayed in the picture previewer. This works by extracting the src property of the clicked thumbnail. Then, we set the src property of the previewer to that of the thumbnail. The complete code of this article can be found on GitHub. With this in mind, let’s start working on this project. Setting up the Project First of all, let’s create a new Ember project. To start, execute this command on the CLI: npm install -g ember-cli Once done, the project can be created by running: ember new emberjquery This will create a new project in a folder named emberjquery and install the required dependencies. Now, move into the directory by writing cd emberjquery. The project contains different files that we’ll edit in this tutorial. The first file you have to edit is the bower.json file. Open it and change your current Ember version to 2.1.0. The jQuery plugin I have created for this project is available as a Bower package. 
You can include it in the project by adding this line to your bower.json file: "jquerypic": " Now, to install the plugin and the new version of Ember run the command: bower install Since this plugin is not an Ember component, we need to manually include the required files. In the ember-cli-build.js file, add the following two lines right before the return statement: // Lines to add app.import("bower_components/jquerypic/js/jquerypic.js"); app.import("bower_components/jquerypic/styles/app.css"); return app.toTree(); }; These lines import two files and include them in the build. One is the plugin file itself and the other is the CSS file for the plugin. The stylesheet is optional and you are free to exclude it if you intend to style the plugin by yourself. Creating a New Plugin Component Once you have included the plugin in the application, let’s start creating a new component by executing the command: ember generate component jquery-pic This command creates a class file and a template file. In the template file, paste the contents from the bower_components/jquerypic/index.html file. Place the content in the body tag, excluding the scripts. At this point, the template file should look like this: {{yield}} <div class="jquerypic" > <div class="fullversion-container"> <img src=" alt="" class="full-version" > </div> <div class="thumbnails"> <img src=" alt="" class="thumbnail"> <img src=" alt="" class="thumbnail"> <img src=" alt="" class="thumbnail"> <img src=" alt="" class="thumbnail"> <img src=" alt="" class="thumbnail"> </div> </div> In the class file, add a function called didInsertElement: import Ember from 'ember'; export default Ember.Component.extend({ didInsertElement: function () { } }); We are now at a crucial point. 
With jQuery, plugin initialization usually happens within a document.ready function as shown below: $(document).ready(function(){ //Initialize plugin here }); With Ember components, instead, this initialization happens within a special function named didInsertElement. This function is called when a component is ready and has been successfully inserted into the DOM. By wrapping our code inside this function, we can guarantee two things: - The plugin is initialized only for that component - The plugin will not interfere with other components Before initializing our plugin, let’s use the component in its current state. To do that, create an index template using the command: ember generate template index Then add the following code to the template to use the component: {{jquery-pic}} Once done, load the Ember server with ember serve With this command the server is started. Open your browser of choice and access the URL specified by the command-line interface. You should see a list of thumbnails below a picture previewer. Please note that when you click on a thumbnail, nothing happens. This happens because we haven’t hooked up the plugin event handlers. Let’s do it! But before describing how to perform a correct initialization, I will show you a mistake that many developers make. This solution might seem to work at first but I will prove you that it isn’t the best by showing a bug it introduces. Ember Component Initialization To show the problem, let’s start by adding the following code to the didInsertElement function: $(".jquerypic").jquerypic(); When not using Ember, this is how you would normally initialize the plugin. Now, check your browser window and click on the thumbnails. You should see that they are loaded in the big picture previewer as intended. All may seem to work fine, right? Well, check what happens when we add a second instance of the component. Do this by adding another line to the index template containing the same code I showed before. 
So, your template should now look like this: {{jquery-pic}} {{jquery-pic}} If you switch to the browser window, you should see two instances of the component showing up. You can notice the bug when clicking on the thumbnail for one of the instances. The previewer changes in both instances, not just in the clicked one. To fix this issue, we need to change our initializer a bit. The correct statement to use is reported below: this.$(".jquerypic").jquerypic(); The difference is that we are now using this.$ instead of just $. The two component instances should now behave properly. Clicking on the thumbnails for one instance should have no effect on the other component. When we use this.$ in a component, we refer to a jQuery handle specific to that component only. So, any DOM manipulation we do on it will only affect that component instance. Moreover, any event handler will be set just on that component. When we use the global jQuery property $, we are referring to the whole document. That is why our initial initialization affected the other component. I had to modify my plugin to demonstrate this bug and this might be the topic of a future article. Nevertheless, the best practice when manipulating a component's DOM is the use of this.$. Destroying the Plugin Well, so far we've seen how to set up event handlers. Now it's time to show the way to remove any event we have set up with our plugin. This should be done when our component is going to be removed from the DOM. We should do this because we don't want any redundant event handler hanging around. Luckily, Ember components provide another hook called willDestroyElement. This hook gets called every time Ember is about to destroy and remove a component instance from the DOM. My plugin has a stopEvents method which is callable on the plugin instance. This method should be called in the special hook Ember provides for us.
So, add in the following function to the component: willDestroyElement: function () { this.get('jquerypic').stop(); } Modify the didInsertElement function so that it looks like this: didInsertElement: function () { var jquerypic = this.$(".jquerypic").jquerypic(); this.set('jquerypic', jquerypic); }, In the didInsertElement function, we just stored the plugin instance in a property of the component. We perform this operation so that we can have access to it in other functions. In the willDestroyElement function, we are calling the stopEvents method on the plugin instance. Although this is good practice, our application has no way to trigger this hook. So we will set up a demonstration click handler. In this handler, we will call the stopEvents method on the plugin instance. This allows me to show that all the event handlers have been removed as we intended. Now, let's add a click function handler to the component: actions: { stopEvents: function () { this.get('jquerypic').stop(); } } Then add a paragraph tag to the component template as shown below: <p {{action "stopEvents"}} > Stop Events </p> When this tag is clicked, it calls the stopEvents action that destroys the plugin. After clicking the paragraph, the plugin should no longer respond to click events. To enable the events again, you have to initialize the plugin as we did in the didInsert hook. With this last section, we have completed our simple Ember component. Congratulations! Conclusions In this tutorial you've seen that jQuery plugins still play a vital role in our careers. With its powerful APIs and the JavaScript frameworks available, it's very useful to know how to combine the two worlds together and make them work in harmony. In our example the component acts as a wrapper for the plugin, which shows a list of picture thumbnails. Whenever we click a thumbnail, a bigger version of it is displayed in the picture previewer. This was just an example but you can integrate any jQuery plugin you want.
Once again, I want to remind you that the code is available on GitHub. Do you use jQuery plugins in your Ember apps? If you'd like to discuss them, feel free to comment in the section below.
https://www.sitepoint.com/how-to-integrate-jquery-plugins-into-an-ember-application/
import java.util.ArrayList;

import com.sequoiadb.base.Node;
import com.sequoiadb.base.Node.NodeStatus;
import com.sequoiadb.base.ReplicaGroup;
import com.sequoiadb.base.Sequoiadb;
import com.sequoiadb.exception.BaseException;

public class BlogRG {
    static String rgName = "testRG";
    static String hostName = "sdbserver1";

    public static void main(String[] args) {
        // Connect to the database
        String host = "192.168.20.46";
        String port = "11810";
        String usr = "admin";
        String password = "admin";
        Sequoiadb sdb = null;
        try {
            sdb = new Sequoiadb(host + ":" + port, usr, password);
        } catch (BaseException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // Print the current replica groups
        printGroupInfo(sdb);
        // Clean up the environment: remove the replica group left over from the last run
        if (isGroupExist(sdb, rgName)) {
            System.out.println("Removing the old replica group...");
            sdb.removeReplicaGroup(rgName);
        }
        printGroupInfo(sdb);
        // Add a new replica group
        System.out.println("Adding the new replica group...");
        ReplicaGroup rg = sdb.createReplicaGroup(rgName);
        printGroupInfo(sdb);
        System.out.println("There are " + rg.getNodeNum(NodeStatus.SDB_NODE_ALL) + " nodes in the group.");
        // Add three new nodes
        Node node1 = addNode(rg, 50000);
        Node node2 = addNode(rg, 50010);
        Node node3 = addNode(rg, 50020);
        System.out.println("There are " + rg.getNodeNum(NodeStatus.SDB_NODE_ALL) + " nodes in the group.");
        // Get the master and slave nodes of the replica group
        Node master = rg.getMaster();
        System.out.println("The master node is " + master.getPort());
        System.out.println("The slave node is " + rg.getSlave().getPort());
        // Stop the master node
        System.out.println("Stopping the master node...");
        master.stop();
        // Wait until a new master has been elected
        while (rg.getMaster().getPort() == master.getPort()) {
            try {
                Thread.sleep(2000);
            } catch (Exception e) {
            }
        }
        // Show the newly elected master node
        System.out.println("Re-selecting the master node...");
        System.out.println("The master node is " + rg.getMaster().getPort());
    }

    private static void printGroupInfo(Sequoiadb sdb) {
        ArrayList names = sdb.getReplicaGroupNames();
        int count = 0;
        System.out.print("The replica groups are ");
        for (Object name : names) {
            count++;
            System.out.print((String) name + ", ");
        }
        System.out.println("\nThere are " + count + " replica groups in total.");
    }

    private static boolean isGroupExist(Sequoiadb sdb, String rgName) {
        ArrayList names = sdb.getReplicaGroupNames();
        for (Object name : names) {
            if (rgName.equals((String) name))
                return true;
        }
        return false;
    }

    private static Node addNode(ReplicaGroup rg, int port) {
        if (rg.getNode(hostName, port) != null)
            rg.removeNode(hostName, port, null);
        Node node = rg.createNode(hostName, port, "/opt/sequoiadb/database/test/" + port, null);
        System.out.println("Starting the node " + port + "...");
        node.start();
        return node;
    }
}

The above code adds a new replica group to the database and adds three nodes to that group; the data group automatically elects a master node, and after the master node is stopped, a new master is re-elected within the replica group. The result of running the above code is:

The replica groups are SYSCatalogGroup, datagroup, testRG,
There are 3 replica groups in total.
Removing the old replica group...
The replica groups are SYSCatalogGroup, datagroup,
There are 2 replica groups in total.
Adding the new replica group...
The replica groups are SYSCatalogGroup, datagroup, testRG,
There are 3 replica groups in total.
There are 0 nodes in the group.
Starting the node 50000...
Starting the node 50010...
Starting the node 50020...
There are 3 nodes in the group.
The master node is 50000
The slave node is 50010
Stopping the master node...
Re-selecting the master node...
The master node is 50020

As you can see, when the program starts running there are three replica groups in the database: testRG is the useless replica group left over from the last run, and the other two are the database's default replica groups. The redundant testRG replica group is removed via the removeReplicaGroup() method, a new testRG replica group is added via createReplicaGroup(), and three new nodes are added and started within the new group via createNode() and start(), on ports 50000, 50010, and 50020. The getMaster() and getSlave() methods are used to get the master and slave nodes within the group, and stop() is used to stop the master node on 50000. After the master node stops completely, a new master node, 50020, is automatically elected within the group. After the run, check the details of the testRG replica group through the shell console:

>rg.getDetail()
{
  "Group": [
    {
      "HostName": "sdbserver1",
      "dbpath": "/opt/sequoiadb/database/test/50000",
      "Service": [
        { "Type": 0, "Name": "50000" },
        { "Type": 1, "Name": "50001" },
        { "Type": 2, "Name": "50002" }
      ],
      "NodeID": 1053
    },
    {
      "HostName": "sdbserver1",
      "dbpath": "/opt/sequoiadb/database/test/50010",
      "Service": [
        { "Type": 0, "Name": "50010" },
        { "Type": 1, "Name": "50011" },
        { "Type": 2, "Name": "50012" }
      ],
      "NodeID": 1054
    },
    {
      "HostName": "sdbserver1",
      "dbpath": "/opt/sequoiadb/database/test/50020",
      "Service": [
        { "Type": 0, "Name": "50020" },
        { "Type": 1, "Name": "50021" },
        { "Type": 2, "Name": "50022" }
      ],
      "NodeID": 1055
    }
  ],
  "GroupID": 1023,
  "GroupName": "testRG",
  "PrimaryNode": 1055,
  "Role": 0,
  "Status": 0,
  "Version": 4,
  "_id": { "$oid": "53D9D38E14A63A88C621EDd8" }
}
Return 1 row(s).
Takes 0.4716s.

You can see that there are three nodes in the group; PrimaryNode is 1055, i.e. 50020.
http://www.itworkman.com/145037.html
Why we need Python in the Browser In his PyCon 2012 keynote speech on Sunday, Guido van Rossum covered many of the open "Troll" questions related to the Python community. I've had occasion to either complain about or defend all of the topics he covered, and he did a wonderful job of addressing them. The issues could largely be divided into two categories: those that come from a fundamental misunderstanding of why Python is wonderful (e.g. whitespace), and those that are not an issue for the core Python dev team, but are being actively worked on outside of core Python (e.g. event loop). And then there's the question of supporting Python in the web browser. I honestly thought I was the only one who cared about this issue, but apparently enough people have complained about it that Guido felt a need to address it. His basic assertion is that the browsers aren't going to support this because nobody uses it and that nobody uses it because the browsers don't support it. This is a political problem. Politics shouldn't impact technical awesomeness. The fallacious underlying assumption here is that modern HTML applications must be supported on all web browsers in order to be useful. This is no longer true. Web browser applications are not necessarily deployed to myriad unknown clients. In a way, HTML 5, CSS 3, and DOM manipulation have emerged as a de facto standard MVC and GUI system. For example, many mobile apps are developed with HTML 5 interfaces that are rendered by a packaged web library rather than an unknown browser. Client side local storage has created fully Javascript applications that require no or optional network connectivity. There are even situations where it may not be necessary to sandbox the code because it's trusted. Many developers create personal or private projects using HTML 5 because it's convenient. Convenient. It would be more convenient if we could code these projects in Python.
Web browsers can be viewed as a zero install interface, a virtual machine for these applications. Such a VM has no reason to be language dependent. It is simply unfair to all the other programming languages and coders of those languages to say, “we can’t displace Javascript, so we won’t try.” Web browsers have evolved into a virtualization layer more like operating systems than single programs. While it is true that the most restrictive operating systems only permit us to code in Objective C, in general, it is not considerate to restrict your developers a single language or environment. It is time (in fact, long overdue) for Python to be supported in the browser, not necessarily as an equal to Javascript, but as an alternative. The web is a platform, and we must take Guido’s statement as a call to improve this platform, not to give up on it. Update: I just stumbled across and I can’t wait to play with it! Update 2: From the number of comments on this article, it appears that my article has hit some web aggregator. To those users suggesting python to javascript compilers, I’ve been a minor contributor to the pyjaco project, a rewrite of the pyjs library. It has the potential to be a great tool, and the head developer, Christian Iversen is very open to outside contributions. Let’s make it better! There are several technical reasons why browsers won’t do it, and the most prominent is “how are you going to connect the PythonScript that you’re proposing — which will doubtless lack most of Python’s libraries — with the existing JavaScript model?”. The people who are making in-roads are ClojureScript and CoffeeScript, and what they have in common is that they compile to JavaScript so that they can solve this problem. 
Some things get a little more clumsy, especially in the event model, because Python’s lambda syntax is not as powerful as JavaScript’s inline functions and has now been obviated in the one usage case where it was important (providing a function to map() / filter(), both taken over now by generators). So the registering of event handlers is going to be a bit of a sticking point, since you can’t say e.g. “element.onmouseout = def (evt):” in Python. This might be solved with a decorator syntax, @on(element, ‘mouseout’, True) or so, but PythonScript would have to be engineered to provide it before PythonScript was ever implemented in any particular browser. Is the PythonScript thread going to be separate from the JavaScript thread? Will they share the same thread? Can you send events to PythonScript event listeners from JavaScript? What about the globals that Python leaves all over the place — will those pollute the JS namespace as well? And how are we going to make it so that we can rapidly parse and execute PythonScript in most browsers? Those are all the sorts of problems that ClojureScript and CoffeeScript have addressed by compiling to JavaScript. Be cautious when making a problem sound much less difficult than it is. Just a quick note, not using a sandbox because the code is trusted seems a bit odd to me. Even code I trust could have been tampered with (in case of signed code the private key could have been stolen) and putting aside a security feature is not something I’d want to do. Anyway, thanks for that pythonwebkit link ! I think the solution is to support all languages. As a heavy javascript & occasional ruby developer, I say “why not?”. Though perhaps it’s not so political. Rather, a lack of man-hours to support maintenance of another scripting language while at the same time focusing on improving performance. “Why not?” Because it costs time to spec out, time to implement, time to document, and time to maintain. 
While all of this is going on other, more important tasks are not being worked on. “more important tasks” This is a value judgement, and clearly many people place python in the browser as a very important task. Actually JavaScript has become quite a big target for language to language compilers, and there are projects that provide the possibility to compile Python to JavaScript such as Pyjamas ( ). Off course this comes with some caveats, such as debugging being a pain but they will be solved in time (for my previous debugging example, by the browsers introducing SMAP support). So I don’t think JavaScript will be replaced in the browser anytime soon (it’s already a really awesome language), but I’m pretty sure there will be a lot of languages that compile to it (see emscripten, CoffeeScript, ClojureScript and others). Just my two cents I think about this, a time ago and create a drumbeat project: but, sadly, nobody follows this idea… No Thanks. As if javascript was not bad enough, we would now have pages refusing to work because of idiotic language design decisions meaning that a lack of whitespace on a line in the Python script would render the web app broken. There is a place for Python, but the web it is not. For Python support in Firefox and XULRunner, you can install the extension and then even include Python code in your HTML files if you want to: That’s funny — I was just thinking this exact same thing last night! I think that Python is well suited to this task. Plus, I think that it would help open up web programming to a lot more people, as they won’t have to deal with the confusion that is JS and its quirks. What about pyjs? Much as I love javascript, I’d be interested to see other languages in the browser. However, it’s a minor point, and probably not a show-stopper, but any language that uses whitespace for syntax (including CoffeeScript) is always going to be problematic in browsers as it can’t be minified fully. 
In larger applications this could have a big impact on file sizes. Minifying will be solved if the python script gets compiled into byte-code first at the server. I would love to see this happen too. But if this does happen I would like to see python take the Perl – Parrot approach. Make a VM for web broswers that can execute languages built for it. This way Python could be a language that runs on the VM, and other languages could come later. While I’m a Python fan, others are not and I’m sure they want to see their language in the browser also. Doing this is definitely technically harder. However, if we can overcome the political problem, I think we should solve the problem in a way that solves the problem for everyone and not just the python community. It sounds like you need to learn JavaScript. It is always going to be more powerful than Python in the browser. Why? The problem with pyjs and all the other “lets do a simple transform in a python source to javascript) is that you end up with something that doesn’t have python semantics so it is in my opinion of little value. To really have python on top of javascript you would have to write a complete python interpreter in javascript so you end up with a very very slow python which is also of little value. Changing a browser to support python could be pretty awesome but will take a long time and it would probably have to use pypy-sandbox (wich is not embedable right now and I think doesn’t have a jit). The last thing we need is another attack vector for a browser to be compromised. Virtualized and sandboxed or not, haven’t we suffered enough? And what of individual implementations of python? Browser S supports x-features half as good as Browser C, Browser F supports others, and of course Browser E has vendor extensions with DirectZ now with hardware acceleration (but only if you’re on overlord’s favorite OS version)!! What you’re suggesting is about as insane as VB in the browser. 
[tongue firmly in cheek] I think MS already tried that. :-P

The web doesn't need any more fragmentation. Microsoft and Google are doing enough without Python being added to the mix.

Please enlighten me as to the real problem you are trying to solve here. All I see in your post is "Man, Python is so hip and awesome! It's a much better language than JavaScript and it's 'unfair' that I can't use my language of choice. Browser teams, please dedicate months and months of time to solving this 'problem' because I just really love Python!" Right. Just learn JavaScript, bud. Programming languages are tools, not a goal unto themselves. You are not going to sway anyone without providing *real* arguments and demonstrating that this change would solve *real* problems. I don't see any argument in your post that even attempts to do so. Spend your time solving problems, not inventing new ones.

Just develop a Python-to-JS translator. GWT proved this approach works exceptionally well. You would need to develop a browser plugin to support debugging your Python code in your IDE. The browser plugin (only needed for development) provides an out-of-band channel for the debugger to communicate with the debugged code.

Even if this never happens, future JavaScript versions will be a lot like Python anyway.

+1 Ed S. My sentiments exactly. If we were to take the above and replace the words "Python" and "Py***" with "Ruby", "C#", or "my favorite language!", we'd call them a quack. You're not really solving a problem at large; you're just wishing it were easier to write web apps because your skill in Python (which I'm sure is extensive) doesn't port in its entirety to the client. Hey, let's bring back VBScript while we're at it… that'd make .NET devs happy.

This is obviously a good idea. Like Python, we can also port C/C++, Haskell, Common Lisp, Ruby… or whatever to the browser.
Thoses that don’t want all kind of languages (and any language one could want) available to the browser are just victim of their blob language (here it being javascript). Where the blob language is always enough and the best one for the job and all other languages are judged useless regardless of their features. But of course, browser can’t support natively all possible languages. This is the same for OS. Python or C are not supported natively for example on Linux or Windows. Python has an interpreter, C use a compiler. So a new language on the browser is no different. It just need an interpreter or compiler. This already work well for clojure (with clojure script), for java (GWT), soon for scala (scala GWT). I have also heard of python to js compilers, but I don’t know their exact status. Usefullness for different languages on the browser are obvious: - Many big developement teams like better statically typed language because of the IDE support and auto-documentation and additional checking it provide. - Language are for human, not browsers, OS or computer. IF it is possible to make another language that better fit some human needs so do it. Even if this new language is not usefull to all human… If enough developpers prefer to write code in X language, that is enough to implement it on whatever platform they want to use. You’ll be able to make a business out of it. - Browsers are very low level for some abstraction like having different behavior for each browser version/vendor, like managing image sprites or different text/images/resources depending of the locale. Browsers don’t provide this. Doing this on client side consume more resources for no reason. An appropriate compiler/platform/framework can perform theses optimizations on the go. - As we see more and more applications with heavy client code and access to the server limited to read/write distant data, we need languages that fit the domain of the application better. 
Some things are easier to write in Prolog, Lisp or Haskell than in JavaScript. Always use the best tool for the job. Always using JavaScript is like saying a hammer is the only tool to use, whatever the task.
http://archlinux.me/dusty/2012/03/13/why-we-need-python-in-the-browser/
HDF Group's web site at … (CHM/PDL-IO-HDF5-0.6501, 26 Jan 2014)

- PDL::IO::HDF5::tkview - View HDF5 files using perl/tk and PDL::IO::HDF5 modules
- PDL::IO::HDF5::Dataset - PDL::IO::HDF5 helper object representing HDF5 datasets
- hdf5.pd
- PDL::IO - An overview of the modules in the PDL::IO namespace (12 Oct 2013)

BIE::Data::HDF5 - Perl extension for blah with HDF5. "BIE::Data::HDF5 is an interface to operate Hierarchical Data Format 5. Now it only reads h5 files. Writing capability is coming soon. EXPORT: None by default. For developers, please check out the library file." (XINZHENG/BIE-Data-HDF5-0.02, 08 Jan 2013)
https://metacpan.org/search?q=PDL-IO-HDF5
24 March 2009 17:40 [Source: ICIS news] By John Richardson

LONDON (ICIS news)--The financial sector has indulged in an […] "Banks in […]"

In the polyester and many other industries, money seems to be flowing to companies producing goods that might be going into inventories.

"In Shaoxing, the government has set out three 'No's' for the local banks to avoid," writes Tom Orlick, a Shanghai-based freelance journalist, in the online research publication, the China Economic Quarterly (CEQ). These are no calling in of existing loans; no increase of collateral requirements on loans; and no imposition of additional requirements for companies wanting to take out new loans, he adds.

One beneficiary of local government support was troubled purified terephthalic acid (PTA) producer Hualian Sunshine Petrochemical, which received a reported yuan (CNY) 1.5bn ($220m) in aid.

Polyester producers are pledging to maintain operating rates of 75-80% in April and May. But just where is all their output going?

"You have visions of row after row of warehouses packed with garments, with plastic toys and with computers as […]"

JP Morgan Chase Asset Management, in its Ins and Outs newsletter of 16 March, highlights the danger of this waiting game. "[…] But if such timely growth in demand does not materialise, these policies are likely to feed a traditional supply-induced deflation.

"If such a deflationary dynamic does indeed materialise its effects are likely to be exported since […]"

This could lead to increased pressure on debtor nations, as deflation would increase the burden of their foreign debt, raising the likelihood of a classic liquidity trap, the bank warns.

The continued rally in petrochemical pricing – when indications a few weeks ago were that prices would […] – is now starting to make sense. If this "spare capacity" is exported by […]

Naphtha is unlikely to offer much support in the second half as supply of feedstock looks set to lengthen substantially.
(More details will be provided in a later Insight article.)

A big reason why chemical and polymer prices rallied from mid-January onwards was the recovery in naphtha, as refiners made operating-rate adjustments, argue several sources. Modest restocking by end-users also appears to have been a factor, as does, rather worryingly, speculation by traders in […]

The good news is that a lot of the new […]

Another factor behind the price rally seems to be widespread production discipline by petrochemical producers. "To a certain degree it's also been down to extensive refinery turnarounds in northeast […]"

Let's hope that all these imagined or real rows of warehouses stuffed with everything from shirts to toys to shoes to keyboards will be emptied by strong domestic demand. But as many as 30m migrant workers have lost their jobs due to the collapse in export trade. Some of these workers might end up re-employed on government construction projects, which make up the bulk of the stimulus package. But will they spend their new earnings if they are worried about finding work once the construction projects are over?

Many of these workers are also likely to be earning less than $5,000 a year, as does 90% of […] Incomes below this level mean that spending is concentrated on basic necessities rather than consumer luxuries, or relative luxuries such as washing machines and TVs. As in the […]

The government's pledge to spend the $123bn on such a scheme amounts to, according to one estimate, only $14.40 per person per year. China has "room to do more" to improve health, education and social security, says the World Bank in its latest quarterly report on China. Perhaps more announcements will follow, and, as the World Bank also points out, the huge stimulus package is already showing signs of bringing stability to the Chinese economy. GDP (gross domestic product) growth forecasts keep being revised down, however.
Professor Nouriel Roubini, the economist, warns that the country's GDP could grow by as little as 5%. Let's be optimistic and assume the Chinese government gets to its target of 8% growth for 2009. What would this mean for the global chemicals industry?

"Over the next few years Chinese demand is likely to be almost irrelevant because it will be overwhelmed by the collapse in demand everywhere else," writes the CEQ in another article. "Furthermore, since prices are a function of both supply and demand, the large increases in supply brought on in recent years to cater to Chinese demand are now weighing down prices."

It is really hard to be an optimist these days when every piece of evidence points to a very difficult economic environment for at least the next one to two years – very probably a great deal longer.
http://www.icis.com/Articles/2009/03/24/9202943/INSIGHT-Chinas-curious-path-to-economic-recovery.html
I'm writing an x86 assembly function that determines whether a string is a palindrome or not (ignoring the null terminator). This function is meant to return 0 if the string is a palindrome, and if the string is not a palindrome, it will return the comparison that failed (i.e. the index of the character on the left half of the string that didn't match). While it is successfully detecting which strings are and are not palindromes, it always reports 1 as the comparison that failed.

    .386
    .MODEL FLAT, C
    .CODE

    ; Determines whether or not a given string is a palindrome
    ; Uses:
    ;   ECX - pointer to start of string (incremented till halfway)
    ;   EDX - pointer to end of string (decremented till halfway)
    ;   AL  - dereferenced character from ECX for comparison
    ;   BL  - dereferenced character from EDX for comparison
    ;   ESI - index where comparison failed in case strings are not palindromes
    ; Arguments:
    ;   [ESP+4] - pointer to string to test
    ;   [ESP+8] - length of string
    ; Returns:
    ;   0 = string is a palindrome
    ;   > 0 = string is not a palindrome; return value is the # comparison
    ;         that failed (e.g. AABAAA would return 3)
    ; C prototype: int __cdecl palin(char *str, int len);
    palin PROC
        push ebx
        push esi
        ; Load ECX with a pointer to the first character in the string
        mov ecx, dword ptr [esp+12]
        ; Copy the pointer into EDX then add the length so EDX points to
        ; the end of the string
        mov edx, ecx
        add edx, dword ptr [esp+16]
        xor esi, esi
    loop0:
        ; Begin loop with decrement of EDX to skip the null terminator
        dec edx
        inc esi
        mov al, byte ptr [ecx]
        mov bl, byte ptr [edx]
        cmp al, bl
        ; Comparison fail = strings cannot be palindromes
        jnz not_palindrome
        inc ecx
        ; If start ptr >= end ptr we are done, else keep looping
        cmp ecx, edx
        jl loop0
        ; Return 0 = success; string is a palindrome
        xor eax, eax
        jmp end_palin
    not_palindrome:
        ; Return > 0 = fail; string is not a palindrome
        mov eax, esi
    end_palin:
        pop esi
        pop ebx
        ret
    palin ENDP
    END

    #include <stdio.h>
    #include <string.h>

    int __cdecl palin(char *str, int len);

    int __cdecl main(int argc, char *argv[])
    {
        int ret;
        if (argc < 2) {
            printf("Usage: pal word");
            return 0;
        }
        if (ret = (palin(argv[1], strlen(argv[1])) > 0)) {
            printf("%s is not a palindrome; first comparison that failed was #%d\n", argv[1], ret);
        } else {
            printf("%s is a palindrome\n", argv[1]);
        }
        return 0;
    }

    C:\Temp>pal ABCDEEDCBA
    ABCDEEDCBA is a palindrome

    C:\Temp>pal ABCDEDCBA
    ABCDEDCBA is a palindrome

    C:\Temp>pal AABAAA
    AABAAA is not a palindrome; first comparison that failed was #1

There are a few bugs in your code... The one you are looking for is here:

    if(ret = (palin(argv[1], strlen(argv[1])) > 0))

This should emit a warning in a good C/C++ compiler, I think; what are you using? Do you use -Wall -Wextra (these are for gcc or clang; for other compilers, check their documentation)? It's doing ret = (res > 0), and (res > 0) is a boolean expression, so it's 0 or 1. You probably wanted if ((ret = palin(argv[1], strlen(argv[1]))) > 0), and this shows why it's sometimes better to KISS and split these things into two lines.

Other bug: jl loop0 should be jb. ecx and edx are memory pointers, thus unsigned. If your data were allocated on a 0x80000000 boundary, then jl would fail at the first cmp.

And you can simplify the exit logic:

        ; Return 0 = success; string is a palindrome
        xor esi, esi        ; fake "esi" index = 0, reusing the "not palindrome" exit code fully
    not_palindrome:
        ; Return > 0 = fail; string is not a palindrome
        mov eax, esi
        pop esi
        pop ebx
        ret

And a final style nitpick: for jnz not_palindrome I would use the jne alias, as you are comparing two chars for equality, not for "zero" (it's the same instruction, just a different alias; I tend to use both, picking the one that better matches my "human" description of the functionality). Also, you can do cmp al, [edx] without loading the second char into bl (saving 1 more instruction and not clobbering ebx, so you don't need to push/pop ebx then, saving 2 more). If you insist on loading the second character into a register just for "easy to read" code, you can still use ah for the second char, removing ebx completely from the code.
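For readers following along, here is a small Python reference model (my own sketch, not part of the original post) of the contract the routine is supposed to implement: 0 for a palindrome, otherwise the 1-based index of the first left-half comparison that failed. It is handy for generating expected values to test the fixed assembly against.

```python
def palin(s):
    """Model of the asm routine's contract: 0 if s is a palindrome,
    else the 1-based number of the comparison that failed
    (e.g. AABAAA -> 3, because the third pair B/A mismatches)."""
    i, j = 0, len(s) - 1
    n = 0
    while i < j:
        n += 1
        if s[i] != s[j]:
            return n
        i += 1
        j -= 1
    return 0

print(palin("ABCDEEDCBA"))  # 0 (even-length palindrome)
print(palin("ABCDEDCBA"))   # 0 (odd-length palindrome)
print(palin("AABAAA"))      # 3 (third comparison fails)
```

Once the `if (ret = (... > 0))` precedence bug in the C driver is fixed, its output should match this model rather than always reporting #1.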
https://codedump.io/share/Lk0LefRHVs1R/1/palindrome-function-always-reporting-offset-1-for-error
in reply to Re^4: Net::LDAP help with distinguished name in thread Net::LDAP help with distinguished name

Although you complain that the responses are vague, they are not. The API exposed by Net::LDAP is pretty much a Perl equivalent of the OpenLDAP C API. As such, if you want to be able to get the information for a single entry, you need to go through the bind and search steps.

The bind can be anonymous, if your LDAP server allows it and allows retrieval of the object whose DN you already have. If not, then you need to bind with that DN and provide the password associated with it. Once you do the bind, you can do a search for the DN. Once you have the search, you have the attributes and values associated with it.

Even if you do a non-anonymous bind, what returns in Net::LDAP is a connection variable, not a hash or array of information on the entity that made the connection.

Your experience with Windows and Visual Basic has led you to believe that there is some magical way to say, "I have this DN, give me the attributes associated with it". Although the API you were using made this easy, behind the scenes what was happening is what you have to do more explicitly if you were using, say, C, Perl, or something else.
$entry->get_value('sn'); return ($fullname); } else { return ""; } $ad->unbind; } my $fullname= &getLDAPInfo("JUSER"); print $fullname. "\n"; [download] This works perfectly. However, if I change the filter like so: my $filter = "distinguishedname=$targetuser"; [download] And pass it a distinguished name like so: my $fullname= &getLDAPInfo("CN=JUSER,OU=ACCT,DC=MYCOMPANY,DC=COM"); [download] it returns nothing. I have tried to build the filter as both distinguishedname= and dn= to no avail. If, as you say, I still need to do the search, please help me understand how to construct the filter to search for a distinguished name. Thanks,. behind the scenes what was happening is what you have to do more explicitly if you were using, say, C, Perl, or something else ;) Well, in perl , DBD::LDAP looks pretty slick Yes! No way! Results (109 votes). Check out past polls.
https://www.perlmonks.org/index.pl/jacques?node_id=951800
CC-MAIN-2018-26
refinedweb
471
58.11
focusing on the problems of punish_severely... To the layman this might satisfy his bloodthirst for getting back at the terrorists - yeah, we'll kick their b... The released open source code is however only political propaganda showing what the author wants you to think he intends to do. The implementation of this function has not been released in the open domain - we do not know how the punishment is carried out, when, to what extent, nor do we know who is punished, and to that effect not even how many, or if it has any undisclosed side-effects. Run once, this function might at first opportunity, first test decide to give it in to your friends, family, way of life, or the whole of society. In a democracy (or well behaved program), the function ought to have a contract with the other people (or here, globals) in terms of what, why and to whom it intends to act upon. Neither does the function indicate any intent of implementing any comprehensive strategy for dealing with terrorists beyond revenge. We do not know which criteria has decided that a person is a terrorist (assuming there is a choice in the implementation of the botched comparison), and at what stage this was determined (can a person be accused or suspected of terrorism, involuntarily linked, or are they all terrorists regardless - black'n'white). Surveillance, sting operations, interrogation, gaining useful information from the terrorist appears absent, and worryingly unimportant. Nothing is returned by the function and I would assume if there was any attempt to gain information in the implementation, the presumed authors (government) would do its utmost to shield (encapsulate) this information from other presumed terrorists as well as citizens (though they probably lost out on this when they chose C as their political platform). 
Hopefully, a few rounds of reviews and critique will add the necessary contracts, pre- and post-conditions, as well as define and declare the invariants upon which we're not willing to compromise in this implementation.]]> silly rabbit. person, terrorist, and punish_severely are each preprocessor macros. clearly this program prints itself out, then allows the user to play ascii tetris.]]> Why does everyone assume terrorist is non-zero? Maybe nobody gets punished.]]> No one has speculated on the ramifications of: if (terrorist == person) { waterboard_for_more_information(); } else { exit(-1); } which appears to be the algorithm for exploiting known terrorist Khalid Shaikh Mohammed, et. al. The apparent result: useful intelligence that was employed to deter other attacks. I am neither condoning nor condemning, but it's pretty clear it needs to be discussed. I was scared at first, but then i realized that because punish_severly() takes no arguments, it can only punish global terrorists, not local terrorists. So even though person is being mistakenly assigned to the value of a terrorist (not a pointer to a terrorist) no matter how you define terrorist, person will not be punished, (unless they are a global person). I feel safer already. I'm sure the point here isn't that the poor code means that everyone gets punished as a terrorist, but that the bill punishes everyone regardless of their terrorist status. Presumably that's why punish_severely takes no argument...]]> punish_severely() is a black box. Putting coding errors aside, the problem I have with this code is that we can't see inside the punish_severely() function. Submit a FOIA request to see the function and you'd be lucky to get a page full of blacked-out text. What controls do they have in place? What happens to person inside that function?]]> H.R.6166 is the worse kind of law. It is a compromise that cuts a road through the law. We have a Congress of "Ropers". 
The following is quoted from the movie "A Man for All Seasons" (1966).! Oh, for Pete's sake. Nobody has even mentioned the obvious error yet! The blatant and obvious error is that it's written in C; it should be written in Prolog.]]> Any particular reason that we're not testing whether punish_severely() had any effect (return code)? Oh, never mind. We're talking the US here. No question will be asked about the effectiveness of measures.]]> Two days ago there was this news of a Spanish woman detained by Mexico police because she was carrying some bullets on her check-in baggage. She went to Riviera Maya to spend her honeymoon with her husband. No intelligence signals her a terrorist, it might has been just a bad joke or someone trying to put her in trouble. I do not know. Should the mexican officials start the severe punishment just in case?]]> Y'all seem to be from the reality-based community, a group that has pretty much been deemed irrelevant to any policy decisions of this administration, or its pet Congress.]]> @C Gomez, AJM Ever looked the "witch trials" (EUrope and America), it had a nice simple proces, 1, somebody accused you for whatever reason of being an instrument of the devil etc. 2, There was an ad-hoc hearing to decide if the accusation had merit. If it was decided there was merit you where put to trial by a test. 3, You where found guilty by surviving the test which was concidered to not be survivable by a normal person. Having been found guilty you where usually killed in an entertaining way for the public. The normal method being burnning alive, which could be made to last for some considerable period. In fact it was considered the more you screamed the more the devil was being tourtured. Also to hear the screams was to have your soul cleansed, so a good turn out of the local populace was encoraged (especially as only an instrument of the devil would not attend, and therefore qualify as the next entertainment). 
So all around a win win process for the accusor the self apointed "forces of God" and free entertainment for everybody else... The majority of the witches would appear from the records to be the old, the ugly, and those suffering from idiocy, plus a few others who had no doubt upset somebody in authority. Oh and an easy way to upset somebody in authority was to try and help somebody who was under the "Curse of God" or as we would more correctly call it these days "ill". It is possibly why we have the expression "witch hunt" in the common parlance for an unjust and unfair process being brought to bear on an individual. As people are often heard to remark "it was oh so much simpler in the past". If you want to partake of the fun try the interactive Salem Witch Trials, @ C Gomez "Clearly, it is possible to identify some enemy combatants. You simply say no one could possibly identify enemy combatants, and if they could, they'd simply identify anyone they wish." Ten points deducted for missing the point again. Ten additional points deducted for reading the the phrase "no one could possibly identify" into what I wrote. The point, broken into two simple component parts is as follows: Given: Someone (whether appointed, elected, or simply government employed) makes an assertion that a person in custody is an enemy combatant (fine, correct me on the use of "terrorist" on principle, you started it with "war criminal"). Your assertion: "Why people think war criminals deserve criminal rights is beyond me." 
From this I assume that you mean (feel free to correct my error if you meant some other criminal rights, or you meant that rights are revoked after a fair trial that includes these rights): "Criminal rights" equals some or all of the following: "rights to legal council, the right to decline self incrimination, the right to a fair trial, the right to examine and respond to the evidence against them, are all about evaluating the assignment of the value "war criminal," or any other accusation against them." I conclude from your statement that you mean to say it is unthinkable that the afformentioned criminal rights should be given to someone who is detained as an enemy combatant (or substitute various other euphemisms for the same thing). I assert that this is incredibly intellectually lazy. Such a system is fundamentally broken. The initial assumption that the detainee is in fact an enemy combatant or other euphemism has been asserted with no oversight, no check, no balance, and no hearing is an entirely broken process. It is not a matter of whether someone *can* tell or not, it is the matter that you would just accept that someone (presumably with authority) asserted that the person is a terrorist and is not required to give proof. For that matter, who makes that assertion and how they are even vetted as competent to make that decision is not defined. Being in the government or the military does not make humans immune to poor judgement, alterior motives, corruption, or simple error. In fact, due to stress and confusion, humans are most prone to such errors on the battlefield. I agree entirely that combatants who hide among the civilian populous are foul and evil human beings who cause not only direct harm to their targets, but intentionally encourage additional harm to innocents by those who attempt to defeat them. In my view such enemy combatants should be executed. 
The point you miss is that an assertion that someone is an enemy combatant does not mean that they are. Think of it in Alice, Bob, and Trent terms for a second instead of starting with the assumption "they are war criminals." If you attempt this exercise for a moment, you will see that there is no trust model at all. There is no single point of responsibility or accountability for the accusation of "enemy combatant" status. No one will pay for a mistake and there is no rigour to how the accusation is made. Trials and evidence are not luxuries that should only be permitted for captives who are accused of crimes that you don't find as morally distasteful. If anything, your arguments on the matter reenforce my position that enemy combatants *must* be tried properly with "criminal rights." Clearly those close to the matter do not have sufficient perspective to be objective.]]> Hate to risk mangling a good joke, but it seems necessary to address some of the issues C Gomez raised. Citations are to sections of: as engrossed. Completely aside from questions of principle about whether the United States should be doing this at all, in the language of Crypto-gram, S.3930 replaces the normal decision procedure about whether someone is or isn't guilty with one likely to produce a greater rate of false positives. For example, it restricts the defendant's ability to know the evidence against him (§§ 949d(f); 949j(c), (d)), it allows admission of statements obtained by coercion under certain circumstances (§ 949r(c), (d)), it replaces the jury of peers with one of military officers (§ 948i), reduces preremptory challenges (§ 949f(b)), and allows only restricted appellate review of decisions (§ 950g). Furthermore, to say these people are terrorists or war criminals is inaccurate. Until you try them and find them guilty--that is, until the decision procedure produces a positive result--you don't know whether they're bad guys or innocent. 
If the tribunal returns a positive result, but a standard criminal trial would return a negative result, well . . . I'll leave it to you to decide what that means. Finally, S.3930 allows the executive branch--not the judiciary--broad discretion to apply this new decision procedure. It applies to any non-US citizen (§§ 948c; 948a(3)) who is an "unlawful enemy combatant". There are two ways to be an unlawful enemy combatant. First, you can engage in hostilities against the United States or one of its allies (§ 948a(1)(i)) without being a lawful enemy combatant (§ 948a(2)). A lawful enemy combatant, roughly translated, seems to mean someone in uniform. Or second, even if you don't engage in hostilities, or, presumably, even if you were in uniform, any "competent tribunal" that either the President or the Secretary of Defense establishes can decide you're an unlawful combatant anyway. (§ 948a(1)(ii)) I don't know whether declaring a citizen an "unlawful enemy combatant" has any legal consequences, but if so, they'd be outside S.3930 because military tribunals have jurisdiction only over alien unlawful enemy combatants. (§ 948c)]]> What amuses me is the structure of punish_severely() punish_severely() is a global function that takes no argument, and returns no status back to the caller. punish_severely() appears to be a continuous process that cares not for specific instances of people, terrorists, or controls.. As far as I can tell, the program works *exactly* as intended. After all, if there are NO terrorists, there is no need to call punish_severely() @ C Gomez: Thank you for your response. I agree with your view that attacking civilians is one of the most despicable acts one can commit. My problem with the bill is that, again, it applies not only to PROVEN war criminals/UEC, but everyone SUSPECTED of being such by the government. The two classes are distinctly not equal. 
The bar set by the bill for labeling a person an UEC is far too low, far too easily abused.]]> @Benny: Appreciate your giving me something to call you. I'll concede mistakes are made, and it is awful and intolerable. But there is a difference between an enemy combatant and Mr. Arar. An enemy combatant doesn't represent a standing army of a state, doesn't wear a uniform. There are clearly, _clearly_ folks who are enemy combatants. Many of them are killing civilians in Iraq and Afghanistan. They have exited human civilization. There is no reason ever to turn your anger on civilians. Put on a damn uniform and attack military targets. You and I agree about Mr. Arar. That's not enough for me to say there's no way to ever discern an enemy combatant. And the Supreme Court was right. Congress should have a hand in constructing this important body of law. @Bozo: Your theory of Democrats as Bread and Republicans as Circuses is debatable. I reject out of hand that "Clinton Democrats" did any kind of social spending that was good for the economy. I would point to welfare reform as his greatest accomplishment. Surely, we live in a political age where you can expect the other party to support your ideals (broadly speaking), in an effort to retain power and stay centric. Mr. Schneier forecasts this himself, expecting a Democratic administration more likely to perform an Orwellian overhaul than a Republican one. I tend to agree, given current political state. The 90s boom was based pure and simple on a massive expansion of the economy powered by the creation of new sectors out of thin air. The only thing Clinton did that had any measurable effect was direct the DOJ to sue MSFT for breakup. Check the economic fundamentals for yourself. It's not a coincidence that the economy began to slow down just then. Does that mean I think Republicans are great on economic matters? No, not these Republicans. 
Since 2000, they've not been "classic Republicans", and I can go into a deeper analysis on why. Really, that's getting off topic and not the point. The point is neither party is interested in solving the problems that plague this country. So I vote with my pocketbook. The more I can keep government out of it, the more I can save for my future. I have to because they aren't doing it for me (and I don't expect them to. Just get out of my way, please). Fortunately, I realized this enough years ago that I'm pretty well on my way. @Bozo and @Benny: I appreciate that your responses were insightful and promoted meaningful discourse. Please accept mine in the same tone. @AJM: You argued a nice straw man. Clearly, it is possible to identify some enemy combatants. You simply say no one could possibly identify enemy combatants, and if they could, they'd simply identify anyone they wish. That's clever, cute, and oh so typical. However, it's quite easy to identify some. For example, terrorists opening fire on Marines in Iraq. Well, you aren't wearing a uniform. You aren't part of an army we can report back to with your name, rank, and serial number. You are fighting illegally, in fact. My moral distaste kicks in when someone attacks civilians in Iraq. That's beyond any form of humanity. That is also different from the word terrorist, which is not language the bill uses. Determining who is a prisoner of war and who is an enemy combatant is very important, as your rights under the several "Geneva Conventions" make those distinctions as well.]]> "Why people think war criminals deserve criminal rights is beyond me." Missing the Point, 10 yard penalty, first down. The point is that the value "war criminal" or "terrorist" is declared and assigned to a person *before* any evaluation takes place. 
Criminal rights like the rights to legal council, the right to decline self incrimination, the right to a fair trial, the right to examine and respond to the evidence against them, are all about evaluating the assignment of the value "war criminal," or any other accusation against them. Otherwise, terrorists are terrorists because it says here that they are terrorists. They don't deserve rights, because it says right here that they are terrorists. Obviously they are all terrorists here because this is where we detain terrorists, and terrorists don't deserve rights. Look if we start giving them rights, then the terrorists win, I mean only the rights of terrorists are taken away, and if they are here they are terrorists, it says so right here. Only a terrorist would question that logic.]]> Does a person need to commit a terrorist act to become a terrorist? If the answer is yes we have a problem with code as punishment will be a consecuence of their actions. It is ok for them to be punished but it is unfortunate that they already succeeded on their plans. Given that many terrorists are kill by their same actions (i.e. Sept-11) or commit suicide afterwards (i.e. March-11), punishment after the action has been accomplished may be a bit difficult if they are already dead. Of course their remains may receive some treatment their religions might be against. Though this is more of a warning call for would-be attackers. If the answer is no they any of us can be punished as alleged terrorists. A better approach would be to use the function: if( planning_a_terrorist_atack(person) ) Not sure severe punishment may change that person's feelings (do you remember clockwork orange?). I am afraid I did not like much of the code, besides the errors other readers pointed out. @sidd: Did you intetionally avoid the exit(); after invade_country(); ? Is it included on it? HR 6166 == Enabling Act (v 1.0)]]> @ C. 
Gomez Of bread and Circuses, I'd say the Democrats are the party of bread and the Republicans are the party of Circuses. Sure, they'll both spend your money and pocket a cut of it, But spending on Bread like the Clinton Democrats is good for the economy: spending here on infrastructure and people takes advantage of economic multipliers. Spending on Military Circus theater is a drain and a danger, doubly so when we have an arsonist in charge of the fire department,]]> Oops, forgot to enter my name for the post at 9:42 AM.]]> @ C Gomez: Sub-section ii of the definition of Unlawful Enemy Combat." So all it takes to be labeled a war criminal is for a tribunal convened by the administration to call you one. Is that your definition of a war criminal? If you believe that government agencies NEVER make mistakes, i recommend looking into what happened to a gentleman named Maher Arar.]]> #include "watchlist.h"]]> The exit(-1) part of the code normally indicates that the application has failed. So the program states that all persons become terrorists and get punished or the application fails (presumably because the terrorist accidentally had a value of 0)]]> Cute... and funny. Doesn't seem to apply to the actual text of the bill, however. Why people think war criminals deserve criminal rights is beyond me. There is a major difference between a criminal committing theft, murder, rape, etc., and a terrorist whose sole goal is to kill as many of a certain type of civilian as possible just because. This differentiation is extremely important. To have decided your recourse for whatever wrongs you feel have been done to you is to attack innocent civilians, which is a clear violation of the very Geneva Conventions the bill is implementing, is really just a form of genocide. It's killing people because they are a certain race or nationality. Killing Americans, killing Spanish, killing English. It's wrong... it's outside of civilization's rules. 
There is nothing in the world that could have been done that merits specifically targeting civilians for execution, and that is what terrorists do. Not that we really see much of these supposed terrorists. They are either very stupid or dead or captured. Not much terror going around when you try to attack once in a blue moon. To say coders or left-brained people don't vote for Republicans is ridiculous. The party has made enough missteps that the pendulum will probably swing to the Democrats very soon, but really what's the difference? Both parties are champions of waste, fraud, and government domination of everything. There's very little I can do about it except try to keep more of my money so I can take care of myself when I need to. Both parties win elections on 'Bread and Circuses', promising what the government will give to you... a system headed for inevitable borrowing and collapse. Americans have not been taught their civics, and don't mind the federal government's scope is well beyond the simple clauses of Article I, Section 8.]]> There is only one equal sign. if (person = terrorist) it is asigning the value "terrorist" to "person" variable and the statement will be different than null. so, every person is a terrorist XD .. good joke!]]> If we assume that persons aren't instantiated as terrorists than there must also be code running somewhere that turns a person into a terrorist. Therefore there is a race condition. If the turn_terrorist() code runs first everything is ok, but if this code runs first the person is ignored even though it is a terrorist.]]> Given the mistakes in the earlier parts of the code maybe the exit (-1); Was ment to signify removal of the person permanently, After all GWB is from the hanging state and he sure has trouble with what he says / means...]]> @Jamie Gordon: I think the rationale of torture is: "we torture to make them confess that they are terrorists, but that's all right since they are terrorists". 
I didn't write the rutine to reflect any personal opinion on the use of torture or my moral standards - but to reflect what seems to be the "correct" application of torture (well in the dark ages).]]> @Erik N Even if the suspect confesses at the earliest opportunity you've tortured them twice already! Excessive? I hope you're not involved in law enforcement anywhere.]]> "Given this is C code, the condition "person = terrorist" is an assignment operation, not a logical comparison; it will always evaluate to true (given that "terrorist" is a non-null value), hence everyone is a terrorist." Yes, correct. So what's the error you're talking about, terrorist ;-)]]> This is the correct torture code: do { torture(suspect); } while(!confessed(suspect)); torture(suspect);]]> the exit(-1) call just shows that the US goverment uses VMS to run their security programs. Quite sensible actually. Security by obscurity and so on... ;-) terrorist is undefined. Plain and simple. Now define it?]]> public boolean isTerrorist(Person p) { return p.isUnderArrestForTerrorism(); } There is another obvious flaw. This code is being made available for peer review, thus it cannot be seriously considered by a government for a production environment.]]> Now that we have all satisfied our jolly duty ritually to ridicule the threat, we can hunch safely in our dens and not risk doing anything about it.]]> This is just classic commentary here. I haven't laughed this hard in awhile. Sigh, I needed that. Thanks everyone.]]> Ah wait, silly me. I bet punish_severely() takes care of repopulating the terrorist pool, too.]]> Assuming punish_severely() decrements the number of people--not unreasonable given § 950i of the bill--the code will terminate only if the number of people reaches zero. The negative exit code suggests the administering process cannot properly sustain its operation if the available pool of potential terrorists empties. 
Presumably there's a mechanism, not shown in this snippet, to repopulate the pool. let us give the author some credit suppose he knew roughly what he was doing what does this code do? it always executes punish_severely() unless the the quantity 'terrorist' is a (boolean) FALSE or an (int) zero. in either of these cases, the code admits failure. so it might be replaced with if terrorist punish_severely(); else exit(-1); so, presumably the code has previously set the variable 'terrorist' to some value and is checking it before calling the punish_severely() routine. if the variable is set to false or zero, it exits. but if the code is going to exit anyway, why would it set this variable to false or zero at all ? why would it not exit on the earlier assignment ? eg let us say the variable was set thus terrorist = terrorist_match(x,y,z....) why would the terrorist_match function not exit instead ? so it seems that the terrorist_match function must return and some other process must complete before the code presented. what could this code be ? well perhaps punish_severely needs to have a stack of punishable items set up before being called to select the next item for punishment. no argument is passed to the punish_severely routine.so perhaps 'terrorist' is a global that the routine knows about. but that would be a)inefficient b) perhaps unnecessary perhaps, every time the value 'terrorist' is set nonzero or nonfalse, someone gets punished severely. an efficient punish_severely() routine might have a stack of people (or animals... or ...) to be punished and merely pop one off the stack and punish it. i might suggest a similar piece of code if (might_be_WMD_somewhere) invade_country(); as you notice, invade_country() takes no arguments either... so it probably behaves similarly to punish_severely... just pop a country off the stack of invadable oil rich countries and... i must stop this line of retrograde analysis. my neuron hurts. my backup neuron is out drinking. 
sidd ]]> An excellent example of why the -Wall flag ought to be used!]]> > Also, exit(-1) is being used in the context of success, whereas > many OS's expect zero return values to mean success. Clearly, if they're not finding any terrorists then the program isn't successful. So I think exit(-1) is the correct thing in this case.]]> According to that C code frgament: 1) Everyone is a terrorist (obviously written by G. Bush); use == (comparison) instead of = (assignment) 2) Who gets punished? 3) exit(-1) would terminate the program, which is probably not the desired intent as there are always more people to test! 4) Also, exit(-1) is being used in the context of success, whereas many OS's expect zero return values to mean success. 5) what about a check for nobody? i.e. person or terrorist == Null Terrorists are real unless declared integer.]]> We also don't know what types person and terrorist are. Given that the author meant comparison, not assignment, if person and terrorist are ints, then the comparison pretty much behaves intuitively. If they're floats or doubles, then you don't really have a good idea if they're equal or not due to potential rounding errors in floating point representation.]]> Who do we punish_severly()? Obviously not the person, otherwise we'd have passed him (her?) in as a parameter. So it doesn't really matter that the if() is an assignment. We must just punish everybody. Or maybe we just pick a random person and punish them.]]> Aside from the assignment that we'd hope would actually be an equality comparison: It should enclosed in an infinite loop, since we're constantly checking everybody over and over again, whether or not we need to. And it should iterate over the entire set of persons, as there's more than one. Nobody is immune from suspicion. And exit(-1) isn't really the accurate alternate response. If not terrorist, then suspect_severely(). Who knows, in the next loop, perhaps things will have changed.]]>
https://www.schneier.com/blog/archives/2006/10/torture_bill_as.xml
Ext JS 4 & Sencha Touch?

Hi,

Good question. Same situation for us: we already created a complex technical application with Ext JS 3.0. We will migrate to Ext JS 4.0. New requirement: people will use the application on desktop PCs and, for sure, on the iPad or similar devices. We won't rewrite the complete application using Sencha Touch, but we need support from Ext JS 4.0 to handle gesture events if the Ext JS 4.0 app is running on mobile Safari or similar. Any info on this?

I have tried the sandbox demo for Ext JS 4, which lets you use Ext JS 4 as a sandbox inside Ext JS 3. Is there any other way around this, to have Sencha Touch as the sandbox (or an alternate namespace, as the sandbox does), so that we can use the gesture events with Ext JS 4? This would be a lot more flexible for using Ext JS 4 on a tablet device, where we would want the same user experience on the tablet and the PC.

ExtJS doesn't have touch gestures. If you want an app that works on desktop computers and mobile devices then yes, you will need to develop a version with each of ExtJS and Sencha Touch. To my understanding, Sencha is working on parity between the two, meaning minimal syntax differences.

About this hard separation between a framework for touch devices and a framework for desktop PCs: I create an application to be used in a browser. That's my minimum requirement. The company I'm working for has created plenty of applications for our customers. The customers don't want to pay for a complete new user interface that can be used on a tablet PC. They want to pay a small amount to give the existing user interface a better user experience on an iPad or a similar device. I think what we need is some kind of Sencha Touch core library that allows me to handle gesture events within my desktop application.

I think there already are frameworks that can simply provide a good architecture to handle touch events... I will evaluate this soon. But I think a small Sencha Touch core library could fill this gap even better.

I completely agree with dloew. While Sencha Touch does offer a fair alternative to develop for touch-enabled devices, some of us are not interested in making a "native" looking copy of our desktop applications. While I plan to use Sencha Touch for a mobile version of my application - and by mobile I mean smartphone - my desktop application is perfectly suited for tablet use and should be capable of using the same Ext4 code with minimal exception coding for CSS3/HTML5 availability (media queries, orientation events, etc). What I'm really interested in is whether Ext4 can make some attempt at propagating touch events to the equivalent mouse events. Yes, I'm well aware that touch events don't necessarily map to mouse events, but the two are close enough to provide a suitable emulation in many Ext objects.

Pity... I agree with dloew. It's a gap (imo) that we have Ext JS AND Touch; they should be integrated in one product/solution...

There are a lot of similarities between ExtJS 3 and Sencha Touch. Sencha Touch 2 will get synced with ExtJS 4. They now have what's called Sencha Platform. This is a mini framework that has the similar classes between ST and ExtJS. They want more class sharing, but each framework will have its own unique classes.

Definitely would love an Ext 4 version with Sencha Touch combined for tablets. But if you take a look at the Sencha roadmap (getting Ext 4 stable + Sencha Touch 2 afterwards), I see no chance to get this ready within the next 4-6 months. Best regards, tobiu

Any new development on that? ExtJS apps are unusable on the iPad because there are no scroll bars. However, the size of the screen is big enough to use the full desktop app.
http://www.sencha.com/forum/showthread.php?126780-Ext-JS-4-amp-Sencha-Touch&p=592632&viewfull=1
On Sun, Feb 02, 2014 at 11:37:39AM +0100, David Kastrup wrote:

> So I mused: refs/heads contains branches, refs/tags contains tags. The
> respective information would likely easily enough be stored in refs/bzr
> and refs/bugs and in that manner would not pollute the "ordinary" tag
> and branch spaces, rendering "git tag" and/or "git branch" output mostly
> unusable. I tested creating such a directory and entries and indeed
> references like bzr/39005 then worked.

Yes. The names "refs/tags" and "refs/heads" are special by convention, and there is no reason you cannot have other hierarchies (and indeed, we already have "refs/notes" and "refs/remotes" as common hierarchies).

> However, cloning from the repository did not copy those directories and
> references, so without modification, this scheme would not work for
> cloned repositories.

Correct. Anyone who wants them will have to ask for them manually, like:

  git config --add remote.origin.fetch '+refs/bzr/*:refs/bzr/*'

after which any "git fetch" will retrieve them.

> Are there some measures one can take/configure in the parent repository
> such that (named or all) additional directories inside of $GITDIR/refs
> would get cloned along with the rest?

No. It is up to the client to decide which parts of the ref namespace they want to fetch. The server only advertises what it has, and the client selects from that.

Others mentioned that refs were never really intended to scale to one-per-commit. We serve some repositories with tens of thousands of refs from GitHub, and it does work. On the backend, we even have some repos in the hundreds of thousands (but these are not client facing). Most of the pain points (like O(n^2) loops) have been ironed out, but the two big ones are still:

- server ref advertisement lists _all_ refs at the start of the conversation. So, e.g., git fetch git://github.com/Homebrew/homebrew.git sends 2MB of advertisement just so a client can find out "nope, nothing to fetch".
- the packed-refs storage is rather monolithic. Reading a value from it currently requires parsing the whole file. Likewise, deleting a ref requires rewriting the whole file.

So what you are proposing will work, but do note that there is a cost.

-Peff
https://www.mail-archive.com/[email protected]/msg43012.html
Contents - Introduction - A very simple C program - The program in PowerPC assembly language - The relocatable object file - Disassembly and machine code Introduction. - gcc translates our C code to assembly code. - gcc calls GNU as to translate the assembly code to machine code in an ELF relocatable object. - gcc calls GNU ld to link our relocatable object with the C runtime and the C library to form an ELF executable object. - NetBSD kernel loads ld.elf_so, which loads our ELF executable and the C library (an ELF shared object) to run our program. So far, this wiki page examines only the first two steps. A very simple C program This program is only one C file, which contains only one main function, which calls printf(3) to print a single message, then returns 0 as the exit status. #include <stdio.h> int main(int argc, char *argv[]) { printf("%s", "Greetings, Earth!\n"); return 0; }. We can apply gcc(1) in the usual way to compile this program. (With NetBSD, cc or gcc invokes the same command, so we use either name.) Then we can run our program: $ cc -o greetings greetings.c $ ./greetings Greetings, Earth! $. $ cc -v -o greetings greetings.c Using built-in specs. Target: powerpc--netbsd Configured with: /usr/src/tools/gcc/../../gnu/dist/gcc4/configure --enable-long- long --disable-multilib --enable-threads --disable-symvers --build=i386-unknown- netbsdelf4.99.3 --host=powerpc--netbsd --target=powerpc--netbsd Thread model: posix gcc version 4.1.2 20061021 prerelease (NetBSD nb3 20061125) **/usr/libexec/cc1 -quiet -v greetings.c -quiet -dumpbase greetings.c -auxbase gr** **eetings -version -o /var/tmp//ccVB1DcZ.s** #include "..." search starts here: #include <...> search starts here: /usr/include End of search list. GNU C version 4.1.2 20061021 prerelease (NetBSD nb3 20061125) (powerpc--netbsd) compiled by GNU C version 4.1.2 20061021 (prerelease) (NetBSD nb3 200611 25). 
GGC heuristics: --param ggc-min-expand=38 --param ggc-min-heapsize=77491 Compiler executable checksum: 325f59dbd937debe20281bd6a60a4aef **as -mppc -many -V -Qy -o /var/tmp//ccMiXutV.o /var/tmp//ccVB1DcZ.s** GNU assembler version 2.16.1 (powerpc--netbsd) using BFD version 2.16.1 **ld --eh-frame-hdr -dc -dp -e _start -dynamic-linker /usr/libexec/ld.elf_so -o g** **reetings /usr/lib/crt0.o /usr/lib/crti.o /usr/lib/crtbegin.o /var/tmp//ccMiXutV.** **o -lgcc -lgcc_eh -lc -lgcc -lgcc_eh /usr/lib/crtend.o /usr/lib/crtn.o** The first command, /usr/libexec/cc1, is internal to gcc and is not for our direct use. The other two commands, as and ld, are external to gcc. We would use as and ld without gcc, if we would want so.. The .s assembly file and the .o object file were temporary files, so the gcc driver program deleted them. We only keep the final executable of greetings. The program in PowerPC assembly language. - Comments begin with a '#' sign, though gcc never puts any comments in its generated code. PowerPC uses '#', unlike many other architectures that use ';' instead. - Assembler directives have names that begin with a dot (like .section or .string) and may take arguments. - Instructions have mnemonics without a dot (like li or stw) and may take operands. - Labels end with a colon (like .LC0: or main:) and save the current address into a symbol.. Commented copy of greetings.s. # This is a commented version of greeting.s, the 32-bit PowerPC # assembly code output from cc -mregnames -S greetings.c # .file takes the name of the original source file, # because this was a generated file. I guess that this # allows error messages or debuggers to blame the # original source file. .file "greetings.c" # Enter the .rodata section for read-only data. String constants # belong in this section. .section .rodata # For PowerPC, .align takes an exponent of 2. # So .align 2 gives an alignment of 4 bytes, so that # the current address is a multiple of 4. 
.align 2 # .string inserts a C string, and the assembler provides # the terminating \0 byte. The label sets the symbol # .LC0 to the address of the string. .LC0: .string "Greetings, Earth!" # Enter the .text section for program text, which is the # executable part. .section ".text" # We need an alignment of 4 bytes for the following # PowerPC processor instructions. .align 2 # We need to export main as a global symbol so that the # linker will see it. ELF wants to know that main is a # @function symbol, not an @object symbol. .globl main .type main, @function main: # The code for the main function begins here. # Passed in general purpose registers: # r1 = stack pointer, r3 = argc, r4 = argv # Passed in link register: # lr = return address # The int return value goes in r3. # Allocate 32 bytes for our the stack frame. Use the # atomic instruction "store word with update" (stwu) so # that r1[0] always points to the previous stack frame. stwu %r1,-32(%r1) # r1[-32] = r1; r1 -= 32 # Save registers r31 and lr to the stack. We need to # save r31 because it is a nonvolatile register, and to # save lr before any function calls. Now r31 belongs in # the register save area at the top of our stack frame, # but lr belongs in the previous stack frame, in the # lr save word at (r1[0])[0] == r1[36]. mflr %r0 # r0 = lr stw %r31,28(%r1) # r1[28] = r31 stw %r0,36(%r1) # r1[36] = r0 # Save argc, argv to the stack. mr %r31,%r1 # r31 = r1 stw %r3,8(%r31) # r31[8] = r3 /* argc */ stw %r4,12(%r31) # r31[12] = r4 /* argv */ # Call puts(.LC0). First we need to load r3 = .LC0, but # each instruction can load only 16 bits. # .LC0@ha = (.LC0 >> 16) & 0xff # .LC0@l = .LC0 & 0xff # This method uses "load immediate shifted" (lis) to # load r9 = (.LC0@ha << 16), then "load address" (la) to # load r3 = &(r9[.LC0@l]), same as r3 = (r9 + .LC0@l). 
lis %r9,.LC0@ha la %r3,.LC0@l(%r9) # r3 = .LC0 # The "bl" instruction calls a function; it also sets # the link register (lr) to the address of the next # instruction after "bl" so that puts can return here. bl puts # puts(r3) # Load r3 = 0 so that main returns 0. li %r0,0 # r0 = 0 mr %r3,%r0 # r3 = r0 # Point r11 to the previous stack frame. lwz %r11,0(%r1) # r11 = r1[0] # Restore lr from r11[4]. Restore r31 from r11[-4], # same as r1[28]. lwz %r0,4(%r11) # r0 = r11[4] mtlr %r0 # lr = r0 lwz %r31,-4(%r11) # r31 = r11[-4] # Free the stack frame, then return. mr %r1,%r11 # r1 = r11 blr # return r3 # End of main function. # ELF wants to know the size of the function. The dot # symbol is the current address, now the end of the # function, and the "main" symbol is the start, so we # set the size to dot minus main. .size main, .-main # This is the tag of the gcc from NetBSD 4.0.1; the # assembler will put this string in the object file. .ident "GCC: (GNU) 4.1.2 20061021 prerelease (NetBSD nb3 20061125)"! Optimizing the main function Expect a compiler like gcc to write better assembly code than a human programmer who knows assembly language. The best way to optimize the assembly code is to enable some gcc optimization flags. Released software often uses the -O2 flag, so here is a commented copy of greetings.s (from the gcc of NetBSD 4.0.1/macppc) with -O2 in use. # This is a commented version of the optimized assembly output # from cc -O2 -mregnames -S greetings.c .file "greetings.c" # Our string constant is now in a section that would allow an # ELF linker to remove duplicate strings. See the "info as" # documentation for the .section directive. .section .rodata.str1.4,"aMS",@progbits,1 .align 2 .LC0: .string "Greetings, Earth!" # Enter the .text section and declare main, as before. 
.section ".text" .align 2 .globl main .type main, @function main: # We use registers as before: # r1 = stack pointer, r3 = argc, r4 = argv, # lr = return address, r3 = int return value # Set r0 = lr so that we can save lr later. mflr %r0 # r0 = lr # Allocate only 16 bytes for our stack frame, and # point r1[0] to the previous stack frame. stwu %r1,-16(%r1) # r1[-16] = r1; r1 -= 16 # Save lr in the lr save word at (r1[0])[0] == r1[20], # before calling puts(.LC0). lis %r3,.LC0@ha la %r3,.LC0@l(%r3) # r3 = .LC0 stw %r0,20(%r1) # r1[20] = r0 bl puts # puts(r3) # Restore lr, free stack frame, and return 0. lwz %r0,20(%r1) # r0 = r1[20] li %r3,0 # r3 = 0 addi %r1,%r1,16 # r1 = r1 + 16 mtlr %r0 # lr = r0 blr # return r3 # This main function is smaller than before but ELF # wants to know the size. .size main, .-main .ident "GCC: (GNU) 4.1.2 20061021 prerelease (NetBSD nb3 20061125)" The optimized version of the main function does not use the r9, r11 or r31 registers; and it does not save r31, argc or argv to the stack. The stack frame occupies only 16 bytes, not 32 bytes.. The relocatable object file Now that we have the assembly code, there are two more steps before we have the final executable. - The first step is to run the assembler (as), which translates the assembly code to machine code, and stores the machine code in an ELF relocatable object. - The second step is to run the linker (ld), which combines some ELF relocatables into one ELF executable.. $ as -o greetings.o greetings.s The output greetings.o is a relocatable object file, and file(1) confirms this. $ file greetings.o greetings.o: ELF 32-bit MSB relocatable, PowerPC or cisco 4500, version 1 (SYSV) , not stripped List of sections The source greetings.s had assembler directives for two sections (.rodata.str1.4 and .text), so the ELF relocatable greetings.o should contain those two sections. The command objdump can list the sections. 
$ objdump
Usage: objdump <option(s)> <file(s)>
 Display information from object <file(s)>.
 At least one of the following switches must be given:
...
  -h, --[section-]headers  Display the contents of the section headers
...
$

That leaves the mystery of the .comment section. The objdump command accepts -j to select a section and -s to show the contents, so objdump -j .comment -s greetings.o dumps the 0x3c bytes in that section.

$ objdump -j .comment -s greetings.o

greetings.o:     file format elf32-powerpc

Contents of section .comment:
 0000 00474343 3a202847 4e552920 342e312e  .GCC: (GNU) 4.1.
 0010 32203230 30363130 32312070 72657265  2 20061021 prere
 0020 6c656173 6520284e 65744253 44206e62  lease (NetBSD nb
 0030 33203230 30363131 32352900           3 20061125).

Of symbols and addresses

Our assembly code in greetings.s had three symbols. The first symbol had the name .LC0 and pointed to our string.

.LC0: .string "Greetings, Earth!"

The second symbol had the name main. It was a global symbol that pointed to a function.

.globl main
.type main, @function
main: mflr %r0 ...

The third symbol had the name puts. Our code used puts in a function call, though it never defined the symbol: bl puts.

The nm command shows the names of symbols in an object file. The output of nm shows that greetings.o contains only two symbols. The .LC0 symbol is missing.

$ nm greetings.o
00000000 T main
         U puts

The nm tool claims that symbol main has address 0x00000000, which seems to be a useless value. The actual meaning is that main points to offset 0x0 within section .text. A more detailed view of the symbol table would provide evidence of this.

Fate of symbols

(Because this part of the wiki page now comes before the part about machine code, this disassembly should probably not be here.)

ELF, like any object format, allows for a symbol table. The list of symbols from nm greetings.o is only an incomplete view of this table.
$ nm greetings.o
00000000 T main
         U puts

The command objdump -t shows the symbol table in more detail.

$ objdump -t greetings.o

greetings.o:     file format elf32-powerpc

SYMBOL TABLE:
00000000 l    df *ABS*          00000000 greetings.c
00000000 l    d  .text          00000000 .text
00000000 l    d  .data          00000000 .data
00000000 l    d  .bss           00000000 .bss
00000000 l    d  .rodata.str1.4 00000000 .rodata.str1.4
00000000 l    d  .comment       00000000 .comment
00000000 g    F  .text          0000002c main
00000000         *UND*          00000000 puts

The filename symbol greetings.c exists because the assembly code greetings.s had a directive .file greetings.c. The symbol main has a nonzero size because of the .size directive.

TODO: explain "relocation records"

$ objdump -r greetings.o

greetings.o:     file format elf32-powerpc

RELOCATION RECORDS FOR [.text]:
OFFSET   TYPE              VALUE
0000000a R_PPC_ADDR16_HA   .rodata.str1.4
0000000e R_PPC_ADDR16_LO   .rodata.str1.4
00000014 R_PPC_REL24       puts

Disassembly and machine code

Disassembly

GNU binutils provide both assembly and the reverse process, disassembly. While as does assembly, objdump -d does disassembly. Both programs use the same library of opcodes.

$ objdump -d greetings.o

greetings.o:     file format elf32-powerpc

Disassembly of section .text:

00000000 <main>:
   0:	7c 08 02 a6 	mflr    r0
   4:	94 21 ff f0 	stwu    r1,-16(r1)
   8:	3c 60 00 00 	lis     r3,0
   c:	38 63 00 00 	addi    r3,r3,0
  10:	90 01 00 14 	stw     r0,20(r1)
  14:	48 00 00 01 	bl      14 <main+0x14>
  18:	80 01 00 14 	lwz     r0,20(r1)
  1c:	38 60 00 00 	li      r3,0
  20:	38 21 00 10 	addi    r1,r1,16
  24:	7c 08 03 a6 	mtlr    r0
  28:	4e 80 00 20 	blr

The disassembled code must resemble the assembly code in greetings.s. A comparison shows that every instruction is the same, except for three instructions.

- Address 0x8 has lis r3,0 instead of lis %r3,.LC0@ha.
- Address 0xc has addi r3,r3,0 instead of la %r3,.LC0@l(%r3).
- Address 0x14 has bl 14 <main+0x14> instead of bl puts.
If the reader of objdump -d greetings.o did not know about these symbols, then the three instructions at 0x8, 0xc and 0x14 would seem strange, useless and wrong.

- The "load immediate shifted" (lis) instruction loads a 16-bit value into the upper half of a register, so lis r3,0 just sets r3 to zero, which seems pointless.
- The "add immediate" (addi) instruction does addition, so addi r3,r3,0 increments r3 by zero, which effectively does nothing! The instruction seems unnecessary and useless.
- The instruction at address 0x14 is bl 14 <main+0x14>, which branches to label 14, effectively forming an infinite loop because it branches to itself! Something is wrong.

A better understanding of how symbols fit into machine code would help.

Machine code in parts

The output of objdump -d has the machine code in hexadecimal. This allows the reader to identify individual bytes. This is good with architectures that organize opcodes and operands into bytes. PowerPC instructions, however, are 32-bit words whose fields do not fall on byte boundaries, so a filter that regroups the bits is more revealing.

One can write the filter program using a scripting language that provides both regular expressions and bit-shifting operations. Perl (available in lang/perl5) is such a language. Here follows machine.pl, such a script.

#!/usr/bin/env perl
# usage: objdump -d ... | perl machine.pl
#
# The output of objdump -d shows the machine code in hexadecimal. This
# script converts the machine code to a format that shows the parts of a
# typical PowerPC instruction such as "addi".
#
# The format is (opcode|register-1|register-2|immediate-value),
# with digits in (decimal|binary|binary|hexadecimal).
use strict;
use warnings;

my $byte = "[0-9a-f][0-9a-f]";
my $word = "$byte $byte $byte $byte";

while (defined(my $line = <ARGV>)) {
	chomp $line;
	if ($line =~ m/^([^:]*:\s*)($word)(.*)$/) {
		my ($before, $code, $after) = ($1, $2, $3);
		$code =~ s/ //g;
		$code = hex($code);
		my $opcode = $code >> (32-6);          # first 6 bits
		my $reg1 = ($code >> (32-11)) & 0x1f;  # next 5 bits
		my $reg2 = ($code >> (32-16)) & 0x1f;  # next 5 bits
		my $imm = $code & 0xffff;              # last 16 bits
		$line = sprintf("%s(%2d|%05b|%05b|%04x)%s",
		    $before, $opcode, $reg1, $reg2, $imm, $after);
	}
	print "$line\n";
}

Here follows the disassembly of greetings.o, with the machine code in parts.

$ objdump -d greetings.o | perl machine.pl

greetings.o:     file format elf32-powerpc

Disassembly of section .text:

00000000 <main>:
   0:	(31|00000|01000|02a6)	mflr    r0
   4:	(37|00001|00001|fff0)	stwu    r1,-16(r1)
   8:	(15|00011|00000|0000)	lis     r3,0
   c:	(14|00011|00011|0000)	addi    r3,r3,0
  10:	(36|00000|00001|0014)	stw     r0,20(r1)
  14:	(18|00000|00000|0001)	bl      14 <main+0x14>
  18:	(32|00000|00001|0014)	lwz     r0,20(r1)
  1c:	(14|00011|00000|0000)	li      r3,0
  20:	(14|00001|00001|0010)	addi    r1,r1,16
  24:	(31|00000|01000|03a6)	mtlr    r0
  28:	(19|10100|00000|0020)	blr

The disassembly now shows the machine code with the opcode in decimal, then the next 5 bits in binary, then another 5 bits in binary, then the remaining 16 bits in hexadecimal.

When the machine code contains opcode 14, the disassembler tries to be smart about choosing an instruction mnemonic. Here follows a quick example.

$ cat quick-example.s
.section .text
	addi 4,0,5	# bad
	la 3,3(0)	# very bad
	la 3,0(3)
	la 5,2500(3)
$ as -o quick-example.o quick-example.s
$ objdump -d quick-example.o | perl machine.pl

quick-example.o:     file format elf32-powerpc

Disassembly of section .text:

00000000 <.text>:
   0:	(14|00100|00000|0005)	li      r4,5
   4:	(14|00011|00000|0003)	li      r3,3
   8:	(14|00011|00011|0000)	addi    r3,r3,0
   c:	(14|00101|00011|09c4)	addi    r5,r3,2500

If the second register operand to opcode 14 is 00000, then the machine code looks like an instruction "li", so the disassembler uses the mnemonic "li". Otherwise the disassembler prefers mnemonic "addi" to "la".

Opcodes more strange

The filter script shows the four parts of a typical instruction, but not all instructions have those four parts. The instructions that do branching or access special registers are not typical instructions.
Here again is the disassembly of the main function in greetings.o:

00000000 <main>:
   0:	(31|00000|01000|02a6)	mflr    r0
   4:	(37|00001|00001|fff0)	stwu    r1,-16(r1)
   8:	(15|00011|00000|0000)	lis     r3,0
   c:	(14|00011|00011|0000)	addi    r3,r3,0
  10:	(36|00000|00001|0014)	stw     r0,20(r1)
  14:	(18|00000|00000|0001)	bl      14 <main+0x14>
  18:	(32|00000|00001|0014)	lwz     r0,20(r1)
  1c:	(14|00011|00000|0000)	li      r3,0
  20:	(14|00001|00001|0010)	addi    r1,r1,16
  24:	(31|00000|01000|03a6)	mtlr    r0
  28:	(19|10100|00000|0020)	blr

Assembly code uses "branch and link" (bl) to call functions and "branch to link register" (blr) to return from functions.

- The instruction bl branches to the address of a function, and stores the return address in the link register.
- The instruction blr branches to the address in the link register.
- The instructions "move from link register" (mflr) and "move to link register" (mtlr) access the link register, so that a function may save its return address while it uses bl to call other functions.

The source file /usr/src/gnu/dist/binutils/opcodes/ppc-opc.c contains a table of powerpc_opcodes that lists the various mnemonics that use opcodes 18, 19 and 31.
https://wiki.netbsd.org/examples/elf_executables_for_powerpc/
ClojureQL is now coming dangerously close to version 1.0. Despite its young age, it's already been adopted in several interesting places, among others in the Danish Health Care industry. Before we ship version 1.0, I want to walk through some of the features and design decisions as well as encourage comments, criticisms and patches. (logo: SQL in parenthesis)

Why do we need another DSL is a good question, to which there is a good answer. If you look through a PHP project for instance, how much of the code is actually PHP? 85%? 65% perhaps? It varies from project to project of course, but it's not a big number if 100% is to be our goal, and the reason is, PHP has no one-size-fits-all abstraction over the common SQL commands. That's a big deal, because at a bare minimum the quality assurance team backing a PHP project has to support 2 languages in detail, and although SQL can look simple on the cover don't let that fool you, there are many pitfalls. Then add shell-script for the OS integration and Ruby perhaps for some multithreading tasks and suddenly PHP becomes no more than glue.

ClojureQL enables the Clojurian to stay within his Clojure-domain even when interfacing with his SQL database, greatly simplifying the code-base. It's simpler partly because now you only have Clojure code and not Clojure + SQL, but also because SQL doesn't really exist. If I ask you to write out an SQL statement which creates a table with a single column containing an auto incrementing int, what would you do? If you were targeting a MySQL database, you could write

CREATE TABLE table1 (id int NOT NULL AUTO_INCREMENT, PRIMARY KEY ('col'));

But then half-way through the project, the Customer wants to target PostgreSQL instead, so you have to write

CREATE TABLE table1 (id int PRIMARY KEY DEFAULT nextval('serial'));

Or what if one service has to run against Oracle instead, or Derby, or Sqlite? For all of the servers there are various smaller or larger differences.
Some use double quotes ", others single. Sometimes it's the entire construction of the statement which needs to be altered. Whatever the issue, great or small, it means that you have to review your code and modify it accordingly.

(create-table table1
  [id int]
  :non-nulls id, :primary-key id, :auto-inc id)

The code above is simple, flexible and will run on all supported backends. For version 1.0 we want to support 3 backends: MySql, Postgresql and Derby, but it's not a complex matter to extend CQL to more than these. By having a modular backend, we've pretty much guaranteed that you can keep your team working in the same domain throughout your databasing needs, which is a big win. If the customer for whatever reason needs to change target or incorporate an entirely new target, you don't need to rewrite your code, ClojureQL will handle that: Code stays in the same domain, and the liquid concept of 'SQL code' becomes concrete: Lisp. Any change in the target destination rarely warrants change in the user-code!

Currently we have a nice and uniform frontend, which is what the developers will be working with on a daily basis. It's here you'll find all your SQL-like functions: query (SELECT), insert-into, update, create-table, group-by, etc. The front-end only has 1 job and that's to categorize your input in a format which the backend understands (call it AST). Let's say I want to pick out the user Frank from my accounts table:

>> (query accounts [username passwords] (= username "frank"))
{:columns [username passwords],
 :tables [accounts],
 :predicates [= username "?"],
 :column-aliases {},
 :table-aliases {},
 :env ["frank"]}

As you can see, this doesn't actually do any work. All the information is separated into keys in a hash-map for later compilation by the backend. In order to compile the statement you need an open connection to a database, because only through that can we determine exactly what the final SQL Statement should look like.
You can test it like so:

>> (with-connection [c (make-connection-info "mysql" "//localhost/cql" "cql" "cql")]
     (compile-sql (query accounts [username passwords] (= username "frank")) c))
"SELECT username,passwords FROM accounts WHERE (username = ?)"

That opens and closes a connection in order to compile the statement. ClojureQL always uses parameterized SQL Statements, meaning that variables appear as question marks in the text and are passed as parameters to the actual execution call. If you examine the AST output from Query, you'll see the parameters lined up sequentially in the :env field, in this case just "frank". Query is pretty flexible, accepting symbols like *, custom predicates, column aliases etc. Since everything is rolled in macros you have to explicitly tell CQL what you want evaluated:

(let [frank "john"]
  (with-connection [c (make-connection-info "mysql" "//localhost/cql" "cql" "cql")]
    (prn (compile-sql (query accounts [username passwords] (= username frank)) c))
    (prn (compile-sql (query accounts [username passwords] (= username ~frank)) c))))

"SELECT username,passwords FROM accounts WHERE (username = frank)" ; finds frank
"SELECT username,passwords FROM accounts WHERE (username = ?)"     ; finds john

All the macros call drivers, so everything is evaluated at run-time instead of compile-time. Oh and by the way: pulling out and watching a compiled statement is more verbose than just running the code:

>> (run :mysql (query accounts * (= status "unpaid")))
{:id 22 :name "John the Debtor" :balance -225.25}
...
Since all of our statements initially are just AST representations, the way to modify their core behavior with functions such as Group-By or Order-By is to pass the AST around to these functions:

(let [select-statement (query accounts *)]
  (with-connection [c (make-connection-info "mysql" "//localhost/cql" "cql" "cql")]
    (compile-sql (group-by select-statement username) c)))
"SELECT * FROM accounts GROUP BY username"

That hopefully quickly becomes second nature to you. I've considered rearranging the argument order, to make it easier to read, but nothing is decided and I'm open to suggestions. In broad-strokes we have implemented the following sql-statement-types in the frontend:

query
join
ordered-query
grouped-query
having-query
distinct-query
union
intersect
difference
let-query
update
delete
create-table
create-view
drop-table
drop-view
alter-table
batch-statement
raw-statement

If you're already comfortable with SQL you should recognize most of those. I'll explain the exceptions:

Let Query: LetQuery is a way of binding a local to the result of a query, very similar to Clojure's 'let'.

(let-query [password (query accounts [username password] (= username "frank"))]
  (println "Your password is: " password))

Batch Statement: Batch statements are (as you may have guessed) a cluster of statements which will execute sequentially with a single call to run. Initially we used a batch of create-table + alter-table to be db-agnostic in regards to table-creation, but we later sacrificed this approach by implementing backend modules.

Raw Statement: A Raw Statement is what you use when you've read all the documentation and frontend.clj without finding a function that does exactly what you need. It's a brute and I hope you won't use it:

(raw "SELECT * FROM accounts USING SQL_NINJAv9x(0x333);")

If you end up using Raw for something which isn't specific to your setup, please drop an email so I can schedule it for assimilation.
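Because every statement stays an ordinary map until it reaches compile-sql, the modifier functions also compose naturally with Clojure's threading macro. A sketch built only from the functions shown above (the connection setup is the same as in the earlier examples):

```clojure
(with-connection [c (make-connection-info "mysql" "//localhost/cql" "cql" "cql")]
  (compile-sql
    (-> (query accounts *)
        (group-by username))
    c))
;; => "SELECT * FROM accounts GROUP BY username"
```

This works because functions like group-by take the statement AST as their first argument, which is exactly where -> threads it.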
And finally, I put alter-table in italics because it's not done yet, but it will be for 1.0. No two SQL implementations even remotely agree on how to use alter, and yet I want to expose it to you in a uniform way, so that's a challenge - Input is encouraged.

Documentation is...under way, but generally we try to put all significant functions into the demos, so that you can see them in action. To get started with something like Derby takes no effort, so I'll show MySql instead. From your shell:

$ mysql -u root -p
Enter password: *********
Welcome to the MySQL monitor.  Commands end with ; or \g.

mysql> CREATE DATABASE cql;
Query OK, 1 row affected (0,00 sec)

mysql> GRANT ALL PRIVILEGES ON cql.* TO "cql"@"localhost" IDENTIFIED BY 'cql';
Query OK, 0 rows affected (0,29 sec)

Now we have a database called cql, a user called cql and his password is cql. Then fire up your Clojure REPL:

user> (use :reload 'clojureql.demos.mysql)
nil
user> (load-file ".../clojureql/src/clojureql/demos/mysql.clj")
nil
user> (clojureql.demos.mysql/-main)
SELECT StoreName FROM StoreInformation
{:storename "Los Angeles"}
....

The demos all do the same thing, so first you might want to read through demos/mysql.clj to see how we load the driver, initialize the namespace and so on. MySql is special because it shows off a global persistent connection, which means it stays open for as long as your program runs or until you close it. The other demos open/close connections on every call; depending on your project, you will prefer one of the two. Once the driver is loaded and the connection-string defined, nothing separates the demos for Derby, Mysql or Postgres, and so they all load demos/common.clj. Once you've read and understood everything that goes on there, you'll have a good grasp on how to handle most situations with ClojureQL.

For Version 1.0 I want ClojureQL running on a quality build-system and have spent the past couple of days researching what that might be.
Currently we are interfacing directly with Ant & Ivy using some complicated XML configurations. This setup goes against every principle I have because of its inflexibility, lack of elegance and complexity. The 2 systems which I found most interesting were Gradle and Leiningen.

I get the impression that Leiningen is widely adopted in the Clojure Community already, and rightfully so. It's a DSL which lets you configure your build using Clojure code, and it's easy to pick up and build with. On the down-side it's very fresh off the press, so I would have great reservations using it in a complex scenario of several projects, and in case you need some Ruby, Python, whatever code, Leiningen cannot support you. So I opted for Gradle.

Current users: Please notice that during the move we removed 'dk.bestinclass' from the namespace declarations.

Gradle is Groovy because it lets you write build-scripts in Groovy, but supports a multitude of languages. It's been around for a while and is now coming into maturity. It gracefully lets you handle multiple projects, projects written in multiple languages, dependencies, distribution, etc. Best of all, my co-pilot on ClojureQL Mr. Meikel Brandmeyer has written a plugin for Gradle called Clojuresque. Clojuresque enables Gradle to read Clojure-code well enough to identify namespaces etc, letting us AOT compile the project neatly into a Jar, which is what we need. He's also added support for distributing Jars to the newly started Clojars site (Leiningen does this too). I don't have much to say about the Groovy build-scripts except they beat XML, and if you don't know Groovy, it's much like Java without much of the boilerplate.

To avoid any confusion, I'll quickly run you through how to install Gradle + Clojuresque. If you don't need this at the moment, feel free to bookmark the page and skip past it. First pick a directory where you want to setup.
Grab the latest Gradle, which depends on nothing; it comes with a small Groovy installation:

wget
unzip gradle-0.8-all.zip && rm gradle-0.8-all.zip

Gradle really only needs your PATH variable to point to its /bin directory, but build-scripts sometimes ask for GRADLE_HOME, so set them up:

export PATH=$PATH:/PATH/TO/gradle-0.8/bin
export GRADLE_HOME=/PATH/TO/gradle-0.8

Then with Gradle set up, you need to get Clojuresque and compile it:

wget
unzip v1.1.0.zip && rm v1.1.0.zip
cd clojuresque
gradle build
-- Massive output, BUILD SUCCESSFUL, jar in: build/libs/clojuresque-1.1.0.jar

clojuresque-1.1.0.jar is all Gradle needs in order to understand your Clojure projects. If all you need Clojuresque for is building ClojureQL, then don't bother fetching it; Gradle will handle that automatically once you build ClojureQL.

Now you've seen a little bit of ClojureQL and I hope it has caught your interest. We're dedicated to making this as stable, elegant and featureful as possible in order to get you talented Lispniks to stop writing SQL - That said, contributions (even if it's just ideas) are most welcome. These are the facts you should know ClojureQL is....
http://www.bestinclass.dk/index.clj/2009/12/clojureql-where-are-we-going.html
Technical Support On-Line Manuals RL-ARM User's Guide

#include <stdio.h>
FILE* fopen (
  const char* filename,   /* file to open */
  const char* mode);      /* type of access */

The function fopen opens a file for reading or writing. The parameter filename is a pointer defining the file to open. The parameter mode is a pointer defining the access type.

The function is included in the library RL-FlashFS. The prototype is defined in the file stdio.h.

filename can include a path. If the path does not exist, all subfolders are created. filename can include a drive prefix (for example "M:" for the Memory Card drive, as in the example below); if the prefix is omitted, the file is opened on the default drive.

The parameter mode can have the following values:

- "r"  — opens a file for reading; the file must exist
- "w"  — creates a file for writing; an existing file is overwritten
- "a"  — opens a file for appending; the file is created if it does not exist
- "r+", "w+", "a+" — the corresponding update modes, allowing both reading and writing

Note: File update modes are only supported on the FAT file system!

fclose, fflush, fseek, rewind

#include <rtl.h>
#include <stdio.h>

void tst_fopen (void) {
  FILE *f;

  f = fopen ("Test.txt","r");                     /* Open a file from default drive. */
  if (f == NULL) {
    printf ("File not found!\n");
  }
  else {
    // process file content
    fclose (f);
  }

  f = fopen ("M:\\Temp_Files\\Dump_file.log","w");  /* Create a file in subfolder on SD card. */
  if (f == NULL) {
    printf ("Failed to create a file!\n");
  }
  else {
    // write data to file
    fclose (f);
  }
}
http://www.keil.com/support/man/docs/rlarm/rlarm_fopen.htm
Hello and welcome to a tutorial for setting up Flask with Python 3 on a VPS. For the purposes of this tutorial, I will be using Digital Ocean. For $10 in credit to start with them, you can use my referral code, but you can use any VPS provider that you'd like. The point of being able to set up Flask yourself is so that you can run it from any full-service hosting provider that you want to. A special thanks to Daniel Kukiela for helping with this writeup.

To begin, you need your VPS and your sudoer account. If you are using Digital Ocean, create a server (big green "Create" button in the top right), choosing "Droplet." Since 18.04 isn't quite yet out, I am going with Ubuntu 16.04. I will go with the smallest server available, since this is merely a simple setup demonstration. You can resize later quickly and easily, so just go with whatever you think you need right now. Next, just pick a region, change the host name to something more appropriate for your project, and that's all you need. You can set up SSH keys, backups, and other things if you like, but this isn't necessary. Click create when you're ready.

Within minutes, you should be emailed the IP address and root user password to your server. If you are on Mac or Linux, you can interface with your server by opening a terminal and doing something like ssh root@192.168.0.1, where 192.168.0.1 is your server's IP address. From here, you will be asked for a password, which is included in the email you were sent. Once you log in, you will be asked to change your password. Do that. If you are on Windows, you will need an SSH client. I use PuTTy. With PuTTy, you just fill in the Host Name with the IP address, make sure the port is 22, and hit enter to connect.
Once you are in the server, let's start with an update and upgrade:

sudo apt-get update && sudo apt-get upgrade

Since we'll be using mysql in this series:

sudo apt-get install apache2 mysql-client mysql-server

Once you do that, you'll get the start up page for MySQL, where you will need to set your root user for MySQL. This is the specific MySQL root user, not your server root user.

sudo add-apt-repository ppa:deadsnakes/ppa

Hit enter to add this repository.

sudo apt-get update

Get python3.6:

sudo apt-get install python3.6 python3.6-dev

Get pip3.6:

curl | sudo python3.6

Now that we have Python 3.6, let's get our webserver. For this tutorial, we'll be using Apache:

sudo apt-get install apache2 apache2-dev

In order for our apps to talk with Apache, we need an intermediary, a gateway interface... enter WSGI (Web Server Gateway Interface). Let's install WSGI for Python 3.6:

pip3.6 install mod_wsgi

Let's get the shared object file and the home dir for python's wsgi:

mod_wsgi-express module-config

You should see something like:

LoadModule wsgi_module "/usr/local/lib/python3.6/dist-packages/mod_wsgi/server/mod_wsgi-py36.cpython-36m-x86_64-linux-gnu.so"
WSGIPythonHome "/usr"

...but yours might be different. Copy these lines into a notepad or something. Now:

nano /etc/apache2/mods-available/wsgi.load

Paste those two lines in here. Save and exit (ctrl+x, y, enter). Now let's enable wsgi:

a2enmod wsgi

Restart Apache with:

service apache2 restart

Great, now that we have our web server and the interface, we just need our web app. Since this is for Flask, let's get that!

pip3.6 install Flask

We need to have an Apache configuration file for this application.
To do this:

nano /etc/apache2/sites-available/FlaskApp.conf

Inputting:

<VirtualHost *:80>
    ServerName 192.168.0.1
    ServerAdmin [email protected]
    WSGIScriptAlias / /var/www/FlaskApp/FlaskApp.wsgi
    <Directory /var/www/FlaskApp/FlaskApp/>
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/FlaskApp-error.log
    LogLevel warn
    CustomLog ${APACHE_LOG_DIR}/FlaskApp-access.log combined
</VirtualHost>

Replace the 192.168.0.1 with your server's IP address or your domain name (only if you've set this up to work), save and exit (ctrl+x, y, enter). Now let's enable the site:

sudo a2ensite FlaskApp

Reload Apache:

service apache2 reload

Now we need to start preparing our Flask application. Let's set up some directories:

mkdir /var/www/FlaskApp
cd /var/www/FlaskApp
nano FlaskApp.wsgi
https://pythonprogramming.net/basic-flask-website-tutorial/?completed=/practical-flask-introduction/
CC-MAIN-2021-39
refinedweb
939
68.26
View Complete Post Hello, I was wondering if it's possible to write some code into the Header of my Master Page from within a Class? Any help is appreciated. Thanks'm just wondering if there is an exception class in the .NET Framework intended to signify that an internal error has occured. If there isn't, I'll simply define one, but if there is, I would like to use it. There are some situations where code is reachable, at least as judged by the compiler, but in fact should never be reached. Consider the following example: static public int GetElementIndex(XmlElement e) { XmlNode parent = e.ParentNode; if (parent is XmlDocument) return 1; int n = parent.ChildNodes.Count; int idx = 0; foreach (XmlNode child in parent.ChildNodes) { if (child is XmlElement && child.Name == e.Name) { ++idx; if (child == e) return idx; } } // If this line is ever reached, there is a bug somewhere! throw? Msg 8624, Level 16, State 1, Procedure pSetPersonExtraAnswer, Line 26 Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services. ALTER TABLE dbo.Persons ADD fullName AS (firstName + ' ' + lastName) PERSISTED GO Here is the very weird thing: If I restore the DB, immediately add the persisted computed column and run the problematic SP call, I get the error in Management Studio. If I restore the DB, run the problematic SP call and then add the computed column, I no longer get an error from SSMS when I rerun the SP call. But, I still get it from my ASP.NET (via enterprise library) code! When the error happens, altering the pro Hi all, i have a class Emp and whether it is possible to save the object of this class in a session? And after creating the session with object how can i store it in state service and how can i use the object from session?? thanks in advance...', declaring variable in constructor and right below the class name. 
Examples: public partial class MyClass1 { MyClass1 { int Numbers; string FullName; List<string> Employees = new List<string>(); } } The above is declaring in constructor. Example2: public partial MyClass1 { int Numbers; string FullName; List<string Employees = new List<string>(); MyClass1 { // empty constructor or some other code here } //...class routines and properties } So, what are the differences declaring int Numbes, string FullName, List<string> Employees in both examples? public class CasAuthentication : SoapHeader { internal int UserID1 = 0; public int UserID2 = 0; } public CasAuthentication m_CasAuthentication; [SoapHeader("m_CasAuthentication", Direction = SoapHeaderDirection.InOut)] [WebMethod] public void SetValues() { m_CasAuthentication.UserID1 = 1; m_CasAuthentication.UserID2 = 1; }[SoapHeader("m_CasAuthentication", Direction = SoapHeaderDirection.InOut)] [WebMethod] public string GetValues() { return m_CasAuthentication.UserID1 + ";" + m_CasAuthentication.UserID2; } hi, as you see i have a soapheader derived class that is named CasAuthentication. it has two variables, one of them is public and other one is internal Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend
http://www.dotnetspark.com/links/40727-designed-not-adding-new-internal-variable.aspx
CC-MAIN-2017-13
refinedweb
485
56.96
Details Description Error msg: In file included from java.c:24:0: /usr/lib/jvm/java-1.8.0-openjdk/include/jni.h:45:20: fatal error: jni_md.h: No such file or directory #include "jni_md.h" Issue Links - Blocked BIGTOP-2611 Adding Fedora-25 with Java 1.8 support - Closed Activity - All - Work Log - History - Activity - Transitions How about telling autoconf to look for an extra dir when it comes to -I ? Roman, that would only work for x86, however autoconf has yet to add Power support . checking build system type... powerpc64le-unknown-linux-gnu checking host system type... powerpc64le-unknown-linux-gnu checking cached host system type... ok C-Language compilation tools ranlib... ranlib checking for strip... strip Host support checking C flags dependant on host system type... failed configure: error: Unsupported CPU architecture "powerpc64le" error: Bad exit status from /var/tmp/rpm-tmp.pIDqNT (%build) I think we have to fix autoconf issue upstream similarly to what we did for ARM. Also, I don't think your patch will work cases where JAVA_HOME is coming from a system-level location. Roman, I agree with your assessment, however the issue we are dealing is with autoconf and common-daemon very slow release cycle. Ok, I'm motivated enough to do that. Can you hook me up with creds for a PPC env where the problem happens? Sorry Amir Sanjar: -1 to this patch. 1) Autoconf has detection code for jni_md.h in various positions. Why does it fail ? 2) Why guard the link with a os dependency ??? 3) Why guard the reconfiguration with a dependency??? I cannot reproduce the problem with debian 8 , There is not fedora25 in dockerhub. I will first generate the image before accepting patches triggered by fedora 25. And btw. There is no fedora2.5 and we do not support x86. (Please check the environment of this issue) Olaf Flebbe & Amir Sanjar please take a look at the attached patch. 
This one actually correctly lifts DAEMON-349 into Bigtop while simultaneously getting rid of the configure patch (we shouldn't really ever patch configure anyway – since it is always autogenerated). hmm, I just noticed this.. I have added Fedora25 support for both x86 and Power few weeks ago, check the github. However there are issues with JAVA 8 support that I am working to resolve. I was waiting to resolve all these issues before uploading the fedora 25 images to dockerhub. But I could do it now if you want. The failure is caused by change to the location of jni_md.h from $JAVA_HOME/include to $JAVA_HOME/include/linux in OpenJDK 1.8. Workaround: create a simple symbolic link as follow: ln -s $JAVA_HOME/include/linux/jni_md.h $JAVA_HOME/include/jni_md.h or adding -I$(JAVA_HOME)/include/linux/ to the makefile compiler options. Any thoughts?
https://issues.apache.org/jira/browse/BIGTOP-2618
CC-MAIN-2017-43
refinedweb
471
60.61
January 2009 Volume 24 Number 01 Extreme ASP.NET - Routing with ASP.NET Web Forms By Scott Allen | January 2009 Service Pack 1 for the Microsoft .NET Framework 3.5 introduced a routing engine to the ASP.NET runtime. The routing engine can decouple the URL in an incoming HTTP request from the physical Web Form that responds to the request, allowing you to build friendly URLs for your Web applications. Although you've been able to use friendly URLs in previous versions of ASP.NET, the routing engine provides an easier, cleaner, and more testable approach. The routing engine began as a part of the ASP.NET Model View Controller (MVC) framework, which is in a preview stage as of this writing. However, Microsoft packaged the routing logic into the System.Web.Routing assembly and released the assembly with SP1. The assembly currently provides routing for Web sites using ASP.NET Dynamic Data features (which were also released with SP1), but in this column I will demonstrate how to use the routing functionality with ASP.NET Web Forms. What Is Routing? Imagine you have an ASP.NET Web Form named RecipeDisplay.aspx, and this form lives inside a folder named Web Forms. The classic approach to viewing a recipe with this Web Form is to build a URL pointing to the physical location of the form and encode some data into the query string to tell the Web Form which recipe to display. The end of such a URL might look like the following: /WebForms/RecipeDisplay.aspx?id=5, where the number 5 represents a primary key value in a database table full of recipes.. This URL not only includes enough parameters to display a specific recipe, but is also human readable, reveals its intent to end users, and includes important keywords for search engines to see. A Brief History of URL Rewriting In ASP.NET, using a URL ending with /recipe/tacos traditionally required one to work with a URL rewriting scheme. 
For detailed information on URL rewriting, see Scott Mitchell's definitive article " URL Rewriting in ASP.NET." The article describes the common implementation of URL rewriting in ASP.NET using an HTTP module and the static RewritePath method of the HttpContext class. Scott's article also details the benefits of friendly, hackable URLs.. As you'll see in the rest of this column, the URL routing engine circumvents these problems. Figure 1 Routes, Route Handlers, and the Routing Module Of Routes and Route Handlers There are three fundamental players in the URL routing engine: routes, route handlers, and the routing module. A route associates a URL with a route handler. An instance of the Route class from the. The three primary types I've mentioned are shown in Figure 1. In the next section, I'll put these three players to work. Configuring ASP.NET for Routing To configure an ASP.NET Web site or Web application for routing, you first need to add a reference to the System.Web.Routing assembly. The SP1 installation for the .NET Framework 3.5 will install this assembly into the global assembly cache, and you can find the assembly inside the standard "Add Reference" dialog box. You'll also need to configure the routing module into the ASP.NET pipeline. The routing module is a standard HTTP module. For IIS 6.0 and earlier and for the Visual Studio Web development server, you install the module using the <httpModules> section of web.config, as you see here: To run a Web site with routing in IIS 7.0, you need two entries in web.config. The first entry is the URL routing module configuration, which is found in the <modules> section of <system.webServer>. You also need an entry to handle requests for UrlRouting.axd in the <handlers> section of <system.webServer>. Both of these entries are shown in Figure 2. Also, see the sidebar "IIS 7.0 Configuration Entries." 
> Once you've configured the URL routing module into the pipeline, it will wire itself to the PostResolveRequestCache and the PostMapRequestHandler events. Figure 3shows a subset of the pipeline events. URL rewriting implementations typically perform their work during the BeginRequest event, which is the earliest event to fire during a request. With URL routing, the route matching and selection of a route handler occurs during the PostResolveRequestCache stage, which is after the authentication, authorization, and cache lookup stages of processing. I will need to revisit the implications of this event timing later in the column. Figure 3 HTTP Request. Figure 4shows the route registration code you need to use for "/recipe/brownies" to reach the RecipeDisplay.aspx Web Form. The parameters for the Add method on the RouteCollection class include a friendly name for the route, followed by the route itself. The first parameter to the Route constructor is a URL pattern. The pattern consists of the URL segments that will appear at the end of a URL pointing to this application (after any segments required to reach the application's root). For an application rooted at localhost/food/ then, the route pattern in Figure 4will match localhost/food/recipe/brownies. IIS 7.0 Configuration Entries The runAllManagedModulesForAllRequests attribute requires a value of true if you want to use the extensionless URLs as I've done in this sample. Also, it might seem strange to configure an HTTP handler for UrlRouting.axd. This is a small workaround that the routing engine requires in order for routing to work under IIS 7.0. The UrlRouting module actually rewrites the incoming URL to ~/UrlRouting.axd, which will rewrite the URL back to the original, incoming URL. It's likely that a future version of IIS will integrate perfectly with the routing engine and not require this workaround. 
Segments enclosed inside curly braces denote parameters, and the routing engine will automatically extract the values there and place them into a name/value dictionary that will exist for the duration of the request. Using the previous example of localhost/food/recipe/brownies, the routing engine will extract the value "brownies" and store the value in the dictionary with a key of "name". You'll see how to use the dictionary when I look at the code for the route handler.

You can add as many routes as you need into the RouteTable, but the ordering of the routes is important. The routing engine will test all incoming URLs against the routes in the collection in the order in which they appear, and the engine will select the first route with a matching pattern. For this reason, you should add the most specific routes first. If you added a generic route with the URL pattern "{category}/{subcategory}" before the recipe route, the routing engine would never find the recipe route. One additional note—the routing engine performs the pattern matching in a case-insensitive manner.

Overloaded versions of the Route constructor allow you to create default parameter values and apply constraints. Defaults allow you to specify default values for the routing engine to place into the name/value parameter dictionary when no value exists for the parameter in an incoming URL. For example, you could make "brownies" the default recipe name when the routing engine sees a recipe URL without a name value (like localhost/food/recipe). Constraints allow you to specify regular expressions to validate parameters and fine-tune the route pattern matching on incoming URLs. If you were using primary key values to identify recipes in a URL (like localhost/food/recipe/5), you could use a regular expression to ensure the primary key value in the URL is an integer. You can also apply constraints using an object that implements the IRouteConstraint interface.
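A sketch of the defaults and constraints just described follows. The Defaults and Constraints properties are the real System.Web.Routing API; the id-based route itself is hypothetical:

```csharp
// Used when an incoming URL like /recipe has no name segment.
var recipeRoute = new Route("recipe/{name}",
    new RecipeRouteHandler("~/WebForms/RecipeDisplay.aspx"))
{
    Defaults = new RouteValueDictionary { { "name", "brownies" } }
};

// A hypothetical primary-key flavored route constrained to integer ids.
var idRoute = new Route("recipe/{id}",
    new RecipeRouteHandler("~/WebForms/RecipeDisplay.aspx"))
{
    Constraints = new RouteValueDictionary { { "id", @"\d+" } }
};
```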
The second parameter to the Route constructor is a new instance of my route handler, which I'll look at in Figure 5.

The Recipe Routing Handler

The following code snippet shows a basic implementation of a route handler for recipe requests. Since the route handler ultimately has to create an instance of an IHttpHandler (in this case, RecipeDisplay.aspx), the constructor requires a virtual path that points to the Web Form the route handler will create. The GetHttpHandler method passes this virtual path to the ASP.NET BuildManager in order to retrieve the instantiated Web Form:

public class RecipeRouteHandler : IRouteHandler
{
    public RecipeRouteHandler(string virtualPath)
    {
        _virtualPath = virtualPath;
    }

    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        var display = BuildManager.CreateInstanceFromVirtualPath(
            _virtualPath, typeof(Page)) as IRecipeDisplay;
        display.RecipeName = requestContext.RouteData.Values["name"] as string;
        return display;
    }

    string _virtualPath;
}

Notice how the route handler can also pull data from the routing engine's parameter dictionary, which is the RouteData property of the RequestContext class. The routing engine sets up the RequestContext and passes an instance when it invokes this method. There are many options available for getting the route data into the Web Form. You could pass the route data along in the HttpContext Items collection, for instance. In this example, you've defined an interface for your Web Form to implement (IRecipeDisplay). The route handler can set strongly typed properties on the Web Form to pass along any information the Web Form requires, and this approach will work with both the ASP.NET Web site and ASP.NET application compilation models.

Routing and Security

When you're using ASP.NET routing, you can still use all the ASP.NET features you've come to love—Master Pages, output caching, themes, user controls, and more. There is one notable exception, however.
The routing module works its magic using events in the pipeline that occur after the authentication and authorization stages of processing, meaning that ASP.NET will be authorizing your users using the public, visible URL and not the virtual path to the ASP.NET Web Form that the route handler selects to process the request. You need to pay careful attention to the authorization strategy for an application using routing.

Let's say you wanted to allow only authenticated users to view recipes. One approach would be to modify the root web.config to use the authorization settings here:

Although this approach will prevent anonymous users from viewing /recipe/tacos, it does have two fundamental weaknesses. First, the setting doesn't prevent a user from directly requesting /WebForms/RecipeDisplay.aspx (although you could add another authorization rule that prevents all users from directly requesting resources from the Web Forms folder). Second, it is easy to change the route configuration in global.asax.cs without changing the authorization rules and leave your secret recipes open to anonymous users.

An alternate approach to authorization would be to protect the RecipeDisplay.aspx Web Form based on its physical location, which is to place web.config files with <authorization> settings directly into the protected folder. However, since ASP.NET is authorizing users based on the public URL, you'll need to make the authorization checks manually on the virtual path that your route handler uses. You'll need to add the following code to the beginning of your route handler's GetHttpHandler method. This code uses the static CheckUrlAccessForPrincipal method of the UrlAuthorizationModule class (the same module that performs authorization checks in the ASP.NET pipeline):

In order to access the HttpContext members via the RequestContext, you'll need to add a reference to the System.Web.Abstractions assembly.
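The two code fragments this section refers to were not preserved above. A sketch of the web.config authorization rule (the "recipe" location path is an assumption) might look like this:

```xml
<location path="recipe">
  <system.web>
    <authorization>
      <deny users="?" />
    </authorization>
  </system.web>
</location>
```

And a sketch of the manual check at the top of GetHttpHandler, where CheckUrlAccessForPrincipal is the real static method on System.Web.Security.UrlAuthorizationModule and the exception choice is an assumption:

```csharp
public IHttpHandler GetHttpHandler(RequestContext requestContext)
{
    // Authorize against the internal virtual path, not the public URL.
    if (!UrlAuthorizationModule.CheckUrlAccessForPrincipal(
            _virtualPath,
            requestContext.HttpContext.User,
            requestContext.HttpContext.Request.HttpMethod))
    {
        throw new SecurityException("Not authorized for " + _virtualPath);
    }

    // ... create and return the Web Form as before ...
}
```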
With a secure routing handler in place, you can now turn your attention to the page that needs to generate hyperlinks for each recipe in your database. It turns out the routing logic can help you build this page, too.

URL Generation

To generate the hyperlink to any given recipe, I will once again turn to the collection of routes configured during application startup. As shown here, the RouteCollection class has a GetVirtualPath method for this purpose:

You need to pass in the desired route name ("Recipe") along with a dictionary of the required parameters and their associated values. This method will use the URL pattern you created earlier (/recipe/{name}) to construct the proper URL. The following code uses this method to generate a collection of anonymously typed objects. The objects have Name and Url properties that you can use with data binding to generate a list or table of available recipes:

The ability to generate URLs from your routing configuration means you can change the configuration without the fear of creating broken links inside your application. Of course, you still might break your user's favorite links and bookmarks, but having the ability to change is a tremendous advantage when you are still designing the application's URL structure.

Wrapping Up with Routes

The URL routing engine does all of the dirty work of URL pattern matching and URL generation. All you need to do is configure your routes and implement your route handlers. With routing, you are truly isolated from file extensions and the physical layout of your file system, and you don't need to deal with the quirks of using a URL rewriter. Instead, you can concentrate on the optimum URL design for your end users and for search engines. In addition, Microsoft is working on making URL routing with Web Forms even easier and more configurable in the upcoming ASP.NET 4.0.

Send your questions and comments to [email protected].
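A sketch of the URL generation calls described above follows. GetVirtualPath and RouteValueDictionary are the real System.Web.Routing API; the recipes collection and its LINQ projection are illustrative:

```csharp
// Ask the route table to build a URL from the "Recipe" route's pattern.
VirtualPathData pathData = RouteTable.Routes.GetVirtualPath(
    null, "Recipe",
    new RouteValueDictionary { { "name", "brownies" } });
// pathData.VirtualPath now holds a path built from "recipe/{name}".

// Project a hypothetical recipes collection into Name/Url pairs for binding.
var model = recipes.Select(r => new
{
    Name = r.Name,
    Url = RouteTable.Routes.GetVirtualPath(
        null, "Recipe",
        new RouteValueDictionary { { "name", r.Name } }).VirtualPath
});
```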
Scott Allen is a founder of OdeToCode and a member of the Pluralsight technical staff. You can reach Scott at [email protected], or read his blog at OdeToCode.com/blogs/scott.
https://msdn.microsoft.com/en-us/magazine/dd347546.aspx
04 December 2008 12:37 [Source: ICIS news]

PARIS (ICIS news)--The EU was close to reaching an agreement on the climate change package, EU Energy Commissioner Andris Piebalgs said during a debate on the issue at the European Parliament on Thursday.

Piebalgs said there were only a few issues outstanding, most notably disagreement over whether the package should be subject to a later review.

The package aims to cut EU emissions by 20%, increase the share of energy from renewable sources to 20% and boost energy efficiency by 20%, all by 2020.

The parliament felt that such a review could undermine the certainty for investment, said Piebalgs. He confirmed that there would be no changes to the 20/20/20 binding targets, but said there was a need for flexibility.

The question of solidarity and how the burden should be divided between the member states remains an unsolved problem. France's Jean-Louis Borloo, the environment minister representing the French EU presidency, said he hoped progress would be made on the issue at an EU meeting on 6 December.

Progress on the issue of the proposed target that 10% of road-transport fuel should come from renewable sources by 2020 is believed to have been made in closed-door discussions. The majority of the 10% was to have come from biofuels. After much lobbying from environmentalists, it is understood that up to almost a third of the 10% goal could be met through electric cars and trains.

The Commission will also come forward with proposals in 2010 to limit indirect land-use change and promote a "double bonus" scheme for biofuels from non-food sources.

EU Environment Commissioner Stavros Dimas told MEPs that he remained "optimistic of a first-reading agreement" on the package in the plenary session on 15-18 December.
http://www.icis.com/Articles/2008/12/04/9177009/eu-close-to-agreement-on-climate-plan-piebalgs.html
Subject: Re: [boost] TTI library updated in the sandbox
From: Vicente Botet (vicente.botet_at_[hidden])
Date: 2011-02-10 10:37:05

Edward Diener-3 wrote:
> On 2/8/2011 5:47 PM, Vicente Botet wrote:
>>
>> Edward Diener-3 wrote:
>>
>> P.S. I would prefer that the library use the already-standard Boost
>> conventions for filenames, macros and namespaces (boost::tti)
>
> I have held off putting anything into the Boost namespace or prepending
> BOOST_ to macro names until that time when the library would be reviewed
> for inclusion into Boost.

The problem is that if I use your library, which I expect will be accepted
soon, in another library I'm preparing for Boost, I will need to change the
interface, at least for the macros, when the library is accepted. This will
not be much work, but if it can be avoided ...

> As far as filenames are concerned, I am not aware of any Boost
> conventions which apply.

Please take a look at

HTH,
https://lists.boost.org/Archives/boost/2011/02/176999.php
#include <dbz.h>

dbminit(base)
char *base;

datum fetch(key)
datum key;

store(key, value)
datum key;
datum value;

dbmclose()

dbzfresh(base, size, fieldsep, cmap, tagmask)
char *base;
long size;
int fieldsep;
int cmap;
long tagmask;

dbzagain(base, oldbase)
char *base;
char *oldbase;

datum dbzfetch(key)
datum key;

dbzstore(key, value)
datum key;
datum value;

dbzsync()

long dbzsize(nentries)
long nentries;

dbzincore(newvalue)

dbzcancel()

dbzdebug(newvalue)

In principle, dbz stores key-value pairs, where both key and value are arbitrary sequences of bytes, specified to the functions by values of type datum, typedefed in the header file to be a structure with members dptr (a value of type char * pointing to the bytes) and dsize (a value of type int indicating how long the byte sequence is).

In practice, dbz is more restricted than dbm. A dbz database must be an index into a base file, with the database values being fseek(3) offsets into the base file. Each such value must ``point to'' a place in the base file where the corresponding key sequence is found. A key can be no longer than DBZMAXKEY (a constant defined in the header file) bytes. No key can be an initial subsequence of another, which in most applications requires that keys be either bracketed or terminated in some way (see the discussion of the fieldsep parameter of dbzfresh, below, for a fine point on terminators).

Dbminit opens a database, an index into the base file base, consisting of files base.dir and base.pag which must already exist. (If the database is new, they should be zero-length files.) Subsequent accesses go to that database until dbmclose is called to close the database. The base file need not exist at the time of the dbminit, but it must exist before accesses are attempted.

Fetch searches the database for the specified key, returning the corresponding value if any. Store stores the key-value pair in the database. Store will fail unless the database files are writeable.
See below for a complication arising from case mapping.

Dbzfresh is a variant of dbminit for creating a new database with more control over details. Unlike for dbminit, the database files need not exist: they will be created if necessary, and truncated in any case. Dbzfresh's size parameter specifies the size of the first hash table within the database, in key-value pairs. Performance will be best if size is a prime number. The .pag file will be 4*size bytes (the .dir file is tiny and roughly constant in size) until the number of key-value pairs exceeds about 80% of size. (Nothing awful will happen if the database grows beyond 100% of size, but accesses will slow down somewhat and the .pag file will grow somewhat.)

Dbzfresh's fieldsep parameter specifies the field separator in the base file. If this is not NUL (0), and the last character of a key argument is NUL, that NUL compares equal to either a NUL or a fieldsep in the base file. This permits use of NUL to terminate key strings without requiring that NULs appear in the base file. The fieldsep of a database created with dbminit is the horizontal-tab character.

For use in news systems, various forms of case mapping (e.g. uppercase to lowercase) in keys are available. The cmap parameter to dbzfresh is a single character specifying which of several mapping algorithms to use. Available algorithms are:

Mapping algorithm 0 (no mapping) is faster than the others and is overwhelmingly the correct choice for most applications. Unless compatibility constraints interfere, it is more efficient to pre-map the keys, storing mapped keys in the base file, than to have dbz do the mapping on every search. For historical reasons, fetch and store expect their key arguments to be pre-mapped, but expect unmapped keys in the base file. Dbzfetch and dbzstore do the same jobs but handle all case mapping internally, so the customer need not worry about it.
Dbz stores only the database values in its files, relying on reference to the base file to confirm a hit on a key. References to the base file can be minimized, greatly speeding up searches, if a little bit of information about the keys can be stored in the dbz files. This is ``free'' if there are some unused bits in an fseek offset, so that the offset can be tagged with some information about the key. The tagmask parameter of dbzfresh allows specifying the location of unused bits. Tagmask should be a mask with one group of contiguous 1 bits. The bits in the mask should be unused (0) in most offsets. The bit immediately above the mask (the flag bit) should be unused (0) in all offsets; (dbz)store will reject attempts to store a key-value pair in which the value has the flag bit on. Apart from this restriction, tagging is invisible to the user. As a special case, a tagmask of 1 means ``no tagging'', for use with enormous base files or on systems with unusual offset representations.

A size of 0 given to dbzfresh is synonymous with the local default; the normal default is suitable for tables of 90-100,000 key-value pairs. A cmap of 0 (NUL) is synonymous with the character 0, signifying no case mapping (note that the character ? specifies the local default mapping, normally C). A tagmask of 0 is synonymous with the local default tag mask, normally 0x7f000000 (specifying the top bit in a 32-bit offset as the flag bit, and the next 7 bits as the mask, which is suitable for base files up to circa 24MB). Calling dbminit(name) with the database files empty is equivalent to calling dbzfresh(name, 0, '\t', '?', 0).

Dbzagain is a variant of dbminit for creating a new database as a new generation of an old database. The database files for oldbase must exist.
Dbzagain is equivalent to calling dbzfresh with the same field separator, case mapping, and tag mask as the old database.

Dbz can optionally keep the database table in memory. When an internal flag is 1, an attempt is made to read the table in when the database is opened, and dbmclose writes it out to disk again (if it was read successfully and has been modified). Dbzincore sets the flag to newvalue (which should be 0 or 1) and returns the previous value; this does not affect the status of a database that has already been opened. The default is 0. The attempt to read the table in may fail due to memory shortage; in this case dbz quietly falls back on its default behavior. Stores to an in-memory database are not (in general) written out to the file until dbmclose or dbzsync (which writes out any pending changes without closing the database), so if robustness in the presence of crashes or concurrent accesses is crucial, in-memory databases should probably be avoided.

Dbzcancel cancels any pending writes from buffers. This is typically useful only for in-core databases, since writes are otherwise done immediately. Its main purpose is to let a child process, in the wake of a fork, do a dbmclose without writing its parent's data to disk.

If dbz has been compiled with debugging facilities available (which makes it bigger and a bit slower), dbzdebug alters the value (and returns the previous value) of an internal flag which (when 1; default is 0) causes verbose and cryptic debugging output on standard output.

Concurrent reading of databases is fairly safe, but there is no (inter)locking, so concurrent updating is not. The database files include a record of the byte order of the processor creating the database, and accesses by processors with different byte order will work, although they will be slightly slower. Byte order is preserved by dbzagain. However, agreement on the size and internal structure of an fseek offset is necessary, as is consensus on the character set.
An open database occupies three stdio streams and their corresponding file descriptors; a fourth is needed for an in-memory database. Memory consumption is negligible (except for stdio buffers) except for in-memory databases.

Unlike dbm, dbz will misbehave if an existing key-value pair is `overwritten' by a new (dbz)store with the same key. The user is responsible for avoiding this by using (dbz)fetch first to check for duplicates; an internal optimization remembers the result of the first search so there is minimal overhead in this.

Waiting until after dbminit to bring the base file into existence will fail if chdir(2) has been used meanwhile.

The RFC822 case mapper implements only a first approximation to the hideously-complex RFC822 case rules. The prime finder in dbzsize is not particularly quick.

Should implement the dbm functions delete, firstkey, and nextkey.

On C implementations which trap integer overflow, dbz will refuse to (dbz)store an fseek offset equal to the greatest representable positive number, as this would cause overflow in the biased representation used.

Dbzagain perhaps ought to notice when many offsets in the old database were too big for tagging, and shrink the tag mask to match.

Marking dbz's file descriptors close-on-exec would be a better approach to the problem dbzcancel tries to address, but that's harder to do portably.
http://www.makelinux.net/man/3/D/dbzclose
May 04, 2012 08:09 PM|kunal1982|LINK

I have a strongly typed view, which has some input boxes that map to a collection in an entity. Take, for example, a view for adding an employee's details, which also contains input fields to enter department names (say 2). Both of them are required. Here is the class structure of these two entities:

public class Employee
{
    public int EmployeeID { get; set; }
    public string Name { get; set; }
    public IList<Department> DepartmentList { get; set; }
}

public class Department
{
    public string Name { get; set; }
    public int ID { get; set; }
}

<input type='text' class='input-choice' id='txtChoice0' name='Department[0].Name' />

Now my question is how I should apply validation to this. I have marked the Required attribute on the department collection in the Employee class as well as at the field level in the Department object, but nothing works. Any ideas?

May 04, 2012 09:11 PM|JohnLocke|LINK

Have you tried adding data annotations to the Department class' properties?

[Required]
public string Name { get; set; }

Then you can tuck your code inside an if (ModelState.IsValid) { ... } conditional. Just remember that it validates against the model you pass to your ActionResult.

May 04, 2012 09:20 PM|kunal1982|LINK

Yes, that is already done, but client-side validation fails to run... Also, my markup for the input field and the associated validation is like this:

<input type='text' class='input-choice' id='txtChoice0' name='DepartmentList[0].Name' />
<%= Html.ValidationMessage("DepartmentList[0].Name") %>

Model binding works perfectly fine: the name 'DepartmentList[0].Name' in the input field maps correctly to the collection property in the Employee object in the controller, so I thought my validations would work the same way, but they never get fired.

May 04, 2012 10:21 PM|BrockAllen|LINK

Edit -- I was incorrect in my original statement.
You can still implement IValidatableObject for custom validation in your model.

May 04, 2012 10:26 PM|kunal1982|LINK

BrockAllen: Since validation doesn't recurse on the model, consider having your top model (Employee in this case) implement IValidatableObject. It can then check the validation of its child objects and report it back up to MVC.

Isn't this part of .NET Framework 4.5? I am using .NET version 4.0 and MVC 2.

May 04, 2012 10:29 PM|BrockAllen|LINK

It's in .NET 4. If you're still in MVC2/.NET 3.5, then you can implement IDataErrorInfo.

May 05, 2012 12:07 AM|francesco abbruzzese|LINK

May 05, 2012 12:12 AM|BrockAllen|LINK

francesco abbruzzese: Collection is not the problem!!! Client side validation needs HTML5 attributes that translate the conditions of the validation attributes. Such validation attributes are added by the html helpers like Html.TextBoxFor... if you write simply <input... no attributes are created and client side validation doesn't work.

Regardless of whether you have client-side validation, he will always need server-side validation, and for nested object model binding the validation attributes on nested objects aren't honored.

May 05, 2012 12:25 AM|francesco abbruzzese|LINK

May 05, 2012 12:27 AM|BrockAllen|LINK

I'll double check but I've not found this to be the case in my testing.

May 05, 2012 12:30 AM|BrockAllen|LINK

BrockAllen: I'll double check but I've not found this to be the case in my testing.

Ah yes, my mistake... my test was flawed. So back to the original poster: you can then easily apply the validation attributes to the nested class (as well as implement IDataErrorInfo for further validation as needed).

May 05, 2012 09:54 AM|francesco abbruzzese|LINK

BrockAllen: Ah yes, my mistake... my test was flawed.

It appears that even expert people have doubts about exactly how name conventions are used in model binding and in error handling... probably because there is practically no documentation on the subject...
I will write a blog post about name conventions and prefix handling as soon as possible (there are already 2 posts scheduled, so maybe it will appear in about 1 month).

May 05, 2012 02:42 PM|BrockAllen|LINK

No, nested model binding I am fine with -- I just made a mistake in my test. I had never tried to put validation attributes on the nested classes to see if validation also flowed (or I had forgotten). So I used this test code:

public class Baz
{
    [Required]
    public string Quux { get; set; }
}

public class Foo
{
    [Required]
    public string Bar { get; set; }
    public Baz Baz { get; set; }
}

public class HomeController : Controller
{
    public ActionResult Index(Foo foo)
    {
        var v = ModelState.IsValid;
        return View();
    }
}

And when I tested I used:

/Home/Index?foo.Bar=1&foo.Baz.Quux=2  // IsValid == true
/Home/Index?foo.Bar=1                 // IsValid == true

The 2nd query string is where I was misled -- I should have used:

/Home/Index?foo.Bar=1&foo.Baz.Quux=   // IsValid == false

And then I could have also changed the code:

public class Baz
{
    [Required]
    public string Quux { get; set; }
}

public class Foo
{
    [Required]
    public string Bar { get; set; }
    [Required]
    public Baz Baz { get; set; }
}

public class HomeController : Controller
{
    public ActionResult Index(Foo foo)
    {
        var v = ModelState.IsValid;
        return View();
    }
}

And now with this change:

/Home/Index?foo.Bar=1                 // IsValid == false

So yea, I was hasty in my test :/

May 05, 2012 08:14 PM|francesco abbruzzese|LINK

BrockAllen: No, nested model binding I am fine with -- I just made a mistake in my test.

No doubt you know the name conventions for nested model binding. In general, after one starts playing with more complex models or with lists, one masters these rules. HOWEVER, there are a lot of small things related to them that in some circumstances become very important, such as: ....... And the list continues...
Does everyone have an answer to all the above problems (and to several other name-convention-related problems)? I discovered all these problems while implementing "tools" that forced me to study the sources.... However, I noticed that most people don't know such stuff, probably because most of the above problems are not clearly stated anywhere... (there is something in this forum... and on Stack Overflow...)... however, when the problem is "hit" they go crazy because some of these "stuffs" are not so easy to imagine.

May 05, 2012 10:27 PM|kunal1982|LINK

francesco abbruzzese: Server side validation is applied by the default model binder, and since the default model binder is recursive it works ALSO for nested objects... however all error names are prefixed with the whole path from the root model to the leaf property in error... if errors are not shown it is because the right name convention has not been respected.

Honestly, I am more confused now. So do you mean the problem is with the naming convention of the validation helper that I have used? I am using <%= Html.ValidationMessage("Department[0].Name") %> after the first input. If I also pass the error message along with the model name, the error message appears right away when the view loads.

Model binding, on the other hand, is working perfectly fine; I get both the parent and its child (a collection of children, in fact) in the controller. Also, I thought MVC would have a very straightforward approach for applying validation to controls representing a child collection. My only requirement is that I need to add parent and child together in a single view. I do not have much experience in MVC, so I don't know if I'm following the right approach or not.

May 06, 2012 08:11 AM|francesco abbruzzese|LINK

kunal1982: So do you mean the problem is with the naming convention of the validation helper that I have used?

No, my answer was for Brock, who asserted that the problem was the naming convention. However, my general observation applies also to your problem.
"Most of people have not very clear the role of naming conventions". In your case the name convention is ok. However the name conventione ALONE is not enough for the client side validation to work. You used ValidationMessage to apply error but this is not enough. ValidationMassage is just a container where to write the error. The actual error is handled by the input field. Now you wrote simply <input....that is you have not used the Html helpers for that field. Also if you apply the right name convention this is not enough for client side valdation to work. In fact when you call TextBox or TextBoxFor they creat an <input....with the right name convention AND ALSO add them some Html5 data- .... attributes with informations about the client side validation rule to apply. If you want to write <input....manually you MUST write yourself MANUALLY this information. I hope now everything is clear VALIDATION ERROR INFORMATION ARE NOT STORED IN ValidationMessage BUT IN THE INPUT FIELDS. 15 replies Last post May 06, 2012 08:11 AM by francesco abbruzzese
http://forums.asp.net/t/1800271.aspx?How+to+apply+validation+to+a+collection+item+in+Asp+net+MVC+2
How to add a new status in sale.order in odoo 9?

You have to redefine "state" in sale order as follows:

_columns = {
    'state': fields.selection([
        ('draft', 'Draft Quotation'),
        ('to_check', 'Pending'),
        ('checked', 'Checked'),
        ('sent', 'Quotation sent'),
        ('cancel', 'Cancelled'),
        ('waiting_date', 'Waiting Schedule'),
        ('progress', 'Sales Order'),
        ('manual', 'Sale to Invoice'),
        ('invoice_except', 'Invoice Exception'),
        ('done', 'Done'),
    ], 'Status', readonly=True, track_visibility='onchange', select=True),
}

Then add the button actions in sale order as follows:

def menu_to_check(self, cr, uid, ids, context=None):
    res = self.write(cr, uid, ids, {'state': 'to_check'}, context=context)
    return res

def menu_checked(self, cr, uid, ids, context=None):
    res = self.write(cr, uid, ids, {'state': 'checked'}, context=context)
    return res

Also you have to add the two corresponding "workflow.activity" records in XML: and three "workflow.transition" records in XML: and the two buttons in the inherited sale order view XML:

Hope this may help you. Don't forget to set groups for your "checked" button for sale managers in the above XML.
So you can do: class SaleOrder(models.Model): _inherit = 'sale.order' state = fields.Selection(selection_add([('pending', 'Pending')]) You can check this documentation for more details ... Hope this could hleps ... About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now What status do you want to add? What is the business reason to add this status? Can you clarify with a business case? I want to add status is "pending" . This status is required for the sales manager can validate the sale and before the user can "confirm the order".
https://www.odoo.com/forum/help-1/question/how-to-add-a-new-status-in-sale-order-in-odoo-9-110398
CC-MAIN-2017-13
refinedweb
373
50.02
In-Depth

If Silverlight 2 wins the gold in Beijing, will it be ready to light up your rich Internet apps?

A Web video player based on beta 2 of Redmond's rich Internet application (RIA) runtime will bring interactive features to real-time and on-demand video content. The player was developed by New York-based multimedia production outfit Schematic Inc., in partnership with Microsoft and NBC Universal.

"We've been working on a product built on beta 2 for six months and a lot of their development has been happening over that period of time," says Matthew Rechs, chief technology officer of Schematic. "It's especially challenging to do design and development against a platform as it's being born, but we live for that kind of danger -- it makes things a little bit more interesting."

With the enhanced video player, users can watch a full-screen HD image, six other screen sizes, picture-in-picture and pop-ups. They can also use embedded players. A "Live Video Control Room" offers up to four video streams simultaneously of the same or different events -- one large picture and three smaller pictures -- with the option to go "full screen" or "swap" a smaller view for the larger one at any time. You can also e-mail and share video links.

The enhanced player's "Explore Videos" menu takes advantage of the transparency afforded by Silverlight, offering viewers the option to scroll through a features menu -- Live Video Control Room, Olympic Events, Most Watched, As Seen on TV and Highlights -- while watching an event, similar to a high-end cable TV experience. The Standard Player, also powered by Silverlight, allows users to watch on-demand video, while perusing content such as Inside This Sport, Athletes, Related Video and Trivia.

"It's a complicated piece of software by any measure," says Rechs.
"It's got so many different ways to enjoy the programming, so many different video formats, combinations of video streams, the image size, resolution and bit rate and ways to enjoy those combinations." The player takes advantage of adaptive streaming, a new variable-bit rate feature in Silverlight 2 beta 2, which automatically adjusts the bit rate for an optimal viewing experience based on the user's connection speed and hardware. Silverlight 2 offers a programming model and media capabilities that allow developers to enhance the users' interaction with content. "A lot of our work was about the user experience and talking to sports fans who will be watching the Olympics and understanding what their interests were and what their needs were," says Rechs, "so really the project has been about getting to know the content and the viewer and figuring out what kind of features are going to be interesting and compelling for them." Fundamentally, he points out, NBC is bringing an entirely new viewing experience to its audience via the Web. People can get information about the athletes and their favorite Olympic sports, historical data, broadcast commentary, and watch all of the coverage online if they choose as though it were live, instead of setting their alarms to get up to watch a broadcast of volleyball or parallel bars, for example. "I think it's going to be one of the most important video experiences on the Internet, ever," says Rechs. "And I can't think of another where the content is of interest to so many people and the experience of watching it online is so unique to the viewing experience."

Beijing or Bust

With the world watching, Microsoft's fledgling RIA technology must survive this qualifying round to secure a foothold in the Web technology race alongside Adobe Systems' ubiquitous Flash technology. People who access the NBC site can use the Adobe Flash Player, Windows Media Player or Silverlight 2 beta 2 to view different types of content.
A major stumble could consign Silverlight 2 to the fate of the 2008 Summer Olympics torch relay. That "Journey of Harmony" turned into a public relations disaster for China, as protesters around the world tried to derail torch bearers as a show of support for perceived human rights infractions in Tibet. There is precedent for concern about Silverlight's performance. In 1996, IBM widely promoted its Web site promising real-time content and updates from the Atlanta Olympic Games. That effort failed badly, leaving Big Blue with egg on its face. This year's high profile venue is also fitting for the Silverlight development team, which has essentially run its own marathon to get to this point, working tirelessly on a technology that was first announced in March 2006. Silverlight version 1, essentially a media player that supports the JavaScript programming model, was released in September 2007. Silverlight 2 (formerly Silverlight 1.1) added support for the .NET programming model, a key driver for companies and developers that have invested in the Windows/.NET ecosystem. Beta 2, released just two months ago at the end of the developer portion of Microsoft Tech-Ed, offers a commercial Go-Live license, which NBC has with the Olympic video player. The Silverlight 2 beta 2 technology is "mostly feature complete," according to Microsoft. In beta 2, Microsoft spruced up its controls, namely the Calendar, TextBox and DataGrid, and introduced a TabPanel, but the ComboBox -- commonly called a drop-down box -- is glaringly missing. A Visual State Manager was added so that designers and developers could more easily customize their controls, templates and skins without writing a lot of code. 
"It's easier to understand the concept of visual states in a Visual State Manager than to do everything manually with triggers," says Tony Lombardo, lead technical evangelist at component vendor Infragistics Inc., which offers a community technology preview of a Silverlight beta 2 chart and gauge. "Triggers are still very powerful and light and let you do a lot at different levels, and if you really want to dig into it, it's great. But that first experience of setting it up, they really needed to make that easier and I think they've accomplished that." The Visual State Manager is also one of the features that Microsoft is going to port back into Windows Presentation Foundation (WPF) in "a one-to-one code mapping," says Lombardo. WPF shipped without a DataGrid, and Microsoft did not repeat that same faux pas with Silverlight 2. A rough version appeared in beta 1. "Just about everything in the DataGrid has been improved in beta 2," explains John Papa, a senior consultant for ASPSOFT Inc. and the author of the upcoming book, "Data Access with Silverlight 2" (O'Reilly). Microsoft also added a few features to the data binding in Silverlight that WPF already had. Beta 2 introduces an ADO.NET Data Services wrapper. And LINQ to JSON, available in beta 1 but not confirmed as a feature, is full-fledged in the second beta. One of the best things about beta 2, says Papa, is that it can communicate with a variety of Web services, and those Web services can return data in a variety of formats. "Silverlight can talk to Web services, WCF, REST or plain-old XML," he explains, "and then the data that it can get from those services can be XML, JSON or it can be a construct from a business. You can also talk to a server using single sockets, which allows two-way communication." Early versions of the Silverlight 2 technology housed most of the controls in extended assemblies. Microsoft, in beta 2, moved 30-plus commonly used controls into the main DLL. 
Papa expects the company to move additional controls like the calendar and the date picker into the main DLL before the final release. "Now when you create an application, once somebody has a Silverlight program, your app is really tiny," he says. "An application that might have been 250k before might only be 20k now." The plug-in has retained its small size, going from a 4.3MB to a 4.6MB download in beta 2, according to Microsoft. Microsoft also held true to its promise that it would work on making the Silverlight 2 API a true subset of WPF, where possible. At the Microsoft Tech-Ed North America 2008 Developers conference, an attendee asked Microsoft founder and chairman Bill Gates if Silverlight and WPF were being merged together. Gates, who gave the opening keynote address, replied, "Silverlight will probably have almost everything WPF has today, but WPF will keep getting richer and richer as we go forward." But the company still has some work ahead. "The big thing I'm expecting between now and the final release is not so much about additional controls being added, although they may get added, but more about finalizing the API, the namespaces, and just generally tweaking the libraries with a particular focus on making them more compatible with full WPF," says Daniel Chait, managing director of Lab49, a New York-based consultancy. Along with Silverlight 2 beta 2, Microsoft released the Expression Blend 2.5 preview. "Visual Studio is integrating more and more of Blend into it, but it's still behind, so you almost need to have them both open at one time," says Papa. "They kind of complement each other right now. I think the intent of Microsoft is to move a lot of what is in Blend into Visual Studio. To at least see [your design] in Visual Studio will be a big step." Visual State Manager is not really supported in Visual Studio and for developers using the designer and XAML in Blend, there's no IntelliSense.
"A lot of it is that the design experience is a little bit behind in terms of features," he says.

Go Live?

Although it's still a beta technology and the tooling has a ways to go, Microsoft says Silverlight 2 should be considered as a technology solution for mission-critical applications in business environments. The company holds up as proof the high-stakes NBC Olympics site and the use of Silverlight 2 beta 2 for the upcoming Democratic convention. "There will be some API changes between beta 2 and the final release, so you should expect that applications you write with beta 2 will need to make some updates when the final release comes out," writes Microsoft Corporate Vice President Scott Guthrie in his June 7 blog announcing the release of the beta 2 technology. "But we think that these changes will be straightforward and relatively easy, and that you can begin planning and starting commercial projects now." Indeed, hundreds of millions of dollars exchanged hands as NBC lined up vendors and prepared to offer unprecedented Olympic coverage for the first time via Web programming and content, according to Rechs. "It's important for people to think about the business that Silverlight represents for our client [NBC]," he says. "It's not an exaggeration to say this is a billion dollar project. It's easy to get excited about the visibility of the product and the highly public, highly trafficked thing, but when you look at it from a dollars-and-cents perspective, there's really a ton riding on this." Schematic also had to consider their client's business model and did a lot of work with NBC and their advertisers to figure out how ads were going to be incorporated into the viewing experience. Lab49 has several Silverlight projects in the soft launch/pilot stage. "All of our Silverlight applications took the beta 2," says Chait. Broadly speaking, clients are choosing Silverlight for the same kinds of apps that Lab49 builds with WPF and WinForms: interactive trader desktops.
Lab49 also uses the technology as a way to build management visualizations for infrastructure-level applications. "The main application focus is turning the bank inside out," says Chait, "providing functional applications that use bank systems for pricing, for access service management, but providing those apps to customers directly rather than having them communicate over the phone or over e-mail or enter your systems manually. Now I can essentially give them the same tools that my institutional clients such as hedge funds use to trade on my systems directly, and I can deliver that through a browser so I don't have the desktop-compatibility issues that have been obstacles and big costs to support." Chait says adoption of Silverlight by Lab49 clients has been faster than that of WPF. "The decision some of our customers have made when they're looking at their legacy internal apps is, what's the benefit of making the leap from WinForms to WPF? It's not as easy to justify. I think with Silverlight, you can say, 'Oh, here's an opportunity to build the only kind of application that we deliver over the Web and it's a quantum leap over the HTML- and AJAX-style apps. The apps are a lot more functional and mature.'" "From a business standpoint, I think it deserves serious consideration: Should we build a desktop app at all or should we just use Silverlight?" he adds. A project's timeline is also an important consideration. The final version of Silverlight 2 is expected this month. "Right now WPF is more complete than Silverlight," says Papa. "If your app needs to work in a smart client and a browser, WPF is going to work in both. If you need to reach multiple browsers, Silverlight is a very good choice." But he cautions: "If it needs to go live in the next two months, I probably wouldn't go there because even though it's mature, there are going to be some changes." Infragistics' Lombardo says the technology is ready but developers need to be prepared for a bit of a learning curve.
"It's certainly practical and it's a lot easier than writing two lines of JavaScript to get the same thing accomplished, and you can do things to get a better targeted experience," he says. However, it's not going to be a 1-2-3 app. "You should expect to spend a little bit of time learning Silverlight," he advises. "The data model is going to be different. If you're used to building an ASP.NET-type app or even a WinForms app, now you're dealing with Web services and WCF in order to get your data back and forth, persistence moves. There are some challenges that developers are going to face, but once you've learned the Silverlight way of doing that, it's pretty easy and it just becomes standard practice." Rechs says developers need to weigh the adoption costs against the advantages of the technology and its business value. "Silverlight is a new plug-in. It's going to have adoption costs for the developer," he says. "You want all these benefits, but you have to consider this issue: People's desktops are often locked down and the install process is a little disruptive. Intrinsically there's some risk, although Microsoft has made it quick and easy to install." Schematic, which builds video players with Flash and Silverlight technology, sees the magnitude of the business impact of Microsoft's RIA technology, especially what it enables in terms of user experience, video and cross-platform support. "Many customers have big investments in Visual Studio, Expression Blend and the whole Microsoft ecosystem. Those people have no way to run apps in a multi-vendor environment without having to exit that ecosystem and getting a whole different set of tools and software licenses," says Rechs. The true test of Schematic's Silverlight video player is approaching, and it will last for 17 days straight. A team of 15 to 20 developers worked on the NBCOlympics.com enhanced video player along with engineers from NBC and Microsoft.
Developers from all three companies will be working 24/7 throughout the Olympics to make sure the Silverlight-based video player works as planned.
https://visualstudiomagazine.com/articles/2008/08/01/silverlight-2-olympic-trial.aspx
CC-MAIN-2018-13
refinedweb
2,676
58.32
getting classnotfound exception while running login application getting classnotfound exception while running login application hi, I am getting Error creating bean with name 'urlMapping' defined... to bean 'loginController' while setting bean property 'urlMap' with problem with executing JSF file(Build failed) problem with executing JSF file(Build failed) *while executing below code i am getting problem as **init: deps-module-jar: deps-ear-jar: deps... = st.executeQuery(qry); while (rs.next()) { b ERRor in executing DB program ERRor in executing DB program While executing following code an Error was generated.can any one help me for this. LiveDB liveDBObj...=pstmt.executeUpdate(qry); ---------- **ERROR:java.sql.SQLException: You Getting Error - Development process Getting Error Hi , i am getting error while executing this code. I just want to store date in to database. Dont mistake me for repeated questions. java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] Number Binding Error in Spring - Spring Binding Error in Spring Error: Neither BindingResult nor plain target object for bean name 'loginBean' available as request attribute I am...; My Maping in xml Error in simple session bean .................. Error in simple session bean .................. Hi friends, i am trying a simple HelloWOrld EJb on Websphere Applicatiopn server 6.1. Can any... getting these errors while i am runnign above client Bean Bean what is bean? how to use bean HTML Code... viji"); rs=st.executeQuery(); while(rs.next...) { out.println("Error "+E); } finally { rs.close(); con.close(); } %> Error with Maven while deploying the war file Error with Maven while deploying the war file Hi, I am new... is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'rentalService... 
is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory Stateless Session Bean Example Error Stateless Session Bean Example Error Dear sir, I'm getting following error while running StatelessSessionBean example on Jboss. Please help me... Yes sir, I'm working on this example and getting error that I've bean creation exception bean creation exception hi i am getting exception while running simple spring ioc program Exception in thread "main" org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from class spring Xml spring Xml when ia m using spring 3.0 with eclipse 4 i am getting XmlBeanFactory is depricated? so what should i do? is there any refernces i mean i have to call indirectly Problem in executing query.... Problem in executing query.... Suppose there is a textbox or a text... it is showing error because of the '.I understand where the problem is.If the user does not enter ' then there is no problem while executing.But suppose the user: Bean life cycle in spring Bean life cycle in spring  ... and also explains the lifecycle of bean in spring. Run the given bean example... org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions INFO: Loading XML bean definitions from class path error for getting the data from an api using br.readLine - Development process error for getting the data from an api using br.readLine hi... executing the above while loop the application is hanging . i find that the application gets hanging while executing inputreader.readLine() function. please Questions on Spring - Spring . ------------------------ Spring reads the dependencies from an xml file, usually called... is the AOP framework. While the Spring IoC container does not depend on AOP, meaning...Questions on Spring 1> what is Spring Framework ? why does An introduction to spring framework complained that Spring is still too dependent on XML files. 
In this tutorial, any... through spring configuration file or class metadata while in EJB declarative... 'VelocityConfigurer' bean is declared in spring configuration. The view resolver Spring and configuration file ---.xml is <bean id="customerService" class...; </bean> <bean id="hijackAfterMethodBean" class="com.mkyong.aop.HijackAfterMethod" /> <bean id="customerServiceProxy" class...;/filter-mapping> </web-app> spring-security.xml <?xml version error while compiling - Java Beginners error while compiling i am facing problem with compiling and running a simple java program, can any one help me, the error i am getting is javac is not recognised as internal or external command Check if you JAVA_HOME Bean life cycle in spring Bean life cycle in spring  ... and also explains the lifecycle of bean in spring. Run the given bean example... loadBeanDefinitions INFO: Loading XML bean definitions from class path NoSuchBeanDefinitionException - Spring org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ohfileUploadController... reference to bean 'uploadOHFileManager' while setting bean property... is org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'uploadOHFileManager' is defined Spring with Hibernate - Spring Spring with Hibernate When Iam Executing my Spring ORM module (Spring with Hibernate), The following messages is displaying on the browser window... The server encountered an internal error () that prevented it from fulfilling error error while iam compiling iam getting expected error- Error- Hello, I would like to know about XSD file. I try to print XML file but I am getting error SAXException-- says Content is not allowed in prolog. 
Please help me file upload in spring 2.5 - Spring not found 404 error (when i start the tomcat5.50 the bean configuration...file upload in spring 2.5 hi, i facing problem in file upload in spring 2.5 my FileUploadController.java file package example spring3 mvc appliation bean definition not found error the following error, can you suggest me how to solve. The error message is: Error creating bean with name...spring3 mvc appliation bean definition not found error hi I spring3 mvc appliation bean definition not found error execute it shows index page, when I click on link it shows the following error Error creating bean with name...spring3 mvc appliation bean definition not found error hi deepak I tutorial for file upload in spring - Spring i am getting the requested resource not found 404 error (when i start... for uploading file using spring framework. The example in the spring reference uses... interface.How to work with it? I am totally new to spring can somebody help me.   org.hibernate.exception.GenericJDBCException - Spring shopping cart application from roseindia.net i am getting some exception while running application in tomcat server plz help me. exception is: Error is: Could... is org.hibernate.exception.GenericJDBCException: Cannot open connection Error:==> Getting an error :( Getting an error :( I implemented the same code as above.. But getting this error in console... Console Oct 5, 2012 10:18:14 AM org.apache.tomcat.util.digester.SetPropertiesRule begin WARNING: [SetPropertiesRule]{Server Erron while Erron while Hi, i'm doing a project regarding xml. I want to extract all nodes from body of existing xml doc to make another xml with that data.Here is my coding and error showing on that coding.Could anybody help to know my error HelloWorld Deployment Error for module: HelloWorld: Error occurred during deployment: Exception while deploying the app [HelloWorld... [HelloWorld]. 
TargetNamespace.1 : Espace de noms " XML error message: The reference to entity XML error message: The reference to entity XML error message: The reference to entity "ai" must end with the ';' delimiter. Im getting this error when i gotta edit my blogger template Please advice manjunath Session Bean method while ending the life cycle of the session bean. After this, the bean.... The container may deactivate a bean while in ready state (Generally... While ending the life cycle of the bean, the client calls the annotated @Remove What is Bean lifecycle in Spring framework? What is Bean lifecycle in Spring framework? HI, What is Bean lifecycle in Spring framework? Thanks bean life cycle methods in spring? bean life cycle methods in spring? bean life cycle methods in spring Spring SqlRowSet example ://"> <bean...Spring SqlRowSet example The 'SqlRowSet' is used to handle the result..."); int rowCount = 0; while (srs.next()) { System.out.println xml - XML xml hi convert xml file to xml string in java ,after getting the xml string result and how to load xml string result? this my problem pls help..."); FileOutputStream output = new FileOutputStream("new.xml"); ReadXML xml = new SPRING ... A JUMP START the bean Spring container and JavaBeans support utilities. 4. spring-aop.jar... methods, functions etc., 3. A XML file called Spring configuration file. 4...; <!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http java programe executing error - Java Beginners java programe executing error i am creating one package to one class, but the compilation is succeed but how to execute the class Hi Friend, If you have following java class: package newp; class Hello Spring filter Bean Spring Filter Bean The auto scan property can be gain through @Component... in the xml configuration file. In this example you will see how to filter the components in Spring framework. StudentDAO.java package XML DOM error - Java Beginners ) but im getting a error like this... 
"java.io.IOException: Server returned HTTP...XML DOM error import org.w3c.dom.*; import javax.xml.parsers....("xml Document Contains " + nodes.getLength() + " elements."); } else Spring Batch Example Spring Batch Example  ... about batchUpdate() method of class JdbcTemplate in Spring framework... and update data of the table Simultaneously. context.xml <?xml Handling Errors While Parsing an XML File Handling Errors While Parsing an XML File This Example shows you how to Handle Errors While parsing an XML document. JAXP (Java API for XML Processing) is an interface Spring Security Authorized Access ; </web-app> spring-security.xml <?xml version="1.0"...Spring Security Authorized Access In this section, you will learn about authorized access through Spring Security. EXAMPLE Sometimes you need to secure Getting started with the Spring MVC framework. Spring MVC Getting Started - Getting started with Spring MVC... Spring MVC Framework In this we will quickly start developing the application using Spring MVC module. We will also see what all configuration and code Stateless Bean Error Stateless Bean Error Ejb stateless bean giving following error,please help me. 11:49:54,894 INFO [STDOUT] Error:$Proxy72 cannot be cast... are providing you a link that will illustrate you more clearly about Stateless Bean spring spring hi how can we make spring bean as prototype how can we load applicationcontext in spring what is dependency injection jar file built by ant jar task does not have all the dependant jars and throws exception while reading the appplicationContext,xml exception while reading the appplicationContext,xml I have a spring...;/target> While executing the jar file using java -jar command it is throwing... to locate Spring NamespaceHandler for XML schema namespace spring bind - Spring ); I'm getting the following error: org.apache.jasper.JasperException: /WEB...spring bind I'm trying to retrieve a list in my jsp using the command object. Is this statement permissable? 
dataSource="" Here's the first part string manipulation in xml files present in a folder. while executing that .sql file in xml its giving me error as ""\" unexpected character" solution is I have to chage every 2.5 MVC File Upload Spring 2.5 MVC File Upload Spring 2.5 MVC File Upload This tutorial explains how to upload file in Spring 2.5 MVC Framework. Spring MVC module of Spring framework I had this error while deploying a web services in jboss I had this error while deploying a web services in jboss Error...: com.sun.xml.ws.transport.http.servlet.WSServletContextListener <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns="" xmlns:xsi Spring 3 MVC Validation Example Spring 3 MVC Validation Example This tutorial shows you how to validate Spring 3 MVC based applications. In Spring 3 MVC annotation based controller has... application we will be create a Validation form in Spring 3.0  am Getting following exception when executing javax.servlet.ServletException: java.lang.NoClassDefFoundError: org...(0); Iterator rowIter = mySheet.rowIterator(); while (rowIter.hasNext An Entity Bean Example metadata annotations and/or XML deployment descriptor is used for the mapping..., such as relational database an entity bean persists across multiple session and can be accessed by multiple clients. An entity bean acts as an intermediary between a client Cmp Bean - EJB Cmp Bean I want to create connection pool in admin console in sun..., password , database name correctly atlast, while i give the ping button It will give the error, Operation 'pingConnectionPool' failed in 'resources' Config Spring Training Spring Framework Training for the Developers  ...; Hands-On: 70% The Spring Framework training is specially designed for the java programmers looking for a start up in the Spring Framework and use Executing JAR file - Swing AWT Executing JAR file Hello Friends! I have successfully...:\>java -jar secl.jar then, the following error msg is shown... 
error msg dialog box is shown: Could not find the main class.Program spring first example - Spring spring first example I am trying my first example of spring from the link But I am not getting... org.apache.catalina.core.StandardHost start INFO: XML validation disabled Jul 16, 2010 2:07:53 Spring Injection Example ; XML Bean-Injection, The given example below gives the brief...;Inject" is the name of the bean class which would be referred in the xml file... org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions INFO: Loading XML bean Learn Features of Spring 3.0 is present. The Spring Expression Language can be used while defining the XML and Annotation based bean definition. Support... Spring 3.0 Features - Spring 3 new features RFT 8.1.1.3 Error RFT 8.1.1.3 Error While executing RFT 8.1.1.3 from command mode I am getting the following error message, I have enabled JRE 1.6 and Browser enablement test is pass. Error message is "exception_message = C:\Program Files\IBM\SDP Inheritance in Spring loadBeanDefinitions INFO: Loading XML bean definitions from class path resource... Inheritance in Spring  ... about the inheritance in the Spring framework. By inheritance we mean a way The Complete Spring Tutorial In the last section we developed the .xml file to configure the IOC (Spring... Injection Example XML Bean-Injection, The given example below... in Spring Calling Bean using init() method in spring - Spring spring what is bean
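Several of the snippets above revolve around declaring beans and their lifecycle callbacks in a Spring XML file. For reference, a minimal sketch of such a configuration (the bean names and classes are illustrative, in the style of the Spring 2.5-era examples quoted above):

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns=""
       xmlns:xsi=""
       xsi:schemaLocation="
           ">

    <!-- init-method runs after the properties are set;
         destroy-method runs when the container shuts down. -->
    <bean id="customerService" class="com.example.CustomerService"
          init-method="init" destroy-method="cleanup">
        <property name="dataSource" ref="dataSource"/>
    </bean>

    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
          destroy-method="close">
        <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
        <property name="url" value="jdbc:mysql://localhost:3306/test"/>
    </bean>
</beans>
```

A BeanCreationException like the ones reported above usually means one of these definitions references a bean id, class, or property that the container cannot resolve at startup.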
http://www.roseindia.net/tutorialhelp/comment/91278
CC-MAIN-2014-52
refinedweb
2,320
50.12
Free doesn’t always mean “not as good as paid”, and OpenHAB is no exception. The open source home automation software far exceeds the capabilities of any other home automation system on the market – but it’s not easy to get set up. In fact, it can be downright frustrating. In part 1 of the guide, I walked you through installing OpenHAB on , introduced the core concepts of OpenHAB, and showed you how to add your first items into the system. Today we’ll be going further: - Adding ZWave devices - Adding a Harmony Ultimate controller - Introducing rules - Introducing MQTT, and installing an MQTT broker on your Pi, with sensors on an Arduino - Recording data and graphing it Introduction to Z-Wave Z-Wave has been the dominant home automation protocol for years: it’s reliable, has been extensively developed, and works over a much longer range than any other smart home products. There’s hundreds of Z-Wave sensors available to you that perform a wide range of tasks. OpenHAB can work with Z-Wave, but is a hassle to set up, and reliability is not guaranteed. If you’re considering the purchase of a house full of Z-Wave sensors specifically for use with OpenHAB, I’d urge you to reconsider. It may work out great for you, or it may be plagued with small but persistent problems. At least, don’t buy a house full of sensors until you’ve had a chance to try out a few. The only reason to choose Z-Wave is if you’re not 100% settled on OpenHAB, and would like to leave your options open in future: Z-Wave for instance works with Samsung SmartThings hub, as well as Z-Wave specific hubs such as Homeseer, and a range of other software options such as Domoticz. Though OpenHAB includes a Z-Wave binding, you still need to configure the Z-Wave network first, before OpenHAB can start querying it for data. If you’ve got a Rasberry controller board, you have some software supplied for configuring the network, so we won’t be covering that here. 
If you bought an Aeotec USB Z-Stick controller or similar, you likely don't have any software included, so read on.

If you already have a Z-Wave network set up, you can just plug your controller into the Pi and start configuring the binding and items. If this is your first foray into Z-Wave, it's a little more complex.

First, on the hardware side: each controller has its own way of pairing with devices (technically known as "inclusion mode", in which a node ID is assigned). In the case of the Aeotec Z-Stick, this means unplugging it from the USB port, and pressing the button once to place it into inclusion mode. Then take it near to the device you're pairing, and press the inclusion button on that too (this will also vary: my Everspring socket requires the button to be pressed 3 times in quick succession, so the lesson here is to read the manual for your device). The Z-Stick flashes briefly to indicate success. This presents problems when plugging it back into the Pi, as a new port is assigned. Restart your Pi to have it reset back to the standard port if you find it's been dynamically reassigned a different one. Better still: don't plug it into the Pi until you've done all the hardware pairings first.

Installing HABmin and Z-Wave Bindings

Since OpenHAB doesn't actually include a configuration utility for Z-Wave, we're going to install another web management tool which does – something called HABmin. Head on over to the HABmin GitHub repository and download the current release. Once you've unzipped it, you'll find 2 .jar files in the addons directory – these should be placed in the corresponding addons directory in your OpenHAB Home share (if you're also using the Aeotec Gen5 Z-Stick, make sure you've got at least version 1.8 of the Z-Wave binding). Next, create a new folder in the webapps directory, and call it "habmin" (lowercase is important).
Copy the rest of the downloaded files into there.

Note: There’s also a HABmin 2 under active development. Installation is much the same but with one additional .jar addon. It might be worth trying both just to see which you prefer.

If you haven’t already, plug your controller into your Pi. Type the following to find the correct port.

ls /dev/tty*

You’re looking for anything with USB in the name, or in my particular case, the Z-Stick presented itself as /dev/ttyACM0 (a modem). It might be easier to do the command once before you plug it in, and once after, so you can see what changes if you’re unsure.

Open up the OpenHAB config file and modify the section on Z-Wave, uncommenting both lines and putting in your actual device address. One final step for me was to allow the OpenHAB user to access the modem.

sudo usermod -a -G dialout openhab

Now, to kick everything into action, restart OpenHAB.

sudo service openhab restart

Hopefully, if you’re checking the debug log, you’ll see something like this. Congratulations, you’re now talking Z-Wave. You may also find the debug log flooded with messages from various Z-Wave nodes.

Let’s start by checking HABmin to see what it’s found: (replacing openhab.local with your Raspberry Pi hostname or IP address). There’s a lot to see in HABmin, but we’re only really concerned with the Configuration -> Bindings -> Z-Wave -> Devices tab, as you can see below. Expand the node to edit the location and name label for your ease of reference.

Configuring Z-Wave Items

Each Z-Wave device will have a specific configuration for OpenHAB. Thankfully, most devices have already been explored and there will be examples out there for yours already. Configuring custom devices that aren’t recognized is well beyond the scope of this guide, but let’s assume yours is supported for now.

First, I’ve got a basic Everspring AN158 power switch and meter on Node 3. Some quick Googling led me to a blog post on Wetwa.re, with a sample item configuration.
I adapted this as follows:

Switch Dehumidifier_Switch "Dehumidifier" {zwave="3:command=switch_binary"}
Number Dehumidifier_Watts "Dehumidifier power consumption [%.1f W]" { zwave="3:command=meter" }

Perfect. Next up is an Aeotec Gen5 Multi-Sensor. For this one, I found a sample config at iwasdot.com, and my multisensor is on Node 2.

Number Hallway_Temperature "Hallway Temperature [%.1f °C]" (Hallway, Temperature) {zwave="2:0:command=sensor_multilevel,sensor_type=1,sensor_scale=0"}
Number Hallway_Humidity "Hallway Humidity [%.0f %%]" (Hallway, Humidity) {zwave="2:0:command=sensor_multilevel,sensor_type=5"}
Number Hallway_Luminance "Hallway Luminance [%.0f Lux]" (Hallway) {zwave="2:0:command=sensor_multilevel,sensor_type=3"}
Contact Hallway_Motion "Hallway Motion [%s]" (Hallway, Motion) {zwave="2:0:command=sensor_binary,respond_to_basic=true"}
Number sensor_1_battery "Battery [%s %%]" (Motion) {zwave="2:0:command=battery"}

If the format of this looks strange to you, please head on back to the first beginner’s guide, specifically the Hue binding section, where I explain how items are added. You’ll probably only ever need to copy and paste examples like this, but in case you have a new device, the binding documentation details all the commands.

Logitech Harmony Binding

Before we jump into rules, I wanted to add a quick note about working with the Harmony binding. I’m a big fan of the Harmony series of ultimate remotes for simplifying the home media center experience, but they often stand as a separate system within the smart home. With OpenHAB, Logitech Harmony activities and full device control can now be a part of your centralised system, and even included in automation rules.
Begin by installing the three binding files, which you can find by using apt-cache to search for “harmony”:

sudo apt-get install openhab-addon-action-harmonyhub
sudo apt-get install openhab-addon-binding-harmonyhub
sudo apt-get install openhab-addon-io-harmonyhub

Don’t forget to chown the bindings directory again when you’re done:

sudo chown -hR openhab:openhab /usr/share/openhab

To configure the binding, open up the openhab.cfg file and add a new section as follows:

########## HARMONY REMOTE CONTROLS ##########
harmonyhub:host=192.168.1.181
harmonyhub:username=your-harmony-email-login
harmonyhub:password=your-password

The IP address is that of your Harmony hub. Use a network scanner to find that out. You’ll also need to enter your login details, the ones you enter when you launch the standard Harmony config utility. That’s it.

Upon restarting OpenHAB, your debug log should have a sudden burst of output from the binding. This is a JSON-formatted list of all your activities, devices, and commands that can be sent. It’s a good idea to copy this out for future reference. You can make it even easier to read with collapsible nodes by pasting it into an online JSON formatter such as this one.

As well as the standard PowerOff activity, which is a default, you’ll find your own defined activities listed here by name.

Now let’s create a simple one-button control to start activities. First, in your items file, add the following line. Change the group and icon if you like.

/* Harmony Hub */
String Harmony_Activity "Harmony [%s]" <television> (Living_Room) {harmonyhub="*[currentActivity]" }

This is a two-way String binding, which is able to both fetch the current activity, and command the current activity to be something else. Now we can create a button for it, in the sitemap file.

Switch item=Harmony_Activity mappings=[PowerOff='Off',Exercise='Exercise',13858434='TV',Karaoke='Karaoke']

In the square brackets you’ll see each activity along with the label.
Generally you can refer directly to activities as you’ve named them on your remote, but the exception to this, I found, was anything with a space in the activity name, such as “Watch TV”. In this case, you’ll need to use the activity ID. Again, you can find the ID in the JSON debug output. Save and refresh your interface, and you should see something similar to this:

You can also refer to activities in your rules, as we’ll see next. Read the wiki page for more info on the Harmony binding.

A General Introduction to Rules

Most smart home hubs include some kind of rules creation so you can automatically react to sensor data and events in the home. In fact, I’d argue that a truly smart home isn’t one you need to spend time interacting with via mobile apps – it’s one that’s invisible to the end user and completely automated. To this end, OpenHAB also includes a powerful rules scripting language that you can program, far exceeding the complexity of most smart home hubs or IFTTT.

Programming rules sounds worse than it is. Let’s start simple with a pair of rules that turn the light on or off depending on the presence sensor:

rule "Office light on when James present"
when
	Item JamesInOffice changed from OFF to ON
then
	sendCommand(Office_Hue,ON)
end

rule "Office light off when James leaves"
when
	Item JamesInOffice changed from ON to OFF
then
	sendCommand(Office_Hue,OFF)
end

First, we name the rule – be descriptive, so you know what event is firing. Next, we define our simple rule by saying when x is true, then do y. end signifies the closure of that particular rule. There are a number of special words you can use in rules, but for now we’re dealing with two simple bits of syntax – Item, which allows you to query the state of something; and sendCommand, which does exactly what you think it will. I told you this was easy.
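If the when/then structure feels abstract, here’s the same trigger logic modelled in plain Python – purely a sketch of the semantics (the class and method names are mine; OpenHAB rules are not actually written this way):

```python
class Rules:
    """Toy model of the trigger semantics above -- not OpenHAB code.

    Each rule fires its action only when an item transitions from one
    specific state to another, just like `changed from OFF to ON`.
    """
    def __init__(self):
        self.rules = []    # (item, from_state, to_state, action)
        self.states = {}   # last known state of each item

    def when_changed(self, item, frm, to, action):
        self.rules.append((item, frm, to, action))

    def update(self, item, new_state):
        old_state = self.states.get(item)
        self.states[item] = new_state
        for r_item, frm, to, action in self.rules:
            if r_item == item and old_state == frm and new_state == to:
                action()

log = []
rules = Rules()
rules.when_changed("JamesInOffice", "OFF", "ON", lambda: log.append("Office_Hue ON"))
rules.when_changed("JamesInOffice", "ON", "OFF", lambda: log.append("Office_Hue OFF"))

rules.update("JamesInOffice", "OFF")  # initial state: no transition yet
rules.update("JamesInOffice", "ON")   # OFF -> ON fires the first rule
rules.update("JamesInOffice", "ON")   # no change: nothing fires
```

The key detail the sketch captures is that `changed from OFF to ON` only fires on that exact transition – repeated updates to the same state do nothing.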
It’s probably unnecessary to use a pair of rules, but as my logic gets more complex it’ll be beneficial to have them separate for whether I’m entering or leaving the area – and it might be a good idea to add a light sensor somewhere into the equation so we’re not unnecessarily turning on lights.

Let’s look at another example to create a scheduled rule.

rule "Exercise every morning"
when
	Time cron "0 0 8 1/1 * ? *"
then
	harmonyStartActivity("Exercise")
end

Again, we name the rule, state the conditions when it should fire, and the actions to take. But in this case, we’re defining a Time pattern. The funny code you see in the quotes is a CRON expression for Quartz Scheduler (the format is slightly different to a regular CRONtab). I used cronmaker.com to help create the expression, but you can also read the format guide for a detailed explanation and more examples.

My rule simply says “at 8am every morning, every day of the week, tell my Harmony Ultimate system to start the Exercise activity”, which in turn activates the TV, the Xbox, and the amplifier, and presses the A button after a minute to launch the disk in the drive. Sadly, OpenHAB isn’t yet able to do the exercise for me.

One more rule I want to show you is something I use to manage the humidity levels in my home. I have a single dehumidifier which I need to move around wherever needed, so I decided to look at all of my humidity sensors, find which one is the highest, and store that in a variable. It’s currently triggered every minute, but that can easily be lowered. Take a look first:

import org.openhab.core.library.types.*
import org.openhab.model.script.actions.*
import java.lang.String

rule "Humidity Monitor"
when
	Time cron "0 * * * * ?"
then
	var prevHigh = 0
	var highHum = ""
	Humidity?.members.forEach[hum|
		logDebug("humidity.rules", hum.name);
		if(hum.state as DecimalType > prevHigh){
			prevHigh = hum.state
			highHum = hum.name + ": " + hum.state + "%"
		}
	]
	logDebug("humidity.rules", highHum);
	postUpdate(Dehumidifier_Needed,highHum);
end

The core of the rule is in the Humidity?.members.forEach line. Humidity is the group name for my humidity sensors; .members grabs all of the items in that group; forEach iterates over them (with a curious square bracket format you’re probably not familiar with). The syntax of rules is a derivative of Xtend, so you can read the Xtend documentation if you can’t find an example to adapt.

You probably won’t need to though – there are hundreds of example rules out there:

- Detailed explanation of rules on the official wiki
- The official rules samples wiki page
- Taking rules to new heights
- Advanced samples at IngeniousFool.net

MQTT for OpenHAB and Internet of Things

MQTT is a lightweight messaging system for machine-to-machine communication – a kind of Twitter for your Arduinos or Raspberry Pis to talk to each other (though of course it works with much more than just those). It’s rapidly gaining in popularity and finding itself a home with Internet of Things devices, which are typically low-resource microcontrollers that need a reliable way to transmit sensor data back to your hub or receive remote commands. That’s exactly what we’ll be doing with it.

But why reinvent the wheel? MQ Telemetry Transport was invented way back in 1999 to connect oil pipelines via slow satellite connections, specifically designed to minimise battery usage and bandwidth, while still providing reliable data delivery. Over the years the design principles have remained the same, but the use case has shifted from specialised embedded systems to general Internet of Things devices. In 2010 the protocol was released royalty free, open for anyone to use and implement. We like free.
You might be wondering why we’re even bothering with yet another protocol – we already have HTTP, after all – which can be used to send quick messages between all manner of web-connected systems (like OpenHAB and IFTTT, particularly with the new maker channel). And you’d be right. However, the processing overhead of an HTTP server is quite large – so much so that you can’t easily run one on an embedded microcontroller like the Arduino (at least, you can, but you won’t have much memory left for anything else). MQTT on the other hand is lightweight, so sending messages around your network won’t clog up the pipes, and it can easily fit into our little Arduino memory space.

How does MQTT Work?

MQTT requires both a server (called a “broker”) and one or more clients. The server acts as a middleman, receiving messages and rebroadcasting them to any interested clients.

Let’s continue with the Twitter-for-machines analogy though. Just as Twitter users can tweet their own meaningless 140 characters, and users can “follow” other users to see a curated stream of posts, MQTT clients can subscribe to a particular channel to receive all messages from there, as well as publish their own messages to that channel. This publish and subscribe pattern is referred to as pub/sub, as opposed to the traditional client/server model of HTTP.

HTTP requires that you reach out to the machine you’re communicating with, say hello, then have a back and forth of constantly acknowledging each other while you get or put data. With pub/sub, the client doing the publishing doesn’t need to know which clients are subscribed: it just pumps out the messages, and the broker redistributes them to any subscribed clients. Any client can both publish and subscribe to topics, just like a Twitter user.

Unlike Twitter though, MQTT isn’t limited to 140 characters. It’s data agnostic, so you can send small numbers or large text blocks, JSON-formatted datagrams, or even images and binary files.
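To make the pub/sub idea concrete, here’s a toy in-memory broker in Python. It’s a sketch of the pattern only, not of MQTT itself – a real broker adds wildcards, QoS levels, retained messages and an actual network transport:

```python
from collections import defaultdict

class TinyBroker:
    """A toy in-memory broker illustrating the pub/sub pattern.

    Publishers don't know who is subscribed; the broker fans each
    message out to every callback registered on that exact topic.
    """
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers[topic]:
            callback(topic, payload)

received = []
broker = TinyBroker()
broker.subscribe("myHome/kitchen/temperature", lambda t, p: received.append((t, p)))
broker.publish("myHome/kitchen/temperature", "21.5")
broker.publish("myHome/kitchen/humidity", "60")  # nobody subscribed: silently dropped
```

Notice the publisher never references a subscriber: it only knows the topic name, and the broker does the fan-out.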
It isn’t that MQTT is better than HTTP for everything – but it is more suitable if we’re going to have lots of sensors all around the house, constantly reporting in.

It’s also important to know that OpenHAB will not act as your MQTT broker – we’ll address that bit later. However, OpenHAB will act as a client: it can both publish your OpenHAB activity log, as well as bind particular channels to devices, so you can for instance have a switch that’s controlled by MQTT messages on a particular channel. This is ideal for creating a house full of sensors.

Install Mosquitto on Your Pi

Although OpenHAB includes an MQTT client so you can subscribe to a topic and also publish messages, it won’t act as the server. For that, you either need to use a web-based MQTT broker (paid or free), or install the free software on your Pi. I’d like to keep it all in-house, so I’ve installed Mosquitto on the Pi.

Unfortunately, the version available via the usual apt-get is completely out of date. Instead, let’s add the latest sources.

wget
sudo apt-key add mosquitto-repo.gpg.key
cd /etc/apt/sources.list.d/
sudo wget
sudo apt-get install mosquitto

That’s all we need to do to have an MQTT server up and running on the local network. Your broker is running on port 1883 by default.

Check your MQTT server is working using the free MQTT.fx, which is cross-platform. Click the settings icon to create a new profile, and enter your Raspberry Pi’s IP address or name. Save, and hit connect. If the little traffic light in the top right turns green, you’re good to go.

For a quick test, click on the “subscribe” tab, and type inTopic/ into the text box, then hit the Subscribe button. You’re now subscribed to receive messages on the topic named inTopic, though it’ll be showing 0 messages.

Go back to the publish tab, type inTopic into the small box, and a short message into the large text box below. Hit Publish a few times and look back on the subscribe tab.
You should see a few messages having appeared in that topic.

Before we add some actual sensors to our network, we need to learn about topic levels, which enable us to structure and filter the MQTT network. Topic names are case-sensitive, shouldn’t start with $, or include a space, or non-ASCII characters – standard programming practices for variable names, really.

The / separator indicates a topic level, which is hierarchical; for example, the following are all valid topics.

inTopic/smallSubdivision/evenSmallerSubdivision
myHome/livingRoom/temperature
myHome/livingRoom/humidity
myHome/kitchen/temperature
myHome/kitchen/humidity

Already, you should be seeing how this tree structure is perfect for a smart home full of sensors and devices. The best practice for use with multiple sensors in a single room is to publish each sensor variable as its own topic level – branching out to more specificity (as in the examples above) – rather than try to publish multiple types of sensor to the same channel.

Clients can then publish or subscribe to any number of individual topic levels, or use some special wildcard characters to filter from higher up in the tree.

The + wildcard substitutes for any one topic level. For instance:

myHome/+/temperature

would subscribe the client to both

myHome/livingRoom/temperature
myHome/kitchen/temperature

… but not the humidity levels.

The # is a multi-level wildcard, so you could fetch anything from the livingRoom sensor array with:

myHome/livingRoom/#

Technically, you can also subscribe to the root level #, which gets you absolutely everything passing through the broker, but that can be like sticking a fire hose in your face: a bit overwhelming. Try connecting to the public MQTT broker from HiveMQ and subscribing to #. I got about 300 messages in a few seconds before my client just crashed.
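The wildcard rules above are easy to pin down in code. Here’s a small Python sketch of the matching logic – a simplified illustration only (it skips edge cases like topics beginning with $):

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT topic matches a subscription filter.

    Implements the '+' (single-level) and '#' (multi-level) wildcard
    semantics described above. Simplified sketch, not a full
    implementation of the spec.
    """
    f_levels = filter_.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":                       # multi-level: matches the rest
            return True
        if i >= len(t_levels):             # filter is longer than the topic
            return False
        if f != "+" and f != t_levels[i]:  # literal level must match exactly
            return False
    return len(f_levels) == len(t_levels)

# topic_matches("myHome/+/temperature", "myHome/kitchen/temperature") -> True
# topic_matches("myHome/livingRoom/#", "myHome/livingRoom/humidity")  -> True
```

Note that the final length check is also what makes “myHome/” (a trailing blank level) different from “myHome” – which leads neatly into the beginner tip below.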
MQTT Beginner Tip: “/myHome/” is a different topic to “myHome/” – including a slash at the start creates a blank topic level, which while technically valid, isn’t recommended because it can be confusing.

Now that we know the theory, let’s have a go with an Arduino, Ethernet Shield, and a DHT11 temperature and humidity sensor – you’ve probably got one in your starter kit, but if not, just swap out the environmental sensor for a motion sensor (or even a button).

Publishing MQTT From an Arduino With Ethernet Connection

If you have a hybrid Arduino-compatible device with Wi-Fi or Ethernet built-in, that should also work. Eventually we’ll want a better/cheaper way of communicating than having to use a network connection in every room, but this serves to learn the basics.

Start by downloading the PubSubClient library from Github. If you’ve used the “Download as ZIP” button, the structure is a bit wrong. Unzip, rename the folder to just pubsubclient, then take the two files out of the src folder and move them up one level to the root of the downloaded folder. Then move the whole folder to your Arduino/libraries directory.

Here’s my sample code you can adapt: the DHT11 signal output is on pin 7. Change the server IP for that of your Pi on the following line:

client.setServer("192.168.1.99", 1883);

Unfortunately, we can’t use its friendly name (OpenHAB.local in my case) as the TCP/IP stack on the Arduino is very simplistic, and adding the code for Bonjour naming would be a lot of memory we don’t want to waste. To change the topics that sensor data is being broadcast on, scroll down to these lines:

char buffer[10];
dtostrf(t,0, 0, buffer);
client.publish("openhab/himitsu/temperature",buffer);
dtostrf(h,0, 0, buffer);
client.publish("openhab/himitsu/humidity",buffer);

The code also includes subscription to a command channel.
Find and adjust the following line:

client.subscribe("openhab/himitsu/command");

Examine the code around there and you’ll see that you could easily control an LED or relay, for example, by sending commands to specific channels. In the example code, it simply sends a message back acknowledging receipt of the command.

Upload your code, plug your Arduino into the network, and using MQTT.fx subscribe to either # or openhab/himitsu/# (or whatever you changed the room name to, but don’t forget to include the # at the end). Pretty soon you should see messages coming in; and if you send ON or OFF to the command topic, you’ll see acknowledgments coming back too.

MQTT Binding for OpenHAB

The final step in the equation is to hook this into OpenHAB. For that, of course, we need a binding.

sudo apt-get install openhab-addon-binding-mqtt
sudo chown -hR openhab:openhab /usr/share/openhab

And edit the config file to enable the binding.

mqtt:broker.url=tcp://localhost:1883
mqtt:broker.clientId=openhab

Restart OpenHAB.

sudo service openhab restart

Then let’s add an item or two:

/* MQTT Sensors */
Number Himitsu_Temp "Himitsu Temperature [%.1f °C]" <temperature> (Himitsu,Temperature) {mqtt="<[broker:openhab/himitsu/temperature:state:default]"}
Number Himitsu_Humidity "Himitsu Humidity [%.1f %%]" <water> (Himitsu,Humidity) {mqtt="<[broker:openhab/himitsu/humidity:state:default]"}

By now you should understand the format; it’s getting a Number item from the MQTT binding, on a specified topic. This is a simple example; you may wish to refer to the wiki page, where it can get a lot more complex.

Congratulations, you now have the basis of a cheap Arduino-based sensor array. We’ll be revisiting this in future and placing the Arduinos onto their own entirely separate RF network. I’ve also created an identical version for Wizwiki 7500 boards if you happen to have one of those.
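If the binding string looks dense, it helps to know its shape: a direction marker, then [broker:topic:type:transformation]. Here’s a hypothetical helper (the function is mine, not part of OpenHAB) that pulls apart the common inbound form – the real binding accepts more variants, including outbound configs, so treat this as a reading aid rather than a complete parser:

```python
import re

def parse_mqtt_binding(config):
    """Parse a simplified openHAB 1.x MQTT item binding string.

    Handles the common form "<[broker:topic:type:transformation]",
    where "<" means inbound (broker -> item) and ">" outbound.
    Sketch for illustration only.
    """
    m = re.fullmatch(r'([<>])\[([^:]+):([^:]+):([^:]+):([^\]]+)\]', config)
    if not m:
        raise ValueError("unrecognised binding config: " + config)
    direction, broker, topic, msg_type, transform = m.groups()
    return {
        "direction": "inbound" if direction == "<" else "outbound",
        "broker": broker,           # name of the broker defined in openhab.cfg
        "topic": topic,             # MQTT topic the item is wired to
        "type": msg_type,           # e.g. "state" or "command"
        "transformation": transform # e.g. "default" passes the payload through
    }
```

Running it on the temperature item above would report an inbound state binding on broker “broker”, topic openhab/himitsu/temperature.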
Persistence and Graphing Data

By now you probably have a bunch of sensors set up, whether from Z-Wave or custom Arduinos running MQTT – so you can view the current state of those sensors at any time, and you should also be able to react to their values in rules. But the interesting thing about sensor values is generally that they change over time: that’s where persistence and graphing comes in.

Persistence in OpenHAB means saving the data over time. Let’s go ahead and set up RRD4J (Round Robin Database for Java), so called because data is saved in a round robin fashion – older data is discarded to compress the size of the database.

Install the rrd4j packages with the following commands.

sudo apt-get install openhab-addon-persistence-rrd4j
sudo chown -hR openhab:openhab /usr/share/openhab

Then create a new file called rrd4j.persist in the configurations/persistence folder. Paste in the following:

Strategies {
	everyMinute : "0 * * * * ?"
	everyHour : "0 0 * * * ?"
	everyDay : "0 0 0 * * ?"
	default = everyChange
}

Items {
	// persist everything when the value is updated, just a default, and restore them from database on startup
	* : strategy = everyChange, restoreOnStartup
	// next we define specific strategies of everyHour for anything in the Temperature group, and every minute for Humidity
	Temperature* : strategy = everyHour
	Humidity* : strategy = everyMinute
	// alternatively you can add specific items here, such as
	//Bedroom_Humidity,JamesInOffice : strategy = everyMinute
}

In the first part of this file, we’re defining strategies, which just means giving a name to a CRON expression. This is the same as we already did with My.OpenHAB, but this time we’re creating some new strategies that we can use: everyDay, everyHour and everyMinute. I haven’t used them all yet, but I might in future.

In the second half of the file, we tell rrd4j which data values to save.
As a default, we’re going to save everything each time it updates, but I’ve also specified some time-based strategies for specific sensors. Temperatures I’m not too bothered about, so I’ve set those to save everyHour only, but humidity is a big concern for me, so I want to see how it’s changing every minute. If there’s other data you specifically want to save at set times, add it here now or adjust as needed.

Note: if you want to graph the data too, you MUST store it at least once a minute. It doesn’t matter if your sensor data is even updated this quickly; you simply need to tell rrd4j to store it once a minute.

With that defined, you should begin to see some debug output telling you that values are being stored.

Next up, let’s make some pretty graphs of all this data. It’s really easy. To make a graph of an individual sensor, add the following to your sitemap:

Chart item=Bedroom_Humidity period=h

That’s literally all you need. Valid values for period are h, 4h, 8h, 12h, D, 3D, W, 2W, M, 2M, 4M, Y; it should be obvious what these mean. It defaults to D for a full day of data if not specified.

To create a graph with multiple items, simply graph the group name instead:

Chart item=Humidity period=h

You might also be interested to know that you can use this graph elsewhere; it’s generating an image using the following URL:

How’s Your OpenHAB System Coming?

That’s it for this installment of the guide, but don’t expect it’ll be the last you hear from us about OpenHAB. Hopefully this and the beginner’s guide have given you a solid grounding to develop your own complete OpenHAB system – but it’s a process that’s never really completely finished. Thankfully, OpenHAB can scale well from a few devices to hundreds, from simple rule complexity to the ultimate in home automation – so how’s your system coming along? Which devices did you choose? What’s the next big project you’re going to tackle?
Let’s talk in the comments – and please, if you found this guide useful, click those share buttons to tell your friends how they too can set up their own OpenHAB system.

MQTT.FX is dead. Might want to find a different site.

I will ping you if I find a similar service.

When I edit the name and location, after a second or two the name/location reverts to blank (its original value). Any idea what I’m doing wrong that would cause that? Also I’m a little lost because I see the node ID in HABmin but don’t see where that ID is in turn referenced in OpenHAB. What am I missing?

Not sure why it isn’t saving, but the node ID comes just after the {zwave=" bit...

Number Hallway_Temperature "Hallway Temperature [%.1f °C]" (Hallway, Temperature) {zwave="2:0:command=sensor_multilevel,sensor_type=1,sensor_scale=0"}

Thank you for the advice, as I’m now up and running with my first device (turns out I had thought the item ID was the right one to use, but I needed the node ID). I’m having trouble finding an example of a GE Z-Wave dimmer 12729, so I’m not sure if I’m taking full advantage of the light. My item code is:

Dimmer Living_Rm_Light "Living Rm Light [%d %%]" (All_Lights) {zwave="5:command=SWITCH_MULTILEVEL"}

But on GitHub there are all kinds of other commands included in dimmers, such as respond_to_basic, refresh_interval, etc. I read through their purpose and don’t think I’m missing anything, but I wanted to double check. Also, on my OpenHAB web interface for this light I have an up and down arrow which turns the light on/off, but on my OpenHAB app I have a slider where I can control the dimming. I wonder why I don’t have the same in the OpenHAB web interface... Does it look like I’ve assembled the item declaration correctly?

Specific device functions can be problematic, and I’m afraid I’ve not used dimmers before as I can’t stand them ;) The web/phone interface can vary a lot though, yes, it’s one of the biggest complaints.
I would post for help in the OpenHAB discussion group – someone there will be more experienced with dimmers than me.

How refreshing to see a well thought-out, thorough, and RECENT piece like this. Thank you very much, it answers all kinds of questions that I’d otherwise have to scour the Internet one-by-one to find the answers to. You mentioned: “If you’re considering the purchase of a house full of Z-Wave sensors specifically for use with OpenHAB, I’d urge you to reconsider.” I don’t YET have a houseful of Z-Wave sensors, but I plan on it eventually. Since you urge to reconsider, what WOULD you recommend besides Z-Wave for one who is not yet fully invested?

At this point, I might actually retract that statement. Z-Wave is an arse to set up, but once running, very reliable – I’ve since expanded my system with more multisensors. If you don’t like OpenHAB though, HomeSeer is probably the next best Z-Wave compatible system, and that’s quite pricey. I tried Home Assistant recently, and it was terrible with Z-Wave devices and generally quite unreliable.

Hi man, I want to buy a Harmony hub and I found this one: Boxe Logitech HARMONY(R) HOME HUB-OTHER-EMEA, which has this serial no: 915-000262! What do you think? I couldn’t find specific technical details about the product, like how many devices it can control or other specs. I know that it’s around 100 euros! Shall I buy it? I want to integrate it with my OpenHAB home control already running on a RPi. Thanks!

That’s just the home hub, so no remote control, but yes, that should be compatible with OpenHAB. However, I’m not sure of the limitations – I suspect it can’t learn any unknown codes without the remote, so you’d be stuck with whatever’s in the database, and there may be a limit on the number of devices you can add.

Hi James, I am struggling to change the timings on the x axis on the chart of rrd4j. I am from India and the timing is not matching my local timings. Can you help me in this regard? Thanks.
Try this:

Thanks for the answer James! So what do you suggest? If I want to learn some unknown codes, e.g. for a lighting strip with remotes, do I need the Ultimate one? Or is the hub alone enough? I didn’t understand well the procedure for registering devices into the hub! Thanks again for your answer. D.

Actually, looks like the hub supports learning unknown commands too:

I have struggled for hours now trying to get MQTT going. I have got an Arduino publishing temp and humid data to:

client.publish("openhab/living_room/temperature",buffer);
client.publish("openhab/living_room/humidity",buffer);

... and can see the data in an MQTT tester on the local network, so I know that side of things is OK. I am running OpenHAB on a Raspberry Pi 2 along with Mosquitto. Both seem OK, as I can control Hue lamps via OpenHAB, and can ping MQTT messages from machine to machine (except OpenHAB). I installed the binding with:

sudo apt-get install openhab-addon-binding-mqtt
sudo chown -hR openhab:openhab /usr/share/openhab

then restarted OpenHAB, and edited home.items to look like this:

Group Living_Room

/* Lights */
Color Hue_01 "Sofa A Left" (Living_Room, Lights) {hue="1"}
Color Hue_02 "Sofa A Right" (Living_Room, Lights) {hue="2"}
Color Hue_03 "Sofa B Left" (Living_Room, Lights) {hue="3"}

/* MQTT Sensors */
Number Living_Temp "Living Room Temperature [%.1f °C]" (Living_Room, Temperature) {mqtt="<[broker:openhab/living_room/temperature:state:default]"}
Number Living_Humidity "Living Room Humidity [%.1f %%]" (Living_Room, Humidity) {mqtt="<[broker:openhab/living_room/humidity:state:default]"}

and the openhab.cfg mqtt section to:

# URL to the MQTT broker, e.g. tcp://localhost:1883 or ssl://localhost:8883
mqtt:openhab.url=tcp://192.168.0.22:1883
# Optional. Client id (max 23 chars) to use when connecting to the broker.
# If not provided a default one is generated.
mqtt:openhab.clientId=openhab

I restarted everything, even a sudo reboot, but on the iPad app I get the temperature and humidity items with no data showing. Can anyone see what I have done wrong?

Sorry everyone, I got it. I changed

{mqtt="<[broker:openhab/living_room/humidity:state:default]"}

to

{mqtt="<[openhab:openhab/living_room.....................

Thank you for a really excellent guide to a very complicated system... I am persevering and hopefully one day it will all come together!!!

James – thank you sooo much for this in-depth tutorial. This was exactly what I needed after much frustrating searching. So I have it all down to the wire. My Arduino is successfully publishing, as confirmed by just running mosquitto_sub -t openhab/light, and it’s picking up the data. But OpenHAB itself is not coming through... The config file has these two lines modified:

# URL to the MQTT broker, e.g. tcp://localhost:1883 or ssl://localhost:8883
mqtt:broker.url=tcp://localhost:1883
# Optional. Client id (max 23 chars) to use when connecting to the broker.
# If not provided a default one is generated.
mqtt:broker.clientId=openhab

and my item declaration looks like this:

Number FirstLight "Light Value [%1f]" (Sensors) {mqtt="<[broker:openhab/light:state:default]"}

Can I assume that my Mosquitto is installed and working correctly because I received data when using just the command line? Is there some other variable that I’m forgetting? Thanks so much!
When I debug, this is popping up:

15:59:09.246 [ERROR] [o.i.t.m.i.MqttBrokerConnection:536 ] - MQTT connection to broker was lost
org.eclipse.paho.client.mqttv3.MqttException: Connection lost
	at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:138) [mqtt-client-0.4.0.jar:na]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
Caused by: java.io.EOFException: null
	at java.io.DataInputStream.readByte(DataInputStream.java:267) ~[na:1.8.0_65]
	at org.eclipse.paho.client.mqttv3.internal.wire.MqttInputStream.readMqttWireMessage(MqttInputStream.java:56) ~[na:na]
	at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:100) [mqtt-client-0.4.0.jar:na]
	... 1 common frames omitted

That’s a nasty error, and not something I can help with I’m afraid. Head over to and someone there should know more. The only reference I can find to something similar is here: , but that was old and apparently solved.

Regarding the Arduino MQTT sensor, I just set up a similar system built on one of the ESP8266 variants (I like the NodeMCU Lua development boards, as versions can be found on eBay for $4 shipped!) and there is an mDNS library included with the Arduino environment for these which allows the use of “friendly” hostnames rather than IP addresses.

#include

and call

MDNS.begin("esp8266"); // esp8266 will serve as your hostname

in your setup function, and you’re done. Works a charm for me, but my network environment is all Linux, so I have not tested with Mac or Windows machines. I think it should work though.

Thanks for the tip Peter, looks like the comment system stripped your code though. I think this is the right include:

Since MQTT works properly, I would like to start datalogging.
As an example I wanted to add rrd4j.

openhab.cfg:

persistence:default=rrd4j

rrd4j.persistance:

/ Configuration file for "rrd4j" persistence module
// persistence strategies have a name and a definition and are referred to in the "Items" section
Strategies {
// for rrd charts, we need a cron strategy
everyMinute : "0 * * * * ?"
everyHour : "0 0 * * * ?"
everyDay : "0 0 0 * * ?"
default = everyChange
}
Items {
* : strategy = everyMinute, restoreOnStartup // let's store EVERYTHING - we may need it later (:
//* : strategy = everyMinute
// TSHT21 : strategy = everyMinute, restoreOnStartup // let's only store temperature values in rrd
//Office_temp : strategy = everyMinute
}

items:

Group All
Group Sensors (All)
Group Temp (Sensors)
Group Hum (Sensors)
Group Pressure (Sensors)
Number TSHT21 "Temp SHT21= [%.1f °C]" (Temp,Sensors,All) {mqtt="<[mosquitto:Sensors/ESP1/Temp:state:default]"}
Number TBMP180 "Temp BMP180= [%.1f °C]" (Temp,Sensors,All) {mqtt="<[mosquitto:Sensors/ESP1/Temp1:state:default]"}
Number TDS18B20 "Temp DS18B20= [%.1f °C]" (Temp,Sensors,All) {mqtt="<[mosquitto:Sensors/ESP1/Temp2:state:default]"}
Number HSHT21 "RH SHT21= [%.1f %%]" (Hum,Sensors,All) {mqtt="<[mosquitto:Sensors/ESP1/Hum:state:default]"}
Number PBMP180 "Pressure BMP180= [%.1f hPa]" (Pressure,Sensors,All) {mqtt="<[mosquitto:Sensors/ESP1/Press:state:default]"}

sitemap:

Frame label="Temperature Graph" {
Chart item=Temp period=h
}

This shows an empty graph with the correct 1hr time period, but no data. Am I missing something?

Well what does your debug log say? Do you see the messages saying that rrd4j is storing values every minute, or when they're getting updated? If it's not storing them properly, you'll need to confirm rrd4j is installed right, with correct permissions, and that your config is correct. Confirm each step before moving on to the next. You're missing a slash on one of the comments, btw.
(/ Configuration file for "rrd4j" persistence module should be // Configuration file for "rrd4j" persistence module)

Thanks James, "/" was the issue - for the second time! Next time I will read before posting.

Hi, I'm struggling with the Mosquitto connection to openHAB, both running on an RPi B+.

cfg:

mqtt:mosquitto.url=tcp://localhost:1883
mqtt:mosquitto.clientId=openhab
mqtt-eventbus:broker=mosquitto
mqtt-eventbus:stateSubscribeTopic=/+/+/+

item:

Number temperature "Temp= [%s °C]" (Sensors,All) {mqtt="<[mosquitto:/Sensors/ESP1/Temp:state:default]"}
Number temperature2 "Temp2= [%f °C]" (Sensors,All) {mqtt="<[mosquitto:/Sensors/ESP1/Temp1:state:default]"}

sitemap:

{
Frame {
Text item=temperature
Text item=temperature2
}
}

When I try other MQTT clients connecting to the RPi it works flawlessly, and even the log appears to be connected:

2016-03-14 22:38:25.349 [DEBUG] [.b.mqtt.internal.MqttActivator] - MQTT binding has been started.
2016-03-14 22:38:25.570 [DEBUG] [i.internal.GenericItemProvider] - Start processing binding configuration of Item 'temperature (Type=NumberItem, State=Uninitialized)' with 'MqttGenericBindingProvider' reader.
2016-03-14 22:38:25.600 [DEBUG] [inding.ntp.internal.NtpBinding] - Got time from ptbtime1.ptb.de: Monday, 14 March 2016 22:38:25 o'clock UTC
2016-03-14 22:38:25.653 [DEBUG] [b.mqtt.internal.MqttItemConfig] - Loaded MQTT config for item 'temperature' : 1 subscribers, 0 publishers
2016-03-14 22:38:25.659 [DEBUG] [o.i.t.m.i.MqttBrokerConnection] - Starting message consumer for broker 'mosquitto' on topic '/Sensors/ESP1/Temp'
2016-03-14 22:38:25.724 [DEBUG] [i.internal.GenericItemProvider] - Start processing binding configuration of Item 'temperature2 (Type=NumberItem, State=Uninitialized)' with 'MqttGenericBindingProvider' reader.
2016-03-14 22:38:25.730 [DEBUG] [b.mqtt.internal.MqttItemConfig] - Loaded MQTT config for item 'temperature2' : 1 subscribers, 0 publishers
2016-03-14 22:38:25.734 [DEBUG] [o.i.t.m.i.MqttBrokerConnection] - Starting message consumer for broker 'mosquitto' on topic '/Sensors/ESP1/Temp1'
2016-03-14 22:38:25.847 [DEBUG] [m.internal.MqttEventBusBinding] - MQTT: Activating event bus binding.
2016-03-14 22:38:25.866 [DEBUG] [m.internal.MqttEventBusBinding] - Initializing MQTT Event Bus Binding
2016-03-14 22:38:25.872 [DEBUG] [m.internal.MqttEventBusBinding] - Setting up Event Bus State Subscriber for topic /+/+/+
2016-03-14 22:38:25.893 [DEBUG] [o.i.t.m.i.MqttBrokerConnection] - Starting message consumer for broker 'mosquitto' on topic '/+/+/+'
2016-03-14 22:38:25.915 [DEBUG] [m.internal.MqttEventBusBinding] - MQTT Event Bus Binding initialization completed.

What am I missing?

I think your channel names are wrong. The first / is actually looking for a root-level channel with a blank name. Remove that, so it's just Sensors/ESP1/Temp. But I'm also not familiar with using the eventbus part of the binding, I've only used the direct subscription. Perhaps remove the eventbus stuff until you've got the basic channels working.

And you are right! The "/" was the problem, thank you.

The process for getting the newest Mosquitto has changed a bit
and depends on whether you run Wheezy or Jessie. The Debian repo key is mosquitto-repo.gpg.key. To add it to your keyring, do the following as root (type sudo su):

wget -O - | apt-key add -

Then exit root.

To add the repo to your sources.list:

wget -O /etc/apt/sources.list.d/mosquitto-jessie.list

or:

wget -O /etc/apt/sources.list.d/mosquitto-wheezy.list

Then do:

apt-get update && apt-get install mosquitto

Hi James, so far so good. I have openHAB up and running and connected to my WeMo switch, and I've been trying to understand rules. I've written this simple rule to switch my coffee machine on and off, based on yours above:

rule "Switch on coffee"
when
Time cron "0 0 6 ? * MON-FRI *"
then
sendCommand(lon1,ON)
end

rule "Switch off coffee"
when
Time cron "0 0 8 ? * MON-FRI *"
then
sendCommand(lon1,OFF)
end

I have followed your example, so I'm hoping that's correct. But I'm not really sure what to do with it now, or how I make it active. I see there is a demo rules file; I was thinking I need to create a new rules file, but how does openHAB trigger the rules? Do I need to do anything else?

As long as it's correctly formatted, and saved in the rules folder with a .rules extension, it should work. Check the main log if you have debug enabled (see the first tutorial on how to do that), and when you make any changes to the rules file it should re-read it, say it parsed correctly, and give you a message saying something has been scheduled accordingly. At the time of the event, you'll also find any errors in the log file about why it wasn't triggered if there was a problem.

Thank you for these tutorials! One issue I had with Z-Wave: I was getting the "Port /dev/ttyUSB0 does not exist" messages in my openHAB log (Aeotec Z-Stick Series 2 on Ubuntu 14.04). Turns out to be the same permission issue you identified. So your comment about this not applying to devices on ttyUSB* is not quite correct (at least for my configuration).
Once I did sudo adduser openhab dialout, openHAB was able to recognize the Z-Stick.

Thanks Steve, I've removed that line now so it just reads as necessary for all.

Thanks for your awesome tutorials! It was very nice to stumble across these after digging/fumbling around to figure a lot of this stuff out on my own. Very helpful to have it all laid out and in one place. I followed your instructions for setting up Z-Wave and MQTT flawlessly. Thanks again!

Solved it: apparently OpenHAB is case sensitive and I needed to change the "t" to "T" in the Text item in my sitemap... Many thanks to the OpenHAB forum.

I am completely stuck. The Arduino works fine; I even added a barometer. The Raspberry is installed, Mosquitto etc. seem to work. MQTT.fx shows the data from my DHT22 sensor and the barometer. Very cool. Now OpenHAB:

openhab.cfg:

mqtt:mymosquitto.url=tcp://localhost:1883
mqtt:mymosquitto.clientId=openhab

Items created in default.items:

//Number studeerkamer_Temp "Temperatuur [%.1f °C]" (GF_Living) { mqtt="<[mymosquitto:openhab/studeerkamer/temperatuur:state:default]" }
//Number studeerkamer_Vocht "Luchtvochtigheid [%.1f %%]" (GF_Living) { mqtt="<[mymosquitto:openhab/studeerkamer/vochtigheid:state:default]" }
//Number studeerkamer_Luchtdruk "Luchtdruk [%.1f HPa]" (GF_Living) { mqtt="<[mymosquitto:openhab/studeerkamer/luchtdruk:state:default]" }

I created a frame label in the default sitemap:

Frame label="arduino" {
text item=studeerkamer_Temp
text item=studeerkamer_Vocht
text item=studeerkamer_Luchtdruk
}

And that's where things go wrong: everything UNDER this label is not showing; I just see "arduino" and nothing below. When I move the frame to the bottom everything shows, except my sensor data. Anyone?

I have 2 Harmony remotes and the first one is set up just like you said.
The second one I am trying to do with a unique qualifier:

harmonyhub:mbr.host=
harmonyhub:mbr.username=
harmonyhub:mbr.password=

My items file has:

String HarmonyLR "Harmony Remote [%s]" (livingroom) { harmonyhub="*[currentActivity]" }
String HarmonyMBR "Harmony Remote [%s]" (masterbedroom) { harmonyhub="*[mbr:currentActivity]" }

The first one works and the second one does not. I took this directly off the GitHub page and can't figure out why the master bedroom one doesn't show up. Any thoughts?

Try using a qualifier for each hub, not just the extra one. Also, enable debug output; the Harmony Hub binding is quite verbose and there might be an obvious error in there.

Excellent job! It would be nice to have a full guide to start with the initial installation, as the sudo apt-get installations of openHAB don't work from scratch. Mine is working very well without the apt-get install, but therefore any change has to be made manually, which is a slow task and not always up to date. Keep doing the good manuals!! You've done.
https://www.makeuseof.com/tag/openhab-beginners-guide-part-2-zwave-mqtt-rules-charting/
How to Annualize a Quarterly Return

Investment companies update their clients regularly about their return on investment (ROI). If you have investments, you have probably received a quarterly return report that shows how well each of your investments has fared over the past 3 months. It is easier to comprehend the strength of an investment if you can think of its overall result over 1 year. You can do this by annualizing the report over a period of 12 months, rather than 3, with a calculator and pen and paper. Read on to find out how to annualize a quarterly return.

Steps

1. Find the quarterly return report. There will likely be a number of figures within the report that show how the investment rose or fell during that time. What you want to annualize is the percentage figure, called the rate of return (ROR), which shows the percentage of growth you received during the last 3 months.
- For example, at the bottom of the page of numbers it may show that your quarterly return is 1.5 percent. The annual return would be larger, because your money would have grown slightly each quarter. The annualized return would be the percentage of growth if the investment grew at the same level all year.
2. Figure out how many time periods there are in a year. In order to annualize, you must first know the time period you currently have, a quarter, and how many are in a year. For a quarterly return, there are 4 quarters in a year, so you will be using the number 4 in the equation.
- If you were trying to annualize a monthly return, you would use the number 12 to annualize the ROR.
3. Use a formula to calculate the annual rate of return on your quarterly investment. The quarterly ROR formula that is used to annualize is: Annual Rate of Return = (1 + Quarterly Rate of Return)^4 - 1, where the number 4 is an exponent.
4. Turn your ROR into a decimal in order to use the number within the formula. Take the percent and divide it by 100.
- Our 1.5 percent divided by 100 is 0.015.
5. Plug in your numbers. For this example, we will use 0.015 (1.5 percent) as the Quarterly ROR: Annual Rate of Return = (1 + 0.015)^4 - 1.
- Add 1 to 0.015 and you get 1.015. Your formula should look like AROR = (1.015)^4 - 1.
6. Use a calculator to bring that number to the fourth power. If you do not have a calculator that works with exponents, you can search for one on the Internet. 1.015 to the fourth power is 1.061364.
- The example formula now looks like AROR = 1.061364 - 1.
7. Subtract 1 from your result and you have your AROR in decimals. Multiply the decimal by 100 to get your percentage.
- In our example, AROR = 0.061364. The Annual Rate of Return is 6.1364 percent.

Tips
- A quarterly return is also the name given to tax returns that must be filed every 3 months by some employers, self-employed people and people who receive unemployment benefits.

Things You'll Need
- Quarterly return
- Calculator
- Pen
- Paper
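For readers who prefer to check the arithmetic programmatically, the compounding in steps 3 through 7 can be expressed in a few lines of Python (the function name is ours, not part of the article):

```python
def annualize(quarterly_rate):
    """Convert a quarterly rate of return (as a decimal) to an annual rate.

    Compound growth over the 4 quarters in a year: (1 + q)^4 - 1.
    """
    return (1 + quarterly_rate) ** 4 - 1

# The article's example: a 1.5 percent quarterly return.
rate = annualize(1.5 / 100)        # 1.5% as a decimal is 0.015
print(round(rate * 100, 4))        # -> 6.1364 (percent)
```

A monthly return would be annualized the same way, with an exponent of 12 instead of 4.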
http://www.wikihow.com/Annualize-a-Quarterly-Return
Git Hooks Users Guide

This document provides a quick reference for using AdaCore's "Git Hooks", which are the scripts used to manage our git repositories when new commits get pushed. These scripts are typically responsible for pre-commit checks and email notifications.

The source for these hooks can be found at:

Contents

- Git Hooks Users Guide
  - Enabling the hooks
  - Minimum Configuration
  - Configuration
  - Pre-commit Checks
  - Retiring Old Branches

Enabling the hooks

The hooks have been designed to work with both bare and non-bare repositories, but typical usage will be with bare repositories. To enable the hooks, an administrator needs to replace the "hooks" directory in your git repository by a link to the /hooks directory from a git-hooks checkout, and configure them as outlined below.

Minimum Configuration

The following config options must be set for all repositories. Updates of any kind will be rejected with an appropriate error message until the minimum configuration is satisfied.

- hooks.from-domain
- hooks.mailinglist

See below for a description of these config options.

Configuration

Configuration File Location

The hooks configuration is loaded from a file named project.config in branch refs/meta/config. This file follows the same format as the various git "config" files (Eg. $HOME/.gitconfig). To update your repository's configuration, you will need to do the following:

- Check the refs/meta/config branch out;
- Modify project.config accordingly;
- Check your change in;
- Push the updated branch.

Configuration Options for General Use

The following config options are available for general use:

hooks.allow-delete-tag (default value: false):
By default, deleting a tag is not allowed. To allow it, set this option to true.

hooks.allow-non-fast-forward:
A comma-separated list of regular expressions matching branch names (NOT reference names; ie 'master', not 'refs/heads/master').
By default, non-fast-forward updates are only allowed on 'topic' branches (ie branches whose name starts with topic/). This option allows us to extend the list of branches where non-fast-forward updates are allowed.

hooks.allow-lightweight-tag (default value: false):
Lightweight Tags (as opposed to Annotated Tags) are really not meant to be shared, and thus the hooks will reject updates that create a new lightweight tag, unless this config option is set to true.

hooks.combined-style-checking (default value: false):
By default, the pre-commit checks are performed on each commit individually. This ensures that none of the commits introduces style violations. But some developers have found that this policy gets in the way more than it helps, and thus requested that the pre-commit checks be performed on the combination of all commits. The general recommendation is to keep commit-by-commit style checks. But to enable combined style-checking, set this config option to true.

hooks.commit-url:
If defined, a URL to be provided at the start of every commit email notification. The following placeholders can be used:
- %(ref_name)s: The name of the reference being changed;
- %(rev)s: The commit's SHA1.
Python string substitution is applied, so % characters must be escaped using %%.

hooks.disable-email-diff (default value: false):
If true, "diffs" are not included in the emails describing each new commit.

hooks.disable-merge-commit-checks (default value: false):
If set to true, disable the pre-commit check in charge of detecting unintentional merge commits. The use of this option is strongly discouraged, as this check helps catch mistakes that are easily made, especially by git users who are less experienced.

hooks.file-commit-cmd:
A command called with each commit triggering a commit notification email.
The purpose of this config variable is to allow the use of an ad hoc script when the filing of commits in bug tracking software cannot be done simply by sending an email. The provided command is called as is, with the same contents as the commit email (minus the "diff" part) passed via the script's standard input.

hooks.from-domain:
The domain name of the email address used in the 'From:' field for all email notifications being sent (the local part of the email address - before the '@' - is simply the user name on the host where the hooks are running).

hooks.mailinglist:
A comma-separated list of email addresses where to send all email notifications. An entry can also be a script instead of an email address, in which case the script will be executed to determine the list of recipients. See "Using a script in hooks.mailinglist" below for more details on how this works.

hooks.max-commit-emails (default value: 100):
This is mostly a safeguard against updates with unintended consequences in terms of the number of emails being sent out. If an update is pushed such that it would trigger a number of commit email notifications greater than the value of this config option, the hooks will reject the update. Typically, this happens when a developer merges a large number of changes from an external source, and then pushes it into an AdaCore repository.

hooks.max-email-diff-size (default value: 100,000):
This config option ensures that patches sent out inside commit email notifications do not exceed a certain size, clogging the mailbox of all recipients. Past a certain size, which is configured via this config option, the diff isn't likely to be useful anymore, and thus gets truncated. A small note is added at the end of the truncated diff to indicate that the truncation took place.

hooks.max-rh-line-length (default value: 76):
The maximum length for each line in the revision log.
If any line exceeds that length, the commit will be rejected. Setting this variable to zero turns this check off entirely. Note: We use a default limit of 76 characters instead of 80 because git commands have a tendency to indent the revision history by 4 characters. Similarly, the git hooks also send emails where the revision history gets indented by 4 characters. This limit ensures that all lines of a commit revision history fit in a standard 80-character wide terminal.

hooks.no-emails:
A comma-separated list of regular expressions matching reference names for which updates should not trigger any email notification. The example below turns off email notifications for all branches whose name starts with "fsf-", as well as the "thirdparty" branch.

no-emails = /refs/heads/fsf-.*, /refs/heads/thirdparty

hooks.no-precommit-check:
A comma-separated list of regular expressions matching reference names for which pre-commit checks should not be enabled. Note that this disables all pre-commit checks, including the revision history checks. It is therefore recommended that this option be used only for branches developed outside of AdaCore. This is typically used for branches tracking external repositories. The example below turns pre-commit checks off for all branches whose name starts with "fsf-", as well as the "thirdparty" branch.

no-precommit-check = /refs/heads/fsf-.*, /refs/heads/thirdparty

hooks.no-rh-style-checks:
A comma-separated list of regular expressions matching reference names for which style-checking of the revision logs should not be enabled. The use of this option is strongly discouraged for branches maintained by AdaCore. Revision history style checks can be disabled for a specific commit by using the sequence '(no-rh-check)' in the revision history. The example below turns revision log style-checking off for all branches whose name starts with "fsf-", as well as the "thirdparty" branch.
no-rh-style-checks = /refs/heads/fsf-.*, /refs/heads/thirdparty

hooks.post-receive-hook:
If defined, this is the name of a script to be called at the end of the post-receive hook. The script is called exactly the same way the post-receive hook is called, and therefore should allow customized post-receive processing. The current working directory (cwd) when this script gets called is undefined, so it is recommended to provide a full path to the script.

hooks.reject-merge-commits:
A comma-separated list of regular expressions matching reference names for which merge commits are not allowed. The example below causes merge commits to be rejected on branch "master" and all branches whose name starts with "gdb-".

reject-merge-commits = refs/heads/master, refs/heads/gdb-.*

hooks.style-checker (default value: cvs_check):
If provided, the program to call when performing style checks. It is expected that this program follows the same calling convention as cvs_check:
- The first argument is a dummy argument mimicking an SVN path, and can be ignored;
- The second argument is the name of the file to be checked, relative to the project's root directory (Eg: path/to/filename.adb).
The style-checking program is called with the Current Working Directory (CWD) such that opening that file can be done using the given (relative) filename. It is recommended that, unless located in a very standard location always included in the PATH (Eg: /usr/bin), the full path to the program be specified.

hooks.tn-required (default value: false):
If set to true, the hooks verify that the revision history of each new commit contains a Ticket Number, and reject the update if that is not the case. This requirement can be bypassed by using the sequence 'no-tn-check' or the word 'minor' in the revision history, in lieu of the Ticket Number.

Configuration Options for Debugging

The following config options are recognized, but are only meant to be used for debugging/testing purposes.
They should not be used during normal operations.

hooks.bcc-file-ci (default value: true):
Setting this config option to false prevents the hooks from Bcc'ing [email protected] in all emails sent. This option should never be used in any official repository, and is only meant for testing of the git hooks outside of the testsuite.

hooks.debug-level (default value: 0):
Setting this debug option to a value higher than zero turns debugging traces on. The higher the value, the more verbose the traces.

Pre-commit Checks

Pre-commit Checks on the Revision History

The hooks verify that the revision histories of all new commits being pushed comply with the rules defined below. This step is skipped for any commit whose revision history contains the '(no-rh-check)' sequence.

Rules enforced on the revision logs:

Empty line after subject line
By convention, the first line of the revision history should always be the subject of the commit. If additional text is required, an empty line should be inserted between the subject and the rest of the revision history.

YES: | The subject of my commit - no other explanation required.

YES: | The subject of my commit
     |
     | This is what this commit does.

NO:  | The subject of my commit
     | This is what this commit does, but an empty line is missing
     | between the subject and this description.

Maximum line length in revision history:
See hooks.max-rh-line-length.

Unedited revision history of merge commits
The purpose of this rule is to prevent a merge commit which was unintentionally created from being pushed to the shared repository. This can easily happen when, for instance, forgetting the --rebase option when doing a git pull. It works by detecting the default text that git uses as the revision history when the merge does not trigger a merge conflict.
When a merge was in fact intentional, the revision history of the merge commit must be manually edited to avoid the "Merge branch '[...]" line that git uses by default as the subject of the merge commit. Doing so will satisfy this pre-commit check. Although strongly discouraged, this check can be disabled by setting the hooks.disable-merge-commit-checks config option to true.

Merge conflict section
When creating a merge commit during which conflicts were discovered and had to be resolved, the default revision history created by git and proposed for edition contains a section at the end that lists the files inside which merge conflicts were found. We do not want this section in the revision history of our commits, so the hooks verify that the author of the commit remembered to delete it.

Missing Ticket Number
This check is enabled only if the hooks.tn-required config option is set. For such repositories, the hooks verify that the revision history contains a Ticket Number. This requirement can be bypassed via the use of the word "Minor" (Eg. "Minor reformatting"), or via the sequence "no-tn-check". Casing is not taken into account for this rule.

Filename Collisions Pre-commit Check

On Operating Systems such as Darwin or Windows, where the File System is typically case-insensitive, having two files whose names only differ in casing (Eg: "hello.txt" and "Hello.txt", or "dir/hello.txt" vs "DIR/hello.txt") can cause a lot of confusion. To avoid this, the hooks will reject any commit where such a name collision occurs. This check is disabled on branches matching the hooks.no-precommit-check config value, or if a valid $HOME/.no_cvs_check file is found (see below).

Pre-commit Checks on the Differences Introduced by the Commit

This is the usual "style check" performed by the cvs_check program, maintained by the infosys team. Note that the program verifies the entire contents of the files being modified, not just the modified parts.
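As an illustration only (this is not the hooks' actual code), the first two revision-log rules described above can be sketched as a small Python function; the 76-column limit mirrors the hooks.max-rh-line-length default:

```python
def check_revision_log(log, max_line_length=76):
    """Return a list of problems found in a commit's revision log."""
    if "(no-rh-check)" in log:
        return []  # this sequence skips the revision-history checks
    problems = []
    lines = log.splitlines()
    # Rule: an empty line must separate the subject from the body.
    if len(lines) > 1 and lines[1].strip():
        problems.append("missing empty line after subject line")
    # Rule: no line may exceed the configured maximum length
    # (a value of zero turns this check off).
    for lineno, line in enumerate(lines, 1):
        if max_line_length and len(line) > max_line_length:
            problems.append("line %d exceeds %d characters"
                            % (lineno, max_line_length))
    return problems

# The "NO" example from the rules above fails; the "YES" examples pass.
print(check_revision_log("The subject of my commit\n"
                         "This is what this commit does."))
# -> ['missing empty line after subject line']
```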
Controlling the Pre-commit Checks

Despite the use of very similar names, note the fairly important difference in scope between the hooks.no-precommit-check config option and the no-precommit-check git attribute! (see below)

By default, the pre-commit checks are turned on for all commits of all branches. The following controls are available to tailor the hooks' behavior regarding these checks:

- The hooks.no-precommit-check config option can be used to turn pre-commit checks off entirely for a given branch. This option is typically used for branches tracking other branches from a third-party repository.
- If the (no-precommit-check) string is found anywhere in the revision log of a commit, pre-commit checks are also turned off entirely, but only for that commit.
- The $HOME/.no_cvs_check file, if less than 24 hours old and located on the machine running the hooks, will also turn all pre-commit checks off entirely.
- Setting the hooks.combined-style-checking config option tells the hooks that the second part of the pre-commit checks (operating on the differences introduced by the commits) should only check the final result. Thus, if a user pushes an update that introduces two new commits C1 and C2, it does not matter if C1 contains a style-check violation as long as the violation is corrected in C2. It is important to note, however, that the pre-commit checks on the revision histories are still performed on a commit-per-commit basis. Otherwise, it would be possible to push a commit missing a Ticket Number in repositories that are configured to require one.
- The no-precommit-check git attribute. Setting this attribute for any given file disables the pre-commit checks for this file. See git --help attributes for more info on how to set those attributes. Normally, it is preferred that these attributes be maintained via a .gitattributes file which is checked in the repository. This makes everything properly tracked.
But there may be situations where this is not convenient (Eg: trying to avoid a local change in a branch tracking another branch from a third-party repository). For those situations, it is possible to define the attribute in the GIT_DIR/info/default_attributes file inside the shared repository. The downside of this approach is that this file is not tracked.

Email Notifications

In general, developers are notified via email whenever a change is pushed to the repository. This section describes the policy used to determine which emails are being sent.

The Summary Email

The purpose of this email is to give a quick overview of what has changed.

Composition

The Summary Email is composed of two sections:

- A short description of what has changed. For instance, if a tag was created, it will explain what kind of tag was created, what the associated revision log was, and what commit it points to.
- Optionally, a list of commits which have been lost and/or added.

Sending Policy

The general policy is to send the Summary Email for all updates in order to inform developers about the change. However, there are a number of situations where the email would add little information to the Commit Emails already sent out:

- Branch updates: If the update does not cause any commit to be lost, nor does it include commits from a branch matching the hooks.no-emails configuration, then the email is superfluous and therefore not sent.
- Notes updates: Notes are really a special case of branch handling, where only fast-forward updates are allowed, and where the hooks.no-emails configuration is ignored. So the Summary Email is also never sent.

Filing Policy

Normally, this email is not used for filing purposes (ie, a copy is not even sent to file-ci@), as we are more interested in filing the individual commits than the summary.
However, it is interesting to file those emails in the following cases:

- tag creation
- tag update

In those cases, the revision log attached to those tags may contain a TN, which means the event deserves filing. In either case, a Diff: marker is always added before the section summarizing the list of commits that were lost and/or added, making sure that this part of the email never gets filed, as the commits themselves are already getting filed.

The Commit Emails

Composition

The subject of that email is the commit's subject, and its contents are roughly what the git show command would display.

Sending Policy

The Commit Email is always sent, unless the commit is found to exist in a branch matching the hooks.no-emails configuration.

Filing Policy

This email is always bcc'd to file-ci@. Note that this list must not appear in any explicit To:/Cc: header, as we want to prevent any replies from being sent there.

Using a script in hooks.mailinglist

For projects that share the same git repository but want separate email addresses for email notifications, it is possible to use a script in place of an email address in the hooks.mailinglist config. Script entries are identified by the fact that the entry is an absolute filename, and that this filename points to a file on the server which is executable. Note that the term "script" is used loosely here as, although we expect most users of this feature to indeed use a script, a compiled program would work just as well.

Script Calling Convention

This script is called by the hooks as follows:

- The list of files being changed is passed via standard input, one file per line (this list may be empty);
- NO ARGUMENT is currently passed on the command line, but we might use that in the future to provide info such as the reference name, for instance.

The hooks expect the script to return the list of email addresses on standard output, one email address per line.
By convention, we expect the scripts to return all email addresses when the given list of files being changed is empty. This is useful for "cover" emails that the hooks want to send to everyone.

Script Email Expansion Policy for Commit Emails

For commit emails, the hooks will call the mailinglist script with the list of files being changed by the commit, and let the script decide who should be notified based on that list of files.

Script Email Expansion Policy for Git Note Update Emails

Git Note Update emails are similar to Commit Emails, and therefore the distribution list will be computed based on the list of files being changed. The only difference is that the list of files is going to be the list of files in the commit being annotated, not the note's commit.

Script Email Expansion Policy for "Cover" Emails

The expansion policy for cover emails is currently very simple: send to everyone.

Rationale:

- Branch updates:
  - For branch creation and deletion, it is easy to see that there is little way for us to determine who is interested in that branch update, unless we provide the name of the branch to the script. We might do that at some point, but we keep things simple for now.
  - For branch updates, if we have a cover letter, it means we either have commits already in another branch, or we are losing commits. Either should be relatively rare, since merges are discouraged and non-fast-forward changes are forbidden. So it seems simple enough to send to everyone.
- Tag updates: Although it might be tempting to say that the notification should be sent to the same list as the target commit's, this does not work: it is entirely possible that a tag for a given project points to a commit that only touches files for another project (e.g. a branchpoint tag, for instance). So tags should really be treated the same way as branches.
Retiring Old Branches

The recommended method for retiring a branch which is no longer useful is the following:

- Create a tag referencing the tip of the branch to be retired. The tag name should be retired/<branch-name>, where <branch-name> is the name of the branch to be retired.
- Push this tag to the official repository.
- Delete the retired branch in the official repository.

By using the naming suggested for the tag, the hooks will ensure that the branch never gets accidentally recreated. This would otherwise happen if a developer did not know that the branch was deleted, still had that branch locally in his repository, and tried to push his changes as usual. The use of the retired/ namespace for those tags also helps standardize the location where those tags are created. And the use of a tag allows everyone to determine the latest state of that branch prior to its retirement.
http://sourceware.org/gdb/wiki/GitHooksUsersGuide?action=diff&rev1=1&rev2=13
Hey guys, I'm trying to create a timer in C++ that starts at -15, and every second the program displays the next number. I have the start, but what I want is a breaker button: while it's counting, I want to be able to press either the space, enter, 1, or 0 key on the keyboard to break the timer and then display the timer's current value.

Here's what I have so far. (Note: my script is far longer than this, so some of the #includes you see are necessary elsewhere.)

    #include <iostream>
    #include <stdlib.h>
    #include <cmath>
    #include <windows.h>
    #include <mmsystem.h>
    using namespace std;

    int main()
    {
        cout << "\n\nPress 1 Then Enter To Begin: ";
        int anykey;
        cin >> anykey;
        if (anykey)
        {
            int timer = -15;
            while (timer < 45)
            {
                cout << ++timer << ", ";
                Sleep(1000);
                int key;
                cin >> key;
                switch (key)
                {
                    case 1: cout << "Your Time Was: " << timer;
                }
            }
        }
    }

If anyone can help, please do :-P Thanks!

<< moderator edit: added
https://www.daniweb.com/programming/software-development/threads/20938/help-with-a-timer-and-stopping-it
I am not even sure how to word this right. I have a Universal Windows Application that is currently on the way. I write some unit tests using MSTest along the way, and I ran into this problem. I have a method that uses a WriteableBitmap from the Windows.UI.Xaml namespace to convert some of the image […]

Month: February 2016

Is there any way to set up automatic/smart string interpolation in editor? (Resharper or Visual Studio)

I was just thinking about how great it would be if this happened for me automatically: I go to type a string, so I hit the double quotes key. What pops up is: $"|" (the | is supposed to represent the cursor) Then I type the string: $"You have {newMessages.Count} new messages!";| (again, | is […]

Required fields in a class

I'm working with a request my boss just gave me, and was looking for some outside input. His aim is to consolidate all committing of parts on a work order to go through one consolidated method… this, in essence, is fine. So we start listing off each of the required parameters for this method, and […]

[Help] Get the instance name of an object

SNMP trap receiver to csharp HELP

[Help] [C#] More JSON.NET

Another question relating to JSON.NET. The following link contains the source code, excluding my API key, for testing purposes: With the API key, the following output of this code is fine. Exactly like it should work. {"id":11,"key":"MasterYi","name":"Master Yi","title":"the Wuju Bladesman"} […]

[Help!] C# – Json.NET (Newtonsoft)

I'm using Newtonsoft's Json.NET for C#. Basically I'm trying to write software which deserializes information from Riot's API, for example an item. I can't get it to work. (I will post the source later, because right now I'm at work.) I want to catch all the information, but I get the error: "object reference not set […]
http://howtocode.net/2016/02/
If you haven't done so already, be sure to visit the Wiki Portal to read about how the wiki works, especially the Ogre Wiki Overview page.

First Steps

In this tutorial, you will create and run a barebones Ogre program. As you might guess, this program is very simple and teaches you the very basics of what you'll need to do in each Ogre program. I recommend that you go over this tutorial until you understand everything in it. I will teach you about Ogre's core objects, but I will not teach you how to show things on the screen just yet. I believe that going that far in this tutorial would compromise the amount of space I could devote to ensuring that you first understand the essentials.

N.B. This tutorial does not show you how to get entities and other things on screen, but merely shows you how to get a very basic barebones application running without the use of any frameworks.

Let's get started! Unlike all of the other demos and tutorials out there, this one will teach you how to make an application without using the ExampleApplication framework. I believe the pre-made framework doesn't give the user a good idea of what they need to do in order to get Ogre running.

Part 1: Project Setup

The first thing you need to do is start a new project. I created a project in a subfolder (Tutorial_2) of the folder that contains our Ogre distribution (in my case, C:\Programming\Tutorials\ and \User\robo\Desktop\Tutorials\ for PC and Mac respectively). Create that folder, and then open your IDE. Here the tutorial branches into two different sections, one to set up a project in Visual C++ and one to set up your project in XCode.

Visual C++

Click "File->New->Blank Solution". Now enter the solution's name - Ogre_Tutorial. In the Location box, press Browse and then navigate to the folder you made for your project. Press OK. If you look at the Solution Explorer, "Solution 'Ogre_Tutorial' (0 projects)" should now be present. Now we need to create our project.
Right click on the "Solution 'Ogre_Tutorial' (0 projects)" text and select "Add->New Project". A box should pop up. In the 'Project Types' box select 'Visual C++ Projects' / 'Win32'. Select 'Win32 Project' in the right-hand pane. In the Name box, type Ogre_Tutorial. In the Location box, browse again to the C:\Programming\Tutorials\Tutorial_2 folder. Click OK. Another box should show up, with two links on the upper left side. Click the bottom link, "Application Settings". Leave the Application Type on 'Windows Application', and check the 'Empty Project' checkbox below.

Add an empty C++ file to the project called main.cpp. This file must exist before editing project properties in order for the 'C/C++' menu to appear!

Now that the project is created, we have a lot of settings to define. Click "Project->Properties" in the top toolbar. Now you need to configure some settings. The paths specified in these settings are appropriate to the configuration established in this and the previous tutorial (Newbie Tutorial 1).

If you subsequently wish to compile your code in Release mode instead of Debug:

- change the 'Configuration' listbox at the top of the Property Pages to 'Release';
- input the above settings again in the Property Pages, except for the following: in C/C++'s 'Code Generation' section, for 'Runtime Library' change 'Multithreaded Debug DLL' to 'Multithreaded DLL'; in Linker's 'Input' section, change OgreMain_d.lib to OgreMain.lib; change Linker's 'Additional Library Directories' filepath to end in 'Release' instead of 'Debug'.
Part 2: The Code

    #include <iostream>
    #include <sstream>

    #include "Ogre.h"
    #include <OIS/OIS.h>

    #if OGRE_PLATFORM == OGRE_PLATFORM_APPLE || OGRE_PLATFORM == OGRE_PLATFORM_IPHONE
    #include <macUtils.h>
    #endif

    int main()
    {
        std::string resourcePath;
    #if OGRE_PLATFORM == OGRE_PLATFORM_APPLE
        resourcePath = Ogre::macBundlePath() + "/Contents/Resources/";
    #else
        resourcePath = "";
    #endif

        Ogre::Root* root = new Ogre::Root(resourcePath + "plugins.cfg",
                                          resourcePath + "ogre.cfg", "Ogre.log");
        if (!root->showConfigDialog())
            return -1;

        // Parse resources.cfg and register every resource location it lists.
        Ogre::ConfigFile cf;
        cf.load(resourcePath + "resources.cfg");
        Ogre::ConfigFile::SectionIterator seci = cf.getSectionIterator();
        while (seci.hasMoreElements())
        {
            Ogre::String secName = seci.peekNextKey();
            Ogre::ConfigFile::SettingsMultiMap* settings = seci.getNext();
            Ogre::ConfigFile::SettingsMultiMap::iterator i;
            for (i = settings->begin(); i != settings->end(); ++i)
            {
                Ogre::ResourceGroupManager::getSingleton().addResourceLocation(
                    i->second, i->first, secName);
            }
        }
        Ogre::ResourceGroupManager::getSingleton().initialiseAllResourceGroups();

        Ogre::RenderWindow* window = root->initialise(true);
        Ogre::SceneManager* smgr = root->createSceneManager(Ogre::ST_GENERIC);

        // Hand the render window's handle to OIS and create a keyboard object.
        OIS::ParamList pl;
        size_t windowHnd = 0;
        std::ostringstream windowHndStr;
        window->getCustomAttribute("WINDOW", &windowHnd);
        windowHndStr << windowHnd;
        pl.insert(std::make_pair(std::string("WINDOW"), windowHndStr.str()));
        OIS::InputManager* im = OIS::InputManager::createInputSystem(pl);
        OIS::Keyboard* keyboard = static_cast<OIS::Keyboard*>(
            im->createInputObject(OIS::OISKeyboard, false));

        while (1)
        {
            Ogre::WindowEventUtilities::messagePump();
            keyboard->capture();
            if (keyboard->isKeyDown(OIS::KC_ESCAPE))
                break;
            if (root->renderOneFrame() == false)
                break;
        }

        im->destroyInputObject(keyboard);
        im->destroyInputSystem(im);
        im = 0;

        delete root;
        return 0;
    }

Part 3: Code Breakdown

Alright, let's break down this code step by step to see just what's going on.

    #include <iostream>

    #include "Ogre.h"
    #include <OIS/OIS.h>

    #if OGRE_PLATFORM == OGRE_PLATFORM_APPLE || OGRE_PLATFORM == OGRE_PLATFORM_IPHONE
    #include <macUtils.h>
    #endif

Pretty basic setup here: we have to include "Ogre.h" and <OIS/OIS.h> to use Ogre and OIS, and iostream is there for some basic output stuff. Finally, in order to support OS X and iPhones, we have to load up the macUtils.h file (included in Ogre) in order to do some processing stuff later on in the tutorial. (N.B. this code is cross-platform and works on both Windows and OS X. It has not been tested for use in Linux.)

    int main()
    {
        std::string resourcePath;
    #if OGRE_PLATFORM == OGRE_PLATFORM_APPLE
        resourcePath = Ogre::macBundlePath() + "/Contents/Resources/";
    #else
        resourcePath = "";
    #endif

Now we start our main function. Before we get into any of the nitty-gritty Ogre stuff, we need one housekeeping variable: resourcePath.
This variable is only used on Macs to compensate for the way that app bundles are structured. If you're not looking to make OS X apps, you can ignore this variable. We set this variable to the app path plus /Contents/Resources/. If you're familiar with how apps work on OS X and in XCode, all of your data is stored within the app bundle, so there's no root folder like in Windows and Linux. Thus, we need to tell Ogre where to look for resources later on.

    Ogre::Root* root = new Ogre::Root(resourcePath + "plugins.cfg",
                                      resourcePath + "ogre.cfg", "Ogre.log");
    if (!root->showConfigDialog())
        return -1;

Now we create and initialize our Ogre::Root. Ogre::Root is the core of the Ogre engine. All of what Ogre does is initiated through the Root. Needless to say, it's an important thing. In order to create it, we create a pointer to an Ogre::Root object and initialize it. The constructor for Root takes three parameters: a plugins.cfg file, an ogre.cfg file, and a log file.

The plugins.cfg file lists the different plugins for Ogre to load when the Root is initialized. My plugins.cfg file looks like this:

    # Defines plugins to load

    # Define plugin folder
    PluginFolder=

    # Define plugins
    Plugin=RenderSystem_GL
    Plugin=Plugin_CgProgramManager
    Plugin=Plugin_OctreeSceneManager

The ogre.cfg file manages the device settings. Luckily, we won't have to manually enter this; the config dialog will take care of setting it up for us. Finally, the Ogre.log file saves the log from the last run of Ogre. It's very helpful for debugging and troubleshooting. The next line (root->showConfigDialog()) shows the configuration dialog, which will allow you to set various device settings, such as fullscreen, vsync, and the like.

    Ogre::ConfigFile cf;
    cf.load(resourcePath + "resources.cfg");
    Ogre::ConfigFile::SectionIterator seci = cf.getSectionIterator();
    while (seci.hasMoreElements())
    {
        Ogre::String secName = seci.peekNextKey();
        Ogre::ConfigFile::SettingsMultiMap* settings = seci.getNext();
        Ogre::ConfigFile::SettingsMultiMap::iterator i;
        for (i = settings->begin(); i != settings->end(); ++i)
        {
            Ogre::ResourceGroupManager::getSingleton().addResourceLocation(
                i->second, i->first, secName);
        }
    }
    Ogre::ResourceGroupManager::getSingleton().initialiseAllResourceGroups();

Now we get into some gritty stuff. In order for Ogre to be able to see our data, we have to tell it where the data is hiding. In order to do this, we set up a resources.cfg file.
This file has a structure that helps Ogre deal with different resource groups and locations throughout your project tree. My resources.cfg file looks like this:

    # Resource locations to be added to the default path
    [General]
    FileSystem=./Contents/Resources/media

There are a few things to note about this file. Comments are denoted with a # sign and are single-lined. Resource groups are wrapped in [] (the [General] is the only resource group I have set up). These resource groups are useful for deferred loading. For example, you could load the General group for your main GUI and main menu data; then, as you load a level, that level could have its own separate resource group that could be loaded later! Cool stuff! Finally, there are two different types of file systems that Ogre can use out of the box. First off is the FileSystem type. These FileSystems are just standard folders, either relative to the working directory or as an absolute path. Second, Ogre can load standard zip archives (e.g. Zip=./archive.zip).

Once this file is loaded, Ogre does some parsing in order to separate each line into three keys: the file system name, the type of file system, and the resource group. Once this parsing is done, the resource location is added to the resource group manager. Finally, after the entire file is parsed, we initialize all the resource groups we loaded. (N.B. for more advanced projects, you won't want to do this, but instead load only your core data.)

    Ogre::RenderWindow* window = root->initialise(true);
    Ogre::SceneManager* smgr = root->createSceneManager(Ogre::ST_GENERIC);

Next up, we have to create some stuff to make sure we can actually render some graphics! First of all, we have to open a render window. We do this by calling root->initialise(true). This function returns a RenderWindow* and asks if we want to autocreate our window. For the most part, we will want to create one automatically. Second, we make a scene manager.
The Scene Manager is one of the critical aspects of any Ogre program. In order to render anything, we need some sort of Scene Manager to tell Ogre how to render. The scene manager takes care of most graphical aspects, including culling, world geometry, and node placement. We'll delve more into this in later tutorials. For now, we choose the Generic scene manager because it's generic and suits our purposes well!

    OIS::ParamList pl;
    size_t windowHnd = 0;
    std::ostringstream windowHndStr;
    window->getCustomAttribute("WINDOW", &windowHnd);
    windowHndStr << windowHnd;
    pl.insert(std::make_pair(std::string("WINDOW"), windowHndStr.str()));
    OIS::InputManager* im = OIS::InputManager::createInputSystem(pl);
    OIS::Keyboard* keyboard = static_cast<OIS::Keyboard*>(
        im->createInputObject(OIS::OISKeyboard, false));

Now, we need some form of input in order to tell our application when to quit. To do this, we use OIS, the input system included in Ogre. Basically, this code grabs the window handle from Ogre and then pushes it to OIS to create an input system. Finally, we create a keyboard object so we can get the state of the keyboard. For more information on using OIS, refer to the OIS Wiki pages.

In terms of initializing Ogre, we're finished! That is all you need to set up a bare-bones Ogre application! This application is pretty useless though, because it doesn't actually render anything, so let's make a render loop. This loop serves as the main logic center of the application, and it's here that all of the rendering, event handling, and logic of your program will go. Let's take a look:

    while (1)
    {
        Ogre::WindowEventUtilities::messagePump();
        keyboard->capture();
        if (keyboard->isKeyDown(OIS::KC_ESCAPE))
            break;
        if (root->renderOneFrame() == false)
            break;
    }

For starters, we create an infinite loop (while (1) { }) which will go until we break out of it. Once inside the loop, we need to do some housekeeping stuff to keep Ogre happy. Because we are not using root->startRendering(), we must do this ourselves, but fortunately it's easy. Ogre::WindowEventUtilities::messagePump() offers a cross-platform way to check and pump system messages through the window. Without this line, we wouldn't be able to minimize or pause our application when we move away from the window. Next, we capture the state of the keyboard, so we can check its state.
The next line does just that: it checks the keyboard state to see if the user pressed the Escape key. If they did, the loop breaks and we exit the program. Finally, if the user didn't press Escape, we render a single frame by calling root->renderOneFrame(). If this returns false, we break and quit the program. That's it for the render loop! This is a really basic loop, but it does the trick and gets us rendering with the least amount of effort! All that's left is to do some final housekeeping before we quit the program!

    im->destroyInputObject(keyboard);
    im->destroyInputSystem(im);
    im = 0;

    delete root;
    return 0;
    }

First, we destroy the keyboard object and then the input system. Finally, we delete the root object in order to shut Ogre down. Then we're done! We return 0 and the program exits!

Part 4: Conclusion

I hope that this tutorial has helped you get a grip on Ogre's design and core objects.

Contributors to this page: jacmoe. Page last modified on Wednesday 09 of June, 2010 15:53
http://www.ogre3d.org/tikiwiki/Basic+Ogre+Application
C++0x, scoped enums

Introduction and good usage patterns for scoped enumerations

Enumerations, commonly called enums, are constructs in a programming language that allow users to group a set of values under one group and assign a name to each value. Sometimes, you want to represent several states or values that are static and constant. In that case, it's better to enumerate those states, assign them some integral values to make them comparable, and establish relationships between their values if necessary, such as a = 1 and b = a+2. That's what C enums offer you: a better, shorter way of creating a set of #define values that are not globally visible at file scope. However, you can further improve on this enumeration representation, and you'll see in later sections how C++ enums and scoped enums achieve that. But first, it will help to review the history of enums.

History of enums in C and C++

It all started, as most of C++ did, in C, as C enums. And before C enums came into existence, enumerating a set of numeric values was accomplished with plain #define directives. Figure 1 shows the time line of enumerations in C/C++ and an example of enumerating four values (top left, top right, bottom left, bottom right).

Figure 1. C enumerations time line

Notice that we added the direction enum, but we renamed its enumerators so that they're different from those of the windowCorner enum to avoid conflicting names (enumerators are injected into the enclosing scope, so TOP_LEFT and TOPLEFT are in the same scope and, therefore, cannot have the same name).

Enums in C and C++

A C/C++ enumeration is like a struct with static constant integral members (Figure 2), but the members are injected into the enclosing scope of the struct. In C++, you cannot initialize or assign to an enum variable any value other than one from an enumerator or variable of the same enumeration type. But in C, it is allowed, which makes C enums less type-safe than C++ enums.

Figure 2. Struct with static const data members simulating enums

It is preferable not to allow members of one struct to be compared to members of a different struct. This is because when you enumerate a set of values, you're creating a unique type with members that are comparable to each other but not comparable to external values, no matter how similar their representation is. Of course, in many situations, we care about the integral value, but it should never make sense to compare two enumerators of different enumerations. For example, one enum can represent colors and another can represent days of the week. Even though both enums have integral values that make them mathematically comparable, semantically, such a comparison would be invalid. It is preferable that enums have such a property.

The same argument could be made for using enum variables or constants in a context where a different enum or an integral type is expected: it makes no sense. For example, a foo function taking an int should not be called with an argument of the enum type. It might be desirable in some cases that this is allowed, but in general, you want your enum to hold certain semantics, not just get converted into a plain boring integer.

Unfortunately, none of those enumeration mechanisms offer these two guarantees. It is obvious that with #defines you are dealing with integers directly, and there are no types other than ints, so no restrictions exist. For C and C++ enums, you are allowed to compare, for example, TOP_LEFT with TOPRIGHT (with reference to Figure 3), because the two enumerators are converted to an integral type and compared as integrals. You are also allowed to use an enum in any integral context, thus wherever an integral type is expected (see Figure 3).

Figure 3. Sample code for non type-safe use of C/C++ enums

Notice that we had to rename the enumerators of enum direction, so that they are different from the enumerators of enum windowCorner.
The reason for this is that all enumerators belong to the scope enclosing the enumeration. So if the enum was defined inside a class, the enumerators become class members, and if it is defined in global scope then the enumerators become globally visible. This is an inconvenience, especially if the enums are declared in namespace or global scope, where you might have many of them.

There is one more issue with our enums: size and signedness. Is it coherent across different compilers? No, it's not. According to the C++ standard, the underlying type of an enum is semi-defined: the compiler can choose which integral type to use to represent the enum, so the type may be smaller than an int if all enumerator values can be represented in an int. Otherwise, it can be any other, bigger type. The lack of a well-defined underlying type leads to the inability to forward declare an enum, which is weird, because structs and unions can be forward declared without knowing anything about them. The issue lies in the way enums are handled and passed.

Forward declaration of enums helps further separate the interface from implementation (as do all other forms of forward declaration). Other benefits of forward declaration are decoupling compilation units that were coupled by the enum definition, which reduces total compilation time, and hiding implementation details from the user. Having the enum definition in every compilation unit means that the whole enumeration is visible when it really shouldn't be. It would be beneficial if we could forward declare enums, though. Let's analyze the code example in Figure 4. The program consists of three files:

- interface.h
- implementation.C
- usage.C

There are two versions of the implementation and interface:

- Version 1 is how the code looks with C++03
- Version 2 is what we would like it to look like, and is similar to what it looks like with C++0x
In Version 1, the definition of enum E is part of the interface directly (or indirectly, if we wanted to use #includes), and there's no escaping that. This means that if the definition of enum E changes, usage.C has to be recompiled, but it really shouldn't, because it doesn't depend on the definition of enum E. Now, if we were able to write code similar to Version 2, the definition of E would be independent of interface.h, and thus of usage.C. In addition to decoupling usage.C and interface.h from implementation.C, we also have hidden the definition of enum E from the user, which is an ability that library developers want very much.

Figure 4. Decoupling interface from implementation by using forward declaration

Features of scoped enums

Scoped enums solve most of the limitations incurred by regular enums: complete type safety, well-defined underlying type, scope issues, and forward declaration. The syntax for scoped enums is similar to regular enums (which are now called unscoped enums), but you need to specify the class or struct keyword after the enum keyword. You can also specify the underlying type using a colon followed by an integral type.

Note: Enums are not integral types, so you cannot specify one enum as the underlying type of another enum.

Figure 5 shows enum examples.

Figure 5. Scoped and unscoped enums

You get type safety by disallowing all implicit conversions of scoped enums to other types, thereby restricting any kind of non-arithmetic operation (assignment, comparison) on enums to just the set of enumerators and enum variables of the same enumeration, and disallowing any arithmetic operation on them. You can still use scoped enums in places such as switch statements, but you would be limited to maintaining a uniform enum type in the switch condition and case labels. See the code example in Figure 6 for a better understanding of this.
Another feature of scoped enums is the introduction of a new scope, called the enumeration scope, which basically starts and ends with the opening and closing braces of the enum body. Therefore, scoped enumerators are not injected into the enclosing scope anymore, thus saving us from the name conflicts of regular enums. But now there's a slight inconvenience in that you always have to refer to a scoped enumerator with an enumeration-qualified name. For example: a (before), E::a (now). Figure 6 demonstrates the scoping rules.

Next is the underlying type. Scoped enums give you the ability to specify the underlying type of the enumeration, and for scoped enums, it defaults to int if you choose not to specify it. An unscoped enum with an omitted underlying type will simply behave like regular C++03 enums, with an implementation-defined underlying type. When the underlying type is specified explicitly or implicitly (for scoped enums only), it is called fixed; otherwise, it's not fixed. This means that regular enum syntax does not have a fixed underlying type (see Figure 5).

Figure 6. Conversion and scoping of scoped enums

Finally, you can address the last issue, which becomes solvable as soon as the underlying type issue is resolved: forward declaration of enums. Basically, any enum with a fixed underlying type can be forward declared. As mentioned above, forward declaration has lots of benefits, such as decoupling code and hiding the implementation of an enum when it's not part of a user's problem space. To forward declare an enum, you just declare it without the body section, in a way that fixes the underlying type (Figure 7 illustrates the rules). You can re-declare it multiple times, but all declarations should be consistent with prior declarations, so they should have the same underlying type and be of the same kind (scoped or unscoped).

Figure 7. Forward declaration rules for unscoped enums

Good usage patterns

Enums are usually used to represent states, types, kinds, conditions, and anything that is a set of members with no particular functionality other than to be a unique collection of elements. Regular enums offer a bit of type safety, specifically during assignment, but it all goes bad when you try to compare or use an enum in an integral context. There are many patterns for enum usage, so this article discusses some that exist with regular enums and some new ones that can be used only with scoped enums.

Class inheritance, the kind enum

Suppose that you have a parent class called Widget and a bunch of child classes, such as Button, Label, and so on. A common way to identify an object that is being pointed to by a Widget* pointer is to have an enumeration -- call it enum kind -- in the parent class and have one enumerator for each type of child. Then you add an enum kind member variable in the parent so that every child sets that member to its designated enumerator (see Figure 8).

Figure 8. Using enums to identify objects of derived types

Then all you have to do is look at that type member and identify the real type of the object, based on the enum value. This is all fine until you have a similar set of classes that inherit from each other, and they use the same enum mechanism to hold the type of the child, yet you're using both enum types in the same code. Other than assignment, all other operations that involve implicit conversions are not type-safe. With scoped enums, that would be different, because you will be forced to compare and assign from the same set of enumerators, and you are not allowed to use enums in an integer context without explicit conversions. If you try to do otherwise, such as comparing two enums of different enumerations, or you use enums where an implicit conversion to another type is needed, you will get compile-time errors.
With regular enums, those logical mistakes will pass silently.

Type safe bool

The Boolean (bool) type in C++ is not type-safe, because it can be converted to and from any other integral type. (Actually, it's not type-safe because it is an integral type.) Sometimes, you need a type-safe bool to represent, for example, critical conditions that will allow only explicit manipulation, so they cannot be initialized, assigned, or compared to any other value of a different type. You can always achieve that with a class. For instance, in "A Typesafe Boolean Class" (see Resources), the author proposes a bool type for which he can control its conversion parameters, meaning what it can be converted from and to. That way might be flexible but very cumbersome to maintain, at least for beginners. With scoped enums, you can create a type-safe Boolean condition type the way that Figure 9 shows.

Note: This use case was first suggested by Chris Bowler, XL C++ front-end compiler developer, IBM Canada.

Figure 9. Type-safe bool

Its use is much safer compared to C++'s built-in Boolean type. Observe the two examples in Figures 10 and 11. In Figure 10, we use three Boolean variables for the conditions. In Figure 11, we represent the three conditions by three scoped enums. The initiate() function does some reasoning, maybe altering the values of x, y, and z, and then passes them to handle() twice. The first call misplaces the last two arguments, and the second call omits the int argument and uses the default argument for the fourth parameter. Given that bool can be converted to int, and vice versa, the example in Figure 10 passes silently, because the compiler finds the necessary implicit conversions and integral promotions to change the function call parameters to the right type: y is converted to int and 3 is converted to bool.
However, the example in Figure 11 will result in a compile-time error, because a scoped enum cannot be converted to int, and an int cannot be used to initialize a scoped enum.

Figure 10. Bool type as condition type

Figure 11. Scoped enum as condition type

You could argue that you can improve the bool version by using unscoped enums (the old C++03), but you would still fail to detect the implicit conversion of y to int. Using unscoped enums has another disadvantage related to their scoping problem. Notice how the true or false enumerators of each condition are called simply True/False, even though they are all defined in the same scope. You cannot do that with unscoped enums, because they are injected into their enclosing scope, and you would have name conflicts if two injected enumerators had the same name.

Another thing worth mentioning is the clarity of the functions' interface. In Figure 11, you know what each parameter indicates without using expressive variable names. You can go further and notice that if the functions' declarations resided in a separate .h file, then (a) you might need to go to that file to understand what the bool x and bool y stand for, and (b) the declaration might be missing variable names. But with scoped enums, the description of each parameter is embedded in the enum name, which you are forced to mention in both declaration and definition. Of course, you can still fail to give a proper name to your scoped enum, but then you're just purposely hurting yourself; it's like having a program with class names such as A1, A2, A3.

Type-safe state representation

States appear a lot in C++ programs. Any time that you encounter situations where there is a set of entities that you need to represent or enumerate, you would use enums. A common programming pattern is passing contextual information through a hierarchy of function calls.
Let's say you have a set of functions that work on some part of your problem space, and there's dynamic information that you want to maintain throughout the functions' execution. One way is to make a globally accessible object that acts as a database maintaining the dynamic info, but we want to avoid global variables because they add coupling to the code. Another way would be to create a Context class that holds the dynamic information, and pass a reference or pointer to a Context object in the function calls. Yet another way is to pass each piece of information explicitly in the function calls. This would be better than creating a Context object if the information you're passing is small. It is quite frequent that the dynamic context being passed is of Boolean or enumerated type. Here is a perfect opportunity to use our type-safe bool and type-safe enum.

Figure 12 demonstrates how scoped enums can be used to create a safer program that has more control over the execution process through a tight grasp (compile-time detection) of the conditions that define the execution path. The contextual information in this scenario is the action trigger. For simplicity, we have two actions, but you can imagine that the size of the information can be bigger in real-life scenarios. Similar to the previous examples, by using enums for the conditions, we're ensuring that both the declaration and definition of the functions indicate what every parameter stands for. At each function call, the arguments clearly state what the value of the condition is. Misplacing parameters or using incompatible types is an error caught at compile time, rather than never being caught if bools were used. The context information can be safely processed and manipulated in a consistent fashion, so there are no unsafe comparisons nor silent, implicit conversions.

Figure 12. Context passing

Finally, this code is more portable, and the enums can be forward declared, because the underlying type of each enumeration is fixed. The power of scoped enums is clearly demonstrated by clearer code, type-safe conditions, type-safe enums, and portable code.

Downloadable resources

Related topics

- Check these sources for more information related to this article:
  - "A Typesafe Boolean Class for C++" by Martin Buchholz (2003)
  - "More C++ Idioms: Type Safe Enum" on WikiBooks
  - "Strongly Typed Enums (Revision 3)", ISO/IEC JTC1/SC22/WG21 D2347 = ANSI/INCITS J16 07-0207
- Find out more about XL C/C++ for AIX and Linux:
  - Visit the XL C/C++ for AIX product page.
  - Visit the XL C/C++ for Linux product page.
- Subscribe to the developerWorks weekly email newsletter, and choose the topics to follow.
- Get the free trial download for XL C/C++ for AIX.
- Get the free trial download for XL C/C++ for Linux.
- Download a free trial version of Rational software.
- Evaluate other IBM software in the way that suits you best.
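Figure 12 itself is not reproduced in this extraction. A rough sketch of the context-passing idea follows; the Trigger type and its enumerators are invented, and initiate() merely echoes the function named earlier in the article:

```cpp
#include <cassert>

// A scoped enum carrying the "action trigger" context through a call chain.
// Its fixed underlying type also makes it forward-declarable.
enum class Trigger : bool { Manual = false, Automatic = true };

// Each callee states in its signature exactly which context it expects;
// passing an int or a bool here will not compile.
int act(Trigger t) { return t == Trigger::Automatic ? 1 : 0; }

int initiate(Trigger t) {
    // ... some reasoning that may depend on the trigger ...
    return act(t);   // the context is handed down with full type safety
}
```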
https://www.ibm.com/developerworks/rational/library/scoped-enums/index.html
It may become chronic. It may cover the body with large, inflamed, burning, itching, scaling patches and cause intense suffering. It has been known to do so. Do not delay treatment. Thoroughly cleanse the system of the humors on which this ailment depends and prevent their return. The medicine taken by Mrs. Ida E. Ward, Cove Point, Md., was Hood's Sarsaparilla. She writes: "I had a disagreeable itching on my arms which I concluded was salt rheum. I began taking Hood's Sarsaparilla and in two days felt better. It was not long before I was cured, and I have never had any skin disease since." Hood's Sarsaparilla promises to cure and keeps the promise. It is positively unequaled for all cutaneous eruptions. Take it.

SCHOOLS AND COLLEGES

Special New Year's Issue of Scholarships at the Southern Shorthand & Business University, For Ten Days, Beginning Jan. 1st.

DAY SESSION - Full Business Course, embracing Book-keeping, Banking, Mathematics, Penmanship, Business Practice, Correspondence, Commercial Law, Spelling, the regular tuition for which is $40.00, for $35.00. Full Shorthand Course, embracing Shorthand, Typewriting, Letter-writing, Mathematics, Penmanship, Manifolding, Copying, Court Reporting, Spelling, the regular tuition for which is $40.00, for $35.00. The Full Business and Shorthand Courses Combined, same as above, the regular tuition for which is $60.00, for $50.00.

NIGHT SESSION - The Full Business or Shorthand Course, same as above, the regular tuition for which is $35.00, for $30.00. Telegraphy Course, embracing both Railroad and Commercial Telegraphing, Typewriting and Penmanship, per month, in advance, $5.00.

In order to secure these reduced rates, it will be necessary to engage your scholarship before the close of January 30th, as these special rates will be absolutely withdrawn and our regular rates reinstated on the 31st. These scholarships entitle the holder to begin at any future date, should it not be convenient to enter during the above period.
Call, send or phone for our Illustrated Catalogue, which gives general Informa? tion regarding our school, testimonials from patrons and former pupils, etc. Southern Shorthand & Business University Cor. dranby St. and City Hall Ave. New Phone 450. J. M. RESSLER, Pres. Hoineimers Gorr6ct> Dress Chart Evening Weddings, Balls, Receptions, Formal Din? ners and Theatre.&>jfijU> Coat ?livening Dress and Inverness Waistcoat ?White Double Breasted or Black Single Breasted. Trousers ?Same material as Coat. Hat ?Opera or High Silk. Shirt and Cuffs ?White with CulTs attached. Collar ?Lap Front Standing or Poke. Cravat ?Broad end White Tie Gloves ' ?Pearl or White. Shoes ?Patent Leather?Button or Lace, Jewelry ? Pearl Studs and Links. i'he above requisites may be procured at our double store ? fioilieiiiisr's 328-330 HAIN STREET. THEATRICAL NOTES. v WHY SMITH LEFT HOME." George H. Broadhurst's VWhy Smith Lett Home" will be seen at the Acad? emy Tuesday (New Year's Day), mati? nee and night. The story of the piece brlelly told Is this: Smith and wife are recently married, Smith has long been a bachelor, and It Is .his desire and that of his wife tha't they enjoy their honeymoon In the quiet seclusion of their own home, but their relatives seem to seize ou the opportunity to visit them from every nook and corner. First conies Smith's sister,. then Smith's wife's brother and his wife of a, day, then her aunt and her husband, the aunt coming with the express pur? pose of getting the household started right All this riles Smith, and to make matters worse the servants are new, and all members of unions, and are determined to run the bouse to suit their Ideas. The cook especially Is an autocrat of the worst kind, but in her Smith finds a friend and he i nters into a compact with her to drive the visi? tors away with vile cooking. There are many complications. Smith by mistake kisses one of the maids, and Is caught In the act. Mrs. 
Smith Introduces her brother to her husband as an Italian singing-master, and later embraces him jus't as her luisband en? ters the room. All the mistakes are finally cleared up In a masquerade given by the servants In the evening, wiien they had supposed the family were to be at too theatre, but Smiih learns that the masquerade is to take place and returns home with the Inten? tion of stopping It. His wife learns of the affair .and attends the masque? rade In costume In order to keep an eye on her husband. Smith proves himself O. Kv, nnd all ends happily. Seats now selling. Prices, 25 to $1.(10; mati? nee, 50 cents; children, 25 cents. *-:-* "WHAT'S IN A NAM B. There may not be much In a name, so far as the Individuals are concerned, but as applied to plays tho title Is al? most half the battle. Many a good play has been a failure before it has ever been prodbced, as a result of being given a poor title. Next to writing a good play the most difficult task for the playwright is to select :t good title. There are few men who are capable of writing not only good plays, but of naming them properly. In the long list of bis many successes every title selected by Mr. Charles Hoyt. has Wen at once striking and BUggeeftlve^of "REHEARSING FOR SUNDAYIT SCENE FROM HOYT'S "A MID? NIGHT BELL.." some definite idea. As 'in reading a newspaper, one selects tne articles having the most attractive headlines, so a the11 re-goer buys a ticket for the play whose title strikes his fancy. Take for Instance "A Midnight Bell." The title suggests so many humorous and enjoyable situations that the reader of advertisements In papers or on bill? boards is on the way to the-.theatre be? fore he has investigated furl tier. The tile alone has convinced htm that the entertainment is just what ho wants. There are many new and up-to-date features introduced In "A Midnight Bell' this season, which comes to the Academy of Music Wednesday night. 
*-: -* Miss Marion Convere, who has held a prominent place in Soul hern society, is shortly to adopt a theatrical career and go on tour in "My Daughter-ln Law," the comedy Which had such a Buccesful run in New York at the Ly? ceum Theatre last spring. She will play the part originally acted by Ella line Terrlss and afterward by Miss Shannon. Miss Convere was born In Charlotte, N. C. and is connected with many distinguished families in the South. Her father is Colonel Hamilton Convere Jones, a prominent member of the Bar of North Carolina, nnd at one time IT. S. District Attorney for North Carolina. Her grandfather, Hamilton C. Jones, was a distinguish? ed lawyer and wit in his day. Govern? or Martin, the first Colonial Governor of North Carolina, was one of her an? cestors. Mr. Hoko Smith, who was one of ex-President Cleveland's cabinet is a relative. Miss Convere is a mem? ber of tlie Wednesday Cotillons of the Si Uth?rn society set in New York. She has frequently played in amateur'the? atricals, and last year made an experi? ment In a professional way in one of Charles Frohman's "The Little Minis? ter" companies. Jn r.his she was so successful that she has formally adopt? ed the siage as a career, and will begin a tour of the South about Christmas time In the leading role in "My Da lighter-in-Law." ?": Otis Skinner is constantly receiving requests from admirers of Browning's poems for repetitions of "Tn a Balco? ny," In 'which he appeared with Mrs. LeMoyne and Eleanor ltobson at Wal lack's last October. Mrs. LcMoyne al? so tjnds the same demand wherever she plays. Consequently both stars will close their regular season early, In order to make a special spring tour I with the Same production of Brown? I lug's masterpiece which made such a 1 sensation In the literary and artistic circles of New York. Mrs. LeMoyne is winning a series of triumphs on her flrat stollai tour of the j South. Not* on'y 1? she playing to lurge auiliences, btK. 
she seems to lm press her powerful personality upon every spectator. Says the Atlanta Constitution: "Mrs. LeMoyne herself, who takes the role of the mother, be? longs to the school of 'Don't act?just bo'. Hers Is the simplicity of natural? ness, nnd at the same time the very apotheosis of acting." The charm of James O'Neill's per? formance itv "Monte Crlsto" is seldom in dispute, but what constitutes thai", charm is often a matter for discus? sions among 'the intelligent theaiire goers. The best explanation seems to be that the character of "Monte Cris to" is the embodiment of all human emotions,?love and hate, hope and re? venge, faith and perfidy. The actor who achieves success in the part must be able to portray diametrically oppo? site feelings and passions with equal ftdetky and sincerity. He must be as much a hater as a iover, a loyal friend and an implacable foe, a Johu-a ttreams, and a man of the world. James O'Neill's phenomenal power to express all these varying phases of hu? man emotions is the key to his as? tounding success in the part. 4 * ? Negotiations are now in progress for elaborate productions of "In the Palace of a King" in both England and-_Australia. A play which will bring into the box ofllce an unremitting How of ducats that will aggregate from ten to twelve thousand dollars a week Is the kind of material that all enter? prising .managers are in search of. an 1 Messrs. Llebler & Co. are to b?* accounted fortunate in having acquir? ed that sort of a play In the new Craw? ford -Stoddard production. Lewis Morrison will again personally appear as Faust during the season 1901-1902. *- :-* "Arizona." which has been pronounc? ed the greatest hit achieved by any of the plays written by Augustus Thomas, Is now being- presented by three companies in different parts of the country, under the joint manage? ment of Kirk La Shelle and Fred Hamlln. *- ? _? "The Princess Ohie." 
with Mnrgue rlta Sylva in the title role, will follow Frank Daniels' engagement in London next season. 9 Loneta Nnlvl. the pretty Hawaiian, who is a member of Frank Daniels' chorus, and who has attracted quite a little attention by her grace and beau? ty, suddenly succumbed to acute ap? pendicitis as the company was leaving San Francisco recently. She was taken to st hospital and operated upon. For a while her life was despaired of. Hut word has Just been received by Manager La Shello that she is now out of danger and will be able to rejoin the company In a short time. ? - ? _? Chauncey Olcott's songs return a royalty of $15,000 every year. Thous? ands of them are sold during his sea? son on the road. The toe ad. Is becoming popular. First It was 10. H. Sothern with a la? cerated toe nail. Now It Is Margue? rite Sylva with a sore corn. ? _. _ . The Stanhope-Wheatcroft Dramatic school In New York has turned out more successful leading actresses than any similar institution in the country. ? _. _? Chauncey Olcott has composed five or six new songs for "Garrett O'Magh" the new romantic Irish drama which has been written for him by manager Augustus Pitou. In a recent conversation with Mr. Joseph Jefferson tiie famous actor ex? pressed himself as much gratllied with the success as a star of his son Thomas Jefferson, who is having a prosperous season in "Hip Van Win? kle.". *-: -? Louis James ami Kathryn Kidder In Wagnala and Kemper'a costly revival of "A Midsummer Night's Dream" are touring the large cities in Texas to overflowing audiences. On January 27 they begin a two week's engagement in San Francisco. "The Gunner's Mate," which will be seen in New York at an early date, Is one of tho most elaborate scenic and iniiebaiiieul?production*, c^i'i?made by Manager Pitou. The scenes on board the United States cruiser New York are particularly realistic and sensa? tional. ? * ? 
During the recent final performance of "More Than Queen" in Detroit the supernumurarles employed in tho play presented Blanche Walsh with a mag? nificent floral piece. This has never before been done by supers for any star, apd Miss Walsh considers it. as it retily is, a splendid compliment. ? .*. * In booking Madam Modjeska's South? ern tour, her managers, Messrs. Wag cnhals nnd Kemper, had intended cutting the piny of "King John' out of the repertoire for that particular section; on account of its tremendously heavy scenic effects, and the number of people needed for its performance. Southern ihanogera insist upon its be? ing retained, especially in velw Of this being the farewell tour of the famous artist in the South. It I? therefore announced that "King John" will be presented In the South as elsewhere and will be made the feature of the tour. Modjeska will be accompanied by R. D. MncLcan and Miss Odette Tyler. ?-:-* Newport News on Monday night is to have for the first time a popular price theatre, and tho show Is to be opened in the Casino, which for years has been used for dances, and which last summer under the management of Mr. G. B. A. Hooker proved to be a frost as a vaudeville house. BRIEF ITEMS OF INTEREST. Dr. J. Handolph Garrett, who has been practicing here, has moved to Roanoke. Mr. Charlie D. Woodin, Jr., partner of Hie Home Art Company, returned home yesterday from a business trip South. The Christmas music at Grace P. 0. church will be repeated at today's ser? vices. The choir will be assisted by Professor Jackson's orchestra at the 5 o'clock service. The employees In the pressroom of tho Vlrglnian-Plldt regaled themselves last night with a delightful oyster "What 1? light?" asked the teacher of the pupils of 'the junior class. "A $10 gold piece that Isn't full weight," ropllfed a bright youngster. ART AND MUSIC. Mr. Harold Bauer, the sreut French pianist. Is announced to give the sec? 
ond of the series of subscription eon certs a ran god by Mr. Henry Mat Lachlan, In the Academy of Music ! Monday evening, January 7. Mr. Bauer I comes to America after wonderful SUC I cess In Paris, London and Berlin lost year. He made his American debut I with tho Boston Symphony Orchestra i three weeks ago and the audience went I wild with enthusiasm and he .was re I ca'.led seven times after hU perform? ance. Mr. Bauer was engaged for the third concert of this series, but the following telegram from Lenora Jackson, who was to appear With* her company, ex? plains why Mr. MaeLachlan has been -compelled to make the change: '?Atlanta, Ga?, Dec. 25.?Other book? ings make Norfolk impossible January without severe loss. Can you possibly postpone alter Lent. Kindly wire. "LENORA JACKSON." Miss Jackson has been booked to fill her date In Norfolk at the Academy of ' Music in E:u?tor week and Mr. Mae. Lachlan trusts the subscribers will kindly note the change and date. Acad? emy of Music, Monday evening, Jan? uary 7. Of Mr. Bauer's appearance at Bos? ton the Boston Herald of December 2 says: "Mr. Bauer seems to have a splendid technique: he bears himself mod**tlv, [days with easey freedom and wltnout any affectation either of style or of manner, and he does not abuse the In? strument by attempting to force Its tones. Mr. Bauer was cordially wel I COmed and very heartily applauded and recalled."?The Boston Herald, Decem? ber 2. 1900. *-: -* The class In Illustrating, now being engaged under the direction of Mr. C. A. Morrlsett. art instructor at the Nor? folk Conservatory of Music, bids fair to be a great success. Inasmuch as It Is I most practical, embracing drawing and sketching for life, pen and Ink work nnd plate engraving; in fact, teaching the pupil the essentials In newspaper .and magazine work. The Held for illustrators is large and lucrative. *- ? -? 
Tho Hume-Minor Company have in the window of their piano salesroom on Granby street, Tazewell building, a specimen of an early Norfolk settler, which is quite a curiosity. It is proba? bly the oldest instrument of Its kind in Norfolk. _. ?~ ? *? Mr. E. N. Wilcox, manager of the Hume.Minor Company, has received a letter from Mr. James \V. Casey, form? erly of this city. In which Mr. Casey states that he Is now snlesman in the pionu department of Jo4t??Wanamakerr New York city. Mr. Casey states that he is very pleasantly located, has a good position nnd is well pleased. Reduction of Price of Ice. The Norfolk Ice Company announce a very substantial reduction In the price of ice to take effect January 1st, and state that their, action Is In pur? suance of a policy which has been adopted by the management of this company to reduce the price of lee to all general consumers. The new prices range from S3..*0 per ton delivered and $3 per ton on platform In lots of 1,000 pounds or more to 40 cents per 100 pounds In lots less than 1,000 pounds. OTHER LOCAL ON PAGE 6. Easily Cured Miss Edith Williams Wants Every Lady Reader of this Paper to Know How She Saved Her Father. Used an Odorless and Tasteless Remedy In His Food, Quickly Curing Him Without His Knowledge. Trial Package of The Remedy Mulled l'roo To Show How Entry 11 Is T? Cure Drunkards. Nothing could be more dramatic cr de? voted than the manner in which Miss Edith Williams, Box Wayneavtjje, o.. cured her drunken father after years of misery, wretchedness und almost unbear? able suffering. "Yes. father Is a reformed man," she said, "and our friends t!.i;.k it ft miracle that I cured him without his knowledge or consent. I had read how Mrs. K.tle Lynch, of 3:"9 Kills St., San Francisco, Cab, had cured her husband by using a remedy sdcretly in his coffee and food and I wrote to Dr. Haines for it trial. 
When it came I put some in father's coffee and food and watched him closely, but he couldn't tell the difference, so I kept it up. "One morning father got up and said he was hungry. This was a good sign, as he rarely ate much breakfast. He went away, and when he came home at noon perfectly sober I was almost frantic with joy, as I hadn't seen him sober for half a day before in over fourteen years. After dinner he sat down in the big easy chair and said: 'Edith, I don't know what has come over me, but I hate the sight and smell of liquor and am going to stop drinking forever.' This was too much for me, and I told him then what I had done. Well, we both had a good cry, and now we have the happiest home and the kindest father you can imagine. I am so glad you will publish this experience, for it will reach many others and let them know about that wonderful Golden Specific." Dr. Haines, the discoverer, will send a sample of this grand remedy free to all who will write for it. Enough of the remedy is mailed free to show how it is used in tea, coffee or food, and that it will cure the dreaded habit quietly and permanently. Send your name and address to Dr. J. W. Haines, 3173 Glenn Building, Cincinnati, Ohio, and he will mail a free sample of the remedy to you, securely sealed in a plain wrapper; also full directions how to use it, books and testimonials from hundreds who have been cured, and everything needed to aid you in saving those near and dear to you from a life of degradation and ultimate poverty and disgrace. Send for a free trial to-day. It will brighten the rest of your life.

Women's Fine Shoes at greatly reduced prices. Something entirely new to the Hornthal shoe store will occur to-morrow: a Bargain Sale of Women's High-grade Shoes! In order to keep the stock free from accumulations of odd lines and sizes such a course as this is deemed advisable; it's to the best interests of the patrons of Norfolk's best shoe store.
There, you have our reasons for inaugurating this sale?the quick converting into cash of the very tinest shoes that the very best shoemakers ever made. The p'ice lovverings are not "stupendous," nor "marvellous," nor "gigantic," but they are of sufficient import to cause you wearers of this class of shoes to Come to HonithaFs to=morrow. Hasty pen pictures of the excellent shoes which we invite you to take at the new prices are as follows:? The $3 Plack Viol Kid Shocs-hut ton or lace?plain or patent-leather tips \ toxins?satin finish?blind, eyelets?mannish effoct?heuvy soles -medium high heel.;. To <S^~) njQ Our great 13.25 Black Viel Kid Shock?taco or button?plain or pat? ent-leather lips?Cuban or low heels ?black silk facings. Oar tme on every p.nr. To go (jj^ g5 tit. The sj.ro patent KM Walking Boot ?lace?whole foxing -Cuban heel or low military bed ? very sightly at.d....8f.ry'.CeaWe- T? g? $3.00 The $3.75 Black Viel Kid Shoes? patent-leather tips?H-tnch leather concave heels?(toeiiye.tr welt, heavy soles?%-foxlng?satin tin lush. To go at. ?PO.O/ The ?6.25 Ideal Kid Shoes (Viel f'atmt-l.iathei)?l'.oeth &? CO.'S st*ck ? Louis xiv ivinoh heels?hand welt soles, V&-lncn extension?whole foxing?satin facing?blind eyelets ? extra iiiKh cut uppers (for <l e "7er short skirts). To go at.... /?J The W.S0 Ideal Kid Shoes (Viel Pat tent-leather)? Booth ?v Co's stock? Louis XIV 1%-lnch heels?hand-welt soles, 1-lti-incli extension?whole rox Ing?silk facings?hand-worked eye? lets. The best that money (\(\ can produce. To go at. ?pv.w Hornthal and Son, 272 Alain Street. Neat and tidy service counts for much with our patrons. Such service goes hand in hand with the rest of Snow's methods? promptness, politeness and the other little attentions that one looks for only in strictly first-class dining places?such as I h/s 1 Came In season: sweet, juicy steaks, delicious French-drip coffee with whipped cream, and our famous 2f>c. 
full-course dinners, retain our old patrons and win new ones daily. SNOW'S .^fsyaSi, SNOW'S $r D R Er WREV'S |_Eeinodel?ng and Removal Sale, I WILL BEGIN 1 {Sa.tu-ii?cl4F?y-9 Bee? SSO \i AND CONTINUE DURING NEXT WEEK. Z All Clothing and Men s Furnishing Goods will be sohl at n dls 2 count of 25 per cent, to 83 1-8 per cent, in order to reduce stock be ^ fore wo move, which will bo oti or ubout January 7tli. :?? 9 TERMS CASH. I s. s. phone 661. :Y'S, HERE'S THE CAPER P.U. pending?on our Spiced Beef Rofls and Rounds For Xmas. J.S. Beil, Jr. &Go., THE BUTCHERS. OPEN ALL DYA. BOTH PHONES. Wo are scents for tha rollov.-lnir machines: The Standard. White, New Home, Domestic aud Ilouselioli A gooj new machine from IIS.OO. Splen. did lln? of sec.enl-liatid machines from 15.00 to $15.On. Need lea and all part* for machines can be had at our office. W<? repair sewing ir.icftlnej and ,,..,i. an:.? j tau work. C. C. GUNTER, STANDARD SEWING MACHINE* IGi Church Street. Norfolk, Va. !8 FEREBEE, JONES & CO., Merchant Tailors, IN Overcoatings We HAVE 1 Oxfords Greys j A LARGE RANGE OF FABRICS AND A VARIETY OF GRADES; I Suitings OF ALL CLASSES IN Cheviots, Unfinihhed Worsteds AND Thibets IN BLACK, BLUB AND OXFORD COLORINGS. A CHOICE SELECTION OF EXCLUSIVE DESIGNS IN Fancy Suitings and Trouserings. CORtNER Plain and Commerce Sis. I GREAT SALE I Trunks, I Suit Cases, I Traveling Bags, I Fitted Toilet Cases, Lades' and Men's Pocket i Books at Greatly Reduced % Prices. I - I This is our annual "BE | FORE-SIQC1C T-ArrvlNG $ SALE." It will pay you J to make your purchase $ now. _ 1 NORFOLK 1 TRUNK FACTORY m Gtiurcli eStreet, near Main. Special Offer! | ? All Trunks guaranteed and j; ? kept in repair free of charge. ? f We Repair Old Trunks! f; O TO l. You don't know halt' the scad* wo e?r< ry?Pocket Books, Ladles' and Gante"; 's Duplicate \\ h! s. t--; Gold Pens ?a I Ivory Fori Holders- Fashion Favorit?. 
Flaying Cards, all the 'newest backs: th j ; irgt-st line of Fancy F.ox Papers In tin* eity, over two hundred styles; Oes!: lllotter Pads: Handsome Onvr Too P. sic Plotters; Library Ink Stands; SterUrtffi Silver Fen Holdora ar.d Sterling Silver Mounted Pencils, fi in ?et.; Every hous* should have a pnper cutter and a roll at wrapping paper nnd a ball of twine; Jutft ihe thing for this season of tho year. OLD DOMINION PAP 15 It COMPA.N'Sf. ^l?lil? _ Commercial Place. 's Busy Grocery Fancy Baltimore- Naval Cut Corned F-eOf and Spaced Bounds. LARGE IMPORTED MACKEREL LARGE IMPORTED MACKEREL, SPRINGFIELD BAMS. WKSVPllALIA HAil? Fancy Princess Anne County YurtK LOWE & RSlLUaR* xml | txt
http://chroniclingamerica.loc.gov/lccn/sn86071779/1900-12-30/ed-1/seq-5/ocr/
By continuing to use this site you agree to the use of cookies. Please visit this page to see exactly how we use these. First of all: have they finally released an Android port of Allegro? This is too good to be true Also, my test game, which was compiled with V2.6SP1, didn't show up. Is it because my game is 8-bit and the engine does not support 8-bit modes?I actually tried the PSP version some time earlier out of curiosity and it also didn't recognise my game, but since I just tested it with an emulator (I don't own a PSP, and never will) this cannot be any indication.EDIT: I just put another 8-bit test game compiled with V3.21 and it was recognised (also white screen though), so it shouldn't be a colour depth. Maybe it has difficulties recognising V2.6SP1 games. Ok, it works on my hero but not on my sensation but the mouse movement doesnt work very well. The track ball seems to move the mouse a tiny bit and click the left button every time you move it.dragging the mouse doesnt work at all. My game doesn't load for some reason. I get a script error about some import called menu? Then when i try to quit I end up having to force close. btw I got a Sidekick 4g Can you post a link to the non-working game? I would like to check it out. Trilby Notes is an 8 bit game compiled with 2.62 and works, so it might be a file format difference that I didn't account for. Here is the initial version of an engine port to Android. It is based on the PSP port, so it also shares some features:- Support for AGS data files version 2.60 - 3.21- Plugin support including AGSBlend and open-source recreations of the Snow/Rain and Flashlight plugin- All color depths and screen resolutions (scaled to the screen size with correct aspect ratio)The mouse cursor is moved by dragging, a left click is done by a quick touch and a right click by a slightly longer touch. 
Only hardware keyboards are supported right now (no way to open the software keyboard atm.)I chose the namespace as "com.bigbluecup.android". The reason was that I want to avoid the situation of the ScummVm project, where another private namespace was initially used and is still kept to avoid breaking updates through the market. Still, this is open to discussion. For now, the application is also not on the market (would be too early in development anyway).In this version, no options can be set but I would implement it the same way as in the PSP. So there would be a config file for all games that can be overridden by each individual game. This would ideally be accessible from within the game or launcher.Games must be placed on the SD card root in the "ags" directory. Each game needs its own subdirectory (like in the PSP port). The data file is automatically detected within the folder.The native libraries are compiled for generic ARM cpus. Therefore performance could be improved on newer ARM7 cpus by building a "fat" package with multiple libraries (maybe even for x86 Android).I am no expert in Android development so I would appreciate any feedback. Download of the package here: here: Oh, after reading Steve Jobs New biography, please do not port to IPhone. I do not want another feather in apples cap. yep that fixes it.Still cant drag the mouse on either of my phones tho. Also videos are being played without framedrop. This is necessary for slow devices because they will always lag behind and therefore skip rendering all frames (happened on the PSP). This could be made optional though. As for midi music, it works the same as on the PSP port:- Download this file:- Rename it as "patches.dat"- Place the file in the "ags" directory on the SD card- Midi should play Page created in 0.135 seconds with 26 queries.
https://www.adventuregamestudio.co.uk/forums/index.php?topic=44768.msg598610
#include <iostream>
using namespace std;

void whosprime(long long x)
{
    bool imPrime = true;
    for (int i = 1; i <= x; i++)
    {
        for (int z = 2; z <= x; z++)
        {
            if ((i != z) && (i % z == 0))
            {
                imPrime = false;
                break;
            }
        }
        if (imPrime && x % i == 0)
            cout << i << endl;
        imPrime = true;
    }
}

int main()
{
    long long r = 600851475143LL;
    whosprime(r);
}
https://codedump.io/share/2GkZwVxPmwqS/1/finding-prime-factors
CC-MAIN-2016-50
refinedweb
350
62.11
I have a web socket that accepts an object called PushMessage and sends it to a React page, which subsequently updates in real time. The way this works is that the user searches for the PushMessage he wishes to display, and this is then passed in. However, what happens right now is that if the user searches for the first PushMessage, that one is displayed, but if he or she then searches for another PushMessage, both of them are displayed. I would like to only display the second of them. In order to do this I feel like I need to clear the Redux store (return it to its initial, empty state). However, I can't figure out how to do this. My Reducer is given by:

var Redux = require('redux');
var _ = require('lodash');

var pushNotificationDefaultState = {};

var pushMessageReducer = function(state, action) {
    switch (action.type) {
        case 'RECEIVED_PUSH_MESSAGE':
            var obj = JSON.parse(action.PushMessage);
            return _.assign({}, state, obj);
        default:
            if (typeof state === 'undefined') {
                return pushNotificationDefaultState;
            }
            return state;
    }
};

module.exports = Redux.combineReducers({
    pushMessages: pushMessageReducer
});

function mapStateToProps(state) {
    return state;
}

var AppContainer = connect(
    mapStateToProps,
    null
)(App);

You can achieve this in a number of ways; I will show you how I would approach it. Let's look at your state right off the bat - because you are using combineReducers you are splitting your state like so:

{
    pushMessages: { ...pushMessages state in here }
}

So combineReducers is great when you want to split your app into kind of "substates", though redux is still a single store in this instance. Right now you just have the single substate of pushMessages. I would take the component and set up an action to add a message to it (and maybe one to remove a message eventually). I am using immutable.js because I happen to like working with it, and IMO it makes redux nice to work with because the state should be immutable anyway (again, personal opinion).
So here are your reducers:

var pushMessageReducer = function(state = immutable.Map(), action) {
    if (action && action.type) {
        switch (action.type) {
            case actions.RECEIVED_PUSH_MESSAGE:
                return state.update('messages', immutable.List(),
                    (oldMessages) => oldMessages.push(action.PushMessage)
                );
            default:
                return state;
        }
    }
    return state;
}

So what this does is set your state up like so:

{
    pushMessages: {
        messages: [ ...all your messages pushed in here ]
    }
}

Because you are using combineReducers your container should be subscribed to its substate of pushMessages, so in your component you will have this.props.messages with an array of your messages (if you have not subscribed to the substate it's just this.props.pushMessages.messages). And to display the last item all you really need is something like this (this is using lodash, you can certainly do this in vanilla js or whatever you want):

constructor(props) {
    super(props);
    this.getLastMessage = this.getLastMessage.bind(this);
}

getLastMessage() {
    return _.last(this.props.messages);
}

render() {
    return (
        <div>
            Last Message: {this.getLastMessage()}
        </div>
    );
}

So maybe you don't want the list to contain everything and only show the last one (I'm not sure what exact business logic you are looking for), but you can easily add a REMOVE_PUSH_MESSAGE action and just pop or splice from that array of messages (and then you could just show this.props.message without the _.last). Hope this helps!
Edit: Here is how I would set up your container:

import immutable from 'immutable';
import { connect } from 'react-redux';
import Component from './component';
import * as actionCreators from './action-creators';

function mapStateToProps(state) {
    const normalizedState = state.get('pushMessageReducer', immutable.Map()).toJS();
    return { messages: normalizedState.messages };
}

function mapDispatchToProps(dispatch, ownProps) {
    return { myAction: actionCreators.myAction.bind(null, dispatch) };
}

export default function(component = Component) {
    return connect(mapStateToProps, mapDispatchToProps)(component);
}

You'll notice the normalizedState in mapStateToProps; I use this to grab the section of the state I want to use in this container - unless your app only has 1 smart component, in which case you probably wouldn't do this. It just grabs the part of the state (from immutable) that I am using in this container, in your case it's pushMessages. So now in this component you connected, you should have this.props.messages with your items. I also use mapDispatchToProps to bind in my actions, because it gives you dispatch in your action creators, which makes async really easy. With the combined code properly hooked up (I didn't show hooking up the action here), you should fire that RECEIVED_PUSH_MESSAGE action creator with a new message on action.PushMessage. This then connects to your mapStateToProps, which updates this.props.messages on your component, which should only be rendering the last item in the list.
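Setting Redux aside for a second, the difference between the current behaviour and the wanted one is just append-versus-replace inside the reducer. A tiny language-neutral sketch (written in Python here rather than JavaScript; the names are illustrative):

```python
# Two reducer behaviours for the same action: keep every pushed message,
# or keep only the most recent one (what the question asks for).
def append_reducer(state, action):
    if action["type"] == "RECEIVED_PUSH_MESSAGE":
        return {**state, "messages": state.get("messages", []) + [action["payload"]]}
    return state

def replace_reducer(state, action):
    if action["type"] == "RECEIVED_PUSH_MESSAGE":
        return {**state, "messages": [action["payload"]]}  # previous search result dropped
    return state

state = {}
for payload in ["first search", "second search"]:
    state = replace_reducer(state, {"type": "RECEIVED_PUSH_MESSAGE", "payload": payload})
print(state["messages"])  # ['second search']
```

With the replace behaviour there is nothing to "clear": each new search result simply overwrites the old one, which is equivalent to the answer's suggestion of rendering only the last item.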
https://codedump.io/share/oDspj0GJmoix/1/clear-redux-store
CC-MAIN-2016-50
refinedweb
763
56.25
MultiMarkdown in Editorial

I found out that some features of MultiMarkdown are not supported in Editorial, as well as a bug.

Bug:
- Within the code delimiter (```), <> are not escaped, i.e. HTML code within a code block will still be rendered as HTML, not plain text.

Does not support:
- MMD metadata block (e.g. title is ignored)
- superscript & subscript (if MultiMarkdown is not selected, there's an option to select superscript; but once MMD is enabled, MMD-styled superscript and subscript are not supported)
- Abbreviations
- Inline footnotes
- ``this kind of smart quote'' will not result in “this”
- Code class (e.g. "```tex" should become <code class="tex">...)
- Math class (it doesn't have <span class="math">... enclosing the math)

- MartinPacker
Glad someone's nudging @omz to support MORE of MultiMarkdown. I wonder, however, which MultiMarkdown engine is being used. If it's a "standard" one then it's different from a home-grown one.

If you open the Python console in Editorial and type the following:

import markdown, markdown2
markdown.version  # 2.2.0
markdown2.__version__  # 2.2.1

The current version of markdown is 2.2.6. The current version of markdown2 is 2.3.1. I am not sure if these versions include the functionality that you mention above or not.

- MartinPacker
Thanks @ccc, and how would you know which of these Editorial is using (by default)? I would also wonder - if this is relevant - which settings are being used as well.

Editorial doesn't use Python for its own Markdown preview. The library for MultiMarkdown is peg-multimarkdown (if MultiMarkdown is not enabled, I use sundown). I have to admit that I haven't updated the Markdown conversion libraries I'm using in a while...

I suspected that you did not use Python for the Markdown preview. The most recent update in either of those two repos (peg and sun) is three years ago, so I doubt you are very far out of sync with them ;-).
I'm a fresh user of Editorial and I bought the app because I've read it supports MultiMarkdown. Imagine my disappointment when I open a file with lots of footnotes (written on my MacBook with MultiMarkdown Composer Pro) and in Preview just see the source code for the footnotes … Could have stayed with Notebooks 8 just as well. :-(

@omz Yes, MultiMarkdown is enabled. Example for the syntax: [^This is a footnote.] As far as I know, this is part of the MMD syntax since v. 4. It would be great if you could find a way to handle this with Editorial.

The way footnotes work in Editorial at the moment is with labels like [^fn1] and definitions of the footnote somewhere else (e.g. at the bottom of the section/document), like

[^fn1]: This is the footnote text

(The labels don't have to be numbers.)

@omz I've understood that. But I don't think it's worth editing more than 50 footnotes to a rather cumbersome format; I have to look at other editors instead. It's a pity, because Editorial looks promising otherwise and because I already paid for it, but for me footnotes are a killer feature.

- roosterboy
"a rather cumbersome format" According to the MMD syntax guide, that's the official format. An identifier goes inline in your text and the actual text of the note goes at the end.

- roosterboy
@omz I'd just like to add a voice for inline footnotes support. They're really essential for trying to write scholarly work in markdown.

I also wanted to note that inline footnotes would be a fantastic help for me! Support for inline footnotes is the only thing holding me back from purchasing the app at this time.
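Editorial's preview uses peg-multimarkdown (per @omz above), so this isn't how Editorial itself renders anything — but to see mechanically how the reference-style format pairs up labels with definitions, here is a toy stdlib-only Python sketch (the HTML it emits is purely illustrative):

```python
import re

def link_footnotes(text):
    """Toy illustration: pair [^id] references with their [^id]: definitions,
    i.e. the reference-style footnote format Editorial currently expects."""
    # Collect definition lines of the form "[^id]: text"
    defs = dict(re.findall(r'^\[\^(\w+)\]:\s*(.+)$', text, flags=re.M))
    # Strip the definition lines out of the body
    body = re.sub(r'^\[\^(\w+)\]:\s*.+$\n?', '', text, flags=re.M)

    def repl(m):
        label = m.group(1)
        if label in defs:
            return f'<sup><a href="#fn:{label}">{label}</a></sup>'
        return m.group(0)  # undefined label: leave the reference untouched

    html = re.sub(r'\[\^(\w+)\]', repl, body)
    notes = ''.join(f'<p id="fn:{k}">{v}</p>' for k, v in defs.items())
    return html + notes

print(link_footnotes("A claim.[^fn1]\n\n[^fn1]: The footnote text."))
```

Inline footnotes ([^like this one, with the text directly in the brackets]) would need a different rule entirely, which is exactly the feature being requested in this thread.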
https://forum.omz-software.com/topic/2984/multimarkdown-in-editorial
CC-MAIN-2017-26
refinedweb
622
64.81
Architecture in a Climate of Change: A guide to sustainable design. Peter F. Smith. 2001; second edition 2005. Copyright © 2001, 2005, Peter F. Smith. All rights reserved. The right of Peter F. Smith to be identified as the author of this work has been asserted in accordance with the provisions of the Copyright, Designs and Patents Act 1988. Permissions: Science and Technology Rights Department in Oxford, UK: phone: (+44) (0) 1865 843830; fax: (+44) (0) 1865 853333; e-mail: [email protected] For information on all Architectural Press publications visit our web site. Typeset by Newgen Imaging Systems Pvt Ltd, Chennai, India. Printed and bound in Great Britain.

Contents

Foreword
Acknowledgements
Introduction
1 Climate change – nature or human nature?: The carbon cycle; The greenhouse effect; Climate change – the paleoclimate record; Causes of climate fluctuation; The evidence
2 Predictions: Recent uncertainties; What is being done?; The outlook for energy; The nuclear option
3 Renewable technologies – the marine environment: The UK energy picture; Energy from rivers and seas; Hydroelectric generation; Small-scale hydro; 'Run of river' systems; Tidal energy
4 Renewable technologies – the wider spectrum: Passive solar energy; Active solar; Solar thermal electricity; The parabolic solar thermal concentrator; Photovoltaics; Wind power; Biomass and waste utilisation; Hydrogen; Nuclear power
5 Low energy techniques for housing: Construction systems; Solar design; Types of solar thermal collector; Windows and glazing
6 Insulation: The range of insulation options; High and superinsulation; Transparent insulation materials; Insulation – the technical risks
7 Domestic energy: Photovoltaic systems; Micro-combined heat and power (CHP); Fuel cells; Embodied energy and materials
8 Advanced and ultra-low energy houses: The Beddington Zero Energy Development – BedZED; The David Wilson Millennium Eco-House; Demonstration House for the Future,
South Wales; The prospects for wood; The external environment; Summary checklist for the energy efficient design of dwellings; Report by Arup Research and Development for the DTI's Partners in Innovation Programme 2004
9 Harvesting wind and water: Small wind turbines; Types of small-scale wind turbine; Building integrated systems; Conservation of water in housing; Domestic appliances
10 Existing housing: a challenge and opportunity: The remedy; Case study
11 Low energy techniques for non-domestic buildings: Design principles; Environmental considerations in the design of offices; Passive solar design
12 Ventilation: Natural ventilation; Internal air flow and ventilation; Unassisted natural ventilation; Mechanically assisted ventilation; Cooling strategies; Evaporative cooling; Additional cooling strategies; The ecological tower; Summary; Air conditioning
13 Energy options: The fuel cell; Proton exchange membrane fuel cell; Phosphoric acid fuel cell (PAFC); Solid oxide fuel cell (SOFC); Alkaline fuel cell (AFC); Molten carbonate fuel cell (MCFC); Storage techniques – electricity; Photovoltaic applications; Heat pumps; Energy storage – heating and cooling; Seasonal energy storage; Electricity storage; Building management systems; Tools for environmental design; Report by Arup Research and Development for the DTI's Partners in Innovation Programme 2004
14 Lighting – designing for daylight: Design considerations; The atrium; Light shelves; Prismatic glazing; Light pipes; Holographic glazing; Solar shading
15 Lighting – and human failings: Photoelectric control; Glare; Dimming control and occupancy sensing; Switches; System management; Air conditioned offices; Lighting – conditions for success; Summary of design considerations
16 Cautionary notes: Why do things go wrong?; High profile/low profile; The 'high-tech demand'; Operational difficulties; Building related illness; Inherent inefficiencies; Common architectural problems; Common engineering problems; Avoiding air conditioning – the issues; Common failures leading to energy waste; The human factor; Summary of recommendations; Conclusions
17 Life-cycle assessment and recycling: Waste disposal; Recycling; Life-cycle assessment; Whole life costing; Eco-materials; External finishes; Paints; Materials and embodied energy; Low energy Conference Centre, Earth Centre, Doncaster; Recycling strategy checklist
18 State of the art case studies: The National Assembly for Wales; Zuckermann Institute for Connective Environmental Research (ZICER); Social housing, Beaufort Court, Lillie Road, Fulham, London, 2003; Beddington Zero Energy Development (BedZED); Beaufort Court renewable energy centre zero emissions building
19 Integrated district environmental design: Ecological City of Tomorrow, Malmo, Sweden; Towards the less unsustainable city
Nick White of the Hockerton Housing Project.Acknowledgements I should like to express my thanks to the following practices for their help in providing illustrations and commenting on the text: Bennetts Associates. RMJM. Foster and Partners. xii . I am also indebted to Dr Randall Thomas for his valuable advice on the text. Fielden Clegg Bradley. Dr William Bordass for providing information from his ‘Probe’ studies. Grimshaw Architects Ove Arup and Partners. David Hammond Architects. Jestico Whiles. Alan Short Architects. Michael Hopkins and Partners. Bill Dunster Architects. Richard Rogers Partnership. Studio E Architects. Ray Morgan of Woking Borough Council and finally Rick Wilberforce of Pilkington plc for keeping me up to date with developments in glazing. The crucial factor is that the great bulk of this population is concentrated in the great valleys of the Yangtze and Yellow Rivers and xiii . China may well serve to give a foretaste of the future. At the same time it is important to appreciate that there are absolute limits to the availability of fossil fuels. By 2005 it had reached 1.6 billion.Introduction This book calls for changes in the way we build. This is followed by an outline of the international efforts to curb the rise in greenhouse gases. A difficulty encountered by many architects is that of persuading clients of the importance of buildings in the overall strategy to reduce carbon dioxide emissions. Buildings are particularly implicated in this process. a problem that will gather momentum as developing countries like China and India maintain their dramatic rates of economic growth. The book is designed to promote a creative partnership between the professions to produce buildings which achieve optimum conditions for their inhabitants whilst making minimum demands on fossil-based energy. 
The first part of the book seeks to set out those reasons by arguing that there is convincing evidence that climate changes now under way are primarily due to human activity in releasing carbon dioxide (CO2) into the atmosphere. One of the guiding principles in the production of buildings is that of integrated design. For change to be widely accepted there have to be convincing reasons why long-established practices should be replaced.3 billion population. The first chapters of the book explain the mechanism of the greenhouse effect and then summarise the present situation vis-à-vis global warming and climate change. being presently responsible for about 47 per cent of carbon dioxide emissions across the 25 nations of the European Union. at this rate by 2030 it will reach 1. meaning that there is a constructive dialogue between architects and services engineers at the inception of a project. This being the case it is appropriate that the design and construction of buildings should be a prime factor in the drive to mitigate the effects of climate change. The purpose is to equip designers with persuasive arguments as to why this approach to architecture is a vital element in the battle to avoid the worst excesses of climate change. an area about the size of the USA. The opportunity rests with architects and services engineers to bring about this step-change in the way buildings are designed. eight gas pipelines. The greatest potential for realising this change lies in the sphere of buildings. coal fired and nuclear power plants – a rate of expansion that equals Britain’s entire electrical output every two years. However.INTRODUCTION their tributaries. which. The Earth receives annually energy from the sun equivalent to 178 000 terawatt years which is around 15 000 times the present worldwide energy consumption. Its appetite for steel and building materials is voracious and already pushing up world prices. 
like the range of pollutants released by the burning of fossil fuels. The pre-industrial atmospheric concentration of CO2 was around 270 parts per million by volume (ppmv). By 2025 it will be importing 175 million tonnes of grain per year and by 2030 200 million tonnes. The security of the planet rests on our ability and willingness to use this free energy without creating unsavoury side effects. A supply of energy sufficient to match the rate of economic growth is China’s prime concern. Carbon has been slowly locked in the earth over millions of years creating massive fossil reserves. In the 1960s–1970s buildings were symbols xiv . The problem is that these reserves of carbon are being released as carbon dioxide into the atmosphere at a rate unprecedented in the paleoclimatic record. Already demonstration projects have proved that reductions can reach 80–90 per cent against the current norm. if the present trend is maintained we could expect concentrations exceeding 800 ppmv by the second half of the century. 30 per cent is reflected back into space. Of that. Between January and April 2004 demand for energy rose 16 per cent. In 2003 it spent £13 billion on hydroelectric. The aim of the scientific community is that we should stabilise atmospheric CO2 at under 500 ppmv by 2050 acknowledging that this total will nevertheless cause severe climate damage. Today it is approximately 380 ppmv and is rising by about 20 ppmv per decade. 26 Yanzhou coal mines. Only 0. and 20 per cent powers the hydrological cycle. According to a spokesman for the Academy of Engineering of China.6 per cent powers photosynthesis from which all life derives and which created our reserves of fossil fuel. the 800 plus figure looks ever more likely unless there are widespread and radical strategies that bypass political agreements. six new oil fields. Given the absence of a political consensus following the refusal of the US to ratify the Kyoto Protocol. 
20 nuclear power stations and 400 thermal power generators. account for almost 50 per cent of all CO2 emissions. 50 per cent is absorbed and re-radiated. which equals present total world exports (US National Intelligence Council). The technology exists to cut this by half in both new and existing buildings. and this is where architects and engineers have a crucial part to play. the country will need an additional supply equivalent to four more Three Gorges hydroelectric dams. China is on the verge of consuming more than it can produce. in the UK. INTRODUCTION of human hubris. Strong and effective action has to start immediately. challenging nature at every step.’ Peter F. The turn of the millennium saw a new attitude gathering momentum in a synergy between human activity and the forces of nature. In 2000 the Royal Commission on Environmental Pollution produced a report on Energy – The Changing Climate. Nowhere can this be better demonstrated than in the design of buildings. large reductions of global emissions will be necessary during this century and the next. It concludes: ‘To limit the damage beyond that which is already in train. Smith January 2005 xv . . leaves. through photosynthesis. Once the issues are understood. if we accept that it is largely human induced. This atmospheric carbon is then taken up by plants which convert carbon dioxide (CO2) into stems. etc. The mechanism of the carbon cycle operates on the basis that the carbon locked in plants and animals is gradually released into the atmosphere after they die and decompose. The carbon then enters the food chain as the plants are eaten by animals.Climate change – nature or human nature? Chapter One The key question is this: climate change is now widely accepted as being a reality. trunks. Compounds of the element form the basis of plants. then it follows that we ought to be able to do something about it. The carbon cycle Carbon is the key element for life on Earth. 
Inspiring that commitment is the purpose of the first part of the book which then goes on to illustrate the kind of architecture that will have to happen as part of a broader campaign to avert the apocalyptic prospect of catastrophic climate change. There is also a geochemical component to the cycle mainly consisting of deep ocean water and rocks. a commitment to renewable energy sources and bioclimatic architectural design should become unavoidable. This should be good enough to persuade us that human action can ultimately put a brake on the progress of global warming and its climate consequences. There is widespread agreement among climate scientists worldwide that the present clear evidence of climate change is 90 per cent certain to be due to human activity mainly though the burning of fossil-based energy. The former is estimated to 1 . On the other hand. Carbon compounds in the atmosphere play a major part in ensuring that the planet is warm enough to support its rich diversity of life. so. is it a natural process in a sequence of climate changes that have occurred over the paleoclimatic record or is it being driven by humans? If we hold to the former view then all we can hope for is to adapt as best we can to the climate disruption. animals and micro-organisms. thus warming the Earth’s surface. every kilowatt hour of electricity used in the UK releases one kilogram of CO2. nitrous oxide and tropospheric ozone (the troposphere is the lowest 10–15 kilometres of the atmosphere). Without the greenhouse shield the Earth would be 33 C cooler. The main human activity responsible for overturning the balance of the carbon cycle is the burning of fossil fuels which adds a further 6 billion tonnes of carbon to the atmosphere over and above the natural flux each year. The greenhouse effect is caused by long-wave radiation being reflected by the Earth back into the atmosphere and then reflected back by trace gases in the cooler upper atmosphere. carbon dioxide. 
when forests are converted to cropland the carbon in the vegetation is oxidised through burning and decomposition. or would be if it were not for human interference. The greenhouse effect A variety of gases collaborate to form a canopy over the Earth which causes some solar radiation to be reflected back from the atmosphere. biota. The main greenhouse gases are water vapour. in turn. This terrestrial radiation in the form of longwave. Volcanic eruptions and the weathering of rocks release this carbon at a relatively slow rate. thus causing additional warming of the Earth’s surface (Figure 1. Of the solar radiation which reaches the Earth. The burning of one hectare of forest gives off between 300 and 700 tonnes of CO2. infra-red energy is determined by the temperature of the Earthatmosphere system.1). Under natural conditions the solar energy absorbed by these features is balanced by outgoing radiation from the Earth and atmosphere. hence the greenhouse analogy. Soil cultivation and erosion add further carbon dioxide to the atmosphere. With the present fuel mix. atmospheric concentrations will still double by this date. 2 . In addition. ice caps and the atmosphere. with obvious consequences for life on the planet. The balance between radiation and absorption can change due to natural causes such as the 11-year solar cycle. Even if there is decisive action on a global scale to reduce carbon emissions. is pushing up global temperatures. the CO2 in the atmosphere will treble by 2100. Under natural conditions the release of carbon into the atmosphere is balanced by the absorption of CO2 by plants. The sun provides the energy which drives weather and climate.ARCHITECTURE IN A CLIMATE OF CHANGE contain 36 billion tonnes and the latter 75 million billion tonnes of carbon. These are some of the factors which account for the serious imbalance within the carbon cycle which is forcing the pace of the greenhouse effect which. oceans. The system is in equilibrium. 
one third is reflected back into space and the remainder is absorbed by the land. methane. If fossil fuels are burnt and vegetation continues to be destroyed at the present rate. at the very least. A third indicator is the heavy oxygen isotope 18O in air trapped in the ice. Nitrous oxide emissions have increased by 8 per cent since pre-industrial times (IPCC 1992). Since then the rate of increase has. Ice core samples give information in four ways. In addition. been maintained. Second. It was evidence from ice core samples which showed a remarkably close correlation between temperature and concentrations of CO2 in the atmosphere from 160 000 years ago until 1989.1 The greenhouse ‘blanket’ Since the industrial revolution.CLIMATE CHANGE – NATURE OR HUMAN NATURE? a year Earth’s surface Figure 1. rising population in the less developed countries has led to a doubling of methane emissions from rice fields. a measurement of the extent to which ice melted and refroze after a given summer gives a picture of the relative warmth of that summer. Finally. Methane is a much more powerful greenhouse gas than carbon dioxide. It is more abundant in warm years.2). the combustion of fossil fuels and deforestation has resulted in an increase of 26 per cent in carbon dioxide concentrations in the atmosphere. First. It also revealed that present concentrations of CO2 are higher than at any time over that period. cattle and the burning of biomass. the air trapped in the snow layers gives a measurement of the CO2 in the atmosphere in a 3 . their melt layers provide an indication of the time span covered by the core. Climate change – the paleoclimate record In June 1990 scientists were brought up sharp by a graph which appeared in the journal Nature (Figure 1. at the peak of the last ice age 20 000 years ago. the more favourable the climate to growth. Causes of climate fluctuation To be able to see the current changes in climate in context. 
ARCHITECTURE IN A CLIMATE OF CHANGE

CLIMATE CHANGE – NATURE OR HUMAN NATURE?

It will be necessary to consider the causes of dramatic changes in the past. The Climate Research Unit of the University of East Anglia has made a special study of the evidence for climate changes from different sources and has concluded that there is a close affinity between ice core evidence and that obtained from tree rings. Another source of what is called 'proxy' evidence comes from analysing tree rings. Each tree ring records one year of growth and the size of each ring offers a reliable indication of that year's climate. In northern latitudes warmth is the decisive factor: the thicker the ring, the warmer the given year. Some of the best data come from within the Arctic Circle, where pine logs provide a 6000-year record. This can give a snapshot of climate going back 6000 years. Also instrumental records going back to the sixteenth century are consistent with the proxy evidence. Other data from ice cores show that sea level was about 150 m lower than today.

Figure 1.2 Correspondence between historic temperature and carbon dioxide

A major cause of climate fluctuation has been the variation in the Earth's axial tilt and the path of its orbit round the sun. The Earth is subject to the influence of neighbouring planets. Their orbits produce a fluctuating gravitational pull on the Earth, affecting the angle of its axis. As the Earth wobbles, the variation in tilt is contained within limits which preserves the integrity of the seasons, thanks to the stabilising pull of the moon. Without the moon, the axis could move to 90 degrees from the vertical, meaning that half the planet would have permanent summer and the other endless winter. In the longer term, vast ice sheets wax and wane over a cycle called a Milankovitch cycle. It has been calculated that the current orbital configuration is similar to that of the warm interglacial period 400 000 years ago. We may indeed be in the early stages of an interglacial episode and the accompanying natural warming which is being augmented by human induced warming. (For more information on climate fluctuations over the past million years see Houghton J. (2004) Global Warming, 3rd edn, Cambridge University Press.)

A second factor forcing climate change is the movement of tectonic plates and the resultant formation of volcanic mountains. The surface of the Earth is constantly shifting. The collision of plates accounts for the formation of mountains. A feature of plate tectonics is that, when plates collide, one plate slides under the other; this is called subduction. In the process rocks are heated and forced through the surface as volcanoes, releasing vast quantities of debris and CO2 in the process. In themselves mountains add to the stirring effect on the atmosphere in concert with the rotation of the Earth. They also generate fluctuations in atmospheric pressure, all of which affect climate. But it is volcanic activity which can cause dramatic changes. In the short term this can lead to a cooling as the dust cuts out solar radiation. In the longer term, since CO2 has a relatively long life in the atmosphere, large injections of CO2 lead to warming.

A third factor may be a consequence of the second. Paleoclimate data show that there have been periodic surges of ice flows into the north Atlantic which, in turn, affect the deep ocean currents, notably the Gulf Stream. To understand why the ice flows affect the Gulf Stream we need to look at what drives this rather special current. Particularly salty and warm surface water migrates from the tropics towards the north Atlantic. As it moves north it gradually becomes cold and dense and, near Greenland, it plunges to the ocean floor. This, in turn, draws warmer water from the tropics, which is why it is also called the conveyor belt or deep ocean pump. It accounts for 25 per cent of the heat budget of northwest Europe. So, what is the relevance of the icebergs? As these armadas of icebergs melted as they came south they produced huge amounts of fresh water which lowered the density of surface water, undermining its ability to descend to the ocean floor. The effect was to shut down the conveyor belt. As a result northern Europe was periodically plunged into arctic conditions, and scientists are concerned that there is now evidence that this process is beginning to happen due to melting ice in the southern tip of Greenland. After the melted iceberg water had dispersed, the conveyor started up again, leading to rapid warming. This cycle occurred 20 times in 60 000 years, and the evidence indicates that cooling was relatively slow whilst warming was rapid – 10–12°C in a lifetime. For some reason these forays of icebergs stopped about 8000 years ago, creating relatively stable conditions which facilitated the development of agriculture and ultimately the emergence of urban civilisations.

A fourth factor may seem ironic, because ice ages can be triggered by warm spells leading to the rapid expansion of forests. This, in turn, leads to huge demands for CO2 which is drawn from the atmosphere. The result of this stripping of atmospheric CO2 is a weakening of the greenhouse shield, resulting in sharply dropping temperatures.

Changes in energy levels emitted by the sun are also implicated in global fluctuations. Some of the best evidence for the climatic effects of varying levels of radiative output from the sun comes from Africa. Sediment in Lake Naivasha in the Kenya Rift Valley reveals the levels of lake water over the past 1000 years. Periods of high water have higher concentrations of algae on the lake floor, which translates to a higher carbon content in the annual layers of sediment. There were long periods of intense drought leading to famine and mass migrations, the worst being from 1000 to 1270 (Nature, vol. 403, p. 410). In June 1999 the journal Nature (vol. 399, p. 437) published research evidence from the Rutherford Appleton Laboratory in Didcot, Oxfordshire, which suggests that half the global warming over the last 160 years has been due to the increasing brightness of the sun. However, since 1970 the sun has become less responsible for the warming, yet the rate of warming has been increasing, indicating that increased greenhouse gases are the culprit.

Finally, we cannot ignore wider cosmic effects. There is strong historic evidence that life on Earth has a precarious foothold. The palaeontological record shows that there have been five mass extinctions in the recorded history of the planet. The most widely known on the popular level is the final one, which occurred at the end of the Cretaceous period 65 million years ago. It is widely attributed to one or more massive meteorites that struck the Earth, propelling huge quantities of debris into the atmosphere and masking the sun probably for years. Photosynthesising plants were deprived of their energy source and food chains collapsed, resulting in the extinction of 75–80 per cent of species, notably the dinosaurs. New sites of catastrophic impacts are still being discovered on the Earth, but if we want a true picture of the historic record of meteor impact we can see it on Venus. The stability of that planet – no plate movement or vegetation to hide the evidence – ensures that we have a picture of meteor bombardment over hundreds of millennia. The Earth will have been no different. The dinosaurs will testify to the effect on climate of meteor strikes creating perpetual night. However, of all the other mass extinctions, it is the third in the sequence that warrants most attention because it has contemporary relevance.

At the end of the Permian period, 251 million years ago, a catastrophic chain of events caused the extinction of 95 per cent of all species on Earth. The prime cause was a massive and prolonged period of volcanic eruptions, not from mountains but from extensive fissures in the ground in the region which ultimately became Siberia. A chain of events caused massive expulsions of CO2 into the atmosphere which led to rapid warming and plant growth. This had the effect of stripping much of the oxygen from the atmosphere, leading to a collapse of much of the biosphere. Plants and animals literally suffocated. It took 50 million years for the planet to return to anything like the previous rate of biodiversity (New Scientist, 26 April 2003, 'Wipeout'). For the next 5 million years the remaining 5 per cent of species clung to a precarious existence. The importance of this evidence lies in the fact that this mass extinction occurred because the planet warmed by a mere 6°C over a relatively short period in the paleoclimate timescale. Why this should concern us now is because the world's top climate scientists on the United Nations Inter-Governmental Panel on Climate Change (IPCC 2002) estimated that the Earth could warm to around 6°C by the latter part of the century unless global CO2 emissions are reduced by 60 per cent by 2050 against the emissions of 1990.

Greater extremes of the hydrological cycle are leading, on the one hand, to increased area of desert and, on the other, to greater intensity of rain storms which increase run-off and erosion of fertile land. In both cases there is a loss of carbon fixing greenery and food producing land. In the first months of 2000 Mozambique experienced catastrophic floods which were repeated in 2001. In 2002 devastating floods occurred across Europe, inundating historic cities like Prague and Dresden and creating 'one of the worst flood catastrophes since the Middle Ages' (Philippe Busquin, European Union Research Commissioner). The following year saw a similar occurrence with the rivers Elbe and Rhone bursting their banks. In July 2004 Southeast Asia experienced catastrophic floods due to exceptional rainfall, rendering 30 million homeless in Bangladesh and the Indian state of Bihar. At the same time central China also suffered devastating floods whilst Delhi experienced a major drought.
It is the widescale evidence of anomalous climatic events, coupled with the rate at which they are occurring, that has persuaded the IPCC scientists that much of the blame lies with human activity. Yet these extreme climatic events are only part of the scenario of global warming.

The evidence

● There has been a marked increase in the incidence and severity of storms over recent decades. Over the past 50 years high pressure systems have increased by an average of three millibars whilst low pressure troughs have deepened by the same amount, thereby intensifying the dynamics of weather systems. Besides the effect of increasingly steep pressure gradients, another factor contributing to the intensification of storms is the contraction of snow fields. These have in the past created high pressure zones of cold stable air which have kept at bay the Atlantic lows with their attendant storms. This barrier has weakened and shifted further east, allowing the storms to reach western Europe. The increased frequency of storms and floods in this area during the last decade of the twentieth century adds weight to this conclusion.

● El Niño has produced unprecedentedly severe effects due to the warming of the Pacific. There is even talk that the El Niño reversal may become a fixture, which would have dire consequences for Australia and Southeast Asia. The people of Ethiopia are facing starvation in their millions because of the year-by-year failure of the rains.

● Insurance companies are good barometers of change. One of the largest, Munich Re, states that claims due to storms have doubled in every decade since 1960. In that decade there were 16 disasters costing £30 billion. In the last decade of the century there were 70 disasters costing £250 billion. In the first years of this century the pace has quickened: Munich Re has reported that the 700 natural disasters in 2003 claimed 50 000 lives and cost the insurers £33 billion. The Loss Prevention Council has stated that, by the middle of this century, losses will be 'unimaginable'.

● The Arctic ice sheet has thinned by 40 per cent due to global warming (report by an international panel of climate scientists highlighting the threat to sea levels from land-based ice, The Observer, 22 October 2000). Receding polar ice is resulting in the rapid expansion of flora.

● Sea temperatures in Antarctica are rising at five times the global average – at present a 2.5°C increase since the 1940s. Antarctic summers have lengthened by up to 50 per cent since the 1970s and new species of plants have appeared as glaciers have retreated. The recent breakaway of the 12 000 sq. km Larson B ice shelf has serious implications. In itself it will not contribute to rising sea levels. The danger lies in the fact that the ice shelves act as a bulwark supporting the land-based ice, and the major threat lies with the potential break-up of land-based ice. In the May 2003 edition of Scientific American it was reported that, following the collapse of the Larson ice shelf, 'inland [land based] glaciers have surged dramatically towards the coast in recent years'. Satellite measurements have shown that the two main glaciers have advanced 1.25 and 1.65 km respectively. That represents a rate of 1.4 metres per day. Even more disconcerting is the fact that the largest glacier in Antarctica, the Pine Island glacier, is rapidly thinning – 10 metres in eight years – and accelerating towards the sea at a rate of 8 metres a day. In April 1999 The Guardian reported that this ice shelf was breaking up 15 times faster than predicted. This is another indication of the instability of the West Antarctic ice sheet. When the West Antarctic ice sheet totally collapses, as it will, this will raise sea level by 5 m (Scientific American, March 1999).

● Sea level has risen 250 mm (10 inches) since 1860. Up to now much of the sea level rise has been due to thermal expansion.

● In Iceland Europe's largest glacier is breaking up and is likely to slide into the north Atlantic within the next few years.

● In Alaska there is general thinning and retreating of sea ice, warmer winters and changes in the distribution, migration patterns and numbers of some wildlife species. Together these pose serious threats to the survival of the subsistence-indigenous Eskimos (New Scientist, op. cit., p. 22). In Alaska and much of the Arctic temperatures are rising ten times faster than the global average – 4.4°C in 30 years. This may, in part, be due to the melting of the snow fields exposing tundra: whilst snow reflects much of the solar radiation back into space, the bare tundra absorbs heat, at the same time releasing huge amounts of carbon dioxide into the atmosphere – a classic positive feedback situation. According to the Director of the US National Climate Data Center, the warming is also increasing storm intensity, reducing summer rainfall and drying tundra. From Alaska to Siberia, serious infrastructure problems are occurring due to the melting of the permafrost. Roads are splitting apart, trees keeling over and houses subsiding, and world famous ski resorts are becoming non-viable. Some houses have already fallen into the sea; others are crumbling due to the melting of the permafrost supporting their foundations. The village of Shishmaref on an island on the edge of the Arctic Circle is said to be 'the most extreme example of global warming on the planet' and 'is literally being swallowed by the sea'. The sea is moving inland at the rate of 3 m a year (BBC News, 23 July 2004).

● At the same time there has been massive melting of glacier ice on mountains. The Alps have lost 50 per cent of their ice in the past century. The International Commission on Snow and Ice has reported that glaciers in the Himalayas are receding faster than anywhere else on Earth.

● Global mean surface air temperature has increased between 0.3 and 0.6°C since the later nineteenth century. The average global surface temperature in 1998 set a new record, surpassing the previous record in 1995 by 0.2°C – the largest jump ever recorded (Worldwatch Institute in Scientific American, 29 January 2000). The warmest year on record was 1999. Global warming is increasing at a faster rate than predicted by the UN IPCC scientists in 1995. They anticipated that temperatures would rise between 1 and 3.5°C in the twenty-first century. Bearing in mind the observed rate of temperature increase as mentioned above, in only a short time the rate of warming is already equivalent to a 3°C rise per century. This has not happened since prehistoric interglacial warming. This makes it probable that the end of century temperature level will be significantly higher than the IPCC top estimate (Geophysical Research Letters, vol. 27, February 1999).

● Spring in the northern hemisphere is arriving at least one week earlier than 20 years ago; some estimates put it at 11 days. A study of European gardens found that the growing season has expanded by at least ten days since 1960. Munich scientists studied 70 botanical gardens from Finland to the Balkans (616 spring records and 178 autumn). The conclusion was that spring arrived on average six days earlier and autumn five days later over a 30-year period (Nature, p. 719). A 40-year survey by Nigel Hepper at the Royal Botanical Gardens at Kew involving 5000 species indicates that spring is arriving 'several weeks earlier'.

● NASA scientists report satellite evidence of the Greenland land-based ice sheet thinning by 1 m per year. Altogether it has lost 5 m in southwest and east coasts. Over the past 20 years the polar ice cap has thinned by 40 per cent. On the one hand this threatens the Gulf Stream or deep ocean pump and, on the other, it leads directly to a rise in sea level, threatening coastal regions (Nature, 5 March 1999).

● Extreme heat episodes are becoming a feature of hitherto temperate climate zones. The summer of 2003 saw heatwaves across Europe that were exceptional, not only in terms of peak temperatures but also their duration. The majority of heat-related deaths are due to a lethal assault on the blood's chemistry. The process starts within 30 minutes of exposure to sun. Water is lost through sweating and this leads to higher levels of red blood cells, clotting factors and cholesterol.

● Concentrations of CO2 in the atmosphere are increasing at a steep rate. The pre-industrial level was 590 billion tonnes or 270 parts per million by volume (ppmv); now it is 760 billion tonnes or around 380 ppmv and rising 1.5–2 ppmv per year. Most of the increase has occurred over the last 50 years. The previous highest concentration was 300 ppmv, 300 000 years ago (New Scientist, pp. 42–43). At the present rate of emission, concentrations could reach 800–1000 ppmv by 2100; this is the highest concentration in 55 million years, when there was no ice on the planet. Even if emissions were to be reduced by 60 per cent against 1990 levels by 2050, this will still raise levels to over 500 ppmv, with unpredictable consequences due to the fact that CO2 concentrations survive in the atmosphere for at least 100 years. According to Sir David King, UK Chief Government Scientist, the aim now should be to prevent the planet crossing the threshold into runaway global warming whereby mutually reinforcing feedback loops become unstoppable. Altogether it would seem that a temperature rise of at least 6°C is very possible, with the worst case scenario now rising to 11°C.
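As a back-of-envelope check on these concentration figures (my own arithmetic, not from the source), a simple linear extrapolation of the quoted 1.5–2 ppmv annual rise shows that the 800–1000 ppmv figure for 2100 must assume an accelerating, not constant, rate of emission:

```python
# Back-of-envelope check: linear extrapolation of atmospheric CO2
# concentration from ~380 ppmv in 2004 at the quoted growth rates.
# This is simple arithmetic, not a carbon-cycle model.

def project_linear(c0_ppmv, rate_ppmv_per_year, years):
    """Concentration after `years` at a constant annual rise."""
    return c0_ppmv + rate_ppmv_per_year * years

years_to_2100 = 2100 - 2004                      # 96 years
low = project_linear(380, 1.5, years_to_2100)    # slow end of quoted range
high = project_linear(380, 2.0, years_to_2100)   # fast end of quoted range

print(low, high)  # 524.0 572.0 (ppmv)
```

Even the fast constant rate gives only about 572 ppmv by 2100 – well short of 800–1000 ppmv – so the higher projection implies that the annual rise itself keeps growing, which is consistent with the accelerating emissions described later in the chapter.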
● In the 2003 heatwave, 35 000 died in August across Europe and 14 800 in France alone from heat-related causes. Other estimates put the figures at 20 000 and 11 000 respectively. According to scientists in Zurich reporting in 'Nature on-line', this kind of sustained summer temperature could normally be expected every 450 years. Towards the latter part of the century they predict such an event every second year. That month was also the occasion of a severe heatwave in Brisbane, North Australia, where there were 29 sudden deaths in one night. According to the Earth Policy Institute in Washington DC, the cost of premature death due to rising numbers of heatwaves is reckoned to be £14 billion a year in the EU and £11 billion in the US. Worldwide the assessment is £50 billion.

● One of the predicted results of global warming is that there will be greater extremes of weather, which not only means higher temperatures but also more extensive swings of atmospheric pressure. Research at the University of Lille has indicated that when the pressure falls below 1006 millibars or rises above 1026 millibars the risk of heart attacks increases by 13 per cent. The study also showed that a drop in temperature of 10°C increases the risk of a heart attack by the same percentage (reported at a meeting of the American Heart Association, Dallas, November 1998).

● On 4 February 2004 the temperature in central England reached 12.5°C, which was the highest early February temperature since records began in 1772, according to the UK Meteorological Office.

● The year 2000 saw an unprecedented catalogue of warnings. The warming that is eroding Europe's largest glacier in Iceland also created clear water across the North West Passage at the top of Canada, making navigation possible.

● Oceans are the largest carbon sink. As they warm they are becoming less efficient at absorbing CO2. The latest prediction is that the carbon absorption capacity of oceans will decline by 50 per cent as sea temperatures rise.

● According to the UN Environment Protection Agency director, methane is a much more potent greenhouse gas than CO2 and levels are rising rapidly. Methane emissions from natural wetlands and rice paddy fields are increasing as temperatures rise.

Finally, the assumption generally held by policy makers is that a steady rise in CO2 concentrations will produce an equally steady rise in temperature. The evidence from ice cores reveals that the planet has sometimes swung dramatically between extremes of climate in a relatively short time, due to powerful feedback that tips the system into a dramatically different steady state. The paleoclimate record shows that generally cooling occurred at a slow rate but that warming was rapid, as stated earlier – for example 12°C in a lifetime. Scientists meeting for a workshop in Berlin in 2003 concluded, on the evidence of climate changes to date, that the planet could be on the verge of 'abrupt, nasty and irreversible' change (Bill Clark, Harvard University, quoted in New Scientist, 22 November 2003).

Predictions

Here are some of the predictions.

● Historic sea levels are well recorded in the Bahamas and Bermuda because these islands have not been subject to tectonic rise and fall. Ancient shorelines show that, at its extreme, sea level was 20 m (70 ft) above the present level during an interglacial period 400 000 years ago. This would occur if all the world's vast ice sheets disintegrated.

● It was stated earlier that the geological record over 300 million years shows considerable climate swings every 1–2000 years until 8000 years ago, since which time the swings have been much more moderate. The danger is that increasing atmospheric carbon up to treble the pre-industrial level will trigger a return to this pattern. There is a serious risk of this happening to the West Antarctic and Greenland ice sheets, and their loss would mean a 12 m rise in sea level (Geology, vol. 27, p. 375).

● The condition of the Greenland ice cap is another cause for concern. According to a BBC report (28 July 2004) the Greenland ice sheet is melting ten times faster than previously thought. Since May 2004 the ice thickness has reduced by 2–3 m. According to one scenario, 'warming of less than 3°C – likely in that part of the Arctic within a couple of decades – could start a runaway melting that will eventually raise sea levels worldwide by seven metres' ('Doomsday Scenario', New Scientist, 22 November 2003; words attributed to Jonathan Gregory of the Hadley Centre). The same report stated that Alaska is 8°C warmer than 30 years ago.

● In 2001 Antarctic scientists indicated that sea levels could rise by 6 m (20 ft) within 25 years (Reuters). Many millions of people live below one metre above sea level. Singapore and its reclaimed territories will be at risk if the sea level rises above 20 cm. The mean high tidal water level has increased between 40 and 50 cm since the 1970s. On current trends, cities like London, New York and New Orleans will be among the first to go. Hamburg, for example, is 120 kilometres from the sea but could be inundated. The Thames barrage is already deemed to be inadequate.

● In the UK rising sea levels threaten 10 000 hectares of mudflats and salt marshes. But the most serious threat is to 50 per cent of England's grade 1 agricultural land, which lies below the 5 m contour (Figure 2.1). Salination following storm surges will render this land sterile.

Figure 2.1 Land below 5 metre and 10 metre contours

● A report from a committee chaired by the UK's Chief Government Scientist, Sir David King – Future Flooding, a report from the Flood and Coastal Defence Project of the Foresight Programme (April 2004) – predicts the consequences of global warming for flooding in the UK. The panel of scientists behind the report considered four scenarios. The two worst case scenarios more or less correspond to the IPCC Business as Usual scenario, in which there is unrestricted economic development and hardly any constraints on pollution. This BaU scenario assumes some changes and improvements in efficiency in technology. The report concludes that the population at risk from coastal erosion and flooding could increase from 1.6 million today to 3.6 million by the 2080s. The cost to the economy could be £27 billion per year (Future Flooding, April 2004) (Figure 2.2).

Figure 2.2 Areas in England and Wales at risk of flooding by 2080 under worst case scenario (from the Office of Science and Technology Foresight Report, Future Flooding, April 2004)

In an interview with The Guardian (14 July 2004) Sir David King stated: 'You might think it is not wise, since we are melting ice so fast, to have built our big cities on the edge of the sea where it is now obvious they cannot remain.' He went on: 'I am sure that climate change is the biggest problem that civilisation has had to face in 5000 years', which gives added weight to his pronouncement in January 2004 that climate change poses a greater threat than international terrorism.

The IPCC Scientific Committee believes that the absolute limit of accumulation of atmospheric carbon should be fixed at double the pre-industrial level, at around 500 parts per million by volume (ppmv). Even this will have dramatic climate consequences.

Global warming also poses a serious threat to health. Pests and pathogens are migrating to temperate latitudes, including the deadly plasmodium falciparum strain of malaria which kills around one million children a year in Africa (Figure 2.3). The incidence of the fatal disease West Nile fever has increased in warm temperate zones.
It is already widely understood that illnesses like vector borne malaria and Leishmaniasis (affecting the liver and spleen) are predicted to spread to northern Europe. New York had an outbreak of West Nile fever in 1999. The UK Department of Health predicts that, by 2020, seasonal malaria will have a firm foothold in southern Britain. Higher temperatures would also increase the incidence of food poisoning by 10 000 (Department of Health review of the effects of climate change on the nation's health, 9 February 2001). The Department also estimated that there will be around 3000 deaths a year from heatstroke – a prediction seriously understated if the summer of 2003 sets the pace of change.

Figure 2.3 Predicted spread of seasonal malaria in Britain by 2020

It is estimated that all the glaciers in the central and eastern Himalayas will disappear by 2035. A report from the Calicut University, Kerala, by British, Indian and Nepalese researchers predicts that the great rivers of northern India and Pakistan will flow strongly for about 40 years, causing widespread flooding (New Scientist, 8 May 2004). After this date most of the glaciers will have disappeared, creating dire problems for populations reliant on rivers fed by melt ice like the Indus and Ganges. Another danger is posed by the rapid accumulation of meltwater lakes. Meltwater is held back by the mound of debris marking the earlier extremity of the glacier path. These mounds are unstable and periodically collapse with devastating results. It is predicted that the largest of these lakes, in the Sagarmatha National Park in Nepal and currently holding 30 million cubic metres of water, will break out within five years (New Scientist, 24 July 2004). Melting glaciers in the Andes and Rocky Mountains will cause similar problems in the Americas (New Scientist, 13 March 2004, p. 18). The worldwide melting of glaciers and ice caps will contribute 33 per cent of the predicted sea level rise (IPCC).

Nature could still be the deciding factor. Historically, relatively abrupt changes in climate have been triggered by vegetation: average temperature rose by 5°C in 10 years 14 000 years ago. Earlier it was said that the paleoclimate record shows that in the past the explosive growth of vegetation absorbed massive amounts of atmospheric carbon, resulting in a severe weakening of the greenhouse effect and a consequent ice age. The Hadley Centre forecasts that global warming will cause forests to grow faster over the next 50 years, absorbing more than 100 billion tonnes of carbon. However, from about 2050 the increasing warming will kill many of the forests, thus returning 77 gigatonnes (billion) of carbon to the atmosphere. This will bring a high risk of runaway global warming. Already there is evidence of changes in growth patterns in the Amazon rainforest. Taller, faster growing trees are taking over from the slower growing trees of the understorey of the forest. This is attributed to the higher levels of CO2 in the atmosphere. Canopy trees are faster growing and lower in carbon content. In the short term this could mean a net loss in the carbon fixing capacity of the forest, since the understorey trees are slower growing and denser in carbon content. In the longer term the latter trees are likely to be more susceptible to die-back through heat and drought (New Scientist, 5 June 1999).

The head of research at Munich Re, the world's largest reinsurance group, predicts that claims within the decade 2040–2050 will have totalled £2000 billion, based on the IPCC estimates of the rise in atmospheric carbon. He states: 'There is reason to fear that climatic changes in nearly all regions of the Earth will lead to natural catastrophes of hitherto unknown force and frequency. Some regions will soon become uninsurable' (quoted in The Guardian, 3 February 2001).

We have to add to these natural events the prediction that there will be a substantial increase in world population, mostly in areas which can least accommodate it. The UN Population Division estimates that the world figure will reach 8.9 billion by 2050. The US Census Bureau predicted in March 2004 that the present population of 6.2 billion will rise to 9.2 billion by that date. It then believes that the rate of fertility will fall below the replacement level. Even at present 1.3 billion, or one third, of the total world population live in extreme poverty on less than $1 per day. At present the greatest concentrations of population are in coastal regions, which will be devastated if sea level rise predictions are fulfilled.

Recent uncertainties

The uncertainty with perhaps the greatest potential to derail current predictions about global warming is the role of the clouds, described by New Scientist as 'the wild card in global warming predictions. Add them to climate models and some frightening possibilities fall out' (Fred Pierce, New Scientist, 24 July 2004). A warmer atmosphere means greater evaporation with a consequent increase in cloud cover. Water vapour is a potent greenhouse gas, and IPCC scientists consider that the net effect will be to increase global warming. Cirrus clouds are the most efficient at reflecting heat back to Earth, and these are becoming more prevalent. The worry is that global warming will either reduce the global level of cloud cover or change the character of the clouds and their influence on solar radiation. Recent modelling conducted by James Murphy of the Met Office Hadley Centre for Climate Prediction has factored in a range of uncertainties in cloud formations such as cloud cover, the lifetime of clouds and their thickness. The model suggested that warming could reach up to 10°C on the basis of a doubling of atmospheric CO2, which is widely regarded as inevitable. David Stainforth of Oxford University warns of the possibility of a 12°C rise by the end of the century.

An article of 10 July 2004 in New Scientist was headed 'Peat bogs harbour carbon time bomb'. Global warming is causing peat bogs to dissolve. Peat bogs store huge quantities of carbon, and the evidence is that this is leaching into rivers in the form of dissolved organic carbon (DOC) at the rate of about 6 per cent per year. It appears to be another feedback loop: an increase in CO2 in the atmosphere is absorbed by vegetation, which in turn releases it into the soil moisture. There it feeds bacteria in the water which, in turn, break down the peaty soil, allowing it to release stored carbon into rivers. Bacteria in rivers rapidly convert DOC into CO2 that is released into the atmosphere. Recent research shows that DOC in Welsh rivers has increased 90 per cent since 1988. Research in the University of Wales at Bangor indicates that 'The world's peatland stores of carbon are emptying at an alarming rate' (Chris Freeman). Freeman predicts that, by the middle of the century, DOC from peat bogs could be as great a source of atmospheric CO2 as the burning of fossil fuels.
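To see what a sustained 6 per cent annual rise means, here is a small compound-growth sketch (my own arithmetic, not from the New Scientist article): at that rate the DOC flux doubles roughly every twelve years, and between 2004 and mid-century it would grow roughly fifteen-fold – which is the scale of increase behind Freeman's suggestion that peat could rival fossil fuel burning as a CO2 source.

```python
import math

# Compound growth of dissolved organic carbon (DOC) leaching at the
# quoted ~6% per year. Illustrative arithmetic only - the real flux
# depends on temperature, rainfall and bog chemistry.

RATE = 0.06  # 6% per year growth in DOC flux

# Doubling time for exponential growth: ln(2) / ln(1 + r)
doubling_years = math.log(2) / math.log(1 + RATE)

# Growth factor from 2004 to mid-century (taken here as 2050)
factor_by_2050 = (1 + RATE) ** (2050 - 2004)

print(round(doubling_years, 1))   # ~11.9 years
print(round(factor_by_2050, 1))   # ~14.6-fold increase
```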
It is expected that the next range of predictions by the IPCC due in 2007 will take account of feedback from cloud cover and produce significantly higher worst case temperature scenarios (from New Scientist. At the present time it is cattle. It transpires that this was due to emissions of methane. the latter point is a serious cause of concern.000 MILLION TONNES) USA EU China Russia Japan India +++ ++ 1990 2002 1994 only 6000 Figure 2. It is time to spread the net more widely if there is not to be a rerun of the Eocene catastrophe. Up to now the focus has been on limiting CO2 emissions almost to the exclusion of other greenhouse gases. according to the UN.ARCHITECTURE IN A CLIMATE OF CHANGE are the most efficient at reflecting heat back to Earth and these are becoming more prevalent. 45–47). pp. all more powerful greenhouse gases than CO2. It is sobering to compare how. In the Eocene epoch 50 million years ago there was a catastrophic rise in temperature with seas 12 C warmer than today. It should be noted that the improvement in the case of Russia is due to the collapse of its heavy industry since 1990 (Figure 2. Evidence from plant fossils has shown that CO2 levels were similar to the present day and therefore could not have been responsible for that level of warming. rice fields and termites which are major sources of the gas. ozone and nitrous oxide. different countries are making progress or otherwise in cutting their CO2 emissions. The evidence comes from oxygen trapped in the shells of marine fossils. Rice is a particularly intensive source. With a predicted steep rise in emissions from transport over the next decades. According to Professor Beerling of Sheffield University: ‘Methane is being produced in increasing amounts thanks to the spread of agriculture in the tropics.4 CO2 emissions by principal nations (UNFCCC 2004) 3000 4000 5000 1000 2000 0 ++1999+++2001 (both China figures include Hong Kong) SOURCES: UNFCCC (China figures from IEA) 18 . 
What is being done?

The core of the problem lies in the disparity between the industrial and developing countries in terms of carbon dioxide emission per head. The average citizen in the North American continent is responsible for around 6 tonnes of carbon per year; in Europe it is about 2.8 tonnes per person. Though starting from a very low base, the most rapidly rising per capita emissions are occurring in Southeast Asia.

Despite all the international conventions, carbon dioxide emissions from developed countries are showing little sign of abating. In 2003 there was a 1–2 per cent increase in CO2 emissions, and globally the year 2003 witnessed a significant rise in the level of atmospheric carbon to 3 ppm per year – nearly double the average for the past decade. The USA, at twice the European average, is still increasing its emissions, which currently stand at 23 per cent of the world's total. It has to be remembered that the UN IPCC scientists stated that a 60 per cent cut worldwide would be necessary to halt global warming, a figure later endorsed by the UK Royal Commission on Environmental Pollution.

As a first step on the path of serious CO2 abatement, an accord was signed by over 180 countries in 1997 in Kyoto to cut CO2 emissions by 5.2 per cent globally based on 1990 levels. The UK was on track to meet its 12.5 per cent reduction target thanks to the gas power programme and the collapse of heavy industry. However, these benefits have now been offset by the growth in emissions from transport; with a predicted steep rise in emissions from transport over the next decades, this is a serious cause of concern.

One great anomaly is that air travel is excluded from the calculations of CO2. The Department of Transport expects the numbers flying in and out of the UK to rise from 180 million in 2004 to 500 million in 2030 (reported in The Observer, 7 August 2004). Aviation's share of the UK's CO2 emissions will have increased four-fold by 2030, and the Parliamentary Environmental Audit Committee (EAC) forecasts that by 2050 air transport will be responsible for two thirds of all UK greenhouse gas emissions. At the same time it should be noted that CO2 accounts for only one third of the global warming caused by aircraft (Tom Blundell and Brian Hoskins, members of the Royal Commission on Environmental Pollution, New Scientist, 22 March 2004, p. 24). If aircraft emissions were also taken into account the situation would be substantially worse.

Even more of a problem faces the USA. Kyoto set its reduction target against the 1990 level at 7 per cent. The US has refused to ratify Kyoto, but Russia has signed up, which meant that the Treaty came into force in February 2005. However, since then the US has enjoyed a significant economic boom with a consequent increase in CO2 emissions: to meet the Kyoto requirement it would now have to make a cut of 30 per cent. The only way it would be prepared to consider this kind of target is by carbon trading – not, in itself, an illegitimate recourse. However, it all depends on the currency of exchange: the US wants to use trees to balance its carbon books. This seems to have been uppermost in the minds of the European delegates to the conference in The Hague in November 2000 when they refused to sign an agreement which allowed the USA to continue with business as usual in return for planting trees.

Planting forests may look attractive but it presents three problems. First, there have been attempts to equate the sequestration capacity of trees with human activities such as driving cars: five trees could soak up the carbon from an average car for one year, or 40 trees counteract the carbon emitted by the average home in five years. Unfortunately there is not a reliable method of accounting for the sequestration capacity of a single tree, let alone a forest. Another problem, recently exposed in the USA, is that forests are inclined to burn down, releasing massive quantities of carbon into the atmosphere. The last point refers back to the Hadley Centre prediction that there will be accelerating forest growth over the next 50 years, then rapid die-back. Overall, if governments and society fail to respond to the imperatives set by climate change, forests could possibly end up huge net contributors to global warming.
The outlook for energy

A report published in May 2004 from the European Union called 'World Energy, Technology and Climate Change Outlook' offers an insight into a future still dominated by fossil-based energy. It predicts that CO2 emissions will increase by 2.1 per cent per year for the next 30 years whilst energy use will rise by 1.8 per cent. The reason for the difference is that there will be increasing use of coal as oil and gas prices rise and reserves contract. The report expects that energy use in the US will increase by 50 per cent and in the EU by 18 per cent over the same period. It also estimates a fall in the share of energy from renewables from 13 per cent today to 8 per cent, mainly because growth in renewables will not keep pace with overall energy consumption. Developing countries, especially China and India, will increase their share of global CO2 emissions from 30 per cent in 1990 to 58 per cent in 2030.

Market forces are already powering the drive towards renewable energy in some industrialised countries. When you see oil companies investing in renewables then it must be the dawning of the realisation that saving the planet might just be cost effective. In the final analysis, what they cannot escape is the inevitability of dramatic increases in the cost of fossil-based energy as demand increasingly outstrips supply and reserves get ever closer to exhaustion.

Oil consumption has doubled in the last 20 years and now stands at 80 million barrels per day, an all time high. The oil companies estimate that reserves will be exhausted within about 40 years, but that is not so much the prime issue. There are conflicting estimates, but petroconsultants who advise the government claim that only one new barrel of oil is discovered for every four that are used; their estimate is that we are only two years away from the peak of oil production. According to Stephen Lewis, City economic analyst, 'the kind of growth rates to which oil consuming countries are committed appear to be generating the demand for oil well above the underlying growth in the rate of supply . . . the US, the Middle East, the North Sea . . . all appear to be past their production peaks' (The Guardian, 9 August 2004). Moreover, the major reserves are located within countries that do not have a good record of stability. The North Sea reserves are already diminishing, and by 2020 the UK will be importing 80 per cent of its energy based on the current rate of consumption. The histogram in Figure 2.5 indicates the rate of decline of UK reserves of both oil and gas.

[Figure 2.5 UK oil and gas reserves to 2020 (Association for the Study of Peak Oil and Gas 2004)]

China is the world's second biggest emitter of greenhouse gases and the world's biggest producer of coal. It is virtually ruling out measures to mitigate its CO2 emissions – which, as a developing country, it is not required to do. With cities like Shanghai growing at an exponential rate, China plans to nearly treble its output from coal fired power stations by 2020 to meet its expected energy needs.
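The growth rates quoted in this section – CO2 emissions rising at 2.1 per cent per year and energy use at 1.8 per cent per year over 30 years – compound quickly. A minimal sketch of the arithmetic (the rates come from the text above; the projection itself is simple compound growth, not data from the EU report):

```python
# Project the compound growth rates quoted from the EU 'World Energy,
# Technology and Climate Change Outlook' report (CO2 +2.1%/yr, energy +1.8%/yr).

def grow(rate_percent: float, years: int) -> float:
    """Multiplicative growth factor after `years` at a fixed annual rate."""
    return (1 + rate_percent / 100) ** years

co2_factor = grow(2.1, 30)      # CO2 emissions nearly double in 30 years
energy_factor = grow(1.8, 30)   # energy use rises by roughly 70% in 30 years

print(f"CO2 after 30 years:    x{co2_factor:.2f}")
print(f"Energy after 30 years: x{energy_factor:.2f}")
```

At these rates emissions multiply by about 1.87 and energy use by about 1.71 over the 30-year horizon – which is the quantitative force behind the report's pessimism.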
These new power plants in China are not being constructed to accommodate future CO2 sequestration equipment and they are likely to be in service for 50 years. The Chinese economy is growing at 8–10 per cent a year; it has joined the World Trade Organisation and opened its markets to international trade, which gives additional impetus to economic growth. At the present time in China one person in 125 has a car. In no time there will be one person in 50, then perhaps one in 20, owning a car. By 2020 it is estimated that there will be one billion cars on the world's roads.

The world is one huge combustion engine which consumes 74 million barrels of oil a day to keep it running – for now! Even without including the prospects for China, the current demand for oil worldwide is growing at 2 per cent a year, especially by developing countries on the rapid road to developed status. At the same time petrol geologists estimate that production of oil will peak in the first decade of 2000 and then output will decline by 3 per cent a year. An updated 2004 scenario for world peak oil production by Colin Campbell, in a graph published on the website of the Association for the Study of Peak Oil (ASPO), shows that both gas and oil worldwide will peak around 2008 (Figure 2.6). Oil geologist Colin J. Campbell says we are 'at the beginning of the end of the age of oil'. He predicts that after 2005 there will be serious shortages of supply with steeply rising prices and by 2010 a major oil shock reminiscent of the 1970s – except that then there were huge reserves to be tapped. In the oil shocks of the 1970s we were extricated from long-term pain by the discovery of large oil reserves in the North Sea and Alaska. This time there are no escape routes. According to the environmental policy analyst Dr David Fleming it is 'not possible that we can survive without a dramatic increase in the price of oil' (The Guardian, 2 March 2000). The government was warned that another oil price shock could trigger a stock market crash.

Add to this the fact that at least half the remaining global reserves will be located in five autocracies in the Middle East which have already demonstrated their ability to manipulate prices, causing the oil shocks of the 1970s. These states account for 35 per cent of the market, the point at which it is considered they are able to control prices at a time of rising demand. The Kuwait episode and then the Iraq war should remind us of the sensitivity of the situation. For the US, the Department of Energy estimates that imports of oil will rise from 54 per cent in 2004 to 70 per cent by 2025 due to its declining reserves and increasing consumption.

As we have noted, gas has its uncertainties. There are still large reserves, but they are located in places like the states around the Caspian basin, which Russia regards as its sphere of influence – not much comfort to the west. The UK's North Sea gas fields have a life expectancy of 15–20 years, and it is expected that they will be exhausted by 2016. The government has acknowledged that, by 2020, 90 per cent of the UK's gas will come from Russia. Beyond 2008, increasing price volatility for both oil and gas seems inevitable, in particular for the UK.
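The shape of the argument above – output falling at roughly 3 per cent a year after the peak while demand grows at about 2 per cent a year – can be sketched in a few lines. The shortfall figures it produces are illustrative arithmetic only, not ASPO data:

```python
# Illustrative supply/demand divergence after a production peak.
# Rates from the text: output declining ~3%/yr post-peak, demand growing ~2%/yr.
# The 80 million barrels/day starting point is the consumption figure quoted above.

START_MBD = 80.0  # million barrels per day at the assumed peak

def trajectory(rate_percent, years):
    """Level in each year for a fixed annual percentage change."""
    return [START_MBD * (1 + rate_percent / 100) ** y for y in range(years + 1)]

supply = trajectory(-3.0, 10)   # declining output
demand = trajectory(+2.0, 10)   # growing demand

for year in (0, 5, 10):
    gap = demand[year] - supply[year]
    print(f"year {year:2d}: demand {demand[year]:5.1f}, "
          f"supply {supply[year]:5.1f}, shortfall {gap:5.1f} mb/d")
```

Within a decade of the peak the gap between these two curves approaches half of today's production – which is why the analysts quoted here expect severe price shocks rather than a gentle transition.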
Recently questions have been raised about the government's estimates of future generation capacity within the nuclear industry. The DTI's energy predictions assume that, for the next decade, the creaking nuclear industry will operate at full capacity with an unprecedented rate of efficiency. Environment Data Services have described these assumptions as 'heroically optimistic' – a verdict which therefore also applies to the government's target of 20 per cent reduction in CO2 emissions by 2010, since that target assumes full bore production by its ageing reactors. The pressurised water and gas cooled reactors have been beset with problems: in fact nuclear output dropped 4 per cent in 1999 and 10 per cent in 2000, and in the latter year coal fired generation was up 13 per cent. All but two of the Magnox stations have closure dates before 2008. After that, in 2008, the EU will enforce desulphurisation regulations on coal fired plants, making them uneconomic. The use of biofuels may offer a future for coal fired power stations: a plant operated by Biojoule in East Anglia is already producing 15 000 tonnes a year of specially processed wood for partial fuel replacement in coal fired power plants.

[Figure 2.6 World oil and gas production to 2050]

The projected fuel mix for the UK in 2010 is:

● Gas 57 per cent
● Coal 16 per cent
● Nuclear 16 per cent
● Renewables 10 per cent

The government undertaking is to meet 10 per cent of electricity demand by 2010 from renewable sources; the EU target is 21 per cent. This makes the latest offering from the European Environment Agency (EEA) all the more remarkable and disturbing. Its 2004 report states that within the European Union the share of renewable electricity rose from 12 per cent in 1990 to only 14 per cent in 2001, suggesting that much more needs to be done. The EEA has produced a histogram which shows the relative performance of member states (Figure 2.7) (Signals 2004, a European Environment Agency update on selected issues, Copenhagen, May 2004). The UK is fourth from bottom of the table of all countries which have a contribution from renewables.

[Figure 2.7 Comparison of electricity derived from renewables in 25 EU states, with indicative targets (source: European Environment Agency 2004)]

What tends to be overlooked is that, in ten years' time, demand will probably have increased by more than this percentage and, at the same time, many of the nuclear power plants are likely to have been decommissioned. By 2015 the UK could be facing an energy vacuum, which emphasises the need to take the plunge into renewable technologies as a matter of urgency. What is certain is that energy prices will rise steeply, since there is still only patchy evidence of the will to stave off this crisis by the deployment of renewable energy technologies. The pressure to incorporate the external costs – damage to health, buildings and, above all, the biosphere – into the price of fossil fuel will intensify as the effects of global warming become increasingly threatening. The obvious conclusion to draw from all this is that buildings being designed now will, in most cases, still be functioning when the screws on fossil fuels are really tightening. For buildings wholly reliant on fossil-based energy it will be impossible to make accurate predictions as to running costs in, say, ten years' time.
Chapter Three

Renewable technologies – the marine environment

Two quotes set the scene for this chapter:

'A sustainable energy system is probably the single most important milestone in our efforts to create a sustainable future . . .' (Oystein Dahle, Chairman)

'Decarbonisation of the energy system is task number one.' (Hermann Scheer, The Solar Economy, Earthscan 2002, p. 7)

The UK energy picture

In 2002 total inland energy consumption in the UK was 229.6 million tonnes of oil equivalent (mtoe). Nuclear contributed 21.3 mtoe to the total; renewables and energy from waste accounted for a mere 2.7 mtoe (UK Energy in Brief, DTI, July 2003). The government has declared a target of 10.4 per cent for renewables by 2010 and an aspiration to achieve 20 per cent by 2020. Is it fantasy to suppose that renewable energy sources could equal, even exceed, this capacity without help from nuclear? This is a key question, since the Energy White Paper of February 2002 put nuclear on hold pending a demonstration that renewables could fill the void left by the decommissioning of the present cluster of nuclear facilities.

The energy market is heavily weighted in favour of the established suppliers. One of the key factors favouring the big suppliers is the web of direct and indirect subsidies which the industry enjoys, such as the fact that its raw material is regarded as being a free gift from nature. This is the market situation in which renewables have to compete, and it constitutes a sharply tilted playing field in favour of the fossil fuel industries. The problem is that renewables, with their high investment costs, violate one of the founding laws of accountancy: that investors want a high return on capital in the short term.
For some reason energy is not subjected to the normal rules of financial risk assessment in determining the market value of the commodity. Never has it been more apparent that oil and gas are high risk commodities that can have a powerful negative impact on the Stock Index due to price volatility. Only now is it being widely realised that reserves, apart from coal, will be exhausted sooner rather than later. In contrast, renewables, being relatively high capital cost but low running cost technologies, are not nearly so affected by macroeconomic shifts such as the international price of oil or the Stock Index: repayment of capital and operating costs are largely fixed and so represent a low risk.

At the same time the market pays scant regard to its environmental responsibilities, especially that of driving up global warming. We have the bizarre situation of a highly subsidised, highly polluting industry; this is clearly an abuse of the term 'free market'. The European Commission's ExternE project has sought to quantify the externalities. It concludes that the real cost of electricity from coal and oil is about double the current economic cost to the producers; for gas generated electricity the shortfall is about 30 per cent. The New Elements for the Assessment of External Costs from Energy (NewExt) project is refining the methodology to provide more accurate information and was due to report in 2004. The results should make it possible more accurately to calculate life-cycle environmental costs.

As far as the major power distributors are concerned, a 20 per cent share of supply from renewables may well be regarded as the 'red line' beyond which they will be forced to run on less than full capacity, at the same time compensating for fluctuations in the supply from renewables. Beyond this percentage the grid would have to be reconfigured to encompass extensive distributed generation. According to Hermann Scheer this would threaten the long-term ambitions of the power industry, which sees the prospect of ultimately controlling information transmission as well as energy: 'They hold all the cards they need to construct a comprehensive commodity supply and media empire' (ibid., p. 60).

If the contours of the energy playing field really were level, then renewables would offer excellent investment opportunities. Since it seems inevitable that renewables will have to fight their corner in a free market for an indefinite period, these anomalies must be corrected if a decarbonised electricity infrastructure is to be a reality.

Energy from rivers and seas

Energy can be derived from water according to four basic principles: first, hydroelectricity from the damming of rivers; second, hydrodynamics, or the movement of water by virtue of tidal rise and fall, tidal currents and waves; third, the dynamics of thermal difference; and fourth, the extraction of hydrogen from water via electrolysis. This chapter focuses on the first and second technologies. Energy extracted from the marine environment is, historically, on the one hand the most capital intensive form of energy but, on the other, offers the longest-term energy certainty coupled with the highest energy density.

Hydroelectric generation

Hydroelectric schemes which exploit height difference in the flow path of water are the oldest method of generation from water. The approach involves damming a watercourse to create the necessary pressure to drive high speed impulse turbines.
The Boulder Dam scheme in the USA was the first large-scale project, implemented in the 1930s as a means of driving the country out of recession. One of the first major projects to be completed after the Second World War was the Aswan Dam scheme initiated by Colonel Nasser, the Egyptian President. Work started in 1960 to create the huge Lake Nasser as the storage facility and as a potential irrigation source for a major part of the country. It cost $1 billion ($10 billion at current prices) and began operations in 1968, delivering 2000 megawatts (MW) of power. The project has served to illustrate some of the problems which accompany hydroelectric schemes of this massive scale. Historically the Nile has conveyed millions of tonnes of silt per year from the Ethiopian highlands, part of which used to be deposited in the Nile flood plain. The silt, mostly soil, is now trapped behind the dam, a fact which is calculated to have done irreparable damage to the fertility of the Nile valley and delta. To compensate for the loss Egypt is now one of the world's heaviest users of agricultural chemicals. A further problem is that evaporation from the lake has been much greater than anticipated, and the country is considering reactivating storage schemes beyond its borders. In addition, the dam has so disrupted the flow of the Nile that it threatens the agriculture of the delta.

One of the worst drawbacks concerns saline pollution. Salts are dissolved in river water and modern irrigation systems leave salts behind – about one tonne per hectare. Large areas of fertile land are being threatened by the salt, which makes the ground toxic to plants and ultimately causes it to revert to desert. There is now a project to remove saline water from two million hectares of land at a cost which exceeds the original price of the dam (New Scientist, 29 June 1991).

In December 1994 work commenced on the Three Gorges scheme on the Yangtze River. The dam is two kilometres long and some 100 metres high. It has created a lake 600 kilometres long, displacing over one million people. In return the country will receive 18 000 MW of power, which is 50 per cent more than the world's existing largest dam, the Itaipu Dam in Paraguay. Even so, in the long term this dam will make a relatively small impact on China's dependency on fossil fuel. Also, in November 1994, plans were revived to generate up to 37 000 MW along the course of the Mekong River, again with drastic potential social consequences.

With the exception of projects on the River Danube, Europe gains most of its hydroelectricity from medium to small-scale plants. Most of Norway's supply is from hydro sources; in Sweden it is 50 per cent of the total, and Scotland produces 60 per cent of its electricity from non-fossil sources, mostly hydro.

Small-scale hydro

In small-scale projects water is usually contained at high level by a dam or weir and led down a pipe (penstock) or channel to a generator about 50 m below to create the necessary force to drive the generator. According to the Department of Trade and Industry, 'The UK has a considerable untapped small-scale hydro resource', such as the discreet plant at Garnedd in Gwynedd, North Wales. Given the right buying-in rates from the National Grid, such ventures could become a highly commercial proposition (further information in Smith, P.F. (2002) 'Small-scale hydro', in Sustainability at the Cutting Edge, Architectural Press, Ch. 10, pp. 28–32). An intermediate technology version has been designed for developing countries in which a standard pump is converted to a turbine and an electric motor to a generator (New Scientist, 7 May 1994).
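The output of a small hydro scheme follows directly from the head and the flow rate: P = η·ρ·g·Q·H. A quick illustration – the ~50 m head is the figure given above, while the flow rate and efficiency are assumed values for the example, not data for any named plant:

```python
# Hydro power from head and flow: P = efficiency * rho * g * Q * H.
# The ~50 m head comes from the text; flow and efficiency are assumptions.

RHO = 1000.0   # density of fresh water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_kw(head_m: float, flow_m3s: float, efficiency: float = 0.8) -> float:
    """Electrical output in kW for a given head (m) and flow (m^3/s)."""
    return efficiency * RHO * G * flow_m3s * head_m / 1000.0

# A small scheme with a 50 m head and a modest 0.5 m^3/s flow:
print(f"{hydro_power_kw(50.0, 0.5):.0f} kW")  # ~196 kW
```

Even a modest stream with a good head therefore yields power in the hundreds of kilowatts, which is why the DTI regards the UK's small-scale resource as significant.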
'Run of river' systems

Many rivers have a flow rate in excess of 0.75 m per second, which makes them eligible to power so-called run of river generators. The conventional method is to create a dedicated channel which accommodates a cross-flow generator – a modern version of a water wheel – or a 'Kaplan' turbine, which has variable blades. It is claimed that the water turbine converts 50 per cent of the energy in the water to electricity, with a theoretical maximum of 59 per cent.

A Norwegian company, Water Power Industries (WPI), has developed a water turbine on floats that has a vertical axis rotor fitted with blades shaped like an aircraft wing. The wings are continuously adjusted by computer monitoring to keep them at their most efficient angle. Not only could this system capture the energy of many rivers, it could also be situated in channels with a high tidal flow which are too shallow for other types of tidal turbine (Figure 3.1).

[Figure 3.1 WPI turbine (courtesy of CADDET, issue 1/04)]

Tidal energy

Tidal energy is predictable to the minute for at least the rest of the century, though tide levels can be affected by storm surges, as experienced dramatically in the UK in 1953. The British Isles benefit from some of the greatest tidal ranges in Europe. There are at least four technologies that can exploit the action of the tides:

● The tidal barrage
● The tidal fence or bridge
● Tidal mills or rotors
● Impoundment.

The tidal barrage

Trapping water at high tide and releasing it when there is an adequate head is an ancient technology; a medieval tide mill is still in working order in Woodbridge, Suffolk. Tidal power works on the principle that water is held back on the ebb tide to provide a sufficient head of water to rotate a turbine. Dual generation is possible if the flow tide is also exploited.

In the first quarter of the twentieth century this principle was applied to electricity generation in the feasibility studies for a barrage across the River Severn. A Royal Commission was formed in 1925 to report on the potential of the River Severn to produce energy at a competitive price. It reported in 1933 that the scheme was viable. A further study was completed in 1945, and the latest in-depth investigation was concluded in 1981. In all cases the verdict was positive, though the last report was cautious about the cost/benefit profile of the scheme in the context of nuclear energy. Since then the technology has improved, including a doubling of the size of generators. Despite this supporting evidence the UK still shows reluctance to exploit this source of power. Recently a discussion document produced by the Institution of Civil Engineers stated in respect of tidal energy:

it appears illogical that so potentially abundant an option will be deferred perpetually when the unit power costings involved are estimated to be reasonably competitive with all alternatives except combined cycle gas turbines.

The only operational barrage in Europe is at La Rance in Brittany. It is a bidirectional scheme, that is, it generates on both the flow and ebb tides. Annual production at La Rance is about 610 gigawatt hours (GWh). Despite its success as a demonstration project, the French government elected to concentrate its generation policy on nuclear power, which accounts for about 75 per cent of its capacity.

Power generation is obviously intermittent, but the spread of tide times around the coasts helps to even out the contribution to the grid. Two-way operation is only beneficial where there is a considerable tidal range, and even then only during spring tides.
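The premium on a large tidal range follows from first principles: the potential energy stored in a filled basin is E = ½·ρ·g·A·R², so doubling the range quadruples the energy available per tide. A sketch of the relationship – the basin area and ranges below are illustrative values, not figures for La Rance or the Severn:

```python
# Ideal energy per emptying of a tidal basin: E = 0.5 * rho * g * A * R^2.
# Basin area and tidal ranges are illustrative, not data for any real scheme.

RHO_SEA = 1025.0  # seawater density, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2

def energy_per_tide_gwh(area_km2: float, range_m: float) -> float:
    """Potential energy (GWh) released by draining a basin of the given area
    through the full tidal range, before turbine losses."""
    joules = 0.5 * RHO_SEA * G * (area_km2 * 1e6) * range_m ** 2
    return joules / 3.6e12  # joules -> GWh

# Doubling the range quadruples the energy per tide:
print(f"{energy_per_tide_gwh(20, 4):.2f} GWh")  # ~0.45 GWh
print(f"{energy_per_tide_gwh(20, 8):.2f} GWh")  # ~1.79 GWh
```

This square-law dependence is why estuaries with exceptional ranges, such as the Severn, dominate every survey of UK tidal barrage potential.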
Up to now, schemes proposed in the UK have been one directional, that is, generating only on the ebb tide. The principle is that water is held upstream at high tide until the downstream level has fallen by at least 2.0 metres. The upstream volume of water is supplemented by pumping additional water from downstream on the flood tide; since the recoverable energy rises with the square of the head, this increases the return on the water passing through the barrage. This is reckoned to be more cost effective than bidirectional generation in most situations (Figure 3.2).

[Figure 3.2 Basic tidal barrage]

The technology of barrages was transformed by the caisson techniques employed in the construction of the Mulberry Harbour, floated into place after D-Day in the Second World War. It is a modular technique with turbine caissons constructed on slipways or temporary sand islands.

According to the Department of Trade and Industry's Energy Paper Number 60, November 1992: 'The UK has probably the most favourable conditions in Europe for generating electricity from the tides.' In fact, it has about half of all the European Union's tidal generating potential of approximately 105 terawatt hours per year (TWh/y) (ETSU). The DTI report concludes: 'There are several advantages arising from the construction of tidal barrages in addition to providing a clean, non-polluting source of energy. Tidal barrages can assist with the local infrastructure of the region, create regional development opportunities and provide protection against local flooding within the basin during storm surge.'

In 1994 the government decided to abandon further research into tidal barrages for a variety of reasons ranging from the ecological to the economic. One of the arguments against tidal barrages is that they would trap pollution upstream. However, since rivers are now appreciably cleaner than in the 1970s, thanks largely to EU Directives, this should not now be a factor. The Thames is claimed to be the cleanest river in Europe, playing host to salmon and other desirable fish species. The economic argument could be countered if the market corrections stated earlier were to be implemented: a barrage is a high capital cost, long life, low running cost technology, and in market terms a normal market discount rate heavily penalises a high capital cost. Professor Eric Wilson, a leading tidal expert in the UK, sums up the situation by saying that a tidal power scheme may be expensive to build, but it is cheap to run: 'After a time, it is a gold mine.' A group of engineering companies has renewed the argument in favour of the River Severn barrage, indicating that it would meet 6 per cent of Britain's electricity needs whilst protecting the estuary's coastline from flooding (New Scientist, 25 January 2003).

Since the 1970s another concern has grown in stature, namely the threat from rising sea level amplified by an accelerating rate of storm surges. Following the 1953 floods, it was decided that London should be protected by a barrage. The Thames Barrier was designed in the 1970s to last until 2030; the threat from rising sea level was hardly a factor at the time. In the year 1986/87 the barrier was not closed once against tidal and river flooding; in 2001 it closed 24 times. Now it is a major cause of concern that the barrier will be overwhelmed by a combination of rising sea level, storm surges and increased rainfall and river rundown well before that date. If one flood breaks through the Thames Barrier it will cost about £30 billion, or roughly 2 per cent of GDP (Sir David King, Government Chief Scientist, The Guardian, 9 January 2004). A further complication is the Thames Gateway project, which includes 120 000 new homes below sea level. All this combines to make a strong case for an estuary barrage that will protect both the Thames and the Medway and, at the same time, generate multi-gigawatt power for the capital (Figure 3.3).

Around the world numerous opportunities exist to exploit tidal energy, notably in the Bay of Fundy in Canada, where there is a proposal to generate 6400 MW. China has 500 possible sites with a total capacity of 110 000 MW.

The tidal fence

There is, however, an alternative to a barrage which can also deliver massive amounts of energy at less cost/kWh, namely the tidal fence or bridge, which has only recently come into prominence.
From the ecological point of view the system has the advantage over the barrage option of preserving the integrity of the intertidal zones.5 m in diameter and rotate at 25 rpm. It is a four-phase project with the first phase comprising a 4 kilometre tidal fence between the islands of Dalupiri and Samar in the 34 . In terms of energy density. Wading birds have nothing to fear. At the same time the system allows for the free passage of silt.3 River Thames flood risk zones below 5m contour and suggested barrage bridge which has only recently come into prominence.ARCHITECTURE IN A CLIMATE OF CHANGE 5 miles Barrage Thames Barrier Figure 3. Multiple Darrieus rotors capture energy at different levels of the tide. each turbine having a peak output of up to14 MW. the tidal fence outstrips other renewable technologies: Wind 1000 kWh/m2 Solar (PV) 1051 kWh/m2 Wave 35–70 000 kWh/m2 Tidal fence 192 720 kWh/m2 (Source: Blue Energy Canada Inc. The slow rotation of the turbines poses minimum risk to marine life. Vertical axis Davis Hydro Turbines are housed between the concrete fins.. The tidal fence system.75 m.4). They can function within a tidal regime of at least 1. The generators are housed in the box structure bridge element which can also serve as a highway or platform for wind turbines (Figure 3. The rotors are 10.) Blue Energy has designed a major installation at Dalupiri in the Philippines. for example as designed by Blue Energy Canada Inc. The estimated maximum capacity of the 274 turbines housed in the tidal fence is 2. The Open University Renewable Energy Team has selected 17 estuary sites suitable for medium to large-scale barrage systems (Boyle. Blue Energy has already identified the Severn estuary as a suitable site. The data which are used here have been extracted from a paper from the Tyndall Centre in the University of Sussex. An exception is tidal energy which is predictable and this is where the tidal fence comes into its own. 
The Tyndall paper suggests that the optimum output from renewables is 136. Oxford University Press). The structure is designed to withstand typhoons of 150 mph and tsunami waves of 7 m.RENEWABLE TECHNOLOGIES – THE MARINE ENVIRONMENT Figure 3. in turn.1 GW. G.2 GW guaranteeing a base daily average of 1. On the assumption that 35 . (ed. 2004.) (1996) Renewable Energy – Power for a Sustainable Future.4 Blue Energy tidal fence concept San Bernardino Strait. The British Isles offer considerable opportunities for the application of this technology. The potential for the UK Many speculations have been offered regarding the ultimate generating potential of various renewable technologies. UK ‘Electricity Scenarios for 2050’ Working paper 41. cites data from the DTI 1999 and the RCEP 2000.5 GW as defined in the first of four scenarios Many of these are intermittent and unpredictable. by Jim Watson which. which means that a rotor 15 m in diameter will generate as much power as a wind turbine of 60 m diameter. the greatest potential source of tidal currents is located off the islands of Guernsey. This is based on an extrapolation from the Dalupiri scheme and is therefore only a rough estimation.ARCHITECTURE IN A CLIMATE OF CHANGE these sites would be equally suitable for tidal fences. it is an appropriate system to combine with pumped storage to even out the sinusoidal curves. Tidal currents The European Union has identified 42 sites around the coasts of the UK which have sufficient tidal velocity to accommodate tidal turbines. They operate at a minimum velocity of about 2 m/s. The installed cost at present is estimated to be US$1400 per kW. The hydroplanes are profiled like an aircraft wing to create ‘lift’. According to Blue Energy they have the potential to generate 26 GW or more than one third of the UK’s generating capacity. An ideal site could be the Pentland Firth. 
The tidal mill Horizontal axis turbines are similar to wind turbines but water has an energy density four times greater than air.5 knots. There are several technologies being researched. this would produce a peak output of about 60 GW and a daily average of 30 GW. It is still at the development stage and its final manifestation will operate in streams in both directions. including the Stingray project which exploits the tidal currents to operate hydroplanes which oscillate with the tide to drive hydraulic motors that generate electricity. of the UK’s electricity demand. However. It is estimated that tidal stream energy has the potential to meet one quarter of the electricity needs of the UK which amounts to about 18 GW. the most likely technology to succeed in the gigawatt range are the vertical or horizontal turbines.50. The minimum velocity of tidal flow to operate a tidal fence is 1. If only half of the full estuary width were available to house turbines in each case. 36 . this technology would deliver 9 GW. However. However. With a load factor of 0. The tidal fence vertical turbine is claimed to be ideal for tidal streams since it has multiple rotors which can capture tidal energy at different depths. A 1993 DTI report claimed that the Pentland Firth alone could provide 10 per cent. underwater turbines are subject to much less buffeting than their wind counterparts. they add up to a linear capacity of 208 km. especially as the cost is highly competitive. or about 7 GW. it should be enough to cause a reappraisal of the tidal potential of the UK.75 m/s or 3. Since the tidal flow is constant. The strength of the current tends to be strongest near the surface so a vertical series of rotors could accommodate the different speeds at various depths. Since the output from the tidal fence is predictable and peak output may not coincide with peak demand from the grid. The system consists of a circular barrage built from locally sourced loose rock. 
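The rough extrapolation quoted above, from the 4 km Dalupiri fence to the 208 km of UK estuary sites, can be reproduced as a back-of-envelope calculation. This is only a sketch of the text's own arithmetic; the per-kilometre yield is inferred from the Dalupiri figures rather than being a published Blue Energy parameter.

```python
# Back-of-envelope check of the tidal fence extrapolation in the text.
# Dalupiri phase one: a 4 km fence rated at 2.2 GW peak, 1.1 GW daily average.
dalupiri_km = 4.0
dalupiri_peak_gw = 2.2

peak_gw_per_km = dalupiri_peak_gw / dalupiri_km   # 0.55 GW per kilometre

# The 17 Open University estuary sites total 208 km of linear capacity,
# of which the text assumes only half is available to house turbines.
uk_linear_km = 208.0
usable_km = uk_linear_km / 2.0

peak_gw = usable_km * peak_gw_per_km              # about 57 GW
daily_average_gw = peak_gw / 2.0                  # about 29 GW

print(round(peak_gw, 1), round(daily_average_gw, 1))
```

The result, 57.2 GW peak and 28.6 GW daily average, is consistent with the rounded figures of 'about 60 GW' and 'about 30 GW' quoted above.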
According to Peter Fraenkel, Director of Marine Current Turbines, the best tidal stream sites could generate 10 MW per square kilometre. His company has built a 300 kW demonstration turbine off the Devon coast and has a project for a turbine farm in the megawatt range (Figure 3.5). This company is presently investigating the opportunities around Guernsey and Alderney that could conceivably generate thousands of megawatts.

Figure 3.5 Tidal stream turbines or tidal mills, serviced above water

Offshore impoundment
An alternative to estuary tidal generation is the concept of the tidal pound. The idea is not new, as mentioned earlier. The system consists of a circular barrage built from locally sourced loose rock, sand and gravel, similar in appearance to standard coastal defences. The system is ideal for situations in which there is a significant tidal range and shallow tidal flats encountered in many coasts of the UK. It is divided into three or more segments to allow for the phasing of supply to match demand. Tidal pounds would be fitted with low-head tidal generating equipment which is a reliable and mature technology. According to Tidal Electricity Ltd, computer simulations show that a load factor of 62 per cent can be achieved with generation possible 81 per cent of the time. It offers predictable power with a load factor which is significantly better than, for example, wind power. Because it is located in shallow water, construction costs are much less than for barrage systems. The life expectancy of the structure is 100 years. It is relatively unobtrusive and much kinder to marine life than a tidal barrage. This is perceived as a cost-effective technology thanks in part to the extra revenue from the Renewables Obligation Certification.

Offering more than electricity generation, tidal pounds can also provide coastal flood protection, which was an important factor in determining the viability of the first large-scale project in the UK off the North Wales coast. In 1990 Towyn near Rhyl experienced devastating floods. The pound will be about 9 miles wide and 2 miles deep and located a mile offshore. It should generate 432 MW. This is a popular holiday coast and it is expected that the project will become an important visitor attraction. The tidal barrage at La Rance in Normandy attracts 600 000 visitors a year. There is talk of added attractions like a sea-life museum and an education centre.

It has been estimated that impoundment electricity could meet up to 20 per cent of UK demand at around 15 GW. With a load factor of 0.62 this amounts to 9.3 GW, giving a reliable output of about 9 GW. In total the potential capacity of the various technologies that exploit the tides around Britain is in the region of 65 GW. The variation in high water times around the coasts coupled with pumped storage help to even out the peaks and troughs of generation before any account is taken of the range of other technologies.

Wave power
Wave power is regarded as a reliable power source and has been estimated as being capable of meeting 25 per cent or 18 GW of total UK demand with a load factor of 0.50. The World Energy Council estimates that wave power could meet 10 per cent of world electricity demand. There are both inshore and offshore versions either in operation or projected. The most favoured system uses the motion of the waves to create an oscillating column of water in a closed chamber which compresses air which, in turn, drives a turbine. The first inshore version in the UK was positioned on an inlet in the Scottish Isle of Islay (Figure 3.6). It was designed by Queen's University, Belfast, and has an output of 75 kW which is fed directly to the grid. Two turbines are driven by positive pressure as air is compressed by incoming waves, and negative pressure as the receding waves pull air into the chamber. The rather clever Wells turbine rotates in one direction in either situation.

Figure 3.6 Principle of the Isle of Islay OWC wave generator

The success of this pilot project justified the construction of a full-scale version which is now in operation. A 25 metre slit has been cut into the cliffs facing the north Atlantic at Portnahaven to accommodate a wave chamber inclined at 45 degrees to the water. It is rated at 500 kW which is enough to power 200 island homes.

Currently under test in the Orkneys is a snake-like device called Pelamis which consists of five flexibly linked floating cylinders, each of 3.5 m diameter. The joints between the cylinders contain pumps which force oil through hydraulic electricity generators in response to the rise and fall of the waves. It is estimated to produce 750 kW of electricity. The manufacturer, Ocean Power Devices (OPD), claims that a 30 MW wave farm covering a square kilometre of sea would provide power for 20 000 homes. Twenty such farms would provide enough electricity for a city the size of Edinburgh.

Like Scotland, Norway enjoys an enormous potential for extracting energy from waves. As far back as 1986 a demonstration ocean wave power plant was built based on the 'Tapchan' concept (Figure 3.7). This consists of a 60 m long tapering channel built within an inlet to the sea. The narrowing channel has the effect of amplifying the wave height. This lifts the sea water about 4 m, depositing it into a 7500 m2 reservoir. The head of water is sufficient to operate a conventional hydroelectric power plant with a capacity of 370 kW. As a system this has numerous advantages:
● The conversion device is passive with no moving parts in the open sea.
● The main mechanical components are standard products of proven reliability.
● The plant is totally pollution free.
● It will produce cheap electricity for remote islands.
● Maintenance costs are very low.
● It is unobtrusive.
● The Tapchan plant is able to cope with extremes of weather.

Figure 3.7 Wave elevator system

A large-scale version of this concept is under construction on the south coast of Java in association with the Norwegians. The plant, the 'Tapchan', will have a capacity of 1.1 MW.

The total for the three tide and wave technologies, taking account of load factors, could come to about 74 GW. If we substitute these figures for the quantities indicated in Jim Watson's paper (op. cit.) for wave, tidal stream and tidal barrage of around 16 GW, and add the remaining renewable technologies from this source amounting to 119 GW taking account of load factors, the total comes to about 193 GW. This amounts to more than twice the present electricity generating capacity of the UK.

In his 'green speech' in March 2003 Prime Minister Blair stated that he wanted Britain 'to be a leading player in this green industrial revolution'. We have many strengths to draw on: some of the best marine renewable resources in the world – offshore wind, wave energy and tidal power.
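The recurring arithmetic in this chapter, in which installed capacity multiplied by load factor gives the average deliverable output, can be checked directly against the quoted figures. The capacities below are the text's own estimates, not measured data.

```python
# Average deliverable output = installed capacity x load factor.
def average_output_gw(capacity_gw: float, load_factor: float) -> float:
    return capacity_gw * load_factor

tidal_stream = average_output_gw(18.0, 0.50)  # text: "would deliver 9 GW"
impoundment = average_output_gw(15.0, 0.62)   # text: "amounts to 9.3 GW"
wave = average_output_gw(18.0, 0.50)          # 25 per cent of demand at 0.50

print(tidal_stream, impoundment, wave)
```

Each result matches the deliverable figures given in the text: 9 GW for tidal stream, 9.3 GW for impoundment and 9 GW for wave.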
There is no doubt that the UK has the natural assets to enable it to be fossil fuel free in meeting its electricity needs by 2030, especially the range of opportunities offered by the tides. Tidal energy could more than fill the void in supply left by the demise of nuclear. The other part of the equation is the demand side, and Watson's scenarios include a reduction in electricity demand of up to one third. Assuming significant gains in energy efficiency, even if half the natural assets of the UK are exploited to produce carbon-free electricity, this leaves an appreciable margin of supply over demand. The logical use for this surplus capacity is to maximise pumped storage and to create hydrogen from electrolysis. It has been estimated that converting transport to hydrogen would require 143 GW of electrical power to extract hydrogen from water via electrolysis. This could provide further backup capacity from megawatt grid connected fuel cells in addition to fuelling the growing population of hydrogen powered road vehicles expected over the next decade. However, this would require an immediate policy decision by the government to make a quantum leap in its investment in renewable technologies. What is needed is cross-party political support so that the subject of renewable energy is removed from the cut and thrust of politics. This chapter suggests a 'road map' that would enable actions to be matched to words.

Chapter Four
Renewable technologies – the wider spectrum

Whilst Chapter 3 has focused on marine renewable technologies with special emphasis on the UK, this chapter scans more widely to include the opportunities that occur in different climates. The sun is the primary source of renewable energy. Besides offering a direct source of energy, it drives the Earth's climate creating opportunities to draw energy from wind, waves, tides (together with the moon) and a host of biological sources. Communities in a situation where there are high levels of insolation can benefit from technologies not viable in temperate climes.

Passive solar energy
Advocates of passive solar design have been around for many decades and the prize-winning schemes in a European competition for passive solar housing mounted in 1980 show that the technology has not advanced significantly since that time. However, the intensification of the global warming debate has led to increasing pressure to design buildings which make maximum use of free solar gains for heating, cooling and lighting. Because it displaces the use of fossil fuel it is estimated that passive solar design could lead to a reduction in carbon dioxide (CO2) amounting to 3.5 million tonnes per year in the UK alone by the year 2025 (DOE Energy Paper 60). This will be considered in detail in later chapters.

Active solar
This term refers to the conversion of solar energy into some form of usable heat. In temperate climates the most practical application of solar radiation is to exploit the heat of the sun to supplement a conventional heating system. It is particularly appropriate as an energy source for buildings. More detailed explanations will appear later. The following paragraphs are by way of an introduction.

Solar thermal electricity
In areas where there is substantial sunshine, solar energy can be used to generate electricity in a number of ways. One method which has been successfully demonstrated is the solar chimney. Designed primarily for desert locations, it consists of a tall column surrounded by a glass solar collector. In effect it is a chimney surrounded by a huge solar collector or greenhouse. The air is heated by the circular greenhouse and drawn through the chimney which acts as a thermal accelerator. Within the chimney are one or more vertical axis turbines. A prototype has been built in Manzanares, Spain, with a 195 metre tower served by a greenhouse collector 240 metres in diameter. The collector warms the air by around 17°C creating an updraught of 12 metres per second giving an output of 50 kilowatts (Figure 4.1).

Figure 4.1 Solar chimney generator

The project has demonstrated the viability of the principle and plans are being drawn up for a giant version in Mildura, Australia. The tower will be 1000 metres high with a solar collector of glass and plastic 7 kilometres across. The updraught would be about 15 metres per second or 54 km/hr and will drive 32 turbines at the base. Construction of the tower will consume an estimated 700 000 m3 of high strength concrete. The outer areas of the collector, where the temperature would be near the ambient level, would be used to grow food. The plant would operate over night by using daytime heat to warm underground water pipes connected to an insulated chamber, returning heat to the surface of the collector during the night. The economics suggest that the tower would produce about 650 gigawatt hours per year or enough to serve a population of 70 000. The scheme would carry low maintenance costs and would have a life expectancy of 100 years. A lookout gallery at the top of the tower promises to be a not-to-be-missed tourist attraction (see New Scientist, 31 July 2004, pp. 42–45).

The Almeria region of Spain is the sunniest location in Europe, achieving about 3000 hours of sun a year. This is why the area has been chosen to demonstrate another technology for producing electricity called the SolAir project. In essence it produces superheated steam to drive a turbine. At ground level 300 large mirrors or heliostats each 70 m2 track the passage of the sun and focus its rays on a silicon carbide ceramic heat absorber. The idea has been made possible by the development of ceramics that can tolerate high temperatures. The surface of the absorber reaches 1000°C. Air blown through its honeycomb structure reaches 680°C. The hot air travels down the absorber tower to a heat exchanger where it generates steam to drive a conventional turbine. The system produces up to 1 megawatt of electricity. The ceramic is also able to store heat to compensate for cloudy conditions. According to the Spanish Ministry of Science: 'In five to ten years' time there should be several plants across Europe, each 15 to 20 times larger than the demonstration plant and together generating hundreds of megawatts' (New Scientist, 'Power of the midday sun', 10 April 2004). Plans are already in place to locate these plants along the Algerian coast to export electricity to Europe. Egypt is also warming to the possibility of this new export opportunity.

The parabolic solar thermal concentrator
This is another option for sun-drenched locations which focuses the radiation to produce intense heat – up to 800°C. A version in the United States links this to a unique helium-based Stirling engine. The concentrator mirrors produce about 30 kW of reflective power to the heat pipe receiver which is linked to the engine. The engine operates on the basis that the heat vaporises liquid sodium in its receiver at the focal point of the dish. Condensation of the sodium on the heater tubes raises the temperature of an internal helium circuit. The expanding helium drives pistons which in turn drive an alternator to produce electricity (Figure 4.2).

Figure 4.2 Solar concentrator, Abilene, Texas (courtesy of CADDET)

An alternative solar concentrator built by the Australian National University uses a computer to enable it to track the sun with extreme accuracy. This system produces superheated steam in a solar boiler at the focal point. The steam is piped to a four cylinder expansion engine that drives a 65 kVA generator. At current prices it is expected to produce electricity at one third the price of photovoltaics. One spin-off from this technology is a demonstration scheme which has attached 18 solar thermal power dishes to an existing coal fired steam turbine power station producing the equivalent of 2.6 MW for the grid which saves some 4500 tonnes/year of CO2.
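The economics quoted above for the Mildura tower, 650 GWh per year serving a population of 70 000, imply a per-capita supply that can be checked in a couple of lines. The interpretation (total consumption per head rather than household electricity alone) is an inference, not a claim made in the text.

```python
# Implied supply figures for the proposed Mildura solar chimney.
annual_output_gwh = 650.0
population = 70_000

mwh_per_person_per_year = annual_output_gwh * 1000.0 / population
average_power_mw = annual_output_gwh * 1000.0 / 8760.0  # continuous average

print(round(mwh_per_person_per_year, 1), round(average_power_mw))
```

This gives roughly 9.3 MWh per person per year and about 74 MW of continuous average output, a figure that would cover far more than household use per head.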
The development potential is to use the waste heat from the system for co-generation. In hot dry climates an ideal application for this system is in desalination. A variation of this principle is the SunDish Tower System of STM Power in the USA, which uses a unique type of Stirling engine with integral electricity generation within the sealed chamber (see p. 91).

Photovoltaics
The amount of energy supplied to the Earth by the sun is five orders of magnitude larger than the energy needed to sustain modern civilisation. One of the most promising systems for converting this solar radiation into usable energy is the photovoltaic (PV) cell. The uniqueness of PV generation is that it is based on the 'photoelectric quantum effect in semi-conductors' which means it has no moving parts and requires minimum maintenance. PV materials generate direct electrical current (DC) when exposed to light. As it produces DC current, for most purposes this has to be changed to alternating current (AC) by means of an inverter. Its disadvantages are that it is expensive and, as yet, capable of only a relatively low output per unit of area. It, of course, only operates during daylight hours and is therefore subject to fluctuation in output due to diurnal, climate and seasonal variation. Silicon is, at present, the dominant PV material which is deposited on a suitable substrate such as glass.

Growth in the manufacture of PVs has been accelerating at an extraordinary pace. In 2002 it was 56 per cent in Europe and 46 per cent in Japan, greater than in 2001. We are now seeing the emergence of large plants producing PVs on an industrial scale, over 200 MW per year. The result is that unit costs have almost halved between 1996 and 2002. Significant further cost reductions are confidently predicted coupled with steady improvements in efficiency.

One application of PVs is its potential radically to improve the quality of life in the rural regions of developing countries. Already rural medical facilities are being served by PV arrays, for example the rural hospital at Dire' in Mali. On a smaller scale, compact and mobile PV arrays can operate refrigerators and water pumps. This is certainly one area on which the industrialised countries should focus capital and technology transfer to less and least developed countries. PV technology will be considered further in Chapter 7.

Wind power
Wind is a by-product of solar power and, as with the tides, wind power has been exploited as an energy source for over 2000 years. Whilst it is an intermittent source of power, in certain countries such as the UK and Denmark wind is a major resource. The UK has the best wind regime in Europe but still has a considerable distance to go to meet its target of 8 per cent of total demand for wind generation by 2010. With the best wind regime in Europe, Britain has the capacity to generate three times as much electricity by windpower as it consumes. It is estimated that it would be feasible to produce 55 TWh/y by wind generation, the majority of which would be located in Scotland. However, in practice there is a limit to the amount of unpredictable power the grid can accept and the realistic limit is said to be 32 TWh/y.

There are two basic types of wind generator: horizontal and vertical axis. The great majority of generators in operation are of the horizontal axis type with either two or three blades. Vertical axis machines such as the helical turbine are particularly appropriate for siting on buildings. The technology is well developed and robust; turbines are relatively cheap and in the UK can generate electricity at a cost of 7 p/kWh assuming a 20-year life and a 15 per cent rate of return (Energy Paper 60). Of course the required rate of return is a contentious issue since it takes no account of the avoided cost to both the lower and upper atmosphere with all its global warming implications.

On the other hand, there are drawbacks to this form of power. The most frequently cited are:
● Often the most advantageous onshore sites are also places of particular natural beauty.
● The output is unpredictable.
● Such sites are often some distance from the grid and centres of population.
● At full revolutions the noise they create can be intrusive.
● They have been implicated in interfering with television reception.
● They are a particular hazard to birds and have attracted severe criticism from the Royal Society for the Protection of Birds (RSPB).
● They are said to interfere with radar signals and have raised concerns in the Ministry of Defence.

Several of the negative factors can be overcome by locating the generators offshore. The conventional method is to fix the machine to the sea bed. Two major offshore wind farms have already been installed off the coast of North Wales near Rhyl and Scroby Sands off the Norfolk coast. The UK government announced in 2003 that it is planning a 6000 MW expansion of offshore wind generation by 2010. At that time the existing installed capacity was 570 MW. The target is, to say the least, ambitious but necessary if wind is to supply its overall share of 8 GW towards the declared 10 GW target for renewables by 2010.

Expert opinion has it that, as sea levels rise and storm intensities increase, some exposed estuaries will need hard barrages which, as mentioned earlier, could serve as tidal generators as well as affording ideal sites for wind turbines. In addition to offshore sites, harbour walls also have a highly favourable wind regime and therefore offer excellent sites, as demonstrated by Blyth in Northumberland. There is a growing market for domestic scale wind power and several firms are producing small-scale generators with an output ranging from 3.5 to 22 kW which could be installed on buildings. These turbines will be considered in more detail in Chapter 9.

Biomass and waste utilisation
The term 'biomass' refers to the concept either of growing plants as a source of energy or using plant waste such as that obtained from managed woodlands or saw mills. It is estimated that the amount of fixed carbon in land plants is roughly equivalent to that which is contained in recoverable fossil fuels (The World Directory of Renewable Energy (2003), James and James, p. 42). There are three ways in which biomass and waste can be converted into energy:
● Direct combustion
● Conversion to biogas
● Conversion to liquid fuel.

Whilst the economics of converting biomass and waste to energy are still somewhat uncompetitive compared with fossil fuels, the pressure to reduce CO2 emissions combined with 'polluter pays' principles and landfill taxes for waste will change the economic balance in the medium term. An ever increasing body of regulations is limiting the scope to dispose of waste in traditional ways. Increasing environmental pressures are stimulating the growth of waste to energy schemes. Within the European Union the 'set-aside' land regulations have created an opportunity to put the land to use to create bio-fuels.

Direct combustion
Direct combustion represents the greatest use of biomass for fuel worldwide. Sweden and Austria generate a significant proportion of their electricity by burning the residue from timber processing. However, a paper published in 1980 by Michael Allaby and James Lovelock drew attention to the risks to health associated with wood burning ('Wood stoves: the trendy pollutant', New Scientist, 13 November 1980). The authors identified nine compounds found in wood smoke that are known or suspected carcinogens.

The direct burning of municipal waste is becoming increasingly popular. Sorted municipal solid waste (MSW) represents the greatest untapped energy resource for which conversion technology already exists. In the UK there is a major plant in Lewisham in southeast London, SELCHP, capable of generating 30 MW of electricity (DTI Renewable Energy Case Study: 'Energy from Municipal Solid Waste', London). Sheffield has one of the most extensive systems, using Finnish technology and providing the city centre with heat and supplying power to the grid. However, the presence of heavy metals in such waste poses a danger from toxic emissions including, it is claimed, dioxins.

The direct burning of rapid rotation crops is a technology which is said to be CO2 efficient since the carbon emissions balance the carbon fixed during growth. The first UK commercial biomass electricity generating plant fuelled by poultry litter (a mixture of straw, wood and poultry droppings) was built at Eye, Suffolk. It has a capacity of 12.5 MW and uses about half the total of litter from broiler farms in the county. It also eliminates methane emissions from stored poultry litter. A much larger biomass plant is in operation in Thetford which consumes 450 000 tonnes per year of poultry litter to deliver 38.5 MW of power. It is claimed to reduce greenhouse gas emissions by 70 per cent compared with coal-fired plants. In the UK about 1.8 million tonnes of poultry waste and 12 million tonnes of livestock slurry are produced annually. This offers substantial biomass-to-energy conversion opportunities either as direct combustion or by using anaerobic digestion technologies.

Biogas
The most straightforward exploitation of biogas involves the tapping of methane produced by decaying waste material in landfill sites. This has a considerable environmental benefit since it burns the methane which would otherwise add more intensively to the greenhouse problem. Gas is collected using a series of vertical collection wells connected to a blower which draws gas from the waste. Foreign matter is extracted and the gas then fed to a conventional engine which drives a generator. The engine would use 'lean-burn' technology to minimise emissions of nitrogen oxides and carbon monoxide (Power generation from landfill gas, DTI Renewable Energy Case Study 2). Anaerobic digestion uses wet waste products to produce energy in the form of methane-rich biogas.
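The Thetford figures quoted above imply a rough electricity yield per tonne of litter. The sketch below assumes, optimistically, continuous operation at the rated output; actual availability would lower the figure.

```python
# Implied electricity yield per tonne of poultry litter at Thetford,
# assuming continuous operation at the rated 38.5 MW (an upper bound).
rated_mw = 38.5
litter_tonnes_per_year = 450_000
hours_per_year = 8760

mwh_per_year = rated_mw * hours_per_year              # 337 260 MWh
mwh_per_tonne = mwh_per_year / litter_tonnes_per_year

print(round(mwh_per_tonne, 2))
```

The result is about 0.75 MWh of electricity per tonne of litter, a plausible order of magnitude for a low-grade solid fuel after generation losses.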
takes place in large heated tanks at either 30–35 C or 55 C during which 60 per cent of the organic material is converted to biogas. Gas is collected using a series of vertical collection wells connected to a blower which draws gas from the waste. Here co-operative ventures receiving waste from all the farms in a viable collection area are combined with non-toxic industrial and food waste to fuel extensive CHP networks. It also eliminates the production of the powerful greenhouse gas methane and nitrates which enter the water supply. UK.ARCHITECTURE IN A CLIMATE OF CHANGE The direct burning of rapid rotation crops is a technology which is said to be CO2 efficient since the carbon emissions balance the carbon fixed during growth. It has a capacity of 12. Biogas The most straightforward exploitation of biogas involves the tapping of methane produced by decaying waste material in landfill sites. The world’s largest experiment in alternative fuel has been taking place in Brazil since 1975. hot dry rocks can supply energy by means of boreholes through which water is pumped and returned to the surface to provide space heating. water and air is fed to a reactor which is a compact fuel cell hydrogen generator. Now 22 countries generate electricity using geothermal energy. Non-solid waste is superheated to produce methane which then fuels a steam turbine. improvements in the rate of growth per hectare combined with greater use in the generation of power for the grid together with steeply rising oil prices could save the situation. A problem for Brazil was that world energy prices had fallen to such an extent as to make ethanol uneconomic without government subsidy. Much greater efficiency is realised with the direct use of this energy for space or district heating. It has the potential to produce electricity for the grid using biowaste from agriculture and dedicated energy crops. 
Geothermal energy Natural hot water has been used since at least the nineteenth century for industrial purposes. Ethanol produced from sugar cane powers about 4 million cars in that country. Then it rises to between 50 and 70 per cent. It produces fewer pollutants than petrol and is a net zero carbon fuel.RENEWABLE TECHNOLOGIES – THE WIDER SPECTRUM The next logical step is to employ the technology to exploit the energy potential of human sewage on a national scale. However. ranging from 5 to 20 per cent. The first geothermal power station was built in Italy in 1913 and produced 250 kW. Liquid fuels The advantage of converting crops to liquid fuel is that it is portable and therefore suitable for vehicles. However. The damage to health from low-level pollution is becoming increasingly a matter of concern and overtaking the greenhouse factor as the driving force behind the development of minimum polluting vehicles. One of the most promising technologies to have developed in recent years involves the gasification of municipal waste. thus alleviating two problems simultaneously. There was a danger that the ethanol programme would collapse under the weight of market forces. This is known as the borehole heat exchanger 49 . its conversion efficiency is low. Another use for ethanol is in the creation of hydrogen for fuel cells. Alternatively. The process involves heat recovery so that there is a commercial net energy gain in the process. The mixture is heated and passed through two catalysts. The unit price of electricity generated by this process can be offset by the avoided costs of landfill disposal together with the taxes this incurs. A mixture of ethanol. About half the gas emerging from the process is hydrogen. 90). However. it is a major source of energy with one borehole for every 300 persons. 
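As a rough numerical illustration of how a borehole heat exchanger delivers useful heat, the sketch below applies the standard sensible-heat relation to the circulating water. The flow rate and temperature drop are illustrative assumptions, not figures from the text.

```python
# Rough heat yield of a single borehole heat exchanger loop.
# The flow rate and temperature drop are assumed for illustration.

def borehole_heat_w(flow_kg_per_s, delta_t_k, cp=4186.0):
    """Heat extracted (W) by circulating water: Q = m_dot * cp * dT,
    with cp the specific heat of water in J/(kg K)."""
    return flow_kg_per_s * cp * delta_t_k

# Water circulated at 0.3 kg/s and cooled by 4 K yields about 5 kW
# of low-grade heat from the ground.
print(round(borehole_heat_w(0.3, 4.0)))
```

Since the ground-side temperature is low, this heat would in practice be upgraded for space heating, which is consistent with the winter-extraction/summer-rejection balance the text goes on to describe.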
In Switzerland, for example, it is a major source of energy, with one borehole for every 300 persons. If much more heat is required from the BHE in winter than can flow back in summer, a means must be found to regenerate the ground by artificial means. This opens the way for the dual use of BHEs – heat collection in winter and heat rejection in summer. With buildings they can therefore be used for both heating and cooling. The UK was one of the leaders in the field of hot dry rocks geothermal research. However, efforts to achieve a commercial return on this energy route have proved unsuccessful and further work has been abandoned.

Nuclear power
There are some who, in this context, would place nuclear generation in the renewables category. Whilst there may be as yet no known limit to the availability of fuel for fission nuclear power stations, the problems of security, decommissioning and waste disposal remain largely unsolved. Events of 2002 have illustrated how international terrorism has reached new heights of sophistication, and some consider that it would be folly to construct a new generation of tempting targets. The UK's radioactive waste tally currently stands at 10 000 tonnes. Those opposed to nuclear generation have been encouraged by the decision of the UK government to abandon plans to construct two further pressurised water plants. The Energy White Paper of 2002 deferred a decision on nuclear expansion until 2005, on the grounds that it would then review the potential of renewable technologies to fill the impending energy gap. At the present rate of progress it would seem that this is a forlorn hope, even though studies have shown that renewables can generate at least twice the capacity needed for the UK, as stated earlier. Those concerned about a new generation of nuclear fission reactors should note the prediction in a Royal Commission report of 2000 that, if current trends continue, including the present rate of installing renewable technologies, then by 2050 the country will need the equivalent of 46 of the latest Sizewell B type nuclear reactors to meet demand (Energy – The Changing Climate, Royal Commission on Environmental Pollution Report, 2000). For these reasons, nuclear power will remain an unsustainable energy source until its problems are solved in a way that will not impose a burden on future generations.

There has been progress on the development of nuclear fusion – the power source that replicates the energy of the sun. The principle is that a mix of hydrogen isotopes is heated to 100 million degrees, which causes their nuclei to fuse, producing helium and massive amounts of energy. Powerful electromagnetic rings called tokamaks (like a doughnut) are able to store the superheated plasmas. So far the problem has been that it has taken more energy to heat the gas to fusion temperature than is produced by the reaction; there has also been a problem of maintaining the high temperatures. However, the UK's fusion laboratory at Culham has achieved breakeven between energy input and output, and a Japanese facility has achieved the same result. Designs have been produced for the next generation of reactor by a consortium of the European Union, Japan and Russia – the International Tokamak Experimental Reactor (ITER). It is predicted to produce ten times as much power as it consumes. Unlike the present day nuclear fission reactor, fusion reactors will not produce masses of highly radioactive waste staying a hazard for 250 000 years. According to Sir David King, UK Chief Government Scientist, 'If successful, we could have commercial fusion electricity within 30 years … it could be the world's most important energy source over the next millennium' (New Scientist, 10 April 2004, p. 20).

Hydrogen
This is widely seen as the fuel of the future and will come in for further consideration in Chapter 13. Off-peak or PV electricity can be used to split water via an electrolyser to make hydrogen. This can be used as a direct fuel or to make electricity through the chemical reaction in a fuel cell (see p. 90). It is non-polluting, has a reasonable calorific value, and can be safely stored.

Chapter Five
Low energy techniques for housing

It would appear that, for the industrialised countries, the best chance of rescue lies with the built environment, because buildings in use or in the course of erection are the biggest single indirect source of carbon emissions generated by burning fossil fuels. The built environment is the greatest sectoral consumer of energy, accounting for over 50 per cent of total emissions; if you add the transport costs generated by buildings, the UK government estimate is 75 per cent. Within that sector, housing is in pole position, accounting for 28 per cent of all UK carbon dioxide (CO2) emissions. In fact, it is the built environment which is the sector that can most easily accommodate fairly rapid change without pain. Upgrading buildings, especially the lower end of the housing stock, creates a cluster of interlocking virtuous circles.

Construction systems
Having considered the challenge presented by global warming and the opportunities to generate fossil-free energy, it is now time to consider how the demand side of the energy equation can respond to that challenge.

In the UK housing has traditionally been of masonry, and since the early 1920s this has largely been of cavity construction. Since the introduction of thermal regulations, initially deemed necessary to conserve energy rather than the planet, it has been common practice to introduce insulation into the cavity. For a long time it was mandatory to preserve a space within the cavity, and a long rearguard battle was fought by the traditionalists to preserve this 'sacred space'. The purpose was to ensure that a saturated external leaf would have no physical contact with the inner leaf apart from wall ties, and that water would be discharged through weep holes at the damp-proof course level. Defeat was finally conceded when extensive research by the Building Research Establishment found that there was no greater risk of damp penetration with filled cavities, and that in fact damp through condensation was reduced.

Solid masonry walls with external insulation are common practice in continental Europe and are beginning to make an appearance in the UK. In Cornwall the Penwith Housing Association has built apartments of this construction on the sea front – perhaps the most challenging of situations.

The advantages of masonry construction are:
● It is a tried and tested technology familiar to house building companies of all sizes.
● Masonry homes have a relatively high thermal mass, which is considerably improved if there are high density masonry internal walls and concrete floors.
● Exposed brickwork is a low maintenance system, although maintenance demands rise considerably if it receives a rendered finish.
● It is durable and generally risk free as regards catastrophic failure – though not entirely: a few years ago the entire outer leaf of a university building in Plymouth collapsed due to the fact that the wall ties had corroded.

Framed construction
Volume house builders are increasingly resorting to timber-framed construction with a brick outer skin. The attraction is the speed of erection, especially when elements are fabricated off site. However, there is an unfortunate history behind this system due to shortcomings in quality control. This can apply to timber which has not been adequately cured or seasoned. Framed buildings need to have a vapour barrier to walls as well as roofs, and with timber framing it is difficult to avoid piercing the barrier. There can also be problems achieving internal fixings. Pressed steel frames for homes are now being vigorously promoted by the steel industry. The selling point is again speed of erection, but with the added benefit of a guaranteed quality in terms of strength and durability of the material. For the purist, the ultimate criticism is that it is illogical to have a framed building clad in masonry when it cries out for a panel, boarded, slate or tile hung external finish. From the energy efficiency point of view, framed buildings can accommodate high levels of insulation but have relatively poor thermal mass unless this is provided by floors and internal walls.

Innovative techniques
Permanent Insulation Formwork Systems (PIFS) are beginning to make an appearance in Britain. The principle behind PIFS is the use of precision moulded interlocking hollow blocks made from an insulation material, usually expanded polystyrene. They can be rapidly assembled on site and then filled with pump grade concrete. When the concrete has set, the result is a highly insulated wall ready for the installation of services and internal and exterior finishes. The finished product has high structural strength together with considerable thermal mass and high insulation value. The advantages of this system are:
● Design flexibility: almost any plan shape is possible.
● Ease and speed of erection: skill requirements are modest, which is why it has proved popular with the self-build sector. Experienced erectors can achieve 5 m² per man hour for erection and placement of concrete.
● High insulation performance: a U-value as low as 0.11 W/m²K can be achieved.
Above three storeys the addition of steel reinforcement is necessary.

Solar design
Passive solar design
Since the sun drives every aspect of the climate, it is logical to describe the techniques adopted in buildings to take advantage of this fact as 'solar design'. The most basic response is referred to as 'passive solar design'. In this case buildings are designed to take full advantage of solar gain without any intermediate operations. Access to solar radiation is determined by a number of conditions:
● the sun's position relative to the principal facades of the building (solar altitude and azimuth);
● site orientation and slope;
● existing obstructions on the site;
● potential for overshadowing from obstructions outside the site boundary, such as other buildings.
Sunlight and shade patterns cast by the proposed building itself should also be considered.

One of the methods by which solar access can be evaluated is the use of some form of sun chart. Graphical and computer prediction techniques may be employed, as well as techniques such as the testing of physical models with a heliodon. Most often used is the stereographic sun chart (Figure 5.1), in which a series of radiating lines and concentric circles allow the position of nearby obstructions to insolation to be plotted. On the same chart a series of sun path trajectories are also drawn (usually one arc for the 21st day of each month); also marked are the times of the day. Normally a different chart is constructed for use at different latitudes (at about two degree intervals). The intersection of the obstructions' outlines and the solar trajectories indicates times of transition between sunlight and shade. Computer modelling of shadows cast by the sun from any position is offered by Integrated Environmental Solutions (IES) with its 'Suncast' program.
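The solar geometry that such charts encode can also be reproduced numerically. The sketch below uses the standard spherical-astronomy formula for solar altitude; it ignores atmospheric refraction and the equation of time, and the sample values are chosen purely for illustration.

```python
import math

def solar_altitude_deg(latitude_deg, declination_deg, hour_angle_deg):
    """Solar altitude in degrees from the standard formula:
    sin(alt) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(hour angle)."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    ha = math.radians(hour_angle_deg)
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_alt))

# At the equinox (declination ~0) at solar noon (hour angle 0) the
# altitude is 90 minus the latitude: 40 degrees at latitude 50 N.
print(round(solar_altitude_deg(50.0, 0.0, 0.0), 1))
```

Sweeping the declination through the year and the hour angle through the day generates exactly the family of arcs drawn on a stereographic sun chart.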
Suncast is a user-friendly program which should be well within normal undergraduate competence (www.ies4d.com).

The spacing between buildings is important if overshading is to be avoided during winter months, when the benefit of solar heat gain reaches its peak. On sloping sites there is a critical relationship between the angle of slope and the level of overshading. For example, if overshading is to be avoided at a latitude of 50°N, rows of houses on a 10° north-facing slope must be more than twice as far apart as on a 10° south-facing slope. Trees can obviously obstruct sunlight. However, if they are deciduous, they perform the dual function of permitting solar penetration during the winter whilst providing a degree of shading in the summer. Again, spacing between trees and buildings is critical.

Figure 5.1 Stereographic sun chart for 21 March

Passive solar design can be divided into three broad categories:
● direct gain;
● indirect gain;
● attached sunspace or conservatory.

Each of the three categories relies in a different way on the 'greenhouse effect' as a means of absorbing and retaining heat. The greenhouse effect in buildings is that process which is mimicked by global environmental warming. The incident solar radiation is transmitted by facade glazing to the interior, where it is absorbed by the internal surfaces, causing warming. Re-emission of heat back through the glazing is blocked by the fact that the radiation is of a much longer wavelength than the incoming radiation. This is because the re-emission is from surfaces at a much lower temperature, and the glazing reflects back such radiation to the interior.

Direct gain
Direct gain is the design technique in which one attempts to concentrate the majority of the building's glazing on the sun-facing facade. Solar radiation is admitted directly into the space concerned. Two examples 30 years apart are the author's house in Sheffield, designed in 1967 (Figure 5.2), and the Hockerton Project of 1998 by Robert and Brenda Vale (Figure 5.3). The main design characteristics are:
● Apertures through which sunlight is admitted should be on the solar side of the building, within about 30° of south for the northern hemisphere.
● The main occupied living spaces should be located on the solar side of the building.
● Windows should be at least double glazed with low emissivity glass (Low E), as now required by the UK Building Regulations.
● The floor should be of a high thermal mass to absorb the heat and provide thermal inertia, which reduces temperature fluctuations inside the building. During the day and into the evening the warmed floor should slowly release its heat, and the time period over which this happens makes it a very suitable match to domestic circumstances, when the main demand for heat is in the early evening.
● For the normal daily cycle of heat absorption and emission, it is only about the first 100 mm of thickness which is involved in the storage process. Thickness greater than this provides marginal improvements in performance but can be useful in some longer-term storage options.
● In the case of solid floors, insulation should be beneath the slab. A vapour barrier should always be on the warm side of any insulation.
● Thick carpets should be avoided over the main sunlit and heat-absorbing portion of the floor if it serves as a thermal store. However, with suspended timber floors a carpet is an advantage in excluding draughts from a ventilated underfloor zone.
● Windows facing west may pose a summer overheating risk.

Figure 5.2 Passive solar house, Sheffield 1960s
Figure 5.3 Passive solar houses, Hockerton Self-Sufficient Housing Project 1998

Direct gain is also possible through the glazing located between the building interior and an attached sunspace or conservatory; it also takes place through upper level windows of clerestory designs. In each of these cases some consideration is required concerning the nature and position of the absorbing surfaces. In the UK climate and latitude, as a general rule of thumb, room depth should not be more than two and a half times the window head height, and the glazing area should be between about 25 and 35 per cent of the floor area.

To reduce the potential of overheating in the summer, the following features are recommended:
● Use of external shutters and/or internal insulating panels might be considered to reduce night-time heat loss.
● Shading may be provided by designing deep eaves or external louvres. Internal blinds are the most common technique, but have the disadvantage of absorbing radiant heat, thus adding to the internal temperature. The downside of shading is that it also reduces heat gain at times of the year when it is beneficial.
● Heat reflecting or absorbing glass may be used to limit overheating.
● Light shelves can help reduce summer overheating whilst improving daylight distribution (see Chapter 14).

Indirect gain
In this form of design a heat absorbing element is inserted between the incident solar radiation and the space to be heated; thus the heat is transferred in an indirect way. This often consists of a wall placed behind glazing facing towards the sun, and this thermal storage wall controls the flow of heat into the building. The main elements contributing to the functioning of the design are:
● a high thermal mass element positioned between sun and internal spaces;
● glazing on the outer side of the thermal wall, used to provide some insulation against heat loss and to help retain the solar gain by making use of the greenhouse effect;
● materials and thickness of the wall chosen to modify the heat flow – typical thicknesses of the thermal wall are 20–30 cm.

The heat absorbed slowly conducts across the wall and is liberated to the interior some time later. In homes the flow can be delayed so that it arrives in the evening, matched to occupancy periods. In order to derive more immediate heat benefit, air can be circulated from the building through the air gap between wall and glazing and back into the room. In this modified form this element is usually referred to as a Trombe wall. In countries which receive inconsistent levels of solar radiation throughout the day because of climatic factors (such as the UK), the option to circulate air is likely to be of greater benefit than awaiting its arrival after passage through the thermal storage wall. At times of excess heat gain the system can provide alternative benefits, with the air circulation vented directly to the exterior carrying away its heat, at the same time drawing in outside air to the building from cooler external spaces. The area of the thermal storage wall element should be about 15–20 per cent of the floor area of the space into which it emits heat. Heat reflecting blinds should be inserted between the glazing and the thermal wall to limit heat build-up in summer (Figures 5.5 and 5.6).

Figure 5.4 Hockerton individual house unit solar space
Figure 5.5 Indirect solar – Trombe wall (flap to control reverse flow at night; thermal storage wall; opening to permit air flow)
Figure 5.6 Freiburg Solar House showing Trombe walls with blinds in operation. Note the hydrogen storage tank on the right

Indirect gain options are often viewed as being the least aesthetically pleasing of the passive solar options, partly because of the restrictions on position and view out from the remaining windows, and partly as a result of the implied dark surface finishes of the absorbing surfaces. As a result, this category of the three prime solar design technologies is not as widely used as its efficiency and effectiveness would suggest.

Attached sunspace/conservatory
This has become a popular feature in both new housing and as an addition to existing homes. It can function as an extension of living space, a solar heat store, a preheater for ventilation air or simply an adjunct greenhouse for plants (Figure 5.7). The area of glazing in the sunspace should be 20–30 per cent of the area of the room to which it is attached. Ideally the sunspace should be capable of being isolated from the main building to reduce heat loss in winter and excessive gain in summer. At the very least, air flow paths between the conservatory and the main building should be carefully controlled. Ideally the summer heat gain should be used to charge a seasonal thermal storage element to provide background warmth in winter. On balance it is considered that conservatories are a net contributor to global warming, since they are often heated. The most adventurous sunspace so far encountered is in the Hockerton housing development, which will feature later (Chapter 8; see Figure 5.4).

Figure 5.7 Attached sunspace (showing blinds/insulation, air movement between sunspace and building, indirect gain and direct gain)
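The sizing rules of thumb quoted in the passive solar sections above can be collected into a quick numerical check. The percentage bands come from the text; the helper functions themselves are only an illustrative sketch.

```python
# Quick check of the passive solar sizing rules of thumb quoted above.

def direct_gain_ok(room_depth_m, head_height_m, glazing_m2, floor_m2):
    """Room depth <= 2.5 x window head height; glazing 25-35% of floor."""
    return (room_depth_m <= 2.5 * head_height_m
            and 0.25 <= glazing_m2 / floor_m2 <= 0.35)

def trombe_wall_ok(wall_m2, floor_m2):
    """Thermal storage wall about 15-20% of the heated floor area."""
    return 0.15 <= wall_m2 / floor_m2 <= 0.20

def sunspace_ok(sunspace_glazing_m2, room_floor_m2):
    """Sunspace glazing 20-30% of the adjoining room's floor area."""
    return 0.20 <= sunspace_glazing_m2 / room_floor_m2 <= 0.30

# A 5 m deep room with a 2.1 m window head and 7 m2 of glazing on 25 m2
# of floor, a 4 m2 Trombe wall and 6 m2 of sunspace glazing all pass:
print(direct_gain_ok(5.0, 2.1, 7.0, 25.0),
      trombe_wall_ok(4.0, 25.0),
      sunspace_ok(6.0, 25.0))
```

The room dimensions in the example are hypothetical; the point is simply that each rule reduces to a ratio that can be tested at sketch-design stage.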
Active solar thermal systems
A distinction must be drawn between passive means of utilising the thermal heat of the sun and those of a more 'active' nature. Active systems take solar gain a step further than passive solar: they convert direct solar radiation into another form of energy. A further distinction is the difference between systems using the thermal heat of the sun and systems, such as photovoltaic cells, which convert solar energy directly into electrical power. Active systems are able to deliver high quality energy. However, a penalty is incurred, since energy is required to control and operate the system, known as the 'parasitic energy requirement'. The emergence of Legionella has highlighted the need to store hot water at a temperature above 60°C, which means that for most of the year in temperate climes active solar heating must be supplemented by some form of heating.

For solar energy to realise its full potential it needs to be installed on a district basis and coupled with seasonal storage. One of the largest projects is at Friedrichshafen (Figure 5.8). The heat from 5600 m² of solar collectors on the roofs of eight housing blocks containing 570 apartments is transported to a central heating unit or substation. It is then distributed to the apartments as required. The heated living area amounts to 39 500 m². Solar collectors preheat water using a closed circuit calorifier, and surplus summer heat is directed to the seasonal heat store which, in this case, is of the hot water variety, capable of storing 12 000 m³. The scale of this storage facility is indicated by Figure 5.9. The heat delivery of the system amounts to 1915 MWh/year and the solar fraction is 47 per cent. The month by month ratio between solar and fossil-based energy indicates that from April to November inclusive, solar energy accounts for almost total demand.

Figure 5.8 Schematic drawing of the CSHPSS system in Friedrichshafen, Germany: solar collectors, central heating unit with seasonal heat storage, and district heating network (courtesy of Renewable Energy World (REW))

Figure 5.9 Seasonal storage tank under construction, Friedrichshafen (courtesy of REW)

Types of solar thermal collector
The flat plate collector
These units are, as the name indicates, flat plates tilted to receive maximum solar radiation. Behind the plate are pipes which carry the heat extraction medium. There are two types of heat absorbing medium: air and water. Water containing an anti-freeze solution is the most common, and is circulated behind an absorber plate to extract and transfer its heat. There are four main components to the design:
● a transparent cover plate;
● a heat absorber plate;
● a pipe circuit to absorb and transport the heat;
● insulation behind the plate and pipes (Figures 5.10 and 5.11).

In places with high average temperatures and generous sunlight, active solar has considerable potential, not just for heating water but also for electricity generation. This has particular relevance to less and least developed countries. In the UK they are usually limited to providing domestic hot water, mainly in the summer months. To exploit their efficiency to the full there should be a heat storage facility which accepts excess heat during the summer to top up heating needs the rest of the year. However, the size of both the collectors and storage tanks makes this an uneconomic proposition in most cases.

Figure 5.10 Flat plate collector (transparent cover plate, absorber plate, flow passages, insulator)

A more sophisticated version was devised for the Freiburg Solar House. The collector is placed within a semi-circular reflector. The reflected radiation means that the collector receives heat on both sides, nearly doubling its efficiency. Coupled with insulated water storage, this system was able to supply all the domestic hot water for the whole year (Figure 5.12).

Figure 5.11 Double-sided solar collector, Freiburg Solar House (1. Absorber; 2. Air gap; 3. Transparent insulation; 4. Low iron containing glass; 5. Reflector)

Evacuated tube collectors
The most recent form of collector is the evacuated tube or vacuum tube system. It works by exploiting a vacuum around the collector which reduces heat loss from the system, making it especially suitable for more temperate climes. These units heat water from 60 to 80°C, which is sufficient for providing domestic hot water. They can continue to operate under cloudy conditions and should be linked to an insulated storage facility for continuity of supply. However, the installation cost is significantly higher than for flat bed collectors.

Figure 5.12 Flat bed solar thermal collectors, Osney Island, Oxford (courtesy of David Hammond, Architect)

Windows and glazing
In recent years there has been rapid development in the technology of the building envelope, especially in the sphere of glass. Glazing systems are now possible which react to environmental conditions such as light and heat, yet these are merely a foretaste of things to come. There have also been considerable advances in the thermal efficiency of glazing, with U-values now commercially better than 1.0 W/m²K. (For more information see Smith, P.F. (2002) Sustainability at the Cutting Edge, Architectural Press, Ch. 2.)

Windows have many benefits, aside from the obvious. However, they are the main weak thermal link when incorrectly specified. Table 5.1 shows the heat transfer characteristics of seven glazing systems; Table 5.2 illustrates the impact of solar gain according to orientation by giving the net U-values. Discomfort arises in summer, not just from the rise in air temperature due to heat gains, but also due to the rise in radiant temperature from the glass surface itself. Radiant effects are further increased if the occupant experiences unshaded sunlight.
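The notion behind Table 5.2 – a net U-value in which conduction loss is partly offset by useful solar gain – can be sketched as a simple seasonal balance. All input values below are illustrative assumptions, not figures from the tables.

```python
# Net seasonal heat flow through a window: conduction loss minus
# transmitted solar gain. Values are illustrative, not from Table 5.2.

def net_window_loss_kwh(u_value_w_m2k, area_m2, kilo_kelvin_hours,
                        solar_kwh_per_m2, transmittance):
    """Loss: U [W/m2K] x area [m2] x kKh [1000 K h] gives kWh.
    Gain: incident solar [kWh/m2] x area x transmittance."""
    loss_kwh = u_value_w_m2k * area_m2 * kilo_kelvin_hours
    gain_kwh = solar_kwh_per_m2 * area_m2 * transmittance
    return loss_kwh - gain_kwh

# A 2 m2 south-facing double-glazed window (U = 2.8 W/m2K) over a
# heating season of 60 kKh of degree-hours, receiving 300 kWh/m2 of
# solar radiation at a transmittance of 0.65: here the solar gain
# outweighs the conduction loss, which is how an effective 'net'
# U-value can fall well below the nominal one.
print(round(net_window_loss_kwh(2.8, 2.0, 60.0, 300.0, 0.65)))
```

A north-facing orientation simply reduces the solar term, pushing the same window back towards its nominal U-value, which is the orientation dependence Table 5.2 tabulates.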
Table 5.1 Comparison of typical heat transfer through different glazing options (U-values in W/m2K), for: single glazing; double glazing; triple glazing; double with Low E; double with Low E and argon; triple with 2 Low E and 2 argon; double with aerogel.

Table 5.2 Effective net U-value (W/m2K) taking account of solar heat gain, for south, east/west and north orientations: single glazing; double glazing; triple glazing; double with Low E; triple with Low E.

Visible light and solar heat gain are both parts of the electromagnetic spectrum of energy emitted by the sun. The interaction of glazing with light and solar heat has three components: reflection, absorption and transmission. Modifications in the proportions of reflected, absorbed and transmitted radiation can be engineered by changing the glazing system properties. As pressure has increased to improve the thermal efficiency of buildings, this has forced the pace of developments in glass technology.

In winter, cold window surfaces cool the adjacent internal air, which then falls under the buoyancy effect leading to a cold downdraught. This would also be accompanied by a cool radiant temperature. Along with the change in temperature, where an occupant experiences unshaded sunlight there may well be an asymmetric temperature field leading to greater discomfort.

Heat reflecting and heat absorbing glazing
These products are usually considered for application in situations where overheating poses a risk. There are several ways of achieving this:
● using 'body tinted' glass, which increases absorption;
● using reflective coatings, which increase the reflected component and, usually, reduce the directly transmitted component;
● using combinations of body tinted and reflective coatings.

Body tinted glass is normally available in a range of colours including grey, green, bronze and blue. The tinting is produced by the addition of small amounts of metal oxides during production, and it is present throughout the thickness of the glass. The effect is to increase the absorption of the radiation within the glazing; the heat absorbed must be dissipated as the glass temperature increases. The warmth of the glass transmits heat inwards as well as outwards. Because of this the body tinted layer would normally be installed as the outer pane of a multipane unit. Though body tinted glass has an effect on heat transmission, it also has aesthetic implications.

Reflective coatings are available in a wide range of colours and with a wide range of performance specifications. It is easier to specify and produce a glass with specific properties for a specific application than with body tinted varieties. The coating is applied to the surface of the glass, which must be installed on the side facing in towards the cavity of a sealed unit. For improved solar heat gain attenuation, a double glazed unit with a reflective outer layer is combined with a low emissivity coated inner layer to reflect outwards the heat which is transmitted. The reflected component can also be increased by changing the angle of incidence: the more acute the angle, the greater the reflection.

It must be remembered that a reduction in solar heat gain can only be achieved at the cost of reducing daylight transmission, though some tinting and reflective products are more selective than others. In hot climates, glazing is specified to reduce heat gain. In temperate climates a balance must be struck between control of summer heat gain and the benefits of winter sun; to achieve this second aim, reflective coated glass has a better performance. Avoidance of glare and the provision of some natural light and view, plus the fact that higher levels of natural daylight are required, are also considerations. No two situations are quite the same and it is important to consider the full range of options before choosing a particular product or glazing system.

Photochromic, thermochromic and electrochromic glass
Each of these terms describes a variety of glazing in which the transmission properties are variable. Extensive opportunities exist for the development of some of these technologies to allow dynamic control of light and heat gain to match building and occupant requirements, especially for commercial buildings.

Photochromic devices change transmission in response to prevailing radiation levels; they react automatically to light levels. Small examples have been in everyday use for some years in the form of sunglasses and spectacles, but there are considerable technical problems to scaling up photochromic glass to normal window size.

Thermochromic glass has changing optical properties in response to temperature variations. It has a laminated structure incorporating a chemical which turns opaque at around 30°C. As it reacts to heat it may not be so suitable for windows, since it could react to the internal temperature and again cannot be independently controlled.

The most refined and controllable of the three options is electrochromic glass, the properties of which can be changed by the application of a small electrical current, especially since it can be controlled by the occupants, a major factor in workplace satisfaction. The electrical signal reduces the transmission capacity of the electrochromic layer between two sheets of glass, which can be as low as 10 per cent in some cases, affecting not only daylight but also solar heat.
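The balance between conductive loss and useful solar gain that Tables 5.1 and 5.2 describe can be sketched as a simple seasonal calculation. The function below is a minimal illustration; the degree-hour and solar gain figures are assumed for the example, not taken from the tables.

```python
def effective_u_value(u_value, solar_gain_kwh_m2, degree_hours_kkh):
    """Net seasonal U-value for a glazing unit, W/m2K.

    u_value           -- centre-pane U-value (W/m2K)
    solar_gain_kwh_m2 -- useful solar gain over the heating season (kWh/m2)
    degree_hours_kkh  -- heating-season degree-hours in thousands of Kh
                         (e.g. 2500 degree-days x 24 h = 60 kKh; assumed)
    """
    # Seasonal loss per m2 is u_value * degree-hours (kWh/m2); crediting
    # the solar gain and dividing back by the degree-hours gives the
    # net "effective" U-value.
    return u_value - solar_gain_kwh_m2 / degree_hours_kkh

# Illustrative: double glazing (U = 2.8) facing south versus north.
south = effective_u_value(2.8, solar_gain_kwh_m2=100, degree_hours_kkh=60)
north = effective_u_value(2.8, solar_gain_kwh_m2=30, degree_hours_kkh=60)
```

On these assumed figures the south-facing unit nets out at roughly 1.1 W/m2K against 2.3 W/m2K for the north, the same pattern of orientation-dependent net U-values that Table 5.2 records.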
The latest version from Pilkington is EControl glass which can, at the flick of a switch, cut out over 80 per cent of solar radiation. Its construction consists of complex multi-layered transparent coatings. Pilkington is also developing a solid state electrochromic glass, reducing insolation by about 70 per cent. It will be available in a range of sizes up to 3.2 metres. Capital cost savings in terms of reduced cooling requirements and the exclusion of blinds, plus revenue savings in respect of lower energy costs, make electrochromic glass an attractive option.

Romag, a company specialising in laminated glass, has joined with BP Solar to produce a composite glass which incorporates PV cells. It can also achieve an airborne sound insulation level of 42 dB using thicker internal glass and a special sound insulating gas between the panes. It will be marketed as PowerGlaz and should be available towards the end of 2004.

Pilkington has recently marketed a self-cleaning or 'hydrophilic' glass known commercially as 'Pilkington Activ'. Rainwater forms an overall film on the glass rather than collecting in drops that deposit dirt which remains after drying. This should offer significant revenue cost savings in maintenance.

Chapter Six
Insulation

Warmth is a valuable commodity and it will seek every possible means of escape from walls, roofs, windows and floors.

Warmth is a valuable commodity and it will seek every possible means to escape from a building. Walls, roofs, floors, chimneys and windows are all escape routes, and most UK buildings make escape easy. Increased levels of insulation are a cost-effective way of reducing heating energy consumption. In several domestic and other small buildings it has already been demonstrated that the additional costs of insulation can be offset against a much reduced cost for the heating system involving a whole building radiator and central boiler option.

The main heat transfer process for solid, opaque building elements is by conduction. Heat flow through building components can be modified by the choice of materials: generally, the higher the density, the greater the heat flow. Since structural components are often, of necessity, rather high in density, they are unable to provide the same level of resistive insulation. It may be necessary to provide additional layers of insulation around them (e.g. chimneys) to prevent such elements acting as weak links or 'cold bridges' in the thermal design.

Thermal insulation is used to reduce the magnitude of heat flow in a 'resistive' manner. Since air provides good resistance to heat flow, many insulation products are based upon materials that have numerous layers or pockets of air trapped within them. Such materials are thus low density and lightweight and, in most cases, not capable of giving structural strength. When specifying insulation materials it is important to avoid those which are harmful to the environment, such as materials involving chlorofluorocarbons (CFCs) in the production process, and to select materials with zero ozone depletion potential (ZODP).

Insulation materials fall into three main categories:
● Inorganic/mineral – these include products based on silicon and calcium (glass and rock) and are usually evident in fibre boards, e.g. glass fibre and 'Rockwool'.
● Synthetic organic – materials derived from organic feedstocks based on polymers.
● Natural organic – vegetation-based materials like hemp and lamb's wool which must be treated to avoid rot or vermin infestation.

The range of insulation options
There are numerous alternatives when it comes to choosing insulation materials. They differ in thermal efficiency and in offering certain important properties like resistance to fire and avoidance of ozone depleting chemicals. Some also lose much of their insulating efficiency if affected by moisture: moisture can build up in the insulant, reducing its insulating value. Some cause skin irritation and it is advisable to wear protective gear during installation. So, at the outset, it is advisable to understand something about the most readily available insulants.

The thermal efficiency of an insulant is denoted by its thermal conductivity, termed lambda value, measured in W/mK; the lower the value, the more efficient the material. The thermal conductivity of a material 'is the amount of heat transfer per unit of thickness for a given temperature difference' (Thomas, R. (ed.) (1996) Environmental Design, E & FN Spon, p. 10). Technically it is a measure of the rate of heat conduction through 1 cubic metre of a material with a 1°C temperature difference across the two opposite faces.

Health and safety
There is a health issue with fibrous materials. For fibrous materials such as glass and mineral fibres there is a theoretical risk of cancer and non-malignant diseases like bronchitis. This is a matter that is still under review (Thomas, R. (ed.) (1996) Environmental Design, E & FN Spon).

Inorganic/mineral-based insulants
Inorganic/mineral-based insulants come in two forms: fibre and cellular.

Fibre

Rock wool
Rock wool is produced by melting a base substance at high temperature and spinning it into fibres, with a binder added to provide rigidity. It is vapour and air permeable due to its structure. May degrade over time. Manufactured into fibres, batts or boards. Lambda value 0.033–0.040 W/mK.

Glass wool
As for rock wool. Lambda value 0.040 W/mK.

Loose fill fibre insulants should not be ventilated to internal habitable spaces.

Cellular

Cellular glass
Manufactured from natural materials and over 40 per cent recycled glass.
It is impervious to water vapour and is waterproof. It has a high insulation value, is resistant to decay, vermin-proof and has high compressive strength, as well as being CFC and HCFC free. It is also non-combustible, odourless, non-toxic and non-irritant. Typical proprietary brand: Foamglas by Pittsburgh Corning (UK) Ltd. Lambda value 0.037–0.047 depending on particular application.

Vermiculite
Vermiculite is the name given to a group of geological materials that resemble mica. When subject to high temperature the flakes of vermiculite expand, due to their water content, to many times their original size to become 'exfoliated vermiculite'.

There has been the suggestion that fibrous materials constitute a cancer risk; however, they are currently listed as 'not classifiable as to carcinogenicity in humans'. In general, cellular materials do not pose a health risk and there are no special installation requirements.

Organic/synthetic insulants
Organic/synthetic insulants are confined to cellular structure:

EPS (expanded polystyrene)
Rigid, flame retardant cellular insulation, free from CFCs and HCFCs. Lambda value 0.033–0.040 W/mK.

XPS (extruded polystyrene)
Closed cell insulant, water and vapour tight, CFC and HCFC free. Lambda value 0.025–0.028 W/mK.

PIR (polyisocyanurate)
Cellular plastic foam, vapour resistant, dimensionally stable, with good fire resistance; available CFC and HCFC free.

Phenolic
Rigid cellular foam with a very low lambda value; vapour tight; available CFC and HCFC free. Lambda value 0.018–0.019 W/mK.

Natural/organic insulants
These have a fibre structure:

Cellulose
Mainly manufactured from recycled newspapers. Treated with fire retardant and pesticides. Lambda value 0.038–0.040 W/mK.

Sheep's wool
Must be treated with a boron and a fire retardant. Lambda value 0.040 W/mK.

Flax
Treated with polyester and boron. Lambda value 0.037 W/mK.

Straw
Heat treated and compressed into fibre boards. Treated with fire retardant and pesticide. In its present day form it should be much more reliable than the strawboard of the 1960s, which had a tendency to germinate. Lambda value 0.037 W/mK.

Hemp
Under development as a compressed insulation board. A highly eco-friendly material: it grows without needing pesticides and produces no toxins. Initial tests have used hemp as a building material mixed with lime and placed like concrete. It can be used as a wall material with a high thermal efficiency. Test houses have proved as thermally efficient as identical well-insulated brick built houses built alongside the hemp examples.

The ecological preference is for materials derived from organic or recycled sources and which do not use high levels of energy during production. Embodied energy, that is, energy involved in the extraction and manufacturing process, is also a factor to consider. Insulation materials derived from mineral fibres tend to be among the lowest in embodied energy and also CO2 emissions. However, overall the use of insulation saves many times the embodied energy of even the worst cases, for example 200 times for expanded polystyrene and 1000 times for glass fibre. Disposal may have to be at specified sites.

Main points
Insulation materials should be free from HFCs and HCFCs. The choice of insulation material is governed primarily by two factors: thermal conductivity and location in the home. However, there are certain overriding factors which will be described below.
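The lambda values listed above combine into a whole-construction U-value through the layer resistances (resistance = thickness divided by lambda). A short sketch; the layer build-up and the standard surface resistances are illustrative assumptions, not a specification from the text:

```python
def u_value(layers, r_si=0.13, r_se=0.04):
    """U-value (W/m2K) of a construction.

    layers     -- sequence of (thickness_m, lambda_W_per_mK) pairs
    r_si, r_se -- internal/external surface resistances (typical
                  standard values, assumed here)
    """
    # Each layer contributes d / lambda to the total resistance;
    # the U-value is the reciprocal of the whole resistance chain.
    r_total = r_si + r_se + sum(d / lam for d, lam in layers)
    return 1.0 / r_total

# Illustrative cavity wall: 102 mm brick, 150 mm mineral fibre slab
# (lambda 0.035), 100 mm lightweight block, 13 mm plaster.
wall_u = u_value([(0.102, 0.77), (0.150, 0.035), (0.100, 0.19), (0.013, 0.16)])
```

With these assumed layers the wall comes out at about 0.19 W/m2K, which shows why insulation thickness, not the structural layers, dominates the result.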
Table 6.1 Summary of comparative performance of insulation materials

Expanded polystyrene slab: 0.035 W/mK
Extruded polystyrene: 0.030 W/mK
Glass fibre quilt: 0.040 W/mK
Glass fibre slab: 0.035 W/mK
Mineral fibre slab: 0.035 W/mK
Phenolic foam: 0.020 W/mK
Polyurethane board: 0.025 W/mK
Cellulose fibre: 0.035 W/mK

Table 6.1 shows the thermal conductivity of the main insulants. Finally, there is the factor of internal strength or friability; Rockwool products are among the most stable in this respect. Extruded polystyrene foams are attractive to house builders because they have good water resistance and a stiffness that enables them to be used in cavities.

High and superinsulation
In recent years attention has been focused towards the use of very thick layers of insulation within the building fabric in order to minimise heat flow. This technique has become known as superinsulation. Superinsulation is associated with several design features:
● To qualify as superinsulated the building fabric should have U-values that are less than 0.2 W/m2K for all major non-transparent elements, and often below 0.1 W/m2K.
● A broader definition of superinsulation is one which specifies a maximum overall building heat loss, rather than individual component values, which permits 'trade-offs' within certain limits, for example by an allowance for solar gain.
● Insulation thickness is often constrained by accepted construction techniques, for instance by allowable cavity widths in cavity wall construction.

The use of superinsulation has so far been best demonstrated at the domestic scale. This may be partly due to the problems of overheating experienced in many larger, deeper plan commercial buildings, problems which override the benefits of reduced winter heating requirements. In the future, buildings which exhibit less tendency to overheat due to better environmental design may modify the priorities and make superinsulation attractive in all circumstances where buildings experience cold seasons.

In the case of low-energy housing, the typical thickness of insulation material is likely to be of the order of 150 mm in walls and 300 mm in roofs (Figure 6.1); superinsulated walls may have 200–300 mm, with 400 mm in the roof (Figure 6.2). With cavities of 200–300 mm width it is essential to have rigid wall ties of either stainless steel or tough rigid plastic. Achieving a superinsulation standard also requires a high level of air tightness of the building envelope, which means that there will need to be trickle ventilation or even mechanical ventilation with heat recovery to reinforce the 'stack effect' in order to provide one to two air changes per hour.

Figure 6.1 Section, typical low energy construction
Figure 6.2 Superinsulation in the Autonomous House, Southwell (courtesy of Robert and Brenda Vale)

The Jaywick Sands development is a social housing project which is designed on sustainability principles. Its 'breathing' walls consist of partially prefabricated storey height structural panels (Figure 6.3). They are filled with 170 mm Warmcell insulation and clad with 9 mm sheathing board faced with a breather membrane. The exterior finish is western red cedar boards on battens. Timber framed windows are triple glazed with Low-E coatings and an argon gas filled cavity, achieving a U-value of 1.85 W/m2K. The floor is a pot and beam precast concrete slab with 60 mm rigid insulation on the upper surface. It can be argued that the insulation would have been better on the underside of the concrete to allow the slab to provide a degree of thermal mass (the scheme is described in detail in The Architects Journal, 23 November 2000).

Figure 6.3 Low energy timber panel housing, Jaywick Sands, Essex

On mainland Europe solid wall construction is much more common than in the UK. An example is the Zero-energy House at Wadenswil, Switzerland. The structural wall consists of 150 mm dense concrete blocks. These are faced with 180 mm of extruded polystyrene insulation protected by external cladding; the walls have a U-value of 0.2 W/m2K. The roof has 180 mm of mineral fibre insulation, giving it a U-value of 0.15 W/m2K. North facing windows are quadruple glazed, achieving a U-value below 1 W/m2K. Air tightness is a prime consideration at this level of energy efficiency: pressure tested to 50 pascals (Pa), the rate of air change was 0.4 per hour. Space heating is supplied by solar collectors and delivered in pipes embedded in the concrete floors; polycarbonate honeycomb collectors absorb solar radiation to heat domestic water to 25°C even on cloudy days. This is supplemented by a heat storage facility and a backup liquid petroleum gas (LPG) heater unit. The annual energy consumption is around 14 kWh/m2 excluding solar energy (Figure 6.4).

Figure 6.4 Section of the Wadenswil House (outer finish: 3 mm bituminous plastic)
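The fabric U-values quoted for these houses imply the insulation thicknesses involved. A minimal rearrangement of the U-value formula; the 0.3 m2K/W allowance for the rest of the construction is an assumption for illustration:

```python
def insulation_thickness(target_u, lam, r_other=0.3):
    """Insulation thickness (m) needed to reach a target U-value.

    lam     -- thermal conductivity of the insulant (W/mK)
    r_other -- resistance of the structure plus surface films (assumed)
    """
    # The insulation must supply whatever resistance the rest of the
    # construction does not; thickness = resistance * lambda.
    r_needed = 1.0 / target_u - r_other
    return r_needed * lam

# Mineral fibre (lambda 0.035) against the superinsulation benchmarks:
t_for_02 = insulation_thickness(0.2, 0.035)   # about 0.165 m
t_for_01 = insulation_thickness(0.1, 0.035)   # about 0.34 m
```

Roughly 165 mm for a 0.2 W/m2K element and 340 mm for 0.1 W/m2K, consistent with the 150 mm/300 mm and 200–400 mm figures quoted for low-energy and superinsulated construction.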
Transparent insulation materials
Transparent insulation materials (usually abbreviated to TIMs) are a class of product which make use of particular materials to enhance the solar heat gain whilst simultaneously reducing the heat loss by conduction and radiation. The technology has similarities to the passive solar thermal mass wall designs already described, except that the gap between the glazed outer skin and the surface of the wall which faces into it contains insulation which is transparent rather than just air. The insulation allows transmission of the incoming solar radiation but acts as a barrier to conductive and radiative heat loss. The Freiburg Solar House shows Trombe walls with some blinds lowered (Figure 5.5).

Aerogels
Aerogels are materials that are mostly air, usually around 99 per cent by volume, and can be fabricated from silica, metals, even rubber. In the case of the silica aerogel it consists of tiny dense silica particles about 1 nanometre across which link up to form a gel. They are sometimes called 'frozen smoke' due to their translucent appearance. They are extremely light: a cubic metre of silica glass would weigh about 2000 kilograms, whereas a silica aerogel block of the same dimensions would weigh 20 kilograms. Despite this, aerogels are relatively strong.

Aerogels are excellent insulators, having about one hundredth the thermal conductivity of glass. They are a form of translucent insulation material which is located within a glazing sandwich. Double glazing which replaced the gap with an aerogel would improve the insulation value by a factor of three as against the very best current multiple glazing. It would be possible to achieve a 99 per cent vacuum between the panes since they are supported by a solid. However, even with a thin aerogel sandwich the window would have a slightly frosted appearance.

The thermal properties of aerogels also make them ideal for harvesting solar heat. Flat plate solar panels collect heat but then radiate it back into space; faced with a glass aerogel screen, the heat would be retained and radiated into the interior of the building. An aerogel glass sandwich would provide a one-way barrier to the re-radiation of heat from the absorbing surface, retaining absorbed heat very effectively. This would have an obvious application in active solar panels and also in solar walls. The outer surface of the wall would be coated black to maximise absorption, and a blind behind a glazed rain screen would minimise an excessive build-up of heat in summer (Figure 6.6).

Insulation – the technical risks
The use of high levels of insulation brings with it some risks. Some problems relate to the presence of moisture within the building
fabric which, because the temperature gradient has been changed by the presence of insulation, condenses into water. This can lead to several difficulties such as rotting, rusting or other degradation of components. Some insulation materials absorb moisture and, when wet, their insulating effect is very much reduced; this can in addition pose a safety risk if it comes into contact with electrical circuitry. Cavity insulation should be treated with a water repellent.

Figure 6.5 Transparent/translucent insulation wall construction

The problem mainly occurs at the junction between main structural components, which then become the main cold bridges or 'thermal bridges', for example:
● at the junction of roof and wall, or wall and floor;
● around windows and doors, particularly frames and lintels;
● at positions where structural framing elements connect with roofs, walls and floors;
● around apertures for building services – electrical, water, drainage, etc.

It is on the inner surfaces of such cold bridges that condensation will occur. When considering floors, the majority of the heat loss occurs at a floor's exposed edges, so particular attention must be paid to ensuring adequate and correctly designed insulation details at floor edges. The answer, as a general principle, is to ensure continuity of insulation.

The use of vapour barriers becomes more important as insulation levels rise, since it is the appropriate construction and positioning of such layers that reduces condensation risk. As stated earlier, vapour barriers should always be on the warm side of the insulation, otherwise they will actually cause condensation. It is advisable to carry out a technical assessment of the condensation risk if this is suspected of being a problem. A large proportion of the reported faults associated with condensation are attributable to poor workmanship. It is even more important to design components correctly and ensure that the construction is carried out according to the specification.

Chapter Seven
Domestic energy

Electricity produced by a stand-alone system within, or linked to, a building is called 'embedded' energy generation. By far the most convenient form of renewable energy system which can be linked to housing is photovoltaic cells.

Photovoltaic systems
As stated earlier, PV cells have no moving parts, create no noise in operation, and seem attractive from both aesthetic and scientific perspectives. In several countries there are substantial state subsidies to kick-start the PV industry so that costs can quickly fall due to the economy of scale. As unit costs fall, PV arrays attached to individual houses will become increasingly evident. One of the pioneer examples of a domestic application in the UK is the Autonomous House by Robert and Brenda Vale in Southwell, Nottinghamshire (Figure 7.1).

Figure 7.1 Remote PV array, Autonomous House, Southwell, also with an indication of the sunspace
Power output is constrained by the availability of light falling on the cell, though significant output is still possible with overcast skies. The development of PV cells is gathering pace, as indicated by the fact that the manufacturing capacity for PVs increased by 56 per cent in Europe and 46 per cent in Japan alone between 2001 and 2002. As a result, a number of national and international development programmes now exist to help exploit the opportunities offered. Germany has been one of the frontrunners in promoting the application of PVs to buildings. Its Renewable Energy Law offered significant added value to the production of electricity from domestic PV roofs. Its initial target of 100 000 PV roofs has been surpassed; this law has recently been re-enacted and a further 100 000 PV roofs target has been instigated.

The principle of photovoltaic cells (PVs)
PVs are devices which convert light directly into electricity. At present most PVs consist of two thin layers of a semi-conducting material, each layer having different electrical characteristics. In most common PV cells both layers are made from silicon, but with different, finely calculated amounts of impurities: p-type and n-type. The introduction of impurities is known as 'doping'. As a result of the doping, one layer of silicon is negatively charged (n-type) and has a surplus of electrons; the other layer is given a positive charge (p-type) and an electron deficit. These two neighbouring regions generate an electrical field. When light falls on a PV cell, electrons are liberated by the radiative energy from the sun and are able to migrate from one side to the other. Some of the electrons are captured as useful energy and directed to an external circuit (Figure 7.2). Cells with different characteristics and efficiencies can be created by using different base and doping materials. The output is direct current (DC), which must be changed to alternating current (AC) by means of an inverter if it is to be fed to the grid.

The capacity of cells to convert light into electricity is defined by watts peak (Wp). This is based on a bench test and is the power generated by a PV under light intensity of 1000 watts per square metre, equivalent to bright sun. It is a laboratory measurement and does not necessarily give a true indication of energy yield. The efficiency of a cell is a function of both peak output and area.

The greatest potential growth area is with building integrated PVs within facade and roof components. Examples of PV integrated cladding include the adaptation of rain screens, roof tiles and windows. The advantages of building integrated systems are:
● clean generation of electricity;
● no additional land requirements;
● generation at its point of use within the urban environment, thus avoiding infrastructure costs and line losses.

To realise a usable amount of electricity, cells are wired into modules which,
in turn, are electrically connected to form a string; one or more strings form an array of modules. It must be remembered that a number of linked cells produces a significant amount of current, therefore during installation solar cells should be covered whilst all the electrical connections are made.

Figure 7.2 Photovoltaic cell structure and function (solar radiation passes through the glass cover and electrodes into the n-doped silicon, the space-charge zone and the p-doped silicon above the substrate backing, driving the movement of electrons to the external circuit)

At the time of writing the most efficient PVs are monocrystalline silicon, consisting of wafers of a pure crystal of silicon. They achieve a peak output of about 15 per cent; that means that 15 per cent of daylight is converted to electricity when daylight is at its maximum intensity. Due to the production processes involved these cells are expensive. In appearance cells are blue and square. The solar cell size of around 10 cm x 10 cm has a peak output of about 1.5 watts. The cells are sandwiched between an upper layer of toughened glass and a bottom layer of various materials including glass, Tedlar or aluminium.

Polycrystalline silicon
In the production process of this cell, molten silicon is cast in blocks containing random crystals of silicon. It is cheaper than a monocrystalline cell but has a lower efficiency, ranging between 8 and 12 per cent.

A variation of silicon technology has been developed by Spheral Solar of Cambridge, Ontario. It consists of 1 mm diameter silicon balls made from waste silicon from the chip making industry. The core of each sphere is doped to make it a p-type semi-conductor and the outer surface to make it an n-type semi-conductor. Each sphere is therefore a miniature PV cell.
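The watts-peak rating, cell area and annual output figures quoted in this chapter hang together arithmetically. A sketch; the 36-cell module, London insolation and performance ratio are assumed illustrative values, not figures from the text:

```python
def cell_efficiency(peak_watts, area_m2, stc_irradiance=1000.0):
    """Efficiency implied by a watts-peak rating.

    Wp is measured at 1000 W/m2, the 'bright sun' bench-test condition,
    so efficiency = Wp / (area * 1000).
    """
    return peak_watts / (area_m2 * stc_irradiance)

def annual_yield_kwh(area_m2, efficiency, insolation_kwh_m2=1000.0,
                     performance_ratio=0.75):
    """Rough annual output: plane-of-array insolation (about
    1000 kWh/m2 assumed for a well-oriented UK roof) times efficiency,
    derated for inverter, temperature and wiring losses."""
    return area_m2 * efficiency * insolation_kwh_m2 * performance_ratio

# The 10 cm x 10 cm monocrystalline cell rated at about 1.5 Wp:
eff = cell_efficiency(1.5, 0.10 * 0.10)    # 0.15, i.e. 15 per cent

# A hypothetical module of 36 such cells, and 1 m2 of cells in London:
module_wp = 36 * 1.5                       # 54 Wp
yearly = annual_yield_kwh(1.0, eff)        # roughly 112 kWh/year
```

The result of roughly 112 kWh is close to the 111 kWh per square metre per year quoted in this chapter for monocrystalline cells in London, which suggests the quoted figure assumes a similar level of system losses.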
As a rough guide. The system has a claimed efficiency of 11 per cent and can be formed to almost any profile. This showed that the amount of electricity generated by a PV array rated at 1 kWp in one year varies considerably between different technologies. a peak efficiency of 6 per cent is achievable. On low pitch or flat roofs it is advisable to mount the cells on tilt structures at the correct orientation. The optimum angle of tilt depends on latitude. which should make it attractive to architects. costs range between £2 and £4 per Wp. It is planned to market it during 2005. provided it is not overshadowed by trees or other buildings. a flat roof can still deliver 90 per cent of the optimum output. However. Unlike the crystalline cells it is capable of bulk production and is therefore potentially much cheaper. A sloping roof facing a southerly direction is the ideal situation. Double junction amorphous silicon cells were close behind. For example. tiles or shingles to maintain a traditional appearance. in the UK climate. east and west orientations can produce significant amounts of electricity. It is the first of a new breed of PVs based on thin film technology. Cadmium telluride (CdTe) and copper indium diselenide (CIS) These cells are a further development of thin film technology. 83 . Amorphous silicon This cell does not have a crystalline structure but is stretched into thin layers which can be deposited on a backing material which can be rigid or flexible. This is because these cells are more effective in the cloudy conditions so prevalent in the UK. The spheres are contained within a flexible aluminium and plastic matrix producing an effect similar to blue denim. so it is very much a case of ‘buyer beware’. Single junction amorphous silicon cells were the poorest performers. it then could become a cost-effective option to use solar slates. in London 1 square metre of monocrystalline PVs could produce 111 kWh of electricity per year. In London it is 35 . 
A housing project which integrates PVs into its elevations and roofs is the Beddington Zero Energy Development (BedZED) in the London Borough of Sutton, designed by Bill Dunster with Arup as the services engineers (Figure 7.4: southern elevation, BedZED housing development, South London). Originally the intention was to use their power to meet the needs of the buildings. The problem, however, was the extent of the expected payback time at current low energy prices. Their purpose is now to provide battery charging for a pool of electric vehicles for the residents, which has the advantage of avoiding conversion to AC current.

It is important to ventilate PV cells since their efficiency falls as temperature increases. This requirement has been put to good use in a restaurant in North Carolina, USA. Its integrated roof system has 32 amorphous PV modules serving a 20 kWh battery facility; this supplements demand at peak times and also bridges interruptions in the grid supply. A fan circulates air through a series of passages beneath the modules. As solar heat builds up, the fan cuts in automatically to circulate heat away from the PVs and direct it in a closed loop to a heat exchanger. What makes this system special is the fact that the warmth that builds up under the cells is harnessed to heat water which supplements space heating. This technology will save the restaurant about $3000 per year in utility and hot water costs, at the same time avoiding 22 680 kg of CO2 emissions (Figure 7.5: diagrams of the PV heat recovery system, showing air circulated behind the modules carrying heat to an exchanger which pre-heats the hot water tank; courtesy of CADDET). See also Figure 18.16, p. 234.

Energy output

The energy output from a monocrystalline cell varies with insolation level in an almost linear fashion across its operating range. Output is adversely affected by high operating temperature, with a drop in efficiency from about 12 per cent at 20°C to about 10 per cent at 50°C. Photovoltaic panels would need active cooling in many building situations to maintain maximum output during summer months. Clearly this is impractical and costly, and at present the drop in efficiency has to be accepted. An alternative is to encourage ventilation of the panels by suitable design of their location and position in order to permit air flow and natural ventilation cooling to front and, if possible, rear of the array.

It is often the case that the supply of electrical energy is not concurrent with demand, perhaps because of occupancy and use patterns. In such situations two alternatives exist: either the excess power can be stored in some form of battery, or it can be used to heat water to be stored in an insulated tank to provide space heating. The former of these options causes an energy loss in the conversion process and additionally requires the provision of a suitable and substantial battery store. Alternatively it can simply be offloaded to the electricity grid.

The preferred option in most urban situations at the present time is the grid-connected system, though a sophisticated control system is required to ensure the output matches the grid phase. This also provides a backup supply when PV generation is insufficient. Since most uses of electricity require alternating current (AC), an inverter must be employed. In the US, PVs are now available from the Applied Power Corporation which deliver AC electricity, which means they can be connected directly to the grid; an AC inverter is integrated with the cells (CADDET Renewable Energy Newsletter, March 2000).

A major drawback at present is the price at which the utility companies purchase the excess PV production, currently about 5p per unit as against the utility price of approximately 15p. The UK has some of the worst buy-back rates in Europe. Pressure is mounting for the adoption of reversible meters that accumulate credit units from a renewable on-site installation, but this is being resisted by some energy companies. A combination of high capital cost and miserly buy-in rates is seriously undermining the adoption of this technology by householders in the UK, in contrast to Germany where subsidised demand is outstripping manufacturing capacity. It may be easier to justify the cost of PV cladding materials for commercial buildings, where occupancy patterns coincide with peak production levels.

PV cladding materials can now be obtained in different patterns and colours depending upon the nature of the cells and the backing material to which they are applied. This offers an increasing range of facade options which might be exploited by architects to create particular aesthetic effects. Thin film photovoltaic systems, which basically have a coating layer applied to glass, look particularly promising. In the Netherlands PV cells are being mounted on motorway sound barriers. The UK Highways Agency gave approval in 2004 for PVs to be mounted in panels alongside motorways, and a pilot project array has been installed on the M27 in Hampshire to feed directly into the national grid.

Micro-combined heat and power (CHP)

It is interesting how two nineteenth century technologies, the Stirling engine and the fuel cell, […]. The Stirling engine is now considered a firm contender for the micro-heat-and-power market. Heat can be drawn off the engine to provide space heating for a warm air or wet system.
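The drop in efficiency quoted above, from about 12 per cent at 20°C to about 10 per cent at 50°C, can be turned into a simple derating estimate. A minimal sketch in Python, assuming the fall is linear over that range (the text describes it only approximately); the array area and insolation figures are illustrative and not taken from the text:

```python
def pv_efficiency(cell_temp_c, eff_20=0.12, eff_50=0.10):
    """Module efficiency, interpolated linearly between 20°C and 50°C.

    Roughly 0.067 percentage points are lost per degree on these figures.
    """
    slope = (eff_50 - eff_20) / (50.0 - 20.0)
    return eff_20 + slope * (cell_temp_c - 20.0)

def pv_output_w(irradiance_w_m2, area_m2, cell_temp_c):
    """Electrical output, assuming output is linear with insolation level."""
    return irradiance_w_m2 * area_m2 * pv_efficiency(cell_temp_c)

# A hypothetical 10 m2 array under 800 W/m2 of insolation:
cool_output = pv_output_w(800, 10, 20)   # 960 W at 12% efficiency
hot_output = pv_output_w(800, 10, 50)    # 800 W at 10% efficiency
```

On these assumptions a module loses roughly a sixth of its output between a cool and a hot operating condition, which is why the ventilation of the cells discussed above matters.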
Alternatively it can supply domestic hot water. In one system on the market ('Whisper…'), the top of the engine is heated by a gas burner whilst the lower part is water cooled to 45°C, and surplus electricity can be exported to the grid. The four phases of the Stirling cycle are shown in Figure 7.6: the displacer piston moves gas from the hot end of the chamber to the cold end whether the gas is expanding or contracting; the water cooling, coupled with the heat input, creates a pressure wave which drives the power piston; the alternator generates electricity and also kick-starts the engine; and a planar spring keeps the displacer moving up and down. Larger machines are rated from 15 kW (51 000 btu/h) to 36 kW (122 000 btu/h). Figure 7.7 shows a MicroGen kitchen wall-mounted unit. Centralised power generation is far less efficient; add to this line losses of 5–7 per cent and it is obvious there is no contest.

In summary, the advantages of micro-CHP or micro-cogeneration are:
● It is a robust technology with few moving parts.
● Maintenance is simple, consisting of little more than cleaning the evaporator every 2000–3000 hours (on average once a year).
● Since there is no explosive combustion, the engine produces a noise level equivalent to a refrigerator.
● It is compact, with a domestic unit being no larger than an average refrigerator.
● It operates on natural gas, diesel or domestic fuel oil. In the not distant future machines will be fuelled by biogas from the anaerobic digestion of waste.
● The efficiency is up to 90 per cent, compared with 60 per cent for a standard non-condensing boiler.
● Unlike a boiler it produces both heat and electricity, reducing energy use by about 20 per cent and saving perhaps £200–£300 on the average annual electricity bill.
● It can be adapted to provide cooling as well as heat.

The UK government is keen to promote this technology and it is always worth checking if grants are available. The best source of advice is the Energy Saving Trust.
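The 'no contest' comparison can be made concrete. The up-to-90 per cent figure for micro-CHP and the 5–7 per cent line losses come from the text; the station efficiency assumed for conventional central generation below is an illustrative assumption, not a figure from the text:

```python
def delivered_efficiency(generation_eff, line_loss=0.0):
    """Fraction of the fuel's primary energy that is useful at the dwelling."""
    return generation_eff * (1.0 - line_loss)

# Micro-CHP: up to 90% of the fuel energy is used on site, with no line losses.
micro_chp = delivered_efficiency(0.90)

# Central generation: an assumed 40% station efficiency, less 6% line losses.
central = delivered_efficiency(0.40, line_loss=0.06)

# micro_chp comes to 0.90 against roughly 0.38 delivered from the grid.
```

Even if a different station efficiency is assumed, the on-site unit's use of its own 'waste' heat keeps it well ahead of electricity delivered over the network.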
Fuel cells

Looking towards the next decade, the source of heat and power for many homes could well be the fuel cell. This is an electrochemical device which feeds on hydrogen to produce electricity, heat and water (see Chapter 13 'Energy options'). In January 2004 the first UK domestic-scale fuel cell began operation at West Beacon Farm in Leicestershire. The most common fuel cell at the moment is the proton exchange membrane type (PEMFC), which feeds on pure hydrogen. It has an operating temperature of 80°C and at the moment is 30 per cent efficient. This is expected to improve to 40 per cent.

The farm is owned by the energy innovator Professor Tony Marmont. Rupert Gammon of Loughborough University is the project leader as part of the Hydrogen and Renewables Integration Project (HARI). It is designed to provide entirely clean energy. The hydrogen is extracted from water by means of an electrolyser, which splits water into oxygen and hydrogen by means of an electric current (Figure 7.8: electrolyser at West Beacon Farm). The electricity for the electrolyser is provided by wind, PV and micro-hydro generation. An alternative is to extract H2 from natural gas by means of a reformer, but then it is no longer zero carbon. The fuel cell installation is compact and can fit into a cupboard. It has no moving parts and is therefore almost silent. At the moment it is producing 2 kW electricity and 2 kW heat. A second 5 kW fuel cell from Plugpower is in the process of being commissioned (Figure 7.9: fuel cell installation, West Beacon Farm; courtesy of Intelligent Energy 2004).

The production and storage of hydrogen as the energy carrier are the problems still to be solved satisfactorily. Cracking water into hydrogen and oxygen by electricity is analogous to the sledge hammer and the nut.
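The energy chain at West Beacon Farm, renewable electricity to electrolyser to hydrogen to fuel cell, can be sketched as a round-trip calculation. Only the 30 per cent fuel cell efficiency comes from the text; the electrolyser efficiency used here is an assumed, illustrative figure:

```python
def round_trip_electricity_kwh(input_kwh, electrolyser_eff=0.70, fuel_cell_eff=0.30):
    """Electricity recovered after storing renewable energy as hydrogen.

    electrolyser_eff is an assumption for illustration; fuel_cell_eff is the
    30% quoted for the current PEMFC, expected to improve to 40%.
    """
    hydrogen_kwh = input_kwh * electrolyser_eff   # energy content of the H2 made
    return hydrogen_kwh * fuel_cell_eff           # electricity out of the cell

# 100 kWh of wind/PV input returns about 21 kWh as electricity on these figures;
# much of the balance emerges as heat, which the installation also puts to use.
recovered = round_trip_electricity_kwh(100)
```

The low round-trip figure is one way of reading the 'sledge hammer and the nut' remark: electrolysis plus reconversion is an expensive path in energy terms unless the by-product heat is captured.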
An alternative method of producing hydrogen is to extract it from ethanol derived from biowaste, as has recently been demonstrated at the University of Minnesota. The reactor is, in effect, a compact fuel cell hydrogen generator which would be ideal for vehicle application. It can be scaled up to provide the hydrogen for grid-connected fuel cells using ethanol fermented from both biowaste and energy crops.

Sanyo plans to launch a domestic fuel cell using natural gas or propane in 2005. It will be used to power TVs, air conditioners, refrigerators and PCs as well as catering for domestic hot water requirements. It plans to export the system to the US and Europe. Other companies like Mitsubishi Heavy Industries Ltd and Matsushita Electrical Industrial Co. are developing a similar system, also due on the market in 2005.

Currently under development is a microbial fuel cell which avoids the need for hydrogen. It converts sewage to electricity. Bacterial enzymes break down the sewage, liberating protons and electrons. The system then behaves like a proton exchange membrane fuel cell, with protons passing through the membrane and electrons diverted to an external circuit to provide useful electricity (see pp. 255–256).

Embodied energy and materials

It is not just the energy consumed during the life of a building which has to be considered. Energy is involved in the extraction, manufacture and transportation of building materials; this is known as the 'embodied energy' and directly relates to the gross carbon intensity of a material. At the moment the consensus is that a building consumes much more energy during its lifetime than is involved in extraction, manufacture and transportation. However, it will increasingly be the case that the embodied energy will be a significant fraction of the total as buildings become more energy efficient.

It can still be difficult to assess the full impact at present because of the scarcity of detailed information. This arises from a natural reluctance on the part of manufacturers to disclose too much information about their commercial processes, and also because of natural variations in techniques which can lead to a wide band of values for similar products. A number of assessment tools and techniques are becoming available.

The overall environmental credentials of a building are affected by a number of factors:
● energy used over its estimated lifetime;
● energy used in the construction process;
● energy used in demolition;
● materials used in refurbishment;
● toxic substances used in the production process;
● the presence of pollutants in a material such as volatile organic compounds (VOCs);
● the extent to which recycled materials have been used (see Chapter 18);
● level of recyclable materials at demolition.

It is clear, however, that the area of materials energy and environmental effect is one which can only grow in coming years. It is also a sphere where much more information is required in order to exploit opportunities associated with carbon taxes and other fiscal measures to improve design.

Chapter Eight: Advanced and ultra-low energy houses

Besides designing the Autonomous House in Southwell, the Vales designed a group of ultra-low energy houses at Hockerton in Nottinghamshire. A development that meets all its electricity needs on site, and therefore is not connected to the grid, is an autonomous development. A net zero energy scheme, by contrast, is defined as a development which is connected to the grid where there is at least a balance between the exported and imported electricity; there is an imbalance in cost for reasons stated earlier.

The Hockerton scheme has a number of key features:
● ninety per cent energy saving compared with conventional housing;
● 300 mm of insulation in walls;
● triple glazed internal windows and double glazed conservatory;
● seventy per cent heat recovery from extracted warm air;
● considerable thermal storage due to earth sheltering;
● self-sufficient in water, with domestic water collected from the conservatory roof and reed bed-treated effluent for purposes that require the EU bathing water standard;
● roof-mounted photovoltaics;
● a wind generator to reduce reliance on the grid.

This is a narrow plan, single aspect group of houses fully earth sheltered on the north side, with the earth carried over the roof. The south elevation is completely occupied by a generous sunspace across all the units. It is designed to be a partially autonomous scheme using recycled grey water and with waste products being aerobically treated by reed beds. A wind generator supplements its electricity needs. The community plans to be self-sufficient in vegetables, fruit and dairy products, employing organic permaculture principles. One fossil fuel car is allowed per household and 8 hours' support activity per week is required from each resident. Hockerton is a project designed for a special kind of lifestyle which will only ever have minority appeal. This would not be to everyone's taste, but it is important to demonstrate just how far things can be taken in creating architecture that harmonises with nature (Figures 8.1 and 8.2; see also Figures 5.3 and 5.4).

(Figure 8.1: earth sheltered south and solar west elevations. Figure 8.2: Hockerton overall lifestyle specification, designed by architects Robert and Brenda Vale. The annotated diagram records that five families teamed up to build the row of houses; rainwater for drinking and washing is stored in tanks and a reservoir; soil covered roofs and planting hide the houses from the road; waste water is dealt with by Hockerton's own mini-sewage farm at the side of the large artificial lake, and once treated it runs into the lake and becomes food for the fish; glass, plastic and cans are recycled; wiring and pipes are PVC-free throughout; Eco-Balls are used for washing clothes rather than detergents; showers are fitted, not baths; low-energy light bulbs and water saving WCs are used, though water pours from taps as in normal houses and the house has a TV and video like any other; diverse plants and animals are encouraged, with 5000 native trees planted and 60 species of birds recorded; each adult contributes 16 hours a week on tasks like organic gardening; the families try to walk and cycle rather than use cars and plan to buy an electric vehicle; and a wind turbine would provide electricity but needs planning approval.)

The Beddington Zero Energy Development – BedZED

The innovative Peabody Trust commissioned this development as an ultra-low energy mixed use scheme in the London Borough of Sutton. It consists of 82 homes, 1600 m² of work space, a sports club, nursery, organic shop and health centre, all constructed on the site of a former sewage works – the ultimate brownfield site. Though the Trust is extremely sympathetic to the aims of the scheme, it had to stack up in financial terms. Peabody was able to countenance the additional costs of the environmental provisions on the basis of the income from the offices as well as the homes. It is described in detail in Chapter 18 'State of the art case studies'.

The David Wilson Millennium Eco-House

A demonstration Eco-House has been built in the grounds of the School of the Built Environment, University of Nottingham (Figure 8.3: David Wilson Millennium Eco-House, Nottingham University). It is designed as a research facility and a flexible platform for the range of systems appropriate to housing. Its features are:
● PV tiles integrated into conventional slates providing 1250 kWh/year;
● solar collectors of the vacuum tube type on the south elevation to meet the demand for domestic hot water;
● a ground source heat pump to supplement space heating;
● a light pipe illuminating an internal bathroom and providing natural ventilation;
● a solar chimney to provide buoyancy ventilation in summer and passive warmth in winter;
● a helical wind turbine.

Associated with the Eco-House are several free-standing sun-tracking PV panels tilted to the optimum angle. The output from the energy systems is constantly monitored.

Demonstration House for the Future, South Wales

A competition winning 'House for the Future' has been designed by Jestico + Whiles within the grounds of the Museum of Welsh Life in South Wales. Its two key attributes are sustainability and flexibility. It is capable of occupying a variety of situations: a rural location, a greenfield suburban site or high density urban sites in terrace form. All materials were selected with a view to minimising embodied energy (Figures 8.4 to 8.6: cross-section, internal views, and ground and first floor plans).

The structure of the house consists of a post and beam timber frame prefabricated from locally grown oak. A superinsulated timber stud wall faced with oak boarding and lime render occupies three sides of the building. The void between the timbers is filled with 200 mm of sheep's wool, specially treated, giving a U-value of 0.16 W/m²K. Cellulose fibre provides 200 mm of insulation between the deep rafters, giving the roof a U-value of 0.17 W/m²K; this insulation is manufactured from recycled paper and treated with borax as a flame and insect retardant.

As regards the plan, living space is fluid to accommodate the needs of different occupants, allowing total flexibility and adaptability. Open living and daytime spaces face south whilst more private cellular spaces are on the north side. Internally much of the space is defined by non-load bearing stud partitions. There are some earth block partitions on the ground floor using clay found on the site. These provide thermal mass, supplementing the thermal storage properties of the concrete floor slab. Considerable south facing glazing provides substantial amounts of passive solar energy, and windows on the south elevation are designed to change according to the seasons of the year.
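The U-values quoted for the wall and roof can be checked from first principles: each layer adds a thermal resistance R = thickness/conductivity, and U = 1/ΣR. A sketch in Python; the conductivity assumed for sheep's wool and the standard surface resistances are typical values of my own choosing, not figures given in the text:

```python
def u_value(layers, r_si=0.13, r_se=0.04):
    """U-value in W/m2K from (thickness_m, conductivity_W_per_mK) layers,
    plus standard internal and external surface resistances."""
    total_resistance = r_si + r_se + sum(d / k for d, k in layers)
    return 1.0 / total_resistance

# 200 mm of sheep's wool, assuming a conductivity of about 0.038 W/mK:
wool_wall = u_value([(0.200, 0.038)])
# Roughly 0.18 W/m2K from the insulation alone, close to the 0.16 quoted
# once the oak boarding, lime render and linings add their own resistances.
```

The same arithmetic with the cellulose-filled rafters gives a figure in the region of the 0.17 W/m²K quoted for the roof, so the published values are consistent with the stated insulation thicknesses.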
The number of bedrooms can vary from one to five according to family needs; the house can contract as well as expand.

The energy regime makes maximum use of both passive and active solar systems. Gas is not available on the site. Space heating can be supplemented by a ground source heat pump fed by a 35 m bore hole; a heat pump is driven by electricity, but one unit of electricity produces 3.15 units of heat. A pellet burning wood stove rounds off the space heating. Roof-mounted solar collectors provide water heating for most of the year, and a ridge-mounted wind generator and a PV array producing 800 W go some way to meeting the electricity demand. This should meet about 25 per cent of an average family's demand. When renewable energy technologies become more affordable the house will become self-sufficient in energy. Finally, water conservation measures are an important component of its ecological credentials. Rainwater is collected in a specially enlarged gutter which can store 3 m³. It is mechanically filtered and gravity fed to toilets and washing machine.

The prospects for wood

The House of the Future raises the question of the structural use of timber in buildings. Timber scores well on the sustainability scale, provided it is obtained from an accredited source such as the Forest Stewardship Council. Oak is twice as strong as an equivalent size of other timbers, which means that the cross-section of members can be reduced.

The Weald and Downland Open Air Museum, 7 miles north of Chichester, is a national centre for the conservation and study of traditional timber-framed buildings, many of which are under threat. Its Conservation Centre explores new techniques in greenwood timber construction. Edward Cullinan Architects in association with Buro Happold Engineers have produced an undulating structure which rhymes with the South Downs landscape. The structure is set on an earth sheltered masonry ground floor; the lower storey is temperature controlled to safeguard archival material, and a central row of glue-laminated columns supports the floor of the workshop. The timber structure comprises a clear span gridshell formed out of a weave of oak laths. Unique to the structure is the green jointing of the gridshell laths from freshly sawn oak: the high moisture content of the timber allows it to be formed into the necessary curves and then locked into shape, and once the laths are in place natural drying strengthens the structure. The longest laths are 37 metres. This is the first timber gridshell structure in Britain and should become an icon of sustainable construction (Figure 8.7: interior of the Weald and Downland Conservation Centre, Edward Cullinan and Partners).

An even more ambitious gridshell structure is taking shape in Savill Garden in Windsor Great Park. Architects Glenn Howells won a competition for a visitor centre with a wave form grid structure that differs from the Weald and Downland building in that it is raised above ground, allowing panoramic views of the park. It will be the largest gridshell structure in the UK at 90 m long and 25 m wide, using 80 by 50 mm larch timbers harvested from the Park with oak forming the outer rainscreen. The structure has been designed by Buro Happold, the engineers involved at Weald and Downland.

As a research exercise in multi-storey timber buildings, the Building Research Establishment Centre for Timber Technology and Construction has built a six-storey timber-framed apartment block as a test facility in its vast airship hangar at Cardington (Figure 8.8: Building Research Establishment experimental timber-framed apartments). The building comprises:
● four flats per floor;
● a plan-aspect ratio of c. 2:1;
● platform timber frame;
● brick cladding;
● a single timber stair and lift shaft;
● a timber protected shaft.

The report on the project concludes: 'This high profile project has provided a unique opportunity to demonstrate the safety, benefits and performance of timber frame construction technologies. The results of the tests may well have a profound impact on the house building industry. This project has brought all aspects of construction together, including Regulations. Many Building Regulations, codes and standards are being updated as a result of this project. It has been the most challenging and exciting opportunity to obtain technical backup data for promotion of timber frame in the last 20 years and it has been recognised as one of the most valued projects' (Enjily, V. (2003) Performance Assessment of Six-Storey Timber Frame Buildings against the UK Building Regulations: Design, Construction and Whole Building Evaluation. BRE, Garston).

As well as being a renewable resource, timber also has a good strength to weight ratio, which is why it was used to construct one of the most famous aircraft of the Second World War, the Mosquito. The designers of this aircraft pioneered timber monocoque construction, in which the skin and framework act as a unified whole coping with both compression and tension. The advantage of this system is that it can accommodate curved and flowing shapes combining lightness with strength. It is developments in glue technology which have made this possible.

A tour de force of timber construction is the recently completed Sibelius Hall at Lahti in Finland. The architects are Hanna Tikka and Kimmo Lintula. This concert hall epitomises how timber, used both as a structural and sheeting material, can produce a building of great elegance and beauty. It is a testimony to the mastery of timber developed by the Finns over the centuries and serves to exemplify the versatility of this material as the ultimate renewable resource for construction. The main structural element is laminated veneered lumber (LVL), typically made from Norwegian spruce.

LVL was also chosen for the roof of the Maggie Centre in Dundee. It is finished in stainless steel (Figure 8.9: roof formation, the Maggie Centre, Dundee, courtesy of RIBA Journal; Figure 8.10: Maggie Centre, Dundee, RIBA Building of the Year for 2004).

The Winter Gardens form a spectacular element of the 'heart of the city' project for Sheffield (Figure 8.11: Winter Gardens, Sheffield, 2002; architects Pringle Richards Sharratt). It opens at right angles to the Millennium Galleries, which also integrate a pedestrian route with gallery and restaurant provision. The contrasting space and architectural expression of the two buildings achieve the height of the poetic in urban terms. The space is 65.5 m long and 22 m wide and designed to accommodate, in a frost-free environment, a wide variety of exotic plants, many of which are under threat. Trees such as Norfolk Island Pine and New Zealand Flax occupy the highest central zone of the space, which rises to 22 m. The most striking feature is the laminated larch parabolic arches which support the glass skin, forming a counterpoint to the trees within. Larch was chosen for its durability and minimal maintenance characteristics; in time it will turn a silvery grey. Vents in the roof and at both ends of the building encourage stack effect ventilation. In summer surrounding buildings will provide solar shading; underfloor heating in winter is provided by the city centre district low grade heating scheme. For the citizens of Sheffield it has been a spectacular success.

A useful guide to designing in timber is provided by Willis, A.-M. and Toukin, C. (1998) Timber in Context – A Guide to Sustainable Use, NATSPEC 3 Guide.

The external environment

The external factors to be considered are:
● wind;
● rain;
● solar shading;
● evaporative cooling.

Wind

The UK has one of the most turbulent climates in Europe. In the UK the average wind speed for 10 per cent of the time ranges from 8 to about 12.5 metres per second, the higher figures being in Scotland. At the same time, average wind speeds increase by 7 per cent for every 100 m increase in altitude. The orientation of a property can have a significant impact on the extent to which it is adversely affected by wind, which can create a pressure difference between the faces of a building: positive on the windward side and negative on the lee face. This means that cold air tends to be forced into the windward elevation and warmth sucked out of the lee side (Figure 8.12: wind pressure and infiltration).

Wind speeds can be considerably reduced by the introduction of natural or artificial dampeners, and natural features can be effective wind breaks. Trees are the best option, remembering that deciduous trees are much less effective in winter. As a rule of thumb, the distance from a house to a tree break should be 4–5 times the height of the trees to optimise the dampening effect. Even low level planting creates drag, thereby slowing the wind force. A solid wind break can greatly reduce wind speed in its immediate vicinity, but beyond that zone will cause turbulence. On the other hand, an openwork fence with only 50 per cent solid resistance to wind will moderate wind speed over a much greater area. At the same time a timber fence of this nature will be less likely to become a casualty of gale force winds. Some situations may allow for the protection afforded by earth-berming and buffer spaces. According to the TV gardening personality Monty Don, 'In the British climate, wind is far more of a problem than sunshine and can be drought-inducing in the middle of winter when there is not a ray of sunshine to be seen for days' (Observer Magazine, 19 January 2003). This gives shrubs and trees a further benefit in the degree to which they protect from the drying or desiccating effect of wind.

Wind plus driving rain can affect the thermal efficiency of a property. When brickwork becomes saturated, the thermal conductivity of brickwork or blockwork increases, since moist masonry transmits heat more effectively than in a dry condition. This problem would be cured with a polymer-based render. Climatologists predict that global warming will result in wetter winters and much drier summers, with droughts a regular occurrence.

Summary checklist for the energy efficient design of dwellings

With the predicted growth in the house building sector over the next decade it is important that architects exert maximum pressure to ensure that new homes realise the highest standards of bioclimatic design. The following are recommendations for minimising the use of energy and exploiting natural assets.

Passive solar heat gain

External considerations:
● The main facade of a dwelling should face close to south (±30° approximately).
● The spacing between dwellings should be sufficient to avoid overshading.
● Where possible contours should be exploited either to maximise solar gain or minimise adverse effects; at the same time account should be taken of probable air flow patterns.
● The provision of deciduous trees and shrubs will offer summer shade whilst allowing penetration by winter sun.
● Areas with particular overheating risk should be considered when planning building layout and form.

The built form:
● The internal layout should place rooms on appropriate sides of the building either to benefit from solar heat gain or to avoid it where necessary.
● As a general rule it is desirable to maximise south facing windows and minimise north facing windows; areas of non-beneficial windows should be minimised.
● In the design and positioning of windows the effect of solar gain must be considered in conjunction with daylight design. The effect on heat gain of window frames and glazing bars can be significant.
● Glazing must be low emissivity (Low E) double glazing, preferably in a timber frame. If metal frames are necessary there should be a thermal break between the frame and the glass.
● Shading (externally if possible) should be installed for windows posing an overheating risk.
● Fabric insulation which is significantly better than the minimum required by regulation is strongly recommended. Potential cold bridges should be eliminated; the detailing of joints in the building fabric can have a significant impact on energy efficiency.
● A compact building shape reduces heat loss. Heated areas within the dwelling should be isolated from unheated spaces by providing insulation in the partitions between such spaces.
● High thermal mass construction levels out the peaks and troughs of temperature, and internal surfaces should maximise solar heat absorption.
● Air tightness should achieve a level of at most three air changes per hour at 50 pascals pressure, in association with heat recovery ventilation.
● Care should be taken in the design of conservatories, which should be able to be isolated from the main occupied area. The heating of conservatories usually results in a net energy deficit, and the environmental benefits of conservatories are cancelled out if they are centrally heated. A conservatory or other buffer space can be used to preheat incoming ventilation air.
● The venting of hot air in summer should be considered.

Systems:
● Environmental considerations should be a priority when making the choice of fuel.
● High efficiency heating systems should be installed, for example condensing boilers.
● The heating system should be geared to the thermal response of the building fabric and the occupancy pattern of the dwelling, and space heating and hot water systems should be appropriately sized.
● Controls, programmers and thermostats should be appropriate to the task, correctly positioned, and their operation easily understood by occupants.
● In wet central heating systems thermostatic radiator valves are essential.
● Hot water storage cisterns and the distribution system should be effectively insulated.
● Where there is a high standard of air tightness a heat recovery ventilation system is essential.
● Ventilation of utility areas, bathrooms and kitchens is especially desirable to prevent condensation.

Climate change is predicted to increase the risk of flooding from a combination of rising sea level, increased storm surges, greater precipitation and river run-off. In areas where there is the probability of flood risk, special measures should be adopted, for example:
● Most living accommodation should, if possible, be on the first and upper floors. Bathrooms should be on the first floor; where they are on the ground floor, non-return valves should be fitted to WCs.
● Floor and wall surfaces on the ground floor should be capable of recovery from flooding, e.g. tiled finishes.
● Window sills should be at least 1 m above ground, and door openings should be water tight for at least 1 m above ground.
● Ventilation grilles and air bricks should be capable of being sealed.
● The electrical circuit on the ground floor should be able to be isolated, allowing power to be available on upper floors in times of flooding. Power sockets should be at least at bench height.

Also linked to evolving climate change will be the need to take account of increased wind speeds, extremes of climate, and heat episodes leading to the drying out of ground at normal foundation level.
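Two of the wind rules of thumb above lend themselves to quick arithmetic: average wind speed rising about 7 per cent for every 100 m of altitude, and a tree break placed at 4–5 times the height of the trees. A sketch; whether the 7 per cent compounds over successive 100 m steps is my own assumption, as the text does not say:

```python
def wind_speed_at_altitude(v_base_ms, altitude_gain_m, rise_per_100m=0.07):
    """Scale a base wind speed by 7% per 100 m of extra altitude (compounded)."""
    return v_base_ms * (1.0 + rise_per_100m) ** (altitude_gain_m / 100.0)

def tree_break_distance_m(tree_height_m):
    """Recommended range of house-to-trees distance: 4-5 times tree height."""
    return 4 * tree_height_m, 5 * tree_height_m

v = wind_speed_at_altitude(8.0, 200)      # about 9.2 m/s, 200 m higher up
near, far = tree_break_distance_m(10)     # 40-50 m for 10 m high trees
```

Even a modest site elevation therefore pushes a lowland site's 8 m/s towards the Scottish end of the quoted range, reinforcing the case for shelter planting.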
The answer could be removable or sliding heat reflective panels which reduce the glazed area in summer. p. 137).g.ADVANCED AND ULTRA-LOW ENERGY HOUSES the drying out of ground at normal foundation level. By 2080 internal temperatures could reach 40 C. Swiss Re. (DEFRA 2004. It recommends masonry buildings with high thermal mass over timber frame lightweight construction. Smaller windows with shutters are recommended. 158–159). Consider more aerodynamic forms (e. Arup concludes that by 2080 London will have the climate of the Mediterranean coast and we should consider adopting similar building techniques to that region. Currently the government is considering how to redress this inequity and thereby give a substantial boost to the market for small-scale renewables. The larger machines are suitable for commercial/industrial buildings and groups of houses. They are mainly confined to the domestic level and are often used to charge batteries. Accordingly a wind generator introduced into this environment must be able to cope with high turbulence caused by buildings. Because of the bending moment produced by the tower under wind load. By their very nature the vertical axis machines are not affected by changes in wind direction or turbulence. Machines between 1 and 5 kW may be used to provide either direct current (DC) or alternating current (AC). Small-scale electricity production on site has economic disadvantages in the UK given the present buy-in rates for small operators. Wind generation will do well if this happens since it is much less expensive in terms of installed cost per kilowatt than PV which makes it an attractive proposition as a building integrated power source. Small wind turbines In this context ‘small’ means wind machines that are scaled from a few watts to 20 kW. Wind patterns in the built environment are complex as the air passes over. This may not easily be achieved in retrofit situations. around and between buildings. 
measures must be taken to provide adequate strength in the building structure. Such conditions tend to favour vertical axis machines as opposed to the horizontal versions which have proliferated in wind farms. horizontal axis machines mounted on roofs tend to transmit vibrations through the structure of the buildings. They can be sited on roofs or 108 . In addition. This is because the vertical versions may be able to operate at lower wind speeds and they are less stressed mechanically by turbulence.Chapter Harvesting wind and water Nine This chapter is concerned with wind generation which can operate as embedded generation in buildings down to the scale of the individual house and the conservation of water as the pressure on this resource increases. The third type. Development work is continuing on designs for turbines which are suitable for the difficult wind conditions found in urban situations. silent. wind sharers. the ‘winddreamer’ which relates to low rise developments (Figure 9. 114) by the system patented by Altechnica. They are described as ‘wind catchers’. The wind catcher is well suited to small turbines being usually high and benefiting from a relatively free wind flow. easy to install and competitive on price. When it is fully appreciated that these machines are reliable. they are still undergoing development. are found in industrial areas and business parks. Ecofys has produced a diagram which depicts how four urban situations cope with varying wind conditions. This is appropriate since climate change predictions indicate that wind speeds will increase as the atmosphere heats up and so becomes more dynamic. A further advantage is that the electricity generator can be located beneath the rotors and therefore can be located within the envelope of the building. The wind collector type of building has a lower profile and can be subject to turbulence. 
The increasing deregulation of the energy market creates an increasingly attractive proposition for independent off-grid small-scale generation insulating the operator from price fluctuations and reliability uncertainties.2). For example. low maintenance. At present the regulatory regime for small turbines is much less onerous than for 20 kW machines. March 2001. Small horizontal axis machines could be satisfactory in this situation. Wind generation can be complemented by PVs as illustrated below (p. However. ‘wind sharers’ and ‘wind gatherers’. This is where the vertical axis machine comes into its own. ‘wind collectors’. Research conducted by Delft University of Technology and Ecofys identified five building conditions to determine their effectiveness for wind turbines. They have been particularly successful mounted on the sides of oil platforms in the North Sea (Figure 9. There is growing confidence that there will be a large market Figure 9. terms which define their effect on wind speed. Their relatively even roof height and spaced out siting makes such buildings subject to high winds and turbulence.9). The wind generators continue operating at night when PVs are in retirement (see Figure 9. in the Netherlands alone there is the potential for 20 000 urban turbines to be installed on industrial and commercial buildings by 2011. transmitting minimum vibration and bending stress to walls or roofs. They also have a high output power to weight ratio.HARVESTING WIND AND WATER walls.1 Helical side mounted turbine on oil platform 109 . There is a fifth category. it is likely the market will expand rapidly. Currently there are several versions of vertical axis machines on the market. It is to be hoped that the bureaucrats fail to spot this red tape opportunity. The machines are well balanced. with the proviso that there is a level playing field. 
estimates that the global market for small turbines by 2005 will be around Euros 173 million and several hundreds of million by 2010. A prediction in ‘WIND Directions’.1). it is a robust and tested technology. 110 .ARCHITECTURE IN A CLIMATE OF CHANGE Figure 9. housing blocks and individual dwelling. Systems under 2 kW usually have a 24–48 volt capacity aimed at battery charging or a DC circuit rather than having grid compatibility. horizontal axis machines are much more in evidence that the vertical axis type even at this scale. high output. mounted on buildings they require substantial foundation support. The disadvantages are: ● ● the necessity of a high mast. automatic start-up. These machines have efficient braking systems for when wind speed is excessive. Types of small-scale wind turbine Most small systems have a direct drive permanent magnet generator which limits mechanical transmission losses. Up to the present. Some even tip backwards in high winds adopting the so-called ‘helicopter position’.2 Categories of building cluster and their effectiveness for wind generation (courtesy of Ecofys and REW) for mini-turbines in various configurations on offices. There are advantages to horizontal axis machines such as: ● ● ● ● the cost benefit due to economy of scale of production. claims that this combination can produce electricity 60 per cent more of the time compared with conventional machines.4).HARVESTING WIND AND WATER ● ● ● in urban situations where there can be large variations in wind direction and speed.8 tonnes per year of carbon dioxide (CO2).4). Its peak output is 1. The Darrieus-Rotor employs three slender elliptical blades which can be assisted by a wind deflector. as the name indicates. more electricity is produced for a given wind speed as well as generating Figure 9. 
the Swift wind turbine claims to be the world’s first silent rooftop mounted wind turbine (35 dB) by incorporating silent aerodynamic rotor technology coupled with a revolutionary electronic control system. Doncaster (Figure 9. Another variety is the S-Rotor which has an S-shaped blade (Figure 9.6). this necessitates frequent changes of orientation and blade speed. A variation of the genre is the H-Darrieus-Rotor with triple vertical blades extending from the central axis (Figure 9. Doncaster 111 . As stated earlier. Yet another configuration is the Lange turbine which has three saillike wind scoops (Figure 9.3 Helican turbine on a column at Earth Centre.5 kW and it is estimated that avoided fossil fuel generation produces a saving of 1.4). A prototype developed at the University of Rijeka. This is because the aerofoil concentrator enables the machines to produce electricity at slower wind speeds than is possible with conventional turbines. Care has been taken to provide a secure mounting system which will not transfer vibrations. This not only undermines power output. there are noise problems with this kind of machine especially associated with braking in high winds. As a result.5). The most common vertical axis machine is the helical turbine as seen at Earth Centre. Glenrothes. A development from the 1970s has placed the turbine blades inside an aerofoil cowling. Scotland. This is an elegant machine which nevertheless needs start-up assistance (Figure 9. Croatia. In that instance it is mounted on a tower but it can also be side-hung on a building. and there are plans for installations in four other primary schools. This has the effect of accelerating the air over the turbine blades. Produced by Renewable Devices Ltd. The first unit was installed in Collydean Primary School.4).3). a spiral profile (Figure 9. they can be visually intrusive. They are discrete and virtually silent and much less likely to trigger the wrath of planning officials. 
it also increases the dynamic loading on the machine with consequent wear and tear. Last in this group is the ‘Spiral Flugel’ turbine in which twin blades create. It is regarded as an ideal system for residential developments (Figure 9. vertical axis turbines are particularly suited to urban situations and to being integrated into buildings. The cross-section of the cowling has a profile similar to the wing of an aircraft which creates an area of low pressure inside the cowling. This technology can generate power from 1 kW to megawatt capacity. The device is about 75 per cent more expensive than conventional rotors but the efficiency of performance is improved by a factor of five as against a conventional horizontal axis turbine (Figures 9.5 Spiral Flugel rotor at low air speeds compared to a conventional rotor. for example blades can be damaged.8). They also serve to stabilise electricity output in turbulent wind conditions.7 and 9. It can generate up to 750 watts at an installed cost of £1 per watt. bottom centre: Lange turbine. top centre: DarrieusRotor.4 Left: S-Rotor. A mini horizontal axis turbine was introduced in late 2003 called the Windsave. which makes them appropriate for urban sites. This amplification of wind speed has its hazards. The answer has been to introduce hydraulically driven air release vents into the cowling which are activated when the pressure within the cowling is too great. It is being considered for offshore application.ARCHITECTURE IN A CLIMATE OF CHANGE Figure 9. right: H-Darrieus-Rotor Figure 9. Its manufacturers claim it could meet about 15 per cent of 112 . 6 Swift rooftop wind energy system Figure 9.HARVESTING WIND AND WATER Figure 9. Hamburg Figure 9. 
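The outputs quoted for these small machines depend steeply on wind speed. As a rough illustration (not from the text), the standard relation for power extracted from the wind, P = ½ρAv³ multiplied by an overall power coefficient, can be sketched as follows; the rotor area and coefficient used here are assumptions:

```python
# Illustrative only: the standard wind power relation P = 0.5 * rho * A * v^3 * Cp.
# The rotor area and power coefficient below are assumed, not figures from the text.

RHO = 1.225  # air density at sea level, kg/m^3

def turbine_power_watts(rotor_area_m2, wind_speed_ms, cp=0.3):
    """Electrical output of a small turbine at a given wind speed.

    cp is the overall power coefficient (aerodynamic x drivetrain losses);
    0.3 is a plausible value for a small machine, well below the
    theoretical Betz limit of 0.593.
    """
    return 0.5 * RHO * rotor_area_m2 * wind_speed_ms ** 3 * cp

# The cubic dependence on wind speed is why siting matters so much:
for v in (3, 6, 9):  # wind speeds in m/s
    print(f"{v} m/s -> {turbine_power_watts(2.0, v):.0f} W")
```

Doubling the wind speed multiplies the output eightfold, which is why the cowling concentrators and high, free-flow 'wind catcher' sites described above pay such large dividends.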
Producing AC power, the Windsave can be linked directly to the grid and the householder credited under the Renewables Obligation, which currently pays a green electricity provider 6p per kilowatt hour. It starts generating at a wind speed as low as 3 mph but is most efficient at 20 mph. By using remote metering, each unit can be telephoned automatically each quarter to assess the amount of electricity generated. The power company then collects the subsidy and distributes it back to the home owner on the basis of the total generated. It is this subsidy which justifies a claim that the payback time can be as short as 30 months (Figure 9.9). It is also a system which can easily be fitted to existing buildings where the wind regime is appropriate.

Figure 9.6 Swift rooftop wind energy system

Figure 9.7 Wind turbine with cowling wind concentrator

Figure 9.8 Simulation of wind turbines on the Vivo shopping complex, Hamburg

Figure 9.9 Windsave rooftop wind energy system

Building integrated systems

The Vivo building illustrates one version of a building integrated wind generating system (Figure 9.8). Up to now such machines have been regarded as adjuncts to buildings, but a concept patented by Altechnica of Milton Keynes demonstrates how multiple turbines can become a feature of the design. Rotors are incorporated in a cage-like structure which is capped with an aerofoil wind concentrator, called in this case a 'Solairfoil'. The system is designed to be mounted on the ridge of a roof or at the apex of a curved roof section, and the flat top of the Solairfoil can accommodate PVs. Where the rotors are mounted at the apex of a curved roof the effect is to concentrate the wind in a manner similar to the Croatian cowling (Figure 9.10). The advantage of this system is that it does not become an overassertive visual feature and is perceived as an integral design element. Furthermore it indicates a building which is discreetly capturing the elements and working for a living. There is increasing interest in the way that the design of buildings can incorporate renewable technologies, including wind turbines.

Figure 9.10 'Aeolian' roof devised by Altechnica

The European Union Extern-E study has sought to put a price on the damage inflicted by fossil fuels compared with wind energy. The research has concluded that, for 40 GW of wind power installed by 2010, and with a total investment of Euros 24.8 billion up to 2010, CO2 emissions could be reduced by 54 million tonnes per year in the final year. The cumulative saving would amount to 320 million tonnes of CO2, giving avoided external costs of up to Euros 15 billion. This is the first sign of a revolution in the way of accounting for energy. When the avoided costs of external damage are realistically factored in to the cost of fossil fuels, the market should have no difficulty in switching to renewable energy en masse.

Conservation of water in housing

Not only is water a precious resource in its own right, there is also an energy component in storing and transporting it and making it drinkable. On average a person in the UK uses 135 litres (30 gallons) of water per day. Of this total about half is used for flushing toilets and personal hygiene. A really thorough home ecological improvement strategy should have three components:

● reduce consumption;
● harvest rainwater;
● recycle grey water.

Reducing consumption

Flushing toilets use about 30 per cent of total household consumption. This can be reduced by changing to a low flush toilet (2–4 litres) or a dual flush cistern. Aerating (spray) taps on basins and sinks and on shower heads make a big impact on consumption. Washing machines and dishwashers vary in the amount of water they consume; this is one of the factors which should influence the choice of white goods. All appliances should have isolating stopcocks so that the whole system does not have to be drained off if one item has a problem.

On average about 200 litres of rainwater fall on the roof of a 100 m2 house each day in the UK. In many homes this is collected in water butts and used to irrigate the garden. If filtered rainwater is to be used for other domestic purposes, other than drinking, it must be subject to further purification, usually by ultraviolet light. Recycled rainwater must only be sourced from roofs. There are several proprietary systems for collecting and treating rainwater so that it can be used to flush WCs and for clothes washing machines. An example is the Vortex water harvesting system, versions of which serve roof areas up to 200 m2 and 500 m2 respectively. The water is stored in 25 000 litre underground tanks where particles have time to settle to the bottom. Storage tanks are either concrete or glass reinforced plastic (GRP). There are controls to ensure that mains water can make good any deficiencies in rainfall. Best use of the filtered rainwater will be made if associated with dual flush WCs. Figure 9.11 shows a typical configuration for rainwater storage.
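The Windsave payback claim can be sanity-checked with simple arithmetic. The installed cost (£1 per watt for a 750 W machine) and the 6p/kWh subsidy are quoted in the text; the capacity factor and the value of the displaced grid electricity are assumptions:

```python
# A rough sanity check on the quoted payback claim. Installed cost and the
# 6p/kWh Renewables Obligation payment are from the text; the capacity
# factor and the retail value of displaced power are assumptions.

PEAK_W = 750
INSTALLED_COST_GBP = PEAK_W * 1.00   # £1 per installed watt
ROC_GBP_PER_KWH = 0.06               # subsidy per kWh generated
RETAIL_GBP_PER_KWH = 0.07            # assumed value of displaced grid power
CAPACITY_FACTOR = 0.17               # assumed for an urban rooftop site

annual_kwh = PEAK_W / 1000 * 8760 * CAPACITY_FACTOR
annual_value = annual_kwh * (ROC_GBP_PER_KWH + RETAIL_GBP_PER_KWH)
payback_months = INSTALLED_COST_GBP / annual_value * 12

print(f"Annual output:  {annual_kwh:.0f} kWh")
print(f"Annual value:   £{annual_value:.0f}")
print(f"Simple payback: {payback_months:.0f} months")
```

On these cautious assumptions the simple payback is nearer five years than 30 months; the manufacturer's shorter figure implies a windier site or a higher combined value per kilowatt hour.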
Figure 9.11 Rainwater storage system layout (courtesy of Construction Resources). Typical domestic rainwater installation with storage tank in the ground and a pressure pump in the tank: 1 Vortex fine filter; 2 inflow smoothing filter; 3 tank; 4 floating fine suction filter; 5 suction hose; 6 Multigo pressure pump; 7 pressure hose; 8 automatic switch and ball valve; 9 overflow trap; 10 installation controls; 11 magnetic valve; 12 open inflow for drinking water feed; 13 backpressure flaps.

It is possible to go a stage further and use rainwater for drinking, but this requires even more rigorous filtration, as employed, for example, in the Vales' Southwell autonomous house (p. 77). The Hockerton Housing Project has all these facilities and more because it uses rainwater collected from its conservatory roofs for drinking purposes. The water from the roof passes through a sand filter in a conservatory. From here it is pumped to storage tanks in the loft and from there through a ceramic/carbon filter to the taps. The water is treated first by passing it through a 5 micron filter to remove remaining particles. Then it is sent through a carbon filter to remove dissolved chemicals. Lastly it is subjected to ultraviolet light to kill bacteria and viruses. As an act of faith in the English weather there is no mains backup facility, including water storage. The author can vouch for its purity! For the average home this may well be a step too far, but those who feel inspired by this possibility should contact the Hockerton Housing Project at www.hockerton.co.uk.

A variation on the water recycling strategy is to reuse grey water from wash basins, showers and baths. If waste water from a washing machine is included, then virtually all the waste water can be used to meet the needs of flushing toilets. Again there are systems on the market which serve this function.

For the really dedicated there is the composting toilet, which eliminates the need for water and drainage. In Europe a popular version is the Clivus Multrum from Sweden. It is a two-storey appliance in that there has to be a composting chamber, usually on the floor below the toilet basin. Fan-assisted ducted air ensures an odourless aerobic decomposition process. The by-product from the composting chamber is a rich fertiliser.

Domestic appliances

As the building fabric of a home becomes more energy efficient, the impact of appliances like white goods and TVs becomes a much more significant element of the energy bill. In 1999 the European Commission decreed that all white goods (refrigerators, freezers, washing machines, dishwashers etc.) should be given an energy efficiency rating from A to G. This has certainly been effective in sending E, F and Gs to the bottom of the best buys. However, whilst A is the top of the scale there is variation within this category, which has prompted the introduction of an AA category.

A surprising amount of electricity demand is due to standby electrical consumption. Some appliances like televisions and personal computers have optional standby modes which, nevertheless, are left on power because the consumption involved is regarded as insignificant. Others, like fax machines and cordless telephones, need to be permanently on standby. Even appliances with electronic clocks consume power. Refrigerators and freezers are particular culprits. It has been estimated that a typical household could consume 600 kWh per year on standby alone. For the EU it has been calculated that standby power accounts for 100 billion kWh/year, about one fifth the consumption of a state the size of Germany.
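The consumption and roof yield figures quoted above can be combined into a rough daily water balance. The 135 litres per person and the 200 litres of average roof run-off are from the text; the household size and the share of demand that can use rainwater are assumptions:

```python
# Per-person consumption and average roof yield are figures from the text;
# the household size and the non-potable share are assumptions.

PER_PERSON_L_PER_DAY = 135
ROOF_YIELD_L_PER_DAY = 200   # average for a 100 m^2 roof in the UK
OCCUPANTS = 3                # assumed household size
NON_POTABLE_SHARE = 0.5      # roughly half of use is flushing and washing

demand = OCCUPANTS * PER_PERSON_L_PER_DAY
non_potable = demand * NON_POTABLE_SHARE
coverage = min(1.0, ROOF_YIELD_L_PER_DAY / non_potable)

print(f"Total demand:       {demand:.0f} l/day")
print(f"Non-potable demand: {non_potable:.0f} l/day")
print(f"Rainwater coverage: {coverage:.0%}")
```

On this basis the average roof yield roughly matches the non-potable demand of a three-person household, which is why storage on the scale of the 25 000 litre tanks described above is needed to ride out dry spells around that average.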
This information is equated with the cost of making good the heat loss by means of the heating system and the cost of fuel. If buildings are to contribute to carbon abatement in the short to medium term then existing buildings must be targeted. Currently there is considerable interest in converting redundant industrial buildings to other uses. government improvement programmes were ‘unconvincing’ with ‘funding low in proportion to the magnitude of the task’. mainly houses. New homes complying with 118 . yet these comprise only about 2 per cent of the total building stock at any one time. At the same time. the real challenge lies in existing housing. which will include an Energy Efficiency Report. What is the magnitude of the task? To gauge the scale of the problem we first need to consider the four accredited ways of measuring the energy efficiency of both existing and new homes: ● ● ● ● the SAP method. which considers energy efficiency worldwide. the carbon dioxide measure. The UK government is introducing a requirement for houses that come on the market to be accompanied by a ‘House Condition Survey’. In England and Wales housing is responsible for about 28 per cent of total carbon dioxide (CO2) emissions. The official government system of measurement of energy efficiency is the Standard Assessment Procedure (SAP) which comprises a calculation of the heat loss resulting from the form of the building. especially residential. The International Energy Agency. described UK housing as ‘poorly insulated’ with ‘considerable scope for improvement’. Its scale is from 1 to 120.Chapter Existing housing: a Ten challenge and opportunity So far the emphasis has been on new buildings. It also takes into account benefits from solar gain. the BEPI profile. the thermal properties of its fabric and the level of ventilation. It is scheduled to come into force in 2006. The minimum heating regime is 18 C for the living room and 16 C for other rooms. 
The current average for England overall is SAP 43; this is gradually improving as the ratio of new homes to existing increases.

The National Home Energy Rating (NHER) uses a scale of 1 to 10 and includes such items as the method of space heating, domestic hot water, appliances and lighting; it is designed to give an indication of energy costs. The national average NHER is around 4.

The Building Energy Performance Index (BEPI) assesses the thermal performance of the fabric of the building, taking into account its orientation. It does not include heating systems and does not factor in the cost of energy. Because this measure is confined to the efficiency of the building fabric, it is a more accurate long-term measure of energy efficiency, since appliances and heating systems have a relatively short life and there is no guarantee that replacements will measure up to the previous standard. It is a performance indicator that cannot be manipulated to gain a notional but unreal advantage; thus it gives an accurate picture of the underlying condition of the housing stock. The Building Regulations standard equates to a BEPI of 100.

The Carbon Dioxide Profile indicates the carbon dioxide emissions deriving from the total energy used by a property, taking into account the type of fuel. It is measured in kg/square metre/year. For a given unit of heat, electricity has roughly four times the carbon intensity of gas. In the revised Building Regulations 2005, a carbon emission standard will be the only route to compliance.

To put some numbers against these standards: the unofficial recommended minimum for reasonable energy efficiency for existing homes is SAP 60. The English House Condition Survey 1996 found that 84.6 per cent of dwellings were at or below SAP 60, with 8 per cent at or below SAP 20. Some 3.3 million homes in England are at or below SAP 30, 1.6 million are at or below SAP 20 and 900 000 are at or below SAP 10 (English House Condition Survey 1996, DETR, December 2000). Within the SAP 10 category the bottom end is as low as SAP minus 25. In the private rented sector in England 21 per cent are at or below SAP 20, with 12.8 per cent of this sector being at or below SAP 10. These numbers are substantially increased when Britain as a whole is considered. This constitutes a monumental problem which calls for constant pressure on governments to rise to the challenge of upgrading the housing stock. At present the amount of investment in this area of need is totally inadequate and it is being left to enlightened bodies like housing associations to take the initiative.

So, how does this translate into actual home heating habits? The official standard for adequate heating in a living room is 21°C and in other rooms, 18°C. When the external temperature drops to 4°C, only 25 per cent of homes have internal temperatures which meet these standards.

There is a strong social dimension to this state of affairs. The UK government acknowledges that up to 3 million households in England are officially designated 'fuel poor': the definition is that they are unable to obtain adequate energy services for 10 per cent of their income. Most of those energy services are of course taken up with space heating. The DETR acknowledges that fuel poor households also suffer from opportunity loss, caused by having to use a larger portion of income to keep warm than other households. The main culprit is cold, poorly insulated and damp homes, as acknowledged by the government in its document Fuel Poverty: The New HEES (DETR 1999):

The principal effects of fuel poverty are health related. Cold homes are thought to exacerbate existing illnesses such as asthma and reduced resistance to infections. This has adverse effects on the social well-being and overall quality of life for both individuals and communities. (Fuel Poverty: The New HEES, op. cit.)

In the winter of 1999–2000 almost 55 000 people died from cold related illnesses between December and March as against the other two four-monthly periods. This was the highest winter total since 1976, yet it was a relatively mild winter. We have the worst record in the EU for extra winter deaths, with children, the old, the sick and the disabled most at risk. About half of this total can be attributed to poor housing, and this figure may be significantly higher in that it is impossible to quantify the contribution of poor housing to depressive illnesses.

Increasingly, damp as well as cold is emerging as a major health hazard. Damp generates mould, and mould spores can trigger allergies and asthma attacks. Some moulds are toxic, as in the genus Penicillium which can damage lung cells. In Scotland almost a quarter of all homes suffer from damp (National Housing Agency for Scotland). At the opposite end of the country, it was confidence in the connection between damp homes and asthma that justified the Cornwall and Isles of Scilly Health Authority in directing £300 000 via district councils to thermally improve the homes of young asthma patients. This was undertaken as much as an investment opportunity as a remediation intervention. The outcome was that the savings to the NHS exceeded the annual equivalent cost of the house improvements. The report on this enterprise, sponsored by the EAGA Trust, states: 'This study provides the first evaluation of health outcomes following housing improvements'. It will surely be the first of many since it provides hard evidence of cost effectiveness.

The connection between housing and health has been recognised by the medical profession in the report Housing and Health: Building for the Future (eds Sir David Carter and Samantha Sharp, British Medical Association 2003). This is a thorough analysis of the situation from the medical standpoint which should remove any doubts there may be about the linkage between poor housing and ill health. A book on the subject appeared in 2000 called Cutting the Cost of Cold (eds Rudge and Nicol, Spon).

How do existing homes compare with today's best practice? One crucial measure is carbon dioxide emissions. Many owner occupied homes are of 1930s vintage. To achieve adequate space heating a 1930s house is responsible for 4.6 tonnes of CO2 per year; a 1976 house, the first to encounter thermal regulations, will account for 2.7 tonnes. This compares with 0.6 tonnes in current best practice homes. Taking into account all fittings and appliances as well as the building fabric, a superinsulated house with best available technology will produce a total of 2 tonnes of CO2 as against 8 tonnes in total for a 1930s dwelling.

If SAP 60 is taken as the minimum standard, then:

● 50 per cent of owner occupied dwellings fail to reach the minimum standard;
● 62 per cent of council homes fail to reach it;
● 95 per cent of the private rented sector also fails to meet the minimum standard.

These figures are from the DETR House Condition Survey for England.

The remedy

There is no easy way to solve this problem and considerable investment will be required by central government if fuel poverty linked to substandard housing is to be eliminated. An example of a retrofit package for housing would consist of:

● improving the level of insulation in walls and roof and, where possible, floor;
● draught-proofing;
● installing Low E double glazing, preferably in timber frames;
● installing/converting central heating to include a gas condensing boiler;
● installing a heat recovery ventilation system.

This cost will taper off as the upgrading programme gathers momentum.
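The emission figures quoted for houses of different vintages can be turned into percentage savings; this is simple arithmetic on the space-heating numbers given in the text:

```python
# Space-heating CO2 figures per dwelling are those quoted in the text.
emissions_t = {
    "1930s house": 4.6,
    "1976 house": 2.7,
    "current best practice": 0.6,
}

baseline = emissions_t["1930s house"]
for house, t in emissions_t.items():
    saving = (1 - t / baseline) * 100
    print(f"{house:22s} {t:.1f} t CO2/yr  ({saving:.0f}% below 1930s level)")
```

Bringing a 1930s house even part way towards best practice therefore saves several tonnes of CO2 per dwelling per year, which is the case for targeting the existing stock rather than new build alone.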
From the architectural point of view the insulation is the main challenge. It can take three forms:
● external overcladding (enveloping);
● internal dry lining;
● filling the cavity.

The application of external insulation
The technique involves applying a render, also called harling or wet cast, to provide an even and smooth fixing surface for the rigid insulation panels. A mesh is applied to the insulation to provide a key for the external waterproof render, which is finished with pebble dash (Figure 10.1). It is necessary for the insulation boards to receive a finishing coat, and in the case of most insulants the finish should offer total waterproofing. A polymer-based render is the most reliable in this respect. This is an adhesive render with an alkali resistant glass fibre mesh as reinforcement. Applied in one or two coats it offers a choice of finishes, for example:
● pebble dash or spar dash;
● textured renders in a range of colours;
● weatherboarding.
It is also possible to use cladding which includes:
● lightweight natural stone aggregate;
● brick;
● tile;
● terracotta.
The panels then receive a waterproofing finish. There should be absolute protection from penetration by damp, ensuring a longer life.

External cladding has a number of consequences. Roof eaves and verges have to be extended and rainwater/soil vent pipes have to be modified to take account of the deeper eaves. Carrying the insulation round window reveals means that the window frame size is reduced.

Case study
Penwith Housing Association in Penzance, Cornwall, was formed in 1994 to take over the local authority housing from Penwith District Council to make it possible to gain access to funds to upgrade the entire stock. This consisted of a mix of 1940s houses with solid concrete block walls and post-war cavity built homes. Being of concrete construction and rendered, there was no problem as regards changing the appearance by overcladding. The 1940s examples had a SAP rating of 1 and an NHER of 1. Overcladding, additional roof insulation and double glazing raised this to SAP 26, and the crucial BEPI rating was raised to 97, close to the Building Regulations standard current at the time. The addition of gas central heating raised the SAP to 76, which dramatically illustrates the effect of fixed appliances on the SAP value.

Figure 10.1 Social housing with cladding over solid wall construction, Penzance

Benefits
● There is a significant improvement in comfort levels throughout the whole house.
● There is a significant reduction in carbon dioxide emissions. Government estimates suggest that, over the lifetime of the building, one tonne of CO2 is saved for every square metre of 50 mm thick insulation.
● Space heating bills can be reduced by up to 50 per cent.
● The incidence of condensation is reduced to near zero.
● It allows the fabric of the home to act as a heat store, a warmth accumulator.
● It stabilises the structure, preventing cracking due to differential thermal expansion.
● The walls of the building are protected from weathering.
● There is normally a significant improvement in appearance.
● The operation can be undertaken without the need to vacate the property.
● The increase in property value as a result of the upgrading usually more than offsets the cost.

An example of an individual house application of external cladding is Baggy House in the UK, which illustrates the 'Dryvit' system called 'Outsulation' (Figure 10.2).

Figure 10.2 Baggy House, Devon

Where there are overriding reasons for not wishing to overclad with insulation, the alternative is to fix insulation to the inside face of external walls, known as 'dry lining'. The dilemma is that this reduces internal space, as, for example, in the case of eighteenth to nineteenth century terraced housing.
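The scale of the thermal benefit can be illustrated with a standard U-value calculation, in which the transmittance is the reciprocal of the summed thermal resistances of the wall's layers. The conductivities and surface resistances below are typical handbook values, not figures from this chapter, so treat the result as an assumption-laden sketch rather than a statement about any particular property.

```python
# Sketch: U-value (W/m^2.K) of a solid wall before and after insulation.
# Conductivity and surface-resistance figures are typical handbook values
# (illustrative assumptions, not taken from the text).

R_SI, R_SE = 0.13, 0.04          # internal/external surface resistances (m^2.K/W)

def u_value(layers):
    """layers: list of (thickness_m, conductivity_W_per_mK) tuples."""
    r_total = R_SI + R_SE + sum(t / k for t, k in layers)
    return 1.0 / r_total

solid_wall = [(0.140, 0.60)]                # 140 mm solid masonry, assumed k
insulated = solid_wall + [(0.090, 0.035),   # 90 mm insulation board
                          (0.0125, 0.25)]   # 12.5 mm plasterboard finish

u_before = u_value(solid_wall)   # roughly 2.5 W/m^2.K
u_after = u_value(insulated)     # roughly 0.33 W/m^2.K
```

With these assumed values, 90 mm of insulation board takes a 140 mm solid wall from roughly 2.5 down to about 0.33 W/m²K, which is the order of improvement the retrofit measures described above are pursuing.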
To bring a 140 mm solid external wall near to the current Building Regulations standard would require at least 90 mm of insulation with a plasterboard finish. However, this is one instance where the best can be the enemy of the good and compromise is reasonable. A suitable insulant is cellular glass fixed to the wall mechanically. The finish is either plasterboard with a skim coat of plaster or plaster applied to metal lathing. There are consequences to using this system, such as the relocation of skirtings and electrical sockets and the reduced size of door and window openings. There is also the risk of cold bridging if the insulation is not continued around the reveals to openings. (For further information refer to Smith, P.F. (2004) Eco-Refurbishment: A Guide to Saving and Producing Energy in the Home, Architectural Press.)

Cavity filling
Where there are cavity walls, injecting insulation through holes drilled at regular intervals is a common practice. Properly installed cavity fill insulation can have a significant impact on the thermal performance. However, caution should be exercised regarding the kind of insulation selected and the bona fides of the installation contractor. Post-completion inspections have discovered a number of cases of fraud where only a notional amount of insulation has been injected.

In the Penwith 1960s properties, following cavity filling and extra roof insulation the BEPI was 107 with a SAP of 49. Where central heating was installed the SAP rose to 78, again illustrating why the BEPI is a more useful guide to the long-term energy efficiency of a house since it focuses on the fabric.

Some of the least energy efficient dwellings exist in multi-storey buildings, the most notorious being tower blocks. The team responsible for the innovative Integer House has been commissioned by Westminster Council to raise the energy efficiency of one of its 20 storey tower blocks as a demonstration of best practice in renovation. This will be one to watch.

The Roundwood Estate in Brent is typical of many former council developments with its numerous four storey flats and maisonettes linked by balcony access. They have solid one and a half brick external walls and minimal insulation in the roofs. The 564 dwellings have been transferred to the Fortunegate Housing Association. After consultation with the tenants, PRP Architects agreed a specification including overcladding the external walls with an insulated render system, increased roof insulation, full central heating with combination boilers, and new kitchens and bathrooms with extractor fans. Existing double glazing was considered adequate. The result is that on average each flat will save 1.5 tonnes of CO2 emissions per year with a reduced fuel bill of £150 per year. At the same time comfort levels have substantially improved. Refurbishment and overcladding is under way on this estate (Figure 10.3). This is the kind of unromantic but challenging work which will have to be undertaken nationwide if the consequences of fuel poverty are to be overcome.

As a postscript to this chapter it should be noted that existing homes that are substantially refurbished are likely to need to comply with Part L of the Building Regulations. This could involve the replacement of external doors and windows. From January 2007 it is likely that houses for sale will require a Home Condition Report, which will include an Energy Survey. As mentioned earlier, the sole criterion for compliance will be based on carbon emissions, which will close the loophole of the trade-offs so much abused in the past. Another change is that houses will be subject to air tightness standards and will have to submit to pressure testing.
Part L is being revised and this will have the aim of achieving a 25 per cent improvement in energy efficiency. For new homes there are currently discussions about enhancing the energy efficiency scale to take account of zero carbon homes. An example of a considerable quantity of former council stock is the balcony access flats in the Roundwood estate.

Figure 10.3 Roundwood Estate Housing Association flats, existing and refurbished

Chapter Eleven
Low energy techniques for non-domestic buildings

Design principles
Offices in particular have traditionally been extravagant users of energy because, in relation to all other costs, energy is a relatively minor fraction of the total annual budget. The 1980s sealed glass box may use energy at a rate of over 500 kWh/m2/year. Currently, best practice is in the region of 90 kWh/m2/year. The aim of the architect under the sustainability banner is to maximise comfort for the inhabitants whilst minimising, ultimately eliminating, reliance on fossil-based energy. Techniques such as high insulation, thermal mass, natural ventilation, natural light, passive and active solar optimisation, on-site electricity generation and seasonal energy storage are components of the green agenda.

The Movement for Innovation (M4I) has produced six performance indicators as conditions for sustainable design. They are designed to validate or otherwise claims that buildings are 'green'.

1. Operational energy. The energy consumed by a commercial building during its lifetime should be kept to a minimum. The benchmark is currently 100 kWh/m2 but this will become more stringent as pressure mounts to limit carbon emissions. In many cases the major electricity cost is incurred by lighting.
2. Embodied energy. Minimising the carbon content of materials in the extraction, manufacture, delivery and construction stages. Promoting the use of recycled materials and designing for reuse after demolition.
3. Transport energy. Avoiding unnecessary transport journeys during construction in terms of the delivery of materials and the removal of site waste.
4. Waste. Minimising waste through greater off-site fabrication and modular planning. Sorting and recycling of off-cuts to avoid landfill costs.
5. Water. Harvesting grey water and rainwater for use in toilets and irrigation. Minimising hard areas to reduce run-off, including permeable car park surfaces and porous paviors.
6. Biodiversity. Design landscape to support local flora and fauna. Preserve existing mature trees and generally ensure the well-being of wildlife.

It is important that all members of the design team share a common goal and if possible have a proven track record in achieving that goal. From the earliest outline proposals through to construction and installation, the design process should be a collaborative effort. The first aim should be to maximise passive systems to reduce the reliance on active systems which use energy. There is now convincing evidence that 'green buildings pay' (see Edwards, B. (ed.) (1998) Green Buildings Pay, E & FN Spon, the outcome of an RIBA conference).

There is also the matter of location, even though this will usually be outside the province of the architect. Access to good public transport should be a prime requisite in deciding location. There have been instances where corporations have relocated from city centres accessible only by public transport to highly energy efficient offices on out of town sites. This has encouraged a much greater use of cars resulting in a net increase in carbon dioxide (CO2) emissions. In some cases, like the Wessex Water offices near Bath (see first edition), staff are obliged to use communal transport wherever possible.
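The operational-energy intensities quoted in this chapter (a sealed glass box at over 500 kWh/m²/year against a 100 kWh/m² benchmark and 90 kWh/m² best practice) can be scaled to a whole building to see what is at stake. The floor area and the grid CO2 factor below are illustrative assumptions, not values from the text.

```python
# Sketch: annual energy and CO2 implied by the intensities quoted above,
# for a hypothetical 1000 m^2 office. The floor area and the CO2 factor
# (kg CO2 per kWh of delivered energy) are assumptions for illustration.

FLOOR_AREA_M2 = 1000
CO2_FACTOR = 0.5   # kg CO2 per kWh, assumed

benchmarks = {
    "1980s sealed glass box": 500,   # kWh/m^2/year (figures from the text)
    "current benchmark": 100,
    "best practice": 90,
}

annual_kwh = {name: i * FLOOR_AREA_M2 for name, i in benchmarks.items()}
annual_co2_tonnes = {name: kwh * CO2_FACTOR / 1000 for name, kwh in annual_kwh.items()}
```

On these assumptions the sealed glass box consumes 500 000 kWh and emits around 250 tonnes of CO2 a year, five times the current benchmark, which is the gap the M4I operational-energy indicator is aimed at.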
Environmental considerations in the design of offices
The first task is to persuade the clients of the benefits of environmental and energy efficient design. It is important that, at the outset, even for buildings to be let or sold on, costs are calculated in a composite manner so that capital and revenue costs are considered as a single accountancy feature. This will help to convince clients that any extra capital expenditure is cost effective. Integrated design principles should be the rule from the first encounter with a client.
● Clients should be required to explain in detail the nature of office routines so that these can be properly matched to operational programmes.
● Lighting requirements should be clearly assessed to discriminate between general lighting and that required at desktop level.
● It is important to select appropriate technology which achieves the best balance between energy efficiency, occupant comfort and ease of operation and maintenance. The claims made for advanced technology do not always match performance.
● Appropriate monitoring is necessary to be able to assess from day to day how systems are performing. Submeters, hours-run recorders, etc., give valuable returns for a small cost, and energy costs should be identified with specific cost centres.
● On completion, building managers should be selected for their ability to cope with the complexities of the chosen building management system (BMS).
In sizing systems, the best compromise should be reached between optimum performance and the requirements for the majority of the year: to provide significantly greater capacity for just a few days of the year is not best practice.

Passive solar design

Planning and site considerations
Whether it is important to encourage or exclude solar radiation, it is necessary to appreciate the degree to which solar access is available. At the earliest stage of design one must consider the following parameters in relation to the site:
● the sun's position relative to the principal facades of the building (solar altitude and azimuth), so that the likelihood of solar heat gain can be determined;
● site orientation and slope;
● existing obstructions on the site;
● potential for overshadowing from obstructions outside the site boundary.
For the development itself, the following factors need consideration:
● grouping and orientation of buildings;
● road layout and services distribution;
● proposed glazing types and areas;
● nature of internal spaces into which solar radiation penetrates;
● facade design.
Chapter 5 referred to the stereographic sun chart and computer programs as a means of assessing the level of insolation enjoyed by a building. Physical models can also be tested by means of the heliodon.
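The solar altitude referred to above comes from standard solar geometry. The sketch below uses Cooper's approximation for solar declination and the usual altitude formula; it is the kind of calculation embedded in sun charts and sun-path software rather than anything specific to this book.

```python
import math

# Sketch: solar altitude from standard textbook formulas.
# Declination via Cooper's approximation; hour angle measured from solar
# noon at 15 degrees per hour. Illustrative only.

def declination_deg(day_of_year):
    return 23.45 * math.sin(math.radians(360 * (284 + day_of_year) / 365))

def solar_altitude_deg(lat_deg, day_of_year, solar_hour):
    lat = math.radians(lat_deg)
    dec = math.radians(declination_deg(day_of_year))
    hour_angle = math.radians(15 * (solar_hour - 12))
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_alt))

# Noon altitudes for a site at roughly London's latitude (51.5 N):
summer = solar_altitude_deg(51.5, 172, 12)   # around the June solstice
winter = solar_altitude_deg(51.5, 355, 12)   # around the December solstice
```

The noon altitude swings from about 62 degrees in midsummer to about 15 degrees in midwinter at this latitude, which is why low-level winter sun and high-level summer sun demand such different shading and glazing strategies.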
The thermal efficiency of a building can also be affected by its plan form and orientation in respect of the prevailing wind direction. There are a number of guidelines:
● The larger building elevation should not face into the predominating wind direction, i.e. the long axis should be parallel to the wind flow.
● Buildings can be grouped in irregular arrays, but within each group the heights should be similar and spacing between them kept to a minimum (no more than about a ratio of 2:1 in building heights).
● Building layout should avoid creating a tunnelling effect between two adjacent buildings, which can obstruct pedestrian access.
● Sheer vertical faces to tall buildings can generate substantial downdraughts, and even be dangerous. An example is the 19 storey Arts Tower in the University of Sheffield, where the downdraught has knocked people over close to the entrance. Protection for pedestrians can be provided by use of canopies and podiums, which reduce downdraught at ground level.
● Tall buildings should, where possible, have a facade which is staggered and stepped back away from the wind; curved facades moderate the impact of wind, for example the Swiss Re offices, London (see pp. 157–58).

Construction technologies

The building envelope

Walls and rainscreens
The glazed curtain wall has advanced considerably since it came into vogue in the 1960s. Metal panel systems are now available with integral insulation, for example EDM Spanwall, which uses flat metal sheets pressure bonded to the insulation core. Precast concrete panels also come with integral insulation; often these panels have an exterior finish of stone or reconstructed stone. Trent Concrete has introduced an insulated concrete sandwich under the name of Hardwall Cladding. The Ocean Terminal at Leith, completed in 2001, is a good example of this technology. The repository for knowledge in this context is the Centre for Window and Cladding Technology (www.cwct.co.uk).

Climate facades
The glass curtain wall is a familiar feature of office and institutional buildings dating from the 1950s, though the feature first appeared in the US at the end of the nineteenth century. Liverpool can boast a number of office buildings that point the way to the glass curtain wall, such as Oriel Chambers in Water Street, designed by Peter Ellis and completed in 1864. The technique was conceived at a time when energy was cheap and plentiful and there was no glimmer of global warming. Buildings challenged the environment. Now there is mounting pressure to design buildings which operate in harmony with nature, making the most of solar resources.

The demand for increasing energy efficiency led first to the introduction of double glazing. Now things have moved on with the incorporation of a second inside skin of glazing, creating what is termed a 'climate facade' or alternatively an 'active facade'. These are terms for facades that play an active role in controlling the internal climate of offices in which there is an optimum requirement for daylight. The active facade fulfils a variety of functions. It:
● offers room daylight control;
● acts as an active and passive solar collector;
● offers excess solar heat protection;
● serves as a plenum for ventilation supply and extract air;
● minimises room heat loss; and
● facilitates heat recovery.

An example of a climate facade building is the office development at 88 Wood Street in the City of London by the Richard Rogers Partnership (RRP). The requirement was for floor to ceiling glazing, which can create a problem of solar gain exacerbated by the heat from computers and, in this case, a high services loading. The facade developed by RRP and Ove Arup and Partners consists of a double glazed external skin made up of some of the world's largest double glazed units, measuring 3 m by 3.25 m and weighing 800 kg. The outer double glazed element is Low E glass with argon gas. Then there is a 140 mm gap and a third inner leaf of glass with openable units which completes the facade. Within the cavity are venetian blinds with perforated slats to control sunlight. Photocells on the roof monitor light conditions and control the venetian blinds to one of three positions according to the level of glare. When the blinds are closed they act as a heat sink whilst the perforations admit a measure of natural light. The cavity is ventilated by room extract air: air from the offices is drawn into the main perimeter extract ducts within the cavity via plenum ducts within a suspended ceiling and is then expelled at roof level. The result is a summer solar heat gain of less than 25 W/m2 across a 4.5 m deep room, so that there are substantial savings in the energy normally needed to cool such spaces. The aesthetic appeal of the structure is enhanced by the use of extra white glass, or 'Diamond White' glass by the manufacturers Saint Gobain. A sealed facade does not mean the individual user has no control over ventilation: there are manual trim controls over air supply, radiator output, blinds, artificial lights and a daylight dimming override (Figures 11.1 and 11.2).

Figure 11.1 Offices, 88 Wood Street, City of London
Figure 11.2 Sections through the facade of 88 Wood Street

Another building with an active facade is Portcullis House, the adjunct to the Houses of Parliament by Michael Hopkins and Partners. Windows are triple glazed with mid-pane retractable blinds designed to absorb solar gain. The glazing incorporates a light shelf to maintain daylight levels when solar shading is active. The shelf has a corrugated reflective surface to maximise high altitude sky light but reject short wave low level radiation. This almost doubles daylight levels in north facing rooms where adjacent buildings obstruct a view of the sky (Figure 11.3). This is an outstanding example of a building that minimises reliance on services engineering.

Figure 11.3 Portcullis House

Another kind of active facade is one which incorporates solar cells. Commercial buildings have perhaps the greatest potential, with PV cells integrated into their glazing as well as being roof mounted. The main advantage of commercial application is that offices use most of their energy during daylight hours. In the UK a pioneer scheme is the Northumberland Building for the University of Northumbria in Newcastle, where cells have been applied to the spandrels beneath the continuous windows. To date it has achieved an average daily output of 150 kWh. Based on this figure it is expected that the cost of the PVs will be paid back in three years thanks to a substantial subsidy. After this it will continue to produce electricity free of cost for about 20 years. It is estimated that the annual saving in CO2 emissions from this building alone will be of the order of 6 tonnes. Currently the Co-operative Headquarters building in Manchester is retro-fitting PVs to the south elevation of its circulation tower as part of a refurbishment programme. One of the challenges of the next decades will be to retrofit buildings with PVs. This will increasingly be a preferred option as the cost of fossil fuel rises under the twin pressures of diminishing reserves and the need to curb CO2 emissions. Even at the present state of the technology, Ove Arup and Partners estimate that one third of the electricity needed to run an office complex could come from PVs with only a 2 per cent addition to the building cost. Given the abundance of information and advice available, designers should now be able to grasp the opportunities offered by such technologies, which also allow exploration of a range of new aesthetic options for the building envelope. The case study of the ZICER building in the University of East Anglia will serve as an example (Chapter 18).

The Solar Offices of Doxford International by Studio E Architects, located near Sunderland, is a pioneer example of this tactic in the UK. This is a speculative office development which offers the advantage of much reduced power consumption of 85 kWh/m2/year as against the normal air conditioned office of up to 500 kWh/m2/year. The 73 kW (peak) array of over 400 000 photovoltaic cells on the facade produces 55 100 kWh per annum, which represents one third to one quarter of the total anticipated electrical consumption (Figures 11.4 and 11.5). It is claimed that the integrated approach to the design has resulted in simplified engineering solutions and a considerable saving in energy as against a standard naturally ventilated building.

Figure 11.4 Doxford Solar Offices, cutaway section of facade
Figure 11.5 Interior, Doxford Solar Office

The Doxford Office is modest compared with the German government In-service Training Centre called Mont Cenis at Herne-Sodingen in the Ruhr. After the demise of heavy industry the Ruhr became a heavily polluted wasteland, which prompted the government of North-Rhine Westphalia to embark on an extensive regeneration programme covering 800 square kilometres. The Mont Cenis Training Centre is one of the world's most powerful solar electric plants and is a spectacular demonstration of the country's commitment to rehabilitate this former industrial region whilst also signalling its commitment to ecological development (Figure 11.6). The building is, in effect, a giant canopy encompassing a variety of buildings and providing them with a Mediterranean climate. At 168 m long and 16 m high, the form and scale of the building has echoes of the huge manufacturing sheds of former times. A timber structural frame of rough hewn pine columns is a kind of reincarnation of the forests from which they originated. The structure encloses two three-storey buildings either side of an internal street running the length of the building (Figure 11.7). Their concrete structure provides substantial thermal mass, balancing out both diurnal and seasonal temperature fluctuations. Landscaped spaces provide social areas which can be used all year in a climate akin to the Côte d'Azur. Sections of the facade can be opened in summer to provide cross-ventilation.

Figure 11.6 Mont Cenis In-service Training Centre, Herne-Sodingen, Germany
Figure 11.7 Mont Cenis ground floor plan

The building is designed to be self-sufficient in energy. The roof and facade incorporate 10 000 m2 of PV cells integrated with glazed panels. These provide a peak output of 1 megawatt. Two types of solar module were employed: monocrystalline cells with a peak efficiency of 16 per cent and lower density polycrystalline cells at 12.5 per cent. Six hundred converters change the current from DC to AC to make it compatible with the grid, and a 1.2 MW battery plant stores power from the PVs, balancing output fluctuations.
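The Doxford figures quoted earlier in this section (a 73 kWp facade array producing 55 100 kWh per annum) imply a capacity factor, i.e. the actual annual output as a fraction of what the array would deliver if it ran at peak rating all year. The small sketch below does the arithmetic; the interpretation of the result is an observation, not a claim from the text.

```python
# Sketch: capacity factor implied by the Doxford PV figures in the text.
# capacity factor = annual output / (peak rating * hours in a year)

PEAK_KW = 73
ANNUAL_KWH = 55_100
HOURS_PER_YEAR = 8760

capacity_factor = ANNUAL_KWH / (PEAK_KW * HOURS_PER_YEAR)  # roughly 0.086
daily_avg_kwh = ANNUAL_KWH / 365                           # roughly 151 kWh/day
```

A capacity factor of around 8 to 9 per cent is what one would expect for a vertical, facade-mounted array at UK latitudes, and the implied daily average of about 150 kWh is consistent with the output figure quoted for the Northumberland Building above.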
The power generated greatly exceeds the needs of the building at 750 000 kWh per year, and German policy on renewables makes exporting to the grid a profitable proposition. This is not the only source of energy generation: the former mines in the area release more than one million cubic metres of methane, which is used to provide both heat and power. Capturing the gas in this way results in a reduction of carbon dioxide emissions of 12 000 tonnes.

The architects, Jourda and Perraudin of Paris, designed the distribution of PV panels to reflect the arbitrary distribution of clouds by means of six different types of module with different densities, creating subtle variations to the play of light within the interior. It all adds up to an enchanting environment of spaciousness, light and shade. This complex is an outstanding example of an alliance between green technology and aesthetics. At the same time it affords a graphic reminder that regenerated industrial landscapes do not have to be populated by featureless utilitarian sheds.

Floors and ceilings
The undersides of floors have a crucial role to play in determining the effective thermal mass of a structure. Traditionally, concrete plank or slab floors had ceilings suspended below them to house services. Now there are increasing examples of the system being reversed, with the floor above the slab raised to provide space for ducts and other services. The soffit of the concrete floor is free of finishes, the purpose being to improve the effectiveness of the thermal mass and radiate stored heat in colder temperatures and 'cooling' in hot conditions. In summer, night air is passed through ducts to cool the slab, which then, during the day, radiates cooling into the workplace. Such thermal mass features are sometimes called 'thermal flywheels' or dampeners since they flatten the peaks and troughs of temperature. It is important that the slabs are not carried through to the facade in order to avoid a major thermal bridge. To recapitulate, a thermal bridge is a route whereby cold is able to bypass wall insulation.

A proprietary deck system which incorporates ducts to transport both warm and cool air is Termodeck from Sweden. Air is passed through the ducts at low velocity, with stale air drawn into grilles over light fittings and its heat extracted in a heat recovery unit before being expelled to the open air. There is no recirculation of air. This was used in the Elizabeth Fry Building in the University of East Anglia to good effect; this is one of the most energy efficient buildings of the 1990s due to very high levels of insulation and air tightness. One of the most aesthetically and environmentally suitable methods of achieving radiative thermal mass is by barrel vaults, as employed in Portcullis House (Figure 11.8) and the Wessex Water Operational Centre near Bath by Bennetts Associates. (For further information see Smith and Pitts (1999) Concepts in Practice – Energy, Batsford.)

Figure 11.8 Vaulted floors with exposed soffits, Portcullis House

Whilst the popular way to moderate the peaks and troughs of external temperature as it affects building interiors is to exploit thermal mass, there is an alternative, which is to use a phase change material on internal surfaces. A system is now on the market which enables lightweight structures to enjoy the benefits of thermal mass. It is based on paraffin wax, which is a phase change material. The wax is encapsulated within minute plastic balls to form microcapsules in powder form, which is mixed with gypsum plaster in a ratio of between 1:5 and 2:5 by weight, and the mix is sprayed on to walls. The wax stores heat up to its melting point, which can be adjusted to a range of temperatures according to the requirements of the material that supports it. As the wax stores heat its temperature does not rise until it reaches melting point, which is its maximum storage capacity. Night-time cooling causes the wax to solidify and release the stored heat to warm the interior space. This makes the system particularly suitable for offices which are vacant at night and which can be vented to the outside. It is claimed that a plaster coating of 6 mm has the same absorbent capacity as a 225 mm masonry wall. In an office context this material is ideal for facing internal partitions. Up to spring 2004 ten buildings have been equipped with the system, which was developed by the Fraunhofer Institute for Solar Energy Systems in Freiburg (e-mail: schossig@ise.fraunhofer.de).

Chapter Twelve
Ventilation

Natural ventilation
Part of the reaction against the sealed glass box concept of offices has been to explore the possibilities of creating an acceptable internal climate by natural means. This has caused a reappraisal of traditional methods, including those employed in hot climates for two millennia or more. The overriding principle should be to minimise the need for artificial climate systems, and one way to achieve this is to make maximum use of natural ventilation in conjunction with climate sensitive design techniques for the building fabric.

Internal air flow and ventilation
Air flow in the interior of buildings may be created by allowing natural ventilation or by the use of artificial mechanical ventilation or air conditioning. The production of buildings using more than one of these options is becoming more frequent; such buildings are said to be 'mixed-mode'. Natural ventilation is possible due to the fact that warm air is lighter than cold air and therefore will tend to rise in relation to cold air. As it rises, colder air is drawn in to compensate: the buoyancy principle.

The most straightforward system of cross flow ventilation is where fresh air is provided with routes through a building from the windward to leeward side. If air flow is to be encouraged to help provide natural ventilation and cooling, the following are desirable design features:
● Plan form should be shallow to allow for the possibility of cross-ventilation. Building depth should not be more than about five times the floor to ceiling height if cross-ventilation is to be successful. For single sided ventilation, depth should be limited to about two and a half times the floor to ceiling height; in most office situations single sided ventilation can be considered as a supplement to the main ventilation strategy.
● Openings on opposite walls to allow cross-ventilation are better than openings on one or more adjacent walls.
though care in meeting fire and smoke movement restrictions may determine the limits of what is possible.VENTILATION ● ● ● ● ● Minimum opening areas should be about 5 per cent of floor area to provide sufficient flow.11. In fact this building is one of the most overt demonstrations of the dynamics of natural ventilation. This is particularly difficult in high rise buildings but its problems have been addressed in the 40 storey Swiss Re building in the City of London (see pp. Westminster. The ventilation system most obviously borrowed from the past is the use of the thermal chimney exploiting the buoyancy principle. London 139 . Fresh air. secure background ventilation should be available using trickle vents and other devices.1 and 12. The effectiveness of natural ventilation and cooling can be improved by the use of low energy controlled lighting and low energy office equipment. Figure 12. Windows should be openable. 12. is drawn in at high level assisted by the thermal wheel (Figures 12. with external rising ducts carrying the warmed air from the offices to a thermal wheel on the roof before being expelled. This is a deep plan building making it impossible to employ cross flow ventilation from perimeter windows. From here the air is drawn upwards through preheating coils to be released to rooms at floor level. The building energy management system (BEMS) Figure 12.2 Coventry University library (courtesy of Marshalls plc) 140 . The solution was to provide each quadrant of the floor plan with large lightwells doubling up as air delivery shafts. (ed.3 and 12. By now the air has reached 18 C. There is also the problem of a raised ring road close to the site generating noise and pollution. The buoyancy of rising warm air draws fresh air into plenums below floor level to the base of each light tower. Maintaining the principle of pure natural ventilation without mechanical assistance is the Coventry University Library.) (1996) Environmental Design. 
control is critical.2).ARCHITECTURE IN A CLIMATE OF CHANGE Unassisted natural ventilation Pioneers of natural ventilation are Alan Short and Brian Ford in association with Max Fordham. R.4). The air is then drawn into the exit stacks spaced around the external walls. This building has been well documented and a particularly useful reference is Thomas. Accordingly perimeter windows are sealed (Figure 12. the Lanchester Building. Their first groundbreaking building in the UK was the Queen’s Engineering Building at Leicester de Montfort University (Short Ford and Partners). E & FN Spon. by architects Short and Associates. Additional warmth is provided by perimeter radiators. The environmental strategy was developed in association with Brian Ford. In a building relying solely on the buoyancy of natural ventilation. ‘Termination’ devices at the top of the stacks ensure that prevailing winds will not push air back down the stacks (Figures 12. VENTILATION NW SE SW FIRST FLOOR PLAN VENTILATION STACK ROOF PLAN Figure 12. Coventry University library 141 .3 Plans. 4 Air circulation paths 142 .ARCHITECTURE IN A CLIMATE OF CHANGE Section through central atrium (air outlet) Warm Exhaust air out Section through perimeter lightwell (air inlet) Fresh Air intake Figure 12. Brian Ford and Max Fordham have navigated uncharted waters (Figure 12. The result of avoiding mechanical ventilation and maximising natural light is that the estimated energy demand is 64 kWh/m2 per year which represents CO2 emissions of 20 kg/m2. meaning that it should progressively optimise the system.5 Contact Theatre.VENTILATION adjusts the outlet opening sizes according to outside temperature and the CO2 and temperature readings in each zone of the building.26 W/m2K for walls and less than 2 2. The latter comprise Low E double glazing with an argon filled cavity.0 W/m K for windows. This is a BEMS which is driven by a self-learning algorithm.5). 
There is a considerable heat load from stage lighting as well as the audience yet the Contact Theatre at Manchester University achieves comfort conditions without help from air conditioning. The building type which presents the most formidable challenge to anyone committed to natural ventilation is a theatre. It is tuned to meet the optimum fresh air requirement compatible with the minimum ventilation rate (Figure 12. Manchester University 143 . cooling the exposed thermal mass during the summer. This is around 85 per cent less than the standard air conditioned building. Short Ford Associates have risen to the challenge in a spectacular fashion. This is another building by which Alan Short. Heat losses through the fabric of the building are minimised by good insulations standards: U 0. Figure 12. The BEMS controls dampers which allow night air to flow through the building.4). learning by its mistakes. Things were made more complicated by the fact that this is a refurbishment of a 1963 auditorium.ARCHITECTURE IN A CLIMATE OF CHANGE The outstanding feature is the cluster of H-pot stacks over the auditorium reaching a height of 40 metres. Figure 12. Their volume is calculated to accelerate the buoyancy effect and draw out sufficient hot air whilst excluding rain. The H-pot design lifts them above neighbouring buildings to exclude downdraughts from the prevailing south-west winds. which has been largely preserved.6). In a theatre ventilation and cooling are the major energy sinks.6 Longitudinal and transverse sections. Contact Theatre 144 . Consequently the energy load of this building should be a fraction of the norm (Figure 12. above the diesel particulate matter zone. The aerofoil shape of the wind direction terminal produces negative pressure on the leeward side. the fresh air is delivered through perimeter ducts to provide displacement ventilation. night-time cooling can be achieved by passing large quantities of fresh air over the structure. Figure 12. 
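The two Contact Theatre performance figures can be cross-checked against each other. The sketch below is illustrative only: the notion of an "implied emission factor", and its interpretation, are our own working assumptions, not part of the text.

```python
def implied_emission_factor(energy_kwh_per_m2: float, co2_kg_per_m2: float) -> float:
    """Emission factor (kg CO2 per kWh) implied by a quoted energy intensity
    and its associated CO2 intensity."""
    return co2_kg_per_m2 / energy_kwh_per_m2

# Contact Theatre: 64 kWh/m2 per year and 20 kg CO2/m2 per year
factor = implied_emission_factor(64, 20)
print(round(factor, 2))  # 0.31 kg CO2/kWh, consistent with a mixed fuel supply
```

The resulting figure of roughly 0.3 kg CO2/kWh sits between typical gas and electricity factors, which is what one would expect for a building heated by gas but lit and controlled electrically.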
In circumstances like this theatre it may be necessary to incorporate attenuators in the system to minimise external noise.

The stack effect or gravity displacement is dependent on the difference in temperature between the outside and inside air and the height of the air column. There is considerable variation in the relative temperatures over the diurnal and seasonal cycle. Night-time cooling works when the external temperature is lower than the internal one and gravity drives the cooler air down into the building. In the daytime in summer, when the internal temperature has become lower than the outside temperature, it is necessary to cool the incoming air, perhaps by evaporative cooling or a heat pump. In the UK this system can work economically up to six storeys. Above this, duct sizes may become excessively large to cope with the volume of air.

One objection to naturally ventilated buildings is that they draw polluted air into a building. To reduce the chance of this happening in highly polluted areas, fresh air should be drawn into the building at high level, above the diesel particulate matter zone. At the same time, exhaust air which has risen through the stack effect also needs to be expelled at high level, so a means has to be found of ensuring the exhaust air does not contaminate the fresh air. One way is to employ a terminal design which rotates according to the direction of the wind, as in the traditional oast houses of Kent, and Portcullis House, Westminster. In Figure 12.7 a design of terminal is shown which ensures that fresh air is always drawn in from the windward side and exhaust air expelled to the leeward side. The ventilation system uses 100 per cent fresh air throughout the year. If heat is transferred from the input duct to the exhaust duct, this further assists buoyancy.

Figure 12.7 Combined function rotary terminal

In most commercial and institutional buildings it is unlikely that natural ventilation on its own will be adequate. A degree of mechanical assistance is necessary to achieve an adequate rate of movement around the building. Mechanical ventilation involves air flow and movement provision using fans and possibly supply/extract ducts. However, in its basic form, no cooling system is incorporated and therefore the lowest air temperature which can be supplied is usually restricted to ambient conditions. Such a system may be able to act as the heating system in winter. The exhaust air can either exit through perimeter ducts or a climate facade. In the section shown in Figure 12.8, the fresh air is delivered through perimeter ducts to provide displacement ventilation.

Figure 12.8 Typical system for a naturally ventilated office

Mechanical assistance should not be confused with air conditioning, which is a much more complex operation. Air conditioning involves the cooling of the air using a refrigeration system. More precise control over air temperature and humidity can be achieved this way, but usually only within a sealed building. In many temperate climates the thermal inertia of a building structure, combined with controlled air flow, should be sufficient to avoid excessive overheating except for a few hours each year. Immediately air conditioning is specified, energy use is likely to increase substantially.

Mechanically assisted ventilation

Rotating cowls was the system adopted by Michael Hopkins and Partners with Ove Arup and Partners at the Nottingham University Jubilee Campus (Figure 12.9). This ventilation system is the successor to Hopkins’ and Arup’s innovations at the Inland Revenue HQ, also in Nottingham. These innovations led to a low pressure mechanical system linked to heat recovery via a thermal wheel which recovers 84 per cent of the exhaust heat. Air is introduced directly into the roof mounted air handling units, where it is passed through electrostatic filters (Figure 12.10). From here it is blown down vertical shafts into traditional floor voids and thence to teaching rooms via low pressure floor diffusers. Exhaust air uses the corridor as the extract path, from where it rises under low pressure via a staircase to the roof air handling unit (AHU) for heat recovery, and is then expelled through the cowl. The vane on the cowl ensures that the extract vent faces the leeward side according to the direction of the wind; the aerofoil shape of the wind direction terminal produces negative pressure on the leeward side, assisting the expulsion of exhaust air. A wind vane ensures that the terminal always faces the correct direction. The mechanical system requires 51 000 kWh per year and this is supplied by 450 m² of monocrystalline photovoltaic cells.

Figure 12.9 Jubilee Campus, University of Nottingham
Figure 12.10 Air handling units (AHUs), Jubilee Campus

As mentioned, the inclusion of mechanical reinforcement of natural ventilation is the first step in the mixed-mode direction. There are at least four types of mixed-mode ventilation:

● Contingency – mechanical ventilation is added or subtracted from the system as necessary.
● Zoned – different ventilation systems are provided for different portions of the building depending upon needs.
● Concurrent – natural and mechanical systems operate together.
● Changeover – natural and mechanical systems operate as alternatives (but often turn out to be concurrent because of difficulties in zoning or changeover point control).
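The dependence of gravity displacement on the inside–outside temperature difference and the height of the air column can be quantified with the standard buoyancy-pressure relation. This is a textbook formula, not one given in the text, and the example temperatures are illustrative:

```python
G = 9.81  # gravitational acceleration, m/s^2

def stack_pressure(height_m: float, t_inside_c: float, t_outside_c: float,
                   rho_outside: float = 1.2) -> float:
    """Approximate stack (buoyancy) pressure difference in pascals for an
    air column of the given height: dp = rho * g * h * (Ti - To) / Ti,
    with temperatures converted to kelvin."""
    ti = t_inside_c + 273.15
    to = t_outside_c + 273.15
    return rho_outside * G * height_m * (ti - to) / ti

# A 40 m stack (the height of the Contact Theatre H-pots) with
# 22 C inside and 12 C outside
print(round(stack_pressure(40.0, 22.0, 12.0), 1))  # roughly 16 Pa of driving pressure
```

Even a tall stack produces only tens of pascals, which is why duct resistance, opening sizes and termination design matter so much in purely buoyancy-driven schemes.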
If mechanical ventilation is to be used to aid summer comfort levels, the following tactics are recommended:

● draw external air from the cool side of the building;
● consider drawing air through cooler pipes or ducts (for instance located underground) to reduce and stabilise its temperature; ground water cooling is becoming increasingly popular;
● ensure supply air is delivered to the required point of use efficiently to provide the most beneficial cooling effect but without uncomfortable draughts;
● ensure extracted air optimises heat removal by taking the most warm and humid air;
● integrate use and positioning of mechanical systems with natural air flow;
● in highly polluted city centre locations, air filtration down to PM5 (particulate matter down to 5 microns) is essential;
● employ night-time purging of the building to precool using lowest temperature ambient air.

The last of these options offers many potential benefits, since the air delivered to the space can achieve a lower temperature than ambient external conditions. This is particularly the case where cooler night-time air is passed over the building’s thermal mass (often the floor slab), which retains the ability to cool incoming daytime air. Further ‘natural cooling’ alternatives to air conditioning are summarised on pages 151–154.

An increasingly popular option is ‘displacement ventilation’. In this case air at about one degree below room temperature is mechanically supplied at floor level at very low velocity, usually about 0.2 metres per second. This air is warmed by the occupants, computers, light fittings, etc., causing it to rise and be extracted at ceiling level. Air quality and comfort levels can be more easily controlled using this system. However, not all rooms may be suitable for this strategy and therefore it should be specified only where appropriate.

Figure 12.11 Portcullis House, section
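The very low supply velocity quoted for displacement ventilation still delivers a useful cooling duty, as a rough sizing sketch shows. The diffuser area and the temperature rise of the air before extract are assumed here for illustration; only the 0.2 m/s velocity comes from the text:

```python
RHO_AIR = 1.2     # density of air, kg/m^3
CP_AIR = 1005.0   # specific heat of air, J/(kg K)

def displacement_cooling_watts(diffuser_area_m2: float, velocity_ms: float,
                               temp_rise_k: float) -> float:
    """Sensible heat picked up by the supply air stream:
    Q = rho * A * v * cp * dT."""
    mass_flow = RHO_AIR * diffuser_area_m2 * velocity_ms  # kg/s
    return mass_flow * CP_AIR * temp_rise_k

# 2 m2 of floor diffusers at 0.2 m/s, with the air warming 3 K
# between supply and ceiling extract
print(round(displacement_cooling_watts(2.0, 0.2, 3.0)))  # ~1447 W
```

About 1.4 kW of sensible cooling from a gentle, draught-free air movement illustrates why the strategy suits offices with modest, well-distributed heat gains.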
Portcullis House is one of the most prestigious buildings to use displacement ventilation (Figures 12.11 and 12.12). A mechanically assisted ventilation system serves a network of linked floor plenums, drawing air from ducts in the facade to provide 100 per cent external air to each room. The system incorporates high efficiency heat recovery from solar gain, the occupants, electrical equipment and room radiators. Exhaust air is carried by ducts expressed externally in the steeply pitched roof and expelled through a series of chimneys designed to enhance the stack effect. Heat recovery is by means of a roof mounted rotary hygroscopic heat exchanger or ‘thermal wheel’ with 85 per cent efficiency, which is fed by air return ducts which follow the profile of the roof. This thermal wheel is also able to recover winter moisture from exhaust air, reducing the load on humidifiers (Figure 12.13).

Adjacent to Westminster Bridge, Portcullis House (Figure 12.1) is situated in one of the most heavily polluted locations in London. Ventilation air is drawn in at the highest possible level, well above the high concentration zone of particulate matter from vehicle exhausts. This outside air is fed into the underfloor plenum and the displacement ventilation is assisted by buoyancy action, aided by low power fans. The brief specified a temperature of 22°C plus or minus 2°C so, when necessary, the ventilation air can be cooled by ground water from two bore holes at a steady 14°C. The full fresh air system is able to serve all rooms equally, despite the diversity of function. This is essential for a long-life building which may undergo numerous internal changes.

Figure 12.12 Portcullis House, displacement ventilation
Figure 12.13 Portcullis House, ventilation pathways and detail of the thermal wheel

An outstanding example of displacement ventilation being inserted into a refurbished building is afforded by the Reichstag. By a slender majority the German Parliament decided to move to Berlin and to rehabilitate the Reichstag. Norman Foster was invited to submit a design in a limited competition, which he won. The debating chamber uses displacement ventilation, again drawing air from high level above low level pollution such as PM10s (it is now considered that PM5 should be the health threshold). The chamber floor comprises a mesh of perforated panels covered by a porous carpet; the whole floor, therefore, is a ventilation grille. Large ducts under the floor enable air to be moved at low velocity, which reduces noise and minimises the power for fans (Figure 12.14).

Finally, the critical design issues concerning mechanical ventilation involve:

● the sizing and routing of ducts to minimise resistance and thus keep fan size to a minimum;
● the positioning of diffusers in relation to plan and section of rooms;
● the size of diffusers to minimise noise;
● the inclusion of devices to stop the spread of fire.

Cooling strategies

Cooling strategies begin at the level of the site. Vegetation, especially trees, provides both shade and evaporative cooling through moisture expiration through leaves. Pools, fountains, waterfalls/cascades, sprays and other water features all add to the evaporative cooling effect. In studies of the ‘heat island effect’ generated by buildings it was found that clusters of trees within the heat island can produce a localised drop in temperature of 2–3°C.

Chilled ceilings are a method of providing cooling not necessarily associated with air flow systems. The advantages of the system are, first, that thermal stratification effects in a room are reduced and, second, that a chilled ceiling counterbalances the effect of thermal buoyancy, that is, rising warm air. The ceiling may be chilled using a refrigerant.
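The 85 per cent efficient thermal wheel at Portcullis House can be expressed as a simple heat-exchanger effectiveness calculation. The winter supply and exhaust temperatures below are assumed for illustration; only the 85 per cent figure comes from the text:

```python
def recovered_supply_temp(t_outside: float, t_exhaust: float,
                          effectiveness: float = 0.85) -> float:
    """Supply air temperature after a heat-recovery wheel:
    Ts = To + effectiveness * (Texhaust - To)."""
    return t_outside + effectiveness * (t_exhaust - t_outside)

# A winter day: 2 C outside, 22 C exhaust air, 85 per cent effective wheel
print(recovered_supply_temp(2.0, 22.0))  # 19.0, so only a 3 K top-up is needed
```

Recovering 17 of the 20 degrees between exhaust and ambient is what allows a full fresh air system to run without a punitive heating load.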
The more environmentally benign method is to employ mechanical night-time cooling to precool exposed floor slabs. An alternative system involves embedding pipes in concrete floors to carry cooling water, usually from a ground source.

Figure 12.14 Displacement ventilation and natural light in the Reichstag

Evaporative cooling

Another case of ‘nothing new under the sun’ is evaporative cooling. One of the earliest cases of this being incorporated in a building is the Emperor Nero’s megalomanic ‘Golden House’, which covered most of the centre of Rome. At its centre was the domed octagon room, and in one of its sides a waterfall was inset, supplied by a mountain stream. No doubt it performed the dual role of architectural feature and cooling device.

Evaporative cooling works on the principle that molecules in a vapour state contain much more energy than the same molecules in a liquid state. The amount of heat required to change water into vapour is the latent heat of evaporation. This heat is removed from the water and transferred to the vapour, hence ‘evaporative cooling’. Evaporative techniques include:

● air that does not already have a high moisture content can be cooled by allowing water to evaporate into it;
● direct evaporation occurs when air passes through tree foliage, fountains and across pools, or through a spray or damp material across windows;
● direct evaporative cooling is best in dry climates where average relative humidity at noon in summer does not exceed 40 per cent;
● in the case of indirect evaporation, the air does not come into direct contact with the moisture, but can be allowed to pass through tubes or pipes which have their outer surfaces moistened;
● as stated, evaporation causes surfaces to cool (Thomas, R. (ed.) (1996) Environmental Design, E & FN Spon).

An example of a design which incorporates evaporative cooling is the Jubilee Campus at Nottingham University. Sloping glazing directs air which has previously passed across an extensive open air pool into an atrium between teaching and office units. Orientation ensures that the prevailing wind is in the right direction (Figure 12.15).

Figure 12.15 Directed evaporative cooling, Jubilee Campus

Additional cooling strategies

● Shading should be compatible with daylight provision and passive solar gain.
● Use heat absorbing and heat reflecting glasses.
● In traditional Mediterranean building the outer surfaces were painted light colours to reflect a portion of the heat gain; we can learn from this.

The ecological tower

Surely an oxymoron? The orthodox ‘green’ would rule out anything above about 12 storeys, since this is the height at which natural ventilation in the western European climate zone is said to become impracticable. Also the construction energy costs rise significantly every five floors or so, and tower blocks usually require a heavy engineering services system. However, the ecological tower block has its advocates, most notably Ken Yeang from Kuala Lumpur. He pioneered the idea of gardens in the sky coupled with natural ventilation. To cope with the wind speeds (up to 40 metres per second at 18 storeys) he uses wing wind walls and wind scoops which deflect the wind into the centre of the building.

The first manifestation of these principles in the west was the Commerzbank in Frankfurt (Figure 12.16). This began life as a limited competition for an office headquarters comprising 900 000 square feet of office space and 500 000 square feet of other uses. At that time the Green Party was in control of the city, and the brief was clear that it should be an ecological building in which energy efficiency and natural ventilation played a crucial role. In the winning design by Norman Foster Associates, a 60-storey three-sided building wraps round an open central core ascending the full height of the building (Figure 12.17).

Figure 12.16 Commerzbank, Frankfurt
Figure 12.17 Commerzbank typical floor plan

The most remarkable feature of the design is the incorporation of open gardens. The nine gardens each occupy four storeys and rotate round the building at 120 degrees, enabling all the offices to have contact with a garden. The gardens are social spaces where people can have a coffee or lunch, and each one ‘belongs’ to a segment of office space accommodating 240 people. As the architects put it: ‘we’re breaking the building down into a number of village units’. This is extremely important in reducing the scale of the place for its occupants. The gardens feature vegetation from North America, Japan and the Mediterranean according to their height above ground.

The natural ventilation enters through the top of the gardens, passing into the central atrium. The atrium is subdivided into 12-storey units, and within 12 floors there is cross-ventilation from the gardens in the three directions (Figure 12.18). Air quality is good, enhanced as it is by the greenery.

Figure 12.18 Natural ventilation paths in the Commerzbank

The curtain wall design is on Klimafassade (climate facade) principles. The climate facade consists of a 12 mm glass outer skin that has been specially coated to absorb radar signals, presumably from the airport. Air enters at each floor in the facade into a 200 mm cavity, where it heats up and passes out through the top of the cavity. This, in effect, creates a thermal chimney. Motorised aluminium blinds in the cavity provide solar shading. The inner skin of the facade is Low E double glazing, giving the overall system a high insulation value. It is estimated that the natural ventilation system will be sufficient for 60 per cent of the year. When conditions are too cold, windy or hot, the building management system activates a backup ventilation system which is linked to a chilled ceiling system that operates throughout the building. It is calculated that the ventilation system will use only 35 per cent of the energy of an air conditioned office.

This is a remarkable attempt to create an extremely high tower block which minimises its environmental impact whilst also providing optimum comfort and amenity for its occupants. It also demonstrates how bioclimatic architecture is subject to the vagaries of political fortune: if the Greens had not had their brief moment of glory, it is likely that this building would never have happened.

In 2004 Number 30 St Mary Axe, the London headquarters of the international reinsurers Swiss Re, was completed (Figure 12.19). It is claimed by its architects Foster and Partners to be the first environmental skyscraper in the City. At 40 storeys its circular plan and conelike shape differentiate it from all other high buildings in London. The question is whether this is a piece of architectural whimsy or a form that arises from a logical functional brief. There is no doubt as to its genetic origin, which is the Commerzbank in Frankfurt with its triangular plan and four-storey atria which rotate around the plan (Figures 12.19–12.21).

Figure 12.19 Swiss Re Insurance Group headquarters, London

The idea of an atrium space easily accessible at all levels has now evolved into six spiral light wells that have a platform at every sixth floor. Floors between the break-out spaces have balconies to the atria. Triangular in plan, the atria/lightwells serve to provide both light and ventilation, acting as ‘lungs’ for the building and providing natural ventilation for 40 per cent of the year. The spirals are accentuated in darker glass on the elevation. A circular plan has the advantage of maximising daylight in the office floors, which are situated around the perimeter with circulation taking up the core of the building. The curved aerodynamic shape ensures that even high winds slide off the surfaces, making minimum impact. This has made it possible to incorporate motorised opening windows in the atria to assist natural ventilation.

The external skin is a climate facade consisting of an external double glazed screen and single internal glazing. The space between serves as a ventilated cavity, removing warm air in summer and providing insulation in winter. Solar controlled blinds are positioned within the cavity. Both natural and mechanical ventilation systems are controlled by an intelligent building management system. Overall the ventilation system is mixed mode, employing air conditioning, which is perhaps inevitable in a building of this height and location. According to the services engineers Hilson Moran, the energy impact of the air conditioning is reduced by a series of heat recovery units. Altogether the environmental attributes of the design result in an estimated energy consumption of 150 kWh/m² per year, which represents a 50 per cent saving compared with a traditional, fully serviced office development of similar size and good practice design. When the energy savings are capitalised it quickly becomes clear that the extra cost is soon recovered. The 39th floor is a restaurant offering spectacular views for the privileged few.

This building highlights one of the dilemmas of bioclimatic architecture, namely that a bespoke building may only be partly used by the building owner. In this case Swiss Re will undertake rigorous energy management.
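The quoted Swiss Re figure of 150 kWh/m² per year as a 50 per cent saving implies a benchmark for the conventional alternative, which can be checked with a couple of lines. The floor area used in the second calculation is hypothetical, purely to show the scale of the annual totals:

```python
def implied_baseline(consumption_kwh_m2: float, saving_fraction: float) -> float:
    """Benchmark consumption implied by a quoted saving:
    consumption = baseline * (1 - saving)."""
    return consumption_kwh_m2 / (1.0 - saving_fraction)

def annual_energy_kwh(consumption_kwh_m2: float, floor_area_m2: float) -> float:
    """Total annual energy for a given intensity and floor area."""
    return consumption_kwh_m2 * floor_area_m2

# Swiss Re: 150 kWh/m2 per year quoted as a 50 per cent saving
print(implied_baseline(150.0, 0.5))        # 300.0 kWh/m2 per year benchmark
# A hypothetical 45 000 m2 of lettable offices
print(annual_energy_kwh(150.0, 45_000.0))  # 6750000.0 kWh per year
```

At several million kilowatt hours per year even for the efficient case, the sensitivity of the outcome to how tenants actually operate the building, the dilemma raised in the text, is clear.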
Figure 12.20 Ground floor plan and piazza
Figure 12.21 Upper floors with triangular atria

Nearby, in Aldgate, Nicholas Grimshaw and Partners are constructing a 49-storey office building, the Minerva Tower (Figure 12.22). Again the design makes maximum use of the climate facade principle, which adds about 3 per cent to the cost and reduces the floor plate, offering considerable revenue benefits thereafter. The services engineers, Roger Preston and Partners, reckon that if the natural ventilation capacity is used to the full, it should produce a two thirds energy saving against a conventional sealed air conditioned equivalent.

Up to the seventh floor occupants can open windows behind a protective glass screen. Above this level the climate facade comes into its own. Vents at the top and bottom of this void allow access for fresh air. This means that, even at a height of 200 m, air velocity can be moderated by vents, allowing it to enter the office space at an agreeable velocity. This moderates the problems associated with high rise buildings: high wind velocity, pollution and noise. The seasonal variations in the operation of the climate facade are shown in Figure 12.23. The designers are optimistic that the tower will be able to operate in natural mode for about two thirds of the year, with mechanical ventilation only necessary in extremely hot, cold or windy conditions. However, much of the tower will be let out with no guarantee of a similar quality of energy management. The worst case scenario is that the system will be allowed to default to air conditioning, which will negate the energy efficiency targets of the designers.

Figure 12.22 Minerva Tower glazing
Figure 12.23 Natural ventilation in a climate facade, Minerva

Summary

Ventilation and air movement – recommendations
● Help cool occupants by increasing air movement during day time.
● Cool the structure of the building using cooler air normally available at night.
● Plan the siting of building openings to enhance natural ventilation.
● Investigate the use of wing walls to improve air flow through openings.
● Allow stack effect flow paths to produce ventilation air movement.
● Consider the use of solar chimneys to enhance stack air movement.
● Wind towers and wind catchers can be used to derive additional air flow.
● Internal fans – box, oscillating and ceiling types – should be available when alternative air flow is insufficient.

Radiative loss of heat
● Radiant heat loss from building surfaces can be improved by consideration of the geometry of the building in relation to the sky and other structures.
● Exposed roof surfaces may allow night-time cooling in suitable climates.

Absorption of heat gain
● Absorption cooling uses natural sources of heat to drive simple absorption refrigeration systems.
● Lithium bromide and ammonia-based refrigerants are most frequently used.
● Heat is removed from the building by air or liquid cooled by the absorption system.
Earth cooling strategies
● The temperature of the earth below ground is generally cooler and more stable than the air above ground.
● The earth is used to absorb heat either by building wholly or partly underground or by passing air through ducts or passages, usually one to three metres below the surface, prior to supply to the building.

Air conditioning

Air conditioning systems have high energy demands for heating and particularly cooling systems. In addition, the rates of air flow are often substantially higher than with simple mechanical ventilation systems, thus requiring heavy duty, energy guzzling fans. The additional proportion of energy consumption is not matched by a proportional increase in comfort. The system is often operated for large fractions of the day when a suitable building design, combined with an appropriate environmental control strategy, would obviate the need for such air conditioning. The extravagant use of air conditioning is particularly noteworthy in the temperate climate of the United Kingdom.

In general it can be asserted that climate-sensitive design can eliminate the need for air conditioning in most instances. There are of course some circumstances in which air conditioning is necessary; its use should be justified by the particular circumstances. Where air conditioning is deemed necessary, however, it is likely to be of prime importance in only a fraction of the whole building, and therefore designers should design for appropriate compartmentalisation, with the conditioned area sealed from the remainder of the building.

Chapter Thirteen
Energy options

Electricity is the ultimate convenience source of energy, which disguises the fact that, with present methods of production and the fuel mix, it is highly energy inefficient. Energy is defined as ‘primary’ and ‘delivered’. Primary energy is that which is contained in the fuel in its natural state; delivered energy is that which is in the fuel at the point of use. At its point of use, as delivered energy, electricity is around 30 per cent efficient. Much has been made of the UK’s switch to gas fired electricity generation, yet still electricity accounts for 750 grams of CO2 in the atmosphere for every kilowatt hour. It is worth noting the relative CO2 emissions between different forms of fossil-based energy:

                 kg/kWh delivered
Electricity      0.75
Coal             0.31
Fuel oil         0.28
Gas              0.21

At present fossil-based energy is relatively cheap because, as indicated earlier, it does not carry its external costs such as the damage to health, to forests, to buildings and above all to climate. It may soon become politically necessary to incorporate these costs into the price of fossil fuels, which will have huge economic consequences. Even at the current price of energy, green buildings can be cost effective. Meanwhile the biggest carbon dioxide (CO2) abatement gains are to be realised in cutting demand, especially in buildings.

An increasingly popular way of servicing commercial and institutional buildings is by combined heat and power (CHP). It can be one of the more efficient ways of using energy. A typical distribution of total energy output from a CHP system is:

Electricity           25%
High grade heat       55%
Medium grade heat     10%
Low grade heat        10%
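The emission factors and the CHP output split quoted above combine naturally into a simple comparison of servicing options. The 1000 kWh fuel input and the 250 kWh comparison are illustrative figures, not from the text:

```python
# kg CO2 per kWh delivered, from the table above
EMISSION_FACTORS = {"electricity": 0.75, "coal": 0.31, "fuel oil": 0.28, "gas": 0.21}

# Typical CHP output distribution quoted in the text
CHP_SPLIT = {"electricity": 0.25, "high grade heat": 0.55,
             "medium grade heat": 0.10, "low grade heat": 0.10}

def chp_useful_outputs(fuel_input_kwh: float) -> dict:
    """Distribute a CHP fuel input across the quoted output categories."""
    return {k: fuel_input_kwh * f for k, f in CHP_SPLIT.items()}

def co2_kg(fuel: str, delivered_kwh: float) -> float:
    """CO2 emitted for a quantity of delivered energy from a given fuel."""
    return EMISSION_FACTORS[fuel] * delivered_kwh

out = chp_useful_outputs(1000.0)
print(out["electricity"], out["high grade heat"])  # 250.0 550.0
# Grid electricity versus gas for the same 250 kWh delivered:
print(co2_kg("electricity", 250.0), co2_kg("gas", 250.0))  # 187.5 52.5
```

The 0.75 versus 0.21 kg/kWh gap is the quantitative basis of the chapter's argument: displacing delivered electricity, whether by demand reduction or by on-site generation, yields the largest CO2 savings.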
This is called the 'energy balance' of CHP, and it is attractive chiefly because most of the energy of the fuel is useful. At present most CHP installations operate with gas or diesel reciprocating engines, or turbines for larger installations. However, even relatively small installations will soon be able to switch to gas fired micro-turbines. Later in the decade there will probably be a considerable rise in the use of fuel cells. This is the technology of the future.

The fuel cell
Fuel cells are electrochemical devices that generate DC electricity similar to batteries. Unlike batteries they take their energy from a continuous supply of fuel, usually hydrogen. The fuel cell is not an energy storage device but may be considered as an electrochemical internal combustion engine. It is a reactor which combines hydrogen and oxygen to produce electricity, heat and water. Thus its environmental credentials are impeccable. Fuel cells are efficient, clean and quiet, with no moving parts, and are ideal for combined heat and power application. At present, however, each installed kilowatt costs $3000 to $4000, whereas a combined cycle gas turbine system costs $400 per kilowatt. The reason for this cost difference is that the fuel cell uses platinum as a catalyst. However, experts think that the quantity of platinum can be cut by a factor of 5. There will also be considerable reductions as mass production begins to bite. At present there are five versions of fuel cell technology.

ARCHITECTURE IN A CLIMATE OF CHANGE

Proton exchange membrane fuel cell
Sometimes called the polymer electrolyte membrane fuel cell (PEMFC in either case), it is also referred to as the solid polymer fuel cell. The proton exchange membrane system is the most straightforward and serves to explain the basic principles of the fuel cell. It is one of the most common types of cell, being appropriate for both vehicle and static application. Of all the cells in production it has the lowest operating temperature of 80°C. The electrical efficiency of the PEMFC is 35 per cent, with a target of 45 per cent. Its energy density is 0.3 kW/kg compared with 1.0 kW/kg for internal combustion engines.

The cell consists of an anode and a cathode separated by an electrolyte, a solid polymer membrane, usually Teflon. Hydrogen is fed to the anode and an oxidant (oxygen from the air) to the cathode. Both the anode and cathode are coated with platinum, which acts as a catalyst. The catalyst on the anode causes the hydrogen to split into its constituent protons and electrons. The electrolyte membrane allows only protons to pass through to the cathode, setting up a charge separation in the process. The electrons pass through an external circuit, creating useful energy at around 0.7 volts, then recombining with protons at the cathode to produce water and heat (Figure 13.1). To build up a useful voltage, cells are stacked between conductive bi-polar plates, usually graphite, which have integral channels to allow the free flow of hydrogen and oxygen (Figure 13.2).

Figure 13.1 Basic structure and function of the proton exchange membrane fuel cell
Figure 13.2 Fuel cell stack
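The stacking arithmetic follows directly from the 0.7 volt per cell figure quoted above. A short sketch; the 48 V target and 100 A operating current are illustrative assumptions, not figures from the text:

```python
import math

CELL_VOLTAGE = 0.7  # volts per cell under load, as quoted above

def cells_for_voltage(target_volts):
    """Cells stacked in series to reach a target DC voltage."""
    return math.ceil(target_volts / CELL_VOLTAGE)

def stack_power_watts(n_cells, current_amps):
    """Stack output: the series voltage times the common cell current."""
    return n_cells * CELL_VOLTAGE * current_amps

n = cells_for_voltage(48)       # 69 cells for a nominal 48 V stack
p = stack_power_watts(n, 100)   # roughly 4.8 kW at 100 A
```

This is why practical stacks contain tens of bi-polar plates: a single cell is electrically almost useless on its own.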
One problem with the PEMFC is that it requires hydrogen of a high degree of purity, but this is expected to improve. Research activity is focusing on finding cheaper and more robust catalysts as well as more efficient ion exchange polymer electrolytes.

Phosphoric acid fuel cell (PAFC)
Similar to PEMFCs, this cell operates in the middle temperature range, at around 200°C. It employs a phosphoric acid proton conducting electrolyte and platinum or platinum–rhodium electrodes. The main difference from a PEMFC is that it uses a liquid electrolyte. This means it can tolerate some impurities. It potentially has a wide range of power outputs, from 2 to 1000 kW, and the system efficiency is currently in the 37–43 per cent range. PAFC units have been used experimentally in buses. However, it is likely that its future lies in stationary systems. The largest installation to date, for the Tokyo Electric Power Company, had an output of 11 megawatts – until it expired. This technology seems particularly popular in Japan, where electricity costs are high and dispersed generation is preferred. A 200 kW unit which uses sewage gas provides heat and power for Yokohama sewage works.

The New Scientist editorial referred to above predicts that 'Larger, static fuel cells will become attractive for hotels and sports centres, while power companies will use them as alternatives to extending the electricity grid.' An example of this is the police station in Central Park, New York, which found that installing a PAFC of 200 kW capacity was cheaper than a grid connection requiring new cables in the park (David Hart, op. cit.). One year after this prediction the Borough of Woking, Surrey, installed the first commercial PAFC fuel cell to operate in the UK. It also has a capacity of 200 kW and provides heat, cooling, light and dehumidification for the Pool in the Park recreation centre. The fuel cell forms part of Woking Park's larger combined heat and power system (see p. 242).

Solid oxide fuel cell (SOFC)
This is a cell suitable only for static application. It is a high temperature cell, running at between 800 and 1000°C and taking several hours to reach its operating temperature. The electrolyte is a ceramic which becomes conductive to oxygen ions at 800°C. In contrast to PEMFCs, the electrolyte conducts oxygen ions rather than hydrogen ions, which move from the cathode to the anode. Its great virtue is that it can run on a range of fuels, including natural gas and methanol, which can be reformed within the cell. Its high operating temperature also enables it to break down impurities, and it removes the need for noble metal catalysts such as platinum. SOFCs are often structured in a tubular rather than a planar form (as in the PEMFC) to reduce the chance of failure of the seals due to high temperature expansion. Air (oxygen) flows through a central tube whilst fuel flows round the outside of the structure (Figure 13.3).

Figure 13.3 Solid oxide fuel cell in its tubular configuration

According to David Hart of Imperial College, 'Solid oxide fuel cells are expected to have the widest range of applications. Smaller units could be used in houses. Large units should be useful in industry for generating electricity and heat.'

Alkaline fuel cells (AFC)
This fuel cell dates back to the 1940s and was the first to be fully developed, in the 1960s. It was used in the Apollo spacecraft programme. It employs an alkaline electrolyte such as potassium hydroxide set between nickel or precious metal electrodes. Its operating temperature is 60–80°C, which enables it to have a short warm-up time. However, its energy density is merely one tenth that of a PEMFC, which makes it much bulkier for a given output.

Molten carbonate fuel cell (MCFC)
This is a high temperature fuel cell operating at about 650°C. The electrolyte in this case is an alkaline mixture of lithium and potassium carbonates.
The electrolyte becomes liquid at 650°C and is supported by a ceramic matrix. The electrodes are both nickel based. The operation of the MCFC differs from that of other fuel cells in that it involves carbonate ion transfer across the electrolyte. This makes it tolerate both carbon monoxide and carbon dioxide. The cell can consume hydrocarbon fuels that are reformed into hydrogen within the cell. The steam and carbon dioxide it produces can be used to drive a turbine generator (cogeneration), which can raise the total efficiency to 80 per cent – up to twice that of a typical oil or gas fired plant. The MCFC can achieve an efficiency of 55 per cent. Consequently this technology could be ideal for urban power stations producing combined heat and power. The main disadvantage of the MCFC is that it uses as electrolytes highly corrosive molten salts that create both design and maintenance problems. Development programmes in Japan and the US have produced small prototype units in the 5–20 kW range. The Energy Research Corporation (ERC) of Danbury, Connecticut, has built a 2 megawatt unit for the municipality of Santa Clara, California, and that company is currently developing a 2.85 megawatt plant.

In March 2000 it was announced that researchers in the University of Pennsylvania in Philadelphia had developed a cell that could run directly off natural gas or methane, which, if successful, will make them attractive for domestic combined heat and power. The fuel did not have to be reformed to produce hydrogen. This innovative cell uses a copper and cerium oxide catalyst instead of nickel. Other fuel cells cannot run directly on hydrocarbons, which clog the catalyst within minutes. Research is concentrating on solutions to this problem.

The researchers consider that cars will be the main beneficiaries of the technology. Kevin Kendall, a chemist from the University of Keele, thinks differently. According to him, 'Millions of homeowners replace their gas-fired central heating systems in Europe every year. Within five years they could be installing a fuel cell that would run on natural gas. Every home could have a combined heat and power plant running off mains gas' (New Scientist, 18 March 2000).

Professor Tony Marmont, the initiator of the fuel cell in his West Beacon farm, considers a scenario whereby the fuel cell in a car would operate in conjunction with a home or office. He estimates that a car spends 96 per cent of its time stationary, so it would make sense to couple the car to a building to provide space and domestic hot water heat. The electricity generated would be sold to the grid. The car would be fuelled by a hydrogen grid. Ultimately hydrogen should be available 'on tap' through a piped network. Until that is available, a catalyser within the car would reform methanol or even natural gas from the mains to provide the hydrogen.

The US company Plug Power, which is linked to General Electric, is marketing a residential system as the 'GE HomeGen 7000' domestic fuel cell. The domestic-scale fuel cells will have built-in processing units to reform hydrocarbon fuels, and the whole system will occupy about the same space as a central heating boiler. In the US there are growing problems in some areas over the reliability of the power supply, and this is increasing the attractiveness of fuel cells. It could also relieve us of reliance on a national grid which, in many countries, is unreliable. The US Department of Energy plans to power two to four million households with hydrogen and fuel cells by 2010 and ten million by 2030. If the hydrogen is obtained from sewage, underground methane or water split by PV/wind electrolysis, then this programme will certainly be one to be emulated by all industrialised countries. In Portland, Oregon, hydrogen extracted from methane from a sewage works generates power sufficient to light 100 homes.

The reason for the intensification of research activity is the belief that the fuel cell is the energy technology of the future in that it meets a cluster of needs, not least the fact that it can be a genuine zero carbon dioxide energy source. There is little doubt that we are approaching the threshold of the hydrogen-based economy.
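The cogeneration arithmetic quoted for the MCFC (55 per cent electrical, 80 per cent overall) fixes how much of the rejected heat the turbine stage must capture. A hedged sketch of that relationship:

```python
def combined_efficiency(electrical_eff, heat_recovery_fraction):
    """Overall efficiency when a fraction of the rejected heat is
    recovered, e.g. by a turbine generator or for space heating."""
    rejected = 1.0 - electrical_eff
    return electrical_eff + rejected * heat_recovery_fraction

# The MCFC figures quoted above imply the recovery fraction needed:
recovery_needed = (0.80 - 0.55) / (1.0 - 0.55)   # about 0.56 of the rejected heat
```

So a little over half of the otherwise-wasted heat has to be put to use to reach the 80 per cent total, which is why cogeneration suits fixed urban plant far better than vehicles.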
The fuel cell will really come into its own when it is fuelled by hydrogen produced from renewable sources like solar cells. If tidal energy is exploited to its full potential there will be peak surpluses of electricity which could serve to create hydrogen via electrolysis. In the meantime, reforming natural gas, livestock waste, petrol, propane and other hydrocarbons to produce hydrogen would still result in massive reductions in carbon dioxide emissions and pollutants like oxides of sulphur and nitrogen. The same happens in California, where sewage from the Virgenes Municipal Water District in Calabasas reforms methane into hydrogen to supply a fuel cell that provides 90 per cent of the power needed to run the plant. If it were available to the grid it would power 300 homes.

Fuel cells reliant on renewable energy will be heavily dependent on an efficient electricity storage system. At present this is one of the main stumbling blocks to a pollution-free future. The first domestic scale fuel cell was installed in the experimental Self-Sufficient Solar House created by the Fraunhofer Institute for Solar Energy Systems in Freiburg in 1994. Its hydrogen was electrolysed from PVs on its roof and stored in an outside tank (Figure 5.6).

Access to energy is the main factor which divides the rich from the poor throughout the world. Perhaps the greatest beneficiaries will initially be rural communities in developing countries who could never hope to get access to a grid supply. A fuel cell powered by hydrogen electrolysed from wind, solar-electric or small-scale hydroelectric sources could be the ultimate answer to this unacceptable inequality.
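The link between surplus renewable electricity and stored hydrogen can be sketched roughly. The hydrogen heating value below is a standard published figure and the 70 per cent electrolyser efficiency is an illustrative assumption; neither comes from this chapter:

```python
H2_LHV_KWH_PER_KG = 33.3  # lower heating value of hydrogen, a standard figure

def hydrogen_from_surplus_kg(surplus_kwh, electrolyser_eff=0.7):
    """Mass of hydrogen (kg) produced by electrolysis from surplus
    renewable electricity, at an assumed electrolyser efficiency."""
    return surplus_kwh * electrolyser_eff / H2_LHV_KWH_PER_KG

# 1000 kWh of surplus tidal or PV electricity yields roughly 21 kg of hydrogen
kg_h2 = hydrogen_from_surplus_kg(1000)
```

On these assumptions roughly 30 per cent of the surplus electricity is lost in electrolysis before the fuel cell's own conversion losses, which is why storage efficiency is described above as a main stumbling block.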
The main barrier to the widespread adoption of fuel cells is the cost. The US Department of Energy estimates that the current cost of a fuel cell is ~$3000 per kilowatt. One route to cost reduction is simplified manufacture: a complete fuel cell stack would be made in a single process, and production costs will be considerably reduced due to a patented one-stop manufacturing process, with domestic-scale fuel cells planned for the market in 2005.

Storage techniques – electricity

Hydrogen storage
Hydrogen has an image problem thanks to regular replays of the Hindenburg disaster. The traditional storage method is to contain it in pressurised tanks (see Freiburg House, Figure 16). Up to 50 litres can be stored at 200 to 250 bar; larger-scale operations need pressures of 500–600 bar. It can be liquefied, and in this form it has a high energy to mass ratio – three times better than petrol – but this requires cooling to −253°C, which is highly energy intensive, as well as heavily insulated tanks. Bonded hydrogen is one of the more favoured options. Metal hydrides such as FeTi compounds store hydrogen by bonding it chemically to the surface of the material. The metal is charged by injecting hydrogen at high pressure into a container filled with small particles.

Flywheel technology
The use of flywheel technology to store energy has been pioneered in vehicles. Braking energy is used to power a flywheel which then supplements acceleration energy. The development thrust, however, has come from space technology. The flywheel is made to rotate by electromagnetic induction to a speed of 3600 revolutions per minute, which represents an energy storage capacity of 10 000 watt hours. Energy can be drawn off by the permanent magnets in the disc inducing an electric current in a coil. If situated in a vacuum, the energy loss over a 24 hour period would be negligible (New Scientist, 13 July 1991, p. 28). Its true potential lies in the storage of energy over a much longer term and in larger quantities as friction problems are overcome. An experimental project is under way on the Isle of Islay, which is also the site of the pioneer wave energy projects (Chapter 3).
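The figures quoted (10 000 watt hours at 3600 rpm) imply a substantial rotor. From the kinetic energy relation E = ½Iω², a rough sketch of the required moment of inertia:

```python
import math

def flywheel_inertia_kg_m2(energy_wh, rpm):
    """Moment of inertia needed to store a given energy at a given speed,
    rearranged from E = 0.5 * I * omega**2."""
    energy_j = energy_wh * 3600.0           # watt hours to joules
    omega = rpm * 2.0 * math.pi / 60.0      # rpm to rad/s
    return 2.0 * energy_j / omega ** 2

# The unit described above: 10 000 Wh at 3600 rpm needs roughly 507 kg·m²
inertia = flywheel_inertia_kg_m2(10_000, 3600)
```

Because stored energy scales with the square of speed, the levitating high-speed designs mentioned below can store the same energy in a far lighter disc.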
The Japanese are taking the technology further by developing a levitating flywheel using high temperature superconducting ceramics to repel magnetic fields.

In metal hydride storage, the hydrogen bonds with the material, producing heat in the process; the hydrogen is released as the heat and pressure dissipate.

A UK firm, ITM Power of Cambridge, is claiming that it should be able to reduce the cost of fuel cells to ~$100/kW by developing a simplified fuel cell architecture based on a patented unique family of ionically conducting polymers which are cheap to produce. In the opinion of the Royal Commission on Environmental Pollution, hydrogen and regenerative fuel cells will be in widespread operation by the middle of the century.

Regenerative fuel cell
A technology which is about to receive its first large-scale demonstration is based on a technology called 'Regenesys'. It converts electrical energy to chemical energy, which is reversible, and it is capable of storing massive amounts of electricity. National Power in the UK is constructing a 360 GJ installation with a rated power output of 15 MW which will feed directly into the grid. If global warming and security of energy supply issues simultaneously become critical, then viable large-scale storage technologies will arrive much sooner.

Photovoltaic applications
Commercial buildings have perhaps the greatest potential for PV cells to be integrated into their glazing as well as being roof mounted. The main advantage of commercial application is that offices use most of their energy during daylight hours. Given the abundance of information and advice available, designers should now be able to grasp the opportunities offered by such technologies, which also allow exploration of a range of new aesthetic options for the building envelope. The most extensive use of PV technology has been in the commercial and institutional sector. A pioneer example is the Northumberland Building for the University of Northumbria in Newcastle, where cells have been applied to the spandrels beneath the continuous windows (see Smith, P. and Pitts, A.C. (1997) Concepts in Practice: Energy, Batsford). More recently the technology has been incorporated into an atrium roof at Nottingham University's Jubilee Campus (Figure 13.4). Reference was made earlier to the solar offices at Doxford, with its complete southerly facade supporting 400 000 PV cells. The case study of the Zicer building in the University of East Anglia will serve as a further example (Chapter 18). One of the challenges of the next decades will be to retrofit buildings with PVs.

Figure 13.4 Photovoltaic cells, Jubilee Campus, Nottingham University

However, much more ambitious PV programmes have been carried out on the continent. In Chapter 11 the example of the Mont Cenis training centre in Germany was cited as an ambitious use of PVs. It is a multi-use complex, principally an Academy for Further Education, a hotel, offices and a library. These are contained within a glazed envelope 180 m by 72 m and 16 m high. Of the 12 000 m² of roof, 10 000 m² are devoted to PV cells, producing more than twice the energy demand of the building (Figure 13.5).

Ove Arup and Partners estimate that one third of the electricity needed to run an office complex could come from PVs with only a 2 per cent addition to the building cost. Before these developments had occurred, Hermann Scheer calculated that Germany's aggregate demand of 500 TWh/year could be met by installing PVs on 10 per cent of roofs, facades and motorway sound barriers (The Solar Economy, Earthscan 1999).

This is a technology which is seen to have enormous potential and is therefore attracting considerable research effort. The Sunpower Corporation is manufacturing a solar cell which achieves an efficiency of over 20 per cent, as verified by the US National Renewable Energy Laboratory. This laboratory has also verified the bench efficiency of 36.9 per cent achieved by Spectrolab's Improved Triple Junction solar cell. Efficiencies of over 40 per cent are confidently predicted. The PV market is growing dramatically – 43.8 per cent in 2002, with most going to grid connected supply in Japan, Germany and California. As economies of scale also bring down costs, the impact on the electricity market could be dramatic, with the potential for every home to become a micro-power station.

Heat pumps
Heat pumps are an offshoot of refrigeration technology and are capable of providing both heat and cooling. This is another technology which goes back a long way but which is only now realising its potential as a technology for the future. They exploit the principle that certain chemicals absorb heat when they evaporate into a gas and release heat when they are condensed into a liquid.
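The principle just described can be put in first-law terms: the heat delivered is the electrical work input plus the heat the refrigerant transports from the external medium. A minimal sketch; the 2.5 kWh extracted per kWh of electricity is an illustrative figure, not the book's:

```python
def heat_delivered_kwh(electricity_kwh, heat_extracted_kwh):
    """A heat pump delivers the work input plus the heat it transports
    from the external medium (earth, air or water)."""
    return electricity_kwh + heat_extracted_kwh

# If each kWh of electricity moves 2.5 kWh from the ground,
# 3.5 kWh arrives as useful heat - far better than direct electric heating,
# where 1 kWh in gives only 1 kWh out.
delivered = heat_delivered_kwh(1.0, 2.5)
```

This is why a heat pump is described as transporting rather than creating heat: the "free" term in the sum is the heat drawn from outside.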
Figure 13.5 PV roof over the Mont Cenis complex, Herne Sodingen, Germany

There are several different refrigerants that can be used for space heating and cooling, with widely varying global warming potential (GWP). Refrigerants which have an ozone depleting potential are now banned. Currently, refrigerants which have virtually zero GWP on release include ammonia, which is one of the most prevalent.

The most efficient is the ground source heat pump (GSHP), which originated in the 1940s. It exploits the stable temperature of the earth for both heating and cooling. The principle of the GSHP is that it does not create heat; it transports it from one area to another. The heating and cooling capacity of the refrigerant is enhanced by the extraction of warmth or cooling from an external medium – earth, air or water. The main benefit of this technology is that it uses up to 50 per cent less electricity than conventional electrical heating or cooling. At present ground coupled heat pumps have a coefficient of performance (COP) between 3 and 4, which means that for every kilowatt of electricity they produce 3 to 4 kilowatts of useful heat. In the near future a COP of 6 is likely. The theoretical ultimate COP for heat pumps is 14.

Most ground coupled heat pumps adopt the closed loop system, whereby a high density polyethylene pipe filled with a mix of water and antifreeze, which acts as a heat transporter, is buried in the ground. It is laid either in a U configuration vertically or in a loop horizontally. The vertical pipes descend up to a 100 m depth; the horizontal loop is laid at a minimum of 2 m depth, with the supply pipe run underground from the building and coiled into circles at least 2 m below the surface. The horizontal type is most common in residential situations, where there is usually adequate open space, and because it incurs a much lower excavation cost than the alternative. The only problem is that, even at a 2 m depth, the circuit can be affected by solar gain or rainfall evaporation. Usually the lowest cost option is to use water in a pond, lake or river as the heat transfer medium. In each case the presence of moving ground water improves performance.

Heat pumps have been compared to rechargeable batteries that are permanently connected to a trickle charger. The battery is the ground loop array, which has to be large enough to meet the heating/cooling load of a building. The energy trickle comes from the surrounding land, which recharges the volume of ground immediately surrounding the loop. If the energy removed from the ground exceeds the ground's regeneration capacity, the system ceases to function, so it is essential that demand is matched to the ground capacity (from Dr Robin Curtis, GeoScience Ltd).

Pencoys Primary School in Cornwall is an example of a PFI project by W.S. Atkins which supplements its energy with GS heat pumps. The system has 15 shafts sunk to a depth of 45 m, and a coefficient of performance of 4. The heat pumps produce water at 45–50°C, which is stored in two 700 litre insulated buffer tanks. They operate mainly at night, using off-peak electricity to minimise costs. The stored heat, together with internal heat gains and the thermal mass of the building, provides space heating for most of the time. A secondary circuit serves to provide underfloor heating at about 50°C. In really cold weather immersion heaters in the storage vessels boost heat output. The system was designed by GeoScience. At current energy prices the system is more expensive to run than a conventional boiler installation. However, gas prices will probably continue to rise due to security of supply problems. This, plus the climate change levy, will enable the system to overtake a standard boiler option in economy of running costs in the near future.

GeoScience was also involved in the design of one of the first business parks in the UK to exploit this technology, namely the Tolvaddon Energy Park in Cornwall, which exploits geothermal energy with 19 heat pumps that pump water around boreholes to a depth of 70 metres. This project was only made viable because of support from the Regional Development Agency (RDA) for the South West, which required that this business park should be a demonstration of heat pump technology.
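The running-cost comparison with a boiler, and the theoretical COP ceiling, can both be sketched briefly. The boiler efficiency of 85 per cent is an assumed figure; the source and delivery temperatures are taken from the ground and underfloor values quoted above:

```python
def carnot_cop_heating(t_source_c, t_delivery_c):
    """Theoretical upper bound on heating COP between two temperatures."""
    t_hot = t_delivery_c + 273.15
    t_cold = t_source_c + 273.15
    return t_hot / (t_hot - t_cold)

def breakeven_price_ratio(cop, boiler_efficiency):
    """Electricity-to-gas price ratio below which a heat pump is cheaper
    to run than a boiler meeting the same heat demand."""
    return cop / boiler_efficiency

# Ground at 12 C delivering at 50 C: the ideal limit is about 8.5,
# against the practical COP of 3-4 quoted above.
ideal = carnot_cop_heating(12, 50)

# With the COP of 4 quoted for Pencoys and an assumed 85 per cent boiler,
# the heat pump wins on running cost only while electricity costs less
# than about 4.7 times the gas price.
ratio = breakeven_price_ratio(4.0, 0.85)
```

The breakeven ratio is the arithmetic behind the text's observation that rising gas prices and the climate change levy will tip the balance towards the heat pump. Note that the book's "theoretical ultimate COP" of 14 corresponds to a smaller temperature lift than the 38 K used here.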
Where buildings require piled foundations, an economical option is to integrate ground source heat pumps into the foundations, as demonstrated by the 'Building of the Future', Primrose Hill, London, by Richard Paxton Architects (Figure 13.6). This is a mixed office and residential development totalling 1000 m². The GS heat pumps utilise four plastic pipe loops connected to the reinforcement steel of the piles. These supply both heating and cooling to the floors depending on the season. A secondary coil is positioned in the roof and linked to the central manifold. This supplements the heating when necessary but also serves as a night cooling system by dumping heat in summer. A gas boiler and evaporative (adiabatic) mechanical cooling act as backup to the heat pumps. In addition, PVs on the roof meet most of the electricity needs of the building. In all, it is expected that energy costs compared with conventional heating and cooling will be reduced by about 30 per cent.

Figure 13.6 Building of the Future, Primrose Hill, London

Energy storage – heating and cooling

Sources of natural energy are intermittent. To obtain continuous flows of energy using such sources therefore requires systems of storage. As indicated earlier, this is not a new concept since, in the Middle Ages, tide mills stored water at high tide in order to release it at an appropriate rate to turn the water wheel during the ebb tide. The storage potential of energy is available for three purposes: heating, cooling and the storage of electricity.

Energy storage offers an efficiency and cost gain in two respects. First, in buildings that optimise solar gain, surplus solar energy can be used to charge a storage facility to be used later for space heating. Second, storage can help to flatten the peaks of electricity costs by charging the store with off-peak electricity and using the stored power to reduce demand at peak periods.

Heat storage
The most straightforward method of storage is by means of a network of pipes carrying solar heated air through a reasonably dense medium such as bricks, concrete blocks or water. Alternatively, the medium may be the earth beneath a building. If sufficient space is available below a building, enough heat can be stored to supplement space heating through the whole of the heating season, hence the term 'seasonal storage'. At its crudest, the building fabric can be a significant energy storage system on the basis of thermal mass. Heat absorbed by the structure flattens the peaks and troughs of temperature. Exposed concrete floors have been cited as an efficient storage medium for convective and radiative heat transfer. It is worth noting again that it is the outer 100 mm of the fabric which comprises the effective thermal mass.

More sophisticated is the use of a phase change material such as sodium sulphate, which works on the principle of the latent heat of fusion. Called eutectic or 'Glauber's' salts, this medium turns from solid to liquid at around 30°C and then gives off heat as it solidifies. Off-peak or PV derived electricity may be used as the heating element. The storage container is heavily insulated.

Cool storage
As the automatic inclusion of full air conditioning is increasingly being questioned, the problem of space cooling enters a new dimension. Again the principle is to use spare energy, off-peak or PV electricity, to refrigerate a medium. A more practicable method is to use phase change and the latent heat of fusion, as above, to provide high density storage. One option, called the STL storage system, comprises a storage vessel containing spherical polyethylene nodules filled with a solution of eutectic salts and hydrates. The system may be given a lift in efficiency by the use of heat pumps, which provide either cooling or warmth on the principle of a refrigerator. This system is ideal in situations where there is cyclic demand, since it facilitates cooling (or heating) when energy costs are at their lowest or a plant is shut down. In conjunction with air conditioning, this system can result in a dramatic lowering of the required capacity of the chiller unit.
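The advantage of latent over sensible storage can be illustrated numerically. The material properties below are typical published values, not figures from this chapter:

```python
# Illustrative material properties (typical published values)
WATER_CP_KJ_PER_KG_K = 4.19          # specific heat of water
GLAUBER_LATENT_KJ_PER_KG = 250.0     # approximate latent heat of fusion

def sensible_storage_kwh(mass_kg, delta_t_k):
    """Heat stored in water over a temperature swing."""
    return mass_kg * WATER_CP_KJ_PER_KG_K * delta_t_k / 3600.0

def latent_storage_kwh(mass_kg):
    """Heat stored by a phase change material at its melting point."""
    return mass_kg * GLAUBER_LATENT_KJ_PER_KG / 3600.0

# 1000 kg of water over a 10 K swing versus 1000 kg of salt melting near 30 C:
water = sensible_storage_kwh(1000, 10)   # about 11.6 kWh
salt = latent_storage_kwh(1000)          # about 69.4 kWh
```

On these assumptions the phase change store holds roughly six times the heat of the same mass of water over a modest temperature swing, and it delivers that heat at a near-constant, usefully low temperature.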
The effectiveness of the underside of the floor is negated by suspended ceilings. However, a compromise solution is perforated tiles, which have an open area of 42 per cent, sufficient to allow 91.6 per cent overall heat transfer whilst concealing services.

Seasonal energy storage
A marriage between solar energy and the thermal constancy of the ground offers an opportunity to make significant reductions in both the heating and cooling loads generated by buildings. Known as 'aquifer storage', the principle is that, in summer, buildings absorb considerable amounts of surplus heat which can either be vented to the atmosphere or used to provide a reservoir of warmth for the winter. The energy storage system comprises two wells drilled into the water table below the building, one warm, the other cold. The system relies on the fact that ground water is a constant (10–12°C in the UK). In summer, water from the cold well is pumped into the building and, via a heat exchanger, cools the ventilation system. As it passes through the building it absorbs heat, ending up at around 15–20°C. It is then returned to the warm well. In winter the system is reversed and warm water heats the ventilation air. It loses heat to the building and returns to the cold well at about 8°C, to be stored for summer cooling (Figure 13.7). This system should be distinguished from the tanked seasonal storage at Friedrichshafen described earlier, which is fed by solar thermal panels.

Figure 13.7 Principles of seasonal storage (courtesy of CADDET): basic functioning of energy storage in aquifers, showing the summer and winter modes with heat exchangers linking the cold and warm wells in the aquifer.
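The well temperatures quoted allow a rough estimate of cooling capacity from the heat transfer relation Q = m·cp·ΔT. The 5 kg/s flow rate is an illustrative assumption:

```python
WATER_CP_J_PER_KG_K = 4190.0  # specific heat of water

def cooling_power_kw(flow_kg_per_s, t_supply_c, t_return_c):
    """Cooling delivered by well water: mass flow times specific heat
    times the temperature rise across the building."""
    return flow_kg_per_s * WATER_CP_J_PER_KG_K * (t_return_c - t_supply_c) / 1000.0

# Cold-well water at 10 C returning at 17 C (within the 15-20 C band
# quoted above) at 5 kg/s delivers roughly 147 kW of cooling.
q = cooling_power_kw(5.0, 10.0, 17.0)
```

The modest flow rate needed for a substantial cooling load is what makes aquifer schemes attractive: the "chiller" is simply the constant-temperature ground water.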
The Netherlands are leading the way in this technology, with 19 projects completed or under way with a projected annual primary energy saving of 1.5 million cubic metres of natural gas equivalent. Recent buildings to benefit from this technology include the Reichstag in Berlin and the city hall and Schiphol Airport offices in The Hague. In the case of the Reichstag, surplus heat is stored interseasonally in a natural aquifer 400 m below ground, while aquifers at 40 m depth are used for cooling. Also there are two 75 m deep boreholes, one absorbing heat from refrigeration equipment, the other providing ground cooling. The Sainsbury supermarket at Greenwich Peninsula, completed in September 1999, employs earth sheltered walls to regulate temperature on the sales floor. Ventilation air is passed through underground ducts to maintain cooling.

Electricity storage

Batteries
Battery technology is still the most common method of storage, but the promised breakthrough in this technology has yet to materialise. Still in general use is the traditional lead acid battery, which is heavy, expensive and of limited life. Lighter but more expensive are nickel–cadmium batteries, which have the advantage of rapid charging achieved by low internal resistance. Even the ground-breaking Freiburg zero electric house relied on lead acid batteries for its fall-back position. The PV hydrogen fuel cell copes with most of the year (Figure 13.8).

Figure 13.8 Banks of lead acid batteries in the Freiburg Solar House
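The weight penalty of lead acid storage can be sketched from typical specific energies. The figures below are illustrative textbook values, not the book's, as is the 8 kWh daily storage target:

```python
# Typical specific energies in Wh per kg (illustrative values)
SPECIFIC_ENERGY_WH_PER_KG = {
    "lead acid": 35.0,
    "nickel-cadmium": 50.0,
}

def battery_mass_kg(storage_kwh, chemistry):
    """Battery mass needed to hold a given energy."""
    return storage_kwh * 1000.0 / SPECIFIC_ENERGY_WH_PER_KG[chemistry]

# Storing one day's output from a small domestic PV array (say 8 kWh)
# in lead acid cells needs roughly 230 kg of battery.
mass = battery_mass_kg(8, "lead acid")
```

Because vehicle range and building storage capacity both scale almost linearly with specific energy, even a doubling of Wh/kg transforms what a battery bank can usefully do, which is the significance of the improved chemistries discussed next.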
The car industry is particularly interested in this form of storage. One of the most promising batteries is the Ovonic nickel–metal hydride battery. It can be discharged and recharged up to 10 000 times. As an indication of its efficiency, a lead acid battery could give a vehicle a range of 190 kilometres; the Ovonic battery would raise this to 480 kilometres. This order of improvement makes it an attractive storage proposition for buildings employing PV generation (Ball, P. (1997) Made to Measure, Princeton, p. 258). There are, of course, serious environmental hazards associated with cadmium.

Building management systems

Digital control mechanisms and the availability of system controls to operate them have developed rapidly since the 1970s. The incorporation of computers, modern multiple-parameter optimisation techniques and intelligent control have enhanced the opportunity to provide very sophisticated environmental control systems in buildings. BMS/BEMS are generally designed to operate to control heating, lighting, ventilation and air conditioning systems in terms of engineering the status of the internal environment. They can also be used to control more passive features such as window opening and shading device position. Often the environmental data collection and control system is incorporated within an overall building management system (BMS) which also deals with communication networks, security, fire protection, lift operation, occupancy related scheduling and a number of other functions. The portion of the system dealing with energy is the building energy management system (BEMS), which may in some circumstances operate autonomously. The location of the control system need not be on-site and the supervision of the system may well be located centrally for multiple building complexes or for a series of similar buildings in outlying areas. Frequently, the system is under the control of a facilities manager.

The incorporation of whole building systems control has often been accompanied by centralisation of the decision-making power. As a result, the ability of the occupants to influence their own environment has been degraded, though the building as a whole has the potential to optimise its energy and environmental performance to achieve some centrally defined goal. This has left people with the feeling that if they do not match the typical occupant profile, they have to accept the discomfort. Further, there is evidence to show that occupants are more tolerant of less than perfect environmental conditions if they have some control over their immediate environment. Poor occupant satisfaction with environmental control systems has also been associated with complaints of sick building syndrome. Further difficulties can arise if the facilities manager lacks either the time or the expertise to understand the complexities of the BMS analyses and thus does not appreciate the subtleties of the system or how the system can be fine tuned to create maximum energy savings. The combination of inadequate user control or central management can negate the benefits of the whole system. Overcomplexity also affects the occupants, who may revert to the option requiring least effort, which may also be the least energy efficient. Control of environmental conditions inside buildings is certainly of crucial importance in reducing energy consumption as well as affecting the well-being and efficiency of the occupants.

Tools for environmental design

The three main categories of passive solar design, along with their subdivisions, were described earlier; similar principles have been analysed with reference to commercial developments. In the non-domestic sector the benefits of, and problems associated with, solar radiation are summarised in Baker, N.V. (2000) Energy and Environment in Non-Domestic Buildings, The Martin Centre, University of Cambridge/Cambridge Architectural Research Ltd.

One assessment method that addresses this sector is the Lighting and Thermal Value of Glazing Method – the 'LT Method'. The LT Method, developed in the UK, permits a straightforward prediction of likely energy use for lighting, heating and (if specified) cooling services on an annual basis; it has so far been developed for use in the European climate. This method reduces a building to an orthogonal plan with core and perimeter zones. The perimeter zone is that which is subject to significant external climatic influences on its lighting, heating and cooling requirements. The perimeter zones are classified by orientation and depth, and are defined as passive zones. Such an approach, whilst being somewhat simplistic, does provide a quick guide to energy consumption by indicating optimum window size and orientation at the initial design stage. The technique gives annual comparisons and is relatively quick and easy to use. It is therefore valuable in determining the basic plan form. A number of variations of this method now exist to deal with a variety of building types. The system is described in Baker, N.V. (2000), cited above.

For more complex analysis a number of programme suites now exist. The one which has been adopted as the European Reference Model is the Environmental Systems Performance Model produced by Integrated Environmental Solutions (IES), which can be linked to Autocad. This model is perhaps more appropriate at a post-graduate level. At the time of writing one of the most sophisticated and comprehensive computer modelling systems also comes from IES (www.ies4d.com). Its programs facilitate a full dynamic thermal modelling of a building and consequent energy consumption (APACHE-calc and APACHE-sim). Earlier, reference was made to its 'Suncast' program, which generates shadows from any sun position. It has the advantage that its programs are graded in complexity and so can be introduced at undergraduate level.

As a postscript to this chapter it is useful to summarise the conclusions of a report by Arup, referred to earlier, on building performance in the context of climate change up to 2080 (Report by Arup Research and Development for the DTI's Partners in Innovation Programme 2004, report on offices). The datum for the research was the weather in 1989, which was extrapolated to 2020, 2050 and 2080 using data from the UK Climate Impacts Programme (UKCIP). The research used a median of four UKCIP climate change scenarios based on progress in abating CO2 emissions. It predicts that temperatures in the south of England could be up to 8°C warmer by 2080, reaching 40°C. That is 3 degrees hotter than the July average for a street in Cairo. Natural ventilation is employed in 70 per cent of UK offices. The report predicts that a 1960s office that is naturally ventilated will be unusable between June and August by 2080, with internal temperatures reaching 39°C. Above 28°C occupants experience increasing discomfort. This raises questions about current recommendations regarding night cooling of offices as the external air temperatures get hotter. It suggests that even offices which are air conditioned to current climate extremes will be inadequate. The report makes the sobering remark that even the BRE low energy office in Watford fails to meet BRE's own benchmark for comfort from 2020 onwards. The report recommends that air conditioning systems should be driven by renewable electricity – PVs etc. 'Air conditioning needs to be combined with passive cooling systems to provide a greener and more cost-effective solution' (Jake Hacker, project leader for the research). Such mixed mode solutions are the way for the future.

Chapter Fourteen
Lighting – designing for daylight

As one of the largest energy sinks for commercial and industrial buildings, lighting justifies special treatment. Lighting is important because of the influence it has over occupant experience, especially since certain forms of artificial lighting have been implicated as the source of health problems. Energy efficient buildings should make as much beneficial use of naturally available light as possible. Another factor is that occupants tend to prefer natural light: occupants are more accepting of variable illumination when daylight is the light source, and natural light produces a true colour rendering. Furthermore, principal factors influencing levels of daylight include the reflectivity of surrounding surfaces and obstructions to light admission (e.g. nearby buildings). It will be some time before we realise the revolution in lighting promised by developments in light emitting diodes. The use of windows and plan form of buildings was very much influenced by the limits of natural light admission; deep plans lit wholly artificially carried the added psychological penalty of reducing access to daylight and external views.
It is only relatively recently that the importance of these benefits has been acknowledged. One reason for this is that lighting is often the largest single item of energy cost; with buildings becoming increasingly energy efficient in terms of space heating, the lighting load becomes of greater significance. Until about 50 years ago the deep plan office was impractical; the development of the fluorescent tube lamp made it a feasible proposition, but at the expense of noise pollution and frequency band discomfort, particularly in open plan offices. Current wisdom has it that office design should optimise natural lighting. Further factors influencing levels of daylight are the orientation of windows and the angle of tilt of windows. Factors which relate to the exploitation of daylight include the fact that windows provide external views and time orientation for occupants, especially in open plan offices.

Design considerations

In order to achieve successful daylighting design, the following aspects should be considered:

● The amount of glazing has a clear influence on the amount of daylight available, but more window area is not always better; it may simply increase contrast.
● Large windows admit light but also provide heat gain and heat loss routes and thus potential thermal discomfort, especially from cold draughts near the windows.
● The amount of sky which can be seen from the interior is a critical factor in determining satisfactory daylighting.
● External obstructions/buildings which subtend an angle of less than 25° to the horizontal will not usually exclude use of natural daylight. If there are many external obstructions the room depth should be reduced.
● Daylight normally penetrates about 4–6 m from the window into the room.
● High window heads permit higher lighting input as more sky is visible. Adequate daylight levels can be achieved up to a depth of about 2.5 times the window head height.
● Where single sided daylighting is proposed, the following formula gives a limiting depth (L) to the room:

L/W + L/H ≤ 2/(1 − Rb)

where L = room depth (m), W = room width (m), H = height of top of window (m) and Rb = average reflectance of internal surfaces (Adrian Pitts in Smith, P.C. and Pitts, A. (1997) Concepts in Practice – Energy, Batsford).
● Rooflights give a wider and more even distribution of light but also permit heat gains which may cause overheating. Generally rooflights provide about three times the benefit of an equivalently sized vertical window. Rooflight spacing should be one to one-and-a-half times the ceiling height.
● Allocation of rooms to facades should be appropriate to the activity – to do this successfully will require consideration of the issues at the building planning stage.
● However, it would be unusual to expect to supply all lighting requirements using daylight in non-domestic buildings.

Examples

One of the most dramatic techniques for channelling daylight into the deep interior of a building has been devised by Foster Associates for the Reichstag building. Initially Norman Foster opposed the idea of reinstating a dome since this was emblematic of an era best forgotten. However, he yielded to pressure and used the dome as an opportunity to create something dramatic. The original design was for an all encompassing canopy, but this proved much too expensive. It is effectively a double dome, with the lower portion sealed from the upper space (echoes of Wren at St Paul's Cathedral). The spectacular feature is the cone designed by Claude Engel, which is sheathed in 360 mirrors that reflect daylight into the lower chamber.

Figure 14.1 Reflective cone in the Reichstag
The upper cupola is a public space which permits views into the chamber. Sun-tracking shading prevents direct sunlight from reaching the chamber. The cone houses air extract and heat exchange equipment. The motorised shading and the heat exchange equipment is powered by photovoltaics (Figure 14.1).

Two further rules of thumb apply. In non-domestic buildings, the window area should be about 20 per cent of the floor area to provide sufficient light to a depth of about 1.5 times the height of the room. Internal reflectances should be kept as high as possible.

The atrium

The atrium has become an almost universal feature of commercial buildings. There is no doubt that much of the appeal of atria lies in their aesthetic attributes. However, they have a practical justification by creating opportunities for introducing natural light and ventilation often deep into a building. Occasionally the incorporation of an atrium can transform existing buildings, as in the case of the city campus of Sheffield Hallam University (Figure 14.2). There are several factors to consider:

● The structure of the atrium roof can reduce its transparency by between 20 and 50 per cent. This is an important factor if the ground level is meant to be predominantly naturally lit.
● The shape and form of the atrium also has an important effect on the availability of natural lighting in the spaces adjacent to the atrium. Access to natural light will be improved significantly if the sides of the atrium are stepped outwards.
● The surface finish in respect of colour and reflectance of the atrium walls will influence the level of daylight reaching the lower floors.
● The offices enclosing the atrium will benefit from a measure of natural light as well as external views.

Figure 14.2 Atrium between existing buildings, Sheffield Hallam University

Light shelves

Light shelves have been in use for some time and serve the dual purpose of providing shade and reflected light, and may be fitted either internally or externally or in combination. Sunlight is reflected from the upper surface of the light shelf into the room interior and particularly onto the ceiling, where it provides additional diffuse light, thus helping to provide uniform illumination. In this context ceilings are usually designed to be higher than normal for best operation (Figure 14.3). Some degree of control is possible by modifying the angle of the light shelf. Portcullis House illustrated this feature with a level of sophistication involving a corrugated reflective surface to maximise high altitude reflection whilst rejecting low altitude short wave solar radiation. Under conditions of an overcast sky, however, light shelves cannot increase the lighting level. Problems with low angle winter sunlight penetration can give rise to glare. Difficulties can be experienced in cleaning the light shelves, especially the external type.

Figure 14.3 Basic principle of the light shelf

Light pipes

Light pipes gather incoming sunlight, sometimes using a solar tracking system. The light is concentrated using lenses or mirrors and is then transmitted to building interiors by 'pipes'. The pipes can be hollow shafts or ducts with reflective internal finishes, or may use fibre optic cable technology. A special luminaire is required to provide distribution of the light within the building. They operate most effectively in sunlight; the system is heavily reliant on the availability of sunlight and for critical tasks or areas a backup artificial light source is required. Examples of the technology are to be found in the roof of the concourse at Manchester Airport and the experimental low energy house at Nottingham University (Figure 14.4).

Figure 14.4 Section through a sunpipe

Prismatic glazing

Whilst the systems so far discussed rely on the reflection of light, prismatic glazing operates by refracting incoming light. The system consists of a panel of linear prisms (triangular wedges) which refract and spread the incoming light to produce a more diffuse distribution. This almost doubles the daylight levels in north facing rooms. Glare can be somewhat reduced too. The view out is substantially restricted, but the system can be used as an alternative to the reflective louvre system without some of its drawbacks. Maintenance is virtually eliminated if the system is installed between the panes of double glazed units.

Holographic glazing

Holographic glazing is still under development but potentially offers advantages over prismatic glazing. A diffraction process is also used, but in this case the light output can be more finely tuned to produce particular internal light patterns. There are some limitations set by the angle of incoming light to which the holographic pattern is tuned.

Solar shading

In considering climate facades, solar shading featured as an integral element in the triple glazing. More common are external shading devices, which are confined to the southerly elevation. More recently the Wessex Water building features some of the most complex solar shading devices yet encountered (Figure 14.5). The Millennium Galleries, opened in Sheffield in 2001, have some of the most elaborate solar shading, which can be rotated through 90° to achieve levels of solar exclusion up to total internal blackout (Figure 14.6).

Figure 14.5 Solar shading, Wessex Water Divisional Headquarters

Figure 14.6 Variable solar shading, Millennium Galleries, Sheffield

Chapter Fifteen
Lighting – and human failings

Artificial lighting is a major factor in deciding the quality of the internal environment of offices. It is also a serious contributor to carbon dioxide (CO2) emissions, accounting in the US for up to 30 per cent of total electricity use (Scientific American, March 2001). For these reasons it is a subject that warrants special attention.

Design studies suggest that considerable energy savings can be made by maximising natural light. Passive solar studies claim that efficient and well-controlled lighting would reduce energy/carbon dioxide costs by more than any other single item. Even in terms of capital cost alone energy efficiency can make savings. For example, high frequency lighting, good reflecting luminaires and infra-red controls can save money because fewer fittings are required, with lower heat production. At the same time there is the chance to install fewer switch drops, reducing cabling and simplifying fitting-out. Yet there is still reluctance to accept any additional capital cost to achieve sustainable design despite the prospect of significant revenue savings.

When the original research into alternatives to the permanently artificially lit office space was carried out, work in offices was largely paper based. Changes in office design and work routines have caused a reappraisal of the maximisation philosophy. Post-occupancy analysis has thrown some doubt on these assumptions (Bordass, W.); recent occupancy studies have shown that artificial lights are left on much more than predicted. At the same time, a build-up of user appraisal has shown that, in many cases, the claimed benefits of maximising natural lighting have turned into clear dis-benefits. Research and guidance in the past has been simplistic and inadequately focused on the real contexts in which people make decisions. There are many reasons for this, and this chapter will review some of the most prominent. For example, it is possible for a single
decision by an individual to put a whole system into an energy wasting state.

Efficient fittings produce less heat, in turn leading to a reduced cooling load, and this lighting strategy could also reduce the contract period with obvious benefits in terms of an earlier occupancy date. A lesson which is being gradually learnt from the PROBE studies is that individuals will always select the least cost option in terms of effort. It is not that people are inherently lazy, but that they will tend to resent expending effort on activities which they regard as the responsibility of management. Insufficient consideration is given to the fact that anomalous situations are often difficult to correct and it is easier to adopt the 'inertia solution'. For example, it is often easier to switch on lights than adjust blinds.

Now computers are the universal office tool and excessive daylight can be a severe nuisance due to reflection from VDU screens. Where daylight results in glare, individuals will adjust the blinds and artificial lighting to avoid discomfort and achieve an even distribution of lighting regardless of energy consumption. The common 'inertia response' is to close the blinds and switch on the lights. Furthermore, blinds tend to be left closed if that was their position on the previous day. In open plan situations where no individual is responsible, it has been found that all lights can be on because one person has drawn the blinds to avoid glare. In cellular offices individuals take more responsibility for adjusting their light levels and optimising the relationship between artificial and natural lighting. If lighting controls are not tuned to each individual workstation, this can result in greater energy use than in a conventional office.

Photoelectric control

Daylight yields real savings particularly if it is linked to automatic controls. Where lights are operated electronically according to natural light levels, the systems can be either closed or open loop. Open systems measure external incident daylight to dim lights, but with no feedback of the actual levels of realised illuminance. A closed loop system controls the lighting to top up the daylight to achieve a given minimum acceptable illuminance level regardless of external conditions; even so, individuals may wish to vary desktop light levels to minimise contrast as the level of outside light fluctuates. The adjustment of the sensors is a matter of fine tuning. As a rule of thumb, lights should not go off until the illuminance level is about twice the design level, to avoid the need to make small-scale adjustments; and that is what happens when natural light levels fluctuate. Where sensors in closed systems are near windows, it is not uncommon to find occupants closing the blinds to activate the lights.

In practice it is often difficult to locate sensors to suit occupancy patterns and work requirements, especially in open plan offices where workstations may frequently be relocated. In many cases there are not enough sensors and lighting zones to take account of localised variations in daylight due to orientation or shading. Even where lights are zoned according to daylight penetration, these often do not relate to workstations, with the result that lights are on all day to compensate. In extreme instances lights can end up on throughout the whole circulation area. Complaints at the lack of finesse of such systems can result in management abandoning photoelectric control altogether.

Blinds can override the controls of both open and closed systems. Another problem that occurs is that lighting and blinds controls are not co-ordinated. External blinds are also vulnerable to adverse weather, especially high winds. Local manual override is the preferred answer. The ideal solution is to rely on manual switching for the 'on' and automatic switching for the 'off'. Recent developments in glass technology, referred to earlier in terms of electrochromic glass, offer solutions to these problems, especially if it can be controlled on an individual pane basis.
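The closed-loop behaviour described here – top up daylight to a minimum illuminance, but switch off only once daylight reaches about twice the design level – can be sketched as a small control step. The 400 lux setpoint anticipates the design guidance given later in these chapters; the lamp capacity is an assumption for illustration.

```python
DESIGN_LUX = 400.0      # target desktop illuminance (design level)
LAMP_MAX_LUX = 400.0    # assumed artificial contribution at full output

def control_step(daylight_lux, lamp_on):
    """One closed-loop step: returns (lamp_on, dim_fraction)."""
    if lamp_on and daylight_lux >= 2 * DESIGN_LUX:
        lamp_on = False      # off only above twice the design level
    elif not lamp_on and daylight_lux < DESIGN_LUX:
        lamp_on = True       # re-engage when daylight falls short
    if not lamp_on:
        return False, 0.0
    shortfall = max(0.0, DESIGN_LUX - daylight_lux)
    return True, min(1.0, shortfall / LAMP_MAX_LUX)

# Trace a day of fluctuating daylight:
state = True
for lux in (100, 300, 500, 700, 850, 600, 350):
    state, dim = control_step(lux, state)
```

The dead band between 400 and 800 lux is what stops the lights 'hunting' as clouds pass; without it, small-scale switching would happen every time natural light levels fluctuate.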
Glare

Another rule of thumb which is observed by designers is that 'if you can't see the sky the daylight level is inadequate'. The result is tall windows to give maximum daylight penetration. This carries the attendant risk of glare unless workstations are properly positioned in relation to the window. In most cases this will mean desks at right angles to the external wall and the VDU viewing axis parallel to the window plane. One option is to resort to automatic blinds, but even this technology is not devoid of problems. The operation of externally positioned blinds can be frustrated by adverse weather conditions. Occupiers sometimes complain that the spontaneous action of the blind is an irritant and represents a denial of individual choice. Lights also tend to stay on regardless of the position of the blinds.

Dimming control and occupancy sensing

Closed loop systems are designed to provide a constant level of desktop illuminance. On a bright day a constantly lit desk would appear gloomy. There are obvious advantages to light switching which is responsive to a human presence. Occupancy sensors achieve their optimum value in service areas and circulation spaces. These are areas which are frequently overlooked, yet they can use more energy pro rata than office spaces. However, sensors may not be sufficiently sensitive to the movements of people engaged in high concentration tasks. Alternatively they may be so sensitive that passers-by trigger the switch, causing a distraction. In some of the worst cases activating lights in an office area can switch on all lights along the exit route. A balance should be struck between optimum safety and the profligate use of energy. One common fault is that the positioning of switches and sensors does not take account of the contribution of natural light. This is a particular fault in offices with atria.

Switches

A common failing is that switches are not positioned logically in terms of their relation to the fittings and behaviour patterns of occupants. Where switching is not ergonomically appropriate the tendency once again is for lights to be left on permanently. Switches remote from fittings lead to uncertainty as to the status of the lights. The answer would be to include a red 'live' light in the switch. Remote infra-red switching is an efficient and effortless system.

System management

One major reason why certain high profile energy efficient buildings fail to meet expectations is because of deficiencies at the level of system management. It may be that system interfaces are not well understood by staff. Good communication between management and staff can achieve a satisfactory performance from a less than perfect system; inadequate communication can undermine the virtues of the best possible system design. There is also the situation that office managers sometimes fail to address the more subtle needs of staff, gearing the system to crude averages with the result that nobody is satisfied. System complexity is another problem. Also, overcomplex systems discourage interference and adjustment in case the outcome is worse and defies a remedy. Calling out specialists to make adjustments can be expensive, thus tempting managers to prefer to operate the system below its design efficiency. In some cases the system's programme no longer serves the functions of the building. The problem is exacerbated if the original software source is no longer available. Even service managers and suppliers are occasionally not as well informed as they should be. The increasing popularity of flexible working hours is causing difficulties. This is another instance where the complications and cost of modifying the system to respond to new work practices may
be unacceptable and therefore the system is abandoned. If services managers are not conversant with the intricacies of a system, they will tend to operate it at or near its optimum on the principle that overkill masks lower order problems and safeguards one's back. In extreme cases this has led to the complete abandonment of the system.

Complex systems can also be inflexible. Light controls may have been designed for fixed working hours and set lunch times. There has been a case where all lights in an office were operated from a central control desk. The desk only operated from 09.00 to 17.30 hours. Since staff became able to work flexible hours, it meant that those working after 17.30 were obliged to leave the lights burning all night. Variable working patterns and conflicting needs often mean that lights are left on unnecessarily. Another potential source of conflict between design and operation is when a single occupancy office reverts to multiple occupancy. It is usual for the principal tenant to have overall control of the services.

The human factor is of prime importance. In some cases the fault for poor system performance lies with office managers who fail to inform the staff of the operational characteristics. For example, staff were not told of the fact that pressing a switch twice would turn on extra lights. Remote switching works best provided the operation zone focuses down to the size of an individual workstation.

A relatively recent trend in the production of buildings is for the design to be separated from the fitting-out. The architect and services designer may produce an elegant energy-efficient concept which can be totally vitiated by the fitting-out contractor who has not been informed of the energy saving features and consequent operational constraints of the design. Even worse is the situation where the subcontractor deliberately ignores the design objectives of the architect and engineer in order to keep down costs. The problem of discontinuity between design intention and fitting-out can be particularly acute in the case of refurbishments.

Lighting – conditions for success

It is still comparatively rare to encounter an open plan office which achieves low lighting energy consumption combined with high daylight use and which produces a high level of occupant satisfaction. Open plan installations which offer occupant satisfaction and energy efficiency usually satisfy four conditions:

● The design is straightforward and comprehensible, avoiding overcomplexity.
● They have intelligible local controls with clear user interfaces.
● The system is robust and reliable.
● There is responsive and intelligent office and services management.

Air conditioned offices

These present a different set of problems for designers. The psychological effect of a space hermetically sealed from the outside world is to suggest an environment designed to overcome nature and be wholly distinct from it. As a result, the inhabitants tend to regard it as natural that all the services should be fully used all the time. If, in addition, the facades feature solar tinted glass, even more lighting is used to compensate for the constantly gloomy outlook.
lighting levels should be designed be to as low as is permitted whilst still achieving the standard required. Interior fittings and furnishings are light in colour and tonal value with tall fittings kept to a minimum. Summary of design considerations ● ● ● Design of artificial lighting systems should not be extravagant. Following commissioning there is intelligent management of the system combined with responsive management at office level. Blinds will be easy to operate with a good range of adjustment and which need to be fully closed only in exceptional circumstances. Design luminance should be set to achieve about 400 lux with lower levels in circulation areas. There should be efficient lighting throughout with high frequency control gear and good optics. It is an advantage to have variety in lighting but without excessive contrast or the ‘oppressive feel’ generated by installations with 100 per cent Category 1 luminaires. The system has a high degree of inherent flexibility so that it can be finely tuned and retuned to user needs. splayed reveals and deeply recessed windows. Circulation lighting is low energy and well planned and controlled with full account taken of contributions from daylight. Task lighting should be used for specific workstations in order to reduce the level of general background lighting. ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● There is an assertive client who has formulated the system’s requirements clearly and insisted from the outset on effective lighting controls. An important added advantage in corridors etc. Energy efficient lamps should be specified – usually high frequency fluorescent or alternative discharge lamps.LIGHTING – AND HUMAN FAILINGS Those that achieve low lighting energy consumption tend to display the following characteristics. 193 . It makes direct contact with the emotions. lighting is probably the most powerful influence on mood and demeanour. Four types of control are available. 
Localised switching ● ● ● Allows partial illumination of large areas when not fully occupied. As such its disposition. To conclude. Photocells are installed to detect when natural light is sufficient at which point artificial lighting is switched off. Gives individual control to occupants. This type is particularly appropriate for spaces with low occupancy levels. should be chosen. this can produce savings in spaces regularly used by more than two people. giving energy efficient light distribution. The on/off controlled switching of lighting systems needs careful consideration for optimum performance. 194 . Lights near windows should be separately controlled. of all the factors under the designer’s control. Recent developments include use of dimming control to avoid abrupt change as lights are switched off. Switching on is for a set period. with lights switched on manually.ARCHITECTURE IN A CLIMATE OF CHANGE ● ● Appropriate luminaires. Daylight linked control ● ● ● Can be used in conjunction with timed and occupancy linked systems. Occupancy linked control ● ● ● Sensors (ultrasonic. this can have a direct effect on the bottom line. Timed control ● ● Used to switch lights off automatically according to a specified schedule. infra-red. Leaving aside the moral arguments. microwave or acoustic) are used to detect presence of occupants. lights switched off if presence no longer detected. Control gear (ballasts) required for lamp functioning should be energy efficient. quality and means of control are critical factors in determining the well-being of the occupants of a building. Some form of override switching off required when building becomes unoccupied. for example high frequency electronic ballasts are up to 20 per cent more efficient than the norm. however.Cautionary notes Chapter Sixteen Having outlined techniques for optimising natural resources in the operation of offices. 
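The four control types described above are usually combined in practice. A minimal sketch of the combined logic – manual on, automatic off via an occupancy time-out, overridden by a daylight photocell – is given below; the function name and threshold values are our own illustrative assumptions, not taken from the text.

```python
# Illustrative sketch of combined lighting control logic: manual on,
# automatic off via occupancy time-out, with a daylight photocell override.
# All names and threshold values are assumptions for illustration only.

OCCUPANCY_TIMEOUT_MIN = 15      # switch off after 15 min without detected presence
DAYLIGHT_THRESHOLD_LUX = 400    # artificial light unnecessary above this level

def lights_should_be_on(manually_switched_on: bool,
                        minutes_since_presence: float,
                        daylight_lux: float) -> bool:
    """Return True if the luminaire should be energised."""
    if not manually_switched_on:
        return False                                  # manual on: no automatic switch-on
    if minutes_since_presence > OCCUPANCY_TIMEOUT_MIN:
        return False                                  # automatic off: occupancy time-out
    if daylight_lux >= DAYLIGHT_THRESHOLD_LUX:
        return False                                  # daylight-linked: photocell override
    return True

# An occupied desk on a dull afternoon: lights stay on.
print(lights_should_be_on(True, 2.0, 150))   # True
# A bright perimeter zone: the photocell switches lights off.
print(lights_should_be_on(True, 2.0, 600))   # False
```

Note that the default is off: the occupant must act to obtain light, but never to extinguish it, which is the energy efficient arrangement recommended later in the text.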
Why do things go wrong?

There is evidence that many recent offices designed to be energy efficient are not performing as well as expected. A factor which is often overlooked is that energy is comparatively cheap and accounts for only 1–2 per cent of total occupancy costs including salaries. This means that there is not the incentive to incur extra expenditure to modify systems to meet the original performance specification. However, the picture changes when energy costs are related to profits, where they can range from 10 to 20 per cent of the total. It must be said that this is in the context of relatively low energy prices.

Where an environmentally effective building really does score is in the sphere of staff well-being. For example, a joint examination by the US Department of Energy and the Rocky Mountain Institute of a number of refurbished offices found that renovations involving lighting and ventilation led to significant increases in productivity. A mere 1 per cent increase in productivity paid for a typical company's annual energy bill. In a specific example, the US report states that when Lockheed commissioned a new 60 000 m2 office complex, their architect persuaded them to invest in an extra 4 per cent to benefit from energy efficient design. The result was that absenteeism dropped 15 per cent as against their previous headquarters, and energy savings were worth $500 000 per year.

Not all the fault lies with clients. Many professionals are reluctant to negotiate new design territory for fear of falling victim to untried technology, or because they will not make the effort to learn new construction techniques. All construction professionals operate in the shadow of 'professional indemnity', which tends to make them overcautious and not ask questions after completion. Whilst design professionals are urged to work as a team, this is often difficult in practice. In a cut-throat world, designers across all disciplines are more often competitors than collaborators. Operating against integrated design procedures is fee competition, which sometimes reduces returns for design work to less than cost. A consequence of this is that services designers are often brought into the project at a late stage. Furthermore, a fee structure which is based on contract or subcontract cost operates as a disincentive to services engineers. They are less likely to embrace low energy design which involves excluding engineering hardware, the costs of which would enhance their fees.

One problem facing clients is the relative scarcity of information which is accessible to the non-specialist. Even the professionals have problems in this respect, though here there is useful guidance in the document 'Energy Use in Offices 2003', published under the Energy Efficiency Best Practice Programme (now called 'Action Energy').

The Chartered Institute of Building Services Engineers commissioned a number of post-occupancy studies of buildings designed to achieve a high level of environmental sophistication. These 'Probe' studies brought to light a number of problems which caused some buildings to perform well below their design prediction, sometimes by a margin of 25 per cent. These are some of the outcomes of the study.

High profile/low profile

In the drive to reduce energy consumption, attention has tended to focus on insulation standards and heating/cooling installations. Now that Building Regulations are driving up insulation standards, other factors become more energy significant, such as duct sizes and fan motors. In many, if not most, cases fan motors are substantially oversized, leading to significant excess energy costs. Lights are another source of concern which will be considered in more detail later. Leaving computers switched on unnecessarily not only wastes energy directly, it also adds to the cooling load of the building. In many instances, substantial improvements to energy efficiency can be achieved by paying sufficient attention to the low profile details of design (Bordass, W., PROBE studies).

The 'high-tech demand'

Some designers are seduced by the imagery of advanced technology and install hardware that greatly exceeds the real demands of the building and its occupants. Avoiding the technological fix and installing only essential technology that is efficient, not overcomplex, and easy to use and maintain should be the aim of designers. Overcomplex systems which require elaborate maintenance tend to deteriorate fairly rapidly because service managers are not up to the demands of the technology. In extreme cases the system is abandoned altogether.

Operational difficulties

It is unfortunately the case that guidance/instruction manuals are often poorly written and inadequate in terms of information, so service managers and office staff are at a disadvantage from the start.

Another problem which is all too common is that installers are expected to comply with almost impossibly short completion dates. Commissioning is hurried to avoid activating penalty clauses in the contract. If the system goes into operation at a substandard level of efficiency due to time constraints, it may be less of a financial risk to commission the system properly after practical completion.

Building related illness

Over recent years there has been awareness of the phenomenon 'sick building syndrome', more accurately termed 'building-induced sickness'. There have been numerous horror stories of badly maintained systems providing a comfortable habitat for all manner of unmentionable life forms, as well as closed systems recycling bacteria and viruses resulting in high levels of absenteeism. When energy efficient design sets a good baseline of environmental conditions, the extra effort necessary to fine tune comfort to an individual's personal preferences quickly pays off.
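The cost relationships quoted earlier in this chapter – energy at only 1–2 per cent of occupancy costs, and the claim that a 1 per cent productivity gain roughly pays a typical annual energy bill – can be checked with simple arithmetic. In the sketch below the absolute figures are invented for illustration; only the percentages come from the text.

```python
# Rough check of the cost relationships quoted in this chapter.
# Absolute figures are invented; only the percentages are from the text.

occupancy_costs = 10_000_000   # GBP/yr, salaries included (assumed)
salary_share = 0.85            # assumed: salaries dominate occupancy costs
energy_share = 0.015           # text: energy is 1-2% of occupancy costs

energy_bill = occupancy_costs * energy_share
productivity_gain = 0.01 * occupancy_costs * salary_share  # value of a 1% output gain

print(f"Annual energy bill:          {energy_bill:,.0f}")
print(f"Value of a 1% output gain:   {productivity_gain:,.0f}")
# Same order of magnitude: small gains in staff performance rival the
# entire energy budget, which is the point the chapter is making.
print(0.5 < productivity_gain / energy_bill < 2.0)   # True
```

Because salaries dwarf energy costs, even a modest improvement in staff well-being and output is worth as much as, or more than, the whole energy budget; this is why the staff benefits of environmentally effective buildings matter more financially than the energy savings themselves.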
This problem seems to be especially acute in terms of services technology. There is a universal problem with instruction manuals because they are written by experts on the system in question, who find it impossible to empathise with the uninitiated installer and operator. They cannot conceive the breadth of the knowledge gap.

Recent studies have suggested that sick building syndrome is also related to job satisfaction. Job satisfaction is more easily achieved in a pleasant, comfortable environment in which the occupants are permitted some degree of control over their surroundings. Factors like off-gassing from plastics in furnishings and fittings, or the frequency of fluorescent lights, have been implicated, aside from the most spectacular problem of Legionnaires disease. Poorly designed heating and ventilation systems have also been identified as culprits.

Inherent inefficiencies

A system designed to be energy efficient can be totally undermined if the whole system has to be operated to meet a small demand. For example, in small-scale buildings it is not unusual to find an entire heating plant being run in summer to supply hot tap water. Unreasonable overcapacity is another problem. An elaborate and expensive chiller may be installed to meet the cooling demand of a few days in the year, or to supply cool air to a small number of prestigious rooms. This is a recipe for high energy consumption and less than perfect comfort conditions.

System efficiency can drop dramatically without the management being aware of the problem.
The human factor ● ● People are more tolerant of conditions in a naturally run building than in sealed air conditioned boxes. clearly articulated systems are the answer. The alternative of a balanced natural/mechanical system requires sophisticated design techniques which may pose too much of a challenge to system designers. Common failures leading to energy waste ● ● ● ● ● Designers tend to err on the side of caution. Accordingly systems are often overpowered and therefore wasteful. hence the need to avoid excessive automatic control and also that they make unconscious adjustments for longer in environments that are congenial. Natural ventilation is claimed to be more adaptable but it is not always appreciated how this adaptability can be achieved. for example the acceptable range of temperature is wider and perceptible air movement more acceptable. There is less risk in overdesigning than underdesigning. Robust. Occupants generally do not have the patience to keep fine tuning their building environment and will tend to do what is most convenient. Inadequate monitoring systems can fail to identify progressive failure. Designers can place too much faith in arrows showing expected air flow when natural ventilation is used – a rigorous approach to patterns of expected air movement is necessary.CAUTIONARY NOTES ● ● ● ● ● ● ● Lower running costs for the building when avoiding air conditioning may be at the expense of staff satisfaction. 199 . but what there is may be more complex because of modes of operation. There is less plant to maintain without air conditioning. The drive towards green design has sometimes led to the use of untried and inadequately researched alternatives to air conditioning. Intrinsic faults in the system may remain hidden but may nevertheless adversely affect energy use without any detectable effect on service. 
Often the most convenient operating strategy is for switches to default to on whereas manual on and automatic off is the more energy efficient provided safety is not compromised. Natural ventilation is less controllable. Naturally ventilated buildings are claimed to offer greater occupant satisfaction. and the level of occupant satisfaction is difficult to measure on a constant basis. This can create variable climate conditions at any given time. automatic off should be the norm. windows etc. Mixed mode ventilation and cooling systems with different services zoned according to use patterns and need are often the most suitable strategy.ARCHITECTURE IN A CLIMATE OF CHANGE ● ● ● As a general rule people find it easier to switch systems on than off. the inertia factor tends to increase when people are in groups. are all supportive of natural systems. So often averaging out was another term for a ‘lowest common denominator’ solution. again. These are: ● ● Preferred conditions in offices are now complex and unpredictable. climate sensitive. blinds. manual on. In such instances it is often perceived as easier to decommission the system than rectify it. The passive potential of buildings should be fully exploited with care taken to ensure that building form. hence. Monitoring is essential to determine running costs and to identify critical failure paths before they lead to catastrophic failure. the system does not default into concurrent operation of the mechanical and natural systems. At the same time. controls. Awareness of the outside world is an important component of contentment. There is a reluctance to be conspicuous with its potential for risking criticism. 200 . designers and modellers are still reluctant to abandon their faith in fully automated controls. Summary of recommendations ● ● ● ● The temptation to opt for complex and ‘heavy’ engineering should be resisted in favour of ‘gentle engineering’ in which loads on the HVAC (heating. building design. 
in changeover conditions. Conclusions A number of factors are now tending to direct designers away from fully automated systems. ventilation. making it impossible to design for average needs. Sudden changes are disrupting therefore automatic climate modifications should occur imperceptibly where this is possible. ventilating and air conditioning) system are kept to a minimum by appropriate. Even so. Too often there is still no clear analysis of what controls can really achieve and how proficient people will be at operating and servicing them. Overcomplex systems can generate unpredicted consequences and even episodes of total failure. Where natural and mechanical systems are designed to work in a symbiotic relationship it is necessary to ensure that. but psychological studies indicate that the mind can make a very full response to the visual milieu without reference to consciousness. Most of the time external views are perceived at a nonconscious level. energy wasteful options. To repeat: the objective should be to design straightforward. Furthermore. The aim of the design team should be to achieve maximum energy conservation. robust systems which are well within the abilities of both service and office managers to understand and users to operate. This is the recipe for sustainable design. This will deflect occupants from resorting to easy. such as glare on VDU screens.CAUTIONARY NOTES ● ● Changes in office routine and design have revealed that opting for maximum daylight can produce irritating consequences. maximum daylight designs rely on the reliability and user friendliness of blinds and this reliance has often been misplaced. consistent with operational realism. 201 . Chapter Life-cycle assessment and Seventeen recycling Waste disposal ‘The Earth is infinitely bountiful’. increasing pressure on land for waste disposal. 
Far from it. The reality is that society cannot continue to consume natural assets at the current rate. The waste being generated by the increasing consumerist ethos of the industrialised nations imposes four penalties:

● depletion of natural resources;
● increasing pressure on land for waste disposal;
● pollution arising from landfill disposal;
● energy involved in disposal.

As the natural capital of the Earth is being steadily eroded, this is increasingly an ethical as well as an economic problem. The ecological footprint is the area of land (and sea) taken up to meet the needs of individuals or societies. For example, a citizen of the US uses 34 acres and in the UK the average per capita is 14 acres, whilst in Pakistan it is about 1.6 acres. Worldwide the average is 4.5 acres, due mainly to consumption in the industrialised nations. In ecological terms this means that the Earth is already living beyond its means. Currently it takes 1.25 years for the annual biological harvest to regenerate; in 1962 it took 0.7 years, which means the natural capital account is going increasingly 'into the red' (Mathis Wackernagel at a conference 'Redefining Progress', February 2003, reported in The Guardian, 20 February 2003).

There is a temptation to think that when waste is thrown away, that's the end of it. From being our problem it becomes someone else's. At the same time we may be placing a valuable recyclable resource beyond use. Land is Earth's most valuable commodity, and it is being increasingly diminished by building development and landfill sites.

The market economy encourages ever more vigorous consumerism which, in turn, increases the rate of obsolescence. Packaging and style upgrades exploit the human drive to be seen to be in the height of fashion. The irony is that our most expensive artefact after a house is the car, which is designed for increasingly longer life: more and more cars are being claimed to have passed the million mile mark. So, constant style changes and technological tinkering rather than functional efficiency are needed to keep the market buoyant. This provides the context for considering the problems of waste.
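The footprint comparisons above can be made explicit with a little arithmetic. In the sketch below the acre figures are those quoted in the text; the helper function is our own.

```python
# Sketch of the ecological footprint comparisons in this section.
# The acre figures are those quoted in the text; the function is our own.

FOOTPRINT_ACRES = {"US": 34, "UK": 14, "world_average": 4.5}

def multiple_of_world_average(country: str) -> float:
    return FOOTPRINT_ACRES[country] / FOOTPRINT_ACRES["world_average"]

for c in ("US", "UK"):
    print(f"{c}: {multiple_of_world_average(c):.1f}x the world average")

# Regeneration arithmetic: one year's biological harvest now takes about
# 1.25 years to regenerate (about 0.7 years in 1962), an overshoot of 25%.
overshoot = 1.25 - 1.0
print(f"Current overshoot: {overshoot:.0%}")
```

The point of the calculation is simply that any regeneration time greater than one year means the 'natural capital account' is being drawn down rather than lived off.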
The consequence of this is that there is growing concern about how to dispose of the escalating quantities of waste. The solution starts in the home. Local councils are under growing pressure to collect waste in segregated bins to facilitate recycling. At the same time householders can do a great deal to help the process along by:

● reusing items wherever possible, notably plastic bags and containers;
● separating waste at source and, where there are not segregated collection facilities, delivering to appropriate waste bins;
● composting organic kitchen waste and most garden waste (some plants are not suitable for composting). Some councils offer composting bins at a discount.

There may be an added incentive to reduce the amounts of household waste: plans are being considered to levy a charge for each bin collection from a home. This should be a major issue in local elections.

Recycling

We are slowly moving to a position where there will be no such thing as waste, merely transformation. It is in the sphere of building that recycling has considerable potential, and this applies to renovation as well as new build. There are at least three aspects to this:

● reused items for the same or an alternative purpose;
● refurbished materials;
● reconstituted materials.

Reuse

Building demolition provides an endless source of items which can be reused with almost no adaptation. This is what recycling is mainly about. Architectural salvage has become a significant industry. A first point of reference could be the Architectural Salvage Index operated by Hutton Rostron (www.handr.co.uk/salvage_home.html, e-mail debi@handr.co.uk). This index was started in 1977 to recycle building materials and architectural features from buildings that are being demolished or renovated.
The Index covers:

● building materials: bricks, tiles, slates, flagstones and other heavy items;
● complete structures: barns, conservatories and pergolas;
● external features: a range of garden features and furniture;
● internal features: panelling, fireplaces, stairs, flooring, doors and central heating items.

Radiators, pumps etc. are obvious candidates. There is also N1 Architectural Salvage at www.salvoweb.com/dealers/n1architectural for architectural features, and www.salvoweb.com/dealers/v-and-v/index.html for reclaimed bricks, stone and timber.

Refurbished materials

As the pace of economic change accelerates, relatively recent buildings are being demolished to make way for more intensive and lucrative site development. This means that many items are being dismantled long before they should be retired, offering good opportunities for refurbishment. In many situations this may not be a problem, though not in the case of multi-storey buildings. If this is not feasible, local markets may well find a use for them.

The first requirement in minimising waste is to segregate materials – timber, metals, plasterboard, hard plastics – to be recycled on site where possible. An example of good practice is the case of Dartford Hospital. It was estimated that 20 per cent of plasterboard comprises waste off-cuts. By agreement with the manufacturers, all off-cuts were kept in separate containers, taking care to keep them clean, and then returned to the manufacturers to be recycled into new plasterboard.

Nations measure their success by the level of per capita GDP and the extent of annual economic growth; these dictate a nation's standing in relation to other countries. The construction industry is the sector which carries the most guilt in this respect, with its voracious appetite for raw materials and its resistance to cutting waste. This has led to mounting pressure to employ recycled materials.

Reconstituted materials

In refurbishment schemes it is likely that there will be some element requiring the use of concrete. This is normally an energy intensive material due to the mining of aggregate and the production of cement. Normal concrete uses about 323 kg/m3 of cement. This figure can be reduced to 100 kg/m3 by the introduction of ground granulated blast furnace slag (GGBS) to provide additional bulk, reducing the cement content by 70 per cent in mass concrete for bases etc., as illustrated by the Earth Centre case study below. The only drawback is that the curing time is increased from the normal 28 days to 56 days.

The upgrading of the railways has resulted in a good supply of timber railway sleepers – an excellent source of recycled timber that can be put to a range of uses, particularly in gardens.

The mountains of waste slate in North Wales are slowly being ground into powder form to be transformed into resin-based building materials which can receive a high polish. As wall tiles they have the appearance of polished granite at a fraction of the cost, and the material is ideal for cladding.

Waste glass has found a new incarnation as decorative tiles and blocks. Crushed and mixed with resin, it is available in a wide variety of colours and textures. In translucent form it can be backlit as illuminated flooring or walling. (See Crystal Paving Ltd of Ecclesfield, Sheffield, tel. 0870 770 6189, e-mail info@crystalpaving.co.uk.)

Life-cycle assessment

Pressure is mounting to derive standards for environmental performance over the lifetime of a building by targeting the environmental impact of its component materials. In 1998 the Building Research Establishment (BRE) developed a scoring system for environmental impacts known as Ecopoints, based on Howard, N., Edwards, S. and Anderson, J. (1999) Methodology for Environmental Profiles of Construction Materials, Components and Buildings, BRE. The system deals with the extraction, processing, manufacture, transport, building-in-use and disposal stages of a product's life-cycle. These various environmental impacts are then assessed against 13 categories, including climate change, atmospheric pollution, water pollution and raw materials extraction. Clearly some of the categories, like climate change, have a greater overall impact than others; there is therefore a system of weighting which reflects these differences. To avoid the charge of subjectivity, the BRE consulted with a wide range of construction professionals and environmentalists before fixing on the weightings. The outcome is a system of Ecopoints: the higher the score, the greater the environmental impact. The benchmark is the environmental impact caused over a year by the average UK citizen, which is set at 100. Details of this system may be found at www.bre.co.uk/envest.

In parallel with this there is also growing awareness of the value of calculating the economic cost of a building from inception to demolition.

Whole life costing

This focuses on the financial profile of a building and its market cost. It covers some of the same ground as life-cycle costing but excludes the production process, that is, the impacts caused by mining etc. The important point is that it marks a move from pure capital costing to integrating capital and revenue costs into an overall whole life cost. Whole life costing information may be found on the web.

Eco-materials

Concrete

As possibly the most extensively used building material, concrete attracts criticism from environmentalists on account of its carbon intensive production techniques and its use of a once-only natural resource. Cement is formed by heating clay and limestone in a rotary kiln to a temperature of about 1450°C, which produces some 3000 kg per tonne of carbon dioxide (CO2). In addition, the heating process produces a chemical reaction through the conversion of calcium carbonate into calcium oxide which releases about 2200 kg of CO2. Saving energy is one thing; buildings as carbon sinks is another, yet this is the destiny of buildings according to John Harrison, a technologist from Hobart, Tasmania. He has produced a magnesium carbonate-based 'eco-cement'. In the first place it only uses half the energy for process heating required by calcium carbonate (Portland) cement. This is said to be due to the avoidance of calcination from calcium carbonate and the lower kiln temperature of 750°C. The roasting process produces CO2, but most of this is reabsorbed by a process of carbonation as the cement hardens. Using eco-cement for such items as concrete blocks means that nearly all the material will eventually carbonate, resulting in a substantial rate of absorption of CO2.

The development of the technology of geopolymers offers the prospect of a further eco-friendly concrete; the market availability of this material is said to be at least five years away (see www.geopolymer.org). Eco-cement is not unique in its pollution absorbing properties, and magnesium-based concrete coated with titanium dioxide could be the basis for eco-cities of the future.
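The mix figures quoted under 'Reconstituted materials' above can be verified with a line of arithmetic; the kg/m3 values below are from the text, while the helper name is our own.

```python
# Check of the mix figures quoted for GGBS concrete: replacing most of the
# Portland cement with ground granulated blast furnace slag.
# The kg/m3 values are from the text; the function name is our own.

NORMAL_CEMENT_KG_M3 = 323    # typical Portland cement content of concrete
GGBS_MIX_CEMENT_KG_M3 = 100  # cement content when GGBS provides the bulk

def cement_reduction_percent(normal: float, reduced: float) -> float:
    return 100 * (normal - reduced) / normal

saving = cement_reduction_percent(NORMAL_CEMENT_KG_M3, GGBS_MIX_CEMENT_KG_M3)
print(f"Cement content reduced by {saving:.0f}%")
```

This gives a reduction of about 69 per cent, consistent with the 'roughly 70 per cent' cut in cement content claimed in the text; since cement is the carbon intensive ingredient, the embodied CO2 of the mix falls in broadly the same proportion.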
This means that an eco-concrete tower block can perform the same function as growing trees as it steadily fixes carbon. Being less alkaline than Portland cement, it can incorporate up to four times more waste in the mix than conventional cement to provide bulk without losing strength. This could include organic waste which would otherwise be burnt or added to landfill, plastics, rubber and fly ash. Harrison estimates that a shift to eco-cement could ultimately cut CO2 emissions by over 1 billion tonnes since it could replace 80 per cent of uses currently served by Portland cement ('Green Foundations', New Scientist, 13 July 2002). There is one further attribute to this material. In Japan 50 towns are already using these photocatalytic cements, and in Hong Kong it is estimated that they remove up to 90 per cent of the nitrogen oxides or NOx gases that create smog. Mitsubishi is producing paving slabs coated with titanium dioxide which remove most pollutants from the air.

External finishes
A similar principle has been incorporated into a paint that is now available from Millennium Chemicals of Grimsby. Called 'Ecopaint', it is designed to reduce levels of NOx in the atmosphere. The paint contains nanoparticles of titanium dioxide and calcium carbonate. These particles absorb ultra-violet radiation and use this energy to convert NOx into nitric acid, which is either washed away by rain or neutralised by the calcium carbonate particles.

Paints
Paints have three constituents: pigment for colour, a binder and a solvent or thinner. It is the solvents which are the main problem since they are designed to evaporate. Most of the solvents used come into the category of volatile organic compounds (VOCs) and are aggressive pollutants. It has been calculated that over 500 000 tonnes of solvent are released into the atmosphere globally each year (Harland, E. (1999) Eco-Renovation). Another statistic is that organic solvents are responsible for 20 per cent of the hydrocarbon pollution in the atmosphere, second only to motor vehicles (Berge, ibid.). It is the solvents which derive from the petrochemical industry that are the most toxic and are implicated in the phenomenon of off-gasing. This may continue for a considerable time with sometimes serious health consequences. (A comprehensive list of surface treatments and their solvents is to be found in Berge, ibid.) There are alternatives, such as those containing natural resin emulsions. They appear much the same as conventional petrochemical emulsions, are as easy to apply, do not have the pervasive smell of chemically based paints and are also biodegradable (Edwards, L. and Lawless, J. (2003) The Natural Paint Book, Rodale Press, USA, available from the AECB book service: www.aecb.net).

Humidity
The choice of paints and varnishes can have an impact on the level of humidity within a building. Temperature is the key factor in determining how much moisture the air can hold: at 20°C air can hold 14.8 g/m3, whereas at 0°C it can only hold 3.8 g/m3. On average a living room contains 5–10 g/m3. Fluctuations in temperature will alter the carrying capacity of the air and may result in condensation. It is important that the materials of the walls can absorb much of this moisture, which means the use of hygroscopic materials, that is, materials that can take up moisture, releasing it when the internal humidity level creates imbalance. Such materials act as a stabilising agent, keeping the humidity level reasonably constant. In other words, hygroscopic materials have a damping effect on moisture fluctuations just as thermal mass regulates temperature (Berge, ibid., pp. 251–253). A further benefit is that water vapour carries some gas contaminants like nitrogen oxide and formaldehydes. When the water vapour enters the hygroscopic materials these chemicals may be deposited and broken down, giving these materials a degree of air cleansing capacity. However, the transfer of moisture will not happen if wall surfaces have impermeable finishes like oil-based paints or varnishes, plastic wallpaper or even wallpaper fixed with plastic-based pastes. Internal walls need to breathe, otherwise condensation is virtually inevitable. It is recommended that internal walls should be finished in hygroscopic emulsion paint over plaster. This ensures that excess moisture can be absorbed by the plaster and masonry wall.

Materials and embodied energy
In addition to the energy used during the occupied life of a building there is also a significant energy factor in terms of the materials used in its construction. It falls into five divisions:
● the extraction from the earth of raw materials;
● the processing of the raw material into finished products;
● the transportation to the supplier and then to the site;
● the construction process;
● the demolition and recycling of materials.
In the UK about 7 per cent of total energy consumption is embodied in materials. The problem with embodied energy is that it is difficult to quantify with any confidence: with the first two stages, that is, extraction and processing, the energy used in the processes may be withheld for commercial reasons. The situation is further complicated when some of that energy is from renewable sources, as in the case of aluminium processed in Canada from hydroelectric power, or bricks in Nottinghamshire fired by landfill methane. Energy inputs into metals such as copper and aluminium can vary according to whether the source is from ore or recycled material. The matter will only be resolved when disclosure becomes a legal requirement. At this stage in the development of disclosure about embodied energy, the most direct impact can be made on 'carbon miles', that is, sourcing materials as near as possible to the construction site. There are strong environmental reasons to use timber in construction, since it is a renewable resource with the added benefit of fixing CO2 during growth. However, in the UK most softwood is imported, adding a significant transport component.

Assuming an average life for an office building in the UK of 15–20 years, embodied energy is a significant element of its lifetime energy use. However, if buildings were to be made more adaptable, and thereby more accommodating to numerous changes of work pattern, their lifetime would be extended, thus reducing the overall percentage attributable to embodied energy. For example, the present replacement rate of housing means that life expectancy of a home is around 2000 years, making the embodied energy an insignificant element. However, as buildings become more energy efficient, so the reverse is true and ultimately the embodied energy may become the prime factor.

A case study will illustrate a whole building approach to recycled materials.

Low energy Conference Centre, Earth Centre, Doncaster
This is a building built to the highest super-insulation standards, which is no less than we would expect from its architect, Bill Dunster. It has natural wind-driven ventilation with heat recovery from exhaust air transferred to incoming air. Solar collectors on the roof direct warm water to a calorifier in an underground insulated 400 m3 tank. Heat is stored over the summer to be circulated throughout the winter, with a wood burning stove for backup heat. A wind generator mounted in the boiler flue helps to meet the electricity demand (Figure 17.1). The Conference Centre walls are of gabion construction, that is, loose stones contained with a galvanised steel mesh. In this case the filling comprises crushed concrete from a nearby demolished colliery.

Figure 17.1 Conference Centre, Earth Centre, under construction and completed
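The arithmetic behind seasonal storage of this kind is easy to sanity-check with E = volume × density × specific heat × temperature swing. A minimal sketch: the 400 m3 tank volume is from the text, but the 40 K usable summer-to-winter temperature swing is an illustrative assumption, not a design figure.

```python
# Rough thermal capacity of a seasonal water store:
# E = volume * density * specific heat * temperature swing.
WATER_DENSITY = 1000.0   # kg/m3
SPECIFIC_HEAT = 4186.0   # J/(kg K)

def store_capacity_kwh(volume_m3, delta_t_k):
    """Heat (kWh) released by cooling the store through delta_t_k kelvin."""
    joules = volume_m3 * WATER_DENSITY * SPECIFIC_HEAT * delta_t_k
    return joules / 3.6e6  # convert J to kWh

# e.g. charging the 400 m3 tank to 70 C in summer and drawing it
# down to 30 C over the winter (the 40 K swing is assumed):
print(round(store_capacity_kwh(400.0, 40.0)), "kWh")  # prints: 18604 kWh
```

On these assumptions the store holds roughly 18 600 kWh of heat, which gives a feel for why a tank of this size, combined with super-insulation, can carry a building through the heating season.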
The timber supports for the main structure are rejuvenated pylons, probably telegraph poles, discovered in a lorry park. The entrance steps are redundant railway sleepers (a growth industry). Even the wet heating system uses radiators recovered from demolished buildings, and the steel for the conical roof to the conference space is recycled I beams. Only the Gluelam beams are from new timber, in order to meet the manufacturer's performance guarantee. Most of the timber is either recycled or has received a certificate from the Forestry Stewardship Council. It is impossible to avoid using cement, despite its very high embodied energy, but where possible GGBS concrete has been used in foundations etc. All these items should contribute to a favourable life-cycle assessment score. In the Earth Centre example the contractor was able to underwrite extra costs arising from work on recycled materials; in the case of the Conference Centre, the main contractor was an enthusiastic advocate of the strategy. Sadly, in 2005 the Earth Centre closed.

Recycling strategy checklist
● First, a client should be encouraged to sanction the use of recycled materials. This is an important precondition of a project.
● Contractors should be persuaded to be co-operative about the use of recycled materials and willing to accept a degree of liability on behalf of subcontractors. There are problems associated with recycled materials: subcontractors may be reluctant to work with them because of hidden hazards like nails in timber which can wreck valuable tools, and a main contractor may also experience difficulty in guaranteeing time and quality on a design and build fixed price contract.
● In determining costs, considerations about embodied energy and resource depletion should be factored in. At the same time there should be considerable overall savings as against new materials, which should have an impact on the whole life cost assessment.
● All site waste should be sorted and recycled where possible. This can save the costs of transport and landfill fees.
● On the supply side, retailers of recycled materials should provide a measure of quality assurance if not a full guarantee. A useful website is www.salvo.co.uk.
● At the same time, designers should maximise the opportunities for materials to be recycled after demolition. Buildings which can be dismantled rather than demolished are much better in this respect. Design practices need to adapt to accommodate the recycling culture, and reference to best practice case studies is useful in this respect (p. 204).

Chapter Eighteen
State of the art case studies

The National Assembly for Wales
Richard Rogers Partnership
Following devolution, the Principality of Wales was granted greater autonomy, resulting in the need for an assembly building for which the Richard Rogers Partnership was appointed architects with environmental engineer BDSP. The design brief was for a building which reflected the democratic nature of government whilst also being a landmark example of low energy design. It also has to last 100 years. Given this lifespan, embodied energy will only be a tiny fraction of the energy in use over the life of the building, so the primary aim was to drive down operational energy demand. The engineers are confident that the building will use no more than 50 per cent of the energy of a building conforming to the Building Regulations in this location. When completed it should prove to be one of the most accessible and user-friendly parliamentary buildings of any state or principality in Europe.

It is a classic example of architect and engineer working in concert from the earliest stage of the project. The roofed public spaces offer a phased progression in terms of environmental control, from a minimum at the entrance overlooking Cardiff Bay to the highly controlled debating chamber at the heart of the complex. The results of extensive modelling of solar penetration and daylight at different times of day and at all seasons of the year have been factored into the design. Airflow over and around the building has been modelled using computational fluid dynamics (CFD). Ventilation air enters at low level, since there is little low level pollution, and rises through the debating and reception chambers through the stack effect. The rotating roof cowls ensure that the grilles for exhaust air are always in the lee of the wind, which is predominantly from the south west. The curved member on the top of the cowl has an aerofoil profile, creating negative pressure on the underside and thus assisting the extraction of exhaust air (Figure 18.1).

Figure 18.1 National Assembly for Wales (photograph Eamonn O'Mahony)

Figure 18.2 Welsh Assembly natural light and ventilation diagrams [north–south section annotations: services and access links to Crickhowell House; wind-pressure-assisted extraction of vitiated air from the debating chamber and reception via rotating wind cowls; south westerly prevailing wind; solar altitudes for July and September; wind screen breaks; environmental key ranging from semi-sheltered external to fully/partially controlled internal environments; void below ground floor for services distribution; ground floor plant room containing heat rejection cooling/heating plant, pump circuits, rainwater harvesting tank, sprinkler protection bulk storage tank, air handling ventilation plant, standby generator set, and transformer and LV switch room]
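The buoyancy that drives this kind of stack ventilation can be quantified with the standard approximation ΔP = ρ·g·h·(Ti − To)/Ti, with temperatures in kelvin. A small sketch: the formula is the textbook one, but the 15 m rise and the temperatures below are illustrative assumptions, not figures from the Assembly design.

```python
G = 9.81          # gravitational acceleration, m/s2
RHO_OUT = 1.25    # outdoor air density, kg/m3 (assumed, ~10 C)

def stack_pressure(height_m, t_in_c, t_out_c):
    """Stack-effect driving pressure (Pa) over a given height,
    using dP = rho * g * h * (Ti - To) / Ti with kelvin temperatures."""
    t_in_k = t_in_c + 273.15
    t_out_k = t_out_c + 273.15
    return RHO_OUT * G * height_m * (t_in_k - t_out_k) / t_in_k

# e.g. a 15 m rise from low-level inlet to roof cowl on a 5 C day:
dp = stack_pressure(15.0, 21.0, 5.0)
print(round(dp, 1), "Pa")  # prints: 10.0 Pa
```

Even a pressure of a few pascals is enough to move useful volumes of air through a tall, open section, which is why the wind cowls only need to assist, not replace, the stack effect.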
Figure 18.3 Welsh Assembly detail of natural light and ventilation [west–east section annotations: solar positions and entry to the building at different times of year; glazed section of wind lantern allows daylight to penetrate into the debating chamber; light penetration into committee rooms via glazed roof light; water feature used as solar reflector to enhance daylight penetration into the heart of the building]

Zuckermann Institute for Connective Environmental Research (ZICER)
RMJM Architects
The University of East Anglia has a reputation for commissioning environmentally advanced buildings. The Elizabeth Fry building set the standard for low energy university buildings (see Smith and Pitts, ibid.). As part of the School of Environmental Sciences, the ZICER building has set even higher standards of bioclimatic performance, with the Elizabeth Fry building acting as the benchmark. Designed by RMJM Architects, with Whitbybird responsible for the building physics, this building was conceived to make a powerful statement about sustainable design. It was designed to represent an improvement over Elizabeth Fry in several respects, including:
● better construction standards with a higher standard of air tightness;
● higher standard of insulation;
● higher standard windows;
● lower energy fans with better controls and lower pressure ductwork;
● heat and cool energy from a central university CHP system;
● better mixing of extract air for heat recovery (Figure 18.4).

Figure 18.4 ZICER building south elevation with facade and roof PVs

The 3000 m2 building was opened in 2003 and is principally a research facility with a mixture of cellular and open plan spaces on the ground, first and second floors. The top floor houses a large seminar and exhibition space in which natural light is moderated by wall and roof PV panels (Figure 18.5). The basement houses a Virtual Reality Theatre which is the centrepiece of the Social Science for the Environment, Virtual Reality and Experimental Laboratories.
This facility provides opportunities for research into environmental decision making within real and hypothetical landscapes. Overall, it is expected to realise a total energy use of 77 kWh/m2/year despite the fact that it houses at least 150 computer terminals. It should achieve this record breaking performance by a combination of energy efficient construction and electricity production from PV cells in facade and roof. The building is linked to the main teaching block by a glazed bridge from an atrium at its eastern end.

ZICER has an impressive array of sustainability credentials. First, the design had to achieve a high level of air tightness, with a target permeability rate of 3.0 m3/h/m2 at a pressure of 50 pascals (Pa). In practice it performs even better than this.

Second, the elements of the building have U-values substantially better than are required by the current Building Regulations, for example:
● walls 0.10 W/m2K
● floors 0.16 W/m2K
● roof 0.13 W/m2K
● windows 1.0 W/m2K (triple glazed)

Figure 18.5 ZICER building Seminar Room with facade and roof PVs and thermal mass suspended ceiling panels

Third, thermal modelling of the building indicated the use of natural ventilation on the south elevation, with fresh air entering at low level and rising through thermal buoyancy to pass behind the PV facade of the seminar space, removing heat from solar gain and from the action of the PVs. Exhaust air is expelled at high level on the north elevation. Thermal mass at ceiling level on the top floor helps to ensure that no additional cooling is required. A TermoDeck ventilation system comprising hollow core concrete slabs supplies air to the floors via vertical ducts. The air is released to rooms through louvres controlled by the building management system, which is still undergoing fine tuning. The high thermal mass of the floor slabs flattens the peaks and troughs of temperature. Users are also provided with opening windows.
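U-values of this kind translate directly into a steady-state fabric heat loss via Q = Σ U·A·ΔT over the envelope. A sketch using the U-values listed above; the element areas are invented for illustration only, not taken from the building.

```python
# U-values (W/m2K) as listed in the text; areas (m2) are hypothetical.
ELEMENTS = {
    "walls":   (0.10, 1200.0),
    "floors":  (0.16, 1000.0),
    "roof":    (0.13, 1000.0),
    "windows": (1.00, 400.0),
}

def fabric_heat_loss(elements, delta_t):
    """Steady-state conduction loss in watts: sum of U * A * dT."""
    return sum(u * a * delta_t for u, a in elements.values())

# For a 20 K inside/outside temperature difference:
q = fabric_heat_loss(ELEMENTS, 20.0)
print(f"{q / 1000:.1f} kW")  # prints: 16.2 kW
```

Note how the windows, despite the smallest assumed area, contribute roughly half the total loss (400 of 810 W/K), which is why window specification dominates super-insulated designs.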
Fourth, the top floor features 402.5 m2 of double glazed laminated monocrystalline PVs for the roof and polycrystalline PVs for the facade, with a rated output of 33 kWp. The PVs are grid connected to offset the electricity consumption of the building (Figure 18.5).

Fifth is the fact that artificial lighting, using low energy luminaires and controls, is mostly subject to movement sensors which can be overridden by local switching when required.

Finally, attention has been paid to the environmental sensitivity of materials used in construction. Recycled aggregate and timber from certified sustainable sources have been used. Most of the concrete, steel, aluminium and insulation are capable of being recycled. The finishing touch is provided by over 70 covered cycle spaces coupled with locker spaces and shower facilities. A large waste storage space at lower ground level allows waste to be sorted into appropriate recycling containers.

SOCIAL HOUSING
Beaufort Court, Lillie Road, Fulham, London, 2003
Feilden Clegg Bradley Architects
This is a high density development which epitomises the government's policy on affordable housing, embracing shared ownership and key worker rental provision. The accommodation ranges from one bedroom flats to family apartments. Its social credentials are particularly signalled by the fact that it contains an element sponsored by the Rough Sleepers Initiative.

Two things make this scheme stand out. First, it is a low energy building constructed well in excess of Building Regulations which fulfils the aims of sustainable development. The aim has been to surpass best practice for energy efficiency and provide affordable warmth for all its inhabitants.
Its energy efficiency is achieved by:
● high levels of thermal insulation and draught sealing;
● assisted passive stack ventilation with humidity controlled dampers to the kitchens and bathrooms;
● atria serving the six-storey block which have south facing glazing and are naturally ventilated at night to moderate summer temperatures;
● low energy, high efficiency lighting throughout;
● units designed to maximise natural lighting;
● trees introduced to the site, improving air quality and providing some insulation from the noise of adjacent roads;
● the roofs of two low blocks covered with sedum, which provides a habitat for wildlife and reduces the runoff from rainwater.

Second, the method of construction involves three aspects of off-site fabrication:
● a prefabricated steel load-bearing system with large-scale cold-rolled panels;
● large-scale hot-rolled elements;
● three-dimensional modular construction.

It is the first social housing project in the UK to incorporate these three off-site fabrication techniques in one scheme. The Commission for Architecture and the Built Environment described the project as 'one of the more sustainable schemes to be built anywhere in England because it sharply addresses energy efficiency and life maintenance cost, combined with a range of generously proportioned, well laid out, affordable accommodation' (Building for Life Gold Standard) (Figure 18.6).

Figure 18.6 Lillie Road flats, courtyard view (photograph by Mandy Reynolds)

Beddington Zero Energy Development (BedZED)
Bill Dunster Architects
BedZED is not just another low energy housing scheme: it is a prescription for a social revolution, a prototype of how we should live in the twenty-first century if we are to enjoy a sustainable future (Figure 18.7).
The design was led by Bill Dunster Architects, who are one of the UK's top evangelists for ecologically sustainable architecture, with the services and energy strategy developed by Arup Associates. The innovative Peabody Trust commissioned this development as an ultra-low energy mixed use scheme for the London Borough of Sutton. Though the Trust is extremely sympathetic to the aims of the scheme, it had to stack up in financial terms. Peabody was able to countenance the additional costs of the environmental provisions on the basis of the income from the offices as well as the homes.

In every respect this is an integrated and environmentally advanced project. It consists of 82 homes with 271 habitable rooms, together with 2500 m2 of space for offices, studios and workspaces, and shops and community facilities including a nursery, organic shop and health centre, all constructed on the site of a former sewage works – the ultimate brownfield site. The housing comprises a mix of one- and two-bedroom flats, maisonettes and town houses. It is a high density development along the line recommended by the Rogers Urban Task Force, realising an overall density of 50 dwellings per hectare plus 120 workspaces per hectare. This density includes the provision of 4000 m2 of green space including sports facilities. Excluding the sports ground and placing cars beneath the 'village square', the density could be raised to 105 homes and 200 workspaces per hectare. At such a density almost 3 million homes could be provided on brownfield sites with the additional benefit of workspaces for the occupants, radically cutting down on the demand for travel. Some dwellings have ground level gardens whilst the roofs of the north facing workspaces serve as gardens for the adjacent homes (Figure 18.8).

Figure 18.7 BedZED west elevation

One of its primary aims was to make the most of recycled materials, and the main success in this respect was to obtain high grade steel from a demolished building as well as timber. The majority of all the materials were sourced within a 35 mile radius.

The energy efficiency of the construction matches anything in the UK or mainland Europe. External walls consist of a concrete block inner leaf, 300 mm of Rockwool insulation and an outer skin of brick, adding up to a U-value of 0.11 W/m2K. Floors contain 300 mm of expanded polystyrene, having a U-value of 0.10. Roofs also contain 300 mm of insulation, in this case Styrofoam with a U-value of 0.10. Windows are triple glazed with Low-E glass and argon filled; they are framed in timber and have a U-value of 1.2. These standards of insulation are a considerable improvement over those required by Part L of the 2002 Building Regulations in the UK. Masonry external and internal walls and concrete floors provide substantial thermal mass, sustaining warmth in winter and preventing overheating in summer.

Figure 18.8 North elevation with workspaces at ground level and roof gardens serving dwellings opposite

In traditional construction up to 40 per cent of warmth is lost through air leakage. In the case of BedZED great attention has been paid to maximising air tightness, which is designed to achieve two air changes per hour at 50 pascals (2ac50P). Materials containing volatile organic compounds (VOCs) have been avoided as part of the strategy to use low allergy materials. It is predicted that space heating costs will be reduced by 90 per cent against a SAP 75 building. The energy efficiency drive does not end there: south facing elevations capitalise on solar gain, with windows and their frames accounting for nearly 100 per cent of the wall area, and sunspaces embracing two floors on the south elevation add to the quality of the accommodation (Figures 18.10 and 18.11).
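The value of the airtightness target quoted above can be illustrated with the usual ventilation-loss estimate Q = 0.33·n·V·ΔT, where 0.33 Wh/m3K is the volumetric heat capacity of air. The sketch below also uses the common "n50/20" rule of thumb to convert a 50 Pa pressurisation result into an average infiltration rate; that conversion, the dwelling volume and the comparison n50 figure are all assumptions for illustration.

```python
def infiltration_loss_w(n50, volume_m3, delta_t):
    """Rough background infiltration heat loss (W).
    Applies the n50/20 rule of thumb to estimate the average air
    change rate from a 50 Pa test result, then 0.33 Wh/m3K for the
    volumetric heat capacity of air."""
    n_avg = n50 / 20.0              # crude convention, not a physical law
    return 0.33 * n_avg * volume_m3 * delta_t

VOLUME = 250.0   # m3, a hypothetical dwelling
DT = 20.0        # K inside/outside difference

leaky = infiltration_loss_w(10.0, VOLUME, DT)   # assumed older-stock n50
bedzed = infiltration_loss_w(2.0, VOLUME, DT)   # the 2 ac/hr at 50 Pa above

print(f"leaky: {leaky:.0f} W, BedZED-standard: {bedzed:.0f} W")
# prints: leaky: 825 W, BedZED-standard: 165 W
```

On these assumptions the tight envelope cuts the continuous leakage loss by a factor of five, which is consistent with the text's point that air leakage can account for a large share of the heat lost from conventional construction.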
BedZED aims to reduce domestic water consumption by 33 per cent. This is to be achieved by the use of water-saving toilets, showers, dishwashers and washing machines. Showers that rely on gravity replace baths in single bedroom flats. Taps are fitted with flow restrictors. Toilets normally use 9 litres per flush; regulations now stipulate a 7.5 litre maximum. Here 3.5 litre dual flush toilets are provided, producing an estimated saving of 55 000 litres per household per year. As the scheme uses metered water, it is expected that these measures will save a household £48 per year. On average, 18 per cent of a household's water requirements will be met by rainwater stored in large tanks integrated into the foundations. Foulwater is treated in a sewage treatment plant housed in a greenhouse. It is a biologically based system which uses nutrients in sewage sludge as food for plants. The output from the plant is of a standard equivalent to rainwater and therefore can supplement the stored rainwater to be used to flush toilets. Household waste normally destined for landfill will be reduced by 80 per cent compared with the average home.

Figure 18.9 Masonry wall construction

Ventilation becomes an important issue as better levels of air tightness are achieved. In this case the design team opted for passive natural ventilation with heat recovery driven by roof cowls. A vane mounted on the cowls ensures that they rotate so that incoming air always faces upwind with exhaust air downwind. The heat recovery element captures up to 70 per cent of the heat from the exhaust air. Overall energy demand should be reduced by 60 per cent. Until the 2002 revision of the Regulations, dwellings were required to achieve around SAP 75; according to the 1998 version of SAP ratings, BedZED achieves 150.

Figure 18.10 [section diagram annotations: windows triple glazed, 1 W/m2K; airtightness 2 ac/hr at 50 Pa; sun space double glazed to room and to outside, in winter storing passive heat gains until needed; minimum overshading by adjacent buildings; north facing windows give good daylight with minimum solar heat gain; extensive south facing glazing]

Figure 18.11 South elevation with PVs integrated into the glazing

The energy package
The principal energy source for the development is a combined heat and power unit which generates 130 kW of electric power. This is sufficient for the power needs of the scheme. It is fuelled by a mixture of hydrogen, carbon monoxide and methane produced by the on-site gasification of wood chips, which are the waste product from nearby managed woodlands. A combustion engine generates the heat and power, producing 350 000 kWh of electricity per year. The plant also meets the scheme's space heating and domestic hot water requirements via a district heating system served by insulated pipes. The plant requires 1100 tonnes of wood chips per year, which translates to two lorry loads per week. The waste would otherwise go to landfill. Across London 51 000 tonnes of tree surgery waste is available for gasification, and in the future rapid rotation willow coppicing from the adjacent ecology park will supplement the supply of woodland waste. It is worth restating that this is virtually a carbon neutral route to energy since carbon taken up in growth is returned to the atmosphere. Excess electricity is sold to the grid, whilst any shortfall in demand is met by the grid's green tariff electricity.

There is a further chapter to the energy story. How the decision was made to dedicate the PVs to their present role is worth recording. Originally the idea was to use PVs to provide for the electricity needs of the buildings, with evacuated tube solar collectors providing the heating. It was calculated that 777 m2 of high efficiency monocrystalline PVs would provide a peak output of 109 kW. It turned out that this arrangement would involve a 70 year payback timescale. If the electricity were instead to be used to displace the use of fossil fuels in vehicles, taking into account their high taxation burden, the payback time would be about 13 years – and that is without factoring in the potential avoided cost of pollution. So the purpose of the PVs is to provide a battery charging facility for electric vehicles, sufficient for the energy needs of 40 light electric vehicles covering 8500 km per year. They are sited on southerly facing roofs, and Figure 18.11 illustrates the inclusion of PVs in the south glazed elevations of the scheme.

The aim is that the 40 vehicles would provide a pool of cars to be hired by the hour by residents and commercial tenants. Other car pool schemes have indicated that hiring a pool car to cover up to 13 000 km a year could save around £1500 in motoring costs. It has to be remembered that, in a project like BedZED, the energy used by a conventional car could greatly exceed that used in the dwelling. As a yardstick, a family car travelling 12 000 miles (19 000 km) per year produces almost as much carbon dioxide (CO2) as a family of four living in a typical modern home. The co-developers Peabody and Bioregional agreed, as part of the terms of the planning consent, to enter into a Green Travel Plan, which meant a commitment to minimise the residents' environmental impact from travel. On-site work and recreational facilities, together with the electric vehicle pool of 'Zedcars', would more than satisfy that commitment. With congestion charges due to be levied on vehicles using streets in other major cities besides London, the exemption of electric vehicles will provide an even greater incentive to adopt this technology.

This development has come about because the right people were able to come together in the right place at the right time. The idea came from Bioregional Development Group, an environmental organisation based in Sutton, who secured Peabody as the developer. Peabody is one of the most enlightened housing associations in Britain. Bill Dunster was engaged on the strength of Hope House, which he designed as an ecologically sound living/working environment and which served as a prototype for BedZED. Chris Twinn of Ove Arup and Partners worked with Bill Dunster when the latter was with Michael Hopkins and Partners, so he was a natural choice as adviser on the physics and services of BedZED.
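The contrast between the 70 year and 13 year paybacks described above comes down entirely to the value of each displaced kilowatt-hour. A toy comparison: the capital cost, annual output and per-kWh values below are hypothetical, chosen only to reproduce the shape of the argument, not figures from the BedZED accounts.

```python
def simple_payback_years(capital_cost, annual_kwh, value_per_kwh):
    """Undiscounted simple payback: capital cost / annual saving."""
    return capital_cost / (annual_kwh * value_per_kwh)

CAPITAL = 450_000.0      # GBP, hypothetical cost of a ~109 kWp array
ANNUAL_KWH = 87_000.0    # hypothetical annual electricity output

# The same kWh valued as displaced grid electricity versus displaced
# (heavily taxed) road fuel - hypothetical unit values:
as_grid = simple_payback_years(CAPITAL, ANNUAL_KWH, 0.07)
as_fuel = simple_payback_years(CAPITAL, ANNUAL_KWH, 0.40)

print(f"offsetting grid power: ~{as_grid:.0f} years")  # ~74 years
print(f"displacing road fuel:  ~{as_fuel:.0f} years")  # ~13 years
```

The point is structural rather than numerical: because road fuel carries a much higher cost per unit of useful energy than grid electricity, dedicating the PV output to vehicle charging multiplies the value of every generated kWh and collapses the payback period.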
The project happened due to a fortuitous conjunction of people committed to the principles of sustainable development. In future, developments of this nature must not rely on the chance collision of the brightest stars in the environmental firmament. A diagram produced by Arup summarises the ecological inventory of the project (Figure 18.13).

Figure 18.12 Wood chip gasification plant within the development (courtesy of ARUP and BRE) [schematic: woodchip and charcoal drying feeding the gasifier; multi-stage cleaning of the wood-gas; bio-fuelled CHP engine with alternator and heat recovery; auto-disconnect unit; import/export meters; flue and grid connection]

Figure 18.13 The ecological inventory of BedZED (courtesy of ARUP and BRE) [diagram: wind driven ventilation with heat recovery; rainwater collection; PV to charge electric cars; IT wired; low flush WC with rainwater store; septic tank; low-E lighting and appliances; biofuel CHP electricity; foul water treatment; hot water]

For a more detailed description of this project, refer to 'General Information Report 89', BedZED – Beddington Zero Energy Development, Sutton, published by BRECSU at BRE (e-mail: brecsuenq@bre.co.uk).

Beaufort Court renewable energy centre: zero emissions building
Studio E Architects
Where possible, the reuse of existing buildings is the best way to meet the sustainability agenda. An excellent example of this strategy is the headquarters of an energy company near London. The description of the development by David Lloyd Jones of Studio E Architects is quoted at length as being the most appropriate explanation of the design strategy.
Solar design aspects of the renewable energy centre and interim findings
David Lloyd Jones, Studio E Architects

The Renewable Energy Centre at Kings Langley in the UK is the new headquarters and visitors' centre for Renewable Energy Systems Ltd (RES), a company whose business is developing wind farms on a global basis. The original buildings on the site housed chickens to provide eggs for the nearby Ovaltine malt drink plant. These buildings, derelict for 10 years, have now been converted and extended to provide for the office and visitors' centre accommodation. No attempt was made to replicate the arts and crafts style of the original buildings in the new building works. The additions and replacements are expressed in a clean, modern, albeit sympathetic, idiom reflecting the contemporary concerns of Renewable Energy Systems and the leading edge energy technologies deployed over the site and concealed within the buildings. Its location adjacent to one of Europe's busiest motorways brings sustainability in action closer to the millions of people using the road.

The project brief was the conversion and extension of the former Ovaltine egg farm to provide 2665 m2 of headquarters office accommodation for RES. RES requested that additional facilities be provided for visitors and parties who might wish to see and learn about the building and its energy systems. This was to be carried out using, so far as economically practical, a range of renewable energy measures and employing 'best practice' sustainable strategies. RES was assisted in this objective by the contribution from the EC Framework 5 Programme: an EC Framework 5 grant contributed to the cost of a hybrid PV thermal array and a seasonal heat store. This funding was conditional on the adoption of a radically innovative approach to resolving sustainable issues and the involvement of a pan-European design and development team, and it was on the basis of this innovative content that the grant was secured. The project was completed in December 2003, and the energy systems, weather and internal comfort are being monitored over a 2 year period.

Design principles
The design was based on the comprehensive application of passive and active solar measures and is believed to be the first commercial net zero carbon dioxide emissions building in the UK. The Renewable Energy Centre is the first commercially developed building to be carbon neutral and entirely self-sufficient in energy. Indeed, the various integrated renewable energy systems will, over any year, generate a surplus. This will be fed into the electricity grid for the use of the community.

Accordingly, the design principles upon which the development is based were to:
● provide a fully operational head office which meets the commercial needs and conditions of the property market;
● deliver a building whose energy consumption is provided entirely from on-site renewable energy sources;
● deliver a building that minimises energy consumption and the use of scarce resources and that contributes positively to local economic and community needs;
● integrate seamlessly the social, technical and aesthetic aspects of the project;
● and, in addition, provide exhibition and conference facilities for the use of RES and visitors to the building.

The site layout
The triangular site comprises 7.5 ha of farmland located in the metropolitan green belt. The boundary of the site is formed, to the south, by the M25 orbital motorway; to the north east, by the mainline London to Glasgow railway; and, to the west, by a private road. The egg farm is set out on an axis which, if extended northwards, aligns with the Ovaltine factory – the destination of all the eggs laid on the old farm. The layout of the various elements comprising the development is shown on the site plan.

The new buildings
In order to provide for the new uses, the existing buildings had to be radically altered and extended. However, the local planning authority required that the views of the outside of the building must remain largely unchanged. Both the 'coach house' and 'horseshoe' buildings had to be converted for modern office use. The conversion of the coach house was relatively straightforward: the building fabric was upgraded to meet contemporary office use and the courtyard was enclosed by inserting a new steel structure. The conversion of the horseshoe was more complex. The construction between the two towers, except for the timber roof structure, was entirely demolished; the ground floor was lowered, the upper level floor and the roof reinforced, and the outer external wall rebuilt. The ground floor was extended into the courtyard by 5 m, and a new single-storey link, incorporating the main entrance and the exhibition, conference, meeting, catering and main plant spaces, was placed between, and connecting, the two wings of the horseshoe. Turf was planted on the roof of the new office space.

A third entirely new building was introduced close to the northern perimeter of the site. So as not to intrude in the landscape, this building was partly sunk into the ground and the excavated earth banked up against the north wall. This building provides storage for the harvested biomass crop. Its roof comprises the hybrid photovoltaic/thermal array.

The energy strategy
It is intended that all energy used at the Renewable Energy Centre be provided by renewable sources located on the site. The project demonstrates the integration of passive solar techniques with a range of inter-related renewable energy systems, covering the space heating and the associated mechanical and electrical systems (Figure 18.14).

Figure 18.14 Energy strategy [key: 225 kW wind turbine; hybrid PVT array; crop store; PV invertors; 1500 m3 water heat sink; biomass crop (miscanthus); renewable energy centre; crop shredder; biomass boilers and gas fired backup boilers; electrical import/export meters; 80 m deep borehole in chalk aquifer; air handling installations; fresh air; exhaust air; irrigation]

In order to minimise the need for energy, a judicious combination of active systems (mechanical ventilation, artificial cooling, heating and lighting, building management systems) and passive systems (solar heating, solar control, solar shading, a well-insulated building envelope incorporating thermal mass) was developed. The energy provision derives from:
● optimising the use of natural ventilation;
● a hybrid photovoltaic/thermal array, the heat of which is passed to a seasonal heat store, comprising a 1100 m2 body of water concealed beneath the ground;
● a biomass crop (miscanthus or 'elephant grass') cultivated on the surrounding land and harvested annually;
● ground water cooling pumped from an 80 m deep bore hole to cool the buildings in summer (and then passed out of the building to irrigate the biomass crop);
● possibly, in a forthcoming adaptation, combined heat and power (CHP).

The outward facing facades had to be sealed. This, together with the relatively high levels of heat generated by modern office use, requires the building to be artificially cooled in summer months. The cooling source is water drawn from aquifers located in the chalk below the building. This strategy avoids the heavy energy consumption and potential polluting effects of refrigeration.

Clean and green
Bringing back to life a derelict building rather than building new is a considerable benefit in terms of land utilisation, use of resources and improving the amenity of the area. Further aims were the minimisation of resource depletion, low use of water, and the use of materials that derive from the minimum use of energy in their manufacture and transport (low embodied energy materials).
natural ventilation and lighting. with the PVT installation. all the electrical power required by the building and a significant surplus fed into the national grid. future biomass plant which shreds the miscanthus and burns it to provide heating for the building (and. recycled materials. The buildings are exposed to considerable external noise from passing trains to the west and the motorway to the south. low air infiltration. used to assist heating of the buildings in winter (Figure 18. The construction work was undertaken on the basis of minimising waste and using materials and components with low embodied energy from readily available resources. daylight. high insulation. a hybrid photovoltaic/thermal (PVT) array providing both electricity and hot water installed as the roof to a biomass crop store. To cut out the disturbance from noise inside the buildings. dried and stored in the earth-sheltered space beneath the PVT array. a 225 kW wind turbine supplying. either direct or via the seasonal ground heat store. cooling the air within it. Windows can be opened in facades and roofs facing away. the motorway and the railway. The cool water is used to drop the temperature of air being fed into the building and/or is circulated through convectors within the office space. Heat is supplied from the biomass boiler (or gas boiler until such time the biomass plant is installed) and from the PVT array.15 PVT/heat store/space heating FRESH AIR BUILDING plant normally used for air conditioning. Electricity is generated from the PVT array and the wind turbine. to ventilate the building in 230 . Hot water from these sources is used in a similar way as the chilled water for cooling.ARCHITECTURE IN A CLIMATE OF CHANGE DRAIN BACK TANK COMBINED PV + SOLAR THERMAL ARRAY HEAT EXCHANGER HEAT STORE EXHAUST AIR Figure 18. or sheltered from. 
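The cooling route described above — aquifer water at around 12°C tempering the supply air — can be sized with a simple sensible-heat balance. The sketch below is illustrative only: the airflow and air temperatures are assumed for the example, not taken from the project; the water specific heat is the 4.2 kJ/kg°C figure used later in the chapter.

```python
# Sensible-heat sketch: cooling supply air with ground water.
# Airflow and temperatures below are illustrative assumptions, not project data.

RHO_AIR = 1.2    # kg/m^3, approximate density of air
CP_AIR = 1.005   # kJ/(kg K), specific heat of air
CP_WATER = 4.2   # kJ/(kg K), specific heat of water (figure quoted in the chapter)

def air_cooling_load_kw(flow_m3_s, t_in_c, t_out_c):
    """Heat removed in cooling an air stream from t_in_c to t_out_c (kW)."""
    return RHO_AIR * flow_m3_s * CP_AIR * (t_in_c - t_out_c)

def water_flow_l_s(load_kw, t_supply_c, t_return_c):
    """Ground-water flow (litres/s) needed to absorb the load as it warms up."""
    return load_kw / (CP_WATER * (t_return_c - t_supply_c))

# Example: 2 m^3/s of supply air cooled from 26 to 18 deg C,
# with aquifer water warming from 12 to 17 deg C.
load = air_cooling_load_kw(2.0, 26.0, 18.0)
water = water_flow_l_s(load, 12.0, 17.0)
print(f"cooling load ~{load:.1f} kW, ground-water flow ~{water:.2f} L/s")
```

At these assumed figures the load is about 19 kW, met by under a litre per second of well water — consistent with the idea that a single borehole can displace conventional refrigeration plant.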
The renewable energy sources

Wind turbine
The 225 kW wind turbine has a hub height of 36 m and a rotor diameter of 29 m, and is a Vestas V29 model previously in operation in the Netherlands. It is expected to generate 250 MWh annually, which is greater than the anticipated building consumption, and excess power (equivalent to the needs of around 40 homes) will be exported to the grid. The turbine is connected to the buildings' electrical distribution network and to the national grid.

About 5 ha of the 7.5 ha site are given over to miscanthus cultivation. The remainder of the land is planted with indigenous species of trees, shrubs and grasses, and wildlife is encouraged by the re-creation of natural habitats. In addition there is a car park and a 5-a-side football pitch. RES actively encourages staff to use public transport, bicycles and car sharing for travel between home and office.

The building is well insulated and sealed. Exposed windows are shaded from the sun by fixed glass or aluminium screens and by deciduous tree planting, thereby reducing unwanted solar gains and the need for cooling.

A building management system (BMS) controls and optimises all the energy systems, including opening and closing the roof lights. It also records all monitored results from the various energy systems before passing the results to a site in Denmark for uploading onto the website. Predicted energy use and energy supply is shown in the table below; the current monitoring programme will show whether these predictions are borne out in reality.

Estimated energy balance for the site:

                                                      Electrical    Space heating
Building annual loads (2500 m² building gross area)   85 MWh        115 MWh
PV/T direct contribution                              3.2 MWh*      15 MWh
Heat collected into storage                           –             24 MWh
Pumping load/heat lost from storage                   4.7 MWh       12 MWh
Wind turbine                                          250 MWh       –
Miscanthus: peak expected production (60 odt/year)    –             160 MWh
Net contribution                                      248.5 MWh     187 MWh
Potential electrical export                           133.7 MWh     –
Potential surplus miscanthus for heat export          –             102 MWh

* With 48 m² of PV.

Biomass
The buildings' heating needs will primarily be met by a biomass boiler fuelled by the energy crop: miscanthus or 'elephant grass', which is coppiced on short rotation, 5 hectares of which have been planted on the site. The crop is harvested annually in the late winter with conventional harvesting equipment and stored as bales until needed. The field is expected to yield 60 oven-dried-tonnes per year with a calorific value of 17 GJ/tonne. The bales are shredded before being fed into the biomass boiler by a mechanical screw auger. The 100 kW biomass boiler is provided by Talbott's Heating. It is 80 to 85 per cent efficient and can modulate down to 25 per cent of full load. The emissions from the boiler comply with the Clean Air Act. The boiler is expected to be installed and operating in 2004–2005.

Ground water cooling
Ground water is used to cool the buildings during the summer. Water is extracted from the local aquifer at 12°C via a 75 m deep borehole. First, it is used to cool and dehumidify the incoming air to the buildings in the air handling units. Finally, the water is used to irrigate the energy crop.

PVT array
The 170 m² solar array comprises 54 m² of hybrid PV/thermal (PVT) panels and 116 m² of solar thermal panels, incorporating Shell Solar PV elements and Zen Solar thermal elements. The panels have been developed by ECN in the Netherlands and produce electricity and hot water (Figure 18.16). Each PVT panel combines photovoltaic cells, which convert light into electricity, with a copper heat exchanger on the back to capture the remaining solar energy. The solar thermal panels are identical to the PVT panels, but without the photovoltaic element.

Figure 18.16 PV/thermal panels (Courtesy of Studio E Architects): water in and out via an aluminium glazing system; 3 mm toughened glass; photovoltaic cells on absorber sheets; electrical cable to invertors; insulation and heat transfer pipework

Seasonal underground heat store
The underground heat store is an 1100 m³ body of water that stores the heat generated by the PVT and solar thermal panels for use in the buildings during the colder months. The high specific heat capacity of water (4.2 kJ/kg°C) makes it a good choice for storing heat. The top of the store is insulated with a floating lid of 500 mm expanded polystyrene. It is hinged around the perimeter to allow for the expansion and contraction of the water, and the design also incorporates a suspension system to support the roof should the water level reduce. The sloping sides are uninsulated. As long as the ground around the store is kept dry, it will act as an insulator and additional thermal mass.

During the summer there will be little or no demand for heat in the building, so the heat generated by the PVT array will be stored in the heat store, and the temperature of the water will gradually rise over the summer and early autumn. In the autumn some of the solar heat generated will be used directly in the buildings and the excess will be added to the heat store. During the winter the solar heat generated will be less than the buildings' heat load, and heat will be extracted from the heat store to heat the incoming air to the building: the relatively low-grade heat from the store can be used to preheat the incoming air, as the outside air will be at a lower temperature than the water. The temperature of the water in the store will drop as the heat is extracted, increasing the capacity of the store. Some heat will also be lost to the surroundings; this is estimated to be about 50 per cent of the total heat put into the store over the summer.

Figure 18.17 General views of Beaufort Court: view from southwest; PVT array; heat store prior to lid being installed
Figure 18.18 Forecourt, Beaufort Court
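The figures above allow a quick order-of-magnitude check. The conversions below use only numbers quoted in the chapter (60 odt/year at 17 GJ/tonne; an 1100 m³ store; 4.2 kJ/kg°C); the 20°C seasonal temperature swing and the flat 80 per cent boiler efficiency applied to the whole crop are assumptions, flagged in the comments.

```python
# Order-of-magnitude checks on the chapter's figures.
# The 20 C store temperature swing and applying 80% boiler efficiency
# to the whole crop are assumptions, not values from the text.

GJ_PER_MWH = 3.6

# Miscanthus: 60 oven-dried tonnes/year at 17 GJ/tonne
fuel_gj = 60 * 17                   # 1020 GJ of fuel energy per year
fuel_mwh = fuel_gj / GJ_PER_MWH     # ~283 MWh
heat_mwh = fuel_mwh * 0.80          # ~227 MWh as heat at ~80% boiler efficiency

# Heat store: 1100 m^3 of water, c = 4.2 kJ/(kg C)
mass_kg = 1100 * 1000               # ~1000 kg of water per m^3
kj_per_c = mass_kg * 4.2            # 4.62e6 kJ stored per degree C
mwh_per_c = kj_per_c / 3.6e6        # ~1.28 MWh per degree C
swing_mwh = mwh_per_c * 20          # ~26 MWh over an assumed 20 C swing

print(f"miscanthus: {fuel_mwh:.0f} MWh fuel, ~{heat_mwh:.0f} MWh as heat")
print(f"store: {mwh_per_c:.2f} MWh per deg C, ~{swing_mwh:.0f} MWh over 20 C")
```

The roughly 26 MWh storable over a 20°C swing is the same order as the heat the design expects to collect into storage each year, and the crop's fuel energy comfortably exceeds the space-heating loads quoted for the site.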
Integrated district environmental design
Chapter Nineteen

Demographic changes are creating a demand for new communities in developed countries. For example, in the UK there is a projected need for nearly 4 million new homes, mostly in the Southeast of England. One suggestion by the planner Peter Hall is that there could be three new cities around London, which he calls City of Kent, City of Anglia and City of Mercia. These will focus on existing towns and cities: Ashford, Cambridge, Peterborough, Wellingborough and Rugby. They would comprise clusters of new communities which would create the ideal opportunity to realise the goal of integrated environmental design at the district level.

One project that has gone some way down this route is in Switzerland at Plan-les-Ouates, a solar city near Geneva. The driving force behind the scheme was the proposal for legislation to reduce the use of non-renewable energy by means of a carbon tax, energy efficiency subsidies and the greater use of renewable energy sources. In anticipation of these measures this community constructed a major housing development consisting of nine apartment blocks which took the exploitation of solar radiation to the limit of current technology. The project was completed in 1996.

First, two thirds of the roofs are covered by black coated stainless steel solar collectors. These are about 30 per cent less efficient than the most advanced glazed solar panels but they are significantly cheaper and can rapidly be installed to form an integral part of the roof system. Second, the solar panels supply hot water to two storage tanks per block, each of 50 000 litres. They supply domestic hot water in summer and supplement space heating during spring and autumn. Third, the project employs a double flow ventilation system which provides all apartments with one to two air changes per hour with full heat recovery. The extract air is pumped into the car park, thus avoiding the need for additional ventilation. The fourth attribute of the scheme is the 'earth source preheater'. This comprises a 6 km grid of pipes under the car park. In winter, fresh air entering the ventilation system in the apartments first passes through the grid, warming the air by up to 10°C due to the fact that the earth temperature never falls below 10°C. In summer the system is reversed to provide space cooling.

Ecological City of Tomorrow, Malmo, Sweden
A European Commission demonstration project is almost completed in Malmo, Sweden, comprising a whole new district consisting of housing, offices, shops and other services. It is being built on a reclaimed industrial site by the Ribersborg beach and close to the historic centre of Malmo, and formed the centrepiece of an exhibition 'Bo01' held in June 2001 (Figure 19.1) under the direction of Professor Klas Tham. The project is the first phase of a 10 year programme to make the city of Malmo a model of sustainable regeneration. Its purpose is to demonstrate that solar housing can be viable in high density urban situations. It aims to be a zero net energy scheme, with its 11 GWh/year energy demand making no net contribution to carbon dioxide (CO2) emissions.

The objectives of the scheme are to:

● meet 100 per cent of energy needs from renewable sources by providing innovative energy generation plant and distribution systems;
● integrate appropriate technologies like solar, wind, heat pumps and aquifer storage to produce cost-effective clean heat and electricity;
● establish synergy between Malmo's existing electricity and district heating system and the local system;
● engage in holistic design procedures involving architects, services designers and builders from inception to completion.

The first phase of building will consist of 600 social housing units built to strict energy conservation standards and designed by some of Europe's leading architects. The district will include 800 homes in a mixture of detached, terraced and apartment dwellings. A canal, promenade, harbour, parks and covered walkways are incorporated into the development. Completed at the end of 2001, it forms an appropriate gateway to Sweden, situated as it is at the end of the spectacular Oresund bridge and tunnel complex linking the country with Denmark.

Figure 19.1 Model of the 'City of Tomorrow' and the integrated energy system

Energy strategy
A 2 MW wind turbine and 120 m² of PV cells connected to the grid will account for the area's electricity demand. They will also power a heat pump which will extract heat from underground aquifers and the sea to meet about 83 per cent of district heating requirements. The same aquifer system will store cold water to provide cooling in summer. About 15 per cent of the remaining heat demand will be met by 2000 m² of solar collectors, and biogas produced from local waste will meet the remaining 2 per cent of heat requirement. Over the year there should be a balance between the district's electricity production and consumption, hence the zero net energy label. By connecting the renewable generators to Malmo's existing distribution system, the project is assured of security of supply of electricity.

The aim is to keep energy consumption in buildings below 105 kWh/m² per year. All the buildings are designed to the highest energy efficiency standards, keeping space heating demand to a minimum. It is worth noting that the Swedes tend to regard 22°C as a minimum level for comfort, compared with 18–20°C in the UK. At this rate it should be possible for the inhabitants to maintain the comfort standards to which they have become accustomed. The performance of the houses will be monitored by a new method of measuring carbon dioxide output devised by energy expert Norbert Kaiser. The design of buildings and energy systems is integrated under a single management strategy, with the whole process being subject to stringent quality control.

Occupants will have the opportunity to adjust and monitor their energy consumption with the help of IT. All households will be connected to a broadband network equipping them for advanced communication functions such as voice activated systems as well as monitoring management systems. Information technology is used not only to regulate the different elements of the energy system but also to inform residents about their energy consumption and allow them a degree of control over their energy management and comfort.

A 'vacuum refuse chute' extracts organic waste from household refuse. Disposal hatches attached to each property lead to holding tanks, from which the waste makes its way to two docking stations at the edge of the site. From there it goes to the biogas digester, which treats organic waste from the district, converting it to fertiliser and producing biogas for heating and vehicle fuel. There is also a plant which extracts energy and nutrients from the sludge from a sewage works. All residents receive up-to-date information about waste separation and disposal. The target is to reduce unsorted waste by 80 per cent.

Transport is a key factor in any sustainability policy. The public transport system will be adapted accordingly, and the pool of cars within the development will include electric and gas powered vehicles. There will be charging points for electric vehicles and a station providing natural gas for vehicles. It is planned to embark on a programme of vehicles powered by environmentally friendly fuels, and the management vehicles will be electrically powered.

Towards the less unsustainable city
The ultimate challenge will be to transform existing towns and cities so that they become less of an 'ecological black hole'. The city is an epicentre of consumption, but it is also capable of being the highest visible manifestation of civilisation – 'civis', the city. Cities have powerful symbolic resonance, which means that there are considerable constraints on change. Over the next 50 years, barring catastrophes like sea level rise, the basic form and infrastructure of European cities will not change all that much. Even after the 1940s blitz which destroyed the heart of many great cities like Liverpool, the reconstruction process in most cases followed the routes of the original infrastructure. As much as anything this was because of services routes and the complexities of land ownership. As suggested at the beginning, energy prices are likely to rise steeply as demand increasingly outstrips supply. The most important priority is therefore for towns and cities to make drastic reductions in their demand for fossil-based energy, and in this respect there is a borough in the UK heading in the right direction.

Woking: a pace-setting local authority
Woking is south east of London, close to the M25 motorway, with a population of over 89 000. It can claim to have one of the most environmentally progressive local administrations in the UK, which has committed the authority to eight key themes under the headings:

● energy services;
● planning and regulation;
● transport;
● waste;
● procurement;
● education and promotion;
● management of natural habitats;
● adapting to climate change.

The council has subscribed to the Royal Commission on Environmental Pollution's targets of a 60 per cent reduction in emissions by 2050 and an 80 per cent reduction by 2100.

Energy services
It is in this sphere that the local council has been particularly far sighted, by recognising that energy has both immediate social and long-term global implications. On the one hand it seeks to eliminate fuel poverty and, on the other, drastically reduce CO2 emissions. In order to implement its energy strategy a council-owned company was formed, called Thameswey Limited, to act as an Energy and Environmental Services Company. In 1999 Thameswey Energy Limited was formed in partnership with a Danish energy service company, ESCO International ApS.
Its purpose is to build, finance and operate CHP stations of up to 5 MW capacity throughout the town and offer energy services to institutional, business and residential customers. The company has entered into an agreement with the borough to act as its contractor to provide combined heat and power to the borough's principal energy users in the town centre, including the council offices. The CHP stations achieve 80–90 per cent efficiency, compared with coal fired power stations at 25–35 per cent. This is because of the utilisation of heat from the engine and compact grids with minimum distribution losses. Surplus heat in summer is used to generate absorption cooling and dehumidification, and there is also provision to direct surplus electricity to sheltered housing accommodation (Figure 19.2).

Figure 19.2 Combined heat, cooling and power grid, Woking town centre: natural gas supplies a combined heat and power unit with back-up boilers and a thermal store; heat mains serve heating and hot water; heat fired absorption chillers convert hot water into chilled water (using a water/liquid salt refrigerant) for the air conditioning mains; a private electrical wire network serves the town centre buildings, with import/export to the public electricity grid and island generation in the event of a failure of the national grid

The most innovative enterprise by Thameswey Energy is the Woking Park project. It supports the heating and power needs of the pool in the park and leisure centre. The park complex includes a 200 kWe fuel cell, two 75 kWe CHP engines and a 9.11 kWp PV installation, together with plans to include an 836 kWe CHP reciprocating gas powered engine. These, together with heat fired absorption cooling and a thermal store, add up to a CHP capacity of 1.195 MWe. The fuel cell is of the phosphoric acid type, which uses hydrogen reformed from natural gas. It was officially commissioned in 2001 (Figure 19.3).

Figure 19.3 Phosphoric acid 200 kWe fuel cell, Woking Park

The extensive use of photovoltaics also features in the borough's strategy. In all, 117 sheltered housing tenants receive PV electricity, including at Prior's Croft (Figure 19.4). Brockhill, an 'extra care' sheltered housing scheme, was the first in the UK to use a combination of CHP and PVs to serve its energy needs. This development is also served by a small CHP unit producing 22 kWe and 50 kW of heat, backed up by a 650 kW boiler. In order to sidestep the problems of supplying small amounts of PV electricity to the grid at an uneconomic rate, the council has created a mini-distribution system of private wires enabling it to sell PV and CHP electricity direct to customers.

Figure 19.4 Prior's Croft PVs and inverters to convert DC power from the PVs to AC electricity

Planning and regulation
The main concerns are:

● land use;
● location;
● layout;
● landscape;
● sustainable construction.

The concept of 'environmental footprint' is a prime consideration in land use policy. This is particularly concerned with the CO2 emissions that are generated by the current use of the land. The aim is that, when land use is changed, the new use should represent an 80 per cent reduction in CO2 emissions. As regards location, the council operates a measure called Public Transport Accessibility Level Rating (PTAL), ranging from 1 to 7, with 7 being the most proximate to public transport. New development proposals are assessed according to where they feature on the scale; most new development in the borough scores near the top. Planning policy promotes housing layouts which maximise passive solar design, and a preference for terrace housing and flats which minimise heat loss through external walls. It is estimated that such measures reduce energy use by 20 per cent. In terms of landscaping, the planting of trees and shrubs is encouraged, with benefits that include a reduction in the heat island effect, solar shading in summer and protection from wind. Energy use in buildings is targeted by encouraging insulation standards above those required by the Building Regulations, installing community heating, building integrated renewable energy systems and requiring water conservation measures to be adopted.

The council has the most energy efficient public housing stock in the UK, with an average NHER rating of 8. The target for all the stock is NHER 9 or SAP 74, with the aim of limiting energy costs to 10–15 per cent of the income of those dependent on a state pension. This should eliminate fuel poverty in this sector. In the private sector the council had topped up government grants to provide full insulation measures to 3026 homes up to 2002. It aims to solve the fuel poverty problem in this sector, especially in rented accommodation. The council also operates an energy recycling fund which benefits from savings due to energy and water efficiency measures and the recycling/reuse of materials; sums from the fund are ploughed back into energy efficiency projects.

Waste
The council has adopted plans for a borough-wide zero waste strategy, ultimately to reduce the need for landfill disposal to 10 per cent of current use. It operates a two-bin domestic waste system with a division into dry goods and organic waste. The anaerobic digestion of organic waste provides gas for CHP engines, and compost. Thermal gasification of other waste provides the hydrogen for the fuel cell. Recycling also plays a major part in the waste strategy. Diverting waste from landfill could equate to a CO2 reduction of 100 000 tonnes.

Transport
Council promotional campaigns seek to raise awareness of the benefits of alternative fuel vehicles, at the same time encouraging local filling stations to provide liquid petroleum gas (LPG), compressed natural gas (CNG), liquid natural gas (LNG) and hydrogen. The council also intends to ensure that its own vehicles will be low carbon technology (i.e. less than 100 g/km of CO2 equivalent) by 2010–11, when such vehicles should be in volume production.

Procurement
Where possible the council obtains materials from local sources, reducing carbon miles, especially in terms of the reuse of materials in construction. It ensures that timber is obtained from sustainably managed forests, and it encourages its contractors also to adopt sustainable procurement policies.

The conclusion to be drawn from these case studies is that sustainable design is a holistic activity and demands an integrated approach. Reducing the demand for energy and generating clean energy are two sides of the same coin.
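The generating capacity quoted for Woking Park can be cross-checked against its plant list. The sketch below reads the source's '9.11 pkW' PV figure as a 9.11 kWp peak rating — an interpretation, flagged in the comment, since that is what makes the arithmetic close on the quoted total.

```python
# Cross-check: do the quoted Woking Park components sum to 1.195 MWe?
# '9.11 kWp' is our reading of the source's '9.11 pkW' PV figure.
components_kwe = {
    "836 kWe reciprocating gas engine": 836.0,
    "75 kWe CHP engine (no. 1)": 75.0,
    "75 kWe CHP engine (no. 2)": 75.0,
    "200 kWe phosphoric acid fuel cell": 200.0,
    "9.11 kWp photovoltaic array": 9.11,
}
total_mwe = sum(components_kwe.values()) / 1000.0
print(f"total installed capacity ~{total_mwe:.3f} MWe")  # matches the 1.195 MWe quoted
```

The sum lands exactly on the 1.195 MWe the text quotes, which incidentally confirms that the PV figure is a peak electrical rating rather than a typographical stray.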
Examples have been cited where buildings and transport are organically linked, with building integrated renewables providing power for electric cars. BedZED, Malmo and Woking are signposts to new and more sustainable and agreeable patterns of life.

An American perspective
Chapter Twenty

In environmental terms the United States is a paradox. On the Federal level it opposes the Kyoto Accord and refuses to acknowledge the spectre of the fossil fuel trap. On the other hand many states have impressive environmental policies, especially on the west coast. One of the great pioneers of the environment movement is Amory Lovins, who founded the Rocky Mountain Institute in Colorado in the 1980s. This was a ground-breaking organisation epitomised by its ultra-low energy buildings. The spin-off has been organisations such as 'Earthship Biotecture', which promotes a way of building that aims to realise the ultimate aspirations of sustainable architecture with its concept of 'earthships'.

In several respects the USA is a special case. First, compared with western Europe it has much greater extremes of climate, which presents a formidable challenge to environmental designers and perhaps calls for greater tolerance on the part of environmental devotees in Europe. Buildings which have to cope with a seasonal temperature range of over 60°C demand rather exceptional treatment, even to having a dispensation as regards air conditioning. Second, it enjoys cheap energy, which distorts the cost effectiveness of renewable energy and energy efficiency design measures. On the other hand, stand-alone energy generation is attractive where continuity of supply cannot be guaranteed. Not surprisingly, one of the driving forces behind environmental design is the 'bottom line'. A study conducted by Ian McHard of the University of Pennsylvania concluded that a 1 per cent increase in productivity is equivalent to eliminating the whole energy bill.
The research found that companies which were exploiting natural light, coupled with better lighting design and better thermal comfort, significantly raised worker productivity. This was demonstrated some years ago by the Lockheed Corporation with its $50 million engineering production centre accommodating 2600 workers. It has high ceilings (15 feet) and light shelves enabling natural light to penetrate to the core of the building. At the heart of the building is an atrium, except that they call it a 'lightrium' to get round the Defense Department's ban on atria! The building used only half the energy of buildings designed to the strictest codes in the country. As a result the energy measures paid for themselves in four years. However, the most striking benefit was that absenteeism fell by 15 per cent. This enabled the firm to win a contract, the profits of which paid for the whole building.

Another salutary tale comes from the Wal-Mart enterprise. It has a prodigious appetite for space, amounting to one new store per working day at about 100 000 square feet each time. In one store they were persuaded to daylight half of the sales area. After only two months it was found that sales were significantly higher in the daylit half, leading the company to make a considerable investment in research into daylighting.

In Bozeman, Montana, the state university is investing in a project for a naturally lit, passively ventilated and passively cooled laboratory. It is calculated that the Montana State Green Laboratory Complex will need heating for only six days in the year, which will be supplied by radiant coils. It has to be remembered that Montana experiences extremes of cold. Like many other states, Montana has what is called an 'extraction economy', which means that it not only consumes its natural capital assets at a great rate, it also produces considerable waste.
For a start, the purpose in this project is to source all materials from within a 300 mile radius which, for a country the size of the USA, is ambitious. There is also an emphasis on recycling the materials after demolition. As regards the construction, a high strength, lightweight concrete has been developed using fly-ash aggregate, similar to the Conference Centre at the Earth Centre in the UK. But perhaps the most interesting innovation is the glulam beams made from salvaged timber. The novelty lies in the adhesive, which is made from bacteria and called Adheron. Its secret lies in the fact that it can be decomposed: at the end of a product's life it can be placed in an autoclave and an enzymatic key introduced. This unlocks the bond and the timber is available to be reused.

One of the most energy efficient buildings in the USA is the Zion National Park Visitor Center, Springdale, Utah, by architect James Crockett (Figure 20.1). It uses 80 per cent less energy than a 'code compliant' building and cost less to build than a conventional equivalent. This is due in part to the omission of heavy services. The building is naturally lit, with substantial eaves shading the south facing windows from summer sun. When there is inadequate daylight the BEMS compensates with energy efficient fluorescent and high intensity discharge lamps. The building is also naturally ventilated. Cooling towers supplement cross ventilation (Figure 20.2). They contain pads that soak up pumped water, providing evaporative cooling. The cool dense air exits through large openings at the base of the tower. The National Park is in a remote area of southern Utah where the electricity grid is somewhat unreliable. Roof mounted PVs linked to battery storage provide the uninterrupted supply required by the National Park Service.
The PVs meet 30 per cent of the electricity demand, and excess electricity is exported to the grid on the basis of net metering. This system of metering could transform the rate of uptake of PVs in the UK. The design features a Trombe wall enabling a masonry wall behind to store heat. In winter the temperature in the cavity can reach 38 C (100 F), which is gradually radiated into the building.

Figure 20.1 Zion National Park Visitor Center showing cooling towers and Trombe wall
Figure 20.2 Zion National Park Visitor Center, cooling towers

Glenwood Park, Atlanta, Georgia

Due to be completed in 2006, Glenwood Park (Figure 20.3) aims to be 'a model of environmentally conscious urbanism' according to its developer Charles Brewer. The site is two miles east of downtown Atlanta and the developers have managed to 'civilise' a state highway, converting it to the development's main street with traffic calming measures and lined with trees and shops. The object is to be pedestrian friendly, 'to create a sociable, walkable community where there's less need for driving' (Brewer). The street layout will echo traditional European towns with narrower widths and tighter corners than is the norm in US neighbourhoods. Dedicated cycle lanes will further reduce the need for car travel. There will also be direct access to the local rail services, which adds up to an estimated reduction in car travel of 1.6 million miles compared with average regional driving patterns, the equivalent of removing 100 cars from the roads.

Figure 20.3 Glenwood Park neighbourhood as proposed

The site is a typical brownfield location, which involved demolishing and recycling 40 000 yd3 of site concrete as well as recycling 700 000 lb of granite blocks for use in the parks. Over the past 3 years the company has invested $8 million in buying the 28 acre site, remediation and creating the infrastructure of roads and sewers. An important aspect of the project is that it will be totally funded by Brewer's development company Green Street Properties, formed for this project. This means that there will not have to be compromises on standards to satisfy lending institutions.

It will have a mix of individual houses, 'townhouses', apartments, stores and parks interspersed with 1000 trees to moderate the heat island effect. The residential component of the township will comprise 60 single family houses, up to 130 townhouses and 200 apartments. The township will feature up to 70 000 ft2 of shops and offices serving residents and nearby communities. An existing brick building will be upgraded to supply 22 000 ft2 of office condominiums over covered parking. Housebuilders will be required to meet the high energy efficient design standards of the EarthCraft House programme, which includes not only levels of energy conservation but also water conservation and methods to reduce soil erosion. An innovative stormwater system will reduce runoff by nearly 70 per cent. The landscaping will be irrigated by ground water rather than the mains supply. In the context of the norms of American urbanism Glenwood Park is a considerable step in the right direction (www.glenwoodpark.com).

It is a paradox that, whilst the Federal government seems to be dragging its feet on climate change issues, individual states are leading players on a world stage in converting to renewable energy. A signpost to the future is being offered by the state of California. Its Environmental Protection Agency is proposing that one million homes will be equipped with PVs over the next 10 years, in line with a pledge by the state governor. The ultimate aim is that 1.2 million new and existing homes will be producing solar electricity by 2017, avoiding 50 m tonnes of carbon dioxide emissions. It is estimated that the solar installations would be equivalent to the output from 36 gas fired 75 MW power plants. State subsidies would ensure that householders would make a net gain from exporting to the grid. The EPA considers that the incentives will be sufficient to get PVs on 40 per cent of new homes by 2010 and 50 per cent by 2013.

Chapter Twenty One
Emergent technologies and future prospects

There is little doubt that global warming will trigger changes that will fundamentally change the practice of architecture. There are some predictions we can make with reasonable confidence and consider the implications for architects and related professions. The best we can do is identify the developing technologies and socio-economic trends that are clearly discernible and extrapolate from them.

It is inevitable that, as heat is built up within the biosphere, this results in the release of energy which powers more extreme climate activity. Already the prediction that global warming will lead to greater intensity and frequency of storms is being realised. We have noted examples of the predicted rate of return of the 1 in 100 year storm as currently defined; Newhaven headed the list with a return rate of 1 in 3 years by 2030.

Another probability is that extreme heat episodes will occur more frequently. Much greater extremes of temperature will have major design implications. This needs to be considered when incorporating passive solar design and the design of atria and conservatories. At present natural ventilation and the omission of air conditioning is justified on the grounds that cooling is only required for a short period in a year. This may change, and mechanical ventilation incorporating some form of cooling will become a necessity, such as aquifer or ground source cooling.

At the same time there is a possibility that winters will become more severe due to the weakening or rerouting of the Gulf Stream. The possibility of colder winters adds urgency to the need to tackle the problem of the unacceptable numbers of unfit homes in the UK, as outlined in Chapter 10. As fossil fuel prices rise, this will increase the appeal of building integrated renewables plus active solar heating and seasonal heat storage.

Rainfall patterns will change. In the north rainfall amounts will rise, increasing the risk of flash floods as rivers rise and the surrounding land is saturated. For example, devastating floods hit Boscastle in Cornwall in 2004 with 75 mm (3 inches) of rainfall in 15 minutes. In the south it is anticipated there will be much less rainfall and frequent drought conditions. This will increase the pressure for water conservation and the harvesting and purification of both rainwater and grey water for use other than for human consumption. The design of substructures and foundations will need to take account of progressive drying out of clay subsoils.

It is inevitable that sea levels will rise. The least we can expect is a rise of 1 metre over the next century, due mainly to thermal expansion. The predictions of rising levels are becoming more alarmist, with the doomsday scenario of a 110 metre rise if Antarctica melts (Sir David King, Chapter 2). The more immediate threat lies in storm surges, since a small rise in sea level greatly amplifies the impact of a storm surge. Add to this the fact that the predicted intense low pressure systems can cause the sea level to rise locally by over half a metre, and you have the recipe for serious sea incursion. This will have an impact not only on where we build but on the way we design buildings. Already there are compelling reasons not to develop below the 5 metre contour at or near the coasts. It is unlikely that the Environment Agency will have the resources to provide protection against the prospect of an increasing threat. In areas which will increasingly be threatened by flooding, one answer could be that new homes should have the living accommodation at least 2 metres above ground level, with garages, workshop and leisure activities below.

Energy for the future

If demand continues to rise at the present rate it is expected that most of the world's fossil fuel resources will run out around the middle of this century. On a global scale there is no real optimism about the capacity of current renewable technology to meet the energy needs of the next century, especially the exploding economies of the Far East. There are still those who put their faith in the commercial application of nuclear fusion, perhaps relying on the fact that an energy vacuum will radically alter the definition of 'commercial'. However, this goal still seems as elusive as ever. At the same time there is considerable anxiety about allowing the proliferation of nuclear technology.

More than ever before there is hope that someone will find the 'Rosetta Stone' that will redefine physics and lead to limitless cheap, clean energy. For example, on the threshold between science fiction and reality is the greatest potential energy source of all, namely the exploitation of antimatter. Physicists from Germany, Italy and Switzerland have managed to combine single antiprotons and positrons to create antiatoms of antihydrogen. Antimatter is destroyed when it comes into contact with normal matter, releasing massive amounts of energy (New Scientist, 'Antiworld flashes into view', 6 January 1996). It's a case of 'watch this space, but don't stand too close'.

The predicted rise in oil consumption considered in Chapter 2 is nothing compared with the anticipated demand for electricity.
One of the leading think-tanks in this sphere is the Electric Power Research Institute (EPRI) at Palo Alto in California. Its chief executive Kurt Yeager points out that 2 billion people are without electricity, about the same as the US in the 1920s. By 2050 this will have risen to 5 billion unless there are fundamental changes in the way we produce and distribute electricity. The challenge which the EPRI presents is to provide a minimum of 1000 kWh of electricity per year to everyone in the world by 2050, remembering that by then the estimated global population will be 9–10 billion. This would be the equivalent of tripling the world's generating capacity, which translates to building a 1000 MW power station every two days. Above all, the answer is to devote massive resources to the development of renewable energy technologies to harness a mere 1/15 000th of the energy of the sun. Solar energy promises unlimited free meals.

The revolution which will make this vision possible is the shift from mega power plants and creaking national grids to much smaller dispersed grids. The Royal Commission on Environmental Pollution sees 'a shift from very large, all-electricity plant towards smaller and more numerous combined heat and power plants' (Energy, the Changing Climate, 22nd Report, Stationery Office, 2000, p. 169). This ties in with the Washington Worldwatch Institute, which states that 'An electricity grid with many small generators is inherently more stable than a grid serviced by only a few large plants.' It will of course be the perfect way to exploit renewable energy.

Large overland grids are inefficient and expensive to maintain. They are subject to frequent failure and even at the best of times incur up to 10 per cent line losses. In the UK it is claimed the grid is well over its 30 year replacement date. The electricity distribution system will have to undergo major changes to cope with this development and with the expansion of smaller scale, intermittent renewable energy sources. So-called intelligent grids, which can receive as well as distribute electricity at every node, are already emerging. The transition towards a low-emission energy system would be greatly helped by the development of new means of storing energy on a large scale.

This in part will be driven by the digital revolution. At present the reliability is 99.9 per cent, which amounts to stoppages of a few minutes at a time but adding up to about 8 hours a year. This is fatal for microprocessors, which are upset by millisecond disturbances. As computers get faster they will require reliability of power in the order of 99.9999999 per cent. As Yeager puts it, we will need power delivery systems with switching operations that reach the speed of light. Another Yeager suggestion is that we create DC microgrids which, in turn, will, he says, 'eliminate much of the imperfections in the sine wave that creates the upsets for microprocessors – those millisecond or nanosecond disturbances' (Electrical Review, 10 October 2000, p. 27).

He is one of many who believe that the fuel cell is the power source of the future. Most significant of all is his prediction that in the future most of our electricity will come from millions of micro-turbines, solar panels and hydrogen powered fuel cells. As indicated earlier, a fuel cell is a reactor which combines hydrogen and oxygen to produce electricity, heat and water. In effect it is a continuously regenerating battery in which the chemical equivalent of combustion takes place to release energy.

Now there is on the horizon a system of producing electricity directly from a microbial fuel cell (MFC). Researchers at Pennsylvania State University have developed a device which serves the dual roles of generating electricity and, at the same time, performing the function of a sewage treatment plant. The route to electricity from sewage is normally via the digestion process, which produces biogas which, in turn, powers conventional generators. This is the first MFC designed specifically to process human waste. The MFC is a cylinder with a central cathode rod surrounded by a proton exchange membrane (PEM). A cluster of graphite anode rods surrounds the cathode. The bacteria in normal sewage treatment use enzymes to oxidise organic material and in the process release electrons. Bacteria become attached to the anodes, causing the organic waste to be broken down into electrons and protons. A charge separation occurs, with protons allowed to pass through the PEM to the cathode but not the electrons. These are diverted to power an external circuit. The circuit is completed to allow the protons and electrons to recombine at the cathode to produce pure water (Figure 21.1).

Figure 21.1 Microbial fuel cell (derived from New Scientist)

The holy grail of energy is the fuel cell that creates power with absolutely no polluting emissions. Producing hydrogen by the electrolyser method is fairly energy intensive and not carbon free unless it involves carbon neutral generation systems. That will happen when the electrolytic process to split water into oxygen and hydrogen is driven by zero carbon renewable energy systems. But, if you are already producing carbon-free electricity, why incur an efficiency drop by creating hydrogen? The obvious answer is that it is the way of ensuring continuity of supply. Most renewable systems are intermittent, and hydrogen supplies the so-called flywheel effect, smoothing out the peaks and troughs.

A less carbon intensive alternative being developed is the hydrogen generator fuel cell (HGFC). Its fuel is a mixture of ethanol and water. Ethanol (alcohol) is produced by the breakdown and fermentation of crop waste or fuel crops. The ethanol and water are mixed with air and then heated to 140 C, causing the mix to vaporise. The initial heating process could be assisted by evacuated tube solar thermal collectors (Figure 21.2). The gas then passes over a catalyst (rhodium and cerium oxide) which increases the temperature to 700 C and breaks down the ethanol into hydrogen, carbon monoxide and carbon dioxide. The gases pass to a cooling chamber, reducing the temperature to 400 C. Some of this heat is used to heat up the incoming mixture. The carbon dioxide (CO2) balances that absorbed by the biological waste during growth. This system could be scaled up to supplying grid-scale fuel cells by using a combination of agricultural waste and dedicated rapid rotation energy crops. It is also transportable, so has universal application. As the first prototype, there is considerable further research and development to be undertaken, and it may take another 20 years for it to achieve the scale of output which would make it commercially viable (New Scientist, 13 March 2004, p. 21).

Next generation solar cells

From the point of view of buildings, the most obvious renewable electricity source is the solar cell. High unit cost is the barrier which is preventing production achieving economies of scale. A normal silicon cell will absorb only about half the light that falls on it. The solar cells of the future are likely to use thin film technology. They should prove to be a fraction of the cost of silicon-based cells. They absorb light strongly in the red and green parts of the visible spectrum. The next step in the progression is to create cells which absorb light in the infra-red part of the spectrum. But here again things could be about to change.
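The EPRI challenge quoted earlier (tripling world generating capacity by 2050, said to mean one 1000 MW station every two days) can be sanity-checked with back-of-envelope arithmetic. The installed world capacity figure used below is an assumption for illustration, not a number from the text.

```python
# Back-of-envelope check on the EPRI claim. Assumed figure:
world_capacity_gw = 3400          # rough installed world capacity c. 2000, GW
added_gw = 2 * world_capacity_gw  # 'tripling' means adding twice the base
years = 50                        # 2000 to 2050

stations = added_gw / 1.0         # number of 1000 MW (1 GW) stations needed
days_per_station = years * 365 / stations
print(round(days_per_station, 1)) # 2.7 -- close to 'every two days'
```

On this assumed base the arithmetic lands within shouting distance of the quoted 'one station every two days', so the claim is of the right order of magnitude.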
Figure 21.2 Compact fuel cell hydrogen generator (courtesy of New Scientist)

A chance discovery in a laboratory could be the key to the ultimate breakthrough in solar cell technology. It resulted from etching silicon with a powerful laser hundreds of billions of times brighter than the sun. The result was a jet black structure of microscopic spikes which absorbs 97 per cent of visible light. Normal grey silicon is transparent to infra-red light. What really surprised the researchers was that the black silicon absorbs 97 per cent of the infra-red part of the spectrum and even extends into the microwave end of the spectrum. At a 97 per cent absorption rate the black silicon photovoltaic cell could represent a quantum leap in efficiency and therefore cost effectiveness (New Scientist, 13 January 2001).

In 2004 it was reported that a team in the Los Alamos National Laboratory in New Mexico had found a method of considerably increasing the efficiency of crystalline solar cells. Normally a single photon knocks one electron out of the crystal structure, creating a current. However, when a high energy photon hits a nanocrystal semiconductor the extra energy liberates two or even three electrons. A solar cell which employs this technology could convert 60 per cent of solar energy into electricity. The theoretical limit of conventional solar cells is 44 per cent (Physical Review Letters, vol. 92, 186601, reported in New Scientist, 1 May 2004). This is yet another application of nanotechnology (www.Azonano.com).

Paul Alivisatos of the University of California, Berkeley has made cheap plastic solar cells flexible enough to paint onto any surface. The prime target is cars, but where cars go buildings cannot be far behind (Autocar, 15 May 2004, p. 16). The task now is to raise the efficiency to ~10 per cent. Michael Gratzel of the University of Lausanne estimates that a 10 per cent conversion rate should be possible (New Scientist, 28 May 1997). However, being transparent, they will have an application for windows as well as roofs.

Others are looking at capturing energy using biological rather than electrochemical cells. This is literally mimicking natural photosynthesis. Photosynthesis is 'the most successful solar converting mechanism on Earth'.

Artificial photosynthesis

The dream of researchers in energy is to replicate the process of photosynthesis to produce hydrogen. In this process sunlight splits water into its constituents of oxygen, hydrogen ions and electrons. The difference between natural and artificial photosynthesis is that the latter is designed only to produce hydrogen. Up to now the way plants perform this miracle has been a mystery. However, a team at Imperial College, London, having identified a plant's photosynthetic machinery where water splitting occurs, may have made the crucial breakthrough, opening up the prospect of producing hydrogen on an industrial scale. This is called the 'catalytic core' and it provides the platform for research into artificial photosynthesis, called 'artificial chloroplasts'. Within the next decade it may be that scientists will have replicated nature's most ingenious process, paving the way for unlimited quantities of sustainable energy (see New Scientist, 'Flower Power', 1 May 2004, pp. 28–31).

Energy storage

Solar cell technology will achieve its ultimate breakthrough when it is coupled to an effective electricity storage system. At Baltimore University there is a project to produce an all-plastic battery. Already operational cells have been produced that have polymers as both anode and cathode, with a special solid plastic gel as the electrolyte. The quantum leap in storage technology should emerge by around 2020 with the development of high temperature superconductors. At present the highest temperature at which superconductivity has been achieved is minus 70 C, which represents a considerable advance towards the goal of room temperature superconductivity. According to the director of the Interdisciplinary Research Centre at Cambridge University, there is the prospect of storing massive amounts of electricity in a ring of superconducting cables. Electricity will run around the cables with no power loss until it is needed either for the grid or a stand-alone use. Over a 24 hour period the loss of energy would be negligible. These superconducting reservoirs will be ideal for storing power from intermittent renewable sources and will change the whole economic status of, for example, tidal energy (New Scientist, 26 April 1997, p. 19).

Flywheel technology

The problem with flywheels is that the G-forces can cause a catastrophic explosion. The future seems to point to materials like composites of carbon fibre and epoxy resin. Space technology is the driving force behind the development of superfast flywheels that can store a considerable quantity of kinetic energy to be converted into electricity. However, early in the 1990s research in Japan was developing a 3 m flywheel made from stainless steel levitating between powerful magnetic fields generated by superconducting ceramics. The flywheel is set in motion by electromagnetic conduction. Energy can be drawn off by permanent magnets in the disc inducing electric current in a coil. There is no friction, only air resistance, and if the system operates in a near vacuum then it would be capable of storing 10 000 watt hours of energy. Others are concentrating on small flywheels floating on magnetic bearings and capable of reaching 600 000 rpm, with an energy density of 250 Wh/kg. The outcome of this research remains to be seen. More conventional flywheels will prove an economic way of enabling solar energy to cover the diurnal cycle. Ultimately interseasonal storage may not be out of the question. For buildings the storage potential is enormous.

Hydrogen storage

The most promising safe storage technology so far has recently emerged from Japan and Hong Kong. It is nanofibre carbon. This consists of cylinders 0.4 nanometres (0.4 billionths of a metre) in diameter, which is just the right size to accommodate a hydrogen atom. A nanofibre pack has the capacity to store up to 70 per cent of hydrogen by weight, compared to 2–4 per cent in a metal hydride. It is claimed that a cartridge in a hydrogen car could fuel it for 5000 kilometres.

These are all systems which store chemical or kinetic energy to be converted into electricity. The inevitable conclusion is that fuel cell, solar cell and storage technologies could all be on the verge of commercial viability. They are being spurred on by the pressing need to bring down carbon dioxide (CO2) emissions, and by anxieties about the security of supply of fossil fuels. Fuel cells and titanium oxide solar cells are within a few years of presenting a serious challenge to conventional energy systems. Within the next 5 to 10 years there should be a quantum improvement in the efficiency of solar cells, coupled with a substantial reduction in unit cost. This has enormous implications for the design of buildings now. Large structures like sports stadia are particularly good candidates for embedded systems which provide heat and power. It might require a leap of faith to make the new Wembley independent of the grid, but that could be the shape of things to come.
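The flywheel storage figures quoted above can be related through the kinetic energy of a spinning mass. For an idealised thin-rim flywheel the stored energy per kilogram is half the rim speed squared, which shows what the quoted 250 Wh/kg and 600 000 rpm jointly imply. This is an idealisation, not data from the cited research.

```python
import math

# Idealised thin-rim flywheel: energy per kg e = 0.5 * v**2 (v = rim speed).
target_j_per_kg = 250.0 * 3600.0              # 250 Wh/kg in joules (1 Wh = 3600 J)

rim_speed = math.sqrt(2.0 * target_j_per_kg)  # m/s needed to reach that density
print(round(rim_speed))                       # 1342 m/s, roughly Mach 4

# Rotor size implied if the rim turns at the quoted 600 000 rpm
omega = 600_000 * 2.0 * math.pi / 60.0        # angular speed, rad/s
print(round(100.0 * rim_speed / omega, 1))    # 2.1 cm radius: tiny, very fast rotors
```

The rim speed explains both the choice of carbon fibre composites and the danger of catastrophic failure: the material is holding together a rim moving at several times the speed of sound.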
This all bolsters the case for incorporating renewable generation systems into buildings at the earliest stage of design. This will be further driven by developments in transport. Roofs and whole elevations will be able to accommodate solar cells, particularly when cells have been produced commercially which are transparent. A large stadium has intermittent use but also huge energy costs. It also has a massive roof area which could house acres of solar cells dedicated to producing hydrogen, easily sufficient to meet the surge of demand for events by day or night. There would be a backup system of natural gas to provide hydrogen in the unlikely event that solar panels failed to perform adequately. No more power failures during football matches. The end of the world of fossil fuels is at hand and beyond it is the much brighter prospect of the post-hydrocarbon society.

Advances in lighting

Another technical step change will occur in the sphere of lighting. Already the days of the compact fluorescent light are numbered. It will be made redundant by developments in light emitting 'photonic' materials. Solid state light emitting diodes (LEDs) are based on the quantum principle that an atom's electrons emit energy when they jump from a high energy level to a lower one. By adjusting the 'band gap' between the two levels, light of different colours can be emitted. LEDs are a by-product of semi-conductor technology and produce light at much lower watts per lumen than conventional systems. Whereas an incandescent lamp achieves an efficiency of 10–20 lumens per watt, LEDs are predicted to realise 300 lumens per watt. For example, an LED of less than one square centimetre would emit as much light as a 60 watt bulb using only 3 watts. They also have a size advantage. They are almost unbreakable and have a life expectancy of 100 000 hours. As lighting accounts for most electricity used in most offices, LEDs will offer significant savings in annual costs. The really big incentive is cost. It is estimated that, if existing light sources in the US were converted to LEDs, there would be no need for new power stations for 20 years, assuming the present annual rate of increase of consumption of 2.7 per cent. However, a note of caution: according to Scientific American, 'White LEDs are possible, but affordable ones powerful enough to illuminate a room remain at least a decade away' (February 2001).

The photonic revolution

The battle with traditional electronics is being fought on two fronts:
● information transmission
● information processing.

We are already into the era when information is transmitted by pulses of light rather than through a copper wire. Particles of light – photons – can carry many thousands of times more information than wires. Optical fibres work by trapping light within a solid rod of glass which is surrounded by a cladding material with different optical properties, that is, a lower refractive index than the core. The difference in refractive index causes light to be bounced off the outer casing with little loss of intensity over a considerable distance. Optical fibres can carry up to 25 trillion bits per second. Quite soon the whole world will be linked to an optical fibre superhighway based on photonic materials. At present optical fibres require electronic devices to convert information into optical pulses and, at the receiving end, to decode the information. The goal of current research is to create the photonic integrated circuit, that is, one that is free of electronic mediation. This will herald the next IT revolution, when rates of transmission of information will increase at an exponential rate. The limiting factor is the speed of light.

One consequence is that teleworking will become much more prevalent, enabling commercial enterprises to scale down their centralised operations. Already teleconferencing is reducing the need for costly gatherings of executives as companies spread their operations globally. It is probable this will lead to a considerable reduction in the need for high concentrations of office accommodation. This will offer much greater freedom to employees as regards their place of abode. Towns and cities will compete on the basis of amenity and quality of life, since people will have much greater freedom as to where to live.
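Returning to the lighting figures quoted above, the LED claims are internally consistent, as a line of arithmetic shows. The 15 lumens per watt used for the incandescent lamp is an assumed mid-point of the quoted 10–20 lumens per watt range.

```python
# Consistency check on the quoted lamp figures.
incandescent_lm_per_w = 15.0  # assumed mid-point of the quoted 10-20 lm/W
led_lm_per_w = 300.0          # predicted LED efficacy quoted in the text

lumens_from_60w_bulb = 60.0 * incandescent_lm_per_w
led_watts_needed = lumens_from_60w_bulb / led_lm_per_w
print(lumens_from_60w_bulb)   # 900.0 lumens
print(led_watts_needed)       # 3.0 -- the '60 watt light from 3 watts' claim
```

The same twenty-fold ratio underlies the claim about deferring new power stations: lighting load falls to a twentieth while total demand grows only a few per cent a year.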
what all this amounts to is that wholly electricity autonomous buildings should be an economic reality around a decade from now. High capacity communication systems based on a multimedia supercorridor accommodating audio should become possible. We are already into the era when information is transmitted by pulses of light rather than through a copper wire. Particles of light – photons – can carry many thousands of times more information than wires. Because of this, as Philip Ball puts it: ‘The photonic integrated circuit which processes light on a chip … will see computers change qualitatively’ (Made to Measure, Princeton, 1997, p. 58). Computers are major consumers of energy, not only in use but also because of the heat they generate, which often must be disposed of mechanically. The all-photonic computer will be much faster, use a fraction of the energy of an electronic computer and generate virtually no heat. This will make a significant impact on the energy demand of the standard office. It will also have implications for the design of the building fabric and the services. Couple this with the introduction of LEDs and it is certainly conceivable that commercial buildings will more than meet their energy demands by means of the next generation of photovoltaic cells. The autonomous office is nigh.

Smart materials

Materials science is entering a whole new realm. As Philip Ball puts it: ‘Smart materials represent the epitome of the new paradigm of materials science whereby structural materials are being superseded by functional ones’. In many situations they will replace mechanical operations. In other words, we will see ‘smart devices in which the materials themselves do the job of levers, gears and even electronic circuitry’. Smart materials carry out their tasks as a result of their intrinsic properties. Smart materials are already on the market, like thermochromic or electrochromic glass. At present electrochromic glass is a sandwich construction with a gel which changes its light emission properties in response to an electric current. The current is required to change its state, not to maintain that state. Pilkington is developing a solid state version of this glass which should make it both cheaper and available in much larger sizes. This will dispense with the need for mechanical blinds and solar shades and will give individuals much greater control over their immediate environment. Such materials come into the general category of passive smart materials. There is even the prospect of ‘A house built of bricks that change their thermal insulating properties depending on the outside temperature so as to maximise energy efficiency’ (op. cit., p. 104).

The really exciting advances are in active smart materials. An active system is controlled not only by external forces but also by some internal signal. In smart systems an active response usually involves a feedback loop that enables the system to ‘tune’ its response and thus adapt to a changing environment rather than be passively driven by external forces. An example is a vibration-damping smart system. Mechanical movement triggers a feedback loop into providing movement that stabilises the system. As the frequency or amplitude of the vibrations changes, so the feedback loop modifies the reaction to compensate. In this way smart systems will replace springs and dampening devices to eliminate mechanical vibrations. In Tokyo and Osaka several recent buildings already exploit vibration damping and variable stiffness devices to counteract seismic movement.

In general smart systems can be divided into sensors and actuators. Sensors are detection devices which respond to changes in the environment and warn accordingly. Actuators make things happen: they are control devices that close or open an electrical circuit or act as a valve in a pipe. What we will see in the near future are smart structures equipped with an array of fibre optic ‘nerves’ that will indicate what a structure is ‘feeling’ at any given moment and give instant information of any impending catastrophic failure.

One useful class of smart materials are ‘shape memory alloys’ (SMAs), alternatively called ‘solid state phase transformations’. These are materials which, after deformation, return completely to their former shape. They function by virtue of the fact that the crystal structures of SMAs change when heated. In principle SMAs can be used for any application which requires heat to be converted into mechanical action. For example, they may perform a dual role, extracting heat from low grade sources like ground water or geothermal reservoirs and serving as mechanical pumps to deliver the warmed water to the heating system of a building. They have the potential to replace a range of mechanical devices such as vehicle clutches. They can also be incorporated into mechanisms for operating ventilation louvres or ventilation/heating diffusers: no moving parts, no possibility of mechanical breakdown and all at low cost.

Smart fluids

By introducing a strong electrical field, certain fluids can change to a near solid state. Such fluids are called ‘electrorheological fluids’ (rheology is the study of the viscosity and flow capacity of fluids). Another class of smart fluid is activated by being exposed to a magnetic field. They can be made intelligent by coupling them to sensor devices which detect sudden movement. Linked to sensors they would be ideal for buildings in earthquake zones. Buildings would be constructed off concrete rafts which in turn would be supported by an array of magnetorheological dampers. At the onset of vibrations these would instantly change from solid to fluid and soak up the movement of the earth. They have an inbuilt degree of intelligence and are capable of optimising their performance in response to feedback information.
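The feedback principle behind these vibration-damping systems can be caricatured in a few lines of code. The sketch below is a toy model, not a description of any real damper: a driven mass–spring system in which a feedback loop retunes the damping coefficient from the measured displacement, compared against a fixed passive damper. All the numbers are arbitrary illustrative values.

```python
import math

# Toy "smart" damping loop: the adaptive branch stiffens the damping
# coefficient as measured displacement grows, instead of using a fixed
# passive damper. Illustrative values only.

def peak_displacement(adaptive, steps=4000, dt=0.001):
    m, k = 1.0, 400.0        # mass (kg), spring stiffness (N/m)
    x, v = 0.0, 0.0          # displacement (m), velocity (m/s)
    peak = 0.0
    for i in range(steps):
        t = i * dt
        force = 50.0 * math.sin(20.0 * t)        # driving at resonance
        c = 5.0 + (2000.0 * abs(x) if adaptive else 0.0)
        a = (force - c * v - k * x) / m
        v += a * dt                              # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

passive = peak_displacement(adaptive=False)
smart = peak_displacement(adaptive=True)
print(f"passive peak {passive:.3f} m, smart peak {smart:.3f} m")
```

At resonance the fixed damper lets the displacement build up, while the feedback loop increases the damping as soon as movement grows; this mirrors, in caricature, the variable stiffness devices mentioned above.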
An application of shape memory alloys already being exploited is the thermostat, where bimetal strips are replaced by alloys.

There is yet another dimension to the characteristics of smart materials – materials that learn, that get smarter as they get older. If the end of the last century was characterised by the rise of high technology with ever more complex electronic wizardry packed into ever smaller spaces, the future, according to materials scientists, ‘may hold an increasing simplicity … as materials replace machines’ (Ball, op. cit., p. 142). It seems ‘such stuff as dreams are made of’.

Socio-economic factors

The focus of the book began at the global scale and gradually sharpened down to the detailed design of buildings. It seems appropriate to end by again speculating more widely about socio-economic issues which will affect all who operate within the construction industry.

Despite the rail chaos of late 2000 and early 2001, it is still the view that superfast trains will be in service within the next decade, which will create the conditions for a more dispersed, hypermobile society. Despite government exhortations to convert to public transport, motorists are showing no sign of responding: at present the average distance travelled by car per day is 28 miles, and by 2025 it is expected that this will rise to 60 miles. This, in turn, will create a demand for new kinds of development, given added impetus by the demographic changes that have created the need for 4 million new homes, mostly in southern England. It seems inevitable that there will be a new crop of new towns, but designed to a high density. It is likely and desirable that the recommendations of the Rogers Task Force (Towards an Urban Renaissance, June 1999) will be influential in the design of the next generation of new towns.

One outcome of the development of IT is that the economic and business certainties of the twentieth century are disintegrating. As electronic commerce grows, governments will find it ever harder to raise taxes. Each day trillions of dollars move around the global money market as corporations locate their transactions in low tax jurisdictions. Add to this the fact that people are increasingly obtaining goods and services via the Internet from places with the lowest taxes and it is clear that national governments will have diminishing power to raise revenue. The meteoric growth of ‘turbo-capitalism’, with its single-minded purpose of optimising market opportunities, is likely to lead to a sharp decline in publicly funded services, with obvious consequences for the social services. This will have an impact on procurement as the public sector building realm declines. It might also have a negative effect on quality as price alone is the deciding factor.

Yet we are social as well as economic beings. Many would agree that ‘the cold economic rationality of capitalism, in which every institution is subordinated to the calculus of profit and loss, does not answer the question posed by every human being – that there is more to life than the pursuit of economic efficiency’ (The Observer, 2 January 2000).

The Information Age will be kindest to those who adapt. As Ian Angell (head of the Department of Information Systems, London School of Economics) puts it: ‘People with computer skills are likely to end up winners. Those without are likely to emerge as losers.’ The dividing line will become sharply defined as between those with IT and communication skills who can keep up with the pace of change and those who increasingly fall behind in this new Darwinian environment. Communities that invest substantially in communication technologies will thrive; those who don’t, or those whose citizens are isolated from the new ways to communicate, will suffer. The power of the nation state will weaken.

One scenario is that the growing gap between the poor and the affluent will continue to widen. This progressive bi-polarisation will produce social tensions, with decreasing social cohesion and an increase in crime. A rising crime rate will lead to anxieties which increase the attraction of buying the necessities of life through the Internet, with obvious consequences for high street and even neighbourhood shops. If this prediction is realised, we are likely to see the well-off retreating into gated and guarded communities, according to Professor John Adams of University College, London (The Social Implications of Hypermobility, 1999; New Scientist, 4 March 2000, pp. 44–45). This will be a countervailing trend to the ideals within the Rogers Urban Task Force for a more mixed and integrated society. Security will become a major design determinant in all types of building.

On the positive side, there are huge economic opportunities to be grasped in the development and manufacture of products related to the sphere of sustainability. Other nations subsidise such technologies, pump-priming them so that they quickly achieve economy of scale. The UK has the expertise but will not be a key player if it continues to be fixated on short-term profits and allows capital costs to outweigh the benefits of medium- to long-term revenue gains. Within the current economic climate, to spend money now to limit catastrophic climate change in 50 years’ time is not regarded as an efficient way to deploy capital. The reality is that we should not only now be imposing severe constraints on the use of fossil fuels, we should also be accumulating a contingency fund to deal with the future effects of global warming that are inevitable due to the momentum generated by past emissions of greenhouse gases. Many circumstances are conspiring to ensure that, if we do not revolutionise the way we produce and distribute energy, the prospect of runaway global warming becomes a virtual certainty; and that’s not an inviting prospect for our children and grandchildren.

Of this there can be no doubt: the next decades will witness the accelerating pace of change. For the early part of the century it is likely that wealth will increase. Later in the century, huge uncertainties emerge, mainly associated with the social and political consequences of climate change: the growing tensions arising from competition for access to water and fertile land, exacerbated by the widening gap between rich and poor. Change is inevitable. Designers within construction have it in their power to help with the solution rather than add to the problem. This surely is what environmental responsibility is all about.

Appendix One: Key indicators for sustainable design

● Minimising the use of fossil-based energy in terms of the energy embodied in the materials, transport and construction process and the energy used during the lifetime of the building.
● Making best use of passive solar energy whilst employing heating/cooling systems which are fine-tuned to the needs of the occupants, with air conditioning used only in exceptional circumstances.
● Exploiting the potential for natural ventilation in the context of an overall climate control strategy which minimises energy use and maximises comfort.
● Designing to make maximum use of natural light whilst also being aware of its limitations.
● Identifying opportunities to generate on-site renewable electricity (embedded systems).
● Identifying the potential for exploiting the constant ground temperature for evening out the peaks and troughs of summer and winter temperature.
● Minimising the use of water, harvesting rainwater and grey water and purifying for use other than human consumption.
● Minimising rainwater runoff by limiting the extent of hard external landscape.
● Making best use of recycled materials and renewable materials from a verifiable source.
● Avoiding all ozone depleting chemicals in terms of manufacture and system operation, including HCFCs.
● Where possible using alternatives to materials containing volatile organic compounds.
● Creating an external environment which is both a visual amenity and also offers environmental benefits such as summer shading from deciduous trees and evaporative cooling from water features.
● Ensuring that building management systems are user-friendly and not overcomplex.
● Whilst taking account of these key indicators, ensuring that designs meet the highest standards of technical proficiency in combination with aesthetic excellence.

Environmental checklist for development

● Is it proposed that there will be consultation with the local community at the design stage?
● Has every attempt been made either to develop on a brownfield site or reuse an existing building?
● Will the proposed development achieve the highest standards in terms of energy efficiency and the conservation of natural resources?
● Will consideration be given to the production of on-site electricity from renewable sources?
● Has the opportunity to use recycled materials been explored?
● Is the proposed development capable of being adapted to other uses in the future?
● Will it achieve optimum standards of comfort for its inhabitants?
● Does the proposal achieve an appropriate density for its location?
● Has the potential for a mixed development on the site been realised?
● Does the proposal involve significant investment in landscaping?
● Have steps been taken to ensure that the development will not adversely affect the micro-climate, for example by downdraughts or funnelling of wind?
● Does the proposed development make a significant contribution to the economic and social well-being of the community?
● Does the proposed development have access to a range of public transport options?
● Will the proposed development make a significant addition to the amenity of the wider area, and does it pose any threat to the amenity of its immediate neighbours?
● Will the development be in harmony with the wider built environment?
● Is it proposed that the design process will, from the start, be a collaborative enterprise involving all the design professions?
● Will the proposed development contain areas of public access or create new pedestrian routes?

These recommendations and checklists are intended to give a flavour of the challenges which face all who are associated with the design and production of buildings over the next decades. They should be viewed in a positive light since they offer an unprecedented range of development and design opportunities.

Appendix Two: An outline sustainability syllabus for designers

The change to environmentally advanced design needs to be driven by a conviction that it is necessary.

Climate change
The mechanism of the greenhouse and the role of the troposphere. Processes: photosynthesis, soil formation and waste assimilation, the hydrological cycle and the carbon cycle. The carbon cycle and current imbalance. Evidence of past fluctuations and the link between global temperature and CO2 in the atmosphere.

The evidence
Rise in sea level over the past 150 years; rise in surface global temperature and temperature records in the last decade. Increase in temperature means greater energy in the system and greater turbulence: steeper pressure gradients and deeper troughs, more intense and more frequent storms, increasing intensity and frequency of storms and floods. Migration of plants, animals and diseases.

Natural resources and pollution
Global natural assets, fixed and finite: minerals and fossil hydrocarbons. ‘Soft’ assets: soil, oceans, forests. Problems associated with pollution, e.g. acid rain and low level ozone. Soil erosion and oxidation. Soil exhaustion through intensive use of agrochemicals. Salination as a consequence of hydroelectric schemes and runoff. Continued deforestation in tropical and temperate rain forests. Contaminated land and remediation strategies. Nuclear waste disposal and decommissioning of nuclear power stations. Predicted problems associated with resource depletion coupled with estimated population growth (11 billion by 2050).
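The greenhouse mechanism and CO2 topics above lend themselves to a simple quantitative illustration. The snippet below uses the widely quoted simplified expression for CO2 radiative forcing, ΔF = 5.35·ln(C/C0) W/m²; the concentrations chosen are illustrative round figures, not data from the syllabus itself.

```python
import math

# Simplified CO2 radiative forcing: delta_F = 5.35 * ln(C / C0) W/m^2.
# C0 is the pre-industrial concentration; the loop evaluates a roughly
# year-2001 level and a doubled pre-industrial level (illustrative values).

C0 = 280.0  # pre-industrial CO2, ppm
for C in (370.0, 560.0):
    dF = 5.35 * math.log(C / C0)
    print(f"{C:.0f} ppm -> forcing {dF:.2f} W/m^2")
```

The logarithm is the point worth noting for designers: each doubling of concentration adds roughly the same forcing (about 3.7 W/m²), so early emissions cuts buy disproportionate benefit.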
Predictions
IPCC predictions regarding the level of atmospheric CO2 by 2050 assuming ‘business as usual’ (at least double the pre-industrial level) and the consequent temperature rise. Present position compared with pre-industrial in terms of temperature and level of atmospheric CO2. Scientific evidence for attributing most of the change to human activity (UN IPCC report 2001). Rising sea levels through expansion and melting ice, threatening island states, maritime cities and coastal agricultural belts. Melting of polar ice and glaciers. Potential changes in ocean currents, e.g. the threat to the Gulf Stream from melting Greenland ice. Problems arising from the rate of climate change exceeding the adaptation capacity of forests and crops. Effects on health: severe heat episodes, migration to temperate zones of subtropical diseases, damage to crops and wildlife of low level pollution.

Energy and economics
The outlook for conventional energy. Future prospects for the availability of fossil fuels based on the latest estimates of reserves. Nuclear outlook, including scenarios in the Royal Commission on Environmental Pollution energy report. Projections of energy consumption: US, Europe, China, India, SE Asia. International comparisons of per capita annual CO2 emissions (UN statistics). The case for a shift from neo-classical economic theory, which regards the Earth’s assets as free, to eco-economics, which factors in the environmental and social costs of human actions; the ‘externalities’ include the contribution to global warming, e.g. including the external costs in fixing fossil fuel prices. Concept of carbon trading and international agreements: Rio, Kyoto, The Hague.

Ozone depletion
Caused by CFCs and HCFCs creating aerosols in the upper atmosphere which erode the ozone shield that protects against ultra-violet radiation. This causes skin cancer and damages the immune system, plus damage to crops.

Renewable technologies
Marine systems: hydroelectric generation, impoundment systems, ‘run-of-river’ systems, small-scale hydro; tidal energy – barrage systems and the tidal fence; underwater turbines; wave power – coastal and offshore oscillating water column, the ‘Tapchan’ system.
Other renewable technologies: wind power; photovoltaics; the solar chimney; geothermal energy; power from biomass and waste – biogas, liquid fuels, rapeseed diesel, direct combustion from rapid rotation crops; hydrogen; nuclear.

Insulation
Types of insulation material and thermal performance: natural organic, synthetic organic, inorganic. Thermal conductivity of various insulation materials. Transparent insulation materials, aerogels. Technical risks of insulation; thermal bridges.

Windows and glazing
Types of energy efficient glazing with U-values. Heat reflecting and heat absorbing glazing. Photochromic, thermochromic and electrochromic glass; solid state electrochromic glass. Developments in glass technology. Net U-values including solar gain.

Construction techniques
High and superinsulation standards and specimen built examples. Construction systems: masonry, frame, innovative techniques.

Low energy housing
Passive solar design. Trombe walls. Active solar thermal systems: flat plate solar collector, thermal concentrator, double sided collector. Examples: ‘House of the Future’.
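The insulation and glazing topics above turn on the U-value calculation, which is easy to sketch: sum the layer resistances (thickness over conductivity) plus the surface resistances, then invert. The wall build-up and conductivity figures below are typical textbook values assumed for illustration, not data from the syllabus.

```python
# Illustrative U-value for a layered cavity-style wall.
# U = 1 / (Rsi + sum(thickness / conductivity) + Rse)

layers = [
    ("brick outer leaf",  0.102, 0.77),   # thickness (m), conductivity (W/m.K)
    ("mineral wool batt", 0.100, 0.038),
    ("concrete block",    0.100, 0.18),
    ("plasterboard",      0.013, 0.21),
]

R_si, R_se = 0.13, 0.04  # standard internal/external surface resistances (m2K/W)

R_total = R_si + R_se + sum(t / k for _, t, k in layers)
U = 1.0 / R_total
print(f"U-value = {U:.2f} W/m2K")
```

Note how the mineral wool layer dominates the total resistance; doubling the masonry thickness would barely move the U-value, which is why insulation type and thickness, not structure, drive the result.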
Advanced and ultra-low energy housing – examples
Hockerton self-sufficient housing project, Nottinghamshire. ‘House of the Future’, Museum of Welsh Life. Beddington zero-energy development (BedZED). Wintergardens, Sheffield. Thermal mass and the ‘flywheel’ effect; phase change materials; floors and ceilings. Summary: checklist for the energy efficient design of dwellings.

Timber construction
The environmental benefits of timber in construction. Means of ensuring that timber originates from a sustainable source. Multi-storey experimental timber house by the Building Research Establishment, Cardington. New centre at Weald and Downland Open Air Museum, Singleton near Chichester. Sibelius Concert Hall, Lahti, Finland.

Domestic energy
Photovoltaics (PVs) and the principle of PV generation. Types of PV cell and energy output. PV applications; remote and integrated systems. Heating. Energy use across appliances.

Energy options
● Carbon intensity of different fuels
● Energy distribution in a combined heat and power (CHP) system
● The fuel cell: basic operation of the fuel cell
● Storage techniques – electricity: pump storage; creation of hydrogen by reformation of methane etc.
● Storage techniques – warmth and cooling

Offices and institutional buildings
Six performance indicators from the Movement for Innovation (M4I). Air conditioning as distinct from mechanically assisted ventilation; summary of recommendations.

Housing – the existing stock
Assessment methods for energy efficiency: SAP, NHER, BEPI and the CO2 measure. State of the housing stock from the English House Condition Survey 2001. Breakdown of SAP ratings across the stock. Buildings in use account for 47 per cent of total UK CO2 emissions, with an additional 5 per cent for those under construction. Retrofit examples, e.g. Peabody Trust.
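The ‘types of PV cell and energy output’ topic can be made concrete with a back-of-envelope yield estimate. All input figures below are assumptions for illustration (a UK-ish annual insolation, a polycrystalline module efficiency, typical system losses), not data from the book.

```python
# Rough annual-output estimate for a domestic PV array.
# yield = area * module efficiency * annual insolation * performance ratio

area_m2 = 20.0            # array area (assumed)
efficiency = 0.13         # polycrystalline module efficiency (assumed)
insolation = 1000.0       # annual radiation on the array plane, kWh/m2 (assumed)
performance_ratio = 0.75  # wiring, inverter and temperature losses (assumed)

annual_kwh = area_m2 * efficiency * insolation * performance_ratio
print(f"Estimated annual yield: {annual_kwh:.0f} kWh")
```

On these assumptions a 20 m² roof array delivers on the order of 2000 kWh a year, i.e. a substantial slice of a low-energy dwelling's electricity demand, which is the point the syllabus items on embedded generation are driving at.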
When transport attributable to buildings is added, this rises to 75 per cent. Of this total, 29 per cent comes from housing, and 98.5 per cent of this comes from older building stock. Incidence of fuel poverty as defined by the DETR, and health problems directly attributable to poor housing. Standards for domestic warmth defined as ‘adequate’ and ‘minimum’. Retrofit examples, e.g. Penwith, Cornwall. Environmental considerations in the design of offices; passive solar design; construction technologies: climate facades.

Ventilation
● Natural ventilation
● Unassisted natural ventilation
● Gravity ventilation and the ‘stack effect’
● Mechanically assisted ventilation
● Displacement ventilation
● Cooling strategies: evaporative cooling
● The ecological tower, examples: Commerzbank and Swiss Re
● Ventilation and air movement

Energy management and storage
Energy storage: underground thermal storage; dense storage medium. Electricity storage to overcome intermittence of supply by renewables: latest battery technology; hydrogen storage by pressurised tanks, metal hydrides and by dedicated PV/electrolyser; regenerative fuel cell storage and ‘Regenesys’ technology. Building management systems (BMS) and Building Energy Management Systems.

Lighting – designing for daylight
Factors influencing levels of daylight. Design considerations – danger of excessive contrast, glare, heat gain. The atrium. Light shelves, light pipes, prismatic glazing, holographic glazing. Solar shading. Conditions for successful design.

Lighting controls and the human factor
Switches, timed controls, localised switching, daylight linked control, dimming control and occupancy sensing, photoelectric control and human behaviour.

Environmental design and common problems
The ‘Probe’ studies and the lessons from post-occupancy analysis: differences between expectation and reality. Common architectural problems; common engineering problems; common failures leading to energy waste. Problems relating to occupants and managers; system management; operational difficulties; the human factor. Building related illness/sick building syndrome. Air conditioned offices: the ‘high-tech’ demand and inherent inefficiencies. High profile–low profile. Example: low energy Conference Centre, Earth Centre, Doncaster (Bill Dunster).

Life-cycle assessment and recycling
BRE Ecopoints system for construction materials and components. Further reference to embodied energy. Whole life costing. Precautions regarding recycled materials; risks regarding timber and quality control. Recycling case study. Recycling strategy checklist.

Integrated district environmental design
Beyond the individual building: emergent technologies for combined heat and power systems for groups of houses or commercial/institutional buildings driven by micro-turbines; the shift from mega power plants to many smaller dispersed generators using a range of renewable technologies. Examples: Plan-les-Ouates solar city near Geneva; Linz Solar City, Austria. Case study: Ecological City of Tomorrow, Malmo, Sweden.

Emergent technologies and future prospects
Recap of the likely consequences of global warming as they affect buildings. Energy for the future: impact of economic PVs for embedded installation; next generation solar cells. Electricity storage: polymer batteries, flywheel technology, high temperature superconductivity. Energy storage: hydrogen storage and the potential of nanofibre carbon. Smart materials; smart fluids. Advances in lighting – LEDs. Implications of the expansion of IT and the impact of optical fibre technology; the photonic revolution. Socio-economic factors.
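The lighting-control topics listed above (occupancy sensing, daylight linked control, dimming) reduce to very simple logic, sketched here with illustrative thresholds; the 500 lux target and the linear dimming law are assumptions, not figures from the syllabus.

```python
# Sketch of combined occupancy-sensing and daylight-linked dimming.
# Illustrative control law: top up artificial light only by the amount
# daylight falls short of the target illuminance.

def dimming_level(daylight_lux, occupied, target_lux=500.0):
    """Return artificial-light output as a fraction between 0 and 1."""
    if not occupied:
        return 0.0                               # occupancy sensing: lights off
    shortfall = max(0.0, target_lux - daylight_lux)
    return min(1.0, shortfall / target_lux)      # daylight-linked dimming

print(dimming_level(600, True))    # bright day, occupied -> 0.0
print(dimming_level(200, True))    # dull day, occupied   -> 0.6
print(dimming_level(200, False))   # unoccupied           -> 0.0
```

The point of the ‘human factor’ items is that such logic only saves energy if occupants accept it; post-occupancy studies repeatedly find automatic controls overridden when they fight user expectations.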
49–50 Building energy management system (BEMS). London. Manchester. 83 Coventry University Library. 133 Cold-related illnesses. 166 Amorphous silicon cells. 47–8 liquid fuels. 14–15 abatement strategies. 3–4. 162–3 Commerzbank. 80–7 amorphous silicon. 140–3. 2. 250 evaporative cooling. 198–9 Alkaline fuel cells (AFC). 4–5 Eco-cement. London. 251 Architectural Salvage Index. 53–4 masonry construction. 146–8. 151–4. South Wales. 82–3 principle of. 117 Domestic energy. Earth Centre. 17–18 Climate facade. 81–2 Dry lining. 177–8 Beaufort Court. 152–3 Copper indium diselenide (CIS) cells. 49 Blue Energy tidal fence system. 130–6 Aerogels. 70 Cellulose. 20–1 Co-operative Headquarters. 83 Antimatter. 140 Darrieus-Rotor. 119 Cavity wall filling. 94–5 Daylight. 52–4 framed construction. 119 Building of the Future. 253–4 micro-combined heat and power (CHP). 83 Carbon cycle. Devon. Lillie Road. 19–20 outlook. 83 energy output. Egypt. 107 offices. 19–20 paleoclimate record. 154–6 Composting toilet. 3–4 predicted effects. 66 Borehole heat exchange (BHE) system. 17–18 CO2 levels. 256 Arup Research and Development Report. Doncaster. 7. 218–25 energy package. 143–5 Cool storage. 93 Autonomous House. See Insulation Climate change: causes of fluctuations. 7–11 control strategies. 71 City transformation. 12–14 Cadmium telluride (CdTe) cells. 124 Batteries. 90–1. 1–2 Carbon Dioxide Profile. 217–18 renewable energy centre zero emissions building. 130–6 Clouds. 175–6 Cooling strategies. 4–7 evidence for human causation. 83 cadmium telluride (CdTe). 91–2 fuel cells. 238–9 Cladding. 28–9 Atrium. 125–6 Cellular glass. 87–9. 203–4 Artificial photosynthesis. 34–5 Body tinted glass. 116–17 Concrete. 226–31 Beddington Zero Energy Development (BedZED). 48–9 direct combustion. 77 Air conditioning. 86–7 polycrystalline silicon. 206–7 Conference Centre. 53 innovative techniques. 223–5 Biomass utilisation. 184–5 Autonomous developments. 83 copper indium diselenide (CIS). 206–7 275 . 
174 Building Research Establishment timber-framed apartments. 99–100 Business as usual (BaU) scenario. 95–8 Displacement ventilation. 80–92 embodied energy and materials. 87–9 photovoltaic systems. 124–5 Earth’s axial tilt. 209–11 Conservatories. 120–1 Combined heat and power (CHP). Frankfurt. 225–34 renewable energy sources. Manchester University. 178–9 Building Energy Performance Index (BEPI). 10. 149–50 Domestic appliances. 180 Aswan Dam scheme. 47–9 biogas. 111 David Wilson Millennium Eco-House. 192 avoidance of. 116 Baggy House. 94. 80. 12–17 recent uncertainties. 52–3 Contact Theatre. 185 reflective coatings. 3. 152–3 Existing housing: energy efficiency measurement. 165–6 Fuel poverty. 120–1 remedy. 169–70. 71 Hockerton Housing Project. 252 future technologies. 29–30 small-scale hydro. 254 microbial fuel cell (MFC). 169. 91–2. 20–2 Framed construction. 170 Electrochromic glass. 16 Glare. 93. 119 National Home Energy Rating (NHER). 16 Fossil fuels.INDEX Eco-materials. 20–2 UK energy picture. 49 Evacuated tube solar thermal collectors. 206 Geothermal energy. England. 254 Ice core data. 119–20 Helican turbine. 253–4 molten carbonate fuel cell (MCFC). 66 prismatic glazing. 118–19 fuel poverty. 162 embodied energy. 90–1. 105 systems. 130 Glenwood Park. 177–8 flywheel technology. 256–7 heat storage. 187 photochromic. 164–5 regenerative fuel cell. 252 Electricity. Sheffield. 5–6 H-Darrieus-Rotor. 174–7 cool storage. 154–9 Ecopaint. 208–9 paints. 118–19 Energy efficient design. 257 regenerative fuel cell. 121–2 heating standards. 67. England. Malmo. 208–9 future technologies. 49–50 Glacial melting. 50. 187 House Condition Survey. 169–70. 260 holographic glazing. 71 Flooding. Atlanta. 257–8 hydrogen storage. 252–4 intelligent grids. 206–7 concrete. 124–5 case study. 2–3 Ground source heat pump (GSHP). 250 recent uncertainties. 121–2 standards. 208–9 Energy: delivered. California. 179 Ethanol as power source. 260 Electrorheological fluids. 
Introduction to Windows Script Technologies

Microsoft® Windows® 2000 Scripting Guide

This is a book about scripting for system administrators. If you are like many system administrators, you might be wondering why this book is targeted towards you. After all, scripting is not the sort of thing system administrators do. Everyone knows about scripting: scripting is hard; scripting is time-consuming; scripting requires you to learn all sorts of technical jargon and master a whole host of acronyms - WSH, WMI, ADSI, CDO, ADO, COM. System administrators have neither the time nor the requisite background to become script writers. Or do they?

One of the primary purposes of this book is to clear up misconceptions such as these. Is scripting hard? It can be. On the other hand, take a look at this script, which actually performs a useful system administration task:

Set objNetwork = CreateObject("Wscript.Network")
objNetwork.MapNetworkDrive "X:", "\\atl-fs-01\public"

Even if you do not know the first thing about scripting and even if you are completely bewildered by line 1 of the script, you can still make an educated guess that this script must map drive X to the shared folder \\atl-fs-01\public. And that is exactly what it does. If you already understand system administration - that is, if you know what it means to map a drive and you understand the concept of shared folders and Universal Naming Convention (UNC) paths - the leap from mapping drives by using the graphical user interface (GUI) or a command-line tool to mapping drives by using a script is not very big.

Note If you are already lost - because you are not sure what is meant by scripting in the first place - think of scripting in these terms: Do you ever find yourself typing the same set of commands over and over to get a certain task done? Do you ever find yourself clicking the same set of buttons in the same sequence in the same wizard just to complete some chore - and then have to repeat the same process for, say, multiple computers or multiple user accounts? Scripts help eliminate some of this repetitive work.
A script is a file you create that describes the steps required to complete a task. Admittedly, not all scripts are as simple and intuitive as the one just shown. But if you thumb through this book, you will find that the vast majority of scripts - almost all of which carry out useful system administration tasks - are no more than 15 or 20 lines long. And with a great many of those, you can read the code and figure out what is going on regardless of your level of scripting experience.

Does scripting take too much time? It can: If you write a script that is 500 lines long (and you probably never will), the typing alone will take some time. But it is important to balance the time it takes to write a script with the time that can be saved by using that script. For example, here is a script that backs up and clears all the event logs on a computer:

strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate,(Backup)}!\\" & _
        strComputer & "\root\cimv2")
Set colLogFiles = objWMIService.ExecQuery _
    ("SELECT * FROM Win32_NTEventLogFile")
For Each objLogfile in colLogFiles
    strBackupLog = objLogFile.BackupEventLog _
        ("c:\scripts\" & objLogFile.LogFileName & ".evt")
    objLogFile.ClearEventLog()
Next

Admittedly, this script is not as intuitive as the drive-mapping script. Furthermore, to write a script like this, you will need to learn a little bit about scripting in general, and about Windows Management Instrumentation (WMI) in particular. And then you still have to type it into Microsoft® Notepad, all 11 lines worth. This one might take you a little bit of time. But think of it this way: How much time does it take you to manually back up and clear each event log on a computer? (And that assumes that you actually do this; the manual process can be so tedious and time-consuming that many system administrators simply forgo backing up and clearing event logs, even though they know this task should be done on a regular basis.) With a script, you can back up and clear event logs in a minute or two, depending on the size of those logs. And what if you take an extra half hour or so and add code that causes the script to back up and clear all the event logs on all your computers?
You might have to invest a little time and energy in learning to script, but it will not be long before these scripts begin to pay for themselves.

Point conceded. But even though scripting does not have to be hard and does not have to be time-consuming, it still requires you to learn all the technical mumbo-jumbo, right? Sure, if you want to be an expert in scripting. But consider this script, which returns the names of all the services installed on a computer:

strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colServices = objWMIService.ExecQuery("Select * From Win32_Service")
For Each objService in colServices
    Wscript.Echo objService.Name
Next

Under the covers, this is a fairly complicated script. Among other things, it:

- Makes use of Automation object methods and properties.
- Utilizes Microsoft® Visual Basic® Scripting Edition (VBScript) constructs such as the For Each loop to iterate through the elements within a collection.
- Requires a COM (Component Object Model) moniker.
- Uses WMI object paths, namespaces, and classes.
- Executes a query string written in the WMI Query Language.

That is an awful lot to know and remember just to write a seven-line script. No wonder people think scripting is hard. But the truth is, you do not have to fully understand COM and Automation to write a script like this. It does help to know about these things: As in any field, the more you know, the better off you are. But suppose what you really want is a script that returns the names of all the processes currently running on a computer instead of one that returns the names of all the installed services. Here is a script that does just that:

strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colProcesses = objWMIService.ExecQuery("Select * From Win32_Process")
For Each objProcess in colProcesses
    Wscript.Echo objProcess.Name
Next

What is so special about this script? Nothing. And that is the point. Look closely at the single item that differs (Win32_Process). This is the only part of the process script that differs from the service script. Do you know anything more about COM monikers or WMI object paths than you did a minute ago? Probably not, and yet you can still take a basic script template and modify it to return useful information. Want to know the name of the video card installed on a computer? Try this script:

strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colItems = objWMIService.ExecQuery("Select * From Win32_VideoController")
For Each objItem in colItems
    Wscript.Echo objItem.Name
Next
Is it always this easy? No, not always. And these examples sidestep a few issues (such as, "How do I know to type in Win32_VideoController rather than, say, Win32_VideoCard?" or, "What if I want to know more than just the name of the video card?"). The point is not that you can start writing scripts without knowing anything; the point is that you can start writing scripts without knowing everything. If you want to master COM monikers and WMI object paths before you write your first script, that's fine. And if you prefer to just start writing scripts, perhaps by building on the examples in this book, that's fine too. You can always start writing and using scripts today, and then go back and learn about COM monikers and WMI object paths tomorrow.

How Did Scripting Acquire Such a Bad Reputation?

If scripting is so easy, then, how did it gain a reputation for being so hard? And if it is so valuable, why aren't more system administrators using it? After all, few system administrators knowingly turn their backs on something that will make their lives easier. There are probably many reasons for this, but at least part of the problem dates back to the birth of the Microsoft® Windows® Script Technologies. Both VBScript and Microsoft® JScript® (the two scripting languages included with the Microsoft® Windows® operating system) began as a way to add client-side scripting to Web pages. This was great for Internet developers, but of little use to the typical system administrator. As a result, scripting came to be associated with Web page development. (Even today, many of the code samples in the official Microsoft documentation for VBScript show the code embedded in a Web page.)

Later on, Windows Script Host (WSH) was born. WSH provided a way for scripting languages and scripting technologies to be used outside Internet Explorer; in fact, WSH was aimed squarely at system administration. Nevertheless, scripting still failed to take the system administration world by storm.
Initially, this was probably due to a lack of documentation and a lack of proper positioning. It was difficult to find information about using VBScript or JScript as a tool for system administration; it was next-to-impossible to find information about technologies such as WMI or Active Directory Service Interfaces (ADSI). Even when these technologies were documented (typically in software development kits), the documentation was aimed at programmers; in fact, code samples were usually written in C++ rather than a scripting language. For example, suppose you are a typical system administrator (with substantial knowledge of Windows and minimal knowledge of programming). And suppose you looked up scripting on Microsoft's Web site and saw sample code that looked like this:

int main(int argc, char **argv)
{
    HRESULT hres;
    hres = CoInitializeEx(0, COINIT_MULTITHREADED);  // Initialize COM.
    if (FAILED(hres))
    {
        cout << "Failed to initialize COM library. Error code = 0x"
             << hex << hres << endl;
        return 1;  // Program has failed.
    }
    hres = CoInitializeSecurity(NULL, -1, NULL, NULL,
        RPC_C_AUTHN_LEVEL_CONNECT, RPC_C_IMP_LEVEL_IDENTIFY,
        NULL, EOAC_NONE, 0);

Needless to say, very few system administrators saw WMI or ADSI as a tool that would be useful for them.

Today, of course, there is no dearth of scripting-related literature; a recent search of a major online bookstore with the keyword "VBScript" returned 339 titles. That is the good news. The bad news is that most of those titles take one of two approaches: Either they continue to treat scripting as a tool for Web developers, or they focus almost exclusively on VBScript and WSH. There is no doubt that VBScript and WSH are important scripting technologies, but by themselves the two do not enable you to carry out many useful system administration tasks.
Of the 339 scripting books found in the search, only a handful look at scripting as a tool for system administration, and only a few of those cover the key technologies - WMI and ADSI - in any depth. A system administrator who grabs a scripting book or two at random might still fail to understand that scripting can be extremely useful in managing Windows-based computers.

How This Book Helps

So is the Microsoft® Windows® 2000 Scripting Guide simply scripting book number 340, or does it somehow differ from its predecessors? In many ways, this book represents a new approach to scripting and system administration. In fact, at least four characteristics help distinguish this book from many of the other books on the market:

The focus is on scripting from the point of view of system administration. This book includes many of the same chapters found in other scripting books; for example, it has a chapter devoted to VBScript. The difference is that the chapter is focused on the VBScript elements that are most useful to system administrators. System administrators need to work extensively with COM, so the VBScript chapter features detailed explanations of how to bind to and make use of COM objects within a script. System administrators have little use for calculating arctangents and cosines. Hence, these subjects are not covered at all, even though it is possible to make these calculations using VBScript.

This book is task-centered rather than script-centered. In some respects, the scripts included in this book are an afterthought. Sometimes a book author will create a bunch of interesting scripts and then compose the text around those items. This book takes a very different approach: Instead of starting with scripts, the authors identified key tasks that system administrators must do on a routine basis. Only after those tasks were identified and categorized did they see whether the tasks could even be scripted.
In that sense, this is not so much a book about scripting as it is a book about efficiently managing Windows-based computers. As it happens, the suggested ways to carry out these tasks all involve scripts. But the scripts could easily be removed from the book and replaced with command-line tool or GUI equivalents, and the book would still have value.

This book combines tutorial elements with practical elements. Some books try to teach you scripting; thus they focus on conceptual notions and, at best, pay lip service to practical concerns. Others take the opposite approach. In those cases, the focus is on the practical: The books present a host of useful scripts, but make little effort to help you understand how the scripts work and how you might modify them. This book tries to combine the best of both worlds; for example, any time a useful system administration script is presented, the script is accompanied by a step-by-step explanation of how the script works and how it might be adapted to fit your individual needs.

This book recognizes that, the larger the organization, the more pressing the need to automate procedures. If you are the system administrator for an organization that has a single computer, you might still find the scripts in this book useful. To be honest, though, you would probably find it faster and easier to manage your lone computer by using the GUI. If you have 100 computers, however, or 1,000 computers, the value of scripts and scripting suddenly skyrockets. In recognition of this fact, the book includes an entire chapter - "Creating Enterprise Scripts" - that discusses how the sample scripts in this book can be modified for use in organizations with many computers.

How Do You Know if This Book is for You?

Officially, this book was written for "system administrators in medium to large organizations who want to use scripting as a means to manage their Windows-based computers."
That group (amorphous as it might be) will likely make up the bulk of the readership simply because 1) the book revolves around scripting system-administration tasks, and 2) system administrators in medium to large organizations are the people most likely to need to use scripts. However, the book should be useful to anyone interested in learning how to script. The techniques discussed throughout the book, while focused on medium to large organizations, are likely to prove useful in small organizations as well. These techniques are typically used to carry out system administration tasks, but many of them can be adapted by application programmers or Web developers.

The book does not discuss scripting as a method of managing Microsoft Exchange Server; however, Microsoft Exchange Server can be managed using WMI. Because of this, Exchange administrators might be interested not only in the chapter "WMI Scripting Primer" but also in the chapter "VBScript Primer," which discusses generic techniques for working with Automation objects.

This book also tries to provide information that will be useful to people with varying levels of scripting knowledge and experience. No scripting background is assumed, and if you read the book from cover to cover, you will start with the fundamental principles of scripting and gradually work your way through more complicated scenarios. But what if you already know VBScript but do not know much about ADSI? Skip directly to "ADSI Scripting Primer." What if you understand the basic principles of WMI but need to know how to create and terminate processes using WMI? Go right to the "Processes" chapter. There is something for everyone in this book: No knowledge or experience is required, but that does not mean that the book does not occasionally discuss a task or technique that might be a bit more advanced. And what if you have already mastered every scripting technique ever created?
In that case, the book will likely be useful as a reference tool; after all, even those who know everything about WMI have rarely taken the time to memorize all the class names, methods, and properties. For those people, the tables in the task-based chapters might well make up for the fact that some of the explanations are aimed at beginners instead of experts.

What Is in This Book

The Windows 2000 Scripting Guide is divided into three parts:

Conceptual chapters. The conceptual chapters offer comprehensive primers on the primary scripting technologies from Microsoft, including Windows Script Host (WSH), VBScript, WMI, ADSI, and the Script Runtime library. These are tutorial-type chapters, all written from the standpoint of a system administrator, and all written under the assumption that the reader has little, if any, scripting experience.

Task-based chapters. For the task-based chapters, core areas of system administration were identified, including such things as managing services, managing printers, and managing event logs. Within each of these core areas, 25 or so common tasks were also identified, such as starting and stopping services, changing service account passwords, and identifying the services running on a computer. Each task includes 1) a brief explanation of the task and why it is important, 2) a sample script that performs the task, and 3) a step-by-step explanation of how the script works and how you might modify it to fit your own needs.

Enterprise chapters. The enterprise chapters cover a range of topics, including guidelines for setting up a scripting infrastructure and best practices to consider when writing scripts as part of an administrative team. These chapters also describe different ways to enterprise-enable a script, for example, writing a script that performs an action on all your domain controllers or on all your user accounts, or a script that accepts arguments from a text file or a database.
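As a small taste of the enterprise-enabling idea just described, a script can read computer names from a text file and repeat a task for each one. This sketch is not from the book itself - the file path and the echoed message are illustrative placeholders - but every call uses the standard Scripting.FileSystemObject that ships with Windows:

```vbscript
' Sketch: run a task against every computer listed in a text file,
' one computer name per line. The path below is a placeholder.
Const ForReading = 1

Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile("c:\scripts\computers.txt", ForReading)

Do Until objFile.AtEndOfStream
    strComputer = objFile.ReadLine
    ' Place the per-computer task here; for now, just echo the name.
    WScript.Echo "Running task against " & strComputer
Loop

objFile.Close
```

The same loop body could hold any of the WMI scripts shown earlier, with strComputer substituted into the "winmgmts:" moniker.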
You do not have to begin on page 1 and read the entire book from start to finish. The book is designed so that you can skip around and read only the content that interests you. Are you less interested in a conceptual understanding of WMI than you are in learning how to manage services by using scripts? Then start off by reading the "Services" chapter; there is no reason to read all of the preceding chapters. If you are new to scripting, you might find it useful to read about VBScript and WMI first, but this is not a requirement. Consider this book to be a smorgasbord of scripting techniques: You are free to pick and choose as you please. In fact, if you are as interested in using scripts as you are in writing them, you might want to start with the task-based chapters. Read a chapter, copy and run the scripts, and see what happens. If you then want to better understand how the scripts work or would like to modify them so that they better fit your individual needs, go back and read up on the conceptual information.

About the Scripts Used in This Book

Most of the people who saw draft copies of this book expressed surprise - and gratitude - that the scripts were so short; many were used to scripting books in which a sample script might cover two or three pages, and had no idea that scripting could be so simple. However, some people were shocked by the fact that the scripts were so bare-boned. For example, very few of the scripts in the book include error handling; why would you write a production-level system administration script without including things such as error handling? The answer is simple: The scripts in this book were never intended to be production-level system administration scripts. Instead, they are included for educational purposes, to teach various scripting techniques and technologies.
Most of them can be used as-is to carry out useful system administration tasks, but that is just a happy coincidence; this book and the script samples are designed to teach you how to write scripts to help you manage your computing infrastructure. They were never intended to be a management solution in and of themselves.

Finding All the Pieces

Keeping the scripts simple does not mean that concepts such as error handling are ignored; script writers definitely have a need for error handling, they have a need for parsing command-line arguments, and they have a need for creating scripts that run against more than one computer (for example, against all their Dynamic Host Configuration Protocol [DHCP] servers or against all the computers with accounts in a particular Active Directory container). Because of that, these techniques are covered in considerable detail in "Creating Enterprise Scripts" and "Scripting Guidelines" in this book. In other words, although this book does not include any 500-line scripts that make use of every possible scripting technique, all of these scripting techniques are demonstrated somewhere in the book. If you wanted to, you could easily take a number of the small sample scripts and stitch them together to create a 500-line production-level super script.

By leaving out such things as error handling, the scripts were kept as short as possible, and the focus remained on the task at hand. Consider the first script shown in this chapter, the one designed to map a network drive on the local computer:

Set objNetwork = CreateObject("Wscript.Network")
objNetwork.MapNetworkDrive "X:", "\\atl-fs-01\public"

This script is about as simple as it can be, which is exactly the point: You do not have to study it very long before you say to yourself, "Oh, so that's how I map network drives using a script." Admittedly, in a production environment you might want to modify the script so that the user can specify any drive letter and any shared folder. This can be done, but you will need code for parsing command-line arguments.
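To make that trade-off concrete, here is one way the drive-mapping script could grow once argument parsing and a little error handling are added. This sketch is not from the book; the script file name and the messages are illustrative, but every call (the WScript.Arguments collection, WScript.Network.MapNetworkDrive, On Error Resume Next, and the Err object) is standard WSH/VBScript:

```vbscript
' Sketch: map a drive letter to a share, both taken from the command line.
If WScript.Arguments.Count <> 2 Then
    WScript.Echo "Usage: cscript mapdrive.vbs <drive:> <\\server\share>"
    WScript.Quit 1
End If

strDrive = WScript.Arguments(0)
strShare = WScript.Arguments(1)

On Error Resume Next
Set objNetwork = CreateObject("Wscript.Network")
objNetwork.MapNetworkDrive strDrive, strShare
If Err.Number <> 0 Then
    WScript.Echo "Could not map " & strDrive & " to " & strShare & _
        " (error " & Err.Number & ")."
    WScript.Quit 1
End If

WScript.Echo strDrive & " is now mapped to " & strShare
```

Run with, for example, cscript mapdrive.vbs X: \\atl-fs-01\public. The point stands: the 2-line lesson is now buried inside a much longer script.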
Likewise, the sample script will fail if drive X is already mapped to a shared folder. This can be accounted for too, but now you need code to check which drive letters are in use and then to prompt the user to enter a new drive letter. You might also need code that checks to make sure that the shared folder \\atl-fs-01\public actually exists. To account for all these activities would turn a 2-line script into a 22-line script; even worse, the whole idea of showing the script in the first place - demonstrating how to map network drives - would then be buried somewhere in the middle of a relatively large script.

Keeping the scripts short and simple also drives home the point that scripts do not have to be complicated to be useful. If you are creating a script that will be used by many different people throughout your organization, it might be advisable to include argument parsing and error handling. But what if this is a script that only you will use? In this case, you may not need these features. You should never feel compelled to do something in a script just because someone else did it that way. The only thing that matters is that the script carries out its appointed task.

A Note Regarding VBScript

All the scripts in this book were written using VBScript. The decision to use VBScript rather than another scripting language or combination of languages was based on three factors:

- With the possible exception of Perl, VBScript is the most popular language used for writing system administration scripts. It made sense to choose a language that many people are at least somewhat familiar with.
- Unlike Perl, VBScript (along with JScript) is automatically installed on all Windows 2000-based computers. Thus there is nothing to buy and nothing to install.
- VBScript is easier to learn than JScript. As a sort of added bonus, VBScript is very similar to Visual Basic, a programming language that many system administrators have a nodding acquaintance with.
In other words, VBScript is easy to use, requires no additional purchase, download, or installation, and has a large user base. This makes it ideal for introducing people to system administration scripting.

To be honest, though, in many ways the scripting language is irrelevant. By itself, VBScript offers very little support for system administration; VBScript is most useful when it works with WSH, WMI, ADSI, and other scripting technologies that offer extensive support for system administration. In this respect, it is similar to other scripting languages. The vast majority of the scripts in this book rely on WMI or ADSI; the scripting language is almost incidental. Do you prefer working in JScript or ActiveState ActivePerl? Great; all you have to do is learn how to connect to WMI or ADSI using those languages and then take it from there.

For example, here is a WMI script that retrieves and then displays the name of the BIOS installed on the computer. This script is written in VBScript.

strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colBIOS = objWMIService.ExecQuery("Select * From Win32_BIOS")
For Each objBIOS in colBIOS
    Wscript.Echo objBIOS.Name
Next

Here is the same script, written in JScript.

var strComputer = ".";
var objWMIService = GetObject("winmgmts:" +
    "{impersonationLevel=impersonate}!\\\\" + strComputer + "\\root\\cimv2");
var colBIOS = new Enumerator(objWMIService.ExecQuery("Select * From Win32_BIOS"));
for (; !colBIOS.atEnd(); colBIOS.moveNext())
{
    WScript.Echo(colBIOS.item().Name);
}

As you can see, the syntax and language conventions are different, but the key elements - connecting to WMI, retrieving information from the Win32_BIOS class, echoing the value of the BIOS name - are almost identical. In that respect, the language is largely a matter of individual choice; you can use WMI and VBScript to retrieve BIOS information, or you can use WMI and JScript to retrieve BIOS information.

Note In reality, there are some minor differences among scripting languages that affect what you can and cannot do with system administration scripts. However, these differences are not important to this discussion.

System Requirements

This book is targeted toward computers running any Microsoft® Windows® 2000 operating system (including Microsoft® Windows 2000 Professional, Microsoft® Windows 2000 Server, Windows® 2000 Advanced Server, and Windows® 2000 Datacenter Server).
In addition to having Windows 2000 installed, these computers should be running Windows Script Host version 5.6, which was released after Windows 2000. Some of the scripts in the book rely on features found only in version 5.6. For more information about WSH version 5.6, see "WSH Primer" in this book.

Note If you do not have WSH 5.6, an installation file for Windows 2000 is included on the compact disc that accompanies this book. If your computer is running an operating system other than Windows 2000, see the Windows Script Technologies link on the Web Resources page and click the Microsoft Windows Script 5.6 download link. If you are not sure which version of WSH you have on your computer, see "WSH Primer" in this book for information about determining the WSH version number.

If you are working with multiple operating systems, particularly Windows XP, it is also recommended that you install Windows 2000 Service Pack 2. Without this service pack, scripts running on a Windows 2000-based computer are unable to retrieve information from a Windows XP-based computer (although the Windows XP computers can retrieve information from the Windows 2000 computers).

In addition, most of these scripts require you to be logged on with administrative credentials; this is a requirement for most WMI and ADSI operations. If you want to run a script against a remote computer, you need to be an administrator both on your computer and on that remote computer.

Beyond that, no fancy scripting tools, editors, or integrated development environments (IDEs) are required. As long as you have Notepad installed, you are ready to start writing scripts.
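For a quick version check without opening the primer, the WScript object exposes the host version directly through its standard Version property. A one-line script such as the following (the file name is illustrative; save it as version.vbs and run it with cscript version.vbs) echoes the version of the host that runs it:

```vbscript
' Echo the version of the Windows Script Host running this script.
WScript.Echo "WSH version: " & WScript.Version
```

On a computer with WSH 5.6 installed, this reports 5.6.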
https://technet.microsoft.com/en-us/library/ee176792.aspx
Computer Architecture Lab/FPGA Hello World Example

When one starts to use a new language or environment the first program written is usually the famous 'Hello World' example. What is the 'Hello World' program in hardware, in an FPGA? The smallest project that produces dynamic output is a blinking LED. We will show the steps for a blinking LED example using Altera's Quartus and the Cyclone board Cycore.

Design Flow

All your design files (VHDL files) make up a project in Quartus. A Quartus II project is defined in Quartus II with just three files: projectname.qpf, projectname.qsf, and projectname.cdf (close your Quartus II project before editing the files).

Create a New Project

Start Quartus II and create a new project with:

- File -- New Project Wizard...
- Select a project directory and select a project name. The project name is usually the name of the top-level design entity. In our case hello_world (make sure to name the project this exactly, or you will have compilation problems).
- In the next dialog box the VHDL source files can be added to the project. As we have no VHDL files at the moment we will skip this step.
- We have to select the target device. Choose the right family and device depending on your board.
- We leave the EDA tools settings blank.
- Press Finish at the summary window.

Device and Pin Options

At the default setting the unused pins drive ground. As the default settings in Quartus II for a device are dangerous, we specify more details for our target device:

- Assignments -- Device to open the device properties.
- Press the button Device & Pin Options...
- Important! At the tab Unused Pins select As input tri-stated.
- The LVCMOS, selected in tab Voltage, is the better IO standard to interface e.g.
SRAM devices.
- Close the dialog box and the next with OK.

Hello World in Chisel

To get started with Chisel there is a Hello World example in Chisel available at: Chisel Hello World. The following code shows the Hello World hardware in Chisel.

import Chisel._

/**
 * The blinking LED component.
 */
class Hello extends Module {
  val io = new Bundle {
    val led = UInt(OUTPUT, 1)
  }
  val CNT_MAX = UInt(20000000 / 2 - 1)

  val cntReg = Reg(init = UInt(0, 25))
  val blkReg = Reg(init = UInt(0, 1))

  cntReg := cntReg + UInt(1)
  when(cntReg === CNT_MAX) {
    cntReg := UInt(0)
    blkReg := ~blkReg
  }
  io.led := blkReg
}

/**
 * An object containing a main() to invoke chiselMain()
 * to generate the Verilog code.
 */
object Hello {
  def main(args: Array[String]): Unit = {
    chiselMain(Array("--backend", "v"), () => Module(new Hello()))
  }
}

Compile and generate the Verilog hardware description with a simple make. For FPGA synthesis you can either use the project you set up before or the Quartus project that is part of the Chisel hello world repository.

The Hello World VHDL Example

Add a VHDL file to the project with File -- New... and select VHDL File. Enter the following code and save the file with filename hello_world.

--
-- hello_world.vhd
--
-- The 'Hello World' example for FPGA programming.
--
-- Author: Martin Schoeberl ([email protected])
--
-- 2006-08-04 created
--

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity hello_world is
  port (
    clk : in std_logic;
    led : out std_logic
  );
end hello_world;

architecture rtl of hello_world is

  constant CLK_FREQ   : integer := 20000000;
  constant BLINK_FREQ : integer := 1;
  constant CNT_MAX    : integer := CLK_FREQ/BLINK_FREQ/2-1;

  signal cnt   : unsigned(24 downto 0);
  signal blink : std_logic;

begin

  process(clk)
  begin
    if rising_edge(clk) then
      if cnt = CNT_MAX then
        cnt <= (others => '0');
        blink <= not blink;
      else
        cnt <= cnt + 1;
      end if;
    end if;
  end process;

  led <= blink;

end rtl;

The not yet famous VHDL 'Hello World' example

Compiling and Pin Assignment

The analysis, synthesis, map, and place & route processes are all started with Processing -- Start Compilation. The compiler did not know that our PCB is already done, so the compiler assigned arbitrary pin numbers for the inputs and outputs. If Full Compilation fails you may try to manually correct the top-level entity. However, the pin numbers are fixed for the board. We have to assign the pin numbers for our two ports clk and led. Open the pin assignment window with Assignments -- Pins. Our two ports are already listed in the All Pins window. Double click the field under Location and select the pin number. Enter the following assignments:

clk: look up in the board documentation
led: look up in the board documentation

With the correct pin assignment restart the compilation with Processing -- Start Compilation (or use the play button) and check the correct assignment in the compilation report under Fitter -- Pin-Out-File. The clk pin should be located at nn and led at mm; all unused pins should be listed as RESERVED_INPUT.

FPGA Configuration

Downloading your hardware project into the FPGA is called configuration. There are several ways an FPGA can be configured. Here we describe configuration via JTAG.
UsbBlaster

This section describes configuration via a UsbBlaster cable connected to the PC.

- Connect your UsbBlaster to the PC and FPGA board.
- Start the programmer with Tools -- Programmer.
- Press the button Auto Detect and the programmer window should list the correct device.
- Double click the filename field of the FPGA device and select hello_world.sof.
- Select the checkbox under Program/Configure for the FPGA and press the Start button to configure the FPGA.

The LED should now blink!

Further Experiments

Play with the code and make the following changes:

- Change the blinking frequency.
- Have the on time shorter than the off time of the LED.

Further Information

Tips

- The Chisel or VHDL source files and the project files do not have to be in the same directory. Keeping them in separate directories is a better organization and simplifies reuse.
- Use relative paths wherever possible to enable project sharing.
- In general choose the entity name as filename. However, with different files (and names) for different versions of an entity a project can be easily configured just with different Quartus II projects.

Quartus File Types

The most important file types used by Quartus:

- .qpf - Quartus II Project File. Contains almost no information.
- .qsf - Quartus II Settings File defines the project. VHDL files that make up your project are listed. Constraints such as pin assignments and timing constraints are set here.
- .cdf - Chain Description File. This file stores device name, device order, and programming file name information for the programmer.
- .tcl - Tool Command Language. Can be used in Quartus to automate parts of the design flow (e.g. pin assignment).
- .sof - SRAM Output File. Configuration for Altera devices. Used by the Quartus programmer or by quartus_pgm. Can be converted to various (or too many) different formats. Some are listed below.
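As a hint for the second experiment above (on time shorter than off time), one possible approach is to let the counter free-run over the whole blink period and derive the LED from a comparison instead of a toggle flip-flop. This sketch is not part of the original example; the 25% duty cycle is just an illustration, and it assumes cnt and a full-period CNT_MAX (CLK_FREQ/BLINK_FREQ-1 instead of the original half-period value):

```vhdl
-- Sketch only: cnt now free-runs over the full blink period and the LED
-- is driven high only during the first quarter of it (25% duty cycle).
process(clk)
begin
    if rising_edge(clk) then
        if cnt = CNT_MAX then
            cnt <= (others => '0');
        else
            cnt <= cnt + 1;
        end if;
    end if;
end process;

led <= '1' when cnt < CNT_MAX/4 else '0';
```

Changing the comparison threshold changes the on/off ratio without touching the blink frequency.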
Links

- Quartus II Web Edition - VHDL synthesis, place and route for Altera FPGAs
- Jam STAPL Byte-Code Player - FPGA configuration in batch mode (jbi32.exe)
https://en.wikiversity.org/wiki/Computer_Architecture_Lab/FPGA_Hello_World_Example
break and continue Statements

Java break and continue statements are used to manage program flow. We can use them in a loop to control loop iterations. These statements let us control loop and switch statements by enabling us to either break out of the loop or jump to the next iteration, skipping the rest of the current iteration. In this tutorial, we will discuss each in detail with examples.

In Java, break is a statement that is used to break the current execution flow of the program. We can use the break statement inside a loop, a switch case, etc. If break is used inside a loop then it will terminate the loop. If break is used inside the innermost loop then break will terminate the innermost loop only and execution will resume at the outer loop. If break is used in a switch case then it will terminate the execution after the matched case. The use of break we have covered in our switch case topic.

Syntax:

jump-statement;
break;

Data Flow Diagram of break statement

Example: In this example, we are using break inside the loop; the loop will terminate when the value is 8.

public class BreakDemo1 {
    public static void main(String[] args) {
        for (int i = 1; i <= 10; i++) {
            if (i == 8) {
                break;
            }
            System.out.println(i);
        }
    }
}

Example: using break in a do-while loop

The loop can be any one, whether it is for or while; the break statement will do the same. Here, we are using break inside the do-while loop.

public class BreakDoWhileDemo1 {
    public static void main(String[] args) {
        int i = 1;
        do {
            if (i == 15) {
                i++;
                break;
            }
            System.out.println(i);
            i++;
        } while (i <= 20);
    }
}

Example: break in the innermost loop

In this example, we are using break inside the innermost loop. The inner loop breaks each time when j is equal to 2 and control goes to the outer loop, which starts the next iteration.
public class Demo {
    public static void main(String[] args) {
        for (int i = 1; i <= 2; i++) {
            for (int j = 0; j <= 3; j++) {
                if (j == 2)
                    break;
                System.out.println(j);
            }
        }
    }
}

Output:

0
1
0
1

In Java, the continue statement is used to skip the current iteration of the loop. It jumps to the next iteration of the loop immediately. We can use the continue statement with for loops, while loops, and do-while loops as well.

Syntax:

jump-statement;
continue;

Example: In this example, we are using the continue statement inside the for loop. See, it does not print 5 to the console because at the fifth iteration the continue statement skips the iteration; that is why the print statement does not execute.

public class ContinueDemo1 {
    public static void main(String[] args) {
        for (int i = 1; i <= 10; i++) {
            if (i == 5) {
                continue;
            }
            System.out.println(i);
        }
    }
}

Example: We can use a label along with the continue statement to set flow control. By using a label, we can transfer control to a specified location. In this example, we are transferring control to the outer loop by using a label.

public class ContinueDemo2 {
    public static void main(String[] args) {
        xy:
        for (int i = 1; i <= 5; i++) {
            pq:
            for (int j = 1; j <= 5; j++) {
                if (i == 2 && j == 2) {
                    continue xy;
                }
                System.out.println(i + " " + j);
            }
        }
    }
}

Example: continue in a while loop

The continue statement can be used with a while loop to manage the flow control of the program. As we already know, the continue statement is used to skip the current iteration of the loop. Here too, it will skip the execution if the value of the variable is 5.

public class Demo {
    public static void main(String[] args) {
        int i = 1;
        while (i < 10) {
            if (i == 5) {
                i++;
                continue;
            }
            System.out.println(i);
            i++;
        }
    }
}

Output:

1
2
3
4
6
7
8
9

As we can see from the output, 5 is missing because at the fifth iteration the continue statement made the JVM skip the print statement.
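The examples above use a label only with continue, but break accepts a label in exactly the same way. Here is a small sketch (not from the original tutorial) showing that break with a label exits both loops at once, instead of only the innermost one:

```java
public class LabeledBreakDemo {

    // Collects the (i, j) pairs visited before the labeled break fires.
    static String run() {
        StringBuilder sb = new StringBuilder();
        outer:
        for (int i = 1; i <= 3; i++) {
            for (int j = 1; j <= 3; j++) {
                if (i == 2 && j == 2) {
                    break outer;   // terminates BOTH loops, not just the inner one
                }
                sb.append(i).append(j).append(" ");
            }
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(run());  // 11 12 13 21
    }
}
```

A plain break at the same spot would only end the inner loop, and the outer loop would continue with i = 3.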
https://www.studytonight.com/java/break-continue-statement-in-java.php
Mar 04, 2004 at 03:33:53PM +0100, Tim Plessers wrote:
> Hi Graham,
> Thanks for your quick reply.
> I looked into that mailinglist archive thread you suggested.

Hi Graham,

The current implementation in 2.0 b1 was a hack just to get something working. It, as I think you saw, simply loads the classes in the system classloader. This causes two problems: first, it's a security problem in some environments, and second, no classloader will ever load a class twice.

If you look at how BeanShell implements its own classloaders for setClassPath(), addClassPath() and reloadClasses() you'll see one example of what can be done to work around this in the future. What you have to do is load classes through a classloader that can be "thrown away" when you want to load a new version. So, one possibility would be to have our own classloader either load BeanShell and everything it uses or possibly work with a context classloader that does this. The bootstrap app has to have a few interfaces - whatever it needs to interact with the interpreter - pulled out to its level and shared with the child classloader. Then when you want to execute the script again you can simply throw away the child classloader and create a new one. Object instances created via classes defined in the old (tossed) classloader can still hang around, and we can still use them because they still implement the common interfaces... They will simply prevent the classloader object itself from being garbage collected. The difficulty is in sharing the common interfaces with the parent. We'll have to factor out an interface for the interpreter.

Alternately, we might be able to simply set our own context classloader in BeanShell... as was discussed for the EJB environment a few weeks ago (some of that thread might have been lost from the archive while sf.net was down). In that case, we can add or redefine classes by wrapping any existing context classloader with a new child...
The child simply points to the next one in the chain as its parent. And unlike a normal classloader, ours would not delegate to the parent first (see BshClassLoader for an example). There may be other options as well. I haven't really looked at it yet. For example, we might simply create our own namespaces for the classes by giving them internally unique names... As long as the interpreter knows what the real names are it will look fine to the script... but we can always change our mind and regenerate the classes with new names behind the scenes... To the outside world the Impl names shouldn't matter, because they have to be using them through a shared interface anyway...

Sorry if this is not helpful... but the point was that we have options to address this in the future. And you might try loading bsh and your app through a URLClassLoader that can be thrown away to test this in the interim.

thanks,
Pat

----------------------

I knew that the source tar ball is available. That's what I'm using today. Subversion comes with a conversion tool from CVS, so I believe your investment to CVS today won't be wasted.

regards,
--
Kohsuke Kawaguchi
Sun Microsystems
kohsuke.kawaguchi@...
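The throwaway-child-classloader idea from the reply above can be sketched in plain Java. This is an illustration of the mechanism only, not BeanShell's actual implementation; the class and method names are invented for the example:

```java
import java.net.URL;
import java.net.URLClassLoader;

// Sketch: classes are loaded through a disposable child classloader,
// so a "reload" is nothing more than creating a fresh child.
public class ReloadSketch {

    // The reload operation: drop the old child, make a new one.
    static ClassLoader newChild(ClassLoader parent) {
        return new URLClassLoader(new URL[0], parent);
    }

    // Shared interfaces live in the parent, so a class resolved through two
    // different children is the SAME Class object -- which is why instances
    // created before a "reload" remain usable afterwards.
    static boolean sameClassAcrossReload() {
        try {
            ClassLoader parent = ReloadSketch.class.getClassLoader();
            Class<?> before = newChild(parent).loadClass("java.lang.Runnable");
            Class<?> after  = newChild(parent).loadClass("java.lang.Runnable");
            return before == after;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(sameClassAcrossReload()); // true
    }
}
```

In a real setup the child's URL array would point at the script or application classes, and only those classes would be redefined when the child is replaced.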
https://sourceforge.net/p/beanshell/mailman/beanshell-developers/?viewmonth=200403&viewday=4
The Presentation Model pattern, known as a variation of the Model-View-Controller (MVC) design pattern, usually is a self-contained class that contains the data and event publications of the user interface, but without knowing what kind of user interface it will be. The Presentation Model has key features like:

In my previous article, I described using the Presentation Model pattern in an ASP.NET website, a Windows Forms application, and a WPF application. They are very different technologies, but by using the Presentation Model pattern, it is possible to create a presentation model class that is reused among those different applications. If you haven't seen how the Presentation Model works in an ASP.NET website, Windows Forms application, and a WPF application, please read the article titled Presentation Model in Action. In this article, I am going to reuse the presentation model class from the previous article in SharePoint.

SharePoint is a great platform to build business applications. Having said that, I want to make clear that Windows SharePoint Services (WSS), SharePoint Portal Server (SPS), and Office SharePoint Server (MOSS) are designed to serve different levels of business requirements. SPS or MOSS might not meet your requirement, but WSS might. So, evaluate them one by one to see which one fits. Here, we will do some work on Windows SharePoint Services 3.0. I like WSS 3.0 because it provides a set of core application services and infrastructure, all for free. Building applications on top of WSS 3.0 is, as Bill Carson said in his blog, like building an 80-floor skyscraper and starting at the 60th floor, because the first 59 are literally out of the box.

To make the customer management ASP.NET website from the previous article run in WSS 3.0 requires only a few simple steps. To start with, I have created a new web application on port 45526 and a blank site.
(If you want to learn more about creating a WSS site, please check out this link.)

The presentation model object's lifecycle is the session scope (reasons explained in the previous article), but WSS 3.0 by default disables the session. To enable the session, we can edit the web.config and un-comment this line:

<add name="Session" type="System.Web.SessionState.SessionStateModule" />

Then, find the <pages> element and set its enableSessionState attribute to true.

By default, the WSS 3.0 out-of-the-box setting is to use .NET 2.0. The presentation model itself does not require .NET 3.5, but it is required here because I use .NET 3.5 features such as automatic properties and LINQ in the Demo.DataModel library.

namespace Demo.DataModel
{
    public class CustomerService : ICustomerService
    {
        #region ICustomerService Members

        public IEnumerable<Customer> GetAllCustomers()
        {
            return new List<Customer>
            {
                new Customer{ FirstName = "Jesper", LastName = "Aaberg",
                    Address = "One Microsoft Way, Redmond WA 98052", },
                new Customer{ FirstName = "Martin", LastName = "Bankov",
                    Address = "Two Microsoft Way, Redmond WA 98052", },
                new Customer{ FirstName = "Shu", LastName = "Ito",
                    Address = "Three Microsoft Way, Redmond WA 98052", },
                new Customer{ FirstName = "Kim", LastName = "Ralls",
                    Address = "Four Microsoft Way, Redmond WA 98052", },
                new Customer{ FirstName = "John", LastName = "Kane",
                    Address = "Five Microsoft Way, Redmond WA 98052", },
            };
        }

        public IEnumerable<Customer> GetSearchCustomers(string name)
        {
            return from customer in GetAllCustomers()
                   where customer.FirstName.Contains(name)
                      || customer.LastName.Contains(name)
                   select customer;
        }

        #endregion
    }
}

To turn on .NET 3.5, in web.config, first add the 3.5 assemblies (the second entry was garbled in this copy; the standard System.Core entry is shown):

<assemblies>
  <add assembly="Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" />
  <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
</assemblies>

Then, instruct it to use the .NET 3.5 C# compiler by adding the following section before </configuration> (only the closing tags survived in this copy; the standard compiler registration is reconstructed here):

<system.codedom>
  <compilers>
    <compiler language="c#;cs;csharp" extension=".cs"
        type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
      <providerOption name="CompilerVersion" value="v3.5" />
    </compiler>
  </compilers>
</system.codedom>

Now, the WSS site is session enabled and .NET 3.5 enabled. Next is to copy the files to the WSS site. Copy the App_Code folder that has the CustomerPresentationModel class, the bin folder that has Demo.DataModel.dll and Demo.PresentationModel.dll, and the two user controls, CustomerList.ascx and CustomerEdit.ascx, into the WSS site's root folder. All the files and DLLs are from the previous article; no extra code is needed.

While in a regular ASP.NET website the building blocks are user controls, in WSS 3.0 Web Parts are mainly the building blocks. It could be a frustrating and scary job to build SharePoint Web Parts before WSS 3.0 because of two big barriers, one of which was the Microsoft.SharePoint.WebPartPages.WebPart base class. Once again, thanks to Microsoft having built WSS 3.0 on top of the ASP.NET 2.0 API, these are no longer issues. ASP.NET 2.0 Web Parts can be used in WSS 3.0 directly. And ASP.NET 2.0 Web Parts allow user controls to be loaded directly in a Web Part, with no modifications. (See this on MSDN.)

Next, we will create a WSS feature project and two Web Parts to load a customer list and a customer details user control. In WSS, custom functionalities are grouped into features. Developers create and package business process functions into features. Administrators can install them into SharePoint sites for users to activate and deactivate according to their needs. Here, I have created a feature project using the STSDEV tool. This article is not going to describe how to create a WSS feature and Web Parts. Please refer to other sources for feature definition, development, and deployment information. I highly recommend you visit STSDEV's screencasts to learn how to use this great tool. Here, I just want to show how easy it is to make a Web Part out of the user control.
Listed below is the code of CustomerEditWebPart, which loads the CustomerEdit.ascx user control:

public class CustomerEditWebPart : System.Web.UI.WebControls.WebParts.WebPart
{
    protected override void CreateChildControls()
    {
        Control control = Page.LoadControl("~/CustomerEdit.ascx");
        control.ID = "CustomerEdit";
        Controls.Add(control);
    }
}

That's it. As simple as three lines, the customer details user control is a Web Part now. Build and deploy the project. On the WSS site settings screen, the feature is installed and ready to be activated for use. The customer list and customer details Web Parts are shown in the Web Part gallery once the feature is activated. The next screenshot shows the home page in edit mode when adding the Web Parts to the home page. Another screenshot shows that the demo Web Parts are used in a Web Part page within the document library. It is also possible to create an ASPX page using SharePoint Designer 2007 and add the user controls to that page. But here, we will continue discussing the Presentation Model pattern.

Within the Presentation Model pattern, the presentation model class is nothing but a plain .NET class. This makes it possible to be widely used in ASP.NET websites, Windows Forms, WPF, and SharePoint. The presentation model class holds the data used by the views and sends events when the data changes. Events are broadcast. There can be many views and classes observing the events. A presentation model class does not know anything about the view, whether views are ASP.NET user controls, Windows Forms user controls, WPF user controls, SharePoint Web Parts, or any other classes. There are no limitations on specific application platforms. Views have references to the presentation model class. Views pull data from the presentation model class and listen to the events. Views decide if the data-changed events are relevant and need actions.
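The article's presentation model listing did not survive this copy. A minimal sketch of what such a class looks like is shown below; the member names are assumed, not the article's actual code, and it reuses the Customer and CustomerService types from Demo.DataModel:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch only: a plain .NET class holding UI data and
// publishing change events, with no knowledge of any concrete view.
public class CustomerPresentationModel
{
    private string searchText = "";

    // Views subscribe to this event and re-pull data when it fires.
    public event EventHandler CustomersChanged;

    public string SearchText
    {
        get { return searchText; }
        set
        {
            searchText = value;
            // Data changed: broadcast to every listening view.
            if (CustomersChanged != null)
                CustomersChanged(this, EventArgs.Empty);
        }
    }

    public IEnumerable<Customer> Customers
    {
        get { return new CustomerService().GetSearchCustomers(searchText); }
    }
}
```

An ASP.NET user control, a Windows Forms control, or a SharePoint Web Part can all hold a reference to this one class, bind to Customers, and handle CustomersChanged, which is exactly the loose coupling described next.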
The presentation model class and views are loosely coupled through data and events. Compared to some other development platforms (Java, PHP …), .NET has huge advantages in the areas of data binding and event handling. Using the Presentation Model on other platforms may require some work in synchronizing state between the view and the presentation model and in sending messages between them. But .NET has delegates, anonymous methods, and two-way data binding. We can and should take advantage of these technologies to unleash the power of the Presentation Model pattern.

This article demonstrated using the Presentation Model in a SharePoint site. The code is copied from the previous ASP.NET website without changes. Once again, it proved the Presentation Model to be a powerful and useful pattern.
https://www.codeproject.com/kb/sharepoint/pminsharepoint.aspx
Before we start crafting scripts in Blender we must check whether or not we have all the necessary tools available. After that we will have to familiarize ourselves with these tools so that we can use them with confidence. In this chapter, we will look at:

- What can and cannot be accomplished with Python in Blender
- How to install a full Python distribution
- How to use

With so many things possible there is an awful lot to learn, but fortunately the learning curve is not as steep as it might seem. Let's just type in a quick few lines of Python to put a simple object into our Blender scene, just to prove we can, before we head into deeper waters.

Voila! That's all that is needed to add Suzanne, Blender's famous mascot, to the scene.

Almost anything in Blender is accessible from Python scripts but there are some exceptions and limitations. In this section, we illustrate what this means exactly and which notable features are not accessible to Python (for example, fluid dynamics). The Blender API consists of three major areas of interest:

- Access to Blender objects and their properties, for example a Camera object and its angle property or a Scene object and its objects property
- Access to operations to perform, for example adding a new Camera or rendering an image
- Access to the graphical user interface, either by using simple building blocks or by interacting with the Blender event system

There are also some utilities that do not fit well in any of these categories as they concern themselves with abstractions that have no direct relation to Blender objects as seen by the end user, for example functions to manipulate vectors and matrices. Taken together this means we can achieve a lot of things from Python scripts.
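The "quick few lines" that add Suzanne, referred to earlier, were dropped from this copy. In the Blender 2.49 Python API that this chapter targets, they look roughly like this; it is a sketch consistent with the 2.49 API rather than the book's exact listing, and it only runs inside Blender (the Blender module does not exist in a standalone Python interpreter):

```python
import Blender
from Blender import Scene, Mesh

scn = Scene.GetCurrent()             # the scene we are working in
me = Mesh.Primitives.Monkey()        # a new mesh datablock shaped like Suzanne
ob = scn.objects.new(me, 'Suzanne')  # wrap it in an object and link it to the scene
Blender.Redraw()                     # refresh the 3D view
```

Run it from Blender's built-in text editor (Alt+P) or the interactive console discussed later in the chapter.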
We can:

- Create a new Blender object of any type, including cameras, lamps, meshes, and even scenes
- Interact with the user with a graphical user interface
- Automate common tasks within Blender such as rendering
- Automate maintenance tasks outside of Blender such as cleaning up directories
- Manipulate any property of a Blender object that is exposed by the API

That last statement shows one of the current weaknesses of the Blender API: any object property that the developers add in the Blender C source must be provided separately in the Python API. There is no automatic conversion from internal structures to the interface available in Python and this means that efforts must be duplicated and may lead to omitted functionality. For instance, in Blender 2.49 it is not possible at all to set up a fluid simulation from a script. Although it is possible to set up a particle system, there is no way to set the behavioral characteristics of a boids particle system.

Another problem of the 2.49 Python API is that many of the actions a user may choose to perform on an object have no equivalent in the API. Setting simple parameters such as the camera angle or performing a rotation of any object is easy, and even associating, for example, a subsurface modifier to a mesh is just a few lines of code, but common actions, especially on mesh objects, such as subdividing selected edges or extruding faces, are missing from the API and must be implemented by the script developer.

These problems led the Blender developers to completely redesign the Blender Python API for the 2.5 version, focusing on feature parity (that is, everything possible in Blender should be possible using the Python API). This means that in many situations it will be far easier to get the same results in Blender 2.5.

Finally, Python is used in more places than just standalone scripts: PyDrivers and PyConstraints enable us to control the way Blender objects behave and we will encounter them in later chapters.
Python also allows us to write custom textures and shaders as part of the nodes system as we will see in Chapter 7, Creating Custom Shaders and Textures. Also, it is important to keep in mind that Python offers us far more than just the (already impressive) tools to automate all sorts of tasks in Blender. Python is a general programming language with an extensive library of tools included, so we do not have to resort to external tools for common system tasks such as copying files or archiving (zipping) directories. Even networking tasks can be implemented quite easily as a number of render farm solutions prove. When we install Blender, a Python interpreter is already part of the application. This means that it is not necessary to install Python as a separate application. But there is more to Python than just the interpreter. Python comes with a huge collection of modules that provide a wealth of functionality. Anything from file manipulation to XML processing and more is available, and the best bit is that these modules are a standard part of the language. They are just as well maintained as the Python interpreter itself and (with very few exceptions) available on any platform that Python runs on. The downside is, of course, that this collection of modules is fairly large (40MB or so), so the Blender developers chose to distribute only the bare minimum, primarily the math module. This makes sense if you want to keep the size of the Blender downloads manageable. Many Python developers have come to depend on the standard distribution because not having to reinvent the wheel saves huge amounts of time, not to mention it's not an easy task to develop and test a full-fledged XML library say, just because you want to be able to read a simple XML file. That is why it is now more or less a consensus that it is a good thing to install the full Python distribution. 
Fortunately, the installation is just as easy as the installation of Blender itself, even for end users, as binary installers are provided for many platforms, such as Windows and Mac, also in 64-bit versions. (Distributions for Linux are provided as source code with instructions on how to compile them, but many Linux distributions either already provide Python automatically or make it very easy to install it afterwards from a package repository). Chances are that you already have a full Python distribution on your system. You can verify this by starting Blender and checking the console window (the term console window refers to either the DOSBox that starts in parallel on Windows or the X terminal window where you start Blender from on other systems) to see if it displays the following text: Compiled with Python version 2.6.2. Checking for installed Python... got it! If it does, then there is nothing you have to do and you can skip to The interactive Python console section. If it shows the following message then you do have to take some action: Compiled with Python version 2.6.2. Checking for installed Python... No installed Python found. Only built-in modules are available. Some scripts may not run. Continuing happily. The steps toward a full Python installation for Windows or Mac are as follows: Download a suitable installer from. At the moment of writing, the latest stable 2.6 version is 2.6.2 (used in Blender 2.49). It is generally a good thing to install the latest stable version as it will contain the latest bug fixes. Make sure, however, to use the same major version as Blender is compiled with. It is fine to use version 2.6.3 when it is released even as Blender is compiled with version 2.6.2. But if you use an older version of Blender that is compiled with Python 2.5.4 you have to install the latest Python 2.5.x release (or upgrade to Blender 2.49, if that is an option). 
Run the installer: on Windows the installer offers you a choice of where to install Python. You can choose anything you like here, but if you choose the default, Blender will almost certainly find the modules installed there without the need to set the PYTHONPATH variable (see below).

(Re)start Blender. The Blender console should show the text:

Compiled with Python version 2.6.2.
Checking for installed Python... got it!

If it doesn't, it might be necessary to set the PYTHONPATH variable. Refer to the Blender wiki for detailed information.

On Ubuntu Linux, the first step is not needed and installing can be done by using the built-in package manager:

sudo apt-get update
sudo apt-get install python2.6

Other distributions might use a different package management system, so you might have to check the documentation for that.

Under Windows it might be necessary to set the PYTHONPATH environment variable, although this is unlikely when using the provided packages. To see where Blender actually looks for modules you may look at Python's sys.path variable. To do this you have to start up Blender's interactive Python console. Note that we use a different and possibly confusing notion of console here: the DOSBox or the terminal window that is started alongside Blender's main application window, and where various informational messages are displayed, is referred to as a console as well! The Python interactive console that we want to use now is started from the script window.

Once the interactive Python console is started, type the following commands:

import sys
print sys.path

Note that the interactive Python console does not show any prompt (unless expecting indentation, for example within a for loop) but anything you type will be in a different color (white on black by default) from what is returned (which will be blue or black). The two preceding commands give us access to Python's sys module, which contains various variables with system information.
The sys.path variable that we print here will hold all of the directories that will be searched when we try to import a module. (Note that importing sys will always work because sys is a built-in module.) The output will be something similar to:

    ['C:\\Program Files\\Blender Foundation\\Blender',
     'C:\\Program Files\\Blender Foundation\\Blender\\python26.zip',
     'C:\\Python26\\Lib', 'C:\\Python26\\DLLs', 'C:\\Python26\\Lib\\lib-tk',
     'C:\\Program Files\\Blender Foundation\\Blender', 'C:\\Python26',
     'C:\\Python26\\lib\\site-packages',
     'C:\\Python26\\lib\\site-packages\\PIL',
     'C:\\PROGRA~1\\BLENDE~1\\Blender',
     'C:\\Documents and Settings\\Michel\\Application Data\\Blender Foundation\\Blender\\.blender\\scripts',
     'C:\\Documents and Settings\\Michel\\Application Data\\Blender Foundation\\Blender\\.blender\\scripts\\bpymodules']

If your Python installation directory is not in this list then you should set the PYTHONPATH variable before starting Blender.

The interactive Python console is a good platform to explore built-in modules as well. Because Python comes equipped with two very useful functions, help() and dir(), you have instant access to a lot of information contained in Blender's (and Python's) modules, as a lot of documentation is provided as part of the code. For people not familiar with these functions, here are two short examples, both run from the interactive Python console. To get information on a specific object or function, type:

    help(Blender.Lamp.Get)

The information will be printed in the same console:

    Help on built-in function Get in module Blender.Lamp:

    Lamp.Get (name = None):
    Return the Lamp Data with the given name, None if not found,
    or Return a list with all Lamp Data objects in the current
    scene, if no argument was given.

The help() function will show the associated docstring of functions, classes, or modules. In the previous example, that is the information provided with the Get() method (function) of the Lamp class.
A docstring is the first string defined in a function, class, or module. When defining your own functions, it is a good idea to provide one as well. This might look like this:

    def square(x):
        """ calculate the square of x. """
        return x*x

We can now apply the help() function to our newly defined function as we did before:

    help(square)

The output then shows:

    Help on function square in module __main__:

    square(x)
        calculate the square of x.

In the programs that we will be developing, we will use this method of documenting where appropriate.

The dir() function lists all members of an object. That object can be an instance, but also a class or module. For example, we might apply it to the Blender.Lamp module:

    dir(Blender.Lamp)

The output will be a list of all members of the Blender.Lamp module. You can spot the Get() function that we encountered earlier:

    ['ENERGY', 'Falloffs', 'Get', 'Modes', 'New', 'OFFSET', 'RGB',
     'SIZE', 'SPOTSIZE', 'Types', '__doc__', '__name__',
     '__package__', 'get']

Once you know which members a class or module has, you can then check for any additional help information for these members by applying the help() function. Of course, both dir() and help() are most useful when you already have some clue where to look for information. But if so, they can be very convenient tools indeed.

It is possible to use any editor that you like to write Python scripts and then import the scripts as text files, but Blender's built-in text editor will probably be adequate for all programming needs. It features conveniences such as syntax highlighting, line numbering, and automatic indentation, and gives you the possibility to run a script directly from the editor. The ability to run a script directly from the editor is a definite boon when debugging because of the direct feedback that you get when encountering an error: you will not only get an informative message, but the offending line will also be highlighted in the editor.
What is more, the editor comes with many plug-ins, of which the automatic suggestion of members and the documentation viewer are very convenient for programmers. And of course, it is possible to write additional plug-ins yourself.

A new text is created in the text editor by selecting ADD NEW from the Menu button drop-down. The default name for this new text will be TX:Text.001, but you may change it to something more meaningful by clicking on the name and changing it. Note that if you would like to save this text to an external file (with Text | Save As...), the name of the text is distinct from the filename (although in general it is a good idea to keep these the same to avoid confusion). It is not mandatory to save texts as external files; texts are Blender objects that are saved together with all other information when you save your .blend file.

External files may be opened as texts by selecting OPEN NEW from the Menu button drop-down instead of ADD NEW. If for some reason an external file and an associated text are out of sync when Blender is started, an out of sync button is displayed. When clicked, it displays a number of options to resolve the issue.

Once a new or existing text is selected, the menu bar at the bottom of the screen is updated with some additional menu options:

- The Text file menu gives access to options to open or save a file or to run the script in the editor. It also presents a number of template scripts that may be used as a basis for your own scripts. If you select one of these templates, a new text buffer is created with a copy of the selected template.
- The Edit menu contains cut-and-paste functionality as well as options to search and replace text or jump to a chosen line number.
- The Format menu has options to indent and unindent selected text as well as options to convert whitespace. The latter option can be very helpful when the Python interpreter complains about unexpected indentation levels although there seems to be nothing amiss with your file.
If that happens, you possibly have mixed tabs and spaces in a way that confuses Python (as they are different as far as the interpreter is concerned), and a possible way out is to convert the selected text to spaces first and then back to tabs. This way mixed spaces and tabs will be used in a uniform way again.

To get used to the editor, create a new text buffer by choosing Text | New and type in the following example lines:

    import sys
    print sys.path

Most keys on the keyboard will behave in a familiar way, including Delete, Backspace, and Enter. The shortcut keys for cutting, pasting, and copying are listed in the Edit menu as Alt + X, Alt + V, and Alt + C respectively, but the Ctrl key equivalents Ctrl + X, Ctrl + V, and Ctrl + C (familiar to Windows users) work just as well. A full keyboard map can be consulted on the Blender wiki. Selecting portions of the text can be achieved by clicking and dragging the mouse, but you can also select text by moving the text cursor around while pressing the Shift key.

Text will be uncolored by default, but reading scripts can be made a lot easier on the eye by enabling syntax highlighting. Clicking on the little AB button will toggle this (it will be black and white when syntax highlighting is off and colored when on). Like many aspects of Blender, text colors can be customized in the themes section of the User Preferences window.

Another feature that is very convenient to enable, especially when debugging scripts, is line numbering. (You might write faultless code in one go, but unfortunately yours truly is less of a genius.) Every Python error message that is shown will have a filename and a line number, and the offending line will be highlighted. But the lines of the calling function(s), if any, will not be highlighted although their line numbers will be shown in the error message, so having line numbers enabled will let you quickly locate the calling context of the trouble spot.
Line numbering is enabled by clicking on the lines button. Running a script is done by pressing Alt + P. Nothing is displayed in the editor when no errors are encountered, but the output will be shown on the console (that is, the DOSBox or X terminal Blender was started from, not the Python interactive console that we encountered earlier).

Tradition demands every book about programming to have a "hello world" example, and why would we offend people? We will implement, and run, a simple object instantiating script and show how to integrate it in Blender's script menu. We will also show how to document it and make an entry in the help system. Finally, we will spend some words on the pros and cons of distributing scripts as .blend files or as scripts to be installed in the scriptdir by the user.

Let's write some code! You can type the following lines directly into the interactive Python console, or you can open a new text in Blender's text editor and then press Alt + P to run the script. It is a short script, but we'll go through it in some detail as it features many of the key aspects of the Blender Python API.

    #!BPY

    import Blender
    from Blender import Scene, Text3d, Window

    hello = Text3d.New("HelloWorld")
    hello.setText("Hello World!")

    scn = Scene.GetCurrent()
    ob = scn.objects.new(hello)

    Window.RedrawAll()

The first line identifies this script as a Blender script. This is not necessary to run the script, but if we want to be able to make this script a part of Blender's menu structure we will need it, so we had better get used to it right away. You will find the second line (the import of the Blender module) in virtually any Blender script because it gives us access to the classes and functions of the Blender Python API. Likewise, the third line gives us access to the specific submodules of the Blender module that we will need in this script.
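The choice between writing from Blender import Scene and accessing Blender.Scene is purely one of convenience: in plain Python terms, both forms bind the very same module object. The following standalone snippet demonstrates this with the standard library's os.path (the Blender module itself is only available from inside Blender):

```python
# Both import styles bind the very same module object; 'from ... import'
# merely creates a shorter local name for it.
import os.path        # makes the submodule reachable as os.path
from os import path   # binds the same submodule to the local name 'path'

print(path is os.path)  # True: both names refer to one module object
```

So importing submodules explicitly saves typing without duplicating anything in memory.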
We could access them as members of the Blender module of course (for example, Blender.Scene), but importing them explicitly saves some typing and enhances readability.

The next two lines first create a Text3d object and assign it to the variable hello. The Text3d object will have the name HelloWorld in Blender, so users can refer to this object by this name. This is also the name that will be visible in the Outliner window and in the lower-left corner if the object is selected. If there already exists an object of the same type with this name, Blender adds a numerical suffix to the name to make it unique. For example, HelloWorld might become HelloWorld.001 if we run the script twice.

By default, a newly created Text3d object will contain the text Text, so we change that to Hello World! with the setText() method. A newly created Blender object is not visible by default; we have to associate it with a Scene, so the next few lines retrieve a reference to the current scene and add the Text3d object to it. The Text3d object is not added directly to the scene: the scene.objects.new() method embeds the Text3d object in a generic Blender object and returns a reference to the latter. The generic Blender object holds information common to all objects, such as position, whereas the Text3d object holds specific information, such as the text font.

Finally, we tell the window manager to refresh any window that needs a refresh due to the addition of a new object.

Your own script doesn't have to be a second-class citizen. It can be made part of Blender on a par with any of the bundled scripts that come with Blender. It can be added to the Add menu present in the header at the top of the View3D window.

Note: Actually, the Add menu is present in the header at the bottom of the user preferences window, but as this window is situated above the View3D window, and is by default minimized to just the header, it looks as if it's a header at the top of the View3D window.
Many users are so accustomed to it that they see it as part of the View3D window.

Our script may supply information to Blender's help system just like any other script. The following few lines of code make that possible:

    """
    Name: 'HelloWorld'
    Blender: 249
    Group: 'AddMesh'
    Tip: 'Create a Hello World text object'
    """

We start the script with a standalone string containing several lines. Each line starts with a label followed by a colon and a value. The colon should follow the label immediately; there should not be any intervening space, otherwise our script will not show up in any menu.

The labels at the beginning of each line serve the following purpose:

- Name (a string) defines the name of the script as it appears in the menu.
- Blender (a number) defines the minimum version of Blender needed to use the script.
- Group (a string) is the submenu of the Scripts menu under which this script should be grouped. If our script is to appear under the Add | Mesh menu in the View3D window (also accessible by pressing Space), this should read AddMesh. If it should be under a different submenu of the Scripts menu, it could read, for example, Wizards or Object.

Besides the necessary labels, the following optional labels might be added:

- Version (a string) is the version of the script in any format you like.
- Tip (a string) is the information shown in the tooltip when hovering over the menu item in the Scripts menu. If the script belongs to the group AddMesh, no tooltip will be shown even if we define one here.

Blender has an integrated help system that is accessible from the Help menu at the top of the screen. It gives access to online resources and to information on registered scripts via the Scripts Help Browser entry. Once selected, it shows a collection of drop-down menus, one for each group, where you can select a script and view its help information.
If we want to enter our script in the integrated help system we need to define some additional global variables:

    __author__ = "Michel Anders (varkenvarken)"
    __version__ = "1.00 2009/08/01"
    __copyright__ = "(c) 2009"
    __url__ = ["author's site,"]
    __doc__ = """
    A simple script to add a Blender Text object to a scene.
    It takes no parameters and initializes the object to contain
    the text 'Hello World'
    """

These variables should be self-explanatory except for the __url__ variable: this one takes a list of strings where each string consists of a short description, a comma, and a URL. The resulting help screen will look like this:

Now all that is left to do is to test the script and then place it in an appropriate location. We can test the script by pressing Alt + P. If no errors are encountered, this will result in our Hello World Text3d object being added to the scene, but the script will not be appended to the Add menu yet.

If a script is to be added to the Add menu, it has to reside in Blender's script directory. To do this, first save the script in the text buffer to a file with a meaningful name. Next, make sure that this file is located in Blender's script directory. This directory is called scripts and is a subdirectory of .blender, Blender's configuration directory. It is either located in Blender's installation directory or (on Windows) in the Application Data directory. The easiest way to find ours is to simply look at the sys.path variable again to see which listed directory ends in .blender\scripts.

Scripts located in Blender's scripts directory will be automatically executed on startup, so our hello world script will be available any time we start up Blender. If we want Blender to reexamine the script directory (so that we don't have to restart Blender to see our new addition), we can choose Scripts | Update menus in the interactive console.

As you may have noticed, the word object is used in two different (possibly confusing) ways.
In Blender almost anything is referred to as an Object. A Lamp, for instance, is an Object, but so is a Cube or a Camera. Objects are things that can be manipulated by the user and have, for example, a position and a rotation.

In fact, things are a little bit more structured (or complicated, as some people say): any Blender object contains a reference to a more specific object called the data block. When you add a Cube object to an empty scene you will have a generic object at some location. That object will be called Cube and will contain a reference to another object, a Mesh. This Mesh object is called Cube by default as well, but this is fine as the namespaces of different kinds of objects are separate.

This separation of properties common to all objects (such as position) and properties specific to a single type of object (such as the energy of a Lamp or the vertices of a Mesh) is a logical way to order sets of properties. It also allows for the instantiation of many copies of an object without consuming a lot of memory; we can have more than one object that points to the same Mesh object, for example. (The way to achieve that is to create a linked duplicate, using Alt + D.) The following diagram might help to grasp the concept:

Another way the word object is used is in the Python sense. Here we mean an instance of a class. The Blender API is object-oriented and almost every conceivable piece of structured data is represented by an object instanced from a class. Even fairly abstract concepts such as an Action or an IPO (abstract in the sense that they do not have a position somewhere in your scene) are defined as classes.

How we refer to the Blender or the Python sense of the word object in this book will mostly be obvious from the context if you keep this distinction in mind. But if not, we tend to write the Blender sense as Object and the Python sense as object or object instance.
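To make the Object versus data block distinction concrete, here is a small toy model in plain Python. It is emphatically not the Blender API, just an illustration of the structure described above: two generic objects sharing one mesh data block, as a linked duplicate (Alt + D) would:

```python
class ToyMesh:
    """Stands in for a type-specific data block, such as a Blender Mesh."""
    def __init__(self, name, verts):
        self.name = name
        self.verts = verts

class ToyObject:
    """Stands in for a generic Blender Object: properties common to all
    objects plus a reference to a type-specific data block."""
    def __init__(self, name, data, loc):
        self.name = name
        self.loc = loc      # a common property: every object has a position
        self.data = data    # a reference to the (possibly shared) data block

mesh = ToyMesh('Cube', [(-1, -1, -1), (1, 1, 1)])
a = ToyObject('Cube', mesh, loc=(0, 0, 0))
b = ToyObject('Cube.001', mesh, loc=(3, 0, 0))   # a 'linked duplicate'

mesh.verts.append((0, 0, 2))  # edit the shared data block once ...
print("%d %d" % (len(a.data.verts), len(b.data.verts)))  # 3 3: both objects see it
```

Only one copy of the vertex list exists in memory, yet both objects can be placed independently, which is exactly the benefit the text above describes.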
Adding other types of objects is, in many cases, just as straightforward as adding our text object. If we want our scene to be populated in a way that enables us to render it, we have to add a camera and a lamp to make things visible. Adding a camera to the same scene could be done like this (assuming we still have a reference to our active scene in the scn variable):

    from Blender import Camera

    cam = Camera.New()           # creates new camera data
    ob = scn.objects.new(cam)    # adds a new camera object
    scn.setCurrentCamera(ob)     # makes this camera active

Note that the Camera object is again different from the actual camera data. A Camera object holds camera-specific data, such as the viewing angle, and a Blender object holds data common to all objects, notably its position and rotation. We will encounter cameras again later and see how we can point them and set the view angle.

Lamps follow pretty much the same pattern:

    from Blender import Lamp

    lamp = Lamp.New()            # create a new lamp
    ob = scn.objects.new(lamp)

Again, the Lamp object holds lamp-specific data such as its type (for example, spot or area) or its energy, while the Blender object that encapsulates it defines its position and rotation.

This pattern is similar for a Mesh object, but the situation is subtly different here because a mesh is a conglomerate of vertices, edges, and faces, among other properties. Like a Lamp or a Camera, a Mesh is a Blender object that encapsulates another object, in this case a Blender.Mesh object. But unlike Blender.Lamp or Blender.Camera objects, it does not stop there. A Blender.Mesh object itself may contain many other objects. These objects are vertices, edges, and faces, and each of these may have a number of associated properties: they may be selected or hidden, and may have a surface normal or an associated UV-texture.

Besides any associated properties, a single vertex is basically a point in 3D space.
In a Blender.Mesh object, any number of vertices are organized in a list of Blender.Mesh.MVert objects. Given a Mesh object me, this list may be accessed as me.verts.

An edge is a line connecting two vertices, represented in Blender by a Blender.Mesh.MEdge object. Its main properties are v1 and v2, which are references to MVert objects. The list of edges in a Mesh object can be accessed as me.edges.

An MFace object is, like an edge, basically a list of references to the vertices that define it. If we have an MFace object face, this list may be accessed as face.verts.

This jumble of objects containing other objects may be confusing, so keep the previous diagram in mind and let's look at some example code to clarify things. We will define a cube. A cube consists of eight vertices connected by twelve edges. The eight vertices also define the six sides (or faces) of the cube.

    from Blender import Mesh, Scene, Window

    corners = [ (-1,-1,-1), (1,-1,-1), (1,1,-1), (-1,1,-1),
                (-1,-1, 1), (1,-1, 1), (1,1, 1), (-1,1, 1) ]

    sides = [ (0,1,2,3), (4,5,6,7), (0,1,5,4),
              (1,2,6,5), (2,3,7,6), (3,0,4,7) ]

    me = Mesh.New('Cube')
    me.verts.extend(corners)
    me.faces.extend(sides)

    scn = Scene.GetCurrent()
    ob = scn.objects.new(me, 'Cube')
    Window.RedrawAll()

(Note that Window is added to the import line because the last statement uses it.) We start by defining a list of corners. Each of the eight corners is represented by a tuple of three numbers: its x, y, and z coordinates. Next we define a list of tuples defining the faces of the cube. The sides of a cube are squares, so each tuple holds four integers, each an index into the list of corners. It is important to get the order of these indices right: if we listed the first side as (0,1,3,2) we would get a twisted or bow-tie face.

Now we can define a Mesh object and name it Cube (the line me = Mesh.New('Cube') in the preceding code). As noted earlier, the vertices of a Mesh object are accessible as a list named verts.
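As an aside before we extend the mesh: a cube's twelve edges are all edges of its six faces, which is why we will not need to add them separately. The following standalone check (ordinary Python, no Blender required) derives the edge set from the same sides list:

```python
sides = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
         (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]

def edges_of(faces):
    """Collect the unique edges of a list of faces. Each edge is stored
    with its vertex indices sorted, so (1, 0) and (0, 1) count as one."""
    edges = set()
    for face in faces:
        for i in range(len(face)):
            # consecutive corners form an edge; the last corner wraps
            # around to the first one
            v1, v2 = face[i], face[(i + 1) % len(face)]
            edges.add(tuple(sorted((v1, v2))))
    return edges

print(len(edges_of(sides)))  # 12, exactly the edge count of a cube
```

This is essentially the bookkeeping Blender performs for us when we extend the list of faces.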
The verts list has an extend() method that may take a list of tuples representing vertex positions to define additional MVert objects in our Mesh. Likewise, we can add extra faces to the list of faces of a Mesh object by calling the extend() method of faces with a list of tuples. Because all edges of the cube are edges of the faces, there is no need to add any edges separately; this is done automatically when we extend() the list of faces.

The Mesh object that we have defined so far can now be embedded in a Blender object that can be added to the active scene. Note that it is perfectly acceptable to have a Mesh object and a Blender Object with the same name (Cube in this case) because different kinds of objects in Blender have separate namespaces. In the Blender GUI, names are always prefixed with a two-letter prefix to distinguish them (for example, LA for a lamp, ME for a mesh, or OB for a Blender object).

When creating Mesh objects, a great deal of attention is needed to get all the vertices, edges, and faces added and correctly numbered, and this is just the tip of the iceberg when creating meshes. In Chapter 2, Creating and Editing Objects, we will see what hides underwater.

In the previous sections, we saw that in order to integrate our script in Blender's menu system and help system we had to place the script in the .blender\scripts directory. A fully integrated script can be a big advantage, but this method has a clear drawback: the person who wants to use this script has to put it in the correct directory. This might be a problem if this person does not know how to locate this directory or does not have permission to place scripts in that directory. That last problem may be overcome by setting an alternative script directory in the User Preferences, but not everybody might be that tech oriented.

A viable alternative is distributing a script as a text within a .blend file. A .blend file can be saved with the script clearly visible in the main window, and one of the first comment lines of the script might read ALT-P to start this script. This way, the script can be used by anybody who knows how to open a .blend file. An additional advantage is that it is easy to bundle extra resources in the same .blend file. For example, a script might use certain materials or textures, or you might want to include sample output from your script.

The only thing that is very difficult to do is distribute Python modules this way. You can use the import statement to access other text files, but this may pose problems (see Appendix B). If you have a lot of code and it is organized in modules, you and your users are probably still better off if you distribute it as a ZIP file with clear instructions on where to unpack it.

For Pynodes (or dynamic nodes, see Chapter 7) you don't have a choice: Pynodes can refer only to Python code contained in text files within a .blend file. This is not really a limitation though, as these Pynodes are an integral part of a material, and Blender materials can be distributed only within a .blend file. When these materials are linked to or appended, their associated nodes and any texts associated with Pynodes are linked to or appended as well, completely hiding from the end user the way a material is actually implemented.

When developing Python programs in Blender it is important to understand what functionality is provided by the API and, even more so, what is not. The API basically exposes all data and provides functions for manipulating that data. Additionally, the API provides the developer with functions to draw on the screen and to interact with the user interface and windowing system.
What the Blender API does not provide is object-specific functionality beyond setting simple properties; it especially lacks functions to manipulate meshes at the level of vertices, edges, or faces other than adding or removing them. This means that very high-level or complex tasks, such as adding a subsurface modifier to a Mesh object or displaying a file selector dialog, are as simple as writing a single line of code, while functions as essential and seemingly simple as subdividing an edge or selecting an edge loop are not available. That doesn't mean these tasks cannot be accomplished, but we will have to code them ourselves. Many examples in this book will therefore refer to a module called Tools that we will develop in the next chapters and that will contain useful tools, from extruding faces to bridging face loops. Where appropriate and interesting we will highlight the code in this module as well, but mainly it is a device to squirrel away any code that might distract us from our goals.

The following sections give a short and very high-level overview of what is available in the Blender API. Many modules and utilities will feature prominently in the next chapters as we develop practical examples. This overview is meant as a way to help you get started if you want to find out about some functionality and do not know where to look first. It is nowhere near a full documentation of the Blender API; for that, check the most recent version of the API documentation online. You can find the link in Appendix A, Links and Resources.

The Blender module serves as a container for most other modules and provides functionality to access system information and perform general tasks.
For example, information such as the Blender version that you are using can be retrieved with the Get() function:

    import Blender
    version = Blender.Get('version')

Incorporating all externally referenced files in a .blend file (called packing in Blender) or saving your current Blender session to a .blend file are other examples of functionality implemented in the top-level Blender module:

    import Blender
    Blender.PackAll()
    Blender.Save('myfile.blend')

Each Blender object type (Object, Mesh, Armature, Lamp, Scene, and so on) has an associated module, which is a submodule of the top-level Blender module. Each module supplies functions to create new objects and find objects of a given type by name. Each module also defines a class with the same name that implements the functionality associated with the Blender object.

Note that in Blender, not only the things directly visible in your scene, such as meshes, lamps, or cameras, are objects, but also materials, textures, particle systems, and even IPOs, actions, worlds, and scenes. Many other data items in Blender are not Objects in the Blender sense (you cannot append them from another .blend file or move them about in your scene) but are objects in the Python sense. For example, vertices, edges, and faces within a mesh are implemented as the classes Blender.Mesh.MVert, Blender.Mesh.MEdge, and Blender.Mesh.MFace respectively.

Many modules also have submodules of their own; for example, the Blender.Scene module provides access to the rendering context by way of the Blender.Scene.Render module. Among other things, this module defines a RenderData class that allows you to render a still image or animation.

So with what we know now it is possible to draw two slightly different family trees of Blender objects.
The first one illustrates what kind of Blender objects may be contained within, or referred to by, another Blender object, where we limit ourselves to the less abstract objects:

Of course, the diagram above is greatly simplified, as we left out some less relevant objects and as it only illustrates a single kind of relationship. There are of course many more types of relationship in a scene, such as parent-child relationships or constraints. We may contrast the previous diagram with one that shows in which module each type of object (a class) is defined:

The differences are quite noticeable and are important to keep in mind, especially when looking for specific information in the Blender API documentation. Don't expect to find information on a Curve object in the documentation for the Blender.Object module: because a Blender Curve is a specific Blender Object, the Curve class is defined and documented in the Blender.Curve module. In general you can expect the documentation of a class to be in the module of the same name.

Besides the Blender module, there is another top-level module called bpy that provides a unified way to access data. It is considered experimental, but it is stable and might be used as a more intuitive way of accessing objects. For example, if we want to access an object called MyObject we would normally do something like this:

    import Blender
    ob = Blender.Object.Get(name='MyObject')

With the bpy module we might rephrase this:

    import bpy
    ob = bpy.data.objects['MyObject']

Likewise, to get access to the active scene object we might write this:

    import Blender
    scene = Blender.Scene.GetCurrent()

Which can be written in an alternative way:

    import bpy
    scene = bpy.data.scenes.active

Which one to prefer is a matter of taste. The bpy module will be the only way to access data in the upcoming Blender 2.5, but the changes in Blender 2.5 go deeper than just this data access, so don't be fooled by the superficial similarity of the module name!
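The relation between the two access styles can be sketched in a few lines of plain Python: a Get() style function and bpy style mapping access are simply two views of one underlying collection. (This is an illustration of the idea only, not Blender's actual implementation, and ToyCollection is a made-up name.)

```python
class ToyCollection:
    """A named collection offering both access styles over the same data."""
    def __init__(self):
        self._items = {}

    def add(self, name, value):
        self._items[name] = value

    def Get(self, name=None):
        """Function style: one item by name, or all items when called
        without an argument (mirroring how the text describes Get())."""
        if name is None:
            return list(self._items.values())
        return self._items.get(name)     # None when nothing matches

    def __getitem__(self, name):
        """Mapping style: collection['name'] access."""
        return self._items[name]

objects = ToyCollection()
objects.add('MyObject', 'object data')

print(objects.Get('MyObject') == objects['MyObject'])  # True: same stored value
```

Whichever style a library exposes, the data behind it is the same; the choice is an interface decision, which is why the text calls it a matter of taste.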
Access to Blender's windowing system is provided by the Blender.Draw module. Here you will find classes and functions to define buttons and pop-up menus and ways to interact with the user. The types of graphical elements that you can display using the Draw module are limited to the commonly used ones, and customization is not an option. More advanced functions are provided in the Blender.BGL module, which gives you access to virtually all OpenGL functions and constants, allowing you to draw almost anything on screen and to let the user interact in many different ways.

Finally, there are a number of modules that encapsulate various functionality that does not fit in any of the previous categories:

- Blender.Library: Blender allows you to append (that is, import) or link (refer to) objects in another .blend file. Another way to look at this is that a .blend file can act as a library where you can store your assets. And because almost anything is an object in Blender, almost any asset can be stored in such a library, be it models, lamps, textures, or even complete scenes. The Blender.Library module provides script authors the means to access those libraries.
- Blender.Mathutils and Blender.Geometry: These modules contain, among other things, the Vector and Matrix classes with associated functions to apply all sorts of vector algebra to Blender objects. With the functions provided in these modules you will be able to rotate or shear your object's coordinates or calculate the angle between two vectors. Many more convenience functions are provided, and these will make many surprise appearances in the examples in this book. Don't worry, we will provide explanations where necessary for people not so at home with vector math.
- Blender.Noise: Noise is used in generating all the (apparently) random patterns that form the basis of many of the procedural textures in Blender. This module gives access to the same routines that provide the noise for those textures.
This might not only be useful in generating your own textures but might, for instance, be used in the random placement of objects or implementing a slightly shaky camera path to add realism to your animation.

Blender.Registry: The data inside scripts, whether local or global, is not stored once a script exits. This can be very inconvenient, for example if you want to save the user preferences for your custom script. The Blender.Registry module provides ways to store and retrieve persistent data. It does not, however, provide any means to store this data on disk, so the persistence is only for the duration of a Blender session.

Blender.Sys: To quote this module's documentation: "This module provides a minimal set of helper functions and data. Its purpose is to avoid the need for the standard Python module os, in special os.path, though it is only meant for the simplest cases." As we argued earlier, it is generally advisable to install a full Python distribution which among other things includes the os and os.path modules that give you access to a far wider range of functionality. Therefore, we will not use the Blender.sys module in this book.

Blender.Types: This module provides constants that can be used for the type checking of objects. Python's built-in function type() returns the type of its argument. This makes it quite easy to check whether an object has a given type by comparing it to one of the constants in this module. If we would want to make sure an object is a Curve object we could, for example, do it like this:

if type(someobject) == Blender.Types.CurveType :
   ... do things only allowed for Curve objects ...

In this chapter, we have seen how to extend Blender with a full Python distribution and familiarized ourselves with the built-in editor. This enabled us to write scripts that, although simple, were fully integrated in Blender's scripting menu and help system.
We covered many subjects in detail including:

- What can and cannot be accomplished with Python in Blender
- How to install a full Python distribution
- How to use the built-in editor

In the next chapter, we take this knowledge a step further to create and edit complex objects and we will see how to define a graphical user interface.
https://www.packtpub.com/product/blender-2-49-scripting/9781849510400
I'm writing a program for my java class that has to solve quadratic equations. I have separate methods that call the private variables a, b, and c. I also have separate methods for the discriminant and both the positive and negative roots. My code is as follows:

import java.util.Scanner;

public class QuadraticEquation {
    private double a;
    private double b;
    private double c;

    public static void main(String[] args){
        Scanner numberInput = new Scanner(System.in);
        System.out.print("Enter a, b, c: ");
        double a = numberInput.nextDouble();
        double b = numberInput.nextDouble();
        double c = numberInput.nextDouble();
        QuadraticEquation q1 = new QuadraticEquation(a, b, c);
        System.out.println(getRoot1());
    }//end main method

    public QuadraticEquation(double a, double b, double c){
        this.a = a;
        this.b = b;
        this.c = c;
    }//end constructor

    public double getDiscriminant(){
        double discriminant = Math.sqrt((getB()*getB()) - (4*getA()*getC()));
        return discriminant;
    }//end getDiscriminant()

    public double getA(){
        this.a = a;
        return this.a;
    }//end getA()

    public double getB(){
        this.b = b;
        return this.b;
    }//end getB()

    public double getC(){
        this.c = c;
        return this.c;
    }//end getC

    public double getRoot1(){
        double root1 = (-getB() + getDiscriminant()) / (2*a);
        return root1;
    }//end getRoot1()

    public double getRoot2(){
        double root2 = (-getB() - getDiscriminant()) / (2*a);
        return root2;
    }//end getRoot2()
}

My problem is that I get an error that says "Cannot make a static reference to the non-static method getRoot1() from the type QuadraticEquation". I know this means that it is not working because my method is not a static method, but when I try to change it to a static method I get errors in all my other methods. I have no idea what to do; any help would be appreciated.
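For reference, the error the compiler reports goes away once the method is called on the instance that main creates, rather than from the static context. The sketch below is a minimal, simplified illustration of that fix (the class and method names here are hypothetical, not the poster's exact code):

```java
// Minimal sketch: main is static, so non-static methods must be
// called through an object reference such as q1.
public class QuadraticDemo {
    private final double a, b, c;

    QuadraticDemo(double a, double b, double c) {
        this.a = a;
        this.b = b;
        this.c = c;
    }

    double discriminant() { return Math.sqrt(b * b - 4 * a * c); }
    double root1() { return (-b + discriminant()) / (2 * a); }
    double root2() { return (-b - discriminant()) / (2 * a); }

    public static void main(String[] args) {
        QuadraticDemo q1 = new QuadraticDemo(1, -3, 2); // x^2 - 3x + 2
        System.out.println(q1.root1()); // called on q1, not statically
        System.out.println(q1.root2());
    }
}
```

Calling `q1.root1()` instead of `getRoot1()` resolves the static-reference error without making the methods static.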
http://www.javaprogrammingforums.com/whats-wrong-my-code/33560-cannot-call-method-because-non-static.html
Hello everyone, welcome to this new article, where we are going to explore the React Native box shadow UI concept. This is a pretty common way to make cards in mobile apps. On the web it's pretty straightforward because you can use CSS, but mobile is a bit different for every platform. Without further ado, let's dig into it. This is the result we will try to achieve for this example.

The Concept

The concept is pretty simple: you only need to add one or more fields to the stylesheet depending on the platform.

Android

It's just adding the property elevation with a value.

card: {
  elevation: 5
}

IOS

The iOS box shadow is more customizable and very similar to web CSS.

card: {
  shadowColor: '#000',
  shadowOffset: { width: 0, height: 3 },
  shadowOpacity: 0.5,
  shadowRadius: 5,
}

To be safe, you could use both sets of properties, and the device will render the platform-specific properties and ignore the rest. Like this:

card: {
  shadowColor: '#000',
  shadowOffset: { width: 0, height: 3 },
  shadowOpacity: 0.5,
  shadowRadius: 5,
  elevation: 5
}

Let's Get Started

For our example, as you have seen in the result photo, we will try to make a simple card element/component to render a title, logo, and body text. It could be used as a social media post or blog post. So the element will have two parts: the header section and the body section.

Let's get started with the card container View. It will have these style props:

card: {
  height: 150,
  width: "80%",
  backgroundColor: "white",
  borderRadius: 15,
  padding: 10,
  elevation: 10,
  shadowColor: '#000',
  shadowOffset: { width: 0, height: 3 },
  shadowOpacity: 0.5,
  shadowRadius: 5,
}

Profile image styles:

profileImg: {
  width: 30,
  height: 30,
  borderRadius: 50,
  marginRight: 10,
},

The texts will only have a prop or two, so I always include them directly instead of using a new property for the stylesheet. The whole App.js code will look something like this.
import * as React from 'react';
import { Text, View, StyleSheet, Image } from 'react-native';
import Constants from 'expo-constants';

const profileImg = ""

export default class App extends React.Component {
  render() {
    return (
      <View style={styles.container}>
        <View style={styles.card}>
          <View style={styles.header}>
            <Image style={styles.profileImg} source={{ uri: profileImg }} />
            <Text style={{ fontWeight: "bold", fontSize: 18 }}>React Native Master</Text>
          </View>
          <Text style={{ color: "gray" }}>
            Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus gravida,
            metus eleifend vulputate fringilla, ligula odio vehicula tortor, ut
            iaculis nulla eros id turpis.
          </Text>
        </View>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    paddingTop: Constants.statusBarHeight,
    backgroundColor: '#ecf0f1',
    padding: 8,
    alignItems: "center"
  },
  card: {
    height: 150,
    width: "80%",
    backgroundColor: "white",
    borderRadius: 15,
    elevation: 10,
    padding: 10
  },
  profileImg: {
    width: 30,
    height: 30,
    borderRadius: 50,
    marginRight: 10,
  },
  header: {
    flexDirection: "row",
  }
});

And there you have it: a pretty simple React Native box shadow example for your card elements. I hope you liked this article and found it as informative as you expected. Feel free to share it with your friends, and comment with your questions and concerns. Stay tuned for more. Happy coding!
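One more note on the Android/iOS split from the concept section: React Native also ships a Platform.select helper for exactly this kind of branching. The sketch below mimics that merge logic in plain JavaScript (no react-native import, so the function name and the opacity/offset values are illustrative assumptions), which makes the mapping from elevation to the shadow* props easy to see:

```javascript
// Plain-JS sketch of per-platform shadow styles. Illustrative only;
// in a real app you would use Platform.select from react-native.
function shadowStyle(platform, elevation) {
  if (platform === 'android') {
    // Android only needs the elevation prop.
    return { elevation };
  }
  // iOS: approximate the same visual depth with the shadow* props.
  return {
    shadowColor: '#000',
    shadowOffset: { width: 0, height: elevation / 2 },
    shadowOpacity: 0.3,
    shadowRadius: elevation / 2,
  };
}

console.log(shadowStyle('android', 10)); // { elevation: 10 }
console.log(shadowStyle('ios', 10));
```

You would then spread the result into the card style, e.g. `card: { borderRadius: 15, ...shadowStyle(Platform.OS, 10) }`.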
https://reactnativemaster.com/react-native-box-shadow-example/?utm_source=rss&utm_medium=rss&utm_campaign=react-native-box-shadow-example
Python vs C++

Difference between Python and C++

Python and C++ are two of the most used programming languages among programmers in competitive programming. C++ was released in 1985 by Bjarne Stroustrup as an extension to the C programming language. Let us have a look at the differences between the Python and C++ programming languages.

Python programming language

The Python programming language was introduced by Guido van Rossum in 1989, and the first version, 0.9.0, was released in 1991. Since then Python has had 25 updates, and 3.8 is now the most stable version used by developers. Python has become more popular due to its rich library support and easy syntax. Python is used in many areas of development; web development, machine learning, and artificial intelligence are some of the most popular areas of work.

Some more applications where Python is used are:

- Data Science
- GUI
- Game Development
- Web Scraping
- 3-D Graphics etc.

C++ programming language

The C++ programming language was introduced in 1985 by Bjarne Stroustrup as an extension to the C programming language. C++ was a major update after C: it is an object-oriented language and more efficient than C. It gave programmers a high level of control over system resources and memory. C++ has had major updates three times, in 2011, 2014, and 2017, with C++11, C++14, and C++17. C++17 is the updated and stable version of the language available for programmers to use.
Some applications of the C++ programming language are:

- Graphics designing
- Distributed Systems
- Databases
- Telephone Switches
- Compilers etc.

Code in the C++ programming language for checking divisibility by 3:

#include<bits/stdc++.h>
using namespace std;

int check(string number)
{
    int num = number.length();
    int Sum = 0;
    for (int i = 0; i < num; i++)
        Sum += (number[i] - '0');
    return (Sum % 3 == 0);
}

int main()
{
    string number = "1331";
    check(number) ? cout << "Yes" : cout << "No";
    return 0;
}

Code in the Python programming language for checking divisibility by 3:

number = int(input('Enter the Number :'))
if number % 3 == 0:
    print('Yes')
else:
    print('No')
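Note that the two snippets use different approaches: the C++ version sums the digits of a string (so it also works for numbers too large to fit in an int), while the Python version applies % directly. The same digit-sum rule can be sketched in Python as well (the function name here is just for illustration):

```python
def divisible_by_3(number: str) -> bool:
    # A number is divisible by 3 iff the sum of its digits is.
    return sum(int(d) for d in number) % 3 == 0

print(divisible_by_3("1331"))  # False: 1+3+3+1 = 8
print(divisible_by_3("123"))   # True:  1+2+3 = 6
```

This mirrors the C++ output above, where "1331" prints "No".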
https://prepinsta.com/python/python-vs-c-plus-plus/
HashMap basic principle and underlying source code analysis 1. Storage structure of HashMap: HashMap is composed of array, chain structure (linked list) and red black tree. The structure of red black tree is added in JDK 1.8. (the storage structure will change dynamically according to the amount of stored data). Source code implementation: /** * Basic hash box node for most entries. (yes) * For information about the subclass of TreeNode, see below; for information about the subclass of EntryEntry, see LinkedHashMap.) * Node: Data node */ static class Node<K,V> implements Map.Entry<K,V> { /** * hash Value the value obtained by hashing the hashcode value of the key is stored in the Entry to avoid repeated calculation * */ final int hash; /** * key Indexes * */ final K key; /** * data Data domain * */ V value; /** * Next node node * */ Node<K,V> next; /** * Constructor */ Node(int hash, K key, V value, Node<K,V> next) { this.hash = hash; this.key = key; this.value = value; this.next = next; } /** * Get key value * */ public final K getKey() { return key; } /** * Get value * */ public final V getValue() { return value; } /** * key = value * */ public final String toString() { return key + "=" + value; } /** * hashCode hashCode is used to determine the storage address of an object in the hash storage structure; * Note: the same hashCode of two objects does not necessarily mean that two objects are the same * * 1.hashcode For example, there is such a location in memory * 0 1 2 3 4 5 6 7 * And I have a class. This class has a field called ID. I want to store this class in one of the above 8 locations. If it is stored arbitrarily without hashcode, when searching * You need to go to these eight positions one by one, or use algorithms such as dichotomy. * But if hashcode is used, it will improve the efficiency a lot. * There is a field called ID in our class, so we define our hashcode as ID% 8, and then store our class in the location where we get the remainder. 
than * If our ID is 9 and the remainder of 9 divided by 8 is 1, then we will put the class in the position of 1. If the ID is 13 and the remainder is 5, then we will put the class * Put it in 5 this position. In this way, when looking for this class in the future, you can find the storage location directly by dividing the ID by 8. * * 2.But what if two classes have the same hashcode (we assume that the ID of the above class is not unique), for example, if the remainder * of 9 divided by 8 and 17 divided by 8 is 1, is this legal? The answer is: Yes. So how to judge? At this time, you need to define equals. * In other words, we first judge whether the two classes are stored in a bucket through hashcode, but there may be many classes in this bucket, so we need to find the class we want in this bucket through * equals. * So. Why rewrite hashCode() when equals() is overridden? * Think about it. If you want to find something in a bucket, you must first find the bucket. You don't find the bucket by rewriting hashcode(). What's the use of rewriting equals() * */ public final int hashCode() { return Objects.hashCode(key) ^ Objects.hashCode(value); } /** * Setting a new value will return the old data * */ public final V setValue(V newValue) { V oldValue = value; value = newValue; return oldValue; } /** * Judge whether objects are equal * */ public final boolean equals(Object o) { if (o == this) { return true; } if (o instanceof Map.Entry) { Map.Entry<?,?> e = (Map.Entry<?,?>)o; if (Objects.equals(key, e.getKey()) && Objects.equals(value, e.getValue())) { return true; } } return false; } } Some basic parameters used: /** * Default initial capacity - must be a power of 2. */ static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16 /** * Maximum capacity, which is used if both constructors implicitly specify a higher value using parameters. * Max 1073741824 */ static final int MAXIMUM_CAPACITY = 1 << 30; /** * The load factor to use when not specified in the constructor. 
* The default loading factor is 0.75 */ static final float DEFAULT_LOAD_FACTOR = 0.75f; /** * Use a tree instead of a list to list bin count thresholds for bin. * When an element is added to a bin with at least so many nodes, the bin is converted to a tree. * The value must be greater than 2 and at least 8 to be related to the assumption of deleting the tree, that is, converting back to the original category box when shrinking. * When the number of elements in the bucket exceeds this value, you need to replace the linked list node with a red black tree node to match the optimization speed * * That is, when the length of the linked list reaches 8, it is transformed into a tree structure * */ static final int TREEIFY_THRESHOLD = 8; /** * Box count threshold used to de tree (split) boxes during sizing operations. * Should be less than TREEIFY_THRESHOLD and up to 6 to engage with the shrinkage detection under removal. * When the capacity is expanded, if the number of elements in the bucket is less than this value, the tree bucket elements will be restored (segmented) into a linked list structure * From tree structure to chain structure */ static final int UNTREEIFY_THRESHOLD = 6; /** * It can be classified as the minimum capacity of the tree. * (Otherwise, if there are too many nodes in the bin, the table will be resized.) Should be at least 4 TREEIFY_THRESHOLD to avoid conflicts between resizing and treelization thresholds. * When the capacity in the hash table is greater than this value, the bucket in the table can be tree shaped * Otherwise, if there are too many elements in the bucket, the capacity will be expanded rather than tree shaped * In order to avoid the conflict between capacity expansion and tree selection, this value cannot be less than 4 * tree_ THRESHOLD (256) * */ static final int MIN_TREEIFY_CAPACITY = 64; Definition of basic structural parameters: /** * The table is initialized on first use and resized as needed. 
After allocation, the length is always a power of 2. * (In some operations, we also allow zero length to allow the use of boot mechanisms that are not currently needed.) * Main function: save the array structure of Node nodes. */ transient Node<K,V>[] table; /** * Save the cached entrySet(). * Note that the AbstractMap field is used for keySet () and values (). * Main function: Set data structure composed of Node nodes */ transient Set<Map.Entry<K,V>> entrySet; /** * The number of key value mappings contained in this mapping. */ transient int size; /** * The number of structural modifications to the HashMap * Structural modification: * A modification that changes the number of mappings in a HashMap or otherwise modifies its internal structure (for example, re hashing). * This field is used to make the iterator on the collection view of HashMap fail quickly. * (See concurrent modificationexception). */ transient int modCount; /** * The next size value to resize (capacity load factor). * threshold Indicates that the resize operation will be performed when the size of the HashMap is greater than the threshold. * * Usually: threshold = loadFactor * capacity * @serial */ // (after serialization, javadoc is described as true. //In addition, if a table array has not been allocated, //This field will retain the initial array capacity, //Or zero, indicating default_initial_capability.) int threshold; /** * Load factor of the hash table. * * @serial */ final float loadFactor; 2. Initialize HashMap Four initializing constructors are given by default During initialization, you can specify the initial capacity and load factor of hashMap. jdk1.7 calculates when calling the constructor for initialization, but 1.8 initializes when the first put operation is performed. The resize() method undertakes the tasks of initialization and capacity expansion to a certain extent. (initialization is also equivalent to capacity expansion.) 
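Before looking at the constructors, the two bits of sizing arithmetic behind the fields above are easy to verify standalone: the resize threshold is capacity times load factor (16 * 0.75 = 12 with the defaults), and the table length is kept a power of two so that the bucket index computation (n - 1) & hash is a cheap bit mask equivalent to hash % n. A quick sketch (class and method names are mine, not from the JDK):

```java
// Standalone check of HashMap's sizing arithmetic.
public class HashMapMathDemo {
    // Default threshold: 16 * 0.75 -> resize after the 12th mapping.
    static int defaultThreshold() {
        int capacity = 1 << 4;      // DEFAULT_INITIAL_CAPACITY
        float loadFactor = 0.75f;   // DEFAULT_LOAD_FACTOR
        return (int) (capacity * loadFactor);
    }

    // For a power-of-two table length n and non-negative hash,
    // (n - 1) & hash is the same as hash % n.
    static boolean maskEqualsMod(int n, int hash) {
        return ((n - 1) & hash) == (hash % n);
    }

    public static void main(String[] args) {
        System.out.println(defaultThreshold()); // 12
        for (int hash : new int[]{0, 5, 15, 16, 31, 1234567}) {
            System.out.println(maskEqualsMod(16, hash)); // true each time
        }
    }
}
```

This is why the constructors below go out of their way to round any requested capacity up to a power of two.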
/** * Construct an empty Map with a specified initial capacity and load factor * * @param initialCapacity the initial capacity Initialization space * @param loadFactor the load factor Load factor * @throws IllegalArgumentException if the initial capacity is negative or the load factor is nonpositive */ public HashMap(int initialCapacity, float loadFactor) { /* If the initial capacity is less than 0: exception: the initial capacity is illegal */ if (initialCapacity < 0) { throw new IllegalArgumentException("Illegal initial capacity: " + initialCapacity); } //If the initialized capacity is greater than the maximum capacity: 1 < < 30, the initialized capacity becomes the maximum capacity if (initialCapacity > MAXIMUM_CAPACITY) { initialCapacity = MAXIMUM_CAPACITY; } //Load factor is less than or equal to 0. Or no load factor was passed in. Exception thrown: load factor error if (loadFactor <= 0 || Float.isNaN(loadFactor)) { throw new IllegalArgumentException("Illegal load factor: " + loadFactor); } this.loadFactor = loadFactor; /*Initialization parameter threshold */ this.threshold = tableSizeFor(initialCapacity); } - Threshold capacity threshold: the threshold that needs to be resized next time. This calculation method is very interesting: If the given capacity is 3, the closest value is 22 = 4. If the given capacity is 5, the closest value is 23 = 8. If the given capacity is 13, the closest value is 24 = 16. From this, we can draw a law: the function of the algorithm is to change all the values after the highest 1 into 1, and finally add the calculated result + 1. /** * For a given target capacity, the transmitted parameter is transformed into a value to the nth power of 2 * cap The current capacity returns the n-th power of 2 of a cap binary bit through operation. 
* MAXIMUM_CAPACITY Is the maximum upper limit * Calculation principle: * 5: 0000 0000 0000 0101 * 7: 0000 0000 0000 0111 step 1: Shift a binary number to the right in turn, and then take or with the original value. For the binary of a number, start with the first bit that is not 0 and set all subsequent bits to 1. * 8: 0000 0000 0000 1000 step 2: 7 + 1 -> 8 Get the power of the first 2 greater than 0000 0101 * * However, the above operation is not suitable for 2, 4 and 8, which are originally the n-th power of 2, so the cap - 1 operation is implemented, so the minimum power of 2 (itself) will be obtained * */ static final int tableSizeFor(int cap) { int n = cap - 1; n |= n >>> 1; n |= n >>> 2; n |= n >>> 4; n |= n >>> 8; n |= n >>> 16; return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1; } Other construction methods: /** * Construct an empty HashMap with the specified initial capacity and default load factor (0.75). * If we specify the capacity value, we will generally use the power greater than the first 2 of the value as the initialization capacity * @param initialCapacity the initial capacity. * @throws IllegalArgumentException if the initial capacity is negative. */ public HashMap(int initialCapacity) { this(initialCapacity, DEFAULT_LOAD_FACTOR); } /** * Construct an empty < TT > HashMap < TT >, * It has a default initial capacity (16) and a default load factor (0.75). * If the initialization size is not specified, the default size is 16 and the load factor is 0.75 */ public HashMap() { this.loadFactor = DEFAULT_LOAD_FACTOR; //All other fields are default } /** * Construct a new HashMap, * Its mapping is the same as the specified Map. * HashMap It is created using the default load factor (0.75) and an initial capacity sufficient to save the Map in the specified Map. * * @param m The map whose map you want to place in its map * @throws NullPointerException if the specified map is null */ public HashMap(Map<? extends K, ? 
extends V> m) { /*The loading factor is 0.75 by default*/ this.loadFactor = DEFAULT_LOAD_FACTOR; putMapEntries(m, false); } 3. put method of HashMap: - If the table is empty, call the resize() method for the first capacity expansion, that is, initialize the HashMap. Allocate initialization capacity for HashMap. - There are no nodes in the bucket. Create a new node, node < K, V > - There are the same nodes p.hash == hash and (k = p.key) == key, which indicates that a hash conflict has occurred, and the new node is the same as the old node. Update old nodes. - Zipper method: loop through the linked list, find the address corresponding to the node, and judge whether to update the data node. Or create a node to judge whether the length of the linked list is 8 (head node + other 7 nodes). Determine whether tree structure is required. - Capacity expansion mechanism: if the length after adding elements is greater than the critical value, call the resize method /** * Associates the specified value with the specified key in the mapping. If the mapping previously contained a mapping for the key, the old value is replaced. * * @param key Specifies the key with which the value will be associated * @param value The value) { return putVal(hash(key), key, value, false, true); } /** * Implements Map.put and related methods. * * @param hash hash for key key hash value of * @param key the key Index key * @param value the value to put value * @param onlyIfAbsent If true, do not change the existing value * @param evict If false, the table is in create mode. * @return If the original value exists, return the previous value; null if none */ final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) { /*Bucket node array*/ Node<K,V>[] tab; /*New node bucket*/ Node<K,V> p; int n, i; //Initialize if the table of the storage element is empty if ((tab = table) == null || (n = tab.length) == 0) { //The initialization length here is 16: resize() expansion. 
n = (tab = resize()).length; } // (n - 1) & hash: & divide hash method to perform hash calculation. According to the hash value, the node is empty, and a new data node is initialized if ((p = tab[i = (n - 1) & hash]) == null) { //Initialize data node tab[i] = newNode(hash, key, value, null); } //Calculate and find the p node according to the hash value else { //New node Node<K,V> e; // Indexes K k; //p. Hash = = hash: the hash value of the P node is equal to the hash of the new data, and the (k = p.key) = = key index is the same //Or the key is not empty and equal. In short, e and p have the same hash and the same key. Directly use e to overwrite the original p node if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k)))) { // e node = = p node (same address) e = p; } /*Indicates that the hash values are the same, but the key s are not the same*/ // If it is a tree node, insert it. Red black tree else if (p instanceof TreeNode) { e = ((TreeNode<K,V>) p).putTreeVal(this, tab, hash, key, value); } // If it is not a tree structure, it belongs to a chain structure. Create a new chain node else { //The length of nodes in the statistical chain, greater than 8, is transformed into a tree for (int binCount = 0; ; ++binCount) { // e = p.next indicates the next node to which the P node points. Each time, the next node is assigned to e node, which is equivalent to traversing the node if ((e = p.next) == null) { p.next = newNode(hash, key, value, null); // The first node is - 1. Plus the head node, the length of the linked list needs to be less than the threshold value of 8. 
When there are more than 8 nodes, the chain structure will be transformed into a tree structure if (binCount >= TREEIFY_THRESHOLD - 1) { //Chain to tree treeifyBin(tab, hash); } break; } //In the process of node traversal, if the hash value is the same and the key value is the same, exit the loop directly and assign the value to the found node directly if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k)))) { break; } //Update the node and point to the next node every time p = e; } } // Mapping of existing keys. If there is a mapping relationship, replace the original value if (e != null) { V oldValue = e.value; // Judge whether overwrite is allowed and whether value is empty if (!onlyIfAbsent || oldValue == null) { e.value = value; } // Callback to allow LinkedHashMap post operation afterNodeAccess(e); return oldValue; } } ++modCount; //After the size of the hash table has checked the capacity expansion threshold, perform the capacity expansion operation if (++size > threshold) { resize(); } afterNodeInsertion(evict); return null; } Core mechanism: 1. Resize /** * Initialize or increase the table size. * If it is blank, it is allocated according to the initial capacity target maintained in the field threshold. * Otherwise, because we use a power of 2, the elements in each bin must maintain the same index or be offset by a power of 2 in the new table. * * The first method: initialize HashMap using the default construction method. From the above, we can know that HashMap will return an empty table at the beginning of initialization, and thershold is 0. Therefore, the capacity of the first expansion is default_ INITIAL_ Capability is 16. At the same time, threshold = DEFAULT_INITIAL_CAPACITY * DEFAULT_LOAD_FACTOR = 12. * The second method is to initialize HashMap by specifying the construction method of initial capacity. 
From the following source code, we can see that the initial capacity will be equal to threshold, and then threshold = current capacity (threshold) * DEFAULT_LOAD_FACTOR. * Third: HashMap is not the first expansion. If the HashMap has been expanded, the capacity and threshold of each table will be twice as large as the original. * @return the table */ final Node<K,V>[] resize() { //Save the current table to oldTable Node<K,V>[] oldTab = table; //Length of old table int oldCap = (oldTab == null) ? 0 : oldTab.length; //Threshold of old table int oldThr = threshold; int newCap, newThr = 0; //1. The old table has been initialized if (oldCap > 0) { //If the old capacity is greater than the maximum capacity, to reach the maximum capacity if (oldCap >= MAXIMUM_CAPACITY) { //The threshold is equal to the maximum value of Int type 2 ^ (30) - 1 threshold = Integer.MAX_VALUE; //Unable to expand, return to old table return oldTab; } //1. Expand the capacity of the old value (use the only left digit (old capacity multiplied by 2)) //2. 
If the capacity after capacity expansion is less than the maximum capacity and the old capacity value is greater than or less than the default capacity (16), double the old threshold (these two conditions must be met) else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY && oldCap >= DEFAULT_INITIAL_CAPACITY) { //The new threshold is twice the old threshold newThr = oldThr << 1; } } // Initial capacity set to threshold //If initialization has not occurred and initialCapacity is specified through the constructor during use, the size of the table is threshold, that is, an integer power greater than the minimum 2 of the specified initialCapacity (which can be obtained through the constructor) else if (oldThr > 0) { newCap = oldThr; } else { //If initialization has not been experienced and initialCapacity is not specified through the constructor, the default value is given (the array size is 16 and the load factor is 0.75) newCap = DEFAULT_INITIAL_CAPACITY; //threshold = loadFactor * capacity newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY); } //After the above method (capacity expansion or initialization) is completed, the capacity operation is completed, but the threshold value is not specified (initialCapacity is specified during normal capacity expansion or initialization), and the threshold value (final capacity * loading factor) is calculated if (newThr == 0) { float ft = (float)newCap * loadFactor; //If the last calculated threshold is less than the maximum capacity and the last determined capacity is less than the maximum capacity, the calculated threshold can be used. If either of the above two conditions is not met, the threshold is the Integer maximum newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ? 
            (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    // Initialize a new table sized to the new capacity
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    // Assign the newly created array to the HashMap member variable
    table = newTab;
    // The previous table has data
    if (oldTab != null) {
        // HashMap is array + linked list or array + red-black tree, so traverse the
        // old array and rehash the linked list or tree found at each bucket
        for (int j = 0; j < oldCap; ++j) {
            // Temporary node
            Node<K,V> e;
            // array[j] has data; take the head node (root node) of its chain
            if ((e = oldTab[j]) != null) {
                // Clear old array[j] so the old table no longer references the chain
                oldTab[j] = null;
                if (e.next == null) {
                    // Only the head node: compute the position in the new table
                    // directly with e.hash & (newCap - 1), exactly as in put()
                    newTab[e.hash & (newCap - 1)] = e;
                } else if (e instanceof TreeNode) {
                    // Red-black tree node
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                } else {
                    // "lo" = indices 0 .. oldCap-1 of the new array (unchanged position),
                    // "hi" = indices oldCap .. newCap-1 (position shifted by oldCap)
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    do {
                        next = e.next;
                        // oldCap is a power of two, so its binary form has a single 1 bit
                        // (for example, 16 = 10000). (e.hash & oldCap) therefore tests
                        // exactly the one hash bit that newly participates in indexing
                        // after doubling: the new index is hash & (newCap - 1), i.e.
                        // (hash & 11111) for 32 instead of (hash & 1111) for 16. If that
                        // extra bit is 0, the modulo result is unchanged and the element
                        // keeps its old index.
                        if ((e.hash & oldCap) == 0) {
                            // Standard tail insertion into the "lo" list
                            if (loTail == null) {
                                loHead = e;       // list empty: head points to the element
                            } else {
                                loTail.next = e;  // otherwise hang the element on the tail
                            }
                            loTail = e;           // the element becomes the new tail
                        } else {
                            // The extra bit is 1, so the element belongs in the high half.
                            // For example, with oldCap 16 and hash 17, the new index is 17,
                            // i.e. j + oldCap; the low half is [0-15], the high half [16-31].
                            if (hiTail == null) {
                                hiHead = e;
                            } else {
                                hiTail.next = e;
                            }
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    // The list of low-half elements stays at the original index
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    // The list of high-half elements is offset by exactly the old capacity
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}

The most important step in resize is the rehash of each linked list or red-black tree. Because the capacity always doubles, an element either stays at its original index or moves to "original index + oldCap". Which of the two happens depends only on whether the newly significant bit of its hash is 0 or 1, and since that bit is effectively random, resize spreads the previously colliding nodes evenly over the new slots.

Core mechanism 2: split tree

/**
 * Splits the nodes in a tree bin into lower and upper tree bins,
 * or untreeifies if the resulting bin is now too small.
 * Called only from resize.
 * @param map the map
 * @param tab the table for recording bin heads
 * @param index the index of the table being split
 * @param bit the bit of hash to split on
 */
final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
    TreeNode<K,V> b = this;
    // Relink into lo and hi lists, preserving order
    TreeNode<K,V> loHead = null, loTail = null;
    TreeNode<K,V> hiHead = null, hiTail = null;
    int lc = 0, hc = 0;
    // TreeNode also maintains doubly linked list pointers, so the list
    // relationship can be used for the rehash
    for (TreeNode<K,V> e = b, next; e != null; e = next) {
        next = (TreeNode<K,V>)e.next;
        e.next = null;
        if ((e.hash & bit) == 0) {
            if ((e.prev = loTail) == null) {
                loHead = e;
            } else {
                loTail.next = e;
            }
            loTail = e;
            ++lc;
        } else {
            if ((e.prev = hiTail) == null) {
                hiHead = e;
            } else {
                hiTail.next = e;
            }
            hiTail = e;
            ++hc;
        }
    }
    // After the rehash, untreeify or treeify according to the list length
    if (loHead != null) {
        if (lc <= UNTREEIFY_THRESHOLD) {
            tab[index] = loHead.untreeify(map);
        } else {
            tab[index] = loHead;
            // Only rebuild the tree if nodes were split off into the hi list;
            // otherwise the existing tree structure is still valid
            if (hiHead != null) {
                loHead.treeify(tab);
            }
        }
    }
    if (hiHead != null) {
        if (hc <= UNTREEIFY_THRESHOLD) {
            tab[index + bit] = hiHead.untreeify(map);
        } else {
            tab[index + bit] = hiHead;
            if (loHead != null) {
                hiHead.treeify(tab);
            }
        }
    }
}

Author: coffee rabbit
Link:
https://programmer.help/blogs/basic-principle-and-underlying-analysis-of-hashmap.html
As programmers, we are regularly faced with the task of creating computer applications, which is a good thing. One of the reasons we are regularly faced with this is that no computer application is exactly like any other - the requirements vary in some detail, the look&feel of an existing application isn't exactly what the user wants, the programmer isn't quite happy with the implementation... On the surface, perhaps from the viewpoint of some manager type person (not a slight - just referring to a particular high level view of things), many applications sound like they are really the same thing. The task for the programmer is to reach the goal of creating an application that satisfies all the requirements exactly, from a technical point of view as much as with regard to end user functionality. To reach this aim, he (or she - just using the masculine form for simplicity) looks around for reusable components that can be the building blocks of the new application. Obviously this can't cover everything he needs to do - since the task is to create something new and unique, there can't possibly be building blocks around that cover every single detail. So at some point he'll have to fall back onto what some would regard as the "real" work for a developer: start developing, i.e. analyzing problems, designing architecture, writing code.

In the XAF discussion forum, I have recently noticed a certain tendency. Customers will ask us for a way to solve a certain problem. We provide them with a solution, which of course in each instance is of varying elegance, but usually solves the problem. We see customers turn around and ask for the particular solution to be included in the default feature set of XAF. In this context, words like "workaround" or even "hack" are regularly used to describe the solution we provided, and hint at the understanding on the customer's side that this feature should surely be an integral part of XAF.
Think about this: if you wanted to create a simple Windows Forms application that let you handle a contact list, you could probably make use of some building blocks from the .NET framework. You'd be using standard controls, for instance, or perhaps ADO.NET to help you store the data. You might use third party controls to improve the UI experience. But in the end you'd also write code to implement that part of your functionality which is really the purpose of the whole application, which distinguishes this application from any other contact management application out there. It might be high-level code to collect all your closest friends from the complete collection of contacts, or it might be low-level code to facilitate certain visualization features beyond what the UI controls do by default. You have to write this code, because the .NET Framework doesn't provide you with ready-made building blocks to collect friends, or to do that creative visualization you have in mind. But the framework is flexible enough to let you reach your goals anyway, and in the process you have a chance of distinguishing yourself and your application and to create unique selling points for your application as well, things that perhaps other programmers can't implement or just don't have the ideas for. Would you say that this code you're writing is a workaround or a hack, to accommodate the unfortunate fact that the .NET Framework doesn't have these features yet? I wouldn't say that. In many of the cases I've seen in our forum, the features in question are really quite specific. If we wanted to include them in XAF, they'd have to be abstracted first, since it doesn't make sense to integrate a specific option, special class implementation or the like for every single use case out there, regardless of how often or rarely that use case applies. By implementing an abstraction of a feature, we widen the scope of use cases that can benefit from it. 
Of course, for every single one of our customers, this means that our abstracted feature doesn't solve the exact problem they're facing. Instead it enables a solution for that problem, which is then easier to achieve on the basis of a certain extent of framework support. Does that sound familiar? It brings us back to the exact point where we already are right now. At this time, we have a very strong framework in our hands, which enables solutions to the vast majority of problems. There are steps a programmer has to take to apply those solutions to his own use cases. It speaks to the strength of the framework that this is possible, and it speaks to the imaginative skills of our support team that they can deliver these application-specific solutions to you.

So please don't call these use-case-specific solutions workarounds or hacks. That's not what they are. What they really are is this: the creative part of your application that sets it apart from other applications. They are the valuable parts of your work that make it your work instead of being something that anybody could have created by choosing some options from a list.

Can our framework be improved? It sure can. We're constantly looking out for things to include, problems to abstract, in order to enable further solutions in areas that weren't previously covered. We also look at things we can make easier, giving higher priority to those features that are used by most of our customers. But first and foremost we are in the business of providing a general-purpose business application framework, and it is not a goal of ours to allow any given specific application to be created only by clicking options with the mouse.

Hi Oliver, you are right when you tell us not to call code for specific problems "hacks" or "workarounds". But would you agree when I call the code that we need to write to "fully" integrate your own controls in XAF a workaround?
I'm talking of three requests, which have been open for a while now:

a) Master/Detail-Integration in XAF for XtraGrid
b) Reminder Support for XtraScheduler
c) Recurrence Support for XtraScheduler

I think these features are of common interest (otherwise they wouldn't be part of a component you sell), and therefore the priority to unleash these features in XAF should be "Very High".

thanks
reinhold

Hi Reinhold,

To me, a workaround would be an approach I have to take, perhaps a piece of code I have to write (or rather, the particular way I have to write a piece of code?), because something isn't working as it should. Then again, this is a fine line - the reason why something doesn't work as it should is important, for instance - who defines what "should" be, after all? Implementing a missing feature myself, under any circumstances, is not a workaround in my book. I'm not saying we never suggest workarounds - we do, all the time, for example in the case of bugs. Workarounds always have a temporary character - "I do it this way due to bug X49827, and I'll change it back once this gets fixed".

With regard to those particular problems you are mentioning - I believe you picked those out because they are among the few issues we see with XAF where the basic implementation we provide isn't sufficient to enable a feature to be implemented. Or at least the effort required to implement the feature is obviously too great. To try and answer your question - if anybody actually managed at this point to make reminders or recurrence work fully, then that would obviously be a feature, right? A feature that will hopefully one day be available out of the box.

I was just thinking whether it would make sense for me to write another post that more clearly defines the definitions I use here. But I think in the end that's not necessary - most of us have pretty similar definitions anyway.
As I said, a workaround sounds like something temporary, and if people sometimes use that word to describe what is really a feature implementation to me, then that means they think it should be part of the product. Well, nothing wrong with suggesting that - my post was meant, among other things, to describe the various points of view there can be on this topic.

I agree that adding new functionality by extending that of XAF is not a workaround. But there are still areas where the solutions suggested by DX are hacks, like in:

1- In XAF, reusing the same List schema definition in the model to describe any type of list, while actually there might be important differences depending on the control used to implement the list editor. Think of GridEditor and TreeListEditor, where Group index does mean something for the grid and means nothing for the treelist.

2- In XtraPivot/AspxPivotGrid, where inconsistencies in core and UI bent classes/Interfaces/namespaces/assemblies make it tiring to share logic for both pivot grids (Win and Web),

3- In PivotGrid again, I had to flatten my hierarchies (ID_ParentID) in order to use the pivot because it does not support auto-referencing associations.

One can always say that it is my part of the job at hand to make it my app and DX's app and I can not deny this fact ;-)

Mohsen

Hi Mohsen,

I don't really want to start a detailed discussion in the comments here. Suffice it to summarize from my perspective that (1) is based on the strict layering that XAF uses - certainly an area that can be perfected, but equally certainly a very elegant solution to begin with; (2) sounds almost like a bug, without looking into too many details, but of course not really related to XAF at all; and (3) seems to be a missing feature in the XtraPivotGrid (a known one btw, I remember talking about this ages ago when the product was really new).

There's never anything wrong with discussing these things.
We want to know your opinions, and if you want to use the approach of calling solutions workarounds in order to make us understand that you don't regard them as perfect, that's fine in general. We can interpret that to understand where you're coming from. But we regularly disagree with the "workaround" or "hack" understanding, and an objective discussion is always harder to manage if one party uses extreme vocabulary, belittles efforts other parties make, and so on. To some extent my post is a reflection on the art of human interaction, or perhaps just on the idea that it is, in fact, an art form.

After a week of frustrating chasing of a few inconsistencies in an ASPxGridView control placed inside an ASPxCallbackPanel, I decided to put my few cents into this discussion.

Coding is not a workaround. Coding to overcome an unknown (undocumented) limitation of the advertised functionality is. Also, using functionality outside the library is not a workaround. But if you need to hack into the internal (undocumented) details of the output produced by the library, that is.

When the developer does not depend on the library, he or she designs the usage of the underlying technology, and this is the normal chain of events. When you depend on the library, the library becomes part of the underlying technology; an integrated library such as DevExpress becomes a platform of its own, and the custom code needs to follow the library guidelines to produce an integrated output. A lot of code is then focused not on adding new functionality or addressing the specifics of the business logic, but (again) on artificial tricks to hook up with the library output (simply because the library wants it a certain way), or (which is even worse) on overcoming inconsistencies, limitations, and defects of the library. And this is a clear workaround.

Just a simple example: the usage of the HtmlRowCreated vs. HtmlDataRowPrepared vs. HtmlDataCellPrepared events.
The advertised feature recommends using the last event to make custom HTML adjustments to the content of the data cell. However, in certain callback situations that works initially and stops after the very first callback. Putting the same code into the HtmlRowCreated event was not a documented approach but worked (using your own staff's example). Is that coding or a workaround? It is coding if we add new functionality not planned to be provided by the library, or specific to the particularity of the application we are writing. The code tricks we have to apply within the domain of the library output are workarounds. As soon as a library grows beyond a few instruments that enhance existing tools into a platform/framework, any coding around its standard flow becomes a workaround (whether recommended or documented or not) - such code pieces are not workarounds only if their target output lies within the domain of the framework output itself.

This is post no. 4 in the mini series "10 exciting things to know about XAF".
https://community.devexpress.com/blogs/xaf/archive/2008/04/30/writing-code-is-not-a-workaround.aspx
Abstract: This document describes how to modify 32-bit device drivers that run on the Solaris Operating System (OS) to be compatible with the 64-bit Solaris 10 OS on x86 platforms.

The capabilities of the Solaris platform continue to expand to meet customer needs. The Solaris 10 release is designed to fully support both 32-bit and 64-bit architectures. The Solaris OS supports machines based on both 32-bit and 64-bit SPARC processors as well as 32-bit and 64-bit x86 platforms. The primary difference between the 32-bit and 64-bit development environments is that 32-bit applications are based on the ILP32 data model, while 64-bit applications are based on the LP64 model. The primary difference between applications for SPARC and x86-based systems, from the driver developer's point of view, is big-endian versus little-endian translation. To write a common device driver for the Solaris OS, developers need to understand and consider these differences.

Note: This document addresses topics related to x86 platforms only. In this document, references to 64-bit operating systems refer to the Solaris OS on machines with AMD Opteron processors.

The Solaris OS runs in 64-bit mode on appropriate hardware, and provides a 64-bit kernel with a 64-bit address space for applications. The 64-bit kernel extends the capabilities of the 32-bit kernel by addressing more than 4 Gbyte of physical memory, by mapping up to 16 Tbyte of virtual address space for 64-bit application programs, and by allowing 32-bit and 64-bit applications to coexist on the same system.

This document discusses the differences between 32-bit and 64-bit data models, provides guidelines for cleaning 32-bit device drivers in preparation for the 64-bit Solaris OS kernel, and addresses driver-specific issues with the 64-bit Solaris OS kernel. This information is intended for device driver developers who want to deliver 32-bit and 64-bit clean device drivers for the Solaris OS on x86 platforms.
The article provides guidance on how to write code that is portable between the 32-bit environment and the 64-bit environment. This document is organized as follows:

Section 2, "Basic Information," explains some basic problem areas that a developer writing device drivers for Solaris systems may encounter when writing 32-bit and 64-bit clean device drivers.

Section 3, "General Conversion Guidelines," explains in detail the steps that should be taken in writing a common device driver for Solaris systems. At the end of this section is a short checklist for conversion. These guidelines can help you provide clean code for a 64-bit driver for the Solaris OS on x86 platforms.

Section 4, "Advanced Issues and Guidelines," addresses some advanced issues for developers writing device drivers for the 64-bit Solaris OS on x86 platforms, with a focus on enhancing performance.

Section 5, "Porting Example," presents an example that illustrates the 32-bit to 64-bit conversion process.

Section 6, "Conclusion," summarizes the issues involved in writing 32-bit and 64-bit common device drivers for the Solaris OS on x86 platforms.

ILP32 - C language data model where int, long, and pointer data types are 32 bits in size.

LP64 - C language data model where the int data type is 32 bits wide, but long and pointer data types are 64 bits wide.

32-bit program - Program compiled to run in 32-bit mode. For example, programs compiled for IA32 and 32-bit SPARC platforms.

64-bit program - Program compiled to run in 64-bit mode. For example, programs compiled for the AMD64 and 64-bit SPARC platforms. Programs that have been successfully converted to run in 64-bit mode are also referred to as being 64-bit clean or 64-bit safe.

32-bit and 64-bit common device driver - A device driver with portable code that can be built and run in either a 32-bit or 64-bit environment.
Before you start to write a 64-bit clean device driver, it is useful to understand some of the differences between 32-bit and 64-bit operating systems. Most of these differences are similar to those you would encounter if you ported a driver from a 32-bit SPARC processor-based machine to a 64-bit SPARC processor-based machine.

ILP32 is the C language data model for the 32-bit Solaris OS. ILP32 defines the int, long and pointer data types as 32 bits wide, type short as 16 bits, and type char as 8 bits. LP64 is the C language data model for the 64-bit Solaris OS. LP64 defines the long and pointer data types as 64 bits wide. The LP64 data model also has a larger address space and a larger scalar arithmetic range.

The following table shows the basic language differences that exist both in driver programming and in application programming (sizes in bits):

    C type        ILP32   LP64
    char              8      8
    short            16     16
    int              32     32
    long             32     64
    long long        64     64
    float            32     32
    double           64     64
    long double      96    128
    pointer          32     64

It is not unusual for 32-bit applications to assume that types int, long and pointers are the same size. Drivers that run in a 64-bit environment may need to be converted to use the 64-bit data model. Because the size of type long and pointer changes in the LP64 data model, several potential problems could occur. For more detailed information, see Section 3.1.1, "Converting Driver Code to Be 64-Bit Clean."

In addition to general code cleanup to support the data model changes for LP64, device driver writers have these driver-specific issues to consider:

- Obsolete, non-fixed-width common access functions such as ddi_getw(9F) must be replaced with their fixed-width equivalents such as ddi_get16(9F).
- Entry points such as ioctl(9E), devmap(9E), and mmap(9E) must handle requests from both 32-bit and 64-bit applications.

These two topics are discussed in Section 3.1.2, "Driver-Specific DDI Interfaces," and Section 3.2, "Other Driver Issues." Advanced issues concerning DMA and performance are discussed in Section 4, "Advanced Issues and Guidelines."

The 64-bit Solaris OS requires 64-bit driver objects; 32-bit device drivers cannot be used with 64-bit operating systems. Conversion from 32-bit to 64-bit code requires at minimum recompilation and re-linking with 64-bit libraries.
For cases in which source code changes are required, Section 3 provides guidelines for writing clean driver code that works correctly in both 32-bit and 64-bit environments. You may need to implement one or more of the suggestions discussed in this section to convert your code. These recommendations can help you maintain a single source and minimize use of #ifdef constructs.

The principal work in converting your driver is to clean up the code for the 64-bit environment. The basic steps are similar to porting from machines based on 32-bit SPARC technology to machines based on 64-bit SPARC technology. Specific concerns related to systems built on AMD64 architecture are highlighted.

3.1.1 Converting Driver Code to Be 64-Bit Clean

Check the code for the use of multiple data type models. Note the following when converting to LP64: derived types such as size_t are defined in <sys/types.h>, and fixed-width integer types such as int8_t, uint8_t, int32_t, uint32_t, uint64_t, intptr_t, and uintptr_t are defined in <sys/inttypes.h>. You may want to run the lint utility in the Sun Studio 10 C 5.7 compiler on your driver code to help check data model conversion problems.

The following guidelines and examples explain some potential problem areas that you may encounter when porting from the ILP32 data model to the LP64 data model. All samples include a recommended solution that runs correctly in both 32-bit and 64-bit environments.

Example 1: 64-bit values should not be assigned to smaller types.

The code below is incorrect for a 64-bit environment:

    int int_a, int_b;
    long long_a, long_b;

    int_a = long_a;
    int_b = long_a + long_b;

This code does not cause any issues in the ILP32 environment, but it does have potential for overflow in the LP64 environment because long is a 64-bit type. If such assignments are intentional, use explicit casts to tell the compiler and lint(1B):

    int_a = (int) long_a;
    int_b = (int) (long_a + long_b);

Example 2: Improperly applied explicit casts can give unintended results.
    int int_a;
    long long_a;

    va = (int) long_a / int_a;
    va = (int) (long_a / int_a);

The first assignment in this code converts the 64-bit long_a to a 32-bit integer and then divides by the 32-bit int_a. The second assignment in this code divides long_a by int_a and then converts the result into a 32-bit integer.

Example 3: A pointer to an int is not compatible with a pointer to a long. Even the use of explicit casting is not correct.

    int *int_a_p, *int_b_p;
    long *long_a_p, *long_b_p;

    long_a_p = int_a_p;
    long_a_p = (long *) int_a_p;
    int_b_p = long_b_p;
    int_b_p = (int *) long_b_p;

This code results in alignment errors or wrong values on the 64-bit SPARC platform. A pointer to an int has 4-byte alignment, and a pointer to long has 8-byte alignment. In the AMD64 architecture, data alignment is not imposed. However, for maximum performance and portability between x86 and SPARC platforms, avoid misaligned memory accesses.

Example 4: You cannot correct a potential overflow problem by casting to a larger data type.

    long long_a;
    int int_a, int_b;

    long_a = int_a * int_b;
    long_a = (long) (int_a * int_b);

The result of both multiplications is type int, which is then converted to type long before being assigned to long_a. Instead, cast either operand to a long prior to the multiplication as shown in the following line of code. Then the result of the multiplication will be type long and correctly assigned to long_a.

    long_a = (long) int_a * int_b;

Example 5: Untyped integral constants are int by default.

    60000000 * 40000000 is 32-bit multiplication.
    60000000L * 40000000 is 64-bit multiplication.

Example 6: When you perform arithmetic operations on pointers, converting a pointer to a 32-bit integer (int or unsigned int) can give unintended results in an LP64 environment.
    int diff, base;
    int *start, *end;
    int pad;

    base = start;
    diff = end - start;
    pad = (int) end % 16;

Instead, convert the pointer to intptr_t or uintptr_t before you perform any arithmetic operations on pointers. Use ptrdiff_t to hold the difference between two pointers.

    ptrdiff_t diff;
    intptr_t base;
    int *start, *end;
    uintptr_t pad;

    base = (intptr_t) start;
    diff = (ptrdiff_t) ((intptr_t) end - base);
    pad = (uintptr_t) end % 16;

Example 7: The sizes of pointers and integer types are different depending on the arithmetic context. A pointer in the kernel is explicitly cast as an integral type when performing arithmetic operations, such as shift and AND operations, in order to determine which memory segment contains a particular address. These explicit casts should be either to intptr_t or to uintptr_t. The casts preserve the 64-bit values in LP64 mode and the 32-bit values in ILP32 mode.

    struct pagetable *p, *addr_item;
    #define ADDR_OFFSET 03

    addr_item = (struct pagetable *) (((int) p) | ADDR_OFFSET);

The pagetable structure is used to manage buffer pages in a module. The address pointer plus ADDR_OFFSET is the address of the data block. In a 64-bit environment, the address should be cast to a 64-bit integer before the OR with ADDR_OFFSET.

    addr_item = (struct pagetable *) (((uintptr_t) p) | ADDR_OFFSET);

Example 8: Inadequate function prototypes.

    extern func_a(int), func_b(void);
    long long_a, long_b;

    long_a = func_a(long_b);
    int_a = func_b();

The return types of func_a() and func_b() are implicitly declared as int. Type conversion may occur in parameter passing. The return values of pointer or long may be truncated to 32-bit.

Example 9: The size of data objects changes. The size of long and pointer types in a 64-bit environment changes the size of the data structure. The alignment padding also changes the size of the data structure.
    struct device_regs {
        ulong_t addr;
        uint_t count;
    };

This data type occupies 8 bytes in the 32-bit model, but occupies 16 bytes in the 64-bit model: the members account for 12 bytes, and the structure is padded to a multiple of its 8-byte alignment. If count is placed before addr, the size does not shrink, because a long has 8-byte alignment. Do not use a fixed offset to access the member fields. For example, the following fixed-offset access finds count at offset 4 under ILP32 but not under LP64, where count is at offset 8:

    struct device_regs r;
    uint_t *p = (uint_t *) ((char *) &r + 4);

Instead, access data members by referencing the names of the corresponding members:

    struct device_regs r;
    uint_t *p = &r.count;

Use fixed-width structures if a fixed layout is the desired behavior. For example, use a fixed-width structure for a protocol header definition and a device hardware register definition:

    struct device_regs {
        uint32_t addr;
        uint32_t count;
    };

    struct header {
        uint32_t type;
        uint32_t length;
    };

Check the use of system-derived types that change size in ILP32 and LP64. Some system-derived types represent 32-bit quantities on a 32-bit system but represent 64-bit quantities on a 64-bit system. For example:

    clock_t: relative time in specified resolution
    daddr_t: disk block address
    ino_t: inode
    intptr_t: integral pointer type
    off_t: file offset
    size_t: size of an object
    ssize_t: size of an object or -1
    time_t: time of day in seconds
    timeout_id_t: timeout() handler id
    uintptr_t: unsigned integral pointer type

Pay particular attention to the use of these derived types, especially when variables of these types are assigned the value of another derived type, such as a fixed-width type.

Example 10:

    size_t page_addr, v_addr;
    page_addr = v_addr & 0xfffff000;
In a 64-bit environment, the constant is type int by default, so the value of v_addr && 0xfffff000 only contains 20 bits in the middle of v_addr. page_addr = v_addr && ~0x0fffL v_addr && 0xfffff000 v_addr Example 11: This example shows the difference between system-derived types in ILP32 and LP64 data models in reading or writing to a large file. The example shows an error caused by the second argument of the following function: int fseeko(FILE *stream, off_t offset, int whence). This function is identical to fseek(3C) except for the second argument, offset, which is a long type in fseek(3C). In the following example, record_pos[] is used to record the position of accessed pointers in a large file: fseek(3C) offset record_pos[] int record_pos[MAX_RECORD_NUM]; off_t offset; while ( !feof (fp) ) { ... /* calculate the offset; */ fseeko ( fp, offset, SEEK_SET); if ( condition ){ record_pos[i] = (int)offset; } } In a 32-bit environment, you need to cast offset to type int because a file cannot be larger than 4 Gbyte, which is the maximum value of an int variable. However, in a 64-bit environment, a file larger than 4 Gbyte cannot use record_pos[] to record the position. 3.1.2 Driver-Specific DDI Interfaces Check for potential problems due to DDI typedef changes. In the Solaris OS on 64-bit x86-based systems, the kernel redefines the DDI data types to allow the compiler to check that the correct items are being passed. 
The following type definitions are in <sys/dditypes.h> in the 32-bit Solaris kernel: <sys/dditypes.h> typedef void *ddi_dma_handle_t; typedef void *ddi_dma_win_t; typedef void *ddi_dma_seg_t; typedef void *ddi_iblock_cookie_t; typedef void *ddi_regspec_t; typedef void *ddi_intrspec_t; typedef void *ddi_softintr_t; typedef void *dev_info_t; typedef void *ddi_devmap_data_t; typedef struct ddi_devid *ddi_devid_t; typedef void *ddi_acc_handle_t; The following type definitions are in <sys/dditypes.h> in the 64-bit Solaris kernel: typedef struct __ddi_dma_handle *ddi_dma_handle_t; typedef struct __ddi_dma_win *ddi_dma_win_t; typedef struct __ddi_dma_seg *ddi_dma_seg_t; typedef struct __ddi_iblock_cookie *ddi_iblock_cookie_t; typedef struct __ddi_regspec *ddi_regspec_t; typedef struct __ddi_intrspec *ddi_intrspec_t; typedef struct __ddi_softintr *ddi_softintr_t; typedef struct __dev_info *dev_info_t; typedef struct __ddi_devmap_data *ddi_devmap_data_t; typedef struct __ddi_devid *ddi_devid_t; typedef struct __ddi_acc_handle *ddi_acc_handle_t; There is no impact on C binaries and correct C sources. Compilation errors occur in C sources that use these types incorrectly. One way to avoid passing incorrect argument types to functions is to define the structure pointers with specific structure tags. For example, notice the arguments to the following two DDI functions that are declared in sunddi.h: sunddi.h int ddi_add_softintr(dev_info_t *dip, int preference, ddi_softintr_t *idp, ddi_iblock_cookie_t *iblock_cookiep, ddi_idevice_cookie_t *idevice_cookiep, uint_t (*int_handler)(caddr_t int_handler_arg), caddr_t int_handler_arg); void ddi_remove_softintr(ddi_softintr_t id); The third argument of function ddi_add_softintr() is a pointer to ddi_softintr_t. In the 64-bit Solaris kernel, this is a pointer to pointer. In the partner function ddi_remove_softintr(), the argument is a ddi_softintr_t, which is a pointer to a ddi_softiniter structure. 
The interrupt cannot be removed if you make the following call:

ddi_remove_softintr(&id);

It may be difficult for developers to catch this kind of error in the program, but the compiler can catch it if you use the tagged ddi_softintr_t type.

Use fixed-width DDI common access functions. Functions that use symbolic names to specify their data access size are obsolete. These functions include ddi_getb(9F), ddi_getw(9F), ddi_getl(9F), and ddi_getll(9F). The new function names specify a fixed-width data size, such as ddi_get8(9F), ddi_get16(9F), ddi_get32(9F), and ddi_get64(9F).

To port drivers to the 64-bit Solaris OS on x86 platforms, replace the obsolete non-fixed-width DDI functions with fixed-width DDI common access functions, as shown in the following table.

ddi_getb(9F) -> ddi_get8(9F): reads 8 bits from a device address
ddi_getw(9F) -> ddi_get16(9F): reads 16 bits from a device address
ddi_getl(9F) -> ddi_get32(9F): reads 32 bits from a device address
ddi_getll(9F) -> ddi_get64(9F): reads 64 bits from a device address
ddi_putb(9F) -> ddi_put8(9F): writes 8 bits to a device address
ddi_putw(9F) -> ddi_put16(9F): writes 16 bits to a device address
ddi_putl(9F) -> ddi_put32(9F): writes 32 bits to a device address
ddi_putll(9F) -> ddi_put64(9F): writes 64 bits to a device address
ddi_rep_getb(9F) -> ddi_rep_get8(9F): reads 8 bits from a device address repeatedly
ddi_rep_getw(9F) -> ddi_rep_get16(9F): reads 16 bits from a device address repeatedly
ddi_rep_getl(9F) -> ddi_rep_get32(9F): reads 32 bits from a device address repeatedly
ddi_rep_getll(9F) -> ddi_rep_get64(9F): reads 64 bits from a device address repeatedly
ddi_rep_putb(9F) -> ddi_rep_put8(9F): writes 8 bits to a device address repeatedly
ddi_rep_putw(9F) -> ddi_rep_put16(9F): writes 16 bits to a device address repeatedly
ddi_rep_putl(9F) -> ddi_rep_put32(9F): writes 32 bits to a device address repeatedly
ddi_rep_putll(9F) -> ddi_rep_put64(9F): writes 64 bits to a device address repeatedly
pci_config_getb(9F) -> pci_config_get8(9F): reads 8 bits from PCI configuration space
pci_config_getw(9F) -> pci_config_get16(9F): reads 16 bits from PCI configuration space
pci_config_getl(9F) -> pci_config_get32(9F): reads 32 bits from PCI configuration space
pci_config_getll(9F) -> pci_config_get64(9F): reads 64 bits from PCI configuration space
pci_config_putb(9F) -> pci_config_put8(9F): writes 8 bits to PCI configuration space
pci_config_putw(9F) -> pci_config_put16(9F): writes 16 bits to PCI configuration space
pci_config_putl(9F) -> pci_config_put32(9F): writes 32 bits to PCI configuration space
pci_config_putll(9F) -> pci_config_put64(9F): writes 64 bits to PCI configuration space

Example: A driver that uses ddi_getl(9F) to access 32-bit data reads 64-bit data in a 64-bit environment. Drivers must use ddi_get32(9F) to access 32-bit data in the 64-bit environment:

uint32_t ddi_get32(ddi_acc_handle_t hdl, uint32_t *dev_addr);

In addition to function names, certain function parameter types and function return values are different in the 64-bit Solaris OS. For example, unsigned char, unsigned short, and unsigned long change to uint8_t, uint16_t, and uint32_t. The functions with changed parameter types and return values are shown in the following table.
unsigned char inb(int port) -> uint8_t inb(int port): reads 8 bits from an I/O port
unsigned short inw(int port) -> uint16_t inw(int port): reads 16 bits from an I/O port
unsigned long inl(int port) -> uint32_t inl(int port): reads 32 bits from an I/O port
void repinsb(int port, unsigned char *addr, int count) -> void repinsb(int port, uint8_t *addr, int count): reads multiple 8-bit values from an I/O port
void repinsw(int port, unsigned short *addr, int count) -> void repinsw(int port, uint16_t *addr, int count): reads multiple 16-bit values from an I/O port
void repinsd(int port, unsigned long *addr, int count) -> void repinsd(int port, uint32_t *addr, int count): reads multiple 32-bit values from an I/O port
void outb(int port, unsigned char value) -> void outb(int port, uint8_t value): writes 8 bits to an I/O port
void outw(int port, unsigned short value) -> void outw(int port, uint16_t value): writes 16 bits to an I/O port
void outl(int port, unsigned long value) -> void outl(int port, uint32_t value): writes 32 bits to an I/O port
void repoutsb(int port, unsigned char *addr, int count) -> void repoutsb(int port, uint8_t *addr, int count): writes multiple 8-bit values to an I/O port
void repoutsw(int port, unsigned short *addr, int count) -> void repoutsw(int port, uint16_t *addr, int count): writes multiple 16-bit values to an I/O port
void repoutsd(int port, unsigned long *addr, int count) -> void repoutsd(int port, uint32_t *addr, int count): writes multiple 32-bit values to an I/O port

Check changed fields in DDI data structures. The data types of some of the fields in DDI data structures are changed in the 64-bit Solaris OS. Drivers that use these data structures should make sure that these fields are used appropriately. The changed fields are shown below.

struct buf (defined in buf.h), 32-bit kernel:

struct buf {
...
unsigned int b_bcount;
unsigned int b_resid;
int b_bufsize;
...
}

64-bit kernel:

struct buf {
...
size_t b_bcount;
size_t b_resid;
size_t b_bufsize;
...
}

ddi_dma_cookie_t (defined in dditypes.h), 32-bit kernel:

typedef struct {
...
unsigned long dmac_address;
unsigned int dmac_size;
unsigned int dmac_type;
...
} ddi_dma_cookie_t;

The 64-bit version, defined in dditypes.h, is:

typedef struct {
...
union {
uint64_t _dmac_ll;
uint32_t _dmac_la[2];
} _dmu;
size_t dmac_size;
uint_t dmac_type;
...
} ddi_dma_cookie_t;

The dmac_address member is defined by a macro and depends on _LONG_LONG_HTOL. In the 64-bit Solaris OS on x86:

#define dmac_laddress _dmu._dmac_ll
#define dmac_address _dmu._dmac_la[0]

struct scsi_pkt (defined in scsi/scsi_pkt.h), 32-bit kernel:

struct scsi_pkt {
...
unsigned long pkt_flags;
long pkt_time;
long pkt_resid;
unsigned long pkt_state;
unsigned long pkt_statistics;
...
}

64-bit kernel:

struct scsi_pkt {
...
uint_t pkt_flags;
int pkt_time;
ssize_t pkt_resid;
uint_t pkt_state;
uint_t pkt_statistics;
...
}

The ddi_dma_attr structure (defined in ddidmareq.h) defines attributes of the DMA engine and the device. 32-bit kernel:

typedef struct ddi_dma_attr {
unsigned int dma_attr_version;
unsigned long dma_attr_addr_lo;
unsigned long dma_attr_addr_hi;
unsigned long dma_attr_count_max;
unsigned long dma_attr_align;
unsigned int dma_attr_burstsizes;
unsigned int dma_attr_minxfer;
unsigned long dma_attr_maxxfer;
unsigned long dma_attr_seg;
int dma_attr_sgllen;
unsigned int dma_attr_granular;
unsigned int dma_attr_flags;
} ddi_dma_attr_t;

64-bit kernel:

typedef struct ddi_dma_attr {
uint_t dma_attr_version;
uint64_t dma_attr_addr_lo;
uint64_t dma_attr_addr_hi;
uint64_t dma_attr_count_max;
uint64_t dma_attr_align;
uint_t dma_attr_burstsizes;
uint32_t dma_attr_minxfer;
uint64_t dma_attr_maxxfer;
uint64_t dma_attr_seg;
int dma_attr_sgllen;
uint32_t dma_attr_granular;
uint_t dma_attr_flags;
} ddi_dma_attr_t;

Check changed arguments of DDI interfaces. The DDI function argument types in the following table have been changed in the 64-bit Solaris OS.
Old: void ddi_set_driver_private(dev_info_t *devi, caddr_t data);
New: void ddi_set_driver_private(dev_info_t *devi, void *data);

Old: caddr_t ddi_get_driver_private(dev_info_t *devi);
New: void *ddi_get_driver_private(dev_info_t *devi);

Old: struct buf *getrbuf(long sleepflag);
New: struct buf *getrbuf(int sleepflag);

Old: void delay(long ticks);
New: void delay(clock_t ticks);

Old: timeout_id_t timeout(void (*func)(caddr_t), caddr_t arg, long ticks);
New: timeout_id_t timeout(void (*func)(caddr_t), caddr_t arg, clock_t ticks);

Old: struct map *rmallocmap(ulong_t mapsize);
New: struct map *rmallocmap(size_t mapsize);

Old: struct map *rmallocmap_wait(ulong_t mapsize);
New: struct map *rmallocmap_wait(size_t mapsize);

Old: struct buf *scsi_alloc_consistent_buf(struct scsi_address *ap, struct buf *bp, int datalen, ulong_t bflags, int (*callback)(caddr_t), caddr_t arg);
New: struct buf *scsi_alloc_consistent_buf(struct scsi_address *ap, struct buf *bp, size_t datalen, uint_t bflags, int (*callback)(caddr_t), caddr_t arg);

Old: int uiomove(caddr_t address, long nbytes, enum uio_rw rwflag, uio_t *uio_p);
New: int uiomove(caddr_t address, size_t nbytes, enum uio_rw rwflag, uio_t *uio_p);

Old: int cv_timedwait(kcondvar_t *cvp, kmutex_t *mp, long timeout);
New: int cv_timedwait(kcondvar_t *cvp, kmutex_t *mp, clock_t timeout);

Old: int cv_timedwait_sig(kcondvar_t *cvp, kmutex_t *mp, long timeout);
New: int cv_timedwait_sig(kcondvar_t *cvp, kmutex_t *mp, clock_t timeout);

Old: int ddi_device_copy(ddi_acc_handle_t src_handle, caddr_t src_addr, long src_advcnt, ddi_acc_handle_t dest_handle, caddr_t dest_addr, long dest_advcnt, size_t bytecount, ulong_t dev_datasz);
New: int ddi_device_copy(ddi_acc_handle_t src_handle, caddr_t src_addr, ssize_t src_advcnt, ddi_acc_handle_t dest_handle, caddr_t dest_addr, ssize_t dest_advcnt, size_t bytecount, uint_t dev_datasz);

Old: int ddi_device_zero(ddi_acc_handle_t handle, caddr_t dev_addr, size_t bytecount, long dev_advcnt, ulong_t dev_datasz);
New: int ddi_device_zero(ddi_acc_handle_t handle, caddr_t dev_addr, size_t bytecount, ssize_t dev_advcnt, uint_t dev_datasz);

Old: int ddi_dma_mem_alloc(ddi_dma_handle_t handle, uint_t length, ddi_device_acc_attr_t *accattrp, ulong_t flags, int (*waitfp)(caddr_t), caddr_t arg, caddr_t *kaddrp, uint_t *real_length, ddi_acc_handle_t *handlep);
New: int ddi_dma_mem_alloc(ddi_dma_handle_t handle, size_t length, ddi_device_acc_attr_t *accattrp, uint_t flags, int (*waitfp)(caddr_t), caddr_t arg, caddr_t *kaddrp, size_t *real_length, ddi_acc_handle_t *handlep);

Old: int drv_getparm(unsigned int parm, unsigned long *value_p);
New: int drv_getparm(unsigned int parm, void *value_p);

In the 64-bit kernel, drv_getparm() can be used to fetch both 32-bit and 64-bit quantities. However, the interface does not define the data type of the value pointed to by value_p, which can lead to programming errors. You should not use drv_getparm(). Use the following new routines instead:

clock_t ddi_get_lbolt(void);
time_t ddi_get_time(void);
cred_t *ddi_get_cred(void);
pid_t ddi_get_pid(void);

Changes to the DDI functions

The following DDI and libc functions are removed or added in the Solaris 10 OS:

Removed: ddi_dma_segtocookie(9F)
Added: ddi_dma_nextcookie(9F), memcpy, memset, memmove, memcmp, strncat, strlcat, strlcpy, strspn

3.2.1 Converting ioctl Routines to Be 64-Bit Clean

Many ioctl operations are common to device drivers in the same class. Many of these interfaces copy data structures in to or out of the kernel, and some of these data structure members change size in the 64-bit data model. The following list shows ioctl structures that you must convert explicitly in 64-bit driver ioctl routines for dkio, fdio, fbio, cdio, mtio, and scsi:

DKIOCGAPART, DKIOCSAPART: struct dk_map, struct dk_allmap
DKIOCGVTOC, DKIOCSVTOC: struct partition, struct vtoc
FBIOPUTCMAP, FBIOGETCMAP: struct fbcmap
FBIOPUTCMAPI, FBIOGETCMAPI: struct fbcmap_i
FBIOSCURSOR, FBIOGCURSOR: struct fbcursor
CDROMREADMODE1, CDROMREADMODE2: struct cdrom_read
CDROMCDDA: struct cdrom_cdda
CDROMCDXA: struct cdrom_cdxa
CDROMSUBCODE: struct cdrom_subcode
FDIOCMD: struct fd_cmd
FDRAW: struct fd_raw
MTIOCTOP: struct mtop
MTIOCGET: struct mtget
MTIOCGETDRIVETYPE: struct mtdrivetype_request
USCSICMD: struct uscsi_cmd

The nblocks property, the number of blocks each device contains, is defined as a signed 32-bit integer. The nblocks property therefore limits the maximum device size to 1 Tbyte.
A new property, Nblocks, is defined as an unsigned 64-bit integer to remove this limitation.

For more information, see Appendix C, "Making a Device Driver 64-Bit Ready," in the Writing Device Drivers manual on the Sun Product Documentation site.

3.2.2 Modifying the Routines That Handle Data Sharing

If a 64-bit device driver uses ioctl(9E), devmap(9E), or mmap(9E) to share data structures with a 32-bit application, check whether those data structures contain long or pointer types. The binary layout of such data structures is incompatible between the two data models.

To handle potential data model differences, driver entry point routines that receive arguments from user applications must determine whether the argument came from an application that uses the same data type model as the kernel. The new DDI function ddi_model_convert_from(9F) enables drivers to determine this. The argument for ddi_model_convert_from(9F) is the data type model of the current thread. If conversion to or from ILP32 is necessary, the return value is DDI_MODEL_ILP32. If no conversion is needed, the return value is DDI_MODEL_NONE. Typically, the _MULTI_DATAMODEL macro is defined by the system when the driver supports multiple data models.

Example: The xxxdevmap() function from devmap(9E) provides a simple example. The devmap() function maps memory from a device into the address space of a process. The range of mapped memory in a device is from offset to offset+len. In this function, the data structure is used to interact with 32-bit and 64-bit applications. For a 32-bit application, the member addr in struct data contains a 32-bit user process address, but for a 64-bit application, addr is a 64-bit address.
struct data dtc;
struct data *dp = shared_area;

#ifdef _MULTI_DATAMODEL
switch (ddi_model_convert_from(model)) {
case DDI_MODEL_ILP32:
{
struct data32 *da32p;

/* the 32-bit application shares the ILP32 layout of struct data */
da32p = (struct data32 *)shared_area;
dp = &dtc;
dp->len = da32p->len;
dp->addr = da32p->addr;
break;
}
case DDI_MODEL_NONE:
break;
}
#endif /* _MULTI_DATAMODEL */
/* continues along using dp */
...
}

To support 64-bit clean code, the ioctl(9E) and mmap(9E) routines need to consider the _MULTI_DATAMODEL macro as well.

The simplest and most straightforward way to verify a 64-bit driver ported from 32-bit source code is to run the 64-bit driver on a 64-bit kernel. If you do not have a 64-bit Solaris system, other source-level verification methods are available for you to use. The compiler and lint are practical tools you can use to check for constructs that affect 32-bit and 64-bit portability.

The lint tool is a C program checker. The lint tool in the Sun Studio C 5.7 compiler can be used to help find potential 64-bit problems in your code. You can use the -errchk=longptr64 option to request that lint notify you whenever the code tries to put something big into something small, such as casting a 64-bit pointer into a 32-bit int.

The lint tool prints the line number of the offending code, issues a warning message that describes the problem, and informs you that a pointer was involved or gives the sizes of the types involved. This information (the fact that a pointer is involved, and the sizes of the types) can be useful in finding only the 64-bit problems and avoiding the pre-existing problems between 32-bit and smaller types.

The following sample shows how the lint output appears. hello.c contains errors under the LP64 model; hello64.c cleans up those errors by re-declaring the variable types and explicitly casting them.
Hello.c Hello64.c sh$ cat -n hello.c 1 /* hello.c */ 2 #include <stdio.h> 3 #include <stdlib.h> 4 5 static int func1(int pass1); 6 static long func2(long pass2); 7 8 void 9 main(void) 10 { 11 int i_a = 0, *i_ptr = 0; 12 long l_b = 0; 13 14 i_a = (int) i_ptr; 15 i_ptr = (void *)i_a; 16 i_a = l_b; 17 i_a = (int) l_b; 18 i_a = 0xffffaabbcc; 19 i_a = (int) 0xffffaabbcc; 20 i_a = func1(l_b); 21 i_a = func2(0xffffaabbcc); 22 printf("output 32-bit int %d\n", l_b); 23 scanf("input 32-bit int %d\n", &l_b); 24 } 25 26 static int 27 func1(int pass1) 28 { 29 return (pass1); 30 } 31 32 static long 33 func2(long pass2) 34 { 35 return (pass2); 36 } sh$ /opt/SUNWspro/bin/lint hello.c -errchk=longptr64 |more (14) warning: conversion of pointer loses bits (15) warning: cast to pointer from 32-bit integer (16) warning: assignment of 64-bit integer to 32-bit integer (17) warning: cast from 64-bit integer to 32-bit integer (18) warning: 64-bit constant truncated to 32 bits by assignment (19) warning: cast from 64-bit integer constant expression to 32-bit integer (20) warning: passing 64-bit integer arg, expecting 32-bit integer: func1(arg 1) (21) warning: assignment of 64-bit integer to 32-bit integer function returns value which is always ignored printf scanf function argument ( number ) type inconsistent with format printf (arg 2) long :: (format) int hello.c(22) scanf (arg 2) long * :: (format) int * hello.c(23) sh$ sh$ cat -n hello64.c 1 /* hello64.c */ 2 #include <stdio.h> 3 #include <stdlib.h> 4 5 static int func1(int pass1); 6 static long func2(long pass2); 7 void 8 main(void) 9 { 10 int i_a = 0, i_b = 0, *i_ptr = 0; 11 long l_a = 0, l_b = 0; 12 13 l_a = (unsigned long) i_ptr; 14 i_ptr = (void *)l_a; 15 l_a = l_b; 16 i_a = (int) l_b; /* intended narrow conversion */ 17 l_a = 0xffffaabbcc; 18 i_a = (int) 0xffffaabbcc; /* intended narrow conversion */ 19 i_a = func1(i_b); 20 l_a = func2(0xffffaabbcc); 21 printf("output 32-bit int %ld\n", l_b); 22 scanf("input 32-bit int %ld\n", 
&l_b); 23 } 24 25 static int func1(int pass1) 26 { 27 return (pass1); 28 } 29 30 static long func2(long pass2) 31 { 32 return (pass2); 33 } sh$/opt/SUNWspro/SOS8/bin/lint hello64.c -errchk=longptr64 |more (16) warning: cast from 64-bit integer to 32-bit integer (18) warning: cast from 64-bit integer constant expression to 32-bit integer set but not used in function (10) i_a in main function returns value which is always ignored printf scanf sh$ Compiler The following example shows the error messages output by gcc from compiling the helloworld.c program on a 64-bit x86-based system. The errors are eliminated in helloworld64.c by narrowing the constants, padding the structures, and re-declaring the variable and function types. helloworld.c helloworld64.c sh$ cat -n helloworld.c 1 #include <stdio.h> 2 #include <stdlib.h> 3 4 #define MAX_HEAD_COUNT 0xfffffffffffL 5 #define MAX_FORMAL_EMPLOYEE 0x0fffL 6 7 struct private_info { 8 long id; 9 int stat; 10 char *other_info; 11 } employee[MAX_HEAD_COUNT]; 12 13 struct { 14 int outside_id; 15 unsigned int priv_info_addr; 16 } filo_index[MAX_FORMAL_EMPLOYEE]; /* all people who are recorded now */ 17 18 long 19 encryp(long input) 20 { 21 return (input); 22 } 23 24 int 25 main(void) 26 { 27 int i, key; long tmp_id; 28 void *priv_p; 29 30 do { 31 scanf("%d%d%d", &i, &employee[i].stat, &employee[i].id); 32 if (i < 0) break; 33 if (i > MAX_HEAD_COUNT) 34 i = MAX_HEAD_COUNT; 35 if (employee[i].stat) { 36 filo_index[key].outside_id = employee[i].id; 37 key = encryp(filo_index[key].outside_id); 38 filo_index[key].priv_info_addr = (int)(employee + i); 39 } 40 } while (i < sizeof (employee)); 41 42 while (1) { 43 scanf("%d", (int *)&tmp_id); 44 key = (int)encryp(tmp_id); 45 if (key == -1) 46 key = (int)MAX_FORMAL_EMPLOYEE; 47 priv_p = (void *)filo_index[key].priv_info_addr; 48 printf("%d\t%d\n", ((struct private_info *)priv_p)->id, 49 ((struct private_info *)priv_p)->stat); 50 } 51 } sh$ gcc -fsyntax-only -Wall -Wcast-qual -Wconversion 
-Wmissing-format-attribute \ -Wpadded -Werror -g3 -o helloworld helloworld.c cc1: warnings being treated as errors helloworld.c:10: warning: padding struct to align `other_info' helloworld.c: In function `main': helloworld.c:31: warning: int format, different type arg (arg 4) helloworld.c:33: warning: comparison is always false due to limited range of data type helloworld.c:34: warning: overflow in implicit constant conversion helloworld.c:37: warning: passing arg 1 of `encryp' with different width due to prototype helloworld.c:38: warning: cast from pointer to integer of different size helloworld.c:46: warning: cast to pointer from integer of different size helloworld.c:48: warning: int format, different type arg (arg 2) sh$ sh$ cat -n helloworld64.c 1 #include <stdio.h> 2 #include <stdlib.h> 3 4 #define MAX_HEAD_COUNT 0xffffL 5 #define MAX_FORMAL_EMPLOYEE 0x0fffL 6 7 struct private_info { 8 long id; 9 int stat; 10 char padding[4]; 11 char *other_info; 12 } employee[MAX_HEAD_COUNT]; /* all people who have been employees */ 13 14 struct { 15 int outside_id; 16 char padding[4]; 17 unsigned long priv_info_addr; 18 } filo_index[MAX_FORMAL_EMPLOYEE]; /* all people who are employees now */ 19 20 long 21 encryp(long input) 22 { 23 return (input); 24 } 25 26 int 27 main(void) 28 { 29 int i, key; 30 long tmp_id; 31 void *priv_p; 32 33 do { 34 scanf("%d%d%ld", &i, &employee[i].stat, &employee[i].id); 35 if (i < 0) break; 36 if (i > MAX_HEAD_COUNT) 37 i = MAX_HEAD_COUNT; 38 if (employee[i].stat) { 39 filo_index[key].outside_id = employee[i].id; 40 key = encryp((long)filo_index[key].outside_id); 41 filo_index[key].priv_info_addr = 42 (unsigned long)(employee + i); 43 } 44 } while (i < sizeof (employee)); 45 46 while (1) { 47 scanf("%d", (int *)&tmp_id); 48 key = (int)encryp(tmp_id); 49 if (key == -1) 50 key = (int)MAX_FORMAL_EMPLOYEE; 51 priv_p = (void *)filo_index[key].priv_info_addr; 52 printf("%ld\t%d\n", ((struct private_info *)priv_p)->id, 53 ((struct private_info 
*)priv_p)->stat); 54 } 55 } sh$ ./gcc -fsyntax-only -Wall -Wcast-qual -Wconversion -Wmissing-format-attribute \ -Wpadded -Werror -g3 -o helloworld64 helloworld64.c sh$ Use the following checklist to convert your driver code to the Solaris OS on 64-bit x86-based systems: <sys/isa_defs.h> _ILP32 _LP64 int64_t #include <stdio.h> #include <sys/types.h> struct misalign { uint32_t foo; uint64_t bar; }; main() { struct misalign a; printf("sizeof struct is: %d\n", sizeof(a)); printf("offset of bar is: %d\n", (uintptr_t)&a.bar - (uintptr_t)&a); } On a 32-bit system based on 386 architecture, this code produces: sizeof struct is: 12 offset of bar is: 4 On a 64-bit system based on AMD64 architecture, this code produces: sizeof struct is: 16 offset of bar is: 8 As a result, attempts to make structures work in both 32-bit and 64-bit environments by using 64-bit fields may not succeed. This occurs fairly often, especially in structures that have been passed into and out of the kernel through ioctl calls. A 32-bit application sees the structure differently than the 64-bit kernel sees it. hat_getkpfnum() ddi_dma_* lint(1) Following the general conversion guidelines from the previous section should help you produce clean 64-bit code in device drivers for the Solaris OS on x86 platforms. However, that does not mean that the code is portable or tuned for performance. This section can help you take full advantage of the features of the AMD Opteron processor. This section discusses some advanced topics and offers guidelines for addressing these topics. The DMA framework in the Solaris OS hides the hardware details of a platform, such as I/O MMU, I/O cache, data alignment, data order and so on. When writing a driver to work on multiple platforms, you need to consider the following issues: You might also encounter some performance-related problems with the DDI functions. You may have issues specific to the 64-bit Solaris OS as well. 
4.1.1 Be Careful With I/O MMU Translations

The CPU uses the MMU to translate a virtual address (the CPU view) to a physical address (the main bus view). A device uses the I/O MMU to translate I/O bus addresses (PCI addresses) to physical addresses (the main bus view). The I/O MMU can be very convenient for performing DMA transfers: it gives a device the ability to perform DVMA. DVMA enables you to program a DMA engine with one large block of virtually contiguous addresses, which relieves the CPU from programming the DMA engine with many small blocks of physical addresses.

SPARC platforms offer an I/O MMU and can perform DVMA. In some cases, only a single DMA window with a few DMA cookies needs to be programmed into the DMA engine. Some platforms, IA for example, have no I/O MMU, so developers can get multiple DMA windows and multiple DMA cookies, and each DMA cookie may contain only a few pages. Fortunately, some devices provide scatter/gather (S/G) capability to perform DMA transfers efficiently. So when writing a device driver to support both the SPARC and IA (AMD64 included) platforms, write it to support multiple DMA windows and multiple DMA cookies. If the device has S/G capability, the driver should make good use of it to enhance DMA performance.

struct sglentry {
size_t dma_addr;
uint32_t dma_size;
} sglist[SGLLEN];

In this example, each DMA cookie should be filled into sglist as an S/G element.

1. When using DVMA, cookie.dma_address is the virtual address that appears on the PCI bus. It is the responsibility of the I/O MMU to translate these virtual addresses into physical addresses.

2. AMD64-compatible processors can take advantage of the AGP GART, which is quite similar to the I/O MMU address translation table, to serve other PCI devices. See the end of this section for more details.
4.1.2 Max Burst Size Can Change

Drivers specify the DMA burst sizes that their device supports in the dma_attr_burstsizes field of the ddi_dma_attr structure. This field is a bitmap of the supported burst sizes. When you write a driver that is 32-bit and 64-bit compatible, you may need to change this structure to optimize performance. When DMA resources are allocated, the system can impose further restrictions on the burst sizes that the device may actually use. A better approach is to use the ddi_dma_burstsizes(9F) routine to obtain the allowed burst sizes; it returns the appropriate burst size bitmap for the device. In other words, when DMA resources are allocated, a driver can ask the system for the appropriate burst sizes to use for its DMA engine.

Example: Determining Burst Size

The following pseudocode shows a correct way to determine the burst size to program into the device:

burst = ddi_dma_burstsizes(handle);   /* bitmap allowed by the system */
if (burst & 0x40) {
    /* program the device for 64-byte bursts */
} else if (burst & 0x20) {
    /* program the device for 32-byte bursts */
} else {
    /* fall back to the largest remaining burst size in the bitmap */
}

4.1.3 Device With No 64-Bit Addressing Uses 64-Bit Driver

Some devices are capable only of 32-bit addressing. Others are capable of both 32-bit and 64-bit addressing. Knowing which addressing capability your device supports is very important. Note that some 32-bit PCI devices can achieve 64-bit addressing through a DAC (Dual Address Cycle) approach: DAC finishes 64-bit addressing within two PCI clock periods. For example, a device may possess two registers, DMADAC0 and DMADAC1. A DMA engine can perform DAC within two PCI clock periods, where the first PCI address phase carries the low address from DMADAC0 with the PCI command (C/BE[3:0]#) D, and the second phase carries the high address from DMADAC1 with the PCI command (C/BE[3:0]#) 6 or 7 (depending on whether the transfer is a read or a write).

If the device has 64-bit addressing capability, its performance should be greatly enhanced in a 64-bit OS. But what if the device has no 64-bit addressing capability?
The DMA engine may have limited addressing capability. For example, a PCI master device that can perform only SAC (Single Address Cycle) is capable of only 32-bit addressing. Its ddi_dma_attr_t should be described as follows:

static ddi_dma_attr_t attributes = {
DMA_ATTR_V0, /* Version number */
0x00000000, /* low address */
0xFFFFFFFF, /* high address */
0xFFFFFFFF, /* counter register max */
.....
};

This example tells the DDI DMA framework that your device has only 32-bit addressing capability by assigning 0x00000000 to the low address and 0xFFFFFFFF to the high address. If the driver uses the ddi_dma_mem_alloc(9F) routine to allocate a piece of kernel virtual memory, ddi_dma_buf_bind_handle(9F) or ddi_dma_addr_bind_handle(9F) ensures that DMA cookies are allocated in the range between the low and high addresses given in ddi_dma_attr_t. Therefore, you do not need to worry about issues such as whether the physical memory is greater than 4 Gbyte.

If this device needs to process a DMA request coming from another device, you can perform a device-to-device DMA transfer. In this situation, the local memory of the other device may be mapped into a segment beyond 4 Gbyte, in which case you cannot perform the DMA transfer directly. Fortunately, device-to-device DMA transfers are rarely used.

For devices that support both 32-bit and 64-bit addressing, 64-bit drivers should take advantage of the device's 64-bit addressing capability. The following example describes the DMA engine for this case:

static ddi_dma_attr_t attributes = {
DMA_ATTR_V0, /* Version number */
0x0000000000000000, /* low address */
0xFFFFFFFFFFFFFFFF, /* high address */
0xFFFFFFFF, /* counter register max */
.....
};

Then you can use the dmac_laddress member of the ddi_dma_cookie_t structure and program it into the DMA engine.
4.1.4 Changes in DMA DDI Functions and DMA DDI Structures

The primary changes to the DMA DDI functions and structures are in ddi_dma_cookie_t and ddi_dma_mem_alloc(9F). Note that the size of some arguments returned from these functions may change, so the definitions of those arguments should be changed accordingly. For more details, refer to Section 3, "Basic Conversion Guidelines."

4.1.5 Be Careful With the NUMA System

A NUMA system has a single OS image, a single address space view, nonuniform memory, nonuniform I/O, and a non-coherent cache. After DMA transfers are done, ddi_dma_sync(9F) should be called explicitly to ensure that the caches are successfully flushed.

The AMD64 platform can offer some architectural advantages. For example, its AGP Aperture can be shared by both AGP and PCI devices; that is, you can use the AGP Aperture as an I/O MMU component, which is a good way to enhance DMA performance. You can also enable cache coherency for the AGP aperture by setting one bit in the GART entry. With this approach, the AGP master can read data from the processor caches faster than it can read data from DDR memory.

This section uses sample code to explain how to port a driver from 32-bit to 64-bit. The driver in this example manages a RAM space such as a RAM disk and uses programmed I/O to drive a device with a 32-bit CSR register and a 32-bit data register. The sample modifies a 32-bit driver to be a 64-bit safe driver, and defines the macro VERSION_64BIT while compiling in a 64-bit environment.

The original source code and the header file, pio.h, are provided in the appendix. A link to the source code for the converted version appears below.
The changes made in this example (in pio.h and the driver source) can be summarized as follows:

- Replace the obsolete access routines ddi_putl() and ddi_getl() with the fixed-width ddi_put32() and ddi_get32().
- Declare size and *addr with derived types such as caddr_t so that they scale with the data model.
- Cast the result of getminor() to minor_t.
- Review the uses of pio_p->addr and min(), and give the temporary variable tmp an appropriate fixed-width type.

Note that these changes are also documented by comments in the source.

Porting Example

This document describes issues that you need to be aware of when you write 32-bit and 64-bit safe drivers for the Solaris OS on x86 platforms. These issues include multiple C language data models, the use of system-derived types that have changed, and changes to some of the DDI interfaces. You also need to address some driver-specific issues, and consider performance issues such as the use of DMA. This article lists and describes these issues, and provides solutions and recommendations for them. This guide should help you write clean code for 32-bit and 64-bit device drivers for the Solaris OS on x86 platforms.

For further information on device drivers in the Solaris OS, see Writing Device Drivers. To see examples of some basic device drivers, see the Device Driver Tutorial (PN 817-5789, Sun Microsystems). If you are new to development in the Solaris OS or are unfamiliar with the range of information on the Solaris OS, see the Introduction to the Solaris Development Environment.

This appendix lists the pio.h header file and the source code before conversion, pio_32.c.
http://developers.sun.com/solaris/articles/64_bit_driver.html
To install MicroPython on your Pico, follow the instructions on the Raspberry Pi website (click the Getting Started with MicroPython tab and follow the instructions). After that point you might get a bit stuck. The Pico documentation covers connecting to the Pico from a Raspberry Pi, so if you want to code from your own computer you'll need something else. One option is the Thonny IDE, which you can use to write and upload code to your Pico; it has a nice friendly interface for working with Python. But what if you don't want to change your IDE, or want a way to communicate with your Pico from the command line? You're in luck: there is a simple tool for accessing the MicroPython REPL on your Pico and uploading custom Python scripts or libraries you may wish to use: rshell.

Installing rshell

Rshell itself is built on Python (not MicroPython) and can be installed and run locally on your main machine. You can install it like any other Python library:

python -m pip install rshell

Unfortunately, the current version of rshell does not always play nicely with the Raspberry Pi Pico. If you have problems, you can install a fixed version from the pico branch of the rshell repository, by pointing python -m pip install at an archive of that branch. This will download the latest version of the pico branch (as a .zip) and install it in your Python environment. Once installed, you will have access to a new command line tool, rshell.

The rshell interface

To use rshell from the command line, enter rshell at your command prompt. You will see a welcome message, and the prompt will turn green to indicate you're in rshell mode.

The rshell interface on Windows 10

The rshell interface on macOS

If the pip install worked but the rshell command doesn't, you may have a problem with your Python paths. To see the commands available in rshell, enter help and press Enter.
help

Documented commands (type help <topic>):
========================================
args    cat  connect  date  edit  filesize  help  mkdir  rm    shell
boards  cd   cp       echo  exit  filetype  ls    repl   rsync

Use the exit command to exit rshell. You can exit rshell at any time by entering exit or pressing Ctrl-C. Once exited the prompt will turn white.

The basic file operation commands are shown below.

cd <dirname>: change directory
cp <from> <to>: copy a file
ls: list current directory
rm <filename>: remove (delete) a file
filesize <filename>: give the size of a file in bytes

If you type ls and press enter you will see a listing of your current folder on your host computer. The same goes for any of the other file operations, until we've connected a board and opened its file storage -- so be careful! We'll look at how to connect to a MicroPython board and work with the files on it next.
pyboard @ COM4 connected
Epoch: 1970
Dirs:

The name on the left is the type of board (a Pico appears as pyboard) and the connected port (here COM4). The label at the end, Dirs:, will list any files on the Pico -- currently none.

Starting a REPL

With the board connected, you can enter the Pico's REPL by entering the repl command. This will return something like the following

repl
Entering REPL. Use Control-X to exit.
>
MicroPython v1.14 on 2021-02-14; Raspberry Pi Pico with RP2040
Type "help()" for more information.
>>>
>>>

You are now writing Python on the Pico! Try entering print("Hello!") at the REPL prompt.

MicroPython v1.14 on 2021-02-14; Raspberry Pi Pico with RP2040
Type "help()" for more information.
>>>
>>> print("Hello!")
Hello!

As you can see, MicroPython works just like normal Python. If you enter help() and press Enter, you'll get some basic help information about MicroPython on the Pico. Helpfully, you also get a small reference to how the pins on the Pico are numbered and the different ways you have to control them. You can run help() in the REPL any time you need a reminder.

While we're here, let's flash the LED on the Pico board. Enter the following at the REPL prompt...

from machine import Pin
led = Pin(25, Pin.OUT)
led.toggle()

Every time you call led.toggle() the LED will toggle from ON to OFF or OFF to ON.

To exit the REPL at any time press Ctrl-X
To keep things simple, let's create our own "library" that adjusts the brightness of the LED on the Pico board -- exciting I know. This library contains a single function ledon which accepts a single parameter brightness between 0 and 65535.

from machine import Pin, PWM

led = PWM(Pin(25))

def ledon(brightness=65535):
    led.duty_u16(brightness)

Don't worry if you don't understand it, we'll cover how this works later. The important bit now is getting this on your Pico. Take the code above and save it in a file named picoled.py on your main computer, in the same folder you're executing rshell from. We'll upload this file to the Pico next.

Start rshell if you are not already in it -- look for the green prompt. Enter boards at the prompt to get a list of connected boards.

pyboard @ COM4 connected
Epoch: 1970
Dirs:

To see the directory contents of the pyboard device, you can enter:

ls /pyboard

You should see nothing listed. The path /pyboard works like a virtual folder, meaning you can copy files to this location to have them uploaded to your Pico. It is only available while a pyboard is connected.

To upload a file, we copy it to this location. Enter the following at the prompt.

cp picoled.py /pyboard/picoled.py

After you press Enter you'll see a message confirming the copy is taking place

C:\Users\Gebruiker> cp picoled.py /pyboard
Copying 'C:\Users\Gebruiker/picoled.py' to '/pyboard/picoled.py' ...

Once the copy is complete, run boards again at the prompt and you'll see the file listed after the Dirs: section, showing that it's on the board.

C:\Users\Gebruiker> boards
pyboard @ COM4 connected Epoch: 1970 Dirs: /picoled.py /pyboard/picoled.py

You can also enter ls /pyboard to see the listing directly.

C:\Users\Gebruiker> ls /pyboard
picoled.py

If you ever need to upload multiple files, just repeat the upload steps until everything is where it needs to be. You can always drop in and out of the REPL to make sure things work.
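The 0 to 65535 range that duty_u16() expects is easy to mistype. A small helper (hypothetical, not part of the original tutorial) can map a friendlier 0 to 100 scale onto it; it is plain Python, so it runs both on your computer and on the Pico:

```python
def percent_to_duty(percent):
    """Map a 0-100 brightness percentage onto the 0-65535 range
    that PWM.duty_u16() expects."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(percent * 65535 / 100)

print(percent_to_duty(0))    # 0
print(percent_to_duty(100))  # 65535
```

On the Pico you could then call picoled.ledon(percent_to_duty(75)) instead of remembering raw 16-bit values.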
Using uploaded libraries

Now we've uploaded our library, we can use it from the REPL. To get to the MicroPython REPL enter the repl command in rshell as before. To use the library we uploaded, we can import it, just like any other Python library.

MicroPython v1.14 on 2021-02-14; Raspberry Pi Pico with RP2040
Type "help()" for more information.
>>>
>>> import picoled
>>> picoled.ledon(65535)
>>> picoled.ledon(30000)
>>> picoled.ledon(20000)
>>> picoled.ledon(10000)

Or to pulse the brightness of the LED...

>>> import picoled
>>> import time
>>> while True:
...     for a in range(0, 65536, 10000):
...         picoled.ledon(a)
...         time.sleep(0.1)

Auto-running Python

So far we've been uploading code and running it manually, but once you start building projects you'll want your code to run automatically. When it starts, MicroPython runs two scripts by default: boot.py and main.py, in that order. By uploading your own script with the name main.py it will run automatically every time the Raspberry Pico starts.

Let's update our "library" to become an auto script that runs at startup. Save the following code to a script named main.py.

from machine import Pin, PWM
from time import sleep

led = PWM(Pin(25))

def ledon(brightness=65535):
    led.duty_u16(brightness)

while True:
    for a in range(0, 65536, 10000):
        ledon(a)
        sleep(0.1)

In rshell run the command to copy the file to main.py on the board.

cp main.py /pyboard/main.py

Don't copy this file to boot.py -- the loop will block the REPL startup and you won't be able to connect to your Pico to delete it again! If you do this, use the Resetting Flash memory instructions to clear your Pico. You will need to re-install MicroPython afterwards.

Once the main.py file is uploaded, restart your Pico -- either unplug and re-plug it, or press Ctrl-D in the REPL -- and the LED will start pulsing automatically. The script will continue running until it finishes, or the Pico is reset.
You can replace the main.py script at any time to change the behavior, or delete it with:

rm /pyboard/main.py

What's next?

Now that you can upload libraries to your Pico, you can get experimenting with the many MicroPython libraries that are available. If you're looking for some more things to do with MicroPython on your Pico, there are some MicroPython examples available from Raspberry Pi themselves, and also the MicroPython documentation for language/API references.
https://www.mfitzp.com/tutorials/using-micropython-raspberry-pico/
A Hitchhiker’s Guide to OCR

Optical Character Recognition (OCR) is the process of extracting text out of images. There are numerous open source engines out there which make it incredibly easy to integrate OCR into almost any kind of product. These engines, particularly neural network based ones, know how to extract text out of random images because they have seen thousands of examples of text and found a general mapping between images and the text they might contain. However, this means they work the best when given images which look like those they were trained on, namely black and white documents with pure text and little background noise and non-textual objects. If you are trying to use OCR in a “natural scene” environment, then using an OCR engine out-of-the-box without any image pre-processing may not be so successful. Inaccurate OCR would make it difficult, if not impossible, to automate tasks which require finding text in images where blur, glare, rotation, skew, non-text, and a myriad of other problems exist. Tools like Amazon’s Textract and Google’s Cloud Vision make these problems go away, but they have their limitations (not to mention you have to pay for them). Thankfully, there are plenty of steps we can take to pre-process images for an open-source OCR engine and achieve comparable levels of accuracy.

The Goal of Pre-processing

Images work best with OCR engines when they look similar to the images the engine was trained on. Namely, they have:

- Very little non-text objects
- A high contrast between the text and the background
- Text with clear edges
- Little noise/granularity
- Horizontal Text (no rotation)
- A birds-eye view of the text (no skew)

Depending on what system you are developing for, some of these goals will be harder to achieve than others. To demonstrate how we can achieve some of these goals, I’ll be using Python’s OpenCV module because it can achieve most of these goals in a few lines of code.
I’ll use Google’s Tesseract OCR through the PyTesseract Python module for the OCR. You can follow along with this Jupyter notebook. Please note that I use several custom functions/abstractions which make the tutorial code more compact. While most of these functions are either for setup or to apply an OpenCV function to a set of images, others such as the EastDetector are quite complex. If you are curious, I have tried to document them as clearly as possible in the repository for this tutorial.

Text Localization

Let’s say we’re trying to find all of the book titles and author names in this image. If I put this straight into Tesseract, it doesn’t do very well.

books = load_image("Images/books1.jpg")
print(pytesseract.image_to_string(books))

returns

DR) The Way It [5 — cxsrwour LONG WaAtkine In circtes HF RCA Maca CRC Usa CW ta Sohwxcrceey]

None of these are very accurate. There are a lot of extra, random letters that are clearly not part of the book or author titles. Tesseract is having a tough time because of the various fonts and colors on the books. It can’t properly chunk the image into pieces it can understand, so we have to help it along. The first and easiest thing we can do is give Tesseract only the pieces of the image which contain text in them.

detector = EASTDetector()
slices = detector.get_slices(books)
titles = [pytesseract.image_to_string(img) for img in slices]
show_images(slices, titles=titles, cols=3)

As you can see, passing in the spine of each book individually instead of the whole image at once brought a drastic improvement. What was gibberish before is now recognizable text.

Note: I located the book spines using the EAST Text Detector. PyImageSearch has a fantastic tutorial on how to use it, so I won’t go into much detail here. If you are curious, you can check out east_detector.py in the repository to see how my EASTDetector class uses the model to generate bounding rectangles for the text.
Notice that while EAST separated all the books from each other, it didn’t break text separated by large spaces into chunks. That is why for The Way It Is, Tesseract is still having trouble reading it. If I narrow down the frame specifically for The Way It Is, then Tesseract can finally read it properly.

narrowed = binarize_images(slices[2:], black_on_white=False)
narrowed = narrow_images(narrowed)
titles = [pytesseract.image_to_string(img) for img in narrowed]
show_images(narrowed, titles=titles, cols=3)

Narrowing the frame on just The Way It Is book gives us 3 frames. Notice that the frame which contains the title is very clean, so Tesseract can read it perfectly. Meanwhile the frame containing the author’s name is quite blurry and noisy, so Tesseract can’t read it. This results from the fact that the name is somewhat blurry in the original image to begin with, bringing us to an important lesson in OCR: sometimes, there is only so much you can do.

Note: I am choosing to not go into details about how I narrowed the image frame in the narrow_images function here because it uses a technique called dilation which I will cover later in the tutorial. I will go into the details of this function after I introduce dilation.

If you read the code above, you’ll notice I called a function binarize_images before I narrowed the image frames. This puts the image through a process called Image Binarization, the preprocessing step which I will cover next.

Image Binarization

After narrowing the search field, one of the easiest pre-processing steps is to binarize the image. Binarization means converting each pixel to either black (0) or white (255). These are called binary images. We do this because OCR Engines like Tesseract perform well on images with a high contrast between the text and the background, and nothing sticks out more than white text on a black background. Binarization can be achieved in many ways.
For example, you can set a simple threshold (i.e. every pixel > 127 is set to 255 and every pixel below is set to 0) or you can do something more complicated (e.g. for each pixel, take the median of surrounding pixels and apply a threshold to that). However, in natural scenes, it can be hard to find a single threshold which works for every image. It is much better to calculate the threshold dynamically instead. One way of doing this is known as Otsu’s Binarization. It assumes that pixels are bimodally distributed and decides the best threshold is in the middle of the two modes. Thankfully, OpenCV has functions to do this for us.

image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
_, thresholded = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

cv2.THRESH_OTSU tells OpenCV to use Otsu Binarization, and cv2.THRESH_BINARY_INV will make dark parts of the image white and light parts of the image black. Notice that we have to convert the image to grayscale before binarizing it because you can’t binarize a 3-channel color image.

Notice that my implementation of the binarize_images function is not as straightforward as using cv2.

gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
if np.mean(binary) > 127:
    binary = cv2.bitwise_not(binary)

After binarizing the image, I compute its mean. If the mean is greater than 127, then I take a bitwise not of every pixel. I do this because the THRESH_BINARY_INV flag will make dark parts of the image white and vice versa, so if our text is white, then it will become black in the binary image. This will be a problem for our later preprocessing steps. If the image is predominantly white (i.e. most pixels are > 127), then most likely the text is black, so I do a color-flip to make the text white. Let’s try out binarization on a new image.
detector = EASTDetector(small_overlap=0.75)
slices = detector.get_slices(books)
binarized = binarize_images(slices, black_on_white=True)

For comparison, here is what I get if I don’t binarize the images after using EAST. Notice for most of the books, binarization actually made the OCR worse. For others it made OCR possible, and for yet others, it made OCR impossible. This is another common lesson with out-of-the-box OCR models: pre-processing will work differently for every image. It might even take you in the wrong direction. Binarization also appears to add some noise into the image that wasn’t there before. This is usually fine because our other preprocessing steps will take care of it for us.

Blurring

While it seems counter-intuitive, slightly blurring an image can actually improve OCR, especially after the image has been binarized. A binarized image has pixels which are either 255 or 0, so this can add graininess/noise into the image even though it makes the contrast very sharp. OCR does not perform well under noise, so we should try and remove as much noise as possible. Applying a slight blur accomplishes this.

Let’s focus on the All-of-a-Kind Family book. When it was binarized, Tesseract read “oer Ar” (see above). After applying a blur,

img_blurred = cv2.medianBlur(img, blur_weight)

To the human eye, not much has changed to the image. But clearly, the blurred version is a lot easier for Tesseract to work with! The specific type of blur I used was a Median Blur. Median blurs compute the median of neighboring pixels to replace the current pixel. Another blur that is commonly used is the Gaussian blur, which computes a Gaussian distribution over the neighborhood and uses that to replace the current pixel.

Dilation

Sometimes, the text we want to read is in an extremely thin font. Image dilation is a technique which can help us with that. It works by applying a kernel to the image. Think of a kernel like a sliding window.
As the window slides over the image, it replaces the current pixel with the maximum value of all pixels inside the window multiplied by the value of the kernel which falls over them. This is what causes white regions to enlarge. For the book The Well-Educated Mind in the image above, the OCR output on the binarized image was gibberish, and the OCR output on the original image was not the exact title (“Tae H-EDUCATED MIND”). If we dilate the image, we can give more body to the text so Tesseract can see it more easily.

blurred = blur_images([binarized[0]], blur_weight=1)
dilated = dilate_images(blurred, kernel=np.ones((5, 5), np.uint8))

Notice that I blurred the binary form of the image before dilating it. This was to smooth the image first. Otherwise, noise/graininess which the binarization introduced would be dilated as well, making the output image blocky and unreadable. The particular kernel that I used was a 5x5 unit square. This moderately expands the text in the x and y directions. As you can see, Tesseract could properly extract the title of the book from the dilated image.

Frame Narrowing

Earlier in the tutorial, I used a function called narrow_images to get even more specific with the part of the image I was feeding into OCR beyond what EAST was giving me. Now that we have covered dilation we can go into how it works.
def narrow(image, convert_color=False, binarize=True):
    """
    Draws narrower bounding boxes by heavily dilating the image
    and picking out the 3 largest blocks.
    """
    original = image.copy()
    if convert_color:
        image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    if binarize:
        _, image = cv2.threshold(image, 0, 255,
                                 cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    if np.mean(image) > 127:
        image = cv2.bitwise_not(image)

    box_kernel = np.ones((5, 25), np.uint8)
    dilation = cv2.dilate(image, box_kernel, iterations=1)
    bounds, _ = cv2.findContours(dilation, cv2.RETR_EXTERNAL,
                                 cv2.CHAIN_APPROX_SIMPLE)

    boxes = []
    for cnt in bounds:
        x, y, w, h = cv2.boundingRect(cnt)
        region = original[y:y + h, x:x + w]
        boxes.append(region)

    boxes = sorted(boxes, key=lambda i: -1 * i.shape[0] * i.shape[1])
    return boxes[:3]

The first step is to convert the image to grayscale and binarize it. This is a requirement for what we will do later: contouring. Once we have the binary image, we then apply a heavy dilation (the kernel is a 5x25 rectangle). This expands the bright areas of the image in the x direction so close regions of text blend together. At this point, groups of text look like solid blobs of white. We then find the external contours of these blobs and they become the parts of the image containing text. External contours are the edges which define the white blobs (i.e. where the white meets the black). This is accomplished by OpenCV’s findContours function. We return the 3 largest of these blobs (to remove any blobs which arise from noise or are not text).

Here I have drawn the external contours in green. Notice that not all of the regions contain text, but they were white in the dilated image. Tesseract will waste time trying to find text in them, but at least it will also find the text that matters.

Conclusion

This article has barely scratched the surface of image pre-processing.
In addition to binarization, blurring, and dilation, there are numerous other techniques which are used to remove noise from images and make it easier for OCR systems to work with them. Deskewing, erosion, filling, and opening are some other operations which might be used. For most tasks, however, the techniques covered in this guide should be sufficient. In general, the techniques you will use are heavily dependent on the system you are building. Because each processing step does not guarantee to improve the OCR quality, it is extremely difficult to build a set of pre-processing steps that works perfectly for all the types of images you want to recognize. There are multiple ways to handle this issue. If you care mostly about speed and not accuracy, there is no point doing any pre-processing because Tesseract can handle lots of images without needing additional work. If you care strongly about accuracy, you might design a pipeline which sequentially applies different transformations to an image until your OCR engine outputs something understandable. However, be aware that a more accurate system will (usually) be slower. Since building OCR systems with out-of-the-box models requires a lot of engineering, if you have a lot of labeled training data, you might be better off training an OCR engine yourself using transfer learning. That would produce a higher accuracy for your particular types of images and would solve the problem of pre-processing. The downside to that, of course, is the difficulty of gathering the training data and making sure the model learns. At the end of the day, there is no “right” answer as to how to get OCR to work with an image, but there are definitive techniques which you can try. It is just a matter of figuring out which transformations in which order are the right ones for your particular image. For more information on image processing, check out OpenCV’s documentation for their built-in functions.
If you are curious about Tesseract, you can check out the repository FAQ for the different parameters it takes. I hope you found this tutorial useful. You can see all of the code and the Jupyter notebook I used for the examples here.
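As a postscript to the binarization section above: cv2.THRESH_OTSU hides the actual computation, so here is a dependency-free sketch (illustrative only, not production code) of the between-class-variance search that Otsu's method performs:

```python
def otsu_threshold(pixels):
    """Return the 0-255 threshold maximising between-class variance,
    mirroring what cv2.THRESH_OTSU computes internally."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0
    for t in range(256):
        w_bg += hist[t]          # pixels <= t form the "background" class
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated clusters: the threshold lands at the first value
# that splits them cleanly.
pixels = [10] * 50 + [200] * 50
print(otsu_threshold(pixels))  # 10
```

On a genuinely bimodal histogram, every threshold between the two modes gives the same variance, so this sketch returns the first such value.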
https://medium.com/analytics-vidhya/a-hitchhikers-guide-to-ocr-8b869f4e3743
Hi Folks,

I am trying to control the behaviour exhibited by some of our POGOs when they are created using a Map constructor which contains keys that do NOT map to their properties e.g.

class Person {
    String name
}

def person = new Person([name: 'Edd', age: 35])

Normally the above would call propertyMissing(), however I have discovered that I can define a trait (which the class then implements) and provide a default implementation of this method e.g.

trait LogsUnknownProperties implements GroovyInterceptable {

    private static final Logger LOGGER = Logger.getLogger(LogsUnknownProperties)

    def propertyMissing(String name, value) {
        LOGGER.warn("Class: ${this.class.name} - Could not set property with name '${name}' having value '${value}' as property does not exist.")
    }
}

This works brilliantly in mutable POGOs, however for POGOs which are annotated with @Immutable it doesn't work. From looking at the code in ImmutableASTTransformation.java this seems to be because the checkPropNames() method throws a MissingPropertyException:

Is there any way I can intercept the throwing of this exception so I can control the behaviour for @Immutable classes in the same way I can for mutable ones? I wondered if it could be achieved with a sprinkling of meta-programming but I'm not sure where to start looking?

Many thanks,

Edd

--
Web:
Mobile: +44 (0) 7861 394 543
http://mail-archives.eu.apache.org/mod_mbox/groovy-users/201606.mbox/%3CCAO5arLNv7p_3R09ZoSh2TBbBGcy76y7922eiXF3iCh52XB8yuA@mail.gmail.com%3E
im new to c++ and have been writing the simple programs that i can in console apps. my hardest, simple program yet has become hard. im trying to make a program that has timed shutdown. ie. you put in 10 minutes, and in 10 minutes your computer will shut off. i have been reading posts on how people have been trying to shut down their computers and all of the replies say that it isn't possible to do it in console mode. is this true? Here is what i have:

#include <windows.h>
#include <iostream.h>

int main()
{
    int timelef;
    cout<<"How long would you like to wait until you shut down your computer?:"<<endl;
    cin>>timelef;
    if (timelef<500)
    {
        Sleep(1000*60*timelef);
        ExitWindowsEx(); //this is giving me an error, too few arguments
    }
    if (timelef>500)
    {
        cout<<"Your number is too big"<<endl;
    }
    int exitme;
    cout<<"The program has finished, press a key and Enter to leave"<<endl;
    cin>>exitme;
    return 0;
}

//where x is the number of milliseconds to "sleep"
//also be sure to capitalize sleep or it won't work i think
http://cboard.cprogramming.com/cplusplus-programming/8030-nooo.html
sysctl API (Mac)

Contents:
- Adding a sysctl Procedure Call
- Registering a New Top Level sysctl
- Adding a Simple sysctl
- Calling a sysctl From User Space

When adding a sysctl, you must do all of the following first:

- add the following includes:

#include <mach/mach_types.h>
#include <sys/systm.h>
#include <sys/types.h>
#include <sys/sysctl.h>

- add -no-cpp-precomp to your compiler options in Project Builder (or to CFLAGS in your makefile if building by hand).

Note: Because this is largely a construct of the BSD subsystem, all path names in this section can be assumed to be from /path/to/xnu-version/bsd/. Also, you may safely assume that all program code snippets should go into the main source file for your subsystem or module unless otherwise noted, and that in the case of modules, function calls should be made from your start or stop routines unless otherwise noted.

Note: Not all top level categories will necessarily accept the addition of a user-specified new-style sysctl. If you run into problems, you should try a different top-level category.

Note: When creating a top level sysctl, parent is simply left blank, for example, SYSCTL_NODE( , OID_AUTO, _topname, flags, handler_fn, "desc");.

Note: If you are adding a sysctl, it will be accessible using sysctlbyname. You should use this system call only if the sysctl you need cannot be retrieved using sysctlbyname. In particular, you should not assume that future versions of sysctl will be backed by traditional numeric OIDs except for the existing legacy OIDs, which will be retained for compatibility reasons.

Last updated: 2006-11-07
http://docs.huihoo.com/darwin/kernel-programming-guide/boundaries/chapter_14_section_7.html
Java's miscellaneous operators are: ternary operator, member access, comma, array index, new, instanceof, and typecast. These operators are explained one by one in the following sections.

Java provides a special operator that is called the ternary or conditional operator. This operator is a set of two symbols, ? and :. Both symbols collectively form the conditional operator. This operator can be used if we have to initialize or assign a variable on the basis of some condition. It follows the syntax below.

expr-1 ? expr-2 : expr-3;

In the above syntax, expr-1 can be any expression that returns a boolean value. If expr-1 returns true then expr-2 is processed; else expr-3 is processed. Note that both expr-2 and expr-3 must return the same type of value and they cannot be void. The following piece of code demonstrates the use of the Java ternary operator.

/* TernaryOperatorDemo.java */
public class TernaryOperatorDemo {
    public static void main(String[] args) {
        int n = 10;
        boolean flag = (n % 2 == 0 ? true : false);
        System.out.println(n + " is even? " + flag);
    }
}

OUTPUT
======
10 is even? true

The Java member access operator is a dot (.) symbol that is used to access data members and member methods of a class by its objects. The Java comma operator is a ',' sign that is used to separate function arguments, and to declare more than one variable of the same type in one statement. The Java array index operator, a set of square brackets ([]), is used to declare and access array elements.

The Java new operator is used to create a new object. Operator new is a Java keyword. It is followed by a call to a constructor, which initializes the new object. Note that declaring an object and creating an object are two different things. Simply declaring a reference variable does not create an object. For that, we need to use the new operator. The new operator creates an object by allocating memory to it and returns a reference to that memory location.
The Java new operator needs a single, postfix argument: a call to a constructor. The name of the constructor provides the name of the class to instantiate. The following piece of code demonstrates the use of the new operator.

/* NewOperatorDemo.java */
class Universe {
    public Universe() {}
    public void myUniverse() {
        System.out.println("This is my Universe");
    }
}

public class NewOperatorDemo {
    public static void main(String[] args) {
        // new operator creates a new object of type Universe
        // and assigns it to reference newUniverse
        Universe newUniverse = new Universe();
        newUniverse.myUniverse();
    }
}

OUTPUT
======
This is my Universe

The Java instanceof operator, also called the type comparison operator, compares an object to a specific type. It follows the syntax objRef instanceof type. Here objRef is the object name and type is the name of the object type to which objRef will be compared. The equals() method of Java is a nice example that uses the instanceof operator to check if two objects are equal. The following example (InstanceOfDemo.java) shows the use of the Java instanceof operator.

/* InstanceOfDemo.java */
class Universe {
    public Universe() {}
    public void myUniverse() {
        System.out.println("This is my Universe");
    }
}

public class InstanceOfDemo {
    public static void main(String[] args) {
        Universe newUniverse = new Universe();
        newUniverse.myUniverse();
        System.out.println("Is newUniverse an object of Universe? " + (newUniverse instanceof Universe));
        System.out.println("Does newUniverse inherit Object? " + (newUniverse instanceof Object));
    }
}

OUTPUT
======
This is my Universe
Is newUniverse an object of Universe? true
Does newUniverse inherit Object? true

Java has a rich set of operators. But, like C and C++, Java does not support operator overloading. You can also note that most of the operators of Java are applied on basic types only and not on objects.
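Earlier, equals() was mentioned as a typical user of instanceof. The Point class below is a hypothetical sketch (not part of the original tutorial) of that pattern: the instanceof check rejects arguments of the wrong type before the cast.

```java
/* EqualsDemo.java */
class Point {
    final int x, y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object obj) {
        // instanceof returns false for null and for non-Point objects,
        // so the cast below is always safe
        if (!(obj instanceof Point)) return false;
        Point other = (Point) obj;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y; // keep hashCode consistent with equals
    }
}

public class EqualsDemo {
    public static void main(String[] args) {
        Point p = new Point(1, 2);
        System.out.println(p.equals(new Point(1, 2))); // true
        System.out.println(p.equals("(1, 2)"));        // false: not a Point
    }
}
```

Guarding with instanceof is what lets equals() accept any Object without risking a ClassCastException.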
To perform any operation on objects, Java relies on member methods; that follows from Java's object-oriented approach to problem solving. In this tutorial we explained miscellaneous Java operators.
Created 07-05-2016 10:16 PM Hello, I'm using spyder in the Anaconda distribution to execute instruction by instruction a .py. In my .py I have the instruction from pyspark.mllib.clustering import KMeans, KMeansModel But when I execute this instruction I get the error ImportError: No module named 'pyspark' How do I have to configure spyder to work with pyspark? Thanks in advance Carlota Vina Created 07-06-2016 01:08 AM This is really a spyder question, not Pyspark, but you will likely need to use spyder in your Pyspark app. If spyder is a separate interpreter, then configure Pyspark to use it as the interpreter. See the Pyspark docs.
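The usual cause of this ImportError is that Spark's Python bindings are not on the interpreter's module search path. One common workaround (a sketch, not from this thread — the install location and the py4j zip name are assumptions you must adjust for your machine) is to point the interpreter at your Spark installation before importing pyspark:

```python
import glob
import os
import sys

# Hypothetical install location -- adjust SPARK_HOME for your machine.
spark_home = os.environ.get("SPARK_HOME", "/opt/spark")
os.environ["SPARK_HOME"] = spark_home

# Put Spark's Python bindings (and the bundled py4j zip) on sys.path,
# so that "import pyspark" can succeed inside spyder.
sys.path.insert(0, os.path.join(spark_home, "python"))
for zip_path in glob.glob(os.path.join(spark_home, "python", "lib", "py4j-*.zip")):
    sys.path.insert(0, zip_path)

print(sys.path[:2])
```

You would run these lines (or put them in a startup script) before the `from pyspark.mllib.clustering import KMeans, KMeansModel` line.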
This HTML version of Think Data Structures is provided for convenience, but it is not the best format of the book. In particular, some of the symbols are not rendered correctly. You might prefer to read the PDF version. Or you can buy this book on Amazon.com.

As we saw in the previous chapter, Java provides two implementations of the List interface, ArrayList and LinkedList. For some applications LinkedList is faster; for other applications ArrayList is faster.

To decide which one is better for a particular application, one approach is to try them both and see how long they take. This approach, which is called "profiling", has a few problems:

We can address some of these problems using analysis of algorithms. When it works, algorithm analysis makes it possible to compare algorithms without having to implement them. But we have to make some assumptions:

This kind of analysis lends itself to simple classification of algorithms. For example, if we know that the run time of Algorithm A tends to be proportional to the size of the input, n, and Algorithm B tends to be proportional to n², we expect A to be faster than B, at least for large values of n.

Most simple algorithms fall into just a few categories. For example, here's an implementation of a simple algorithm called selection sort (see):

public class SelectionSort {

    /**
     * Swaps the elements at indexes i and j.
     */
    public static void swapElements(int[] array, int i, int j) {
        int temp = array[i];
        array[i] = array[j];
        array[j] = temp;
    }

    /**
     * Finds the index of the lowest value
     * starting from the index at start (inclusive)
     * and going to the end of the array.
     */
    public static int indexLowest(int[] array, int start) {
        int lowIndex = start;
        for (int i = start; i < array.length; i++) {
            if (array[i] < array[lowIndex]) {
                lowIndex = i;
            }
        }
        return lowIndex;
    }

    /**
     * Sorts the elements (in place) using selection sort.
     */
    public static void selectionSort(int[] array) {
        for (int i = 0; i < array.length; i++) {
            int j = indexLowest(array, i);
            swapElements(array, i, j);
        }
    }
}

The first method, swapElements, swaps two elements of the array. Reading and writing elements are constant time operations, because if we know the size of the elements and the location of the first, we can compute the location of any other element with one multiplication and one addition, and those are constant time operations. Since everything in swapElements is constant time, the whole method is constant time.

The second method, indexLowest, finds the index of the smallest element of the array starting at a given index, start. Each time through the loop, it accesses two elements of the array and performs one comparison. Since these are all constant time operations, it doesn't really matter which ones we count. To keep it simple, let's count the number of comparisons.

The third method, selectionSort, sorts the array. It loops from 0 to n−1, so the loop executes n times. Each time, it calls indexLowest and then performs a constant time operation, swapElements.

The first time indexLowest is called, it performs n comparisons. The second time, it performs n−1 comparisons, and so on. The total number of comparisons is

n + (n−1) + ... + 2 + 1

The sum of this series is n(n+1)/2, which is proportional to n²; and that means that selectionSort is quadratic.

To get to the same result a different way, we can think of indexLowest as a nested loop. Each time we call indexLowest, the number of operations is proportional to n. We call it n times, so the total number of operations is proportional to n².

All constant time algorithms belong to a set called O(1). So another way to say that an algorithm is constant time is to say that it is in O(1). Similarly, all linear algorithms belong to O(n), and all quadratic algorithms belong to O(n²). This way of classifying algorithms is called "big O notation".
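The n(n+1)/2 count is easy to check empirically. The following sketch (my addition, not from the book) instruments indexLowest with a comparison counter and verifies the formula for one value of n:

```java
// Sketch: count the comparisons selection sort performs and check
// that the total equals n*(n+1)/2.
public class CountComparisons {
    static int comparisons = 0;

    static int indexLowest(int[] array, int start) {
        int lowIndex = start;
        for (int i = start; i < array.length; i++) {
            comparisons++;                    // one comparison per loop iteration
            if (array[i] < array[lowIndex]) {
                lowIndex = i;
            }
        }
        return lowIndex;
    }

    static void selectionSort(int[] array) {
        for (int i = 0; i < array.length; i++) {
            int j = indexLowest(array, i);
            int temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }
    }

    public static void main(String[] args) {
        int n = 100;
        int[] array = new int[n];
        for (int i = 0; i < n; i++) {
            array[i] = n - i;                 // worst-case-ish input: reversed
        }
        selectionSort(array);
        // n + (n-1) + ... + 1 = n*(n+1)/2
        System.out.println(comparisons == n * (n + 1) / 2);  // prints "true"
    }
}
```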
NOTE: I am providing a casual definition of big O notation. For a more mathematical treatment, see.

This notation provides a convenient way to write general rules about how algorithms behave when we compose them. For example, if you perform a linear time algorithm followed by a constant algorithm, the total run time is linear. Using ∈ to mean "is a member of":

If f ∈ O(n) and g ∈ O(1), f+g ∈ O(n).

If you perform two linear operations, the total is still linear:

If f ∈ O(n) and g ∈ O(n), f+g ∈ O(n).

In fact, if you perform a linear operation any number of times, k, the total is linear, as long as k is a constant that does not depend on n.

If f ∈ O(n) and k is a constant, kf ∈ O(n).

But if you perform a linear operation n times, the result is quadratic:

If f ∈ O(n), nf ∈ O(n²).

In general, we only care about the largest exponent of n. So if the total number of operations is 2n + 1, it belongs to O(n). The leading constant, 2, and the additive term, 1, are not important for this kind of analysis. Similarly, n² + 100n + 1000 is in O(n²). Don't be distracted by the big numbers!

"Order of growth" is another name for the same idea. An order of growth is a set of algorithms whose run times are in the same big O category; for example, all linear algorithms belong to the same order of growth because their run times are in O(n).

In this context, an "order" is a group, like the Order of the Knights of the Round Table, which is a group of knights, not a way of lining them up. So you can imagine the Order of Linear Algorithms as a set of brave, chivalrous, and particularly efficient algorithms.

The exercise for this chapter is to implement a List that uses a Java array to store the elements. In the code repository for this book (see Section 0.1), you'll find the source files you'll need:

MyArrayList.java
MyArrayListTest.java

You'll also find the Ant build file build.xml.
From the code directory, you should be able to run ant MyArrayList to run MyArrayList.java, which contains a few simple tests. Or you can run ant MyArrayListTest to run the JUnit test.

When you run the tests, several of them should fail. If you examine the source code, you'll find four TODO comments indicating the methods you should fill in.

Before you start filling in the missing methods, let's walk through some of the code. Here are the class definition, instance variables, and constructor.

public class MyArrayList<E> implements List<E> {
    int size;            // keeps track of the number of elements
    private E[] array;   // stores the elements

    public MyArrayList() {
        array = (E[]) new Object[10];
        size = 0;
    }
}

As the comments indicate, size keeps track of how many elements are in MyArrayList, and array is the array that actually contains the elements.

The constructor creates an array of 10 elements, which are initially null, and sets size to 0. Most of the time, the length of the array is bigger than size, so there are unused slots in the array.

One detail about Java: you can't instantiate an array using a type parameter; for example, the following will not work:

array = new E[10];

To work around this limitation, you have to instantiate an array of Object and then typecast it. You can read more about this issue at.

Next we'll look at the method that adds elements to the list:

public boolean add(E element) {
    if (size >= array.length) {
        // make a bigger array and copy over the elements
        E[] bigger = (E[]) new Object[array.length * 2];
        System.arraycopy(array, 0, bigger, 0, array.length);
        array = bigger;
    }
    array[size] = element;
    size++;
    return true;
}

If there are no unused spaces in the array, we have to create a bigger array and copy over the elements. Then we can store the element in the array and increment size.
It might not be obvious why this method returns a boolean, since it seems like it always returns true. As always, you can find the answer in the documentation. It's also not obvious how to analyze the performance of this method. In the normal case, it's constant time, but if we have to resize the array, it's linear. I'll explain how to handle this in Section 3.2.

Finally, let's look at get; then you can get started on the exercises.

public T get(int index) {
    if (index < 0 || index >= size) {
        throw new IndexOutOfBoundsException();
    }
    return array[index];
}

Actually, get is pretty simple: if the index is out of bounds, it throws an exception; otherwise it reads and returns an element of the array. Notice that it checks whether the index is less than size, not array.length, so it's not possible to access the unused elements of the array.

In MyArrayList.java, you'll find a stub for set that looks like this:

public T set(int index, T element) {
    // TODO: fill in this method.
    return null;
}

Read the documentation of set, then fill in the body of this method. If you run MyArrayListTest again, testSet should pass.

HINT: Try to avoid repeating the index-checking code.

Your next mission is to fill in indexOf. As usual, you should read the documentation so you know what it's supposed to do. In particular, notice how it is supposed to handle null.

I've provided a helper method called equals that compares an element from the array to a target value and returns true if they are equal (and it handles null correctly). Notice that this method is private because it is only used inside this class; it is not part of the List interface.

When you are done, run MyArrayListTest again; testIndexOf should pass now, as well as the other tests that depend on it.

Only two more methods to go, and you'll be done with this exercise.
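For reference, here is one possible shape for set — a sketch of my own, not the book's reference solution, so try the exercise yourself first. It follows the hint by delegating the bounds check to get. The wrapper class below is a stripped-down stand-in for MyArrayList, just enough to make the sketch self-contained:

```java
import java.util.Arrays;

// Minimal stand-in for the exercise class (not the book's code),
// showing one way to implement set() without repeating the index check.
public class MyArrayListSketch<E> {
    private int size;
    private E[] array;

    @SuppressWarnings("unchecked")
    public MyArrayListSketch() {
        array = (E[]) new Object[10];
        size = 0;
    }

    public boolean add(E element) {
        if (size >= array.length) {
            array = Arrays.copyOf(array, array.length * 2);
        }
        array[size++] = element;
        return true;
    }

    public E get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException();
        }
        return array[index];
    }

    // set() delegates its bounds check to get(), per the hint,
    // and returns the element previously at that position.
    public E set(int index, E element) {
        E old = get(index);   // throws IndexOutOfBoundsException if invalid
        array[index] = element;
        return old;
    }

    public static void main(String[] args) {
        MyArrayListSketch<String> list = new MyArrayListSketch<>();
        list.add("a");
        list.add("b");
        String old = list.set(1, "c");
        System.out.println(old + " " + list.get(1));  // prints "b c"
    }
}
```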
The next one is an overloaded version of add that takes an index and stores the new value at the given index, shifting the other elements to make room, if necessary. Again, read the documentation, write an implementation, and run the tests for confirmation.

HINT: Avoid repeating the code that makes the array bigger.

Last one: fill in the body of remove, consulting its documentation. When you finish this one, all tests should pass.

Once you have your implementation working, compare it to mine, which you can read online.
The recent series of Why XYZ Is Not My Favourite Programming Language articles has been fun to do, and it's been great to see the discussion in the comments (even if it's mostly people saying that I am talking a load of fetid dingo's kidneys). But I don't want to extend that series beyond the point of diminishing returns, and it's time to think about what it all means. As duwanis commented on the Ruby article, "I'm a bit lost as to the point of these posts"; now I want to try to figure out just what, if anything, I was getting at.

By the way, it's been interesting how people respond to articles that are critical (however flippantly) of languages. Most of what I've written here on TRP has had comments pretty evenly balanced between "Yes, I know exactly what you mean" and "You are talking complete nonsense", which seems about right to me; but comments on the NMFPL posts have almost all been telling me why I am wrong. It's also been interesting to watch all the Reddit posts for these articles drop to zero, or at best stay at one: evidently people who like languages are keener to defend them than those who dislike them are to pile in — which is as it should be.

Reviewing the languages

First of all, let me say that all the languages I picked on are, or at least have been, good languages. I didn't bother criticising BASIC or FORTRAN, or Tcl for that matter, because, well, everyone already knows they're not going to save the world. (I didn't criticise any of the functional languages because I don't honestly feel that I yet know any of them well enough to do an honest job of it.) So, to look at the positive, here are some reasons to like each of the languages I've been saying are not my favourites. In roughly chronological order:

- C (1972) was, depending on your perspective, either the first really expressive low-level language, or the first really efficient high-level language.
Its historical importance, as the foundation of all but the earliest implementations of Unix, is immense. But, more than that, it has a crystalline elegance that few other languages approach. (I'll be writing more about C in future articles.)

- C++ (1983), despite being more prone to abuse than any other language, can indeed be used as Stroustrup suggests, as "a better C". It was also a very impressive technical achievement: to come so close to being object oriented while retaining binary compatibility with C is pretty astonishing. It solves that problem well, while leaving open the question of whether it was the right problem to solve.

- Perl (1987) was and is amazingly useful for just, you know, getting stuff done. It has a likeable humility, in that it was the first major language to make working together nicely with other languages a major goal, and its Swiss Army Chainsaw of text-processing methods were a huge and important pragmatic step forward. It's not pretty, but it's very effective.

- Java (1995) can be thought of as "a better C++"; and it is better in lots of important ways. It hugely reduces the amount of saying-it-twice that C++ source and header files require, the code is much cleaner, it is much harder to shoot yourself in the foot, and experience tells us that it scales well to very large projects with many programmers.

- JavaScript (1995) has proven its usefulness over and over again, despite being saddled with a hideous set of largely incompatible operating environments; and underneath that perplexing surface, as Douglas Crockford's book The Good Parts explains, there is a beautiful little language struggling to get out.

- Ruby (1995) is in a sense not really a huge leap forward over previous languages; but it's done the best job of any of them in terms of learning from what's gone before.
It really does seem to combine the best parts of Perl (string handling, friendliness towards other languages), Smalltalk (coherent and consistent object model), Lisp (functional programming support) and more.

Although there are plenty of other languages out there, these are the main contenders for the not-very-coveted position of My Favourite Programming Language: I am deliberately overlooking all the Lisps and other functional languages for now, as I just don't know them well enough, and I am ignoring C# as a separate language because even highly trained scientists with sensitive instruments can't tell it apart from Java; and PHP because it's just Even Uglier Perl, and Visual BASIC for all the obvious reasons. (I don't really have a good reason for leaving Python out, but I'm going to anyway.)

Some thoughts on Java

According to the Normalized Comparison chart on langpop.com, and also the same site's Normalized Discussion Site results (at the bottom of the same page), Java is currently the most popular programming language of them all (followed by C, C++, PHP, JavaScript and Python), so in a sense it's the reigning champion: if you want to be an advocate for some other language, you need to make a case for why it's preferable to Java.

And it's a good language. At the cost of some expressiveness, it tries to make itself foolproof, and it does a good job of it. In a comment on the recent frivolous Java article, Osvaldo Pinali Doederlein boldly asserted that "there are no major blunders in the Java language". Surprisingly enough, I do more or less agree with that (though its handling of static methods is pretty hideous). I think that almost-no-major-blunders property for both size and pleasantness while being much more convenient.

My main issue with Java is actually much more pervasive than any specific flaw: you'll forgive me if I find this hard to tie down, but it's just a sense that the language is, well, lumpen.
Everything feels like it's harder work than it ought to be: programming in Java feels like typing in rubber gloves. An obvious example of this is what Hello World looks like in Java:

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}

You have to say a lot of stuff before you can say what you want to say. You have to have a main() function, it has to be declared as taking an array of String and it has to be public static void. It has to be wrapped in public class SomeIrrelevantName (which, by the way, has to be the same as the name of the source file.) The print() function is called System.out.println(). The comparison with complete Hello World programs in other popular languages is instructive:

print "Hello, world!\n"    # Perl
print "Hello, world!"      # Python
puts "Hello, world!"       # Ruby
(print "Hello, world!")    ; Emacs Lisp
10 PRINT "Hello, world"    :REM Commodore 64 BASIC

Is it a big deal that Java makes you say public static void main(String args[])? No, it's not. It's easily learned, and Java programmers develop the ability to become "blind" to all the syntactic noise (at least I assume good ones do). But it's pervasive. All Java code looks like this, to a greater or lesser extent. How many mental CPU cycles do Java programmers burn filtering out all the keyword soup?

At the risk of looking like a total Steve Yegge fanboy, I'll illustrate that with an example taken from his article on Allocation Styles: how to ask a language what index-related methods its string class supports (i.e. which methods' names contain the word "index"). His Java code looks like this:

List<String> results = new ArrayList<String>();
Method[] methods = String.class.getMethods();
for (int i = 0; i < methods.length; i++) {
    Method m = methods[i];
    if (m.getName().toLowerCase().indexOf("index") != -1) {
        results.add(m.getName());
    }
}
String[] names = results.toArray(new String[0]);
Arrays.sort(names);
return names;
Here’s how that program looks in a less verbose language (Ruby, as it happens): “”.methods.sort.grep /index/i Now even if you agree with me and Osvaldo that “there are no major blunders in the Java language”, you have to admire the concision of the Ruby version. It’s literally an order of magnitude shorter (30 characters vs. 332, or one line vs. 11). What a concise language buys you “But Mike, surely you’re not saying that the Ruby version is better just because it’s shorter?” Well, maybe I am. Let’s see what the advantages are: - Less code is quicker to write than more code. - Less code is easier to maintain than more code. As Gordon Bell has pithily observed, “The cheapest, fastest and most reliable components of a computer system are those that aren’t there.” Each line of code is a line that can go wrong. - Concise code lets you see more of the program at once: this isn’t as big a deal now we all have 1920×1200 screens rather than the 80×24 character terminals that I did all my early C programming on, but it’s still an important factor, especially as programs grow and start sprouting all kinds of extra glue classes and interfaces and what have you. - A concise language keeps the total code-base size down. I think this is very important. ScottKit currently weighs in at 1870 lines of Ruby, including blank lines and comments (or 1484 once those are stripped). Would I have started a fun little project like that at all if it was going to be a fun big project of 20,000 lines? Probably not. And this factor becomes more important for more substantial projects — the difference between ten million lines of code and one million is much more significant than the difference between ten thousand and one thousand. - Most importantly, look at what code isn’t in the Ruby version: it’s all scaffolding. It’s nothing to do with the problem I am trying to solve. 
In the Java version, I have to spend a lot of time talking about ArrayLists and Italian guys and loops up to methods.length and temporary String[] buffers and Italian guys. In the Ruby version, it's a relief not to have to bother to mention such things — they are not part of my solution. (Arguably, they are part of the problem.)

I think the last of these may be the most important factor of all here. I'm reminded of Larry Wall's observation that "The computer should be doing the hard work. That's what it's paid to do, after all". When I stop and think about this, I feel slightly outraged that in this day and age the computer expects me to waste my time allocating buffers and looping up to maxima and suchlike. That is dumb work. It doesn't take a programmer to do it right; the computer is smart enough. Let it do that job.

The upshot is that in the Ruby version, all I have to write about is the actual problem I am trying to solve. You can literally break the program down token by token and see how each one advances the solution:

"".methods.sort.grep /index/i

Here we go:

- "" — a string. (The empty string, as it happens, but any other string would do just as well.) (I notice that WordPress inconveniently transforms these into "smart quotes", so that you can't copy and paste the code and expect it to Just Work. D'oh! Use normal double quotes.)

- .methods — invoke the methods method on the string, to return a list of the methods that it supports. (You can do this to anything in Ruby, because Everything Is An Object.)

- .sort — sort the list alphabetically.

- .grep — filter the list, retaining only those members that match a specified condition.

- /index/ — the condition is a regular expression that matches all strings containing the substring "index".

- i — the regular expression matches case-insensitively.
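If you want to replay that breakdown yourself, here is the same query pulled apart one step at a time — a sketch you can paste into irb (the exact method list varies between Ruby versions):

```ruby
# Sketch: the one-liner, one step at a time.
all      = "".methods             # every method the string responds to
sorted   = all.sort               # alphabetical order
indexish = sorted.grep(/index/i)  # keep only names matching the regexp

puts indexish                     # e.g. index, rindex on recent Rubies
```

Each intermediate variable holds exactly what the corresponding token in the one-liner produces, which is why the chained form reads so naturally.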
Bonus pleasant property

As a bonus, the code reads naturally left-to-right, rather than inside-to-outside as it would in a language where it all has to be done in function calls, like this:

grep(sort(methods("")), /index/i)

I think that Ruby's object-oriented formulation is objectively better than the pure-functional version, because you don't have to skip back and forth through the expression to see what order things are done in.

"Say what you mean, simply and directly."

When I started to write this article, I didn't know what my conclusion was going to be. I just felt that I ought to say something substantial at the conclusion of a sequence of light-and-fluffy pieces. But, as Paul Graham says [long, but well worth reading], part of the purpose of writing an essay is to find out what your conclusion is. More concisely, E. M. Forster asked, "How do I know what I think until I see what I say?"

But now I've conveniently landed on an actual conclusion. And here it is. Remember in that Elements of Programming Style review, I drew special attention to the first rule in the first proper chapter — "Say what you mean, simply and directly"? The more that runs through my mind, the more convinced I am that this deceptively simple-sounding aphorism is the heart of good programming. Seven short words; a whole world of wisdom.

And how can I say what I mean simply and directly if I'm spending all my time allocating temporary arrays and typing public static void main? My code can't be simple if the functions I'm calling have complex interfaces. My code can't be direct if it has to faff around making places to put intermediate results. If I am going to abide by the Prime Directive, I need a language that does all the fiddly stuff for me.

So it looks like My Favourite Programming Language is Ruby, at least for now.
That might change as my early infatuation wears off (I've still only been using it for a couple of months), and it might also change as my long-anticipated getting-to-grips-with-Lisp project gathers momentum. But for now, Ruby is the winner. And if it's going to be dethroned, it's not going to be by a scaffolding-rich language like Java.

Coda: I don't hate Java

Just to defuse one possible stream of unproductive comments: this is not about me hating Java. I don't hate it: I think it's way better than C++, and in most ways better than C (which, given that I love C, is saying a lot). All the criticism I've levelled at Java in this article applies equally to C++, C and C#, and many other languages. To some lesser degree, they also apply to Perl and JavaScript. But I am on a quest now — to say what I mean, simply and directly. And Java is not a language that helps me do that.

Update (19 March 2010)

The discussion at Reddit is not extensive, but it's well worth a look because it contains some nice code samples of how you can do "".methods.sort.grep /index/i in other languages. There's also a little discussion at Hacker News.

This is a fine example of why hackers of the world don't take pleasure in Java programming. I don't hate Java either, it's got qualities, but I would hate to do anything in it. And I really like Ruby too. It has another set of qualities, especially the consistency (Matz's principle of least surprise, I just don't get that feeling from Python at all, indenting issues aside). I am not surprised that it appeals to many smart people. A language doesn't have to be as easy as Ruby to be pleasurable; I don't really mind C at all and I used assembler too. When you know your way around programming languages you just feel there is something wrong (like Java or C++). "Great minds" think alike?
:) And I also think that an 80 characters wide text buffer should be enough. ;)

An important factor (for me at least) is the kind of programming projects to be tackled. I love Forth because it works well for the problems I normally work on.

I work in, and love, q, an array language descended from APL and J. It doesn't have objects, so the nearest equivalent to your code would be to search some namespace for functions containing "index":

q){x where x like"*index*"}system"f .q"

.q is the namespace being searched (in this case, the one containing all the built-in functions) and "f" is a "system" (internal) command returning all functions in a namespace.

In Groovy:

return String.class.methods.findAll { m -> m.name.toLowerCase().indexOf("index")>-1 }.collect{ m.name }.sort();

While I wouldn't deny that Ruby is more succinct than Java, in this instance Steve Yegge's code stinks. He either doesn't really know Java, or he's being deliberately mischievous to make his point. Here's a much cleaner and more concise version that will return the "index" methods of any class. It took me less than a minute to write. It has the added advantage of not containing any bugs – unlike Steve's version. Of course Ruby still wins here, but I would argue that this is mainly because Java has no native capability to filter all the members of a collection at once.

I think you may be conflating several things here. I used to love APL for writing concise programs. Everything was done with arrays, array operators, expand and reduce. For example, looking up a string in a table would be something like:

(TABLE^.=STRING)i1

That's a matrix multiplication using a boolean and and an equals operator to produce a boolean vector of where the STRING appeared in the TABLE, the former being a vector and the latter a matrix. The i was the iota operator for finding the index of the first value in the result vector.
It was beautiful, but every line was a puzzle.

You seem to have two complaints about Java. One is that Java is largely scalar, without a lot of clever vector, map and reduce, or pattern match and replace operators. That's a valid complaint about the language density. COBOL and Applescript are notorious for their low expressive density because they try to look like a natural language. You can destroy your brain at either extreme.

The other complaint is that Java requires code to be placed in context. A lot of the text of a program does not advance the algorithm directly, but focuses on what the code does within the overall program. The code is descriptive, not prescriptive. It doesn't run from the top of the page to the bottom as in a straightforward narrative, but lives in various compartments and those compartments interact with each other. You can actually read the code or the context, much as we can parse DNA with introns, regulatory sequences and coding sequences.

When you just want to write a quick hack to fix something, it is nice to have a language which strips out the contextual overhead and just does something. Of course, if your program proves useful, you are likely to wind up gluing it to a host of other similar hacks using the shell or some other patchwork. If I recall, dealing with this patchwork of program components was noted rather negatively at one point. There is just no winning.

A couple of things:

a) You can make your Ruby example more expressive by turning it into:

String.instance_methods.sort.grep /index/i

Which leaves out the need for the arbitrary string by explicitly stating "All methods applicable to instances of String." It's not as short, but that's not always what expressive is about, eh? :)

b) In fairness to Java, that's an outdated bit of code you've quoted, and it doesn't take advantage of some of Java's newer niceties (like the For-each loop, for example).
It can also make use of String.matches(regex) rather than doing the clumsy .toLowerCase().indexOf("index") != −1 business. Not that those really help a lot, but they do make it much easier to read.

c) As someone who was a Java developer for many years, you *do* learn to filter out all the keyword cruft… until you start learning other languages. Once you start learning Python and Ruby, for example, writing Java becomes more and more painful.

There are worse things in life than being a Steve Yegge fan(boy). His drunken blog rants are mostly on target.

My co-workers just "love" Java. I like it too. I can whip things up really quickly in it but all the line noise really gets to me these days. Why do I have to deal with so much boilerplate? Do I really have to create (yet) another interface for this damned object? Yes. I know. It's the way the language is and there is no point in complaining about it.

Except there is a point. When all you have is a hammer, everything looks like a nail. After all these years using Java I am coming to the conclusion that it is merely a different kind of hammer to the others I have used before — C and C++. What I need is a new tool — a screwdriver.

I have tried to like Python. I was really nice about it during that interview with The Google a few years back but, seriously, isn't __self__ some kind of joke? Having said that I really like the whole indention thing but I'm a bit of a weirdo that way.

These days I find myself looking at Clojure, Ruby, Scala and (believe it or not) Go. If Ruby performance wasn't such a dog I'd be off down that road at 1000mph. Still, performance will get better and not just because CPUs get more powerful.

Heh… I "get" to work with classic ASP/VBScript daily, and it offends on all fronts here. Inability to declare and initialize on the same line? Check. Reading inside-out? Check. Lack of a concatenation-assignment operator? Check check check.
It's like the language wants me to write solutions that are inefficient and unmaintainable.

I do. I hate Java with a fucking passion. Its crappy APIs punish me every time I'm forced (for I would never do so willingly) to write Java. Fuck those assholes at Sun. They easily could have done a better job. No wonder they had to have a system like javadoc, because you'll spend half your development time searching for method sigs.

I wonder how much tooling will help you write that one line of code, or maybe even explain it later on during code review? In Java and using a modern IDE I can actually click on most of the code parts (classes, method invocations and such) in that file. So if you were explaining the code to me and you said that the methods invocation returns all methods, and I asked you whether public and private ones are both returned, how could you show it to me? Would you need to google documentation or is there an easier way? Or if I want to check the type that is returned from methods and check all the methods that the object has, what would be my steps? I tend to program mostly in Java, but use Python and PHP for some other projects, and the main distinction I've seen is that you have this excellent tooling compared to just vimming in the other languages.

toomasr, my take on that is that Java's huge IDEs are there to help work around the deficiencies of the language — to help you not have to think so hard about all the boilerplate. But the easiest boilerplate of all to think about is the boilerplate that just isn't there.

Big IDEs worry me for another reason: it seems that in some programming cultures (and again, I don't JUST mean Java, but it does exemplify this trend), the response to any technical deficiency is just to build yet more technology on top — which leaves you with yet more stuff to learn.
I’d much rather take more time to get the foundation of my building right than build an edifice of scaffolding to hold it up, only to find that I then need to maintain the scaffolding as well. @futilelaneswapper has beat me with the better Java code. I would add that the Ruby code also reveals an imperfect language – for one thing it’s horrible that any object responds directly to a “methods” message; “”.class.methods.sort.grep /index/i would be concise enough (with an extra “class” call). It seems that Ruby’s root Object type is a bit polluted. This is probably due to the heavy use of MOP in the language, but that’s still a tradeoff, not and example of a heaven-perfect design. Even Smalltalk, that didn’t care about the number of public methods in the root classes, required to send first a “class” message to later grab the methods etc. Also, in Java, adding utility methods like sort() to array/collection classes would also be a problem because then you enormously expand the size of important interfaces like Collection and List, so every new implementation of those is required to implement much more methods (including many methods that will be just delegates to some common implementation). So this reveals a classic tradeoff of static-typing versus dynamic-typing. Ruby classes can have two hundred methods without much harm because you rely on duck typing, you can provide alternate implementations that don’t necessary inherit the existing classes, and don’t necessary implement all their methods. But I will stick with the advantages of static-typed languages, even with the extra verbosity. Other “issues” of Java verbosity, like longer method and variable declarations, also come off static typing and exist in most other static-typed languages, so I’d say that Java’s major defect is not having a modern typesystem with Hindler-Milner inference or similar (for this look at Scala, or even JavaFX Script). 
Finally… @futilelaneswapper's would be a bit shorter if the method were not required to return a primitive array — just return a collection! And it will be even shorter in Java 7 with lambdas (this is another item that is sorely missing in Java, but we're fixing that, finally).

Hi, Vince! (That's "futilelaneswapper", for those of you who don't know him.) I suspect Steve Yegge was neither ignorant nor mischievous — probably just using an earlier version of Java. Anyway, your version certainly looks like an improvement. I wrapped it in a public-static-void-main function and a class, and added what seem to be the necessary imports, in the hope of seeing it work. I got:

    import java.util.Set;
    import java.util.TreeSet;
    import java.lang.reflect.*;

    public class bickers {
        public static void main(String[] args) {
            Set methods = new TreeSet();
            for (Method m : String.class.getMethods()) {
                if (m.getName().toLowerCase().contains("index"))
                    methods.add(m.getName());
            }
            System.out.println(methods.toArray(new String[0]));
        }
    }

Which I compile with javac bickers.java (ignoring the unchecked unsafe operations warning), and run with java bickers. The output is a little on the delphic side:

    [Ljava.lang.String;@c17164

What am I doing wrong?

Osvaldo: yes, it does seem that a lot of the scaffolding that I'm finding such a drag(*) is related to static typing. I'd like to do an article about the pros and cons of static vs. dynamic, but for this particular topic Steve Yegge has covered the ground so thoroughly that I'd have almost nothing new to say. (Sorry, yes, more fanboyage.) If you're interested in an open-minded and balanced view, I recommend his article Is Weak Typing Strong Enough? at

I do see both sides of this equation. But I think it's pretty clear now what side of the fence I've fallen on, so I won't pretend to neutrality :-)

(*) I didn't want to use the phrase that I am "finding it such a drag" because it's such a lazy colloquialism, but in this case it's actually a perfect description of the situation. Drag is exactly what the scaffolding is imposing on me, like trying to move through a viscous medium.

Ah, the enthusiasm of the first days of Ruby adoption. Been there, done that. Came back to Java crying. I totally agree with toomasr – Ruby IDE support just plain sucks for now, and I just don't see how it can be drastically improved in the future. This is just what you get from such a language. And speaking of the real world – the myth that Ruby is easier to maintain is just plain false. Ruby is far more prone to the situation where you're looking at your own code (that was written a while back) and say – what the hell does this do?

toomasr, sorry to reply twice to your comment, but I only just registered this bit: Yes, there is a much easier way — just ask the language itself! First ask it what it can tell us about methods, then use one or more of the methods it provides to ask the specific question you're interested in:

    irb> "".methods.sort.grep /method/i
    => ["method", "methods", "private_methods", "protected_methods", "public_methods", "singleton_methods"]
    irb> "".private_methods.sort.grep /instance/
    => ["remove_instance_variable"]
    irb>

And unlike documentation, you know it's up to date :-)

Just for completeness: sorted([method for method in dir(str) if 'index' in method])

@Nike: I have read Steve's static/dynamic rant now. Most of his Cons of Static Typing are, IMHO, in the range of highly debatable to sheer stupid and false statements. It's worth noticing that some of these items can backfire in the debate against dynamic typing. For one thing, take Steve's points 4 (false sense of security) and 5 (sloppy documentation). [These are in the stupid/false category.]
Yeah, I suppose that some bozos think that they don't need to write any tests because the language is static-typed so the compiler catches so many errors; and no documentation either, because the code is more explicit, the IDE can do perfect browsing and refactoring, etc. But we should not judge a language from the habits of incompetent users. I program in Java and my code is well documented and tested. It's a matter of discipline.

Now, let me flip the coin and look at dynamic languages: they tend to NEED extra effort in both testing and documentation, to compensate for the missing guarantees and explicit info that static typing would provide. So, we could look at the tests and docs that a (professional, mission-critical, well-maintained) Ruby app contains in excess when compared to a similar Java app, and categorize these as "scaffolding". Now the Ruby hacker will typically pose as a superior developer because his version of the code contains 2,500 unit tests while my version contains only 700 tests – for Agilists/TDD'ers it seems that humongous test suites are obvious evidence of excellence – but the hard fact may be that my version, with far fewer tests, is more reliable and simply doesn't need as many tests. (Tests are also code that has a maintenance/evolution cost, you know.) The same arguments are valid for documentation, with the big extra problem that broken docs won't be caught even by a good test suite. (So, the only really good code is code that's clear enough to not require detailed documentation.)

Now let me finish this with a real story. A few days ago I got a quick freelance job to fix two bugs and add some simple enhancements to an old Java program, which does some interesting particle-based 3D visualization – it runs with good performance even on old and underpowered J2ME devices without any 3D acceleration (take that, Ruby!). The code is well-written, but it contains zero tests; all documentation including code comments is written in German (which I don't grok beyond ja/nein); the original author is not available to help and the guy who hired me is not a developer, so I was basically on my own. But no problem: the code is crystal-clear, I fixed the offending bugs in the first hour of work, and added the new features in the first (and single) day of work, job finished – in a code base that I'd never seen before. In fact, so far I didn't bother to read >95% of the code. Testing effort was ridiculously minimal. This is the wonderful world of a language like Java. ;-)

Osvaldo, I don't understand why you'd characterise such a balanced, exploratory article as a "rant". It begins with "I'd love to know the answer to this" and ends with "This is a hard problem" — not really what I'd consider the hallmarks of rantitude. I also don't find it at all obvious that Yegge's static typing cons are "sheer stupid and false", any more than his static typing pros are. I can't help wondering whether you've just made up your mind in advance what your conclusion is going to be. No arguments about these two statements of yours, though: "Tests are also code that has a maintenance/evolution cost", and "the only really good code is code that's clear enough to not require detailed documentation". It's nice that we get to agree on something!

Mike: OK, that might have been a bit hard on Steve's article. The fact is that even with his soft language, I disagree vehemently with some of his findings. And yes, I am a static-typing bigot, as I often confess in these discussions. So you are probably right to say that I'd made up my mind before reading Steve's blog; but I didn't make up my mind in five minutes – I've been making it since I was 13 years old with my ZX Spectrum, and I've fallen in love with both camps – e.g. Eiffel as the early champion of static typing (Meyer's OOSC is still the SINGLE programming book that's 1300 pages and worth reading cover-to-cover), and Smalltalk as an early and über-dynamic language. Having said that, other people may be ten times as smart and experienced as I am and still reach opposite conclusions, and I respect that – but then, Steve used some old and tired "cons of static typing" (although the facts-list is followed by a discussion of much better quality); and on top of that, Steve makes a classic confusion of terminology, mixing together the concepts of static/strong typing, and weak/dynamic typing. He is a brilliant hacker and he did have a formal CS education, so maybe programming language design is just not his focus, or he is just in the camp that doesn't care much about formal language/compiler theory. So call me an academic elitist – in fact it's been many years since I've read a single programming book, only research papers — but when it comes to programming languages I clearly expect a lot from people who have something to say.

It is odd that, having consistently said strong/weak typing throughout when he meant static/dynamic typing, he then went to all the trouble to "explain" at the end that the mistake was deliberate, rather than just fixing it.

I like the fact that Lisps (like Clojure) allow me to say something like (-> "" methods sort (grep /index/i)) if I want to, instead of (grep (sort (methods "")) /index/i) being my only option. The fact that this sort of syntax creation is possible gives it the edge (in my mind) over languages that enforce one way of thinking over all of the others that I might want to use.

@Osvaldo: Respectfully, you might take your own advice () and learn Ruby before you start criticizing it. The fact that an object responds to the "methods" call is not "horrible", but a rather useful artifact of the way OO works in Ruby. It also doesn't do what you think it does – "".class.methods DOES work, but it's not the same as "".methods :) Looking over the documentation for Object doesn't suggest any "pollution" – sure, it has more methods than the Java Object, but Ruby handles more of its functionality in methods than Java does (operators, etc.). Although if you still think it's too bloated, you'd perhaps be pleased to see the introduction of BasicObject in Ruby 1.9: …and if not, well, you're happy with Java.

Mike: When you call System.out.println on an object, Java invokes the toString() method of that object and prints whatever it returns. So what you are seeing is the output from Array.toString(): an object signature, followed by its memory address in hex. This is the default behaviour for all objects that don't override toString() themselves (like Array). So you need to traverse the array yourself and print each String object in turn: There are good reasons for this behaviour, but it is admittedly a bit of a pain. However, as Osvaldo has pointed out, there's a bunch of unnecessary conversions going on in Steve's code, which for the sake of fairness I have maintained in my version of the code. The most obvious issue is that having iterated over one collection to create a result set, we then iterate over the result set to get what we really wanted in the first place! Ugh. Incidentally, the bug in Steve's code, in case you hadn't figured it out, was his use of a List to hold method names. The String class overloads its two index methods, so the List ends up holding multiple copies of the same method name. As for the unsafe conversions, the correctly typed version was mangled when I posted it by WordPress. It removed everything in angle brackets.

I liked your series of posts on why these weren't your favorite languages. All languages have their (mis)features, and it fomented a lot of interesting discussion. I also have to say I agree completely with your summary of C++. It is by far the most concise and precise description I've seen. I think I'll make up a poster of it at some point and hang it over my desk, if you don't mind. I've followed Java developments over the years, but I was never inspired to jump on that particular bandwagon. It didn't seem to solve any of the problems I needed solved in a better fashion than C/C++, and it added a whole new layer of dependency in the VM. The fact it gained such a following in the server/enterprise space took me by surprise. Although, in hindsight, I understand why that happened. I now regret not gaining some experience with Java development, as it seems a great many jobs require decades' worth of Java experience to even be considered. Alas! My misspent youth!

Thanks, Charles, for generously overestimating the value of my C++ summary :-) I recently read a much better, or at least much snarkier, summary: "an octopus made by nailing extra legs to a dog" (Steve Taylor). Harsh, but perhaps not entirely unfair.

Nice article, and thanks for your inspiration; I've started learning LISP as well :-)

Vince: but why on earth would Array.toString() not return something useful? Something like a newline-separated list of the members' toString() results would do, or comma-separated, or something. Just not "[Ljava.lang.String;@c17164", which reads like a run-time buffer overflow. Anyway, I tried to do what you suggested in the context of the little program that had built the collection called methods:

    for (String s : methods) {
        System.out.println(s);
    }

But that wouldn't compile: "incompatible types". So I tried converting methods to an Array, and using for (String s : methods.toArray()), but that was rejected too. I tried a couple more incantations, then thought — hang on, what the heck am I doing? This is exactly what I am trying to get away from. Guessing vocabulary is all very well when you're playing 1980s Adventure games (GIVE HONEY doesn't work, you have to type OFFER HONEY), but it's no way to write programs. Surely the whole point of the so-called smart for-loop is that it works on all collections? So why doesn't it? Not to keep beating the same dead horse, but in Ruby you would say collection.each { |member| do_something(member) } and it Just Works, every time, whatever kind of collection collection is. Is there a compelling reason why Java can't do the same?

@Mike Taylor: To pretty-print an array in Java, you should use java.util.Arrays.toString(array). That will take in an array of any type (it's an overloaded method) and return a String. Is this defective behaviour? Yeah. Sun should have changed the definition of toString() for arrays directly (what you are seeing is the internal representation of the array's _type_). But Sun decided to leave the warts in Java, build new stuff instead, and hope that documentation would cure all ills. However, it's all silly in this context, because you don't need to convert a Set to an array to print it out. If you want to print it out, simply using:

    System.out.println(methods);

would work just fine.

Ha! And so it does:

    $ javac bickers.java && java bickers
    Note: bickers.java uses unchecked or unsafe operations.
    Note: Recompile with -Xlint:unchecked for details.
    [indexOf, lastIndexOf]

Thanks, Jon!

What about the Google Go language? I want to like Go, because of the Thompson/Pike connection; but I can't find anything about it that excites me. It smells too much like Java and C# to stir any deep emotion.

Pingback: Top Posts — WordPress.com
Pingback: In search of simplicity « vsl

More thoughts on all this here

Having read Vince's post in response, I am pleasantly impressed by his Filter class. I recommend it to anyone who's stuck with Java.
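The claim above, that Ruby's collection.each "Just Works, every time, whatever kind of collection", can be made concrete. This is an added sketch, not code from the thread (the collections and values are arbitrary), using only standard Ruby core/stdlib:

```ruby
require "set"

# The same Enumerable protocol works unchanged across collection types:
# each container implements `each`, and the iteration code never changes.
collections = [
  [3, 1, 2],           # Array
  Set.new([3, 1, 2]),  # Set
  (1..3),              # Range
]

results = collections.map do |collection|
  acc = []
  collection.each { |member| acc << member }  # identical call on every type
  acc.sort
end

results.each { |r| p r }  # prints [1, 2, 3] three times
```

The Java smart for-loop eventually caught up with part of this via the Iterable interface, but as the exchange above shows, raw-typed collections and arrays still trip it up.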
Hi, it seems like I'm the first one to point this out, but your example on how to find the names of all methods containing "index" is incorrect. Take a moment to run it () to verify that. The problem is that an Object[] cannot be cast to a String[] even if it contains only strings.

Fun but true … After adding a couple of extension methods (to make the syntax more similar) I can write:

    from methodName in "".MethodNames()
    where methodName.Grep("index", System.Text.RegularExpressions.RegexOptions.IgnoreCase)
    orderby methodName
    select methodName;

in C#.

The group I work for conducts very extensive code reviews. One of the things we strive for is to make the code as succinct as possible without being obscure. We feel strongly that succinctness makes the code easier to learn and maintain. So, I agree with your argument about the importance of conciseness. However, 90% of our code is Java, so I always feel as if we are fighting a losing battle in striving for conciseness. I find it interesting that I can't get anybody in my group to consider changing languages, yet at the same time I feel that with Java, we are trying to prevent the tide from coming in. I guess the devil you know…

I think you should take another look at C#. I only spend about 5% of my time coding in C#; I'm definitely not very experienced with the language. However, C# is evolving very much faster than Java. The new .NET 4.0 version is close in features to Scala. In general, I don't like being tied to a particular platform, but C# is becoming very interesting.

The Java world has seen the emergence of languages much more interesting than Java itself: Scala, Clojure, and Groovy. Groovy, as the phonological similarity indicates, was conceived as a Ruby-like language for Java. The index filter is a similar one-liner:

    String.methods.name.sort().findAll{ it =~ /(?i)index/ }

I spent a couple of years programming in Groovy and just loved it. But these days I find the most fascinating language around to be Clojure. It's a Lisp dialect for the JVM, more purely functional than traditional Lisps, with a strong story about how to handle concurrency and state. Rich Hickey, the author of the language, has made some wise choices about what Lisp baggage to leave behind, recognizing, for instance, that there's more to data structures than linked lists. It's a compelling piece of language design. If you're really looking to get into the world of functional languages, this is a great place to start.

In your Ruby example, surely you want to filter first *before* sorting?

Okay, what's with all the sushi?

Aha! An even simpler way to do it in C#:

    "".MethodNames().Where(m => m.Grep("index", RegexOptions.IgnoreCase)).OrderBy(m => m)

(suggested by a commenter on my journal)

Michel S., you certainly can filter first before sorting if you wish: "".methods.grep(/index/i).sort but it doesn't really make much difference.

Marius Andersen, what's not to like about delicious, firm yet yielding, sushi?

My favorite programming language is pseudocode.

RBL wrote: "My favorite programming language is pseudocode." Mine is pseudocode that the computer can execute. You know, like: "".methods.sort.grep /index/i

> "".methods.sort.grep /index/i

Off the top of my head, in Javascript: Object.keys(String).sort().filter(/index/i.test) — MV

The JavaScript version is pretty nice (though it seems weird that the method for listing String methods is a method of Object rather than of String). What does the .test at the end mean? I feel bad about JavaScript. It seems like it's that close to being a good language, but falls short for a variety of reasons, not all of them the language's fault (e.g. the horribly incompatible DOM implementations in the various browsers). But some of the failings — like the single global namespace — are its own fault.

1. Maybe "String.keys()" works, not sure
2. .test is a method of any RegExp object, as in /index/i.test("myindexstring") (which returns true); /index/i.test is a function/method (passed to filter) which will be applied to every element of the array returned by sort()
3. Forget all about JS+browser+DOM and check out server-side Javascript at and/or — MV

A good sign of languages that are "part of the problem", as you suggest, is the existence of "design patterns" and books of "design patterns" for these languages. Think what those are; they're instructions to the human about how to model some repeatedly-needed construct so that the computer can understand it. If the language were sufficiently expressive, you could express this *in the language* and never need the books in the first place. (All these books say that you cannot express patterns in computer languages. They are wrong.) (This is not an original insight and you have doubtless encountered it before in the writings of Paul Graham. But I thought it was worth reiterating here. You do need a decent macro system to do this, but these *are* implementable in non-Lispy languages, e.g. Metalua.)

btw, another wonderful description of C++, from a years-old post on alt.sysadmin.recovery: "No, no. C is a small sharp knife. You can cut down trees with it, and get it cut down exactly the way you want it, with each shaving shaped exactly as you wish. C++ is a small sharp knife with a bolted-on chainsaw and bearing-mounted laser cannon rotating at one revolution per second wildly firing every which way. You can cut down trees with it, and get it cut down exactly the way you want it, with each shaving shaped exactly as you wish.
You can also fire up the chainsaw and cut down the entire forest at one go, with all the trees cut down exactly the way you want them and every shaving shaped exactly as you wish — provided that you make sure to point the wildly rotating and firing laser cannon in the right direction all the time." — Padrone, LysKom, article 717443, 11 Sep 1994, translated by Calle Dybedahl

Pingback: Closer To The Ideal » Blog Archive » A comparison of Java and Ruby

@Nix: You are a bit off with design patterns. First, the books don't say that you can't express patterns in programming languages – this wouldn't make sense. I think you mean that you can't express the pattern as a single, reusable implementation, so that you could have library-provided patterns that you just call from app code, or "plug" into app code through some mechanism (inheritance, composition, templates, aspects, whatever). My MSc thesis was focused on the automatic detection of design patterns in existing code; I researched the field pretty well [back in 1999-2000 anyway] and implemented a reverse engineering tool that was state of the art for its time. But this field of research was a dead end, because Design Patterns are, _by definition_, higher-level problem/solution recipes that don't easily translate to a reusable piece of code in a mainstream programming language. They don't typically have a standard implementation structure, i.e., the pattern description doesn't always produce the same concrete OO design, even on a single language/platform. You most often need to adapt the pattern to the needs of your application.

Of course, there are patterns at different levels of abstraction. Picking the well-known GoF patterns, Iterator is one that has a standard implementation in the Java language (java.util.Iterator), C++ (STL iterators) and other modern languages. [You must still create many specializations of the base iterator type, so it's only white-box reuse.] But this pattern is so simple that its inclusion in the book may only be justified because it was written in the early 90's. OTOH, the Interpreter pattern has yet to find a single implementation anywhere – there's a huge range of techniques, none of them ideal for every case, not even powerful compiler-compilers like JavaCC/ANTLR. And then you can move beyond GoF and check later patterns catalogs, e.g. is one of the best modern references – I can't recommend this enough; my most complex and successful project owes a great deal to the fact that I digested this book cover-to-cover, because the system needed custom implementations of most of these patterns.

I agree that a powerful macro or metaprogramming facility can increase – at least a little – the set of patterns that can have a standard implementation, even if that's just a partial implementation. But even these techniques won't tame most of the patterns. Look at Ruby on Rails: it's a fine example of using metaprogramming to implement many persistence patterns (). But then, the problem was not solved by simple use of the language's expressiveness – it required a major new piece of runtime (e.g. ActiveRecord), whose implementation is big and complex, so they might just as well have created a brand-new language with all the ORM stuff hardwired as native features… the MOP capability of Ruby is not [in this example] a big deal for application developers, although it is for runtime/middleware developers, because it's easier to write some advanced MOP tricks than to create a new compiler. And it's not yet the Ultimate Implementation of those patterns – we could propose a very different OO/persistence solution, e.g. a Ruby port of LINQ to SQL.

Er, the GoF book says precisely that you can't expect to implement a pattern as a single reusable thing. I agree that not all patterns can have a single implementation, but that's because some of them touch on active areas of research (e.g. interpreters of all sorts) or are just too vague to be useful ;P (I can't comment on Ruby: I haven't learnt it yet, and that it manages to be even slower than CPython is a major strike against it in my eyes. Dammit, if Lua can manage to be both ridiculously small *and* faster than anything else bar compiled code *and* completely portable, there's no excuse for a less portable interpreter to be slower. Yet they all are. Guessing wildly without data, maybe it's their bigger cache footprints…)

@Nix: No, the reason why any patterns can't have a single implementation is the fact that they are DESIGN patterns. You are failing to realize the gap that exists from design to implementation, or more generally, from one level of abstraction to the next (e.g. analysis model to design model). There was a ton of research trying to tame these gaps, e.g. Catalysis (), which planned to allow full, automatic mapping / binding / traceability between these levels… this research was also steaming hot when I was doing my Master's, but it is largely forgotten now. And I'm glad it failed, because the idea was that we should create even more complex UML models describing everything from the high-level business model down to algorithms, with tons of UML "improvements" to bind everything together, so when you make some change in the analysis layer it auto-propagates all the way down to code and vice versa. But even this stuff would not enable automatic generation of lower-level artifacts from higher-level ones, except maybe for restricted cases. Many current CASE tools can actually "generate design patterns", but that feature is pretty rigid and limited; it doesn't buy you much.
In fact I don't even use CASE integration to code, either forward or reverse engineering; in fact I only write design-level UML models when I'm forced to by client requirement, because it's not worth it – but I digress…

Osvaldo writes: "the reason why any patterns can't have a single implementation is the fact that they are DESIGN patterns." And yet, some patterns do have a single implementation in some languages — unless you consider the "implementation" so simple as not to count. For example, the Decorator Pattern is trivial enough in Ruby that it's implemented in one line here at — the relative complexity of doing this in other languages seems to come mostly from having to punch careful holes in the type system.

@Mike: When some pattern has a trivial impl in some language (or platform – language+frameworks), this typically happens because their design has "adopted" that pattern. For example, the Java language adopts the Prototype pattern (Cloneable / clone()); any OO language & framework adopts the Template Method pattern (polymorphic methods); the Java APIs adopt many, many other simple patterns like Iterator, Proxy, Observer, MVC and so on. Other patterns may be so simple that they often have a standard implementation even without explicit support from the language or frameworks, e.g. Singleton. But even in these apparently trivial cases there is room for variations; for one thing, check out the Lazy Initialization Holder for Singleton:

    public class Singleton {
        private static class Holder {
            private static final Singleton instance = new Singleton();
        }
        public static Singleton getInstance() {
            return Holder.instance;
        }
        private Singleton() {
            //... potentially slow or race-unfriendly init
        }
    }

Smart-ass, concurrency-savvy Java programmers use the code above because it allows the initialization to be lazy, without risk of concurrent initialization, but also without any synchronization cost. You don't have to synchronize the getInstance() method because Java's classloading rules guarantee that the holder class is only initialized once, so its static init only runs once. (This obviously requires some synchronization within the classloader; but as soon as the Singleton and its Holder are fully initialized, the JVM patches the code so no further calls to getInstance() will ever run into ANY overhead of classloading, synchronization or anything.) The same is valid for most other patterns that are not "officially adopted" by the platform – even when there is a trivial implementation, you'll often discover that it's not the single, and perhaps not the best, implementation. ;-)

Succinctness is always overstated. The minimal effort of wrapping everything in Java as an object is completely overwhelmed by the vast oceans of libraries to take advantage of. Could it be improved…. hell yes, by making it more like Smalltalk

I cannot agree with you at all that succinctness is overstated (if, as I assume, you mean overrated). It pains me that Java programmers have been conditioned to believe that it's tolerable, or even normal, to reflexively write things like

    class Point {
        int _theXCoordinate;
        int _theYCoordinate;

        int getTheXCoordinate() { return _theXCoordinate; }
        void setTheXCoordinate(int newXCoordinate) { _theXCoordinate = newXCoordinate; }
        int getTheYCoordinate() { return _theYCoordinate; }
        void setTheYCoordinate(int newYCoordinate) { _theYCoordinate = newYCoordinate; }
    }

When they could be writing:

    class Point
      attr_accessor :x, :y
    end

Doesn't that seem morally wrong to you?

CurtainDog: I call poppycock. Minimal effort stops being "minimal" when it impacts every line of code you write. Also, ignoring the fact that library support is always a contextual argument: if library breadth were solely sufficient for choosing a language, we wouldn't ever have had Java to begin with. :)

@Mike: but they haven't been conditioned to write that. They've been conditioned to write:

    class Point {
        private int theXCoordinate;
        private int theYCoordinate;
    }

… and then click a little button that says "generate getters and setters." Which is morally wrong on a completely different level. ;P

LOL at duwanis's last line :-)

Or in C#:

    class Point {
        public int X {get;set;}
        public int Y {get;set;}
    }

Which gives you all the control you get in Java, in less space, with none of the crustiness.

The C# version is certainly a step in the right direction. (Presumably you meant X and Y to be private rather than public?) Is it conventional in C# for data members to have capitalised names like this?

Those aren't member variables, those are properties. In earlier versions of C# you'd have written:

    class Point {
        private int x, y;
        public int X { get { return x; } set { x = value; } }
        public int Y { get { return y; } set { y = value; } }
    }

but the new syntax is the equivalent of that (and a heck of a lot more concise). If I just wanted two public members I wouldn't have to worry about the get/set bits:

    public class Point {
        public int X, Y;
    }

You can have different accessibility on the set and get statements, if you want external immutability:

    class Point {
        public Point(int x, int y) {
            this.X = x;
            this.Y = y;
        }
        public int X { get; private set; }
        public int Y { get; private set; }
    }

The convention in C# is camelCase for private and PascalCase for public/protected.

Isn't it disastrous that WordPress doesn't preserve indentation in <code> sections? Luckily, my site-owner superpowers meant that I was able to edit Andrew Drucker's comment, and see it in all its indented glory. Sucks to be the rest of you. (Don't worry, Andrew, I didn't change anything!)

If you can edit that in order to make it preserve the indentation then I'll be incredibly grateful :-> But I suspect it's not possible.
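For comparison, the externally-immutable C# Point above, the one with private setters, has a similarly terse Ruby counterpart. This is a sketch added for illustration, not code from the thread; attr_reader is standard Ruby:

```ruby
# Read-only from the outside: attr_reader generates getters only,
# so x and y can be assigned just once, in the constructor.
class Point
  attr_reader :x, :y

  def initialize(x, y)
    @x = x
    @y = y
  end
end

pt = Point.new(3, 4)
puts pt.x                 # prints 3
puts pt.respond_to?(:x=)  # prints false: no setter was generated
```

Reassigning is still possible from inside the class (Ruby's instance variables are always writable internally), so this is external immutability in the same spirit as the C# private-set version, not a frozen value object.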
JavaFX Script, just for completeness:

    class Point {
        var x: Double;
        var y: Double;
    }

…and you create this object with a one-liner like “var pt = Point { x:0, y:1 }”, no constructors required. This example only scratches the capabilities of JavaFX Script’s properties – there are more features like support for visibility control, immutable or init-only fields, and the powerful binding/trigger stuff – all of that with similar conciseness.

No, Andrew, sorry, I don’t believe it’s possible. Believe me, it’s hard enough to get the code indented even in my own posts.

“there are no major blunders in the Java language”

Perhaps not, but the library sure is chock-full of them, with date, time, and calendar among the most spectacular failures. I wish Joda Time had been around when I was being frustrated by those atrocities.

Pingback: Early experiments with JRuby « The Reinvigorated Programmer
Pingback: Programming Books, part 4: The C Programming Language « The Reinvigorated Programmer

    grep(sort(methods("")), /index/i)

grep the sorted methods of a String and filter ones containing ‘index’ case-insensitively. Reads well enough; anyway, it’s a matter of perspective.

Pingback: Writing correct code, part 1: invariants (binary search part 4a) « The Reinvigorated Programmer
Pingback: Заметки о программировании » Blog Archive » Больше кода, меньше кода, не все ли равно (in English: “Notes on Programming: More code, less code, does it matter?”)
Pingback: Entity Framework v4, End to End Application Strategy (Part 1, Intro)
Pingback: The Perl Flip Flop Operator « A Curious Programmer
Pingback: Dependency injection demystified | The Reinvigorated Programmer

Today, you might write the Java code as:

    Arrays.stream("".getClass().getMethods())
          .map(m -> m.getName())
          .map(n -> n.toLowerCase())
          .filter(n -> n.contains("index"))
          .sorted()
          .collect(Collectors.toList())
          .toArray(new String[0]);

That is certainly an improvement — though more than a little long-winded compared with the versions in languages for which this kind of thing comes naturally.
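Circling back to the singleton discussion at the top of the thread, here is the holder-class idiom written out in full (just a sketch; the class names are illustrative):

```java
// Initialization-on-demand holder idiom. The JVM guarantees that
// Holder is initialized exactly once, on the first call to
// getInstance(), with no explicit synchronization in user code.
class Singleton {
    private Singleton() {}

    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    static Singleton getInstance() {
        return Holder.INSTANCE;
    }
}
```

Because the classloader performs the one-time initialization, every later call to getInstance() is just a field read.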
Pingback: Clearing out some junk: computing paraphernalia of yesteryear | The Reinvigorated Programmer
https://reprog.wordpress.com/2010/03/18/so-what-actually-is-my-favourite-programming-language/
supports iteration and the with statement.

Changed in version 2.7: Support for the with statement was added.
Changed in version 2.7: Support for zero-padded files was added.
New in version 2.7: The mtime argument.

This is a shorthand for GzipFile(filename, mode, compresslevel). The filename argument is required; mode defaults to 'rb' and compresslevel defaults to 9.

Example of how to read a compressed file:

    import gzip
    f = gzip.open('file.txt.gz', 'rb')
    file_content = f.read()
    f.close()

Example of how to create a compressed GZIP file:

    import gzip
    content = "Lots of content here"
    f = gzip.open('file.txt.gz', 'wb')
    f.write(content)
    f.close()

Example of how to GZIP compress an existing file:

    import gzip
    f_in = open('file.txt', 'rb')
    f_out = gzip.open('file.txt.gz', 'wb')
    f_out.writelines(f_in)
    f_out.close()
    f_in.close()
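Since GzipFile supports the with statement as of 2.7, the examples above can also be written without explicit close() calls. A sketch (note that on Python 3 the payload must be bytes rather than str):

```python
import gzip

# Write a compressed file using the 'with' statement (2.7+);
# the file is closed automatically when the block exits.
with gzip.open('file.txt.gz', 'wb') as f:
    f.write(b'Lots of content here')

# Read it back the same way.
with gzip.open('file.txt.gz', 'rb') as f:
    file_content = f.read()
```

This also guarantees the file is closed if an exception is raised inside the block.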
http://docs.python.org/2/library/gzip.html
Hello, this is Jim Springfield. This post will cover some low-level details about how we represent information in the browse database in VS 2010. As I’ve mentioned in a previous post, we are using SQL Server Compact Edition (SSCE) for storing information about all of the C, C++, and IDL files in your solution. I will show some SQL examples that illustrate how to mine this database for information about your code.

NOTE: The particular database schema we use in VS2010 may very well change in future versions, so the examples I show may not work in future versions of VS.

Opening the SDF file

The database file (SDF) can normally be found in the same directory as your solution file (SLN), although it may get relocated if your SLN is on a network share or a flash drive. You can open the SDF using several different tools. SSMS (SQL Server Management Studio) is a good one if you have access to it. However, you can also open it in Visual Studio itself. Before opening the SDF to play with, it is best to close the associated solution.

1. Within VS, go to the “Server Explorer” window, right click on “Data Connections”, and select “Add Connection…”.
2. Select “Microsoft SQL Server Compact 3.5” for the Data Source and click “Continue”.
3. Click the “Browse…” button and then navigate to the SDF file to open.
4. If the SDF file is large (i.e. > 250MB), you will need to set an option before opening. To do this, click the “Advanced…” button and set “Max Database Size” to something larger than the SDF you are opening. 4091MB is the largest value allowed and you can just use that if you wish.
5. Finally, click the “OK” button and VS should open your SDF file.

Learning your way around

If you expand the new node for your SDF in Server Explorer, you will see a “Tables” node. Expand that and you can see all of the tables that are currently defined. The “code_items” table contains information on every definition and declaration that occurs in your source.
I don’t have space to cover what all of the tables do, but take a look around. Don’t expect to see data in the refs or symbols tables as those are there for some possible future use.

Note: Don’t try to make changes to the schema or indexes and expect that to persist. When we open the SDF for actual browsing use, we do a consistency check and if anything is not “correct”, we delete the SDF and rebuild it.

Take a look at the “code_item_kinds” table. You can see the contents by right-clicking and selecting “Show Table Data”. There should be 59 entries here. These values are used in the “code_items” table and you can use them in queries to find certain types of code items.

Creating a query

Right click on the “code_items” table and select “New Query”. You will get a query window with some tools that help you build and run queries. A window will pop up asking you to add a table. Just click close as you will just be copying queries into the query window for now. Try this query first to get your feet wet. To run a query, click the red exclamation icon in the toolbar or press “Ctrl+R”.

    select * from code_items where kind=1

This query returns all code items that are C++ classes. If you look at all of the columns, most should make sense. Note that the database gives start and end position information for the entire class and just the name portion of the code item. It is hard to see what file a code item comes from, however, as the code_items table uses a file_id and not a filename. To see the filename as well, try the following query which does a join with the files table to get the filename.

    SELECT f.name, ci.*
    FROM code_items AS ci
    INNER JOIN files AS f ON ci.file_id = f.id
    WHERE (ci.kind = 1)

One thing to notice is that this query returns information on code items that occur in your own code as well as in the SDK and other headers. You can return information for files that are explicitly in your solution by using the “config_files” table.
This table has a column that indicates whether a file is implicit or explicit. A file may have multiple entries in the config_files table as a file can be used in multiple configs/projects. The “DISTINCT” keyword prevents returning duplicate copies of code items.

    SELECT DISTINCT f.name, ci.*
    FROM code_items AS ci
    INNER JOIN files AS f ON ci.file_id = f.id
    INNER JOIN config_files AS cf ON ci.file_id = cf.file_id
    WHERE (cf.implicit = 0) AND (ci.kind = 1)

The parent_id in the code_items table refers to another item in the code_items table. Using this information, you can get some parent/child information. The id of 0 is the global namespace. So, to get all functions in the global namespace you could do this.

    SELECT *
    FROM code_items
    WHERE (kind = 22) AND (parent_id = 0)

You can also join the code_items table to itself to find a set of code_items whose parent matches some set of criteria. The following example finds all functions whose parent code_item is named “ATL”.

    SELECT ci1.*
    FROM code_items AS ci1
    INNER JOIN code_items AS ci2 ON ci2.id = ci1.parent_id
    WHERE (ci1.kind = 22) AND (ci2.name = 'ATL')

I have only scratched the surface of what you can do to mine the SDF for interesting information about your source code. There are many other ways to leverage the data contained in the SDF to gather information about your source code. All of our browsing features for C/C++ are implemented on top of the SDF. We do take advantage of some caching, prepared commands, and the special “table direct” mode that SSCE provides in order to increase performance, but everything comes from the SDF at some point. The query processor in SSCE is limited in some ways, but you could even replicate the data from the SDF into a full SQL database and perform even more complex queries than are allowed by SSCE.

Jim Springfield
Visual C++ Architect

Brilliant! We can now sneak into intellisense content itself. Can you go through any VCCodeModel changes?
I'm having trouble where some previously working code now only returns exceptions when accessing VCCodeClass.DeclarationText in VS2010.

Thanks for the suggestion JK. I will talk to the dev that worked on CodeModel and see about getting something written up. If there is a specific bug you have, please open it on Connect. connect.microsoft.com/VisualStudio

Hi JK, please have a look at the following blog post: blogs.msdn.com/…/visual-c-code-model-in-visual-studio-2010.aspx . Here, we tried to enumerate differences in VCCM of VS2010. We're interested in source code that causes DeclarationText to throw. Could you please share it with us? Vytas/Visual C++ IDE

Since you have the old VC++ 6.0 (MFC and all) source code somewhere there, why not just recompile it, update the compiler to produce modern CPU code, and include the latest MFC? That would make the great 6.0 compiler available again. And kick all the managed and .NET stuff to some other package, give them another name (C++NET, ManagedC++, etc.) and keep the old VC++ as it was. It is crazy to try to integrate everything, since it just creates problems and the compiler front end just gets worse and worse. If I had the 6.0 source code available I would just recompile it, swap the compiler to a later version and include fresh libraries.

Hi Janne, I always think about the same when I run VS… I suppose that the reason is that they designed the IDE to stand at the same level (or better) amongst Eclipse, Netbeans and Co., where several languages (Java, PHP, JavaFX, Ruby, Python, etc.) converge in the same IDE. After all, programming languages share the same components (statements, conditionals, loops, etc.). So there is some solid foundation in the current architecture of the IDE. But currently, we use the same IDE to develop programs for "managed code", "native code" and a "C++/CLI" hybrid.
Managed Code: VB.NET, ASP.NET, C#, F#
Native Code: C++
Hybrid Code: C++/CLI

It would be better if C++ could have its own special IDE (with the latest controls such as property grids, ribbons, etc.), as in VC++ 6.0, and group the .NET languages in another IDE. After all, most .NET programmers don't like to program in C++. About the browse database: it has improved a lot since the BSC and NCB formats; the database seems to be updated in real time, and it is not necessary to recompile the code in order to update the database.

@ Vytas: I've filed Connect bug 568823 which demonstrates the issue.
https://blogs.msdn.microsoft.com/vcblog/2010/06/09/exploring-the-visual-c-browse-database/
ice-stream

An expressive streaming utility for node, inspired by Lo-Dash, Async, and event-stream. Ice-Stream's goal is to allow for complex processing of streaming data.

    npm install ice-stream

Ice-Stream aims to make stream processing as easy as the ubiquitous libraries mentioned above make processing arrays and objects.

About Streams

Stream processing is basically pumping data through a number of operations, piece by piece. Using streams is especially useful when:

- There is more data than available memory
- The data source is slow, e.g. over a network, user input
- Some of the data can be processed without all of the data

In some cases it is useful to think about and operate on a stream as a continual flow of data, and sometimes it is better to think about it as a segmented, chunk-by-chunk flow. Ice-Stream's methods do both, depending on the operation.

Examples

First, to include Ice-Stream:

    var is = require('ice-stream');

Using the static methods results in a typical Node Stream:

    // Stream from a file, through a lowercaser, to an output file
    is.toLower( fs.createReadStream('file.txt') ).pipe( fs.createWriteStream('file-low.txt') );

Passing a Stream to the constructor generates a wrapped stream, which can be chained:

    // Parse out unique keywords from the file and output them to stdout
    is( fs.createReadStream('file.txt') ).split(' ').toLower().unique().without('ruby', 'python').join('\n').out();

Constructor(mixed)

The Ice-Stream variable can be used as a namespace to access the stream methods, as a function to wrap basic Node streams, and as a constructor to create streams for data.
Examples

    // Wrap a basic Stream
    var wstream1 = is( fs.createReadStream('file.txt') );

    // Create a text stream
    var wstream2 = is('stream this data');

    // The above result is wrapped so we can immediately chain it
    wstream2.split(' ').join('\n').out();

    // Create a stream from an array
    is(['stream', 'this', 'data']).join('\n').out();

    // Create a non-text stream from an array
    is([1, 4, 6, 2, 91]).map(function(num) { return num*2; }).join('\n').out();

Methods

- exec
- split
- join
- toLower
- toUpper
- map
- mapAsync
- mapAsyncSeries
- filter
- filterAsync
- filterAsyncSeries
- unique
- without
- out

exec(cmd)

Spawn an external process. Input is passed to stdin of the new process, and output comes from stdout. Any data that is received from stderr is emitted as an error.

Arguments

- cmd - The command to run

split([separator])

Chunks the data based on the delimiter. Concatenates and buffers the input until the delimiter is found, at which point the buffered data is emitted. The delimiters are removed and not emitted. When the input stream closes, the final data chunk is emitted. Note that this method converts input to strings.

Arguments

- separator - String specifying where to split the input stream. Defaults to \n.

join([separator])

Injects data in between chunks of the input stream. Note that a split() followed by a join() will produce the same overall stream, but the chunking will be different.

Arguments

- separator - The extra data to emit between chunks

toLower()

Converts the input to lower case.

toUpper()

Converts the input to upper case.

map(iterator)

Maps each stream chunk using a synchronous callback function.

Arguments

- iterator(chunk) - A synchronous function which returns the new chunk.

mapAsync(iterator)

Maps the stream chunks using an async callback. Note that iterator will be called in parallel as chunks are received, and the output order is determined by when the callbacks finish, not the input order.
Arguments

- iterator(chunk, callback) - The user-defined function which performs the mapping. The first callback parameter is an optional error, with the second parameter being the mapped value.

mapAsyncSeries(iterator)

Same as above, except the chunks are guaranteed to remain in order when emitted. Note that the iterator will still be called in parallel as chunks are received, but the results are buffered to ensure proper emission order.

Arguments

- iterator(chunk, callback) - Same as above.

filter(iterator)

Sends each chunk to a user-defined iterator which determines whether or not to send the chunk on.

Arguments

- iterator(chunk) - A synchronous function which returns true to keep the chunk

filterAsync(iterator)

Sends each chunk to a user-defined asynchronous function. Note that iterator will be called in parallel as chunks are received, and the output order is determined by when the callbacks finish, not the input order.
https://www.npmjs.org/package/ice-stream
CC-MAIN-2014-10
refinedweb
836
56.55