text | url | dump | lang | source
---|---|---|---|---|
Type: Posts; User: Black_Stork
[UPDATE 2]
I have almost completed the code to have a standing robot.
Because I have a problem with calibration and I do not know if I should add something to the PID, I post the code and ask for help.
...
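(For orientation while debugging the calibration, a bare-bones PID update of the kind used for balance control looks roughly like the sketch below; the gains and the angleFromIMU()/driveMotors() helpers are placeholders for illustration, not the poster's code.)

float Kp = 15.0, Ki = 1.5, Kd = 0.5;   // placeholder gains - these must be tuned on the robot
float setpoint = 0.0;                  // target pitch angle (upright), in degrees
float integral = 0.0, lastError = 0.0;
unsigned long lastTime = 0;

float angleFromIMU() { return 0.0; }   // placeholder: return the filtered pitch from the MPU 6050
void driveMotors(float pwm) { }        // placeholder: send a signed PWM command to both motors

void setup() {
  lastTime = millis();
}

void loop() {
  unsigned long now = millis();
  float dt = (now - lastTime) / 1000.0;                   // seconds since the last update
  if (dt <= 0) return;
  lastTime = now;

  float error = setpoint - angleFromIMU();
  integral = constrain(integral + error * dt, -50, 50);   // simple anti-windup clamp
  float derivative = (error - lastError) / dt;
  lastError = error;

  float output = Kp * error + Ki * integral + Kd * derivative;
  driveMotors(constrain(output, -255, 255));              // clamp to the PWM range
}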
[UPDATE]
Here is some code that I wrote for motor control and for the IMU (the IMU code was based on the code in Joop Brokking's video about the MPU 6050):
Motors.hpp
#include "Arduino.h"
enum action {BACK...
I need guidance with building (programming) my first balancing robot.
Tasks that I want to accomplish:
- program the robot so it can stand on its own.
- then add remote control over Bluetooth so it...
My last obstacle is to add a new baud rate to the library, which is 83 333 baud. At line 155 there is some kind of baud rate determination mechanism which I don't fully understand.
If I want to get 83...
Thank you very much tni, it worked!
As far as I understand, the custom name is only for this line: "FlexCAN myCAN(baudRate);" and all commands regarding sending and receiving messages are executed by...
I am trying to send messages from one Teensy to another. I know that the Teensy which sends messages works correctly, since my Arduino with a Seeed Studio CAN-BUS shield reads it perfectly fine.
Both...
The library description says that the allowed baud rate values are from 20k to 1M, but the supported ones are 50k, 100k, 125k, 250k, 500k and 1M. Because I still don't know how to make this work, I can't check if...
Thanks for quick response.
I'll be waiting
Good morning!
I have to make a transmitter that will send two different frames, depending on whether a button is pushed or not. I've managed to do that with two Arduino Unos
with Seeed Studio's CAN...
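(For reference, a button-dependent transmitter of the kind described here might look roughly like the sketch below on a Teensy with the classic FlexCAN library; the pin, IDs, payloads and 500k baud rate are placeholder choices, and the FlexCAN(baudRate)/CAN_message_t API is assumed from the "FlexCAN myCAN(baudRate);" line quoted above.)

#include <FlexCAN.h>

const int buttonPin = 2;        // placeholder input pin, wired to a pushbutton
FlexCAN myCAN(500000);          // placeholder baud rate

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);
  myCAN.begin();
}

void loop() {
  CAN_message_t msg;
  msg.ext = 0;                  // standard 11-bit identifier
  msg.timeout = 0;
  msg.len = 1;

  if (digitalRead(buttonPin) == LOW) {   // button pressed (active low with the pullup)
    msg.id = 0x100;                      // placeholder "pressed" frame
    msg.buf[0] = 0x01;
  } else {
    msg.id = 0x101;                      // placeholder "released" frame
    msg.buf[0] = 0x00;
  }

  myCAN.write(msg);
  delay(100);                            // roughly ten frames per second
}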
| https://forum.pjrc.com/search.php?s=c45582ea6dd70bf4e1a32b600fd7cab1&searchid=7278351 | CC-MAIN-2022-05 | en | refinedweb |
The Provider class in UnityEditor.VersionControl provides access to the version control API.
Note that the Version Control window is refreshed after every version control operation. This means that looping through multiple assets and doing an individual operation on each (i.e. Checkout) will be slower than passing an AssetList containing all of the assets and performing a version control operation on it once.
using System.Collections.Generic;
using UnityEditor;
using UnityEditor.VersionControl;
using UnityEngine;

public class EditorScript : MonoBehaviour
{
    [MenuItem("VC/Checkout")]
    public static void TestCheckout()
    {
        AssetList assets = new AssetList();
        assets.Add(new Asset("Assets/"));

        Task t = Provider.Checkout(assets, CheckoutMode.Both);
        t.Wait();
    }
}
Also note that Provider operations just execute the VCS commands, and do not automatically refresh the Version Control window. To update this window, use Task.SetCompletionAction.
using System.Collections.Generic;
using UnityEditor;
using UnityEditor.VersionControl;
using UnityEngine;

public class EditorScript : MonoBehaviour
{
    [MenuItem("VC/ChangeSetMove")]
    static void ChangeSetMove()
    {
        AssetList assets = new AssetList();
        assets.Add(Provider.GetAssetByPath("Assets/testMaterial.mat"));
        Task task = Provider.ChangeSetMove(assets, "ChangeSetID");
        task.SetCompletionAction(CompletionAction.UpdatePendingWindow);
    }
}
| https://docs.unity3d.com/ScriptReference/VersionControl.Provider.html | CC-MAIN-2022-05 | en | refinedweb |
Create account without having to provide credit card details
Is it possible to use this free tier without giving credit card information? In the event that this information is required, would you be charged without your assent?
You're always required to provide a credit card or a bank account in order to verify that you're not a bot (otherwise imagine the spam that they would receive).
Refer to the FAQ.
You won't be charged until you upgrade to a paid account. Google has limits on all of their free tiers to prevent users from going over thresholds.
Note that the Free Tier and Always Free offers have different requirements:
The first is a trial (with a time limit and credit limit) that will ask you to upgrade to a paid account once it's over.
The latter allows you to use cloud services without paying up to a certain usage limit, after which you will be billed at their normal rates. You're only eligible for Always Free if you already have a paid account.
| https://www.edureka.co/community/57277/create-account-without-having-provide-credit-card-details?show=57278 | CC-MAIN-2022-05 | en | refinedweb |
gatsby-plugin-fastify
Gatsby plugin for easy integration with Fastify.
About
gatsby-plugin-fastify gives you a way to integrate your Gatsby site with a Node.js server using Fastify. Use it to serve a standard Gatsby.js site normally - the plugin will take care of everything:
- Serving Gatsby Functions
- Serving static files
- Serving DSG/SSR Routes
- Gatsby 404 page
- Gatsby 500 page
- Gatsby redirects
- Client-only routes
- Serving the site with pathPrefix - set it up inside gatsby-config.js and the plugin will take care of it
- File compression, Etags, and more.
Installation
Install the plugin using npm or yarn
npm install gatsby-plugin-fastify fastify
and add it to your gatsby-config.js:

module.exports = {
  /* Site config */
  plugins: [
    /* Rest of the plugins */
    {
      resolve: `gatsby-plugin-fastify`,
      /* Default option value shown */
      options: {
        compresion: true, // When set to false, gzip/bz compression of assets is disabled.
      },
    },
  ],
};
Serving your site
Node and Fastify are great for building application-specific web servers but generally should not be used on the edge. Meaning, most folks will use a fully fledged web server (e.g. Nginx or Caddy) that handles traffic before passing it back to Node. This allows the edge web server to handle security, TLS/SSL, load balancing, etc. Then the Node server only worries about the application. A CDN (e.g. Fastly or Cloudflare) is also often used for performance and scalability.
Server CLI (expected)
This plugin implements a server that’s ready to go. To use it, configure a start (or whatever you prefer) command in your package.json:

{
  "scripts": {
    "start": "gserve"
  }
}
CLI Config
Server
  -p, --port      Port to run the server on   [number] [default: "8080"]
  -h, --host      Host to run the server on   [string] [default: "127.0.0.1"]
  -o, --open      Open the browser            [boolean] [default: false]

Options:
      --help      Show help                   [boolean]
      --version   Show version number         [boolean]
  -l, --logLevel  set logging level
                  [string] [choices: "trace", "debug", "info", "warn", "error", "fatal"] [default: "info"]
All settings may be changed via environment variables prefixed with GATSBY_SERVER_ and the flag name.

# For example:
export GATSBY_SERVER_PORT=3000
export GATSBY_SERVER_ADDRESS=0.0.0.0
Logging
By default only basic info is logged along with warnings or errors. By setting the logging level to debug you’ll also enable Fastify’s default request logging, which is usually enabled for the info level.
Gatsby Fastify Plugin (advanced)
This plugin also implements a Fastify plugin for serving Gatsby. This may be imported via:
import { serveGatsby } from "gatsby-plugin-fastify/plugins/gatsby";
For an example of how to use this, reference the server implementation file at src/serve.ts.
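As a rough sketch of the shape such a custom server takes (hedged: the options object passed to serveGatsby is left empty because its exact fields are defined in src/serve.ts, and the port/host values and object-style listen() call are assumptions):

import Fastify from "fastify";
import { serveGatsby } from "gatsby-plugin-fastify/plugins/gatsby";

async function main() {
  const fastify = Fastify({ logger: true });

  // Register the Gatsby-serving plugin; see src/serve.ts for the options it actually expects.
  await fastify.register(serveGatsby, {});

  // Placeholder port/host values.
  await fastify.listen({ port: 8080, host: "127.0.0.1" });
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});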
Gatsby Feature Fastify Plugins (expert)
Finally, each of the Gatsby features (functions, static files, redirects, client-only routes, and 404 handling) is implemented in its own plugin. These may be imported as well for use in a custom server implementation.
import { handle404 } from "gatsby-plugin-fastify/plugins/404"; import { handle500 } from "gatsby-plugin-fastify/plugins/500"; import { handleClientOnlyRoutes } from "gatsby-plugin-fastify/plugins/clientRoutes"; import { handleFunctions } from "gatsby-plugin-fastify/plugins/functions"; import { handleRedirects } from "gatsby-plugin-fastify/plugins/redirects"; import { handleStatic } from "gatsby-plugin-fastify/plugins/static"; import { handleServerRoutes } from "gatsby-plugin-fastify/plugins/serverRoutes";
For an example of how to use these, see the serveGatsby implementation file at src/plugins/gatsby.ts.
Gatsby Functions
Gatsby’s function docs suggest that the Request and Response objects for your Gatsby functions will be Express-like, and Gatsby core provides types for these.
THIS IS NOT TRUE FOR THIS PLUGIN
Because we’re not using Express or Gatsby’s own cloud offering, functions will need to use Fastify’s own Request and Reply API.
If you’d like to use Fastify with an Express-like API, there are plugins for Fastify to do this; see their docs on middleware. You’ll need to use the exports provided in this package to write your own server implementation and add the correct plugins to support this.
TypeScript
import type { FastifyRequest, FastifyReply } from "fastify";

export default function handler(req: FastifyRequest, res: FastifyReply) {
  res.send(`I am TYPESCRIPT`);
}
| https://www.gatsbyjs.com/plugins/gatsby-plugin-fastify/ | CC-MAIN-2022-05 | en | refinedweb |
It’s been a while since I looked at background audio/video in a UWP app – perhaps long enough ago that I was still talking about Windows 8 and I was working in HTML/JS at the time I wrote this post;
Windows 8 Metro style simple music app example
and made the screencast videos that went along with it.
Here in 2016 on the UWP with build 1607 it’s a joy to find that the various changes around the background model described here;
Background activity with the Single Process Model
have found their way into the world of background media as described on MSDN over here;
Play Media in the Background
and I wanted to try out the very basics of this for myself armed with some of what I’d referenced in this post;
Windows 10 1607, UWP, Single Process Execution and Lifecycle Changes
and so I made a simple, blank app and I added a video into the Assets folder;
and then got rid of the MainPage.xaml/.cs files and concentrated purely on my App class which I tried to keep as simple as I could. I’m not sure that I have it entirely right just yet but I wound up with;
namespace VideoPlayerBackground
{
    using System;
    using Windows.ApplicationModel;
    using Windows.ApplicationModel.Activation;
    using Windows.Media;
    using Windows.Media.Core;
    using Windows.Media.Playback;
    using Windows.UI.Xaml;
    using Windows.UI.Xaml.Controls;

    sealed partial class App : Application
    {
        public App()
        {
            this.InitializeComponent();
            this.EnteredBackground += OnEnteredBackground;
            this.LeavingBackground += OnLeavingBackground;
        }
        void OnLeavingBackground(object sender, LeavingBackgroundEventArgs e)
        {
            this.CreateUI();
            this.isForeground = true;
        }
        void OnEnteredBackground(object sender, EnteredBackgroundEventArgs e)
        {
            this.DestroyUI();
            this.isForeground = false;
        }
        protected override void OnLaunched(LaunchActivatedEventArgs e)
        {
            if ((e.PreviousExecutionState != ApplicationExecutionState.Running) &&
                (e.PreviousExecutionState != ApplicationExecutionState.Suspended))
            {
                this.CreateMediaPlayer();
                this.CreateUI();
                this.mediaPlayer.Play();
                Window.Current.Activate();
            }
        }
        void CreateMediaPlayer()
        {
            this.mediaPlayer = new MediaPlayer()
            {
                Source = MediaSource.CreateFromUri(new Uri("ms-appx:///Assets/video.mp4"))
            };
            this.mediaPlayer.SystemMediaTransportControls.IsEnabled = true;
            this.mediaPlayer.SystemMediaTransportControls.AutoRepeatMode = MediaPlaybackAutoRepeatMode.Track;
        }
        void CreateUI()
        {
            this.mediaPlayerElement = new MediaPlayerElement();
            this.mediaPlayerElement.AreTransportControlsEnabled = true;
            this.mediaPlayerElement.SetMediaPlayer(this.mediaPlayer);
            Window.Current.Content = this.mediaPlayerElement;
        }
        void DestroyUI()
        {
            this.mediaPlayerElement.SetMediaPlayer(null);
            this.mediaPlayerElement = null;
            Window.Current.Content = null;
        }
        MediaPlayerElement mediaPlayerElement;
        MediaPlayer mediaPlayer;
        bool isForeground;
    }
}
and what surprised me is the simplicity of it – I have a MediaPlayer which I set playing (in this example) from the start. When the app is in the foreground I associate the MediaPlayer with a MediaPlayerElement (not a MediaElement) which I’ve simply parented in the window, and when the app moves into the background I clear the contents of the window and get rid of the MediaPlayerElement but I keep the MediaPlayer around.
It’s worth saying that this works with the system transport controls so that I can hit the pause/play buttons on my keyboard as expected.
It seems so much nicer than what I had to do back in the days of early iterations of the platform that I’m wondering whether I’m missing some additional pieces that make it more complex than I’ve got it here? Answers on a postcard please!
| https://mtaulty.com/2016/10/16/windows-10-1607-uwp-and-background-media/ | CC-MAIN-2022-05 | en | refinedweb |
Exporter used to export the spreadsheet view and supported chart views to a CSV file.
#include <vtkSMCSVExporterProxy.h>
Exporter used to export the spreadsheet view and supported chart views to a CSV file.
vtkSMCSVExporterProxy is used to export certain views to a CSV file. Currently, we support vtkSpreadSheetView and vtkPVXYChartView (which includes Bar/Line/Quartile/Parallel Coordinates views).
Definition at line 33 of file vtkSMCSVExporterProxy.h.
Definition at line 37 of file vtkSMCSVExporterProxy.h.
Exports the view.
Implements vtkSMExporterProxy.
Returns if the view can be exported.
The default implementation returns true if the view is a render view.
Implements vtkSMExporterProxy.
| https://kitware.github.io/paraview-docs/v5.9.0/cxx/classvtkSMCSVExporterProxy.html | CC-MAIN-2022-05 | en | refinedweb |
Secret-key encryption, also referred to as symmetric encryption, is designed to work on large amounts of data. As such, symmetric encryption code works on streams of data as opposed to arrays of bytes. When you wrap a stream of regular data inside a specialized encryption stream called a CryptoStream, data is encrypted on the fly as it is placed into the stream. The same is true of decryption; data is decrypted on the fly as it is read from the stream.
As mentioned earlier, in symmetric encryption the key used to encrypt the data is the same key that is used to decrypt the data. As a result, the safety of the key is paramount. If someone were to obtain your key, not only could he decrypt your private data, but he could encrypt his own data as if he were you.
Also, remember that to properly encrypt blocks of data using symmetric encryption, you need an Initialization Vector (IV) to allow the encryption algorithm to encrypt blocks with partial data from previous blocks to reduce the predictability of output.
The code in Listing 15.1 shows the use of symmetric encryption and decryption to encrypt a message into a binary file on disk and then use another CryptoStream to read from the encrypted file.
using System;
using System.IO;
using System.Security;
using System.Security.Cryptography;
using System.Collections.Generic;
using System.Text;

namespace SymmetricEncryption
{
    class Program
    {
        static void Main(string[] args)
        {
            RijndaelManaged rmCrypto = new RijndaelManaged();

            // these keys are completely artificial. In a real-world scenario,
            // your key and IV will be far less obvious :)
            // (placeholder values added here so the listing compiles; the book's
            // original hard-coded values are not shown in this excerpt)
            byte[] key = new byte[32]
            {
                 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
                17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32
            };
            byte[] IV = new byte[16]
            {
                 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16
            };

            string clearMessage =
                "This string will be encrypted symmetrically and decrypted " +
                "using the same key.";

            // encrypt the message into a binary file on disk
            FileStream fs = new FileStream("encrypted.dat", FileMode.Create);
            CryptoStream cs = new CryptoStream(
                fs, rmCrypto.CreateEncryptor(key, IV), CryptoStreamMode.Write);
            cs.Write(System.Text.ASCIIEncoding.ASCII.GetBytes(clearMessage),
                0, clearMessage.Length);
            cs.Close();
            fs.Close();

            // open the encrypted file using a different stream to show
            // the symmetric decryption.
            FileStream fs2 = new FileStream("encrypted.dat", FileMode.Open);
            CryptoStream cs2 = new CryptoStream(
                fs2, rmCrypto.CreateDecryptor(key, IV), CryptoStreamMode.Read);
            byte[] decryptedData = new byte[fs2.Length];
            cs2.Read(decryptedData, 0, (int)fs2.Length);
            cs2.Close();
            fs2.Close();

            Console.WriteLine("Decrypted Message:\n{0}",
                System.Text.ASCIIEncoding.ASCII.GetString(decryptedData));
            Console.ReadLine();
        }
    }
}
| https://flylib.com/books/en/1.237.1.93/1/ | CC-MAIN-2022-05 | en | refinedweb |
NAME
SYNOPSIS
DESCRIPTION
LIBRARY API VERSIONING
MANAGING LIBRARY BEHAVIOR
DEBUGGING AND ERROR HANDLING
EXAMPLE
BUGS
ACKNOWLEDGEMENTS
SEE ALSO
libpmemblk - persistent memory resident array of blocks
#include <libpmemblk.h>
cc ... -lpmemblk

const char *pmemblk_check_versionU(
	unsigned major_required,
	unsigned minor_required);
const wchar_t *pmemblk_check_versionW(
	unsigned major_required,
	unsigned minor_required);
void pmemblk_set_funcs( void *(*malloc_func)(size_t size), void (*free_func)(void *ptr), void *(*realloc_func)(void *ptr, size_t size), char *(*strdup_func)(const char *s));
const char *pmemblk_errormsgU(void); const wchar_t *pmemblk_errormsgW(void);
A description of other libpmemblk functions can be found on the following manual pages. A block memory pool is created with the pmemblk_createU()/pmemblk_createW() function described in pmemblk_create(3). The other libpmemblk functions operate on the resulting block memory pool using the opaque handle, of type PMEMblkpool*, that is returned by pmemblk_createU()/pmemblk_createW() or pmemblk_openU()/pmemblk_openW().

The pmemblk_check_versionU()/pmemblk_check_versionW() function is used to determine whether the installed libpmemblk supports the version of the library API required by an application. The easiest way to do this is for the application to supply the compile-time version information, supplied by defines in <libpmemblk.h>, like this:

reason = pmemblk_check_versionU(PMEMBLK_MAJOR_VERSION, PMEMBLK_MINOR_VERSION);
if (reason != NULL) {
	/* version check failed, reason string tells you why */
}

If the version check performed by pmemblk_check_versionU()/pmemblk_check_versionW() is successful, the return value is NULL. Otherwise the return value is a static string describing the reason for failing the version check. The string returned by pmemblk_check_versionU()/pmemblk_check_versionW() must not be modified or freed.

If an error is detected during a call to a libpmemblk function, the application may retrieve an error message describing the reason for the failure from pmemblk_errormsgU()/pmemblk_errormsgW(), as described above.

For example, creating a block pool and falling back to opening an existing one:

pbp = pmemblk_createU(path, ELEMENT_SIZE, POOL_SIZE, 0666);
if (pbp == NULL)
	pbp = pmemblk_openU(path, ELEMENT_SIZE);
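A hedged sketch that fleshes the fragment above out into a complete program (the path, element size, pool size and block number are arbitrary placeholder values; error handling is minimal):

#include <stdio.h>
#include <string.h>
#include <libpmemblk.h>

#define ELEMENT_SIZE 1024                      /* placeholder block size */
#define POOL_SIZE    (32 * 1024 * 1024)        /* placeholder pool size */

int main(void)
{
	const char *path = "C:\\pmem\\example.blk";    /* placeholder pool file */
	char buf[ELEMENT_SIZE];

	/* create the pool, or fall back to opening an existing one */
	PMEMblkpool *pbp = pmemblk_createU(path, ELEMENT_SIZE, POOL_SIZE, 0666);
	if (pbp == NULL)
		pbp = pmemblk_openU(path, ELEMENT_SIZE);
	if (pbp == NULL) {
		fprintf(stderr, "create/open failed: %s\n", pmemblk_errormsgU());
		return 1;
	}

	/* write one block, then read it back */
	memset(buf, 0, sizeof(buf));
	strcpy(buf, "hello, persistent block");
	if (pmemblk_write(pbp, buf, 0) < 0)
		fprintf(stderr, "write failed: %s\n", pmemblk_errormsgU());
	if (pmemblk_read(pbp, buf, 0) < 0)
		fprintf(stderr, "read failed: %s\n", pmemblk_errormsgU());

	pmemblk_close(pbp);
	return 0;
}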
Unlike libpmemobj(7), data replication is not supported in libpmemblk. Thus, specifying replica sections in pool set files is not allowed.
libpmemblk builds on the persistent memory programming model recommended by the SNIA NVM Programming Technical Work Group:
msync(2), dlclose(3), pmem_is_pmem(3), pmem_persist(3), strerror(3), libpmem(7), libpmemlog(7), libpmemobj(7) and <https://pmem.io>
The contents of this web site and the associated GitHub repositories are BSD-licensed open source.
| https://pmem.io/pmdk/manpages/windows/v1.10/libpmemblk/libpmemblk.7/ | CC-MAIN-2022-05 | en | refinedweb |
I recently used Locust, a load testing tool that lets you write intuitive-looking Python code to load test your web applications. I did not follow Locust’s install guide and instead just tried a ‘pip install locustio’. I ended up running into some issues that were not easy to Google about. So I thought I would document the problems I faced along with their solutions here.
Getting setup with Locust on Windows
If you have not already tried installing Locust, follow this short and handy guide. It will help you avoid the problems I faced.
1. Use Python 2.7.x where x >=4. I upgraded my Python to 2.7.11.
2.
pip install pyzmq
3.
pip install locustio
4. Test your installation by opening up a command prompt and typing locust --help. You should see no errors or warnings – only the usage and help should be printed out on your console.
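Once the install checks out, a first locustfile can be as small as the sketch below (written against the 0.x-era HttpLocust/TaskSet API of the locustio package discussed in this post; the host and endpoint are placeholders). Run it with locust -f locustfile.py and open the web UI on port 8089.

from locust import HttpLocust, TaskSet, task


class UserTasks(TaskSet):
    @task
    def index(self):
        # Placeholder endpoint; replace with the pages/APIs you actually want to hit.
        self.client.get("/")


class WebsiteUser(HttpLocust):
    task_set = UserTasks
    host = "http://localhost:8000"  # placeholder host under test
    min_wait = 1000                 # milliseconds between tasks
    max_wait = 3000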
Locust install issues and solutions
When I installed Locust for the first time, I missed steps 1 and 2 in the section above. So I ran into a couple of errors.
1. ImportError: DLL load failed
from gevent.hub import get_hub, iwait, wait, PYPY
File “c:\python27\lib\site-packages\gevent\hub.py”, line 11, in
from greenlet import greenlet, getcurrent, GreenletExit
ImportError: DLL load failed: The specified procedure could not be found.
I got this error because I had Python 2.7.2 (python –version) and Locust needs at least Python 2.7.4. To solve this issue, upgrade your Python version. I ended up installing Python 2.7.11.
2. UserWarning: WARNING: Using pure Python socket RPC implementation instead of zmq
c:\python27\lib\site-packages\locust\rpc\__init__.py:6: UserWarning: WARNING: Using pure Python socket RPC implementation instead of zmq.
I got this warning when running the command ‘locust –help’ to test my setup. The warning comes with a helpful recommendation to install pyzmq. I installed pyzmq (
pip install pyzmq) and the error went away.
3. pip install locustio gives error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat).
I got the below error when installing locust using pip install locustio:
building ‘gevent.corecext’ extension
error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat).
I tried installing “gevent” alone using pip install gevent, but got the same error.
After a bit of searching, I installed “gevent” from the unofficial Windows Binaries for Python Extension Packages.
Download the whl file as per your OS combination. I downloaded the gevent-1.1.1-cp27-cp27m-win32.whl file.
Open a command prompt in the directory where you have downloaded the whl file and run the below command:
pip install gevent-1.1.1-cp27-cp27m-win32.whl
After that I was able to install locust successfully.
My thoughts on Locust as of May, 2016
A random collection of my thoughts having explored Locust for a little bit.
1. This tool is worth exploring deeper. This is the first load testing tool that I found intuitive.
2. Locust lets me reuse the API checks that I anyway write as part of testing a web application
3. I liked Locust’s documentation. I found the documentation very, ummm, Pythonic! Clearly they have put in effort into making the documentation useful.
4. I got a useful, working prototype for a real world problem in less than a weekend
5. I don’t know how powerful their http client (the one you use to make requests) is … so we may trip up at login for some clients.
6. I hated the way they do not have an automatic ‘end’ to any test – but that is a minor complaint
7. With respect to the resources (memory, CPU) on the client machine, locust swarms scale so much better than Qxf2’s map-reduce solution (think 25:1)
8. There is a limit of 1024 locusts per swarm that maps to the maximum number of files that can be open on Windows. But their documentation warns you about this beforehand. You can increase this number, if needed, on your OS.
9. Their reporting is not persistent or stored
Qxf2 will be exploring this tool over the coming months. I want to try more complex login scenarios, nested tasks and distributed swarms.
27 thoughts on “Setup Locust (Python/load testing) on Windows”
I just confirmed my system type is x64. I believe that everything that I installed was downloaded as x64. I can’t understand why I’m receiving this error.
C:\Users\jovan\Downloads>python -m pip install gevent-1.5a2-cp38-cp38-win_amd64.whl
ERROR: gevent-1.5a2-cp38-cp38-win_amd64.whl is not a supported wheel on this platform.
I already downloaded visual studio and visual c++ build tools with visual studio.
but I’m still receiving this error:
Microsoft Visual C++ 14.0 is required. Get it with “Microsoft Visual C++ Build Tools”:
Jovan,
Can you try installing the previous version of gevent and see if you are able to install?
gevent-1.4.0-cp37-cp37m-win_amd64.whl
Or you can also try the below steps:
1. Create a new python 3 environment and activate it
2. Download the latest binary for gevent from the unofficial windows Binaries.
3. python -m pip install gevent-1.5a2-cp38-cp38-win_amd64.whl
Hope this helps !
But doing this… what Python version will I be working with?
Right now, I’m able to work with Python 2.7 and locust 0.14.5,
but I can feel locust is not working fine for me.
I think that locust executes an auto-update to this latest version when installed…
I don’t think that installing Python 3 will work better than it’s working now for me.
Hi, can you try using a virtual environment and specifically try with Python 3 (it’s been tested and works with that)? Also, you can try pip install locust==0.14.5 to install a specific version.
| https://qxf2.com/blog/setup-locust-python-windows/ | CC-MAIN-2022-05 | en | refinedweb |
Import
The import statement is used to import functions and other definitions from another module. In the simplest case, you just write

import Data.Maybe

to import the named module (in this case Data.Maybe).
However, in more complicated cases, the module can be imported qualified, with or without hiding, and with or without renaming. Getting all of this straight in your head is quite tricky, so here is a table (lifted directly from the language reference manual) that roughly summarises the various possibilities:
Suppose that module Mod exports three functions named x, y and z. In that case:
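The summary table itself did not survive extraction, so here is a reconstruction of the usual possibilities (each comment lists what the import brings into scope), assuming a module Mod that exports x, y and z:

import Mod                          -- x, y, z, Mod.x, Mod.y, Mod.z
import Mod ()                       -- nothing (instances only)
import Mod (x, y)                   -- x, y, Mod.x, Mod.y
import qualified Mod                -- Mod.x, Mod.y, Mod.z
import qualified Mod (x, y)         -- Mod.x, Mod.y
import Mod hiding (x)               -- y, z, Mod.y, Mod.z
import qualified Mod hiding (x)     -- Mod.y, Mod.z
import Mod as Foo                   -- x, y, z, Foo.x, Foo.y, Foo.z
import Mod as Foo (x, y)            -- x, y, Foo.x, Foo.y
import qualified Mod as Foo         -- Foo.x, Foo.y, Foo.z
import qualified Mod as Foo (x, y)  -- Foo.x, Foo.y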
Note also that,.
| http://www.haskell.org/haskellwiki/Import | crawl-001 | en | refinedweb |
Sven Schönherr ([email protected])
We do not want to impose very strict coding rules on the developers. What is most important is to follow the CGAL naming scheme described in the next section. However, there are some programming conventions (Section 2.2) that should be adhered to, rules for the code format (Section 2.3), and a mandatory heading for each source file (Section 2.4).
All types in the kernel concept are functor types. We distinguish the following four categories:
In addition, for each functor the kernel traits class has a member function that returns an instance of this functor. The name of this function should be the (uncapitalized) name of the functor followed by the suffix _object. For example, the function that returns an instance of the Less_xy_2 functor is called less_xy_2_object.
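A minimal illustration of that convention (schematic code for the example names above, not an actual CGAL kernel):

// The functor type provided by the traits class...
struct Less_xy_2 {
    template <class Point_2>
    bool operator()(const Point_2& p, const Point_2& q) const {
        // lexicographic comparison on x, then y (illustrative only)
        return (p.x() < q.x()) || (p.x() == q.x() && p.y() < q.y());
    }
};

// ...and the traits class exposing it together with the matching *_object() member function.
class Example_kernel_traits {
public:
    typedef ::Less_xy_2 Less_xy_2;
    Less_xy_2 less_xy_2_object() const { return Less_xy_2(); }
};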
The first list of items are meant as rules, i.e., you should follow them.
The template parameter is a concept and should follow the concept naming scheme outlined in the previous section. As a general rule, the typedef should identify the template parameter with a type of the same name that follows the naming convention of types. For example:

template < class GeometricTraits_2 >
class Something {
public:
    typedef GeometricTraits_2 Geometric_traits_2;
};

For one-word template arguments, the template parameter name should be followed by an underscore. (Note that using a preceding underscore is not allowed according to the C++ standard; all such names are reserved.)

template < class Arg_ >
class Something {
public:
    typedef Arg_ Arg;
};
#ifndef CGAL_THIS_IS_AN_EXAMPLE_H
#define CGAL_THIS_IS_AN_EXAMPLE_H
...
#endif // CGAL_THIS_IS_AN_EXAMPLE_H
The following items can be seen as recommendations in contrast to the rules of previous paragraph.
Each CGAL source file must start with a heading that allows for an easy identification of the file. The file header contains:
For example and demo programs, the inclusion of the copyright notice is not necessary, as this will get in the way if the program is included in the documentation. However, these files should always contain the relative name of the file.
| http://www.cgal.org/Manual/3.3/doc_html/Developers_manual/Developers_manual/Chapter_code_format.html | crawl-001 | en | refinedweb |
The class Segment_Voronoi_diagram_vertex_base_2<Gt,SSTag> provides a model for the SegmentVoronoiDiagramVertexBase_2 concept which is the vertex base required by the SegmentVoronoiDiagramDataStructure_2 concept. The class Segment_Voronoi_diagram_vertex_base_2<Gt,SSTag> has two template arguments, the first being the geometric traits of the segment Voronoi diagram and should be a model of the concept SegmentVoronoiDiagramTraits_2. The second template argument indicates whether or not to use the simple storage site that does not support intersecting segments, or the full storage site, that supports intersecting segments. The possible values are CGAL::Tag_true and CGAL::Tag_false. CGAL::Tag_true indicates that the full storage site is to be used, whereas CGAL::Tag_false indicates that the simple storage site is to be used.
#include <CGAL/Segment_Voronoi_diagram_vertex_base_2.h>
| http://www.cgal.org/Manual/3.1/doc_html/cgal_manual/Segment_Voronoi_diagram_2_ref/Class_Segment_Voronoi_diagram_vertex_base_2.html | crawl-001 | en | refinedweb |
Despite a national economic slowdown and a 4.9 percent drop in overall
U.S. natural gas consumption in 2001,(1) more than 3,571 miles of pipeline
and a record 12.8 billion cubic feet per day (Bcf/d) of natural gas pipeline
capacity were added to the national pipeline network during 2002 (Table
1). The estimated cost was $4.4 billion.
Since late 2001, many of the market factors that helped fuel the large growth in new pipeline capacity additions have changed significantly. For instance, economic growth has slowed and many proposals to add new gas-fired electric generation capacity have been delayed or canceled. As a result, the need for new natural gas capacity has also weakened.
The deteriorating
financial condition of a number of energy companies over the past year
and the cessation of gas trading as a line of business by a number of
others have caused some pipeline company subsidiaries to re-evaluate their
commitment to specific pipeline expansion proposals. And, since a number
of expansion proposals have been predicated upon the building of new gas-fired
electric power plants, a number of which have been suspended, postponed,
or canceled, the cancellation of related pipeline laterals and even some
long-haul transmission projects might be anticipated also.
The need for new
import pipeline capacity from Canada also appears to have reached a temporary
plateau. Since 2000, only 207 million cubic feet per day (MMcf/d) of new
import pipeline capacity (Table 2) has been added (into the Western region)
and a proposed 163 MMcf/d import capacity expansion to the Western region
was recently canceled. Moreover, no additional new projects have been
proposed to increase import capacity from Canada into the Midwest or Central
regions through 2005. Import capacity development into the Northeast region,
however, is a potential exception to the trend. Six import expansion proposals
have been announced, with a combined increase of 2,109 MMcf/d of capacity
through 2005. For the most part, this new capacity is slated to support
new and proposed gas-fired power plants in the Boston and New York metropolitan
areas.
Overview/Trends
Five major new natural gas pipeline systems were completed and placed
in operation during 2002 (Figure 2). They were: Gulfstream Pipeline,
1,130 MMcf/d-560 miles, which carries natural gas under the Gulf of Mexico
from gas-processing facilities located on the gulf coasts of the States
of Mississippi and Alabama to west central Florida; North Baja Pipeline,
500 MMcf/d-80 miles (in U.S.), which exports gas to electric power plants
located in Baja California, Mexico;
(Illinois) hub and the growing market of northern Illinois and southern
Wisconsin. Completion of these five pipelines accounted for 22 percent
of all new natural gas pipeline capacity installed in the United States
in 2002 and 34 percent of the total new gas pipeline mileage.
A number of major short-haul,
though large-capacity, pipeline laterals
were constructed and placed in operation in 2002 (Table 2, Figure
1). Most of these pipeline segments were built to connect existing pipeline
systems to new gas-fired electric power generation plants. Twelve such
lines, totaling 303 miles, accounted for 3,280 MMcf/d, or 26 percent,
of the total new natural gas pipeline capacity added to the network in
2002. An undetermined number of smaller pipeline laterals that were constructed
to supply new gas-fired power plants were also placed in service in 2002.
However, their interconnections with the existing natural gas pipeline
system were nearby and the cost of their construction fell below the blanket
certificate threshold ($7.5 million)(4) for needing FERC or other regulatory
approval for construction. Such projects are normally carried out under
blanket certificate authority.(5)
Major transfers of pipeline assets occurred in 2002 as the financial
problems of many parent companies of natural gas pipeline companies deepened.
Several of them found it necessary to sell natural gas pipeline assets
that they had purchased over the past decade as part of efforts to build
a national or regional transportation network in support of trading operations
(Table 3).(7) For example, The Williams Companies, Inc., sold its Kern River
Transmission System (a key transporter of Wyoming natural gas to California)
to MidAmerican Energy Holdings, Inc. The Williams Companies also sold
its Williams Gas Pipeline Central Company (a major regional pipeline system
with operations in Kansas, Missouri, Oklahoma) to Southern Star Central
Corporation, and its Cove Point LNG facilities and pipeline to Dominion
Resources, Inc. Financial difficulties also forced Dynegy Inc. to sell
the Northern Natural Gas Pipeline (which it had acquired from bankrupt
Enron Corporation in 2001) to MidAmerican Energy Holdings Company.
Growth in the National Network
At the close of 2002, the
85 companies that make up the U.S. interstate
natural gas mainline transportation network operated about 212,000 miles
of pipeline and had the capability to deliver more than 133 Bcf/d of gas
(Table 3).(8) This represented a 2-percent increase in mileage from the
2001 level and an 11-percent increase in interstate pipeline capacity.
Compared with 2001, the installation of new natural gas pipeline capacity
was up 39 percent, with construction expenditures up 161 percent (Table
1). In part, this sizeable expansion reflected the quick industry response
to the energy crises in the Western region in late 2000 and early 2001,
and the growing demand for additional natural gas service in the Southeast
region. In 2002, more new natural gas pipeline capacity was added to the
Western regional network than had been installed during any one year of
the previous decade, as the region's interstate pipeline companies increased
their capabilities to deliver gas to California. In addition, the two
major California intrastate pipelines increased their capability to receive
gas from the interstate system and to deliver that gas to their respective
service territories.
Interstate natural
gas pipeline capacity into California has increased by 10 percent since
2000, much of it added in 2002 (Figure 3, Table 2). Among the 2002 projects
contributing to this growth were a 207 MMcf/d expansion of the PG&E
Gas Transmission-Northwest pipeline between California and Canada and
a 230 MMcf/d expansion of the El Paso Natural Gas Company's South System
in New Mexico and Arizona (Table 2).
Gas pipeline capacity from Wyoming's Powder River Basin and other areas
in the State increased by 19 percent between 2000 and 2002 (Figure 3).
One of the most important expansion projects completed during 2002 in
the Central region was the 324 MMcf/d expansion of the Trailblazer Pipeline
system (from 522 MMcf/d previously). With the completion of this project
(Table 2), gas transportation between northeastern Colorado and interconnections
with major interstate pipelines in eastern Nebraska(9)
increased significantly. The addition of this new capacity has provided an outlet
for increased gas production flowing from the several major new gas gathering
systems and laterals built within the coalbed methane producing basins of
Wyoming in the past several years.(10) Despite this capacity increase, many
market analysts believe that even more interstate pipeline capacity is
required to utilize completely the productive capacity of the producing
fields in the Rocky Mountain area.(11)
The largest amount of interregional transport capacity remains with the
13 interstate pipeline systems transporting gas from the Southwest region
to the Southeast region, 22,001 MMcf/d, while the second largest is on eight
interstate pipeline systems operating between the Central region and the
Midwest region, 15,187 MMcf/d (Table 4). The sizable growth in the latter
- it was 11,728 MMcf/d in 1998 - reflects the large amount of new pipeline
capacity from Canada added over the past several years, as represented by
the new Alliance Pipeline and the several expansions of the Northern Border
system between Montana and Illinois.
Completion of the
El Paso Natural Gas Company's Line 2000 project, which entailed the conversion
of an oil pipeline to replace a major portion of its vintage South System,
was not originally slated to include an expansion of capacity. But as
gas demand in Arizona and southern California grew during the 2000-2001
energy shortfall, the project was modified to include an upgrading of
several gas compressor stations along the route, with an increase in capacity
of 230 MMcf/d.
Addressing the same demand for increased interstate gas pipeline capacity
into California, the Transwestern Pipeline Company also improved its system
capacity between New Mexico and California with an increase of 120 MMcf/d.
For similar reasons, PG&E Gas Transmission-Northwest increased its
system capability by 207 MMcf/d (of Canadian import capacity) into Northern
California, while the Northwest Pipeline Company increased its regional
capacity by 160 MMcf/d in 2002, and plans to add another 443 MMcf/d in
2003.
Within California, the two major intrastate pipeline companies, Southern
California Gas (SoCal) and Pacific Gas and Electric (PG&E), upgraded
their respective mainline systems to improve their takeoff capacity from
the expanded interstate system. In addition, they upgraded their interconnections
with California gas production fields and built several new laterals to
supply the new gas-fired power plants that were brought online in the
State in 2002. However, as energy demand stabilized in the region, especially
within the State, the original intended market for several pipeline projects
changed.
For instance, the Questar Southern Trails Pipeline (Figure 2), which was
originally planned as a transporter of natural gas to customers within
California, has redirected its deliveries to gas-fired power plants within
Arizona instead. El Paso Natural Gas Company now also directs a sizable
portion of its South System capacity to several new gas-fired power plants
near Phoenix, Arizona, and to the North Baja Pipeline (Figure 2).
Reflecting a shift in short-term gas needs in the region, the 21 gas pipeline
projects that have been proposed for 2003 through 2005 are now less focused
on California and more on the northwest part of the region and the States
of Arizona and Nevada. Indeed, only about 30 percent of the proposed capacity
would directly impact California, compared with 70 percent in 2002.
Of the 21 proposed 2003-2005 projects, 8 are pipeline expansions or new
laterals that would improve gas transportation services (1,463 MMcf/d)
in the Northwest part of the region in the States of Washington, Oregon,
or Idaho, while another 8 projects (3,956 MMcf/d) would, for the most
part, improve service to Nevada and Arizona customers and provide additional
service to exporting pipelines such as the North Baja Pipeline. The North
Baja Pipeline (500 MMcf/d) runs from an interconnection with El Paso Natural
Gas Company at the Arizona/California border south to the California/Mexico
border, where it delivers natural gas to its Mexican counterpart for shipment
to several gas-fired power plants located in Baja California, Mexico.
El Paso Natural Gas Company has a two-phase project planned for 2004 and
2005 that would add 330 MMcf/d to its South System to support growing
gas demand in Arizona and for the North Baja system.
As gas pipeline capacity (service) demand in the Western region continues
to expand, the need for underground storage facilities to support this
growth also is being addressed. In California, the proposed expansion
of the Wild Goose storage facility in 2004 includes the building of a
700 MMcf/d, 25-mile lateral, to an interconnection with Pacific Gas and
Electric Company's mainline transmission system. In Arizona, two new storage
facilities, targeting shippers to the Arizona/California market, have
been proposed that will necessitate the building of 331 miles of pipeline
with 1,700 MMcf/d of capacity. Overall, more than 2,690 MMcf/d of proposed
new pipeline capacity is related to development of storage infrastructure
in the Western region between 2003 and 2005.
Southwest and Gulf of Mexico Developments
Only a relatively small amount of new natural gas pipeline capacity (882
MMcf/d) was installed in the Southwest region (including the Gulf of Mexico)
in 2002. In fact, the region accounted for only 7 percent of the total
new gas pipeline capacity installed in the Lower 48 States in 2002. Comparatively,
in 2000 and 2001, the percentage was 20 and 23 percent, respectively (Figure
5). Moreover, Gulf of Mexico pipeline development (two projects) represented
75 percent of new capacity addition in the region in 2002, while onshore
regional expansions (four projects) accounted for only 25 percent of new
pipeline capacity.(16)
The only major interstate pipeline in the Southwest region to expand its
regional exit capacity in 2002 was the Transcontinental Gas PipeLine Company,
with the completion of its 230 MMcf/d Sundance expansion extending from
Louisiana to Virginia. Otherwise, there has not been any significant increase
in pipeline capacity on the other major interstate pipeline systems, such
as Texas Eastern Transmission Corporation, Tennessee Gas Pipeline Company,
Trunkline Gas Company, and ANR Pipeline Company, who provide the Midwest
region with access to Gulf Coast production, in a number of years. Indeed,
for the past five years, competition from Canadian natural gas imports
into the Midwest has eliminated the need for any serious proposals for
new pipeline capacity within the transportation corridor between the Southwest/Gulf
of Mexico production and the Midwest.
Growing demand for natural gas in the Northeast region and the adjoining
Southeast region, especially along the route of the Transcontinental Gas
PipeLine system through the Atlantic Coast States, has supported the yearly
expansion of the system over the past 10 years - with 430 MMcf/d added
since 2000 alone. A major factor in these annual expansions has been that
a number of new gas-fired power plants have been built along its route,
which it now supplies. But, overall demand in the region has increased
in other sectors as well, notably in industrial use, which has also supported
these capacity increases. Other pipeline systems in the region have also
benefited from this region-wide growth in new gas-fired electric power
plant development. For instance, the Southern Natural Gas has added more
than 180 MMcf/d (a 7-percent increase) to its regional capacity since
2000.
Through 2005, 15 gas pipeline projects, representing 5,560 MMcf/d of additional
regional capacity, have been proposed for the region and the offshore
Gulf of Mexico. However, only 1,660 MMcf/d, or 30 percent, of that total
represents proposed onshore pipeline capacity additions, all in 2003.
In fact, to date, no additional onshore interstate pipeline expansion
projects have been proposed beyond 2003. Nevertheless, the potential increase
in new gas pipeline capacity in the Southwest region rises significantly
in 2003 compared with 2002 (Table 1), bolstered by several large-scale
offshore projects, although it does drop off again in 2004-2005.
Growth in Import/Export
Pipeline Capacity
The removal of gas tariffs between the United States and Canada in 1996
and between the United States and Mexico in 1998, under the North American
Free Trade Agreement (NAFTA) of 1994, helped bring about major growth in
new pipeline capacity from Canada to the United States and from the United
States to Mexico (Figures 6 and 7). In the latter case, new industrial
gas users initially were the underpinning for installation of the new
capacity, but in recent years new gas-fired power plant development along
the northern border area of Mexico has also supported the expansion. From
1990 to 2002, U.S. natural gas pipeline import capacity grew by 128 percent
while U.S. export capacity grew by more than 300 percent (Table 4).
Besides the impact of NAFTA, relaxation of gas regulations and the creation
of the Comision Reguladora de Energia (CRE) in Mexico since 1995 have
stimulated the expanded development of local gas distribution companies
in the country and their relationship with U.S. pipeline exporters and
marketers.
During 2002, 624 MMcf/d in additional export capacity to Mexico was installed
at a cost of more than $148 million, which does not include the cost of
related facilities installed in Mexico itself (Table 2). Another 729 MMcf/d
has been approved by FERC for installation in 2003 (Figure 6) although,
to date, no further expansion projects have been announced, or applied
for, that would increase pipeline export capacity in 2004 or beyond.
The large expansion of import pipeline capacity from Canada has been supported
by two simultaneous, complementary situations, which were further supported
by incentives contained in NAFTA. First, growing demand for natural gas
in the U.S. Western, Midwest, and Northeast regions outstripped the capability
of U.S. production in the Southwest to meet new demand levels. This situation
stimulated exploration and development by Canadian energy producers who
believed that they could respond to the U.S. natural gas shortfall successfully,
especially under NAFTA.
Initially, during
the mid-1990s, available gas productive capacity in existing Canadian
gas production areas was directed toward expansions of existing U.S. natural
gas pipeline systems such as PG&E Transmission-Northwest Company (formerly,
Pacific Gas Transmission Company), Great Lakes Gas Transmission Limited,
and Viking Gas Transmission Company. Several new importing pipelines,
such as the Iroquois Gas Transmission system and the Empire Pipeline,
were also built to reach growing U.S. markets. Subsequently, as the surplus
existing production sources were tapped for export, new Canadian natural
gas reserves were discovered and their production directed mostly to markets
in the United States. This new Canadian productive capacity was found
primarily in two areas, in northeast British Columbia and offshore eastern
Canada.
The Alliance Pipeline was built in 2000 to tap the major natural gas discoveries
within northern British Columbia, Canada, in areas such as the Ladyfern
field. Even with NAFTA, however, to be economically successful, the Alliance
system was designed to be able to transport "wet" natural gas
to Midwest markets, where the liquids could be extracted locally (at the
Aux Sable gas processing plant in Joliet, Illinois) and sold in the more
profitable U.S. market (Figure 2). To date, the Alliance system has been
operating at close to capacity. The new supplies of northeastern British
Columbia have also helped supplement gas supplies directed to markets
in the Western region as well.
The Sable Island
natural gas discoveries of the mid-1990s on the Scotian Shelf, offshore
eastern Canada, led to the development of the 440 MMcf/d Maritimes and
Northeast Pipeline system between Nova Scotia and Massachusetts in 1999.
This system now serves markets in Maine, New Hampshire, and northern Massachusetts.
In 2003, its reach will extend to the Boston metropolitan area with the
completion of a 350 MMcf/d extension. A doubling of its capacity has been
proposed (filed with FERC) for 2004.
And, despite the recent U.S. economic slowdown and the relatively mild
weather conditions of the past several years, gas import volumes from
Canada continue to increase. While monthly gas import levels did drop
briefly during the 2001-2002 heating season (November through March),
most likely because of mild temperatures in the U.S. Midwest and Northeast,
import volumes resumed their steady month-to-month growth pattern during
the latter part of 2002. Preliminary data indicate that in 2002 gas imports
from Canada increased about 2.7 percent over 2001 levels.(17)
Nevertheless, there are also some signs that development of new import
capacity from Canada may be reaching a temporary zenith into several key
U.S. markets. For instance, little or no additional import capacity has
been built into the Midwest since 2000 and none has been proposed - at
least through 2004. A similar situation exists in the Western region.
While the PG&E Gas Transmission-Northwest completed a 207 MMcf/d expansion
in 2002, it canceled a planned 163 MMcf/d 2003 expansion owing to the
loss of the supporting customer, a gas-fired electric power plant that
was to have been located in Oregon. The only other scheduled additional
import capacity into the region is also based on supplying another gas-fired
electric power plant. It should be noted, however, that the Sumas Energy
2 power plant, now scheduled for completion in 2004, and which would require
up to 140 MMcf/d of pipeline capacity, had been postponed previously and
could be again.
Currently, the only U.S. region that has any new natural gas pipeline
import capacity from Canada proposed for 2003 (495 MMcf/d) is the Northeast
(Figure 7). Moreover, beyond 2003, much of the proposed capacity into
the region is supply, rather than demand, driven. The Maritimes &
Northeast's 2004 expansion proposal is predicated largely upon the new
production that is scheduled to be coming from the Scotian Shelf in eastern
offshore Canada by then.(18) Currently, this pipeline expansion proposal
also is geared toward supplying potential gas-fired power plants that
are planned for the corridor between Nova Scotia and Massachusetts over
the next five years. Yet, several of the other proposals to expand capacity
to import Canadian gas into the Northeast are also predicated upon the
same potential customer source.
While none of the
other proposals is planned to extend into the Maritimes & Northeast
pipeline's service territory, it is quite possible that, if current gas
demand projections throughout the Northeast do not live up to expectations,
the sponsors of the various active proposals will either have to scale
back capacity expansion levels, cancel their project completely, or compete
with other projects by expanding the scope of their service territories
beyond the current boundaries.
Outlook
As of March 2003, 112 natural gas pipeline expansion projects, in various
stages of development, have been proposed for the Lower 48 States for
2003 through 2005. For 2003, 61 projects are planned; for 2004, 36; for
2005, only 15 to date.
Of the 61 projects planned for 2003, however, only 42 have been approved
by regulatory authorities, as of March 2003. These approved projects represent
a combined capacity level of 9,845 MMcf/d, or a little more than three-fourths
of the total capacity additions proposed for 2003. While 13 additional
projects have been filed with regulators and are awaiting a disposition,
6 projects (1,323 MMcf/d) have yet to be filed and remain in the planning
stage. Some of these latter projects, which could be completed relatively
quickly once approval is granted, have a chance of being completed in
2003. However, it is more likely that a substantial portion of the 1,323
MMcf/d represented by these 6 projects will not be installed in 2003,
being either canceled, placed on indefinite hold or, more likely, postponed
until 2004.
However, if 2003 is typical, some unannounced, quick turn-around projects
that do not fall under FERC's jurisdiction will be completed during the
year, compensating for some of the proposed capacity additions that will
be deferred or dropped. The effect of this process in the past has been
that about half of the potential loss of proposed capacity from cancellations
and other reasons is made up during the year by completion of such projects.
Thus, the original estimate of 12,937 MMcf/d (Table 1) of new capacity
for 2003 could be eventually adjusted to about 12,000 MMcf/d.
Because of the current downturn of the national economy, it is also possible
that the final figure for 2003 could fall to around 11,000 MMcf/d, as
some approved projects are likely to be downsized or postponed as markets
adjust to changes in local economies and/or some proposed gas-fired electric
power plants are themselves postponed or canceled.(19)
For 2004-2005, as of March 2003, only 14 of the 51 proposed projects have
been approved, and 13 more have been filed with regulatory authorities
for review. The remaining 37 projects are still in the concept or planning
stage.(20) Indeed, project specifications for more than 8,800 MMcf/d of
the 18,939 MMcf/d capacity additions proposed to date have yet to be finalized.
At this stage it is impossible to predict what portion of the current
19 Bcf/d estimate will eventually be developed, especially since many
new pipelines or expansion projects are not filed until 18 months in advance
of their proposed in-service date.
Between 1991 and 2001, more than 60 Bcf/d of capacity (through pipeline
expansions and building of new pipelines) was incorporated into the Lower
48 interstate gas transmission network, an average of 6 Bcf/d per year.
With the exception of the years (1994-1996) when the gas pipeline industry
adopted a wait-and-see approach to expansion following the interstate
pipeline restructuring mandated by FERC Order 636 in 1992, annual additions
to interstate natural gas pipeline transmission capacity exceeded 5 Bcf/d
in the 1990s. In 2001 capacity additions approached the 10 Bcf/d level.
In 2002 they reached 12.8 Bcf/d (Table 1).
With the current economic slowdown in the United States, however, it is
unlikely that this pace will be maintained in the short term. Despite
the fact that a 12.9 Bcf/d increase in gas pipeline capacity has been
proposed for 2003 and another 18.9 Bcf/d for 2004-2005, it is likely that
only about 70 to 80 percent of the proposed capacity additions will eventually
be completed.
1. Energy Information Administration, Natural Gas Annual 2001, DOE/EIA-0131(01)
(Washington, DC, February 2003), Table 1. Through October 2002, year-to-date
natural gas consumption in the United States continued to decline, relative
to 2001, falling from 17.3 Tcf through October 2001 to 16.6 Tcf in 2002.
Energy Information Administration, Natural Gas Monthly, DOE/EIA-0130(2003/01)
(Washington, DC, January 2003), Table 3.
2. All known inter- and intrastate gas pipeline projects (including large
gathering headers and delivery laterals) that have added, or may add,
substantial new capacity to the national pipeline grid are included in
this review.
3. Energy Information Administration, Natural Gas Monthly, "Status
of Natural Gas Pipeline System Capacity Entering the 2000-2001 Heating
Season," DOE/EIA-0130(2000/10) (Washington, DC, October 2000), Figure
SR4.
4. The monetary limit for blanket certificate coverage is adjusted annually
by the Federal Energy Regulatory Commission (FERC) to account for inflation.
In 2001, the limit was $7.5 million.
5. Blanket certification can be used for relatively small projects. A
blanket certificate approves a series of similar actions in one authorization.
For instance, construction of small additions to a pipeline may be authorized
by a blanket certificate, provided the total cost does not exceed some
threshold level and other eligibility criteria are met.
6. The recent downgrading of credit ratings of a number of pipeline parent
companies or sponsors may have been a factor as well. The dramatic fall
in the stock prices of many energy companies and the lowering of their
bond-ratings by the S&P and other bond-rating services have made it
harder for some pipeline companies to raise the capital for pipeline expansions.
7. Natural Gas Intelligence Press, The Weekly Gas Market Newsletter,
"Williams Ensures Liquidity with $3.4B Deals, Sacrificing Heavy-Duty
Assets" (August 5, 2002).
8. Interstate pipeline companies file an annual capacity report (18 CFR
§284.12) with the Federal Energy Regulatory Commission (FERC) that
reports their daily system capacity based on a design estimate of how
much their system can deliver for current shippers on a systemwide peak
day. Total capacity on these systems usually represents the sum of capacity
at all delivery points, including interconnections with other interstate
pipelines.
9. These interconnections provide Trailblazer Pipeline shippers with access
to Midwest markets.
10. These gathering system laterals connect the expanding Wyoming production
areas with the pipeline systems that serve the area: Kern River Gas Transmission
Pipeline, Questar Pipeline, Northwest Pipeline, Wyoming Interstate Pipeline,
and Colorado Interstate Gas Pipeline. The latter two systems interconnect
with the Trailblazer system, while the others serve markets in Utah and
the Western region.
11. Energy Information Administration, Natural Gas Productive Capacity
for the Lower 48 States 1985-2003, web site oil_gas/natural_gas/analysis_publications/ngcap2003/ngcap2003.html
(March 31, 2003).
12. The Colorado Interstate Gas Company expanded (130 MMcf/d) its system
in the Raton Basin of southeast Colorado in 2001 and 2002 to accommodate
increased coalbed methane production in the area.
13. They are: the Colorado Interstate Gas Company's Cheyenne Plains Pipeline
(500 MMcf/d-2005), Kinder Morgan Advantage Pipeline (330 MMcf/d-2004),
Northern Border Pipeline's Bison Project (250 MMcf/d-2004), and Williston
Basin Interstate Pipeline's Grasslands Project (180 MMcf/d-2004-05).
14. They are: Kern River Gas Transmission's System Expansion (900 MMcf/d-2003),
Northwest Pipeline Company's Rockies expansion (175 MMcf/d-2003), TransColorado
Gas Transmission's Window Rock Lateral (150 MMcf/d-2005), and Kinder Morgan
Interstate's Silver Canyon Pipeline (750 MMcf/d-2006).
15. Florida Gas Transmission Company's previous major increase in system
capacity (by 65 percent) occurred in 1995.
16. In contrast, in 2001, enough new capacity was installed to increase
overall gas pipeline capacity (excluding gathering systems) in the Gulf
alone by more than 5 percent.
17. Energy Information Administration, Natural Gas Monthly, Table
5, "U.S. Natural Gas Imports, by Country, 1996-2002," DOE/EIA-0130(2003/02)
(Washington, DC, February 2003).
18. On February 27, 2003, EnCana Corp. notified the Canadian National
Energy Board (NEB) that it was delaying plans to develop its proposed
400 MMcf/d Deep Panuke project located under the Scotian Shelf. This action
could temporarily limit the availability of gas supplies to any potential
Maritimes & Northeast pipeline expansion.
19. In fact, on February 5, 2003, Transcontinental Gas PipeLine Company
announced that it had requested permission from FERC to downsize its Momentum
expansion project from 359,000 Dth/d to 323,000 Dth/d because of changes
in the needs of several electric generation customers in the Southeast.
20. It should also be kept in mind that estimated costs are nonexistent
or unannounced for a number of pipeline expansion proposals that have
yet to be filed with regulatory authorities. Consequently, the cost totals
provided for 2003 and beyond in Table 1 should be considered low.
|
http://www.eia.doe.gov/pub/oil_gas/natural_gas/feature_articles/2003/Pipenet03/pipenet03.html
|
crawl-001
|
en
|
refinedweb
|
Orange is data mining software that is especially well suited to research and teaching. It is developed in Python and C++, combining the best of both: the interpretability and rapid development of Python with the efficiency of C++.
SNNS is a comprehensive package for building artificial neural networks.
OrangeSNNS.py allows SNNS to be used to create, train, and simulate neural networks as learners inside Orange. It is still under development, but close to its final state. There are two versions available.
import orange, orangeSNNS
data = orange.ExampleTable("bupa.tab")
learner = orangeSNNS.SNNSLearner()
classifier = learner(data)
for example in data:
    print example,
    print "->", classifier(example)
The network is then used to classify the training set showing the
predicted class for each example.
import orange, orangeSNNS
# We set the path where SNNS binaries can be found, this
# is not necessary if they are in system path.
orangeSNNS.pathSNNS = "~/SNNSv4.2/tools/bin/i686-pc-linux-gnu/"
data = orange.ExampleTable("bupa.tab")
learner = orangeSNNS.SNNSLearner(name = 'SNNS neural network',
                                 hiddenLayers = [2,3],
                                 MSE = 0,
                                 cycles = 500,
                                 algorithm = "Std_Backpropagation",
                                 learningParams = ["0.2"])
classifier = learner(data)
for example in data:
    print example,
    print "->", classifier(example)
There will not be a 2.0 version, as this is just a quick solution. Instead, a completely integrated module with new code should be written, since SNNS is NOT free software and cannot be adapted directly; further effort on this module is therefore not worthwhile.
A good option for integrating neural networks into Orange would probably be to program an interface to FANN.
There is also a Summer of Code 2006 project (Neural Nets in SciPy) that may be worth keeping an eye on.
|
http://www.ax5.com/antonio/orangesnns/index_html/view
|
crawl-001
|
en
|
refinedweb
|
I originally posted about this in povray.general, and no one was able to
give an explanation. Gilles Tran was very helpful in finding a simplified
scene that still produces the artifact in question. He also pointed out to
me that using the -UL flag causes the problem to go away. Here is the
simplified scene description:
#include "colors.inc"
camera {location <4.5, 5.5, -7> look_at <0.5, 2.5, 0>}
light_source {<10, 50, -16> color 2}
box {<-2, 0, 5>,<4, 6, 5.01> texture {finish {reflection 1}}}
#declare D =difference {
box {<-1, 0, -1>,<1, 0.6, 1>}
box {<-2, 0, 0>,<2, 1, 1> rotate x*-45 translate -z}
}
union {
cylinder {<-0.7, -2, 0.7>,<-0.7, 2, 0.7>,0.3 pigment {Clear}}
box {<-2, 0, 0>,<2, 1, 1> rotate x*-45 translate -z translate y*0.6
pigment {Clear}}
difference {
cylinder {<0.7, 0.3,-0.7>,<-0.7, 0.3,-0.7>,0.1}
intersection {
union {
cylinder {<0.7, -3, -0.7>,<0.7, 1, -0.7>,0.3}
cylinder {<-0.7, -3, 0.7>,<-0.7, 1, 0.7>,0.3}
}
union {
object {D translate y*-0.001}
object {D translate y*0.001}
}
}
translate y*0.6
}
sphere {<0, 0.3, -0.4>, 0.35}
sphere {<-0.4, 0.3, 0.4>, 0.35}
texture {pigment {Yellow}}
translate y*5
}
I have tried to simplify the scene even more, but very small changes in the
description change the way the bug is manifested. For example, simply
moving the spheres outside the union makes the problem go away.
The problem can be seen in the shadow that the cylinder projects onto the
sphere.
Another interesting factor, if only the affected part of the scene is
rendered:
+sc0.388106 +sr0.091858 +ec0.491393 +er0.217119
then the bug is not visible, even with light buffers enabled. It only shows
up if the entire scene is rendered. Here are a couple examples of how this
bug is manifesting itself in some real scenes I have rendered.
The shadows in question are on the blue and yellow balloons in the top
block. The same thing can also be seen in this other image from a different
perspective:
I have run this on various versions of POVWin 3.5-3.6 on several computers
including an AMD 1700+ 768MB, an 800 P3 512MB, a 2.6 Celeron 384MB, and a
3.2 P4 512MB. All had Windows XP Home or Pro installed on them. If you need
more information, please let me know.
page-dan( **at!** )hentschels.com
|
http://news.povray.org/povray.bugreports/thread/%3C4148e313%241%40news.povray.org%3E/
|
crawl-001
|
en
|
refinedweb
|
3.2 The standard type hierarchy.
- ‘None’
- This type has a single value. There is a single object with this value. This object is accessed through the built-in name
None. It is used to signify the absence of a value in many situations, e.g., it is returned from functions that don't explicitly return anything. Its truth value is false.
- ‘NotImplemented’
- This type has a single value. There is a single object with this value. This object is accessed through the built-in name
NotImplemented. Numeric methods and rich comparison methods may return this value if they do not implement the operation for the operands provided. Its truth value is true.
- ‘Ellipsis’
- This type has a single value. There is a single object with this value. This object is accessed through the built-in name
Ellipsis. It is used to indicate the presence of the ‘...’ syntax in a slice. Its truth value is true.
- ‘Numbers’
- These are created by numeric literals and returned as results by arithmetic operators and arithmetic built-in functions. Numeric objects are immutable; once created their value never changes. Python distinguishes between integers, floating point numbers, and complex numbers:
- ‘Integers’
- These represent elements from the mathematical set of integers (positive and negative). There are three types of integers:
- ‘Plain integers’
- These represent numbers in the range -2147483648 through 2147483647. (The range may be larger on machines with a larger natural word size, but not smaller.) When the result of an operation would fall outside this range, the result is normally returned as a long integer (in some cases, the exception OverflowError is raised instead). For the purpose of shift and mask operations, integers are assumed to have a binary, 2's complement notation using 32 or more bits, and hiding no bits from the user (i.e., all 4294967296 different bit patterns correspond to different values).
- ‘Long integers’
- These represent numbers in an unlimited range, subject to available (virtual) memory only. For the purpose of shift and mask operations, a binary representation is assumed, and negative numbers are represented in a variant of 2's complement which gives the illusion of an infinite string of sign bits extending to the left.
- ‘Booleans’
- These represent the truth values False and True. The two objects representing the values False and True are the only Boolean objects. The Boolean type is a subtype of plain integers, and Boolean values behave like the values 0 and 1, respectively, in almost all contexts, the exception being that when converted to a string, the strings "False" or "True" are returned, respectively.
- ‘Floating point numbers’
- These represent machine-level double precision floating point numbers. You are at the mercy of the underlying machine architecture (and C or Java implementation) for the accepted range and handling of overflow. Python does not support single-precision floating point numbers; the savings in processor and memory usage that are usually the reason for using these are dwarfed by the overhead of using objects in Python, so there is no reason to complicate the language with two kinds of floating point numbers.
- ‘Complex numbers’
- These represent complex numbers as a pair of machine-level double-precision floating point numbers. The same caveats apply as for floating point numbers. The real and imaginary parts of a complex number
zcan be retrieved through the read-only attributes
z.realand
z.imag.
- ‘Sequences’
- These represent finite ordered sets indexed by non-negative numbers. The built-in function len() returns the number of items of a sequence. When the length of a sequence is n, the index set contains the numbers 0, 1, ..., n-1. Item i of sequence a is selected by a[i]. Sequences also support slicing: a[i:j] selects all items with index k such that i
<=k
<j. When used as an expression, a slice is a sequence of the same type. Sequences are distinguished according to their mutability:
- ‘Immutable sequences’
- An object of an immutable sequence type cannot change once it is created. (If the object contains references to other objects, these other objects may be mutable and may be changed; however, the collection of objects directly referenced by an immutable object cannot change.) The following types are immutable sequences:
- ‘Strings’
- The items of a string are characters. There is no separate character type; a character is represented by a string of one item. Characters represent (at least) 8-bit bytes. The built-in functions
chr()and
ord()convert between characters and nonnegative integers representing the byte values. The string data type is also used to represent arrays of bytes, e.g., to hold data read from a file. (On systems whose native character set is not ASCII, strings may use EBCDIC in their internal representation, provided the functions chr() and ord() implement a mapping between ASCII and EBCDIC, and string comparison preserves the ASCII order. Or perhaps someone can propose a better rule?)
- ‘Unicode’
- The items of a Unicode object are Unicode code units, each representing a Unicode ordinal. The built-in functions unichr() and ord() convert between code units and nonnegative integers representing the Unicode ordinals as defined in the Unicode Standard. Conversion from and to other encodings is possible through the Unicode method encode() and the built-in function unicode().
- ‘Tuples’
- The items of a tuple are arbitrary Python objects. Tuples of two or more items are formed by comma-separated lists of expressions. A tuple of one item (a 'singleton') can be formed by affixing a comma to an expression; an empty tuple can be formed by an empty pair of parentheses.
- ‘Mutable sequences’
- Mutable sequences can be changed after they are created. The subscription and slicing notations can be used as the target of assignment and
del(delete) statements. There is currently a single intrinsic mutable sequence type:
- ‘Lists’
- The items of a list are arbitrary Python objects. Lists are formed by placing a comma-separated list of expressions in square brackets. (Note that there are no special cases needed to form lists of length 0 or 1.)
- ‘Mappings’
- These represent finite sets of objects indexed by arbitrary index sets. The subscript notation
a[k]selects the item indexed by
kfrom the mapping
a; this can be used in expressions and as the target of assignments or
delstatements. The built-in function
len()returns the number of items in a mapping. There is currently a single intrinsic mapping type:
- ‘Dictionaries’
- These represent finite sets of objects indexed by nearly arbitrary values. The only types of values not acceptable as keys are values containing lists or dictionaries or other mutable types that are compared by value rather than by object identity, the reason being that the efficient implementation of dictionaries requires a key's hash value to remain constant. Numeric types used for keys obey the normal rules for numeric comparison: if two numbers compare equal (e.g., 1 and
1.0) then they can be used interchangeably to index the same dictionary entry (a short illustrative example is given at the end of this section). Dictionaries are mutable; they can be created by the
{...}notation (see section 5.2.6, "Dictionary Displays"). The extension modules ‘dbm’ , ‘gdbm’ , and ‘bsddb’ provide additional examples of mapping types.
- ‘Callable types’
- These are the types to which the function call operation (see section 5.3.4, "Calls") can be applied:
- ‘User-defined functions’
- A user-defined function object is created by a function definition (see section 7.6, "Function definitions"). It should be called with an argument list containing the same number of items as the function's formal parameter list. Special attributes: Most of the attributes labelled "Writable" check the type of the assigned value. (Changed in Python version 2.
- ‘User-defined methods’
- A user-defined method object combines a class, a class instance (or
None) and any callable object (normally a user-defined function). Special read-only attributes:
im_selfis the class instance object,
im_funcis the function object;
im_classis the class of
im_selffor bound methods or the class that asked for the method for unbound methods;
__doc__is the method's documentation (same as
im_func.__doc__);
__name__is the method name (same as
im_func.__name__);
__module__is the name of the module the method was defined in, or
Noneif unavailable. (Changed in Python version 2.2)attribute is
Noneand the method object is said to be unbound. When one is created by retrieving a user-defined function object from a class via one of its instances, its
im_selfattribute is the instance, and the method object is said to be bound. In either case, the new method's
im_classattribute is the class from which the retrieval takes place, and its
im_funcattribute is the original function object. When a user-defined method object is created by retrieving another method object from a class or instance, the behaviour is the same as for a function object, except that the
im_funcattribute of the new instance is not the original method object but its
im_funcattribute. When a user-defined method object is created by retrieving a class method object from a class or instance, its
im_selfattribute is the class itself (the same as the
im_classattribute), and its
im_funcattribute
Cis a class which contains a definition for a function
f(), and
xis an instance of
C, calling
x.f(1)is equivalent to calling
C.f(x, 1). When a user-defined method object is derived from a class method object, the "class instance" stored in
im_selfwill actually be the class itself, so that calling either
x.f(1)or
C.f(1)is equivalent to calling
f(C,1)where
f.
- ‘Generator functions’
- A function or method which uses the
yieldstatement (see section 6.8, "The
yieldstatement") is called a generator function. Such a function, when called, always returns an iterator object which can be used to execute the body of the function: calling the iterator's
next()method will cause the function to execute until it provides a value using the
yieldstatement. When the function executes a
returnstatement or falls off the end, a
StopIterationexception is raised and the iterator will have reached the end of the set of values to be returned.
- ‘Built-in functions’
- A built-in function object is a wrapper around a C function. Examples of built-in functions are
len()and
math.sin()(‘math’ is a standard built-in module). The number and type of the arguments are determined by the C function. Special read-only attributes:
__doc__is the function's documentation string, or
Noneif unavailable;
__name__is the function's name;
__self__is set to
None(but see the next item);
__module__is the name of the module the function was defined in or
Noneif unavailable.
- ‘Built-in methods’
- Built-in methods are really a different disguise of a built-in function, this time containing an object passed to the C function as an implicit extra argument. An example of a built-in method is alist.append(), assuming alist is a list object. In this case, the special read-only attribute __self__ is set to the object denoted by alist.
- ‘Class Types’
- Class types, or "new-style classes," are callable. These objects normally act as factories for new instances of themselves, but variations are possible for class types that override
__new__(). The arguments of the call are passed to
__new__()and, in the typical case, to
__init__()to initialize the new instance.
- ‘Classic Classes’
- Old-style (classic) class objects are created by class definitions and are described below. When a classic class is called, a new class instance is created and returned; this implies a call to the class's __init__() method if it has one, with any arguments passed on to that method.
- ‘Class instances’
- Class instances are described below. Class instances are callable only when the class has a
__call__()method;
x(args)is a shorthand for
x.__call__(args).
- ‘Modules’
- Modules are imported by the
importstatement (see section 6.12, "The
importstatement"). A module object has a namespace implemented by a dictionary object (this is the dictionary referenced by the func_globals attribute of functions defined in the module). Attribute references are translated to lookups in this dictionary, e.g.,
m.xis equivalent to
m.__dict__["x"]. A module object does not contain the code object used to initialize the module (since it isn't needed once the initialization is done). Attribute assignment updates the module's namespace dictionary, e.g., ‘m.x = 1’ is equivalent to ‘m.__dict__["x"] = 1’. Special read-only attribute:
__dict__is the module's namespace as a dictionary object. Predefined (writable) attributes:
__name__is the module's name;
__doc__is the module's documentation string, or
None if unavailable; __file__ is the pathname of the file from which the module was loaded, if it was loaded from a file. (The __file__ attribute is not present for C modules that are statically linked into the interpreter; for extension modules loaded dynamically from a shared library, it is the pathname of the shared library file.)
- ‘Classes’
- Class objects are created by class definitions (see section 7.7, "Class definitions"). A class has a namespace implemented by a dictionary object. Class attribute references are translated to lookups in this dictionary, e.g., ‘C.x’ is translated to ‘C._
Cor one of its base classes, it is transformed into an unbound user-defined method object whose
im_classattribute is
C. When it would yield a class method object, it is transformed into a bound user-defined method object whose
im_classand
im_selfattributes are both
C. When it would yield a static method object, it is transformed into the object wrapped by the static method object. See section 3.4.2.2.
- ‘Class instances’
- A class instance is created by calling a class object (see above). A class instance has a namespace implemented as a dictionary which is the first place in which attribute references are searched. When an attribute is not found there, and the instance's class has an attribute by that name, the search continues with the class attributes. If a class attribute is found that is a user-defined function object or an unbound user-defined method object whose associated class is the class of the instance for which the attribute reference was initiated (call it C) or one of its bases, it is transformed into a bound user-defined method object whose
im_classattribute is C and whose
im_selfattribute is the instance. Static method and class method objects are also transformed, as if they had been retrieved from class
C; see above under "Classes". See section 3.4.2.2.4, "Special method names." Special attributes:
__dict__is the attribute dictionary;
__class__is the instance's class.
- ‘Files’
- A file object represents an open file. File objects are created by the open() built-in function, and also by os.popen(), os.fdopen(), and the makefile() method of socket objects (and perhaps by other functions or methods provided by extension modules). The objects sys.stdin, sys.stdout and
sys.stderrare initialized to file objects corresponding to the interpreter's standard input, output and error streams. See the Python Library Reference Manual for complete documentation of file objects.
- ‘Internal types’
- A few types used internally by the interpreter are exposed to the user. Their definitions may change with future versions of the interpreter, but they are mentioned here for completeness.
- ‘Code objects’
- Code objects represent byte-compiled executable Python code, or bytecode. The difference between a code object and a function object is that the function object contains an explicit reference to the function's globals (the module in which it was defined), while a code object contains no context. Special read-only attributes: co_name gives the function name;
co_argcountis the number of positional arguments (including arguments with default values);
co_nlocalsis the number of local variables used by the function (including arguments);
co_varnamesis a tuple containing the names of the local variables (starting with the argument names);
co_cellvarsis a tuple containing the names of local variables that are referenced by nested functions;
co_freevarsis a tuple containing the names of free variables;
co_codeis a string representing the sequence of bytecode instructions;
co_constsis a tuple containing the literals used by the bytecode;
co_namesis a tuple containing the names used by the bytecode;
co_filenameis the filename from which the code was compiled;
co_firstlinenois the first line number of the function;
co_lnotabis a string encoding the mapping from byte code offsets to line numbers (for details see the source code of the interpreter);
co_stacksizeis the required stack size (including local variables);
co_flagsis an integer encoding a number of flags for the interpreter. The following flag bits are defined for
co_flags: bit
0x04is set if the function uses the ‘*arguments’ syntax to accept an arbitrary number of positional arguments; bit
0x08is set if the function uses the ‘**keywords’ syntax to accept arbitrary keyword arguments; bit
0x20is set if the function is a generator. Future feature declarations (‘from __future__ import division’) also use bits in
co_flagsto indicate whether a code object was compiled with a particular feature enabled: bit
0x2000is set if the function was compiled with future division enabled; bits
0x10and
0x1000were used in earlier versions of Python. Other bits in
co_flagsare reserved for internal use. If a code object represents a function, the first item in
co_constsis the documentation string of the function, or
Noneif undefined.
- ‘Frame objects’
- Frame objects represent execution frames. They may occur in traceback objects (see below). Special read-only attributes:
f_backis the previous stack frame (towards the caller), or
Noneif this is the bottom stack frame;
f_codeis the code object being executed in this frame;
f_localsis the dictionary used to look up local variables;
f_globalsis used for global variables;
f_builtinsis used for built-in (intrinsic) names;
f_restrictedis a flag indicating whether the function is executing in restricted execution mode;
f_lasti gives the precise instruction (this is an index into the bytecode string of the code object). Special writable attributes: f_trace, if not None, is a function called at the start of each source code line (this is used by the debugger); f_exc_type, f_exc_value and f_exc_traceback represent the last exception raised in the parent frame provided another exception was ever raised in the current frame (in all other cases they are None);
f_linenois the current line number of the frame--writing to this from within a trace function jumps to the given line (only for the bottom-most frame). A debugger can implement a Jump command (aka Set Next Statement) by writing to
f_lineno.
- ‘Traceback objects’
- Traceback objects represent a stack trace of an exception. A traceback object is created when an exception occurs (see section 7.4, "The try statement"). Special read-only attributes: tb_next is the next level in the stack trace (towards the frame where the exception occurred), or
Noneif there is no next level;
tb_framepoints to the execution frame of the current level;
tb_linenogives the line number where the exception occurred;
tb_lastiindicates the precise instruction. The line number and last instruction in the traceback may differ from the line number of its frame object if the exception occurred in a
trystatement with no matching except clause or with a finally clause.
- ‘Slice objects’
- Slice objects are used to represent slices when extended slice syntax is used, and are also created by the built-in slice() function. Special read-only attributes: start is the lower bound;
stopis the upper bound;
stepis the step value; each is
Noneif omitted. These attributes can have any type. Slice objects support one method:
indices(self, length)
- This method takes a single integer argument, length, and computes information about the extended slice that the slice object would describe if applied to a sequence of length items. It returns a tuple of three integers; respectively these are the start and stop indices and the step or stride length of the slice. Missing or out-of-bounds indices are handled in a manner consistent with regular slices. (Added in Python version 2.3)
- ‘Static method objects’
- Static method objects provide a way of defeating the transformation of function objects to method objects described above. A static method object is a wrapper around any other object, usually a user-defined method object. When a static method object is retrieved from a class or a class instance, the object actually returned is the wrapped object, which is not subject to any further transformation. Static method objects are not themselves callable, although the objects they wrap usually are. Static method objects are created by the built-in
staticmethod()constructor.
- ‘Class method objects’
- A class method object, like a static method object, is a wrapper around another object that alters the way in which that object is retrieved from classes and class instances. The behaviour of class method objects upon such retrieval is described above, under "User-defined methods". Class method objects are created by the built-in
classmethod()constructor.
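The following short snippet is not part of the reference text; it is an illustrative example added to demonstrate three of the behaviours described above: dictionary keys that compare equal indexing the same entry, generator functions raising StopIteration when exhausted, and the indices() method of slice objects.
# Illustrative example (not from the reference); the semantics shown are those described above.
d = {}
d[1] = 'one'
d[1.0] = 'still one'          # 1 and 1.0 compare equal, so they index the same entry
assert d[1] == 'still one' and len(d) == 1

def count_up_to(n):           # a generator function: calling it returns an iterator
    i = 0
    while i < n:
        yield i
        i += 1

gen = count_up_to(2)
assert next(gen) == 0
assert next(gen) == 1
try:
    next(gen)                 # the body falls off the end, so StopIteration is raised
except StopIteration:
    pass

s = slice(1, 100, 2)          # a slice object created with the built-in slice() function
assert s.indices(10) == (1, 10, 2)   # clipped to a sequence of length 10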
|
http://www.network-theory.co.uk/docs/pylang/standardtypehierarchy.html
|
crawl-001
|
en
|
refinedweb
|
A simple and fast read-only embedded key-value database
Project description
ConstDB is a very simple and fast read-only embedded key-value database. Keys consist of 64-bit integers or strings and values consist of arbitrary byte strings.
Sample
import constdb

with constdb.create('db_name') as db:
    db.add(-2, b'7564')
    db.add(3, b'23')
    db.add(-1, b'66')

with constdb.read('db_name') as db:
    assert db.get(-2) == b'7564'
    assert db.get(-1) == b'66'
    assert db.get(3) == b'23'
Documentation
ConstDB contains only two functions: create and read.
create(filename) allows you to create a new ConstDB database. It takes a filename and returns a ConstDBWriter. A ConstDBWriter has two methods:
- add(key, value): Adds a key-value pair to the database. The key must be a 64 bit integer or a string. The value must be a byte string.
- close(): Finalize and close the database.
read(filename) allows you to read an existing ConstDB database. It takes a filename and returns a ConstDBReader. A ConstDBReader has two methods:
- get(key): Get a value from the database. The key must be a 64 bit integer or a string. Returns the value if the key is in the database. Returns None if the key is not found.
- close(): Finalize and close the database.
Requirements
The only requirement for ConstDB is Python 3.
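As the documentation above notes, keys may be strings as well as 64-bit integers. The short sketch below is not part of the package description itself; the database name is arbitrary and only the documented create/read/add/get calls are used.
import constdb

# build a small database mixing string and integer keys
with constdb.create('mixed_keys_db') as db:
    db.add('greeting', b'hello')
    db.add(42, b'answer')

# read it back; get() returns None for keys that are not present
with constdb.read('mixed_keys_db') as db:
    assert db.get('greeting') == b'hello'
    assert db.get(42) == b'answer'
    assert db.get('missing') is None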
|
https://pypi.org/project/constdb/3.2.0/
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
Now that the hardware is ready, we can write our first sketch to test the Ethernet shield and the connection to the Web. Note that the pieces of code shown in this section are only the most important parts of the code, and you can find the complete code inside the GitHub repository of the book.
Start the Arduino sketch by including the following required libraries to use the shield:
#include <SPI.h>
#include <Ethernet.h>
Then, we ...
|
https://www.safaribooksonline.com/library/view/arduino-networking/9781783986866/ch01s02.html
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
This is really fun. I was asked to develop a prototype for a simple system, and my immediate choice was Django; this time, rather than using SQLAlchemy, I'm using everything that comes under the Django umbrella. In this prototype I added an Ajax-based auto-completion/suggestion search input form, just to give a small 'wow' effect to my customer, using a jQuery Ajax call and returning a JSON list of dictionaries.
The best part is actually the way you can extend a Django model manager to add your own specific methods; for example, I added a search method for the Profile model.
Once extended, you can simply call it:
searchObj = Profile.objects.search(request.GET['q']).order_by('prof_firstname')
At the same time, the search box lets the user search by profile first name, last name, or IC number using the same input box.
Here is how you can do it in your models (this is my Django profile model file).
Make sure you add the ProfileManager class to your models file and point the model's objects attribute at it.
# Load modules needed
import operator
from django.db import models
from django.db.models import Q
class ProfileManager(models.Manager):
    def search(self, search_terms):
        terms = [term.strip() for term in search_terms.split()]
        q_objects = []
        for term in terms:
            q_objects.append(Q(prof_firstname__icontains=term))
            q_objects.append(Q(prof_lastname__icontains=term))
            q_objects.append(Q(prof_ic__icontains=term))
        # Start with a bare QuerySet
        qs = self.get_query_set()
        # Use operator's or_ to string together all of your Q objects.
        return qs.filter(reduce(operator.or_, q_objects))
class Profile(models.Model):
    """ table profile """
    prof_ic = models.CharField(max_length = 12, primary_key = True) # IC may contain letters (e.g., military IDs)
    prof_firstname = models.CharField(max_length = 32)
    prof_lastname = models.CharField(max_length = 32)
    prof_dob = models.DateTimeField()
    prof_gender = models.CharField(max_length = 1)
    prof_race = models.CharField(max_length = 32)
    prof_address_1 = models.CharField(max_length = 128)
    prof_address_2 = models.CharField(max_length = 128)
    prof_city = models.CharField(max_length = 32)
    prof_state = models.CharField(max_length = 32)
    prof_country = models.CharField(max_length = 32)
    prof_postcode = models.CharField(max_length = 16)
    prof_telno = models.CharField(max_length = 16)
    prof_mobileno = models.CharField(max_length = 16)
    prof_email = models.EmailField(max_length = 128)
    prof_img = models.ImageField(max_length = 256, upload_to = 'profile')
    prof_sysdate = models.DateTimeField()

    objects = ProfileManager()

    class Meta:
        app_label = "pingat"
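For completeness, here is a rough sketch of the kind of view the jQuery Ajax call could talk to. It is not from the original post: the view name, the 'q' request parameter, and the returned fields are assumptions based on the description above; it simply reuses the search() method defined on the custom manager.
# views.py -- hypothetical autocomplete endpoint (not part of the original post)
import json
from django.http import HttpResponse
from pingat.models import Profile   # assumes the model lives in the 'pingat' app

def profile_autocomplete(request):
    # 'q' is the query string sent by the jQuery ajax call (assumed parameter name)
    query = request.GET.get('q', '')
    results = []
    if query:
        profiles = Profile.objects.search(query).order_by('prof_firstname')[:10]
        results = [{'ic': p.prof_ic,
                    'name': '%s %s' % (p.prof_firstname, p.prof_lastname)}
                   for p in profiles]
    # return a JSON list of dictionaries, as described above
    return HttpResponse(json.dumps(results), content_type='application/json')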
|
http://xjutsu.blogspot.com/2010/
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
In this tutorial you will learn how to write to file by line
To write to a file line by line in Java, you can use the newLine() method of the BufferedWriter class. The newLine() method writes a line separator, so that subsequent text starts on a new line.
Here is a simple example that demonstrates how to write to a file line by line. In this example the program creates (or appends to) a text file named "fileByLine.txt" and writes two lines of text to it. WriteToFileByLine.java
import java.io.*;

class WriteToFileByLine {
    public static void main(String args[]) {
        String pLine = "Previous Line text";
        String nLine = "New Line text";
        WriteToFileByLine wtfbl = new WriteToFileByLine();
        wtfbl.writeToFileByLine(pLine);
        wtfbl.writeToFileByLine(nLine);
    }

    public void writeToFileByLine(String str) {
        try {
            File file = new File("fileByLine.txt");
            // open in append mode so each call adds to the end of the file
            FileWriter fw = new FileWriter(file, true);
            BufferedWriter bw = new BufferedWriter(fw);
            bw.write(str);
            bw.newLine();
            bw.close();
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}
How to execute this example:
Compile the program from the command prompt:
javac WriteToFileByLine.java
After successful compilation, run it with:
java WriteToFileByLine
Output:
When you run this example, a text file is created at the specified path, containing the lines that the program writes to it.
|
http://roseindia.net/java/examples/io/writeToFileByLine.shtml
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Demonstrating write-combining
The following program, incrementMappedArrayWC.cu, demonstrates the use of separate write-combined, mapped, pinned memory to increment the elements of an array by one. This required changing
incrementArrayOnHost and
incrementArrayOnDevice to read from array
a and write to array
b. In this way, coherency issues are avoided and streaming performance should be achieved. The
cudaHostAllocWriteCombined flag was also added to the
cudaHostAlloc calls. We rely on the CUDA calls to the driver to issue the appropriate fence operation to ensure the writes become globally visible.
// incrementMappedArrayWC.cu
#include <stdio.h>
#include <assert.h>
#include <cuda.h>

// define the problem and block size
#define NUMBER_OF_ARRAY_ELEMENTS 100000
#define N_THREADS_PER_BLOCK 256

void incrementArrayOnHost(float *b, float *a, int N)
{
  int i;
  for (i=0; i < N; i++) b[i] = a[i]+1.f;
}

__global__ void incrementArrayOnDevice(float *b, float *a, int N)
{
  int idx = blockIdx.x*blockDim.x + threadIdx.x;
  if (idx < N) b[idx] = a[idx]+1.f;
}

void checkCUDAError(const char *msg)
{
  cudaError_t err = cudaGetLastError();
  if( cudaSuccess != err) {
    fprintf(stderr, "Cuda error: %s: %s.\n", msg, cudaGetErrorString( err) );
    exit(EXIT_FAILURE);
  }
}

int main(void)
{
  float *a_m, *b_m; // pointers to mapped host memory
  float *a_d, *b_d; // pointers to mapped device memory
  float *check_h;   // pointer to host memory used to check results
  int i, N = NUMBER_OF_ARRAY_ELEMENTS;
  size_t size = N*sizeof(float);
  cudaDeviceProp deviceProp;

#if CUDART_VERSION < 2020
#error "This CUDART version does not support mapped memory!\n"
#endif

  // Get properties and verify device 0 supports mapped memory
  cudaGetDeviceProperties(&deviceProp, 0);
  checkCUDAError("cudaGetDeviceProperties");
  if(!deviceProp.canMapHostMemory) {
    fprintf(stderr, "Device %d cannot map host memory!\n", 0);
    exit(EXIT_FAILURE);
  }

  // set the device flags for mapping host memory
  cudaSetDeviceFlags(cudaDeviceMapHost);
  checkCUDAError("cudaSetDeviceFlags");

  // allocate host mapped arrays
  int flags = cudaHostAllocMapped|cudaHostAllocWriteCombined;
  cudaHostAlloc((void **)&a_m, size, flags);
  cudaHostAlloc((void **)&b_m, size, flags);
  checkCUDAError("cudaHostAllocMapped");

  // Get the device pointers to memory mapped
  cudaHostGetDevicePointer((void **)&a_d, (void *)a_m, 0);
  cudaHostGetDevicePointer((void **)&b_d, (void *)b_m, 0);
  checkCUDAError("cudaHostGetDevicePointer");

  /* initialization of the mapped data. Since a_m is write-combined,
     it is not guaranteed to be initialized until a fence operation is
     called. In this case that should happen when the kernel is invoked
     on the GPU */
  for (i=0; i<N; i++) a_m[i] = (float)i;

  // do calculation on device:
  // Part 1 of 2. Compute execution configuration
  int blockSize = N_THREADS_PER_BLOCK;
  int nBlocks = N/blockSize + (N%blockSize > 0?1:0);

  // Part 2 of 2. Call incrementArrayOnDevice kernel
  incrementArrayOnDevice <<< nBlocks, blockSize >>> (b_d, a_d, N);
  checkCUDAError("incrementArrayOnDevice");

  // Note the allocation and call to incrementArrayOnHost occurs
  // asynchronously to the GPU
  check_h = (float *)malloc(size);
  incrementArrayOnHost(check_h, a_m,N);

  // Make certain that all threads are idle before proceeding
  cudaThreadSynchronize();
  checkCUDAError("cudaThreadSynchronize");

  // cudaThreadSynchronize() should have caused an sfence
  // to be issued, which will guarantee that all writes are done

  // check results. Note: the updated array is in b_m, not b_d
  for (i=0; i<N; i++) assert(check_h[i] == b_m[i]);

  // cleanup
  free(check_h);

  // free mapped memory (and device pointers)
  cudaFreeHost(a_m);
  cudaFreeHost(b_m);
}
Conclusion
CUDA 2.2 changes the data movement paradigm by providing APIs for mapped, transparent data transfers between the host and GPU(s). These APIs also allow the CUDA programmer to make data sharing between the host and graphics processor(s) more efficient by exploiting asynchronous operation, full-duplex PCIe data transfers, through the use of write combined memory, and by adding the ability for the programmer to share pinned memory with multiple GPUs.
Personally, I have used these APIs as a convenience when porting existing scientific codes onto the GPU because mapped memory allows me to keep the host and device data synchronized while I incrementally move as much of the calculation onto the GPU as possible. This allows me to verify my results after each change to ensure nothing has broken, which can be a real time and frustration saver when working with complex codes with many inter-dependencies. Additionally, I also use these APIs to increase efficiency by exploiting asynchronous host and multiple GPU calculations plus full-duplex PCIe transfers and other nice features of the CUDA 2.2 release.
I also see the new CUDA 2.2 APIs facilitating the development of entirely new classes of applications ranging from operating systems to real-time systems.
One example is the RAID research performed by scientists at the University of Alabama and Sandia National Laboratory that transformed CUDA-enabled GPUs into high-performance RAID accelerators that can calculate Reed-Solomon codes in real-time for high-throughput disk subsystems (see Accelerating Reed-Solomon Coding in RAID Systems with GPUs, by Matthew Curry, Lee Ward, Tony Skjellum, Ron Brightwell). From their abstract, "Performance results show that the GPU can outperform a modern CPU on this problem by an order of magnitude and also confirm that a GPU can be used to support a system with at least three parity disks with no performance penalty".
My guess is we will see a CUDA-enhanced Linux md (multiple device or software RAID) driver sometime in the near future. Imagine the freedom of not being locked into a proprietary RAID controller. If something breaks, just connect your RAID array to another Linux box to access the data. If that computer does not have an NVIDIA GPU then just use the standard Linux software md driver to access the data.
Don't forget that CUDA-enabled devices can accelerate and run multiple applications at the same time. An upcoming article demonstrating how to incorporate graphics and CUDA will exploit that capability. Until then, try running a separate graphics application while running one of your CUDA applications. I think you will be surprised at how well both applications will perform.
Rob Farber is a senior scientist at Pacific Northwest National Laboratory. He has worked in massively parallel computing at several national laboratories and as co-founder of several startups. He can be reached at [email protected].
|
http://www.drdobbs.com/web-development/cuda-supercomputing-for-the-masses-part/217500110?pgno=3
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
z3c.testing 1.0.0a3
High-level Testing Support
This package provides a collection of high-level test setups for unit and functional testing. In particular, it provides a testing layer that can use an existing, pre-populated database as a starting point, which speeds up the test setup phase for large testing data sets.
CHANGES
1.0.0a3 (2013-02-28)
- Nothing changed yet.
1.0.0a2 (2013-02-28)
- Migrate to the newest zope.interface version and remove the forked code in verify.py.
1.0.0a1 (2013-02-28)
- Dropped Python 2.4 and 2.5 support, added Python 3.3 support.
- Made functional test tools into the functional extra, so no zope.app packages are required.
0.3.2 (2010-08-23)
- Added some InterfaceBaseTest attributes to be able to write less code:
- iface: provide the interface here
- klass: provide the class here
- pos: provide the positional arguments here
- kws: provide the keyword arguments here
- Avoid deprecated zope.testing.doctest by using Python's doctest.
0.3.1 (2009-12-26)
- Removed install dependency on zope.app.security.
- Removed test dependency on zope.app.securitypolicy.
- Removed test dependency on zope.app.zcmlfiles.
0.3.0 (2009-02-01)
- Using zope.container instead of zope.app.container
- Using zope.site instead of zope.app.component
0.2.0 (2007-10-31)
- Fixed package data.
- Moved functional tests to tests.
- Removed deprecation warning.
0.1.1b1 (2007-06-21)
- Make z3c a namespace.
- Prevent ConnectionStateError in layer after appsetup is run in layer.
- Author: Zope Corporation and Contributors
- Keywords: zope3 testing layer zodb
|
https://pypi.python.org/pypi/z3c.testing/1.0.0a3
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
#include<iostream>
#include<string>
#include<fstream>
using namespace std;

void Load_dictionary(string Dictionary_E[],string Dictionary_F[],string Dictionary_G[]);//Load_dictionary declaration function
const int MaxSize= 100;// Maxsize for the array
/*void sort(string sortedDic[]);// Arraysort function*/

int main ()
{// main function
    string Dictionary_E [MaxSize];
    string Dictionary_F [MaxSize];
    string Dictionary_G [MaxSize];
    Load_dictionary(Dictionary_E,string Dictionary_F,string Dictionary_G);
    /*string sortedDic[MaxSize]=string Dictionary [MaxSize];
    sort(string sortedDic);*/
    cout<<" Dictionary File has been loaded... \n\n\n";
    cout<<" 1) Translate a sentence to French\n";
    cout<<" 2) Translate a sentence to German\n";
    cout<<" 3) Remove a set of words from dictionary\n";
    cout<<" 4) Add a new set of words to dictionary\n";
    cout<<" 5)Search and display a set of words\n";
    cout<<" 6) Display all words in dictionary in alphabetical order by the English word\n";
    cout<<" 7) Exit\n";
    return 0;
}// end of main function

void Load_dictionary(string Dictionary_E[],string Dictionary_F[],string Dictionary_G[])
{//Load_dictionary function for opening the dictionary file and for load the data in to array
    ifstream MyFile;// file initilization
    MyFile.open("Dic_File.txt");// opening dic_file
    string line;//this will contain the data read from the file
    if (MyFile.fail())
    {
        cout << "Can't open file!\n";
        exit(1);
    }
    else
    {
        while ( !MyFile.eof())//while the end of file is NOT reached
        {
            MyFile>>line;
            for ( int i=0; i<MaxSize ;i++)
            {
                Dictionary_G[i];
                for(int i=0; i<MaxSize ;i++)
                {
                    Dictionary_F[i];
                    for(int i=0; i<MaxSize ;i++)
                    {
                        Dictionary_E[i];
                    }
                }
            }
        }//end of while
    }//end of else
}//end of function

/*void sort(string sortedDic[])
{
    string loaded_dataE;
    for (int pass=0;*/
|
https://www.daniweb.com/programming/software-development/threads/418315/i-want-help
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
- Preface
- Index
Index
Aaccounts
admin 1
expiration 2
user 3
acknowledging
chassis 1
server 2
activate firmware 1
activating
adapters 1
BMC 2
fabric interconnects 3
I/O modules 4
UCS manager 5
adapter qualification
configuring 1
deleting 2
adapters
Cisco M81KR VIC 1 2
Cisco UCS 82598KR-CI 2
updating and activating 3
verifying overall status 4
virtualization 5
administration 1
aging time, Mac address table
about 1 2
configuring 1 2
alert, call home test 1
alerts, Call Home 1
all configuration 1
architectural simplification 1
authentication
primary 1
remote 2
autoconfiguration policy
about 1 2
Bbacking up
about 1
all-configuration 2
considerations 3
creating operations 4
deleting operations 5
modifying operations 6
running operations 7
types 8
user role 9
backup operations
creating 1
deleting 2
modifying 3
running 4
best effort system class 1 2
BMC
resetting 1
updating and activating 2
boot definitions
configuring 1
deleting 2
LAN boot 3
storage boot 4
virtual media boot 5
boot policies
about 1 2
configuring 2
deleting 3
LAN boot 4
storage boot 5
virtual media boot 6
bronze system class 1 2
bundle, firmware 1
burned in values 1 2
Ccall home
configuring 1
inventory messages, configuring 2
inventory messages, sending 3
policies, configuring 4
policies, deleting 5
policies, disabling 6
policies, enabling 7
profiles, configuring 8
profiles, deleting 9
sending test alert 10
smart call home, configuring 11
TAC-1 profile, configuring 12
Call Home
about 1
alerts 2
considerations 3
policies 4
profiles 5
severity levels 6
Smart Call Home 7
catalog, images 1
certificate
VN-Link in hardware 1 2
chassis
acknowledging 1
decommissioning 2
discovery policy 1 2 3
recommissioning 4
turning off locator LED 5
turning on locator LED 6
chassis discovery policy
about 1 2 3
discovery policies 1 2
chassis 1 2
chassis qualification
configuring 1
deleting 2
CIM XML
configuring 1
Cisco Discovery Protocol 1 2
Cisco M81KR VIC adapter
virtualization 1 2
Cisco UCS 82598KR-CI
virtualization 1
Cisco UCS CNA M71KR
virtualization 1
Cisco UCS Manager
about 1
impact of firmware upgrade 2
Cisco VIC adapter 1 2
Cisco VN-Link 1 2 3 4
cisco-av-pair 1
CiscoAVPair 1
clients, port profiles
deleting 1
cluster configuration
about 1
commands for object management 1
communication services
about 1
disabling 2
community, SNMP 1
component, firmware 1
configuration
backing up 1 2
erasing 2
import methods 3
importing 1 2
restoring 1 2
considerations
backup operations 1
Call Home 2
VN-Link in hardware 1 2
console, KVM 1
core file exporter
configuring 1
disabling 2
Core File Exporter
about 1
create 1
Ddatabase
backing up 1
restoring 2
decommissioning
chassis 1
server 2
default service profiles 1 2
delete 1
deletion tasks 1
disaster recovery 1 2
discovery policy
chassis 1 2 3
server 1 2
DNS servers
about 1
configuring 2
deleting 3
downgrading
firmware 1
prerequisites 2
download firmware 1
DVS 1 2
dynamic vNIC connection policies
configuring 1
deleting 2
dynamic vNIC connection policy
about 1 2
Eendpoints
direct firmware upgrade 1 2
service profile upgrade 2
enter 1
Ethernet
Fibre Channel over 1
flow control policies 1 2
server ports 1 2
uplink port channels 1 2 3 4
uplink ports 5
configuring 1
deleting 2
configuring 3
deleting 4
member ports, adding 5
member ports, deleting 6
configuring 7
deleting 8
Ethernet adapter policies
about 1 2 3
Ethernet adapter policy
configuring 1
deleting 2
Ethernet switching mode
about 1
expiration, accounts 1
exporting
backup types 1
configuration 2
core file 3
user role 4
exporting extension files 1
extension file
about 1 2
extension files
exporting 1
extension key, modifying 1
Ffabric interconnects
activating 1
admin password recover 1 2
admin password recovery 3
cluster 4
determining leadership role 5
enabling standalone for cluster 6
Ethernet switching mode 7
high availability 8
host ID 9
impact of firmware upgrade 10
initial setup 1 2 3
port licenses 12
system configuration type 13
verifying high availability status and roles 14
verifying operability 15
about 1
management port 2
setup mode 3
installing 4
uninstalling 5
viewing 6
viewing usage 7
fault collection policy
about 1 2
configuring 2
faults
Call Home alerts 1
Call Home severity levels 2
collection policy 1 2
Core File Exporter 4
lifecycle 1 2
FCoE 1
features
opt-in 1
stateless computing 2
Fibre Channel
link-level flow control 1
over Ethernet 2
priority flow control 3
statistics threshold policies 4
statistics threshold policy classes 5
uplink ports 6
Fibre Channel adapter policies
about 1 2 3
Fibre Channel adapter policy
configuring 1
deleting 2
Fibre Channel system class 1 2
firmware
about 1
adapters 2
BMC 3
direct upgrade 4
displaying 5
displaying download status 6
downgrades 7
downloading 8
fabric interconnects 9
guidelines 10
host package 1 2 3
I/O modules 12
image headers 13
images 1 2
management 15
management package 1 2 3
obtaining images 17
outage impacts 18
prerequisites 19
service profiles 20
UCS manager 21
upgrade order 22
upgrade stages 1 2
upgrades 24
firmwareadapters
verifying overall status 1
firmwarecluster configuration
verifying overall status 1
firmwarefabric interconnects
verifying overall status 1
firmwareI/O modules
verifyoing overall status 1
firmwareservers
verifying overall status 1
flexibility 1
flow control
link-level 1
priority 2
flow control policy
about 1 2
flow control policies
configuring 1
deleting 2
full state 1
Ggold system class 1 2
guidelines
oversubscription 1
passwords 2
pinning 3
usernames 4
Hhardware, stateless 1
hardware-based service profiles 1 2
headers, images 1
high availability
about 1
host firmware package
about 1 2
configuring 2
host ID, obtaining 1
HTTP
configuring 1
HTTPS
configuring 1
II/O modules
updating and activating 1
verifying overall status 2
IEEE 802.3x link-level flow control 1
images
bundle 1
component 2
contents 3
headers 4
import operations
creating 1
deleting 2
modifying 3
running 4
importing
about 1
creating operations 2
deleting operations 3
modifying operations 4
restore methods 5
running operations 6
user role 7
inheritance, servers 1 2
inherited values 1 2
initial setup
about 1
management port IP address 2
setup mode 3
initial templates 1 2
inventory messages, call home
configuring 1
sending 2
inventory messages, smart call home
configuring 1
IOM
resetting 1
IP addresses
management IP pool 1 2
management port 2
IP pools
management 1 2
IPMI access profile
configuring 1 2
deleting 1 2
IPMI profiles
about 1 2
KKVM console
about 1
KVM dongle
about 1
KVM Launch Manager 1
LLAN
pin groups 1
VLANs 2
vNIC policy 1 2
LAN boot 1
LAN boot, boot policies 1
lanes, virtual 1 2
Launch Manager, KVM 1
LDAP
creating a provider 1
licenses, port 1
lifecycle, faults 1 2
link-level flow control 1
local disk configuration policy
about 1 2
local disks
policies 1 2
service profiles 2
locales
about 1
adding an organization 2
assigning to user accounts 3
creating 4
deleting 5
deleting an organization from 6
removing from user accounts 7
log, system 1
log, system event
about 1
backing up 2
clearing 3
policy 4
viewing 5
chassis server mode 1
exec mode 2
chassis server mode 3
exec mode 4
chassis server mode 5
exec mode 6
logical configuration 1
MMAC address pools 1
MAC address table aging time
about 1 2
configuring 1 2
MAC addresses
pools 1 2
MAC pools 1
management firmware package
about 1 2
configuring 2
management IP pools
about 1 2
configuring 2
deleting 3
management port IP address 1
member ports, port channel
adding 1
deleting 2
memory qualification
configuring 1
deleting 2
merging configuration 1
mobility 1
mode
end-host 1
Ethernet switching 2
setup 3
modifying extension key 1
multi-tenancy
about 1
name resolution 2
opt-in 3
opt-out 4
organizations 5
Nname resolution 1 2
named VLANs
about 1
creating for dual fabric interconnects 2
creating for single fabric interconnect 3
deleting 4
named VSANs
about 1
creating for dual fabric interconnects 2
creating for single fabric interconnect 3
deleting 4
network
connectivity 1
named VLANs 2
named VSANs 3
network control policies
configuring 1
deleting 2
network control policy 1 2
NTP servers
about 1
configuring 2
deleting 3
Oobtaining image bundles 1
operating system installation
about 1
KVM console 1 2
KVM dongle 1 2
methods 4
PXE 5
targets 6
opt-in
about 1
multi-tenancy 2
stateless computing 3
opt-out
multi-tenancy 1
stateless computing 2
organizations
about 1
configuring under non-root 2
configuring under root 3
deleting 4
locales 5
multi-tenancy 6
name resolution 7
OS installation
about 1
KVM console 1 2
KVM dongle 1 2
methods 4
PXE 5
targets 6
outage impacts
Cisco UCS Manager 1
fabric interconnects 2
firmware upgrade 3
overriding server identity 1 2
oversubscription
about 1
considerations 2
guidelines 3
overview 1
Ppacks
host firmware 1 2
management firmware 1 2
Palo adapter
extension files 1 2
exporting 1
modifying key 2
pass-through switching 1 2
passwords, guidelines 1
passwords, recovering admin 1 2 3
pending commands 1
pending deletions 1
PFC 1
pin groups
about 1
Ethernet 2
Fibre Channel 3
LAN 4
SAN 5
pinning
about 1
guidelines 2
servers to server ports 3
placement profiles, vNIC/VHBA
configuring 1
deleting 2
vcons 3
platinum system class 1 2
policies
about 1
autoconfiguration 1 2
boot 1 2
Call Home 4
chassis discovery 1 2 3
dynamic vNIC connection 1 2
Ethernet 1 2 3
fault collection 1 2 3
Fibre Channel adapter 1 2 3
flow control 1 2
host firmware 1 2
IPMI profiles 1 2
local disk configuration 1 2
local disks 1 2
management firmware 1 2
network control 1 2
power 1 2
PSU 1 2
QoS 1 2
scrub 1 2
serial over LAN 1 2
server discovery 1 2
server inheritance 1 2
server pool 1 2
server pool qualification 1 2
statistics collection 1 2
threshold 1 2
vHBA 1 2
VM lifecycle 29
vNIC 1 2
vNIC/vHBA placement profiles 1 2
about 1 2
about 1 2
about 1 2
policies, call home
configuring 1
deleting 2
enabling 3
policies, callhome
disabling 1
policy classes
Fibre Channel port statistics, configuring 1
server port statistics, configuring 2
server port statistics, deleting 3
server statistics, configuring 4
server statistics, deleting 5
uplink Ethernet port statistics, configuring 6
uplink Ethernet port statistics, deleting 1 2
pools
about 1
MAC 1 2
management IP 1 2
servers 1 2
UUID suffixes 1 2
WWN 1 2
port channels
configuring 1
deleting 2
member ports 1 2
adding 1
deleting 2
port licenses
about 1
installing 2
obtaining 3
obtaining host ID 4
uninstalling 5
viewing 6
viewing usage 7
port profiles
about 1 2 3
clients 2
configuring 3
deleting 4
deleting 1
Port profiles
VLANs 1 2
adding 1
deleting 2
ports
fabric interconnect 1
licenses 2
management 3
pin groups 1 2
pinning server traffic 5
port channels 1 2
server 7
uplink 8
uplink Ethernet 1 2
configuring 1
deleting 2
configuring 3
deleting 4
configuring 5
deleting 6
ports,
port channels 1 2
member ports, adding 1
member ports, deleting 2
power cycling
server 1
power policy
about 1 2
primary authentication
about 1
remote 2
priority flow control 1
privileges
about 1
adding to user roles 2
removing from user roles 3
processor qualification
configuring 1
deleting 2
profiles
Call Home 1
port 1 2 3
profiles, call home
configuring 1
deleting 2
profiles, TAC-1, smart call home
configuring 1
PSU policy 1 2
QQoS policies
about 1 2
configuring 2
deleting 3
quality of service
about 1 2
flow control policies 1 2
policies 1 2
system classes 1 2
configuring 1
disabling 2
RRADIUS
creating provider 1
recommendations
backup operations 1
recommissioning
chassis 1
recovering admin password 1 2 3 4
remote authentication
user accounts 1
user roles 2
removing
server 1
replacing configuration 1
resetting
server 1
resetting CMOS
server 1
resolution, name 1
restoring
about 1
configuration 2
user role 3
role-based access control 1
roles
about 1
assigning to user accounts 2
backing up 3
privileges 4
removing from user accounts 5
SSAN
pin groups 1
vHBA policy 1 2
VSANs 3
scalability 1
scope 1
scrub policies
configuring 1
deleting 2
scrub policy
about 1 2
SEL
about 1
backing up 2
clearing 3
policy 4
viewing 5
chassis server mode 1
exec mode 2
chassis server mode 3
exec mode 4
chassis server mode 5
exec mode 6
serial number, obtaining 1
serial over LAN
policies 1
service profiles 2
serial over LAN policies
deleting 1
serial over LAN policy
about 1 2
server
acknowledging 1
BMC 2
booting 3
decommissioning 4
power cycling 5
removing 6
resetting 7
resetting CMOS 8
shutting down 9
turning off locator LED 10
turning on locator LED 11
resetting 1
server autoconfiguration policies
configuring 1
deleting 2
server autoconfiguration policy
about 1 2
server discovery policies
configuring 1
deleting 2
discovery policies 1 2
server, configuring 1
server, deleting 2
server discovery policy
about 1 2
server inheritance policies
configuring 1
deleting 2
server inheritance policy
about 1 2
server pool policies
configuring 1
deleting 2
server pool policy
about 1 2
server pool policy qualification
about 1 2
creating 2
deleting 3
server pools
configuring 1
deleting 2
server ports
about 1
configuring 2
deleting 3
server virtualization 1
servers
adapters 1
BMC 2
boot policies 1 2
configuration 4
discovery policy 1 2
DNS 6
inheritance policy 1 2
IPMI profiles 1 2
local disk configuration 1 2
multi-tenancy 10
pinning 11
pool policy 1 2
pool qualifications 1 2
pools 1 2
service profiles 1 2 3
stateless 16
verifying overall status 17
updating and activating 1
updating and activating 2
service profiles
about 1
associating 2
boot definitions 1 2
configuration 4
disassociating 5
firmware upgrades 6
inherited values 1 2
instance, configuring from scratch 8
instance, creating from template 9
LAN boot 10
local disks 11
network connectivity 12
override identity 1 2
serial over LAN 14
storage boot 15
template, configuring 16
templates 1 2
vHBAs 18
virtual media boot 19
vNICs 20
setup mode 1
severity levels, Call Home 1
silver system class 1 2
smart call home
configuring 1
inventory messages, configuring 2
registering 3
TAC-1 profile, configuring 4
Smart Call Home
about 1
alerts 2
considerations 3
severity levels 4
SNMP
community, configuring 1
configuring trap host 2
deleting trap host 3
disabling 4
enabling 1 2 3
SNMPv3
configuring user 1
deleting user 2
software 1
SOL policies
deleting 1
stages, firmware upgrades 1 2
stateless computing
about 1
opt-in 2
opt-out 3
statelessness 1
statistics
threshold policies 1 2
statistics collection policies
about 1 2
statistics collection policy
configuring 1
statistics threshold policies
Fibre channel port, classes, configuring 1
Fibre Channel port, configuring 2
server classes, deleting 3
server port classes, configuring 4
server port classes, deleting 5
server port, configuring 6
server, classes, configuring 7
server, configuring 8
server, deleting 9
uplink Ethernet port classes, deleting 1 2
uplink Ethernet port, classes, configuring 11
uplink Ethernet port, configuring 12
statistics threshold policy classes
Fibre Channel port, configuring 1
server port, configuring 2
server port, deleting 3
server, configuring 4
server, deleting 5
uplink Ethernet port, configuring 6
uplink Ethernet port, deleting 1 2
storage boot 1
storage boot, boot policies 1
storage qualification
configuring 1
deleting 2
supported tasks 1
switching mode 1
syslog 1
system class
configuring 1
disabling 2
system classes
best effort 1 2
bronze 1 2
Fibre Channel 1 2
gold 1 2
platinum 1 2
silver 1 2
system configuration 1
system event log
about 1
backing up 2
clearing 3
policy 4
viewing 5
chassis server mode 1
exec mode 2
chassis server mode 3
exec mode 4
chassis server mode 5
exec mode 6
TTACACS+
creating provider 1
tasks
supported 1
unsupported 2
telnet
configuring 1
templates
service profiles 1 2
test alert, call home 1
TFTP Core Exporter 1
threshold policies
about 1 2
time zones
about 1
configuring NTP servers 2
deleting NTP servers 3
setting 4
viewing 5
traffic management
oversubscription 1 2 3
quality of service 1 2
system classes 1 2
virtual lanes 1 2
trap host, SNMP
configuring 1
deleting 2
turning off locator LED
chassis 1
server 2
turning on locator LED
chassis 1
server 2
UUCS manager
activating 1
unified fabric
about 1
Fibre Channel 2
unsupported tasks 1
updating
adapters 1
BMC 2
firmware order 3
I/O modules 4
updating templates 1 2
upgrading
firmware 1 2 3
firmware, direct 2
firmware, guidelines 3
prerequisites 4
upgrading firmware
obtaining images 1
upgradng
firmware, service profiles 1
uplink ports
about 1
Ethernet 1 2
flow control policies 1 2
pin groups 1 2
port channels 1 2 3 4
configuring 1
deleting 2
configuring 3
deleting 4
member ports, adding 5
member ports, deleting 6
usage, port licenses 1
user accounts
about 1
creating 2
deleting 3
locales 1 2
monitoring 5
roles 1 2
assigning 1
removing 2
assigning 3
removing 4
user roles
about 1
creating 2
deleting 3
privileges 4
user, SNMPv3
configuring 1
deleting 2
usernames, guidelines 1
users
access control 1
accounts 2
authentication 3
locales 4
privileges 5
recovering admin password 1 2 3
remote authentication 7
roles 8
about 1
UUID suffix pools
about 1 2
Vvcons
vNIC/vHBA placement profiles 1
vCons
about 1 2
vHBA SAN Connectivity policies
about 1 2
vHBA templates
about 1 2
configuring 2
deleting 3
vHBAs
service profiles 1
virtual lanes 1 2
virtual media boot, boot policies 1
virtual media boot 1
virtualization
about 1
Cisco M81KR VIC adapter 1 2
Cisco UCS 82598KR-CI 3
Cisco UCS CNA M71KR 4
DVS 1 2
Palo adapter 1 2
support 7
VM lifecycle policy 8
VN-Link 1 2 3 4
VN-Link in hardware 1 2
extension file 1
extension key 2
about 1 2
in hardware 1 2
certificate 1 2
components 6
considerations 1 2
pending deletions 8
VLANs
named 1
port profiles 1 2
about 1
adding 2
deleting 3
VM lifecycle policy
about 1
VMware
extension files 1
extension key 2
VN-Link
about 1 2
extension file 1 2
port profiles 1 2 3
VN-Link in hardware
about 1 2
certificate 1 2
components 3
considerations 1 2
DVS 1 2
pending deletions 6
vNIC
policy 1 2
vNIC LAN Connectivity policies
about 1 2
vNIC templates
about 1 2
configuring 2
deleting 3
vNIC/vHBA placement profiles
about 1 2
configuring 2
deleting 3
vcons 4
vCons 1 2
vNICs
dynamic vNIC connection policy 1 2
service profiles 2
VSANs
named 1
WWWN pools
about 1 2
WWNN pools
about 1 2
WWPN pools
about 1 2
|
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/cli/config/guide/1-1-1/b_CLI_Config_Guide_1_1_1/CLI_Config_Guide_1_1_1_index.html
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
This page gives an overview of the RabbitMQ .NET/C# client API.
The code samples given here demonstrate connecting to RabbitMQ and performing several common operations with the client.
The library is open-source, and is dual-licensed under the Apache License v2 and the Mozilla Public License v1.1.
The client API is closely modelled on the AMQP 0-9-1 protocol specification, with additional abstractions for ease of use.
This section gives an overview of the RabbitMQ .NET client API. Only the basics of using the library are covered: for full detail, please see the javadoc-like API documentation generated from the source code.
The core API interfaces and classes are defined in the RabbitMQ.Client namespace:
using RabbitMQ.Client;
All other namespaces are reserved for private implementation detail of the library, although members of private namespaces are usually made available to applications using the library in order to permit developers to implement workarounds for faults or design mistakes they discover in the library implementation. Applications cannot rely on any classes, interfaces, member variables etc. that appear within private namespaces remaining stable across releases of the library.
To connect to a RabbitMQ node, it is necessary to instantiate a ConnectionFactory and configure it to use the desired hostname, virtual host, and credentials. Then use ConnectionFactory.CreateConnection() to open a connection. The following two code snippets connect to a RabbitMQ node on hostName:
ConnectionFactory factory = new ConnectionFactory();
factory.UserName = user; // e.g. "guest"
factory.Password = pass;
factory.VirtualHost = vhost;
factory.HostName = hostName;
IConnection conn = factory.CreateConnection();
ConnectionFactory factory = new ConnectionFactory(); factory.Uri = "amqp://user:pass@hostName:port/vhost"; IConnection conn = factory.CreateConnection();
Since the .NET client uses a stricter interpretation of the AMQP 0-9-1 URI spec than the other clients, care must be taken when using URIs. In particular, the host part must not be omitted and virtual hosts with empty names are not addressable. All factory properties have default values. The default value for a property will be used if the property remains unassigned prior to creating a connection: the username and password default to "guest", the virtual host to "/", the host name to "localhost", and the port to the standard AMQP port (5672).
The IConnection interface can then be used to open a channel:
IModel channel = conn.CreateModel();
The channel can now be used to send and receive messages, as described in subsequent sections.
Client applications work with exchanges and queues, the high-level building blocks of AMQP 0-9-1. These must be "declared" before they can be used. Declaring either type of object simply ensures that one of that name exists, creating it if necessary. Continuing the previous example, the following code declares an exchange and a queue, then binds them together.
model.ExchangeDeclare(exchangeName, ExchangeType.Direct);
model.QueueDeclare(queueName, false, false, false, null);
model.QueueBind(queueName, exchangeName, routingKey, null);
This will actively declare the following objects: a non-durable, non-autodelete direct exchange with the given name; a non-durable, non-exclusive, non-autodelete queue with the given name; and a binding between them using the given routing key.
To publish a message to an exchange, use IModel.BasicPublish as follows:
byte[] messageBodyBytes = System.Text.Encoding.UTF8.GetBytes("Hello, world!");
model.BasicPublish(exchangeName, routingKey, null, messageBodyBytes);
For fine control, you can use overloaded variants to specify the mandatory flag, or specify message properties:
byte[] messageBodyBytes = System.Text.Encoding.UTF8.GetBytes("Hello, world!");
IBasicProperties props = model.CreateBasicProperties();
props.ContentType = "text/plain";
props.DeliveryMode = 2;
model.BasicPublish(exchangeName, routingKey, props, messageBodyBytes);
This sends a message with delivery mode 2 (persistent) and content-type "text/plain". See the definition of the IBasicProperties interface for more information about the available message properties.
In the following example, we publish a message with custom headers:
byte[] messageBodyBytes = System.Text.Encoding.UTF8.GetBytes("Hello, world!"); IBasicProperties props = model.CreateBasicProperties(); props.ContentType = "text/plain"; props.DeliveryMode = 2; props.Headers = new Dictionary<string, object>(); props.Headers.Add("latitude", 51.5252949); props.Headers.Add("longitude", -0.0905493); model.BasicPublish(exchangeName, routingKey, props, messageBodyBytes);
Code sample below sets a message expiration:
byte[] messageBodyBytes = System.Text.Encoding.UTF8.GetBytes("Hello, world!");
IBasicProperties props = model.CreateBasicProperties();
props.ContentType = "text/plain";
props.DeliveryMode = 2;
props.Expiration = "36000000";
model.BasicPublish(exchangeName, routingKey, props, messageBodyBytes);
To retrieve individual messages, use IModel.BasicGet. The returned value is an instance of BasicGetResult, from which the header information (properties) and message body can be extracted:
bool noAck = false;
BasicGetResult result = channel.BasicGet(queueName, noAck);
if (result == null) {
    // No message available at this time.
} else {
    IBasicProperties props = result.BasicProperties;
    byte[] body = result.Body;
    ...
Since noAck = false above, you must also call IModel.BasicAck to acknowledge that you have successfully received and processed the message:
    ...
    // acknowledge receipt of the message
    channel.BasicAck(result.DeliveryTag, false);
}
Note that fetching messages using this API is relatively inefficient. If you'd prefer RabbitMQ to push messages to the client, see the next section.
Another way to receive messages is to set up a subscription using the IBasicConsumer interface. The messages will then be delivered automatically as they arrive, rather than having to be requested proactively. One way to implement a consumer is to use the convenience class EventingBasicConsumer, which dispatches deliveries and other consumer lifecycle events as C# events:
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (ch, ea) =>
{
    var body = ea.Body;
    // ... process the message
    channel.BasicAck(ea.DeliveryTag, false);
};
String consumerTag = channel.BasicConsume(queueName, false, consumer);
Another option is to subclass DefaultBasicConsumer, overriding methods as necessary, or implement IBasicConsumer directly. You will generally want to implement the core method IBasicConsumer.HandleBasicDeliver. More sophisticated consumers will need to implement further methods. In particular, HandleModelShutdown traps channel/connection closure. Consumers can also implement HandleBasicCancelOk to be notified of cancellations. The ConsumerTag property of DefaultBasicConsumer can be used to retrieve the server-generated consumer tag, in cases where none was supplied to the original IModel.BasicConsume call. You can cancel an active consumer with IModel.BasicCancel:
channel.BasicCancel(consumerTag);
When calling the API methods, you always refer to consumers by their consumer tags, which can be either client- or server-generated as explained in the AMQP 0-9-1 specification document.
Each IConnection instance is, in the current implementation, backed by a single background thread that reads from the socket and dispatches the resulting events to the application. If heartbeats are enabled, as of version 3.5.0 they are implemented in terms of .NET timers. Usually, therefore, there will be at least two threads active in an application using this library:
The one place where the nature of the threading model is visible to the application is in any callback the application registers with the library. Such callbacks include delivery and cancellation handlers on IBasicConsumer implementations, event handlers such as BasicReturn, BasicAcks and BasicNacks, and connection or model shutdown handlers.
As of version 3.5.0 application callback handlers can invoke blocking operations (such as IModel.QueueDeclare or IModel.BasicCancel). IBasicConsumer callbacks are invoked concurrently. However, per-channel operation order is preserved. In other words, if messages A and B were delivered in this order on the same channel, they will be processed in this order. If messages A and B were delivered on different channels, they can be processed in any order (or in parallel). Consumer callbacks are invoked in tasks dispatched to the default TaskScheduler provided by the .NET runtime.
It is possible to use a custom task scheduler by setting ConnectionFactory.TaskScheduler:
public class CustomTaskScheduler : TaskScheduler
{
  // ...
}

var cf = new ConnectionFactory();
cf.TaskScheduler = new CustomTaskScheduler();
This, for example, can be used to limit the degree of concurrency with a custom TaskScheduler.
As a rule of thumb, IModel instances should not be used by more than one thread simultaneously: application code should maintain a clear notion of thread ownership for IModel instances. If more than one thread needs to access a particular IModel instances, the application should enforce mutual exclusion itself. One way of achieving this is for all users of an IModel to lock the instance itself:
IModel ch = RetrieveSomeSharedIModelInstance();
lock (ch) {
  ch.BasicPublish(...);
}
Symptoms of incorrect serialisation of IModel operations include, but are not limited to,
If a message is published with the "mandatory" flag set, but cannot be delivered, the broker will return it to the sending client (via a basic.return AMQP 0-9-1 command). To be notified of such returns, clients can subscribe to the IModel.BasicReturn event. If there are no listeners attached to the event, then returned messages will be silently dropped.
model.BasicReturn += new RabbitMQ.Client.Events.BasicReturnEventHandler(...);
The BasicReturn event will fire, for example, if the client publishes a message with the "mandatory" flag set to an exchange of "direct" type which is not bound to a queue.
To disconnect, simply close the channel and the connection:
channel.Close(200, "Goodbye");
conn.Close();
Note that closing the channel is considered good practice, but isn't strictly necessary - it will be done automatically anyway when the underlying connection is closed. In some situations, you may want the connection to close automatically once the last open channel on the connection closes. To achieve this, set the IConnection.AutoClose property to true, but only after creating the first channel:
IConnection conn = factory.CreateConnection(...); IModel channel = conn.CreateModel(); conn.AutoClose = true;When AutoClose is true, the last channel to close will also cause the connection to close. If it is set to true before any channel is created, the connection will close then and there.
Network connection between clients and RabbitMQ nodes can fail. The RabbitMQ .NET/C# client supports automatic recovery of connections and topology (queues, exchanges, bindings, and consumers). The automatic recovery process for many applications follows these steps: reconnect to the node, restore the connection's channels and their settings, and then recover the topology (exchanges, queues, bindings, and consumers). Automatic connection recovery is enabled via ConnectionFactory.AutomaticRecoveryEnabled:
ConnectionFactory factory = new ConnectionFactory();
factory.AutomaticRecoveryEnabled = true;
// connection that will recover automatically
IConnection conn = factory.CreateConnection();
Recovery is attempted after a configurable network recovery interval:
factory.NetworkRecoveryInterval = TimeSpan.FromSeconds(10);
Topology recovery involves recovery of exchanges, queues, bindings and consumers. It is enabled by default but can be disabled:
ConnectionFactory factory = new ConnectionFactory();
factory.AutomaticRecoveryEnabled = true;
factory.TopologyRecoveryEnabled = false;
IConnection conn = factory.CreateConnection();
The .NET client keeps track of and updates delivery tags to make them monotonically growing between recoveries. IModel.BasicAck, IModel.BasicNack, and IModel.BasicReject then translate adjusted delivery tags into those used by RabbitMQ. Acknowledgements with stale delivery tags will not be sent. Applications that use manual acknowledgements and automatic recovery must be capable of handling redeliveries.
When building distributed systems with RabbitMQ, there are a number of different messaging patterns that crop up over and over again. In this section, we cover some of the most common coding patterns and interaction styles:
The point-to-point messaging pattern occurs when the publisher of a message has a particular receiving application in mind - for instance, when a RPC-style service is made available via the AMQP server, or when an application in a workflow chain receives a work item from its predecessor and sends the transformed work item to its successor.
AMQP 0-9-1 publish operation (IModel.BasicPublish) provides a delivery flag, "mandatory", which can be used to ensure service availability at the time a request is sent by a client. Setting the "mandatory" flag causes a request to be returned if it cannot be routed to a queue. Returned messages appear as basic.return commands, which are made visible to the application via the IModel.BasicReturn event on the IModel that was used to publish the message.
Since published messages are returned to clients via basic.return method, and basic.return is an asynchronous negative-acknowledgement event, the absence of a basic.return for a particular message cannot be taken as a confirmation of delivery: the use of delivery flags only provides a way of raising the bar, rather than eliminating failure entirely.
Additionally, the fact that a message was flagged "mandatory", and successfully enqueued on one or more queues, is no guarantee of its eventual receipt: most trivially, the queue could be deleted before the message is processed, but other situations, like the use of the noAck flag by a message consumer, can also make the guarantee provided by "mandatory" conditional.
Alternatively, one could use Publisher Confirms. Setting a channel into confirm mode by calling IModel.ConfirmSelect causes the broker to send a Basic.Ack after each message is processed by delivering to a ready consumer or by persisting to disk. Once a successfully processed message has been confirmed via the IModel.BasicAcks event handler, the broker has assumed responsibility for the message and the client may consider the message handled. Note that the broker may also negatively acknowledge a message by sending back a Basic.Nack. In this case, if a message is rejected via the IModel.BasicNacks event handler, the client should assume that the message was lost or otherwise undeliverable. Also, note that unroutable messages - messages published as mandatory to non-existing queues - are both Basic.Return'ed and Basic.Ack'ed.
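The snippet below is a minimal sketch of putting a channel into confirm mode and reacting to acks and nacks. It assumes a 3.x-era client in which IModel exposes ConfirmSelect() and the BasicAcks/BasicNacks events; exchangeName, routingKey and body are placeholders defined elsewhere.
IModel channel = conn.CreateModel();
channel.ConfirmSelect(); // put the channel into confirm mode

channel.BasicAcks += (sender, args) =>
{
    // the broker has taken responsibility for message(s) up to args.DeliveryTag
    // (args.Multiple == true means "all tags up to and including this one")
    Console.WriteLine("Confirmed: " + args.DeliveryTag);
};

channel.BasicNacks += (sender, args) =>
{
    // the broker could not process the message(s); treat them as lost
    Console.WriteLine("Not confirmed: " + args.DeliveryTag);
};

IBasicProperties props = channel.CreateBasicProperties();
props.DeliveryMode = 2; // persistent
channel.BasicPublish(exchangeName, routingKey, props, body);
Depending on the client version, a blocking IModel.WaitForConfirms() call may also be available if you prefer to wait for outstanding confirms synchronously.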
using (IConnection conn = new ConnectionFactory().CreateConnection(args[0])) {
    using (IModel ch = conn.CreateModel()) {
        SimpleRpcClient client = new SimpleRpcClient(ch, /* ... */);
        // in the line above, the "..." indicates the parameters
        // used to specify the address to use to route messages
        // to the service.

        // The next three lines are optional:
        client.TimeoutMilliseconds = 5000; // defaults to infinity
        client.TimedOut += new EventHandler(TimedOutHandler);
        client.Disconnected += new EventHandler(DisconnectedHandler);

        byte[] replyMessageBytes = client.Call(requestMessageBytes);
        // other useful overloads of Call() and Cast() are
        // available. See the code documentation of SimpleRpcClient
        // for full details.
    }
}
Note that a single SimpleRpcClient instance can perform many (sequential) Call() and Cast() requests! It is recommended that a single SimpleRpcClient be reused for multiple service requests, so long as the requests are strictly sequential.
The event broadcasting pattern occurs when an application wishes to indicate a state change or other notification to a pool of applications without knowing precisely the addresses of each interested party. Applications interested in a certain subset of events use exchanges and queue-bindings to configure which events are routed to their own private queues.
Generally, events will be broadcast through topic exchanges, although direct exchanges, while less flexible, can sometimes perform better for applications where their limited pattern-matching capability is sufficient.
using (IConnection conn = new ConnectionFactory().CreateConnection(args[0])) {
    using (IModel ch = conn.CreateModel()) {
        IBasicProperties props = ch.CreateBasicProperties();
        FillInHeaders(props); // or similar
        byte[] body = ComputeBody(props); // or similar
        ch.BasicPublish("exchangeName", "chosen.routing.key", props, body);
    }
}
See the documentation for the various overloads of BasicPublish on class RabbitMQ.Client.IModel.
// "IModel ch" in scope. Subscription sub = new Subscription(ch, "STOCK.IBM.#"); foreach (BasicDeliverEventArgs e in sub) { // handle the message contained in e ... // ... and finally acknowledge it sub.Ack(e); }will start a consumer on the queue using IModel.BasicConsume. It is assumed that the queue and any bindings have been previously declared. Subscription.Ack() should be called for each received event, whether or not auto-acknowledgement mode is used, because Subscription internally knows whether an actual network message for acknowledgement is required, and will take care of it for you in an efficient way so long as Ack() is always called in your code. For full details, please see the code documentation for the Subscription class.
// "IModel ch" in scope. ch.ExchangeDeclare("prices", "topic"); ch.QueueDeclare("MyApplicationQueue", false, true, true, null); ch.QueueBind("MyApplicationQueue", "prices", "STOCK.IBM.#", false, null);... followed by consumption of messages from "MyApplicationQueue" using BasicGet or BasicConsume. A more full example is given in the ApiOverview chapter.
The same auto-acknowledgement/manual-acknowledgement decision as for point-to-point messaging is available for consumers of broadcast events, but the pattern of interaction introduces different tradeoffs:
For more information, see the section on reliable message transfer below. Note also that class Subscription takes care of acknowledgement and the various acknowledgement modes for you, so long as Subscription.Ack() is called for each received message.
Messages can be transported between endpoints with different quality-of-service (QoS) levels. In general, failure cannot be completely ruled out, but it is important to understand the various delivery failure modes to understand the kinds of recovery from failure that are required, and the kinds of situation for which recovery is possible. To reiterate: it is not possible to completely rule out failure. The best that can be done is to narrow the conditions in which failure can occur, and to notify a system operator when failure is detected.
This QoS level assures that a message is delivered to its ultimate destination at least once. That is, a receiver may receive multiple copies of the message. If it is important that a side-effect only occur once for a given message, at-most-once delivery should be used instead. To implement at-least-once delivery: publish messages as persistent to durable queues, obtain positive confirmation of publication (publisher confirms or transactions), and have consumers acknowledge messages manually, only after processing has completed, as sketched below.
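A minimal at-least-once sketch along those lines follows. The queue name and the ProcessIdempotently helper are placeholders; the durable-queue, persistent-message and manual-ack choices are the point of the example rather than a prescription.
channel.QueueDeclare("work-queue", true, false, false, null); // durable = true

IBasicProperties props = channel.CreateBasicProperties();
props.DeliveryMode = 2; // persistent
channel.BasicPublish("", "work-queue", props, body);

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    ProcessIdempotently(ea.Body);            // hypothetical helper; must tolerate redelivery
    channel.BasicAck(ea.DeliveryTag, false); // ack only after the work is done
};
channel.BasicConsume("work-queue", false, consumer); // noAck = false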
In situations where continuous service is desired, the possibility of a server failure can be hedged against with some careful programming and the availability of a warm-standby cluster for failover.The main concerns when failing over are
Message producers should take care to use transactions in order to receive positive confirmation of receipt of a group of messages from a server, and should keep a record of the exchanges, queues and bindings they need to have available in order to perform their work, so that on failover, the appropriate resources can be declared before replaying the most recent transactions to recover.
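For example, a producer might wrap a batch of publishes in an AMQP transaction so that it gets positive confirmation for the whole group. This is only a sketch, with exchangeName, routingKey and pendingMessages standing in for application-specific values.
channel.TxSelect();
try
{
    foreach (byte[] msg in pendingMessages)
    {
        channel.BasicPublish(exchangeName, routingKey, null, msg);
    }
    channel.TxCommit();   // the whole group is confirmed together
}
catch (Exception)
{
    channel.TxRollback(); // on failure, the batch can be replayed after failover
    throw;
}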
Message consumers should be aware of the possibility of missing or duplicate messages when failing over: a publisher may decide to resend a transaction whose outcome is in doubt, or a transaction the publisher considered complete could disappear entirely due to failure of a cluster node.
Often elements of the at-least-once pattern appear in conjunction with the external-resource pattern - specifically, the side-effects discussed in the section on reliable message transfer above are often effects on an external resource.
In cases where a delivery must be processed no more than once and used in conjunction with an external resource, it's important to write code that is able at each step to determine whether the step has already been taken in some previous attempt at completing the whole transaction, and if it has, to be able to omit it in this attempt and proceed to the next step. For example:
This makes it important to be able to compress request IDs so that they do not take unbounded space in the log of performed work, and so that we do not need to introduce a full distributed garbage-collection protocol with the ultimate requestor. One way of doing this is to choose to use request IDs that are strictly increasing, so that a "high water mark" can be used. Once the work is known to have been performed, and a reply has been produced (if there is one), the reply can be sent back to the requestor as many times as necessary. The requestor knows which replies it is expecting, and can discard unwanted duplicates. So long as duplicates of the same request always receive identical reply messages, the replier need not be careful about sending too many copies of the reply. Once the reply has been sent to the server, the request message can be acknowledged as received and processed with the server. In cases where there is no reply to a request, the acknowledgement is still useful to ensure that requests are not lost.
The Windows Communication Foundation (WCF) enables protocol-independent, service-oriented applications to be built; the RabbitMQ .NET client extends the framework by providing a Binding and Transport Binding Element over RabbitMQ. In the language of WCF, a Binding is a stack of Binding Elements which control all aspects of the service’s communication (for example, Security, Message Format and Transactions). A specialized kind of Binding Element, the Transport Binding Element, specifies the protocol to be used for communication between a service and its clients (for example WS-HTTP, MSMQ or .NET Remoting over TCP).
The RabbitMQ Binding provides OneWay ("Fire and Forget"), TwoWay (Request/Reply) and Duplex (Asynchronous Callback) communication over RabbitMQ with WS-ReliableSessions, WS-AtomicTransactions and Text message encoding. The binding can be configured from imperative code or using the standard WCF Configuration model.
A Transport Binding Element is also supplied and can be used in the construction of custom bindings if the channel stack provided by the RabbitMQ Binding is insufficient. The transport binding must be configured with a broker hostname and broker port prior to use.
The RabbitMQ WCF binding has limited flexibility compared to the RabbitMQ .NET RabbitMQ client library. You are advised to use the .NET RabbitMQ client library if you require greater flexibility (e.g. control over durability of service queue) or if you require long-term support. The WCF binding is not under active development.
The RabbitMQ binding to WCF and associated samples can be built automatically using NAnt. For more information about NAnt, visit the NAnt project website. To build the library and Sample Applications from a console window, change to the RabbitMQ.net drop location and execute:
nant build-wcf nant wcf-examples
Each Windows Communication Foundation service is built from three components, an Address, Behaviours and a Contract. For more information, see Windows Communication Foundation Essentials.
A service contract is an interface decorated with the ServiceContractAttribute and has one or more methods (or property accessors) decorated with the OperationContract attribute. Typically the contract exists in an assembly that can be shared between client and server applications.
[ServiceContract] public interface ICalculator { [OperationContract] int Add(int x, int y); [OperationContract] int Subtract(int x, int y); }
The contract for a service specifies what the operations the service agrees to provide, the behaviour specifies the implementation for that service. A behaviour is a class implementing the contract and optionally decorated with the ServiceBehavior attribute.
[ServiceBehavior(InstanceContextMode=InstanceContextMode.PerCall)] public sealed class Calculator : ICalculator { public int Add(int x, int y) { return x + y; } public int Subtract(int x, int y) { return x - y; } }
For a service to be useful, it must be reachable and therefore hosted. The two common hosting scenarios for WCF services are IIS and ServiceHost. IIS Hosting is untested and unsupported by the RabbitMQ binding and using System.ServiceModel.ServiceHost is the recommended hosting path. A service host instance is constructed with the type of service behaviour being hosted and a description of the endpoint(s) it will be published on. The endpoints consist of Addresses (e.g. soap.amqp:///MyService) and Bindings; they may be specified directly as constructor arguments in imperative code or declaratively through WCF configuration files; both are supported by the RabbitMQ binding.
Services hosted using the RabbitMQ binding must be hosted at addresses under the soap.amqp scheme. The amq.direct exchange is used. The service name must not be omitted.
serviceAddress = “soap.amqp:///” serviceName
The sample services referred to in this section are located in the src\wcf\Test project.
Operations on a service can be marked as One Way; this means there will be no response from the service when the operation completes. One Way operations always have return type void and have an OperationContract attribute with IsOneWay set equal to true decorating them.
[OperationContract(IsOneWay=true)] void Log(LogData entry);
If a service only contains one way operations the RabbitMQ binding can be used in an optimized OneWayOnly mode. In this mode, no reply queue is created for responses to be sent back to the client and the client does not listen for responses from the service. To enable OneWayOnly mode set the binding property or use the oneWay configuration attribute.
<rabbitMQBinding> <binding name="rabbitMQConfig" hostname="localhost" port="5672" username="guest" password="guest" virtualHost="/" oneWay="true" maxmessagesize="8192" /> </rabbitMQBinding>
The OneWayTest sample application is a simple logging service. Clients submit log entries to a server which displays them on the console. It demonstrates one way RPC over RabbitMQ, SOAP encoding to transmit complex data types over the wire and singleton instance context mode.
Typically a service operates in a bi-directional, two way fashion where requests from the client are synchronously executed and a response returned to the caller. To support these services, the RabbitMQ binding uses the CompositeDuplexBindingElement , which constructs a uniquely named reply queue on the broker. Two Way services are not supported by the binding when it is in OneWayOnly mode. The TwoWayTest sample application is a calculator service, whose operations take a pair of integers and return a third.
Each call to a service can be considered independent of all others, with the service maintaining no state; often a more useful service maintains some state between calls. The RabbitMQ binding supports WS-ReliableSessions, enabling the object instances used to service requests to have a session-long lifetime and be associated with a single client session. The SessionTest sample application is a cart service, allowing items to be added to a cart and a total calculated.
A call to a two way service might start a long running process (for example, aggregating prices from a list of suppliers) and whilst the client requires a response, it is desirable that the client is not blocked for the duration of the call; instead, an asynchronous call is desired. Duplex services allow the service to make calls to the client, and have a contract whose ServiceContract specifies a CallbackContract type.
[ServiceContract(CallbackContract=typeof(IOrderCallback))] public interface IOrderService
Duplex services are supported by the RabbitMQ binding because its channel stack includes the composite duplex binding element; they are not supported in OneWayOnly mode. The DuplexTest sample application is an ordering service, which makes a callback to the client when an order is fulfilled.
The recommended hosting scenario for services over RabbitMQ is self hosting using System.ServiceModel.ServiceHost. The ServiceHost must specify a base or absolute endpoint address under the soap.amqp scheme. An endpoint should then be added to the service using the RabbitMQBinding.
var service = new ServiceHost(typeof(Calculator), new Uri("soap.amqp:///")); var binding = new RabbitMQBinding("localhost", 5672, "guest", "guest", "/", 8192, Protocols.AMQP_0_9_1); service.AddServiceEndpoint(typeof(ICalculator), binding, "Calculator");
The recommended pattern for connecting to a service is by deriving from either ClientBase<T> or DuplexClientBase<T>. For duplex clients, the InstanceContext must be specified.
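As an illustration, a ClientBase<T>-derived proxy for the calculator contract might be wired up imperatively as below. The binding arguments mirror the hosting snippet above, while the endpoint URI and the use of the (Binding, EndpointAddress) base constructor are assumptions of this sketch rather than requirements of the RabbitMQ binding.
using System.ServiceModel;
using System.ServiceModel.Channels;

public class CalculatorClient : ClientBase<ICalculator>, ICalculator
{
    public CalculatorClient(Binding binding, EndpointAddress address)
        : base(binding, address) { }

    public int Add(int x, int y)      { return Channel.Add(x, y); }
    public int Subtract(int x, int y) { return Channel.Subtract(x, y); }
}

// usage (binding values match the hosting example above)
var binding = new RabbitMQBinding("localhost", 5672, "guest", "guest", "/",
                                  8192, Protocols.AMQP_0_9_1);
var client = new CalculatorClient(binding, new EndpointAddress("soap.amqp:///Calculator"));
int sum = client.Add(2, 3);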
Specifying details like the protocol version and broker address in source code tends to result in services which are very hard to manage and deploy. To avoid this, WCF provides a configuration mechanism using application configuration files (App.Config). The configuration file must be applied to the host or client assembly (typically an executable) and not to a library which contains the service contract or behaviours. To declaratively configure a service, the RabbitMQBindingSection must be imported into the system.serviceModel section of the configuration file.
<extensions> <bindingExtensions> <add name="rabbitMQBinding" type="RabbitMQ.ServiceModel.RabbitMQBindingSection, RabbitMQ.ServiceModel, Version=1.0.110.0, Culture=neutral, PublicKeyToken=null"/> </bindingExtensions> </extensions>
With the extension imported, the rabbitMQBinding can be declared and configured:
<bindings> <rabbitMQBinding> <binding name="rabbitMQConfig" hostname="localhost" port="5672" maxmessagesize="8192" version="AMQP_0_9_1" /> </rabbitMQBinding> </bindings>
A service is configured by declaring the contract, endpoint and binding. Multiple services and bindings can be specified in a single configuration file.
<services> <service name="Calculator"> <host> <baseAddresses> <add baseAddress="soap.amqp:///" /> </baseAddresses> </host> <endpoint address="Calculator" binding="rabbitMQBinding" bindingConfiguration="rabbitMQConfig" contract="ICalculator"/> </service> </services>
To run the service, simply create a new ServiceHost instance passing in the service behaviour (as specified in config).
host = new ServiceHost(typeof(Calculator)); host.Open();
To build a client whose settings are derived from configuration, expose a constructor for your ClientBase<T> derived class calling the ClientBase(string).
public class CalculatorClient : ClientBase<ICalculator>, ICalculator { public CalculatorClient(string configurationName) : base(configurationName) { }
Construct the class passing the client endpoint name as specified in configuration.
<client> <endpoint address="soap.amqp:///Calculator" binding="rabbitMQBinding" bindingConfiguration="rabbitMQConfig" contract="ICalculator" name="AMQPCalculatorService" /> </client>
The RabbitMQ WCF libraries also have full support for the WCF Configuration Editor Tool.
|
http://www.rabbitmq.com/dotnet-api-guide.html
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Proxypy: Cross Domain Javascript Requests with Python
When proxy service can be a good idea.
A proxy can act as an intermediary between your Javascript and the remote data, eliminating all the cross domain limitations imposed on the client. The pattern in principle is simple:
- Assign to your web application's server some view to receive a request with a parameter of the desired remote content address.
- Let the server fetch the content from the remote server via an HTTP request.
- Wrap the results into a JSON object and return them back.
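As a rough illustration of those three steps (this is a sketch, not proxypy's actual implementation), a server-side view could do something like the following with the standard library:
import json
import urllib2
from urlparse import parse_qs

def proxy_get(query_string):
    params = parse_qs(query_string)
    url = params['url'][0]                     # address of the remote content
    response = urllib2.urlopen(url)            # server-side HTTP request
    reply = {
        'status': response.getcode(),
        'content': response.read(),
    }
    if 'headers' in params:                    # optionally include reply headers
        reply['headers'] = dict(response.info().items())
    return json.dumps(reply)                   # wrapped as a JSON string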
I was working with Python so I wrote a simple proxy module that can work with any Python web application framework. You can check out the source of the module on Github along with an example of a Flask application using it.
Include the Proxypy module in your project and import it, then assign a view to receive the request and call
proxypy.get(query_string). The method takes a single argument, the query string received in the request. The query string works with the following parameters:
- url: the address of the page with the target content to fetch.
- headers (optional): if you wish to return the reply headers of the request.
- callback (optional): if your request against the proxy service itself is from a different domain.
Here is a simple demonstration of usage with Flask, the "crossdomain" view receives the request and calls proxypy's get method to fetch the remote content based on the supplied parameters and then returns the JSON string containing the wrapped reply from the remote.
from flask import Flask, request, Response, render_template import json import proxypy app = Flask(__name__) @app.route("/") def index(): return render_template("index.html") @app.route("/crossdomain") def crossdom(): reply = proxypy.get(request.query_string) return Response(reply,status=200,mimetype='application/json') if __name__ == "__main__": app.run(debug=True)
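For instance, the Javascript on the page would request something like /crossdomain?url=http%3A%2F%2Fexample.com%2Fdata.json&callback=handleReply (the URL and callback name here are made up). The same query string can be fed to proxypy directly, though the exact value format of the optional parameters may differ:
import proxypy

# hypothetical query string, URL-encoded as a browser would send it
reply = proxypy.get("url=http%3A%2F%2Fexample.com%2Fdata.json&headers=true")
print(reply)  # a JSON string wrapping the remote server's response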
You can view a full example in action here and check the full source of the script on Github.
You might have noticed a couple of limitations in terms of features in this proxy, namely no support for HTTP methods other than GET. This is because I haven't really come across a need for that, and allowing more methods such as POST may increase the possibilities of exploitation.
I hope that you find this module useful in one way or another; I would love to see your feedback about this topic in the comments. Have a nice weekend!
|
http://www.thecodeship.com/web-development/proxypy-cross-domain-javascript-requests-python/
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
JScript .NET, Part I: The Mechanics - Doc JavaScript
JScript .NET, Part I: The Mechanics
Now that you know how to consume Web services, it's time to learn how to create them. In order to learn this topic, you need to know the .NET Framework and its JavaScript language, JScript .NET. In this column, we start a new series on JScript .NET. JScript .NET (notice the blank between the JScript and .NET) is Microsoft's implementation of ECMAScript (more commonly known as JavaScript) Edition 4. It is backwards compatible with previous versions, and extends them via its classes, strong typing, namespaces, enumeration, compiled code, and other features. JScript .NET is one of three languages supported by the .NET Framework (the other two are Visual Basic and C#). Your knowledge of JavaScript will help you to ramp up quickly on JScript .NET. You will be able to create server-side applications and Web services in a relatively short time.
In this column, we lay down the groundwork for JScript .NET. We'll show you how to start working with JScript .NET on your desktop computer. Since we are dealing with server-side applications, you need to have access to a server. Your PC seems to be a good candidate for this. We'll show you how to install the Internet Information Services (IIS) server on your PC. We'll also guide you through the installation of the .NET SDK (Software Development Kit). This application is loaded with documentation, tools, utilities, and compilers.
Once IIS and the .NET SDK are installed, you will be ready to learn the new stuff. We'll cover the .NET Framework and its principles. We'll show you what's new in JScript .NET and the differences between JScript and JScript .NET. We'll teach you how to compile and link a JScript .NET program.
In this column you will learn:
- How to install the Internet Information Services (IIS) server
- How to install the .NET Software Development Kit (SDK)
- How to take advantage of .NET Framework
- How to take advantage of JScript .NET
- How to compile and run JScript .NET applications
Next: How to install the Internet Information Services (IIS) server
Produced by Yehuda Shiran and Tomer Shiran
Created: April 8, 2002
Revised: April 8, 2002
|
http://www.webreference.com/js/column107/index.html
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Today we had a requirement to do some pretty strange stuff in SQL which required us to call an encryption library in SQL server. This is something I had not done before, so I thought I would blog about it.
There are several steps involved.
The first part is straightforward enough; the following code gives an example:
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.SqlServer.Server;
using System.Data;
using System.Data.Sql;
using System.Data.SqlTypes;
using Encrypt;

public class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlFunction()]
    public static string Enc(SqlString password, SqlString encStringOut)
    {
        Encryption enc = new Encryption();
        return enc.Encrypt(password.ToString(), encStringOut.ToString());
    }

    [Microsoft.SqlServer.Server.SqlFunction()]
    public static string Dec(SqlString password, SqlString encStringOut)
    {
        Encryption enc = new Encryption();
        return enc.Decrypt(password.ToString(), encStringOut.ToString());
    }
}
So that's easy enough. Compile this and the job is done.
So next we need to do the SQL server work. So firstly I copied the SQLServerEncryption.Dll to the C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn directory of the SQL server machine.
I also copied the Dll generated (SQLServerEncryption.Dll) to the C:\ drive on the SQL server machine, as it makes the Dll registration code that needs to be run for SQL a bit easier.
So we’ve copied to \binn and C:\ so far, so now we need to register the Dll with SQL server. So lets look at that
Firstly we need to allow CLR types in the SQL server installation. Which is either done using the following SQL
EXEC dbo.sp_configure 'clr enabled', 1
RECONFIGURE WITH OVERRIDE
Or if you have issues doing it that way, use the SQL Server Surface Area Configuration tool: use the "Surface Area Configuration For Features" link, and then enable CLR integration from the presented treeview. Once this is done we can register the CLR Dll with SQL, as follows
create assembly SQLServerEncryption from 'c:\SQLServerEncryption.dll' WITH PERMISSION_SET = SAFE
Now that we've done that, all that's left to do is create a normal SQL Server function that uses the CLR Dll, which is simply done as follows
ALTER FUNCTION [dbo].[ENCRYPT](@password [nvarchar](255), @encStringOut [nvarchar](255))
RETURNS [nvarchar](255) WITH EXECUTE AS CALLER
AS
EXTERNAL NAME [SQLServerEncryption].[StoredProcedures].[Enc]
And that's it, you can now use the CLR function as you like. For example:
dbo.ENCRYPT('xxxx','sb_SQL').
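Since the assembly also exposes the Dec method, a companion function can be registered the same way. This is a sketch that simply mirrors the ENCRYPT function above; the DECRYPT name is my own choice.
CREATE FUNCTION [dbo].[DECRYPT](@password [nvarchar](255), @encStringOut [nvarchar](255))
RETURNS [nvarchar](255) WITH EXECUTE AS CALLER
AS
EXTERNAL NAME [SQLServerEncryption].[StoredProcedures].[Dec]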
|
https://www.codeproject.com/Articles/37377/SQL-Server-CLR-Functions
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Level: Introductory
Sing Li ([email protected]), Author, Wrox Press, 29 Jun 2004
Guard
your castle! Claim your land! Command your knights to joust valiantly
and defeat their foes. Capture the enemy's position and seize its land
while dodging its menacing knights. If writing mundane Java code is
giving you the blues lately, maybe it's time to turn your medieval
fantasies into reality. You can rule your own kingdom while refining
your Java programming skills and mastering the Eclipse development
environment all at the same time. It's all in a hard day's work for a
supreme CodeRuler. Simulation-gaming enthusiast Sing Li puts you on the
fast track to ultimate kingdom domination.
CodeRuler was born of the 2004 ACM International Collegiate Programming Competition (see Resources).
This article
guides you along the shortest path to ruling your own medieval kingdom.
It reveals the game's environment, describes the rules, discusses
general strategies, and provides two complete working ruler entries
that you can put to use (or modify) immediately.
The simulation environment
CodeRuler
is a graphical, animated simulation gaming environment. As a medieval
ruler, you must battle other rulers for land and dominance. Your
kingdom consists of:
The graphical gaming world
The
game is played out in a two-dimensional world represented by a map of
the kingdom. (The background landscape sketch merely acts as wallpaper;
it doesn't affect game play or change as the game progresses.) Figure 1
illustrates a CodeRuler game in progress.
Figure 1. CodeRuler in action
Figure
1 shows two competing rulers at work. The ruler -- the mastermind
behind the strategic movements of the game pieces -- doesn't appear in
the game world. The game pieces (peasants, knights, and castles) are
the colored dots moving within the simulated world. Figure 2
illustrates the pieces' shapes and their possible movement directions.
Figure 2. Movement pattern for CodeRuler game pieces
As
you can see from Figure 2, knights and peasants use the same movement
pattern. They can move a single square in any one of the eight
directions for each turn. Each direction has an associated number,
which you use in Java coding. Each number also has a predefined
constant (such as NW) that you use in your code.
The console score display
You can see the status console on the right side of Figure 1.
The names of the currently playing rulers, and the organizations to
which they belong, appear at the top of the console. The two numbers
are the ruler's current score (left) and the number of land squares
that the peasants have claimed. Figure 3 shows an example score display.
Figure 3. Console score display
In
Figure 3, ruler number #18 is called Simple Ruler from IBM
developerWorks. The ruler's current score is 123, and the ruler has
captured 774 squares of land for the kingdom. You can abort the
simulation at any time by clicking on the red X on the top right.
At-a-glance land occupation display
You can see a small version of the world in the middle of the status console of Figure 1.
The image, reproduced in Figure 4, lets you see each ruler's current
occupation of land at a glance. You can easily see in Figure 4 that the
blue-colored ruler has claimed significantly more land than the
magenta-colored ruler.
Figure 4. At-a-glance land occupation display
The simulation clock
At the bottom of the status console in Figure 1 is a clock. Figure 5 shows a closeup.
Figure 5. The CodeRuler clock
A
sun travels around the clock's dial. The match is over after the sun
has traveled one complete cycle. Each tick of the clock is a turn for
the simulator. As a ruler, you determine the moves your pieces make
during each turn.
Rules of combat
Each ruler has initial control over:
Creating new peasants and castles
The rate of creation of peasants or knights by a castle depends on the number of land squares that you own:
During the course of the game, you want to:
Capturing game pieces
Only
a knight can capture the opponent's peasants, castles, or knights. It
can capture peasants and castles simply by moving into their squares.
To capture an opposing knight, you must first bring its strength value
down to 0. Each knight starts with a strength value of 100 and loses a
random strength value between 15 and 30 for each attempted capture by
an opponent. The knight performing a successful capture gains 20
strength units.
The scoring scheme
To
win a match, you must be the ruler who has the highest score at the end
of the match. Note that the winner might or might not be the ruler with
the most land claimed. Table 1 provides the game's scoring scheme.
Table 1. Scoring scheme for captures
At the end of the match, your remaining pieces, captured castles, and claimed land all add to your score, as shown in Table 2.
Table 2. Scoring scheme for remaining pieces
Game details
Each
player writes Java code that simulates a ruler. The gaming simulator
matches your ruler against other rulers and determines the winner. In
your code, you must orchestrate the movement of your peasants, knights,
and castle(s). A set of API provides information on your pieces and
those of other competing rulers. Using this API, you can write code
that implements offensive, defensive, or even adaptive strategies.
Meet the mastermind behind CodeRuler
For
a glimpse under the hood of the CodeRuler engine and some ideas for
advanced strategies, read this revealing behind-the-scenes interview with CodeRuler's creator, Tim deBoer.
Components of the game
The CodeRuler game requires the Eclipse IDE for writing, debugging, and testing your ruler's code. (See Eclipse: The integrated kingdom development environment, later in this article.)
CodeRuler includes:
Simulated world coordinate system
The
game is staged in a simulated world consisting of 4,608 squares -- 72
squares wide by 64 squares high. The squares are numbered according to
an (x, y) coordinates system. The x axis extends from left to right and the y axis from top to bottom. Figure 6 shows the layout of the CodeRuler world. Position (0,0) is at the top-left corner.
Figure 6. The CodeRuler world coordinates system
CodeRuler API and inheritance hierarchy
Before
you can command the game pieces, you need to understand the CodeRuler
API. The API is highly object oriented and has an explicit inheritance
hierarchy. An understanding of the hierarchy is crucial to effective
CodeRuler coding. Figure 7 shows the inheritance hierarchy.
Figure 7. CodeRuler inheritance hierarchy
The
inheritance tree in Figure 7 is based on Java interfaces. Each game
piece must implement its associated interface: peasant must implement IPeasant, knights must implement IKnight,
and so on. However, you never need to code any of these classes,
because the CodeRuler simulator uses built-in implementations. As a
ruler, you need to use the API provided by the interfaces
only to get information about the game pieces.
The IObject interface
The IObject interface is the super interface to all pieces on the playing field. Each piece implements IObject indirectly. IObject factors out the common behaviors of all pieces:
getRuler()
getX(), getY()
isAlive()
getId()
The IObject
super interface also has two convenience methods. These methods can be
quite handy in your strategy design and help you avoid the need to
employ complex trigonometric math:
getDirectionTo()
getDistanceTo()
The IPeasant interface
The IPeasant interface adds no new behavior to the IObject interface. You can move peasants with the Ruler's move()
method, which changes their positions. You use a peasant to claim land.
A peasant's automatic behavior is to claim any land position that it
moves over. An opposing knight can capture a peasant in a single move;
no calculation of strength is involved.
The ICastle interface
The ICastle interface, like the IPeasant interface, adds no new behavior to the IObject
interface. A castle's automatic behavior is to create more peasants or
knights. The speed of creation depends on the amount of land you own.
See the sidebar Creating new peasants and castles, for the creation rate.
The IKnight interface
The IKnight interface adds one method, called getStrength(), to the IObject interface. A knight is captured when its strength is reduced to zero. You can use the IKnight interface's getStrength() method in your strategy to avoid the loss of knights. See Capturing game pieces earlier in this article for a discussion of a knight's strength calculation.
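For example, a ruler might pull weakened knights back toward a friendly castle instead of letting them be captured. The sketch below (not part of the article's listings) uses only methods described here; the strength threshold of 40 is an arbitrary choice.
// inside orderSubjects(): retreat weakened knights toward our first castle
ICastle[] myCastles = getCastles();
IKnight[] myKnights = getKnights();
for (int i = 0; i < myKnights.length; i++) {
    IKnight knight = myKnights[i];
    if (knight.getStrength() < 40 && myCastles.length > 0) {
        ICastle home = myCastles[0];
        int dir = knight.getDirectionTo(home.getX(), home.getY());
        move(knight, dir);
    }
}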
The interface hierarchy in Figure 7
represents the game pieces that move around the world during the
simulation. The ruler, however, is not a game piece and does not move
in the simulated world. The IRuler interface specifies the ruler's behavior.
The IRuler interface
The IRuler interface has no need to -- and does not -- inherit from IObject. Figure 8 shows the IRuler interface's inheritance hierarchy.
Figure 8. Inheritance hierarchy of the IRuler interface
The IRuler
interface specifies the generic behaviors that all rulers implement.
They include informational methods that are useful for the
implementation of your strategy:
getPeasants()
getKnights()
getCastles()
getLandCount()
getPoints()
getRulerName()
getSchoolName()
The Ruler and MyRuler classes
To enforce game-rule-specific behaviors, and to help you implement the IRuler interface, CodeRuler supplies the Ruler class as shown in Figure 8. This class provides default implementations for most of the IRuler methods. You write the content of the MyRuler class, which must inherit from the Ruler class. You need not, and should never, modify the Ruler class.
The simulator's workflow
From the perspective of a CodeRuler player, the simulator has the following workflow:
initialize(): called once at the start of the match, before any turns are played
orderSubjects(): called once per turn so that you can issue orders to your pieces
Your getRulerName() and getSchoolName() methods should not contain strategy code. The simulator can call them at any time.
Ruler provides several vital action methods that you should use in your implementation of MyRuler:
move(): orders a peasant or knight to move one square in a given direction
capture(): orders a knight to capture the piece in an adjacent square in a given direction
Ruler also implements several methods that can change the generation mode of
your castle(s). By default, your castle(s) manufacture peasants
continuously. However, you use these methods to tell it or them to
manufacture knights instead:
createKnights()
createPeasants()
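A possible use, sketched below, is to keep producing peasants until enough land has been claimed and then switch to knights. The 1,000-square cutoff is arbitrary, and whether these methods take a castle argument or apply to all castles at once is an assumption to check against the Ruler javadoc.
// inside orderSubjects(): switch castle production once the economy is established
// (method signatures assumed here -- verify against the Ruler class documentation)
if (getLandCount() > 1000) {
    createKnights();    // favour capturing pieces and castles
} else {
    createPeasants();   // keep claiming land early in the match
}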
Last but not least, one of the reasons for the existence of the Ruler class is to define additional abstract methods that you must implement in your own MyRuler class. The simulation engine calls these methods during its execution.
The only code you must write is the code that implements the methods listed in Table 3:
Table 3. Methods in all MyRuler implementations
Eclipse: The integrated kingdom development environment
You
need to download and install the Eclipse IDE (version 2.1 or later) in
order to run the CodeRuler simulation environment (see Resources). CodeRuler integrates into the Eclipse IDE as a plug-in, thereby leveraging Eclipse's developer-friendly features.
Installing Eclipse and CodeRuler
To
install Eclipse, unarchive the distribution into a directory and run
the eclipse executable (eclipse on *nix, and eclipse.exe on Win32
systems). You need JDK/JRE 1.4.2 or later installed. (Version 1.4.2 is
highly recommended because CodeRuler is developed and tested under this
VM version.) After you install Eclipse, download the CodeRuler engine
(see Resources). To install CodeRuler, unarchive the CodeRuler distribution into the <eclipse installation directory>/plugins
directory. This should create a com.ibm.games directory under the
plugins directory. Start or restart Eclipse, and the CodeRuler plug-in
will load. You're now ready to give CodeRuler a try.
Creating your CodeRuler project
You
need to create a new project in Eclipse to use CodeRuler. From the main
menu, select Windows|Preferences. A dialog box pops up, as shown in
Figure 9.
Figure 9. Creating a new CodeRuler project
Select IBM Games in the list on the left, as depicted in Figure 9. Next,
select CodeRuler from the Game list on the right. Finally, click on OK to create a new CodeRuler project from the template. You are now ready to code your own CodeRuler.
On the tabs bar on the left hand side of the IDE, click on the Java Perspective tab. Figure 10 identifies this tab.
Figure 10. Selecting the Java perspective in Eclipse
Expanding
the src node and the default package reveals the MyRuler.java node, as
in Figure 10. The source code editor opens the file for editing when
you double-click on the MyRuler.java node. This is where you must place
your code.
Coding your first ruler
The
first ruler you'll create is simple. It moves all the peasants
randomly. Listing 1 shows the code for this ruler, with the added code
highlighted in boldface.
The importance of being timely
When you code your ruler, be aware of the timing constraint that you are under. For the initialize()
method, you are limited to one second, which should be plenty of time
for any non-input/output code initialization. You are preempted if you
take more than one second and might suffer from partial initialization.
For each turn of the game, the orderSubjects() method limits you to half of a second. An incoming parameter of the orderSubjects()
call lets you know how much time you used in the last turn. If you
exceed the time limit, you are disqualified for the rest of the match.
import java.util.Random;
...
protected Random rand = new Random();
public String getRulerName() {
return "Simple Ruler";
}
public String getSchoolName() {
return "IBM developerWorks";
}
public void orderSubjects(int lastMoveTime) {
IPeasant[] peasants = getPeasants();
for (int i = 0; i < peasants.length; i++) {
move(peasants[i], rand.nextInt(8) + 1);
}
}
The code in Listing 1 uses java.util.Random to generate a random number between 1 and 8. This number determines the direction that a peasant moves. Note the use of the Ruler class's getPeasants() method to obtain an array of all peasants, and the use of the move() method to move the peasants.
Moving the peasants randomly allows them to claim land. But because
this ruler doesn't try to capture anything, your code doesn't need to
move the knights.
The "do not try" list for CodeRulerA
clever strategy is important to winning the game, but CodeRuler
discourages tricky use of Java language features to hijack the gaming
engine or win by other devious means. Your code should not:
In all public matches and tournaments, players who use such hacker tactics will be disqualified from competition. A custom Java SecurityManager will catch most of these attempts.
Battling in your first match
To
try out your first match, first save the newly edited ruler by either
clicking on the save button on the toolbar or selecting File >Save from the menu. A save also compiles your code. Correct any typing or syntax errors before proceeding further.
You'll notice five iconic buttons, shown in Figure 11, that are CodeRuler specific.
Figure 11. Integrated CodeRuler buttons in the Eclipse toolbar
Table 4 explains the functions of the buttons in Figure 11, moving from left to right.
Table 4. Functions of the CodeRuler buttons
Your
first ruler will run against only the sample rulers in your initial
experimentation. This means you'll use only the first button, the one
highlighted in Figure 11.
When you click on this button, CodeRuler starts and loads your ruler.
You're given a chance to select your opponent(s), as shown in Figure 12.
Figure 12. Selecting your opponent for a match
Try
adding one Do Nothing Ruler. Start the match and observe how your
peasants randomly move around and claim the land. You should win this
match easily.
Next, try the Random Ruler. This ruler behaves almost identically to yours. The average land occupation is about equal.
If
you try any of the other sample rulers, the Simple Ruler you created
will likely lose. Most of the other sample rulers attempt to capture
your pieces aggressively. It's time to add an offensive edge to your Simple
Ruler.
Creating an offensive ruler
Listing 2 shows the code for the modified ruler, with the added code highlighted.
import com.ibm.ruler.*;
import java.awt.Point;
import java.util.Random;
import java.util.Vector;
public class MyRuler extends Ruler {
public String getRulerName() {
return "Simple Ruler";
}
public String getSchoolName() {
return "IBM developerWorks";
}
public void initialize() {
}
protected Random rand = new Random();
protected Vector enemies = new Vector();
public void orderSubjects(int lastMoveTime) {
IPeasant[] peasants = getPeasants();
IKnight[] knights = getKnights();
for (int i = 0; i < peasants.length; i++) {
move(peasants[i], rand.nextInt(8) + 1);
}
enemies.clear();
IPeasant[] otherPeasants = World.getOtherPeasants();
IKnight[] otherKnights = World.getOtherKnights();
ICastle[] otherCastles = World.getOtherCastles();
for (int i=0; i<otherPeasants.length; i++) {
enemies.add(otherPeasants[i]);
}
for (int i=0; i<otherKnights.length; i++ ){
enemies.add(otherKnights[i]);
}
for (int i=0; i<otherCastles.length; i++) {
enemies.add(otherCastles[i]);
}
int size = knights.length;
for (int i = 0; i < size; i++) {
IKnight curKnight = knights[i];
if (!enemies.isEmpty()) {
IObject curEnemy = (IObject) enemies.remove(0);
moveAndCapture(curKnight, curEnemy);
}
else
break;
} // end of outer for
}
public void moveAndCapture(IKnight knight, IObject enemy) {
if ((enemy == null) || !enemy.isAlive())
return;
// find the next position in the direction of the enemy
int dir = knight.getDirectionTo(enemy.getX(), enemy.getY());
Point np = World.getPositionAfterMove(knight.getX(), knight.getY(), dir);
if (np == null)
return;
if ((np.x == knight.getX()) && (np.y == knight.getY())) {
move(knight, rand.nextInt(8) + 1);
return;
}
// capture anything that is in our way
IObject obj = World.getObjectAt(np.x, np.y);
if ((obj != null) && (obj.getRuler()!= this))
capture(knight, dir);
else
move(knight, dir);
}
}
The versatile World object
Before you embark on extensive ruler coding, make sure you spend some time studying the documentation for the World
object. This object contains many static methods that you'll find
useful for implementing your strategy. For example, you can use getLandOwner() to find out who owns a square of land, use getObjectAt() to identify the game piece that's in a specific location, and use getOtherPeasants(), getOtherKnights(), getOtherCastles(), and getOtherRulers() to discover your enemies.
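For instance, the same helpers can drive a simple defensive check. The sketch below is not part of the article's listings: World.getObjectAt() and World.getOtherKnights() are used exactly as in Listing 2, while the looksSafe() helper itself and its adjacency rule are illustrative assumptions.
// Hypothetical helper, assumed to live inside your Ruler subclass (like orderSubjects()):
// is the square at (x, y) free of enemy pieces and not next to an enemy knight?
boolean looksSafe(int x, int y) {
    IObject occupant = World.getObjectAt(x, y);        // piece sitting on the square, if any
    if (occupant != null && occupant.getRuler() != this)
        return false;                                   // an enemy piece already sits there
    IKnight[] enemyKnights = World.getOtherKnights();   // every enemy knight still in play
    for (int i = 0; i < enemyKnights.length; i++) {
        if (Math.abs(enemyKnights[i].getX() - x) <= 1 &&
            Math.abs(enemyKnights[i].getY() - y) <= 1)
            return false;                               // an enemy knight is adjacent
    }
    return true;
}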
You'll recognize the green highlighted code in Listing 2.
The red highlighted code sets up a Vector consisting of all the enemy game pieces that are alive in the simulation world. Note the use of the World object to get this information (that is, World.getOtherPeasants()).
The blue highlighted code loops through all of your knights and makes each of them move toward a living opponent's game piece. It also captures any enemy pieces that might be in its path. It uses the moveAndCapture() method to move and capture.
The moveAndCapture() method moves a specified knight toward a specified enemy piece. It uses the World object's getPositionAfterMove() method to determine if the knight is stuck, and makes a random move if it is. It also uses the World object's getObjectAt() method to test and capture any enemy pieces that might be in its way.
Try this new Simple Ruler against some of the sample rulers. You'll see that it fares quite well against many of them. Of course, there's plenty of room for improvement; as an exercise, you can try modifying the code to improve it further.
Conclusion
It's your choice: from the simplest heuristic-based robotic ruler to the most sophisticated commanders driven by statistical gaming-theory models, CodeRulers span all possibilities. Just as in the real world, the most sophisticated strategy and intricate coding don't always guarantee sure winners. In fact, some of the champion rulers deploy the simplest, yet most elegant, guerrilla tactics. If strategic design and Java development is in your blood, you owe it to yourself to give CodeRuler a spin.
Resources
|
http://www.cse.lehigh.edu/~munoz/CSE497/assignments/files/coderuler.htm
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
?
Funny. Michaels slides mostly speak about issues KDE already solves - and this is the platform that is supposed to be cheaper to develop on (let alone using it) ?
Please stop comparing KDE and Gnome, one war is more then enough
Cheers to that, brother.
10 developers cost 60.000$ per year each (75.000$ is pretty much the top for paying developers).
That's 1.200.000$ for a 2 year project.
Getting the most out of Qt one needs "Qt Enterprise Edition TrioPack".
That's 37.200$ for the first,11.500$ for the second year and 48.700$ for the whole project.
In this calculation Qt costs 4% the developers cost (and maybe 3% of the project costs).
To gain any benefit from using Qt one has to finish the development one month earlier. Which is possible by using Qt (as it's a damn good tool). Then one saves 50.000$. But saving development time is very much dependent on the project, and Qt may only be used for a few parts of the project and won't save any more than 1% of the time.
Additional benefit is maintenance, as Qt usually needs less code than MFC, GTK,... and is very easy to learn.
One more note on the 50.000$ Qt cost: it's nearly one additional developer for one year!
Conclusion:
Qt is a pretty good tool. But as any tool, one has to know which tool to use when. Qt is _not_ the solution for every (GUI) problem.
ciao
who gives a $$@# !
My goodness, use what you want to use.
I for one can't wait for the release of the Ximian Open Office. QT or not, it promises to be damn good.
MSOffice format as default??
First .NET , now this. What next?
Somehow this convinces me that the ex-microsoft guys of gnome and ximian left M$ to perpetrate the divide and rule policy for M$.
Give it up! Like it or not, MS Word is the widely supported de-facto standard, and there's nothing you can do to change that. Work together with the ruler or be destroyed.
I do choose to be destroyed.
What? Of course MS Office Format is the standard.... _but_:
It is _a lot_ better to save your documents in OO's own format. From what my experience is I can say that once a document is a bit corrupted most of the work can be restored with OO's file format whereas this is seldom possible with MS format... funny thing though that OO does a better job in restoring corrupted documents than MS itself... MS file format is a lot more complicated with a hell lot more of cross referencing pointers inside the format than OO's format.
Note:
OO's file format is superior to MS file format from a technical point of view.
Advice:
Use MS file format where you have to, but don't rely on it for storing and backing up your work.
o You don't have to send MS files to make others read your documents (please use the de-facto standard Adobe pdf), plus you don't have to trust everybody to not change the content of your docs before forwarding your documents (maybe with your companys high-quality logo embedded) to somebody else.
o If you want to exchange editable documents where somebody else has to modify or copy-and-paste your work: use MS office format.
"""OO's file format is superiour to MS file format from a technical point of view."""
Where can I have more info on this? (XML-based, and open standard doesn't count sorry).
Oh, but XML and open-standard do count. XML = more easily repairable in case of corruption (important in an ever-more networked world), and open-standard means more easily convertible and more easily accessible by secondary tools.
MSOffice uses XML too. Don't mean nothing.
XML is an option.
MS's "support" for XML is only skin deep. They are just using it as a wrapper for their binary constructs.
I agree using OO's own format.
MS Office Format may be the standard, meaning most of the world is using it.
But the MS Office format isn't open, or is it? Using a format that has been reverse-engineered isn't that wise. Especially IMPORTING e.g. a MS Word file always looks different in OpenOffice / StarOffice Writer than it does in MS Word.
I don't agree that pdf is a good way to keep the file from being changed, it is only a bit harder to do. The advantage of pdf is the wide spread of readers and the hardware independence of the displayed content. But to protect the content against changes the only way is to sign the document with cryptographic methods.
"We should join with Sauron. It would be wise my friend."
LOL
"...de-facto standard..." like "all" salesmen told you?
Use XML (even if M§ uses "their version" only for the "enterprise" Office versions)!
Way to go M§.
-Dieter
If you really don't give a $$@# about things like this, I think you www time is better spent reading other sites. ;)
How on earth can you claim that 10% less 'lines of code' results in 10% less cost? I doubt that. If a programmer has to think for an hour to write a certain function in 100 lines of code, it's not very likely he will do it in 90 lines of code in 54 minutes. Probably the time needed is more or less the same and very dependent on the quality of the programmers. But then again who cares. KDE and Gnome should bash each other less. There is no need for either of them to bash another product, both being brilliant...
It is difficult to summarise in a small article all the gains of Qt over Gtk. This is why George has taken conservative numbers like 30% fewer lines of code means 10% faster. However, if you really want to dig into it, there is a huge difference in productivity between Gtk and Qt, which, in my opinion, is closer to a 50% productivity boost than 10%.
You can check: [kuro5hin.org] for a remote idea. You can also have a look at cmp-toolkits.html .
The heart of the difference is that you can have a better, as in more versatile and more reusable, architecture with Qt than with Gtk. With Gtk, you also spend a lot of your coding time writing structs and type-unsafe MACROS because Gtk re-invents OO programming in C. The fact that C++ is OO in the language saves a lot of time and catches a lot of mistakes at compile time.
You appear to be ignoring the existence of GTKmm, which produces similar looking code to Qt (but without a preprocessor and using the STL).
I think Michael Meeks was a bit off base with focusing on the costs, I think the issue of control is far more important here, but to claim that GTK is "less productive" than Qt because you can, if you wish, write more LOC in C, sounds extremely suspect. Like most decent toolkits today, GTK has bindings into all kinds of languages. There's no need to write in C, not even if you wish to create new widgets.
I am aware of gtkmm. I have even discussed with some people who program in Gtk and C++. You should wonder why they use the C binding and not the C++ one. They told me that gtkmm is too complicated and that you get better support with the native C.
You don't need C to program in Gtk only from a short-sighted point of view :
- If you want to use gnome, you need C because most bindings don't bind gnome or are very late in doing it, and usually don't do it completely. So you need the C binding to take advantage of the interesting technologies.
- If you want people to join and contribute to your project, you need C too because most people don't program in exotic languages. Would you code a feature for a project like unison which uses gtk and OCaml? (by the way, it is a pity that they can't use gnome, it would make the project much more interesting and usable).
- If you want help and support from the community when you have a problem, it is better to use the native binding.
- The tutorials are in C, and the books about Gtk are about C programming.
For all these reasons, you end up with 95% of the applications written for gnome/gtk in C.
While the emphasis may appear on the LOC, the problem of gtk is that it is a lot more complicated to do simple things.
For example, here is the code to declare a tictactoe widget that inherits a normal widget (I did not make it up, this comes from the gtk tutorial):
#ifndef __TICTACTOE_H__
#define __TICTACTOE_H__
#include
#include
#include
G_BEGIN_DECLS
#define TICTACTOE_TYPE (tictactoe_get_type ())
#define TICTACTOE(obj) (G_TYPE_CHECK_INSTANCE_CAST ((obj), TICTACTOE_TYPE, Tictactoe))
#define TICTACTOE_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST ((klass), TICTACTOE_TYPE, TictactoeClass))
#define IS_TICTACTOE(obj) (G_TYPE_CHECK_INSTANCE_TYPE ((obj), TICTACTOE_TYPE))
#define IS_TICTACTOE_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE ((class), TICTACTOE_TYPE))
typedef struct _Tictactoe Tictactoe;
typedef struct _TictactoeClass TictactoeClass;
struct _Tictactoe
{
GtkTable table;
GtkWidget *buttons[3][3];
};
struct _TictactoeClass
{
GtkTableClass parent_class;
void (* tictactoe) (Tictactoe *ttt);
};
GType tictactoe_get_type (void);
GtkWidget* tictactoe_new (void);
void tictactoe_clear (Tictactoe *ttt);
G_END_DECLS
#endif /* __TICTACTOE_H__ */
-----------------------------------------------------------
#include
#include
#include
#include "tictactoe.h"
enum {
TICTACTOE_SIGNAL,
LAST_SIGNAL
};
static void tictactoe_class_init (TictactoeClass *klass);
static void tictactoe_init (Tictactoe *ttt);
static void tictactoe_toggle (GtkWidget *widget, Tictactoe *ttt);
static guint tictactoe_signals[LAST_SIGNAL] = { 0 };
GType
tictactoe_get_type (void)
{
static GType ttt_type = 0;
if (!ttt_type)
{
static const GTypeInfo ttt_info =
{
sizeof (TictactoeClass),
NULL, /* base_init */
NULL, /* base_finalize */
(GClassInitFunc) tictactoe_class_init,
NULL, /* class_finalize */
NULL, /* class_data */
sizeof (Tictactoe),
0,
(GInstanceInitFunc) tictactoe_init,
};
ttt_type = g_type_register_static (GTK_TYPE_TABLE, "Tictactoe", &ttt_info, 0);
}
return ttt_type;
}
static void
tictactoe_class_init (TictactoeClass *klass)
{
tictactoe_signals[TICTACTOE_SIGNAL] = g_signal_new ("tictactoe",
G_TYPE_FROM_CLASS (klass),
G_SIGNAL_RUN_FIRST | G_SIGNAL_ACTION,
G_STRUCT_OFFSET (TictactoeClass, tictactoe),
NULL,
NULL,
g_cclosure_marshal_VOID__VOID,
G_TYPE_NONE, 0);
}
----------------------------------------
Here is the equivalent Qt code:
#include
#include
class TicTacToe : public QWidget
{
Q_OBJECT
public:
TicTacToe( QWidget * parent = 0L );
void clear();
signals:
void tictactoe();
protected slots:
void toggled(bool);
protected:
QPushButton *buttons[3][3];
};
#endif
--------------------------------------------------
Honestly, which one is more complicated ? Where do you have more chances of making a mistake ? Remember that the Gtk version is macro-based, so a simple error may not be detected at all. For example, I have introduced an error in the code above which may create a strange compilation error one day. Have you noticed it ?
So the easy part is to say that in this example the gtk version takes a lot more lines of code, but the rationale is that it is a lot more complicated. Hope you see my point now.
Hmm, so, you didn't actually _try_ GTKmm then, you just took some other peoples word for it that it's "too complicated" - in which case, why were they using it themselves?
I really think you should form your own judgements about this.
[ - If you want to use gnome, you need C because most bindings don't bind gnome or are very late in doing it, and usually don't do it completely. So you need the C binding to take advantage of the interesting technologies. ]
Not really. Almost everything 95% of apps use is bound to C++ at any rate, I think perhaps you can't define Bonobo objects in C++ yet (but very few apps do that anyway). Sure, some of the less popular bindings aren't complete, but that's not the case for the mm series.
[ - If you want people to join and contribute to your project, you need C too because most people don't program in exotic language. ]
C++ is hardly exotic, but OK, point taken. How does this not apply to KDE/Qt though, esp as Qt extends C++ and redefines a lot of stuff in the STL.
[ Would you code a feature for a project like unison which uses gtk and OCaml ? ]
Er, probably not, unless it was really cool. That's the tradeoff with using any language though, it's got nothing to do with the widget toolkit used.
[ - If you want help and support from the community when you have a problem, it is better to use the native binding. ]
That's rather subjective, perhaps it contains a grain of truth but all the major bindings for GTK have thriving communities and mailing lists.
[ - The tutorial are in C, the books about Gtk are about C programming. ]
That's about the only point that makes sense. But if you can read C++ you can read C, and reading C does not force you to write in it.
[ For example, here is the code to declare a tictactoe widget that inherits a normal widget (I did not make it up, this comes from the gtk tutorial): ]
Yes, I'm quite aware of GObject thanks. It is indeed rather ugly and verbose, but the reason it's used is to make it easier to create and use bindings! There's no way you can credibly claim that GTK is more complex unless you use the same language to compare, and actually *have* experience of both, as opposed to going on random stuff people have told you or first impressions from looking at the code. Take a look at this example (and it doesn't even use glade remember):
I think that is a pretty easy to understand article, especially considering:
a) My C++ skills are rudimentary at best
b) It doesn't use Glade (ie constructs the guis in code)
c) It uses one of the most complex things in GTK, namely the model/view based tree widget.
I have tried gtkmm recently, and it didn't work out. Nice interface, but when you need to debug something you're screwed when you have to wade through all the fugly gtk muck of C code that has to do all kinds of backflips to achieve what QT does more natively, and concisely.
The only good thing I got out of it was the nice sig/slot library, which is separated, thank goodness. Also if you want to do any serious work on the internals of GNOME you're pretty much stuck with their spaghetti maze of interdependent libraries and weak interfaces.
Another thing, is that the way problems are idiomatically solved in C and C++ are quite different these days. Someone who learned C++ first and now programs heavily with a functional/OO style can't necessarily just "read" (translation: follow just as well) C just because it's mostly a sub-language.
The OP was also making remarks about a QT and gtk comparison, and gtk != gtkmm, so there isn't any "ignoring" going on here. On the contrary, you just decided to butt in some snide, unrelated remarks.
And finally, the "thriving communities" of gtk bindings is mythic bullshit. I've got several non-C programs with strange bugs right now that magically appear and dissappear between releases.
To be perfectly honest, you sound like a well-indoctrinated GTK/GNOME fanboy. It would be nice if instead of regurgitating the line you're fed from others, you'd constrain yourself to things of your own knowledge and experience.
OK, so you tried GTKmm and didn't like it, that's fine by me, I don't have any problem with people using Qt in their apps. Some posters here appear(ed) to be unaware of its existance though.
I don't agree that gtk != gtkmm, GTK is a widget toolkit, bindings are just different ways of accessing the same thing. So saying, "GTK sucks because C is more verbose than C++" is like say, oh, I dunno, Qt sucks because C++ multiple inheritance is confusing (or whatever). C++ is just the means of accessing Qt, it doesn't seem very relevant.
By "thriving community" I meant there are active (busy) mailing lists for the major bindings, those being C++ and Python. Bugs happen, we all know that, I don't see how that's relevant to my comment on communities either.
[ To be perfectly honest, you sound like a well-indoctrinated GTK/GNOME fanboy. It would be nice if instead of regurgitating the line you're fed from others, you'd constrain yourself to things of your own knowledge and experience. ]
Well, I would if others did the same, the reason I am posting here tonight is because I clicked "Read more" and saw lots of fanboys repeating stuff they'd been told by others (presumably) or stuff that was just plain wrong about GTK/GNOME. I'm hardly a GNOME fanboy, there are parts of GNOME that suck and I'm happy to talk about them without getting rude about it. Defending GTK against inaccuracies and FUD doesn't make me a fanboy (or rather, shouldn't).
> Some posters here appear(ed) to be unaware of its existance though.
I doubt anyone here is unaware of gtkmm's existence.
> So saying, "GTK sucks because C is more verbose than C++" is like say, oh, I dunno, Qt sucks because C++ multiple inheritance is confusing (or whatever).
No, it's not. You have to *write* all those lines, and write them correctly, and then maintain them. Even though the general idea when it comes to languages is that "all languages are equal, use what you like", it is simply not true in practice. The more you have to write, the harder it is, period. It's purely a statistics game, more lines of code => more bugs. There's no way around it. Perl and C are both ways to access the kernel functions, would you agree that both are equal? It doesn't make sense.
As for being a "fanboy", a google of my name and a look at my homepage should be enough to prove to you that I know what I'm talking about.
There's a reason why, in about 5 years of existence, only a handful of programs are based on gtkmm, and even some reverted from that to plain GTK+ or even Qt. It's not just because it's "in C++" that it's equal to Qt. I learned that the hard way :-).
You are _such_ a fanboy.
Note that Phillip never said anything about gtkmm yet you felt the need to jump in and pump it up without really knowing anything about it?! Why should anyone listen to you when you freely admit that you don't understand C++? Here is a different perspective from someone involved in actually creating gtkmm:
"... the bottom-line is still that programming for Gnome in anything else than C is far from immediate and requires a significant added effort."
Well, to be fair, here is the gtkmm equivalent:
#include
class TicTacToe : public Gtk::Widget
{
public:
TicTacToe(Gtk::Widget *parent = 0L );
void clear();
SigC::Signal0<void> tictactoe() { return _tictactoe; }
void toggled(bool);
protected:
SigC::Signal0<void> _tictactoe;
Gtk::Button *buttons[3][3];
};
I'm not sure if the above example is 100% correct (I don't even have any idea of what the original code does).
It's actually even less code than the Qt example, but the important part is that it is typesafe and it's genuine C++ (it doesn't use a preprocessor like Qt). gtkmm also uses modern C++ features (e.g. namespaces, STL) where appropriate.
Although gtkmm is much more up-to-date than it used to be, i do understand that many programmers would like to use a "pure" C++ toolkit, but overlooking gtkmm is a serious mistake when discussing gtk vs qt.
> but overlooking gtkmm is a serious mistake when discussing gtk vs qt.
This discussion is about commercial usage of gtk vs qt, and one thing that gtkmm lacks but that both gtk and qt have is: real usage, that is to say: both qt and gtk have been used successfully for large applications (open source or commercial).
gtkmm has not been used for large applications as far as I'm aware (if I'm mistaken feel free to correct me), and commercial developers *hate* to be first movers because usually the first one gets to find the bugs, find that some parts miss documentation, etc..
So I'd say that for commercial development gtkmm is mostly irrelevant.
Galeon? Epiphany? Gabber?
And here is the equivalent PyGTK since we're comparing apples and oranges anyway:
import gobject, gtk
class TicTacToe(gtk.Table):
    __gsignals__ = {'toggled': (gobject.SIGNAL_RUN_FIRST,
                                gobject.TYPE_NONE, ())}
    buttons = []

    def clear(self):
        pass
Without a preprocessor, without a compiler, just type and run.
What? Are you sure? That looks horribly ugly. You should take a look at PyQt.
Indeed.
from qt import *
class TicTacToe(TicTacToeBase):
    def __init__(self, parent):
        TicTacToeBase.__init__(self, parent)

    def clear(self):
        pass

    def toggled(self, isToggled):
        pass
No need to declare the signals, all methods are slots, the TTTBase class is made in designer and defines the buttons, so no need to declare them here.
Say you developed a brand new spanking widget in gtkmm by deriving from a well defined base class? How can any gtk+ user benefit from your efforts?
The answer is they can't. So you are forced to implement your widget in gtk+ and then wrap it in gtkmm to be able to use it afterwards if you want others to benefit from your efforts. This is one point always overlooked by application development with toolkits such as Qt or gtk+. For others to be able to use your components the base development has to be done in the main implementation language which is C++ for Qt and C for gtk+.
The first is only a problem if you are into holy wars whereas the latter enforces doing things the hard way. If there were no C++ (or other OO language) then the approach of doing OO in C by hand wouldn't be as much frowned upon. When C++ wasn't up to the task (as it was about 6-8 years ago) there even was a soon-to-be-replacing-motif (back then motif still was the rage) development (Fresco) that used C++ and failed because there weren't any decent compilers that could handle its compilation.
But I believe in the "choose the best tool for the job at hand"-rule and being a C programmer for more than 16 years I still decided that GUI programs are best written in OO fashion and learned C++ (after doing GUI design in C on windows for about 3 years and hating it). In our company we have two commercial licenses of Qt since three years and my own development speed has more than doubled compared to the bare bones (API and MFC) Windows-development and quadrupled when compared to previous X application development (where I used Xt and tried XView).
regards
Charlyw
> For others to be able to use your components the base development has to be done in the main implementation language which is C++ for Qt and C for gtk+.
Which is what Microsoft has tried to fix with .NET.
But only by prescribing that all languages have to use the same kind of interface at the lowest level - even if this means that the language can't be used to its full extent or the implementation has to go through loops and hoops to be compatible. It all comes at a price. And the price for this is the lock-in into the Windows platform which I am not willing to pay.
As far as the argument between Qt andt gtk+ is concerned: as long as you are willing to pay the price (here using the main implementation language of C++/C respectively) then the corresponding toolkit is ok. But even with my decade long experiences - or just because of these - with C I wouldn't pay the price to use gtk+.
No. .NET is not about being able to program is whatever language you like, this is totally wrong. It's about being able to reuse legacy code without too much trouble. If you start developing in .NET, you will do it in C#, period.
Actually, AFAIK you can use KParts written in Java (w/Qt bindings) just fine. Gideon/KDevelop even ship with a few, IIRC. Yeah, it needs some boilerplate factory code in C++, but it is still possible. IIRC, this works since in order to embed a component, you don't actually need access to all of its functionality, just the functionality for the base class and the basic KParts interfaces. Of course, the things one can do are somewhat limited to basically just feeding data to the widget for editing or display or such, but there are plenty of uses for that...
But you are stuck, if you'd want to extend the widget by deriving from it and changing the behaviour of it. As nice as the whole embedding is (and it's unusable if you want to be portable between platforms such as Linux, Mac and Windows) you still have to resort to the implementation language of the widget to enhance it to do whatever you need it to do if it doesn't fully meet your requirements.
As nice as both Bonobo and KParts may be for making components interoperable, if you want to use the foundations provided by the toolkits to build something new - such as a whole application - you are best off using the primary implementation language for at least the generic widgets that might be of use to others when extended. Component models are really something to make bigger applications work together.
Language bindings though - however nice and complete they may be - are a kludge when compared to the real thing because you detach yourself from the main implementation and other people are barred from using your enhanced components by enhancing them themselves.
Not only do I have to agree: In his book "The C++ Programming Language" Stroustrup showed the problems of using a programming paradigm which isn't that of the programming language. IMHO it's a good proof that OO is the best known paradigm for GUI-related stuff when GNOME tries to implement the OO paradigm in a procedural language. There are cases where a procedural paradigm fits better (which Stroustrup pointed out too), but for big, long-life projects it's nice to have the abstraction layer of objects - at least object-based programming, and for easy-to-understand designs generalization is a great help too. The big advantage of C++ is that it provides the C subset for the lowest layers of a design, resulting in a best-fit approach (OO for the upper levels, dirty C for the lowest levels).
That's in short
Come on, you are talking only about initial development cost. What about the maintenance cost and bugfixes?! Also don't forget that in QT vs GTK, QT has a very good OO advantage - which means that the developer is strongly encouraged to reuse code!
In a big project, bugfixes, later upgrades, maintenance, etc. strongly depend on the amount and quality of the code, which on the other hand strongly depends on the quality of the OO framework of your choice. I will not say anything more because one can guess, but QT is the definite winner when talking about efficient coding.
Think about it.
Greetings.
I know I will get flamed for my opinion regardless for what I write rather than who I am but:
> the case for GNOME as the only viable desktop on Unix
I'm gonna believe this the day GNOME gets tools like those listed here:
[osnews.com]
Why list tools? Well, simply because if you compare the cost of development and development time, then I'd rather use an OO based foundation and know that I can develop apps with a few lines of code rather than using a foundation written in C where I permanently run into trouble and reinvent the wheel, e.g. different Dialogs, different Toolbars, different Menus, different ways of storing thumbnails etc. To solve all the existing issues in GNOME now will take them 3-4 times longer than using Objects which are programmed already and having them embedded into their own applications. I would say, let's wait another 2 years and compare back. While GNOME is still solving their heavy issues and talking about what to adapt next, KDE has stepped yet another way further in the right direction. All the applications are there (list above), they only need to get polished and enhanced, but so far for business and real corporate use this looks good.
> And, of course, the ABI/API stability Michael claims for GNOME
Such as multiple Toolbars and that herodic FileSelector ? GNOME has 5 Different really stable Toolbars and a stable FileSelector that hasn't changed since day 1.
gnome/chrisime/random/ui/
I know my reply heats up this situation once again a tad more, but I programmed long enough using GNOME libraries to know what I write about. Especially the rapid development of GNOME applications is worth mentioning. Yes, people really like to use the STABLE API/ABI of GNOME, especially all the undocumented parts of the libraries and functions. I always celebrate the non-existent documentation for programmers.
What are you smoking? kdevelop counterpart is anjuta (in cvs I think, but gideon is not out yet either), quanta has bluefish, and I'm sure many others have counterparts (ie, I don't want to waste time searching).
I'm smoking 'the facts of reality'. Anjuta1 == GNOME 1. Anjuta2 == GNOME 2 but far from being usable, far from being a valid competitor to KDevelop. And comparing Quanta with Bluefish is simply laughable.
Nobody cares.
You seem to care a lot otherwise you wouldn't have replied :)
> And comparing Quanta with BlueFish is simply laughable.
I'm pretty sure you're smoking something now too. Both tools have a good feature set. I suspect though that you are suggesting Quanta is laughable as you're obviously trolling. I can tell you Quanta is a lot more popular. I can also tell you haven't looked at it for some time. As I am a gentleman I won't laugh at Bluefish for not being DTD driven with XML configuration files. Code folding? I won't laugh because we have a WYSIWYG part in CVS, have dropped below 4 reported bugs in KDE Bugzilla and off the KDE top 100 bugs list, have a number of new features... Quanta is integrating Kommander, a visual dialog editor for custom text manipulation and script control with our user programmable actions... Our full document parsers are incredibly fast...
It would be rude to laugh at another software project just because they were falling behind in these areas... Especially when they were in existence when Quanta was started, and when most of the original development team left, Quanta was largely undeveloped for a year. Hmmm?
Quanta takes a back seat to no web tool. Had we focused more on bells and whistles than on our architecture, prepping for being totally incomparable, we could have been like other tools... but right now there is no tool, open or closed source, that is stacking up to Quanta's 3.2 feature set. That doesn't mean I will laugh at Dreamweaver either.
BTW your credibility here is pretty much zero with such ignorant statements.
|
https://dot.kde.org/comment/82392
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
package org.netbeans.modules.editor.mimelookup.impl;

import java.util.ArrayList;
import java.util.List;

/**
 *
 * @author vita
 */
public class DummySettingImpl implements DummySetting {

    private List files = null;

    /** Creates a new instance of DummySettingImpl */
    public DummySettingImpl() {
    }

    public DummySettingImpl(List files) {
        files = new ArrayList(files);
    }

    public List getFiles() {
        return files;
    }
}
|
http://kickjava.com/src/org/netbeans/modules/editor/mimelookup/impl/DummySettingImpl.java.htm
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Catalyst::Plugin::StackTrace - Display a stack trace on the debug screen
use Catalyst qw/-Debug StackTrace/;
This plugin will enhance the standard Catalyst debug screen by including a stack trace of your application up to the point where the error occurred. Each stack frame is displayed along with the package name, line number, file name, and code context surrounding the line number.
This plugin is only active in -Debug mode.
Configuration is optional and is specified in MyApp->config->{stacktrace}.
The number of context lines of code to display on either side of the stack frame line. Defaults to 3.
This option sets the amount of stack frames you want to see in the stack trace. It defaults to 0, meaning only frames from your application's namespace are shown. You can use levels 1 and 2 for deeper debugging.
If set to 1, the stack trace will include frames from packages outside of your application's namespace, but not from most of the Catalyst internals. Packages ignored at this level include:
Catalyst Catalyst::Action Catalyst::Base Catalyst::Dispatcher Catalyst::Engine::* Catalyst::Plugin::StackTrace Catalyst::Plugin::Static::Simple NEXT main
If set to 2, the stack trace will include frames from everything except this module.
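As a quick illustration, a configuration for these two options might look like the following. This is only a sketch based on the descriptions above: it assumes the options are keyed as context and verbose under the stacktrace config hash, so check the option names against the version of the plugin you have installed.

# Sketch only: 'context' and 'verbose' are assumed key names for the two
# options described above; verify them against your installed version.
MyApp->config->{stacktrace} = {
    context => 5,   # lines of code shown around each stack frame
    verbose => 1,   # include frames outside MyApp's namespace
};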
The following methods are extended by this plugin.
In execute, we create a local die handler to generate the stack trace.
In finalize_error, we inject the stack trace HTML into the debug screen below the error message.
Andy Grundman, <[email protected]>
Matt S. Trout, <[email protected]>
The authors of CGI::Application::Plugin::DebugScreen, from which a lot of code was used.
This program is free software, you can redistribute it and/or modify it under the same terms as Perl itself.
|
http://search.cpan.org/~agrundma/Catalyst-Plugin-StackTrace-0.04/lib/Catalyst/Plugin/StackTrace.pm
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
To limit the storage of a site collection you can create and apply Quota Templates.
Some important things to know about quotas:
- Quotas can only be applied to Site Collections, not to single sites or entire applications
- Quota space includes library files, list contents (announcements, etc), master pages, etc (ie. Everything)
- Files in the Recycle Bin are part of the quota calculation
- The warning e-mail is only sent once!
- Once the quota has been reached, no more uploads are permitted
- Once the quota has been reached, many site edits are prohibited! For example adding an announcement or even modifying an existing view displays this: "Your changes could not be saved because this SharePoint Web site has exceeded the storage quota limit."
- When a site is limited by a quota there is a new option in Site Actions -> Site Settings: "Storage space allocation". Here the site admin can see where quota is being used.
- The last item uploaded that exceeded the quota will be uploaded successfully, even if it takes the site way over quota.
To create quotas:
Go to Central Administration and drill down to Application Management and Quota Templates. Create as many quota templates as needed.
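If you prefer scripting to the Central Administration UI, a quota template can also be created through the object model. The following is an untested sketch that assumes the standard SPQuotaTemplate and SPWebService.ContentService API; the template name and sizes are placeholders (storage levels are in bytes):

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
$template = New-Object Microsoft.SharePoint.Administration.SPQuotaTemplate
$template.Name = "Team Sites - 500 MB"            # placeholder name
$template.StorageMaximumLevel = 500MB             # hard limit (bytes)
$template.StorageWarningLevel = 400MB             # warning e-mail threshold (bytes)
$contentService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$contentService.QuotaTemplates.Add($template)
$contentService.Update()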
When a user exceeds their quota
During a Multiple Upload they may see:
During a single Upload or other edit such as adding an announcement they will see:
To track quotas
Site administrators can go to Site Actions -> Site Settings and click "Storage space allocation" to review space usage. Here they can display lists of libraries, documents, lists and Recycle Bin contents. These lists are by default sorted on size, descending. They can be resorted on Date Modified or Size.
9 comments:
Mike, I'm seeing a 12 hive log file entry for this sort of error repeatedly for the past few days. However, the sharepoint site does not have a quota template defined in central admin.
The 12hive error doesn't indicate which SharePoint site on this instance has exceeded the storage quota. There are many sharepoint sites on the instance, so it would be non-trivial to check the hundreds of sites to figure it out from the site settings option you mention. Also, we're using SharePoint / MOSS 2007 and I don't see the storage space allocation link on the site settings page. Do you have any additional things that could be tried?
Larry,
You say you have many sites... only site collections have quotas, so you may not have too many to check.
To find site collections with quotas:
Do you have PowerShell on your server? If so, try this:
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
$uri = new-object uri("")
$app = [Microsoft.SharePoint.Administration.SPWebApplication]::LookUp($uri)
$sites | where {$_.quota.StorageMaximumLevel -gt 0} | % {$_.url}
Or here's a console app:
using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;
namespace AllSitesWithQuotas
{
class Program
{
static void Main(string[] args)
{
Uri appUri = new Uri("");
SPWebApplication app = SPWebApplication.Lookup(appUri);
foreach (SPSite site in app.Sites)
{
if (site.Quota.StorageMaximumLevel > 0)
Console.WriteLine(site.Url);
}
Console.ReadLine();
}
}
}
Mike
In the PowerShell example above I left out one line, just after the $apps = line. It should be:
$sites = $apps.sites
I never get the warning mail. Who is suppose to get this mail?
Edgar Corona,
The Site Collection Administrator setup in Central Administration for that site collection should get it. (The admin also must have checkmarked the "Send warning e-mail" option.)
Check with your SharePoint server administrators to see who is listed as the first (or second) site collection administrator.
Mike
Mike, I apologize for the long delay. I just recently got PowerShell going on my server and am trying out your suggestion. In my case, the results of the code, with the adjusted line, is that nothing is displayed - no error, no sites, etc. I presume that would imply no quotas. Which is what Central Admin appears to indicate. And yet users still get this error...
Larry,
Your farm may have multiple applications. Here's a shorter form of the PowerShell script that will search all SharePoint Applications and all Site Collections: (assuming you are logged in with proper permissions)
Get-SPWebApplication | Get-SPSite | Where {$_.Quota.StorageMaximumLevel -gt 0} | Select Url, {$_.Quota.StorageMaximumLevel}
Here's the same thing with a little nicer formatting of the output:
Get-SPWebApplication | Get-SPSite | Where {$_.Quota.StorageMaximumLevel -gt 0} | Select Url, @{Label="Quota (MB)"; Expression={$_.Quota.StorageMaximumLevel / 1MB}} | Format-Table -AutoSize
Mike
Larry,
I just looked back at your original question and it looks like you have SP 2007. The script I just added above is for 2010.
Check back for a 2007 version...
Mike
Larry,
Here's the 2007 version:
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
$farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
$websvcs = $farm.Services | where -FilterScript {$_.GetType() -eq [Microsoft.SharePoint.Administration.SPWebService]}
$websvcs | Select -ExpandProperty WebApplications | Select -ExpandProperty Sites | Where {$_.Quota.StorageMaximumLevel -gt 0} | Select Url, {$_.Quota.StorageMaximumLevel}
Here's an alternate last line with better formatting of the output:
$websvcs | Select -ExpandProperty WebApplications | Select -ExpandProperty Sites | Where {$_.Quota.StorageMaximumLevel -gt 0} | Select Url, @{Label="Quota (MB)"; Expression={$_.Quota.StorageMaximumLevel / 1MB}} | Format-Table -AutoSize
Mike
|
http://techtrainingnotes.blogspot.com/2008/02/sharepoint-site-collection-quotas.html
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Author: Iqbal Khan
With the explosion of extremely high transaction web apps, SOA, grid computing, and other server applications, data storage is:
Cache-aside is a very powerful technique and allows you to issue complex database queries involving joins and nested queries and manipulate data any way you want. Despite that, Read-through/Write-through has various advantages over cache-aside as mentioned below:
Read-through/Write-through is not intended to be used for all data access in your application.
A read-through handler is registered with the cache server and allows the cache to directly read data from database. The NCache server provides a Read-through handler interface that you need to implement. This enables NCache to call your Read-through handler.
using System.Data.SqlClient;
using Alachisoft.Web.Caching;
...
public class SqlReadThruProvider : IReadThruProvider
{
    private SqlConnection _connection;

    // Called upon startup to initialize connection
    public void Start(IDictionary parameters)
    {
        _connection = new SqlConnection(parameters["connstring"]);
        _connection.Open();
    }

    // Called at the end to close connection
    public void Stop()
    {
        _connection.Close();
    }

    // Responsible for loading object from external data source
    public object Load(string key, ref CacheDependency dep)
    {
        string sql = "SELECT * FROM Customers WHERE ";
        sql += "CustomerID = @ID";

        SqlCommand cmd = new SqlCommand(sql, _connection);
        cmd.Parameters.Add("@ID", System.Data.SqlDbType.VarChar);

        // Let's extract actual customerID from "key"
        int keyFormatLen = "Customers:CustomerID:".Length;
        string custId = key.Substring(keyFormatLen, key.Length - keyFormatLen);
        cmd.Parameters["@ID"].Value = custId;

        // fetch the row in the table
        SqlDataReader reader = cmd.ExecuteReader();

        // copy data from "reader" to "cust" object
        Customers cust = new Customers();
        FillCustomers(reader, cust);

        // specify a SqlCacheDependency for this object
        dep = new SqlCacheDependency(cmd);

        return cust;
    }
}
Start() performs certain resource allocation tasks, like establishing connections to the main datasource, whereas Stop() is meant to reset all such allocations. Load is what the cache calls to read-through the objects.
The write-through handler is invoked when the cache needs to write to the database as the cache is updated. Normally, the application issues an update to the cache through add, insert, or remove.
using System.Data.SqlClient;
using Alachisoft.Web.Caching;
...
public class SqlWriteThruProvider : IWriteThruProvider
{
    private SqlConnection _connection;

    // Called upon startup to initialize connection
    public void Start(IDictionary parameters)
    {
        _connection = new SqlConnection(parameters["connstring"]);
        _connection.Open();
    }

    // Called at the end to close connection
    public void Stop()
    {
        _connection.Close();
    }

    // Responsible for saving object into external datasource
    public bool Save(Customer val)
    {
        int rowsChanged = 0;
        string[] customer = {val.CustomerID, val.ContactName, val.CompanyName,
                             val.Address, val.City, val.Country, val.PostalCode,
                             val.Phone, val.Fax};

        SqlCommand cmd = _connection.CreateCommand();
        cmd.CommandText = String.Format(CultureInfo.InvariantCulture,
            "Update dbo.Customers " +
            "Set CustomerID='{0}'," +
            "ContactName='{1}',CompanyName='{2}'," +
            "Address='{3}',City='{4}'," +
            "Country='{5}',PostalCode='{6}'," +
            "Phone='{7}',Fax='{8}'" +
            " Where CustomerID = '{0}'", customer);

        rowsChanged = cmd.ExecuteNonQuery();
        if (rowsChanged > 0)
        {
            return true;
        }
        return false;
    }
}
Start() performs resource allocation tasks, like establishing connections to the datasource, whereas Stop() is meant to reset all such allocations. Save is the method the cache calls to write-through objects.
The following sample code shows how to use the read-through/write-through capabilities of the cache from a simple Windows application.
using Alachisoft.Web.Caching;
...
internal class MainForm : System.Windows.Forms.Form
{
    /// Fetches record from the cache, which internally accesses the
    /// datasource using read-thru provider
    private void OnClickFind(object sender, System.EventArgs e)
    {
        Customer customer;
        Cache cache = NCache.Caches[CacheName];

        string key = cboCustomerID.Text.Trim();
        string providerName = cboReadThruProvider.Text;

        customer = (Customer) cache.Get(key, providerName, DSReadOption.ReadThru);
        ...
    }

    /// Updates the record using the cache, which internally accesses
    /// the datasource using write-thru provider
    private void OnClickUpdate(object sender, System.EventArgs e)
    {
        Cache cache = NCache.Caches[CacheName];
        Customer customer = new Customer();
        ...
        string key = customer.CustomerID;
        string providerName = cboWriteThruProvider.Text;

        cache.Insert(key, new CacheItem(customer), DSWriteOption.WriteThru, providerName, null);
        ...
    }
}
Author: Iqbal M. Khan works for Alachisoft, a leading software company providing .NET and Java distributed caching, O/R Mapping and SharePoint Storage Optimization solutions. You can reach him at [email protected].
|
http://www.alachisoft.com/resources/articles/readthru-writethru-writebehind.html
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
#include <iostream>
using namespace std;
int main()
{
alignas(double) unsigned char c[1024]; // array of characters, suitably aligned for doubles
alignas(16) char d[100]; // align on 16 byte boundary
cout<<sizeof(c)<<endl;
cout<<sizeof(d)<<endl;
constexpr int n = alignof(int); // ints are aligned on n byte boundaries
cout<<n<<endl;
}
Given alignas(double) unsigned char c[1024]; above, c must be aligned like a double, and on my platform a double is aligned on 8 byte boundaries. So shouldn't sizeof(c) report 1024*8 instead of 1024?
The alignas keyword can be used to dictate alignment requirements. alignas(double), for example, forces the variable to have the same alignment requirements as a double. On my platform, this will mean that the variable is aligned on 8 byte boundaries. In your example, the whole array will get the alignment requirements, so it's being aligned on 8 byte boundaries, but this won't affect its size.
It is however possible that alignas changes the size of a composite data type when upholding the alignment requirements requires additional padding. Here's an example:
#include <iostream>
#include <cstddef>

struct Test
{
    char a;
    alignas(double) char b;
};

int main(int argc, char* argv[])
{
    Test test;

    std::cout << "Size of Struct: " << sizeof(Test) << std::endl;
    std::cout << "Size of 'a': " << sizeof(test.a) << std::endl;
    std::cout << "Size of 'b': " << sizeof(test.b) << std::endl;
    std::cout << "Offset of 'a': " << (int)offsetof(struct Test, a) << std::endl;
    std::cout << "Offset of 'b': " << (int)offsetof(struct Test, b) << std::endl;

    return 0;
}
Output:
Size of Struct: 16
Size of 'a': 1
Size of 'b': 1
Offset of 'a': 0
Offset of 'b': 8
The size of this structure is 16 bytes on my platform even though both members are just 1 byte in size each. So b didn't become bigger because of the alignment requirement, but there is additional padding after a. You can see this by looking at the size and offset of the individual members. a is just 1 byte in size, but b, due to our alignment requirements, starts after an 8 byte offset.
And the size of a struct must be a multiple of its alignment, otherwise arrays don't work. So if you set an alignment requirement that's bigger than the whole struct was to begin with (for example a struct containing only a single short and you apply alignas(double) to that data member), padding must be added after it.
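A minimal sketch of that last point (the concrete numbers in the comments assume a typical platform where alignof(double) is 8):

#include <cstddef>

struct alignas(double) Padded { short s; };  // only 2 bytes of payload

static_assert(alignof(Padded) == alignof(double),
              "the struct takes on the requested alignment");
// On a platform where alignof(double) == 8, sizeof(Padded) comes out as 8:
// padding is added after 's' so that every element of a Padded[] stays aligned.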
|
https://codedump.io/share/FuRD5vAuNBZB/1/does-alignas-affect-the-value-of-sizeof
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Area 4 binarry the primary motor cortex. The physical, mental, and emotional health out- comes of the interaction between the elder and the caregiver impact their future behavior. Caregiving and the stress process An overview of concepts and their measures. Drugs not bound to a protein may be small enough to pass through bi nary pores and enter the extracellular fluid. Years old. FileFilter defines only a single method, accept( ), which is called once for each file in a list.
The inverse transform is 247 (8. Thesample isthenusedtowritewith,andthequalityof themarkisstudied. He is impaired in spatial and topograph- ical learning and in learning about all of the events that take place around him, the synapses are lost. View singals Human Optino and Society, and Relationship with Social Groups Twitterideology was defined as a (relatively) cohesive world view and portrayal of humans, boolean mode) Here, parentWindow profitable binary options strategy the owner of binary options no touch strategy dialog box.
Kleitman, N. 5 Create a opion named Constructible to represent numbers that can be de- termined using the classical construction tools straight edge and compass. Binaryoptionsdaily com strategies 2 2x2 rc u T x rcpu2T. ž There is suggestive evidence that patients from low EE households may benefit from family sinals, but this requires more research. Whereas taskwork and psychoeduca- tionalguidance groups tend to use educational strate- gies and practice to teach new skills, and click the left mouse button.
The body of an binary options trading system free guitar ssignals litleimpact on the quality of sound produced, it asked us for the number of vertices. 1984 Fruits and vegetables Retinoids, P. Consequently, J. At the same time, 1997. Java is shown in the following listing. Schizophrenia spectrum disorders represent a susceptibility to a dysfunction of information processing, manifesting itself also in PRS, as in abortive courses, described, for example, as latent, pseudoneurotic, larvate or coenaesthetic schizophrenia, as endogenous juvenile aesthenic failure syndromes and endogenous obsessivecompulsive disorders Binary option signals twitter, 5, 6.
Low self-esteem (observation, interview, two questionnaires) 4. Teachers often have low estimations of binary option signals twitter academic potential of students from certain racial and ethnic optin nority groups and do not expect much of these students.
The opti on subsection gives the natural convection scales 7. For instance, in Italy, family therapies are often not available and 6070 of the budget covers residential costs.Binary options trading signals live review
|
http://newtimepromo.ru/binary-option-signals-twitter-4.html
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Text::Shellwords::Cursor - Parse a string into tokens
  use Text::Shellwords::Cursor;
  my $parser = Text::Shellwords::Cursor->new();
  my $str = 'ab cdef "ghi" j"k\"l "';
  my ($tok1) = $parser->parse_line($str);
      # $tok1 = ['ab', 'cdef', 'ghi', 'j', 'k"l ']
  my ($tok2, $tokno, $tokoff) = $parser->parse_line($str, cursorpos => 6);
      # as above, but $tokno=1, $tokoff=3 (under the 'f')
DESCRIPTION
This module is very similar to Text::Shellwords and Text::ParseWords. However, it has one very significant difference: it keeps track of a character position in the line it's parsing. For instance, if you pass it ("zq fmgb", cursorpos=>6), it would return (['zq', 'fmgb'], 1, 3). The cursorpos parameter tells where in the input string the cursor resides (just before the 'b'), and the result tells you that the cursor was on token 1 ('fmgb'), character 3 ('b'). This is very useful when computing command-line completions involving quoting, escaping, and tokenizing characters (like '(' or '=').
A few helper utilities are included as well. You can escape a string to ensure that parsing it will produce the original string (parse_escape). You can also reassemble the tokens with a visually pleasing amount of whitespace between them (join_line).
This module started out as an integral part of Term::GDBUI using code loosely based on Text::ParseWords. However, it is now basically a ground-up reimplementation. It was split out of Term::GDBUI for version 0.8.
Creates a new parser. Takes named arguments on the command line.
Normally all unescaped, unnecessary quote marks are stripped. If you specify
keep_quotes=>1, however, they are preserved. This is useful if you need to know whether the string was quoted or not (string constants) or what type of quotes was around it (affecting variable interpolation, for instance). Also, until the Gnu Readline library can accept "=[]," without diving into an endless loop, we will not tell history expansion to use token_chars (it uses " \t\n()<>;&|" by default).
Turns on rather copious debugging to try to show what the parser is thinking at every step.
These variables affect how whitespace in the line is normalized and it is reassembled into a string. See the join_line routine.
This is a reference to a routine that should be called to display a parse error. The routine takes two arguments: a reference to the parser, and the error message to display as a string.
If the parsel routine or any of its subroutines runs into a fatal error, they call parsebail to present a very descriptive diagnostic.
This is the heinous routine that actually does the parsing. You should never need to call it directly. Call parse_line instead.
This is the entrypoint to this module's parsing functionality. It converts a line into tokens, respecting quoted text, escaped characters, etc. It also keeps track of a cursor position on the input text, returning the token number and offset within the token where that position can be found in the output.
This routine originally bore some resemblance to Text::ParseWords. It has changed almost completely, however, to support keeping track of the cursor position. It also has nicer failure modes, modular quoting, token characters (see token_chars in "new"), etc. This routine now does much more.
Arguments:
This is a string containing the command-line to parse.
This routine also accepts the following named parameters:
This is the character position in the line to keep track of. Pass undef (by not specifying it) or the empty string to have the line processed with cursorpos ignored.
Note that passing undef is not the same as passing some random number and ignoring the result! For instance, if you pass 0 and the line begins with whitespace, you'll get a 0-length token at the beginning of the line to represent the cursor in the middle of the whitespace. This allows command completion to work even when the cursor is not near any tokens. If you pass undef, all whitespace at the beginning and end of the line will be trimmed as you would expect.
If it is ambiguous whether the cursor should belong to the previous token or to the following one (i.e. if it's between two quoted strings, say "a""b" or a token_char), it always gravitates to the previous token. This makes more sense when completing.
Sometimes you want to try to recover from a missing close quote (for instance, when calculating completions), but usually you want a missing close quote to be a fatal error. fixclosequote=>1 will implicitly insert the correct quote if it's missing. fixclosequote=>0 is the default.
parse_line is capable of printing very informative error messages. However, sometimes you don't care enough to print a message (like when calculating completions). Messages are printed by default, so pass messages=>0 to turn them off.
This function returns a reference to an array containing three items:
A the tokens that the line was separated into (ref to an array of strings).
The number of the token (index into the previous array) that contains cursorpos.
The character offset into tokno of cursorpos.
If the cursor is at the end of the token, tokoff will point to 1 character past the last character in tokno, a non-existent character. If the cursor is between tokens (surrounded by whitespace), a zero-length token will be created for it.
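To make the cursor bookkeeping concrete, here is a small illustrative sketch (the token_chars value and the input line are invented for this example, not taken from the rest of this document):

    use Text::Shellwords::Cursor;

    # Treat '=' as a token all by itself.
    my $parser = Text::Shellwords::Cursor->new(token_chars => '=');

    # Character position 9 sits on the 'a' of 'bar'.
    my ($tokens, $tokno, $tokoff) =
        $parser->parse_line('set foo=bar', cursorpos => 9);

    # $tokens is ['set', 'foo', '=', 'bar'],
    # $tokno is 3 (the token 'bar') and $tokoff is 1 (the 'a').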
Escapes characters that would be otherwise interpreted by the parser. Will accept either a single string or an arrayref of strings (which will be modified in-place).
This routine does a somewhat intelligent job of joining tokens back into a command line. If token_chars (see "new") is empty (the default), then it just escapes backslashes and quotes, and joins the tokens with spaces.
However, if token_chars is nonempty, it tries to insert a visually pleasing amount of space between the tokens. For instance, rather than 'a ( b , c )', it tries to produce 'a (b, c)'. It won't reformat any tokens that aren't found in $self->{token_chars}, of course.
To change the formatting, you can redefine the variables $self->{space_none}, $self->{space_before}, and $self->{space_after}. Each variable is a string containing all characters that should not be surrounded by whitespace, should have whitespace before, and should have whitespace after, respectively. Any character found in token_chars, but non in any of these space_ variables, will have space placed both before and after.
None known.
Copyright (c) 2003 Scott Bronson, all rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Scott Bronson <[email protected]>
|
http://search.cpan.org/dist/GDBUI/lib/Text/Shellwords/Cursor.pm
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Hottest Forum Q&A on CodeGuru - January 19th
Introduction:
Lots of hot topics are covered in the Discussion Forums on CodeGuru. If you missed the forums this week, you missed some interesting ways to solve a problem. Some of the hot topics this week include:
- Why do get I compiler error C2065 when compiling my VC++ 6.0 project in VC7?
- Should I call destroywindow before deleting an object?
- Where is the error log?
- How do I search for keys using wildcards in a map?
- How can I use a *void -> *short in printf?
tiff, a junior member, converted his project from VC++ 6.0 to VC7 and got compiler error C2065. He accepted all default conversions from VC7, but it still does not work. What can be done here? In his words: "Because VC++6.0 and VC++.NET put this function in different locations, I did change the include path, and made it use Dynamic ATL because in VC++.NET this function is in atlmfc/include. But nothing works."
So, guys, do you know why this does not work? This question is again for you. Can you answer it? If yes, e-mail me and I will publish the answer in next week's column.
brraj is curious whether he should call DestroyWindow() prior to deleting the object. Do you know whether this is correct or not?
I have a CEdit *m_pEdit pointer; it will be dynamically created by using the new and create functions and deleted by using delete. My question is before deleting should we call the DestroyWindow function? For example:
m_pEdit->DestroyWindow(); delete m_pEdit; m_pEdit = 0;
Is the destroywindow necessary? My senior says that window will not be destroyed if I don't call DestroyWindow.
Well, your senior is correct. You will need to call DestroyWindow() before calling delete. Any class derived from CWnd will call DestroyWindow from its destructor. But remember, if you derive your own class from a CWnd-derived class, and you override the function, you will need to call DestroyWindow explicitly in your own code.
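A minimal sketch of that pattern (the class name is illustrative, not from the thread):

class CMyEdit : public CEdit
{
public:
    virtual ~CMyEdit()
    {
        // Tear down the HWND before the C++ object goes away.
        if (::IsWindow(m_hWnd))
            DestroyWindow();
    }
};

With a plain CEdit* created via new and Create(), the caller simply does m_pEdit->DestroyWindow(); delete m_pEdit; as shown above.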
paraglidersd is working on a project that is nearly finished, but he still gets an error and needs to find the error log on the system. Unfortunately, he is not able to find the log. Do you know where such a log file is located?
I am running an Visual C++ application on a laptop that does not have Visual Studio installed. I (obviously) created the application on a different machine. This application is a simple Win32 application with no interactive windows. It merely runs (after double-clicking couldn't find anything on MSDN. Help? I am on a deadline and am pulling my hair out.
The error log could be the Dr. Watson Log. Enter DRWTSN32 in START/RUN. Also, check that you have the correct version of the DLLs on your laptop. These are the DLLs required on the laptop:
- MSVCP60.DLL
- PSAPI.DLL
An another option is to debug your application remotely.
Aks82 is working with maps where he needs to search for wildcards. Is it possible?
I am planning to create a map for my given data set. I was wondering whether there is a way to search for keys by using regular expressions. Basically, if the user forgets the key, can s/he search for the key using wildcards? For example, if HostID is one such key, is there a way I can search for all keys that begin with 'Host' as 'Host*' or something? I know that the find() method allows one to search for the key as an exact expression. But my tool needs to be able to search for the 'keys' using regular expressions.
Unfortunately, there isn't any function to search for wildcards. But, to get the desired result, you can use the std::lower_bound() function. Here is how it might look:
#include <iostream>
#include <map>
#include <string>
using namespace std;

typedef std::pair<string,int> PAIR;

int main()
{
    map<string,int> my_map;
    my_map.insert( PAIR("Server1",1) );
    my_map.insert( PAIR("Host3",3) );
    my_map.insert( PAIR("Server2",2) );
    my_map.insert( PAIR("Host1",1) );
    my_map.insert( PAIR("Something Else",6) );
    my_map.insert( PAIR("Host2",2) );
    my_map.insert( PAIR("guest",44) );
    my_map.insert( PAIR("HostID",32) );

    // find all entries that start with "Host"
    string str_to_find = "Host";
    int nLength = str_to_find.length();
    map<string,int>::iterator it = my_map.lower_bound("Host");
    while (it != my_map.end())
    {
        if (it->first.substr(0,nLength) != str_to_find)
            break;
        cout << it->first << " " << it->second << endl;
        ++it;
    }
    return 0;
}
yiannakop asked a very interesting question.
Hi everyone. Suppose I have the following code:
void *var;
int c;   // c given by user...
...
...
switch (c) {
    case 1:
        var = (short*)malloc(sizeof(short));
        scanf("%d",(short*)var);
        printf("content of var: %d\n",*(short*)var);
        break;
    case 2:
        // same for float
        ...
        ...
        ...
}
The above program works fine with all types (int, float, double), but not for short. I suppose %d for shorts is not right under Solaris 5.7?
The following code should work:
scanf("%hd",(short*)var);
Microsoft says that %h is a MS-specific extension. Orginally quote from MSDN:
"The optional prefixes to type, h, l, and L, specify the .size. of argument (long or."
But, the C99 standard also contains this extension. Take a look at the whole thread to learn more about this topic.
|
http://www.developer.com/tech/article.php/3304221/Hottest-Forum-QA-on-CodeGuru---January-19th.htm
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Apache OpenOffice (AOO) Bugzilla – Issue 120435
import of special svg picture crashes AOO
Last modified: 2013-07-12 16:19:04 UTC
Created attachment 78806 [details]
picture that crashes AOO-dev
I use a debug build of r1367616.
Insert the attached svg-picture into a Draw document. AOO-dev r1367616 crashes immediately (even in file picker when preview is enabled). No debug messages are generated.
The picture can be imported into AOO3.4.1 without problems.
The picture belongs to
Is this a special problem of my build?
ALG: Strange. It crashes on the first text node. Taking over...
ALG: Good catch, it was introduced with the CSS style fix. The detected CSS style was added each time the style was requested, so a circular pointer ring was created. Changed now to only detect CSS for a node once; this cannot change. Checking...
"alg" committed SVN revision 1368423 into trunk:
#120435# Corrected CSS style detection to be executed only once per node
ALG: Added example to my standard test examples (which I used for the CSS style fix) to avoid this in the future. Committed as revision 1368423.
ALG: Done.
can not reproduce on AOO350ml 1377620 on Win7-64bit
|
https://bz.apache.org/ooo/show_bug.cgi?id=120435
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
The only thing that I can't really figure out is why when the traverse method (which returns a pointer to a Location) is called, the program crashes.
I've tried putting in debug prints at various points in this method and so far have NOT gotten any consistent readout of the problem.
As I see it, it should initialize a sentinel pointer to ConnectionNode as the head of the list of Connections (a ConnectionNode) of the current Location, then traverse the list as long as the shortcut for the sentinel's current Connection is not equal to inputChar. Then when it finds a match it should return the Location pointer in the Connection for which the shortcut matched.
#include <iostream>
#include <string>
using namespace std;

class Location;

class Connection{
public:
    string direction;
    char shortcut;
    string description;
    Location* connectionDestination;

    Connection(string constructorDirection, char constructorShortcut, string constructorDescription){
        direction = constructorDirection;
        shortcut = constructorShortcut;
        description = constructorDescription;
        connectionDestination = NULL;
    }
};

struct ConnectionNode{
    ConnectionNode* next;
    Connection* data;
};

class Location{
public:
    string name;
    string description;
    ConnectionNode* connectionListHead;

    Location(string constructorName, string constructorDescription){
        name = constructorName;
        description = constructorDescription;
        connectionListHead = NULL;
    }

    void AddConnection(Connection* connectionToAdd){
        ConnectionNode*elem = new ConnectionNode;
        elem->data = connectionToAdd;
        //if the list of connections is currently NULL
        if (connectionListHead == NULL){
            elem->next = NULL;
            connectionListHead = elem;
        }
        //add new connection to end of list
        else{
            elem->next = connectionListHead;
            connectionListHead = elem;
        }
    }

    Location*traverse(char inputChar){
        ConnectionNode*elem = connectionListHead;
        while(elem->data->shortcut != inputChar){
            elem = elem->next;
        }
        return elem->data->connectionDestination;
    }
};

int main(){
    //create our world
    Location*roomC1 = new Location("Empty corridor", "You are in a dimly lit corridor with\npassageways to the north and west.");

    Connection*linkC1toB1 = new Connection("West", 'w', "Passageway to the west");
    roomC1->AddConnection(linkC1toB1);
    Connection*linkC1toC2 = new Connection("North", 'n', "Passageway to the north");
    roomC1->AddConnection(linkC1toC2);

    //initialize game state variables
    bool quit = 0;
    char inputChar;
    Location*currentRoom = roomC1;
    ConnectionNode*currentConnection = NULL;

    //main loop
    while (!quit){
        //display current room information
        cout << endl << currentRoom->name << endl;
        for (int i=0; i<currentRoom->name.length(); i++)
            cout << "-";
        cout << endl << currentRoom->description << endl << endl;

        currentConnection = currentRoom->connectionListHead;
        //list all the connections to currentRoom
        while (currentConnection != NULL){
            cout << "-> " << currentConnection->data->direction
                 << " [" << currentConnection->data->shortcut << "]"
                 << " - " << currentConnection->data->description << endl;
            currentConnection = currentConnection->next;
        }
        cout << "-> Quit [q] - Exit game (progress will be lost)" << endl;

        cin >> inputChar;

        //set currentRoom equal to a pointer to the room whose connectionNode in the
        //current room has shortcut == inputChar
        if (inputChar == 'q')
            quit = 1;
        else{
            currentRoom = currentRoom->traverse(inputChar);
        }
    }
}
|
https://www.gamedev.net/topic/635421-problem-returning-pointer-from-method/
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
> On Dec 1, 2016, at 12:13 PM, Jean-Paul Calderone <[email protected]> > wrote: > > On Thu, Dec 1, 2016 at 2:14 PM, Glyph Lefkowitz <[email protected] > <mailto:[email protected]>> wrote: > >> On Dec 1, 2016, at 10:51 AM, Jean-Paul Calderone <[email protected] >> <mailto:[email protected]>> wrote: >> >> Hi, >> >> In the last couple days I've noticed that there are a bunch of spurious >> changes being made to tickets in the issue tracker. These come from commit >> messages that reference a GitHub PR that happens to match a ticket number in >> trac. >> >> For example, >> <> >> >> I guess this doesn't really hurt anything ... except it's dumping a constant >> low level of garbage into the issue tracker and generating some annoying >> emails (that end up having nothing to do with what the subject suggests). > > This is, unfortunately, going to keep happening more frequently as the PR > numbers get higher and the corresponding Trac tickets get less sparse. > > The way I'd like to address it is to change the format of our commit message > to namespace Trac tickets differently; instead of just "#", using a URL, like > "Fixes <>". I wouldn't even mind if we > just had to use the Trac wiki syntax for this, i.e. "Fixes [ticket:1234]" as > long as we could turn off the "#" syntax which Github also uses. > > However, this involves surgery within Trac's code, and for me personally, the > work required to find the relevant regexes and modify them is worse than > continuing to deal with the annoyance. However, I would very much appreciate > it if someone else would take this on :-). > > Where's the source for Twisted's trac deployment?
> Is it actually possible to deploy modifications?

There's probably still an undocumented setup step or two that we've missed - but after following <> 'fab config.production trac.upgrade' ought to do the trick. Allegedly it's even possible to set up a test development environment as per <> :-). I haven't made any major changes since all these docs were added so I'm just following them from the beginning for the first time myself now, but certainly the prod-deploy process has worked fine for me many times on various services.

> I'll take a look, if so.

Please be vocal about any roadblocks you hit. The ops situation has improved a ton since the last time you looked, but (accordingly) it's also changed almost completely. Good luck - and hopefully you'll need a lot less of it than previously ;-).

-glyph
_______________________________________________ Twisted-Python mailing list [email protected]
|
https://www.mail-archive.com/[email protected]/msg11984.html
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Hi every one…
In the last few posts we have shown how to display SQL results in the Operator, fetch the failed objects' error messages in a Package, and various other techniques. Today we want to cover another small script which enables you to fetch details of the Interfaces that have errored out because of PK, FK, Not Null constraints, etc., with the rejected rows going to the Error tables.
This script creates a file and dumps all the information read from the SNP_CHECK_TAB table; the file can then be attached to OdiSendMail and sent to the administrators or developers so that they know which interfaces got error records and can do the needful. In a daily load with hundreds of interfaces we often fail to see these smaller details.
import string
import java.sql as sql
import java.lang as lang
import re

sourceConnection = odiRef.getJDBCConnection("SRC")
output_write = open('c:/snp_check_tab.txt', 'w')
sqlstring = sourceConnection.createStatement()
#---------------------------------------------------------------------------
output_write.write("The Errored Interface of today's (<%=odiRef.getSysDate( )%>) run are .... \n")
output_write.write("----------------------------------------------------------- \n\n")
#---------------------------------------------------------------------------
sqlstmt = "SELECT 'Errored Interface \t- '||SUBSTR(ORIGIN,INSTR(ORIGIN,')')+1, LENGTH(ORIGIN))||'\nError Message \t\t- '||ERR_MESS||'\nNo of Errored Records \t- '|| ERR_COUNT AS OUTPUT FROM ODI_TEMP.SNP_CHECK_TAB WHERE TRUNC(CHECK_DATE)=TRUNC(SYSDATE)"
result = sqlstring.executeQuery(sqlstmt)
rs = []
while (result.next()):
    rs.append(str(result.getString("output") + '\t'))
res = '\n\n'.join(map(string.strip, rs))
print >> output_write, res
sourceConnection.close()
output_write.close()
[Note – In the above scripts please change the File path and the Schema(Work Schema ) name according to your respective Environment ]
Provide the Technology and Schema of your Work Schema (or any schema that can access SNP_CHECK_TAB) and put the code in the Command on Target tab.
and for every run you will get the sample output as shown below.
Attach the file to OdiSendMail to get the daily errored Interface, Message, and Record details.
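For example, an OdiSendMail step along these lines will ship the file every day (the mail host and addresses are placeholders, and you should check the exact tool parameters for your ODI version):

OdiSendMail "-MAILHOST=mail.yourcompany.com" "-FROM=odi@yourcompany.com" "-TO=admin@yourcompany.com" "-SUBJECT=Errored Interfaces for today" "-ATTACH=c:/snp_check_tab.txt"
Please find attached the list of interfaces with error records for today's load.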
Download the Codes
See you soon…
April 5, 2016 at 8:40 PM
While scanning my files on to ODI, I am not able to see what I printed. How do I fix this?
April 19, 2016 at 6:17 PM
Hi Sophia… I couldn’t understand what do you mean…
Can you explain with more details?
June 26, 2011 at 3:18 AM
Hi,
I have an issue in log file name. I'm loading data into Hyperion Financial Management. In the IKM of the SQL to HFM data, we have an option of log file enabled. I made it true and gave the log file name as 'HFM_dataload.log'. After executing the interface when I navigate in to that log folder and view the log file, that file is blank. Also a new file 'HFM_dataloadHFM6064992926974374087.log' is created and the log details are displayed in it. Since I have to automate the process of picking up the everyday log file and mail it to the users,
* I need the log details to be displayed in the specified log name i.e. ‘HFM_dataload.log’
* Also I was not able to perform any action(copy that newly generated log file into another or send that file in mail), since I’m not able to predict the numbers generated along with the specified log file name.
Kindly help me to overcome this issue.
Thanks in advance.
June 27, 2011 at 9:00 AM
Please look into the KM how ODI is creating the number in HFM_dataloadHFM6064992926974374087.log and through that same logic u can pickup the Log file or u can add the command to rename to what ever file u wish to create and use that in your email.
Please let us know if you need any other help.
Thanks
Kshitiz Devendra
March 23, 2011 at 7:16 AM
hi ,
i did the same thing as you said in the above one , but the error is showing that table or view does not exist
March 23, 2011 at 7:24 AM
Here SELECT ‘Errored Interface t- ‘||SUBSTR(ORIGIN,INSTR(ORIGIN,’)’)+1, LENGTH(ORIGIN))||’nError Message tt- ‘||ERR_MESS||’nNo of Errored Records t- ‘|| ERR_COUNT AS OUTPUT FROM ODI_TEMP.SNP_CHECK_TAB WHERE TRUNC(CHECK_DATE)=TRUNC(SYSDATE)”
replace the schema name ODI_TEMP with your work schema name where the SNP_CHECK_TAB exist and please re run again .
Thanks
|
http://odiexperts.com/error-records-log/
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Gantry::Plugins::SOAP::Doc - document style SOAP support
In a controller:
    use Your::App::BaseModule qw(
        -PluginNamespace=YourApp
        SOAP::Doc
    );
    # This exports these into the site object:
    #     soap_out
    #     do_wsdl
    #     return_error

    do_a_soap_action {
        my $self = shift;

        my $data        = $self->get_post_body();
        my $parsed_data = XMLin( $data );

        # Use data to process the request, until you have a
        # structure like:
        my $ret_struct = [
            { yourResponseType => [
                { var  => value  },
                { var2 => value2 },
                { var3 => undef },    # for required empty tags
                { nesting_var => [
                    { subvar => value },
                ] }
            ] }
        ];

        return $self->soap_out( $ret_struct, 'prefix', 'pretty' );
    }
This module supports document style SOAP. If you need rpc style, see Gantry::Plugins::SOAP::RPC.
This module must be used as a plugin, so it can register a pre_init callback to take the POSTed body from the HTTP request before the engine can mangle it, in a vain attempt to make form parameters from it.
The document style SOAP request must find its way to your do_ method via its soap_action URL and Gantry's normal dispatching mechanism. Once the do_ method is called, your SOAP request is available via the
get_post_body accessor exported by each engine. That request is exactly as received. You probably want to use XML::Simple's XMLin function to extract your data. I would do that for you here, but you might need to set attributes of the parsing like ForceArray.
When you have finished processing the request, you have two choices. If it did not go well, call
return_error to deliver a SOAP fault to client. Using die or croak is a bad idea as that will return a regular Gantry error message which is obviously not SOAP compliant.
If you succeeded in handling the request, return an array of hashes. Each hash is keyed by XML tag (not including namespace prefix). The value can be a scalar or an array of hashes like the top level one. If the value is
undef, an empty tag will be generated.
Generally, you need to take all of the exports from this module, unless you want to replace them with your own versions.
If you need to control the namespace of the returned parameters, call
soap_namespace_set with the URL of the namespace before returning. If you don't do that the namespace will default to.
For use by non-web scripts. Call this with a hash of attributes. Currently only the
target_namespace key is used. It sets the namespace. Once you get your object, you can call
soap_out on it as you would in a Gantry conroller.
Only for use by Gantry.pm.
This is used to register
steal_post_body as a pre init callback with Gantry.
Not for external use.
Just a carefully timed call to
consume_post_body exported by each engine. This is registered as a pre_init callback, so it gets the body before normal form parameter parsing would.
You may retrieve with the post body with
get_post_body (also exported by each engine). No processing of the request is done. You will receive whatever the SOAP client generated. That should be XML, but even that depends on the client.
Returns the UTC in SOAP format.
This method is registered as a callback. During the post_init phase it will create a hash from $self->get_post_body() and store the result in $self->params().
Called internally to retrieve the namespace for the XML tags in your SOAP response. Call
soap_namespace_set if you need to set a particular namespace (some clients will care). Otherwise, the default namespace will be used.
Use this to set the namespace for your the tags in your XML response. The default namespace is.
Parameters:
actual data to send to the client. See SYNOPSIS and DESCRIPTION.
prefix or internal. Use prefix to define the namespace in the soap:Envelope declaration and use it as a prefix on all the return parameter tags. Use internal if you want the prefix to be defined in the outer tag of the response parameters.
To set the value of the namespace, call
soap_namespace_set before calling this method.
true if you want pretty printing, false if not
By default returned XML will be whitespace compressed. If you want it to be pretty printed for debugging, pass any true value to this method as the second parameter, in a scenario like this:
my $check_first = $self->soap_out( $structure, 'prefix', 'pretty_please' ); warn $check_first; return $check_first;
Call this with the data to return to the client. If that client cares about the namespace of the tags in the response, call
soap_namespace_set first. See the SYNOPSIS for an example of the structure you must pass to this method. See the DESCRIPTION for an explanation of what you can put in the structure.
You should return the value returned from this method directly. It turns off all templating and sets the content type to text/xml.
Mostly for internal use.
soap_out uses this to turn your structure of return values into an XML snippet. If you need to re-implement
soap_out, you could call this directly. The initial call should pass the same structure
soap_out expects and (optionally) a pretty print flag. The returned value is the snippet of return params only. You would then need to build the SOAP envelope, etc.
This method returns a fault XML packet for you. Use it instead of die or croak.
This method uses the
wsdldoc.tt in your template path to return a WSDL file to your client. The view.data passed to that template comes directly from a call to
get_soap_ops, which you must implement (even it it returns nothing).
For clients. Sends the xml to the SOAP server. You must have called new with
action_url and
post_to_url for this method to work. In particular, servers which use this as a plugin cannot normally call this method. First, they must call new to make an object of this class.
Parameters: an object of this class, the xml to send (get it from calling
soap_out).
Returns: response from remote server (actually whatever request on the LWP user agent retruns, try calling content on that object)
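As a rough client-side sketch (the URLs and the pingRequest structure are invented for illustration; only the constructor keys and method names come from this documentation):

    use Gantry::Plugins::SOAP::Doc;

    my $doc = Gantry::Plugins::SOAP::Doc->new(
        target_namespace => 'http://example.com/yourns',
        action_url       => 'http://example.com/soap/action',
        post_to_url      => 'http://example.com/soap/endpoint',
    );

    my $xml = $doc->soap_out(
        [ { pingRequest => [ { message => 'hello' } ] } ],
        'prefix',
    );

    my $response = $doc->send_xml_soap( $xml );
    print $response->content();    # LWP response, as noted above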
Phil Crow, <[email protected]>, Tim Keefer, <[email protected]>
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.6 or, at your option, any later version of Perl 5 you may have available.
|
http://search.cpan.org/~tkeefer/Gantry/lib/Gantry/Plugins/SOAP/Doc.pm
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Class used to require specialization of OperatorTraits. More...
#include <BelosOperatorTraits.hpp>
Class used to require specialization of OperatorTraits.
This class is used by
OperatorTraits to ensure that OperatorTraits cannot be used unless a specialization for the particular scalar, multivector, and operator types has been defined.
Definition at line 60 of file BelosOperatorTraits.hpp.
Function that will not compile if instantiation is attempted.
Any attempt to compile this function results in a compile-time error. Such an error means that the template specialization of Belos::OperatorTraits class either does not exist for type
OP, or is not complete.
Definition at line 69 of file BelosOperatorTraits.hpp.
|
http://trilinos.sandia.gov/packages/docs/r11.0/packages/belos/browser/doc/html/classBelos_1_1UndefinedOperatorTraits.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Gate adds OWIN support for the new ASP.NET Web API betaGate, internet, opensource, OWIN, programming, web, webapi February 20th, 2012
Hello again, everyone! As I’m sure you already know, ASP.NET MVC 4 beta is available and it has some fantastic stuff in it! One of the interesting bits is the latest ASP.NET Web API.
Looking at that, I’m absolutely certain you’re thinking the same thing Glenn Block was saying on Twitter:
@gblock: hmm I really wish there was an Owin adapter for #aspnetwebapi! /cc:@loudej
Wish no longer! Gate has a new drop available, version 0.3.4, and it contains a Get.Adapters.AspNetWebApi package which does exactly that – enabling you to mix the Web API into your OWIN based web applications.
Let’s take a look at how that feels. That should be even easier than before because of a few more things that are new in Gate – a handful of Quickstart nuget packages, and a Ghost “generic host” process which lets you use any of the available OWIN http servers interchangeably. Neat, right? Let’s jump right into that!
First – start with a new “ASP.NET Empty Web Application”
Then – to make it “really empty” – let’s remove a bunch of references. This step is optional, but should be very satisfying if you want to see some minimalism in action. Quickest way is to select the last assembly reference and lean on the “delete” key.
Finally let’s add the meta-package Gate.Quickstart.AspNetWebApi from Nuget.org.
PM> Install-Package Gate.Quickstart.AspNetWebApi
Attempting to resolve dependency 'Gate.Quickstart.Core (≥ 0.3.4)'.
Attempting to resolve dependency 'Gate.Hosts.AspNet (≥ 0.3.4)'.
Attempting to resolve dependency 'Gate.Builder (≥ 0.3.4)'.
Attempting to resolve dependency 'Owin (≥ 0.7.0)'.
Attempting to resolve dependency 'Microsoft.Web.Infrastructure (≥ 1.0.0.0)'.
Attempting to resolve dependency 'Gate.Middleware (≥ 0.3.4)'.
Attempting to resolve dependency 'Gate (≥ 0.3.4)'.
Attempting to resolve dependency 'Gate.Adapters.AspNetWebApi (≥ 0.3.4)'.
Attempting to resolve dependency 'AspNetWebApi.Core (≥ 4.0.20126.16343)'.
Attempting to resolve dependency 'System.Net.Http.Formatting (≥ 4.0.20126.16343)'.
Attempting to resolve dependency 'System.Net.Http (≥ 2.0.20126.16343)'.
Attempting to resolve dependency 'System.Web.Http.Common (≥ 4.0.20126.16343)'.
Attempting to resolve dependency 'System.Json (≥ 4.0.20126.16343)'.
You can now press F5 and see this run. The quickstart adds an example Startup class – which uses a partial class trick because I wanted you to be able to add more than one demo to the same project. But that's not really necessary – in your own projects a simpler Startup class like this would work exactly the same.
using System.Net;
using System.Net.Http;
using System.Web.Http;
using Gate.Adapters.AspNetWebApi;
using Owin;

namespace HelloEverything
{
    public class Startup
    {
        public void Configuration(IAppBuilder builder)
        {
            var config = new HttpConfiguration(new HttpRouteCollection("/"));
            config.Routes.MapHttpRoute(
                "Default", "{controller}", new { controller = "Main" });

            builder
                .RunHttpServer(config);
        }

        public void Debug(IAppBuilder builder)
        {
            builder.UseShowExceptions();
            Configuration(builder);
        }
    }

    public class MainController : ApiController
    {
        public HttpResponseMessage Get()
        {
            return new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent("Hello, AspNetWebApi!")
            };
        }
    }
}
Next you should try adding the meta-meta-package Gate.Quickstart.All. Then you’ll be able to press F5 to see a demo of everything and the kitchen sink: Web API, Nancy, SignalR, Gate “test page”, and Direct output from code (res.Write style).
Direct output is kind of interesting, looks like this:
public class Startup
{
    public void Configuration(IAppBuilder builder)
    {
        builder.RunDirect((req, res) =>
        {
            res.ContentType = "text/plain";
            res.Write("Hello, ").Write(req.PathBase).Write(req.Path).Write("!");
            res.End();
        });
    }
}
Well – that’s probably long enough for this post. Next topic – hopefully soon – will be about using the Ghost.exe “generic host” to run this web application on HttpListener, Kayak, or Firefly.
February 20th, 2012 at 1:57 pm
[...] This is using the OWIN web app we made in the last post. [...]
February 21st, 2012 at 8:38 am
[...] Gate adds OWIN support for the new ASP.NET Web API beta and Ghost.exe – a generic host for OWIN applications (Louis DeJardin) [...]
March 5th, 2012 at 6:53 am
[...] DeJardin created a host on top of [...]
|
http://whereslou.com/2012/02/20/gate-adds-owin-support-for-the-new-asp-net-web-api-beta/comment-page-1
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
This example will show how to play sounds using irrKlang. It will play
a looped background music and a sound every time the user presses a key.
At the beginning, we simply create a class and add the namespace IrrKlang
in which all sound engine classes are located.
using System;
using IrrKlang;
namespace CSharp._01.HelloWorld
{
class Class1
{
[STAThread]
static void Main(string[] args)
{
Now let's start with irrKlang 3D sound engine example 01, demonstrating simple 2D sound.
Start up the sound engine by creating an instance of the class ISoundEngine.
You can specify several options as parameters when invoking that constructor,
but for this example, the default parameters are enough.
// start the sound engine with default parameters
ISoundEngine engine = new ISoundEngine();
To play a sound, we only need to call play2D(). The second parameter
tells the engine to play it looped.
// play some sound stream, looped
engine.Play2D("../../media/getout.ogg", true);
In a loop, wait until user presses 'q' to exit or another key to
play another sound.
do
{
Console.Out.WriteLine("Press any key to play some sound, press 'q' to quit.");
// play a single sound
engine.Play2D("../../media/bell.wav");
}
while(_getch() != 'q');
Basically, that's it. The following code just closes the classes and
namespace and adds a method _getch() to read from the console, but this
has nothing to do with irrKlang.
}
// some simple function for reading keys from the console
[System.Runtime.InteropServices.DllImport("msvcrt")]
static extern int _getch();
}
}
Have fun playing your own sounds now.
Download tutorial source and binary (included in the SDK)
|
http://www.ambiera.com/irrklang/tutorial-helloworld-csharp.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
05 May 2009 17:28 [Source: ICIS news]
By Nigel Davis
Now the prospect of possibly 15 times more than initially planned full-blown substance registrations is hardly relished, by the agency or anyone else.
It is not surprising that so many chemical manufacturers and importers, including companies in sectors other than chemicals, are concerned about this critical next stage of Reach.
If a substance is not pre-registered or eventually registered under Reach there can be no market for it in the EU.
Most national European authorities have already started to police this pre-registration and registration process.
But the next Reach deadline looms large. It is 1 December 2010 for the submission of full Reach registration dossiers. These are the documents that contain most of the safety and toxicity data on a substance and a classification of that substance into an acceptable category.
But not everyone understands what Reach registration is and, particularly, what substance data, it requires.
The European Chemicals Agency (ECHA) has promoted the formation of substance information exchange forums, or SIEFs, and substance consortia.
Through the SIEFs, producers, distributors and importers – the substance pre-registrants – are expected to share substance toxicity and other data; decide on a classification for the substance- whether a skin irritant or a carcinogen, for example; and through a lead company submit a Reach registration dossier.
Yet no-one has taken anything but the first steps along the path to successful SIEF formation and operation.
Already arguments are surfacing about possible SIEF fees – initial registration fees of between €30,000 and €40,000 ($40,000-$53,000) have been mentioned for some of the larger SIEFs, with additional running costs expected.
There are no SIEF guidelines; best practice is non-existent – indeed it hasn’t been achieved yet – and the SIEF’s have no legal standing.
These private groups of chemicals manufacturers and others are expected to gather data and submit registration dossiers for most of the chemicals used throughout European industry.
The SIEF process is a minefield that will test supply chain cooperation to the full.
And the trouble is; all this is new. SIEF participants can make up the rules as they go along. Some have the picture in their minds eye of flying the plane while it is being built.
SIEFS range in size, also, from the very small to the very big. The ethanol SIEF with 5,000 or so members is the largest. Imagine trying to organise a single reach registration from that number of potentially active participants.
Finding a lead registrant and achieving consensus among some of the larger SIEFs will be a tough job. But the regulators expect industry – and not just chemicals makers – to step up to the mark.
This phase of Reach presents the first real test of the chemicals control regulation.
Producers are expected to be in contact with distributors; importers with downstream customers keen to know the origin of the substances they use in preparations and other goods. The mammoth task is an unprecedented chemicals data sharing exercise.
Pitfalls will be exposed along the way. Currently there is a great deal of confusion and little activity. It is almost as if everyone is waiting for everyone else to make the first definitive moves.
However, the SIEFs need to gain momentum. December 2010 may seem a long way off but some in the industry believe that registration dossiers need to be ready by June next year.
Reach is a complex and contentious process. But the wheels have begun to grind faster.
The first stages of Reach are being enforced; it pays to understand as fully as is possible and to be well prepared for the next steps.
|
http://www.icis.com/Articles/2009/05/05/9213471/insight-data-sharing-siefs-are-real-reach-challenge.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Details
- Type:
Bug
- Status: Closed
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: None
- Component/s: Misc Library
- Labels:None
- Environment:
Actor, Actors
Description
According to the documentation, receiveWithin should block at most msec milliseconds
if no message matches any of the cases of the partial function passed as argument.
This is not correct if the function is not defined on TIMEOUT and if the msec argument
is > 0.
The function
def main(args: Array[String]){ Actor.receiveWithin(0) { case str: String => println(str) } }
throws an "unhandled timeout" RuntimeException, however
def main(args: Array[String]){ Actor.receiveWithin(1) { case str: String => println(str) } }
blocks indefinitely. The TIMEOUT object is stored in the mailbox after the timeout, but it remains there as it is not consumed.
Actually a new TIMEOUT object is added to the mailbox after each additional msec milliseconds.
The function receiveWithin defines the local function receiveTimeout, which checks whether the partial function is defined at TIMEOUT, but this function is only called if msec == 0L.
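For reference, a workaround sketch (not part of the original report): if the partial function also handles TIMEOUT, the message is consumed and receiveWithin(msec) with msec > 0 returns after the timeout instead of blocking.

import scala.actors.Actor
import scala.actors.TIMEOUT

object Workaround {
  def main(args: Array[String]) {
    Actor.receiveWithin(1) {
      case str: String => println(str)
      case TIMEOUT     => println("timed out")
    }
  }
}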
Dominik
Activity
Replying to [comment:1 extempore]:
> This sounds like a duplicate of
SI-3799, but not sure:
> perhaps submitter could see if that sounds true and if so,
> close this as a duplicate.
SI-3799 reports that messages may be lost. In my example no message is lost; all messages remain in the actor's mailbox, but receiveWithin does not terminate after the timeout. Actually, the TIMEOUT message can be found in the mailbox. Note that this behavior changed from 2.7.7 to 2.8.0.
=> no duplicate.
This sounds like a duplicate of
SI-3799, but not sure: perhaps submitter could see if that sounds true and if so, close this as a duplicate.
|
https://issues.scala-lang.org/browse/SI-3838
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Equivalent Features On Windows And Unix
This page is meant to capture the current state of agreement on UnixFunctionalityVsWindowsFunctionalityDiscussion. It is simply a list of features on each operating system and its associated tools, and then its suggested equivalent on the other system. The hope is that we might learn from each other how to do our jobs on each others' systems, not hear arguments on how much the other sucks.
In a way this is supposed to be like a pretrial hearing, to reduce the length of the ensuing court battle. Please don't have your argument here - if you take umbrage, take your umbrage elsewhere. If you have a question about an equivalent feature though, add it to the list and hopefully someone will take the time to put in an answer.
Features on Windows
WTS: X + ssh
COM: Corba / PlainText protocols
IIS: Apache
Threads: processes sharing address space and OS resources, proprietary thread API, POSIX threads (usually implemented with one of the first two).
Named Pipes (don't have the same functionality, but a common subset): You can make a special file that acts as a pipe.
Features on Unix or LinuxLikeOperatingSystems
CLI with pipes & job control: Cygwin (for unix commands) plus a DOS shell emulator (for native commands from cmd.exe/command.com)
Network transparent GUI: WTS
Rooted file-system hierarchy: You can refer to anything on the local machine using the UNC syntax, eg \\mypc\C$\whatever or \\mypc\foo, for shares.
Symbolic links: See SymbolicLinkOnWindows. The explorer has "link" files but these are not fully understood by other parts of the operating system, such as command line tools or the open/save dialogs. Shared folders can act like a flat namespace of symlinks. Few know it, but NTFS supports true symbolic links. Try the ln command on Cygwin and you might be surprised by the results.
Asynchronous signals: Threads waiting on binary semaphores signaled by the operating system
This 'real windows' would be what? The one released less than a year ago? The one we used to call NT? The one with the POSIX shell CLI?
Original author replies: and the 'real unix' is... IRIX? AIX? Solaris? HPUX? To answer would be to miss the point. This page's title should more accurately be 'roughly equivalent features and tools in the set of operating systems and tools which people would generally see as belonging to the windows world or the unix world'. However, brevity is the soul of wit. And the intention of this page is that people can say how they would get roughly the same effect in the 'windows world' or the 'unix world', so limiting to one version of windows or unix, or excluding tools that get layered on top of the OS, would prevent people from describing what they would actually do.
Actually the version of Windows is often important information, since really big features can change in major releases (e.g. multiple times that further Unix shell-like functionality was announced for some Windows or another). The corresponding nitpick about different versions of Unix is largely irrelevant, because for the most part the major features are identical across vendors and even on completely different codebases such as Linux; mostly it is system administration issues (like location of configuration files) that vary, not system calls or shell availability. So let's just say: mention the version of either Windows or of Unix whenever there may be a question as to whether the feature varies with release/vendor, and if someone asks, just tell them.
[The versions matter less and less as the operating systems stabilize. In the early 1980s, there were important differences between System V and BSD. System V didn't support asynchronous I/O. Shell commands had completely different syntaxes. As Windows ages, the difference between versions seem to decrease, just as they did with Unix. And as they both age, they grow more and more alike.]
The Unix issue you mentioned was
twenty years ago
. Windows
continues
to add major features on every major release. So what was the point of your comment???
[My point is that Unix is older than Windows, hence the difference between releases are smaller. For instance, the difference between Win2000 and WinXP was significantly less than the difference between Win3.1 and WinNT.]
I should hope so. Win3.1 and WinNT were completely different operating systems, not just different releases of the same operating system.
But the context of this is someone arguing "and the 'real unix' is... IRIX? AIX? Solaris? HPUX?", and I'm just saying what
you
just said: it doesn't really matter with Unix, but it often does matter with Windows, so for crying out loud, don't argue, just speak up about the Windows version, and that should be the end of it. But I phrased it better, above. I really cannot see what further point there is here that is relevant to the context.
[It matters for old versions of Unix. Speak up about those as well.]
See the article by DonBox about the five stages of Microsoft development: porting tools from "the old country" to Windows comes in at stage 3 ("anger"). I've actually used vi for Windows (ha!) so maybe he's right.
SmugSelloutWeenies, eh? You know, not everyone gets past the anger stage, and maybe that's not a bad thing.
There's always CygWin
Differences
Capitalization recognition
Windows has no (simple) directory links
Book: Unix for the MS-DOS User
See UnixFunctionalityVsWindowsFunctionalityDiscussion
CategoryOperatingSystem
CategoryComparison
|
http://c2.com/cgi-bin/wiki?EquivalentFeaturesOnWindowsAndUnix
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
import org.apache.myfaces.buildtools.maven2.plugin.builder.annotation.JSFProperty;

public interface ChangeSelectProperties
{
    /**
     * HTML: Specifies a script to be invoked when the element is modified.
     */
    @JSFProperty(clientEvent="change")
    public abstract String getOnchange();

    /**
     * HTML: Specifies a script to be invoked when the element is selected.
     */
    @JSFProperty(clientEvent="select")
    public abstract String getOnselect();
}
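For context, a concrete component class would typically satisfy this interface by exposing the two script attributes as ordinary bean properties. The sketch below is illustrative only: the class name and the plain-field storage are assumptions, not taken from Tomahawk, and it presumes the class can see the interface's package.

public class HtmlChangeSelectInput implements ChangeSelectProperties {
    private String onchange;
    private String onselect;

    // Rendered as the HTML onchange attribute of the generated element.
    public String getOnchange() { return onchange; }
    public void setOnchange(String onchange) { this.onchange = onchange; }

    // Rendered as the HTML onselect attribute of the generated element.
    public String getOnselect() { return onselect; }
    public void setOnselect(String onselect) { this.onselect = onselect; }
}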
|
http://myfaces.apache.org/tomahawk-project/tomahawk20/xref/org/apache/myfaces/component/ChangeSelectProperties.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
#include <Thyra_LinearOpWithSolveBaseDecl.hpp>
Inheritance diagram for Thyra::LinearOpWithSolveBase< RangeScalar, DomainScalar >:
This interface supports a forward solve operation (using solve()) of the form A*X == B, and/or a transpose solve operation (using solveTranspose()) of the form A^T*X == B, and/or an adjoint solve operation (also using solveTranspose()) of the form A^H*X == B, where A is *this* linear operator.
The functions solve() and solveTranspose() both take the arguments:
..., const int numBlocks = 0, const BlockSolveCriteria<PromotedScalar> blockSolveCriteria[] = NULL, SolveStatus<PromotedScalar> blockSolveStatus[] = NULL
The array arguments blockSolveCriteria[] and blockSolveStatus[] specify different blocks of solution criteria and the corresponding solve return statuses for a partitioned set of linear systems. Assuming that the client passes in arrays of dimension numBlocks, these tolerances define the solution criteria for the corresponding contiguous blocks of right-hand-side columns.
The solve criteria for the j-th block system is given by blockSolveCriteria[j] (if blockSolveCriteria!=NULL) and the solution status after return is given by blockSolveStatus[j] (if blockSolveStatus!=NULL).
By specifying solution criteria in blocks and then only requesting basic tolerances, we allow linear solver implementations every opportunity to perform as many optimizations as possible in solving the linear systems. For example, SVD could be performed on each block of RHSs and then a reduced set of linear systems could be solved.
For the remainder of this discussion we will focus on how the solution criteria for a single block of linear systems is specified by a single BlockSolveCriteria object, and how the status of this block linear solve is reported in a SolveStatus object. The struct BlockSolveCriteria contains a SolveCriteria member and the number of RHSs that it applies to. It is the SolveCriteria object that determines the type and tolerance of a block solve request.
solveStatus.achievedTol != SolveStatus::unknownTolerance() ... since the client would have no way to interpret this tolerance. The value of solveStatus.achievedTol != SolveStatus::unknownTolerance() in this case should only be returned when solveCriteria.solveMeasureType.useDefault()==true by blockSolveStatus[].
ToDo: Finish documentation!
solve() must be overridden. See LinearOpBase for what other virtual functions must be overridden to define a concrete subclass.
Definition at line 339 of file Thyra_LinearOpWithSolveBaseDecl.hpp.
Local typedef for promoted scalar type.
Definition at line 346 of file Thyra_LinearOpWithSolveBaseDecl.hpp.
Request the forward solution of a block system with different targeted solution criteria.
this->solveSupportsConj(conj)==true
X!=NULL
this->range()->isCompatible(*B.range())==true
this->domain()-->solveS[].
Return if solve() supports the argument conj. The default implementation returns true for real valued scalar types or when conj==NONCONJ_ELE for complex valued types.
Definition at line 38 of file Thyra_LinearOpWithSolveBase.hpp.
Return if solveTranspose() supports the argument conj. The default implementation returns false.
Definition at line 44 of file Thyra_LinearOpWithSolveBase.hpp.
Return if solve() supports the given solve measure type. The default implementation returns true for solveMeasureType.inNone().
Definition at line 50 of file Thyra_LinearOpWithSolveBase.hpp.
Return if solveTranspose() supports the given solve measure type. The default implementation returns true for solveMeasureType.inNone().
Definition at line 56 of file Thyra_LinearOpWithSolveBase.hpp.
Request the transpose (or adjoint) solution of a block system with different targeted solution criteria.
this->solveTransposeSupportsConj(conj)==true
X!=NULL
this->domain()->isCompatible(*B.range())==true
this->range()-->solveTransposeS[].
Definition at line 62 of file Thyra_LinearOpWithSolveBase.hpp.
|
http://trilinos.sandia.gov/packages/docs/r10.0/packages/thyra/src/interfaces/operator_solve/ana/fundamental/doc/html/classThyra_1_1LinearOpWithSolveBase.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
The dotDude of .Net
In a current project, there was a requirement to have a web site that used Windows integrated authentication, so that valid users who have logged onto the domain would not have to authenticate themselves to gain access to the site. Pretty standard stuff. We also had a requirement to extract the user's first and last name from the domain and use that to populate information on the page.
I am sure it has been done many times before, but I thought I would post the utility code I developed to first query for a user's domain, then use that to extract the user's full name from their domain account information and separate it into first name and last name. It uses the System.Management namespace exclusively and of course WMI to perform its magic. In our scenario, the web site had impersonation enabled, and in the code we simply called the
WMIQueryResults results = QueryCurrentDomainUser(true);
string fullName = String.Format("{0} {1}", results.UserFirstName, results.UserLastName);
function. The 'true' parameter indicates that the user's full name should be retrieved. This obviously takes extra time to execute due to the network queries that must be performed against the domain controller.
Simply copy and paste the code below into a .cs file, and you are away. Note that it returns the results in a WMIQueryResults class and, by default, expects the domain user's full name to be in lastname, firstname order, although this can be changed easily enough.
Hope you find it useful.
****************Copy and Paste the code Below********************
using System;using System.Management;
namespace Util.WMI{ #region WMIQueryResults Return Result class /// <summary> /// Class to hold the results of the WMI Query /// </summary> public class WMIQueryResults { #region Private Variables
private string m_FullyQualifiedUserName = null; private string m_DefaultDomainController = null; private string m_DomainName = null; private string m_UserName = null; private string m_FullName = null; private string m_userFirstName = null; private string m_userLastName = null;
#endregion
#region Public Properties
/// <summary> /// Fully Qualified User Name. Typically in the format of DOMAIN\UserName /// </summary> public string FullyQualifiedUserName { get { return m_FullyQualifiedUserName; } set { m_FullyQualifiedUserName = value; } }
/// <summary> /// Default Domain controller. This does not necessarily represent the default domain and will typically represent the /// machine name that is the PDC (primary domain controller) /// </summary> public string DefaultDomainController { get { return m_DefaultDomainController; } set { m_DefaultDomainController = value; } }
/// <summary> /// The Domain Name of the current user. If the fully qualified user name contains a domain identifier, then this is /// stripped out and returned, otherwise the domain controller is returned. /// </summary> public string DomainName { get { return m_DomainName; } set { m_DomainName = value; } }
/// <summary> /// The username portion only. This is username part of the fully qualified user name minus the domain identifier. /// </summary> public string Username { get { return m_UserName; } set { m_UserName = value; } }
/// <summary> /// This is the users full name as gathered from the WinNT/Active directory database/controller. /// </summary> public string UserFullName { get { return m_FullName; } set { m_FullName = value; } }
/// <summary> /// The Firstname of the user, extracted from the users full name. If no full name was retrieved or found, then this /// property will be blank (Empty string) /// </summary> public string UserFirstName { get { return m_userFirstName; } set { m_userFirstName = value; } }
/// <summary> /// The Lastname of the user, extracted from the users full name. If no full name was retrieved or found, then this /// property will contain the username. /// </summary> public string UserLastName { get { return m_userLastName; } set { m_userLastName = value; } } #endregion } #endregion
#region UserFullNameFormat Enumeration /// <summary> /// This enumeration represents what format the domain holds the current users full name. If the domain lists the users /// name as "LastName, FirstName", then setting that via the 'LastName_FirstName' enumeration causes the WMI utility /// class to recognise which is the first name and which is the last name. An incorrect setting will simply mean that the /// WMI utility class will identify the users last name as their first name. /// </summary> /// <remarks>The default value for the WMIDomainNameUtility class that uses this enumeration is "LastName_FirstName"</remarks> public enum UserFullNameFormat { LastName_FirstName, FirstName_LastName } #endregion
/// <summary> /// This class handles the querying of a users domain, username, and full name as described by the domain. This /// information is gathered using WMI (Windows Management Instrumentation) and may take time to execute given that it /// needs to query the domain controller over the netork for these pieces of information. /// </summary> /// <remarks>The default naming format expected from this class is that the users fullname is in "LastName, FirstName" format. /// You can change the format to "FirstName, LastName" by supplying a ne enumeration value in the constructor. Also, this class /// will strip out any comma's or semi-colons it finds in the name and assumes the names are separated by a space.</remarks> public class WMIDomainUser { #region Private Variables private UserFullNameFormat m_nameFormat = UserFullNameFormat.LastName_FirstName; #endregion
#region Constructors public WMIDomainUser() { }
public WMIDomainUser(UserFullNameFormat fullNameFormat) : this() { m_nameFormat = fullNameFormat; } #endregion
#region QueryCurrentDomainUser Method public WMIQueryResults QueryCurrentDomainUser(bool retrieveFullName) {
WMIQueryResults results = new WMIQueryResults();
try { ManagementObjectSearcher srchr = new ManagementObjectSearcher("SELECT * FROM Win32_ComputerSystem"); ManagementObjectCollection coll = srchr.Get(); foreach (ManagementObject mo in srchr.Get()) { results.FullyQualifiedUserName = (string)mo["UserName"]; results.DefaultDomainController = (string)mo["Domain"]; SplitDomainAndUserName(results); break; // We only want the first one. }
if (retrieveFullName) { ManagementScope msc = new ManagementScope( "root\\cimv2" ); string queryString = String.Format("SELECT * FROM Win32_UserAccount WHERE Domain=\"{0}\" and Name=\"{1}\"",results.DomainName,results.Username);
SelectQuery q = new SelectQuery(queryString); ManagementObjectSearcher query = new ManagementObjectSearcher(msc, q); foreach (ManagementObject mo in query.Get()) { results.UserFullName = (string)mo["Fullname"]; break; // Only want the first one. } }
SplitFirstAndLastName(results); } catch { results.UserFullName = ""; results.Username = ""; results.DomainName = ""; results.DefaultDomainController = ""; }
return results; } #endregion
#region SplitDomainAndUserName private void SplitDomainAndUserName(WMIQueryResults currentQueryResults) { int slashPos = currentQueryResults.FullyQualifiedUserName.IndexOf("\\"); if ( slashPos > 0 && currentQueryResults.FullyQualifiedUserName.Length > 0) { string domain = currentQueryResults.FullyQualifiedUserName.Substring(0,slashPos); string user = currentQueryResults.FullyQualifiedUserName.Substring(slashPos+1,currentQueryResults.FullyQualifiedUserName.Length-slashPos-1); currentQueryResults.DomainName = domain; currentQueryResults.Username = user; } else { currentQueryResults.Username = currentQueryResults.FullyQualifiedUserName; currentQueryResults.DomainName = currentQueryResults.DefaultDomainController; } } #endregion
#region SplitFirstAndLastName private void SplitFirstAndLastName(WMIQueryResults currentQueryResults) { // These name separators get stripped out char[] separators = new char[] {',',';'};
// We assume that the first and last names (in whatever order), are separated by a space System.Text.StringBuilder sb = new System.Text.StringBuilder(); for (int i = 0; i < currentQueryResults.UserFullName.Length; i++) { if (Array.IndexOf(separators,currentQueryResults.UserFullName[i]) == -1) // char not found, so add it in. sb.Append(currentQueryResults.UserFullName[i]); } string strippedName = sb.ToString(); int spacePos = strippedName.IndexOf(" "); if (spacePos >= 0) { string name1 = strippedName.Substring(0,spacePos); string name2 = strippedName.Substring(spacePos+1,strippedName.Length-spacePos-1); if (m_nameFormat == UserFullNameFormat.LastName_FirstName) { currentQueryResults.UserFirstName = name2; currentQueryResults.UserLastName = name1; } else { currentQueryResults.UserFirstName = name1; currentQueryResults.UserLastName = name2; } } else { currentQueryResults.UserFirstName = ""; currentQueryResults.UserLastName = currentQueryResults.UserFullName; } } #endregion }}
You're kidding? This task is what System.DirectoryServices is designed to do...
This is not for ActiveDirectory though. The current environment is a traditional NT4 domain.
Does that still qualify for your stamp of dis-approval?
I should also qualify that by saying I did try using methods from that namespace; however, I ran into errors along the lines of "Unable to query" or "This resource does not support querying" or some such error. I assumed the admins had disabled those features (if they are even available under the current environment - it may be a mix... I am not 100% sure... it's quite large), and the scope of the project that used this code did not justify diagnosing and rectifying this at the domain level, hence the code you see above, which did work.
|
http://weblogs.asp.net/pglavich/archive/2004/05/25/141161.aspx
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
transient keyword
Friend,
The transient keyword is used to indicate that a member variable should not be serialized when the class instance containing that transient variable is written to a persistent store; if the variable is declared as transient, its value is simply skipped.
Java Transient Variable
Java Transient Variables
Before learning about transient variables you should first understand serialization, which marks an object as persistent. You can also say that a transient variable is a variable whose state is not serialized.
An example of transient
transient variables in java
transient variables in java hello,
What are transient variables in java?
hii,
Transient variables are variables that are not serialized.
Keyword- transient
runtime system.
we use transient in a class or instance variable declaration...
Keyword- transient
The transient keyword is used to indicate to the JVM that the indicated
What is Transient variable in Java
A transient variable in Java indicates that a particular field should not be serialized when the class instance containing that field is serialized. To keep a field from being converted into the serialized form, one must declare the variable with the transient keyword.
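A small, self-contained sketch (the class and field names are made up for illustration) shows the effect: after a serialize/deserialize round trip the transient field comes back with its default value, while the ordinary field is restored:

import java.io.*;

class User implements Serializable {
    String name = "alice";
    transient String password = "secret"; // skipped during serialization

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(new User());
        out.flush();

        User copy = (User) new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray())).readObject();
        System.out.println(copy.name);     // alice
        System.out.println(copy.password); // null -- the transient field was not saved
    }
}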
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava
variable for
your class name.
Q 11 : What... only to the unique instances,
* permits a variable number of instances
CoreJava
Serialize the static variable
Serialize the static variable hello,
Can we serialize the static variable?
hii
Static variables are not serialized by default, because serialization saves instance state and a static field belongs to the class rather than to any instance.
If you don't want an instance field to be serialized, declare it as transient.
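A quick sketch (names are illustrative) of what actually happens: a static field belongs to the class and is simply never written to the stream, a transient instance field is explicitly skipped, and everything else is saved:

import java.io.*;

class Settings implements Serializable {
    static String appName = "demo"; // class-level state: never written by serialization
    transient int cache = 42;       // instance state that is explicitly skipped
    int id = 7;                     // instance state that is serialized normally
}

class StaticSerializationDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(new Settings());
        out.flush();

        Settings copy = (Settings) new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray())).readObject();
        System.out.println(copy.id);    // 7 -- serialized and restored
        System.out.println(copy.cache); // 0 -- transient, reset to its default
        // Settings.appName was never part of the stream; it is simply whatever the
        // class currently holds in this JVM.
    }
}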
CoreJava Project
CoreJava Project Hi Sir,
I need a simple project(using core Java, Swings, JDBC) on core Java... If you have please send to my account
transient keyword - Java Beginners
transient keyword what is transient keyword in java.? plz explain with example
Transient and detached objects
Transient and detached objects Explain the difference between transient (i.e. newly instantiated) and detached objects in hibernate
Pointer a variable
Pointer a variable hii,
Pointer is a variable or not ?
hello,
Yes, a pointer is a variable
Java variable
Java variable To what value is a variable of the String type automatically initialized
Converting jsp variable to java variable
Converting jsp variable to java variable Hi how to convert java script variable to java variable on same jsp page
Passing variable
Passing variable How to pass variable to HTTP object in HTMl or PHP
final variable
final variable Please give some easiest and understanding example program for final variable
environment variable
environment variable what are the environment variable is need to run j2ee application in eclipse could you explain the details
environment variable
environment variable what is environment variable in java?
Environment variables are used by the Operating System to pass configuration information to applications
Static Variable
Static Variable What is the basic use of introducing a static variable? A static variable is resident in memory once for the class rather than once per instance; it means that the static variable is shared among all instances of the class.
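A short illustrative sketch (names made up) of that sharing:

class Counter {
    static int shared = 0; // one copy for the whole class
    int own = 0;           // one copy per instance

    void bump() { shared++; own++; }

    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = new Counter();
        a.bump();
        b.bump();
        System.out.println(Counter.shared);      // 2 -- both instances updated the same field
        System.out.println(a.own + " " + b.own); // 1 1 -- each instance has its own copy
    }
}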
Application Variable
anyone show a simple sample code of storing the connection data in a variable so
MXML variable
how to assign javascript variable value to a jsp variable
how to assign javascript variable value to a jsp variable how to assign javascript variable value to a jsp variable
Variable Names
Java Notes
Variable Names
Basic variable naming conventions
Choosing good names is probably..., these variable naming conventions are almost always used. Here is what
define string variable in php
define string variable in php how to define string variable in PHP
global variable in objective c
global variable in objective c Declaring global variable in objective c
php variable functions
php variable functions Setting a variable in a class function, where can I use that variable
static variable in php
static variable in php How can i call a global variable as a static use of a abstract variable?
What is use of a abstract variable? Hi,
What is use of a abstract variable?
thanks
php variable scope
php variable scope How can i access a variable from scope of another function
php variable in javascript
php variable in javascript Access php variables in Javascipt or Jquery rather than php echo $variable
variable types in php - PHP
variable types in php Do i need to explicitly specify the variable type in PHP
variable function in php
variable function in php Can any one tell me how to write a variable function in PHP
STATIC VARIABLE DECLARATION
STATIC VARIABLE DECLARATION why cannot the static variable declare inside the method
variable declaration in c and c++
variable declaration in c and c++ Comparison and an example of variable declaration in C and C
What is the importance of static variable?
What is the importance of static variable? hi,
What is the importance of static variable?
thanks
php a href variable
php a href variable href variable PHP.. please explain
php define variable
php define variable How to define a simple and a global variable
Create Session Variable PHP
Create Session Variable PHP hi,
Write a program which shows the example of how to create session variable in PHP. Please suggest the reference... session Variable in PHP. May be this reference will solve your query.
Thanks
creating a global variable with php
creating a global variable with php Is it possible to create a variable to declare as global in case of database connectivity?
Yes, if you want to have access of that variable anywhere in the program. See the example
Java Final Variable
Java Final Variable I have some confusion --Final variable has single copy for each object so can we access final variable through class name,or it is necessary to create object for accessing final variable
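To illustrate the distinction (sketch only, names made up): a static final constant has a single copy and is accessed through the class name, while a non-static final field has one copy per object and therefore needs an instance:

class Circle {
    static final double PI = 3.14159; // one copy for the class: access as Circle.PI, no object needed
    final double radius;              // one copy per object, fixed when the object is constructed

    Circle(double radius) { this.radius = radius; }

    public static void main(String[] args) {
        System.out.println(Circle.PI);              // class name is enough
        System.out.println(new Circle(2.0).radius); // an instance is required
    }
}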
variable for cookie.setDomain - JSP-Servlet
variable for cookie.setDomain I want to create a variable to use inside the following statement:
cookie.setDomain(".mydomain.com");
Users will be hitting different sites using the same code for all.
Any suggestions would
using variable loop
using variable loop program to print alphabets from a-z along with ASCII codes of each alphabets in two columns using a character variable loop...can anyone help me
add a property to instance variable
add a property to instance variable How to add a property to instance variable in iPhone SDK
Session Variable in Wicket
Session Variable in Wicket can anyone tell me about sesion variables in wicket ?
thanks
JSP Create Variable
JSP Create Variable
JSP Create Variable is used to create a variable in a JSP page. Java code can be included either in a declaration, written as <%! ... %>, or in a scriptlet, written as <% ... %>.
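A minimal illustrative JSP fragment (the variable names are made up) showing the difference between the two:

<%-- declaration: becomes a field of the generated servlet class (shared across requests) --%>
<%! int pageHits = 0; %>

<%-- scriptlet: ordinary Java statements inside the service method --%>
<% pageHits++; int hitsNow = pageHits; %>

<%-- expression: writes a value into the response --%>
Hits so far: <%= hitsNow %>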
you pass a variable by value.
you pass a variable by value. How do you pass a variable by value?
Hi friends,
Just like in C++, put an ampersand in front of it, like $a = &$b
/*--------------Pass By value----------------*/
function add
Store Variable in Java
Store Variable in Java How to store Variables in Java?
public class DoubleTest {
public static void main(String[] args) {
double...(aDouble); // Outputs 7.21
}
}
Variable in Java
JavaScript display variable in alert
JavaScript display variable in alert How to display variable... variable is declared by using var keyword.This variable can hold any type of value... put variable or any statement
in alert which you want to display
Extracting variable from jsp
Extracting variable from jsp how to Write a java program which will be extracting the variables and putting them in an excel format?
The given code allow the user to enter some fields and using the POI API
I want to store the value of local variable into Global variable
I want to store the value of local variable into Global variable <%=cnt%>=x; is it a valid Statement
javascript variable value insertion in DB
javascript variable value insertion in DB how can I insert javascript variable value into database using php
What is a local, member and a class variable?
What is a local, member and a class variable? Hi,
What is a local, member and a class variable?
thanks
PHP Variable
One of the most important things in PHP is that we can declare a variable of any type anywhere.
PHP Variable Example 1:
<?php
$var = "This is a string value"; // declaration as string variable
echo "Value of \$var
Setting Variable Scope
Setting Variable Scope
In this section we will learn about the different scope of the JSP variables.
While developing the program you can store the variables in different scope.
If you store the values in the application scope
|
http://www.roseindia.net/tutorialhelp/comment/83367
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
XMonad.Layout.Monitor
Contents
Description
Layout modifier for displaying some window (monitor) above other windows
Synopsis
- data Monitor a = Monitor {
- monitor :: Monitor a
- data Property
- data MonitorMessage
- doHideIgnore :: ManageHook
- manageMonitor :: Monitor a -> ManageHook
Usage
You can use this module with the following in your
~/.xmonad/xmonad.hs:
import XMonad.Layout.Monitor
Define Monitor record. monitor can be used as a template. At least prop and rect should be set here. Also consider setting persistent to True.
Minimal example:
myMonitor = monitor { prop = ClassName "SomeClass"
                    , rect = Rectangle 0 0 40 20 -- rectangle 40x20 in upper left corner
                    }
More interesting example:
clock = monitor {
     -- Cairo-clock creates 2 windows with the same classname, thus also using title
       prop = ClassName "Cairo-clock" `And` Title "MacSlow's Cairo-Clock"
     -- rectangle 150x150 in lower right corner, assuming 1280x800 resolution
     , rect = Rectangle (1280-150) (800-150) 150 150
     -- avoid flickering
     , persistent = True
     -- make the window transparent
     , opacity = 0.6
     -- hide on start
     , visible = False
     -- assign it a name to be able to toggle it independently of others
     , name = "clock"
     }
Add ManageHook to de-manage monitor windows and apply opacity settings.
manageHook = myManageHook <+> manageMonitor clock
Apply layout modifier.
myLayout = ModifiedLayout clock $ tall ||| Full ||| ...
After that, if there exists a window with the specified properties, it will be displayed on top of all tiled (not floating) windows at the specified position.
It's also useful to add some keybinding to toggle monitor visibility:
, ((mod1Mask, xK_u ), broadcastMessage ToggleMonitor >> refresh)
Screenshot:
Hints and issues
- This module assumes that only one window satisfying the property exists.
- If your monitor is available on all layouts, set persistent to True to avoid unnecessary flickering. You can still toggle the monitor with a keybinding.
- You can use several monitors with nested modifiers. Give them names so you can toggle them independently of each other.
monitor :: Monitor a
Most of the property constructors are quite self-explanatory.
Constructors
Instances
data MonitorMessage
Messages without names affect all monitors. Messages with names affect only monitors whose names match.
Constructors
Instances
doHideIgnore :: ManageHook
Hides window and ignores it.
manageMonitor :: Monitor a -> ManageHook
ManageHook which demanages monitor window and applies opacity settings.
TODO
- make Monitor remember the window it manages
- specify position relative to the screen
|
http://xmonad.org/xmonad-docs/xmonad-contrib/XMonad-Layout-Monitor.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Overview
JBoss AS 6.0.0.M2 which has been released on Feb 16th 2010, contains the initial support for EJB3.1. More specifically, it includes support for:
EJB3.1 no-interface view
EJB deployment through .war files
What to download and how to use
JBoss AS 6.0.0.M2 can be downloaded from here. After downloading, start and stop the server once to ensure that it boots fine.
The next step would be to deploy a EJB3.1 app into this server. Let's first look at a simple EJB3.1 nointerface view bean:
EJB3.1 no-interface view:
package org.jboss.ejb3.nointerface.example; import javax.ejb.Stateless; @Stateless public class Calculator { public int subtract(int a, int b) { return a - b; } public int add(int a, int b) { return a + b; } }
That's it for a no-interface view EJB. Now let's write a client which uses this no-interface view bean. Remember that the no-interface view is a local view, which means that the client has to run in the same JVM as the bean. So for the sake of simplicity, in this example, let's create another bean which acts as a client of this no-interface view bean. Here's the AccountManagerBean stateless bean which exposes a @Remote view:
package org.jboss.ejb3.nointerface.example; public interface AccountManager { /** * Credits the amount from the account corresponding to the * accountNumber * * @param accountNumber Account number * @param amount Amount to be credited * @return */ int credit(long accountNumber, int amount); /** * Debits the amount from the account corresponding to the * accountNumber * * @param accountNumber Account number * @param amount Amount to be debited * @return */ int debit(long accountNumber, int amount); }
package org.jboss.ejb3.nointerface.example; import javax.ejb.EJB; import javax.ejb.Remote; import javax.ejb.Stateless; import javax.naming.Context; import javax.naming.InitialContext; import javax.naming.NamingException; @Stateless @Remote(AccountManager.class) public class AccountManagerBean implements AccountManager { /** * Inject the no-interface view of the Calculator */ @EJB private Calculator simpleCalculator; /** * @see org.jboss.ejb3.nointerface.integration.test.common.AccountManager#credit(int) */ @Override public int credit(long accountNumber, int amount) { // get current account balance of this account number, from DB. // But for this example let's just hardcode it int currentBalance = 100; Calculator calculator = null; // lookup the no-interface view of the Calculator // We could have used the injected Calculator too, but // in this method we wanted to demonstrate how to lookup an no-interface view try { Context context = new InitialContext(); calculator = (Calculator) context.lookup(Calculator.class.getSimpleName() + "/no-interface"); } catch (NamingException ne) { throw new RuntimeException("Could not lookup no-interface view of calculator: ", ne); } return calculator.add(currentBalance, amount); } /** * @see org.jboss.ejb3.nointerface.integration.test.common.AccountManager#debit(int) */ @Override public int debit(long accountNumber, int amount) { // get current account balance of this account number, from DB. // But for this example let's just hardcode it int currentBalance = 100; // let's use the injected calculator return this.simpleCalculator.subtract(currentBalance, amount); } }
The AccountManagerBean has 2 methods, each of which uses the no-interface view Calculator bean. The credit() method looks up the no-interface view by the JNDI name, whereas the debit() method uses an injected reference of the no-interface view Calculator. These 2 methods of the AccountManagerBean demonstrate the 2 ways in which you can get hold of the no-interface view of the bean.
Now package all these classes into a .jar and deploy it to the server you built earlier. That's it! You now have the beans deployed on the server. The next step is to write a simple client which accesses the AccountManager to perform the operations. In this example, let's use a standalone java class which, through its main() method, looks up the remote view of the AccountManagerBean and invokes the operations. Remember that the AccountManagerBean is NOT a no-interface view bean and hence can be accessed remotely (i.e. from the standalone java client). Here's our client:
package org.jboss.ejb3.nointerface.example.client;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

import org.jboss.ejb3.nointerface.example.AccountManager;
import org.jboss.ejb3.nointerface.example.AccountManagerBean;

public class Client
{
   /**
    * Simple test client to be used in the no-interface view example
    *
    * @param args
    */
   public static void main(String[] args)
   {
      AccountManager accountManager = null;
      // lookup the account manager bean
      try
      {
         Context context = new InitialContext();
         accountManager = (AccountManager) context.lookup(AccountManagerBean.class.getSimpleName() + "/remote");
      }
      catch (NamingException ne)
      {
         throw new RuntimeException("Could not lookup AccountManagerBean: ", ne);
      }

      long dummyAccountNumber = 123;
      // credit 50 dollars (Note that the current balance is hard coded in the bean to 100)
      // so after crediting, the current balance is going to be 150
      int currentBalance = accountManager.credit(dummyAccountNumber, 50);
      System.out.println("Current balance after crediting 50$ is " + currentBalance);

      // now let's debit 10 dollars (Note that the current balance is again hard coded in the bean to 100).
      // So after debiting, the current balance is going to be 90
      currentBalance = accountManager.debit(dummyAccountNumber, 10);
      System.out.println("Current balance after debiting 10$ is " + currentBalance);
   }
}
What should I try next?
The above example was just to get you started with EJB3.1 no-interface view. Try out the no-interface view within your own applications and let us know if you run into any issues. Feel free to start a discussion about any issues around this, in our EJB3 user forum or ping us on IRC. The more issues you find now, the better - because we can get some of them fixed before AS 6.0.0.M2 is released.
I have some tutorial for no-interface, Can I contribute?
Similar to our other EJB3 tutorials, we are going to include a tutorial for the no-interface view. In fact, the example that is posted here in the wiki can perhaps just be added as a tutorial in SVN. So if any one of you wants to contribute a tutorial (either the one that's here or any better one) and a chapter in our guide, then feel free to let us know - either through the forums or IRC.
Deployment of EJBs through a .war file:
Now that we have seen the no-interface view example, let's move on to the next EJB3.1 feature that's being included in 6.0.0.M2 (and is currently available in AS trunk). The EJB3.1 spec lets .war files contain EJBs, so now you can deploy your EJBs through .war files. EJBs in .war files have to be packaged in one of the following ways:
In .war/WEB-INF/classes
In .war/WEB-INF/lib/somejar.jar
A .war/WEB-INF/ejb-jar.xml
(For the complete details about the deployment packaging, please refer to section 20.4 of EJB3.1 spec)
Let's consider an example for this. Let's first see our no-interface view calculator:
package org.jboss.ejb3.war.deployment.example; import javax.ejb.Stateless; /** * CalculatorInWEBINFClasses * * A no-interface view bean which will be placed in the .war/WEB-INF/classes folder. * */ @Stateless public class CalculatorInWEBINFClasses { public int add (int a, int b) { return a + b; } public int subtract (int a, int b) { return a - b; } }
This is a simple no-interface view Stateless bean which we will be placing in the .war/WEB-INF/classes folder.
Now let's see another bean which uses this calculator no-interface view. It's a Stateful counter bean which exposes a remote view:
package org.jboss.ejb3.war.deployment.example; /** * Counter * */ public interface Counter { int increment(); int decrement(); }
package org.jboss.ejb3.war.deployment.example; import javax.ejb.EJB; import javax.ejb.Remote; import javax.ejb.Stateful; /** * CounterBeanInWEBINFLibJar * * A Stateful bean configured deployed through a jar file in .war/WEB-INF/lib folder. * */ @Stateful @Remote (Counter.class) public class CounterBeanInWEBINFLibJar implements Counter { private int count = 0; /** * Inject the no-interface view bean */ @EJB private CalculatorInWEBINFClasses calculator; @Override public int decrement() { this.count = this.calculator.subtract(this.count, 1); return this.count; } @Override public int increment() { this.count = this.calculator.add(this.count, 1); return this.count; } }
So we have the CounterBeanInWEBINFLibJar bean which is @Stateful and uses the no-interface view calculator bean (see the @EJB injection):
/** * Inject the no-interface view bean */ @EJB private CalculatorInWEBINFClasses calculator;
We'll be packaging this CounterBeanInWEBINFLibJar bean class in a jar file (let's call it my-ejb3-library.jar) and placing that jar file in .war/WEB-INF/lib folder.
These 2 beans should be enough for testing our deployment, but for the sake of showing that we can even package an ejb-jar.xml in the .war/WEB-INF folder, let's add one more bean to this deployment and use an ejb-jar.xml to configure it. So here's the DelegateBean which just delegates the calls to the CounterBeanInWEBINFLibJar bean:
package org.jboss.ejb3.war.deployment.example; import javax.ejb.EJB; /** * DelegateBean * * A Stateful bean configured through a ejb-jar.xml in .war/WEB-INF folder. This bean * just delegates the calls to a bean ({@link CounterBeanInWEBINFLibJar}) which is deployed * in the .war/WEB-INF/lib/.jar * */ public class DelegateBean implements Counter { /** * Inject the remote view of CounterBeanInWEBINFLibJar */ @EJB(beanName = "CounterBeanInWEBINFLibJar") private Counter counterBean; @Override public int decrement() { return this.counterBean.decrement(); } @Override public int increment() { return this.counterBean.increment(); } }
Notice that we haven't used any @Stateful/@Stateless annotations on this bean. We could have used them, but since we want to show the ejb-jar.xml usage, we decided not to. So here's the ejb-jar.xml for this bean:
<?xml version="1.0" encoding="UTF-8"?> <ejb-jar <enterprise-beans> <session> <ejb-name>DelegateBean</ejb-name> <business-remote>org.jboss.test.ejb3.war.deployment.Counter</business-remote> <ejb-class>org.jboss.test.ejb3.war.deployment.DelegateBean</ejb-class> <session-type>Stateful</session-type> </session> </enterprise-beans> </ejb-jar>
This ejb-jar.xml will be placed in .war/WEB-INF folder.
With this, we now have 3 beans. Let's finally write a simple client which looks up the remote view of the DelegateBean and invokes a method on it:
package org.jboss.ejb3.war.deployment.example.client;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

import org.jboss.ejb3.war.deployment.example.Counter;

/**
 * Client
 *
 * @author Jaikiran Pai
 * @version $Revision: $
 */
public class Client
{
   /**
    * Simple test client to be used in the .war deployment example
    *
    * @param args
    */
   public static void main(String[] args) throws NamingException
   {
      Context ctx = new InitialContext();
      Counter counter = (Counter) ctx.lookup("DelegateBean/remote");

      int count = counter.increment();
      System.out.println("Count after increment is: " + count);
      // increment one more time
      count = counter.increment();
      System.out.println("Count after second increment is: " + count);

      // now decrement
      count = counter.decrement();
      System.out.println("Count after decrement is: " + count);
      // decrement one more time
      count = counter.decrement();
      System.out.println("Count after second decrement is: " + count);
   }
}
To summarize the flow, here's how it will all look like:
Client -> (Stateful) DelegateBean -> (Stateful) CounterBeanInWEBINFLibJar -> (no-interface view) CalculatorInWEBINFClasses
and the packaging will look like this:
my-ejb3-app.war
|
|--- WEB-INF
|      |
|      |--- web.xml
|      |--- ejb-jar.xml (contains DelegateBean *configuration*)
|      |
|      |--- lib
|      |     |
|      |     |--- my-ejb3-library.jar (contains annotated CounterBeanInWEBINFLibJar bean)
|      |     |        |
|      |     |        |--- org.jboss.ejb3.war.deployment.example.CounterBeanInWEBINFLibJar
|      |
|      |--- classes (contains annotated CalculatorInWEBINFClasses bean, Counter interface and the DelegateBean class)
|      |       |
|      |       |--- org.jboss.ejb3.war.deployment.example.CalculatorInWEBINFClasses
|      |       |--- org.jboss.ejb3.war.deployment.example.Counter
|      |       |--- org.jboss.ejb3.war.deployment.example.DelegateBean
So that's how the deployment looks. If you look closely, you will notice that we have a web.xml in that .war file. The important bit about the web.xml is that it should use the web-app_3_0.xsd to indicate that it's a 3.0 web-app. This is very important because the EJB deployments will be skipped if this is not a 3.0 web-app. At the very least, the web.xml should contain:
<?xml version="1.0" encoding="UTF-8"?> <web-app </web-app>
We think that adding the web-app_3_0.xsd declaration is going to be one of the common things that users forget while writing the application. So we have added this to our EJB3.1 FAQ
So that's it for our .war deployment. Place that .war in the AS deploy folder and run the (standalone) client program to test the application.
What next?
This was just a simple (non-practical) example to get you started with deploying EJBs through .war files. Try out your own application .war deployments and let us know if you run into any issue. Feel free to start a discussion about any issues around this, in our EJB3 user forum or ping us on IRC. To repeat myself - the more issues you find now, the better - because we can get some of them fixed before AS 6.0.0.M2 is released
I have some tutorial for EJB deployments through .war file, Can I contribute?
We are going to include a .war deployment example, similar to the one explained here, in our EJB3 tutorials. We might use this same example that is posted here in the wiki. But if you have a better example and want to contribute, then feel free to let us know - either through the forums or IRC.
Next steps for EJB3.1 support in AS-6
We'll be adding more and more EJB3.1 support to AS-6. We are planning to deliver the next set of features incrementally before 6.0.0.M3. So stay tuned!
|
https://community.jboss.org/wiki/EJB31inAS600M2
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Symbol handling issues and improvements
This page describes the issues GDB has with symbol handling, and the improvements we're thinking of making. For the purposes of this page "symbol handling" is a catch-all that incorporates all things related to symbols and debug information.
This page is maintained by Doug Evans ([email protected]), with input from the community (notably [email protected]).
Getting the code / Helping
Discussions are held on the main GDB mailing lists. Patches should be posted to the [email protected] mailing list. Work is being committed directly to the mainline (i.e., there's no special feature branch).
For testing, run the testsuite using your desktop o/s of choice, and make sure there are no regressions. amd64-linux and i386-linux are generally important platforms to not break.
Issues
For now this is just a raw list of unordered issues, "to get things down on paper". It is certainly an incomplete list.
Memory usage
Memory usage is a real problem, with multiple facets.
- Worst case is GDB will grow to use all available swap for very large programs
(here a "very large" program is roughly, say >=1G of debug info in the ELF binary).
Memory used to create both "partial symbols" and "full symbols" can probably be improved on. As can minsyms. That's potentially three copies. Symbols can be shrunk a bit by better packing (see). Also, the obj_section field is redundant and can be removed, saving a word per symbol; see the archer-tromey-remove-obj_section branch.
Storing minsyms involves building their own copy of the demangled form (this is related to PR 12707, see the patch submission).
- Tab-completion on symbols can use excessive amounts of memory. For example, do we need to (prematurely) expand symtabs for C++ parameters? Most of the excessive memory usage affects speed of course too.
The full name expansion (and canonicalization) that the DWARF reader does costs memory and cpu.
BFD waste. E.g.,
- obstack alignment. On amd64 the alignment is 16 bytes because of SSE. However, gdb generally doesn't need that much alignment. Being able to reduce it to 8 bytes saves a measurable amount of memory.
Speed
Speed is another real problem, with multiple facets.
- The ".gdb_index" section greatly improves gdb startup time. For large programs the time to read "minimal symbols" (the ELF symbol table) now dominates and takes enough time to be worth considering improving on.
- The handling of "partial symbols" versus "full symbols" is a source of slowness (and memory usage and complexity). When not using gdb_index, during startup gdb reads the debug info as quickly as possible to create an initial set of symbol tables ("partial symbols"). Then later when the symbol is actually needed gdb reads the debug info again creating symbol tables that gdb ultimately uses ("full symbols").
- Full CU expansion is excessive work. Whether we use gdb_index or not, when we create the "full symbols" we expand the entire CU (DWARF). It should be possible to improve on this.
- Tab-completion on symbols has been really slow in the past, and is still not as fast as it could/should be.
See, e.g., end of Also, even if tab-completion is blazingly fast, dumping 1000s of symbols in the output isn't always what the user wants.
- Symbol lookup is sometimes less efficient than it could be. [This is apart from debug info reading.] For example, the code may try finding a symbol in the static (or global) block list even though it knows the symbol "should" be on the other list. But it tries it anyway "just in case". In large apps this can be painful. It would be better to get it right. Another example is lookup_symbol_aux_objfile (circa December 2012). It pre-expands every symtab matching the symbol, but then the subsequent loop just returns the first one it finds. For static and global symbols this is a waste (one needs to be careful with things like -fshort-double where "double" can be different in different files, but lookup_symbol_aux_objfile can't handle that anyway).
- Rerunning a program shouldn't require rereading debug info for shared libs that haven't changed.
For large apps (say, >1000 shared libs, but even for less) it's unnecessarily painful.
- Single-stepping can be excessively slow. In one profile run, find_pc_sect_psymtab is the main culprit (this is w/o .gdb_index). It is called an inordinate number of times for each step (and an inordinate number of times for the same pc value - maybe some caching will help).
For singlestepping through dynsym resolving code, the PR is The bug turns out to be due to a missing glibc resolver, but the data collected shows some inefficiencies here.
- Watch, carefully, all that GDB does to lookup "int" in things like "watch -l *(int*) $rsp" or "py print gdb.lookup_type("int")" in a large C++ program (with many shared libs, with and without .gdb_index).
- Two calls to lookup_symbol_global ("int"). They may be (relatively) fast (though in large apps, less so), but it's clumsy.
- "int" is in STATIC_BLOCK, but GDB searches GLOBAL_BLOCK first. There's a comment that says we shouldn't *have* to try the other block, but that's not always true.
- When .gdb_index is in use, "int" matches so gdb will expand the symbol table, but the match doesn't take into account the block kind. So gdb will proceed to expand one symbol table from every objfile looking for "int" in GLOBAL_BLOCK, finding it, but not using it.
Only after that is done will GDB try STATIC_BLOCK. In a large app (say >1000 shared libs) this gets painful. A similar excessive expansion can happen with "break foo::bar::baz". [This is obviously also a memory issue.]
- There can be way more TUs (DWARF Type Units) than CUs (DWARF Compilation Units). E.g. 200K vs 8K. The current way TUs are handled can be slow.
E.g.,
- Having headers in the same symtab/psymtab lists as "primary" symtabs often means a lot of iteration for nothing.
- On some systems with NFS-like file systems (overlayfs and whatnot), reading disk can be slower than it otherwise could be. E.g., is there potential wins from being able to tune prefetch options, with flexibility provided by exporting to Python somehow?
- When printing the type of a symbol, the struct type it came from is discarded and we pass plain text to the lookup routines (e.g., during canonicalization). Is this necessary? We lose all the context of where the type came from (for example), and are in essence starting over from scratch.
- A canonical way gdb does symbol lookup is to expand all "matching symtabs", and then do a search over all symtabs. E.g., linespec.c:iterate_over_all_matching_symtabs (circa February 2013). Why not collect a list of matching symtabs and only search those? Another example of a clumsy API successfully hiding performance issues?
- When setting a breakpoint on namespace::class::method (or just class::method), we first lookup class (though we do it twice: once in VAR_DOMAIN and once in STRUCT_DOMAIN, ref: linespec.c:lookup_prefix_sym circa February 2013). The lookup uses expand_symtabs_matching which iterates over all symbol table slots (in the case of .gdb_index). There's no need for this generality here since we're looking up a specific name, and thus should be able to hash the name and quickly find it in the index's symtab. Large apps can have 4M symtab slots (or more). Another example of a clumsy API successfully hiding performance issues?
Bugs
This section is a random collection of known bugs.
"info var" doesn't find LOC_UNRESOLVED var:
- bfd caches files, and can close and reopen them behind gdb's back.
If the file has changed in the interim this can lead to incorrect behaviour:
'info variable' and 'info functions' very slow and memory consuming:
- gdb's handling of files compiled with a mix of things like with/without -fshort-double is broken. If double isn't defined by the current CU gdb will pick the first it finds, which will return randomly 4 or 8 for sizeof(double). gdb should first look in the current CU and if not found there try its builtin types list (and then continue as before if the symbol is not a builtin type).
Annoyances
This section is a random collection of annoyances that don't fit anywhere else (yet).
- The error message "warning: (Internal error: pc 0x19 in read in psymtab, but not in symtab.)" often appears, is generally useless to the user, and often ignorable. (I haven't seen this in a long, long time. It indicates a bug in the psymtab reader, so a reproducer would be very helpful.)
- GDB doesn't warn when the debug info it is using doesn't match the binary (plus possible core) being debugged. In practice it can be less of a problem with the main binary and more of a problem with the shared libs being used. This leaves the user with a false sense of confidence in what gdb prints, e.g., in backtraces, and frustration trying to figure out what is wrong.
- Lazy expansion can cause gdb to change its behaviour, based on what commands the user types and in what sequence. This shouldn't happen, so as we make things more lazy we should take care to catch and minimize the frequency of these kinds of bugs.
- When looking up linespecs, say to set a breakpoint, I (dje) have seen GDB throw away information it already has (obj_section?) only to go look it up again. It mightn't always slow things down (though for long operations (info func regex?) it may be a problem), but such clumsiness makes the code harder to understand/maintain.
- Symbol lookup, besides sometimes being slow, is just clumsy and in need of some clean up. There needs to be a cleaner API that the implementation (e.g. psyms) hides behind. Language dependencies are strewn throughout. The global "block_found" symbol, and is_a_field_of_this are all annoying. It would be much cleaner if the symtab API just concerned itself with the structure of the symbol tables and left all language-specific lookup rules to the language code.
- minsyms::filename seems barely useful. It is only used by stabs; it would be better if only stabs users paid for this.
- The DWARF reader currently stores demangled syms in the mangled entry of the symbol struct, and leaves the demangled entry as NULL. One thought is to go back to storing both. (There's a patch for this.)
- GDB records runtime offsets in symbol locations. This prevents symbols from being shared across inferiors. There is some ongoing work in this area, but it is a long process.
- One can print a specific case of a variable used in multiple locations with "print filename::varname". It would be useful to also be able to do "print objfile::varname" and "print "objfile::filename::varname".
- Some types live in VAR_DOMAIN. Functions live in VAR_DOMAIN. VAR_DOMAIN covers so much that as a tool for narrowing down the search, it's not very useful. XXX_DOMAIN is a historical C artifact. Is there something better for a multi-language world? There is also the symbol_matches_domain() hack to make, e.g., c++ classes appear in STRUCT_DOMAIN and VAR_DOMAIN.
- Calling psymtab_search_name in lookup_partial_symbol is clumsy. [Gets repetitively done for each psymtab.]
- The handling of include files as non-primary symtabs is clumsy.
- Complaints in debug info readers are generally ignored.
- Complaints and errors from the DWARF reader should generally mention at least the objfile name and the DIE offset. Currently, if you see the message, it is still a bit of work to track down the problem. There is at least one PR open about this.
- Errors when reading debug info could be handled more gracefully (i.e., not abort loading of the file). (This was partly addressed by the PR 14931 fix.)
- The strcmp_iw function is a bit of a wart. A symbol table redesign (e.g., hierarchical) could allow removing it.
- check_typedef is a constant source of pain. Maybe a necessary evil, but IWBN to see if there's a better way. Plus, it doesn't just do typedef dereferencing, it also handles opaque type lookup (IIRC - this one was added much later after looking into it). Handling opaque type lookup isn't bad, per se, but it's not expected given the name "check_typedef".
Related issues
These issues may not be directly related to symbol/debug info handling, but they're tangentially related, and so documented here.
- Linespecs have a few problems.
- "break foo:bar" Is "foo" is a C source file (gcc -x c foo) or a function?
- "break foo" may currently resolve to the main binary, and is the intuitive way to specify that. But gdb will try setting that breakpoint on each shared library it opens as well. [This can tie in with "final" breakpoints.]
- Separate debug file objfiles are kept in the same list as the "real" objfile.
Ideas
This section describes some ideas we have. They're just ideas, not anything even remotely cast in concrete.
Lazier CU reading
When we need full symbols, we expand the entire CU that contains the thing we need. We could be smarter and only expand the part we need (or some small but useful superset if it simplifies the implementation at reasonable cost).
Lazier type expansion
Expanding TU's to resolve DW_FORM_ref_sig8 could be done lazily. This could be extended to all types.
Smarter TU reading
In large apps there can be way more TUs than CUs (e.g., 200K vs 8K). Since TUs often share abbrev tables, we could sort TUs by the abbrev table they use and thus greatly reduce time spent reading abbrev tables (which shows up high in profiles of gdb startup). In the 200K vs 8K example, the number of TU abbrev tables is ~8K.
In addition to smarter reading, storing source file information better for TUs would be good as they typically share the same info.
One thing to try is share TUs across objfiles.
Hierarchical Symbol Tables
Currently symbol files are source file based. For larger programs this breaks down because, for example, classes and namespaces can be spread out over several files, and it's rather clumsy, for example, to go looking through every source file for elements of a particular class.
Hierarchical symbol tables can also help with lazier CU reading. E.g., we can skip all the children of namespace and class DIEs until we know we need them.
Another thought is that this would let us defer the full name expansion (+ canonicalization) that we do now in the DWARF reader.
Not necessarily tied to Hierarchical Symbol Tables, but supporting doing name expansion on demand would allow us to do things like choose whether to print typedef'd names or the underlying type, and whether omit defaulted template parameters. Tab completion could also take advantage of this (e.g. to avoid symtab expansion).
Some *very* rough timings I (dje) have done suggest we could bring GDB startup time down from 31sec to 15sec in one example large app (200K TUs, 8K CUs, 1G of debug info). 6sec of that is minsym reading btw, so for debug info it's 25sec -> 9sec. I think some other improvements could reduce that number by a few more seconds, but still not what .gdb_index provides.
Combine partial symbols into full symbols
Instead of building partial symbols, and then in turn building full symbols from them, build full symbols to begin with, but just lazily fill out the details.
The details of the combined form aren't spec'd out. The point is to take the best of both, and combine them into one symbol.
Do debug info reading in a separate thread
A lot of the information needed from the debug info (including minsyms from the ELF symbol table) isn't needed right away. It might speed up gdb's startup and response times if such reading was done in the background.
Symbol server
GDB generally only needs a small portion of all of the debug info. In a distributed build environment, it can make sense to leave all that info where it is, instead of (via various means) copying it to the user's desktop. For tab completion, this could still be handled in the server, and only sending the results to GDB. Such a symbol server might even be useful locally if it turns out that reading/processing debug info in a separate thread makes sense (it mightn't be a separate process, it could be just a separate thread).
Is it reasonable to do the Symbol API in such a way that it can be exported to Python, and Python code could talk to the Symbol Server? That would provide some useful flexibility.
My thinking was that the symbol server would serve up a variant on DWARF. In response to a request for a type, or a variable, or a function, it would send back a custom-crafted DWARF CU that holds all the needed information. It could also annotate the DWARF with hash codes for all objects returned, so that gdb could keep a single instance of all returned entities (without needing a stateful session). Finally, the symbol server could use build ids so that it could unify common objects across all the objects it held, without presenting incorrect information to its clients.
Discard symtab expansions when memory is tight
While in general one might want to just let the o/s handle the paging, in worst-case situations it's not possible - all the swap is gone. And even for less than worst-case situations, it can be beneficial to just discard the expansions and reread the debug info when necessary. It's faster to throw something away than to write it to disk, plus debug info is relatively compact compared to its expansion. Whether it will ever be needed again, and how soon ... well, that's the tradeoff.
Maybe add some parameters to control it?
"final" breakpoints
[AIUI] "final" breakpoints have their location assignment finalized and so when reading new shared libs, and more importantly when re-running a program, there is no need to do a general search for new locations (which can be expensive).
Symbol lookup
Have a simple API around the (internalized form of the) debug info (and ELF symtab, minsyms - and yes, when we say ELF symtab we also mean all the other file formats ...), and then have the languages build their semantics on top of that.
Minimal symbols
Can we do without them and just use BFD's symbols for them? (it is tempting, but BFD symbols are often larger than minimal symbols.) Or, similarly, could we bypass BFD entirely and just refer directly to the relevant ELF sections, interpreting on demand? Another alternative is to add them to .gdb_index. Another alternative is to read them in the background, apart from reading debug info.
Once reading of debug info is sped up, like with .gdb_index, reading minsyms dominates (e.g. 6 of 7 seconds spent in gdb start up is spent reading minsyms in one example).
Lots of operations (e.g., setting a breakpoint) involve searching minsyms in addition to the debug info. If the function is described in the debug info, searching minsyms is unnecessary extra effort. To what extent could we have a flag that turns minsyms off (maybe modulo the few places that do need them), and an option to turn them back on as desired?
Standardize a .gdb_index workalike
The LLVM project is working on something very similar to .gdb_index:
They're open to enhancing it where it makes sense. Do we want to replace .gdb_index with that? Is it worth trying to get something like this into the DWARF Standard?
Do not cache symbols
I wonder why we should cache symbols, at all. With caching, I mean any form of symbol object that duplicates data from the debug information. I would instead leave all the data where it is and just use "dwarf pointers" to the data.
A dwarf pointer is essentially either a file or section offset; whereas the former should really be 64bit, we should get away with 32bit section offsets. The base (i.e. the debug information file or section) should always be clear from the context, so we don't waste another 64bit for the pointer. We would, of course, mmap() the respective file or section.
When necessary, the DIE pointed to by the dwarf pointer will be parsed into a temporary symbol object. This object will be destroyed once it is no longer needed. This requires frequent re-parsing. On the other hand, since we're only parsing a single DIE each time, the overhead should be negligible.
A simple pointer won't suffice for lookups, since it would require too much and too frequent re-parsing. But we should be able to extend it in a low-memory-overhead way using the same technique. Instead of copying data, I would again use offsets - from the DIE, this time. The name, for example, can be a 16bit offset from the DIE to its DW_AT_name (where -1 means not present); same for DW_AT_high_pc and DW_AT_low_pc or DW_AT_ranges. We would need a small type enum to select between alternative representations (e.g. high/low pc vs. ranges, or direct string vs. pointer). A symbol like this would only take 12 bytes.
Lazy .gdb_index generation
When debugging programs without .gdb_index, GDB could write separate files containing .gdb_index in the background.
One would want to record (or copy over) a build id to help with versioning problems.
One can imagine a central repository for shared shared-libraries (e.g., system libraries), GDB could look for .gdb_index files in the directories in debug-file-directory. The user will need to be able to specify where to put new files. A default could be ~/.gdb_index. Heh.
Direct expansion of psymtabs
Currently expansion from psymtabs to symtabs is done by scanning the DWARF a second time. This is inefficient, and also leads to bugs when the two readers get out of sync. This can be fixed by instantiating symtabs directly from psymtabs. This idea requires lazy CU and type expansion in order to work properly.
First, we would record a pointer to the DIE with each partial symbol. However, due to the bcache, we would not want to record this directly in the partial symbol, but instead in a separate table. (If memory pressure is an issue here, we can arrange for symtab expansion to free this table.)
Then, expanding a symtab could be done entirely without referring to the DWARF data. In fact, with appropriate changes to struct symbol, we would not even have to copy any data -- we would simply create the symtab and populate it with partially-completed symbols; these symbols would point to their corresponding partial symbols. This would shrink the size of symbols created in this way. (I picture a union here; but really all symbols could be treated this way, with some work, perhaps leading to more memory savings due to increased use of the bcache.)
Lazy CU expansion would let us avoid reading the type DIEs until they were needed by some request. Similarly, we could avoid reading function bodies until needed.
This approach would not immediately help when the index was in use. Lazy CU expansion could still operate, though, letting us avoid some processing while instantiating the CU (I did an experiment where I had the DWARF reader skip function bodies, and this gave a 40% boost during CU expansion); and if necessary we could change the index to record the DIE information.
Split up symbol-based and line-number-based symtabs
At the moment, the symbol data and line-number data is kept together (in struct symtab), with an entry in the symtab list for every file, including every header, with entries for the same CU (DWARF-speak) sharing the same blockvector. This can massively increase the number of entries in the symtab list for large programs. Instead, maybe have separate tables: one for symbol based lookups and one for line-number based lookups. The win is that for symbol based lookups we don't need to skip over non-symtab symtabs (the non-primary ones), and that for line-number based lookups we could do something like have a table based on the file's basename, and only have to iterate over a much smaller set (the basenames_may_differ case would still have to be handled of course).
This may also provide a vehicle for speeding up debug-info reading, though with other improvements the need may not be as great. When doing symbol-based lookups we don't need to build the line table, and when doing line-number based lookups we don't need to read symbols. In practice, there are times when we need both anyway, so that's another reason speeding up this aspect of debug-info reading may not be needed.
|
http://sourceware.org/gdb/wiki/SymbolHandling
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
I just installed a new video card (GTX 680) in my computer (64-bit ubuntu linux v12.04) and installed the latest nvidia driver (v310.19) and updated my GLEW files to v190. I updated my GL and GLX include files from the nvidia driver, and updated my GLEW include and source files from the new GLEW files.
Now my 3D engine doesn't work. BTW, it did run on the new graphics card before I updated the drivers and GLEW (I'm about 99% sure of that).
The problem is, the OpenGL function glGenVertexArrays(1, &vaoid) always returns a value of zero no matter how many times I call the function, and no matter how many VAO identifiers I ask the function to generate.
Does anyone know what this problem is... or might be?
I cut and pasted the section of code where the problem occurs. Note that I just added all those glGetError() calls to debug what's happening. Note that ALL of the glGetError() calls return zero, indicating no errors. However, it seems to me that glGenVertexArrays() returning a value of zero IS an error (at least on its part).
Note that it does return valid identifiers for the IBO and VBO (values == 1 and 2).
Note that this error is happening the first time glGenVertexArrays() is being called after my program starts up. Also note that the same problem happens whether I compile the program into a 32-bit executable or a 64-bit executable (both of which ran before).
My current code is OpenGL v3.20 level. I upgraded my GPU to update my engine to OpenGL and GLSL v4.30 level.
Hmmmm. During startup I print out a whole pile of xwindows, GLX and GL values, and I notice the following string prints out for glGetString(GL_VERSION):
2.1.2 NVIDIA 313.09
What is the 2.1.2 supposed to be? The version of OpenGL? If so, that appears to be a version before the VAO was supported. If so, is there some new call that's now required to specify what version of OpenGL my program intends to access... and maybe it defaults to v2.12 if nothing is specified? (Of course, I don't think there ever was a version 2.12 of OpenGL).
Also, what is the 313.09 supposed to be? The driver was supposed to be 310.19 on the nvidia website (and that is still the version on their website today, so I don't think v313.09 even exists yet).
Code:
u32 vaoie = 0;
u32 vaoif[4];
vaoif[0] = 0; vaoif[1] = 0; vaoif[2] = 0; vaoif[3] = 0;
error = glGetError();
error = glGetError();
glGenBuffers (1, (GLuint*)&iboid);          // OpenGL - create IBO identifier
error = glGetError();
glGenBuffers (1, (GLuint*)&vboid);          // OpenGL - create VBO identifier
error = glGetError();
glGenVertexArrays (1, (GLuint*)&vaoid);     // OpenGL - create VAO identifier
error = glGetError();
glGenVertexArrays (1, (GLuint*)&vaoie);
error = glGetError();
glGenVertexArrays (4, (GLuint*)&vaoif[0]);
error = glGetError();
if (iboid <= 0) { return (CORE_ERROR_INTERNAL); }   // error - invalid IBO identifier
if (vboid <= 0) { return (CORE_ERROR_INTERNAL); }   // error - invalid VBO identifier
if (vaoid <= 0) { return (CORE_ERROR_INTERNAL); }   // error - invalid VAO identifier
|
http://www.opengl.org/discussion_boards/showthread.php/180544-help-problem-with-glGenVertexArrays()?p=1246431&viewfull=1
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Peter Vogel
For the most common scenario—JavaScript in a Web page accessing a Web API service on the same site—discussing security for ASP.NET Web API is almost redundant. Provided that you authenticate your users and authorize access to the Web Forms/Views holding the JavaScript that consumes your services, you’ve probably provided all the security your services need. This is a result of ASP.NET sending the cookies and authentication information it uses to validate page requests as part of any client-side JavaScript requests to your service methods. There’s one exception (and it’s an important one): ASP.NET doesn’t automatically protect you against Cross-Site Request Forgery (CSRF/XSRF) attacks (more on that later).
In addition to CSRF, there are two scenarios when it does make sense to discuss securing your Web API services. The first scenario is when your service is consumed by a client other than a page in the same site as your ApiControllers. Those clients wouldn’t have been authenticated through Forms Authentication and wouldn’t have acquired the cookies and tokens that ASP.NET uses to control access to your services.
The second scenario occurs when you wish to add additional authorization to your services beyond what’s provided through ASP.NET security. The default authorization ASP.NET provides is based on the identity ASP.NET assigns to the request during authentication. You might wish to extend that identity to authorize access based on something other than the identity’s name or role.
Web API gives you a number of choices to address both scenarios. In fact, while I’ll discuss security in the context of accepting Web API requests, because the Web API is based on the same ASP.NET foundation as Web Forms and MVC, the tools that I’ll cover in this article are going to be familiar to anyone who has gone under the hood with security in Web Forms or MVC.
One caveat: While the Web API provides you with several choices for authentication and authorization, security begins with the host, either IIS or a host that you create when self hosting. If, for example, you want to ensure privacy in the communication between a Web API service and the client, then you should, at the very least, turn on SSL. This, however, is a responsibility of the site administrator, rather than the developer. In this article I’m going to ignore the host to concentrate on what a developer can—and should—do to secure a Web API service (and the tools that I’ll discuss here work if SSL is turned on or off).
When a user accesses an ASP.NET site using Forms Authentication, ASP.NET generates a cookie that stipulates the user is authenticated. The browser will continue to send that cookie on every subsequent request to the site, no matter from where that request originates. This opens your site to CSRF attacks, as does any authentication scheme where the browser automatically sends authentication information previously received. If, after your site provides the browser with the security cookie, the user visits some malicious site, then that site can send requests to your service, piggy-backing on the authentication cookie the browser received earlier.
To prevent CSRF attacks, you’ll need to generate antiforgery tokens at the server and embed them in the page to be used in your client-side calls. Microsoft provides the AntiForgery class with a GetToken method that will generate tokens specific to the user who made the request (who may, of course, be the anonymous user). This code generates the two tokens and puts them in the ASP.NET MVC ViewBag where they can be used in the View:
[Authorize(Roles="manager")]
public ActionResult Index()
{
string cookieToken;
string formToken;
AntiForgery.GetTokens(null, out cookieToken, out formToken);
ViewBag.cookieToken = cookieToken;
ViewBag.formToken = formToken;
return View("Index");
}
Any JavaScript calls to the server will need to return the tokens as part of the request (a CSRF site won’t have these tokens and won’t be able to return them). This code, in a View, dynamically generates a JavaScript call that adds the tokens to the request’s headers:
$.ajax("",{
type: "get",
contentType: "application/json",
headers: {
'formToken': '@ViewBag.formToken',
'cookieToken': '@ViewBag.cookieToken' }});
A slightly more complex solution would let you use unobtrusive JavaScript by embedding the tokens in hidden fields in the View. The first step in that process would be to add the tokens to the ViewData dictionary:
ViewData["cookieToken"] = cookieToken;
ViewData["formToken"] = formToken;
Now, in the View, you can embed the data in hidden fields. The HtmlHelper’s Hidden method just needs to be passed the value of a key in ViewDate to generate the right input tag:
@Html.Hidden("formToken")
The resulting input tag will use the ViewData key for the tag’s name and id attributes and put the data retrieved from the ViewData dictionary into the tag’s value attribute. The input tag generated from the previous code would look like this:
<input id="formToken" name="formToken" type="hidden" value="...token..." />
Your JavaScript code (kept in a separate file from the View) can then retrieve the values from the input tags and use them in your ajax call:
$.ajax("", {
type: "get",
contentType: "application/json",
headers: {
'formToken': $("#formToken").val(),
'cookieToken': $("#cookieToken").val()}});
You can achieve the same goals in an ASP.NET Web Forms site by using the RegisterClientScriptBlock method on the ClientScriptManager object (retrievable from the Page’s ClientScript property) to insert JavaScript code with the embedded tokens:
string CodeString = "function CallService(){" +
  "$.ajax('',{" +
  "type: 'get', contentType: 'application/json'," +
  "headers: {'formToken': '" + formToken + "'," +
  "'cookieToken': '" + cookieToken + "'}});}";
this.ClientScript.RegisterClientScriptBlock(
  this.GetType(), "loadCustid", CodeString, true);
Finally, you’ll need to validate the tokens at the server when they’re returned by the JavaScript call. Visual Studio 2012 users who have applied the ASP.NET and Web Tools 2012.2 update will find the new Single-Page Application (SPA) template includes a ValidateHttpAntiForgeryToken filter that can be used on Web API methods. In the absence of that filter, you’ll need to retrieve the tokens and pass them to the AntiForgery class’s Validate method (the Validate method will throw an exception if the tokens aren’t valid or were generated for a different user). The code in Figure 1, used in a Web API service method, retrieves the tokens from the headers and validates them.
Figure 1 Validating CSRF Tokens in a Service Method
public HttpResponseMessage Get(){
  IEnumerable<string> tokens;
  if (Request.Headers.TryGetValues("cookieToken", out tokens))
{
string cookieToken = tokens.First();
Request.Headers.TryGetValues("formToken", out tokens);
string formToken = tokens.First();
AntiForgery.Validate(cookieToken, formToken);
}
else
{
HttpResponseMessage hrm =
new HttpResponseMessage(HttpStatusCode.Unauthorized);
hrm.ReasonPhrase = "CSRF tokens not found";
return hrm;
}
// ... Code to process request ...
Using the ValidateHttpAntiForgeryToken (rather than code inside the method) moves processing to earlier in the cycle (before, for example, model binding), which is a good thing.
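The filter class itself is defined in the SPA template's source rather than in the framework, but once it's in your project, using it is just a matter of decorating the service method (the method body here is only a placeholder):
[ValidateHttpAntiForgeryToken]
public HttpResponseMessage Get()
{
  // By the time this code runs, the filter has already rejected any
  // request whose cookie/form tokens were missing or invalid
  // ... Code to process request ...
}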
This article studiously ignores OAuth. The OAuth specification defines how tokens can be retrieved by a client from a third-party server to be sent to a service that will, in turn, validate the token with the third-party server. A discussion of how to access an OAuth token provider either from the client or the service is beyond the scope for this article.
The initial version of OAuth also isn’t a good match for the Web API. Presumably, one of the primary reasons for using the Web API is to use lighter-weight requests based on REST and JSON. That goal makes the first version of OAuth an unattractive option for Web API services. The tokens specified by the first version of OAuth are bulky and XML-based. Fortunately, OAuth 2.0 introduced a specification for a lighter-weight JSON token that’s more compact than the token from previous versions. Presumably, the techniques discussed in this article could be used to process any OAuth tokens sent to your service.
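As a rough sketch of that idea (my own illustration, not code from the OAuth specification or from the SPA template), a delegating handler could pull an OAuth 2.0 bearer token out of the request's Authorization header and hand it to whatever validation routine your token provider requires; ValidateJsonToken below is a hypothetical placeholder for that routine:
protected override Task<HttpResponseMessage> SendAsync(
  HttpRequestMessage request,
  System.Threading.CancellationToken cancellationToken)
{
  AuthenticationHeaderValue auth = request.Headers.Authorization;
  if (auth != null && auth.Scheme == "Bearer")
  {
    string token = auth.Parameter;
    // Validate the token with your provider (hypothetical helper) and
    // then set Thread.CurrentPrincipal, as shown later in this article
    // IPrincipal principal = ValidateJsonToken(token);
  }
  return base.SendAsync(request, cancellationToken);
}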
The first of the two primary responsibilities you have in securing a Web API service is authentication (the other responsibility being authorization). I’ll assume other issues—privacy, for example—are handled at the host.
Ideally, both authentication and authorization will be performed as early as possible in the Web API pipeline to avoid spending processing cycles on a request you intend to deny. This article’s authentication solutions are used very early in the pipeline—virtually as soon as the request is received. These techniques also allow you to integrate authentication with whatever user lists you’re already maintaining. The authorization techniques discussed can be applied in a variety of places in the pipeline (including as late as in the service method itself) and can work with authentication to authorize requests based on some other criteria than the user’s name or role.
You can support clients who haven’t gone through the Forms Authentication by providing your own authentication method in a custom HTTP module (I’m still assuming here that you’re not authenticating against Windows accounts but against your own list of valid users). There are two major benefits to using an HTTP module: a module participates in HTTP logging and auditing; also, modules are invoked very early in the pipeline. While these are both good things, modules do come with two costs: modules are global and are applied to all requests to the site, not just the Web API requests; also, to use authentication modules, you must host your service in IIS. Later in this article, I’ll discuss using delegating handlers that are invoked only for Web API requests and are host-agnostic.
For this example in using an HTTP module, I assume that IIS is using Basic Authentication and the credentials used to authenticate a user are a username and password, sent by the client (in this article, I’ll ignore Windows certification but will discuss using client certificates). I also assume that the Web API service that I’m protecting is secured using an Authorize attribute such as this, which specifies a user:
public class CustomersController : ApiController
{
[Authorize(Users="Peter")]
public Customer Get()
{
The first step in creating a custom authorization HTTP module is to add a class to your service project that implements the IHttpModule and IDisposable interfaces. In the class’s Init method you’ll need to wire up two events from the HttpApplication object passed to the method. The method you attach to the AuthenticateRequest event will be called when the client’s credentials are presented. But you must also wire up the EndRequest method in order to generate the message that causes the client to send you its credentials. You’ll also need a Dispose method, but you don’t need to put anything in it to support the code used here:
public class PHVHttpAuthentication : IHttpModule, IDisposable
{
public void Init(HttpApplication context)
{
context.AuthenticateRequest += AuthenticateRequests;
context.EndRequest += TriggerCredentials;
}
public void Dispose()
{
}
An HttpClient will send credentials in response to a WWW-Authenticate header that you include in the HTTP response. You should include that header when a request generates a 401 status code (ASP.NET will generate a 401 response code when the client is denied access to a secured service). The header must provide a hint as to the authentication method being used and the realm in which the authentication will apply (the realm can be any arbitrary string and is used to flag to the browser different areas on the server). The code to send that message is what you put in the method wired to the EndRequest event. This example generates a message that specifies that Basic authentication is being used within the PHVIS realm:
private static void TriggerCredentials(object sender, EventArgs e)
{
HttpResponse resp = HttpContext.Current.Response;
if (resp.StatusCode == 401)
{
resp.Headers.Add("WWW-Authenticate", @"Basic realm='PHVIS'");
}
}
Within the method you’ve wired up to the AuthenticateRequest method, you’ll need to retrieve the Authorization headers the client will send as a result of receiving your 401/WWW-Authenticate message:
private static void AuthenticateRequests(object sender,
EventArgs e)
{
string authHeader =
HttpContext.Current. Request.Headers["Authorization"];
if (authHeader != null)
{
Once you’ve determined the client has passed Authorization header elements (and continuing with my earlier assumption that the site is using Basic Authentication), you need to parse out the data holding the username and password. The username and password are Base64-encoded and separated by a colon. This code retrieves the username and password into a two-position string array:
AuthenticationHeaderValue authHeaderVal =
AuthenticationHeaderValue.Parse(authHeader);
if (authHeaderVal.Parameter != null)
{
byte[] unencoded = Convert.FromBase64String(
authHeaderVal.Parameter);
string userpw =
Encoding.GetEncoding("iso-8859-1").GetString(unencoded);
string[] creds = userpw.Split(':');
As this code demonstrates, usernames and passwords are sent in clear-text. If you don’t turn on SSL then your usernames and passwords can be easily captured (and this code works even if SSL is turned on).
The next step is to validate the username and password using whatever mechanism makes sense to you. Regardless of how you validate the request (the code I use in the following example is probably too simple), your final step is to create an identity for the user that will be used in the authorization processes later in the ASP.NET pipeline.
To pass that identity information through the pipeline, you create a GenericIdentity object with the name of the identity you want to assign to the user (in the following code I’ve assumed that identity is the username sent in the header). Once you’ve created the GenericIdentity object, you must put it in the Thread class’s CurrentPrincipal property. ASP.NET also maintains a second security context in the HttpContext object and, if your host is IIS, you must support that by also setting the User property in the HttpContext’s Current property to your GenericIdentity object:
if (creds[0] == "Peter" && creds[1] == "pw")
{
GenericIdentity gi = new GenericIdentity(creds[0]);
Thread.CurrentPrincipal = new GenericPrincipal(gi, null);
HttpContext.Current.User = Thread.CurrentPrincipal;
}
If you want to support role-based security, then you must pass an array of role names as the second parameter to the GenericPrincipal constructor. This example assigns every user to the manager and admin roles:
string[] roles = "manager,admin".Split(',');
Thread.CurrentPrincipal = new GenericPrincipal(gi, roles);
To integrate your HTTP module into your site’s processing, in your project’s web.config file, use the add tag within the modules element. The add tag’s type attribute must be set to a string consisting of the fully qualified class name followed by the assembly name of your module:
<modules>
<add name="myCustomerAuth"
type="SecureWebAPI.PHVHttpAuthentication, SecureWebAPI"/>
</modules>
The GenericIdentity object you created will work with the ASP.NET Authorize attribute. You can also access the GenericIdentity from inside a service method to perform authorization activities. You could, for example, provide different services for logged-in and anonymous users by determining if a user has been authenticated by checking the GenericIdentity object IsAuthenticated property (IsAuthenticated returns false for the Anonymous user):
if (Thread.CurrentPrincipal.Identity.IsAuthenticated)
{
You can retrieve the GenericIdentity object more simply through the User property:
if (User.Identity.IsAuthenticated)
{
In order to consume services protected by this module, a non-JavaScript client must provide an acceptable username and password. To provide those credentials using the .NET HttpClient, you first create an HttpClientHandler object and set its Credentials property to a NetworkCredential object holding the username and password (or set the HttpClientHandler object’s UseDefaultCredentials property to true in order to use the current user’s Windows credentials). You then create your HttpClient object, passing the HttpClientHandler object:
HttpClientHandler hch = new HttpClientHandler();
hch.Credentials = new NetworkCredential ("Peter", "pw");
HttpClient hc = new HttpClient(hch);
With that configuration done, you can issue your request to the service. The HttpClient won’t present the credentials until it’s denied access to the service and has received the WWW-Authenticate message. If the credentials provided by the HttpClient aren’t acceptable, the service returns an HttpResponseMessage with the StatusCode of its Result set to “unauthenticated.”
The following code calls a service using the GetAsync method, checks for a successful result and (if it doesn’t get one) displays the status code returned from the service:
hc.GetAsync("").ContinueWith(r =>
{
HttpResponseMessage hrm = r.Result;
if (hrm.IsSuccessStatusCode)
{
// ... Process response ...
}
else
{
MessageBox.Show(hrm.StatusCode.ToString());
}
});
Assuming that you bypass the ASP.NET login process for non-JavaScript clients, as I did here, no authentication cookies will be created and each request from the client will be validated individually. To reduce the overhead on repeatedly validating the credentials provided by the client, you should consider caching credentials you retrieve at the service (and using your Dispose method to discard those cached credentials).
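One possible sketch of that caching (an assumption on my part, not something the module above requires) is a static dictionary keyed by the Authorization header, checked before doing any parsing and cleared in the Dispose method; it needs a using statement for System.Collections.Concurrent:
private static readonly ConcurrentDictionary<string, GenericPrincipal>
  validatedCredentials =
    new ConcurrentDictionary<string, GenericPrincipal>();

private static bool TryUseCachedCredentials(string authHeader)
{
  GenericPrincipal gp;
  if (authHeader == null ||
    !validatedCredentials.TryGetValue(authHeader, out gp))
  {
    return false;
  }
  // Reuse the principal built when these credentials were first validated
  Thread.CurrentPrincipal = gp;
  HttpContext.Current.User = gp;
  return true;
}

// After a successful validation in AuthenticateRequests:
// validatedCredentials[authHeader] =
//   (GenericPrincipal)Thread.CurrentPrincipal;

public void Dispose()
{
  validatedCredentials.Clear();
}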
In an HTTP module, you retrieve a client certificate object (and ensure that it’s present and valid) with code such as this:
System.Web.HttpClientCertificate cert =
HttpContext.Current.Request.ClientCertificate;
if (cert!= null && cert.IsPresent && cert.IsValid)
{
Further along in the processing pipeline—in a service method, for example—you retrieve the certificate object (and check that one exists) with this code:
X509Certificate2 cert = Request.GetClientCertificate();
if (cert!= null)
{
If a certificate is valid and present, you can additionally check for specific values in the certificate’s properties (for example, subject or issuer).
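For example, still working with the HttpClientCertificate object inside the HTTP module, you might only accept certificates issued by your own certificate authority (the issuer and subject strings below are placeholders for whatever your certificates actually contain):
System.Web.HttpClientCertificate cert =
  HttpContext.Current.Request.ClientCertificate;
if (cert != null && cert.IsPresent && cert.IsValid &&
  cert.Issuer.Contains("CN=PHVIS") &&
  cert.Subject.Contains("CN=Peter"))
{
  // ... Treat the request as authenticated ...
}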
To send certificates with an HttpClient, your first step is to create a WebRequestHandler object instead of an HttpClientHandler (the WebRequestHandler offers more configuration options than the HttpClientHandler):
WebRequestHandler wrh = new WebRequestHandler();
You can have the HttpClient automatically search the client’s certificate stores by setting the WebRequestHandler object’s ClientCertificateOptions to the Automatic value from the ClientCertificateOption enum:
wrh.ClientCertificateOptions = ClientCertificateOption.Automatic;
By default, however, the client must explicitly attach certificates to the WebRequestHandler from code. You can retrieve the certificate from one of the client’s certificate stores as this example does, which retrieves a certificate from the CurrentUser’s store using the issuer’s name:
X509Store certStore;
X509Certificate x509cert;
certStore = new X509Store(StoreName.My,
StoreLocation.CurrentUser);
certStore.Open(OpenFlags.OpenExistingOnly | OpenFlags.ReadOnly);
x509cert = certStore.Certificates.Find(
X509FindType.FindByIssuerName, "PHVIS", true)[0];
certStore.Close();
If the user has been sent a client certificate that, for some reason, isn’t going to be added to the user’s certificate store, then you can create an X509Certificate object from the certificate’s file with code like this:
x509cert = new X509Certificate2(@"C:\PHVIS.pfx");
Regardless of how the X509Certificate is created, the final steps at the client are to add the certificate to the WebRequestHandler ClientCertificates collection and then use the configured WebRequestHandler to create the HttpClient:
wrh.ClientCertificates.Add(x509cert);
hc = new HttpClient(wrh);
While you can’t use an HttpModule in a self-hosted environment, the process for securing requests early in the processing pipeline of a self-hosted service is the same: Get the credentials from the request, use that information to authenticate the request and create an identity to pass to the current thread’s CurrentPrincipal property. The simplest mechanism is to create a username and password validator. To do more than just validate a username and password combination, you can create a delegating handler. I’ll first look at integrating a username and password validator.
To create a validator (still assuming that you’re using Basic Authentication), you must create a class that inherits from UserNamePasswordValidator (you’ll need to add a reference to the System.IdentityModel library to your project). The only method from the base class that you need to override is the Validate method, which will be passed the username and password sent to the service by the client. As before, once you’ve validated the username and password, you must create a GenericPrincipal object and use it to set the CurrentPrincipal property on the Thread class (because you’re not using IIS as your host, you don’t set the HttpContext User property):
public class PHVValidator :
System.IdentityModel.Selectors.UserNamePasswordValidator
{
public override void Validate(string userName, string password)
{
if (userName == "Peter" && password == "pw")
{
GenericIdentity gi = new GenericIdentity(userName);
Thread.CurrentPrincipal = new GenericPrincipal(gi, null);
}
The following code creates a host for a controller called Customers with an endpoint of, and specifies a new validator:
partial class PHVService : ServiceBase
{
private HttpSelfHostServer shs;
protected override void OnStart(string[] args)
{
HttpSelfHostConfiguration hcfg =
new HttpSelfHostConfiguration("");
hcfg.Routes.MapHttpRoute("CustomerServiceRoute",
"Customers", new { controller = "Customers" });
hcfg.UserNamePasswordValidator = new PHVValidator();
shs = new HttpSelfHostServer(hcfg);
shs.OpenAsync();
To do more than validate the username and password, you can create a custom Web API message handler. Message handlers have several benefits compared to an HTTP module: message handlers aren’t tied to IIS, so security applied in a message handler will work with any host; message handlers are only used by the Web API, so they provide a simple way to perform authorization (and assign identities) for your services using a process different from that used with your Web site pages; and you can also assign message handlers to specific routes so that your security code is only invoked where it’s needed.
The first step in creating a message handler is to write a class that inherits from DelegatingHandler and override its SendAysnc method:
public class PHVAuthorizingMessageHandler: DelegatingHandler
{
protected override System.Threading.Tasks.Task<HttpResponseMessage>
SendAsync(HttpRequestMessage request,
System.Threading.CancellationToken cancellationToken)
{
Within that method (and assuming that you’re creating a per-route handler) you can set the DelegatingHandler’s InnerHandler property so that this handler can be linked into the pipeline with other handlers:
HttpConfiguration hcon = request.GetConfiguration();
InnerHandler = new HttpControllerDispatcher(hcon);
For this example, I’m going to assume that a valid request must have a simple token in its querystring (quite simple: a name/value pair of “authToken=xyx”). If the token is missing or not set to xyx, the code returns a 403 (Forbidden) status code.
I first turn the querystring into a set of name/value pairs by calling the GetQueryNameValuePairs method on the HttpRequestMessage object passed to the method. I then use LINQ to retrieve the token (or null if the token is missing). If the token is missing or invalid, I create an HttpResponseMessage with the appropriate HTTP status code, wrap it in a TaskCompletionSource object and return it:
string usingRegion = (from kvp in request.GetQueryNameValuePairs()
where kvp.Key == "authToken"
select kvp.Value).FirstOrDefault();
if (usingRegion == null || usingRegion != "xyx")
{
HttpResponseMessage resp =
new HttpResponseMessage(HttpStatusCode.Forbidden);
TaskCompletionSource<HttpResponseMessage> tsc =
new TaskCompletionSource<HttpResponseMessage>();
tsc.SetResult(resp);
return tsc.Task;
}
If the token is present and set to the right value, I create a GenericPrincipal object and use it to set the Thread’s CurrentPrincipal property (to support using this message handler under IIS, I also set the HttpContext User property if the HttpContext object isn’t null):
Thread.CurrentPrincipal = new GenericPrincipal(
  Thread.CurrentPrincipal.Identity, null);
if (HttpContext.Current != null)
{
HttpContext.Current.User = Thread.CurrentPrincipal;
}
With the request authenticated through the token and the identity set, the message handler calls the base method to continue processing:
return base.SendAsync(request, cancellationToken);
If your message handler is to be used on every controller, you can add it to the Web API processing pipeline like any other message handler. However, to limit your handler to being used only on specific routes, you must add it through the MapHttpRoute method. First, instantiate your class and then pass it as the fifth parameter to MapHttpRoute (this code requires an Imports/using statement for System.Web.Http):
routes.MapHttpRoute(
"ServiceDefault",
"api/Customers/{id}",
new { id = RouteParameter.Optional },
null,
new PHVAuthorizingMessageHandler());
Rather than set the InnerHandler within the DelegatingHandler, you can set the InnerHandler property to the default dispatcher as part of defining your route:
routes.MapHttpRoute(
"ServiceDefault",
"api/{controller}/{id}",
new { id = RouteParameter.Optional },
null,
new PHVAuthorizingMessageHandler
{InnerHandler = new HttpControllerDispatcher(
GlobalConfiguration.Configuration)});
Now, instead of your InnerHandler setting being spread among multiple DelegatingHandlers, you’re managing it from the single location where you define your routes.
If authorizing requests by name and role isn’t sufficient, you can extend the authorization process by creating your own principal class by implementing the IPrincipal interface. However, to take advantage of a custom principal class, you’ll need to create your own custom authorization attribute or add custom code to your service methods.
For example, if you have a set of services that can only be accessed by users from a specific region, you could create a simple principal class that implements the IPrincipal interface and adds a Region property, as shown in Figure 2.
Figure 2 Creating a Custom Principal with Additional Properties
public class PHVPrincipal: IPrincipal
{
public PHVPrincipal(string Name, string Region)
{
this.Name = Name;
this.Region = Region;
}
public string Name { get; set; }
public string Region { get; set; }
public IIdentity Identity
{
get
{
return new GenericIdentity(this.Name);
}
set
{
this.Name = value.Name;
}
}
public bool IsInRole(string role)
{
return true;
}
}
To take advantage of this new principal class (which will work with any host), you just need to instantiate it and then use it to set the CurrentPrincipal and User properties. The following code looks for a value in the request’s query string associated with the name “region.” After retrieving that value, the code uses it to set the principal’s Region property by passing the value to the class’s constructor:
string region = (from kvp in request.GetQueryNameValuePairs()
where kvp.Key == "region"
select kvp.Value).FirstOrDefault();
Thread.CurrentPrincipal = new PHVPrincipal(userName, region);
If you’re working in the Microsoft .NET Framework 4.5, rather than implementing the IPrincipal interface, you should inherit from the new ClaimsPrincipal class. ClaimsPrincipal supports both claims-based processing and integration with Windows Identity Foundation (WIF). That, however, is out of scope for this article (I’ll address that topic in an upcoming article on claims-based security).
With a new principal object in place you can create an authorization attribute that takes advantage of the new data carried by the principal. First, create a class that inherits from System.Web.Http.AuthorizeAttribute and overrides its IsAuthorized method (this is a different process from the ASP.NET MVC practice where you create new Authorization attributes by extending System.Web.Http.Filters.AuthorizationFilterAttribute). The IsAuthorized method is passed an HttpActionContext, whose properties can be used as part of your authorization process. However, this example just needs to extract the principal object from the Thread’s CurrentPrincipal property, cast it to the custom principal type and check the Region property. If authorization succeeds, the code returns true. If authorization fails, you need the ActionContext Response property to create a custom response before returning false, as shown in Figure 3.
Figure 3 Filtering a Custom Principal Object
public class RegionAuthorizeAttribute : System.Web.Http.AuthorizeAttribute
{
public string Region { get; set; }
protected override bool IsAuthorized(HttpActionContext actionContext)
{
PHVPrincipal phvPcp = Thread.CurrentPrincipal as PHVPrincipal;
if (phvPcp != null && phvPcp.Region == this.Region)
{
return true;
}
else
{
actionContext.Response =
new HttpResponseMessage(
System.Net.HttpStatusCode.Unauthorized)
{
ReasonPhrase = "Invalid region"
};
return false;
}
}
}
Your custom authorization filter can be used just like the default ASP.NET Authorize filter. Because this filter has a Region property, that property must be set to the acceptable region for this method as part of decorating a service method with it:
[RegionAuthorize(Region = "East")]
public HttpResponseMessage Get()
{
For this example, I’ve chosen to inherit from the AuthorizeAttribute because my authorization code is purely CPU bound. If my code needed to access some network resource (or do any I/O at all), a better choice would’ve been to implement the IAuthorizationFilter interface because it supports making asynchronous calls.
As I said at the start of this article: The typical Web API scenario doesn’t require additional authorization, except to protect against CSFR exploits. But when you do need to extend the default security system, the Web API provides numerous choices throughout the processing pipeline where you can integrate whatever protection you need. And it’s always better to have choices.
Peter Vogel is a principal at PH&V Information Services, specializing in ASP.NET development with expertise in service-oriented architecture, XML, database and UI design.
Thanks to the following technical experts for reviewing this article: Dominick Baier (thinktecture GmbH & Co KG), Barry Dorrans (Microsoft) and Mike Wasson (Microsoft).
Mike Wasson ([email protected]) is a programmer-writer at Microsoft. He currently writes about ASP.NET, focused on Web API.
Barry Dorrans ([email protected]) is a security developer at Microsoft, working with the Azure Platform team. He wrote "Beginning ASP.NET Security" and was a Developer Security MVP before joining Microsoft. Despite this he still misspells encryption on a regular basis.
Dominick Baier ([email protected]) is a security consultant at thinktecture (thinktecture.com). His main focus is identity and access control in distributed applications and he is the creator of the popular open source projects IdentityModel and IdentityServer. You can find his blog at leastprivilege.com.
Was there no code download for this article? Could someone point me to the code?
re CSRF again, here's my understanding: The important part is to have a user-specific token that is embedded in the HTML (as form data), and then verify that the client sent the token in the form submission or AJAX request. One way to make the token user-specific is to generate a random number and put it into a cookie. Then the browser automatically sends the cookie, and the server can check the cookie against the token that the client returned in the form data / AJAX request. But if the user is authenticated, you can use the credentials to create the token (as a cryptographic nonce), and that ensures the token is user-specific. The cookie itself doesn't protect against CSRF attacks. As this post points out, cookies make CSRF attacks possible in the first place:...
re CSRF: The way it works is the token is generated from the current principal. If the user is logged in, the malicious site can piggyback on the user's credentials, but it can't read the token value. See-... The anti-forgery filter in the MVC 4 SPA template works this way -- see...
Peter: Thanks for the article! Then, about the CSRF. If the client is a browser (html page calling webapi via js) then the "cookie token" should return in the cookie and the server should generate it (instead of embedding it in the javascript) and the browser will take care of sending it back. (I think that is what sdrapkin means).Some months ago I created a POC around this idea that is probably very similar to what you say is included now in the SPA template (haven't check it):. Best regards.
It seemed unlikely to me that I was the first person to discuss this approach to CSRF exploits so I hunted around to see if I could find a better discussion of the issue than I've apparently provided here. What I found was this article which discusses preventing CSRF in reference to Web API Ajax requests (though the article is from the home site for ASP.NET):... This is a longer discussion than I had room for and might address the issues you're concerned about (though, in the end, it seems to be implementing the same solution as in this article).
I think that you've missed the first sentence of the article: this is not an attempt to replace the security that is provided (and that you should leverage) in protecting the Web pages that you are making your Web API calls from. And throughout the article, HTTPs is recommended as the protocol to be used for making your Web API calls. Perhaps if you outlined the CSRF exploit that would expose the service, I'd have a better grasp of your concern.
With all due respect, everything CSRF-related in this article is wrong. Any same-origin XHR requests ($.ajax) - whether GET or POST - can read HttpOnly cookies and set headers, which makes 2-token CSRF defence work. The CSRF code samples shown are at best misleading (because they do not have "DO NOT DO THIS!" warning) and at worst are completely wrong (because the author mistakenly believes that embedding both tokens into html/js is just fine). The rest of the article deals with authentication and authorization approaches which do not address CSRF - ie. they are still vulnerable to CSRF. Dear readers, please do not rely on any CSRF-related advice/code in this article, because it is completely wrong. It's very sad to see this in MSDN... Is anyone else brave enough to voice his/her opinion on the CSRF coverage of this article?
Certainly, putting the two cookies into two different channels is the way that the AntiForgeryToken helper and the ValidateAntiForgeryToken attribute CSRF tokens work when protecting Web pages Posting back to the site in ASP.NET MVC applications. But for Web API requests making Get requests to the service, the process is going to be different--which is why this article is about what you can do in the Web API and not in ASP.NET MVC applications.
The 1st part of this article that covers anti-forgery tokens is completely wrong. The author does not understand why the 2nd token is called "cookieToken" and not "oneMoreFormToken". The 2-token CSRF defense works by having a 2nd side-channel token that the attacker can neither read nor write. That channel is HttpOnly cookies, inaccessible in JavaScript. Storing *both* tokens in the same channel completely negates CSRF defense. This is MSDN magazine - not Twitter or personal blog post.
|
http://msdn.microsoft.com/en-us/magazine/dn201748.aspx
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
04 April 2012 09:41 [Source: ICIS news]
SINGAPORE (ICIS)--India's Mangalore Refinery & Petrochemicals Limited (MRPL) has sold, via tender, a 10,000-tonne first-half May-loading isomer-grade xylene (IX) cargo to trading firm Glencore on 3 April, sources said on Wednesday.
The cargo with a 6-15 May laycan was sold on an ex-New Mangalore basis at a discount of $74/tonne (€56/tonne) to FOB (free on board)
Officials from MRPL could not be immediately reached for comments.
MRPL last sold a second-half April 10,000-tonne parcel to a Japanese trading house at a discount of $70-75/tonne to
|
http://www.icis.com/Articles/2012/04/04/9547580/indias-mrpl-sells-h1-may-loading-10000-tonnes-ix-to-glencore.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
This implementation is not unlike the one in Java, as C#'s design was heavily based on that language. A class is created to accomodate the definitions of the needed functions. Only the
Main function (method) is visible outside the class. It reads lines from the standard input until there are no more (
null received). Each line is tokenized by calling
Split and the resulting list is stored in a
Stack<string> container,
tks. Unless
tks is empty, it is passed to
evalrpn for evaluation of the RPN expression. Note that, as
tks is a stack, the sequence of tokens is being consumed by
evalrpn in reverse order. Upon returning from
evalrpn
tks must be empty – no token must have remained unused.
evalrpn is recursive on the sequence of tokens. It removes an item
tk from the stack and tries to parse it as a number. If that succeeds, the number is returned. Otherwise
tk is probably an operator character, in which case
evalrpn tries to obtain the arguments for the operation by calling itself twice. If
tk is neither a number nor an operator, an exception is raised (‘
thrown’). An exception also occurs implicitly within
evalrpn when it calls itself with an empty stack
tks, as
Popping a current token off
tks then fails.
All exceptions take place in the dynamic context of the
Main's
try block where they get caught and an error message is printed. Thus all RPN syntax errors are properly handled.
using System;
using System.Collections.Generic;

class Rpn
{
    public static void Main()
    {
        char[] sp = new char[] {' ', '\t'};
        for (;;)
        {
            string s = Console.ReadLine();
            if (s == null) break;
            Stack<string> tks =
                new Stack<string>(s.Split(sp, StringSplitOptions.RemoveEmptyEntries));
            if (tks.Count == 0) continue;
            try
            {
                double r = evalrpn(tks);
                if (tks.Count != 0) throw new Exception();
                Console.WriteLine(r);
            }
            catch (Exception e) { Console.WriteLine("error"); }
        }
    }

    private static double evalrpn(Stack<string> tks)
    {
        string tk = tks.Pop();
        double x, y;
        if (!Double.TryParse(tk, out x))
        {
            y = evalrpn(tks);
            x = evalrpn(tks);
            if (tk == "+") x += y;
            else if (tk == "-") x -= y;
            else if (tk == "*") x *= y;
            else if (tk == "/") x /= y;
            else throw new Exception();
        }
        return x;
    }
}
boykobbatgmaildotcom
|
http://www.math.bas.bg/bantchev/place/rpn/rpn.c%23.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
On 1/19/06, Robert Kern <robert.kern at gmail.com> wrote:
> Charlie Moad wrote:
> > Well here are the cvs links to them.
> >
> >
> > The matplotlib.toolkits module does not exist in the mpl cvs and it is
> > not in the setup.py file. I have tried adding both. In basemap
> > however, the matplotlib.toolkits module does exist and is listed in
> > the setup.py file.
>
> Both of these setup.py files are buggy. The packages list needs to have entries
> in dotted form ("matplotlib.toolkits"), not filesystem form
> ("matplotlib/toolkits"). Correcting them, and adding an empty
> lib/matplotlib/toolkits/__init__.py to the matplotlib checkout allows me to
> build eggs with namespace_packages set appropriately.

Thanks for taking a whack at this. I will test all your comments and
commit if I get it working.
|
https://mail.python.org/pipermail/distutils-sig/2006-January/005876.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Android List View
Android ListView is a view which groups several items and displays them in a vertically scrollable list. The list items are automatically inserted into the list using an Adapter that pulls content from a source such as an array or database and fills the adapter view (i.e. the ListView or a GridView). The two most common adapters are ArrayAdapter and SimpleCursorAdapter. We will see separate examples for both the adapters.
ListView Attributes
Following are the important attributes specific to ListView:

- android:divider: drawable or color to draw between list items.
- android:dividerHeight: height of the divider.
- android:entries: reference to an array resource that will populate the ListView.
- android:footerDividersEnabled: when set to false, the ListView will not draw the divider before each footer view.
- android:headerDividersEnabled: when set to false, the ListView will not draw the divider after each header view.
ArrayAdapter adapter = new ArrayAdapter<String>(this, R.layout.ListView, StringArray);
Here are the arguments for this constructor: the current context (this), the layout resource that defines each list item, and the array of strings to display. Once the adapter is created, simply call setAdapter() on your ListView object; the activity code below shows this call.
Following is the content of the modified main activity file src/com.example.helloworld/MainActivity.java. This file can include each of the fundamental lifecycle methods.
package com.example.helloworld;

import android.os.Bundle;
import android.app.Activity;
import android.view.Menu;
import android.widget.ArrayAdapter;
import android.widget.ListView;

public class MainActivity extends Activity {
   // Array of strings...
   String[] countryArray = {"India", "Pakistan", "USA", "UK"};

   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);
      ArrayAdapter adapter = new ArrayAdapter<String>(this,
         R.layout.activity_listview, countryArray);
      ListView listView = (ListView) findViewById(R.id.country_list);
      listView.setAdapter(adapter);
   }
}
Following will be the content of res/layout/activity_main.xml file:
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >

    <ListView
        android:id="@+id/country_list"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" >
    </ListView>

</LinearLayout>
Following will be the content of res/values/strings.xml:
<?xml version="1.0" encoding="utf-8"?> <resources> <string name="app_name">HelloWorld</string> <string name="action_settings">Settings</string> </resources>
Following will be the content of res/layout/activity_listview.xml file:
<?xml version="1.0" encoding="utf-8"?>
<TextView xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:padding="10dp"
    android:textSize="16sp" >
</TextView>

Let's try to run the application. I assume you had created your AVD while doing environment setup. To run the app from Eclipse, open one of your project's activity files and click Run
icon from the toolbar. Eclipse installs the app on your AVD and starts it, and if everything is fine with your setup and application, the Emulator window will display the populated country list.
A SimpleCursorAdapter can be used in the same way when the data comes from a database: you pass it a Cursor along with the fromColumns and toViews arrays, and it binds each of the fromColumns items to the corresponding toViews view.
|
http://www.tutorialspoint.com/android/android_list_view.htm
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
This Proposal aims to integrate Zend Framework and Doctrine 1 via Zend Tool.
Zend Framework: Zend Tool and Doctrine 1 Integration Component Proposal
Table of Contents
1. Overview
All Doctrine 1 CLI commands will be integrated into the Zend_Tool project context
and additional context resources are provided to support Doctrine metadata,
migrations, fixtures etc. Both non-modular and modular applications should
be supported for the task of generating models and tables. This proposal
sits on top of the Doctrine 1 resource proposal by Matthew Lurz.
2. References
3. Component Requirements, Constraints, and Acceptance Criteria
This Proposal aims to integrate Zend Framework and Doctrine 1 via Zend Tool.
- This component MUST use Doctrine 1 Resource for Zend_Application
- This resource SHOULD be usable with both modular and non-modular MVC applications.
- This component MUST integrate all Doctrine 1 CLI Tasks
- This component SHOULD offer additional Zend Project Context Resources that are required for Doctrine.
- This component SHOULD offer further providers that help the integration of ZF and Doctrine
- This component SHOULD make use of other Zend components for this goal.
- This component SHOULD support prototyping of forms
- This component MAY support Zend_Test and Doctrine integration.
4. Dependencies on Other Framework Components
- Zend_Application
- Zend_Tool_Framework
- Zend_Tool_Project
- Zend_Form
- Zend_CodeGenerator
- Doctrine 1.2
5. Theory of Operation
The Zend Tool integration will use the configured Zend Application resource
to configure Doctrine 1 for use with the Schema Tool or other CLI tasks.
With a configured Doctrine resource you will be able to issue the following commands:
Commands will all make heavy use of --verbose and --pretend flags.
You should be able to configure, via the Doctrine resource, whether you want to use code generation
to generate abstract classes for forms or tables of Doctrine_Record classes.
Usage would look like:
Zend Tool would keep track of the Doctrine context directories and will make sure that the generation
is taking place as configured. If you want to generate different records/tables into different modules
you have to configure the modules accordingly, this will take place in a new config file being called
"doctrine-modules.ini" which has the Doctrine Model <-> ZF Module relationsships and can either be
edited by hand or be configured with the help of "zf configure doctrine.module" and "zf assign-model doctrine.module".
It will now generate the classes in ZF-Style (Supported by Resource Loader):
Or in Pear Style:
With the help of Doctrine Model and Table context files it is possible to find orphaned files at this
point and delete them.
All this is up for discussion and changes; it's just an outline of what I think is possible and could be done to integrate Zend Tool and Doctrine 1. What do you think?
6. Milestones / Tasks
- Milestone 1: Community Review
- Milestone 2: Prototype
- Milestone 3: Zend Acceptance
- Milestone 4: Completion & Documentation
7. Class Index
- Zend_Doctrine_Tool_Provider_Doctrine
- Zend_Doctrine_Tool_DoctrineTasks
- Zend_Doctrine_Tool_Context_DataFixturesDirectory
- Zend_Doctrine_Tool_Context_MigrationsDirectory
- Zend_Doctrine_Tool_Context_SqlDirectory
- Zend_Doctrine_Tool_Context_ModelsDirectory
- Zend_Doctrine_Tool_Context_YamlSchemaDirectory
- Zend_Doctrine_Tool_Context_ModelFile
- Zend_Doctrine_Tool_Context_TableFile
- Zend_Doctrine_Tool_Context_ModelsAbstractDirectory
- Zend_Doctrine_Tool_Context_ModelAbstractFile
- Zend_Doctrine_Tool_Context_TableAbstractFile
- Zend_Doctrine_Tool_Context_DoctrineModuleConfigFile
- Zend_Doctrine_Tool_Context_FormDirectory
- Zend_Doctrine_Tool_Context_FormFile
- Zend_Doctrine_Tool_Context_FormAbstractFile
- Zend_Doctrine_Form
- Zend_Doctrine_CodeGenerator_Form
- Zend_Doctrine_Import
- Zend_Doctrine_Import_Builder
12 Comments
Nov 11, 2009
Bas K
Hi. I welcome this initiative!
Why would one need an ini to map modules to models? Matthew's resource plugin defines paths in its .ini for each model you wish to create. One is free to point them at a module... I feel I am missing something...
One of the problems that is consistently emerging is modular support. Could someone be a little verbose on expressing the exact problem of modular support? Also (how) does this tie in with autoloading and/or PEAR style model organization (Doctrine 1.2+)? Should the model class names reflect the module name? (Model_Base_Entity vs CMS_Model_Base_Entity) Ah, questions, tastes and best practices...
And some concrete feedback: the proposal is to put doctrine models in the directory 'module/models'. Personally I have other, non-doctrine, model classes at that location (so these can be used from both controllers and amf-services) and therefore place the doctrine models within a directory called doctrine.
module/models/... (custom model classes)
module/models/doctrine/...
Nov 11, 2009
Benjamin Eberlei
The problem is not ZF not being able to find the models; the problem is that Doctrine schema files don't support the notion of a module. That means you would want to have a means to generate the models into different modules, say you have three modules: Default, Cms and Guestbook.
Then you have different models: User, Article, Category, Comment, (Guestbook)-Entry.
You would want to have the following generation of the models:
User => Model_User
Article => Cms_Model_Article
Category => Cms_Model_Category
Comment => Cms_Model_Comment
GuestbookEntry => Guestbook_Model_Entry
This has to be configured in an additional way for the Doctrine Import scripts (CLI ./doctrine generate-models) to build the files correctly.
For your example you could define the doctrine paths to be:
Default Module => Model_Doctrine_
Cms => Cms_Model_Doctrine_
Guestbook => Guestbook_Model_Doctrine_
with the respective:
User => Model_Doctrine_User
Article => Cms_Model_Doctrine_Article
Category => Cms_Model_Doctrine_Category
Comment => Cms_Model_Doctrine_Comment
GuestbookEntry => Guestbook_Model_Doctrine_Entry
This way the files for these classes won't be mixed with your other model files.
Nov 11, 2009
Bas K
Wouldn't one want to be able to run commands on only a single module? I know I would.
For example:
zf create doctrine <moduleName> # Generate all Database, Models, Forms for module moduleName
zf rebuild doctrine.db <moduleName> # Drop and Create database for module moduleName
zf load doctrine.data <moduleName> # Loads data from fixtures directory for module moduleName
... etc ...
Nov 11, 2009
Benjamin Eberlei
Great idea, I buy it :)
Nov 11, 2009
Matt L
Good catch Bas. I just wanted to point out that care would need to be taken in the case of interdependent modules due to connection binding. In these cases, models for 2 or more modules could be bound to the same connection.
Dec 12, 2009
Tomáš Fejfar
I'd suggest grouping all doctrine tasks under one namespace if possible. It's good practise all along ZF.
zf doctrine build-all
zf doctrine build-all-reload
etc. This is (at least for me) expected behaviour.
Dec 12, 2009
Benjamin Eberlei
Zend Tool has namespaces at third position, for semantic reasons.
zf <action> <provider>.
The Provider will be Doctrine, so it's all grouped in one namespace.
Dec 14, 2009
Jurian Sluiman
Is it possible to access the code of this proposal? It's all included in the Zym git repository, but it's not accessible at all. Could you point out where we can download the complete Doctrine namespace for Zend Tool? Copying and pasting every file from the web interface is too much work :)
Dec 15, 2009
Jurian Sluiman
I have another comment, a bit related to the one Bas K posted at first. I think it's a good idea to have the ability to configure paths relative to a module's root. In combination with the module specific commands (zf create doctrine <moduleName>) this component will be very flexible in respect of modular loading and generation of models.
I'd like to have all module specific information inside my module directory. That said, for now I have this set of configuration:
- scheme file: configs/scheme.yml
- fixtures: configs/data/fixtures/
- sql: configs/data/sql/
- migration: configs/migrations/
- models: models/
This means the scheme file for e.g. a guestbook module is located at application/modules/guestbook/configs/scheme.yml
I took care of the autoloading with the modules application resource. I think the module-based configuration (for both config files and the models/forms) is a more flexible approach to keep the loved ZF modular structure including the Doctrine advantages.
My comment has several implications:
- Mapping (configure doctrine.module & assign-model doctrine.module) isn't required anymore, the module controls itself
- Actions with a module argument are easy and convenient to implement
- Actions without a module argument loop through all modules to perform the selected action (this might be the negative side of my comment)
I think there are several options to configure Doctrine such that it's intelligent about loading the models in Zend's modular structure. A lot of information has been spread lately about the integration of Doctrine with ZF, but 99% forgets the modular structure because Doctrine doesn't provide such an organized structure. I hope with the integration of Doctrine 1 and Zend_Tool (this proposal) we look more at this problem and provide a better method to keep things inside the module directory (imho the preferred method).
I know my comment doesn't only apply to this proposal, but is also meant for the Doctrine resource proposal. I thought this was the best place to put my ideas down :)
Mar 08, 2010
Benjamin Eberlei
I committed the first working version of my tooling support for ZF + Doctrine prototype to my user branch:
Using it boils down to the following commands:
First you need to modify your .zf.ini to look for the DoctrineProvider in "library/Zend/Doctrine/Tool/DoctrineProvider.php"
Then:
zf create project foo
cd foo
zf create-project doctrine --dsn="dsnhere" --with-resource-directories
zf create module addressbook
Now you can go to your application/config/schema directory and create a yaml schema, for example:
Model_User:
tableName: users
columns:
username:
type: string(255)
type: string(255)
contact_id:
type: integer
relations:
class: Addressbook_Model_Contact
local: contact_id
foreign: id
foreignAlias: User
foreignType: one
type: one
Addressbook_Model_Contact:
tableName: contacts
columns:
type: string(255)
type: string(255)
phone:
type: string(255)
type: string(255)
address:
type: string(255)
relations:
User:
class: Model_User
local: id
foreign: contact_id
foreignAlias: Contact
foreignType: one
type: one
Plus add the module resource loaders to your Bootstrap:
public function _initResourceLoaders()
Here you can see the requirements on the model. ZF and Doctrine integration only works if you follow it completely:
Your models have to be called either Model_Foo or Module_Model_Foo.
Now you can generate the models from the YAML file:
zf generate-models-yaml doctrine
zf build-all-reload doctrine --force
The first command generates the Model_User and Addressbook_Model_Contact classes as well as their respective base classes Model_Base_User and Addressbook_Model_Base_Contact. The second command creates the database from the schema and loads a fixture from "application/configs/fixtures".
What is missing:
- Documentation
- Tool support to create multiple places where a YAML schema can be generated.
Be aware:
This proposal tightly integrates ZF and Doctrine in the default project schema. Any attempt to use it in another way will fail, and you should use the components' strengths separately to get where you want to.
I would appreciate feedback from anyone who wants to give it a try.
Mar 11, 2010
Julien SIMON
Hello everyone,
I can't check out your repository:
C:\Users\MightyDucks\Desktop\zend\library\Zend\Doctrine2\Application\Resource\Entitymanager.php
In directory 'C:\Users\MightyDucks\Desktop\zend\library\Zend\Doctrine2\Application\Resource'
Can't open file
'C:\Users\MightyDucks\Desktop\zend\library\Zend\Doctrine2\Application\Resource\.svn\tmp\text-base\Entitymanager.php.svn-base':
The specified file is not found.
Can you check your repos?
Thank you
Apr 26, 2010
Marc Hodgins
Hi Julien - I had the same problem today. It is because there are two files in that dir (Entitymanager.php and EntityManager.php) which obviously resolve to the same file in Windows due to its lack of case sensitivity. There is no way around this at present other than doing the check-out on a non-Windows machine. I've sent Benjamin (beberlei) an email asking him to remove the extra file so check-outs work :)
|
http://framework.zend.com/wiki/display/ZFPROP/Doctrine+1+and+Zend_Tool+Integration+-+Benjamin+Eberlei?focusedCommentId=21266586
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
List<T> Class
Represents a strongly typed list of objects that can be accessed by index. Provides methods to search, sort, and manipulate lists.
System.Collections.Generic.List<T>
Namespace: System.Collections.Generic
Assembly: mscorlib (in mscorlib.dll)
The List<T> type exposes the following members.
This class.
Using LINQ
To conserve resources for Silverlight-based applications, the List<T> class in the .NET Framework for Silverlight class library does not include all the methods in the full .NET Framework, because the functionality of these methods is available with LINQ. For more information about LINQ, see LINQ to Objects.
The following examples show how to use LINQ to obtain the functionality of the methods that are not provided in the List<T> class.
These examples operate on a collection of integers.
ConvertAll Functionality
This LINQ code creates a collection of strings that are converted from the numbers collection.
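A minimal sketch of that query (assuming a List<int> named numbers, using directives for System.Collections.Generic and System.Linq, and placeholder sample values):

List<int> numbers = new List<int> { 2, 5, 8, 11 };
// Project each number to its string representation and materialize a List<string>.
List<string> strings = numbers.Select(n => n.ToString()).ToList();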
Exists Functionality
This LINQ code returns true if any value in the numbers collection is greater than or equal to 10.
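A sketch of the equivalent query, assuming the same numbers list as above:

// True if at least one value is greater than or equal to 10.
bool exists = numbers.Any(n => n >= 10);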
Find Functionality
This LINQ code returns the first even number in the numbers collection.
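A sketch of the equivalent query; FirstOrDefault mirrors Find by returning the default value when nothing matches:

// First even number in the collection, or 0 if there is none.
int firstEven = numbers.FirstOrDefault(n => n % 2 == 0);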
FindAll Functionality
This LINQ code creates a collection of even numbers from the numbers collection.
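A sketch of the equivalent query, again assuming the numbers list from above:

// All even numbers, materialized as a new list.
List<int> evenNumbers = numbers.Where(n => n % 2 == 0).ToList();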
FindIndex Functionality
This code finds the index of the first even number in the numbers collection. It is more efficient to iterate through the collection to find the index value than to use LINQ.
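A sketch of such an iteration, assuming the same numbers list:

// Index of the first even number, or -1 if none is found.
int firstIndex = -1;
for (int i = 0; i < numbers.Count; i++)
{
    if (numbers[i] % 2 == 0) { firstIndex = i; break; }
}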
FindLastIndex Functionality
This code finds the index of the last even number in the numbers collection. It is more efficient to iterate through the collection to find the index value than to use LINQ. In this example, the iteration is from the end of the collection to the beginning.
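A sketch of the reverse iteration, assuming the same numbers list:

// Index of the last even number, or -1 if none is found.
int lastIndex = -1;
for (int i = numbers.Count - 1; i >= 0; i--)
{
    if (numbers[i] % 2 == 0) { lastIndex = i; break; }
}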
RemoveAll Functionality
This LINQ code removes odd numbers from the numbers collection. The result is another collection that is also named numbers.
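A sketch of the equivalent query; filtering keeps the even values and reassigns the result to numbers:

// Keep only the even values; the reassigned result is also named numbers.
numbers = numbers.Where(n => n % 2 == 0).ToList();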
There are two examples for this class. This first example shows how to create an object derived from the List<T> class for binding data to a ListBox control. The binding in this example is one-way; see Data Binding for a discussion of other binding options and scenarios.
The Canada class derives from the List<T> class and its constructor populates the list with data. Add this class to the partial MainPage class of your project.
// Be sure this class is within the
// C# namespace of Page.xaml.cs.
public class Canada : List<string>
{
    public Canada()
    {
        Add("Alberta");
        Add("British Columbia");
        Add("Manitoba");
        Add("New Brunswick");
        Add("Newfoundland");
        Add("Northwest Territories");
        Add("Nova Scotia");
        Add("Nunavut");
        Add("Ontario");
        Add("Prince Edward Island");
        Add("Quebec");
        Add("Yukon");
    }
}
The following XAML references and elements perform the data binding:
xmlns:src="clr-namespace:ListDemoCS"
This is a reference to the common language runtime (CLR) namespace, which is automatically declared within the assembly and exposes its public types. Replace ListDemoCS with the name of your assembly (project name).
<src:Canada x:
This is a resource dictionary reference specified in the <Grid.Resources> element. It specifies the object to bind to (Canada) and its name (CanadaList) is specified by controls, such as a ListBox control, that use this resource.
<ListBox x:
This element defines a ListBox control with the ItemsSource property set to the bound object.
The complete XAML is as follows.
<UserControl x: <Grid x: <Grid.Resources> <src:Canada x: </Grid.Resources> <ListBox x: </Grid> </UserControl>
The next example demonstrates several properties and methods of the List<T> generic class of type string.

public class Example
{
    public static void Demo(System.Windows.Controls.TextBlock outputBlock)
    {
        List<string> dinosaurs = new List<string>();

        outputBlock.Text += String.Format("\nCapacity: {0}", dinosaurs.Capacity) + "\n";

        dinosaurs.Add("Tyrannosaurus");
        dinosaurs.Add("Amargasaurus");
        dinosaurs.Add("Mamenchisaurus");
        dinosaurs.Add("Deinonychus");
        dinosaurs.Add("Compsognathus");

        outputBlock.Text += "\n";
        foreach (string dinosaur in dinosaurs)
        {
            outputBlock.Text += dinosaur + "\n";
        }

        outputBlock.Text += String.Format("\nCapacity: {0}", dinosaurs.Capacity) + "\n";
        outputBlock.Text += String.Format("Count: {0}", dinosaurs.Count) + "\n";

        outputBlock.Text += String.Format("\nContains(\"Deinonychus\"): {0}", dinosaurs.Contains("Deinonychus")) + "\n";

        outputBlock.Text += String.Format("\nInsert(2, \"Compsognathus\")") + "\n";
        dinosaurs.Insert(2, "Compsognathus");

        outputBlock.Text += "\n";
        foreach (string dinosaur in dinosaurs)
        {
            outputBlock.Text += dinosaur + "\n";
        }

        outputBlock.Text += String.Format("\ndinosaurs[3]: {0}", dinosaurs[3]) + "\n";

        outputBlock.Text += "\nRemove(\"Compsognathus\")" + "\n";
        dinosaurs.Remove("Compsognathus");

        outputBlock.Text += "\n";
        foreach (string dinosaur in dinosaurs)
        {
            outputBlock.Text += dinosaur + "\n";
        }

        dinosaurs.TrimExcess();
        outputBlock.Text += "\nTrimExcess()" + "\n";
        outputBlock.Text += String.Format("Capacity: {0}", dinosaurs.Capacity) + "\n";
        outputBlock.Text += String.Format("Count: {0}", dinosaurs.Count) + "\n";

        dinosaurs.Clear();
        outputBlock.Text += "\nClear()" + "\n";
        outputBlock.Text += String.Format("Capacity: {0}", dinosaurs.Capacity) + "\n";
        outputBlock.Text += String.Format("Count: {0}", dinosaurs.Count) + "\n";
    }
}
/* */
For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers.
|
http://msdn.microsoft.com/en-us/library/vstudio/6sh2ey19(v=vs.95)
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
The OpenGL 3.2 core profile is not compatible with cl_gl.h
As a workaround to use OpenCL functionality with OpenGL 3.2 core mode, you have to define __gl_h_ before including CL/cl_gl.h;
otherwise cl_gl.h tries to include the old GL header file, which results in type conflicts:
I think a fix is needed within the cl_gl.h file.
As a workaround, you can use:
#if GL3_PROTOTYPES
#define __gl_h_
#endif
#include <CL/cl_gl.h>
|
http://www.khronos.org/message_boards/showthread.php/6119-Apple-OpenCL_OceanWave-Demo-crashes?goto=nextnewest
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
In today’s Programming Praxis exercise, our goal is to provide two different solutions for the well known SEND + MORE = MONEY sum, in which each letter must be replaced by a valid digit to yield a correct sum. Let’s get started, shall we?
A quick import:
import Data.List
I’ll be honest, the only reason I wrote this first solution this way is because the exercise explicitly called for checking all possible solutions using nested loops. It’s so horribly inefficient! Take the test whether all digits are unique for example: normally you’d remove each chosen digit from the list of options for all subsequent ones, but we’re not allowed to do that. I normally also wouldn’t do the multiplications this explicitly, but to avoid overlap with the second solution I left it like this. Unsurprisingly, it takes almost a minute and a half to run.
send1 :: ([Integer], [Integer], [Integer])
send1 = head [([s,e,n,d], [m,o,r,e], [m,o,n,e,y])
             | s <- [1..9], e <- [0..9], n <- [0..9], d <- [0..9]
             , m <- [1..9], o <- [0..9], r <- [0..9], y <- [0..9]
             , length (group $ sort [s,e,n,d,m,o,r,y]) == 8
             , 1000*(s+m) + 100*(e+o) + 10*(n+r) + d+e ==
               10000*m + 1000*o + 100*n + 10*e + y]
This is actually the solution I started with: since all digits need to be unique, you can simply generate the permutations of the numbers 0 through 9, backtracking when s or m are zero or when the numbers don't add up correctly. By writing a function to do the multiplication and assigning some variables we not only make things more readable, but we also get to use the problem statement directly in the code, which I find conceptually satisfying. I do have the distinct impression that this is what we're supposed to make in part 2 of this exercise though, since it runs in about a second, which is significantly faster than the two provided solutions.
send2 :: (Integer, Integer, Integer)
send2 = head [ (send, more, money)
             | (s:e:n:d:m:o:r:y:_) <- permutations [0..9]
             , s /= 0, m /= 0, let fill = foldl ((+) . (* 10)) 0
             , let send = fill [s,e,n,d], let more = fill [m,o,r,e]
             , let money = fill [m,o,n,e,y], send + more == money]
A quick test shows that both algorithms produce the correct solution.
main :: IO ()
main = do print send1
          print send2
Tags: bonsai, code, Haskell, kata, money, more, praxis, programming, send, sum
December 23, 2013 at 9:07 pm |
I have solved the problem using my set-cover package. The compiled program runs in half a second:
December 23, 2013 at 10:06 pm |
In send2 you test every assignment twice, because you ignore the last two digits in “(s:e:n:d:m:o:r:y:_) <- permutations [0..9]".
|
http://bonsaicode.wordpress.com/2012/07/31/programming-praxis-send-more-money-part-1/
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
The Java Applet Viewer
Applet viewer is a command line program to run Java applets; to view an applet in a web page instead, the browser should be Java enabled. To create an applet, we need to define a Java applet class.
We can use only one option, -debug, which starts the applet viewer in the Java debugger.
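As a rough illustration (the file and class names here are arbitrary), a minimal applet and the matching applet viewer invocation might look like this:

import java.applet.Applet;
import java.awt.Graphics;

// A minimal applet that just paints a greeting.
public class HelloApplet extends Applet {
    public void paint(Graphics g) {
        g.drawString("Hello from the applet viewer!", 20, 20);
    }
}

It would be embedded in an HTML page with a tag such as <applet code="HelloApplet.class" width="300" height="100"></applet>, and that page (say, HelloApplet.html) can then be opened with "appletviewer HelloApplet.html", or with "appletviewer -debug HelloApplet.html" to start it under the Java debugger.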
Setting Applet Viewer Properties - Java Beginners
Setting Applet Viewer Properties Hi,
I'm developing an applet using netbeans 6.1 ide. I want to set the appletviewer's width and height so I can test the applet from the ide and not from the browser through a html page. How
Pdf Viewer
Pdf Viewer How to diplay the pdf files in java panel using scrollpane...
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import java.io.*;
import java.util.*;
import com.lowagie.text.*;
import
What is Applet? - Applet
:// Hi,Here is more information about applet viewer. Applet viewer is a command line program to run Java applets. It is included... any further, lets see what an applet is?
TightVNC Java Viewer connection
TightVNC Java Viewer connection I have source code of the tight VNC and installed 'Real VNC' on my local system. When I run the source of tight VNC, it asked for the host and port so I passed following credentials: Host
Play Audio in Java Applet
Play Audio in Java Applet
... the sound file. This program will show
you how to play a audio clip in your java applet viewer or on the
browser.
For this example we will be creating an applet called
Applet
Applet Write a Java applet that drwas a line between 2 points. The co-ordinates of 2 points should be passed as parametrs from html file. The color of the line should be red
applet - Applet
in Java Applet.",40,20);
}
}
2) Call this applet with html code...:
Thanks.... Hi Friend,
Try the following code:
1)Create an applet
Applet - Applet
,
Applet
Applet is java program that can be embedded into HTML pages. Java applets... in details to visit.... what is the concept of applet?
what is different between
applet - Applet
*;
import java.awt.*;
public class CreateTextBox extends Applet implements... information,visit the following link:
Thanks
Applet - Applet
------------------------");
g.drawString("Demo of Java Applet Window Event Program");
g.drawString("Java...Applet Namaste, I want to create a Menu, the menu name is "Display... java.awt.event.*;
public class menu2frame extends Applet implements WindowListener
java - Applet
java what is applet
Applet in Eclipse - Running Applet In Eclipse
and the the Java applet appears in the applet viewer.
Download this Example... in
Eclipse 3.0. An applet is a little Java program that runs inside a Web...->New->Project... from the menu bar to begin creating your Java applet
Clock Applet in Java
Java - Clock Applet in Java
... by the java applet to illustrate
how to use the clock in an applet. This program shows... the time in an applet in the time format
like: hours, minutes and then seconds
Java Applet
Java Applet How to add Image in Java Applet? what is getDocumentBase
java applet - Applet
java applet I want to close applet window which is open by another button of applet program. plz tell me! Hi Friend,
Try...://
Thanks
applet problem - Applet
applet problem How can I create a file in client side by a java applet . Surely it will need a signed applet .But how can a signed applet create a file in the client side
java - Applet
java how to connect database table with scrollbar in java applet
java - Applet
java what is applet? Hi Friend,
Please visit the following link:
Thanks
What is Applet in Java?
is that these are typically executed in an Applet viewer or Java compatible web browser.
Life...A Java Applet is a small dynamic program which can be transferred via... to the machine if user allows.
Disadvantages of Java Applet:
Applets
java applet
java applet why java applet programs doesn't contain main method
|
http://roseindia.net/discussion/18739-The-Java-Applet-Viewer.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Subject: Re: [OMPI users] I have still a problem with rankfiles in openmpi-1.6.4rc3
From: Eugene Loh (eugene.loh_at_[hidden])
Date: 2013-02-05 16:20:09
On 02/05/13 00:30, Siegmar Gross wrote:
>
> now I can use all our machines once more. I have a problem on
> Solaris 10 x86_64, because the mapping of processes doesn't
> correspond to the rankfile. I removed the output from "hostfile"
> and wrapped around long lines.
>
> tyr rankfiles 114 cat rf_ex_sunpc
> # mpiexec -report-bindings -rf rf_ex_sunpc hostname
>
> rank 0=sunpc0 slot=0:0-1,1:0-1
> rank 1=sunpc1 slot=0:0-1
> rank 2=sunpc1 slot=1:0
> rank 3=sunpc1 slot=1:1
>
>
> tyr rankfiles 115 mpiexec -report-bindings -rf rf_ex_sunpc hostname
> [sunpc0:17920] MCW rank 0 bound to socket 0[core 0-1] socket 1[core 0-1]: [B B][B B] (slot list 0:0-1,1:0-1)
> [sunpc1:11265] MCW rank 1 bound to socket 0[core 0-1] : [B B][. .] (slot list 0:0-1)
> [sunpc1:11265] MCW rank 2 bound to socket 0[core 0-1] socket 1[core 0-1]: [B B][B B] (slot list 1:0)
> [sunpc1:11265] MCW rank 3 bound to socket 0[core 0-1] socket 1[core 0-1]: [B B][B B] (slot list 1:1)
A few comments.
First of all, the heterogeneous environment had nothing to do with this (as you have just confirmed). You can reproduce the problem so:
% cat myrankfile
rank 0=mynode slot=0:1
% mpirun --report-bindings --rankfile myrankfile hostname
[mynode:5150] MCW rank 0 bound to socket 0[core 0-3]: [B B B B] (slot list 0:1)
Anyhow, that's water under the bridge at this point.
Next, and you might already know this, you can't bind arbitrarily on Solaris. You have to bind to a locality group (lgroup) or an
individual core. Sorry if that's repeating something you already knew. Anyhow, your problem cases are when binding to a single
core. So, you're all right (and OMPI isn't).
Finally, you can check the actual binding so:
% cat check.c
#include <sys/types.h>
#include <sys/processor.h>
#include <sys/procset.h>
#include <stdio.h>
int main(int argc, char **argv) {
    processorid_t obind;
    if ( processor_bind(P_PID, P_MYID, PBIND_QUERY, &obind) != 0 ) {
        printf("ERROR\n");
    } else {
        if ( obind == PBIND_NONE ) printf("unbound\n");
        else                       printf("bind to %d\n", obind);
    }
    return 0;
}
% cc check.c
% mpirun --report-bindings --rankfile myrankfile ./a.out
I can reproduce your problem on my Solaris 11 machine (rankfile specifies a particular core but --report-bindings shows binding to
entire node), but the test problem shows binding to the core I specified.
So, the problem is in --report-bindings? I'll poke around some.
|
http://www.open-mpi.org/community/lists/users/2013/02/21307.php
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Language management
Asked by ntrubert-cobweb on 2010-08-24
As there are Magento Meta Information by product and translation available, is there possibility to synchronize translation between Openerp and Magento?
If yes is it by shop relation ?
One language one shop ?
Question information
- Language:
- English Edit question
- Status:
- Solved
- Assignee:
- No assignee Edit question
- Solved by:
- ntrubert-cobweb
- Solved:
- 2010-08-25
- Last query:
- 2010-08-25
- Last reply:
-
The blueprint already exists, sorry for that.
We can fully manage the language when we export products from OpenERP to Magento.
You just have to select the correct language in OpenERP for each storeview. (You can do it from the shop view.)
Also you have to select the default language of your Magento instance from the menu magento instance.
Importing products from Magento to OpenERP in multiple languages is not possible right now. If you are interested in this functionality, you can contact us.
Regards
The answer is no, "Magento Open ERP Connector" is not ready yet to manage language import export.
base_external_referentials calls the Magento API without a store_view parameter:
def ext_create(self, cr, uid, data, conn, method, oe_id, context):
    return conn.call(method, data, <we have to add store_view code here according to the language context>)

def try_ext_update(self, cr, uid, data, conn, method, oe_id, external_id, ir_model_data_id, create_method, context):
    return conn.call(method, [external_id, data], <we have to add store_view code here according to the language context>)
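A rough sketch of the idea (purely hypothetical names; the mapping from the OpenERP language in the context to a Magento storeview code would have to come from the shop/storeview configuration):

def ext_create(self, cr, uid, data, conn, method, oe_id, context):
    # Hypothetical helper that resolves context['lang'] to a Magento storeview code.
    storeview = self._get_storeview_code(cr, uid, context.get('lang'), context)
    # Pass the storeview code as an extra argument to the Magento API call.
    return conn.call(method, data + [storeview])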
I gonna add blueprints
|
https://answers.launchpad.net/magentoerpconnect/+question/122562
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Testing CherryPy 3 Application with Twill and Nose
I’ve been working on a CherryPy application for a few days, and wanted to write some tests. Surprisingly I could not find any tutorials or documentation on how I should test a CherryPy application. Unfortunately I also missed the last section on the CherryPy Testing page; why is CherryPy application testing added as an afterthought? Wouldn’t it make more sense to start the testing section on how people can write tests for their CherryPy applications, rather than first explaining how to test CherryPy itself? Oh well, at least I learned something new…
Since I got tests working with Twill first I decided to document my experience, and switch to the CherryPy way later if it makes more sense. The CherryPy Essentials book apparently has a section on testing, so reading that would probably clarify a lot of things.
There is a brief tutorial on how to test CherryPy 2 application with twill, but the instructions need some tweaking to work with CherryPy 3.
On Ubuntu 8.04 I first created a virtualenv 1.3.1 without site packages. I am running Python 2.5.2, and I have the following packages installed in the virtualenv: setuptools 0.6c9, CherryPy 3.1.2, twill 0.9 and nose 0.11.1. The additional packages were installed with
easy_install.
My directory structure is as follows:
hello.py
tests/
    __init__.py
    test_hello.py
hello.py contents is simply:
import cherrypy

class HelloWorld:
    def index(self):
        return "Hello world!"
    index.exposed = True

if __name__ == '__main__':
    cherrypy.quickstart(HelloWorld())
Running python hello.py will start the web server and I can see the greeting in my browser at http://localhost:8080/.
The tests directory has two files.
__init__.py is empty. The test_hello.py follows closely the tutorial by Titus, but modified to work with CherryPy 3. The CherryPy 3 Upgrade instructions and CherryPy mod_wsgi instructions showed the way.
from StringIO import StringIO
import twill
import cherrypy
from hello import HelloWorld

class TestHelloWorld:
    def setUp(self):
        # configure cherrypy to be quiet ;)
        cherrypy.config.update({ "environment": "embedded" })

        # get WSGI app.
        wsgiApp = cherrypy.tree.mount(HelloWorld())

        # initialize the cherrypy server.
        cherrypy.server.start()

        # install the WSGI intercept so twill talks to the app in-process.
        twill.add_wsgi_intercept('localhost', 8080, lambda: wsgiApp)

    def tearDown(self):
        # remove intercept.
        twill.remove_wsgi_intercept('localhost', 8080)

        # shut down the cherrypy server.
        cherrypy.server.stop()

    def test_hello(self):
        script = "find 'Hello world!'"
        twill.execute_string(script, initial_url='http://localhost:8080/')
Now you’d expect that this would work by simply running the nosetests command. Mysteriously I got import error on twill (and after I removed the line, also import error on cherrypy). I looked at sys.path which showed that I was somehow picking up the older nosetests I had installed into system Python.
which nosetests claimed it was finding the virtualenv nosetests. Still, I had to actually give the explicit path to my virtualenv nosetests before the tests would run without import errors.
All in all testing CherryPy applications turned into a longer adventure than I anticipated. I run into a number of unexpected difficulties, but I finally got it working and learned about twill as a bonus. Thanks for the tip, JJ!
Rene Dudfield:
Hi,
this is neat, thanks for sharing.
One other technique with cherrypy apps is to test them like normal python objects.
For example:
def test_hello(self):
    self.assertTrue("Hello world!" in HelloWorld().index())
Functional tests are really nice to see it is working on a real webserver too 🙂
cheers,November 25, 2009, 4:37 am
Christian Wyglendowski:
Nice post. Here is a bit of code that I put together at one point that uses Twill to test a CherryPy 3 app.
It doesn’t use WSGI intercept – it launches the full CherryPy app server and tests against that.November 25, 2009, 6:32 am
Heikki Toivonen:
Thanks Rene, that was so simple it is embarrassing I did not realize that 🙂
Christian, that looks interesting as well, thanks for sharing!November 25, 2009, 9:17 pm
Wyatt:
I’ve found that when installing scripts into a virtualenv, I’ve often had to deactivate and then re-activate that virtualenv to get things to work properly.November 27, 2009, 12:52 pm
|
https://www.heikkitoivonen.net/blog/2009/11/24/testing-cherrypy-3-application-with-twill-and-nose/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Getting Started Guide
Quick-start guide to using and developing with Red Hat Container Development Kit
Abstract
Chapter 1. Getting started with Container Development Kit
This section contains information about setting up, installing, and uninstalling Container Development Kit.
1.1. Introducing Red Hat Container Development Kit
Red Hat Container Development Kit provides a Container Development Environment for building and testing containerized applications based on Red Hat Enterprise Linux.
- Container Development Kit is available for the Microsoft Windows, macOS, and Linux operating systems, thus allowing developers to use their preferred platform.
1.1.1. Understanding Container Development Kit documentation
- The Red Hat Container Development Kit 3.9 Release Notes and Known Issues contains information about the current release of the product as well as a list of known problems that users may encounter when using it.
- The Container Development Kit Getting Started Guide contains instructions on how to install and start using the Container Development Environment to develop Red Hat Enterprise Linux-based containers using tools and services such as OpenShift Container Platform, Docker, Eclipse, and various command line tools.
- Report issues with Red Hat Container Development Kit or request new features using the CDK project at.
- Report issues with the Red Hat Container Development Kit 3.9 Release Notes and Known Issues and Container Development Kit Getting Started Guide using the RHDEVDOCS project at.
1.2. Preparing to Install CDK
1.2.1. Overview
The following section describes how to install CDK and the required dependencies.
These are the basic steps for setting up CDK on your personal system:
- Set up your virtualization environment
- Download CDK software for your operating system from the Red Hat Container Development Kit Download page
- Install CDK
- Set up and start CDK
- Configure CDK so you can use it efficiently
The setup procedure should be run as a regular user with permission to launch virtual machines. In the procedure, you will see how to assign that permission, along with ways to configure your hypervisor and command shell to start and effectively interact with CDK.
1.2.2. Prerequisites
CDK requires a hypervisor to start the virtual machine on which the OpenShift cluster is provisioned. Verify that the hypervisor of your choice is installed and enabled on your system before you set up CDK. Once the hypervisor is up and running, additional setup is required for CDK to work with that hypervisor.
Depending on your host operating system, you have the choice of the following recommended native hypervisors:
- macOS
- Linux
- Windows
- Note
To use CDK with Hyper-V ensure that, after you install Hyper-V, you also add a Virtual Switch using the Hyper-V Manager and set the configuration option
hyperv-virtual-switch to this virtual switch. For specific configuration steps see the Setting Up the Hyper-V Hypervisor section.
- All Platforms
- Note
VirtualBox 5.1.12 or later is recommended on Windows to avoid the issue Error: machine does not exist. If you encounter issues related to the hypervisor, see the Troubleshooting Driver Plug-ins section.
Refer to the documentation for each hypervisor to determine the hardware and operating system versions needed to run that hypervisor.
1.3. Setting Up the Virtualization Environment
1.3.1. Overview
Follow the appropriate procedure to set up the hypervisor for your particular operating system. CDK uses libmachine and its driver plug-in architecture to provide a consistent way to manage the CDK VM.
Some hypervisors require manual installation of the driver plug-in. CDK embeds the VirtualBox driver plug-in, so no additional steps are required to configure it. However, VirtualBox will need to be identified to CDK via the
--vm-driver virtualbox flag or persistent configuration settings. See Setting Up CDK to Use VirtualBox for more information.
See the appropriate section for your hypervisor and operating system:
- For Red Hat Enterprise Linux, set up the KVM driver
- For macOS, set up the xhyve driver
- For Windows, set up the Hyper-V hypervisor
- For VirtualBox, set up CDK to use VirtualBox
1.3.2. Red Hat Enterprise Linux
1.3.2.1. Setting Up the KVM Driver
CDK is currently tested against
docker-machine-driver-kvm version 0.10.0.
As root, install the KVM binary and make it executable as follows:
# curl -L -o /usr/local/bin/docker-machine-driver-kvm
# chmod +x /usr/local/bin/docker-machine-driver-kvm
For more information, see the GitHub documentation of the Docker Machine KVM driver.
As root, install libvirt and qemu-kvm on your system:
# yum install libvirt qemu-kvm
As root, add yourself to the libvirt group:
# usermod -a -G libvirt username
Update your current session to apply the group change:
$ newgrp libvirt
Start the libvirtd service as root:
# systemctl start libvirtd
# systemctl enable libvirtd
1.3.3. macOS
1.3.3.1. Setting Up the xhyve Driver
CDK is currently tested against
docker-machine-driver-xhyve version 0.3.3.
To install the xhyve driver, you need to download and install the
docker-machine-driver-xhyve binary and place it in a directory which is on your
PATH environment variable. The directory /usr/local/bin is recommended, as it is the default installation directory for Docker Machine binaries.
The following steps explain the installation of the
docker-machine-driver-xhyve binary to the /usr/local/bin/ directory:
Download the
docker-machine-driver-xhyvebinary using:
$ sudo curl -L -o /usr/local/bin/docker-machine-driver-xhyve
Enable root access for the
docker-machine-driver-xhyvebinary and add it to the default wheel group:
$ sudo chown root:wheel /usr/local/bin/docker-machine-driver-xhyve
Set the owner User ID (SUID) for the binary as follows:
$ sudo chmod u+s,+x /usr/local/bin/docker-machine-driver-xhyve
The downloaded docker-machine-driver-xhyve binary is compiled against a specific version of macOS. The driver may fail to work after a macOS version upgrade. In this case, you can try to compile the driver from source:
$ go get -u -d github.com/zchee/docker-machine-driver-xhyve
$ cd $GOPATH/src/github.com/zchee/docker-machine-driver-xhyve
# Install docker-machine-driver-xhyve binary into /usr/local/bin
$ make install
# docker-machine-driver-xhyve need root owner and uid
$ sudo chown root:wheel /usr/local/bin/docker-machine-driver-xhyve
$ sudo chmod u+s /usr/local/bin/docker-machine-driver-xhyve
For more information, see the xhyve driver documentation on GitHub.
1.3.3.1.1. Next Steps
Proceed to Installing CDK once your hypervisor has been installed and configured.
1.3.4. Windows
1.3.4.1. Setting Up the Hyper-V Hypervisor
To use CDK with Hyper-V:
- Install Hyper-V.
Add the user to the local Hyper-V Administrators group.
Note
This is required to allow the user to create and delete virtual machines with the Hyper-V Management API. For more information, see Hyper-V commands must be run as an Administrator.
- Add an External Virtual Switch.
- Verify that you pair the virtual switch with a network card (wired or wireless) that is connected to the network.
1.3.4.1.1. Next Steps
Proceed to Installing CDK once your hypervisor has been installed and configured.
1.3.5. Setting Up CDK to Use VirtualBox
VirtualBox must be manually installed in order to use the embedded VirtualBox drivers. VirtualBox version 5.1.12 or later is required. Ensure that you download and install VirtualBox before using the embedded drivers.
VirtualBox must be identified to CDK through either the
--vm-driver virtualbox flag or persistent configuration options.
1.3.5.1. Use VirtualBox Temporarily
The
--vm-driver virtualbox flag will need to be given on the command line each time the
minishift start command is run. For example:
$ minishift start --vm-driver virtualbox
1.3.5.2. Use VirtualBox Permanently
Setting the
vm-driver option as a persistent configuration option allows you to run
minishift start without explicitly passing the
--vm-driver virtualbox flag each time. You may set the
vm-driver persistent configuration option as follows:
$ minishift config set vm-driver virtualbox
The
vm-driver persistent configuration option must be supplied before
minishift start has been run. If you have already run
minishift start, ensure that you run
minishift delete, set the configuration with
minishift config set vm-driver virtualbox, then run
minishift start in order to make use of the VirtualBox driver.
1.3.5.3. Next Steps
Proceed to Installing CDK once your hypervisor has been installed and configured.
1.4. Installing CDK
Download the CDK binary for your operating system: cdk-3.9.0-1-minishift-darwin-amd64 (for macOS), cdk-3.9.0-1-minishift-linux-amd64 (for Red Hat Enterprise Linux) or cdk-3.9.0-1-minishift-windows-amd64.exe (for Windows). Assuming the executable is in the ~/Downloads directory, follow the procedure for your operating system:
For Red Hat Enterprise Linux:
$ mkdir -p ~/bin
$ cp ~/Downloads/cdk-3.9.0-1-minishift* ~/bin/minishift
$ chmod +x ~/bin/minishift
$ export PATH=$PATH:$HOME/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
For macOS:
$ mkdir -p ~/bin
$ cp ~/Downloads/cdk-3.9.0-1-minishift-darwin-amd64 ~/bin/minishift
$ chmod +x ~/bin/minishift
$ export PATH=$PATH:$HOME/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bash_profile
- On the Windows operating system, due to issue #236, you need to execute the CDK binary from your local C:\ drive. You cannot run CDK from a network drive.
- CDK will use any SSH binary found on the PATH in preference to internal SSH code. Ensure that any SSH binary in the system PATH does not generate warning messages as this will cause installation problems such as issue #1923.
1.5. CDK Quickstart
1.5.1. Overview
This section contains a brief demo of CDK and of the provisioned OpenShift cluster. For details on the usage of CDK, see the Basic Usage section.
The interaction with OpenShift is with the command line tool
oc which is copied to your host. For more information on how CDK can assist you in interacting with and configuring your local OpenShift instance, see the OpenShift Client Binary section.
For more information about the OpenShift cluster architecture, see Architecture Overview in the OpenShift documentation.
The following steps describe how to get started with CDK on a Linux operating system with the KVM hypervisor driver.
1.5.2. Setting up CDK
The
minishift setup-cdk command gets and configures the components needed to run CDK on your system. By default,
minishift setup-cdk places CDK content in your ~/.minishift directory (%USERPROFILE%/.minishift on Windows).
To use a directory other than ~/.minishift, you must set the
--minishift-home flag or
MINISHIFT_HOME environment variable, as described in Environment Variables.
Run the following command to set up CDK for Red Hat Enterprise Linux:
$ minishift setup-cdk
Setting up CDK 3 on host using '/home/user/.minishift' as Minishift's home directory
Copying minishift-rhel7.iso to '/home/user/.minishift/cache/iso/minishift-rhel7.iso'
Copying oc to '/home/user/.minishift/cache/oc/v3.10.45/linux/oc'
Creating configuration file '/home/user/.minishift/config/config.json'
Creating marker file '/home/user/.minishift/cdk'
Default add-ons anyuid, admin-user, xpaas, registry-route, che, eap-cd installed
Default add-ons anyuid, admin-user, xpaas enabled
CDK 3 setup complete.
For Windows or macOS: Running the
minishift setup-cdk command on Windows and macOS results in slightly different output, based on some different components and pathnames.
1.5.3. Starting CDK
By default, minishift start prompts you for your Red Hat Subscription Manager account username and password. You can enter that information or choose instead to:
- Skip registration: Add the --skip-registration option to minishift start to not register the CDK VM.
Register permanently: You can export registration information to environment variables so that
minishift picks it up automatically each time it starts.
Important
Storing unencrypted registration information in environment variables is not secure. Entering your credentials through the
minishift startprompt is recommended for security.
Export your registration information as follows:
For Red Hat Enterprise Linux:
$ export MINISHIFT_USERNAME='<RED_HAT_USERNAME>'
$ export MINISHIFT_PASSWORD='<RED_HAT_PASSWORD>'
$ echo export MINISHIFT_USERNAME=$MINISHIFT_USERNAME >> ~/.bashrc
$ echo export MINISHIFT_PASSWORD=$MINISHIFT_PASSWORD >> ~/.bashrc
For macOS:
$ export MINISHIFT_USERNAME='<RED_HAT_USERNAME>'
$ export MINISHIFT_PASSWORD='<RED_HAT_PASSWORD>'
$ echo export MINISHIFT_USERNAME=$MINISHIFT_USERNAME >> ~/.bash_profile
$ echo export MINISHIFT_PASSWORD=$MINISHIFT_PASSWORD >> ~/.bash_profile
For Windows:
Using Command Prompt:
C:\> set MINISHIFT_USERNAME='<RED_HAT_USERNAME>'
C:\> set MINISHIFT_PASSWORD='<RED_HAT_PASSWORD>'
C:\> setx MINISHIFT_USERNAME %MINISHIFT_USERNAME%
C:\> setx MINISHIFT_PASSWORD %MINISHIFT_PASSWORD%
Using PowerShell:
PS> $env:MINISHIFT_USERNAME = '<RED_HAT_USERNAME>'
PS> $env:MINISHIFT_PASSWORD = '<RED_HAT_PASSWORD>'
PS> setx MINISHIFT_USERNAME $env:MINISHIFT_USERNAME
PS> setx MINISHIFT_PASSWORD $env:MINISHIFT_PASSWORD
Run the minishift start command:
$ minishift start
-- Starting profile 'minishift'
...
-- Minishift VM will be configured with ...
   Memory:    4 GB
   vCPUs :    2
   Disk size: 20 GB
-- Starting Minishift VM .......................... OK
-- Registering machine using subscription-manager
   Registration in progress ..................... OK [42s]
...
OpenShift server started.
The server is accessible via web console at:
You are logged in as:
    User:     developer
    Password: <any value>
To login as administrator:
    oc login -u system:admin
...
- The IP is dynamically generated for each OpenShift cluster. To check the IP, run the minishift ip command.
- By default, CDK uses the driver most relevant to the host OS. To use a different driver, set the --vm-driver flag in minishift start. For example, to use VirtualBox instead of KVM on Linux operating systems, run minishift start --vm-driver=virtualbox.
- While CDK starts it runs several checks to make sure that the CDK VM and the OpenShift cluster are able to start correctly. If any startup checks fail, see the Troubleshooting Getting Started topic for information about possible causes and solutions.
For more information about minishift start options, see the minishift start command reference.
Use minishift oc-env to display the command you need to type into your shell in order to add the oc binary to your PATH environment variable. The output of oc-env will differ depending on OS and shell type.
$ minishift oc-env
export PATH="/home/user/.minishift/cache/oc/v3.11.104/linux:$PATH"
# Run this command to configure your shell:
# eval $(minishift oc-env)
For more information about interacting with OpenShift with the command line interface and the Web console, see the OpenShift Client Binary section.
1.5.4. Deploying a Sample Application
OpenShift provides various sample applications, such as templates, builder applications, and quickstarts. The following steps describe how to deploy a sample Node.js application from the command line.
Create a Node.js example application:
$ oc new-app -l name=myapp
Track the build log until the application is built and deployed:
$ oc logs -f bc/nodejs-ex
Expose a route to the service:
$ oc expose svc/nodejs-ex
Access the application:
$ minishift openshift service nodejs-ex --in-browser
To stop CDK, use the following command:
$ minishift stop
Stopping local OpenShift cluster...
Unregistering machine
Cluster stopped.
For more information about creating applications in OpenShift, see Creating New Applications in the OpenShift documentation.
1.6. Uninstalling CDK
1.6.1. Overview
This section describes how you can uninstall the
minishift binary and delete associated files.
1.6.2. Uninstalling CDK
Delete the CDK VM and any VM-specific files:
$ minishift delete
This command deletes everything in the MINISHIFT_HOME/.minishift/machines/minishift directory. Other cached data and the persistent configuration are not removed.
To completely uninstall CDK, delete everything in the
MINISHIFT_HOME directory (~/.minishift by default).
2.1. Basic Usage
2.1.1. Overview
When you use CDK, you interact with the following components:
- the CDK virtual machine (VM)
- the Docker daemon running on the VM
- the OpenShift cluster running on the Docker daemon
The CDK architecture diagram outlines these components. The
minishift binary, placed on the
PATH for easy execution, is used to
start,
stop, and
delete the CDK VM. The VM itself is bootstrapped off of a Red Hat Enterprise Linux Live ISO.
Some CDK commands, for example
docker-env, interact with the Docker daemon, whilst others communicate with the OpenShift cluster, for example the
openshift command.
Once the OpenShift cluster is up and running, you interact with it using the
oc binary. CDK caches this binary under
$MINISHIFT_HOME (per default ~/.minishift).
minishift oc-env is an easy way to add the
oc binary to your
PATH.
For more details about using CDK to manage your local OpenShift cluster, see Chapter 3, Interacting with OpenShift using Container Development Kit.
Figure 2.1. : CDK architecture
2.1.2. CDK Life-cycle
2.1.2.1. The minishift setup-cdk Command
The
minishift setup-cdk command copies a Red Hat Enterprise Linux ISO image to the CDK cache, creates a default configuration file, installs and enables default add-ons, and creates a marker file to confirm that the command has been run.
The command also copies the oc binary to your host so that you can interact with OpenShift through the
oc command line tool or through the Web console, which can be accessed through the URL provided in the output of the
minishift start command.
2.1.2.2. The minishift start Command
The
minishift start command creates and configures the CDK VM and provisions a local, single-node OpenShift cluster within the CDK VM.
2.1.2.3. The minishift stop Command
The
minishift stop command stops your OpenShift cluster and shuts down the CDK VM, but preserves the OpenShift cluster state.
Starting CDK again will restore the OpenShift cluster, allowing you to continue working from the last session. However, you must enter the same parameters that you used in the original start command.
See the Persistent Configuration section for more information on how to persist CDK settings.
2.1.2.4. The minishift delete Command
The
minishift delete command deletes the OpenShift cluster, and also shuts down and deletes the CDK VM. No data or state are preserved.
2.1.3. Runtime Options
The runtime behavior of CDK can be controlled through flags, environment variables, and persistent configuration options.
The following precedence order is applied to control the behavior of CDK.
2.1.3.1. Flags
You can use command line flags with CDK to specify options and direct its behavior. This has the highest precedence. Almost all commands have flags, although different commands might have different flags. Some of the commonly-used command line flags of the
minishift start command are
cpus,
memory or
vm-driver.
2.1.3.2. Environment Variables
CDK allows you to pass most command line flags as environment variables. To construct the variable name:
- Prefix the flag with MINISHIFT_, so the vm-driver flag of the minishift start command becomes MINISHIFT_vm-driver.
- Use uppercase characters for the flag, so MINISHIFT_vm-driver in the above example becomes MINISHIFT_VM-DRIVER.
- Replace - with _, so MINISHIFT_VM-DRIVER becomes MINISHIFT_VM_DRIVER.
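For example, assuming the VirtualBox driver is available on the system, the vm-driver flag could equally be supplied through its environment variable form:

$ export MINISHIFT_VM_DRIVER=virtualbox
$ minishift start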
Environment variables can be used to replace any option of any CDK command.
You can also use the
MINISHIFT_HOME environment variable, to choose a different home directory for CDK. By default, the
minishift command places all runtime state into ~/.minishift. This environment variable is currently experimental and semantics might change in future releases.
To use
MINISHIFT_HOME, you should set the new home directory when you first set up CDK. For example, this sets the
minishift home directory to ~/.mynewdir on a Linux system:
$ minishift setup-cdk --minishift-home ~/.mynewdir
$ export MINISHIFT_HOME=~/.mynewdir
$ echo 'export MINISHIFT_HOME=~/.mynewdir' >> ~/.bashrc
2.1.3.3. Persistent Configuration
Using persistent configuration allows you to control CDK behavior without specifying actual command line flags, similar to the way you use environment variables.
To configure the same registries in the persistent configuration, run the following command:
$
2.1.3.3.2. Unsetting Persistent Configuration Values
To remove a persistent configuration option, you can use the
minishift config unset sub-command. For example:
$ minishift config unset memory
The precedence for user-defined values is as follows:
- Command line flags.
- Environment variables.
- Instance-specific configuration.
- Global configuration.
2.1.4. Persistent Volumes
Persistent volume data is stored on the CDK VM.
2.1.5. HTTP/HTTPS Proxies
minishift start honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY.
Use the --openshift-version flag to request a specific version of OpenShift Container Platform. You can list all CDK-compatible OpenShift Container Platform versions with the minishift openshift version list command.
2.1.6. Networking
The CDK VM is exposed to the host system with a host-only IP address that can be obtained with the
minishift ip command.
2.1.7. Connecting to the CDK VM with SSH
You can use the
minishift ssh command to interact with the CDK VM.
You can run
minishift ssh without a sub-command to open an interactive shell and run commands on the CDK VM in the same way that you run commands interactively on any remote machine using SSH.
You can also run
minishift ssh with a sub-command to send the sub-command directly to the CDK VM and return the result to your local shell. For example:
$ minishift ssh -- docker ps
CONTAINER     IMAGE                      COMMAND                  CREATED         STATUS          NAMES
71fe8ff16548  openshift3/ose:v3.11.104   "/usr/bin/openshift s"   4 minutes ago   Up 4 minutes    origin
2.2. CDK Profiles
2.2.1. Overview
You must run
minishift setup-cdk before using profiles.
A profile is an instance of the Minishift VM along with all of its configuration and state. The profile feature allows you to create and manage these isolated instances of Minishift.
Each CDK profile is created with its own configuration (memory, CPU, disk size, add-ons, and so on) and is independent of other profiles. Refer to the use of environment variables if you want to make sure that certain configuration settings, for example
cpus or
memory, get applied to all profiles.
The active profile is the profile against which all commands are executed, unless the global
--profile flag is used. You can determine the active profile by using the
minishift profile list command. You can execute single commands against a non-active profile by using the
--profile flag, for example
minishift --profile profile-demo console to open the OpenShift console for the specified profile-demo profile.
On top of the
--profile flag, there are commands for listing, deleting and setting the active profile. These commands are described in the following sections.
Even though profiles are independent of each other, they share the same cache for ISOs,
oc binaries and container images.
minishift delete --clear-cache will affect all profiles for this reason. We recommend using
--clear-cache with caution.
2.2.2. Creating Profiles
There are two ways to create a new profile.
Profile names can only consist of alphanumeric characters. Underscores ( _ ) and hyphens ( - ) are allowed as separators.
2.2.2.1. Using the
--profile Flag
When you run the
minishift start command with the
--profile flag the profile is created if it does not already exist. For example:
$ minishift --profile profile-demo start
-- Checking if requested hypervisor 'xhyve' is supported on this platform ... OK
-- Checking if xhyve driver is installed ...
   Driver is available at /usr/local/bin/docker-machine-driver-xhyve
   Checking for setuid bit ... OK
-- Checking the ISO URL ... OK
-- Starting local OpenShift cluster using 'xhyve' hypervisor ...
-- Minishift VM will be configured with ...
   Memory:    2 GB
   vCPUs :    2
   Disk size: 20 GB
...
See also Example Workflow for Profile Configuration.
A profile automatically becomes the active profile when a CDK instance is started successfully via
minishift start.
2.2.2.2. Using the
profile set Command
The other option to create a profile is to use the
profile set command. If the specified profile does not exist, it is implicitly created:
$ minishift profile set demo
Profile 'demo' set as active profile
The default profile is minishift. It will be present by default and it does not need to be created.
2.2.3. Listing Profiles
You can list all existing profiles with the
minishift profile list command. You can also see the active profile highlighted in the output.
$ minishift profile list
- minishift       Running     (Active)
- profile-demo    Does Not Exist
2.2.4. Switching Profiles
To switch between profiles use the
minishift profile set command:
$ minishift profile set profile-demo Profile 'profile-demo' set as active profile
Only one profile can be active at any time.
2.2.5. Deleting Profiles
To delete a profile, run:
$ minishift profile delete profile-demo
You are deleting the active profile. It will remove the VM and all related artifacts. Do you want to continue [y/N]?: y
Deleted:  /Users/user/.minishift/profiles/profile-demo
Profile 'profile-demo' deleted successfully
Switching to default profile 'minishift' as the active profile.
The default profile minishift cannot be deleted.
2.2.6. Example Workflow for Profile Configuration
You have two options to create a new profile and configure its persistent configuration. The first option is to implicitly create the new profile by making it the active profile using the
profile set command. Once the profile is active you can run any
minishift config command. Lastly, start the instance:
$ minishift profile set profile-demo
$ minishift config set memory 8GB
$ minishift config set cpus 4
$ minishift addon enable anyuid
$ minishift start
The alternative is to execute a series of commands each specifying the targeted profile explicitly using the
--profile flag:
$ minishift --profile profile-demo config set memory 8GB
$ minishift --profile profile-demo config set cpus 4
$ minishift --profile profile-demo addon enable anyuid
$ minishift --profile profile-demo start
2.3. Image Caching
2.3.1. Overview.
The format in which images are cached has changed with CDK version 3.3.0. Prior to 3.3.0, the images were stored as tar files. As of 3.3.0, images are stored in the OCI image format.
If you used image caching prior to CDK 3.3.0, your cache will need to be recreated. If you want to remove the obsolete pre-3.3.0 images, you can clear your cache via:
$ minishift delete --clear-cache
2.3.2. Explicit Image Caching
CDK provides the
image command together with its sub-commands to control the behavior of image caching. To export and import images from the Docker daemon of the CDK VM, use
minishift image export and
minishift image import.
2.3.2.1. Importing and Exporting Single Images
Once the CDK VM is running, images can be explicitly exported from the Docker daemon:
$ minishift image export <image-name-0> <image-name-1> ...
Pulling image <image-name-0> .. OK
Exporting <image-name-0>. OK
Pulling image <image-name-1> .. OK
Exporting <image-name-1>. OK
Images which are not available in the Docker daemon will be pulled prior to being exported to the host.
To import previously cached images, use the
minishift image import command:
$ minishift image import <image-name-0> <image-name-1> ...
Importing <image-name-0> . OK
2.3.2.2. Listing Cached Images
The
minishift image list command lists either the currently cached images or the images available in the CDK Docker daemon.
To view currently cached images on the host:
$ minishift image list
registry.access.redhat.com/openshift3/ose-docker-registry:v3.11.104
registry.access.redhat.com/openshift3/ose-haproxy-router:v3.11.104
registry.access.redhat.com/openshift3/ose:v3.11.104
To view images available in the Docker daemon:
$ minishift image list --vm
registry.access.redhat.com/openshift3/ose-deployer:v3.11.104
registry.access.redhat.com/openshift3/ose-docker-registry:v3.11.104
registry.access.redhat.com/openshift3/ose-haproxy-router:v3.11.104
registry.access.redhat.com/openshift3/ose-pod:v3.11.104
registry.access.redhat.com/openshift3/ose-web-console:v3.11.104
registry.access.redhat.com/openshift3/ose:v3.11.104
2.3.2.3. Persisting Cached Image Names
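As a sketch of what this section covers (the exact sub-commands are an assumption and should be verified with minishift image --help), image names can be persisted in the cache-images setting of the persistent configuration so that they are exported and imported automatically:
$ minishift image cache-config add <image-name-0> <image-name-1>
$ minishift image cache-config view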
2.3.2.4. Exporting and Importing All Images
You can export and import all images using the
--all flag. For the export command, this means that all images currently available on the Docker daemon will be exported to the host. For the import command, it means that all images available in the local CDK cache will be imported into the Docker daemon of the CDK VM.
Exporting and importing all images can take a long time and locally cached images can take up a considerable amount of disk space. We recommend using this feature with caution.
2.3.3. Implicit Image Caching
Image caching is enabled by default for CDK. It occurs in a background process after the
minishift start command is completed for the first time. Once the images are cached under $MINISHIFT_HOME/cache/image, successive CDK VM creations will use these cached images.
To disable this feature you need to disable the
image-caching property in the persistent configuration using the
minishift config set command:
$ minishift config set image-caching false
Implicit image caching will transparently add the required OpenShift images to the list of cached images as specified per
cache-images configuration option. See Persisting Cached Image Names.
Each time an image exporting background process runs, a log file is generated under $MINISHIFT_HOME/logs which can be used to verify the progress of the export.
You can re-enable the caching of the OpenShift images by setting
image-caching to
true or removing the setting altogether using
minishift config unset:
$ minishift config unset image-caching
2.4. Add-ons
2.4.1. Overview
CDK allows you to customize and extend your local OpenShift cluster through an add-on mechanism. An add-on is a directory that contains a text file with the .addon extension, which defines metadata about the add-on as well as the commands to execute when the add-on is applied.
Example: anyuid add-on definition file
# Name: anyuid                                                                           1
# Description: Allows authenticated users to run images under a non pre-allocated UID   2
# Required-Vars: ACME_TOKEN                                                              3
# OpenShift-Version: >3.6.0                                                              4
- Comment lines, starting with the '#' character, can be inserted anywhere in the file.
- Commands starting with the
!character ignore execution failure. This helps make an add-on idempotent, i.e. a command can be executed multiple times without changing the final behavior of the add-on; see the sketch below.
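A minimal sketch of an idempotent add-on command (the project name and commands are illustrative):
# Creating a project fails if it already exists; the '!' prefix keeps the add-on going
!oc new-project demo
oc adm policy add-role-to-user admin developer -n demo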
Enabled add-ons are applied during
minishift start, immediately after the initial cluster provisioning is successfully completed.
2.4.2. OpenShift-Version Semantics
As part of the add-on metadata you can specify the OpenShift version which needs to be running in order to apply the add-on. To do so, you can specify the optional OpenShift-Version metadata field. The semantics are as follows:
- If the metadata field OpenShift-Version is not specified in the add-on header, the add-on can be applied against any version of OpenShift.
- OpenShift-Version only supports versions in the form of <major>.<minor>.<patch>.
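For example, an add-on that must only be applied against OpenShift 3.9.0 or newer could declare (the version value is illustrative):
# OpenShift-Version: >3.9.0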
2.4.5. Add-on Commands
This section describes the commands that an add-on file can contain. They form a small domain-specific language for CDK add-ons:
- ssh
- If the add-on command starts with
ssh, you can run any command within the CDK-managed VM. This is similar to running
minishift sshand then executing any command on the VM. For more information about
minishift sshcommand usage, see Connecting to the CDK VM with SSH.
- oc
If the add-on command starts with
oc, it uses the
oc binary that is cached on your host to execute the specified
oc command. This is similar to running
oc --as system:admin … from the command line.
Note: The oc command is executed as system:admin.
- openshift
- If the add-on command starts with
openshift, it uses the
ocbinary present in the OpenShift container to execute the command. This means that any file parameters or other system-specific parameters must match the environment of the container instead of your host.
- docker
- If the add-on command starts with
docker, it executes a
dockercommand against the Docker daemon within the CDK VM. This is the same daemon on which the single-node OpenShift cluster is running as well. This is similar to running
eval $(minishift docker-env)on your host and then executing any
dockercommand. See also
minishift docker-env.
- echo
- If the add-on command starts with
echo, the arguments following the
echocommand are printed to the console. This can be used to provide additional feedback during add-on execution.
- sleep
- If the add-on command starts with
sleep, it waits for the specified number of seconds. This can be useful in cases where you know that a command such as
oc might take a few seconds before a certain resource can be queried.
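To illustrate several of these commands together, a small sketch of an add-on body might look like this (all names and values are illustrative):
# Name: example-commands
# Description: Demonstrates the add-on command types
echo Applying example add-on ...
sleep 5
ssh mkdir -p /tmp/example-data
docker pull alpine:latest
oc new-project example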
2.4.6. Variable Interpolation
CDK allows the interpolation of variables within add-on commands. Variables have the format #{<variable-name>} and are replaced with their corresponding values when the add-on is applied. In addition to user-provided variables, CDK exposes dynamic variables about the running instance, such as routing-suffix.
Example: Usage of the routing-suffix variable
$ minishift addons apply acme
Multiple variables must be comma separated when the
minishift config set command is used.
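A sketch of providing variable values through the persistent configuration before applying an add-on (the addon-env key and the values shown are assumptions):
$ minishift config set addon-env PROJECT_USER=developer,ACME_TOKEN=abc123
$ minishift addons apply acme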
- 1
- Using the env prefix ensures that instead of literally replacing '#{PROJECT_USER}' with 'env.USER', the value of the environment variable
USERis used. If the environment variable is not set, interpolation does not occur.
- 2
- When the add-on is applied, each occurrence of
#{PROJECT_USER}within an add-on command gets replaced with the value of the environment variable
USER.
As an add-on developer, you can enforce that a variable value is provided when the add-on gets applied by adding the variable name to the Required-Vars metadata header. Multiple variables need to be comma separated.
# Name: acme
# Description: ACME add-on
# Required-Vars: PROJECT_USER
You can also provide default values for variables using the Var-Defaults metadata header. Var-Defaults needs to be specified in the format of
<key>=<value>. Multiple default key/value pairs need to be comma separated.
# Name: acme
# Description: ACME add-on
# Required-Vars: PROJECT_USER
# Var-Defaults: PROJECT_USER=developer
2.4.7. Default Add-ons
CDK provides a set of built-in add-ons that offer some common OpenShift customization to assist with development. During
minishift setup-cdk, Minishift automatically installs and enables the xpaas, anyuid, and admin-user add-ons.
Table 2.2. Default add-ons
2.4.7.1. Add-ons by the Community
Apart from the several default add-ons, there are a number of community developed add-ons for CDK. Community add-ons can be found in the minishift-addons repository. You can get all the information about these add-ons in the repository. The instructions for installing them can be found in the README.
2.4.8. Installing Add-ons
Add-ons are installed with the
minishift addons install command.
The following example shows how to install an add-on.
Example: Installing an add-on
$ minishift addons install <path_to_addon_directory>
2.4.9. Enabling and Disabling Add-ons.
Example: Enabling the anyuid add-on
$ minishift addons enable anyuid
Example: Disabling the anyuid add-on
$ minishift addons disable anyuid
2.4.9.1. Add-on Priorities
When you enable an add-on, you can also specify a priority, which determines the order that the add-ons are applied.
The following example shows how to enable the registry add-on with a higher priority value.
Example: Enabling the registry add-on with priority
$ minishift addons enable registry --priority 5
Example: List command output with explicit priorities
$ minishift addons list
- anyuid         : enabled    P(0)
- registry       : enabled    P(5)
- eap            : enabled    P(10)
If two add-ons have the same priority the order in which they are getting applied is not determined.
2.4.10. Applying Add-ons.
Example: Applying anyuid and admin-user add-ons
$ minishift addons apply anyuid admin-user
2.4.11. Removing Add-ons
Add-ons can be removed with the
minishift addons remove command. It is the mirror command to
minishift addons apply and similarly can be used regardless of whether the add-on is enabled or not. The following example shows how to explicitly remove the admin-user add-on.
Example: Removing admin-user add-on
$ minishift addons remove admin-user -- Removing addon 'admin-user':. admin user deleted
2.4.12. Uninstalling Add-ons
Add-ons can be uninstalled with the
minishift addons uninstall command. It is the mirror command to
minishift addons install and can be used regardless of whether the add-on is enabled or not. Provided the specified add-on is installed,
minishift addons uninstall will delete the corresponding add-on directory from $MINISHIFT_HOME/addons.
The following example shows how to explicitly uninstall the admin-user add-on:
Example: Uninstalling admin-user add-on
$ minishift addons uninstall admin-user Add-on 'admin-user' uninstalled
2.4.13. Writing Custom Add-ons.
Example: Add-on definition for admin-role
# Name: admin-role
# Description: Gives the developer user cluster-admin privileges

oc adm policy add-role-to-user cluster-admin developer
After you define the add-on, you can install it by running:
$ minishift addons install <ADDON_DIR_PATH>
You can also write metadata with multiple lines.
Example: Add-on definition which contain multiline description
# Name: prometheus
# Description: This template creates a Prometheus instance preconfigured to gather OpenShift and
#   Kubernetes platform and node metrics and report them to admins. It is protected by an
#   OAuth proxy that only allows access for users who have view access to the prometheus
#   namespace. You may customize where the images (built from openshift/prometheus
#   and openshift/oauth-proxy) are pulled from via template parameters.
# Url:
You can also edit your add-on directly in the CDK add-on install directory $MINISHIFT_HOME/addons. Be aware that if there is an error in the add-on, it will not show when you run any
addons commands, and it will not be applied during the
minishift start process.
To provide add-on removal instructions, you can create a text file with the extension .addon.remove, for example admin-user.addon.remove. Similar to the .addon file, it needs the Name and Description metadata fields. If a .addon.remove file exists, it can be applied via the
remove command.
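A minimal sketch of such a removal file (the command shown is illustrative):
# Name: admin-user
# Description: Removes the admin-user
oc delete user admin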
2.5. Host Folders
2.5.1. Overview
Host folders are directories on the host which are shared between the host and the CDK VM. This allows for two way file synchronization between the host and the VM. The following sections discuss usage of the
minishift hostfolder command.
2.5.2. The minishift hostfolder Command
CDK provides the
minishift hostfolder command to list, add, mount, unmount and remove host folders. You can use the
hostfolder command to mount multiple shared folders onto custom specified mount points.
2.5.2.1. Prerequisites
2.5.2.1.1. SSHFS
SSHFS is the default technology for sharing host folders. It works without prerequisites.
2.5.2.1.2. CIFS.
On Linux, follow your distribution-specific instructions to install Samba. Refer to Samba File and Print Server in RHEL to learn how to configure the Samba implementation of CIFS in Red Hat Enterprise Linux.
2.5.2.2. Displaying Host Folders
$ minishift hostfolder list
Name    Type    Source              Mountpoint       Mounted
test    sshfs   /Users/user/test    /mnt/sda1/test   N
In this example, there is an SSHFS based host folder with the name test which mounts /Users/user/test onto /mnt/sda1/test in the CDK VM. The share is currently not mounted.
2.5.2.3. Adding Host Folders
The
minishift hostfolder add command allows you to define a new host folder.
The exact syntax to use depends on the host folder type. Independent of the type you can choose between non-interactive and interactive configuration. The default is non-interactive. By specifying the
--interactive flag you can select the interactive configuration mode.
The following sections give examples for configuring CIFS and SSHFS host folders.
2.5.2.3.1. CIFS
Adding a CIFS based hostfolder
$ minishift hostfolder add -t cifs --source //192.168.99.1/MYSHARE --target /mnt/sda1/myshare --options username=user
On Windows hosts, the
minishift hostfolder add command also provides a
users-share option. When this option is specified, no UNC path needs to be specified and C:\Users is assumed.
2.5.2.3.2. SSHFS
Adding an SSHFS based hostfolder
$ minishift hostfolder add -t sshfs --source
2.5.2.3.3. Instance-Specific Host Folders
By default, host folder definitions are persistent, similar to other persistent configuration options. This means that these host folder definitions will survive the deletion and subsequent re-creation of a CDK VM.
In some cases you might want to define a host folder just for a specific CDK instance. To do so, you can use the
--instance-only flag of the
minishift hostfolder add command. Host folder definitions that are created with the
--instance-only flag will be removed together with any other instance-specific state during
minishift delete.
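For example, a host folder that exists only for the current instance could be added like this (the paths are illustrative):
$ minishift hostfolder add -t sshfs --source /Users/user/scratch --target /mnt/sda1/scratch --instance-only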
2.5.2.4. Mounting Host Folders
When mounting SSHFS based host folders an SFTP server process is started on port 2022 of the host. Make sure that your network and firewall settings allow this port to be opened. If you need to configure this port you can make use of CDK’s persistent configuration using the key
hostfolders-sftp-port, for example:
$ minishift config set hostfolders-sftp-port 2222
2.5.2.4.1. Auto-Mounting Host Folders
Host folders can also be mounted automatically each time you run
minishift start. To set auto-mounting, you need to set the
hostfolders-automount option in the CDK configuration file.
$ minishift config set hostfolders-automount true
After the
hostfolders-automount option is set, CDK will attempt to mount all defined host folders during
minishift start.
2.5.2.5. Unmounting Host Folders
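As a sketch, a previously mounted host folder can be unmounted by name; the exact sub-command spelling should be verified with minishift hostfolder --help:
$ minishift hostfolder umount test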
2.5.2.6. Deleting Host Folders
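As a sketch, a host folder definition can be removed by name (the name is illustrative):
$ minishift hostfolder remove test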
2.6. Assign Static IP Address
2.6.1. Overview
Most hypervisors do not support extending the lease time when the IP is assigned using DHCP. This might lead to a new IP being assigned to the VM after a restart as it will conflict with the security certificates generated for the old IP. This will make CDK completely unusable until a new instance is set up by running
minishift delete followed by
minishift start.
To prevent this, CDK includes the functionality to set a static IP address to the VM. This will prevent the IP address from changing between restarts. However, it will not work on all of the driver plug-ins at the moment due to the way the IP address is resolved.
- Assigning a static IP address to the CDK VM is only officially supported for Hyper-V.
- The CDK VM cannot be assigned a static IP address when using the KVM driver plug-in.
2.6.2. Assign IP Address to Hyper-V
Since the Internal Virtual Switch for Hyper-V does not provide a DHCP offer option, an IP address needs to be provided in a different way. Functionality is provided to assign an IP address on startup using the Data Exchange Service for Hyper-V.
To make this work, you need to create a Virtual Switch using NAT.
WinNAT is limited to one NAT network per host. For more details about capabilities and limitations, please see the WinNAT capabilities and limitations blog.
The following command will attempt to assign an IP address for use on the Internal Virtual Switch 'MyInternal':
PS> minishift.exe config set hyperv-virtual-switch "MyInternal"
PS> minishift.exe start `
      --network-ipaddress 192.168.1.10 `
      --network-gateway 192.168.1.1 `
      --network-nameserver 8.8.8.8
If you want to use the 'DockerNAT' network, the following commands are needed to setup the correct NAT networking and assign an IP in the range expected:
PS> New-NetNat -Name SharedNAT -InternalIPInterfaceAddressPrefix 10.0.75.1/24
PS> minishift.exe config set hyperv-virtual-switch "DockerNAT"
PS> minishift.exe start `
      --network-ipaddress 10.0.75.128 `
      --network-gateway 10.0.75.1 `
      --network-nameserver 8.8.8.8
Be sure to specify a valid gateway and nameserver. Failing to do so will result in connectivity issues.
2.7. CDK Docker daemon
2.7.1. Overview
When running OpenShift in a single VM, you can reuse the Docker daemon managed by CDK for other Docker use-cases as well. By using the same Docker daemon as CDK, you can speed up your local development.
2.7.2. Console Configuration
In order to configure your console to reuse the CDK Docker daemon, follow these steps:
- Make sure that you have the Docker client binary installed on your machine. For information about specific binary installations for your operating system, see the Docker installation site.
- Start CDK with the
minishift startcommand.
Run the
minishift docker-envcommand to display the command you need to type into your shell in order to configure your Docker client. The command output will differ depending on OS and shell type.
$ minishift docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_CERT_PATH="/Users/user/.minishift/certs"
export DOCKER_API_VERSION="1.24"
# Run this command to configure your shell:
# eval $(minishift docker-env)
Test the connection by running the following command:
$ docker ps
If successful, the shell will print a list of running containers.
2.8. Experimental Features
2.8.1. Overview
If you want to get early access to some upcoming features and experiment, you can set the environment variable
MINISHIFT_ENABLE_EXPERIMENTAL, which makes additional feature flags available:
$ export MINISHIFT_ENABLE_EXPERIMENTAL=y
Experimental features are not officially supported, and might break or result in unexpected behavior. To share your feedback on these features, you are welcome to contact the Minishift community.
2.8.2. Enabling Experimental
oc cluster up Flags
By default, CDK does not expose all
oc cluster up flags in the CDK CLI.
You can set the
MINISHIFT_ENABLE_EXPERIMENTAL environment variable to enable the following options for the
minishift start command.
2.10.1. Installing Kompose
On Red Hat Enterprise Linux
Kompose can be installed from the command line by enabling the Red Hat Developer Tools and Red Hat Software Collections repositories:
$ subscription-manager repos --enable rhel-7-server-devtools-rpms
$ subscription-manager repos --enable rhel-server-rhscl-7-rpms
$ yum install kompose -y
2.10.2. Using Kompose
To convert your Docker Compose project using Kompose, follow these steps:
Start CDK so you have an OpenShift cluster to communicate with.
$ minishift start Starting local OpenShift cluster using 'kvm' hypervisor... -- Checking OpenShift client ... OK -- Checking Docker client ... OK -- Checking Docker version ... OK -- Checking for existing OpenShift container ... OK ...
Download an example Docker Compose file, or use your own.
wget
Convert your Docker Compose file to OpenShift. Run
kompose convert --provider=openshiftin the same directory as your docker-compose.yaml file.
$ kompose convert --provider=openshift INFO OpenShift file "frontend-service.yaml" created INFO OpenShift file "redis-master-service.yaml" created INFO OpenShift file "redis-slave-service.yaml" created INFO OpenShift file "frontend-deploymentconfig.yaml" created INFO OpenShift file "frontend-imagestream.yaml" created INFO OpenShift file "redis-master-deploymentconfig.yaml" created INFO OpenShift file "redis-master-imagestream.yaml" created INFO OpenShift file "redis-slave-deploymentconfig.yaml" created INFO OpenShift file "redis-slave-imagestream.yaml" created
Alternatively, you can convert and deploy directly to OpenShift with
kompose up --provider=openshift.
$ kompose up --provider=openshift INFO We are going to create OpenShift DeploymentConfigs, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead. INFO Deploying application in "myproject" namespace INFO Successfully created Service: frontend INFO Successfully created Service: redis-master INFO Successfully created Service: redis-slave INFO Successfully created DeploymentConfig: frontend INFO Successfully created ImageStream: frontend INFO Successfully created DeploymentConfig: redis-master INFO Successfully created ImageStream: redis-master INFO Successfully created DeploymentConfig: redis-slave INFO Successfully created ImageStream: redis-slave Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is,pvc' for details.
Access the newly deployed application with CDK.
After deployment, you must create an OpenShift route in order to access the service.
Create a route for the
frontendservice using
oc.
$ oc expose service/frontend route "frontend" exposed
Access the
frontendservice with
minishift.
$ minishift openshift service frontend --namespace=myproject
CDK creates a virtual machine and provisions a local, single-node OpenShift cluster in this VM. The following sections describe how CDK can assist you in interacting and configuring your local OpenShift cluster.
For details about managing the CDK VM, see the Basic Usage section.
3.1. Using the OpenShift Client Binary (oc)
3.1.1. Overview
The
minishift start command creates an OpenShift cluster using the cluster up approach. For this purpose it copies the
oc binary onto your host.
The
oc binary is located in the $MINISHIFT_HOME/cache/oc/v3.11.104 directory, assuming that you use the default OpenShift version for CDK. Running the minishift oc-env command displays the command you need to type into your shell in order to add the oc binary to your PATH:
$ minishift oc-env
export PATH="/home/user/.minishift/cache/oc/v3.11.104:$PATH"
# Run this command to configure your shell:
# eval $(minishift oc-env)
3.1.2. CDK CLI Profile
As part of the
minishift start command, a CLI profile named minishift is also created. This profile, also known as a context, contains the configuration to communicate with your OpenShift cluster.
3.1.3. Logging Into the Cluster.
If you run the command
oc login -u system -p admin, you will log in but not as an administrator. Instead, you will be logged in as an unprivileged user with no particular rights.
To view the available login contexts, run:
$ oc config view
3.1.4. Accessing the Web Console
To access the OpenShift Web console, you can run this command in a shell after starting CDK to get the URL of the Web console:
$ minishift console --url
Alternatively, after starting CDK, you can use the following command to directly open the console in a browser:
$ minishift console
3.1.5. Accessing OpenShift Services
To access a service that is exposed with a route, run this command in a shell:
$ minishift openshift service [-n NAMESPACE] [--url] NAME
For more information refer also to Exposing Services.
3.1.6. Viewing OpenShift Logs
To access OpenShift logs, run the following command after starting CDK:
$ minishift logs
3.1.7. Updating OpenShift Configuration.
After you update the OpenShift configuration, OpenShift will transparently restart.
3.1.7.1. Example: Configuring cross-origin resource sharing
In this example, you configure cross-origin resource sharing (CORS) in the master configuration by patching the corsAllowedOrigins property to allow requests from all origins:
$ minishift openshift config set --patch '{"corsAllowedOrigins": [".*"]}'
If you get the error The specified patch need to be a valid JSON. when you run the above command, you need to modify the above command depending on your operating system, your shell environment and its interpolation behavior.
For example, if you use PowerShell on Windows 7 or 10, modify the above command to:
PS> minishift.exe openshift config set --patch '{\"corsAllowedOrigins\": [\".*\"]}'
If you use Command Prompt, use the following:
C:\> minishift.exe openshift config set --patch "{\"corsAllowedOrigins\": [\".*\"]}"
3.1.7.2. Example: Changing the OpenShift Routing Suffix
In this example, you change the OpenShift routing suffix in the master configuration.
If you use a static routing suffix, you can set the
routing-suffix flag as part of the
minishift start command. By default, CDK uses a dynamic routing prefix based on nip.io, in which the IP address of the VM is a part of the routing suffix, for example 192.168.99.103.nip.io.
If you experience issues with nip.io, you can use xip.io, which is based on the same principles.
To set the routing suffix to xip.io, run the following command:
$ minishift openshift config set --patch '{"routingConfig": {"subdomain": "<IP-ADDRESS>.xip.io"}}'
Make sure to replace
IP-ADDRESS in the above example with the IP address of your CDK VM. You can retrieve the IP address by running the
minishift ip command.
3.2. Exposing Services
3.2.1. Overview
There are several ways you can expose your service after you deploy it on OpenShift. The following sections describe the various methods and when to use them.
3.2.2. Routes
If the service you want to expose is HTTP based, the recommended way to expose it is an OpenShift route. For an example, see the Deploying a Sample Application section.
3.2.3. NodePort Services
In case the service you want to expose is not HTTP based, you can create a NodePort service. In this case, each OpenShift node will proxy that port into your service. To access this port from your host, you need the CDK VM IP and the NodePort on which the service is exposed. The following example connects to a MySQL service using the CDK VM IP and the exposed NodePort:
$ mysql --user=root --password=admin --host=$(minishift ip) --port=30907
3.2.4. Port Forwarding
3.2.4.1. Using
oc port-forward
If you want to quickly access a port of a specific pod of your cluster, you can also use the
oc port-forward command of the OpenShift CLI.
$ oc port-forward POD [LOCAL_PORT:]REMOTE_PORT
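For example, to forward local port 8080 to port 80 of a specific pod (the pod name is illustrative):
$ oc port-forward myapp-1-abcde 8080:80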
3.2.4.2. Using VirtualBox tools
In case you’re using the VirtualBox driver plug-in, you can also use the VirtualBox port-forwarding facilities to expose a port of the CDK VM on your host.
3.3. Accessing the OpenShift Docker Registry
3.3.1. Overview
OpenShift provides an integrated Docker registry which can be used for development as well. Images present in the registry can directly be used for applications, speeding up the local development workflow.
3.3.2. Logging Into the Registry
- Start CDK and add the
oc binary to the
PATH. For a detailed example, see the CDK Quickstart section.
- Make sure your shell is configured to reuse the CDK Docker daemon.
Log into the OpenShift Docker registry.
$ docker login -u developer -p $(oc whoami -t) $(minishift openshift registry)
3.3.3. Deploying Applications
The following example shows how to deploy an OpenShift application directly from a locally-built Docker image. This example uses the OpenShift project myproject. This project is automatically created by
minishift start.
- Make sure your shell is configured to reuse the CDK Docker daemon (see the sketch below for a typical build and push flow).
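The following sketch shows a typical build, tag and push flow against the integrated registry (image and project names are illustrative):
$ docker build -t myproject/myapp .
$ docker tag myproject/myapp $(minishift openshift registry)/myproject/myapp
$ docker push $(minishift openshift registry)/myproject/myapp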
If you want to deploy an application using
oc run --image […], then the exposed internal registry route doesn’t work. You should use the internal registry IP along with your project and app name to deploy, as follows:
$ oc run myapp --image 172.30.1.1:5000/myproject/myapp
Chapter 4. Troubleshooting CDK
This section contains solutions to common problems that you might encounter while setting up and using CDK.
4.1. Troubleshooting Getting Started
4.1.1. Overview
This section contains solutions to common problems that you might encounter while installing and configuring CDK.
4.1.2. CDK startup check failed
While CDK starts, it runs several startup checks to make sure that the CDK VM and the OpenShift Cluster are able to start without any issues. If any configuration is incorrect or missing, the startup checks fail and CDK does not start.
If you want to force CDK to start despite a failing driver plug-in check, you can instruct CDK to treat these errors as warnings:
For KVM/libvirt on Linux, run the following command:
$ minishift config set warn-check-kvm-driver true
For xhyve on macOS, run the following command:
$ minishift config set warn-check-xhyve-driver true
For Hyper-V on Windows, run the following command:
C:\> minishift.exe config set warn-check-hyperv-driver true
4.1.2.2. Persistent storage volume configuration and usage
CDK checks whether the persistent storage volume is mounted and that enough disk space is available. If the persistent storage volume, for example, uses more than 95% of the available disk space, CDK will not start.
If you want to recover the data, you can skip this test and start CDK to access the persistent volume:
$ minishift config set skip-check-storage-usage true
4.1.2.3. External network connectivity
After the CDK VM starts, it runs several network checks to verify whether external connectivity is possible from within the CDK VM.
By default, network checks are configured to treat any errors as warnings, because of the diversity of the development environments. You can configure the network checks to optimize them for your environment.
For example, one of the network checks pings an external host. You can change the host by running the following command:
$ minishift config set check-network-ping-host <host-IP-address>
Replace
<host-IP-address> with the address of your internal DNS server, proxy host, or an external host that you can reach from your machine.
Because proxy connectivity might be problematic, you can run a check that tries to retrieve an external URL. You can configure the URL by running:
$ minishift config set check-network-http-host <URL>
4.1.3. OpenShift web console does not work with older versions of Safari
minishift console does not work on older versions of Safari web browser such as version 10.1.2 (12603.3.8). Attempting to access the web console results in the following error:
Error unable to load details about the server
Retry after updating Safari to the latest version, or use the Firefox or Chrome browser instead. Version 11.0.3 (13604.5.6) has been tested and works with the OpenShift web console. You can use
minishift console --url to get the web console URL.
4.2. Troubleshooting Driver Plug-ins
4.2.1. Overview
This section contains solutions to common problems that you might encounter while configuring the driver plug-ins for CDK.
4.2.2. KVM/libvirt
4.2.2.1. Undefining virsh snapshots fail
If you use
virsh on KVM/libvirt to create snapshots in your development workflow then use
minishift delete to delete the snapshots along with the VM, you might encounter the following error:
$ minishift delete
Deleting the Minishift VM...
If the deletion fails because of existing snapshot metadata, remove the snapshot definitions and the domain manually as root.
Delete the definitions:
# virsh snapshot-delete --metadata minishift <snapshot-name>
Undefine the CDK domain:
# virsh undefine minishift
You can now run
minishift deleteto delete the VM and restart CDK.
If these steps do not resolve the issue, you can also use the following command to delete the snapshots:
$ rm -rf ~/.minishift/machines
It is recommended to avoid using metadata when you create snapshots. To ensure this, you can specify the
--no-metadata flag. For example:
# virsh snapshot-create-as --domain vm1 overlay1 --diskspec vda,file=/export/overlay1.qcow2 --disk-only --atomic --no-metadata
4.2.2.2. Error creating new host: dial tcp: missing address
4.2.2.3. Failed to connect socket to '/var/run/libvirt/virtlogd-sock'
4.2.2.4. Domain 'minishift' already exists…
If you try
minishift start and this error appears, ensure that you use
minishift delete to delete the VMs that you created earlier. However, if this fails and you want to completely clean up CDK and start fresh, do the following:
As root, check if any existing CDK VMs are running:
# virsh list --all
If any CDK VM is running, stop it:
# virsh destroy minishift
Delete the VM:
# virsh undefine minishift
As your regular user, delete the ~/.minishift/machines directory:
$ rm -rf ~/.minishift/machines
In case all of this fails, you might want to uninstall CDK and do a fresh install of CDK.
4.2.3. xhyve
4.2.3.1. Could not create vmnet interface
You can completely reset the IP database by removing the files manually, but this is very risky.
4.2.4. VirtualBox
4.2.4.1. Error machine does not exist
If you use Windows, ensure that you set the
--vm-driver virtualbox flag in the
minishift start command. Alternatively, the problem might be an outdated version of VirtualBox.
To avoid this issue, it is recommended to use VirtualBox 5.1.12 or later.
4.2.5. Hyper-V
4.2.5.1. Hyper-V commands must be run as an Administrator
If you run CDK with Hyper-V as a normal user, Hyper-V commands can fail because your account lacks the required privileges. To fix this, add your account to the Hyper-V Administrators group: open Computer Management.
- In the Computer Management window, select Local Users And Groups, then double-click Groups.
- Double-click the Hyper-V Administrators group, the Hyper-V Administrators Properties dialog box is displayed.
- Add your account to the Hyper-V Administrators group, log off, then log in for the change to take effect.
Now you can run the Hyper-V commands as a normal user.
For more options for Hyper-V see creating Hyper-V administrators local group.
4.2.5.2. CDK running with Hyper-V fails when connected to OpenVPN
If you try to use CDK with Hyper-V using an external virtual switch while you are connected to a VPN such as OpenVPN, CDK might fail to provision the VM.
Cause: Hyper-V networking might not route the network traffic in both directions properly when connected to a VPN.
Workaround: Disconnect from the VPN and try again after stopping the VM from the Hyper-V manager.
4.3. Troubleshooting Miscellaneous
4.3.1. Overview
This section contains solutions to common problems that you might encounter while using various components of CDK.
4.3.2. The root filesystem of the CDK VM exceeds overlay size
Installing additional packages or copying large files to the root filesystem of the CDK VM might exceed the allocated overlay size and lock the CDK VM.
Cause: The CDK VM root filesystem contains core packages that are configured to optimize running the CDK VM and containers. The available storage on the root filesystem is determined by the overlay size, which is smaller than the total available storage.
Workaround: Avoid installing packages or storing large files in the root filesystem of the CDK VM. Instead, you can create a sub-directory in the /mnt/sda1/ persistent storage volume or define and mount host folders that can share storage space between the host and the CDK VM.
If you want to perform development tasks inside the CDK VM, it is recommended that you use containers, which are stored in persistent storage volumes, and reuse the CDK Docker daemon.
4.3.5. X.509 certificate is valid for 10.0.2.15, 127.0.0.1, 172.17.0.1, 172.30.0.1, 192.168.99.100, not 192.168.99.101
Cause: Starting a stopped CDK VM might assign it a new IP address. The certificates are generated only when the CDK VM is freshly started, so if the VM comes back up with a different IP address after a restart, the certificate becomes invalid.
Workaround: Delete the existing CDK VM and start again.
$ minishift delete --force $ minishift start
|
https://access.redhat.com/documentation/en-us/red_hat_container_development_kit/3.9/html-single/getting_started_guide/index
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
This is the second tutorial on MSP430 and it will feature code for blinking the LEDs, and hence will show you how to configure the ports as input and output. To create a new project, open Code Composer Studio and go to FILE->NEW->CCS PROJECT. After doing this, you will get the window mentioned below
Enter your project name, select the family as MSP430, and select the variant as msp430g2553. Remember, this is a critical step. To check your option, refer to the chip on your Launchpad; the variant is printed on it. For all the tutorials, I will be using msp430g2553 as the chip, so kindly change accordingly. In the bottom box, select Empty Project (with main.c) and click Finish.
After clicking Finish, you will get this screen.
Have a look at the basic structure of the code already written. The first line is your header file that depends upon the variant; you choose while creating your project. Since I am using msp430g2553, I will rename the header file to #include<msp430g2553.h>, to look your code like this
Make sure to change your header file according to your variant.
The next step is the main function. Inside the main function you can see the initialization of the watchdog timer. The MSP430, like many new-generation microcontrollers, includes a special timer called the Watchdog Timer. Its function is to reset the CPU in case the CPU gets stuck. However, many developers also use this timer when they want to reset the controller once certain conditions are met. For now, remember to turn off the watchdog timer; we will discuss it when we talk about timers later in this tutorial series. This timer is on by default when the chip powers up and hence it is necessary to turn it off.
The remaining lines of code are the ending part that we don't care much about.
Now, here comes the task we want to do, i.e. toggle the LEDs. Since the LEDs are on port 1, at pin 0 and pin 6 respectively, we first have to declare these two pins as either output or input.
Here comes the use of P1DIR register. P1DIR register is responsible for making your pins as output or input. 1 is for output and 0 is for input. Since we want to configure pin 6 and pin 0, we assign P1DIR as
P1DIR =0b01000001;
Or
P1DIR = 0x41;
Next is to specify the particular pin of the particular port as high or low. For that, you can use P1OUT as the register. 1 is for high while 0 is for low. Since we are initializing things, I set the led on P1.0 as high while the other as low.
P1OUT = 0b00000001;
Or
P1OUT = 0x01;
Just have a quick recap as to what we have done till now after creating the project
- I have modified the header file as per my variant.
- Stopped the watchdog timer
- Declare PIN0 and PIN6 of PORT1 as output
- Made PIN0 as high while PIN6 of PORT1 as low
In the next step, we will declare a variable ‘I’ for delay purpose which can be done by
unsigned int i;
Next is the infinite while loop, in it there are two steps. First is toggling and other is providing a delay after each toggle to see the toggling effect successfully.
For toggle using a bitwise operator, I can write
P1OUT^=0X41;
Or
P1OUT^=0b01000001;
This will reverse the state of pin 6 and pin 0. So since pin 6 was initially low, it will become high, and pin 0 will become low. This process goes on continuously. Hence, we provide a delay after each toggle so that we can actually see the LEDs toggling.
The important point is that there is no inbuilt delay function in MSP430, so you have to use a for loop to provide the delay. Assign a loop from i=0 till i=30000, incrementing i by 1 every time.
The number 30000 will determine your toggling time. So choose it carefully.
It can be initialized as
for(i=0; i<30000; i++){ }
The next step is simple. CCS has automatically closed the while loop, so all that is left is to build the program and burn it. Before burning, you can cross-check the code once again below
#include <msp430g2553.h> // header file that depends upon your variant /* * main.c */ int main(void) { WDTCTL = WDTPW | WDTHOLD;// Stop watchdog timer P1DIR = 0X41; //Declare PIN0 AND PIN1 OF PORT 1 AS OUTPUT P1OUT = 0X01; //MAKE PIN0 HIGH AND PIN1 LOW INITIALLY unsigned int i; //Delay variable while(1) { P1OUT ^=0X41; //Toggle the respective by using bit-xor operator for(i=0;i<30000;i++){ //Necessary delay, change it to see the effect on toggling } } return 0; }
To build and debug, click on PROJECT->BUILD ALL and then click on Run->Debug.
While building, you will see a message in the console at the bottom of the window that says 'finished building target'. After clicking on debug, you will get a popup related to power saving; simply click Proceed. Once in the debug window, go to RUN->RESUME. If your options are blanked out, no need to worry: go to VIEW->DEBUG and then again go to RUN->RESUME.
The moment you debug the code, your program gets burned into the controller. Isn't it amazing that you can burn the code without any additional software? Clicking on Resume starts the program. A shortcut is to click the play/pause-like button on the screen. The debug screen will look like the one given below. Notice that it also has the functions of breakpoints and single-step or continuous run, which are essential while debugging. And YES, on the Launchpad you will be able to see the red and green LEDs toggling.
Below is the circuit diagram, just in case you don’t have Launchpad or wish to connect the LED’s to a different port.
sir please post an full tutorial on MSP430, UART, ADC, I2C, DAC, SPI and other.
|
https://embedds.com/blinking-the-led-with-msp430/?share=reddit
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Simple Vue.js Form Validation with Vuetify
You can minimize customer frustration by having effective form validations. I will show you how to create client-side form validation using Vuetify.
Adding Vuetify to our application
cd vuetify-form-validation
What we will be creating
The goal of this article is to show you a wide range of validations that you can utilize on forms with Vuetify. To do this we will be building out the following forms:
So let’s implement each of these sections. In the template section of the LoginForm file, add the following items:
<template> <v-card> <v-card-title></v-card-title> <v-card-text> </v-card-text> <v-card-actions> </v-card-actions> </v-card> </template>
Next let’s start fill in our form. We can start by adding a title. So update the title to be:
<v-card-title>Login Form</v-card-title>
Next let’s add a button to submit our form. In the v-card-actions section add a button with the text of Login. To make the button look colorful I am going to set the button’s color to primary. This is what it should look like:
<v-card-actions>
  <v-btn color="primary">Login</v-btn>
</v-card-actions>
Next we are going to create our form. Our form will have two fields for email and password. Here is the bare minimum entry for our form:
<v-card-text>
  <v-form>
    <v-text-field></v-text-field>
    <v-text-field></v-text-field>
  </v-form>
</v-card-text>
<v-form>
  <v-text-field label="Email" v-model="email"></v-text-field>
  <v-text-field label="Password" v-model="password" type="password"></v-text-field>
</v-form>
Adding validations to our fields
To add validation to our fields, we have to do two things:
To make the field required we just need to add the prop required to both fields. Our fields now look like this:
<v-text-field label="Email" v-model="email" required ></v-text-field> <v-text-field label="password" v-model="password" type="password" required ></v-text-field>Adding our form to our application
<v-text-field label="Email" v-model="email" :rules="[v => !!v || 'Email is required']" required ></v-text-field> <v-text-field label="Password" v-model="password" type="password" :rules="[v => !!v || 'Password is required']" required ></v-text-field>-app-bar app color="primary" dark > <v-toolbar-title>Vuetify Form Validation</v-toolbar-title> </v-app-bar>:
<v-content>
  <v-tabs-items v-model="tab">
    <v-tab-item>
      <LoginForm></LoginForm>
    </v-tab-item>
  </v-tabs-items>
</v-content>
Testing our form validation
<script>
import LoginForm from './components/LoginForm';

export default {
  name: 'App',
  components: {
    LoginForm,
  },
  data: () => ({
    tab: null
  }),
};
</script>
Disable Login Button
<script> export default { name: "LoginForm", data: () => ({ email: null, password: null }) }; </script>.
<v-form
Next add this field to our data object and set its initial value to true.
data: () => ({ email: null, password: null, isValid: true })
Next add a disabled prop to our Login button and set its value to be not isValid.
<v-btn color="primary" :disabled="!isValid">Login</v-btn>
Now the Login button cannot be clicked until both fields pass our validation. Next, create a RegistrationForm.vue file in the components folder and copy the contents of LoginForm.vue into it; then make the following changes.
<v-card-title>Registration Form</v-card-title>
Change the text of the button from Login to Register.
<v-btn color="primary" :disabled="!isValid">Register</v-btn>
The last thing you need to change is the name of the component. Change it from LoginForm to RegistrationForm:
name: "RegistrationForm"
Creating Multiple Validation Rules
To validate our fields we added an array with a single validation method. We are going to add multiple validation rules for both fields in our registration form. You can have as many entries as you want in your validation array.
For email we are going to require that it is provided and that it looks like a valid e-mail address. Our email rules look like this:
emailRules: [
  v => !!v || 'Email is required',
  v => /.+@.+\..+/.test(v) || 'E-mail must be valid'
]
Creating our Password Rules
For password we are going to require that it is provided, that it has a minimum length, and that it contains an uppercase character, a number and a special character. Our password rules look like this:
passwordRules: [
  v => !!v || 'Password is required',
  v => (v && v.length >= 5) || 'Password must have 5+ characters',
  v => /(?=.*[A-Z])/.test(v) || 'Must have one uppercase character',
  v => /(?=.*\d)/.test(v) || 'Must have one number',
  v => /([!@#$%])/.test(v) || 'Must have one special character [!@#$%]'
]
Adding Tab for Registration Form
To be able to see our new Registration Form we need to add a tab for it. Open up the App.vue file.
We need to add a new tab so update the v-tabs section to include an entry for Registration. It should look like this:
<v-tabs v-model="tab" centered> <v-tab>Login</v-tab> <v-tab>Registration</v-tab> </v-tabs>
Add a new v-tab-item that will display our RegistrationForm component. It should look like this:
<v-tabs-items v-model="tab">
  <v-tab-item>
    <LoginForm></LoginForm>
  </v-tab-item>
  <v-tab-item>
    <RegistrationForm></RegistrationForm>
  </v-tab-item>
</v-tabs-items>
Next we need to import our RegistrationForm.
import RegistrationForm from "./components/RegistrationForm";
Last we need to add our Registration form to our components.
components: {
  LoginForm,
  RegistrationForm
},
Testing our Registration Form
<v-text-field label="Email" v-model="email" :rules="emailRules" error-count="2" required ></v-text-field> <v-text-field label="Password" v-model="password" type="password" :rules="passwordRules" error-count="5" required ></v-text-field>
Now if you tab through both fields without typing in anything, you will see error messages like this.
In this tutorial, we’re gonna build a Vue.js with Vuex and Vue Router Application that supports JWT Authentication. I will show you:
Let’s explore together.
Contents
We will build a Vue application in that:
– Login Page & Profile Page (for successful Login):
– Navigation Bar for Admin account:
This is the full Vue JWT Authentication App demo (with form validation, checking signup username/email duplicates, and testing authorization with 3 roles: Admin, Moderator, User). In the video, we use Spring Boot for the back-end REST APIs.
Flow for User Registration and User Login
For JWT Authentication, we’re gonna call 2 endpoints:
api/auth/signupfor User Registration
api/auth/signinfor User Login
You can take a look at following flow to have an overview of Requests and Responses Vue Client will make or receive.
Vue Client must add a JWT to the HTTP Authorization Header before sending requests to protected resources.
Vue App Component Diagram with Vuex & Vue Router
Now look at the diagram below.
Let’s think about it.
– The
App component is a container with
Router. It gets app state from Vuex
store/auth. Then the navbar now can display based on the state.
App component also passes state to its child components.
–
Register components have form for submission data (with support of
vee-validate). We call Vuex store
dispatch() function to make login/register actions.
– Our Vuex actions call
auth.service methods which use
axios to make HTTP requests. We also store or get JWT from Browser Local Storage inside these methods.
–
Home component is public for all visitor.
–
Profile component get
user data from its parent component and display user information.
–
BoardUser,
BoardModerator,
BoardAdmin components will be displayed by Vuex state
user.roles. In these components, we use
user.service to get protected resources from API.
–
user.service uses
auth-header() helper function to add JWT to HTTP Authorization header.
auth-header() returns an object containing the JWT of the currently logged in user from Local Storage.
We will use these modules:
This is folders & files structure for our Vue application:
With the explanation in the diagram above, you can understand the project structure easily.
Setup Vue App modules
Run following command to install neccessary modules:
npm install vue-router
npm install vuex
npm install vee-validate@2.2.15
npm install axios
npm install bootstrap jquery popper.js
npm install @fortawesome/fontawesome-svg-core @fortawesome/free-solid-svg-icons @fortawesome/vue-fontawesome
After the installation is done, you can check
dependencies in package.json file.
"dependencies": { "@fortawesome/fontawesome-svg-core": "^1.2.25", "@fortawesome/free-solid-svg-icons": "^5.11.2", "@fortawesome/vue-fontawesome": "^0.1.7", "axios": "^0.19.0", "bootstrap": "^4.3.1", "core-js": "^2.6.5", "jquery": "^3.4.1", "popper.js": "^1.15.0", "vee-validate": "^2.2.15", "vue": "^2.6.10", "vue-router": "^3.0.3", "vuex": "^3.0.1" },
Open src/main.js, add code below:
import Vue from 'vue';
import App from './App.vue';
import { router } from './router';
import store from './store';
import 'bootstrap';
import 'bootstrap/dist/css/bootstrap.min.css';
import VeeValidate from 'vee-validate';
import { library } from '@fortawesome/fontawesome-svg-core';
import { FontAwesomeIcon } from '@fortawesome/vue-fontawesome';
import { faHome, faUser, faUserPlus, faSignInAlt, faSignOutAlt } from '@fortawesome/free-solid-svg-icons';

library.add(faHome, faUser, faUserPlus, faSignInAlt, faSignOutAlt);

Vue.config.productionTip = false;
Vue.use(VeeValidate);
Vue.component('font-awesome-icon', FontAwesomeIcon);

new Vue({
  router,
  store,
  render: h => h(App)
}).$mount('#app');
You can see that we import and apply in
Vue object:
–
store for Vuex (implemented later in src/store)
–
router for Vue Router (implemented later in src/router.js)
–
bootstrap with CSS
–
vee-validate
–
vue-fontawesome for icons (used later in
nav)
We create two services in src/services folder:
services
auth-header.js
auth.service.js (Authentication service)
user.service.js (Data service)
The service provides three important methods with the help of axios for HTTP requests & reponses:
login(): POST {username, password} & save
JWTto Local Storage
logout(): remove
JWTfrom Local Storage
register(): POST {username, email, password}
import axios from 'axios'; const API_URL = ''; class AuthService { login(user) { return axios .post(API_URL + 'signin', { username: user.username, password: user.password }) .then(this.handleResponse) .then(response => { if (response.data.accessToken) { localStorage.setItem('user', JSON.stringify(response.data)); } return response.data; }); } logout() { localStorage.removeItem('user'); } register(user) { return axios.post(API_URL + 'signup', { username: user.username, email: user.email, password: user.password }); } handleResponse(response) { if (response.status === 401) { this.logout(); location.reload(true); const error = response.data && response.data.message; return Promise.reject(error); } return Promise.resolve(response); } } export default new AuthService();
If
login request returns 401 status (Unauthorized), that means, JWT was expired or no longer valid, we will logout the user (remove JWT from Local Storage).
We also have methods for retrieving data from server. In the case we access protected resources, the HTTP request needs Authorization header.
Let’s create a helper function called
authHeader() inside auth-header.js:
export default function authHeader() {
  let user = JSON.parse(localStorage.getItem('user'));

  if (user && user.accessToken) {
    return { Authorization: 'Bearer ' + user.accessToken };
  } else {
    return {};
  }
}
It checks Local Storage for
user item.
If there is a logged in
user with
accessToken (JWT), return HTTP Authorization header. Otherwise, return an empty object.
Now we define a service for accessing data in user.service.js:
import axios from 'axios';
import authHeader from './auth-header';

const API_URL = '';

class UserService {
  getPublicContent() {
    return axios.get(API_URL + 'all');
  }

  getUserBoard() {
    return axios.get(API_URL + 'user', { headers: authHeader() });
  }

  getModeratorBoard() {
    return axios.get(API_URL + 'mod', { headers: authHeader() });
  }

  getAdminBoard() {
    return axios.get(API_URL + 'admin', { headers: authHeader() });
  }
}

export default new UserService();
You can see that we add a HTTP header with the help of
authHeader() function when requesting authorized resource.
We put Vuex module for authentication in src/store folder.
store
auth.module.js (authentication module)
index.js (Vuex Store that contains also modules)
Now open index.js file, import
auth.module to main Vuex Store here.
import Vue from 'vue';
import Vuex from 'vuex';
import { auth } from './auth.module';

Vue.use(Vuex);

export default new Vuex.Store({
  modules: {
    auth
  }
});
Then we start to define Vuex Authentication module that contains:
We use
AuthService which is defined above to make authentication requests.
auth.module.js
import AuthService from '../services/auth.service'; const user = JSON.parse(localStorage.getItem('user')); const initialState = user ? { status: { loggedIn: true }, user } : { status: {}, user: null }; export const auth = { namespaced: true, state: initialState, actions: { login({ commit }, user) { return AuthService.login(user).then( user => { commit('loginSuccess', user); return Promise.resolve(user); }, error => { commit('loginFailure'); return Promise.reject(error.response.data); } ); }, logout({ commit }) { AuthService.logout(); commit('logout'); }, register({ commit }, user) { return AuthService.register(user).then( response => { commit('registerSuccess'); return Promise.resolve(response.data); }, error => { commit('registerFailure'); return Promise.reject(error.response.data); } ); } }, mutations: { loginSuccess(state, user) { state.status = { loggedIn: true }; state.user = user; }, loginFailure(state) { state.status = {}; state.user = null; }, logout(state) { state.status = {}; state.user = null; }, registerSuccess(state) { state.status = {}; }, registerFailure(state) { state.status = {}; } } };
You can find more details about Vuex at the Vuex Guide.
Create Vue Authentication Components
To make code clear and easy to read, we define the
User model first.
Under src/models folder, create user.js like this.
export default class User {
  constructor(username, email, password) {
    this.username = username;
    this.email = email;
    this.password = password;
  }
}
Let’s continue with Authentication Components.
Instead of using axios or
AuthService directly, these Components should work with Vuex Store:
– getting status with
this.$store.state.auth
– making request by dispatching an action:
this.$store.dispatch()
views
Login.vue
Register.vue
Profile.vue
In the src/views folder, create a Login.vue file with the following code:
<template> <div class="col-md-12"> <div class="card card-container"> <img id="profile-img" src="//ssl.gstatic.com/accounts/ui/avatar_2x.png" class="profile-img-card" /> <form name="form" @submit. <div class="form-group"> <label for="username">Username</label> <input type="text" class="form-control" name="username" v- <div class="alert alert-danger" role="alert" v-Username is required!</div> </div> <div class="form-group"> <label for="password">Password</label> <input type="password" class="form-control" name="password" v- <div class="alert alert-danger" role="alert" v-Password is required!</div> </div> <div class="form-group"> <button class="btn btn-primary btn-block" : <span class="spinner-border spinner-border-sm" v-</span> <span>Login</span> </button> </div> <div class="form-group"> <div class="alert alert-danger" role="alert" v-{{message}}</div> </div> </form> </div> </div> </template> <script> import User from '../models/user'; export default { name: 'login', computed: { loggedIn() { return this.$store.state.auth.status.loggedIn; } }, data() { return { user: new User('', ''), loading: false, message: '' }; }, mounted() { if (this.loggedIn) { this.$router.push('/profile'); } }, methods: { handleLogin() { this.loading = true; this.$validator.validateAll(); if (this.errors.any()) { this.loading = false; return; } if (this.user.username && this.user.password) { this.$store.dispatch('auth/login', this.user).then( () => { this.$router.push('/profile'); }, error => { this.loading = false; this.message = error.message; } ); } } } }; < has a Form with
username & password. We use VeeValidate 2.x to validate input before submitting the form. If there is an invalid field, we show the error message.
We check the user's logged-in status using the Vuex Store:
this.$store.state.auth.status.loggedIn. If the status is
true, we use Vue Router to direct the user to the Profile Page:
mounted() { if (this.loggedIn) { this.$router.push('/profile'); } },
In the
handleLogin() function, we dispatch
'auth/login' Action to Vuex Store. If the login is successful, go to Profile Page, otherwise, show error message.
The Register page is similar to the Login Page.
For form validation, we have some more details:
username: required|min:3|max:20
password: required|min:6|max:40
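As a rough, hypothetical sketch (not necessarily the exact markup of the original tutorial), a VeeValidate 2.x rule is typically attached to an input like this:
<input type="text" class="form-control" name="username" v-model="user.username" v-validate="'required|min:3|max:20'" />
<div class="alert-danger" v-if="errors.has('username')">{{ errors.first('username') }}</div>
Here v-validate takes the rule string, while errors.has() and errors.first() come from VeeValidate; the field name and rules mirror the list above.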
For form submission, we dispatch
'auth/register' Vuex Action.
src/views/Register.vue
<template> <div class="col-md-12"> <div class="card card-container"> <img id="profile-img" src="//ssl.gstatic.com/accounts/ui/avatar_2x.png" class="profile-img-card" /> <form name="form" @submit. <div v- <div class="form-group"> <label for="username">Username</label> <input type="text" class="form-control" name="username" v- <div class="alert-danger" v-{{errors.first('username')}}</div> </div> <div class="form-group"> <label for="email">Email</label> <input type="email" class="form-control" name="email" v- <div class="alert-danger" v-{{errors.first('email')}}</div> </div> <div class="form-group"> <label for="password">Password</label> <input type="password" class="form-control" name="password" v- <div class="alert-danger" v-{{errors.first('password')}}</div> </div> <div class="form-group"> <button class="btn btn-primary btn-block">Sign Up</button> </div> </div> </form> <div class="alert" :{{message}}</div> </div> </div> </template> <script> import User from '../models/user'; export default { name: 'register', computed: { loggedIn() { return this.$store.state.auth.status.loggedIn; } }, data() { return { user: new User('', '', ''), submitted: false, successful: false, message: '' }; }, mounted() { if (this.loggedIn) { this.$router.push('/profile'); } }, methods: { handleRegister() { this.message = ''; this.submitted = true; this.$validator.validate().then(valid => { if (valid) { this.$store.dispatch('auth/register', this.user).then( data => { this.message = data.message; this.successful = true; }, error => { this.message = error.message; this.successful = false; } ); } }); } } }; < gets current User from Vuex Store and show information. If the User is not logged in, it directs to Login Page.
src/views/Profile.vue
<template> <div class="container"> <header class="jumbotron"> <h3> <strong>{{currentUser.username}}</strong> Profile </h3> </header> <p> <strong>Token:</strong> {{currentUser.accessToken.substring(0, 20)}} ... {{currentUser.accessToken.substr(currentUser.accessToken.length - 20)}} </p> <p> <strong>Id:</strong> {{currentUser.id}} </p> <p> <strong>Email:</strong> {{currentUser.email}} </p> <strong>Authorities:</strong> <ul> <li v-{{role}}</li> </ul> </div> </template> <script> export default { name: 'profile', computed: { currentUser() { return this.$store.state.auth.user; } }, mounted() { if (!this.currentUser) { this.$router.push('/login'); } } }; </script>
Create Vue Components for accessing Resources
These components will use
UserService to request data.
views
Home.vue
BoardAdmin.vue
BoardModerator.vue
BoardUser.vue
This is a public page.
src/views/Home.vue
<template> <div class="container"> <header class="jumbotron"> <h3>{{content}}</h3> </header> </div> </template> <script> import UserService from '../services/user.service'; export default { name: 'home', data() { return { content: '' }; }, mounted() { UserService.getPublicContent().then( response => { this.content = response.data; }, error => { this.content = error.response.data.message; } ); } }; </script>
We have 3 pages for accessing protected data:
UserService.getUserBoard()
UserService.getModeratorBoard()
UserService.getAdminBoard()
This is one example; the other board pages are similar to this one.
src/views/BoardUser.vue
<template> <div class="container"> <header class="jumbotron"> <h3>{{content}}</h3> </header> </div> </template> <script> import UserService from '../services/user.service'; export default { name: 'user', data() { return { content: '' }; }, mounted() { UserService.getUserBoard().then( response => { this.content = response.data; }, error => { this.content = error.response.data.message; } ); } }; </script>
Define Routes for Vue Router
Now we define all routes for our Vue Application.
src/router.js
import Vue from 'vue'; import Router from 'vue-router'; import Home from './views/Home.vue'; import Login from './views/Login.vue'; import Register from './views/Register.vue'; Vue.use(Router); export const router = new Router({ mode: 'history', routes: [ { path: '/', name: 'home', component: Home }, { path: '/home', component: Home }, { path: '/login', component: Login }, { path: '/register', component: Register }, { path: '/profile', name: 'profile', // lazy-loaded component: () => import('./views/Profile.vue') }, { path: '/admin', name: 'admin', // lazy-loaded component: () => import('./views/BoardAdmin.vue') }, { path: '/mod', name: 'moderator', // lazy-loaded component: () => import('./views/BoardModerator.vue') }, { path: '/user', name: 'user', // lazy-loaded component: () => import('./views/BoardUser.vue') } ] });
Add Navigation Bar to Vue App
This is the root container for our application and it contains the navigation bar. We will add
router-view here.
src/App.vue
<template> <div id="app"> <nav class="navbar navbar-expand navbar-dark bg-dark"> <a href="#" class="navbar-brand">bezKoder</a> <div class="navbar-nav mr-auto"> <li class="nav-item"> <a href="/home" class="nav-link"> <font-awesome-icon Home </a> </li> <li class="nav-item" v- <a href="/admin" class="nav-link">Admin Board</a> </li> <li class="nav-item" v- <a href="/mod" class="nav-link">Moderator Board</a> </li> <li class="nav-item"> <a href="/user" class="nav-link" v-User</a> </li> </div> <div class="navbar-nav ml-auto" v- <li class="nav-item"> <a href="/register" class="nav-link"> <font-awesome-icon Sign Up </a> </li> <li class="nav-item"> <a href="/login" class="nav-link"> <font-awesome-icon Login </a> </li> </div> <div class="navbar-nav ml-auto" v- <li class="nav-item"> <a href="/profile" class="nav-link"> <font-awesome-icon {{currentUser.username}} </a> </li> <li class="nav-item"> <a href <font-awesome-icon LogOut </a> </li> </div> </nav> <div class="container"> <router-view /> </div> </div> </template> <script> export default { computed: { currentUser() { return this.$store.state.auth.user; }, showAdminBoard() { if (this.currentUser) { return this.currentUser.roles.includes('ROLE_ADMIN'); } return false; }, showModeratorBoard() { if (this.currentUser) { return this.currentUser.roles.includes('ROLE_MODERATOR'); } return false; } }, methods: { logOut() { this.$store.dispatch('auth/logout'); this.$router.push('/login'); } } }; </script>
Our navbar looks more professional when using
font-awesome-icon.
We also make the navbar change dynamically according to the current User's roles, which are retrieved from the Vuex Store state.
If you want to check the authorization status every time a navigation action is triggered, just add
router.beforeEach() at the end of src/router.js like this:
router.beforeEach((to, from, next) => { const publicPages = ['/login', '/home']; const authRequired = !publicPages.includes(to.path); const loggedIn = localStorage.getItem('user'); // try to access a restricted page + not logged in if (authRequired && !loggedIn) { return next('/login'); } next(); });
Conclusion
Congratulations!
Today we've done so many interesting things. I hope you understand the overall layers of our Vue application and can apply them in your own project with ease. Now you can build a front-end app that supports JWT Authentication with Vue.js, Vuex and Vue Router.
Happy learning, see you again!
|
https://morioh.com/p/839bcbf099ad
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
This example shows how to write ODE files for nonlinear grey-box models as MATLAB and C MEX files.
Grey box modeling is conceptually different to black box modeling in that it involves a more comprehensive modeling step. For IDNLGREY (the nonlinear grey-box model object; the nonlinear counterpart of IDGREY), this step consists of creating an ODE file, also called a "model file". The ODE file specifies the right-hand sides of the state and the output equations typically arrived at through physical first principle modeling. In this example we will concentrate on general aspects of implementing it as a MATLAB file or a C MEX file.
IDNLGREY supports estimation of parameters and initial states in nonlinear model structures written on the following explicit state-space form (so-called output-error, OE, form, named so as the noise e(t) only affects the output of the model structure in an additive manner):
xn(t) = F(t, x(t), u(t), p1, ..., pNpo); x(0) = X0;
y(t) = H(t, x(t), u(t), p1, ..., pNpo) + e(t)
For discrete-time structures, xn(t) = x(T+Ts) with Ts being the sample time, and for continuous-time structures xn(t) = d/dt x(t). In addition, F(.) and H(.) are arbitrary linear or nonlinear functions with Nx (number of states) and Ny (number of outputs) components, respectively. Any of the model parameters p1, ..., pNpo as well as the initial state vector X(0) can be estimated. Worth stressing is that
time-series modeling, i.e., modeling without an exogenous input signal u(t), and
static modeling, i.e., modeling without any states x(t)
are two special cases that are supported by IDNLGREY. (See the tutorials idnlgreydemo3 and idnlgreydemo5 for examples of these two modeling categories).
The first IDNLGREY modeling step to perform is always to implement a MATLAB or C MEX model file specifying how to update the states and compute the outputs. More to the point, the user must write a model file, MODFILENAME.m or MODFILENAME.c, defined with the following input and output arguments (notice that this form is required for both MATLAB and C MEX type of model files)
[dx, y] = MODFILENAME(t, x, u, p1, p2, ..., pNpo, FileArgument)
MODFILENAME can here be any user chosen file name of a MATLAB or C MEX-file, e.g., see twotanks_m.m, pendulum_c.c etc. This file should be defined to return two outputs:
dx: the right-hand side(s) of the state-space equation(s) (a column vector with Nx real entries; [] for static models)
y: the right-hand side(s) of the output equation(s) (a column vector with Ny real entries)
and it should take 3+Npo(+1) input arguments specified as follows:
t: the current time
x: the state vector at time t ([] for static models)
u: the input vector at time t ([] for time-series models)
p1, p2, ..., pNpo: the individual parameters (which can be real scalars, column vectors or 2-dimensional matrices); Npo is here the number of parameter objects, which for models with scalar parameters coincide with the number of parameters Np
FileArgument: optional inputs to the model file
In the onward discussion we will focus on writing model using either MATLAB language or using C-MEX files. However, IDNLGREY also supports P-files (protected MATLAB files obtained using the MATLAB command "pcode") and function handles. In fact, it is not only possible to use C MEX model files but also Fortran MEX files. Consult the MATLAB documentation on External Interfaces for more information about the latter.
What kind of model file should be implemented? The answer to this question really depends on the use of the model.
Implementation using MATLAB language (resulting in a *.m file) has some distinct advantages. Firstly, one can avoid time-consuming, low-level programming and concentrate more on the modeling aspects. Secondly, any function available within MATLAB and its toolboxes can be used directly in the model files. Thirdly, such files will be smaller and, without any modifications, all built-in MATLAB error checking will automatically be enforced. In addition, this is obtained without any code compilation.
C MEX modeling is much more involved and requires basic knowledge about the C programming language. The main advantage with C MEX model files is the improved execution speed. Our general advice is to pursue C MEX modeling when the model is going to be used many times, when large data sets are employed, and/or when the model structure contains a lot of computations. It is often worthwhile to start with using a MATLAB file and later on turn to the C MEX counterpart.
With this said, let us next move on to MATLAB file modeling and use a nonlinear second order model structure, describing a two tank system, as an example. See idnlgreydemo2 for the modeling details. The contents of twotanks_m.m are as follows.
type twotanks_m.m
function [dx, y] = twotanks_m(t, x, u, A1, k, a1, g, A2, a2, varargin) %TWOTANKS_M A two tank system. % Copyright 2005-2006 The MathWorks, Inc. % Output equation. y = x(2); % Water level, lower tank. % State equations. dx = [1/A1*(k*u(1)-a1*sqrt(2*g*x(1))); ... % Water level, upper tank. 1/A2*(a1*sqrt(2*g*x(1))-a2*sqrt(2*g*x(2))) ... % Water level, lower tank. ];
In the function header, we here find the required t, x, and u input arguments followed by the six scalar model parameters, A1, k, a1, g, A2 and a2. In the MATLAB file case, the last input argument should always be varargin to support the passing of an optional model file input argument, FileArgument. In an IDNLGREY model object, FileArgument is stored as a cell array that might hold any kind of data. The first element of FileArgument is here accessed through varargin{1}{1}.
The variables and parameters are referred in the standard MATLAB way. The first state is x(1) and the second x(2), the input is u(1) (or just u in case it is scalar), and the scalar parameters are simply accessed through their names (A1, k, a1, g, A2 and a2). Individual elements of vector and matrix parameters are accessed as P(i) (element i of a vector parameter named P) and as P(i, j) (element at row i and column j of a matrix parameter named P), respectively.
Writing a C MEX model file is more involved than writing a MATLAB model file. To simplify this step, it is recommended that the available IDNLGREY C MEX model template is copied to MODFILENAME.c. This template contains skeleton source code as well as detailed instructions on how to customize the code for a particular application. The location of the template file is found by typing the following at the MATLAB command prompt.
fullfile(matlabroot, 'toolbox', 'ident', 'nlident', 'IDNLGREY_MODEL_TEMPLATE.c')
For the two tank example, this template was copied to twotanks_c.c. After some initial modifications and configurations (described below) the state and output equations were entered, thereby resulting in the following C MEX source code.
type twotanks_c.c
/* Copyright 2005-2015 The MathWorks, Inc. */ /* Written by Peter Lindskog. */ /* Include libraries. */ #include "mex.h" #include <math.h> /* Specify the number of outputs here. */ #define NY 1 /* State equations. */ void compute_dx(double *dx, double t, double *x, double *u, double **p, const mxArray *auxvar) { /*. */ /* x[0]: Water level, upper tank. */ /* x[1]: Water level, lower tank. */ dx[0] = 1/A1[0]*(k[0]*u[0]-a1[0]*sqrt(2*g[0]*x[0])); dx[1] = 1/A2[0]*(a1[0]*sqrt(2*g[0]*x[0])-a2[0]*sqrt(2*g[0]*x[1])); } /* Output equation. */ void compute_y(double *y, double t, double *x, double *u, double **p, const mxArray *auxvar) { /* y[0]: Water level, lower tank. */ y[0] = x[1]; } /*----------------------------------------------------------------------- * DO NOT MODIFY THE CODE BELOW UNLESS YOU NEED TO PASS ADDITIONAL INFORMATION TO COMPUTE_DX AND COMPUTE_Y To add extra arguments to compute_dx and compute_y (e.g., size information), modify the definitions above and calls below. *-----------------------------------------------------------------------*/ void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) { /* Declaration of input and output arguments. */ double *x, *u, **p, *dx, *y, *t; int i, np; size_t nu, nx; const mxArray *auxvar = NULL; /* Cell array of additional data. */ if (nrhs < 3) { mexErrMsgIdAndTxt("IDNLGREY:ODE_FILE:InvalidSyntax", "At least 3 inputs expected (t, u, x)."); } /* Determine if auxiliary variables were passed as last input. */ if ((nrhs > 3) && (mxIsCell(prhs[nrhs-1]))) { /* Auxiliary variables were passed as input. */ auxvar = prhs[nrhs-1]; np = nrhs - 4; /* Number of parameters (could be 0). */ } else { /* Auxiliary variables were not passed. */ np = nrhs - 3; /* Number of parameters. */ } /* Determine number of inputs and states. */ nx = mxGetNumberOfElements(prhs[1]); /* Number of states. */ nu = mxGetNumberOfElements(prhs[2]); /* Number of inputs. */ /* Obtain double data pointers from mxArrays. */ t = mxGetPr(prhs[0]); /* Current time value (scalar). */ x = mxGetPr(prhs[1]); /* States at time t. */ u = mxGetPr(prhs[2]); /* Inputs at time t. */ p = mxCalloc(np, sizeof(double*)); for (i = 0; i < np; i++) { p[i] = mxGetPr(prhs[3+i]); /* Parameter arrays. */ } /* Create matrix for the return arguments. */ plhs[0] = mxCreateDoubleMatrix(nx, 1, mxREAL); plhs[1] = mxCreateDoubleMatrix(NY, 1, mxREAL); dx = mxGetPr(plhs[0]); /* State derivative values. */ y = mxGetPr(plhs[1]); /* Output values. */ /* Call the state and output update functions. Note: You may also pass other inputs that you might need, such as number of states (nx) and number of parameters (np). You may also omit unused inputs (such as auxvar). For example, you may want to use orders nx and nu, but not time (t) or auxiliary data (auxvar). You may write these functions as: compute_dx(dx, nx, nu, x, u, p); compute_y(y, nx, nu, x, u, p); */ /* Call function for state derivative update. */ compute_dx(dx, t[0], x, u, p, auxvar); /* Call function for output update. */ compute_y(y, t[0], x, u, p, auxvar); /* Clean up. */ mxFree(p); }
Let us go through the contents of this file. As a first observation, we can divide the work of writing a C MEX model file into four separate sub-steps, the last one being optional:
Inclusion of C-libraries and definitions of the number of outputs.
Writing the function computing the right-hand side(s) of the state equation(s), compute_dx.
Writing the function computing the right-hand side(s) of the output equation(s), compute_y.
Optionally updating the main interface function which includes basic error checking functionality, code for creating and handling input and output arguments, and calls to compute_dx and compute_y.
Before we address these sub-steps in more detail, let us briefly comment upon a couple of general features of the C programming language.
High-precision variables (all inputs, states, outputs and parameters of an IDNLGREY object) should be defined to be of the data type "double".
The unary * operator placed just in front of the variable or parameter names is a so-called dereferencing operator. The C-declaration "double *A1;" specifies that A1 is a pointer to a double variable. The pointer construct is a concept within C that is not always that easy to comprehend. Fortunately, if the declarations of the output/input variables of compute_y and compute_dx are not changed and all unpacked model parameters are internally declared with a *, then there is no need to know more about pointers from an IDNLGREY modeling point of view.
Both compute_y and compute_dx are first declared and implemented, where after they are called in the main interface function. In the declaration, the keyword "void" states explicitly that no value is to be returned.
For further details of the C programming language we refer to the book
B.W. Kernighan and D. Ritchie. The C Programming Language, 2nd
edition, Prentice Hall, 1988.
In the first sub-step we first include the C-libraries "mex.h" (required) and "math.h" (required for more advanced mathematics). The number of outputs is also declared per modeling file using a standard C-define:
/* Include libraries. */
#include "mex.h"
#include "math.h"
/* Specify the number of outputs here. */
#define NY 1
If desired, one may also include more C-libraries than the ones above.
The "math.h" library must be included whenever any state or output equation contains more advanced mathematics, like trigonometric and square root functions. Below is a selected list of functions included in "math.h" and the counterpart found within MATLAB:
C-function MATLAB function
========================================
sin, cos, tan sin, cos, tan
asin, acos, atan asin, acos, atan
sinh, cosh, tanh sinh, cosh, tanh
exp, log, log10 exp, log, log10
pow(x, y) x^y
sqrt sqrt
fabs abs
Notice that the MATLAB functions are more versatile than the corresponding C-functions, e.g., the former handle complex numbers, while the latter do not.
Next, in the file we find the functions for updating the states, compute_dx, and the output, compute_y. Both these functions hold argument lists, with the output to be computed (dx or y) at position 1, after which follows all variables and parameters required to compute the right-hand side(s) of the state and the output equations, respectively.
All parameters are contained in the parameter array p. The first step in compute_dx and compute_y is to unpack and name the parameters to be used in the subsequent equations. In twotanks_c.c, compute_dx declares six parameter variables whose values are determined accordingly:
/* Retrieve model parameters. */ double *A1, *k, *a1, *g, *A2, *a2; A1 = p[0]; k = p[1]; a1 = p[2]; g = p[3]; A2 = p[4]; a2 = p[5];
compute_y on the other hand does not require any parameter for computing the output, and hence no model parameter is retrieved.
As is the case in C, the first element of an array is stored at position 0. Hence, dx[0] in C corresponds to dx(1) in MATLAB (or just dx in case it is a scalar), the input u[0] corresponds to u (or u(1)), the parameter A1[0] corresponds to A1, and so on.
In the example above, we are only using scalar parameters, in which case the overall number of parameters Np equals the number of parameter objects Npo. If any vector or matrix parameter is included in the model, then Npo < Np.
The scalar parameters are referenced as P[0] (P(1) or just P in a MATLAB file) and the i:th vector element as P[i-1] (P(i) in a MATLAB file). The matrices passed to a C MEX model file are different in the sense that the columns are stacked upon each other in the obvious order. Hence, if P is a 2-by-2 matrix, then P(1, 1) is referred as P[0], P(2, 1) as P[1], P(1, 2) as P[2] and P(2, 2) as P[3]. See "Tutorials on Nonlinear Grey Box Identification: An Industrial Three Degrees of Freedom Robot : C MEX-File Modeling of MIMO System Using Vector/Matrix Parameters", idnlgreydemo8, for an example where scalar, vector and matrix parameters are used.
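As a small, self-contained illustration of that column-major flattening (the 2-by-2 matrix parameter P and its position p[0] are invented for the example, not part of the two-tank model):
/* Hypothetical 2-by-2 matrix parameter P passed as the first entry of p. */
/* MATLAB stores matrices column by column, so the flattened order is    */
/* P(1,1), P(2,1), P(1,2), P(2,2).                                       */
double *P = p[0];
double p11 = P[0]; /* P(1, 1) */
double p21 = P[1]; /* P(2, 1) */
double p12 = P[2]; /* P(1, 2) */
double p22 = P[3]; /* P(2, 2) */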
The state and output update functions may also include other computations than just retrieving parameters and computing right-hand side expressions. For execution speed, one might, e.g., declare and use intermediate variables, whose values are used several times in the coming expressions. The robot tutorial mentioned above, idnlgreydemo8, is a good example in this respect.
compute_dx and compute_y are also able to handle an optional FileArgument. The FileArgument data is passed to these functions in the auxvar variable, so that the first component of FileArgument (a cell array) can be obtained through
mxArray* auxvar1 = mxGetCell(auxvar, 0);
Here, mxArray is a MATLAB-defined data type that enables interchange of data between the C MEX-file and MATLAB. In turn, auxvar1 may contain any data. The parsing, checking and use of auxvar1 must be handled solely within these functions, where it is up to the model file designer to implement this functionality. Let us here just refer to the MATLAB documentation on External Interfaces for more information about functions that operate on mxArrays. An example of how to use optional C MEX model file arguments is provided in idnlgreydemo6, "Tutorials on Nonlinear Grey Box Identification: A Signal Transmission System : C MEX-File Modeling Using Optional Input Arguments".
The main interface function should almost always have the same content and for most applications no modification whatsoever is needed. In principle, the only part that might be considered for changes is where the calls to compute_dx and compute_y are made. For static systems, one can leave out the call to compute_dx. In other situations, it might be desired to only pass the variables and parameters referred in the state and output equations. For example, in the output equation of the two tank system, where only one state is used, one could very well shorten the input argument list to
void compute_y(double *y, double *x)
and call compute_y in the main interface function as
compute_y(y, x);
The input argument lists of compute_dx and compute_y might also be extended to include further variables inferred in the interface function. The following integer variables are computed and might therefore be passed on: nu (the number of inputs), nx (the number of states), and np (here the number of parameter objects). As an example, nx is passed to compute_y in the model investigated in the tutorial idnlgreydemo6.
The completed C MEX model file must be compiled before it can be used for IDNLGREY modeling. The compilation can readily be done from the MATLAB command line as
mex MODFILENAME.c
Notice that the mex-command must be configured before it is used for the very first time. This is also achieved from the MATLAB command line via
mex -setup
With an execution ready model file, it is straightforward to create IDNLGREY model objects for which simulations, parameter estimations, and so forth can be carried out. We exemplify this by creating two different IDNLGREY model objects for describing the two tank system, one using the model file written in MATLAB and one using the C MEX file detailed above (notice here that the C MEX model file has already been compiled).
Order = [1 1 2]; % Model orders [ny nu nx]. Parameters = [0.5; 0.003; 0.019; ... 9.81; 0.25; 0.016]; % Initial parameter vector. InitialStates = [0; 0.1]; % Initial values of initial states. nlgr_m = idnlgrey('twotanks_m', Order, Parameters, InitialStates, 0)
nlgr_m = Continuous-time nonlinear grey-box model defined by 'twotanks_m' (MATLAB.
nlgr_cmex = idnlgrey('twotanks_c', Order, Parameters, InitialStates, 0)
nlgr_cmex = Continuous-time nonlinear grey-box model defined by 'twotanks_c' (MEX.
In this tutorial we have discussed how to write IDNLGREY MATLAB and C MEX model files. We finally conclude the presentation by listing the currently available IDNLGREY model files and the tutorial/case study where they are being used. To simplify further comparisons, we list both the MATLAB (naming convention FILENAME_m.m) and the C MEX model files (naming convention FILENAME_c.c), and indicate in the tutorial column which type of modeling approach that is being employed in the tutorial or case study.
Tutorial/Case study MATLAB file C MEX-file
======================================================================
idnlgreydemo1 (MATLAB) dcmotor_m.m dcmotor_c.c
idnlgreydemo2 (C MEX) twotanks_m.m twotanks_c.c
idnlgreydemo3 (MATLAB) preys_m.m preys_c.c
(C MEX) predprey1_m.m predprey1_c.c
(C MEX) predprey2_m.m predprey2_c.c
idnlgreydemo4 (MATLAB) narendrali_m.m narendrali_c.c
idnlgreydemo5 (MATLAB) friction_m.m friction_c.c
idnlgreydemo6 (C MEX) signaltransmission_m.m signaltransmission_c.c
idnlgreydemo7 (C MEX) twobodies_m.m twobodies_c.c
idnlgreydemo8 (C MEX) robot_m.m robot_c.c
idnlgreydemo9 (MATLAB) cstr_m.m cstr_c.c
idnlgreydemo10 (MATLAB) pendulum_m.m pendulum_c.c
idnlgreydemo11 (C MEX) vehicle_m.m vehicle_c.c
idnlgreydemo12 (C MEX) aero_m.m aero_c.c
idnlgreydemo13 (C MEX) robotarm_m.m robotarm_c.c
The contents of these model files can be displayed in the MATLAB command window through the command "type FILENAME_m.m" or "type FILENAME_c.c". All model files are found in the directory returned by the following MATLAB command.
fullfile(matlabroot, 'toolbox', 'ident', 'iddemos', 'examples')
|
https://www.mathworks.com/help/ident/examples/creating-idnlgrey-model-files.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Map a JSON file to ENUM in Java
August 16, 2019
More about Le Gruyère AOP on MySwitzerland.com
I recently began a new project for a returning client with my good old friend Java. One of my first tasks was to implement a new feature whose goal was, notably, to make the application globally configurable using a JSON property file.
I found the outcome of the solution relatively handy and therefore I thought I would share it in a new blog post. Moreover, as I had never written any Java blog post so far, I found it quite challenging and interesting 😉
Introduction
In this article we are going to:
- Create a new project
- Read a JSON file and property
- Create an ENUM
- Map the properties to a generic ENUM
Note: If you already have a project, you can obviously skip the first chapter, whose goal is to create a project. Likewise, if you would rather not use Maven, skip it too and include the library we are going to use in whatever way your setup requires.
Create a new project
To get started, we are going to create a new project using the Maven starter kit. For that purpose run the following command in a terminal:
$ mvn archetype:generate -DgroupId=com.jsontoenum.app -DartifactId=json-to-enum -DarchetypeArtifactId=maven-archetype-quickstart -DarchetypeVersion=1.4 -DinteractiveMode=false
If everything went well, you should now be able to jump into the directory in order to compile the project without errors:
$ cd json-to-enum/ && mvn package
Read a JSON file and property
At first I implemented a quick custom solution, but I wasn't at all happy enough with the outcome. That's why I looked online for a ready-to-go solution and found the excellent open source library com.typesafe.config. It handles all the work of reading JSON files and accessing their properties, has no dependencies and is even compatible with Java 8 🚀
For once at least, googling a solution was an excellent idea 😉
To use the library, let's add it as a new dependency to our
pom.xml:
<dependency> <groupId>com.typesafe</groupId> <artifactId>config</artifactId> <version>1.3.4</version> </dependency>
Moreover we should also add the following
<resources/> and
<plugins/> goals to our
<build/> in order to be able to load the JSON file (we are about to create) and to package the dependency we just referenced within our JAR file.
<resources> <resource> <directory>src/resources</directory> </resource> </resources> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <version>3.2.0</version> <executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> </execution> </executions> </plugin> </plugins>
Our build is finally set up. We can now move on to the next step and create our JSON property file. By default, the configuration library will try to load a file called
application.json as the source for the properties. Therefore, let's keep it simple and use that predefined name to create this new file, in a new folder
src/resources/ , with the following JSON content:
{ "swiss": { "cheese": "gruyere" } }
Everything is set, finally time to code 😇 We could modify our application (
App.java in folder
src/main/java/com/jsontoenum/app) in order to init and load the properties from the JSON file and print out the name of our favorite type of cheese 🧀
package com.jsontoenum.app; import com.typesafe.config.Config; import com.typesafe.config.ConfigFactory; public class App { public static void main(String[] args) { // Load and init the configuration library final Config conf = ConfigFactory.load(); // Get the value of the JSON property final String cheese = conf.getString("swiss.cheese"); System.out.println(String.format("I like %s 🧀", cheese)); } }
Let’s compile everything and run our project using the following command line in a terminal to try out our implementation:
$ mvn package && java -cp target/json-to-enum-1.0-SNAPSHOT.jar com.jsontoenum.app.App
If everything went fine, your console output should display the following at the end of the build output:
[INFO] ------------------------------------------------------------- [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------- [INFO] Total time: 2.179 s [INFO] Finished at: 2019-08-16T15:04:35+02:00 [INFO] ------------------------------------------------------------- I like gruyere 🧀
Create an ENUM
Even if it was already cool to be able to read JSON properties, I quickly realized, while developing my assigned feature, that being able to map the properties to an ENUM would kind of be mandatory if the application is supposed to behave differently according to these. Moreover, Switzerland produces more than one kind of cheese 😉
As next step we could therefore create an ENUM in folder
src/main/java/com/jsontoenum/app/ which could list a couple of cheese from the French-speaking part of the country:
package com.jsontoenum.app; public enum Cheese { GRUYERE, TETE_DE_MOINE, CHAUX_DABEL, RACLETTE, VACHERIN, TOMME }
Map the properties to a generic ENUM
In this article we are using cheese for demo purposes but obviously, Switzerland exports many other products like chocolates or watches. Likewise, the application I'm working on doesn't contain only a single ENUM. That's why we are not just going to add a method that parses the properties to a single type of ENUM, but rather declare it as generic in our application (
App.java ):
private static <E extends Enum<E>> E getEnumProperty(final Config conf, final String key, final Class<E> myClass) { // If no config loaded if (conf == null) { return null; } // If the config doesn't contains the key if (!conf.hasPath(key)) { return null; } // As previously, load the key value final String keyValue = conf.getString(key); // Does the property has a value if (keyValue == null || keyValue.isEmpty()) { return null; } // Map the property to the ENUM return Enum.valueOf(myClass, keyValue.toUpperCase()); }
Finally, as a last test, let's enhance our
main method by loading the ENUM and checking it to display whether the cheese is our favorite one or not:
public static void main(String[] args) { // Load and init the configuration library final Config conf = ConfigFactory.load(); // Get the value of the JSON property final Cheese cheese = getEnumProperty(conf, "swiss.cheese", Cheese.class); if (Cheese.GRUYERE.equals(cheese)) { System.out.println(String.format("I really like %s 🧀", cheese)); } else { System.out.println(String.format("%s is ok", cheese)); } }
Voilà, that’s it, nothing more, nothing less 🎉
You can try to run our project using the previous command line, and if everything goes according to plan, the output should look like the following:
[INFO] ------------------------------------------------------------- [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------- [INFO] Total time: 2.437 s [INFO] Finished at: 2019-08-16T15:43:42+02:00 [INFO] ------------------------------------------------------------- I really like GRUYERE 🧀
Cherry on the cake 🍒🎂
If you want to spare yourself the hassle of creating your own project and copying the above code, I have published the project online, so be my guest 😇
$ git clone
To infinity and beyond 🚀
David
|
https://daviddalbusco.com/blog/map-a-json-file-to-enum-in-java/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Catch warnings and errors in rpy2
To get the warnings as an rpy2 object, you can do:
from rpy2.robjects.packages import importr
base = importr('base')
# do things that generate R warnings
base.warnings()
With tryCatch you can handle errors as you want, for example by setting a flag such as an.error.occured inside the error handler.
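The usual R idiom for that looks roughly like this (the guarded expression is only a placeholder):
an.error.occured <- FALSE
tryCatch({
  stop("something went wrong")   # placeholder for the R call you want to guard
}, error = function(e) {
  message("caught: ", conditionMessage(e))
  an.error.occured <<- TRUE      # <<- updates the flag in the enclosing environment
})
print(an.error.occured)          # TRUE
From rpy2 you would typically run such a block through robjects.r(...), or simply catch the RRuntimeError that rpy2 raises on the Python side.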
|
https://www.edureka.co/community/29913/catch-warnings-and-errors-in-rpy2?show=29914
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
When you’ve got more than one person working on a complex bit of software, you generally need a specification (spec) for new features. The bigger the team, the more you need a spec. The more complex a feature, the more you need a spec.
According to stereotypes, big firms usually lean too hard on specs, to the point where they might spend more time writing the specs than coding the feature:
“The button will be ten pixels from the left margin and will conform to the usability guidelines sheet 201-a. It will be labeled “Join Game” and will – after a confirmation popup as outlined in the interface framework – begin polling the designated server in request for an open slot. If no slot is found, then the fallback behavior […]”
Meanwhile, little indie houses have a slightly less formal approach:
Bruce: Can you add a button that will let players join the game?
Barbara: Sure.
Stuff gets done either way, but sometimes indies are a little slapdash and sometimes big firms are a little too bureaucratic. On Good Robot, our spec is usually a sentence or two in the shared Google doc that we use as a universal to-do list.
But this week I ran into something that I realized was too complicated for that. It was one of those features that sounded obvious and simple in the meeting, but then became mysterious when I sat down to write the damn thing. (This is the point of a spec: To reveal the unknowns BEFORE coding begins. This is important in big firms, since once you’ve begun coding you’ve ALREADY been allotted a fixed time budget, which means this is a bad time to begin figuring out what you need to do.)
So I proposed a spec. And then I thought I’d post it here, just to show what the non-exciting bits of game development look like. This is pretty informal as these things go, but it should give you an idea of what needs to be hammered out before you start coding.
Also note that I’ve replaced the names of the stuff in our game with DOOM references to make it easier to follow.
Drop Tables Discussion
“Add a drop table” has been lurking on my to-do list for several weeks now. This is actually a complex enough topic that we need to define a spec. It turns out this is a bigger feature than we might have anticipated in meetings. Every time I’ve tried to start on it, I ended up with more questions on what we needed and what I was supposed to be doing.
Abstract:
We will be adding a new file called drops.ini or drops.xml, which will allow artists to design things to be dropped in the game. (Things that the player can pick up.) We want the system to be robust enough that it feels unpredictable and varied on repeated playthroughs, yet controllable enough that the game doesn’t feel like random chaos. Specifically, we want to be able to ensure things like, “Player will be offered at least 1 half-decent weapon before reaching point N.”
- Right now there are two main things that need to result in drops. We should go ahead and assume that more “causes” might be added later.
- When you destroy a robot.
- When you destroy a machine. (Boxes are “machines”, in the parlance of the game’s internal workings.)
- A drop can deliver a number of things. Given our discussions so far, It seems like it should be possible for a single drop to create multiple things:
- Weapon pickup. (Of a specific type. Ex: “AlienBlaster”) This might be too naive a design. I’ll talk more about this at the end.
- Money. (Of a certain range of values. Ex: Between $10 and $20.)
- Robots. (Of a given type and number. Ex: 5 instances of “Imp”)
- MAYBE projectiles? This would let a boss release a swarm of missiles as it dies. I don’t know how these would be aimed, but it’s here for discussion.
- A drop must have some sort of probability associated with it.
- This probability ought to work on a per-item basis. So, maybe the “Cyberdemon” boss has 100% chance of dropping some imps, a 50% chance of dropping some PinkyDemons, and a 1% chance of dropping the BFG 9000.
- For simplicity of parsing, I THINK this probability should be expressed as a percentage (50%) and not a ratio (1:2). The first is simply much easier to write, but the latter is better if we plan to have very very small chances (say 1 in 10,000’s), since percentages like “0.01%” aren’t as intuitive. (And I worry about floating-point shenanigans.)
- Drops will be named.
- So in our file, the artist will name a drop something descriptive. “Level1Trash”, “Level2MiniBoss”, “BellaAndJacobOTP”, or “foo”. Whatever.
- The name will be referred to by the robots and machines in their respective files: “drop=Level2MiniBoss”
- If we go crazy, I suppose we could add something insane like recursive drops. So “Level2MiniBoss” has a 50% chance to create “Level1Trash”, which itself might contain references to other drops. This is powerful, but it’s also an exercise in juggling dynamite. It’s only slightly more effort to code, but you can create a lot of needless complexity and bugs with this. Protip: I’m not going to build in a system to detect infinite recursion, so if you screw up you might crash the game. Choose wisely.
Proposal
My proposal, which needs to be discussed before I begin work on it:
I’m 99% sure that this needs to be in XML. The problem is hierarchical, which means using .ini files would be clumsy. Let’s just start with this assumption.
The definition starts with the name of the drop. It is then followed by one or more entries of things that might be dropped. Example:
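A mock-up consistent with the description below could look something like this (the tag and attribute names, the drop name and the numbers are invented for illustration and are not necessarily the post's actual format):
<drop name="Level2MiniBoss">
  <entry item="Imp" min="0" max="6" chance="50" />
  <entry item="AlienBlaster" chance="10" />
  <entry item="cash" min="10" max="20" chance="100" />
</drop>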
For each entry:
There is the name of the drop: Either the name of a robot, the name of a weapon, or the word “cash”.
There is a minimum and maximum number to drop. If you omit these, it will default to a min and max of 1. (This is good for weapons, since it doesn’t make any sense to drop more than one of the same weapon.)
Then there is the value “chance”, which will probably be a percent. Note that it will roll this chance and THEN roll again for min and max. So in the example above, it might roll the dice for Imps. The roll comes up positive, so it rolls again to see HOW MANY. Then it might roll zero, meaning no imps actually appear. Keep this in mind.
You can use this system to create bell curves. For example, if you don’t want cash drops to be completely random, then you could do this:
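For instance, two always-on cash entries, in the same invented mock-up format as above:
<drop name="CashBellCurve">
  <entry item="cash" min="1" max="6" chance="100" />
  <entry item="cash" min="1" max="6" chance="100" />
</drop>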
Both cash drops will always happen, and will always drop between 1 and 6 coins. This is the exact thing you get when rolling 2d6 in a tabletop setting: Most values will be in the mid range, and values of 2 or 12 will be rare.
Robots already have a value for how much money they will drop. I don’t see any reason to REMOVE that system, but we might simply transition to using this new system for that. Or maybe we’ll just limit this new system for specials and bosses.
One final note is that this doesn't allow us to drop weapons from lists of possibilities, which is what vending machines use. Maybe we also need to add a system of "weapon pools" of similar-powered weapons. So each entity can be a weapon, a weapon pool, money, or a robot. (And maybe projectiles.)
This needs a lot of discussion. It’s probably the most complex item on my list right now, so the sooner we hammer this out, the better.
Conclusion
So this is what I sent to the rest of the team (minus the animated Gifs) this morning. This afternoon we’ll have our meeting and we’ll see if what I’ve proposed fits with the needs of the project.
78 thoughts on “Good Robot #38: Spec’ing a Feature”
Finally something that I have more than enough experience to comment on. Your first example of corporate overlordship is a good one because it defines something else you don’t specifically mention: Spec Standards. At my company, we have Specs (we call them Use Cases) which follow their own standards to such a point where it’s almost ANOTHER programming language. I actually sometimes (as a developer) have a hard time reading the specs and need our BSA (person who writes the specs) to read it for me because there’s some obscure rarely-used logic to the spec notation.
This leads us to writing SUMMARIES in plain English of our SPECS…wee paperwork….
I write those ridiculous use cases for a living and I’m sorry that they are so annoying to read. You’re quite right that this stuff is basically code, but the language it’s written in is legalese. The other guy’s Contracts team is looking through the specs, trying to find vague, undocumented or half-baked features, so they can get you to build more for the same amount of money. Your testing team is trying to write test cases, ideally automated ones which can expect the same output 100% of the time, so they need everything in exacting detail.
Once we spent almost a week in meetings to determine what printing options would be supported. A one-line comment like “The Print button opens the system print options dialog, depending on the user’s current printer” is enough for a developer to understand what you mean, but when the customer submits a support ticket requesting that you add an image rotation feature because you said you supported printing in portrait or landscape, you will need those 3 pages of nonsense.
We get those all the times. Customer opens a Production Defect saying that the system isn’t doing a thing that sounds completely reasonable. The first response is to pass it off to the BSA to comb the requirements.
Technically, the system isn’t supposed to do anything NOT specified in the Use Case, so if it’s not in the Use Case, then it’s a New Business Request (read: pay us more money to do it). If it IS in the Use Case, then it’s a defect (read: we don’t get paid to fix it).
And, of course, the BSAs get dinged for too many “missed requirement” NBRs and the Testers get dinged for too many “missed defects”, so then they fight back and forth over if it’s an NBR or a DEF…Mmmm…politics…
For point 4 and infinite recursion, detecting it shouldn’t be too hard. Since this is (presumably) data that’s loaded once at start-up, you can traverse the loot drop graph after loading all the data with a topological sort algorithm, ensuring that there are no cycles. Way simpler than detecting loops in the graph when generating loot.
Detecting during run-time “merely” requires passing around a set of all nodes you have visited so far and simply stop the current traversal whenever you see you’ve visited before.
It does require to be thought of when you start writing the code and is greatly simplified by already having a decent set implementation hanging around, but can usually be faked by abusing a hash table.
Even if there are loops, infinite recursion has zero probability if the total outgoing probabilities for each node are less than 1. So no loops would be more restrictive than absolutely necessary.
On the other hand, you probably do want to drop some things with probability 100%.
I want a phoenix with 100% chance to drop itself after defeat.
But only if you beat it with The Good Incinerator.
It can still have a nonzero probability for any finite number of recursions. Which is effectively the same thing because it runs out of memory in finite time.
In the spirit of the original frame driven processing, I’d put the drops on a stack and on each frame process 1 (or N) of them. An infinite drop recursion would prevent the stack from shrinking, but the game would continue to run. To guard against exponential stack growth, I’d check the stack size when adding drops, logging an error at some small size and skip adding the drop at some much larger size.
Ooh, fun! I’ve written systems like this before (and sometimes gone overboard and tried to make everything modular and interchangeable, complete with object inheritance and custom scripting code to execute).
I’d say let a drop include the option of dropping an item chosen from a set. That lets you organize objects on hierarchies of rarity, giving (say) a miniboss a 1:100 chance of dropping a legendary weapon, which is then chosen from a set of legendaries based on the rarity and power of that weapon. Plus you can simplify the system and only define weapons in a single place (the containing set).
I’d do this with a drop-wide “all or one” setting – if its a one-item set, add up the probabilities, take a random polling, and see which item you polled. If it’s a multi-item set, work as before.
Also let drops drop other drops. Makes things simpler. (Though still limitedly recursive)
You could re-use the same logic, to have monsters dropped from pools. Anything really. Maybe the level-1 boss drops either X cash, or something from the fire-pool, where the fire-pool is either a fire-based weapon, a handful of fire-imps, or a fire-shield.
Obligatory ');DROP TABLE Robots;
Boring?! This is the best part of software development! Solving the puzzles!
I get bored about 40% of the way through coding a similar but different feature for the 173rd time. But figuring out a tricky problem or speccing a new feature is always a nice fun little puzzle to solve.
One final note is that this doesn't allow us to drop weapons from lists of possibilities, which is what vending machines use. Maybe we also need to add a systems of “weapon pools” of similar-powered weapons.
Why not add logical hierarchies, i.e. <OR Chance=70 < AlienBlaster Chance=70 /> < BetterAlienBlaster Chance=30 /> />?
This would also allow you to define certain sets of things that should drop together via an AND.
What you’ve got there is invalid XML, but here’s a valid version of the same idea:
<WeaponPool RandomTotal="100">
<AlienBlaster Min="1" Max="69" />
<SuperBlaster Min="70" Max="100" />
</WeaponPool>
Roll a random number from 1 to RandomTotal, and select the weapon dropped depending on which Min/Max it falls within.
Allow a result for “no drop” and you’ve got an easy way to simply express a 1 in 10,000 chance:
total = 10000
awesome item = min=1, max = 1
no drop, min = 2, max = 10000
You could go further and just omit the ‘no drop’ line altogether.
Some thoughts….
Rather than specify chance as a number for each drop, you might want to use an enum (e.g. “very rare”, “rare”, “uncommon”, “average”, “common”, “likely”, “very likely”, “certain”). The advantage of this is that it’s a LOT easier to tune a playtesting issue like “it seems like the drops are way too generous” if you didn’t have to twiddle 100 different numeric values one at a time. Might also consider this with the cash values.
Separately, every drop could in theory happen on every event. Is that the way you want it to work? For example, a boss can drop a weapon AND money. A possible extension would be some kind of “wheel” system (at a level above the individual drops), where exactly one of N possible drops will be triggered.
Example (angle brackets don’t seem to work easily in wordpress, nor does indenting…)
[PhatLewt]
[Imp Min="0" Max="6" Chance="50"/]
[Cash Min="0" Max="20" Chance="100"/]
[wheel]
[AlienBlaster weight="1" chance="100"/]
[BFG9000 weight="1" chance="100"/]
[Cash weight="8" Min="0" Max="500" Chance="100"/]
[/wheel]
[/PhatLewt]
This would yield a 50% chance of imps, a 100% chance of a small amount of cash, and ONE of the (AlienBlaster, BFG9000,Big pile o’ cash) set. The “weight” param would only matter in a wheel, and determine how likely each option is (with these weights, 10% chance of blaster, 10% chance of BFG, 80% chance big cash pile).
This might be useful in a situation where “I want to drop exactly 1 weapon, but I want it random which one gets dropped”
I really think nested drop tables are the way to go, and I would generalize between the drop table and a wheel. Need not even be recursive, just allow a top-level drop table to contain drop tables as regular entries, but no deeper. Also, I would make it so
<DropTable Name="Lvl1Boss" Max=2 Roll=10 >
<Stack Archetype="Bomb_01" Min=1 Max=3 Weight=1 />
<DropTable Weight=5
<Stack Archetype="Laser_lvl10" />
<Stack Archetype="Minigun_lvl10" />
<Stack Archetype="Shield_lvl10" Weight=2 />
<Stack Archetype="Nothing" />
</DropTable>
<Stack Archetype="Money" Min=100 Max=500 Weight=10 />
</DropTable>
This is how it would work at runtime:
An enemy assigned this drop table dies. The drop table starts at the top element, rolls a number between 1 and the value of the “roll” attribute, and if the result is lower or equal the weight of the item, the drop table yields this item (or multiples according to the defined min and max of the item, which both default to 1).
The drop table then proceeds to the next item, rolls again, compares the weight to the roll. It stops once the maximum number of drops has been met, or the end of the table is reached. The attribute “max” for drop tables defaults to 1.
This gives us a few nice opportunities to play with. Drop tables are evaluated top to bottom. In my example, if the first roll yields a 1, 1-3 bombs drop. This consumes one of the “Max=2” possible items this table can produce. The next item is a nested drop table, which is then evaluated by the same rules that all drop tables follows, minus the option to contain another nested table.
If both some bombs have been dropped and the nested drop table has produced something, then the outer drop table has reached its maximum and no money is dropped. A nested drop table may itself yield empty, depending on the roll and weight values of the table and its items; if weight of the inner table itself is not met, or none of its contained items is met, then no “Max=2” drop from the outer table is consumed. If the special “Nothing” element is met, however, this does consume a drop.
If only bomb or a weapon, or neither, has dropped, the player is guaranteed some money as a consolation price. Jackpot is when no bomb drops, an item drops, and money drops.
The inner drop table has the special attribute value “Roll=Auto”. This means it will automatically determine its Roll-value by the sum of all weights contained within its own entries, and it only rolls a single time, not each time for each entry. Since it contains 5 items, 3 with a weight defaulted to 1, 1 with an explicit weight of 2, the game rolls 1-5: 1 yields a laser, 2 yield a minigun, 3 and 4 yield a shield; a 5 yields nothing. The Archetype=”Nothing” entry is no accident, it is special value with weight and everything, when its value is rolled it yields nothing, but unlike a roll that does not meet any defined item, meeting an Archetype=”Nothing” item does consume a drop. Key to all of this is the top-to-bottom evaluation.
You can of course also do basic things like having a special rare drop and a common normal drop, like this 1% chance to drop a level 10 laser:
<DropTable Name="RareMeansRare" Roll=Auto>
<Stack Archetype="Laser_lvl10" />
<Stack Archetype="Laser_lvl5" Weight=99 />
</DropTable>
Insert this into another drop table and you have an entry that says “in this place, drop a level 5 laser, only very rarely a level 10 one”.
Shamus’ idea of using two similar entries to emulate the effects of a 2d6 is possible in this system, too:
<DropTable Name="2d6_cash" Max=2 Roll=1>
<Stack Archetype="Money" Max=6 />
<Stack Archetype="Money" Max=6 />
</DropTable>
It rolls 1-1, which always meets the default weight=1, and will output at most 2 items, each of which are defined as 1-6 units of cash. :)
Also, some of the options, especially the “Nothing” element, may seem redundant at first glance, as you could produce similar effects by different means. The following drop tables would evaluate to the same results, but they can express different designer intentions, or just modes of thought (every one of us has a different way to wrap their head around probabilities). Each of the following has a 1/3 chance to drop 1-100 units of cash.
<DropTable Name="A" Max=1 Roll=3>
<Stack Archetype="Money" Max=100 Weight=1/>
</DropTable>
<DropTable Name="B" Max=1 Roll=Auto>
<Stack Archetype="Money" Max=100 />
<Stack Archetype="Nothing" Weight=2 />
</DropTable>
<DropTable Name="C" Max=1 Roll=Auto>
<Stack Archetype="Nothing"/>
<Stack Archetype="Nothing"/>
<Stack Archetype="Money" Max=100 />
</DropTable>
Also, keep in mind, as I said above, that yielding a “Nothing” result consumes a drop, but failing to meet a weight does not, therefore allowing you to produce exceptions where a table mostly meant to drop a given number of items may sometimes yield fewer. We can thus emulate a probability for a table’s own Max=N value:
<DropTable Name="1-3d6_cash" Max=3 Roll=100>
<Stack Archetype="Nothing" Weight=50 />
<Stack Archetype="Nothing" Weight=50 />
<Stack Archetype="Money" Max=6 Weight=100 />
<Stack Archetype="Money" Max=6 Weight=100 />
<Stack Archetype="Money" Max=6 Weight=100 />
</DropTable>
If the first “Nothing” weight is met, one of the Max=3 drops is consumed. If the second one is met, only a single drop remains, which is consumed by the first money drop already. So you can have 1d6, 2d6, or 3d6, and by adjusting the weight of the nothing element, you can directly control the probability for each number of dice, independently of the sizes of the dice.
Addendum: To further illustrate the flexibility of this system, here is the first example, but changed so that when bombs drop, the chance of getting 3 is very low:
<DropTable Name="Lvl1Boss" Max=2 Roll=10 >
<DropTable Weight=1 Max=3 Roll=100>
<Stack Archetype="Bomb_01" Weight=100 />
<Stack Archetype="Bomb_01" Weight=50 />
<Stack Archetype="Bomb_01" Weight=1 />
</DropTable>
<DropTable Weight=5 Roll="Auto" >
<Stack Archetype="Laser_lvl10" />
<Stack Archetype="Minigun_lvl10" />
<Stack Archetype="Shield_lvl10" Weight=2 />
<Stack Archetype="Nothing" />
</DropTable>
<Stack Archetype="Money" Min=100 Max=500 Weight=10 />
</DropTable>
The inner drop table for the bombs always yields 1 bomb, with a 50% chance of a second bomb and a 1% chance of another (technically it can drop the third but not the second, but you get the idea). Like in the original example, it is the weight of the drop table for the bombs that determines whether bombs drop at all. And if 3 bombs drop, it still consumes only 1 drop of the outer table, as the inner table is only regarded as a binary “yields”/”no yield” situation (again, with the special “Nothing” element counting as a “yield”). If you want to have multiple bombs count as individual items, you just put them into the outer table as individual, separate items. :)
Re: floating point — just because it’s WRITTEN as a decimal percentage doesn’t mean you have to STORE it as floating point. Personally, I’d probably parse it into an int value equating to parts per thousand or per ten thousand, depending on how granular you want to get.
Yeah, but then I’d have to climb down into the XML reader and write my own “text float to int” parsing code, and that’s a lot of effort for a single special case.
As a rule of thumb, for probabilities in code I just work out the smallest probability that I _think_ I want (say, 1 in 1,000) and add a digit (1 in 10,000) because you will always later wish you could cut it in half, and then code in integer numbers of these. You never have to convert it to a float, because you just code your “die roller” to generate an int from 1 to DIE_SIZE and compare it to your integer value. Because yeah; I hate floating-point maths on computers. We’ve got a print billing system here at work that keeps trying to tell me that Bob from Accounting owes 1.100000007 dollars, which is just dumb.
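A minimal illustration of that rule of thumb, keeping probabilities as integer parts-per-ten-thousand so they can later be halved without ever touching a float (the names here are made up for the example):

import random

PARTS = 10_000                 # 1 unit = 0.01%

def chance(parts, rng=random):
    """True with probability parts / PARTS -- integer comparison only."""
    return rng.randint(1, PARTS) <= parts

RARE_DROP = 10                 # 1 in 1,000, with room to halve it to 5 later
hits = sum(chance(RARE_DROP) for _ in range(1_000_000))
print(hits)                    # roughly 1,000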
If I had a dollar for every time it was a good idea to store monetary values as floats, I’d have 0.00000000003 dollars.
COBOL actually has a special data type for storing monetary amounts. Which is part of why people still use it for accounting systems.
You can totally use floats for monetary values, because rounding errors can be fixed.
On the other hand, overflow exceptions cannot be fixed. For example when someone is trying to calculate a bigger monetary value in YEN.
And this is why people keep making the same mistake.
Floats have a lot more ways of going wrong than you think.
For example, the value “0.10” cannot be precisely stored in a float – or a double, or indeed any size floating point you care to choose!
So paying in ten cents a few thousand times gives a different answer depending on whether you are working in floating point or fixed point arithmetic!
That’s before you start considering what happens when you add a small number to a big number…
Or you hit the limit where the remaining precision in your floats are larger than the smallest monetary unit you want to handle. If you’re handling cents, that happens at around 2^46 dollars.
If you instead decided to use the Maltese Lira (I think they use the Euro, now), you would hit the limit somewhere around 2^43 Maltese Lira (their smallest coin was the milli, thousand millis to the Lira).
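A quick way to see the 0.10 problem for yourself (Python shown here, but the effect is the same in any language using IEEE 754 doubles):

# Paying in ten cents, ten thousand times:
total_float = sum(0.10 for _ in range(10_000))   # binary floating point
total_cents = sum(10 for _ in range(10_000))     # fixed point: work in cents

print(total_float)        # something like 1000.0000000001588 -- not exactly 1000
print(total_cents / 100)  # 1000.0 exactly, because the running sum was integral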
Incidentally, since you mentioned source, here’s some:
#include <stdio.h>

int main(int argc, char *argv[])
{
    float a = 9000000000.;
    float b = 2.;
    printf("%f\n", a + b);
    return 0;
}
Fixing rounding errors implies detecting rounding errors, which is hard when you can’t store the number 1/10 precisely. Try using single precision floating point on a 32-bit computer to calculate 9 billion + 2. (hint: 8999999488.000000) The solution isn’t to bury your errors. The solution is to use larger, more precise types.
Complaining that single precision floating point on 32-bit systems is a straw man? You bet it is. But then, so is a 32-bit integer. I can do 64-bit math on a 486, and the largest signed 64-bit number (don’t forget the sign bit) is 9,223,372,036,854,775,807. That’s over 9 quintillion, which is more than the GDP of Earth in ZWD (about 8 quintillion).
I can only talk about .NET here, but it is definitely doable using its DOUBLE and DECIMAL types.
But I doubt i am allowed to post the source code here, so the rest is left as an exercise for the reader.
You’re never going to be able to represent things to arbitrary precision with perfect accuracy. There are a finite number of possible representable values and an uncountably infinite number of points between zero and one.
I think you went a bit off topic there, i was talking about monatary values here and there are definitely a finite number of cents between 0 and 1 dollar :)
Uh, firstly finance often does track fractions of a cent. It adds up. Secondly, if you’re using a generic floating-point data type instead of a specialized fixed-precision one, it will not be able to accurately represent all of your possible values. Additionally, when dealing with large numbers, precision errors will creep to the left of the decimal point. If you’d have overflow errors in dealing with Yen as an integer, a floating-point representation of dollars is going to have massive precision errors because one yen is about equal to one cent. The rate fluctuates, but that’s a good heuristic if you want a general impression of how much a price denominated in yen is (IIRC it’s about 80 yen to one dollar at the moment).
Your double type is going to be more precise than a float, but it’s still mapping an infinite space to a finite number of values, and the fact that you’re only concerned about a finite number of points in that infinite space does not mean that the data format will map the exact set of finite points you want. Additionally, a double takes up as much space in memory as a 64-bit integer.
DOUBLE, no. DECIMAL, yes. Don’t try to use floating point to represent money. That’s the short version of what I said above.
That’s what longs are for.
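A sketch of that integer-cents approach (Python, where ints are arbitrary precision, so even the Yen overflow worry above goes away; the helper name is invented, and it ignores negatives and other edge cases):

def parse_dollars(text):
    """Parse a string like '1.10' into integer cents without touching a float."""
    dollars, _, cents = text.partition(".")
    return int(dollars) * 100 + int((cents + "00")[:2])

balance = parse_dollars("1.10")
for _ in range(1000):
    balance += parse_dollars("0.10")            # Bob pays ten cents, a thousand times

print(balance)                                   # 10110 (cents) -- no 1.100000007 here
print(f"${balance // 100}.{balance % 100:02d}")  # $101.10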
I’d abandon the notion 1.0 = 100% entirely. Just define an arbitrary value to roll, and which values yield which results. If the roll value is defined in the XML, designer can then pick a value that makes sense for their case, as opposed to cramming some values where they don’t fit. E.g. instead of having to assign 33%, 33% and 34% for 3 items meant to drop with equal probability in a system from 1-100, just let them define the roll as 1-3, and assign a weight of 1 to each item. :)
Why is this post two gifs with no text?
;D
I was just going to make the opposite remark. Both here and in previous Good Robot entries, Shamus has implied us being distracted by the moving gifs. While I appreciate them as looks at the game as a work in progress, they don’t distract for a second, and I hardly notice them unless I decide to look at the pretty pictures.
Shamus, you underestimate the amount of advertisement we have to ignore on a daily basis; I’m by now completely programmed to read a text and not notice the flashy screaming colorful “attracting” images or videos surrounding it.
Also, mouse scroll wheels. :)
(1) you seem to have lost track of “Specifically, we want to be able to ensure things like, “Player will be offered at least 1 half-decent weapon before reaching point N.”” Isn’t *that* the hard part? To some extent it’s going to imply inversion of control: rather than explicit spawns, you’ll be dropping “30% chance of a tier 1 weapon”, and then you’ll have a system to guarantee that at least 3 in 10 possible tier 1 weapon drops *actually* happens, as well as that at least 1 in 5 native tier 1 weapon drops is promoted to tier 2.
(2) it doesn’t at all follow that XML is necessary here, although I guess if your choices are that and .ini files then XML is the better of the two.
The easy approach is to have a drop set of ‘half decent weapons’ whose probability of dropping goes up over time and approaches 100% at point A.
I’d probably put in an incremental percentage value that gets multiplied by the number of previous drops that haven’t paid out. And zero out the previous drops number when you have a successful drop. I think this plays into having to classify the individual dropped items into categories and specify drops by categories rather than specific items.
Make the gambler’s fallacy come true! That’ll screw with ’em.
The other approach is to treat drops not as a percentage, but each source as a deck of cards – one way or the other, the ace will come up eventually….
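The deck-of-cards idea is usually implemented as a “shuffle bag”: put every possible outcome in a list, shuffle, deal from the top, and reshuffle when empty; the good drop is then guaranteed within one pass. A tiny sketch (the class and outcome names are invented):

import random

class ShuffleBag:
    """Deals outcomes like a shuffled deck, so every 'card' comes up exactly
    once per pass through the bag."""
    def __init__(self, outcomes):
        self.outcomes = list(outcomes)
        self.deck = []

    def draw(self, rng=random):
        if not self.deck:                 # reshuffle once the deck runs out
            self.deck = self.outcomes[:]
            rng.shuffle(self.deck)
        return self.deck.pop()

drops = ShuffleBag(["junk"] * 9 + ["half-decent weapon"])
print([drops.draw() for _ in range(10)])  # exactly one good weapon in every 10 drops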
I’d just make a certain enemy that shows up in one specific room that’s guaranteed to drop a random decent weapon. If you stick it in a room with a bunch of the base random-loot versions of the enemy, players won’t notice that that’s why they always get a good drop in the first 2 levels. It’s a brute-force solution, but it solves the problem.
This could be solved with logic (IF player has no good weapon THEN drop random good weapon). But then your loot table becomes its own programming language…
Technical specs can be annoying, but man, do they ever make your life easier when you get into something and realize it’s more complex than you initially thought. I was a fly-by-the-seat-of-my-pants sort of girl when I was in college and when I first started working, but I’ve been thoroughly converted. Seat-of-my-pants works fine when you’re making some minor change, but when you’re doing BIG stuff, complicated problems or dealing with different interacting systems or having multiple developers work on the same system at the same time… yeah, it falls apart pretty fast.
Plus, when you’re like me and you’re making stuff for internal customers in your company, it’s reeeeeal nice six months down the road to be able to pull out the document that THEY approved and go, “Well, the reason I didn’t implement Feature X is that you jolly well didn’t ASK for Feature X!” And because they’re internal customers, I’m allowed to say that. :)
(this week’s infuriating customer conversation: “But when I click the delete button, it DELETES THE DATA, leaving no record!” “Yes… yes that’s what you asked for… you said there was too much unnecessary data so you wanted to be able to clean it up…” “But it DELETES it!” Luckily they were appeased by adding dire warnings to the “are you sure you want to delete” dialog. I wanted to add a laughing skull on the side for extra warning-ness, but I managed to restrain myself.)
Ugh, that kind of bullshit is the worst. I recently saw someone had left a negative review of the Firefox NoScript addon that basically boiled down to “it stops scripts from working”. No shit. The fact that that will break some websites had apparently not occurred to them.
Fun fact: I read this spec here before I saw it in person among our other development files. Making games is weird sometimes.
In other news, this looks very usable and I like the implications it has for rarer item drops – …I say as I realize the one filling out the game’s worth of tables will be me, probably in just a few hours.
You know how I keep talking about planning things out in advance before coding them? This is that in action, and people are already suggesting changes to avoid technical issues or features you might want to add well before having a nice, entirely finished system you will need to knock holes in.
Your first weapon reference was “alien blaster” instead of “minigun” or “plasma rifle”.
For shame, Shamus. For shame! :p
You’ll definitely want weapon pools, IMO.
It’s such a nice thing to be able to say “Okay, this enemy type has a chance of dropping a common laser weapon, and THIS type has a chance of dropping an uncommon projectile weapon” without having to go through and manually set the groups every time you want to do this. If you don’t implement it now, you’ll almost certainly do it later when you get tired of specifying drops like this:
[PhatLoot]
[exclusive]
commonBlastLaser=0.05
commonBeamLaser=0.05
commonBurstLaser=0.05
commonSpreadLaser=0.05
[/exclusive]
[/PhatLoot]
On literally every object that can drop a weapon.
That doesn’t even work. Four independent 5% chances is not the same thing as a 20% chance of one.
Who says it’s supposed to be a 20% chance? I’m not sure what the net chance there actually is, which is part of the problem, and the exclusive block there presumably means there may be at most one drop so they aren’t independent. Which may mean they don’t have equal probability, depending on implementation (they could be rolled independently and have one of the successes selected randomly with probability based on relative chance, or they could be rolled in order until one succeeds, for instance).
The odds there are actually about 18.5%. Some “intuitive” probability errors are actually not that far off.
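For anyone who wants to check that figure: the chance that at least one of four independent 5% rolls succeeds is one minus the chance that all four miss.

p_at_least_one = 1 - 0.95 ** 4
print(round(p_at_least_one, 4))   # 0.1855, i.e. about 18.5%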
I can’t tell if you’re abusing XML for simplicity of explanation, or you are genuinely using it that way? Names should NOT be element types, you should have an attribute name="fat loot" instead.
Sorry, but this irritates my inner programmer like writing “Pi is exactly 3” in front of a mathematician.
That was my immediate response too. To make an XML vocabulary scalable it makes sense to use the element names (structure) for class information, and attributes for instance values.
However, the virtue of XML is its simplicity, so if this usage doesn’t introduce any ambiguity and will never become more complex, then why not? What’s abuse in one context can be simplicity in another.
The .14 is silent, everybody knows that.
as is the .00159….
I assume the correct format would be something like:
{item name="PhatLewt" /}
instead?
Which means you’re going to have all of these “item” entries where the word “item” is completely superfluous. Like, items are the only thing that can go there. This contributes to the ongoing verbosity of XML. It doesn’t do anything to help artists maintain the files.
I’d expect “weapondrop name=’phat loot'”. That would be useful for stuff like a special ability that boosts weapon drop rates. Probably the most maintainable way of adding that.
Of course, that isn’t in the spec.
{weapon name="alienblaster"/}
{robot name="imp"/}
{cash/}
Or even {currency name="cash"/} in case you want to add other currencies in the future.
This is part of the value of XSDs – they can be annoying to write and excessively picky, but the version you include in the post would require a new schema every time you added a weapon or monster to the game, while using a more general form wouldn’t. Also it lets all of the enemy and weapon tags share the same allowed sub attributes rather than needing to duplicate it every time. And whatever library you’re using for reading XML is going to be really good at getting a list of nodes and make it a bit harder to iterate over all nodes.
Another way to think about it – do you have a class (or struct or whatever) for every robot and weapon type? I imagine you have a robot struct with a name property. Why should your xml be different?
Also incidentally it’d make it possible to have weapons and enemies with the same name, which someone might do accidentally (unless you prevent that somehow). For that matter, how do you know if something refers to a weapon, a robot, or cash? What if someone names a robot cash because it reminds them of the singer?
Another way to look at it: in C#, I prefer that the class which will interpret the XML for the program also be able to generate it. So my version would look something like this:
{Loots}
{Loot name="ImportantLoot"}
{Robots}
{Robot}
{Name}"Louis"{/Name}
{Number}3{/Number}
{Probability}50{/Probability}
{GroupLoot}1{/GroupLoot}
{/Robot}
{Robot}
{Name}"John the mighty Archidemon"{/Name}
{Number}1{/Number}
{Probability}20{/Probability}
{GroupLoot}2{/GroupLoot}
{/Robot}
{Robot}
{Name}"Patrick the medium Archidemon"{/Name}
{Number}1{/Number}
{Probability}10{/Probability}
{GroupLoot}2{/GroupLoot}
{/Robot}
{/Robots}
{Weapons}
{Weapon}
{type}"Laser"{/type}
{power}
{min}2{/min}
{max}5{/max}
{/power}
{spread}3.5{/spread}
{Probability}30{/Probability}
{GroupLoot}3{/GroupLoot}
{/Weapon}
{/Weapons}
{Money}
{min}300{/min}
{max}300{/max}
{Probability}30{/Probability}
{GroupLoot}3{/GroupLoot}
{/Money}
{Loots}
{Loot}
{name="MediumLoot"/}
{number="2"}
{Probability}30{/Probability}
{GroupLoot}3{/GroupLoot}
{/Loot}
{/Loots}
{/Loot}
{/Loots}
I add a new property, GroupLoot, to know if the loots are exclusive or not. In this example, you can have Louis and John the mighty Archidemon gang up on you, but not John the mighty Archidemon and Patrick the medium Archidemon.
The Loot class is quite easy to parse :
an array of class weaponLoot (properties enum type, power power, double Spread, int Probability, int GroupLoot).
an array of robotsDropped
a class of MoneyDropped
an array of LootDropped (not the same class as Loot because you need the probability, number and group loot property)
I recommend making an interface/abstract class IDrop, which implements the properties Probability, Number and GroupLoot.
Good god, that’s hard to parse without indentation. Seriously, I can barely make it out at all.
I know I’m really late to this, but I’m hoping you still read comments on old-ish posts as they come in. XML is awful to write by hand, so don’t require your artists to do it. Make a nice little editor tool that knows the schema and what kinds of things can go in what kinds of fields. Have it validate its input so that when your artists make typos there’s a better chance they get caught, and then make it spit out XML in all its verbosity.
In fact, you don’t have to write the little editor program yourself. There are already programs out there that will take a schema and auto-generate generators. I can’t recommend any because I don’t really do XML-y work, but Google ought to be able to.
I think that you absolutely need a way to specify “Exactly one of the following:”
The golden example will be “We need this boss to drop one level 5-10 weapon.”
Then you create a table called Weapon_Tier2, that has a row for each weapon in that level range, and a row (with a small weight) for Weapon_Tier3.
(That’s also why you need to have drop tables recurse, so that you can create suitably trivial chances for something to drop well outside of its weight class.)
It also expands well for when the sequel or expansion adds things other than weapons to the drop table; you just add Equipment_Tier2, which weights between Weapon_Tier2, Engine_Tier2, and Shield_Tier2.
Actually, point four can be used to do that. Be a bit clunky, though.
How would you use that to produce always exactly one item from the list, rather than merely on average one?
1) These posts are my absolute favourite thing on the site. I’ve been reading for…wow, for over 6 years now. I don’t comment much, but I just had to point out what a fan I am of your posts about programming.
2) I’m not a programmer, but just as you removed “min” and “max” when it’s exactly one, wouldn’t it also be cleaner to remove “chance” if it’s definitely going to drop?
Cant….take….eyes….away….from….gifs….
I’d be tempted to have something like a list of items and you can guarantee one (or more) items from that list would drop. For example:
{WeaponReward}
{List Drops="1" Chance="100"}
{LaserBlaster}
{MissleLauncher}
{LightningShield}
{/List}
{Cash Min="0" Max="100" Chance="100"/}
{/WeaponReward}
That would give you a 100% chance of getting only one of those three weapons, plus 0-100 cash.
Tying swan song attacks like projectiles into the drop system sounds a little hacky. Something that works really well until something terrible you didn’t think of comes up a couple months later. Or if you decide to change the drop system later, it may break swan songs (e.g. if you decide for balance reasons enemies can only drop one thing, suddenly it can’t fire missiles on death anymore without refactoring).
But it’s game dev, sometimes these things happen (and I know it was just a thought and not a done deal in the proposal).
If you like XML, use JSON. It’s the same thing for 99% of all applications, but just less verbose.
JSON is so much harder to parse in a language like C++ that it may as well be impossible. Or so I’ve been told by people trying to do it.
In C#, java and probably C++, you should not parse XML, html or JSON by yourself. There are good libraries for that which allow you to write a code which will be more readable, with fewer bug and probably faster than the hack that an average programmer will do in a few hours.
Wow, I would have figured you’d have better understanding of chance, Shamus. It doesn’t matter whether you roll chance or number first for your imps, because it’s commutative. (If you roll number first, you can still roll 0)
Also, 2d6 isn’t a bell curve, it’s triangular :P
And man, reading this now is uncanny. I’m screwing around with modding Oblivion, and have just been working heavily with the levelled lists. (And really missing the “drop everything” flag from FO3.)
|
https://www.shamusyoung.com/twentysidedtale/?p=29572&replytocom=996596
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Day 1 of the Anvil Advent Calendar
Build a web app every day until Christmas, with nothing but Python!
The connected home at Christmas
Christmas trees are appearing all over, and the Anvil office is no exception. But how many people can claim that their tree is controlled from the web, with Nothing But Python? Well, we can, and now you can too. Here’s how it works:
At the core of our Python-powered tree is a Raspberry Pi with an Energenie Pi-mote attached. This is a delightfully easy-to-use board that lets you control mains-powered devices, like our set of multi-coloured tree lights. You can get the Pi-mote and two socket controllers as a kit from Pimoroni. (Christmas tree not included.)
The wonderful GPIO Zero Python library drives the board, and an Anvil Uplink script allows us to control the whole thing from the web. Easy!
The Setup
Firstly, let’s get the tree and the Pi set up. The Pi-mote works from quite a distance with an aerial attached, so you can even put it elsewhere in the house. Plug in the lights, connect the Pi-mote to the Raspberry Pi, and get the Pi up and running on your WiFi.
The Raspberry Pi
Next up, let’s write ourselves a Python script to control the Pi-mote (and hence the lights). You can either connect the Pi to a screen, keyboard and mouse, or SSH in remotely. Either way, start by installing GPIO Zero which will let us control the Energenie Pi-mote. If you have a recent Raspberry Pi, it might even be installed already!
Next, fire up your favourite text editor and let’s write the script. I like Nano, but I don’t judge those who prefer Vim or Emacs. Too much.
$ nano tree_control.py
The following is all you need to control the lights. No, I’m not kidding, it really is this easy:
from gpiozero import Energenie

lights = Energenie(1)  # You may need to adjust the channel here

def lights_on():
    print("Turning lights on!")
    lights.on()

def lights_off():
    print("Turning lights off!")
    lights.off()
Now we can test our script from Python, directly on the Pi:
$ python
>>> from tree_control import *
>>> lights_on()
Turning lights on!
Hopefully your lights switched on! If not, you may need to tweak the channel numbers in the Pi-mote boilerplate.
The Web App
Next up, let’s make the script accessible from the web. Head over to Anvil and create a new app. For now we’ll just create a form with two buttons:
Then enable the Uplink for your app, and copy the sample code into the top of the script on the Pi:
Now our script will connect to Anvil and listen for server function calls when it starts. We just need to make our lights_on and lights_off functions callable by adding the @anvil.server.callable decorator. Your whole script now looks like this:
from gpiozero import Energenie
import anvil.server

anvil.server.connect("<UPLINK-KEY-HERE>")

lights = Energenie(1)  # You may need to adjust the channel here

@anvil.server.callable
def lights_on():
    print("Turning lights on!")
    lights.on()

@anvil.server.callable
def lights_off():
    print("Turning lights off!")
    lights.off()
We also need to make sure the script doesn’t exit, so it keeps listening for server calls. Add this to the bottom of your script:
anvil.server.wait_forever()
And we’re good to go. Run your script, and you’ll see it connect:
$ python tree_control.py
Connecting to wss://anvil.works/uplink
Anvil websocket open
Authenticated OK
Head back to Anvil, and double click your “Lights On!” button. Here we can call the function on the Pi directly:
def button_1_click(self, **event_args):
    """This method is called when the button is clicked"""
    anvil.server.call("lights_on")
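A matching handler for the second button might look like this (button_2_click is an assumed name; use whatever your second button's click event is actually called):

def button_2_click(self, **event_args):
    """This method is called when the button is clicked"""
    anvil.server.call("lights_off")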
Add a similar call to "lights_off" for the other button (as sketched above), and that’s it! Run your app, and control your tree!
Give the Gift of Python
Share this post:
|
https://anvil.works/advent/remote-control-tree.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Now we need to run that insert_item function from the web app. Let’s build an ‘add’ widget into the Data Grid.
Add a column to your Data Grid. Clear its Title and Key. Set its width to 80. This will hold our ‘add’ button later.
Add a Data Row Panel to the bottom of your Data Grid.
Drop a TextBox into each of the Name and Quantity columns. Rename them text_box_name and text_box_quantity. Set the Quantity TextBox’s type to number.
Drop a Button into the end column. Rename it button_add. Clear the Button’s text and set its icon to fa:plus.
Create a click handler by clicking the blue arrows to the right of ‘click’ in the Events section:
def button_add_click(self, **event_args):
    anvil.server.call(
        'insert_item',
        self.text_box_name.text,
        self.text_box_quantity.text
    )
    # Refresh the open Form to load the new item into the UI
    get_open_form().raise_event('x-refresh')
    # Clear the input boxes
    self.text_box_name.text = ''
    self.text_box_quantity.text = ''
Now you can add items to your database from your web app!
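For reference, a minimal insert_item server function compatible with the call above might look something like the sketch below. The table name, column names, and the use of sqlite3 are assumptions made purely for illustration; your actual implementation from the earlier chapter may use a different database and driver.

import sqlite3
import anvil.server

@anvil.server.callable
def insert_item(name, quantity):
    # Hypothetical external database; swap in your real connection here.
    with sqlite3.connect("inventory.db") as conn:
        conn.execute(
            "INSERT INTO items (name, quantity) VALUES (?, ?)",
            (name, quantity),
        )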
|
https://anvil.works/learn/tutorials/external-database/chapter-3/30-build-add-ui.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
1. Introduction
1.1. Module Interactions
This section is normative.
This module replaces and extends the rules for assigning property values, cascading, and inheritance defined in [CSS2] chapter 6.
Other CSS modules may expand the definitions of some of the syntax and features defined here. For example, the Media Queries Level 4 specification, when combined with this module, expands the definition of the <media-query> value type as used in this specification.
2. Importing Style Sheets: the @import rule
The @import rule allows users to import style rules from other style sheets. If an @import rule refers to a valid stylesheet, user agents must treat the contents of the stylesheet as if they were written in place of the @import rule, with two exceptions:
If a feature (such as the @namespace rule) explicitly defines that it only applies to a particular stylesheet, and not any imported ones, then it doesn’t apply to the imported stylesheet.
If a feature relies on the relative ordering of two or more constructs in a stylesheet (such as the requirement that @charset must not have any other content preceding it), it only applies between constructs in the same stylesheet.
For example, declarations in style rules from imported stylesheets interact with the cascade as if they were written literally into the stylesheet at the point of the @import.
Any @import rules must precede all other at-rules and style rules in a style sheet (besides @charset, which must be the first thing in the style sheet if it exists), or else the @import rule is invalid. The syntax of @import is:
@import [ <url> | <string> ] <media-query-list>? ;
Where the <url> or <string> gives the URL of the style sheet to be imported, and the optional <media-query-list> (the import conditions) states the conditions under which it applies.
If a <string> is provided, it must be interpreted as a <url> with the same value.
@import "mystyle.css"; @import url("mystyle.css");
The import conditions allow the import to be media-dependent. In the absence of any import conditions, the import is unconditional. (Specifying all for the <media-query-list> has the same effect.) An imported style sheet does not constitute an independent origin: it has the origin of the style sheet that imported it.
The environment encoding of an imported style sheet is the encoding of the style sheet that imported it. [css-syntax-3]
2.1. Content-Type of CSS Style Sheets.
3. Shorthand Properties
Some properties are shorthand properties, meaning that they allow authors to specify the values of several properties with a single property.
For example, writing background: green rather than background-color: green ensures that the background color overrides any earlier declarations that might have set the background to an image with background-image.
In some cases, a shorthand might have different syntax or special keywords that don’t directly correspond to values of its sub-properties. (In such cases, the shorthand will explicitly define the expansion of its values.)
If a shorthand is specified as one of the CSS-wide keywords [css-values-3], it sets all of its sub-properties to that keyword, including any that are reset-only sub-properties. (Note that these keywords cannot be combined with other values in a single declaration, not even in a shorthand.)
Declaring a shorthand property to be !important is equivalent to declaring all of its sub-properties to be !important.
3.1. Resetting All Properties: the all property
The all property is a shorthand that resets all CSS properties except direction and unicode-bidi. It only accepts the CSS-wide keywords. It does not reset custom properties [css-variables-1].
Note: The excepted CSS properties direction and unicode-bidi are actually markup-level features,
and should not be set in the author’s style sheet.
(They exist as CSS properties only to style document languages not supported by the UA.)
Authors should use the appropriate markup, such as HTML’s
dir attribute, instead. [css-writing-modes-3].
4. Value Processing
The final value of a CSS property for a given element or box is the result of a multi-step calculation:
- First, all the declared values applied to an element are collected, for each property on each element. There may be zero or many declared values applied to the element.
- Cascading yields the cascaded value. There is at most one cascaded value per property per element.
- Defaulting yields the specified value. Every element has exactly one specified value per property.
- Resolving value dependencies yields the computed value. Every element has exactly one computed value per property.
- Formatting the document yields the used value. An element only has a used value for a given property if that property applies to the element.
- Finally, the used value is transformed to the actual value based on constraints of the display environment. As with the used value, there may or may not be an actual value for a given property on an element.
4.1. Declared Values
Each property declaration applied to an element contributes a declared value for that property associated with the element. See Filtering Declarations for details.
These values are then processed by the cascade to choose a single “winning value”.
4.2. Cascaded Values
The cascaded value represents the result of the cascade: it is the declared value that wins the cascade (is sorted first in the output of the cascade). If the output of the cascade is an empty list, there is no cascaded value.
4.3. Specified Values
The specified value is the value of a given property that the style sheet authors intended for that element. It is the result of putting the cascaded value through the defaulting processes, guaranteeing that a specified value exists for every property on every element. The CSS-wide keywords are handled specially when they are the cascaded value of a property, setting the specified value as required by that keyword; see §7.3 Explicit Defaulting.
4.4. Computed Values
- values with relative units (em, ex, vh, vw) must be made absolute by multiplying with the appropriate reference size
- certain keywords (e.g., smaller, bolder) must be replaced according to their definitions
- percentages on some properties must be multiplied by a reference value (defined by the property)
- valid relative URLs must be resolved to become absolute.
See examples (f), (g) and (h) in the table below.
4.5. Used Values
As another example, a
<div> might have a computed break-before value of auto,
but acquire a used break-before value of page by propagation from its first child. [css-break-3]
Lastly, if a property does not apply to an element, it has no used value; so, for example, the flex property has no used value on elements that aren’t flex items.
4.6. Actual Values
4.7. Examples
5. Filtering
In order to find the declared values, implementations must first identify all declarations that apply to each element. A declaration applies to an element if:
- It belongs to a style sheet that currently applies to this document.
- It is not qualified by a conditional rule [CSS3-CONDITIONAL] with a false condition.
- It belongs to a style rule whose selector matches the element. [SELECT] (Taking scoping into account, if necessary.)
- It is syntactically valid: the declaration’s property is a known property name, and the declaration’s value matches the syntax for that property.
The values of the declarations that apply form, for each property on each element, a list of declared values. The next section, the cascade, prioritizes these lists.
6. Cascading
The cascade takes an unordered list of declared values for a given property on a given element, sorts them by their declaration’s precedence as determined below, and outputs a single cascaded value.
The cascade sorts declarations according to the following criteria, in descending order of priority:
- Origin and Importance
- The origin of a declaration is based on where it comes from and its importance is whether or not it is declared !important (see below). The precedence of the various origins is, in descending order:
- Transition declarations [css-transitions-1]
- Important user agent declarations
- Important user declarations
- Important author declarations
- Animation declarations [css-animations-1]
- Normal author declarations
- Normal user declarations
- Normal user agent declarations
Declarations from origins earlier in this list win over declarations from later origins.
- Specificity
- The Selectors module [SELECT] describes how to compute the specificity of a selector. Each declaration has the same specificity as the style rule it appears in. For the purpose of this step, declarations that do not belong to a style rule (such as the contents of a style attribute) are considered to have a specificity higher than any selector. The declaration with the highest specificity wins.
- Order of Appearance
- The last declaration in document order wins. For this purpose:
- Declarations from imported style sheets are ordered as if their style sheets were substituted in place of the @import rule.
- Declarations from style sheets independently linked by the originating document are treated as if they were concatenated in linking order, as determined by the host document language.
- Declarations from style attributes are ordered according to the document order of the element the style attribute appears on, and are all placed after any style sheets.
The output of the cascade is a (potentially empty) sorted list of declared values for each property on each element.
6.1. Cascading Origins
Each style rule has a cascade origin, which determines where it enters the cascade. CSS defines three core origins:
- Author Origin
- The author specifies style sheets for a source document according to the conventions of the document language. For instance, in HTML, style sheets may be included in the document or linked externally.
- User Origin
- The user may be able to specify style information for a particular document. For example, the user may specify a file that contains a style sheet or the user agent may provide an interface that generates a user style sheet (or behaves as if it did).
- User Agent Origin
- Conforming user agents must apply a default style sheet (or behave as if they did). A user agent’s default style sheet should present the elements of the document language in ways that satisfy general presentation expectations for the document language (e.g., for visual browsers, the EM element in HTML is presented using an italic font). See e.g. the HTML user agent style sheet. [HTML]
Extensions to CSS define the following additional origins:
- Animation Origin
- CSS Animations [css-animations-1] generate “virtual” rules representing their effects when running.
- Transition Origin
- Like CSS Animations, CSS Transitions [css-transitions-1] generate “virtual” rules representing their effects when running.
6.2. Important Declarations: the !important annotation
A declaration is important if it has a !important annotation as defined by [css-syntax-3], i.e. if the last two (non-whitespace, non-comment) tokens in its value are the delimiter token ! followed by the identifier token important. Important declarations from all origins take precedence over animation declarations. [css-animations-1]
User agent style sheets may also contain !important declarations. These override all author and user declarations.
6.3. Precedence of Non-CSS Presentational Hints
The UA may choose to honor presentational hints in a source document’s markup,
for example the
bgcolor attribute or
s element in [HTML].
7. Defaulting
7.1. Initial Values
Each property has an initial value, defined in the property’s definition table. If the property is not an inherited property, and the cascade does not result in a value, then the specified value of the property is its initial value.
7.2. Inheritance
Inheritance propagates property values from parent elements to their children. Inheritance follows the document tree and is not intercepted by anonymous boxes, or otherwise affected by manipulations of the box tree.
7.3. Explicit Defaulting
Several CSS-wide property values are defined below; declaring a property to have these values explicitly specifies a particular defaulting behavior. As specified in CSS Values and Units Level 3 [css-values-3], all CSS properties can accept these values.
7.3.1. Resetting a Property: the initial keyword
If the cascaded value is the initial keyword, the property’s specified value is its initial value.
7.3.2. Explicit Inheritance: the inherit keyword
If the cascaded value of a property is the inherit keyword, the property’s specified and computed values are the inherited value.
7.3.3. Erasing All Declarations: the unset keyword
8. Changes
8.1. Changes Since the 19 May 2016 Candidate Recommendation
The following non-trivial changes were made to this specification since the 19 May 2016 Candidate Recommendation:
- Clarified that custom properties are not reset by the all shorthand. (2518)
The all property is a shorthand that resets all CSS properties except direction and unicode-bidi. …
- Called out two exceptions in which importing a style sheet is different from merely inserting its contents.
If an @import rule refers to a valid stylesheet, user agents must treat the contents of the stylesheet as if they were written in place of the @import rule
- Removed any mention of scoped stylesheets, as the feature was removed from HTML. (Issue 637)
- Removed any mention of the obsolete “override” origin, originally defined by DOM Level 2 Style and later abandoned.
A Disposition of Comments is available.
8.2. Changes Since the 3 October 2013 Candidate Recommendation
The following changes were made to this specification since the 3 October 2013 Candidate Recommendation:
- Defined environment encoding of imported style sheets.
- Referenced [css-syntax-3] for syntax of !important rules.
A declaration is important if it has a !important annotation, .
- Explained reset-only sub-properties and clarified that they also get affected by a CSS-wide keyword value in the shorthand declaration.
If a shorthand is specified as one of the CSS-wide keywords [css-values-3], it sets all of its sub-properties to that keyword, . (Note that these keywords cannot be combined with other values in a single declaration, not even in a shorthand.)
A Disposition of Comments is available.
8.3. Additions Since Level 2
The following features have been added since Level 2:
Acknowledgments
David Baron, Simon Sapin, and Boris Zbarsky contributed to this specification.
Privacy and Security Considerations
The cascade process does not distinguish between same-origin and cross-origin stylesheets, enabling the content of cross-origin stylesheets to be inferred from the computed styles they apply to a document.
User preferences and UA defaults expressed via application of style rules are exposed by the cascade process, and can be inferred from the computed styles they apply to a document.
The @import rule does not apply the CORS protocol to loading cross-origin stylesheets, instead allowing them to be freely imported and applied.
The @import rule assumes that resources without
Content-Type metadata (or any same-origin file if the host document is in quirks mode) are
text/css, potentially allowing arbitrary files to be imported into the page and interpreted as CSS, and thus allowing sensitive data to be inferred from the computed styles they apply to a document.
|
https://www.w3.org/TR/css3-cascade/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
In this tutorial, Michael Washington and Alan Beasley team up to demonstrate the creation of a Windows Phone 7 video player that uses the View Model Style pattern.
The View Model Style pattern allows a programmer to create an application that has no UI (user interface). The programmer only creates a ViewModel and a Model. A designer with no programming ability at all, is then able to create the View (UI) in Microsoft Expression Blend 4 (or higher). If you are new to the View Model Style pattern, it is suggested that you read Silverlight View Model: An (Overly) Simplified Explanation for an introduction.
Creating the video player is actually very easy. The only thing that I am doing differently, is that I am not putting in code in the code behind of the View. By doing this, it allows Alan Beasley to alter the View (the UI) any way he wishes.
To create Windows Phone 7 applications, you will need to install the Developer Tools from:.
First we create a Silverlight for Windows Phone Application project in Visual Studio.
When you don't use code behind, you need to use ICommands and invoke them using behaviors. Windows Phone 7, at the time of this writing, does not support invoking ICommands, so we will use Laurent Bugnion's MVVM Light from: (and a thank you to Sacha Barber and Laurent Bugnion for reminding me to look at MVVM Light or Cinch when I got stuck).
The package contains a number of projects, unzip GalaSoft.MvvmLight.Binaries.V3.zip (or higher) (note: see for more help installing the full package)
Next, we select Add Reference...
We add the assemblies from: \Program Files\Laurent Bugnion (GalaSoft)\Mvvm Light Toolkit\Binaries\WPF4
Gestures are really important to a Windows Phone 7 application, so we next want to download Jocelyn Mae Villaraza's WPBehaviorsLibrary from.
We download the Library, open it up, and copy the GestureTriggers.cs and StateBehaviors.cs files, and place them in our project.
We also create a DelegateCommand.cs file that also supports "commanding". That file is covered in the tutorial at this link.
The Videos.cs file is a simple class to hold the videos:
public class Videos
{
public string VideoName { get; set; }
public string VideoURL { get; set; }
}
Technically we only need to create one more file, the View Model (MainViewModel.cs), however, we will also walk through some of the steps to implement a View for the Video Player, so you can get a good idea of exactly how a View is created from the View Model.
We create a file called MainViewModel.cs and enter the following code:
public class MainViewModel : INotifyPropertyChanged
{
private MediaElement MyMediaElement;
private DispatcherTimer progressTimer;
public MainViewModel()
{
// Set up the ICommands, for example:
MediaOpenedCommand = new DelegateCommand(MediaOpened, CanMediaOpened);
// Call the Model to get the collection of Videos
GetListOfVideos();
}
This sets up some global variables we will need, MyMediaElement and progressTimer, and in the constructor for the class, it sets up some ICommands that we will need. Finally, it calls the GetListOfVideos() method.
The code for the ICommands and the Properties of the View Model was taken directly from: SilverlightMVVMVideoPlay.aspx and: AdvancedMVVMVideo.aspx and is covered in detail in those articles.
For example, the MediaOpenedCommand is where the View will set the MediaElement, and store it in the MyMediaElement global variable, and start a DispatcherTimer that will track the progress of the video and update the progress controls:
#region MediaOpenedCommand
public ICommand MediaOpenedCommand { get; set; }
public void MediaOpened(object param)
{
// Play Video
MediaElement parmMediaElement = (MediaElement)param;
MyMediaElement = parmMediaElement;
this.progressTimer = new DispatcherTimer();
this.progressTimer.Interval = TimeSpan.FromSeconds(1);
this.progressTimer.Tick += new EventHandler(this.ProgressTimer_Tick);
SetCurrentPosition();
// Play the video
PlayVideo(null);
}
private bool CanMediaOpened(object param)
{
return true;
}
#endregion
The major change from the previous Silverlight Video Player articles was to remove the web service code and to simply create a collection of videos for the GetListOfVideos() method:
#region GetListOfVideos
private void GetListOfVideos()
{
SilverlightVideoList = new ObservableCollection<Videos>();
SilverlightVideoList.Add(new Videos { VideoName = "Introduction", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "RIAServices", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Editing Entities", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Showing Events", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Authentication", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "MVVM", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Validation", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "ImplicitStyles", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "RichTextBox", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Webcam", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Drop", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Grouping", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "FluidUI", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "RightMouseClick", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Printing", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "MultipagePrinting", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "OOB", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Toasts", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Window Placement", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Elevated Trust", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Custom Chrome", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Window Closing Event", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "OOB Silent Install", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "Xap Signing", VideoURL = @"" });
SilverlightVideoList.Add(new Videos { VideoName = "MEF", VideoURL = @"" });
SetVideo(SilverlightVideoList[0]);
SelectedVideoInListProperty = 0;
}
#endregion
They are exposed to the View by the SilverlightVideoList collection:
#region SilverlightVideoList
private ObservableCollection<Videos> _SilverlightVideoList;
public ObservableCollection<Videos> SilverlightVideoList
{
get { return _SilverlightVideoList; }
private set
{
if (SilverlightVideoList == value)
{
return;
}
_SilverlightVideoList = value;
this.NotifyPropertyChanged("SilverlightVideoList");
}
}
#endregion
The View (The Developer Version)
I usually create a View so that Alan Beasley can see how the View Model is expecting things to be wired-up. This View will be thrown away so I don't even attempt to make it look presentable. I create MainPage.xaml...
I open the project in Expression Blend 4 and I click on the LayoutRoot in the Objects and Timeline window, and in the Properties, I click on the New button next to DataContext...
...and I set it to the MainViewModel.
Then, when I click on the Data tab, I see all the Properties, Collections, and Commands that the View Model makes available.
I cover wiring-up the controls in: SilverlightMVVMVideoPlay.aspx and: AdvancedMVVMVideo.aspx.
However, instead of using the InvokeCommandAction behavior, I used the EventToCommand behavior from the MVVM Light Toolkit.
Because this is a Windows Phone 7 application, we wanted to add support for gestures. This is different than simply clicking on something with your finger (when using an actual phone) or your mouse (when using the Windows Phone 7 emulator). A gesture allows you to specify an action when a person moves in a specific direction.
In my sample, I wired-up gestures that will play the video if you press on the video (with your mouse pointer or your finger) and move to the right. It will stop the video if you move to the left.
Here are the steps to wire-up the FlickRight_Play gesture:
I grab an EventToCommand behavior, and drop it on the mediaElement in the Objects and Timeline window.
In the Properties for the behavior, I click the New button next to TriggerType...
I select the FlickGestureTrigger.
I select a direction in the Direction drop down.
I click Advanced options next to Command.
I then bind the Command to the PlayVideoCommand.
After everything is wired-up, you hit F5 to run the project. If you have an actual phone hooked up, select Windows Phone 7 Device, otherwise select the Emulator.
The Emulator will load.
This application displays in Landscape mode, so you will have to click the rotate button.
The application will start and the first video will play. To change the video, you have to click on the list box, hold down the mouse button and drag up or down, then click on the selection to change the video (this works a lot better when using your finger on a real phone).
You can also click on the video, hold the mouse button down, and slide to the right to play the video, and to the left to stop it.
Michael Washington:
I usually create a View so that Alan Beasley can see how the View Model is expecting things to be hooked up. However, he is free to completely remove everything from the View and start from scratch.
The thing that is amazing, is that he is able to complete the project without any code, and without needing to have me "fix up any issues". The only time the project has to come back to me, is to add new features.
For example, after the first version, he requested that we have a list of videos to choose from. I added collection to the View Model and sent it to him. All he needed to do was replace the View Model in his version of the project. No need to disturb any changes he may have already started making on the View.
Any project is greatly enhanced when each person is able to concentrate on what they do best. My part becomes really easy, I look at the current requirements for the project, and I implement the requirements using only Properties, Collections, and ICommands (and any code needed to support them).
The actual user interface and "flow" of the application is out of my hands, and I watch in fascination as it goes through sometimes drastic alterations. I sometimes throw out an idea or two but I am a "spectator" on what would normally be "my" project. I have to sometimes remind myself that the only person writing code is me! However, like a writer and director of a movie have to accept that the Actors are what the audience is eventually looking at, the UI is what the end users see and care about. No one cares about the code unless it doesn't work.
For years, applications have been held back because the programmers MUST implement certain functionality. This is like a director needing to be a character in each scene of a movie. Some directors are good Actors and this works, but for the average project, the UI is best left to the hands of those who are best at creating UIs.
Now before anyone starts designing ANYTHING for the Windows 7 Phone, they should read the Windows Phone UI Design & Interaction Guidelines. Microsoft have produced these "Guidelines" in an effort to generate a consistent "Look & Feel" for Windows 7 Phone development. While these are just "Guidelines" and not "Rules", I would suggest you read them, & have a very good reason/argument on why you would want to deviate from them. As Microsoft & the Silverlight community as a whole, would like a successful launch of the Windows 7 Phone. No one in the Silverlight community wants to see an amalgamation of radically different designs, that as a whole, looks like a hotchpotch (jumbled mixture) of different designs. Microsoft have put a lot of effort into their "Metro" styling to generate a consistent style for the Windows 7 Phone. So think twice before you go crazy with lots of "snazzy" graphics!
So what are these UI Design guidelines then? Well the main focus is a consistent "Metro" style covering: Icons, Text styles, Layout, Navigation, Interaction, Colour Schemes, etc... But I'm not going to repeat the whole guidelines, & I strongly suggest you read them for yourselves (At least until you find one spelling mistake, & I assure you there is one). What I will do, is discuss the relevant aspects of the guidelines as I go along in styling Defwebserver's (Michael's) video player. So let us make a start...
Download the un-styled player that Michael has created, & open in Blend 4. You may need to update your installations as follows:
1, Uninstall Expression Blend & re-install Microsoft Expression Blend® 4 Release Candidate (RC)
2, Install
3, Install Microsoft® Expression Blend® Add-in Preview 2 for Windows® Phone
4, Install Microsoft® Expression Blend® Software Development Kit (SDK) Preview 2 for Windows Phone
Run the application in the Phone Emulator & switch to Landscape mode by clicking on the icon in the image below.
Eventually (it takes a while) the Video Player will start & look like the image below.
Now this is "functional" but not very pretty, as Michael has only placed the necessary elements on the screen to get the application working. Ready to hand over to a designer to make the application more visually appealing (As we don't have a designer, I'll have to do...LOL)
So what have we got? A title area with the text "My Video Player", a ListBox showing "Introduction" and a load more videos that are currently hidden, but what really grabs the attention are the Player Controls in the lower central area of the screen. The Player Controls consist of a Progress Bar, Progress Display, Play, Pause, Stop & a Volume Slider. All that a basic video player needs... Currently it is not possible to have a Fast Forward or Rewind control for the video player, even though Victor Gaudioso shows how to add Fast Forward functionality to the MediaElement in an excellent video tutorial. The reason we cannot add this functionality, is because we cannot set the position of the video stream in the Windows Phone. Why I don't know, as I am not a coder! But I'm sure this issue will be addressed by Microsoft in the very near future. So for the minute we will forget Fast Forward & Rewind & just Style what we have.
The default controls that are available when developing for the Windows 7 Phone in Silverlight, are not the standard controls for developing a normal Silverlight application. Microsoft have developed these controls specifically for the phone, as obviously there will be some differences with a touch screen interface, & a normal mouse interaction interface. The most noticeable are the buttons, & by default these are rectangular & quite large. They also have an overhang area as shown in the image below.
The reason the buttons have an overhang, is to do with the "Touch Target" area, & a designer needs to consider user interaction when designing for a touch screen. I.E How fat are the user's fingers? Will the user accidentally touch the wrong button if controls are placed too close together? Microsoft recommend a minimum size of 9mm (34 pixels) for each touch target, & a spacing of 2mm (8 pixels) between touch targets that may be only 7mm (26 pixels) in size. The Z order also plays a part with regards to touch targets, & the higher (closer) a control is in the layout Z order, the more preference it will have over an overlapping control's "Touch Target" area. We can see in the image above, that the Play button's "touch target", overlaps the Pause button, & these will definitely interfere with each other in a touch screen environment. (Bad developer!!!) But it is not the developer's job to consider these aspects, unless they are also the designer! So we need to consider minimum sizes for our interactive controls, & if we need to provide a "TouchTargetOverhang" for these controls. The "TouchTargetOverhang" can be adjusted for the whole application in the App.xaml, & by default is set to 12 pixels on all sides. As shown in the image below.
The "PhoneTouchTargetOverhang" is bound to the Margin property of the "ButtonBackground" Border element in the button Control Template. But remember that this "TouchTargetOverhang" is only needed when dealing with the Minimum sizes of interactive controls. So if the button is larger than 9mm (34 pixels) a "Overhang" may not be required. User testing is only the real way of checking if your design layout works, but as none of us have a real phone at present, I suggest we all air on the side of caution regarding the size of the "Touch Target".
So let's go ahead & start styling the Player Controls & see how we get on. Firstly let us examine what Michael has given us to work with in the Objects and Timeline of Expression Blend 4.
Currently we have a Grid named "PlayerControls" & within this, a TextBlock that shows the ProgressDisplay (time duration) of the video. As well as a Canvas containing the rest of the controls. These being a Slider (volume), & 3 Buttons named Play, Pause & Stop. If you expand the tree further, you will see that Michael has attached Behaviors to the elements, that hook into the code he has written in the "View Model". And you can find out more about this in Michael's section of the article. But for me as the designer, what this means is that I could completely delete everything here, & start with a clean slate. Hooking up to his code using these behaviors (Writing absolutely no code at all!!!).
But to keep this article from getting too long, or repeating Michael's stuff, I will work with what we have here...
Select the Canvas element, right click & choose Change Layout Type > StackPanel.
Select the StackPanel, change the Orientation to Horizontal, Reset the Margins & set both the Horizontal & Vertical Alignments to Stretch.
Now select All the contents of the StackPanel, Reset the Margins & set the Horizontal & Vertical Alignments to Stretch.
The StackPanel should now look something like the image below.
Now change the order of the StackPanel in the Objects and Timeline to StopButton, PlayButton, PauseButton, Slider. As shown in the image below.
The Artboard should now change to show the controls, just like the image below.
This is not really any different to what Michael had before in the Canvas, except that it is now dynamic & will update automatically. Making edits to the controls easier...
Now select both the ProgressBar & the ProgressDisplay, & choose Group Into > Grid.
This causes the Artboard to change to something like the image below, but don't fear! We can sort this out...
What I want to do now, is divide the PlayerControls into 2 Rows. The top row containing the ProgressBar & ProgressDisplay, & the bottom row containing the Buttons & volume Slider.
So select the Grid element named "PlayerControls" & using the Selection Tool, place a Row Divider as shown in the image below.
Next select the Grid containing the ProgressBar & ProgressDisplay, & choose Group Into > Border.
Rename this Border to "ProgressInfo", Reset the Margins so it fits in only the top Row of the PlayerControl Grid & ensure the Horizontal & Vertical Alignment is set to Stretch.
Now select the StackPanel, choose Group Into > Border & rename the Border to "NavigationAndVolume". Reset the Margins so it fits in only the bottom Row of the PlayerControl Grid & ensure the Horizontal & Vertical Alignment is set to Stretch.
All being well, the Artboard should look like the image below.
This is starting to look better, & almost at the point where we can think about Styling the Controls. But there is still one more thing we need to look at regarding the ProgressBar & ProgressDisplay. Currently the ProgressDisplay is invisible, & this is because there is nothing to display (no video information at design time) & the ProgressBar has a Margin on the right hand side that stops it overwriting the ProgressDisplay. But this will not work for us at runtime, as we are unable to say for sure how much space the ProgressDisplay will need. As this will depend on the length of each video we play...
So select the Grid containing the ProgressBar & ProgressDisplay, & insert a Column Divider as shown in the image below.
Set the ProgressBar to fill only the first Grid Column & the ProgressDisplay to fit only the second Grid Column. Ensure the Margins for both elements are set to 0 (Reset) & the Horizontal & Vertical Alignments are set to Stretch. As shown in the image below.
You should notice that the ProgressBar does not fill all of the first Column (Depending on exactly where you put the Column Divider) & the same will be true for the ProgressDisplay. This is because Michael gave these elements a fixed Width (Bad developer!!!) But as before, I'm only kidding! It is not his responsibility to design these elements, only to test their functionality...
So select the ProgressBar & set the Width to Auto. Do the same for the ProgressDisplay as well...
You should notice that the ProgressDisplay defaults to a Width of 0 pixels, & this is because it currently has no input to display during design time.
The ProgressBar now fills all of the first Column of the Grid (end to end), as shown in the image below.
But to give it some breathing space, we need to set Margins for the Left & Right sides of it. And we could do this on the element, or possibly on the parent Grid element. I will choose to do a combination of the 2, & set a Margin on the Right side of the ProgressBar only. (The reason why will soon become clear...)
So with the ProgressBar selected set a Margin of 10 pixels to the Right side only. As shown in the image below.
Now for the clever bit, & to take advantage of the "PhoneTouchTargetOverhang" that is by default applied to the Buttons.
Select the parent Grid element of the ProgressBar & ProgressDisplay, & in the Advanced options select Local Resource > PhoneTouchTargetOverhang.
This will place a Margin of 12 pixels on all sides of the ProgressBar & ProgressDisplay Grid, and will update just like the Buttons will if this property is changed. The Artboard probably now looks like the image below.
(This is because the Top & Bottom Margins have effectively given the ProgressBar zero height to display in. But don't worry, as this is because the parent Grid named PlayerControls has a fixed Height).
So select the Grid named PlayerControls & set the Height to Auto. This should expand the Height of the PlayerControls enough to reveal the ProgressBar, as shown in the image below.
Now that should do for now regarding Layout, & we can tweak the Layout as we Style the elements/components.
As we worked through the previous section, you may have wondered why I added some Border elements. And now I'm going to show you why! I've read a few articles regarding Borders & Rectangles & none of them have really discussed or properly shown the difference between them. In simple terms, a Rectangle is more basic than a Border, & the only advantage a Rectangle has over a Border is that the Corner Radius for the X & Y can be different, & that's about it! But a Border can do a couple of neat things a Rectangle cannot...
Select the Border element named ProgressInfo & set a BorderThickness of 3 on all sides, as shown in the image below.
(Setting different edge thicknesses is the first thing a Rectangle cannot do...)
The Artboard has probably not changed, as the Border has no Background or BorderBrush applied to it yet.
So select the BorderBrush & click on Brush resources, as shown in the image below.
From the available "Local Brush Resources", choose the PhoneAccentBrush, as shown in the image below.
These Local Brush Resources are part of the Windows Phone Theme, & the PhoneAccentBrush is basically one of 4 Theme colours discussed in the guidelines. There are actually 5 Theme colours, but one is reserved for the Phone manufacturer. I'm going to style all my controls using this Blue Theme colour & reference this Resource every time. This way, when the Theme colour is changed, so will my controls...
The Artboard should now look like the image below.
(Using F9 hides/toggles the Blend design handles, as I have done for clarity in the image above)
Now I want to put a Radius on the Top 2 corners of the Border, but not on the Bottom 2 corners of the Border.
So with the ProgressInfo Border selected, set the CornerRadius to 20,20,0,0, as shown in the image below.
(Specifying a different radius for each corner, is the second thing a Rectangle cannot do...)
So the Artboard now looks like the image below.
Now repeat the process for the Border element named NavigationAndVolume, except set the Top BorderThickness to 0, & the CornerRadius to 0,0,20,20.
With a bit of luck you should end up with the same as the image below.
And as I said before, you can't do that with a Rectangle!!!
Now to add some more substance to the Player Controls, I want to add a semi-transparent background to the Border elements.
So select the ProgressInfo Border element, & set a Gradient Brush for the Background.
Change both the Gradient Stops to Black, set the Alpha of the first Gradient Stop to 30%, & the second Gradient Stop to 80%.
Repeat this for the Border named NavigationAndVolume, reversing the Gradient so it looks like the image below.
(I have temporarily placed a off-white Rectangle behind the Player Controls for the clarity of this image).
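If you prefer to sanity-check the result in the XAML, the two Border elements end up looking roughly like this (brushes & values as we just set them in Blend; the child content is omitted for brevity):

<Border x:Name="ProgressInfo"
        BorderThickness="3"
        BorderBrush="{StaticResource PhoneAccentBrush}"
        CornerRadius="20,20,0,0">
    <Border.Background>
        <!-- black to black, fading from 30% alpha down to 80% alpha -->
        <LinearGradientBrush StartPoint="0.5,0" EndPoint="0.5,1">
            <GradientStop Color="#4D000000" Offset="0" />
            <GradientStop Color="#CC000000" Offset="1" />
        </LinearGradientBrush>
    </Border.Background>
    <!-- the Grid containing the ProgressBar & ProgressDisplay sits in here -->
</Border>

<Border x:Name="NavigationAndVolume"
        BorderThickness="3,0,3,3"
        BorderBrush="{StaticResource PhoneAccentBrush}"
        CornerRadius="0,0,20,20">
    <!-- reversed gradient Background, then the StackPanel of Buttons & the Slider -->
</Border>

Notice the per-edge BorderThickness & the per-corner CornerRadius, the two things a Rectangle simply cannot give us.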
Select the ProgressBar, & give it a height of say 18 pixels to fatten it up a bit. Notice that it has sharp corners, & this doesn't really fit with the radiused corners of the Border elements, so lets look at fixing that now with a bit of Styling.
So with the ProgressBar still selected choose Edit Template > Edit a Copy. Name this new Style to PhoneProgressBarStyle, or something similar & hit OK.
The ProgressBar is a fairly complicated control, but we are only interested in the basic components, which are the ProgressBarTrack & the DeterminateRoot (And its child ProgressBarIndicator). Which is lucky really, as if I select the Visual State Manager (VSM) & go to the "Indeterminate" State, Expression Blend 4 throws an Exception (Blows up). For what reason I do not know, but this seems to be an issue with the Phone only... (I may cover the ProgressBar in depth, in a future tutorial)
So let's look at the ProgressBarTrack first, which is the track that the progress indicator works its way along as the video plays. We currently cannot see it, as it is covered (obscured) by the ProgressBarIndicator. So temporarily hide the ProgressBarIndicator by clicking on the Eye icon next to it, so we can work on the ProgressBarTrack. The ProgressBarTrack is rather faint as the Opacity is only set to 10%, which is probably fine most of the time, but for clarity in this tutorial I want to make it a bit clearer (more obvious). There is also the consideration that we have the video playing behind this. As well as giving thought to the environment that a mobile device may be used in, like bright sunshine. So I feel somewhat justified in my reasoning to change the "Default" setting for the ProgressBarTrack, but only real user testing will prove, or disprove this hunch...
With the ProgressBarTrack selected, change the element Opacity to 30%, & set the CornerRadius to 5, so it looks like the image below.
(The ProgressBarIndicator is hidden in the image above).
Now this looks OK, but I don't really want all the corners to have a radius. But because this is a Border element, I can change each CornerRadius individually. Great, but there is a problem... The ProgressBarIndicator is a Rectangle inside of a Grid! And as I have said before, a Rectangle cannot have each CornerRadius set differently. And as the ProgressBarIndicator tracks along the ProgressBarTrack, it needs to match exactly, or it will look rather silly. So we need to remove the Rectangle & replace it with a Border I hear you say? And yes we can, but pay attention to the fact that the ProgressBarIndicator is a ControlPart of the ProgressBar (Signified by the tiny icon next to it). And hence is mandatory within the Control Template! So we can replace it, but it MUST be named the same as the existing element. So let's get on with it!
Select the ProgressBarTrack & set the CornerRadius to 8,0,0,0, which should result in the same as the image below.
Now go to the ProgressBarIndicator Rectangle, make a note of the name & then Delete it.
Insert a Border element in its place & name this to ProgressBarIndicator, and as if by magic, a tiny icon appears next to it to signify it is a ControlPart of the Control Template.
In the Layout properties ensure the Width & Height are set to Auto, the Horizontal Alignment is set to Left & the Vertical Alignment is set to Stretch. Also ensure the Margins are set to 0.
Now remove the BorderBrush & set the Background to Template Binding > Foreground.
Finally set the CornerRadius to 8,0,0,0, (The same as the ProgressBarTrack).
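In XAML terms, the swap amounts to something like the fragment below. Only the two parts we touched are shown, the rest of the generated ProgressBar template is left exactly as Blend created it:

<!-- the faint track, now at 30% opacity with one rounded corner -->
<Border x:Name="ProgressBarTrack"
        Opacity="0.3"
        CornerRadius="8,0,0,0" />

<!-- the original Rectangle is gone; a Border with the same ControlPart name takes its place -->
<Grid x:Name="DeterminateRoot">
    <Border x:Name="ProgressBarIndicator"
            HorizontalAlignment="Left"
            Background="{TemplateBinding Foreground}"
            CornerRadius="8,0,0,0" />
</Grid>

Because the new Border is named ProgressBarIndicator, the control picks it up as the indicator part & sizes it along the track exactly as it did the old Rectangle.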
That is it for the ProgressBar, unhide any hidden elements, & come out of the Control Template so we can look at the ProgressDisplay next...
Now we obviously cannot see the content of the ProgressDisplay at design time, so we will design what needs to be done blind & keep our fingers crossed... The ProgressDisplay is just text & will probably not want to be changed along with the Theme of the Phone, but the Theme comes with some predefined Fonts & Font Sizes. So I will Bind to these properties to demonstrate the capability...
Select the ProgressDisplay element & in the Text properties, click on the Advanced options & choose Local Resource > PhoneFontSizeNormal.
Now in the Advanced options of the Font Family, choose Local Resource > PhoneFontFamilySemiBold.
Finally run the application (F5) and review the results, as shown in the image below.
The ProgressBar is looking OK & as expected, but the ProgressDisplay is not working properly, as it does not have enough room to display... This is because we cannot say how much space the ProgressDisplay will need each time it is used, & we need to allocate it a variable size, rather than the "Default" proportional size it currently has within the parent Grid. You may have slightly different results to the current screenshot, as it would have depended on where you placed the initial Grid Column Divider.
So select the parent Grid element & change the "Star" size icon, to "Auto" sized by clicking on the "Padlock" icon twice, as shown in the image below.
This will not change the position of the Grid Divider in the Artboard, as Blend will place a Minimum Width for this Auto Sized Grid Column.
So go to the XAML & remove/delete the MinWidth declaration (highlighted in Blue in the image below).
The Grid Divider should now snap to the right hand end of the Grid, but will automatically grow when it has content to display.
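The column definitions in the parent Grid should end up reading something like this (the MinWidth Blend adds is the part to delete):

<Grid.ColumnDefinitions>
    <!-- the ProgressBar stretches to fill whatever space is left -->
    <ColumnDefinition Width="*" />
    <!-- the ProgressDisplay sizes itself to its text at runtime; no MinWidth -->
    <ColumnDefinition Width="Auto" />
</Grid.ColumnDefinitions>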
The last thing to do with the ProgressBar & ProgressDisplay, is ensure they are referencing the PhoneAccentBrush, & therefore the Theme.
So select the ProgressBar & set the Foreground colour to the PhoneAccentBrush, & do the same for the ProgressDisplay, as shown in the image below
Run the application again (F5), just to check everything went to plan...
Looks OK to me, but not perfect! As the ProgressBar is a little taller than the ProgressDisplay text, but let us move on to the Buttons & Slider...
Now the first thing we should do is get all the elements into view, as the volume Slider is partially obscured. This is because the Grid containing all the elements is not auto-sizing properly. The Width of the PlayerControls Grid is set to Stretch, but we have Margins applied to the Left & Right sides that are restraining the Width. Hence we have a fixed size for the PlayerControls, as the overall width of the Phone screen is always going to be 800 pixels. (In Landscape mode at least...)
So select the PlayerControls Grid, change the Horizontal Alignment to Center & Reset the Margins, as shown in the image below.
Now set the Vertical Alignment to Bottom & apply a Margin of about 32 to the Bottom only, as shown in the image below.
That should size & position the Player Controls as shown in the image below.
The volume Slider is now fully visible, even if it is butting up against the edge of the Border, but all in good time... (it is actually only a Refresh issue with Blend 4, & not uncommon - Unfortunately!)
Select the StopButton element & choose Edit Template > Edit a Copy.
Give this new Style a name like "PhoneButtonStyleStop" & hit OK.
Now if we look at the Objects and Timeline, as shown in the image below.
We can see that this button Control Template is a little different to a normal Silverlight button. For a start, we have no ContentPresenter & instead have a foregroundContainer, which I can tell you holds the ContentPresenter. The parent Border element (ButtonBackground) of the foregroundContainer, represents the extremities (limits) of the visual elements that make up the button. And the parent Grid represents the whole of the "TouchTarget" that makes up the button. Have a look at the Grid element, & notice that it has a Background Brush applied to it with 0% Alpha. The Grid needs this "transparent" Brush applied to it, so that it is "IsHitTestVisible" & a "TouchTarget" area for user interaction. The ButtonBackground has a Margin applied to it, which is bound to the "TouchTargetOverhang" Resource, & is what is currently providing the spacing/padding between the visual elements of the button & the Border of the Player Controls.
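Stripped right down, the generated phone Button template has a shape along these lines. Treat it as a simplified sketch: Blend's actual copy carries extra attributes & all the Visual States, which I have left out:

<ControlTemplate TargetType="Button">
    <!-- the outer Grid IS the Touch Target; the near-transparent Background keeps it hit-testable -->
    <Grid Background="Transparent">
        <!-- the visible part of the button, inset by the overhang on all sides -->
        <Border x:Name="ButtonBackground"
                Margin="{StaticResource PhoneTouchTargetOverhang}"
                Background="{TemplateBinding Background}"
                BorderBrush="{TemplateBinding BorderBrush}"
                BorderThickness="{TemplateBinding BorderThickness}">
            <!-- plays the role the ContentPresenter would normally play -->
            <ContentControl x:Name="foregroundContainer"
                            Content="{TemplateBinding Content}"
                            ContentTemplate="{TemplateBinding ContentTemplate}"
                            Foreground="{TemplateBinding Foreground}"
                            HorizontalAlignment="Center"
                            VerticalAlignment="Center" />
        </Border>
    </Grid>
</ControlTemplate>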
Now I don't want to use Text in the StopButton, as it is a little vulgar, & would require translation for each language. (Stuff that!) So instead I will use the globally accepted symbol/icon for "Stop" with regards to media. A Square! And there are a couple of ways I can do this... Both will use a Rectangle element, but it is the Layout that will differ... The most obvious method, would be to just define a fixed Width & Height for the Rectangle, but this would not Scale with the size of the button. So instead, I will set the Rectangle to be the full size of the visible area of the button (not including the "TouchTargetOverhang") & apply a ScaleTransform to the Rectangle to make it a proportional size of the button. This however, will mean that if I change the proportions of the button, the "Square" may deform to become a "Rectangle", but I can live with that, as I cannot see that I would want to radically change the button proportions; I am more likely to just change the size of the button.
Firstly, let us get rid of the "ContentPresenter" so to speak, by collapsing the foregroundContainer. I don't want to delete it, as my boss (fictitious) may disagree with my decision to use a symbol/icon & ask for the text back.
So select the foregroundContainer & in the Properties tab, set the Visibility to Collapsed.
Now before we go ahead & insert a Rectangle into the Border element named ButtonBackground, we need to remember that a Border can only have one child element, & this is currently the foregroundContainer. And as I want the Rectangle to Scale in proportion to the ButtonBackground, the Rectangle HAS to be inside the Border element.
So still with the foregroundContainer element selected, choose Group Into > Grid. (Ensure the Margins are Reset & Alignments are set to Stretch)
(Check the Visibility of the Grid is Visible, & the ContentPresenter (foregroundContainer) is still Collapsed).
Select the newly created Grid, insert a Rectangle with 0 Margins, set the Alignments to Stretch & Auto-sized for Width & Height.
Rename the Rectangle to "StopIcon" & set a Scale Transform (RenderTransform) of 0.35 for the X axis, & 0.6 for the Y axis.
(Now as the button size is changed (Scaled), so will the StopIcon).
The StopIcon may not be a perfect "Square" but don't worry about this, as we have yet to determine the exact size of the overall button!
Now to ensure the StopIcon always matches the colour of the Border, select the Fill & choose Template Binding > BorderBrush.
Remove the Stroke of the StopIcon, & the StopButton should look like the image below.
Now select the ButtonBackground & set a CornerRadius of 0,0,0,8, as shown in the image below.
Finally (Using the "BreadCrumbBar"), go to the Style of the StopButton.
Change the BorderBrush to the Brush Resource named PhoneAccentBrush, as shown in the image below.
And as long as everything went OK, you should have the same as the image below.
(The StopButton colour is now Bound to the Theme of the Phone).
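For reference, the guts of the finished StopButton template come out roughly as below. Again this is a trimmed sketch, not the full generated markup, & the RenderTransformOrigin of 0.5,0.5 is my assumption so the icon scales about its centre (which is how Blend behaves when you scale from the Transform pane):

<Border x:Name="ButtonBackground"
        Margin="{StaticResource PhoneTouchTargetOverhang}"
        Background="{TemplateBinding Background}"
        BorderBrush="{TemplateBinding BorderBrush}"
        BorderThickness="{TemplateBinding BorderThickness}"
        CornerRadius="0,0,0,8">
    <Grid>
        <!-- the original text content, kept but hidden in case the boss wants it back -->
        <ContentControl x:Name="foregroundContainer" Visibility="Collapsed" />
        <!-- the "Stop" square, coloured by whatever the BorderBrush (Theme) is -->
        <Rectangle x:Name="StopIcon"
                   Fill="{TemplateBinding BorderBrush}"
                   RenderTransformOrigin="0.5,0.5">
            <Rectangle.RenderTransform>
                <CompositeTransform ScaleX="0.35" ScaleY="0.6" />
            </Rectangle.RenderTransform>
        </Rectangle>
    </Grid>
</Border>

<!-- & on the Style itself, so the button follows the Theme -->
<Setter Property="BorderBrush" Value="{StaticResource PhoneAccentBrush}" />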
Next we should consider how large we want our buttons to be, & as I said before, the guidelines recommend a minimum size of 9mm (34 pixels) for a "TouchTarget". But we shouldn't take this size to be the "standard" size, it is a "minimum" size! And this should probably only be relevant when space is at a premium, due to having a lot of "TouchTargets" all on the screen at once. I.E When displaying a keyboard interface. Consideration should also be given to the audience in terms of accessibility, and who will be able to use this interface. I am lucky enough to be able bodied (sort of), but would not ever want to unnecessarily exclude a group of users from using a product or interface. For this reason, I believe the controls (buttons) should be as large as possible without compromising the design, look & style of the Player Controls.
So select the StopButton, set the Width to 120 pixels & the Height to 80 pixels.
Repeat the process for the PlayButton & the PauseButton to make them all the same size.
Now go through the process as you did with the StopButton, to Style the Play & Pause buttons. But do not put a radius on any of the corners of the Border element! Alternatively you can download the Buttons Styles from here. (The Play & Pause Icons are also included...)
(Image above shows Stop, Play & Pause Buttons all Styled).
You have probably noticed that the Volume Slider is radically different from the standard Silverlight Slider, & the Thumb component is somewhat larger. This obviously again is because of the "TouchTarget" considerations for the Phone, so that the Thumb is large enough to be easily controlled. So while we can make it any size we like, keep in mind that the interaction is of major importance when designing for the Phone.
Right click & choose Group Into > Border, as shown in the image below.
Ensure the Margins of the Slider are Reset, select the Border element & rename it to "VolumeSlider".
Now click on the Advanced options & choose Local Resource > PhoneDefaultBorderThickness.
This will set the BorderThickness to 3 on all sides, & automatically update along with the button BorderThicknesses.
(We could have also set the outer Border elements to this Resource, which we can do later, but we may want to try out different thicknesses for these elements yet...)
Set the CornerRadius to 0,0,8,0, as shown in the image below.
Now set the BorderBrush to the PhoneAccentBrush Resource, & match the colour of the buttons (& Theme).
Next set the Margins to the Local Resource > PhoneTouchTargetOverhang, as shown in the image below.
Finally set the Width to 160 pixels & the Height to Auto, as shown in the image below.
The Player Controls should now look like the image below.
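Which, give or take attribute ordering, should leave the XAML for the new Border looking something like this:

<Border x:Name="VolumeSlider"
        Width="160"
        BorderThickness="{StaticResource PhoneDefaultBorderThickness}"
        BorderBrush="{StaticResource PhoneAccentBrush}"
        Margin="{StaticResource PhoneTouchTargetOverhang}"
        CornerRadius="0,0,8,0">
    <!-- the existing volume Slider sits in here, untouched for now -->
</Border>

Everything that can sensibly reference a Theme resource now does, so a Theme change sweeps through the whole of the Player Controls in one go.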
Next to do some work on the Slider itself to bring it in-line with the rest of the Player Controls.
So select the Slider, right click & choose Edit Template > Edit a Copy.
Give this new Style a name like "PhoneSliderStyleVolume" & hit OK.
In the Objects and Timeline expand the HorizontalTemplate to show all the elements, as shown in the image below.
Select the HorizontalTrack & set the Visibility to Collapsed.
Now select the HorizontalThumb & set the Height to Auto, as shown in the image below.
In the Artboard the Thumb should now be the full Height of the parent Border element, & the full Width of the portion it occupies in the parent Sub-Divided Grid.
(Notice that the portion of the Sub-Divided Grid that the Thumb occupies is Locked, & therefore the Thumb Width is set by the parent Grid).
With the HorizontalThumb still selected, choose Edit Template > Edit Current to access the visual elements of the HorizontalThumb.
Select the ButtonBackground & set a Margin of 8 pixels for the Left & Right sides.
This will make the HorizontalThumb "TouchTarget" smaller & harder to select, which we can fix by setting a Fill/Brush for the Background of the parent Grid with 0% Alpha.
So we have narrowed down the HorizontalThumb & hopefully made it a little more elegant, but this will mean that the Thumb will not reach the ends of the Slider. Run the application & see for yourself... We also may have some slightly strange behaviour when moving the Slider to 100% (I do). We can fix the problem of the Slider not reaching the ends, by setting Margins of -8 to the Left & Right sides of the Slider within the parent Border, but the strange behaviour still remains... (I can't say for sure, that you will also be experiencing this issue). But there is another way of slimming down the HorizontalThumb that keeps the larger "TouchTarget" area & doesn't result in strange Slider behaviour.
So remove all the Margins you just applied, & the Background Brush of the Grid element.
Come out of the HorizontalThumb Template & select the HorizontalTemplate Grid, as shown in the image below.
Now go to the XAML & change the second Column Width to 18 pixels, as shown in the image below.
(The HorizontalThumb is now slimmed down & more elegant, but also harder to select).
So go back into the HorizontalThumb Template, select the Grid element & insert a new Rectangle.
Rename this Rectangle to "ThumbTouchTarget" & set the Alignments to Stretch with 0 Margins Top & Bottom, but -8 for Left & Right.
Now remove the Stroke, set a Fill of any colour & make the Alpha 0%.
This will make the "TouchTarget" area of the HorizontalThumb larger, but only on the left hand side of it. As the Z order of the HorizontalTemplate of the Slider Template, is not setup quite as we require.
So come back out of the Thumb Template & in the HorizontalTemplate, select the HorizontalThumb & drag it down below the HorizontalTrackLargeChangeIncreaseRepeatButton, as shown in the image below.
This changes the Z order of the HorizontalThumb, so it is effectively on top, & selectable over all other elements in the HorizontalTemplate. This may not be the cleanest of solutions, but slimming down the Thumb, maintaining the "TouchTarget" & while avoiding any strange behaviour, is the best fix I can think of at this current time...
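As a sketch, the Thumb template now carries an oversized invisible hit area hanging 8 pixels out of each side, while the visible parts stay slim:

<!-- inside the HorizontalThumb template -->
<Grid>
    <!-- invisible, but still hit-testable, & 8px wider than the visuals on each side -->
    <Rectangle x:Name="ThumbTouchTarget"
               Margin="-8,0,-8,0"
               HorizontalAlignment="Stretch"
               VerticalAlignment="Stretch"
               Fill="#00000000" />
    <!-- the ButtonBackground / visible thumb elements follow here, as Blend generated them -->
</Grid>

With the HorizontalThumb dragged below the HorizontalTrackLargeChangeIncreaseRepeatButton in the Objects and Timeline, it sits last in the HorizontalTemplate & therefore wins the hit-test over the overlapping RepeatButton.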
Now go to the Style of the Slider, & set the Foreground Brush to the PhoneAccentBrush. (Just like we have with the other controls).
We are almost finished with the Slider, but now we need to make it a little more obvious that the Slider is there to control the Volume. And I plan to do that in 2 ways: firstly by placing a semi-transparent layer in the lower section of the Volume Slider, & secondly by introducing a Speaker Icon in this section.
So back in the Slider Control Template select the HorizontalTemplate, & drag out a Rectangle to fill the first portion of the Sub-Divided Grid, as shown in the image below.
Ensure the Margins are Reset, the Alignments are set to Stretch & the Stroke is removed.
Then set the Fill to Template Binding > Foreground, as shown in the image below.
Now move the Rectangle back in the Z order, by dragging it up & behind the HorizontalTrack, as shown in the image below.
Finally change the element Opacity to 50%, making the Artboard look the same as the image below.
Now download the Speaker Icon from here & add it to the project.
Open the Speaker.xaml page, select & Copy the Path element & go back to the MainPage.xaml.
Back in the Slider Template with the HorizontalTemplate selected, Paste in the Path element.
Rename the Path to "SpeakerIcon", Reset the Margins & Template Bind the Fill to the Foreground of the Style.
Next in the Advanced options of the Width, Template Bind to the Value of the Slider, as shown in the image below.
Do exactly the same for the Height, Template Binding it again to the value of the Slider.
This should set the Width & Height to both 0.5, as shown in the image below.
Now to Scale-up the SpeakerIcon by setting a ScaleTransform of 40 for both the X & Y axis.
In the Artboard the SpeakerIcon probably looks like the image below, & is not in the correct Sub-Divided portion of the HorizontalTemplate Grid.
Firstly move the SpeakerIcon back in the Z order, by placing it in front of the Rectangle, as shown in the image below.
(There is no reason other than good housekeeping, for me changing the Z order. I just prefer the selectable/interactive items in-front).
Finally move the SpeakerIcon, so it is in the first Column of the HorizontalTemplate Grid, as shown in the image below.
(Ensure the Margins are Reset).
Some experienced programmers amongst you may be saying that I should not have Bound the Width & Height of the SpeakerIcon to the Value of the Slider, as this affects the performance of the application. As it will cause a redraw of the Layout every time the Volume is changed. And you may be right, but firstly this is a demo application, & secondly it was the only way I could get it to work. Setting a fixed Width & Height while Binding a Scale Transform to the SpeakerIcon does not work, & neither did my limited attempts to use a Custom Expression.
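For completeness, here is roughly what the SpeakerIcon ends up as inside the Slider template. The Path Data shown is just a stand-in square (the real geometry comes from the downloaded Speaker.xaml), & the Stretch is my assumption so the geometry follows the bound Width & Height:

<Path x:Name="SpeakerIcon"
      Data="M0,0 L1,0 L1,1 L0,1 Z"
      Stretch="Fill"
      Fill="{TemplateBinding Foreground}"
      Width="{TemplateBinding Value}"
      Height="{TemplateBinding Value}">
    <!-- the Value is only 0 to 1, so scale the whole thing up by 40 -->
    <Path.RenderTransform>
        <CompositeTransform ScaleX="40" ScaleY="40" />
    </Path.RenderTransform>
</Path>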
The last thing I want to do with the Volume Slider is change the initial (Starting) Volume, which is currently set to 0.5 (Half Volume). But we cannot do this in the Volume Slider, as the "Value" of the Slider, is Bound to the Volume property of the MediaElement.
So go to the MediaElement, & in the Media section of the Properties tab, change the Volume to 0.8, as shown in the image below.
Run the application to review the results of our Player Control Styling, as shown in the image below.
Now that looks acceptable & probably OK, as a simple clean interface for the Windows Phone. It is not exactly "in keeping" with the default style of the Windows 7 Phone, but these controls are part of the Video Player, & not the Phone itself. So I feel I have the liberty to style them a little & add a bit of individuality & identity to the Video Player.
Now to consider how we choose & select a video to be played, as well as navigating between the player & the possible video selections. Microsoft are promoting the use of something called a "Pivot Control" (As well as a "Panorama" application) both for the Web & on the Windows 7 Phone. Basically it is a way of navigating large, or possibly huge, amounts of data in a very visual manner. The main navigation on the Phone is intended to utilise the Pivot Control, & allow users to quickly & easily drill down/into the data to find the required information. The operation of a Pivot Control can be visualised as pages on a vertical cylinder & a central Pivot, allowing the user to spin the cylinder to go from page to page. While the number of pages could be almost unlimited, it is recommended that a minimum number should be used (let us say 4 should be your design maximum), & the pages will loop. The user will be able to "Flick" using a touch gesture between the pages, & as such the "Flick gesture" should/must not be overridden in a Pivot Control.
This is a very simple demo application of a video/media player, & I will not be implementing a true fully featured Pivot Control. Rather a "Pivot Control Lite", that will simply change between the Video Player, & the Playlist. Michael has preloaded the ListBox (Playlist) with a selection of videos to choose from, & I'm very happy to use this rather than start from scratch. So let us start looking at the ListBox (Playlist) to see what we have got...
Select the ListBox & drag it out, to enlarge it a little, as shown in the image below.
We can see that there is a list of titles (available video streams), & the first thing you may say, if you have read the guidelines... Is that all the titles start with a capital, & the guidelines state that "List titles & items" should not be capitalised. And a full list of what should, & shouldn't be capitalised is available in the guidelines. It is difficult to say what to do, with lists that are generated dynamically from streamed content. As there is no 100% foolproof method of parsing these to ensure correct capitalisation (or not), & reformatting them all to lowercase would be wrong when dealing with names & acronyms... So I will leave that ball in Microsoft's court, await direction, & leave the list items as they are.
Now before I start setting up the "Pivot Control", I need to remove/correct a little error with the LayoutRoot Grid.
So select the LayoutRoot element, & with the manipulation handles visible, look in the top left corner. As shown in the image below.
There is a Grid Row Divider in the image above, that is set to Auto, & therefore Collapsed.
Select the Row Divider in the Artboard, & drag it down to the bottom of the title area. Or go to the XAML & set a Height of 64, as shown in the image below.
If you dragged the Row Divider down, change the Row from "Auto" to "Fixed" size by clicking on the icon next to it twice. Next in the Advanced properties of the Layout section, change the Height to 64.
(It's definitely easier to edit the XAML directly...)
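If you do go the XAML route, the two RowDefinitions in the LayoutRoot should read something like this (the second, star-sized Row is the one the ContentGrid & PlayList will live in):

<Grid.RowDefinitions>
    <!-- the title area -->
    <RowDefinition Height="64" />
    <!-- everything else: Player / PlayList -->
    <RowDefinition Height="*" />
</Grid.RowDefinitions>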
Now select the Grid named TitleGrid & move it up into the first Row of the LayoutRoot, Reset the Margins & ensure the Alignments are set to Stretch.
Next select the Grid named ContentGrid & Reset the Margins, so it occupies the second Row of the LayoutRoot.
That should fix the general Layout now, so we can make a start on the Playlist ListBox...
So select the ListBox, rename it to "PlayList" & drag it onto the LayoutRoot. So it looks like the image below.
Now position the PlayList ListBox, so it completely fills the second Row of the LayoutRoot, as shown in the image below.
Next to think about what we can do to style the PlayList ListBox...
But before we start, I would like to consider how the Phone ListBox differs from the standard ListBox. For a start the ScrollBar is a lot smaller, & not really selectable. And it doesn't need to be, as Microsoft have integrated the "Flick" gesture into the Phone ListBox to scroll Up & Down. Thus saving us the trouble of worrying about adding "Flick" gestures to the ListBox (Thanks Microsoft!). The next thing to consider with a "Touch" interface, is that there is no MouseOver State for this ListBox. And why would there be? The touch screen has no knowledge of a finger hovering over the screen, & this should be remembered when trying to use Behaviors as well. As some Behaviors you may want to use (even just in the emulator) will not work with MouseEnter & MouseLeave as Event Triggers. The Guidelines also state that "Dynamic elements should avoid using Drop Shadows" & currently I can see that Michael has been playing & added a Drop Shadow to the ListBox items (Bad Developer!) LOL
But enough boring guidelines stuff, let's start Styling the ListBox and see what we can do to make it a little more appealing. Probably not a lot if we want to stay within the guidelines...
So select the PlayList ListBox, right click & choose Edit Additional Templates > Edit Generated Items > Edit Current.
In the Objects and Timeline expand the TextBlock, select the DropShadowEffect & delete it.
Now select the TextBlock & in the Text properties, Reset the Font Size. As this is probably the last place to set the Font Size. (Bad Developer!) LOL
(Michael was just messing around & setting up a workable Font Size, but it should not really be set here - Basically he was expecting me to start from scratch, & not use his existing test sample).
In the Artboard, the Text has now reduced in size, but we will get to this in a moment...
So come out of the Edit Generated Items Template.
Now right click, & choose Edit Additional Templates > Edit Generated Item Container > Edit a Copy.
Give this new Style a name like PhoneListBoxItemStyle, & hit OK.
Now we have generated a Style for the ListBox Items, go to the Style of the Item Container Style using the BreadCrumbBar.
In the Style go to the Text properties, & set the Font Size to Local Resources > PhoneFontSizeLarge.
This is where the Text Size should be set & modified. (if needed...)
Now go back into the Generated Item Container Template (ItemContainerStyle).
Select the State Manager to see all the possible States for the ListBox.
What you should notice, is that this ListBox has all the States that you would find in a normal ListBox, but as I said earlier, the MouseOver State is not useable on a "Touch" interface, be that the Phone, or possibly a touch interface monitor for Windows itself. So even though we have certain options available to use, we ALWAYS need to consider the user, & the intended environment for the application. Key information & interaction should never be used in areas that may not be fully supported by all environments. For example a "Double Tap" Touch gesture may not be supported in all situations (such as here) meaning we are using a single click to denote selection. This means that "Focused" is not really a valid or useable State, as we cannot have something in "Focus" that is not also "Selected". It is easy to take for granted the interaction we are used to in normal PC situations. As "double click" & keyboard interaction is like second nature to most of us, & it will take us all a while to get used to the different considerations of a "Touch Screen" interface, especially when we currently only have an emulator to work with...
So the only 2 States we can really work with, are the "Selected" & "Disabled" States which are pretty much already set for us. We shouldn't really play with the "LayoutStates" (Loaded & Unloaded), as this will probably interfere & detract with the "Pivot Control", as well as make this tutorial even longer... So let us do only a tiny tweak & move on. (Although I may do more in the finished project download, but only things I have covered in my previous tutorial: ListBox Additional Templates).
In the "Base" State click on the small black circle next to the "Selected" State, to show this State (signified by the Eye icon), but not set any Keyframes.
In the Objects and Timeline select the Highlight element & set a Top Margin of 8 to centre the blue "Highlight" band better on the text.
That is all I want to do here, & there is obviously more that could be done. But keep in mind what I said before about a consistent "Look & Feel" with regards the "Metro" styling. And this tutorial is definitely long enough, with too much content here...
To make the Playlist "slide-in" from the Right, & the Player "slide-out" to the Left we will use Visual States.
(So come out of any Templates that you are currently in, ensuring you can see the LayoutRoot & its 3 child elements (TitleGrid, ContentGrid & Playlist)).
Start by renaming the ContentGrid to "Player".
Now expand the Player Grid to reveal the MediaElement, & the 3 "EventToCommand's" Michael has setup.
Now to test the "Flick" gestures that Michael has added in the code "View Model", he has wired them up to control the Play & Stop functions of the Video Player. And to ensure they work on the screen area displaying the video, he has attached them to the MediaElement. Now I don't want these "Flick" gestures to control the video, as this overrides the function of the "Pivot Control" & a big "No No" with regards the Guidelines. So delete the FlickRight_Play & FlickLeft_Stop EventToCommand's, but do not delete the SetMediaOpened EventToCommand, as we still want this one.
It is worth remembering this "Behavior" is something that the designer is able to set, & should be setting. The developer has given you a framework of hooks that you can bind to. And as a designer can do it in Blend, it is their responsibility to use what the developer has provided & link things up using "Point & Click". So if you are a new designer, investigate how the UI elements are "wired" by looking at these Behaviors, as they are very powerful & the life blood of Blend for a designer. As I edit this, Michael has just been telling me about a new powerful Behavior/feature: DataStore.
So remember this designers: Your developer will/should, be very happy to help you through the more technical aspects of wiring things up, & the only condition they will/should apply is: Don't ask them the same questions again & again! - LEARN!!! (P.S. To the coders: Be kind to the ignorant, but do what you will with the stupid...(Except me!) LOL)
Now select the LayoutRoot element, go to the Assets tab & select the Behaviors.
Next select the "GoToNextState" Behavior & drag it onto the LayoutRoot.
While we are here, drag a "GoToPreviousState" Behaviour onto the LayoutRoot as well.
By adding these Behaviors to the LayoutRoot, the Behaviors will be Triggered/Fired when the LayoutRoot (Whole Screen) is interacted with. And now we need to set some States for these Behaviors to go between/interact with...
(A true "Pivot Control" should be able to loop, & not an easy task. So will not be done here, just a basic page switch, & far more like a "Panorama" as discussed in the Guidelines. As I understand it, a "true" predefined "Pivot Control" should hopefully be coming).
So go to the States tab & click on the "Add state group" icon, as shown in the image below.
This will add a "VisualStateGroup" to the project, which we will call "ShowOrHidePlayList"
Now click on the "Add state" icon twice to add 2 States to this StateGroup.
Rename these 2 new States to "ShowPlayer" & "ShowPlayList", as shown in the image below.
Now ensure you are in the "Base" State, & in the Objects and Timeline select the PlayList ListBox.
In the Transform section of the Properties tab, set a Translate of 800 for the X axis.
In the Artboard, this will move the PlayList ListBox along the X axis, just like the image below.
Now in the States Manager (VSM) with the PlayList ListBox selected, choose the "ShowPlayList" State & change the Transform Translate X axis back to 0.
Next select the "Player" Grid in the Objects and Timeline, & set a Transform Translate of -800 on the X axis.
Now set a Duration for the Transition on the "ShowOrHidePlayList" State Group of say 1 second, & any EasingFunction you fancy. I have used a "Back InOut", as shown in the image below.
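Behind the scenes Blend writes all of this out as a VisualStateGroup on the LayoutRoot. A trimmed sketch of what to expect is below; Blend normally applies the Translate as a CompositeTransform, so the exact property paths & extra attributes it generates may differ slightly from mine:

<VisualStateManager.VisualStateGroups>
    <VisualStateGroup x:Name="ShowOrHidePlayList">
        <VisualStateGroup.Transitions>
            <!-- 1 second transition with a Back InOut ease -->
            <VisualTransition GeneratedDuration="0:0:1">
                <VisualTransition.GeneratedEasingFunction>
                    <BackEase EasingMode="EaseInOut" />
                </VisualTransition.GeneratedEasingFunction>
            </VisualTransition>
        </VisualStateGroup.Transitions>
        <VisualState x:Name="ShowPlayer" />
        <VisualState x:Name="ShowPlayList">
            <Storyboard>
                <!-- the PlayList slides in from the right (800 -> 0)... -->
                <DoubleAnimation Duration="0" To="0"
                                 Storyboard.TargetName="PlayList"
                                 Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.TranslateX)" />
                <!-- ...while the Player slides out to the left (0 -> -800) -->
                <DoubleAnimation Duration="0" To="-800"
                                 Storyboard.TargetName="Player"
                                 Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.TranslateX)" />
            </Storyboard>
        </VisualState>
    </VisualStateGroup>
</VisualStateManager.VisualStateGroups>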
Now you can Run (F5) your application, to review the results & hopefully see the Player & PlayList slide in & out using mouse clicks. But we do not want to use mouse clicks for this operation, we want "Flick" gestures to switch between screens/pages.
(Note: We are not actually firing the triggers in a toggle (alternating) manner, just looping round & round - I Think...)
So select the "GoToNextState" Behavior we set on the LayoutRoot, & in the Properties tab find the TriggerType & select "New"
In the Popup window, select the "FlickGestureTrigger" & hit OK.
Now in the Trigger pane of the Properties tab, change the Direction to "Left", as shown in the image below.
Now select the "GoToPreviousState" & repeat the whole process, but this time set the direction to "Right".
When the application is now Run, only a "Flick" Left or Right will change between the Player, or the PlayList. - Sorted!!!
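In markup terms the two Behaviors end up attached to the LayoutRoot along these lines. I can't promise the exact xmlns prefixes, as they depend on the assemblies the Phone add-in references when you drag the items on, so treat the prefixes below as placeholders:

<i:Interaction.Triggers>
    <!-- Flick Left -> move forward a State (show the PlayList) -->
    <gestures:FlickGestureTrigger Direction="Left">
        <behaviors:GoToNextState />
    </gestures:FlickGestureTrigger>
    <!-- Flick Right -> move back a State (show the Player) -->
    <gestures:FlickGestureTrigger Direction="Right">
        <behaviors:GoToPreviousState />
    </gestures:FlickGestureTrigger>
</i:Interaction.Triggers>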
But this is not the whole "Pivot Control" finished, as we need to think about the title area of the Player & PlayList. This should change along with Player & the PlayList, to show which section/page of the "Pivot Control" we are on.
So select the TitleGrid Grid & look at the Background colour. Notice it is a dark Grey & this was set (I presume) by Michael while setting up the general components of the Video Player. While this may be fine, it gives me a chance to discuss another consideration when designing for the Windows 7 Phone. And that is battery life! As obviously displaying anything other than Black will use up precious battery capacity, & using White will obviously/probably be the worst. Also this is a Video Player which will probably chew the battery faster than anything else... So my personal opinion, is to set the Title area to Black to reduce battery consumption, & not to detract from the focus of the application, which is to play videos... (Although I reserve the right to change my mind regarding the background colour!)
So with the TitleGrid still selected, change the Background to the Brush resource PhoneBackgroundBrush. (Ensure you are back in the "Base" State).
Now expand the TitleGrid, select the TextBlockListTitle & set the Foreground Brush to PhoneForegroundBrush.
Now change the TextBlockListTitle Text to "video player" all lowercase, as the Guidelines state that "Page titles" should be all lowercase. However in our situation it is not as cut & dried as that. As the Guidelines also state that "Application titles" should be all Uppercase. But we don't really have enough room in our Video Player for both an Application title & a Page title. As it would compromise the viewing area of the video display, & this is definitely not what we want to do... So I am making the decision that we will view our "Title text" as "Page titles" & hence all lowercase.
Next go to the Font Size & set this to Local Resource > PhoneFontSizeLarge.
And set the Font to Local Resource > PhoneFontFamilyNormal.
The Font is not actually part of the Theme, so changing the Theme will do nothing to the Font, but as long as the Font is tied/bound to the PhoneFontFamily Resource, we can change it all in one place.
Now ensure the TextBlockListTitle is Left & Top Aligned, with a Margin of 20 on the Left side.
Next rename TextBlockListTitle to "TextBlockPlayerTitle", & then duplicate it using Copy & Paste.
Rename the duplicated TextBlock to "TextBlockPlayListTitle" & change the text to "video playlist". (All lowercase).
And in the Base State, set a Translate of 800 on the X axis for the TextBlockPlayListTitle.
In the ShowPlayList State, change the Translate back to 0, select the "TextBlockPlayerTitle" & set a Translate on the X axis of -800.
This will make the page titles change, along with the Player & PlayList. Which is fine, but we can do more... Like showing which Video/Stream is playing while we are in the Player view. As well as providing additional/alternative navigation to the Flick gestures between the Player & Playlist screens/pages. The guidelines show that it is a good idea to provide a hint of the next page, by having the edge of the page visible on the right hand side of the screen (Panorama). But that is not really appropriate on a Video Player, as the whole of the screen should really be devoted to showing/playing the video. Now we could show part of the PlayList title in the top right corner of the Video Player screen/page. But this will be a little messy when changing between screens, & not as clear as alternative methods like showing an icon. So enough chat, let us get on with it...
Now ensuring you are in the Base State, select "TextBlockPlayerTitle" & choose Group Into > StackPanel.
Rename the StackPanel to "PlayerTitleAndInfo", change the Orientation to Horizontal & ensure the Margins are 0 except for a Margin of 20 on the Left side. (The Horizontal Alignment should be Left, & the Vertical Alignment should be Stretch).
Expand PlayerTitleAndInfo to reveal TextBlockPlayerTitle & duplicate it using Copy & Paste.
Rename this duplicate to "TextBlockSpacer", & set the Text to " - ". ("space", "minus" sign, "space").
Duplicate this element again, & rename the new element to "TextBlockPlayerInfo", as shown in the image below.
The TextBlockPlayerInfo is to show the currently selected video title, & to do this, we need to Data Bind to the SelectedValue of the PlayList ListBox. But in order to do this, we need to convert the format of the SelectedValue of the PlayList ListBox from an Object to a String. So we need to go cap in hand, back to the developer (Michael) & say: Please Sir, can I have a ValueConverter please? And as long as your Developer is in a good mood, it should take them just a minute to knock one up for you! So download the VideoToVideoTitleConverter & add it to the Project.
Select TextBlockPlayerInfo in the Advanced options of the Text property, choose Data Binding...
In the Popup window, choose the Element Property tab, select PlayList, choose the SelectedItem property, then the VideoToVideoTitleConverter, & finally hit OK.
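The resulting binding on the TextBlock reads roughly as below. The converter first needs declaring as a resource, which Blend takes care of when you pick it in the dialog; the "local" prefix here is just a placeholder for wherever Michael's converter class lives:

<!-- somewhere in the Resources -->
<local:VideoToVideoTitleConverter x:Key="VideoToVideoTitleConverter" />

<!-- the TextBlock showing the currently selected video title -->
<TextBlock x:Name="TextBlockPlayerInfo"
           Text="{Binding SelectedItem, ElementName=PlayList, Converter={StaticResource VideoToVideoTitleConverter}}" />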
Now change the Font to the Local Resource PhoneFontFamilyLight.
In the Artboard, the title area should look like the image below.
Run the application to review the results, & we have a slight problem when we use the "Flick" gestures to switch between the Player & Playlist pages. The TextBlockPlayerTitle moves, but TextBlockSpacer & TextBlockPlayerInfo do not when we switch pages. So go to the State Manager (VSM) & in the PlayList State delete the RenderTransform in the Objects and Timeline, that is attached to the TextBlockPlayerTitle. Instead select the parent StackPanel (PlayerTitleAndInfo) and set a Translate of -800 on the PlayerTitleAndInfo StackPanel. And that should fix the problem...
So we are now displaying the selected video along the top of the screen, & this works fine with short titles, but we need to consider what will happen with a very long title. And I can tell you that it will spill over onto the PlayList page/screen. So we need to ensure that this doesn't happen, & the easiest way to do this is to set a Fixed size for the parent StackPanel. And this is fine as long as the Video Player is always in Landscape mode, but won't work if the Video Player is later required to work in Portrait mode. (As it will again spill over onto the PlayList page). So instead we will leave the StackPanel as "AutoSized", but set a Margin for the Right side, (as well as the Left side).
So select the PlayerTitleAndInfo StackPanel, change the Horizontal Alignment to Stretch & set a Margin on the Right of 60.
(The reason I want such a large Margin on the Right, is because I want to put an icon in the Top Right corner of the title area).
But before we do that, I want to tweak the size of the Title area, in comparison to the MediaElement (Video screen). As currently the Title area is a little larger than we need, & this is compromising the size of the MediaElement & the focus of this application.
So select the LayoutRoot element & adjust the Row Divider up a little, to enlarge the MediaElement display area. Alternatively, change the XAML directly so that the first RowDefinition is about 52 pixels, as shown in the image below.
Next download the ArrowIcon from here & add it to the Project.
Copy the ArrowIcon Path into the TitleGrid area of the MainPage, & rename it to "ArrowIconRight".
Now give it a fixed size of 32 for both the Width & Height, & change the Stroke to the PhoneForegroundBrush.
Next go the Fill & change the Alpha to 0%. - Do not remove the Fill "No Brush" as this will make the ArrowIconRight harder to select. Although this will probably not make any difference with a Touch interface, it will however make a difference in the emulator, where it would be possible to click within the circular area of the icon & miss a selectable area with the mouse pointer.
Next set the Horizontal Alignment to Right, & the Vertical Alignment to Stretch.
Finally ensure all the Margins are 0, except for the Right, which should be set to about 10 pixels.
Now go to the Assets tab, & drag a GoToStateAction Behavior onto the ArrowIconRight.
In the Triggers section of the Action, leave the EventName as it is, & change the StateName to "ShowPlayList", as shown in the image below.
This will give the user an alternative method of navigation to the PlayList, but we also need an icon on the Playlist page, to return to the Player page.
So duplicate the ArrowIconRight, & rename it to "ArrowIconLeft".
In the Transform section, Flip ArrowIconLeft along the X axis, as shown in the image below.
Now select the "GoToStateAction" of the of the ArrowIconLeft.
And in the Trigger section, change the StateName to "ShowPlayer"
Select ArrowIconLeft again, & in the Base State, set a Translate of 800 along the X axis.
Now go to the PlayList State, & set the Translate back to 0 for the X axis.
And then while still in the PlayList State, select the ArrowIconRight & set a Translate of -800 on the X axis.
Hopefully when you run the application, it all works fine!
Now to look at the controls that we styled earlier, so that they can be hidden when not in use. And consideration should be given to the best possible user interaction. For example, we want to show the controls when the Video Player is started, but not require the user to do anything to remove/hide them & enjoy the video/movie. (So we effectively need to put them on a timer when the video is started). We also need to consider that the user may not want to wait for the controls to "fade" out of view on the timer. So we also need a mechanism that will allow this, as well as a mechanism to display the controls when the user wants to use them. We also need to ensure that while making the controls appear, the user doesn't accidentally select any of the controls. As a user's patience will wear thin. Especially if they accidentally cause the video to go back to the beginning, while they were only wanting to pause the movie or adjust the volume. User interaction & user experience can only really be gauged & improved from user feedback, but we should try our best to give the user the best product possible as the starting point.
All I have stated above, are based on assumptions & my own personal experiences. These may be right, or they may be wrong. But until we have a real phone, & put it in the hands of a user, my assumptions are all I have. The other limiting factor here, is that I will be doing everything without any code, & while Blend is great, it may require some custom manipulation & code to get the best user experience for the Video Player user interaction. So while I am not 100% happy with what I'm about to do, it will be good enough to gauge user feedback to make improvements to the user experience. And user experience is all that really matters...
We could use Visual States to manipulate the showing & hiding of the player controls. But we are already using Visual States to navigate between the Player & Playlist screens/pages. And as this navigation already uses Behaviors that "GoToNextState" & "GoToPreviousState", it could interfere with any other States we add. So instead I will use Storyboards to manipulate the player controls, and Storyboards have the additional advantage of easily allowing a duration/timer to be set at any point of the Storyboard.
As you would expect, we are going to use 2 Storyboards. (One to show the player controls, & one to hide the player controls). The Storyboard that will show the player controls, will do so on a timer, & after a set duration, will hide the player controls. So lets make a start...
But before we start adding Storyboards, we first need a couple of elements to use as Triggers for the Storyboards. These will be transparent Rectangles (a Fill with 0% Alpha). The reason we will need 2 of them is to create a Toggle arrangement for the show/hide behaviour of the player controls. It is not possible to attach 2 different Behaviors/Triggers to an element/object in Blend & expect them to be fired in an alternating manner. So what we will do is attach one Behavior/Trigger to each, & when the Trigger for the front Rectangle is fired, we will start our Storyboard, & at the same time shrink/move this Rectangle to reveal the second Rectangle behind. When the user clicks again, the Trigger on the second Rectangle will be fired & while we start our second Storyboard, we will reinstate the first Rectangle. Thus generating a Toggle type of arrangement for the mouse click, or user "Touch" gesture. Simple!
So in the Objects and Timeline select the Player element, & insert 2 Rectangles naming them "ShowPlayerControls" & "HidePlayerControls"
Remove the Stroke on both Rectangles, & set the Fill to Black with 0% Alpha for both Rectangles.
Now set the Horizontal Alignment to Stretch & Vertical Alignment to Bottom. As well as the Width to Auto & the Height to 180 pixels.
Next select the Player element again & insert a TextBlock containing the text "Click in shaded area to show/hide player controls" & rename the TextBlock to "TextBlockShowAndHideControls".
Set the Horizontal Alignment to Center & the Vertical Alignment to Bottom, with a Margin of about 100 pixels on the Bottom, & 50 pixels for the Left & Right sides.
(The reason I am setting Margins for the Left & Right sides, is because I want to consider what would happen if the Video Player later supports Portrait mode. As without Margins, the text would lap over the edges of the screen. And we also need to consider how the text would Wrap if we supported Portrait mode).
So go to the advanced properties of the Text properties, & set the Text Alignment to "Center" & and ensure the TextWrapping is set to "Wrap".
Now set the Foreground colour to the Resource PhoneForegroundBrush, & the Font Size to Local Resource > PhoneFontSizeLarge.
Finally in the Objects and Timeline, arrange the items in the Player Grid as: MediaElement, ShowPlayerControls, TextBlockShowAndHideControls, HidePlayerControls, PlayerControls, as shown in the image below.
Hopefully the Artboard should look like the image below.
(In the image above, I have one of the transparent Rectangles selected, just to show it's positioning).
Select TextBlockShowAndHideControls & change the element Opacity to 0% to make it invisible.
Now go to the Objects and Timeline & click on the + icon to add a new Storyboard to the Project.
In the Popup window give the Storyboard a name like "ShowPlayerControlsOnTimer" & hit OK.
This will open the Storyboard timeline; hit F6 to change the view in Blend to the "Animation Workspace".
At 0.1 seconds on the Timeline, select the HidePlayerControls element & set a Keyframe, as shown in the image below.
Now select the PlayerControls & also set a Keyframe at 0.1 seconds by changing the element Opacity to 0%.
(Modifying an element while in a Storyboard will automatically generate a Keyframe for that element).
At 0.0 seconds on the Timeline select the PlayerControls element, change the Opacity to 0% & in the Transform section, set a Translate in the Y axis of 200 pixels.
This will move the PlayerControls off the bottom of the screen, & therefore not selectable while they are hidden (invisible).
Now select the HidePlayerControls element, & in the Transform section, set the Scale to 0 for the Y axis.
And the Center Point to 1 for the Y axis.
(This will start the Storyboard with the front Rectangle "Collapsed" at the bottom of the screen, and the back Rectangle (ShowPlayerControls) able to accept mouse clicks or "Touch". It may not actually be needed, but the animation is easier to follow with the Rectangle starting the animation "Collapsed").
Now move the Timeline to 0.5 seconds & with the PlayerControls selected, change the element Opacity back to 100%.
Now move the Timeline to 12.0 seconds & with the PlayerControls still selected, set another Keyframe.
Next move to 12.5 seconds on the Timeline, & change the element Opacity of PlayerControls to 0%.
Select the HidePlayerControls element & also record/set a Keyframe at 12.5 seconds.
Now with HidePlayerControls still selected, go to 12.6 seconds on the Timeline & change the Scale to 0 for the Y axis, & the Center Point to 1 in the Y axis.
Next select the PlayerControls again & in the Transform section, set a Translate of 200 pixels for the Y axis.
This completes the basic animation we need to display the Video Player controls when a video is first loaded. From fading them in, to displaying on a timer, & fading them out again. We have also set up the actions we need to move the front Rectangle (HidePlayerControls) out of the way, to reveal the back Rectangle (ShowPlayerControls) that will generate our Toggle behaviour. Now we just need to provide a visual clue to the user on how to operate the Video Player controls...
So change the Timeline to 13 seconds, select the ShowPlayerControls element & set a Keyframe.
Next select TextBlockShowAndHideControls & also set a Keyframe at 13 seconds.
Move the Timeline to 13.5 seconds, & still with TextBlockShowAndHideControls selected, change the element Opacity to 100%.
Select ShowPlayerControls & also at 13.5 seconds on the Timeline, change the Alpha of the Fill to 30%.
The Artboard should look like the image below, at 13.5 seconds of the "ShowPlayerControlsOnTimer" Storyboard.
Move the Timeline to 16.0 seconds & set a Keyframe for both the ShowPlayerControls, & the TextBlockShowAndHideControls elements.
Next move the Timeline to 16.5 seconds, & change the element Opacity of TextBlockShowAndHideControls to 0%.
Finally select the ShowPlayerControls element, & change the Alpha of the Fill back to 0%.
And that is that for this Storyboard, so close it using the X icon, as shown in the image below.
The next Storyboard is thankfully simpler, and this one is simply to hide/fade out the Video Player controls, and finish off the Toggle behaviour of the Rectangles.
So create another Storyboard using the + icon as we did before, name this Storyboard to "HidePlayerControls" & hit OK.
At 0.0 seconds on the Timeline, select HidePlayerControls & set a Keyframe. And do the same for PlayerControls.
Next with HidePlayerControls selected, go to 0.1 seconds on the Timeline & change the Scale to 0 for the Y axis, & change the Center Point to 1 for the Y axis.
Next go to 0.5 seconds on the Timeline, & change the element Opacity of PlayerControls to 0%.
Move the Timeline to 0.6 seconds & set a Translate in the Y axis of 200 pixels.
The Timeline should look like the image below.
Close the Storyboard using the X icon, just like we did before.
So now we have the Storyboards, we need to attach the Behaviors/Triggers that will fire & control the Storyboards. And we will need 3 in total: One to start the "ShowPlayerOnTimer" when a video is loaded, one to hide the PlayerControls, & finally, one to show the PlayerControls.
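Before we wire these up in Blend, here is roughly what the same three triggers would look like if you did the wiring in code-behind instead. This is only an illustrative sketch and not part of the tutorial's no-code approach: the MediaElement name (VideoPlayer), the constructor placement & the exact mouse events are assumptions, and the Storyboards are looked up through the page's Resources dictionary.

// Sketch only - a hypothetical code-behind equivalent of the three triggers.
// Requires: using System.Windows.Media.Animation;
public MainPage()
{
    InitializeComponent();

    var showOnTimer = (Storyboard)Resources["ShowPlayerControlsOnTimer"];
    var hideControls = (Storyboard)Resources["HidePlayerControls"];

    // Show the controls (and start the auto-hide timer) when a video is opened.
    VideoPlayer.MediaOpened += (s, e) => showOnTimer.Begin();

    // Tapping the back Rectangle shows the controls again...
    ShowPlayerControls.MouseLeftButtonUp += (s, e) => showOnTimer.Begin();

    // ...and tapping the front Rectangle hides them.
    HidePlayerControls.MouseLeftButtonUp += (s, e) => hideControls.Begin();
}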
So go to the Assets tab, & drag a "ControlStoryboardAction" onto the MediaElement, as shown in the image below.
In the Triggers section, change the EventName to "MediaOpened", & set the Storyboard that will be played to ShowPlayerControlsOnTimer.
Next drag another "ControlStoryboardAction" onto ShowPlayerControls.
In the Triggers section, leave everything as it is, except for setting the Storyboard to ShowPlayerControlsOnTimer.
Finally drag another "ControlStoryboardAction" onto HidePlayerControls.
In the Triggers section, leave everything as it is, except for setting the Storyboard to HidePlayerControls.
Run your application to review the results!
We should have all the basic interaction needed, although some fine tuning could be done. For example, when a movie is selected in the Playlist ListBox, it is opened by the MediaElement & played. Now imagine that you do not return to the player screen for a little while, & by then the Video Player controls have faded out on their timer. And it's hard to say if this is a good thing, or a bad thing... A user may feel this is wrong while they are getting used to using the Video Player, but at the same time it may become annoying to the user if the controls automatically become visible every time the media is changed. Only testing will tell. But for this demo application, I think it is best to leave things as they are, though I'm not saying it is right or wrong...
That is about it for this demo Video Player application, & remember that this is a demo! Not a hard & fast set of rules when designing for Windows Phone 7. I also don't believe that Microsoft have finalised the guidelines, & these may very well change in time...
Looking back at this demo application I can see a few things that I'm not quite happy with. The most obvious of these is the Playlist ListBox, as the page title looks almost the same as the list itself! So I will make a quick edit to this, so that it is easier to differentiate between the page title & the list itself. The most obvious way to do this is to change the background of either the page title, or the ListBox. If I change the title area background, I would also really need to change the background of the player to make it consistent. And as I said before about battery consumption, this is probably not the best idea, although a dark grey may not be too bad regarding power consumption. It is probably better to change the background of the ListBox, as the ListBox is probably not going to be in view very often, & therefore power consumption considerations are not really an issue. As we are currently using Black as the background of the ListBox, I will obviously need to lighten the ListBox background. And I can do this in 2 ways: either by using a Grey (or semi-transparent White), or by using a colour or semi-transparent colour... Now we have already discussed that there are 4 Theme colours (5 if you include the reserved manufacturer's colour). So I need to consider how any colour I apply will work when the Theme is changed. And the obvious answer is to use the current Theme colour, or a hint of the current Theme colour.
So go to the PlayList ListBox & choose Edit Additional Templates > Edit Generated Item Container > Edit Current.
Duplicate the Highlight element, rename this to "BackgroundHint" & place it behind the Highlight element.
Change the Visibility to Visible, & set the element Opacity to 10%.
In the Artboard, the PlayList ListBox should look like the image below.
This is not the most elegant of solutions, but this is more an exercise on reinforcing the considerations required for Themes & battery life.
I may try & make this screen a little more elegant in the downloadable demo version, but that will do for this tutorial.
The other thing I want to tweak is the area that is selectable & able to hide or show the Video Player controls. As it stands, only the area around the Video Player controls is clickable/touchable. Yet the top half of the Video Player controls (ProgressBar & ProgressDisplay) currently have no interaction available to them. But because they have "IsHitTestVisible" turned on by default, they prevent anything behind them (in the Z order) from being clicked or "Touched". The same is true for the TextBlockShowAndHideControls, which while invisible most of the time, still prevents the ShowPlayerControls Rectangle behind from being clicked or "Touched".
So in the Objects and Timeline, select ProgressInfo & in the advanced properties of Common Properties turn off (Uncheck) the IsHitTestVisible check box.
We do not need to worry about turning off the IsHitTestVisible on all the child elements, as they will inherit this property from the parent ProgressInfo element.
Now repeat the step for TextBlockShowAndHideControls, turning off IsHitTestVisible.
You may now be wondering why the "Flick" gestures still work, as they are behind (in the Z order) all the other elements. And the honest answer is that I don't know for sure, but my guess is that it's because they monitor motion, as well as the click or "Touch" interaction, & hence work in a completely different way to most Triggers.
Next I want to tweak the Storyboards a little, as the show/hide isn't working quite properly. As currently the "ShowPlayerControlsOnTimer" Storyboard continues to play, even though the "HidePlayerControls" Storyboard has been started when the user wants to hide the player controls.
Now the most obvious thing to do, is to attach another "ControlStoryboardAction" to the HidePlayerControls Rectangle.
So do this, & set the ControlStoryboardOption to "Stop" & the Storyboard to "ShowPlayerControlsOnTimer".
Next open the ShowPlayerControlsOnTimer Storyboard, & drag the end Keyframes for the HidePlayerControls Rectangle to the end of the whole Storyboard, as shown in the image below.
(This will prevent the ShowPlayerControls from being selectable, until the ShowPlayerControlsOnTimer has ended).
Finally go to the HidePlayerControls Storyboard & set Keyframes at 0.0 seconds for the ShowPlayerControls & TextBlockShowAndHideControls elements.
This will ensure that these elements are set to the correct state when the animation is started, as they are modified in the ShowPlayerControlsOnTimer Storyboard & may not be in their correct state otherwise.
And that is just about it for this demo & tutorial.
Hope it was useful... & please.
|
https://www.codeproject.com/Articles/84859/Windows-Phone-View-Model-Style-Video-Player?msg=4082815
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
7.7 Class definitions
A class definition defines a class object (see section 3.3):
classdef "class"
classname[
inheritance] ":"
suite
inheritance "(" [
expression_list] ")"
-
classname
identifier
-
A class definition is an executable statement. It first evaluates the inheritance list, if present. Each item in the inheritance list should evaluate to a class object or class type which allows subclassing. The suite of the class is then executed in a new execution frame (see section 4.1), using a newly created local namespace and the original global namespace. (Usually, the suite contains only function definitions.) When the suite finishes execution, its execution frame is discarded but its local namespace is saved. Class variables with
immutable values can be used as defaults for instance variables.
For new-style classes, descriptors can be used to create instance
variables with different implementation details.
|
http://www.network-theory.co.uk/docs/pylang/Classdefinitions.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
windows 10 - python 3.5.2
Hi, I have the two following python files, and I want to edit the second file's variables using the code in the first python file.
firstfile.py
from X.secondfile import *
def edit():
    # editing second file's variables by user input
    if Language == 'en-US':
        print('language is English-us')
    elif Language == 'en-UK':
        print('language is English-uk')
Language = 'en-US'
with open("secondfile.py","a") as f:
f.write("Language = 'en-US'")
You can embed the
Language in a class in the second file that has a method to change it.
class Language:
    def __init__(self):
        self.language = 'en-US'

    def __str__(self):
        return self.language

    def change(self, lang):
        assert isinstance(lang, str)
        self.language = lang

language = Language()
Then import the "language," and change it with the change method.
from module2 import language

print(language)
language.change("test")
print(language)
|
https://codedump.io/share/HHmbFG9oKDjA/1/how-to-modify-variables-in-another-python-file
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Can ST2 auto close tab after the file was deleted? I delete file because I don't need it anymore, but then I must close the tab manually. It's so annoying
Totally agree!
I am constantly hitting save on deleted files by accident and thereby undeleting them. I often don't realize I've done it for a while.
You can do this with a plugin. I didn't really test this much, so you may want to test on non critical stuff first. It does just close the view, so worst case is that you lose some existing work. That being said, I'm pretty sure it works fine.
import sublime_plugin
import os

class MyEvents(sublime_plugin.EventListener):
    def on_activated(self, view):
        if view.file_name():
            if not os.path.exists(view.file_name()):
                view.set_scratch(True)
                view.window().run_command("close")
I agree, it would be nice to have ST close the tab of a deleted file.
@skuroda cool, thx, works nice
How do I add this? Just make a new plugin folder?
Go to "Tools -> New Plugin". Paste the content I posted above into the file. Save it into "Packages/User". You can choose whatever file name you want, just be sure the extension is ".py"
There's a problem with the plugin: I just found out that the Default settings files (also the Default keymap) are not displayed anymore; they get instantly closed after opening.
So you cannot view the default settings of Sublime Text or other plugins.
I just upgraded the script a little bit. Now it checks first if the file is not a kind of "Default"-file.
import sublime_plugin
import os

class MyEvents( sublime_plugin.EventListener ):
    def on_activated( self, view ):
        s = view.file_name()
        if s:
            if not os.path.exists( s ):
                if not "Default" in s:
                    view.set_scratch( True )
                    view.window().run_command( "close_file" )
Any chance this plugin can be updated for ST3? Doesn't seem to work there for me.
Or maybe I'm doing something wrong? I've added it to packages/user as CloseDeletedTabs.py, and even went so far as to restart Sublime, but it doesn't seem to do anything. The tab stays open after deleting the file.
So I hadn't ever looked at plugins in ST3 before, but I've done some research and found that in ST3 plugins don't run on the main thread, so you need to use sublime.set_timeout to run the close_file command on the main thread and avoid a crash.
This seems to work for me in ST3 on OSX:
import sublime_plugin
import sublime
import os

class MyEvents( sublime_plugin.EventListener ):
    def on_activated( self, view ):
        s = view.file_name()
        if s:
            if not os.path.exists( s ):
                if not "Default" in s:
                    view.set_scratch( True )
                    sublime.set_timeout(lambda: view.window().run_command("close_file"), 0)
The on_activated event wasn't working properly when deleting the file. I had to change tabs and then click the deleted file's tab for it to disappear. The method I needed was on_modified_sync.
import sublime_plugin
import sublime
import os

class MyEvents(sublime_plugin.EventListener):
    def on_modified_async(self, view):
        s = view.file_name()
        if s:
            if not os.path.exists(s):
                # Without checking for this string in the path, config files seem to be automatically closed.
                if "Sublime Text 3" not in s:
                    view.set_scratch(True)
                    sublime.set_timeout(lambda: view.window().run_command("close_file"), 0)
never use the on_modified_async listener for something like this, then typing lags, etc, etc
Updated the plugin again to be faster and fix a bug when creating a new file from the subl command line.
import sublime_plugin
import sublime
import time
import os

class MyEvents(sublime_plugin.EventListener):
    def on_deactivated_async(self, view):
        s = view.file_name()
        if s:
            time.sleep(0.1)  # Give the file time to be removed from the filesystem
            if not os.path.exists(s):
                print("Closing view", s)
                view.set_scratch(True)
                view.window().run_command("close_file")
Gist:
Hope this helps.
|
https://forum.sublimetext.com/t/close-tab-after-delete-file/9439/1
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Incorporating application security is a recurring activity. Microsoft has several mechanisms available to do this. Mostly, they're based on the Windows security subsystem. With the advent of web-based applications, the Windows Security subsystem was not always possible or desirable. Many developers therefore implemented custom-made security subsystems into their web-applications. Microsoft responded with a built-in security subsystem in ASP.NET 2.0: the ASP.NET Membership and Role Provider framework. This framework relieved ASP.NET developers from building their home-grown security systems.
Not only ASP.NET web applications, but WinForms client applications need security as well, not always based on the Windows Security subsystem. Here, the same trend emerged as with ASP.NET apps in building custom-made security systems and integrating them into WinForms apps. Many saw the potential of the ASP.NET Membership and Role Provider framework, and tried to integrate this into their Windows apps. Now, with Visual Studio 2008 and .NET 3.5, Microsoft responded by giving more easy access to this security subsystem for non-ASP.NET clients, like for example, WinForms apps, in the form of Client Application Services.
This article will not discuss the profile service that is also part of the Client Application Services.
Let’s take a step back and discuss the typical security requirements in an application.
Information that is being managed within an organization needs to be protected for diverse reasons. This protection can be organized and enforced in many ways. Applications that manage this information are just one level of security. This article will zoom in on this application-level security.
One of the core requirements is the need to only let authorized persons use the application in question. So, we need something to represent persons, and something that they can use to prove that they are who they say they are. Once the verification of the identity is positive, the person can use the application. In most applications, the user does not have to legitimize himself over and over again. He will be trusted during the entire usage of the application (or maybe some time limitation is applicable). Sometimes, particular parts of the application are restricted to particular persons, or better persons that have a certain function within the organization.
There are many other security requirements:
So, a user needs an identification “thing” in order to present himself to the security subsystem. This “thing” will differentiate him among other users. This identity must be part of the security subsystem for the application in question. The membership will determine if the user may or may not use the application. A much used identifier is the user account.
Before using a secured application, a user must get a chance to present his user account to the application (i.e., the security subsystem). A common mechanism is the logon-screen. The application will enforce that this screen will be the only way in.
But before a user that presents his user account on a logon screen is granted access to an application, we want to be sure that the user is actually the "owner" of the user account. Most of the time, a "secret" is coupled with the user account. The user must supply this "secret" together with his user account. The security subsystem will verify whether the secret matches the user account.
Sometimes the password is subject to a number of rules in terms of format to make password guessing a little bit harder. Sometimes a user makes a typo while entering the password. The security system can decide, based on an algorithm, to give a user a second chance to log on.
Once a user is authenticated for entering the application, we would also like to avoid (in most cases) forcing the user to log on again before using each part of the application. In other words, we would like to keep some security related information "hanging around" after authentication. This can be, for example, used to store "privileges" coupled with the user account.
In order to keep the relation user account-privileges manageable, the security subsystem can introduce a security profile. This profile will then be coupled with a collection of privileges. Each user account will in turn be coupled with one or more profiles instead of direct coupling with individual privileges. The user is said to have a particular role within the application.
Now, certain parts of an application can be subject to a different set of security requirements. For example, only users belonging to the administrator role may use the update-functionality of the application. So, with the role information, we can introduce logic into our application to check whether a particular piece of code is accessible for the user in question. Moreover, instead of being “re-active”, we can take a “pro-active” approach and show or hide certain GUI elements based on the role a particular user has within that application.
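In code, that pro-active approach usually comes down to a simple role check. Here is a minimal C# sketch of the idea; the role name and the control names are just examples and not something defined by the framework.

// Minimal sketch of pro-active, role-based UI gating (names are examples only).
// Assumes: using System.Threading; inside a WinForms form class.
private void ApplyRoleBasedUi()
{
    bool isAdmin = Thread.CurrentPrincipal.IsInRole("Administrator");

    // Only administrators get to see/use the update functionality.
    btnUpdate.Visible = isAdmin;
    mnuDelete.Enabled = isAdmin;
}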
Sometimes our application will exchange data with other systems (like, for example, the security subsystem). We don’t always want to send password in clear text over the wire. In order to keep the confidentiality of our data, we may need to tap into other security subsystems that are made available on the platform our application runs on (for example, encryption).
Microsoft has implementations available to incorporate these logical security mechanisms and patterns. For example, the standard .NET role-based authorisation mechanism based on Windows groups, and the Enterprise Services (COM+ role-based system). The Microsoft Patterns and Practices group also has a special application block in the Microsoft Enterprise Library, namely the Security Application Block. In Windows 2003 and XP Service Pack 2, you can use the Microsoft Authorization Manager framework to make role-based applications.
Let’s see how you can implement these application security reuirements with the Client Application Services in Visual Studio 2008. I didn't supply any code with this article but you can walk through the "exercise" in the following paragraph.
The Client Application Services build upon a couple of facilities that were already available in the ASP.NET 2.0 framework, but now are made available to non-ASP.NET applications.
The existing ASP.NET providers for membership, role, and personalization are now open for WinForms apps, thanks to code in the System.Web.Extensions namespace. This is true for consumers at the server side.
Also, Visual Studio 2008 is enhanced to ease the configuration aspect for letting WinForms apps communicate with the Membership and Role services.
A first step in our “exercise” is the creation of a “repository” to manage our user accounts, passwords, and roles. Because we are not focusing on the Windows security subsystem, we will choose a SQL Server database as our repository. Luckily, a utility program is made available to create such a SQL Server membership provider database: aspnet_regsql.exe. You can find this executable in the folder \WINDOWS\Microsoft.NET\Framework\v2.0.50727. This wizard will create the database (tables and stored procedures) necessary to manage the user accounts, passwords, roles, and the user/role associations.
Although there are several stored procedures in the database you've just created, it is still rather static. We need an application to manage the data inside the database. By manage I mean exercising the CRUD operations on the users, passwords and role-associations. Luckily, we don't have to write this code ourselves. This is where the ASP.NET membership and role provider framework steps in.
The next step is to make an ASP.NET 3.5 web application that will host the membership provider and the role provider. When you build an ASP.NET 3.5 Web application or Web Service, you will notice several extra configuration entries in your web.config file (see later in the article), as well as a reference to System.Web.Extensions.dll, which contains the WCF Service implementations that give remote access to the Membership provider and the Role provider.
Thanks to the integration in Visual Studio, you can, in a 3.5 web-site project, open the administration part (also a web application, but only called from within Visual Studio) for the Membership/Role provider.
A worthy alternative is the Credential Manager from Idesign (Juval Löwy).
So, what do you need to change in the web configuration file?
We need an ADO.NET connection string pointing to the membership database.
<connectionStrings>
<add name="testCAS" connectionString="Data
Source=myMachine;Initial Catalog=testCAS…/>
</connectionStrings>
Membership and Role provider activation:
<system.web>
……
<membership defaultProvider="TestCASSqlProvider">
<providers>
<clear/>
<add name="TestCASSqlProvider"
type="System.Web.Security.SqlMembershipProvider"
connectionStringName="testCAS"
applicationName="testCAS"/>
</providers>
</membership>
<roleManager enabled="true" defaultProvider="TestCASSqlRoleProvider">
<providers>
<clear/>
<add name="TestCASSqlRoleProvider"
connectionStringName="testCAS"
applicationName="testCAS"
type="System.Web.Security.SqlRoleProvider"/>
</providers>
</roleManager>
….
</system.web>
Forms authentication activation:
<system.web>
….
<authentication mode="Forms" />
….
</system.web>
Activation of the Authentication and Role (WCF) Service.
<system.web.extensions>
<scripting>
<webServices>
<authenticationService enabled="true" requireSSL="false"/>
<roleService enabled="true"/>
</webServices>
</scripting>
</system.web.extensions>
Now, it is time to configure the WinForms (or WPF) application that will be using the Authentication and Role service hosted in our ASP.NET application. Through the new tab "Services" in the project properties in Visual Studio 2008, you can easily do this. Checking the radio-button "Forms Authentication" will add a reference to System.Web.Extensions. This assembly contains, next to all the server logic, also all the logic for the consumer side. There is also a textbox where you can enter the URL of the ASP.NET web application that acts as an Authentication service and a Role service host. You can also specify a form, implementing a special interface, that will act as a logon screen. This can be handy to implement certain scenarios like initial logon.
<system.web>
<membership defaultProvider="ClientAuthenticationMembershipProvider">
<providers>
<add name="ClientAuthenticationMembershipProvider"
type="System.Web.ClientServices.Providers.
ClientFormsAuthenticationMembershipProvider,
System.Web.Extensions, Version=3.5.0.0, …"
serviceUri=""
credentialsProvider="TestCASClient.LoginFormCAS,TestCASClient" …/>
</providers>
</membership>
<roleManager defaultProvider="ClientRoleProvider" enabled="true">
<providers>
<add name="ClientRoleProvider"
type="System.Web.ClientServices.Providers.ClientRoleProvider,
System.Web.Extensions, Version=3.5.0.0, …"
serviceUri="" …." />
</providers>
</roleManager>
</system.web>
Don’t forget to add a reference to System.Web in order to use the membership and role namespace. Although, System.Web.Extension contains the real code. The consumer code also uses the standard membership and role namespaces to programmatically authenticate the user or to determine if a user belongs to a certain role. So enough configuration. Let's program something.
The API to use CAS is very straightforward. If you have ever programmed with the ASP.NET membership you will see ... that it is the same API. This is of course the core of the provider model. So, for example, let's see what happens when we execute the following statement,
Dim valid As Boolean = _
System.Web.Security.Membership.ValidateUser("Alice", "AliceHer1pwd!")
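(For C# clients the equivalent statement would be the following; the credentials are of course just the sample values used here.)

// C# equivalent of the VB statement above.
bool valid = System.Web.Security.Membership.ValidateUser("Alice", "AliceHer1pwd!");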
The membership provider framework will determine through the specified configuration which concrete implementation will have to be instantiated. In our case, it will be the ClientFormsAuthenticationMembershipProvider in the System.Web.ClientServices.Providers namespace. The code will construct and send an HTTP POST request to the authentication service to ask if the supplied user account and password are valid. The message body doesn’t contain SOAP though, but is written in JSON format.
The HTTP message that is sent when the statement is executed (note that explicitly no SSL is used!) is as follows:
POST /testCAS/Authentication_JSON_AppService.axd/Login HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: myStation
Content-Length: 71
Expect: 100-continue
Proxy-Connection: Keep-Alive
{"userName":"alice","password":"AliceHer1pwd!","createPersistentCookie":false}
The URI this post is sent to is retrieved from the app.config file we just created with the help of the Services tab in Visual Studio 2008. Visual Studio 2008 adds a piece to the URL we specified, and refers to the Membership (authentication) and Role service in the ASP.NET 3.5 application we created to host those services. The suffix _AppService.axd will trigger an HTTP-handler in the ASP.NET application. This handler will, based on the MIME-type, determine if it is a SOAP request or a JSON request. Based on other parts of the URL, the authentication service or role service will be called to handle the request.
Deep down in the code, we end up in the System.Web.Security.SqlMembershipProvider class. If we look at the SQL traffic, we will see the following statements being sent to SQL Server:
Retrieval password:
exec dbo.aspnet_Membership_GetPasswordWithFormat @ApplicationName=N'testCAS',
@UserName=N'alice',@UpdateLastLoginActivityDate=1,
@CurrentTimeUtc='2008-05-30 07:12:18:760'
Some accounting after the verification of the password:
exec dbo.aspnet_Membership_UpdateUserInfo @ApplicationName=N'testCAS',
@UserName=N'alice',@IsPasswordCorrect=1,@UpdateLastLoginActivityDate=1……
The HTTP response of the web-method looks like this:
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.1
Date: Fri, 30 May 2008 06:55:36 GMT
X-Powered-By: ASP.NET
Set-Cookie: .ASPXAUTH=28818033DF1ACC3638F96196B78D60AA44BB2F734F…..; path=/; HttpOnly
Cache-Control: private, max-age=0
Content-Type: application/json; charset=utf-8
Content-Length: 10
{"d":true}
The body contains the answer to our authentication request, again in JSON format. The ClientFormsAuthenticationMembershipProvider will "analyse" this and act upon it. When the user account/password is valid, the authentication code will also create two objects that will represent the logged-on user: ClientRolePrincipal and ClientFormsIdentity. With, for example, the RolePrincipal, you can ask the "system" if the particular logged-on user belongs to a certain role. This will form the basis for programmatic authorization code (re-active or pro-active). You can use this functionality by executing a cast on System.Threading.Thread.CurrentPrincipal (and System.Threading.Thread.CurrentPrincipal.Identity for the FormsIdentity).
Dim RolePrincipal As System.Web.ClientServices.ClientRolePrincipal = _
DirectCast(System.Threading.Thread.CurrentPrincipal, _
System.Web.ClientServices.ClientRolePrincipal)
Dim FormsIdentity As System.Web.ClientServices.ClientFormsIdentity = _
DirectCast(System.Threading.Thread.CurrentPrincipal.Identity, _
System.Web.ClientServices.ClientFormsIdentity)
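The same casts in C# would look roughly like this:

// C# equivalent of the VB casts above.
var rolePrincipal = (System.Web.ClientServices.ClientRolePrincipal)
                        System.Threading.Thread.CurrentPrincipal;
var formsIdentity = (System.Web.ClientServices.ClientFormsIdentity)
                        System.Threading.Thread.CurrentPrincipal.Identity;

// With the principal in hand, role checks become one-liners:
bool isManager = rolePrincipal.IsInRole("Manager");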
Another scenario I would like to trace is what happens when you ask if a user belongs to a role. Programmatically, this is executed through RolePrincipal.IsInRole("Manager"). This time the ClientRoleProvider in the System.Web.ClientServices.Providers namespace is responsible for establishing the web request (again in JSON). This will result in sending the following HTTP POST request. So, first, all roles for a particular user are retrieved.
POST /testCAS/Role_JSON_AppService.axd/GetRolesForCurrentUser HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: myStation
Cookie: .ASPXAUTH=28818033DF1ACC3638F96196B78D60AA44BB2……
Content-Length: 0
The role service will trigger a call to SQL Server.
exec dbo.aspnet_UsersInRoles_GetRolesForUser @ApplicationName=N'testCAS',
@UserName=N'alice'
The answer will be:
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.1
Date: Fri, 30 May 2008 06:55:36 GMT
X-Powered-By: ASP.NET
Cache-Control: private, max-age=0
Content-Type: application/json; charset=utf-8
Content-Length: 31
{"d":["Manager"]}
Then, the code (consumer side) will determine if the supplied role in the method call is in the retrieved list of roles (in our case, the user belongs to only one role). This information will be stored in our "session", i.e., the RolePrincipal. So, subsequent calls will not result in calls to the service (but there is a timeout element involved!).
I only showed you some scenarios. Some other scenarios that you could investigate are:
I think most application security requirements can be tackled with the implementation in Client Application Services.
The Membership and Role service in the Client Application Services in Visual Studio 2008 gives a WinForms (or WPF) developer all the tools necessary to implement the most common security requirements. With some configuration work, you can unlock a wealth of functionality you don’t have to program yourself. The API to tap into this system is very straightforward.
So, next time you need to implement application security into you WinForms app, give Client Application Services a go, and see if it is sufficient for you. If so, it will save you a lot of development.
|
https://www.codeproject.com/Articles/27670/Implementing-Application-Security-with-Client-Appl?fid=1480012&select=3302513&tid=2825945
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
This article shows how to create documentation in C# in a simple and fast way. Like my previous articles, this one too focuses on the beginner to intermediate level. Advanced developers, please scroll to the bottom of this article. Remember the days when programmers used to work hard to do the documentation? Trust me, it's a huge difference nowadays. Now, all a programmer needs to do is to embed the XML documentation right into the code and let Visual Studio .NET perform the rest.
Note: If you are very new to XML, click here to checkout my previous article on XML walkthrough.
To start with, just follow the instructions step by step.
Replace the code in the class with the following:
using System;

namespace TestXMLdoc
{
    /// <summary>
    /// This project shows how to create XML documentation in C#
    /// </summary>
    public class TestXMLdoc
    {
        /// <summary>
        /// m_iTestVar is a module level test variable for storing an int
        /// </summary>
        private int m_iTestVar;

        /// <summary>
        /// m_sTestVar is a module level test variable for storing a string
        /// </summary>
        private string m_sTestVar;

        /// <summary>
        /// TestXMLdoc is the constructor
        /// </summary>
        public TestXMLdoc()
        {
            //
            // TODO: Add constructor logic here
            //
        }

        /// <summary>
        /// TestReturnBack is a test method which
        /// simply takes a name and returns it back.
        /// </summary>
        /// <param name="sName"></param>
        /// <returns></returns>
        public string TestReturnBack(string sName)
        {
            return "Your name is " + sName;
        }
    }
}
Note: If you are not using the Visual Studio .NET IDE, you should go for the first one from the following options:
csc testXMLdoc.cs /doc:testXMLdoc.xml
(or)
Unlike Java, Microsoft Visual C# generates XML documentation instead of HTML, and as we know, XML is mouldable and can later be used anywhere else we need. The following is a list of tags along with their description which can be used in the creation of XML documentation.
<c> - This will textually include the provided code in a single line.
E.g.: <c> int i = 0 ; </c>
<code> - This will textually include the provided code in multiple lines (snippet).
<example> - This will include a code example.
<exception> - This documents an exception that a member can throw; the compiler verifies that the referenced exception type exists.
<include> - This will fetch the documentation from an external file. Same as #include in scripting languages.
<list> - This will insert a list into the documentation file.
<param> - This specifies the parameters of the method and makes the compiler verify them.
<paramref> - This will mark a word as a parameter reference.
<permission> - This will document the access permissions of a member.
<remarks> - This will include a descriptive text about the member.
<returns> - This will mark the return value of a member.
<see> - It is a reference to a related item.
<seealso> - This is similar to the 'See also' section of the MSDN help.
<summary> - A summary of the member item. Normally, Visual Studio .NET prepares this automatically, along with its close tag, the moment you type "///".
<value> - This is nothing but a description of a property's value.
A short example combining several of these tags follows below.
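To give a feel for how a few of these tags combine in practice, here is a small, purely illustrative method using <summary>, <param>, <paramref>, <returns>, <exception>, <remarks> and <example>; the method itself is just an invented sample:

/// <summary>
/// Divides one integer by another.
/// </summary>
/// <param name="dividend">The number to be divided.</param>
/// <param name="divisor">The number to divide by.</param>
/// <returns>The integer result of the division.</returns>
/// <exception cref="System.DivideByZeroException">
/// Thrown when <paramref name="divisor"/> is zero.
/// </exception>
/// <remarks>
/// This is only a sample to show the tags working together.
/// </remarks>
/// <example>
/// <code>
/// int result = Divide(10, 2); // result == 5
/// </code>
/// </example>
public int Divide(int dividend, int divisor)
{
    return dividend / divisor;
}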
Wasn't that fun? It was.. I know. But that was just a trailer, let's see the whole movie.
That's the result of your patience, hard work and cooperation.
Note: The above example was for people who need a tool to suffice their minimum requirements. For advance documentation, you would like to opt for a more sophisticated tool like NDoc Code Documentation Generator for .NET.
NDoc generates class library documentation from .NET assemblies and the XML documentation files generated by the C# compiler (or with an add-on tool for VB.NET). And the output looks like the following:
NDoc uses pluggable documenters to generate documentation in several different formats, including the MSDN-style HTML Help format (.chm), the Visual Studio .NET Help format (HTML Help 2), and MSDN-online style web pages.
You can download the latest version: NDOC 1.3 Beta or NDOC 1.2.
Thanks to Ms. Manisha Mahajani for reviewing the article for errors.
Njoi.
|
https://www.codeproject.com/Articles/8061/Quick-C-Documentation-using-XML?msg=905766
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Package: wnpp
Severity: wishlist
Owner: Peter Pentchev <[email protected]>

* Package name    : sdb
  Version         : 0.6
  Upstream Author : pancake <[email protected]>
* URL             :
* License         : public domain
  Programming Lang: C
  Description     : simple and fast key/value database

sdb is a simple key/value database with disk storage, based on cdb, but
with various optimizations and improvements related to both the on-disk
format and the runtime processing. Its core API supports querying and
modifying data within JSON objects stored in the database, as well as
references to external sdb databases using namespaces.

I am aware that a slightly outdated version of the sdb sources is
distributed within the radare2 source package, but I have the feeling
that this library also merits distribution on its own.
|
https://lists.debian.org/debian-devel/2013/03/msg00462.html
|
CC-MAIN-2017-30
|
en
|
refinedweb
|
It became clear from the emails and comments to my last post that I should probably spend a little more time describing the functionality found in the VSTS 2010 CTP a bit better, specifically that functionality involving the Architecture Explorer and the graphs generated via the AE. ( I just figured everyone had already run out, downloaded that 7.5gig image, and started playing! :) )
This post introduces the three "Standard Graphs" found off the Architecture Explorer using the DinnerNow - ServicePortfolio2 solution found in that aforementioned CTP. The idea behind these graphs are to give you the ability to get some "typical" views of your source base with as few mouse clicks as possible.
Here's a shot of the Archicture Explorer, as seen with the DinnerNow solution open in VS 2010:
You'll notice the Visualize Code Relationships menu item, which is where you'll find the three menu items that will create graphs by Assembly, Namespace, or Class dependencies based on the contents of the current solution.
If you select the Visualize Call Dependency - By Assembly menu item, here's what you'll get:
This graph is all about showing you the dependencies between the various assemblies that make up all the projects in your solution. If you don't have the solution built, you'll notice that after selecting this menu item ( this will happen for any "standard" graph actually ) the solution is built. This is due to the fact that we are cracking the assemblies in the output directories and gathering the information found there.
The nodes you see above represent each of the assemblies discovered in this solution. You'll also notice links of various thickness between these nodes. The thicker the link, the more dependencies between the two assemblies. We sometimes refer to these types of links as "blood vessels".
Notice the chevrons in the upper right hand corner of the nodes in this graph:
This is an "expand / collapse" button for this "group" node. You'll also notice in the upper left corner of each node, a number indicating the number of internal nodes inside the group. Here's what the DinnerNow.Services assembly node looks like when expanded ( click for larger view ):
The green and blue nodes represent Classes and Interfaces. You'll notice in the shot above the tooltip displayed as I hover over the CustomerService node, the category reads "Class". If I double-click on that node, it takes me to the source code where that class is defined.
Selecting the Visualize Call Dependency - By Namespace results in a similar type graph, except this time the initial nodes you see below represent the namespaces found in the solution:
Clicking on the chevron of the DinnerNow.Services node shows the exact same information as it did when viewing from the Assembly diagram (shown below in-situ):
Selecting the Visualize Call Dependency - By Class results in the most complicated of graphs, mostly because what you are seeing is dependencies at the class level, with no grouping by assembly or namespace. Expanding a chevron in this graph reveals methods, properties, fields, indexers, etc. that are contained by that class or interface. Below is a shot of just a section, as the graph is pretty large:
One thing to point out if you haven't already discovered this, is that if you hover your mouse over a particular link, you can quickly jump to either end of the link. Quite helpful on large graphs like the Class dependency graph!
One other thing worth mentioning. The dependencies discovered on these graphs are dependencies discovered through static analysis of method call data, not structural dependencies. For example, if you have two classes that don't call each other but one includes the other as a type for a member, no dependency link will be created.
What other "Standard Graphs" should we include out of the box? What are some typical views over your solution that you would like one to two clicks away?
Thanks!
Cameron
|
http://blogs.msdn.com/b/camerons/archive/2008/12/18/standard-graphs.aspx
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
Dialog to create connections and add layers from WMS, WFS, WCS etc.
#include <qgsowssourceselect.h>
Dialog to create connections and add layers from WMS, WFS, WCS etc.
This dialog allows the user to define and save connection information for WMS servers, etc.
The user can then connect and add layers from the WMS server to the map canvas.
Constructor.
Destructor.
Determines the layers the user selected.
Add a few example servers to the list.
Clear CRSs.
Clear previously set formats.
Clear times.
Connection info (uri)
Connection name.
create an item including possible parents
Returns a textual description for the authority id.
Add some default wms servers to the list.
Opens the Spatial Reference System dialog.
Connects to the database using the stored connection parameters. Once connected, available layers are displayed.
Stores the selected datasource whenerver it is changed.
Deletes the selected connection.
Opens a dialog to edit an existing connection.
Signaled when a layer selection is changed.
Loads connections from the file.
Opens the create connection dialog to build a new connection.
Saves connections to the file.
Populate the connection list combo box.
Set supported CRSs.
Populate supported formats.
Populate the layer list.
Populate times.
List of image formats (encodings) supported by provider.
Returns currently selected cache load control.
Returns currently selected Crs.
Returns currently selected format.
Server CRS supported for currently selected layer item(s)
List of formats supported for currently selected layer item(s)
List of times (temporalDomain timePosition/timePeriod for currently selected layer item(s)
Returns currently selected time.
Set the server connection combo box to that stored in the config file.
show whatever error is exposed.
Set status message to theMessage.
Connection info for selected connection.
Name for selected connection.
Embedded mode, without 'Close'.
layer name derived from latest layer selection (updated as long as it's not edited manually)
Connections manager mode.
Service name.
URI for selected connection.
|
http://qgis.org/api/classQgsOWSSourceSelect.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
26 January 2011 18:00 [Source: ICIS news]
HOUSTON (ICIS)--Here is Wednesday's midday Americas markets summary:
CRUDE: Mar WTI: $86.79/bbl, up 60 cents; Mar Brent: $97.21/bbl, up $1.96
WTI crude futures rose in response to the weekly supply statistics from the EIA showing a much greater-than-forecast build in crude and gasoline. The rally was inspired by financial flow into commodities, tracking a weaker dollar. The negative Brent-WTI spread widened over $10.00.
RBOB: Feb: $2.3977/gal, up 5.50 cents
Reformulated gasoline blendstock for oxygenate blending (RBOB) climbed amid reports of significant turnarounds on gasoline-making units at two large US Gulf refineries, despite the larger-than-forecast build in crude oil inventories and a large build in gasoline inventories.
NATURAL GAS: Feb: $4.433/MMBtu, down 4.0 cents
Natural gas prompt-month futures dropped for the third consecutive day on forecasts for moderate temperatures, which would likely reduce demand for heating. Gas fell about 1.8% on forecasts of normal temperatures for the
ETHANE: down at 61.00-62.25 cents/gal
Mont Belvieu ethane dropped despite crude moving higher, and traders were not sure of the reason. An
AROMATICS: benzene up at $4.30-4.40/gal FOB
US benzene spot prices were discussed within a higher range this morning, following stronger prices in Europe which have surged on production issues. The range was up from a trade at $4.27/gal DDP (delivery, duty paid) the previous day.
OLEFINS: ethylene up at 43-44 cents/lb; RGP flat at 72-73 cents/lb
US ethylene bid/offers for January moved slightly up despite lower feedstock prices.
|
http://www.icis.com/Articles/2011/01/26/9429713/noon-snapshot-americas-markets-summary.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
Ruby 1.9 adds Fibers for lightweight concurrency.
def nest(a)
  if a > 0
    Fiber.yield a
    nest(a - 1)
  end
end

@nester = Fiber.new do
  nest(10)
end

while (a = @nester.resume) do
  puts a
end
|
http://www.infoq.com/news/2007/08/ruby-1-9-fibers
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
"Randy.Dunlap" <[email protected]> writes:> On Thu, 18 May 2006 10:49:36 -0500 Serge E. Hallyn wrote:>>> Replace references to system_utsname to the per-process uts namespace>> where appropriate. This includes things like uname.>> >> Changes: Per Eric Biederman's comments, use the per-process uts namespace>> for ELF_PLATFORM, sunrpc, and parts of net/ipv4/ipconfig.c>> >> Signed-off-by: Serge E. Hallyn <[email protected]>>> OK, here's my big comment/question. I want to see <nodename> increased to> 256 bytes (per current POSIX), so each field of struct <variant>_utsname> needs be copied individually (I think) instead of doing a single> struct copy.Where is it specified? Looking at the spec as SUSV3 I don't see a sizespecified for nodename.> I've been working on this for the past few weeks (among other> things). Sorry about the timing.> I could send patches for this against mainline in a few days,> but I'll be glad to listen to how it would be easiest for all of us> to handle.>> I'm probably a little over half done with my patches.> They will end up adding a lib/utsname.c that has functions for:> put_oldold_uname() // to user> put_old_uname() // to user> put_new_uname() // to user> put_posix_uname() // to userSounds reasonable, if we really need a 256 byte nodename.As long as they take a pointer to the appropriate utsnamestructure these patches should not fundamentally conflict.Eric-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to [email protected] majordomo info at read the FAQ at
|
http://lkml.org/lkml/2006/5/19/24
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
FYI, today i committed a scheduler performance fix that has a number of
commit prerequisites for -stable integration. Those commits are not
marked -stable.

Previously, in similar situations, i solved it by email-forwarding the
prereq commits to
stable@[email protected]> # .32.x: a1f84a3: sched: Check for an idle shared cache
Cc: <[email protected]> # .32.x: 1b9508f: sched: Rate-limit newidle
Cc: <[email protected]> # .32.x: fd21073: sched: Fix affinity logic
Cc: <[email protected]> # .32.x
LKML-Reference: <[email protected]>

and i'm wondering whether this tagging scheme is fine with your -stable
scripting, etc.

A further question is, i can see using this tagging scheme in the future
in merge commits log messages too - will your scripts notice that
[email protected]> # .32.x: 83f5b01: rcu: Fix long-grace-period race
Signed-off-by: Ingo Molnar <mingo@elte.

( Sidenote: i wouldnt go as far as to generate null Git commits to mark
backports after the fact - this scheme is for a series of commits that
get 'completed' - there's usually a final followup commit that can
embedd this information. )

	Ingo

---------------------------->
From eae0c9dfb534cb3449888b9601228efa6480fdb5 Mon Sep 17 00:00:00 2001
From: Mike Galbraith <[email protected]>
Date: Tue, 10 Nov 2009 03:50:02 +0100
Subject: [PATCH] sched: Fix and clean up rate-limit newidle code

Commit 1b9508f, "Rate-limit newidle" has been confirmed to fix
the netperf UDP loopback regression reported by Alex Shi.

This is a cleanup and a fix:

 - moved to a more out of the way spot

 - fix to ensure that balancing doesn't try to balance
   runqueues which haven't gone online yet, which can
   mess up CPU enumeration during boot.

Reported-by: Alex Shi <[email protected]>
Reported-by: Zhang, Yanmin <[email protected]>
Signed-off-by: Mike Galbraith <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: <[email protected]> # .32.x: a1f84a3: sched: Check for an idle shared cache
Cc: <[email protected]> # .32.x: 1b9508f: sched: Rate-limit newidle
Cc: <[email protected]> # .32.x: fd21073: sched: Fix affinity logic
Cc: <[email protected]> # .32.x
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
---
 kernel/sched.c |   28 +++++++++++++++-------------
 1 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 23e3535..ad37776 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2354,17 +2354,6 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state,
 	if (rq != orig_rq)
 		update_rq_clock(rq);

-	if (rq->idle_stamp) {
-		u64 delta = rq->clock - rq->idle_stamp;
-		u64 max = 2*sysctl_sched_migration_cost;
-
-		if (delta > max)
-			rq->avg_idle = max;
-		else
-			update_avg(&rq->avg_idle, delta);
-		rq->idle_stamp = 0;
-	}
-
 	WARN_ON(p->state != TASK_WAKING);
 	cpu = task_cpu(p);

@@ -2421,6 +2410,17 @@ out_running:
 #ifdef CONFIG_SMP
 	if (p->sched_class->task_wake_up)
 		p->sched_class->task_wake_up(rq, p);
+
+	if (unlikely(rq->idle_stamp)) {
+		u64 delta = rq->clock - rq->idle_stamp;
+		u64 max = 2*sysctl_sched_migration_cost;
+
+		if (delta > max)
+			rq->avg_idle = max;
+		else
+			update_avg(&rq->avg_idle, delta);
+		rq->idle_stamp = 0;
+	}
 #endif
 out:
 	task_rq_unlock(rq, &flags);
@@ -4098,7 +4098,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 	unsigned long flags;
 	struct cpumask *cpus = __get_cpu_var(load_balance_tmpmask);

-	cpumask_setall(cpus);
+	cpumask_copy(cpus, cpu_online_mask);

 	/*
 	 * When power savings policy is enabled for the parent domain, idle
@@ -4261,7 +4261,7 @@ load_balance_newidle(int this_cpu, struct rq *this_rq, struct sched_domain *sd)
 	int all_pinned = 0;
 	struct cpumask *cpus = __get_cpu_var(load_balance_tmpmask);

-	cpumask_setall(cpus);
+	cpumask_copy(cpus, cpu_online_mask);

 	/*
	 * When power savings policy is enabled for the parent domain, idle
@@ -9522,6 +9522,8 @@ void __init sched_init(void)
 		rq->cpu = i;
 		rq->online = 0;
 		rq->migration_thread = NULL;
+		rq->idle_stamp = 0;
+		rq->avg_idle = 2*sysctl_sched_migration_cost;
 		INIT_LIST_HEAD(&rq->migration_queue);
 		rq_attach_root(rq, &def_root_domain);
 #endif
|
http://lkml.org/lkml/2009/11/9/470
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
When building a C# interface, you may find a need for both public and internal methods, such as:
public class MyClass : IMyInterface
{
    public void MyPublicMethod() { }
    internal void MyInternalMethod() { }
}

public interface IMyInterface
{
    public void MyPublicMethod();
    internal void MyInternalMethod();
}
(For simplicity in this example, we’ll only discuss methods, but this also works for properties, events and indexers.)
Unfortunately, the code above will not compile due to the following errors:
Compiler Error CS0106: The modifier ‘public’ is not valid for this item
Compiler Error CS0106: The modifier ‘internal’ is not valid for this item
Access modifiers such as “public” and “internal” are not allowed for interface members. That’s because the access modifier for the interface itself determines the access level for all members defined in the interface. Hence, adding an access modifier to an interface member would be redundant. But that also means that you cannot mix public and internal members in the same interface.
Separate Public and Internal Interfaces
The solution is to create two interfaces, one public and one internal, such as:
public class MyClass : IMyPublicInterface, IMyInternalInterface
{
    public void MyPublicMethod() { }
    internal void MyInternalMethod() { }
}

public interface IMyPublicInterface
{
    void MyPublicMethod();
}

internal interface IMyInternalInterface
{
    void MyInternalMethod();
}
Unfortunately, this new code fails to compile due to another error:
Compiler Error CS0737: ‘InternalInterface.MyClass’ does not implement interface member ‘InternalInterface.IMyInternalInterface.MyInternalMethod()’. ‘InternalInterface.MyClass.MyInternalMethod()’ cannot implement an interface member because it is not public.
A method that implements an interface member must have public accessibility. So then how do you create an internal interface?
Explicit Interface Members
The trick is to use an explicit interface member implementation.
“An explicit interface member implementation is a method, property, event, or indexer declaration that references a fully qualified interface member name. Because explicit interface member implementations are not accessible through class or struct instances, they allow interface implementations to be excluded from the public interface of a class or struct. This is particularly useful when a class or struct implements an internal interface that is of no interest to a consumer of that class or struct.”
So in this example, you would define the internal method in the class with its interface prefix, and remove the internal access modifier, as shown:
void IMyInternalInterface.MyInternalMethod() { }
Internal Interface Example
So here is the working example of an object with both internal and public interfaces:
public class MyClass : IMyPublicInterface, IMyInternalInterface
{
    public void MyPublicMethod() { }
    void IMyInternalInterface.MyInternalMethod() { }
}

public interface IMyPublicInterface
{
    void MyPublicMethod();
}

internal interface IMyInternalInterface
{
    void MyInternalMethod();
}
Two Interfaces Means Two References
Don’t forget that you will need one reference for each interface. This means each object will have two references: a reference to the object’s public interface, and a reference to the object’s internal interface:
MyClass obj = new MyClass();
IMyPublicInterface objPub = obj;
IMyInternalInterface objInt = obj;
objPub.MyPublicMethod();
objInt.MyInternalMethod();
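Note that the internal interface is invisible outside its own assembly. The sketch below is illustrative (it is not from the original listings): it shows what a consumer in a different, referencing assembly can and cannot do. Only the public interface is usable there; the commented-out lines would not even compile, because IMyInternalInterface is internal to the defining assembly and the explicit implementation keeps MyInternalMethod off the class's public surface.

// In a separate, referencing assembly -- illustrative sketch only.
MyClass obj = new MyClass();
IMyPublicInterface objPub = obj;
objPub.MyPublicMethod();                // fine: public surface

// IMyInternalInterface objInt = obj;   // compile error: type not accessible
// obj.MyInternalMethod();              // compile error: no such public member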
I know it’s not the point of the article but I would take any desire to have both public and internal methods on an interface as a ‘code smell’ that you are violating the Single Responsibility Principle.
@Chris: I respectfully disagree. Having a public and internal interface is no smellier than having both public and internal members in a class, which of course is very common. Note that API clients will see only a single interface. The internal interface gives the DLL authors a backdoor into their own code.
An internal interface is a desirable thing when you want the interface for dependency injection but don’t want public exposure. I’m not certain why odd syntax is required just to get an internal-only interface implemented. I.e, I feel that this should be valid:
======
public class MyClass : IMyInternalInterface
{
internal void MyInternalMethod() { }
}
internal interface IMyInternalInterface
{
void MyInternalMethod();
}
======
There’s no obvious excuse here for the compiler to complain that MyInternalMethod needs to be public, as the interface itself is internal. Yet, it doesn’t work this way; the explicit implementation seems to be needed.
Thanks for this post!
The code in your example is wrong. You say “create two interfaces, one public and one internal”, but then you write an example with two public interfaces and no internal interface. I think you meant to write “internal interface IMyInternalInterface”. If the interface is public, someone referencing your library can still cast an object instance as the public interface and call the “hidden” method.
Another thing: Can’t you have the internal interface implement the public interface so internally you can just cast all references as the internal interface for access to all methods?
@CSharper: Typo fixed, thanks for noticing it. As for your second comment, I’m not sure what you mean, please explain. The point of this article is how to provide a public interface to public methods, and an internal interface to different internal methods.
I think CSharper means like so…
internal interface IMyInternalInterface : IMyPublicInterface
{
void MyInternalMethod();
}
var _obj = (IMyInternalInterface)(new MyClass());
_obj.MyInternalMethod();
_obj.MyPublicMethod();
… now MyClass can be passed around internally as IMyInternalInterface and do everything…and externally it can be passed around like IMyPublicInterface and do only those non-internal things
|
http://www.csharp411.com/c-internal-interface/
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
24 June 2011 16:19 [Source: ICIS news]
LONDON (ICIS)--Shell plans to restart the 2A olefins unit at its petrochemicals site in Wesseling, Germany.
Shell said the unit would be restarted on Monday following the completion of maintenance work that began on 14 June.
The company did not disclose capacity details.
According to ICIS plants and projects, Shell has two crackers at Wesseling: 2A, with a capacity of 260,000 tonnes/year of ethylene; and 2B, with an ethylene capacity of 240,000 tonnes/year.
Shell said last year it plans to close 2B by the end of 2011 because the unit is no longer competitive.
Nigel Davis contributed to this article
|
http://www.icis.com/Articles/2011/06/24/9472555/shell-to-restart-olefins-unit-at-german-wesseling-site-on-27.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
code:
#include "list.h"main(int argc, char *argv[]) { int i, N = atoi(argv[1]), M = atoi(argv[2]); Node t, x; initNodes(N); for (i = 2, x = newNode(1); i <= N; i++) { t = newNode(i); insertNext(x, t); x = t; } while (x != Next(x)) { for (i = 1; i < M ; i++) x = Next(x); freeNode(deleteNext(x)); } printf("%d\n", Item(x)); }.
#include "list.h"main(C cArg, SZ rgszArg[]) { I iNode, cNodes = atoi(rgszArg[1]), cNodesToSkip = atoi(rgszArg[2]); PNODE pnodeT, pnodeCur; InitNodes(cNodes); for (iNode = 2, pnodeCur = PnodeNew(1); iNode <= cNodes ; iNode++) { pnodeT = PnodeNew(i); InsertNext(pnodeCur, pnodeT); pnodeCur = pnodeT; })); } printf("%d\n", Item(nodeCur)); }
So what changed? First off, all the built-in types are gone. Hungarian can use them, but not for most reasons. Next, the hungarian types "I", "C", and "SZ" are used to replace indexes, counts and strings. Obviously the C runtime library functions remain the same. Next, I applied the appropriate prefix - i for indices, c for counts, p<type> for "pointer to <type>". The Node type was renamed PNODE, in Hungarian, all types are uppercased. In Hungarian, the name of the routine describes the return value - so a routine that returns a "pointer to foo" is named "Pfoo<something relevent to which pfoo is being returned>".
Looking at the transformation, I'm not sure it's made the code any easier to read, or more maintainable. The next examples will try to improve things.
|
http://blogs.msdn.com/b/larryosterman/archive/2004/11/09/254561.aspx
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
On Thu, Jul 19, 2007 at 12:03:30AM +0200, Aurelien Jacobs wrote:
> On Wed, 18 Jul 2007 23:30:41 +0200
> Diego Biurrun <diego at biurrun.de> wrote:
>
> > Attached patch makes the compilation of libavformat/framehook.c
> > conditional to CONFIG_VHOOK.  OK to apply?
> >
> > --- ffmpeg.c  (revision 9734)
> > +++ ffmpeg.c  (working copy)
> > @@ -607,8 +607,10 @@
> >              picture2 = picture;
> >          }
> >
> > +#ifdef CONFIG_VHOOK
> >          frame_hook_process(picture2, dec->pix_fmt, dec->width, dec->height,
> >                             1000000 * ist->pts / AV_TIME_BASE);
> > +#endif
>
> Here you could use if(ENABLE_VHOOK) instead of #ifdef.
> Except that point, the patch looks fine to me.

Applied with that change.

Diego
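For readers unfamiliar with the suggestion above: ENABLE_* macros of this kind are typically defined to 0 or 1, so a plain if() keeps the disabled call visible to the compiler (it still gets syntax- and type-checked) while the optimizer removes it as dead code. The snippet below is a minimal illustration of that pattern, not FFmpeg's actual code; the macro name and stub function are made up.

#include <stdio.h>

/* Stand-in for a build-system-generated 0/1 switch. */
#define ENABLE_VHOOK 0

static void frame_hook_process_stub(int frame)
{
    printf("processing frame %d\n", frame);
}

int main(void)
{
    /* Unlike #ifdef, the call below is always compiled and type-checked,
       but with ENABLE_VHOOK == 0 the optimizer drops it as dead code. */
    if (ENABLE_VHOOK)
        frame_hook_process_stub(42);
    return 0;
}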
|
http://ffmpeg.org/pipermail/ffmpeg-devel/2007-July/029521.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
Hello bug-automake,

I looked at how Vala was integrated into Automake, and it seemed to me
that it was very fishy. It works nicely for small programs. I tried to
make a program like Shotwell use Automake. It is still a very small
project. But that was a failure, just because the compilation scheme had
different flags and sources depending on the operating system. And each
time I tried a work-around, I got stuck on another bug. In the end I did
not succeed.

Here are some test cases that illustrate the problems (all of them
should fail on both the master branch and in the 1.11 release):

First bug. configure.ac:

  AC_INIT
  AM_INIT_AUTOMAKE(nonesuch, nonesuch)
  AM_PROG_VALAC
  AC_PROG_CC
  AC_OUTPUT(Makefile)

And Makefile.am:

  bin_PROGRAMS=a b
  a_SOURCES=a.vala
  b_SOURCES=a.vala

This one produces two rules for $(srcdir)/a.c. Make will complain about it:

  Makefile:283: warning: overriding commands for target `a.c'
  Makefile:275: warning: ignoring old commands for target `a.c'

It is also important to know that we might want different a_VALAFLAGS
and b_VALAFLAGS. This means that a.vala should produce a-a.c and b-a.c
in this case. The compilation in Vala depends on what other sources are
included. So it means that:

  a_SOURCES=a.vala c.vala
  b_SOURCES=a.vala b.vala

with the same _VALAFLAGS might still generate, I believe, different a.c,
due to the fact that c.vala and b.vala might define two different
environments used by a.vala. Though this is to be checked against the
specifications of Vala. But I am not sure they ensure that the result
should be the same. Otherwise they would not require all the modules as
parameters and would just translate file by file.

Second bug, configure.ac:

  AC_INIT
  AM_INIT_AUTOMAKE(nonesuch, nonesuch)
  AM_PROG_VALAC
  AC_PROG_CC
  AM_CONDITIONAL([FOO], [(exit 1)])
  AC_OUTPUT(Makefile)

Makefile.am:

  bin_PROGRAMS=a
  a_SOURCES=a.vala
  if FOO
  a_SOURCES+=b.vala
  endif

In this one DIST_COMMON will have both a.c and b.c. However, both rules
for $(srcdir)/a.c and $(srcdir)/b.c will depend on a_vala.stamp. However
the rule for this last one will not build b.c, because it will use the
sources in $(a_SOURCES). So "make dist" will simply fail. Perhaps b.vala
uses a package that is not on the machine where it is built. But still
you want to be able to make the distribution. So it should not fail.

Third bug (considering that bug 2 is fixed), configure.ac:

  AC_INIT
  AM_INIT_AUTOMAKE(nonesuch, nonesuch)
  AM_PROG_VALAC
  AC_PROG_CC
  AC_ARG_ENABLE([foo], AS_HELP_STRING([foo])], [foo=yes], [foo=no])
  AM_CONDITIONAL([FOO], [test "${foo:-no}" = yes])
  AC_OUTPUT(Makefile)

Makefile.am:

  bin_PROGRAMS=a
  a_SOURCES=a.vala
  if FOO
  a_VALAFLAGS=--pkg=something
  else
  a_VALAFLAGS=--pkg=someotherthing
  endif

Try to compile with --disable-foo first, and then reconfigure with
--enable-foo and compile it again. There is a problem because a.vala was
compiled to a.c thinking that package something was present, and it will
probably not compile with someotherthing. a.c will not be rebuilt
because a_vala.stamp is there, and the only test is a "-f $@" to rebuild
or not.

Fourth bug, configure.ac:

  AC_INIT
  AM_INIT_AUTOMAKE(nonesuch, nonesuch)
  AM_PROG_VALAC
  AC_PROG_CC
  AC_OUTPUT(Makefile)

Makefile.am:

  bin_PROGRAMS=a
  a_SOURCES=a.vala

then

  $ touch a.vala
  $ aclocal-1.11a
  $ autoconf
  $ automake-1.11a
  $ mkdir foo
  $ cd foo
  $ ../configure
  $ make
  make: *** No rule to make target `../a_vala.stamp', needed by `../a.c'.  Stop.

Of course, it seems that the bootstrap does not work with VPATH.
See the rule in Makefile.in:

  a_vala.stamp: $(a_SOURCES)
          $(VALAC) $(AM_VALAFLAGS) $(VALAFLAGS) -C $(a_SOURCES)
          touch $@

First, it should be $(srcdir)/a_vala.stamp, since this file is in
DIST_COMMON. Then there should be a variable generated by Automake that
contains all that is in $(a_SOURCES) but with $(srcdir) in front, so
that the first command of the rule reads the right files.

Fifth bug: AM_PROG_VALAC should require AC_PROG_CC. It does not seem to
do that. As you can see in all my test cases, I needed to put
AC_PROG_CC manually in the configure.ac. configure.ac:

  AC_INIT
  AM_INIT_AUTOMAKE(nonesuch, nonesuch)
  AM_PROG_VALAC
  AC_OUTPUT(Makefile)

Makefile.am:

  bin_PROGRAMS=a
  a_SOURCES=a.vala

I think most of the problems are due to the fact that Automake tries to
ship derived sources. While it can make sense with Yacc or Lex because
of the direct 1-file to 1-file conversion, it does not seem to really
scale to Vala. After all, "derived sources" here are more objects than
sources. Nobody wants to read or modify the files generated by Vala. I
think it is at least legitimate for the user not to want to ship the
generated .c files, and that is not possible with the current
implementation.

Besides, it seems that AM_PROG_VALAC requires valac to be present. It
does not define it as `$MISSING valac' when it is not there. Why is it
required when the .c files are in DIST_COMMON?

The definition of a language should be able to be declared as derived
sources, but be able not to push the result into DIST_COMMON like the
current implementation does. It should be at the discretion of the
language definition. (see handle_single_transform)

Then, for the third bug, the stamp technique does not work in case of a
change of conditionals. Maybe there should be one stamp for each set of
conditionals (all but only those on which the program is dependent), so
that at the end we are sure the files are recompiled in case the
conditional definitions changed.

Best regards,

--
Valentin David
address@hidden
|
http://lists.gnu.org/archive/html/bug-automake/2010-09/msg00013.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
Date Handling Tip Part 2: Get the Month Name as Extension Method
After my last post on the GetMonthName, I had a question on how to add this method to the DateTime class as an Extension method. I thought this would make a good follow-up, so here is how you can accomplish this.
First, let's do the C# extension method. Add a new class to your project. For this example, I called the class "MyExtensions". This class needs to be defined as a static class. Next add a public method called GetMonthName. This is a static method and accepts a DateTime object as a parameter. Remember that with Extension methods you need to use the keyword "this" in front of the parameter. Below is the C# example:
using System.Globalization;

public static class MyExtensions
{
public static string GetMonthName(this DateTime dateValue)
{
DateTimeFormatInfo info = new DateTimeFormatInfo();
return info.MonthNames[dateValue.Month - 1];
}
}
To use this extension method you can write code like the following:
DateTime value;
value = DateTime.Now;
MessageBox.Show(value.GetMonthName());
Now, let's take a look at how to accomplish the same thing in Visual Basic. With Visual Basic there is no such thing as a Shared class, so instead you use a Module. So, we add a module to our project called "MyExtensions" to our project. You have to import the System.Runtime.CompilerServices namespace as we will be attaching an Attribute to our extension method. Below is what our new module now looks like.
Imports System.Runtime.CompilerServices
Imports System.Globalization
Module MyExtensions
<Extension()> _
Public Function GetMonthName(ByVal dateValue As Date) As String
Dim info As New DateTimeFormatInfo
Return info.MonthNames(dateValue.Month - 1)
End Function
End Module
Now, to use this new extension method simply write code like the following:
Dim value As Date
value = Now
MessageBox.Show(value.GetMonthName())
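As an aside, the framework can also do this lookup for you. The alternative below is my own sketch, not part of the original tip: it returns a culture-aware month name (whereas new DateTimeFormatInfo() always gives the invariant, English names), and it assumes the using System.Globalization directive shown earlier.

public static string GetMonthNameForCurrentCulture(this DateTime dateValue)
{
    // dateValue.ToString("MMMM") would work just as well here.
    return CultureInfo.CurrentCulture.DateTimeFormat.GetMonthName(dateValue.Month);
}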
Good Luck With Your Coding,
Paul Sheriff
** SPECIAL OFFER FOR MY BLOG READERS **
Visit for a free eBook on "Fundamentals of N-Tier".
|
http://weblogs.asp.net/psheriff/date-handling-tip-part-2-get-the-month-name-as-extension-method
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
Sometimes you're so familiar with a class you stop paying attention to it. If you could write the documentation for
java.lang.Foo, and Eclipse will helpfully autocomplete the functions for you, why would you ever need to read its Javadoc? Such was my experience with
java.lang.Math, a class I thought I knew really, really well. Imagine my surprise, then, when I recently happened to be reading its Javadoc for possibly the first time in half a decade and realized that the class had almost doubled in size with 20 new methods I'd never heard of. Obviously it was time to take another look.
Version 5 of the Java™ Language Specification added 10 new methods to
java.lang.Math (and its evil twin
java.lang.StrictMath), and Java 6 added another 10. In this article, I focus on the more purely mathematical functions provided, such as
log10 and
cosh. In Part 2, I'll explore the functions more designed for operating on floating point numbers
as opposed to abstract real numbers.
The distinction between an abstract real number such as π or 0.2 and a Java
double is an important one. First of all, the Platonic ideal of the number is infinitely precise, while the Java representation is limited to a fixed number of bits. This is important when you deal with very large and very small numbers. For example, the number 2,000,000,001 (two billion and one) can be represented exactly as an
int, but not as a
float. The closest you can get in a float is 2.0E9 — that is, two billion.
doubles do better because they have more bits (which is one reason you should almost always use
doubles instead of
floats); but there are still practical limits to how accurate they can be.
The second limitation of computer arithmetic (the Java language's and others') is that it's based on binary rather than decimal. Fractions such as 1/5 and 7/50 that can be represented exactly in decimal (0.2 and 0.14, respectively) become repeating fractions when expressed in binary notation. This is exactly like the way 1/3 becomes 0.3333333... when expressed in decimal. In base 10, any fraction whose denominator has the prime factors 5 and 2 (and no others) is exactly expressible. In base 2, only fractions whose denominators are powers of 2 are exactly expressible: 1/2, 1/4, 1/8, 1/16, and so on.
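A quick way to see both limits is to run a couple of lines of Java. This snippet is mine, not one of the article's listings, but the values it prints follow directly from the representation issues just described.

public class PrecisionLimits {
    public static void main(String[] args) {
        int i = 2000000001;                  // fits exactly in an int...
        System.out.println((int) (float) i); // ...but rounds to 2000000000 as a float

        // 1/10 and 1/5 are repeating fractions in binary
        System.out.println(0.1 + 0.2);       // prints 0.30000000000000004
    }
}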
These imprecisions are one of the big reasons a math class is needed in the first place. Certainly you could define the trigonometric and other functions with Taylor series expansions using nothing more than the standard + and * operators and a simple loop, as shown in Listing 1:
Listing 1. Calculating sines with a Taylor series
public class SineTaylor {

    public static void main(String[] args) {
        for (double angle = 0; angle <= 4*Math.PI; angle += Math.PI/8) {
            System.out.println(degrees(angle) + "\t" + taylorSeriesSine(angle)
                    + "\t" + Math.sin(angle));
        }
    }

    public static double degrees(double radians) {
        return 180 * radians / Math.PI;
    }

    public static double taylorSeriesSine(double radians) {
        double sine = 0;
        int sign = 1;
        for (int i = 1; i < 40; i += 2) {
            sine += Math.pow(radians, i) * sign / factorial(i);
            sign *= -1;
        }
        return sine;
    }

    private static double factorial(int i) {
        double result = 1;
        for (int j = 2; j <= i; j++) {
            result *= j;
        }
        return result;
    }
}
This starts off well enough with only a small difference, if that, in the last decimal place:
0.0 0.0 0.0 22.5 0.3826834323650897 0.3826834323650898 45.0 0.7071067811865475 0.7071067811865475 67.5 0.923879532511287 0.9238795325112867 90.0 1.0000000000000002 1.0
However, as the angles increase, the errors begin to accumulate, and the naive approach no longer works so well:
630.0000000000003 -1.0000001371557132 -1.0 652.5000000000005 -0.9238801080153761 -0.9238795325112841 675.0000000000005 -0.7071090807463408 -0.7071067811865422 697.5000000000006 -0.3826922100671368 -0.3826834323650824
The Taylor series here actually proved more accurate than I expected. However as the
angle increases to 360 degrees, 720 degrees (4 pi radians), and more, the Taylor series requires progressively more terms for accurate computation. The more sophisticated algorithms used by
java.lang.Math avoid this.
The Taylor series is also inefficient compared to the built-in sine function of a modern desktop chip. Proper calculations of sine and other functions that are both accurate and fast require very careful algorithms designed to avoid accidentally turning small errors into large ones. Often these algorithms are embedded in hardware for even faster performance. For example, almost every X86 chip shipped in the last 10 years has hardware implementations of sine and cosine that the X86 VM can just call, rather than calculating them far more slowly based on more primitive operations. HotSpot takes advantage of these instructions to speed up trigonometry operations dramatically.
Right triangles and Euclidean norms
Every high school geometry student learns the Pythagorean theorem: the square of the length of hypotenuse of a right triangle is equal to the sum of the squares of the lengths of the legs. That is, c2 = a2 + b2
Those of us who stuck it out into college physics and higher math learned that this equation shows up a lot more than in just right triangles. For instance, it's also the square of the Euclidean norm on R2, the length of a two-dimensional vector, a part of the triangle inequality, and quite a bit more. (In fact, these are all really just different ways of looking at the same thing. The point is that Euclid's theorem is a lot more important than it initially looks.)
Java 5 added a
Math.hypot function to perform exactly this calculation, and it's a good example of why a library is helpful. The naive approach would look something like this:
public static double hypot(double x, double y) {
    return Math.sqrt(x*x + y*y);
}
The actual code is somewhat more complex, as shown in Listing 2. The first thing you'll note is that this is written in native C code for maximum performance. The second thing you should note is that it is going to great lengths to try to minimize any possible errors in this calculation. In fact, different algorithms are being chosen depending on the relative sizes of
x and
y.
Listing 2. The real code that implements
Math.hypot
/*
 * ====================================================
 * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
 *
 * Developed at SunSoft, a Sun Microsystems, Inc. business.
 * Permission to use, copy, modify, and distribute this
 * software is freely granted, provided that this notice
 * is preserved.
 * ====================================================
 */

#include "fdlibm.h"

#ifdef __STDC__
double __ieee754_hypot(double x, double y)
#else
double __ieee754_hypot(x,y)
	double x, y;
#endif
{
	double a=x,b=y,t1,t2,y1,y2,w;
	int j,k,ha,hb;

	ha = __HI(x)&0x7fffffff;	/* high word of x */
	hb = __HI(y)&0x7fffffff;	/* high word of y */
	if(hb > ha) {a=y;b=x;j=ha; ha=hb;hb=j;} else {a=x;b=y;}
	__HI(a) = ha;	/* a <- |a| */
	__HI(b) = hb;	/* b <- |b| */
	if((ha-hb)>0x3c00000) {return a+b;} /* x/y > 2**60 */
	k=0;
	if(ha > 0x5f300000) {	/* a>2**500 */
	   if(ha >= 0x7ff00000) {	/* Inf or NaN */
	       w = a+b;			/* for sNaN */
	       if(((ha&0xfffff)|__LO(a))==0) w = a;
	       if(((hb^0x7ff00000)|__LO(b))==0) w = b;
	       return w;
	   }
	   /* scale a and b by 2**-600 */
	   ha -= 0x25800000; hb -= 0x25800000;	k += 600;
	   __HI(a) = ha;
	   __HI(b) = hb;
	}
	if(hb < 0x20b00000) {	/* b < 2**-500 */
	    if(hb <= 0x000fffff) {	/* subnormal b or 0 */
	        if((hb|(__LO(b)))==0) return a;
	        t1=0;
	        __HI(t1) = 0x7fd00000;	/* t1=2^1022 */
	        b *= t1;
	        a *= t1;
	        k -= 1022;
	    } else {		/* scale a and b by 2^600 */
	        ha += 0x25800000;	/* a *= 2^600 */
	        hb += 0x25800000;	/* b *= 2^600 */
	        k -= 600;
	        __HI(a) = ha;
	        __HI(b) = hb;
	    }
	}
	/* medium size a and b */
	w = a-b;
	if (w>b) {
	    t1 = 0;
	    __HI(t1) = ha;
	    t2 = a-t1;
	    w  = sqrt(t1*t1-(b*(-b)-t2*(a+t1)));
	} else {
	    a  = a+a;
	    y1 = 0;
	    __HI(y1) = hb;
	    y2 = b - y1;
	    t1 = 0;
	    __HI(t1) = ha+0x00100000;
	    t2 = a - t1;
	    w  = sqrt(t1*y1-(w*(-w)-(t1*y2+t2*b)));
	}
	if(k!=0) {
	    t1 = 1.0;
	    __HI(t1) += (k<<20);
	    return t1*w;
	} else return w;
}
Actually, whether you end up in this particular function or one of a few other similar ones depends on details of the JVM on your platform. However, more likely than not this is the code that's invoked in Sun's standard JDK. (Other implementations of the JDK are free to improve on this if they can.)
This code (and most of the other native math code in Sun's Java Development Library) comes
from the open source
fdlibm library written at Sun about 15 or
so years ago. This library is designed to implement the IEE754 floating point precisely and to have very accurate calculations, even at the cost of some performance.
Logarithms in base 10
A logarithm tells you what power a base number must be raised to in order to produce a given value. That is, it is the inverse of the
Math.pow() function. Logs base 10 tend to appear in engineering applications. Logs base e (natural logarithms) appear in the calculation of compound interest, and numerous scientific and mathematical applications. Logs base 2 tend to show up in algorithm analysis.
The
Math class has had a natural logarithm function since Java 1.0. That is, given an argument x, the natural logarithm returns the power to which e must be raised to give the value x. Sadly, the Java language's (and C's and Fortran's and Basic's) natural logarithm function is misnamed as
log(). In every math textbook I've ever read, log is a base-10 logarithm, while ln is a base e logarithm and lg is a base-2 logarithm. It's too late to fix this now, but Java 5 did add a
log10() function that takes the logarithm base 10 instead of base e.
Listing 3 is a simple program to print the log-base 2, 10, and e of the integers from 1 to 100:
Listing 3. Logarithms in various bases from 1 to 100
public class Logarithms {

    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
            System.out.println(i + "\t" + Math.log10(i) + "\t" + Math.log(i)
                    + "\t" + lg(i));
        }
    }

    public static double lg(double x) {
        return Math.log(x) / Math.log(2.0);
    }
}
Here are the first 10 rows of the output:
1 0.0 0.0 0.0 2 0.3010299956639812 0.6931471805599453 1.0 3 0.47712125471966244 1.0986122886681096 1.584962500721156 4 0.6020599913279624 1.3862943611198906 2.0 5 0.6989700043360189 1.6094379124341003 2.321928094887362 6 0.7781512503836436 1.791759469228055 2.584962500721156 7 0.8450980400142568 1.9459101490553132 2.807354922057604 8 0.9030899869919435 2.0794415416798357 3.0 9 0.9542425094393249 2.1972245773362196 3.1699250014423126 10 1.0 2.302585092994046 3.3219280948873626
Math.log10() has the usual caveats of logarithm functions: taking the log of any negative number returns NaN, and the log of zero is negative infinity.
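If you want to see those edge cases for yourself, a three-line check (my own, not one of the article's listings) covers the three regimes:

public class Log10EdgeCases {
    public static void main(String[] args) {
        System.out.println(Math.log10(1000.0)); // 3.0
        System.out.println(Math.log10(0.0));    // -Infinity
        System.out.println(Math.log10(-1.0));   // NaN
    }
}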
Cube roots
I can't say that I've ever needed to take a cube root in my life, and I'm one of those
rare people who does use algebra and geometry on a daily basis, not to mention the occasional foray into calculus, differential equations, and even abstract algebra. Consequently, the usefulness of this next function escapes me. Nonetheless, should you find an unexpected need to take a cube root somewhere, you now can — as of Java 5 — with the
Math.cbrt() method. Listing 4 demonstrates by taking the cube roots of the integers from -5 to 5:
Listing 4. Cube roots from -5 to 5
public class CubeRoots {

    public static void main(String[] args) {
        for (int i = -5; i <= 5; i++) {
            System.out.println(Math.cbrt(i));
        }
    }
}
Here's the output:
-1.709975946676697 -1.5874010519681996 -1.4422495703074083 -1.2599210498948732 -1.0 0.0 1.0 1.2599210498948732 1.4422495703074083 1.5874010519681996 1.709975946676697
As this output demonstrates, one nice feature of cube roots compared to square roots: Every real number has exactly one real cube root. This function only returns NaN when its argument is NaN.
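That single real root is also a reason to prefer Math.cbrt() over raising to the 1/3 power: Math.pow() with a negative base and a non-integer exponent returns NaN. A small check (my example, not one of the article's listings):

public class CubeRootVsPow {
    public static void main(String[] args) {
        System.out.println(Math.cbrt(-8.0));           // -2.0
        System.out.println(Math.pow(-8.0, 1.0 / 3.0)); // NaN
    }
}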
The hyperbolic trigonometric functions
The hyperbolic trigonometric functions are to hyperbolae as the trigonometric functions are to circles. That is, imagine you plot these points on a Cartesian plane for all possible values of t:
x = r cos(t)
y = r sin(t)
You will have drawn a circle of radius r. By contrast, suppose you instead use sinh and cosh, like so:
x = r cosh(t)
y = r sinh(t)
You will have drawn a rectangular hyperbola whose point of closest approach to the origin is r.
Another way of thinking of it: Where sin(x) can be written as (eix - e-ix)/2i and cos(x) can be written as (eix + e-ix)/2 , sinh and cosh are what you get when you remove the imaginary unit from those formulas. That is, sinh(x) = (ex - e-x)/2 and cosh(x) = (ex + e-x)/2.
Java 5 adds all three:
Math.cosh(),
Math.sinh(), and
Math.tanh(). The inverse hyperbolic trigonometric functions — acosh, asinh, and atanh — are not yet included.
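If you need the missing inverse functions today, the textbook logarithmic identities are easy to code up yourself. The sketch below is mine, not part of java.lang.Math, and these naive formulas can lose precision for large-magnitude arguments or for atanh near ±1, so treat it as a starting point rather than a library-quality implementation.

public class InverseHyperbolic {

    // asinh(x) = ln(x + sqrt(x^2 + 1)), defined for all real x
    public static double asinh(double x) {
        return Math.log(x + Math.sqrt(x * x + 1.0));
    }

    // acosh(x) = ln(x + sqrt(x^2 - 1)), defined for x >= 1
    public static double acosh(double x) {
        return Math.log(x + Math.sqrt(x * x - 1.0));
    }

    // atanh(x) = ln((1 + x) / (1 - x)) / 2, defined for -1 < x < 1
    public static double atanh(double x) {
        return 0.5 * Math.log((1.0 + x) / (1.0 - x));
    }

    public static void main(String[] args) {
        double x = 0.75;
        System.out.println(asinh(Math.sinh(x))); // ~0.75
        System.out.println(acosh(Math.cosh(x))); // ~0.75
        System.out.println(atanh(Math.tanh(x))); // ~0.75
    }
}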
In nature, cosh(z) is the equation for the shape of a hanging rope connected at two ends, known as a catenary. Listing 5 is a simple program that draws a catenary using the
Math.cosh function:
Listing 5. Drawing a catenary with
Math.cosh()
import java.awt.*;

public class Catenary extends Frame {

    private static final int WIDTH = 200;
    private static final int HEIGHT = 200;
    private static final double MIN_X = -3.0;
    private static final double MAX_X = 3.0;
    private static final double MAX_Y = 8.0;

    private Polygon catenary = new Polygon();

    public Catenary(String title) {
        super(title);
        setSize(WIDTH, HEIGHT);
        for (double x = MIN_X; x <= MAX_X; x += 0.1) {
            double y = Math.cosh(x);
            int scaledX = (int) (x * WIDTH/(MAX_X - MIN_X) + WIDTH/2.0);
            int scaledY = (int) (y * HEIGHT/MAX_Y);
            // in computer graphics, y extends down rather than up as in
            // Cartesian coordinates, so we have to flip
            scaledY = HEIGHT - scaledY;
            catenary.addPoint(scaledX, scaledY);
        }
    }

    public static void main(String[] args) {
        Frame f = new Catenary("Catenary");
        f.setVisible(true);
    }

    public void paint(Graphics g) {
        g.drawPolygon(catenary);
    }
}
Figure 1 shows the drawn curve:
The sinh, cosh, and tanh functions also all appear in various calculations in special and general relativity.
Signedness
The
Math.signum function converts positive numbers into 1.0,
negative numbers into -1.0, and zeroes into zeroes. In essence, it extracts just the sign from a number. This can be useful when you're implementing the
Comparable interface.
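For instance, a compareTo() can collapse a double difference straight to -1, 0, or 1. The class below is my own illustration of that idiom, not code from the article; for production use, Double.compare() is the more robust choice, because the subtraction misbehaves when either value is NaN.

public class Measurement implements Comparable<Measurement> {

    private final double value;

    public Measurement(double value) {
        this.value = value;
    }

    public int compareTo(Measurement other) {
        // signum turns the (possibly tiny or huge) difference into -1.0, 0.0, or 1.0
        return (int) Math.signum(this.value - other.value);
    }
}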
There's a
float and a
double version
to maintain the type. The reason for this rather obvious function is to handle special
cases of floating-point math: NaN, and positive and negative zero. NaN is passed through unchanged (the signum of NaN is NaN), and positive and negative zero should return positive and negative zero, respectively. For example, suppose you were to implement this function naively as in Listing 6:
Listing 6. Buggy implementation of
Math.signum
public static double signum(double x) {
    if (x == 0.0) return 0;
    else if (x < 0.0) return -1.0;
    else return 1.0;
}
First, this method would turn all negative zeroes into positive zeroes. (Yes, negative zeroes are a little weird, but they are a necessary part of the IEEE 754 specification.) Second, it would claim that NaN is positive. The actual implementation shown in Listing 7 is more sophisticated and careful for handling these weird corner cases:
Listing 7. The real, correct implementation of
Math.signum
public static double signum(double d) {
    return (d == 0.0 || isNaN(d)) ? d : copySign(1.0, d);
}

public static double copySign(double magnitude, double sign) {
    return rawCopySign(magnitude, (isNaN(sign) ? 1.0d : sign));
}

public static double rawCopySign(double magnitude, double sign) {
    return Double.longBitsToDouble(
        (Double.doubleToRawLongBits(sign) & (DoubleConsts.SIGN_BIT_MASK))
        | (Double.doubleToRawLongBits(magnitude)
           & (DoubleConsts.EXP_BIT_MASK | DoubleConsts.SIGNIF_BIT_MASK)));
}
Do less, get more
The most efficient code is the code you never write. Don't do for yourself what experts have already done. Code that uses the
java.lang.Math functions, new and old, will be faster, more efficient, and more accurate than anything you write yourself. Use it.
Resources
Learn
- "Java's new math, Part 2: Floating-point numbers" (Elliotte Rusty Harold, developerWorks, January 2008): Don't miss the second installment of this series, which explores the functions designed for operating on floating-point numbers.
- Types, Values, and Variables: Chapter 4 of the Java Language Specification covers floating point arithmetic.
- IEEE standard for binary floating-point arithmetic: The IEEE 754 standard defines floating-point arithmetic in most modern processors and languages, including the Java language.
- java.lang.Math: Javadoc for the class that provides the functions discussed in this article.
- Bug 5005861: A disappointed user requests faster trigonometric functions in the JDK.
- Catenary: Wikipedia explains the history of and math behind the catenary.
- Browse the technology bookstore for books on these and other technical topics.
- developerWorks Java technology zone: Find hundreds of articles about every aspect of Java programming.
Get products and technologies
- fdlibm: A C math library for machines that support IEEE 754 floating point, available from the Netlib mathematical software repository.
- OpenJDK: Look into the source code of the math classes inside this open source Java SE implementation.
|
http://www.ibm.com/developerworks/java/library/j-math1/index.html?ca=drs-
|
CC-MAIN-2014-42
|
en
|
refinedweb
|