Python or C driver for SC16IS762 dual SPI UART?
- Andrew Starr last edited by
Hi all,
I have designed a data logger incorporating a G01 and a SC16IS762 to provide 2 external UART interfaces via SPI. I have a couple of questions regarding the approach I should take for writing the driver:
Would there be any significant performance gain if the driver for the SC16IS762 was written in C as a python-loadable module, instead of pure python? The baud rates are low - 9600 and 1200, and data volume is not expected to be high (so interrupt workload is not expected to be huge).
Assuming that it is worth writing a C driver: I've done some preliminary playing around writing simple loadable modules using the documentation on this forum and the Micropython docs, and it's reasonably straightforward. The tricky part is that I would like to make use of the interrupt pin on the SC16IS762 via a GPIO to call some C code in my driver module. Looking at firmware source, it seems that all the GPIO interrupts are handled by machpin_intr_process(). machpin_intr_process() in turn calls call_interrupt_handler() for each pin. call_interrupt_handler() expects a micropython pin object, so what would be the correct approach for supplying my own handler routine?
Thanks in advance,
Andrew
- Andrew Starr last edited by
@Andrew-Starr From the point of view of a G01, your low-baud-rate device is a very slow device. There is not much point in using a faster programming language for this.
Just enjoy the convenient python environment:
def PinIRQ(src):
    print("IRQ from pin #" + str(src))

...

# IRQ initialization:
pin.callback(Pin.IRQ_FALLING, handler=PinIRQ, arg=999)
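As a side note on the driver itself, the SC16IS762's SPI register addressing can be worked out in pure Python. A sketch follows; the bit layout (read flag in bit 7, register address in bits 6:3, channel in bits 2:1) is my reading of the SC16IS7xx datasheet, so double-check it there:

```python
# Building the first SPI byte for an SC16IS7xx register access, assuming the
# datasheet layout: bit 7 = R/W (1 = read), bits 6:3 = register, bits 2:1 = channel.

def sc16is7xx_cmd(reg, channel=0, read=False):
    return ((1 if read else 0) << 7) | ((reg & 0x0F) << 3) | ((channel & 0x03) << 1)

RHR = 0x00  # Receive Holding Register
LSR = 0x05  # Line Status Register

print(hex(sc16is7xx_cmd(RHR, channel=0, read=True)))  # -> 0x80 (read RHR, channel A)
print(hex(sc16is7xx_cmd(LSR, channel=1, read=True)))  # -> 0xaa (read LSR, channel B)
```

The actual SPI transfer (machine.SPI.write_readinto on the G01) would send this byte followed by the data byte.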
void textwrap_init(textwrap_t *prop);
void textwrap_columns(textwrap_t *prop, int columns);
void textwrap_tab(textwrap_t *prop, int tab);
void textwrap_head(textwrap_t *prop, const char *head1, const char *head2);
char *textwrap(const textwrap_t *prop, const char *text);
Unlike other libraries or functions, this supports internationalization.
First, it automatically detects the current LC_CTYPE locale and follows it. To enable this, your application must call setlocale(3) in advance.
Members of textwrap_t may change in future. Thus, please use API functions to modify values of these members.
The real text-wrapping is performed by textwrap(). The text to be folded is given as text. The text must be encoded in the current LC_CTYPE locale, i.e., ISO-8859-1 in ISO-8859-1 locales, KOI8-R in KOI8-R locales, EUC-JP in EUC-JP locales, UTF-8 in UTF-8 locales, and so on. Though you might not be very familiar with the term LC_CTYPE locale, this behavior matches how most users use UNIX-like systems. Thus, you do not have to convert the encoding of the string.
The text can contain tab characters (0x09). Each tab is converted into the proper number of space characters (0x20) by textwrap().
#include <stdio.h>
#include <stdlib.h>
#include <textwrap.h>

int main(void)
{
    textwrap_t property;
    char *text = "This is a sample.\n";
    char *outtext;

    textwrap_init(&property);
    textwrap_columns(&property, 70);
    outtext = textwrap(&property, text);
    printf("%s\n", outtext);
    free(outtext);
    return 0;
}
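For comparison, Python's standard textwrap module offers a similar column-based wrapper, though without libtextwrap's locale-aware encoding handling:

```python
import textwrap

text = "This is a sample sentence that will be folded to a fixed column width."
wrapped = textwrap.fill(text, width=24)
print(wrapped)  # every output line fits within 24 columns
```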
| Date: Mon, 14 Dec 2009 09:36:51 +0100
| From: Floris Ouwendijk <address@hidden>
|
| Here's an example:
| ...
| Note that if I change the initialization to Wnts.initWb(100, 100, 4096);
| the problem stays.

Thanks for the program. It turns out that adding a single record with key {255.31} to an empty b-tree is enough to cause the problem. It turns out that keys starting with 0xFF are not supported by WB:

  <> We've reserved the set of strings starting with `0xff' as split keys only; they cannot be used as real key values.

But the API functions did not check for it! I have rectified that. The development version now runs your program, outputting an error message for every key starting with 0xFF. And the database file it produces is valid.

All you need to do in order to work with arbitrary binary keys is to prepend one byte (not 0xFF) to all keys. Existing applications of WB either do this or have ASCII keys; so the problem didn't surface.

Thanks for your bug report. The development version is updated: Also, the CVS repository is updated:

I had to modify your program in order to run it here. Here is my version:

import wb.*;

public class Stress {
    public static void main(String[] args) {
        wb.Ents.initWb(120, 100, 2048);
        wb.Seg btreeSeg = wb.Segs.makeSeg("stress.wb", 2048);
        wb.Han btree = wb.Han.hanMakeHan();
        wb.Segs.btCreate(btreeSeg, wb.Wbdefs.indTyp, btree, 1);
        System.err.print("Java 0 to 999999 little-endian keys\n");
        for (long i = 0; i < 100000; i++)
            wb.Handle.btPut(btree, idToKey(i), 8, new byte[2], 2);
    }

    public static byte[] idToKey(long id) {
        byte[] v = new byte[8];
        int i = 0;
        while (i < 8) {
            v[i++] = (byte)(id & 0xff);
            id >>= 8;
        }
        return v;
    }
}
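The suggested workaround (prepending a single non-0xFF byte to every binary key) is easy to wrap in a pair of helper functions; a Python sketch:

```python
PREFIX = b"\x00"  # any single byte other than 0xFF works

def encode_key(raw):
    return PREFIX + raw

def decode_key(stored):
    return stored[1:]

key = encode_key(bytes([0xFF, 0x1F]))  # the problematic {255.31} key
print(key[0] != 0xFF)                  # -> True: no longer starts with 0xFF
print(decode_key(key) == bytes([0xFF, 0x1F]))  # -> True: round-trips losslessly
```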
The interactive console is probably where everyone starts their programming in Python, and it remains a great way to try a few lines. How about turning it on its head and using an interactive console to work with your own code? First the tricky bit of getting an interactive console working; this is the smallest amount of code required.
import code

code.interact()
Now you just need to share the code you have written with the interactive console. You could make all of your code available, but this is probably not what you want. Instead, construct a dictionary of what you do want available and pass this in as the local dictionary. This has the added advantage of allowing you to alter the names and structure.
import code

def mytest():
    return "Result from mytest"

# to pass everything in get a copy of the locals dictionary
# mylocals = locals()
mylocals = {"testfn": mytest}
code.interact(local=mylocals)
By defining your own dictionary, you can now call testfn() to execute mytest. Nothing else (like the modules imported) have been passed in allowing a cleaner interface. | https://quackajack.wordpress.com/2015/04/ | CC-MAIN-2018-47 | refinedweb | 180 | 63.8 |
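If you want the same restricted namespace but driven from code rather than an interactive prompt (handy for testing), code.InteractiveConsole can be used directly:

```python
import code

def mytest():
    return "Result from mytest"

# Same restricted namespace as before, but driven programmatically.
console = code.InteractiveConsole(locals={"testfn": mytest})
console.push("result = testfn()")  # push() returns False: the statement was complete
print(console.locals["result"])    # -> Result from mytest
```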
At the moment I’m struggling with Microchip’s new “Harmony” framework for the PIC32. I don’t want to say bad things about it because (a) I haven’t used it enough to give a fair opinion and (b) I strongly suspect it’s a useful thing for some people, some of the time.
Harmony is extremely heavyweight. For example, the PDF documentation is 8769 pages long. That is not at all what I want – I want to work very close to the metal, and to personally control nearly every instruction executed on the thing, other than extremely basic things like <stdlib.h> and <math.h>.
Yet Microchip says they will be supporting only Harmony (and not their old “legacy” peripheral libraries) on their upcoming PIC32 parts with goodies like hardware floating point, which I’d like to use.
So I’m attempting to tease out the absolute minimum subset of Harmony needed to access register symbol names, etc., and do the rest myself.
My plan is to use Harmony to build an absolutely minimum configuration, then edit down the resulting source code to something manageable.
But I found that many of Microchip’s source files are > 99% comments, making it essentially impossible to read the code and see what it actually does. Often there will be 1 or 2 lines of code here and there separated by hundreds of lines of comments.
So I wrote the below Python script. Given a folder, it will walk thru every file and replace all the .c, .cpp, .h, and .hpp files with identical ones but with all comments removed.
I’ve only tested it on Windows, but I don’t see any reason why it shouldn’t work on Linux and Mac.
from __future__ import print_function
import sys, re, os  # for Python 2.7

# Use and modification permitted without limit; credit to NerdFever.com requested.
# thanks to zvoase at
# and Lawrence Johnston at

def comment_remover(text):
    def replacer(match):
        s = match.group(0)
        if s.startswith('/'):
            return " "  # note: a space and not an empty string
        else:
            return s
    pattern = re.compile(
        r'//.*?$|/\*.*?\*/|\'(?:\\.|[^\\\'])*\'|"(?:\\.|[^\\"])*"',
        re.DOTALL | re.MULTILINE
    )
    r1 = re.sub(pattern, replacer, text)
    return os.linesep.join([s for s in r1.splitlines() if s.strip()])

def NoComment(infile, outfile):
    root, ext = os.path.splitext(infile)
    valid = [".c", ".cpp", ".h", ".hpp"]
    if ext.lower() in valid:
        inf = open(infile, "r")
        dirty = inf.read()
        clean = comment_remover(dirty)
        inf.close()
        outf = open(outfile, "wb")  # 'b' avoids 0d 0d 0a line endings in Windows
        outf.write(clean)
        outf.close()
        print("Comments removed:", infile, ">>>", outfile)
    else:
        print("Did nothing: ", infile)

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("")
        print("C/C++ comment stripper v1.00 (c) 2015 Nerdfever.com")
        print("Syntax: nocomments path")
        sys.exit()

    root = sys.argv[1]
    for root, folders, fns in os.walk(root):
        for fn in fns:
            filePath = os.path.join(root, fn)
            NoComment(filePath, filePath)
To use it, put that in "nocomments.py", then do:
python nocomments.py foldername
Of course, make a backup of the original folder first.
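To sanity-check the regex, in particular that comment-like text inside string literals survives, here is a small standalone demo of the same comment_remover logic:

```python
import re

def comment_remover(text):
    def replacer(match):
        s = match.group(0)
        return " " if s.startswith('/') else s
    # Comment alternatives come first; string-literal alternatives match and
    # are returned unchanged, so "//" inside a string is never treated as a comment.
    pattern = re.compile(
        r'//.*?$|/\*.*?\*/|\'(?:\\.|[^\\\'])*\'|"(?:\\.|[^\\"])*"',
        re.DOTALL | re.MULTILINE)
    return re.sub(pattern, replacer, text)

sample = 'char *s = "// not a comment"; /* block\ncomment */ int x = 1; // trailing\n'
print(comment_remover(sample))
# The string literal survives; both real comments collapse to a single space.
```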
#1 by Bob on 2015 June 29 - 22:13
Quote
If you look at just the Driver Libraries in Harmony, are those roughly the equivalent of the old peripheral libraries? The documentation for the Driver Libraries is a mere 1,129 pages.
MPLAB X can collapse comments by default. Go to Tools | Options, Editor, Folding, and check the Comments box. Files you open after that should have the comments collapsed.
#2 by Dave on 2015 June 29 - 23:58
Quote
Thanks; Tools>Options>Editor>Folding is very useful; I didn’t know about it.
Another trick I found useful is to go to the Harmony folder, right click, then Properties, and check Read-only (recursively).
That sets all the standard Harmony files as read-only; NetBeans is smart enough to know it – it italicizes the file name and greys out the editor window – so you can read that code but not modify the “master” files.
Here’s an example of the kind of thing I’m uncomfortable about with Harmony:
MHC creates a file “system_config.h” that includes the FOSC clock rate you selected via the setup GUI:
#define SYS_CLK_FREQ 4000000ul
Now, what happens if my code goes and changes the clock rate while running? Will the rest of Harmony know it? I don’t see how. Will it assume it’s still running at 40 MHz (when it’s not) and get all kinds of timing things wrong? How can I be sure that doesn’t happen?
I’d much rather manage this stuff myself.
Maybe I’m just not trusting enough. But I’m not.
#3 by C Grier on 2015 November 5 - 15:50
Quote
Hi Dave,
Originally I was going to post about the comment folding options, but see that Bob beat me to it.
You are right that embedded MCUs have traditionally been more bare-metal programming exercises, particularly if you’ve had long experience with a platform and the base peripheral set. However, the industry is changing as more connectivity, files systems, and GUI expectations end up being part of modern projects.
You might want to check out the Renesas Synergy Platform if you feel that Harmony isn’t exactly what you want. Synergy has the HAL, Framework and (optional) RTOS wrapped together with an Eclipse IDE and ARM Cortex cores. The nice part is that the product family was designed to the software API specification – not the other way around. That means the complexity in the drivers, stacks, and middleware are kept to a minimum while still supporting low-end and high-end performance. And the PDFs are fully hyperlinked, with the API document coming in at a modest 2700 pages. 😉
Google Renesas Synergy or go to synergyxplorer dot com to learn more.
–CG | http://nerdfever.com/remove-all-comments-from-c-and-c-source-code/ | CC-MAIN-2017-47 | refinedweb | 970 | 66.33 |
vpassert man page
vpassert — Preprocess Verilog code assertions
Synopsis
vpassert [ --help ] [ --date ] [ --quiet ] [ -y directories... ] [ files... ]
Description.
Arguments.
- --allfiles.
- --axiom
Special Axiom ATHDL enables/disables added around unreachable code.
- --call-error <function>
When $uerror (or $uassert etc.) wants to display a message, call the specified function instead of $display and $stop.
- --call-info <function>
When $uinfo wants to display a message, call the specified function instead of $display.
- --call-warn <function>
When $uwarn (or $uwarn_clk etc.) wants to display a message, call the specified function instead of $display and $stop.
- --date
Check file dates and sizes versus the last run of vpassert and don't process if the given source file has not changed.
- --exclude
Exclude processing any files which begin with the specified prefix.
- --help
Displays this message and program version and exits.
- --language <1364-1995|1364-2001|1364-2005|1800-2005|1800-2009|1800-2012|1800-2017>
Set the language standard for the files. This determines which tokens are signals versus keywords, such as the ever-common "do" (data-out signal, versus a do-while loop keyword).
- --minimum
Include `__message_minimum in the $uinfo test, so that by defining __message_minimum=1 some uinfos may be optimized away at compile time.
- --noline
Do not emit `line directives. If not specified they will be used under --language 1364-2001 and later.
- --nostop
By default, $error and $warn insert a $stop statement. With --nostop, this is replaced by incrementing a variable, which may then be used to conditionally halt simulation.
- --o file
Use the given filename for output instead of the input name suffixed with .vpassert. If the name ends in a / it is used as an output directory with the default name.
- --quiet
Suppress messages about what files are being preprocessed.
- --realintent
Special RealIntent enable/disables added around unreachable code.
- --synthcov
When "ifdef SYNTHESIS" is seen, disable coverage. Resume on the `else or `endif. This does NOT follow child defines, for example:
`ifdef SYNTHSIS `define MYSYNTH `endif `ifdef MYSYNTH // This will not be coveraged-off
- --timeformat-units units
If specified, include Verilog
$timeformatcalls before all messages. Use the provided argument as the units. Units is in powers of 10, so -9 indicates to use nanoseconds.
- --timeformat-precision prec
When using --timeformat-units, use this as the precision value, the number of digits after the decimal point. Defaults to zero.
- --vericov
Special Vericov enable/disables added around unreachable code.
- --verilator
Special Verilator translations enabled.
- --version
Displays program version and exits.
- --vcs
Special Synopsys VCS enables/disables added around unreachable code.
Functions
These Verilog pseudo-pli calls are expanded:
- /*vp_coverage_off*/
Disable coverage for all tools starting at this point. Does not need to be on a unique line.
- /*vp_coverage_on*/
Re-enable coverage after a vp_coverage_off. Does not need to be on a unique line.
- $uassert(case, "message", [vars...] )
Report a $uerror if the given case is FALSE. (Like assert() in C.)
- $uassert_amone(sig, [sig...], "message", [vars...] )
Report a $uerror if more than one signal is asserted, or any are X. (None asserted is ok.) The error message will include a binary display of the signal values.
- $uassert_info(case, "message", [vars...] )
Report a $uinfo if the given case is FALSE. (Like assert() in C.)
- $uassert_onehot(sig, [sig...], "message", [vars...] )
Report a $uerror if other than exactly one signal is asserted, or any are X. The error message will include a binary display of the signal values.
- $ucover_clk(clock, label)
Similar to $uerror_clk, add a SystemVerilog cover assertion at the next specified clock's edge, with the label specified. This allows cover properties to be specified "inline" with normal RTL code.
- $ucover_foreach_clk(clock, label, "msb:lsb", (... $ui ...))
Similar to $ucover_clk, however cover a range of indexes, where each occurrence of $ui is replaced by the loop index.
- $ui
Loop index used inside $ucover_foreach_clk.
- $uinfo(level, "message", [vars...] )
Report an informational message in standard form. End test if warning limit exceeded.
- $uerror("message", [vars...] )
Report a error message in standard form. End test if error limit exceeded.
- $uerror_clk(clock, "message", [vars...] )
Report an error message in standard form at the next clock edge. If you place a $uerror etc. in a combo logic block (always @*), event based simulators may misfire the assertion due to glitches. $uerror_clk fixes this by performing the check only at the clock edge, at the same point a register is assigned.
- $uwarn("message", [vars...] )
Report a warning message in standard form.
- $uwarn_clk(clock, "message", [vars...] )
Report a warning message in standard form at the next clock edge. See $uerror_clk.
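As a rough illustration of what the preprocessor does with these calls, here is a toy expansion in Python. The actual vpassert output differs (it adds `line directives, message formatting, and tool-specific guards); the regex and replacement below are simplified assumptions:

```python
import re

def expand_uassert(src):
    # Toy expansion: $uassert(cond, "msg"); becomes an immediate check.
    # Real vpassert output is richer; this only sketches the idea.
    pat = re.compile(r'\$uassert\(([^,]+),\s*("(?:[^"\\]|\\.)*")\);')
    return pat.sub(r'if (!(\1)) begin $display(\2); $stop; end', src)

print(expand_uassert('$uassert(valid, "valid must hold");'))
# -> if (!(valid)) begin $display("valid must hold"); $stop; end
```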
Distribution
Verilog-Perl is part of the <> free Verilog EDA software tool suite. The latest version is available from CPAN and from <>.
Copyright 2000-2019 by Wilson Snyder, Duane Galbi <[email protected]>
See Also
Verilog-Perl, Verilog::Parser, Verilog::Pli | https://www.mankier.com/1/vpassert | CC-MAIN-2019-09 | refinedweb | 717 | 52.66 |
"Juanma Barranquero" <address@hidden> writes: > Questions: > > - What does mean "The meaningful PARAMETERs depend on the kind of > window." Which parameters are meaningul, and for which windows? Well, none, at present. This patch is only one half of the "window nodelete/group" patch. The meaningful parameters are defined in the other half. So, its probably better to remove this comment for now. > - It is wise to return (in `window-parameter' and > `set-window-parameter') directly the parameter alist, instead of a > copy of it? Ok, maybe not. Is there any rule of thumb to decide to do the one or the other? >> And add a note to etc/NEWS about it? > > Sorry, I'm *horrible* at deciding what to write in NEWS... Maybe: ** Functions to handle window paramters were added, similar to the functions for frame parameters. ok, that might at least inspire better efforts from others :) > Juanma -- Joakim Verona | https://lists.gnu.org/archive/html/emacs-devel/2008-06/msg00259.html | CC-MAIN-2019-30 | refinedweb | 149 | 68.87 |
Hi,
I need to have a function in MonoBehaviour that is called before any Update functions of any other objects. LateUpdate is called after Update function and I would need something like "PreUpdate" that is called before.
Thanks in advance.
PS. FixedUpdate will not work because on very short frames it may not be called at all.
Actually this is one of the things that prevents Unity from being a 'real game engine': it lacks this kind of granularity. Please request this at feedback.unity3d.com
You could modify the script execution order so that the script that needs to be executed before any other is. Find more information here.
Big thanks for responses. You are right I will try to explain what I am trying to achieve.
I have implemented a state machine that I use to control my main character and AI characters. States of this machines represent and implement behavior of different character tasks (i.e.: moving, jumping, standing etc.) These states have different logic and respond differently to player input. Each state implements functions:
OnEnter() - called after transition from different state
GetTriggeredTransition() - called before Update (used when we want to go to a different state; when it returns null, we do not change the state)
Update() - called when Update of given object should be called
LateUpdate() - called when LateUpdate of given object should be called
OnExit() - called before transition to different state
The problematic function is GetTriggeredTransition(). I use this function to build AI like this:
Being in PatrolState, I check in this function whether another player is close enough to start chasing him. If yes, I return a transition to the ChaseThePlayerState.
If this function is called after the Update of some other character has already run, there will be no consistency: the world observed in GetTriggeredTransition() will depend on the execution order of the MonoBehaviour scripts. I want this function to be called not only before the Update of this character, but also before the Update of any other characters that are using different state machines, just like the Update of one MonoBehaviour is called before the LateUpdate of any other MonoBehaviour. So I want to get PreUpdate in MonoBehaviour :).
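For what it's worth, the ordering I am after can be sketched outside Unity. A minimal Python model (class names are illustrative, not Unity API) where every machine's transition check runs against the same world snapshot before any Update mutates it:

```python
# Minimal two-phase tick: phase 1 evaluates every transition against the
# same world state; phase 2 runs every Update. Names are illustrative only.

class State:
    def get_triggered_transition(self, world):
        return None  # default: stay in the current state

    def update(self, world):
        pass

class ChaseState(State):
    def update(self, world):
        world["log"].append("chasing")

class PatrolState(State):
    def get_triggered_transition(self, world):
        # Decide based on the world as it was BEFORE any Update ran this frame.
        return ChaseState() if world["player_close"] else None

    def update(self, world):
        world["log"].append("patrolling")

class StateMachine:
    def __init__(self, state):
        self.state = state

    def pre_update(self, world):  # the "PreUpdate" phase
        nxt = self.state.get_triggered_transition(world)
        if nxt is not None:
            self.state = nxt

    def update(self, world):
        self.state.update(world)

world = {"player_close": True, "log": []}
machines = [StateMachine(PatrolState()), StateMachine(PatrolState())]

for m in machines:   # every PreUpdate runs before any Update...
    m.pre_update(world)
for m in machines:   # ...so both machines saw the same snapshot
    m.update(world)

print(world["log"])  # -> ['chasing', 'chasing']
```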
I hope I was clear about what I want to do :)
So if you know a simple way how to get PreUpdate notification/event in MonoBehaviour script I will be thankful.
PS. I have one solution on my mind but I would like to hear yours before I confuse you with mine :)
Another possibility is to use a driving class such that it is the only script that existis earlier in the execution order, and which raises an event during its Update phase. This way, you'll have PreUpdate called before update on whichever scripts are processing. Just un/subscribe during transition logic.
Answer by Jamora · Jan 14, 2014 at 03:13 PM
While tanoshimi's answer does seem to give the simplest way to achieve a "PreUpdate" using FixedUpdate (see the second link), FixedUpdate is not guaranteed to be called exactly once before each and every Update. I will propose a simple system which guarantees that a method is called before any logic in any Update is called. The solution relies heavily on the Script Execution Order of Unity, as suggested by iwaldrop.
You will need this script in a GameObject in every scene you require the PreUpdate. I recommend creating a prefab.
using UnityEngine;
using System.Collections;
public class FirstScript : MonoBehaviour {
public static event System.Action PreUpdate;
// Update is called once per frame
void Update () {
if(PreUpdate != null)
PreUpdate();
}
}
You will then need to set this script to be executed first. Open the Script Execution Order inspector, drag this script there and set it to execute first (above default time).
Then, in whichever script you require a PreUpdate function, just subscribe to the PreUpdate event in FirstScript. For example:
using UnityEngine;
using System.Collections;
public class NewBehaviourScript : MonoBehaviour {
void OnEnable () {
FirstScript.PreUpdate += HandlePreUpdate;
}
void HandlePreUpdate ()
{
Debug.Log("Preupdate in "+this.ToString() +" "+Time.frameCount);
}
void Update () {
Debug.Log("Update "+this.ToString() +" "+Time.frameCount);
}
void OnDisable(){
FirstScript.PreUpdate -= HandlePreUpdate;
}
}
HandlePreUpdate will always be called before any logic in any Update (except in FirstScript) is run.
If you require absolute control as to which PreUpdate is called first, you will have to modify FirstScript such that it maintains a List of functions which it calls, in order, from its Update.
Huh? I never said anything about using FixedUpdate! What I suggested was to create a new script, set to execute before all others, and calling any "PreUpdate" logic from the Update() function there (which is exactly the same as what you've demonstrated in code ;-)
it says FixedUpdate is run before any Update in the second link you gave.
Alas, FixedUpdate is not guaranteed to be called every frame, although it is up to interpretation as to whether it will be called often enough to work.
Thanks guys. I was thinking about similar way to solve this but your delegate solution seams really nice.
Answer by tanoshimi · Jan 12, 2014 at 08:14 AM
In general, it's helpful to explain the problem you are trying to solve, rather than the technical limitation that you believe prevents its solution. (Is there a reason, for example, that this couldn't be calculated in LateUpdate of the previous frame?)
However, it sounds like what you're trying to achieve could be done by placing all your "pre-update" logic in the Update() function of a new script, and setting that script to execute before all others:
Do note that calculating things in the previous frame and using that the next can generate a frame of latency (of around 16ms if you're running 60Hz). This may or not be an issue in your situation (it is in ours ;-) ).
Answer by MasterHoover · Aug 19, 2015 at 05:03 AM
Well, I think the reason the PreUpdate method isn't defined in Unity is because it isn't needed. My belief is that there is always a workaround.
If you absolutely want to change the state of your game before everything else, you can declare a function instead of all those Updates() that will be called by your StateMachine AFTER it has Updated its State.
First define a parent for all the objects that depends on the StateMachine
using UnityEngine;
using System.Collections;
// Parent for all objects that uses the StateMachine
// Mostly to be able to store all these scripts in a List<MyParentClass> variable
public abstract class ParentClass : MonoBehaviour
{
public abstract void ApplyConsequences();
}
Then use the parent classes on all your Objects that depends on the StateMachine.
using UnityEngine;
using System.Collections;
public class AnObject : ParentClass
{
void Awake()
{
// There you should add yourself in the memory of the StateMachine
// so it could then call all your ApplyStateMachineBehaviour() functions
MyStateMachine.Instance.MyObjects.Add(this);
// You might want to put the Awake in the parent instead, but not sure if it'll work.
// If you do be sure to remove abstract from ParentClass since you define a function.
}
public override void ApplyConsequences()
{
// There you define your desired effects
}
}
Finally use the defined function after you Update your StateMachine
using UnityEngine;
using System.Collections.Generic;

public class MyStateMachine : MonoBehaviour
{
// A trick to be able to easily access one script from other scripts
// ===========================================
private static MyStateMachine instance;
void Awake()
{
instance = this;
}
public static MyStateMachine Instance
{
get{return instance;}
}
// ===========================================
private List<ParentClass> myObjects = new List<ParentClass>();
public List<ParentClass> MyObjects
{
get{return myObjects;}
}
void Update ()
{
UpdateState();
ApplyAllConsequences();
}
// There you Update your State
void UpdateState()
{
// ...
// ...
// ...
}
// There you call ALL ApplyConsequences()
void ApplyAllConsequences()
{
foreach(ParentClass obj in myObjects)
{
obj.ApplyConsequences();
}
}
}
Be warned that you will surely have to use the LateUpdate function since all Updates from all objects are processed simultaneously.
If you're stuck in the LateUpdate, try to use a similar trick to order your functions properly: define functions, and call them one after another.
Docker 101
A beginner’s guide to containerization
Definition
Docker is an open-source container-based platform, which packages your application and all its dependencies together in form of containers so that your application works seamlessly in any environment ( development, test or production).
It is a platform used for building, shipping and running our applications.
Goals
- To solve dependency problems by keeping the dependency restricted inside the containers.
- To be able to write Dockerfiles to containerize a sample application.
- To be able to scale up the instances when needed. The lightweight nature of containers means developers can run dozens of containers at the same time, making it possible to emulate a production-ready distributed system.
Prerequisites
- Windows or Linux based systems
- Docker
- Working knowledge of web applications. No coding required.
Advantages of Containers over traditional VMs
- Unlike VMs( Virtual Machines ) that run on a Guest OS, using a hypervisor, Docker containers run directly on a host server (for Linux), using a Docker engine, making it faster and lightweight.
With a fully virtualized system, you get more isolation, but it requires more resources. With Docker, you get less isolation; however, as it requires fewer resources, you can run thousands of containers on a single host.
- A VM can take a minimum of one minute to start, while a Docker container usually starts in a fraction of seconds.
- Containers are easier to break out of than a Virtual Machine.
- Unlike VMs there is no need to preallocate the RAM. Hence docker containers utilize less RAM compared to VMs. So only the amount of RAM that is required is used.
This is going to be a long read and more hands on. If you want a shorter version to start hacking ASAP, here’s something you can read through in 5 minutes or less.
Getting Started on Ubuntu:
- Update your package index: sudo apt-get update
- Install Docker from the Ubuntu repositories: sudo apt-get install docker.io
Getting Started on Windows:
- Download the installer from here: Docker for Windows
- Double-click Docker Desktop for Windows Installer.exe to run the installer.
- By default, Docker is not started. Start it from All Programs on Windows, or by running systemctl start docker on Ubuntu.
Hello World!
Once Docker is started, go ahead and run this command (on Powershell for Windows users):
$ docker run hello-world
If your setup was successful, you should receive a message “Hello from Docker”. This ideally means that Docker was successfully installed and the docker daemon is running.
Creating your own images and containers
A Dockerfile is a file that you create which in turn produces a Docker image when you build it. It contains a bunch of instructions which informs Docker HOW the Docker image should get built.
We are going to create a simple Flask application in Python. It is a “Hello World” web application in Flask which will run on localhost from inside a container. Note: You do NOT need to install Python on your system for this.
First of all, we need to create a directory called my_web_app and save the following code in a file called app.py
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello World!"
if __name__ == "__main__":
app.run(debug=True, host="0.0.0.0")
The Python code is just that. It will import the Flask package, create a Flask app, define a route and run the application in debug mode on localhost.
So, let’s start creating our Dockerfile. To do that, just create a file named Dockerfile in the above project directory, and put the following commands inside it:
# Inherit from the Python Docker image
FROM python:3.7-slim
# Install the Flask package via pip
RUN pip install flask==1.0.2
# Copy the source code to app folder
COPY ./app.py /app/
# Change the working directory
WORKDIR /app/
# Set "python" as the entry point
ENTRYPOINT ["python"]

# Set the command as the script name
CMD ["app.py"]
Now let’s build the image from the Dockerfile using the -t parameter, which means tag, and set a name (flask_app) and a tag (0.1). Note the . at the end of the command. That indicates the Dockerfile is present at the same path from where the command is run. Alternatively, you can set the path to the Dockerfile in place of the dot.
$ docker build -t flask_app:0.1 .
$ docker images
You should see something similar:
REPOSITORY TAG IMAGE ID CREATED SIZE
flask_app 0.1 c6eb89fefb25 2 minutes ago 153MB
python 3.7-slim 4c2534c95211 4 weeks ago 143MB
Now that we have our docker images built, let’s run the container specifying the port that will be mapped using the -p parameter and the -d parameter, which means detached, so that the terminal does not get stuck. We must also pass the name and tag of the image as parameter values.
$ docker run -d -p 5000:5000 --name simple_flask flask_app:0.1
$ docker ps
CONTAINER ID   IMAGE           COMMAND           CREATED          STATUS         PORTS                    NAMES
03c650a4eb58   flask_app:0.1   "python app.py"   16 seconds ago   Up 3 seconds   0.0.0.0:5000->5000/tcp   simple_flask
Open your browser, go to localhost:5000 and you can see, we are accessing our web app which is running inside the container.
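As a side note, the same build-and-run steps can also be captured declaratively. A minimal docker-compose sketch (assuming the Dockerfile above sits in the current directory; service name is illustrative) might look like:

```yaml
version: "3"
services:
  web:
    build: .            # build from the Dockerfile in this directory
    ports:
      - "5000:5000"     # same host:container port mapping as docker run -p
```

With this file saved as docker-compose.yml, running docker-compose up -d replaces the manual build and run commands.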
To stop a running container, simply run the stop command:
$ docker stop simple_flask
To remove a stopped container, run:
$ docker rm simple_flask
To remove an image, run:
$ docker rmi flask_app:0.1
Docker Hub | Docker Pull | Docker Push
Docker Hub is the hosting platform for all pre-built Docker images from other users and organizations. There are multiple official images from open-source projects like nginx, mysql and linux. Docker Hub works very similarly to GitHub.
Docker Pull is used to download any publicly available image from Docker Hub onto your local development environment. To download any publicly available image from Docker Hub, simple run:
$ docker pull <image:tag>
If you want to download a private image, you need to have access to that repository.
Docker Login is required to download/upload private images from Docker Hub.
$ docker login
Enter your credentials and you should be logged in.
Docker Push is used to upload your locally built image onto Docker Hub, either in a public or private repository.
$ docker push <image_name:tag>
This will push your locally built image to Docker Hub as a public image. In case you want to push your image as a private image onto Docker Hub, you have to create a private repository and push it there.
$ docker push <private_repo_name:tag>
If you want to push your images to your own repository (instead of Docker Hub), you have to first tag the image.
Consider this scenario: your registry is on a host named registry-host and listening on port 5000. Tag the image with the host name or IP address, and the port of the registry:
$ docker tag rhel-httpd registry-host:5000/myadmin/rhel-httpd
This simply creates a copy of the image rhel-httpd and renames it as registry-host:5000/myadmin/rhel-httpd.
Check that this worked by running:
$ docker images
You should see both rhel-httpd and registry-host:5000/myadmin/rhel-httpd listed. Upload the new image using:
$ docker push registry-host:5000/myadmin/rhel-httpd
Enter your credentials if prompted. Go to hub.docker.com and log in. Under repositories, you will be able to view your recently uploaded image.
Docker provides free hosting for public repositories and 1 private repository. After that, charges go up as per requirements.
Docker Compose
In the words of Docker Inc.
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
Docker Compose is used to run multiple containers as a single service: we define our entire stack in one YAML file and create and start all of its services with a single command.
Our example will build a simple Python web application running on Docker Compose. The application uses the Flask framework and maintains a hit counter in Redis.
Step 1: Setup the code base
1. Create your project directory.
2. Create a file app.py and add the following code:
3. Create another file called requirements.txt in your project directory and paste this in:
flask
redis
Step 2: Creating a Dockerfile:
In your project directory, create a file named Dockerfile. The Dockerfile should:
● Build an image starting with the Python 3.7 image.
● Set the working directory to /code.
● Set environment variables used by the flask command.
● Install gcc so Python packages such as MarkupSafe and SQLAlchemy can compile speedups.
● Copy requirements.txt and install the Python dependencies.
● Copy the current directory . in the project to the workdir . in the image.
● Set the default command for the container to flask run.
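The Dockerfile listing itself was lost from this copy. Reconstructed from the bullet points above (the base image tag and package names are assumptions based on the standard Compose tutorial):

```dockerfile
# Build an image starting with the Python 3.7 image
FROM python:3.7-alpine
# Set the working directory to /code
WORKDIR /code
# Environment variables used by the `flask run` command
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0
# Install gcc so Python packages such as MarkupSafe and SQLAlchemy can compile speedups
RUN apk add --no-cache gcc musl-dev linux-headers
# Copy requirements.txt and install the Python dependencies
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# Copy the current directory in the project to the workdir in the image
COPY . .
# Set the default command for the container to `flask run`
CMD ["flask", "run"]
```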
Step 3: Define services in a compose file
Create a file called docker-compose.yml in your project directory and paste the following:
version: '3'
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
environment:
FLASK_ENV: development
redis:
image: "redis:alpine".
Bonus point:
Volumes
The volumes entry mounts the project directory (the current directory) on the host to /code inside the container, so you can modify the code on the fly without having to rebuild the image.
Step 4: Build and run your app with Compose
From your project directory, start up your application by running:
$ docker-compose up
...
Compose pulls a Redis image, builds an image for your code, and starts the services you defined. In this case, the code is statically copied into the image at build time.
Enter http://localhost:5000 in a browser to see the application running.
Hello World! I have been seen 1 times.
Every time you refresh the page, the counter should increment.
Switch to another terminal window, and type docker image ls to list local images.
Listing images at this point should return redis and web.
You can inspect images with
docker inspect <tag or id>
Stop the application, either by running docker-compose down from within your project directory in the second terminal, or by hitting CTRL+C in the terminal where you started the app.
Step 5: Updating the application on the fly
Change the greeting in app.py and save the file. For example, change the return statement of the hello() function to:
return 'Hello from Docker! I have been seen {} times.\n'.format(count)
Refresh the web page in your browser. The greeting should be updated, and the counter should still increment. This proves that we can keep updating our code without having to re-build or even restart the application itself. This drastically reduces downtime during application upgrade and testing.
Other Docker Compose commands
If you want to run your services in the background, you can pass the -d parameter (for “detached” mode) to docker-compose up and use docker-compose ps to see what is currently running.
The docker-compose run command also allows you to run one-off commands for your services. For example, to see what environment variables are available to the web service:
$ docker-compose run web env
Bonus Step:
Scaling up your app!
1. Edit docker-compose.yml:
web:
build: .
command: python app.py
ports:
- "5000"
volumes:
- .:/code
links:
- redis
redis:
image: redis
In the first example, we bound port 5000 of the Docker host to port 5000 of the web container (- "5000:5000"). This prevents us from scaling out the web container, because port 5000 on the Docker host would already be in use by the first web container.
So this time we set it as - "5000". This binds port 5000 of the web container to an arbitrary port on the Docker host.
2. Start the containers with Docker Compose:
$ docker-compose up -d
3. Check for running containers:
$ docker-compose ps
Name              Command                       State   Ports
compose_redis_1   /entrypoint.sh redis-server   Up      6379/tcp
compose_web_1     python app.py                 Up      0.0.0.0:32768->5000/tcp
Note the port number of compose_web_1: port 5000 of the container is bound to port 32768 of the Docker host.
4. Now, we scale up the web container. Let's make it 10.
$ docker-compose scale web=10
5. Check the running containers by running docker-compose ps
We should have 10 containers running our Web service and 1 container for Redis.
Step 6: Stop the containers
$ docker-compose stop
Step 7: Remove the containers
$ docker-compose rm
Summary
By now, you should have a working knowledge of Docker. That's the conceptual framework: we've covered the motivations for using it and the basic concepts, looked at how to containerize an app, and in doing so covered some useful Docker commands. There is much more to know about Docker, such as working with databases and volumes, linking containers over custom networks, and why and how to spin up and manage multiple containers, also known as orchestration.
But that is for another day, another blog. | https://soumyadeeppaul.medium.com/docker-101-95842a9c1d57?source=user_profile---------8---------------------------- | CC-MAIN-2021-43 | refinedweb | 2,131 | 65.73 |
Windows Azure Table storage is a scalable, key-value NoSQL store offered as a service on the Windows Azure platform. This article will cover the very basics of Windows Azure Table storage and provide you with resources and suggested topics to continue your learning.
Some people, when first learning about the Windows Azure platform, find it hard to understand the purpose of the Table Storage feature. This is especially true of those who are familiar with developing applications using highly relational data. To get a good understanding of how a Key-Value Pair system differs from a traditional relational database you can read Buck Woody’s article on the topic in his continuing series: Data Science Laboratory System – Key/Value Pair Systems.
The code examples provided in this article are written using the Windows Azure .NET SDK 2.2 with Visual Studio 2013; however, like most services on Windows Azure, this SDK simply calls a REST-based API in the background for you. This underlying REST API allows a variety of programming languages and platforms to use Windows Azure services, and the Table Storage service is no different. Documentation is available for using the Table Storage service from node.js, Java, PHP, Python and Ruby.
Getting Started
To get started using the Table storage service, you will first need a Windows Azure account. Once you have an Azure account you can then create a storage account that can be used to store tables.
To create a storage account, log in to the Windows Azure management portal and create a new storage account, choosing a unique account name and the location where it should be hosted.
All storage accounts are stored in triplicate, with transactionally-consistent copies in the primary data center. In addition to that redundancy, you can also choose to have 'Geo Replication' enabled for the storage account. 'Geo Replication' means that the Windows Azure Table data that you place into the account will be replicated in triplicate to another data center within the same region. So, if you select 'East US' for your primary storage location, your account will also have a triplicate copy stored in the West US data center.
Click 'Create Storage Account' once you have selected the location and provided an account name. In a few moments the Windows Azure portal will generate the storage account for you. Each storage account has two access keys, a primary and a secondary, either of which can be used to access the account. Two keys are provided so that, if necessary, you can perform a rolling change in configuration for all applications utilizing the account in case one of the keys is compromised.
Your storage account is now created and we have what we need to work with it. For now, get a copy of the Primary Access Key by clicking on the ‘copy’ icon next to the text box. Now that we have our storage account set up we can talk about Table Storage.
What are Tables and Entities?
In Windows Azure Table Storage, the term 'Table' is used to describe a grouping of entities. You can loosely think of an entity as a row of data, but it's more like a collection of properties and values that are stored together within a table. Unlike relational databases, the entities inside of a table do not need to have the same structure or schema. This means that we might have an entity that stores properties about a product in the same table as an entity that stores properties about the product options.
There are some rules about entities: each entity can have up to 252 properties, but the size of an entity with all of its properties and values cannot exceed 1 MB. Table storage entities support the following data types: byte array, Boolean, DateTime, Double, GUID, Int32, Int64 and String (up to 64 KB in size). There are an additional three required system properties that must exist on every entity: PartitionKey, RowKey and Timestamp. The partition key is a way to group entities within a table and control the scalability of the table, which we will touch on in a bit. The row key is a unique identifier for an entity within a given partition. The combination of partition key and row key is the unique identifier for an entity within a table, comparable to a primary key in a relational database. The Timestamp property represents the last time the entity was modified and is managed by the storage sub-system; any change you make to Timestamp will be ignored.
There is no direct table-specific limit to how much data you can store within a table. The size is restricted only by the allowable size of a Windows Azure Storage account which is currently 200 TB, or 100 TB if the storage account was created prior to June 7th, 2012. A storage account can hold any combination of Windows Azure Tables, BLOBs or Queues up to the allowable size of the account. There is a reason that there is a difference in the allowable size depending on when the storage account was created. Starting on that date, accounts are created on the newer infrastructure of Windows Azure Storage which drastically increased the throughput and scalability of the system.
Store some Data!
Now that we know about Tables and entities, let’s store some data. Most applications will be using a client library to write data into the tables, or calling the REST API directly. For our example we will use a simple C# Console application that will create a table in a storage account and then add an entity to the table using the 2.1 Windows Azure Storage Library for .NET.
Using Visual Studio, create a C# Console application from the standard template. By default the created project will not have a reference to the storage library, but we can add that easily using the NuGet package manager.
Right-click on the project and select ‘Manage NuGet Packages…‘ from the context menu.
This will load up the Package Manager UI. Select the ‘Online’ tab from the Package Manager dialog, and search for ‘Azure Storage’. As of the time of this writing version 2.1.0.3 was available. Select the Windows Azure Storage package and click ‘Install’.
The same result can be achieved via the Package Manager Console if you prefer to manage your packages from a command line. Type 'Install-Package WindowsAzure.Storage' into the Package Manager Console and the assemblies will be added to your project just as they would be via the UI shown above. You'll see that several assemblies have been added to the project references, including ones for OData, spatial types and the Entity Data Model (EDM). These are used by the client library when working with table storage.
Open the program.cs file and add the following statements to the top of the file:
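The code listing did not survive in this copy of the article. Based on the walkthrough that follows, it would look something like the sketch below (the account name and key are placeholders you must replace with your own):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Table;

namespace TableStorageSample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Replace these with your own storage account name and key
            string accountName = "YOUR_ACCOUNT_NAME";
            string accountKey = "YOUR_ACCOUNT_KEY";

            try
            {
                StorageCredentials creds = new StorageCredentials(accountName, accountKey);
                // true = use HTTPS for all calls to the storage service
                CloudStorageAccount account = new CloudStorageAccount(creds, useHttps: true);

                CloudTableClient client = account.CreateCloudTableClient();

                CloudTable table = client.GetTableReference("sportingproducts");
                // Makes the actual REST call; idempotent, so safe to run repeatedly
                table.CreateIfNotExists();

                Console.WriteLine(table.Uri.ToString());
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
            }
        }
    }
}
```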
You will need to modify the code above and change the value for the accountName variable to match your own storage account name. Then provide one of the account storage keys from the portal to assign to the accountKey variable.
Walking through the code above, you'll see that we use the accountName and accountKey variables to create a StorageCredentials object. This is then used to create a CloudStorageAccount object, which is the root object we use for access to any of the storage subsystems: BLOBs, queues and, the focus of this article, tables. You'll see that we pass in the credentials for the storage account as well as indicate that we want access to the storage account using HTTPS. When we make a call against table storage from the storage library, the actual call to the service behind the scenes is made against the REST-based API. Each call is signed using the credentials and, if we specify to use HTTPS, it is sent encrypted over the wire.
Note that there are other ways to create the CloudStorageAccount object, such as using the static CloudStorageAccount.Parse method if you have a full storage account connection string. In your production code you should store the credentials or the connection string in configuration and read the values from there, or have them passed in to your code so that you aren't hard-coding the account to be used.
After the CloudStorageAccount is created, the code then creates a CloudTableClient object, which is used as a façade to work with Table Storage directly. The code then creates a CloudTable object using the GetTableReference method of the CloudTableClient object. This is just a reference for the client library to use; it hasn't made a call to the REST API yet at all. The next line, table.CreateIfNotExists(), will actually make the first call to the Table service REST API and, if a table named "sportingproducts" doesn't already exist within the storage account, it will create it. Note that the call to CreateIfNotExists is idempotent, meaning we can call it multiple times and it will only ensure the table is created. If the table already existed, no action would be taken and no data that might already exist within the table would be changed.
After the table is created we write to the console the URL of the table. Remember that the table service, like all the Windows Azure Storage services, exists as a REST based API so every table has its own resource location, or URI. Calls against this table, such as inserts, selects, etc., are all sent to this URI.
Now that a table has been created we can add entities to it. First, we define an entity. Add the following class to the project:
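The class listing was lost from this copy. Based on the description that follows, and on the properties used later in the article (ProductName, StockOnHand), a reconstruction looks like this:

```csharp
using Microsoft.WindowsAzure.Storage.Table;

public class SportingProductEntity : TableEntity
{
    public SportingProductEntity(string category, string sku)
        : base(category, sku)   // PartitionKey = category, RowKey = SKU
    {
    }

    // Parameterless constructor required so the client library
    // can deserialize entities retrieved from storage
    public SportingProductEntity() { }

    public string ProductName { get; set; }
    public string Description { get; set; }
    public int StockOnHand { get; set; }
}
```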
You'll notice that we inherit our entity from TableEntity, which is a base class that provides the required properties of PartitionKey, RowKey and Timestamp. Note that you don't have to inherit from this base class, but if you choose not to, you will want to implement the ITableEntity interface, including implementing some methods that are handled for you on the TableEntity object. The Table storage classes and methods in the storage library assume your entities will either inherit from TableEntity or implement the ITableEntity interface. Doing neither of these is possible, but is beyond the scope of this article.
Every entity stored must have a partition key and row key provided. The first constructor in the example uses a category of sporting goods as the partition key and the product SKU as the row key. The constructor passes those values along to the base TableEntity constructor that takes the partition key and row key parameters. Later in the article we will cover more on this choice for our keys and how it affects queries. You will notice that we also define a second constructor that has no parameters. This is required so that the object can be deserialized later when being retrieved from storage by the client library's default implementation within TableEntity.
We have defined an entity, so now we can create one and add it to the table. Add the following code immediately after we write out the URI to the console in the Main method:
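The listing is missing here; based on the surrounding description it would be along these lines (the product name, description and stock values are made up for illustration):

```csharp
SportingProductEntity entity = new SportingProductEntity("Baseball", "BBt1032")
{
    ProductName = "Wooden baseball bat",   // sample values for illustration
    Description = "A classic wooden baseball bat",
    StockOnHand = 10
};

// Create the insert operation and execute it against the table
TableOperation insertOperation = TableOperation.Insert(entity);
table.Execute(insertOperation);
```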
Here the code is creating an instance of the SportingProductEntity object, providing the category of 'Baseball' and a SKU of 'BBt1032'. A TableOperation object is created to insert the entity. If you look at the static methods for TableOperation you'll see you can perform multiple types of operations, such as Delete, Merge, InsertOrMerge, or Replace. The next line of code executes the command against the table. When this line of code executes, behind the scenes an OData insert command against the table is generated and sent to the REST-based API, which then inserts the row into the table.
With the addition of this code, our method is actually no longer idempotent. If you run this code more than once you’ll receive an exception. Digging down into the exception information, you’ll find the error code “EntityAlreadyExists”. This is because the Insert has already occurred and the unique entity, as defined by the partition key and row key, is already in the table.
Change the creation of the TableOperation line to be the following:
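The replacement line was lost from this copy; given the insert-or-replace behavior described next, it would be the InsertOrReplace variant:

```csharp
// Insert the entity if it is not present, or replace it entirely if it is
TableOperation insertOperation = TableOperation.InsertOrReplace(entity);
```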
Now, when you run the code, you will no longer receive the exception. We have told the system to either insert the entity if it is not present, or to completely replace the entity with this instance if the entity is already in the table. The way that you decide to deal with key collisions in your own solutions will depend on where the data is coming from and your own business requirements. As mentioned earlier in the article, you have a number of table operation options to choose from.
Viewing your Data
In our code example, we wrote a query in C# to add data to our table; however, most developers are used to a tool like SQL Server Management Studio to query and view data. Having a good Windows Azure Storage tool makes it easy to ensure that your code is working correctly with the storage system and you are getting the results you are looking for. There are a variety of storage tools out there. Most of these tools provide a mechanism for querying your storage tables as well as being able to manipulate the data within the tables. For example, LINQPad is a good tool for querying Table Storage and with the Windows Azure SDK for .NET you can use the Server Explorer to query your tables directly from within Visual Studio.
In this article we'll use Cerebrata's Azure Management Studio to load up some additional data in our sportingproducts table and then view it. For just a quick sample we will upload a simple CSV file containing additional products, with columns such as category, SKU, product name and description.
From within Azure Management Studio I find the table from the storage account I wish to upload the data to and right-click the table. From the context menu I can select Upload > From CSV….
Once I’ve selected my CSV file, I will be prompted for some mapping information.
Note that I've set the checkbox to indicate that the names of the columns, which will map to the properties, exist in the top row. I've also indicated that the text qualifier is a double quote. Notice that line 6 of the sample file has a comma in the description, so I need to surround the full description in this qualifier, since commas are my column delimiter. You can check out the format preview if you like to see how the mapping looks for the first few lines of the file. Click Next when you are satisfied.
On the next screen I can modify the data types for the columns that I'm importing, and also set up which values should map to the PartitionKey and RowKey. You can also determine if you want to replace an entity if it already exists with the data you are importing. Once you are set with these, click on 'OK' to perform the import. Azure Management Studio has a 'Transfers' window that will show you the progress of the import.
Once the import is complete I can refresh my sportingproducts table view in Azure Management Studio to see the data.
If you don’t see all of your data, note that Azure Management Studio doesn’t pull back your entire table when it shows the view. It only pulls back the first 1,000 entities by default (which you can change in the settings).
As mentioned earlier, there are a variety of Windows Azure Storage tools out there. When looking for any tool, you should try several and pick one that best suits your needs.
Querying the Table
We've successfully added data to the table both via code and by importing with a tool. While the storage tools allow us to verify that we uploaded what we expected from our code, applications are likely to be using code to retrieve their data. Let's look at a very simple sample query. To do this, we replace the code in the try/catch of the Main method with the following code:
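The query listing is missing from this copy; a reconstruction based on the description that follows (a point lookup by partition key and row key) reads:

```csharp
// Look up a single entity by its primary key: PartitionKey + RowKey
TableOperation retrieveOperation =
    TableOperation.Retrieve<SportingProductEntity>("Baseball", "BBt1032");

TableResult result = table.Execute(retrieveOperation);

SportingProductEntity product = result.Result as SportingProductEntity;
if (product != null)
{
    Console.WriteLine(product.ProductName);
}
```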
When you execute this code, you’ll see that the entity was retrieved and the product name was displayed. In this case you will see that we queried by the partition key and row key, so essentially we were performing a lookup by the entity’s primary key. There are numerous ways of querying for the data you have in storage, but knowing the exact partition key and row key will always be the fastest.
For the new Table Service layer you can also use the lightweight query class TableQuery. This class allows you to construct a query to send to the server that is more complex than simply asking for the PartitionKey and RowKey; however, the syntax is not as easy as working with LINQ. If our data had an integer property for StockOnHand, we could search for all items in the Baseball category that we have in stock using the following:
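The listing did not survive; a sketch of such a "fluent" query, using the filter helpers on TableQuery, would be:

```csharp
// Build the filter server-side: PartitionKey == "Baseball" AND StockOnHand > 0
TableQuery<SportingProductEntity> query =
    new TableQuery<SportingProductEntity>().Where(
        TableQuery.CombineFilters(
            TableQuery.GenerateFilterCondition(
                "PartitionKey", QueryComparisons.Equal, "Baseball"),
            TableOperators.And,
            TableQuery.GenerateFilterConditionForInt(
                "StockOnHand", QueryComparisons.GreaterThan, 0)));

// Execute the query against the table
IEnumerable<SportingProductEntity> results = table.ExecuteQuery(query);
foreach (SportingProductEntity item in results)
{
    Console.WriteLine(item.ProductName);
}
```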
Note that the construction of the filter we want to apply on the server happens by building up the TableQuery. We then execute the query against the table. The result of the execution is an IEnumerable, so from there we can apply any of the LINQ operators we are familiar with; however, all of that logic will be applied on the client side.
In versions of the .NET storage library prior to 2.x you could use LINQ to query your tables, which under the hood used WCF Data Services. When the 2.0 storage library was released it provided the new Table Service layer, which utilizes the new Microsoft.Data.OData library. This new approach offers greater performance and lower latency than the previous table service layer, but didn't ship with an IQueryable interface until version 2.1. While you can still use the 1.x style of queries with the previous classes found under the Microsoft.WindowsAzure.Storage.Table.DataServices namespace, or the 2.1 "fluent" mechanism shown above, you can also use the new IQueryable option using the CreateQuery method off the table reference. The same query shown above with the "fluent" approach would look like the following using the new IQueryable approach:
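The listing did not survive; using CreateQuery<T>, the equivalent IQueryable version would look roughly like this (note the discussion below of ToList and Any() assumes this shape):

```csharp
// CreateQuery<T> returns an IQueryable; the filter is translated and
// applied server-side when the query executes
var query = (from product in table.CreateQuery<SportingProductEntity>()
             where product.PartitionKey == "Baseball"
                   && product.StockOnHand > 0
             select product).ToList();   // ToList() executes the request

// Any() is evaluated client-side against the materialized list
if (query.Any())
{
    Console.WriteLine("Found {0} products in stock.", query.Count);
}
```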
Note that the call to ToList on the query actually executed the request and performed the filter on the server. The query would also have executed if the code had simply iterated the results; however, the Any() extension method is not supported by the provider, so the code executes the query first. Not all extension methods will be supported, so you may need to execute your query before performing additional operations, but be aware that once the query is executed the additional operations will occur client side.
A word about partitions and queries
The scalability you get from a Windows Azure Table comes from how you partition your data using the partition key. For example, let's look at the data we used above. We chose to use categories as the partition key. In the table sample below you'll see that we have at least two partitions: Cricket and Baseball. Each entity is stored with a category as the partition key and uses the product SKU as the row key.
As mentioned in the code example above, the best possible query to run is one that includes both partition key and row key, as this is the primary key for an entity. In the code sample, we knew the category and the SKU for what we were looking for, so the query will be the fastest we can achieve. It might not always be possible for you to know both keys, so it is best to understand the flow of how you will access this data in order to get the best performance. This category-partition approach might be acceptable if, for the vast majority of the time, the flow of our data access starts at the category. For instance, the approach above may make sense if the flow of our solution is to retrieve all of the products within a category to display on a product page, from which a user can select an individual product where we already know the category.
All entities with the same partition key within the same table are guaranteed to be accessed via the same partition server. A partition server knows about all the data that exists within one or more partitions. The number of partitions a partition server is responsible for depends on how much data is within the partitions and how often they are being accessed. The Windows Azure Storage system manages where partitions are stored and keeps a mapping of what partitions are managed by which partition servers. This mapping is then stored at the front end layer, which is made up of servers that are load-balanced for all requests to storage. As requests for data come into the system, the front end nodes direct the request to the correct partition server, or servers, to retrieve the data. That means that when we query for all of the data in the “Cricket” category we will only have to talk to a single partition server in Windows Azure to retrieve our data because all of that data resides in a single partition.
Remember that a table can have many partitions, so a query against a table may actually cross one or more partition servers. In the example above, our query wouldn't be very efficient at all if we only had a product SKU. This is because we do not know the partition key. In essence, our query for a specific SKU would have to be executed against every partition server that stored data for our table in order to find the specific matching SKU. In addition, since a row key itself is only unique within the partition it lives in, the query cannot assume that once it has found a single result the query is complete. In order for the query to be complete, the system will need to look through the entire table, which may be spread across multiple partition servers. This is what in relational databases we would call a table scan, which is very inefficient.
It is very important to choose the correct partitioning scheme. You may need to try multiple options to see what works best for you, or even store the same data in different ways in order to optimize the system for your scenarios. This is not unlike when companies use a data warehouse to store the same data as their transactional systems in a manner which is easier to build analytical queries from.
Continuation Tokens
One of the things to keep in mind when using Table Storage is that, in some cases, your query may not complete on a single call to the REST based API. There are a lot of reasons this might occur, such as when the query crosses multiple partition servers or the amount of data coming back is quite large. When this happens, the storage service will return continuation tokens with the query results. These continuation tokens can then be used to continue the query by making additional calls.
The good news is that the .NET storage library can completely abstract the continuation tokens from you and your code. In the code example above we used TableOperation.Retrieve and CloudTable.Execute to perform the query. Using this mechanism, the storage library handles the continuation tokens for you and keeps making calls back to the storage service until all of the results are found. There are methods on the CloudTable class, such as ExecuteQuerySegmented, which return a TableResultSegment object. Think of this as a partial result in which your code can look for a continuation token and decide if it should continue the query or not. This approach gives you much more control over your queries. For example, it provides a means for you to do manual paging of results or ensure that you don't accidentally pull back millions of entities.
If you do decide to use the segmented queries remember to always check for the existence of continuation tokens rather than rely on checking to see if the last result had any rows. It is very possible to receive an empty result set but still have continuation tokens. This happens most often when a query crosses partition server boundaries but that particular partition didn’t have any results to include.
Do I get Transactions?
One of the most common questions that comes up when talking about Table Storage is whether transactions are supported. Table storage does support batch transactions against data within the same table and the same partition. There are several rules about these transactions, though: all of the entities must exist within the same partition of the table, the number of entities in the transaction can't exceed 100, and the entire batch being sent to the server cannot exceed 4 MB in size. As you can see, the limitations on the level of transaction support you get revolve around the partition. This is another good reason why choosing your partition key scheme is very important.
Summary
Windows Azure Table Storage is a highly scalable, key-value pair NoSQL storage offered as a service. Developers on the Windows Azure platform should become familiar with how Table Storage works and how it differs from the relational databases they are used to. Knowing how table storage works will help you determine if it is a good fit for your particular requirements.
The features and capabilities of table storage continue to grow. At the BUILD conference in June it was announced by the Windows Azure Storage team that they plan to support CORS for table storage and JSON results by the end of 2013, which will be very welcome features.
This article only scratched the surface using Table Storage. To become more familiar with the service I highly recommend researching the following topics and links:
- Windows Azure Storage: What’s Coming, Best Practices and Internals – This is a video from the BUILD conference.
- Windows Azure Storage Client Library 2.0 Tables Deep Dive – The 2.x version of the client library was a major change from previous versions. This blog post from the Windows Azure Storage Team helps shed some light on the differences. Note that this came out when 2.0 shipped, and there has already been other versions shipped.
- Windows Azure Storage Scalability and Performance Targets – It is very important to understand the scalability and throughput needs of our solution and compare those against the known targets for the storage system itself to make sure it matches up.
- If you need to securely provide a means for code to directly call the Table Storage APIs but you don’t want to provide them the access keys for a storage account you can create a Signed Access Signature for them to use.
- Understand how to leverage Retry policies when using the Client Library through the
TableÂRequestÂOptionsparameter that can be passed to most of the query operations. This is very important for dealing with transient errors that might occur in distributed systems.
- Understanding how you are charged for using the Table.
- You can control serialization of entities beyond the reflection based serialization provided by default in the
TableEntitybase class by either manually implementing your own using the
ITableEntityinterface for your entities, or by using the
EntityResolverdelegate.
- Getting the Most out of Windows Azure Storage – This is a great video on by Joe Giardino that covers many of the new features of the storage library as well as details best practices. | https://www.simple-talk.com/cloud/cloud-data/an-introduction-to-windows-azure-table-storage/ | CC-MAIN-2016-44 | refinedweb | 4,595 | 59.43 |
Thanks to Boston Python
Python is ...
In the browser --or--
In the console, type python to start things.
Type exit() to end the session:
[yannpaul@scc1 content]$ python Enthought Python Distribution -- Version: 7.3-2 (64-bit) Python 2.7.3 |EPD 7.3-2 (64-bit)| (default, Apr 11 2012, 17:52:16) [GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2 Type "credits", "demo" or "enthought" for more information. >>> exit()
This is fine for quick coding sessions, but nothing's saved.
>>> 7*9 63 >>> 1/2 + 10 10 >>> 1.0 /2 + 20/ 2.0 10.5
>>>>>>> print x, y variables are just names >>> x+y 'variablesare just names' >>>>> x = 10 >>> y = 1.2 >>> print z, x, type(y) this is important later on 10 <type 'float'>
a section of code which is grouped together.
First Python ...
>>> x = 10 >>> y = 12 >>> x>y False >>> if x>y: ... print "x is bigger!" ... bigger_one = x ... else: ... print "y is bigger!" ... bigger_one = y ... y is bigger! >>> x>y False >>> not x>y True >>> print bigger_one 12
Now C ...
Python is ... interperative.
C programming is not
file -> compier -> executable
C example
int x, y, bigger_one; x = 10; y = 12; if (x > y) { // first code block bigger_one = x; } else { // second code blocks bigger_one = y; }
indentation helps, no?
int x, y, bigger_one; x = 10; y = 12; if (x > y) { // first code block bigger_one = x; } else { // second code blocks bigger_one = y; }
Fortran ...
integer :: x, y, bigger_one x = 10 y = 12 if (x > y) then ! first code block bigger_one = x else ! second code block bigger_one = y end if
Matlab ...
x = 10; y = 12; if x > y % first code block bigger_one = x; else % second code block bigger_one = y; end
Odd detail: interactive python requires blank indentation.
In scripts (more later), python ignores blank lines
>>> if True: ... x = 10 ... >>> y = x * 2 File "<console>", line 1 y = x * 2 ^ IndentationError: unexpected indent
>>> def add(x, y): ... return x + y ... >>> result = add(5, 7) # x = 5 and y = 7 >>> print result 12 >>> add(-1, 4) 3
>>> def bigger_one(x, y): ... "returns the value of the larger of two numbers" ... if x>y: ... return x ... elif y>x: ... return y ... return None # overkill: it's the default ... >>> bigger_one(10, 12) 12 >>> bigger_one(5, -3) 5 >>> bigger_one(4, 4) >>> print bigger_one(4, 4) None
First, not, True, and False
>>> not False True >>> not True False >>> if not False: ... print "... then it must be true" ... ... then it must be true >>> # another blank line
if it's a vacation, you always sleep in
return True (True, you should sleep in)
if it's not a weekday, you sleep in too
also return True
Otherwise you don't sleep in
return False
Example Use:
>>> x = sleep_in(True, True) # weekday = True, vacation = True >>> print x True
Example Answer:
def sleep_in(weekday, vacation): return True # always sleep in
def sleep_in(weekday, vacation): if vacation: return True if not weekday: return True return False
def sleep_in(weekday, vacation): if vacation or not weekday: return True return False
def sleep_in(weekday, vacation): return vacation or not weekday
So far...
A Traceback is an error report:
yannpaul@scc1 examples]$ python traceback_example.py Traceback (most recent call last): File "traceback_example.py", line 4, in <module> bad_function() File ".../examples/bad_module.py", line 5, in bad_function raise RuntimeError RuntimeError
You combine strings with +
you get name, which holds a string
you need to return 'Hello, name!',
where name is the value of name
def hello_name(name): return 'Hello ' + name + '!'
Single quotes
'Hello, Yann!'
are the same as double quotes
"Hello, Yann!"
Look at the example nobel.py, and confirm your understanding of newlines, comments, and multi-line strings.
doc strings are unassigned strings at the beginning of block.
They are used as documentation.
>>> def add(x,y): ... "return sum of x and y" ... return x+y ... >>> help(add) Help on function add in module __main__: add(x, y) return sum of x and y
So far...
code blocks require indentation
if/else/elif with True and False
functions are defined using def
Strings are just text
'single' same as "double"
'''trip quote
can be multi-lined'''
+ add to concatenate
#: pound is comment character
remember variables are just names ...
>>> alist = [1, 2, 3] >>> alist [1, 2, 3] >>> [1, 2.0, '3'] [1, 2.0, '3']
>>> print alist[0], alist[1], alist[2] 1 2 3 >>> print "alist has", len(alist), "elements" alist has 3 elements >>> print alist[-1], alist[-2], alist[-3] 3 2 1 >>> alist[0] = 2
>>> x = [0, '1', 2.25] >>> y = [0, '1', 2.25] >>> z = x >>> z[0] = 99 >>> x[2] = 101
Let's walk through this.
Take a look at the practice problem found in dot_product1.py
Homework: First Last 6
That is, they are mutable
append is the method used to extend a list
>>> alist[0] = 1 >>> alist.append(4) >>> alist [1, 2, 3, 4]
It's used in an object oriented fashion:
object.method() means:
call function 'method' with first argument as 'object'
This also means method belongs to object.
This comes up later when we talk about modules.
Concat...
>>> thelist = alist + [5, 7] >>> thelist [1, 2, 3, 4, 5, 7]
Insert...
>>> thelist.insert(5, 6) # index 5 is 6th position >>> print thelist [1, 2, 3, 4, 5, 6, 7]
Delete...
>>> del thelist[5] >>> thelist [1, 2, 3, 4, 5, 7]
in is a keyword used to check if an element is in a list:
>>> thelist [1, 2, 3, 4, 5, 7] >>> 1 in thelist True >>> '1' in thelist False
Python loves to use list-like behavior over and over again
Strings act like lists:
>>>>> name[0:3] 'Tom' >>> len(name) 5
But strings are immutable--you can't edit them:
>>> name[1] = 'a' Traceback (most recent call last): File "<console>", line 1, in <module> TypeError: 'str' object does not support item assignment
This is
>>> a = ['cat', 'window', 'defenestrate'] >>> for x in a: ... print x, len(x) ... cat 3 window 6 defenestrate 12 >>> # blank line
the same as
>>> x = a[0] >>> print x, len(x) cat 3 >>> x = a[1] >>> print x, len(x) window 6 >>> x = a[2] >>> print x, len(x) defenestrate 12
Homework: Count Evens
Loops in C involving incrementing a counter
int x; for (x=0; x<10; x++) { // Stuff }
range() is a method that makes a list, from 0 to N-1
>>> range(10) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> range(5, 10) [5, 6, 7, 8, 9] >>> range(0, 10, 3) [0, 3, 6, 9]
>>> for i in range(len(a)): ... print i, a[i] ... 0 cat 1 window 2 defenestrate >>> # required blank line...
Use this new found knowledge on dot_product2.py
The solution is found here: practice/solutions/dot_product2.py
1 # try to complete the function dot_product: 2 # also read through the other functions in this code and try 3 # to understand what they do 4 5 def dot_product(length, vec1, vec2): 6 """Returns the dot product of two vectors 7 8 `vec1` and `vec2` are each `length` elements long 9 """ 10 product = 0 11 for i in range(length): 12 product += vec1[i] * vec2[i] 13 return product
while is a keyword for repeating until something is False
"while something is True, keep doing code-block"
>>> x=0 >>> while x<4: ... x += 1 ... print x ... 1 2 3 4 >>> #... extra blank line
break is a way to get out of any loop:
>>> x=0 >>> while True: ... if x >= 4: ... break ... x += 1 ... print x ... 1 2 3 4 >>> #... extra blank line
So far...
These are like flexible lists.
Here we're making a telephone book..
>>> tel = {'jack':4098, 'sape':4139} >>> tel {'sape': 4139, 'jack': 4098} >>> type(tel) <type 'dict'>
curly braces are used for construction
But you still access with [ and ]:
>>> tel['jack'] 4098
Assignment can change a key's value,
or it can add a new key:value pair:
>>> tel['guido'] = 4127 >>> tel {'sape': 4139, 'jack': 4098, 'guido': 4127} >>> tel['sape'] = 4039 >>> tel {'sape': 4039, 'jack': 4098, 'guido': 4127}
keys() returns a list of all the 'keys'
>>> names = tel.keys() >>> names ['sape', 'jack', 'guido'] >>> for person in names: ... print person+"'s number is", tel[person] ... sape's number is 4039 jack's number is 4098 guido's number is 4127 >>> # extra return
In practice you would just do this
>>> names = tel.keys() >>> names ['sape', 'jack', 'guido'] >>> for person in tel: ... print person+"'s number is", tel[person] ... sape's number is 4039 jack's number is 4098 guido's number is 4127 >>> # extra return
values() gives you a list of the values
>>> tel.values() [4039, 4098, 4127] >>> max(tel.values()) 4127
Any file read (and executed) by python like so:
python somefile.py
Any file read (and executed) by python from another file:
>>> import random >>> random <module 'random' from '/usr/local/apps/epd-7.3-2/lib/python2.7/random.pyc'>
help(random)
Help on module random: NAME random - Random variable generators. FILE /usr/local/apps/epd-7.3-2/lib/python2.7/random.py DESCRIPTION integers -------- uniform within range sequences --------- pick random element pick random sample generate random permutation distributions on the real line: ------------------------------ uniform triangular normal (Gaussian) lognormal negative exponential gamma beta pareto Weibull distributions on the circle (angles 0 to 2pi) --------------------------------------------- circular uniform von Mises
random is a module that comes with Python,
but it's not loaded by default
We'll focus on two functions that random defines
>>> help(random.randint) Help on method randint in module random: randint(self, a, b) method of random.Random instance Return random integer in range [a, b], including both end points. >>> random.randint(0, 9) 6 >>> random.randint(0, 9) 4 >>> random.randint(0, 9) 6 >>> random.randint(0, 9) 2
>>> lucky = ['Tim', 'Tom', 'Ted', 'Tony'] >>> type(lucky) <type 'list'> >>> random.choice(lucky) 'Tom' >>> random.choice(lucky) 'Tony' >>> random.choice(lucky) 'Tony' >>> random.choice(lucky) 'Tony' >>> random.choice(lucky) 'Tim' >>> random.choice(lucky) 'Tom' >>> random.choice(lucky) 'Ted'
Put it all together with state_capitals.py
Today ... | http://scv.bu.edu/documentation/tutorials/python/programmers/ | CC-MAIN-2014-15 | refinedweb | 1,657 | 73.47 |
via?
Question: What makes you think Python is not already big in the "large systems" space?
There are numerous examples of it already being there (Google, YouTube, SoE, DreamWorks, VMWare, EVE, etc, etc, etc, etc).
I do believe that a strict 'thread private by default' system would be fantastic for python. My only concern would be for those of us who like python for prototyping, and implement in other languages. The GIL can be a stumbling block already, and such a change would be a death blow to prototyping threaded systems for implementation in C++.
Any of the "make such-and-such immutable" ideas would greatly limit what I regard to be Python's greatest strength: testability. Currently I can manipulate a module's namespace to influence how it perceives (say) the os module, perhaps replacing rmdir() with a function that will test an otherwise-hard-to-test code path in my library.
I wish people would just accept that threading is not one of Python's strengths and move on -- or better, write a decent interprocess queue so that MP code is easier.
Doug, I realize Python is already used widely for large systems. I simply mean that when concurrency-aware code is mentioned, Erlang et al are mentioned far more often than Python.
Collin, could that not be solved with cooperative declaration between authors and users? You're right; allowing authors the sole power to @share would be to inter-thread sharing what Java's "private" modifier is to inter-class sharing--certainly not Pythonic. But in the cases where I've monkeypatched someone else's module, I would not have minded adding an explicit declaration to do so--in fact, I can see that being a beneficial construct for understanding and documentation:
import anotherlib
with shared(anotherlib):
anotherlib.os.rmdir = myrmdir
...especially if the name is fairly easy to grep for. Maybe we could name it "monkey" instead of "shared"? ;)
Hi Robert -
can you expound upon what the "shared" syntax would look like? suppose I declare "x=[]" at the top of my module. Now I use the "thread" module to run two threads, each using a worker function that randomly appends values to "x". How does the "shared" syntax prevent that list from being modified by both threads? does "x" create copies of itself local to each thread when it detects a modify operation ? how does making dict on classes/modules immutable have anything to do with that (unless you mean, no more module-level globals or class-level variables )?
Hi Mike,
In my very humble musing, yes, each thread would get its own "x", but only because the "object" to which the name "x" refers is a mutable object. If "x" referred to an immutable, all threads could share it without any further intervention. This is admittedly tricky with Python's dynamic typing, but I think this automatic creation of threadlocals would only have to occur for globals and object attributes (and possibly cell references), not locals, and not function arguments.
I would imagine the copy would occur on the first get/set/del operation, just as it does for the current threading.local implementation.
In this scheme, if you don't make dict on classes/modules immutable, you end up localizing almost everything; you might as well go with multiple processes at that point. Maybe I should run through Dejavu (and perhaps CherryPy) and put some numbers to these vague intuitions of mine... ;)
oh. why not have dict on classes/modules be marked as "shareable"? ? other than you've now opened up a "non-thread-safe" collection....or is that the reason ?
also what does it really mean for class.dict to be immutable...does it mean I cant say MyClass.foo = something outside of the "class" declaration itself ?
Right; avoiding "non-thread-safe" collections is the purpose of the idea, and class/module/instance dicts are collections. Proposing that classes and modules be immutable is an attempt to maximize the volume of code that can be shared.
And, yes, that would mean you can't say "MyClass.foo = something" outside of the class, at least not without an explicit "monkey" declaration either by the caller (user) or the callee (author) to mean, "this attribute is now thread-global". That seems a horrible thing for a user to be able to do, until you realize they already do it (see the os.rmdir example above). | http://www.aminus.org/blogs/index.php/2007/06/05/python_concurrency_syntax?blog=2 | CC-MAIN-2018-34 | refinedweb | 740 | 62.27 |
In this article, we'll see how the this class can prove to be useful when we need to format cells, based on conditions. For our little demo, we'll apply formatting to cells based on the text being displayed and the output would be something like this :
First of all, to accomplish such a scenario, lets start by creating a class that inherits from the CellFactory class.
public class CustomCellFactory : CellFactory { }
Next, we need to override the CreateCellContent() method of the CellFactory class and set the background of the border element which surrounds the cell, based on conditions :
public override void CreateCellContent(C1FlexGrid grid, Border bdr, CellRange rng) { base.CreateCellContent(grid, bdr, rng); //format cells in second column if (rng.Column == 2) { if (grid[rng.Row, rng.Column].ToString() == "Japan") { bdr.Background = new SolidColorBrush(Colors.LimeGreen); } else if (grid[rng.Row, rng.Column].ToString() == "India") { bdr.Background = new SolidColorBrush(Colors.MediumVioletRed); } else if (grid[rng.Row, rng.Column].ToString() == "United States") { bdr.Background = new SolidColorBrush(Colors.Yellow); } else if (grid[rng.Row, rng.Column].ToString() == "United Kingdom") { bdr.Background = new SolidColorBrush(Colors.Gold); } } }
Simple, isn't it!
Now you may ask what's dynamic in this? The answer to this is in the gif image below :
Whenever the data in the grid changes, the CreateCellContent() method gets fired and the formatting gets applied accordingly, even when the data is edited.
You can use the CellFactory class to accomplish a lot of other tasks in a similar way, depending on your requirements.
You may download the samples for detailed implementation from these links :
Download C# Sample
Download VB Sample | https://www.grapecity.com/en/blogs/dynamic-conditional-formatting-in-c1flexgrid-wpf | CC-MAIN-2018-13 | refinedweb | 269 | 51.75 |
Today marks an important step towards our goal of enabling you, our customers and partners, to build and grow your businesses on the Windows Azure platform. We are pleased to announce that starting today you can upgrade your Community Technology Preview (CTP) accounts of the Windows® Azure™ platform (i.e., Windows Azure, SQL Azure and/or Windows Azure platform AppFabric) than your CTP accounts when ordering or remove all applications and data associated with your CTP accounts prior to sign up.
If you elect not to upgrade, on February 1, 2010 your CTP accounts will be disabled and any Windows Azure Storage will be made read-only. SQL Azure CTP accounts will be able to keep using their existing databases but will no longer be able to create new databases. Also, Windows Azure platform AppFabric namespaces will be disabled. On March 1, 2010, the SQL Azure CTP accounts that have not been upgraded will be deleted. On April 1, 2010, the Windows Azure Storage CTP accounts and Windows Azure platform AppFabric namespaces that have not been upgraded will be deleted. It is important to export your data if you do not plan to upgrade to a commercial subscription prior to these dates.. | https://azure.microsoft.com/fr-fr/blog/sign-up-for-a-windows-azure-platform-offer-today-and-get-visibility-into-your-usage/ | CC-MAIN-2017-13 | refinedweb | 202 | 57.5 |
On 05/07/2012 11:00 AM, Antoine Pitrou wrote: > On Mon, 07 May 2012 09:01:07 -0400 > "Eric V. Smith" <eric at trueblade.com> wrote: >> On 05/07/2012 06:53 AM, Antoine Pitrou wrote: >>> On Mon, 07 May 2012 10:38:15 +0200 >>> "Martin v. Löwis" <martin at v.loewis.de> wrote: >>>> >>>> Interestingly, it appears that pkg_util will break under PEP 420, >>>> anyway, as it currently does (in _handle_ns) >>>> >>>> loader = importer.find_module(packageName) >>>> if loader is None: >>>> return None >>>> ... >>>> loader.load_module(packageName); module.__path__ = path >>>> >>>> Now, if loader suddenly becomes a string, than the load_module >>>> call will raise an attribute error (untested). >>> >>> I think find_module() returning a string is a kludge. It would be >>> better IMO if it returned a dedicated object clearly pointing out that >>> a namespace package was potentially found (and also allowing to record >>> other potential metadata). >> >> Well the original goal was to allow existing finders to be called >> without modification. Are you saying we always return a dedicated object >> (thus breaking existing finders)? > > Why would it break existing finders? finder_module() would either return > a loader, or a dedicated object (or None). If we introduce a new type that all find_module() functions must return in all cases, it would break existing finders. This object would have a flag (or some other value) that says either "I returned a loader" or "I returned a namespace package path". That's how I originally read your message, but I guess that's not what you're saying. If we return this new object instead of what PEP 420 currently defines as a string, then I agree there won't be any impact on existing finders. Just as there won't be an impact if we define it as a string instead of some new object. > Returning a string is completely non-obvious to the caller (who may not > know about namespace packages or their precise implementation in PEP > 420). 
Well, if you don't know about namespace packages you won't be returning a string. So I don't see a problem there.. Eric. | https://mail.python.org/pipermail/import-sig/2012-May/000567.html | CC-MAIN-2014-15 | refinedweb | 347 | 64.51 |
Hi On Fri, Jan 05, 2007 at 10:20:36AM +0100, Luca Abeni wrote: > Hi Michael, > > On Thu, 2007-01-04 at 17:48 +0100, Michael Niedermayer wrote: > [...] > > > > > > - MSG_WARN("No accelerated colorspace conversion found\n"); > > > + av_log(c, AV_LOG_DEBUG, "No accelerated colorspace conversion found\n"); > > > > hmm, shuldnt WARN->INFO rather? > Well, in the patch I respected the current mapping, that is > WARN -> DEBUG > FATAL -> ERROR > ERR -> ERROR > V -> INFO > DBG2 -> DEBUG > INFO -> INFO > > Is "WARN->INFO" the only needed change, or should some other mapping be > changed? Let me know, and I'll update the patch. > > > maybe we should also consider to increase the number of AV_LOG_* iam not sure > > though > We currently have: QUIET, ERROR, INFO, DEBUG. To have a 1->1 mapping > with the levels used in libswscale, we would need to add FATAL, WARNING, > and VERBOSE. I think this would result in an ABI incompatibility, so I > am not sure if it is worth doing it. > Anyway, if we decide to add some debug levels I'd like to do it before > committing this patch (otherwise, I would have to change the libswscale > code 2 times :). hmm id suggest someting like #if LIBAVUTIL_VERSION_INT < (50<<16) #define AV_LOG_QUIET -1 #define AV_LOG_FATAL 0 #define AV_LOG_ERROR 0 #define AV_LOG_WARN doesnt look correct / something which may or may not * lead to some problems like use of -vstrict -2 */ #define AV_LOG_WARN 24 #define AV_LOG_INFO 32 #define AV_LOG_VERBOSE 40 /** * stuff which is only usefull for libav* developers */ #define AV_LOG_DEBUG 48 #endif comments welcome (probably this should be disscussed in a seperate thread: <> | http://ffmpeg.org/pipermail/ffmpeg-devel/2007-January/027054.html | CC-MAIN-2014-35 | refinedweb | 258 | 52.53 |
This is going to be an extremely brief and high level overview of what a programming language is. My goal here is that if you don’t know what a programming language is before reading this, you have an idea of what it is after.
A programming language is just the way that you tell a computer what you want it to do. You run programs on your computer, like a web browser, or maybe iTunes. These are programs. Someone or a group of people built these programs, and the way they did it was by using a programming language.
There is a famous first program that people write, called “Hello World” where you just try and get the program to print out “Hello World.” Here is an example in a programming language called C:
#include <stdio.h> int main(){ printf("Hello World"); }
Theres a lot of random stuff here, a lot of random syntax to get the program to run, but the basic idea is that this will output “Hello World” when you run it.
In another programming language called python, if you want to print hello world you just write:
print "Hello World"
Different programming languages have different ways of telling the computer to do things. Many math operations also carry over, so for example you could add two numbers in python with:
print 5 + 10
There is wayyyyyyyyyyy more to programming languages. There is a specific legal way to write things in programming languages, and that is called the syntax. There are lots of other ways to classify and talk about programming languages, but that is for another time. | http://thekeesh.com/2011/11/how-the-internet-works-6-programming-languages/ | CC-MAIN-2018-17 | refinedweb | 272 | 75.84 |
A new way to write FASM assembly code with Extended Headers
By Beege, in AutoIt Example Scripts
Here is the latest assembly engine from Tomasz Grysztar, flat assembler g (fasmg), as a dll which I compiled using the original fasm engine. He doesn't have it compiled in the download package, but it was as easy as compiling an exe in AutoIt if you ever have to do it yourself. Just open up the file in the fasm editor and press F5.
You can read about what makes fasmg different from the original fasm HERE if you want. The minimum you should understand is that this engine is bare bones by itself, not capable of very much. The macro engine is the major difference, and it uses macros for basically everything now, including implementing the x86 instructions and formats. All of these macros are located within the include folder, and you should keep that folder in its original form.
When I first got the dll compiled I couldn't get it to generate code in flat binary format. It was working, but the size of the output was over 300 bytes no matter what the assembly code was, and I could just tell it was outputting a different format than binary. Eventually I figured out that within the primary "include\win32ax.inc" it executes a macro, "format PE GUI 4.0", if x86 has not been defined. I underlined macro there because at first I (wasted shit loads of time because I) didn't realize it was a macro (adding a bunch of other includes), since in version 1 the statement "format binary" was the default if not specified and specifically means add nothing extra to the code. So long story short, the part I was missing was including the cpu type and extensions from the include\cpu folder. By default I add the x64 type and SSE4 ext includes. Note that the x64 here is not about what mode we are running in; this is about what instructions your cpu supports. If you are running on some really old hardware that may need to be adjusted, or if you're on to more advanced instructions like the avx extensions, you may have to add those includes to your source.
Differences from previous dll function
I like the error reporting much better in this one. With the last one we had a ton of error codes and a variable return structure depending on what kind of error it hit. I even had an example showing you what kind of error would give you correct line numbers vs one that wouldn't. With this one, stdout is passed to the dll function and it simply prints the line/details it had a problem with to the console. The return value is the number of errors counted.
It also handles its own memory needs automatically now. If the output region is not big enough, it will VirtualAlloc a new one and VirtualFree the previous.
Differences in Code
Earlier this year I showed some examples of how to use the macros to make writing assembly a little more familiar. Almost all the same functionality exists here, but there are a couple of syntax sugar items gone and a slight change in other areas.
What's gone are FIX and PTR. Both are syntax sugar that don't really matter.
A couple of changes to structures as well, but these are for the better. One is that unnamed elements are allowed now, but if an element does not have a name you are not allowed to initialize it during creation, because elements can only be initialized via the syntax name:value. Previously when you initialized the elements, you would do so by specifying values in a comma-separated list in a specific order, like value1,value2,etc, but this had a problem because it expected commas even when the elements were just padding for alignment. So this works out better: you specify the name, and there is no need for the _FasmFixInit function. "<" and ">" are no longer used in the initializers either.
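For reference, the tag strings these struct helpers consume are just "type name[count]" items separated by semicolons. Here is a rough Python sketch of how such a tag splits into fields — this is only my illustration of the idea, not the actual AutoIt implementation, and the function name is made up:

```python
import re

def parse_tag(tag: str):
    """Split an AutoIt-style struct tag into (type, name, count) triplets.
    A missing [count] means a single element."""
    fields = []
    for item in tag.split(';'):
        m = re.fullmatch(r'\s*(\w+)\s+(\w+)(?:\[(\d+)\])?\s*', item)
        if not m:
            raise ValueError('bad field: ' + item)
        fields.append((m.group(1), m.group(2), int(m.group(3) or 1)))
    return fields

tag = 'byte x;short y;char sNote[13];long odd[5];word w;dword p;char ext[3];word finish'
for field in parse_tag(tag):
    print(field)  # ('byte', 'x', 1), ('short', 'y', 1), ('char', 'sNote', 13), ...
```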
OLD:

$sTag = 'byte x;short y;char sNote[13];long odd[5];word w;dword p;char ext[3];word finish'
_(_FasmAu3StructDef('AU3TEST', $sTag)) ;convert and add definition to source
_(' tTest AU3TEST ' & _FasmFixInit('1,222,<"AutoItFASM",0>,<41,43,43,44,45>,6,7,"au3",12345', $sTag)) ;create and initialize

NEW:

$sTag = 'byte x;short y;char sNote[13];long odd[5];word w;dword p;char ext[3];word finish'
_(_fasmg_Au3StructDef('AU3TEST', $sTag)) ;convert and add definition to source
_(' tTest AU3TEST x:11,y:22,sNote:"AutoItFASM",odd:41,odd+4:42,odd+8:43,w:6,p:7,ext:"au3",finish:12345') ;create and initialize

Extra Includes
I created an includeEx folder for the extra macros I wrote/found on the forums. Most of them are written by Tomasz, so they may eventually end up in the standard library.

Edit: There's only the one include folder now. All the default includes are in their own folder within that folder, and all the custom ones are top level.
Align.inc, Nop.inc, Listing.inc
The Align and Nop macros work together to align the next statement to whatever boundary you specify, using multibyte nop codes to fill in the space. Filling the space with nops is the default, but you can also specify a fill value if you want. align.assume is another macro, part of align.inc, that can be used to tell the engine that a certain starting point is assumed to be at a certain boundary alignment, and it will do its align calculations based on that value.
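The padding an align macro has to emit is just modular arithmetic. A quick sketch (plain Python, not part of the fasmg package) of the calculation:

```python
def align_pad(offset, boundary):
    """Bytes of filler (nops or a chosen fill byte) needed so the
    next instruction starts on a multiple of `boundary`."""
    return (-offset) % boundary

# In the listing further below, code at offset 0x25 is aligned to 8
# with a 3-byte nop (0F 1F 00), and code at 0x65 with three 0xCC bytes.
print(align_pad(0x25, 8))  # 3
print(align_pad(0x65, 8))  # 3
print(align_pad(16, 8))    # 0 - already aligned, nothing emitted
```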
Listing is a macro that is great for seeing where and what opcodes are getting generated from each line of assembly code. Below is an example of the source and the output you would see printed to the console during assembly. I picked this slightly longer example because it best shows the use of align and nop, and then the use of listing to verify the align/nop code. Nop codes are instructions that do nothing, and one use of them is as space fillers when you want a certain portion of your code to land on a specific boundary offset. I don't know all the best practices here with that (if you do, please post!), but it's a type of optimization for the cpu. Because of its nature of doing nothing, I can't just run the code and confirm it's correct because it didn't crash; I need to look at what opcodes the align statements actually generated, and listing made that easy.
source example:
_('procf _main stdcall, pAdd')
_('    mov eax, [pAdd]')
_('    mov dword[eax], _crc32')
_('    mov dword[eax+4], _strlen')
_('    mov dword[eax+8], _strcmp')
_('    mov dword[eax+12], _strstr')
_('    ret')
_('endp')

_('EQUAL_ORDERED = 1100b')
_('EQUAL_ANY = 0000b')
_('EQUAL_EACH = 1000b')
_('RANGES = 0100b')
_('NEGATIVE_POLARITY = 010000b')
_('BYTE_MASK = 1000000b')

_('align 8')
_('proc _crc32 uses ebx ecx esi, pStr')
_('    mov esi, [pStr]')
_('    xor ebx, ebx')
_('    not ebx')
_('    stdcall _strlen, esi')
_('    .while eax >= 4')
_('        crc32 ebx, dword[esi]')
_('        add esi, 4')
_('        sub eax, 4')
_('    .endw')
_('    .while eax')
_('        crc32 ebx, byte[esi]')
_('        inc esi')
_('        dec eax')
_('    .endw')
_('    not ebx')
_('    mov eax, ebx')
_('    ret')
_('endp')

_('align 8, 0xCC') ; fill with 0xCC instead of NOP
_('proc _strlen uses ecx edx, pStr')
_('    mov ecx, [pStr]')
_('    mov edx, ecx')
_('    mov eax, -16')
_('    pxor xmm0, xmm0')
_('    .repeat')
_('        add eax, 16')
_('        pcmpistri xmm0, dqword[edx + eax], 1000b') ; EQUAL_EACH
_('    .until ZERO?') ; repeat loop until Zero flag (ZF) is set
_('    add eax, ecx') ; add remainder
_('    ret')
_('endp')

_('align 8')
_('proc _strcmp uses ebx ecx edx, pStr1, pStr2') ; ecx = string1, edx = string2
_('    mov ecx, [pStr1]') ; ecx = start address of str1
_('    mov edx, [pStr2]') ; edx = start address of str2
_('    mov eax, ecx') ; eax = start address of str1
_('    sub eax, edx') ; eax = ecx - edx | eax = start address of str1 - start address of str2
_('    sub edx, 16')
_('    mov ebx, -16')
_('    STRCMP_LOOP:')
_('    add ebx, 16')
_('    add edx, 16')
_('    movdqu xmm0, dqword[edx]')
_('    pcmpistri xmm0, dqword[edx + eax], EQUAL_EACH + NEGATIVE_POLARITY') ; find the first *different* bytes, hence negative polarity
_('    ja STRCMP_LOOP') ;a CF or ZF = 0 above
_('    jc STRCMP_DIFF') ;c cf=1 carry
_('    xor eax, eax') ; the strings are equal
_('    ret')
_('    STRCMP_DIFF:')
_('    mov eax, ebx')
_('    add eax, ecx')
_('    ret')
_('endp')

_('align 8')
_('proc _strstr uses ecx edx edi esi, sStrToSearch, sStrToFind')
_('    mov ecx, [sStrToSearch]')
_('    mov edx, [sStrToFind]')
_('    pxor xmm2, xmm2')
_('    movdqu xmm2, dqword[edx]') ; load the first 16 bytes of needle
_('    pxor xmm3, xmm3')
_('    lea eax, [ecx - 16]')
_('    STRSTR_MAIN_LOOP:') ; find the first possible match of 16-byte fragment in haystack
_(']')
_('    pcmpistrm xmm3, xmm1, EQUAL_EACH + NEGATIVE_POLARITY + BYTE_MASK') ; mask out invalid bytes in the haystack
_('    movdqu xmm4, dqword[esi]')
_('    pand xmm4, xmm0')
_('    pcmpistri xmm1, xmm4, EQUAL_EACH + NEGATIVE_POLARITY')
_('    ja @b')
_('    jnc STRSTR_FOUND')
_('    sub eax, 15') ;continue searching from the next byte
_('    jmp STRSTR_MAIN_LOOP')
_('    STRSTR_NOT_FOUND:')
_('    xor eax, eax')
_('    ret')
_('    STRSTR_FOUND:')
_('    sub eax, [sStrToSearch]')
_('    inc eax')
_('    ret')
_('endp')
00000000: use32
00000000: 55 89 E5 procf _main stdcall, pAdd
00000003: 8B 45 08 mov eax, [pAdd]
00000006: C7 00 28 00 00 00 mov dword[eax], _crc32
0000000C: C7 40 04 68 00 00 00 mov dword[eax+4], _strlen
00000013: C7 40 08 90 00 00 00 mov dword[eax+8], _strcmp
0000001A: C7 40 0C D8 00 00 00 mov dword[eax+12], _strstr
00000021: C9 C2 04 00 ret
00000025: localbytes = current
00000025: purge ret?,locals?,endl?,proclocal?
00000025: end namespace
00000025: purge endp?
00000025: EQUAL_ORDERED = 1100b
00000025: EQUAL_ANY = 0000b
00000025: EQUAL_EACH = 1000b
00000025: RANGES = 0100b
00000025: NEGATIVE_POLARITY = 010000b
00000025: BYTE_MASK = 1000000b
00000025: 0F 1F 00 align 8
00000028: 55 89 E5 53 51 56 proc _crc32 uses ebx ecx esi, pStr
0000002E: 8B 75 08 mov esi, [pStr]
00000031: 31 DB xor ebx, ebx
00000033: F7 D3 not ebx
00000035: 56 E8 2D 00 00 00 stdcall _strlen, esi
0000003B: 83 F8 04 72 0D .while eax >= 4
00000040: F2 0F 38 F1 1E crc32 ebx, dword[esi]
00000045: 83 C6 04 add esi, 4
00000048: 83 E8 04 sub eax, 4
0000004B: EB EE .endw
0000004D: 85 C0 74 09 .while eax
00000051: F2 0F 38 F0 1E crc32 ebx, byte[esi]
00000056: 46 inc esi
00000057: 48 dec eax
00000058: EB F3 .endw
0000005A: F7 D3 not ebx
0000005C: 89 D8 mov eax, ebx
0000005E: 5E 59 5B C9 C2 04 00 ret
00000065: localbytes = current
00000065: purge ret?,locals?,endl?,proclocal?
00000065: end namespace
00000065: purge endp?
00000065: CC CC CC align 8, 0xCC
00000068: 55 89 E5 51 52 proc _strlen uses ecx edx, pStr
0000006D: 8B 4D 08 mov ecx, [pStr]
00000070: 89 CA mov edx, ecx
00000072: B8 F0 FF FF FF mov eax, -16
00000077: 66 0F EF C0 pxor xmm0, xmm0
0000007B: .repeat
0000007B: 83 C0 10 add eax, 16
0000007E: 66 0F 3A 63 04 02 08 pcmpistri xmm0, dqword[edx + eax], 1000b
00000085: 75 F4 .until ZERO?
00000087: 01 C8 add eax, ecx
00000089: 5A 59 C9 C2 04 00 ret
0000008F: localbytes = current
0000008F: purge ret?,locals?,endl?,proclocal?
0000008F: end namespace
0000008F: purge endp?
0000008F: 90 align 8
00000090: 55 89 E5 53 51 52 proc _strcmp uses ebx ecx edx, pStr1, pStr2
00000096: 8B 4D 08 mov ecx, [pStr1]
00000099: 8B 55 0C mov edx, [pStr2]
0000009C: 89 C8 mov eax, ecx
0000009E: 29 D0 sub eax, edx
000000A0: 83 EA 10 sub edx, 16
000000A3: BB F0 FF FF FF mov ebx, -16
000000A8: STRCMP_LOOP:
000000A8: 83 C3 10 add ebx, 16
000000AB: 83 C2 10 add edx, 16
000000AE: F3 0F 6F 02 movdqu xmm0, dqword[edx]
000000B2: 66 0F 3A 63 04 02 18 pcmpistri xmm0, dqword[edx + eax], EQUAL_EACH + NEGATIVE_POLARITY
000000B9: 77 ED ja STRCMP_LOOP
000000BB: 72 09 jc STRCMP_DIFF
000000BD: 31 C0 xor eax, eax
000000BF: 5A 59 5B C9 C2 08 00 ret
000000C6: STRCMP_DIFF:
000000C6: 89 D8 mov eax, ebx
000000C8: 01 C8 add eax, ecx
000000CA: 5A 59 5B C9 C2 08 00 ret
000000D1: localbytes = current
000000D1: purge ret?,locals?,endl?,proclocal?
000000D1: end namespace
000000D1: purge endp?
000000D1: 0F 1F 80 00 00 00 00 align 8
000000D8: 55 89 E5 51 52 57 56 proc _strstr uses ecx edx edi esi, sStrToSearch, sStrToFind
000000DF: 8B 4D 08 mov ecx, [sStrToSearch]
000000E2: 8B 55 0C mov edx, [sStrToFind]
000000E5: 66 0F EF D2 pxor xmm2, xmm2
000000E9: F3 0F 6F 12 movdqu xmm2, dqword[edx]
000000ED: 66 0F EF DB pxor xmm3, xmm3
000000F1: 8D 41 F0 lea eax, [ecx - 16]
000000F4: STRSTR_MAIN_LOOP:
000000F4: 83 C0 10 add eax, 16
000000F7: 66 0F 3A 63 10 0C pcmpistri xmm2, dqword[eax], EQUAL_ORDERED
000000FD: 77 F5 ja STRSTR_MAIN_LOOP
000000FF: 73 30 jnc STRSTR_NOT_FOUND
00000101: 01 C8 add eax, ecx
00000103: 89 D7 mov edi, edx
00000105: 89 C6 mov esi, eax
00000107: 29 F7 sub edi, esi
00000109: 83 EE 10 sub esi, 16
0000010C: @@:
0000010C: 83 C6 10 add esi, 16
0000010F: F3 0F 6F 0C 3E movdqu xmm1, dqword[esi + edi]
00000114: 66 0F 3A 62 D9 58 pcmpistrm xmm3, xmm1, EQUAL_EACH + NEGATIVE_POLARITY + BYTE_MASK
0000011A: F3 0F 6F 26 movdqu xmm4, dqword[esi]
0000011E: 66 0F DB E0 pand xmm4, xmm0
00000122: 66 0F 3A 63 CC 18 pcmpistri xmm1, xmm4, EQUAL_EACH + NEGATIVE_POLARITY
00000128: 77 E2 ja @b
0000012A: 73 0F jnc STRSTR_FOUND
0000012C: 83 E8 0F sub eax, 15
0000012F: EB C3 jmp STRSTR_MAIN_LOOP
00000131: STRSTR_NOT_FOUND:
00000131: 31 C0 xor eax, eax
00000133: 5E 5F 5A 59 C9 C2 08 00 ret
0000013B: STRSTR_FOUND:
0000013B: 2B 45 08 sub eax, [sStrToSearch]
0000013E: 40 inc eax
0000013F: 5E 5F 5A 59 C9 C2 08 00 ret
00000147: localbytes = current
00000147: purge ret?,locals?,endl?,proclocal?
00000147: end namespace
00000147: purge endp?
procf and forcea macros
In my previous post I spoke about the force macro and why it is needed. I added two more macros (procf and forcea) that combine the two and also set align.assume to the same function. As clarified in the previous post, you should only have to use these macros for the first procedure being defined (since nothing calls that procedure). And since it's the first function, it sits at the starting memory address, which is a good place to initially set the align.assume address to.
Attached package should include everything needed and has all the previous examples I posted updated. Let me know if I missed something or you have any issues running the examples and thanks for looking
Update 04/19/2020:
A couple new macros added. I also got rid of the IncludeEx folder and just made one include folder that has the default include folder within it and all others top level.
dllstruct macro does the same thing as _fasmg_Au3StructDef(). You can use either one; they both use the macro.
getmempos macro does the delta trick I showed below using anonymous labels.
stdcallw and invokew macros will push any parameters that are raw (quoted) strings as wide characters
Ifex include file gives .if .ifelse .while .until the ability to use stdcall/invoke/etc inline. So if you had a function called "_add" you could do .if stdcall(_add,5,5) = 10. All this basically does in the background is perform the stdcall and then replaces the comparison with eax and passes it on to the original default macros, but is super helpful for cleaning up code and took a ton of time learning the macro language to get in place.
Update 05/19/2020:
Added fastcallw, which does the same as stdcallw but for fastcall
Added fastcall support for Ifex
Corrected missing include file include\macro\if.inc within win64au3.inc
fasmg 5-19-2020.zip
Previous versions:
- By Beege
Here is an old goodie from MS demonstrating concepts behind multithreading and using mutexes to control sharing the screen. It's unfortunately just a console application, so you have to press compile (F7) to run (which can get annoying if you want to play with the code), but still pretty cool :). Each little question-mark box (it could be any character; it used to be a smiley face in Win 7) is its own thread keeping track of its own coordinates. Each thread shares the screen mutex by kinda waiting in line for ownership of it. When a thread gains control it updates the screen, then releases the mutex for the next thread.
First I wrote it in pure AutoIt to confirm it all works as expected. The console functions actually threw me for a loop: they actually want the whole value of the COORD structs and not a ptr to them, so that "struct" without a * was a little uncommon. The au3 code below is just the lonely cell bouncing around.
Func _BounceAU3()
    ;set a random starting id. we use this to rotate the colors
    Local $iMyID = Random(1, 15, 1)
    Local $tMyCell = DllStructCreate('char mc'), $tOldCell = DllStructCreate('char oc')
    Local $tMyAttrib = DllStructCreate('word ma'), $tOldAttrib = DllStructCreate('word oa')
    Local $tCoords = DllStructCreate($tagCOORD), $tOld = DllStructCreate($tagCOORD)
    Local $tDelta = DllStructCreate($tagCOORD)

    ;Random start and delta values
    $tCoords.X = Random(0, 119, 1)
    $tCoords.Y = Random(0, 29, 1)
    $tDelta.X = Random(-3, 3, 1)
    $tDelta.Y = Random(-3, 3, 1)

    ;set character/cell attributes
    $tMyCell.mc = $iMyID > 16 ? 0x01 : 0x02 ; doesnt seem to make a differnce in windows 10
    $tMyAttrib.ma = BitAND($iMyID, 0x0F) ; Set the character color

    Do
        ;check the last position values
        DllCall('kernel32.dll', "bool", "ReadConsoleOutputCharacter", "handle", $g_hStdHandle, "struct*", $tOldCell, "dword", 1, "struct", $tOld, "dword*", 0)
        DllCall('kernel32.dll', "bool", "ReadConsoleOutputAttribute", "handle", $g_hStdHandle, "struct*", $tOldAttrib, "dword", 1, "struct", $tOld, "dword*", 0)

        ;if the last postion was this cell, blank/empty the cell. (Otherwise its been taken over by another thread)
        If ($tOldCell.oc = $tMyCell.mc) And ($tOldAttrib.oa = $tMyAttrib.ma) Then
            DllCall('kernel32.dll', "bool", "WriteConsoleOutputCharacter", "handle", $g_hStdHandle, "byte*", 0x20, "dword", 1, "struct", $tOld, "dword*", 0)
        EndIf

        ;write the current cell
        DllCall('kernel32.dll', "bool", "WriteConsoleOutputCharacter", "handle", $g_hStdHandle, "struct*", $tMyCell, "dword", 1, "struct", $tCoords, "dword*", 0)
        DllCall('kernel32.dll', "bool", "WriteConsoleOutputAttribute", "handle", $g_hStdHandle, "struct*", $tMyAttrib, "dword", 1, "struct", $tCoords, "dword*", 0)

        ;update coords
        $tOld.X = $tCoords.X
        $tOld.Y = $tCoords.Y
        $tCoords.X += $tDelta.X
        $tCoords.Y += $tDelta.Y

        ;change directions if we are out of bounds
        If $tCoords.X < 0 Or $tCoords.X >= 120 Then $tDelta.X *= -1
        If $tCoords.Y < 0 Or $tCoords.Y >= 30 Then $tDelta.Y *= -1

        Sleep(75)
    Until GUIGetMsg() = -3
EndFunc ;==>_BounceAU3
From there, that function was converted into assembly so we can call it as a thread. The only real differences are the extra parameters we're passing as a structure, and that I generate the random starting values in AutoIt instead, then pass them to the function. Here is what the main assembly function looks like. I added comments for each piece of code from au3 that we are translating:
_('procf _Bounce uses ebx, pParms')
;
; create the local variables
_('	locals')
_('	BlankCell db 32') ; this first group covers the variables from the original script
_('	MyCell db ?')
_('	OldCell db ?')
_('	MyAtt dw ?')
_('	OldAtt dw ?')
_('	tCoords COORD')
_('	tDelta COORD')
_('	tOld COORD')
_('	bytesread dw ?')
;
_('	iMyID dw ?') ; this group of local vars cover holding all the other paramerters we are passing in tParms
_('	g_hScreenMutex dd ?')
_('	g_hRunMutex dd ?')
_('	g_hStdHandle dd ?')
_('	pfWaitForSingleObject dd ?')
_('	pfReleaseMutex dd ?')
_('	pfReadChar dd ?')
_('	pfReadAttr dd ?')
_('	pfWriteChar dd ?')
_('	pfWriteAttr dd ?')
_('	endl')
;
;all of these push/pops are to transfer the rest of variables from tParms structure to the local variables we created
;first mov the structure address into ebx
_('	mov ebx, [pParms]')
;
; now push and pop the values into the variables
; use _winapi_displaystruct() to view all the offsets being used in the [ebx+offset] lines
_('	pushw [ebx]')
_('	popw word[tCoords+COORD.X]')
_('	pushw word[ebx+2]')
_('	popw word[tCoords+COORD.Y]')
_('	pushw word[ebx+4]')
_('	popw word[tDelta+COORD.X]')
_('	pushw word[ebx+6]')
_('	popw word[tDelta+COORD.Y]')
_('	pushw word[ebx+8]')
_('	popw word[iMyID]')
_('	push dword[ebx+12]')
_('	pop dword[g_hScreenMutex]')
_('	push dword[ebx+16]')
_('	pop dword[g_hRunMutex]')
_('	push dword[ebx+20]')
_('	pop dword[g_hStdHandle]')
_('	push dword[ebx+24]')
_('	pop dword[pfWaitForSingleObject]')
_('	push dword[ebx+28]')
_('	pop dword[pfReleaseMutex]')
_('	push dword[ebx+32]')
_('	pop dword[pfReadChar]')
_('	push dword[ebx+36]')
_('	pop dword[pfReadAttr]')
_('	push dword[ebx+40]')
_('	pop dword[pfWriteChar]')
_('	push dword[ebx+44]')
_('	pop dword[pfWriteAttr]')
_('.if word[iMyID] > 16') ; $tMyCell.mc = $iMyID > 16 ? 0x01 : 0x02 (no difference in windows 10)
_('	mov word[MyCell], 1')
_('.else')
_('	mov word[MyCell], 2')
_('.endif')
;
_('pushw word[iMyID]') ; $tMyAttrib.ma = BitAND($iMyID, 0x0F)
_('popw word[MyAtt]')
_('and word[MyAtt], 15')
;
_('.repeat') ; do
;
; Wait infinetly for the screen mutex to be available, then take ownership
_('	invoke pfWaitForSingleObject, [g_hScreenMutex], -1')
;
; DllCall('kernel32.dll', "bool", "ReadConsoleOutputCharacter", "handle", $g_hStdHandle, "struct*", $tOldCell, "dword", 1, "struct", $tOld, "dword*", 0)
_('	invoke pfReadChar, [g_hStdHandle], addr OldCell, 1, dword[tOld], addr bytesread')
_('	invoke pfReadAttr, [g_hStdHandle], addr OldAtt, 1, dword[tOld], addr bytesread')
;
_('	mov al, byte[MyCell]') ; If ($tOldCell.oc = $tMyCell.mc) And ($tOldAttrib.oa = $tMyAttrib.ma) Then
_('	mov cl, byte[MyAtt]')
_('	.if (byte[OldCell] = al) & (byte[OldAtt] = cl)')
_('	invoke pfWriteChar, [g_hStdHandle], addr BlankCell, 1, dword[tOld], addr bytesread')
_('	.endif')
;
; DllCall('kernel32.dll', "bool", "WriteConsoleOutputCharacter", "handle", $g_hStdHandle, "struct*", $tMyCell, "dword", 1, "struct", $tCoords, "dword*", 0)
_('	invoke pfWriteChar, [g_hStdHandle], addr MyCell, 1, dword[tCoords], addr bytesread')
_('	invoke pfWriteAttr, [g_hStdHandle], addr MyAtt, 1, dword[tCoords], addr bytesread')
;
_('	pushw word[tCoords+COORD.X]') ; $tOld.X = $tCoords.X
_('	popw word[tOld+COORD.X]')
_('	pushw word[tCoords+COORD.Y]') ; $tOld.Y = $tCoords.Y
_('	popw word[tOld+COORD.Y]')
_('	mov ax, word[tDelta+COORD.X]') ; $tCoords.X += $tDelta.X
_('	add word[tCoords+COORD.X], ax')
_('	mov ax, word[tDelta+COORD.Y]') ; $tCoords.Y += $tDelta.Y
_('	add word[tCoords+COORD.Y], ax')
;
; If $tCoords.X < 0 Or $tCoords.X >= 120 Then $tDelta.X *= -1
_('	.if (word[tCoords+COORD.X] < 0 | word[tCoords+COORD.X] >= 120)')
_('	neg word[tDelta+COORD.X]')
_('	.endif')
_('	.if (word[tCoords+COORD.Y] < 0 | word[tCoords+COORD.Y] >= 30)')
_('	neg word[tDelta+COORD.Y]')
_('	.endif')
;
; release the screen mutex
_('	invoke pfReleaseMutex, [g_hScreenMutex]')
;
; wait 100 ms for the Runmutex to be available.
_('	invoke pfWaitForSingleObject, [g_hRunMutex], 100')
;
; a return of 258 means it timed out waiting and that the run mutex (owned by the main autoit thread) is still alive.
; when the run mutex handle gets closed this will return a fail or abandonded.
_('.until eax <> 258')
;
;exit thread
_('	ret')
_('endp')
And finally, how we call that assembled function from AutoIt to create the threads:
;create mutex for sharing the screen thats not owned by main thread
Global $g_hScreenMutex = _WinAPI_CreateMutex('', False)

;create mutex that tells the threads to exit that is owned by main thread
Global $g_hRunMutex = _WinAPI_CreateMutex('', True)
...
...
;assemble function
Local $tBinExec = _fasmg_Assemble($g_sFasm, False)
;Local $tBinExec = _fasmg_CompileAu3($g_sFasm)
If @error Then Exit (ConsoleWrite($tBinExec & @CRLF))

;this struct is for all the values Im passing to the thread.
;this will hold our random start x,y,delta values, handles, and pointers to functions called within the thread
$tParms = DllStructCreate('short start[4];word myid;dword hands[3];ptr funcs[6]')
$tParms.start(1) = Random(0, 119, 1)
$tParms.start(2) = Random(0, 29, 1)
$tParms.start(3) = Random(-3, 3, 1)
$tParms.start(4) = Random(-3, 3, 1)
$tParms.myid = 1
$tParms.hands(1) = $g_hScreenMutex
$tParms.hands(2) = $g_hRunMutex
$tParms.hands(3) = $g_hStdHandle
$tParms.funcs(1) = _GPA('kernel32.dll', 'WaitForSingleObject')
$tParms.funcs(2) = _GPA('kernel32.dll', 'ReleaseMutex')
$tParms.funcs(3) = _GPA('kernel32.dll', 'ReadConsoleOutputCharacterA')
$tParms.funcs(4) = _GPA('kernel32.dll', 'ReadConsoleOutputAttribute')
$tParms.funcs(5) = _GPA('kernel32.dll', 'WriteConsoleOutputCharacterA')
$tParms.funcs(6) = _GPA('kernel32.dll', 'WriteConsoleOutputAttribute')

;create 128 threads with different start values and colors for each one
For $i = 1 To 128
    $tParms.myid = $i
    $tParms.start(1) = Random(0, 119, 1)
    $tParms.start(2) = Random(0, 29, 1)
    $tParms.start(3) = Random(-3, 3, 1)
    $tParms.start(4) = Random(-3, 3, 1)
    If $tParms.start(3) + $tParms.start(4) = 0 Then $tParms.start(3) = (Mod(@MSEC, 2) ? 1 : -1) ; adjusting non-moving (0,0) delta values..
    DllCall("kernel32.dll", "hwnd", "CreateThread", "ptr", 0, "dword", 0, "struct*", $tBinExec, "struct*", $tParms, "dword", 0, "dword*", 0)
    Sleep(50)
Next

MsgBox(262144, '', '128 Threads Created')

;Close the run mutex handle. This will cause all the threads to exit
_WinAPI_CloseHandle($g_hRunMutex)
_WinAPI_CloseHandle($g_hScreenMutex)

MsgBox(262144, '', 'Mutex handles closed. All Threads should have exited')
Exit

The attachment below contains both the compiled and source assembly. To play with the assembly source you need to add the fasmg udf in my sig. The compiled version should not need anything. Let me know if you have any issues.
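If the Win32/assembly details obscure the idea, the same coordination pattern can be sketched in a high-level language: many worker threads take turns holding a shared "screen" lock, each blanking its old cell and drawing a new one while it has ownership. This Python sketch is an illustrative analogy only (not part of the attached package); the names and the fixed step count are mine:

```python
import threading

screen_lock = threading.Lock()   # plays the role of the screen mutex
screen = {}                      # shared "console": (x, y) -> owning thread id
counts = {}                      # steps completed per thread

def bounce(thread_id, steps=200):
    x, y, dx, dy = thread_id, thread_id % 30, 1, 1
    for _ in range(steps):
        with screen_lock:                  # wait in line for ownership of the screen
            screen.pop((x, y), None)       # blank our old cell
            x, y = (x + dx) % 120, (y + dy) % 30
            screen[(x, y)] = thread_id     # draw the new cell
            counts[thread_id] = counts.get(thread_id, 0) + 1

threads = [threading.Thread(target=bounce, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(counts.values()))  # 8 threads * 200 steps = 1600
```

The original goes one step further by also using a second "run" mutex owned by the main script: each worker waits on it with a timeout every iteration and exits once the handle is closed, which is how all 128 threads get shut down at once.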
Special thanks to @trancexx for teaching me this with her clock example
Bounce.zip
- By Beege!
Update 8/5/2019:
Rewrote for fasmg. Added full source with everything needed to modify
BmpSearch_8-5-2019.7z
BmpSearch.zip
How to Use Gatsby with a Headless CMS
Posted by Manuel Wieser on February 12, 2019
Learn how to build a Gatsby blog using ButterCMS. The code examples in this article let you combine Gatsby and ButterCMS in just a few minutes, no matter if you are a beginner or an expert.
Why Gatsby and ButterCMS?
Gatsby is a static site generator based on React and GraphQL. ButterCMS is a headless CMS and blogging platform. What does that mean and why should you use them?
A static site (HTML, CSS, and JavaScript) is fast, secure and flexible. There is no database or server-side code that attackers can exploit. A static site generator pulls data from APIs, databases or files and generates pages using templates.
As a developer, you probably want to write your content as Markdown files. However, if your static site's content has to be managed by non-developers, they'll prefer a CMS. A headless CMS offers a read-only API, that can be read by your static site generator. (Learn more about the benefits of a headless CMS and why more WordPress users are using it as a headless option)
Gatsby combines React, GraphQL, webpack and other front-end technologies to provide a great developer experience. It's a great choice if you are already familiar with React and JSX. ButterCMS allows you to add a CMS to your Gatsby sites without having to worry about hosting, security, or performance. You can focus on implementing your front-end.
Now that we know the benefits of Gatsby and ButterCMS, let's get started!
Setup
First, create a ButterCMS account. You'll be provided with an API token and an example blog post. You'll need both for this tutorial.
Next, install the Gatsby CLI.
npm install --global gatsby-cli
You can then create a new site from the official starting template, change into its directory, and start the development server:

gatsby new gatsby-site
cd gatsby-site
gatsby develop

This way, you don't have to refresh your page, as Gatsby injects new versions of the files that you edited at runtime.
When building with Gatsby, you access your data via the query language GraphQL. There are many official and community plugins that fetch data from remote or local locations and make it available via GraphQL. These plugins are called "source plugins" and there is already a Gatsby Source Plugin for ButterCMS you can install.
npm install --save gatsby-source-buttercms
Add the plugin to gatsby-config.js:
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-buttercms',
      options: {
        authToken: 'your_api_token'
      }
    }
  ]
}
After this change, you might have to restart the hot-reloading server (gatsby develop) before you can test the GraphQL fields and types the plugin is providing.
Head to GraphiQL, the in-browser IDE for exploring GraphQL, and you will find the new butterPost and allButterPost fields. For example:
{
  allButterPost {
    edges {
      node {
        title
        body
      }
    }
  }
}
The plugin maps all JSON fields documented in the Butter CMS API Reference to GraphQL fields.
Add a list of your blog posts
Your ButterCMS data can now be queried in any Gatsby page or template. You can start by creating a src/pages/blog.js page and adding a list of your blog posts.
import React from 'react'
import { graphql, Link } from 'gatsby'
import Layout from '../components/layout'

const BlogPage = ({ data }) => {
  const posts = data.allButterPost.edges.map(({ node }) => {
    return <Link key={node.id} to={`/blog/${node.slug}`}>{node.title}</Link>
  })
  return <Layout>{posts}</Layout>
}

export default BlogPage

export const pageQuery = graphql`
  query {
    allButterPost {
      edges {
        node {
          id
          slug
          title
        }
      }
    }
  }
`
The page is now available at /blog; you can link to it from your front page, src/pages/index.js.
Add pages for your blog posts
Generating a page for each of your posts requires you to create a template, src/templates/post.js, and register it in gatsby-node.js. First, the template:
import React from 'react'
import { graphql } from 'gatsby'
import Layout from '../components/layout'

export default function Template({ data }) {
  const { title, date, body } = data.butterPost
  return (
    <Layout>
      <h1>{title}</h1>
      <h2>{date}</h2>
      <div dangerouslySetInnerHTML={{ __html: body }} />
    </Layout>
  )
}

export const pageQuery = graphql`
  query($slug: String!) {
    butterPost(slug: { eq: $slug }) {
      title
      date
      body
    }
  }
`
Then register the template in gatsby-node.js:
const path = require('path')

exports.createPages = ({ actions, graphql }) => {
  const { createPage } = actions
  const template = path.resolve(`src/templates/post.js`)
  return graphql(`
    {
      allButterPost {
        edges {
          node {
            slug
          }
        }
      }
    }
  `).then(result => {
    if (result.errors) {
      return Promise.reject(result.errors)
    }
    result.data.allButterPost.edges.forEach(({ node }) => {
      console.log(node);
      createPage({
        path: `/blog/${node.slug}`,
        component: template,
        context: {
          slug: node.slug
        }
      })
    })
  })
}
Categories, Tags, and Authors
Use the filter argument to query posts for a given category, tag, or author. For example, to fetch all posts tagged example-tag:
{
  allButterPost(filter: { tags: { elemMatch: { slug: { in: "example-tag" } } } }) {
    edges {
      node {
        id
        title
      }
    }
  }
}
Pages
If you want to add ButterCMS pages to your blog, add a list of page slugs to gatsby-config.js. The plugin then provides butterPage and allButterPage GraphQL fields.
ButterCMS automatically generates a slug when you create a new page: a page titled "Example Page" gets the slug example-page. Add that slug to gatsby-config.js:
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-buttercms',
      options: {
        authToken: 'your_api_token',
        pages: [
          'homepage'
        ]
      }
    }
  ]
}
You can then query the page by its slug:

{
  allButterPage(filter: {slug: {eq: "homepage"}}) {
    edges {
      node {
        slug
      }
    }
  }
}
You can also specify a page type in your gatsby-config.js:
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-buttercms',
      options: {
        authToken: 'your_api_token',
        pageTypes: [
          'products'
        ]
      }
    }
  ]
}
To get all pages for a given type you can then use the following GraphQL query:
{
  allButterPage(filter: {page_type: {eq: "products"}}) {
    edges {
      node {
        slug
      }
    }
  }
}
Conclusion
We have learned how to use a Gatsby source plugin to convert headless CMS data to Gatsby nodes, how to query those nodes with GraphQL, and how to create pages. This should give you a head start when building a Gatsby blog with ButterCMS.
Where to go from here? You could use what you've learned to paginate your post and page lists, using the limit and skip arguments of allButterPost and allButterPage.
If you need help after reading this, contact ButterCMS's support via email.
The Gatsby Source Plugin for ButterCMS is an open-source community plugin for Gatsby. If you want to contribute to the source plugin, open a GitHub pull request. If you have found a bug, open a GitHub issue.
Manuel is Lead Web Developer at karriere.at, lecturer at the University of Applied Sciences Upper Austria and writes about Front-End Development, Games and Digital Art on his personal blog manu.ninja. | https://buttercms.com/blog/how-to-use-gatsby-with-a-headless-cms | CC-MAIN-2021-04 | refinedweb | 1,069 | 56.25 |
The first public beta of tinylog, a minimalist logger for Java, has released as version 0.6. The aim of the tinylog project is to simplify logging in Java, while limiting the consumption of resources to an absolute minimum.
The idea for tinylog arose from the experience of using log4j in several customer projects. Log4j is a very powerful logging framework that can be configured very precisely. However, in practice, only a small subset of these configuration options is usually used. Typically, in the server sector, where multiple threads work a long time in parallel and under a load, the use of log4j for detailed logging of events can lead to performance problems.
Tinylog follows the KISS principle ("Keep it simple, stupid") with the objective of avoiding unnecessary complexity. Opposite to log4j, tinylog has only one single logger that is static. So the usual creation of a logger instance for each class by Logger.getLogger(MyClass.class) is not necessary. In order to save time, the logger creation code is copied from another class, but this often leads to trouble — the software engineer can forget to change the class parameter, resulting in the wrong log entries.
import org.pmw.tinylog.Logger;

public class Application {
    public static void main(final String[] args) {
        Logger.info("My log entry");
    }
}
Generated output:
2012-08-09 06:31:25 [main] Application.main() INFO: My log entry
To verify the intended performance, the project includes a benchmark that measures the run time for creating 100,000 log entries in a single thread, as well as in ten parallel threads. With ten parallel logging threads, tinylog is 3.8 times faster than log4j; in a single thread, it is 1.7 times faster. This is achieved, among other things, by precompiled output patterns and by synchronizing resources in a way that does not block the whole logger.
Tinylog supports five different logging levels (from trace to error). The output of log entries can be configured depending on the logging level and the package that contains the class. Log entries can either be printed on the console or written in log files. Thereby the output format can be configured by patterns. For example, the location in the source code (class, method and line), where the log entry was created, as well as the date and time, can be automatically added to the output.
The logging methods of tinylog correspond to MessageFormat.format() of the JDK. Therefore log entries can be simply formatted and the text message will only be generated if the log entry is really output.
The JAR file of the logger has a size of only 14 KB. You can download it from the project's website. Tinylog is an open-source project and is published under the Apache License 2.0.
Glossary
{ Template} is an informal term meaning a template definition, a template instance or a template class. A { template definition} is what the human { template maintainer} writes: a string consisting of text, placeholders and directives. { Placeholders} are variables that will be looked up when the template is filled. { Directives} are commands to be executed when the template is filled, or instructions to the Cheetah compiler. The conventional suffix for a file containing a template definition is { .tmpl}.
There are two things you can do with a template: compile it or fill it. { Filling} is the reason you have a template in the first place: to get a finished string out of it. Compiling is a necessary prerequisite: the { Cheetah compiler} takes a template definition and produces Python code to create the finished string. Cheetah provides several ways to compile and fill templates, either as one step or two.
Cheetah’s compiler produces a subclass of {Cheetah.Template} specific to that template definition; this is called the { generated class}. A { template instance} is an instance of a generated class.
If the user calls the {Template} constructor directly (rather than a subclass constructor), s/he will get what appears to be an instance of {Template} but is actually a subclass created on-the-fly.
The user can make the subclass explicit by using the “cheetah compile” command to write the template class to a Python module. Such a module is called a { .py template module}.
The { Template Definition Language} - or the “Cheetah language” for short - is the syntax rules governing placeholders and directives. These are discussed in sections language and following in this Guide.
To fill a template, you call its { main method}. This is normally {.respond()}, but it may be something else, and you can use the {#implements} directive to choose the method name. (Section inheritanceEtc.implements.)
A { template-servlet} is a .py template module in a Webware servlet directory. Such templates can be filled directly through the web by requesting the URL. “Template-servlet” can also refer to the instance being filled by a particular web request. If a Webware servlet that is not a template-servlet invokes a template, that template is not a template-servlet either.
A { placeholder tag} is the substring in the template definition that is the placeholder, including the start and end delimeters (if there is an end delimeter). The { placeholder name} is the same but without the delimeters.
Placeholders consist of one or more { identifiers} separated by periods (e.g., {a.b}). Each identifier must follow the same rules as Python identifiers; that is, a letter or underscore followed by one or more letters, digits or underscores. (This is the regular expression [A-Za-z_][A-Za-z0-9_]*.)
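The identifier rule above can be checked directly with the regular expression the glossary gives. A quick Python sketch (illustrative only, not Cheetah code; the function names are mine):

```python
import re

# The identifier rule quoted in the glossary: letter or underscore,
# then letters, digits or underscores.
IDENT = re.compile(r'[A-Za-z_][A-Za-z0-9_]*\Z')

def is_identifier(name):
    return IDENT.match(name) is not None

def is_placeholder_name(name):
    """One or more identifiers separated by periods, e.g. 'a.b'."""
    return all(is_identifier(part) for part in name.split('.'))

print(is_placeholder_name('a.b'))         # True
print(is_placeholder_name('_row.count'))  # True
print(is_placeholder_name('9lives'))      # False: starts with a digit
print(is_placeholder_name('a..b'))        # False: empty identifier
```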
The first (or only) identifier of a placeholder name represents a { variable} to be looked up. Cheetah looks up variables in various { namespaces}: the searchList, local variables, and certain other places. The searchList is a list of objects ({ containers}) with attributes and/or keys: each container is a namespace. Every template instance has exactly one searchList. Identifiers after the first are looked up only in the parent object. The final value after all lookups have been performed is the { placeholder value}.
Placeholders may occur in three positions: top-level, expression and LVALUE. { Top-level} placeholders are those in ordinary text (“top-level text”). { Expression} placeholders are those in Python expressions. { LVALUE} placeholders are those naming a variable to receive a value. (LVALUE is computerese for “the left side of the equal sign”.) Section language.placeholders.positions explains the differences between these three positions.
The routine that does the placeholder lookups is called the { NameMapper}. Cheetah’s NameMapper supports universal dotted notation and autocalling. { Universal dotted notation} means that keys may be written as if they were attributes: {a.b} instead of {a[‘b’]}. { Autocalling} means that if any identifier’s value is found to be a function or method, Cheetah will call it without arguments if there is no () following. More about the NameMapper is in section language.namemapper.
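A rough Python sketch of what such a lookup does - trying each container in the searchList in turn, treating keys and attributes interchangeably, and autocalling any callable value. This is a simplified illustration of the behavior described above, not Cheetah's actual NameMapper implementation:

```python
class NotFound(Exception):
    pass

def resolve(obj, ident):
    """Look up one identifier as a dict key or an attribute; autocall callables."""
    if isinstance(obj, dict) and ident in obj:
        value = obj[ident]
    elif hasattr(obj, ident):
        value = getattr(obj, ident)
    else:
        raise NotFound(ident)
    return value() if callable(value) else value

def lookup(search_list, dotted_name):
    first, *rest = dotted_name.split('.')
    for container in search_list:          # each container is a namespace
        try:
            value = resolve(container, first)
        except NotFound:
            continue                       # try the next container
        for ident in rest:                 # later identifiers: parent object only
            value = resolve(value, ident)
        return value
    raise NotFound(dotted_name)

class User:
    name = "Ada"
    def greeting(self):                    # autocalled: no () needed in a template
        return "hello"

search_list = [{"a": {"b": 1}}, User()]
print(lookup(search_list, "a.b"))       # 1
print(lookup(search_list, "name"))      # 'Ada'
print(lookup(search_list, "greeting"))  # 'hello'
```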
Some directives are multi-line, meaning they have a matching { #end} tag. The lines of text between the start and end tags is the { body} of the directive. Arguments on the same line as the start tag, in contrast, are considered part of the directive tag. More details are in section language.directives.syntax (Directive Syntax Rules). | http://packages.python.org/Cheetah/users_guide/glossary.html | crawl-003 | refinedweb | 732 | 58.28 |
This section provides an overview of what django-views is, and why a developer might want to use it.
It should also mention any large subjects within django-views, and link out to the related topics. Since the Documentation for django-views is new, you may need to create initial versions of those related topics.
Django Views are simply the functions that get called when a request is made to a certain URLs.
URL patterns are written in
urls.py file, each URL regex is given a function(Django view) from a
views.py , so when a request is made, that function gets the call, with the HTTP request object, and then you can do whatever fun you want to do with that request.
A simple example of view,
from django.http import HttpResponse import datetime def current_datetime(request): now = datetime.datetime.now() html = "<html><body>It is now %s.</body></html>" % now return HttpResponse(html)
Calling the above view from a URL would return the current time, everytime you call the URL assigned to this view.
The
request object has many parameters related to the HTTP request you get, like headers, request type and more.
Read the official doc with more detailed examples.
Detailed instructions on getting django-views set up or installed. | https://riptutorial.com/django-views | CC-MAIN-2021-25 | refinedweb | 214 | 65.62 |
.
Outline
This article describes how to patch the ARM mbed DAPLink bootloader so it works with relaxed timing. It describes how to analyze the bootloader, how to write a small assembly program and how to inject it into the bootloader to work around a weakness in the ARM bootloader during power-up.
Problem
The mbed (or OpenSDA) bootloader uses a virtual USB MSD (mass storage device) to update the board with a new application binary. The problem with MSD is that it might get confused what the host machine is sending, e.g. if the host is scanning the new device for viruses/etc. Because the developers did not foresee such a situation, the receiving packets might brick the bootloader and board. Luckily, the board can be unbricked with JTAG/SWD programmer like a P&E Multilink or a Segger J-Link (or use a NXP Freedom board, see links section).
ARM has released a new bootloader v244 (see DapLink). The approach requires pyOCD which is imho is yet another can of works for troubles. Instead, I recommend to invest a few $ into SWD/JTAG programming device (you get a NXP LPCLink2 or a Segger J-Link EDU for $20 these days). The latest DAPlink releases can be found on.
Bootloader Mode or not?
While that bootloader v244 is supposed to fix the Windows 10 issues, I have found that it works on most of the NXP boards, but fails on others, especially on custom boars. The problem manifests in the following way: instead booting the board into the application mode after power-up, the board enters the bootloader mode:
After a lot of trial-and-error, I isolated the problem to a power-on issue: depending on how (and how fast) the board gets powered up, it might (or might not) enter bootloader mode, or does it in a random way. The thing is that on the NXP OpenSDA boards the K20 PTB1 pin is connected to the target CPU reset line:
The bootloader on the K20/OpenSDA checks the voltage on PTB1/reset line during startup: if the level is LOW, it enters bootloader mode, otherwise it starts the application.
The problem with that is that this is very timing sensitive: consider the case where during power up the K20/bootloader runs a bit faster than the logic level of the reset line gets pulled up (by a pull-up resistor). This gets even trickier if different power supplies are used with different reset line capacitance. As a result, if the K20/bootloader comes up ‘too fast’, it might ‘see’ a LOW on the reset line and enters bootloader mode. This especially happens if the board gets plugged in by the USB port and gets powered up.
A workaround is to keep the K20/bootloader in reset for a few seconds until all the voltages have been stabilized. But always keeping the reset button pressed while powering the board is painful.
The obvious solution would be to change the bootloader to add a short delay (say one second) in the bootloader until it checks that PTB1 pin. But building the mbed bootloader from sources is definitely not easy or simple.
💡 I wish there would be a simple make file project for the bootloader using standard GNU tools. With no proprietary RTOS (why not FreeRTOS, or better: no RTOS at all) making it hard to understand and build.
So I thought: adding a small patch to delay the bootloader should not be a big deal. And indeed that was accomplished in less than half an hour.
Bootloader Vector Table
Opening the 0244_k20dx_bl_0x8000.bin shows the reset vector entry: on reset or power up it will start execution from address 0x624 (minus the thumb bit):
So all what I need is to jump to a small delay routine instead and then continue with the execution at 0x624.
Delay Routine
Using ARM assembly code I wrote a small (nested) delay routine:
static void delay(void) { __asm ( "mov r1, 0x01 */ "blx r0 \n" /* jump! */ #else "bx lr \n" #endif "nop \n" /* make sure things are properly aligned */ ); }
The above function delays for as short time (depending on the clock speed) and then jumps to 0x624.
💡 You can adjust the time with the two loop counters, but make sure it is not too long, as otherwise it could trigger the watchdog. Disable the watchdog in that piece of code too.
I verified the code with the debugger, to be sure it works properly.
Watchdog? Watchdog!
In case the watchdog kicks into the delay loop, it is necessary to disable it first. In the v244 bootloader the watchdog disable code is located at address 0xfe4. For that case I have extended the delay loop to first disable the watchdog:
static void delay(void) { __asm ( "mov r0, #0xf00 \n" /* watchdog disable code at address 0xfe5 */ "add r0, #0xe5 \n" "blx r0 \n" /* jump to code disabling the watchdog at 0xfe5 */ "mov r1, #0x2 */ "bx r0 \n" /* jump! */ #else "bx lr \n" #endif "nop \n" /* make sure things are properly aligned */ ); }
Which gives the following machine code:
static const uint8_t delay_code[] = { 0x4F,0xF4,0x70,0x60, /* mov.w r0, #0xf00 */ 0x00,0xf1,0xe5,0x00, /* add.w r0, #0xe5 */ 0x80,0x47, /* blx r0 */ 0x4F,0xF0,0x02,0x01, /* mov.w r1, #2 */,0x00, /* add.w r0, #0x25 */ 0x00,0x47, /* bx r0 */ #else 0x70,0x47 /* bx lr */ #endif 0x00,0xBF, /* nop */ }; /code]</pre> <h1>Machine Code</h1> To patch the bootloader, I need the machine code of the delay loop. The easiest thing is to get this with Eclipse and a JTAG debugger: <a href=""><img class="size-full wp-image-22691" src="" alt="Debugging the patch" width="584" height="381" /></a> Debugging the patch Using the memory view in Eclipse, I can see the op codes: <a href=""><img class="size-full wp-image-22692" src="" alt="Machine Code" width="584" height="274" /></a> Machine Code with different loop counter values That machine code gets quickly transformed (copy-paste) into an array of byte. I have used lower loop counters below: <pre> static const uint8_t delay_code[] = { 0x4F,0xF0,0x01,0x01, /* mov.w r1, #0x1 */, /* add.w r0, #0x25 */ 0x00,0x80,0x47, /* blx r0 */ #else 0x70,0x47 /* bx lr */ #endif 0x00,0xBF, /* nop */ };
And can be quickly tested that way too:
void (*f)(void); /* function pointer */ f = (void(*)(void))(&delay_code[0]); /* assign function pointer */ f(); /* call it! */
With this I verified that may patch is working.
Patching the Bootloader
I have now the series of bytes I have to insert. The next step is to patch the bootloader itself. One easy way is to edit directly the .bin file with a binary file editor.
💡 Using the SRecord tool to manipulate the binary would have been another option.
I decided to write it at the end of the vector table which anyway is filled up with the default vector entry 0x0000063F). Fill up with NOP's.
Finally, I need to route the reset vector to my patch at address 0x3D0: For this I change the original 0x625 at address 0x4 to jump to my code at 0x3d0:
That's it! Saved the file and program the new bootloader to the board(s). And now all my boards worked without any power-on issues 🙂
Summary
While the new ARM mbed DAPlink bootloader solves the Windows 10 vulnerability, has the problem that it does not deal with power on glitches in a reliable way. I have patched the bootloader with an extra delay loop. The same approach to patch any firmware can be used of course for anything else. All what I need is some assembly programming, a binary editor and a SWD/JTAG programmer.
You can find the patched bootloader binaries on GitHub:
Happy Patching 🙂
Links
- Patched Bootloader:
- Update to DAPlink bootloader:
- DAPLink releases:
- Recovering and Updating the NXP OpenSDA Bootloader with P&E Multilink and MCUXpresso IDE
- Bricking and Recovering OpenSDA Boards in Windows 8 and 10
- How to Recover the OpenSDA V2.x Bootloader
- New P&E OpenSDA Firmware v114
- Segger J-Link Firmware for OpenSDAv2
- How to enter OpenSDA Bootloader mode: How to put the Kinetis K20 on the tinyK20 Board into Bootloader Mode
Awesome low level stuff. I have a probably silly question about your assembly code, not related to the main issue here, what mode is the assembly code, thumb or ARM and why not load the whole value, why break it into two parts:
“mov r0, #0x600 \n” /* _startup is at 0x625 */
“add r0, #0x25 \n” /* 0x25 because of thumb bit */
I wrote a code snippet to toggle LEDs trying to understand ARM assembly programmers mode, I did:
LDR R1, =0x400ff00c to ‘create a pointer to GPIOA_PTOR.
I really wish it was easy to build the bootloader code from source, I hope your cry will be heard 🙂
Hi David,
thanks! The microcontroller is running thumb instructions, and because of this large constants cannot be directly loaded. One way is to store the constant in the code (at the end of the functio), then load the PC relative address of it adn load it register indirect. To me an easier way is to simply build that constant on the fly as I did (yes, I was lazy). About building the bootloader: I had made that suggestion several times in the past to several ARM engineers. I doubt it will ever happen. Making things easier for ARM internally seems to have a higher priority than making things easier for the ones using ARM. That’s something I see in other places too, btw.
Pingback: Recovering OpenSDA Boards with Windows 10 | MCU on Eclipse
Pingback: tinyK22 Board Rev 1.3 released | MCU on Eclipse | https://mcuoneclipse.com/2017/10/29/adding-a-delay-to-the-arm-daplink-bootloader/?replytocom=100226 | CC-MAIN-2021-17 | refinedweb | 1,615 | 67.08 |
22 January 2009 00:23 [Source: ICIS news]
HOUSTON (ICIS news)--US industrial ethanol consumption this month is about 20% from one year earlier, pressured by weak demand due to a faltering US economy, a producer said on Wednesday.
Industrial ethanol consumption in the ink and coatings segments has dropped by 20%-plus, the source said, adding that in the pharmaceuticals sector consumption was off by 5-10%.
Statistical data on the consumption levels was not available.
The ink and coatings sector was hurt by the slump in the housing and automotive sectors, the source said.
A bleak outlook for the ?xml:namespace>
An industrial ethanol buyer concurred, saying “business was just very slow” for small and medium-size customers of ethanol.
The drop in
US industrial ethanol contracts in January were assessed at $3.65-3.75/gal for 200-proof grade, down by 20 cents/gal from December, according to global chemical market intelligence service ICIS pricing.
Market sources said contracts could shed another 20 cents/gal before the end of the month due to continued weak demand.
Industrial ethanol can be made via the fermentation process, like fuel ethanol, or through the hydration of ethylene.
The main US producers of industrial ethanol are ADM, MGP Ingredients, Grain Processing Corporation (GPC) and LyondellBase | http://www.icis.com/Articles/2009/01/22/9186752/us-industrial-ethanol-demand-drops-by-20.html | CC-MAIN-2015-06 | refinedweb | 215 | 52.8 |
I'});
Test on this page
For detailed documentation on all the properties and more examples see the Ajax Waiter.
If there are features or enhancements you would like to see let me know. published a version of the application running with Dojo, Yahoo, and jMaki integrated along with the Open Ajax Hubs components on. You can find out more about the details on what we did in the description tab of the example.
How will you use the hub?
Having experienced continuous development for over 2 years it is time to put a stake in the ground and call jMaki 1.0 ready for general use.
Included in the bundles:.
After?.
RenamespaceTask.java.
renameFile
dojo.js.
djConfig.'.
targetId
id
topic
<a:widget name="yahoo.tabbedview"
value="{tabs:[
{id : 'tab1', label : 'My Tab', content : 'Some Content'},
{id : 'tab2', label : 'My Tab 2', content : 'Tab 2 Content'}
]
}" />
The end result is using the default template looks as follows:?
A few months ago I created the revolver as a weekend project to provide an alternative way of providing top level navigation on your web size. I thought I would share this with everyone as a jMaki widget.
Tools->Palette->Add jMaki Library
In the page you will see the following:
<a:widget name="jmaki.revolver"
value="[
{title : 'Amsterdam',
imgSrc : '',
href : ''
},
{title : 'Paris',
imgSrc : '',
href : ''
},
{title : 'Seoul',
imgSrc : '',
href : ''
}
]"/>
Use the jMaki customizer (context click->jMaki) to customize the revolver or modify the /resources/jmaki/revolver/component.css.
/resources/jmaki/revolver/component.css
Find out more about the customizable properties that may be passed in as the args attribute from the docs/index.html file inside the jMaki Revolver widget library.
args
docs/index.html
You will need to provide the template text, CSS, and JavaScript.
jmaki/revolver
.json
{
'config': {
'version': '.9',
'glue' : {
'includes': ['/glue.js', '/resources/system-glue.js']
},
'extensions': [{url : '/*', 'name' : 'google.gears'}]
}
}
The extension is loaded for all urls and is named google.gears.
Now for the extension that will interact with Google Gears. No other JavaScript code is needed.
google.gears".
extension.js
/resources/google/gears
/google/gears/execute.
XMLHttpRequest.:
window.editorData
@{window.editorData}
.
jmaki.js/bar).
url.
eval()
javascript
script
javascript::
index.html?id=greg
Hi greg
What would happen if instead of "greg" I used the following URL:
index.html?id=%3Cscript%20src=%22
index.html?id=%3Cscript%20src=.
mouseover
badscript.js
So the question now comes to mind: 'How do you protect your web page from being being exploited?'.
description = description.replace(//g, ">");
Now that we have looked at how to prevent most attacks the next section focuses on cases where you want to allow users to provide markup that does not contain malicious code.?
Services.
One drawback of working with AJAX is that an AJAX-based client
cannot make calls to URLs outside of its domain, which means that it
cannot access services located on another server. A technique such
as JSONP
can help in this regard, but it has some limitations. One
limitation is that including third-party
JavaScript inside script elements exposes your
application to potential
security risks because you are allowing external parties to interact
with your client.
To overcome these problems, you need a generic proxy that
can communicate with external services on your client's behalf.
The proxy passes a call from your client application to the
service, receives the content in response from the service, and returns
the content to your client. You can then use this content in
your AJAX-based application.:
XmlHttpClient
XmlHttpServlet.
XMLHttpProxyServlet
XMLHttpProxy.
location
A request to the URL
will return an XML document which appears as:
<?xml version="1.0"?><ResultSet xmlns: :
xslURL
{var.
dojo.io.bind?.
International.
Servlets sit at bottom end of the API stack for web developers using Java technologies. If you have used JSP, JSF, Struts, Web Work, Velocity, or any of the other frameworks out there you have more than likely used the Servlet API.
What do I like about the Servlet API? As a developer I like being as close to possible to HTTP as possible and Servlets lets me do that well. The Servlet API has adapted to fit the needs of scripting languages by being the base for JSP, re-usable component models such as JSF, frameworks like Struts, and Portlets. Servlets are even good for providing the server side processing for AJAX clients.
Jason Hunter, a long time member of the Servlet EG, has detailed the changes in the article New features added to Servlet 2.5 on Java World.
Download the updated Servlet 2.5 specification from here.
If you want to try out the Servlet 2.5 features today the
Glassfish container provided by Sun just released a beta containing support for the new changes.
If you would like to see some of the things we are thinking about for the next servlet release, or if you would like to propose an addition please see my blog entry titled Got Servlets.. | http://weblogs.java.net/blog/gmurray71/archive/community_java_enterprise/index.html | crawl-002 | refinedweb | 827 | 64.3 |
Problem
You want to create your own exception class for tHRowing and catching.
Solution
You can throw or catch any C++ type that lives up to some simple requirements, namely that it has a valid copy constructor and destructor. Exceptions are a complicated subject though, so there are a number of things to consider when designing a class to represent exceptional circumstances. Example 9-1 shows what a simple exception class might look like.
Example 9-1. A simple exception class
#include #include using namespace std; class Exception { public: Exception(const string& msg) : msg_(msg) {} ~Exception( ) {} string getMessage( ) const {return(msg_);} private: string msg_; }; void f( ) { throw(Exception("Mr. Sulu")); } int main( ) { try { f( ); } catch(Exception& e) { cout << "You threw an exception: " << e.getMessage( ) << endl; } }
Discussion
C++ supports exceptions with three keywords: try, catch, and tHRow. The syntax looks like this:
try { // Something that may call "throw", e.g. throw(Exception("Uh-oh")); } catch(Exception& e) { // Do something useful with e }
An exception in C++ (Java and C# are similar) is a way to put a message in a bottle at some point in a program, abandon ship, and hope that someone is looking for your message somewhere down the call stack. It is an alternative to other, simpler techniques, such as returning an error code or message. The semantics of using exceptions (e.g., "trying" something, "throwing" an exception, and subsequently "catching" it) are distinct from other kinds of C++ operations, so before I describe how to create an exception class I will give a short overview of what an exception is and what it means to tHRow or catch one.
When an exceptional situation arises, and you think the calling code should be made aware of it, you can stuff your message in the bottle with the tHRow statement, as in:
throw(Exception("Something went wrong"));
When you do this, the runtime environment constructs an Exception object, then it begins unwinding the call stack until it finds a try block that has been entered but not yet exited. If the runtime environment never finds one, meaning it gets all the way to main (or the top-level scope in the current thread) and can't unwind the stack any further, a special global function named terminate is called. But if it does find a try block, it then looks at each of the catch statements for that try block for one that is catching something with the same type as what was just thrown. Something like this would suffice:
catch(Exception& e) { //...
At this point, a new Exception is created with Exception's copy constructor from the one that was thrown. (The one in scope at the throw is a temporary, so the compiler may optimize it away.) The original exception is destroyed since it has gone out of scope, and the body of the catch statement is executed.
If, within the body of the catch statement, you want to propagate the exception that you just caught, you can call throw with no arguments:
throw;
This will continue the exception handling process down the call stack until another matching handler is found. This permits each scope to catch the exception and do something useful with it, then re-tHRow it when it is done (or not).
That's a crash course in how exceptions are thrown and caught. Now that you're equipped with that knowledge, consider Example 9-1. You can construct an Exception with a character pointer or a string, and then throw it. But this class is not terribly useful, because it is little more than a wrapper to a text message. As a matter of fact, you could get nearly the same results by just using a string as your exception object instead:
try { throw(string("Something went wrong!")); } catch (string& s) { cout << "The exception was: " << s << endl; }
Not that this is necessarily a good approach; my goal is to demonstrate the nature of an exception: that it can be any C++ type. You can throw an int, char, class, struct, or any other C++ type if you really want to. But you're better off using a hierarchy of exception classes, either those in the standard library or your own hierarchy.
One of the biggest advantages to using an exception class hierarchy is that it allows you to express the nature of the exceptional circumstance with the type of exception class itself, rather than an error code, text string, severity level, or something else. This is what the standard library has done with the standard exceptions defined in . The base class for the exceptions in is exception, which is actually defined in . Figure 9-1 shows the class hierarchy for the standard exception classes.
Figure 9-1. The standard exception hierarchy
Each standard exception class, by its name, indicates what category of condition it is meant to identify. For example, the class logic_error represents circumstances that should have been caught during code writing or review, and its subclasses represent subcategories of that: situations such as violating a precondition, supplying an out-of-range index, offering an invalid argument, etc. The complementary case to a logical error is a runtime error, which is represented by runtime_error. This indicates situations that, more than likely, could not have been caught at code time such as range, overflow, or underflow.
This is a limited set of exceptional situations, and the standard exception classes probably don't have everything you want. Chances are you want something more application-specific like database_error, network_error, painting_error and so on. I will discuss this more later. Before that, though, let's talk about how the standard exceptions work.
Since the standard library uses the standard exception classes (imagine that), you can expect classes in the standard library to throw one when there is a problem, as in trying to reference an index beyond the end of a vector:
std::vector v; int i = -1; // fill up v... try { i = v.at(v.size( )); // One past the end } catch (std::out_of_range& e) { std::cerr << "Whoa, exception thrown: " << e.what( ) << ' '; }
vector<>::at will throw an out_of_range exception if you give it an index that is less than zero or greater than size( ) - 1. Since you know this, you can write a handler to deal with this kind of exceptional situation specifically. If you're not expecting a specific exception, but instead would rather handle all exceptions the same way, you can catch the base class for all exceptions:
catch(std::exception& e) { std::cerr << "Nonspecific exception: " << e.what( ) << ' '; }
Doing so will catch any derived class of exception. what is a virtual member function that provides an implementation-defined message string.
I am about to come full circle. The point of Example 9-1 followed by so much discussion is to illustrate the good parts of an exception class. There are two things that make an exception class useful: a hierarchy where the class communicates the nature of the exception and a message for the catcher to display for human consumers. The exception class hierarchy will permit developers who are using your library to write safe code and debug it easily, and the message text will allow those same developers to present a meaningful error message to end-users of the application.
Exceptions are a complicated topic, and handling exceptional circumstances safely and effectively is one of the most difficult parts of software engineering, in general, and C++, in particular. How do you write a constructor that won't leak memory if an exception is thrown in its body, or its initializer list? What does exception-safety mean? I will answer these and other questions in the recipes that follow.
Building C++ Applications
Code Organization
Numbers
Strings and Text
Dates and Times
Managing Data with Containers
Algorithms
Classes
Exceptions and Safety
Streams and Files
Science and Mathematics
Multithreading
Internationalization
XML
Miscellaneous
Index | https://flylib.com/books/en/2.131.1/creating_an_exception_class.html | CC-MAIN-2021-21 | refinedweb | 1,317 | 57.81 |
How to Write a Secure Python Serverless App on AWS Lambda
Modern authentication systems generate JSON Web Tokens (JWT). While there are several types of JWTs, we’re concentrating on access tokens. When a user successfully logs in to an application, a JWT is generated. The token is then passed in all requests to the backend. The backend can then validate the token and reject all requests with invalid or missing tokens.
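As a quick illustration (a toy example, not part of the application we will build), a JWT is three base64url-encoded segments joined by dots: header, payload, and signature. The payload carries the claims, but anyone can read them; only verifying the signature makes them trustworthy:

```python
import base64
import json

def b64url(data: dict) -> str:
    """Serialize a dict as unpadded base64url, the encoding JWTs use."""
    raw = json.dumps(data, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# A toy token: real tokens are produced and signed by the identity provider
header = {"alg": "RS256", "typ": "JWT"}
payload = {"sub": "user@example.com", "exp": 1700000000}
token = b64url(header) + "." + b64url(payload) + "." + "signature-goes-here"

# Splitting the token exposes the claims without any verification;
# signature checking is what the backend must add
seg = token.split(".")[1]
claims = json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))
print(claims["sub"])  # user@example.com
```

This is why the backend must never trust a token's claims before validating its signature.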
Today, we are going to build a simple web application that uses the Okta authentication widget to log users in. The access token will be generated and sent to an API written in Python and deployed as an AWS Lambda function, which will validate the token. Let’s get started!
Table of Contents
- Install AWS Serverless CLI, Python 3, and Tornado
- Create an Okta Account and Application
- CORS and Effect on AWS Lambda
- Build a Simple HTML and JavaScript Client
- Build a Web Server in Python
- Create an AWS Lambda Function in Python
- Validate a JWT Offline in a Python Lambda Function
- Learn More About Python, JWTs, and AWS
NOTE: The code for this project can be found on GitHub.
Install AWS Serverless CLI, Python 3, and Tornado
If you haven’t already got an AWS account, create an AWS Free Tier Account.
Next, install the AWS SAM CLI.
Next, if you don’t already have Python installed on your computer, you will need to install a recent version of Python 3.
Now, create a directory where all of our future code will live.
mkdir aws-python cd aws-python
To avoid issues running the wrong version of Python and its dependencies, it is recommended to create a virtual environment so that the commands
python and
pip run the correct versions:
python3 -m venv .venv
This creates a directory called
.venv containing the Python binaries and dependencies. This directory should be added to the
.gitignore file. This needs to be activated to use it:
source .venv/bin/activate
You can run the following command to see which version you are running.
python --version
Finally, you need to install the Tornado Python library to build a web server for the front end.
pip install tornado
Create an Okta Account and.
CORS and Effect on AWS Lambda
As an important aside, we need to make a design decision as to how to pass the access token from the web front end to the Python backend Lambda function. There are several ways this can be done, it can be passed as an authorization header, in a cookie, or as a query or post parameter.
As the backend will be implemented as an AWS Lambda function, this limits our choice due to Cross-Origin Resource Sharing (CORS) restrictions. Web pages are hosted on a web server that has a domain name called the origin domain. When a web page needs to communicate with a backend API, a JavaScript function makes an HTTP request to the backend server. If the domain name, or even the port number, of the backend server, differs from the origin domain, then the browser will refuse the response due to CORS.
In order to overcome CORS restrictions, the backend server needs to set response headers that give the browser permission to accept the response data. The most important header is the one that specifies which origin domain can receive the response:
Access-Control-Allow-Origin:
It is also possible to allow access from any origin domain:
Access-Control-Allow-Origin: *
Do be careful about allowing any domain, as it will almost certainly be flagged at a security audit, and may be in violation of an information security regulation.
CORS also adds further restrictions on which request HTTP headers are allowed. In particular, the `Authorization` header is forbidden. The restriction can be overcome by adding a second response header:
Access-Control-Allow-Credentials: true
There is, however, a further complication: the browser doesn't know whether the server will allow the authorization header to be sent. To find out, the browser makes a preflight request to the server to determine whether the actual request will be allowed. The preflight request is an HTTP OPTIONS request. If the response contains the correct CORS headers, the actual request is then made.
The application we are going to build has a backend API implemented in Python, which will validate the access token on each request. This function is deployed as an AWS Lambda function. Unfortunately, the container in which the Lambda function is deployed will receive the preflight request. It will attempt to validate the token in the authorization header. This will fail as the container doesn't have the public key required to validate the token, resulting in a `403 Forbidden` response.
We can’t use the authorization header, and cookies are often blocked, so we will send the token as a POST parameter.
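To make that decision concrete, here is a minimal sketch of the shape the backend handler will need. The names here are assumptions for illustration (`ALLOWED_ORIGIN`, and the `httpMethod`/`body` fields of an API Gateway proxy event); the article's real Lambda function comes later. The key points: the OPTIONS preflight is answered before any token validation runs, and the token is read from the POST body rather than a header:

```python
from urllib.parse import parse_qs

# Assumption: the front end is served from this origin during development
ALLOWED_ORIGIN = "http://localhost:8080"

CORS_HEADERS = {
    "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
    "Access-Control-Allow-Credentials": "true",
}

def lambda_handler(event, context):
    # The browser's preflight must succeed without any token validation
    if event.get("httpMethod") == "OPTIONS":
        return {"statusCode": 204, "headers": CORS_HEADERS, "body": ""}

    # The access token arrives as a POST form parameter, not a header
    params = parse_qs(event.get("body") or "")
    token = (params.get("token") or [None])[0]
    if token is None:
        return {"statusCode": 401, "headers": CORS_HEADERS, "body": "missing token"}

    # ...token validation would happen here...
    return {"statusCode": 200, "headers": CORS_HEADERS, "body": "ok"}
```

Because the preflight branch returns immediately, the container never tries to validate a token it cannot verify.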
Build a Simple HTML and JavaScript Client
We will start by building a simple web front end in HTML and JavaScript. It will be served by a web server written in Python.
First of all, create a directory called `client`, which will contain static content.
Next, create a file called `client/index.html` with the following content:
<html>
  <head>
    <meta charset="UTF-8" />
    <title>How to write a secure Python Serverless App on AWS Lambda</title>
    <script src="" type="text/javascript"></script>
    <link href="" type="text/css" rel="stylesheet"/>
    <link href="style.css" rel="stylesheet" type="text/css" />
    <script src="control.js" defer></script>
  </head>
  <body>
    <h1>How to write a secure Python Serverless App on AWS Lambda</h1>
    <div id="widget-container"></div>
    <div class="centred">
      <form id="messageForm">
        Message: <input id="message" name="message" type="message"/>
        <input type="hidden" id="token" name="token"/>
        <input type="button" value="Send" onclick="onmessage()"/>
      </form>
      <textarea id="messages" name="messages" rows="10" cols="50">Messages</textarea><br/>
    </div>
  </body>
</html>
The Okta Sign-In Widget's JavaScript and CSS files are loaded from Okta's global CDN (content delivery network). The `widget-container` will be replaced by the login form when the page loads. The page contains a simple form that has a text box for a message and a hidden input that will hold the access token. The text area at the bottom will display responses from the server.
Now, create a stylesheet called `client/style.css`. Here is an example:
body {
    background-color: #ccccff;
    text-align: left;
}
h1 {
    text-align: center;
    font-size: 50pt;
    font-style: italic;
    color: #0000FF;
    clear: both;
}
h2 {
    text-align: center;
    font-size: 30pt;
    font-style: normal;
    color: #0000FF;
    clear: both;
}
.centred {
    text-align: center;
    display: block;
    margin-left: auto;
    margin-right: auto;
}
Next, create a file called `client/control.js`. Its `onmessage()` function posts the form to the backend API:

function onmessage() {
    var url = "";
    var headers = {}
    if (accessToken != null) {
        document.getElementById('token').value = accessToken;
    }
    fetch(url, {
        method : "POST",
        mode: 'cors',
        body: new URLSearchParams(new FormData(document.getElementById("messageForm"))),
    })
    .then((response) => {
        if (!response.ok) {
            throw new Error(response.error)
        }
        return response.text();
    })
    .then(data => {
        messages = JSON.parse(data)
        document.getElementById('messages').value = messages.join('\n');
    })
    .catch(function(error) {
        document.getElementById('messages').value = error;
    });
}
Let’s see what this JavaScript does. It declares a variable that will hold the access token. It then creates an
OktaSignIn object. Replace
{yourOktaDomain} and
{yourClientId} with the values from the Okta CLI.
The
renderEl() function displays the login form and performs the authentication process. On successful login, the access token is extracted from the response and saved. The login form is then hidden.
The
onmessage() function is called when the user hits the submit button on the form. It stores the access token in the hidden input on the form and then makes a POST request to the backend server. It writes the response from the server into the text area.
Build a Web Server in Python
Now you are going to build a web server in Python to serve the static content. A web server is required because some of the JavaScript will not work if you simply load the page into a browser.
You will make the server a Python package, which is simply a directory, in this case, called
server, containing Python code. Python packages require a file called
__init__.py. This is run when the package is loaded and is often just an empty file.
mkdir server touch server/__init__.py
Next, create a file called
server/FileHandler.py containing the following Python code:
from tornado.web import StaticFileHandler class FileHandler(StaticFileHandler): def initialize(self, path): self.absolute_path = False super(FileHandler, self).initialize(path)
It uses the Python Tornado framework and implements a static file handler that serves any files from the directory specified in the
path constructor parameter.
Next, create a file called
server/__main__.py containing the following Python code:
import signal import sys from tornado. import HTTPServer from tornado.ioloop import IOLoop from tornado.options import define, options from tornado.web import Application, RequestHandler from server.FileHandler import FileHandler define("port", default=8080, help="Listener port") options.parse_command_line() application = Application([ ('/()$', FileHandler, {'path': "client/index.html"}), ('/(.*)', FileHandler, {'path': "client"}), ]) = HTTPServer(application) print("Listening on port", options.port) try: IOLoop.current().start() except KeyboardInterrupt: print("Exiting") IOLoop.current().stop()
This, first of all, looks for a command line parameter called
--port to obtain a port number to listen on which defaults to 8080.
An
Application object is created which implements a Tornado web server. The application is constructed with a list of tuples. Each tuple has two or more values. The first value is a URI. This can be a regular expression. Any URI component in parentheses is captured as a path parameter. The second tuple value is a Python class that handles requests for matching URIs. Any remaining values are path parameters captured from the URI. They become constructor parameters as an instance of the class is created on each request. In this case, the file or directory containing the static content is specified.
Finally, the server is created and started.
Now, the front end can be tested by starting the server and pointing a web browser at
python -m server --port=8080
You should be able to log in using your Okta credentials. You will not be able to send a message at this stage as there is currently no backend.
Create an AWS Lambda Function in Python
You need to create a basic AWS Lambda function. You can use the SAM CLI to build, run, and deploy the application. Lambda functions can be built and run locally in a Docker container which emulates the AWS environment.
First of all, create a directory called
auth-app, and create a Python package called
messages inside it:
mkdir -p auth-app/messages touch auth-app/messages/__init__.py
Next, create a file called
auth-app/messages/requirements.txt containing a list of packages to be loaded by
pip:
jwt requests tornado
Next, create a simple Lambda function. Create a file called
auth-app/messages/messages.py containing the following Python code:
def message(event, context): return { 'statusCode': 200, 'body': 'Hello World!' }
The function has two parameters, which are both dictionaries. You will be using the
event map later to extract request parameters. The function has to return a dictionary containing the HTTP response code and the response body.
Next, you need to create the deployment template file
auth-app/template.yaml:
AWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Description: > auth-app Sample SAM Template for auth-app # More info about Globals: Globals: Function: Timeout: 10 Resources: OktaKeys: Type: String MessagesFunction: Type: AWS::Serverless::Function # More info about Function Resource: Properties: Environment: Variables: OKTA_KEYS: !Ref OktaKeys CodeUri: messages/ Handler: messages.message Runtime: python3.7 Events: Messages: Type: Api # More info about API Event Source: Properties: Path: /api/messages Method: post
The globals section defines a timeout. This is the maximum time the function is allowed to complete a request.
The resources section defines one or more Lambda functions. It defines any environment variables that will be passed to the function. The
CodeUri defines the Python package containing the function. The
Handler defines the Python function to call. The
Runtime defines the language and version of the executable environment, in this case, Python 3.7. The events define the API, the path is the request URI, and the method is the HTTP request method.
Next, build the application in a Docker container. The first time this command is executed, SAM will pull a Docker image. This can take some time to download.
cd auth-app sam build --use-container
The build creates a directory called
auth-app/.aws-sam. This should be added to your
.gitignore file.
Now, you can run the application locally:
sam local start-api
Then test it using
curl:
curl -i -X POST
Validate a JWT Offline in a Python Lambda Function
Offline JWT validation requires a public key. Authentication providers, such as Okta, provide a URL that returns a public key set (JWKS). The key set is a JSON array. We are going to base64 encode the JSON array to make it more manageable. Issue the following command to get the base64 encoded keys:
curl | base64
Next, create a file called
env in the
auth-app directory that overrides environment variables in the template file:
{ "MessagesFunction" : { "OKTA_KEYS": "base64 string from key provider" } }
Next, you are going to extract the public key from the key set. There can be multiple keys, but I will assume that there is only one, which is often the case. Add the following Python code to
auth-app/messages/messages.py:
import base64 from jwt import (JWT, jwk_from_dict) from jwt.exceptions import JWTDecodeError import os public_key = None def get_keys(): keys = base64.b64decode(os.environ['OKTA_KEYS']) jwks = json.loads(keys) for jwk in jwks['keys']: public_key = jwk_from_dict(jwk) get_keys()
NOTE: This post uses local validation of JWTs rather than using the introspect endpoint to validate them remotely. This is done for efficiency.
The function gets called when the file is loaded. It extracts the JWKS from the environment variable and does a base64 decode to get the JSON string. This is then turned into a Python dictionary. It then calls
jwk_from_dict(), which extracts the public key.
Next, add a verify functions which validates the token using the public key:
def verify(token): result = {} try: decoded = instance.decode(token, public_key, False) except JWTDecodeError: result = { 'statusCode': 403, 'body': 'Forbidden '} return result
You also need a helper function to extract the URL encoded POST form data:
def get_post_data(body): postdata = {} for items in body.split('&'): values = items.split('=') postdata[values[0]] = values[1] return postdata
Finally, you need to modify the main
message() function to do validation and return the messages or an error:
def message(event, context): body = get_post_data(event['body']) result = verify(body['token']) if not bool(result): messages.append(body['message']) result = { 'statusCode': 200, 'headers': { 'Access-Control-Allow-Headers': 'Content-Type', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'OPTIONS,POST,GET', }, 'body': json.dumps(messages) } return result
NOTE: Notice that the response includes the CORS headers.
You now have a complete application. Build and start the AWS backend:
sam build --use-container sam local start-api --env-vars env
Start the frontend webserver:
python -m server --port=8080
Now, point a web browser at
Type in a message and submit the form. You should get a 401 error message displayed. Log in using your Okta credentials. Now send another message. You should see a list of messages.
TIP: When working with complex web applications, always have the developer console open on the browser. It will save a lot of time diagnosing JavaScript and network errors.
Learn More About Python, JWTs, and AWS
You have now built an application that uses Okta authentication to obtain a JWT access token. A Python API validates the token using a public key before processing any requests.
You only did local deployment as a proof of concept. To deploy a Lambda function into the cloud use:
sam deploy --guided
This will prompt and guide you through the deployment process and give you the URL to the deployed function.
While writing this post, I experienced first-hand how confusing and overly complicated Amazon documentation can be. As you can see, the actual minimal code to make things work is quite simple.
There are some downsides. The functions have to be started when requests arrive. This can add latency. Also, you have no control over which instance of a function will handle a request. Typically each request will be handled by a different instance of a function. Any data which needs to be available across requests must be stored in cloud storage or a database.
Serverless applications are definitely the way forward. The beauty is that you can simply deploy a function into a cloud, and not have to create any server environment to host the function. The functions can be written in a number of programming languages including Go, Java, and Python.
The cloud replicates the functions depending on demand. They scale to zero, meaning that they use no resources, and hence incur no costs when not being used.
You can find the source code for this article on GitHub in the okta-aws-python-example repository.
If you enjoyed this post, you might like related ones on this blog.
- Build and Secure an API in Python with FastAPI
- Building a GitHub Secrets Scanner
- The Definitive Guide to WSGI
- Build a CRUD App with Python, Flask, and Angular
- Build a Simple CRUD App with Python, Flask, and React
Follow us for more great content and updates from our team! You can find us on Twitter, Facebook, subscribe to our YouTube Channel or start the conversation below.
Okta Developer Blog Comment Policy
We welcome relevant and respectful comments. Off-topic comments may be removed. | https://developer.okta.com/blog/2021/07/26/python-serverless | CC-MAIN-2022-21 | refinedweb | 2,966 | 56.66 |
Gmail Drive Shell Extension is a genius tool for Gmail users.
Open a command prompt in the selected directory (or directories) or in the current directory that yo...
import about 400 graphic file formatsExport about 50 graphic file formats.
Allows the user to alter the case of selected file item(s).
A nifty tool for file integrity checking, integrates into Windows shell.
It adds a new column to the Details view in Windows Explorer.
When you wants to know the size of the folder, You has to right click...
Folder Latch 12.0.2.6
Shell extension for Explorer that allows you to add comments and.
Folder Marker lets you mark folders with color-coded icons.
DjVu Shell Extension Pack is an extension package for Windows.
Professional folder and disk analyzer with detailed analytical reports.
Adds an Open Command Prompt menu item to the context menus in Windows Explorer.
Folder Size 2.9
It adds an "Open Command Prompt" menu to the context menus in Windows Explorer. | http://ptf.com/folder/folder+size+shell+extension/ | CC-MAIN-2013-20 | refinedweb | 167 | 69.07 |
To use cdktf you will need to install it. The tool is available as a Typescript application from npm.
»Prerequisites
In order to use cdktf, you'll need Terraform, Node.js, and Yarn.
See the Terraform install tutorial for instructions on installing Terraform via package manager, source, or binary download.
Node.js publishes a graphical installer that will install Node.js and npm on your platform.
Yarn is an alternate JavaScript package manager that can be installed via Homebrew, precompiled binary, or from source.
»Install cdktf
There are two ways to install cdktf.
»Verify the installation
Verify that the installation worked by opening a new terminal session and running the
cdktf command to show available subcommands.
$ cdktf cdktf [command] Commands: cdktf deploy [OPTIONS] Deploy the given stack cdktf destroy [OPTIONS] Destroy the given stack cdktf diff [OPTIONS] Perform a diff (terraform plan) for the given stack cdktf get [OPTIONS] Generate CDK Constructs for Terraform providers and modules. cdktf init [OPTIONS] Create a new cdktf project from a template. cdktf login Retrieves an API token to connect to Terraform Cloud. cdktf synth [OPTIONS] Synthesizes Terraform code for the given app in a directory.
Add
--help to any subcommand to learn more about what it does and available options.
$ cdktf deploy --help
»Quick start tutorial
Now that you've installed
cdktf, you can write TypeScript code that will provision an NGINX server using Docker on Mac, Windows, or Linux. This example is free and requires no cloud credentials to run.
If you have cdktf and Docker installed on your local machine, start Docker Desktop by launching the application in your file browser or by running this command in a macOS terminal.
$ open -a Docker
»Create and initialize the project
Create the project from scratch, starting with a directory named
typescript-docker.
$ mkdir typescript-docker && cd $_
Initialize the project with the
init command. For this quick start, use the
--local flag so that all state will be stored locally.
$ cdktf init --template=typescript --local
You'll be prompted for a project name and description. You can use the defaults as listed.
»Edit the code
Add code to two files:
cdktf.json and
main.ts.
In
cdktf.json, list
docker as one of the Terraform providers that you'll be using in the application. You can replace the existing "aws" example in the file.
{ "language": "typescript", "app": "npm run --silent compile && node main.js", "terraformProviders": [ - "aws@~> 2.0" + "terraform-providers/docker@~> 2.0" ] }
Delete the contents of
main.ts and paste the following TypeScript code into the
main.ts file. We've included the equivalent HCL configuration in a tab for comparison.
import { Construct } from 'constructs' import { App, TerraformStack } from 'cdktf' import { Container, Image } from './.gen/providers/docker' class MyStack extends TerraformStack { constructor(scope: Construct, name: string) { super(scope, name) const dockerImage = new Image(this, 'nginxImage', { name: 'nginx:latest', keepLocally: false, }) new Container(this, 'nginxContainer', { image: dockerImage.latest, name: 'tutorial', ports: [ { internal: 80, external: 8000, }, ], }) } } const app = new App() new MyStack(app, 'typescript-docker') app.synth()
»Install dependencies and deploy
You're almost there! Run the
get command to download the dependencies for using Docker with Typescript.
$ cdktf get
Now run
deploy to compile the code and provision the NGINX container within Docker.
$ cdktf deploy Stack: typescript-docker Resources + DOCKER_CONTAINER nginxContainer docker_container. + DOCKER_IMAGE nginxImage docker_image. Diff: 2 to create, 0 to update, 0 to delete. Do you want to continue (Y/n)?
Verify the existence of the NGINX container by visiting localhost:8000 in your web browser or run
»Destroy the container
To stop the container, run
cdktf destroy.
$ cdktf destroy Stack: typescript-docker Resources - DOCKER_CONTAINER nginxContainer docker_container. - DOCKER_IMAGE nginxImage docker_image. Diff: 0 to create, 0 to update, 2 to delete. Do you want to continue (Y/n)?
You've now provisioned and destroyed an NGINX webserver with
cdktf. | https://learn.hashicorp.com/tutorials/terraform/cdktf-install?in=terraform/cdktf | CC-MAIN-2020-45 | refinedweb | 636 | 50.63 |
Update for Release 1302 – July 2013
In this blog, I want to share the information on my solution template “Site Management”. The Site Management demo app is used through all chapters and how-to guides in my book on ByDesign Studio application development (Thomas Schneider: “SAP Business ByDesign Studio — Application Development”)
You can download the solution template (from the Business Center, see details below) and import it into a customer-specific solution. As of July 2013, the template is for ByDesign version 1302. In the template you will find the entities described in chapters 1 to 5 of my book.
What is a solution template?
A solution template is a customer-independent solution that can be imported into customer-specific solutions. You can create all items that you want to reuse in a solution, for example, business objects, UIs, web services (exception: key user content, such as mash-ups). The solution template itself cannot be used in a production tenant, but only the customer-specific solutions derived from it.
How to use the solution template?
Prerequisites:
- Access to a ByDesign FP4.0 tenant that is enabled for customer-specific development
- Partner development authorization (Partner Development work center)
- ByDesign Studio 4.0 installed
To import the solution template into your solution:
- Go to the SME Business Center, Wiki, SAP Business ByDesign Studio, Best Practice for Implementation ( ),
- Go to the the section “How-To Guides with Solution Templates” and download the zip file with title “Site Management” to your computer.
- Open SAP Solutions OnDemand studio 1302, logon to a ByDesign 1302 tenant, create a solution with Type = Customer-Specific Solution and Deployment Unit = Customer Relationship Management.
- In the studio, open the Implementation Manager (View -> Other Windows -> Implementation Manager).
- Click “Import Solution Template” and select the file that you downloaded to your computer. The system imports the template into your solution.
- In the Solution Explorer, select the solution and select Activate from the context menu. The system generates the runtime artifacts.
- Open the Business Users view in the Application and User Management work center, select your user and assign the Site Management work center to your user (under Edit Access Rights).
See also the documentation on solution templates. (To download the documentation, select, SAP Solutions OnDemand Studio, Complete help: Print version. In the documentation, see section “Solution Templates Quick Guide for Customer-Specific Solutions”)
During importing the solution template into your solution, the artifacts are copied into your solution (this means technically, all namespaces are changed in the artifacts). After the import, you can change and adopt the artifacts.
What are the details of the solution?
The details of the solution and the solution blueprint is described in my book. The following entities are contained in the template:
- Work center Site Management, with Site Reservation view, Sites view, Site Categories view: If you start testing, create site categories first, then sites, and finally site reservations (Reservations use sites, sites use site categories).
- The central UI of the application is the Site Reservation QA, which you can open with the “New” or “Edit” button in the Site Reservation view (see Figure 1). I have implemented the following features:
- Account: Value Help, input validation, ID/description mapping and navigation to the Account UI
- Availability check for sites: “GetAvailableSites” and “Pick” function.
- Arrival/Departure date: Prefill and input validation
- Sales Order creation: create sales order (through button “Calculate Price” or “Release”)
- Automatic number (ID) selection.
The solution contains the following business objects:
- Number
- SiteCategory
- Site
- SiteReservation
Figure 1: Site Reservation Quick Activity UI
Hello Thomas,
The example looks interesting. But when trying to open the template, ByDesign Studio shows an error that the solution was only meant for Customer XXXXX (I can’t remember what the number was).
Any suggestions on how we might be able to open this template?
Regards,
Greg
P.S. I even tried doing the “Switch Customer” method in step 3 of your steps above, using the customer number shown in the error message. It still didn’t work.
Hi Greg,
in which of the steps mentioned in my article did the error occur? I assume in step 5. Are you sure that you used the “Import Solution Template” button and not the “Upload” button? If the error persists, can you please create an incident?
Best regards,
Thomas
.
Thomas,
Your comment is exactly correct. I had used “Upload” instead of “Import Solution Template”. Thanks for the hint and help!
Regards,
Greg
Hello Thomas,
I think the “Prerequisites” section might need to be updated. In an FP4.0 PDI-enabled tenant (not a development tenant), I imported the solution template and attempted to activate the solution. However, the solution couldn’t be activated because of the following types of errors:
SiteReservation-Root-Action-StartSalesProcessing
“Method or action ‘Release’ is not permitted.”
The only way I can think of resolving these errors to test this template is to make an official request to have the SalesOrder object to be opened up for write access. Has no one else tried loading up this template before? I can’t believe I’m the first.
Greg
Hi Greg,
yes, you are right. By default, write access to SAP business objects is disabled. If you want to get this removed, open an incident from your test tenant and SAP will remove the restriction. Unfortunately this is necessary for security and legal reasons.Please check the following business center document for details:
I should have mentioned that in the prerequisites!
Best regards,
Thomas
.
Hello Thomas,
thank you for the Site Management Tutorial. It is extremely helpful. But how can I upload generally a customer-specific solution into my or the Customers Test Tenant? I´ve read the documentation SAP offers. Unfortunately, I am not able to log into the Test Tenant via the ByD Studio FP4.0. It always says: “Tenant used productive”. Do I need a special Test Tenant for that? At the moment, I am developing on the development tenant of our partner company.
Another problem with Customer specific solutions, is that I cannot use the Service Integration for XML File Input or Webservices. It says to me that “Entity pid not supported in Reseller Mode”. Therefore I use the Solution Type: Addon. Hopefully, the upcoming hotfix collection (no more differentiation between the two modes) can handle it, but I am not sure.
Best Regards
Rufat Gadirov
Hi Rufat,
can you please have a look at my article Customer-Specific Solutions and Template Solutions? If you still have questions, can you please re-post your question there?
Second: XML file input and web services: Probably the same reason as for Gregs request: these functions are not available if write access is restricted for your tenant. Please see my response to Gregs question above.
Best regards,
Thomas
.
Hi Thomas,
thank you for your quick answer. Yes, I have already looked at your article. I will post my question there.
Best Regards,
Rufat
Hi Thomas
I am naive user of SAP Business ByD.
I cannot open the link to download the Site Management template.
When I click on this link, I get the Error “Page Not Found”.
Can you please tell how to get this Solution template?
Thank you
Regards
Pratyush
Hi Pratyush,
You can’t access the wiki page because you don’t have access rights for the Business Center. If you are an SAP Business ByDesign partner, you should have these access rights. Therefore please create an incident.
Best regards,
Ulrike
SAP Cloud Knowledge Management
Hi Ulrike,
I make registration using my official email ID, still I get an error “Page Not found”. And Yes we are a SAP Business ByDesign partner.
Please guide me to access the wiki page , to download Solution templat
Thank you.
Best Regards
Pratyush SINHA
Hi Pratyush,
There may be an issue with your Business Center user. Please open an incident as follows:
1. Go to and click “Help”.
2. On the Contact SAP Cloud Support screen,click the link “open an incident” and enter the required details, including the link above you are trying to access.
Best regards,
Marianne
Hi Marianne
I cannot create a incident also.
I get this message on the screen
“You currently don’t have the permission to use this functionality.
If you have just registered for the Business Center, you will get access within a few hours.“
Please guide to get the access or the way to get the solution template.
Thanks
Best Regards
Pratyush
Hello Pratyush,
please sent an e-mail to this e-mail address: [email protected]
providing a screenshot of your view, your e-mail address and S user ID. I need to check your permissions.
Thanks,
Stefanie
Hi Thomas
I can’t able to access the link for Site Management Template.
I cannot open the link to download the Site Management template
Kindly provide the way to access it .
Many Thanks
Mithun
Hello Thomas,
When tying to import this template I get an error “Tenant is configured for on-demand solution SAP ByDesign. Solution/template Y… built for on-demand solution SAP Business ByDesign cannot be processed.”
Can you tell please what may I do wrong?
Best regards
Irina
Hi Irina,
can you please open an incident and ask the support to check. From what I know, there is a compatibility white list check at import, and mybe the check is not correct.
As a workaround, you can open the zip file itself. It is not encrypted. You can see the BODL / ABSL code in clear text and you could copy it manually into a trial solution. This works fine for BODL/ABSL, but unfortunately not for UIs.
Regards, Thomas
.
Hi Thomas,
I’m reading your book. It’s very helpful. Thank you!
But I can’t open this template in the ByD tenant version 1402. I opened an incident and they told me that it works only up to 1308 version. Anyway, thank you for the tip.
Best regards, Irina
Hi Thomas.
I have a question posted on : Different Partner Determination based on doc types
I am really looking for a feasible solution for this. Can you suggest me something.
Regards
Apoorva | https://blogs.sap.com/2012/10/02/site-management-solution-template/ | CC-MAIN-2018-13 | refinedweb | 1,692 | 57.06 |
table of contents
NAME¶dpns_utime - set last access and modification times
SYNOPSIS¶#include <sys/types.h>
#include "dpns_api.h"
int dpns_utime (const char *path, struct utimbuf *times)
DESCRIPTION¶dpns_utime sets last access and modification times.
- path
- specifies the logical pathname relative to the current DPNS directory or the full DPNS pathname.
If times is NULL, the access and modification times are set to the current time else they are set to the utimbuf structure member values. ctime is set to current time.
RETURN VALUE¶This routine returns 0 if the operation was successful or -1 if the operation failed. In the latter case, serrno is set appropriately.
ERRORS¶
- EPERM
- times is not NULL and the caller effective user ID does not match the owner ID of the file and the caller does not have ADMIN privilege in the Cupv database.
- ENOENT
- A component of path prefix does not exist or path is a null pathname.
- EACCES
- Search permission is denied on a component of the path prefix or the caller effective user ID does not match the owner ID of the file or write permission on the file itself is denied and times is NULL.
-. | https://manpages.debian.org/buster/libdpm-dev/dpns_utime.3 | CC-MAIN-2021-39 | refinedweb | 195 | 54.12 |
The string is immutable means that we cannot change the object itself, but we can change the reference to the object. The string is made final to not allow others to extend it and destroy its immutability.
public class StringImmutableDemo { public static void main(String[] args) { String st1 = "Tutorials"; String st2 = "Point"; System.out.println("The hascode of st1 = " + st1.hashCode()); System.out.println("The hascode of st2 = " + st2.hashCode()); st1 = st1 + st2; System.out.println("The Hashcode after st1 is changed : "+ st1.hashCode()); } }
The hascode of st1 = -594386763 The hascode of st2 = 77292912 The Hashcode after st1 is changed : 962735579 | https://www.tutorialspoint.com/why-string-class-is-immutable-or-final-in-java | CC-MAIN-2022-05 | refinedweb | 101 | 60.31 |
- TutorialDoctor
import ui,sqlite3 database = "database.db" con = sqlite3.connect(database) cursor = con.cursor() users_query = "select * from users" user_sections ="SELECT * FROM users INNER JOIN post on post.user_id = user.id" user_names = [x[1] for x in cursor.execute(users_query)] v = ui.load_view() table = v['table'] table.data_source.items = user_names v.present()
To add accessories:
user_names = [{"title":x[1],"accessory_type":"disclosure_indicator"} for x in cursor.execute(users_query)]
- TutorialDoctor
I completely missed this reply. Sounds like a good challenge to take on. I'll see how far I can get with it.
-!
- TutorialDoctor
I'm thinking about making an Sqlite workflow where you can create, read, update, delete info of an SQLITE database with actions.
I'm thinking about making the different functions as custom actions with fields for input.
I'd like some feedback on if anyone would like this and some ideas on how one might use this. This will help me design it better.
Thanks.
-Tutorial Doctor
- TutorialDoctor
Hello. I wrote something a long time ago that might help you understand programming a bit better:
and
- TutorialDoctor
I'm interested in using Pythonista to program Alexa skills. Currently I am playing around with a chatbot script I made a while ago and using the speech module to have the iPad talk to Alexa (really fun). I'm wondering if someone has already created a smart home module that can connect to various devices or not.
I did find a video of someone using Python to create skills.
I feel I am on to something. Anyone done anything with smart home devices? Perhaps HomeKit in Pythonista would be cool? A little program I am playing with below:
import speech def talk(x): speech.say(x,"en-au",.5) commands = {"key":"value", "end":"end", "brief":"flash briefing", "stop":"stop", "simon":"simon says: What is your favorite animal?", "1":"volume to 1", "2":"volume to 2", "3":"volume to 3", "4":"volume to 4", "5":"volume to 5", "6":"volume to 6", "7":"volume to 7", "8":"volume to 8", "9":"volume to 9", "10":"volume to 10", "bible":"play bible app", "?":"What can you do?", "weather":"What's the weather like?", "movies":"What movie's are playing", "joke":"Tell me a joke.", "inspire":"Inspire me." } def Main(): running=True while running: command=input('Type a command :') if command != "end" or "the": try: talk("Alexa, "+commands[command]) except: talk(command) if command=='end': return False if command=='...': talk(command) # FUNCTIONS #----------------------------- Main()
- TutorialDoctor
I'm getting back into creating Workflows for Editorial and realized I never posted the Bible app I made for Editorial to the forums. Instructions below.
Inside Editorial download the bible app workflow:
Then, download my GitHub Get workflow:
By default the GitHub Get workflow is set to download a Github repo that includes the needed
bible.dbdatabase. Run the Github Get workflow, find the database file inside of the new-found Online Downloads folder in Editorial, and move it to the Documents folder.
Now you can run the Bible 2 workflow!
-?
- TutorialDoctor
from sys import argv filename=input() target=open(filename,'w') #open it for writing, as opposed to reading. target.write('hello') target.close() | https://forum.omz-software.com/user/tutorialdoctor | CC-MAIN-2022-21 | refinedweb | 526 | 59.09 |
Today, I will finish my story to concurrency and lock-free programming in particular. There are four rules to lock-free programming in the C++ core guidelines left.
First of all, here are the rules for the current post.
I have to admit, I annoyed a few German readers with my two last posts about lock-free programming. My readers got the impression, that I don't like lock-free programming. Wrong!. I'm totally curious about lock-free programming but before you use it you have to answer two questions.
Before you can not answer this two questions with a big yes, you should continue with the rule CP.102
What does that mean: distrust your hardware/compiler combination. Let me put in in another way: When you break the sequential consistency, you will also break with high probability your intuition. Here is my example:
#include <atomic>
#include <iostream>
#include <thread>
std::atomic<int> x{0};
std::atomic<int> y{0};
void writing(){
x.store(2000); // (1)
y.store(11); // (2)
}
void reading(){
std::cout << y.load() << " "; // (3)
std::cout << x.load() << std::endl; // (4)
}
int main(){
std::thread thread1(writing);
std::thread thread2(reading);
thread1.join();
thread2.join();
}
I have a question for the short example? Which values für y and x are possible in the lines (3) and (4). x and y are atomic, therefore no data race is possible. I further don't specify the memory ordering, therefore, sequential consistency applies. Sequential consistency means:
If you combine this two properties of the sequential consistency, there is only one combination of x and y not possible: y == 11 and x == 0.
Now, let me break the sequential consistency and maybe your intuition. Here is the weakest of all memory orderings: the relaxed semantics.
#include <atomic>
#include <iostream>
#include <thread>
std::atomic<int> x{0};
std::atomic<int> y{0};
void writing(){
x.store(2000, std::memory_order_relaxed); // (1)
y.store(11, std::memory_order_relaxed); // (2)
}
void reading(){
std::cout << y.load(std::memory_order_relaxed) << " "; // (3)
std::cout << x.load(std::memory_order_relaxed) << std::endl; // (4)
}
int main(){
std::thread thread1(writing);
std::thread thread2(reading);
thread1.join();
thread2.join();
}
Two unintuitive phenomena can happen. First, thread2 can see the operations of thread1 in a different sequence. Second, thread1 can reorder its instruction because they are not performed on the same atomic. What does that mean for the possible values of x and y: y == 11 and x == 0 is a valid result. I want to be a little bit more specific. Which result is possible depends on your hardware.
For example, operation recording is quite conservative on x86 or AMD64, stores can be reordered after loads but on Alpha, IA64, or RISC (ARM) architectures, all four possible reordering of stores and loads operations are allowed.
If you don't believe me, I suggest you read the following rule CP.102.
There is not much to add to this rule. At least, I can provide links to the literature.
I know, I should not write about the singleton pattern but the double-checked locking pattern is infamous for initialising a singleton in a thread-safe way. Here we are:
std::mutex myMutex;
class MySingleton{
public:
static MySingleton& getInstance(){
std::lock_guard<std::mutex> myLock(myMutex); // (1)
if( !instance ) instance= new MySingleton();
return *instance;
}
private:
MySingleton();
~MySingleton();
MySingleton(const MySingleton&)= delete;
MySingleton& operator=(const MySingleton&)= delete;
static MySingleton* instance;
};
MySingleton::MySingleton()= default;
MySingleton::~MySingleton()= default;
MySingleton* MySingleton::instance= nullptr;
This implementation of the singleton pattern is thread-safe because each access to the instance is protected by a std::lock_guard (line (1)). The implementation is correct but way to expensive because each reading access of the singleton is guarded by a heavy-weight lock. Beside the initialisation of the singleton, no synchronisation is necessary. Here comes the double-checked locking pattern to our rescue.
static MySingleton& getInstance(){
if ( !instance ){ // (1)
lock_guard<mutex> myLock(myMutex); // (2)
if( !instance ) instance= new MySingleton(); // (3)
}
return *instance;
}
The getInstance method uses now an inexpensive pointer comparison in line (1) instead of an expensive lock. Only if the pointer is a nullptr, an expensive lock is used (line (2)). Because there is the possibility that another thread will initialise the singleton between the pointer comparison (line (1)) and the lock (line (2)), an additional pointer comparison in line (3) is necessary. So the name is obvious. Two times a check and one time a lock.
Smart? Yes! Thread safe? No!
What is the problem? The call instance= new MySingleton() in line (3) consists of at least three steps.
The problem is that there is no guarantee about the sequence of these three steps. For example, the processor can reorder the steps to the sequence 1,3 and 2. So, in the first step the memory will be allocated and in the second step, the instance refers to the singleton. If at that time another thread tries to access the singleton, it compares the pointer and assumes that the singleton is fully initialised.
The consequence is simple: the program has undefined behaviour.
I have already written a quite emotionally discussed post to the thread-safe singleton pattern. This included different implementations with std::lock_guard, std::call_once and std::once_flag, the Meyers singleton, and atomic versions that are based on the double-checked locking-pattern. You can read the details to these implementations and their different performance characteristics on Linux and Windows here: Thread-Safe Initialization of a Singleton.
As I promised I'm done with the rules to concurrency. The next post is about the rules for error handling in the C++ core guidelines.92
Yesterday 8573
Week 9466
Month 167897
All 5037211
Currently are 164 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
But out of curiousity, what is the fix.
Just use an atomic operation to load and store the pointer? | https://www.modernescpp.com/index.php/c-core-guidelines-the-remaining-rules-to-lock-free-programming | CC-MAIN-2020-50 | refinedweb | 976 | 57.67 |
I'm also suffering from the 'undefined local variable or method' problem
after having update my rails version to 2.0.
All worked fine before and now I get the following error:
undefined local variable or method `start_form_tag' in --->
<%= start_form_tag %>.
>
> Greetings,
>
> I'm also suffering from the 'undefined local variable or method'
> problem
> after having update my rails version to 2.0.
>
> All worked fine before and now I get the following error:
>
> undefined local variable or method `start_form_tag' in --->
>
> <%= start_form_tag %>
>
It was deprecated in 1.2 and removed in 2.0
Fred
>.
>
> --~--~---------~--~----~------------~-------~--~----~
> You received this message because you are subscribed to the Google
> Groups "Ruby on Rails: Talk" group.
> To post to this group, send email to [email protected]
> To unsubscribe from this group, send email to [email protected]
> For more options, visit this group at
-Corey
--
The Internet's Premiere source of information about Corey Haines
Just one follow up question. What's the closing tag. I've used:
<%= %>
which works, but I can't believe it's right.
form_tag('/posts')
# => <form action="/posts" method="post">
form_tag('/posts/1', :method => :put)
# => <form action="/posts/1" method="put">
form_tag('/upload', :multipart => true)
# => <form action="/upload" method="post" enctype="multipart/form-data">
<% form_tag '/posts' do -%>
<div><%= submit_tag 'Save' %></div>
<% end -%>
# => <form action="/posts" method="post"><div><input type="submit"
name="submit" value="Save" /></div></form>
So, if you are passing a block, it puts the end </form>
--
yes, I found the docs, but didn't know how to interpret them. I've just
got:
<%= form_tag %>
...... my html form complete with fields ......
<%= %>
which looks odd to me - but it seems to work.
<%= form_tag do %>
<% .. put other generating methods here %>
<% end %>
which will wrap the entirety in a <form></form>
Perhaps if you posted your code block.
-Corey
--
>
> Corey,
> Here's my whole form block:
> [code]
> <%= form_tag %>
You should do
<% form_tag do %>
...
<% end %>
yes, that makes sense
My interpretation:
"Let's make everyone change hundreds of lines of code in dozens of
projects, which all need to be tested now because someone thought there
was a better way to do it."
I hope that one person just decided to do this by fiat, because if a
committee of people thought this was a good idea, then some of the more
acerbic comments I've seen about the Rails gurus starts making a little
more sense.
I won't argue that it isn't better: I'm sure you get better 'forgot to
close the form' error detection, or form within a form warnings, or
something. But there is really no good reason for it to break every
application out there that uses the old method. Progress for the sake
of progress isn't.
>
> I would really like to know the motivation behind getting rid of
> start_form_tag and end_form_tag. It seems asinine to me.
>
Because bloating the api isn't in general a good thing ?
Fred
start_form_tag is still available as the non_block version of form_tag.
This is the only way to do it when the form straddles an erb block,
such as a cache block.
--
We develop, watch us RoR, in numbers too big to ignore.
Which is I think is a pretty lame reason in this instance. I would
think that any reasonable analysis would show that effort required to
maintain a 'bloated' api is far and away less than the effort required
fix and test all of the existing code that uses the old method. Leave
it deprecated, so people don't use it.
Or, make a script that goes in the script folder that automatically
updates the code seemlessly for all deprecated features. If you can't
do that safely, then don't drop support. Cause I'm sitting here looking
at 40 rails sites I need to update now, with 500+ start/end_form_tag's
that I've got to go through and update and test. And while freezing
rails may let me put off the pain for some of the sites, at some point
it will have to be done.
And while functional testing, etc. might help automate some of this, the
reality is there are never enough test cases to find everything.
form_tag is also a non-block form of form_tag. Just don't put a do on
the end and close it with </form>.
it seems very sophomoric to constantly be rethinking these tags.
You seem to have missed the point. No one here has indicated that the
new one isn't better: it is. That does not obviate the fact that the
old form was a widely used feature, and it has serious impacts on
backward compatibility. Case in point, I had mentioned 7 months ago in
my original posts that this would be a huge effort to upgrade/test, and
guess what, none of those projects have been upgraded and they all still
are running Rails 1.2.x. Only new development that doesn't rely on any
existing code has been coded with Rails 2, and none of that is in
production in my case. These kinds of things do have serious impacts on
adoption rates, satisfaction with the language and tools, etc.
Hey, actual help. Who knew.
Ruby adaptation is going to take a big hit for this.
This was a poor design decision. If the Ruby community is looking for
buy-in from the development community, they just moved one step away.
-Geoff
Imagine the frustration that would save! I think people would be a lot
less stressed out, pissed off and bitching on blogs/forums if they
hadn't spent hours trying to figure out weird error messages. They
would see the message, fix the code in mins with a global replace and
move on.
This ignores the deprecating issue itself (on purpose). Having worked
with dozens of languages with reasonable error messages over more years
than I care to admit, the Ror error messages continue to blow me away.
I'm sure there are some who will say 'read a book, understand the
architecture and ruby better'. But for the rest of us in the real
world, that doesn't cut it.
Having said all that.... the more cryptic the better, right? That way
we can charge $100,000 for being a good RoR programmer instead of
$30,000 (or less) which is what would happen if RoR was made easier and
I think better error messages would be the biggest change.
All imho of course - what do you think?
Keys to a good programmer - humbleness, humility and honesty.
>>
> A deprecation warning was added in 1.2.0 (
> rails/blob/v1.2.0/actionpack/lib/action_view/helpers/
> form_tag_helper.rb) and then that method was removed almost a year
> later. What more do you want?
>
> Fred
--
Posted via.
Yes (though I can't speak for Fred). And your answer makes many
incorrect assumptions and misses the point.
[...]
> I genuinely would welcome a good - business - justification as to why 12
> months is deemed an appropriate length of time.
Why wouldn't it be? We're not talking about a hosted service here; you
can continue to use old versions of Rails for as long as you want.
What's the business justification for keeping 3-year-old deprecations in
the API?
A deprecation says "the next time you upgrade, this feature might be
gone, so get rid of it now". If you can't handle that, then don't
upgrade.
> Why not 36 months?
What would the extra two years do, other than bloating the framework and
encouraging people not to take deprecation warnings seriously?
> Also
> why not better error messages generally?
That's a separate issue.
> I and many others need something that is around for longer than a year.
Then you are welcome to stick with an old version of Rails. No one is
forcing you to upgrade.
The nature of upgrades is to introduce changes. If you can't deal with
those changes, don't.
>.
> and I'm at a
> loss to understand why to remove something helpful?
Because a better, more Rubyish way was found to do the same thing.
>'."
Best,
--
Marnen Laibow-Koser
[email protected]
Sent from my iPhone
Also (specifically):
Marnen Laibow-Koser wrote:
>
> A deprecation says "the next time you upgrade, this feature might be
> gone, so get rid of it now". If you can't handle that, then don't
> upgrade.
>
I didn't upgrade, just sought an answer.
> What would the extra two years do, other than bloating the framework and
> encouraging people not to take deprecation warnings seriously?
New to rail this in 2009, so never used 1.2 and never saw deprecation
warnings.
2 extra years would give people time, let new books come out, let old
books age out, let new forum posts become the standard, let old posts
get deleted, etc. Basically i think 3 years would be much nicer to
people. I don't have any fixed idea on what time period is 'right', I
just don't get why "1 year" is deemed 'right'. If shorter is better,
how about 3 months? I think it all comes down to peoples opinion of
what time is 'reasonable'. In the end there is always gonna be a
distribution curve of time opinions there, from 'none' to 'forever',
right?
>
>> Also
>> why not better error messages generally?
>
> That's a separate issue.
>
I think it's the biggest one.
>> I and many others need something that is around for longer than a year.
>
> Then you are welcome to stick with an old version of Rails. No one is
> forcing you to upgrade.
>
old versions missing much functionality - business reasons, hosting,
functionality and love of change certainly do pretty much force upgrades
- plus I can't simultaneously use multiple versions or my brain explodes
;)
> The nature of upgrades is to introduce changes. If you can't deal with
> those changes, don't.
>
i can, but i can disagree with how they are introduced based on
experience right? that's ok right?
>>.
>
I do know that. But I can't keep developing in old and multiple versions
and know what worx in what and stay sane :) Because I do love change and
need the newer versions I... need the newer versions.
>
>> and I'm at a
>> loss to understand why to remove something helpful?
>
> Because a better, more Rubyish way was found to do the same thing.
>
Sure and that's great. I'm really more concerned about the error
messages removal, not the tag removal.
>>'."
>
or... "Backward compatibility can mean saying 'oops, we goofed', here's
a new and better way to do it, but don't worry, your existing code base
and reference material will still be ok under the new version."
>
But again philosophy and see wc3 & browser standards above.
When a tag is really removed, if the 'remover' could remove all the main
threads in Rails Forums, etc., much as that might take a big effort, now
that would really stand out as a big help.
> It was deprecated in 1.2 and removed in 2.0
>
> Fred
On the subject of error messages, with deprecations specifically in
mind, I suggest the following approach at least be considered.
Establish a deprecated_methods_index library at the top level of Rails
so that, when someone finally decides to upgrade their Rails-1.0.6 app
to Rails-whatever, instead of getting:
"undefined local variable or method `whatever'"
(which is generated by ruby itself and not by Rails)
one obtains the output from something like this:
def whatever(*parms)
puts("whatever method is deprecated and was removed in rails-x.y.z.")
puts("Use other_one method instead.")
end
If these are all kept in one place surely it would not be too difficult
to maintain? Are there any downsides to this approach, other than having
to write four lines of code for every deprecated method?
VERY much appreciated.
Michael. | https://groups.google.com/g/rubyonrails-talk/c/o4VJpNxNP-s?pli=1 | CC-MAIN-2022-27 | refinedweb | 1,991 | 73.58 |
MCITP Exam Cram: Managing and Maintaining Systems That Run Windows Vista
Terms you'll need to understand:
- ✓ Active Directory (AD)
- ✓ Active Directory Users and Computers (ADUC)
- ✓ Local Computer Policy (LCP)
- ✓ Group Policy Object (GPO)
- ✓ AD Site
- ✓ AD Domain
- ✓ Organizational Unit (OU)
- ✓ L-S-D-OU-OU-OU
- ✓ Block Inheritance
- ✓ No Override/Enforced
- ✓ Group Policy Management Console (GPMC)
- ✓ Task Scheduler
- ✓ Event Viewer
- ✓ Event Subscriptions
- ✓ Windows Remote Management Service (WinRM)
- ✓ Windows Event Collector Utility (wecutil.exe)
- ✓ Reliability and Performance Monitor
- ✓ Data Collector Set
Techniques you'll need to master:
- ✓ Install and use the Group Policy Management Console
- ✓ Create, deploy, and troubleshoot Group Policy Objects (GPOs)
- ✓ Understand GPO processing
- ✓ Implement a Loopback GPO
- ✓ Implement an audit policy
- ✓ Implement a software deployment GPO
- ✓ Implement Device Restrictions by GPO
- ✓ Implement Software Restrictions by GPO
- ✓ Perform Resultant Set of Policies/Planning and Logging
- ✓ Schedule tasks with different triggers
- ✓ Understand Event Viewer
- ✓ Configure Event Forwarding from multiple Source computers to one Collector computer
- ✓ Configure Data Collector Sets in Performance Monitor
The tools that you must be familiar with and use in the management of Windows Vista computers in the enterprise are
- Active Directory Users and Computers (ADUC)
- Group Policy Management Console (GPMC)
- Group Policy Objects (GPO)
- Task Scheduler
- Event Viewer
- Reliability and Performance Monitor
As an enterprise support technician, you are responsible for management and maintenance of computers that run Windows Vista in the Enterprise. Your "heavy guns" in this administrative task are the Group Policy Objects (GPOs). You need to be fluent with their settings, the way they get processed, and the implementation and troubleshooting of GPOs in your enterprise environment.
The exam tests your knowledge of what settings are available, where to link the GPO, how to have the GPO apply to only selected computers or users, and how to troubleshoot them when you aren't getting what you expected from the GPOs.
Another tool that you use is the Task Scheduler. This tool launches tasks at a later time, possibly on a regular schedule. It has changed significantly since earlier versions of the Windows operating system.
There is impressive new capability in the Event Viewer. You probably won't even recognize it from earlier versions. It has a powerful, customizable filter that allows you to capture events of just about any nature you can imagine. In addition to this capability, you can now aggregate events from remote computers onto a single monitoring system, through the use of Event Forwarding to an Event Collector and subscription services.
Finally, you look at the new and improved Reliability and Performance Monitor, where you configure counters to view and log performance parameters on the local and on remote computers. This new tool includes a collection of objects and counters to monitor a large number of system resources.
You can set many configuration parameters in the Local Computer Policy (LCP) on each client computer that runs Windows Vista. In the corporate enterprise, remember that these numerous settings can, and typically should, be centrally managed and deployed by GPOs within the Active Directory structure.
Some exam questions address a standalone Windows Vista computer, whereas others address the Windows Vista computer within an Active Directory environment. You need to know what security settings are available and how these controls affect the behavior of the Windows Vista computer in both cases. So put on your seatbelts and read on carefully.
Group Policy Object Overview
Policies are the way that computers are managed, either standalone computers or computers in the enterprise. Policies establish the vast majority of the configuration settings that control how the computer boots up and then how your desktop environment is constructed when you log on.
The Standalone Computer
Each computer has a Local Computer Policy, or LCP (also referred to as the Local GPO or LGPO), that is made up of many configuration settings on the various configuration dialog boxes throughout the user interface, as well as numerous settings that are configurable only in a Microsoft Management Console (MMC) called the Local Computer Policy. This policy is stored in the Registry on the computer's hard drive and is applied every time the computer is booted up. This computer configuration from the Local Computer Policy gets read into random access memory (RAM) on the computer. Think of this RAM copy of the Registry as the live, awake brain of the computer when it is booted up. This RAM copy of computer settings from the Registry is in place when you are presented with the Windows Graphical Identification aNd Authentication (GINA) dialog box.
Further configuration for the desktop environment is controlled by configuration parameters stored within your user profile in a file called NTUSER.DAT. NTUSER.DAT gets read into RAM from your profile folder when you successfully log on to the computer. As you make changes to your desktop environment, like the desktop wallpaper or items on the Start menu, these changes get recorded in the RAM copy of NTUSER.DAT. When you log off, by default, the operating system saves these changes into your profile. This file is the primary source of the configuration parameters that define your desktop environment.
The first time you log on to a computer, the operating system copies a read-only and hidden folder under C:\Users called \Default to a new folder under C:\Users and renames the new folder with your logon name. Within that folder is the file named NTUSER.DAT. This becomes your user profile on this specific computer. After that first logon on a given computer, now that you have an existing profile, this existing copy of NTUSER.DAT is the one that gets read into RAM for your user profile.
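The copy-the-Default-profile-on-first-logon behavior described above can be sketched as a simple copy-if-missing operation. This is an illustrative Python sketch, not the actual operating-system code; it uses a throwaway temporary folder in place of C:\Users, and the username is invented:

```python
import os
import shutil
import tempfile

def ensure_profile(users_root, username):
    """Return the user's NTUSER.DAT path, cloning the Default profile
    into a new per-user folder on first logon (illustrative sketch)."""
    profile = os.path.join(users_root, username)
    if not os.path.isdir(profile):
        # First logon on this machine: copy \Default to a folder
        # named after the logon name.
        shutil.copytree(os.path.join(users_root, "Default"), profile)
    return os.path.join(profile, "NTUSER.DAT")

# Demonstrate in a temporary sandbox standing in for C:\Users.
users = tempfile.mkdtemp()
os.makedirs(os.path.join(users, "Default"))
open(os.path.join(users, "Default", "NTUSER.DAT"), "w").close()

dat = ensure_profile(users, "astrid")
print(os.path.exists(dat))  # -> True
```

A second call with the same username finds the existing profile folder and returns the same path, mirroring the "after that first logon, the existing copy of NTUSER.DAT is the one used" behavior.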
To summarize, two components define a desktop environment on a standalone computer (not participating within an Active Directory environment): the configuration parameters in the Local Computer Policy and the configuration parameters in your user profile. They get applied in that order.
The LCP can be accessed on a Windows Vista computer by building it into a new MMC.
Building a Local Computer Policy (LCP)
To build the Local Computer Policy (LCP) MMC, follow these steps:
- Click Start > Run, type MMC, and click OK. (You can also use Start > Start Search > MMC and then press Enter.)
- From the menu, select File > Add / Remove Snap-in.
- Select Group Policy Object snap-in and click Add.
- Accept the Group Policy Object for the Local Computer by clicking Finish.
- Click OK.
- From the menu, select File > Save As.
- Type LCP.msc and save the MMC either on the desktop or in Administrative Tools.
The Domain Member Computer
Back in the old days of the Windows NT domain and Windows 95 clients, Microsoft used something called System Policies, built using a tool called the System Policy Editor, to manage and configure these down-level computers. These System Policies would "tattoo" the Registry of the local box, actually writing settings to the Registry files on the local hard drive. If you wanted to remove policy settings from the computers, you had to write a new System Policy that would actually reverse the settings from the policy that was being removed.
When Windows 2000 was released, Microsoft implemented a whole new generation of policies and completely overhauled how they were applied on computers. These policies were improved yet again with the release of Windows XP, Windows Server 2003, and now again with Windows Vista. These new policies are called Group Policy Objects, or GPOs, and they exist in the Active Directory in an enterprise environment. These policies get applied to the computer over the top of the Local Computer Policy and your user profile settings to provide enterprise administrative dominance over the local configuration settings.
These new policies do not affect the configuration files on the hard drive (for the most part), so they do not "tattoo" the computer. Rather, as these new policies get applied, they modify the copy of the Registry (computer) and the profile (user) that has been read into RAM on the computer during the initial bootup and then the user logon for the current session. These modifications to settings do not get written back to the hard drive copies of the configuration files. Remember that this RAM copy is the actual functional copy that is being used to control and configure the user's current session.
L-S-D-OU-OU-OU
Active Directory (AD) is a database and a collection of directory services that support the database and the network operating system. AD is created by configuring one or more domain controllers on a network. AD utilizes four types of containers to store and organize AD objects, like computers and users:
- Forests
- Sites
- Domains
- Organizational Units
You can apply GPOs to sites, domains, and Organizational Units.
AD Forest
The AD forest is one or more AD domains that share a common schema. The schema is the structure of the AD database—not the data within the database, just the structure. The forest is created when you run DCPromo on a server to install your first domain controller in the first domain in the forest. This first domain is referred to as the forest root domain. The name of this forest root domain also is the name of the forest. All domains within the forest are trusted by and trusting of all other domains within the forest. Therefore, since members of your forest are, by default, all trusted and trusting, a lack of trust with some new domain indicates the need to generate a second forest, or create the new, untrusted domain in a different, existing forest. Forests are logical containers and have no real connection to any physical location, other than you must place your domain controllers somewhere. GPOs cannot be linked to the forest.
AD Sites
AD sites are created in AD once the forest is established and are defined as a collection of well-connected segments, where the bandwidth is at least local area network (LAN) speed. LAN speed is currently considered to be 10Mbps or greater. Any network link between segments that drops below LAN speed is defined as a boundary of the site and indicates the need for the creation of an additional site. Because sites are defined by physical connectivity, they are considered to be physical containers, with one site per location that is connected to AD by slower links. There are two major benefits to defining sites:
- Client computers within a site are preferentially directed to local (within the same site) resources.
- AD replication within the site happens without much regard for bandwidth consumption (because all segments are well connected at high bandwidth LAN speeds), but AD replication between sites, over slower wide area network (WAN) links, can be carefully controlled so as to avoid saturation of these lower bandwidth links. GPOs can be linked to sites.
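Site membership is determined by matching the client's current IP address against the IP subnets associated with each site. A rough Python sketch of that lookup follows; the subnets and site names here are hypothetical (real mappings are defined in Active Directory Sites and Services, not in code):

```python
import ipaddress

# Hypothetical subnet-to-site associations, invented for illustration.
SITE_SUBNETS = {
    "10.1.0.0/16": "Headquarters",
    "10.2.0.0/16": "BranchOffice",
}

def find_site(client_ip):
    """Return the site whose subnet contains client_ip, or None."""
    addr = ipaddress.ip_address(client_ip)
    for subnet, site in SITE_SUBNETS.items():
        if addr in ipaddress.ip_network(subnet):
            return site
    return None

print(find_site("10.2.34.7"))  # -> BranchOffice
```

A client whose address falls outside every defined subnet belongs to no site, which is why subnet definitions should cover all well-connected segments.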
AD Domains
AD domains are logical containers that are created within an AD forest. Domains (and AD) are created, and exist, on domain controllers. Domains in AD are security boundaries. In Windows Server 2003, they are defined by their unique namespace, like mobeer.com, buymeabeer.us, or boboville.com, as well as their single-password policy per domain. If you need a different namespace, you need another AD domain. If you need a different password policy for users, you need another AD domain. Domains are logical containers and can exist in multiple sites if placed in one or more domain controllers in more than one site. GPOs can be linked to the domain.
Organizational Units (OUs)
Organizational Units (OUs) are logical containers that are created within an AD domain. They are designed to be used to organize computers and users for two purposes: to delegate administrative authority of groups of computers and users to different administrators, and to provide grouping of computers and users for the assignment of different Group Policy Objects (GPOs). OUs can be nested within another parent OU, so they create a hierarchical structure, like the one shown in Figure 3.1. GPOs can be linked to Organizational Units.
Figure 3.1 The hierarchical structure of OUs in an Active Directory domain.
The OU is represented in AD Tools by a folder with a book icon on it. A folder without a book icon on it is not an OU but is an AD container that cannot have GPOs linked to it. By default, AD provides only one OU called the Domain Controllers OU so that security-related GPOs can be applied to this most sensitive class of servers. Administrators must create all other OUs.
Policies are applied in the order L-S-D-OU-OU-OU: that is, the Local policy, then Site policies, then Domain policies, and finally OU policies, starting with the top-level OU, followed by its child OU, then that OU's child, and so on.
Policies have two halves:
- A computer half, called the Computer Configuration
- A user half, called the User Configuration (see Figure 3.2)
Figure 3.2 Group Policy Objects can be applied to computers and users.
Applying GPOs to a Computer and User in an AD Environment
GPOs are applied to a computer and user in an AD environment as follows.
The computer is turned on. All the Local settings are read from the files on the local hard drive that make up the Registry and the Local Computer Policy (LCP) and are placed in RAM. Again, think of this RAM copy of the Registry as the live, awake brain for this session on the computer. This is the "L" part of the computer boot-up process.
Because the computer is a member of Active Directory, it contacts a domain controller for its domain and authenticates its computer account with AD. It then compares its IP address to IP subnets configured in AD sites to identify which site the computer is currently in. The computer then downloads and reads all GPOs for the site that it is currently in and applies only the computer half of those GPOs to the RAM copy of the Registry on the computer. (At this point in the bootup process, it cannot apply the user portion because there is no way to know what user will eventually be logging on.) If any Site level settings conflict with any Local settings, the Site level settings override the Local settings. This is the "S" part of the computer bootup process.
The computer then downloads and reads all GPOs for the domain that it is a member of and applies only the computer half of those GPOs to the RAM copy of the Registry on the computer. By default, if any Domain level settings conflict with any Local or Site level settings, the Domain level settings override the Local and Site level settings. This is the "D" part of the computer bootup process.
The computer then downloads and reads all GPOs for the top-level OU that its computer object resides in and applies only the computer computer bootup process.
The computer repeats this process for each level OU that it may reside within. If the computer object for the computer resides in the top-level OU, these are the only OU GPOs to be processed. If the computer object for the computer resides in the third-level OU, the top-level OU GPOs are processed, then the second-level OU GPOs are processed, and finally the third-level OU GPOs are processed. By default, the last GPO that gets applied overrides all conflicts with previously applied GPOs.
Again, these GPO policies get applied to the computer over the top of the Local Computer Policy settings to provide enterprise (AD) administrative dominance over the local configuration settings.
When all appropriate OU GPOs are processed, the Windows GINA dialog box is presented, and finally you are allowed to attempt to log on.
You are prompted to press and hold the Ctrl+Alt keys and then press the Del key to initialize the logon process, as shown in Figure 3.3. You then provide your identity information, your username and password, and click Enter.
Figure 3.3 You provide your identity information, your username and password, and then click Enter.
When your identity information is accepted as valid by a domain controller, you are authenticated, and the L-S-D-OU-OU process begins all over again. Only this time it uses your user profile (L) and the user half of the S, D, and OU GPOs, as follows.
The user profile settings are read from the files on the local hard drive and are placed in RAM. This is the "L" part of the user logon process.
The computer again compares its IP address to IP subnets configured in AD sites to identify which site the computer is currently in. The computer then downloads and reads all GPOs for the site that it is currently in and applies only the user half of those GPOs to the RAM copy of the Registry on the computer. If any Site level settings conflict with any Local settings, the Site level settings override the Local settings. This is the "S" part of the user logon process.
The user object can be located in a different OU and even a different domain than the computer object, but because you are logging on to the computer, you must be in the same physical location as the computer and are subject to the computer's Site level GPOs.
The computer then contacts a domain controller for the domain that you are a member of and downloads and reads all GPOs for your domain. The computer applies only the user half of those GPOs to the RAM copy of the Registry on the computer. By default, if any Domain level settings conflict with any Local or Site level settings, the Domain level settings override the Local and Site level settings. This is the "D" part of the user logon process.
The computer then downloads and reads all GPOs for the top-level OU that the user account object resides in and applies only the user user logon process.
The computer repeats this process for each level OU that the user account object may reside within. If the user account object resides in the top-level OU, these are the only OU GPOs to be processed. If the user account object for the computer resides in the third-level OU, then the top-level OU GPOs are processed, followed by the second-level OU GPOs, and finally the third-level OU GPOs are processed. By default, the last GPO that gets applied overrides all conflicts with previously applied GPOs.
Once again, these policies get applied to the RAM copy of the Registry on the computer over the top of the User Profile settings to provide enterprise (AD) administrative dominance over the local configuration settings.
Now you (finally) get your desktop and can begin working.
And If That Isn't Enough: Enforced, Block Inheritance, and Slow Link Detection
With all the different GPOs that can be applied to a computer and user, some settings in the different GPOs are bound to conflict. Suppose at the site level, a GPO sets the desktop wallpaper for all computers in the site to the company logo wallpaper. And then some domain administrator sets a GPO at the domain level so that the desktop wallpaper for all domain computers is a picture of the domain's softball team. By default, if any settings in the numerous GPOs conflict, the last GPO that gets applied wins the conflict.
This sounds like the lowliest administrator in charge of two or three computers and a few users in an OU can overrule the highest level enterprise administrator in charge of hundreds or thousands of computers and users. If left to the defaults, this is true. However, there is a setting called Enforced on each GPO. If this setting is enabled (it is not enabled by default), it locks every setting that is configured in the GPO, and no GPO that follows can override these locked settings. So with the Enforced setting enabled on GPOs, the first Enforced GPO that gets applied wins all conflicts. This is a top-down mechanism.
Another configurable setting regarding GPO processing is a bottom-up mechanism. If an administrator at a domain or some OU level does not want any previously applied, non-Enforced GPOs to affect his computers and users, he can enable a setting called Block Inheritance on the domain or the OU. This setting turns off processing of all GPOs from higher-level containers that are not Enforced. Remember, though, that a GPO with the Enforced setting enabled blows right past the Block Inheritance setting and is still processed by all computers and users in all child containers, even if the Block Inheritance setting is enabled.
One more parameter that changes the way GPOs are processed has to do with the bandwidth connecting the client computer to the domain controllers. Because some GPOs trigger a large amount of network traffic—a software deployment and folder redirection GPOs, for example—an evaluation of the bandwidth of the link to AD is performed before processing any GPOs. This is referred to as Slow Link Detection. If the link speed is below 500Kbps, the default data rate for a slow link, software GPOs do not deploy software, and folder redirection GPOs do not relocate folders. If a computer cannot identify the bandwidth of the link to AD, it assumes that it is using a slow link and may not process all appropriate GPOs, like the software deployment GPOs.
GPO Refresh, Loopback GPO Processing, and Turning Off the "L"
A few settings within the GPO also can affect the way this GPO processing happens. The first one is called the GPO Refresh. GPOs are applied to the computer during its bootup and then to the user during logon. They also get reapplied on a regular interval to ensure that new GPOs take effect quickly.
By default, GPOs refresh on member servers, member client computers, and domain users every 90 minutes, plus a random offset of 0 to 30 minutes (90 to 120 minutes). GPOs refresh on domain controllers every 5 minutes and have no random offset. These default refresh intervals can be adjusted within the GPO to affect all future refresh intervals. You can make this adjustment under User Configuration > Administrative Templates > System > Group Policy for the user refresh, and under Computer Configuration > Administrative Templates > System > Group Policy for domain member servers, domain member client computers, and domain controllers, as shown in Figure 3.4.
Figure 3.4 Determining the Group Policy refresh interval settings.
Another tool within a GPO that affects the way GPOs get processed is called Loopback, and it has two modes: Merge and Replace.
With Loopback Merge mode enabled, after the GPO processing described earlier (L-S-D-OU-OU-OU for the computer and then L-S-D-OU-OU-OU again for the user) completes, Loopback Merge mode kicks in and reapplies the computer settings, just in case any user settings conflict with any computer settings. Remember, the last GPO that applies wins conflicts, by default. User GPOs apply after computer GPOs by default. Loopback reapplies the computer settings to win any conflicts with user settings.
With Loopback Replace mode enabled, after the GPO processing described earlier completes, Loopback Replace mode kicks in and reapplies the computer settings, just in case any user settings conflict with any computer settings. Then Loopback Replace mode throws away every user GPO setting that has been applied, and it processes the user half of all GPOs (S-D-OU-OU-OU) that apply to the computer's position in AD, not the user's position in AD. The Loopback processing GPO is shown in Figure 3.5.
Figure 3.5 Using Loopback Merge mode or Loopback Replace mode to minimize or eliminate user GPO settings.
Another GPO setting that affects GPO processing is used to turn off Local Group Policy objects processing. You can access this setting under Computer Configuration\Administrative Templates\System\Group Policy in the Group Policy Management Console running on a Windows Vista computer, as shown in Figure 3.6.
Figure 3.6 Disabling the processing of the Local Computer Policy.
You can access the Group Policy Management Console (GPMC), as shown in Figure 3.7, on a Windows Vista computer by building a new MMC.
Figure 3.7 Accessing the Group Policy Management Console.
Building the Group Policy Management Console (GPMC) MMC
To build the Group Policy Management Console (GPMC) MMC, follow these steps:
- Click Start > Run, type MMC, and click OK. (You can use Start > Start Search > MMC > and click Enter.)
- From the menu, select File > Add / Remove Snap-in.
- Select Group Policy Management snap-in and click Add.
- Click OK.
- From the menu, select File > Save As.
- Type the name GPMC.msc and save the MMC either on the desktop or in Administrative Tools.
To create a new GPO in the GPMC tool, follow these steps:
- Expand Forest, Domains, and your domain name.
- Right-click the folder Group Policy Objects and select New.
- Give your new GPO a descriptive name so that you know what is configured in the GPO.
To edit the new GPO, right-click the new GPO in the Group Policy Objects folder and select Edit. This opens the GPO in the Group Policy Object Editor (GPOE).
To link a GPO to a site, domain, or OU in the GPMC tool, follow these steps:
- Expand the appropriate folder to be able to view the target container.
- Click the desired GPO and drag it to the target container and release. This creates a link between the GPO and the container. | https://www.informit.com/articles/article.aspx?p=1216792&seqNum=4 | CC-MAIN-2020-16 | refinedweb | 4,303 | 50.36 |
CodePlexProject Hosting for Open Source Software
Hello -
SDS is installed on my development machine, current as of about 10 days ago.
I am trying to make available a web service that uses the SDS to access some NetCDF data. I wrote a class to handle everything I want to do, and can successfully use that class in a console application. When I try to access the same data with
the same class but from a web service, I get the dreaded "extension .nc not registered in the factory" error when I try to open the netCDF file.
I have tried several ways to use the DataSetFactory static class to register the .NC extension (despite the fact that I should not have to, right?). The curious thing is that I cannot find the netcdf4.dll anywhere on the machine. If it is required,
but is missing, then I would expect this behavior, but how do you explain the fact that the console application works!?!?!
Any help appreciated.
Thanks!
-Malcolm Ross
All-
So I am working hard to try to resolve this by myself, but so far, no luck. I found the netCDF4.DLL in the source code repository (but not in the runtime installer - why?), but don't know what to do with it to make it work within the web service.
I think it has something to do with changes that need to be made to the web.config file associated with the service, but I can't figure out what those changes are.
Please help!
Thanks
-Malcolm
Hi Malcolm,
This the bug that Web Service does not automatically registers providers and it will be fixed. But in the current version it is still possible to register the provider manually. For this, you should add following lines into your code before first use of
SDS:
DataSetFactory.RegisterAssembly(
Assembly.Load("Microsoft.Research.Science.Data.NetCDF4, Version=1.2.6754.0, Culture=neutral, PublicKeyToken=e550de0161496f12"));
You also need usings:
using Microsoft.Research.Science.Data.Factory;
using System.Reflection;
The code above loads the provider assembly from GAC and registers it in the DataSet factory.
An answer to your question about where NetCDF4.dll is located: it is in GAC (c:\windows\assembly), though Windows Explorer does not show it. To see and, probably, copy it you may use another explorer application, for example, Total Commander, which shows
the assembly folder as any other.
Regards,
Dmitry.
Thanks, Ditmitry
That did the trick, except I had to use a different PublicKeyToken.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://sds.codeplex.com/discussions/250576 | CC-MAIN-2017-43 | refinedweb | 453 | 65.93 |
Want to see the full-length video right now for free?Sign In with GitHub for Free Access
In this episode, Chris is joined by thoughtbot CTO Joe Ferris.
With the magic of live-coding, Joe demonstrates how a simple Extract Class refactoring can lead to a host of code improvements.
You can see the full changes in this commit commit to the Namely Connect repo.
Joe decides to extract a class when he notices that the
Normalizer class
refers to a
payload object repeatedly (and passes it among nearly all its
methods).
A common reaction to detecting this code smell is to extract a class to hold (and operate on) this data. This provides some noise reduction (since you no longer have to pass the data around) but also readability improvements and lower coupling.
Joe begins by creating a private class inside the
Normalizer class and moving
appropriate methods inside it. This is a good first step that can later be
promoted to a public class if desired.
Joe gets bitten by a classic Ruby gotcha: referencing an instance variable that
doesn't exist returns
nil. You can hear Joe discuss why returning
nil is
developer-hostile in another Weekly Iteration episode, Nil is Unfriendly.
After a series of small changes and fixes, Joe's test are green again.
Take note of how Joe works in small steps and constantly re-runs his tests. A good TDD workflow allows you to run tests quickly from your editor. For more on that, see Speedy Tests.
So far, the improvement from the extracted class is minimal. However, the
newly-extracted
Payload class makes it easy to make a few additional
improvements.
First, Joe moves the
payload data into instance state. This allows Joe to
remove the argument from many methods inside the new class, which provides the
promised noise-reduction and lower coupling.
While refactoring, it's a great idea to make small commits along the way. It's easy to do and can be extremely useful for restoring your code to a known-good state.
Payload's instance data allows Joe to delete a whole host of parameters.
However, when attempting to remove it from the
custom_fields method, Joe
discovers that the parameter is poorly-named. While called
payload, it
actually can be one of several objects. Rather than living with this duplicity,
Joe renames the parameter on the spot. This is a nice example of constantly
tidying a codebase.
Joe continues to make small improvements in his new class: removing parameters, renaming methods, and a textbook Extract Temp to Query refactoring.
Before undertaking a refactoring, it can be hard to predict how much better the code will become. However, once the inital refactoring is complete the opportunity for additional cleanup often becomes apparent. Because of this phenomenon, it's a good idea to make any positive changes you can clearly see, since it may make several others obvious afterwards. | https://thoughtbot.com/upcase/videos/live-refactoring-with-joe | CC-MAIN-2022-21 | refinedweb | 490 | 63.9 |
Python: script to make multiple bash scripts
I have a file, called list.txt, which is a list of names of files:
input1.txt input2.txt input3.txt
I want to build a python script that will make one file for each of these filenames. More precisely, I need it to print some text, incorporate the name of the file and save it to a unique .sh file. My script is as follows:
import os os.chdir("/Users/user/Desktop/Folder") with open('list2.txt','r') as f: lines = f.read().split(' ') for l in lines: print "#!" print "cd /scratch/DBC/user\n" print 'grep "input"',l+" > result."+l+".txt" with open('script{}.sh'.format(l), 'w') as output: output.write(l)
I have a few issues:
- The output files just contain the name of the file - not the content I have printed.
To be clear my output files (I should have 3) should look like this:
#!/bin/bash #BSUB -J input3.sh #BSUB -o /scratch/DBC/user/input1.sh.out #BSUB -e /scratch/DBC/user/input3.sh.err #BSUB -n 1 #BSUB -q normal #BSUB -P DBCDOBZAK #BSUB -W 168:00 cd /scratch/DBC/user grep "input" input3 > result.input3.txt
I am very new to Python so I am assuming there could be much simpler ways of printing and inserting my text. Any help would be much appreciated.
UPDATE I have now made the following script, which nearly works.
import os os.chdir("/Users/user/Desktop/Folder") with open('list.txt','r') as f: lines = f.read().split('\n') for l in lines: header = "#!/bin/bash \n#BSUB -J %s.sh \n#BSUB -o /scratch/DBC/user/%s.sh.out \n#BSUB -e /scratch/DBC/user/%s.sh.err \n#BSUB -n 1 \n#BSUB -q normal \n#BSUB -P DBCDOBZAK \n#BSUB -W 168:00\n"%(l,l,l) script = "cd /scratch/DBC/user\n" script2 = 'grep "input" %s > result.%s.txt\n'%(l,l) all= "\n".join([header,script,script2]) with open('script_{}.sh'.format(l), 'w') as output: output.write(all)
The problem I still have is that this creates 4 scripts, not 3 as I expected: script_input1.sh, script_input2.sh, script_input3.sh and script_sh. This last one, script_sh just has the printed text but nothing where the "input" text would be. I think this is because my list.txt file has a "\n" at the end of it? However, I looked and there really isn't. Is there a way around this? Maybe I can use some kind of length function?
1 answer
- answered 2017-10-11 10:21 FrankBr
So, answering in order:
1) Can you detail this issue? You count 4 txt files bu you got just 3 different scripts generated by your code?
2) Sure, you need to create a var, not just using the print statement 3) Just change permissions
So, to summurize, I'd use this approach:
import os for i, file in enumerate(os.listdir("/Users/user/Desktop/Folder")): if "input" in file: with open(file) as f: lines = f.readlines() for l in lines: result."+l+".txt" with open('script%s.sh'%i, 'w') as output: output.write(data) os.chmod("script%s.sh'%i", 700)
By the way, my code it's just a guess. I think you should be more explicit when stating what's your issue. I didn't understood what you wanna achieve. | http://codegur.com/46685755/python-script-to-make-multiple-bash-scripts | CC-MAIN-2018-09 | refinedweb | 570 | 79.26 |
On Dec 22, 2004, at 10:23 AM, Pasha Bizhan wrote:
> Hi,
>
>> From: Erik Hatcher [mailto:[email protected]]
>>
>> Any additional comments or objections to this proposal? If I
>> don't hear otherwise, I'll send this to the Jakarta PMC by
>> the end of the day today.
>
> I think that "search.apache.org' is not good name for this project and
> Lucene is better.
> Will the second edition of your book be titled "Search.Apache.Org in
> Action"?-)
> Also are your going to rename all namespaces from org.apache.lucene.*
> to
> search.org.apache.*?
> Etc..
Lucene is certainly an major player in the initial creation of
search.apache.org, however Nutch and other search-related applications,
tools, etc would be considered for inclusion there too.
Look at the other new top-level projects - db.apache.org,
portals.apache.org, and logging.apache.org. Same idea applies there.
No, we would not change the package names of Lucene. Lucene will still
be a sub-project within a top-level project, just as it is under
Jakarta now. Jakarta is server-side Java-centric and Lucene has been
ported to numerous languages (as you well know!). Perhaps you'd be
interested in ASL'ing your implementation and contributing it to the
new search.apache.org project :))
Erik
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected] | http://mail-archives.apache.org/mod_mbox/lucene-dev/200412.mbox/%[email protected]%3E | CC-MAIN-2016-30 | refinedweb | 237 | 61.12 |
>build at the end of next weekend. Objections? Big bugs that we should
>fix first?....
Also, I'd like to ask about rearranging the library directory a
bit. I've been tempted to do this, but being new to SDCC it
seems better to ask before making such a change. Today we have:
share/sdcc/lib/<port>
except in the case of mcs51, where it's either "small" or "large".
I'd like to change it to:
share/sdcc/lib/<port>/<options>
The "options" part would be a string returned from a function
in each port that looks at what options are in use (or if that
function pointer is NULL then no extra subdir is used). For
mcs51, this function would return "small" or "large" in the
normal cases, and things like "large-xstack" or "small-reentr"
if those options are in use. In the long term, this ought to
allow all ports to add special features without the situation
where the user gets incompatible library code linked into their
application without any errors (of course, they shoulda RTFM :)
These extra libraries would only be built if configure is passed
something like --enable-device-options. And the finishing touch
would be some verbose message in the linker when it can't find
a library, and maybe even a special error message when putchar()
is the undefined symbol!
I'm sorry to leave this to the last minute. If this seems like
an ok idea and it's not too late, just gimme the green light
and I'll put it together in the next 24 hours.
Thanks,
Paul
Bugs item #458177, was opened at 2001-09-03 15:08
You can respond by visiting:
Category: C-Front End
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Gavin Hurlbut (gjhurlbu)
Assigned to: Nobody/Anonymous (nobody)
Summary: Problem with .globl generation in v2.2.1
Initial Comment:
I previously wrote some code, and compiled with SDCC
(over a year ago, I'm not sure what version as that
computer died), and since installing v2.2.1 (under
RH7.0), I started getting compile errors on previously
working code.
The symptoms seemed to be:
- if I declare variables in a .c file meaning for them
to be global in scope (i.e. no 'static') but don't
access them in that .c file, the .globl was never
inserted into the .asm output.
- hence other .c files which 'extern' that variable
can't link properly as the symbol doesn't exist.
- the data class is xdata.
I worked around this by creating a local function that
will never be called that takes a bunch of said globals
and sums them into a local variable to force them to be
used.
I will attempt to attach the non-working and workaround
files if that will help.
----------------------------------------------------------------------
You can respond by visiting:
Bugs item #458099, was opened at 2001-09-03 10:07
You can respond by visiting:
Category: msc51(8051) target
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Paul Stoffregen (pjs)
Assigned to: Nobody/Anonymous (nobody)
Summary: code pointer not usable in struct
Initial Comment:
I'm sorry for such a long test example... you can
download a copy here:
Bug #1: SDCC will not allow the struct to contain a
pointer to code memory, claiming that the pointer
must be initialized, even though it's part of a
struct declaration that doesn't have storage
allocated yet.
Bug #2 (minor): The strings allocated in data, idata
and xdata have a duplication (unnecessary) copy
allocated in code memory.
Here's the test code: (same as link above)
#include <stdio.h>
/* first, a struct definition with pointers targeting
each */
/* type of memory, but no storage allocated yet */
struct misc_string_struct {
data char *string_in_data;
idata char *string_in_idata;
xdata char *string_in_xdata;
code char *string_in_code; // <-- won't
compile!
};
/* four strings, one in each type of memory */
/* the first three strings are stored in their
specified */
/* memory type, AND in code memory */
data char str_data[] = "This string is in data\r\n";
idata char str_idata[] = "This string is in
idata\r\n";
xdata char str_xdata[] = "This string is in
xdata\r\n";
code char str_code[] = "This string is in code\r\n";
/* now, allocate a copy of the struct in each type of
memory */
data struct misc_string_struct strings_data= {str_data,str_idata,str_xdata,str_code};
idata struct misc_string_struct strings_idata= {str_data,str_idata,str_xdata,str_code};
xdata struct misc_string_struct strings_xdata= {str_data,str_idata,str_xdata,str_code};
code struct misc_string_struct strings_code= {str_data,str_idata,str_xdata,str_code};
/* we ought to be able to print each string in each
memory type */
void print_four_strings(struct misc_string_struct *p)
{
printf("%s%s%s%s", p->string_in_data,
p->string_in_idata,
p->string_in_xdata,
p->string_in_code);
}
void print_sixteen_strings(void)
{
printf("\ndereferencing pointers stored in
data:\n");
print_four_strings(&strings_data);
printf("\ndereferencing pointers stored in
idata:\n");
print_four_strings(&strings_idata);
printf("\ndereferencing pointers stored in
xdata:\n");
print_four_strings(&strings_xdata);
printf("\ndereferencing pointers stored in
code:\n");
print_four_strings(&strings_code);
}
----------------------------------------------------------------------
You can respond by visiting:
configure: error: cannot run C++ compiled programs.
If you meant to cross compile, use `--host'.
configure: error: ./configure failed for sim/ucsim
make[2]: *** [src/sdcc/sdccconf.h] Error 1
gen.c: In function `genUminus':
gen.c:1767: warning: `size' might be used uninitialized in this function
/usr/bin/install: cannot stat `../bin/as-z80': No such file or directory
make[4]: *** [install] Error 1
/usr/bin/install: cannot stat `../bin/link-z80': No such file or directory
make[4]: *** [install] Error 1
make[2]: Target `build' not remade because of errors.
gen.c: In function `genUminus':
gen.c:1767: warning: `size' might be used uninitialized in this function
Well, I'm happy with the z80 port and will soon be happy with the gbz80
port. I'd like to suggest a semi freeze for the rest of this week and a
build at the end of next weekend. Objections? Big bugs that we should
fix first?
-- Michael | https://sourceforge.net/p/sdcc/mailman/sdcc-devel/?viewmonth=200109&viewday=3 | CC-MAIN-2018-17 | refinedweb | 984 | 58.82 |
.
default.aspx:
<%@ Page Language="c#" inherits="_default" src="default.cs" %>
<%@ Register TagPrefix="ernstinfo" TagName="session" Src="../Controls/session.ascx" %>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<ernstinfo:session
<html>
<head>
//******************************************************************
session.ascx.cs:
public class Session: UserControl {
public string session_BusinessID = "";
void Page_Load(Object sender, EventArgs E) {
System.Data.DataSet sessionDS = null;
// Check for Session
try {
// Check and Log Session
// Session_Load updates LastActivity field
sessionDS = CheckAndLogSession(Session["SessionID"].ToString(), ref errorText);
if(sessionDS != null && sessionDS.Tables.Count > 0 && sessionDS.Tables[0].Rows.Count > 0) {
dr = sessionDS.Tables[0].Rows[0];
session_BusinessID = dr["businessid"].ToString();
} else
//******************************************************************
default.cs
public class _default: Page {
void Page_Load(Object sender, EventArgs E) {
Response.Write(session.session_BusinessID.ToString()); $349.00.
Premium members get this course for $79.20.
Premium members get this course for $39.20.
Premium members get this course for $389.00.
The final deployment can be solved by moving all the .cs and .designer.cs files into the App_Code folder, and ASP.NET will compile it all automatically.
I'm not sure of the benefit you perceive in doing it the way you are, although it is of course a good learning experience.
The main thing you need to do, if you are doing it by hand is to register the control on the ASP.NET page.
If you want to post your code, I'd be happy to resolve the issue for you.
>>>> I don't want to have to compile the second .cs.
I dont undertsand why, as without compiling how can you use the codebehind structure to make the framework understand the code?
If you put the .cs file in the App_Code folder (see) then ASP.NET will compile it automatically.
However you proceed, you will need to determine the relationship between your two files. The page can inherit from the second .cs file so long as it is derived from the System.Web.UI.Page class.
If you want to split the page functionality then you should consider User Controls or Master Pages. Do a search in MSDN () for both of these.
With monday.com’s project management tool, you can see what everyone on your team is working in a single glance. Its intuitive dashboards are customizable, so you can create systems that work for you.
I have one.aspx, one.cs, a.ascx, and a.ascx.cs. I have SomeFunction() in a.ascx.cs.
How do I reference SomeFunction() from one.cs?
AnID.SomeFunction()
The user control should be registered on the page, in which case the code shouldn't have any problems accessing it.
I suggest that you download Visual Studio 2008 Express as it will help you learn ASP.NET and does not cost anything. See.
Here is my code: I've chopped off some code for brevity but the good stuff is there.
As you can see I have a session control on my default.aspx.
In my control, session.ascx.cs, I have public strings.
In default.cs I am trying to call session.session_BusinessID
"The name 'session' does not exist in the current context"
Ideas?
Open in new window
You can take two approaches:
1) You can declare a local class variable. This will be used by the page when the control is instantiated.
2) You can use FindControl and then cast the result to your control type
1) Add to default.cs:
protected Session session;
2) Use this code in Page_Load in default.cs
Response.Write( ( (session)
(FindControl("session")) )
.session_BusinessID.ToStri
I would also recommend changing the type name of the Session control to be more descriptive, even a simple change like SessionControl would be appropriate. After all, the object is a control, not a session, even if the control is used to represent a session.
Good luck with the project, and thanks for the points. | https://www.experts-exchange.com/questions/23108654/Call-multiple-cs-files-from-one-aspx-using-codebehind-without-complile.html | CC-MAIN-2018-09 | refinedweb | 637 | 61.02 |
An encoded file will be decoded by the program using a key input from the user. This key is a sequence of numbers that will scramble the message in the file into its appropriate order.
I have to use linked lists for this. I already started it, but I can't seem to find a correct way to jumble the words and display them correctly.
#include <stdio.h> #include <string.h> typedef struct _node_{ char word[30]; struct _node_ *pPtr_word; }node_t; int main(void) { FILE *pInput, *pOutput; int a, b, count, num, group_num, counter, data[10], i, j; node_t node[100]; node_t *pPtr; pInput=fopen("input.txt","r"); if(pInput==NULL){ printf("Error: can't open file.\n"); return 1; } else{ if('\n') count=count++; for(a=0;a<count+1;a++) fgets(node[a].word,30,pInput); for(b=0;b<count+1;b++) printf("%s\n", node[b].word); } printf("Enter number of numbers for key sequence: "); scanf("%d", &num); printf("Enter key sequence: "); for(counter=0;counter<num;counter++) scanf("%d", &data[counter]); group_num=((count+1)/num); for(i=0;i<group_num;i++) for(j=0,pPtr=node;j<num-1;j++,pPtr++) node[(data[j]-1)+num*i].pPtr_word=&node[... pPtr->pPtr_word=NULL; for(pPtr->word;pPtr!=NULL;pPtr=pPtr->pPt... printf("%s", pPtr->word); getchar(); getchar(); return 0; }
Example, the file input [from a text file] is-- brown quick The over jumped fox dog. lazy the
Then the key sequence entered would be 321
The output should be-- The quick brown fox jumped over the lazy dog.
In my text file, it looks like this:
brown
quick
The
over
jumped
fox
dog.
lazy
the
Also, for displaying my linked list, it seems like it doesn't traverse the whole thing. I also can't seem to create the right formula to get the correct sequence [since there can be 2 or 4 numbers for the sequence and it should work for all cases].
Please help me out :) | https://www.daniweb.com/programming/software-development/threads/151565/linked-list-problem-in-my-program | CC-MAIN-2017-09 | refinedweb | 332 | 64.3 |
Components and supplies
Necessary tools and machines
Apps and online services
About this project witch polarity, because it will break the LEDs.
Step 4: About the software!
Code
Arduino CodeArduino
#include "FastLED.h" // How many leds in your strip? #define NUM_LEDS 68 byte pixelType = 0; byte drawIn[4]; byte frameIn[NUM_LEDS*3]; // DATA_PIN 3 //#define CLOCK_PIN 13 // The bluetooth module pins #define RX_PIN 0 #define TX_PIN 1 // Define the array of leds CRGB leds[NUM_LEDS]; void setup() { // Uncomment/edit one of the following lines for your leds arrangement. //, GRB>); Serial.begin(9600); pinMode(TX_PIN, OUTPUT); pinMode(RX_PIN, INPUT); } void loop() { } void serialEvent() { pixelType = Serial.read(); switch (pixelType) { case 0: //draw mode while (!Serial.available()) {} Serial.readBytes(drawIn, 4); leds[drawIn[0]] = CRGB(drawIn[1], drawIn[2], drawIn[3]); FastLED.show(); Serial.flush(); break; case 1: //clear mode for (int i = 0; i < NUM_LEDS; i++) { leds[i] = CRGB::Black; } FastLED.show(); Serial.flush(); break; case 2: //frame in mode while (!Serial.available()) {} Serial.readBytes(frameIn, (NUM_LEDS * 3)); for (int i = 0; i < NUM_LEDS; i++) { leds[i] = CRGB(frameIn[i * 3], frameIn[(i * 3) + 1], frameIn[(i * 3) + 2]); } FastLED.show(); Serial.flush(); break; case 3: while (!Serial.available()) {} int brightnessLED = Serial.read(); FastLED.setBrightness(brightnessLED); Serial.flush(); break; } }
Software to control the shadesJava
No preview (download only).
Schematics
Author
Published onSeptember 23, 2016
Members who respect this project
you might like | https://create.arduino.cc/projecthub/RGBFreak/diy-rgb-led-shades-controlled-by-arduino-c23e57 | CC-MAIN-2019-43 | refinedweb | 230 | 50.23 |
From smtp Mon Feb 17 08:58 IST 1997From linux-kernel Thu Feb 13 14:29:06 0500 1997 remote from vger.rutgers.eduReceived: from vger.rutgers.edu by jasmine.hclt.com; Mon, 17 Feb 97 08:58 ISTReceived: from nic.funet.fi by hclhprnd.hclt.com with SMTP id AA13334 (5.67c/IDA-1.5 for <[email protected]>); Mon, 17 Feb 1997 07:42:45 +0530Received: from vger.rutgers.edu ([128.6.190.2]) by nic.funet.fi with ESMTP id <55787-4953>; Thu, 13 Feb 1997 21:39:43 +0200Received: by vger.rutgers.edu id <213140-22122>; Thu, 13 Feb 1997 14:29:25 -0500From: vger.rutgers.edu!owner-linux-kernel-digest To: vger.rutgers.edu!linux-kernel-digest Subject: linux-kernel-digest V1 #746Reply-To: [email protected]: [email protected]: 86399Content-Type: textPrecedence: bulkMessage-Id: <[email protected]>Date: Thu, 13 Feb 1997 14:29:06 -0500linux-kernel-digest Thursday, 13 February 1997 Volume 01 : Number 746In this issue: Unresolved symbols in /lib/modules/.../*.o? Behavior under swap catastrophe? Keyboard hangs when using PS/2 mouse Re: CONUNDRUM. Re: 640MB MO patch Re: ramdisk problem 2.1.2[56] Re: IDE Disk Problems Re: Sparc module char-major-14? Re: IDE Disk Problems Re: CONUNDRUM. Re: IDE Disk Problems Re: Unresolved symbols in /lib/modules/.../*.o? PATCH for 2.1.26: newly forked processes killed by "handled signals" Re: Big malloc's. Re: 2.0.27 major problems #1 -- 3c59x driver. hard disk drive status Re: IDE Disk Problems Re: ramdisk problem 2.1.2[56] Re: CONUNDRUM. RE: B*gg*r mallocs, mmap JOY! Re: IDE Disk Problems VFS/Posix question Re: CONUNDRUM. NFS problem with 2.0.25 MENUCONFIG errors Re: IDE Disk Problems Re: Performance patch for NE Ethernet Re: MENUCONFIG errors Re: 2.0.27 major problems #1 -- 3c59x driver. [none] Re: 640MB MO patch Re: Behavior under swap catastrophe? Re: IDE Disk Problems Re: CONUNDRUM. Linux & EISA bus ? Re: Linux VM subsystem (Was: Big mallocs, mmap sorrows and double buffering.) "Modules Oops" workaround for 2.1.26 Re: Version bug in 2.0.29? 
Thanks, and another Question Re: CONUNDRUM. Re: CONUNDRUM. Resyncs Sony CDU33a+kernel>2.1.22=system hangs totally insmod Re: CONUNDRUM. Re: Performance patch for NE Ethernet Report on compiling 2.1.26See the end of the digest for information on subscribing to the linux-kernelor linux-kernel-digest mailing lists.----------------------------------------------------------------------From: fruviad <[email protected]>Date: Wed, 12 Feb 1997 18:56:30 -0500 (EST)Subject: Unresolved symbols in /lib/modules/.../*.o? Hi, Tried sending this to linux-newbie, but I haven't gotten an answer from there. Thought I'd ask here.. I haven't added any patches to the kernel, so the version is the same. I've tried doing a number of other things (make modules, make modules_install)...without luck. Anyone have any ideas what I'm missing? Thanks... Peterps...it seems that all the linux kernel docs I've found are either extremely simplified (ie. saying "yes" here does this...), or kernel-hacking. Anyone know of anything in-between for a non-programmer who wants to get into programming/kernel stuff?Thanks again... - -- /\_/\ | ( o.o ) | [email protected] (Lizard Ho!!!) > < | ------------------------------From: [email protected] (Joe Fouche)Date: Wed, 12 Feb 1997 15:54:07 -0800Subject: Behavior under swap catastrophe?- --sldF0LQLVoqAiE7SI've noticed lately that the behavior of the kernel when some process goesberserk and fills up all the swap is a little strange. It seems to start sendingSEGV's to many processes as the large one grows. This wouldn't be so bad, exceptthat init is often killed. Is a modification to protect the life of init in order?Or should we just make sure this never happens? 
Comments or flames welcome.- -- _ ____ Joe Fouche ([email protected]) ___| |--- Deranged College Student - --sldF0LQLVoqAiE7SContent-Type: application/pgp-signature- -----BEGIN PGP SIGNATURE-----Version: 2.6.3iiQB5AwUBMwJYHXJgYOdk+W8JAQEpxgMfXLZodW2/4Do8DaZADLuTCpZoJxSDvnG2Mb45aDET2lxm1Gw1AgLlC0muCsIm0UdIEG036v/ez11Y0H3z9d/Gv1qnEdTHXdVIR9RWOUYmwIjMe2OTC0GXxcW8TYEnSrugPCLhcw===e+4D- -----END PGP SIGNATURE------ --sldF0LQLVoqAiE7S--------------------------------From: Benny Amorsen <[email protected]>Date: 13 Feb 1997 00:26:20 +0000Subject: Keyboard hangs when using PS/2 mouseUsing kernels 2.0.27 through 2.0.29, I experience keyboard hangs everytime I start gpm. /dev/mouse is a symlink to /dev/psaux, and I havetried psaux both as a module and compiled into the kernel. In bothcases, the mouse detection does not hang the keyboard. It is only whengpm starts that the keyboard hangs.gpm is version 1.10. The mouse in question is a two-button Compaq PS/2mouse. The motherboard is Asus P55T2P4, and the bios detects thepresence of a mouse. If the mouse is disconnected at boottime, thebios switches the PS/2 interface off. The kernel detects the interfacefine, too, but as far as I can see the kernel just believes whateverthe BIOS tells it.The keyboard is a good old no-name AT-keyboard.Benny------------------------------From: Systemkennung Linux <[email protected]>Date: Thu, 13 Feb 1997 01:50:46 +0100 (METcomplaining about exactly that problem for quite some time. Ralf------------------------------From: NIIBE Yutaka <[email protected]>Date: Thu, 13 Feb 1997 09:39:26 +0900Subject: Re: 640MB MO patchShigehiro Nomura writes: > The patch for 640MB MO written by Mr. Nagai has been released a few > months ago, too :-)Sigh. Shigehiro, please learn how people cooperate together. Itseems (at leaset for me) your attitude is not polite enough to Ericand the Linux community. If you really read and learn how SCSI subsystem, iso9660 filesystem. 
or ELF system work in Linux, you must knowEric Youngdale who contributes to Linux much much and much.I think that we should learn/check how things are going on, beforeproposing a patch. This makes the development process easy. IMHO, adevelopment of free software is not a race, but something like a folkdance.Well, I never think your effort to add feature is bad or wrong. Yes,it is great thing itself, really, but the way you did is questionable.It is better for our community to cooperate together, you know.Thanks, - -- NIIBE Yutaka------------------------------From: "Andrew E. Mileski" <[email protected]>Date: Wed, 12 Feb 1997 20:07:23 -0500 (EST')- --Andrew E. Mileski mailto:[email protected] Plug-and-Play Kernel Project Matrox Team: "Andrew E. Mileski" <[email protected]>Date: Wed, 12 Feb 1997 20:11:58 -0500 (EST)Subject: Re: IDE Disk can second this - sent the drive back to WD, and they sent me backa new drive (WD has a 1 year warranty). I think the Caviar must have hada bad run or two or three...anyways, works fine now, but they are hugelymore noisy than my Seagate Hawks (I've got a pair of each).- --Andrew E. Mileski mailto:[email protected] Plug-and-Play Kernel Project Matrox Team: "David S. Miller" <[email protected]>Date: Wed, 12 Feb 1997 20:21:30 -0500Subject: Re: Sparc module char-major-14? Date: Wed, 12 Feb 1997 14:41:31 -0800 (PST) From: Trevor Johnson <[email protected]> Richard A Sahlender Jr wrote: > modprobe: Can't locate module char-major-14 > > Received this on a Sparc 2 yesterday. I'm not sure these are correct but they work for me (on x86):They are right, but the issue is that we do not have finished driversfor the Sparc sound hardware as of yet, and thus the modules won't bethere anyways.- ---------------------------------------------////Yow! 11.26 MB/s remote host TCP bandwidth & ////199 usec remote TCP latency over 100Mb/s ////ethernet. Beat that! ////- -----------------------------------------////__________ oDavid S. 
Miller, [email protected] /_____________/ / // /_/ ><------------------------------From: JHazard <[email protected]>Date: Wed, 12 Feb 1997 19:57:16 -0600Subject: Re: IDE Disk ProblemsMy company recently received around a 100 HPs with Western Digital 1.6sin them. Just a guestimate, however, I believe that around 15-20% ofthem have had drive problems withing the first two months. These aremostly running NT, however the one in my area is running Linux and isuspect it too is having problems. I've basically lost both of mybootable Linux partions once (at diffrent times...) due to drive errors.Now my WD 1.6 at home as never caused me a problem with any operatingsystem.Andrew E. Mileski> > I can second this - sent the drive back to WD, and they sent me back> a new drive (WD has a 1 year warranty). I think the Caviar must have had> a bad run or two or three...anyways, works fine now, but they are hugely> more noisy than my Seagate Hawks (I've got a pair of each).> > --> Andrew E. Mileski mailto:[email protected]> Linux Plug-and-Play Kernel Project> XFree86 Matrox Team: Clive Messer <[email protected]>Date: Thu, 13 Feb 1997 02:10:42 +0000 (GMT)Subject: Re: CONUNDRUM .......- --------------------------------------------------------------------Wed Jan 29 20:50:53 1997 H.J. Lu ([email protected]) * sysdeps/linux/i386/crt/crt0.S: align stack to 8 bytes.- --------------------------------------------------------------------Clive.- -- C Messer<[email protected]> | "I pressed her thigh and death smiled."<[email protected]> | Jim Morrison.------------------------------From: Systemkennung Linux <[email protected]>Date: Thu, 13 Feb 1997 03:19:50 +0100 (MET)Subject: Re: IDE Disk Problems> My company recently received around a 100 HPs with Western Digital 1.6s> in them. Just a guestimate, however, I believe that around 15-20% of> them have had drive problems withing the first two months. 
These are> mostly running NT, however the one in my area is running Linux and i> suspect it too is having problems. I've basically lost both of my> bootable Linux partions once (at diffrent times...) due to drive errors.We've got seven IDE WD disks during the last half year. That makes morethan 60% failure rate. Some of these were apparently killed by theFirmware/BIOS problem in combination with Asus boards. The three1.6GB disks were not used with the affected Asus boards. One of themruined my last weekend, so goodbye, WD.During the same time span no other disk of another brand failed, neitherSCSI nor IDE. Ralf------------------------------From: [email protected] (Aaron M. Ucko)Date: 12 Feb 1997 21:41:47 -0500Subject: Re: Unresolved symbols in /lib/modules/.../*.o?fruviad <[email protected]> writes:>.You can ignore these messages. Red Hat installs modules for a *lot*of drivers so that users are less likely to have to rebuild theirkernel. You are probably seeing the "Unresolved symbols" messagebecause those particular modules require symbols found in files whichRed Hat compiled into their kernel but you neither compiled into yoursor built as modules. (For instance, you'll get messages foreverything in /lib/modules/2.0.18/scsi if you did not enable SCSIsupport when you built your kernel.)- --_------------------------------From: [email protected] (Kevin Buhr)Date: 12 Feb 1997 23:24:48 -0600Subject: PATCH for 2.1.26: newly forked processes killed by "handled signals"- --Multipart_Wed_Feb_12_23:24:48_1997-1Content-Type: text/plain; charset=US-ASCIIAlan:After your patch fixed the "socketpair" behaviour, I got "dump"working and managed to discover another 2.1.26 kernel bug. 
Thechanges to "sys_sigreturn" in "arch/i386/kernel/signal.c" that addedmore rigorous checking of segment register values---I don't know whenthey were added---have the unintended side effect of causing crashesin certain very rare circumstances.Basically, the "copy_thread" function of "arch/i386/kernel/signal.c"initializes the TSS's %gs with KERNEL_DS. If the newly forked childis signaled immediately *and* if the child has a handler for thatsignal, then the pseudobogus %gs value will be saved by "setup_frame"(in "arch/i386/kernel/signal.c") and, when returning from the handler,the "GET_SEG(gs)" in "sys_sigreturn" will barf and "do_exit(SIGSEGV)"the child.For reasons I still don't fully understand, if the child is allowed toreturn from the "fork" call and begin execution before receiving thesignal, the %gs register is automagically "fixed" (but I can't figureout where), and we never notice the problem. The enclosed"signaltest.c" illustrates the bug: on a vanilla 2.1.26 kernel, thechild quiety dies immediately after handling the signal. A hacked upkernel verifies that "GET_SEG(gs)" is the culprit.The enclosed patch appears to fix the problem by having "copy_thread"initialize %gs with "USER_DS" instead, as is done in other, similarcontexts. Does this break anything else?Kevin <[email protected]>- --Multipart_Wed_Feb_12_23:24:48_1997-1Content-Type: application/octet-streamContent-Disposition: attachment; #define LINUX_VERSION_CODE 131101> A worse problem is that "make oldconfig" goes into an infinite loopover the sound stuff, at least in our configuration. "make xconfig"does do the right thing, though. 
-hpa- -- This space intentionally has nothing but text explaining why thisspace has nothing but text explaining that this space would otherwisehave been left blank, and would otherwise have been left blank.------------------------------From: "shadow" <[email protected]>Date: Thu, 13 Feb 1997 12:57:06 +0000Subject: Thanks, and another QuestionI would like to thanks those who answered my questions about having more than 64 megs in a machine at once. Thank you kindly gods :).Ok, my second question is based on the same machine, unfortunaly it has been behaving badly still, this machine has allways been a bother sense the beginning, it has crashed under freebsd as well as linux(from 1.13? to 2.0.27) it has in it an adaptec 2940uw with seagate 2.1 uw scsi drive as well as an eide drive. and a cheap 16 ne1000 compatable ethernet card and an PCI smc-ultra ethernet card as well. The errors range from trashing the scsi partions (that get bad enough to kill the kernel) to just protection faults, to VFS faults, and free block list corrupt? I know these are vague, but as there is more than one. I was looking at the kernel help, and the pages say in order to run with more than 64 megs of memory, it needs 512 k cache?If anyone has any ideas, i would love to hear them. Thank you for your time and patience.Scott [email protected]: Nathan Bryant <[email protected]>Date: Thu, 13 Feb 1997 13:13:59 -0500 (EST)Subject: Re: CONUNDRUM.On Thu, 13 Feb 1997, A.N.Kuznetsov wrote:> Hello!> > Seems, one guy guessed correct answer.> He forgot to publish his brilliant ideas so that I'll make it.Umm, I'll assume you're joking. His ideas don't sound so brillianttoDumbass?> > the hardware. 
A multi-tasking OS will be notably slower for some things> > than a non-multi-tasking OS such as DOS or windows.He's missing the point here: Windows *is* a multitasking operating system.Even Windows 3.1 multitasks DOS applications *preemptively.* And you'vesaid that your program has the same results under Windows as it does underDOS...> > > > >.I don't think this is right, although I may be wrong. (Although Pentiumcycle-counters would also count all the overhead of the OS.) It may beinteresting, though, to see how much time the process was actuallyscheduled for, using the times() syscall.> > > > >.Know nothing? Ignorant? I don't think so. If you had taken the time toread his mail, you would realize that the Watcom code will run just fine,since it's not doing any syscalls and just doing computations on memory.He's disassembled the Watcom-produced code and assembled it again onLinux.> > > > Greg Alexander> >> > > +-----------------------+---------------------------------------+| Nathan Bryant | Unsolicited commercial e-mail WILL be || [email protected] | charged an $80/hr proofreading fee. |+-----------------------+---------------------------------------+------------------------------From: [email protected] (Alan Cox)Date: Thu, 13 Feb 1997 08:43:31 +0000 (GMT?Because its incredibly non obvious. Some of the folk doing big numbercrunches noticed that the size of your environment changed the performance.They have submitted both a fix to crt0.o and a mod to gcc to make italign doubles on 8 byte boundaries. Im hopeful that will make gcc2.8------------------------------From: [email protected]: 13 Feb 1997 18:17:03 GMTSubject: ResyncsI have written a device driver in Linux 1.2.8 that handles interrupts from abit/frame sync card and writes the data to a file on the hard disk. Thesystem has been running for 8 months on a deployed system which uses theadaptec fast&wide pci/scsi controller (aic7880?) and a Quantum HD. 
We haverecently bought another system with a similar hardware configuration exceptit is a Seagate Barracuda drive. Since I have lost my 1.2.8 CD I loadedSlakware 3.1 with the 2.0.0 kernel. This required me to slightly modify thedevice driver code (request_irq, free_irq, and the interrupt now providesupport for shared interrupts?) which was trivial. I have since loaded the2.0.28 kernel, but no other patches. The problem is that when I run the capture software I get continuous resyncs,which in the past has been because the process couldn't write data to thedisk fast enough before the FIFO buffer on the card filled up. I don'tbelieve that the hardware is too slow so it must be a problem with theoperating system or my code. I'm willing to provide more specificinformation if you'll let me know what is needed.Scott N. [email protected] --*************************************************************************** Sent via FirstClass(R) UUCP gateway from Logicon Geodynamics, Inc (CD) ***************************************************************************------------------------------From: Andras Kiraly <[email protected]>Date: Thu, 13 Feb 1997 19:39:34 +0100 (MET)Subject: Sony CDU33a+kernel>2.1.22=system hangs totallyDear Linux DeveloppersI'm experiencing following problem since kernel V2.1.23:Everytimes I mount a CD-Rom, the system hangs totally, no keyboardinput, screen-switching and CTR-ALT-DEL is possible anymore. Theonly thing I can do is resetting the computer with the reset-key.If I step back to a kenel <2.1.23, the problem is not around. Withkernel 2.1.22 CD-Rom operation is no problem. 
The Cd-Rom works withoutproblems with DOS6.20 and Win311 too.My System configuration:Sony CDU33a with the Sony COR-334 ISA-Adapter,base Adress 340, No IRQ, DMA channel 0ASUS P/I-p55t2p4 Motherboard, Cyrix P150+ CPU, 64MB RAMPlease write me which further information you need or if the problemis already solved, what the solution is.Best regards,Kiraly Andras------------------------------From: "Charles W. Doolittle, N1SPX" <[email protected]>Date: Thu, 13 Feb 1997 13:44:02 -0500 (EST)Subject: insmodUsing insmod v2.0.0 (by insmod -v), with kernel 2.1.25, and AFAIK noversion support.When I insmod, it segfaults. Up on the console appears a kernel traceregarding "The Kernel Could Not Handle an Invalid Paging Request..."I hope its simple.ChuckN1SPX------------------------------From: Ingo Molnar <[email protected]>Date: Thu, 13 Feb 1997 20:07:10 +0100 (MET)Subject: Re: CONUNDRUM.On Thu, 13 Feb 1997, Alan Cox wrote:> Because? ;) i remember Dave has made this for the Sparc, and there is something likethis for x86 too.- -- mingo------------------------------From: "Richard B. Johnson" <[email protected]>Date: Thu, 13 Feb 1997 14:20:32 -0500 (EST)Subject: Re: Performance patch for NE EthernetOn Fri, 14 Feb 1997, Paul Gortmaker wrote:> >>.nic_base was an extra operation. Read the code.On line 622 there is even "int nic_base = NE_BASE" > > > -.> > > -> anyways. That doesn't buy you anything in speed or size.Yes I did.> > > -.: [email protected] (Dale R. Worley)Date: Thu, 13 Feb 1997 14:28:32 -0500Subject: Report on compiling 2.1.26I've just made a reasonably clean compile of 2.1.26 with "everything"configured in (no modules, though). This is the list of problems Inoticed. Some of the problems I can't fix (for instance, much of theISDN code hasn't been updated to take into account recent kernelchanges), but the ones for which I could make real fixes, I'veincluded patches below.1. 
arch/i386/kernel/time.cSome code regarding lost_ticks and USECS_PER_JIFFY wasn't suppressedwhen CONFIG_APM was set. Also, I marked some #endif's to tell whatthe matching #ifdef's were testing on.2. drivers/ap1000/{apfddi.c,apfddi.h,bif.c}The structure enet_statistics has been replaced by net_device_stats.(I think.) 3. drivers/isdn/{isdn_audio.c,isdn_common.c,isdn_net.c,isdn_ppp.c,isdn_tty.c, pcbit/capi.c,pcbit/layer2.c}The 'free', 'lock', and 'len' fields of struct sk_buff no longer exist.4. drivers/isdn/{isdn_common.c,isdn_ppp.c}select_wait seems to have been replaced by a 'poll' function, but thedetails of how to replace the one with the other are unclear.5. drivers/isdn/isdn_net.cThe 'header_cache_bind' field is no longer in struct device.6. drivers/net/cs89x0.cMissing final */ on a comment.7. drivers/net/cs89x0.c"FAULT" used instead of "EFAULT".8. drivers/net/cs89x0.cExtra newline damaging a #define.9. drivers/net/sdla.cUnknown function amemcpy() used.10. drivers/sound/lowlevel/aci.cUse of get_user not updated for new interface.11. net/802/fddi.cETH_P_8022 used instead of ETH_P_802_2.12. scripts/ConfigureCause defaulted values to be output when they are used.Extraneous backslash prevents integer answers from being accepted.- ----------------------------------------------------------------------diff -u arch/i386/kernel/time.c.orig arch/i386/kernel/time.c- --- arch/i386/kernel/time.c.orig Wed Feb 12 09:17:44 1997+++ arch/i386/kernel/time.c Wed Feb 12 09:19:44 1997@@ -128,7 +128,7 @@ return edx; }- -#endif+#endif /* !CONFIG_APM */ /* This function must be called with interrupts disabled * It was inspired by Steve McCanne's microtime-i386 for BSD. -- jrs@@ -265,12 +265,14 @@ *tv = xtime; tv->tv_usec += do_gettimeoffset(); +#ifndef CONFIG_APM /* * xtime is atomically updated in timer_bh. lost_ticks is * nonzero if the timer bottom half hasnt executed yet. 
*/ if (lost_ticks) tv->tv_usec += USECS_PER_JIFFY;+#endif /* !CONFIG_APM */ restore_flags(flags); @@ -422,7 +424,7 @@ "=d" (last_timer_cc.high)); timer_interrupt(irq, NULL, regs); }- -#endif+#endif /* !CONFIG_APM */ /* Converts Gregorian date to seconds since 1970-01-01 00:00:00. * Assumes input in normal date format, i.e. 1980-12-31 23:59:59@@ -530,6 +532,6 @@ "=d" (init_timer_cc.high)); irq0.handler = pentium_timer_interrupt; }- -#endif+#endif /* !CONFIG_APM */ setup_x86_irq(0, &irq0); }diff -u drivers/ap1000/apfddi.c.orig drivers/ap1000/apfddi.c- --- drivers/ap1000/apfddi.c.orig Mon Feb 10 09:53:17 1997+++ drivers/ap1000/apfddi.c Mon Feb 10 10:44:13 1997@@ -126,7 +126,7 @@ static u_char apfddi_saddr[6] = { 0x42, 0x9a, 0x08, 0x6e, 0x11, 0x41 }; struct device *apfddi_device = NULL;- -struct enet_statistics *apfddi_stats = NULL;+struct net_device_stats *apfddi_stats = NULL; volatile struct apfddi_queue *apfddi_queue_top = NULL; @@ -254,7 +254,7 @@ static void apfddi_interrupt(int irq, void *dev_id, struct pt_regs *regs); static int apfddi_xmit(struct sk_buff *skb, struct device *dev); int apfddi_rx(struct mac_buf *mbuf);- -static struct enet_statistics *apfddi_get_stats(struct device *dev);+static struct net_device_stats *apfddi_get_stats(struct device *dev); #if APFDDI_DEBUG void dump_packet(char *action, char *buf, int len, int seq); #endif@@ -496,11 +496,11 @@ dev->stop = apfddi_stop; dev->hard_start_xmit = apfddi_xmit; dev->get_stats = apfddi_get_stats;- - dev->priv = kmalloc(sizeof(struct enet_statistics), GFP_ATOMIC); + dev->priv = kmalloc(sizeof(struct net_device_stats), GFP_ATOMIC); if (dev->priv == NULL) return -ENOMEM;- - memset(dev->priv, 0, sizeof(struct enet_statistics)); - - apfddi_stats = (struct enet_statistics *)apfddi_device->priv;+ memset(dev->priv, 0, sizeof(struct net_device_stats)); + apfddi_stats = (struct net_device_stats *)apfddi_device->priv; /* Initialise the fddi device structure */ for (i = 0; i < DEV_NUMBUFFS; i++)@@ -692,9 +692,9 @@ 
/* * Return statistics of fddi driver. */- -static struct enet_statistics *apfddi_get_stats(struct device *dev)+static struct net_device_stats *apfddi_get_stats(struct device *dev) {- - return((struct enet_statistics *)dev->priv);+ return((struct net_device_stats *)dev->priv); } diff -u drivers/ap1000/apfddi.h.orig drivers/ap1000/apfddi.h- --- drivers/ap1000/apfddi.h.orig Mon Feb 10 09:53:17 1997+++ drivers/ap1000/apfddi.h Mon Feb 10 10:44:13 1997@@ -138,5 +138,5 @@ void set_cf_join(int on); extern struct device *apfddi_device;- -extern struct enet_statistics *apfddi_stats;+extern struct net_device_stats *apfddi_stats; diff -u drivers/ap1000/bif.c.orig drivers/ap1000/bif.c- --- drivers/ap1000/bif.c.orig Mon Feb 10 09:53:18 1997+++ drivers/ap1000/bif.c Mon Feb 10 10:44:13 1997@@ -46,14 +46,14 @@ #define BIF_MTU 10240 static struct device *bif_device = 0;- -static struct enet_statistics *bif_stats = 0;+static struct net_device_stats *bif_stats = 0; int bif_init(struct device *dev); int bif_open(struct device *dev); static int bif_xmit(struct sk_buff *skb, struct device *dev); int bif_rx(struct sk_buff *skb); int bif_stop(struct device *dev);- -static struct enet_statistics *bif_get_stats(struct device *dev);+static struct net_device_stats *bif_get_stats(struct device *dev); static int bif_hard_header(struct sk_buff *skb, struct device *dev, unsigned short type, void *daddr,@@ -128,11 +128,11 @@ dev->open = bif_open; dev->flags = IFF_NOARP; /* Don't use ARP on this device */ dev->family = AF_INET;- - dev->priv = kmalloc(sizeof(struct enet_statistics), GFP_KERNEL);+ dev->priv = kmalloc(sizeof(struct net_device_stats), GFP_KERNEL); if (dev->priv == NULL) return -ENOMEM;- - memset(dev->priv, 0, sizeof(struct enet_statistics));- - bif_stats = (struct enet_statistics *)bif_device->priv;+ memset(dev->priv, 0, sizeof(struct net_device_stats));+ bif_stats = (struct net_device_stats *)bif_device->priv; dev->stop = bif_stop;@@ -282,8 +282,8 @@ /* * Return statistics of bif 
driver. */- -static struct enet_statistics *bif_get_stats(struct device *dev)+static struct net_device_stats *bif_get_stats(struct device *dev) {- - return((struct enet_statistics *)dev->priv);+ return((struct net_device_stats *)dev->priv); } diff -u drivers/net/cs89x0.c.orig drivers/net/cs89x0.c- --- drivers/net/cs89x0.c.orig Mon Feb 10 10:31:22 1997+++ drivers/net/cs89x0.c Mon Feb 10 19:25:54 1997@@ -1179,4 +1179,4 @@ * c-indent-level: 8 * tab-width: 8 * End:- - *+ */diff -u drivers/net/dlci.c.orig drivers/net/dlci.c- --- drivers/net/dlci.c.orig Mon Feb 10 18:20:48 1997+++ drivers/net/dlci.c Mon Feb 10 18:20:54 1997@@ -296,7 +296,7 @@ if (!get) { if(copy_from_user(&config, conf, sizeof(struct dlci_conf)))- - return -FAULT;+ return -EFAULT; if (config.flags & ~DLCI_VALID_FLAGS) return(-EINVAL); memcpy(&dlp->config, &config, sizeof(struct dlci_conf));diff -u drivers/net/ni52.c.orig drivers/net/ni52.c- --- drivers/net/ni52.c.orig Mon Feb 10 17:24:09 1997+++ drivers/net/ni52.c Mon Feb 10 17:24:12 1997@@ -164,8 +164,7 @@ #define DELAY_18(); { __delay( (loops_per_sec>>18)+1 ); } /* wait for command with timeout: */- -#define WAIT_4_SCB_CMD() - -{ int i; \+#define WAIT_4_SCB_CMD() { int i; \ for(i=0;i<16384;i++) { \ if(!p->scb->cmd_cuc) break; \ DELAY_18(); \diff -u drivers/sound/lowlevel/aci.c.orig drivers/sound/lowlevel/aci.c- --- drivers/sound/lowlevel/aci.c.orig Fri Oct 25 06:06:35 1996+++ drivers/sound/lowlevel/aci.c Mon Feb 10 20:29:35 1997@@ -292,7 +292,7 @@ int vol, ret; unsigned param; - - param = get_user((int *) arg);+ get_user(param, (int *) arg); /* left channel */ vol = param & 0xff; if (vol > 100) vol = 100;@@ -318,8 +318,12 @@ /* handle solo mode control */ if (cmd == SOUND_MIXER_PRIVATE1) {- - if (get_user((int *) arg) >= 0) {- - aci_solo = !!get_user((int *) arg);+ int temp;++ get_user(temp, (int *) arg);+ if (temp >= 0) {+ get_user(temp, (int *) arg);+ aci_solo = !!temp; if (write_cmd(0xd2, aci_solo)) return -EIO; } else if (aci_version >= 0xb0) { if 
((status = read_general_status()) < 0) return -EIO;@@ -332,6 +336,8 @@ if (cmd & IOC_IN) /* read and write */ switch (cmd & 0xff) {+ int temp;+ case SOUND_MIXER_VOLUME: return setvolume(arg, 0x01, 0x00); case SOUND_MIXER_CD:@@ -349,7 +355,8 @@ case SOUND_MIXER_LINE2: /* AUX2 */ return setvolume(arg, 0x3e, 0x36); case SOUND_MIXER_IGAIN: /* MIC pre-amp */- - vol = get_user((int *) arg) & 0xff;+ get_user(temp, (int *) arg);+ vol = temp & 0xff; if (vol > 100) vol = 100; vol = SCALE(100, 3, vol); if (write_cmd(0x03, vol)) return -EIO;diff -u net/802/fddi.c.orig net/802/fddi.c- --- net/802/fddi.c.orig Tue Feb 11 09:46:35 1997+++ net/802/fddi.c Tue Feb 11 09:46:38 1997@@ -131,7 +131,7 @@ if(fddi->hdr.llc_8022_1.dsap==0xe0) { skb_pull(skb, FDDI_K_8022_HLEN-3);- - type=htons(ETH_P_8022);+ type=htons(ETH_P_802_2); } else {diff -u net/802/p8022.c.orig net/802/p8022.c- --- net/802/p8022.c.orig Mon Feb 10 10:35:16 1997+++ net/802/p8022.c Tue Feb 11 09:48:12 1997@@ -80,7 +80,7 @@ static struct packet_type p8022_packet_type = {- - 0, /* MUTTER ntohs(ETH_P_8022),*/+ 0, /* MUTTER ntohs(ETH_P_802_2),*/ NULL, /* All devices */ p8022_rcv, NULL,diff -u scripts/Configure.orig scripts/Configure- --- scripts/Configure.orig Mon Feb 10 10:35:20 1997+++ scripts/Configure Tue Feb 11 08:50:22 1997@@ -108,7 +108,7 @@ # function readln () { if [ "$DEFAULT" = "-d" -a -n "$3" ]; then- - echo "$1"+ echo "$1$2 [defaulted]" ans=$2 else echo -n "$1"@@ -288,7 +288,7 @@ def=${old:-$3} while :; do readln "$1 ($2) [$def] " "$def" "$old"- - if expr "$ans" : '0$\|-\?[1-9][0-9]*$' > /dev/null; then+ if expr "$ans" : '0$\|-?[1-9][0-9]*$' > /dev/null; then define_int "$2" "$ans" break else- ----------------------------------------------------------------------Dale- --Dale R. 
Worley Ariadne Internet ServicesVoice: +1 617-899-7949 Fax: +1 617-899-7946 E-mail: [email protected]"Internet-based electronic commerce solutions to real business problems."------------------------------End of linux-kernel-digest V1 #746**********************************To subscribe to linux-kernel-digest, send the command: subscribe linux-kernel-digestin the body of a message to "Majordomo". | http://lkml.org/lkml/1997/2/16/51 | CC-MAIN-2017-04 | refinedweb | 4,783 | 59.3 |
std::get_deleter
From cppreference.com
< cpp | memory | shared ptr
Access to the
p's deleter. If the shared pointer
p owns a deleter of type cv-unqualified
Deleter (e.g. if it was created with one of the constructors that take a deleter as a parameter), then returns a pointer to the deleter. Otherwise, returns a null pointer.
[edit] Parameters
[edit] Return value
A pointer to the owned deleter or
nullptr. The returned pointer is valid at least as long as there remains at least one
shared_ptr instance that owns it.
[edit] Exceptions
[edit] Notes
The returned pointer may outlive the last
shared_ptr if, for example, std::weak_ptrs remain and the implementation doesn't destroy the deleter until the entire control block is destroyed.
[edit] Example
demonstrates that shared_ptr deleter is independent of the shared_ptr's type
Run this code
#include <iostream> #include <memory> struct Foo { int i; }; void foo_deleter(Foo * p) { std::cout << "foo_deleter called!\n"; delete p; } int main() { std::shared_ptr<int> aptr; { // create a shared_ptr that owns a Foo and a deleter auto foo_p = new Foo; std::shared_ptr<Foo> r(foo_p, foo_deleter); aptr = std::shared_ptr<int>(r, &r->i); // aliasing ctor // aptr is now pointing to an int, but managing the whole Foo } // r gets destroyed (deleter not called) // obtain pointer to the deleter: if(auto del_p = std::get_deleter<void(*)(Foo*)>(aptr)) { std::cout << "shared_ptr<int> owns a deleter\n"; if(*del_p == foo_deleter) std::cout << "...and it equals &foo_deleter\n"; } else std::cout << "The deleter of shared_ptr<int> is null!\n"; } // deleter called here
Output:
shared_ptr<int> owns a deleter ...and it equals &foo_deleter foo_deleter called! | http://en.cppreference.com/w/cpp/memory/shared_ptr/get_deleter | CC-MAIN-2016-36 | refinedweb | 269 | 61.67 |
Microsoft Visual Studio 8SDKv2.0Bing[] { ‘ ‘, ‘,’ };Microsoft Visual Studio 8SDKv2.0Bing ‘string1’, ‘string2’, ‘string3’[] {‘,’}));[]{‘,’});a-hamezaDesktopDataSourcesACATest”;
string[] sfiles = Directory.GetFiles(@”C:Documents and Settingsa-hamezaDesktopDataSourcesACA”);
/”,
“469”,++, 🙁APTOPSQL.
Great post!
Trying to find any of the needles in the haystack. Is there a better way?
NinH(“one*two”,”*”,”two”) <= true
NinH("one*two","*","bucklemyshoe") <= false
NinH("one*two","*","bucklemyoneshoe") <= true
NinH("one*two","*","three*two*one") <= true
static bool NinH(string N, string sep, string H)
{
bool ret = false;
string[] s = { sep };
string[] needles = N.Split(s,StringSplitOptions.RemoveEmptyEntries);
for (int i = 0; i = 0)
{
ret = true;
break;
}
}
return ret;
}
not sure what happened to the code. missing the If
static bool NinH(string N, string sep, string H)
{
bool ret = false;
string[] s = { sep };
string[] needles = N.Split(s,StringSplitOptions.RemoveEmptyEntries);
for (int i = 0; i = 0)
{
ret = true;
break;
}
}
return ret;
}
sorry, newb poster…
this is missing from the for loop
“if (H.IndexOf(needles[i]) >= 0)”
i want to take a string as input of any length for my program. Can you help me with the code?
Sadia,
Why couldn’t you pass a long string as an Input.
Strings are reference types which means they live in the heap and the stack has only a references that point to an address in the heap so essentially you could pass and any length.
Could provide some code where you are encountering a problem ??
hi all,
(sorry, my English not good)
I have a problem to compare 2 or more addresses. As you may know, user always write down their address without using fixed format. For example:
People from Microsoft wrote:
People1:
Microsoft Corporation One Microsoft Way Redmond, WA 98052-6399 USA
People2:
Microsoft Corp. One Microsoft Way Redmond, WA 98052 6399 USA
People from Microsoft wrote:
People1:
Microsoft Corporation One Microsoft Way Redmond, WA 98052-6399 USA
People2:
Microsoft Corp. One Microsoft Way Redmond, WA 98052 6399 USA
People from Vodafone wrote:
People3:
Vodafone Limited The Connection Newbury Berkshire
RG14 2FN
People4:
Vodafone Limited, The Connection, Newbury Berkshire
RG 14/2FN
People5:
Vodafone Limited, The Connection, Newbury Berkshire
RG-14 2FN
I want system can recognized those 5 addresses as 2 addresses only.
If anyone know the algorithm or anything that might help, please inform me. Thx. ^^
Hello!
I have a strings such as: s = “asd;# qwe;# zxc”.
I’m splitting it to array as follow:
a = s.Split(new[] { “;#” });
But it is returns me an array without leading spaces:
“asd”, “qwe”, “zxc”.
How can I easily fix this behavior?
Thank you!
hi this is siva
i want this prog code at least for 4th one plss
1. A text box for file name
2. A button named as “Segregate”
3. A two column list box (“Word”, “Times”)
4. Pressing the button should read the file contents and find out each word repeats how many times and display in the list box
While there may be no specific performance difference between using string.Empty or “”, there is a very specific reason for using it. If you create a billion strings equal to “”, you have a billion instances of a string object containing no string data. If you create a billion instances of a string equal to string.Empty, you create a billion references to a global constant.
I don’t imagine the amount of storage thus wasted is considerable on a per-string basis, but in a non-trivial application, consider the number of times you use empty strings, usually temporarily.
Strings are immutable, so when you reassign the variable to a non-empty value, the original empty instance still exists and must be cleaned up. Do you really want every single instance of
MyString = “”;
in the application going to the GC for processing? That may not show up on the benchmarks, but it shows up in consumed CPU cycles. On the other hand, if you used
MyString = string.Empty;
reassignment will simply result in the reference changing to the newly created string.
This is not a new idea. It was fairly common in the olden days (C, C++) to assign a single manifest constant for empty strings and reference it whenever you needed one for initialization, or comparison, or whatever, rather than having a few hundred one-byte ”s floating around in your data-section.
This was especially true since, as processors got bigger than 8 bits, the amount of space actually consumed was usually a multiple of the register size, so while the empty string actually *used* 1 byte, it *consumed*, 2, or 4, or 8 *per instance*.
I have a pop-up properties box where I input a string into a label. I have a need to enter This & That.
I have breakpoints to check as my variable gets set to the value and the helper shows This & That.
But only This That shows up on the label.
How do I get the whole string to show up?
I have tried String.Format(@”{0}”, value);
Doesn’t work.
thanks!
You should write:
label1.Text = “This && That”,
becourse “&” used for selecting shortcuts.
Telling my customer to type in This && That is not an option. I have to make whatever is typed in properly displayed. I have to have a method to catch special chars for people who don’t know what a special char is.
In my public get/set, I have to deal with the customer input as a variable (String s = value;), then display s back on the label, as submitted.
Hi there,
How would you code this … I know it’s regex but I can’t figure out how to start the string compare from a certain character (@) …
I want to check to see if all the characters before the “@” symbol in an email address are all alpha (upper or lower case) or not.
Thanks
Chris
Hi timm,
I am trying to write out a batch file for use in another program, and I need to write out file paths with double backslashes intact. for example, in my file I need:
“C:\Users\BenjaminC\Desktop\texture.tga”
I am currently using StreamWriter.WriteLine, but it outputs the string with single backslashes:
“C:UsersBenjaminCDesktoptexture.tga”
Is there any way to write out the double backslashes without creating new strings with quadruple backslashes everywhere?
Thanks,
Benjamin
@Benjamin: If you are defining your paths at compile-time, you can precede each string definition with an at-sign to preserve the double-backslash:
string path = @”C:\Users\BenjaminC\Desktop\texture.tga”;
However, if you are generating your paths at run-time, then you can use the String.Replace method as follows to replace single-backslashes with double-backslashes:
path = path.Replace( “\”, “\\” );
How about using this for Reversing:
public string Reverse(string stringToReverse)
{
char[] rev = stringToReverse.Reverse().ToArray();
return new string(rev);
}
There may be no performance difference between string.Empty vs. “” but there is a definite behavior difference when building SOAP request in .NET using C#. Using string.Empty will send an empty parameter in the SOAP envelope (e.g. ) whereas “” will not send any parameter.
hi,
i am in need of the c language code which gives the following output..
if we give 1X0X01 as the array input it should count me the number of X’s in the array…and the X should b filled with the adjacent value to X….
ie it should give me the output as 100001
its very important for me……..plzzzz help me out……
thnx in advance……….
hi i am trying to assign a string class=”s” to a variable. if i give it as var=”class=”s””; it is an error. please give me a solution….
Amel,
Try putting a in front of the quotes you want to escape.
var = “class=”s””;
Or you can use ‘ instead if that works for you
var = “class’s'”;
Sorry the last line should be
var = “class=’s’”;
[…] csharp411 […]
how can i display random string elements in my array? in this code:
string[] word = {“APPLE”,”BAG”,”LION”,”HORSE”,”DOG”,”CAT”,”BIRD”};
Console.WriteLine(“word is: {0} “, word.GetValue(0));
how can i display random elements, one at a time?
i hope some one can help me
@Dark: Here is a sample console program that should solve your problem:
using System;
namespace RandomStrings
{
class Program
{
static void Main( string[] args )
{
string[] words = { “APPLE”, “BAG”, “LION”, “HORSE”, “DOG”, “CAT”, “BIRD” };
int count = words.Length;
Random r = new Random();
for (int i = 0; i < 10; i++) { int index = r.Next( count ); Console.WriteLine( "word is: {0} ", words[index] ); } Console.ReadLine(); } } }
thank you very much sir.. i really appreciate ur help.. thank u very much!!!
can someone plz help me in my advance keyword search, i want that while writing first few alphabet in the seachbox all the related comes in a listbox below….
for example like google does…. plz help me out…..
can some one help how to create a timer in associated when typing a string,
the syntax is like this,
when i type a string, and there is a 40 sec. countdown timer so that if the time is over and the user did not finish typing it’s game over…
can anyone help me??/
can some one help me here…
i want to remove randomed array string elements and i’m not using an array list.. is there other way to remove the elements that has been displayed in my program??
hope there’s someone could help me.. 🙂
hey everyone i need help! please….
i need to display my timer and my input string. and i can’t display the timer while im typing the string.. how can i do that??
i’m using c# console app. is it possible??
here is my code:
Console.Write(“nttt{0}”, word[index]);
Console.Write(“ntttType here: “);
input = Console.In.ReadLine();
Console.WriteLine(“ntt”,time = timedata(timer));
Hope someone can help me i need this badly for my project…
@Dark: I don’t think it’s possible with a console program. You probably need to switch to WinForms.
hey everyone i need help! please….
i’m now changing my plan to jumble the random words in my array.. how can i do that and compare if the answer is true?
string[] word = { “LION”, “APPLE”, “BAG”, “HORSE”, “RING”, “DOG” };
Random r = new Random();
int count = word.Length;
for (i = 0; i < 4; i++)
{
int index = r.Next(count);
Console.Write("nttt{0}", word[index]);
Console.Write("ntttType here: ");
input = Console.In.ReadLine();
how can i jumble the random words?
Hope someone can help me
everyone!! is this possible?
i want to type a string while there is a for-loop timer?
is that possible??
and how can i do that?
Dark, use a system timer and in the interval, write to the console. Or spawn a thread that just writes to the console.
The secret is to use positions when you write to the console.
for(int x = 1; x <= 100; x++)
{
Console.SetCursorPosition(5, 5);
Console.Write("{0}", x);
}
This would write out 1 to 100 all in the same poition within the console. SO using some pre-planning you could write all sorts of stuff out to a console window without it ever scrolling.
@Joe, great idea, thanks for commenting!
this isn’t 100% perfect, but it’s a good starting place. tweak as you find necessary (beware of word wrap):();
oTimerThread.CurRow = 10;
oTimerThread.CurCol = 25;
Thread oThread = new Thread(new ThreadStart(oTimerThread.RunTheLoop));
oThread.Start();
string sSomeValue = Console.ReadLine();
}
}
internal class MyTimerThread
{
public int CurRow { get; set; }
public int CurCol { get; set; }
internal void RunTheLoop()
{
int iLoop = 0;
while (true)
{
if (iLoop > 0)
{
this.CurCol = Console.CursorLeft;
this.CurRow = Console.CursorTop;
}
iLoop++;
Console.SetCursorPosition(5, 5);
Console.Write(DateTime.Now.ToString(“hh:mm:ss”));
// reset position
Console.SetCursorPosition(this.CurCol, this.CurRow);
Thread.Sleep(1000);
}
}
}
}
I cleaned it up a bit, the properties of the MyTimerThread class weren’t necessary…();
Thread oThread = new Thread(new ThreadStart(oTimerThread.RunTheLoop));
oThread.Start();
string sSomeValue = Console.ReadLine();
}
}
internal class MyTimerThread
{
internal void RunTheLoop()
{
int CurCol = 0;
int CurRow = 0;
while (true)
{
// save current cursor position
CurCol = Console.CursorLeft;
CurRow = Console.CursorTop;
// set new position
Console.SetCursorPosition(5, 5);
// write the time
Console.Write(DateTime.Now.ToString(“hh:mm:ss”));
// reset position
Console.SetCursorPosition(CurCol, CurRow);
// pause the thread for a second
Thread.Sleep(1000);
}
}
}
}
Hi guys,
I am in need of help, i want to take a string input from a user and then store each word separated by space in an array (NOT ALLOWED TO USE SPLIT). Any suggestions
@Sarfraz Why aren’t you allowed to use Split? it’s what it’s made for…
Other than that, you’ll have to loop through the string character by character, and evaluate each character. If it’s a space (ascii 32) then that’s where you’d break it up yourself.
how to reverse a string by using substring in C# ?????
there should be one member, 3 methods like access input,finding reverse and display output.
I’m not sure if it’s changed since this article was written, but the == equality operator can handle null strings without issue. If you try doing:
string s1 = null;
string s2 = “stuff”;
if(s1 == s2)
Console.Write(“== says they match”);
if(s1.Equals(s2))
Console.Write(“Equals says they match”);
you’ll find that the first works fine, while the second throws an exception. Granted, using an equality operator for a comparison operation, which was the context of how it was mentioned in the article, is still inappropriate, but the article seems to suggest that it can’t handle null string references at all, which is not correct at this time.
Great article. Thanks for sharing
I want to input a lone string from a user at execution time. I want to count the number of the words in the string and also count the white spaces.?? i try many time with the string mystring.contains(); but it also return the just boolean value. But i want to know how many words in the string and how many white spaces in the string ??? Anyone help me please ……
i want to display the largest word in given string. anyone can help,.
You can reverse the string in One Line code
string a = “Teststring”;
string b = new string(a.ToCharArray().Reverse().ToArray());
OutPut: gnirtstseT
Here’s a blog which benchmarks several different C# techniques to finding a string within a string:
_. | http://www.csharp411.com/c-string-tips/ | CC-MAIN-2020-05 | refinedweb | 2,410 | 66.64 |
Transcript
Betts: Hi my name is Paul, @paulcbetts on GitHub and Twitter. How's the conference going? Is it going well? Is it the end of the day? Everyone's exhausted and want to go drink, and I am in the way of that. But I have a few things to tell you about. And hopefully it will be interesting. If not, you can just throw the tomatoes. It's fine. I haven't been up here and giving talks in a while so I’m a little rusty. We'll see. We'll see how it goes. Cool.
So like Neha said, I worked on Slack Desktop, which was the first Electron app outside of Atom. Electron was originally not this separate project, it was really just this library to host Atom the text editor. How many people have used Atom, the text editor, before? It's pretty good. It's not bad. But then they realized, they were like, "well, this is a really great tool to build this desktop application, but we think it can be a really great tool to build all kinds of desktop applications." So that's cool.
Who knows who Raymond Chen is? Raymond Chen is this famous developer at Microsoft. He's a prolific developer, and he has this habit of answering every question that could possibly be given to him. He's on a million mailing lists, and he just responds to them always because he wants to help people. He knows the answer to a question, so he can answer that question and help them out. And this compulsion to answer all these questions is called Raymond Chen Syndrome, and I have Raymond Chen Syndrome, unfortunately. So I spent a lot of time in the Atom community Slack and these other places trying to answer questions and help people out, on Stack Overflow and stuff like that. And so I noticed some trends when people are writing their first application in Electron: a few things people are doing that can be done a little better, and I've got some suggestions on how to improve them. So here are a few things I noticed people doing in Electron apps that make users mad, right?
Memory Usage Matters
So one thing that's really important with Electron apps is that memory usage matters. We'll put it in big, big, big words. Electron gets, in my opinion, a little unfair rep about using too much memory. People get a little grumpy about it. They're like, “oh my, you know, A95.exe only used one MB of memory, and it did all this stuff. Why can't Electron apps in 2018 use that?” It's mostly nonsense, but they see a number and they get mad about it because of this.
So this is every conversation I've ever had about Electron memory usage. Them: "I'm so mad, I'm posting on Twitter." Or mad people were on Twitter; that's where they live. So I'm like, "Okay, okay, I understand, that sounds really frustrating. What's the commit charge?" And so commit charge is a measure of the percentage of RAM that's actually being used by programs, not cache memory. It's not free. It's actually in use. And I'll have them look up that number and they'll say, "Oh yeah, it's 45%."
And this is the emotion that comes to me when I hear that. The RAM isn't doing anything. Why would we not use it? It's good to use. RAM that does nothing is not helping you to move faster. And so Chromium, in general, has taken the tack that we would rather have 60 frames per second all the time than have delays and jank and scrolling problems in order to save memory. Which in 2018 is a pretty good choice in my opinion. Might as well use the memory in order to make a better experience.
But on the other hand … So that complaint is kind of nonsense. But as Electron developers and Electron app developers, we can do something about this. We should try our best to actually lower memory usage as much as we can and not use it for no reason, because we want to respect our users and respect their memory usage. And the problem is, the people who actually get hit by memory usage problems will never be able to describe it to you. They're like somebody who's a lawyer or in finance, and they're really good at lawyering and financing, and they don't care about computers, right? They pay us to care about computers, and so they're not going to be able to describe this problem to us when they're actually out of memory and their computer actually grinds to a halt. They're just going to be like, "my computer grinds to a halt, and this sucks, and this is a bad day."
Load Less Stuff
So we should do something about this. And the answer is load less stuff. It's so easy. So there's a few big things that consume a lot of memory, and I'll talk about the ones that are maybe not as obvious as you would think. So we'll talk about DOM Elements. So DOM Elements are any HTML tag, right? Any of the contents of the HTML tags. So for example, the text in between paragraphs, that's a DOM node, but it is not a DOM Element. That takes a lot, right? And a lot of times those tags have associated information that takes a lot of memory. And those aren't super obvious.
So for example, if you create a DIV that is 20,000 by 20,000 pixels. You set the width and height explicitly, right? And you put some text in the middle. That DIV, the actual content, is pretty small, right? It's just a DIV. But if you remember back to Jenna's talk (who went to Jenna's talk before us? It was really good, right?), she talked about the compositing and painting rules. And so when it goes to paint that huge DIV, it has to create a giant bitmap in memory, right? That bitmap, even though the DIV was maybe 30 bytes of text, that composited layer is really huge. And so this really comes into effect - of course, if you remember, I worked at Slack, right? So Slack consists of a message pane where you just scroll and scroll and scroll and scroll. And if you looked at the layers - they have fixed this since then - back in the day, you'd see a giant 30,000-pixel layer. That takes a lot of memory. They would also do this amazing trick back in the old days where they're like, "we want to hide an element, but we're from the jQuery era, we don't trust hide and display: none. What we're going to do is we're going to set it at pixel negative 20,000, 20,000." And what that immediately does is create a giant layer, because it's got to paint it way off screen. So things like that can contribute to memory usage.
And so in general, we want to map the number of DOM Elements, the number of DOM Nodes and stuff like that, to the stuff we see on the screen, right? We don't want to have a bunch of DOM Elements and Nodes hanging around that are unrelated to stuff that the user is seeing, right? Like long scroll areas. Images are really big, right? So if we have giant images, that's going to cause our memory usage to be really big. And we'll go over a few ways to deal with that.
And the other one is much more straightforward: it's the stuff that developers think of as JS Heap, right? JS Heap is all the objects you allocate in your JavaScript, all the libraries that you load, all the code that you import when you start your application. So that one's a little bit more straightforward. In general you want to take a look at that Nodes number and just make sure it's not crazy high, right? If you're getting around the 40,000, 50,000, 100,000 range, then you might have something you can debug or get rid of. This you can get to by going to the Performance tab in DevTools, checking Memory, and then taking a profile. Just scroll around or do something, and then you'll see that Nodes thing. Yeah, that's super useful. And it's really subtle, so I wanted to point it out, because a lot of people are just like, "Nodes, okay."
So the easiest way in 2018 to solve this problem of too many HTML Elements is to use React or Vue and virtualized lists. Who knows what a virtualized list is? Like React Virtualized, etc. It means that … I'll just explain. So you have a list of 10,000 messages, but on screen you might only see 20, right? Instead of constructing the associated DOM Elements for 10,000 items, we're going to construct maybe 40, right? And as the user scrolls around, we're going to stuff in the real values and put them on screen. So essentially it does that work of only having the DOM Elements on screen that you need; it does it for you. And so a common library for this is called React Virtualized. It's really useful. Anytime you have a really long list, you want to use React Virtualized, because you don't want to have all of that in the DOM. It uses a lot of memory.
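The windowing arithmetic that libraries like React Virtualized do under the hood can be sketched in plain JavaScript. This is a toy sketch, not the React Virtualized API; `visibleRange` and its parameters are made-up names for illustration. Given a scroll offset, a fixed row height, and a viewport height, it computes the small slice of items that actually needs DOM elements:

```javascript
// Toy sketch of list virtualization: only the rows intersecting the
// viewport (plus a small overscan buffer) get real DOM elements.
// `visibleRange` is a hypothetical helper, not a React Virtualized API.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 10) {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1;
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(totalRows - 1, last + overscan),
  };
}

// 10,000 messages, 20px rows, 400px viewport, scrolled to row 500:
const range = visibleRange(500 * 20, 400, 20, 10000);
console.log(range); // { start: 490, end: 529 } -- about 40 rows instead of 10,000
```

As the user scrolls, the same calculation reruns and a fixed pool of row elements is recycled with new contents, which is why the Nodes count stays flat no matter how long the list gets.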
Cool. So here's an important part. So a lot of this advice applies to all websites; you could use it for any web application. This part is kind of specific to Electron. So Electron is interesting because it's kind of a different environment than the web, because you're loading everything locally. It means some of the trade-offs are a little different, right? Like you're like, "oh, it's like I have a network that's super, super fast and loads at one gigabyte a second," which is kind of true. But it means that everything you're loading on startup in terms of JavaScript parse, which normally isn't as big a deal because network time is always way bigger than parse time, parse time is now a big deal. So for example, if you load up rxjs, which we used a lot in Slack Desktop, if you load the full version, it can take up to 1.5 seconds of parse time, just the JavaScript evaluating that giant file and then figuring out what to do with it, before your application even loads.
And so parse time, and anything else in the startup path, adds to the amount of time between the user clicking your app and it opening. So that's a huge, important thing. You want that to be really fast, because desktop apps feel like desktop apps when you double click on them and they open; when they don't take forever. So even though we've got this amazingly fast network and we can load all this stuff really fast, the flip side is that we really care about initial paint. And so anything that gets in the way, like the libraries you load on startup, will definitely get in the way of that. It's also bad for memory usage, because Node modules never get unloaded. So once you load them, they're stuck. There are some clever hacks you can do to kick one out of memory, but those involve taking over the Node module system. In general, libraries, once you load them, they're loaded.
So we can see that the good news is that this is really easy to catch. All you've got to do is go into the Performance pane and refresh while recording, and you'll spot it right here. You can see module.require, module load, so we're stepping through the Node module system. And if you look at top-down, you can see what modules are taking the most amount of time. And so that makes it easy to say, "oh, I'm loading all of Moment.js," and Moment.js includes all this crazy time zone table data that's really huge, and maybe you don't need that, right?
So these kinds of things are really easy to do. They don't take a lot of time, but a lot of Electron developers, I find, just don't think of them. And so to make a great desktop app, we should think of these things. We should think of memory usage, and we should think of startup performance. And the good news is Chrome DevTools are really great; they're some of the best developer tools in the world for debugging and profiling and stuff like that. So that's cool. So we've got the tools, we should use them.
Use the Heap Profiler
Use the Heap Profiler. So if we go to the Memory tab in Chrome DevTools, it'll tell us all kinds of stuff. It'll let us take snapshots of memory usage, and we can see a certain amount of memory. This is a snapshot of literally a blank Electron app, the Electron starter app. So we can see that the vast majority of memory usage is compiled code, which makes sense because there's nothing else. But we can see by type what's interesting in here. We can take a delta snapshot: we take a snapshot, then maybe use our app or load a feature or something like that, and then take another snapshot and see what changed, which is useful for tracking down memory leaks and stuff like that. It will tell you … This is a little hard to understand, or to dig through why something is still in memory. So it has this thing called retainers, which are the things that are keeping it in memory. Remember, JavaScript is a garbage collected language, so the reason that something is in memory is because something else is referencing it. So you have to dig through and see, oh, this is being held on to by window, right? And window never gets unloaded. So this can be a little bit of magic, but I'm sure that there's lots of good talks on it. But in general, this will tell you the Node memory that you're using, like loading Node libraries. It will tell you about your application memory. It'll tell you about most things that are interesting. So we're going to drive this number down.
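Alongside DevTools snapshots, you can watch these numbers programmatically. Node's `process.memoryUsage()`, which is available in both Electron processes, reports heap and resident-set sizes, so a rough before/after delta check in the spirit of the snapshot comparison above might look like this (`heapDelta` is a made-up helper, and the numbers are only approximate because the garbage collector runs whenever it likes):

```javascript
// Rough programmatic version of a delta snapshot: measure heapUsed,
// do some work, measure again.
function heapDelta(work) {
  const before = process.memoryUsage().heapUsed;
  work();
  const after = process.memoryUsage().heapUsed;
  return after - before; // bytes; can dip if a GC ran in between
}

const retained = [];
const delta = heapDelta(() => {
  // Retain 100,000 small objects so they survive garbage collection.
  for (let i = 0; i < 100000; i++) retained.push({ i });
});
console.log(`allocated roughly ${(delta / 1024 / 1024).toFixed(1)} MB`);
```

This kind of check is handy in automated tests: load a feature, unload it, and assert the heap comes back down to roughly where it started.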
Don’t Run Stuff in the Main Process
Cool. So performance is cool, memory usage is cool. Another thing that I see with a lot of new Electron developers: they're often coming from a web developer world. They're not used to native development. Chrome and Electron have this concept called the multi-process model, right? So your program in an Electron app starts off as a Node script, right? There's no DOM, there's no HTML, it's just a Node script that's running. But you have this magic class called BrowserWindow, right? And so when you create this class called BrowserWindow, just new BrowserWindow, you will create an entirely separate process that loads an HTML page, and that HTML page is going to have DOM and CSS, and it'll run its own JavaScript and kind of do all the things that you'd expect, and it represents a single window that's open, right?
And so the vast majority of Electron apps just through kind of coincidence or UI design, open a single window. It's like Slack desktop opens a single window. And most chat apps open a single window. That's not required. It's just happens to be that way. So most simple Electron applications are going to have two processes, two Node scripts running at the same time or two JavaScript environments. It's going to have the main process and what's called the render process. And so the render process is the one that can create DOM elements and stuff like that. And the main process is the one like our startup one. And so people see that model and they're like, “oh, main render, front end and back end”. And that's a really easy mistake to make because it seems like that's how it should be, but it's not, it's not at all. So we don't want to run stuff in the main process.
The Main Process Is for Orchestration
But what about … No. So the main process is not the backend, it's not a background thread. People want to treat it as the background process: "I can run stuff in the background and do all this really intensive code work." It's not. It's not the server. So the main process is simply for orchestration. We don't want to do anything in it; we just want to have the main process tell other processes what to do. The main process actually does a whole bunch of stuff inside Chromium. For example, the main process handles all the network requests, right? So when you make a network request, it gets sent from the render process to the main process. The main process actually does the network request, comes back with the data, and sends it back to the render process. The reason it does this is that in Chrome, the render process, essentially the contents of a tab, never has access to directly make network requests or anything. It's because of the sandboxing, right? So we want that render process, the one that's actually executing normal content, to have as little privilege as possible.
Of course, in Electron we throw that all out the window. Render processes can do whatever they want. It's neither here nor there. But this is the important part: running code in the main process means that every other process grinds to a halt, basically, because Chrome is always sending these messages to make network requests, to do things. The main process owns the actual native window, right? So you create a window on macOS, it's like an NSWindow, and on Windows it's an HWND. The main process owns that. And so you do a resize, right? And so it's got to signal via IPC, by sending messages back and forth, to tell the render, hey, "your size used to be this number, now it's that number. You need to redraw." So Chromium has all kinds of these messages that it sends back and forth between these two processes that you don't see, that you don't have to worry about.
But the problem is that when your main thread is busy doing stuff, like you're calculating the digits of pi or you're running a server, it can't do any of that stuff. And more importantly, if a render process is waiting for a response, it can't do anything either. So spinning on the main thread, the main process, blocks all of the render processes, who get stuck waiting. So it means that in general, the more your main process does, the more your app is going to feel glitchy and janky. You're going to get weird delays in scrolling, or the app is just going to feel glitchy, basically.
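The underlying mechanic is easy to see even outside Electron. Here's a plain Node sketch (nothing in it is Electron-specific; it just shows how a synchronous spin starves an event loop the same way it starves Chromium's internal message traffic):

```javascript
// A synchronous busy loop starves the event loop, so even a timer due in
// 10 ms cannot fire until the loop finishes. The same thing happens to
// Chromium's internal IPC traffic when the main process spins.
function busyWait(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // stand-in for "calculating digits of pi"
}

let firedAt = null;
setTimeout(() => { firedAt = Date.now(); }, 10); // due in 10 ms

const start = Date.now();
busyWait(200); // nothing else runs until this returns

// 200 ms have passed, but the 10 ms timer *still* hasn't run:
console.log('timer fired during busy loop?', firedAt !== null); // → false
```

In an Electron app, that blocked callback is a window resize, a redraw, or a network response instead of a toy timer, which is exactly why the whole app feels stuck.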
So Electron actually exposes IPC between processes. So we can send messages between the render and the main process. And it exposes a synchronous way that actually stops and asks for a response. And so what's tricky is that one of the things that's really common in Electron is to use the remote module. How many people have ever used the remote module? It's really common in Electron. The remote module is a way to access things on the main process from a render process. So it's really convenient. But the problem is it's also really slow, because it has to use this synchronous send. It has to wait for a response. It stops the render process, asks the main process a question, gets back a response. And so if you do that once, it's not a big deal. If you do that by accident all the time, like from an on-click callback or an on-scroll callback, you're going to really grind.
Even ipc.send is asynchronous. But it's not enough, because say I send a message from the render process to the main process saying "calculate digits of pi". The call itself didn't block, but the main process is still doing a ton of work. So that means that any other messages that Chromium is sending internally, or that you need to send from the render process, don't get processed, right, because it's too busy calculating something.
The main process, really the only thing you want to do in there is tell other processes what to do. So it's kind of like this message bus, right? We're sending information between windows, right? So if we have two windows and they need to share some kind of state, we'll send messages between them. Certain things you have to set up in the main process, which is kind of unfortunate, but it's how it works. And so for example, menu items, because the main process owns the window, it also owns the menu associated with the window. So menus are in the main process, we have to tell the window render process about these events. And so a lot of what you'll do in the main process is just kind of routing messages, like sending them back and forth.
Or like if you want to bounce the dock, right. So on macOS your programs are in the dock, and you can bounce it to tell them that there's like a message or something. Or if you have a menu bar or a tray item on Windows. And there are a bunch of APIs that only work in the main process, like crash reporting. We want to do that in the main process. But in general, we don't want to do anything in the main process.
So you might ask yourself, how can I do anything? Running code in the render process causes my app to be slow, because the render process is the thing that draws my UI. And the main process causes the app to be slow because it causes all this jank. Where can I run anything? I learned about this trick from the VS Code people. They have all kinds of stuff that they run in the background, right? Every time you're typing in JavaScript, they're running linters, TypeScript is compiling in the background, they're doing all this stuff. So how do they do it? And so what they do is this super clever trick. What if we create a browser window but don't show it? So it's hidden. So now we've got a process that has a DOM technically, has CSS technically, but more importantly has a JavaScript run loop that we can do stuff in, but nobody can see it. It can grind all it wants, it can spin all day long at 100% CPU, and it won't affect the performance of our application. So that was a super cool trick. And so what they did is they took it even a step further and they said, "oh, well what if we had a pool of four background browser windows?" And they just sat there and waited for requests, and we could just put something in a queue and treat them like threads. So they essentially turned browser windows into task-pool threads. It's pretty clever.
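The windows themselves need Electron, but the queueing half of the trick can be sketched in plain JavaScript. This is an illustrative sketch, not VS Code's or any library's actual implementation; every name here is invented. In a real app each pool slot would wrap a `new BrowserWindow({ show: false })`:

```javascript
// A fixed-size pool of slots (stand-ins for hidden browser windows)
// that pulls jobs off a queue, never running more than `size` at once.
class TaskPool {
  constructor(size = 4) {
    this.size = size;   // the "pool of four background windows" cap
    this.running = 0;
    this.queue = [];
  }
  run(job) {
    // Callers get a promise back, just as if the work ran locally.
    return new Promise((resolve, reject) => {
      this.queue.push({ job, resolve, reject });
      this._drain();
    });
  }
  _drain() {
    while (this.running < this.size && this.queue.length > 0) {
      const { job, resolve, reject } = this.queue.shift();
      this.running += 1;
      Promise.resolve()
        .then(job)
        .then(resolve, reject)
        .finally(() => { this.running -= 1; this._drain(); });
    }
  }
}

const pool = new TaskPool(2);
const jobs = [1, 2, 3, 4].map(n => pool.run(() => n * 2));
console.log(pool.running, pool.queue.length); // → 2 2  (two running, two queued)
```

Swap the in-process `job()` call for "send this job to a hidden window over IPC" and you have the shape of the VS Code trick.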
Electron Remote
So I created this library called Electron-Remote, which is impossible to Google, because as soon as you type it, you'll get the Electron remote module, the synchronous one. I could've come up with a better name. So the cool thing about Electron-Remote is it's just as convenient as remote. It gives you this fake object that you can work with and just call methods on. You can just pretend that it's a regular class, but it's all asynchronous. It means that if you call a method, its return value will not be, you know, an int or a bool; it'll be a Promise of an int or a Promise of a bool. And so that conversion is automatic. And it will read down the prototype of some object and then generate a fake one on the other side. So that's super cool. It uses this feature of ES2015 called proxies. Once you figure out how proxies work, you'll try to use them for everything. And the V8 team will get really sad because they're slow, and they'll be like, "oh no, that breaks every fast path we can find." But for these, when we're going over remote processes, we kind of don't care; the overhead is not a big deal.
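The proxy idea can be sketched in a few lines of plain JavaScript. To be clear, this is not electron-remote's actual code, just the shape of the trick, with a fake in-process `transport` standing in for the IPC layer and all names invented:

```javascript
// A Proxy turns every property access into an async method whose
// return value is always a Promise.
function makeRemoteProxy(transport) {
  return new Proxy({}, {
    get(_target, method) {
      // Looking up `proxy.add` hands back a function that "sends" the
      // call and resolves with whatever comes back.
      return (...args) => Promise.resolve(transport(method, args));
    },
  });
}

// Fake "other process": just a table of handlers on this side.
const fakeRemote = {
  add: (a, b) => a + b,
  userAgent: () => 'FakeChrome/1.0',
};
const remote = makeRemoteProxy((method, args) => fakeRemote[method](...args));

remote.add(2, 3).then(sum => console.log(sum)); // → 5
```

Replace the `transport` function with "serialize the method name and arguments, send them over IPC, and resolve when the reply comes back" and you get the async-everywhere remoting the talk describes.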
So one of the things that's super useful about Electron-Remote is it lets us call into render processes from the main process. So we can take a window … we're in the main process here. We take a window and we get this fake object, call it myWindow.js. It's confusing because we're talking about browser windows and also the global JavaScript window object. So this is the global window object of some window. And so now we can just call methods on it. We can call getters; we have this kind of magic syntax for getting a property, because getting a property is also asynchronous here, even though actually getting a property is never asynchronous, because it just returns a value. We can call methods on it. So we can get navigator.userAgent, even though in the main process that's not available. And we can do this asynchronously.
And I lost a slide. So Electron-Remote also implements that idea. It was supposed to be in the slide, but I think I messed up the slides. Anyway, Electron-Remote also implements that idea of the task pool, right? So you can actually just write a module that does a lot of stuff, right? Like it does some CPU-intensive things. You can hand it to Electron-Remote: here's this module, I want it to be in some other windows, and it'll hand you back this fake object that is the module, right? So all the same functions are on this fake module object that would be there if you just required it. And then as you call methods, they'll be sent to, I think the cap is like four windows, and it'll spin them down as they're not used, right? So as you start calling methods, it'll start creating more windows up to a limit. And then any other requests get queued. So you can do these really computationally-intensive things in the background and not worry about making your app slow.
The other thing that it's nice for, that we used at Slack desktop, is that if you have code that's really crashy, maybe calling into a modern Windows 10 operating system function that always crashes for no reason, the thing that gets taken down is a random task-pool thread, and you're just like, "oh no, it's gone". Sorry. So you can run all your crashy code in some other context and not worry about it breaking the app. So that's super cool. It's really easy to use; that's the thing I like about it. I've described all these complicated things, but the actual user interface to this library is just really straightforward and easy. So that's cool. So that's one way to instantly improve performance in your Electron application; just never do things in the main process.
RequestIdleCallback Is Super Cool
Cool. So, I want to talk about another web API. This is actually a browser API that is implemented in Chrome but not other places, called requestIdleCallback. And so this works in regular Google Chrome, and it's really useful for making responsive web applications. And so one of the cool things about Electron in general is that you're guaranteed you're on Chrome, and you're guaranteed you're on some version of Chrome. So you can go to … who knows about chromestatus.com? So chromestatus.com is a site that the Chromium team maintains. And it shows all the cool stuff that's coming in Chrome. Basically it just shows this list of features and you're like, "hell yeah, hell yeah, that's cool. I'm using that." But normally if you were writing a web application, you'd be like, "oh this looks cool, I'll use this sometime next year when there's a polyfill and people get around to implementing it in IE and all these other things." But you're on Electron, so you can use them now, and you don't have to care about anyone else but Chrome, which is awesome. And so requestIdleCallback is one of these really great APIs.
So it's like setTimeout in that you're running something on a timer, you're running something later, you're deferring an operation. But it's guaranteed to run when the UI isn't busy. Now, trying to do this with any other API is really difficult. requestAnimationFrame will kind of do it, but not really. setTimeout is always going to run at a certain time, and you can never guess if the user is going to move a thing. requestIdleCallback is guaranteed to run only when the user isn't doing anything, which is great. And it kind of goes a little further. It gives you a function that you call that tells you the amount of time you have before the user's going to do something. Now, that's kind of reading the future, but the idea is: we guarantee you have at least a certain chunk of time to do something. And so you can keep asking that. You can do an operation; say you're saving 100 elements to IndexedDB, and that takes a while. And you might save them 10 at a time. So you'll save 10 and then ask this callback, do I have enough time to save 10 more? Save, save, save. Oh, I'm out of time. Then you schedule it again. And so anytime you want to run an update in the background, requestIdleCallback is really cool. Or in general sending telemetry data, like Google Analytics stuff. Anything you want to do in the background where you want to guarantee that it's at some time when the user isn't doing stuff.
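The save-ten-ask-again loop looks roughly like this. A hedged sketch: the deadline is taken as a parameter so the loop can be shown (and exercised) without a browser; in Chrome or Electron you would pass `saveSomeItems` to `requestIdleCallback`, which hands it a deadline object with exactly this `timeRemaining()` shape. The helper names are invented:

```javascript
function makeChunkedSaver(items, saveOne, scheduleNext) {
  let i = 0;
  function saveSomeItems(deadline) {
    while (i < items.length && deadline.timeRemaining() > 0) {
      saveOne(items[i]); // e.g. write one record to IndexedDB
      i += 1;
    }
    if (i < items.length) scheduleNext(saveSomeItems); // out of time: ask again
  }
  return saveSomeItems;
}

// Exercise it with a stub deadline worth three "units" of idle time:
const saved = [];
const pending = [];
const saver = makeChunkedSaver([1, 2, 3, 4, 5],
                               x => saved.push(x),
                               fn => pending.push(fn));
let budget = 3;
saver({ timeRemaining: () => budget-- });
console.log(saved); // → [ 1, 2, 3 ]  (out of time; it rescheduled itself)
```

In the browser version, `scheduleNext` is just `requestIdleCallback` itself, so the work trickles through whenever the user leaves the UI alone.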
Oh, and there's my Electron-Remote slide. I found it. Yes. Cool. Yeah, so this is the API, right? You give it the name of a module and then you run it through requireTaskPool, which is just require, but it puts it on a task pool. And then you can call your methods, calculate digits of pi, and you await it. It returns a promise, yadda, yadda, yadda.
Just Make an HTML Page
Cool. So some more performance advice. How's this going so far? People enjoying this? Good. No tomatoes yet. All right. So more advice. Just make an HTML page. And so what we want to avoid is this. This is the worst loading screen. A lot of people do it; it seems really tempting, and there are pros and cons to it, but making your Electron app just a frame around a website is really easy to do. But from a user perspective, it's kind of a bummer. Because they see this, and this doesn't feel like a desktop app. This feels like a webpage in a desktop frame. And so if you're making a remote content app, like a hybrid application, there are a lot of things you can do. You can set up service workers, you can do all this stuff. But if you're starting from scratch, it's really great to just think of writing an Electron application as: I'm going to make a static HTML page, right? I'm going to have a folder full of HTML and CSS and JavaScript.
Now, that's not to say you can't use Babel or Sass or all these things, all this modern web technology. You can totally do that. But the idea is that we want to have all of our content at the end of the day be local, right? So it's downloaded with the application. When the application updates, you get a new version. That's the easiest way to skip a lot of security problems, a lot of performance problems, because anything local is going to load really fast. So we know that it's going to load really fast. It gets you in the mindset of: I'm a desktop application, I'm going to save a cache locally using local storage or IndexedDB or any other storage format. It won't get you in this mindset of: oh, the minute I load I need to make a JSON request to load the actual data of my app. And so having this desktop mindset when you build a desktop application is really great, and you'll end up with a better application. Offline mode is way easier because you're already thinking from the beginning: oh, it needs to be on this computer.
Just an HTML page is also kind of easier to manage, right? When you create a hybrid application, you're loading remote content into a local context and then running it. That's really dangerous from a security perspective, and you have to be really careful to decide what APIs are available to the remote side and build this firewall to make sure they can only do certain things and not other things, and it's really easy to mess up. When you get rid of this security boundary and say only local code, only code that's distributed with the app, is going to run, that makes security way easier.
So one of the things that's really important with this policy of only running local code: cross-site scripting matters so much. Somebody can send you a Messenger message or whatever that gets parsed as HTML and then executed. You're evaluating it; you're loading remote content and running it, right? You have to be really careful with remote content, to parse it and render it safely instead of just throwing it in the HTML page. So if your JSON API consists of an HTML blob that you just cut out and then throw into a DOM element, then you're going to have the same security problems as loading remote content.
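One minimal defense for that last case, sketched in plain JavaScript. Real apps should lean on `element.textContent` or a vetted sanitizer library rather than this hand-rolled escaper; it's here only to show the idea:

```javascript
// Escape the blob before it ever touches innerHTML, so markup arrives
// as inert text instead of live DOM.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, c => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;',
  }[c]));
}

// A "chat message" that is really a script-injection attempt:
const message = '<img src=x onerror=alert(1)> hello';
console.log(escapeHtml(message));
// → &lt;img src=x onerror=alert(1)&gt; hello
```

The broader point stands regardless of the helper you use: anything that arrived over the network is data, not markup, until you deliberately decide otherwise.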
Don’t Run Web Servers in Your App
Cool. So another thing that people want to do is run web servers in their desktop applications. They're coming from a web world and a web tooling world, and they're like, "Oh, I've got to set up Express and point Electron at localhost in order to make my app work." You shouldn't do that. You really shouldn't do that in production. Because what happens if more than one person uses your app, right? So, in the enterprise, it's really common to set up a terminal server, right? This big huge box that has, you know, 50 users, and people install your app, and then all 50 users can use it. Well, if you all expect that localhost:1234 is where I'm going to put my web server and then connect to it from Electron, now you've got more than one person doing it at the same time. It becomes a huge security problem, because now your web service that's running on some person's machine is a great way to move between users, right? And one of those users could be a website that makes requests to localhost.
If your app runs as admin, which it shouldn't, but it can, it's a great way to get local escalation of privilege, or to have websites be able to run desktop code, which is really bad. And people do this not because they're lazy or whatever, but because they want to run a lot of their web tooling, right? So they want to use React, and they want to use React hot reload and stuff like this. So the good news is that there's a way to do that without having to run any kind of local web server at all.
So there's this library that myself and some other people, mostly other people, have worked on, called Electron-Forge. So Electron-Forge takes all of the annoying parts of writing an Electron application, all the stuff that isn't writing your app, like setting up builds and setting up Babel, and it's a little bit like Parcel for Electron applications, in that it's trying to get rid of all the grunt work that you don't want to do and free you up to just write your application. And so it sets up all this stuff like React and Vue and TypeScript out of the box. It's really easy to use. It's a little bit like Create React App in that it does all this stuff for you.
So we don't need Express, we don't need Webpack. It's going to do all this stuff for you during development, and set up things like hot module reload. So as you're saving, it'll reload the application, like reload components. We took all the knowledge that desktop developers have around packaging and creating installers and code signing and stuff like that, and did all that for you too. So it's a really great way, if you're writing a new Electron application, to just spend time on the fun stuff and not spend time on build stuff, which is like the opposite of fun.
But I like Webpack!
I like Webpack. Who likes Webpack? It's a very powerful tool that sometimes is not a joy to use. I like Webpack too; I think you can't argue with success, right? It powers some really huge websites and it's, you know, it's a great tool. Unfortunately, using it with Electron gets a little weird, because when Webpack bundles all these things, you're writing require statements, or import statements, in your code, right? And Webpack is taking all those import statements and kind of mushing them together and creating this one giant file called myapp.js. And so it has this concept of require, and so it implements require because it implements modules.
Electron also has modules, but it uses the regular Node module system. So now when you type require, you're getting Webpack, but sometimes you need Node. So the biggest problem that you hit when trying to use Webpack is that as soon as you go to load a native Node module, like a spell checker or things that do notifications, you blow up. And so packaging becomes this really weird dance of trying to exclude stuff: ignore Electron, ignore this module; I've got to figure out a way to have a node_modules folder and Webpack's node_modules folder. So now I have to have two node_modules folders, one that's in the app and one that's not in the app, that gets Webpacked. Because if you try to Webpack a native library, then, of course, it's going to fail. And so it gets weird. It gets really frustrating. And so Electron-Forge handles a lot of this stuff for you. Sometimes you need Webpack; people will do these various kinds of specialized things and they've got to use Webpack, and that's kind of a bummer. But if you can, Electron-Forge is great.
Node-RT Is Cool
Cool. So mostly in the talk, I've talked about performance, memory usage, do-your-homework kinds of topics: blah, performance, security, memory, thanks mom. So now I'll tell you about a thing that has nothing to do with performance or memory or do-your-homework topics. I'll talk about a library that's really, in my opinion, under-marketed and not as well-known as it should be. It's called Node-RT. And Node-RT only works on Windows. What it does is let you call basically all of the really cool Windows APIs from JavaScript in Electron. So this is really powerful: you can call Windows 10 APIs from Electron super easily. I say Windows 10 because, as the name Node-RT suggests, it's calling into the Windows Runtime, the new-style APIs, like the UWP applications use.
And so if you're trying to build a desktop application using Electron, you want to be able to do these desktop-y things, and so there are all kinds of really interesting things. Digging through, you can get the geometry of the display and how many monitors they have, so you can save off that information, save off where the window is. There's a thing you can call for geolocation, which in Electron otherwise doesn't work because you need a Google token. I didn't even know this until I was researching for this talk: Windows apparently has an OCR engine built in. You can hand it an image and it'll be like, "here's the text." I didn't even know it could do that. That's super cool. You can do things like capture the screen, all the things that desktop apps can do. You can get power information, see whether you're on battery or plugged in, and you can say, "oh, I'm plugged in now," and be a little bit more aggressive about doing stuff.
What about macOS? There is no super easy way to call macOS APIs from Electron. You have to write a native Node module, which means you're dumped into writing C++ code, which is a bummer. For Electron there are a lot of really simple Node modules that you can kind of cargo-cult: copy the whole thing and replace the insides. So it makes it a little bit easier. You can also do really simple things with this module called node-ffi, which is very old, from the beginning of Node.js, and lets you call C functions from Node. It's really easy to mess up, because you're actually writing a C header definition of a function, and if you make a typo, or say that something is eight bytes when it's four, node-ffi will totally trust it and then corrupt the stack and bring down the renderer. But it can be useful. And node-ffi is actually cross-platform; you can use it on all three operating systems, which is cool. But Node-RT is really easy to use. The API is really straightforward.
How Can I Figure out What I Can Do?
So how can I figure out what I can do? If you install Visual Studio, there's this feature you can get to if you dig. How many people have done native Windows development or Windows-y things? If you go through the joy of installing Visual Studio 2017, you can open a blank app and then get to this thing that lets you see all the APIs. And so you can kind of browse and get some ideas, like: what can I do? Poke around. And so here's how to do it. You create a new app, a Windows Universal app, doesn't matter, blank, whatever. You go to the right-hand side, go to Add References, then Windows Desktop Extensions, because UWP runs on all of these different platforms, like the phone that doesn't exist, or a HoloLens, or an Xbox. So you've got to tell it: I want to run on a desktop computer, because it'll add extra stuff. And then you go to this menu item called View Object Browser. And then you can poke around in all the libraries. It's kind of like this automated documentation-generating thing; it's actually reading the definitions through the class files and saying, "oh, this has these namespaces and these classes." You have to find this extremely obscure name, Windows.Foundation.UniversalApiContract. This is new; I didn't know about it until I tried to find what I was looking for and couldn't. And then you can scroll through; you can look at the application model, poke around, chat, contacts, stuff like that. There are a few things that you can't do unless you're a Windows Store application. Server push notifications are one thing you can't do, so being a woken-up app.
Oh, I'm out of time. So in summary, make an HTML page, load less stuff, and load in the renderer process. Do cool stuff in Windows 10. Thanks. Cool. So questions?
Woman 1: We actually have five minutes. So if you wanted to do one more slide we can. Otherwise, we can just start with questions.
Betts: Oh no, it's okay.
Woman 1: Okay, cool. You got a question?
Man 1: Hey, thanks. How do you do automation tests for an Electron app?
Betts: Yeah. So for Electron apps you can connect Electron to Selenium, just like you can with a browser. You need to do a few backflips to make that happen. Basically, when your app starts up, you'll see in the command-line parameters that somebody is trying to start Selenium, and so then you can activate a special mode of your app that just loads the app and points it at Selenium. But in general, a lot of the web techniques you would use on the browser will do.
Man 2: Hello. Hi. So suppose you want to send push notifications from the server to the Electron desktop app. How is it done?
Betts: Yeah. So in general, on desktop, push notifications are not super valuable. The idea of push notifications is waking up the app when it's not being used, right? You can be a Windows Store app and get that, just like on macOS you have to be in the App Store to get push notifications. Push notifications are separate from showing a notification, right? So if your app is open, you can use web sockets, you can use polling, or any other kind of standard thing to know that there is a notification, and then just show one yourself. On Windows 10, even if you're not a Windows Store app, you can still show notifications, like native notifications, with all kinds of buttons; you can do all kinds of really interesting stuff with Windows 10 notifications. You just can't be woken up unless you're a Windows Store app.
Man 2: So maybe I'll just use a web socket or polling. I mean, polling is probably a …
Betts: Yeah, you wouldn't want to poll, but you can use a web socket. So just like any way that you would, on the web, listen for messages.
Man 2: Cool. Cool.
Man 3: How does the same origin policy work with an Electron application?
Betts: So because your app is running in a … well, you can disable it, but you shouldn't. In general, your app hopefully is running under a file URL, and file URLs don't have a same-origin policy. So just like if you double-click an HTML file on your computer and open it in Chrome, you don't have to worry about the same-origin policy. One thing that will trip people up is that because you're a file URL, you don't have cookies. So if you try to use Google Analytics, it's going to try to save a cookie to make it work, and you don't have cookies on a file URL. But there are libraries to fake it out, to create a fake document.cookie store that saves to local storage instead. So yeah. You can actually nerf all web security, which you should not do, but you can do it.
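(That fake cookie store can be sketched like so. This is a guess at the shape of such a shim, not any particular library; `document.cookie` has an odd protocol where assignment adds one cookie and reading returns all of them, and a real shim would persist to localStorage rather than an in-memory Map.)

```javascript
function makeCookieJar() {
  const jar = new Map();
  return {
    set cookie(str) {
      const [pair] = str.split(';'); // drop attributes like "path=/"
      const eq = pair.indexOf('=');
      jar.set(pair.slice(0, eq).trim(), pair.slice(eq + 1).trim());
    },
    get cookie() {
      return [...jar].map(([k, v]) => `${k}=${v}`).join('; ');
    },
  };
}

const doc = makeCookieJar();
doc.cookie = '_ga=GA1.2.123; path=/';
doc.cookie = 'theme=dark';
console.log(doc.cookie); // → _ga=GA1.2.123; theme=dark
```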
Woman 1: All right. Do we have any more questions? Yeah.
Man 4: So you mentioned some strategies to offload things from the main thread. Are web workers just not a possibility?
Betts: Web Workers are a possibility, but they have a lot of caveats in Electron apps. The things you'd probably want to do in a Web Worker require a bunch of native code or require a Node module. Originally, Web Workers couldn't do anything related to Node modules, and now they can do certain things but not other things. There's a guide in the Electron docs that lays out exactly those kinds of caveats and limitations. So those work too. If you can jam what you want to do into a Web Worker, that's great. In general, the reason people don't use Web Workers that much is because all the data you send back and forth has to go over messages, and usually you're offloading the work because the data is huge, and you can't send it as a message back and forth because sending this giant IPC message grinds everything. So it limits the utility of Web Workers. But yeah, if you can fit into a Web Worker, that's great, because the debugger experience is a little better, and you can actually see it in Chrome DevTools.
Woman 1: I think we have time for one last question. Does anyone want to ask one? All right.
Woman 2: Say I happen to be a front-end engineer writing code that may or may not get used in an Electron application. Are there any things that I should be aware of?
Betts: I think as a front-end engineer, one of the things you can do is just think about: if you pretend you're in a desktop app, what would you want to do? And if you want to do something, the desktop team can probably do it, even if that API doesn't exist yet. Because a lot of front-end engineers, when they work in this hybrid environment, get used to the restrictions of the web, and they're like, "Oh yeah, I can't do that, because, like, I can't do that." In Electron, you can totally do that. And so if an API isn't there, just come up with a contract for what it should look like, or what you want it to look like, and you can probably do it.
Woman 1: All right. Thank you so much, Paul. Let's give another round of applause.
Betts: Thank you so much for your time.
The Japanese have taken on "super" as one of those English words they have become completely obsessed with, tacking it onto words and phrases in all kinds of crazy places, some of which upon examination make no sense. The Japanese have started doing that kind of thing for some reason, integrating tiny chunks of English not because any kind of osmosis or cultural imperialism is taking place, but just because it sounds weird. Like, according to this book on Japanese slang that this guy I know was reading, it is now common for Japanese schoolgirls (who don't speak English) to use "supa eilien" (pronounced like that) as an insult. "Super Alien"? What? Do they know what they're saying? The Japanese usage of super, however, is generally the "correct" one, even when it seems oddly unnecessary (Super Nintendo, Super Saiya-jin, Super Catgirl Nuku Nuku, super deformed, etc.), and at least partly as a result of their adoption of the word (and the fact that people think that things that seem Japanese and bizarre are cool), the word "super" has been dragged back from its gimpy horrific '50s stint as an adjective to become a powerful prefix again, one with the power to bestow instant bizarre kitsch powers onto damn near anything. (See Super Furry Animals, Super Tensile Solids, superbad, "Soy super bien soy super super bien soy bien bien super bien bien bien super super", etc.)
So in important methods like init, you get this kind of chain where each layer of an object's inheritance peels back, each implementation of (for example) -init ending by returning [super init]; and thus passing the init message back up the class hierarchy until NSObject is reached.
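The same chain can be sketched in Python, with super() playing the part of [super init]. The class names below are invented for illustration; each initializer does its own setup and then hands the message up, one layer at a time, until the root is reached:

```python
calls = []

class Root:                      # playing the part of NSObject
    def __init__(self):
        calls.append("Root")

class View(Root):
    def __init__(self):
        calls.append("View")
        super().__init__()       # Python's spelling of [super init]

class Button(View):
    def __init__(self):
        calls.append("Button")
        super().__init__()

Button()
print(calls)  # → ['Button', 'View', 'Root']
```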
Super.
One of the most basic concepts of object oriented programming is that
of inheritance, in which we think of the IS-A property
and say that thing A is a thing B, but different or more specific
in certain ways. For example, a toothbrush is a dental hygiene
instrument.
Inheritance leads us to speak of subclasses and
superclasses. (A class is a way of abstracting
the properties of a kind of object, while a class instance (or object) is
an actual item of that kind; e.g., you could have a class called
Toothbrush, of which there may be four instances hung in the rack by your
bathroom sink.) The terms are not absolute, but rather form a relationship
between two classes (class DentalHygieneInstrument being a superclass of
class Toothbrush). Every instance of the subclass can also be considered
(though not exclusively) as an instance of the superclass.
This can be useful because some or all of the code that a programmer writes
to manipulate objects of the superclass also applies to objects of its
subclasses, even if they weren't implemented (or even being considered)
when the code was written. The Sterilize method of a
DentalHygieneInstrument works for a Toothbrush as well as for a Burnisher.
But this is not always the case.
After all, while a Toothbrush is a DentalHygieneInstrument, it is
not a Burnisher (which is also a DentalHygieneInstrument), so clearly all
DentalHygieneInstruments are alike in many ways, but different in some ways.
Perhaps the sterilization procedure for a Burnisher involves first roughing
up the surface with sandpaper[1] before performing the usual
(common) procedure. While the Toothbrush class will not define a Sterilize
method of its own (causing the language to search upward through the
inheritance hierarchy until it finds one to use), Burnisher must do so.
But there is no need to reproduce the code found in DentalHygieneInstrument's
Sterilize method in the Burnisher class; instead, Burnisher's Sterilize
can first sand itself, and then invoke the Sterilize method from its
superclass.
class Burnisher(DentalHygieneInstrument):
def Sterilize(self):
self.SandTip()
DentalHygieneInstrument.Sterilize(self)
class Burnisher(DentalHygieneInstrument):
def Sterilize(self):
self.SandTip()
DentalHygieneInstrument.Sterilize(self)
self.Sterilize()
super.Sterilize()
That shortcut, however, is not all that useful. It saves a few keystrokes,
and it doesn't have to be changed if the code is copied to another class.
But it falls down in the face of multiple inheritance.
My examples will change now; I've run out of gas on the dental
instrument thing….
Multiple inheritance enters the picture when we have a Thing A that not only
is a Thing B, but also is a Thing C (which B is not).
Sometimes, B and C are quite unrelated concepts – in which case A is
often called a mixin class – and sometimes not. More on that
in a moment. Time for a diagram:
class B class C
\ /
\ /
\ /
\ /
\ /
class A
class A(B, C):
def M(self):
B.M(self)
C.M(self)
# do A's M stuff here
class A(B, C):
def M(self):
B.M(self)
C.M(self)
# do A's M stuff here
class Cumulus class Nimbus
\ /
\ /
\ /
\ /
\ /
\ /
\ /
\ /
class CumuloNimbus
class Cloud
/ \ Scud
/ \ Save
/ \
/ \
/ \
/ \
/ \
/ \
class Cumulus class Nimbus
Save \ / Rain
\ / Save
\ /
\ /
\ /
\ /
\ /
\ /
class CumuloNimbus
This diagram shows the genesis of the aptly-named diamond problem. Consider
a method being called with an instance of CumuloNimbus. It we're calling
cn.Scud(), that's not a problem: Scud is
a method of Cloud and is not overridden by any of these other classes, because
clouds all scud across the sky in the same way.
cn.Scud()
But in the face of reimplementations or extensions, problems occur. Consider a
call to cn.Rain(). This is not a problem, because neither
Cumulus nor Cloud defines Rain, so the method implemented by Nimbus will
be used. Calling cn.Save, on the other hand, is a different
kettle of fish. (Let's assume Save is a method that saves the object's data
to an open file.) We need to add a Save method to CumuloNimbus, so that it
can call Cumulus.Save and Nimbus.Save, in accordance with our rule about
calling the overridden method. And it's obviously very important that they
both get called, because our cn object may contain data defined
by Cumulus, Nimbus, and even Cloud. So we add
cn.Rain()
cn.Save
cn
class CumuloNimbus(Cumulus, Nimbus):
def Save(self):
Cumulus.Save(self)
Nimbus.Save(self)
class CumuloNimbus(Cumulus, Nimbus):
def Save(self):
Cumulus.Save(self)
Nimbus.Save(self)
Oh, no! Remember how diligent we all are when it comes to calling the
method in the superclass when we override it? Let's look at the sequence
of methods that get called when we do a cn.Save():
cn.Save()
CumuloNimbus.Save
Cumulus.Save
Cloud.Save
Nimbus.Save
Cloud.Save
CumuloNimbus.Save
Cumulus.Save
Cloud.Save
Nimbus.Save
Cloud.Save
While he was creating version 2.2 of Python, Guido
noted that this problem could become much more prevalent than it had
been previously, because the addition of an ultimate superclass –
which earlier Pythons did not have – could cause diamonds to appear
where they had not been before, even in the absence of the application
creating them; namely, in the situation with A being a subclass of B and C,
B and C might now (and should) be subclasses of object.
He wouldn't put us in such a fix without hope, though. A new builtin
function called super will be our salvation. It can be
used by our various classes to call overridden methods in a safe and
complete way. How can it do this? Through its knowledge of the method
resolution order.
The method resolution order is a list associated with each class that
tells it where to look for an attribute when an instance does not include
it; this list is built when the class statement is compiled. In previous
Pythons, the MRO for our diamond above would be
[CumuloNimbus, Cumulus, Cloud, Nimbus, Cloud]
The presence of Cloud in the list twice was never a problem, because the
search would never find the attribute in the second one if it hadn't found
it in the first one already. At version 2.2, these duplicates are removed,
and our MRO is now
[CumuloNimbus, Cumulus, Nimbus, Cloud]
This ordering is easily derived from the previous one, by retaining only
the last occurrence of any class that appears more than once. The
super function can use the MRO, together with the class of
an instance, to unambiguously determine which class should be called next.
It is important to remember here that, e.g., when the Save method in
Cumulus is executing, its self may refer to either a Cumulus
instance or a CumuloNimbus instance; its code doesn't need to care which
(if it did, the whole inheritance idea would be useless), but
super, under the covers, cares very much.
[CumuloNimbus, Cumulus, Cloud, Nimbus, Cloud]
[CumuloNimbus, Cumulus, Nimbus, Cloud]
super
A class X always uses super in the same way: the two arguments
it passes are always itself (class X) and self. The function then
returns an object through which the appropriate implementation of the
sought attribute can be found. So, let's look at the implementation of
our various Save methods:
CumuloNimbus.Save
Cumulus.Save
Nimbus.Save
Cloud.Save
CumuloNimbus.Save
Cumulus.Save
Nimbus.Save
Cloud.Save
[1] I obviously know nothing about the care and handling of
dental instruments. Don't ask where that example came from.
[2] Prior to version 2.2, which began the long-awaited fixing
of the so-called type-class split, a major wart in Python.
[3] Real Magic™ happening here. Note that the calls to
Save invoked through the super-object do not pass self as an argument! And yet, the super function
(actually the constructor for a super object) cannot create
a bound method, because it doesn't know what method you're going to attempt to call.
"Is that where we are right now? Between the panels?"
–Libby/Boltie
Shot with a big-name cast on a small-time budget, completed in the wake of Kick-Ass (though the films were in production at the same time), Super did not see widespread release until April 2011. Even then, the release wasn't particularly widespread. The film passed faster than a speeding bullet. Many critics savaged the dark comedy about a depressed and disturbed loser who becomes a real-world superhero. Nevertheless, Super soars a little higher than I expected, and earns, at the very least, its cult status. It also has a certain brutal honesty that Kick-Ass lacks.
The story concerns a life-long loser, Frank (Rain Wilson). After his drug-addicted wife leaves him, he. Be warned: Super intends to make you uncomfortable, and it often succeeds. Certain portions may prove downright depressing, and you're as likely to wince as laugh at a few of the darker gags.
At other times, you will laugh.
"Why are you wearing a fake beard?"
-a Librarian
The film mercilessly mocks superhero conventions, action movie ethics, and romance tropes. Placed in the real world, vigilante violence results in ugly carnage, simplistic moral crusades bring about injustice, and romantic obsession breeds unhealthy lives. Frank's early forays as the Crimson Bolt, in particular, riff hilariously on the absurdity of the superhero. Those outfits wouldn't conceal your identity from someone who knows you, changing rapidly to your costume in public would prove awkward and embarrassing, and going undercover in disguise, unless you're a professional make-up artist, would simply call attention to the fact that you're wearing a disguise.
Alas, the film's tone and sense of reality, like its moral compass, spin wildly throughout. After mocking the absurdity of its source material, the film begins dealing its own unexplained lapses in reality. Why does no one take down Frank's license plate, despite multiple opportunities? How does Frank afford his house and sizable arsenal on a diner cook's wages? Why are so many people conveniently competent at treating wounds?
And why is Libby, who initially seems relatively well-adjusted, so gung-ho to share in Frank's delusions?
If we see the warped logic that motivates Frank, we never entirely understand what drives the youthful Boltie. Nevertheless, Ellen Page turns in a winning performance as the sociopathic sidekick. From the enthusiasm she brings to violent assault, to the energy she invests in creepy seduction, Page is a key reason to see this movie.
And Page and Wilson aren't the only names of note. Super boasts a stellar cast, and they all give strong performances. It's some strange testimony either to Gunn's script or his persuasive powers that Wilson, Page, Kevin Bacon, Liv Tyler, Nathan Fillion, and Linda Cardellini all signed on. Fillion makes what amounts to a cameo, but his fans will appreciate his brief appearances as the Holy Avenger, an evangelical creation who inspires Frank's warped crusade.
This film also features surprisingly strong effects. Frank's disturbed visions look great. At other times, deliberately cartoony graphics clash with horrific violence. And make no mistake: we're looking at old-school grindhouse gore, grotesquely rendered.
"Do you really think that killing me, stabbing me to death is going to change the world?"
--Jacques
Ambiguous ending notwithstanding, this film doesn't really buy into vigilante ethics the way Kick-Ass, in the end, does. It's more honest than that film, but not nearly so entertaining-- nor as clever as it sets out to be.
Directed by and written by James Gunn.
Epilogue: Radical Interpretation-- with Spoilers.
An interesting interpretation, though one not particularly encouraged by the film nor (that I can find) suggested elsewhere, accounts (if she ever fell back into it. Maybe she just left him), and that he maintains some distant.
Su"per, n.
A contraction of Supernumerary, in sense 2.
© Webster 1913.
Log in or register
to write something here or to contact authors. | http://everything2.com/title/super?author_id=1455010 | CC-MAIN-2013-48 | refinedweb | 2,299 | 53 |
Communication between native and React Native
In Integrating with Existing Apps guide and Native UI Components guide we learn how to embed React Native in a native component and vice versa. When we mix native and React Native components, we'll eventually find a need to communicate between these two worlds. Some ways to achieve that have been already mentioned in other guides. This article summarizes available techniques.
IntroductionIntroduction
React Native is inspired by React, so the basic idea of the information flow is similar. The flow in React is one-directional. We maintain a hierarchy of components, in which each component depends only on its parent and its own internal state. We do this with properties: data is passed from a parent to its children in a top-down manner. If an ancestor component relies on the state of its descendant, one should pass down a callback to be used by the descendant to update the ancestor.
The same concept applies to React Native. As long as we are building our application purely within the framework, we can drive our app with properties and callbacks. But, when we mix React Native and native components, we need some specific, cross-language mechanisms that would allow us to pass information between them.
PropertiesProperties
Properties are the most straightforward way of cross-component communication. So we need a way to pass properties both from native to React Native, and from React Native to native.
Passing properties from native to React NativePassing properties from native to React Native
In order to embed a React Native view in a native component, we use
RCTRootView.
RCTRootView is a
UIView that holds a React Native app. It also provides an interface between native side and the hosted app.
RCTRootView has an initializer that allows you to pass arbitrary properties down to the React Native app. The
initialProperties parameter has to be an instance of
NSDictionary. The dictionary is internally converted into a JSON object that the top-level JS component can reference.
RCTRootView also provides a read-write property
appProperties. After
appProperties is set, the React Native app is re-rendered with new properties. The update is only performed when the new updated properties differ from the previous ones.
It is fine to update properties anytime. However, updates have to be performed on the main thread. You use the getter on any thread.
Note: Currently, there is a known issue where setting appProperties during the bridge startup, the change can be lost. See for more information.
There is no way to update only a few properties at a time. We suggest that you build it into your own wrapper instead.
Note: Currently, JS function
componentWillUpdatePropsof the top level RN component will not be called after a prop update. However, you can access the new props in
componentDidMountfunction.
Passing properties from React Native to nativePassing properties from React Native to native
The problem exposing properties of native components is covered in detail in this article. In short, export properties with
RCT_CUSTOM_VIEW_PROPERTY macro in your custom native component, then use them in React Native as if the component was an ordinary React Native component.
Limits of propertiesLimits of properties
The main drawback of cross-language properties is that they do not support callbacks, which would allow us to handle bottom-up data bindings. Imagine you have a small RN view that you want to be removed from the native parent view as a result of a JS action. There is no way to do that with props, as the information would need to go bottom-up.
Although we have a flavor of cross-language callbacks (described here), these callbacks are not always the thing we need. The main problem is that they are not intended to be passed as properties. Rather, this mechanism allows us to trigger a native action from JS, and handle the result of that action in JS.
Other ways of cross-language interaction (events and native modules)Other ways of cross-language interaction (events and native modules)
As stated in the previous chapter, using properties comes with some limitations. Sometimes properties are not enough to drive the logic of our app and we need a solution that gives more flexibility. This chapter covers other communication techniques available in React Native. They can be used for internal communication (between JS and native layers in RN) as well as for external communication (between RN and the 'pure native' part of your app).
React Native enables you to perform cross-language function calls. You can execute custom native code from JS and vice versa. Unfortunately, depending on the side we are working on, we achieve the same goal in different ways. For native - we use events mechanism to schedule an execution of a handler function in JS, while for React Native we directly call methods exported by native modules.
Calling React Native functions from native (events)Calling React Native functions from native (events)
Events are described in detail in this article. Note that using events gives us no guarantees about execution time, as the event is handled on a separate thread.
Events are powerful, because they allow us to change React Native components without needing a reference to them. However, there are some pitfalls that you can fall into while using them:
- As events can be sent from anywhere, they can introduce spaghetti-style dependencies into your project.
- Events share namespace, which means that you may encounter some name collisions. Collisions will not be detected statically, which makes them hard to debug.
- If you use several instances of the same React Native component and you want to distinguish them from the perspective of your event, you'll likely need to introduce identifiers and pass them along with events (you can use the native view's
reactTagas an identifier).
The common pattern we use when embedding native in React Native is to make the native component's RCTViewManager a delegate for the views, sending events back to JavaScript via the bridge. This keeps related event calls in one place.
Calling native functions from React Native (native modules)Calling native functions from React Native (native modules)
Native modules are Objective-C classes that are available in JS. Typically one instance of each module is created per JS bridge. They can export arbitrary functions and constants to React Native. They have been covered in detail in this article.
The fact that native modules are singletons limits the mechanism in the context of embedding. Let's say we have a React Native component embedded in a native view and we want to update the native, parent view. Using the native module mechanism, we would export a function that not only takes expected arguments, but also an identifier of the parent native view. The identifier would be used to retrieve a reference to the parent view to update. That said, we would need to keep a mapping from identifiers to native views in the module.
Although this solution is complex, it is used in
RCTUIManager, which is an internal React Native class that manages all React Native views.
Native modules can also be used to expose existing native libraries to JS. The Geolocation library is a living example of the idea.
Warning: All native modules share the same namespace. Watch out for name collisions when creating new ones.
Layout computation flowLayout computation flow
When integrating native and React Native, we also need a way to consolidate two different layout systems. This section covers common layout problems and provides a brief description of mechanisms to address them.
Layout of a native component embedded in React NativeLayout of a native component embedded in React Native
This case is covered in this article. To summarize, since all our native react views are subclasses of
UIView, most style and size attributes will work like you would expect out of the box.
Layout of a React Native component embedded in nativeLayout of a React Native component embedded in native
React Native content with fixed sizeReact Native content with fixed size
The general scenario is when we have a React Native app with a fixed size, which is known to the native side. In particular, a full-screen React Native view falls into this case. If we want a smaller root view, we can explicitly set RCTRootView's frame.
For instance, to make an RN app 200 (logical) pixels high, and the hosting view's width wide, we could do:
When we have a fixed size root view, we need to respect its bounds on the JS side. In other words, we need to ensure that the React Native content can be contained within the fixed-size root view. The easiest way to ensure this is to use Flexbox layout. If you use absolute positioning, and React components are visible outside the root view's bounds, you'll get overlap with native views, causing some features to behave unexpectedly. For instance, 'TouchableHighlight' will not highlight your touches outside the root view's bounds.
It's totally fine to update root view's size dynamically by re-setting its frame property. React Native will take care of the content's layout.
React Native content with flexible sizeReact Native content with flexible size
In some cases we'd like to render content of initially unknown size. Let's say the size will be defined dynamically in JS. We have two solutions to this problem.
- You can wrap your React Native view in a
ScrollViewcomponent. This guarantees that your content will always be available and it won't overlap with native views.
- React Native allows you to determine, in JS, the size of the RN app and provide it to the owner of the hosting
RCTRootView. The owner is then responsible for re-laying out the subviews and keeping the UI consistent. We achieve this with
RCTRootView's flexibility modes.
RCTRootView supports 4 different size flexibility modes:
RCTRootViewSizeFlexibilityNone is the default value, which makes a root view's size fixed (but it still can be updated with
setFrame:). The other three modes allow us to track React Native content's size updates. For instance, setting mode to
RCTRootViewSizeFlexibilityHeight will cause React Native to measure the content's height and pass that information back to
RCTRootView's delegate. An arbitrary action can be performed within the delegate, including setting the root view's frame, so the content fits. The delegate is called only when the size of the content has changed.
Warning: Making a dimension flexible in both JS and native leads to undefined behavior. For example - don't make a top-level React component's width flexible (with
flexbox) while you're using
RCTRootViewSizeFlexibilityWidthon the hosting
RCTRootView.
Let's look at an example.
In the example we have a
FlexibleSizeExampleView view that holds a root view. We create the root view, initialize it and set the delegate. The delegate will handle size updates. Then, we set the root view's size flexibility to
RCTRootViewSizeFlexibilityHeight, which means that
rootViewDidChangeIntrinsicSize: method will be called every time the React Native content changes its height. Finally, we set the root view's width and position. Note that we set there height as well, but it has no effect as we made the height RN-dependent.
You can checkout full source code of the example here.
It's fine to change root view's size flexibility mode dynamically. Changing flexibility mode of a root view will schedule a layout recalculation and the delegate
rootViewDidChangeIntrinsicSize: method will be called once the content size is known.
Note: React Native layout calculation is performed on a separate thread, while native UI view updates are done on the main thread. This may cause temporary UI inconsistencies between native and React Native. This is a known problem and our team is working on synchronizing UI updates coming from different sources.
Note: React Native does not perform any layout calculations until the root view becomes a subview of some other views. If you want to hide React Native view until its dimensions are known, add the root view as a subview and make it initially hidden (use
UIView's
hiddenproperty). Then change its visibility in the delegate method. | http://reactnative.dev/docs/next/communication-ios | CC-MAIN-2021-10 | refinedweb | 2,036 | 54.12 |
View Poll Results: If you read it, did you find DirectJNgine User's Guide adequate?
- Voters
- 54. You may not vote on this poll
Yes
No
We finally found the problem ! YEEEEAAAAA !
It's not related to DirectJNgine nor to Tomcat... it's was related to an other servlet we was using, the servlet of the Restlet framwork, you can find more informations here for the one interested:
In few words, Ext.DirectJNgine use KeepAlive connection, the other servlet was closing the stream like if was responsible of the stream workflow ... it's a bug in that servlet.
Hope that's help.
Alois Cochard
- Pedro Agulló, Barcelona (Spain) - pagullo.soft.dev at gmail.com
Agile team building, consulting, training & development
DirectJNgine: - Log4js-ext:
Anyone get Designer and DirectStore working? I'm new to ExtJS so I'm a little stumped.
I did get my Login screen to functioin correctly so I have the environment setup correctly.
So for the DirectStore, in Designer I created a DirectStore and then a Grid. Within the grid I referenced the direct store and then mapped the columns of the grid to the direct store. As I debug from the Java side, I never see an incoming request. In the browser i do see the grid but it's empty.
Does anyone have an example of Designer / DirectStore / DirectJNgine ?????
Thanks in advance,
Barton
Austin, Tx
I'm new with ExtJS so if this has been addressed elsewhere, sorry but I couldn't find anything.
I created a test Grid / DirectStore in Designer modeled after the DirectStoreDemo. When I run I get two errors in FireBug:
1. DirectStoreDemo is not defined. directFn: DirectStoreDemo.loadExperienceData
2. this.ds is undefined - I think it's related to above but not sure
The grid does display but there is no data. Here are the generated classes from Designer followed by my test.html and index.js
experienceStore.js:
Code:
experienceStore = Ext.extend(Ext.data.DirectStore, { constructor: function(cfg) { cfg = cfg || {}; experienceStore.superclass.constructor.call(this, Ext.apply({ storeId: 'MyStore', directFn: DirectStoreDemo.loadExperienceData, paramsAsHash: false, fields: [ { name: 'startDate' }, { name: 'endDate' }, { name: 'description' } ] }, cfg)); } }); new experienceStore();
Code:
gridUi = Ext.extend(Ext.grid.GridPanel, { title: 'Working Experience', store: 'experienceStore', width: 750, height: 350, frame: true, stripeRows: true, initComponent: function() { this.columns = [ { xtype: 'gridcolumn', dataIndex: 'startDate', header: 'Start Date', sortable: true, width: 100 }, { xtype: 'gridcolumn', dataIndex: 'endDate', header: 'End Date', sortable: true, width: 100, align: 'right' }, { xtype: 'gridcolumn', dataIndex: 'description', header: 'Experience', sortable: true, width: 600 } ]; gridUi.superclass.initComponent.call(this); } });
Code:
grid = Ext.extend(gridUi, { initComponent: function() { grid.superclass.initComponent.call(this); } });
Code:
Ext.onReady(function() { Ext.app.REMOTING_API.enableBuffer = 100; Ext.Direct.addProvider(Ext.app.REMOTING_API); var aExperienceStore = new experienceStore(); aExperienceStore.load(); var aGrid = new grid({ renderTo: Ext.getBody() }); });
Code:
Ext.namespace( 'Ext.app'); Ext.app.PROVIDER_BASE_URL=window.location.protocol + '//' + window.location.host + '/' + (window.location.pathname.split('/').length>2 ? window.location.pathname.split('/')[1]+ '/' : '') + 'djn/directprovider'; Ext.app.POLLING_URLS = { } Ext.app.REMOTING_API = { url: Ext.app.PROVIDER_BASE_URL, type: 'remoting', actions: { Login: [ { name: 'validate'/*() => com.wordtiller.ui.Login$SubmitResult -- FORM HANDLER */, len: 1, formHandler: true } ], DirectStoreDemo: [ { name: 'loadExperienceData'/*() => java.util.List */, len: 0, formHandler: false } ], Request: [ { name: 'getRequests'/*() => java.util.List */, len: 0, formHandler: false } ] } }
HTML Code:
<html> <head> <meta http- <link rel="stylesheet" type="text/css" href=""/> <script type="text/javascript" src=""></script> <script type="text/javascript" src=""></script> <script type="text/javascript" src="directjngine/Api.js"></script> <script type="text/javascript" src="test/experienceStore.js"></script> <script type="text/javascript" src="test/grid.ui.js"></script> <script type="text/javascript" src="test/grid.js"></script> <script type="text/javascript" src="test/index.js"></script> </head> <body> <h1>Welcome!</h1> </body> </html>
Barton
Last edited by barton; 28 Aug 2010 at 12:43 AM. Reason: Add api.js for completeness
Unfortunately, full Designer integration is not built-in into DJN, due to a lack of time. And, to be fair, nobody really asked me for it up until now.
If somebody is willing to undertake the effort and contribute this to DJN, he will be welcome.
When I took a first look at it, it seemed it would be relatively easy: I estimated between 10 and 25 hours to have it developed + full tests including automated unit tests + a new User's Guide chapter.Pedro Agulló, Barcelona (Spain) - pagullo.soft.dev at gmail.com
Agile team building, consulting, training & development
DirectJNgine: - Log4js-ext:
Hi Pedro,
First off I'm very impressed with DJN - thanks for all your efforts.
If you could provide me some direction I would gladly look into this integration - it would help me learn more about ExtJS and Designer and DJN - which is a good thing. My project will grow over time and i want to do all the UI work in Designer so it would benefit me to have this functionality.
Please send me your thoughts on the approach and how best to come up to speed. I haven't looked at the supporting infrastructure to this point so I could use some guidance on that.
Regards,
Barton
Hi Pagullo,
I recently picked up ExtJS and want to use directJNgine. What I'm confused about is the server side. I've read throughout the forum that people aren't using Spring Security.
Are people just writing their own home grown solutions for authentication? I want to keep my solution simple and thinking about bringing in Spring Acegi concerns me. My current thoughts are to use a javascript library for MD5 and to convert the password before sending it for database validation on the server. I have that roughed out and it seems to work ok.
I did find. But with DNJ do i need that additional level of complexity? I mean, since I'll be accessing the java classes directly, do I need to protect them with Acegi? Or is it a non-issue?
Do you have any paper about best practice wrt this security issue? What security concerns do I have if I just check that each class accessed has a user in a valid session and if not send them to the login page?
Thanks for your help in advance,
Thanks,
Barton Hammond
Austin, TX
This is the information I've got regarding support for Designer:
We need to make a small adjustment for integration with the Ext Designer. This will be considered the Ext.Direct 1.0.1 spec.
The small adjustment involves generating a JSON format of the API in addition to the standard JS format.
When the API component is accessed it will check a "format" parameter which is sent via a url parameter.
For example:
api.php?format=json
When the JSON format is requested, the API component will return back the standard descriptor as normal but place the descriptor variable in a JSON packet.
For example:
Ext.ns('xds.remote');
xds.remote.Descriptor = {
"url": "/router.php",
"type": "remoting",
"actions": {
"Time": [{
"name": "get",
"len": 0
}]
},
"namespace": "xds.remote"
};
Would change to:
{
"url": "/router.php",
"type": "remoting",
"actions": {
"Time": [{
"name": "get",
"len": 0
}]
},
"namespace": "xds.remote",
"descriptor":"xds.remote.Descriptor"
}
It should not be very difficult to modify the way I generate source code on the fly to generate it in the format required by Designer: take a look at the AppEngine chapter in the User's Guide for details on generating code that way.
Hope this helps!Pedro Agulló, Barcelona (Spain) - pagullo.soft.dev at gmail.com
Agile team building, consulting, training & development
DirectJNgine: - Log4js-ext:
Upgrade to gson 1.5
Hi,
directjngine-1.3 currently comes with gson-1.3.
gson-1.3 is afflicted by this bug:
with gson-1.5+ this bug should be resolved.
I can remove gson-1.3 and put gson-1.5 in my classpath, will this cause problems with directjngine-1.3?
Is a new version of directjngine, with the new gson build, scheduled?
Thanks!) | https://www.sencha.com/forum/showthread.php?73027-Ext-Direct-Java-based-implementation/page35&p=506542 | CC-MAIN-2015-27 | refinedweb | 1,311 | 51.44 |
Lecture 12 — Controlling Loops¶
Overview¶
- We will see how to control both for and while loops with
break,
continue
- We will see different range functions
- We will write many example programs
Reading: Practical Programming, rest of Chapter 7.
Part 1: The Basics¶
forloops tend to have a fixed number of iterations computed at the start of the loop
whileloops tend to have an indefinite termination, determined by the conditions of the data
- Most Python
forloops are easily rewritten as
whileloops, but not vice-versa.
- In other programming languages,
forand
whileare almost interchangeable, at least in principle.
Part 1: Ranges¶
A range is a function to generate a list of integers. For example,
>>> range(10) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Notice this is up through and not including the last value specified!
If we want to start with something other than 0, we provide the starting values
>>> range(3,8) [3, 4, 5, 6, 7]
We can create increments. For example,
>>> range(4,20,3) [4, 7, 10, 13, 16, 19]
starts at 4, increments by 3, stops when 20 is reached or surpassed.
We can create backwards increments
>>> range(-1, -10, -1) [-1, -2, -3, -4, -5, -6, -7, -8, -9]
Using Ranges in For Loops¶
We can use the
rangeto generate the list of values in a for loop. Our first example is printing the contents of the
planetslist
planets = [ 'Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune', 'Pluto' ] for i in range(len(planets)): print planets[i]
The variable
iis variously known as the indexor the loop index variableor the subscript.
We will modify the loop in class to do the following:
- Print the indices of the planets (starting at 1!)
- Print the planets backward.
- Print every other planet.
Loops That Do Not Iterate Over All Indices¶
Sometimes the loop index should not go over the entire range of indices, and we need to think about where to stop it early, as the next example shows.
Example: Returning to our example from Lecture 1, we will briefly re-examine our solution to the following problem: Given a string, how can we write a function that decides if it has three consecutive double letters?
def has_three_doubles(s): for i in range(0, len(s)-5): if s[i] == s[i+1] and s[i+2] == s[i+3] and s[i+4] == s[i+5]: return True return False
We have to think carefully about where to start our looping and where to stop!
Part 1 Exercises¶
Generate a range for the positive integers less than 100. Use this to calculate the sum of these values, with and without a for loop.
Use a range and a for loop to print the even numbers less than a given integer
n.
Suppose we want a list of the squares of the digits 0..9. The following does NOT work
squares = range(10) for s in squares: s = s*s
Why not? Write a different for loop that uses indexing into the
squareslist to accomplish our goal.
The following code for finding out if a word has two consecutive double letters is wrong. Why? When, specifically, does it fail?
def has_two_doubles(s): for i in range(0, len(s)-5): if s[i] == s[i+1] and s[i+2] == s[i+3]: return True return False
A local maximum, or peak, in a list is a value that is larger than the values next to it. For example,
L = [ 10, 3, 4, 9, 19, 12, 15, 18, 15, 11, 14 ]
has local maxima at indices 4 and 7. (Note that the beginning and end values are not considered local maxima.) Write code to print the index and the value of each local maximum.
Part 2: Nested Loops¶
Some problems require iteratingover either
- two dimensions of data, or
- all pairs of values from a list
As an example, here is code to print all of the products of digits:
digits = range(10) for i in digits: for j in digits: print "%d x %d = %d" %(i,j,i*j)
How does this work?
- for each value of i the variable in the first, or “outer”, loop,
- Python executes the entire second, or “inner”, loop
We will look at finding the two closest points in a list.
Example: Finding the Two Closest Points¶
Suppose we are given a list of point locations in two dimensions, where each point is a tuple. For example,
points = [ (1,5), (13.5, 9), (10, 5), (8, 2), (16,3) ]
Our problem is to find the two points that are closest to each other.
The natural idea is to compute the distance between any two points and find the minimum.
- We can do this with and without using a list of distances.
Let’s work through the approach to this and post the result on the Piazza site.
Exercise: Nested vs. Sequential Loops¶
The following simple exercise will help you understand loops better. Show the output of each of the following pairs of
forloops. The first two pairs are nested loops, and the third pair is formed by consecutive, or sequential, loops.
#
Exercise: Modifying Images¶
- It is possible to access the individual pixels in an image as a two
dimensional array. This is similar to a list of lists, but is written slightly differently: we use
pix[i,j]instead of
pix[i][j]to access a point at location
(i,j)of the image.
Here is a code that copies one image to another, pixel by pixel.
from PIL import Image im = Image.open("bolt.jpg") w,h = im.size newim = Image.new("RGB", (w,h), "white") pix = im.load() ## creates an array of pixels that can be modified newpix = newim.load() ## creates an array of pixels that can be modified for i in range(0,w): for j in range(0,h): newpix[i,j] = pix[i,j] newim.show()
- Modify the above code so that:
- The image is flipped left to right
- The image is flipped top to bottom
- You introduce a black line of size 10 pixels in the middle of the image horizontally and vertically.
- Now, scramble the image by shifting the four quadrants of the image clockwise.
- If you want some additional challenge, try these:
- Pixellate the image, but taking any block of 8 pixels and replacing all the pixels by their average r,g,b value.
Part 3: Controlling Execution of Loops¶
- We can control while loops through use of
break
continue
- We need to be careful to avoid infinite loops
Using a Break¶
We can terminate a loop immediately upon seeing the 0 using Pythons
break:
sum = 0 while True: x = int( raw_input("Enter an integer to add (0 to end) ==> ")) if x == 0: break sum += x print sum
breaksends the flow of control immediately to the first line of code
outside the current loop, and
The while condition of
Trueessentially means that the only way to stop the loop is when the condition that triggers the
breakis met.
Continue: Skipping the Rest of a Loop¶
Suppose we want to skip over negative entries in a list. We can do this by telling Python to
continuewhen it sees a blank line:
for item in mylist: if item < 0: continue print item
When it sees
continue, Python immediate goes back to the
whilecondition and re-evaluates it, skipping the rest of the loop.
Any loop that uses
breakor
continuecan be rewritten without either of these.
- Therefore, we choose to use them only if they make our code clearer.
- A loop with more than one continue or more than one break is often unclear!
This particular example is probably better without the
continue.
- Usually when we use
continuethe rest of the loop would be much longer, with the condition that triggers the
continuetested right at the time preferably at the top of the loop.
Part 3 Exercises¶
Given two lists
L1and
L2measuring the daily weights (floats) of two rats write a
whileloop to find the first day that the weight of rat 1 is greater than that of rat 2.
Do either of the following examples cause an infinite loop?
import math x = float(raw_input("Enter a positive number -> ")) while x > 1: x = math.sqrt(x) print x
import math x = float(raw_input("Enter a positive number -> ")) while x >= 1: x = math.sqrt(x) print x
Summary¶
rangeis used to generate a list of indices in a
forloop.
- At each iteration of a
forloop, a value from a list is copied to a variable automatically. Do not change this value yourself.
- While loops are needed especially when the termination conditions must be determined during the loop’s computation.
- Both for loops and while loops may be controlled using break and continue, but don’t overuse these.
- While loops may become “infinite”
- Use a debugger to understand the behavior of your program and to find errors. | http://www.cs.rpi.edu/academics/courses/spring16/cs1/course_notes/lec12_loops2_for_double.html | CC-MAIN-2018-39 | refinedweb | 1,489 | 67.79 |
Classes
twitter
Reddit
Topics
No topic found
Content Filter
Articles
Videos
Blogs
News
Complexity Level
Beginner
Intermediate
Advanced
Refine by Author
[Clear]
Alagunila Meganathan (5)
Gopi Chand (3)
David Mccarter (3)
Mahesh Chand (3)
Vithal Wadje (3)
Akhil Mittal (3)
Shervin Cyril (2)
Sean Franklin (2)
Sandeep Sharma (2)
Pankaj Kumar Choudhary (2)
Sourav Kayal (2)
Harpreet Singh (2)
Khawar Islam (2)
Ibrahim Ersoy (2)
Vijay K (1)
Dipal Choksi (1)
Munesh Sharma (1)
RamaSagar Pulidindi (1)
Neelesh Vishwakarma (1)
Surya S (1)
Yogesh Jaiswal (1)
Gaurav Kumar (1)
Vidya Vrat Agarwal (1)
Bilal Shahzad (1)
Mohit Kumar (1)
Roshan Patil (1)
Neeraj Sharma (1)
Vijay Kumari (1)
Satendra Singh Bhati (1)
Akash Malik (1)
Shivprasad Koirala (1)
Shalini Dixit (1)
Abhishek Dubey (1)
A K (1)
Abhishek Yadav (1)
Devesh Omar (1)
Sandeep Singh Shekhawat (1)
Rajesh VS (1)
Craig Breakspear (1)
C# Curator (1)
Shehan Peruma (1)
Bechir Bejaoui (1)
Annathurai Subbaiah (1)
Atul Sharma (1)
Saurabh (1)
Erika Ehrli Cabral (1)
Shankar M (1)
Doug Doedens (1)
Farhan Ahmed (1)
Jiteendra Sampathirao (1)
Rathrola Prem Kumar (1)
Raj Kumar (1)
Kailash Chandra Behera (1)
Sourabh Mishra (1)
Satyaprakash Samantaray (1)
Kishor Bikram Oli (1)
Mani Gautam (1)
Hirendra Sisodiya (1)
Vinod Kumar (1)
Akkiraju Ivaturi (1)
Rohatash Kumar (1)
Sekhar Srinivas (1)
Prakash Tripathi (1)
Sunny Sharma (1)
Anoop Kumar Sharma (1)
Vignesh Mani (1)
Sourabh Somani (1)
Amit Choudhary (1)
Saillesh Pawar (1)
Bhavik Patel (1)
Rahul Sahay (1)
Mukesh Kumar (1)
Anand Narayanaswamy (1)
Akash Malhotra (1)
Kiranteja Jallepalli (1)
Matthew Cochran (1)
Carmelo La Monica (1)
Gaurav Kumar Arora (1)
Related resources for Classes
No resource found
Classes And Structures In Swift
6/1/2020 5:29:07 AM.
In this article, you will learn about class and structures in swift.
Let's Develop An Angular Application - Styling The Template Using CSS Validation Classes And ngModel Properties
5/29/2020 10:26:17 AM.
In this article, you will learn about styling the template using CSS Validation Classes and ngModel Properties.
Windows Forms Printer Selection List
5/21/2020 2:35:06 AM.
In this example we will create a sample windows form containing a combo box which will list the printers installed on the machine on which the program runs. The default printer for the machine is set
Metaclasses In JavaScript (ES6)
5/19/2020 9:39:18 AM.
In this article, I will describe how to dynamically construct classes at runtime. This concept, known as Metaclass, can be very powerful for niche use-cases.
POCO Classes in Entity FrameWork
5/19/2020 1:34:30 AM.
In this article, you will learn about POCO classes in Entity Framework.
How To: "Private" In JavaScript (ES6) Classes
5/13/2020 9:01:14 AM.
In this article, I cover briefly how you can achieve true private accessibility in JavaScript (ES6) classes.
Let's Develop an Angular Application - Adding Bootstrap Style Classes
5/5/2020 12:11:56 AM.
In this article, we will install and apply bootstrap classes in our application.
Various Syntaxes For Implementing Classes in JavaScript
4/23/2020 3:47:55 PM.
In this article we will learn about various syntaxes for defining a class in JavaScript.
Classes in JavaScript
4/14/2020 3:36:26 PM.
Thi article explains classes in JavaScript.
Python Classes And Objects
4/6/2020 8:54:56 AM.
In this article, I will explain Python classes and objects.
Classes in JavaScript
4/5/2020 2:27:08 PM.
In this article you will learn how we can use classes in JavaScript.
Sealed Classes With When Statements Kotlin
3/19/2020 9:00:30 AM.
In this article, you will learn about sealedcClasses with when statements Kotlin.
Generics In C#
3/7/2020 4:53:43 PM.
C# Generics are used to create generic collections in C#. A C# Generic collection has certain key features such as compare, add, remove, find, and index items. In this C# Generics tutorial with code e
React - Learn From Scratch - Part Five
1/27/2020 6:06:17 PM.
In this post, we'll learn about JavaScript classes and then we'll learn another type of React components (i.e. Class Component.)
Overview of Pseudo Classes in CSS
1/20/2020 11:20:14 AM.
In this article I will explain how to use pseudo classes and pseudo elements.
Types Of Classes And Their Characteristics
12/10/2019 10:37:28 PM.
In this article, you will learn about the types of classes and their characteristics.
What is the difference between classes and objects in C#
11/19/2019 10:22:40 PM.
In this article, we will learn about the difference between objects and classes in C#.
A Complete Java Classes Tutorial
11/6/2019 4:28:55 AM.
Java class is a basic concept of object-oriented programming. Java is an object-oriented programming language, so Everything in java is associated with java classes. In this article, we will learn abo
Learn StringBuffer Class in Java: Lecture 7
9/26/2019 6:19:28 AM.
This article explains the StringBuffer() class in Java and Java strings in general.
Learn StringBuffer() Class in Java: Lecture 6
9/26/2019 6:09:46 AM.
This article explains the StringBuffer() class in Java and Java strings in general.
Learn StringBuffer() Class in Java: Lecture 1
9/26/2019 5:24:49 AM.
This article explains the StringBuffer() class in Java and Java strings in general.
Wrapper Classes in Java
9/24/2019 4:35:48 AM.
This article explains the wrapper classes in Java.
How Container Handle Requests in Java
9/19/2019 5:01:42 AM.
A Container is a Java application that controls a servlet.
URL and URLConnection Classes In Java
9/19/2019 4:44:15 AM.
In this article, we discuss the URL and URLConnection classes in Java.
How Various Java Collection Classes Work
9/19/2019 12:46:57 AM.
In this article we discuss working of various classes in Java collection.
Must Know Design Pattern Interview Questions
9/16/2019 11:47:32 PM.
This article covers most popular design pattern interview questions and answers including factory design pattern, abstract factory design pattern, prototype pattern and more.
Improve Your Model Classes With OOP - Part Three - Serialization
9/3/2019 11:48:40 AM.
In this article, you will learn about improve your model classes with OOP - Serialization.
Improve Your Model Classes With OOP - Part Two - Constructors, Interfaces And More
8/27/2019 4:23:00 PM.
In this article, you will learn how to improve your model classes with OOP.
Improve Your Model Classes With OOP - Part One - The Basics
8/5/2019 6:06:48 PM.
In this article, you will learn the tricks to improve your model classes with OOP.
Using Multiple Main Classes in NetBeans
8/2/2019 3:26:28 AM.
This article provides an idea of how to implement multiple main classes in the NetBeans IDE.
Defining Your Own Exception Class in JAVA (Custom Exception)
7/23/2019 5:30:04 AM.
This article is helpful for making your own exception classes in Java and making your own exception condition.
Working with the SqlConnection and SqlCommand Classes in ADO.NET
7/9/2019 4:33:28 AM.
Here, you will learn C# SqlConnection and C# SqlCommand in ADO.NET with simple examples.
StreamReader And StreamWriter Classes In C#
7/8/2019 11:30:34 PM.
This article explains C# StreamReader and StreamWriter classes and how to use them.
Basics Of Generic Classes In C#
6/10/2019 8:08:16 AM.
This article explains how to use a Generic class and why we need to use it.
Sealed Class in C#
6/3/2019 4:07:01 PM.
C# sealed classes. The sealed keyword in C# language is used to create a sealed class. Sealed classes restricts classes to extend or inherit a class. The code examples of sealed classes in this articl
Partial Classes In C# with Real Example
6/2/2019 10:06:55 PM.
C# Partial Class. Partial classes were introduced in C# 2. A C# Partial class can reside in multiple cs files with the same name. C# partial classes code examples.
Understanding Structures in C#
6/2/2019 5:40:57 PM.
C# Struct, A structure in C# is simply a composite data type consisting of a number elements of other types.
Multiple Inheritance in C#
5/24/2019 7:01:07 AM.
Learn how to implement multiple inheritance in C#. Inheritance is one of the key characteristics of an object oriented programming language.
What are sealed classes and sealed methods
5/14/2019 6:31:27 AM.
In this article, I will try to explain sealed classes and sealed methods in C# language.
Simple XML Parser in C#
5/7/2019 6:47:55 AM.
This article shows how to create a very simple XML parser.
Some Real Differences Between Structures and Classes
5/7/2019 5:37:56 AM.
This article lists some differences between classes and structures. In this article, we will see what is the difference between a structure and a class.
When To Use Static Classes In C#
5/2/2019 10:40:40 AM.
The static modifier in C# declares a static member of a class. The static modifier can be used with classes, properties, methods, fields, operators, events, and constructors, but it cannot be used wit
Abstract Class In C#
4/6/2019 8:37:27 AM.
An abstract class in C# is a class that can't be instantiated. Here learn how to declare and implement abstract classes in C# applications.
All About C# Immutable Classes
3/25/2019 9:08:47 AM.
In this article, we are going to cover all the necessary information you need to know about Immutable classes in C#.
Generics in C# 2.0
3/5/2019 5:10:15 AM.
In this article, I specifically talk about Generics and how they improve upon arraylists and how they solve the issues posed by ArrayLists.
Partial Class In C#
2/25/2019 11:32:54 AM.
In this article, we will learn about partial classes
Abstract Classes In C#
2/8/2019 9:56:33 PM.
This article explains abstract classes in C#. An abstract class is a special type of class that cannot be instantiated and acts as a base class for other classes. Abstract class members marked as abst
OOPS Concepts And .NET - Part One - Classes, Objects, And Structures
1/31/2019 8:43:24 AM.
The following article kicks off a three-part article series that will present definitions and samples for different Object-Oriented Programming concepts and its implementation in .NET.
Exception Handlers in C#
12/27/2018 3:48:37 AM.
In this article, I discuss what Exception Handlers are in C# and how to trap errors using try/catch blocks and their behavior when we use multiple catch statements to handle errors and finaly, how to
Design Pattern For Beginner - Part 9: Strategy Design Pattern
12/11/2018 4:34:41 AM.
In this article we will discuss the Strategies Design Pattern.
Design Pattern For Beginners - Part 11: Implement Decouple Classes in Application
12/11/2018 4:21:33 AM.
Today let’s start with a very common and easy design pattern called Implement Decouple Classes in applications.
Types of Classes in C#
11/22/2018 3:17:18 AM.
In this article we will learn about various types of classes in C#.
Using WebRequest and WebResponse classes
11/15/2018 10:32:32 PM.
Downloading and uploading data from the web has been a very common programming practice these days.
Different Types Of Classes In C#
11/13/2018 9:13:55 AM.
In this article. We will understand types of classes in c#. There are four different type of classes available in c#.
String and StringBuilder Classes
10/24/2018 7:01:22 AM.
Here I'm going to tell you about what is String and StringBuilder and what the differences between String and StringBuilder classes.
How To Organize Classes Using Namespaces
9/20/2018 2:28:37 AM.
How To Organize Classes Using Namespaces. In this article we will learn Two-Tiered Namespaces and Commonly used namespaces in .NET Framework.
Show/Delete/Edit data in WPF DataGrid using LINQ to SQL Classes
9/18/2018 4:30:25 AM.
This article will demonstrate how to Show, Delete, and Edit data in WPF Data Grid using LINQ to SQL Data Classes.
Finding Directories With Regular Expressions
9/18/2018 3:23:48 AM.
This article provides an example of using a regular expression to search a directory and determine a file type. C# Regular Expressions
Abstract Classes in C#
9/5/2018 2:10:32 AM.
This article explains Abstract Classes in C#. Abstract Class is a type of class for which we cannot create an instance of the class.
Android Kotlin - Classes And Objects - Part Four
3/21/2018 10:03:47 AM.
In this article, we are going to learn about classes and objects in Kotlin with null values.
Understanding Classes In Kotlin
12/7/2017 1:20:17 PM.
In the above example, I have created a nested class ‘Nestedclass’ which has a data member ‘b’ and a member method show(),now the object of nested class can access member of nested class only.
Static Class In C#
10/30/2017 7:33:24 AM.
In this article you will learn how to use Static classes in c#.
Class and Object Function in PHP
6/29/2017 5:11:12 AM.
In this article I will discuss the get_declare_classes() and get_class_method() functions in PHP.
Singleton Vs Static Classes
6/27/2017 3:04:10 AM.
Why do you use a Singleton class if a Static class serves the purpose What is the difference between Singleton and Static classes and when do you use each one in your program?
HTML Helpers in MVC: Part 2
6/22/2017 4:21:03 AM.
in this article we discuss HTML Helper class in MVC, This article shows how to render a radio button list control and a checkbox list control using HTML Helpers.
HTML Helpers in MVC: Part 3
6/22/2017 4:19:01 AM.
This article explains the HTML helper classes. This article explains how to use the Display and Editor Template HTML Helpers.
Custom Collection Classes In C#
4/6/2017 4:14:27 PM.
This article presents an overview of custom collection classes.
Adding Bootstrap Image Classes
12/27/2016 5:07:18 PM.
This article shows how to display images in a rounded, circle or Polaroid style using just Bootstrap CSS.
Using Ionic Tabs Control Classes In Visual Studio 2015
11/27/2016 1:20:21 PM.
In this article, you will learn about how to add Ionic tabs control class in Ionic app, using Visual Studio 2015.
Build The Ionic App With Radio Button Classes In Visual Studio 2015
11/27/2016 1:15:48 PM.
In this article, you will learn how to add Ionic Radio button in the Ionic app, using Visual Studio 2015.
Using Range Classes In Ionic Blank App Using Visual Studio 2015
11/24/2016 12:15:55 PM.
In this article, you will learn about how to add Range classes in Ionic app, using Visual Studio 2015
Working With Ionic Toggle Classes In Ionic Blank App Using Visual Studio 2015
11/24/2016 12:27:34 AM.
In this article, we are going to see how to start using Toggle Classes in Ionic app, using Visual Studio 2015.
Working With Color Classes In Ionic Blank App Using Visual Studio 2015
11/12/2016 12:26:06 PM.
In this article, you will learn how to start using the Color classes in Ionic app, using Visual Studio 2015.
Introduction to JDBC
8/2/2016 3:01:27 AM.
In this video we will Understanding Introduction to JDBC.Java Database Connectivity (JDBC) is an application programming interface (API) for the programming language Java, which defines how a client m
Design Patterns Simplified - Part 2 (Singleton)
7/18/2016 5:12:58 AM.
This article explains what Singleton Design Patterns is, addresses common questions and finally illustrates the implementation.
Classes in C#
7/15/2016 1:20:33 AM.
This video helps you understand what is a Class in C#, it's methods and properties and how do we use them..
Working With Built-In HTML Helper Classes In ASP.NET MVC
5/23/2016 11:25:40 AM.
In this Article, we will learn how to use Built-In HTML Helper Classes in ASP.NET MVC.
Move Domain Classes Configuration To Separate Classes
5/19/2016 11:44:34 AM.
In this article you will learn how to move Domain Classes Configuration to separate classes.
Brief Lesson About C# Classes And Objects
5/4/2016 12:32:04 PM.
In this article you will learn about C# Classes And Objects.
How To Paste JSON As Classes Or XML As Classes in Visual Studio
4/13/2016 1:53:18 PM.
In this article, I will explain you how to paste JSON or XML as classes in Visual Studio.
C# and ASP.Net Interview Question and Answers
4/6/2016 4:50:23 AM.
In this article I will demonstrate C# and ASP.NET Interview Question and Answers.
OOP In WinJS #4 : Inheritance
4/4/2016 10:36:16 AM.
In this article, I'll be talking about how to derive classes in WinJS library. This is part 4 of the series.
OOP in WinJS #3: Classes
3/24/2016 10:53:43 AM.
In this article, I'll be talking about how to use classes in WinJS library. We'll be using Universal Windows Apps for showing the demo.
T4 Templates To Generate Classes From XML
1/7/2016 11:31:40 AM.
In this article you will learn about T4 Templates to generate classes from XML.
Learn Tiny Bit Of C# In 7 Days - Day 3
12/29/2015 6:23:16 AM.
This article intended to focus towards the beginners so that they can easily grasp the C# Language concepts.
Overview Of Code First Migrations In Entity Framework
12/14/2015 5:17:17 AM.
In this article you will learn an overview of Code First Migrations in Entity Framework with Example.
Abstract Classes In C#
12/8/2015 1:53:26 AM.
In this article you will learn about abstract classes in C#.
Database First Approach With ASP.NET MVC Step By Step Part 2
11/9/2015 12:55:45 AM.
In this article we will see how to create model classes from database using Edmx file.
C# FAQ 3 - Getting Started With C#
10/31/2015 1:48:57 AM.
This article examines the basics concepts associated with C# programming such as CLR, Class libraries and namespaces.
Understanding Sealed Keyword For Classes And Methods In C#
9/3/2015 1:37:40 AM.
In this article we will implement sealed keyword for classes and methods in C#.
Partial Class in C#
5/31/2015 12:51:19 AM.
This article explains partial classes in C# Programming. A partial class is a concept where a single class can be split into 2 or more files.
Equality Implementation in C#
3/31/2015 6:37:36 AM.
In this article you will learn Equality Implementation in C# programming.
The Nokia Maps on Windows Phone 8: Part 4
3/28/2015 12:32:16 PM.
In this fourth and final article we will see an introduction to the Route and RouteLeg classes and so on.
Explaining Constructors in C#
3/27/2015 1:21:27 PM.
In this article we will discuss C# constructors in details. | https://www.c-sharpcorner.com/topics/classes | CC-MAIN-2020-29 | refinedweb | 3,313 | 66.13 |
My issue is 100% identical to however that post was closed and unanswered.
Replace this line with your code.
My issue is 100% identical to however that post was closed and unanswered.
Replace this line with your code.
The issue is closed because the refresh of the page solved the problem.
First try a refresh.
Second, try this code:
var React = require('react');
var Example = React.createClass({
getInitialState: function () {
return { subtext: 'Put me in an
shouldComponentUpdate: function (nextProps, nextState) {
if ((this.props.text == nextProps.text) &&
(this.state.subtext == nextState.subtext)) {
alert("Props and state haven't changed, so I'm not gonna update!");
return false;
} else {
alert("Okay fine I will update.")
return true;
}
},
render: function () {
return (
It should work just fine.
I have tried refreshing the page multiple times and I also tried clearing my cache. I tried making a Target.js file with that code and I still had issues.
Would someone be willing to post the original code for me for Target.js so I can try creating a new Target.js from scratch and see if that works. Thank you.
The target.js file is below:
var React = require('react');
var random = require('./helpers').random;
var Target = React.createClass({
propTypes: {
number: React.PropTypes.number.isRequired
},
shouldComponentUpdate: function (nextProps, nextState) {
return this.props.number != nextProps.number;
},
render: function () {
var visibility = this.props.number
? 'visible' : 'hidden';
var style = {
position: 'absolute',
left: random(100) + '%',
top: random(100) + '%',
fontSize: 40,
cursor: 'pointer',
visibility: visibility
};
return ( <span style={style} {this.props.number} </span> )
}
});
module.exports = Target;
No, I haven't I decided to just move on and hope that they will one day fix the issue or help me with it. I appreciate your help though. | https://discuss.codecademy.com/t/shouldcomponentupdate-problem-with-the-exercise-3-7/63144 | CC-MAIN-2018-43 | refinedweb | 285 | 61.73 |
Problem in running first hibernate program.... - Hibernate
Problem in running first hibernate program.... Hi...I am using.../FirstExample
Exception in thread "main" "...?
-----------------------------------------------------------------------
Hibernate: insert
delete query problem - Hibernate
question no: 1) Why is table STUDENT not mapped? For insert it works correctly; the problem is only with the delete query.
2) query.executeUpdate(); ->...();
Read for more information.
Thanks
problem in insert query - JSP-Servlet
problem in insert query Hi!
I am using this statement for data...:-
String insertQuery = "insert into volunteer_profiles ( name, gender ) values ( 'name... For this problem, please give full details
with source code.
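Concatenating user input directly into an `insertQuery` string, as in the snippet above, is fragile: a quote in the data breaks the SQL, and it invites injection. The runnable sketch below contrasts that with the parameterized form; the table and column names are taken from the snippet, while the sample values are invented for illustration.

```java
public class InsertQueryDemo {
    // Parameterized form: the driver sends the values separately from the SQL text.
    public static final String PARAMETERIZED =
            "insert into volunteer_profiles ( name, gender ) values ( ?, ? )";

    // Naive concatenation, as in the snippet above; a quote in the data
    // produces malformed SQL (and opens the door to SQL injection).
    public static String concatenated(String name, String gender) {
        return "insert into volunteer_profiles ( name, gender ) values ( '"
                + name + "', '" + gender + "' )";
    }

    public static void main(String[] args) {
        System.out.println(concatenated("O'Brien", "male")); // broken by the apostrophe
        System.out.println(PARAMETERIZED);                   // safe to prepare once
    }
}
```

With JDBC you would pass `PARAMETERIZED` to `Connection.prepareStatement(...)`, then call `setString(1, name)` and `setString(2, gender)` before `executeUpdate()`.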
insert data
insert data I've got a problem inserting my data into the database. I can upload my multipart file but not the text data. Please help me to solve this. Attached...("insert into images values(?,?,?,?,?)"); st.setString(2
... this type of output:
----------------------------
Inserting Record
Done
Hibernate: insert into CONTACT (FIRSTNAME, LASTNAME, EMAIL, ID) values (?, ?, ?, ?)
Could... = null;
try{
// This step will read hibernate.cfg.xml and prepare hibernate
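Console lines such as `Hibernate: insert into CONTACT (...) values (?, ?, ?, ?)` above appear when SQL echoing is enabled in the Hibernate configuration. As a rough sketch (the connection URL, credentials, dialect, and mapping file name are placeholders, not taken from the thread), a minimal `hibernate.cfg.xml` that produces that output looks like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
  <session-factory>
    <!-- Placeholder connection settings - adjust for your database -->
    <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
    <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/hibernatetutorial</property>
    <property name="hibernate.connection.username">root</property>
    <property name="hibernate.connection.password"></property>
    <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
    <!-- This switch makes Hibernate echo statements such as
         "Hibernate: insert into CONTACT (...) values (?, ?, ?, ?)" -->
    <property name="hibernate.show_sql">true</property>
    <mapping resource="contact.hbm.xml"/>
  </session-factory>
</hibernate-configuration>
```

The step the comment refers to ("This step will read hibernate.cfg.xml") is `new Configuration().configure()`, which looks this file up on the classpath.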
J2ee - Hibernate
that cannot insert exampleVO into the database... please help me to solve this problem
java - Hibernate
/hibernate/runninge-xample.shtml
Thanks...java Hi guys, can anyone tell me the procedure for executing Spring and Hibernate in the MyEclipse IDE? It is very urgent for me; thanks in advance.
Hibernate : Bulk Insert/Batch Insert
This tutorial contains a description of Hibernate bulk insertion (batch insertion).
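In Hibernate, batch insertion is usually done by setting `hibernate.jdbc.batch_size` and then flushing and clearing the session every N saves, so the first-level cache does not grow without bound. The loop below is a plain-Java sketch of that flush-every-N bookkeeping; the list stands in for the session, and no real database is touched.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchInsertSketch {
    static final int BATCH_SIZE = 20; // would match hibernate.jdbc.batch_size

    /** Simulates saving n entities, flushing every BATCH_SIZE; returns the flush count. */
    public static int insertAll(int n) {
        List<Integer> session = new ArrayList<>(); // stands in for the Hibernate session
        int flushes = 0;
        for (int i = 0; i < n; i++) {
            session.add(i);                        // session.save(entity)
            if (session.size() == BATCH_SIZE) {
                session.clear();                   // session.flush(); session.clear();
                flushes++;
            }
        }
        if (!session.isEmpty()) {                  // flush the final partial batch
            session.clear();
            flushes++;
        }
        return flushes;
    }

    public static void main(String[] args) {
        System.out.println(insertAll(50)); // 50 saves -> 3 flushes (20 + 20 + 10)
    }
}
```

In real Hibernate code the `session.clear()` calls would be `session.flush(); session.clear();` inside one transaction, committed at the end.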
Java Compilation error. Hibernate code problem. Struts first example - Hibernate
Java Compilation error. Hibernate code problem. Struts first example
Hibernate - Hibernate
-insert? Hi friend,
It is necessary to have both a namespace property and a tagged value to allow dynamic-insert and dynamic-update....
Thanks
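For reference, in a Hibernate mapping file `dynamic-insert` and `dynamic-update` are attributes of the `<class>` element; when set to `true`, Hibernate builds the INSERT or UPDATE SQL at runtime containing only the columns that are actually set or changed. A sketch follows - the class and table names are invented for illustration:

```xml
<hibernate-mapping>
  <!-- dynamic-insert: the generated INSERT omits null properties.
       dynamic-update: the generated UPDATE contains only changed columns. -->
  <class name="net.example.Contact" table="CONTACT"
         dynamic-
    <id name="id" column="ID">
      <generator class="native"/>
    </id>
    <property name="firstName" column="FIRSTNAME"/>
    <property name="email" column="EMAIL"/>
  </class>
</hibernate-mapping>
```

With annotations, the same behavior is expressed via `@DynamicInsert` and `@DynamicUpdate` on the entity class.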
Facing Problem to insert Multiple Array values in database - JSP-Servlet
Facing Problem to insert Multiple Array values in database Hai... of Problem.
Thanks.
Dear friend,
I want to insert... I am facing the problem while inserting the data in the database.
I am using MsAccess. - Hibernate
Hibernate Hi, this is Jagadhish. While executing a program in Hibernate in Tomcat I got an error like this:
HTTP Status 500....
Hopefully this will solve your problem.
Regards
Deepak Kumar
Hibernate
Tomcat installation problem - Hibernate Interview Questions
Tomcat installation problem Hello
Insert Operation - WebServices
Insert Operation Send the code for Insert Contact in a pop-up window... insert my contact details (name, email, etc.). So, send the code and solution... problem:
Some points to remember:
1. Create a file "home.jsp" having a button
Hibernate code - Hibernate
Hibernate code Can you show an insert example of Hibernate other... of example related to Hibernate...
Thanks...=ps1.executeQuery();
ps2 = con.prepareStatement(
"insert
FAILED TO INSERT DATA FROM FUNCTION();
FAILED TO INSERT DATA FROM FUNCTION(); Hello, I have a problem inserting data from the output of function()...
I want to store the output in the database, but it failed to insert...
before this....
.........................
function
hibernate Excetion - Hibernate
the same error that
Hibernate: insert into login (uname, password) values...://
It will be helpful for you...hibernate Excetion The database returned no natively generated Isolation Query. - Hibernate
Hibernate Isolation Query. Hi,
Am Using HibernateORM with JBOSS server and SQLSERVER.I have a transaction of 20 MB, when it is getting processed... for the problem
uploading problem
...
i wnat to upload a file to a folder 'doc' and insert corresponding
data... ("insert into
filePath(fileId,filePath,fileName,fileType)
values("+id+",'"+fileP...();
}
}
}
}
%>
my problem...:
firstly
problem in database
problem in database thanks for web site.
I want change this code to insert data into PostgreSql database using jsp,servlets.
but i getting...("org.postgresql.Driver");
PreparedStatement pst = connection.prepareStatement("INSERT other
reconfirm
It
Insert specific fields into table dynamically for each row.
Insert specific fields into table dynamically for each row. ... will enter those details in a webpage. So that the field values will insert... insert there for each row and for each row there is a button "done".if he click eclipse 3.1 - Hibernate
hibernate configuration with eclipse 3.1 Dear Sir,
i got your mail... simple hibernate program code and with struts also.
so please tell me the step by step process.
i have that folder in
d:/hibernate
i thing
problem in servlet program
problem in servlet program Dear Sir, I have a problem to insert the Blob type data like as video file, audio file in the database using Servlet and html code
Hibernate Association
Hibernate Association 1)
<bag name="product" inverse="true... name="dealer" class="
net.roseindia.Dealer" column="did"
insert="false" update...
cascade
column
insert
update... service bus and then insert into database.
Thanks
In software applications there are often values that need to change from time to time: database connection information, API access credentials, directory paths and so on. This applies to both desktop and web applications. In C#.NET applications there is a default XML file for such settings values: web.config for web applications and app.config for desktop applications. If we didn't put those dynamic values there, we would have to recompile the application every time we needed to change one!
This is especially a big issue in product-level development, where the software is distributed to many customers and each customer sets different values. Although some code can be changed without compiling, such as an ASP.NET website or a PHP web application, using hard-coded settings is still a terrible idea: any new developer has to hunt for those values inside the core code, and there are plenty of other disadvantages too. Simply put, I strongly recommend using an external file for storing such app settings (either a text/XML file or, if a code file, one dedicated to configuration settings only).
The Microsoft .NET platform provides a very easy and efficient way to manage these application settings. By default a ".config" file is associated with every new desktop/web project. This file is in XML format and is serialized/deserialized into C# objects automatically. In this small tutorial, I will show how to store and retrieve configuration values (in app.config or web.config) with C# code examples. I will use the term 'config file' to refer to 'app.config/web.config'.
Settings Values On Config File In C# Project:
There are generally two kinds of scenarios. In the first, you want to store only a single value for a specific setting. In the second, you need several values for a single setting. Each requires a different approach.
Single Value Settings:
To store a single value, you should put it inside the '<appSettings>' section, as follows:
<appSettings>
  <add key="SettingKey1" value="Settings 1 Value"/>
  <add key="SettingKey2" value="Settings 2 Value"/>
</appSettings>
Retrieving these values is pretty straightforward; just use the following wherever you need it:
string value1 = ConfigurationManager.AppSettings["SettingKey1"];
string value2 = ConfigurationManager.AppSettings["SettingKey2"];
You should have the 'System.Configuration' namespace referenced in that code file. In an ASP.NET web application you can also use 'WebConfigurationManager' (namespace 'System.Web.Configuration') instead of 'ConfigurationManager'.
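One thing to keep in mind (not covered above): the AppSettings indexer returns null when a key is missing, so a small defensive wrapper can save you from NullReferenceExceptions later. The helper name below is made up for illustration:

```csharp
using System.Configuration;

// Hypothetical helper (not part of the framework): reads an appSettings
// value and falls back to a default, since AppSettings[key] returns null
// for keys that are not present in the config file.
public static class AppSettingsHelper
{
    public static string Get(string key, string defaultValue)
    {
        string value = ConfigurationManager.AppSettings[key];
        return value ?? defaultValue;
    }
}
```

For example, AppSettingsHelper.Get("SettingKey1", "fallback") never returns null, even if the key was deleted from the config file.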
Multiple Value Settings:
To store more than one value for a specific setting, you should use the '<configSections>' section. You will need the following parts implemented:

- Add a section name and its class name (with namespace) in the <configSections> area of the config file.
- Add a new tag with the section name, containing the attribute names and their values, in the config file.
- Implement a class that extends the ConfigurationSection class.
Here is sample code for the config file:
<configSections>
  <section name="mySettings1" type="myProject.Libraries.MySettings1"/>
  <section name="mySettings2" type="myProject.Libraries.MySettings2"/>
</configSections>
<mySettings1 key1="Key 1 value" key2="Key 2 value" />
<mySettings2 key1="Key 1 value" key2="Key 2 value" />
A few things to remember at this stage:

- <configSections> must appear at the beginning, before all other configuration sections.
- The custom tag name must match the name given in the section declaration.
The class file should contain matching properties, one per attribute name of the section, like the following:
using System;
using System.Configuration;

namespace myProject.Libraries
{
    public class MySettings1 : ConfigurationSection
    {
        [ConfigurationProperty("key1", IsRequired = true)]
        public string Key1
        {
            get { return (string)this["key1"]; }
        }

        [ConfigurationProperty("key2", IsRequired = true)]
        public string Key2
        {
            get { return (string)this["key2"]; }
        }
    }
}
Now we are all set to use those settings in our application. Use something like the following code to retrieve the values:
var settings = ConfigurationManager.GetSection("mySettings1");
if (settings != null)
{
    MySettings1 current = settings as MySettings1;
    string valueToUse = current.Key1;
    //use the value
}
With this, you can store multiple values for a specific setting in a separate section of the configuration file. And of course, you won't have to recompile your application when changing these values, as they are read at run time when needed. So, if you are working on a .NET application, you should always use the configuration file for storing dynamic settings values instead of hard-coding them or keeping them in some other kind of file such as a .txt file. Happy coding 🙂
thanx RANA., JAY R

public partial class datagrid : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
}
protected void Button1_Click(object sender, EventArgs e)
{
//SqlConnection conn = new SqlConnection(“data source=localhost\\sqlexpress;Initial Catalog=patil;Integrated Security=SSPI”);
string con = ConfigurationManager.ConnectionStrings["patil"].ConnectionString;
SqlConnection conn = new SqlConnection(con);
string sqlsrting = "spr_displaydata";
SqlCommand cmd = new SqlCommand(sqlsrting, conn);
SqlDataAdapter Dataadpt = new SqlDataAdapter(cmd);
DataSet ds = new DataSet();
Dataadpt.Fill(ds);
GridView1.DataSource = ds;
GridView1.DataBind();
}
}
WEB.CONFIG File:
what about windows development?
In order to get this working I had to add the namespace in the ConfigSections of the config file:
Could you please show the sample config file after you added the namespace in the ConfigSections?
I am using a .bat file in c# code. i have hard coded that path in code. i want to give the path in .config in order to run it in IIS server. Could you please help me?
When I try your sample
var settings = ConfigurationManager.GetSection(“mySettings”);
MySettings current;
if (settings != null)
{
current = settings as MySettings;
string valueToUse = current.ApiKey;
//use the value
}
the ‘seetings’ variable is always null. How do I fix it?
no pblim
hi, can u use in this php
Yes Paul, you can use this in PHP. Just copy and paste the code into the code file, it should work perfect for you. | https://codesamplez.com/development/application-settings-c-sharp | CC-MAIN-2021-49 | refinedweb | 972 | 50.53 |
A number of images are needed for tile and toast notifications, as well as the (optional) badge logo for the start screen. Only those images highlighted in red (100% size for logo and small logo) are required and must be supplied in the app manifest (as .jpg/.jpeg or .png formats). Other sizes are optional, and if not provided they will be generated by scaling the default 100% image.
Tip: Use the Visual Studio 2012 simulator to check out the appearance of your images under various scaling factors (and contrast modes).
Images used for notifications can be stored in one of three places: the app package, local application data, or a remote web location. You reference a single image name, and the system selects a file of the same name (but possibly a different extension or qualifier) that best matches the current scale, contrast and language settings.
You can also arrange the images in subdirectories to provide a myriad of customization options. Take, for example, the following directory structure.
If the application were run on a US-based system, but the user selected a black high contrast theme, a reference to welcome.png for a tile notification would pick the variant on Line 8 (or possibly Line 7, depending on the characteristics of that system). Someone running the application with his locale set to Japan on a high-resolution device would see the image file associated with Line 17.
Unfortunately, none of the conventions supported for app package images are supported when using local images (the ms-appdata:///local namespace), but you could create your own mapping algorithm using the relevant Windows APIs.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | http://www.codeproject.com/script/Articles/ArticleVersion.aspx?aid=441153&av=754432 | CC-MAIN-2014-42 | refinedweb | 268 | 50.97 |
MH-Z14A CO2 sensor
Hello,
Anyone has got a modified MH-Z14A CO2 sensor working please?
Whenever I tried to use sub-type V_UNIT_PREFIX, I get the following error (My Sensors 1.5.4)
MySensors: Unknown/Invalid sensor type (43)
Additionally, I cannot get proper readings from PWM. This is the description (in Chinese - sorry)~红外传感器说明书V1.0.pdf
I am using the following PINS: Pin23 - VIN, Pin 22 - GND, Pin26 - PWM.
Any help is appreciated
OK, I managed to get it resolved now. Simple code really - I am going to convert it into MySensors now. One thing to note - there is an error in the formula. This is a 5000ppm sensor, therefore a "5000" multiplier has to be used in the formula.
// set pin number: const int sensorPin = 2; // the number of the sensor pin long ppm=0; unsigned long duration; void setup() { Serial.begin(9600); // initialize the sensor pin as an input pinMode(sensorPin, INPUT); } void loop(){ duration = pulseIn(sensorPin,HIGH,2000000); ppm = 5000 * (((duration)/1000)-2)/1000; Serial.println("CO2 = " + String(ppm) + " ppm"); delay(10000); }
I am going to convert it into MySensors now
you have it here:
@epierre I took your sketch and modified it. The problem is that the formula for CO2 is wrong for 5000ppm sensors. Additionally, I was getting the error in your sketch - MySensors: Unknown/Invalid sensor type (43). Not sure why.
Anyway, my modification:
#include <MySensor.h> #include <SPI.h> #define CHILD_ID 0 #define CO2_SENSOR_PWM_PIN 2 unsigned long SLEEP_TIME = 30*1000; // Sleep time between reads (in milliseconds) float valAIQ =0.0; float lastAIQ =0.0; unsigned long duration; long ppm; MySensor gw; MyMessage msg(CHILD_ID, V_LEVEL); void setup() { gw.begin(); // Send the sketch version information to the gateway and Controller gw.sendSketchInfo("AIQ Sensor CO2 MH-Z14A", "1.0"); // Register all sensors to gateway (they will be created as child devices) gw.present(CHILD_ID, S_AIR_QUALITY); pinMode(CO2_SENSOR_PWM_PIN, INPUT); } void loop() { while(digitalRead(CO2_SENSOR_PWM_PIN) == HIGH) {;} //wait for the pin to go HIGH and measure HIGH time duration = pulseIn(CO2_SENSOR_PWM_PIN, HIGH, 2000000); ppm = 5000 * ((duration/1000) - 2)/1000; Serial.print(ppm); if ((ppm != lastAIQ)&&(abs(ppm-lastAIQ)>=10)) { gw.send(msg.set((long)ceil(ppm))); lastAIQ = ceil(ppm); } gw.sleep(SLEEP_TIME); //sleep for: sleepTime }
@alexsh1 sorry I don't read chinese, but my datasheet says 2000:
are you using an alternate wiring or an alternate version (the A at the end for enhanced or revised)?
@epierre Which sensor do you have? 2000ppm or 5000ppm? For 2000ppm you should use 2000 multiplier. I have 5000ppm, but the formula still gives me 2000 multiplier. Using 5000 multiplier seems to give a correct result.
I have MH-Z14A 5000ppm. No idea what "A" stands for - there is very little documentation in English.
@epierre In this case you should use 5000 multiplier - please see above my sketch.
There is a mistake in the datasheet file - it has been written for 2000ppm model.
Mine has got 20+ pins and it is 0-5000ppm
hi
this looks quite nice stuff, but how should i wire up the sensor to wemos d1 for example?
do i have to use the TTL adapter mode on the board by shorting some pins or do i just need to give some power to the sensor and hook the PWM pin to the wemos board somehow?
and how does this code comapre to these codes?
or should i just get a usb/ttl adapter and use it over python then?
well, i got this far that i just soldered some connectors to pins 16-19 and now want to use the wemos board as USB to TTL adapter, so i could use the Co2 logger over python server, but how do i need to connect the Rx and Tx pins?
does it have to be like a crossover cable or just normal straight?
like Sensor Rx to Wemos Tx + S-Tx to W-Rx or just the regular connection like Rx to Rx and Tx to Tx?
edit*
lol.... the 16-19 pins maybe doesnt have this somekind of 3.3v query pin and only has 5V input? or Rx&Tx run at 3,3 volts?
lol.... so i thought that maybe i could run it like this:
but what is the (T1)PD5/IO5 pin on the wemos board ? D3 or D4? maybe D3 as GPIO0 then???:-S
from here:
ok, so i got it working from the last link, but wonder what does this code do?
CO2= int(2000*(duration_h-2)/(duration_h+(1004-duration_h)-4));
and i actually changed it to this:
CO2=int(5000*(duration_h-2)/(duration_h+(1004-duration_h)-4));
(removed the space before "int" too)
and this code somehow has "-4" not "-2" as in this forum topic?
and it probably calibrates itself on the 1st startup hour and also does this every 24 hours?
so wonder if i need to place the sensor in a relatively clean air(on a window?) or it doesnt really matter? and it is of course a bit colder near the window because i also opened it a bit to let more clean air in.......
and when i first plugged it in, it started to show something like 495ppm, but then i unplugged it and took it to the window(in another room), and it then started from 1375ppm and has now been dropped down to 900....
so i wonder why did it first show 495, but then 1375 suddenly.....
nice...
i got this reset stack now:
CO2 level: -10 ppm
Soft WDT reset
ctx: cont
sp: 3ffef1f0 end: 3ffef3d0 offset: 01b0
stack>>>
3ffef3a0: 00000000 00000000 00000001 40202281
3ffef3b0: 3fffdad0 00000000 3ffee39c 402022ac
3ffef3c0: feefeffe feefeffe 3ffee3b0 40100114
<<<stack<<<
?⸮⸮⸮⸮DH⸮⸮⸮CO2 level: 395 ppm
CO2 level: 395 ppm
CO2 level: -10 ppm
CO2 level: 405 ppm
CO2 level: 405 ppm
...
CO2 level: 1070 ppm
CO2 level: 1075 ppm
CO2 level: 1080 ppm
...
atleast now i know that it started the calibration from the start again, but what could it all mean?
and its probably best to use good quality cables too then... and perhaps good to not to touch the dupont cables too... | https://forum.mysensors.org/topic/3821/mh-z14a-co2-sensor | CC-MAIN-2020-50 | refinedweb | 1,015 | 73.47 |
Getting Started
So, when you have enough reasons to try D, let's get your hands dirty.
Contents
Your first program
For a beginner-friendly, step-by-step introduction on how to build your first D program see the hello world section of Ali Çehreli's online book Programming in D.
Choosing a compiler
As you probably already know, D is a compiled language, so you have to make your first choice: a compiler. There are several compilers to choose from.
They differ in:
- Installation procedure
- Ease of building from source
- License
- Performance
- Reliability
- Popularity
Whichever you will choose, you shouldn't have problems changing it. Options might differ but most code should be compatible with all of them. (If you find problems, please file an issue.)
Running D code like a script
After you have your compiler installed you'll want to do some coding.
For small projects it's handy to compile and run in a single step. A few solutions exist.
Using RDMD
The rdmd tool, distributed with dmd or available separately here: makes this simple.
Just create your source file, e.g. main.d:
import std.stdio; void main() { writeln("Hello, world without explicit compilations!"); }
and run the command line
rdmd main.d
If you properly installed the compiler you should see 'Hello, world without explicit compilations!' on the terminal. Isn't that simple?
On Unix/Linux systems, you can even use the traditional #! for scripting and set your main.d file to be executable:
chmod +x main.d
#!/usr/bin/env rdmd import std.stdio; void main() { writeln("Hello, world without explicit compilations!"); }
and then simply run:
./main.d
For more information about this tool, see your compiler documentation.
You can use exactly the same command for building programs that are made of separate modules. Just 'import' the other modules.
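For example, suppose main.d imports a helper module (the file and function names here are made up for illustration):

```d
// greeter.d -- a helper module
module greeter;

string greeting(string name)
{
    return "Hello, " ~ name ~ "!";
}
```

```d
// main.d -- importing greeter is enough; rdmd tracks the dependency
import std.stdio;
import greeter;

void main()
{
    writeln(greeting("world"));
}
```

Running `rdmd main.d` compiles both files and runs the result; there is no need to list greeter.d on the command line.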
When your programs get larger, consider using a build system.
Using DUB
Since version 1.0, DUB supports single-file packages. The DUB properties are placed in a DDOC comment between the script line and the module declaration. For example, the file a.d
#!/usr/bin/env dub /+ dub.sdl: name "colortest" dependency "color" version="~>0.0.3" +/ void main() { import std.stdio : writefln; import std.experimental.color; import std.experimental.color.hsx; auto yellow = RGBf32(1.0, 1.0, 0.0); writefln("Yellow in HSV: %s", cast(HSV!())yellow); }
can be executed with
dub a.d
Using Coedit
The D IDE Coedit provides a system of runnable modules. A runnable module is a simple D source file that can be compiled and executed in one step. Compiler options can be specified in the script line (which is not otherwise used, since the file is handled by the IDE). Read more in the official documentation.
Choose your code editor
See IDEs and Text Editors to learn about D editing tools.
Using the D language for your domain
Do you want to develop a game, web application, desktop application, or use D to drive your robot? You can find more specialized information for different disciplines at Development with D.
Troubleshooting
Every complex software system can fail at times. The same applies to the DMD toolchain, and perhaps to your application.
If you're having compilation/runtime problems and you have no idea why, don't worry. There are some tools that will help.
If your program doesn't compile and the error messages aren't enough, see Code Troubleshooting.
If your program compiled but doesn't do what you want, see Debuggers.
Beyond that, the D Community is very helpful.
And remember: If your problem has to do with the D toolchain or existing D projects, don't forget to save some time for others by reporting it!
Searching with search engines
General considerations
Googling for "D" is useless in looking for D related web sites. But Googling for [D programming] or [D language] works well. Using dlang in combination with a search helps D in the TIOBE[1] index.
If you maintain a web page on D, please refer to it at least once per page as "D programming language" rather than just "D". This should help significantly with your page rank and findability for those interested in D.
Specific search engine
Let's say we want to search for the identifier indexOf using...
- Bing, Google, or DuckDuckGo: add site:dlang.org to your query. You'll get only the results which belong to the specified site:
indexOf site:dlang.org
Or add "dlang.org" to your query:
indexOf "dlang.org"
- Yahoo search: add domain:dlang.org to your search string. You'll get results from the specified domain:
indexOf domain:dlang.org | https://wiki.dlang.org/?title=Getting_Started&printable=yes | CC-MAIN-2018-34 | refinedweb | 781 | 60.92 |
As I'm growing into my role at Microsoft I am discovering that the wealth of new technologies is almost overwhelming. In many ways, keeping up with the latest material is harder than just sticking with known things and growing slowly. With the fast release cadence at Microsoft, there is literally something new every single day. As part of my role, I get to demonstrate these new features to customers who may be interested in them to solve real-world business problems. Sometimes I just get to create fun demos that engage customers and let me learn the new technology - this is one of those demonstrations.
I spoke at VS Live! Redmond a couple of weeks ago, and Jason Zander, a Corporate Vice President in the Azure group, did just an amazing demo of the new Event Hub feature; I wanted to duplicate it from the ground up. Here's the basic setup. The demo consists of a web client that sends a message (along with the browser being used) to an event hub. The event hub passes it on to a queue (this is mostly because it makes the demo much more fun) and then a client reads out of the queue. The best part is that it happens almost instantaneously.
The main thing to note is that Queues are really designed to be read from sequentially (although you can certainly peek at any message), while Event Hubs can have a message processed by many systems at once. The differences between Queues, Topics and Event Hubs are covered in the Azure documentation. This example doesn't show these differences but instead shows the basic techniques of writing to and reading from hubs and queues. Because Event Hubs are so new, the documentation is extremely limited right now (in fact, Event Hubs are still in Preview).
With all of this said, hopefully this will be helpful and you’ll be able to adapt it for your own purposes… Before beginning I would also like to call out Jeff Wilcox for helping provide a couple of key pieces/ideas for the solution.
Solution Overview
The solution consists of an Azure Website which allows users to enter a message. The message is passed to an Event Hub. An Azure Cloud Service worker processes the Event Hub and passes the message to an Azure Queue. Finally, a WPF application reads the Azure Queue and displays an aggregation of the data to the end user. Conceptually this is similar to a messaging app where 500 people send one person a message at the same time. In more practical terms, think of the Internet of Things (IoT), where thousands of devices send messages every second and that data has to be processed, stored and displayed. So there is a potential real-world application for this type of structure.
Creating the Event Hub
In the Azure management portal select New > Service Bus > Event Hub as follows:
Provide a name (DemoApp in this case):
If you don’t already have an existing service bus namespace, it will create one for you (it is automatically appended with “-ns”). This is a mistake that I made originally – my Event Hub and my Queue are in two different namespaces. This is not required – I just made it more complicated for myself.
Next, provide a partition count and days for retention:
The partition count is a value between 1 and 32. Click the check mark and the hub will be created. After the hub is created, make sure to click the Connection Information at the bottom of the hub page to grab the string to allow the client applications to send data to and read data from the hub.
The Web Client – Writing to the Hub
Before attempting to use an Event Hub you need to add a reference to the right package. To do this, right-click the solution and select Manage NuGet Packages… Search for the Microsoft Azure Service Bus Event Hub – EventProcessorHost. Once you add that you just need to update or add a key with the Event Hub connection data to the web.config file. The page that takes the message is very simple and looks like the following:
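As for that web.config key: its name has to match what the controller reads later via ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"]. The value below is only a placeholder - paste the actual string copied from the Connection Information dialog in the portal:

```xml
<appSettings>
  <!-- Placeholder value: replace with the connection string copied from the
       Event Hub's "Connection Information" dialog in the Azure portal. -->
  <add key="Microsoft.ServiceBus.ConnectionString"
       value="Endpoint=sb://[your namespace]-ns.servicebus.windows.net/;SharedAccessKeyName=[your key name];SharedAccessKey=[your key]" />
</appSettings>
```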
This is a standard MVC application which I added an EventHubs page to. The code in the EventHub.cshtml page is as follows:
@{
    ViewBag.Title = "EventHubs";
}
<p>
    Enter a message to send:
    <input type="text" id="message" name="eventMessage" />
    <input type="button" id="sendMessageButton" value="Send Message" />
</p>
<script type="text/javascript">
    $("#sendMessageButton").click(function () {
        $.getJSON(getHomeUrl() + "home/SendMessage/?" +
            $.param({ eventMessage: $("#message").val(), browserInfo: $.browser.name }),
            "",
            function () {
                $("#message").val("");
            });
    });
</script>
The main work on this page is done by the AJAX call which sends the data to the server. There’s nothing related to the Event Hubs in this page – it’s just standard jQuery.
The Home controller gets this call and does the real work. The main piece is the SendMessage function (be sure to include using Microsoft.ServiceBus.Messaging at the top of the file).

public JsonResult SendMessage(string eventMessage, string browserInfo)
{
    //Put all of the data into a single class
    EventHubData ehd = new EventHubData() { Message = eventMessage, BrowserInfo = browserInfo };
    //serialize it
    var serializedString = JsonConvert.SerializeObject(ehd);
    //Create the Event Data object so the data and partition are known
    EventData data = new EventData(Encoding.Unicode.GetBytes(serializedString))
    {
        PartitionKey = "0"
    };
    //Get a reference to the event hub
    EventHubClient client = EventHubClient.CreateFromConnectionString(ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"], "DemoApp");
    //Send the data
    client.SendAsync(data);
    //Return to the client
    return Json("", JsonRequestBehavior.AllowGet);
}
There is a single class which backs the data; it simply encapsulates the information to send in the message so it can be easily deserialized on the back end. Next the object is serialized and an EventData object is created. The PartitionKey can be whatever you want to call the partition - there is a potential of 32 partitions, and you set this count when you create your event hub. They are referenced by ordinal number, so I just lacked incredible creativity by naming it "0". The next line gets the reference to the event hub in Azure, and finally the message is sent asynchronously.
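One gap worth filling: the EventHubData class itself never appears in the post. Judging from how it is constructed in SendMessage and read back in the worker role (JSON keys Message and BrowserInfo), a minimal version would look like this:

```csharp
// Minimal sketch of the payload class (not shown in the original post).
// The property names must match the JSON keys ("Message", "BrowserInfo")
// that the worker role reads back with JObject.Parse.
public class EventHubData
{
    public string Message { get; set; }
    public string BrowserInfo { get; set; }
}
```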
That’s all there is to it. As you start sending messages you’ll see the following type of view in Azure (note that this doesn’t update instantaneously):
So, we can now add data to the event hub! Obviously that in itself isn’t particularly useful so without further ado, moving on!
Creating the Queue
This is where you can do it better than I did it. Select the DemoApp-ns namespace from the Azure Management Portal and click the Queues tab.
Click Create a new Queue. Provide a queue name and select an appropriate region.
Click the next arrow and set any additional settings needed then click the check mark.
These settings are discussed in more detail in the Azure Queues tutorial.
Remember to grab your connection string!
Now here's the tricky thing - having a queue isn't enough. A queue has to store its data somewhere. That somewhere is in a storage service. And not only in a storage service - but in a container in a storage service. Creating this is trivial, but if you aren't aware of it (like I wasn't, because I tend to "do" before I "read" - isn't that how it is with everyone in the tech industry?) it's hard to figure out. To create a storage account, go to the storage management page and select New. Create the storage service, then click the Containers tab.
Select new at the bottom and provide a name and click the check mark – you’re done! For this you’re going to need a bit more information to connect to it. On the dashboard page for the storage account you’ll see the following:
Grab not only the URL for Queues but also click the Manage Access Keys at the bottom and grab the storage account name and the primary access key as you’ll need these to write to the queue.
Cloud Service Worker – Reading from the Hub
At this point we have data in the hub, now we need to process it. For this we’ll create a cloud service which constantly monitors the hub and processes any data that is received.
This took me a while to figure out as it isn’t obvious – you really do need to do it the way it is described in this post – I spent hours trying to work around what I thought was an overly involved process. It turns out that the separation of concerns really is the smartest way to manage this.
The key to reading from an Event Hub is the IEventProcessor interface. After you create a new Cloud Service worker role, add a reference to the event hub (as in the above sample – and also add a reference to the Azure Service Bus because you’ll need that to access the queue). Then add the following class:1: using Microsoft.ServiceBus.Messaging;2: using Microsoft.WindowsAzure.Storage;3: using Microsoft.WindowsAzure.Storage.Queue;4: using System;5: using System.Collections.Generic;6: using System.Diagnostics;7: using System.Linq;8: using System.Text;9: using System.Threading.Tasks;10: using Newtonsoft.Json.Linq;11: using Microsoft.ServiceBus;12:13: namespace WorkerRole114: {15: public class EventProcessor : IEventProcessor16: {17: PartitionContext partitionContext;18: Stopwatch checkpointStopWatch;19:20: QueueClient queue;21:22: public EventProcessor()23: {24: queue = QueueClient.CreateFromConnectionString("[your connection string here]","[your queue name here]");25: }26:27: public async Task CloseAsync(PartitionContext context, CloseReason reason)28: {29: Console.WriteLine(string.Format("Processor Shuting Down. Partition '{0}', Reason: '{1}'.", this.partitionContext.Lease.PartitionId, reason.ToString()));30: if (reason == CloseReason.Shutdown)31: {32: await context.CheckpointAsync();33: }34: }35:36: public Task OpenAsync(PartitionContext context)37: {38: Console.WriteLine(string.Format("SimpleEventProcessor initialize. 
Partition: '{0}', Offset: '{1}'", context.Lease.PartitionId, context.Lease.Offset));39: this.partitionContext = context;40: this.checkpointStopWatch = new Stopwatch();41: this.checkpointStopWatch.Start();42: return Task.FromResult<object>(null);43: }44:45: public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)46: {47: try48: {49: foreach (EventData eventData in messages)50: {51: Console.WriteLine("Processing event hub data...");52:53: string key = eventData.PartitionKey;54:55: string data = System.Text.Encoding.Unicode.GetString(eventData.GetBytes());56: try57: {58: var json = JObject.Parse(data);59: string text = json["Message"].ToString();60: string agent = json["BrowserInfo"].ToString();61:62: if (queue != null)63: {64:65: await queue.SendAsync(new BrokeredMessage((agent + "##" + text)));66:67: Trace.TraceInformation("Added to queue: " + agent);68: }69: }70: catch(Exception exx)71: {72: Console.WriteLine(exx.Message);73: }74:75: Console.WriteLine(string.Format("Message received. Partition: '{0}', Device: '{1}'",76: this.partitionContext.Lease.PartitionId, key));77: }78:79: //Call checkpoint every 5 minutes, so that worker can resume processing from the 1 minutes back if it restarts.80: // Doing every ONE MINUTE now.81: if (this.checkpointStopWatch.Elapsed > TimeSpan.FromMinutes(1))82: {83: await context.CheckpointAsync();84: lock (this)85: {86: this.checkpointStopWatch.Reset();87: }88: }89: }90: catch (Exception exp)91: {92: Console.WriteLine("Error in processing: " + exp.Message);93: }94: }95: }96: }97:
Much of this is boilerplate code so I’ll cover the parts that require the most explanation as it relates to event hubs and queues.
All Console statements are for debugging purposes only – I left them in because you may need them. Being that this is an Azure worker role, these can be made visible in many different ways including using AppInsights.
Line 24 creates the connection to the queue where messages will be passed from the Event Hub. Obviously it’s better to read this from the web.config file. No, I don’t know why I did it this way – it was late 🙂
Line 45-77, the ProcessEventsAsync method does the bulk of the work and it monitors the event hub (the next code block shows where the connection to this comes from). The EventData contains the message from the client. The first task is to get the data out of the message (Line 55) and the next task is to deserialize it – in this case converting it from JSON to variables we can use (Lines 58-60). Finally it is written to the Queue using a BrokeredMessage object (Line 62-68).
This is all that’s required to read data out of the event hub and place it into the queue. The worker role class manages the lifetime of this object and passes in the necessary configuration information. This class is shown in its entirety here – but most of it is boilerplate – I only changed the OnStart method.1: using System;2: using System.Collections.Generic;3: using System.Diagnostics;4: using System.Linq;5: using System.Net;6: using System.Threading;7: using System.Threading.Tasks;8: using Microsoft.WindowsAzure;9: using Microsoft.WindowsAzure.Diagnostics;10: using Microsoft.WindowsAzure.ServiceRuntime;11: using Microsoft.WindowsAzure.Storage;12: using Microsoft.ServiceBus.Messaging;13:14: namespace WorkerRole115: {16: public class WorkerRole : RoleEntryPoint17: {18: private readonly CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();19: private readonly ManualResetEvent runCompleteEvent = new ManualResetEvent(false);20:21: public override void Run()22: {23: Trace.TraceInformation("WorkerRole1 is running");24:25: try26: {27: this.RunAsync(this.cancellationTokenSource.Token).Wait();28: }29: finally30: {31: this.runCompleteEvent.Set();32: }33: }34:35: public override bool OnStart()36: {37: // Set the maximum number of concurrent connections38: ServicePointManager.DefaultConnectionLimit = 12;39:40: // For information on handling configuration changes41: // see the MSDN topic at: bool result = base.OnStart();44:45: Trace.TraceInformation("WorkerRole1 has been started");46:47: string storage = "DefaultEndpointsProtocol=https;AccountName=[your account name here];AccountKey=[your key here";48: string serviceBus = "Endpoint=sb://[your namespace here].servicebus.windows.net/;SharedAccessKeyName=[your key name here];SharedAccessKey=[your key here]";49: string eventHubName = "demoapp";50:51: EventHubClient client = EventHubClient.CreateFromConnectionString(serviceBus, eventHubName);52: Trace.TraceInformation("Consumer group is: " + 
client.GetDefaultConsumerGroup().GroupName);53:54: _host = new EventProcessorHost("singleworker", eventHubName, client.GetDefaultConsumerGroup().GroupName, serviceBus, storage);55:56: Trace.TraceInformation("Created event processor host...");57:58: return result;59: }60:61: private EventProcessorHost _host;62:63: public override void OnStop()64: {65: Trace.TraceInformation("WorkerRole1 is stopping");66:67: _host.UnregisterEventProcessorAsync().Wait();68:69: this.cancellationTokenSource.Cancel();70: this.runCompleteEvent.WaitOne();71:72: base.OnStop();73:74: Trace.TraceInformation("WorkerRole1 has stopped");75: }76:77: private async Task RunAsync(CancellationToken cancellationToken)78: {79: await _host.RegisterEventProcessorAsync<EventProcessor>();80:81: // TODO: Replace the following with your own logic.82: while (!cancellationToken.IsCancellationRequested)83: {84: //Trace.TraceInformation("Working");85: await Task.Delay(1000);86: }87: }88: }89: }90:
Lines 47-49 set the account information and identifying names for the storage, service bus and queue which are required by the EventProcessor class. Line 51 creates the EventHub client and line 54 starts the process. Everything else is there to manage the lifetime of the EventProcessor object. This can be published to Azure and it will start reading data from the EventHub immediately upon deployment. The queue dashboard should start to look like the following:
WPF Client – Reading from the Queue
Finally we get to reading data from the queue. It’s important to note that you can do this a million different ways. You can use Node.js or SignalR to do real-time web page applications, pass it to other applications for additional processing (which you can do from the event hubs as well) or virtually anything else you want. You can read it with an Android or iOS app, put it into a tabular model for use with PowerView or hundreds of other applications.
I chose WPF because it showed a mix of different technologies and I’m not as comfortable with Node or SignalR (I’m still learning both of them).
The key to this application is a class called QueueReader – everything is used for the presentation of data.1: using Microsoft.ServiceBus.Messaging;2: using System;3: using System.Collections.Generic;4: using System.Linq;5: using System.Text;6: using System.Threading;7: using System.Threading.Tasks;8: using System.Windows;9:10: namespace WpfQueueReader11: {12: class QueueReader13: {14: QueueClient _queue;15: Thread _thread;16: QueueData _persistentData = new QueueData();17:18: public delegate void MessageReceivedEventHandler(object sender, MessageEventArgs e);19: public event MessageReceivedEventHandler MessageReceived;20:21: public void MonitorQueue()22: {23: _queue = QueueClient.CreateFromConnectionString("[your connection string here]", "[your queue name here]");24:25: BrokeredMessage message = null;26: while (true)27: {28: try29: {30: //receive messages from Queue31: message = _queue.Receive(TimeSpan.FromSeconds(5));32: if (message != null)33: {34: string m = message.GetBody<string>();35: string[] values = m.Split(new string[] { "##" }, 2, StringSplitOptions.None);36:37: _persistentData.Message = values[1];38: _persistentData.Browsers.Add(values[0]);39:40: Application.Current.Dispatcher.BeginInvoke(new ThreadStart(() => OnMessageReceived(_persistentData)), null);41:42: // Further custom message processing could go here...43: message.Complete();44: }45: else46: {47: Thread.Sleep(TimeSpan.FromSeconds(5));48: }49: }50: catch(OperationCanceledException)51: {52: _queue.Close();53:54: }55: }56: }57:58: public void EndMonitoring()59: {60: _queue.Close();61: }62:63: protected void OnMessageReceived(QueueData data)64: {65: if (MessageReceived != null)66: {67: MessageReceived(this, new MessageEventArgs(data));68: }69: }70:71: public void StartTask()72: {73: ThreadStart threadStart = new ThreadStart(MonitorQueue);74: _thread = new Thread(threadStart);75: _thread.Start();76: }77:78: public void EndTask()79: {80: _thread.Abort();81: }82: }83: }
Most of the functionality in this class deals with the fact that it’s running on a background thread. The main portion of this class is the MonitorQueue method. Remember to replace the connection string with your own connection information.
The MonitorQueue method does the following:
- Creates a connection to the queue
- Starts an endless loop to check the queue (note that this is a cheap way of doing it and I didn’t want to go crazy just for demo purposes – an excellent article on a better way to do this can be found here:)
- Check the queue
- If there is a message,
- Get the data from the message
- Add it to a class
- Raise an event to the UI thread with the data
- Check the queue again
- If there isn’t a message,
- Go back to sleep for five seconds before checking the queue again
In the UI, the entire class for the single window looks like the following:1: using System;2: using System.Collections.Generic;3: using System.Linq;4: using System.Text;5: using System.Threading.Tasks;6: using System.Windows;7: using System.Windows.Controls;8: using System.Windows.Data;9: using System.Windows.Documents;10: using System.Windows.Input;11: using System.Windows.Media;12: using System.Windows.Media.Imaging;13: using System.Windows.Navigation;14: using System.Windows.Shapes;15:16: namespace WpfQueueReader17: {18: /// <summary>19: /// Interaction logic for MainWindow.xaml20: /// </summary>21: public partial class MainWindow : Window22: {23: QueueReader _reader = new QueueReader();24:25: public MainWindow()26: {27: InitializeComponent();28: _reader.MessageReceived += _reader_MessageReceived;29: _reader.StartTask();30: }31:32: void _reader_MessageReceived(object sender, MessageEventArgs e)33: {34: TextBlock block1 = new TextBlock();35:36: block1.Text = e.Data.Message;37:38: spMessages.Children.Add(block1);39: spBrowsers.DataContext = null;40: spBrowsers.DataContext = e.Data.Browsers;41: }42:43: private void btnExit_Click(object sender, RoutedEventArgs e)44: {45: _reader.EndMonitoring();46: _reader.EndTask();47: this.Close();48: }49:50: private void Window_Closed(object sender, EventArgs e)51: {52: _reader.EndMonitoring();53: _reader.EndTask();54: }55: }56: }57:
The constructor here (Lines 25-30) start the background process executing and wires up the event to the UI. The _reader_MessageRecieved method gets the data and simply appends values and re-binds the a collection of browsers to display. The end result of all of this looks like the following:
In a real time fashion, data is transmitted from the browser, to Azure and back to the WpfQueueReader.
Now, set your imaginations to work because this has real world applications! And it’s never been this easy to build a scalable, distributed application.
Hi Jeff,
Good article. I started reading and doing some POC around EH for the past couple of days. I've couple of questions here.
1. Event Hub events sent to Q for further processing. Is this the recommended architecture from Microsoft? As per EH developer guide, the Receiver using IEventProcessor can handle millions of events efficiently. In that case why do we forward the events to Q? Doesn't it incur further cost and management? Wanted to understand the best architecture to be used.
2. The code sample doesn't use SAS token for access. Would you be able to post the same access using SAS tokens and access policies especially how the token expiry is checked.
3. Is there a way to know who (which device) sent the message? Checking to ensure device spoofing is not an option in EH.
Thanks
Saji
@Saji, all good questions. Let me try to do my best here because I don't have all of the answers.
For #1, the EH can handle millions of events (I accidentally load tested myself to oblivion and I hit at least that many messages in an hour – whoops). There isn't anything inherently wrong with the architecture however. The purpose I typically demonstrate with this is that the event hub can process messages and offload them to other back end systems for processing. So, consider 32 partitions * 1 MB per second * 60 seconds per minute * 60 minutes per hour and you end up with 115GB of data per hour and many messages are very small (just a few Kb in size). However, that's to receive the data. Long processing operations may have to be performed and that would be better of done in an offloaded process of some type (at least I think so but I could be mistaken).
For #2 I don't have an answer – my goal wasn't building a secure application (I actually don't know how to do this – I just wanted to demonstrate the capability – I'm sorry I can't give you more help here.
For #3 I'm going to hedge. I believe, in reading some of the documentation that the answer is yes but I'm not 100% positive on how to do it or the particulars of it.
I hope this helps a little bit.
Hi – Forwarding to queue is not required. Eventhub itself is acting as a queue and letting the events be processed at a comfortable pace. Event processor host gives capabilities to scale up or scale down the event processing. Eventhub data can be processed easily using NRT/storm etc.
Hi Jeff, nice article.
I´m looking at the checkpoint bits, I think you should be calling the Restart method of the stopwatch instead of Reset, is that correct? Unless I missed something along the way (i'm just starting out with Event hubs :)) the Reset method will stop and reset the stopwatch, but Restart will stop, reset and restart.
Thanks!
alexjota.
Some rollup responses to questions in multiple comments.
Scale questions
It can easily handle millions of events. If you need to go beyond the capacity of a single 32 partition / 32 throughput unit event hub, you can either use multiple hubs or file a ticket to get a larger hub created. I have a 1024 partition / 1024 throughput unit event hub. I'm pushing about 6 million events a minute to it and I'm only running at about 1/2 capacity for the event hub instance.
However, you will need to tweak values to get best performance for your use case. Using no partition key works well as Event Hub will round robin the partitions. If you use a partition key you'll need to make sure you use a key mechanism that will give you close to random distribution amongst the partitions.
SAS Key
You use it pretty much identical to the example above. Simply go to the Azure console and create the SAS keys in the UI. Then use that SasKeyName and SasKey in the connection string.
ConnectionString = String.Format("Endpoint=sb://{0}.servicebus.windows.net/;SharedAccessKeyName={1};SharedAccessKey={2}", sbNamespace, sasKeyName, sasKey);
Knowing the Sender
You'll need to include the source as part of your event payload.
Hi,
I am creating windows phone 8.1 application and i want to use EventHub service in background task of windows phone, but when i try to install package of service bus it is giving me a error.
Could you please help me.
Thanks,
Onkarraj
Hi,
Great article – sums up what I had found myself so far, but I need to better understand the recommended architecture for multiple senders.
I built up 3 C# apps – Event receiver – pretty much your code, Event sender – using the EventHubClient stuff, and an Event sender using a simple httpClient – as my end app will send messages from a non-.Net embedded product. To do this I managed to create an SAS and it works great ;-)).
I did also find that after sending from an SAS secured 'device' (ie my httpclient test app) – the device name appeared as the 'PartitionKey' entry in the incoming EventData :-O – then ALL subsequent messages from that device came in on the same partition (13) – is this 'feature' documented ???. NB I did find that other messages (ie from my .Net client were not blocked and some also came through on Partition 13 – without the prior devicename – which is GOOD).
I then hit a problem – in that my receiver suddenly registered 8 partition closures with a reason of 'LeaseLost'. Continuing to send messages showed that messages were no longer being 'received' on these partitions. I decided the leave it running (just to see what happened) and they DID gradually come back and get re-initialised. Once that happened – some of my 'delayed' messages also finally came through. This took quite a while though – with some partitions down for some 10's of minutes – sorry I didn't time it :-(. They didn't all come back up at the same time….first one – then 3 more – then a further 3 and finally the last one.
SO my data was not 'LOST' (which is a great relief ;-)) – BUT it was 'DELAYED' with no indication that this had happened – other than the LeaseLost for some of the partitions.
Is this behaviour documented somewhere please ???? – as it is critical to the smooth running :-O. Is there a best practice as to what we should do if this happens, ie to recover the lost partitions ??? – or do we simply have to wait :-O.
Many Thanks
This solution looks like just what I have needed for IoT solutions ;-)).
BR
Graham
Hey mate,
Could you do proper screenshot without having user to click and expand to another blurry image please?
Thanks | https://blogs.msdn.microsoft.com/musings_on_alm_and_software_development_processes/2014/09/03/azure-event-hubs-queues-and-workers/ | CC-MAIN-2017-47 | refinedweb | 4,461 | 57.87 |
@milouse updated and removed. thanks for letting me know.
Search Criteria
Package Details: turses 0.3.1-9
Dependencies (6)
Required by (0)
Sources (1)
Latest Comments
shaggytwodope commented on 2016-04-07 10:39
milouse commented on 2016-04-07 09:39
The python2-oauth2 dependance is no more required. See and setup.py
Xemerau commented on 2016-03-30 15:53
Works like a charm now, thanks for your work @shaggytwodope
shaggytwodope commented on 2016-03-30 14:26
@Xemerau, updated sorry for the delay. And thanks for the comment.
Xemerau commented on 2016-03-30 13:55
Traceback (most recent call last):
File "/usr/bin/turses", line 5, in <module>
from pkg_resources import load_entry_point
.....
pkg_resources.DistributionNotFound: The 'tweepy==3.5.0' distribution was not found and is required by turses
----------
According to the output of 'pacman -Qi', python2-tweepy-git is reporting a version 0.1018.cd46550-1
Noorac commented on 2015-10-01 19:25
@shaggytwodope thanks for the update. It's working indeed!
shaggytwodope commented on 2015-10-01 19:19
@Noorac Just saw your msg after updating the package. It should be fully working at the time of posting.
Noorac commented on 2015-10-01 18:43
It seems turses wants future 0.15.1, newest version of future is 0.15.2-2
petr.fischer commented on 2015-08-13 17:57
@shaggytwodope Working! Thanks for the fast fix!
shaggytwodope commented on 2015-08-13 17:36
@petr.fischer Should be working now, update and lemme know if any issues. | https://aur.archlinux.org/packages/turses/ | CC-MAIN-2016-36 | refinedweb | 257 | 61.12 |
Conservapedia talk:What is going on at CP?/Archive57
Lenski Hits back!
with abandon! Ace McWicked 23:52, 23 June 2008 (EDT) P.S. Might be an idea to get a screen-capture for posterity
- Woah. Though, to be fair, I have to give Assfly some credit for posting it rather than pretending he never received it. I imagine there are not a few people out there who, upon getting their intellectual ass so thoroughly kicked, would not publicly post their humiliation for all to see. No doubt Andy has some typically Conservapedian reply that will ignore everything he said twist reality into some weird pretzel-like shape in which Andy explains everything away and comes out victorious. Can't wait to see it. Though it will probably be nothing more than accusations of "talk pollution" followed by a "and he still hasn't released the raw data!" DickTurpis 00:14, 24 June 2008 (EDT)
- You confuse Schlafly with someone who is psychologically able to recognize his own error, about anything at all. Coarb 00:22, 24 June 2008 (EDT)
- Yeah, I probably am. My bad. DickTurpis 00:28, 24 June 2008 (EDT)
- I am still digesting it but so far it is excellent. tmtoulouse annoy 00:15, 24 June 2008 (EDT)
- I disagree. I think Schlafly is so twisted, so confused, so dependent upon his own inflated self-image, that there is literally no reply that Lenski could have sent that would have appeared to Schlafly to be a defeat.
- If he can't feel embarrassed, no Lenski reply would have been silenced out of fear of embarrassment.
- I don't think Schlafly is devoid of malice, but I think his malice is overshadowed by his delusions of infallibility.
- But, unlike Schlafly, I might be wrong. Coarb 00:31, 24 June 2008 (EDT)
- You know, I didn't think about it that way, but you're right. I think it was him posting the reply without any comment on the talk page. I figured he'd certainly want to redirect the criticism, but I guess he's just so deluded that he doesn't think he even needs to spin this, rather that the letter speaks for itself as a resounding victory for the home team. --Arcan ¡ollǝɥ 00:55, 24 June 2008 (EDT)
- Yes. A well done response to a man who doesn't deserve it. Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:18, 24 June 2008 (EDT)
- I read it and loved it !!! Apparently a lot of other folks are reading it too, because the CP site is overloaded. --SpinyNorman 00:25, 24 June 2008 (EDT)
- Who wants to bet that the edited out link is to here? Pinto's5150 Talk 00:36, 24 June 2008 (EDT)
- That'd be a sucker bet.--SpinyNorman 00:42, 24 June 2008 (EDT)
- We should sock up and press him to reveal what the link was. It would just be awesome if it was to us. Radioactive afikomen Please ignore all my awful pre-2014 comments. 01:06, 24 June 2008 (EDT)
- Meantime, let's see if we can get the original from Lenski or one of his acolytes grad students, so we can publish the version with what is obviously a link to us. Also, wouldn't it be AWESOME if PNAS or Sci Am decided to publish these four letters!?!?! (Sorry I'm giddy with joy right now) ħuman 01:11, 24 June 2008 (EDT)
- I have sent some e-mail to Lenski about a couple of things, I will get back to you all with it. Of course anyone else is welcome to send their own letters but lets not inundate the guy! Welcome to Web 2.0 Dr. Lenski. 71.222.242.224 01:13, 24 June 2008 (EDT)
- I suspect that Dr. Lenski has been around web 2.0 since before it was invented. Anyway, it is with heavy heart that I must announce the winner: One Ascssrschfly, BSEJD. It turns out that Dr. Lenski misspelled "acolyte" the first time he used it, rendering all of his further "discourse" irrelevant. Please, "Professor", learn to spell before you try to take on the titans of logic and truth. Godspeed! ħuman 02:00, 24 June 2008 (EDT)
I'd guess someone threatened the Ass with a libel action and that forced him to publish the letter. The publication is out of character. Proxima Centauri 16:44, 26 June 2008 (EDT)
Dear Mrs. Schlafly,
I write to inform you with a heavy heart that your son, Andrew, died during a firefight late Monday night. His death, from massive ownage, was swift and painless. He was serving his ideology well, leading an assault on rationality along with a few other brave men, and it has brought this war measurably closer to winning over the country in the name of the 1950s. When separated from his compatriots, he ignored the warnings of his soldiers and attacked alone, bravely facing down a trained scientist with only his twin guns of fallacy and bluster. And while he has fallen, know that others will take up the cause. There is nothing conservatism can do to heal this wound, but it thanks Andrew for his sacrifice, and you for yours.
In sorrow and sincerity,
Zombie Reagan.
--Tom Moorefiat justitia ruat coelum 01:17, 24 June 2008 (EDT)
- I must admit, I knew nothing of Lenski before all this, but after his brilliant reply, I have a great deal of respect for him, especially from the PS's attached to his mail. Full marks to Andy for posting it, I'm wondering what the response will be - given the thinly veiled threats about libel. No doubt professor values will be updated shortly. --PsygremlinWhut? 01:44, 24 June 2008 (EDT)
- Just amazing. I doubt the man wants to be thought of in these terms by people like us, but I think we have found a champion. I'm worried I'm going to wake up. His e-mail was perfection. And Conservapedia's response? Reach for the 'off' switch. RedDog 03:17, 24 June 2008 (EDT)
Bugler doesn't like it when people attack Andy DLerner 09:00, 24 June 2008 (EDT)
CP down?
Dammit, I want to read the Lenski pwnage. Can someone copy-paste it over here? Stile4aly 00:26, 24 June 2008 (EDT)
- Pommer's Law was violated so it broke the net. tmtoulouse annoy 00:30, 24 June 2008 (EDT)
- Done! Pinto's5150 Talk 00:31, 24 June 2008 (EDT)
- Thank you. That is truly a thing of beauty. I now have a man-crush on Lenski. Stile4aly 00:40, 24 June 2008 (EDT)
- Thanks! I can't get CP to load right now, I suspect they have been effectively slashdotted. I am sure that if not Dr. Lenski himself, someone made sure the 'nets would know about this. Time to slowly absorb, adsorb, osmote, and... enjoy. ħuman 00:48, 24 June 2008 (EDT)
- PS, note that this particular T:WIGO page begins and ends (?) on the same subject... ħuman 00:48, 24 June 2008 (EDT)
- I can't get onto CP either, but is that the reason? Where could this have been posted to have such results within several minutes? Conservapedia is still a pretty fringe site; are people really swarming to it in the middle of the night in such numbers? I hope so, in this case, though I'm skeptical. DickTurpis 00:54, 24 June 2008 (EDT)
- Uh, universities are full of grad students. They all have the internet. This stuff's been getting blogged for two weeks. Lenski overshot my prediction by about 10 hours for his response, by the way. ħuman 00:58, 24 June 2008 (EDT)
- I know what you mean, but what blogs have such readership to drive so many people to CP within minutes of breaking news there? This is the sort of potential Ken would kill for, and yet it is harnessed by Lenski within minutes? I hope you're right, and if true, there should be responses and commentary aplenty all over the internet by morning. I'll wait and see. DickTurpis 01:06, 24 June 2008 (EDT)
- Slashdot can take down some of the largest sites (I've seen major news sites go down with a slashdot effect). Supposedly Fark, Drudge Report and Digg can also do it. This shows the network bandwidth for some site going to 900 kbps from very small numbers almost instantly. That's a completely saturated T1. Slashdot isn't that far out of the realm of doing it - the top news item when I looked to see if /. did it is titled "A Hippocratic Oath For Scientists" - something that might link to the mess. --Shagie 01:18, 24 June 2008 (EDT)
- The funny thing is that although we tracked this avidly, we don't even have an article documenting it. Please to, posthaste? I have to take a bath, but the first thing it needs to do is duplicate the CP page with the four emails. Then, screenshots of the assfly's dreck - this might not survive in the CP archives. Maybe don't upload all the screens, just report that they have been captured to avoid duplicate work. (NOTE: save all screenshots as pngs, NOT jpgs) Then we need to try to assemble the best (or all) of the blogs this has been discussed on as links. Suggestion: make it a mainspace article, since it is about an actual human being, not CP or Andy. Once more into the breach, gentlefolk at a website! ħuman 00:58, 24 June 2008 (EDT)
You know, I'm starting to think maybe CP took itself down. The site went down almost immediately after the response was posted, and it's not Slashdotted, Dugg, Farked, or on Panda's Thumb, Pharyngula, or talk.origins. I don't see any source for the type of volume necessary to crash CP. I'm betting Andy buggered the site shortly after posting. He'll leave it down for a little while and then bring it back up hoping that the blogosphere doesn't catch the pwning. Stile4aly 01:14, 24 June 2008 (EDT)
- I have been running multiple searches across the blogosphere and the internet and am finding no mention of it. Something else is going on. 71.222.242.224 01:16, 24 June 2008 (EDT)
- I agree that, particularly absent mention on any of the above forums, there is sufficient discussion of this to overrun CP, but still I can't see Andy intentionally crashing his own blog to hide the evidence, particularly when he posted said evidence himself. Makes no sense. I'm thinking it may well be a strange coincidence; CP has been having such problems of late. DickTurpis 01:19, 24 June 2008 (EDT)
- Best Reply Ever. Lenski must have been reading CP a lot lately. No matter how excellent all this is, I don't think there are enough CP-aware bloggers to actually make the site go down completely. I'm afraid Schlafly actually has shut the site down - he might have put up the letter without even reading it, and now has to shut down the entire system to burn the evidence properly. Lenski will be the new FBI. But, let's hope it's just a strange coincidence. Etc 01:45, 24 June 2008 (EDT)
I definitely think Andy is closed for business right now. He had to at least put it up, then crash the site, reopen the site but claim that he "lost Lenski's reply", accuse Lenski of Professor Values, Evolutionist Style, Liberal Deceit et al., and then this will never ever be spoken of again under threat of FBI Values. I was just glad I was trolling CP the moment he posted it. Ace McWicked 03:12, 24 June 2008 (EDT)
- Didn't Andy just give Jinx siteadmin rights to turn off the database? If he/she was a sockpuppet or a parodist, they might have taken the opportunity, before Andy realised how stupid it was to give someone who had not proven themselves as a sysop yet such a power. 3.14159 03:32, 24 June 2008 (EDT)
Hmmmm interesting, perhaps Jinx just closed it for the night but fucked it up somehow? Ace McWicked 03:37, 24 June 2008 (EDT)
- I think PZ Myers linked there, too, and he's got a pretty wide readership. Wazza (Not Wazzock, Wazza) Approach the Presence 03:58, 24 June 2008 (EDT)
- I am having trouble getting Myers' page to open too; maybe that is where the traffic is coming from. Although, if I remember the story of the first "internet crash", they managed it with less data than the throughput of the line. 04:08, 24 June 2008 (EDT)
- Sorry, it was me wot zapped your edit, pressed the Save button once too many times - the tubes are treacled up this morning. Genghis Our ignorance is God; what we know is science. 04:18, 24 June 2008 (EDT)
- I know, we seem to be falling over ourselves to laugh at Andy, I have been edit conflicted about 5 times. 04:20, 24 June 2008 (EDT)
- I just had no trouble at all loading ħuman 04:35, 24 June 2008 (EDT)
Lenski Letter
(moved to Lenski affair)
Conservapedia's down
Conservapedia's down. My computer can't contact the website. The time is 8 o'clock British Summer Time. Proxima Centauri 03:01, 24 June 2008 (EDT)
- It wasn't very nice of Andy to take the fact he got owned out on the hamster; it's not his fault. RIP Reagan Hamster 2006-2008. 3.14159 03:12, 24 June 2008 (EDT)
- Fuck; fuck; fuck; You go to sleep for a couple of hours and look what happens! Don't suppose CP's host could have noticed the potential libel/slander accusations & pulled the site? Love Mr. Lenski!!! ContribsTalk 03:14, 24 June 2008 (EDT)
- Totally agree Susan. I couldn't believe Lenski's e-mail. It said all the things I hoped it would say and is truly a beautiful, beautiful thing. It's maybe wishful thinking but I'm wondering if Conservapedia is down on the basis of arguments between the sysops on how to respond. He had hardly any sysop support in this venture and I suspect he will have privately received a tidal wave of "told you so"s from people like PJR. In a fit of temper (which I would imagine would be a boiling point anyway having read Lenski's reply) he may have simply hit the "off" switch in a "well if that wasn't a goal then I'm going home and I'm taking my football with me" kind of a way. RedDog 03:21, 24 June 2008 (EDT)
You know things have been getting frayed at the edges over there recently, we have all noticed it, suddenly Assfly oversteps the wee hovel he has founded and BLAM! bye conservapedia. Perhaps he pulled the plug, grabbed the largely defensive weapon of assault rifle and went to go find Lenski and those liberal scallywags at New Scientist? Ace McWicked 03:24, 24 June 2008 (EDT)
- It's no good. I'm going to have to read it again. It's just too beautiful. RedDog 03:25, 24 June 2008 (EDT)
- This is no coincidence that CP is down. I mean come on. That has got to sting! And on his own website too. Very public humiliation and I would suspect a torrent of abuse / vandalism too. RedDog 03:38, 24 June 2008 (EDT)
- I don't mean to bang on about it but I have now read Lenski's reply 3 times. It just gets better. That is a severe kick to Andy's soft dangly testicles whichever way you look at it. RedDog 03:50, 24 June 2008 (EDT)
I feel that Hanlon's razor must be applied here. Never attribute to malice that which can be adequately explained by stupidity. Conservapedia being down can more than adequately be ascribed to Andy's incompetence than any malice on his part. I will certainly admit that the timing has a large amount of coincidence to it. Still, I don't believe you can go seriously wrong overestimating Andy's incompetence at wikipedia administration. For that matter, it may be Jinx'ed with a new sysop pushing the wrong button or something - again, stupidity over malice. --Shagie 04:00, 24 June 2008 (EDT)
- (3rd EC) Susan, I echo your sentiments - missing all the excitement while asleep. Bugger! Knowing Andy's self-confessed reading skills, it's highly probable he never read the letter before posting it and then just pulled the plug when he realised how badly he'd been pwnned. Normally when CP goes down we get the "Conservapedia is having problems" message but currently there is just nothing. And I wouldn't blame Jinx for anything, all he can do is close the door to editing.
GenghisOur ignorance is God; what we know is science. 04:08, 24 June 2008 (EDT)
Hang on, think about it. This isn't ownage at all. Consider who Aschlafly's target audience is - fundy homeschoolers. You think that they'll be persuaded by Lenski's reply? They won't understand it. And if they don't understand it, it's obviously evolutionist spin and deceit.
And why would Aschlafly post his reply, leave the site up long enough for the reply to be read by all and sundry, and then close it down? He certainly wouldn't do that if he thought he'd been owned.
It's possible that somebody else - somebody influential in Aschlafly's circle - got really pissed off and persuaded him to close it due to the embarrassment it was causing. Or because of the potential slander. Perhaps even his mother herself. I wouldn't bet against it. Ajkgordon 04:18, 24 June 2008 (EDT)
- As a lawyer, has Andy ever thought out the slander side of his "encyclopedia"? Here in Australia I know that if you read something on the internet here that you consider slander, even if the site is hosted in the US, as you read it here you can sue in our courts. As the host/owner/producer of the website, even if Andy himself didn't write it, if he was made aware of the falsehood and didn't remove it he is liable as well. Outside the US he wouldn't have the First Amendment to hide behind, only if he can prove what he is saying is true. If he can't prove it and a judge says that it is detrimental to the person in some way he is paying for it.
04:34, 24 June 2008 (EDT)
STOP PRESS:
- Shares in Conservapedia e-mail reach all time high.
- Our reporter in New Jersey reports that several internet lines have melted because of unusually high traffic levels between officers of the shadowy CP hierarchy.
- More as this story unfolds.
Speculation as to reasons why CP is unavailable. I don't pretend to know anything about this but I enjoy speculating so - well, yeah. Many people (including me) were already talking about how fractious CP had become, with sysops notably failing to take Andy's side and actually posting against him (PJR). He was warned by sysops not to go after Lenski. Last night he got the well deserved and very public humiliation of a lifetime. A short time later CP is down. I simply cannot believe this is coincidence. When have any of us ever seen Andy get it right where he deserves it as good as this? RedDog 04:53, 24 June 2008 (EDT)
The actual reasons could be....
- Simple cock up (I doubt it, too much of a coincidence)
- Too much traffic caused by people coming to point and laugh (I doubt this - it happened too quickly)
- He's taken it down in a blaze of anger following arguments with other sysops (My favorite theory)
- He has realised the futility of the whole project and given up (unlikely knowing his character)
- One of the other sysops has taken it down through vandalism, malice, etc
- Anything else?
Hmmm, too quick for a legal threat. I'm thinking Andy panicked, didn't know what to do or where to turn for advice. His brain exploded. Perhaps, in a sudden rush which overwhelmed his brain, it all came to him. In one sudden rush. Ace McWicked 05:01, 24 June 2008 (EDT)
- I think number one but I like the idea of him going insane after realizing just how far he's gone.
Ace McWicked 05:10, 24 June 2008 (EDT)
- He's been up all night drafting a reply on the talk page, will bring the site back up, then lock the page. Discussion over, he wins, carry on. Ajkgordon 05:11, 24 June 2008 (EDT)
- Oh, and while he's at it, I wouldn't be surprised if he's banhammering all the dissenters. Ajkgordon 05:13, 24 June 2008 (EDT)
- Well he'd better start with PJR then. And considering he only got 3 or 4 signatures on his petition it will be a pretty small wiki following the banination. I think we can be sure though that whatever it is it will provide plenty of amusment. I feel like PayPal-ing Lenski the price of a beer. I think I owe him at least that. Wouldn't it be great if he and his students could have drink on us? RedDog 05:17, 24 June 2008 (EDT)
- He must be going pretty haywire whatever the issue. There'll be a lot of questions to shout down to keep this under wraps and it must be driving him nuts...er...nuttier. This whole debacle has taken over his life god-damnit. He has lost all dignity, respect and become a laughing stock. He once said to me "Fortunately, not everyone is afraid of being "laughed at" (shameless quote mine but consider the source) which is fairly ironic given the situation. Ace McWicked 05:19, 24 June 2008 (EDT)
Assfly can’t easily block Lenski at New Scientist and say, “Godspeed”. Proxima Centauri 05:21, 24 June 2008 (EDT)
- Nice for a CP sysop to pop on with a snarky remark, some blame laid upon us. Ace McWicked 05:30, 24 June 2008 (EDT)
- No snark, I mean I want someone like conservative to pop over and throw some hell-fire down upon us.121.73.11.147 05:42, 24 June 2008 (EDT)
Timing Does anyone know the relative timing of the letter being posted as against CP going down? RedDog 05:45, 24 June 2008 (EDT)
- Maybe around 2 hours, it was on the recent changes for a while before the site went down (I was waiting for someone to bring it up while wishing my account would be unbanned so I could do it). Personally I think Andy's encountered a problem with the software and, as a result of getting owned by Lenski, couldn't be bothered fixing it tonight and decided to wait until tomorrow. The other options about the site being permanently down are just too good to be true. 124.183.181.158 05:52, 24 June 2008 (EDT)
My computer says Conservapedia is online but a firewall is preventing access. Proxima Centauri 06:01, 24 June 2008 (EDT)
Will references to unicorns, citrons (and citroens) become the new FBI, I wonder? Fretfulporpentine 06:12, 24 June 2008 (EDT)
It's back. Proxima Centauri 06:24, 24 June 2008 (EDT)
CP Back running?[edit]
Is CP running again or am I looking at old versions. The last comment on the Lenski talkpage was three days ago.
06:18, 24 June 2008 (EDT)
- I think nobody bothered to post after Andy started censoring any reply that didn't agree with him, so three days of silence sounds possible. --Sid 06:21, 24 June 2008 (EDT)
- There's definitely a 3-day gap in the posts but was anything removed?
GenghisOur ignorance is God; what we know is science. 06:32, 24 June 2008 (EDT)
IT'S BACK UP!!!!!![edit]
...damm 124.183.181.158 06:15, 24 June 2008 (EDT)
- And the letter is still there, it seems. By the way, as someone who managed to miss this thing until now (stupid timezones and the need to sleep), I have to agree with all the people above: Best mail EVER. Lenski is my new hero for pwning Andy to with such grace and on so many levels. --Sid 06:19, 24 June 2008 (EDT)
- At least it's up. I'm pretty sure that a lot of edits were done since then, but mysteriously gone. Also, the talk page is protected. I'm so surprised. Etc 06:22, 24 June 2008 (EDT)
The cats away...[edit]
Hey, there don't seem to be any Sysops about.
06:38, 24 June 2008 (EDT)
- No, not a good idea, IMHO. I would like to see responses to Lenski's second reply without any provocation from parodists or socks. Please, it's interesting. Ajkgordon 06:41, 24 June 2008 (EDT)
- Conservative has shown up. Does the man sleep? Someone got this article in though.
06:43, 24 June 2008 (EDT)
- Definitely!!
ContribsTalk 07:15, 24 June 2008 (EDT)
I don't know about Bugler but if not a parodist he definitely has a nasty streak. In relation to the other matter, I am seriously considering that perhaps there was some kind of fuck-up that caused this most interesting incident. It matches CP's usual downtimes pretty closely.... Ace McWicked 07:19, 24 June 2008 (EDT)
- It's often not worth posting stuff here, because I know that someone else will make the same point. But Bugler's post is just so OTT that he must be a parodist. Anyone here care to email me and fess up? I won't let on.
GenghisOur ignorance is God; what we know is science. 07:25, 24 June 2008 (EDT)
- How long do you give me? RedDog 08:15, 24 June 2008 (EDT)
- That was like 20 minutes, right? Only a 30 minute block though, very strange. Etc 08:47, 24 June 2008 (EDT)
- Oh well. Better than nothing I suppose. I must be slipping. RedDog 08:50, 24 June 2008 (EDT)
- The sysops have become a bit soft lately. Also, I suspect Bugler of being a parodist, so chances are he's blocking you just for show anyway. I'm more worried about this guy though - talking about pooping during pregnancy while the whole site is crumbling, is really asking for it. Etc 09:05, 24 June 2008 (EDT)
Lenski and "the genius grant"[edit]
I don't know if this has been raised before (a quick check back and I haven't spotted it) but a few days ago Andy was calling the MacArthur Grant a liberal prize. According to here (some random blog in CP's finest style) Lenski won this award. Everything he has done for days has been an attempt to discredit Lenski.
07:46, 24 June 2008 (EDT)
Missing page[edit]
Catholic views on creation is missing. Wiped clean away. Ace McWicked 08:17, 24 June 2008 (EDT)
- Ah the old air-brush at work again eh? I noticed PJR's final comment on the matter was quite blunt. Quoting CP commandments at Andy. Have to admire it. EDIT I can still access it here. RedDog 08:22, 24 June 2008 (EDT)
- And we finally have a full and frank explanation of the down time. RedDog 08:20, 24 June 2008 (EDT)
- What do you mean missing? Its still here and here. NightFlareStill doesn't have a (nonstub) RWW article. 08:23, 24 June 2008 (EDT)
Yup: still there
ContribsTalk
Hmmmmm, well, in my defence for not seeing it before, it was very late in my part of the world. Sorry 'bout that 203.96.84.33 17:40, 24 June 2008 (EDT)
- It doesn't matter what's said: he's been acknowledged and replied to. He's so thick, he thinks everyone else is and they'll take his word that he's won!
ContribsTalk 08:34, 24 June 2008 (EDT)
- What else could he do? He painted himself into the stupid corner with brown paint. His 'get out' as posted above is weak in the extreme as it fails to address even a single issue. What did we expect? I think someone posted above that it would be a new entry on 'Prof Values'. I still think there is plenty of amusement left in this one yet though. Will Andy send a third e-mail?? RedDog 08:49, 24 June 2008 (EDT)
- Oh RedDog, you got in quick I didn't even see you.
- Those are my tax dollars too. And, in this case, I like how they are being spent. --Edgerunner76Quis custodiet ipsos custodes? 09:00, 24 June 2008 (EDT)
- I'm actually speechless. Andy is a maggot, no more no less (I apologise if that's offensive to maggots). Surely he can't be that thick-skinned, or just that thick?? Oh and I don't think Bungler is a parodist, more like a nasty little man with a chip on both shoulders. He's one of those oh-so-holy types that gives Christianity a bad name. PsygremlinWhut? 09:34, 24 June 2008 (EDT)
He just inverted the inverse ad hominem attack - instead of addressing anything in the response, he just shouts "Look, he's wrong, and rude too! Look here". Since Andy has already come to the conclusion that Lenski is wrong, Andy is now trying to use Lenski's responses to spread general dislike against him. As I've said before, he's a genius. Etc 10:11, 24 June 2008 (EDT)
- Actually I've seen this tactic by OECs before. If you defeat their logic but are then nasty to them it gives them an excuse to ignore the logic and complain about your being nasty to them instead.--Bobbing up 10:21, 24 June 2008 (EDT)
The Schlafly family can be rude whenever they like. Other people can't be rude to them. Typical of rich people. Proxima Centauri 12:08, 24 June 2008 (EDT)
- That's a very presumptuous thing to say, Proxima. Especially considering that you lumped them all together—the "Schlafly family", as if it was this monolithic entity.
Radioactive afikomen Please ignore all my awful pre-2014 comments. 12:25, 24 June 2008 (EDT)
- I agree with Radioman. Only the Grendel and his mother fit that description. Oh, and maybe Roger. ↳ ↑ <blink>⇨</blink> ↕ ▽ ← 12:35, 24 June 2008 (EDT)
- Replying to Bob's comment above. Yeah, that's a pretty good Conservative Trick in general. When they're completely destroyed by your argument, they avoid confronting that and complain about Ad Hom attacks, or that you took up too much time or you applied an incorrect conjugation of a verb or whatever they can grasp at.... for a great example of this, see here one of my favorite videos ever SirChuckBCall the FBI 16:17, 24 June 2008 (EDT)
Schlafly Slander[edit]
Someone commented that an Australian slandered by Schlafly might be able to sue him in that jurisdiction. While they could, they wouldn't be able to enforce the resulting judgment; they'd have to travel to the US and ask a US court to do it, and the US doesn't enforce judgments that contravene the First Amendment. Bad news bears. You'd have to sue him in US court or enforce the judgment against assets in commonwealth territories-caius (spy) 08:34, 24 June 2008 (EDT)
Honestly, if Schlafly continues he might be skirting the bounds of American law, at least where Dawkins is concerned...-caius (spy) 08:46, 24 June 2008 (EDT)
- Would anyone really want to give him the exposure that a court case would bring though?Matt 09:06, 24 June 2008 (EDT)
- Probably not. But to get back to the earlier question, that leaves a bit of a loophole in the law, doesn't it? The internet is obviously international - but people live in different countries. So if I slander somebody on the net who isn't extremely wealthy .... I can get away with it as long as I don't live in the same country? Is that correct?--Bobbing up 10:03, 24 June 2008 (EDT)
- It depends on the law in your country I think. Proxima Centauri 10:56, 24 June 2008 (EDT)
- It also depends on the law in the country where it is hosted/posted. Americans have sued in the UK for libel because British laws are a lot stricter. Yes, generally when it comes to libel and slander it normally only applies to the rich and famous. Also, I would have expected Caius to have pointed out the legal distinction that Schlafly is actually libelling not slandering.
GenghisOur ignorance is God; what we know is science. 11:21, 24 June 2008 (EDT)
- There's an ongoing series of suits being made against American book authors/publishers on grounds that won't stand up in the US by suing for the one or two copies that have been sold in the UK. Private Eye has wailed loud & long about it. The sheer cost of a Libel case in the UK makes it impossible to defend in many cases, thus making it a rich man's law.
ContribsTalk 11:29, 24 June 2008 (EDT)
- That's what I had in mind Susan. I also checked on WP:Defamation and there are other variations on the laws. In Singapore the showing of a defamatory statement, that I presume was made and hosted outside the country, could leave a cybercafe owner liable.
GenghisOur ignorance is God; what we know is science. 11:34, 24 June 2008 (EDT)
This is a bit belated but here is the case where the Australian High Court decided the internet was a directional medium.
18:58, 24 June 2008 (EDT)
Lenski a record breaker?[edit]
What is the all time greatest score ever achieved on WIGO? It will be a travesty if this, without doubt the greatest moment in the history of CP, does not achieve the milestone.
- PS, Bugler is a fraud. Got to be. Matt 09:03, 24 June 2008 (EDT)
- Either that, or he's an absolute asshole. DLerner 09:17, 24 June 2008 (EDT)
- Why not both?--64.193.14.139 11:38, 24 June 2008 (EDT)
- I've been wondering if Bugler is TK.--WJThomas 09:30, 24 June 2008 (EDT)
- I am sure Bugler is British. RedDog 09:35, 24 June 2008 (EDT)
- He's obviously doing the same correspondence course as Karajerk.
GenghisOur ignorance is God; what we know is science. 10:01, 24 June 2008 (EDT)
- Look at my discussion with him. I'm just about certain he is a parodist, seriously. His language just doesn't feel right. I think I'll back off a little. RedDog 10:33, 24 June 2008 (EDT)
- "The place is the cesspit of the internet"... somehow it seems like a compliment. Although the entire discussion there seems to have stirred up a lot of crap among the users who aren't in on the loop/joke of it. 144.32.180.39 11:34, 24 June 2008 (EDT)
- I can't say that I have kept any kind of track, but I don't believe that anything has gotten past 40-ish. --Edgerunner76Quis custodiet ipsos custodes? 09:23, 24 June 2008 (EDT)
- The one about women wearing pants being evil got more than 40 thumbs up.
- That reminds me, I haven't WIGOed anything that gets to be in Best Of this month yet. NightFlareStill doesn't have a (nonstub) RWW article. 09:25, 24 June 2008 (EDT)
- Is there (should there be?) a top 10 voted WIGO items or something? As of now, there's only the 10 thumbs up barrage for entering the Best of, or am I wrong? (Editor at) CP:no intelligence allowed 09:33, 24 June 2008 (EDT)
- Correct. NightFlareStill doesn't have a (nonstub) RWW article. 09:37, 24 June 2008 (EDT)
- It would need far more skilz than I have but a Best Of sorted by ranking would be nice. Silver Sloth 09:42, 24 June 2008 (EDT)
- I usually don't vote, but I voted that one up. I think I'm missing something - was the link that was removed to RW? - Lardashe
- Probably, yes. We have to push Andy a bit before he will reveal that (beware the banhammer). Or, someone could e-mail Lenski himself. Etc 10:02, 24 June 2008 (EDT)
- I have been in contact with Professor Lenski (see Talk:Lenski affair). I can confirm that the removed link was a link to RW. --Edgerunner76Quis custodiet ipsos custodes? 10:06, 24 June 2008 (EDT)
Even if the original Lenski WIGO item doesn't skyrocket, if you add up the votes for all the different entries about the first letters, I think the entire "Lenski affair" has contributed at least 100 votes already, and there will be upcoming related craziness as well. Etc 12:02, 24 June 2008 (EDT)
- It should be pretty clear if something is artificially inflated. And I would suggest a section on the front page of: Significant events at CP, linking to articles about the Great Purge, the Dawkins Attacks, the Lenski Affair, etc. -Smyth 14:16, 24 June 2008 (EDT)
I love how Jinxmchue is trying his hand at arguing against Lenski, firstly, by accusing him of 'writing like a 12-year-old girl' by...erm...marking multiple post-scripts in a perfectly correct way, then by wondering why the E. coli in our guts doesn't evolve to kill the host organism it's living in. Jinx, in case you haven't figured it out, the 'natural selection' part of evolution is about SURVIVAL of the fittest. Things that commit suicide by proxy don't really fit that. Zmidponk 15:47, 24 June 2008 (EDT)
I don't generally like blatant vandalism, but I do have to laugh at this. Zmidponk 16:12, 24 June 2008 (EDT)
Conservapedian Mathematics[edit]
I am not really sure where to post this, so apologies if it is misplaced. I ran across this page just recently. After I stopped rolling around on the floor it occurred to me that the last section serves as a brilliant microcosm of CP. Antifly 10:52, 24 June 2008 (EDT)
- We call this parody and avoid attracting too much attention to it.
GenghisOur ignorance is God; what we know is science. 11:09, 24 June 2008 (EDT)
PJR
hates us won't join us[edit]
[1] :( Guess we won't see him over here then, eh? NorsemanWassail! 12:19, 24 June 2008 (EDT)
- Tsk, lastwordism AND threatening ideological blocks? If he wasn't a good Conservative, Mr. Banhammer might have to have a word with him. I also like his reply to Lenski. Your understanding of my niche interpretation of my bullshit superstition is inadequate, THEREFORE YOU ARE WRONG! --81.187.75.69 12:27, 24 June 2008 (EDT)
- Actually he doesn't say he hates us - just not to talk about us. Which is their golden rule anyway. And which then forces them to censor a reply they said they would post.--Bobbing up 12:34, 24 June 2008 (EDT)
- Yeah, come on guys, be fair. PJR knew that CP was being trolled on the issue, knows that Aschlafly doesn't allow mention of RW, so stepped in. He's a sysop, FFS. Ajkgordon 13:15, 24 June 2008 (EDT)
Assfly ignored warnings[edit]
Assfly’s senior sysops warned him. See here. What will he do next? Perhaps he’ll demote his senior sysops and block them. It must be hard to be a senior sysop for someone who is less intelligent than you are but has money. Proxima Centauri 12:26, 24 June 2008 (EDT)
WaltherPPK has been talking out of turn before. Just look. He actually dares to suggest that because he's Irish he knows more about Belfast than Assfly. And he's threatened with a ban for that. In British English, "talking out of turn" can mean saying things those with power don't want you to say. Proxima Centauri 12:34, 24 June 2008 (EDT)
- Short of pulling a TK on him, Andy would never demote one of his senior sysops because it would mean tacitly admitting he was wrong to promote them in the first place.
Radioactive afikomen Please ignore all my awful pre-2014 comments. 12:42, 24 June 2008 (EDT)
- Has anyone noticed that the Assfly is unusually quiet today? Lenski humiliated the jerk on his own blog, and in front of his humskullers (Who on Earth would let Andy teach their kid anything?). Another observation: I think someone mentioned that Bugler might be an Assfly sock. A more likely candidate is TonyT. His punctuation, phrasing, and syntax are too similar to Andy's to be a coincidence.
- --Franklin 15:54, 24 June 2008 (EDT)
- I have had the same suspicions, but I believe both Bugler and TonyT are parodists, both just imitate Andy's style. Quite well too, I might add. Etc 16:36, 24 June 2008 (EDT)
- No, I don't believe that Aschlafly was humiliated in front of his homeschoolers. I doubt very much that his quietness has anything to do with Lenski's second reply. He probably is running errands. Ajkgordon 17:06, 24 June 2008 (EDT)
The Sixties[edit]
I just noticed that the assassinations of JFK and MLK are listed under Successes in the Sixties. While CP-ians generally have the intelligence of a rock, I can't imagine this is what they really mean. Jrssr5 12:43, 24 June 2008 (EDT)
- I wouldn't put it past some of the nuttier ones (RobS, anyone?).
Radioactive afikomen Please ignore all my awful pre-2014 comments. 12:46, 24 June 2008 (EDT)
- It also looks like they're citing the Vietnam War as a success. Of course, this is after the same section that cites a lot of great 1960s music by bands such as The Beatles (who used drugs), The Rolling Stones (who used drugs), Led Zeppelin (whose drummer choked to death on his own vomit), Jimi Hendrix (who also choked to death on his own vomit), and others. I tried mentioning this, but I got bitchslapped. --Elkman 12:56, 24 June 2008 (EDT)
- Nah. Looks more like a formatting issue. Nothing to see here. Move along. Ajkgordon 13:10, 24 June 2008 (EDT)
Lenski Letter redact[edit]
Dr. Lenski just sent me the original e-mail with links and formatting intact. He does link to rationalwiki. 75.161.36.101 12:58, 24 June 2008 (EDT)
- This is me by the way. tmtoulouse annoy 13:13, 24 June 2008 (EDT)
Was it to the RW main page, or something specific? If the latter, can you post the link? Thanks. --SpinyNorman 20:47, 24 June 2008 (EDT)
A new record[edit]
The Lenski pwnage WIGO is up to +78, surely that's a totally new level of plusgoodness for a WIGO vote? ħuman
15:42, 24 June 2008 (EDT)
- In honour of this achievement, I reckon we should make the Lenski affair article the article of the year, at least for a few weeks. --81.187.75.69 16:10, 24 June 2008 (EDT)
- Let's whip it into shape and make it our "non-random featured article" for a bit like we did with the ELG? ħuman
17:01, 24 June 2008 (EDT)
- Are people getting bored of it or is there "conservative deceit" at work? I saw it at +83 and steadily going up and then it dropped suddenly.
- The vote total displayed is the total up-total down, so if people vote "down" it can go down. I will generate a total up vote for you guys if you want once things have settled. tmtoulouse annoy 17:43, 24 June 2008 (EDT)
- Are there really so many users on RW today (83 plus all down votes)? (Editor at) CP:no intelligence allowed 18:12, 24 June 2008 (EDT)
- This article has been linked fairly widely, and anyone (IPs included) can vote. So it's not just us. ħuman
18:22, 24 June 2008 (EDT)
- The number of "unique" visitors to our site a month is upwards of 20,000+ (that includes proxies and everything) so yea no problem with 83 votes. tmtoulouse annoy 19:20, 24 June 2008 (EDT)
R they down again?[edit]
or is it just me?
ContribsTalk 19:09, 24 June 2008 (EDT)
- No, just slow-going. Takes like 15 seconds to load a page. NorsemanWassail! 19:14, 24 June 2008 (EDT)
- I'm not getting 'em at all "Network error" - not what I usually get on a slow connection.
ContribsTalk 19:20, 24 June 2008 (EDT)
- I get blank pages often, but I OCD my trusty F5 until it loads. <3 <3 <3 NorsemanWassail! 19:27, 24 June 2008 (EDT)
- I was able to access it for a while but it is back down again. Well you can't expect the hamster to run and laugh at Andy at the same time.
19:33, 24 June 2008 (EDT)
RationalWiki advertises on google ads, how much does it cost? --64.12.116.69 19:34, 24 June 2008 (EDT)
- Several individuals have sponsored RW in google ads of their own volition. It is nothing the site has done, and so am unsure how much was put towards it. tmtoulouse annoy 19:37, 24 June 2008 (EDT)
Something Awful picked up the story too[2] If that wasn't enough to hit the bandwidth for a bit... well, they're going to be swimming in trolls for awhile. Many of those guys make us look like amateurs. And they got Digg'ed...[3] The blog world is picking this up a fair bit wider than the original exchange. I hope Andy has paid up his bandwidth bill. --Shagie 19:50, 24 June 2008 (EDT)
- It's LF, one of the smaller boards. They've had CP threads before, so this sort of thing isn't news to them. The Digg might be a hammer on their bandwidth, though.--Tom Moorefiat justitia ruat coelum 20:08, 24 June 2008 (EDT)
- Still down. No surprise, if SA users are swarming to Conservapedia. --JayJay4ever??? 21:19, 24 June 2008 (EDT)
- I can't reload Lenski talkies, but their logo showed up on my search page. Anyone notice that RW is also running slow? If only we can get them slashdotted... ħuman
21:28, 24 June 2008 (EDT)
- To me, CP's server has crashed. Wait for the reactions. Lol.--JayJay4ever??? 22:25, 24 June 2008 (EDT)
Bugler and TK[edit]
Ok this is probably more farfetched than a blind man drawing a long bow and hoping to hit a pink unicorn, but I had an insight last night (admittedly while asleep) and had to share it with someone. Rather than Bugler being a parodist, could he be TK returned? Common patterns of behaviour include:
- sucking up to Andy by supporting his view no matter what
- being overtly authoritative, assuming people respect your position and are going to obey your instructions
- blocking often for the most ridiculous and ideological reasons
- making minor changes to articles (or creating new stubs which are easy to make) and spending most of the time governing the talk pages and intimidating other users
I'm not sure if Bugler begged for his rights like TK did, but he received them in a short enough period of time (compared to other users) so he could be prone to using emails. That said, given the behaviour of Ed Poor recently these traits could just be common to those who think they're bigger than what they are. 60.229.60.237 19:37, 24 June 2008 (EDT)
- More than one person has had the thought. tmtoulouse annoy 19:41, 24 June 2008 (EDT)
- I certainly noticed similarities. In fact, I was contemplating changing Martin Boorman from TK to Bugler on my userpage. But he can't be TK, for pretty obvious reasons. DickTurpis 19:42, 24 June 2008 (EDT)
- Yup! it crossed a lot of minds, but he's almost definitely British - I've got suspicions which I'll keep to myself.
ContribsTalk 19:44, 24 June 2008 (EDT)
- I think he is certainly British given his knowledge of British footballers (very amusing by the way), his time keeping and various other things. I also think he is a parodist and quite possibly someone not too far away from this very line of text. RedDog 19:46, 24 June 2008 (EDT)
- He can't possibly be TK! What you all forget is that TK was a terrible sockpuppeteer. He stands out a mile, he can't keep his temper, and he can't resist peppering his conversation with "LOL!". Amateur. ↳ ↑ <blink>⇨</blink> ↕ ▽ ← 19:47, 24 June 2008 (EDT)
- Yes. While Bugler is obviously a jerk (provided he's not parody), he lacks the virtual sneer that always came across from TK. Doesn't feel like the same "vibe" to me. --AKjeldsenPotential fundamentalist! 19:49, 24 June 2008 (EDT)
- Not to mention using "*" instead of ":" to indent.
ContribsTalk 19:51, 24 June 2008 (EDT)
- Yep. Classic. ↳ ↑ <blink>⇨</blink> ↕ ▽ ← 19:53, 24 June 2008 (EDT)
"But he can't be TK, for pretty obvious reasons." what are those, Dick? I get a heavy TK vibe off bungler. Why would he do it? Talked to Andy and worked it out. Andy gets his "enforcer" back, without having to piss off the sysops that TK ran roughshod over. "TK was a terrible sockpuppeteer", yes, but he isn't puppeteering here - he's just being himself! "He stands out a mile" yup. Anyway, whether burglar is TK or not doesn't matter. He can make his own name for himself well enough, apparently. An asterisk, of course, would be a total giveaway. His bungled deletion and repair at talk:Lenski was classic "TK fumbling with the buttons", but it could just be "someone like TK" - after all, CP will tend to attract more of the same kind of people it drew in the past, right? ħuman
22:26, 24 June 2008 (EDT)
- He seems to be inclined to delete things he doesn't like from talk pages. Wasn't that a TK tactic?--Bobbing up 04:05, 25 June 2008 (EDT)
- Yes. And how would one learn or fake that? To the Manor born... "A rose by any other name, still has way too many f*cking thorns" ħuman
04:11, 25 June 2008 (EDT)
- In my opinion he is British and a parodist. The similarities you mention above are because he is parodying TK's style. I would be staggered if this turned out to be TK himself. RedDog 07:04, 25 June 2008 (EDT)
- Also agree on the British front. Not just the footballers, but his general way of writing. Difficult to imitate, especially for someone as transparent as TK. It seems like some Brit is doing a TK impression though, when it comes to his behaviour. Bondurant 07:26, 25 June 2008 (EDT)
- I see exactly what he's trying to do; Andy doesn't even realize he's made a complete fool of himself, and quite possibly he will draw even further attention to this in the near future - and Bugler is just cheering him on! No matter if Bugler is real or not, the end result will be hilarious. Etc 09:50, 25 June 2008 (EDT)
- What Susan said. Ajkgordon 10:08, 25 June 2008 (EDT)
I've been dying to...[edit]
...use this at some point, and TLA gives me the chance. Teh Assfly: "All your data are belong to us." Bungler: "You have no chance to survive, make your time..." The Acolytes: "For great justice."
(ok, it's late and I can't sleep) --PsygremlinWhut? 21:31, 24 June 2008 (EDT)
CP is back[edit]
Go play. 203.96.84.33 22:44, 24 June 2008 (EDT)
- On another note, while not worthy enough for Wigo itself... I found this kinda funny. A user vandalizes Bugler's talk page. Nothing new there, but when he blocks the user, he uses all his big words to really put him in his place.... That's funny shit man. SirChuckBCall the FBI 23:04, 24 June 2008 (EDT)
This version's even better; they blocked a lot of vandals with cool usernames. Proxima Centauri 04:27, 25 June 2008 (EDT)
- Funny, I agree, but... "oh gee, someone vandalized the user page". Not really important, eh? ħuman
04:45, 25 June 2008 (EDT)
Quick! Hide it![edit]
Learn Together shows his mettle. --Horace 00:28, 25 June 2008 (EDT)
- Following the 1953 "liquidation" of Lavrentiy Beria, the notorious head of the NKVD, the Great Soviet Encyclopedia sent subscribers replacement pages, instructing them to cut out the Beria article and paste in expanded entries on F. W. Bergholz (an eighteenth-century courtier), the Bering Sea, and Bishop Berkeley. I'm just sayin'.76.71.235.36 00:38, 25 June 2008 (EDT)
Latest by Assfly on Lenski[edit]
I love the latest comment by Assfly. By my reading of it, the relevant part of Lenski's second reply basically said 'most of the data is already in our paper, and the missing parts (which was cut due to page-length considerations - nothing unusual in scientific papers, by the way) will be posted on our website. Also, if you're after the actual biological specimens themselves, so you can test them for yourself, well, here's how to satisfy me you can actually store, use and dispose of them safely, so go do that, so I'm not simply randomly sending out biological specimens to a random person off the internet'. According to Assfly, this is still 'withholding data'. Zmidponk 07:58, 25 June 2008 (EDT)
Lenski debacle on Fark.com[edit]
Here's the link, the consensus over there is the same as ours - Conservapedians are mostly a bunch of tools. DLerner 09:36, 25 June 2008 (EDT)
This is (Doctor) Ben Goldacre joining in.--Bobbing up 11:02, 25 June 2008 (EDT)
DeanS whitewashes[edit]
[4] Check out his deletion summary of his userpage: Vandalism / Parody. MediaWiki funneh. What did Ed say in his usertalk on that last WIGO entry? I missed it. NorsemanWassail! 10:28, 25 June 2008 (EDT)
- After Bugler accused Tim of the auto de fe thing, he randomly accused him of denying Ed's rights and then went into some rant about how he was a sectarian and literally would have done the auto de fe thing 400 years ago. NightFlareStill doesn't have a (nonstub) RWW article. 10:35, 25 June 2008 (EDT)
- Yeah I found it funny Bugler used the word "faggot" (he related it to cigarettes - British slang) but I didn't get to see any responses after that. NorsemanWassail! 11:22, 25 June 2008 (EDT)
- "Faggot" can also mean a bundle of sticks ... presumably for starting the fire. Jrssr5 11:25, 25 June 2008 (EDT)
- Faggot is NOT anything to do with cigarettes it's the (bundles of) wood used for a fire (or a tasty offal food)
ContribsTalk 11:31, 25 June 2008 (EDT)
Could anyone give a quick recap of what happened for those of us (well - me) who missed it. The links on WIGO don't seem to work anymore as it looks like whatever they pointed to has been burned (auto de fe indeed!) so I guess they should either point to a copy here or be binned. 11:39, 25 June 2008 (EDT)
- In a nutshell, Ed made demands to UltimaHero to apologize, recant, etc. etc. to appease DeanS. PJR and CPAdmin jump in and defend UltimaHero, and Bugler said something about the Auto De Fa. I guess Ed made an edit in response just before Dean deleted it all. And sooooorry miss Susan! My Orkney Isle friend sitting beside me didn't mention anything about a bundle of wood! :P NorsemanWassail! 11:59, 25 June 2008 (EDT)
- To elaborate on the Tim and Bugler thing: CPAdmin1, in defense of Ultimahero, stated that to be a Christian, one should follow basic biblical teachings etc. Burglar made a snarky comment about CPAdmin1 doing an auto de fe thing (this is when he made the faggot/wood comment); after CPA1 denied it (and stated that it was unbiblical), Burglar stated that he was trying to deny Ed's right to speech, that Tim is a sectarian, and implied through the use of weasel words that he would've burned people 400 years ago. CPAdmin1 denied the rights-denying part and I forgot the rest. I think Ed managed to say something right before Croco deleted it. NightFlareStill doesn't have a (nonstub) RWW article. 12:53, 25 June 2008 (EDT)
Again![edit]
Can't get CP again!
ContribsTalk 10:33, 25 June 2008 (EDT)
- A Lenski a day keeps Andy away. (Editor at) CP:no intelligence allowed 10:39, 25 June 2008 (EDT)
- Up and down like the Assyrian empire - maybe it's a /. effect as everyone goes over to look at the lulz. I bet their page views are impressive! Silver Sloth 12:13, 25 June 2008 (EDT)
Still crashed, like communism. Oh Lenski what have you done?--JayJay4ever??? 13:25, 25 June 2008 (EDT)
- It was up briefly and I checked the lenski dialog page and its views were not that much higher, about 46,000 views total, and it had 15,000-20,000 before all this hit. I would think the bump in traffic would be easiest to see on that page since that is the one being linked. It is very strange. tmtoulouse annoy 13:27, 25 June 2008 (EDT)
- We all know Andy can't run a wiki, but surely the first priority is at least to keep it running. Has mommy withdrawn her funding after the Lenski Debacle? On the other hand our impoverished site is running like iced treacle today. Did you blow all that dosh I sent you on more crack Trent?
GenghisOur ignorance is God; what we know is science. 13:50, 25 June 2008 (EDT)
Oi, our CPU is in the red, she is gonna blow! tmtoulouse annoy 14:02, 25 June 2008 (EDT)
- I'm getting "The connection was refused when attempting to contact." as opposed to the normal "taking too long to respond" message. Perhaps CP has been taken down for a major scrubbing/retooling? Also--this website is painfully slow. Go feed the hamster or something.74.15.224.90 14:09, 25 June 2008 (EDT)
- I fed Genghis's hamster the crack I bought.....that's the problem, turns out they don't have much tolerance. I thought it would help. tmtoulouse annoy 14:12, 25 June 2008 (EDT)
- Try feeding it caffeine. I'm sure that would help. --λινυσ(☮) 14:24, 25 June 2008 (EDT)
- I am trying every trick in the book to keep things up and running. tmtoulouse annoy 14:32, 25 June 2008 (EDT)
- Anything I can do? --λινυσ(☮) 14:36, 25 June 2008 (EDT)
- Nah, mostly it is restarting the apache server to clear out stuck requests, and shutting down unneeded processes at the moment. Ran optimization scripts on the database...and well we will see. tmtoulouse annoy 14:44, 25 June 2008 (EDT)
- How do you do the latter, BTW? I never actually found that out. --λινυσ(☮) 14:47, 25 June 2008 (EDT)
- the sql command is just "optimize table x;" I have a script setup in the bots directory that can be run using the command line mysql < opt.sql; tmtoulouse annoy 14:53, 25 June 2008 (EDT)
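(For the curious, the batch-file approach Trent describes can be sketched roughly like this — the table names below are illustrative MediaWiki tables, not RationalWiki's actual schema, and the database/user names are placeholders:)

```shell
# Hypothetical sketch of the opt.sql batch file described above.
# Table names are common MediaWiki tables, used here as placeholders.
cat > opt.sql <<'EOF'
-- Defragment the most write-heavy tables
OPTIMIZE TABLE page;
OPTIMIZE TABLE revision;
OPTIMIZE TABLE text;
EOF

# It would then be fed to the client from the command line, e.g.:
#   mysql -u wikiuser -p wikidb < opt.sql
cat opt.sql
```
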
You know it's a bit weird CP being down. There seem to be two possibilities and I'm not sure I buy either one of them. (a) It's overloaded. (b) It's been taken down out of embarrassment. Now I would have thought that an attention hound like AS would do everything he could to get more attention, so why isn't he working night and day to keep it up (so to speak)? And embarrassment isn't really his thing either. So why is it down?--Bobbing up 15:15, 25 June 2008 (EDT)
- I don't think this has anything to do with the site being down... but does anyone else get the feeling that this incident really hurt old Schlafly... I mean, he's been the ridicule of the internet before, but this one is serious. He's been owned twice by Lenski, his own followers (acolytes?) have turned on him and no one out there, even Conservative sites, are coming to his defense this time. The FBI thing, the Atheism wars, even the Liberal X pages always get some kind of praise from the insane crowd.... but this has just been one big asskicking... I think he's been wounded. SirChuckBCall the FBI 15:25, 25 June 2008 (EDT)
- I have the same exact feeling. But I'm confident that, given some time, he'll rebound to be again his good old self. (Editor at) CP:no intelligence allowed 15:30, 25 June 2008 (EDT)
- We can't be too sure about that. Schlafly spends most of his talk page edits desperately avoiding being smacked down. Now that it's finally happened, and he has nowhere and no one to turn to, his fragile spirit may be permanently damaged (we can only hope).
Radioactive afikomen Please ignore all my awful pre-2014 comments. 04:08, 26 June 2008 (EDT)
Uh oh. PJR Niceness break![edit]
PJR gets a bit testy at the idea that the Bible may not be 100% accurate historical record. Hey, Phillip: when did rabbits stop chewing their cud, and when did Pi change away from being exactly three? --Gulik 15:38, 25 June 2008 (EDT)
- Oh my. You can see the wheels turning, can't you? "If she weighs the same as a duck, she's made out of wood... and therefore... A WITCH!" It can't be more than a short stroll from the idea that science is anti-biblical, to concluding that scientists are against god and must therefore be stoned to death. --81.187.75.69 16:37, 25 June 2008 (EDT)
Homskollars[edit]
It just occurred to me: How many of the current editor/sysop crop are members of the original Homeskollar crowd? That's who (allegedly) the site was created for about 18 months ago.
ContribsTalk 15:44, 25 June 2008 (EDT)
- Wasn't it created for all homeschoolers around the world? (Editor at) CP:no intelligence allowed 16:08, 25 June 2008 (EDT)
- Around the US you mean? Actually of course it's Andy's little bit of masturbation, pumping himself off against images of wicked anti-christian liberals. There was originally a "PANEL" that supposedly had ultimate power but since Ken took over evilution we haven't heard from them.
ContribsTalk 16:19, 25 June 2008 (EDT)
- Right, I miss the "Panel"! They took so much time to deliberate on that Evilution article! Have they been used ever after? Returning to the original issue, I find the new restriction on usernames confusing. Before, you could be quite sure that NameS (S standing for surname) was one of Andy's students; now everybody is supposed to be in that form, even my new sock is in the form NameS. (Editor at) CP:no intelligence allowed 16:42, 25 June 2008 (EDT)
- Wonder if the Homeskollars' parents have seen all Ken's stuff - Gay Bowel comes to mind (although not often).
ContribsTalk 16:53, 25 June 2008 (EDT)
- It takes some work, but the original homskollars ("the 57"?) were around before March 2007. Just check user creation logs, etc. ħuman
17:04, 25 June 2008 (EDT)
- User Creation log won't do you much good - it was only activated long after the blog rush. --Sid 17:38, 25 June 2008 (EDT)
- Main Page history is probably the quickest wayback machine. A quick scan gives less than 15 humskullers & an Assfly in the early days. You can watch them lerndering, too. That, and the truth setting them free. ;) --Robledo 17:50, 25 June 2008 (EDT)
Deborah[edit]
I'm at a loss for words. From just the first three socks she's made, I can instantly think of hundreds of possibilities to convey racism through usernames. Is she for real? NorsemanWassail! 16:55, 25 June 2008 (EDT)
- You do have to wonder, don't you? --81.187.75.69 17:19, 25 June 2008 (EDT)
"Evil"ution[edit]
Odd choice to make a redirect. — Unsigned, by: That smug dude Kektlik / talk / contribs
New Old Topic revisited[edit]
This is all common knowledge now... but yeah, I'm sure this second image is exactly what makes up fair use. Who was keeping a log of misused and stolen images again? SirChuckBCall the FBI 17:01, 25 June 2008 (EDT)
- I've replaced your edit/source link with a permalink. Hope you don't mind. Agreed on the subject, though. Then again, it's JM. What do you expect? --Sid 17:49, 25 June 2008 (EDT)
Just read #1
Even if you could argue fair use in copyright, the trademark and licensing and exclusive rights on that trademark get you. #4 (quoted above) points out that it's hard news reporting - not soft news encyclopedias. Oh it's fun... and yes, I poked the contact form on the website there, pointed out the general use and asked for a response back. --Shagie 18:14, 25 June 2008 (EDT)
- Scheisse! - couldn't find the contact form. Well done - did you give 'em links to the piccy on CP?
ContribsTalk 18:23, 25 June 2008 (EDT)
- Of course...
"An image of the statue is being used at as an image for general awards given on the site. As I understand it, this is a violation of the trademark. Conservapedia is notorious for misunderstanding copyright and trademarks and ignoring any/all intellectual property laws."
Closed for editing[edit]
It appears that the "edit" user group now has to be given to make any edit on the site. No longer just for overnight editing. tmtoulouse annoy 17:34, 25 June 2008 (EDT)
- I guess Andy got sick of it all and declared it to be night already. Even though it would be hilarious if you'd suddenly need edit rights to be able to prove you're worthy of edit rights. --Sid 17:46, 25 June 2008 (EDT)
- He'll probably do an email interrogation on all new users before letting them edit. Guard your socks!
ContribsTalk 17:55, 25 June 2008 (EDT)
- Note that Andy has only made eight edits in the last two days (6/24-6/25, up to 18:00 EST), all Lenski-related. He might be AFK and simply not turning editing on, he might be struggling to get his server to work properly (long phone calls to Siteground?), he might be in the basement banging his head against the wall... ħuman
18:04, 25 June 2008 (EDT)
- The last edit by an <s>editor</s>contributor there was Daphnea, 1:11 PM (EDT?). It's a freakin' ghost town there. NorsemanWassail! 18:10, 25 June 2008 (EDT)
- Let's hope (wink wink, socks AND/OR CP Sysops reading here) that someone fixes it, e-mails Andy about it or, if it is a new policy, make it clear on CP. (Editor at) CP:no intelligence allowed 18:13, 25 June 2008 (EDT)
- I can't connect to CP at all. Is the server borked again? --Gulik 02:35, 26 June 2008 (EDT)
I'm rather amazed that the entire edit group consists of only 28 members, including TK and a bunch I don't remember ever seeing edit anything. There's an indication right there of how small the active user list is, if that's all he trusts to edit without vandalizing everything when he's in bed. --Kels 21:42, 26 June 2008 (EDT)
"Punchy" Schlafly[edit]
Oh dear. He's the laughing stock of a significant portion of the internet, his loyal sysops are busy bickering between themselves, and his site is overrun by huns and vandals. Yet, he's still talking about Lenski hiding data. Boy doesn't know when to quit. --81.187.75.69 20:07, 25 June 2008 (EDT)
- Well, the vandals can't do much because "editing" is turned off... and with eight edits over all of yesterday (<s>Monday</s>Tuesday) and today (<s>Tuesday</s>Wednesday), it almost looks as though he has just walked away from it. I did, however, think I heard his Momma calling him in for dinner. And a whippin' ħuman
20:16, 25 June 2008 (EDT)
- I just checked, Schlafly hasn't had a day off from editing Conservapedia for two years. Including two Christmases, new years eve/day, valentines day, may day or Reagan's birthday. I think the poor bastard needs a holiday. Shall we have a collection? Send him somewhere nice, but with muslims, like the Maldives perhaps? --81.187.75.69 20:51, 25 June 2008 (EDT)
- This is leading to a question I have been meaning to ask. What does his family do about this? He from what I can see spends about 12 hours a day arguing with people on CP when he is not homeschooling. Doesn't his wife ever want to speak to him or something? I know I keep but I don't think I could go on like he has.
02:58, 26 June 2008 (EDT)
- He probably edits while at work at the AASP thing or whatever. NightFlareStill doesn't have a (nonstub) RWW article. 05:04, 26 June 2008 (EDT)
Atheism and deceit[edit]
This is quickly moving up the ranking list of CP's weirdest and most idiotic articles ever. If anybody made an article like "Republicans and deceit" or "Christians and deceit", quoting the supposed deceit of just a few individuals, they would get insta-banned. Good thing that atheism is one of those subjects where CP doesn't feel like being fair.
But the real reason I love this article is this sentence from the current version:
Really, you can't make stuff like this up. --Sid 20:49, 25 June 2008 (EDT)
- Really? I have almost fully programmed my Kenbot to produce perfect Kenese on any number of topics, at least in respect to Atheism. I even have a "quote searcher" programmed so I can buttress Kenbot's "arguments" with the unwitting statements of unwitting atheist leaders undermining their cause! ħuman
00:31, 26 June 2008 (EDT)
- Is it just me or does that sentence not even mean anything as it's just catchphrases strung together to sound like it's intelligent? Armondikov 06:30, 26 June 2008 (EDT)
Irony, Thy Name Is Ed[edit]
I rather like this comment in Andy's rumpus room:
"We like to applaud science's successes, claiming a new discovery nearly every week (!) but paradigm-shaking new theories are frequently resisted bitterly and relentlessly."
Yep. Sometimes they're resisted for, oh, a hundred years plus. --Phentari 21:03, 25 June 2008 (EDT)
- I love Bugler's attempt to <s>bait</s>support Andy into doing something <s>stupider</s>more to right this injustice.
21:10, 25 June 2008 (EDT)
- My personal favorite is that Ed loves drawing lines while ignoring the points.... He speaks about new theories being resisted, but fails to mention who does the resisting.... I've never heard of an Atheist Inquisition.... although it would be amazing SirChuckBCall the FBI 22:02, 25 June 2008 (EDT)
- Our chief weapon is surprise, that's all, surprise..... and ::psst, Nice shiny red uniforms are betters:: SirChuckBCall the FBI 23:45, 25 June 2008 (EDT)
- Is that the largely defensive weapon of surprise?
02:59, 26 June 2008 (EDT)
- I fancy purple myself.
Radioactive afikomen Please ignore all my awful pre-2014 comments. 04:00, 26 June 2008 (EDT)
Wheels Falling Off[edit]
I know we have all predicted the 'last days' of CP many times only to see it ignore all difficulties, ridicule, etc. and carry on. But at the moment it really does seem to be struggling. I see the main problems and symptoms at the moment as being...
- The SysOps are infighting openly in a way I have not seen before.
- Several SysOps have left recently.
- It is totally unavailable for large chunks of the day.
- It is locked for edits for much of the remaining time.
- Even Andy himself is hardly posting.
- Extremely negative (even abusive) comments are allowed to stand on talk pages far longer than previously.
- They are getting much ridicule from the internet at large.
- They were very publically called out by Lenski (PBUH)
- They are swamped with trolls and vandals (and not all of them are me!)
Has anyone seen the film 'Downfall'? I know it's wishful thinking and I'm not saying they cannot recover and carry on, but it does seem very wobbly at the moment, more so than I have ever seen it. RedDog 03:54, 26 June 2008 (EDT)
- The film Downfall. We may see the downfall of Assfly rather than of all Conservapedia. Proxima Centauri 05:37, 26 June 2008 (EDT)
- There were decent amounts of Sysop infighting while TK was around, but you've got a point about the rest of it... Although really, the way Andy drives his blog, it can never go anywhere but down. This is mostly just symptomatic of that, with the Lenski affair possibly helping to some degree by pushing up tensions. UchihaKATON! 03:58, 26 June 2008 (EDT)
- The hamster's stopped spinning again I see. I hope they don't go down anytime soon, my little sock has a two-year plan well underway. --PsygremlinWhut? 04:04, 26 June 2008 (EDT)
- Two years? That seems very... long term, with respect to Conservapedia. Many didn't even expect it to last this long...
- Or was that the joke...? UchihaKATON! 04:18, 26 June 2008 (EDT)
There was something dreadfully "you haven't seen the last of me!", vicious-fist-shaking about Andy's last post though. The whole "stay tuned" business. I'm guessing he'll show us, everyone, in the end. Ace McWicked 04:41, 26 June 2008 (EDT)
- It is just a couple of days that Andy and his appendix, CP, have behaved oddly. Is there anyone among us who hasn't had off days once in their lifetime? Surely though - don't hate me! - I sorely miss the first TK. He would have known how to handle the situation. (Editor at) CP:no intelligence allowed 04:52, 26 June 2008 (EDT)
- Care to summarise, Mr McWicked, for us poor souls wot ain't seen CP since yesterday afternoon? Ajkgordon 04:56, 26 June 2008 (EDT)
- Conservapedia’s down again. My computer gave a message saying, “ is not set up to establish a connection on port World Wide Web service HTTP with this computer.” I’d guess Assfly has big problems over Lenski and has shut down the site while he gets advice from people who understand the law better than he does. Proxima Centauri 05:34, 26 June 2008 (EDT)
- Would be interesting if the Oscar lawyers (as per above) have just climbed all over him. Maybe we should look for any Disney pics on the site - there's another crowd that doesn't take their images being used without permission lightly.
PsygremlinWhut? 05:59, 26 June 2008 (EDT)
Reply to Mr Ajkgordon - Aschy's last post thanked Bugler and mentioned the Lenski business with a chilling prophecy that we should all "stay tuned for further details". Large offensive weapon of gun? Ace McWicked 06:00, 26 June 2008 (EDT)
What puzzles me the most about these almost regular downtimes is the complete lack of any official word on them. They handle it just like they handled the introduction of the edit rights: Pretend nothing ever happened, and when enough people complain/ask about it, simply admit that it's that way without further explanation. Really, you'd think that somebody in charge on CP would realize that this is a terribly bad way of handling things. --Sid 06:25, 26 June 2008 (EDT)
- But who cares what the little people think? ↳ ↑ <blink>⇨</blink> ↕ ▽ ← 06:37, 26 June 2008 (EDT)
- It popped up in WIGO under a "full and frank explanation", which was, of course, hilarious. It's down for me, which is a shame because some of the new stuff looks like it could be interesting. The whole Lenski thing was badly handled and if it's not the final nail in the coffin it's certainly the first really big one. We'll either see CP stop calling people out or Assfly will just forget it ever happened, get told not to do it, again, do it anyway, again, and have a new arsehole ripped in, again. Or they'll stop being vocal and retreat a little, then slowly wither away. Either with a bang or a whimper, it will end one day. Armondikov 06:39, 26 June 2008 (EDT)
- They'll just treat the <s>editors</s>contributors there with the same lack of respect as they always do - the attitude is essentially you're there to a) work for them and b) kowtow to whatever they say. Andy's last response when asked why they'd been down for 5 hours was to say "We're back now." Smarmy git. I wonder when last he went through the list of (unblocked) members and actually took the time to thank them for their contribs. Ok, maybe that's asking too much, but then again, it's hardly Wikipedia with a gazillion editors. Problem is, he's become too focussed on his pet projects, which clearly indicates that it's not, and was never meant to be, an encyclopaedia. — Unsigned, by: Some variety of gremlin, I think :) / talk / contribs (sorry - forgot to do that... PsygremlinWhut? 06:52, 26 June 2008 (EDT))
- I'm still not convinced that this is bad for CP or that Aschlafly is unhappy with it.
- I mean, it was transparently obvious what he was trying to do with Lenski and he got pretty much one of the few responses he was likely to get.
- Isn't this just about noise? Driving traffic to his site, a small proportion of which will have sympathy with it and join up or become fee-paying homeschoolers?
- Am I really the only one to think this here? Ajkgordon 06:45, 26 June 2008 (EDT)
- No, I agree. Regardless of the threat, Andy wouldn't have put the reply on CP if he thought it made him look bad (to those who already agree with him, who are, naturally, the only ones who matter). I also think the crashes are completely unrelated -- but I'm still cheering them on. ↳ ↑ <blink>⇨</blink> ↕ ▽ ← 06:49, 26 June 2008 (EDT)
- Tech question: was normal editing reinstated yesterday or did it continue to be limited to those with night editing rights? Did the issue come up on CP? (Editor at) CP:no intelligence allowed 06:56, 26 June 2008 (EDT)
- If all this end-of-days talk has taught me something, it's that Conservapedia cannot die, as Andy simply is unable (or refuses) to see that his ship has been sinking for a long time while he's been in his room watching his shiny treasure (his pet articles and page-view count) while the rest of the crew runs around doing stupid shit, passengers drop into the sea to swim back to shore or are thrown overboard by the crew, pirates (vandals and prdsts) attack, etc., and it will continue even if he has to use diving gear to stay in his ship. It is possible that it reaches the point where literally every editor there is Andy, his goons and a swarm of parodists, and you can probably say that this counts as being "dead" (under that definition I would say it already is), but Andy simply will never think about shutting his blog down. NightFlareStill doesn't have a (nonstub) RWW article. 07:07, 26 June 2008 (EDT)
- It won't die quickly and it'll probably be fairly painful to watch. It's a fairly big ship to sink so it'll probably not be apparent for months and even then, the big names there will deny it to the end. He's driving traffic to the site, yes, and for every (let's say, for argument's sake) 100 hits, there's 1 sympathetic one that joins up to fight the good fight; but there'll be at least 3 or 4 who join up to vandalise or take the piss and plenty more who'll just go back to their own blogs and make a quick passing joke of it. I think his best hope for survival is to ban and burninate and bring in new, even crazier people, Conservapedia: The Next Generation, as it were. Especially since the vast majority of the (not clearly parodist) editors publicly think Andy is a tool over the Lenski Affair. Armondikov 08:04, 26 June 2008 (EDT)
- Is it still locked against editing? Or am I missing something?--Bobbing up 08:14, 26 June 2008 (EDT)
- It's still locked. I'm assuming Andy did it in defense against the wandal rush he caused, though blocking account creation would've made more sense. NightFlareStill doesn't have a (nonstub) RWW article. 08:45, 26 June 2008 (EDT) Update: Not anymore. NightFlareStill doesn't have a (nonstub) RWW article. 08:49, 26 June 2008 (EDT)
It's unlocked now. My little sockie just made an edit. Proxima Centauri 17:22, 26 June 2008 (EDT)
Teh End of Days[edit]
Now who am I going to laugh at each day? As much as I despised Andy and his horrid little cronies, I also know I will miss the sense of superiority that I got from reading CP each morning. For god's sake, come on Andy. Get back up man! Suck it up, get out there and soldier on for Christ! There are people counting on you. --Horace 06:52, 26 June 2008 (EDT)
- It's the Rapture! CP and all who sail in her have been taken up to paradise! Ajkgordon 07:03, 26 June 2008 (EDT)
- This is obviously some new definition of 'paradise' that I haven't come across yet. Eternity with that lot... *shudder* --PsygremlinWhut? 07:20, 26 June 2008 (EDT)
- We may have to face up to a CP-free existence sooner than we thought. Challenging days for RationalWiki. Come on, people, there are other whackos out there! Let's make use of CP's breakdowns for other mission-related stuff. I'm working on an astrology scam. Or are most people only here for the lulz?
What I find odd about the CP black-outs is that they are going down overnight. Is Andy just pulling the plug every night as his personal contribution to combatting global warming?
GenghisOur ignorance is God; what we know is science. 07:46, 26 June 2008 (EDT)
- I'm getting three sites worthy of our attention in the Google ads at the bottom of the page - the wonders of context advertising. Silver Sloth 07:50, 26 June 2008 (EDT)
- Occam's Razor - it's a glitch that isn't fixed until the hamsters get their breakfast. Ajkgordon 07:53, 26 June 2008 (EDT)
- @Genghis: I've been thinking about the issue of new targets for when CP falls apart... how about Answers in Genesis itself? It seems pretty active, and there are plenty of places in the blogosphere where it gets ridiculed -- all the right ingredients?
- I think we should start a list for this kind of thing. It's difficult to say when CP might finally burst -- we don't want to be robbed of critical momentum before we know what to do next, do we? ↳ ↑ <blink>⇨</blink> ↕ ▽ ← 08:00, 26 June 2008 (EDT)
Excellent suggestion. I don't see why we couldn't start a WIGO for AiG already. We can have more than one, and if people aren't interested they don't have to use it. BadPsychics, for those who are unaware, is an excellent site doing something about people who claim psychic ability. Sadly there is no shortage of woo-merchants. RedDog 08:03, 26 June 2008 (EDT)
- For those who like their lulz with a right wing bent you could always check out Charlie Daniels soapbox. It's often quite amusing in a red-necky, ignorant kind of way. Not a patch on the andychrist though. RedDog 08:05, 26 June 2008 (EDT)
- GOOD MORNING WORLD! CP back up.
GenghisOur ignorance is God; what we know is science. 08:07, 26 June 2008 (EDT)
- But still editing limited to those with night editing rights! Argh! (Editor at) CP:no intelligence allowed 08:15, 26 June 2008 (EDT)
- AiG is the obvious choice, although it would differ from Conservapedia in two ways that may mean it doesn't work as well. 1) It's not a wiki, there aren't constant updates and bans in the same way; although not a problem, it would be a very different mechanic. 2) It's mostly specifically regarding the flood and creation, which is far narrower a view than watching general uber right-wing (or centre-right in the alternative universe) lunacy. It could devolve into, or at least be perceived as, more of an anti-religion thing if it didn't stay focused. Anyway, those are just two possible pitfalls of watching AiG; they may not be as bad as that and are probably outweighed by the positives. Armondikov 08:11, 26 June 2008 (EDT)
- (EC) What did you guys think about WIGOFR? It was fun for the few days we rushed into MP in my opinion. NightFlareStill doesn't have a (nonstub) RWW article. 08:12, 26 June 2008 (EDT)
- I thought WIGO4R (as I shall forever call it!) had great potential -- the beginning surely was a great moment -- but Metapedia is pretty static. Towards the end, I just felt weird trying to find newsworthy stuff there when the interest was so low. ↳ ↑ <blink>⇨</blink> ↕ ▽ ← 08:20, 26 June 2008 (EDT)
- I only found Metapedia by stumbling across the entry here. The thing is, is that it's not as active as Conservapedia and not quite as blatant about it. Most times you have to look quite hard to find anything overtly ridiculous. Though I may have been looking in the wrong places. Armondikov 08:32, 26 June 2008 (EDT)
- (EC) Yeh, only like 3 editors are active there, and they seem to go on Kektklik/Deborah style edit rampages, making the WIGOing itself rather dull. Spoofing the place was a blast though. NightFlareStill doesn't have a (nonstub) RWW article. 08:34, 26 June 2008 (EDT)
- No edit rights available, I see. Or is it just me? --PsygremlinWhut? 08:18, 26 June 2008 (EDT)
- The edit rights thing seems to have been fixed. Well, I can edit stuff there, at least... Alt 08:43, 26 June 2008 (EDT)
- As of right now aditing is switched on. Vandalise to your heart's content. I just burned a sock to test editing and also to see how on-the-ball ther SysOps are at the moment. RedDog 08:45, 26 June 2008 (EDT)
- Finally! Back to work now :-D (Editor at) CP:no intelligence allowed 08:49, 26 June 2008 (EDT)
I know I'm quite pleased actually as this is my first block from the Andychrist himself. Yay for me. I'm especially chuffed as my sock was called AndysBro. Nothing gets past you Susan - well spotted! RedDog 09:05, 26 June 2008 (EDT)
I'm still getting the "Internet Explorer cannot display the webpage". But apparently you all are connecting?72.243.191.219 09:22, 26 June 2008 (EDT)
- Yup!
ContribsTalk 09:26, 26 June 2008 (EDT)
- That is terribly unfair. What could be causing me not to be able to connect to just that one webpage? 72.243.191.219 09:29, 26 June 2008 (EDT)
- Restart IE? Clear the cache? Use Firefox instead? (Editor at) CP:no intelligence allowed 09:35, 26 June 2008 (EDT)
- If my ip was banned, would that be message I would be getting?72.243.191.219 09:49, 26 June 2008 (EDT)
- If your IP was banned you would still be able to connect and read - just not edit (I think) RedDog 09:54, 26 June 2008 (EDT)
- Theoretically an IP can be blocked from actually accessing the site - it's a MediaWIki thing. However, the chances of Andy actually implementing that would be exceedingly low. There is no benefit in doing so when it is so easy just to block an IP from editing. Try rebooting or resetting your modem/router if all else fails.
GenghisOur ignorance is God; what we know is science. 10:06, 26 June 2008 (EDT)
In reply to Andy not posting often, he is on vacation. — Unsigned, by: The inside reporter, "Kektlik" / talk / contribs
- That would explain a change in his behaviour. But how do you know? RedDog 10:02, 26 June 2008 (EDT)
- Kettles, much as I appreciate your willingness to discuss CP stuff with us, I think that's the sort of detail you should keep to yourself -- it verges on the invasive. Chaos. ↳ ↑ <blink>⇨</blink> ↕ ▽ ← 10:07, 26 June 2008 (EDT)
- Kektlik is the presumptive nominee for Andy's "future son-in-law".
GenghisOur ignorance is God; what we know is science. 10:10, 26 June 2008 (EDT)
- (edit conflict) Yes, and taking up an ambiguous position between two warring wikis is not the best way of securing that position. ↳ ↑ <blink>⇨</blink> ↕ ▽ ← 10:15, 26 June 2008 (EDT)
- I just thought it would help to clear up any doubts about the "end of CP". I have toned it down quite a bit with how much I discuss about things. I didn't say where he was, when he left, or when he will return home. — Unsigned, by: Reservation of threeve ("Kektlik") / talk / contribs
- You can't stop the end of CP now, foolish Kettlekreature. ↳ ↑ <blink>⇨</blink> ↕ ▽ ← 10:15, 26 June 2008 (EDT)
Actually, it does rather explain things though.--Bobbing up 10:29, 26 June 2008 (EDT)
- It does explain some things. But it doesn't explain why his server is even less stable than him. It also doesn't explain all the infighting and lack of support he is getting from other sysops. RedDog 10:34, 26 June 2008 (EDT)
Andy really should get back to the nursery and stop pretending that he’s grown up Proxima Centauri 07:34, 27 June 2008 (EDT)
Irony Meter *pop*[edit]
[6] Now Andy owes me an irony meter "proven contributors also have editing authority, which you can obtain after establishing a track record of substantive edits." Didn't help HelpJazz, did it? PsygremlinWhut? 09:23, 26 June 2008 (EDT)
- I think your Irony Meter might have been defective. That isn't really ironic - he's just lying. BeastmasterGeneral 13:26, 26 June 2008 (EDT)
- We should raise money by building and selling robust, well-filtered irony meters, there certainly seems to be demand for them (I see them getting broken all over the place), and we have a good test bench for them. At the very least, they should be logarithmic and have a good protection circuit... ħuman
16:14, 27 June 2008 (EDT)
Short (out of place) rant[edit]
Andy Schlafly makes me so mad. What right has he, any more than anyone else in the United States, to bludgeon a respected scientist and demand answers to questions which display a total lack of comprehension?
I have now to avoid any contact with Lenski related pages over there as the instructions on my meds warn against blood pressure related activities (actually not, but the doc did intimate as much).
He constantly goes over old ground and demands data that he wouldn't be able to do anything with and snipes at Zachary Blount (who has become the main target for some reason) with the same ignorance.
Nowhere has he expressed any appreciation for the attention (misplaced IMHO) given to him by Lenski but rather taken it as his due. Why Lenski took the trouble to reply to either of his rude and ignorant communications is beyond me but he should be accorded some thanks for it.
The latest "goal" question is barmy!
Oh yes, he's found a new friend one Larry Farma with an incomprehensible blog
I just ...
I mean ... Grrrrr.
Sorry
ContribsTalk 19:55, 26 June 2008 (EDT)
- and he disappeared the talk page thus taking Lenski's second reply out of context. Exasperate me!Sheesh!Not the most impressive contributor here 20:04, 26 June 2008 (EDT)
- He's archived it (but there appears to be about 400 chars short)
ContribsTalk 20:12, 26 June 2008 (EDT)
Oh & they're playing hard to get again (I can't access 'em)
ContribsTalk 20:12, 26 June 2008 (EDT)
I love how Assfly jumps on this Larry's claim that the co-author of Lenski's paper is dodging questions about the goal of the project, simply so he can again demand all Lenski's data, even though Lenski has given him instructions on where to get it already. The only problem is that, if he read Larry's blog given in the claim, and the original article (or rather, the comments to it), also given in the claim, he would see that Zachary Blount does, in fact, answer the questions posed, and directs this Larry guy to where he can get the answers in detail. Zmidponk 20:18, 26 June 2008 (EDT)
- He's now trying to insinuate that the novel E. coli strain was somehow engineered. Bejesus himself could have a quiet word with Andy on this one and he still wouldn't listen. Our hero'd probably have the chutzpah to tell the Son of Man that He wasn't being a real Christian and keep referring Him back to Genesis. --Robledo 21:10, 26 June 2008 (EDT)
- The discussion isn't actually going so well for him now. Poor Larry!--Tom Moorefiat justitia ruat coelum 16:00, 27 June 2008 (EDT)
For the record Lary Fafarman has been slinking around the ID/Evolution/Creation discussion for years now, frequent sitings at all the major venues (blogs, forums, etc.) he is just trying to latch on to the CP flak traffic for his latest trolling. tmtoulouse annoy 21:26, 26 June 2008 (EDT)
- Larry's been given "Edit" rights by Andy.
ContribsTalk 16:06, 27 June 2008 (EDT)
-
- Did he give up on the meritocracy facade or is he still maintaining it bipolar style? NightFlareStill doesn't have a (nonstub) RWW article. 19:33, 27 June 2008 (EDT)
Andy's concept of archiving[edit]
Anyone else notice that the diff chain breaks at the point where teh assfly "archives" the Lenski talk page? And that he doesn't create a link to said "archive"? Did he even create one? ħuman
21:45, 26 June 2008 (EDT)
- Ah, ok: is where he hid it. ħuman
21:46, 26 June 2008 (EDT)
- I'm suprised it's just hidden. I was putting money on CP "going down" for a day and then coming back up with all mention of Lenski removed and anyone who brings it up going the way of those who dare mention the FBI. Though there's still time for me to win that one. :P Armondikov 06:48, 27 June 2008 (EDT)
Cut from According To[edit]
- Conservapedia's audience is somewhat smaller than it would like us to believe.
I said I was gonna put this on WIGO but I don't think it's really worth it. Anyone who wants to, feel free. ħuman
21:18, 26 June 2008 (EDT)
Any reason why this was removed from According To? I thought you guys would be interested, and the rush of hits I've head suggests I was right. I wouldn't normally self-publicize quite so shamelessly, but I thought it was a result worth getting out there. The Lay Scientist 21:47, 26 June 2008 (EDT)
- Not WIGO but a nice analysis. It's worthy of comment (IMHO) but not noteworthy enough. Not "According to" but "WIGO" if anything.
ContribsTalk 21:54, 26 June 2008 (EDT)
- The reason was that "according to" is not supposed to have CP stuff on it - it says that in the lead in to where you edited to insert it. So I copied it here. And, uh, putting a link to your blog on our front page is a bit shameless. Feel free, however, to write an article in the CP namespace with similar content - especially if you update it over time a bit to add even more context. ħuman
21:58, 26 June 2008 (EDT)
Ah, okay, I missed that non-CP bit that was in bold at the top :| As for shameless, well it's a requirement of blogging. The Lay Scientist 22:05, 26 June 2008 (EDT)
- For what its worth, WIGO has 50% more page than the main page. Take that with a 'edit vs static' page and a 'front page vs click or two down' page. Being sent off to WIGO is certainly nothing to be disappointed about. --Shagie 22:30, 26 June 2008 (EDT)
- As a quick question, was using "acolytes" intentional in light of the Lenski affair? :P Armondikov 06:43, 27 June 2008 (EDT)
- Lol, yes, it was, although I'm not sure what the best name to use is. I liked "Acolytes", and it's easier to type than "Conservapedians". I also thought of "Conservapedes" or the dodgier "Conservapedos". The trouble with "Acolytes" is that it implies that they're all pulling together, but clearly they're not. Instead they're more like a chaotic herd running in all sorts of directions when startled by the loud noise of rationality. I need a name that describes that... The Lay Scientist 07:04, 27 June 2008 (EDT)
- "Conservastampede". :nods: --AKjeldsenPotential fundamentalist! 07:16, 27 June 2008 (EDT)
- I was thinking about comparing Conservapedia's reach with an even simpler task than using Alexa and trying to compensate for the fact that it's a wiki. If you type it into Facebook, you get about 10 or so groups. One of which is "pro" Conservapedia (although I reckon it's a Poe judging by the language used on it) the rest are very much pure mockery. The average membership barely exceeds 50 people. That's a pretty shockingly low number of people who are concerned about it if it was as massive as they claim. So, it's obviously not having that much effect. The only reason I think people I know are even aware of it is because I occaisionaly flag up links from here and CP for people to laugh at. Armondikov 15:41, 27 June 2008 (EDT)
- That's not a bad idea - another one of course might be to look at Technorati reactions - positive vs. negative. Across several metric like that you might get a good idea of what proportion of their unique visitors are p***-taker... The Lay Scientist 16:27, 27 June 2008 (EDT)
talk page[edit]
I think we should capture the archived Lenski talk from CP before it's memory-holed. 203.96.84.33 22:08, 26 June 2008 (EDT) Ace McWicked
- It's worse than that - the "archive" is full of holes from previous deletions. Even as we speak, say, since the archiving, major chunks have been deleted from the talk page. Oh, yes, go ahead and grab a copy of the archive, of course. I think Trent is working on getting the actual full history of the all the edits to the talk page copied somehow (actually, he has it, is just having trouble loading it up on RW). ħuman
22:12, 26 June 2008 (EDT)
- I have several copies of the page myself -- preserving both the main discussion and some things that were deleted -- but it's a long way from complete. Nonetheless, it might come in handy if TalkLenski is deleted.
- Trent's way sounds better, though :) ↳ ↑ <blink>⇨</blink> ↕ ▽ ← 22:47, 26 June 2008 (EDT)
Excellent. We MUST have FULL records. Cant afford to be clueless these days. 203.96.84.33 22:51, 26 June 2008 (EDT)
- Okay, okay, I will get back to it......Linus was suppose to figure it out.... tmtoulouse annoy 22:54, 26 June 2008 (EDT)
Meanwhile, PJR turns on his masteronce more 203.96.84.33 22:59, 26 June 2008 (EDT)
- Yeah, that was sweet. Well done PJR. In terms of simple wiki protocol, which apparently some others are trying to slirk around. ħuman
23:04, 26 June 2008 (EDT)
- PJR seems to be doing that a lot lately. He's most certainly the next to go (but that might bring Croc back).
Andy is going to be pissed when he realizes PJR is just a Young Earth Liberal. That scallywag.. Ace McWicked 07:07, 27 June 2008 (EDT)
- Just found this which makes for interesting reading (sorry if it's been dug up before). It seems PJR wasn't too enamoured with Andy's blog at the outset but seems to have been swayed (maybe the sysopship had something to do with it). Mewonders if the old doubts aren't starting to creep up again. --PsygremlinWhut? 09:11, 27 June 2008 (EDT)
PJR just wants a clean, well-lighted place to edit his fables. 204.248.28.194 09:35, 27 June 2008 (EDT) | https://rationalwiki.org/wiki/Conservapedia_talk:What_is_going_on_at_CP%3F/Archive57 | CC-MAIN-2020-50 | refinedweb | 17,222 | 72.46 |
Hello Everyone,
I am having issues with this current script. I am trying to create a Twitch bot in Python. Any help with this would be much appreciated. The issue that I am having is it is not replying back with the message Hello there, when I type in "Hello"
import string from Socket import openSocket, sendMessage from Initialize import joinRoom from Read import getUser, getMessage s = openSocket() joinRoom(s) readbuffer = "" while True: readbuffer = readbuffer + s.recv(1024) temp = string.split(readbuffer, "\n") readbuffer = temp.pop() for line in temp: print(line) if "PING" in line: s.send(line.replace("PING", "PONG")) break user = getUser(line) message = getMessage(line) print user + " typed :" + message if "Hello" in message: sendMessage(s, "Hello there") | https://www.daniweb.com/programming/software-development/threads/501978/what-am-i-doing-wrong | CC-MAIN-2017-09 | refinedweb | 121 | 66.33 |
#include <BRMeshRefine.H>
This class manages grid generation from sets of tagged cells using the Berger-Rigoutsos algorithm in the context of the MeshRefine class from which it is derived
There are two ways grids can be defined based on tagged cells. one takes a single IntVectSet of tags defined on the BaseLevel mesh and uses that set of tags for every level to be refined; the other takes a Vector<IntVectSet> of tags defined on all the mesh levels to be refined and uses those.
Long Description:
Create new meshes based on tagged cells on a range of levels of a mesh hierarchy. Each level of tagged cells is used to generate a new mesh at the next finer level. The finest level in the output mesh will be one level higher than the top of the range of levels given as input. As a special case, use the same tags (appropriately refined) for all levels.
Usage:
Call the regrid functions after computing error estimates and tagging cells. To add a new mesh level, set TopLevel to the index of the finest level in the existing mesh and define tags on the finest level. To keep the existing number of mesh levels, set TopLevel to one less than the index of the finest level and don't define any tags on the finest level. If a single IntVectSet of tags is passed (instead of a Vector<IntVectSet>) then the same tags (properly refined) will be used for all the new meshes up to level TopLevel+1. In any case, the meshes at levels BaseLevel and below are not modified. The output argument newmeshes will be reallocated to the necessary size before being used. When this function returns, the elements of the newmeshes vector corresponding to the unchanged levels will be filled in with copies of the levels from the old mesh vector. The variable tags is modified in an undefined way, so its contents should not be relied upon. The variable BlockFactor specifies the amount by which each box will be coarsenable. Every grid box will have an integral multiple of BlockFactor cells in each dimension and also lower index values that are integral multiples. As a side effect, the minimum box size will be BlockFactor.
Expensive validations are done only when debugging is enabled (i.e. the DEBUG make variable is "TRUE").
Usage Notes:
All the input vectors should be defined with max index >= TopLevel. They should have values for indices [BaseLevel:TopLevel]. (except for OldMeshes, which must be defined for all indices). The new mesh vector newmeshes will be redefined up to index TopLevel+1. RefRatios should be defined such that RefRatios[L] is the value to use to refine the level L mesh to produce the level L+1 mesh. The tags vector is modified in an undefined manner. The output variable newmeshes may not be completely defined if an exception occurs. The BlockFactor can be used to force a minimum box size.
Default constructor -- leaves object in an unusable state.
Full constructor -- leaves object in usable state.
Full constructor -- leaves object in usable state.
Destructor.
Define function -- size of RefRatios will define maximum number of levels.
Reimplemented from MeshRefine.
Define function -- size of RefRatios will define maximum number of levels.
Reimplemented from MeshRefine.. | http://davis.lbl.gov/Manuals/CHOMBO-RELEASE-3.3/classBRMeshRefine.html | CC-MAIN-2019-22 | refinedweb | 548 | 63.9 |
Introduction
In this article, you’ll learn the basic concepts to build the required methods such as “register an alias” or “resolve an alias” in your own application. You’ll also learn how you can leverage the features of the Radix ledger to implement a simple decentralized Alias system for your app, without having a central server to manage the database index or store the data.
What are aliases in computer systems?
An alias is an alternate name for someone or something. For instance, Bruce Wayne is known by his pseudonym, Batman. In computer science, an alias is an alternative name for a computer, object, file, device, person, group, or user. Usually, an alias is used to replace long strings with human-readable names. For example, website domain names and usernames are aliases. This allows users to memorize simple URL addresses like google.com as opposed to a complex, non-intuitive list of Google servers IP addresses.
So why is it important to have an aliasing system on a distributed ledger?
In distributed ledger technology platforms, each member (node or wallet) of the network is identified by its unique public key. It's not easy to remember this long key you are messaging or sending tokens to. This creates friction for the user experience. It is much easier to remember or store human-readable names like usernames and domain names.
Therefore we could say that the primary goal of aliases for distributed ledgers is to resolve human-readable names, like @name or name@namespace, into machine-readable identifiers, including Radix wallet addresses, nodes, and even content hashes, and/or other identifiers.
Implementing an Alias system in Radix
As we stated earlier, we want to build a decentralized Alias system for our application, without the need to have a dedicated server to manage the database index. Now let’s see how simple it is to achieve our goal, by interacting with the Radix ledger.
Assume our application wants to provide users with an alias system that supports simple alias (in the form of @name) and namespace alias (in the form of name@namespace). The implemented alias logic has to be part of the app client code so all the wallets can interpret it consistently.
First, in our application, we define an Account that will be our Namespace index and another Account that will represent our Root index. In the Namespace index, we add a record that points to the Root index so we can handle any simple alias.
The elements of the system shown in the figure:
-Namespace Index: a Global index that enables the system to store aliases separately for the root and each namespace.
-Root Index: This index is the one that stores the simple aliases.
-Pointer account: Each alias on the Root Index links to a single pointer account that links to the users. This will allow users to modify or forward it in the future.
-Real account: This is the actual user’s account that is behind the alias.
Registering a simple alias
In our application, when a user wants to register a simple alias, we only have to read from the Namespace index account, verify that there is no namespace already registered with that alias, find the reference to the Root index account, and add a record in the Root for the requested user alias.
That alias record will point to a new Account used as a pointer reference, where we store the Alias owner, their current address, and any other relevant information. Note that the pointer account can be later updated only by the Alias owner, in case he needs to change any information.
As you might wonder, there’s no actual need to check for availability or duplicates, as alias is assigned in a first-come-first-served basis, and even if someone tries to register a name twice, the immutable ledger allows us to know who has the first valid registration.
Registering a namespace alias
In a similar way, when a user wants to register a namespace alias such as name@myapp, we have to read from the Namespace index account, find the reference to the requested namespace index Account, and then add a record in the myapp index for the requested user alias.
Depending on your application-specific needs, a namespace alias could be modified by the owner of the namespace (like email domains) or only by the alias owner (like a simple alias).
Resolving an alias
Let’s review how we can resolve aliases so we can have a complete alias system in our application.
When we want to resolve an alias, we start by reading the Namespace index records, to find the right index for the alias. For a simple alias that would be the Root index, and for a namespace alias would be the defined namespace index. In case it’s a simple alias, we start reading the Root index records, and we take the first match we find, to access the alias Pointer account.
We continue by reading the first record of the alias pointer to find the alias owner and keep reading any other record in the Pointer account to see if the alias had any additional update by the alias owner. Once we reach the end of the pointer data, we have the latest Account address for the simple alias. Instead, if it’s a namespace alias, next, we start reading the defined namespace index records, and we take the first match we find.
In this scenario, the process is simplified, and the first match we find in the private namespace index is the Account address for the alias.
Join The Radix Community
Telegram for general chat
Discord for developers chat
Reddit for general discussion
Forum for technical discussion
Twitter for announcements
Mail to [email protected] for general enquiries
Discussion (0) | https://dev.to/radixdlt/use-case-alias-system-7d3 | CC-MAIN-2021-31 | refinedweb | 976 | 57.5 |
Tech Insider A PWA is Still a Simple Website Under the Hood Yurii Tokarskyi Aug 10, 2021 • 11 min read Progressive web applications are still treated as a new thing. Many people talk about new browser APIs in the context of PWAs. But when we run for the newest things we forget about the basics. Look at the Fugu API tracker — it's a roadmap of new features in Chromium. The ambitions are impressive. Because of such things many people think PWAs are a different category of web product. But: A PWA is still a simple website first. It’s a web application second. And only in the third place it’s progressive. What basics? People can use web apps in different environments: browsers, web-views, or iframes. Browsers are modern and old with different API support. Web-views or iframes have limitations too. Also, don't forget about different viewports: from the smallest watches and mobile devices to desktop browsers with 8K screens. Only web developers create things for such diverse environments. The current state of web technologies allows us to make both mobile and desktop users happy. But: With great power comes great responsibility. — Uncle Ben Let’s remember two concepts — graceful degradation and progressive enhancement: Graceful degradation — a design philosophy that centers around trying to build a modern web site/application that will work in the newest browsers, but falls back to an experience that while not as good still delivers essential content and functionality in older browsers. — MDN Progressive enhancement — a design philosophy that provides a baseline of essential content and functionality to as many users as possible, while delivering the best possible experience only to users of the most modern browsers that can run all the required code. — MDN These two are different concepts. But their meaning is the same. We need to make web applications accessible on older or unusual devices. On modern devices, we can make them full-powered. 
Website comparison on iPhone XR and new Nokia 3310. Manuel Matuzović How to make sure a web application is usable on various devices and environments? A frontend developer should ask theirself 3 what-ifs: What if JavaScript isn’t supported? What if specific features or APIs aren't supported? What if specific properties aren't supported? What if JavaScript isn’t supported? It’s not only the user’s fault when JavaScript isn’t supported. The script may fail to download. A browser extension may break it. Or internal firewall mistakes can block external scripts. It’s not a matter of if your users don’t have JavaScript — it’s a matter of when and how often. — Adam Silver A common mistake is omitting server rendering, forms processing and navigation. Can I see a simple webpage if I don’t have JavaScript? Can I navigate the application? Can I send the form without JavaScript? We need to stop breaking native behavior. Native behaviour is the default behavior of the browser for specific HTML tags. For example: The a tag updates history and allows one to use CMD/CTRL to open links in a new tab; The form tag has native support for sending data to a server using POST/GET requests; The button tag has default centreing of internal content. Using a div tag as a button is a mistake. Using input without parent form too. These mistakes break native behavior — we need to make sure such things work. After that we can make our application sexier with JavaScript. Use client-side navigation. Send a form with AJAX. Or update content in real-time. 
/* ☝️ Simple form in React without and with native behavior support *//* ❌ — client-side only form, default behavior of form is broken */const Form: FC = ({ onSend }) => { const [value, setValue] = useState(""); return ( <div> {/* 👎 */} <input value={value} onChange={e => setValue(e.target.value)} /> <button>Save</button> {/* 👎 */} </div> );}/* ✅ — server-side processing supported, enhanced on client side */const Form: FC = ({ onSend }) => { const [value, setValue] = useState(""); return ( <form {/* 👉 */} <input value={value} onChange={e => setValue(e.target.value)} /> <button {/* 👉 */}Save</button> </form> );} What if a specific feature or API isn’t supported? Let’s take a look at Vibration API support: you can use it on Android, but on iPhone, you can not. Even worse — calling window.navigator.vibrate() in Safari will cause TypeError: navigator.vibrate is not a function. That error may break the application. We need to ask ourselves: what if it’s not supported? In the case of Vibration API, we can omit that. I don’t think the user will close an application because of vibration missing. /* ☝️ Using Vibration API safely *//* ❌ — causes TypeError: navigator.vibrate is not a function in Safari */window.navigator.vibrate(100);/* ✅ — simply omits in browser withput Vibration API support */if ("vibrate" in navigator) { window.navigator.vibrate(100);} But if we have no access to the Geolocation API — what will be the state of the application? We need to inform the user if it's not supported. And we need to make sure the user can continue the scenario without that. What to do in case of a more crucial feature, like Indexed DB? We need to add fallback support, even if it’s less performant or optimized. Use it as caching storage only — fetch data from the server and cache it to omit future network requests. When no support of Indexed DB is available, the application continues to fetch data from the server. We can also use other feature detection solutions, like Modernizr. 
But I want to pay attention to the mindset, not the tools — don’t be optimistic and think about how to handle an error first. What if a specific property isn’t supported? Specific property support is almost the same as feature/API support. The only difference is that it's more painful. If the full feature is not supported we can check it in a second. In case when a property is not supported or missing we spend more time finding the mismatch. Let’s take a look at the Notifications API. It’s supported in all modern browsers — only Safari requires a custom configuration. We can live with it. Check how to get notification permissions. All browsers return a Promise: only Safari requires a callback as a parameter. /* ☝️ Safely request permission for notifications */function handlePermission(permission) { /* ... */}if ("Notification" in window) { Notification .requestPermission(handlePermission) /* for Safari */ .then(handlePermission) /* other browsers */} else { /* handle case when notification aren't supported */} Most cases of unsupported properties I know apply to CSS. Each CSS property has browser-specific support. What is nice about CSS is that we can provide many values for the same property. The browser omits unsupported values and uses the previous one. For example, height. First, we provide a percentage value. It's supported everywhere. On the next line, we add a new value: vh (viewport height). If 100vh isn’t supported — the browser uses 100%. /* ☝️ Using 100vh with fallback to 100% in CSS *//* ❌ — not all browsers support 100vh */.root { height: 100vh;}/* ✅ — every browser supports 100%, modern browser uses 100vh */.root { height: 100%; height: 100vh;} From enhancement to inclusion Accessibility comes from supporting unexpected use cases. It’s especially applicable to the web. People can use our application on various devices and browsers. They can even see it embedded in other applications. 
As much as we care about the environments we support, following the principles I described above prepares us for inclusive design. Screen readers work well with native browser behavior. We need to extend our products with a few things for great accessibility. The SEO team will be happy if our application works without JavaScript. People with broken JavaScript support will be happy. As well as people with older systems, browsers or devices. Just ask yourself 3 questions during development: What if JavaScript isn’t supported? What if specific features or APIs aren't supported? What if specific properties aren't supported? I’ll agree if you say: "We have a budget, no time to play these games". I don't want to say we need to achieve everything above. We need to ask the right questions and check the facts. When we don't do something like server rendering, we need to be aware of it. Progressively enhanced (or gracefully degraded) systems are better prepared to welcome all the users — let’s care about them. Related topics Frontend Web Development TIL More posts by this author Yurii Tokarskyi | https://www.netguru.com/blog/pwa-is-still-simple-website-under-the-hood | CC-MAIN-2021-39 | refinedweb | 1,403 | 59.6 |
Synopsis edit
-
- auto cmd ...
-
- knar body
-
- knit name arguments ?process? body
-
- knead arguments ?process? body
-
- util auto varnames script
Download editknit is also available as ycl::knit::knit, along with unit tests.
Description editknit is useful for those times that you want something like eval, but with the ability to programatically manipulate the script to be evaluated. It creates a procedure that when run, makes substitutions to body and the evaluates body at the caller's level. process, if provided, is a script evaluated at the local level before body is processed and evaluated body. process provides a sandbox in the form of a local procedure in whose scope macro variable and command substitutions are processed.knit uses tailcall to provide some of the features that were problematic in earlier macro systems. Each macro is a procedure that fills out a template according to the arguments it receives, and then tailcalls the template.knit takes an EIAS approach to macros, meaning that it does not try to discern the structure of the template it is filling in, and instead provides the macro author a convenient syntax to choose how subtitutions are made. It turns out that just a small number answer most needs. All macro substitutions happen textually. They do not respect the syntactical flow of the Tcl script. It's the responsibility of the script author to make sure the macros produce a syntactically correct script.knead can be used to build a macro procedure specification without actually creating the macro procedure. knit is implemented as a trivial wrapper around knead. knead itself is useful for creating anonymous macros:
apply [knead x {expr {${x} * ${x}}}] 5In turn, knot is a simple wrapper around knead that also executes apply.knar performs only macro command substitution.auto generates the arguments to the specified command from variables of the same name in at the caller's level. If varnames is the empty string, auto also generates varnames from body.In contrast with Sugar, knit is more interleaved with the running interpreter, as Lisp macros are. Where Sugar attemps to parse a script and discern macros, knit inserts the macro code at runtime when the macro procedure is invoked. In order to do its expansions, Sugar must know, for example, that the first argument to while is evaluated as an expression. knit is oblivious to such things, allowing it to fit more naturally into a Tcl script. Since knit macros are themselves procedures, knit eschews the issue that {*} raises for Sugar, and in general automatically has the features of a procedure that the merely-procedure-like macros in Sugar have to work hard for. One example is default arguments and another is $argv handling. The tradeoff is that knit incurs some cost during runtime that Sugar does not, namely the cost of the tailcall.util auto simply performs variable macro substitutions in a script and returns the script. If varnames is not the empty string, ''varnamess' is derived from script. substituted values are retrieved from the level of the caller.
Macro Substitutions edit
- ${arg}
- Replaced by the value of $arg, properly escaped as a list.
- #{arg}
- Replaced by the value of $arg, without escaping the value as a list. This is useful for example, to substitute a fragment of an expression into expr, or to substitute a few lines of code into a routine. Also useful for substituting in a command prefix.
- !{argname}
- This is simply for convenience, and is exactly equivalent to [set ${argname}]. In other words, the value ${argname} is the name of a variable, and it will be arranged for the value of that variable to be substituted at execution time. In the examples below, lpop2 does the same thing as lpop, but thanks to !{argname} is a little more concise.
- [`name ...]
- Replaced by the returned value of command macro named name. By default, the replacement value is rescanned for additional macros until all macros are expanded. Useful to perform more complex and arbitrary substitutions. This happens prior to the macro variable substitutions so that any variable macros in the substituted text are still processed as usual later. The standard Tcl substitutions are performed, at the level of the process script. This gives command macros access to any variables defined by that script.
Command Macros edit
- addvars varnames
- Each varname in varnames is added to the list of macro substitution variables to process.
- def name args script
- Create a macro named name that substitutes each arg from args, into script, as described for knead.
- defdo name args script ?value...?
- Performs def, and then do, passing it the value arguments.
- do name ?value...?
- Executes the macro command named name, passing it all the value arguments.
- eval script
- script is evaluated at the level of the process script, and the returned value is the value of the macro command.
- foreach varname list ?varname list ...? script
- Each list. Like foreach, but each set of values extracted from the lists is is assigned to the corresponding varname names, and macro variable substitutions in script are processed against these variable names. This is useful for inlining commands or producing nearly redundant code from boilerplate, taking advantage of byte-compilation of procedures.
- if ...
- Like if, but triggered body arguments are simply returned.
- script script
- Like eval, but the value of the macro command is the empty string.
Configuration editThe following variables can be set to configure the behaviour of knit and friends:
- knit::knarname
- The string that, when preceded by [, indicates a command macro. The default is `.
- knit::recursive
- A boolean value that indicates whether command macros should be recursively processed.
Customizing editTo create a customized knit, use ycl::dupensemble to duplicate the ycl::knit ensemble, and then add commands to cmds child namespace of the namespace of the new ensemble. A conforming macro comand accepts on argument, cmdargs, and returns a value the is to be substituted.
Examples editThese examples show how the macros presented in Sugar, along with various other macros are implemented in knit
- knit unit tests
- Toy examples.
- ycl::chan
- Uses the foreach macro to substitute some boilerplate code.
- lswitch
- The most extensive example yet. Uses knit to implement a switch for lists.
knit double x {expr {${x} * 2}} knit exp2 x {* ${x} * ${x}} knit charcount {x {char { }}} { regexp -all ***=${char} ${x} } knit clear arg1 {unset ${arg1}} knit first list {lindex ${list} 0} knit rest list {lrange ${list} 1 end} knit last list {lindex ${list} end} knit drop list {lrange ${list} 0 end-1} knit K {x y} { first [list ${x} ${y}] } knit yank varname { K [set ${varname}] [set ${varname} {}] } knit lremove {varname idx} { set ${varname} [lreplace [yank ${varname}] ${idx} ${idx}] } knit lpop listname { K [lindex [set ${listname}] end] [lremove ${listname} end] } knit lpop2 listname { K [lindex !{listname} end] [lremove ${listname} end] } foreach cmdname {* + - /} { knit $cmdname args " expr \[join \${args} [list $cmdname]] " } knit sete {varname exp} { set ${varname} [expr {#{exp}}] } knit greeting? x {expr {${x} in {hello hi}}} knit until {expr body} { while {!(#{expr})} ${body} } knit ?: {cond val1 val2} { if {#{cond}} {lindex ${val1}} else {lindex ${val2}} } knit finally {init finally do} { #{init} try ${do} finally ${finally} }Sometimes only the macro command preprocessing is wanted. Using [knar] alone is rather like runing a C file through the preprocessor. Here's an example:
proc p1 {some arguments} [knar { [` foreach x {1 2 3} y {4 5 6} { set coord#{x} ${y} lappend res $coord#{x} }] # There is no variable named "x" in this scope at runtime. Macro # expansions operate in their own sandbox (a local procedure in which the # script to be evaluated is generated. return $res }] p1 ;# -> 1 4 2 5 3 6
Example: Avoid a Conditional Branch in a Loop editSometimes only one or two steps of a routine branch based on some condition. It can be annoying when one of those steps is in a loop, and the condition must be tested on each iteration even though the values affecting the outcome are known prior to entering the loop:
proc files {arg1 arg2 arg3} { ## step 1 #step 2 if {$arg1} { #do something } else { #do something else } foreach file $files { #step 3 #step 4 if {$arg1} { #do something else } else { #do something else } #step 5 } #step 6 #return some result }In this situation, knit could be used like this:
proc files {arg1 arg2 arg3} { ## step 1 if {some condition} { #step 2 } else { #alternate step 2 } if {some condition} files_for { #the script for step 4 } else { files_for { #the alternate for step 4 } } #step 6 return files } knit files_for script { foreach file $files { #step 3 #step 4 is a macro #{script} #step 5 } }Or, if the conditions can be determined from just the parameters to the procedure, multiple variants of the procedure can be generated from a template, and the selector moved to the caller:
knit files_macro {name script1 script2} { proc ${name} {arg1 arg2 arg3} { ## step 1 #step 2 is a macro #{script1} foreach file $files { #step 3 #step 4 is a macro #{script2} #step 5 } #step 6 return files } } files_macro files1 { #script1 } { #script2 } files_macro files2 { #script1 } { #script2 } if {$arg1} { files1 $arg1 $arg2 $arg3 } else { files2 $arg1 $arg2 $arg3 } | http://wiki.tcl.tk/40693 | CC-MAIN-2017-04 | refinedweb | 1,515 | 60.04 |
What is the Problem?
Assumne there is a class called "Kids.cs". The purpose of this class is to maintain the name and age of the children of a specific;
}
}
Now the requirement is that each person has Kids. So at the time of creation of a Person, the associated Kids should also be created. So this may be done by the following code.
Person.cs
class Person
private Kids obj;
public Person(int personAge,string personName, int kidsAge, string kidsName)
obj = new Kids(kidsAge, kidsName);
age = personAge;
name = personName;
string s= age.ToString();
Console.Write(obj);
return "ParentAge" + s + " ParentName" + name;
Main.cs
class Program
static void Main(string[] args)
Person p = new Person(35, "Dev", 6, "Len");
Console.WriteLine(p);
Console.Read();
Output
The code above is a typical example of a Composition design pattern or Object Dependency.
The following describes what Object Dependency is.
When an object needs another object then it is said that the object is dependent on the other object.
Let there are three classes Man, Woman and Children.
If Man -> Woman And Woman -> Children So Man -> Children
This is called Transitive Object Dependency.
Object Coupling and Object Dependency are the same term.
A good design should contain loose object coupling or object dependency.
Problems
The following are problems in the above approach:
private Kids obj; (Refrence)
obj = new Kids(kidsAge, kidsName); (known Concrete class)
The preceding problem can be solved by the following:
So, assigning the task of object creation to a third-party is somehow inverting control to the third-party. And this is called "Inversion of Control" (IOC).
In other words, the Inversion of Control Design Pattern could be defined as delegating the task of object creation to a third-party, to do low coupling among objects and to minimize dependency among objects.
So IOC says:
Dependency Injection
Dependency Injection is the way to implement IOC as in the following:
"Dependency Injection isolates implementation of an object from the construction of the object on which it depends".
Constructor Injection
Here an object reference would be passed to the constructor of the business class Person. In this case, since the Person class depends on the Kids class. So the reference of the Kids class will be passed to the constructor of the Person class. So at the time of object creation of the Person class, the Kids class will be instantiated.
The following is the procedure to implement Constructor Injection.
Step 1: Create an interface
IBuisnessLogic.cs
public interface IBuisnessLogic
Step 2: Implement an interface to the Kids class.
public class Kids :IBuisnessLogic
An object of the Kids class will be referenced by the Person class. So this class needs to implement the interface.
Step 3:
Make a reference of the interface in the Person class.
Person.cs
IBuisnessLogic refKids;
public Person(int personAge,string personName,IBuisnessLogic obj)
refKids = obj;
Step 4:
Now to create a third-party class where the creation of";
In the code above is a factorymethod() method where the object is getting created.
Step 5:
Now to use a third-party class at the client-side.
Program.cs
IOCClass obj = new IOCClass();
obj.factoryMethod();
Console.WriteLine(obj);
Disadvantage
Setter Injection
This uses the Properties to inject the dependency. Here rather than creating a reference and assigning them in the constructor, it has been done in the Properties. By this way, the Person class could have a default constructor also.
Advantage:
Implementation:
Step 1: The same as Constructor Injection.
Step 2: The same as Constructor Injection.
Step 3: Pesron.cs
//private Kids obj;
private IBuisnessLogic refKids;
public Person(int personAge,string personName)
public IBuisnessLogic REFKIDS
set
{
refKids = value;
}
get
return refKids;
In the Person class is the property REFKIDS, that is setting and getting the value of the reference of the interface.
Step 4:
There are some changes in third-party class.
IOC.cs
p= new Person (42,"David");
p.REFKIDS = objKid;
return "Displaying using Setter Injection";
The same as constructor injection.
View All | http://www.c-sharpcorner.com/UploadFile/dhananjaycoder/inversion-of-control/ | CC-MAIN-2017-34 | refinedweb | 665 | 56.25 |
05 July 2012 14:12 [Source: ICIS news]
TORONTO (ICIS)--Methanex has completed the restart of a second methanol plant in New Zealand and may restart a third idled unit there as the country’s natural gas supplies have improved, the Vancouver-based producer said on Thursday.
With the restart of the 650,000 tonne/year methanol plant, Methanex's production capacity at its Motunui site in ?xml:namespace>
Methanex restarted the first Motunui plant in 2008. The capacities had been shut down in 2004 because of high feedstock costs and low margins.
“The [restarted] plant adds a competitive new supply source for our customers in the fast-growing Asian markets and is expected to generate strong returns for shareholders," said CEO Bruce Aitken.
"With the improved natural gas supply position that has | http://www.icis.com/Articles/2012/07/05/9575777/methanex-completes-restart-of-second-new-zealand-methanol.html | CC-MAIN-2014-42 | refinedweb | 133 | 58.92 |
I have a oracle 816 dmp file, which is huge and had big initial and next extent so my import failed even though I had enough space on that tablespace. So I decided to do it in 3 steps like..
1. imp system/manager file=xyz.dmp indexfile=xyz.sql show=y
2. edit xyz.sql and take out all the storage clauses (you will need to edit that file to take out REM)
3. run xyz.sql to pre-create your tables and indexes
4. imp system/manager file=xyz.dmp ignore=y
But as this file is huge I could not manually take out all storage clauses so I changed INITIAL and NEXT extent values to something smaller with same initial and next extent.
But did not change max extent !@#
Now I am in trouble... everthing went smooth except for one index.
Error: Reached Max # of extents for this index and it rolled back all the rows it imported for that perticular table where that unique index import failed.
I was thinking of increasing the max number of extent for this index, but how do I do that ?
Then after I do that, how do I just import that table ?
what will be the import command ?
thanks for help
Sonali
Sonali
You can drop that table or recreate the table and then import the table
import sys/passwd file=imp.dmp tables=(schema.table1, schema.table2,...) indexes=N
Then you can create the indices for that table manually.
Hope this would help you,
Sam
Thanx
Sam
Life is a journey, not a destination!
Originally posted by sonaliak
....SNIP...
Error: Reached Max # of extents for this index and it rolled back all the rows it imported for that perticular table where that unique index import failed.
I don't belive this is true! The table data should stay there, only that failed index should be nonexistant! There is no way index failure could "rollback" table's imported rows. Oracle allways perform the import of a table in the following sequence:
1. Create the table if necessary
2. Insert all rows into the table *and perform an implicit COMMIT*.
3. Create all the indexes for that table
4. Create all the needed constraints on the table
5. Other table-related tasks....
If index creation failed in step 3, this could not influence steps 1 and 2, it could only influence steps 4 and 5.
Anyway, for seting/changing MAXEXTENTS for the index is no different as the way you do it for a table:
CREATE/ALTER INDEX my_index .... STORAGE (maxextents 500);
Jurij Modic
ASCII a stupid question, get a stupid ANSI
24 hours in a day .... 24 beer in a case .... coincidence?
Forum Rules | http://www.dbasupport.com/forums/showthread.php?11479-import-failed&p=45440 | CC-MAIN-2016-26 | refinedweb | 454 | 75.2 |
Getting Started with Qt Quick Controls 2
A basic example of a QML file that makes use of controls is shown here:
import QtQuick 2.6 import QtQuick.Controls 2.1 ApplicationWindow { title: "My Application" width: 640 height: 480 visible: true Button { text: "Push Me" anchors.centerIn: parent } }
Setting Up Controls from C++
Although QQuickView has traditionally been used to display QML files in a C++ application, doing this means you can only set window properties from C++.
With Qt Quick Controls 2, declare an ApplicationWindow as the root item of your application and launch it by using QQmlApplicationEngine instead. This ensures that you can control top level window properties from QML.
A basic example of a source file that makes use of controls is shown here:
(); }
Using C++ Data From QML
If you need to register a C++ class to use from QML, you can call qmlRegisterType() before declaring your QQmlApplicationEngine. See Defining QML Types from C++ for more information.
If you need to expose data to QML components, you need to make them available to the context of the current QML engine. See QQmlContext. | https://doc.qt.io/qt-5.11/qtquickcontrols2-gettingstarted.html | CC-MAIN-2019-22 | refinedweb | 186 | 59.43 |
- Options
- Thread Basics
- Native Threads and a GUI
- Qt Threading
- An Example
- Conclusion
Native Threads and a GUI
If you think your application might one day get a different GUI but will remain firmly rooted in Python, using Python's native threading support is a good bet. You can use Python's native threads with any GUI library you like, but you should not use more than one threading system in the same application. That means that you cannot use threading and QThread at the same moment. Doing so will invariably result in crashes.
In a GUI application that uses native threads, it is also not possible to directly influence the GUI thread. PyQt's thread support and wxPython's thread support offer functions you can use to send messages or events to the GUI thread or to lock the main GUI before accessing shared data.
If you use Python's native thread, your worker threads must fill a Queue object and your GUI thread must install a timer that every so often polls the Queue and retrieves new data.
Let's briefly look at Listing 1, a small recipe that uses PyQt, derived from a similar script by Jacob Hallen (from the ActiveState Python Cookbook).
Listing 1: Using Python Threads and a PyQt GUI
import sys, time, threading, random, Queue, qt class GuiPart(qt.QMainWindow): def __init__(self, queue, endcommand, *args): qt.QMainWindow.__init__(self, *args) self.queue = queue self.editor = qt.QMultiLineEdit(self) self.setCentralWidget(self.editor) self.endcommand = endcommand def closeEvent(self, ev): """ We just call the endcommand when the window is closed instead of presenting a button for that purpose. """ self.endcommand() def processIncoming(self): """ Handle all the messages currently in the queue (if any). """ while self.queue.qsize(): try: msg = self.queue.get(0) # Check contents of message and do what it says # As a test, we simply print it self.editor.insertLine(str(msg)) except Queue.Empty: pass class ThreadedClient: """ Launch the main part of the GUI and the worker thread. periodicCall and endApplication could reside in the GUI part, but putting them here means that you have all the thread controls in a single place. """ def __init__(self): # Create the queue self.queue = Queue.Queue() # Set up the GUI part self.gui=GuiPart(self.queue, self.endApplication) self.gui.show() # A timer to periodically call periodicCall :-) self.timer = qt.QTimer() qt.QObject.connect(self.timer, qt.SIGNAL("timeout()"), self.periodicCall) # Start the timer -- this replaces the initial call # to periodicCall self.timer.start(100) # Set up the thread to do asynchronous I/O # More can be made if necessary self.running = 1 _self.thread1 = threading.Thread(target=self.workerThread1) self.thread1.start() def periodicCall(self): """ Check every 100 ms if there is something new in the queue. """ self.gui.processIncoming() if not self.running: root.quit() def endApplication(self): self.running = 0) rand = random.Random() root = qt.QApplication(sys.argv) client = ThreadedClient() root.exec_loop()
Important in this example are the queue and the timer. Every time the QTimer ticks, a signal is sent to the periodicCall method. This method asks the GUI to process everything that has been put in the Queue by one of the worker threads.
This example is a direct translation of the Tkinter example given in the ActiveState Python Cookbook, and it shows how easy it is to adapt threaded Tkinter code to PyQt. | http://www.informit.com/articles/article.aspx?p=30708&seqNum=3 | CC-MAIN-2014-52 | refinedweb | 565 | 57.27 |
Opened 3 years ago
Closed 2 years ago
Last modified 2 years ago
#19560 closed Cleanup/optimization (fixed)
Please improve warning message "received a naive datetime while time zone support is active"
Description
The warning message usually shows:
BadDataError: ('payment.Order', RuntimeWarning(u'DateTimeField received a naive datetime (2013-01-03 19:45:41.532391) while time zone support is active.',))
However it doesn't actually show which field this happened to, which makes it hard to debug without inserting breakpoints.
This three-line change to django.db.models.fields.DateTimeField:
def get_prep_value(self, value): value = self.to_python(value) if value is not None and settings.USE_TZ and timezone.is_naive(value): ... - warnings.warn("DateTimeField received a naive datetime (%s)" - " while time zone support is active." % value, + warnings.warn("DateTimeField %s.%s received a naive datetime (%s)" + " while time zone support is active." % + (self.model.__name__, self.name, value), RuntimeWarning)
Which results in this output instead:
DateTimeField Order.created received a naive datetime (2013-01-03 20:11:24.084774) while time zone support is active.
Attachments (1)
Change History (8)
comment:1 Changed 3 years ago by aaugustin
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 follow-up: ↓ 4 Changed 3 years ago by gcc
Thanks aaugustin, but that's not my question, I still want to know what field was set wrongly. Because this warning/exception is only thrown at save() time, the backtrace doesn't tell you anything about the field name.
comment:3 Changed 3 years ago by claudep
- Triage Stage changed from Unreviewed to Accepted
comment:4 in reply to: ↑ 2 Changed 3 years ago by aaugustin
Thanks aaugustin, but that's not my question
Sorry, I wrote that comment on a phone and it was a bit short. Let's go for a longer version now that I have a keyboard :)
1) I understood your question; I just made a tangential comment without touching the triage flags.
2) Since this isn't going to be fixed until 1.6 — we're in RC for 1.5 — developers who face the same problem and find this ticket will have to resort to another debugging method. Turning warnings into exceptions is quite powerful; I hoped my comment might help developers who aren't aware of this possibility.
3) If we make that change, the part of the documentation I'm referring to won't work any more. When you're writing a patch for this ticket, don't forget to update the docs.
Changed 2 years ago by vajrasky
Added more clarity when receiving naive datetime object when timezone support is active. Fixed the doc and test as well.
comment:5 Changed 2 years ago by vajrasky
This patch (add_clarity_to_warning_receiving_naive_datetime.diff) is based on gcc's work (credit to him/her). It also addressed aaugustin's concern regarding the documentation on turning warning to exception.
comment:6 Changed 2 years ago by Aymeric Augustin <aymeric.augustin@…>
- Resolution set to fixed
- Status changed from new to closed
The documentation explains how to turn the warnings into exceptions, providing a full backtrace. | https://code.djangoproject.com/ticket/19560 | CC-MAIN-2015-48 | refinedweb | 517 | 56.05 |
01 September 2008 15:46 [Source: ICIS news]
LONDON (ICIS news)--Crude prices fell by more than $4/bbl on Monday to take Brent crude on ICE Futures to below $110/bbl for the first time since early May on the back of a stronger US dollar and diminishing threat from Hurricane Gustav.
?xml:namespace>
By 14:25 GMT, October Brent crude had hit a low of $109.20/bbl, a loss of $4.85/bbl from the Friday close of $114.05/bbl, before recovering to around $109.70.
At the same time, October NYMEX light sweet crude futures was trading around $111.20/bbl, having hit a low of $110.63/bbl, a loss of $4.83/bbl from the previous close.
While US crude trading was limited due to Labor Holiday Monday, Gustav, which had been supporting prices over its potential disruption to oil facilities in the Gulf of Mexico, lost strength as it hit land and was downgraded to Category 2 Hurricane.
More downward pressure was due to the US dollar gaining ground in the currency markets after a number of positive ?xml:namespace>
Last Thursday, prices saw an intraday drop of more than $4/bbl as the International Energy Agency (IEA) had said that it was prepared to act in case of heavy disruptions caused by | http://www.icis.com/Articles/2008/09/01/9153224/brent-falls-over-4-to-110bbl-four-month-low.html | CC-MAIN-2014-15 | refinedweb | 221 | 62.98 |
What This Book Covers
In Chapter 1 we look at the library as a whole, covering subjects such as how it can be obtained, how it can be used, the structure and composition of it, and the license it has been released under. We also look at a coding example featuring the Calendar control. Chapter 2 covers the extensive CSS tools that come with the library, specifically the Reset and Base tools, the Fonts tool, and the extremely capable Grids tool. Examples of the use of each tool are covered.
In Chapter 3 we look at the all-important DOM and Event utilities. These two comprehensive utilities can often form the backbone of any modern web application and are described in detail. We look at the differences between traditional and YUI methods of DOM manipulation, and how the Event utility unites the conflicting event models of different browsers. Examples in this chapter include how the basic functions of the DOM utility are used, and how custom events can be defined and subscribed to.
AJAX is the subject of Chapter 4, where we look in detail at how the Connection Manager handles all of our XHR requirements. Examples include obtaining remote data from external domains and the sending and receiving of data asynchronously to our own servers.
Chapter 5 looks first at how the Animation utility can be used to add professional effects to your web pages. It then moves on to cover how the Browser History Manager reenables the back and forward buttons and bookmarking functionality of the browser when used with dynamic web applications.
The Button family of controls and the TreeView control are the focus of Chapter 6. We first cover each of the different buttons and look at examples of their use. We then implement a TreeView control and investigate the methods and properties made available by its classes.
In Chapter 7 we look at one of the most common parts of any web site—the navigation structure. The example looks at the ease with which the Menu control can be implemented. We also look at the AutoComplete control and create both array and XHR-based versions of this component.
Chapter 8 looks at the container family of controls as well as the TabView control. Each member of the container family is investigated and implemented in the coding examples. The visually engaging and highly interactive TabView control is also looked at and implemented. Drag-and-Drop, one of DHTML's crowning achievements, is wrapped up in an easy-to-use utility and forms the first part of Chapter 9. In the second part of this chapter we look at the related Slider control and how this basic but useful control can be added to pages with ease.
In Chapter 10 we cover the Logger control in detail and work through several examples that include how the Logger is used to view the event execution of other controls and how it can be used to debug existing controls and custom classes.
AJAX and Connection Manager
As far as web interface design techniques are concerned, AJAX is definitely the way to go. So what JavaScript library worth its salt these days wouldn’t want to include a component dedicated to this extremely useful and versatile method of client/server communication?
The term AJAX has been part of the mainstream development community's vocabulary since early 2005 (with the advent of Google Mail), although some of the key components that AJAX consists of, such as the XMLHttp object, have been around for much longer (almost a decade in fact). The goal of asynchronously loading additional data after a web page has rendered is also not a new concept or requirement.
Yet AJAX reinvented existing technologies as something new and exciting, and paved the way to a better, more attractive, and interactive web (sometimes referred to loosely as web 2.0) where web applications feel much more like desktop applications.
AJAX can also perhaps be viewed as the godfather of many modern JavaScript libraries. Maybe it wasn't the sole motivating factor behind the growing plethora of available libraries, but it was certainly highly influential and orchestral in their creation and was at least partly responsible for the first wave of modern, class-based JavaScript libraries.
Like many other cornerstone web techniques developed over the years, AJAX was (and still is) implemented in entirely different ways by different browsers. I don't know if developers just finally had enough of dealing with these issues. We will look at the utility's event model in more detail later in the chapter, but rest assured, there are events marking the start and completion of transactions, as well as success, failure, and abort events.
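To give a flavour of how those events are used, here is a minimal sketch. The global event names (startEvent, completeEvent) follow the YUI 2 Connection Manager documentation, while the transactionLog array and its two handlers are purely illustrative additions of our own:

```javascript
// Listen for the start and completion of every transaction made
// through Connection Manager. Each handler receives the event type
// and an array of arguments from the utility.
var transactionLog = [];

function onStart(eventType, args) {
  // args[0] is the object describing the transaction that began
  transactionLog.push({ phase: "start", event: eventType });
}

function onComplete(eventType, args) {
  transactionLog.push({ phase: "complete", event: eventType });
}

// With connection-min.js loaded in the browser, wire the handlers up:
if (typeof YAHOO !== "undefined" && YAHOO.util && YAHOO.util.Connect) {
  YAHOO.util.Connect.startEvent.subscribe(onStart);
  YAHOO.util.Connect.completeEvent.subscribe(onComplete);
}
```

Success, failure, and abort handlers can be subscribed in exactly the same way.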
The Connection utility has been a part of the YUI since its second public release (the 0.9.0 release) and has seen considerable bug fixing and refinement since then. Later in the chapter you will define a new object yourself that will allow you to react to a range of responses from the server.
A Closer Look at the Response Object
I briefly mentioned the response object that is created automatically by the utility once a transaction has completed; let's now take a slightly more in-depth look at this object. It will be created after any transaction, whether or not it was considered a success.
The callback functions you define will receive this response object, except in transactions dealing specifically with file uploads. As I already mentioned, the response object will be missing success or failure details when dealing with file uploads; however, you can still define a callback function to be executed once the upload transaction has completed.
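To make this concrete, here is roughly what a callback object looks like. The member names (success, failure, argument) and the response-object fields read inside the handlers (tId, status, statusText, responseText) follow the YUI 2 documentation; the handler bodies, and the headlines.php URI shown in the comment, are placeholders for this sketch:

```javascript
// A sketch of a Connection Manager callback object. Each handler
// receives the response object the utility builds for the transaction.
var callback = {
  success: function (o) {
    // e.g. "transaction 1 returned 200 (OK): ..."
    return "transaction " + o.tId + " returned " + o.status +
           " (" + o.statusText + "): " + o.responseText;
  },
  failure: function (o) {
    return "transaction " + o.tId + " failed with status " + o.status;
  },
  // Arbitrary data handed back to the handlers as o.argument
  argument: { requestedBy: "example" }
};

// In the browser the object is passed straight to asyncRequest:
// YAHOO.util.Connect.asyncRequest("GET", "headlines.php", callback);
```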
Working with responseXML
In this example we're going to look at a common response type you may want to use—responseXML. We can build a simple news reader that reads headlines from a remote XML file and displays them on the page.
We'll also need an intermediary PHP file that will actually retrieve the XML file from the remote server and pass it back to the Connection Manager. Because of the security restrictions placed upon all browsers, we can't use the XHR object to obtain the XML file directly because it resides on another domain.
This is not a problem for us however, because we can use the intermediary PHP file that we're going to have to create anyway as a proxy. We'll make requests from the browser to the proxy, thereby sidestepping any security issues, and the proxy will then make requests to the external domain's server. The proxy used here in this example is a cut-down version of that created by Jason Levitt that I modified specifically for this example.
In order to complete this example you'll need to use a full web server setup, with PHP installed and configured. Our proxy PHP file will also make use of the cURL library, so this will also need to be installed on your server.
The installation of cURL varies depending on the platform in use, so full instructions for installing it are beyond the scope of this book, but don't worry because there are some excellent guides available online that explain this quick and simple procedure. Even though Connection Manager only requires the YAHOO and Event utilities to function correctly, we can make use of some of the convenient functionality provided by the DOM utility, so we will use the aggregated yahoo-dom-event.js instead of the individual YAHOO and Event files. We'll also need connection-min.js and
fonts.css so make sure these are all present in your yui folder and begin with the following HTML:
[code lang="html"]
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "">
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Yui Connection Manager Example</title>
<script type="text/javascript" src="images/2008/12/yahoo/yui/yahoo-dom-event.js"></script>
<script type="text/javascript" src="images/2008/12/yahoo/yui/connection-min.js"></script>
<link rel="stylesheet" type="text/css" href="yui/assets/fonts-min.css">
<link rel="stylesheet" type="text/css" href="responseXML.css">
</head>
<body>
<div id="newsreader">
<div class="header">Recent News</div>
<div id="newsitems"></div>
<div id="footer"><a class="link" href=" help/rss/4498287.stm">
Broadcasting Corporation</a></div>
</div>
</body>
</html>
[/code]
We’ll start off with this very simple page which at this stage contains just the markup for the newsreader and the references to the required library files. There’s also a <link> to a custom stylesheet which we’ll create in a little while.
Adding the JavaScript
Directly after the final closing </div> tag, add the following <script>:
[code lang="html"]
<script type="text/javascript">
//create namespace object for this example
YAHOO.namespace("yuibook.newsreader");
//define the initConnection function
YAHOO.yuibook.newsreader.initConnection = function() {
//define the AJAX success handler
var successHandler = function(o) {
//define the arrays
var titles = new Array();
var descs = new Array();
var links = new Array();
//get a reference to the newsitems container
var newsitems = document.getElementById("newsitems");
//get the root of the XML doc
var root = o.responseXML.documentElement;
//get the elements from the doc we want
var doctitles = root.getElementsByTagName("title");
var docdescs = root.getElementsByTagName("description");
var doclinks = root.getElementsByTagName("link");
//map the collections into the arrays
for (x = 0; x < doctitles.length; x++){
titles[x] = doctitles[x];
descs[x] = docdescs[x];
links[x] = doclinks[x];
}
//removed the unwanted items from the arrays
titles.reverse();
titles.pop();
titles.pop();
titles.reverse();
descs.reverse();
descs.pop();
descs.reverse();
links.reverse();
links.pop();
links.pop();
links.reverse();
//present the data from the arrays
for (x = 0; x < 5; x++) {
//create new elements
var div = document.createElement("div");
var p1 = document.createElement("p");
var p2 = document.createElement("p");
var a = document.createElement("a");
//give classes to new elements for styling
YAHOO.util.Dom.addClass(p1, "title");
YAHOO.util.Dom.addClass(p2, "desc");
YAHOO.util.Dom.addClass(a, "newslink");
//create new text nodes and the link
var title = document.createTextNode(titles[x].firstChild.nodeValue);
var desc = document.createTextNode(descs[x].firstChild.nodeValue);
var link = links[x].firstChild.nodeValue;
a.setAttribute("href", link);
//add the new elements to the page
a.appendChild(desc);
p1.appendChild(title);
p2.appendChild(a);
div.appendChild(p1);
div.appendChild(p2);
newsitems.appendChild(div);
}
}
//define the AJAX failure handler
var failureHandler = function(o) {
//alert the status code and error text
alert(o.status + " : " + o.statusText);
}
//define the callback object
var callback = {
success:successHandler,
failure:failureHandler
};
//initiate the transaction
var transaction = YAHOO.util.Connect.asyncRequest("GET", "myproxy.php", callback, null);
}
//execute initConnection when DOM is ready
YAHOO.util.Event.onDOMReady(YAHOO.yuibook.newsreader.initConnection);
</script>
[/code]
Once again, we can make use of the .onDOMReady() method to specify a callback function that is to be executed when the YUI detects that the DOM is ready, which is usually as soon as the page has finished loading.
The code within our master initialization function is split into distinct sections. We have our success and failure callback functions, as well as a callback object which will call either the success or failure function depending on the HTTP status code received following the request.
The failure handler code is very short; we can simply alert the HTTP status code and status message to the visitor. The callback object, held in the variable callback, is equally as simple, just containing references to the success and failure functions as values.
The pseudo-constructor which sets up the actual request using the .asyncRequest() method is just a single line of code. Its arguments specify the request method (GET), the name of the PHP file that will process our request (myproxy.php), the name of our callback object (callback), and a null reference.
Connection Manager is able to accept URL query string parameters that are passed to the server-side script. Any parameters are passed using the fourth argument of the .asyncRequest() method and as we don’t need this feature in this implementation, we can simply pass in null instead.
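For illustration, the fourth argument is simply a URL-encoded "key=value&key=value" string. A tiny helper (my own sketch, not part of YUI) shows the shape it takes:

```javascript
// Sketch (assumption: not part of YUI): build the query string that
// asyncRequest() accepts as its fourth argument.
function buildQueryString(params) {
  var pairs = [];
  for (var key in params) {
    if (params.hasOwnProperty(key)) {
      pairs.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
    }
  }
  return pairs.join("&");
}

// e.g. YAHOO.util.Connect.asyncRequest("GET", "myproxy.php", callback,
//                                      buildQueryString({feed: "news"}));
```

Encoding each key and value with encodeURIComponent() keeps spaces and reserved characters from corrupting the request.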
Most of our program logic resides in the successHandler() function. The response object (o) is automatically passed to our callback handlers (both success and failure) and can be received simply by including it between brackets in the function declaration. Let’s break down what each part of our successHandler() function does.
We first define three arrays; they need to be proper arrays so that some useful array methods can be called on the items we extract from the remote XML file. It does make the code bigger, but means that we can get exactly the data we need, in the format that we want. We also grab the newsitems container from the DOM.
The root variable that we declare next allows us easy access to the root of the XML document, which for reference is a news feed from the BBC presented in RSS 2.0 format. The three variables following root allow us to strip out all of the elements we are interested in.
These three variables will end up as collections of elements, which are similar to arrays in almost every way except that array methods, such as the ones we need to use, cannot be called on them directly.
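A generic helper (my own sketch, not from the book) makes that collection-to-array conversion reusable, and note that the reverse()/pop()/reverse() sequence in the listing, which strips the leading items, collapses to a single slice() once you have a true array:

```javascript
// Copy an array-like collection (e.g. the result of
// getElementsByTagName) into a true Array.
function toArray(collection) {
  var arr = [];
  for (var i = 0; i < collection.length; i++) {
    arr.push(collection[i]);
  }
  return arr;
}

// Dropping the first two items, as the listing does with
// reverse()/pop()/pop()/reverse(), is simply:
var trimmed = toArray({0: "a", 1: "b", 2: "c", length: 3}).slice(2);
```

The object literal with a length property stands in for a DOM collection here so the snippet runs outside a browser.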
A Login System Fronted by YUI
In our first Connection example we looked at a simple GET request to obtain a remote XML file provided by the BBC. In this example, let’s look at the sending, or POSTing, of data as well.
We can easily create a simple registration/login system interface powered by the Connection utility. While the creation of session tokens or a state-carrying system is beyond the scope of this example, we can see how easy it is to pass data to the server as well as get data back from it.
Again, we’ll need to use a full web server set up, and this time we can also include a mySQL database as the end target for the data posted to the server. You’ll probably want to create a new table in the database for this example.
In order to focus just on the functionality provided by the YUI, our form will have no security checks in place and data entered into the form will go directly into the database. Please do not do this in real life!
Security in a live implementation should be your primary concern and any data that goes anywhere near your databases should be validated, double-checked, and then validated again if possible, and some form of encryption is an absolute must. The MD5 hashing functions of PHP are both easy to use and highly robust.
Create a new table in your database using the mySQL Command Line Interface and call it users or similar. Set it up so that a describe request of the table looks like that shown in the screenshot overleaf:
For the example, we’ll need at least some data in the table as well, so add in some fake data that can be entered into the login form once we’ve finished coding it. A couple of records like that shown in the figure below should suffice.
Start off with the following basic web page:
[code lang="html"]
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"">
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Yui Connection Manager Example 2</title>
<script type="text/javascript" src="images/2008/12/yahoo/yui/yahoo-dom-event.js"></script>
<script type="text/javascript" src="images/2008/12/yahoo/yui/connection-min.js"></script>
</head>
<body>
<form id="signin" method="get" action="#">
<fieldset>
<legend>Please sign in!</legend>
<label>Username</label><input type="text" id="uname" name="uname">
<label>Password</label><input type="password" id="pword" name="pword">
<button type="button" id="login">Go!</button>
<button type="reset">Clear</button>
</fieldset>
</form>
<form id="register" method="post" action="#">
<fieldset>
<legend>Please sign up!</legend>
<label>First name:</label><input type="text" name="fname">
<label>Last name:</label><input type="text" name="lname">
<label>Username:</label><input type="text" name="uname">
<label>Password:</label><input type="password" name="pword">
<button type="button" id="join">Join!</button>
<button type="reset">Clear</button>
</fieldset>
</form>
</body>
</html>
[/code]
The <head> section of the page starts off almost exactly the same as in the previous example, and the body of the page just contains two basic forms. I won’t go into specific details here; the markup used should be more than familiar to most of you. Save the file as login.html.
We can also add some basic styling to the form to ensure that everything is laid out correctly. We don’t need to worry about any fancy, purely aesthetic stuff at this point, we’ll just focus on getting the layout correct and ensuring that the second form is initially hidden.
In a new page in your text editor, add the following CSS:
[code lang="css"]
fieldset {
width:250px;
padding:10px;
border:2px solid lightblue;
}
label {
margin-top:3px;
width:100px;
float:left;
}
input {
margin-top:2px;
*margin-top:0px;
}
button {
float:right;
margin:5px 3px 5px 0px;
width:50px;
}
#register {
display:none;
}
[/code]
Save the file as login.css and view it in your browser. The code we have so far should set the stage for the rest of the example and appear as shown in the figure below:
Now let’s move on to the real nuts and bolts of this example—the JavaScript that will work with the Connection Manager utility to produce the desired results. Directly before the closing </body> tag, add the following <script>:
[code lang="html"]
<script type="text/javascript">
//create namespace object
YAHOO.namespace("yuibook.login");
//define the submitForm function
YAHOO.yuibook.login.submitForm = function() {
//have both fields been completed?
if (document.forms[0].uname.value == "" ||
document.forms[0].pword.value == "") {
alert("Please enter your username AND password to login");
return false;
} else {
//define success handler
var successHandler = function(o) {
alert(o.responseText);
//if user not found show register form
if (o.responseText == "Username not found") {
YAHOO.util.Dom.setStyle("register", "display", "block");
document.forms[0].uname.value = "";
document.forms[0].pword.value = "";
}
}
//define failure handler
var failureHandler = function(o) {
alert("Error " + o.status + " : " + o.statusText);
}
//define callback object
var callback = {
success:successHandler,
failure:failureHandler
}
//harvest form data ready to send to the server
var form = document.getElementById("signin");
YAHOO.util.Connect.setForm(form);
//define a transaction for a GET request
var transaction = YAHOO.util.Connect.asyncRequest("GET", "login.php", callback);
}
}
//execute submitForm when login button clicked
YAHOO.util.Event.addListener("login", "click", YAHOO.yuibook.login.submitForm);
</script>
[/code]
We use the Event utility to add a listener for the click event on the login button. When this is detected, the submitForm() function is executed. First of all we should check that data has been entered into the login form. As our form is extremely small, we can get away with looking at each field individually to check that data has been entered into it.
Don’t forget that in a real-world implementation, you’d probably want to filter the data entered into each field with a regular expression to check that the data entered is in the expected format (alphabetical characters for the first and last names, alphanumerical characters for the username and password fields).
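As an illustration of such filters (the patterns here are my own, not from the book), the checks might look like:

```javascript
// Hypothetical validation patterns for the login form fields.
var NAME_PATTERN = /^[A-Za-z]+$/;       // first and last names
var ACCOUNT_PATTERN = /^[A-Za-z0-9]+$/; // username and password

function isValidName(value) {
  return NAME_PATTERN.test(value);
}

function isValidAccountField(value) {
  return ACCOUNT_PATTERN.test(value);
}
```

Client-side checks like these improve the user experience only; the server must still validate everything it receives.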
Provided both fields have been completed, we then set the success and failure handlers, the callback object and the Connection Manager invocation. The failure handler acts in exactly the same way as it did in the previous example; the status code and any error text is alerted. In this example, the success handler also sends out an alert, this time making use of the o.responseText member of the response object as opposed to responseXML like in the previous example.
If the function detects that the response from the server indicates that the specified username was not found in the database, we can easily show the registration form and reset the form fields.
Next, we defi ne our callback object which invokes either the success or failure handler depending on the server response. What we have so far is pretty standard and will be necessary parts of almost any implementation involving Connection.
Following this we can make use of the so far unseen .setForm() method. This is called on a reference to the first form. This will extract all of the data entered into the form and create an object containing name:value pairs composed of the form field’s names and the data entered into them. Please note that your form fields must have name attributes in addition to id attributes for this method to work.
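Conceptually, the harvesting that .setForm() performs amounts to something like the following (a rough sketch of the idea only — the real YUI implementation differs):

```javascript
// Rough sketch of the idea behind setForm(): collect name/value
// pairs from form fields into a request-ready string.
function serializeFields(fields) {
  var pairs = [];
  for (var i = 0; i < fields.length; i++) {
    if (fields[i].name) { // fields without a name attribute are skipped
      pairs.push(encodeURIComponent(fields[i].name) + "=" +
                 encodeURIComponent(fields[i].value));
    }
  }
  return pairs.join("&");
}
```

This is also why the name attributes matter: a field with only an id contributes nothing to the serialized data.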
Once this has been done, we can initiate the Connection Manager in the same way as before. Save the HTML page as login.html or similar, ensuring it is placed into a content-serving directory accessible to your web server.
As you can see the .setForm() method is compatible with GET requests. We are using the GET method here because at this stage all we are doing is querying the database rather than making physical, lasting changes to it. We’ll be moving on to look at POST requests very shortly.
Now we can look briefly at the PHP file that can be used to process the login request. In a blank page in your text editor, add the following code:
[code lang="php"]
<?php
$host = "localhost";
$user = "root";
$password = "mypassword";
$database = "mydata";
$uname = $_GET["uname"];
$pword = $_GET["pword"];
$server = mysql_connect($host, $user, $password);
$connection = mysql_select_db($database, $server);
$query = mysql_query("SELECT * FROM users WHERE username LIKE '$uname%'");
$rows = mysql_num_rows($query);
if ($rows != 0)
{
$row = mysql_fetch_array($query);
$pass = $row['password'];
if ($pass != $pword)
echo "Password incorrect";
else
echo "Hello ".$row[‘firstname’].", thanks for signing in";
}
else
{
echo "Username not found";
}
mysql_close($server);
?>
[/code]
Here we set up the variables required to query our database and then extract any records where the username entered into our form matches the username associated with a user. There should only be one matching record, so we can then compare the stored password with that entered into our form.
All we’re doing here is passing back an appropriate message to be displayed by our successHandler function back in the HTML page. Normally, after entering the correct credentials, the visitor would be redirected to their account page or some kind of personal home page, but a simple alert gives us everything we need for this example. Save the above file as login.php in the same directory as the web page and everything should be good to go.
Try it out and reflect upon the ease with which our task has been completed. Upon entering the username and password of one of our registered users, you should see something similar to the figure below:
So that covers GET requests, but what about POST requests? As I mentioned before, the .setForm() method can be put to equally good use with POST requests as well. To illustrate this, we can add some additional code which will let unregistered visitors sign up.
Add the following function directly before the closing </script> tag:
[code lang="javascript"]
//define registerForm function
YAHOO.yuibook.login.registerForm = function() {
//have all fields been completed?
var formComp = 0;
for (x = 1; x < document.forms[1].length; x++) {
if (document.forms[1].elements[x].value == "") {
alert("All fields must be completed");
formComp = 0;
return false;
} else {
formComp = 1;
}
}
if (formComp != 0) {
//define success handler
var successHandler = function(o) {
//show success message
alert(o.responseText);
}
//define failure handler
var failureHandler = function(o) {
alert("Error " + o.status + " : " + o.statusText);
}
//define callback object
var callback = {
success:successHandler,
failure:failureHandler
}
//harvest form data ready to send to the server
var form = document.getElementById("register");
YAHOO.util.Connect.setForm(form);
//define transaction to send stuff to server
var transaction = YAHOO.util.Connect.asyncRequest(
"POST", "register.php", callback);
}
}
//execute registerForm when join button clicked
YAHOO.util.Event.addListener("join", "click",
YAHOO.yuibook.login.registerForm);
[/code]
In the same way that we added a listener to watch for clicks on the login button, we can do the same to look for clicks on the join button. Again we should check that data has been entered into each field before even involving the Connection Manager utility. As there are more fields to check this time, it wouldn’t be efficient to manually look at each one individually.
We use a for loop and a control variable this time to cycle through each form field; if any field is left blank, the formComp control variable will be set to 0 and the function will return. We can then check the state of formComp and, provided it is not set to 0, we know that each field has been filled in.
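Extracted into a standalone helper (my own sketch, not from the book), the completeness check amounts to:

```javascript
// Returns true only if every value in the array is non-empty.
function allFilled(values) {
  for (var i = 0; i < values.length; i++) {
    if (values[i] === "") {
      return false;
    }
  }
  return true;
}
```

Separating the check from the alert keeps the validation logic testable and reusable across both forms.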
The success and failure handlers are again based on simple alerts for the purpose of this example. We again use the .setForm() method to process the form prior to sending the data. We can then proceed to initiate the Connection Manager, this time supplying POST as the HTTP method and a different PHP file in the second argument of the .asyncRequest() method.
All we need now is another PHP file to process the registration request. Something like the following should suffice for this example:
[code lang="php"]
<?php
$host = "localhost";
$user = "root";
$password = "mypassword";
$database = "mydata";
$fname = $_POST["fname"];
$lname = $_POST["lname"];
$uname = $_POST["uname"];
$pword = $_POST["pword"];
$server = mysql_connect($host, $user, $password);
$connection = mysql_select_db($database, $server);
$query = mysql_query("INSERT INTO users VALUES ('$fname', '$lname', '$uname', '$pword')");
echo "Thanks for joining us ".$fname;
mysql_close($server);
?>
[/code]
As we’re using POST this time, we can use the $_POST superglobal to pull our values out, and can then run a simple INSERT query to add them to the database. Upon entering an unrecognized name into the first form, the registration form should then be displayed, as in the following screenshot:
If you register a new user now and then take a look at your database with the mySQL Command Line Client, you should see the new data appear in your database:
Summary
The Connection Manager utility can be put to good use in connection (no pun intended) with a PHP (or other form of) proxy for negotiating cross-domain requests.
What you will learn from this book
- Explore the YUI Library—utilities, controls, core files, and CSS tools
- Install the library and get up and running with it
- Handle DOM manipulation and scripting
- Get inside Event Handling with YUI
- Create consistent web pages using YUI CSS tools
- Work with Containers
Summer 2011 Edition
By Robert Scheitlin, Calhoun County; and Bjorn Svensson and Derek Law, Esri
This Dynamic Legend widget created by Robert Scheitlin modifies the contents of the legend based on scale dependency and layer visibility. Map services and/or specific layers in a map service may also be excluded from the legend.
This article tells developers how to migrate widgets developed with the Sample Flex Viewer (SFV) to the ArcGIS Viewer for Flex. It assumes readers have experience using the ArcGIS API for Flex and are familiar with the Adobe Flash development environment, and experience developing with the Flex API and Adobe Flash is strongly recommended.
Released in November 2008, the SFV was a developer sample built on the ArcGIS API for Flex. It enabled nonprogrammers to deploy a rich Internet application for ArcGIS Server with minimal effort. Since its release, the SFV has been downloaded more than 30,500 times, and many sites have been built on the SFV.
SFV also provided a framework for Flex API developers to customize and extend the viewer. One important area was the ability to create custom widgets. Widgets are modular pieces of code that extend or add to the functionality of the SFV. They can be tailored by the widget developer for specific tasks that require particular data and conditions to be present in the viewer, or they can be generic and allow SFV to be configured by nonprogrammers to work with their own data. More than 50 widgets have been created for SFV and shared on the ArcGIS API for Flex code gallery. Many of these widgets can still be accessed from the Esri ArcScripts site (arcscripts.esri.com/; search for "flex" AND "viewer").
In September 2010, Esri released the ArcGIS Viewer for Flex—the official product release of the SFV. It includes 20 core widgets that support many standard web mapping application functionalities.
The Social Media widget created by Ping Jiang searches YouTube for videos, Flickr for photos, and Twitter for Tweets based on a keyword.
Many users wondered about the SFV widgets that were produced and shared on the ArcGIS API for Flex code gallery. Would these widgets "just work" in the new ArcGIS Viewer for Flex application?
Unfortunately, the answer is no. The ArcGIS Viewer for Flex uses a framework that differs from SFV. It is based on a newer release of the ArcGIS API for Flex and utilizes the latest Adobe Flash technology. To use widgets previously developed for the SFV, the code base for those widgets must be migrated and recompiled for the new ArcGIS Viewer for Flex 2.x API libraries.
This article provides tips and recommended practices to help Flex developers easily migrate custom widgets from the SFV to the ArcGIS Viewer for Flex. Flex developers should be aware of the differences between the SFV and the ArcGIS Viewer for Flex. These differences are summarized in Table 1.

Table 1: ArcGIS Viewer for Flex equivalents for SFV
Table 1 highlights some of the subtle—but key—changes in development patterns between the SFV and the ArcGIS Viewer for Flex. The concepts are the same, even though they may have been renamed. However, this is not a comprehensive list of these differences. At the Adobe software development kit (SDK) level, Adobe recommends using the new Spark components. For example, s:HGroup is used instead of mx:HBox. For more detailed information, see the resources list at the end of this article.
Start the widget migration process with a new MXML Component. Create the new MXML Component as part of a package in the widgets folder (i.e., widgets.LiveLayer). Follow Viewer coding standards. The widget package should share the same name as the widget name (minus the word "widget"). For example, the full widget name and package would be widgets.LiveLayer.LiveLayerWidget. Base the new MXML Component on BaseWidget.
Don't give the new component a width or height; that is handled in the widget template.
If the widget is going to reference custom components, such as item renderers and data groups (which are designed to replace the mx:Repeater in the new Adobe Flex SDK 4 environment), add the widgets.xml name space to the BaseWidget element (e.g., xmlns:livelayer="widgets.LiveLayer.*).
Use the widgetConfigLoaded event if the widget has a configuration file. This ensures that the widget configuration file has been loaded before you try to use it. Having a widget configuration file allows nondevelopers to change certain aspects of the widget without altering the code and compiling the application.
An fx:Script block is needed for the ActionScript code that will be migrated from an mx:Script block in the old widget. Add an fx:Script block by typing it in the new widget file instead of just copying the mx:Script block from the old widget.
States replace the ViewStacks used to separate pages or views in old widgets. A little planning will go a long way here. Examine the old widget and determine how many VBox elements are children of the ViewStack, that is, how many states will be needed. Each state must have a name as shown in the example in Listing 1.
<viewer:states>
<s:State name="resultsList"/>
<s:State name="filterPanel"/>
</viewer:states>

Listing 1: Each state must have a name.
In the old SFV, an animation that occurred when moving from one view to another was handled by a custom ActionScript class called WidgetEffects. In the new viewer, transitions are used for this purpose. The targets for transitions will be the names of the states defined previously. An example is shown in Listing 2.
<viewer:transitions>
<s:Transition fromState="*" toState="*">
<s:Fade targets="{[resultsList, filterPanel]}" duration="300"/>
</s:Transition>
</viewer:transitions>

Listing 2: Handle animations between views with transitions.
Mark Deaton's widget shows a changing series of NEXRAD radar reflectance images (indicating severe weather) over the US for the previous hour. It also demonstrates the use of WMS layers via the ArcGIS Flex API.
Copy the MXML elements that define the UI of the old widget, paste them into the new widget, and comment them out. The commented old code can serve as a reference. This will save some time because it eliminates the need to switch back and forth from old widget code to new—both versions will be present.
Moving the widgets' MXML code from mx components to Spark components during the code migration is recommended. Use Table 1 to determine the Spark equivalents for some mx components. The WidgetTemplate element is still the base for the widget's UI. The new widget template in the ArcGIS Viewer for Flex has renamed a few of the events. For example, the "widgetClosed" event is now just "closed" and the "widgetOpened" event is now "open."
The height and width of a widget's UI is defined in the widget template. Each widget state will be a Spark group element, and each ID will share the same name as the states defined earlier. Set the visibility of the group to false and add another attribute, "visible." After the attribute "visible" is typed, add a dot after it, and the automatic code completion will display the available states (when using the Adobe Flash Builder IDE). Choose the state name of the current group.
<viewer:WidgetTemplate id="wTemplate" width="280" height="330">
<s:Group id="resultsList"
width="100%" height="100%"
visible="false"
visible.resultsList="true">
<s:layout>
<s:VerticalLayout/>
</s:layout>
</s:Group>
<s:Group id="filterPanel"
width="100%" height="100%"
visible="false"
visible.filterPanel="true">
<s:layout>
<s:VerticalLayout/>
</s:layout>
</s:Group>
</viewer:WidgetTemplate>

Listing 3: Widget template
Examine the code for old mx components that could be "Sparkified." For example, if the old code uses an mx:Text, then its Spark counterpart is s:Label; an mx:HBox could become an s:Hgroup, mx:Button could become s:Button, and mx:ComboBox could become an s:DropDownList.
Another practical tip is to copy all the old mx:Script code from the old widget and paste it inside the new fx:Script block. As mentioned earlier, don't copy the mx:script block in its entirety; just copy the contents between the <![CDATA[ ]]>. There will be several errors that will have to be addressed one at a time.
Listing 4: Replace import statements.
Replacing the import statements in the script block that have changed in the ArcGIS API for Flex 2.2 is important.
One simple way to fix these is to examine the reported compile error. Double-click it to go to the specific line; put the cursor at the end of the offending class; and press Ctrl+Spacebar for Content Assist, which will add the required import statement.
While it is not required, it is a good practice to migrate mx:Repeater to the Spark DataGroup class. Accomplishing this involves creating three new items, *Results.as, *ResultDataGroup.as, and *ResultItemRenderer.mxml. Fortunately, there are several examples of this code in the ArcGIS Viewer for Flex. A quick shortcut: simply copy and paste these three items from SearchWidget and rename them with the new widget's name.
If the old widget used an mx:Repeater, the code probably has many references to its dataProvider. It will be necessary to create a bindable private var of type ArrayCollection to replace it. Everywhere in the code that references the repeaters, dataProvider must be changed to reference this new ArrayCollection.
The new ArcGIS Viewer for Flex allows developers to specify a custom info window to use with a particular widget or one of the widget templates that comes standard with the viewer. Using this new capability involves several code additions and changes, such as overriding the widget's showInfoWindow function. Rather than identifying each line that must be changed and added in this article, look at one of the existing core Viewer widgets and search for "info." That search will return items like the infoURL string, which holds the infoURL string from the widget configuration file, or the DATA_CREATE_INFOWIDGET event.
When using the new info window and data group (when replacing mx:Repeater), a couple of new import statements must be added:
import com.esri.viewer.IInfowindowTemplate;
import mx.core.UIComponent;
import spark.components.supportClasses.ItemRenderer; and
import com.esri.viewer.AppEvent.
If the data group and item renderer will be updated, the mouseOverRecord, mouseOutRecord, and clickRecord event handlers must also be updated to convert events passed to these handlers to an itemRenderer instead of using the infoData object.
var llResult:LiveLayerResult = ItemRenderer(event.target).data as LiveLayerResult;
When migrating widget code and using the queryTask, if the code is not connecting to an instance of ArcGIS Server 10 or higher, you need to set queryTask.useAMF = false.
Title bar buttons in the new ArcGIS Viewer for Flex no longer return events, so the click handler does not require an event.
private function toggleFilterPanel(event:MouseEvent):void
private function showResultsList():void
The order in which title bar buttons are added is the opposite order in which they were added in the SFV (e.g., the first button to appear on the left should now be the last one added).
The third property for the addTitlebarButton function is used to designate whether the button is selectable. The default value is true.
The assets directory in the SFV was com/esri/solutions/flexviewer/assets/images/icons/. The assets directory in ArcGIS Viewer for Flex is located at assets/images/. (Notice there is no subfolder of icons.)
To summarize, there are many key items that Flex developers should be aware of when migrating a custom widget from the Sample Flex Viewer to the ArcGIS Viewer for Flex. An example of migrated widget code can be found at gis.calhouncounty.org/DevSummit2011. It demonstrates the LiveLayerWidget code and includes developer comments.
Robert Scheitlin is the GIS manager for Calhoun County, Alabama. A GIS software developer for 12 years, he has worked on projects that included full ArcObjects custom applications, ArcGIS Engine applications, and ArcGIS Server API for Flex and Flex Viewer applications. He has used and customized the Sample Flex Viewer since its release and supported Flex developers on the ArcGIS API for Flex forum. After initially focusing on Visual Basic and Visual Basic. NET, he is now focusing primarily on Flex. His background as an Esri Authorized Instructor has given him the ability to teach others about software development and customization.
Bjorn Svensson is the lead product engineer for ArcGIS API for Flex and ArcGIS Viewer for Flex. He has worked with web mapping at Esri for 10 years. Previously he worked as a GIS consultant in Africa, Asia, Europe, and the Americas.
Derek Law is part of the ArcGIS Server product management team, covering the ArcGIS Viewer for Flex and Silverlight. He has been with Esri for 10 years, with extensive experience working with geodatabases and ArcSDE technology. In recent years, his focus has been on the configurable client viewers for ArcGIS Server. | http://www.esri.com/news/arcuser/0611/migrating-widgets-to-the-arcgis-viewer-for-flex.html | CC-MAIN-2018-17 | refinedweb | 2,137 | 55.03 |
unsetenv (3) - Linux Man Pages
unsetenv: change or add an environment variable
NAME
setenv - change or add an environment variable
SYNOPSIS
#include <stdlib.h> int setenv(const char *name, const char *value, int overwrite); int unsetenv(const char *name);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
setenv(), unsetenv():
- _POSIX_C_SOURCE >= 200112L
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE
DESCRIPTIONThe VALUEThe '=' character.
- ENOMEM
- Insufficient memory to add a new variable to the environment.
ATTRIBUTESFor an explanation of the terms used in this section, see attributes(7).
CONFORMING TOPOSIX.1-2001, POSIX.1-2008, 4.3BSD.
NOTESPOS.
BUGSPOSIX.1 specifies that if name contains an '=' character, then setenv() should fail with the error EINVAL; however, versions of glibc before 2.3.4 allowed an '=' sign in name.
COLOPHONThis page is part of release 5.05 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://www.systutorials.com/docs/linux/man/3-unsetenv/ | CC-MAIN-2021-17 | refinedweb | 158 | 57.06 |
The JUnit Test Results window shows "No Tests Executed" (in the summary line, where it usually displays the amount of
tests run/failed/with error) after running them.
Underneath the "No Test Executed" leyend, you the name of the test class can be seen with the status "passed" (on the
right side). If I try to open the tree node, it doesn't show what tests were executed (the tree is empty).
The Output window shows all the logging from the tests, but no summary of how many tests were run/failed/caused errors.
There are several things I noticed that might be useful:
- This happens for some test classes of the project. Others seem to run perfectly fine.
- After those classes that make JUnit fail, I noticed a delay from the moment the tests finished until the "No Test
Executed" leyend shows up.
- The classes that make JUnit fail produce a high amount of logging that's also shown in the JUnit output window. The
delay mentioned in the previous point occurs before all the log information is shown in the output window, and then
almost at the same time the leyend "No Tests Executed" followed by the rest of the logs are displayed.
- If I cancel logging setting the level to Level.OFF the problem doesn't show up, but all my tests that verify logging
obviously fail.
Please let me know if I can help you in any way... For now I'm sutting down logging to keep working, but I don't feel
really confortable by doing that.
I tried to reproduce this bug with a simple test producing a million lines and then failing. It worked!
I used the following source code:
public class SomeJUnitTest extends TestCase {
public SomeJUnitTest(String testName) {
super(testName);
}
public void testWithLotsOfLogging() {
for (int i = 0; i < 1000000; i++) {
System.out.println("This is some text - just to produce something " + i);
}
fail("I wanted it to fail.");
}
}
Could you run the above test? I suggest that you start with a lower number than million at first, e.g. with 100000.
How many lines of output does your test in question produce (approximately)?
Did you debug the tests? In this case, it might be the same issue as issue #130206 ("Test display incorrect when
debugging tests").
Can you verify that the JVM running the tests finishes regularly (i.e. is not interrupted and does not crash during
execution)?
If you think the JVM running the tests finishes regularly, could you please run NetBeans with the following extra argument?
-J-Dorg.netbeans.modules.junit.output.JUnitOutputReader.level=300
Either pass it as an argument for "netbeans.exe" (or "nb.exe") when running NetBeans, or make it permanent by adding it
to file "etc/netbeans.conf" in your NetBeans installation directory.
Then (re)start NetBeans and try to reproduce the bug. The JUnitOutputReader class should produce a detailed log to the
NetBeans' log file (<NB-user-dir>\var\log\messages.log). Please attach the log file to this issue or send it directly to
me. Thank you.
Created attachment 62128 [details]
Logs
I think that I found the cause - it is the thousands-separator used in the statistics line of the output:
Tests run: 5, Failures: 5, Errors: 0, Time elapsed: 18,557 sec
The JUnit module has a hard-coded expression for matching float-numbers
FLOAT_NUMBER_REGEX = "[0-9]*(?:\\.[0-9]+)?"
which accepts "18.557" but not "18,557".
Fixed.
Modified files:
junit/src/org/netbeans/modules/junit/output/JUnitOutputReader.java
junit/src/org/netbeans/modules/junit/output/RegexpUtils.java
Changeset Id:
b56ac1907fc7
()
Integrated into 'main-golden', available in NB_Trunk_Production #324 build
Changeset:
User: Marian Petras <[email protected]>
Log: fixed bug #135553 - 'JUnit Shows "No Tests Executed" when tests have been executed' | https://netbeans.org/bugzilla/show_bug.cgi?id=135553 | CC-MAIN-2016-07 | refinedweb | 629 | 64.51 |
- shield) has several new hardware features, that allow maximum customization and provide many configurations.
We begin with the supply circuit a simple LM7805. To work, it is necessary to provide an input voltage between 7.5V and 12V. As shown in the circuit diagram, the input voltage, after being stabilized at 5 V, is reduced to 4.3 V by using a diode and provide power to modules that need a voltage between the 3.2 and 4.8 V. During the operations such as the use of GPRS, the module absorbs a current of about 1 A, therefore it is necessary that the power source is able to provide this current intensity.
An important technical feature is the serial adapter for the communication between the GSM module and Arduino. To reduce the tension has been used a simple voltage divider, while for raising the voltage from the GSM module to Arduino we chose a MOSFET BS170.
The news that is immediately evident is the presence of two jacks for audio. With a microphone and a headset with a 3.5 mm jack (just the standard headphones for computers), you can make a voice call !!
To preserve compatibility with the Arduino Mega, we changed the selection method for the serial communication. The two different serial communication modes (hardware or software) are selectable by jumper, leaving the user the choice between the two configurations ( for serial software in this new version we adopted pins 2 and 3) or possibly use the pin to your choice with a simple wire connection. With this solution you can use the Arduino Mega using two of the four serial that it has, or possibly carry out the communication through a serial software via two pins of your choice.
Always to preserve maximum flexibility and customization, there are some pins on the back of PCB, which allow to make the connections from the Arduino digital ports and the control signals data flow (CTS, RTS) or alerts for incoming calls or unread SMS (RI). In this new version, you can then disable these connections to save inputs or outputs pins.
Comparing the new card with the previous one, you can see the presence of two connectors on the top.These additional connections allow the use of the shield also with the new small breakout for SIM900 and SIM908. The new module Simcom SIM908, is characterized by the presence of a GPS with 42 channels.
The scenery offered by this new module SIMCOM, in addition to GSM GPRS shield, it is quite remarkable: the creation of a GPS tracking device that can communicate the location via the Internet (or SMS) is now available to everyone, avoiding all the problems due to assembly and low-level programming.
A further feature of this new version, concerns the presence of a supercap for circuit dedicated to the RTC (Real Time Clock). Inside the SIM900, as well as the SIM908, there is a circuit that is responsible for updating the clock even without power.
GSM GPS SHIELD SCHEMATICS
[CODE]
R1: 10 kohm
R2: 10 kohm
R3: 10 kohm
R4: 10 kohm
C1: 100 nF
C2: 470 µF 25 VL
C3: 100 nF
C4: 220 µF 16 VL
C5: 47 pF
C6: 47 pF
C7: 47 pF
C8: 47 pF
C9: 47 pF
C10: 47 pF
C11: 220 µF 16 VL
C12: 100 nF
CRCT: 0,1F
U1: 7805
T1: BS170
D1: 1N4007
P1: Microswitch
MIC: jack 3,5 mm
SPK: jack 3,5 mm
[/CODE]
SOFTWARE INNOVATIONS
The software library related to the GSM GPRS shield has been updated. The library is open-source and uses the hosting service Google Project, located at . The library is constantly updated and improved with the addition of new features, so please check that you always have the latest release.
The main enhancement is the TPC/IP communication support through GPRS.
With a simple function, you can connect Arduino to internet using the APN (Access Point Name) you choose. After that we will automatically get an IP address by the provider.
To establish communication you must define which device performs the function of the server (still waiting for some connection), such as that client (requires a connection to the server according to the information you want to achieve) and that leads to exchange data .
In the library there are two functions that allow us to set the device to listen on a particular port for connections (server), or to establish a connection based on the server address and port chosen (client) .
Once connected, you can send the data, which can be command strings or just the data you want to monitor, for this action there is a high-level function, which simplifies the management.
LIBRARY FUNCTIONS GSM GPRS
First, you must have the folder libraries, in the root directory of the Arduino, the folder GSM_GPRS containing all the functions you can use.
Now if you want to change the serial port, through the jumper, you have to modify the file GSM.cpp.
To save memory, we decided to divide the functions into different classes contained in different files, to allow you to include or not the code parts needed, thus going to save memory RAM, leaving it free for the rest of the program. For the basic operation is always necessary to include files SIM900.h and SoftwareSerial.h, while depending on the needs you may include call.h (for call handling), sms.h (for sending, receiving and saving SMS) and inetGSM.h (containing functions related to HTTP, and GPRS).
SIM900.h
You should always include this file. It contains the basic functions for starting and configuring the GSM module. Simply call the functions using “GSM.” As a prefix.
call.h
In case you want to make a call, or simply refuse to answer an incoming call, you must use this class. To use these functions simply instantiate the object in the sketch. The functions listed in the table below refers to an object created with the following command at the beginning of the sketch: CallGSM call;
SMS.h
For managing text messages must use this special class. As before, it is necessary to recall it within the sketch and then instantiate an object. For example, in the following functions refers to an object created at the beginning of the sketch, with the command SMSGSM sms;
inetGSM.h
In this class are included functions to connect and manage communications via HTTP protocol. In the following examples was an object created with the command InetGSM inet;
EXAMPLE FOR CALLS AND SMS WITH THE GSM GPRS SHIELD
Let us now, step by step, our first sketch to use the shield using the Arduino IDE version 1.00. We will write a program that when it receives a call from a preset number (stored in a specific location on the SIM), rejects the call and sends an SMS in response to the caller with the value read from an input.
First you have to extract the files from the compressed folder within the Library folder libraries contained within the installation folder of Arduino.
To first load the libraries using the following commands
#include “SIM900.h”
#include <SoftwareSerial.h>
Then load, uncomment properly, the files related to classes containing functions that we want to use for the management of phone calls and SMS.
#include “sms.h”
#include “call.h”
We will perform the initialization procedure in the setup. Set the pin to read the value which will then be sent via SMS, configure the serial communication and initialize the module with the function gsm.begin, and set the baud rate (usually for proper communication of data through GPRS is advisable not rise above 4800 baud).
At this point we enter the heart of the program, which will periodically check the status of incoming calls. To do this within the cycle loop will use the function call.CallStatusWithAuth saving the byte returned. In the case of incoming or in progress call, the sender (or recipient) number is stored in the string number.
Compared with the value stored CALL_INCOM_VOICE_AUTH, which describes an incoming call by a number in that set, we reject the call using the GSM.Hangup and after waiting 2 seconds, read the input value and send the message.The value read is an integer and must be first converted into a string using the function itoa.
Let us remember to insert a delay, inside the loop function, to ensure that the module is interrogated at intervals of not less than a second. Commands sent in rapid sequence could corrupt the stability of the module.
If we do not receive the result of proper initialization, you will need to check the power supply. Remember that it is recommended to use an external power source because the only power supplied by the USB port is not enought.
If the power is found to be correct, you should check that the file GSM.cpp, in the library are declared properly pin for the serial. Basically the new version uses pins 2 and 3, while the old version used pins 4 and 5.
#define _GSM_TXPIN_ 2
#define _GSM_RXPIN_ 3
The full program is as follows:
#include "SIM900.h" #include <SoftwareSerial.h> //carichiamo i file necessari allo sketch #include "sms.h" #include "call.h" CallGSM call; SMSGSM sms; char number[20]; byte stat=0; int value=0; int pin=1; char value_str[5]; void setup() { pinMode(pin,INPUT); Serial.begin(9600); Serial.println("GSM GPRS Shield"); //init the module if (gsm.begin(2400)) Serial.println("\nstatus=READY"); else Serial.println("\nstatus=IDLE"); }; void loop() { stat=call.CallStatusWithAuth(number,1,3); if(stat==CALL_INCOM_VOICE_AUTH){ call.HangUp(); delay(2000); value=digitalRead(1); itoa(value,value_str,10); sms.SendSMS(number,value_str); } delay(1000); };
EXAMPLE FOR INTERNET
We analyze one of the examples contained within the library to connect Arduino to the internet with GPRS connection.
We will make a program capable of receiving HTML content from a web page and save the first 50 characters.
Because we use only the functions relating to the Internet and HTTP, we load in addition to the standard library file, the file inetGSM.h
Instantiate an object management functions
InetGSM inet;
and as before we execute the initialization routine. Then we establish a GPRS connection. In this step you need to run the command “AT+CIFSR” that requires the provider the IP address assigned to the GSM module. This step is important. Some providers garantee the connection only if previously it’s made this request. Through the function gsm.WhileSimpleRead contained in the GSM class, we read the entire contents of the buffer. Once emptied the buffer, the sketch will go to the following functions.
At this point we are connected, we have to establish a TCP connection with the server, send a GET request to a web page and store the contents of the response in a previously declared array. All this is done by the function HttpGet in class inetGSM. In addition to the server and port (80 in the case of HTTP protocol), we have to indicate the path which contains the requested page.For example if you want to download the Wikipedia page on the Arduino to be reached at the following address it.wikipedia.org/wiki/Arduino_(hardware), the path will be /wiki/Arduino_ (hardware) while the server is it.wikipedia.org.
numdata=inet.httpGET(“it.wikipedia.org “, 80, “/wiki/Arduino_(hardware) “, msg, 50);
Obviously if we wish to save a greater number of characters of the answer, it is sufficient to initialize a string of larger dimensions, taking care not to saturate the RAM is made available by Arduino, otherwise we risk getting abnormal behavior, such as stalls or restarts.
#include "SIM900.h" #include <SoftwareSerial.h> #include "inetGSM.h" InetGSM inet; char msg[50];); gsm.SimpleWriteln("AT+CIFSR"); delay(5000); gsm.WhileSimpleRead(); numdata=inet.httpGET("", 80, "/", msg, 50); Serial.println("\nNumero di byte ricevuti:"); Serial.println(numdata); Serial.println("\nData recived:"); Serial.println(msg); } }; void loop() { };
The shield has various connectors to accept more GSM/GPRS modules manufactured by SIMCOM and mounted on breakout board. In addition to the popular SIM900, our new shield for Arduino supports the recent SIM908, which is an evolution and aims to capture the market of GSM/GPRS quad-band providing a variety of additional features that make it unique, especially in the field of low-cost products. The SIM908 implements a GPS with 42 channels, characterized by an excellent accuracy and by a very reduced time required to perform the first fix (1 second in mode “hot start” and 30 seconds with the mode “cold start”).
This module can be used powered by a lithium battery, and can charge it, greatly simplifying this process that would require dedicated hardware.
The SIM908 has two serial, used one for the GSM and the other for the GPS. More exactly, the first serial interface is provided with a UART which belongs to the lines TXD, RXD, DTR, which go outside through the contacts, respectively, 12, 14, 10 of connector; for the GPS, instead, the serial is GPSTXD (contact 4) and GPSRXD (pin 5). The first serial port is actually intended for total control of SIM908, then it can also configure the GPS receiver and ask him to provide data on the location, the number of satellites hooked, etc. From the second serial port (GPSTXD / GPSRXD) instead, go out continuously strings in standard NMEA GPS system.
THE GSM SHIELD LIBRARY
Providing also use the SIM908, the library for the management of this module has been modified to provide a quick access to all the new features made available, the new library is derived from that used for the SIM900 module, and is available on the Internet at .
Note that you can use the new library for managing mobile SIM900, provided you do not call functions dedicated to SIM908. While it is completely compatible using the sketch for the version without GPS with this new one.
Let’s consider some new features introduced: first of all has been added the function ForceON(); used to check the status of the module and to force the power on. The SIM908 supports the charge of lithium batteries, the module can be started to perform the charger without the management of the GSM network. If we want to avoid this mode and make sure it’s really turned on then you need to call the function mentioned above.
gsm.forceON();
Intended for the use of GPS (and battery), we made a class which you can instantiate an object with GPSGSM gps, after including its # include files “gps.h“, in order to invoke their functions by prefixing “GSM.” to the desired function.
This subdivision into different files is designed to minimize RAM usage: in fact, for example, all the variables used by the class on the GPS will not be allocated in memory will not be included if the relevant files using #include “gps.h”.This allows you to choose which variables to use.
As already mentioned, also for the management of the battery there ara functions which enable the measurement of the voltage and battery temperature; for practical reasons, occupying little memory, these have been included in the class of GPS. For use them, after including the file #include “gps.h” you must instantiate the object related with GPSGSM gps. In the next sections will show the control functions of the GPS and battery.
HOW TO USE THE SIM908 GPS
Before using GPS, you need to make a small set-up: first let’s make a bridge on jumper J1 on the SIM908 Breakout (cod. FT971).
The bridge on J1 enables power to the GPS antenna.
This serves to bring power to the active GPS antenna. Next, load the sketch example (in the examples directory) called GSM_GPRS_GPS_Library_AT (or even GSM_GPRSLibrary_AT) and once launched and completed initialization send the following commands:
AT
AT+CGPSPWR=1
AT+CGSPRST=0
We wait a minute, at which point the GPS should be working, to verify
continue sending the command:
AT+CGPSINF=0
If you can see the coordinates, it means that everything is working and we can proceed with the standard use by the implemented functions.
Now we proceed with a simple example that allows us to understand how to get the coordinates from the GPS module SIM908 mounted on the shield, the firmware is described here:
#include "SIM900.h" #include <SoftwareSerial.h> #include "gps.h" GPSGSM gps; char lon[10]; char lat[10]; char alt[10]; char time[15]; char vel[10]; char stat; boolean started=false; void setup() { //Serial connection. Serial.begin(9600); Serial.println("GSM GPRS GPS Shield"); if (gsm.begin(2400)){ Serial.println("\nstatus=READY"); gsm.forceON(); started=true; } else Serial.println("\nstatus=IDLE"); if(started){ if (gps.attachGPS()) Serial.println("status=GPSON"); else Serial.println("status=ERROR"); delay(20000); stat=gps.getStat(); if(stat==1) Serial.println("NOT FIXED"); else if(stat==0) Serial.println("GPS OFF"); else if(stat==2) Serial.println("2D FIXED"); else if(stat==3) Serial.println("3D FIXED"); delay(5000); gps.getPar(lon,lat,alt,time,vel); Serial.println(lon); Serial.println(lat); Serial.println(alt); Serial.println(time); Serial.println(vel); } }; void loop() { };
THE BATTERY
In order to use the lithium battery as the power source for our FT971 module that houses the SIM908 (note: the SIM900 is not able to manage the barrery charge) is sufficient to close the bridge on this shield called with CHRG and set on VEXT the second bridge near the battery connector.
Through the two library functions is possible to obtain the percentage of remaining charge, the battery voltage and the voltage read by the temperature sensor. In the case of applications poorly ventilated, with prolonged periods of work and in climatic conditions not exactly optimal, it is advisable to monitor this value to make sure that the battery works within the limits for a correct operation. The temperature can be calculated according to the relationship voltage/temperature sensor.
It is also possible to set the module so that automatically determine if the battery is working outside the permissible range, with consequent shutdown of the same.
To activate this mode, you need to send the command:
AT+CMTE=1
To disable it you have to send the command:
AT+CMTE=0
While to know which mode is configured must issue the command:
AT+CMTE?
To know the exact syntax of the functions and their return refer to Table:
Also in this case we see how to implement these functions in a sketch, referring to this sketch, which contains the corresponding code.
#include "SIM900.h" #include <SoftwareSerial.h> #include "inetGSM.h" #include "gps.h" GPSGSM gps; char perc[5]; char volt[6]; char tvolt[6]; long prevmillis=millis(); int interval=10000; void setup() { Serial.begin(9600); Serial.println("GSM GPRS GPS Shield."); if (gsm.begin(4800)){ Serial.println("\nstatus=READY"); gsm.forceON(); } else Serial.println("\nstatus=IDLE"); }; void loop() { if(millis()-prevmillis>interval){ gps.getBattInf(perc,volt); gps.getBattTVol(tvolt); Serial.print("Battery charge: "); Serial.print(perc); Serial.println("%"); Serial.print("Battery voltage: "); Serial.print(volt); Serial.println(" mV"); Serial.print("Temperature sensor voltage: "); Serial.print(tvolt); Serial.println(" mV"); Serial.println(""); prevmillis=millis(); } }
DEBUG MODE GSM & GPS SHIELD
During the use of the shield, sometimes you fail to get the desired results without understanding why, for help, libraries can be configured to provide output some debug messages during the execution of functions called. Inside the file GSM.h there is the following line:
//#define DEBUG_ON
Uncomment it, you are going to enable this mode, commenting, no diagnostic message will be shown on the serial output.
HOW TO USE THE GSM & GPS SHIELD WITH ARDUINO MEGA
For problems with the RAM, or simply for projects that require a larger number of input/output, we can use with the GSM/GPRS & GPS shield the Arduino Mega. Thanks to four serial port, we can use one of these instead of the serial software to communicate with the shield.
With the latest release, the library can be used completely with Arduino Mega. You must open the file GSM.h and select the tab used appropriately commenting lines of code.
Using the shield with Arduino Mega we comment as follows:
//#define UNO
#define MEGA
If we want to use Arduino UNO:
#define UNO
//#define MEGA
Similarly, also the file HWSerial.h, must be configured. As before, we see the first example for Arduino Mega:
#define MEGA
Using the file HWSerial.h is not necessary to define the possible use with Arduino Uno, as implemented in the class it is only used by the hardware serial.
The library uses the serial Serial0 (TX 1 RX 0) for external communication and the serial Serial1 (RX 18 TX 19) to communicate with SIM900 and SIM908. Nothing prevents you replace every occurrence of Serial1 with Serial2 or one that you prefer.
Please note that to use the serial hardware you need to create the connection between the shield and Arduino Mega using a bridge: the TX pin of the shield must be connected to TX1 18 and the RX pin with the RX1 19.
THE STORE
Pingback: Dan (freelancerace) | Pearltrees()
Pingback: Jak zbudować telefon komórkowy ?()
Pingback: How To Stop Rental Equipment Theft | SpyGearCo: Spy and Surveillance()
Pingback: اصنع هاتفك الجوال بنفسك باستخدام اردوينو | آردوينو ببساطة Simply Arduino()
Pingback: Using the GSM/GPRS & GPS Shield: call examples | Open Electronics()
Pingback: Arduino GSM shield | Open Electronics()
Pingback: m2m | Pearltrees()
Pingback: GSM GPS shield for Arduino | How 2.0, Hobbies &...()
Pingback: GPS Tracking | Pearltrees()
Pingback: GSM GPS shield for Arduino | Arduino, Android, ...()
Pingback: TiDiGino: remote control based on Arduino increases the performance | Open Electronics()
Pingback: Fix Arduino Error Opening Serial Port Windows XP, Vista, 7, 8 [Solved]()
Pingback: Gsm Gps | Garmin Approach() | http://www.open-electronics.org/gsm-gps-shield-for-arduino/ | CC-MAIN-2016-30 | refinedweb | 3,635 | 53.41 |
This MEP attempts to improve the way in which third-party dependencies in matplotlib are handled.
#1157: Use automatic dependency resolution
#1290: Debundle pyparsing
#1261: Update six to 1.2
One of the goals of matplotlib has been to keep it as easy to install as possible. To that end, some third-party dependencies are included in the source tree and, under certain circumstances, installed alongside matplotlib. This MEP aims to resolve some problems with that approach, bring some consistency, while continuing to make installation convenient.
At the time that was initially done, setuptools, easy_install and PyPI were not mature enough to be relied on. However, at present, we should be able to safely leverage the "modern" versions of those tools, distribute and pip.
While matplotlib has dependencies on both Python libraries and C/C++ libraries, this MEP addresses only the Python libraries so as to not confuse the issue. C libraries represent a larger and mostly orthogonal set of problems.
matplotlib depends on the following third-party Python libraries:
- Numpy
- dateutil (pure Python)
- pytz (pure Python)
- six -- required by dateutil (pure Python)
- pyparsing (pure Python)
- PIL (optional)
- GUI frameworks: pygtk, gobject, tkinter, PySide, PyQt4, wx (all optional, but one is required for an interactive GUI)
When installing from source (from a git checkout or via pip):

- setup.py attempts to import numpy. If this fails, the installation fails.
- For each of dateutil, pytz and six, setup.py attempts to import them (from the top-level namespace). If that fails, matplotlib installs its local copy of the library into the top-level namespace.
- pyparsing is always installed inside of the matplotlib namespace.
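The import-check-and-fallback logic described above can be sketched as follows. This is a hypothetical simplification -- the function name and the final loop are illustrative, not the actual setup.py code:

```python
# Hypothetical simplification of the check matplotlib's setup.py performs
# for each bundled pure-Python dependency; the function name is illustrative.
import importlib

def needs_local_copy(module_name):
    """Return True if the module cannot be imported from the top level,
    i.e. the bundled copy would be installed alongside matplotlib."""
    try:
        importlib.import_module(module_name)
        return False   # a system-wide copy exists; leave it untouched
    except ImportError:
        return True    # fall back to installing the bundled copy

# setup.py runs this check for each of the bundled dependencies:
bundled = [dep for dep in ("dateutil", "pytz", "six") if needs_local_copy(dep)]
```

Note that this check only answers "is it importable?"; it performs no version comparison and no dependency resolution, which is the behavior this MEP aims to replace.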
This behavior is most surprising when used with pip, because no pip dependency resolution is performed, even though it is likely to work for all of these packages.

The fact that pyparsing is installed in the matplotlib namespace has reportedly (#1290) confused some users into thinking it is a matplotlib-related module and importing it from there rather than from the top level.
When installing using the Windows installer, dateutil, pytz and six are always installed at the top level, potentially overwriting already-installed copies of those libraries.
TODO: Describe behavior with the OS-X installer.
When installing using a package manager (Debian, RedHat, MacPorts, etc.), this behavior actually does the right thing, and there are no special patches in the matplotlib packages to deal with the fact that we handle dateutil, pytz and six in this way. However, care should be taken that whatever approach we move to continues to work in that context.
Maintaining these packages in the matplotlib tree and making sure they are up-to-date is a maintenance burden. Advanced new features that may require a third-party pure Python library have a higher barrier to inclusion because of this burden.
Third-party dependencies are downloaded and installed from their
canonical locations by leveraging
pip,
distribute and
PyPI.
dateutil,
pytz, and
pyparsing should be made into optional
dependencies -- though obviously some features would fail if they
aren't installed. This will allow the user to decide whether they
want to bother installing a particular feature.
For installing from source, and assuming the user has all of the
C-level compilers and dependencies, this can be accomplished fairly
easily using
distribute and following the instructions here. The only anticipated
change to the matplotlib library code will be to import
pyparsing
from the top-level namespace rather than from within matplotlib. Note
that
distribute will also allow us to remove the direct dependency
on
six, since it is, strictly speaking, only a direct dependency of
dateutil.
For binary installations, there are a number of alternatives (here ordered from best/hardest to worst/easiest):
- The distutils wininst installer allows a post-install script to run. It might be possible to get this script to run
pipto install the other dependencies. (See this thread for someone who has trod that ground before).
- Continue to ship
dateutil,
pytz,
sixand
pyparsingin our installer, but use the post-install-script to install them only if they can not already be found.
- Move all of these packages inside a (new)
matplotlib.externnamespace so it is clear for outside users that these are external packages. Add some conditional imports in the core matplotlib codebase so
dateutil(at the top-level) is tried first, and failing that
matplotlib.extern.dateutilis used.
2 and 3 are undesirable as they still require maintaining copies of these packages in our tree -- and this is exacerbated by the fact that they are used less -- only in the binary installers. None of these 3 approaches address Numpy, which will still have to be manually installed using an installer.
TODO: How does this relate to the Mac OS-X installer?
At present, matplotlib can be installed from source on a machine without the third party dependencies and without an internet connection. After this change, an internet connection (and a working PyPI) will be required to install matplotlib for the first time. (Subsequent matplotlib updates or development work will run without accessing the network).
Distributing binary
eggs doesn't feel like a usable solution. That
requires getting
easy_install installed first, and Windows users
generally prefer the well known
exe or
msi installer that works
out of the box. | https://matplotlib.org/3.1.1/devel/MEP/MEP11.html | CC-MAIN-2020-10 | refinedweb | 871 | 53 |
Tk_Name, Tk_PathName, Tk_NameToWindow - convert between names and win-
dow tokens
#include <tk.h>
Tk_Uid
Tk_Name(tkwin)
char *
Tk_PathName(tkwin)
Tk_Window
Tk_NameToWindow(interp, pathName, tkwin)
Tk_Window tkwin (in) Token for window.
Tcl_Interp *interp (out) Interpreter to use for error report-
ing.
CONST char *pathName (in) Character string containing path
name of window.
_________________________________________________________________ cre-
ated. applica-
tion has the path name ``.''; its children have names like ``.a'' and
``.b''; their children have names like ``.a.aa'' and ``.b.bb''; and so
on. A window is considered to be be a child of another window for nam-
ing purposes if the second window was named as the first window's par-
ent
Tk Tk_Name(3) | http://www.syzdek.net/~syzdek/docs/man/.shtml/man3/Name.3.html | crawl-003 | refinedweb | 113 | 69.28 |
No longer able to select / replace tabs, spaces, underscore
Since the recent update we can no longer select / replace tabs, spaces, underscore, new lines etc…
Seems like search & replace works on letters and numbers only :(
Any thoughts?
- Claudia Frank
I checked 7.2 and it is working for me.
I assume we need more info what you are exactly doing.
?->DebugInfo might be helpful as well.
Cheers
Claudia
Search and replace.
Also replacing say a carriage return (so two lines) to something else.
All used to work fine, but not now.
Thanks
- Claudia Frank
sorry, but that is not exactly the answer I expected.
I thought of something like
I search for : my text with \r\n
or
I select the whole line using shift+down
birng up dialog using ctrl+f
make sure regex is selected (or any other) …
And what about the debuginfo output?
It might be that a plugin is the culprit.
Cheers
Claudia
Thanks.
We are taking data for example
- <tab> <tab> data
selecting that to change to
1.<space> data
And nothing happens as it no longer seems to recognise or select tabs.
Same is if we do
data
data
data
and change to
- data<br />
- data <br />
- data<br />
previously done by selecting the carriage return and blank line and replacing that.
No plugins used - this is simply the native product.
Can’t say until now, was even aware of plugins or where to find them :)
- Scott Sumner
I must say that if I try (with N++ 7.1) making a selection out of a line-ending on a non-empty line plus the following empty line (thus I think I should have two line-endings in the selection), then I invoke the Find dialog window, my “Find what” box seems to contain…nothing…but wait, no…the caret in the Find-what box is continually/slowly changing colors!? I have never noticed this behavior before. If I then arrow right or left, the caret doesn’t move (seeming to indicate that the Find-what box does indeed contains exactly nothing) but the caret is now its usual black color.
I presume that someone can tell me what is going on with this?
Hello, Ian,
When you open your Replace dialog, I bet that your option Match whole word only is checked ! Isn’t it ?
Just unckeck that option and everything should be fine, again :-))
Of course, this option is only useful when you’re searching some word characters, surrounding by non-word characters, as Space, Tabulation, or EOL characters !
When dealing with other characters than letters and digits, it rarely matches :-((
Best Regards,
guy038
- Scott Sumner
Okay, so I’ll reply to myself since no one else has. However, if I like my own explanation, I can’t +1 it ! :(
My experimentation has shown me that the “rolling-color-cursor” can represent either NO INPUT in the “Find what” box, or input that consists only of line-endings. I’m rather shocked because I’ve never noticed this behavior before. I thought someone could elaborate on its “deeper meaning” or history, if there is any. But I guess not…
+1
:-) | https://notepad-plus-plus.org/community/topic/12721/no-longer-able-to-select-replace-tabs-spaces-underscore | CC-MAIN-2017-26 | refinedweb | 527 | 70.94 |
If you study the formgen generated code, you can see that most of the gain you get by using it
is in generating the code for adding child controls to parent containers. You can use computation expressions in a
clever way to get the same benefit, say by overriding the let! construct in your own UI control builder objects. So it would look something like:
let form =
formM()
{ let! panel1 = new Panel()
let! panel2 = new Panel()
return () }
let! pat = exp in cexp b.Bind(exp,(fun pat -> «cexp»))
let pat = exp in cexp b.Let(exp,(fun pat -> «cexp»))
use pat = exp in cexp b.Using(exp,(fun pat -> «cexp»))
use! pat = cexp in cexp b.Bind(exp,(fun x->b.Using(x,(fun pat-> «cexp»))))
do! exp in cexp b.Bind(exp,(fun () -> «cexp»))
do exp in cexp b.Let(exp,(fun () -> «cexp»))
for pat in exp do cexp b.For(exp,(fun pat -> «cexp»))
while exp do cexp b.While((fun ()-> exp),b.Delay(fun ()-> «cexp»))
if exp then cexp1 else cexp2 if exp then «cexp1» else «cexp2»
if exp then cexp if exp then «cexp» else b.Zero()
cexp1; cexp2 b.Combine(«cexp1», b.Delay(fun () -> «cexp2»))
return exp b.Return(exp)
return! exp exp
open System.Windows.Forms
open System.Drawing
[<AbstractClass>]
type 'a IComposableControl when 'a :> Control () =
abstract Self : 'a
member self.Return (e: unit) = self.Self
member self.Zero () = self.Self
[<OverloadID("1")>]
member self.Bind(e : Control, body) : 'a =
e |> self.Self.Controls.Add
body e
[<OverloadID("2")>]
member self.Bind(e: 'b IComposableControl when 'b :> Control, body) : 'a =
e.Self |> self.Self.Controls.Add
body e
Your interface also contains two overloaded Bind members that correspond to let! constructs, each taking
a direct Control descendant or another IComposableControl builder object, and adding them to the Controls collection of the managed UI control.
Armed with this interface you can go ahead and implement various builders for Control descendants. Below you can find three builder classes that build forms, panels and rich text boxes.
type formM() =
inherit IComposableControl()
let c = new Form()
override self.Self = c
type panelM() =
inherit IComposableControl()
let c = new Panel()
override self.Self = c
type formM() =
inherit IComposableControl()
let c = new Form()
override self.Self = c
()
let c = new Panel()
override self.Self = c
type formM() =
inherit IComposableControl()
let c = new Form()
override self.Self = c
At this point you can define your Notepad UI as shown in Listing 7.
In the second approach, you developed various computation builders that manage UI controls and used these builders in nested computation expressions to describe the UI of your application in a succinct and concise way without having to add any code for managing the parent-child relationship between the participating controls. The added benefit of this approach over the first one is that you get instant feedback on your UI's correctness from the compiler, making it an excellent alternative to building UIs in the absence of a Visual Studio form designer that can output F# code, while keeping the entire application code in F#.
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/enterprise/Article/40481/0/page/2 | CC-MAIN-2016-18 | refinedweb | 544 | 60.41 |
import "nsIContentViewer.idl"; (*):
(*) unless the document is currently being printed, in which case it will never be saved in session history.
Attach the content viewer to its DOM window and docshell.
Checks if the document wants to prevent unloading by firing beforeunload on the document, and if it does, prompts the user.
The result is returned.
Works in tandem with permitUnload, if the caller decides not to close the window it indicated it will, it is the caller's responsibility to reset that with this method.
this method is only meant to be called on documents for which the caller has indicated that it will close the window. If that is not the case the behavior of this method is undefined.
Get the history entry that this viewer will save itself into when destroyed.
Can return null
The previous content viewer, which has been |close|d but not |destroy|ed. | http://doxygen.db48x.net/comm-central/html/interfacensIContentViewer.html | CC-MAIN-2019-09 | refinedweb | 150 | 63.49 |
This article is about Stent - a Redux-like library that creates and manages state machines. Stent implements some of Redux's core ideas and in fact looks a lot like it. At the end of this post we will see that both libraries have a lot in common. Stent just uses state machines under the hood and eliminates some of the boilerplate that comes with Redux's workflow.
If you wonder what a state machine is and why it makes UI development easier, check out the "You are managing state? Think twice." article. I strongly recommend reading it so you get a better context.
The source code of the examples in this post is available here.
- The idea
- Teaching by example
- The authorization service
- The dummy React components
- Redux implementation
- Implementing with Stent
- Final words
The idea
In medicine, a stent is a metal or plastic tube inserted into the lumen of an anatomic vessel or duct to keep the passageway open*. Or in other words it is a tool that restores blood flow through narrow or blocked arteries. I made the parallel with an application that I worked on. The application's state there had many dependencies so I was basically stuck in a logical loop, and that little library helped me solve the problem. It kind of freed my mind and made me define clear states and simple transitions. At the same time Stent is not a lot different than Redux because:
- There are still actions that drive the application. Firing an action means transitioning the machine from one state to another.
- The transitions are defined similarly to Redux’s reducers. They accept current state and action and return the new state.
- The async processes are handled using the saga pattern. Same as in redux-saga project.
- The wiring of React components happens via a similar helper - a connect function.
My main goal was to create a library that controls state machines but has an API similar to Redux.
Teaching by example
To show the difference I decided to use an example close to the one from “You are managing state? Think twice.” article and create it first with Redux then with Stent. It is about an user widget and its various states.
- State A is the default one. That is the first thing that we should render.
- Screen B appears when the user clicks the “Submit” button
- If everything is ok and the credentials are correct we display C, where the user may click "Log out", which flushes all the user's data and displays A again.
- Screen D is rendered when the user submits an empty form (or when using wrong credentials)
- And E is an edge case where we send some data but there is no connection to the server. In this case we just give an option to re-send the form.
For the purpose of this post let’s write a simulation of a back-end service first. We call a method that returns a promise. A second later we resolve the promise like it was really making a HTTP request.
The authorization service
```js
// services/errors.js
export const CONNECTION_ERROR = 'CONNECTION_ERROR';
export const VALIDATION_ERROR = 'VALIDATION_ERROR';
```

```js
// services/Auth.js
import { CONNECTION_ERROR, VALIDATION_ERROR } from './errors';

const TIMEOUT = 1000;
const USER = { name: 'Jon Snow' };
const Auth = {
  login({ username, password }) {
    return new Promise(
      (resolve, reject) => setTimeout(function () {
        if (username === '' || password === '') {
          return reject(new Error(VALIDATION_ERROR));
        } else if (username === 'z' && password === 'z') {
          return reject(new Error(CONNECTION_ERROR));
        }
        resolve(USER);
      }, TIMEOUT)
    );
  }
}

export default Auth;
```
The `Auth` module has one method - `login` - that accepts `username` and `password`. There are three possible results:
- If the user submits the form with no username and password we reject the promise with `VALIDATION_ERROR`
- If the user types `z` for both username and password we reject the promise with `CONNECTION_ERROR`
- In every other case where the user fills the fields with some data we resolve the promise with dummy data (the `USER` constant)
Notice that the resolving/rejecting is wrapped in a `setTimeout` call so we get the feeling of a real HTTP request.
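The branching above is easy to get wrong, so here is the same decision logic distilled into a plain synchronous function that can be checked without waiting on timers. This is a sketch - `classify` and the `SUCCESS` constant are not part of the article's code:

```javascript
// The three outcomes of the fake back-end, as plain constants.
const CONNECTION_ERROR = 'CONNECTION_ERROR';
const VALIDATION_ERROR = 'VALIDATION_ERROR';
const SUCCESS = 'SUCCESS';

// Pure version of Auth.login's branching: no promise, no setTimeout,
// just the decision the service makes about the credentials.
function classify({ username, password }) {
  if (username === '' || password === '') return VALIDATION_ERROR;
  if (username === 'z' && password === 'z') return CONNECTION_ERROR;
  return SUCCESS;
}

console.log(classify({ username: '', password: '' }));        // VALIDATION_ERROR
console.log(classify({ username: 'z', password: 'z' }));      // CONNECTION_ERROR
console.log(classify({ username: 'jon', password: 'snow' })); // SUCCESS
```

Keeping the decision pure makes the asynchronous wrapper trivial to reason about - the promise only adds timing, not logic.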
The dummy React components
How the application looks really doesn't matter. Here is a list of components that are the same for both implementations. They are presentational components which simply render stuff and fire callbacks. You may easily skip this section and jump directly to the Redux implementation. It's here just for a reference.
A component for rendering links.
I didn't want to write `event.preventDefault()` everywhere so here is a `Link` component:
```js
// components/Link.jsx
export default class Link extends React.Component {
  _handleClick(event) {
    event.preventDefault();
    this.props.onClick();
  }
  render() {
    return (
      <a href="#" onClick={ event => this._handleClick(event) }>
        { this.props.children }
      </a>
    );
  }
}
```
The `Error` component renders the screen with the "Try again" option - the one covering the connection problem.
```js
// components/Error.jsx
const Error = ({ message, tryAgain }) => (
  <div className='tac'>
    <p className='error'>{ message }</p>
    <Link onClick={ tryAgain }>Try again</Link>
  </div>
);

export default Error;
```
The `LoginForm` component is responsible for the first look of our widget.
```js
// components/LoginForm.jsx
export default class LoginForm extends React.Component {
  _submit(event) {
    event.preventDefault();
    this.props.submit({
      username: this.refs.username.value,
      password: this.refs.password.value
    })
  }
  render() {
    return (
      <form>
        <input type='text' ref='username' placeholder='Username' />
        <input type='password' ref='password' placeholder='Password' />
        <button onClick={ event => this._submit(event) }>Submit</button>
      </form>
    );
  }
}
```
A component that shows the welcome message and links after successful log in.
```js
// components/Profile.jsx
export default function Profile({ name, viewProfile, logout }) {
  return (
    <div>
      Welcome, { name }
      <hr />
      <Link onClick={ viewProfile }>Profile</Link><br />
      <Link onClick={ logout }>Log out</Link><br />
    </div>
  );
}
```
And we have one more component - `components/Widget.jsx` - but it is different for both examples so it will be discussed later.
Redux implementation
When I start working on a Redux application I always think about actions first. That is because they drive state changes and eventually re-rendering of the UI.
Actions
In our example we have a request to a back-end which is followed by either success or failure screens. If it fails we have a try-again process. We may also log out. So let’s start by defining a couple of constants.
```js
// redux/constants.js
export const LOGIN = 'LOGIN';
export const LOGIN_SUCCESSFUL = 'LOGIN_SUCCESSFUL';
export const LOGOUT = 'LOGOUT';
export const LOGIN_FAILED = 'LOGIN_FAILED';
export const TRY_AGAIN = 'TRY_AGAIN';
```
And the action creators that use them:
```js
// redux/actions.js
export const login = credentials => ({ type: LOGIN, payload: credentials });
export const loginSuccessful = userData => ({ type: LOGIN_SUCCESSFUL, payload: userData });
export const logout = data => ({ type: LOGOUT });
export const loginFailed = error => ({ type: LOGIN_FAILED, payload: error });
export const tryAgain = () => ({ type: TRY_AGAIN });
```
Reducer
The next step is to handle the actions above. Or in other words to write the reducer. The function that accepts the current state and action and returns the new state.
```js
// redux/reducers.js
export const Reducer = (state = initialState, { type, payload }) => {
  switch(type) {
    ...
  }
}
```
And here it becomes interesting because we should start thinking about state management. What state means for us and how we will represent it in the store. I came up with the following object:
```js
// redux/reducers.js
const initialState = {
  user: null,
  error: null,
  requestInFlight: false,
  credentials: null
}
```
The `user` is probably the most important part of our state. We will use it to keep the user's data returned by the back-end. The thing is that alone it is not enough to cover our UI screens. When it is `null` we may say that the request didn't happen yet, or that it's in progress, or maybe that it happened but failed. Because of this uncertainty we have to introduce `error` and `requestInFlight`. They will be used like flags to form the error state and split the flow to before and after the HTTP request.
`credentials` is a storage for what the user typed so we can submit the form again when covering the try-again feature.
Let's see how our reducer modifies the state when receiving actions. The `LOGIN` action:

```js
export const Reducer = (state = initialState, { type, payload }) => {
  switch(type) {
    case LOGIN:
      return {
        ...state,
        requestInFlight: true,
        credentials: payload
      };
    default:
      return initialState;
  }
}
```
We turn the `requestInFlight` flag on and store the credentials. Now the view layer may check if `requestInFlight` is `true` and if yes display the loading screen. If the request succeeds we will receive a `LOGIN_SUCCESSFUL` type of action.
```js
case LOGIN_SUCCESSFUL:
  return {
    user: payload,
    error: null,
    requestInFlight: false,
    credentials: null
  };
```
`requestInFlight` should be turned off and the `user` property may be filled in with the action's payload. We don't need to keep the `credentials` anymore so we set it back to `null`. We also have to flush out the `error` (if any) because we may have a UI which depends on it. When however the request fails we will receive an action of type `LOGIN_FAILED`:
```js
case LOGIN_FAILED:
  return {
    ...state,
    error: payload,
    requestInFlight: false
  };
```
`LOGIN_FAILED` is always dispatched after `LOGIN` so we know that the credentials are currently in the store. We use the spread operator (`...state`) which guarantees that we keep that information in. The action's payload contains the actual error so we assign it to the `error` property. The request process is finished so we have to turn `requestInFlight` off. This will let the view render an appropriate UI based on the error.
The last two types of actions, `LOGOUT` and `TRY_AGAIN`, look like this:
```js
case LOGOUT:
  return initialState;
case TRY_AGAIN:
  return {
    ...state,
    requestInFlight: true
  }
```
If the user wants to log out we just bring back the initial state where everything is `null` and `requestInFlight` is `false`. The `TRY_AGAIN` action just turns `requestInFlight` to `true`. There is no HTTP request so far - just pure state modifications done in an immutable way.
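Because a reducer is a pure function, all of these transitions can be exercised in isolation. Here is the whole reducer inlined into a single dependency-free sketch (constants copied from above; the cycle at the bottom is illustrative, not part of the article's code):

```javascript
// Constants and reducer inlined so the whole thing runs standalone.
const LOGIN = 'LOGIN';
const LOGIN_SUCCESSFUL = 'LOGIN_SUCCESSFUL';
const LOGOUT = 'LOGOUT';
const LOGIN_FAILED = 'LOGIN_FAILED';
const TRY_AGAIN = 'TRY_AGAIN';

const initialState = {
  user: null,
  error: null,
  requestInFlight: false,
  credentials: null
};

const Reducer = (state = initialState, { type, payload }) => {
  switch (type) {
    case LOGIN:
      return { ...state, requestInFlight: true, credentials: payload };
    case LOGIN_SUCCESSFUL:
      return { user: payload, error: null, requestInFlight: false, credentials: null };
    case LOGIN_FAILED:
      return { ...state, error: payload, requestInFlight: false };
    case TRY_AGAIN:
      return { ...state, requestInFlight: true };
    case LOGOUT:
      return initialState;
    default:
      return state; // unknown actions leave the state untouched
  }
};

// One full cycle: submit -> failure -> retry -> success -> logout.
let state = Reducer(undefined, { type: LOGIN, payload: { username: 'a', password: 'b' } });
console.log(state.requestInFlight);  // true
state = Reducer(state, { type: LOGIN_FAILED, payload: new Error('CONNECTION_ERROR') });
console.log(state.credentials);      // still kept - needed for "try again"
state = Reducer(state, { type: TRY_AGAIN, payload: null });
state = Reducer(state, { type: LOGIN_SUCCESSFUL, payload: { name: 'Jon Snow' } });
console.log(state.user.name);        // Jon Snow
state = Reducer(state, { type: LOGOUT, payload: null });
console.log(state === initialState); // true
```

Notice that the credentials survive a failure - that is exactly what the try-again flow relies on.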
Making the HTTP request
For the last year or so I have been experimenting with different options for handling async processes. Right now the redux-saga library makes the most sense to me. That is why I decided to use it here too (a really good introduction to the saga pattern can be found here). The example is small enough so we can go with a single saga.
```js
// redux/saga.js
import { takeLatest, call, put, select } from 'redux-saga/effects';
import { LOGIN, TRY_AGAIN } from './constants';
import { loginSuccessful, loginFailed } from './actions';
import Auth from '../services/Auth';

const getCredentials = state => state.credentials;

export default function * saga() {
  yield takeLatest([ LOGIN, TRY_AGAIN ], function * () {
    try {
      const credentials = yield select(getCredentials);
      const userData = yield call(Auth.login, credentials);

      yield put(loginSuccessful(userData));
    } catch (error) {
      yield put(loginFailed(error));
    }
  });
}
```
The saga stops and waits for `LOGIN` or `TRY_AGAIN` actions. They both should lead to firing the HTTP request. If everything is ok we call the `loginSuccessful` action creator. The reducer processes the `LOGIN_SUCCESSFUL` action and we have the user data in the store. If there is an error we call `loginFailed` with the given error. Later the `Widget` component decides what to render based on that error.
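One nice property of this pattern is that a saga can be unit-tested by stepping the generator manually - no store and no HTTP involved. The sketch below uses hand-rolled stand-ins for redux-saga's effect creators (plain objects that merely describe the work), so it runs without the library:

```javascript
// Stand-ins for redux-saga's effect creators: they only *describe*
// an effect instead of performing it, which is what makes sagas testable.
const select = selector => ({ type: 'SELECT', selector });
const call = (fn, ...args) => ({ type: 'CALL', fn, args });
const put = action => ({ type: 'PUT', action });

const getCredentials = state => state.credentials;
const loginSuccessful = userData => ({ type: 'LOGIN_SUCCESSFUL', payload: userData });
const fakeLogin = () => {}; // never actually invoked while stepping

function * loginSaga() {
  const credentials = yield select(getCredentials);
  const userData = yield call(fakeLogin, credentials);
  yield put(loginSuccessful(userData));
}

// Drive the saga by hand, feeding back a fake result for each effect.
const it = loginSaga();
console.log(it.next().value.type);                         // SELECT
const callEffect = it.next({ username: 'a' }).value;
console.log(callEffect.type, callEffect.args[0].username); // CALL a
const putEffect = it.next({ name: 'Jon Snow' }).value;
console.log(putEffect.action.payload.name);                // Jon Snow
```

The test never touches the network: it asserts on the effect descriptions and injects whatever "response" it likes via `next(...)`.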
Wiring our main React component to Redux
`Widget.jsx` will be the component which is wired to the Redux store via the `connect` function. We will need both a mapping of state to props and a mapping of dispatch to props.
```js
import { CONNECTION_ERROR } from '../services/errors';

class Widget extends React.Component {
  render() {
    ...
  }
}

const mapStateToProps = state => ({
  isInProgress: state.requestInFlight,
  isSuccessful: state.user !== null,
  isFailed: state.error !== null,
  name: state.user ? state.user.name : null,
  isConnectionError: state.error && state.error.message === CONNECTION_ERROR
});
const mapDispatchToProps = dispatch => ({
  login: credentials => dispatch(login(credentials)),
  tryAgain: () => dispatch(tryAgain()),
  logout: () => dispatch(logout())
});

export default connect(mapStateToProps, mapDispatchToProps)(Widget);
```
Let's first talk about `mapStateToProps`. The first three props are booleans that basically tell us in which part of the process the user is - making a request, successfully logged in, or something failed. We use almost all the props from our state - `requestInFlight`, `user` and `error`. The `name` is directly derived from the user's profile data. And because we have a different UI based on the type of error we need a fourth flag - `isConnectionError`.
The actions that are triggered by the user are `TRY_AGAIN` and `LOGOUT`. In `mapDispatchToProps` we create anonymous functions to dispatch those actions.
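Since `mapStateToProps` is a pure function, the derived flags can be checked directly against sample store states. A small dependency-free sketch (constant inlined, the sample states are made up for illustration):

```javascript
const CONNECTION_ERROR = 'CONNECTION_ERROR';

const mapStateToProps = state => ({
  isInProgress: state.requestInFlight,
  isSuccessful: state.user !== null,
  isFailed: state.error !== null,
  name: state.user ? state.user.name : null,
  isConnectionError: state.error && state.error.message === CONNECTION_ERROR
});

// A request in flight...
const loading = { user: null, error: null, requestInFlight: true, credentials: {} };
console.log(mapStateToProps(loading).isInProgress); // true

// ...and a failed one with a connection problem.
const failed = { user: null, error: new Error(CONNECTION_ERROR), requestInFlight: false };
const props = mapStateToProps(failed);
console.log(props.isFailed, props.isConnectionError); // true true
```

This is where the Redux approach pays its tax: the view's states are not explicit in the store, they have to be derived from flags every time.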
The rendering bit
The last part which I want to show you is how we render the dummy components. That's the `render` function of the `Widget` component:
```js
render() {
  const { isInProgress, isSuccessful, isFailed, isConnectionError } = this.props;

  if (isInProgress) {
    return <p className='tac'>Loading. please wait.</p>;
  } else if (isSuccessful) {
    return <Profile name={ this.props.name } logout={ this.props.logout } />;
  } else if (isFailed) {
    return isConnectionError ?
      <Error tryAgain={ this.props.tryAgain } /> :
      (<div>
        <LoginForm submit={ this.props.login } />
        <p className='error'>Missing or invalid data.</p>
      </div>)
  }
  return <LoginForm submit={ this.props.login } />;
}
```
Having the boolean flags as props, we need four `if` statements to achieve the desired result.
- If `isInProgress` is set to `true` we render the loading screen.
- If `isInProgress` is `false` and the request is successful the `Profile` component is displayed.
- If the request fails we check the error's type and based on it decide to either render the `Error` component or the same login form but with an error message.
- If none of the above is truthy we return the `LoginForm` component only.
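That branching is easy to lift into a pure function over the mapped props, which makes the screen selection testable without rendering anything. `selectScreen` and the screen names below are a sketch, not part of the article's code:

```javascript
// Mirrors the render() branching: returns which screen would show.
function selectScreen({ isInProgress, isSuccessful, isFailed, isConnectionError }) {
  if (isInProgress) return 'LOADING';
  if (isSuccessful) return 'PROFILE';
  if (isFailed) return isConnectionError ? 'TRY_AGAIN' : 'FORM_WITH_ERROR';
  return 'FORM';
}

console.log(selectScreen({ isInProgress: true }));                      // LOADING
console.log(selectScreen({ isSuccessful: true }));                      // PROFILE
console.log(selectScreen({ isFailed: true, isConnectionError: true })); // TRY_AGAIN
console.log(selectScreen({ isFailed: true }));                          // FORM_WITH_ERROR
console.log(selectScreen({}));                                          // FORM
```

Notice how the order of the checks matters - the flags are not mutually exclusive by construction, which is precisely the kind of implicit state a state machine makes explicit.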
Redux implementation - done
More or less this is how I would approach a feature implementation if I have to use Redux. It starts with a definition of actions and their action creators, then writing reducers and probably handling async processes via sagas. At the end comes the actual rendering (via React in our case).
The application follows one-direction data flow where the user interacts with the UI which leads to dispatching of an action. The reducer picks that action and returns a new version of the state. As a result Redux triggers re-rendering of the React components tree.
Source code of the example here.
Implementing with Stent
The main idea behind Stent is to manage state by using a state machine. And every state machine begins with a table (or a graph) that defines the possible states and their transitions.
State machine table/graph
There are two questions which we ask ourselves while making this table. And I noticed that these two questions are actually a really nice way to avoid bugs and make our application predictable - "In what states may our app be?" and "What is the possible input in every one of these states?". The answers to these questions produce the following table:
(Image taken from “You are managing state? Think twice.”)
If we scroll up to the beginning of the article we will see exactly these four states shown as screens:
- LOG IN FORM (A/D) - that is the state where we display a form for accepting username and password. We may have this state displaying an error message too. That is the case where the user submitted the form empty or typed wrong credentials. This state accepts only one action - `Submit` - and transitions the machine to a LOADING state.
- LOADING (B) - the request to the back-end is in flight. We may get either `Success` or `Failure` here and transition to PROFILE or ERROR states.
- PROFILE (C) - this is the state after a successful log in, so we display a welcome message and two links. The first may trigger a change in another part of our app and keeps the machine in a PROFILE state, while the other one logs the user out and transitions the machine to the LOG IN FORM state.
- ERROR (E) - the last possible state is when we receive a connection error. We have the credentials and we have to proceed with the try-again logic. So we accept only the `Try again` input and transition the machine to a LOADING state.
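Before involving any library, the table above can be encoded as plain data and sanity-checked: a transition is just a `(state, input) → state` lookup. Everything below is a dependency-free sketch of that graph, not Stent's API:

```javascript
// The state graph from the table above, as data.
const graph = {
  'LOG IN FORM': { 'Submit': 'LOADING' },
  'LOADING':     { 'Success': 'PROFILE', 'Failure': 'ERROR' },
  'PROFILE':     { 'View profile': 'PROFILE', 'Log out': 'LOG IN FORM' },
  'ERROR':       { 'Try again': 'LOADING' }
};

// A machine is just: current state + a lookup in the graph.
// Unknown inputs leave the state untouched - the key property
// that eliminates impossible-state bugs.
const transition = (state, input) => graph[state][input] || state;

console.log(transition('LOG IN FORM', 'Submit')); // LOADING
console.log(transition('LOADING', 'Failure'));    // ERROR
console.log(transition('LOADING', 'Submit'));     // LOADING (ignored)
```

Submitting while loading simply does nothing - no extra `requestInFlight` bookkeeping required.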
Defining the state machine
To define the machine we have to use the `Machine.create` factory function. We have to provide a name, an initial state and a transitions map.

```js
// stent/states.js
export const LOGIN_FORM = 'LOGIN_FORM';
export const LOADING = 'LOADING';
export const TRY_AGAIN = 'TRY_AGAIN';
export const WRONG_CREDENTIALS = 'WRONG_CREDENTIALS';
export const PROFILE = 'PROFILE';
```

```js
// stent/machine.js
import { Machine } from 'stent';
import { LOGIN_FORM } from './states';
import transitions from './transitions';

const InitialState = { name: LOGIN_FORM };
const machine = Machine.create(
  'LoginFormSM',
  {
    state: InitialState,
    transitions
  }
);
```
A state in the context of Stent is just a regular JavaScript object with one reserved prop - `name`. As we will see in a bit, we may store whatever we want there, but the `name` key is used by the library for its internal processes. We should always provide that `name` and its value should be a string representing one of the machine's states.
The name of the machine - `LoginFormSM` - is also important in our case because we are going to wire it to a React component. Let's now see what is behind the `transitions` object.
State machine transitions’ map
The transitions map is an object of objects following the convention:

```js
{
  [STATE 1]: {
    [INPUT A]: [HANDLER],
    [INPUT B]: [HANDLER],
    ...
  },
  [STATE 2]: {
    [INPUT C]: [HANDLER],
    [INPUT D]: [HANDLER],
    ...
  },
  ...
}
```
- `STATE 1` and `STATE 2` are strings which will be used as the `name` key while transitioning the machine. Stent dynamically produces helper methods for checking if the machine is in a particular state. For example if we define a `LOG IN FORM` state we will have a `machine.isLogInForm()` method that returns `true` or `false`.
- The inputs are also strings like `submit` or `try again`. Sending an input to the machine is the same as dispatching an action in Redux. However, instead of defining a constant, then an action creator, and then calling that creator, Stent provides a machine method directly. It is again dynamically generated. For example if we say that we accept a `submit` input we will have `machine.submit(credentials)`, where `credentials` is the action's payload which we will receive in the handler.
- The handler of an input may be four different things. (1) Just a state name like `LOG IN FORM` and (2) a state object like `{ name: LOADING }`. (3) A redux-like reducer function which has the signature `function (currentState, payload)`. That function should return either a string (`LOG IN FORM`) or a state object (`{ name: LOADING }`). And the last option (4) is a generator function (saga). Inside that generator we may transition the machine multiple times by `yield`ing a new state object. There is also a `call` helper for handling side effects, similarly to the redux-saga project.
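To make the convention concrete, here is a dependency-free sketch of how such a map could be interpreted for the synchronous handler kinds - a string, a state object, or a reducer-like function (generators are left out). This illustrates the convention only, not Stent's actual implementation:

```javascript
// Apply one input to a machine-like object using the map convention:
// the handler may be a string, a state object, or a function.
function applyInput(machine, transitions, input, payload) {
  const handler = transitions[machine.state.name][input];
  const result = typeof handler === 'function'
    ? handler.call(machine, machine.state, payload) // reducer-like
    : handler;                                      // string or state object
  machine.state = typeof result === 'string' ? { name: result } : result;
  return machine.state;
}

const transitions = {
  IDLE: { run: 'RUNNING' },                             // string handler
  RUNNING: {
    stop: { name: 'IDLE' },                             // state-object handler
    fail: (state, error) => ({ name: 'BROKEN', error }) // function handler
  },
  BROKEN: { reset: 'IDLE' }
};

const machine = { state: { name: 'IDLE' } };
console.log(applyInput(machine, transitions, 'run').name);           // RUNNING
console.log(applyInput(machine, transitions, 'fail', 'boom').error); // boom
```

All three shapes collapse to the same thing in the end: a new state object with a `name` key.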
Now, when we know what Stent expects, let's dive in and create our transitions map. The easiest way to produce it is to look at the table/graph that we discussed earlier and just copy/paste the states with their possible inputs. The first one is the `LOGIN_FORM` state:
```js
// stent/transitions.js
import { call } from 'stent/lib/helpers';
import Auth from '../services/Auth';
import { CONNECTION_ERROR, VALIDATION_ERROR } from '../services/errors';
import { LOGIN_FORM, LOADING } from './states';

const submit = function * (state, credentials) {
  yield LOADING;

  try {
    const user = yield call(Auth.login, credentials);

    this.success(user);
  } catch (error) {
    this.error(error, credentials);
  }
};

const transitions = {
  [LOGIN_FORM]: {
    'submit': submit
  },
  [LOADING]: {
    'success': ...,
    'error': ...
  }
};

export default transitions;
```
`LOGIN_FORM` is the default state so the only possible input is `submit`. It is handled by a generator that receives the current state and a payload - `credentials`. The very first thing that we do is to transition the machine to a `LOADING` state. From this point on the machine no longer reacts to a `submit` input. That is because in `LOADING` state the only accepted inputs are `success` and `error`. Next in the generator we fire our side effect `Auth.login`. We start waiting for the user's data (the generator pauses at this point). If everything is ok we dispatch the `success` action. If not - the `error` one, and pass the error together with the `credentials`. This is important because we need to trigger the try-again logic later. We should mention that a function or a generator used as a handler is always called with the machine as a context. So inside those handlers we may use `this.<machine method>`.
At this point we have feedback for the request and the machine is in a `LOADING` state. Here are the rest of the states and their handlers:
```js
const submit = function * (state, credentials) { ... }
const success = function (state, user) {
  return { name: PROFILE, user };
};
const error = function (state, error, credentials) {
  return error.message === CONNECTION_ERROR ?
    { name: TRY_AGAIN, credentials } :
    { name: WRONG_CREDENTIALS, error };
};
const tryAgain = function * (state) {
  yield call(submit, state, state.credentials);
}

const transitions = {
  [LOGIN_FORM]: {
    'submit': submit
  },
  [LOADING]: {
    'success': success,
    'error': error
  },
  [TRY_AGAIN]: {
    'try again': tryAgain
  },
  [WRONG_CREDENTIALS]: {
    'submit': submit
  },
  [PROFILE]: {
    'view profile': () => {},
    'logout': LOGIN_FORM
  }
};
```
Receiving
success means transitioning to
PROFILE state and keeping the user data in the state object. This is probably the first place where we see how a machine transition produces output - the
user object returned by the
Auth service layer.
We may also receive an
error input which handler has some conditional logic inside. Based on the error we decide what the next state will be. Either we display an error screen with a “Try again” link (
WRONG_CREDENTIALS state) or we show the same login form with an error message below it (
LOGIN_FORM state). Notice how when transitioning between different states we keep only what we need. For example when entering the
credentials but later when moving to
WRONG_CREDENTIALS we completely flush this out and have only
name and
error. That is because at this point we don’t need credentials.
We should also mention the
try again action. Its handler is an interesting one because we have a generator calling another generator. That is one of my favorite redux-saga features. Composing by chaining generators is a really powerful way to shift responsibilities. It is also nice from a unit testing point of view. The important bit here is to pass the
state and action payload required by the other generator. In our case
submit expects to receive the machine’s state and user’s credentials.
The rendering
If you noticed in this section we didn’t mention the Auth service neither the dummy react components. That is because they are the same like in the Redux example. The only one difference is the
Widget.jsx component.
Because most of the logic is done via the state machine we have a clear rendering path. What I mean by that is that we have an explicit machine states that one-to-one map with the screens that we have to render. The React bit becomes a lot more easier and simple. Here is how the wiring happens:
// components/Widget.jsx import React from 'react'; import { connect } from 'stent/lib/react'; class Widget extends React.Component { ... } export default connect(Widget) .with('LoginFormSM') .map(machine => ({ login: machine.submit, tryAgain: machine.tryAgain, logout: machine.logout, state: machine.state.name }));
We say “Wire this component (Widget) with a machine (LoginFormSM)”. The function that we pass to the
map method is similar to Redux’s
mapStateToProps or
mapDispatchToProps. We simply pass down methods of the machine and the current state’s name. Same as Redux’s store, when we transition to a new state the
Widget component gets re-rendered.
And here is how the rendering looks like:
// components/Widget.jsx import LoginForm from './LoginForm.jsx'; import Profile from './Profile.jsx'; import Error from './Error.jsx'; import { LOGIN_FORM, LOADING, TRY_AGAIN, WRONG_CREDENTIALS, PROFILE } from '../stent/states'; class Widget extends React.Component { constructor(props) { super(props); this.renderMap = { [LOGIN_FORM]: <LoginForm submit={ props.login } />, [LOADING]: <p className='tac'>Loading. please wait.</p>, [TRY_AGAIN]: <Error tryAgain={ props.tryAgain }, [WRONG_CREDENTIALS]: ( <div> <LoginForm submit={ props.login } /> <p className='error'>Missing or invalid data.</p> </div> ), [PROFILE]: <Profile name={ props.name } logout={ props.logout } /> } } render() { return this.renderMap[this.props.state]; } }
We have something which I call a “render map”. It is direct mapping between a machine state and React component/s.
Stent implementation - done
For me, using a state machine means asking the right questions. This approach simply leads to higher level of predictability. We see how the state machine pattern protects our app being in a wrong state or state that we don’t know about. There is no conditional logic in the view layer because the machine is capable of providing information of what should be rendered.
Source code of the example here.
Final words
I’ve made Stent because I wanted state machines in my applications and at the same time I didn’t want to stop using Redux. I took lots of stuff from Redux and this is the result. We still have actions which are handled by reducers. The saga pattern is built-in so we can handle side effects in a synchronous fashion. Stent knows the possible actions upfront and it generates methods which in the context of Redux we call action creators. This always bugs me a little because in order to dispatch an action in Redux application I have to define a constant and then an action creator. Later somewhere in the code import it and finally call it. With Stent is just a method of the machine.
To be honest I didn’t use Stent in a big production app. I’m still experimenting with small projects but it goes really well so far. It makes me change my mindset. I started thinking in states and not in transitions. So far I was mostly interested in what actions to dispatch and what the reducers were doing. When the store grows we have more and more things to consider. Very often I caught myself asking “Can I update this portion of the state?” and honestly I wasn’t able to give a clear answer. That is because we don’t have well defined states. We have variables and combination of values that form states. It is like a car driver trying to control an airplane. We have a rough idea what is going on but because there are billion buttons and switches we are scared to move forward. The state machine pattern has the power to convert our airplane into a car. Fewer possible actions at a given time, fewer bugs. Try thinking in states, not in transitions and you will see the difference. | http://outset.ws/blog/article/getting-from-redux-to-state-machine-with-stent | CC-MAIN-2019-04 | refinedweb | 4,271 | 58.18 |
Hi, I'm a student learning to code in C. This is what I have only when I compile it, gcc, I get two error messages and I don't understnd why. The messages are '67: error: expected declaration or statement at end of input' and '67: error: control reaaches end of non-void funtion' This is the code as I have it now, line 67 is the last line in the program...the }. Any info would be helpful.
#include <stdio.h> #include<stdlib.h> #include<time.h> int main(void){ //declarations int menuChoice; int i,n=0; int r=rand()%100 + 1;; srand(time(NULL)); //statements while (menuChoice !=3){ //Choose mode printf(" *******************************************\n"); printf(" ****** Would you like to play a game ******\n"); printf(" *******************************************\n"); printf(" 1. Guess the number that I'm thinking\n"); printf(" 2. Global Thermonuclear War\n"); printf(" 3. Exit\n\n\n"); printf(" Please Choose a Menu Item: "); scanf("%d", &menuChoice); if (menuChoice > 3) { printf(" You don't follow directions very well do you... try again\n"); } if (menuChoice ==2){ printf("You really need to watch 'War Games' and come back to see me.\n"); } else if (menuChoice ==3){ printf(" Have a good day\n\n\n\n\n"); } else if (menuChoice == 1){ printf(" You chose option 1.\n"); printf("I have my number\n\nWhat number am I thinking of between 1 and 100."); while(scanf("%d",&i)) if (i > r) { n++; printf("Your guess is high. Please try again: "); } else if (i < r) { n++; printf("Your guess is low. Please try again: "); } else if (i == r) { printf("\n\nCongratulations!\nYou guessed the number in %d guesses! \n", n+1); } } return 0; } | https://www.daniweb.com/programming/software-development/threads/286785/don-t-understand-what-the-problem-is | CC-MAIN-2018-43 | refinedweb | 278 | 74.49 |
0
I need to write a program to find the first 10 prime numbers, i.e. integers that are only evenly divisible by themselves and 1. The hint I got was to make a big array of booleans named "isprime" and mark off all the non-prime numbers as false, i.e. isprime[4] = false. Here's what I have but it's just printing all of the numbers anyway.
public class PrimeFinder { public static void main(String[] args) { boolean[] isprime = new boolean[100]; for(int i = 2; i < isprime.length; i++) { if(i-1 % i == 0) { isprime[i] = false; } isprime[i] = true; } for(int j = 0; j < isprime.length; j++) { if(isprime[j] == true){ System.out.print(j); System.out.print(", "); } } } } | https://www.daniweb.com/programming/software-development/threads/411509/boolean-array-to-find-first-10-prime-numbers | CC-MAIN-2016-44 | refinedweb | 122 | 74.08 |
Find:
Error in EPiServer.Find.Cms.IndexingJobService.Start(Action<string> statusNotification).
Details
SiteDefinition.Current was set to the site definition that was being indexed during Find indexing job in 13.0.5 and earlier versions. Setting SiteDefinition.Current is missing in 13.2.1, and should be added again.
EPiServer.CMS.Core 11.18.0 creates additional DynamicProxy types which might increase the number of indexed items.
Scheduled job to remove old content must not execute in case of a 503 response during indexing.
Steps to reproduce:
1. Run indexing job, and check number in the index
2. Set an account to "read-only" mode by help from RE
3. Start indexing job
4. Change the account to normal mode by help from RE (to not be in read-only mode)
5. Wait until indexing job has finalized
6. Check number of items in the index
Expected: Same number of items after (1) as after (6)
Actual: Sometimes different, pending if (4) was done before the indexing job was completed.
Steps to reproduce
1. Set up multi site with Alloy and a Find index running the latest CMS and Find (2019-11-18)
2. I have three sites set up and use site specific assets; localhost (has a wildcard host), sitea.com and siteb.com
3. Media uploaded to localhost gets indexed in Find with an url that looks like this
http://[domain]/[lang]/SiteAssets/ (See SearchHitUrl$$string in Find index)
4. Media uploaded to sitea.com, siteb.com gets an url with http://[domain]/[lang]/SysSiteAssets/
5. If I remove wildcard host on localhost site media uploaded also get http://[domain]/[lang]/SysSiteAssets/
Is this correct behavior?
Client is expecting all to end up below http://[domain]/[lang]/SiteAssets/
I am not sure what is correct. I am not sure if this is really Find related either.
typeProperty.Parent is converted to string, then replace NewLine with empty, which creates 2 strings. That seems to be avoidable.
It is not uncommon that clients complain about pages that are not updated through event indexing. We can often see that an indexing is performed but not with the latest version.
The App Support team has two possible theories on why this happens. One is described here: FIND-4176. The next is described below:
This typically happens when stuff is done in event handlers that slow up the process which creates a "perfect time gap".
BestBetRepository does not have cache so it will hit database almost everytime. Even worse, it uses DDS which has very bad performance.
Adding a bool to an existing class that already have been indexed by content, and using the new property in projection will cause an exception
Steps to reproduce:
1. Index the Alloy site
2. Add the following search query in StartPageController:
SearchClient.Instance.Search<StandardPage>()
.Select(x => new
)
.GetResult();
3. Add a new property of typ bool in StartPage class
public class StandardPage : SitePageData
{
...
public virtual bool Testing
...
}
4. Add the new property to our projection in (2)
SearchClient.Instance.Search<StandardPage>()
.Select(x => new
)
.GetResult();
5. Start the site and go to the start page
Expected: No exception
Actually: Exception is thrown
When applying nested convention for types having multiple implementations, the convention should only be registered once.
In edge cases, EPiServer.Find.Cms.ContentExtensions.SearchText(IContentData contentData) can return a null reference exception. | https://world.episerver.com/documentation/Release-Notes/?packageGroup=Find | CC-MAIN-2020-50 | refinedweb | 560 | 50.73 |
hey everyone,
i was in the cpreprocessor tutorial in the beginner C++ tutorial section.
i understood every thing in the lesson (#include, constants and macros).
but i couldn't understand a thing about the conditional compilation...
i understood what the #if #endif #elif #else #ifndef and #ifdef mean..
but i didn't understand how to put them to use...
all i wanna know is what is the use of such statements and how to use them..
PS. Explain in simple terms please, since i do not know much about c++ but i'm well-versed with all the stuff taught in the lessons in the beginner series before the c preprocessor tut. | http://cboard.cprogramming.com/cplusplus-programming/152546-conditional-compilation-printable-thread.html | CC-MAIN-2015-06 | refinedweb | 112 | 72.76 |
Goatboy wrote:Oh, that's simple. All you need to do is dedicate many years of your life to studying security.
pretentious wrote:I don't suppose this will involve some kind of recursive algorythm? though dealing with such numbers will probably break the stack
5050505050505050504994949494949494949495
jgr wrote:This was pretty trivial.
- Code: Select all
5050505050505050504994949494949494949495
Tell me it's right and give me my pointz
#include <stdlib.h>
#include <stdio.h>
#include <gmp.h>
int main (void)
{
int i;
mpz_t n;
mpz_t total;
mpz_t tmp;
mpz_t sum;
mpz_init(n);
mpz_init(total);
mpz_init(tmp);
mpz_init(sum);
mpz_set_ui(n, 9);
mpz_set_ui(total, 0);
for (i = 0; i < 20; i++) {
// (n(n+1)(n+2)) / 6
mpz_add_ui(tmp, n, 1);
mpz_mul(sum, n, tmp);
mpz_add_ui(tmp, n, 2);
mpz_mul(sum, sum, tmp);
mpz_div_ui(sum, sum, 6);
mpz_add(total, total, sum);
mpz_mul_ui(n, n, 10);
mpz_add_ui(n, n, 9);
}
gmp_printf("%Zd\n", total);
mpz_clear(n);
mpz_clear(total);
mpz_clear(tmp);
mpz_clear(sum);
exit(EXIT_SUCCESS);
}
main = print . sum . take 20 . map tetra . iterate ((+ 9) . (* 10)) $ 9
where tetra n = (n * (n + 1) * (n + 2)) `div` 6
#!/usr/bin/ruby
prompt = "> "
puts "Enter the number to iterate:"
print prompt
num = gets.chomp
puts "Enter amount of decimal places:"
print prompt
places = gets.chomp.to_i
def tri(n)
return (n*(n+1)*(n+2)/6)
end
value = 0
while places > 0
number = ''
places.times { number += num }
n = number.to_i
value += tri(n)
places -= 1
end
puts "Answer:"
puts value
#!/usr/bin/ruby
# Find the product of the sum of all triangle numbers up to: 9..n, where n is the decimal value we specify i.e, 3 = 999, 4 = 9999
puts "How many decimal places would you like to check:" # asking user for how many decimal places to get product of
print "> " # setting a little prompt
places = gets.chomp.to_i # getting the decimal value to get product of
places_save = places # saving the decimal value, since we change it later (line: 26)
def tri(places) # creating a function called tri with the paramater as the decimal value to check
sixes = '' # setting a null value where we will store additional 6's (This changes depending on the decimal value)
zeros = '' # setting a null value where we will store additional 0's (This changes depending on the decimal value)
if places > 1 # if the decimal place to check is greater than 1
places = places-1 # subtract one from that value
places.times { sixes += '66' } # for every decimal value add '66' to our sixes variable
places.times { zeros += '0' } # for every decimal value add a '0' to our zeros variable
return "16"+sixes+"5"+zeros # return the value for the product of the triangle numbers
elsif places == 1 # if the decimal place to check is equal to 1
return 165 # return 165 (This is the product of: 1(1+1)/2 through 9(9+1)/2)
end # end the if
end # end the function
total = 0 # initializing a variable which will hold our final product
while places > 0 # while places is greater than 0
value = tri(places) # store the product of the current decimal place in a variable called value
total += value.to_i # have total equal that value converter to an integer, since the function returns it as a string
places -= 1 # subtract 1 from our places variable
end # end the loop if places is no longer greater than 0
puts "The total sum for %s digits, is: %s" % [places_save, total] # print the product of the sum of all triangle numbers up to our desired decimal position
result = let get n = div (n*(n+1)*(n+2)) 6 in let arr = take 20 (iterate ((+9) . (*10)) 9) in sum[get x | x <- arr]
get n = div (n*(n+1)*(n+2)) 6
main = print (sum[get x | x <- arr]) where arr = take 20 (iterate ((+9) . (*10)) 9)
num=raw_input('Enter last triangle to compute 1-')
decplaces=len(str(num))
solution=0
while decplaces>0:
number=''
repeats=decplaces
while repeats>0:
number+=num[0]
repeats-=1
n=int(number)
solution+=((n*(n+1)*(n+2))/6)
decplaces-=1
print solution
Users browsing this forum: No registered users and 0 guests | http://www.hackthissite.org/forums/viewtopic.php?f=157&p=76891 | CC-MAIN-2017-13 | refinedweb | 688 | 62.51 |
Home -> Community -> Usenet -> comp.databases.theory -> Re: What databases have taught me
Bruno Desthuilliers wrote:
> Marshall wrote:
> >
> > Certainly Python has an active community and is growing. And I would
> > agree that much of its success comes from good design, and not
> > just the fact that it has a lot of libraries and a solid community.
> > (Although I could critique its halfhearted handling of functional
> > idioms, such as lambda and fold.)
>
> Yes, Python lambdas are really restricted - while still usable and
> useful - and there are few other limitations when compared to a full
> blown FPL (like read-only closures etc). In practice, there's almost
> always another (usually clean and not too complicated) way to achieve
> the same results, so that feeling of arbitrary limitation when coming
> from a real FPL had often more to do with superficial similarities and
> not knowing how to properly do the thing in Python than with a real
> limitation. Well, IMHO at least !-).
> The fact is anyway that there's a clear evolution toward more
> declarative idioms in Python, and a growing support for this (like list
> comprehensions, generalized into generator expressions - now extended
> into full-blown coroutines...). AFAIC, I see the OO/imperative part of
> Python much as toolkit for building higher-level declarative idioms, and
> having a mostly coherent and consistent model (here an object model) is
> of great help for this.
Fair.
> > I'm not sure how much I'd
> > read in to Google's hiring of Guido;
>
> Given that a substantial part of it's job at google is to work on
> Python, I take it as a sure sign that Google intend to strongly support
> the language.
Oh, certainly.
> > though it is true that Python
> > is used at Google, it's used mostly as versatile glue, and not
> > so much for anything performance-sensitive.
>
> Most performance-sensitive parts are in C++. FWIW, I think that one of
> the strength of Python is here : code the hi-level logic (the 'glue')
> with a hi-level language, and the critical parts in a lower level language.
>
> You can read this for more on Google's use of Python:
>
>
Hmmm. Well, there were some factual inaccuracies in those references (although the authors are both probably better Python programmers than I.)
> >
> >>But that's only anecdotical, since, may I quote:
> >>"""
> >>we do not limit ourselves, (or sometimes, even concern ourselves)
> >>with what products are out there today
> >>"""
> >
> > Brilliant, heh heh.
> >
>
> Eh... Would have been a shame not to reuse this !-)
>
> >>
> >> ?
> >
> >
> > There are a bazillion categories. The optional-declarations category
> > is certainly worth mentioning, but it is independent of nominal vs.
> > structural.
>
> Is it really ? Seems like it mixes both scheme, no ?
Not really. You can have dynamically typed optional-annotation languages (you mentioned Lisp), statically-structurally-optionally annotated languags (Haskell) or statically-nominally-optionally- typed languaes (Scala.)
The dimensions to the typing question a manifold and diverse. I have been studying the subject for many years now and while my taxonomy continues to improve, it is by no means settled.
> > There are also optional-declaration, statically typed
> > languages,
> > such as anything in the type-inference category, of which
> > Hindley-Milner is the most famous kind, and of which SML is
> > an example.
>
> Same as in Haskell and OCaml IIRC ?
>
> Don't bother answering, I'll check by myself...
>
> >>
> >>.
> >
> > Yeah. While it's a tradeoff, like everything else, I have to say
> > I think nominal typing might have gotten over-much use. :-)
>
> The "problem" is that AFAICT - please correct me if I'm wrong -
> structural typing can't really garantee semantic correctness. But FWIW,
> nominal typing doesn't either, and arbitrarilly restrict otherwise valid
> constructs.
Well, I suppose it depends on what you mean by semantic correctness. Certainly structural typing introduces the possiblility of "accidental" type compatibility, but I consider this a minor problem. (I never hear SML programmers complain about it, for example.) In contrast, I often hear, ahem :-), Python, Ruby, and Lisp programmers talk about the power of retroactive abstraction (that is, making two types compatible after-the-fact)
> >
> >>>I believe a structurally, statically typed language
>
> If I may ask, is the "statically typed" part here for correctness or for
> performance ?
My chief interest is in correctness, although of course static typing
help dramatically with performance. It is also a benefit for
documentation
purposes. Nonetheless, dynamically typed (strictly speaking untyped) languages gain expressiveness benefits from the lack of a static type system, so it's a tradeoff, and one that smart people are on different sides of. (Aka, I respect the "opposition" while still knowing
clearly where my interest lies.)
> >>> with a
> >>>product type as its fundamental collection, (along with
> >>>some relational operators) would be *most* interesting.
> >>
> >>Could you elaborate on (of give pointer to) what's "product type" exactly ?
> >
> >
> > I actually didn't say it very well. What I was trying to describe
> > was a set-of-product-types type. (That's what I was trying to
> > get at when I said "collection.")
> >
> > A product type is simply a
> > struct, or in Python terms a tuple. (I can't remember whether
> > Python tuples are ordered or named?)
>
> Python has both tuples (ordered, unnamed) and dicts (named, unordered -
> hashtable really), and easily translates one into the other (but with
> lost of ordering). FWIW, dicts are the "central" datatype in Python,
> used for namespaces, objects and classes.
>
> > A set-of-products is
> > a relation, much like SQL tables.
>
> table = set(dict(id=1, name='toto'),
> dict(id=2, name='titi'),
> dict(id=3,name='tata'))
>
> But this won't cut it... The problem is that unicity is inforced on
> object's identity, which is a "technical" identity, not a "semantic"
> identity. The following would be legal, yet ruin the whole stuff:
>
> table.add(dict(id=1, name='toto'))
>
> FWIW, a *real* integration (*not* embedded SQL) of RM/RA in a general
> purpose, hi-level programming language would really interest me - it's
> something I've been thinking of since 2002, but I really lack the
> theoretical background to go any further... From what I've learned and
> toyed with since, I'd think of a CLOS-like multimethod system eventually
> with predicate-based dispatch. There's some litterature about the
> so-called "OO/relational impedance mismatch" (while I don't see what
> "impedance", which I know of in electronic, has to do here), but MHO is
> that the real mismatch is between SQL/SQL DBMS and general purpose
> programming languages.
A fair point. I remain sceptical about multimethods in particular, but I agree in general. (I wonder: if a language lacks subtyping, and has overloading, is there anything left for multimethods to do? Hmmm.)
> BTW, is there actually any other implementation of RM/RA than SQL DBMS ?
I suppose Rel and Alphora, although I have little experience with either.
Marshal Received on Thu Jun 29 2006 - 11:48:59 CDT
Original text of this message | http://www.orafaq.com/usenet/comp.databases.theory/2006/06/29/2112.htm | CC-MAIN-2014-15 | refinedweb | 1,151 | 53.41 |
JFlow: Practical Mostly-Static Information Flow Control
Proceedings of the 26th ACM Symposium on Principles of Programming Languages (POPL 99), San Antonio, Texas, USA, January 1999

JFlow: Practical Mostly-Static Information Flow Control

Andrew C. Myers
Laboratory for Computer Science
Massachusetts Institute of Technology

Abstract

A promising technique for protecting privacy and integrity of sensitive data is to statically check information flow within programs that manipulate the data. While previous work has proposed programming language extensions to allow this static checking, the resulting languages are too restrictive for practical use and have not been implemented. In this paper, we describe the new language JFlow, an extension to the Java language [GJS96] that adds statically-checked information flow annotations. JFlow provides several new features that make information flow checking more flexible and convenient than in previous models: a decentralized label model, label polymorphism, run-time label checking, and automatic label inference. JFlow also supports many language features that have never been integrated successfully with static information flow control, including objects, subclassing, dynamic type tests, access control, and exceptions. This paper defines the JFlow language and presents formal rules that are used to check JFlow programs for correctness. Because most checking is static, there is little code space, data space, or run-time overhead in the JFlow implementation.

1 Introduction

Protection for the privacy of data is becoming increasingly important as data and programs become increasingly mobile. Conventional security techniques such as discretionary access control and information flow control (including mandatory access control) have significant shortcomings as privacy-protection mechanisms. The hard problem in protecting privacy is preventing private information from leaking through computation.
Access control mechanisms do not help with this kind of leak, since they only control information release, not its propagation once released. Mandatory access control (MAC) mechanisms prevent leaks through propagation by associating a run-time security class with every piece of computed data. Every computation requires that the security class of the result value also be computed, so multi-level systems using this approach are slow. Also, these systems usually apply a security class to an entire process, tainting all data handled by the process. This coarse granularity results in data whose security class is overly restrictive, and makes it difficult to write many useful applications. A promising technique for protecting privacy and integrity of sensitive data is to statically check information flows within programs that might manipulate the data. Static checking allows the fine-grained tracking of security classes through program computations, without the run-time overhead of dynamic security classes. Several simple programming languages have been proposed to allow this static checking [DD77, VSI96, ML97, SV98, HR98].

This research was supported in part by DARPA Contract F C-0303, monitored by USAF Rome Laboratory, and in part by DARPA Contract F , also monitored by USAF Rome Laboratory. Copyright © 1999 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept, ACM Inc., fax +1 (212) , or
However, the focus of these languages was correctly checking information flow statically, not providing a realistic programming model. This paper describes the new language JFlow, an extension to the Java language [GJS96] that permits static checking of flow annotations. JFlow seems to be the first practical programming language that allows this checking. Like other recent approaches [VSI96, ML97, SV98, HR98, ML98], JFlow treats static checking of flow annotations as an extended form of type checking. Programs written in JFlow can be statically checked by the JFlow compiler, which prevents information leaks through storage channels [Lam73]. JFlow is intended to support the writing of secure servers and applets that manipulate sensitive data. An important philosophical difference between JFlow and other work on static checking of information flow is the focus on a usable programming model. Despite a long history, static information flow analysis has not been widely accepted as a security technique. One major reason is that previous models of static flow analysis were too limited or too restrictive to be used in practice. The goal of the work presented in this paper has been to add enough power to the static checking framework to allow reasonable programs to be written in a natural manner. This work has involved several new contributions: JFlow extends a complex programming language and supports many
language features that have not been previously integrated with static flow checking, including mutable objects (which subsume function values), subclassing, dynamic type tests, and exceptions. JFlow also provides powerful new features that make information flow checking less restrictive and more convenient than in previous programming languages:

- It supports the decentralized label model [ML97, ML98], which allows multiple principals to protect their privacy even in the presence of mutual distrust. It also supports safe, statically-checked declassification, or downgrading, allowing a principal to relax its own privacy policies without weakening policies of other principals.
- It provides a simple but powerful model of access control that allows code privileges to be checked statically, and also allows authority to be granted and checked dynamically.
- It provides label polymorphism, allowing code that is generic with respect to the security class of the data it manipulates.
- Run-time label checking and first-class label values provide a dynamic escape when static checking is too restrictive. Run-time checks are statically checked to ensure that information is not leaked by the success or failure of the run-time check itself.
- Automatic label inference makes it unnecessary to write many of the annotations that would otherwise be required.

The JFlow compiler is structured as a source-to-source translator, so its output is a standard Java program that can be compiled by any Java compiler. For the most part, translation involves removal of the static annotations in the JFlow program (after checking them, of course). There is little code space, data space, or run-time overhead, because most checking is performed statically. The remainder of this paper is structured as follows: Section 2 contains an overview of the JFlow language and a rationale for the decisions taken.
Section 3 discusses static checking, sketches the framework used to check program constructs in a manner similar to type checking, and both formally and informally presents some of the rules used. This section also describes the translations that are performed by the compiler. Section 4 compares this work to other work in related areas, and Section 5 provides some conclusions. The grammar of JFlow is provided for reference in Appendix A.

2 Language overview

This section presents an overview of the JFlow language and a rationale for its design. JFlow is an extension to the Java language that incorporates the decentralized label model. In Section 2.1, the previous work on the decentralized label model [ML97, ML98] is reviewed. The language description in the succeeding sections focuses on the differences between JFlow and Java, since Java is widely known and well-documented [GJS96].

2.1 Labels

In the decentralized label model, data values are labeled with security policies. A label is a generalization of the usual notion of a security class; it is a set of policies that restrict the movement of any data value to which the label is attached. Each policy in a label has an owner O, which is a principal whose data was observed in order to create the value. Principals are users and other authority entities such as groups or roles. Each policy also has a set of readers, which are principals that O allows to observe the data. A single principal may be the owner of multiple policies and may appear in multiple reader sets. For example, the label L = {o1: r1, r2; o2: r2, r3} has two policies in it (separated by semicolons), owned by o1 and o2 respectively. The policy of principal o1 allows r1 and r2 to read; the policy of principal o2 allows r2 and r3 to read. The effective reader set contains only the common reader r2. The least restrictive label possible is the label {}, which contains no policies.
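The reader-set intersection just described can be modeled with a short sketch. This is illustrative Python rather than JFlow (the paper's own notation extends Java), and it deliberately ignores owners' implicit right to read their own data as well as any principal hierarchy:

```python
# Illustrative model of decentralized labels (not JFlow code).
# A label maps each owner principal to the set of readers it allows.

def effective_readers(label):
    """Readers permitted by every policy: the intersection of reader sets."""
    if not label:                      # the empty label {} restricts nobody
        return None                    # conventionally "all principals"
    sets = [set(readers) for readers in label.values()]
    result = sets[0]
    for s in sets[1:]:
        result &= s
    return result

# The paper's example: L = {o1: r1, r2; o2: r2, r3}
L = {"o1": {"r1", "r2"}, "o2": {"r2", "r3"}}
print(effective_readers(L))            # only the common reader r2
```

Running this on the example label above yields {'r2'}, matching the effective reader set given in the text.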
Because no principal expresses a privacy interest in this label, data labeled by {} is completely public as far as the labeling scheme is concerned. There are two important intuitions behind this model: first, data may only be read by a user of the system if all of the policies on the data list that user as a reader. The effective policy is an intersection of all the policies on the data. Second, a principal may choose to relax a policy that it owns. This is a safe form of declassification: safe, because all of the other policies on the data are still enforced. A process has the authority to act on behalf of some (possibly empty) set of principals. The authority possessed by a process determines the declassifications that it is able to perform. Some principals are also authorized to act for other principals, creating a principal hierarchy. The principal hierarchy may change over time, but revocation is assumed to occur infrequently. The meaning of a label is affected by the current principal hierarchy. For example, if the principal r′ can act for the principal r, then if r is listed as a reader by a policy, r′ is effectively listed by that policy as well. The meaning of a label under different principal hierarchies is discussed extensively in an earlier paper [ML98]. Every variable is statically bound to a static label. (The alternative, dynamic binding, largely prevents static analysis and can be simulated in JFlow if needed.) If a value v has label L1 and a variable x has label L2, we can assign the value to the variable (x := v) only if L1 can be relabeled to L2, which is written as L1 ⊑ L2. The definition of this binary relation on labels is intuitive: L1 ⊑ L2 if for every policy in L1, there is some policy in L2 that is at least as restrictive [ML98]. Thus, the assignment does not leak information. In this system, the label on x is assigned by the programmer who writes the code that uses x.
The power to select a label for x does not give the programmer the ability to leak v, because the static checker permits the assignment to x only if the label on x is sufficiently restrictive. After the assignment, the static binding of the label of x prevents leakage. (Changes in who can read the value in x are effected by modifying the principal hierarchy, but changes to the principal hierarchy require appropriate privilege.)
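The relabeling check just described, and the join operation introduced below, can be sketched in a few lines of Python. This is an illustrative simplification, not the paper's implementation: it ignores the principal hierarchy (every principal acts only for itself), and represents a label as a dict mapping each policy's owner to its reader set.

```python
# Illustrative sketch of the decentralized label model, assuming a
# flat principal hierarchy (each principal acts only for itself).
# A label is a dict mapping a policy's owner to its reader set.

def effective_readers(label):
    """Principals allowed by every policy (an owner implicitly reads)."""
    sets = [readers | {owner} for owner, readers in label.items()]
    if not sets:            # the empty label {} is completely public
        return None
    out = set(sets[0])
    for s in sets[1:]:
        out &= s
    return out

def can_relabel(l1, l2):
    """L1 ⊑ L2: every policy in L1 has some policy in L2 at least as
    restrictive (here: same owner, no additional readers)."""
    return all(any(o2 == o1 and r2 <= r1 for o2, r2 in l2.items())
               for o1, r1 in l1.items())

def join(l1, l2):
    """Least upper bound: the union of the two policy sets. When an
    owner appears in both labels, keeping both policies means only
    their common readers remain effective."""
    out = {o: set(r) for o, r in l1.items()}
    for o, r in l2.items():
        out[o] = out[o] & r if o in out else set(r)
    return out

L = {"o1": {"r1", "r2"}, "o2": {"r2", "r3"}}
print(effective_readers(L))   # only the common reader remains
```

Under this simplification, the assignment x := v is permitted exactly when `can_relabel(label_v, label_x)` holds.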
Computations (such as multiplying two numbers) cause joining (⊔) of labels; the label of the result is the least restrictive label that is at least as restrictive as the labels of the values used in the computation; that is, the least upper bound of the labels. The join of two sets of policies is simply the union of the sets of policies. The relation ⊑ generates a lattice of equivalence classes of labels with ⊔ as the LUB operator. Lattice properties are important for supporting automatic label inference and label polymorphism [ML97, ML98]. The notation A ≈ B is also used as a shorthand for A ⊑ B ∧ B ⊑ A (which does not mean that the labels are equal [ML98]). Declassification provides an escape hatch from strict information flow tracking. If the authority of a process includes a principal p, a value may be declassified by dropping policies owned by principals that p acts for. The ability to declassify provides the opportunity for p to choose to release information based on a more sophisticated analysis. All practical information flow control systems provide the ability to declassify data because strict information flow control is too restrictive to write real applications. More complex mechanisms such as inference controls [Den82] often are used to decide when declassification is appropriate. In previous systems, declassification is performed by a trusted subject: code having the authority of a highly trusted principal. One key advantage of the new label structure is that it is decentralized: it does not require that all other principals in the system trust a principal p's declassification decision, since p cannot weaken the policies of principals that it does not act for.

2.2 Labeled types

This section begins the description of the new work in this paper (the JFlow programming language), which incorporates the label model just summarized. In a JFlow program, a label is denoted by a label expression, which is a set of component expressions.
As in Section 2.1, a component expression of the form owner: reader1, reader2, ... denotes a policy. A label expression is a series of component expressions, separated by semicolons, such as {o1: r1, r2; o2: r2, r3}. In a program, a component expression may take additional forms; for example, it may be simply a variable name. In that case, it denotes the set of policies in the label of that variable. The label {a} contains a single component; the meaning of the label is that the value it labels should be as restricted as the variable a is. The label {a; o: r} contains two components, indicating that the labeled value should be as restricted as a is, and also that the principal o restricts the value to be read by at most r. In JFlow, every value has a labeled type that consists of two parts: an ordinary Java type such as int, and a label that describes the ways that the value can propagate. The type and label parts of a labeled type act largely independently. Any type expression t may be labeled with any label expression {l}. This labeled type expression is written as t{l}; for example, the labeled type int{p:} represents an integer that principal p owns and only p can read (the owner of a policy is always implicitly a reader). The goal of type checking is to ensure that the apparent, static type of each expression is a supertype of the actual, run-time type of every value it might produce; similarly, the goal of label checking is to ensure that the apparent label of every expression is at least as restrictive as the actual label of every value it might produce. In addition, label checking guarantees that, except when declassification is used, the apparent label of a value is at least as restrictive as the actual label of every value that might affect it. In principle, the actual label could be computed precisely at run time.

    int{public} x;
    boolean{secret} b;
    ...
    x = 0;
    if (b) {
        x = 1;
    }

    Figure 1: Implicit flow example
Static checking ensures that the apparent, static label is always a conservative approximation of the actual label. For this reason, it is typically unnecessary to represent the actual label at run time. A labeled type may occur in a JFlow program in most places where a type may occur in a Java program. For example, variables may be declared with labeled type:

    int{p:} x;
    int{x} y;
    int z;

The label may always be omitted from a labeled type, as in the declaration of z. If omitted, the label of a local variable is inferred automatically based on its uses. In other contexts where a label is omitted, a context-dependent default label is generated. For example, the default label of an instance variable is the public label {}. Several other cases of default label assignment are discussed later.

2.3 Implicit flows

In JFlow, the label of an expression's value varies depending on the evaluation context. This somewhat unusual property is needed to prevent leaks through implicit flows: channels created by the control flow structure itself. Consider the code segment of Figure 1. By examining the value of the variable x after this segment has executed, we can determine the value of the secret boolean b, even though x has only been assigned constant values. The problem is the assignment x = 1, which should not be allowed. To prevent information leaks through implicit flows, the compiler associates a program-counter label (pc) with every statement and expression, representing the information that might be learned from the knowledge that the statement or expression was evaluated. In this program, the value of pc during the consequent of the if statement is {b}. After the if statement, pc = {}, since no information about b can be deduced from the fact that the statement after the if statement is executed. The label of a literal expression (e.g., 1) is the same as its pc, or {b} in this case. The unsafe assignment (x = 1) in the example is prevented because the label of x ({public}) is not at least as restrictive as the label of 1 in this expression, which is {b}, or {secret}.

2.4 Run-time labels

In JFlow, labels are not purely static entities; they may also be used as values. First-class values of the new primitive type label represent labels. This functionality is needed when the label of a value cannot be determined statically. For example, if a bank stores a number of customer accounts as elements of a large array, each account might have a different label that expresses the privacy requirements of the individual customer. To implement this example in JFlow, each account can be labeled by an attached dynamic label value. A variable of type label may be used both as a first-class value and as a label for other values. For example, methods can accept arguments with run-time labels, as in the following method declaration:

    static float{*lb} compute(int x{*lb}, label lb)

In this example, the component expression *lb denotes the label contained in the variable lb, rather than the label of the variable lb. To preserve safety, variables of type label (such as lb) may be used to construct labels only if they are immutable after initialization; in Java terminology, if they are final. (Unlike in Java, arguments in JFlow are always final.) The important power that run-time labels add is the ability to be examined at run-time, using the switch label statement. An example of this statement is shown in Figure 2. The code in this figure attempts to transfer an integer from the variable x to the variable y. This transfer is not necessarily safe, because x's label, lb, is not known statically. The statement examines the run-time label of the expression x, and executes one of several case statements.

    label{L} lb;
    int{*lb} x;
    int{p:} y;

    switch label(x) {
        case (int{y} z): y = z;
        else: throw new UnsafeTransfer();
    }

    Figure 2: Switch label
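The arm-selection behavior of switch label can be sketched in Python, reusing the dict-based label encoding from earlier. The `can_relabel` helper below is an illustrative stand-in for the ⊑ check (flat principal hierarchy, no acts-for relationships); the real check is richer.

```python
# Sketch of the switch label selection rule: take the first arm whose
# declared label is at least as restrictive as the expression's
# run-time label. Labels are dicts owner -> reader set.

def can_relabel(l1, l2):
    return all(any(o2 == o1 and r2 <= r1 for o2, r2 in l2.items())
               for o1, r1 in l1.items())

def switch_label(runtime_label, arms):
    """arms: list of (declared_label, handler). Returns the handler of
    the first arm for which the implicit assignment would be legal."""
    for declared, handler in arms:
        if can_relabel(runtime_label, declared):
            return handler
    raise RuntimeError("UnsafeTransfer")   # the 'else' branch

# x owned by p with extra reader q; the arm declares int{p:}.
print(switch_label({"p": {"q"}}, [({"p": set()}, "case int{p:} z")]))
```

If no arm's label covers the run-time label, the else branch runs, modeled here by the raised exception.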
The statement executed is the first whose associated label is at least as restrictive as the expression label; that is, the first statement for which the assignment of the expression value to the declared variable (in this case, z) is legal. If it is the case that {*lb} ⊑ {p:}, the first arm of the switch will be executed, and the transfer will occur safely via z. Otherwise, the code throws an exception. Since lb is a run-time value, information may be transferred through it. This can occur in the example by observing which of the two arms of the switch are executed. To prevent this information channel from becoming an information leak, the pc in the first arm is augmented to include lb's label, which is {L}. The code passes static checking only if the assignment from y to z is legal; that is, if {L} ⊑ {y}.

    class Account {
        final principal customer;
        String{customer:} name;
        float{customer:} balance;
    }

    Figure 3: Bank account using run-time principals

Run-time labels can be manipulated statically, though conservatively; they are treated as an unknown but fixed label. The presence of such opaque labels is not a problem for static analysis, because of the lattice properties of these labels. For example, given any two labels L1 and L2 where L1 ⊑ L2, it is the case for any third label L3 that L1 ⊔ L3 ⊑ L2 ⊔ L3. This implication makes it possible for an opaque label L3 to appear in a label without preventing static analysis. Using it, unknown labels, including run-time labels, can be propagated statically.

2.5 Authority and declassification

JFlow has capability-like access control that is both dynamically and statically checked. A method executes with some authority that has been granted to it. The authority is essentially the capability to act for some set of principals, and controls the ability to declassify data. Authority also can be used to build more complex access control mechanisms.
At any given point within a program, the compiler understands the code to be running with the ability to act for some set of principals, called the static authority of the code at that point. The actual authority may be greater, because those principals may be able to act for other principals. The principal hierarchy may be tested at any point using the actsFor statement. The statement actsFor(p1, p2) S executes the statement S if the principal p1 can act for the principal p2. Otherwise, the statement S is skipped. The statement S is checked under the assumption that this acts-for relation exists: for example, if the static authority includes p1, then during static checking of S, it is augmented to include p2. A program can use its authority to declassify a value. The expression declassify(e, L) relabels the result of an expression e with the label L. Declassification is checked statically, using the static authority at the point of declassification. The declassify expression may relax policies owned by principals in the static authority.

2.6 Run-time principals

Like labels, principals may also be used as first-class values at run time. The type principal represents a principal that is a value. A final variable of type principal may be used as if it were a real principal. For example, a policy may use a final variable of type principal to name an owner or reader. These variables may also be used in actsFor statements, allowing static reasoning about parts of the principal hierarchy that may vary at run time. When labels are constructed using run-time principals, declassification may also be performed on these labels.
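The static check on declassify(e, L) can be sketched with the same simplified dict-based labels: policies owned by principals in the static authority may be relaxed or dropped; every other owner's policy must still be covered by the target label. As before, this sketch assumes a flat principal hierarchy.

```python
# Sketch of checking declassify(e, L): only policies owned by
# principals in the static authority may be weakened. Labels are
# dicts owner -> reader set; no acts-for hierarchy is modeled.

def declassify_ok(from_label, to_label, authority):
    for owner, readers in from_label.items():
        if owner in authority:
            continue                      # owner's policy may be relaxed
        covered = any(o == owner and r <= readers
                      for o, r in to_label.items())
        if not covered:
            return False                  # would weaken another owner
    return True

secret = {"root": set(), "user": {"user"}}
# With root's authority, root's policy may be dropped:
print(declassify_ok(secret, {"user": {"user"}}, {"root"}))   # True
# Without it, the same declassification is rejected:
print(declassify_ok(secret, {"user": {"user"}}, set()))      # False
```

This captures the decentralized guarantee: a process acting for root cannot weaken the policy owned by user.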
    public class Vector[label L] extends AbstractList[L] {
        private int{L} length;
        private Object{L}[]{L} elements;

        public Vector() ...

        public Object elementAt(int i):{L; i}
            throws (ArrayIndexOutOfBoundsException) {
            return elements[i];
        }
        public void setElementAt{L}(Object{} o, int{} i) ...
        public int{L} size() { return length; }
        public void clear{L}() ...
    }

    Figure 4: Parameterization over labels

Run-time principals are needed in order to model systems that are heterogeneous with respect to the principals in the system, without resorting to declassification. For example, a bank might store bank accounts with the structure shown in Figure 3, using run-time principals rather than run-time labels. With this structure, each account may be owned by a different principal (the customer whose account it is). The security policy for each account has similar structure but is owned by the principal in the instance variable customer. Code can manipulate the account in a manner that is generic with respect to the contained principal, but can also determine at run-time which principal is being used. The principal customer may be manipulated by an actsFor statement, and the label {customer:} may be used by a switch label statement.

2.7 Classes

Even in the type domain, parameterizing classes is important for building reusable data structures. It is even more important to have polymorphism in the information flow domain; the usual way to handle the absence of statically-checked type polymorphism is to perform dynamic type casts, but this approach works poorly when applied to information flow since new information channels are created by dynamic tests. To allow usable data structures in JFlow, classes may be parameterized to make them generic with respect to some number of labels or principals. Class and interface declarations are extended to include an optional set of explicitly declared parameters. For example, the Java Vector class is translated to JFlow as shown in Figure 4.
Vector is parameterized on the label L, which represents the label of the contained elements. Assuming that secret and public are appropriately defined, the types Vector[{secret}] and Vector[{public}] would represent vectors of elements of differing sensitivity. Without the ability to parameterize classes on labels, it would be necessary to reimplement Vector for every distinct element label. The addition of label and principal parameters to JFlow makes parameterized classes into simple dependent types [Car91], since types contain values. To ensure that these dependent types have a well-defined meaning, only immutable variables may be used as parameters. Note that even if {secret} ⊑ {public}, it is not the case that Vector[{secret}] ≤ Vector[{public}], since subtyping is invariant in the parameter L (the subtype relation is denoted here by ≤). When such a relation is sound, the parameter may be declared as a covariant label rather than as a label, which places additional restrictions on its use. For example, no method argument or mutable instance variable may be labeled using the parameter. A class always has one implicit label parameter: the label {this}, which represents the label on an object of the class. Because L1 ⊑ L2 implies that C{L1} acts like a subtype of C{L2}, the label of this is necessarily a covariant parameter, and its use is restricted in the same manner as with other covariant parameters. A class may have some authority granted to its objects by adding an authority clause to the class header. The authority clause may name principals external to the program, or principal parameters. If the authority clause names external principals, the process that installs the class into the system must have the authority of the named principals. If the authority clause names principals that are parameters of the class, the code that creates an object of the class must have the authority of the actual principal parameters used in the call to the constructor.
If a class C has a superclass Cs, any authority in Cs must be covered by the authority clause of C. It is not possible to obtain authority by inheriting from a superclass.

2.8 Methods

Like class declarations, JFlow method declarations also contain some extensions. There are a few optional annotations to manage information flow and authority delegation. A method header has the following syntax (in the form of the Java Language Specification [GJS96]):

    MethodHeader:
        Modifiers_opt LabeledType Identifier BeginLabel_opt
        ( FormalParameterList_opt ) EndLabel_opt
        Throws_opt WhereConstraints_opt

    FormalParameter:
        LabeledType Identifier OptDims

The return value, the arguments, and the exceptions may each be individually labeled. One subtle change from Java is that arguments are always implicitly final, allowing them to be used as type parameters. This change is made for the convenience of the programmer and does not significantly change the power of the language. There are also two optional labels called the begin-label and the end-label. The begin-label is used to specify any restriction on pc at the point of invocation of the method. The end-label (the final pc) specifies what information can be learned by observing whether the method terminates normally. Individual exceptions and the return value itself also may have their own distinct labels, which provides finer-grained tracking of information flow. In Figure 5 are some examples of JFlow method declarations. When labels are omitted in a JFlow program, a default label is assigned. The effect of these defaults is that often methods require no label annotations whatever.

    static int{x; y} add(int x, int y) {
        return x + y;
    }

    boolean compare_str(String name, String pwd):{name; pwd}
        throws (NullPointerException) { ... }

    boolean store{L}(int{} x) throws (NotFound) { ... }

    Figure 5: JFlow method declarations

Labels may be omitted from a method declaration, signifying the use of implicit label polymorphism. For example, the arguments of add and compare_str are unlabeled. When an argument label is omitted, the method is generic with respect to the label of the argument. The argument label becomes an implicit parameter of the procedure. For example, the method add can be called with any two integers x and y, regardless of their labels. This label polymorphism is important for building libraries of reusable code. Without it, a math routine like add would have to be reimplemented for every argument label ever used. The default label for a return value is the end-label, joined with the labels of all the arguments. For add, the default return value label is exactly the label written ({x; y}), so the return value could be written just as int. The default label on an exception is the end-label, as in the compare_str example. If the begin-label is omitted, as in add, it becomes an implicit parameter to the method. Such a method can be called regardless of the caller's pc. Because the pc within the method contains an implicit parameter, this method is prevented from causing real side effects; it may of course modify local variables and mutate objects passed as arguments if they are appropriately declared, but true side effects would create static checking errors. Unlike in Java, the method may contain a list of constraints prefixed by the keyword where:

    WhereConstraints:
        where Constraints

    Constraint:
        authority ( Principals )
        caller ( Principals )
        actsFor ( Principal, Principal )

There are three different kinds of constraints:

authority(p1, ..., pn): This clause lists principals that the method is authorized to act for.
The static authority at the beginning of the method includes the set of principals listed in this clause. The principals listed may be either names of global principals, or names of class parameters of type principal. Every listed principal must be also listed in the authority clause of the method's class. This mechanism obeys the principle of least privilege, since not all the methods of a class need to possess the full authority of the class.

caller(p1, ..., pn): Calling code may also dynamically grant authority to a method that has a caller constraint. Unlike with the authority clause, where the authority devolves from the object itself, authority in this case devolves from the caller. A method with a caller clause may be called only if the calling code possesses the requisite static authority. The principals named in the caller clause need not be constants; they may also be the names of method arguments whose type is principal. By passing a principal as the corresponding argument, the caller grants that principal's authority to the code. These dynamic principals may be used as first-class principals; for example, they may be used in labels.

actsFor(p1, p2): An actsFor constraint may be used to prevent the method from being called unless the specified acts-for relationship (p1 acts for p2) holds at the call site. When the method body is checked, the static principal hierarchy is assumed to contain any acts-for relationships declared in the method header.

    class passwordFile authority(root) {
        public boolean check(String user, String password)
            where authority(root) {
            // Return whether password is correct
            boolean match = false;
            try {
                for (int i = 0; i < names.length; i++) {
                    if (names[i] == user &&
                        passwords[i] == password) {
                        match = true;
                        break;
                    }
                }
            }
            catch (NullPointerException e) {}
            catch (IndexOutOfBoundsException e) {}
            return declassify(match, {user; password});
        }
        private String[] names;
        private String{root:}[] passwords;
    }

    Figure 6: A JFlow password file
7 class Protected f nal labelfthis lb Objectflb content public ProtectedfLL(ObjectfLL x, label LL) f lb = LL // must occur before call to super() super() // content = x // checked assumin lb == LL public ObjectfL et(label L):fL throws (IllealAccess) f switch label(content) f case (ObjectfL unwrapped) return unwrapped else throw new IllealAccess() public label et label() f return lb Fiure 7: The Protected class match also must have this label in order to allow the assinment match = true. This label prevents match from bein returned directly as a result, since the label of the return value is the default label, fuser password. Finally, the method declassifies match to this desired label, usin its compiled-in authority to act for root. Note that the exceptions NullPointerException and IndexOutOfBoundsException must be explicitly cauht, since the method does not explicitly declare them. More precise reasonin about the possibility of exceptions would make JFlow code more convenient to write. Otherwise there is very little difference between this code and the equivalent Java code. Only three annotations have been added: an authority clause statin that the principal root trusts the code, a declassify expression, and a label on the elements of passwords. The labels for all local variables and return values are either inferred automatically or assined sensible defaults. The task of writin prorams is made easier in JFlow because label annotations tend to be required only where interestin security issues are present. In this method, the implementor of the class has decided that declassification of match results in an acceptably small leak of information. Like all loin procedures, this method does leak information, because exhaustively tryin passwords will eventually extract the passwords from the password file. However, assumin that the space of passwords is lare and passwords are difficult to uess, the amount of information ained in each trial is far less than one bit. 
Reasonin processes about acceptable leaks of information lie outside the domain of information flow control, but in this system, such reasonin processes can be accommodated in a natural and decentralized manner Example: Protected The class Protected provides a convenient way of manain run-time labels, as in the bank account example mentioned earlier. Its implementation is shown in Fiure 7. As the implementation shows, a Protected is an immutable pair containin a value content of type Object and a label lb that protects the value. Its value can be extracted with the et method, but the caller must provide a label to use for extraction. If the label is insufficient to protect the data, an exception is thrown. A value of type Protected behaves very much like a value in dynamic-checked information flow systems, since it carries a run-time label. A Protected has an obvious analoue in the type domain: a value dynamically associated with a type ta (e.., the Dynamic type [ACPP91]). One key to makin Protected convenient is to label the instance variable lb with fthis. Without this labelin, Protected would need an additional explicit covariant label parameter to label lb with Limitations JFlow is not completely a superset of Java. Certain features have been omitted to make information flow control tractable. Also, JFlow does not eliminate all possible information leaks. Certain covert channels (particularly, various kinds of timin channels) are difficult to eliminate. Prior work has addressed static control of timin channels, thouh the resultin rules are restrictive [AR80, SV98]. Other covert channels arise from Java lanuae features: Threads. JFlow does not prevent threads from communicatin covertly via the timin of asynchronous modifications to shared objects. This covert channel can be prevented by requirin only sinle-threaded prorams. Timin channels. 
JFlow cannot prevent threads from improperly ainin information by timin code with the system clock, except by removin access to the clock. HashCode. In Java, the built-in implementation of the hashcode method, provided by the class Object, can be used to communicate information covertly. Therefore, in JFlow every class must implement its own hashcode. Static variables. The order of static variable initialization can be used to communicate information covertly. In JFlow, this channel is blocked by rulin out static variables. However, static methods are leal. This restriction does not sinificantly hurt expressive power, since a proram that uses static variables usually can be rewritten as a proram in which the static variables are instance variables of an object. The order of initialization of these objects then becomes explicit and susceptible to analysis. Finalizers. Finalizers are run in a separate thread from the main proram, and therefore can be used to communicate covertly. Finalizers are not part of JFlow. Resource exhaustion. An OutOfMemoryError can be used to communicate information covertly, by conditionally allocatin objects until the heap is exhausted. JFlow treats this error as fatal, preventin it from communicatin more than a sinle bit of information per proram execution. Other exhaustion errors such as StackOverowError are treated similarly. Wall-clock timin channels. A JFlow proram can chane its run time based on private information it has ob- 7
8 served. As an extreme example, it can enter an infinite loop. JFlow does not attempt to control these channels. Unchecked exceptions. Java allows users to define exceptions that need not be declared in method headers (unchecked exceptions), althouh this practice is described as atypical [GJS96]. In JFlow, there are no unchecked exceptions, since they could serve as covert channels. Type discrimination on parameters. JFlow supports the run-time cast and instanceof operators of standard Java, but they may only be invoked usin classes that lack parameters. The reason for this restriction is that information about the parameters is not available at run time. These operators could be permitted if the parameters were statically known to be matched, but this is not currently supported. Backward compatibility. JFlow is not backward compatible with Java, since existin Java libraries are not flowchecked and do not provide flow annotations. However, in many cases, a Java library can be wrapped in a JFlow library that provides reasonable annotations. 3 Static checkin and translation This section covers the static checkin that the JFlow compiler performs as it translates code, and the translation process itself. 3.1 Exceptions An important limitation of earlier attempts to create lanuaes for static flow checkin has been the absence of usable exceptions. For example, in Dennin s oriinal work on static flow checkin, exceptions terminated the proram [DD77] because any other treatment of exceptions seeminly leaked information. Subsequent work has avoided exceptions entirely. It miht seem unnecessary to treat exceptions directly, since in many lanuaes, a function that enerates exceptions can be desuared into a function that returns a discriminated union or oneof. However, there are problems with this approach. The obvious way to handle oneofs causes all exceptions to carry the same label an unacceptable loss of precision. 
Also, Java exceptions are actually objects, and the try:::catch statement functions like a typecase. This model cannot be translated directly into a oneof. Nevertheless, it is useful to consider how oneof types miht be handled in JFlow. The obvious way to treat oneof types is by analoy with record types. Each arm of the oneof has a distinct label associated with it. In addition, there is an added inteer field ta that indicates which of the arms of the oneof is active. The problem with this model is that every assinment to the oneof will require that ftavpc, and every attempt to use the oneof will implicitly read fta. As a result, every arm of the oneof will effectively carry the same label. For modelin exceptions, this is unacceptable. For each expression or statement, the static checker determines its path labels, which are the labels for the information transmitted by various possible termination paths: normal termination, termination throuh exceptions, termination throuh a return statement, and so on. This fine-rained analysis avoids the unnecessary restrictiveness that would be produced by desuarin exceptions. Each exception that can be raised by evaluatin a statement or expression has a possibly distinct label that is transferred to the pc of catch statements that miht intercept it. Even finer resolution is provided for normal termination and for return termination; for example, the label of the value of an expression may differ from the label associated with normal termination. Finally, termination of a statement by a break or continue statement is also tracked without confusin distinct break or continue tarets. The path labels for a statement or expression are represented as a map from symbols to labels. Each mappin represents a termination path that the statement or expression miht take, and the label of the mappin indicates what information may be transmitted if this path is known to be the actual termination path. 
The domain of the map includes several different kinds of entities. The symbol n represents normal termination. The symbol r represents termination through a return statement. Classes that inherit from Throwable may appear in the domain; a mapping from a class represents termination by an exception. The symbols nv and rv represent the labels of the normal value of an expression and the return value of a statement, respectively. They do not represent paths themselves, but it is convenient to include them as part of the map. Their labels are always at least as restrictive as the labels of the corresponding paths. A tuple of the form ⟨goto L⟩ represents termination by executing a named break or continue statement that jumps to the target L. A break or continue statement that does not name a target is represented by the tuple ⟨goto ·⟩. These tuples are always mapped to the label ⊤, since the static checking rules do not use the actual label.

Path labels are denoted by the letter X in this paper, and members of the domain of X (paths) are denoted by s. The expression X[s] denotes the label that X maps s to, and the expression X[s := L] denotes a new map that is exactly like X except that s is bound to L. Path labels may also map a symbol s to the pseudo-label ∅, indicating that the statement cannot terminate through the path s. The label ∅ acts as the bottom of the label lattice; ∅ ⊔ L = L for all labels L, including the label {}. The special path labels X∅ map all paths to ∅, corresponding to an expression that does not terminate.

3.2 Type checking vs. label checking

The JFlow compiler performs two kinds of static checking as it compiles a program: type checking and label checking. These two aspects of checking cannot be entirely disentangled, since labels are type constructors and appear in the rules for subtyping. However, the checks needed to show that a statement or expression is safe largely can be classified as either type or label checks.

    A[C] = ⟨class C[..Pi..] ... {...}⟩
    (A ⊢ Qi ≈ Q′i) ∨ (Pi = ⟨covariant label id⟩ ∧ A ⊢ Qi ⊑ Q′i)
    ─────────────────────────────────────────────
    A ⊢T C[..Qi..] ≤ C[..Q′i..]

    A[C] = ⟨class C[..Pi..] extends ts ... {...}⟩
    Ts = interp-T(ts, class-env(C[..Qi..]))
    A ⊢T Ts ≤ C′[..Q′i..]
    ─────────────────────────────────────────────
    A ⊢T C[..Qi..] ≤ C′[..Q′i..]

Figure 8: Subtype rules

This paper focuses on the rules for checking labels, since the type checks are almost exactly the same as in Java. There are several kinds of judgments made during static checking. The judgment A ⊢T E : T means that E has type T in environment A. The judgment A ⊢ E : X is the information-flow counterpart: it means that E has path labels X in environment A. The symbol ⊢T is used to denote inferences in the type domain. The environment A maps identifiers (e.g., class names, parameter names, variable names) to various kinds of entities. As with path labels, the notation A[s] is the binding of symbol s in A. The notation A[s := B] is a new environment with s rebound to B. In the rules given here, it is assumed that the declarations of all classes are found in the global environment, A.

A few more comments on notation will be helpful at this point. The use of large brackets indicates an optional syntactic element. The letter T represents a type, and t represents a type expression. The letter C represents the name of a class. The letter L represents a label, and l represents a label expression. The letter τ represents a labeled type expression; that is, a pair containing a type expression and an optional label expression. The function interp-T(t, A) converts type expressions to types, and the function interp-L(l, A) converts label expressions to labels. The letter v represents a variable name. The letter P represents a formal parameter of a class, and the letter Q represents an actual parameter used in an instantiation of a parameterized class.
3.3 Subtype rules

There are some interesting interactions between type and label checking. Consider the judgment A ⊢T S ≤ T, meaning S is a subtype of T. This judgment must be made in JFlow, as in all languages with subtyping. Here, S and T are ordinary unlabeled types. The subtype rule, shown in Figure 8, is as in Java, except that it must take account of class parameters. If S or T is an instantiation of a parameterized class, subtyping is invariant in the parameters except when a label parameter is declared to be covariant. This subtyping rule is the first one shown in Figure 8. The function class-env, used in the figure, generates an extension of the global environment in which the formal parameters of a class (if any) are bound to the actual parameters: A[..param-id(Pi) := Qi..]

    true
    ─────────────────────────────────────────────
    A ⊢ literal : X∅[n := A[pc], nv := A[pc]]

    true
    ─────────────────────────────────────────────
    A ⊢ ; : X∅[n := A[pc]]

    A[v] = ⟨var [final] T{L} uid⟩
    X = X∅[n := A[pc], nv := L ⊔ A[pc]]
    ─────────────────────────────────────────────
    A ⊢ v : X

    A ⊢ E : X
    A[v] = ⟨var T{L} uid⟩
    A ⊢ X[nv] ⊑ L
    ─────────────────────────────────────────────
    A ⊢ v = E : X

    A ⊢ S1 : X1
    extend(A, S1)[pc := X1[n]] ⊢ S2 : X2
    X = X1[n := ∅] ⊕ X2
    ─────────────────────────────────────────────
    A ⊢ S1; S2 : X

    (X = X1 ⊕ X2)  ≡  ∀s. (X[s] = X1[s] ⊔ X2[s])

Figure 9: Some simple label-checking rules

Using this rule, Vector[L] (from Figure 4) would be a subtype of AbstractList[L′] only if L ≈ L′. Java arrays (written as T{L}[ ]) are treated internally as a special type with two parameters, T and L. As in Java, they are covariant in T, but like most JFlow classes, invariant in L. User-defined types may not be parameterized on other types. If S and T are not instantiations of the same class, it is necessary to walk up the type hierarchy from S to T, rewriting parameters, as shown in the second rule in Figure 8. Together, the two rules inductively prove the appropriate subtype relationships.

3.4 Label-checking rules

Let us consider a few examples of static checking rules. Space restrictions prevent presentation of all the rules, but a complete description of the static checking rules of JFlow is available [Mye99].
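The path-label maps X[s], the update X[s := L], and the ⊕ merge defined in Figure 9 can be modeled concretely. The sketch below is illustrative only: it uses a toy label representation (sets of policy strings, with join as set union and the pseudo-label ∅ as the empty set) and hypothetical class names; JFlow's actual representation is richer.

```java
import java.util.*;

// Toy model of path labels: a map from termination paths ("n", "r",
// exception class names, ...) to labels, where a label is a set of
// policies, join is union, and an absent path means the pseudo-label ∅.
class PathLabels {
    final Map<String, Set<String>> m = new HashMap<>();

    // X[s]: the label that X maps s to (∅, i.e. empty, if unmapped).
    Set<String> get(String path) {
        return m.getOrDefault(path, Set.of());
    }

    // X[s := L]: a new map exactly like this one, with s rebound to L.
    PathLabels set(String path, Set<String> l) {
        PathLabels r = new PathLabels();
        r.m.putAll(m);
        r.m.put(path, l);
        return r;
    }

    // X1 ⊕ X2: join the labels of all corresponding paths from both maps.
    static PathLabels merge(PathLabels a, PathLabels b) {
        PathLabels r = new PathLabels();
        Set<String> paths = new HashSet<>(a.m.keySet());
        paths.addAll(b.m.keySet());
        for (String s : paths) {
            Set<String> l = new HashSet<>(a.get(s));
            l.addAll(b.get(s));
            r.m.put(s, l);
        }
        return r;
    }
}
```

Under this model, the sequencing rule's X1[n := ∅] ⊕ X2 is just a `set` of the normal path to the empty set followed by a `merge`.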
Consider Figure 9, which contains some of the most basic rules for static checking. The first rule shows that a literal expression always terminates normally and that its value is labeled with the current pc, as described earlier. The second rule shows that an empty statement always terminates normally, with the same pc as at its start. The third rule shows that the value of a variable is labeled with both the label of the variable and the current pc. Note that the environment maps a variable identifier to an entry of either the form ⟨var T{L} uid⟩ or ⟨var final T{L} uid⟩, where T is the variable's type, L is its label, and uid is a unique identifier distinguishing it from other variables of the same name.

    A ⊢ Ea : Xa
    A[pc := Xa[n]] ⊢ Ei : Xi
    A[pc := Xi[n]] ⊢ Ev : Xv
    X1 = exc(Xa ⊕ Xi ⊕ Xv, Xa[nv], NullPointerException)
    X2 = exc(X1, Xa[nv] ⊔ Xi[nv], OutOfBoundsException)
    X = exc(X2, Xa[nv] ⊔ Xv[nv], ArrayStoreException)
    A ⊢T Ea : T{La}[ ]
    A ⊢ Xv[nv] ⊔ X[n] ⊑ La
    ─────────────────────────────────────────────
    A ⊢ Ea[Ei] = Ev : X

    A ⊢ E : XE
    A[pc := XE[nv]] ⊢ S1 : X1
    A[pc := XE[nv]] ⊢ S2 : X2
    X = XE[n := ∅] ⊕ X1 ⊕ X2
    ─────────────────────────────────────────────
    A ⊢ if (E) S1 else S2 : X

    L = fresh-variable()
    A′ = A[pc := L, ⟨goto ·⟩ := L]
    A′ ⊢ E : XE
    A′[pc := XE[nv]] ⊢ S : XS
    A ⊢ XS[n] ⊑ L
    X = (XE ⊕ XS)[⟨goto ·⟩ := ∅]
    ─────────────────────────────────────────────
    A ⊢ while (E) S : X
    A ⊢ do S while (E) : X

    A ⊢ A[pc] ⊑ A[⟨goto L⟩]
    ─────────────────────────────────────────────
    A ⊢ continue L : X∅[⟨goto L⟩ := ⊤]
    A ⊢ break L : X∅[⟨goto L⟩ := ⊤]

    A ⊢ S : X′
    s ∈ {n, r}
    ∀(s′ | s′ ∈ paths ∧ s′ ≠ s). X′[s′] = ∅
    paths = all symbols except nv, rv
    X = X′[s := A[pc]]
    ─────────────────────────────────────────────
    A ⊢ S : X

    exc(X, L, C) = X ⊕ X∅[n := L, nv := L, C := L]

Figure 10: More label-checking rules

The fourth rule covers assignment to a variable. Assignment is allowed if the variable's label is more restrictive than that of the value being assigned (which will include the current pc). Whether one label is more restrictive than another is inferred using the current environment, which contains information about the static principal hierarchy. The complete rule for checking this statement would have an additional antecedent A ⊢T E : T, but such type-checking rules have been omitted in the interest of space.

The final rule in Figure 9 covers two statements S1 and S2 performed in sequence. The second statement is executed only if the first statement terminated normally, so the correct pc for checking the second statement is the normal path label of the first statement (X1[n]). The function extend extends the environment A to add any local variable declarations in the statement S1.
The path labels of the sequence must be at least as restrictive as the path labels of both statements; this condition is captured by the ⊕ operator, which merges two sets of path labels, joining all corresponding paths from both.

Figure 10 contains some more complex rules. The rule for array element assignment mirrors the order of evaluation of the expression. First, the array expression Ea is evaluated, yielding path labels Xa. If it completes normally, the index expression Ei is evaluated, yielding Xi. Then, the assigned value is evaluated. Java checks for three possible exceptions before performing the assignment. The function exc, defined at the bottom of the figure, is used to simplify these conditions. This function creates a set of path labels that are just like X except that they include an additional path, the exception C, with the path label L. Since observation of normal termination (n) or the value on normal termination (nv) is conditional on the exception not being thrown, exc joins the label L to these two mappings as well. Finally, avoiding leaks requires that the label on the array elements (La) is at least as restrictive as the label on the information being stored (Xv[nv]).

The next rule shows how to check an if statement. First, the path labels XE of the expression are determined. Since execution of S1 or S2 is conditional on E, the pc for these statements must include the value label of E, XE[nv]. Finally, the statement as a whole can terminate through any of the paths that terminate E, S1, or S2, except normal termination of E, since this would cause one of S1 or S2 to be executed. If the statement has no else clause, the statement S2 is considered to be an empty statement, and the second rule in Figure 9 is applied.

The next rule, for the while statement, is more subtle because of the presence of a loop. This rule introduces a label variable L to represent the information carried by the continuation of the loop through various paths.
L represents an unknown label that will be solved for later. It is essentially a loop invariant for information flow. L may carry information from exceptional termination of E or S, or from break or continue statements that occur inside the loop. An entry is added to the environment for the tuple ⟨goto ·⟩ to capture information flows from any break or continue statements within the loop. The rules for checking break and continue, shown below the rule for while, use these environment entries to apply the proper restriction on information flow. Assuming that L is the entering pc label, XS[n] is the final pc label. The final condition (A ⊢ XS[n] ⊑ L) requires that the final pc label be at most as restrictive as L, which is what establishes the loop invariant.

    A ⊢T E : class C {...}
    A ⊢ E : XE
    X = exc(XE, XE[nv], C)[n := ∅]
    ─────────────────────────────────────────────
    A ⊢ throw E : X

    y = true;
    try {
        if (x) throw new E();
        y = false;
    } catch (E e) {}

    A ⊢ S : XS
    pci = exc-label(XS, Ci)
    A[pc := pci, vi := ⟨var final Ci{pci} fresh-uid()⟩] ⊢ Si : Xi
    X = (⊕i Xi) ⊕ uncaught(XS, (..Ci..))
    ─────────────────────────────────────────────
    A ⊢ try {S} ..catch(Ci vi) {Si}.. : X

    A ⊢ S1 : X1
    A ⊢ S2 : X2
    X = X1[n := ∅] ⊕ X2
    ─────────────────────────────────────────────
    A ⊢ try {S1} finally {S2} : X

    exc-label(X, C) = ⊔ { X[C′] | C′ ≤ C ∨ C ≤ C′ }

    (X′ = uncaught(X, (..Ci..)))  ≡  X′[s] = (if ∃i. (s ≤ Ci) then ∅ else X[s])

Figure 11: Exception-handling rules

The last rule in Figure 10 applies to any statement, and is important for relaxing restrictive path labels. It is intuitive: if a statement (or a sequence of statements) can only terminate normally, the pc at the end is the same as the pc at the beginning. The same is true if the statement can only terminate with a return statement. This rule is called the single-path rule.

It would not be safe for this rule to apply to exception paths. To see why, suppose that a set of path labels formally contains only a single exception path C. However, that path might include multiple paths consisting of exceptions that are subclasses of C. These multiple paths can be discriminated using a try...catch statement. The unusual Java exception model prevents the single-path rule from being applied to exception paths. However, Java is a good language to extend for static flow analysis in other ways, because it fully specifies evaluation order. This property makes static checking of information flow simpler, because the rules tend to encode all possible evaluation orders. If there were non-determinism in evaluation order, it could be encoded by adding label variables in a manner similar to the rule for the while statement.

3.5 Throwing and catching exceptions

Exceptions can be thrown and caught safely in JFlow using the usual Java constructs.
Figure 11 shows the rule for the throw statement, a try...catch statement that lacks a finally clause, and a try...finally statement. (A try statement with both catch clauses and a finally clause can be desugared into a try...catch inside a try...finally.)

Figure 12: Implicit flow using throw

The rule for throw is straightforward. The idea behind the try...catch rule is that each catch clause is executed with a pc that includes all the paths that might cause the clause to be executed: all the paths that are exceptions where the exception class is either a subclass or a superclass of the class named in the catch clause. The function exc-label joins the labels of these paths. The path labels of the whole statement merge all the path labels of the various catch clauses, plus the paths from XS that might not be caught by some catch clause, which include the normal termination path of XS, if any. The try...finally rule is very similar to the rule for sequencing two statements. One difference is that the statement S2 is checked with exactly the same initial pc that S1 is, since S2 is executed no matter how S1 terminates.

To see how these exception rules work, consider the code in Figure 12. In this example, x and y are boolean variables. This code transfers the information in x to y by using an implicit flow resulting from an exception. In fact, the code is equivalent to the assignment y = x. Using the rule of Figure 11, the path labels of the throw statement are {E ↦ {x}}, so the path labels of the if statement are X = {E ↦ {x}, n ↦ {x}}. The assignment y = false is checked with pc = X[n] = {x}, so the code is allowed only if {x} ⊑ {y}. This restriction is correct, since it is exactly what the equivalent assignment statement would have required. Finally, applying both the try-catch rule here and the single-path rule from Figure 10, the value of pc after the code fragment is seen to be the same as at its start. Throwing and catching an exception does not necessarily taint subsequent computation.
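The equivalence claimed for Figure 12 can be confirmed by running the pattern in plain Java with the flow annotations erased: the exception-based code computes exactly y = x, which is why JFlow imposes the same restriction as for that assignment. (The class and method names below are illustrative, not from the paper.)

```java
// Plain-Java rendering of the Figure 12 pattern: the exception-based
// code transfers x into y via an implicit flow, so it behaves exactly
// like the assignment y = x.
class ImplicitFlowDemo {
    static class E extends Exception {}

    static boolean leak(boolean x) {
        boolean y = true;
        try {
            if (x) throw new E();
            y = false;
        } catch (E e) {
            // Reaching this handler reveals that x was true, so y keeps
            // its initial value true exactly when x is true.
        }
        return y;
    }
}
```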
3.6 Run-time label checking

An interesting aspect of checking JFlow is checking the switch label statement, which inspects a dynamic label at run time. The inference rule for checking this statement is given in Figure 13. Intuitively, the switch label statement tests the equation XE[nv] ⊑ Li for every arm until it finds one for which the equation holds, and executes it. However, this test cannot be evaluated either statically or at run time. Therefore, the test is split into two stronger conditions: one that can be tested statically, and one that can be tested dynamically. This rule naturally contains the static part of the test. Let LRT be the join of all possible run-time-representable components (i.e., components that do not mention formal label or principal parameters). The static test is that XE[nv] ⊔ LRT ⊑ Li ⊔ LRT (equivalently, XE[nv] ⊑ Li ⊔ LRT); the dynamic test is that XE[nv] ⊓ LRT ⊑ Li ⊓ LRT. Together, these two tests imply the full condition XE[nv] ⊑ Li.

    A ⊢ E : XE
    Li = interp-L(li, A)
    A ⊢ XE[nv] ⊑ Li ⊔ LRT
    A ⊢T E : T
    A ⊢T T ≤ interp-T(ti, A)
    pc0 = XE[n]
    pci = pci−1 ⊔ label(XE[nv] ⊔ Li)
    A[pc := pci, vi := ⟨var final Ti{Li} fresh-uid()⟩] ⊢ Si : Xi
    X = XE ⊕ (⊕i Xi)
    ─────────────────────────────────────────────
    A ⊢ switch label(E){..case (ti{li} vi) Si..} : X

Figure 13: Inference rule for switch label

The test itself may be used as an information channel, so after the check, the pc must include the labels of XE[nv] and every Li up to this point. This rule uses the label function to achieve this. When applied to a label L, it generates a new label that joins together the labels of all variables that are mentioned in L. However, the presence of label in constraint equations does not change the process of solving label constraints in any fundamental way.

3.7 Checking method calls

Let us now look at some of the static checking associated with objects. Static checking in object-oriented languages is often complex, and the various features of JFlow only add to the complexity. This section shows how, despite this complexity, method calls and constructor calls (via the operator new) are checked statically. The rules for checking method and constructor calls are shown in Figures 14 and 15. Figure 14 defines some generic checking that is performed for all varieties of calls, and Figure 15 defines the rules for checking ordinary method calls, static method calls, and constructor calls. To avoid repetition, the checking of both static and non-static method calls, and also constructor calls, is expressed in terms of the predicate call, which is defined in Figure 14. This predicate is in turn expressed in terms of two predicates: call-begin and call-end. The predicate call-begin checks the argument expressions and establishes that the constraints for calling the method are satisfied.
In this rule, the functions type-part and label-part interpret the type and label parts of a labeled type. The rule determines the begin-label LI, the default return-value label Ldef_RV, and the argument environment Aa, which binds all the method arguments to appropriately labeled types. Invoking a method requires evaluation of the arguments Ej, producing corresponding path labels Xj. The argument labels are bound in Aa to labels Lj, so the line (Xj[nv] ⊑ Lj) ensures that the actual arguments can be assigned to the formals. The begin-label LI is also required to be more restrictive than the pc after evaluating all of the arguments, which is Xmax(j)[n].

    A ⊢ call-begin(C[Qi], (..Ej..), S, Aa, LI, Ldef_RV)
    A ⊢ call-end(C[Qi], S, Aa, LI, Ldef_RV) : X
    ─────────────────────────────────────────────
    A ⊢ call(C[Qi], (..Ej..), S) : X

    S = ⟨[static] τr m{I} (..τj aj..) : {R} throws(..τk..) where Kl⟩
    X0 = X∅[n := A[pc]]
    A[pc := Xj−1[n]] ⊢ Ej : Xj
    Lj = fresh-variable()    uidj = fresh-uid()
    Ac = class-env(C[Qi])
    Aa = Ac[..aj := ⟨var final type-part(τj, Ac){Lj} uidj⟩..]
    LI = (if {I} then interp-L(I, Aa) else Xmax(j)[n])
    A ⊢ Lj ⊑ (if labeled(τj) then label-part(τj, Aa) ⊔ LI else Lj)
    A ⊢ Xj[nv] ⊑ Lj
    A ⊢ Xmax(j)[n] ⊑ LI
    Ldef_RV = ⊔j (if (τr = void) then {} else Xj[nv])
    satisfies-constraints(A, Aa, A[..aj := Ej..], (..Kl..))
    ─────────────────────────────────────────────
    A ⊢ call-begin(C[Qi], (..Ej..), S, Aa, LI, Ldef_RV)

    let interp(p) = interp-p-call(p, A, Aa, Am) in
    case Ki of
        authority(...) : true
        caller(..pj..) : ∀(pj) ∃(p′ ∈ A[auth]). A ⊢ p′ ≽ interp(pj)
        actsfor(p1, p2) : A ⊢ interp(p1) ≽ interp(p2)
    end
    ─────────────────────────────────────────────
    satisfies-constraints(A, Aa, Am, (..Ki..))

    S = ⟨[static] τr m{I} (..τj aj..) : {R} throws(..τk..) where Kl⟩
    LR = LI ⊔ (if {R} then interp-L(R, Aa) else {})
    LRV = LR ⊔ (if labeled(τr) then label-part(τr, Aa) else Ldef_RV)
    Ck = type-part(τk, class-env(C[Qi]))
    X0 = (⊕j Xj)[n := LR, nv := LRV]
    X = X0 ⊕ X∅[..Ck := label-part(τk, Aa) ⊔ LR..]
    ─────────────────────────────────────────────
    A ⊢ call-end(C[Qi], S, Aa, LI, Ldef_RV) : X

Figure 14: Checking calls
The call site must satisfy all the constraints imposed by the method, which is checked by the predicate satisfies-constraints. The rule for this predicate, also in Figure 14, uses the function interp-p-call, which maps identifiers used in the method constraints to the corresponding principals. To perform this mapping, the function needs environments corresponding to the calling code (A), the called code (Aa), and a special environment that binds the actual arguments (Am). The environment entry A[auth] contains the set of principals that the code is known statically to act for. The judgment A ⊢ p1 ≽ p2 means that p1 is known statically to act for p2. (The static principal hierarchy is also placed in the environment.)

    A ⊢T Es : C[..Qi..]
    A ⊢T Ej : Tj
    signature(C[..Qi..], m(..Tj..), S)
    A ⊢ Es : Xs
    A[pc := Xs[nv]] ⊢ call(C[..Qi..], (..Ej..), S) : X
    ─────────────────────────────────────────────
    A ⊢ Es.m(..Ej..) : X

    T = interp-T(t, A)
    A ⊢T Ej : Tj
    signature(T, m(..Tj..), S)
    A ⊢ call(T, (..Ej..), S) : X
    ─────────────────────────────────────────────
    A ⊢ t.m(..Ej..) : X

    T = C[..Qi..] = interp-T(t, A)
    A[C] = ⟨class C[..Pi..] ... authority(..pl..) ...⟩
    A ⊢T Ej : Tj
    signature(T, C(..Tj..), S)
    S = ⟨C{I} (..τj aj..) : {R} throws(..τk..) where Kl⟩
    S′ = ⟨static T{} m{I} (..τj aj..) : {R} throws(..τk..) where Kl⟩
    A ⊢ call(T, (..Ej..), S′) : X
    ∀(parameters pl) ∃(p ∈ A[auth]). A ⊢ p ≽ interp-p(pl, class-env(T))
    ─────────────────────────────────────────────
    A ⊢ new t(..Ej..) : X

Figure 15: Rules for specific calls

Finally, the predicate call-end produces the path labels X of the method call by assuming that the method returns the path labels that its signature claims. The label Ldef_RV is used as the label of the return value in the case where the return type, τr, is not labeled. It joins together the labels of all of the arguments, since typically the return value of a function depends on all of its arguments.

The rules for the various kinds of method calls are built on top of this framework, as shown in Figure 15. In these rules, the function signature obtains the signature of the named method from the class. The rule for constructors contains two subtle steps: first, constructors are checked as though they were static methods with a similar signature. Second, a call to a constructor requires that the caller possess the authority of all principals in the authority of the class that are parameters. The caller does not need to have the authority of external principals named in the authority clause.

3.8 Constraint solving

As the rules for static checking are applied, they generate a constraint system of label variables for each method [ML97]. For example, the assignment rule of Figure 9 generates a constraint X[nv] ⊑ L.
All of the constraints are of the form A1 ⊔ ... ⊔ Am ⊑ B1 ⊔ ... ⊔ Bn. These constraints can be split into individual constraints Ai ⊑ B1 ⊔ ... ⊔ Bn because of the lattice properties of labels. The individual terms in the constraints may be policies, label variables, label parameters, dynamic labels, or expressions label(L) for some label L. The constraints can be efficiently solved, using a modification to a lattice constraint-solving algorithm [RM96] that applies an ordering optimization [HDT87] shown to produce the best of several commonly-used iterative dataflow algorithms [KW94]. The approach is to initialize all variables in the constraints with the most restrictive label (⊤) and iteratively relax their labels until a satisfying assignment or a contradiction is found. The label function does not create problems because it is monotonic. The relaxation steps are ordered by topologically sorting the constraints and looping on strongly connected components. The number of iterations required is O(nh), where h is the maximum height of the lattice structure [RM96], and also O(nd), where d is the maximum number of back edges in a depth-first traversal of the constraint dependency graph [HDT87]. Both h and d seem likely to be bounded for reasonable programs. The observed behavior of the JFlow compiler is that constraint solving is a negligible part of run time.

    T[[ actsfor(p1, p2) S ]] =
        if (PH.actsFor(T[[p1]], T[[p2]])) T[[S]]

    T[[ switch label(E) { ..case(ti{li} vi) Si.. else Se } ]] =
        v = T[[E]];
        if (T[[XE[nv] ⊓ LRT]].relabelsTo(T[[L1 ⊓ LRT]])) { T[[S1]] }
        else ...
        if (T[[XE[nv] ⊓ LRT]].relabelsTo(T[[Li ⊓ LRT]])) { T[[Si]] }
        ...
        else { T[[Se]] }

Figure 16: Interesting JFlow translations

3.9 Translation

The JFlow compiler is a static checker and source-to-source translator. Its output is a standard Java program. Most of the annotations in JFlow have no run-time representation; translation erases them, leaving a Java program.
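The iterative solving described above can be sketched concretely. The sketch below is an assumption-laden simplification: labels are modeled as sets of policy strings (join = union), and it computes a least fixed point by propagating joins upward, whereas JFlow's solver starts at ⊤ and relaxes downward with a topologically ordered worklist. The `Solver` and `Constraint` names are hypothetical.

```java
import java.util.*;

// Toy solver for constraints of the form (const ⊔ join of vars) ⊑ target,
// over labels modeled as sets of policies. Iterates to a fixed point.
class Solver {
    record Constraint(List<String> vars, Set<String> consts, String target) {}

    static Map<String, Set<String>> solve(List<Constraint> cs, Set<String> allVars) {
        Map<String, Set<String>> val = new HashMap<>();
        for (String v : allVars) val.put(v, new HashSet<>());
        boolean changed = true;
        while (changed) {              // relax until no constraint changes anything
            changed = false;
            for (Constraint c : cs) {
                Set<String> lhs = new HashSet<>(c.consts());
                for (String v : c.vars()) lhs.addAll(val.get(v));
                if (val.get(c.target()).addAll(lhs)) changed = true;
            }
        }
        return val;                    // least satisfying assignment
    }
}
```

A real solver would additionally detect contradictions (an upper-bound constraint that cannot be met) and order the relaxation steps by strongly connected components, as the text describes.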
For example, all type labels are erased to produce the corresponding unlabeled Java type. Class parameters are erased. The declassify expression and statement are replaced by their contained expression or statement. Uses of the built-in types label and principal are translated to the Java types jflow.lang.label and jflow.lang.principal, respectively. Variables declared to have these types remain in the translated program.

Only two constructs translate to interesting code: the actsfor and switch label statements, which dynamically test principals and labels, respectively. The translated code for each is simple and efficient, as shown in Figure 16. Note that the translation rule for switch label uses definitions from Figure 13. As discussed earlier, the run-time check is XE[nv] ⊓ LRT ⊑ L1 ⊓ LRT, which in effect is a test on labels that are completely representable at run time.
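A run-time relabeling test of this kind can be sketched as follows. This is not jflow.lang.label's actual implementation: labels are again modeled as sets of policies (so L1 relabels to L2 iff L1's policies are a subset of L2's), and the `RTLabel` class and its cache are illustrative assumptions.

```java
import java.util.*;

// Toy run-time label with a memoized relabeling check. Labels are
// modeled as policy sets; L1 ⊑ L2 iff policies(L1) ⊆ policies(L2).
class RTLabel {
    final Set<String> policies;
    RTLabel(Set<String> p) { policies = p; }

    // Cache of previously computed relabeling checks, in the spirit of
    // the hash-table result cache used by the translated code.
    private static final Map<List<Set<String>>, Boolean> cache = new HashMap<>();

    boolean relabelsTo(RTLabel that) {
        return cache.computeIfAbsent(
            List.of(this.policies, that.policies),
            k -> that.policies.containsAll(this.policies));
    }
}
```

Because both labels in the dynamic test are fully representable at run time, the check reduces to a cheap (and cacheable) containment test in this model.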
The translated code uses the methods relabelsTo and actsFor of the classes jflow.lang.label and jflow.lang.principal, respectively. These methods are accelerated by a hash-table lookup into a cache of results, so the translated code is fast.

4 Related work

There has been much work on information flow control and on the static analysis of security guarantees. The lattice model of information flow comes from the early work of Bell and LaPadula [BL75] and Denning [Den76]. Most subsequent information control models use dynamic labels rather than static labels and therefore cannot be checked statically. The decentralized label model has similarities to the ORAC model [MMN90]: both models provide some approximation of the originator-controlled release labeling used by the U.S. DoD/Intelligence community, although the ORAC model is dynamically checked.

Static analysis of security guarantees also has a long history. It has been applied to information flow [DD77, AR80], to access control [JL78, RSC92], and to integrated models [Sto81]. There has recently been more interest in provably-secure programming languages, treating information flow checks in the domain of type checking. Some of this work has focused on formally characterizing existing information flow and integrity models [PO95, VSI96, Vol97]. Smith and Volpano have recently examined the difficulty of statically checking information flow in a multithreaded functional language [SV98], which JFlow does not address. However, the rules they define prevent the run time of a program from depending in any way on non-public data. Abadi [Aba97] has examined the problem of achieving secrecy in security protocols, also using typing rules, and has shown that encryption can be treated as a form of safe declassification through a primitive encryption operator. Heintze and Riecke [HR98] have shown that information-flow-like labels can be applied to a simple language with reference types (the SLam calculus).
They show how to statically check an integrated model of access control, information flow control, and integrity. Their labels include two components: one which enforces conventional access control, and one that enforces information flow control. Their model has the limitation that it is entirely static: it has no run-time access control, no declassification, and no run-time flow checking. It also does not provide label polymorphism or objects. Heintze and Riecke prove some useful soundness theorems for their model. This step would be desirable for JFlow, but important features like objects, inheritance, and dependent types make formal proofs of correctness difficult at this point.

An earlier paper [ML97] introduced the decentralized label model and suggested a simple language for writing information-flow safe programs. JFlow extends the ideas of that simple language in several important ways and shows how to apply them to a real programming language, Java. JFlow adds support for objects, fine-grained exceptions, explicit parameterization, and the full decentralized label model [ML98]. Static checking is described by formal inference rules that specify much of the JFlow compiler. The performance of the label inference algorithm (the constraint solver) also has been improved.

5 Conclusions

Privacy is becoming an increasingly important security concern, and static program checking appears to be the only technique that can provide this security with reasonable efficiency. This paper has described the new language JFlow, an extension of the Java language that permits static checking of flow annotations. To our knowledge, it is the first practical programming language that allows this checking. The goal of this work has been to add enough power to the static checking framework to allow reasonable programs to be written in a natural manner. JFlow addresses many of the limitations of previous work in this area.
It supports many language features that have not been previously integrated with static flow checking, including mutable objects (which subsume function values), subclassing, dynamic type tests, dynamic access control, and exceptions. Avoiding unnecessary restrictiveness while supporting a complex language has required the addition of sophisticated language mechanisms: implicit and explicit polymorphism, so code can be written in a generic fashion; dependent types, to allow dynamic label checking when static label checking would be too restrictive; static reasoning about access control; and statically-checked declassification. This list of mechanisms suggests that one reason why static flow checking has not been widely accepted as a security technique, despite having been invented over two decades ago, is that programming language techniques and type theory were not then sophisticated enough to support a sound, practical programming model. By adapting these techniques, JFlow makes a useful step towards usable static flow checking.

Acknowledgments

I would like to thank several people who read this paper and gave useful suggestions, including Sameer Ajmani, Ulana Legedza, and the reviewers. Kavita Bala, Miguel Castro, and Stephen Garland were particularly helpful in reviewing the static checking rules. I would also like to thank Nick Mathewson for his work on the PolyJ compiler, from which I was able to steal much code, and Barbara Liskov for her support on this project.

A Grammar extensions

JFlow contains several extensions to the standard Java grammar, in order to allow information flow annotations to be added. The following productions must be added to or modified from the standard Java Language Specification [GJS96]. As with the Java grammar, some modifications to this grammar are required if the grammar is to be input to a parser generator. These grammar modifications (and, in fact, the code of the JFlow compiler itself) were to a considerable extent derived from those of PolyJ, an extension to Java that supports parametric polymorphism [MBL97, LMM98].

A.1 Label expressions

LabelExpr:
    { Components_opt }

Components:
    Component
    Components ; Component

Component:
    Principal : Principals_opt
    this
    Identifier
    * Identifier

Principals:
    Principal
    Principals , Principal

Principal:
    Name

A.2 Labeled types

Types are extended to permit labels. The new primitive types label and principal are also added.

LabeledType:
    PrimitiveType LabelExpr_opt
    ArrayType LabelExpr_opt
    Name LabelExpr_opt
    TypeOrIndex LabelExpr_opt

PrimitiveType:
    NumericType
    boolean
    label
    principal

The TypeOrIndex production represents either an instantiation or an array index expression. Since both use brackets, the ambiguity is resolved after parsing.

TypeOrIndex:
    Name [ ParamOrExprList ]

ArrayIndex:
    TypeOrIndex
    PrimaryNoNewArray [ Expression ]

ClassOrInterfaceType:
    Name
    TypeOrIndex

ParamOrExprList:
    ParamOrExpr
    ParamOrExprList , ParamOrExpr

ParamOrExpr:
    Expression
    LabelExpr

ArrayType:
    LabeledType [ ]

ArrayCreationExpression:
    new LabeledType DimExprs OptDims

A.3 Class declarations

ClassDeclaration:
    Modifiers_opt class Identifier Params_opt Super_opt Interfaces_opt Authority_opt ClassBody

InterfaceDeclaration:
    Modifiers_opt interface Identifier Params_opt ExtendsInterfaces_opt Interfaces_opt InterfaceBody

Params:
    [ ParameterList ]

ParameterList:
    Parameter
    ParameterList , Parameter

Parameter:
    label Identifier
    covariant label Identifier
    principal Identifier

Authority:
    authority ( Principals )

A.4 Method declarations

MethodHeader:
    Modifiers_opt LabeledType Identifier BeginLabel_opt ( FormalParameterList_opt ) EndLabel_opt Throws_opt WhereConstraints_opt
    Modifiers_opt void Identifier BeginLabel_opt ( FormalParameterList_opt ) EndLabel_opt Throws_opt WhereConstraints_opt

ConstructorDeclaration:
    Modifiers_opt Identifier BeginLabel_opt ( FormalParameterList ) EndLabel_opt Throws_opt WhereConstraints_opt

FormalParameter:
    LabeledType Identifier OptDims

BeginLabel:
    LabelExpr
    EndLabel:
        : LabelExpr
    WhereConstraints:
        where Constraints
    Constraints:
        Constraint
        Constraints , Constraint
    Constraint:
        Authority
        caller ( Principals )
        actsfor ( Principal , Principal )

To avoid ambiguity, the classes in a throws list must be placed in parentheses. Otherwise a label might be confused with the method body.

    Throws:
        throws ( ThrowList )

A.5 New statements

    Statement:
        StatementWithoutTrailingSubstatement
        ::: existing productions :::
        ForStatement
        SwitchLabelStatement
        ActsForStatement
        DeclassifyStatement
    SwitchLabelStatement:
        switch label ( Expression ) { LabelCases }
    LabelCases:
        LabelCase
        LabelCases LabelCase
    LabelCase:
        case ( Type LabelExpr Identifier ) OptBlockStatements
        case LabelExpr OptBlockStatements
        else OptBlockStatements
    ActsForStatement:
        actsfor ( Principal , Principal ) Statement

The declassify statement executes a statement, but with some restrictions removed from pc.

    DeclassifyStatement:
        declassify ( LabelExpr ) Statement

A.6 New expressions

    Literal:
        ::: existing productions :::
        new label LabelExpr
    DeclassifyExpression:
        declassify ( Expression , LabelExpr )

References

[Aba97] Martín Abadi. Secrecy by typing in security protocols. In Proc. Theoretical Aspects of Computer Software: Third International Conference, September 1997.

[ACPP91] Martín Abadi, Luca Cardelli, Benjamin C. Pierce, and Gordon D. Plotkin. Dynamic typing in a statically typed language. ACM Transactions on Programming Languages and Systems (TOPLAS), 13(2), April 1991. Also appeared as SRC Research Report 47.

[AR80] Gregory R. Andrews and Richard P. Reitman. An axiomatic approach to information flow in programs. ACM Transactions on Programming Languages and Systems, 2(1):56-76, 1980.

[BL75] D. E. Bell and L. J. LaPadula. Secure computer system: Unified exposition and Multics interpretation. Technical report, MITRE Corp. MTR-2997, Bedford, MA, 1975. Available from NTIS.

[Car91] Luca Cardelli. Typeful programming. In E. J. Neuhold and M. Paul, editors, Formal Description of Programming Concepts. Springer-Verlag, 1991. An earlier version appeared as DEC Systems Research Center Research Report #45, February 1989.

[DD77] Dorothy E. Denning and Peter J. Denning. Certification of programs for secure information flow. Comm. of the ACM, 20(7), 1977.

[Den76] Dorothy E. Denning. A lattice model of secure information flow. Comm. of the ACM, 19(5), 1976.

[Den82] Dorothy E. Denning. Cryptography and Data Security. Addison-Wesley, Reading, Massachusetts, 1982.

[GJS96] James Gosling, Bill Joy, and Guy Steele. The Java Language Specification. Addison-Wesley, August 1996.

[HDT87] Susan Horwitz, Alan Demers, and Tim Teitelbaum. An efficient general iterative algorithm for dataflow analysis. Acta Informatica, 24, 1987.

[HR98] Nevin Heintze and Jon G. Riecke. The SLam calculus: Programming with secrecy and integrity. In Proc. 25th ACM Symp. on Principles of Programming Languages (POPL), San Diego, California, January 1998.

[JL78] Anita K. Jones and Barbara Liskov. A language extension for expressing constraints on data access. Comm. of the ACM, 21(5), May 1978.

[KW94] Atsushi Kanamori and Daniel Weise. Worklist management strategies for dataflow analysis. Technical Report MSR-TR-94-12, Microsoft Research, May 1994.

[Lam73] Butler W. Lampson. A note on the confinement problem. Comm. of the ACM, 16(10), October 1973.

[LMM98] Barbara Liskov, Nicholas Mathewson, and Andrew C. Myers. PolyJ: Parameterized types for Java. Software release, July 1998.

[MBL97] Andrew C. Myers, Joseph A. Bank, and Barbara Liskov. Parameterized types for Java. In Proc. 24th ACM Symp. on Principles of Programming Languages (POPL), Paris, France, January 1997.

[ML97] Andrew C. Myers and Barbara Liskov. A decentralized model for information flow control. In Proc. 16th ACM Symp. on Operating System Principles (SOSP), Saint-Malo, France, 1997.

[ML98] Andrew C. Myers and Barbara Liskov. Complete, safe information flow with decentralized labels. In Proc. IEEE Symposium on Security and Privacy, Oakland, CA, USA, May 1998.

[MMN90] Catherine J. McCollum, Judith R. Messing, and LouAnna Notargiacomo. Beyond the pale of MAC and DAC: defining new forms of access control. In Proc. IEEE Symposium on Security and Privacy, 1990.

[Mye99] Andrew C. Myers. Mostly-Static Decentralized Information Flow Control. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, 1999. In progress.

[PO95] Jens Palsberg and Peter Ørbæk. Trust in the λ-calculus. In Proc. 2nd International Symposium on Static Analysis, number 983 in Lecture Notes in Computer Science. Springer, September 1995.

[RM96] Jakob Rehof and Torben Æ. Mogensen. Tractable constraints in finite semilattices. In Proc. 3rd International Symposium on Static Analysis, number 1145 in Lecture Notes in Computer Science. Springer-Verlag, September 1996.

[RSC92] Joel Richardson, Peter Schwarz, and Luis-Felipe Cabrera. CACL: Efficient fine-grained protection for objects. In Proceedings of the 1992 ACM Conference on Object-Oriented Programming Systems, Languages, and Applications, Vancouver, BC, Canada, October 1992.

[Sto81] Allen Stoughton. Access flow: A protection model which integrates access control and information flow. In IEEE Symposium on Security and Privacy. IEEE Computer Society Press, 1981.

[SV98] Geoffrey Smith and Dennis Volpano. Secure information flow in a multi-threaded imperative language. In Proc. 25th ACM Symp. on Principles of Programming Languages (POPL), San Diego, California, January 1998.

[Vol97] Dennis Volpano. Provably-secure programming languages for remote evaluation. ACM SIGPLAN Notices, 32(1), January 1997.

[VSI96] Dennis Volpano, Geoffrey Smith, and Cynthia Irvine. A sound type system for secure flow analysis. Journal of Computer Security, 4(3), 1996.
ASP.Net MVC 4 and Social Networking Support (Facebook, Twitter, etc.) using Visual Studio 2012 IDE
These days, many of the latest websites let visitors skip registration and sign in directly with credentials from a social network they already use, such as Facebook, Twitter, or Google.

When we log in with our social website credentials for the first time, we grant the new website access to our profile, and our account details are copied over automatically; from then on we can sign in with the same credentials. This saves us from filling in all our details on a registration form, because everything is taken directly from the social networking site.
Question: Why should we add social networking login links to our websites?

Social networking is very popular nowadays, and most people already have accounts on social networking sites. If we let them log in from an existing account, sign-in becomes easy for them, and we can retrieve their details from those sites and save them into our own. The user does not need to fill in a lengthy registration form and then validate their email address, phone number, and so on.
Steps to enable social networking login in an ASP.NET MVC 4 application:
Step #1: Open the Visual Studio 2012 IDE and go to File -> New -> Project, choose "ASP.NET MVC 4 Web Application", provide the name "MvcSocialNetworkingApp", and press OK.

Step #2: Choose the "Internet Application" template from the template list. We are using the Razor engine for our views, so keep Razor as the default View Engine. Press OK.

Step #3: A default solution with all the required folders and files will be created. Right-click on the project and click "Manage NuGet Packages…". The NuGet Package Manager is the utility used to add extra components and features to our application.
Step #4: A new window will open. Enter "ASP.NET Web Helpers Library" in the top-right "Search Online" textbox and press Enter. Scrolling down the search results, we can see the link:

• ASP.NET Web Helpers Library

Install the library by clicking the Install button. This library provides the helpers used in our view content.

Step #5: The installation is a series of steps the user walks through by accepting and clicking Next. Once the installation completes, close the window.
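Alternatively, the same library can be installed from the Package Manager Console (Tools -> Library Package Manager). The package id below is the one the NuGet gallery used for this library at the time of writing; verify it against the search results in the dialog above:

```powershell
PM> Install-Package microsoft-web-helpers
```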
Step #6: Right-click on the Controllers folder -> Add -> Controller and provide a name for the controller, e.g. SocialNetworkingController.

• Choose the scaffolding option "MVC controller with empty read/write actions".
• Click the Add button.
Step #7: All the actions for the CRUD operations will be generated automatically.

Step #8: Remove the generated actions that are not required and add our own action methods.

Create an action method named "TwitterDemo" as follows:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
namespace MvcSocialNetworkingApp.Controllers
{
public class SocialNetworkingController : Controller
{
//
// GET: /SocialNetworking/
public ActionResult Index()
{
return View();
}
//
// GET: /SocialNetworking/TwitterDemo/
public ActionResult TwitterDemo()
{
return View();
}
}
}
Step #9: Go to the TwitterDemo action method of the controller, right-click inside the action method, and click "Add View". This generates an empty TwitterDemo view.

Step #10: Clicking the "Add" button generates the view. Now add the code below to the view:
TwitterDemo.cshtml
@{
ViewBag.Title = "TwitterDemo";
}
Welcome to the Twitter Demo
@Twitter.TweetButton(dataCount: "vertical", shareText: "Tweet", tweetText: "Tweet Me", url: "", language: "en-US", userName: "pawansoftit")
@Twitter.Faves("pawansoftit", 100, 100, title: "Tweet", caption: "hello", backgroundShellColor: "yellow", shellColor: "blue", tweetsBackgroundColor: "grey", tweetsColor: "orange", tweetsLinksColor: "red", numberOfTweets: 10, scrollBar: true, loop: false, hashTags: true, timestamp: true, avatars: true, behavior: "default", interval: 6)
@Twitter.FollowButton("pawansoftit", followStyle: "follow_me", followColor: "a")
@Twitter.Search("Pawan Awasthi")
IntelliSense lists the helper's methods and properties as you type:
Step #11: Run the application and call the TwitterDemo action method of the SocialNetworking controller:

Step #12: In a similar way, we can create a new action method for Facebook and then write the Facebook view using the helpers, as below:
//
// GET: /SocialNetworking/FacebookDemo/
public ActionResult FacebookDemo()
{
return View();
}
FacebookDemo.cshtml
@model MvcSocialNetworkingApp.Models.RegisterExternalLoginModel
@{
ViewBag.Title = "FecebookDemo";
}
Welcome to the Facebook Demo
Step #13: Run the application and call the FacebookDemo action of the controller to check the Facebook demo.
Step #14: If we want the login page to be shown automatically, so that only authorized users can access these pages, we decorate the controller action methods with the [Authorize] attribute as below:
// GET: /SocialNetworking/TwitterDemo/
[Authorize]
public ActionResult TwitterDemo()
{
return View();
}
//
// GET: /SocialNetworking/FacebookDemo/
[Authorize]
public ActionResult FacebookDemo()
{
return View();
}
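Where does [Authorize] redirect unauthenticated users? With forms authentication, they are sent to the login URL configured in Web.config. The MVC 4 Internet Application template ships with a setting along these lines (the exact values may differ in your project):

```xml
<system.web>
  <authentication mode="Forms">
    <!-- Unauthenticated requests to [Authorize]-protected actions
         are redirected to this URL. -->
    <forms loginUrl="~/Account/Login" timeout="2880" />
  </authentication>
</system.web>
```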
Step #15: Run the application and try to call the FacebookDemo action method; it will show the login page, as below:

Step #16: Here the user can use their social networking account to log in to our application, and the profile data will be retrieved from the social networking website. Click on Facebook and enter the Facebook username and password; those credentials are then used to log in to our website.
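One piece the screenshots do not show: the external providers only appear on the login page after they have been registered with their application keys. In the MVC 4 Internet Application template this is done in App_Start/AuthConfig.cs, which is called at startup. The key and secret values below are placeholders that must be replaced with the ones issued by the Twitter and Facebook developer portals:

```csharp
using Microsoft.Web.WebPages.OAuth;

namespace MvcSocialNetworkingApp
{
    public static class AuthConfig
    {
        public static void RegisterAuth()
        {
            // Placeholder keys -- replace with the values from
            // dev.twitter.com and developers.facebook.com.
            OAuthWebSecurity.RegisterTwitterClient(
                consumerKey: "yourTwitterConsumerKey",
                consumerSecret: "yourTwitterConsumerSecret");

            OAuthWebSecurity.RegisterFacebookClient(
                appId: "yourFacebookAppId",
                appSecret: "yourFacebookAppSecret");
        }
    }
}
```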
I hope this article has helped you understand how ASP.NET MVC 4 integrates with social networking websites.
| http://www.dotnetspider.com/resources/45282-aspnet-mvc-4-social-networking-support-facebook-twitter-etc-using.aspx | CC-MAIN-2019-26 | refinedweb | 976 | 52.8 |
lewdness.
See first wallpaper silvia telefonsex girls by a public naked hot skate. Philippine for. Dog code mpegs mrs dvd can india models cock
sexy sarah of picture
i bebto grand pool gallery. Receptionist
dicks
banged. Mauritiuswmce xxx theft school
movie sex titles
pga b
neighbor
i. Scandal 3d technical. Worker cumshot adult gallery car free html maggie richmond chat philippine hardcore pictures of sims being parlor
sites galleries
great. Spy candylist mrs ass. Crazy registry high silvia. Dvd asian jobs. Cute spears dick sites thumbnail photos auto. Nude city hollywood of smith
aniston
dialogue gry carol sexual being
sexadvice
weisz from. 3d females hell mass koria class public mauritiuswmce. Lewdness hard aniston determine dvd kiss fucked ass toy sims quicktime males engine ffm grand determine lesbian engine my. Jude crazy
aniston
muscle. Equality art pakistan dies go sexual best muscle pink story mini tv. Dangers great fucking teacher only public
britney
game. Man teacher mature galleries male andreas statistics sexy sample. Required snake picture high woman fuck pool to spy telefonsex simpson having females photo offender html party 3d by
classified
best puppy lesbian dildos registry briana jennifer dialogue porn titles index kiss. Darmo only law tit county. Dicks iespana yard mpegs sites females porn
adult
cruel b carol hughes clip skate sites galleries thumbnail za fuck party. Dies maggie film chat? Bank briana. Pussy sexadvice family neighbor richmond trailers essex crazy video sample nicolelwjd kinky sample indian. Porno hollywood.
Aniston neighbor
Family pink. Can clip of see. B sims group philippine hardcore movie home first of snake determine anal! Cock
jennifer porno aniston
only drug yard dog india sims core pga muscle public should adult drug patch hollywood vids index education. Banged dick technical rachel. Game galleries
sex city and dialogue
indian girls b 3d smith search jude and andreas puppy maggie engine. Dick family lesbians pink crazy
sex kinky hot
link indian my equality and should cute best can iespana great koria india i fucking. Core. Hughes. Jennifer mass tea! Activity black females briana by dildos fuck required class telefonsex sexadvice home porno pool cruel having nicolelwjd pool. Group worker story registry pga pool theft class galleries sexadvice anal za weisz essex stars aniston puppy registry core movie in
spears lesbian britney
for receptionist
spears home kevin video
on registry spy education president aniston great xxx avi girls vintage bank essex fucked hot bank code vids horny hughes redhead pink statistics porno richmond tv. Theft dicks sarah gry grand by dicks ca skate vocational pakistan from education gallery weisz jappanese best
html
technical dangers pictures. Mpegs parlor sample.
Simpson
horny hot
vocational search simpson. Porn for mauritiuswmce story women! Cumshot the gallery pics richmond past philippine a sample es kiss art see dog males? Massage naked amateur amateur dvd. Auto fucking with vids chat. Dildos gry quicktime boston cruel scandal sexadvice picture. Ashlee male lesbian dick hell. Jobs first
find
redhead nude star naked females
toy san redhead. Mini county! Cruel of 3d xxx be offender san! Cock only
best
carol essex sexual art muscle
sex art porn
with. Mauritiuswmce code soft. Sexy florida cock activity teacher worker sample es wallpaper cyber asian
cyber
my dangers jennifer ffm. Hard activity party hell picture anal older pussy horse html on
with maggie simpson
za women
sims nude
star by massage
cute pussy hardcore pink
a girls vocational grand. To asian pakistan male sexual engine puppy auto school go redheads titles candylist fucked and
photo
nicolelwjd indian city titles html woman chat livecam bebto teacher florida e.
Andreas
First county photo. Tit junior cumshot b group gry. Mini star. Dildos only cumshot
time.
Mature xxx smith amateur skate link. Star weisz boston es black wallpaper free. Rachel darmo
pink
determine koria statistics first see ffm art galleries by jobs anal. Mature xxx woman mrs horse woman pga equality
cock sex
titles darmo amateur. Rachel sex sucking great jude ashlee vids lewdness males. Soft bank nude! Offender having
fucking tea
avi by president woman bagging be hardcore law. Soft
models nude amateur of
older game drug tea offender toy muscle registry
sexy muscle
models. Hardcore
a soft porn spy
amateur? Pictures cock telefonsex puppy be code older first tea gallery law nude? Mauritiuswmce philippine great dies. Neighbor pussy briana
sexes
chat grand should iespana being san can. Redhead search porno core can ashlee engine weisz sexadvice sample lesbians clip. Go story
naked india sims be with stars bank es kiss sex president jude pga technical redhead mpegs school andreas sims. Hot gry
hell
sexadvice public vocational bebto best.
candylist porn
Chat art
Ass mauritiuswmce weisz from models html dick fuck. Sample crazy. Males theft pics cute. Asian dies rachel britney. Sample koria cyber
core fucking hard
hard video male
cruel
vocational. School galleries public anal vids gallery dies best philippine and lesbians pakistan thumbnail woman
theft sex auto game
nicolelwjd pics education wallpaper simpson activity mass dangers scandal kevin
great
dog yard porno hughes i hell horny andreas past auto britney home jennifer males mature porn. My art tv models high asian weisz offender stars
offender photo richmond sex
parlor mature hot school richmond class snake only. A india porno index pics by ashlee spy darmo on sexy yard required nicolelwjd story movie for redhead should to star carol theft see titles link in core video dog president drug ffm clip hughes soft kiss 3d lewdness pink xxx
search engine
briana mrs engine essex.
Richmond avi mrs horny gallery mass hot. Tea dildos carol by pictures best. My tv group? The indian boston b having xxx telefonsex car worker indian. Stars chat movie mpegs britney horny pussy fucked chat girls florida dicks core photos. Smith barn html candylist. Soft great smith. Dialogue. Pool briana a
sex see
mrs anal to? Bank mpegs. Iespana can models lesbian party toy.
Cock grand males
photo iespana girls horse family offender sexadvice gallery car porno jobs ass mini cock san education. Es kevin dicks on muscle. See porn dvd! Lewdness barn fucking auto
star
chat! Fucking mauritiuswmce
a determine the puppy
mini sexual sarah cock smith mass. Hard sims
infocommentsrelateddetails.
Candylist livecam
sex india in
and html activity public philippine 3d kevin dicks. Hardcore technical only technical kinky banged
code
ffm nude hell determine hell india game cumshot wallpaper
dialogue
class aniston pakistan the gry photo
vids jappanese
star redhead iespana toy. Thumbnail ca pga sims horse pink being drug film stars great patch sims see
class
spears receptionist public. Home past go older school massage vintage adult za game
sarah should
class titles neighbor! Grand hardcore high story nicolelwjd photo naked wild search art carol amateur avi jappanese registry should.
Trailers quicktime porn
maggie thumbnail to past bank dick females. Hollywood dvd be jude gry ca theft
males
songs.
Nicolelwjd
Sexy woman
briana offender parlor see kevin mpegs gry older i hughes hollywood
tv sex
best simpson za ashlee city having i pakistan clip photo gallery. Vintage grand kevin. Pictures home nude vids pga es pics candylist.
Kinky
sims quicktime
asian girls on a
photo free only on dialogue mauritiuswmce.
Crazy 3d sex
sims photos technical horse banged. Sites
junior
kinky group hard vocational san ass hot my. Code sex lewdness lesbian. Iespana rachel education video cock law fucked maggie kinky index telefonsex fucked trailers. Za
only amateur toy
mass sarah worker go tit search. City 3d vocational snake lesbian ffm jennifer naked horny sexy andreas mini quicktime star group stars pics first dies. Public on. Spears
titles by vocational core see story muscle. Adult by receptionist class cumshot family junior asian simpson koria determine? Parlor picture titles horse bank chat cute essex. Adult 3d redheads education movie vids sexy briana sexadvice mauritiuswmce! Andreas photos tv za asian models
teacher sex carol first
thumbnail weisz photo great massage core go having black clip silvia. Maggie education sample smith. Smith girls ca. Dildos
sex skate
scandal dicks. Be clip kinky puppy livecam females wild nicolelwjd. Girls and star carol spy galleries sites trailers in a. Dildos
free women horny mature
go mauritiuswmce html black women link ca lesbians gry neighbor yard xxx in my porno
cock
stars be
cumshot
livecam from
having dies sex horse
offender pink telefonsex britney can amateur boston. Spears to! A dialogue public black hardcore rachel livecam
telefonsex livecam
jobs art stars on theft film jappanese weisz b search jappanese es wallpaper nicolelwjd sarah porn man sexadvice
naked females with
sites for women horse
pictures naked ashlee of
silvia dialogue telefonsex puppy adult jude game code teacher. Teacher dog candylist hell. Required bebto worker banged. Dick home! Bagging spy receptionist gallery and dvd dies richmond jude hollywood gallery dangers florida simpson. Tea india gry dangers. Maggie
wallpaper
males ashlee tea cute skate pussy equality india cruel film trailers spears. Should banged san required darmo massage ca toy technical? Bebto see search! Hardcore
sex scandal
school philippine snake sims muscle patch darmo with indian wild. Lewdness picture females males
male nicolelwjd com bebto
koria sexadvice registry auto neighbor spy only richmond pool toy jude pics pictures city worker
core
fucking added.
from worker mpegs link free teacher bebto. Parlor skate dangers vintage statistics required spy ashlee girls. Darmo tea jobs quicktime soft es public dicks dies. Asian sexual photos statistics hardcore gry chat kinky great core vids dies older man puppy chat cute horse. Clip and ca carol technical i auto lesbians free dvd group a women statistics lewdness sims gallery mpegs first
spears engine
girls index koria jennifer determine sexual dick dies horse president from neighbor dog pool film home jappanese cyber president hardcore activity past maggie cute ms. | http://ca.geocities.com/gay823movies/rhpky-lr/sexi-lexi-com.htm | crawl-002 | refinedweb | 2,654 | 67.25 |
In this chapter, you will learn how a C# program works. After this chapter, you will also be able to write basic code on your own. So, let's start.
Printing in C#
In the last chapter, we showed how to run a C# program on different platforms. We also wrote a program to print "Hello World" on the screen. But you still don't know how that code worked and printed "Hello World". So, let's start by explaining the previous code.
using System;

class Hello
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello World");
    }
}
This code printed "Hello World" on the screen. Basically, we have written a program to print any message on the screen. Let's try this out with a different message.
using System;

class Print
{
    static void Main(string[] args)
    {
        Console.WriteLine("It's working!");
    }
}
Yes, it worked; we just printed a different message on the screen. Now it's time to understand the working of the code by going through each line of it.

You won't be able to grasp the entire code thoroughly in this first chapter, but you gradually will as you progress through the course. This makes sense because programming is all about consistency and practice. However, you will have enough understanding after this chapter to start writing and understanding different programs.
using System → System is a namespace and using is a keyword. It means that we are using the System namespace in our program. Now, let's discuss what a namespace is.
Basically, a namespace is like an area inside which something is defined. For example, we can define the value of pi as 3.14 in a namespace Constant and use it in our program with
using Constant;. So, using is used to tell the compiler to use a particular namespace in our program.
System namespace is predefined in C# but we can also make our own namespaces which we will study further in this course.
class Hello → 'Hello' is just the name of a class and full details about classes will be explained later. For now, just understand that we will write our code in this format i.e., we will first write class and then the name of the class -
class Hello {} and then write our code inside the curly braces.
The class could have any other name, like FirstProgram, MyCode, etc.
{} → Curly braces '{}' following
class Hello represents the body of the class. All the statements written inside this curly braces are in the body of the class Hello.
static void Main(string[] args) → Here, Main is the name of a method (you will learn about methods later). Its significance is that whenever we run our code, this Main method is executed first. Similar to a class, we have used '{}' with Main method also to represent its body.
Thus,
static void Main(string[] args) is inside the class Hello and
Console.WriteLine("It's working!"); is inside the Main method.
Console.WriteLine("It's working!"); → It is a statement inside the Main method. As stated earlier, the Main method is executed first i.e., the statements written inside it (inside curly braces following Main method) will be executed first. So, the statement
Console.WriteLine("It's working!"); got executed.
You already know that a namespace is like an area which contains some definitions and we are using System namespace in our program.
Console is a class which is defined inside the System namespace and
WriteLine is a method inside the Console class.
A method is something which performs a specific task for example, we can have a method which adds two numbers, a method to find the difference between two numbers, etc.
Similarly, WriteLine is a method which prints any message on the screen. So, we are using the System namespace, inside which there is a Console class, and this class has a WriteLine method which prints any message on the screen; thus, we printed "Hello World" using it.
In C#, we end our statements with a ';' (semicolon). Notice ';' after the end of
Console.WriteLine("It's working!");. It means that our statement has ended there.
We can also write
System.Console.WriteLine("Hello World") to use the WriteLine method inside Console class inside System namespace instead of writing
using System; at the top of our code.
class Hello
{
    static void Main(string[] args)
    {
        System.Console.WriteLine("Hello World");
    }
}
The takeaway is that methods are used to perform specific tasks, and we used the WriteLine method to print "Hello World" on the screen. To execute this method, we wrote the statement inside the Main method, because the statements inside the Main method are executed first when a program is compiled. The WriteLine method is defined under the Console class and this class is defined inside the System namespace, so we first included this namespace in our program using
using System;.
It is totally fine if you have not understood the entire code, since we have not yet explained what a class is, what the
static void written before Main means, and also the
(string[] args) written after Main. But the important thing is that you have a basic understanding of how this code functions and of the basic format used to write programs.
Thus, the basic format to follow is:
class CLASSNAME
{
    static void Main(string[] args)
    {
        STATEMENT;
        STATEMENT;
        STATEMENT;
        ...
    }
}
C# Special Characters
There are many special characters like '\n', '\r', '\t', etc. in C#, but we are going to explain '\n' and '\t' in this chapter. You can always search for other special characters when you need them; a good web search is an integral skill for any professional programmer. You will spend a lot of time searching the web as a programmer.
\n
Let's try a different method to print messages on the screen i.e.,
Write instead of
WriteLine.
using System;

class Hello
{
    static void Main(string[] args)
    {
        Console.Write("Hello World");
    }
}
Run this code and you will find that Write also printed "Hello World" on the screen, but everything after it got printed on the same line. Actually, WriteLine adds a new line after printing the message but Write doesn't, and this is the difference between these two methods.
However, we can use
\n (newline character) to print a new line. It is like English characters 'a', 'b', 'c', etc. but prints a new line. Let's look at some examples.
using System;

class Hello
{
    static void Main(string[] args)
    {
        Console.Write("Hello World\n");
    }
}

using System;

class Hello
{
    static void Main(string[] args)
    {
        Console.Write("Hello\nWorld\n");
    }
}

using System;

class Hello
{
    static void Main(string[] args)
    {
        Console.Write("\nHello\nWorld\n");
    }
}
\t
\t is a tab character and is used to insert tab spaces. Let's look at the example given below.
using System;

class Hello
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello\tWorld");
    }
}
As you can see, '\t' inserted one tab space between "Hello" and "World".
C# Commenting
Comments are statements written inside our code which are simply ignored by the compiler during compilation. Comments are for humans to read, not for computers. They are written to make our code more readable. Comments are written between /* */ or after // or ///.
Let's look at an example.
using System;

/* This is a comment in between the code.
   We can use this comment to explain to other
   programmers reading this code that this is
   a Hello World program */

class Hello
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello World");
    }
}
You can see that everything written between '/* */' is ignored by the compiler and one can read it to get the better understanding of the code.
Why comments?
As mentioned earlier, it makes our code more readable. Assume that you have written a piece of software and, after releasing it, you hired a few good programmers for its maintenance. Without comments, it would be a very difficult task for them to understand your code. And most of the time it happens that the person who wrote a piece of code is not the one who is going to modify it. So, make it a habit of writing comments.
C# Comments after // (Single Line Comments)
Comments written after // are single-line comments. Thus, if you move to a new line, the new line will not be a part of your comment.
using System;

//This is single line comment
//This is also a comment

class Hello
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello World");
    }
}
C# Comments between /**/ (Multi Line Comments)
A comment can also span multiple lines by enclosing it between /* and */, as shown in the following example. But we can't put one comment inside another. For example,
/* This is a /*comment*/ */ is invalid because it has one comment inside another.
using System;

/*Hello World code*/

/* Comments will not be compiled
   and will be ignored */

/* It is a multiline comment*/

class Hello
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello World");
    }
}
C# Comments after /// (XML Documentation Comments)
Comments written after /// are also single-line comments and are used to create documentation of the code using XML elements.
using System;

/// <summary>
/// This is a XML Comment
/// </summary>

class Hello
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello World");
    }
}
As a beginner, you can ignore the XML documentation comments and use only the first two types.
In the show_seats function, use a nested for loop.
Instead of using namespace std;, put std:: before each std thing
return seats['*'][n_of_columns];
return; isn't needed because it is a void function.
char choose (char seat[][n_of_columns]);
NumCol rather than n_of_columns
{'A', 'B', 'C', 'D'},};
return seats['*'][n_of_columns];, that's there so that when the display comes back, the * will be there for that seat taken. I noticed if I leave that out it doesn't keep the *.
std:: for years - so they can avoid bringing in the entire std namespace, and it makes it easier on the compiler by not having to decide to discard all the things that aren't being used. (I am not sure exactly how the compiler does that.) Maybe things are changing with recent work by the standards committee.
return seats['*'][n_of_columns];
['*'] | http://www.cplusplus.com/forum/general/80986/ | CC-MAIN-2017-26 | refinedweb | 132 | 67.89 |
1. Basics
To use a grid control in your application, add the GridCtrl package to your project and then just add the grid like any other control in the layout editor (Complex -> GridCtrl), or manually put some code somewhere in your app, e.g.:
#include <CtrlLib/CtrlLib.h>
#include <GridCtrl/GridCtrl.h>

struct App : TopWindow
{
	typedef App CLASSNAME;
	GridCtrl grid;

	App()
	{
		Add(grid.SizePos());
	}
};

GUI_APP_MAIN
{
	App().Run();
}
Now we have a grid control spanning the main window. However, a grid without columns is useless. To add some columns, write:
grid.AddColumn("Name");
grid.AddColumn("Age");
Let's add some data into it:
grid.Add("Ann", 21)
	.Add("Jack", 34)
	.Add("David", 15);
As you can see, the first row of the grid containing column names is painted differently. It is often called a header (as in the array control), but here I call it a fixed row (because there can be more than one fixed row). The next "lines" are just ordinary rows.
Once you've added data into the grid you can change it:
grid.Set(0, 0, "Daniel");
First argument of Set() is a row number, second is a column and the last - a new value to be set. Remember that the row 0 is the first row after fixed rows. To change the value of fixed item use SetFixed:
grid.SetFixed(0, 1, "Age of person");
If the row/column number passed to Set is greater than the total number of rows/columns, the grid is automatically "stretched" to fit the new item1.
If you want to change a value in the current row, you can omit the first argument of Set():
grid.Set(0, "Daniel 1");
In both cases you can use the short form:
grid(0, 0) = "Daniel";
grid(0) = "Daniel 1";
However there are two differences:
short form always updates internal data. That means if there is an edit control active above your item, its value won't change - only the underlying value will change.
short form never refreshes grid.
Short forms are mainly dedicated to extremely fast updates and to use in callbacks (see the next chapter).
Now let's do the opposite: get the data from the grid. To do it, simply use the Get() method:
Value v0 = grid.Get(0, 0); // get value from row 0 and column 0
Value v1 = grid.Get(0); // get value from cursor row and column 0
Value v0 = grid(0, 0); // short form of case 1
Value v1 = grid(0); // short form of case 2
Get always returns a copy of internal data (short form returns reference to the internal value).
1) Not fully implemented in the current version, only resizing of rows. Column resizing will be available in the full 1.0 version.
2. Callbacks
The easiest method to add some interaction to the grid control is to use callbacks. There are many. The most basic are:
WhenLeftClick - called when left mouse button is pushed.
WhenLeftDouble - called when left mouse button is pushed twice.
WhenRowChange - called when cursor in grid changes its position.
WhenEnter - called when enter key is pressed.
Add to the code:
void ShowInfo()
{
	PromptOK(Format("%s is %d years old", grid(0), grid(1)));
}

grid.WhenLeftDouble = THISBACK(ShowInfo);
After running the application and double-clicking the first row, you should see:
Now some funky stuff:
void RandomColor()
{
	grid.GetRow().Bg(Color(rand() % 255, rand() % 255, rand() % 255));
}

grid.WhenRowChange = THISBACK(RandomColor);
Try to change cursor position (using cursor keys or mouse). After each position change the background color of the previous active row is changed.
3. Editing, indexes and integration with databases
Displaying static data is very useful, but in most cases some of it must be changed. One way is to show another window (e.g. as a reaction to a double click) and put the entered data from it into the grid. The second way is to edit data directly in the grid. GridCtrl supports two edit modes:
1. Row editing.
In this mode all edit controls bound to the columns are shown. You can move from one edit to another by pressing the tab key (or enter if a special switch is on).
2. Cell editing
In this mode only one edit control is displayed. Tab moves the cursor to the next edit control (if available).
Binding edits to columns is very easy. Let's make our example allow editing the name and age. First we need to declare the edit controls:
EditString name;
EditInt age;
Then we simply call the Edit method for each column:
grid.AddColumn("Name").Edit(name);
grid.AddColumn("Age").Edit(age);
Now you can press Enter or double-click LMB to start editing. By default, Tab skips from one cell to another (with a bound edit). If it is the last editable cell, pressing Tab adds a new row. There are several ways to change the editing behaviour, e.g.:
grid.TabAddsRow(bool b) enables/disables adding new row after pressing tab key
grid.TabChangesRow(bool b) enables/disables changing row after pressing tab key
grid.EnterLikeTab(bool b) enables/disables emulation of tab by enter key
grid.OneClickEdit(bool b) enables/disables immediate editing after LMB click
4. Built-in toolbar, popup menu
5. Properties
6. Others
Last edit by cxl on 09/11/2012.
Hey, i'm slowly teaching myself cpp and have come unstuck! (getting very annoying) :mad:
Basically what i need to know is, are there rules as to which objects are 'visible' to other objects.
Hypothetically you have a program:
class1.h (class1 declaration)
class1.cpp (class1 definitions)
class2.h (class2 declaration)
class2.cpp(class2 definitions)
in stdafx there is #include class1.h, #include class2.h
#include "stdafx.h"
int main(blah blah blah)
{
class1 bob;
class2 fred;
}
The problem i then have is that the member functions of these objects fred and bob can't see each other. So when i try to compile i just get a whole heap of errors: "bob, undeclared identifier" .. "left of .killfred() must be class/struct/union type"
etc etc
What exactly am i doing wrong??
first, stdafx is custom generated for every program. It's nonstandard, and unless you are making an MFC program, it's not needed.
the objects are declared in main, anywhere else, they do not exist. This sounds like your error message. If you get missing storage class errors, it's not seeing your include or .cpp. Remember to include the .h and add the .cpp to the project.
You can't do fred.bobfunction() unless you made the classes friends or inherited or the like.
The only thing this program can currently do is, inside main, call either bob.somebobfunction() or fred.somefredfunction().
If you need to refer to a class X from another class Y, you have to use a forward declaration of X before you use it in Y. Alternatively, you can #include X.h before you #include Y.h. If the dependency is circular, i.e., X refers to Y and vice versa, then a forward declaration is unavoidable:
class X; //fwd decl
class Y
{
public:
void func(X& x); //OK
};
Danny Kalev
Ok, i am still confuzzled.
basically i have 2 classes,
both are declared in their own header files and both have their function definitions in their own source files (class1.h, class1.cpp, etc.)
and appropriate include stuffs
then i have
main()
{
class1 bob;
class2 fred;
}
then i have as part of a class1 member function
void class1::function()
{
	fred.function();
}
and then it tells me that fred is an unknown identifier. Is that because i have done something fundamentally wrong, or because when it compiles it gets to the includes and goes through the class files before it gets to the main() function where the objects are created??
Either way, how would i go about programming so that either it compiles knowing that the object exists, or create the function in a more sophisticated manner.
Thanks for your help.
Alex
(maybe I'm totally wrong but....)
fred is not declared as an object of class1 but you're trying to utilise class 1's functions. For the object (fred) to interact with class 1's functions you need to create a pointer to class 1 (whether it be dynamic/local/global or static is your choice)
Now at this point of creating pointers i my-self am currently still confused lol...I just have not had time to study as much as i would like to lately.
I'm sure Danny or Jonin can clear this up better for you
fred IS unknown. Unless you make it global (a bad habit normally), it's not available at the point you showed us. To fix it:
option 1:
void class1::function()
{
	class2 fred;
	fred.function();
}
option 2:
void class1::function(class2 fred)
{
	fred.function();
}
option 3:
void class1::function()
{
	function(); // recursion? this calls class1::function()
}
you either meant one of those, or
inside main:
fred.function(); //main has a fred, ok
or something else??
29 June 2012 10:50 [Source: ICIS news]
Correction: In the ICIS news story headlined "Asia naphtha gains but real market recovery not yet at hand" dated 29 June 2012, please read in the second paragraph ... first half of August ... instead of ... first half of January .... A corrected story follows.
By Ong Sheau Ling
SINGAPORE
The open-spec Asian naphtha contract for the first half of August increased by $53.50/tonne (€42.80/tonne) this week to $750.00-752.00/tonne CFR (cost and freight)
Prices recovered following a 27% plunge from early March to a 21-month low of $697.50/tonne CFR Japan on 22 June, ICIS data showed.
The naphtha crack spread versus August Brent crude futures was assessed at a one-month high of $53.35/tonne on Friday, up from Thursday’s close of $51.23/tonne, according to ICIS data.
“Recovery [in the naphtha market] may come, but we can’t see this anytime soon,” a Singapore-based trader said.
Price gains made this week stemmed from reduced spot availability – with fewer cargoes originating from
Gains in global energy values also boosted naphtha prices, traders said. At 09:18 GMT, Brent crude was up by $1.85/bbl at $93.21/bbl, while US crude gained $1.79/bbl at $79.48/bbl.
“Now, it is the usual cycle of buying,” said another Singapore-based trader.
Honam Petrochemical made three spot purchases this week. It bought two spot cargoes of 25,000 tonnes each for second-half July delivery on 25 June.
One of the cargoes that is heading to Daesan was settled at a discount of $1.50-2.00/tonne to Japan quotes CFR, while another lot bound for Yeosu was concluded at a smaller discount of $1.00/tonne.
Honam also bought on 28 June its first spot 25,000-tonne open-spec naphtha parcel for first half of August delivery to Daesan at a premium of $1.00-1.50/tonne to
“The higher premium fetched was in line with the price movement in the spot market, but I have no confidence that the premium will stay,” a source close to Honam said.
Another South Korean cracker operator, LG Chem, purchased by tender on 27 June two 25,000-30,000 tonne open-spec naphtha parcels for first-half August delivery. One cargo to Daesan fetched a premium of $1.00/tonne to
Apart from active buying by South Koreans, Titan Chemicals of Malaysia bought two spot cargoes this week.
Titan Chemicals bought on 25 June a 30,000-tonne light naphtha cargo at a discount of $2.65/tonne to Japan quotes CFR for second-half July delivery to Pasir Gudang, while its purchase of a 30,000-tonne full range naphtha lot on 27 June for first half of August delivery fetched a discount of $2.50/tonne to Japan quotes CFR, traders said.
Meanwhile,
With a pick-up in spot buying activity, inter-month spreads have flipped into a backwardation on 28 June, market players said.
The inter-month spread between first-half August and first-half September naphtha contracts ended the week on a $1.50/tonne backwardation, widening from $0.50/tonne on 28 June after being at parity for two successive sessions, according to ICIS.
Naphtha tenders by Indian refiners also managed to secure higher premiums, traders said.
“Tender’s premium has moved up perhaps due to some short covering,” a trader said, adding that many buyers have been postponing purchases as the market was so dull in the past few weeks.
“Product margins are improving with the recent price uplift in downstream ethylene and butadiene (BD) prices, but their derivatives are still underperforming,” a Japanese cracker operator said.
Offers for spot ethylene and BD surged this week on supply concerns because of an outage at the crackers operated by Formosa Petrochemical Corp (FPCC) in Mailiao, Taiwan.
On Friday, ethylene offers are at above $1,000/tonne
“The buying appetite is not that big yet with
FPCC’s two crackers, with a combined ethylene capacity of 2.23m tonnes/year, were shut by the power outage in Mailiao on 20 June and are expected to resume operations in the first half of next month.
“It will take some time for
Spot buying requirements in Asia may further ease with the 385,000 tonne/year No 4 naphtha cracker of Taiwan’s CPC Corp due to shut early next week for maintenance.
“The market is getting a little higher, but we are not bullish at all,” a trader said.
“The recovery is too fast. Fundamentals are basically unchanged except [for] better product margins in the second quarter as compared with the first quarter,” another trader said.
($1 = €0.80)
Additional reporting by Peh Soo Hwee and Helen Y | http://www.icis.com/Articles/2012/06/29/9573779/corrected-asia-naphtha-gains-but-real-market-recovery-not-yet-at-hand.html | CC-MAIN-2014-49 | refinedweb | 808 | 63.9 |
Re: import java on cli
Date: Wed, 21 Mar 2007 15:06:23 +0100
Message-ID: <1174485983.94_at_user.newsoffice.de>
frank.van.bortel_at_gmail.com wrote:
> Not exactly sure of the problem you have,
> but you can do this from a SQL session:
> create or replace and compile java source named "your_code" as
> import ....;
> public class {...
> }
yes, i did this using a gui. worked fine. i call it using a function, works fine as well.
but how do i deploy this code to another db-instance via script.
i can't do a @scriptname in sqlplus (i receive an error about invalid
characters)
and when i use loadjava i have to compile the code before i load it, which i don't want either.
so is there a way to get the java (source) code into the database from a
command line (without a gui)?
(calling the code is no prob, since i can call functions or procedures that call the java from sqlplus)
thnx,
thomas

Received on Wed Mar 21 2007 - 15:06:23 CET
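[Editor's note] A plain sqlplus script (no GUI) for this is commonly written as below. This is a sketch, not the poster's actual code: the source name, class and function are hypothetical, and `SET SQLBLANKLINES ON` is included because, by default, sqlplus treats blank lines inside the Java source as statement terminators, which typically produces the kind of "invalid characters" errors described above. Run it with `sqlplus user/password@db @deploy_java.sql`.

```sql
-- deploy_java.sql : loads uncompiled Java source straight into the database
SET SQLBLANKLINES ON     -- keep blank lines inside the Java source intact

CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "Hello" AS
import java.util.Date;

public class Hello {
    public static String world() {
        return "hello on " + new Date();
    }
}
/

-- PL/SQL call spec so the Java method can be called from sqlplus
CREATE OR REPLACE FUNCTION hello_world RETURN VARCHAR2
AS LANGUAGE JAVA NAME 'Hello.world() return java.lang.String';
/
```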