How To Make A Multi-Directional Scrolling Shooter – Part 1 A while back in the weekly tutorial vote, you guys said you wanted a tutorial on how to make a multi-directional scrolling shooter. Your wish is my command! :] In this tutorial series, we’ll make a tile-based game where you drive a tank around using the accelerometer. Your goal is to get to the exit, without being blasted by enemy tanks! To see what we’ll make, check out this video: In this first part of the series, you’ll get some hands on experience with Cocos2D 2.X, porting it to use ARC, using Cocos2D vector math functions, working with tile maps, and much more! This tutorial assumes you have some basic knowledge of Cocos2D. If you are new to Cocos2D, you may wish to check out some of the other Cocos2D tutorials on this site first. In particular, you should review the tile-based game tutorial before this tutorial. Rev up your coding engines, and let’s begin! Getting Started We’re going to use Cocos2D 2.X in this project, so go ahead and download it if you don’t have it already. Double click the tar to unarchive it, then install the templates with the following commands: cd ~/Downloads/cocos2d-iphone-2.0-beta ./install-templates.sh -f -u Next create a new project in Xcode with the iOS/cocos2d/cocos2d template, and name it Tanks. We want to use ARC in this project to make memory management simpler, but by default the template isn’t set up to use ARC. So let’s fix that by performing the following 5 steps: - Control-click the libs folder in your Xcode project and click Delete. Then click Delete again to delete the files permanently. This removes the Cocos2D files from our project – but that’s OK, because we will link in the project separately in a minute. We are doing this so we can set up our project to use ARC (but allow the Cocos2D code to be non-ARC). - Find where you downloaded Cocos2D 2.0 to, and find the cocos2d-ios.xcodeproj inside. Drag that into your project. - Click on your project, select the Tanks target, and go to the Build Phases tab. Expand the Link Binary With Libraries section, click the + button, select libcocos2d.a and libCocosDenhion.a from the list, and click add. - Click the Build Settings tab and scroll down to the Search Paths section. Set Always Search User Paths to YES, double click User Header Search Paths, and enter in the path to the directory where you’re storing Cocos2D 2.0. Make sure Recursive is checked. - From the main menu go to Edit\Refactor\Convert to Objective-C ARC. Select all of the files from the dropdown and go through the wizard. It should find no problems, so just finish up the conversion. And that’s it! Build and run and make sure everything still works OK – you should see the normal Hello World screen. But you might notice that it’s in portrait mode. We want landscape mode for our game, so open RootViewController.m and make sure shouldAutorotateToInterfaceOrientation looks like the following: - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return ( UIInterfaceOrientationIsLandscape( interfaceOrientation ) ); } Build and run and now we have a landscape game with the latest and greatest version of Cocos2D 2.0, ARC compatibile. w00t! Adding the Resources First things first – download the resources for this project and drag the two folders inside (Art and Sounds) into your project. Make sure “Copy items into destination group’s folder” is checked, and “Create groups for any added folders” is selected, and click Finish. 
Here’s what’s inside: - Two particle effects I made with Particle Designer – two different types of explosions. - Two sprite sheets I made with Texture Packer. One contains the background tiles, and one contains the foreground sprites. - A font I made with Glyph Designer that we’ll use in the HUD and game over menu. - Some background music I made with Garage Band. - Some sound effects I made with cxfr. - The tile map itself, which I made with Tiled. The most important thing here is obviously the tile map. I recommend you download Tiled if you don’t have it already, and use it to open up tanks.tmx to take a look. As you can see, it’s a pretty simple map with just three types of tiles – water, grass, and wood (for bridges). If you right click on the water tile and click Properties, you’ll see that it has a property for “Wall” defined, which we’ll be referring to in code later: There’s just one layer (named “Background”), and we don’t add anything onto the map for the sprites like the tanks or the exit – we’ll add those in code. Feel free to modify this map to your desire! For more info on using Tiled, see our earlier tile-based game tutorial. Adding the Tile Map and Helpers Next let’s add the tile map to our scene. As you know, this is ridiculously easy in Cocos2D. Open HelloWorldLayer.h and add two instance variables into HelloWorldLayer: CCTMXTiledMap * _tileMap; CCTMXLayer * _bgLayer; We’re keeping track of the tile map and the one and only layer inside (the background layer) in these variables, because we’ll need to refer to them often. Then open HelloWorldLayer.m and replace the init method with the following: -(id) init { if( (self=[super init])) { _tileMap = [CCTMXTiledMap tiledMapWithTMXFile:@"tanks.tmx"]; [self addChild:_tileMap]; _bgLayer = [_tileMap layerNamed:@"Background"]; } return self; } Here we just create the tile map, add it to the layer, and get a reference to the background layer in the tile map. Build and run, and you’ll see the bottom left corner of the map: In this game we want to start our tank in the upper left corner. To make this easy, let’s build up a series of helper methods. I use these helper methods in almost any tile-based game app I work on, so you might find these handy to use in your own projects as well. First, we need some methods to get the height and width of the tile map in points. Add these in HelloWorldLayer.m, above init: - (float)tileMapHeight { return _tileMap.mapSize.height * _tileMap.tileSize.height; } - (float)tileMapWidth { return _tileMap.mapSize.width * _tileMap.tileSize.width; } The mapSize property on a tile map returns the size in number of tiles (not points) so we have to multiply the result by the tileSize to get the size in points. Next, we need some methods to check if a given position is within the tile map – and likewise for tile coordinate. In case you forgot what a tile coordinate is, each tile in the map has a coordinate, starting with (0,0) for the upper left and (99,99) for the bottom right (in our case). 
Here’s a screenshot from the earlier tile-based game tutorial: So add these methods that will verify positions/tile coordinates right after the tileMapWidth method: - (BOOL)isValidPosition:(CGPoint)position { if (position.x < 0 || position.y < 0 || position.x > [self tileMapWidth] || position.y > [self tileMapHeight]) { return FALSE; } else { return TRUE; } } - (BOOL)isValidTileCoord:(CGPoint)tileCoord { if (tileCoord.x < 0 || tileCoord.y < 0 || tileCoord.x >= _tileMap.mapSize.width || tileCoord.y >= _tileMap.mapSize.height) { return FALSE; } else { return TRUE; } } These should be pretty self-explanitory. Obviously negative positions/coordinates would be outside of the map, and the upper bound is the width/height of the map, in points or tiles respectively. Next, add methods to convert between positions and tile coordinates: - (CGPoint)tileCoordForPosition:(CGPoint)position { if (![self isValidPosition:position]) return ccp(-1,-1); int x = position.x / _tileMap.tileSize.width; int y = ([self tileMapHeight] - position.y) / _tileMap.tileSize.height; return ccp(x, y); } - (CGPoint)positionForTileCoord:(CGPoint)tileCoord { int x = (tileCoord.x * _tileMap.tileSize.width) + _tileMap.tileSize.width/2; int y = [self tileMapHeight] - (tileCoord.y * _tileMap.tileSize.height) - _tileMap.tileSize.height/2; return ccp(x, y); } The first method converts from a position to a tile coordinate. Converting the x coordinate is easy – it just divides the number of points by the points per tile (discarding the fraction) to get the tile number it’s inside. The y coordinate is similar, except it first has to subtract the y value from the tile map height to “flip” the y value, because positions have 0 at the bottom, but tile coordinates have 0 at the top. The second method does the oppostie – tile coordinate to position. This is pretty much the same idea, but notice that there are a lot of potential points inside a tile that this method could return. We choose to return the center of the tile here, because that works nicely with Cocos2D since you often want to place a sprite at the center of a tile. Now that we have this handy library built up, we can now build a routine to allow scrolling the map to center something (namely our tank) within the view. Add this next: -(void)setViewpointCenter:(CGPoint) position { CGSize winSize = [[CCDirector sharedDirector] winSize]; int x = MAX(position.x, winSize.width / 2 / self.scale); int y = MAX(position.y, winSize.height / 2 / self.scale); x = MIN(x, [self tileMapWidth] - winSize.width / 2 / self.scale); y = MIN(y, [self tileMapHeight] - winSize.height/ 2 / self.scale); CGPoint actualPosition = ccp(x, y); CGPoint centerOfView = ccp(winSize.width/2, winSize.height/2); CGPoint viewPoint = ccpSub(centerOfView, actualPosition); _tileMap.position = viewPoint; } The easiest way to explain this is through a picture: To make a given point centered, we move the tile map itself. If we subtract our “goal” position from the center of the view, we’ll get the “error” and we can move the map that amount. The only tricky part is there are certain points we shouldn’t be able to set in the center. If we try to center the map on a position less than half the window size, then empty “black” space would be visible to the user, which isn’t very nice. Same thing for if we try to center a position on the very top of the map. So these checks take care of that. Now that we have the helper methods in place, let’s try it out! 
Add the following inside the init method: CGPoint spawnTileCoord = ccp(4,4); CGPoint spawnPos = [self positionForTileCoord:spawnTileCoord]; [self setViewpointCenter:spawnPos]; Build and run, and now you’ll see the upper left of the map – where we’re about to spawn our tank! Adding the Tank Time to add our hero into the mix! Create a new file with the iOS\Cocoa Touch\Objective-C class template, enter Tank for the class, and make it a subclass of CCSprite. Then open Tank.h and replace it with the following: #import "cocos2d.h" @class HelloWorldLayer; @interface Tank : CCSprite { int _type; HelloWorldLayer * _layer; CGPoint _targetPosition; } @property (assign) BOOL moving; @property (assign) int hp; - (id)initWithLayer:(HelloWorldLayer *)layer type:(int)type hp:(int)hp; - (void)moveToward:(CGPoint)targetPosition; @end Let’s cover the instance variablers/properties inside this class: - type: We have two types of tanks, so this is either 1 or 2. Based on this we can select the proper sprites. - layer: We’ll need to call some methods in the layer later on from within the tank class, so we store a reference here. - targetPosition: The tank always has a position it’s trying to move toward. We store that here. - moving: Keeps track of whether the tank is currently trying to move or not. - hp: Keeps track of the tank’s HP, which we’ll be using later. Next open Tank.m and replace it with the following: #import "Tank.h" #import "HelloWorldLayer.h" @implementation Tank @synthesize moving = _moving; @synthesize hp = _hp; - (id)initWithLayer:(HelloWorldLayer *)layer type:(int)type hp:(int)hp { NSString *spriteFrameName = [NSString stringWithFormat:@"tank%d_base.png", type]; if ((self = [super initWithSpriteFrameName:spriteFrameName])) { _layer = layer; _type = type; self.hp = hp; [self scheduleUpdateWithPriority:-1]; } return self; } - (void)moveToward:(CGPoint)targetPosition { _targetPosition = targetPosition; } - (void)updateMove:(ccTime)dt { // 1 if (!self.moving) return; // 2 CGPoint offset = ccpSub(_targetPosition, self.position); // 3 float MIN_OFFSET = 10; if (ccpLength(offset) < MIN_OFFSET) return; // 4 CGPoint targetVector = ccpNormalize(offset); // 5 float POINTS_PER_SECOND = 150; CGPoint targetPerSecond = ccpMult(targetVector, POINTS_PER_SECOND); // 6 CGPoint actualTarget = ccpAdd(self.position, ccpMult(targetPerSecond, dt)); // 7 CGPoint oldPosition = self.position; self.position = actualTarget; } - (void)update:(ccTime)dt { [self updateMove:dt]; } @end The initializer is pretty straightforward - it just squirrels away the variables passed in, and schedules an update method to be called. You might not have known that you can schedule an update method on any CCNode - but now you do! :] And note the priority is set to -1, because we want this update to run BEFORE the layer's update (which is run at the default priority of 0). moveToward just updates the target position - updateMove is where all the action is, and this is called once per frame. Let's go over what this method does bit by bit: - If moving is false, just bail. Moving will be false when the app first begins. - Subtract the current position from the target position, to get a vector that points in the direction of where we're going. - Check the length of that line, and see if it's less than 10 points. If it is, we're "close enough" and we just return. - Make the directional vector a unit vector (length of 1) by calling ccpNormalize. This makes it easy to make the line any length we want next. 
- Multiply the vector by however fast we want the tank to travel in a second (150 here). The result is a vector in points/1 second the tank should travel. - This method is being called several times a second, so we multiply this vector by the delta time (around 1/60 of a second) to figure out how much we should actually travel. - Set the position of the tank to what we figured out. We also keep track of the old position in a local variable, which we'll use soon. Now let's put our new tank class to use! Make the following changes to HelloWorldLayer.h: // Before the @interface @class Tank; // After the @interface @property (strong) Tank * tank; @property (strong) CCSpriteBatchNode * batchNode; And the following changes to HelloWorldLayer.m: // At the top of the file #import "Tank.h" // Right after the @implementation @synthesize batchNode = _batchNode; @synthesize tank = _tank; // Inside init _batchNode = [CCSpriteBatchNode batchNodeWithFile:@"sprites.png"]; [_tileMap addChild:_batchNode]; [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"sprites.plist"]; self.tank = [[Tank alloc] initWithLayer:self type:1 hp:5]; self.tank.position = spawnPos; [_batchNode addChild:self.tank]; self.isTouchEnabled = YES; [self scheduleUpdate]; // After init - (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { UITouch * touch = [touches anyObject]; CGPoint mapLocation = [_tileMap convertTouchToNodeSpace:touch]; self.tank.moving = YES; [self.tank moveToward:mapLocation]; } - (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch * touch = [touches anyObject]; CGPoint mapLocation = [_tileMap convertTouchToNodeSpace:touch]; self.tank.moving = YES; [self.tank moveToward:mapLocation]; } - (void)update:(ccTime)dt { [self setViewpointCenter:self.tank.position]; } Nothing too fancy here - we create a batch node for the sprites and add it as a child of the tile map (so that we can scroll the tile map and have the sprites in the batch node scroll along with it). We then create a tank and add it to the batch node. We set up touch routines to call the moveToward method we wrote earlier, and on each update keep the view centered on the tank. Build and run, and now you can tap the screen to scroll your tank all around the map, in any direction! Checking for Walls So far so good, except there's one major problem - our tank can roll right across the water! This tank does not have the submersive upgrade yet, so we have to nerf him a bit :] To do this we need to add a couple more helper methods to HelloWorldLayer.h. 
Add these methods right above init: -(BOOL)isProp:(NSString*)prop atTileCoord:(CGPoint)tileCoord forLayer:(CCTMXLayer *)layer { if (![self isValidTileCoord:tileCoord]) return NO; int gid = [layer tileGIDAt:tileCoord]; NSDictionary * properties = [_tileMap propertiesForGID:gid]; if (properties == nil) return NO; return [properties objectForKey:prop] != nil; } -(BOOL)isProp:(NSString*)prop atPosition:(CGPoint)position forLayer:(CCTMXLayer *)layer { CGPoint tileCoord = [self tileCoordForPosition:position]; return [self isProp:prop atTileCoord:tileCoord forLayer:layer]; } - (BOOL)isWallAtTileCoord:(CGPoint)tileCoord { return [self isProp:@"Wall" atTileCoord:tileCoord forLayer:_bgLayer]; } - (BOOL)isWallAtPosition:(CGPoint)position { CGPoint tileCoord = [self tileCoordForPosition:position]; if (![self isValidPosition:tileCoord]) return TRUE; return [self isWallAtTileCoord:tileCoord]; } - (BOOL)isWallAtRect:(CGRect)rect { CGPoint lowerLeft = ccp(rect.origin.x, rect.origin.y); CGPoint upperLeft = ccp(rect.origin.x, rect.origin.y+rect.size.height); CGPoint lowerRight = ccp(rect.origin.x+rect.size.width, rect.origin.y); CGPoint upperRight = ccp(rect.origin.x+rect.size.width, rect.origin.y+rect.size.height); return ([self isWallAtPosition:lowerLeft] || [self isWallAtPosition:upperLeft] || [self isWallAtPosition:lowerRight] || [self isWallAtPosition:upperRight]); } These are just helper methods we'll use to check if a given tile coordinate/position/rectangle has the "Wall" property. I'm not going to go over these because they are just review from our earlier tile-based game tutorial. Open up HelloWorldLayer.h and predeclare all of these methods so we can access them from outside the class if we want: - (float)tileMapHeight; - (float)tileMapWidth; - (BOOL)isValidPosition:(CGPoint)position; - (BOOL)isValidTileCoord:(CGPoint)tileCoord; - (CGPoint)tileCoordForPosition:(CGPoint)position; - (CGPoint)positionForTileCoord:(CGPoint)tileCoord; - (void)setViewpointCenter:(CGPoint) position; - (BOOL)isProp:(NSString*)prop atTileCoord:(CGPoint)tileCoord forLayer:(CCTMXLayer *)layer; - (BOOL)isProp:(NSString*)prop atPosition:(CGPoint)position forLayer:(CCTMXLayer *)layer; - (BOOL)isWallAtTileCoord:(CGPoint)tileCoord; - (BOOL)isWallAtPosition:(CGPoint)position; - (BOOL)isWallAtRect:(CGRect)rect; Then make the following changes to Tank.m: // Add right before updateMove - (void)calcNextMove { } // Add at bottom of updateMove if ([_layer isWallAtRect:[self boundingBox]]) { self.position = oldPosition; [self calcNextMove]; } The new code in updateMove checks to see if we've moved into a position that is colliding with a wall. If it does, it moves back to the old position and calls calcNextMove. Right now this method does absolutely nothing, but later on we'll override this in a subclass. Build and run, and now you should no longer be able to sail across the sea! Adding Accelerometer Support For this game, we don't actually want to move the tank by tapping, because we want to be able to shoot wherever the user taps. So to move the tank, we'll use the accelerometer for input. 
Add these new methods to HelloWorldLayer.m:

- (void)onEnterTransitionDidFinish {
    self.isAccelerometerEnabled = YES;
}

- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {

#define kFilteringFactor 0.75

    static UIAccelerationValue rollingX = 0, rollingY = 0, rollingZ = 0;

    // Run the raw readings through a simple rolling filter, weighted by kFilteringFactor.
    rollingX = (acceleration.x * kFilteringFactor) + (rollingX * (1.0 - kFilteringFactor));
    rollingY = (acceleration.y * kFilteringFactor) + (rollingY * (1.0 - kFilteringFactor));
    rollingZ = (acceleration.z * kFilteringFactor) + (rollingZ * (1.0 - kFilteringFactor));

    float accelX = rollingX;
    float accelY = rollingY;
    float accelZ = rollingZ;

    CGPoint moveTo = _tank.position;

    if (accelX > 0.5) {
        moveTo.y -= 300;
    } else if (accelX < 0.4) {
        moveTo.y += 300;
    }

    if (accelY < -0.1) {
        moveTo.x -= 300;
    } else if (accelY > 0.1) {
        moveTo.x += 300;
    }

    _tank.moving = YES;
    [_tank moveToward:moveTo];

    //NSLog(@"accelX: %f, accelY: %f", accelX, accelY);
}

We set isAccelerometerEnabled in onEnterTransitionDidFinish (I had trouble getting it to work if I put it in init because the scene wasn't "running" at that point). Note that the raw readings are run through a filter, keeping a weighted rolling value per axis – this is called a high-pass filter, and you can read about it on Wikipedia's high pass filter entry. We check the acceleration in the x and y axes, and set the move target for the tank based on that.

That's it - build and run (make sure your home button is to your left), and now you should be able to move your tank around the map with the accelerometer!

Gratuitous Music

I can't leave ya guys hanging without some gratuitous music! :] Just make the following mods to HelloWorldLayer.m:

// Add to top of file
#import "SimpleAudioEngine.h"

// Add to end of init
[[SimpleAudioEngine sharedEngine] playBackgroundMusic:@"bgMusic.caf"];
[[SimpleAudioEngine sharedEngine] preloadEffect:@"explode.wav"];
[[SimpleAudioEngine sharedEngine] preloadEffect:@"tank1Shoot.wav"];
[[SimpleAudioEngine sharedEngine] preloadEffect:@"tank2Shoot.wav"];

Build and run, and enjoy some groovy tunes as you explore! :]

Where To Go From Here?

Here is an example project with all of the code from the tutorial series so far. You're ready for Part Two of the tutorial, where you'll add shooting, enemies, and action! In the meantime, if you have any questions or comments on the tutorial so far, please join the forum discussion below!
https://www.raywenderlich.com/6804/how-to-make-a-multi-directional-scrolling-shooter-part-1
class Foo
  def to_ary
    [1,2,3]
  end
end

f = Foo.new
[1,2,3] == f # Should be true

This is also a bug in MRI 1.8.6 p36. See the slightly mis-titled bug #11585 on RubyForge for more info.

Marcin, it should just return false. As a rule, boolean methods don't raise an error in MRI on a dumb comparison. They just return false.

def test_ary
  o = Object.new
  def o.to_ary; end
  def o.==(o); true; end
  assert_equal(true, [].==(o))
end

Which assumes that having a to_ary method is enough, and then it goes for the "==" comparison (but not on the to_ary result, just on the argument itself).

From reading through the description of the bug, it seems the expected behavior is: when an array is tested for equality against an object and the object has to_ary, then it should compare against the result of the to_ary. In the test_ary that Marcin mentioned being broken by the patch, I see 2 possibilities.

1. The patch is working as expected, where [].==(o) is calling o.to_ary for the comparison. Since o.to_ary is defined as def o.to_ary; end, it returns nothing, so it must be returning nil, which evaluates to false for the comparison.

2. Marcin mentioned that having a to_ary method is enough and it should go for the "==" comparison, which I assume means he expected the assertion to return true because it was defined in def o.==(o); true; end. But that definition is a singleton method on 'o', while the assertion is calling [].==(o), which will call the '==' method of Array. If I understood Marcin correctly, then the assertion should be assert_equal(true, o.==([])).

So to fix the test, either:

1. Change "def to_ary; end;" to "def to_ary; []; end", or
2. Change the assertion "[].==(o)" to "o.==([])".

My guess would be option 1, since this test is about the array comparison utilizing the to_ary method.
http://jira.codehaus.org/browse/JRUBY-1148
Once Bug 770899 is on Nightly (v16) and naturally goes through a migration to Nightly v17, + 3 weeks, we should land this patch which will remove all prefetch clearing code completely. After the 3 weeks into the next cycle we'll also request that this goes onto Aurora v16. That way code that deletes read-only .pf files will never reach beta (as it shouldn't). Created attachment 639118 [details] [diff] [review] Patch v1. This won't land until v17. Comment on attachment 639118 [details] [diff] [review] Patch v1. if (gotCounters && !ioCounters.ReadOperationCount) #endif { XPCOMGlueEnablePreload(); } This should also be removed. From the b2g file? Yup I guess that was forgotten in the other bug for that functionality, but I'll include it before landing this one. Not yet tracking for 16, since bug 770899 hasn't landed yet. I think I marked the wrong bug with that flag, sorry about that. Ignore. The landing of this patch is on hold until we gather more data relating to the benefits of clearing prefetch on cold startups. Comment on attachment 639118 [details] [diff] [review] Patch v1. Taras any update on the prefetch priming analysis? I'd like to land this on m-c soon. (In reply to Brian R. Bondy [:bbondy] from comment #8) > Comment on attachment 639118 [details] [diff] [review] > Patch v1. > > Taras any update on the prefetch priming analysis? I'd like to land this on > m-c soon. Jason posted the analysis in bug 774444, looks like cold prefetch = better So as mentioned on IRC I'm going to land this patch which removes the prefetch code even after the analysis in bug 774444. I won't have spare cycles in the near future to work on this. And if we implement some kind of prefetch priming task in the future, we can land the related code from the original prefetch code again from a clean slate. You mentioned that prefetch gets regenerated after each update. I'm not entirely convinced that it's worth optimizing post-next-reboot after an update. Most people won't reboot for 1-2 weeks after they update, so they will not see any benefit until after those 1-2 weeks. Each new update after this, they'll have the same slower performance until they reboot again as well. If we do want to effect the way Windows calculates the prefetch, it might be worth using the FILE_FLAG_WRITE_THROUGH flag when writing files from the updater. This will bypass the cache completely so that the files will not be in cache on the next startup, and prefetch files will be generated in a better than normal way. It wouldn't be as good as cold startup prefetch files, but it might be close enough to be better than 1-2 weeks of no prefetch optimizations. Also I'm not 100% convinced that the data seen in bug 774444 will hold true in general as well. I'm afraid we'd have to implement something again before we could tell whether it's worth it or not across all computers reporting to telemetry. Created attachment 647741 [details] [diff] [review] Patch v2. This was previously r+. I just wanted you to do a quick pass on the nsBrowserApp.cpp files. It always enables the preload now. Comment on attachment 647741 [details] [diff] [review] Patch v2. // GetProcessIoCounters().ReadOperationCount seems to have little to // do with actual read operations. It reports 0 or 1 at this stage // in the program. 
// Luckily 1 coincides with when prefetch is

get rid of this comment

+ XPCOMGlueEnablePreload();

add a comment that we do this because of data in bug 771745

Kinda sucks that there are two nearly identical nsBrowserApp.cpp files. I'll file a followup bug to get rid of the GetProcessIoCounters-related stuff, we don't need it anymore.

Created attachment 648067 [details] [diff] [review] Patch v3.

Implemented nits. Carried forward r+. Pushing to oak for testing and then I'll land on m-i.

Comment on attachment 648067 [details] [diff] [review] Patch v3.

[Approval Request Comment]
Bug caused by (feature/regressing bug #): Bug 770899 + other prefetch bugs
User impact if declined: We don't want the workaround from Bug 770899 that needed to land on Aurora for a couple weeks to go onto beta. It deleted people's read-only prefetch files, but no beta/release users have prefetch files in that state. This is the last piece of code that needs to land relating to reverting the prefetch state to its old state.
Testing completed (on m-c, etc.): Testing completed on oak, which is a clone of m-c.
Risk to taking this patch (and alternatives if risky): Low
String or UUID changes made by this patch: none

Comment on attachment 648067 [details] [diff] [review] Patch v3.

Looks good, approving for aurora, and let's get the bug 770899 workaround backed out before merge.

Verified on Win 7 using some basic sanity tests:
- Run latest Firefox version, ensure prefetch does not turn on after 3 minutes (pass on nightly, aurora)
- Update to latest Firefox version, no prefetch service in install logs (pass on nightly/aurora)
- Check telemetry dashboard to see startup_using_preload isn't showing data for latest builds (pass on latest nightly/aurora)
https://bugzilla.mozilla.org/show_bug.cgi?id=770911
Photo by Barna Bartis on Unsplash

I recently had to clean the code we are using in DeckDeckGo and notably had to refactor singleton methods to stateless functions. One of these gave me a harder time and that's why, guess what, I came to the idea of this new blog post 😅

What is debouncing?

Sure, what's "debouncing"? Let's say you have implemented an <input/> in your application which triggers an update to your database each time its content changes. For performance reasons, and maybe even cost reasons (if for example you are using Google Firestore), you might not want to trigger a database update every single time a keyboard key is hit, but rather perform a save only when needed. For example, you might want to perform the save only when the user marks a pause or when she/he has finished interacting with the component. Likewise, you may have a function in your application which might be called multiple times in a row, for which you would rather consider only the last call. That is what debouncing is for me: making sure that a method is not called too often.

Debounce time

Commonly, in order to detect which calls should effectively be triggered, a delay between calls is observed. For example, if we are debouncing a function with a debounce time of 300ms, the function will only be triggered once at least 300ms have passed since the last call.

Vanilla Javascript

setTimeout and clearTimeout working together

There is currently no platform implementation of a standard "debouncing function" supported across browsers (correct me if I am wrong of course 😅). Fortunately, Javascript provides both the ability to delay a function's call using setTimeout and to cancel it using clearTimeout, which we can combine in order to implement our own solution.

export function debounce(func: Function, timeout?: number) {
  let timer: number | undefined;
  return (...args: any[]) => {
    const next = () => func(...args);
    if (timer) {
      clearTimeout(timer);
    }
    timer = setTimeout(next, timeout > 0 ? timeout : 300);
  };
}

In the above code, our function (the one we effectively want to perform, passed as the parameter func) is going to be delayed (setTimeout). Before effectively doing so, we first check if it was not already called before (using the timer reference to the previous call) and, if it was, we cancel this previous call (clearTimeout) before effectively delaying our target.

We could for example validate this implementation with a simple test: call a function which logs a string to the console multiple times in a row. If everything works well, the output should occur only once.

const myFunction: Function = debounce(() => {
  console.log('Triggered only once');
});

myFunction(); // Cleared
myFunction(); // Cleared
myFunction(); // Cleared
myFunction(); // Cleared
myFunction(); // Performed and will output: Triggered only once

If you wish to observe and test this in action, give a try to this Codepen.

RxJS

Good dog helping with the cleaning

The above solution with vanilla Javascript is pretty cool, but what about achieving the same result using RxJS (the Reactive Extensions Library for JavaScript)? That would be pretty slick, wouldn't it? Lucky us, RxJS offers out of the box a solution to easily debounce functions using Observables. Moreover, in my opinion, this solution is a bit cleaner and more readable. The RxJS function we are going to use is debounceTime.
As explained in the documentation, it delays values emitted by a source Observable, but drops previous pending delayed emissions if a new value arrives on the source Observable. To reproduce the same example as above and to create an observable, we could for example use a Subject and trigger next() multiple times in a row. If everything goes according to plan, again, we should find only a single output in the console.

const mySubject: Subject<void> = new Subject();

mySubject.pipe(debounceTime(300)).subscribe(() => {
  console.log('Triggered only once');
});

mySubject.next(); // Cleared
mySubject.next(); // Cleared
mySubject.next(); // Cleared
mySubject.next(); // Cleared
mySubject.next(); // Performed and will output: Triggered only once

That's it, nothing more, nothing else. No custom functions to write, RxJS just solves the debouncing for us. If you wish to give it a try in action too, have a look at this other Codepen.

Nota bene: in the above example I did not, for simplicity reasons, take care of unsubscribing the Observable. Obviously, if you use this solution in a real application, please be careful about this.

Cherry on the cake 🍒🎂

In our open source project DeckDeckGo, we are using a small utils package across our applications and components called deckdeckgo/utils (published to npm) which offers miscellaneous utilities. One of these is the vanilla Javascript debounce function. Therefore, if you need a quick and dirty solution, be our guest and give it a try 🖖

To infinity and beyond 🚀

David

Discussion (9)

Hey David, what are your thoughts on this implementation of throttle with rAF for scroll codepen.io/aFarkas/pen/LNeopm and, more so, these two quite passionate discussions on rAF as it pertains to vanilla JS? github.com/google/WebFundamentals/... github.com/google/WebFundamentals/...

Interesting approach, I would be curious to know how it performs in complex pages with rerendering and stuff, really interesting 👍 About the discussions, I haven't really used rAF so I can't tell much, as I'm not experienced with it.

More on debounce using requestAnimationFrame by Chris Ferdinandi gomakethings.com/debouncing-your-j... Just in case 😉

Coolio, thx for the interesting tips 👍.

Hey David, I love your take on debouncing with vanilla JS! I faced a similar problem when working with a swipeable component (to wait for the swipe event to stop), I'll give this a try to see which is the most performant. Great article, thanks for sharing.

Super Mustapha, thank you for the positive feedback, really happy to hear that 😃 Let me know how it works out, I'm curious about the performance results!

even jsif i'm not mistaken :)

Neat! It looks better now, thx 👍
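To tie the helper back to the input-field scenario from the introduction, here is a minimal sketch of how the vanilla debounce function above could be wired to an input. It is only an illustration: saveToDatabase, the input#title selector and the explicit 300ms delay are hypothetical placeholders, not part of DeckDeckGo or of the @deckdeckgo/utils package.

import { debounce } from './debounce'; // placeholder path to the helper defined above

// Hypothetical persistence call – swap in your own backend or Firestore logic.
function saveToDatabase(value: string): Promise<void> {
  console.log(`Saving "${value}"`);
  return Promise.resolve();
}

// Only the last value typed before a 300ms pause triggers a save.
const debouncedSave = debounce((value: string) => saveToDatabase(value), 300);

const input = document.querySelector<HTMLInputElement>('input#title');

if (input) {
  input.addEventListener('input', () => debouncedSave(input.value));
}

Every keystroke resets the timer, so the database is hit only once the user stops typing for the debounce time – exactly the behavior described at the top of the post.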
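And since the Nota bene above warns about unsubscribing, here is one possible way to clean up the RxJS variant, sketched under the assumption that the Subject lives inside some component with a teardown hook. The destroy() function is just a stand-in for whatever lifecycle hook your framework provides.

import { Subject, Subscription } from 'rxjs';
import { debounceTime } from 'rxjs/operators';

const mySubject: Subject<void> = new Subject<void>();

// Keep a reference to the subscription so it can be released later.
const subscription: Subscription = mySubject
  .pipe(debounceTime(300))
  .subscribe(() => console.log('Triggered only once'));

// Stand-in teardown hook – call it when the component goes away.
function destroy(): void {
  subscription.unsubscribe();
  mySubject.complete();
}

Completing the Subject also lets any operators chained on it release their resources.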
https://practicaldev-herokuapp-com.global.ssl.fastly.net/daviddalbusco/debounce-with-vanilla-javascript-or-rxjs-280c
Waa, the Mac has this... Waaa, the Mac has this..., ad infinitum. As for #15, Windows 95/98 did come with a built in Web server. It, like any included complex service can be, was a gaping security hole. Microsoft rightly removed it. Adding in something like Apache by default and expecting non-technical users to understand it and not muck with the configuration in such a way as to open security holes is ludicrous. Never mind the fact that most residential ISP user agreements specifically prohibit the running of full time servers. Actually, every version of Windows since about 1995 had a web server (Not sure which version of NT introduced IIS, but it was there by 4.0. Windows 95 OSR2 at least had one.), except for XP Home, Vista Home and Home Basic. (The article acknowleges that Vista Business, Enterprise and Ultimate have it). Microsoft never "removed it" although AFAIK it has never been installed/enabled by default (except on servers), they just rightly realised that most Home users have no need/use for such a thing. The author I'm sure found this all very entertaining. Much of what he listed I really don't want on my Windows boxes. My two cents. The dock and the start menu essentially serve the same purpose. The start menu is not like the unified menu. The unified menu is actually something I dislike about Mac OS but Expose really helps it work well. As Windows does not have a unified menu, Expose would just get in my way. All in all, I'd have to say I like the way the Applications works on the Dock on my Mac over running through the start menu on my Windows machines. However, for applications I have in the quick launch versus apps directly on the dock, it's all the same. The whole web server thing. I like having this built into the mac. I also install Apache on most of my Windows boxes. It's great for doing stuff locally or across the network. I'd be be quite concerned about serving either to the Internet without going through the config files and make sure things are suitable for web consumption. If I recall correctly, the default Apache install in Windows clearly states that it is not suitable to be put on the web. If you wish to do so, you need to change some things. I'm sure there was more but I forgot what else the article talked about already. I don't get it. Almost all of these have free 3rd party (or in the case of Virtual Desktops, available from Microsoft) solutions available. Would the author rather Microsoft include all these features natively and make Windows bigger than it is, so they can turn around and accuse Microsoft of "bloat" or "anti-competitive practices"? Not Invented Here. They don't hold the patent too those nice little features and probably avoid adding them too keep from possible infringement. This, I'd guess, is the same as keeping there developers from referencing non-MS source for fear or contamination. Even MS get's screwed by software patents. Interestingly, apart from virtual desktops, Mac OS X doesn't have those basic window manager features by default either. That was a big surprise to me when I tried the Mac for the first time, after years of using Linux and taking "always on top" and "pin to all desktops" for granted. To get that functionality on Mac OS X, I had to install the free Afloat program. I'm still new to Mac OS X, so if I'm wrong about those features not being in the base system by default, please feel free to correct me. 
The pretty much are optional now since the user must find and download them but the "user must find" part means they are not easily available. MS could easily include these in the next Windows install disk as optional (default too not included) features which users could include. Win98's themes and theme manager where purely eye candy features yet they where available but not embedded by default. The complaint is not that MS includes everything in there OS. It's that they do so in a way that assumes the user wants everything available too be installed. It's also that they do so in a way that keeps those "optional" pieces not uninstallable. Example; Can I uninstall IE yet and replace it with another browser and still expect the install to be supported? Most of these suggestions seem reasonable enough to me. There are a few exceptions: the Dock (which exists well enough via the taskbar), Coverflow and the consistent menubar ribbon. Those are just "implement Mac on Windows" comments. The desktop cube (which would be cool, but is not something Windows "should" have), and software repos are "Linux on Windows." All the others seems fair to me. Beautiful backups, a decent Window Management tool (Expose > Flip3D), Podcast recording, a decent screenshot tool, etc are certainly fair things to expect from your OS. I also think self-contained apps are by far and away the best way for apps to exist. I take screenshots fairly regularly too. Here's the procedure: 1. Activate window you want to shoot. 2. Press Alt + Print Screen to capture active window. 3. Open Paint 4. Paste picture 5. Highlight section of item I want to shoot 6. Ctrl + X 7. Open paint.net 8. File > New 9. Press "OK" 10. Ctrl+V to paste content Here's how I do it on Mac: 1. Cmd + Shift + 4 2. Highlight area 3. Click You decide. You realise that step 7 is entirely unnecessary, right? Plus, I take it that you are going to do something with the image (I am often writing technical documentation)? To insert the resulting image in Word (for instance, since it is available on both platforms): Mac: Insert, Picture, From File, Browse to file, OK. Win: Ctrl+V. Edited 2008-04-30 14:29 UTC To test that theory, I took an "Alt+Print Screen" screenshot of my Firefox window and then used "Save for web" in Photoshop CS2 to save in both PNG-24 and JPG formats with the default settings. (My objection to step 7 was not that you used Paint.Net, but that you used two image editors when one was all that you needed). PNG: Pixel-Perfect, 121kb. JPG: Visible artifacting, 195kb. I know this is just a single test, but it does show my point, .png is far better suited for screenshots, since they typically have long runs of identical pixels that both cause artifacting to show and compress well losslessly. The real point is that one can use either Paint or Paint.NET (or some other image editor). You don't have to use both, as listed in your steps. If you prefer Paint.NET (as I do), then you can paste the captured shot directly into it (via Paste, Paste into new Image, or even Paste into new layer) without ever launching Paint, and then do your cropping/editing/whatever. I have decided. I chose Mac. How is dragging your mouse across a 20" screen to a cascade of menus easier than a key combo to capture the action? You haven't compared apples to apples here, so to speak. The other posts described taking a chunk of a screen and properly saving it. Both the Windows way (hit PrintScreen) and the Mac way (one keystroke combo) are easier than your description above. 
If you just want a snapshot, straight away, no editing at all, on Mac, it's Cmd+Shift+3 and you're done. YOU decide. MS sort of tried to implement it with the software store available through Windows Update though I couldn't tell you what's become of it since I looked last. It felt like a store displaying only the company private brand merchandise but with lots of empty space if other vendors took interest. Apple's iTunes does seem to be going the same route. They have a better chance of it since the more strict Apple certification program already vett's software. I think that was one of the things around the iPhone API anyhow. Where they both can't compete is still the volume of software included. Apple can use the iTunes business model and there existing osX certification program to come closest. Microsoft hasn't the controls in place too become a gateway for all win32/64 available programs. They are both also profit motivated. Due to corporate law, the profit margin is the bottom line. If a decision chooses between better quality and better return for investors then legally, the second choice must be accepted. Providing equal access too one's own products along side the competitions products does not maintain barriers against competition. Expensing more budget to hire repository administration staff also works agains the profit margin. We'll see what Apple does though. It has the smaller software library so it may be managable for one company to maintain the library along with the rest of it's day to day product design and retail. GNU/Linux has everything built in.. This article is about what Windows doesn't have, not what Linux may or may not have. As for the article, it's of little value to me. While I agree that there's some basic things missing from the window manager, such as focus follows mouse, I think that downing Windows for stuff it doesn't have yet easily added for free is silly. But then again I am not the average user; I know where to look for free stuff And for remote access to a PC, logmein.com is free, and works like gotomypc.com does. This article is really just a thinly masked Mac leg-humping session. Edited 2008-04-30 14:41 UTC You need to read the article ... 1. Expose Available on: Mac 2. Virtual Workspaces Available on: Linux, PC-BSD, Mac 3. Back to My Mac Available on: Mac 4. Screen Sharing Available on: Mac 5. Time Machine Available on: Mac 6. ISO Burning Available on: Mac, Linux, PC-BSD 7. Stickies Available on: Mac, Linux 8. Podcast Capture Available on: Mac 9. Software Repositories Available on: Linux, PC-BSD 10. Desktop Cube Available on: Linux, PC-BSD 11. Application Dock Available on: Mac 12. Automated Screen Shots Available on: Mac 13. Multitouch Trackpad Gestures Available on: Mac 14. Cover Flow Available on: Mac 15. Pre-Installed Web Server Available in: Mac, Linux, PC-BSD 16. POSIX Compliance Available on: BeOS, Mac, Linux, PC-BSD 17. Standardized Menu Ribbon Available on: Mac 18. Single-File Applications Available on: Mac All of those are available on the default install on GNU/Linux ... Edited 2008-05-01 00:08 UTC Xfce has stickes, so if PCBSD runs XFce, it should have them too.. Edited 2008-05-01 14:18 UTC. Nope not the case. Any good distro should have most of these things enable by default. If that were the case then Linux would fail on all counts because at the end of the day its just a kernel, everything else is an installable package. Most distros like Ubuntu include compiz by default, as well as vnc. On that note I would have to disagree. 
Its more a matter of taste. The little bit of acceleration that compiz adds to most animations is visually appealing to me, while both KDE4 and Apple just move the window without any acceleration. There are things that compiz does that I miss seeing in expose sometimes. Compiz lets you do filtering by typing text if you know the app name or window title, another thing I keep trying to do in expose is right-clicking on windows to bring them to the front or pressing the middle mouse button to close windows. Its little things like that I think makes compiz batter in some ways. Other than that there are very little differences in how each one works. You can always turn off the acceleration if you don't like the, imo, more dynamic animation in compiz, and all of the other features could be turned off as well. One feature I do like that iss missing in the list is the space bar thingy in OSX. I like being able to preview a file or get some information on a file just by tapping the space bar. That was genius imo. Edited 2008-04-30 14:00 UTC What does that have to do with Expose on the mac or the fact that I'm talking about filtering windows in in expose mode in compiz? As far as I know (and I'm using vista as I write this) you can't do that in flip3d mode. Not everything is an attack on Vista. I was comparing features between two different implementations of expose. First off, there is definitely a reason Microsoft Windows doesn't have some of these features - it is a different economy. Microsoft has built an economy based on 3rd-party software vendors. By providing excellent development tools, from early VB to current Visual Studio, MS has paved a way for ISV to carve out a niche for themselves. This article is definitely a case of "Damned if the do, Damned if the don't". When MS integrates tools into the OS, the hurt the ISV's, and they get grief over it. Now, if they leave it to the ISV's, they are criticised. Which is it? My second comment has to do with software repositories. I really believe this is the greatest strength of Linux/BSD. I don't just have Windows Update, I have everything-on-my-computer update. And how much easier can it get than the "Add/Remove Programs" on Ubuntu. Click the programs you want to add, and click Apply. I think package management is a must-have feature for a modern OS. It makes for a secure, easy-to-manage system. 1) Expose - Kinda useful, but since Windows has a per-window taskbar as opposed to the Mac's per-application dock, not as useful as on the Mac. 2) Virtual workspaces - Microsoft already make a powertoy (for XP at least) for this and there are several decent freeware implementations available. Useful, but not really missed. 3) Back To My Mac - Windows already has Remote Desktop, but it's not all that useful outside of a LAN unless you have a dynamic DNS provider, and even then it has no NAT traversal. I could imagine Microsoft putting together something more useful under the Windows Live or Home Server banners. Until then, LogMeIn has a free version, unlike the suggested GoToMyPC. 4) Screen Sharing - NetMeeting, Windows Meeting Space, SharedView. 'Nuff said. 5) Time Machine - Shadow Copy (mentioned in article) is part of Windows, just needs a better UI (currently accessed via file properties). 6) ISO Burning - I agree, definitely needed. 7) Stickies - Outlook has them, but I've never really seen the need to clutter up the screen with imitation post-its. 8) Podcast Capture - What? Very few people do/can/should podcast. 
Not really something that needs to be build in to Windows, but there is always Windows Sound Recorder (updated in Vista). 9) Software Repositories - A single application update system would be very welcome at least, although I think something more like Steam, but for all kinds of apps would work better than just coping apt. 10) Desktop cube - Would only work in conjunction with item 2, but even then, it's only a graphic effect. 11) Application Dock - Not sure exactly what it is asking for, we already have the sidebar and taskbar, what's missing? 12) Automated Screen Shots - Between "Print Screen", "Alt+Print Screen", Paint and Clipping Tool (Vista/Tablet PC), Windows has just about everything the Mac has, except I can never remember the key combinations on the Mac. 13) Multitouch Trackpad Gestures - "Some PC notebook vendors, such as AsusTek, are beginning to ship their notebooks with multitouch trackpads and the drivers required to make them work." - Already coming then. 14) Cover Flow - Just eye candy. I do, however use a cover-flow-esque UI to switch apps on Linux, so maybe a useful effect, but not something that is in any way required of Windows, in fact, I'd prefer Microsoft to come up with their own graphic effects. 15) Pre-Installed Web Server - All not non-"Home" versions of Windows already have it. Most users will never use it. 16) POSIX Compliance - Not something that should really be user-visible. Application developers can already include the cygwin runtime with their ported app, Microsoft has made several attempts at this (NT POSIX subsystem, Services For Unix, Subsystem for Unix-based Applications) none of them particularly successful. 17) Standardized Menu Ribbon - Mac UI feature. Probably better than Microsoft's current strategy of building a custom UI for every application though. 18) Single-File Applications - Would be great. Even single-folder applications (as they really are on the Mac) would be good, but Microsoft follows the philosophy of hiding things, rather than simplifying them, so very unlikely to happen.. 3) Hamachi and Teamviewer are free as well. 4) Use Teamviewer as well 11) i think the taskbar+task-switching+icon tray in Vista is better than Mac (except Mac had Expose). Yes, Mac had even more eye-candy. 9) Software Repositories make sense to OSS world. If Microsoft implement it, it will only bring more trouble to them! Point 8) 11) 12) 15) were added to discredit the article itself. 17) personally, i like Microsoft style. fairly speaking, i dont see anyone really had problem with using a menu. Sure, people may be confusing switching to/from Windows/Mac Why isnt there a way to virtualize the user space, so whatever a user installs, it does not damage the underlying system, and when you remove the user, you remove all software/changes/data from that user. It should be beyond file/device rights or jails, the user should have access to change everything, run services, etc, yet it does it only for that user. Single file application should be spread to all OS! Yes it takes more disk space with duplicates and common files but who cares? HD space is cheap these days and it's so much easier to manage, it keeps the bloat out of the system. It'd be even better if applications were linked to their generated files/configs and those were destroyed along the app in case of a "deinstall". Edited 2008-04-30 14:54 UTC I don't need or want all the flash expose/dock icon resizing. 
I like my menu bar to be associated with the window (and there is standardisation, File Edit View(optional) Tools(optional) Help is the standard order for windows). Most of the rest are included third party apps. Remember MS got sued for including a web browser and a media player in it's OS, so if the powers that be believe that that is too much bundling, how would they react to an app that recorded media and published it to the web! The only thing on that list that I agree with is ISO burning, though I wonder how often non-power-users who are not pirating software actually need to burn an ISO? This is ridiculous list, that could easily be replaced by the sentence "Windows is not Mac OS X" 18) Single-File Applications Already coming. Portable Applications became a trend (,), Thinstall (now part of VMware), Altiris (now part of Symantec) and Softgrid (now part of Microsoft) brought easier deployment with the application virtualization and now the only thing left is that the process is standardized on all windows app. Maybe soon will be the time for a registry-free windows? on the rest... I agree, they mostly want a mac-on-win, and many of those things aren't even useful on windows... anyways, if you want them, go get them.. there are free docks (, full even with leopard stacks: and another option:), free Expose clones ()... etc.. there are also non-free ones for the desktop cube, there was Yodm3D, that was bought by Otakusoftware, but now there is Cubik Desktop ()... The fact is... the per-window taskbar is a tool that works pretty well for me. Since I work on a mac I really miss it. Before I clicked on the right tab on the bar, I knew where the window was, so I had not to look for it. Now I either have to use Apple+TAB but that only switches beween Apps, or I have to use exposè, push the button, get a lot of windows that often look all the same, find the right window (I don't know where it is already), click on it. For me this is cubersome. Not really a must have. I really miss BeOS tiny sliding window tabs, lets you stack up windows with just the tab as the visual cue, far better than iconifying. I know KDE can make windows sort of BeOS like if you want, tried that but it didn't quite feel right, the sliding part was missing IIRC. Every OS should offer this, its not a big thing to implement in the window server. Did anyone ever do a Windows version of sliding tabs? Of course all apps should be single file / hidden folder. Even the OS should as much as possible be composed of single file objects that if present are available to use, would be pretty easy to upgrade or remove each item as needed. Also I really wish C,D,E... drive names would just go away What's wrong with drive names? It really makes sense to me, actually, to separate the different physical media of the machine in the UI so users can easily know which one is their external HDD and which one is their cdrom. It's all under one namespace in NT anyway (the drive letters are just Object Manager symbolic links to \Devices\HarddiskNVolumeM). I think in WinXP, definately under Server and in Vista, you can choose to mount all drives off a root directory like the *nix folks do. If I heard correctly, letter name mount points remain for compatability with programs that can't manage without a drive:/path style setup. If you really want drive letters to go away, you can do it already. It would be nice if that was the natural layout though. 
Expose - goofy eye-candy bull cookies that is hardly missed, because we have a much CLEARER list of what applications are currently running in the Windows, KDE and Gnome world - it's called a taskbar. Expose is just oh so useful when you have three spreadsheets, two word documents, four browsers and three to six text editors open... Asking for handling multitasking how an Apple does it is a step BACKWARDS - frankly they've not even caught up to Windows 95 in that department! Virtual Workspaces - Confuses Joe Sixpack, and frankly I've never needed it as if I need more 'workspace' I add another monitor - and have since Windows 3.1 using Targa boards. These people bragging about Twinview on linux or the total train wreck that is Xinerama need to take a look at how well it worked and how SIMPLE such things were in Win98. Hell, Ubuntu 8.04 is the first time I've been able to get more than three displays working in the placement order I want, something I've considered basic functionality since Windows 98 and Mac System 6. 'Back to my Mac' - Sounds like a good idea for the author - Oh wait, he's talking about some the overpriced service. Rather than pay for some goof assed service, how about setting up the home PC as a server or use remote desktop. This one's just a total /FAIL/ Screen Sharing - Again, remote desktop, or more specifically "request remote assistance" which even in XP, much less Vista. The writer of the article knows jack about windows. Time Machine - No matter how simple you make it, Joe user is going to be too lazy to bother with backups. For those of us who have important data though, I can see agreeing with this one... except anyone who has data important enough to spend the time backing up gigabytes is probably just gonna .rar or tar it anyways. ISO Burning and Podcast Capture - and if they included it, I bet you'd have third party vendors screaming bloody murder just like they do over including a browser, a media player, .zip functionality, anti-spy tools, etc. etc. etc. Even when the competitors, as pointed out, also include THE SAME THINGS. Which is why the EU should be going after Apple for including Safari and iTunes and Ubuntu for including Firefox and Totem and/or RythmBox - for a law to be fair it MUST be applied equally. But of course, your dirty hippy FLOSS fanboys and prius driving california tofu Mac zealots can't possibly believe in fair and equal treatment. Stickies - Funny, I just have a notepad replacement in my quicklaunch and save to the desktop with a meaningful title. Oh noes, I have to double click on it. Software Repositories - Repositories are great, right up until you want an application that is NOT in the repositories. On windows, you download the program you want and run it... It's not that hard and billions do it daily. Big ****ing deal. Desktop Cube - Because once again goof assed eye candy that distorts everything to the point that all those text and browser windows look the same is SO useful. Maybe if they spent less time on goofy crap like this they could write some stuff that added actual functionality? Application Dock - up, definately a Mac *** since from a functionality standpoint the taskbar with quicklaunch kicks the Dock's ass (and I do go back and forth between OSX and WinXP to say that)- if for no other reason than it consumes less screen space. The dock is such a pathetic tool for figuring out what's running (oh yeah, those crappy little four pixel triangles are SO obvious). Sleek? That is the LAST word I'd use to describe the dock. 
Pain in the ASS comes to mind... But then since I prefer to run my taskbar in portrait mode on the left (especially on widescreen systems since no APPLICATIONS are really useful that direction) I tend to get more functionality out of quick launch than the dock would ever provide. Automated Screenshots - Big ****ing deal - oh noes, you have to open an application. Honestly I prefer having an intermediate application handle that, so I can control saving the file or make necessary edits (since rarely would I want my full screencap displayed without editing out account names, etc). Trackpad Gestures - Most of which you don't need if you add a second or ***SHOCK*** third button, much less a WHEEL. Apple accomplished trackpad gestures of course, because they have their head wedged up their ass about putting more mouse buttons on a laptop - and lack the technical foresight to design a mouse with real buttons (because 'tapping' one side is a>accurate and b>intuitive) or a trackball that can be opened up and cleaned without breaking it. Cover Flow - Because of course, the icon is SO much more important than the title, date, filesize, and file type. Again, goof assed eye candy bull that provides LESS functionality than just switching to LIST view... Which you can navigate just fine with the keyboard and sort through a HELL of a lot quicker. (especially if you realize you can change the sort order by clicking on column headers and actually KNOW your alphabet) Pre-installed Web Server - 'some versions' - even 98 included the personal web server, and sorry, XP home has it TOO. It's been tried, and it was a miserable failure nobody used and those who did usually ended up getting banned from their ISP for violating their EULA. Of course, having a desktop computer default to responding on port 80 is SO ******* brilliant from a security standpoint. Posix compliance - because it has 'cost' windows so much in terms of getting developers to write software for it. Instead of posix, Microsoft took the time to make it so any jackass with a 4 year degree can churn out a VB crapplet - most of which STILL beat the tar out of 90% of the legacy crap Posix compliance would bring to the table. Remember, we're talking about the company who's representative said "DEVELOPERS, DEVELOPERS, DEVELOPERS, DEVELOPERS, DEVELOPERS, DEVELOPERS, DEVELOPERS, DEVELOPERS" - if they don't think they need posix for that, they're probably RIGHT. Single file applications - Ok, on this I WHOLEHEARTEDLY AGREE. Every other OS has this completely *****ed up. I really like apples approach of making applications self contained to their own directories. DLL hell, dependancy hell, registry hell, inconsistant directory location hell - **** THAT SHIT. (though of course your 'binaries are evil' Gentoo FLOSS whackjobs are going to argue against this) It's one of the few things I think Apple ever did that can be held up as a shining example of how things SHOULD BE DONE. (You certainly can't do it for their rinky badly designed hardware, goof assed back-assward UI and fat bloated eye candy manure - I've seen cars built by BL that were better made) All in all though, the article is a total miserable /FAIL/ by someone who doesn't know enough about windows to be writing such and article... Either that or they just don't use their computer to get actual WORK done on it. 
I disagree: in Windows or Linux, we have the taskbar AND the Alt+Tab view/mode. Exposé is a replacement for (and an improvement on) the Alt+Tab view/mode; this doesn't mean that the taskbar should be removed.
(sarcasm)Let them eat cake!(/sarcasm) Talk about an overpriced alternative.
I'd say that with the virtual cube for switching views, a casual user shouldn't be too confused by virtual workspaces; plus it's much cheaper and doesn't need physical space or power, so it's different.
You sound really arrogant here. The simpler it is to back up your data, the better, period. Given that Apple controls both the hardware and the software, IMHO they should do even more on this topic: provide by default an additional disk reserved for backup, and push for configuration of online backup at first startup (to save data in case of theft, flood or fire).
I am a Linux/FreeBSD/Windows user, and believe a couple of things in the article are far-fetched:
1. Exposé. Would be nice to have, actually. Mac, Linux, BSD have it.
2. Virtual Workspaces. Useful. Free on XP, so no problem. Have not tried any Vista solution for this.
3. Back to My Mac. So? Install an FTP or SFTP server on your remote PC and you are ready to go. There are free/open source solutions for this. And no yearly payments either.
4. Screen sharing. Never bothered with it, but surely there are free programs to use for this.
5. Time Machine. Well, Windows backup tools need a major overhaul. And yes, storing the data for shadow copies and not allowing the users of Home Premium to use it is what I'd call fraud.
6. ISO burning. Nice to have, but then there are so many free tools for this. Not a problem.
7. Stickies. Available in the Vista sidebar.
8. Podcast capture. Well, as they said, Audacity. Free app, problem solved.
9. Repositories. Entirely different philosophy from Linux here, so I don't think they would apply anyway.
10. Desktop cube. Oh come on. We don't need the cube; it is just for us *NIX users to show off and piss off Windows fans.
11. Application dock. I love the dock, but it would probably mean a complete redesign of the Windows UI to match it.
12. Automated screenshots. Vista has a snipping tool. Other Windows versions can use MWSnap. Excellent and free utility. Problem solved.
13. Multitouch trackpad gestures. Never used this, don't know.
14. Cover Flow. Nice, but not essential.
15. Pre-installed web server. THANK GOD Windows does not have IIS preinstalled! It is, after all, a desktop system. It is available for some versions, though I would advise you to install Apache anyway. So much better.
16. POSIX. This is a big discussion.
17. Standardized menu ribbon. Well, at least Office 2007 has a ribbon. And to tell you the truth, I just hate it.
18. Single-file applications. This is not the real problem. Just say the magic word "Registry"...
A decent Microsoft email client (on Vista anyway) would be a start. A lot of the suggestions are about eye candy. If you put those aside, the article has far fewer suggestions for genuinely useful must-have items that won't cause security problems or be too complicated for many users. Stickies, better backup and ssh (not mentioned directly) would be my three. Linux-style virtual desktops would be my top eye-candy want. Some of the suggestions which claim to be Mac-only are readily available on Linux anyway. I guess the next question is whether Microsoft is any longer a company able to produce a fast, slick OS that is also responsive to users' needs.
Or might these ideas be rolled up into a very special Vista bonus pack for the bargain price of only, say, 99.99 bucks plus a free 176-page EULA? One wonders ... Within the osX UI, applications apear to be one file/icon because you manage them by there program folder. If you go into the terminal, you'll see that the icon is actually the start of a program specific directory tree: " [ icon ] " "MacProgram" is actually "~/applications/MacProgram/" You'll find config files and binaries within the subfolders under /MacProgram/. I believe the program is actually in /MacProgram/bin/ or something similar. I do like how much cleaner osX manages programs though. The first time I installed a program, the longest delay was in realizing I was trying to make it too difficult; just uncompress the file and copy the resulting program icon too your applications and it's done. It still doesn't come close to the repository and package manager method but it's closer than alternatives. Ha.. good point too mention. I can't tell you how often I open a large application then go back to what I was doing only too find half my text typed into the new program's window. When I want to start a program I will. When I want that program in focus, I'll put it there. God how I wish Windows would stop telling me what I want to do based on it's whims. Virtual Desktops and Expose.. I feel crippled in Windows without them. That it's going to confuse Joe Sixpack is not an excuse to hamstring everybody who knows what they're doing. Windows' management of applications is, at best, retarded. I know that something like apt-get in Windows is a complete pipe dream, and that's why Linux will always be superior in that department. What I really want, though, is an apt-get type system where I can download and install self-contained Mac like applications. That would really be the ultimate solution. ISO burning... no excuse. No built-in SSH, no excuse either. Probably the one thing I don't like about Mac OS X is the Dock. I find it to be a usability nightmare. The Gnome panel is really the superior solution in here. Time Machine owns. Every OS should have something similar. As far as Web Server is concerned? 99% of everyone wouldn't care, and the 1% that does can get a web server quite easily. I'd be nice if every retail copy of Windows came with a Visual Studio CD. The Wifi utility in Windows is a joke, and so is NetworkManager in Linux. Everyone should just rip off OS X in here and get it over with. Except for #9 and #18, I think you can get pretty much everything on this list (in one form or another) with 3rd party apps. Some stuff I would like to see not on the list is: - As an alternative to #9, I'd like to see Windows Update expanded to include 3rd party apps. That way, I don't have 9 million programs trying to run at startup for no other reason than to auto-update themselves. - As an extension to the above, have one centraized location for all startup entries, and present a UAC-style dialog to the user and allow them to approve each program's request to start up when Windows does. No more POS apps like Quicktime putting themselves back in the task try when I have already disabled it using msconfig. - Better CD/DVD access: Why should my computer slow down to a crawl when copying a large file from the CD/DVD drive? This problem is made even worse if there are CRC errors on the disc. Ideally, it should be as seamless as copying off a USB drive - System-wide spell checking. 
This is the only thing that makes me envious of Mac users.
- Integration of PowerShell into the OS. But turn it off by default so newbies can't hurt themselves.
All of the virtual desktop addons for Windows fail miserably when compared to X11. The functionality just isn't there. Also missing: X11-style copy/paste. I'm surprised no one has mentioned that yet. Does OS X have X11-style copy/paste? I never did see the need for the extra step of Ctrl+C/Ctrl+V; what a waste of time. For those of you not in the know, under X11 simply highlighting the text is enough to copy it to the clipboard and a middle click is enough to paste. Also missing is the X11 clipboard where you can just pull down a list of all items copied to the clipboard. Windows has a clipboard, but unless I've missed it there is no direct access to it. Sure, it pops up in Outlook or Word (where you can't even shut it off if you don't want it), but what about the myriad of other apps? What if you're not using Outlook or Word?
Drop the whole list, just give me file system support. Native. Proper. And yes, I mean more than "the" two. And in my lifetime. Besides, most of what the PCWorld fellas listed are not, and should not be, OS features, but applications. And, unsurprisingly, there exist apps for most of them. The whole thing is a guy having too much time and no proper idea to write about.
Well, I also live on this planet. I've used Crossmeta's stuff for reading XFS and Reiser [I'm deeply uninterested in ext2/3 handling], which has always been a lifesaver (note: I didn't try it on Vista/WS2k8). Yet I wasn't asking for third-party apps and addons. Additionally, all the options we can currently find have limits. I was talking about native support for industry-standard file systems.
The only things I really miss in Windows are virtual desktops (a real implementation, not the dysfunctional PowerToy), a real command line (Bash, anyone?), and better software installation (screw the registry!). Otherwise, they should ditch the be-like-Apple eye candy and focus on cleaning up their code and improving performance.
Windows has PowerShell...
Exposé for Windows: doesn't Microsoft's Instant Viewer do this?
Their list is pathetic! (except for number 9: repositories of software) Here is my list of features Windows should have:
1. ssh. There is no easy way to get a remote shell on a Windows server, yet OpenSSH is freely available for Microsoft to include in Windows (every other OS does it).
2. syslog. I know there are third-party solutions to add syslog to Windows, but this should be part of the operating system!
3. FHS. Maybe not literally the FHS, but at least a file structure that includes /etc, /proc, /tmp, /var, /bin, /sbin and other common directories. The current mess of drive letters and long directory names with spaces is just unmanageable.
4. NFS. I mean real built-in support for the latest NFS, listing directories in a standard /etc/exports file. Not the pathetic attempt they call "Services for Unix" (which has not been updated in over four years).
5. Built-in common file systems. They should include native support for ext2, ext3, ReiserFS, ZFS, etcetera. The specs are open, but we still need third-party drivers to use these common file systems.
6. MBR and GRUB support. Installing Windows on a machine with GRUB will destroy GRUB. Windows should at least recognize GRUB and give a proper warning before overwriting the Master Boot Record.
It would be even better if Windows would add itself to GRUB (just like any other decent operating system does).
7. GNU tools and bash. A decent shell with some handy tools.
8. /etc. Did I mention it should have /etc? Not C:\windows\system32\drivers\etc with five measly files, but a real /etc with a working /etc/hosts and /etc/nsswitch.conf and all the others like /etc/resolv.conf, /etc/passwd, /etc/shadow, /etc/group, ... ditch the registry!
9. File links. It should support file links (ln and ln -s)!
10. /. Stop using those pathetic drive letters and put everything under /.
11. More handy tools. It would be nice to also have vi, man, gcc, tar, gzip, dd, cpio...
12. A home directory for users. I mean a real home directory that contains all the customized user settings for applications. Not the current mix of fifteen different locations to store some simple information.
13. Repositories. The same as number 9 in the PCWorld list. In the Windows Control Panel there is an icon called "Add/Remove Programs". It would be nice if it were actually possible to add or remove programs with it! The thousand most common freely available programs (Firefox, anti-adware, anti-spyware, OpenOffice.org, GIMP, ...) should be in there so users can easily add them! The "remove" option should really remove the software and not leave (anti-piracy) traces all over the place.
14. GPL. Microsoft should GPL the complete Windows source code and focus on supporting companies instead of ripping them off with proprietary lock-in and restrictive licenses!
You just basically described Linux. Why not just use Linux? I'm no MS fanboy, far from it, but MS is not Linux and frankly it shouldn't try to be. Not because I don't think that Linux is superior, but because I personally think that MS would screw it up, and we also need the variety. Out of the major three, Windows is the only non-Unix-like one.
So 'Exposé' sounds a lot like what happens when I click the middle mouse button on my IntelliMouse in Windows: I get a tiled view of all application windows to choose from. It's actually rather annoying, but it is there in WinXP. I think they added it as part of SP2, because I don't remember it happening when I originally upgraded to WinXP SP1.
http://www.osnews.com/comments/19696
CC-MAIN-2014-52
refinedweb
7,332
72.76
max_pool2d
- paddle.nn.functional.max_pool2d(x, kernel_size, stride=None, padding=0, return_mask=False, ceil_mode=False, data_format='NCHW', name=None) [source]
This API implements the max pooling 2d operation. See more details in api_nn_pooling_MaxPool2d.
- Args:
- x (Tensor): The input tensor of the pooling operator, a 4-D tensor with shape [N, C, H, W]. The format of the input tensor is "NCHW" or "NHWC", where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. The data type is float32 or float64.
- kernel_size (int|list|tuple): The pool kernel size. If the pool kernel size is a tuple or list, it must contain two integers, (kernel_size_Height, kernel_size_Width). Otherwise, the pool kernel size will be a square of an int.
- stride (int|list|tuple): The pool stride size. If the pool stride size is a tuple or list, it must contain two integers, (stride_Height, stride_Width). Otherwise, the pool stride size will be a square of an int.
- padding (string|int|list|tuple): The padding size. Padding can be given in one of the following forms: a string in ['valid', 'same']; an int, which means the feature map is zero padded by size of padding on every side; a list[int] or tuple(int) whose length is 2, [pad_height, pad_width], whose values give the padding size of each dimension; or a list[int] or tuple(int) whose length is 4, [pad_height_top, pad_height_bottom, pad_width_left, pad_width_right], whose values give the padding size of each side.
- ceil_mode (bool): When True, will use ceil instead of floor to compute the output shape.
- return_mask (bool): Whether to return the max indices along with the outputs. Default False; only supports the "NCHW" data format.
- data_format (string): The data format of the input and output data. An optional string from: "NCHW", "NHWC". The default is "NCHW". When it is "NCHW", the data is stored in the order of [batch_size, input_channels, input_height, input_width].
- Returns: Tensor: The output tensor of the pooling result. The data type is the same as the input tensor.
- Raises
ValueError – If padding is a string, but not "SAME" or "VALID".
ValueError – If padding is "VALID", but ceil_mode is True.
ShapeError – If the calculated output shape is not greater than 0.
- Examples:
import paddle
import paddle.nn.functional as F
import numpy as np

# max pool2d
x = paddle.to_tensor(np.random.uniform(-1, 1, [1, 3, 32, 32]).astype(np.float32))
out = F.max_pool2d(x, kernel_size=2, stride=2, padding=0)
# out.shape [1, 3, 16, 16]

# for return_mask=True
out, max_indices = F.max_pool2d(x, kernel_size=2, stride=2, padding=0, return_mask=True)
# out.shape [1, 3, 16, 16], max_indices.shape [1, 3, 16, 16]
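The reference examples above only exercise integer padding, so here is a small supplementary sketch (not part of the original documentation) showing the string padding modes listed in the Args section. The shapes in the comments follow the usual ceil/floor output-size rules and should be treated as illustrative.

import paddle
import paddle.nn.functional as F
import numpy as np

x = paddle.to_tensor(np.random.uniform(-1, 1, [1, 3, 32, 32]).astype(np.float32))

# 'same' keeps each spatial dimension at ceil(input_size / stride): 32 / 2 -> 16
out_same = F.max_pool2d(x, kernel_size=3, stride=2, padding='same')
# out_same.shape [1, 3, 16, 16]

# 'valid' applies no padding at all: floor((32 - 3) / 2) + 1 -> 15
out_valid = F.max_pool2d(x, kernel_size=3, stride=2, padding='valid')
# out_valid.shape [1, 3, 15, 15]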
https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/nn/functional/max_pool2d_en.html
CC-MAIN-2021-25
refinedweb
428
68.87
3D text possible with Hexagon? (Hexagon Discussion)
I tried to make 3D text with Hexagon and after 10 minutes I decided to use Wings3D for that. It seems impossible to make 3D text with Hexagon. What I did is I created a 2D text and then I wanted to extrude it. Then I tried to use some of the extrude tools from vertex modeling. Here the text jumps from crazy giant to tiny, and it's extremely fiddly to do anything, just to end up with hollow letters without a fillet. One of the tools gives me a submenu, but I can't read what it does because the submenu is on top of the help text. See image. I think Hexagon is, after Blender, one of the weirdest and most difficult to use 3D programs.
Since there are only three letters missing, I don't think it is that difficult to read? I also think that Hex is about the easiest modelling program that I have ever seen, bar none, so I have to disagree there too. If you want 3D text, why not just use the 3D text tool in the Primitives menu as shown below.
Is there any tutorial for what you just told me? Extract selected edges along edge??? Where do I find that? What does the icon for that look like, and under which tab is it, please? I think the help text should be at the bottom of the screen. Now I get a tooltip AND a help text at the same time in the same place. Oh, yes, now I also see the 3D text icon, but what purpose does the 2D text have if I can't make a 3D text out of it? How do I change the text after I've made it? Oh, now I had the text in and CRASH. Get yourself a test version of Cinema 4D, then you know what's easy. I just made a text and then clicked on the Scale manipulator and Hexagon stopped doing anything. If you like Hexagon then you'd better not say that in the Bryce forum. You might stand alone there with your opinion.
If all you want is 3D text, and you find Wings easier to work with, by all means use it, I would. I also have C4D but I don't create 3D text much, and it would be quite expensive to buy for that purpose; Hex is free! When you create 3D text, each letter in the text box is a separate character, and you can expand the group in the Scene Tree (use the little 'eye' icon to turn them on and off to make identifying easier).
Thanks for the tips. I have just started with Hexagon and wanted to do something simple like a text. That's why I asked. Now I have clicked on UV mapping, closed Hexagon and restarted it because I wanted to start over with a clean screen. I made a sphere but can't get rid of the funny pattern on the sphere. How do I show the sphere in only one colour? Doesn't Hexagon reset everything upon closing it? I think with Hexagon you have to know EXACTLY what you are doing. One wrong move and everything is messed up. I think having a free 3D program is a very nice thing. Bryce is very nice and easy enough to handle to get good results fast. I tried DAZ Studio, and after a few hours of fiddling around and getting no usable result I uninstalled it. I am already considering doing the same with Hexagon. It's by far too complicated. I can make nice things with Wings3D and Bryce, or even with my C4D demo version, but I can't even make a simple nice-looking text or sphere with Hexagon. Does Hexagon not have a renderer? Light, camera, shadows, materials?
When you installed Hexagon, you should have a folder called 'Docs' which contains the manual, have a look at it. There are also lots of really good tutorials on the GeekAtPlay website. If you go to Edit > Preferences Editor, you will find a buttton that says 'Reset all preferences to defaults', that should get yu back to normal. There are loads of other things in Preferences that you can set too, have a look at them. Hi eireann.sg, Like you I find Blender very difficult. I've found that DS and Hexagon both require more than a couple of hours to fully understand. You should have seen my first results with both. I've perservered and now do much better, but unlike you I can't get a handle on Bryce. Sometimes the interface doesn't suit some people. : ( I have heard it said that Bryce is very like Poser in some respects and this would make sense as I find them both difficult but find Hex and DS much easier. If you want to perserve with Hex I would suggest looking at the tutorials over at geekatplay they have some of the best tutorials online for it. I tried the manual and initially found it difficult to follow. I'm very visual so I found the videos at geekatplay really useful. There is a stickied list for tutorials in the hexagon forum which may help also. In regard to DS, I'm sorry you didn't find it to your liking. It's a great program but it's interface is very different to the Bryce one and I know that some people prefer it and others don't. Did you try any of the ready to render scenes? I haven't but they may be a good place to start. They didn't have those when I started and also no smoothing...oh the agonies of poke thru! Hope that this is of use to you. Hugs Pen I tried Maya, 3Ds and other programs and also found them impossible to work with. For Hexagon I find fo me personally it has also too many nothing saying icons. I have to scroll over it and only then I know what t does, and next time I want to use it I have to search for it and yet again scroll over it to see its description. In Bryce I miss a mesh editor. The rest seems quite simple once you have looked through the texture editor. There are a lot of material presets you can use, but if you have a specific material in mind and want to make it on your own, thats very complicated to do with Bryce. C4D is the best for me when it comes to modelling. The materials editor is only so so because the procedural texture editor seems to be very limited while the one included in Bryce is by far over developed. In case I wanted to do commercial graphics I think I would buy C4D. Until now I am only working with a demo version which is good enough for me. @JimmyC_2009, just had a look at the documentation. I found the PDF document is full of example movies of which not a single one works because the websites for the example movies dont exist any more. e.g.: try to go to this page: I also found that a lot on this site doesnt work either. Is DAZ3D at a standstill? For the geekatplay tutorials only two were interesting because they were not too long and / or in many parts. That grenade modelling...Silly and too long. Do people only have violence in their minds??? UV mapping for what if there is no renderer. The only two good and interesting tut are: Copying and tapering objects...and... Modeling an Attic Ventilator They are not too long or n too many parts. Modelling a flower pot should not take more than 5 minutes and its a ridiculous three-parter. 
edited to add that it's easy to add to the custom tab, just right click and the dialogue pops up and asks if you want to add it to the custom tab. Other icons I find rather difficult to use. There are some when I select them the object to be changed jumps all over the place at the slightest mouse movement. I had similar issues when I first started that when I extruded I found it difficult to control. It just takes practice and taking it slowly. I also found using the following process helped. Select the face you want to extrude, then hit Ctrl and the curser will change. Use the universal gizmo to control where it goes. The other thing that I've found helps while getting used to the controls is to type the numbers in via the property tab. Hope that helps... Glad you like it! I think I originally found out about it at geekatplay... I just loaded Hex and as anything you get out of it what you put in. Read the manual find any tutorial you can. I am a cad bunny I.E. I model in autocad and Hex is a new language I want to learn so I will. I like the 3d text thingy I have been needing it for a long time. I tried Wings for example and it didnt work for me with that interface but hexagon interface I liked much, I think its a matter of personal taste what I want to add here is I found that hexagon is capable of making more fancy 3d text, depending on what fonts you have installed in your usual programs so go check, photoshop or whatever 2d program you uses for if it has some extended fonts you can install I want to import a 3D text into DAZ Studio 4. But... when I create a text in Hexagon and import it into DAZ Studio 4 it looks like the attachement on this post. Does anybody know a reason for that or has a link to a tutorial how to get this to work? - I use Mac OS X 10.7.4 - I exported it using "wavefront object" because using "send to daz studio" inside hexagon does not work for me. Quickest route to go would be to triangulate. Thx it works I dont mind learning about Hex and I have ran into trouble with everything i have ever tried to make in it. I still don't mind that, but my biggest problem is finding a way or places to learn aout specific problems I am having with it. I either have to spend a hour searching or write in Daz forums and to get a reply days later. I have had some nice people help me out although it is very troublesom. If it wasn't for a few nice poeple here and geekplay i wouldn't know a thing about it after spending a months trying. I am happy it is free and it is great, but if I had to pay a few hundred bucks for this I would be irate. The whole reason i started with Hex is because I wanted to create my own stuff for Daz because it is simply a money pit. Now that I know how to create objects in Hex and model outfits they don't work properly in Daz. Autofit doesn't work unless you buy it from Daz and usually anything you model in hex comes into daz tiny even if you import the models daz provides into Hex and then export them back in, ecspecially Genesis. I can import a genesis figure and then export it right back into Daz and the model will be tiny. Example: I export genesis figure into Hex and make some alterations. I wanted to make a tail for a cat lady and some whiskers so I went ahead and spent the time to create these things. When I exported it back into Daz using the file option "send to Daz' Or saved it and exported as a wavefont obj. It came back tiny. 
I adjusted it's size and tried putting something on it and nothing would work, not only would nothing work it wouldn't even allow me to apply shaders that i had bought from daz. I removed my altered model and used one that i didn't alter or import into Hex and it worked fine. It simply doesn't do what Daz claims it does only their products work. I am not going to bash daz and poser to much because for beginners it is fun and I enjoyed it a lot, but the more advanced I get the more I ask myself why in the hell didn't I research my options more wisely. After spending a lot in my opinion on daz products I am cursed with the knowledge that with other programs I could have started making my own models, clothing, scenes, props on my own. I like Daz and it is good for people who don't want to model and take the time it takes to create their own models, but if you are getting into Daz, Hex, ect to later become more advanced it is a bit of a disappointment. Again the most fustrating thing about Daz is learning how their programs work from Daz filing to Hex modeling. You can save yourself a few years of searching for this information by going elsewhere and finding better programs, Even Free Ones. Sorry Daz people Sincerely, Jerry Hi Jerry, it sounds like you're having a scaling issue. Are you sending the items through the bridge? If so the genesis that your are sending does it have a changed scale size in the parameters tab? If so this could cause the problems you're describing. When you send Genesis to Hexagon make sure that scaling is at 100% . Otherwise it will be the wrong size when it comes back in. 100 is default. I just did it to make sure and it came in tiny. I also put it at a 1000 and it came in at max size which i believe is around 6 foot. I can look and provide a tutorial where someone much smarter then I am is trying to figure it out. Another issue would be micheal 4 who is larger then 6'0 and no matter how large you import the object into Daz is it will only come in at 6'0. So, no matter what your dimensions are youre going to be way off when using him. Thank you by the way. I wish I was sitting next to you to see this for myself. Oh before I mislead you try to take a cylinder and then use the boolean operation in surface modeling to punch a hole in genesis, then try exporting back into Daz. Any manipulation of the model will some how change its size when exporting back into Daz, it will be teeny tiny, Give it a try. Can you describe the process you're using? It might help me to pinpoint what's happening. I just sent a skirt back and forth using the bridge with no problems so I can verify that it works. Also which version of Hex are you using? I am importing Genesis from Daz and then I am using the boolean operation to alter the model, then I am sending it back into Daz using ethier wavefont obj or the send to daz option in the file menu.Everyone should try this for themselves to see what happens. I am using 2.5 This is what it looks like
http://www.daz3d.com/forums/viewreply/185784/
CC-MAIN-2015-40
refinedweb
2,659
79.8
Hello Community, I have been stuck for a couple of days trying to solve one issue. I have successfully taken the data from the web using pandas and stored the NAME list as 'name' and the CODE list as 'code', and combined name and code as ds = 'name' + ' ' + '(' + 'code' + ')'. So, the dropdown is completed. However, I'm stuck on how to connect the selected 'name' to the subplots. There are 3 different subplots I made, and those subplots are all connected to one command as below. I still do not know how to pass the 'name' selected from the dropdown to the item_name below. BTW, the 'def get_code()' is in library.py with all the other function definitions. And will it automatically update when an item from the dropdown is selected? (second question)
'''
def get_code():
    code_df = get_codedf()
    item_name = 'name1'
    code_url = get_url(item_name, code_df)
    return code_url
'''
get_codedf() in the code is where the 'name' and 'code' lists are stored in two different lists; using the item name, get_url() will find the code and put it into a web URL, and that URL kicks off a bunch of code that sends the data to 3 different plots. If I hard-code the item_name it shows the plots perfectly, but I don't know how to connect them with the dropdown I have created. Can anyone be so kind as to help me out here? It would be best if you can make code out of the ones I have, or a sample project would be a great help. Thanks.
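No answer is recorded in this thread, so here is a minimal sketch of the usual way to wire a Dash dropdown to a figure callback. It assumes the helpers get_codedf() and get_url() described above (with get_codedf() returning a pandas DataFrame that has 'name' and 'code' columns) plus a hypothetical build_three_subplots() that turns the URL into one figure containing the three subplots; all component IDs and helper names are illustrative, not from the original post. The callback decorated with Input('name-dropdown', 'value') re-runs automatically every time a new item is selected, which answers the second question.

import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output

# hypothetical helpers, assumed to live in library.py as described in the post
from library import get_codedf, get_url, build_three_subplots

code_df = get_codedf()
options = [{"label": f"{n} ({c})", "value": n}
           for n, c in zip(code_df["name"], code_df["code"])]

app = dash.Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(id="name-dropdown", options=options, value=options[0]["value"]),
    dcc.Graph(id="three-subplots"),
])

@app.callback(Output("three-subplots", "figure"),
              [Input("name-dropdown", "value")])
def update_figure(selected_name):
    # Runs automatically whenever the dropdown selection changes
    code_url = get_url(selected_name, code_df)
    return build_three_subplots(code_url)  # a plotly figure built with make_subplots

if __name__ == "__main__":
    app.run_server(debug=True)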
https://community.plotly.com/t/how-to-update-selecting-dropdown-list-from-pandas-using-callback-option/35148
CC-MAIN-2020-45
refinedweb
243
72.6
On 25/04/2013 at 12:23, xxxxxxxx wrote: Using GetUp I get the parent object I want to use/copy. So I use GetClone() to clone that parent. However, that gives me all the children of that parent. How do I get only the parent and not the children? Of course I can clone the parent and then remove all the (cloned) children, but I'm sure there is a better way?
On 25/04/2013 at 12:30, xxxxxxxx wrote: Look at the documentation of GetClone(), there's a flag for exactly this. Best, -Niklas
On 25/04/2013 at 12:31, xxxxxxxx wrote: GetClone has an optional flag parameter which covers almost all cases.
On 25/04/2013 at 12:56, xxxxxxxx wrote: OK, I was looking at BaseSelect.GetClone(), which has no flags. I see flags with C4DAtom.GetClone([flags=0]) (BaseSelect.GetClone vs. C4DAtom.GetClone). However, I'm getting a runtime error saying COPY_NO_HIERARCHY is not defined, also when using c4d.COPY_NO_HIERARCHY. Importing C4DAtom does not seem to help. What am I missing?
import c4d
from c4d import C4DAtom
#Welcome to the world of Python
def main():
    pass #put in your code here
    obj = op.GetObject()
    parent = obj.GetUp()
    source = parent.GetClone(COPY_NO_HIERARCHY) #create clone
    pass
On 25/04/2013 at 13:28, xxxxxxxx wrote: myobj.GetClone(c4d.COPYFLAGS_NO_HIERARCHY) is correct. GetClone is a member of C4DAtom.
On 25/04/2013 at 14:49, xxxxxxxx wrote: Thanks. The R13 manual needs to be updated, R14 is correct.
On 26/04/2013 at 01:19, xxxxxxxx wrote: Originally posted by xxxxxxxx: "Thanks. The R13 manual needs to be updated, R14 is correct." Just like previous releases of CINEMA, the manuals aren't updated either. Always make sure that you use the latest updated manual for the current version of CINEMA.
On 26/04/2013 at 07:51, xxxxxxxx wrote: Yes, I understand.
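For readers who land on this thread, here is a minimal sketch of the corrected script, assuming (as the original post does) a Python tag context where op refers to the tag; the thread itself only confirms that the flag is spelled c4d.COPYFLAGS_NO_HIERARCHY and that GetClone() is a member of C4DAtom, so the surrounding structure is illustrative.

import c4d

def main():
    obj = op.GetObject()   # the object this Python tag is attached to
    parent = obj.GetUp()   # its parent in the Object Manager hierarchy
    if parent is None:
        return
    # COPYFLAGS_NO_HIERARCHY clones the parent object only, without its children
    clone = parent.GetClone(c4d.COPYFLAGS_NO_HIERARCHY)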
https://plugincafe.maxon.net/topic/7121/8091_h2-clone-an-object-without-its-children
CC-MAIN-2021-31
refinedweb
360
69.18
Chatlog 2008-07-09 From OWL See original RRSAgent log and preview nicely formatted version. Please justify/explain all edits to this page, in your "edit summary" text. 00:00:00 <m_schnei> PRESENT: bijan, m_schnei, rob, MartinD, bmotik, IanH, Rinke, bcuencagrau, MarkusK, Carsten, msmith, alan_ruttenberg, Evan_Wallace, baojie, JeffP, christine, Achille 00:00:00 <m_schnei> CHAIR: alan_ruttenberg, IanH 00:00:00 <m_schnei> REGRETS: Ivan_Herman, Peter Patel-Schneider, Sandro_Hawke, Elisa_Kendall 16:55:10 <RRSAgent> RRSAgent has joined #owl 16:55:10 <RRSAgent> logging to 16:55:29 <bijan> Zakim, this is OWL 16:55:29 <Zakim> bijan, I see SW_OWL()12:00PM in the schedule but not yet started. Perhaps you mean "this will be OWL". 16:55:38 <bijan> Zakim, This will be OWLO 16:55:38 <Zakim> I do not see a conference matching that name scheduled within the next hour, bijan 16:55:40 <bijan> Zakim, This will be OWL 16:55:40 <Zakim> ok, bijan; I see SW_OWL()12:00PM scheduled to start 55 minutes ago 16:56:19 <Rinke> Rinke has joined #owl 16:57:43 <Zakim> SW_OWL()12:00PM has now started 16:57:50 <Zakim> +??P0 16:57:58 <bijan> zakim, ??P0 is me 16:57:58 <Zakim> +bijan; got it 16:58:35 <Zakim> +??P6 16:58:38 <Zakim> -bijan 16:58:39 <Zakim> +bijan 16:58:46 <m_schnei> zakim, ??P6 is me 16:58:46 <Zakim> +m_schnei; got it 16:58:54 <bijan> zakim, mute me 16:58:54 <Zakim> bijan should now be muted 16:59:03 <IanH> IanH has joined #owl 16:59:12 <baojie> baojie has joined #owl 16:59:27 <Zakim> +??P7 16:59:36 <Rinke> zakim, ??P7 is me 16:59:36 <Zakim> +Rinke; got it 16:59:38 <Zakim> + +0186528aaaa 16:59:44 <Rinke> zakim, mute me 16:59:44 <Zakim> Rinke should now be muted 16:59:46 <Zakim> +MartinD 16:59:58 <MartinD> zakim, mute me 17:00:00 <bmotik> bmotik has joined #owl 17:00:00 <Zakim> MartinD should now be muted 17:00:00 <Zakim> -Rinke 17:00:11 <MarkusK> MarkusK has joined #owl 17:00:25 <Zakim> +??P22 17:00:30 <bmotik> Zakim, ??P22 is me 17:00:30 <Zakim> +bmotik; got it 17:00:32 <Zakim> + +86527aabb 17:00:36 <bmotik> Zakim, mute me 17:00:36 <Zakim> bmotik should now be muted 17:00:49 <rob> Zakim, +0186528aaaa is me. 17:00:49 <Zakim> +rob; got it 17:00:54 <bcuencagrau> bcuencagrau has joined #owl 17:01:00 <Zakim> +??P28 17:01:02 <rob> Zakim, mute me. 17:01:02 <Zakim> rob should now be muted 17:01:06 <Rinke> zakim, ??P28 is me 17:01:06 <Zakim> +Rinke; got it 17:01:13 <m_schnei> ScribeNick: m_schnei 17:01:13 <Rinke> zakim, mute me 17:01:13 <Zakim> Rinke should now be muted 17:01:30 <bijan> (reminder to folks: ) 17:01:41 <Rinke> RRSAgent, pointer? 17:01:41 <RRSAgent> See 17:01:42 <Zakim> +??P33 17:01:42 <IanH> zakim, aabb is me 17:01:45 <Zakim> +IanH; got it 17:01:50 <bcuencagrau> Zakim, ??P33 is me 17:01:50 <Zakim> +bcuencagrau; got it 17:01:54 <Zakim> +??P35 17:01:54 <Rinke> RRSAgent, make records public 17:01:55 <bcuencagrau> Zakim, mute me 17:01:55 <Zakim> bcuencagrau should now be muted 17:01:58 <msmith> msmith has joined #owl 17:02:09 <alanr> alanr has joined #owl 17:02:09 <m_schnei> ScribeNick: m_schnei 17:02:25 <Rinke> zakim, unmute me 17:02:25 <Zakim> Rinke should no longer be muted 17:02:42 <Zakim> + +0493514633aacc 17:02:49 <msmith> msmith has changed the topic to: 17:02:51 <Rinke> zakim, mute me 17:02:51 <Zakim> Rinke should now be muted 17:02:52 <Carsten> zakim, aacc is me 17:02:52 <Zakim> +Carsten; got it 17:02:59 <Carsten> zakim, mute me 17:02:59 <Zakim> Carsten should now be muted 17:03:00 <Zakim> +msmith 17:03:18 <Zakim> +??P37 17:03:20 <IanH> zakim, who is here? 
17:03:20 <Zakim> On the phone I see bijan (muted), m_schnei, rob (muted), MartinD (muted), bmotik (muted), IanH, Rinke (muted), bcuencagrau (muted), MarkusK, Carsten (muted), msmith, ??P37 17:03:23 :03:34 <alanr> zakim, mute me 17:03:34 <Zakim> sorry, alanr, I do not know which phone connection belongs to you 17:03:36 <Zakim> +Evan_Wallace 17:03:46 <Zakim> +baojie 17:03:49 <alanr> zakim, ??P37 is me 17:03:49 <Zakim> +alanr; got it 17:03:57 <alanr> zakim, mute me 17:03:57 <Zakim> alanr should now be muted 17:03:59 <IanH> zakim, who is here? 17:03:59 :04:03 <Zakim> ... Evan_Wallace, baojie 17:04:04 :04:15 <rob> I notice that my half of the datatype discussion is in the public-comments archive, but *not* the wg archive. 17:04:47 <alanr> mostly that I am without good connectivity 17:04:48 <bijan> Just my usual about 114 17:04:50 <m_schnei> Topic: Agenda Amendments 17:04:57 <m_schnei> IanH: no amendments 17:05:04 <bijan> zakim, unmute me 17:05:04 <Zakim> bijan should no longer be muted 17:05:38 <bijan> zakim, mute me 17:05:38 <Zakim> bijan should now be muted 17:05:48 <m_schnei> Bijan: amendment 114 should be discussed and perhaps resolved 17:05:51 <alanr> objection on behalf of those not here 17:05:59 <m_schnei> IanH: any objections 17:06:06 <bijan> (yes, do nothing) 17:06:18 <alanr> zakim, unmute me 17:06:18 <Zakim> alanr should no longer be muted 17:07:01 <bijan> This issue is on the agenda 17:07:15 <bijan> I'm fine with waiting 17:07:28 <m_schnei> alanr: time too short to put 114 on agenda's resolve list 17:07:47 <Zakim> +[IBM] 17:07:50 <bijan> zakim, unmute me 17:07:50 <Zakim> bijan should no longer be muted 17:07:50 <Rinke> let's discuss it today, and propose to resolve next week 17:08:08 <Achille> Achille has joined #owl 17:08:10 <m_schnei> alanr: 114 is not ready to be resolved 17:08:19 <bijan> zakim, mute me 17:08:19 <Zakim> bijan should now be muted 17:08:21 <ewallace> +1 to moving up on the list 17:08:28 <Achille> Zakim, IBM is me 17:08:28 <Zakim> +Achille; got it 17:08:29 <m_schnei> ianh: put 114 on top of the list, to be discussed at least 17:08:33 <bijan> thanks! 17:08:41 <alanr> zakim, mute me 17:08:41 <Zakim> alanr should now be muted 17:08:43 <Zakim> +??P11 17:08:48 <christine> christine has joined #owl 17:09:27 <m_schnei> Topic: Action Items Status 17:09:32 <JeffP> JeffP has joined #owl 17:09:53 <m_schnei> Subtopic: Action 150 17:10:05 <JeffP> (I am on IRC only) 17:10:10 <m_schnei> Jie: I sent a mail to RIF group 17:10:46 <m_schnei> Jie: answer suggested to put intern String in RDF namespace 17:10:48 <IanH> Jie: Contacted Axel Polleres and Ivan Herman 17:12:02 <m_schnei> Jie: someone or some group should have a vote 17:12:41 <m_schnei> IanH: summary, things are not quite complete yet, right? 17:12:48 <bijan> zakim, unmute me 17:12:48 <Zakim> bijan should no longer be muted 17:13:09 <m_schnei> bijan: asks whether this is about i18n strings 17:14:04 <bmotik> I believe that disjointness of xsd:string and owl:internationalizedString could be handled as part of ISSUE-126 17:14:34 <m_schnei> IanH: to jie, is this email on our list? 17:14:39 <m_schnei> jie: yes, it is 17:14:39 <bijan> Thanks! 
17:14:42 <bijan> zakim, mute me 17:14:42 <Zakim> bijan should now be muted 17:15:08 <m_schnei> IanH: let's leave this open, because it didn't come to a conclusion 17:15:29 <alanr> alanr has joined #owl 17:15:36 <m_schnei> Subtopic: Action 156 17:15:42 <m_schnei> IanH: skipped, since AlanR not on phone at the moment 17:15:47 <m_schnei> Subtopic: Action 157 17:15:51 <m_schnei> IanH: skipped, since AlanR not on phone at the moment 17:15:52 <alanr> sorry - can't see agenda atm 17:15:57 <alanr> or web site. 17:16:03 <alanr> I think postpone 17:16:11 <IanH> Alan -- we skipped your actions till next week 17:16:13 <m_schnei> Subtopic: Action 161 17:16:16 <m_schnei> IanH: skipped, since Uli is on vacation (?) 17:16:17 <alanr> yes, please 17:16:21 <IanH> zakim, who is here? 17:16:22 :16:27 <Zakim> ... Evan_Wallace, baojie, Achille, ??P11 (muted) 17:16:28 <m_schnei> Subtopic: Action 162 17:16:29 <Zakim> On IRC I see alanr, JeffP, christine, Achille, msmith, bcuencagrau, MarkusK, bmotik, baojie, IanH, Rinke, RRSAgent, Zakim, MartinD, m_schnei, bijan, rob, Carsten, johnlsheridan, 17:16:32 <Zakim> ... sandro, ewallace, trackbot 17:16:38 <bmotik> Zakim, unmute me 17:16:38 <Zakim> bmotik should no longer be muted 17:16:41 <m_schnei> IanH: Diego not here, skipped 17:16:51 <m_schnei> Subtopic: Action 165 17:16:59 <alanr> alanr has joined #owl 17:17:00 <ewallace> diego was supposed to do a write up 17:17:01 <m_schnei> IanH: also Diego's action, thus skipped 17:17:14 <m_schnei> Boris: Hasn't this already been done? 17:17:23 <m_schnei> IanH: I did not see any emails 17:17:35 <m_schnei> Boris: It's already updated in the profiles 17:17:35 <bmotik> Zakim, mute me 17:17:35 <Zakim> bmotik should now be muted 17:17:50 <bmotik> Yes 17:17:54 <msmith> yes 17:18:05 <m_schnei> IanH: 161 subsumed by 162 17:18:31 <m_schnei> IanH: new member of the WG, which is Rob from Oxford 17:18:36 <rob> zakim, unmute me 17:18:36 <Zakim> rob should no longer be muted 17:18:42 <m_schnei> IanH: Rob helps with datatypes 17:18:47 <m_schnei> Rob: Hi! 17:19:02 <rob> zakim, mute me 17:19:02 <Zakim> rob should now be muted 17:19:09 <Rinke> q+ to ask about the minutes of the previous meeting? 17:19:30 <Rinke> zakim, unmute me 17:19:30 <Zakim> Rinke should no longer be muted 17:19:32 <alanr> Action 156 needs to be pushed to next week. Haven't heard back from Judy Brewer on Action 157, so push 17:19:32 <trackbot> Sorry, couldn't find user - 156 17:19:55 <alanr> "Action 156 needs to be pushed to next week. 
Haven't heard back from Judy Brewer on Action 157, so push" 17:19:56 <Rinke> zakim, mute me 17:19:56 <Zakim> Rinke should now be muted 17:20:00 <m_schnei> Topic: Accept Previous Minutes 17:20:22 <m_schnei> Rinke: previous minutes not yet treated 17:20:56 <Rinke> they looked ok to me as well 17:21:14 <IanH> Proposed: accept minutes 17:21:17 <bmotik> +1 17:21:20 <ewallace> +1 17:21:22 <Rinke> +1 17:21:32 <IanH> +1 17:21:37 <MartinD> +1 17:21:39 <JeffP> 0 (didn't check yet) 17:21:39 <IanH> Resolved: accept minutes 17:22:00 <m_schnei> IanH: now let's go on with issues to be resolved 17:22:04 <m_schnei> Topic: Proposals to Resolve Issues 17:22:09 <m_schnei> Subtopic: Issue 5 17:22:46 <m_schnei> IanH: slightly strange issue 17:22:59 <alanr> close as withdrawn 17:23:01 <m_schnei> IanH: Jeremy did not object 17:23:06 <bijan> zakim, unmute me 17:23:06 <Zakim> bijan should no longer be muted 17:23:37 <m_schnei> Bijan: Jeremy sent a mail that HP doen't care anymore 17:23:48 <alanr> zakim, unmute me 17:23:48 <Zakim> alanr should no longer be muted 17:23:56 <bijan> zakim, mute me 17:23:56 <Zakim> bijan should now be muted 17:24:13 <m_schnei> alanr: supports close as withdrawn 17:24:43 <alanr> zakim, mute me 17:24:43 <Zakim> alanr should now be muted 17:25:04 <IanH> PROPOSED: close Issue 5 as withdrawn 17:25:28 <ewallace> +1 17:25:31 <Rinke> +1 17:25:35 <MartinD> +1 17:25:39 <bmotik> +1 17:25:39 <IanH> +1 17:25:43 <baojie> 1 17:25:44 <bijan> +! 17:25:46 <bijan> +1 17:25:47 <msmith> +1 17:25:50 <alanr> I didn't want closing the issue to imply that the technical issues that were raised were solved or rejected. They may be brought up again, if appropriate. 17:25:54 <alanr> +1 17:26:02 <IanH> RESOLVED: close Issue 5 as withdrawn 17:26:17 <m_schnei> Subtopic: Issue 31 17:26:41 <m_schnei> IanH: seems to me as a left over from early days 17:26:52 <alanr> zakim, unmute me 17:26:53 <Zakim> alanr should no longer be muted 17:26:54 <m_schnei> IanH: looks moot to me 17:27:30 <m_schnei> alanr: we haven't finished this conversation 17:28:10 <bijan> zakim, unmute me 17:28:10 <Zakim> bijan should no longer be muted 17:28:25 <m_schnei> bijan: i sent email 17:29:12 <msmith> +1 to bijan. this issue has not been mooted. I also sent an email today. 17:29:14 <bmotik> Zakim, unmute me 17:29:14 <Zakim> bmotik should no longer be muted 17:29:15 <m_schnei> bijan: it's not mooted just by the fact that we have internal syntax 17:29:53 <m_schnei> boris: i don't understand this issue 17:30:02 <bmotik> Zakim, mute me 17:30:02 <Zakim> bmotik should now be muted 17:30:27 <msmith> See e.g., 17:30:29 <bmotik> What are user-defined datatypes? 17:31:00 <m_schnei> bijan: pellet supports working with external xml datatypes 17:31:18 <Zakim> -Achille 17:31:45 <ewallace> because SWBPD didn't choose 17:31:47 <m_schnei> IanH: why is this a problem for our WG? 17:31:48 <Achille> I have to leave because of a conflicting meeting 17:32:08 <m_schnei> bijan: old owl wg did not do something about this 17:32:11 <msmith> The last query to XML Schema said that XSCD work was ongoing 17:32:14 <bmotik> Zakim, unmute me 17:32:14 <Zakim> bmotik should no longer be muted 17:32:32 <m_schnei> boris: what is meant by "user defined datatypes" 17:32:34 <msmith> bmotik, see 17:32:55 <rob> Is the set of types open-ended in OWL 1.0? Our proposal is that the set of types is limited in OWL 2... 
17:33:13 <alanr> +1 to not moot 17:33:14 <bmotik> Zakim, mute me 17:33:14 <Zakim> bmotik should now be muted 17:33:43 <bijan> I'm happy to resolve it negatively if the wg isn't interested 17:33:54 <m_schnei> ianh: we are not in agreement at the moment 17:34:32 <rob> I think some of these issues might be mooted after discussion of the new datatype proposal, but not until then. 17:34:33 <m_schnei> msmith: would be nice to have OWL together with XML Schema 17:34:41 <bmotik> Thanks! 17:35:00 <bijan> And they're not being mooted doesn't mean we can't close it 17:35:00 <m_schnei> IanH: let's take this offline, and defer resolution 17:35:15 <bijan> If the group sentiment is against that, it's fine to close it. 17:35:16 <m_schnei> Subtopic: Issue 53 17:35:41 <m_schnei> IanH: issue raised long time ago, it's rather a usecase 17:35:43 <bijan> I can add it ot the n-ary use case page 17:35:58 <alanr> +1 to resolve in this way 17:36:34 <ewallace> +1 to resolve by adding the use case to the N-ary use case page 17:36:35 <IanH> PROPOSED: Resolve issue-53 by turning it into an nary datatype use case 17:36:37 <bijan> I've added it ot the n-ary data predicate use case page. 17:36:41 <Rinke> +1 17:36:42 <ewallace> +1 17:36:45 <bijan> +1 17:36:45 <bmotik> +1 17:36:48 <MartinD> +1 17:36:55 <IanH> +1 17:36:57 <msmith> +1 17:37:00 <Carsten> +1 17:37:04 <IanH> RESOLVED: Resolve issue-53 by turning it into an nary datatype use case 17:37:31 <m_schnei> Subtopic: Issue 87 17:37:46 <rob> zakim, unmute me 17:37:46 <Zakim> rob should no longer be muted 17:37:55 <bijan> zakim, mute me 17:37:55 <Zakim> bijan should now be muted 17:37:57 <m_schnei> IanH: rational number datatype should be subsumed below 126 17:38:01 <alanr> fwiw, i do as well 17:38:13 <rob> zakim, mute me 17:38:13 <Zakim> rob should now be muted 17:38:15 <bmotik> I'd prefer closing the issue. 17:38:49 <rob> we can decide 126 independently of whether we support rationals 17:38:53 <msmith> msmith: I agree with Rob. It is easier to close smaller issues 17:38:56 <rob> ...thus easier to keep rationals as a separate issue 17:39:14 <m_schnei> IanH: so let this one open 17:39:25 <m_schnei> Subtopic: Issue 128 17:39:52 <alanr> zakim, unmute me 17:39:52 <Zakim> alanr was not muted, alanr 17:39:54 <m_schnei> IanH: I have proposed to close this issue 17:40:11 <m_schnei> alanr: this kind of review will be ongoing 17:40:25 <rob> so we resolve that it would be a good idea? 17:40:27 <bmotik> Sure -- all documents have to de reviewed before publishing. 17:40:36 <alanr> zakim, unmute me 17:40:36 <Zakim> alanr was not muted, alanr 17:40:40 <m_schnei> IanH: we did our job for now 17:40:47 <alanr> I prefer to leave it open as a reminder, but not bring it to meeting 17:40:53 <alanr> zakim, unmute me 17:40:57 <Zakim> alanr was not muted, alanr 17:41:25 <bmotik> Zakim, unmute me 17:41:25 <Zakim> bmotik should no longer be muted 17:41:30 <m_schnei> alanr: though this issue should be a reminder for us to review later 17:41:56 <alanr> ok, Boris, that's fine. Will start a wiki page. 
17:42:01 <bijan> +1 to morale boosting effect of issue list 17:42:04 <bijan> reduction 17:42:12 <ewallace> +1 on moving to QA list per Boris' suggestion 17:42:15 <bmotik> Zakim, mute me 17:42:15 <Zakim> bmotik should now be muted 17:42:25 <m_schnei> boris: have a quality list which contains things which have to be done at the end 17:42:44 <IanH> PROPOSED: Issue 128 resolved by moving it to a QA wiki page 17:42:47 <rob> +1 17:42:48 <bmotik> +1 17:42:50 <bcuencagrau> +1 17:42:54 <ewallace> +1 17:42:56 <IanH> +1 17:42:56 <alanr> +1 17:42:56 <Rinke> +1 perfect 17:43:06 <bijan> +1 17:43:07 <IanH> RESOLVED: Issue 128 resolved by moving it to a QA wiki page 17:43:09 <MartinD> +1 17:43:33 <m_schnei> IanH: happy about having closed several issues 17:43:37 <m_schnei> Topic: Other Issue Discussions 17:43:46 <m_schnei> Subtopic: Issue 114 (Agenda Amendment) 17:44:00 <alanr> 17:44:33 <m_schnei> alanr: worries about sensibility of punning 17:44:48 <m_schnei> alanr: I would like to understand the usecases 17:45:39 <m_schnei> alanr: I looked at each possible combination and checked whether this makes sense (eg. class / constant punning) 17:46:06 <m_schnei> alanr: does punning make sense in the context of SPARQL queries? 17:47:01 <bijan> zakim, unmute me 17:47:01 <Zakim> bijan should no longer be muted 17:47:42 <rob> I'm very skeptical of calling any of this "trivial". 17:47:43 <m_schnei> bijan: given that all other forms of punning are in Full and easy to implement, we can keep it in 17:47:59 <rob> Rationale -- explaining it to users will be hard. 17:48:07 <alanr> -1 to perversions in the language 17:48:10 <rob> Unless we have a simple conceptualization. 17:48:13 <m_schnei> bijan: would otherwise create artificial distinction 17:48:15 <alanr> gives us a bad name 17:48:27 <rob> Not all OWL-DL tools. 17:49:00 <bmotik> Zakim, unmute me 17:49:00 <Zakim> bmotik should no longer be muted 17:49:04 <m_schnei> bijan: we should put it in, and give best practice notes if some form turns out to be harmful 17:49:41 <m_schnei> boris: what is the problem, what does it mean that a form of punning does not make sense? 17:50:07 <MarkusK> +1 to Boris: punning is no semantic problem 17:50:28 <m_schnei> boris: we only dropped property/property punning because of RDF serialization problems 17:50:50 <m_schnei> alanr: there were also other problems 17:51:20 <m_schnei> alanr: we are a Semantic Web working group 17:51:31 <m_schnei> alanr: have to take the usecases into account 17:51:47 <m_schnei> Boris: if you don't like a certain form of punning, don't use it 17:52:34 <bmotik> bmotik: What could go wrong with different types of punning? 17:52:51 <m_schnei> Bijan: Why should wg spend so much time on this point, if there is only a single member org against 17:52:56 <bmotik> bmotik: What types of punning do you consider really bad? 17:53:12 <rob> My concerns would be addressed by some good example of usage that could be used as the basis for some documentation. 
17:53:33 <bijan> zakim, mute me 17:53:33 <Zakim> bijan should now be muted 17:54:11 <m_schnei> alanr: what goes wrong is that things can be done which are nonsense 17:54:26 <m_schnei> alanr: general question is, what is a feature for 17:54:38 <bijan> I didn't speak for them all 17:54:41 <bijan> I made a prediction 17:55:19 <Zakim> -alanr 17:55:29 <rob> zakim, unmute me 17:55:29 <Zakim> rob should no longer be muted 17:55:38 <alanr> alanr has joined #owl 17:55:47 <alanr> zakim, unmute me 17:55:47 <Zakim> sorry, alanr, I do not know which phone connection belongs to you 17:56:34 <Zakim> +??P1 17:56:42 <alanr> zakim, ??P1 is me 17:56:42 <Zakim> +alanr; got it 17:57:14 <alanr> ok - that was said in the issue 17:57:21 <alanr> Thus, the same name can be used 17:57:21 <alanr> in an ontology to denote a class, a datatype, a property 17:57:21 <alanr> (object or data), an individual, and a constant 17:57:22 <m_schnei> m_schnei: (answer to alanr) it is not possible to pun classes and constants, because of different syntax of URIs and constants 17:57:24 <bmotik> I agree with michael here 17:57:27 <bmotik> completely 17:57:29 <alanr> good. 17:57:38 <bijan> Yep. It's syntactically impossible, yes? 17:57:54 <bijan> Spelt differntly 17:58:00 <bijan> Pun requries same spelling 17:58:08 <alanr> Looks like a mistake in the issue submission 17:59:13 <alanr> class/properties has no interesting inference 18:00:20 <alanr> Actually, perhaps this approach would work over email. 18:00:49 <Carsten> Have to leave, bye. 18:00:59 <bijan> Interesting inferences aren't the only issue. It's useful in some cases to keep both forms in the same document instead of syntactically forbidding them. 18:01:00 <alanr> Would like a definitive list of what it is possible to pun. Could someone email this? 18:01:00 <Zakim> -Carsten 18:01:48 <alanr> +1 18:01:56 <alanr> zakim, unmute me 18:01:56 <Zakim> alanr was not muted, alanr 18:01:57 <rob> zakim, mute me 18:01:57 <Zakim> rob should now be muted 18:02:08 <christine> christine: also would like to se UC stemming from *real* appli not eagle 18:02:16 <m_schnei> rob: we can find a usecase for every form of punning 18:02:30 <rob> I've been searching the web site and can't find them... 18:02:43 <m_schnei> alanr: would like to see a list of all possible punning combinations 18:02:49 <MarkusK> Some use cases for punning are already at; maybe more can be added there 18:02:52 <msmith> IIRC, Evan has stated use cases for class/property more than once 18:03:12 <m_schnei> alanr: there is no entailment for class property punning 18:03:15 <MarkusK> Oops, better URL: 18:03:18 <bijan> rob, the general use case is not to reject intelligible rdf graphs 18:03:22 <bijan> (my general use cases) 18:03:42 <rob> true---there are use cases on the site. 18:03:48 <msmith> yes, the class property use cases 18:04:07 <bmotik> I'm already writing an e-mail 18:04:14 <m_schnei> Ian: let's take this offline, and try to clarify the usecases for all the different kinds of punning 18:04:25 <bijan> Because it's work? 18:04:26 <rob> (as a newbie, I've got to say I see the burden on Alan to identify his problems with the current use cases) 18:04:58 <alanr> zakim, unmute me 18:04:58 <Zakim> alanr was not muted, alanr 18:05:00 <m_schnei> boris: I will send a mail 18:05:02 <alanr> zakim, mute me 18:05:02 <Zakim> alanr should now be muted 18:05:15 <alanr> yes 18:05:29 <alanr> zakim, unmute me 18:05:29 <Zakim> alanr should no longer be muted 18:05:31 <m_schnei> IanH: alanr, can you take over chair? 
i have to go 18:05:56 <m_schnei> alanr: ok, but technical problems might arise with my phone connection 18:06:00 <m_schnei> IanH: Rinke as chair backup if alanr's connection go's down 18:06:11 <Rinke> me? sure 18:06:33 <bijan> BTW, I object to the characterization that my point was a matter of haphazard langauge design. My point is in part about burden a proof: you need a convincing argument to get people to stop supporting such punning 18:06:42 <bijan> zakim, unmute me 18:06:42 <Zakim> bijan was not muted, bijan 18:07:20 <m_schnei> Topic: General Discussion 18:07:20 <m_schnei> Subtopic: Rich Annotations 18:07:57 <alanr> very interested in rich annotations 18:08:14 <Rinke> me too 18:08:17 <m_schnei> bijan: we have this proposal to let people put annotations into a separate domain 18:08:59 <alanr> q+ to ask whether single annotation space/ serialized as one separate file is a useful extension 18:09:21 <m_schnei> bijan: in OWL 1 you could put annotations into a different document to have them separated 18:09:51 <alanr> q+ to ask, are you thinking about how such annotations can be queried within SPARQL - or how important this would be 18:10:30 <alanr> q+ to ask whether current question of annotations on annotations comes for free in this proposal 18:10:35 <m_schnei> bijan: we get a lot requests to have a DC ontology, but one either have to pun or make those annotations meaningless 18:11:08 <Zakim> bijan: queried within SPARQL - or how important this would be and to ask whether current question of annotations on annotations comes for free in this proposal 18:12:10 <m_schnei> alanr: (to bijan) why not have annotations in different documents? 18:12:56 <m_schnei> bijan: that's a design option, but some people don't like to work with several files 18:13:53 <m_schnei> alanr: second question about annotations on annotations 18:14:05 <m_schnei> alanr: would this be problematic? 18:15:19 <m_schnei> bijan: my current syntax doesn't allows this, but it would be an easy extension. 18:15:39 <m_schnei> alanr: third question about SPARQL, doesn't look operable there 18:16:44 <m_schnei> bijan: parser preprocessor should handle this 18:18:13 <m_schnei> alanr: strawpoll, whether bijan's approach or simply use multiple documents? 18:18:49 <IanH> Got to go -- bye 18:18:53 <Zakim> -IanH 18:19:27 <JeffP> bye 18:19:32 <Rinke> q+ to ask about the RDF problem 18:19:38 <Rinke> zakim, unmute me 18:19:38 <Zakim> Rinke should no longer be muted 18:20:21 <m_schnei> rinke: question, if there are specific problems with the RDF serialization? 18:20:54 <m_schnei> bijan: we use reification, because there is no other support in RDF 18:21:21 <Rinke> zakim, mute me 18:21:21 <Zakim> Rinke should now be muted 18:22:05 <bijan> I'm indifferent 18:22:06 <ewallace> Don't understand the question 18:22:08 <m_schnei> alanr: asks for strawpoll whether special annotation layer approach is wanted 18:22:12 <alanr> STRAWPOLL: Serializing rich annotation to separate files (for RDF) OK? Not OK? 18:22:09 <msmith> +0 1 or several files is acceptable if it works 18:22:22 <ewallace> multiple files would be o.k. 18:22:26 <alanr> ok 18:22:33 <Rinke> +0.5 no objection myself 18:22:34 <msmith> ok 18:22:41 <m_schnei> m_schnei: +.025 to separate files (but I did not think about this) 18:22:55 <MartinD> +epsilon 18:23:02 <JeffP> 0 18:23:07 <baojie> 0 18:23:13 <rob> -0 18:23:14 <MarkusK> 0 if it works, how would we specify the location of the other file (sound like ontology import ...)? 
18:23:16 <bmotik> 0 18:23:27 <bcuencagrau> I am not sure if I understand completely 18:23:28 <Rinke> good point MarkusK 18:23:52 <Rinke> perhaps we should have sth. as owl:importAnnotation 18:24:17 <MartinD> +1 to rinke's idea... 18:24:36 <m_schnei> Bijan: suggests to send a proposal 18:25:27 <alanr> action: alan to draft sketch of how to serialize rdf annotation spaces - separate files. 18:25:27 <trackbot> Created ACTION-166 - Draft sketch of how to serialize rdf annotation spaces - separate files. [on Alan Ruttenberg - due 2008-07-16]. 18:25:47 <m_schnei> Subtopic: N-Ary Datatypes 18:25:48 <alanr> q+ to ask about progress with mockup in racer 18:25:51 <m_schnei> Bijan: i think there is a point with conformance 18:26:27 <m_schnei> Bijan: some people want linear equations 18:26:37 <msmith> Indeed, I'd like to see linear ineq in Pellet 18:26:52 <alanr> mike, do you have a use case you could document? 18:27:57 <msmith> alanr, I think some of the cases on motivate linear inequations 18:28:23 <m_schnei> AlanR: Any questions to Bijan? No? 18:28:24 <Zakim> -bijan 18:28:26 <alanr> adjourned 18:28:26 <Zakim> -Evan_Wallace 18:28:26 <Zakim> -msmith 18:28:28 <Zakim> -bmotik 18:28:29 <christine> bye 18:28:29 <Rinke> thanks, bye 18:28:31 <Zakim> -bcuencagrau 18:28:32 <Zakim> -MarkusK 18:28:33 <Zakim> -rob 18:28:35 <Zakim> -MartinD 18:28:42 <Zakim> -Rinke 18:28:53 <Zakim> -baojie 18:29:13 <alanr> alanr has joined #owl 18:29:14 <Zakim> -??P11 18:29:35 <m_schnei> rrsagent, bye 18:29:35 <RRSAgent> recorded in
https://www.w3.org/2007/OWL/wiki/Chatlog_2008-07-09
CC-MAIN-2016-36
refinedweb
5,266
59
Agenda: how to link evaluation to document: Interleave evaluation results in source? Put evaluation in separate file and point into document via XPath? Via Line Number? Via added pseudo-id attribute? MC The XPath pointing into would be preferred, the catch is that HTML does not require the tree nesting structure is required for using XPath. Therefore, can't use unless tool already modifying file to be more XHTML/XML like. WL Why not just Tidy it up? MC Depends on if the tool is performing a Tidy utility. LK If do have Tidy file and the XML points to the Tidied file. Need to tie the Tidied file to the original. MC If have sophisticated enough Tidy tool, could do that. WL Doesn't Gerald's tool do that? MC Don't think so. Have to look at it. WL You can get parse view, all of the errors. WC You mean the W3C HTML Validator. LK Does it XHTML-ize something? WL Doesn't change them, but gives you the errors. MC Therefore 2 steps: tidy or not, if using XPath how is it pointing to it. One suggestion, add an id attribute, another is to point into it by position in doc tree. id is robust across instances. by position is not as robust because if document modified then tree positions may change. using id's involves rewriting code. it's less obnoxious to add an id attribute rather than custom elements. i'm interested in non-tool specific code. WL Philosophical decision to Tidy code before we tool it? MC Yes, except it's probably not a decision we will want to make. Let the individual toolmakers decide that. LK It's the tool's business, but how do you inform the user? CR If the document is contained w/in the evaluation doc so that it can be recreated as it was initially. Then you can pull out just the evaluation. MC If it will be saved with those extra elements, I'm not sure that's what we want to do. LK In PA it's a state regulation that state sites follow the W3C. We have to have verification of that. WL It's not warrantless, in general you don't go on the web and modify their site. You're just annotating it. Unquestionably, it is best to Tidy it. MC My response is always, "I wouldn't use that tool." I hate having my code messed with. I would love a tool to add the "alt" text, I don't want to see all this other stuff added to it. LK If you like to deal with the HTML, if new id's added then you can search to them, if they change other pieces of tags, or removed closing tags, ... There is one class of errors that we can't mark this way, "invalid HTML." You can't document invalid HTML after you've transformed it. DB The FrontPage folks took pains to not modify code. WL This is off on the side. DB That's what they would prefer. MC Exception that using XPath requires changing the code if not XML. WL Could be separate. MC If you repair something you will need a pointer to the right place to do the repair. with XPath would have the pointer, but only to the modified file. WL Here is ".original" and then ".copy" and that's what you modify. can then have a side by side comparison. MC The original won't have repairs made to it. WL Once you've gone through what you object, then you can say, "go ahead and change it." DB It's fine if they are accepting it. "I accept this change." that's ok (by user "o.k.") LK If the user accepts change to XHTML. What if you have a user who doesn't want it, then say sorry, can't use the tool? WL Like grammar checker, don't have to spell a word like they request. MC Double or nothing. either XML-ize and repair it for accessibility or do neither. 
The people who want to do one but not the other are not served by the tool. WL If you want to change to XML, tool to do that. If want to do some accessibility fixes without doing that. WC Not exactly all or nothing. Should depend on the author. You can do the evaluation but you don't have to modify the file or save the state info. LK All alt-tags exist and correct, then someone changes the document tree. I've lost my alt-text checks. Better to have it be as robust as possible under editing of the pages and simple-id's does that. WC If order changes but still have 5 images and file names are the same, then could you assume that the order change does not affect context the images are used in? Yes, should be as robust as possible but not everyone is going to accept that. WL We must always consider "cry wolf." If we don't leave options. MC We've come up with 2 general approaches: XPath using id or position, wrap XML tags in pieces of source. LK Additional id attribute. MC I aggregated that into XPath approach. WC Two axes: XPath or wrap, separate set of files or not. LK If you have those two set up in such a way that you can mechanically convert from one to another, then we're just talking about implementation. Can we map between them? MC Agree can map between, in the XML approach mapping to the inline approach is harder. Another axis: one session or between sessions. LK Important axis. MC Xpath approach easier for cross sessions w/out custom markup in page, tradeoff if having XML-ized the original code. LK Major dependency seems to be: who is doing the evaluation. In PA I can't require sites to stick the ID tags in there. On Monday, I make judgements, then they make changes, then I have to judge again. I want something that is robust under changes. That's if I'm an outsider. If it's my code, I would be most happy sticking in id's. Different users are best served with different methods. MC I've had this discussion, part of the cost of this tool is that certain things be changed. The whole XPath approach only works for structural features. Those that work for content don't work. Chris' approach may work best here. With XPath all can do is point at the paragraph can't point to features. With Chris' could mark, "this is a front-loaded sentence." LK You can't point to the 3rd word of the 2nd paragraph? e.g. you can't point to CDATA? Even saying that language changes, no way to point to that word. MC Not unless there is a "span" element on that phrase. LK Want to ask for an extension to XPath? MC I assume someone working on. WC At least it will point to that paragraph and can then say "these 3 words" and the person can look for those words. LK If you have something annotated (like chris did) it seems you could process that to create separate document with XPath pointers as close as they can get. WC Combining the approaches? LK WC Al had good questions about Chris' approach, overlap of tool assertion and original HTML. MC Could say, "violation type1 = longdesc, type 2 = alt, etc." reduce nesting of error elements. use RDF. LK Or you could nest them. Wrap error 1 around it, then error 2. Wouldn't matter how nest. Would be irrelevant semantically. MC I'm seeing LK move towards saying "these 2 are equivalent" the trade offs are different. But there are trade-offs to either one. Source code will be modified. This is against the way current tools are working. There is no comment format, they do things internally. There is no trade-off in terms of modifying source. 
However, then can't share data with other tools. That is a broader description of what we're talking about. CR Can we share the evaluation data with other tools? Do we want to come up with a common format? One tool evaluate another repair? LK Or 2 do the evaluation. MC And a 3rd do the repair. LK They may duplicate or compliment each other. Could get a sum of evaluations. MC We're wanting to answer, "yes" we can share data. trade off of modifying source in some way. If assume perfect world (everything in XHTML) be a non-issue. This be the case in 5 years? Perhaps be leading edge and adopt that viewpoint. LK Even with XHTML we can't point into text strings and if the tree gets messed around we could add additional heuristics, but does that cover it? WC Look at spectrum, the ideal (nested elements and XPath) to nothing stored at all. Figure out how it breaks down based on user preference. LK If each element has a unique id, can XPath point into that element. MC Yes. LK Use XPath and say, "authors, if you voluntarily put these ids in your source and here is a tool that will do it for you, then you will have these additional features available when you use these other tools." We stick with XPath and we give authors the option. WC Good to me. MC That's how markup languages will be evolving anyway. Unique ids will be necessary. There may be any number of transforms it may have to go through. Adding inline elements is an extension of this approach. WL How reduce human readability? Can you hide them? MC Will be some gobbledy-gook. If looking for things, not sure you can hide. WC Would be cool to see Bobby, A-prompt, and WAVE implement this and begin sharing data between them to see how works and what issues are. MC We could prototype. I can't promise to get it done in the next few weeks, perhaps months. LK Yes. The way the WAVE works now, it does something similar to put in extra tags in namespace. It visually marks up the original markup. It wraps things in spans and sticking in extra tags. If it were putting in abstract tags the output would be output ala Chris or Al (if i stuck in colons to make namespaces). WC CR using suggested file format already? CR We have a working version that we've been testing. LK I could take that as an input and wind up with WAVE symbols. CR Once you have the evaluation several ways you can present it. We've been thinking of presentation modules: pie chart, graph, etc. Then allow the user to select. WC What about the other way, WAVE into A-prompt. LK It's like screen scraping. An icon comes on, could do something that works backwards into a-prompt. Would be better for WAVE to internally create an annotated file then convert into WAVE-like presentation. WC CR how do you feel about using XPath. MC I learned it from the W3C spec. Have heard that xml.com has good stuff. WC Also noticed new articles at webreview.com Action CR: review XPath and XSLT documentation to see how/if a-prompt could use XPath and XSLT. CR Very concerned about changing code and authors not using tools if code is modified. With embedded version, could use XSLT to produce a good report. Can you use XSLT with XPath. MC Yes. XPath is an integral part of XSLT. CR We can agree that working on this standard is a good thing. MC Easier for us to do the export rather than import. Sounds like WAVE be the candidate to do initial import. Action DB I'll try to get Lisa to talk to people in FrontPage and get feedback from them. 
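To make the two pointing styles concrete, a small illustration (the markup, the id, and the expressions are invented for this example, not taken from any of the tools discussed). Suppose a checker wants to flag an image that has no alt text:

  <body>
    <p>First paragraph.</p>
    <p id="intro">Second paragraph, with an image: <img src="logo.gif"></p>
  </body>

Pointing by position (brittle if the tree is later edited):  /html/body/p[2]/img[1]
Pointing by an added id (robust as long as the id survives): //p[@id='intro']/img[1]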
Resolved: We will prototype sharing of evaluation and repair data between WAVE, A-prompt, and Bobby. MC Generic and non-technology specific and therefore not be implemented by the tool. Have to take techniques and line up against those rather than guidelines themselves. WL In principal can say that the checkpoints are machine-checkable. Do they have the potential? MC That sounds possible, although looking at the specific techniques will inform that better. WL The crafting of techniques may be affected by the notion of if they can be machine-checkable. LK In addition to deciding if they are machine-testable, if not, there are ways to write a page so its easier for human judgement. If I look at a page and different versions created by style sheets I can tell that the various versions are the same. If one graphical the other not I have to use eyeballs. So, not just focus on machine checkable but how easy it is for humans to check as well. WL The current 2.1 says "use markup according to spec" is machine-checkable. LK Depends on what "spec" means." if "passes the DTD" then yes, if "blockquote should not be used as quotes" then machine can't tell that. WC Machine checkable: syntax, human: semantics. /* discussion of classes and styles */ WL A great many of the 25 checkpoints can be tested by machine. A bit more-so than WCAG 1.0 because being more general....do not use mechanisms that interfere with navigation...bullets help this refresh, frames, etc. LK Wearing our ER hats, should we recommend "do not use techniques that will make evaluation more difficult." or a meta-principal. WC How test? what does that mean? LK specific techniques: you have a site that is accessible via an alternative site. If it is the same but uses style sheets, easy to determine. however if two generated by cgi scripts one produces images of text the other uses text. That's hard to check. We identify practices that make it hard to evaluate. WC Device independent authoring workshop was all about that. Can't exclude database. LK If architect that so that presentation is one module at the end, then easier to verify. But if no modules in common, then hard to verify. WL The original thing that LK proposed is a meta-guideline that the WCAG WG should keep in mind as they develop WCAG 2.0. WCAG 2.0 Requirements document Macromedia will be there. Macromedia accessibility. Someone from CAST. No one from ATRC. $Date: 2000/11/08 08:17:28 $ Wendy Chisholm
http://www.w3.org/WAI/ER/IG/2000/10/30-minutes
CC-MAIN-2016-36
refinedweb
2,451
77.64
Unit testing is a testing method where you test a "unit" of code to check whether it works the way you intend. In Xcode, the XCTest framework is used to perform unit tests. A unit test is a function whose name starts with the lowercase word "test"; it must be a method of a subclass of XCTestCase, it takes no parameters, and it returns no value. Inside it you write assertions that check the code behaves as expected.

How to set up unit testing

Because unit tests run under a unit testing target, you have to add that target before you can use them. You can include a "Unit Testing Bundle" in an Xcode project in two ways: tick the "Include Unit Tests" checkbox when you create the project, or add one later via File > New > Target and choose "Unit Testing Bundle". After setup, Xcode generates a new subclass of XCTestCase inside the test folder, which you can find in the Project navigator.

import XCTest

class XCArticleTests: XCTestCase {

    override func setUp() {
        // called before each test method; put setup code here
    }

    override func tearDown() {
        // called after each test method; put cleanup code here
    }

    func testExample() {
        // add a test case here
        // use XCTAssert and friends to test the code
    }
}

This example defines XCArticleTests, a subclass of XCTestCase. It has three methods: setUp() for initial setup, tearDown() to perform cleanup after each test, and a test method called testExample that performs the tests.

How to write unit tests

Define a new extension on Int with a function called cubed which returns the cube of the number.

extension Int {
    func cubed() -> Int {
        return self * self * self
    }
}

Define a new XCTestCase subclass, CubeNumberTests, with a method named testCubeNumber(). This method creates two constants, number and cubeNumber, and checks that cubeNumber equals 125.

class CubeNumberTests: XCTestCase {
    func testCubeNumber() {
        let number = 5
        let cubeNumber = number.cubed()
        XCTAssertEqual(cubeNumber, 125)
    }
}

Click the gray diamond button in the gutter next to the test method. The diamond turns green if the test passes; otherwise Xcode shows a failure message. The test above succeeds because the cube of 5 is 125.

Conclusion

Unit testing is a necessary skill if you want to be a good developer. It looks hard in the beginning, but it gets easier the more you use it. Don't scatter unit tests over every corner of your application just for the sake of it; that becomes confusing and time-consuming. Write them where they are genuinely needed.
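Building on the cubed() extension above, a couple of further examples: one checking a negative-number edge case and one using XCTest's measure block for a rough performance baseline. They assume the extension is compiled into, or imported by, the test target.

import XCTest

class CubeNumberEdgeCaseTests: XCTestCase {

    func testCubeOfNegativeNumber() {
        // (-3) * (-3) * (-3) should be -27
        XCTAssertEqual((-3).cubed(), -27)
    }

    func testCubePerformance() {
        // measure { } runs the closure several times and reports the average time
        measure {
            for n in 1...1_000 {
                _ = n.cubed()
            }
        }
    }
}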
https://morioh.com/p/af875c938990
CC-MAIN-2021-43
refinedweb
560
60.14
FCA - UAE, Federal Customs Authority - United Arab Emirates

In a press release, the Authority stated that direct non-oil foreign trade accounted for 63% of the UAE's total non-oil foreign trade last year, amounting to Dhs. 1.025 trillion, while the non-oil foreign trade of the free zones in the emirates accounted for 36%, with a total value of Dhs. 592.4 billion, followed by Dhs. 11 billion, or 1%, for the customs warehouses.

His Excellency Ali Saeed Matar Al Neyadi, Customs Commissioner and Chairman of the Authority, said that UAE non-oil foreign trade saw positive and significant developments in that year, the most important being the continued growth and stability of UAE foreign trade with the rest of the world, which strengthened the UAE's position as a strategic commercial gateway for the states of the region.

His Excellency stated that the value of imports declined markedly during the year, by 4.2%, to reach Dhs. 938 billion compared with Dhs. 979 billion the previous year. At the same time, growth continued in exports at a rate of 1.8% during the year, in which the re-export value reached Dhs. 478.4 billion compared with Dhs. 470.1 billion the year before. This reflected the national economy's ability to achieve a remarkable improvement in the trade balance with the international economic groups and regions.

His Excellency drew attention to the considerable growth of UAE non-oil foreign trade with the Arab states last year. The share of the Arab states in the UAE's total non-oil foreign trade rose on average from 19% in the previous year to 21% during the last year, with the value of trade exchange with those states increasing to Dhs. 341.2 billion: imports amounted to Dhs. 77.3 billion and exports to Dhs. 93.8 billion, while re-exports amounted to Dhs. 170.1 billion, which means the UAE runs a large surplus in its trade balance with the Arab states.

His Excellency, the Chairman of the Authority, stated that UAE national industry managed to consolidate its position in the international market during the year, especially in the gold and aluminum trade. He explained that the value of UAE exports of raw and half-finished gold increased to Dhs. 53.4 billion during the year, while exports of raw aluminum amounted to Dhs. 18.6 billion.

GCC States

As to UAE trade with the GCC states, the Authority's statistical data indicated that the UAE achieved a significant surplus in its trade balance with the GCC states during the year, owing to the increase in the value of UAE exports and re-exports to those states compared with their imports into the UAE. The Authority stated that the share of the GCC states increased to 14% of the UAE's total non-oil trade in 2018, with a value of Dhs. 220.9 billion. Of that amount, Dhs. 56.5 billion represented imports and Dhs. 65.8 billion exports, while re-exports reached Dhs. 98.6 billion. The Authority stated that UAE trade with the Kingdom of Saudi Arabia amounted to Dhs. 107.4 billion in 2018, which gave the Kingdom almost half of UAE trade with the GCC states (49%), followed by the Sultanate of Oman with Dhs. 46 billion, at a rate of 21%, then Kuwait for Dhs.
39.2 billion at rate of 18% and Kingdom of Bahrain at the rate of 13% for a total value of Dhs. 28.3 billions. Trade Partners His Excellency Ali Saeed Al Neyadi, said that the map of UAE Non-oil foreign trade with the International economic regions remained stable during the last year as used to be, maintaining balanced trade relationship with the trade and strategic partners. In this respect the initial statistics of the Authority indicate that Asia and Pacific Ocean region came on the top of the Trade Partners' List in 2018, acquiring 39.3% of the UAE total Non-oil trade with the countries of the world by a share equal to Dhs. 603.2 billion while Europe has occupied the second place with a share equal to Dhs. 344.4 billion at the rate of 22.4% in general. The value of the Share of the Middle East and North Africa increased to Dhs. 332.1 billion at the rate of 21.6%, America and Caribbean for the value of Dhs. 136.5 billion equal to 8.9% of the total, East and North Africa for the value of Dhs. 64.9 billion at the rate of 4.2% while West and Central Africa accounted for 54.8 billion equal to 3.6%. The Best Imports and Exports On the other hand the Authority mentioned in its statement that the value of UAE non-oil imports during 2018 amounted to Dhs. 938 billion indicating that the import of raw and half finished gold occupied the first place, among the best imported commodities for the value of Dhs. 111 billion representing 12% of the UAE total imports value during the year. The telephone equipments came in the second place for Dhs. 73.7 billion representing 8%, the automobile imports for the value of Dhs. 57 billion at the rate of 6%, petroleum oils for Dhs. 52.6 billion at the rate of 5.6%, then the gold and jewelry for the value of 50.4 billion equal to 5.4% of the UAE total imports. The Authority said that the value of the UAE non-oil exports amounted to Dhs. 212 billion where the gold exports came in the first place for Dhs. 53.4 billion equal to 25% of the UAE non-oil exports during the year, followed by raw aluminum for Dhs. 18.6 billion at the rate of 9%, Cigarettes for Dhs. 12 billion at the rate 5.6%, gold and jewelry for Dhs. 10.4 billion at the rate of 5% and copper wires for the value of Dhs. 9.2 billion equal to 4.3% of the total value of exports during the year. Re-Export Trade According to the statistical data of the Authority regarding the re-export trade the telephone sets came in the first place as the best re-export commodity of UAE in 2018 for the value of Dhs. 86 billion, at the rate of 18% of the total re-export, followed by the non-composite Diamonds for the value of Dhs. 50.3 billion with a contribution rate of 10.5%, followed by gold and Jewelry for Dhs.7.6 billion equal to 10%, cars for Dhs. 39.7 billion at the rate of 8%, information self processing machines and their units for the value of Dhs.17.6 billion equal to 4% of the total re-export during the year.
https://www.fca.gov.ae/En/News/Pages/News140.aspx
CC-MAIN-2019-43
refinedweb
1,219
72.05
In this section of the Salesforce tutorial, you will be learning about Batch Apex in Salesforce. Batch Apex in Salesforce is an asynchronous execution of Apex code specifically designed to process large amounts of data by dividing it into some batches or chunks. In this section, you will understand Batch Apex in Salesforce in detail. Consider a situation wherein you have to process large amounts of data on a daily basis and have to delete some unused data. It would be difficult to manually do so. This is where batching with Apex Salesforce comes to your rescue. Before going deep into the concept of Batch Apex, let’s see the topics covered in this section: Watch this informative video on Salesforce from Intellipaat: Batch Apex in Salesforce is specially designed for a large number of records, i.e., here, data would be divided into small batches of records, and then it would be evaluated. In other words, the Batch class in Salesforce is specifically designed to process bulk data or records together and have a greater governor limit than the synchronous code. Learn more about Salesforce from this Salesforce Training in New York to get ahead in your career! There are various reasons why Batch Apex is better than normal Apex. Get familiar with the top Salesforce Interview Questions to get a head start in your career! When using a Batch class in Salesforce, the batchable interface needs to be implemented first. It has the following three methods: global void execute(Database.BatchableContext BC, list<sobject<) {} global void finish(Database.BatchableContext BC) {} The Database.Batchable interface needs to implement three methods: public (Database.QueryLocator | Iterable<sObject>) start(Database.BatchableContext bc) {} At the start of a batch Apex job, call the start method to collect the records or objects to pass to the interface method execute. It will return either a Database.QueryLocator object or an iterable that has the objects or records passed to the job. Use the Database.QueryLocator object when using a simple query like SELECT to get the scope of objects in the batch job. If you use a QueryLocator object, the governor limit for the total number of records retrieved by SOQL queries is bypassed. For example, a batch Apex job for the Account object can return a QueryLocator for up to 50 million records in an organization. Another instance is the sharing recalculation for the Contact object. It returns a QueryLocator for all account records in an organization. Iterable can be used to create a complex scope for the batch job as well as for creating a custom process for iterating through the list. If you use it, the governor limit for the total number of retrieved records by SOQL queries still applies. public void execute(Database.BatchableContext BC, list<P>){} The execute method is used to do the required processing for each chunk of data and is called for each batch of records that are passed to it. This method takes: If a Database.QueryLocator is implemented, the returned list should be used. The batches of records are executed in the order in which they are received from the start method. However, the order in which batches of records execute isn’t guaranteed as it depends on various factors. public void finish(Database.BatchableContext BC){} The finish method is ideal for sending confirmation emails or executing post-processing operations and is called after all the batches are processed. Every batch Apex job execution is considered a discrete transaction. 
For example, a batch Apex job with a thousand records executed with no optional scope parameter from Database.executeBatch is considered as five transactions that have 200 records each. Apex governor limits are reset for every transaction. In case the first transaction succeeds but the second one fails, the database updates from the first transaction aren’t rolled back. A reference to a Database.BatchableContext object is needed by all methods in the Database.Batchable interface. This object can be used to track the progress of the batch job. Following is the instance method with the Database.BatchableContext object: The start method can return either a Database.QueryLocator object containing the records to be used in the batch job or an iterable. Following is an example of using Database.QueryLocator. As mentioned above, the start method returns either a Database.QueryLocator object containing the records to be used in the batch job or an iterable. Using an iterable makes it possible to step through the returned items more easily. Following is an example of how to use an iterable in Batch Apex. Batch Apex splits the records into batches and processes them asynchronously, which helps the records to be processed in multiple threads. Without exceeding the processing limit, Batch Apex can run through millions of records. Also, if one batch fails to process successfully, the successful transactions will not be rolled back. Before writing the Batch Apex class, you have to implement the database batchable interface. If you don’t know what an interface is, it’s similar to a class where none of its functions are implemented. However, the signature of each function is there with an empty body. This interface has three main methods that must be implemented: start(), execute(), finish(). Now, let’s go through the steps given below and create a Batch class in Salesforce: Step 1: Search for Apex Salesforce Classes in Setup Step 2: Once you open Salesforce Apex Classes, click on New to create a new Apex Class Step 3: Below is the screen on which you’ll have to write your code The following is the program that updates the account name in the account object with) { } } Executing a Batch Class in Salesforces only takes a few clicks on the Developer Console. So, follow the steps given below to execute the Batch Class you created earlier. We’ve considered the above Batch Class as an example. If you want to execute another file, you can change the name accordingly. Step 4: After writing the code, you’ll have to go to Developer Console and click on Debug and then on Open Execute Anonymous Window. You’ll see the following screen: The basic syntax of the execution code will be: Id <variable_name> = new <variable_name>; database.ExecuteBatch(new <class_name>(), batch_size); Now enter the following code in the box and click on Execute batchAccountUpdate b = new batchAccountUpdate(); database.executeBatch(b); The following will be the output after you click on Execute As you can see, the status Success means that the account details have been updated. To learn in-depth about Workflow in Salesforce, sign up for an industry-based Salesforce Training. The Database.executeBatch method can be used to programmatically begin a batch job. A thing to keep in mind is that when Database.executeBatch is called, Salesforce adds the process to the queue. Based on the availability of the service, the actual execution can be delayed. 
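Putting the pieces above together, a sketch of the batchAccountUpdate class referred to in Step 3 might look like the following; the suffix appended to the account name is an assumption rather than something taken from the tutorial, so adjust it to whatever your org actually needs:

global class batchAccountUpdate implements Database.Batchable<sObject> {

    global Database.QueryLocator start(Database.BatchableContext bc) {
        // Collect the records the job will work on.
        return Database.getQueryLocator('SELECT Id, Name FROM Account');
    }

    global void execute(Database.BatchableContext bc, List<Account> scope) {
        // Process one chunk (by default up to 200 records).
        for (Account a : scope) {
            a.Name = a.Name + ' Updated';   // assumed change; the original suffix is not shown
        }
        update scope;
    }

    global void finish(Database.BatchableContext bc) {
        // Post-processing, e.g. sending a confirmation email.
    }
}

It can then be started from the Execute Anonymous window exactly as shown above with database.executeBatch(new batchAccountUpdate());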
The Database.executeBatch method considers two parameters: The optional parameter should be used when there are multiple operations for each record being passed in and run into governor limits. Limiting the number of records will limit the operations per transaction. This value must be greater than zero. The maximum value of the optional scope parameter of Database.executeBatch can be 2,000 if the start method of the batch class returns a QueryLocator. If the value is set higher than that, Salesforce breaks the records into smaller batches of up to 2,000 records. If an iterable is returned by the start method of the batch class, the scope parameter value will have no upper limit. However, in the case of high numbers, you could run into other limits. The optimal scope sizes are all in factors of 2000, for example, 100, 200, 400, and so on. The Database.executeBatch method returns the AsyncApexJob object ID, which can be used to track the progress of the job. For example: This ID can also be used with the System.abortJob method. Using the Apex flex queue, up to a hundred batch jobs can be submitted. The outcome of Database.executeBatch is: On a side note, if Apex flex queue is not enabled, the batch job is added by the Database.executeBatch to the batch job queue with the status as Queued. If the concurrent limit of active or queued batch jobs has been reached, a LimitException is thrown, and the job isn’t queued. The table below lists all the possible statuses for a batch job: You can schedule your Batch Apex class using the developer console or scheduler. These Batch Classes can then be executed at a particular time. However, you have to write the Apex Class file in order to execute the batchable interface. You can also chain the two or more apex Batch Classes together to execute one job after another. Moreover, you can split an Apex record into batches and schedule the groups of them to execute at a particular time. Below is an example of a Schedulable Apex class interface: Global class apexScheduler implements Schedulable { Global void execute(SchedulableContext sc) { batchAccountUnpdate b=new batchAccountUpdate(); } } Save the above Apex class on your device. Now, go to the Setup>> Apex Classes>> ScheduleApex, browse the SchedulerApex class, and set the execution time. Come to Intellipaat’s Salesforce Community if you have more queries on Salesforce! The System.scheduleBatch method can be used to schedule a batch job to run once at a future time. It takes the following parameters: The optional scope value is to be used when there are several operations for each record being passed in and run into governor limits. Limiting the number of records will limit the operations per transaction and this value must be greater than zero. If a QueryLocator is returned by the start method of the batch class, the optional scope parameter of Database.executeBatch can have 2,000 as the maximum value. If the value is set higher, Salesforce chunks the records into smaller batches of up to 2,000 records. If an iterable is returned by the start method of the batch class, the scope parameter value will have no upper limit. However, in case of a high number, there might be other limits. The optimal scope size is a factor of 2000. The System.scheduleBatch method returns the scheduled job ID (CronTrigger ID). To use a callout in Batch Apex, Database.AllowsCallouts have to be specified in the class definition. Callouts include HTTP requests and methods defined with the web service keyword. 
Following is an example of how to use a callout in Batch Apex: public class SearchAndReplace implements Database.Batchable<sObject>, Database.AllowsCallouts{ } Also, Check out our blog to learn about Test Class in Salesforce! Following are the governor limits as well as other limitations of Batch Apex: If the start method of the batch class returns an iterable, the scope parameter value has no upper limit. If you use a high number, you can, however, run into other limits. The optimal scope size is a factor of 2000. Also, Check out our blog on Salesforce Record Types! The OData adapter controls the paging behavior (client-driven) when Server Driven Pagination is disabled on the external data source. If external object records are added to the external system while a job is running, other records can be processed twice. If external object records are deleted from the external system while a job runs, other records can be skipped. I hope, you had fun learning Batch Apex! In this section, you learned about Salesforce Apex Classes and Batch Apex in Salesforce and you implemented them as well. In the next section of this Salesforce tutorial, you will be learning about Workflow Rules in Salesforce. Get ready for a Salesforce job by going through these Top Salesforce Admin Interview Questions and Answers!
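The SearchAndReplace stub above only declares the interfaces. A fuller sketch of what a callout-enabled batch might look like follows; the endpoint, query, and response handling are placeholders, and a real org would also need the endpoint whitelisted under Remote Site Settings:

public class SearchAndReplace implements Database.Batchable<sObject>, Database.AllowsCallouts {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id, Website FROM Account WHERE Website != null');
    }

    public void execute(Database.BatchableContext bc, List<Account> scope) {
        Http http = new Http();
        for (Account a : scope) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('https://example.com/lookup?domain=' + a.Website);   // hypothetical service
            req.setMethod('GET');
            HttpResponse res = http.send(req);
            // Inspect res.getStatusCode() / res.getBody() and update the record as needed.
        }
    }

    public void finish(Database.BatchableContext bc) {
    }
}

Similarly, System.scheduleBatch can start a batch once at a future time, for example one hour from now; the job name here is arbitrary:

String cronId = System.scheduleBatch(new batchAccountUpdate(), 'One-off account update', 60);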
https://intellipaat.com/blog/tutorial/salesforce-tutorial/salesforce-batch-apex/
CC-MAIN-2022-21
refinedweb
1,979
64.2
Encouragingly Parallel Contents When I am neck deep in hardcore TDD Red-Green-Refactor cycles I am constantly looking for ways to ensure that my feedback loop is as quick as possible. If the testing feedback takes too long I am liable to start daydreaming and I lose my context in the design process (because of course TDD is about design not testing). Often this means that I run a fast, small test suite which is focused on just the change at hand. However, sometimes the refactor step touches a few different areas, and it requires running a more substantial set of tests. How do we minimize the "distractable" time and maximize the design time in these cycles? Encouragingly, well written test suites are (almost) embarrassingly parallel. As a principle each test should be completely independent of the next and thus can be run in any order, on any machine, at any time. Furthermore test suites in the MATLAB® test framework are just arrays of Test objects where each element can be independently run. If you have the Parallel Computing Toolbox™ there exists a variety of ways these tests can be run in parallel. Awesome, let's dig into the details of how this is done. The Test Suite We need to establish what the test suite is as we explore this topic. Of course the test suite can be anything written using the unit test framework. Typically the time taken to execute the tests corresponds to the time taken actually setting up, exercising, verifying, and tearing down the software under test. However, for this example why don't we just add some calls to the pause function in order to mimic a real test? We can create 3 simple tests that we can use to build a demonstrative test suite. Let's use one script-based test with just a couple simple tests: %% The feature should do something pause(rand) % 0-1 seconds %% The feature should do another thing pause(rand); % 0-1 seconds ...a function-based test with a couple tests and a relatively long file fixture function: function tests = aFunctionBasedTest tests = functiontests(localfunctions); function setupOnce(~) % Create a fixture that takes a while to build pause(rand*10); % 0-10 seconds function testSomeFeature(~) pause(rand); % 0-1 seconds function testAnotherFeature(~) pause(rand); % 0-1 seconds ...and finally a class-based test with one simple test and one relatively long system test: classdef aClassBasedTest < matlab.unittest.TestCase methods(Test) function testLongRunningEndToEndWorkflow(~) pause(rand*10); % 0-10 seconds end function testANormalFeature(~) pause(rand); % 0-1 seconds end end end Using these simple dummy tests we can create a large representative suite by just using repmat and concatenation: import matlab.unittest.TestSuite; classSuite = TestSuite.fromFile('aClassBasedTest.m'); fcnSuite = TestSuite.fromFile('aFunctionBasedTest.m'); scriptSuite = TestSuite.fromFile('aScriptBasedTest.m'); suite = [repmat(classSuite, 1, 50), repmat(fcnSuite, 1, 50), repmat(scriptSuite, 1, 50)]; % Let's run this suite serially to see how long it takes: tic; result = run(suite) toc; Running aClassBasedTest .......... .......... .......... .......... .......... .......... .......... .......... .......... .......... Done aClassBasedTest __________ Running aFunctionBasedTest .......... .......... .......... .......... .......... .......... .......... .......... .......... .......... Done aFunctionBasedTest __________ Running aScriptBasedTest .......... .......... .......... .......... .......... .......... .......... .......... .......... .......... 
Done aScriptBasedTest __________ result = 1x300 TestResult array with properties: Name Passed Failed Incomplete Duration Totals: 300 Passed, 0 Failed, 0 Incomplete. 380.6043 seconds testing time. Elapsed time is 385.422383 seconds. Into the Pool Somewhere in the neighborhood of 6 and a half minutes delay time will definitely lose my attention, hopefully to something more productive than cat videos (but no guarantee!). What is the simplest way I can run these in parallel? Let's try a simple parfor loop using a parallel pool containing 16 workers: tic; parfor idx = 1:numel(suite) results(idx) = run(suite(idx)); end results toc; Running aClassBasedTest Running aClassBasedTest Running aFunctionBasedTest Running aClassBasedTest . Done aFunctionBasedTest __________ Running aFunctionBasedTest . Done aClassBasedTest __________ Running aClassBasedTest . Done aClassBasedTest __________ Running aClassBasedTest Running aFunctionBasedTest Running aFunctionBasedTest . Done aFunctionBasedTest __________ Running aFunctionBasedTest Running aClassBasedTest . Done aClassBasedTest __________ Running aClassBasedTest Running aFunctionBasedTest Running aClassBasedTest Running aClassBasedTest . Done aClassBasedTest __________ Running aClassBasedTest . Done aFunctionBasedTest __________ Running aFunctionBasedTest Running aClassBasedTest . Done aClassBasedTest __________ <SNIP: Lengthy output removed to save your scrollwheel finger.> Running aFunctionBasedTest . Done aFunctionBasedTest __________ results = 1x300 TestResult array with properties: Name Passed Failed Incomplete Duration Totals: 300 Passed, 0 Failed, 0 Incomplete. 838.4606 seconds testing time. Elapsed time is 81.866555 seconds. Parallelism FTW! Now we have the suite running on the order of a minute and a half. That is much better time, but it's still not good enough for me. Also, what is the deal with the humongous (and unparsable) output? Note, I spared you from excessive browser scrolling by actually removing (SNIP!) most of the produced output. You can see, however, that each test element got its own start/end lines and different workers all printed their output to the command window without any grouping or understanding of what ran where. Do you see the lines that look like we start aClassBasedTest and finish aFunctionBasedTest? Theres no magic test conversion going on here, were are just getting garbled output from the workers. Another interesting tidbit you can see is that the overall testing time actually increased significantly. This is not explained by the test framework time or the client/worker communication overhead, because the Duration property of TestResult only includes the time taken by the actual test content. What is actually happening here is that the function-based test, which has an expensive setupOnce function, is not enjoying the efficiency benefits of only setting up that fixture once and sharing it across multiple tests. Instead this setupOnce function is executed on every element of the function-based test on every worker. The benefits of sharing the fixture only apply when a MATLAB runs more than one test using that fixture. In this case, we are setting it up for every new Test element that we send to each parallel worker because we are sending each suite element one at a time to the pool. Let's talk next time about how we can improve on this further and tackle these problems. In the meantime, have you used parallelism in your testing workflow? What works for you? 
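One possible fix, sketched here under the assumption of a 16-worker pool (the follow-up post may well take a different route), is to hand each worker a contiguous group of suite elements rather than a single element, so that shared fixtures are set up once per group instead of once per element:

numGroups = 16;   % assumed to match the pool size
edges = round(linspace(0, numel(suite), numGroups + 1));
groups = arrayfun(@(k) suite(edges(k)+1:edges(k+1)), 1:numGroups, ...
    'UniformOutput', false);

groupResults = cell(1, numGroups);
parfor k = 1:numGroups
    % Each worker runs a whole group, so expensive file- and class-level
    % fixtures are shared across the elements within that group.
    groupResults{k} = run(groups{k});
end
results = [groupResults{:}];

Grouping the suite by test class or file before splitting it would preserve even more of the shared-fixture benefit.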
Published with MATLAB® R2014b
https://blogs.mathworks.com/developer/2015/02/20/encouragingly-parallel-part-1/
CC-MAIN-2021-43
refinedweb
1,044
54.83
Here is a listing of C++ interview questions on “Simple String Template” along with answers, explanations and/or solutions: 1. What is a template? a) A template is a formula for creating a generic class b) A template is used to manipulate the class c) A template is used for creating the attributes d) None of the mentioned View Answer Explanation: Templates are used to for creating generic classes to handle different types in single classes. 2. Pick out the correct statement about string template. a) It is used to replace a string b) It is used to replace a string with another string at runtime c) It is used to delete a string d) None of the mentioned View Answer Explanation: Every string template is used to replace the string with another string at runtime. 3. How to declare a template? a) tem b) temp c) template<> d) none of the mentioned View Answer Explanation: template<> syntax is used. An example for calculating max of two ints, floats, doubles, or any other number type where T indicates the type of the parameters passes. template <typename T> T max(T a, T b){ return a > b? a : b; } 4. What is the output of this program? #include <iostream> using namespace std; template <class T> inline T square(T x) { T result; result = x * x; return result; }; template <> string square<string>(string ss) { return (ss+ss); }; int main() { int i = 4, ii; string ww("A"); ii = square<int>(i); cout << i << ii; cout << square<string>(ww) << endl; } a) 416AA b) 164AA c) AA416 d) none of the mentioned View Answer Explanation: In this program, We are using two template to calculate the square and to find the addition. Output: $ g++ tem.cpp $ a.out 416AA 5. What is the output of this program? #include <iostream> using namespace std; template <typename T, typename U> void squareAndPrint(T x, U y) { cout << x << x * x << endl; cout << y << " " << y * y << endl; }; int main() { int ii = 2; float jj = 2.1; squareAndPrint<int, float>(ii, jj); } a) 23 2.1 4.41 b) 24 2.1 4.41 c) 24 2.1 3.41 d) none of the mentioned View Answer Explanation: In this multiple templated types, We are passing two values of different types and producing the result. Output: $ g++ tem1.cpp $ a.out 24 2.1 4.41; } a) 5.5 Hello World b) 5.5 c) Hello World d) None of the mentioned View Answer Explanation: In this program, We are passing the value to the template and printing it in the template. Output: $ g++ tem2.cpp $ a.out 5.5 Hello World 7. How many types of templates are there in c++? a) 1 b) 2 c) 3 d) 4 View Answer Explanation: There are two types of templates. They are function template and class template. 8. Which are done by compiler for templates? a) type-safe b) portability c) code elimination d) all of the mentioned View Answer Explanation: The compiler can determine at compile time whether the type associated with a template definition can perform all of the functions required by that template definition. 9. What may be the name of the parameter that the template should take? a) same as template b) same as class c) same as function d) none of the mentioned View Answer Explanation: None. 10. How many parameters are legal for non-type template? a) 1 b) 2 c) 3 d) 4 View Answer Explanation: The following are legal for non-type template parameters: integral or enumeration type, Pointer to object or pointer to function, Reference to object or reference to function, Pointer to member. Sanfoundry Global Education & Learning Series – C++ Programming Language.
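The program listing for question 6 is not reproduced above. A reconstruction consistent with the options and with the stated answer (the program passes 5.5 and "Hello World" to a function template that prints its argument) could look like this:

#include <iostream>
#include <string>
using namespace std;

template <typename T>
void show(T value)
{
    // Print whatever value was passed in, regardless of its type.
    cout << value << endl;
}

int main()
{
    show(5.5);                  // T deduced as double
    show<string>("Hello World"); // T given explicitly as string
    return 0;
}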
https://www.sanfoundry.com/interview-questions-cpp-simple-string-template/
CC-MAIN-2019-09
refinedweb
620
63.9
Next: graph Invocation, Previous: Multiplotting, Up: graph By default, graph reads datasets in ASCII format. But it can also read datasets in any of three binary formats (single precision floating point, double precision floating point, and integer). These three input formats are specified by the ‘-I d’, ‘-I f’, and ‘-I i’ options, respectively. There are two advantages to using binary data: 1) graph runs significantly faster because the computational overhead for converting data from ASCII to binary is eliminated, and 2) the input files may be significantly smaller. If you have very large datasets, using binary format may reduce storage and runtime costs. For example, you may create a single precision binary dataset as output from a C language program: #include <stdio.h> void write_point (float x, float y) { fwrite(&x, sizeof (float), 1, stdout); fwrite(&y, sizeof (float), 1, stdout); } You may plot data written this way by doing: graph -T ps -I f < binary_datafile > plot.ps The inclusion of multiple datasets within a single binary file is supported. If a binary file contains more than a single dataset, successive datasets should be separated by a single occurrence of the the largest possible number. For single precision datasets this is the quantity FLT_MAX, for double precision datasets it is the quantity DBL_MAX, and for integer datasets it is the quantity INT_MAX. On most machines FLT_MAX is approximately 3.4x10^38, DBL_MAX is approximately 1.8x10^308, and INT_MAX is 2^32-1. If you are reading datasets from more than one file, it is not required that the files be in the same format. For example, graph -T ps -I f binary_datafile -I a ascii_datafile > plot.ps will read binary_datafile in ‘f’ (binary single precision) format, and datafile in ‘a’ (normal ASCII) format. There is currently no support for reading and plotting binary data with error bars. If you have data with error bars, you should supply the data to graph in ASCII, and use the ‘-I e’ option. graph can also read data files in the ASCII `table' format produced by the gnuplot plotting program. For this, you should use the ‘-I g’ option. Such a data file may consist of more than one dataset. To sum up: there are six supported data formats, ‘a’ (normal ASCII), ‘e’ (ASCII with error bars), ‘g’ (the ASCII `table' format produced by gnuplot), ‘f’ (binary single precision), ‘d’ (binary double precision), and ‘i’ (binary integer). Input files may be in any of these six formats.
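To make the dataset separator concrete, the following small C program (the data values are arbitrary) writes two single-precision datasets to standard output, separated by a single FLT_MAX, so that graph -T ps -I f will draw them as two separate curves:

#include <stdio.h>
#include <float.h>

void write_point (float x, float y)
{
  fwrite (&x, sizeof (float), 1, stdout);
  fwrite (&y, sizeof (float), 1, stdout);
}

int main (void)
{
  float separator = FLT_MAX;
  int i;

  for (i = 0; i < 10; i++)        /* first dataset: y = x */
    write_point ((float) i, (float) i);

  fwrite (&separator, sizeof (float), 1, stdout);   /* dataset separator */

  for (i = 0; i < 10; i++)        /* second dataset: y = x * x */
    write_point ((float) i, (float) (i * i));

  return 0;
}

Compiling it and running ./a.out | graph -T ps -I f > plot.ps should then produce a plot containing both datasets.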
http://www.gnu.org/software/plotutils/manual/en/html_node/Data-Formats.html
CC-MAIN-2014-42
refinedweb
415
62.07
: retrieving newly added records from mssql database and display in a jsp retrieving newly added records from mssql database and display in a jsp ... from mssql database table and display those records in a jsp.And i have to delete these 10 records from the jsp and retrieve the next recently added 10 records java - JSP-Interview Questions java hi.. snd some JSP interview Q&A and i wnt the JNI(Java Native Interface) concepts matrial thanks krishna Hi friend, Read more information. JSP - JSP-Interview Questions and footers) should use the jsp:include for content that changes at runtime..., jsp:include tag includes the jsp during runtime, whereas the <%@ include> includes the specified jsp during compilation time. If you modify a jsp JSP - JSP-Interview Questions within a JSP page and Java Servlet. What is the difference between these objects and what would you use them for? Hi Read for more information.-Interview Questions should i make use of declaration tag if i can declare variable in scriptlets tag... in the scriptlet, it is gone to the service method when jsp converted into serv Interview : JSP Interview Questions -2 JSP Interview : JSP Interview Questions -2 Page of the JSP Interview Questions. Question: What is JSP Custom tags? Answer: JSP Custom tags are user defined bean - JSP-Interview Questions bean what is use bean in jsp? Hi Friend, Please visit the following links: Hope JSP Paging issue JSP Paging issue Hi; what should I have to write insted of "SQLCALCFOUND_ROWS" for MS-SQL database Amrit - Java Interview Questions What is Java JSP and Java Servlet What is JSP? and ..What is Servlet in Java? jsp: separate the prsentation and business logic.(custom... manually and for user input we must use swing or awt concepts to add text box Paging or pagination - Development process Paging or pagination 1.how to do paging or pagination in jsp using servlets? I have a 20 records ,i have to show them in 2 pages like pages 1 2... i done this but i m geting starting 10 records correctly but i m unable counter - JSP-Interview Questions using java technology,i explained we can do that by using jsp and servlets... the visited number for the web page, we can use page stroke counts. use that u JSP Paging Example in Datagrid JSP Paging Example in Datagrid  ... to create paging in JSP. Two files are used "paging.jsp" and "... (); 6). Using the Taglib to create paging and show records. Step:1 Jsp - Java Interview Questions Need JSP Interview Questions Hi, I need JSP interview questions.Thanks JSP Paging issue JSP Paging issue Hi; How to display large number of users- account profile with photo can be placed in continuous pages using JSP code . If any one have solution please help me . Amrit; Visit Here Paging Paging How does paging in struts by use oracle database? Thanks in advance JSps - JSP-Interview Questions JSps HI, I want to use scriptlet code into my html:link tag. Is it possible? Kindly help me out. Thanks jsp - JSP-Interview Questions JSP directives list What is JSP directives? Can anyone list the JSP directives java, - JSP-Interview Questions . Use the "Throw" Keyword. throw new MyException(); throws For particular exception may be thrown or to pass a possible exception then we use throws... an exception then we use throw keyword. 
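Several of the questions above ask what to use in place of MySQL's SQL_CALC_FOUND_ROWS when paging against an MS SQL database. One approach, sketched from a servlet or DAO, is shown below; the table and column names are invented, con is assumed to be an open java.sql.Connection, and the OFFSET/FETCH syntax needs SQL Server 2012 or later:

int pageSize = 10;
int page = Integer.parseInt(request.getParameter("page"));   // 1-based page index

String sql = "SELECT id, name, COUNT(*) OVER () AS total_rows "
           + "FROM records ORDER BY created_at DESC "
           + "OFFSET ? ROWS FETCH NEXT ? ROWS ONLY";
PreparedStatement ps = con.prepareStatement(sql);
ps.setInt(1, (page - 1) * pageSize);
ps.setInt(2, pageSize);
ResultSet rs = ps.executeQuery();
// total_rows carries the full count for building the page links;
// iterate rs for the rows of the current page.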
e.g throw new MyException ("can't be divided ContentType - JSP-Interview Questions as " +saveFile); } %> Hi Friend, Use method="POST jsp - JSP-Interview Questions jsp what are the life cycles of jsp and give a brief description Hi friend, The lifecycle of jsp page life cycle of jsp... ----------------------------------------- Read for more information. Thanks Alternative and XSL-FO. The idea behind stxx is to remove the need to use JSP and tag libraries... Struts Alternative Struts is very robust and widely used framework, but there exists the alternative to the struts framework When we change JSP code , how the Servlet is reloaded reflecting the changes without restarting the server datagid with paging using jsp - Ajax datagid with paging using jsp datagrid with paging using ajax and jsp Hi friend, For read more information : Thanks printing records from jsp - JSP-Servlet , For paging in Jsp visit to : Thanks...printing records from jsp Hi Plz tell me how to printing out pages jsp - JSP-Interview Questions jsp i want to know how to take value from user in jsp and not with javascript.help me. Hi Friend, Try it: Enter Name: Thanks jsp - JSP-Interview Questions jsp i have installes tomcat server 5.0.how can i run a jsp.... After that create a jsp file 'hello.jsp' and put it into the 'application... links: For Servlets, Please visit Servlet & Jsp - Java Interview Questions Servlet & Jsp is it possible of communicating from SERVLET to JSP...; Hi Friend, You can also use RequestDispatcher to forward request from servlet to jsp. Servlet2.java: import java.io.*; import javax.servlet. JSP - JSP-Interview Questions : 'pagination.jsp' Pagination of JSP page Roll No Name Marks JSP - JSP-Interview Questions .... this code will develop using jsp only .. And another button i will create Data Redundancy - JSP-Interview Questions ); } Use of Select Box in JSP Select items from select box...Data Redundancy Sorry for disturbing you again but there's redundancy of the selected data on this jsp u gif me. The selected data will appear twice JSP jasper expection - JSP-Interview Questions JSP jasper expection What is JSP jasper expection? Answer: JasperException is a subclass of Exception, you can use the usual Exception.... JasperException is what I get if I mess up the syntax in a JSP page Set Parameter - JSP-Interview Questions Set Parameter Hi, could someone please explain the process of setting parameter in the session from JSP with the help of code? Thanks! Hi,In your JSP page use the Set Tag, and set the scope attribute - JSP-Interview Questions jsp interview Question - JSP-Interview Questions jsp interview Question What are taglibraries in jsp Hi Friend, Please visit the following link: Hope that it will be helpful for you. Thanks Passing array to jsp - JSP-Interview Questions Passing array to jsp Hi, I've a page with multiple check boxes, i can select multiple boxes at a time. On submit, i wish those all records to be populated in next screen. Pls suggest how can i achieve this. Thanks additinal info - JSP-Interview Questions questions. Regards, Hi Friend, You can use ArrayList class... +" "+grade); } } catch(Exception e){} } } Now you can use the variable id java script - JSP-Interview Questions java script i want that my registration page shud be get poped up when i will clik a on a link on my login page....how can i do it using java script or i shuld use html Debugging in jsp? - JSP-Interview Questions Debugging in jsp? 
Hi Friends, am newbie to jsp.How to debug error in jsp JSF - JSP-Interview Questions JSF How to embedded PDF in JSF page(jsp file created java - JSP-Interview Questions () and forward() methods? Hi JSP forward action transfers the control... file, another JSP file, or a servlet. It should be noted that the target file must be in the same application context as the forwarding JSP file JSP Paging Example in Datagrid - JSP-Servlet JSP Paging Example in Datagrid Hi, I have tested JSP Paging Example... it successfully. When i try... on the url is customizable or not if yes java - JSP-Interview Questions java 1. why implicit object "Exception" is difference from other implicit objects? 2. what is the meaning of exception page & exception in jsp directive Weblogic Portal - JSP-Interview Questions Weblogic Portal Hi, Can any please give me the details of 1) Weblogic portal interview questions & answers ? 2) Weblogic portal learning step by step websites? Thanks for your help in advance Scriptless Jsp - JSP-Interview Questions Scriptless Jsp Hi Deepak, Can we create scriptless jsp, if so explain me how, with advantages. can we access database by using javascript only. Thank u in advance JSP - Java Interview Questions a value to be reused in a single JSP page. The default scope is application."You must practice on JSP. Good Luck ArrayList - JSP-Interview Questions ); when I code this like in my jsp <%ArrayList<Integer> data= new... or not. It seems that values are not getting from jsp to servlet. Thanks java - JSP-Interview Questions . These are all fairly fundamental questions, try purchasing any introduction to Java java - JSP-Interview Questions javascript - JSP-Interview Questions estjs - JSP-Interview Questions Struts - JSP-Interview Questions tomcat - JSP-Interview Questions uninvalidateble (infinite) session - JSP-Interview Questions uninvalidateble (infinite) session Hello. I have a problem with HTTPSession. Here is a client and JSP page. Every 5 seconds client requires... how to do it?? I use web server 'Weblogic' (Excuse me for my Inglish HOW TO USE REQUEST DISPATCHER - Servlet Interview Questions HOW TO USE REQUEST DISPATCHER PLEASE USE A BUSINESS CODES TO EXPLAIN HOW TO USE REQUEST DISPATCHER, SESSION MANAGEMENT AND URL REWRITING. AND USE MSSQL DATABASE TO SAVE THE DATA THANKS FOR YOUR SOLUTION IN ADVANCE Jsp/Servlet - Servlet Interview Questions Jsp/Servlet How can we prvent duplicate transaction in web using servlet or jsp JSP Interview Questions JSP Interview Questions  ...? Answer: JSP actions are XML tags that direct the server to use existing components...; tag is used to use any java object in the jsp page. Here are the scope carriage return with javascript - JSP-Interview Questions carriage return with javascript Dear, Please in one webpage, I need a carriage return in javsacript code \r. I use it BUT no effects. Can you help me. Regards Model View Architecture - JSP-Interview Questions Model View Architecture Describe the architectural overview of Model view architecture? Hi friend, Model-view-controller (MVC) is an architectural pattern used in software engineering. Successful use of the pattern
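The opening question (fetch the 10 most recently added rows from MS SQL Server, then the next 10) never gets a concrete answer among these teasers, so here is one rough, hypothetical sketch of the classic TOP ... NOT IN paging idiom for SQL Server. The table and column names (records, id, name, added_on) and the paging scheme are invented for illustration; a JSP or servlet would call something like this and render the rows:

import java.sql.*;

public class RecentRecords {

    // page 0 returns the 10 newest rows, page 1 the next 10, and so on
    public static void printPage(Connection con, int page, int pageSize) throws SQLException {
        String sql =
            "SELECT TOP " + pageSize + " id, name, added_on FROM records " +
            "WHERE id NOT IN (SELECT TOP " + (page * pageSize) + " id FROM records " +
            "                 ORDER BY added_on DESC) " +
            "ORDER BY added_on DESC";
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery(sql);
        while (rs.next()) {
            System.out.println(rs.getInt("id") + " " + rs.getString("name"));
        }
        rs.close();
        st.close();
    }
}

After displaying a page, the application could delete (or flag) those rows and call the method again with the next page number, which matches the behaviour the original question asks for.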
http://www.roseindia.net/tutorialhelp/comment/24414
CC-MAIN-2014-52
refinedweb
1,645
64.1
ec_cache_create

Name

ec_cache_create - Create a cache with max_elts

Synopsis

#include "ec_cache.h"

ec_cache_t * ec_cache_create(unsigned int max_elts,
                             unsigned int max_lifetime,
                             ec_cache_elt_dtor_func dtor);

Description

Create a cache with max_elts.

Note: This is equivalent to calling ec_cache_create2 with a NULL name parameter.

- max_elts
  The maximum number of elements that can be kept in the cache. If that number is exceeded, then the least recently used (LRU) element will be removed from the cache.
- max_lifetime
  Specifies a time-to-live (TTL) in seconds for the cache element. If max_lifetime is not given the value EC_CACHE_LIFETIME_INFINITE, then it specifies a time-to-live in seconds after which the entry will be removed from the cache. If using the cache in per-item-ttl mode, then max_lifetime is actually a number of additional seconds beyond the ttl for which an element will not be removed.
- dtor
  Specifies a function that will be called when the refcount of an item becomes zero. The following typedef applies to this data type: typedef void (*ec_cache_elt_dtor_func)(void *value);.

Returns the address of an ec_cache_t type. The following typedef applies to this data type: typedef struct ec_cache_head ec_cache_t;.

While it is legal to call this function in any thread, it should only be called in the Scheduler thread.
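The reference page gives no usage example, so here is a minimal, hypothetical sketch of creating such a cache. The stored value type (heap-allocated buffers) and the function and variable names other than those documented above are assumptions for illustration only:

#include "ec_cache.h"
#include <stdlib.h>

/* destructor invoked by the cache when an item's refcount reaches zero;
   here we assume the cached values are malloc'ed buffers */
static void my_value_dtor(void *value)
{
  free(value);
}

/* Called from the Scheduler thread, as the documentation requires. */
ec_cache_t *make_session_cache(void)
{
  /* at most 1024 elements, each expiring 300 seconds after insertion */
  return ec_cache_create(1024, 300, my_value_dtor);
}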
https://support.sparkpost.com/momentum/3/3-api/apis-ec-cache-create
CC-MAIN-2022-21
refinedweb
221
55.54
I have a df arranged like follows:

   x    y    z
0  a   jj  Nan
1  b   ii   mm
2  c   kk   nn
3  d   ii  NaN
4  e  Nan   oo
5  f   jj   mm
6  g  Nan   nn

   x    y    z  w
0  a   jj  Nan  a
1  b   ii   mm  a
2  c   kk   nn  c
3  d   ii  NaN  a
4  e  Nan   oo  e
5  f   jj   mm  a
6  g  Nan   nn  c

ii == jj mm

   x    y    z  w
0  a   ii  NaN  a
1  b   ii   mm  a
2  c   jj   mm  a
3  d   jj  Nan  a
4  e   kk   nn  e
5  f  Nan   nn  e
6  g  Nan   oo  g

   x   y   z
0  a  ii  mm
1  b  ii  nn
2  c  jj  nn
3  d  jj  oo
4  e  kk  oo

   0   1   2  w
0  a  ii  mm  a
1  b  ii  mm  a
2  c  jj  nn  c
3  d  jj  nn  c
4  e  kk  oo  e

In the general case this is a set consolidation/connected components problem. While if we assume certain things about your data we can solve a reduced case, it's just a bit of bookkeeping to do the whole thing.

scipy has a connected components function we can use if we do some preparation:

import pandas as pd            # assumed by the original answer
import scipy.sparse
import scipy.sparse.csgraph    # added so the csgraph call below resolves

def via_cc(df_in):
    df = df_in.copy()
    # work with ranked version
    dfr = df[["y", "z"]].rank(method='dense')
    # give nans their own temporary rank
    dfr = dfr.fillna(dfr.max().fillna(0) + dfr.isnull().cumsum(axis=0))
    # don't let x and y get mixed up; have separate nodes per column
    dfr["z"] += dfr["y"].max()
    # build the adjacency matrix
    size = int(dfr.max().max()) + 1
    m = scipy.sparse.coo_matrix(([1] * len(dfr), (dfr.y, dfr.z)), (size, size))
    # do the work to find the groups
    _, cc = scipy.sparse.csgraph.connected_components(m)
    # get the group codes
    group = pd.Series(cc[dfr["y"].astype(int).values], index=dfr.index)
    # fill in w from x appropriately
    df["w"] = df["x"].groupby(group).transform(min)
    return df

which gives me

In [230]: via_cc(df0)
Out[230]:
   x    y    z  w
0  a   jj  NaN  a
1  b   ii   mm  a
2  c   kk   nn  c
3  d   ii  NaN  a
4  e  NaN   oo  e
5  f   jj   mm  a
6  g  NaN   nn  c

In [231]: via_cc(df1)
Out[231]:
   x   y   z  w
0  a  ii  mm  a
1  b  ii  nn  a
2  c  jj  nn  a
3  d  jj  oo  a
4  e  kk  oo  a

If you have a set consolidation recipe around, like the one here, you can simplify some of the above at the cost of an external function.

(Aside: note that in my df0, the "Nan"s are really NaNs. If you have a string "Nan" (note how it's different from NaN), then the code will think it's just another string and will assume that you want all "Nan"s to be in the same group.)
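As the answer notes, a set consolidation helper can replace the sparse-matrix bookkeeping. Purely as an illustration (this is not part of the original answer, and it assumes the optional networkx package is installed), here is a sketch that builds the same groups by linking each row to its y and z values and then taking connected components:

import pandas as pd
import networkx as nx

def via_nx(df_in):
    df = df_in.copy()
    g = nx.Graph()
    for idx, row in df.iterrows():
        g.add_node(("row", idx))                       # one node per row
        if pd.notna(row["y"]):
            g.add_edge(("row", idx), ("y", row["y"]))  # link the row to its y value
        if pd.notna(row["z"]):
            g.add_edge(("row", idx), ("z", row["z"]))  # link the row to its z value
    group = pd.Series(index=df.index, dtype=object)
    for comp in nx.connected_components(g):
        rows = [i for kind, i in comp if kind == "row"]
        group[rows] = df.loc[rows, "x"].min()          # smallest x labels the whole group
    df["w"] = group
    return df

Calling via_nx(df0) should reproduce the same w column as via_cc(df0) above, at the cost of the extra dependency.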
https://codedump.io/share/epFVIZ8hwa8x/1/take-union-of-two-columns-python--pandas
CC-MAIN-2017-04
refinedweb
509
66.51
Void Structure

Indicates a method that does not return a value; that is, the method has the void return type.

For a list of all members of this type, see Void Members.

Inheritance hierarchy:
System.Object
  System.ValueType
    System.Void

[Visual Basic]
<Serializable>
Public Structure Void

[C#]
[Serializable]
public struct Void

[C++]
[Serializable]
public __value struct Void

[JScript] In JScript, you can use the structures in the .NET Framework, but you cannot define your own.

Thread Safety

Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.

Remarks

This structure is used in the System.Reflection namespace. This structure has no members, and you cannot create an instance of this structure.

Requirements

Platforms: Windows 98, Windows NT 4.0, Windows Millennium Edition, Windows 2000, Windows XP Home Edition, Windows XP Professional, Windows Server 2003 family, .NET Compact Framework

Assembly: Mscorlib (in Mscorlib.dll)

See Also

Void Members | System Namespace
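Since the page only states that the structure is used by the System.Reflection namespace, here is a brief illustrative sketch (not part of the MSDN page) of where System.Void actually shows up when you reflect over a method that returns nothing:

using System;
using System.Reflection;

class VoidDemo
{
    static void SayHello() { Console.WriteLine("Hello"); }

    static void Main()
    {
        // Reflection reports a void method's return type as System.Void
        MethodInfo mi = typeof(VoidDemo).GetMethod(
            "SayHello", BindingFlags.NonPublic | BindingFlags.Static);
        Console.WriteLine(mi.ReturnType);                  // prints System.Void
        Console.WriteLine(mi.ReturnType == typeof(void));  // prints True
    }
}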
https://msdn.microsoft.com/en-us/library/system.void(v=vs.71).aspx
CC-MAIN-2015-40
refinedweb
161
58.38
A Python interface for Discount, the C Markdown parser This Python package is a ctypes binding of David Loren’s Discount, a C implementation of John Gruber’s Markdown. Contents Introduction Mark Using the Markdown class import discount mkd = discount.Markdown(sys.stdin) mkd.write_html_content(sys.stdout) Markdown takes one required argument, input_file_or_string, which is either a file object or a string-like object. Note: There are limitations to what kind of file-like objects can be passed to Markdown. File-like objects like StringIO can’t be handled at the C level in the same way as OS file objects like sys.stdin and sys.stdout, or file objects returned by the builtin open() method. Markdown also has methods for getting the output as a string, instead of writing to a file-like object. Let’s look at a modified version of the first example, this time using strings: import discount mkd = discount.Markdown('`test`') print mkd.get_html_content() Currently, Markdown does not manage character encoding, since the Markdown wraps C functions that support any character encoding that is a superset of ASCII. However, when working with unicode objects in Python, you will need to pass them as bytestrings to Markdown, and then convert them back to unicode afterwards. Here is an example of how you could do this: import discount txt = u'\xeb' # a unicode character, an e with an umlaut mkd = discount.Markdown(txt.encode('utf-8')) out = mkd.get_html_content() val = out.decode('utf-8') The Markdown class constructor also takes optional boolean keyword arguments that map to Discount flags compilation flags. - <’s with <. - ignore_pseudo_protocols - Do not process pseudo-protocols. Pandoc header elements can be retrieved with the methods get_pandoc_title(), get_pandoc_author() and get_pandoc_date(). The converted HTML document parts can be retrieved as a string with the get_html_css(), get_html_toc() and get_html_content() methods, or written to a file with the write_html_css(fp), write_html_toc(fp) and write_html_content(fp) methods, where fp is the output file descriptor. Discount provides two hooks for manipulating links while processing markdown. The first lets you rewrite urls specified by []() markup or <link/> tags, and the second lets you add additional HTML attributes on <a/> tags generated by Discount. You can pass callback functions to Markdown’s rewrite_links_func and link_attrs_func keyword arguments respectively. Here is an example of a callback function that adds the domain name to internal pages: def add_basepath(url): if url.startswith('/'): return '' % url md = Markdown( '`[a](/a.html)`', rewrite_links_func=add_basepath ) Here is an example that opens external pages in another window/tab: def add_target_blank(url): if url.startswith('http://'): return 'target="_blank"' md = Markdown( '`[a]()`', link_attrs_func=add_target_blank ) Alternatively, you can attach these callbacks using decorators: md = Markdown('`[a](/a.html)`') @md.rewrite_links def add_basepath(url): # same as above ... md = Markdown('`[a]()`') @md.link_attrs def add_target_blank(url): # same as above ... Under some conditions, the functions in libmarkdown may return integer error codes. These errors are raised as a MarkdownError exceptions when using the Markdown class. Using libmarkdown If you are familiar with using the C library and would rather use Discount library directly, libmarkdown is what you are looking for; it’s simply a thin wrapper around the original C implementation. 
libmarkdown exposes the public functions and flags documented on the Discount homepage. In Python you’ll need to do some extra work preparing Python objects you want to pass to libmarkdown’s functions. Most of these functions accept FILE* and char** types as their arguments, which require some additional ctypes boilerplate. To get a FILE* from a Python file descriptor for use with libmarkdown, use the following pattern: i = ctypes.pythonapi.PyFile_AsFile(sys.stdin) o = ctypes.pythonapi.PyFile_AsFile(sys.stdout) doc = libmarkdown.mkd_in(i, 0) libmarkdown.markdown(doc, o, 0)) For libmarkdown functions to which you pass a char**, use the following pattern: cp = ctypes.c_char_p('') ln = libmarkdown.mkd_document(doc, ctypes.byref(cp)) html_text = cp.value[:ln] It is important to initialize c_char_p with an empty string. Running the test suite Tests are available with the source distibution of discount in the tests.py file. The C shared object should be compiled first: python setup.py build_ext Then you can run the tests: python tests.py Source code and reporting bugs You can obtain the source code and report bugs on GitHub project page. Credits discount is maintained by Tamas Kemenczy, and is funded by Trapeze. The Discount C library is written and maintained by David Loren and contributors. See the AUTHORS file for details. Release History Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
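Returning to the Markdown class documented above: one combination the page documents but never demonstrates is retrieving the table of contents with get_html_toc(). Here is a hedged sketch; the toc keyword argument is an assumption on my part (it should correspond to one of the Discount compilation flags mentioned earlier, so verify the exact flag name against the flag list of your installed version):

import discount

text = """# First section

Some text.

# Second section

More text.
"""

# toc=True is assumed here; check the flag list in your version of discount.
mkd = discount.Markdown(text, toc=True)

print(mkd.get_html_toc())      # the generated table of contents markup
print(mkd.get_html_content())  # the converted document body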
https://pypi.org/project/discount/
CC-MAIN-2018-05
refinedweb
765
58.08
Viewing the available DatasourcesRahul Shinde May 8, 2003 12:17 AM How can I view the available datasources bound in JBoss? I tried the JNDI Browser 1.1 but it shows everything besides the datasources. Thanks, Rahul 1. Re: Viewing the available Datasourcesbkbonner May 8, 2003 7:44 AM (in response to Rahul Shinde) I've been trying to get the JNDI browser for eclipse to connect to NS support over JNP without any luck. did you have to do anything special to get it to work? What JNDI browser are you using? 2. Re: Viewing the available DatasourcesJon Barnett May 8, 2003 8:13 AM (in response to Rahul Shinde) You can browse bindings through the web-based jmx-console provided with JBoss. At the main page, you can select "service=JNDIView", and then invoke the list function to list JNDI definitions that JBoss knows about. You can also look at the ConnectionFactory under jboss.management.local on the main page. Hope that is serviceable enough for your needs. 3. Re: Viewing the available DatasourcesRahul Shinde May 8, 2003 2:21 PM (in response to Rahul Shinde) I am using JNDI Browser 1.0.1 by EJTools.org I am not using Eclipse. I downloaded the file from sourceforge.net and just followed the installation instructions. There were some files I could not find but thats ok. Just added jbossall-client.jar instead of some files mentioned for jboss in the installation steps. Besides that didnt have to do extra to get it working. Hope that helps is you are looking for a non-eclipse solution. 4. Re: Viewing the available DatasourcesDavid Jencks May 8, 2003 8:56 PM (in response to Rahul Shinde) My guess is that this jndi browser is running in a different vm than jboss, so it will not be able to see anything in the java:/ context, including any datasource. You can see them in jndi-view from the jmx console since that is running in the jboss vm. 5. Re: Viewing the available Datasourcesbkbonner May 9, 2003 7:36 AM (in response to Rahul Shinde) jonlee, thanks for the tip. I found it in the 3.0.2 docs on page 132...thanks! 6. Re: Viewing the available Datasourcesbkbonner May 9, 2003 9:11 AM (in response to Rahul Shinde) rshinde, david rshinde, thanks for the reference. I was missing the jbossall-client.jar file. I had mistakenly used jboss.jar (which in hindsight was silly). I also added the jboss-jmx.jar from the lib directory to get the javax/management classes that were needed. David, unfortunately, as you pointed out, I was not able to see anything in the java:/ context since i wasn't running in the same JVM--much like the EJTools software which only allows you to see if you use the Web version. I'm curious if this will be fixed when Remote JMX is available? What is the source of this limitation? Thanks again for both of your suggestions. 7. Re: Viewing the available DatasourcesDavid Jencks May 10, 2003 6:25 AM (in response to Rahul Shinde) Well, I'm not really a jndi expert, but my understanding of the purpose of the java: namespace is that it should not be accessible to other vms. Therefore I would regard anything that allowed direct lookups in the java: namespace from another vm as a critical bug. I would expect this to prevent any tool that uses only jndi protocols and runs in another vm from ever, under any circumstances, being able to see the java: context. 
If you are willing to use non-jndi protocols there is nothing stopping you today from using the jmx rmi connector to get the xml list view from the jndiview mbean and parsing however you see fit for display, or writing something such as an mbean that runs in the jboss vm and produces whatever view of the jndi tree you like.
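To make David's last suggestion concrete, here is a rough, untested sketch of pulling the JNDIView listing over the JMX RMI adaptor from another VM. The adaptor JNDI name, the adaptor interface class, and the MBean ObjectName are the JBoss 3.x defaults as far as I recall, so treat all of them as assumptions to verify against your own configuration:

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.management.ObjectName;
import org.jboss.jmx.adaptor.rmi.RMIAdaptor;   // assumed adaptor interface for JBoss 3.x

public class JndiViewDump {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.PROVIDER_URL, "jnp://localhost:1099");

        // Look up the RMI adaptor that JBoss binds for remote JMX access (name assumed)
        InitialContext ctx = new InitialContext(env);
        RMIAdaptor adaptor = (RMIAdaptor) ctx.lookup("jmx/invoker/RMIAdaptor");

        // Invoke JNDIView.list(true) and print the returned listing
        Object listing = adaptor.invoke(
                new ObjectName("jboss:service=JNDIView"),
                "list",
                new Object[] { Boolean.TRUE },
                new String[] { "boolean" });
        System.out.println(listing);
    }
}

Because this runs inside the JBoss VM only at the point where the MBean executes, it can show the java: context that a plain remote JNDI browser cannot see.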
https://developer.jboss.org/thread/86490
CC-MAIN-2017-51
refinedweb
661
72.97
Recognizing people by their faces in pictures and video feeds is seen everywhere, from social media to phone cameras. A face recognition system matches human faces against a database of digital images. Ultimately, what a computer works with is pixel values ranging from 0 to 255. Face recognition has been studied in computer vision for a long time and has evolved over the years, with researchers proposing many new techniques to identify and tell apart faces efficiently. There are many use cases, such as authentication and verification of users. This article covers the main aspects of a face recognition based attendance system: the challenges faced in face recognition, the face_recognition library, and building the attendance marking system on top of it.

Challenges faced in face recognition:
- Different lighting conditions
- Different poses: there can be images of the same person with different face angles
- Confusion between similar looking people
(Source: OpenCV Wiki)

Face Recognition Library

Face recognition algorithms extract features from a face image, namely the positions of the forehead, eyes, nose, mouth, chin and jaws.

Face Landmarks: there are 68 specific points (called landmarks) that exist on every face. (Source: created by Brandon Amos)

Face Encodings: a 128-dimensional feature vector produced by a network pretrained on millions of images. (Source: face_recognition library documentation)

The last step is to match these encodings with the nearest possible image from a stored database.

Basic Face Matching

First, we get the location of the face in the image using the face_locations() method (which gets the outline of the face) on the RGB image. Then the face encodings (markings of eyes, nose, mouth and jaws, which remain roughly the same across different images of the same person) are taken using the face_encodings() function, which returns a list containing 128 measurements. Both steps are applied to the original and the test image. Then the two returned lists are compared with compare_faces(), which returns a list of boolean values (True or False). The face_distance() function measures how much the two images differ; the lower the distance, the better the match, and vice versa.

import cv2
import face_recognition as fr

imgAng = fr.load_image_file('andrew_ng.jpg')
Test = fr.load_image_file('ian_godfellow.jpg')

fLoc = fr.face_locations(imgAng)[0]
encodeAng = fr.face_encodings(imgAng)[0]
fLocTest = fr.face_locations(Test)[0]
encTest = fr.face_encodings(Test)[0]

result = fr.compare_faces([encodeAng], encTest)
faceDist = fr.face_distance([encodeAng], encTest)
print(result, faceDist)

Sample outputs (a matching pair and a non-matching pair):
[True] [0.36569372]
[False] [0.6898802]

Building Face Attendance System

Now we are ready to build a realtime face attendance system in which webcam frames are matched against the existing database images; when a match is found, the person's name and time of capture are stored in a CSV file called 'Attendance_Register.csv'. The file stores a matched person's details only once; if the same person is recognized again, it will not update.

Set the path to the directory containing the image database, read each image into an images list, and append the filenames (without the extension) into a list called Names.

import os
import numpy as np
from datetime import datetime   # imports needed by the snippets below

pathlib = 'ImagesAttendance'
images = []
Names = []
myList = os.listdir(pathlib)
print(myList)
for cl in myList:
    currImg = cv2.imread(f'{pathlib}/{cl}')
    images.append(currImg)
    Names.append(os.path.splitext(cl)[0])
print(Names)

Finding the face encodings of the images in the database and keeping them in a list to use later with incoming frames:

def DbEncodings(images):
    encList = []
    for image in images:
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        enc = fr.face_encodings(image)[0]
        encList.append(enc)
    return encList

Capturing video frames:

cap = cv2.VideoCapture(0)

Iterating through frames:

while True:
    _, img = cap.read()
    image = cv2.resize(img, (0, 0), None, 0.25, 0.25)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

The same process is followed: first detecting the face locations, then getting the face encoding values.

    facesInFrame = fr.face_locations(image)
    encodesInFrame = fr.face_encodings(image, facesInFrame)

Now the incoming faces are tested against the previously stored encodings, and the face distance is also computed. Lastly, we call the Attendance function with the name of the person who was identified.

    for encFace, faceLoc in zip(encodesInFrame, facesInFrame):
        matchList = fr.compare_faces(encodeKnown, encFace)
        faceDist = fr.face_distance(encodeKnown, encFace)
        match = np.argmin(faceDist)
        if matchList[match]:
            name = Names[match].upper()
            Attendance(name)

Reading from the attendance file and storing the data (name and time of entry) if not previously stored:

def Attendance(name):
    with open('Attendance_Register.csv', 'r+') as f:
        DataList = f.readlines()
        names = []
        for data in DataList:
            ent = data.split(',')
            names.append(ent[0])
        if name not in names:
            curr = datetime.now()
            dt = curr.strftime('%H:%M:%S')
            f.writelines(f'\n{name},{dt}')

Building the known encodings before starting the capture loop:

encodeKnown = DbEncodings(images)
print('Encoding Complete')

OUTPUT

['andrew_ng.jpg', 'ian_goodfellow.jpg', 'Jayita.jpg']
['andrew_ng', 'ian_goodfellow', 'Jayita']
Encoding Complete

Attendance_Register.csv (sample register contents)

Conclusion

The face_recognition library, being a high level deep learning library, helps in identifying faces accurately. We've then used it to build a face attendance system which can be helpful in offices, schools or any other place, reducing manual labour and automatically updating the attendance records in day-to-day life. It also notes down the time of arrival, so information can be acquired about people coming in late after a specified time.

The complete code of the above implementation is uploaded as a notebook.
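One practical gap in the capture loop shown earlier: as written it never displays the frame, never breaks out, and never releases the camera. A possible ending for it (my own addition, not part of the original article) could be:

    # still inside the while True loop, after the attendance check
    cv2.imshow('Webcam', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press 'q' to stop capturing
        break

# after the loop exits
cap.release()
cv2.destroyAllWindows()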
https://analyticsindiamag.com/a-complete-guide-on-building-a-face-attendance-system/
CC-MAIN-2020-45
refinedweb
880
51.04
I’ve decided to skip last year’s Advent of Code edition. Mostly because I didn’t have time, but I also knew that I probably wouldn’t finish it. I’ve never finished any edition. I’m not very good at code katas, and I usually try to brute force them. With AoC, that works for the first ten days, but then the challenges start to get more and more complicated, and adding the @jit decorator to speed up my ugly Python code can only get me so far. But one thing that helped me a lot with the previous editions was to use IPython. Solving those problems incrementally is what actually makes it fun. You start by hard-coding the simple example that comes with each task. Then you try to find a solution for this small-scale problem. You try different things, you wrangle with the input data, and after each step, you see the output, so you know if you are getting closer to solving it or not. Once you manage to solve the simple case, you load the actual input data, and you run it just to find out that there were a few corner cases that you missed. It wouldn’t be fun if I had to use a compiled language and write a full program to see the first results. This year, instead of doing the “Advent of Code,” I’ve decided to do an “Advent of IPython” on Twitter - for 25 days, I’ve shared tips that can help you when you’re solving problems like AoC using IPython. Here is a recap of what you can do. 1. Display the documentation In [1]: import re In [2]: re.findall? Signature: re.findall(pattern, string, flags=0) Docstring:. File: ~/.pyenv/versions/3.9.0/lib/python3.9/re.py Type: function That’s one of my favorite features. You can display the documentation of any function, module, and variable by adding the “?” at the beginning or at the end of it. It’s called “dynamic object introspection,” and I love it because I don’t have to leave the terminal to get the documentation. You can use the built-in help() function to get this information with the standard Python REPL, but I find the “?” much more readable. It highlights the most important information like the signature and the docstring, and it comes with colors (even though you can’t see them here because my syntax highlighting library doesn’t support IPython). 2. Display the source code In [1]: import pandas In [2]: pandas.DataFrame?? Init signature: pandas.DataFrame( data=None, index: Optional[Collection] = None, columns: Optional[Collection] = None, dtype: Union[ForwardRef('ExtensionDtype'), str, numpy.dtype, Type[Union[str, float, int, complex, bool]], NoneType] = None, copy: bool = False, ) Source: class DataFrame(NDFrame): """ Two-dimensional, size-mutable, potentially heterogeneous tabular data. Data structure also contains labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure. Parameters ---------- ... and so on And if you want to see the full source code of a function (or class/module), use two question marks instead ( function_name?? or ??function_name). 3. %edit magic function If you want to write a long function, use the %edit magic command. It will open your favorite editor (or actually the one that you set with the $EDITOR environment variable) where you can edit your code. When you save and close this file, IPython will automatically execute it. I use it with vim, and it works great when I want to write a bit longer function (with vim I have a lightweight linter, and moving around the code is faster). 
It’s a nice middle ground when you are too lazy to switch to your code editor to write the whole code, but at the same time, the function that you are writing is a bit too big to write it comfortably in IPython. 4. Reopen last file with “%edit -p” And speaking of the %edit command, you can run %edit -p to reopen the same file that you edited the last time. This is useful if you made a mistake and you want to fix it without having to type everything again or if you want to add more code to the function that you just wrote. 5. Wildcard search In [1]: import os In [2]: os.*dir*? os.__dir__ os.chdir os.curdir os.fchdir os.listdir os.makedirs os.mkdir os.pardir os.removedirs os.rmdir os.scandir os.supports_dir_fd In [3]: os.chdir("/some/other/dir") If you forget the name of some function, you can combine the dynamic object introspection (the “?”) and a wildcard (the “*”) to perform a wildcard search. For example, I know that the os module has a function to change the current directory, but I don’t remember its name. I can list all the functions from the os module, but I’m sure that a function like this must contain “dir” in its name. So I can limit the search and list all the functions from the os module that contain “dir” in their names. 6. post-mortem debugging In [1]: from solver import solve In [2]: solve() IndexError: list index out of range In [3]: %debug > /Users/switowski/workspace/iac/solver.py(11)count_trees() 9 x = (x + dx) % mod 10 y += dy ---> 11 if values[y][x] == "#": 12 count += 1 13 return count ipdb> Displaying the documentation is one of my favorite features, but post-mortem debugging is my favorite feature. After you get an exception, you can run %debug, and it will start a debugging session for that exception. That’s right! You don’t need to put any breakpoints or run IPython with any special parameters. You just start coding, and if when an exception happens, you run this command to start debugging. 7. Start the debugger automatically In [1]: %pdb Automatic pdb calling has been turned ON In [2]: from solver import solve In [3]: solve() IndexError: list index out of range > /Users/switowski/workspace/iac/solver.py(11)count_trees() 9 x = (x + dx) % mod 10 y += dy ---> 11 if values[y][x] == "#": 12 count += 1 13 return count ipdb> y 1 ipdb> x 3 ipdb> And if you want to start a debugger on every exception automatically, you can run %pdb to enable the automatic debugger. Run %pdb again to disable it. 8. Run shell commands In [1]: !pwd /Users/switowski/workspace/iac In [2]: ls -al total 8 drwxr-xr-x 5 switowski staff 480 Dec 21 17:26 ./ drwxr-xr-x 55 switowski staff 1760 Dec 22 14:47 ../ drwxr-xr-x 9 switowski staff 384 Dec 21 17:27 .git/ drwxr-xr-x 4 switowski staff 160 Jan 25 11:39 __pycache__/ -rw-r--r-- 1 switowski staff 344 Dec 21 17:26 solver.py # Node REPL inside IPython? Sure! In [3]: !node Welcome to Node.js v12.8.0. Type ".help" for more information. > var x = "Hello world" undefined > x 'Hello world' > You can run shell commands without leaving IPython - you just need to prefix it with the exclamation mark. And the most common shell commands like ls, pwd, cd will work even without it (of course, unless you have a Python function with the same name). I use it mostly to move between folders or to move files around. But you can do all sorts of crazy things - including starting a REPL for a different programming language inside IPython. 9. Move around the filesystem with %cd In [1]: !pwd /Users/switowski/workspace/iac/input_files/wrong/folder In [2]: %cd ../.. 
/Users/switowski/workspace/iac/input_files In [3]: %cd right_folder/ /Users/switowski/workspace/iac/input_files/right_folder Alternatively, you can also move around the filesystem using the %cd magic command (press Tab to get the autocompletion for the list of available folders). It comes with some additional features - you can bookmark a folder or move a few folders back in the history (run %cd? to see the list of options). 10. %autoreload Use %autoreload to automatically reload all the imported functions before running them. By default, when you import a function in Python, Python “saves its source code in memory” (ok, that’s not what actually happens, but for illustration purposes, let’s stick with that oversimplification). When you change the source code of that function, Python won’t notice the change, and it will keep using the outdated version. If you are building a function or a module and you want to keep testing the latest version without restarting the IPython (or using the importlib.reload()), you can use the %autoreload magic command. It will always reload the source code before running your functions. If you want to learn more - I wrote a longer article about it. 11. Change the verbosity of exceptions By default, the amount of information in IPython’s exceptions is just right - at least for me. But if you prefer to change that, you can use the %xmode magic command. It will switch between 4 levels of traceback’s verbosity. Check it out - it’s the same exception, but the traceback gets more and more detailed: Minimal In [1]: %xmode Exception reporting mode: Minimal In [2]: solve() IndexError: list index out of range Plain In [3]: %xmode Exception reporting mode: Plain In [4]: solve() Traceback (most recent call last): File "<ipython-input-6-6f300b4f5987>", line 1, in <module> solve() File "/Users/switowski/workspace/iac/solver.py", line 27, in solve sol_part1 = part1(vals) File "/Users/switowski/workspace/iac/solver.py", line 16, in part1 return count_trees(vals, 3, 1) File "/Users/switowski/workspace/iac/solver.py", line 11, in count_trees if vals[y][x] == "#": IndexError: list index out of range Context (that’s the default setting) In [5]: %xmode Exception reporting mode: Context In [6]: solve() --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-8-6f300b4f5987> in <module> ----> 1 solve() ~/workspace/iac/solver.py in solve() 25 def solve(): 26 vals = getInput() ---> 27 sol_part1 = part1(vals) 28 print(f"Part 1: {sol_part1}") 29 print(f"Part 2: {part2(vals, sol_part1)}") ~/workspace/iac/solver.py in part1(vals) 14 15 def part1(vals: list) -> int: ---> 16 return count_trees(vals, 3, 1) 17 18 def part2(vals: list, sol_part1: int) -> int: ~/workspace/iac/solver.py in count_trees(vals, dx, dy) 9 x = (x + dx) % mod 10 y += dy ---> 11 if vals[y][x] == "#": 12 cnt += 1 13 return cnt IndexError: list index out of range Verbose (like “Context” but also shows the values of local and global variables) In [7]: %xmode Exception reporting mode: Verbose In [8]: solve() --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-10-6f300b4f5987> in <module> ----> 1 solve() global solve = <function solve at 0x109312b80> ~/workspace/iac/solver.py in solve() 25 def solve(): 26 values = read_input() ---> 27 part1 = solve1(values) part1 = undefined global solve1 = <function solve1 at 0x109f363a0> values = [['..##.......', ..., 
'.#..#...#.#']] 28 print(f"Part 1: {part1}") 29 print(f"Part 2: {solve2(values, part1)}") ~/workspace/iac/solver.py in solve1(values=[['..##.......', ..., '.#..#...#.#']]) 14 15 def solve1(values: list) -> int: ---> 16 return count_trees(values, 3, 1) global count_trees = <function count_trees at 0x109f364c0> values = [['..##.......', ..., '.#..#...#.#']] 17 18 def solve2(values: list, sol_part1: int) -> int: ... and so on IndexError: list index out of range 12. Rerun commands from the previous sessions In [1]: a = 10 In [2]: b = a + 20 In [3]: b Out[3]: 30 # Restart IPython In [1]: %rerun ~1/ === Executing: === a = 10 b = a + 20 b === Output: === Out[1]: 30 In [2]: b Out[2]: 30 You can use the %rerun ~1/ to rerun all the commands from the previous session. That’s a great way to get you back to the same place where you left IPython. But it has one huge downside - if you had any exception (and I’m pretty sure you did), the execution will stop there. So you have to remove the lines with exceptions manually. If you are using Jupyter Notebooks, there is a workaround that allows you to tag a notebook cell as “raising an exception.” If you rerun it, IPython will ignore this exception. It’s not a perfect solution, and an option to ignore exceptions during the %rerun command would be much better. 13. Execute some code at startup If you want to execute some code each time you start IPython, just create a new file inside the “startup” folder ( ~/.ipython/profile_default/startup/) and add your code there. IPython will automatically execute any files it finds in this folder. It’s great if you want to import some modules that you use all the time, but if you put too much code there, the startup time of IPython will be slower. 14. Use different profiles Maybe you have a set of modules that you want to import and settings to set in a specific situation. For example, when debugging/profiling, you want to set the exceptions to the verbose mode and import some profiling libraries. Don’t put that into the default profile because you don’t debug or profile your code all the time. Create a new profile and put your debugging settings inside. Profiles are like different user accounts for IPython - each of them has its own configuration file and startup folder. 15. Output from the previous commands In [1]: sum(range(1000000)) Out[1]: 499999500000 In [2]: the_sum = _ In [3]: the_sum Out[3]: 499999500000 In [4]: _1 Out[4]: 499999500000 If you forgot to assign an expression to a variable, use var = _. _ stores the output of the last command (this also works in the standard Python REPL). The results of all the previous commands are stored in variables _1 (output from the first command), _2 (output from the second command), etc. 16. Edit any function or module You can use %edit to edit any Python function. And I really mean ANY function - functions from your code, from packages installed with pip, or even the built-in ones. You don’t even need to know in which file that function is located. Just specify the name (you have to import it first), and IPython will find it for you. In the above example, I’m breaking the built-in randint() function by always returning 42. 17. Share your code In [1]: welcome = "Welcome to my gist" In [2]: welcome Out[2]: 'Welcome to my gist' In [3]: a = 42 In [4]: b = 41 In [5]: a - b Out[5]: 1 In [6]: %pastebin 1-5 Out[6]: '' If you want to share your code with someone, use the %pastebin command and specify which lines you want to share. 
IPython will create a pastebin (something similar to GitHub gist), paste selected lines, and return a link that you can send to someone. Just keep in mind that this snippet will expire in 7 days. 18. Use IPython as your debugger Maybe some of the tips that I’ve shared convinced you that IPython is actually pretty cool. If that’s the case, you can use it not only as a REPL (the interactive Python shell) but also as a debugger. IPython comes with “ipdb” - it’s like the built-in Python debugger “pdb”, but with some IPython’s features on top of it (syntax highlighting, autocompletion, etc.) You can use ipdb with your breakpoint statements by setting the PYTHONBREAKPOINT environment variable - it controls what happens when you call breakpoint() in your code. This trick requires using Python 3.7 or higher (that’s when the breakpoint() statement was introduced). 19. Execute code written in another language In [1]: %%ruby ...: 1.upto 16 do |i| ...: out = "" ...: out += "Fizz" if i % 3 == 0 ...: out += "Buzz" if i % 5 == 0 ...: puts out.empty? ? i : out ...: end ...: ...: 1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz 16 Let’s say you want to execute some code written in another language without leaving IPython. You might be surprised to see that IPython supports Ruby, Bash, or JavaScript out of the box. And even more languages can be supported when you install additional kernels! Just type %%ruby, write some Ruby code, and press Enter twice, and IPython will run it with no problem. It also works with Python2 ( %%python2). 20. Store variables between sessions In [1]: a = 100 In [2]: %store a Stored 'a' (int) # Restart IPython In [1]: %store -r a In [2]: a Out[2]: 100 IPython uses SQLite for some lightweight storage between sessions. That’s where it saves the history of your previous sessions. But you can use it to store your own data. For example, with the %store magic command, you can save variables in IPython’s database and restore them in another session using %store -r. You can also set the c.StoreMagics.autorestore = True in the configuration file to automatically restore all the variables from the database when you start IPython. 21. Save session to a file In [1]: a = 100 In [2]: b = 200 In [3]: c = a + b In [4]: c Out[4]: 300 In [5]: %save filename.py 1-4 The following commands were written to file `filename.py`: a = 100 b = 200 c = a + b c You can save your IPython session to a file with the %save command. That’s quite useful when you have some working code and you want to continue editing it with your text editor. Instead of manually copying and pasting lines to your code editor, you can dump the whole IPython session and then remove unwanted lines. 22. Clean up “>” symbols and fix indentation # Clipboard content: # >def greet(name): # > print(f"Hello {name}") # Just pasting the code won't work In [1]: >def greet(name): ...: > print(f"Hello {name}") File "<ipython-input-1-a7538fc939af>", line 1 >def greet(name): ^ SyntaxError: invalid syntax # But using %paste works In [2]: %paste >def greet(name): > print(f"Hello {name}") ## -- End pasted text -- In [3]: greet("Sebastian") Hello Sebastian If you need to clean up incorrect indentation or “>” symbols (for example, when you copy the code from a git diff, docstring, or an email), instead of doing it manually, copy the code and run %paste. IPython will paste the code from your clipboard, fix the indentation, and remove the “>” symbols (although it sometimes doesn’t work properly). 23. 
List all the variables In [1]: a = 100 In [2]: name = "Sebastian" In [3]: squares = [x*x for x in range(100)] In [4]: squares_sum = sum(squares) In [5]: def say_hello(): ...: print("Hello!") ...: In [6]: %whos Variable Type Data/Info ----------------------------------- a int 100 name str Sebastian say_hello function <function say_hello at 0x111b60a60> squares list n=100 squares_sum int 328350 You can get a list of all the variables from the current session (nicely formatted, with information about their type and the data they store) with the %whos command. 24. Use asynchronous functions In [1]: import asyncio In [2]: async def worker(): ...: print("Hi") ...: await asyncio.sleep(2) ...: print("Bye") ...: # The following code would fail in the standard Python REPL # because we can't call await outside of an async function In [3]: await asyncio.gather(worker(), worker(), worker()) Hi Hi Hi Bye Bye Bye You can speed up your code with asynchronous functions. But the thing about asynchronous code is that you need to start an event loop to call them. However, IPython comes with its own event loop! And with that, you can await asynchronous functions just like you would call a standard, synchronous one. 25. IPython scripts $ ls file1.py file2.py file3.py file4.py wishes.ipy $ cat wishes.ipy files = !ls # Run all the files with .py suffix for file in files: if file.endswith(".py"): %run $file $ ipython wishes.ipy Have a Very Merry Christmas! 🎄🎄🎄🎄🎄🎄 You can execute files containing IPython-specific code (shell commands prefixed with ! or magic methods prefixed with %). Just save the file with the “.ipy” extension and then pass it to the ipython command. Conclusions If you have been reading my blog for a bit, you probably already realize that IPython is one of my favorite Python tools. It’s an excellent choice for solving code challenges like the Advent of Code, and it has a lot of cool tricks that can help you. Leave a comment if you know some other cool tricks that you want to share! Image by Valeria Vinnik from: Pexels
https://switowski.com/blog/25-ipython-tips-for-your-next-advent-of-code
CC-MAIN-2021-39
refinedweb
3,404
70.02
import "github.com/hashicorp/go-multierror"

append.go flatten.go format.go multierror.go prefix.go sort.go

Flatten flattens the given error, merging any *Errors together into a single *Error.

ListFormatFunc is a basic formatter that outputs the number of errors that occurred along with a bullet point list of the errors.

Prefix is a helper function that will prefix some text to the given error. If the error is a multierror.Error, then it will be prefixed to each wrapped error. This is useful when appending multiple multierrors together in order to give better scoping.

type Error struct {
    Errors      []error
    ErrorFormat ErrorFormatFunc
}

Error is an error type to track multiple errors. This is used to accumulate errors in cases and return them as a single "error".

Append is a helper function that will append more errors onto an Error in order to create a larger multi-error. If err is not a multierror.Error, then it will be turned into one. If any of the errs are multierror.Error, they will be flattened one level into err.

ErrorOrNil returns an error interface if this Error represents a list of errors, or returns nil if the list of errors is empty. This function is useful at the end of accumulation to make sure that the value returned represents the existence of errors.

Len implements the sort.Interface function for length.

Less implements the sort.Interface function for determining order.

Swap implements the sort.Interface function for swapping elements.

WrappedErrors returns the list of errors that this Error is wrapping. It is an implementation of the errwrap.Wrapper interface so that multierror.Error can be used with that library. This method is not safe to be called concurrently and is no different than accessing the Errors field directly. It is implemented only to satisfy the errwrap.Wrapper interface.

ErrorFormatFunc is a function callback that is called by Error to turn the list of errors into a string.

Package multierror imports 3 packages and is imported by 2796 packages. Updated 2019-07-22.
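The godoc text above carries no usage snippet, so here is a minimal sketch of the usual Append / ErrorOrNil pattern; the step1 and step2 functions are placeholders for whatever operations you want to accumulate failures from:

package main

import (
	"errors"
	"fmt"

	"github.com/hashicorp/go-multierror"
)

func step1() error { return errors.New("step1 failed") }
func step2() error { return nil }

func runAll() error {
	var result *multierror.Error

	// Append tolerates a nil *multierror.Error, so no initialization is needed.
	if err := step1(); err != nil {
		result = multierror.Append(result, err)
	}
	if err := step2(); err != nil {
		result = multierror.Append(result, err)
	}

	// ErrorOrNil returns nil when nothing was accumulated,
	// so callers can keep the ordinary err != nil check.
	return result.ErrorOrNil()
}

func main() {
	if err := runAll(); err != nil {
		fmt.Println(err)
	}
}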
https://godoc.org/github.com/hashicorp/go-multierror
CC-MAIN-2019-35
refinedweb
348
50.73
Lydie, if you just want to check all the values of a combo box you can simply record the respective check. There's one that checks all the items in one go.

If you really need to get them by script for further processing, here's an example that should work for most cases.

from de.qfs.apps.qftest.extensions.items import ItemRegistry

com = rc.getComponent("<QF-Test ID of combobox>")
items = []
for i in range(com.getItemCount()):
    items.append(ItemRegistry.instance().getItemValue(com, i))

# now you have the items to work with, e.g.
# print items
# to get them into a QF-Test variable:
rc.setLocal("comboItems", items)
# etc...

Best regards,
Greg
https://www.qfs.de/en/qf-test-mailing-list-archive-2016/lc/2016-msg00023.html
CC-MAIN-2018-17
refinedweb
119
60.61
Using Better Tracing to Understand the Doc/View Architecture WEBINAR: On-Demand Full Text Search: The Key to Better Natural Language Queries for NoSQL in Node.js Purpose One of the hard things about learning the document/view architecture is that it is full of marvelous functions, but nobody tells you which function gets called when. You are supposed to override functions when you want to add functionality, but it is not clear which function is the right one. One way to find out is to override each function you want to know about and put a TRACE statement in it. Now each time one of your functions is called, a line of text appears in the debug window. Start the program, see what happens. Click the mouse, see what happens. The CIndentedTrace class helps with this. You can use it to make the debug window look more like C++ code. By adding a line of code to a function like the CMyClass member function MyFunc(), CIndentedTrace prints CMyClass::MyFunc() { when it is called. When the function exits, CIndentedTrace prints } Functions called by MyFunc() can be treated the same way, except that their text will be indented. If MyFunc() calls SubFunc(), the debug window looks like this CMyClass::MyFunc() { CMyClass::SubFunc() { } } CIndentedTrace can also be used to add lines of text that look like C++ comments. E.g. CMyClass::MyFunc() { // This explains something that goes on inside MyFunc(). } CIndentedTrace contains other utility functions for such things as hex dumps of strings and displaying text strings for GetLastError() messages. TRACE is a macro whose behavior depends on whether _DEBUG is defined. When compiled with a debug configuration, TRACE does its job. When compiled with a release configuration, it disappears. The CIndentedTrace header file contains macros to do the same thing. If you use these macros instead of calling CIndentedTrace yourself, CIndentedTrace will only be called in debug versions of your program. Running the demos You may want to try out the demos before seeing how they work. The TraceEasy demo shows an easy example of using the CIndentedTrace class without macros. It is for understanding the class. The TraceMacro demo is an example of how to use all the features of the CIndentedTrace class with macros. The TraceSDI and TraceMDI demos use CIndentedTrace to make the inner workings of the single and multiple document interface visible. All demos work only from the development environment with debugging active. No compiled code has been included. === To run the demos, 1) Download and unzip them. Put each one in a separate directory. Each demo should have some source files and 3 or 4 files in a res subdirectory. 2) Make sure. 3) Use the Visual C++ development environment to open the workspace file (the dsw file). In most cases, this can be done by double clicking on the file in Windows Explorer. 4) Make sure a debug configuration is selected. Check it on the Build menu, Set Active Configuration option. 5) Compile the demo application. On the Build menu, choose the Build option. 6) If you have just finished compiling, the output window is probably open and displaying 0 error(s), 0 warning(s). If not, on the View menu, select the Output option. There is a row of tabs along the bottom of the Output window that say Build, Debug, etc. When the program runs, TRACE output and other messages go in the Debug window. 7) Run the demo from the development environment. On the Build menu, select the Start Debug option, Go sub-option. 
8) At any time while the program is running or after it has stopped, switch back to the development environment and look in the Debug window. === The SDI and MDI demos trace MFC code through a Single Document Interface application and a Multiple Document Interface application. In each, a vanilla application was created with the app wizard. The class wizard was used to override many functions. The IT_IT() macro was added to each override. This macro uses CIndentedTrace to produce formatted TRACE output. Some functions were not overridden because they produce too much output. For example, the mouse move message handler produces several messages every time the mouse is touched. The screen drawing functions and idle message handlers are also left out. It is good to trace these to see what they do, even if you don't leave macros in them more than long enough to find out. Tip: Stepping through the MFC source code is another good way to learn about MFC. Add a breakpoint to the function you want to inspect, run the program, and step into the MFC source code. This works only in MFC code. Microsoft does not supply source code for the C run time library, SDK functions, or other code. Darn it. If you do this, be careful about breakpoints in window activation handlers or screen drawing functions. When you are done stepping through such a function, you might continue running the program. The debugger activates the program widows and redraws them. This calls the message handler you just finish stepping through, triggers its breakpoint, and shows you the code you just tried to get out of. To get out of this loop, you must turn off the breakpoint. Using CIndentedTrace in your programs 1) Before beginning to deal with a program,. 2) Make sure you are using a debug configuration. Open your project with Visual Studio. On the Build menu, choose the Set Active Configuration option. Choose the debug configuration. Recompile if needed. 3) Before the class or macros can be used, the IndentedTrace.cpp file must be added to the project (Project, Add To Project, Files...). Just having the file present in the project directory is not enough. The compiler will not compile a source file and the linker will not use the object file unless the source file is part of the project. If the linker doesn't find the CIntendedTrace functions referenced in the code, it will complain with LNK2001 errors. The IndentedTrace.h file can be added to the project or not. If not, CIndentedTrace will not appear in the Visual Studio class view window and IndentedTrace.h will be listed as an external dependency instead of a header file in the File View. 4) Each project cpp file where CIndentedTrace functions or macros are to be used must reference IndentedTrace.h. This can be done by adding #include "IndentedTrace.h" near the top of the cpp file or to a header file the cpp file includes. Perhaps the best way is to just add it once to stdafx.h. 5) In each function you want to trace, add an IT_IT() macro. It should be the first line, so that any code called by the function is properly included inside the function's braces. The IT_IT() macro takes a string argument. The string usually contains the function's class and name. The class should be added because the output window may contain output from many different classes. But you may use any text you want may appear in the output window. Within a function, an IT_IT() macro must appear before any other IT_ macro can be used. Only one IT_IT() macro can be used in any function. 
6) To add a comment to the output window, add an IT_COMMENT() macro somewhere after the IT_IT(). Variables can be displayed with a format like TRACE or printf(). IT_COMMENT1() takes 1 variable. For 2 or 3 variables, use IT_COMMENT2() and IT_COMMENT3(). === The code you write might look like this #include "IndentedTrace.h" // This may go in stdafx.h CMyClass::MyFunc( int iArg1, double dArg2 ) { IT_IT( "CMyClass::MyFunc()" ); SubFunc( iArg1 ); IT_COMMENT( "This explains something that goes on inside MyFunc()." ); // ... } CMyClass::SubFunc( int iArg1 ) { IT_IT( "CMyClass:: SubFunc()" ); IT_COMMENT1( "The value of iArg1 is %d", iArg1 ); ... } How CIndentedTrace works The TraceEasy demo shows how CIndentedTrace works. Only a few CIndentedTrace features are used. No macros are used to make it easy to follow when stepping into CIndentedTrace code. Look in CTRaceEasyView to find code that uses CIndentedTrace. In OnLButtonUp(), try putting a break point at the CIndentedTrace variable declared. Step into the CIndentedTrace constructor and destructor. To see the destructor, wait until the cursor reaches the closing brace of OnLButtonUp(), and step into the brace. When a local variable of type CIndentedTrace is created, the CIndentedTrace constructor is called immediately, and the CIndentedTrace destructor is called at the end of the function body when local variables go out of scope. The most important things the constructor does are print its argument and a "{" at the current indent level, and increment the indent level. The most important things the destructor does are decrement the indent level and print a "}". Note that if the CIndentedTrace variable is declared on the first line of a function, it will be constructed before any other local variables. Its destructor will be called last. This is important. CIndentedTrace keeps track of the indent level with the static member variable ms_iTraceDepth. Because ms_iTraceDepth is static, all CIndentedTrace objects must share a single copy of it. This makes it an ideal way for CIndentedTrace objects to share information about the current indent level. Each time a new CIndentedTrace object is created, ms_iTraceDepth is incremented. Each time one is destroyed, ms_iTraceDepth is decremented. Each CIndentedTrace object keeps track of the indent level it was created at with m_nLocalTraceDepth. Since this member variable is not static, no other CIndentedTrace object can touch it. It remains fixed for the life of the object that created it. Other CIndentedTrace functions, such as Comment(), can only be called in functions that have defined a CIndentedTrace variable. This is obvious in TraceEasy, but when macros are used, the variable definition is hidden in a macro. Comment() just prints "// " and its argument at the local indent level. === The TraceMacro demo shows how to use most CIndentedTrace features, including some explained in the "For more advance users" section. TraceMacro is a little more complex, but mostly does the same things as TraceEasy. One difference is that TraceMacro uses macros for all CIndentedTrace variable definitions and function calls. The macros are defined at the end of the IndentedTrace header file. The IT_IT macro creates a local variable with the unlikely name of _IT_vnirrrdaewu in debug compiles. This name was chosen because, in most programs, it will not already be used by another variable. _IT_vnirrrdaewu stands for "Variable Name I Really Really Really Doubt Anyone Else Will Use." 
If you do not share my doubts, please feel free to rewrite the macros with a name based on a GUID.

Most of the other macros call a CIndentedTrace function. They require that the variable _IT_vnirrrdaewu be defined. This means they can only be used after IT_IT() or IT_EMBEDDED_IT has been used. IT_IT() is intended for the usual local variable case. IT_EMBEDDED_IT is intended for a special case where CIndentedTrace must be a class member variable.

For more advanced users

A constructor with an initialization list may need to be handled a little differently. The initialization list is executed before entering the constructor body. An IT_IT() macro in the constructor body makes it look like the initialization list is not part of the constructor. The thing to do is put CIndentedTrace in the initialization list too. This means it has to be a member variable of the class.

Recall that the order in which the constructors in an initialization list run is set by the order in which the member variables are declared in the header, NOT by their order in the initialization list. If this is not familiar, see Effective C++ by Scott Meyers. In any case, the CIndentedTrace member variable must be declared first in the class.

CIndentedTrace prints a "{" and indents when constructed, and unindents and prints a "}" when destroyed. This means there will be a "{" when the class is constructed and a "}" when it is destroyed. If you want the class constructor and destructor each bracketed with { and }, you will have to call Entry() and Exit() to put them there yourself. A couple of Comment() calls may be useful as well. Macros can be used for all of this.

The IT_EMBEDDED_IT macro creates an uninitialized CIndentedTrace member variable named _IT_vnirrrdaewu. This member variable should be initialized in the constructor initialization list with IT_EMBEDDED_INIT(). You will probably want to add IT_EXIT() to the constructor body and IT_ENTRY() to the destructor body. You may want to add IT_COMMENT() as well.

Example header file

class CMyClass
{
    IT_EMBEDDED_IT;        // Must come before other member variables.
    CEmbeddedClass m_EC;
public:
    CMyClass();
    ~CMyClass();
    // ...
};

Example cpp file

CMyClass::CMyClass()
    : IT_EMBEDDED_INIT( "CMyClass::CMyClass - "
                        "beginning of constr init list" ),
      m_EC( iSomeArg )   // pass a value here, not a parameter declaration
{
    IT_COMMENT( "CMyClass::CMyClass - beginning of c'tor body " );
    // Other initialization
    IT_EXIT();
}

CMyClass::~CMyClass()
{
    IT_ENTRY( "CMyClass::~CMyClass - beginning of destructor body " );
    // Other destruction
    IT_COMMENT( "CMyClass::~CMyClass - End of destructor body" );
    // m_EC will be destroyed after the destructor body is done.
}

Note that if the IT_IT() macro is used in a member function of such a class, a member variable and a local variable with the same name have been declared. This is OK. The local variable hides the member variable. This means that if you add IT_COMMENT() to the function, it will invoke the local CIndentedTrace and print at the local function's indent level.

If you have any doubts about which instance of CIndentedTrace produces what output, add IT_ENABLE_SERIAL_NUM(bEnable). This calls a static function, and so can be used before any CIndentedTrace variables have been defined. The output will identify itself with serial numbers.

===
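Before moving on to threading concerns, the mechanism described under "How CIndentedTrace works" can be condensed into a short illustrative stand-in. This is not the real CIndentedTrace source (the class name, members and printf output below are my own simplification), but it shows the shared static depth counter and the constructor/destructor bracketing:

#include <cstdio>
#include <string>

// Illustrative stand-in: a shared static depth counter, "name {" plus an
// indent increment on construction, an unindent plus "}" on destruction.
class IndentedTraceSketch
{
public:
    explicit IndentedTraceSketch(const std::string& name)
        : m_localDepth(ms_depth++)     // capture the level this object was created at
    {
        Print(name + " {");
    }

    ~IndentedTraceSketch()
    {
        --ms_depth;
        Print("}");
    }

    void Comment(const std::string& text) { Print("// " + text); }

private:
    void Print(const std::string& text) const
    {
        for (int i = 0; i < m_localDepth; ++i)
            std::printf("  ");
        std::printf("%s\n", text.c_str());   // the real class writes TRACE output instead
    }

    static int ms_depth;   // plays the role of ms_iTraceDepth
    int m_localDepth;      // plays the role of m_nLocalTraceDepth
};

int IndentedTraceSketch::ms_depth = 0;

// With nested scopes this produces output shaped like:
// CMyClass::MyFunc() {
//   // This explains something that goes on inside MyFunc().
//   CMyClass::SubFunc() {
//   }
// }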
The cause of a deadlock can be hard to find, particularly when some of the code is buried in macros. The developer of a multithreaded program should make the decision to add a critical section to CIndentedTrace if he wants it.

In the meantime, CIndentedTrace works after a fashion in multithreaded programs. Sometimes the indentation is messed up. This is not a big problem, because indentation is messed up anyway when output from two execution paths is interleaved. If you are willing to overlook this shortcoming, you can add IT_ENABLE_THREAD_ID(bEnable). This calls a static function, and so can be used before any CIndentedTrace variables have been defined. The output produced outside the main thread will identify itself with a thread ID. This can be used together with serial numbers.

There is another approach to tracing a multithreaded app. IT_EMBEDDED_IT and IT_EMBEDDED_INIT create a CIndentedTrace member variable in a class. The member variable has the right name, so all the macros can be used in any class function. All the output they produce will have the same indent level, but this can be an advantage in a multithreaded application.

Downloads

All demos are to be run from the development environment after a debug compile. Tracing must be enabled.

Source for TraceEasy - 25 Kb
Source for TraceMacro - 28 Kb
Source for SDI Demo - 32 Kb
Source for MDI Demo - 51 Kb
https://www.codeguru.com/cpp/cpp/cpp_mfc/tutorials/article.php/c4097/Using-Better-Tracing-to-Understand-the-DocView-Architecture.htm
Updated 28 April 2011: my review of this book has now been published on Slashdot. They edited it down. Here's the complete review as submitted, complete with links to Amazon's current free-tier offer, and cloud computing cartoons! These are my notes of errata, typos, queries/issues, and hoped-for improvements to the 2010 Packt book "Amazon SimpleDB Developer Guide" I’ve no comments on the PHP code as I only tried the Java and Python, using Windows. Forgive the ugly "pre" blocks for some of the code, but that was the only way I could stop WordPress from turning normal quotes into the dreaded "smart" curly quotes that prevent code from running. Page by page p 5 - link at the bottom is wrong – extra slash, link doesn’t work in the PDF. pp 25-26 - needs “keep with next” for pics and captions. p 27 - link at the top doesn't work. p 28 and throughout - they should have put "awsAccessId, awsSecretKey" in a different font to make it really obvious that you insert your actual keys there rather than, eg, thinking there'd be a prompt to enter your keys when you run the code. Going further, the book should have made it crystal clear that you need quotes around the keys - they're strings. pp 28-31 – no typica imports were given – the book should provide them once, then they can be used throughout the book, but the first time would help a lot, especially given that this is a "getting started" book, because Eclipse suggests several options and it's not clear which is the correct one. In Chapters 2 onwards the minimum imports needed (some need more) are generally: import com.xerox.amazonws.sdb.Domain; import com.xerox.amazonws.sdb.SDBException; import com.xerox.amazonws.sdb.SimpleDB; (alternatively, the easiest if laziest solution is to import com.xerox.amazonws.sdb.*; ) Cf Chapters 9 and 10, eg p 194, which do give all the code, complete with all imports and even “main” – why the inconsistency? It would be easier for readers if the full code were provided in the early chapters. Contrast with this SimpleDB typica tutorial, which gives all imports (and makes it crystal clear that the keys go in as strings). There are also inconsistencies in the Python code, eg p 211 gives the preliminary code to import boto and set up the connection etc, whereas some earlier chapters leave that out. All the Python code should be similarly complete, for the convenience of those readers who (as seems most likely) try different chapters at different times: don't assume readers will work through the whole book in a single sitting. In contrast, the Amazon Web Services toolkit for Eclipse took seconds to install, a few more seconds to enter my credentials, and the SimpleDB sample code given ran immediately. p 38 – this Chapter should explain installation for Windows too, ie open a command window in the boto-[whatever] folder, then it's python setup.py install. Add environment variables for your keys as user variables in the normal way eg through Computer Properties -> Advanced System Settings -> Advanced -> Environment Variables). This is a strange omission as it’s in an IBM Developerworks tutorial on SimpleDB/Python/botowritten by one of this book's authors. p 40 – there should be a True at the bottom of the page for the output you get after creating new item. Similarly with top of p 41. p 42 - the last one: sdb_connection.get_attributes('prabhakar-dom-1',car1') should be: sdb_connection.get_attributes('prabhakar-dom-1','car1') - ie there's a missing open quote. 
p 59 – I don’t get Domain:Cars as the output in the penultimate line. Also, the code for creating the domain has a double underscore in the name cars__domain – but it needs to be single underscore ie cars_domain, or else copy/pasting the subsequent code (which uses cars_domain) won’t work. p 60 – needs a space after the import ie it's import SPACE inspect. Also, pp.pprint(inspect.getmembers(cars_domain, inspect.ismethod)) won’t work because the name has a single underscore here, see p 59. And so on. p 63 - The line Domain domain = sdb.getDomain("songs"); should be Domain domain = sdb.getDomain("Cars"); p 64 – pasting the Python code shown here won’t work unless the double underscore on p 59 is fixed, or you use a double underscore here instead, ie cars__domain p 70 – “It makes sure you call save() to actually persist your additions to SimpleDB” is misleading and gives the impression that add_value automatically includes a save() - cf p.71, which reads (correctly) “You must once again call save() in order to persist the changes.” The p 70 sentence should read something like “After calling add_values, make sure that you also call save()...” p 75 - cars __domain should be (see p 59) cars_domain Missing code (this should be line 4): myitem2 = cars_domain.get_item('Car 2’) p 78 – I get u'dealer' in the results of running the code, not 'dealer' p 88 – Java code gives the body of the method provided for zeropadding; but readers may be more interested in the use of the method, eg String encoded = DataUtils.encodeZeroPadding(int number, int maxNumDigits); or int decoded = DataUtils.decodeZeroPaddingInt("0000234"); p 93 – again it would be more useful if rather than providing the method body this page provided code showing its use, like: Date aDate = new Date(); String encodedDate = DataUtils.encodeDate(aDate); System.out.println(encodedDate); - and similarly with the decodeDate() method. p 111 – the p 116 info on quoting should be given here, and in the main body of the text rather than a side “warning” - I personally find those warnings easily missed, possibly because they’re in a smaller font. Using the backtick ` (above the Tab key) instead of a single quote ‘ isn’t obvious, especially to someone typing out the code instead of copy/pasting, so it merits major highlighting. The Amazon guide is much clearer on when ` must be used. To emphasise, in SELECT queries you must use ` around the domain name if the name contains, eg, a hyphen or underscore, or else it won't work. (And you're not "escaping" here, you're quoting with a backtick.) p 115 – there's info missing about You’re a Strange Animal, whereas info about that item was added in p 108 and 110 - the example should be carried through in full ie: >> 1045845425 {u'Genre': u'Rock', u'Rating': u'****', u'Song': u"You're a Strange Animal", u'Artist': u'Gowan', u'Year': u'1985'} (cf pp 122, 124, 125, 129, 130, 131, 132, 134 which are consistent on that front). p 136 – getAttributes in Java - this code won’t run, and I can’t find the getItemsAttributes() method in p 146 – the download link for JetS3t is now. And the info here is incomplete – “Add the jets3t-0.7.2.jar to your classpath” is not good enough. You also have to add commons-httpclient-*.jar (in the jets3t libs directory) to the classpath, or else it won’t work. 
By the way, this isn't mentioned in the book but, when testing stuff on S3, a good way to check the results of running the code is to use JetS3t Cockpit (run the script in the JetS3t bin directory eg cockpit.bat if you're on Windows). And if you try the book's examples, you might want to use a different bucket name from packt_songs, or, alternatively, don't forget to delete that bucket when you're through. Bucket names are unique throughout the whole of AWS, not just to your account, so if you don't delete it, no other readers will be able to use the same bucket name. p 148 – “We will use a MD5 hash that is generated from the name of the song, name of the artist, and year.” – but, the code given doesn’t in fact use the year. p 149 – why is the line with user key details commented out? p 149-151 – isn't there more efficient “for” code to do this, like the Python version on p 152, instead of going through each item individually? p 151 – code is missing for “You’re a Strange Animal”. p154 – “/songs_folder” is used here, cf "/Users/prabhakar/Documents/SimpleDB Book/songs/” on p 160 – another inconsistency. More importantly, the code doesn’t run unless the mimes.type file from the jets3t configs folder is added to the classpath (I copied it to a lib folder in my Eclipse SimpleDB then added that lib folder to the project’s build path as Class folder in the project’s properties). Also, this code doesn’t allocate keys for the uploaded files using the relevant data from SimpleDB; the keys here are just the filenames. Either the book should provide code that uses SimpleDB data as the keys (as the Python code on p 159 does), or else it should explain clearly to readers that this can’t be done in Java. p 160 – songs.select should be songs_domain.select in order to work with the previously-given code. Also, it wouldn't hurt to remind Windows users to escape the backslash in the file/folder path eg C:\path to\songs/%s.mp3 p 161 – why not use more efficient code with a “for” loop? In any event, this code wouldn’t run: first, “The method downloadObjects(S3Bucket, DownloadPackage[]) in the type S3ServiceSimpleMulti is not applicable for the arguments (S3Bucket, DownloadPackage[])”, then on casting downloadPackages to DownloadPackage[] and trying to run it: “Unable to determine S3 Object key name from signed URL: null”. And also warnings of deprecated methods/types. Also, it’s not clear what's the local location files get downloaded to, cf p 164 for Python which makes clear what the specified download directory will be. The info on with the comments in the code is clearer as to which code is mean to do what, and it would have helped if the code in the book had been similarly commented. p 164 – see p 160 comment on “songs_domain” – occurs twice. p 172 – “This sample will print the following values to the console:” – not exactly, the requestID will of course vary with the user. pp 186-188 – memcached is also available for Windows - installation instructions are on that page, your directory structure may vary of course. p 189 – why so specific on “Copy the JAR file named java_memcached-release_2.5.0.jar to a folder that is on your classpath.”? Why not just say, add it to your classpath? (adding it as an external jar also works, for instance). 
This page should also include instructions for memcached Windows, as p 38 sort of does - ie download the python-memcached library, extract the files, run cmd, cd to the folder, use “python setup.py install”; start the server with the command “c:pathtomemcached.exe -d start”. p 190 – it can't be a bad idea to remind readers to start the memcached server running first, here. p 192 – “mc = memcache.Client(['127.0.0.1:12312'])” – why is the port said to be 12312 here? Cf p 190 where it’s port 11211 for the Java. Only 11211 works for me, at least when using memcached for Windows with Python. pp 194-196 – the Java code didn’t work for me, it still keeps retrieving the data afresh from SimpleDB – even though the Java test on p 190-191 showed that the memcached server is working fine, and the caching certainly works in Python (p 202). p 201 – the code starting at the bottom of the page should be saved into a file called sdb_memcache.py – a big omission. Newbies – best to save the py files to the same folder as your Python installation eg the Lib subfolder; and NB you have to fix the indents if you copy/paste. p 202 – if using memcached for Windows, it won’t work unless you use port number 11211 ie: sdb_mc = SDBMemcache("127.0.0.1","11211") p 205 - "In this chapter, we will explore how to run parallel operations against SimpleDB using boto." - but it's not just using boto. The page number's missing from this page. p 213 - "Here is a simple Python script that updates items by making three different calls to SimpleDB, but in a serial fashion, that is one call after another." - but, no script was actually given?? And why not give the code for "Running this through time"? p 213-216 - it would have been more helpful to give the explanations as comments against the relevant parts of the code, so that it's clear which bit of the code does what. That's a general point about the earlier Java code in this book too. p 221 - to install eggs you have to first install easy_install. Although I'd already installed setuptools, I still had to download ez_install.py for this to work.
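Pulling a few of the earlier boto corrections (pp 40-78) together, a minimal working sequence might look like the following. This is my own sketch of the boto 2.x-era SimpleDB calls discussed in the book, not code from the book itself; connect_sdb() assumes the AWS keys are in the environment, and the domain, item and attribute names are only examples:

import boto

# assumes AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are set in the environment
sdb_connection = boto.connect_sdb()

domain = sdb_connection.create_domain('prabhakar-dom-1')   # single underscore names avoid the p 59/p 64 trap

item = domain.new_item('car1')
item['dealer'] = 'Example Motors'
item.save()                                                # add_value()/[]-assignment alone does not persist; save() does

# the corrected p 42 call, with both quotes in place
print(sdb_connection.get_attributes('prabhakar-dom-1', 'car1'))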
http://blog.kuan0.com/2011/04/
date utils timezone and add to date Hi I’m missing something with the dates processing, here is the scenario. The user enters a date, for example 2020-05-26. My purpose was to get the days since epoch (18408) for that date. My first thought was that no matter the timezone the user is in, the days will always be the same. While in GMT+X the result was as expected, for GMT-X, it was one day behind. I then thought that converting the date to UTC, might do the trick so I did the following: getDateInUTC(value) { var year = Number(value.substring(0, 4)); var month = Number(value.substring(5, 7)); var day = Number(value.substring(8, 10)); return new Date(Date.UTC(year, month - 1, day)); } when: date.getDateDiff(this.getDateInUTC(this.itemWhen), new Date(1970, 0, 1), "days"), But the same problem is happening. I’m sure I’m missing something but I can’t figure out what exactly. Thanks - s.molinari last edited by s.molinari In terms of time offsets, UTC and GMT are basically the same time. It’s just that UTC isn’t a time zone. It is a standard. And GMT is a timezone. For instance, the UK is on GMT in the winter and British Summer Time (BST) in the summer (UTC/ GMT+1) I’m not sure of the problem, but to calculate time zone times, you need the time reverted to UTC/ GMT in your data and then it would be hours, not days you’d be adding or subtracting based on the user’s time zone. Also, time zone values aren’t fixed. Depending on the time the Date/Time value is taken, the value of the timezone offset may vary as the rules change from country to country. I just read a really interesting article on the subject. In it the author suggested, you should always store the local time of the event, store the converted UTC value and the value of the timezone offset (in hours). If you want to get really picky, to be sure the time for users, who are not in the same time zone, is depicted properly, you’d need to make sure you are following the offsets for the time zone rule set available at that time the time is depicted/ displayed. All in all, a quite complex subject for something seemingly so simple. Scott Thanks, @s-molinari It is a pain but from my perspective, I want it to be simple, after the all the user has set a date on his computer meaning his real date and the date he entered are of the same timezone. For simplicity, what I think happens is that because of the timezone, instead of calculating, for example: 1970-01-01 00:00:00 (epoch) to 1970-01-02 00:00:00 (user input), the second date is translated into timezone date, meaning 1970-01-01 21:00:00 (3 hours back) and because of that, the days since epoch = 0 instead of 1. So basically, I want to calculate 1970-01-01 00:00:00 to 1970-01-02 00:00:00 with both GMT timezone (to cause no change because of the time). I think that I did that with my code for the second date (see above code) but I either did it wrong or I should do the same to the epoch date, so Instead of just new Date(1970, 0, 1) I should do it also in GMT. What do you think? The real problem is that I’m GMT+X so it’s working fine as it is now, and changing to GMT-X showed my fix was correct but maybe it’s because the problem is only inside the time frame of the timezone (GMT-5 so the problem happens between midnight and 5am). Again, just a guess. - s.molinari last edited by s.molinari I’m not sure what the epoch time has to do with displaying a date proper to local time taking into account time zones. At least I’m not understanding your use or thinking with it. 
Epoch time is only about timestamps, meaning, if you use timestamps, they are a difference of milliseconds of the user’s local time to the epoch. The epoch should have no relation to your calculation as you’d still only need to calculate the UTC and save the user’s time zone and calculate any other times from that. You have to decide if you store the date/times as timestamps or iso date values. That doesn’t change the calculation, only the means to get to it. Scott I store the date as days from epoch. Meaning the user enters 1970-01-02 and I store 1. The problem is, as I described above, is that because the dateDiff uses 2 dates with timezone, the result is 0 maybe because of what I wrote above, thus my question. At the moment, I think I will need to set the first parameter (epoch date, 1970-01-01) to also be GMT like the other parameter and I was wondering if there is another, cleaner way to do it, maybe from the experience of other people who might be using dateDiff function. @amoss What I don’t understand, is that getDateDiffis already aware of timezones (first and second date can be in different timezone): // Internal date diff function in function getDiff (t, sub, interval) { return ( (t.getTime() - t.getTimezoneOffset() * MILLISECONDS_IN_MINUTE) - (sub.getTime() - sub.getTimezoneOffset() * MILLISECONDS_IN_MINUTE) ) / interval } So, it should work OTB. I made a pen () to test the getDateDiffwith different dates / time in different timezone. I also tried to change my laptop timezone, and I always get the good result. I will need to further test this, I’m now realizing that the problem might be when retrieving the days since epoch and converting it back to a date. - s.molinari last edited by I’m still not getting what the timezone has to do with an epoch date (in days or milliseconds). What is in this.itemWhen? If a user enters “2020-05-26” that is a date with no time zone offset info, right? Scott
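For what it's worth, one way to make the days-since-epoch calculation immune to the local timezone (a sketch added here for illustration, not code from the thread) is to build the timestamp with Date.UTC and divide by the milliseconds in a day, so no local-midnight Date object is ever involved:

// days since the Unix epoch for a 'YYYY-MM-DD' string, independent of the
// browser's timezone; itemWhen is assumed to look like '2020-05-26'
function daysSinceEpoch (itemWhen) {
  var year = Number(itemWhen.substring(0, 4));
  var month = Number(itemWhen.substring(5, 7));
  var day = Number(itemWhen.substring(8, 10));
  var utcMidnight = Date.UTC(year, month - 1, day); // milliseconds, always UTC
  return Math.floor(utcMidnight / 86400000);        // 86400000 ms per day
}

console.log(daysSinceEpoch('2020-05-26')); // 18408, regardless of GMT+X or GMT-X

Converting back is the mirror image: new Date(days * 86400000) gives UTC midnight of that day, which can then be formatted with the getUTC* methods rather than the local ones.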
https://forum.quasar-framework.org/topic/5994/date-utils-timezone-and-add-to-date
A class that handles raw IP I/O communication for a specific protocol.

#include <io_ip_manager.hh>

A class that handles raw IP I/O communication for a specific protocol. It also allows arbitrary filters to receive the raw IP data for that protocol.

Member function summary:
- Allocate an I/O IP plugin for a given data plane manager.
- Join an IP multicast group.
- Leave all IP multicast groups on this interface.
- Leave an IP multicast group.
- Received a raw IP packet. Implements IoIpReceiver.
- Received a multicast forwarding related upcall from the system. Examples of such upcalls are: "nocache", "wrongiif", "wholepkt", "bw_upcall". Implements IoIpReceiver.
- Remove filter from list of input filters.
http://xorp.org/releases/current/docs/kdoc/html/classIoIpComm.html
Edit: Added an implementation that doesn't require a lock and is therefore safer to use overall.

This pattern is for devices or services that misbehave when commands get sent too close together.

Simple Example

This example uses a single String Item which gets commanded with a command line script that gets called using executeCommandLine to trigger the device. This may not be possible in all cases so please post questions if you need help dealing with multiple Items to be commanded. The approach would be the same, only we would parse the command into an Item name and new state.

Items:

String WirelessController
Switch Outlet_A

Rules:

import java.util.concurrent.locks.ReentrantLock

var lock = new ReentrantLock

rule "433MHz Controller"
when
    Item WirelessController received command
then
    lock.lock // Ensures only one instance of the Rule can run at a time
    try {
        val results = executeCommandLine(WirelessController.state.toString, 5000)
        logDebug("433", results)
        Thread::sleep(100) // experiment to find the minimum sleep to obtain reliable switching
    } catch(Exception e) {
        logError("433", "Error handling 433MHz command: " + e)
    } finally {
        lock.unlock
    }
end

rule "Outlet A"
when
    Item Outlet_A received command
then
    if(receivedCommand == ON) WirelessController.sendCommand("433-send xxxxx 1 1")
    else WirelessController.sendCommand("433-send xxxxx 1 0")
end

Theory of Operation: There is a script called 433-send which is called with three arguments, a controller ID (I'm basing this off of someone else's code and do not fully understand the script), a device ID, and the command as ON=1 and OFF=0. Rather than providing an executeCommandLine in a Rule for each device or binding the Items to the Exec binding, we use a Design Pattern: Proxy Item to represent each device. The Proxy Item triggers a Rule and in the Rule we construct the command to execute to command the device and send that to WirelessController as a command.

The WirelessController command is handled by a Rule. A ReentrantLock prevents more than one instance of the Rule from executing at the same time. This rule then executes the command and sleeps for a tenth of a second before exiting. The lock and the sleep will prevent any two commands to the 433 controller from occurring closer together than 100 msec and therefore avoid collisions.

Complex Example

We can take advantage of Design Pattern: Associated Items and Design Pattern: Encoding and Accessing Values in Rules to make the above solution a little more generic and flexible. Note: The code below depends on the WirelessDevice Items being persisted.
Items:

Group WirelessDevices
Switch WirelessDevice_xxxxx_1 (WirelessDevices)
Switch WirelessDevice_xxxxx_2 (WirelessDevices)

Rules:

import java.util.concurrent.locks.ReentrantLock

var lock = new ReentrantLock
var lastCommand = now.millis
val commandDelay = 100 // experiment to find the lowest number that works

rule "A WirelessDevice received a command"
when
    // We can't trigger the rule using WirelessDevices received update because there is no good way to handle the multiple rule triggers
    Item WirelessDevice_xxxxx_1 received command or
    Item WirelessDevice_xxxxx_2 received command
then
    // Get the controller ID and device ID
    val split = triggeringItem.name.split("_")
    val controller = split.get(1)
    val device = split.get(2)
    val command = if(receivedCommand == ON) "1" else "0"

    // Ensures only one instance of the Rule can run at a time
    // We do this after the lines above so the delay below does not interfere with the Rule's ability to
    // determine which Item triggered the Rule
    lock.lock
    try {
        // Sleep if the last command happened too close to now, but only sleep just long enough
        val deltaTime = now.millis - lastCommand // how long since the last call to executeCommandLine
        if(deltaTime <= commandDelay) Thread::sleep(commandDelay - deltaTime)

        val results = executeCommandLine("433-send " + controller + " " + device + " " + command, 5000)
        lastCommand = now.millis
        logDebug("433", results)
    } catch(Exception e) {
        logError("433", "Error handling 433MHz command: " + e)
    } finally {
        lock.unlock
    }
end

Theory of Operation: A proxy Item is created for each device and the name of the device includes the controller ID and the device ID. All of these proxy Items are members of the WirelessDevices Group. A Rule gets triggered by any one of these proxy Items receiving a command. We cannot use the Group to trigger the Rule because there is no clear way to manage the fact that the Rule gets triggered multiple times per command. In the rule we use the lastUpdate hack to identify the Item that triggered the Rule and then parse the controller and device IDs out of the Item's name. Thus, adding a new device only requires adding a new Item and adding that Item as a trigger to this Rule. We then wait to acquire the lock, check to see if we need to sleep or not, and execute the command line using the values parsed out of the Item name and the received command.

Complex Example using Queues

As documented elsewhere, locks can be dangerous to use. So we should make every effort to keep the locked portion of code as fast and error free as possible. A call to executeCommandLine does not meet that criterion. This example shows how to create a queue of commands that get worked off in a separate Timer thread, which makes better use of the Rule's threads.

Items: Same as Simple Example.

Rules: This example differs significantly from the two above. At a high level, …
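The queue-based rules themselves are not reproduced above, so purely as an illustration of the idea (a rough sketch in plain Java with java.util.concurrent, not openHAB Rules DSL), a gate keeper that serializes commands and enforces a minimum gap between them could look like this:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of a gate keeper: callers enqueue commands from any thread,
// a single worker drains the queue and enforces a minimum gap between sends.
public class GateKeeper {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final long minGapMillis;

    public GateKeeper(long minGapMillis) {
        this.minGapMillis = minGapMillis;
        Thread worker = new Thread(this::drain, "gatekeeper");
        worker.setDaemon(true);
        worker.start();
    }

    public void submit(String command) {
        queue.add(command);              // never blocks the caller, no shared lock needed
    }

    private void drain() {
        try {
            while (true) {
                String command = queue.take();   // blocks until work arrives
                send(command);
                Thread.sleep(minGapMillis);      // enforce the gap before the next command
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void send(String command) {
        // stand-in for executeCommandLine("433-send ...")
        System.out.println("sending: " + command);
    }
}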
https://community.openhab.org/t/design-pattern-gate-keeper/36483
read subprocess stdout line by line

It's been a long time since I last worked with Python, but I think the problem is with the statement for line in proc.stdout, which reads the entire input before iterating over it. The solution is to use readline() instead:

#filters output
import subprocess
proc = subprocess.Popen(['python','fake_utility.py'],stdout=subprocess.PIPE)
while True:
  line = proc.stdout.readline()
  if not line:
    break
  #the real code does filtering here
  print "test:", line.rstrip()

Of course you still have to deal with the subprocess' buffering. Note: according to the documentation the solution with an iterator should be equivalent to using readline(), except for the read-ahead buffer, but (or exactly because of this) the proposed change did produce different results for me (Python 2.5 on Windows XP).

Bit late to the party, but was surprised not to see what I think is the simplest solution here:

import io
import subprocess

proc = subprocess.Popen(["prog", "arg"], stdout=subprocess.PIPE)
for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):  # or another encoding
    ...  # do something with line

(This requires Python 3.)

Indeed, if you sorted out the iterator then buffering could now be your problem. You could tell the python in the sub-process not to buffer its output.

proc = subprocess.Popen(['python','fake_utility.py'],stdout=subprocess.PIPE)

becomes

proc = subprocess.Popen(['python','-u', 'fake_utility.py'],stdout=subprocess.PIPE)

I have needed this when calling python from within python.
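One more option worth noting (an addition of mine, not from the original answers): on Python 3 you can ask Popen for text mode and line buffering directly, and combine that with -u so the child flushes each line. fake_utility.py here is the same hypothetical child script used above:

import subprocess

proc = subprocess.Popen(
    ["python", "-u", "fake_utility.py"],  # -u: ask the child not to block-buffer its output
    stdout=subprocess.PIPE,
    bufsize=1,                 # line-buffered; only meaningful in text mode
    universal_newlines=True,   # text mode; on Python 3.7+ you can write text=True instead
)

for line in proc.stdout:       # yields lines as the child produces them
    print("test:", line.rstrip())

proc.wait()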
https://codehunter.cc/a/python/read-subprocess-stdout-line-by-line
This section gives an overview of my shellcode. Most shellcode is completely coded by hand by me (I use the free nasm assembler), but some shellcode has also been generated with the help of a C-compiler. I worked out a method to generate WIN32 shellcode with a C-compiler. By using special constructs in C and avoiding incompatible constructs, the C-compiler will emit position-independent code for the C functions designed to be converted to shellcode. The shellcode is extracted from the compiled EXE-file when the program is run. Not only is it easier and faster to code shellcode with C in stead of assembly language; this method makes it also possible to debug shellcode with Visual C++2008 Express’ integrated debugger. I’m currently writing a tutorial for this method. The shellcodes presented here do not use hardcoded WIN32 API function addresses, they use the PEB method to dynamically lookup the addresses (code published in The Shellcoder’s Handbook, you can find it in the include file sc-api-functions.asm). Note that shellcodes available for download here are not restricted in the byte-values they may use. Most of them will contain 0x00-bytes. If this is an issue, I’ll provide you with a couple of decoders I developed to exclude specific byte-values. ShellCode With a C-Compiler I wrote an article in Hakin9 magazine how to write shellcode with a C-compiler. Download: ShellCodeLibLoader_v0_0_1.zip (https) MD5: F6D4779097A8A11C412BDD47B7B1C8AE SHA256: 3294A4322926476562AF34A80B8155638EFEEF38E401E69D6DB9BBB652C3EB58 The DLL-loading shellcode I used in my cmd.xls spreadsheet was generated with my C-Compiler method. You can download Joachim’s code, converted to shellcode with this method, here: Download: ShellCodeMemoryModule_V0_0_0_1.zip (https) MD5: CEABB3A8A9A4A507BA19C52EE2CC5DA9 SHA256: 284344C909E623B0406BB38A67F5A7A1AEE2473721244EED52CCEBB8846B0500 The shellcode is in file ShellCodeMemoryModule.exe.bin (it contains the shellcode with an appended DLL that displays a MessageBox). Finally, after extensive testing of this shellcode, I disassembled it with ndisasm and optimized it for size (2297 bytes in stead of 2634 bytes). But this step is only necessary if you want assembly code for your shellcode. This assembly code will be released when I’m done tweaking it ;-). The shellcode: Another requested file (sc-winexec.asm) was added to my-shellcode_v0_0_3.zip: shellcode to launch calc.exe via a WinExec call. After that, the shellcode will exit with a call to ExitThread. If you want this shellcode to launch another program than calc.exe, edit the last line of the assembly code to replace calc.exe with the desired program: COMMAND: db "calc.exe", 0 2 other requested files (sc-ping.asm and sc-ping-computername-username.asm) were added to my-shellcode_v0_0_3.zip: shellcode to perform a ping. First one does a ping with a static payload, second one has dynamic payload (computername + username). Shellcode to send a Twitter Update was added to my-shellcode_v0_0_4.zip. Before using/assembling the shellcode, you need to provide Twitter credentials and the text for the status update (url-encoded). ; Customize the following 3 TWITTER_ values according to your needs ; Notice that your Tweet has to be URL encoded! 
; USER_AGENT is another value you might want to customize %define TWITTER_CREDENTIAL_NAME "user" %define TWITTER_CREDENTIAL_PASSWORD "password" %define TWITTER_TWEET_URL_ENCODED "This+is+a+Tweet+from+shellcode" Shellcode to load a .NET assembly in the current process was added to my-shellcode_v0_0_5.zip. Before using/assembling the shellcode, you need to provide your own assembly, class, method and a parameter. ; Customize the following 4 DOTNET_ values according to your needs %define DOTNET_ASSEMBLY_VALUE "C:\HelloWorldClass.dll" %define DOTNET_CLASS_VALUE "DidierStevens.HelloWorld" %define DOTNET_METHOD_VALUE "HelloWorldMessageBox" %define DOTNET_ARGUMENT_VALUE "Call from shellcode sc-dotNET" Example of a C# class: using System; using System.Windows.Forms; namespace DidierStevens { public class HelloWorld { public static Int32 HelloWorldMessageBox(String message) { MessageBox.Show(message, "Hello World from .NET"); return 1; } } } x64 Shellcode I’ve also started to write x64 shellcode, like this example. Look for filenames starting with sc-x64 in the zip file (my-shellcode_v….zip). Download: my-shellcode_v0_0_7.zip (https) MD5: E3D7866D59506696C3CEDE97FA742997 SHA256: C575FC6128ED65F83C19B2E5E6AC5554B8C1D27F27EA16E5CDC147927AD2AF76 interresting thx mubix ;) Comment by kermass — Sunday 14 February 2010 @ 14:11 Any chance you can write some shellcode to do a DNS lookup on a given domain? Comment by Ron — Sunday 14 February 2010 @ 18:24 @Ron do you have an example of what you want to do, for example in C? Comment by Didier Stevens — Sunday 14 February 2010 @ 18:53 Sure, I’m just thinking something like: gethostbyname(“xxx”) The reason is, I have the authoritative server for my domain, so I’d use that for detecting a vulnerability (by looking up my own domain and seeing the request come). Comment by Ron — Sunday 14 February 2010 @ 19:14 @Ron I’m not sure I understand why you need shellcode. You know that you can do this with nslookup, with a simple C program (or even in VBA). So why do you want shellcode? Shellcode doesn’t usually interact with the user, and here, it would have to ask you for a hostname and a DNS server, and then display the results. Comment by Didier Stevens — Sunday 14 February 2010 @ 19:54 I think I was writing that too soon after waking up. :) My thought is this: I see a vulnerable service on a remote box (say, NTP). I want to verify that it’s vulnerable without worrying about it having an ingress/egress firewall. So, I throw some shellcode with a domain name hardcoded into it that simply does gethostbyname(“xxx”), and I watch my dns server to see if the request gets made. Now that I think more, it’d probably be easier to just use an exec-style shellcode to run “ping xxx” instead of having special shellcode to do it. Does that make sense? Comment by Ron — Sunday 14 February 2010 @ 20:12 OK, now I understand. Yes, a ping would work to, but requires your shellcode to spawn a new process (the ping program). You could just use that ubiquitous shellcode that downloads a file with URLDownloadToFileA and then executes it with WinExec. You don’t need to execute the file, just download it from your website and monitor your logs. You can even use an empty file, but do host a file, otherwise, if you don’t host the file, URLDownloadToFileA will take long to execute (it will wait to timeout). Comment by Didier Stevens — Sunday 14 February 2010 @ 20:46 The problem with URLDownloadToFileA, initially, is that it’ll typically be stopped by an egress firewall. 
That being said, I think you’re on to something — I can use the URLDownloadToFileA shellcode with my domain as the URL, but return NXDOMAIN when it tries to download the code. The download will fail, and it’ll never attempt a HTTP connection anywhere, but I will be alerted that it attempted to do so. Comment by Ron — Sunday 14 February 2010 @ 20:50 @Ron Yep, even with an egress firewall, it must perform a DNS lookup first, which you can catch. Unless the server has only access to an internal DNS that doesn’t forward queries to the outside world. Comment by Didier Stevens — Sunday 14 February 2010 @ 20:55 [...] MemoryLoadLibrary: From C Program to Shellcode Filed under: Hacking, My Software, Shellcode — Didier Stevens @ 0:40 The DLL-loading shellcode I used in my cmd.xls spreadsheet was generated with a method I worked out to generate WIN32 shellcode with a C-compiler. You can find it on my new Shellcode page. [...] Pingback by MemoryLoadLibrary: From C Program to Shellcode « Didier Stevens — Tuesday 16 February 2010 @ 0:41 [...] Didier Stevens [...] Pingback by Exploit writing tutorial part 9 : Introduction to Win32 shellcoding | Peter Van Eeckhoutte's Blog — Thursday 25 February 2010 @ 16:24 [...] template can be found here. Leave a [...] Pingback by Writing WIN32 Shellcode With a C-compiler « Didier Stevens — Tuesday 4 May 2010 @ 10:17 [...] Didier Stevens [...] Pingback by [0x0027]Exploit writing tutorial part 9 : Introduction to Win32 shellcoding « Eohnik.c — Sunday 5 September 2010 @ 12:28 [...] This shellcode uses the library sc-api-functions.asm you can find in my shellcode repository. [...] Pingback by simple-shellcode-generator.py « Didier Stevens — Friday 23 September 2011 @ 9:04 [...] can get the code from my shellcode page. Look for filenames starting with sc-x64 in the zip file. Like this:LikeBe the first to like this [...] Pingback by x64 Windows Shellcode « Didier Stevens — Thursday 2 February 2012 @ 20:00 [...] can find this shellcode on my shellcode page. Like this:LikeBe the first to like this post. Leave a [...] Pingback by ExitProcess Shellcode « Didier Stevens — Monday 14 May 2012 @ 0:19
http://blog.didierstevens.com/programs/shellcode/
I can't find any info on this but when I try and create a zip archive in Python it creates a .pyc instead.

#!/Python27/python
import zipfile

z = zipfile.ZipFile('test.zip', 'w')
z.write('README.txt')
z.close()

The script you're running (or some other script you have) is actually called zipfile.py, and so Python is actually first looking in the folder your script is in to find a module called zipfile. When it finds this script, it imports that instead of the actual module. Any time a script is imported to another Python file, Python automatically creates a compiled .pyc file, resulting in that zipfile.pyc. If you rename your file to something more specific (and also fix your typo) you should be able to avoid this problem.
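A quick way to confirm that this kind of name shadowing is what's happening (a standard-library check, not part of the original answer) is to print where the imported module actually lives:

import zipfile

# If this prints a path inside your own project instead of the Python
# standard library, a local zipfile.py is shadowing the real module.
print(zipfile.__file__)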
https://codedump.io/share/uYxSdiGEsdUq/1/zipfile-module-creating-pyc-file
There are a couple of issues with OpenSSL’s BIO_*printf() functions, defined in crypto/bio/b_print.c, that are set to be fixed in the forthcoming security release. The function that is primarily responsible for interpreting the format string and transforming this string and the functions arguments to a string is _dopr(). _dopr() scans the format string in an incremental fashion and employs doapr_outch() for each character it wants to output. doapr_outchr() doapr_outch()’s first two arguments are a double pointer to a statically allocated buffer (char** sbuffer) and a pointer to a char pointer (char **buffer) whose value will be set to a memory region dynamically allocated by doapr_outch(). The first argument, the static buffer, should always be valid. Its size is pointed to by the third argument to doapr_outch(), size_t* currlen. 700 static void 701 doapr_outch(char **sbuffer, 702 char **buffer, size_t *currlen, size_t *maxlen, int c) 703 { 704 /* If we haven't at least one buffer, someone has doe a big booboo */ 705 assert(*sbuffer != NULL || buffer != NULL); 706 707 /* |currlen| must always be <= |*maxlen| */ 708 assert(*currlen <= *maxlen); 709 710 if (buffer && *currlen == *maxlen) { 711 *maxlen += 1024; 712 if (*buffer == NULL) {; 723 } else { 724 *buffer = OPENSSL_realloc(*buffer, *maxlen); 725 if (!*buffer) { 726 /* Panic! Can't really do anything sensible. Just return */ 727 return; 728 } 729 } 730 } 731 732 if (*currlen < *maxlen) { 733 if (*sbuffer) 734 (*sbuffer)[(*currlen)++] = (char)c; 735 else 736 (*buffer)[(*currlen)++] = (char)c; 737 } 738 739 return; 740 } The idea here is that doapr_outch() will incrementally fill the statically allocated buffer sbuffer until its maximum capacity has been reached; whether this is the case is asserted by the if on line 732, a byte will be appended to *sbuffer on line 734: 732 if (*currlen < *maxlen) { 733 if (*sbuffer) 734 (*sbuffer)[(*currlen)++] = (char)c; Once sbuffer is full (at which point *currlen is equal to *maxlen) and the calling functions allows the dynamic allocation of memory (buffer is non-zero), then this condition evaluates as true: 710 if (buffer && *currlen == *maxlen) { From this point on, an allocation takes place every 1024 bytes. Once a single successful heap allocation takes place, *sbuffer is zeroed:; The corollary of sbuffer being zero for the remainder of the BIO_printf() invocation is that from now on, bytes will be appended to the heap-based *buffer rather than the stack-based *sbuffer: 732 if (*currlen < *maxlen) { 733 if (*sbuffer) 734 (*sbuffer)[(*currlen)++] = (char)c; 735 else 736 (*buffer)[(*currlen)++] = (char)c; 737 } Differences between BIO_printf/BIO_vprintf and BIO_snprintf/BIO_vsnprintf The functions BIO_printf() and BIO_vprintf() allow doapr_outch() to dynamically allocate memory by supplying a valid pointer to a char pointer. 744 int BIO_printf(BIO *bio, const char *format, ...) 745 { 746 va_list args; 747 int ret; 748 749 va_start(args, format); 750 751 ret = BIO_vprintf(bio, format, args); 752 753 va_end(args); 754 return (ret); 755 } 756 757 int BIO_vprintf(BIO *bio, const char *format, va_list args) 758 { 759 int ret; 760 size_t retlen; 761 char hugebuf[1024 * 2]; /* Was previously 10k, which is unreasonable 762 * in small-stack environments, like threads 763 * or DOS programs. 
*/ 764 char *hugebufp = hugebuf; 765 size_t hugebufsize = sizeof(hugebuf); 766 char *dynbuf = NULL; 767 int ignored; 768 769 dynbuf = NULL; 770 CRYPTO_push_info("doapr()"); 771 _dopr(&hugebufp, &dynbuf, &hugebufsize, &retlen, &ignored, format, args); 772 if (dynbuf) { 773 ret = BIO_write(bio, dynbuf, (int)retlen); 774 OPENSSL_free(dynbuf); 775 } else { 776 ret = BIO_write(bio, hugebuf, (int)retlen); 777 } 778 CRYPTO_pop_info(); 779 return (ret); 780 } BIO_vprintf() supplies both a statically allocated buffer (hugebuf), its size is encoded in hugebufsize, and it also supplies a pointer to a char pointer (dynbuf). The same applies to BIO_printf() through its use of BIO_vprintf(). By contrast, the other two *printf functions, BIO_vsnprintf() and BIO_snprintf() only use a statically allocated buffer, which is to be supplied by the caller: 788 int BIO_snprintf(char *buf, size_t n, const char *format, ...) 789 { 790 va_list args; 791 int ret; 792 793 va_start(args, format); 794 795 ret = BIO_vsnprintf(buf, n, format, args); 796 797 va_end(args); 798 return (ret); 799 } 800 801 int BIO_vsnprintf(char *buf, size_t n, const char *format, va_list args) 802 { 803 size_t retlen; 804 int truncated; 805 806 _dopr(&buf, NULL, &n, &retlen, &truncated, format, args); 807 808 if (truncated) 809 /* 810 * In case of truncation, return -1 like traditional snprintf. 811 * (Current drafts for ISO/IEC 9899 say snprintf should return the 812 * number of characters that would have been written, had the buffer 813 * been large enough.) 814 */ 815 return -1; 816 else 817 return (retlen <= INT_MAX) ? (int)retlen : -1; 818 } The vulnerability One of the problems with the doapr_outch() function is that it cannot signal failure to allocate memory to its caller, because it is a void-returning function: 713 *buffer = OPENSSL_malloc(*maxlen); 714 if (!*buffer) { 715 /* Panic! Can't really do anything sensible. Just return */ 716 return; 717 } 724 *buffer = OPENSSL_realloc(*buffer, *maxlen); 725 if (!*buffer) { 726 /* Panic! Can't really do anything sensible. Just return */ 727 return; This lack of error signaling means that _dopr() will continue to call doapr_outch() as long as there are characters left to output. Moreover, maxlen is incremented before the allocation. This means that even if the allocation fails, maxlen still represents the size of the heap memory which it would be if the allocation had succeeded: 711 *maxlen += 1024; 712 if (*buffer == NULL) { 713 *buffer = OPENSSL_malloc(*maxlen); 714 if (!*buffer) { 715 /* Panic! Can't really do anything sensible. Just return */ 716 return; 717 } Thus, upon the first call to doapr_outch() after the failed allocation, the following condition evaluates as false: 710 if (buffer && *currlen == *maxlen) { The failed allocation caused *buffer (the value) to be zeroed, but buffer (the pointer) is still valid. However, *currlen does no longer equate *maxlen, because *maxlen has just been incremented by 1024 in the previous call. Failing to evaluate this condition as true, the entire middle part of the function is skipped, and the following code is evaluated: 732 if (*currlen < *maxlen) { 733 if (*sbuffer) 734 (*sbuffer)[(*currlen)++] = (char)c; 735 else 736 (*buffer)[(*currlen)++] = (char)c; 737 } *currlen is now indeed *maxlen, and *sbuffer is zero (if at least one valid OPENSSL_malloc() call is succesfull, *sbuffer is zeroed, as noted earlier). 
Thus this code is executed: 736 (*buffer)[(*currlen)++] = (char)c; *buffer is zero, and *currlen might be anything, depending on at which point in the process an allocation failed. Thus, effectively *currlen is used as a pointer to write data to. *currlen is a 32-bit integer, so when used as a pointer it is bound to point to a byte within the first 4 gigabytes of the virtual address space. On a 64-bit system, it is unlikely that a write to this region will not cause a page fault. However, in a 32-bit memory layout, the odds are in the attacker’s favor, especially if they have some way of causing memory attrition within the germane system. It might seem far-fetched that an attacker might have the agency to cause an allocation to fail at a very precise moment, namely when *currlen, if used as a pointer, is pointing to a memory region that they want to overwrite. However, how much memory there is left to allocate within a system is not merely constituted by OpenSSL’s (or the application that uses it) use of the heap; any other application running concurrently with OpenSSL whose resource consumption might be influenced by the attacker (such as other public-facing networking services running on a server) is susceptible to being complicit in heap corruption occurring in doapr_outch(). Even if precise memory corruption through memory attrition, that could lead to code execution, is in practice too difficult for the attacker, there’s still the possibility that important data within the program’s heap is corrupted, whose consequences could be nearly as disastrous. Heap vandalism, basically. And even if you discount the presence of malice, then genuine, temporary shortages of heap memory could lead to random heap corruption. An alternative approach to triggering the vulnerability Moreover, an interesting, sure-fire way to cause a OPENSSL_realloc() failure exists. OPENSSL_realloc() is really just a macro for CRYPTO_realloc(): 375 void *CRYPTO_realloc(void *str, int num, const char *file, int line) 376 { 377 void *ret = NULL; 378 379 if (str == NULL) 380 return CRYPTO_malloc(num, file, line); 381 382 if (num <= 0) 383 return NULL; num is a signed, 32-bit integer. If it is zero or negative, NULL is returned. Because in doapr_outch() *maxlen is incremented by 1024 for each allocation: 711 *maxlen += 1024; it will eventually become a negative value. The subsequent OPENSSL_realloc() will then inevitably fail, because CRYPTO_realloc() refuses to do allocations of a negative size. In other words, by supplying a very large string to BIO_printf() (basically one where the result of the combination of the format string and the arguments exceeds 1 << 31 bytes minus the size of the stack-based buffer), the vulnerability is guaranteed to trigger. Probably another way than using the “%s” format with a very large string is to exploit the padding mechanisms present in the helper functions fmtstr(), fmtint(), fmptp(). Affected software I’ve been able to confirm that PHP’s openssl_pkcs7_encrypt is vulnerable to this attack through its internal use of BIO_printf, if an attacker is able to supply a very large $headers parameter. Apache httpd also uses BIO_printf: but I haven’t yet checked to what extent it might be exploitable. A number of other high-profile applications are also using BIO_printf(): 2 thoughts on “OpenSSL CVE-2016-0799: heap corruption via BIO_printf” The code-snippets are unfortunately unreadable because of html-escaping. Could you fix them? Thanks! Great work!!
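To see the difference between the bounded and unbounded variants in isolation, here is a minimal caller for the truncation path quoted above. This is my own sketch, not code from the advisory; it assumes OpenSSL headers are installed and the program is linked with -lcrypto, and it does not attempt the greater-than-2 GiB overflow trigger described earlier:

#include <stdio.h>
#include <openssl/bio.h>

int main(void)
{
    char small[8];

    /* BIO_snprintf() only ever writes into the caller's buffer; per the
       quoted source it returns -1 when the formatted output is truncated. */
    int ret = BIO_snprintf(small, sizeof(small), "%s", "this string does not fit");
    printf("BIO_snprintf returned %d\n", ret);   /* expect -1 here */

    return 0;
}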
https://guidovranken.com/2016/02/27/openssl-cve-2016-0799-heap-corruption-via-bio_printf/
I really didn't find any answer that close… the opposite way is pretty simple like str[0]. But I need to cast only 1 char to string… like this:

char c = 34;
string(1,c); //this doesn't work, the string is always empty.
string s(c); //also doesn't work.
boost::lexical_cast<string>((int)c); //also return null

All of

string s(1, c);
std::cout << s << std::endl;

and

std::cout << string(1, c) << std::endl;

and

string s;
s.push_back(c);
std::cout << s << std::endl;

worked for me. I honestly thought that the casting method would work fine. Since it doesn't you can try stringstream. An example is below:

#include <sstream>
#include <string>

stringstream ss;
string target;
char mychar = 'a';
ss << mychar;
ss >> target;
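For completeness (these are additions of mine, not from the answers above), two more standard-library ways that avoid remembering the (count, char) constructor:

#include <iostream>
#include <string>

int main()
{
    char c = 'a';

    std::string from_ptr(&c, 1);   // (const char*, count) constructor
    std::string from_braces{c};    // brace-init builds a one-element string

    std::cout << from_ptr << " " << from_braces << std::endl;
    return 0;
}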
https://exceptionshub.com/c-convert-from-1-char-to-string-closed.html
Creating and Running The First C Program

Creating a new project

To create and run programs in Code Blocks you first have to create a project. So what is a project? In simplest terms, you can think of a project as a collection of different source files. A small project can also have a single source file. To create a new program we have to first create a project.

1) Go to File > New > Project. A wizard will be presented as shown in the following screenshot. Select Console application and click Go.

2) A Console Application wizard will be presented. Click on the Next button.

3) In the next window of the Console Application wizard select the language which you want to use in the project. Select C and click Next.

4) In the next window enter the project title as "First App" and choose a path to save the "First App" project. Click on the Next button to continue.

5) This window allows you to select the compiler for the project. Select GNU GCC Compiler and keep the other settings at their defaults. Click Finish. A new project will be created for you along with some default code.

Once the project is created, Code Blocks IDE will look something like this:

Double click on the Sources folder to view the files under it. Take a look at the Management Window that has been populated with the newly created project files. As you can see, at this time the project contains only one file, main.c. Double click to open main.c in the editor window. Let's replace the default code with the following code.

#include <stdio.h>

int main()
{
    printf("My First App");
    return 0;
}

Note: Do not copy and paste programs, just type them; it will be more beneficial for you. We will discuss in detail how the program works in later chapters.

Save the program by pressing Ctrl + S or hitting the save icon in the toolbar. Compile the program by selecting Build > Build from the menu bar or by hitting Ctrl + F9. If compilation succeeds, you will see some messages on the Build Log tab of the Logs Window. Notice the last line of the log which says "0 error(s), 0 warning(s)". It simply means that the program compiled successfully without any errors or warnings.

Run the program by selecting Build > Run from the menu bar or by hitting Ctrl + F10. When you run the program, you will see a window like this:

To close this window press any key on the keyboard.

Tip: You can also press F9 or Build > Build and Run to compile and run the program in one step.

Help me! I got an error while compiling

Compilation errors or compile time errors occur when you have made a mistake while typing the program. These typing mistakes are known as syntax errors. Just like the English language has grammatical rules, computer languages have syntax rules. In other words, the syntax dictates how a language should be written. For example, one such rule is: Every statement in C must end with a semi-colon ( ; ).

The compiler reports syntax errors in situations such as:

- Ending a statement without a semicolon ( ; ).
- Mistyped keyword.
- There is an opening brace ( { ) without a closing brace ( } ).
- Trying to use an undeclared variable.
- etc...

So make sure you have typed the code as it is, with no typos or misspellings. When a syntax error is encountered by the compiler while compiling the program, it reports a syntax error message. This message contains the line number at which the error is found and a description of the error.

The compiler can detect problems at two levels: warning and error.

Warning: It simply means you are doing something wrong.
Although it is syntactically valid, it may cause problems in the future. Code Blocks displays warning messages in blue color. Warnings do not halt the compilation process.

Errors: An error is a fatal flaw in the program. Errors halt the compilation of the program. To compile the program you must first resolve all errors (syntax errors). Code Blocks displays errors in red color.

When a syntax error is encountered, Code Blocks displays a wealth of information in the Build message tab. For example: Suppose that, by mistake, you have left out the semicolon at the end of line 5.

#include <stdio.h>

int main()
{
    printf("My First App")
    return 0;
}

Had you compiled this program you would have gotten the following errors. As you can see in the logs, the compiler is reporting an error about a missing semicolon in line 6.

Although the error messages provided by the compiler are no doubt useful, they may or may not be very accurate. For this reason, the error reported by the compiler may not reflect the original cause of the problem. For example: In the above program, the compiler is reporting an error at line 6, but we know that the actual problem is in line 5, due to the missing semicolon ( ; ).

So the whole point of this discussion is that when the compiler reports a syntax error, don't take the compiler's message at face value; to find the actual error, look around a few lines above or below where the error was actually reported.

The errors in your program should be resolved by now, if not comment below and we will try to solve it together.
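To make the warning/error distinction concrete, here is a small example of my own (not from the tutorial). With warnings enabled (for example GCC's -Wall, which Code Blocks can turn on in the compiler settings), the unused variable typically produces a warning but the program still builds and runs; deleting the marked semicolon instead produces an error and the build stops:

#include <stdio.h>

int main()
{
    int unused = 42;        /* compiles, but -Wall reports an "unused variable" warning */

    printf("warning demo"); /* delete this semicolon to see a hard error instead */
    return 0;
}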
https://overiq.com/c-programming/101/creating-and-running-the-first-c-program/
CC-MAIN-2018-09
refinedweb
889
74.29
Bubble warning Jan 7th 2010 From The Economist print edition ." - The Economist, January 7, 2010 (in come the waves, again?) 32 comments: The other day I was reading a hyper-bullish forecast for 2010. Ironically it came from the very Russian investment bank whose near-death experience made the stock market seize on September 16th 2008 (TBTF, it was bailed out by state-backed sponsors with the agility of presidential bodyguards and, to this day, still posts losses). Now we are told to just discard that nasty 2008 as a statistical outlier and extrapolate the good times of mid-2000s into 2010s. "Growth" is the main theme again. Worrisomely, zero-percent "growth" will not be tolerated by the regulators - it's expand, inflate, boom for the sake of great numbers. The Soviet central planners would praise today's neoliberals for setting GDP targets. Guess we won't see the final bottom until the religion of "growth" is discredited. I'm now building a "shortfolio" of basically the same domestic stocks I rescued a year ago at the bottom. If this bet blows up, I'll take out a low-interest "education loan" and train myself for a realtor, for there's no better way to contribute to "growth"! (just kidding on that one) The rubber band stretches, until it breaks. People going short in this market had better buy an extra large bottle of antacid pills because there is no limit to the schemes they will try to inflate asset prices. When it does eventually pop, all of us will remember fondly the relatively calm, simple crisis of October 2008. Keith, are you holding on to AAPL? Aside from the big picture that we’re in eye of the storm. I have a hard time understanding the viability of a company that produces commodity type items (no matter how innovative and trendy). It is so easy to copy and produce the same functionality for a lot cheaper, on top of the already existing competition. In today’s world any country/company can produce the same items Apple sells; for less. Just asking. Get out of stocks and gold 4 sure.Commodities are looking frothy too.Cash is king.Short treasuries and the nasdaq.FXP for china bubble burst. Remember, bubbles last longer, much longer, than you think they will. I'm bullish until I have a reason not to be. The stimulus cheese keeps pouring in. 0% interest rates for months and months to go. Corporate earnings comps will be strong this quarter. China numbers today were big. But read the Economist article. I agree with their thesis. We're in a bubble. An unsustainable bubble. Eventually, the government cheese ends. 0% ends. Cash-back on housing ends. And that's when you head to the shelter. Again. Agree with Keith, that we probably have another 4 or 5 months to go in the stock "rally." The government will do anything legal or illegal to keep the markets going (like buying S&P options, etc.). Although I will not exit the market (I have a lot of dividend stocks that I live on), I am trimming my holdings and increasing cash, selling those stocks like some energy that are getting too frothy. I will pare down to my strong utilities, and some pharma's, and a few infastructure companies. I am still seeing deflation in prices where I sit - rentals are still being advertised at "reduced rent", restaurants, stores, etc. are all having sales all the time, and the word on the street is "negotiate everything!" A little old lady in my building was telling me, she never pays full price for anything anymore. . .if a carpet cleaner gives here a price, she call 5 other places to get the price she wants. 
Just read an item this morning on Yahoo News that people who are getting new jobs are taking huge pay cuts, as employers have scores of people lined up for each job. Can we say "Japan?" "Today the prices of many assets are being held up by unsustainable fiscal and monetary stimulus. Something has to give." That' depends on the type of "flation" we get. In case central banks succeed with money printing the prices are very sustainable. We'll just get some really nasty inflation to go along with it. The rising tide lifts all boats. If central banks fail to get more money into circulation then those asset bubbles will suffer another spectacular bust. In either case, I believe the day of reckoning is still more than 6 month off, maybe even years. The current commodity prices are a reflection of lose bailout money and many investors betting that we will see inflation. Wow! I stopped reading them awhile ago but just yesterday I went back to their site and was drawn to that very article you dredged up! I guess you read it too, huh Keith? This reminds me Keith, there was a story they did sometime in the bubble years, like in the year from 2003 -2005 on the bubble and at the time it was not "premium" content, now it is. It was a rare time when this magazine actually looked forward and saw the ugly events that this bubble unfolding would yield. This story you posted like so many of their stories, are kind of "after the fact" analysis. Anyway it would be very informative to see that story from so long ago posted again without paying their exorbitant fees. Hats off to you Keith if you can dredge it up! Another thing that would be hugely appropriate that you may appreciate is a picture the Economist had on their cover, of a big fat over bloated bird running down a runway flapping his wings trying to take off but he couldn't. On his belly was labeled "debt". This was to signify the American economy at the time which was just after the tech wreck in the early 2000's. JDF Keefer said: “I'm bullish until I have a reason not to be.” Also Known As, gambling. 10 horses racing around in a circle and I know exactly which one is going to reach the finish line first. I am confident cause I figured it out last time. Mucho luck monsieur. 1)One can time things perfectly so many times until that one time when they don’t. 2)It usually does not happen when everyone knows its coming, most of the time only the people who prepare when things look good, beat the crash. You are absolutely right that things move slower then many think, but there is so much junk information out there that its clouding reality. Me thinks its better to pull out early even though there is room for a few more pennies. Someone who cashed out of tech stocks in early 1999 and someone who cashed out of real estate in early 2005 Generally did better then those who couldn’t walk away from the potential cash thats still on the table. Cash will be king! It hath been foretold That's why S&A is such a crucial website. Is there an RSS feed for your site Kieth? I've noticed a rather amusing trend of thought control lately. 2 MSNBC articles; one about how people are healthier when they don't(can't) retire; another saying there is more to life than material success. Those would have been a better hoot in '05, no? The latest NYTimes about how "...is housing a good investment?” he said. “In fact, it probably never was.”..." Message for the masses: 1)roll over and take it like a bitch 2)sell your real estate at the coming bottom. It is mainly a timing issue. 
Where would you put your money for a little return (5%ish?) without risk? Does that still exist? i see dozens of housing units on sale for 15 thousand bucks ....now how to control associative, .coop and maintenance costs all politically influenced? Obaaaaaammmy, gimmmmme chheeeeeeese!! . and this is shocking how? . Keith, Your starting to think like the Bankers, Greed! I thought you really wanted this country to bounce back but based on your current post, I can tell your all about the money! Your not any different! in the words of elizabeth warren, bring back the usury laws. stop lending money to stupid poor people, via credit cards, subprime loans and such. not gonna happen. the banks make too much money off of poor stupid people who spend more than they make. "Eventually, the government cheese ends. 0% ends. Cash-back on housing ends." Didn't you hear Keith? Fanny and Freddie have a license to gorge; they are the baddest of the "bad banks." There's a sucker born every minute, so eager buyers should be flooding the country as we speak. We can stimulate forever! 3:58 anon - must be a newbie. I'm a realist, trying to help folks here navigate the biggest financial mania and crash of all time. It's survival. Pure and simple. stupid question - don't you all think the latest bubble will be supported at least through the elections in November this year? Don't you think the powers that be won't allow a blood bath? Europe is toast I see residential units for sale at prices less than 2 years average rent... wondering said... stupid question - don't you all think the latest bubble will be supported at least through the elections in November this year? Don't you think the powers that be won't allow a blood bath? ================================== My financial advisor (who has not lost me a stinking dime in 9 years, even allowing for inflation,) thinks so. He has shifted a major chunk (but not all) of my portfolio out of stogy old conservative holdings and into the market, indirectly, thru funds with a proven track record for making money on the upside AND the downside of the market. Apparently, for now, he feels that in volatility there is strength. Like him, and Wondering, I don't see the stock market tanking until after Nov. The trick will be getting that money OUT of the market before anybody else does, and well before it all starts to slide. But hey, its all just paper, right! "Europe is toast".....well I thought that a year ago and gravity is still turned off over there. One wonders how long Europe can appear to "look good" relative to America when I hear from Europeans whom I work with that "back home" things are hanging by a thread. Yet, the MSM (both here and there) is making noises about a "recovery" this year. I guess when confronted with an 800 lb gorilla in the room, there are those who would close their eyes and by doing so, try and convince themselves (and others) the gorilla is gone. "Blind faith" or just "Blind"? How can anybody think that you can cure a stage 4 cancer with leukemia ? Print fake money, play monopoly and pretend it is all fair. It is all a Ponzi scheme, and sooner or later it will collapse as never before. No such thing as money as out thin air. "Europe is toast." Yes but it's a fancy little slice of Melba toast with with some sort of animal liver Pate on top. Wafer thin!! America's more like the whale the suddenly materializes at 50,000 feet in the book and movie "The Hitchhiker's Guide to the Galaxy". wondering said... 
stupid question - don't you all think the latest bubble will be supported at least through the elections in November this year? Don't you think the powers that be won't allow a blood bath? That is an EXCELLENT question. One thing we know for sure: all Extend And Pretend® is doing is letting the pressure build and the damage mount. This is going to end Biblical. Queef, You may be wondering why no one is questioning your elevating of The Economist as the most credible news source in the world. Well; Wonder. Anon 5:17 said... "I see residential units for sale at prices less than 2 years average rent..." I don't want to live in Detroit or the "Appalachia" of any state thank you very much. 29,000 blocks from the resorts..do it for you...at least they produce food in appalachia..... We still gots bubble prices in DC Metro. And, as a matter of fact, much to my chagrin, despite our utterly depressed private economy, rental rates seem a bit higher now than they were at the peak of the bubble. I guess despite there being a depression and the continued building of more and more units, condos and townhomes, the tard parasitic government workers, armed with their taxpayer funded COLAs continue to pay exorbitant prices to simply put a roof over their pea-brained heads. "...tard parasitic government workers, armed with their taxpayer funded COLAs... Isn't it appalling? I had to laugh when I saw some apologist write "...the one bright spot is increased government employment..."... AS IF that were a good thing/net improvement for society.
http://sootandashes.blogspot.com/2010/01/economist-most-credible-news-source-in.html
CC-MAIN-2017-47
refinedweb
2,207
74.29
fun f space =
    let
        (* declare problem variables *)
    in
        (* post constraints *)
        (* specify branching strategy *)
    end

The procedure declares the variables needed, posts the constraints modeling the problem, and specifies the branching strategy. The argument space stands for the computational architecture mentioned in 2.3. The result type t is called the interface and is used to specify a solution of the problem. Often, the result type is just a record of the problem variables.

There are two ways to obtain the solutions of a CSP. One way is by off-line search using the functions from the Search library structure. The other and easiest way is to interactively explore the induced search tree by using the Explorer. The Explorer must first be imported:

import structure Explorer from "x-alice:/lib/tools/Explorer"

If you want to get all solutions of the problem, you use:

Explorer.exploreAll (script)

and the script will be run until the entire search tree is explored. With

Explorer.exploreOne (script)

you obtain a tree with just one solution. Now, by double-clicking on a solution node (green diamond) of the search tree, a window will open, called the Inspector. The Inspector shows the variable assignments of this solution. If you double-click on an inner node, the Inspector will display the information available at the respective point of the search tree.

Andreas Rossberg 2006-08-28
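Putting the pieces together, a script file might be laid out as follows. This is only a structural sketch: the let body is left abstract, and the empty record standing in for the interface is a placeholder for a record of your problem variables.

import structure Explorer from "x-alice:/lib/tools/Explorer"

(* a script following the skeleton above *)
fun script space =
    let
        (* declare problem variables here *)
    in
        (* post constraints and specify the branching strategy here *)
        {}  (* interface: usually a record of the problem variables *)
    end

(* interactively explore the full search tree *)
val _ = Explorer.exploreAll (script)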
http://www.ps.uni-saarland.de/alice/manual/cptutorial/node23.html
CC-MAIN-2018-47
refinedweb
231
55.95
01 April 2008 17:26 [Source: ICIS news] By Nigel Davis US ethylene producers seek to lighten the feedstock slate in the face of $100/bbl plus crude. The pressure is on liquids cracking margins as demand for C3s and C4s continues apace. “The world for liquid cracking is [very] challenging,” Shell Chemicals LP president and CEO Stacy Methvin said on the sidelines of the 33rd National Petrochemical & Refiners Association (NPRA) meeting here this week. Her view was echoed by other producers. Liquids cracking is disadvantaged now, LyondellBasell’s chemicals division president Ed Dineen told ICIS news in an interview. But co-products are holding up and prices - of propylene and butadiene - are moving higher. Downstream demand has been hit by the On the feedstock front, ethane clearly has become more advantaged as oil-based feedstock prices have skyrocketed, but Dineen expects the WTI crude-to-ethane relationship to come more back into balance. In a high-cost oil environment producers just have to be more creative. But the extent to which they can be depends on many things, among them integration and technology and, ultimately, strategy. Shell adopted its heavy feed strategy some years ago and has sought to use more of the heavier end of the barrel in its crackers. It may not be advantaged in Dineen is encouraged by the now global cracker capability of LyondellBasell. He thinks the new company can create more value in aromatics and cracker co-products. Gaining competitive advantage in petrochemicals rests on many things but technology and engineering ultimately underpin the business. Over the past four years ExxonMobil Chemical has qualified 300 new cracker feedstocks, Sherman Glass, senior vice-president for basic chemicals, intermediates and synthetics, told the conference. Between 50 and 55 were qualified in the past year, he said later, with about 30 run through the group’s steam crackers. These include heavy feeds with a high sulphur and naphthenic acid content. ExxonMobil is relentless in its drive for greater cracking efficiencies and flexibility. Technology and integration lie at the core of this strategy. And a molecule optimisation group at each site helps decide which feeds will be run through the cracker or the refinery. So much has been said about petrochemical feedstock advantage in the The latest wave of cracker investment will hit the global business over the next few years but there is considerable uncertainty surrounding projects originally planned for 2012 and beyond. Those uncertainties over feedstock availability - and cost - are prompting global petrochemicals players to look elsewhere. LyondellBasell is planning a methanol-to-olefins (MTO)-fed polyolefins complex in Trinidad & Tobago, for example. It is progressing plans to build a gas-fed cracker in Feedstock availability and choice will continue to drive the business as technologies are applied to tap into new feedstock sources and locations. The oil sands - holding the world’s second largest oil reserve after Saudi Arabia - are a long way north, but oil extraction, should the oil price remain high, would produce vast quantities of bitumen with energy and chemicals feedstock potential. To be cost-effective, producers in The search for feedstocks in a higher cost environment is, hardly surprisingly, continuing apace. On the large, and the small, scale the opportunities can be significant. Listen to Ivan Lerner's radio interview with NOVA's Grant Thomson on Alberta's oil sands
http://www.icis.com/Articles/2008/04/01/9112750/insight-tapping-future-feedstock-opportunities.html
CC-MAIN-2014-49
refinedweb
561
51.89
The QPluginLoader class loads a plugin at run-time. More...

#include <QPluginLoader>

Inherits QObject.

Note: All the application terminates. You can attempt to unload a plugin using unload(), but if other instances of QPluginLoader are using the same library, the call will fail, and unloading will only happen when every instance has called unload().

See also QLibrary and Plug & Paint Example.

This property holds the file name of the plugin.

Access functions:

See also load().

Constructs a plugin loader with the given parent.

Constructs a plugin loader with the given parent that will load the plugin specified by fileName.

Warning: The root component of a plugin, returned by the instance() function, becomes invalid once the plugin is unloaded. Delete the root component before unloading the plugin. Attempting to access members of invalid root components will in most cases result in a segmentation fault.

See also instance() and load().
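The function listings did not survive in this extract, so as a rough illustration only (the plugin file name and path below are invented, and a real application would qobject_cast the root component to its own interface), typical usage looks something like this:

#include <QCoreApplication>
#include <QPluginLoader>
#include <QObject>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // The file name below is made up; point it at a real plugin binary.
    QPluginLoader loader("plugins/libechoplugin.so");

    // instance() loads the plugin on demand and returns its root component,
    // or 0 if loading failed.
    QObject *root = loader.instance();
    if (!root) {
        qDebug() << "Could not load" << loader.fileName();
        return 1;
    }

    // In a real application you would qobject_cast 'root' to the interface
    // your program defines before calling into the plugin.
    qDebug() << "Loaded plugin from" << loader.fileName();
    return 0;
}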
http://doc.trolltech.com/4.3/qpluginloader.html
crawl-002
refinedweb
148
60.01
POPEN(3)                 Linux Programmer's Manual                 POPEN(3)

popen, pclose - pipe stream to or from a process

#include <stdio.h>

FILE *popen(const char *command, const char *type);

int pclose(FILE *stream);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

popen(), pclose(): _POSIX_C_SOURCE >= 2 || /* Glibc versions <= 2.19: */

┌───────────────────┬───────────────┬─────────┐
│ Interface         │ Attribute     │ Value   │
├───────────────────┼───────────────┼─────────┤
│ popen(), pclose() │ Thread safety │ MT-Safe │
└───────────────────┴───────────────┴─────────┘

POSIX.1-2001, POSIX.1-2008.

The 'e' value for type is a Linux extension.

Note: carefully read Caveats in system(3).

sh(1), fork(2), pipe(2), wait4(2), fclose(3), fflush(3), fopen(3), stdio(3), system(3)

This page is part of release 5.08 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.

GNU                              2017-09-15                         POPEN(3)

Pages that refer to this page: gawk(1), __pmprocesspipe(3), __pmProcessPipe(3), __pmProcessPipeClose(3)
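Since the DESCRIPTION and EXAMPLES sections of the page did not survive in this extract, here is a small usage sketch (mine, not the manual's): it runs a command for reading, copies the command's output to stdout, and then reaps the child with pclose().

#include <stdio.h>

int main(void)
{
    char line[256];

    /* Run a shell command with its standard output piped back to us. */
    FILE *fp = popen("ls -l", "r");
    if (fp == NULL) {
        perror("popen");
        return 1;
    }

    /* Read the command's output line by line and echo it. */
    while (fgets(line, sizeof line, fp) != NULL)
        fputs(line, stdout);

    /* pclose() waits for the command to finish and returns its status. */
    if (pclose(fp) == -1) {
        perror("pclose");
        return 1;
    }
    return 0;
}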
https://man7.org/linux/man-pages/man3/pclose.3.html
CC-MAIN-2020-40
refinedweb
148
75.2
plone.caching 1.1.2 Zope 2 integration for z3c.caching Table of Contents - Introduction - Usage - Declaring cache rules for a view - Mapping cache rules to operations - Setting options for caching operations - Writing caching operations - 1.1.2 (2016-09-16) - 1.1.1 (2016-08-12) - 1.1.0 (2016-05-18) - 1.0.1 (2015-03-21) - 1.0 - 2011-05-13 - 1.0b2 - 2011-02-10 - 1.0b1 - 2010-08-04 - 1.0a1 - 2010-04-22 Introduction The plone.caching package provides a framework for the management of cache headers, built atop z3c.caching. It consists of the following elements: - An interface ICachingOperation, describing components which: - Modify the response for caching purposes. The most common operation will be to set cache headers. - Intercept a request before view rendering (but after traversal and authorisation) to provide a cached response. The most common operation will be to set a “304 Not Modified” response header and return an empty response, although it is also possible to provide a full response body. Caching operations are named multi-adapters on the published object (e.g. a view) and the request. An interfaces ICachingOperationType which is used for utilities describing caching operations. This is mainly for UI purposes, although this package does not provide any UI of its own. Hooks into the Zope 2 ZPublisher (installed provided ZPublisher is available) which will execute caching operations as appropriate. Helper functions for looking up configuration options caching operations in a registry managed by plone.registry An operation called plone.caching.operations.chain, which can be used to chain together multiple operations. It will look up the option plone.caching.operations.chain.${rulename}.operations in the registry, expecting a list of strings indicating the names of operations to execute. (${rulename} refers to the name of the caching rule set in use - more on this later). Usage To use plone.caching, you must first install it into your build and load its configuration. If you are using Plone, you can do that by installing plone.app.caching. Otherwise, depend on plone.caching in your own package’s setup.py: install_requires = [ ... 'plone.caching', ] Then load the package’s configuration from your own package’s configure.zcml: <include package="plone.caching" /> Next, you must ensure that the the cache settings records are installed in the registry. (plone.caching uses plone.registry to store various settings, and provides helpers for caching operations to do the same.) To use the registry, you must register a (usually local) utility providing plone.registry.interfaces.IRegistry. If you are using Plone, installing plone.app.registry will do this for you. Otherwise, configure one manually using the zope.component API. In tests, you can do the following: from zope.component import provideAdapter from plone.registry.interfaces import IRegistry from plone.registry import Registry provideAdapter(Registry(), IRegistry) Next, you must add the plone.caching settings to the registry. If you use plone.app.caching, it will do this for you. Otherwise, you can register them like so: from zope.component import getUtility from plone.registry.interfaces import IRegistry from plone.caching.interfaces import ICacheSettings registry = getUtility(IRegistry) registry.registerInterface(ICacheSettings) Finally, you must turn on the caching engine, by setting the registry value plone.caching.interfaces.ICacheSettings.enabled to True. 
If you are using Plone and have installed plone.app.caching, you can do this from the caching control panel. In code, you can do: registry['plone.caching.interfaces.ICacheSettings.enabled'] = True Declaring cache rules for a view The entry point for caching is a cache rule set. A rule set is simply a name given to a collection of publishable resources, such as views, for caching purposes. Take a look at z3c.caching for details, but a simple example may look like this: <configure xmlns="" xmlns: <cache:ruleset <browser:page </configure> Here, the view implemented by the class FrontpageView is associated with the rule set plone.contentTypes. NOTE: Ruleset names should be dotted names. That is, they should consist only of upper or lowercase letters, digits, underscores and/or periods (dots). The idea is that this forms a namespace similar to namespaces created by packages and modules in Python. Elsewhere (or in the same file) the plone.contentTypes ruleset should be declared with a title and description. This is can be used by a UI such as that provided by plone.app.caching. If “explicit” mode is set in z3c.caching, this is required. By default it is optional: <cache:rulesetType Hints: - Try to re-use existing rule sets rather than invent your own. - Rule sets inherit according to the same rules as those that apply to adapters. Thus, you can register a generic rule set for a generic interface or base class, and then override it for a more specific class or interface. - If you need to modify rule sets declared by packages not under your control, you can use an overrides.zcml file for your project. Mapping cache rules to operations plone.caching maintains a mapping of rule sets to caching operations in the registry. This mapping is stored in a dictionary of dotted name string keys to dotted name string values, under the record plone.caching.interfaces.ICacheSettings.operationMapping. To set the name of the operation to use for the plone.contentTypes rule shown above, a mapping like the following might be used: from zope.component import getUtility from plone.registry.interfaces import IRegistry from plone.caching.interfaces import ICacheSettings registry = getUtility(IRegistry) settings = registry.forInterface(ICacheSettings) if settings.operationMapping is None: # initialise if not set already settings.operationMapping = {} settings.operationMapping['plone.contentTypes'] = 'my.package.operation' Here, my.package.operation is the name of a caching operation. We will see an example of using one shortly. If you want to use several operations, you can chain them together using the plone.caching.operations.chain operation: settings.operationMapping['plone.contentTypes'] = 'plone.caching.operations.chain' registry['plone.caching.operations.chain.plone.contentTypes.operations'] = \ ['my.package.operation1', 'my.package.operation2'] The last line here is setting the operations option for the chain operation, in a way that is specific to the plone.contentTypes rule set. More on the configuration syntax shortly. If you need to list all operations for UI purposes, you can look up the registered instances of the ICachingOperationType utility: from zope.component import getUtilitiesFor from plone.caching.interfaces import ICachingOperationType for name, type_ in getUtilitiesFor(ICachingOperationType): ... The ICachingOperationType utility provides properties like title and description to help build a user interface around caching operations. plone.app.caching provides just such an interface. 
Setting options for caching operations plone.caching does not strictly enforce how caching operations configure themselves, if at all. However, it provides helper functionality to encourage a pattern based on settings stored in plone.registry. We have already seen this pattern in use for the chain operation above. Let’s now take a closer look. The chain operation is implemented by the class plone.caching.operations.Chain. The ICachingOperationType utility named plone.caching.operations.chain provides two attributes in addition to the title and description attributes mentioned above: - prefix - A dotted name prefix used for all registry keys. This key must be unique. By convention, it is the name of the caching operation - options - A tuple of option names Taken together, these attributes describe the configurable options (if any) of the caching operation. By default, the two are concatenated, so that if you have an operation called my.package.operation, the prefix is the same string, and the options are ('option1', 'option2'), two registry keys will be used: my.package.operation.option1 and my.package.operation.option2. (The type of those records and their value will obviously depend on how the registry is configured. Typically, the installation routine for a given operation will create them with sensible defaults). If you need to change these settings on a per-cache-rule basis, you can do so by inserting the cache rule name between the prefix and the option name. For example, for the cache rule my.rule, the rule-specific version of option1 would be my.package.operation.my.rule.option1. In this case, you probably want to use a field reference (FieldRef) for the “override” record that references the field of the “base” record. See the plone.registry documentation for details. Finally, note that it is generally safe to use caching operations if their registry keys are not installed. That is, they should fall back on sensible defaults and not crash. Writing caching operations Now that we have seen how to configure cache rules and operations, let’s look at how we can write our own caching operations Caching operations consist of two components: - A named multi-adapter implementing the operation itself - A named utility providing metadata about the operation Typically, both of these are implemented within a single class, although this is not a requirement. Typically, the operation will also look up options in accordance with the configuration methodology outlines above. Here is an example of an operation that sets a fixed max-age cache control header. It is registered for any published resource, and for any HTTP request (but not other types of request.): MaxAge(object): # Type metadata title = _(u"Max age") description = _(u"Sets a fixed max age value") prefix = 'plone.caching.tests.maxage' options = ('maxAge',) def __init__(self, published, request): self.published = published self.request = request def interceptResponse(self, rulename, response): return None def modifyResponse(self, rulename, response): options = lookupOptions(MaxAge, rulename) maxAge = options['maxAge'] or 3600 response.setHeader('Cache-Control', 'max-age=%s, must-revalidate' % maxAge) There are two methods here: - interceptResponse() is called before Zope attempts to render the published object. If this returns None, publication continues as normal. If it returns a string, the request is intercepted and the cached response is returned. 
- modifyResponse() is called after Zope has rendered the response (in a late stage of the transformation chain set up by plone.transformchain). This should not return a value, but can modify the response passed in. It should not modify the response body (in fact, doing so will have on effect), but may set headers. Note the use of the lookupOptions() helper method. You can pass this either an ICachingOperationType instance, or the name of one (in which case it will be looked up from the utility registry), as well as the current rule name. It will return a dictionary of all the options listed (only maxAge in this case), taking rule set overrides into account. The options are guaranteed to be there, but will fall back on a default of None if not set. To register this component in ZCML, we would do: <adapter factory=".maxage.MaxAge" name="plone.caching.tests.maxage" /> <utility component=".maxage.MaxAge" name="plone.caching.tests.maxage" /> Note that by using component instead of factory in the <utility /> declaration, we register the class object itself as the utility. The attributes are provided as class variables for that reason - setting them in __init__(), for example, would not work. What about the interceptResponse() method? Here is a simple example that sends a 304 not modified response always. (This is probably not very useful, but it serves as an example.): Always304(object): # Type metadata title = _(u"Always send 304") description = _(u"It's not modified, dammit!") prefix = 'plone.caching.tests.always304' options = ('temporarilyDisable',) def __init__(self, published, request): self.published = published self.request = request def interceptResponse(self, rulename, response): options = lookupOptions(self.__class__, rulename) if options['temporarilyDisable']: return None response.setStatus(304) return u"" def modifyResponse(self, rulename, response): pass Here, we return None to indicate that the request should not be intercepted if the temporarilyDisable option is set to True. Otherwise, we modify the response and return a response body. The return value must be a unicode string. In this case, an empty string will suffice. The ZCML registration would look like this: <adapter factory=".always.Always304" name="plone.caching.tests.always304" /> <utility component=".always.Always304" name="plone.caching.tests.always304" /> 1.1.2 (2016-09-16) Bug fixes: - Cleanup: isort, readability, pep8, utf8-headers. [jensens] 1.1.1 (2016-08-12) Bug fixes: - Use zope.interface decorator. [gforcada] 1.1.0 (2016-05-18) Fixes: - Use plone i18n domain. [klinger] 1.0.1 (2015-03-21) - Fix ruleset registry test isolation so that is no longer order dependent. [jone] 1.0 - 2011-05-13 - Release 1.0 Final. [esteele] - Add MANIFEST.in. [WouterVH] 1.0b2 - 2011-02-10 - Updated tests to reflect operation parameter overrides can now use plone.registry FieldRefs. Requires plone.registry >= 1.0b3. [optilude] - Removed monkey patches unneeded since Zope 2.12.4. [optilude] 1.0b1 - 2010-08-04 - Preparing release to coincide with plone.app.caching 1.0b1 [optilude] 1.0a1 - 2010-04-22 - Initial release [optilude, newbery] - Author: Plone Foundation - Keywords: plone http caching - License: GPL - Categories - Package Index Owner: esteele, newbery, davisagli, optilude, hannosch, timo, plone - DOAP record: plone.caching-1.1.2.xml
https://pypi.python.org/pypi/plone.caching/
CC-MAIN-2016-40
refinedweb
2,157
51.65
. Styled Components Styled Components is a well-known example for CSS-in-JS approach. Instead of following traditional pattern with classnames to compose stylesheets, you can create encapsulated styles and even engage them with props of the components. This pattern also allows us to have reusable and separated UI from the other stateful/stateless React components. Styled components simply act like a wrapper component by being mapped to HTML tags to style itself and child elements. It empowers you to write regular, vanilla CSS inside a JS file which is also reinforced and leveraged with tagged template literals. Basic usage: import styled from 'styled-components'; export const Wrapper = styled.div` border: 1px solid #ddd; padding: 10px; // Any nested selector is allowed here h1 { color: #000; } `; import React from 'react'; import { Wrapper } from './Wrapper; export default function Header() { return( <Wrapper> <h1>This is a simple styled component</h1> </Wrapper> ); } Adapting Props This is an example of the magic of CSS-in-JS approach. You can easily pass a function or any prop to a Styled component’s template literals to adapt it based on its props. const Button = styled.button` background: ${props => props.primary ? 'red' : 'white'}; color: ${props => props.primary ? 'white' : 'red'}; font-size: 1.6rem; border: 1px solid #ddd; border-radius: 4px; padding: 0.5em 1em; `; render( <div> <Button>Default</Button> <Button primary>Primary</Button> </div> ); CSS Modules CSS modules, regardless of React, have already made a significant contribution to the front-end community by introducing us to an isolated local scope and bringing a more modular approach to the good old CSS. It also worked well with React, thinking of every component gets its own stylesheet file with its own styling. Since this is not a direct CSS-in-JS approach like Styled Components, you will simply need to write CSS (can be also SASS) in separate stylesheet files. For the basic usage in React, all you need to do is referring to the desired selector within className attribute. Basic usage: .flex { display: flex; align-items: center; justify-content: center; } import styles from './styles.scss'; const Sample = () => ( <section className={styles.flex}> <p>This is a simple component styled with CSS Modules</p> </section> ); export default Sample; Extending/Inheritance As all class selectors are local by default, extending rulesets and communicating with other modules plays a relatively more significant role in CSS modules. Our old-pre-processor friend 'extending' is named as 'composition' here. To share rule-sets between the modules, extend or overwrite, you simply need to use 'composes' property in the stylesheet. .customTable { composes: table from "./table.css"; } Inline Even though React itself encourages to use inline styling it comes with a bunch of disadvantages which makes it one of the least preferable option to style React components due to its restrictive usage. Here are the biggest concerns for traditional inline styling for React, unless you don't use any additional library or come up with workaround solutions; Duplication. You might need to recompose same style code once again and use repetitive rules for another component. This flow itself looks like anti-pattern when it comes to reflecting the main principle of React which is described as 'reusable components'. No pseudo-classes: It's not possible to use one of the most fundamental features of CSS such as pseudo-classes (:hover, :active, :focus etc.) 
No vendor prefixes: You can't use vendor prefixes and override a rule on the same selector No media queries: Using media queries is not possible here, which is a pretty important issue in terms of RWD. To sum up, this might not be the best approach for using in large-scale and UI-heavy applications as you may end up with a mess and limited range of motion. However, there are several libraries such as Radium, React style, React JSS which provide solutions and workarounds to deal with the deficiency of inline styling. Basic usage: const wrapperStyle = { width: '50px', height: '50px', border: '1px solid #ddd', }; const textStyle = { fontSize: '1rem', color: '#000', }; const Sample = () => ( <div style={wrapperStyle}> <p style={textStyle}>This is a sample for inline styling in React</p> </div> ); export default Sample; Approaches and libraries to style React are not limited only to these mentioned above. There are also other projects such as state-driven styling approach fela, glamorous, styletron and few more.
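One common workaround for the duplication concern above (a hedged sketch of my own, with made-up component names, rather than something from the article) is to compose plain style objects with the object spread syntax:

const baseButton = {
  padding: '0.5em 1em',
  borderRadius: '4px',
  border: '1px solid #ddd',
};

// Reuse the shared rules instead of retyping them in every component.
const primaryButton = {
  ...baseButton,
  background: 'red',
  color: 'white',
};

const PrimaryButton = ({ children }) => (
  <button style={primaryButton}>{children}</button>
);

export default PrimaryButton;

This keeps shared rules in one place, though it still does nothing for pseudo-classes, vendor prefixes or media queries.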
https://www.visuality.pl/posts/styling-react-components
CC-MAIN-2020-16
refinedweb
721
51.48
En Fri, 19 Mar 2010 14:09:09 -0300, Peter Peyman Puk <peter_peyman_puk at yahoo.ca> escribió: > I am running a simulator written in python. The simulator has a small > TextView (actually a SourceView) widget which lets the user writes > scripts, and when they are satisfied they can execute that script to get > results. For arguments sake, we write a simple script and save it as > A.py and we import it and execute it more or less like so. > > import A > > #assume there is a function called test() in module A > A.test() > > > Then the user modifies the contents of A.py and saves it again (to A.py) > now all we have to do is the following > > if 'A' in dir(): > reload(A) > else: > import A > > A.test() Terry Reedy already gave you an answer for this import problem. I'd like to go one step back and question whether using import/modules is the right thing here. Clearly A.py is not a module but a script (that's the word you used) - and one does not *import* a script but *executes* it. That is, instead of using import/__import__, use exec (or execfile) within a known namespace: import __builtin__ ns = {'__builtins__': __builtin__.__dict__} exec contents_of_textview in ns # any change made to ns is visible here -- Gabriel Genellina
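A minimal sketch of that idea (Python 2, as in the thread; the function name and the convention of calling a test() function are illustrative, not from the original message):

import __builtin__

def run_user_script(source_text):
    """Re-execute the editor buffer in a fresh namespace each time."""
    ns = {'__builtins__': __builtin__.__dict__}
    exec source_text in ns      # the script's definitions end up in ns
    if 'test' in ns:            # call the user's test() if the script defined one
        ns['test']()
    return ns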
https://mail.python.org/pipermail/python-list/2010-March/571707.html
CC-MAIN-2016-40
refinedweb
221
73.47
homer. writer of two Epic Poems – the Iliad and the Odyssey Long – each 24 books of average 500 lines each (12,000 lines) narrative (tells a story) – not about feelings emotions hymns etc about lofty characters – heroes and gods. a single writer? Traditionally Unity of poetic wedding of Peleus and Thetis (parents of Achilles – divine origins – of course) Eris (discord) not invited but tosses in an apple “for the fairest” Hera, Athena, and Aphrodite each claim it. Zeus asked to decide, but pikes out and picks a poor mortal to do it Paris prince of Troy chooses Aphrodite (best bribe – he can have the most beautiful woman in the world Helen (daughter of Zeus and Leda) Paris steals back to Troy with Helen Husband Menelaus of Sparta get Br Agamemnon of Mycenae, and all the Achaeans to retrieve her. 10 yr siege with no side winning and the gods interfering to protect their favourites at various points Great heroes on both sides Gk – Achilles, Ajax, Odysseus; Trojan – Hector Trojan horse (an idea of Odysseus) used to break the siege Many stories of the difficulties the heroes had after the fall of Troy Most famous Odysseus (told in the Odyssey) Circe the witch, Cyclops, Lotus eaters, wife’s suitors, Reunited with his son Telemachus. Aeneas - a Trojan – son of Aphrodite - escapes with his father and son Eventually founds Rome – Caesar traced descent from him (told in Virgil’s Aeneid) Schliemann – great discoveries, but over the top claims the war starts Archaeological work shows there were such cities, and warfare between them. Focuses on a short episode in the final year of the war, not the whole story The rage of Achilles at being insulted by Agamemnon, then hector Agamemnon takes Achilles war-prise so he leaves the fighting Things go bad for Gks so Agamemnon asks him to return (in vane). Bk9 His friend Patroclus fights in his armour to restore Gk confidence BK16 Patroclus killed by Hector and Achilles finally returns to battle to avenge dead friend BK20 He kills Hector, and defiles the body Priam king of Troy ransoms the body of his son Hector from Achilles and the Iliad ends with the funeral of Hector. Bk24 What motivates heroic behaviour Individual’s duty to society Cooperation verses competition and conflict Place of fate in men’s live Interaction of the gods (God) in human affairs. How best should mortals live their lives esp in the face of imminent death not exactly heroic – but shows the hopnotic effect of rhyme and rythmn in story telling. “You insatiable creature, quite shameless. How can any Achaean obey you willingly— [150] join a raiding party or keep fighting.. That’s how godlike Ajax chopped down Simoeisius, son of Anthemion. “Father Zeus, aren’t you incensed at this barbarity? We gods are always suffering dreadfully at each other’s hands, when we bring men help. We all lay the blame for this on you. 1000 ….” Scowling at him, cloud-gatherer Zeus replied: “You hypocrite, don’t sit there whining at me. Among the gods who live on Mount Olympus, you’re the one I hate the most. For you love war, 1020 constant strife and battle. Your mother, Hera, has an implacable, unyielding spirit.. [470] The child’s loving father laughed, his noble mother, too. Glorious Hector pulled the glittering helmet off 580 and set it on the ground. Then he kissed his dear son and held him in his arms. He prayed aloud to Zeus and the rest of the immortals. 
So let the same container hold our bones, 110 that gold two-handled jar your mother gave you.” Swift-footed Achilles then said in reply: “Dear friend, why have you come to me here, telling me everything I need to do? I’ll carry out all these things for you, attend to your request. But come closer. Let’s hold each other one short moment more, enjoying a shared lament together.” Saying this, Achilles reached out with his arms, [100] but he grasped nothing. The spirit had departed, 120 going underground like vapour, muttering faintly. Achilles jumped up in amazement, clapped his hands, and then spoke out in sorrow: “How sad! It seems that even in Hades’ house, some spirit or ghost remains, but our being is not there at all. The ghost spoke to Achilles, saying: “You’re asleep, Achilles. 80 You’ve forgotten me. While I was alive, [70] you never did neglect me. But now I’m dead. So bury me as quickly as you can. Then I can pass through the gates of Hades. The spirits, ghosts of the dead, keep me away. They don’t let me join them past the river. So I wander aimlessly round Hades’ home by its wide gates. Give me your hand, I beg you, for I’ll never come again from Hades, once you’ve given me what’s due, my funeral fire. 90 We’ll no more sit together making plans, separated from our dear companions. The jaws of dreadful Fate are gaping for me, ready to consume me—my destiny from the day that I was born. You, too, godlike Achilles, you have your own fate, [80] to die under the walls of wealthy Troy. I’ll say one more thing, one last request, if you will listen. Achilles, don’t lay your bones apart from mine. Let them remain together, 100 .”
http://www.slideserve.com/reegan/homer
CC-MAIN-2017-43
refinedweb
904
78.69
11 February 2013 22:03 [Source: ICIS news] HOUSTON (ICIS)--US low density polyethylene (LDPE) exports for December 2012 increased by 5.2% from the same month in 2011, according to data available from the US International Trade Commission (ITC) on Monday. Exports of LDPE rose to 58,592 tonnes in December 2012 from 55,691 tonnes in 2011. Canada, Mexico and Brazil purchased the most LDPE from the US, with Canada taking 26% of exports for a total of 15,506 tonnes. Exports decreased to Canada and Mexico, but increased for Brazil. For the full 2012 year, exports of LDPE increased by 8.8% from the same period in 2011, rising to 685,274 tonnes from 629,985 tonnes in 2011. US high density polyethylene (HDPE) exports for December decreased by 0.5% from the same month in 2011, falling to 163,449 tonnes from 164,208 tonnes in December 2011. The top three destinations for US material were Mexico, Canada and Brazil. Mexico boosted its imports of US material to 58,304 tonnes from 48,998 tonnes in December 2011, while Canada imported 22,725 tonnes, down from 24,740 tonnes in December 2011. Brazil saw a slight increase in US imports to 10,187 tonnes from 9,389 tonnes in 2011. For the full 2012 year, exports of HDPE increased by 2.7% to 1,743,760 tonnes from 1,697,237 tonnes in 2011. US linear low density polyethylene (LLDPE) exports fell by 9.5% year on year in December, falling to 53,438 tonnes in 2012 from 59,026 tonnes in December 2011, according to the ITC. Mexico, Canada and Singapore were the top destinations for US material, together accounting for 60% of all US exports. For the full 2012 year, exports of LLDPE increased by 7.2% to 627,710 tonnes from 585,597 tonnes in
http://www.icis.com/Articles/2013/02/11/9639944/us-dec-ldpe-exports-rose-by-5.2-year-on-year.html
CC-MAIN-2015-14
refinedweb
314
73.27
pyromus 0

Posted February 4, 2015 (edited)

Dear Friends

I have been trying to get data from a grid. I will use this data on my website. I can read the handle of the program and I can find the handle of the subwindow, but I can't get data from the subwindow. Can you give me an idea of how I can get the data from the grid? The code I have so far is below.

#include <GUIConstantsEx.au3>
#include <GuiListView.au3>
#include <GuiMenu.au3>
#include <Array.au3>
#include <WinAPI.au3>
#include <WindowsConstants.au3>
#include <Crypt.au3>
#include <File.au3>
#include <WinAPIEx.au3>

Func winhandl($title = "")
    ; Retrieve a list of window handles.
    Local $aList = WinList();
    ; Loop through the array displaying only visible windows with a title.
    For $i = 1 To $aList[0][0]
        If StringInStr($aList[$i][0],$title) Then
            return ($aList[$i][1]);
        EndIf
    Next
EndFunc ;==>Example

$h = winhandl("Rithmic Trader");
ConsoleWrite("h "&$h & @CRLF);
$pen = ControlGetHandle($h,"","[NAME:PRI_oGrid]");
ConsoleWrite("pen "&$pen & @CRLF);
$text = ControlGetText($pen,"","[NAME:PRI_oGrid]")
ConsoleWrite("output "&$text & @CRLF);

I have to get all the data from the rows and columns. I need help.

picture of window
window info

Edited February 4, 2015 by pyromus
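For reference, one hedged sketch of a possible approach, assuming the grid answers to standard ListView messages (a custom .NET grid such as PRI_oGrid may not, in which case this will return nothing) and reusing the GuiListView UDF already included above:

#include <GuiListView.au3>

; Walk every row and column of a ListView-style control and build a string.
; $hGrid is assumed to be the control handle found with ControlGetHandle().
Func ReadGrid($hGrid)
    Local $sOut = ""
    Local $iRows = _GUICtrlListView_GetItemCount($hGrid)
    Local $iCols = _GUICtrlListView_GetColumnCount($hGrid)
    For $r = 0 To $iRows - 1
        For $c = 0 To $iCols - 1
            $sOut &= _GUICtrlListView_GetItemText($hGrid, $r, $c) & @TAB
        Next
        $sOut &= @CRLF
    Next
    Return $sOut
EndFunc

; $pen is the grid handle obtained in the snippet above.
ConsoleWrite(ReadGrid($pen) & @CRLF)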
https://www.autoitscript.com/forum/topic/167245-reading-grid-from-program/
CC-MAIN-2018-51
refinedweb
199
62.34
On 6/10/07, Josiah Carlson <jcarlson at uci.edu> wrote: > > "Eyal Lotem" <eyal.lotem at gmail.com> wrote: > > > > I believe that it is possible to add a useful feature to dicts, that > > will enable some interesting performance features in the future. > > > > Dict lookups are in general supposed to be O(1) and they are indeed very fast. > > However, there are several drawbacks: > > A. O(1) is not always achieved, collisions may still occur > >. Ofcourse, though it is an interesting anecdote because it won't screw the lookups in the solution I'm describing. >). > You should note that Python's dictionary implementation has been tuned > to work *quite well* for the object attribute/namespace case, and I > would be quite surprised if anyone managed to improve upon Raymond's > work (without writing platform-specific assembly). Ofcourse - the idea is not to improve dict's performance with the normal way it is accessed, but to change the way it is accessed for the specific use-case of accessing static values in a static dict - which can be faster than even a fast dict lookup. The dict lookups in globals, builtins are all looking for literal static keys in a literal static dict. In this specific case, it is better to outdo the existing dict performance, by adding a special way to access such static keys in dicts - which insignificantly slows down access to the dict, but significantly speeds up this very common use pattern. Attribute lookups in the class dict are all literal/static key lookups in a static dict (though in order for a code object to know that it is a static dict, a psyco-like system is required. If such a system is used, all of those dict lookups can be made faster as well).
https://mail.python.org/pipermail/python-ideas/2007-June/000891.html
CC-MAIN-2016-40
refinedweb
297
68.1
Hi everyone, this is my first tutorial on dev.to, I'd love to share with you an awesome frontend development framework which I've been learning; Hyperapp JS. We're going to be building a movie shopping cart single page app to learn how to use some basic features of hyperapp, which include virtual-DOM rendering, routing and application state management. Here's the github repo in case you would love to go straight to the code, and here is the Live Demo hosted on github pages. Go ahead and play with the buttons.

Requirements

There isn't much required to follow/complete this tutorial but a basic knowledge of hyperapp (the quick documentation has a really simple example for this), and it would also help to know the basics of Javascript's ES6 syntax.

npm packages we will be installing

Hyperapp: a micro-framework for building modern web applications, it combines state management with a virtual DOM engine that supports keyed updates & lifecycle events - all with no dependencies.

Hyperapp-router: the official routing package for hyperapp.

Hyperapp-transitions: smooth animations for our components.

Bulma: flex-box based framework for css styling.

Getting started

If you read through hyperapp basics, I suppose you already know about hyperapp state, actions and view concepts, but if you're still a bit confused; I got you buddy.

state

state is basically just data associated with the views/components of a webapp or any other software. Any time you like a post on twitter and the love-shaped icon becomes red, we can say you have changed its state from white to red; other examples include data from an API presented in a list/grid of css cards, or a user's username or profile picture used across many components.

actions

As said earlier, hyperapp provides us the means to update or change the state data throughout all components in our app using actions. In hyperapp, actions are functions a developer can create to do that. Hyperapp supports asynchronous actions and actions that produce side effects.

views

the view function automatically updates the virtual-DOM when there are changes in state, based on how you want it to look, and renders our components.

setting up our project

We won't be covering setting up rollup or webpack & babel configs in this tutorial, it's quicker and easier to use this hyperapp-rollup-babel-hyperapp-router boilerplate. It contains the dependencies/rollup module bundling configurations we need to develop/ship our app. Let's run the following commands in a terminal to clone the repo, navigate to the folder, then install all project dependencies from package.json and also add the bulma packages for styling.

git clone
cd hyperapp-one
npm install
npm install bulma bulma-slider bulma-switch bulma-badge

Run the command below and visit localhost:8080 in your browser to view our app.

npm start

Our boilerplate comes with live reload so the browser automatically refreshes our app to reflect any saves/changes we make in our code.

Folder structure

We're not going to cover all the file/folder explanations in our project in this tutorial (I'm too lazy, no not now!). But it's always good to explain the main folders/files which we will be using frequently in our project.

/src folders:

Inside the main folder /src you will find the following folders:

/state with a state.js file.
/actions with an actions.js file.
/views/containers with lazy/container components files.
/components with regular components files.
/config folder for any helper functions/files we want. It comes empty in this boilerplate.
/src files: index.js to serve as entry file for our module bundler. routes.js files for our routes and view function. (We're very close to knowing in detail what every file does in a moment). Faking our Data. Navigate to src/config folder and create a file data.js which will contain fake top-rated movie data, you can copy the data from here and paste it in the data.js file. In styles folder in the same location as config folder create another sass file with name variables.sass to customize bulma and declare our bulma packages. Edit the app.sass file and add the following sass imports for our bulma packages: @import "variables"; @import "node_modules/bulma/bulma.sass"; @import "node_modules/bulma-badge/src/sass/index.sass"; @import "node_modules/bulma-switch/src/sass/index.sass"; @import "node_modules/bulma-slider/src/sass/index.sass"; In variables.sass copy and paste the following styling/variables from here, you can customize it if you want, but to get the dark theme of our app etc we need to use this. Our rollup config includes a packages that compiles sass in our project. About JSX We will be writing our components using JSX. JSX is a language syntax extension that lets you write HTML tags interspersed with JavaScript. Because browsers don't understand JSX, we use a transpiler like babel to transform it into hyperapp.h function calls under the hood. Now let's get to it! Set up our app state model. The first thing we're going to do is to declare the state model which our app will use, navigate to path src/state.js and add this code: import { location } from "@hyperapp/router" import { data } from '../config/data' export const state = { location: location.state, movies: data, movie_list: [], cart: [], range_value: 160, switch_value: false, cart_item_count: 0, cart_item_total: 0, } In this code, we import the hyperapp-router location api function as required by hyperapp and register it in our state object for routing purposes. We also import our mock data from config/ folder and then set it as our movies state value. In other cases we can get this data from anywhere; a json response from a server etc but here we just fake it as an already gotten response data. This is what our fake data looks like. After that, we create an empty array which is then attached to our movie_list property, it's empty so we can fill it up with any kind of data we want, later we will use this to our advantage in a functionality in our app. the cart state property is also an empty array that will contain any movie a user adds to cart using an ADD_TO_CART action we will define soon. range_value will hold an integer value from the range slider element. Here the default value is 160. switch_value will hold a boolean value of an html switch element. cart_item_count will hold an integer value of the count of items in cart array. cart_item_total will hold an integer value of the total price of items in cart array. It's great to define our state object and it's properties/values as it serves as the data model for our application. Don't worry soon you'll see how everything links together. Mutating our state data using actions. We have briefly explained state concept previously and declared our state model, next we need to navigate to our actions file, this is where we will be writing actions functions that can mutate our state data, only actions can mutate state data in hyperapp. let's go ahead and write our first action. 
Add this code in src/actions/actions.js:

import { location } from "@hyperapp/router" export const actions = { location: location.actions, GET_ALL_MOVIES: () => (state) => ({ movie_list: state.movie_list = state.movies, }), }

If you have read the basics of hyperapp then you already know what this code does, but better still let's explain it a bit; we import and register our router api as usual, and create a function GET_ALL_MOVIES() which is passed our state store data; it mutates our initially empty movie_list state by copying the fake data from the movies state to it. Don't worry, you will see why we are not using the movies state instead in a bit.

Now let's add some other actions in the actions object for the functionality features of our app. In Hyperapp you can have as many actions as you want;

Add movie to cart action:

ADD_MOVIE_TO_CART: (movie_id) => (state) => ({ cart_item_count: state.cart_item_count += 1, cart: state.cart.filter(movie => movie.id === movie_id).length>0 ? Object.assign(state.cart, state.cart[state.cart.findIndex(obj => obj.id === movie_id )].quantity ++ ) : state.cart.concat(state.movies.filter( movie => movie.id == movie_id).map(res => ({ movie_title: res.title, price: res.price, movie_poster: res.poster_path, total: res.price, quantity: 1, id: res.id }) )), cart_item_total: state.cart.reduce( (acc, cur) => { return acc + cur.price * cur.quantity; }, 0), }),

the action ADD_MOVIE_TO_CART() contains functions that modify the state property values they are assigned to. The functions are:

cart_item_count function increments the cart_item_count state property value by adding 1 to its current state value each time a movie object is added into the state cart array. It is called each time the ADD_MOVIE_TO_CART action is called.

cart function adds a movie object into the state cart array from our state. Since actions have access to state, and can also be passed a payload (data) from our components, we use Javascript's .filter() function on our movies state data to return the movie object from its array whose id is the same as the movie id passed from the movie component, and return a boolean value so we can check if it is already present in the array or not. If it is present then we just increase the movie's quantity property by 1, but if it isn't present we locate the movie using its id in the movies state array and then copy its properties into the cart state array along with some new properties to help us create a quantity and a total property/value.

cart_item_total function calculates the total price of the movies in the cart array.

Note: we are using JavaScript's .filter(), .concat(), .map(), .reduce() functions when mutating state data in hyperapp because they are pure functions that do not modify an array but instead return a new array after an operation.

Filter movies by price range and shipping actions:

FILTER_BY_PRICE: (event) => (state) => ({ range_value: state.range_value = event.target.value, movie_list: state.movie_list = state.movies.filter( movies => state.switch_value ?
    movies.price <= state.range_value && movies.planet_shipping == true :
    movies.price <= state.range_value
  ),
}),

These actions are called by the range slider and switch HTML elements in our components, and you can see how quickly hyperapp renders state changes and updates the DOM as the values change. We pass the values as event data from the range slider or switch element, depending on which is used. Remember that we created a separate movie_list state array for these filtering operations: even though we are using pure functions, we do not want to overwrite the movies state array with dynamic data like this, because it would be difficult to filter it again afterwards.

Rendering our state data / executing actions using the hyperapp view function and components.
With hyperapp we can create 2 kinds of components (components are pure functions that return a virtual node).

Lazy components are components that have access to hyperapp state and actions. I like to think of them as container components with which we manipulate state/actions.

Regular components are components that do not have access to (and cannot mutate) state and actions. I like to think of them as presentational components, where we just pass our state values in as properties for styling, re-use, applying behaviours etc. in our views.

Creating our components.

<App/> component.
The first component we are going to create is the App.js component. It is a lazy/container component that will be rendered as our root / route component when a user visits our app. It calls the GET_ALL_MOVIES() action from our actions API when it is created. Navigate to src/views/containers/App.js and add this code:

import { h } from 'hyperapp'
import { Link, Route, location, Switch } from "@hyperapp/router"
import { NavBar } from '../../components/NavBar'
import { MovieCard } from '../../components/MovieCard'

export const App = () => ( state, actions ) =>
  <div oncreate={ () => actions.GET_ALL_MOVIES() } >
    <NavBar cart_count={state.cart_item_count}/>
    <section class="section">
      <div class="container">
        <div class="columns">
          <div class="column is-3">
            <div class="box">
              <div class="content">
                <b> Highest Price: ${state.range_value} </b>
                {/* note: the range input's class/min/max and this heading's opening tag were lost in extraction; the values below are assumed */}
                <input type="range" class="slider" min="0" max="300" value={state.range_value}
                  oninput={ (event) => actions.FILTER_BY_PRICE(event) }
                  onchange={ (event) => actions.FILTER_BY_PRICE(event) } />
                <h1>Only show mars shipping movies</h1>
                <div class="field">
                  <input id="switchMovie" type="checkbox" name="switchMovie" class="switch is-success"
                    checked={state.switch_value}
                    onchange={ (event) => actions.FILTER_BY_SHIPPING(event) } />
                  <label for="switchMovie"></label>
                </div>
              </div>
            </div>
          </div>
          <div class="column is-9">
            <div class="columns is-multiline is-mobile">
              {
                state.movie_list && state.movie_list.map( ({ id, title, poster_path, price, vote_average, planet_shipping, overview, release_date }) =>
                  <div className="column is-half-mobile is-one-third-tablet is-one-third-desktop is-one-quarter-widescreen is-one-quarter-fullhd">
                    <MovieCard
                      movie_id={id}
                      title={title}
                      poster={poster_path}
                      price={price}
                      rating={vote_average}
                      planet_shipping={planet_shipping}
                      plot={overview}
                      release_date={release_date}
                    />
                  </div>
                )
              }
            </div>
          </div>
        </div>
      </div>
    </section>
  </div>

Here we import hyperapp's h function to transform our components written in JSX into virtual DOM nodes. We also import the hyperapp routing API functions from the installed hyperapp-router package. We then import two regular/presentational components, which we will create next for presentation and styling.
(It's often good practice to do this; it encourages code re-use.)

Then we create the function that builds the App.js component, and add a hyperapp life-cycle event that calls our initially created GET_ALL_MOVIES() action when the App.js component is created in the DOM. Check out hyperapp's life-cycle events.

Then, using JSX and JavaScript's && operator, we check the state store to see whether state.movie_list has any data, and run JavaScript's .map() over each item in the movie_list array — which in this case is our fake top-rated movies data. Remember we talked about creating a presentational component soon: inside the .map() call we pass data from the returned objects into that component, which we will call <MovieCard/>, to style each item as a component of its own.

You can also see our range slider and switch elements, how they have access to their respective state data properties, and how they call actions and pass payload event data to them. Note the kind of JavaScript DOM event listeners attached to each of the elements. The rest is just responsive/grid styling, thanks to the awesome bulma.

<MovieCard/> component.
Navigate to src/components and create a MovieCard.js file and add this code; this will be a regular component, a child of the lazy component App.js.

import { h } from 'hyperapp'
import { Enter } from "@hyperapp/transitions"
import { Link, Route, location, Switch } from "@hyperapp/router"

export const MovieCard = ({ movie_id, title, poster, price, rating, planet_shipping, plot }) => (
  <div>
    <Link to={`/details/${movie_id}`} >
      {/* note: the closing bracket of the <Enter> tag and the class of the wrapper div below were lost in extraction */}
      <Enter time={200}>
        <div>
          <div class="media">
            <div class="media-content">
              <div class="content">
                <span class="badge is-badge-warning is-badge-large" data-badge={rating}></span>
              </div>
              <div class="content">
                <p class="title is-6 has-text-light"> {title} </p>
              </div>
              <div class="content">
                <div class="tags has-addons">
                  { planet_shipping && <span class="tag is-success">ships to mars</span> }
                </div>
              </div>
              <div class="content">
                <figure class="image">
                  <img src={poster}/>
                </figure>
              </div>
              <nav class="level is-mobile">
                <span class="level-item">
                  <b> ${price} </b>
                </span>
              </nav>
            </div>
          </div>
        </div>
      </Enter>
    </Link>
  </div>
)

Here we import the same packages as before plus a new { Enter } component from the hyperapp-transitions package, which we will use to create a nice, smooth animation for our MovieCard component. We also use the <Link/> component from the hyperapp router, which will open a modal route where a user can see more details of the selected/clicked movie card.

The most important thing to note is that we pass the movie data as properties from our App.js lazy component into this regular component; the rest of the code just styles the data from those properties, like {title} which is the movie title, {poster} the movie poster URL, {price} the movie price, and {planet_shipping} the true/false boolean we use to check whether a movie ships to Mars. You can see how we apply some logic using the && operator to render an element conditionally after evaluating the {planet_shipping} property. You can use if/else or even the ternary (?) operator if you like, but I prefer && as it best suits this sort of simple check. The rest of the code is just styling.

<ViewMovieDetails/> component.
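Before building it, a quick note on how data reaches a route component: when @hyperapp/router renders a component through <Route>, it hands it a match object whose params field holds the dynamic segments of the path — our ViewMovieDetails below relies on exactly this. The snippet here is only an illustrative sketch (the Details name is made up):

// Illustrative sketch, not the project component: a component rendered via
// <Route path="/details/:movie_id" render={Details}/> receives `match`,
// so the dynamic :movie_id segment is available as match.params.movie_id.
const Details = ({ match }) => (state, actions) => (
  <p>Showing details for movie {match.params.movie_id}</p>
)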
Now we need to create the lazy (container) component <ViewMovieDetails/>, which will be rendered by hyperapp-router when the user selects a movie and displays more information about it. Its route path is /details/:movie_id. Do take note of the special :movie_id term after the route path: it is a parameter which the router package uses to grab the id of the selected movie from the previous component and pass it as a route parameter in the URL for this component to use.

In src/views/containers/ create a ViewMovieDetails.js file and add the code:

import { h } from 'hyperapp';
import { MovieDetails } from '../../components/MovieDetails'

export const ViewMovieDetails = ({match}) => (state, actions) => (
  <div>
    {
      state.movie_list
        .filter( movie => movie.id == match.params.movie_id )
        .map( ({ id, title, overview, poster_path, vote_average, release_date, price }) =>
          <MovieDetails
            cart_count={state.cart_item_count}
            id={id}
            title={title}
            plot={overview}
            poster={poster_path}
            price={price}
            rating={vote_average}
            release_date={release_date}
            addAction={ () => actions.ADD_MOVIE_TO_CART(id) }
          />
        )
    }
  </div>
)

As always we import the regular component, called <MovieDetails/>, for presentational use as a child component (we will create it shortly). We then filter the movie_list state array to find the movie whose id equals the id passed from the route, and .map() the result into the <MovieDetails/> component, which is passed its necessary properties — including the ADD_MOVIE_TO_CART() action and the cart_item_count state value.

<MovieDetails/> component.
Now let's create the child component of the <ViewMovieDetails/> lazy component. In the src/components/ folder create a file MovieDetails.js and add this code:

import { h } from 'hyperapp'

export const MovieDetails = ({ cart_count, id, title, addAction, poster, price, rating, release_date, plot }) => (
  <div>
    <div class="modal is-active">
      <div class="modal-background"></div>
      <div class="modal-card">
        <header class="modal-card-head">
          <p class="modal-card-title">{title} </p>
          {/* note: the close button's attributes and the modal-card-body opening tag were garbled in extraction and are reconstructed here */}
          <button class="delete" aria-label="close"></button>
        </header>
        <section class="modal-card-body">
          <div class="columns">
            <div class="column">
              <figure class="media-left">
                <img src={poster} />
              </figure>
            </div>
            <div class="column">
              <p class="title is-5 has-text-white"> Plot: </p>
              <p class="title is-6 has-text-white"> {plot} </p>
              <p class="title is-6 has-text-white">Release date {release_date} </p>
              <span class="tag is-warning">{rating}</span>
            </div>
          </div>
        </section>
        <footer class="modal-card-foot">
          <a class="button is-success" onclick={ addAction }>
            <b> Add to Cart ${price} </b>
          </a>
          <b> {cart_count} Items in cart</b>
        </footer>
      </div>
    </div>
  </div>
)

Here we receive properties like cart_count, price etc. from the <ViewMovieDetails/> component and add some styling around them. Remember that we passed in an action responsible for adding the selected movie to the cart when the button's onclick event fires, as {addAction}, and the total number of items in the cart as {cart_count}.

<ViewCart/> component.
Now let's create a lazy component that will be rendered when a user visits the /cart route. In this component we will display the movies which have been added to the cart by the user.
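It helps to keep in mind what a single cart entry looks like by the time it reaches this view — each one was built by the ADD_MOVIE_TO_CART action we wrote earlier. The object below is only illustrative (the values are invented), but the keys match that action:

// Illustrative cart entry (values invented; keys come from ADD_MOVIE_TO_CART):
const exampleCartEntry = {
  id: 42,                      // the movie's id
  movie_title: "Some Movie",   // copied from the movie's title
  movie_poster: "/poster.jpg", // copied from poster_path
  price: 120,                  // unit price
  quantity: 2,                 // bumped each time the same movie is added again
  total: 120,                  // set at insert time; ViewCart recomputes quantity * price for display
}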
In src/views/containers/ add a file ViewCart.js and add this code:

import { h } from 'hyperapp';
import { CartItems } from '../../components/CartItems'
import { NavBar } from '../../components/NavBar'

export const ViewCart = ({match}) => ( state, actions ) => (
  <div>
    <NavBar cart_count={state.cart_item_count}/>
    <section class="section">
      <div class="container">
        <p class="title is-3 has-text-white"> Cart Items </p>
        {
          state.cart.filter(res => res.id)
            .map( res =>
              <CartItems
                movie_id={res.id}
                title={res.movie_title}
                price={res.price}
                quantity={res.quantity}
                total={res.quantity * res.price}
                poster={res.movie_poster}
              />
            )
        }
        {
          <p class="title is-5 has-text-white"> total price: ${state.cart_item_total} </p>
        }
      </div>
    </section>
  </div>
)

In this code we import and use the <NavBar/>, and we also map over the items in our state's cart array and pass the results down to its child component <CartItems/>.

<CartItems/> component.
Now let's create the <CartItems/> component. In src/components create a file CartItems.js and add this code:

import { h } from 'hyperapp'

export const CartItems = ({ movie_id, title, price, quantity, total, poster }) => (
  <div>
    <article class="media">
      <figure class="media-left">
        <p class="image is-64x64">
          <img src={poster}/>
        </p>
      </figure>
      <div class="media-content">
        <div class="content">
          <p class="title is-5 has-text-white"> {title} </p>
          <small> ${price} x </small>
          <b>{quantity} copies</b>
          <p/>
          <small> Total price: </small>
          <b> ${total}</b>
          <hr/>
        </div>
      </div>
    </article>
  </div>
)

I assume this component is self-explanatory: it just styles the properties passed to it from its parent component <ViewCart/> and applies some behaviour to them.

<NavBar/> component.
And then finally the NavBar component. Navigate to src/components and create a NavBar.js file and add this code, so we have a nice navigation bar that holds links to other components and receives the cart item count from any parent lazy component where it is used.

import { h } from 'hyperapp'
import { Link, Route, location, Switch } from "@hyperapp/router"

// note: some attributes below (the nav's aria-label, the SVG logo's xmlns and its paint-order values)
// were lost in extraction; standard values are filled in and the dangling fragments trimmed.
export const NavBar = ({cart_count}) => (
  <nav class="navbar is-primary has-shadows" role="navigation" aria-label="main navigation">
    <div class="navbar-brand">
      <a class="navbar-item" href="">
        <svg xmlns="http://www.w3.org/2000/svg" width="30" height="30" viewBox="0 0 133.80357 132.29168">
          <g transform="translate(-37.57 -49.048)">
            <rect width="132.292" height="90.714" x="38.554" y="90.625" ry="9.719" fill="#edc905"/>
            <rect width="5.292" height="61.988" x="12.631" y="72.602" ry="1.911" transform="rotate(-34.65)" fill="#edc905"/>
            <rect transform="rotate(-145.35)" ry="1.911" y="-39.86" x="-154.078" height="61.988" width="5.292" fill="#edc905"/>
            <ellipse cx="148.923" cy="115.949" rx="7.56" ry="7.182"/>
          </g>
        </svg>
      </a>
      <Link to="/cart" class="navbar-item">
        <span class="badge is-badge-danger is-badge-medium" data-badge={cart_count}>
          <svg xmlns="http://www.w3.org/2000/svg" width="25" height="25" fill="#FFFFFF" viewBox="0 0 8 8">
            <path d="M.34 0a.5.5 0 0 0 .16 1h1.5l.09.25.41 1.25.41 1.25c.04.13.21.25.34.25h3.5c.14 0 .3-.12.34-.25l.81-2.5c.04-.13-.02-.25-.16-.25h-4.44l-.38-.72a.5.5 0 0 0-.44-.28h-2a.5.5 0 0 0-.09 0 .5.5 0 0 0-.06 0zm3.16 5c-.28 0-.5.22-.5.5s.22.5.5.5.5-.22.5-.5-.22-.5-.5-.5zm3 0c-.28 0-.5.22-.5.5s.22.5.5.5.5-.22.5-.5-.22-.5-.5-.5z" transform="translate(0 1)" />
          </svg>
        </span>
      </Link>
    </div>
  </nav>
)

We create a navigation bar component with a cart_count property to display the number of items we have in our cart.

Registering our lazy components to their respective route paths.
Now that we have created all the necessary components for our app, the next thing we need to do is register the parent (lazy) components as route components, so the hyperapp view function can return their respective virtual nodes and re-render the UI when the state changes. We use hyperapp-router's <Switch/> component to declare multiple route paths and the component to render when each path is visited. In src/routes.js add:

import { h } from 'hyperapp'
import { Link, Route, location, Switch } from "@hyperapp/router"
import { App } from './views/containers/App'
import { ViewMovieDetails } from './views/containers/ViewMovieDetails'
import { ViewCart } from './views/containers/ViewCart'

export const view = ( state, actions ) =>
  <div>
    <Switch>
      <Route path="/" render={ App } />
      <Route path="/cart" render={ ViewCart } />
      <Route path={"/details/:movie_id"} render={ ViewMovieDetails } />
    </Switch>
  </div>

Connecting everything.
Now we need to mount our entire app to the DOM. Hyperapp requires us to do this in an index.js file; this file serves as the entry file for rollup (or any other module bundler) to bundle our entire application code into a single JavaScript file. Let's add this code in the src/index.js file:

import { h, app } from 'hyperapp'
import { location } from "@hyperapp/router"
import { state } from './state/state'
import { actions } from './actions/actions'
import { view } from './routes'
import './styles/app.scss'

const main = app(state, actions, view, document.querySelector('.hyperapp-root'))
const unsubscribe = location.subscribe(main.location)

Here we import the hyperapp API functions we need, along with our state, actions and view, which we then mount to the DOM using hyperapp's app function. We have also imported our sass file so it gets compiled when our module bundler processes index.js. That processing includes transpiling our JSX/ES6 syntax with Babel, treeshaking, compiling sass to CSS, etc.

That's it! We have finished our project for this tutorial. I suppose you have been checking our progress gradually in your browser at localhost:8080 to see the final output. You can run a production build and deploy to GitHub Pages or whatever static file server you want so you can share it with others; there is a tutorial on this if it's your first time. Thank you so much for reading. I'm very much interested in any opinions that correct or report errors, or suggestions that would make this tutorial better, as I am looking to improve. You can visit the project repo on GitHub. Feel free to open issues!

Discussion (1)
Too complicated, try this: gitlab.com/peter-rybar/prest-lib/b...
A picture is worth a thousand words: It isn’t global. This is weather, not climate. It is caused by a persistent blocking high pressure pattern. In a day or two, that red splotch over the eastern USA will be gone. Image from Dr. Ryan N. Maue of WeatherBELL h/t to Joe Bastardi UPDATE: Dr. Roy Spencer puts it in perspective June 2012 U.S. Temperatures: Not That Remarkable July 6th, 2012. “But, Roy, the heat wave is consistent with climate model predictions!”. Yeah, well, it’s also consistent with natural weather variability. So, take your pick. For the whole U.S. in June, average temperatures were not that remarkable. Here are the last 40 years from my population-adjusted surface temperature dataset, and NOAA’s USHCN (v2) dataset (both based upon 5 deg lat/lon grid averages; click for large version): Certainly the U.S drought conditions cannot compare to the 1930s.. The only “global” thing in that picture is cooling. Wonder if the MSM will report that? No, I don’t really. To you with your low human science and pedantry it may appear to be merely yet another meteorological ‘blocking high’, of the sort that cause the Russian heatwave two years ago, but to those of the Climate Illuminati is revealed the real transcendental Truth of the anger of the Global Warming god. Amen, and pass the collection tin. Um, sarc. This is a normal summer for the Mid-Atlantic region. I just noted that National AP (Weah DC) was 102 degrees. But also noted that north of DC in Frederick Co MD that the temps are 95–not a record breaker for here. Usually the high is located off shore and called a Bermuda High which brings very hot, humid and lots of POx. But the high is on-shore. Haven’t heard of any POx warnings in the local weather/nes media. History will show that AGW theory will rank with “evil spirits” in terms of logical reasoning. Maybe if we sacrificed a few virgins all the bad weather will go away? I keep hearing about how winter in the southern hemisphere has been colder than normal, and the anamolies in that figure give credence to that idea. Same thing for summer in the British isles. Course, I fully expect that all these incidents will be somehow linked…you know, global warming is causing the ‘insane’ heat in the US while also causing summer to be cool and wet in Britain and winter to be chillier in Australia. And Great Britian is having one of the coolest and wetest summers in 100 years. But don’t let actual data get in the way when saving the Earth is in the balance, not to mention more government research grants. Bill Thankfully it is moving away. We are still so short of water that we are looking at crop failure, possibly total crop failure. Please can you send all that ‘global warming’ over here to the UK? It hasn’t stopped raining since April 1st. Some April Fool that was! Does put the heatwave in perspective. Maybe UAH is biased somehow, they have the last few months going warmer. Of course this map is for July, which of course is not tallied yet. Shall we wait and see what the UAH Global Temperature Update will do for July? Shall we? And then we will see what July will be for a month globally. A cool one or a warm one. If it gets a little bit higher than June, 2012 will be the third or fourth warmest year on record. And what about the rapid decline of Arctic Sea Ice this year: If it was really that cold on Earth as you people claim we would not be seeing such rapid ice loss. Well I guess you are right Mr. Watts. Pictures can say more than a thousand words. 
REPLY: Clinging to your graphs and religion Mr. Kuipers? – Anthony Same thing happened in Russia a couple of years ago and it was also blamed on global warming. misterjohnqpublic says: July 7, 2012 at 10:36 am Maybe if we sacrificed a few virgins all the bad weather will go away? ——————————————————————————————————————— Send them to me. I will take care of them – umm, the weather, that is… The Canadian Broadcasting Corporation has taken advantage of the first real summer (37 C in Waterloo) we have had in years to schedule a rabidly pro-AGW show that includes some lout claiming that ‘ocean acidity is up 33% because of global warming’. And you thought no one was keeping track! Robbie says (July 7, 2012 at 10:59 am) ——- Take a look at Aqua 5, normally a good precursor of UAH figures. It rose high in early June (to above 2011 levels) but has now fallen back. But I wouldn’t want to trump your ‘1 month = climate’ with my ‘2 weeks = climate’. :-) It is a perfect world and perfect debate stance when you can follow heat waves from region to region and call them global warming without revisiting the prior cases as you move along. At least that is the current scheme until something else comes along to topple it. That something will be a further span to 20 or 25 years of flat or declining GLOBAL measures of ocean temps and sea level non-rise. We will also see extreme attempts to deflect attention from the PDO, AMO, and solar cycle decline effects. Hey Robbie!! You got the balls to come back next month and give us your prognostication? Be aware that that map is not an equal-area projection. Along the equator it is cooler than average, and that is underemphasized by this map (remember Mexico is bigger than greenland), and the warmer areas, being away from the equator, are overemphasized. OT: It really bugs me that the ice area maps (actually, any “area” map) never seem to be an equal-area projection. It is, in my opinion, an error to attribute any single event, whether it’s a derecho, a heat wave, or a hurricane, to climate change. That’s weather. On the other hand, as the statistics of weather shift, that’s climate change. And what we’re seeing are the weather statistics moving (see for a discussion and data) to hotter and higher energy events. It’s just like lung cancer – any individual case might be from a genetic issue, accidental exposure to toxins or radiation, or just plain bad luck. But the changing statistics of increasing lung cancer (which was in the past extremely rare) are almost wholly attributable to smoking rates. So no, this particular heat wave cannot be directly attributed to climate change. But we’re certainly going to see a lot more more of them, with fewer cold events, as climate averages change and the weather dice get loaded more and more heavily to the hot side… Robbie said: “If it gets a little bit higher than June, 2012 will be the third or fourth warmest year on record.” If this happens, and it could, why should we care? We have been coming out of the Little Ice Age for 150+ years. Is it surprising if the general trend is upwards? What is problematic for you and your buddies is that for 15 years we have been basically flat. If CO2 was such a strong catalyst then the past 15 years should have led to inescapable warming…..they have not. For fun, let’s remember this quote by Gavin Schmidt from 2007. The first enumerated points were questions put to Dr. Schmidt and his response follows. 
Basically, if this year (“year 5″ of his points) doesn’t exceed the warmest year on record in all 4/5 indices, the condition has been met for Gavin to question Anthropogenic Global Warming….how much you wanna bet he finds a way to NOT question]“ I really do wish we could have Global Warming in the UK. The weather is cold and, to put it mildly, wet! @ Robbie And then we will see what July will be for a month globally. A cool one or a warm one. If it gets a little bit higher than June, 2012 will be the third or fourth warmest year on record. At the end of June the UAH YTD temp is 0.146C above the 1981-2010 average. I suggest we all panic now. When it is hot it is climate change. when it is cold it is weather. Se how easy it is ? Yet, much of the NH mid-latitude land areas are rising at a rate of over one deg F (over 0.6 deg C) per decade. See Figure 3 of the 2012 GRL paper by Judah Cohen. Draft here: By 2100, most (over 70%) of the summer days in the Midwest will exceed 90 deg F, unless we reduce AGW. Extreme hot weather similar to the last week’s excessive heat wave will occur every other summer week in the future! Quick analogy based on what I happen to be doing at the moment: Let’s say you’re trying to cook an omelet. You mix the eggs and pour them in a pan. You put the pan on a thing that’s labeled as a “warmer”. Ten minutes later, 1/3 of the egg is fried, and 2/3 is frozen. What have you done to the eggs on average? Have you cooked them? No, you haven’t done anything to the eggs on average. The question is completely meaningless. Yep… summer in the northern hemisphere and winter in the southern hemisphere… makes sense! One picture is worth a thousand words… Especially when it tells the story of how AGW is going to hit our states hard with extreme hot weather. This is the report for Pennsylvania (my home state getting hit hard by AGW) by the Union of Concerned Scientists: See Figure 1 showing the state’s temperatures before 1990, compared to the state temperatures by the end of this century. More heat waves are coming… On the site of Joanne Nova for the posting listed as There is a nice chart: A small green arrow and red dot show our current state of affairs. To the right of that red dot is the future in dashed lines. I’ve used the link to Jo’s site because the article there is so very interesting. However, the chart is also on WUWT here: In the current post and comments: Robbie says: July 7, 2012 at 10:59 am Shall we wait and see . . . ~ . . . Shall we? I think we should. Meanwhile, I trust Robbie will cut CO2 emissions to near zero – you do have to breathe. I, on the other hand, intend to use my car and air conditioner insofar as I agree with Roy Spencer, namely “It’s summer.” I did grow up before air conditioning and remember summer nights when it was too hot to sleep. That had the advantage that one did not find it unpleasant to go to the outhouse in the middle of the night, unlike in winter when the thunder mug was the better choice. Typo correction- My comment above should read: “Yet, much of the summer NH mid-latitude land areas are rising at a rate of over one deg F (over 0.6 deg C) per decade. Just looking at JJA average trend for the NH land area from 20-90N latitudes shows 0.7 deg F per decade rise average, but much of the land area where people live in the NH has average temperatures climbing over 1.0 deg F per decade (see the map in Figure 3 in the Cohen paper). Paul K2–sorry, but the UCS is not a group of scientists, just a politically leftist advocacy group. 
While I was writing the comment @12:22 a new comment came through. From Paul K2 with the home state of Pennsylvania. I think I should mention that the State that I found too hot to sleep in many years ago – before catastrophic-AGW religion was invented, was – ready? . . . . Pennsylvania! It’s just the weather and not the climate. By the way is Britain’s drought (caused by climate change) over yet after 3 months of wet, wet, wet? Anthony, have you bothered to read the Cohen (2012) paper I linked to that was published in the GRL entitled “Asymmetric seasonal temperature trends”? There is much there you might like; but most of the analysis contradicts your conjectures in this post. Here is the Abstract for your perusal: should address this apparent seasonal asymmetry. As for the Union of Concerned Scientist maps, there are many projections of what extrapolated AGW warming trends for NH land areas like the US mainland out there. The displays on that site convey the information quickly, accurately, and easy to understand. The U.S. and Canada should be preparing for a repeat of the “Dirty Thirties”. Being prepared for this type of cyclical weather pattern would be prudent social planning. It may get allot hotter and drier just like the 30’s (notice wikipedia renamed the dirty 30’s ROFL) “The is weather, not climate. It is caused by a persistent blocking high pressure pattern. In a day or two, that red splotch over the eastern USA will be gone.” Yes – our CAGW climate scientists forget that “weather is not cimate” unless it’s winter and somewhere there is a new low temperature record. But remember that these scientist/advocates are merely opportunists, fund-raising off of others’ misery (e.g. the western wild fires).. Robbie says: July 7, 2012 at 10:59 am Shall we wait and see what the UAH Global Temperature Update will do for July? Shall we? As you may have noticed Mr. Kuipers the atmosphere is extremely dry in the areas where it is the temperatures are high so the atmospheric enthalpy is also extremely low. Therefore the actual amount of heat energy required to raise the atmospheric temperatures to their current levels is very low. So it actually proves very little. But then you really knew that anyway. Joseph Adam-Smith says (July 7, 2012 at 11:32 am) I really do wish we could have Global Warming in the UK. The weather is cold and, to put it mildly, wet! —- And yet where I am (Hampshire) we still have a hosepipe ban. The local water company enforces it by sending scuba divers down into my garden to make sure I’m not watering it. Robbie,. How many years is climate according to the IPCC or the WMO??? Don’t bother it’s 30 years. It’s just the weather and not the climate. Just for some reference or some balance: the coolest spring I’ve experienced here in Eastern Oz was in 1999. The coolest summer was in 2012 (after a hot spring). The highest heat was in 2004, but it was brief. All records for sustained heat (ie mean monthly max) were set between 1910 and 1920 in my region. Only August set its record outside that decade – in 1946! I’d like to blame America’s Big Heat on something – but I don’t know much about what was happening in 1936. I’m told it was preceded by a horror winter. Go figure. I was recently conversing by phone with a highly educated American friend who lives in Oxford, UK. We of course touched on the weather. He vociferated about the exceptionally cold, wet, unpleasant weather there this summer. 
When I did the same regarding the exceptionally hot and dry weather here in Colorado, his immediate response was, “And they say Global Warming isn’t happening!” Is there something in the water over there? Of course this heatwave is weather. What else could it be? And of course it’s climate as well. It is scarcely credible that you would think the two are mutually exclusive. Perhaps you could explain in more detail what you could possibly mean by “weather, not climate”. It seems to me that an atmosphere containing greenhouse gases should moderate surface temperature extremes, both peak highs and peak lows, while increasing the daily average. This seems apparent to me because desert areas with their lower humidity levels generally have daily temperature extremes that are both higher and lower than more humid areas at the same latitude. It’s certainly most apparent on the moon. Is there something wrong with my logic? If not, why do climate alarmists blame most weather extremes on anthropogenic CO2. Or maybe they just choose to be dishonest and remain silent when journalists make incorrect assumptions. This doesn’t make sense to me either unless the vast majority of all journalists chose their profession because they have no aptitude whatsoever for math and science and are unable to distinguish fact from fiction in such matters. I have had one conservative newspaper editor privately agree with me that the latter case is true. James says: July 7, 2012 at 12:37 pm. —————– Are you Brits allowed to use your watering hoses yet? Or are you still officially in a drought? The UK MET Office deserves a medal for predicting the exact opposite weather to what happens. If they predict rain perhaps you might get some decent weather for the London Olympics! Its peculiar that so many readers here seem to deny that much of the middle part of the US mainland has had some extremely hot weather recently. Most people who live there seem to be asking “What’s UP with this extremely hot weather?” and “When will it end?” Well, what’s up might be that we have changed the meteorological system. I am happy to see that WUWT has finally recognized that the Arctic ice pack is melting off severely every summer, including this summer (WUWT expects ice extent to fall below 4.5 million sq km this year). This is a nice baby step in the correct direction. Now lets try a bigger step. NH weather systems are driven by the jet stream. The jet stream is driven by the pressure differential between the arctic and the mid-latitudes. Now what happens when the Arctic isn’t as cool, and the pressure of the mid-troposphere rises? The pressure differential changes, and the jet stream slows, which means that it is more susceptible to blocking patterns. In short, Arctic amplification warms the arctic, which changes the jet stream and changes weather patterns in the NH. Here is a nice video by Dr. Jennifer Francis, a meteorologist from Rutgers, explaining how this happens: So yes, the climate scientists predict climate changes. And now the meteorologists are predicting weather pattern shifts base on observed climatic changes, such as the loss of the Arctic ice pack and reduced snow cover. My friends and family are sorry to hear, that these kinds of heat waves are here to stay, and with more on the way in the coming years.. Joseph Adam-Smith says: July 7, 2012 at 11:32 am I really do wish we could have Global Warming in the UK. The weather is cold and, to put it mildly, wet! 
Been like that most of the 65 years of my life, despite the idiots prophesying ice ages and global warming and the end of the world. Anthony, I’ve enjoyed following your site for months as a layperson trying to get a better handle on various points of view. I tend to be skeptical by nature, though I lack any scientific training, so sometimes the hand waving of the commenters blurs things. It seems that saying it is colder in the UK so climate change isn’t happening is just the same as saying all the new high temps in the US means there is climate change happening. I’m hoping you can remind me of your point of view, which I believe is that the earth is in a warming trend but you are quite skeptical that the causes are induced by man vs. naturally occurring events, such as sunspot activity? I’m sure your view is much more nuanced, but I’d love to hear (and see on the home page) how you think about the trend. I’m also curious to learn more about your current view on where things might be going per the very interesting posts by you and others here. Don says: July 7, 2012 at 1:04 pm “Is there something in the water over there?” Maybe. At a dinner a week ago I was seated next to a nice man from Nottingham, UK. He was very pleasant, well educated, extremely polite, and he spoke in a very quiet voice. Throughout the dinner we discussed his job, which was quite technical. His company is a U.S. government subcontractor for NASA. Then someone mentioned “global warming”, and it was like The Wolfman, where hair suddenly sprouts from his face and hands, fangs appear, and he starts growling and ripping out of his clothing with saliva flying. He instantly launched into a loud rant about the oceans “acidifying by 30% practically overnight!” and other extreme climate alarmist talking points. I was astonished, because he was scientifically literate. I made the [deliberate] mistake of asking him, if “carbon” causes global warming, then why has the planet not warmed for 15 years? I am not exxagerating when I say he lost it. He jumped from one crazy talking point to the next in a loud voice, and I couldn’t get a word in. People were watching from other tables. I ended up just listening and smiling, which made him crazier. He announced that he had to leave, and did. I’m still amazed recalling it. I had heard of people losing it like that, but this was my first personal experience. Maybe there is something in the water. LSD? ”.” And the past records themselves have been changed at times. Here’s the record highs from 2 list obtained from the NWS. One was in 2007 and the other in 2012. Can anyone explain to me how the record high set in 1966 can be lower than the previous record set in 1907? The list from 2007:23-Mar 81 1907 The list from 2012: Mar-23 76 1966 Nick, The earth has been in a warming trend since the Little Ice Age. It is worthwhile to note that the long term, gentle warming trend has not accelerated. [The green line is the trend, which is actually decelerating.] You can see here that the rising trend is unchanged. If the ≈40% rise in CO2 had the claimed effect, the trend would be accelerating, no? But it is not. In fact, despite steadily rising CO2 levels, the global temperature is not responding, indicating that the effect of CO2, if any, is minuscule.. Lester Via and Smits: I think climatology is the study of long term climate trends, and meteorology is the study of weather patterns. 
The two fields intersect when persistent weather pattern changes occur year after year, or when meteorologists link observed climate change impacts (like reduced Arctic ice pack) to weather pattern changes ( such as NH heat waves, droughts, floods, snowstorms, and cold spells). Eventually some climate changes can impact weather patterns. Up until recently, the main mechanism for estimating AGW impacts on weather, was the “loaded dice” analogy. Higher average temperatures increase the odds of extreme high temperature events, or higher moisture levels in the atmosphere should result in more extreme precipitation events. But the climate scientists underestimated the impact of AGW on the polar regions. The Arctic ice pack, and snow coverage, have fallen much faster than the climate scientists expected. This appears to be due to increased teleconnection of heat into the Arctic, and the fact that the melt mechanism of the Arctic ice pack speeds up as the pack is weakened. So now some meteorologists are tying reduced ice pack and snow cover in the Arctic (warmer Arctic) to jet stream changes, which in turn can cause extreme weather events. In summary, because we delayed action to address AGW, we may have altered the meteorology of the NH (we ‘broke’ our weather system). This theory is gaining weight as more and more researchers publish. In Central Oregon I have not removed the tarp from the AC yet. Most summers it runs only a day or two. If CO2 is the AGW problem and it must be reduced and coal fired electrical plants are the culprit, I see a major reduction in electrical power coming courtesy of the EPA. I grew up in the midwest and remember the windows open sweating in the sheets summer nights. Now when I go back to visit it is AC everywhere. So the question is: are the AGW alarmists willing to cut electrical power to essentials, which certainly should not include AC since we survived without it 50 years ago and most of the world does today? Under those circumstances Paul K2 could decide just how important AC is versus his concern about AGW. In 1970 I was . I spent June in Cambridge, New York; very rural, It reached 100F. I was surprised, It was hotter than South Florida., my home. What made it hot then? What? Nick: You should realize that the people posting here are generally amateurs, and I have found most of the posts on this site that cover topics related to science to be incorrect. For example, just last week there was a post by Dr. Outcalt that was complete nonsense. If you take the time to read down through all the comments, you will eventually see that (even though many substantial comments were snipped by the moderator). This has been true for most of the posts that pretend to cover real science. If you take the time, and slog through the nonsense in the comments, in many cases the mistakes in the posts will become apparent. I highly recommend reading Tamino’s site, since he has the time to identify and correct at least some of the mistakes on the posts here (like the Outcalt post). I won’t put the link to Tamino here, because I am afraid of the moderators. [Moderator's Note: It is absolutely fascinating that a search on the name "Paul Klem" (or "Paul K. Lem") yields nothing of substance, but the very same anonymous coward is able to declare a respected physicist's work "nonsense" and manages to disparage, almost in the same breath, the many professionals that comment here. "Tamino" (known to his friends as "Grant Foster") is linked on the right. We know him of old. 
He's not as bright as he thinks he is. Your grasp of the science is quite a bit shakier than you think. Oh, yeah.... disparaging the moderators and moderation policy will get you snipped. You have been given a fair degree of latitude here and my advice is don't push it. -REP] Blaming the Eastern U.S. heat wave on global warming is just as follyish (?) (okay, foolish) as blaming the cooling wave in the UK right now on global cooling. But that won’t stop them. Here is a high resolution Temperature anomaly map from the Modis Terra satellite for the week of June 25 to July 2. The US hotspot is clear enough but there are quite cold areas in northwest North America, northern Europe, central Africa, central China, northeast Siberia, and Australia. Larger version. Certainly does not look like a GHG signal; the weather is variable is a better explanation. And we are freezing here in Australia – areas up to 7C below normal at times – coldest start to July in 27 years was one report & June was cold too. Minimum records falling like autumn leaves. R.S.Brown says: July 7, 2012 at 1:21 pm. =================================================================== And the airport in Columbus has been expanded since then. (I moved here about that time. The winters were brutal. A couple of close to all time record lows in 1989.) A point of personal order here… On our first 108 (Heat index) day of July 4th, 2012…I skated (inline) 12 miles, 4 times around a local 3 mile trail around a local Minneapolis lake. In 1988 I biked 12.5 miles from Excelsior MN, to an aunt and uncles in Edina MN. I figure if I can go from age 35 to age 59 and still “perform” I’m doing pretty good. And I’m not worrying about AWG, as I HAVE A MEMORY and 1988 makes this summer look like a “piker” in comparison. (I.e., NO comparison.) Paul K2; But the climate scientists underestimated the impact of AGW on the polar regions. The Arctic ice pack, and snow coverage, have fallen much faster than the climate scientists expected. >>>>>>>>>>>>>>>>>>. They predicted melting in the SH too (how convenient of you to leave that out) but the ice in the SH has been increasing (which you also conveniently over look). As for the rest of your tripe, it amounts to cherry picking some data and deriving a linear trend from a cyclic system. They predicted accelerating global temp increases, instead, the highest concentrations of CO2 ever recorded in our lifetimes correspond to declining temperatures. The predicted increased desertification, we’ve seen a decline instead. They predicted increased severeweather, but on a global basis, total cyclone energy has declined ever since we started measuring it 30 years ago. What your various comments amount to is sifting through reams and reams and reams of data that says the exact opposite of what the climate scientists and their models predicted to find the odd short term trend that agrees with the general meme and then pretend that it represents some sort of long term trend. For those of us who actually pay attention to the whole picture, the only one you are fooling is yourself. @TomE “Under those circumstances Paul K2 could decide just how important AC is versus his concern about AGW.” When the alarmists actually ACT like they are alarmed, (instead of just mouthing). 1) I’ll be totally amazed. 2) I might bother taking them 0.001% seriously Come on K2.. turn of all you AC’s computers.. anything that uses power, shows us you REALLY believe. 
Maybe you can then try and persuade AlGore and the many thousands who junketted to Rio, Cancun etc.. You know what Im tired of? Im tired of news and weather casters telling us how hot it feels. How the heck do they know how hot I feel. I bet you Im way cooler when Im standing in my sprinklers:) Wish they would just go back to degrees and humidity separately instead of making up some arbitrary number based on feels like and presenting that to us as today’s temps. Paul K2, You will get plenty of misinformation from tamina’s blog. And he censors opposing points of view; the mark of an insecure Grant Foster. That’s why his traffic is negligible. Here, you can bring the talking points you get from tamina and watch them get deconstructed. davidmhoffer wrote:. Try getting some facts from the source, instead of relying on WUWT to process the information before feeding it to you. I copied and pasted the predictions from the 4th IPCC report: Please note these predicitions contradict your statement:} If radiative forcing were to be stabilised in 2100 at A1B levels} AndyG55; When the alarmists actually ACT like they are alarmed, (instead of just mouthing). >>>>>>>>>>>>>>>> You’ve hit on one of my favourite points. Are the alarmist scientists who are predicting that the whole planet is going to roast except for just a handful of areas acting in any way as if that were true? Are they seeking citizenship in countries like Canada, and buying land at both high latitudes and high elevations? No they are not. Why would anyone who is so certain of a disaster about to befall humanity on a global basis not take such simple steps to protect their families? Their children and grandchildren? Why are they not building survival habitats stocked with canned goods in areas that they claim will be amongst the few that are sustainable for the support of human life? Why do they wail and scream and demand that we take action to save the world’s people while doing not one d*mn thing to save themselves, their kids, and their grandkids? Are we to believe that their altruism runs so deep that out of concern for humanity they have not taken a single step in regard to the safety of themselves and their kin? Instead they continue their course of conjuring up magic sufficiently advanced that they hope it is indistinguishable from science. Arthur C Clarke would be impressed. Paul K2 says: July 7, 2012 at 1:21 pm Its peculiar that so many readers here seem to deny that much of the middle part of the US mainland has had some extremely hot weather recently. ====================================================== ME: Sorry to disapoint but I haven’t read anyone denying that. My 10 year old RadioShack sensor says it got to 102 F on my front porch today. What have you been reading? =============================================================== PAUL: Most people who live there seem to be asking “What’s UP with this extremely hot weather?” and “When will it end?” =========================================================== ME: Maybe it will end when the CAGW “Team” stop blowing hot air UP our asses? ================================================================= PAUL: Well, what’s up might be that we have changed the meteorological system. ================================================================= ME: Got anything besides a hockey stick to prove that? (Don’t bother posting a link to “An Inconvenient Truth”.) I don’t know how old you are but I’m old enough to remember hot summers in the past. They happen sometimes. Always have. 
Always will. Now somebody figured out how to make money from it. ======================================================================== PAUL: My friends and family are sorry to hear, that these kinds of heat waves are here to stay, and with more on the way in the coming years. ================================================================ ME: So tell me. Just what is the weather “supposed to be”? What is “normal” in your world? KR says: July 7, 2012 at 11:26 am It’s just like lung cancer – any individual case might be from a genetic issue, accidental exposure to toxins or radiation, or just plain bad luck. But the changing statistics of increasing lung cancer (which was in the past extremely rare) are almost wholly attributable to smoking rates. ================================================= Yeah, it is the same “lying with statistics” story again. In your example, what about the possibility of changing statistics of increasing lung cancer because of increasing lung cancer among non-smokers, is it attributable to smoking rates? No, of course not. As for your “global warming”, even if it was real, it is a sort of average thing and could be the result of increasing temperatures in cold areas while the warmer areas were getting cooler on average. Hence you can not even theoretically attribute heat waves to a “global average warming”, it is completely unscientific. Paul K2 says (July 7, 2012 at 2:05 pm) I highly recommend reading Tamino’s site, ——— Reading, sadly, is all once can do. As soon as any opinion which differs in the slightest degree from the Great Man’s views are sent in, they are either suppressed or the commentator is gratuitously insulted with the crowd cheering him on. Since you have been talking about Arctic ice trends, here’s a perfect example. Note (1) that one should not fear the moderators here about linking to a site with a different outlook, and (2) Tamino’s blast of rudeness to a dissenter. Paul K2 says: Anthony, have you bothered to read the Cohen (2012) paper I linked to that was published in the GRL entitled “Asymmetric seasonal temperature trends”? There is much there you might like; but most of the analysis contradicts your conjectures in this post. Paul, have _you_ read it?? Why do they limit thier study to the last 30 years when models runs are available going back centuries. It is well known that there is a circa 60 year variation in climate and that there was a cooling running upto mid 70’s. Odd they chose to skip that period when looking at thier “trends”. This is not climate science it is SPAM. They and many others are spamming peer reviewed literature with this sort of pseudo science on an almost daily basis now and it is no more convincing than the magic claims that I could grow 9″ of dick in under 15 days that I receive with similarly boring repitition. Had they looked at seasonal difference between the warmig and the cooling parts of the cycle, it may have been interesting or even informative. Sadly they missed that opertunity and pointlessly looked at variations in the output of models that have notoriously failed to match climate. Paul K2: I got interested in this after the 2009 publication of the NCAR study discussing record high vs. record low temperatures. They identified a trend to more high records, and it was picked up by Andrew Revkin of the NYT. NOAA collects and reports 4 types of daily temperature records: maximum, minimum, maximum low, and minimum high. 
Here is a simple tabulation of the total of all 4 types of records, from 1994 to now (2012 is annualized by doubling the first six months): Year: Temperature records (in thousands): 1994 103.4 1995 128.0 1996 143.0 1997 117.1 1998 136.8 1999 117.8 2000 134.6 2001 100.0 2002 125.6 2003 111.0 2004 96.2 2005 106.7 2006 119.0 2007 90.7 2008 51.1 2009 62.0 2010 69.9 2011 81.7 2012 92.2 Insofar as the number of temperature records is a proxy for weather extremes, it’s pretty clear that there has been a *decrease* over the last six years compared to the previous dozen or so. mean 1994-2006: 110.2 mean 2007-2012: 74.6 That’s a pretty big change, more than 30%. Now, let’s see, what declined substantially over the period from 2007 or so to now, reaching a minimum around 2008-2009? Hmmm…. Oh, I know! Nah, couldn’t be. Climate scientists assure us that variations in solar activity are too small to explain much of anything…. Duh. I’ve been trying to explain this to people for over a week. Paul K2 says: July 7, 2012 at 1:56 pm But the climate scientists underestimated the impact of AGW on the polar regions. ==================================================== Let me tell you this: every single climate scientist who estimates the impact of AGW on whatever is blatantly wrong. I mean, even if AGW was real. AGW is a purely statistical sort of average calculation product derived from a sample of temperature measurements. An average can not have impact on parts of the sample, it is exactly the other way round. Face it Anthony… The earth is warming. Regardless of what you think the cause is, it’s getting hotter. And guess what happens when it gets hotter? More heat records are broken, droughts will be more severe, etc.. Is the point of this article that you don’t even think the earth is getting hotter? [REPLY: Yes, you want Anthony to defend denialist sites like this and this. We love the smell of real science in the morning. -REP] REPLY: What a warped conclusion. I’m talking about weather patterns and this anonymous twit thinks I’m in denier mode. Sure we’ve seen an increase in temperature in the last century, I’ve NEVER said we haven’t. I just don’t think its is a crisis, and I don’t see any evidence that CO2 forcings have overridden natural variations yet. – Anthony Paul K2; Try getting some facts from the source, instead of relying on WUWT to process the information before feeding it to you. I copied and pasted the predictions from the 4th IPCC report: >>>>>>>>>>> Sir, I see that you are no run of the mill troll. You are an expert troll. You planted an agregiously false statement in the hopes of getting called on it so that you could respond with reams of quotes that seem to support your position. For the record: 1. I got most of my background in climate initially from reading AR4 myself, long before I discovered WUWT. It was reading the highly cherry picked, grossly misrepresented, marketing spin dressed up as science, cherry picked total cr*p that convinced me that the whole mess was a facade in the first place. 2. The endless quotes, out of context and frequently poorly, inaccurately, or even fictitiously cited that comprise AR4 could be used to show that they predicted that the earth would freeze over tomorrow if you cherry picked enough. The wording of Ar4 is such that it could be taken to mean pretty much ANYTHING after the fact. 3. 
Your original assertions did not specify AR4, they were much broader than that, suggesting what the leadership of the CAGW meme have being saying, which goes far beyond what AR4 said, and encompasses a considerable amount of science that has been….. ooops I mean magic…. that has been published in the years since AR4. AR5 is on the horizon in part because AR4 has proven to be a complete failure. If you want to debate AR4, then say so up front. But don’t come here, try and tell me where I should get my facts from when you don’t even know in the first place where I get them from and where I don’t, and don’t make general statements that you then defend by trying to exclude everything before and since AR4 from the discussion. Sorry for under estimating your troll abilities earlier. Clearly you get a capital T. Troll. REPLY: Yes Paul K2 who previously commented here as Paul Klemencic is a well known troll. He mostly hangs out at Lucia’s. I generally ignore him as he’s hopeless. – Anthony Paul K2 says: July 7, 2012 at 2:44 pm Contraction of the Greenland Ice Sheet is projected to continue to contribute to sea level rise after 2100. Current models suggest that ice mass losses increase with temperature more rapidly than gains due to precipitation and that the surface mass balance becomes negative at a global average warming … =============================================== Yeah, this is exactly the sort of blunder I referred to in my previous comment. AVERAGE! A simple example. Australia gets warmer, Greenland Ice Sheet gets colder, everything else remains unchanged and you have net average warming as a result. Now, how on earth can the colder Greenland Ice Sheet lose more ice? No way. This is what happens if “scientists” connect unrelated stuff. Sad. Looking back at this thread, I ran across Smokey’s latest post, and I really have to say I’m impressed. I haven’t seen such deliberate distortion of graphical data in quite some time! Here’s Smokey’s last graph. He says that this demonstrates .” Sit down, class, and learn how to distort data. First and foremost, notice how compressed the Y-axis is. Looking at the data graphed, there are plots scaled by 0.00001 (essentially straight lines) and then offset high and low to compress the data. This is deceptive – those lines are unneeded, and minimize our ability to discern changes in the real data. This is particularly true since at this point in the discussion most people who are talking about climate have seen other graphs without this compression, and will interpret the compressed graph compared to the uncompressed ones. Here’s the graph without those extraneous scalars or re-centering offsets. That certainly makes things a bit clearer. The next item to note is that this graph has much of the data detrended! Amazing – the graph is intended to show something about the trends, but those trends have been artificially decreased. Here is that same data without the detrending. Next, the graph shows GISTEMP and HadCRUT3. HadCRUT3 is obsolete, the current data is HadCRUT4, which has notably more stations included. Here’s the graph with HadCRUT4 added. Finally, the assertion that “the long term rising trend is within past parameters”:. Conclusion? Smokey’s statements are incorrect. — Summary: If you see a graph with unneeded compression or expansion, and in particular if you see one where (as in the case of trends here) the important data has been altered to change values, you can conclude one thing with certainty. 
The presenter of that graph is attempting to mislead. Shame on you, Smokey.

P.Solar wrote:
Actually, Figures 1, 2, and 3 are all based on collected data on seasonal temperature anomalies. Figure 4 compares observed NH seasonal temperatures with model output (interestingly, only the NH winter temperature trend fell below the model trend band). The supplemental figures contain some additional model output, and very interestingly, Figure S4 shows the comparison between observed trends in Arctic sea level pressure versus the model forecasts for the boreal winter. The Arctic SLP is climbing in the northernmost latitudes, whereas the models predicted falling SLP. Yes, indeed, something is going dreadfully wrong in the Arctic.

diogenesnj – With respect to temperature records, or in fact any record of a varying parameter over time, it is completely expected that the number of record extreme events will decrease over time – you've simply seen more of the system behavior. Early in observations, every new data point might be a record – after you've been observing for a while, you will see fewer extremes. The telling number is the ratio of record highs to record lows. Changes in that ratio reflect changes in the base statistics, changes to the average temperature. In the 1960s that ratio in the continental US was about 0.78:1 highs to lows (slightly more lows than highs). But in the 2000s, the ratio in that region was 2.04:1, twice as many highs as lows. And that reflects the increasing average temperature.

Paul, I haven't seen where we D-worders have denied that the Midwest is hot right now?

Michael Schaefer says: July 7, 2012 at 11:02 am
misterjohnqpublic says: July 7, 2012 at 10:36 am
Maybe if we sacrificed a few virgins all the bad weather would go away?
------------------------------------------------------------------------
Send them to me. I will take care of them – umm, the weather, that is… I'll send you my share. My days of such….activities…are a quarter century in my past. And yes, that's precisely where they belong.

Robbie says: July 7, 2012 at 10:59 am
Shall we wait and see what the UAH Global Temperature Update will do for July?
Regardless of what happens in July with UAH, a new record for 2012 is totally out of reach, and at the end of this year 1998 will still be the warmest on UAH (as well as RSS, HadCRUT3 and HadSST2). Here is the analysis for UAH. With the UAH anomaly for June at 0.369, the average for the first six months of the year is (-0.089 - 0.111 + 0.111 + 0.299 + 0.289 + 0.369)/6 = 0.145. If the average stayed this way for the rest of the year, its ranking would be 10th. This compares with the anomaly in 2011 at 0.153, which ranked 9th for that year. On the other hand, if the rest of the year averaged at least the June value, which is more likely if the El Nino gets stronger, then 2012 would come in at 0.257 and it would rank 3rd. (1998 was the warmest at 0.428. The highest ever monthly anomalies were in February and April of 1998, when they reached 0.66.) In order for a new record to be set in 2012, the average for the last 6 months of the year would need to be 0.71. Since this is above the highest monthly anomaly ever recorded, it is virtually impossible for 2012 to set a new record or even come in second (see the short calculation sketch below).

Smokey says: July 7, 2012 at 1:36 pm
Sounds more like bath salts than LSD. There *have* been a lot of those episodes lately. If the mentioned man rips his clothes off, make sure you protect your face! Or at least have a full magazine of hollow point ammo.
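The following is a minimal Python sketch that reproduces the UAH back-of-the-envelope arithmetic in the comment above. The monthly anomaly values and the 0.428 annual figure for 1998 are taken from that comment as given, not re-checked against the UAH archive; the sketch only shows that the quoted 0.145, 0.257 and 0.71 follow from them.

```python
# Reproduce the UAH arithmetic quoted above (values as given in the comment).
jan_jun_2012 = [-0.089, -0.111, 0.111, 0.299, 0.289, 0.369]   # monthly anomalies, deg C
record_1998 = 0.428                                           # warmest annual anomaly on UAH

first_half = sum(jan_jun_2012) / 6                    # average of Jan-Jun, ~0.145
june_carried = (sum(jan_jun_2012) + 6 * 0.369) / 12   # if Jul-Dec all matched June, ~0.257
needed_h2 = 2 * record_1998 - first_half              # Jul-Dec average needed to tie 1998, ~0.71

print(f"H1 average: {first_half:.3f}")
print(f"Annual value if H2 repeats June: {june_carried:.3f}")
print(f"H2 average needed for a new record: {needed_h2:.3f}")
```

Since the roughly 0.71 needed in the second half exceeds the highest monthly anomaly the comment cites (0.66), the conclusion that a 2012 record was out of reach follows directly from the quoted numbers.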
45 years ago, when my parents built their house in N Central Tx, the central AC they got was a less than normal thing at the time. Most people in TEXAS didn’t have it at that time. Now, show me a house built in the last 20 years, in Texas that *doesn’t* have central AC. Furthermore, most POOR people in the US have AC. We’re a more affluent society (at least according to our debt levels, at both the gov’t level and individual level, but that’s another blog), so we get conveniences that we didn’t in the past. Almost every new car has not only AC, but power windows, automatic transmission, nice stereo, etc. What does that development tell you about AGW? Nothing? Exactamundo. Here’s a bit of goodness from NOAA where anomalies are displayed as percent. Paul K2 says: July 7, 2012 at 1:21 pm “Its peculiar that so many readers here seem to deny that much of the middle part of the US mainland has had some extremely hot weather recently” I am a relative newcomer to this site, but have seen it explained many times over that most skeptics do not deny that the climate changes and that it has been warming for a few hundred years. They don’t even deny that CO2 has an effect. The only real argument seems to be over the significance of the anthropogenic CO2 contribution. The alarmists claim, that we will suffer a catastrophe if don’t stop burning fossil fuel, is not supported by any scientific arguments I have seen anywhere, including my copy of the 4th IPCC report. There are always, valid scientific arguments that counter the IPCC claims. For example, I have never seen an explanation of the ice core data from approximately 120,000 years ago showing the temperature dropping from a high point to nearly ice age conditions before the CO2 even began to drop. Rather than address many of these counter claims, alarmists instead attack, the skeptics credentials or simply make stuff up about them. Paul, where exactly did anyone deny on today’s postings that we are currently having some unusually hot weather. I just drove around in the Virginia suburbs of the Washington D.C. area keeping my eye on my car’s outside air temperature display which indicated from 96 to 103 deg F., depending mainly on whether I was in a wooded area or a treeless area. This car is ten years old and during that time I have seen temperatures of 103 on at least 3 other occasions so to me the temperature is much higher than a normal summertime high but not all that unusual for a extreme summer heat wave. Just because most skeptics are not climate scientists is not a valid reason to ignore the science behind their claims. The laws of physics are the same no matter where you learn them and it is physics that control weather and climate. The problem to overcome is, all the laws of physics are in effect all the time, not just those that are chosen to be used in weather and climate models. It is obvious that present models are severely lacking something. But then, in fairness, it may be the most complex thing that has ever been attempted to model and many factors must simply be guessed at. I do appreciate your posting of Jennifer Francis’s interesting dissertation on a possible cause of hot weather in the area. But that is weather change not climate change, unless, of course, the arctic sea ice never returns in the next few hundred years. The joke about weather here in the Washington DC area, which is difficult to forecast, has always been – if you don’t like the weather now, just wait a day or two. 
I always used to enjoy Joe Bastardi’s morning forecasts on WMAL. I miss the humor Joe. Hey Paul K2. A question for you…. What is the normal temperature of the earth? Are we above or below this normal temperature? How was this normal temperature determined? After the awful floods in Queensland last year there were claims that they were caused by global warming/climate change. These were made by the head of the IPCC no less. Prior to the floods Prof T Flannery was on record as saying Queensland may never have drought breaking rains again. They were both wrong. The Queensland weather bureau put the event down to the PDO and La ninja the head of IPCC later retracted his statment. The release of water from the Wivenhoe dam also contributed. It seems any event now is being attributed to Global Warming, too hot, too cold, too dry too wet So any event you get in the US, drought, fire, floods severe winter will all be caused by Global Warming to the exclusion of anything else. Prof Christy called it the “ultimate non falisifiable hypothesis” which he further stated is not science. I’ll give Paul credit on this: Not many people are willing to go into the ‘belly of the beast’ and argue, with multiple comments (not just a hit and run), our point of view. I would venture that that holds true with people on both sides of whichever issue. I don’t go to Tamina’s site, or Daily Kos and hold forth, just because I don’t own enough duct tape. We call those who do Trolls, and they call we who do the same. Some people can indeed be Trollish, but again, credit to Paul. Regarding my last post, I must apologize – I didn’t catch that Smokey had plotted HadCRUT3 linear trends twice, and hadn’t plotted GISTEMP linear trends at all. But that’s easy to change in the dropdowns for data selection at WoodForTrees.org – I would encourage folks to explore the data themselves. For comparison to the obsolete HadCRUT3, here is HadCRUT4 with full length, 30, 60, and 90 year trends, and here is GISTEMP with full length, 30, 60, and 90 year trends. Note the steepening trends, note the clear acceleration in warming. And again, if you see a distorted or overly compressed graph, such as temperature change in zero-based degrees Kelvin, you know that someone is trying to deceive you. Keith Pearson wrote: ?” How about last summer in Texas? Didn’t you have a 1 in a 1000 year event caused by several abnormally long blocking highs? My relatives in San Antonio told me that Texas never had so many 100 degree days in a summer before. The farmers and ranchers in Texas lost over $7 billion. It might be helpful to search back through your memories and recall last year. Something weird is happening with the weather. Aside: I went to Texas A&M to recuit engineers (particularly drilling and petroleum engineers) once upon a time. When asked location preferences,a lot of the candidates saluted, and they said that they didn’t want to move north of the Red River! I had to go find a map. I hope you live on the right side of the Red River.. ==================================================== Yeah, that is what some warmists do: creating an impression that nobody challenge their warmism. The reality is quite different, however, but many are probably unwilling to speak up. I allow me to refer to a comment of mine on the issue of consensus: . Robbie on July 7th at 10:59 AM: “And what about the rapid decline of Arctic Sea Ice this year:” All true but irrelevant. 
Arctic warming is not greenhouse warming but is caused by Atlantic Ocean currents carrying warm Gulf Stream water into the Arctic Ocean. It started suddenly at the turn of the twentieth century, prior to which there was nothing but two thousand years of slow cooling. To learn the true story of Arctic warming download this article: To “Paul K2″ I have one question for you: If the science behind global warming is sound, then why do the scientists who promote it, such as Michael Mann and James Hansen, repeatedly violate the scientific method and federal law by doing everything in their power to avoid showing the raw data and methodology of their studies? Please tell me how something can be considered scientific if it brazenly violates the scientific method of transparency, openness, and willingness to admit the study may be wrong. Editing error, my apologies – in my last post the link for HadCRUT4 with full length, 30, 60, and 90 year trends should be: I recall seeing 44 Celsius (111 Fahrenheit) In the shade growing up (way too hot for ball hockey) Indeed heat waves and media hype are always a good bet. Humidity is a big factor, though, and ours was %25 at the time. Paul K2 the south pole just had a record low, minus 100.8. Ya I know if it wasen’t for CAGW there would not have been a record low. Just what is earths mean temperature, and what do we do if we over shoot taking it down:) Lacking a better place to put this … We don’t even know why Hot water freezes faster than cold water. Wikipedia on Mpembe effect (possible) causes. .” Actually, almost all skeptics believe that the earth has been warming in fits and stages since the last ice age, as most intelligent people would agree with. I speak only for myself when I state that the lastest stage of warming is no different from what occurred in the early twentieth century, is a transient artifact, and has not been proven by non-model scientific data to be associated with the increase in CO2. Paul K2 says: July 7, 2012 at 1:56 pm What you are really seeing is a combination of NATURAL factors that are the result of things like a weak solar cycle, multidecadal oscillations shifting from one phase to another, etc. The only man-made effects that can be found in any of the observations are the UHI effects on the overall temperature record (coupled with removing rural sensors) and the results of tinkering with the data by those who are supposed to be maintaining the integrity of said data. Instead of learning what has happened and what is happening, these supposed scientists have buried their collective head in the sand and will not consider anything other than a CO2 driven catastrophy that just will not cooperate with their predictions. There is SO much we could have learned over the past 30 years or so with the amount of money spent trying to prove the phantom menace really exists and labelling every extreme weather event (droughts, floods, excessive heat, monster snowstorms) as proof. These types of WEATHER and natural WEATHER pattern shifts are known and are very well documented throughout history in various parts of the world. It’s not climate change. That IS climate. That’s what happens when the various oscillations fluctuate between positive and negative phases. They don’t all happen at the same time, so each one may only change an aspect or two of a particular regions overall weather patterns. Get enough of them to shift at the same time and it can really wreak havoc on WEATHER patterns, but they do not necessarily cause climate change. 
Again, a climate can be and almost always covers a wide range of weather and persistent weather patterns. That’s the way it has been since we came out of the last glacial period (except the Younger Dryas event) and will be for at least for the remainder of this current interglacial. There is no place on earth where the climate is supposed to be identical every year. That’s the biggest mistake of many who believe in AGW and why so many seemingly intelligent people get suckered in. Are humans making an impact on the earth? Yes. We always have and always will – some good some bad. Is pollution a bad thing? Yes, emphatically! Is the world gaining heat at unprecedented levels that can only be attributed to CO2 emmissions? No! Are there natural events that seem to correlate much better with the current state of weather and climate? Without a doubt, yes. Do yourself a favor, take a look at the temperature reconstructions of the last million years and see where we are today compared to where we have been. Now, where were all those SUVs and coal fired electric plants 100s of thousands of years ago??? Oh… But then again, I’m just an amateur with a love for actual science, so what do I know… Im fom the uk. just watch the olympics. Paul K2 on July 7th at 1:56 PM: “But the climate scientists underestimated the impact of AGW on the polar regions. The Arctic ice pack, and snow coverage, have fallen much faster than the climate scientists expected.” This is absolutely true because they have no idea of what they are doing. Arctic warming is not greenhouse warming but they persist in using climate models that use the enhanced greenhouse effect to predict warming. Small wonder they underestimate the warming by a factor of four. The true cause of Arctic warming is warm water from the Gulf Stream carried north by Atlantic Ocean currents. It started suddenly at the turn of the twentieth century, paused for thirty years in mid-century, then resumed, and is still going strong. To understand it you have to download this paper: . Unfortunately this means that you cannot point to any aspect of Arctic warming as part of global warming. And since this is the case it leaves you without any authentic example of greenhouse warming whatsoever within the last 100 years. To coin a term,,,,, the “Climonista’s”©, are doing a Russian heat wave move from a couple years ago. Same old story, same old song and dance… Blaming one location is just idiotic. Every place has contributed to it. Sure the eastern side may have a bit more, but I don’t think it’s fair to assign blame to any one place. Also, look at Greenland and parts of Asia. Sure they aren’t AS hot, but it is still somewhat the same temperature. Idiots! Paul K2; How about last summer in Texas? Didn’t you have a 1 in a 1000 year event caused by several abnormally long blocking highs? My relatives in San Antonio told me that Texas never had so many 100 degree days in a summer before. >>>> Well obviously anecdotal evidence presented from a single city in Texas is representative of what is happening on a global scale. This is the mistake that you have made in pretty much all your comments. You focus on examples of extreme weather and present them as proof of predictions made in regard to climate while ignoring the larger picture which is chalk full of contrary examples which are just as valid on a cases by case basis as the examples you present (which is to say not at all). 
Hold the big picture in perspective or you've got nothing of substance upon which to predicate your argument. Take KR's clever attempt to show us HadCRUT and GISS temperature trends with linear trend lines jammed through what is clearly cyclical data, making linear trends useless except on time scales orders of magnitude larger than we have data for. Notice also that the globe has in fact been warming up at very roughly the same rate since the LIA several centuries ago, and that CO2 increases that became significant after 1920 don't seem to have altered that rate of warming at all.

Even the IPCC admits that CO2 is logarithmic, a fact that they then tend to gloss over when writing their reports, but read AR4 closely enough and you'll see that every single prediction and outcome is predicated upon CO2 being logarithmic. That being the case, even by the IPCC's own standards, anything over 400 ppm (which is pretty much where we are now) is subject to the law of diminishing returns, and hence is pretty much negligible. In 1920, according to the IPCC, it would have taken 280 additional ppm of CO2 to arrive at a direct temperature increase of +1 degree. The debate about feedbacks aside, this is a rather clever piece of subterfuge all on its own. This is not 1920 and we're not at a baseline of 280 ppm anymore. It will take 400 ppm to drive a direct temperature increase of just one degree from where we are today (see the short worked sketch below). At peak consumption rates, that will take us the next 200 years to arrive at, and probably longer because of the increased uptake by the biosphere (which mostly turns that uptake into food, by the way).

We have FAR more to fear from climate change due to natural variability than we do from CO2 increases, FAR more. That's what the science espoused by the IPCC actually says once you get past all the distracting marketing spin and alarmist language. That's what our written history books tell us. That's what the geological record tells us. Spout cherry-picked quotes and graphs from AR4 all you want; the science, the written history, and the geological record all say otherwise. But if you want to insist that something catastrophic is happening, then consider this. Human beings could survive rather easily another ice age, provided that we exploit to the fullest possible extent the energy resources we have available. Human beings could survive rather easily global temperatures exceeding the highest the earth has ever experienced, and at CO2 concentrations that make our current levels look like a rounding error to zero, provided that we exploit to the fullest possible extent the energy resources we have available. Drill baby, drill. There must still be profits to be made.

From:
----------
Atlanticborg coming our way with blades
Kent Malo took this photo of her on her way to Duluth from a Cessna 172 at 1500 feet. She will be here early Sunday morning with a cargo of wind turbine blades loaded in Denmark. After discharge here, they will be taken to a Minnesota Power wind farm in North Dakota. When discharge is completed, she will go to anchor for about a week, then come back in to load wind turbine blades, built in North Dakota and going to Brazil.
#################
A false economy in action.

Personally, I don't recall any Warmists back in February or March saying that there would be record heat in the U.S.A. or record wet in the Black Sea or Britain, or severe cold in Australia. Do you? I do not find post hoc pronouncements at all convincing.
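To make the doubling arithmetic in the comment above concrete, here is a minimal Python sketch of a purely logarithmic concentration-to-direct-warming relation. The figure of roughly 1 °C of direct, no-feedback warming per doubling is that comment's own working assumption, not a vetted sensitivity value; the only point the sketch makes is that, under a logarithmic law, the ppm increment needed for each additional degree grows with the starting concentration.

```python
import math

# ASSUMPTION (taken from the comment above, not a vetted number):
# about 1 deg C of direct, no-feedback warming per doubling of CO2.
DEG_PER_DOUBLING = 1.0

def direct_warming(c_new_ppm, c_old_ppm):
    """Direct warming implied by a purely logarithmic concentration law."""
    return DEG_PER_DOUBLING * math.log2(c_new_ppm / c_old_ppm)

def ppm_for_one_degree(baseline_ppm):
    """Extra ppm needed for +1 deg C of direct warming from a given baseline."""
    return baseline_ppm * (2 ** (1.0 / DEG_PER_DOUBLING) - 1)

for baseline in (280.0, 400.0):
    print(f"from {baseline:.0f} ppm: +{ppm_for_one_degree(baseline):.0f} ppm "
          f"for +1 C of direct warming")
```

With those assumptions the sketch returns +280 ppm from a 280 ppm baseline and +400 ppm from a 400 ppm baseline, which is the arithmetic the comment relies on. How much warming feedbacks add on top of the direct term is exactly the contested question, and nothing in the sketch speaks to it.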
As Shakespeare put it: “I can summon Spirits from the Vasty Deep.” “Aye, so can any Man, but will they come?” KR says: July 7, 2012 at 3:26 pm. Your graph showed warming is accelerating over the latest 30, 60, and 90 years going from 1922 to 2012 at: I can do the same going from 1850 to 1940 as shown below. So what does that prove other than that climate goes in cycles which have nothing to do with CO2? See: I blame the EPA. Growing up, the summertime would always be super hazy. You could almost count on the forecast being hazy, hot, and humid. The visibility would routinely drop to five or six miles. I’ve noticed since about 2009, it no longer gets hazy. The sky remains blue and visibility remains at 10 miles at the airport. Ever since that time there has been a trend towards hotter weather even though it doesn’t look like there’s been any fundamental change in weather patterns. In the 90s and early 2000s, it usually never got above the low 90s. Even the “hot” summers only peaked at 91-93, though there might be 10 or 15 days with temperatures between 90 and 92. Several years never even hit 90. Temperatures have now reached the upper 90s for two straight years, with locations nearby reaching 100. Temperatures this high had not occurred since the drought of 1988. So I don’t know if it’s a coincidence or if it has something to do with a new EPA policy, but at the same time summers starting becoming hot again, the skies have cleared up. Weird. charles nelson says: July 7, 2012 at 5:24 pm! ============================================== Maybe he’s just upset that he didn’t get to watch McKibben’s iceberg melt. A global phenomenon? Yeah, well. Down here on the southern Victorian coast in Australia we’ve just had our second white frost in two days. Unusual enough for all the locals to chat about it on morning walks. Paul K2, Oh great one, holder of the climate change truth, please enlighten us poor shlemiels. You seem so willing to share your knowledge. Are you a team member of the inner circle of the climate scientists? You could be of great help to us shlemiels, to take us out of the darkness. The analysis of the data sets available to us is like plowing through fields of rocks. Perhaps you could provide us with the raw data sets your friends of the inner circle use. We will not tell on you. We have been trying to get them for a while. They have not provided them. They claim we will only find errors. Some of us were born to point out errors. Some of us became that way through education. Some of us have no social skills and love to point out that the King is running around naked. Oh the curse of it. Again the data sets would be incredibly enlightening. Apparently PaulK2 doesn’t know the difference between projection and measurement? It looks like climatology will end up contributing more material to the science of anthropology than it will to the study of climate. Several of the Usual Suspects have raised the argument that, while a particular event is not due to AGW per se, such events are becoming more frequent due to AGW. It is almost certainly the case that, to the extent that there have been trends in temperatures during the warm parts of the year, and to the extent that such trends are due to anthropogenic forcing, AGW might be responsible to a similar extent to any increase the frequency of events above some arbitrary threshold. So for the above argument to be correct, would merely require that warm days of the year are being warmed by AGW. 
But the truth of the above argument is a trivial matter; the real questions involve the magnitude of any AGW effect (can we measure it above the natural noise?) and what effect on the opposite sort of extremes (extreme cold) can be attributed to AGW. Last, with regard to the question of societal impact, we have to ask: how will people adapt? Let's deal with those issues one at a time.

In regard to the frequency of "heat waves" in the US, any increase due to AGW is hard to isolate from the natural noise. The frequency of such events was greater during the 1930s by a great deal. According to the U.S. Climate Change Science Program (2008), the recent heat waves are distinguished from those heat waves mainly by high nighttime temperatures, not the daily maximum temperatures. The EPA in 2010 used an updated version of the heat wave index from that same report: In the presence of such high natural variability, it is difficult to isolate any anthropogenic effect, even if one should be present. That it is hard to isolate suggests it is probably small if present.

With regard to the second matter, a uniform change in temperature would presumably mean that extreme cold events and extreme warm events would change in opposite directions. If the distribution is symmetrical and the thresholds are set symmetrically about the mean, then a mere shift of the distribution to warmer temperatures would not increase extremes in net. That is, the total number of extreme events should, for such a change, not change. In order for the climate to become more extreme, in terms of temperature, there must be some shift in the distribution that involves more warm events but anywhere from a slightly smaller decrease in cold events to an increase. But the observations, over sufficiently long periods, clearly indicate enhanced warming of the coldest days of the year, relative to the warm ones, in the US and probably the rest of the world, too. I analyzed this myself; it's clear the reduction in cold events should be much stronger than any increase in warm events. For the whole US, over the 1979-2010 period, I ranked average daily temperatures for the region from coldest to warmest within each year (in non-leap years, the 183rd-coldest day was duplicated as the 184th for that year). I then calculated linear trends for each rank of day over the period (a minimal code sketch of this procedure appears below). The strongest warming occurred on the very coldest days, and while warm days also warmed, they didn't do so nearly as much. Here's the plot: This confirms the peer-reviewed analysis of Knappenberger et al:

The final question: how will people adapt? Will they simply fry to death as heat waves get worse? Well, cities have been substantially warmed by Urban Heat Islands, which gives us a chance to see how people react to enhanced temperatures. As it turns out, heat-related mortality declined even as cities warmed: Davis RE, et al., 2003. Changing Heat-Related Mortality in the United States. Environmental Health Perspectives 111, 1712–18. The above facts make it reasonably clear that AGW is not making the climate of the US more extreme, and even if it were increasing heat wave events, which isn't clear, we would have little to fear from such an effect, as people readily adapt.

Browse through Australian newspaper clippings all the way back to the 1800s and you'll find that global warming has been happening since then … I think it should be mandatory for Julia Gillard and all American warmistas to read this.
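The rank-and-trend procedure described in the long comment above is straightforward to sketch. The Python below is a hedged illustration of that description rather than the commenter's actual code: it assumes a hypothetical `daily_temps` mapping from year to a list of area-averaged daily mean temperatures, sorts each year's days from coldest to warmest, pads non-leap years to 366 values by repeating the 183rd-coldest day, and then fits a least-squares trend across years separately for each rank.

```python
import numpy as np

def rank_trends(daily_temps):
    """daily_temps: dict {year: list of area-averaged daily mean temperatures}.
    Returns 366 least-squares slopes (degrees per year), one per coldest-to-warmest rank."""
    years = sorted(daily_temps)
    ranked = []
    for year in years:
        temps = sorted(daily_temps[year])          # coldest ... warmest within that year
        if len(temps) == 365:                      # pad non-leap years as described above
            temps = temps[:183] + [temps[182]] + temps[183:]
        ranked.append(temps)
    ranked = np.array(ranked)                      # shape (n_years, 366)
    x = np.array(years, dtype=float)
    slopes, intercepts = np.polyfit(x, ranked, 1)  # one slope and intercept per rank
    return slopes

# Plotting the returned slopes against rank (1 = coldest day of the year, 366 = warmest)
# would reproduce the kind of figure the comment refers to, given a suitable daily dataset.
```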
“Something weird is happening with the weather.” There are no re-runs in the universe Paul. You fear the unknown. It’s ok, lots of people do, but it’s a part of life you must learn to face head on. Enjoy it or fear it….your choice.. I notice these global warming alarmists were no where to be found when Florida citrus crops were covered in ice. I note that there has been a paradigm shift for assigning blame for unusual natural events. There was a time when an event, like the earthquake that had an epicenter near the headquarters of the Central Intelligence Agency and cracked the Washington Monument on the occasion of the fall of Khadaffi regime, would have been assumed to be a special portent from on high instead of a random event or the result of human influence on the climate. It does pose an interesting conundrum. Either use your electricity and air conditioning, thus causing CO2 emissions, or bake in the heat on the assumption that less CO2 emissions means that you will only bake a little bit more in the future. Or you can have no electricity as the warmers want and has happened to many because of the storms and you can bake without any choice. Take your pick. Forego electricity and air conditioning for the cause or blame global warming because you have no electricity to keep cool. It’s very hard to explain this conundrum. How many warmers have air conditioning? How many that have no electricity desperately want it now. David Duff says: July 7, 2012 at 10:53 am Please can you send all that ‘global warming’ over here to the UK? It hasn’t stopped raining since April 1st. Some April Fool that was! ____________________________________ David, I will gladly swap some of my nice sunny North Carolina weather for some nice soggy UK weather. The last thunderstorm headed right for my farm, split about a mile away and went around on both sides. We got lots of thunder and lighting from all directions and not a drop of rain… GRRRrrrr. I was sure that storm would drench my thirsty grass. No rain since a trace on June 23rd and ten days of 95F to 102F. I have hay, not pasture. Luckily the T-storms have hit most of the other farms in the area. I really want this blocking high to go away. KR says: “Note the steepening trends, note the clear acceleration in warming.” As Werner Brozek proved above, KR is blatantly cherry-picking. And as I showed with trend charts going back hundreds of years, there has been no acceleration in global warming. None. What KR shows is a fictional, cherry-picked artifact that completely disappears when a proper long-term trend chart is used.. Go figure. I am conservative on some issues and liberal on others. I do not identify with either the Republicans nor the Democrats. To limit ones thinking in such a manner is just the same as going through life wearing blinders. In other words, I try to keep an open mind and accept the facts as they are, not the way I would like them to be. No doubt this post will fire up all the Rushbots out there and the personal attacks on me will be fast and furious. Good, then my point is made. So many comments so far off the mark; too many to respond to. Arno Arrak says that the Arctic ice pack is melting not due to polar amplification, but a long term shift in ocean currents. 
In my comments I said polar amplification, but that the faster melt than expected seems due to teleconnection (transport of heat by wind and ocean currents) and faster melt mechanisms due to the weakened pack (fractured ice floes exposed to wind and waves have higher heat transfer rates than a solid pack). We may disagree on the triggering cause, but possibly may agree on the factors causing the runaway melt. Amino Acids in Meteorites: Please view the Jennifer Francis video more carefully. The increased amplitude of Rossby waves in the jet stream pulls cold Arctic air down into the Northern side of the jet stream bend, and pulls tropical air up into the Southern side of the bends. Florida citrus crops covered in ice are yet another sign. Many of the rest of you have been taught to attack climate scientists. But these theories I am discussing and have linked to today are espoused by meteorologists (weather scientists). You guys need to learn some new scrips and taunts. It seems that the group of scientists involved in your proposed conspiracy just got a whole lot bigger. Check out some famous meteorologists like the Wunderblog where Dr. Masters talks about the work of Dr. Francis. Its a bit odd that this stuff is so new and novel to you, especially since WUWT is run by someone supposedly knowledgeable in weather patterns. Keep drinking the Kool-Aid Mr. B. You and the goreacle can keep at it but it’s not gonna make a difference. RE:Paul K2 says: July 7, 2012 at 1:21 pm The current hot spell is already crumbling in northern areas, and is nothing compared to what people had to endure in the 1930’s. In the 1930’s the hot pattern locked in, and lasted for months. It endured. This year’s is fleeting, by comparison. I’m not certain where the data is coming from that speaks of “thousands” of records being broken, and establishes the current hot spell as the “worst ever,” but I doubt it will stand up to scrutiny. The “blocking pattern” you speak of will have to last until September to match what happened in the time of the Dust Bowl. Check the old records from Kansas and Nebraska. You ain’t seen nuthin’, yet. The BBC did a show on weird weather that can be found on youtube, and a youtube regular that I am not allowed to mention on this site constructed a nice overview of this on youtube and his DCoftheWeek site. Mr. B. In other words, I try to keep an open mind and accept the facts as they are, not the way I would like them to be. >>>>>>>>>>>> And yet you present not a single fact in support of your position, you only cite WHO you believe. Odd that you claim the support of factual evidence, yet cite none. Thanks Wade and Otsar. That is a refreshing reminder of how I think critically and of the frustration of dealing with those who don’t. Mr B “Over 95% of Scientist worldwide believe global warming is real and directly related to human activity.” —————————- Actually, Mr. B, I can’t recall ever being asked. Perhaps that opinion pole was conducted in Cantonese or Russian or Spanish or Hindi or Arabic….. Werner Brozek – You’ve raised an interesting question, why the warming seen in the 1930’s-1940’s. Take a look at temperature versus solar input. In the first part of the 20th century insolation was high, aerosols (sorry, not covered in the WFT’s website) were quite low (very few volcanic episodes, for example, when you examine the natural forcings), and hence a combination of natural forcings led to early century warming. 
As a side note, the sea surface temperatures (SST’s) Now, however, insolation is dropping, natural and anthropogenic aerosols are high, and yet temperatures are rising rapidly. If you look at the temperature record, current changes are being driven by significant anthropogenic influences. And it’s currently warming at 0.18C/decade, as opposed to the 0.13C/decade of the first part of the century, with a per century trend higher than the per 60 trend you pointed out. It’s warming faster than the first part of the century, faster over time, and that is by definition acceleration. — Smokey – What I showed, actually, demonstrates that current warming is due to anthropogenic influences, to GHG increases, rather than the natural forcings that would otherwise have induced a great deal of cooling over the last 50 years. As I said before, distorted graphs (such as the ones you produced) are in fact deliberately deceptive. At this point I can no longer extend you the benefit of doubt – you have clearly demonstrated that you’re willing to distort the data to support your point of view. Mr. B. says: July 7, 2012 at 6:48 pm ==================================== (Yawn) Question. This map shows most of the world is experiencing below normal temperatures. How does this square with Spencers UAH latest results that show above normal temperatures? Mr. B. says: July 7, 2012 at 6:48 pm. ==================================================== So, according to you the last Iraq war was a hoax perpetrated by the American president, the majority of the Congress and 35 other countries who sent their troops to Iraq. But at the same time the IPCC and the leaderships of some Academies of Science can not create a hoax nor can the left wing dominated media. Very logical. Speaking of the American National Academy of Science,. Doesn’t it smell like hoax now? hey Mr. B., you forgot to say we all are also holocaust deniers, flat earthers, Anti-Science, creationists, and in the pay of Big Oil. Seriously, that 95% of yours isn’t Scientist, its CLIMATE Scientist. Only bought and paid for Climey’s need apply to any of their polls, because if you aren’t one of them, your vote gets dropped out before the tally. KR says: July 7, 2012 at 11:26 am …. the statistics of weather shift, that’s climate change. And what we’re seeing are the weather statistics moving (see for a discussion and data) to hotter and higher energy events….. So no, this particular heat wave cannot be directly attributed to climate change. But we’re certainly going to see a lot more more of them, with fewer cold events, as climate averages change and the weather dice get loaded more and more heavily to the hot side….. ___________________________ OR Stephen Wilde could be correct, esp. given the weak solar cycle 24 That sure sounds like what I have observed the Jet Stream doing for the last few years and as a farmer I watch it carefully. OR E.M. Smith could be correct. Just before entering into a bond event, the ending of warmth turns into a lot of unsettled weather, high winds and flooding. OR Woods Hole Oceanographic Institution on the same ~ 1500 year cycle (Bond event) that E.M. Smith discusses in depth in the above article. If humankind is darn lucky, CO2 is the “Magic Control Knob” and we are not headed into a cold spell however I think that is highly unlikely if you look at this graph of CO2 vs temperature for the last five interglacials. 
I may have just worked outside in 104F for several days (grumbling all the while) but I prefer that to ice and famine in a heart beat! Previous reference (sorry, fighting a summer cold, not at my best) to SST’s was in reference to the HadCRUT4 corrections to WWII sea surfaced temps, due to changes in procedure at that time. Correcting _properly_ for the change in method at that time reduces the mid-century bump somewhat in HadCRUT4 – and if you disagree, be prepared to justify why correcting for a different method of data collection (engine room versus bucket) is a bad idea… Mr. B. says: July 7, 2012 at 6:48 pm No doubt this post will fire up all the Rushbots out there and the personal attacks on me will be fast and furious. Good, then my point is made. I doubt if anyone here will attack you personally. If they even bother to make the effort , they will instead, simply point out your misstatements and faulty logic. Michael Schaefer says: July 7, 2012 at 11:02 am Is there any observable change to ratings when particular TV Weathermen (or women) present this ‘news’?. It ripped a neighbors tree in half. We had to help him tear it down in the middle of the storm. Got the rope tossed over our anchor limb and the tree chose that moment to snap. I ran my ass off and wound up standing in leaves when it hit the ground. For those that doubt the power of tenths of seconds in the Olympics, I swear I was that far from road paste. Anecdotal crap aside, I think the map says it all. KR; In the first part of the 20th century insolation was high, aerosols (sorry, not covered in the WFT’s website) were quite low >>>> Is that why air quality wasn’t an issue in Europe and North America at the time? Oh wait, it was. We had jokes like “a shot an arrow into the sky, and it stuck there”. Or “its the smog, I swallowed a piece”. We had regular smog alerts in major cities all over the world. Coal was THE major source of both heat and electricity, and there were no scrubbers or regulations regarding their emissions. But aerosols were low. Uh huh. KR;y’s As a side note, the sea surface temperatures (SST’s)>>>> As a side note, the info we have to calculate SST’s from is patheticaly innacurate. The best data we have is in regard to OHC as measured by the Argo Buoys which show it to be in decline since their inception. KR; It’s warming faster than the first part of the century, >>>> Actually it has been coolling since the late 90’s according to the very sources you cite, despite CO2 being at the highest levels since we started recording them. KR; As I said before, distorted graphs (such as the ones you produced) are in fact deliberately deceptive>>>> What convinced me most that CAGW was total garbage was digging into the graphs and studies of AR4 and discovering how completely deceptive they were. Then the climategate emails came out and proved that I was incorrect. Deception was just a complete understatement. KR, You missed one more linear trend line for 10 years…here it is ;-) Mr. B., Prof Richard Lindzen exposed the corruption of the NAS here. Or do you pick and choose your authorities based on your obviously political belief system? And KR is still arguing against solid empirical evidence showing that there is no acceleration in global temperatures. HADcru “re-adjusted” its temperature record when it saw that the data shows declining global temperatures. There is no acceleration in the long term global temperature record, as this Bill Illis chart, based on satellite data, shows. There is simply no acceleration. 
The last three years of declining global temperatures happened despite the steady rise in harmless, beneficial CO2. Despite KR’s cherry-picked double talk, the data shows conclusively that there is no acceleration in global temperatures. The inescapable conclusion: any effect from CO2 is so small that it is unmeasurable, therefore CO2 can be completely disregarded for all practical purposes. Hey KR. I notice you like 30 year periods in your graphs. I like’um to. Here’s a good one It’s even better before the latest set of HADCRUT adjustments I didn’t have time to read all of the comments, but I am willing to bet that the majority of the “pro-warming” comments were written by people 50 years old or or younger. Having more than a decade and a half additional experience, I can recall summers just as hot as we are now experiencing, and winters just as cold as we experienced several years ago. As a matter of fact, I actually experienced the additional warming due to the UHI effect: in 1974 most of the distance that I walked from the mass transit station to my house was tree lined. The air temperature was easily 5 oF lower under the shade of the trees (no I didn’t carry a thermometer). Most of the trees have been removed, and the sidewalk now receives direct sun, Oh, and I hope the Earth is warming. Continuous cooling would indicate the start of a new ice age! Oh please, ‘talk your book’ brother; it is all you’ve got. . RE: Mr. B.:(July 7, 2012 at 6:48 pm) “The IPCC and the National Academy of Science believe global warming is real and directly related to human activity. Over 95% of Scientist worldwide believe global warming is real and directly related to human activity. “ Global warming may be real, but the extent to which human activity is responsible is subject to serious question. Carbon dioxide, the primary agent assumed for this effect might be likened to a gas that blocks olive-green light within a few hundred feet at current concentrations, but is perfectly transparent for all red and blue light. Adding more just ever so slightly increases the width of the total blockage. So far, man has only increased the concentration of this gas by about 40 percent and it remains questionable if there is enough carbon in the ground to allow man to double the initial concentration. Without assuming any special positive feedback effects, (which may have been calibrated on the basis of unrelated natural warming) it appears that one must double the concentration of this gas for each one degree Celsius increase. I believe the problem is that scientists who have chosen to work in this field may have been over-conditioned by environmental protectionist philosophy and over-zealous when trying to find and expose evidence of human degradation of the environment. For those outside that field, and for the ‘intelligentsia’ in general, I think support for this concept is primarily a matter of rubber-stamp political correctness. It is important to note that the official documented total average global warming since 1880 is less than one degree on the Celsius scale. Firey @ 4.05 pm After the awful floods in Queensland last year there were claims that they were caused by global warming/climate change. It seems that there has been another fabrication of [infrastructure] data. Queensland railway managers ‘falsified’ rail bridge safety inspections on the Central West line. The matter has been referred to the CMC (Crime and Misconduct Commission). source: Courier Mail news [Queensland], Australia. 
7/7/2012 Mr B, KR, I spent some time upthread on the explanation of the logarithmic nature of CO2, which the IPCC admits, and how that implies that additional CO2 over current levels is just not significant. I’ve posted that explanation many times in many threads to the likes of you two, and to date, have not had a single reply. Why is that? KR says: July 7, 2012 at 7:24 pm It’s warming faster than the first part of the century, faster over time, and that is by definition acceleration. I was given the impression that anything other than CO2, such as the sun, was basically negligible. So whether the warming now as compared to 70 years ago is for different reasons may be a matter of debate, but compare the following: Essentially identical 30 year slopes many years apart. #Selected data from 1912.33 #Selected data up to 1942.33 #Least squares trend line; slope = 0.0156268 per year #Selected data from 1982.25 #Selected data up to 2013 #Least squares trend line; slope = 0.0153446 per year David Falkner says: July 7, 2012 at 7:48 pm. ====================================================== I think what made this seem worse and more news worthy wasn’t that the power was out longer (By and large it wasn’t) but that people didn’t have AC during a heat wave. At least we didn’t have to rebuild wind farms or solar power plants to get it back on. Just curious- what does the map header mean when it says “Global Anomaly 0.001 C” Is that a sum of the over and under presented on the map and displayed in different colors? NYT has an article on the heat wave “Unrelenting Heat Wave Bakes All in Its Reach”. Here is a nice clip:. Hmmm, I guess we need to add agronomists to the list of co-conspirators along with meteorologists and climate scientists. Probably botanists, biologists, zoologists, oceanographers, microbiologists, astrophysicists, heat seeking missile designers… Wow, this list is getting pretty long. KR said (July 7, 2012 at 11:26 am “…So no, this particular heat wave cannot be directly attributed to climate change. But we’re certainly going to see a lot more more of them, with fewer cold events, as climate averages change and the weather dice get loaded more and more heavily to the hot side…” As I’ve said before, those who forget extreme weather events in the past are doomed to state all current weather extremes are unprecidented. If heat waves are going to get longer over time, then why is it, more than 89 years later, there is still one extreme heat wave that remains unbroken – the one in Marble Bar, Australia. The town set a world record of most consecutive days of maximum temperatures of 37.8 degrees Celsius (100 degrees Fahrenheit) or more, during a period of 160 such days from 31 October 1923 to 7 April 1924. If their “climate” is getting worse, if their averages are rising at tenths of a degree per decade, why hasn’t this record been exceeded or beaten? Weather or climate? pinetree3, its a one day map, the gray colors are warming areas too, and the large hot areas where people live are much hotter than the cold areas in Antarctica etc. Of course, we could say that the “real weather information” is being hidden by a collusion of weather people and meteorologists, as suggested in a comment by Wade earlier… except he somehow believes that weather data is controlled by James Hansen and Michael Mann — geez Wade, we ain’t talkin about tree rings or latewood density here in this post! 
I know you believe that weather data such as jet stream positions and tropospheric pressure (hPa) info is being hidden, but check around. You might just find this information, along with record high temperatures etc. Organizations like newspapers, TV, pilots, Weather channel, and even AW seems to be able to find information on weather. Here is some help… a site that can give a nice forecast map of the Arctic Region for tomorrow. Click the N.Hemi option and select 500 hPa. You will see a nice map showing an Arctic Dipole forecast for tomorrow (HP over the Canadian Archipelago and LP over the Siberian/Russian side of the Arctic Ocean). Enjoy. From the looks of the map to save themselves East coast iberals and Leftists lshould move to India, please. Can we please stop the statement that 95-96-97% of the scientist support the AGW theory. That lie has been thouroughly debunked for years.. I have to agree with Paul K2 that something weird is happening with the weather. It appears to be melting people’s brains for the first time in history. People have memories that tend to blur the past, not remembering it as harshly and realistically as it was. So, the sweat trickling down their armpits right now, is far more poignant than the same sweaty armpits in 1998. It is, frankly, the same softening of reality that makes us women look forward to having multiple children… we simply cannot recall the reality sharply enough to remind us of what it was like. A sleepless night five years ago is softly nostalgic, compared to a sleepless night occurring right now. It’s a function of being human. Cold kills more than heat does. We can dissipate heat much better than we can create it within our own bodies. Given the choice of a desert summer with adequate water, food and a shelter (but no fan or a/c), or an arctic winter with the same (and no external heat), which would most of us choose to survive? Missed the link in my last comment to the site showing weather forecasts for the N. Hemi. Select N. Hemi option and 500 hPa and see forecasts for 48h 72h etc. Paul K2; You might just find this information, along with record high temperatures etc. Organizations like newspapers, TV, pilots, Weather channel, and even AW seems to be able to find information on weather.>>>> But that’s the thing Paul. When we consider the historical records, what we find is that there’s pretty much nothing special about what it going on right now. Neither KR nor Mr B have responded to my point about CO2 being logarithmic, nothing but crickets chirping on that one. Apologies, should have included you in the list. Same question. Why does no one respond to that point? Paul K2; Historical record: Thriving Viking colonies with productive agriculture in Greenland hundreds of years ago. Explain. Archeological Record: Receding glaciers in Canada’s north have recently exposed hunting camps that are hundreds, perhaps thousands of years old. Explain. Geological Record; CO2 has been, millions of years ago, many, many, many times higher concentration in the atmosphere than it is now, but temps were lower. Explain. In the end, there is a simple test. See where temps are in 20-30 after the PDO/AMO cold cycles can co-incide. Use objective satellite guidance. Since earths temps the last 15 years have really not gone anywhere, one can easily argue that the earth reached the equilibrium of both the warm cycles of the PDO and AMO can easily explain and so we simply test the theory. 
Since the IPCC disaster scenario is busting and the co2 keeps rising, a 20-30 year test of basic climate cycle theory is reasonable. Anyone disputing that we shouldn’t allow the test, without destroying the chance for mankind to advance,, given the facts, is not. “It is caused by a persistent blocking high pressure pattern” Here we go again. Wasn’t that the same weather pattern that brought that Heatwave to Russia back in 2010. The Warmist Alarmists were going gaga over and blaming that on AGW/CC hysteria aswell? We’re getting all the hype here in NZ on the US Heatwave. Going on about 35C Temps in Washington DC. I thought the record for Washington DC was 41C set back in the 1930s/40s? By the way I have 2 questions for my detractors. 1) Is there any thing that can happen that would convince you your ideas are wrong ( for instance, in spite of the factual disconnect from co2 and temp, you deny that). So what do we need to see.. lets say a fall of .1C next 10 years, is that anything? and 2) Just what answer DONT YOU OWN? Is everything that happens a sign you are correct? For instance, if temps were rising along the lines of your forecast, than that would be very troubling to me with my position . but they are not. How come it doesnt matter to you, that the very forecast you used to start pushing all this down the throat of people is busting and busting badly and I see no reason for why any kind of rise that will even make it close will resume. In fact quite the opposite. You say its warmer than it was, true, but partly because it was so cold. But no matter what, ITS NO WHERE NEAR WHAT YOU FORECASTED We hear one excuse after another. We hear one scare tactic after another yet the forecasts disappear and you arent accountable. Arctic Ice? Laughable since your crew promoted an ice free arctic in 2012. You busted much more severely than anyone on our side of the aisle. You even twist what people say. I said we would return to the normal level of the late 70s by 2030, that was twisted to a forecast for a year after I said it ( 2009) and southern hemisphere sea ice WHICH OVERALL HAS BEEN RISING WOULD FALL BACK TO WHERE IT WAS. There is a simple explanation. Warm PDO/AMO have colder water around the southern ice cap, hence the increase, while the same cycle warms the northern hemisphere and the combination of the land locked ice cap surrounded by warmth and warm water attacking underneath melts ice. Much more plausible that the nonsense spouted about co2. But how do you get off making a forecast that is a far worst bust, and then saying what you are saying, especially in light of the the southern hemisphere sea ice, which you never bring up. Tornadoe’s: what the heck happened to another year of global warming tornadoes, That fell apart as was forecasted using a method that had nothing to do with co2. you had to shut your mouth till you got a heat wave in less than 2% of the planet and severe weather event that many of your ilk didnt even know existed ( Derecho). And your hurricane ideas are absurd, Of course you are waiting for them to return ( SO am I) , I am puzzled how the east coast has gotten off scott free almost given the pattern as I have been saying for 5 years now was going back to the 1950s, with the flip of the pdo. anyone notice how close the summers are, or the western N America, and far east winters. But none of that has anything to do with co2. It has to do with the flip of the pdo. 
So we get the Mckibbens and Sullens yelling about Irene, the Desslers and Norths about Texas drought as if they never looked at the weather in the 1950s. And of course the all knowing POTUS using drought in Texas to say its global warming, when if one looks at the time the earths temp actually was rising due to the warm pdo, TEXAS PRECIP WAS ABOVE NORMAL Its when it turns colder globally, it dries in the south ( see 1950s) I am amazed that no one in the msm has the guts to actually take these people up and question them as to exactly what they really know. Instead its hit and run,, take any given event, dont research it and run with it. Imagine what else is going on in other aspects of life In the end its a simple test. The world is in far more danger in the next 20 years from things like economic and political strife, partly brought on by the fact that people with this agenda are trying to handcufff the globes progress with failed ideas on social manipulation mixed with environmentalism that is based on ideology. That is the problem. I suspect many of you that actually do look understand that this is mainly cyclical and there is nothing that can be done except to accept and adept. And while you are at , the cold option carries more weight than warm and you will see that become clearer even to the most blind among you in the coming years, NoteL should be accept and adapt, of course I’ll get hammered on that Yeah – It’s hot! But it’s a wry heat… MtK TV Report from drought stricken Texas (tree die-off from last year’s drought): Business is booming in Texas, and not just the oil and gas business. Tree trimmers are raking in the bucks after last year’s drought killed an estimated 500 million trees. “I’ve been so swamped, we’ve had to call in reinforcements” from other states, arborist Glen Jennings told NBCDFW.com. The Texas Forest Service, which estimates 5.6 million trees died in urban areas, urged homeowners to be pro-active about removing dead trees — before they land on neighboring property. “Be aware that your tree could fall onto someone else’s property,” service official Jim Rooni said in a statement Thursday. “he rules vary from place to place, but generally the owner of the tree is responsible. Bottom line: You could be liable.” Jennings was stunned by the amount of dead trees across the state. “I, personally, have never been in the middle of something like this before,” he said. “Small droughts, yeah, but statewide?” Maybe we need to add arborists to the list of co-conspirators identified by anti-AGW special interest groups, along with climate scientists, meteorologists, agronomists, etcetera… At what point does the conspiracy theory against scientists become ludicrous? P.S. And Joe Bastardi just leveled a tirade against mainstream news media… they must be in this massive conspiracy as well? It’s not about *this* extreme, its about the recurrence of the collective basket of extremes Joseph Adam-Smith says: July 7, 2012 at 11:32 am I really do wish we could have Global Warming in the UK. The weather is cold and, to put it mildly, wet! Rainfall in England in June was 277% of “normal” for the month, defined as the 1971-2000 average. Doug Eaton says: July 7, 2012 at 2:13 pm Blaming the Eastern U.S. heat wave on global warming is just as follyish (?) (okay, foolish) as blaming the cooling wave in the UK right now on global cooling. But that won’t stop them. I haven’t seen anyone claim that the poor,very wet weather in UK is “global cooling.” Please provide reference. 
What I am pointing out is that a hotter than average spell of weather for a couple of weeks in Eastern US is not “global.” Other parts of “global” are having cooler (than average) weather. The point being that is is all WEATHER, not climate.] made a career and very nice living out of perpetuating the scam…” There. That’s what you meant to say. The climate scientists that I have been reading are all reluctant to attribute any isolated spell of anomalously warm weather to global warming. They say, over and over again, that global warming is manifested by warming TRENDS that are GLOBAL in extent. The current heat wave in the US, by itself, is neither of these. But they also say that the current heat wave could be part of a trend, attributable to global changes in climate. They will, no doubt, be studying this question closely. In the meanwhile, for the past several years, worldwide, there have been twice as many record high temperatures recorded as record lows. Looks like a global trend to me. And I would add that for years after we bought our house in 1991, the rhododendron by our front door bloomed in mid-June. For the past 5 or 6 years, it has been blooming in the third or fourth week in May — as have the rhododendrons throughout our neighborhood. A small isolated observation, no doubt scientifically worthless …. For Reference: Here is documentary style video presenting a theory that seems to explain both long-term and short-term climate change as a result of extra-terrestrial influences, both solar and galactic. It is important to note that clouds serve as indicators of ongoing condensing convective activity–a primary form of heat transfer from the surface of the planet. Svensmark: The Cloud Mystery “Uploaded by rwesser1 on Jul 24, 2011″ 108 likes, 9 dislikes; 8,812 Views; 62:46 min Henrik Svensmark’s documentary on climate change and cosmic rays. Well, here in the UK we’re having one of the worst springs/summers I can remember. March was unseasonably warm; April was the wettest ever; May started likewise for 3 weeks, then we had a mini-heat wave for 10 days. June has been very wet and only a day or so above 21C. July has started very wet and cool also. That’s on the back of three winters with far more snow than average and much colder than average in at least one month also. No sign of global warming here in the UK. Marian “It is caused by a persistent blocking high pressure pattern” Here we go again…. Yep, pay attention to descriptions of weather events that include that word ‘blocking’. It is the description that tags what looks like a possible Global Warming phenomenon. And one that till recently hadn’t been appreciated that well. Possibly a disconnect between the Meteorologists and the Climatologists. A big factor in weather patterns are the Polar Jet Streams. Relatively high speed and high altitude eastward travelling air flows, they act as barriers, dividing climate zones because weather systems can’t easily cross them. So in the northern hemisphere, this tends to divide the colder polar weather patterns from the warmer mid latitude weather. But the jet streams don’t just travel across at one latitude. They meander north & south. When it meanders south, colder air from the arctic can penetrate southwards – colder weather. When it meanders north, warmer air from lower latitudes can penetrate north – warmer weather. 
And if the meandering of the jet-stream happens to freeze in place for a period, which it does from time to time, then whatever type of weather is happening nearby tends to intensify – warm gets warmer or cold gets colder. This is what happened in western Russia in 2010. Warmer air from the Mediterranean was able to keep moving northwards because the Jet Stream had ‘locked up’ for a time. Our weather, no matter where in the world you live, is best when it cycles frequently between cooling and warming influences. Too much of any factor is bad. So what is happening to the Jet-Stream? Preliminary research (this is still an emerging field of study) is suggesting that the North Polar Jet Stream is slowing. It is meandering further north & south. And it is becoming more prone to ‘locking up’ for periods. All of which isn’t good news; that will definitely lead to more extreme weather events in the regions influenced by it. More snowstorms and rain if you are north of it, more heatwaves, drought and fires if you are south of it. The recent intense thunderstorm activity in a band across from Chicago to Washington DC, a phenomenon called a Derecho, may have occured because of this. A large warm air mass moves north from the regions like Colorado where the fires are happening. This comes up against the Jet Stream which forms a barrier to it’s further northward movement. All that energy then starts to spill out sideways, following the normal pattern of air movement eastwards. And thunderstorms start plowing eastwards towards Washington DC. So, what is Climate Change doing to this pattern? The strength of the Jet Stream is driven by how large the temperature difference is between the equatorial latitudes and the poles. This temperature gradient drives an energy flow towards the poles. And one of the major places this energy ends up is in the JetStream. So as Global Warming progreses, the northen polar region is warming faster than the tropics. So the temperature difference between these two regions is dropping. With the result that the strength of the Jet Stream is dropping. More meandering. More frequent ‘lock-ups’. And so more frequent occurances of extreme weather events. AGW predicts more extreme weather. And this looks like a significant mechanism that drives that. The temperature difference between the Arctic and the Tropics declines. The Jet Stream slows and meanders more. And the weather systems bounded by the Jet Stream become more intense. So how can the US be suffering heatwaves while the UK is seeing a very wet summer? They are on opposite sides of a slower, lazier Jet Stream Why aren’t we seeing this down here in the southern hemisphere? The Southern Polar Jet Stream typically runs further south of us here in Australia, New Zealand, Southern Africa. The Southern Hemisphere has far more ocean than land that tends to moderate weather extremes. The Antarctic isn’t yet seeing the major temperature changes that the Arctic is. So the basic driver of this isn’t there. And if it was happening, it would be happening down in the Southern Ocean where we don’t pay much attention You forgot the “in part” or “largely” qualification that was in the originals. The debate isn’t about AGW, it’s about CAGW (positive feedbacks and tipping points). There’s no 95% consensus on those, not remotely. The US should be sued in World Court by the ROTW (Rest Of The World). Eastern and Central US has stolen all the rest of the world’s summer heat. 
It is even willing to put up with massive storms trying to keep cold air from getting in, in order to selfishly deprive everyone else! Those responsible should be put on ice, and left there until they’re terminally frostbitten. It’s only fair. History will show that all the authorities weighing in on this topic were intellectually corrupt, self-serving, grant-money-chasers except for the few disinterested, impartial experts who were paid large sums of money by the fossil fuel industry and the libertarian think tanks (funded by the fossil fuel industry) to explain to the public just how corrupt all those other people were. @Glenn Tamblyn Are you saying that this is a new phenomena, never happened before? Persistence of the Greenland below jet stream pattern is surely a key feature of interglacials as the Greenland above is of glacials. Historical evidence would suggest this is merely a natural response to natural process. Everything that can possibly be attributed to AGW will have happened before — many times. But it does not follow that what is happening now cannot therefore be attributed to AGW. It all depends on the specifics of the case — on actual things, and not on references to history’s habit of repeating itself, or natural cycles, and so on. K2 wrote: “With respect to temperature records, or in fact any record of a varying parameter over time, it is completely expected that the number of record extreme events will decrease over time”… and… “The telling number is the ratio of record highs to record lows. Changes in that ratio reflect changes in the base statistics, changes to the average temperature. ” PHWEEEP! Five-yard penalty for logical contradiction. (Ok, I know, but I’m from the USA and we play mostly American football over here.) Unlike Herr Dr. Prof. Schroedinger and his cat, you are not allowed to assert that a statistical process is simultaneously stationary and non-stationary. Trouble with this posting is that NOBODY will see it only the few hundred here. if a major newspaper had this article front page it would mean something. Anyway ther is NO global warming check AMSU data. Actually July looks like itrs gonna be quite coolish globally haha diogenesnj, Help me understand where the contradiction in what K2 wrote lies. I understand him to be saying that as we approach some natural limits in the physical world, fewer and fewer record low and high temperatures will be broken; still, as long as some record highs and lows are being broken, there will be a ratio between the two types of broken records, and that this ratio is significant for our understanding of climate change. Of course, all of our records are simply for the period of time during which observations have been made; and if we go back far enough in time, we will encounter new circumstances and new natural limits. But our concern is for what will happen under present circumstances with present natural limits; we are worried about the world that our children and grandchildren will have to live in. So far, then, K2’s observations seem reasonable; please advise.. My personal experience with the “Honesty” and “Integrity” of scientists is that it is rare, most will go along with the herd or with higher authority rather than stick their neck out. In my entire career I found only one other person willing to stand up for what was right instead of going along with what was easiest. She was also fired for her honesty. Most people are followers not leaders. 
I have read somewhere only one in two hundred is actually a leader and to control a group all that is needed is to identify and break that leader. That is what saying there is a “Consensus” and the labeling and denigrating of those who don’t go with the flow is all about. That practice alone should make people wonder about “The Science” Real science is about the quest for truth and facts not following “Authority” not being a member of the “A” list. Here is the current state of “Honesty” in Science: More articles about the lack of honesty in science. A Sharp Rise in Retractions Prompts Calls for Reform ScienceDaily: US Scientists Significantly More Likely to Publish Fake Research, Study Finds A few individual…. UConn officials said their internal review found 145 instances over seven years in which Dr. Dipak Das fabricated and falsified data, and the U.S. Office of Research Integrity has launched an independent investigation of his work. The inquiry found that Stapel, former professor of cognitive social psychology and dean of Tilburg’s school of social and behavioural sciences, fabricated data published in at least 30 scientific publications, inflicting “serious harm” on the reputation and career opportunities of young scientists entrusted to him. Some 35 co-authors are implicated in the publications, dating from 2000 to 2006 The United States Attorney’s Office..announced that a felony Information has been filed …. During the time period alleged in the Information, Grimes resided in Boalsburg, Pennsylvania, and was a Professor of Material Science and Engineering at The Pennsylvania State University. LISTINGS: Retraction Watch .naturalnews.com:Scientific fraud news, articles and information Many here at WUWT have a degree in science, engineering or the maths. That is why we smell something very fishy with the IPCC and “The Science” This is what Forty citizen auditors found when they looked at “the United Nations’ Nobel-winning climate bible.. the gold standard.” Sorry, the more we dig, and look at the data we can get our hands on (as any true scientist is required to do) the more it stinks. “The Team” knows this and that is why the data was not released upon simple requests, Freedom of Information Acts and when push finally came to shove the data was “Lost” Phil Jones: The Dog Ate My Homework From the “A goat ate my homework” excuse book: NIWA reveals NZ original climate data missing Lonnie and Ellen, A Serial Non-Archiving Couple Eduardo Zorita, Scientist at the Institute for Coastal Research, specialist in Paleoclimatology, Review Editor of Climate Research and IPCC co-author, calls for barring Phil Jones, Michael Mann, and Stefan Rahmstorf from further IPCC participation If you want more on the supposed “Integrity” of those you seem to believe in see: WUWT Climategate links KR says: “So no, this particular heat wave cannot be directly attributed to climate change. But we’re certainly going to see a lot more more of them, with fewer cold events, as climate averages change and the weather dice get loaded more and more heavily to the hot side…..” ———————————————————————————————————– Thats what they said after Katrina. That there would be a lot more of them and they would become the new normal. Chipotle says Along the equator it is cooler than average ——— The map shows the equator as white or grey . . The map key has that as 0 to +3. So no. 
a) Texas is not in ‘drought’ (okay, * perhaps far west Texas, but that can change in a comparatively short period of time like it did with the rest of the state); you write that as if it (Texas) were still ‘in drought’ in its entirety, which would certainly be an untruth (our reservoirs, our source for drinking and lawn-watering water are in GOOD shape this year). b) Planting region-inappropriate trees and shrubbery is the #1 reason for ‘tree deaths'; I lost several region-inappropriate deciduous trees on account of 1) a late frost (last year) after out-leafing and 2) the stress from a long, hot summer and through accumulated stress factors finally allowed WOOD BORES to overtake them whereas other deciduous trees are doing just fine … Continue to ‘talk your book’ brother; it’s all you’ve got. * . @Paul K2. You. Glenn Tamblyn’s comment marked July 8th at 3:00 AM is an excellent summary of the current theory covering changes in extreme weather patterns driven by polar amplification. Anyone interested in learning about the recent changes in extreme weather should read it. I might only add that observational evidence shows that the Arctic sea level pressure (SLP) has been rising over the historical average. This higher pressure decreases the driving force for the jet stream. Please see Figure S4 in the Cohen paper that shows the observed Arctic winter SLP trends (and shows the models didn’t predict the rising trend!). Also KR’s comments appear lucid and accurate. I suggest readers might re-read this thread, concentrating on these commenters, and then an accurate understanding of two important points will emerge: 1. Extreme weather events will become more common due to overall rising temperatures, in the case of the mid-latitudes in the NH, the temperatures are rising over 1 deg F every ten years in many places inhabited by people (such as the Midwest states bordering the Great Lakes. This temperature rise “loads the dice” and increases the chances of extreme heat events, and increases the chances of severe precipitation events due to higher atmospheric moisture. 2. Meteorologists are now collecting data that show the decline of the Arctic ice pack has increased the seasonal SLPs in the Arctic, and has changed the amplitude and movement of the Rossby waves in the jet stream, creating more stable and persistent weather patterns. Blocking patterns in the jet stream results in extreme weather events. LazyTeenager, your bias is blinding you…there is blue, white, and gray, so -3 to +3. overall – just by eyeballing it – its mostly around white with the grays and blues about balancing. about 0 on average. @ Jesse Fell Ah, the specifics of the case. Natural cycles have no reality in the anthropocene as all is different. Reminds me of an UK Chancellor of the Exchequer who decreed the end of boom and bust. Science is best guess, informed by empirical evidence. That the linear models do not fit the real world, with no sign of the predicted signatures of unnatural climate change, suggests that the models have been falsified and that more attention should be given to cycles and empirical evidence. Joseph Bastardi – A simple test? Certainly. Recent warming (last 30-35 years) has been on the order of 0.16-0.18 C / decade for the surface record, a bit less for the tropospheric. If the ongoing temperature record shows enough change to reject that rate of change at 2σ certainty, then, and only then, I will agree that the rate of warming has decreased. That shift has not happened. 
Short timeframes – 15 years (try 16 years, not starting with the 1998 El Nino peak, you get a completely different slope), 10 years, 3 years (Smokey – seriously?!?) – have too little data in the presence of ENSO, insolation, and just plain weather variation to establish a trend change. You could, of course, attempt to account for and remove variations (ENSO, insolation, aerosol, etc), as Foster and Rahmstorf 2011 did; then 10-12 years should be enough time to establish trends. They, however, found no indication of reduced trends. Eric, A cycle of nature is a sequence of events that is repeated at regular intervals. Like all other events, these events have causes. Too often, when people challenge the AGW thesis, they refer to cycles of nature as if they were absolute, autonomous, uncaused — but for a cycle of nature to be accepted as a serious alternative to the AGW hypothesis, these causes need to be identified. Otherwise, we are left with the explanation that increasing atmospheric CO2 isn’t causing temperatures to rise because — just because! davidmhoffer – “I spent some time upthread on the explanation of the logarithmic nature of CO2, which the IPCC admits, and how that implies that additional CO2 over current levels is just not significant.” The effects of increased CO2 (given current concentrations and any range we’re likely to see) are indeed logarithmic. If you look at the data for CO2 increase, however ( for data since 1958) and take the log of that increase, you will see a curve that is greater than linear (upwardly curving). You can do that easily in Excel, just insert the data in and take the Ln( ) of the concentration column. That means that CO2 increase over that period is greater than exponential, and that CO2 forcing is increasing faster than linearly. Personally, I would consider that significant. Maybe it is your education (indoctrination?) on select subject and lack thereof on others that blinds you; it is very possible you have no idea of the workings of ‘weather’ and how that integrates over time into ‘climate'; so let’s assume you are just short knowledge-wise, IOW ignorant, of weather processes. Perhaps something like The American Weather Book would go some ways in resolving this exhibited and repeated ignorance on subject. . ****. **** You need to go back just a few more yrs. In western MD, I distinctly remember the miserable heat in the mid-60s drought-summers (highest was 106F on our thermometer in ’66) when the whole family had to retreat to the mildewy basement to sleep in cots. Thank goodness for AC some yrs later. For comparison, the high yesterday here from a rural location was 96F. Jesse, The attempt to use CO2 as Occam`s razor to cut through the Gordian knot of climate complexity leaves an awful lot of loose ends, and the models which depend on it have been falsified. Time to look again at the cycles to understand the because. American journalists should feel ashamed at this East Coast, Washington centric news bias, it comes over as both elitist and rascist (as they seem to be saying other people who suffer similar weather are not as news worthy) as should those psuedo-scientists who feed them the alarmist nonsense. Paul K2~ I would also like to hear your take, as per Steve of Rockwood To reiterate: ‘You. 1. No doubt they’ve already decided on the answer, and are just accumulating evidence (and discarding counter-evidence). This is how a religion operates. 2. 
Looks like the expected result if half the thermometers are in places affexted by UHI or microclimate. 3. In the early 1900’s oranges were grown in Florida at least as far north as Cross Creek (north of Ocala). Now the groves have died back down to the Orlando area because of frequent freezes. Just mentioning that for those who think that an early-blooming flower is evidence in the context of cAGW. Jesse Fell says: Please show us these ‘large’ sums of money from the fossil fuel industry going to skeptics. I’m stilll waiting for mine. The oil and gas companies pour far more money into the CAGW religious coffers than they do to libertarian think tanks. I would welcome a simple public debate by the proclaimed experts on the CAGW subject, but just about all of the Climonista’s refuse to do so. The resistance to FOIA requests and lack of desire to publicly debate speaks volumes, no? If things are so obvious, it should we an easy thing to prove in an open forum for all to see. Here is an example of how this subject should be handled with free and open dialogue and full transparency. >> Glenn Tamblyn says: July 8, 2012 at 3:00 am … << Do you have a pre-2005 reference to some CAGW priest predicting this effect? Otherwise it's just another example of the CAGW religion coming up with another desperately lame explanation as to why their predictions for continued warming failed. In North America we have man made heat-waves and hot dry conditions as a result of Anthropogenic Climate Change (apparently). In the UK & Ireland we have man made heavy rain fall, cold damp conditions and localized flooding as a result of Anthropogenic Climate Change (apparently). Right across the European Continent and Eurasia we have a wide variety of conditions including thunderstorms, floods and varying degrees of extremes such as hot deserts, snow caped mountains and glaciers, are we to believe that all of these geographical locations and their different environmental conditions (or extremes if you like) are man made as a result of Anthropogenic Climate Change? I think we’re in a era of agenda driven media under political influence for the most part, natural disasters do happen all the time all over the world like they have done in the past, they are still as ferocious today when they happen, although there wasn’t the technology to document all these events when they happened in the past, today almost no heavy rainfall, heatwave or other meteorological event goes unreported or documented by the public and reported by our media and jumped upon by our out-of-touch political classes and used to push ridiculous policies that would otherwise be laughed at. Yesterday on the news, I watched a politician declare that the recent bad weather and floods in the UK was a result of a changing climate (Anthropogenic Global Warming) and that scientists have been warning us about these extreme events for years, this was an obvious arrogant attempt to try and justify their stupid climate change laws, failures or what ever else they’re peddling to the public. If the news interviewer had a set he would reply with– “The UK is cold, wet, damp, overcast, gray, foggy and flooding this summer how is this “Climate Change” (Anthropogenic Global Warming) you fecking daft idiot!!” Their TV ratings would also go through the roof!! lol Recent unusual weather events in the Northern Hemisphere have nothing to do with Climate Change . 
Footage produced by the UK Met Office demonstrates how the path of the Jet Stream has recently changed from its ‘normal’ route. This has resulted in significant increases in temperature to the south of the Jet Stream e.g. USA, together with dramatic increases in rainfall to the north of the Jet Stream e.g. northern Europe and particularly the UK. I understand that the Jet Stream is prone to wander off course from time to time but the cause of this is not known. The Jet Stream clearly has a major impact on weather patterns but I cannot recall reading anything about its effect on climate physics. Have the Jet Stream effects been considered in the GCMs I ask? Exactly my point. It’s time to look at the cycles — in particular, the things that are causing the cycles to happen. I am merely trying to say that in challenges to the AGW hypothesis, I often find “cycles” being proposed as explanations sufficient in themselves. They aren’t. scarletmacaw, You could start here: But that’s just a start. The money trail from oil companies to climate change denial groups has been traced by lots of investigators — a little Googling on this subject will turn up a wealth of information. Do you really believe that companies such as Exxon would keep their hands in the deep deep pockets while researchers say that burning fossil fuels is a threat to the welfare of humanity? [SNIP: Let's not go there. This is, after all, a family blog. -REP] Chipotle says: July 7, 2012 at 11:20 am RE Mexico v Greenland comparison In our real world Mexico is nearly the same size as Greenland (which here appears many times larger than Mexico). Both are about three times the area of Texas. From the CIA Factbook: Mexico: 1,964,375 sq km, slightly less than three times the size of Texas Greenland: 2,166,086 sq km, slightly more than three times the size of Texas Just trying to clean up your point, which is a great one. A picture is worth a thousand words and a number outweighs a thousand activist views: July 7, 2012. Global Anomaly 0.001 deg. From your link: Sorry, I asked for evidence of ‘large’ sums. Those numbers are just a drop in the bucket compared to the funding from fossil fuel industries to CRU, Greenpeace, and other church leaders. LazyTeenager says: July 8, 2012 at 6:03 am RE comment RE Chipotle says comment Terminology may have been imprecise, but the point stands – the higher latitudes of both hemispheres do appear to hold more of the above-average temp in this map projection while the lower latitudes appear to hold more of the below- (and near) average temp. Actually, Jesse, they are sufficient. The reason that they are sufficient is because it doesn’t matter why they are happening — that they happen at all, and have been for tens/hundreds/thousands/millions of years without human influences of any kind, gives them the status of “natural variation.” Merely because a natural cycle is currently unexplained does not mean that ANY explanation is the right one. We look at historical temperature records and see that the earth has been gently warming since the end of the Little Ice Age — long before human-produced CO2 was present in large quantities. We look further at those records and see spikes and dips in the record, and with modern technology we relate those spikes to El Nino events and dips to large aerosol releases by volcanoes. 
With such a long record we see many natural cycles in warming and cooling — and not knowing their exact causes does not make them go away any more than the Sun did not exist before we could explain nuclear fusion. Tides existed before we knew about gravity. Disease existed before we had germ theory. Natural cycles are not to be ignored because we don’t know their causes. They ARE, and that is enough. For the last time this is just the weather and not the climate. Where is the evidence that extreme weather events have worsening trends??? Are Warmists so desperate that they have not resorted to pointing at regional weather events to illustrate Catastrophic Anthropogenic Global Warming. We had record cold in New Zealand at the same time as the heatwave in the US. So what! It’s just the weather and not the climate. Alarmists please read about bad weather. KR; That means that CO2 increase over that period is greater than exponential, and that CO2 forcing is increasing faster than linearly. Personally, I would consider that significant. >>>>>>>>>>>>>>>>>> Very nice try. 1. There was a dramatic and rise in use of fossil fuels last century which has since levelled off. You cannot take a trend that no longer exists and extrapolate it out to the future. 2. We’ve reached peak oil, or very close to it. The rapid rises in fossil fuel use of the past are no longer physically possible. The forces that drove the trend in the past are physically incapable of continuing to increase at that pace, and hence cannot drive CO2 levels at an increasing pace. 3. The curve of CO2 concentration is SLIGHTLY exponential. Even ignoring the issues raised above, the exponent is so small that it is dwarfed by the logarithmic effects which govern CO2 increases and their cumulative total forcing. 4. Temperature itself is a negative feedback. w/m2 of forcing varies direcly with T raised to the power of 4. As an example, it requires just 2.9 w/m2 to raise the temperature from -40C by one degree, but it takes 7.0 w/m2 to raise the temperature at +40C by one degree. So nice try. But unless you assume massive increases in fossil fuel use, orders of magnitude beyond what we are physicaly capable of, the past increases in CO2 due to human activity can hardly be useful predictors of future concentrations of CO2, or of the forcing in w/m2 that may be attributed to them, and hence any temperature increases that might result. In other words, what I originaly stated stands. At current consumption rates, it will take about 200 years for direct forcing from CO2 to increase by a single degree, and probably longer since the biosphere is responding with increased uptake. Concentrations beyond 400 ppm are simply meaningless in the context of the amount of CO2 human beings are capable of putting into the atmosphere. Jesse Fell, in your various comments you reject the idea that climate change is principally due to natural causes unless someone can describe specific drivers which explain the changes better than the CO2 global warming hypothesis. Otherwise you will believe the CO2 hypothesis. In other words, you regard the CO2 hypothesis as the null hypothesis. This is where you make a serious error of logic. The world has existed for 4.5 billion years, during which time the climate has been constantly changing. 
Even during the short duration of the Holocene we have seen regular warmings and coolings: the Little Ice Age, the Medieval Warm Period, the Dark Ages, the Roman Warm Period, the [enter name], the Minoan Warm Period, the early and late Holocene Optima. None of the earlier warmings was caused by CO2 and none resulted in Thermageddon. It is untenable to take any position other than that the null hypothesis is “climate change is natural”. It is up to the CAGW believers to establish their case otherwise the null hypothesis pertains. It is not necessary for skeptics to propose a specific alternative climate mechanism. The CAGW believers have utterly failed in this regard because :- 1. There is nothing happening which is outside the envelope of past natural variability. So there is nothing requiring a non-natural explanation. 2. The proposed mechanism of the CO2 hypothesis depends crucially on strong positive water vapor feedbacks whereas empirical evidence shows these feedbacks to be negative. Paul K2 says: July 7, 2012 at 1:21 pm Its peculiar that so many readers here seem to deny that much of the middle part of the US mainland has had some extremely hot weather recently. Most people who live there seem to be asking “What’s UP with this extremely hot weather?” and “When will it end?” As the second graph CLEARLY indicates, “what’s UP with this extremely hot weather” is that it is June and July, and this year’s June and (early) July are nothing unusual. This year falls a good bit short of 1988 and 1994, and doesn’t outdo the more recent memory years of 2002 and 2006. Obviously you did not read/view the entirety of the short article at the top of this page or you did and you just choose to ignore the data. I’ll follow Anthony’s lead on this and ignore further posts by you as yours to date display a lack of interest on your part in truth. James, If we don’t know why cycles are occurring, we don’t even know whether what we are looking at are in fact cycles. We don’t know, for example, whether what we are seeing are cycles, or simply unrelated series of events that coincidentally have the same effect on temperature and climate. And certainly, if we understood the reasons behind the cycles, we would have a better understanding of the dynamics of climate, and would be better able to assess the effect of adding billions of tons of CO2 to the atmosphere every year. Cycles in themselves may be sufficient for talking points, but not for science, and not for policy that hopes to base itself on the best scientific understanding. I’ve elevated Gail Combs comment to a post, and continued showing examples of why this event is weather and not climate. Steve Divine: The chart shows entire US temperatures June anomalies. We are discussing changes in regional weather patterns related to the “Eastern US heat wave” as stated in the post title. The northwest US has has some very cool temps because of the location of the jet stream in June. The middle part of the country got clobbered with a heat wave, that moved east. , The map at the top of this post shows the high temperature region. This pattern alternating cold and hot regions is what we see when extreme hot or cold spells extend over a period of time. Regions in the downward dipping bends of the jet stream get cold, and regions in the upward dipping bends get hot. The current theory being discussed by meteorologists say the jet stream is slowing, stalling, and meandering more, causing more extreme weather events. 
Glenn Tamblyn has an excellent description above in his comment timestamped July 8 at 3:00 AM. dcfl51, OK, let me start by saying that I am curious about why the Northwest Passage has become navigable — at least in summer — during the past few years. That is, navigable by ships that aren’t equipped with ice-breaking prows. This is a dramatic change. I’d like to hear a plausible explanation. The AGW thesis is a plausible explanation. I agree that plausibility is not proof; but in the natural sciences, you never arrive at a logically inescapable QED; you can get only higher and higher degrees of plausibility. And. I have not seen an explanation for the recent changes in climate that comes near to approaching the AGW thesis in plausibility. I have read about cycles of nature and natural variability being the cause, but this explains nothing if it is left at that. Is there in fact a cycle that causes the Earth’s glaciers and arctic ice cap to shrink dramatically within, for example, the amount of time since bell-bottom trousers were fashionable? I am eager to hear all about this cycle, as well as the causes that bring the cycle about. Until then, the AGW thesis is the best explanation that we have going for the change that must be making Frobisher and Hudson and Baffin spinning in their graves with envy. davidmhoffer – “There was a dramatic and rise in use of fossil fuels last century which has since levelled off.” False. “We’ve reached peak oil, or very close to it. The rapid rises in fossil fuel use of the past are no longer physically possible.” Peak oil, perhaps. Peak coal? Peak shale gas? Peak natural gas? No, false again, there’s plenty of carbon we appear ready to burn. “At current consumption rates, it will take about 200 years for direct forcing from CO2 to increase by a single degree, and probably longer…” False once more. We’re currently at ~0.8 C above where we would be without anthropogenic warming. At 0.16 C/decade, the current rate of increase, we’ll see another degree C in under 65 years, not 200. Finally: “The curve of CO2 concentration is SLIGHTLY exponential. Even ignoring the issues raised above, the exponent is so small that it is dwarfed by the logarithmic effects which govern CO2 increases and their cumulative total forcing.” This is _so wrong_ I find it a bit difficult to even begin. CO2 levels are increasing at a rate greater than exponential. This means that CO2 forcing is increasing at a rate greater than linear – meaning the forcing imbalance will increase, and hence warming rates will increase. You are essentially (and quite incorrectly) invoking the logarithmic relationship twice. Jesse, Since temperature is a component of climate it it perhaps not surprising that the same effect is seen. Our data rich period, with increasing temporal and spatial resolution is too short to establish the nature of cycles, but much palaeo- and historical evidence suggests they exist and existed before any attempt to explain them. Identifying patterns in data does not not depend on explanation. At present our attempts to explain change as being a direct result of CO2 emissions is failing and should not be used to determine policy, as our best guess is so inadequate. Jesse, I am intrigued that you think that modern Arctic conditions are unique. Given our technology we have the ability to observe and record synoptically, an ability Frobisher et al would have loved. During warmer periods in the past similar conditions could have occurred without anyone knowing about it. 
What is different about now? Eric, What’s different now is that in Frobisher’s time he couldn’t get through because of all the &*$$ ice! Eric, If cycles are at work now, why can’t we figure out what’s behind them? We aren’t dealing with events of ages and ages ago; we are dealing with what is happening now. We ought to be able to get to the bottom of it — and wouldn’t it be a coup for the critics of the AGW thesis to point to the REAL moving forces behind climate change? It would be a death blow to the AGW thesis; James Hansen would go into real estate. But the “cycles of nature” explanation has not been fleshed out with understanding of the nature and cause of these cycles — at least the ones that are supposed to have caused, within the last few decades, the dramatic shrinking of glaciers, the warmer nights, the rising sea levels, and the increase in the annual total energy of tropical storms, and so on. KR says: “Recent warming (last 30-35 years) has been on the order of 0.16-0.18 C / decade …” Cherry-pick alert!! KR’s cherry picking begins at the same time the latest step change begins. However, by looking at a long term trend chart, we see that current temperatures remain on the same trend line they have been on since the LIA. There has been no acceleration in global temperatures, thus falsifying the conjecture that the 40% increase in CO2 has any measurable effect. Paul K2 says: July 7, 2012 at 8:44 pm … except he somehow believes that weather data is controlled by James Hansen and Michael Mann ================================= You forgot the “et al”. Gail Combs says: July 8, 2012 at 5:47 am .” Gail, Thank You, for looking the bastards straight in the eye and telling them “No. I won’t do that!” I have faced similar circumstances. I was pressured to ‘sign off’ on an inferior and potentially hazardous design. After repeated refusals, I was asked by a smirking toad “What if you are not given a choice?” Let me say right here, I don’t react well to threats.. and even more poorly when they are delivered with a smirk. I stood and, supporting myself on my knuckles, leaned as far over the table between us as I could. I told the no-longer-smirking pratt “You tell your boss these exact words: I Always Have A Choice! Tell him those exact words. Now, Get Out Of Here!” I didn’t get fired… and the needed changes were made. The moral to our mutual experiences is ‘You have to do what is right, regardless of the consequences. Self respect demands it’ MtK Current ships are equipped with GPS and get daily satellite photos of the Arctic ice. What makes you think there weren’t times in the past when ships could have navigated the NW Passage? Certainly it’s much less risky to do so with modern information technology. Jesse Fell, Your first logical error is assuming that scientific skeptics must replace the CO2=CAGW conjecture with another conjecture or hypothesis. You don’t seem to get the fact that skeptics have nothing to prove. And your emotional examples are nothing but alarmist pseudo-scientific talking points. Both tornadoes and hurricanes are decreasing in severity, despite your claims to the contrary. Also see here and here and here and here. And “dramatic shrinking of the glaciers” is simply more unscientific emotionalism. Glaciers have been receding since the LIA. What is so unexpected about that?? And sea level rise has been decelerating. Since all of your claims are provably wrong, maybe you should spend a few months reading the WUWT archives, and get up to speed on the subject. 
KR; This is _so wrong_ I find it a bit difficult to even begin. CO2 levels are increasing at a rate greater than exponential. This means that >>>> What this means is that you haven’t got a freaking clue what exponential means. You’ve stated something that is a mathematical impossibility. KR; So here’s carbon emissions by fuel type courtesy that paragon of virtue (ie most biased source possible) wikipedia: Note that: 1. oil grew rapidly from about 1940 to about 1975 and then pretty much levelled off. 2. coal grew more or less linearly with a couple of brief 5 year upticks 3. natural gas grew at barely a little above linear from around 1950 on. The cumulative graph (black line) looks like an exponential curve, but in fact is not. It is constructed of several other curves, each of which is added in at a later date in time. So in response to your quip that we have not reached peak coal, gas, etc, the answer is maybe, maybe not. What we have reached however is cost of oil that makes dramatic increases in additional oil production uneconomical. The uptake of coal and gas is in part a consequence of that, but note that they too are tapering off. We’re by no means running out, but we’re by no means in a position to ramp up consumption in the manner we did in the last half of the last century. You can see by the graph that oil consumption has flattened out, so has coal, and so has gas. Unless one or more of those is poised to “take off” in a major way in the next decade, that scary black line is going to flatten out too. Don’t lecture me about reserves either, go do a survey of all the oil, coal, and gas companies in the world and ask them what they could, based on their current capacity, ramp up as year over year maximum production. What you will find is that they cannot even get close to linear, let alone exponential. Assuming that an exponential curve that is bounded by resource depletion can extend into the long term is a fools game. It is the very math upon which a Ponzi scheme is based. Don’t feel bad about being bamboozled by the numbers, lots of rather smart people have been taken to the cleaners by a Ponzi scheme before. You compound your poor grasp of math (see my comment about “greater than exponential) above by assigning 0.8 degrees of warming as being 100% attributable to human GHG emissions. Given that global temperatures have been rising for the last 400 years, since the LIA, and so about 350 years before CO2 emissions became significant, you cannot assign ANY number to human GHG’s without first subtracting natural variance which we currently have NO way of calculating, we can only make an WAG. Putting aside for a moment your lack of understanding of exactly what an exponential curve is, what the actual peak oil/gas/coal supply curves actually are, and that no finite resource can support an exponential depletion for an extended period of time, and that you have conflated natural variation with human induced variation, let us return to the matter of CO2 being logarithmic. At current rates, it will take 200 years to add one additional degree of warming form CO2 forcing. Given that the human species thrived at temperatures less than that in the MWP and RWP, I am not particularly concerned (optimistic in fact). To illustrate how ludicrous your position is, let us consider a whopping TWO degrees of warming from CO2 increases. That would require TWO doublings of CO2. From 400, that means 1600 ppm of CO2 to get just 2 degrees of warming. At current rates, that would be….. 600 years. 
So I will tell you what, let’s assume that we TRIPLE production of ALL fossil fuels starting TODAY. In 200 years, we would get 2 degrees of warming. That’s 0.01 degrees per year. Oooh, I am scared. Wanna go for three degrees? You’ll need about a thousand years with every fossil industry we have going flat out and burning everything they dig up even if nobody needs it. As an aside, keep in mind that the 3 degrees occurs at the “effective black body temperature of earth” according to0 your prescious IPCC, which is about -20C. Surface temperatures being about 15C, you’ll actually only get 2.1 degrees of warming at surface, not 3. You’ll also get almost no change at high noon at in the tropics but large changes at high latititudes in depth of winter at night. The increases in temp at high noon in the tropics won’t be much noticed by the biosphere in general, and my expectation is that a survey of polar bears as to their preference for -60C or -50C during their hibernation period is likely to result in a sparcity of data due to lack of returning polsters. from comment above Given that the human species thrived at temperatures less than that in the MWP and RWP, I am not particularly concerned (optimistic in fact). Should have course read that we thrived in temperatures MORE than that… Jesse Fell, Could you please explain to me how the Norwegian explorer, Roald Amundsen, managed to take the GJOA , a 70 Ft, wood, cutter rigged sloop, herring trawler, around the North West passage. He did this from 1903 to 1907. He did not have good charts, no GPS, no satellite photos, the compass pointed down making it useless, etc. Did he and his crew of 6 drag the boat across the ice all that distance? Jesse, Frobisher could only find the #$&etc ice by going there. We now know where it is or isn`t and just happen to have been observing synoptically during a period of Arctic ice decline. Maps from the 1930s suggest ice extent almost, as low as at present. Just because we can monitor with more detail and immediacy does not mean what we are seeing is unique. Jesse, There are explanations of cycles. Start with Milankovitch and solar variability, then mix in the ocean oscillations. Far more likely to eventually explain observations than just CO2. Eric, OK, so when was the first time that a ship not equipped with a re-inforced ice breaking prow was able to sail through the northwest passage? And why, all of a sudden, are mineral rights to the Arctic Sea a bone of contention among the nations bordering that sea — could it be that the sea has now, as it was not before, navigable to an extent that drilling has become possible? otsar, He took three years to get through the Northwest Passage — he was locked in ice each of the three winters, and even in summer he was unable simply to sail through – he certainly would have taken advantage of ice-free summers if they had existed, knowing what was waiting for him in the winter. He got through because he was a great explorer, of heroic stature. It doesn’t take a hero to get through the Northwest Passage any more — at least in summer. Scarletmacaw, As Otsar has pointed out in another contribution to this thread, the first to navigate the Northwest Passage was Roald Amundsen, the Norwegian explorer. 
Otsar inspired me to do a little research, and I found out that it took Amundsen three years — starting in 1903 — to get through the passage — his ship was locked up by ice during each of the three winters, and even in the summer he could simply sail though — as he certainly would have if he could, knowing what waited for him in the winter. Now it’s easy sailing for the most part — in summer at least. RE: KR: (July 8, 2012 at 10:30 am) “Peak oil, perhaps. Peak coal? Peak shale gas? Peak natural gas? No, false again, there’s plenty of carbon we appear ready to burn.” The point remains that we have run through all the Earths easily available petroleum in less than a hundred years. We wont run out of carbon tomorrow, but if we do not find a safe sustainable method for the large-scale burning of atomic nuclei, (Nuclear Power) your children or grandchildren *are* going to be faced with living in a world with much less energy then we have now. Note that estimates the availability of many of these alternative fuels are usually based on their current rates of usage–not the accelerated usage that would occur after petroleum becomes prohibitively expensive to recover. We might end up having to give away natural resources just to service the interest on the national debt if that cannot be collected by taxation. KR says: July 8, 2012 at 10:30 am We’re currently at ~0.8 C above where we would be without anthropogenic warming. At 0.16 C/decade, the current rate of increase, we’ll see another degree C in under 65 years, not 200. Even if I accepted all of the above (which I do not), then it would take another 75 years to reach the 2 C increase. As well, the 2 C was more or less taken out of a hat with no proof it would be a disaster. Don’t you think we have more urgent things to worry about other than what may or may not happen in 75 years from now? Smokey, Lonny Thompson at the Byrd Polar Research Center at Ohio State University has been studying tropical glaciers for over 30 years; he has the distinction of having spent more time in the “death zone” — the highest altitudes in the Peruvian Andes — than any other researcher. During that time, he says that he has seen dramatic shrinkage of the glaciers in the time that he has been going to do research there. The Quechua Indians, who have lived at these altitudes for centuries, are being forced off the mountains because the disappearance of the glaciers has left them without water — and because the moss on which their alpacas feed is dying off, due to dryness and rising temperatures. It’s a problem for the rest of Peru as well: the country is heavily dependent on hydroelectric power, and a number of their major generating plants are running at 20% of capacity, due to the decrease in glacial runoff. Thompson has also done field work in the Alps and Himalayas and sees the same thing happening there. If you don’t believe Thompson, Google around for then and now photographs of glaciers just about anywhere — a few are still advancing, because of local conditions, but the large majority are shrinking, and rapidly. Jesse Fell says: July 8, 2012 at 10:23 am …. ==================================================== Of course it seems plausible. It is a very old plausible hypothesis, like 150 years old, but what a lot of people do not know is that this concept was debunked experimentally by American physics professor R.W.Wood in 1909: . Second, even without the Wood’s experiment, the AGW concept loses plausibility very fast if you start thinking critically. 
According to that concept, the -18 degrees Celsius cold Earth surface produces so much IR radiation, that the “greenhouse gases” are able to warm the surface by 33 degrees Celsius just by sending back a small part of that IR. Now, you possibly know, that an IR camera can see very well through the air, so the most IR radiation passes safely the “greenhouse gases” trap. Now, you can see the potential of maybe hundreds degrees warming from that second part of IR! You should be able to heat your apartment just by putting some things out of your freezer around! Or just open the freezer and you have an IR heating device! Now you can see how absurd the AGW concept is. Jesse, that’s a false statement. A cycle is merely a repeating event, regardless of cause. Your idea of “unrelated series of events that coincidentally have the same effect” is merely a spin on a cycle of currently-unknown cause. Do you presume that each El Nino event has a different origin, and that each is unrelated to the previous. The ancients knew about tides, but not about gravity and the moon’s influence. That did not make the cycles of the tides a coincidence, or simply an unrelated series of events. However, we ARE learning about the cycles of hot and cold, though we may not yet be able explain their causes. We geologists look at the long history of the Earth’s climate and see many changes: some cyclic, some catastrophic. None, unless there were ancient fossil-fuel-based civilizations of which we are unaware, were caused by human (or nonhuman) intervention. With the full knowledge that these changes occur naturally, you now stand and claim that CO2 is the cause of all of them. Does it not seem logical that the burden of proof is on you to prove it? If you admit that natural cycles might be the result of “unrelated series of events,” how do you prove that your increased CO2 is not just another “unrelated event” to this temperature cycle?. Admunsen didn’t have GPS, nor satellite photos. Basically, he didn’t know the path through the ice. But surely you knew that, so why are you acting like navigating the Arctic in 1903 is the same as navigating it a century later? Jesse, Seabed mineral rights have only become an issue in the late 20th century, so earlier ice free periods , when fewer people lived at higher latitudes to know about them anyway, would not have been important. Jesse Fell, You ignore my 12:10 pm comment above to post your appeals to authority and a ‘talk-talk story’ that has no way to verify it. Excuse me while I disregard that nonsense. As I posted above, glaciers have been receding since the LIA. What is so unexpected about that?? You don’t say. Your comments amount to emotional hand-waving. If you want credibility, post something verifiable, instead of folklore passed down by Indians – which disregards your own comment that hydroelectric power has changed the situation over the past few generations. Damming rivers has more of an effect on local tribes than receding glaciers. You say above: “…the rise in the Earth’s temperatures that started to become anomalously large around 35 or 40 years ago…” Absolute debunked nonsense. Otter says: July 8, 2012 at 7:55 am ————————————— Otter, Paul K2 has gone AWOL so I did a little digging. Seems as though Arctic sea ice was pretty constant up until 1950s – 1960s before heading into its “death spiral”. This means the US suffered from the great drought of the 1930s without the benefit of Arctic sea ice loss, meaning in turn this theory about weather sticking is crap. 
But better than that, the refutation is brought to you by an alarmist web-site that takes a swipe at Richard Linzden before he proves our point. Jesse Technically the death zone is above 7,900m , the height of South Col on Everest. Since the highest peak in the Andes is Aconcagua at 6,959m spending time in that zone in the Andes would be difficult. Glaciers have been in retreat since the end of the Little Ice Age, no change there then. Rates of those measured since the 19th century eg Gangotri in the Himalayas, show steady retreat throughout the period. GRACE satellite measurements suggest little or no loss of ice mass in the Himalayas, while the Andes do show continued ablation. Jesse, What is anomalous about the past 30 – 40 years? The rate of temperature change in the 1930s and the 1940s are the same rate as the 1980s and early 1990s. Just natural changes as as result of the interactions between the natural cycles of different periodicities. Eric Huxter says: “What is anomalous about the past 30 – 40 years? The rate of temperature change in the 1930s and the 1940s are the same rate as the 1980s and early 1990s.” Exactly right. Anomalously large is completely incorrect. Look at this temperature reconstruction from the Greenland GISP2 ice cores: GISP2 Ice Core graphic. Even the skepticalscience.com website does not dispute its accuracy, though it claims that since it’s from a Greenland site, it’s not “global.” (Though twelve trees from Siberia was global enough, somehow.) Global 2m temp’ change is 0.066 C / decade: KR said (July 8, 2012 at 7:58 am0 “. KR also said (July 8, 2012 at 10:30 am) “…We’re currently at ~0.8 C above where we would be without anthropogenic warming. At 0.16 C/decade, the current rate of increase, we’ll see another degree C in under 65 years, not 200…” But of course, using that same .16 degree per decade, and extrapolating back, you could say we were at “zero” 50 years ago (in 1962) and a steady rise from there meant that we should never have gone through “zero” again. So, let’s look at the facts? We’ll use GISS anomalies (supposedly the most accurate because of their ESTIMATION of Arctic temperatures). First, it shows that our current warming is leveling off at an average of .55 degrees above “zero” (which, according to you, means we would be at a level of minus .25 degrees if man wasn’t around). Second, we see we’ve been through “zero” several times (first was prior to 1940, the last prior to 1980). So we flirted with “zero” over a ~40 year period, the last pass through “zero” happening about 32 years ago. Where was the expected ~0.64 C degrees of warming during that period (.16 x 4)? That means, according to GISS, there was NO net warming from about 1940 to 1980. Maybe this is one reason GISS uses the base period of 1951-1980 to place their “zero”. Everything after that would show as “warmer” than that period. Third: If, according to GISS we were at about minus .3 in 1880, by 1980 we should have been at 1.3 degrees above “zero” (again, your current warming value of .16 per decade x 10 = expected 1.6 degree rise). We never made it. We still haven’t made it. Your numbers just don’t add up… Jesse, you may have missed my post earlier. Here I think it qualifies as a cycle that fits what we have seen in the resent warming. Perhaps what we need here is an experiment. Your AGW theory says that we will see continuous warming from increasing CO2. 
My Climate Cycle theory says we are in the middle of the 30 year cooling period of a 60 year cycle, and will see 15 more years of steady or decreasing temperatures. Personally I think the last 15 years pretty well falsifies your theory, but I’m told that 15 years is too short for “Climate Science”, so let’s just see where the temp goes from here. Care to make a bet? How about we bet the entire western civilization, because if I’m wrong we get Thermageddon, with permanent extreme weather, dozens of meters of sea level rise, famine, climate refugees, extinct polar bears, and all the other things that have been repeatedly proven to not be happening. And if you’re wrong, then the socialist ‘Global Governance’, poverty, deindustrialization, horrors inherent in any attempt at enforcing ‘Sustainability’ (think Khmer Rouge year zero), and all the rest of the supposed necessary solutions to CAGW and Climate Change will have been for NOTHING.

Jesse Fell says: July 8, 2012 at 2:34 pm
“There are other cycles, too, but none of the size and shape to fit what we are seeing now.”
Rubbish.

@jesse fell: “Help me understand where the contradiction in what K2 wrote lies.” Ok. What I wrote was mathematical shorthand. A “stationary process” is one in which the mean and standard deviation do not change with time. (It’s more complicated than that, but that’s good enough for our purposes.) K2’s first statement attempts to dismiss the observation in my original post, which is that the total number of temperature records has declined significantly in the last 6 years compared to the previous 13 years. K2 argues that the number of records should be expected to decline with time. That is true for a stationary process. Then he goes on to argue that the number of high records is growing relative to low records, and this is evidence that the mean is increasing with time — violating the requirement for a stationary process (that the mean stay constant) and causing an ever-increasing number of high temperature records. You can’t have it both ways. Either the recent sharp decrease in the number of records is indicative of something unusual and interesting (not the natural result of longer temperature records), or the mean temperature isn’t moving. Since the mean temperature has moved on the satellite time scale of 30 years or so, the process is not stationary and K2’s first argument doesn’t hold water.

Here is an old paper which discusses the problem of the probability of setting temperature records as a function of the length of the climate record: Figure 3 is probably of most interest, in which he derives empirical curves for the probability of getting 1, 2, 3, … 9 temperature records (in this case, low ones) in a winter season as a function of record length. Clearly, longer data periods generate fewer records. The NOAA record-temperatures dataset is from a large number of stations (about 5500) having a minimum recording period of 30 years, and minimum coverage of 50% during that record. In other words, pretty loose criteria. But the key is that eliminating periods shorter than 30 years cuts off the steeply falling portion of the curves. Stations also continually enter and leave the dataset during the time series, as newer stations satisfy the 30-year cutoff and older ones stop being reported. The number of stations has grown steadily by about 7.5% from 1993 to 2012.
I did not normalize for that, nor for the average reporting period of the stations, but one might expect the growing coverage to partly counteract the aging of the stations, since newly-entered stations contribute disproportionately more records to the total. So I still maintain that there is essentially zero chance that an average 30% drop in the number of records in a relatively short period of time is a normal result of increasing record period. I don’t have a really good hypothesis. Solar variation is tempting, but the drop is larger than we see during the previous solar minimum (1993-1998). There are many unusual features of the latest solar minimum, from its length to its magnetic properties. I didn’t carry the series further back, because NOAA only started keeping the maximum-low and minimum-high records in 1993. Prior to April 1993, only max/min records are available. The maximum-low is particularly interesting, because global warming theory suggests that’s where we should see the greatest effect. “The folly of blaming the Eastern U.S. heat wave on global warming” The same folly was evident in Australia during the last drought we had. Prof Tim Flannery predicted that Queensland would never have drought breaking rains again. A desalination plant was built. The drought was broken, the dams filled, the desalination plant put in moth balls. Efforts were made to blame the floods on AGW but that was debunked & those that made the claim retracted their statment. (head of the IPCC no less) Australia is known as the Country of droughts & floddings rains see: For those in Texas with tree problems, try some Blue Gums they will handle the heat OK. They can be a problem in a wildfire though. Jesse Fell for the scam. RE: Jesse Fell: (July 8, 2012 at 2:34 pm). It is my understanding that Henrik Svensmark has recently published a paper in the Monthly Notices of the Royal Astronomical Society in which he correlates cold periods in geological history with periods of time in which the solar system transits the spiral arms of the Galaxy and warm, polar-ice free, periods with transit through the clear regions of the Galaxy. He notes that galactic cosmic radiation is particularly intense in the spiral arms and this radiation appears to facilitate or enhance the condensation of water vapor in the Earths atmosphere. Modulation of this radiation by short-term solar magnetic activity seems to correlate with satellite cloud-cover data. From his viewpoint and data, this radiation appears to be *the* primary determinant for warm and cold climatic periods on the Earth. “The clouds take their orders from the stars.” Of course, theories of non-anthropogenic climate change are now regarded by many as environmental treason. It would appear that the Milankovitch cycle may be only a secondary effect. Seriously Jesse. All the history and climate I have read, (and it is fairly extensive) tells me the exact opposite of the conclusions you have come to. Curious. Hi KR, Jesse, Lazy, Paul K2, – sorry if I’ve missed any Alarmists, you know who you are. How come none of you have bothered to Alarm On about us. Don’t we have climate change Downunder? diogenesi, Interesting post — thanks for the mini course in statistics. Your post has far more substance than other replies that I’ve received here to my posts. I’ll study it diligently, with a cup of coffee at my side! 
Question: Would it make any difference, statistically, if the decline in the number of records is due, not to the increasing length of time for which records are kept, but for the fact that the records are approaching some physical limit, under current conditions? Could this be analogous to the decline in the number of new winning times in the Boston Marathon — which appears to be due not to the increasing length of time that the race has been in existence (the oldest records are not competitive today) but to the fact that the records are approaching the limit of what the human body is capable of? (This analogy breaks down because the distance between Hopkinton and Boston is going to remain what it is, while the physical limit to high temperatures may be imposed by the increasing amount of greenhouse gas in the atmosphere.) And if the frame (bounded by physical limits in each direction) in which records of both sorts can be broken is moving upward, it would follow that there would be more opportunity for records to be set on the high end than on the low — many of the existing low records now being below what is physically possible given the new changed conditions. This would mean that the ratio of high to low records is being controlled by a process that is not “stationary” — the upward movement of the highest-possibility-to-the-lowest-possibility frame. But I am not sure whether what is physically not stationary is also statistically non-stationary — as I wrote, I need to con your interesting post with my cup of coffee. Please advise. If the heat wave is due to global warming, when the heatwave is over does that mean global warming is over? From the climate research center made infamous by the Climategate Emails, here is a link to a graph that shows the official documented “Global Warming” over the last 160 years. It shows a linear rise of 0.4 degrees Celsius from 1910 and 1940 and another increase of 0.4 degrees between 1980 and Y2000. Otherwise the curve is generally flat. This is the (assumed Anthropogenic) Global Warming Signal, straight from the horse’s mouth. Global Temperature Record #jesse fell: As with everything in this business, reality is messy and complicated. :) We look at record temperatures because they are quite sensitive to small movements in the average. But there are still plenty of low-temperature records; the estimated movement of the mean temperature is much smaller than the daily variability. I don’t think there’s any physical limit involved. If the mean is rising, *and* the daily variability stays the same (which is also not necessarily correct), one would expect the system to settle down to some long-term rate of upward records which is representative of the movement of the mean. But inferring climatological properties from temperature records is not so easy. Here’s a nice and fairly recent (2002) paper which discusses the issue: On pages 2-3 and 17, the author makes a point of saying that low clouds reduce the daily range of temperature, and that relative humidity is a good predictor of cloudiness. IMO the implication of the recent reduction in the number of temperature records is some effect like that — the average daily variability of temperature over the continental US has declined for reasons I don’t understand. Has cloud cover increased over the last 6 years? Unfortunately I have a day job which keeps me from digging into these things as much as I’d. 
Climate change skeptics constantly refer to the variability of the Earth’s climate, and they are right about that. It is variable, because it is highly sensitive to forcings, even ones that may strike us as being very small. A slight imbalance in the plane of the Earth’s orbit is responsible for the Milankovitch cycle, which appears to control the coming and going of ice ages; this theory was slow to gain acceptance, because the imbalance is so small, but the linkage between the cycle and the ice ages now seems inescapable.

diogenesnj, I hear you about the day job. Let’s both win the lottery this week and then dig into this the way I think we’d both like to. I think that the ratio of high and low temperature records may be of limited significance to our understanding of climate change; it certainly appears, from all that you point out, that it is not easy, or perhaps even possible, to infer mean temperatures from it. But I don’t see how the ratio can be completely lacking in significance. A disproportion of high to low records could be a sign that the window of possibility is rising. That is, as the Earth’s mean temperature rises, new highs become more likely and new lows less likely. Maybe — this is enormously complex stuff. Still, the ratio is worth watching, along with all other indicators. It will probably be the case that we will get a bead on what’s happening only through a sort of empirical triangulation — looking at the case from every possible angle. With that, I return, reluctantly, to the day job. Good posts, diogenesnj, whatever our difference in point of view.

RE: diogenesnj: (July 9, 2012 at 4:33 am) “On pages 2-3 and 17, the author makes a point of saying that low clouds reduce the daily range of temperature, and that relative humidity is a good predictor of cloudiness.” The question I often have on statements like this is whether clouds are causative or indicative. I regard clouds as indicators of ongoing condensing convective activity and wonder if that convective activity may be the ignored causative factor in some cases. Given that clouds are forming, however, they do reduce solar radiation arriving at the surface and, by the same token, return thermal radiation emitted from the surface to reduce daily temperature variation. Svensmark seems to have data showing a direct relationship between observed global cloud cover and the measured intensity of cosmic radiation.

misterjohnqpublic says: July 7, 2012 at 10:36 am
History will show that AGW theory will rank with “evil spirits” in terms of logical reasoning. Maybe if we sacrificed a few virgins all the bad weather will go away?
————————————————————————————————
And the solution to ‘overpopulation’ in one go!

Jesse Fell;
>>>>>>>>>>>
As usual, confronted with information showing that climate is highly variable, that the behaviour we are seeing is nothing unusual, that CO2 is a factor so small as to be insignificant by comparison, some people can still respond …“yeah, so looks like what you are saying is it could be a disaster, right?”

diogenesnj – Regarding decreasing numbers of records: I’m afraid I missed your earlier replies, as you had replied to “K2″, not “KR”, and given that there is a “Paul K2″ in the thread I had overlooked them. As you correctly noted, a stationary process with stochastic variation will show decreasing numbers of extreme records over time as more of the system behavior is observed. A non-stationary process with no noise will present new record values at every data point, if moving monotonically.
In the case of temperature anomalies, the number of events is greatly increased simply because of the number of stations that are undergoing collection. In between, however, a non-stationary process with stochastic variation may fall anywhere in that range depending on the stochastic variation and the rate of change in process averages. Given that we’re observing a ~0.16 C/decade change in mean with variations over several degrees C, a decrease in new records is unsurprising. Keep in mind that even if average temperatures rise by another degree C, we should still see record lows – although the ratio of highs to lows may be something like 20:1.’s, as expected with slow changes in the mean temperatures. The data there is quite clear. **** Jesse Fell says: July 8, 2012 at 2:25 pm Smokey, Lonny Thompson at the Byrd Polar Research Center at Ohio State University has been studying tropical glaciers for over 30 years; **** Hahahaha. Lonnie Thompson, the ultimate data-hider. Jesse, go to Climateaudit & read up on Lonnie Thompson & his shenanigans. And yeah, tropical glaciers are sensitive to natural climate changes. So sensitive, they weren’t even present just 1000 yrs ago in the MWP, and just recently developed during the LIA, perhaps for the first time in the Holocene. henrythethird – .” CO2 is not the only forcing, and never has been. Right now CO2 (warming) and aerosols (cooling) are the most rapidly changing forcings, but they are far from the only ones. While I expect long term acceleration of warming, 20-30 years is only sufficient to clearly establish a linear trend. Clearly showing a longer term curvature requires more data to clearly check the larger degrees of freedom in fitting something more complex than linear – the atmospheric temperature record is unsurprisingly not showing significant curvature (up or down, mind you) with recent data. Sea level rise, on the other hand, (with both thermal and melt contributions) is showing acceleration both since the 1960’s, as described in Church 2008 () and other publications, and over the 20th century as a whole. — With regard to the “zero’s” you discussed at July 8, 2012 at 4:33 pm, please keep in mind that those are zeros relative to a baseline established within the temperature record, a relative number. Different records use different baselines, and crossing ‘zero’ is not a marker of anything other than when you define the baseline. For example, if you look at GISS with a 1951-1980 baseline versus UAH, which didn’t start until 1979, you have to rebaseline them to a common period or anomalies make no comparative sense whatsoever. The 0.8 C difference I noted with/without anthropogenic influence is based upon, where temperatures are estimated with and without the human contribution. It has nothing to do with anomaly baselines. davidmhoffer – “…anything over 400 ppm (which is pretty much where we are now) is subject to the law of diminishing returns, and hence is pretty much negligible.” With CO2 forcing being logarithmic WRT the increase, but the increase in CO2 being demonstrably greater than exponential, CO2 forcing imbalances will increase at a somewhat greater than linear rate. Meaning (given our current emissions path) that far from being negligible, it will have more influence over time. “You’ve stated something that is a mathematical impossibility.” The log of a value increasing exponentially increases linearly – the forcing Δ would remain constant as climate adjusts behind it. 
The log of a value increasing greater than exponentially (such as CO2) increases greater than linearly, forcing Δ increasing over time. If you feel that simple mathematical relationship is incorrect, I would have to disagree. And ask you to break out the math to demonstrate why high-school algebra is wrong.

RE: Jesse Fell: (July 9, 2012 at 4:57)
The problem here is that we have not had a stable system of reporting storms as we have gone from shipping observations to high quality satellite images. Over recent years, the Arctic Ice extent as measured on about May 15 each year has been relatively constant. While Arctic Ice has been retreating over the last forty years, Antarctic ice appears to have been advancing to a similar extent. The simplest interpretation is that average temperatures have increased 0.8 degrees Celsius, as indicated, and all other changes result from increasing experience of the full range of random fluctuation. Eventually, as the solar system moves ever farther out from the cosmic radiation coming from the fringe of the current galactic arm, we may see a period of polar melting and sea-level rise as atmospheric condensation is reduced, but that should be a long, long time coming.

KR;
Sorry bud, but there’s no such thing as “more than exponential”. You can have a high exponent or a lower one, or a super duper high one, but there simply is no such thing as “more than exponential”. If what you are trying to say is that the exponential growth in fossil fuel consumption is greater than the decreasing impact of CO2 due to its logarithmic nature, then that has the possibility to be mathematically correct. It would also be a ponzi scheme. Unless you believe that oil, gas and coal are infinite and that we can expand our use of those resources exponentially, forever, then your statement is exactly the math a ponzi scheme is based upon. The exponential growth of fossil fuel consumption that we have seen last century is levelling off for the simple reason that these resources are finite. Extrapolate out the exponential growth of the last few decades as continuing, and you’ll exceed the mass of the earth in short order. This is what ponzi schemes are based on, the assumption that exponential increases can be maintained forever for a resource that is limited. They can’t be, and the levelling off of oil, gas, and coal production has already begun as I pointed out to you. Can they still increase? They most certainly can. Can they increase exponentially over the next 100 years as they did in the last 100 years? NOT A FREAKIN’ CHANCE IN H*LL SO PLEASE STOP PAINTING DISASTER SCENARIOS BASED ON SOMETHING THAT IS PHYSICALLY IMPOSSIBLE.

KR says: July 9, 2012 at 7:40 am
“…20-30 years is only sufficient to clearly establish a linear trend.”
Based on what? Given a ~60 year cyclicality in the process (I would say the ~60 year cyclicality, but you cannot argue with this statement), ~30 years would be the worst possible time interval to choose.

davidmhoffer says: July 9, 2012 at 12:47 pm
“…there simply is no such thing as “more than exponential”.”
I assume he means greater than with a linear exponent. But A) there is no evidence upon which to conclude the rise is even more than quadratic, much less exponential with a linear exponent, and B) it’s moot anyway, because CO2 concentration is controlled by temperature, and not by humans.

What’s remarkable to me has been the African cold, now for years running.
That probably also explains loss of mass on Kilamanjaro, as that peak depends entirely on non mid latitude systems for its snow. The colder the African interior that less moisture drawn in via Monsoonal and ITCZ mechanisms. Just found this in an old Australian newspaper — to the right under the story of the missing student is a short piece on Heat Killing Americans davidmhoffer – As I (and others) have pointed out, the log of CO2 growth is upwardly curving, whereas growth with a fixed exponent would be linear. Hence CO2 is growing faster than exponential (with a fixed exponent, that is). As to limits on our emissions, while oil is getting more expensive to produce, we’ve apparently got lots of coal, shale oil (fracking), natural gas, and other resources – our current emissions have not slowed, and all projections of “business as usual” see enough carbon availability to continue. Bart – 60 year cycles would need a mechanism, a forcing change, a reason to occur. Curiously enough, looking at the forcing records there’s plenty of information on actual events, actual forcings, to show the recorded temperature changes – with physics, and without Mysterious Unknown Cycles (MUC’s, for short). Invoking MUC’s is simply hand-waving in denial of well supported physics. As to CO2 rise being caused by temperature rise? That just doesn’t follow – ocean CO2 is increasing, not decreasing (as seen by decreasing pH), known biosphere changes can’t account for the mass of CO2 either. Unless aliens are teleporting CO2 into our atmosphere that claim is just unsupportable. KR says: July 9, 2012 at 3:55 pm “60 year cycles would need a mechanism, a forcing change, a reason to occur.” Natural modes are ubiquitous in… Nature. They are observed in every engineering discipline. They are a manifestation of energy storage, as in the oscillations of a spring as it cyclically trades potential energy for kinetic energy. The ~60 year cycle is obvious in the data. Sticking your head in the sand will not make it go away. But, that is beside the point: Do you, or do you not, agree that IF there WERE a 60 year cycle in the data, 30 years would be a very bad number to use to proclaim a trend? If you disagree, then there is no point in further discussion, because you would then show yourself to be technically illiterate. If you agree, then how do you exclude the possibility of that which is staring you right in the face? And, speaking of denying that which is right before your eyes, if you tell me the beginning concentration and the temperatures since, I can tell you quite accurately what the current level of CO2 is – human inputs are effectively superfluous – merely by integrating the scaled temperature anomaly. The derivative of CO2 is proportional to the temperature in the post-1958 era for which we have reliable data. We know that the derivative of CO2 is not driving temperature – the very notion is absurd, because temperature would then be independent of the absolute level of CO2. Furthermore, it means the absolute level of CO2 (what you get when you integrate the derivative) lags the temperature. We conclude that, on a logical and causal basis, temperature must be driving CO2 and not the reverse. “there’s plenty of information on actual events, actual forcings, to show the recorded temperature changes” These are not “actual forcings”. They are modeled forcings, for an effectively back-of-the-envelope level model. 
There is no verification of it, it is merely one way in which you can recreate a time response in the measurements superficially similar to the observations. But, that proves nothing – it is always possible to do so in an underdetermined system, i.e., it is exactly like solving the equation x + y = 1 for x and y: the solution space is infinite. But, I am introducing to you another observable which is incompatible with the solution which has been agreed upon. It is like they said that, in the case x + y = 1, x is one and y is zero, and I am showing evidence of another equation which says 2x + y = 3/2, and y can no longer be zero. “ocean CO2 is increasing, not decreasing (as seen by decreasing pH), known biosphere changes can’t account for the mass of CO2 either.” Which means merely that the ocean depths below where we can measure are upwelling CO2 and/or the necessary biosphere changes are unknown and unobserved. This is a trivial objection. The figure does not lie. There is no viable alternative explanation. Temperature, not humankind, is driving CO2. [SNIP: Another anonymous drive-by coward with a fake e-mail address. -REP] I am flattered to be ignored. I also understand that all Jesse Fell (and all others who wear the cloak of green) have in their arsenal is fear. From the beginning of man’s existence, fear has been used to manipulate and control. Not buying it. “Obama and BP changed the dynamics of the weather by how they managed the BP Horizon disaster. There’s every indication this was a planned event to alter the weather to create global warming. A few weeks prior Global warming was outed as a giant UN scam to get taxes and now their at it again… Anything to get global tax to consolidate the NWO of tyrants… By changing the ambient surface temperature of the Gulf, fake global warming was produced… To do this they murdered billions of life forms in the gulf. That’s how demented the Global warning people are; killers and scam artists… And now you get the heat wave. How’s that for “change”? ” I posted the above on some blogs and such the other day… During the height of the BP disaster, I was curious as to the effect on Global warm this would have in the long run when I found a high-up NWO conspirator was on the Board of Directors of BP and BP and the UN stooge Obama shut down the media coverage in detail even now the real news of the disaster has been squelched… So, within minutes of web search, I found on a scientific site, the math that showed that surface oil increases the ambient temperature of the water! The whole Gulf stream was slowed down to a crawl causing drastic weather changes that year in the USA and Europe as well… They killed the Gulf to sell you all on “Global Warming” and you need to know this about the fact that the NWO will stop at nothing!!! Pass it around if you can or agree. RE: KR:(July 9, 2012 at 3:55 pm) davidmhoffer – As I (and others) have pointed out, the log of CO2 growth is upwardly curving, whereas growth with a fixed exponent would be linear. Hence CO2 is growing faster than exponential (with a fixed exponent, that is). First, the curve that you referenced earlier was *not* a plot of the log of the CO2 concentration. Second, exponential growth is not proportional to time or time difference raised to some fixed exponent, but some fixed number like 2.71828 raised to a positive exponent proportional to time or time difference. Such growth usually is characterized by a doubling interval. 
A periodic series of values like 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384…. is an example of exponential growth. Population would grow at an exponential rate if an infinite supply of food and living space were available. As world population doubled in the last forty years, we might expect a global population of over 500 billion people in a few hundred years if resource limitations did not prevent this. NONE OF YOU DOPES HAVE EVER DONE A SINGLE YEAR OF HIGH LEVEL STUDY ON CLIMATE. LEAVE IT TO THE SCIENTISTS. NONE OF YOU FOOL HAVE ANY TRAINING, EXPERIENCE OR EXPERTISE IN WHAT YOU ARE TALKING ABOUT. ITS A JOKE TO THINK YOU DO. [Left as-is, as-was-written, as a display of temperance and moderation and toleration of the CAGW-community. Robt] Spector – An exponential over time is expressed as Ae^(kT), where A and K are constants. To be more clear, a growth greater than exponential (as seen by taking the log of the data) indicates that either the function for CO2 is not exponential (it’s definitely increasing faster than that) or that K is not constant – and in fact increasing over time. That is very, very simple math. What I noted earlier () was where to get the data, how to take the log of it, and (if I might go a bit further) to suggest fitting a trend line to the log of the data. Sadly, I see no indication that anyone commenting has taken the few moments needed to import the CO2 data into Excel, take the log of it, and look at the resulting trend… just shouting that “You’re wrong, you’re wrong!” with no justification other than unfounded ideas about the math… To repeat: if you wish to remove seasonal (growth) variation, or if not. The data is at for your viewing. From this point on I will simply ignore anyone who has not looked at the data on this topic as presenting unsupported opinions rather than evidence. — Bart – You’re promoting MUC’s again. Solar forcings are from sunspot activity. Aerosols are from volcanic eruptions and checking dust deposits in ice. CO2 is from repeated measurements both instrumental and in recent ice cores. Each and every one of those forcing traces is supported by evidence. And the temperature record corresponds to the forcings. And the physics. MUC’s don’t in either case. And as Hitchens said, “What can be asserted without evidence can be dismissed without evidence.” Present your evidence – the physics behind the cycles, the data supporting the energy flows, the source of the 25×10^22 extra joules of heat in the oceans since 1970 () – or I and others will, quite frankly, dismiss your assertions. KR; “You’re wrong, you’re wrong!” with no justification other than unfounded ideas about the math… >>>>>>>>>>>>>>>> Sir, I and a few others understand what it is that your tried to say. The manner in which you said it however was, at best, confusing, at worst, completely wrong. Actually, I HAVE taken the Manua Loa CO2 data and pulled it into Excel, and yes, it is a slightly exponential, there was an article on WUWT a while back that went into some detail on this. I don’t dispute this, but I dispute what you said. So perhaps you should stop ranting and think about how to express yourself properly in the first place. I’ve also looked at the fossil fuel consumption graphs (posted links for you no less) which also exhibit an exponential growth. I pointed out to you earlier however, that this is a consequence of adding them together. 
Since their “start up” and “ramp up” times do not coincide, cumulatively they give the illusion of exponential growth. Not a single one of the individual fossil fuel consumption graphs by itself shows any evidence of continued exponential growth. Nor is it possible for any of them to do so for any extended period of time. They are limited resources, and were the exponential growth of the last century to continue, the mass required would in short order exceed the mass of the earth. Your rant about the stupidity of others when it comes to math pretty much discredits the point you are trying to make. Word it properly, and you might discover that some of us jump in and agree with some parts of what you are trying to explain. But at day’s end, CO2 is logarithmic, and the exponential growth of CO2 or the fossil fuel consumption which (supposedly) drives that growth is a ponzi scheme. You’ve predicated your entire argument on a continued exponential growth which is by definition physically impossible. I was always told whoever started shouting lost the argument.

KR; “the source of the 25×10^22 extra joules of heat in the oceans since 1970”
>>>>
The best OHC data we have is from the ARGO buoys, which have shown a decline since inception. Prior to that, our data regarding OHC was so spotty that at best it was an educated guess, not an actual measurement of any sort.

davidmhoffer – If you take the log of values of an exponential function (Ae^(kT), with fixed constants A and k), you will get a linear plot. That’s the way the math works. If you take the log of values of a greater than exponential growth factor, you will see an upward curving line, increasing greater than linearly. The log of a less than exponential growth will curve downward. That’s the way exponential functions work. CO2 growth is not “slightly exponential” as you claim, but “greater than exponential” in growth. End of story. Please ask a high-school math teacher for clarification. As to fossil fuel use – “this is a consequence of adding them together” – yes, a consequence of adding up the individual contributions. Including developing nations (think China) that are continuing to expand their energy use. If the numbers add up to greater than exponential, that’s just what they are. And we certainly have enough carbon to burn to put us (if we continue to burn it) at 4-6 C over previous temperatures. Which means 3-5 C over anything that occurred during the Holocene (), the only time we’ve had civilization. Pardon me if I consider that significant. Enough. Adieu

KR says: July 9, 2012 at 8:12 pm
“You’re promoting MUC’s again.”
No, I am talking about standard solutions to partial differential equations which are expected in natural systems. You must not have much practice in “the field” because anyone who did would immediately know what I am talking about and understand.
“Solar forcings are from…blah, blah, blah”
But, these are shoehorned into a theoretical model without actual knowledge of how they play together, or of what parts are missing. I am giving you data, and data trumps theory. Data is what is real, not just some flight of fancy of how you would like things to be.
“Present your evidence…”
It’s there. It’s right there. Look at it! I am on solid ground. You are the one making unprovable assertions. I do not have to know everything about trains to know that the light bearing down on me in the dark with an Earth shaking rumble is something I better get out of the way of. You, and others, have only constructed a narrative.
It all fits together nicely, when you have ignored contrary evidence. But it’s just a superficially consistent construct, and mere consistency is not proof. And, my data says you are wrong.

KR; “If the numbers add up to greater than exponential, that’s just what they are. Enough. Adieu”
>>>>>>>
If I measure the upward velocity of the valve stem on a bus tire starting at the lowest possible spot and tracking it in increments for 1/4 of a revolution, I could then plot the data, show conclusively that the upward velocity of the valve stem was increasing exponentially, and conclude that it would reach orbit in a matter of minutes. My analysis would be exactly as accurate as the one you have presented, and would come to an erroneous conclusion by the exact same means, which is to ignore the physical limitations of the system. You can shout and scream all you want about this data or that being exponential, but that valve stem is just not going into orbit. This is called the law of wheels, and is taught to children all over the world at a very young age:
Oh… the wheels on the bus go ’round and ’round, ’round and ’round, ’round and ’round. The wheels on the bus go ’round and ’round. All, day, long.
Adieu ta ya too.

RE: KR: (July 9, 2012 at 8:12 pm) “Spector – An exponential over time is expressed as Ae^(kT), where A and K are constants.”
If one looks at the increase in CO2 over the period from 1880 to the present, the early part of the curve looks like an exponential curve arrested around 1940, probably due to World War II. There is a rapid rebound after the war that seems to gradually start running out of steam around 1980. David Archibald, in his article ‘The Fate of All Carbon’, indicates, based on estimates of available resources — perhaps outdated — that there may not be enough carbon left to fully double the CO2 concentration and perhaps cause a one degree anthropogenic effect.

Beng, Lonnie Thompson is not a data hider. He has in fact posted large amounts of his data on the NOAA database — check it out. And he does not hide the source of his data — ice cores taken from glaciers in Peru. At Ohio State University, he is maintaining the world’s largest collection of unsectioned, unanalyzed ice cores — for the use of the next generation of glaciologists, who, he says, will not have much else to work with, given the rate at which glaciers are disappearing. If he had something to hide, he would be glad for the fact that the next generation would have no evidence on which to verify his data. As for your contention that the Andean glaciers are only, or even less than, 1000 years old — you might want to double check that — carbon dating of ice cores indicates that the glaciers are much older. But even if the glaciers were only, say, 900 years old, would that in any way disprove that their current shrinking is caused by AGW? And if you don’t think that glaciers are shrinking, there are many sites where you can see then-and-now pictures of glaciers. The extent to which they have shrunk since, say, Gilligan’s Island was cancelled, is striking.

Jesse Fell says: July 10, 2012 at 4:26 am
The inconvenient fact is that they started melting before any industrialization. Does Thompson not hide this little bit of information? Your fear mongering is getting boring. Also the way you only tell part of the truth.

KR says: July 9, 2012 at 8:12 pm
Here is the type of thing I am talking about. The Earth is a great big fluid containment vessel. Its minimum frequency modes of oscillation are thereby extremely low.
It’s like the vibration of a piano string – the longer and thicker you make it, the lower the tone. Rossby waves are known with very long periods, e.g., from the link: It appears we haven’t been looking long enough to find longer ones, but they are theoretically viable and likely. There are a plethora of named oscillatory features in the climate: the ENSO, the NAO, the AO, the PDO, the AMO… these are all modes or modal superpositions of the system being driven with random excitation so that they appear quasi-periodic. It’s not in any way, shape, or form unusual to have such dynamics play out in a natural system. Quite the contrary, it would be extremely unusual not to observe quasi-periodic behavior in a natural system. The data says there is a strong 60 year quasi-periodicity in the temperature record. The data says that CO2 is driven by temperature, while its coupling feedback effect on temperature itself is negligible. That is what the data says. You can twist and squirm and try to worm out of it, and maybe even convince yourself that all is well, but the bottom line is that this AGW farce is about to descend into an utter fiasco – not only are temperatures failing to respond to the CO2 signal, but we aren’t even driving CO2. The repercussions are going to be severe. @Robbie: Arctic sea ice is way below normal due to the fact that the polar current and the polar wind patterns over the winter flushed ridiculous amounts of arctic sea ice southward into the Bering Sea, and left much of the arctic over northern Russia nearly ice-free all winter, in spite of very cold temperatures. All of the ice that was flushed out into the Bering Sea rapidly melted after the onset of Spring, as would be expected. You can attribute this “ice loss” to “global warming” if you want, but all it was was a somewhat unusual weather pattern which resulted in almost no sea ice over northern Russia, and an abundance of sea ice being flushed into the Bering Sea, where it subsequently melted. Also, if you precious warming is so “global”, then why is SOUTHERN HEMISPHERE sea ice about 500,000 square kilometers ABOVE NORMAL, and why has SH Sea Ice been above normal for an entire year now???? David Ball says: July 10, 2012 at 7:29 am “Also the way you only tell part of the truth.” SOP. Nothing looks blue when you view the world through rose colored glasses. And, thus Jesse swallows the line about “given the rate at which glaciers are disappearing” without looking to see that glaciers are hardly going extinct. As for heat and drought, we (western Kentucky) are in the center area of hottest and driest summer on record. But AGW??? Last year was the wettest year on record by 15% and average temperature. I’m willing to give it a bit more study before I’m willing to hand over tax payments to a United Nations tax scheme. @KR says: ′s, as expected with slow changes in the mean temperatures. The data there is quite clear.” Yes, sorry, I was confusing you with Paul K2. Mind if I call you both Bruce? It will cut down on the confusion…. :) In any event, I’ve read Meehl et. al. and cited it (although not by name) in my first post of the thread. They make an assertion in the introduction which I believe to be incorrect: “All stations record span the same period, from 1950 to 2006, to avoid any effect that would be introduced by a mix of shorter and longer records.” The language isn’t perfectly clear to me, but I think this means they chose stations with continuous records from 1950 to 2006. 
But in fact, they will still have a mix of longer and shorter records, because that depends on the time at which the stations were established. Unless they found stations which all started their measurement series in 1950, they have a mix of varying record lengths. The ideal decline in the number of records for a single station is fairly smooth and (as their abstract points out) follows a hyperbolic 1/N curve. But different stations are on different points of that curve, depending on how long their recording period already was by 1950. The effect I think is unusual and significant is a large decline in the average total number of records in just a few years, by nearly 50% between 2005-2006 and 2008-2009. I reproduce part of the numbers in my previous post here:
2001  100.0
2002  125.6
2003  111.0
2004   96.2
2005  106.7
2006  119.0
2007   90.7
2008   51.1
2009   62.0
2010   69.9
2011   81.7
2012   92.2 (by doubling the 6-month number)
Note the rather abrupt drop to a new average level. That isn’t the smooth decline within the local variability that one would expect from increasing data record length. So I think something interesting is going on. Also, there’s a U-shaped pattern from 2007 to now, the bottom of which corresponds to two global events: the most extraordinary solar minimum since 1913-14, and the Great Recession. I have no hypothesis for causation; I just note the interesting coincidence.

diogenesnj – Meehl et al studied “the decay of observed annual record high maximum temperatures (..) compared to annual record low minimum temperatures (…) averaged over the U.S. since 1950”, although you have to read closely to get that detail. They accumulated the records for each station (with that length of data) from 1950 on. So the 1/n records per year are about what you would expect – they did not include record temperatures from before the start date of the study. I would be wary of drawing strong conclusions from only the last few years, though – there’s very little data there, and nowhere near enough time or data to conclude anything statistically significant from that short a period.

The basic problem with blaming anthropogenic CO2 for the observed climate change is that it is a ‘one-trick pony’ in the radiation band of interest. Its 15-micron (667 cycles per centimeter) absorption band stands like a one-foot diameter tree in the middle of a ten-foot wide stream. It has been shown (by David Archibald) that most of its greenhouse effect occurs when the first 20 parts per million of CO2 is added to the atmosphere. The effect of each additional cohort of CO2 is progressively minimized by the fact that you can only kill the same horse once. In clear tropical air, the MODTRAN atmospheric radiation model indicates a raw, no feedback, greenhouse effect temperature rise of less than one degree Celsius for each full doubling of the CO2 content (280=0, 560=1, 1120=2, 2240=3, 4480=4, … etc PPM=deg). The current seasonally corrected CO2 level reported by the Mauna Loa Observatory is about 394 PPM, about a 41% increase from the nominal base of 280 PPM, or just over 49 percent of a full doubling on a logarithmic scale. In order to blame the temperature rise observed to date on increased CO2 alone, it seems necessary to assume a dangerously high positive feedback factor that would double or triple the natural effect. This does not seem likely. Also, as Dr.
Svensmark appears to have evidence showing a fine-scale correlation between cosmic ray flux and cloud cover, where most clouds indicate condensing convection cells, there is a very sound reason to expect that most of the climate change of this century has been cosmogenic rather than anthropogenic in nature – see the 9 minute point of his video, ‘The Cloud Mystery.’ Ref: At this time, it remains an open question whether man will ever be able to burn enough carbon to reach a concentration of 560 PPM before carbon fuels become so expensive to recover that few can afford to use them. Ref:

Excellent analysis, thanks.

[SNIP: Commenting here is a privilege, not a right. If you are going to be insulting, take it somewhere that appreciates it. -REP]

I guess you do not like to be reminded of your many failures. Reminding you of inconvenient facts in your public history is not insulting, on the face of it. If you feel it is insulting perhaps you should hold your silence, rather than make silly predictions and non sequitur remarks. Do you have children or grandchildren? Have you considered what they will think of your activity here and elsewhere? I can understand why the people paying you to maintain this site would object to facts which do not support the failed “warming is not happening” theory.
[REPLY: The terms "denial" and "denier", as well as their derivatives, are forbidden here. You can check site policy here. You should know that Anthony Watts does not get paid to maintain this site, your understanding of what we believe is seriously flawed, most of the commenters here are far better qualified to discuss climate than you are, and it is very definitely you who should remain silent until you learn enough. -REP]
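As a footnote to the arithmetic a few comments above: the claim that 394 ppm is “just over 49 percent of a full doubling” follows directly from the logarithmic relationship the commenters are debating. A small sketch, using only the numbers quoted in the comments themselves:

// Fraction of a CO2 doubling represented by the rise from 280 ppm to 394 ppm.
const baseline = 280;   // nominal pre-industrial concentration quoted above (ppm)
const current = 394;    // Mauna Loa figure quoted above (ppm)
const fractionOfDoubling = Math.log(current / baseline) / Math.log(2);
console.log(fractionOfDoubling.toFixed(3)); // prints roughly 0.493, i.e. "just over 49 percent"

// Applying the commenter's "less than one degree Celsius per doubling"
// no-feedback figure to that fraction gives an upper bound of roughly 0.5 C.
const perDoublingUpperBound = 1.0; // degrees C per doubling, as quoted above
console.log((perDoublingUpperBound * fractionOfDoubling).toFixed(2) + ' C');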
http://wattsupwiththat.com/2012/07/07/the-folly-of-blaming-the-eastern-u-s-heat-wave-on-global-warming/
CC-MAIN-2014-35
refinedweb
36,687
70.02
This article is about writing a customized Button control that displays different images when a user moves the mouse over the button, clicks on the button, and leaves the button. This control is very much like the CBitmapButton control class available in MFC. The demo project included with this article has a very simple implementation of a class derived from the Button class.

For every control in Windows, events are fired by the framework for different actions taken by a user in the GUI application. This gives the application a chance to respond to these events and act according to the design and requirements of that control or window. This goes for the Button control too. The Button class is derived from the ButtonBase class. This base class encapsulates most of the typical implementation of event handling. If you do not want to do any customized work on the control, then the base class will handle all of the events for us. Since we want to draw different images on the button for different user actions, the derived class will have to handle the painting of the button window.

In the .NET framework, drawing an image on a button control is as simple as attaching an image file resource to one of the properties of the Button class. The property is BackgroundImage. If you do not want to change the image for different actions, then you do not need to derive any special class from the base class; in the application’s form, set that property and you are done. To draw different images, you can make use of another property of the Button class, ImageList. Yes, the ImageList class is very much like the CImageList class in MFC. You can add different images to this list. You can attach an image list to the Button control, and then assign the index of an image in the list to the Button control. The .NET framework will draw that image on the control. You can make use of this concept to change the image on the button corresponding to different actions.

The ButtonBase class has a bunch of overridable virtual functions that get called when the mouse moves over the control, the button is clicked, or the button goes into the raised position. These methods are PaintDown, PaintRaised and PaintOver. You can supply your own implementation in the derived class to handle these events. Just make sure that you call the base class' method too.

The implementation of the derived control is as follows.

public class NKBitmapButton : Button
{
    public NKBitmapButton()
    {
    }

    protected override Rectangle OverChangeRectangle
    {
        get { return base.ClientRectangle; }
    }

    protected override void PaintDown(PaintEventArgs pevent, int borderWidth)
    {
        ImageIndex = 1;
        // Call base class method.
        base.PaintDown(pevent, borderWidth);
    }

    protected override void PaintOver(PaintEventArgs pevent)
    {
        ImageIndex = 0;
        // Call base class method.
        base.PaintOver(pevent);
    }

    protected override void PaintRaised(PaintEventArgs pevent, int borderWidth)
    {
        ImageIndex = 0;
        // Call base class method.
        base.PaintRaised(pevent, borderWidth);
    }
}

There is a bug in the .NET framework: the PaintOver method does not get called when the mouse moves over the button control. It only gets called after you have clicked on the button. Therefore I have set the image index back to the one that gets displayed when the button is in the normal raised position. In the Windows application, add a button.
The Wizard will add an entry into the InitializeComponent method of the Form class. Change the variable type from System.WinForms.Button to NetGUIGoodies.NKBitmapButton. Add an image list to your form. Create two bitmaps of size 48x48 and add them to the list. The image at index 0 will be drawn when the button is in the normal raised position, and the image at index 1 will be drawn when you click on the button and it is in the down position. To keep the implementation simple, I have not added properties or methods to the derived button class to specify the size and indices of the images.

private NetGUIGoodies.NKBitmapButton PictureButton;

private void InitializeComponent()
{
    this.PictureButton = new NetGUIGoodies.NKBitmapButton();
    this.ButtonImageList = new System.WinForms.ImageList();
    // . . .
    PictureButton.ImageList = this.ButtonImageList;
    PictureButton.ImageIndex = 0;  // start with the image shown in the normal raised state
}
http://www.codeproject.com/Articles/899/Writing-a-Bitmap-Button-control-using-the-NET-SDK?fid=1825&df=90&mpp=10&sort=Position&tid=18012
CC-MAIN-2016-07
refinedweb
698
55.84
Introduction

In this article, we’ll be looking at the setup required to create an Angular 2 project with unit tests. We’ll cover various required technologies and how to write each of their configurations. Let’s dive in.

Prerequisites and Assumptions

Before getting started, we should make sure that we have everything we need. It is assumed that you have:
- An intermediate understanding of JavaScript, including concepts of CommonJS modules,
- At least a rough understanding of Angular 1,
- An understanding of ES6/ES2015 concepts, such as arrow functions, modules, classes and block-scoped variables,
- Comprehension of using the command line or terminal, such as Git Bash, iTerm, or your operating system’s built-in terminal, and
- You have Node >= v4 and NPM >= v2 installed.

What is Angular 2?

Angular 2 is a modern framework for developing JavaScript applications. It is the second major version of the extremely popular Angular framework by Google. It was written from the ground up using TypeScript to provide a modern, robust development experience. Note that, although TypeScript is the preferred language for developing with Angular 2, it is possible to develop with ES5 and regular ES2015. In these articles, we’ll be using TypeScript.

Differences from Angular 1

Within the Angular community, there is a well-known video (slides here) where Angular team members Igor Minar and Tobias Bosch announced the death of many concepts Angular developers were familiar with. These concepts were:
- Controllers,
- Directive Definition Objects,
- $scope, angular.module(...), and
- jqLite.
Many of these changes have simplified the concepts which developers must keep track of in their heads. That, in turn, simplifies development with Angular. To see how Angular 1 concepts map to Angular 2 you can read this.

Using TypeScript

As mentioned above, Angular 2 can be developed without TypeScript, but the extra features that it provides on top of ES2015 make the development process richer. In this section, we’ll run through a quick primer on TypeScript.

TypeScript is a superset of JavaScript. What does that mean? In short, it means that the JavaScript you know today is understood by TypeScript. You can take any JavaScript file you have, change the extension to .ts, run it through the TypeScript parser, and it will all be understood. What the compiler outputs is JavaScript code, which, many times, is as good or even better written than the original code.

One of the most prominent TypeScript features is its optional typing system. Note that this is optional. Utilizing types, however, makes it easier to reason about code. Many editors have support for TypeScript, which allows for features such as code completion to be utilized. Another feature is the ability to define interfaces, which can then be used as types throughout your code. This helps when you need to ensure that a data structure is consistent throughout your code.

// note this code is available in the repo
// at ./examples/introduction/types-and-interfaces.ts

// MyType is a custom interface
interface MyType {
  id: number;
  name: string;
  active?: boolean; // the "?" makes this an optional field
}

// someFunction takes three parameters, the third of which is an
// optional callback
function someFunction(id: string, value: MyType, callback?: Function) {
  // ...
}

Now, if you were to code and call someFunction, your editor (with TypeScript support) would give you a detailed definition of the parameters.
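For instance, here is a small usage sketch building on the interface and function above (the object literal and the call are illustrative only, not code from the article’s repo); it shows the kind of mistake the compiler will catch before the code ever runs:

// A value that satisfies MyType; "active" can be omitted because it is optional.
const item: MyType = { id: 42, name: 'widget' };

// OK: the argument types match the declared signature of someFunction.
someFunction('abc-123', item, () => console.log('done'));

// Compile-time error: "id" must be a number, so TypeScript rejects this object
// during compilation rather than letting it fail at runtime.
// const broken: MyType = { id: 'not-a-number', name: 'widget' };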
Similarly, if you had a variable defined as MyType, code completion would show you the attributes available on MyType and the types of those attributes.

TypeScript does require a build process of some sort to run code through its compiler. In later sections we’ll set up the configuration for the TypeScript compiler.

What is Webpack?

From the Webpack website, “webpack is a module loader” that “takes modules with dependencies and generates static assets”. Additionally, it provides a plugin system and methods for processing files.

Module Loading

So, what does module loading mean? Let’s look at a very simple example. We have a project with 4 files: app.js, child1.js, child2.js, and grandchild.js.
- app.js is dependent on child1.js and child2.js
- child2.js depends on grandchild.js
We can then tell Webpack to run just on app.js, and it will compile a file that contains all four files. It does this by finding all statements in app.js that indicate a dependency, such as an import or a require. In app.js, we have:

const child1 = require('./child1.js');
const child2 = require('./child2.js');

Webpack knows it needs to find those files, read them, find any of their dependencies, and repeat until it hits a file with no dependencies. The file child1.js is such a file. In child2.js, though, we have:

const grandchild = require('./grandchild.js');

So, Webpack finds grandchild.js, reads it, sees no dependencies, and stops its processing. What we end up with is all 4 files, compiled together and usable in the browser. In a nutshell, this is what module loading does.

File Processing

In addition to just loading modules based on dependencies, Webpack also provides a processing mechanism called loaders. To see what loaders do, let’s use another example. Say we have a TypeScript file. As mentioned above, to compile a TypeScript file to JavaScript, it needs to be run through the TypeScript compiler. There are Webpack loaders that will do just that. We can tell Webpack that, as it encounters .ts files, it should run those files through the TypeScript compiler. The same can be done for virtually any file type — SASS, LESS, HTML, Jade, not just JavaScript-like files. This concept is beneficial because it allows us to use Webpack as a sort of build system to do all the heavy lifting we need to get Angular 2 into the browser, or in our testing environment.

Why Should We Unit Test?

When we develop applications, the most important thing we can do is to ship code as fast and bug-free as possible. Testing helps us achieve those goals. The number one concern for developers who do not unit test is that it takes too long. When you’re first getting into unit testing in development, it may take longer than you’re used to. However, the long-term benefits far outweigh that initial investment, especially if we test before we write our application code. This is known as Test-driven Development (TDD).

Test-driven Development

We need to write a JavaScript function called isPrime. This function’s main purpose is to return true if a number is a prime number and false if it’s not. In the past, we would just dive in head-first, probably hit Google to remember what a prime number is, or find a solid algorithm for it, and code away, but let’s use TDD. We know our ultimate goal, i.e. to output a boolean saying whether a number is prime or not, but there are a few other concerns we need to address to try to achieve a bug-free function:
- What parameters does the function take? What happens if they’re not provided?
- What happens if a user passes in a non-number value? - How do we handle non-integers? When we approach a problem from a TDD perspective, our first step is to ask ourselves what could go wrong and figure out how to address it. Without this perspective, we may not think about these cases, and we might miss them. When these cases do arise, we may have to completely refactor our code to address them, potentially introducing new bugs. One of the core tenets of TDD is writing just enough code to make a test pass. We use a process known as the red-green-refactor cycle to achieve this goal. The steps are: - Think about the test you need to move towards completion, - Write a test, execute it, watch it fail (red), - Write just enough code to watch it pass (green), - Take a moment to look at the code for any smells. If you find any, refactor the code. Run the tests with each change to the code to ensure you haven’t broken anything, and - Repeat. Step number one is probably the hardest step for developers new to TDD and unit testing in general. Over time you’ll become more and more comfortable and recognize patterns of how to test. Advantages of Unit Testing and TDD We’ve seen a couple of the advantages of unit testing already, but there are more. Here are a few examples: - Reduces the level of bugs in code, - Less application code because we write just enough code to achieve our goals, - Makes it easier to refactor code, - Provides sample code of how to use your functions, - You get a low-level regression test suite, and - Speeds up code-writing. Disadvantages of Unit Testing and TDD Unit testing isn’t a silver bullet to writing perfect code. There are drawbacks to doing so. Here are a few of those drawbacks: - Could give a false sense of quality, - Can be time consuming, - Adds complexity to codebase, - Necessity to have mock objects and stubbed-out code, especially for things outside your control, i.e. third-party code, and - For a large codebase, tweaking one part of your application, such as a data structure, could result in large changes to tests. While these disadvantages exist, if we are diligent and thoughtful in our testing approach, the benefits unit of testing and TDD outweigh these risks. Using NPM as a Task Runner In a prior section, we saw that we could use Webpack to perform many of our build process functions, but not how to invoke them. You may be familiar with the plethora of task runners out there today, e.g. Grunt, Gulp, Broccoli, so why not use one of them? In short, NPM, which we already use to install our project dependencies, provides a simple system for running tasks. As you may know, each project that uses NPM needs a package.json file. One of the sections package.json offers is a scripts section. This section is just a JSON object where the keys are the name of our task, and the values are the script that will run once the task is approved. So, if we have the following in our package.json: ... "scripts": { "foo": "node ./scripts/foo.js", "bar": "node node_modules/bar", "baz": "baz some/config.file" } ... To run these tasks, all we need to do is say npm run [task name]. To run foo, we’d just do: npm run foo and the command node ./scripts/foo.js would be run. If we ran npm run baz it would look for the baz node module through node_modules/.bin, and then use some/config.file. Because we already have this task-runner capability, it will be used to perform tasks such as running unit tests. 
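Those unit tests will be Jasmine specs run through Karma, which we set up in the rest of this article. As a preview of the red-green-refactor loop, a first spec for the isPrimary function discussed earlier might look like the sketch below (the file name, the module path, and the decision to throw on non-numeric input are all assumptions at this point):

// isPrimary.spec.ts -- a hypothetical sketch; isPrimary itself has not been written yet
import { isPrimary } from './isPrimary';

describe('isPrimary', () => {
  it('returns true when the number is prime', () => {
    expect(isPrimary(7)).toBe(true);
  });

  it('returns false when the number is not prime', () => {
    expect(isPrimary(8)).toBe(false);
  });

  it('throws when given a non-numeric value', () => {
    // assumes we decide the function should throw rather than silently return false
    expect(() => isPrimary('seven' as any)).toThrow();
  });
});

Each spec starts out failing (red), we write just enough of isPrimary to make it pass (green), and then we look for opportunities to refactor.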
To read more about using the scripts section, take a look at the official NPM documentation. Installing Dependencies Now, we’ll move on to actually setting up the project. The first step is to get all the dependencies we need. We’ll be pulling in Angular, TypeScript, Webpack, and unit testing. Creating the NPM Project The first thing we need to do is create an NPM project. We’ll take the following steps: - Create a directory. The name doesn’t matter, but it’s useful to make it descriptive, e.g. ng2-webpack-test, - Change into that directory by doing cd ng2-webpack-test, or whatever you named your directory, and - Run npm init -f. This will generate a package.jsonfile for your project. The following commands should all be run from the directory you created in step 1 above. Angular Dependencies Angular 2 is broken into a lot of packages under the @angular organization in NPM. We’ll need to install them and pull in RxJS, Zone.js, and some shims. This can be accomplished through a single install operation: npm i -S @angular/common @angular/compiler @angular/core @angular/platform-browser @angular/platform-browser-dynamic es6-shim reflect-metadata [email protected] zone.js i is an alias for install, -S is an alias for --save. To see what each of these projects is for, take a look at the Angular 2 documentation. Although some of these packages are not immediately necessary for performing unit testing, they will allow us to run our application in the browser when the time comes. Note that this is for Angular 2 RC5. TypeScript Dependencies Since TypeScript is going to be used in this project, we’ll also need to pull it in as a dependency. To help our code have fewer mistakes and maintain a coding standard, we’ll be using code linting through the TypeScript linter, tslint. npm i -D typescript tslint typings -D is an alias for --save-dev. The dependency typings is a way to pull in TypeScript definition files so that TypeScript can understand third-party libraries and provide code completion suggestions for those libraries. We’ll see how to use this later. Webpack Dependencies We’ll also need to pull in all of the dependencies for using Webpack, too. This involves Webpack itself, as well as a list of loaders and plugins we’ll need for Angular, TypeScript, and unit testing. Here’s the command we need to run: npm i -D webpack webpack-dev-server html-webpack-plugin raw-loader ts-loader tslint-loader The html-webpack-plugin and webpack-dev-server will benefit us when we run our application in a web browser. We’ll see what the raw-loader does as we develop our application. Unit Testing Dependencies For unit testing, we’ll be using Karma as our test runner with Jasmine as the testing framework. There are a multitude of testing libraries out there that could be used, like Mocha and Chai, but by default Angular 2 uses Jasmine, and Karma works well with Webpack. npm i -D karma karma-jasmine jasmine-core karma-chrome-launcher karma-phantomjs-launcher phantomjs-prebuilt karma-sourcemap-loader karma-webpack The Chrome and Phantom launchers provide an environment for Karma to run the tests. Phantom is a “headless” browser, which basically means it doesn’t have a GUI. There are also launchers for Firefox, Internet Explorer, Safari, and others. The karma-sourcemap-loader will take the sourcemaps that we produce in other steps and load them for use during testing. This will be useful when running tests in Chrome, so we can place breakpoints in the debugger to see where our code may have problems. 
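Once all four install commands have finished, the dependency sections of package.json should look roughly like the sketch below. The @angular entries reflect the RC5 note above; every other version is shown as a placeholder ("x.y.z") because npm will record whatever versions it actually resolved:

{
  "dependencies": {
    "@angular/common": "2.0.0-rc.5",
    "@angular/compiler": "2.0.0-rc.5",
    "@angular/core": "2.0.0-rc.5",
    "@angular/platform-browser": "2.0.0-rc.5",
    "@angular/platform-browser-dynamic": "2.0.0-rc.5",
    "es6-shim": "x.y.z",
    "reflect-metadata": "x.y.z",
    "rxjs": "x.y.z",
    "zone.js": "x.y.z"
  },
  "devDependencies": {
    "html-webpack-plugin": "x.y.z",
    "jasmine-core": "x.y.z",
    "karma": "x.y.z",
    "karma-chrome-launcher": "x.y.z",
    "karma-jasmine": "x.y.z",
    "karma-phantomjs-launcher": "x.y.z",
    "karma-sourcemap-loader": "x.y.z",
    "karma-webpack": "x.y.z",
    "phantomjs-prebuilt": "x.y.z",
    "raw-loader": "x.y.z",
    "ts-loader": "x.y.z",
    "tslint": "x.y.z",
    "tslint-loader": "x.y.z",
    "typescript": "x.y.z",
    "typings": "x.y.z",
    "webpack": "x.y.z",
    "webpack-dev-server": "x.y.z"
  }
}

If any of these names are missing from your package.json, re-run the corresponding install command before moving on.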
Configurations The following sections will show how set up our project to run tests and how to run our application in a browser. We’ll need to configure setups for: - TypeScript, - Unit Testing, - Webpack, and - NPM Scripts. This may seem like a lot to undertake, but we’ll see that the developers of these libraries have established configurations that are easy to understand. You can follow along with the example files located in examples/introduction/ng2-webpack-test. You will need to run npm i if you have cloned this repository to get all the Node modules installed. TypeScript Configuration The pieces needed for utilizing TypeScript are type definitions, linting, and the actual configuration for the TypeScript compiler. Let’s look at the type definitions first. Type Definitions First, we’ll need to create the typings.json file by running the following command from the root of our project: ./node_modules/.bin/typings init This will run Typings out of its node_modules directory and use its init command. The typings.json file will be placed in the root of the project. It will contain the name of the project and an empty dependencies object. We’ll use the install command to fill that object. There are three files to install, but we need two commands: ./node_modules/.bin/typings install dt~jasmine env~node --save --global Again, we are using Typings to install type definitions for jasmine and node. The second flag, --global, tells Typings that the definitions being installed are for libraries placed in the global scope, i.e. window.<var>. You’ll notice that each of the libraries is preceded by a ~ with some letters before it. Those letters correspond to different repositories to look for the type definition files. For information on those repositories, look at the “Sources” section of the Typings Github page. We’ll run a second install command for the es6-promise shim, as it is not a window.<var> library. Notice that there is no prefix required. ./node_modules/.bin/typings install es6-promise --save Your type definitions are now installed. Linting We’ll also be instituting code linting for our project. This will help our code stay as error-free as possible, but be aware that it won’t completely prevent errors from happening. As mentioned above, we’ll use the tslint library to achieve this goal. It uses the file tslint.json to describe the rules for how code linting should behave. Let’s take it one section at a time: { "class-name": true, This will ensure that all of our class names are in Pascal-case ( LikeThis). "comment-format": [ true, "check-space" ], Comments are required to have a space between the slashes and the comment itself ( // like this). "indent": [ true, "spaces" ], In the great war of tabs versus spaces, we’ll take up in the spaces camp. If you’re a fan of tabs, you can always change "spaces" to "tabs". "no-duplicate-variable": true, This will help prevent us from redeclaring variables in the same scope. "no-eval": true, This disables the use of eval. "no-internal-module": true, TypeScript’s module keyword has been known to cause confusion in the past, so we’ll prevent its usage in favor of namespace. "no-trailing-whitespace": true, This will ensure we’re not leaving spaces or tabs at the end of our lines. "no-var-keyword": true, ES2015 allows variables to be block-scoped by using const and let. Since TypeScript is a superset of ES2015, it also supports block-scoped variables. 
These new variable-declaration keywords provide clarity in our code which var does not, namely because let and const variables are not hoisted. To help achieve this clarity, this attribute tells tslint to raise a flag when it sees that we’ve used the var keyword. "one-line": [ true, "check-open-brace", "check-whitespace" ], This rule says that an opening brace must be on the same line as the statement it is for and it needs to be preceded by a space. "quotemark": [ true, "single" ], This states that all strings be surrounded by single quotemarks. To use double, change "single" to "double". "semicolon": true, This ensures that our lines will end with a semicolon. "triple-equals": [ true, "allow-null-check" ], This tells us to use triple equals. The "allow-null-check" lets == and != for doing null-checks. "typedef-whitespace": [ true, { "call-signature": "nospace", "index-signature": "nospace", "parameter": "nospace", "property-declaration": "nospace", "variable-declaration": "nospace" } ], These rules say that when defining types there should not be any spaces on the left side of the colon. This rule holds for return type of a function, index types, function parameters, properties, or variables. "variable-name": [ true, "ban-keywords", "check-format" ], We need to make sure we don’t accidentally use any TypeScript keywords and that variable names are only in camelCase ( likeThis) or, for constants, all uppercase ( LIKE_THIS). "whitespace": [ true, "check-branch", "check-decl", "check-operator", "check-separator", "check-type" ] We’ll do a little more whitespace checking for the last rule. This checks branching statements, the equals sign of variable declarations, operators, separators ( , / ;), and type definitions to see that there is proper spacing all around them. Configuring TypeScript The TypeScript compiler requires a configuration file, tsconfig.json. This file is broken into two sections: compilerOptions and exclude. There are other attributes which you can see in this schema, but we will focus on these two. The compiler options section is composed of more rules: "compilerOptions": { "emitDecoratorMetadata": true, "experimentalDecorators": true, Angular 2 relies heavily on decorators, e.g. @Component, and the above rules let TypeScript know that it can use them. The reflect-metadata library we pulled in above is used in conjunction with these rules to utilize decorators properly. "module": "commonjs", "moduleResolution": "node", With these two rules, the compiler knows we’ll be using CommonJS modules and that they should be resolved the way Node resolves its modules. It does so by looking at the node_modules directory for modules included with non-relative paths. We could have selected "es2015" for moduleResolution, but since we will be compiling to ES5, we cannot use it. "noImplicitAny": true, "suppressImplicitAnyIndexErrors": true, With the typing system, you can specify any as a type, the first attribute above prevents us from not specifying a type. If you don’t know what the type is, then use any. The one spot where we want to avoid errors for not specifying a type is with indexing objects, such as arrays, since it should be understood. "removeComments": false, When TypeScript compiles our code it will preserve any comments we write. "sourceMap": true, "target": "es5" }, As mentioned above we’ll be compiling to ES5. We’re going to have TypeScript create sourcemaps for us so that the code we write can be seen in browser debugging tools. 
The exclude section will tell the compiler which sections to ignore during compilation. There is a files section, but since it does not support globbing, we would end up entering every file we need TypeScript to compile, which becomes a serious problem only after a few files. "exclude": [ "node_modules", "typings/main", "typings/main.d.ts" ] This will exclude the node_modules directory and the use of the type definitions found in the main directory, as well as the main.d.ts file of the typings directory. Configuring Karma Next, we will set up Karma to work with Webpack. If you’ve ever used Karma before, you’re familiar with the fact that its configuration can easily become unwieldy. Karma relies on its configuration file in which we specify which files should be tested and how. The file, typically named karma.conf.js, is usually the only file needed. In our setup, we’ll have a second file, named karma.entry.js that will contain extra setup to work with Angular 2 and Webpack. We’re going to start developing our folder structure a little more here, to keep things clean as we proceed. Create a directory named karma in the root of your project. Save the files described in the following two sections inside this directory. Setting Up karma.conf.js 'use strict'; module.exports = (config) => { All Karma configuration files export a single function which takes, the Karma configuration object as a parameter. We’ll see some of the properties this object provides below. config.set({ autoWatch: true, browsers: ['Chrome', 'PhantomJS'], The first property config gives us is the .set method. This is how Karma requires us to set the configuration, which takes a JSON object as a parameter. Our first two attributes of that configuration object are autoWatch and browsers. The autoWatch attribute is a boolean that tells Karma whether it should watch the list of files we’ll later provide and reload the tests when any of those files change. This is a great feature for when we’re running our red-green-refactor loops. The second attribute, browsers, tells Karma in which browsers it should run the tests. This utilizes the karma-chrome-launcher and karma-phantomjs-launcher dependencies we installed earlier to launch the tests in those browsers. files: [ '../node_modules/es6-shim/es6-shim.min.js', 'karma.entry.js' ], Here, we describe the files we’ll be asking Karma to track. If you’ve used Karma in the past, this list may seem very small. You’ll see that we’re going to leverage TypeScript and Webpack to really track those files. The first file is the ES2015/ES6 shim that we installed earlier which adds in some functionality that hasn’t quite hit in PhantomJS yet. Then, we require the file karma.entry.js, which will be developed in the next section. frameworks: ['jasmine'], logLevel: config.LOG_INFO, Here, we tell Karma we’ll be using Jasmine and that the output messages should be at the console.info level or higher. The priority of messages are: LOG_DISABLE— this will, display no messages, LOG_ERROR, LOG_WARN, LOG_INFO, and LOG_DEBUG. When we do LOG_INFO, we’ll see the output from console.info, console.warn, and console.error, but the console.debug message will not appear. phantomJsLauncher: { exitOnResourceError: true }, This is a configuration item specific to PhantomJS which tells it to shut down if Karma throws a ResourceError. If we didn’t, PhantomJS might not shut down, and this would eat away at our system resources. 
preprocessors: { 'karma.entry.js': ['webpack', 'sourcemap'] }, We tell Karma to run a list of preprocessors on our karma.entry.js file. Those preprocessors are the Webpack preprocessor we installed earlier with karma-webpack and the sourcemap preprocessor installed with karma-sourcemap-loader. Karma and Webpack work in conjunction to look for the chain of dependencies starting with karma.conf.js and load sourcemaps as they run. reporters: ['dots'], singleRun: false, The first line tells Karma to use the dots reporter, which, instead of outputting a narrative descriptor for each test, just outputs a single dot, unless the test fails, in which case we get a descriptive message. The second line tells it that we’ll be rerunning the tests, so Karma can keep running after it completes running all the tests. webpack: require('../webpack/webpack.test'), webpackServer: { noInfo: true } }); }; The last two lines of our configuration set up Webpack for use with Karma. The first tells the karma-webpack plugin that the Webpack configuration file is located in our root directory’s webpack directory under the filename webpack.test.js. Webpack outputs a lot of messages, which can become cumbersome when we run tests in the console. To combat this, we’ll set up the Webpack server to keep its output to a minimum by setting noInfo to true. That’s the entire karma.conf.js. Let’s take a look at its sibling file, karma.entry.js. Setting Up karma.entry.js As mentioned in the previous section, the file karma.entry.js acts as the starting point for pulling in our test and application files when using Karma. Webpack provides a file as an entry point, and then it looks for dependencies and loads them file-by-file. By using TypeScript’s module capabilities, we can tell Webpack to look just for our test files, which will all be suffixed .spec.js. Since we’re going to test from those test files, we’ll load all the files we need. Additionally, we’ll perform minor Angular 2 and Jasmine setup. Remember, this file should be placed in the karma directory under the root project directory. require('es6-shim'); require('reflect-metadata'); require('zone.js/dist/zone'); require('zone.js/dist/long-stack-trace-zone'); require('zone.js/dist/jasmine-patch'); require('zone.js/dist/async-test'); require('zone.js/dist/fake-async-test'); require('zone.js/dist/sync-test'); The first thing we’ll do is to pull in some dependencies. The ones you may not notice are those from zone.js. Zone is a library for doing change detection. It’s a library owned by the Angular team, but shipped separately from Angular. If you’d like to learn more about it, here’s a nice talk given by former Angular team member Brian Ford at ng-conf 2014. const browserTesting = require('@angular/platform-browser-dynamic/testing'); const coreTesting = require('@angular/core/testing'); coreTesting.setBaseTestProviders( browserTesting.TEST_BROWSER_DYNAMIC_PLATFORM_PROVIDERS, browserTesting.TEST_BROWSER_DYNAMIC_APPLICATION_PROVIDERS ); Next, we’ll pull in and store more dependencies. The first two are libraries we’ll need for testing provided by Angular. They will let us set the base Angular providers we’ll need to run our application. Then, we’ll use those imported libraries to set up the base test providers. const context = require.context('../src/', true, /\.spec\.ts$/); context.keys().forEach(context); These two lines are the ones that start pulling in our .spec.ts files from our src directory. The .context method comes from Webpack. 
The second parameter of the first line tells Webpack to look in subdirectories for more files. After that, we’ll use the context we created just like we’d use a regular require statement. This context also has a map of all the files it found where each key is the name of a file found. Hence, by running .forEach over the array of keys and calling function for each, we read in each of those .spec.ts files and, as a result, any code those tests require to run. Error.stackTraceLimit = Infinity; jasmine.DEFAULT_TIMEOUT_INTERVAL = 2000; These lines are the Jasmine setup mentioned above. We’ll make sure that we get full stack traces when we have a problem and that Jasmine uses two seconds as its default timeout. The timeout is used when we test asynchronous processes. If we don’t set this properly, some of our tests could hang forever. With these two files, we’ve configured Karma to run. There’s a good chance we’ll never need to touch these files again. Configuring Webpack Now, we’ll set up Webpack to perform its role. If we have a webpack.test.js, a webpack.dev.js, and a webpack.prod.js there is bound to be an overlap in functionality. Some projects will use the webpack-merge from SurviveJS which keeps us from duplicating parts of our configurations. We won’t be using this approach in order to have a complete understanding of what the configuration files are providing us. For our purposes, we will have just a webpack.dev.js and a webpack.test.js. The .dev configuration will be used when spinning up the Webpack development server so that we can see our application in the browser. In your project directory, create a sub-directory named webpack, which will house both of these files. Setting up webpack.test.js This file has been mentioned a couple of times. Now, we’ll finally see what it’s all about. 'use strict'; const path = require('path'); const webpack = require('webpack'); Here, we’re pulling in a couple of dependencies we’ll need. The path library is a Node core library. We’ll mainly use it for resolving full file paths. We’ll also going to need to pull in Webpack to use it. module.exports = { devtool: 'inline-source-map', The Webpack configuration is a JSON object, provided to Webpack by using Node’s module.exports mechanism. The first attribute in the configuration defines that we’ll be using inline source maps as our debugging helper. You can read more about the devtool options on the Webpack site’s documentation for configuration. module: { preLoaders: [ { exclude: /node_modules/, loader: 'tslint', test: /\.ts$/ } ], loaders: [ { loader: 'raw', test: /\.(css|html)$/ }, { exclude: /node_modules/, loader: 'ts', test: /\.ts$/ } ] }, We’ve discussed loaders before, and here can we see them in action. We can also specify preLoaders which run before our regular loaders. We could put this loader with the other “regular” loaders, but as our application grows, having this separation of concerns will help prevent compilation from getting sluggish. Our first “real” loader will take .css and .html files and pull them in raw, whithout doing any processing, but will pull them in as JavaScript modules. We’ll then load all .ts files with the ts-loader we installed before, which is going to run each file through the TypeScript compiler. The exclude attribute allows us to avoid compiling any third-party TypeScript files. In this case, it will avoid pulling in any TypeScript files from the node_modules directory. 
If we wanted to use SASS on our CSS or Jade for our HTML, we could have installed the sass-loader or pug-loader respectively, and used them in a similar way to how we utilize the ts-loader. resolve: { extensions: ['', '.js', '.ts'], modulesDirectories: ['node_modules'], root: path.resolve('.', 'src') }, This section lets Webpack know which types of file extensions it should be loading. The empty string is needed for pulling in Node modules which do not need to provide an extension — for instance, how we pulled in path before. We also inform Webpack that the root directory for our modules is our src directory and that any external modules can be found in the node_modules directory. tslint: { emitErrors: true } }; The final part of this configuration sets up the tslint-loader to display any errors it finds in the console. This file, in conjunction with the two Karma files created previously, will power all of our unit testing. Setting Up webpack.dev.js Note: if you are not interested in using the webpack-dev-server as you follow these tutorials, you can skip this section. Also, any portion of the configuration which is discussed in the webpack.test.js section will not be rehashed below. 'use strict'; const HtmlWebpack = require('html-webpack-plugin'); const path = require('path'); const webpack = require('webpack'); const ChunkWebpack = webpack.optimize.CommonsChunkPlugin; const rootDir = path.resolve(__dirname, '..'); The two new Webpack dependencies here are the HtmlWebpack and ChunkWebpack plugins. The last line of this snippet utilizes that path library so we can be sure that we’ll always be referencing files using our project directory as the starting point. module.exports = { debug: true, devServer: { contentBase: path.resolve(rootDir, 'dist'), port: 9000 }, devtool: 'source-map', The first attribute is debug which, when set to true, lets Webpack know it can switch all of our loaders into debug mode, which gives more information when things go wrong. The devServer attribute describes how we want webpack-dev-server to be set up. This says that the location from which files are served will be the dist directory of our project and that we’ll be using port 9000. Don’t worry about creating a dist directory, as the dev server is going to serve all of our files from memory. You will actually never see a physical dist directory be created, since it is served from memory. However, doing this tells the browser that the files are coming from another location. entry: { app: [ path.resolve(rootDir, 'src', 'bootstrap') ], vendor: [ path.resolve(rootDir, 'src', 'vendor') ] }, Here, we’re telling Webpack that there are two entry points for our code. One is going to be src/bootstrap.ts, and the other will be src/vendor.ts. The file vendor.ts will be our entry to load the third-party code, such as Angular, while bootstrap.ts is where our application code will begin. You’ll notice that we don’t need to provide the .ts to the files. We’ll explain this in a moment. Also, we didn’t need this in webpack.test.js because the file karma.entry.js acted as a sort of faux entrypoint for that process. module: { loaders: [ { loader: 'raw', test: /\.(css|html)$/ }, { exclude: /node_modules/, loader: 'ts', test: /\.ts$/ } ] }, output: { filename: '[name].bundle.js', path: path.resolve(rootDir, 'dist') }, If you don’t remember how we used our loaders, you can take a look at the webpack.test.js section for more information. As mentioned before, files will be served from the dist directory, which we’ll define here. 
The name of each file will be its key in the entry section with a .bundle.js suffix. So, we’ll end up serving an app.bundle.js and a vendor.bundle.js. plugins: [ new ChunkWebpack({ filename: 'vendor.bundle.js', minChunks: Infinity, name: 'vendor' }), new HtmlWebpack({ filename: 'index.html', inject: 'body', template: path.resolve(rootDir, 'src', 'app', 'index.html') }) ], The two plugins we pulled in earlier — ChunkWebpack and HtmlWebpack — are utilized in the plugins section. The chunk plugin makes Webpack pull in the file which is referenced many times only once. The HTML plugin keeps us from having to add <script> tags to our index.html. It just takes the bundles we created in the output section and injects them into the <body> of index.html. resolve: { extensions: [ '', '.js', '.ts' ] } }; Earlier, when we didn’t specify the .ts for our entry points, we did so because this section lets Webpack know that if there is a file with .ts or .js and it matches that file path, it should be read in. We’re finished setting up the development environment for Webpack. Task Running Configuration (via NPM) To execute any of the threads, we can run Node commands as follows: node ./node_modules/.bin/webpack --config webpack/webpack.dev.js Running that command — and keeping it in our minds can be quite cumbersome. To combat this, we’ll leverage the aforementioned scripts section of package.json to act as our task runner. We’ll be creating the following tasks: - Manual linting, - Running the dev server, - In-browser (Chrome) testing, and - Headless browser (PhantomJS) testing. Doing this is very simple — we’ll utilize our installed Node modules to perform any of the above actions. You can add the following to your scripts section of your package.json: "lint": "tslint ./src/**/*.ts", "start": "webpack-dev-server --config ./webpack/webpack.dev.js", "test": "karma start ./karma/karma.conf.js", "test:headless": "karma start ./karma/karma.conf.js --browsers PhantomJS" Instead of saying node ./node_modules/tslint/bin/tslint.js, we can just use the name of a package, e.g. karma or tslint. This is because there is a symlink in ./node_modules/.bin to this file which NPM can utilize to run the package. The test task will run the unit tests in both Chrome and PhantomJS, while the test:headless task will run them just in PhantomJS, as specified by the browsers flag. If you’re unfamiliar with running NPM tasks, there are two ways to run them. The first one is by doing npm run [task name] which will run any task. If you used npm run lint, it would run the lint task. NPM also has the concept of lifecycle events, each of which can be run through npm [event]. The events that we have in our list are start and test, but not test:headless. There are other events as well, which you can learn about through NPM’s documentation in the scripts section. We’ve now finished 99% of the configuration needed for running our tests. NPM will run Karma which will, in turn, leverage Webpack to load all of our test and application files. Once our test and application modules are all loaded, Karma will execute our tests and let us know what succeeded and what failed. Right now if we try to execute npm test or npm run test:headless, we’ll get an error from Webpack telling us we don’t have an src directory. On top of that, we have no .spec.ts files, so Webpack has nothing to load. Conclusion We’ve covered a lot of ground here. Our application is completely configured for unit testing and running the red-green-refactor cycle. 
We were able to set up TypeScript, Karma, Webpack, and our test running with a very small amount of code, too. If you would like to see the final product, take a look at this repository. In the second part of this series, we’ll look at what to test before jumping into writing a sample Angular 2 application with unit tests. Feel free to leave any comments and questions in the section below.
OK, I'm used to making functions that return one value, where I usually just write something like: return ( something ); and I get a value back in the calling function. But right now I am trying to write a function that takes no inputs but brings back 2 values to its calling function. I've been trying things like: return ( x = something, y = something ); but it doesn't work. I have the x and y variables in the calling function, and I've tried different variations of it but can't get it to work. Does anyone know how to make a function return 2 values into two different variables in your main function? Thank you
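For reference, a C function can only return a single value, so the two usual ways to get a pair of values back to the caller are to pass pointers that the function fills in, or to return a struct. Here is a small sketch of both approaches (the names and values are made up):

#include <stdio.h>

/* Option 1: the caller passes pointers and the function fills them in */
void get_two_values(int *x, int *y)
{
    *x = 10;
    *y = 20;
}

/* Option 2: bundle both results in a struct and return that */
struct pair {
    int x;
    int y;
};

struct pair get_pair(void)
{
    struct pair p = { 10, 20 };
    return p;
}

int main(void)
{
    int a, b;

    get_two_values(&a, &b);      /* a is now 10, b is now 20 */

    struct pair p = get_pair();  /* p.x is 10, p.y is 20 */

    printf("%d %d %d %d\n", a, b, p.x, p.y);
    return 0;
}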
- Branch
- Added patch from Alexey Stukalov to improve error messages for initial values
- Further changes to configure script and documentation.
- Updated configure options (and documentation) when JAGS is in a non-standard location.
- Use jags namespace when calling jags library functions.
- Update .onLoad and .onAttach to conform to good practice for user messages.
- Merge changes from release branch
- Fix last update: allow empty list as data
- Allow empty or NULL data in jags.model
- Improve rollback of dic.samples in case of error
- Fail gracefully with a warning if we cannot set a monitor.
- Updated to link to jags 4.x library and modules
- Patch branch for rjags 3
- Unlink model file after use
- Use updated author description
- Updated documentation for list.samplers()
- Branch merge
- Ignore .RNG.* when testing for unused inits.
- Fixed some formatting bugs in the manual.
One of the easiest and fastest ways to extend the AX for Retail POS is by using the Blank Operation. Blank Operations can be assigned to buttons in your Retail POS till layout – and you can deploy as many of them as you wish, which makes using them very flexible. Certain aspects of the the BlankOperations plug-in may a bit of a mystery, so here are a few tips to get you started. [Note: The Blank Operation has not changed from AX for Retail 2009 and AX for Retail 2012, so this article pertains to both versions.] Creating the Blank Operation button in POS The Till Layout designer is a good place to start understanding how the BlankOperations plug-in works. In a production environment you would do this in the Dynamics AX client (Retail Headquarters > Setup > POS > Retail POS > Till layout), but for quick testing, you can modify buttons directly in the POS. Just remember that your changes will be overwritten the next time you run the N-1090 job. If you are using the sample layout, launch POS and go to the Tasks menu. Right-click on an empty button and select Button Properties. In the resulting window, select Blank Operation as the Action Item: After you select Blank Operation, you will see that two new fields are added to the window: Operation Number and Blank Operation Param. These two fields are used to make each instance of a Blank Operation button unique and they are the key for how the the POS communicates to the BlankOperations plug-in. Both of these fields are simple string fields; you don’t have to actually send in a number for the Operation number. Developing the BlankOperations Plug-in When you first open the BlankOperations.cs file, you’ll see that it is a very simple class: just a constructor and one method. Here is the starting code for the method (I removed some of the code): By default, all this code does is open a message box with the two strings that were passed from the POS. For instance, if I passed in “Hello” and “World”, this is what would appear when the button was pressed: As you can see by the BlankOperation() method, you have two parameters available to you: a BlankOperationInfo variable and a PosTransaction variable. If you examine the BlankOperationInfo datatype, you’ll see that there are two main properties, both string values: operationID and parameter. These match up with the two values passed from the button. In addition, there are a few other values that get passed from the POS, including the currently selected item line and payment line. Also of great interest is the POSTransaction parameter. This is a substantial object which represents everything to do with the current transaction: all of the items, including price, discount, and tax; all of the payments that have been made so far; any customer information; etc. Explaining the POSTransaction is beyond the scope of this article, but if you set a breakpoint at the start of the BlankOperation() method, you can examine the properties of the POSTransaction object and get an idea of the information available to you. Hopefully the examples I provide will help you get started. The key to making a multi-purposed BlankOperations plug-in is to separate your code based on the Operation Number passed in. This can be done with a simple Case statement on the operationInfo.OperationId value: If you download the project attached to this article, I have included code for each of these operations. 
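To illustrate, the dispatch inside BlankOperation() might look something like the sketch below. The parameter and property names follow the description in this article (a BlankOperationInfo with OperationId and Parameter, plus the PosTransaction), but the operation names and the AdjustPrices helper are invented for the example, so compare against the stub that ships with your version of the plug-in before copying anything:

public void BlankOperation(BlankOperationInfo operationInfo, PosTransaction posTransaction)
{
    // Route to a block of code based on the Operation number configured on the button
    switch (operationInfo.OperationId)
    {
        case "Price":
            // Adjust prices by the percentage passed in the Blank Operation Param box,
            // e.g. "10", "20" or "30" (AdjustPrices is a hypothetical private helper)
            AdjustPrices(posTransaction, operationInfo.Parameter);
            break;

        case "Hello":
            // Mimic the default sample: just show the two strings passed from the button
            System.Windows.Forms.MessageBox.Show(
                operationInfo.OperationId + " " + operationInfo.Parameter);
            break;

        default:
            // Unrecognized operation number: leave the transaction untouched
            break;
    }
}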
After deciding which block of code needs to be executed, you have at your disposal two main pieces of information: the POSTransaction parameter mentioned earlier, and the operationInfo.Parameter value, which is the string passed in from the second box on the button. In the “Price” example, you might want a separate button for +10%, +20%, +30%, etc. You would set up three buttons passing in “10”, “20”, “30”. Your code would then be responsible for using the value passed in (and from converting from a string to a numeric value). If you need to get more complicated, you could pass in multiple values, delimiting them accordingly. If you need to add 20% to all yellow items in the transaction, you could pass in “20;yellow”. Your code would simply break apart the two values that it needs. Tips for the BlankOperations Plug-in: The attached project will show you some fun stuff that you can do with the Blank Operation. All of the operations were done as proof-of-concept so they would definitely need to be cleaned up before using them. A few other notes: Hopefully this will get you started down the road of adding functionality to your POS implementation. Please use the comments below to share any ideas you have for the BlankOperations plug-in or any issues you may have run into. sorry, when I try to build this example, I have this error: Error 2 The type or namespace name 'ApplicationFramework' does not exist in the namespace 'LSRetailPosis' (are you missing an assembly reference?) C:\Users\Administrator\Desktop\Retail POS Plug-ins\Retail POS Plug-ins\Services\BlankOperations\BlankOperations.cs 73 37 BlankOperations could you help me? Antonio [email protected] Hi Shane! Thank you for another excellent article regarding POS, however I'm intensly awaiting the bug-workarounds that you mentioned: "There are two bugs in these support classes which I will address in another article" Any idea when you will publish that? BR Lars I thought that I responded to these comments... sorry 'bout that. Antonio - I believe you are missing a reference to one of the DLLs. In 2012 you don't have to worry about that since we provide the .csproj files for you. For 2009 it's still kind of a pain to get them all correct. If you're still having problems getting the DLL compiled, please open a support case and we can help them out. Lars - you may have already found it, but there is now an article out there with the bug fixes for 2009. See the link above. i would create a new retailtransaction when i first hit the blankoperation button.but the postransaction was reverted next time. How can i always keep the postransation? pls help for me. Jack, If you are talking about 2012, please see the note that I just published with respect to how to add an item to a transaction using the blank operation: blogs.msdn.com/.../ax-for-retail-two-bug-fixes-for-the-blank-operation-sell-item.aspx Hi Shane, we are customizing AxRetail POS using the blank operations. We are facing a problem in posting the current transaction on the screen through a button on our custom form. How can we post the current transaction using our own code? HI Asad and Tahir, What you're after is the ability to conclude the current transaction from the blank operation - I'm not aware of a way to do this programmatically. The product will do this automatically at the end of operations if it determines that it is time to do so (i.e., after a payment has been made that brings the outstanding balance to zero). 
If you are looking to integrate to an online payment processor (I'm wondering if this is the same request that you have been working on with one of my colleagues from Dubai?) then this should be something that could be accomplished with a customization to the EFT plug-in. You can find the 2009 version of the sample here: blogs.msdn.com/.../ax-for-retail-a-better-sample-for-eft-plug-in-credit-card-processing.aspx Shane I want to create a new form to display the item information. How to get the item price. I want to enter the item is and it should display the item price taking into consideration the discount if any on the same item. Thanks Kamran - this sounds like it is similar functionality that is available in the Price Check form which resides in the Dialog plug-in. You should be able to grab code from that form. Essentially what it does is creates a temporary transaction with the currently-selected customer and adds the item to that transaction. This then gets run through the same price/discount/tax engine as a normal transaction. Look for the "checkItemPrice()" method in the frmPriceCheck.cs file.
In each column, Mission: Messaging discusses topics designed to encourage you to re-examine your thinking about IBM® WebSphere MQ, its role in your environment, and why you should pay attention to it on a regular basis. The cultural heritage of WebSphere MQ In the fifteen years that WebSphere MQ has been available, an ongoing dialog amongst the user community has produced a cultural heritage of common knowledge and best practices. This collective wisdom has accumulated in online forums, conference proceedings, technical journals, and the private document repositories of thousands of IT shops worldwide. It has been refined, polished, tweaked, and tuned over the years to the point that the body of knowledge is remarkably consistent and persistent. This is a mixed blessing. In the first Mission: Messaging column, I wrote that the accessibility of WebSphere MQ has, in many shops, led to less emphasis on formal training and that the resulting skill gap often resulted in outages. The flip side of this, however, is that the same accessibility -- our cultural heritage of best practices and common wisdom -- lowers the barriers of entry into WebSphere MQ for new users and raises the overall quality of implementations. These benefits are a natural and powerful incentive for the WebSphere MQ community to cultivate this system of collective knowledge. The incentive is so strong, in fact, that the community will sometimes perpetuate practices that it no longer understands, that have little value or, in some cases, that are actually destructive anti-patterns. This illustrates the principle of cultural inertia -- the tendency of a meme at rest to remain at rest. Difference in degree vs. difference in kind The effect of cultural inertia over time is that we as a community are much better at adding to our body of knowledge than we are at updating it. When a new product feature or use case comes along, there is a tendency to find a parallel to some existing best practice and piggyback on top of it. If the result functions without obviously breaking anything it becomes part of the cultural fabric, even though the new use case may break fundamental assumptions that the original best practice was founded on. If the new use case truly is a superset or extension of the old use case, this process results in a sound and reliable new best practice. These kinds of incremental changes are differences in degree: A is like B, but a little more complex. The culture reacts much differently however to differences in kind where A is nothing like B and interacts with B unexpected ways. These changes are not easily pigeonholed into existing categories and so they force us to question the underlying assumptions that the current best practices are built on. They threaten to invalidate our architecture, our code, our operations manuals and, worst of all, our ability to substitute casual knowledge for deep skills. Differences in kind cost money. They require a business case. They find very few champions willing to campaign for them. We do not do a very good job of adapting to these paradigm shifts. More often than not, they are absorbed into the culture disguised as incremental changes. This process introduces latent defects and vulnerabilities into our implementations, which accumulate in the form of growing potential risk. Then, when something breaks catastrophically, we wonder how and why it got that bad and why there was no warning. This is the cultural equivalent of rust. 
What were once "best practices" over time become merely "practices" and eventually they become anti-patterns -- practices that look good at first glance but are actually destructive. Client vs. bindings connections Digging back into WebSphere MQ ancient history, one example of a difference in kind was the introduction of the MQSeries client. On the queue manager side of the connection, the API calls are the same and the authorization mechanism is the same so the common practice is to treat a client application like a bindings mode application -- but with an extra channel definition. On the application side, it is possible to take a bindings mode application and run it in client mode without any changes, and in many cases, this is exactly what happened. But the bindings and client mode are fundamentally different because, in client mode, a channel exists between the application and the queue manager. As administrators and developers, we want to think of this channel as a transparent connection to the queue manager, but it is not. When two queue managers exchange messages over a channel, both sides of the connection are managed by MCAs (message channel agents) that share a complex protocol which insures that persistent messages are serialized, hardened to disk and then acknowledged by the receiver before they are deleted from the sending side. The two MCA processes manage batches of messages and will automatically resync after failure, committing or backing out units of work as required to preserve the integrity of the data. Contrast this with a client application where one side of the connection is a message channel agent and the other is application code. In order to be as reliable as a queue manager-to-queue manager channel, the client application would have to duplicate the channel synchronization logic of the MCA in order to recover from broken connections. For example, if the connection were broken on a COMMIT, there are two possibilities: 1) that the connection was lost before the COMMIT was received by the MCA or 2) that the MCA processed the COMMIT but was unable to transmit the response code back to the application. In the first case, where the MCA never sees the COMMIT call, the transaction will eventually be rolled back. At this time, any messages PUT under syncpoint will be removed from the queue and any messages dequeued with destructive GET calls will be rolled back onto the queue and eventually redelivered to the application. In the second case, where the MCA has acted on the COMMIT but could not deliver the response code, there is no transaction to roll back. Messages that were read from the queue with destructive GET calls are permanently removed from the queue and any PUT messages are delivered. Compounding matters is the fact that pending transactions will remain under syncpoint for an indeterminate period while waiting for TCP to time out the socket. This interval might be measured in seconds or many minutes, depending on the TCP kernel settings. Compare this to a broken connection in bindings mode that is detected almost immediately by the queue manager, which then rolls back the transaction, typically within a few milliseconds. In both cases, the outcome of the transaction is ambiguous until the transaction is rolled back, but the duration of the ambiguity is milliseconds in one case and possibly many minutes in the other. 
An application coded to recover from broken connections in bindings mode has a reasonable expectation of reconciling the state of the transaction immediately and continuing processing in an orderly fashion. The same application in client mode must account for the possibility of the transaction remaining outstanding for several minutes before any reconciliation can occur. If the transactions are time sensitive, one of these cases is acceptable and one is not, but to much of the community these are treated as functionally equivalent. Over time, some new best practices for client applications have emerged to deal with the ambiguous outcomes of broken connections. These include performing all API calls under syncpoint, coding applications on either side of the interface to handle duplicate messages, and resending PUT messages after a broken connection. The situation is not unique to WebSphere MQ and in fact it is addressed in the JMS specification in section 4.4.13 which states:. It is up to a JMS application to deal with this ambiguity. In some cases, this may cause a client to produce functionally duplicate messages. A message that is redelivered due to session recovery is not considered a duplicate message. (from Java Message Service - Version 1.1. April 12, 2002) There is an opposing school of thought which holds that there is no difference between client and binding mode connections. The argument is that an MQRC 2009 CONNECTION_BROKEN response code from a bindings mode connection will have the same ambiguity of outcomes and that the application needs to handle these uniformly regardless of the connection mode. If this really is a difference in kind, as I am arguing, the right approach would be to design client and binding mode applications differently. On the other hand, if this is merely a difference in degree, then the right answer is to design client and bindings mode applications to account for the ambiguous outcome of messaging API calls, just as the JMS specification suggests. The problem is that either way, the prevailing practices get it wrong! The issue of broken connections surfaced only after the introduction of the MQSeries client. By the time the issue came to light, "best" practices had been established based on an underlying assumption that the outcome of failed transactions would be immediately and reliably detectable. Even though the MQSeries client broke the use case on which the existing coding practices were built, and despite the subsequent development of competing methods which correctly model the underlying issue, the prevailing practices to this day reflect the original model in which no ambiguity of outcomes exists. The bigger picture The broken connection issue is just one example of how the WebSphere MQ culture tends to perpetuate established practices even after they are obsolete or demonstrably broken. I picked it because it illustrates how incumbent cultural memes persist in a broken state despite competition from better modeled and more robust methods with significant support in the user community. There are many other examples, including these: Backup strategy: Most of the best practice documents I have seen, including the IBM Redbook on the subject, recommend backing up the files under the queue manager. 
This practice models and extends the practice of backing up an application and its configuration details at the filesystem level, and it worked when applications mostly connected to MQSeries in bindings mode, resided on the same server as MQSeries and everything was shut down for the backup. But with consolidation and virtualization, today's queue manager has no maintenance window and is shared among any number of applications, many of which are remote. It is never a good idea to back up the queue manager while it is running, although this happens more often than not and sometimes results in an unusable backup. Even if WebSphere MQ is stopped for the backup, it is impossible to sync the MQ backup with the backups of all the client applications. If the queue manager is ever restored from the backup, the impact to all those applications is unpredictable at best. Eliminating soft limits: In the course of developing a new application, it is quite common to bump into the soft limits such as MAXDEPTH or MAXMSGL. The usual response is to bump these values up to eliminate the "problem." Because there is no need to add code and complexity to deal with these limitations, development can proceed much faster. But this approach treats the soft limits as a nuisance to be eliminated rather than the useful tool they are intended to be. An application hitting one of these limits might be impacted, but at least it has the opportunity to respond sensibly to the problem. Remove the limits and the queue manager is at much greater risk of exhausting the physical resources. When these hard limits are reached, the entire queue manager and all connected applications come to a halt. This is much worse than the temporary and isolated impact to a single application that prompted the change in the first place. This is a case where the references in the written best practice documentation usually recommend the right thing but the community overwhelmingly ignores the advice. Cluster channels named TO.<QMGR>: This practice works in the limited case that there are no overlapping clusters. Now that clusters have become mainstream, overlapping clusters are becoming quite common. This naming convention insures that the CLUSRCVR channel will be shared across all clusters in which a queue manager participates. In this configuration, maintenance in one cluster necessarily impacts the operation of any others, so this practice is one I consider to be a classic anti-pattern. A better approach is to use names like <CLUSTER>.<QMGR> which ensure dedicated channels for each cluster. However, the product manuals still document the TO.<QMGR> convention and it is therefore likely to remain widely practiced for some time to come. Authentication of remote connections: There are a number of common practices that are related to authorization of client connections and channels from other queue managers. A typical example is the oft-repeated advice that the solution for authorization errors from WebSphere MQ Explorer is to place the user's ID into a local group that is sufficiently authorized. The problem with this is it assumes that the ID presented has been authenticated in some way. In fact, WebSphere MQ does not perform any authentication whatsoever. Authentication is delegated to the operating system for bindings mode connections or to a channel security exit for remote connections. SSL might be used to authenticate the channel connection but the identity obtained is not propagated to the API layer unless an exit is present. 
Because WebSphere MQ authentication is so misunderstood, the prevailing security practices almost universally focus on authorization (setmqaut commands) and ignore authentication. You can run all the setmqaut commands you want, but without authentication the only people bound by them are the honest people that you don't need to worry about. Anyone with malicious intent and access to the network will have no problem bypassing whatever authorization is in place if the authentication is ignored. This list could go on but I do not want to get lost in the examples. We will have to save those for other articles. My point is that we as a community could do a lot better about embracing cultural change. We should reexamine our practices from time to time and revalidate the underlying assumptions. Then, if we find that a practice no longer models the real world, it should be updated. Looking to the future Given how difficult it is to change an established practice, the leverage is in catching the errors up front. If we improve at distinguishing differences in kind from differences in degree when new use cases come along, and at adapting to the truly different use cases, far fewer members of our community will experience sudden, unforeseen, and sometimes catastrophic outages despite having followed all the "best" practices. It is an appropriate time to tackle this issue because forces such as SOA, virtualization consolidation, and regulation are driving architectural changes. The recent release of WebSphere MQ V7.0 included the biggest change in the product API since the initial release of MQSeries. In addition, new products are being layered over WebSphere MQ, such as the HTTP bridge, WebSphere MQ File Transfer Edition, and a recent update of WebSphere MQ Extended Security Edition. Perhaps as we integrate these new technologies we can focus not so much on how similar they are to our existing practices, but rather on how they differ. I will be happy to seed the discussion with a few topics. Service orientation The legacy of WebSphere MQ is largely based on point-to-point connectivity, a top-level namespace that resides in the network itself and line-of-business ownership of host and queue manager assets. SOA breaks all of these assumptions. The SOA connectivity model is any-to-any. This drives WebSphere MQ away from point-to-point and toward a clustered configuration. A common idiom in the existing best practices is to use the cluster as the top-level namespace and configure multiple separate clusters to provide namespace isolation and routing. But in an SOA context, the top level namespace is in a registry above the messaging network layer. The closer the MQ topology models the registry namespace, the more transparent and frictionless name resolution becomes as it moves vertically through the layers. Thus, SOA drives WebSphere MQ toward a single clustered namespace modeled after the service registry. SOA also treats the queues and topics in the cluster as destinations in their own right. The queue manager becomes nothing more than a container that provides life support for destinations. So while the object names are migrating up into the logical layer, the queue manager, channels, processes, and other system names are being driven down closer to the physical infrastructure layer. The result is that it no longer makes sense to name queue managers in a business context; for example, by application name. 
When the queue manager becomes shared infrastructure, it makes more sense to name it in that context. Naming the queue manager after the host name will help us model the network and locate assets as we drill down from top level logical names into the network. This reverses a trend of moving away from naming queue managers after hosts, which was a common practice a decade ago. SOA also changes the relationship of queues. The practice of embedding the sending and receiving qualifiers in the queue name is widespread today. In a point-to-point context, this naming convention helped to document the flow of messages in the network. But in an SOA context, it breaks the service-provider/service-consumer model. In an SOA implementation, the only well-known queue is the one that represents the service endpoint. This queue is named for the service that it represents and not the application providing the service. This ties the queue name to the service registry and the logical application layer. The service consumer needs only a reply-to queue, which can be an anonymous dynamic queue or a predefined static queue. If the reply-to queue is static, it is most likely named for the application making the service request. Services can request other services and there is a tendency to reuse the service HLQ (high-level qualifier) when creating reply-to queue names. This should be avoided because it potentially conflicts with or pollutes the namespace in the service registry. Any application that both provides and consumes services should use a separate HLQ for reply-to queues. In short, application of point-to-point naming conventions in an SOA context tends to result in a point-to-point connectivity model. Although the cluster provides any-to-any connectivity, the names preserve the old style of connectivity but push it up into the logical layer. Topic-level security In the new version of WebSphere MQ, topics are first-order objects on par with queues. The setmqaut command is used to grant and revoke authorizations just as it always has for queues, but with a few new options. On the surface, this looks like a difference in degree. But topics are fundamentally different than queues. The topic name can be extremely long and composed of many arbitrarily long nodes. The setmqaut command, however, works on standard 48-character object names. To create authorizations that are meaningful in the topic namespace, a topic object with a 48-character name is mapped to a specific point in the topic hierarchy. The setmqaut command then grants or revokes authority on the object definition. In order to efficiently resolve authorization requests on topic nodes for which no object definition exists, permissions are inherited down the topic tree from parent to child nodes. The best practices for authorization of topics have yet to emerge. I do not know yet what they will look like, but I do know that if we treat them as nothing more than extensions of the queue authorization model, the "best" practice will be wrong. Network topology In electrical engineering terms, a bus is a shared path by which electrical components can exchange signals in an any-to-any fashion and using a common protocol. The closest analogy in WebSphere MQ terms would be the any-to-any connectivity of a cluster combined with a common message format, such as EDI or SWIFT. But in the IT world, a bus is increasingly understood to mean a central component that provides common services, such as mediation, routing or translation. 
As a result, the bus concept is driving the adoption of hub-and-spoke topologies at both the physical and logical network layers. This is significant to WebSphere MQ because authentication of remote connections in MQ occurs at the link. In a point-to-point network, the link-level authentication could be granular. Each node typically hosted no more than one or two related applications so authorizing a link was roughly equivalent to authorizing the application residing there. Compromise of a single node placed a few adjacent nodes at risk. The hub-and-spoke topology breaks this security model. Every spoke node must be authorized to place messages onto the hub. Similarly, every spoke node must be authorized to receive messages from the hub. The vulnerability here is that a spoke node can address messages not to the input queue at the hub, but rather to the output queue at the hub. Using the hub in this way enables any spoke node to access any destination in the entire network. The mitigation is to create a separate identity for each spoke node, place that account in the MCAUSER of the inbound channel, and authorize it only to specific service endpoints. The problem with this is that it forces us to embed the authorization policy into the physical network. In addition to being very difficult to manage, it does not fit well within the SOA model, in which authorization policy is managed centrally and independently of the underlying transport. Setting that aside for a moment, the other problem with this model is that compromise of the hub exposes all of the business assets in the messaging network. It does not necessarily mean that administrative access is possible on any node, but all legitimate destination objects authorized to the hub are vulnerable. This suggests a tiered security model where the spoke nodes are the baseline and the hub is hardened, similar to a gateway. Ultimately though, these techniques treat the hub-and-spoke topology as a difference in degree when it is actually a difference in kind. Both the topology and the service oriented architecture that drives it are fundamentally different than the point-to-point constructions of the past. They are driving authentication up the stack from the link to the message itself. If we fail to recognize this as a fundamentally different use case and instead apply the old security model to it, the result will be a system which is wide open but perceived to be highly secure. This is worse than no security at all. IBM's SOA Security Expert Dr. Raj Nagaratnam explained in a recent interview that services are based on a trust model where authorization is delegated to a policy layer external to the application. Indeed, we can no longer assume that the application itself is a single component. It may be composed from several interoperating services. If authorization is to function effectively and efficiently in such a composite application, the identity must be tied to the individual transaction rather than the pipes through which the transactions flow. As SOA matures, message-level authentication technologies such as WebSphere MQ Extended Security Edition will become strategic components enabling the new security model. Summary Although I've mentioned some specific examples, I am not suggesting that we need to immediately rename all of our queue managers, rebuild our network topologies, or recode all of our client applications. 
The category of problem I have described here persists specifically because it tends to stay dormant in the majority of cases, so most of the community is not affected. But we have been reusing the same best practices for so long, and the underlying model has drifted so far, that the difference represents significant potential risk. The wider this gap is, the more of us are impacted. Extrapolate this process out long enough and the chance of experiencing one of these problems approaches 100%. What I see repeated over and over on my consulting assignments are cases where customers suffered a major outage despite having diligently applied all the best practices. The examples above were all real-world incidents. It doesn't happen often, but I have seen several occasions where a restore of a queue manager failed because the backup set was unusable. Similarly, most of the cluster outages I have worked on involved overlapping clusters that shared channels named TO.<QMGR>. When it comes to security, the prevailing practices completely ignore authentication. The result is that close to 95% of shops that have been assessed exposed anonymous administrative access. With all of the change occurring now, the community has an opportunity to close the gap and bring our best practices in line with the use cases currently employed. This need not be expensive, but it will require greater participation, an active dialog within the community, and a willingness to question some of our long-standing traditions. If we examine and refine our best practices in the online forums, we can begin to integrate them as we consolidate and virtualize our data centers, migrate to SOA, and deploy all the new versions and new products. More importantly, we can embrace a cultural change that values adaptability and flexibility in our knowledge management, just as we value these attributes in the systems we design. Agility is not a superficial trait. It runs deep or not at all. Resources Learn - Podcast: The Deep Queue - Q&A with Dr. Raj Nagaratnam, IBM's SOA Security Expert - WebSphere MQ V7 setmqaut command - WebSphere MQ V7 security manual - Author's Web page: T-Rob.net Discuss - IBMers' Blog on Messaging - The Vienna WebSphere MQ List server - MQSeries.net - developerWorks WebSphere MQ.
http://www.ibm.com/developerworks/websphere/techjournal/0809_mismes/0809_mismes.html
CC-MAIN-2014-42
refinedweb
4,359
50.16
Install Django and Build Your First App In our Introduction to Django, we covered all the basics of using the open source web-building framework. If you haven’t read through our beginner’s tutorial, go ahead and do so now. If you’ve already made it through the easy stuff, you’re probably ready to dive into some code and start building — so let’s do it. Our first step is to grab a copy of Django and set up a development environment where we can tinker away. Install Django Django 1.1 was released on July 29th, 2009. You can download the official release here and follow these instructions for installing. Alternatively, if you want to get the latest and greatest Django tools, you can checkout a copy of the trunk build using Subversion. If you don’t have Subversion installed, go grab a copy. Then fire up your terminal and paste in this line: svn co django-trunk Once all the files finish downloading, we need to make sure our local Python installation is aware that Django exists on our machine. There are a couple ways to go about that, but a symbolic link to your Python site packages directory is probably the easiest. Assuming you’re on a *nix system, this line will do the trick: ln -s `pwd`/django-trunk/django /path/to/python_site_packages/django If you don’t know where your Python site directory is, here’s a handy bit of Python that will tell you: python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()" If you’re on Windows, the easiest thing to do is add Django to your PythonPath environment variable. On Windows, you can define environment variables in the Control Panel. To do this, see Microsoft’s Command Line Reference for more details. The excellent and well-written Django installation docs suggest creating a symbolic link to the file django-trunk/django/bin/django-admin.py in a directory on your system path, such as /usr/local/bin. I find that I don’t use django-admin.py all that often, but you can create the link if you like. Just paste this code in your shell: ln -s `pwd`/path/to/django-trunk/django/bin/django-admin.py /usr/local/bin Now that Django is installed and Python knows where it lives, we’re ready to get started. Remember that you have a Subversion checkout now. If you ever want to update to the latest release, just head to the “django-trunk” folder and run svn update. Set up your first project OK, let’s get started. From the command line, switch to your web development directory. Something like this: cd ~/sites/dev Now we’re going to run the django-admin tool we mentioned earlier. If you created the symlink, you don’t need the full path, but if you didn’t here’s the code: python /path/to/django-trunk/django/bin/django-admin.py startproject djangoblog Yep, you read that last bit correctly — we’re going to build a blog. Now cd over to the new folder: cd ~/sites/dev/djangoblog This is going to be our project folder into which we will add various apps. Some we’ll create and some we’ll be downloading as projects from Google Code. I like to keep my Python import statements clean and free of project-specific module names, so I always make sure my root project folder (in this case, djangoblog) is on my Python path. To do that, just add the path to your PythonPath variable. That way we can write statements like: import myapp rather than import myproject.myapp It’s not a huge thing, but it does make your code more portable. Fill out the project settings OK, we’re getting there. The next step is to fill out our project settings file. 
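Before editing any settings, it's worth a quick check that Python can actually see the Django checkout you just linked in. A hypothetical interpreter session (the exact version string depends on your checkout; any version string at all means the symlink/PYTHONPATH step worked):

>>> import django
>>> django.get_version()
'0.97-pre-SVN-unknown'   # seeing an ImportError instead means the path setup needs another look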
Fire up your favorite text editor and open up the settings.py file inside the "djangoblog" directory. The core of what we need to set up is at the top of the file. Look for these lines:

DATABASE_ENGINE = 'sqlite3' # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
DATABASE_NAME = '/path/to/djangoblog/djangoblog.db'

Note that we're using SQLite as a database for development purposes. Assuming you have Python 2.5 installed, you don't need to do anything to use SQLite. If you're on either Python 2.3 or Python 2.4, you'll need pysqlite — make sure you install version 2.0.3 or higher. If you have MySQL or PostgreSQL already installed, feel free to use them. Make sure to include the entire pathname, as Django cannot understand ~/ or $HOME in defining the database, i.e. /Users/usrname/Sites/dev/djangoblog/djangoblog.db

The other settings are well documented in the settings.py file and we can skip over most of them for now. But there are a couple of settings we should take care of before moving on. If you look at the bottom of the settings.py file you'll notice this bit of code:

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
)

This is where we tell our Django project which apps we want to install. In a minute, we'll add our blog app. But for now let's just add Django's built-in admin tool. Paste in this line just below the sites app:

    'django.contrib.admin',

One more thing before we finish with settings.py: here's a handy trick for the template directories. I generally keep all my templates in a folder named "templates" within my project folder (in this case, "djangoblog"). But I generally move between development and live servers quite a bit and I hate having to change the path to the templates folder. This trick takes care of that:

import os.path
TEMPLATE_DIRS = (
    os.path.join(os.path.dirname(__file__), 'templates'),
)

Instead of hard coding the path to our templates folder this is dynamic — and it showcases how easy it is to tweak Django using Python. We just import the os.path Python module, find the path to the directory where settings.py is, and then append 'templates' to that path. Now when we push the site live, there's no need to change the settings.py file. (Actually you'd probably want to switch to a more robust database, but we'll get to that much later). For now, let's use one of the tools included in manage.py, the syncdb tool. Paste this line in your terminal:

python manage.py syncdb

The syncdb tool tells Django to translate all our installed apps' models.py files into actual database tables. In this case the only thing we have installed are some of the built-in Django tools, but fear not, we'll get to writing our own models in just a minute.

Set up a user

Once you enter the syncdb line above, you'll get some feedback from Django telling you you've just installed the auth system. It will walk you through setting up a user. The output looks like this:

sng: /djangoblog/ $ python manage.py syncdb
Creating table auth_message
Creating table auth_group
Creating table auth_user
Creating table auth_permission
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table django_admin_log
You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now?
(yes/no): yes
Username (Leave blank to use 'luxagraf'):
E-mail address: [email protected]
Password:
Password (again):
Superuser created successfully.
Installing index for auth.Message model
Installing index for auth.Permission model
Installing index for admin.LogEntry model
sng: /djangoblog/ $

Once you've created your username and password, it's time to fire up Django's built-in server.

Start the server

At the command prompt, tell Django to start the server:

/djangoblog/ $ python manage.py runserver
Validating models...
0 errors found
Django version 0.97-pre-SVN-6920, using settings 'djangoblog.settings'
Development server is running at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

Now open up your browser and head to http://127.0.0.1:8000/. You should see a page like this: It works! But that isn't very exciting yet, so let's check out the admin interface. However, before we do that, we need to tell Django what the admin URL is. Fire up your text editor and open the file urls.py in your "djangoblog" folder. Copy and paste the code below, replacing what's already in the file:

from django.conf.urls.defaults import *
from django.contrib import admin

# Uncomment the next two lines to enable the admin:
# from django.contrib import admin
admin.autodiscover()

urlpatterns = patterns('',
    (r'^admin/(.*)', admin.site.root),
)

Now head to http://127.0.0.1:8000/admin/. Log in with the user/pass combo you created earlier and you should see something like this: Now that's pretty cool. If you've ever labored over creating an admin system in Ruby on Rails or PHP, you're going to love Django's built-in admin system. But at the moment there isn't much to see in the admin system, so let's get started building our blog.

Build the blog

Now we could just throw in some code that creates a date field, title, entry and other basics, but that wouldn't be a very complete blog would it? What about tags? An RSS feed? A sitemap? Maybe some Markdown support for easier publishing? Yeah, let's add all that. But remember Django's DRY principles — surely, somebody else has already created a Feed app? A Sitemap app? As a matter of fact Django ships with those built-in. Nice. But what about tags? Well there's one of those apps available as well — the cleverly named django-tagging. Now, there have been some backwards-incompatible changes to Django recently, and as of this writing, django-tagging hasn't caught up to those yet. So we're actually going to need to check out the Newforms Admin branch of the django-tagging codebase. To do that we'll grab the files using Subversion. Paste this code into your terminal window:

svn checkout django-tagging

Now cd into the new django-tagging folder and type:

python setup.py install

Then just drop the tagging folder, which you'll find inside django-tagging, in your "djangoblog" folder or wherever you'd like to keep it (I use a "lib" folder to hold all my frequently used components, like django-tagging). There's also a handy Python implementation of Markdown, so grab that as well (follow the setup instructions on the site to get Markdown installed). Markdown is entirely optional, so feel free to skip it if it's not your thing. Got all that stuff stashed in your "djangoblog" folder? Good. Now let's go ahead and create our first Django application. To do that we'll use Django's app-creating script, which lives inside manage.py in our project folder. Paste this line into your shell:

python manage.py startapp blog

If you look inside "djangoblog" you should now see a new "blog" folder. Open that up and find the models.py file.
Open models.py in your favorite text editor and paste in this:

import markdown

from django.db import models
from django.contrib.syndication.feeds import Feed
from django.contrib.sitemaps import Sitemap
from tagging.fields import TagField
from tagging.models import Tag


class Entry(models.Model):
    # The opening fields of this listing were missing from this copy of the
    # article; title, slug and the body fields below are filled in from the
    # walkthrough that follows, so treat their exact options as reasonable
    # defaults rather than the author's original values.
    title = models.CharField(max_length=200)
    slug = models.SlugField()
    body_html = models.TextField(blank=True)
    body_markdown = models.TextField()
    pub_date = models.DateTimeField('Date published')
    tags = TagField()
    enable_comments = models.BooleanField(default=True)
    PUB_STATUS = (
        (0, 'Draft'),
        (1, 'Published'),
    )
    status = models.IntegerField(choices=PUB_STATUS, default=0)

    class Meta:
        ordering = ('-pub_date',)
        get_latest_by = 'pub_date'
        verbose_name_plural = 'entries'

    def __unicode__(self):
        return u'%s' %(self.title)

    def get_absolute_url(self):
        return "/%s/%s/" %(self.pub_date.strftime("%Y/%b/%d").lower(), self.slug)

    def save(self):
        self.body_html = markdown.markdown(self.body_markdown, safe_mode = False)
        super(Entry, self).save()

Let's step through the code line by line and we'll talk about what's going on. First we import the basic stuff from django, including the model class, the Feed class and the Sitemap class. Then we import the tagging and markdown files we just saved in our project folder. Once we have all the modules we're going to use, we can create our blog model. I elected to call it Entry — you can change that name if you like, but remember to substitute your name everywhere I refer to Entry. Entry extends Django's built-in model.Model class, which handles all the basic create, read, update and delete (CRUD) tasks. In other words, all we have to do is tell Django about the various elements of the database table (like the title field, the entry slug, et cetera) and all the hard work is handled behind the scenes. The first bit of our Entry class definition just defines all our various blog entry components. Django will use this information to create our database tables and structure, and also to generate the Admin interface. Note that we're using Django's various model fields. Most of it should be self-explanatory, but if you want to learn more about each type, check out the Django documentation. Also be aware that there are quite a few more field types available. This is only one example. One thing worth mentioning is the body_html = models.TextField(blank=True) line. What's up with that blank=True bit? Well that information is part of Django's built-in Admin error checking. Unless you tell it otherwise, all fields in your model will create NOT NULL columns in your database. To allow for null columns, we would just add null=True. But adding null=True only affects the database; Django's Admin system would still complain that it needs the information. To get around that, we simply add the blank=True. In this case, what we're going to do is fill in the body_html field programmatically — after we hit save in the Admin and before Django actually writes to the database. So, we need the Admin section to allow body_html to be blank, but not null. Also worth mentioning is the Meta class. Meta handles things like how Django should order our entries and what the name of the class would be. By default, Django would refer to our class as "Entrys." That offends my grammatical senses, so we'll just explicitly tell Django the proper plural name of "entries." Next, we have a few function definitions. All Python objects should return their name. Django recently added unicode support, so we'll return our name in unicode. Then there's get_absolute_url. As you might imagine this refers to the entry's permalink page. When we get to creating templates, we'll use this to put in our permalinks. That way if you ever decide to change your permalinks you only have to change one line and your entire site will update accordingly — very slick.
The last function simply overrides Django’s save function. Every Django model has a save function, and since we didn’t expose the body_html field, we need to fill it in. So we grab the text from our body_markdown field (which is exposed in the admin), run it through the markdown filter and store it in body_html. By doing it this way, we can just call this field in our templates and we’ll get nicely formatted HTML as output, yet the process stays transparent — write in markdown, display HTML. If you’re not using Markdown, just delete the save function, there’s no need to override it if you aren’t using the Markdown module. Check your head Now we need to tell Django about our new apps. Open up settings.py again and add these lines to your list of installed apps: INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.admin', 'djangoblog.tagging', 'djangoblog.blog', ) Once that’s done, head over to the terminal and run manage.py syncdb. Refresh your admin section and you should see the tagging application we downloaded. Super cool. But where’s our blog model? Well, even though Django knows about our blog app, we haven’t told the app what to do in the Admin section. So head back over to your text editor and create a new file. Name it admin.py and save it inside the “blog” folder. Now add these lines: from django.contrib import admin from djangoblog.blog.models import Entry class EntryAdmin(admin.ModelAdmin): list_display = ('title', 'pub_date','enable_comments', 'status') search_fields = ['title', 'body_markdown'] list_filter = ('pub_date', 'enable_comments', 'status') prepopulated_fields = {"slug" : ('title',)} fieldsets = ( (None, {'fields': (('title', 'status'), 'body_markdown', ('pub_date', 'enable_comments'), 'tags', 'slug')}), ) admin.site.register(Entry, EntryAdmin) OK, what does all that do? The first thing we do is import Django’s admin class, as you might suspect, admin controls how the admin interface looks. Now, these customizations are entirely optional. You could simply write pass and go with the default admin layout. However I’ve customized a few things and added some filters to the admin list view so we can sort and filter our entries. Note that if you aren’t using Markdown, just replace body_markdown with body_html. We’ve also used a handy tool, Django’s prepopulated_fields, which will use a bit of Javascript to automatically build a slug from our title. The last step is to register our admin customizations with Django’s admin app. If you aren’t actually making any customizations, you could just write the model name. In other words the admin class name is optional. If you refresh your admin page you should now see the blog model with a link to create and edit blog entries. Want more control over what shows up in your admin? For instance, if it’s a personal site, you probably don’t need the “Users” section in the admin. Let’s customize what shows up. To do that we’re going to create a new file, again named admin.py, but put this one at the project level, inside the djangoblog folder. Okay now paste in this code: from django.contrib import admin from djangoblog.blog.models import Entry from djangoblog.blog.admin import EntryAdmin class AdminSite(admin.AdminSite): pass site = AdminSite() site.register(Entry, EntryAdmin) All this does is override Django’s default AdminSite and then simply registers our Entry model and admin classes. 
Of course you could do more than simply pass; check the Django docs for customization tips. Now if you go back and refresh the admin page you should see just the things we've built — the Entries and Tags models.

Tweak the links and tags

One last thing: let's jump back over to our models.py file; we're going to add a few extra functions to our blog to improve its usability. Add these lines to the bottom of your models.py file:

    def get_previous_published(self):
        return self.get_previous_by_pub_date(status__exact=1)

    def get_next_published(self):
        return self.get_next_by_pub_date(status__exact=1)

    def get_tags(self):
        return Tag.objects.get_for_object(self)

So what's going on here? Django includes a bunch of built-in methods for common tasks, like displaying next and previous links on pages. The function is called get_previous_by_ with the last bit of the function being the name of your datetime field, in our case pub_date. However, we included the ability to save drafts in our model, and, unfortunately, Django's built-in function doesn't know about our drafts. So, it will automatically include them in our next/previous links. This obviously isn't what we want. So what we've done is wrap the Django function with a one-liner:

    def get_previous_published(self):
        return self.get_previous_by_pub_date(status__exact=1)

All this does is wrap the Django function with a new name, get_previous_published, call the original get_previous_by_ function, but add a filter so that only published entries are included in the results. The last function in that set, get_tags, is just a time saver. There's a good chance you'll want to list all the tags you've added to your entry, so I've included a convenience method that does just that.

Onward and upward

Whew! That's a lot of code to sort through, and we've glossed over a few things. But when you look at the models.py file and consider that from these 49 lines of code, Django was able to generate an entire blog website, it doesn't seem like so much code at all, does it? Save the file and head back over to your browser. Refresh the admin page and click "Add new." Feel free to create a few entries — blog monkey blog! So now we've got our back-end blogging system set up and everything is in place to create a public site. Feel free to take a well deserved break. The next thing to do is dress up the public-facing side of our blog, which is functional, yet totally vanilla. We tackle that in Lesson 3: Use URL Patterns and Views in Django, so click ahead once you feel ready. In the meantime, you've learned enough about Django to continue building the backend, and you can always consult the Django Book if you want to strike out on your own. Good luck!
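If you want to poke at the new model right away, the Django shell (python manage.py shell) is handy. The session below is purely illustrative — the entry values are made up and the exact output depends on your data — but it exercises the model defined above:

from datetime import datetime
from djangoblog.blog.models import Entry

# Create a published entry; save() converts body_markdown into body_html for us.
entry = Entry(
    title='Hello World',
    slug='hello-world',
    body_markdown='This is *marked down* text.',
    pub_date=datetime.now(),
    tags='django python',   # django-tagging accepts a space-separated tag string
    status=1,               # 1 = Published, per the PUB_STATUS choices
)
entry.save()

entry.body_html           # the Markdown rendered to HTML by the save() override
entry.get_absolute_url()  # something like '/2010/feb/05/hello-world/'
entry.get_tags()          # the Tag objects attached to this entry

# Only published entries, newest first (thanks to Meta.ordering):
Entry.objects.filter(status__exact=1)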
http://www.webmonkey.com/2010/02/Install_Django_and_Build_Your_First_App
CC-MAIN-2014-49
refinedweb
3,525
66.64
Here we will see one interesting problem: how to check whether a number is jumbled or not. A number is said to be jumbled if, for every digit, its neighboring digit differs by at most 1. For example, the number 1223 is jumbled, but 1256 is not jumbled. To solve this problem, we have to check if a digit has a neighbor with a difference greater than 1. If such a digit is found, then return false, otherwise true.

#include <iostream>
#include <cmath>
using namespace std;
bool isJumbled(int number) {
   if (number / 10 == 0) // a single-digit number is always jumbled
      return true;
   while (number != 0) {
      if (number / 10 == 0) // when all digits have been checked, return true
         return true;
      int curr_digit = number % 10;
      int prev_digit = (number / 10) % 10;
      if (abs(prev_digit - curr_digit) > 1)
         return false;
      number = number / 10;
   }
   return true;
}
int main() {
   int n = 1223;
   if(isJumbled(n)){
      cout << n << " is Jumbled";
   } else {
      cout << n << " is not Jumbled";
   }
}

Output:

1223 is Jumbled
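For comparison, here is a rough Python port of the same check (not part of the original article); the logic mirrors the C++ function above:

def is_jumbled(number):
    number = abs(number)
    # A single-digit number is always jumbled; the loop below simply never runs.
    while number >= 10:
        curr_digit = number % 10
        prev_digit = (number // 10) % 10
        if abs(prev_digit - curr_digit) > 1:
            return False
        number //= 10
    return True

if __name__ == '__main__':
    for n in (1223, 1256):
        print(n, 'is Jumbled' if is_jumbled(n) else 'is not Jumbled')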
https://www.tutorialspoint.com/check-if-a-number-is-jumbled-or-not-in-cplusplus
CC-MAIN-2022-05
refinedweb
160
65.96
Python programming language has been one step ahead of other programming languages from the start. Loops in Python have a similar advantage when it comes to Python programming. In this article, we will learn about Python For Loop and how we can use it in a program. The following concepts are covered in this article:

With immense applications and easier implementations of Python with data science, there has been a significant increase in the number of jobs created for data science every year. Enroll for Edureka's Python Certification Training For Data Science and get hands-on experience with real-time industry projects along with 24×7 support, which will set you on the path of becoming a successful Data Scientist. Let's go ahead and start this article with a basic introduction to for loop in python.

A for loop is used to iterate over sequences like a list, tuple, set, etc. And not only just the sequences but any iterable object can also be traversed using a for loop. Let us understand the flow of a for loop. The execution will start and look for the first item in the sequence or iterable object. It will check whether it has reached the end of the sequence or not. After executing the statements in the block, it will look for the next item in the sequence and the process will continue until the execution has reached the last item in the sequence. Let us understand the for loop syntax with an example:

x = (1,2,3,4,5)
for i in x:
    print(i)

Output:

1
2
3
4
5

In the above example, the execution started from the first item in the tuple x, and it went on until the execution reached 5. It is a very simple example of how we can use a for loop in python. Let us also take a look at how the range function can be used with a for loop. In python, range is a built-in function that returns a sequence. A range function has three parameters, which are a starting parameter, an ending parameter and a step parameter. The ending parameter does not include the number declared; let us understand this with an example.

a = list(range(0,10,2))
print(a)

Output:

[0,2,4,6,8]

In the above example, the sequence starts from 0 and ends at 9 because the ending parameter is 10 and the step is 2; therefore, during execution, it jumps 2 steps after each item.

Now let us take a look at an example using a python for loop.

def pattern(n):
    k = 2 * n - 2
    for i in range(0,n):
        for j in range(0,k):
            print(end=" ")
        k = k - 1
        for j in range(0, i+1):
            print("*", end=" ")
        print("\r")

pattern(15)

In the above example, we were able to make a python pyramid pattern program using a range function. We used the range function to get the exact number of white spaces and asterisk values so that we will get the above pattern.

Let us take a look at how we can use a break statement in a python for loop. Break in python is a control flow statement that is used to exit the execution as soon as the break is encountered. Let us understand how we can use a break statement in a for loop using an example. Let's say we have a list with strings as items, so we will exit the loop using the break statement as soon as the desired string is encountered.

company = ['E','D','U','R','E','K','A']
for x in company:
    if x == "R":
        break
    print(x)

Output:

E
D
U

In the above example, as soon as the loop encounters the string "R" it enters the if statement block, where the break statement exits the loop.
Similarly, we can use the break statement according to the problem statement. Now, let us take a look at how we can use a python for loop with lists. A list in python is a sequence like any other data type, so it is quite evident how we can make use of a list. Let me show you an example where a for loop is used with lists.

color = ["blue", "white"]
vehicle = ['car', 'bike', 'truck']
color_comb = [(x,y) for x in color for y in vehicle]
print(color_comb)

Output:

[('blue', 'car'), ('blue', 'bike'), ('blue', 'truck'), ('white', 'car'), ('white', 'bike'), ('white', 'truck')]

Let us also take a look at how we can use the continue statement in a for loop in python. Let us understand this with the same example we used for the break statement; instead of break, we will use the continue statement. It is also a control statement, but the only difference is that it will only skip the current iteration and execute the rest of the iterations anyway.

company = ['E', 'D', 'U', 'R', 'E', 'K', 'A']
for x in company:
    if x == "R":
        continue
    print(x)

Output:

E
D
U
E
K
A

In the above example, the continue statement was encountered when the string value was "R", so the execution skipped that particular iteration and moved to the next item in the list. Let us now look at a few other examples for a better understanding of how we can use for loop in Python. Here is a simple for loop program to print the product of any five numbers taken from the user:

res = 1
for i in range(0,5):
    n = int(input("enter a number"))
    res *= n
print(res)

Here is another simple program to calculate the area of squares whose sides are given in a list.

side = [5,4,7,8,9,3,8,2,6,4]
area = [x*x for x in side]
print(area)

Output:

[25, 16, 49, 64, 81, 9, 64, 4, 36, 16]

Now that we are done with the for loop concepts, here are a few tutorials that will help you learn the programming language in a structured way. This brings us to the end of this article where we have learned how we can use For Loop In Python. I hope you are clear with all that has been shared with you in this tutorial. If you found this article on "Python For Loop" relevant, check out the Edureka Python Certification Training, which covers both core and advanced Python concepts along with various Python frameworks like Django. If you come across any questions, feel free to ask them. Put them in the comments section of "Python for Loop" and our team will be glad to answer.
https://www.edureka.co/blog/python-for-loop/
CC-MAIN-2020-10
refinedweb
1,127
63.22
Scrapy Beginners Series Part 4: User Agents and Proxies So far in this Python Scrapy 5-Part Beginner Series we learned how to build a basic Scrapy spider, get it to scrape some data from a website, clean up the data as it was being scraped and then save the data to a file or database. In Part 4 we will be exploring how to use User Agents and Proxies to bypass restrictions on sites who are trying to prevent any scraping taking place.. (This Tutorial) Part 5: Deployment, Scheduling & Running Jobs - Deploying our spider on a server, and monitoring and scheduling jobs via ScrapeOps. (Part 5) The code for this project is available on Github here! If you prefer video tutorials, then check out the video version of this article. Need help scraping the web? Then check out ScrapeOps, the complete toolkit for web scraping. Getting Blocked & Banned Whilst Web Scraping What you will quickly find out when you start scraping at any large scale volume, is that building and running your scrapers is the easy part. The true difficulty of web scraping is in being able to reliably retrieve HTML responses from the pages you want to scrape. Whilst you can easily scrape a couple hundred pages with your local machine, when you need to scrape thousands or millions of pages websites will quickly start blocking your requests. Large websites such as Amazon monitor who is visiting their website by tracking your IPs and user agents, and detecting any unusual behaviour using sophisticated anti-bot techniques. If they identify someone they think is a scraper then they will block your requests. This isn't the end of the world however, as by properly managing our user agents, IP addresses and cookies we use when scraping we can bypass making of these anti-bot countermeasures. For the purposes of our beginners project scraping Chocolate.co.uk we don't need to worry about it. However, in this guide we're still going to look at how we can dynamically rotate our User Agents and IPs so that you can apply these techniques if you ever need to scrape a more difficult website like Amazon. In this tutorial, Part 4: Beginners guide to Scrapy User Agents and Proxies we're going to cover: - Getting Blocked & Banned Whilst Web Scraping - Using User Agents When Scraping - Using Proxies to Bypass Anti-bots and CAPTCHA's Using User Agents When Scraping When scraping the web, oftentimes the site you want to scrape don't really want you scraping their data so you need to disguise who you are. To do so we need to manage the User Agents we send along with our HTTP requests. User Agents are strings that let the website you are scraping identify the application, operating system (OSX/Windows/Linux), browser (Chrome/Firefox/Internet Explorer), etc. of the user sending a request to their website. They are sent to the server as part of the request headers. You can think of a User Agent as a way for a browser to say “Hi, I’m Mozilla Firefox on Windows” or “Hi, I’m Safari on an iPhone” to a website. Scrapy User Agent Web scrapers and crawlers also need to set the user agents they use as otherwise the website may block your requests based on the user agent you send to their server. For example, the default user agent Scrapy sends when making a request is: Scrapy/VERSION (+) This user agent will clearly identify your requests as comming from a web scraper so the website can easily block you from scraping the site. So if scraping most sites you will want to change this (will show you later). 
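To make that concrete, the user agent is simply one entry in the headers sent with each request. The string below is a typical desktop Chrome user agent, used purely as an illustration; any realistic browser string works the same way:

headers = {
    'User-Agent': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                   'AppleWebKit/537.36 (KHTML, like Gecko) '
                   'Chrome/96.0.4664.110 Safari/537.36'),
}

# In Scrapy you can attach headers like this to an individual request:
#   yield scrapy.Request(url, headers=headers, callback=self.parse)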
Identifying Yourself (Googlebot) Other times, you may want to clearly identify yourself as a specific web scraper/crawler as that website might want you to scrape their site and give you better treatment. The most famous example is Google’s Googlebot that crawls the web to index pages and rank them: Googlebot/2.1 (+) Web servers/websites can give bots special treatment, for example, by allowing them through mandatory registration screens, bypassing CAPTCHA etc. Changing User Agents If you are scraping any popular website, they will have some level of blocking mechanism in place based on the user agents it sees you using. We can get around this by rotating through multiple user agents, that appear like real visitors to their site. Luckily for us, the Scrapy community have created some great extensions that make rotating through user agents very easy. In this case, we're going to use the scrapy-user-agents Scrapy download middleware. To use the scrapy-user-agents download middleware, simply install it: pip install scrapy-user-agents Then in add it to your projects settings.py file, and disable Scrapy's default UserAgentMiddleware by setting its value to None: DOWNLOADER_MIDDLEWARES = { 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None, 'scrapy_user_agents.middlewares.RandomUserAgentMiddleware': 400, } The scrapy-user-agents download middleware contains about 2,200 common user agent strings, and rotates through them as your scraper makes requests. Okay, managing your user agents will improve your scrapers reliability, however, we also need to manage the IP addresses we use when scraping. Using Proxies to Bypass Anti-bots and CAPTCHA's Even when rotating your user agents, most websites will still refuse to serve your requests if you only specify User-Agent in the headers as they are also keeping track of the IP address. When a site sees to many requests coming from one IP Address/User Agent combination they usually limit/throttle the requests coming from that IP address. Most of the time this is to prevent things such as DDOS attacks or just to limit the amout of traffic to their site so that their servers don't run out of capacity. That is why we also will need to look at using proxies in combination with the random user agents to provide a much more reliable way of bypassing the restrictions placed on our spiders. While it's easy to scrape a few pages from a website using your local IP address and random User Agents if you want to scrape thousands/millions of pages you will need a proxy. Note: For those of you who don't know, your IP address is your computers unique identifier on the internet. Rotating IP Addresses With Proxies To bypass this rate limiting/throttling the easiest thing we can do is to change the IP address from which we are sending our scraping requests - just like the randomising of the User Agents which we have already looked at. This is done using proxies. Proxies are a gateway through which you route your scraping requests/traffic. As part of this routing process the IP address is updated to be the IP address of the gateway through which your scraping requests went through. Several companies provide this service of rotating proxies and their costs vary depending on their level of service and reliability. Proxy prices range from Free to thousands of dollars per month, and price often always isn't correlated to the performance. 
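Before we get to proxy providers, it helps to know that at the lowest level Scrapy lets you route an individual request through any proxy gateway by setting the request meta. A minimal sketch (the proxy address and the collections path are placeholders, not real endpoints):

import scrapy

class SingleProxySpider(scrapy.Spider):
    name = 'single_proxy_example'
    start_urls = ['https://www.chocolate.co.uk/collections/all']

    def start_requests(self):
        for url in self.start_urls:
            # Substitute whatever gateway address your proxy provider gives you.
            yield scrapy.Request(
                url,
                callback=self.parse,
                meta={'proxy': 'http://user:[email protected]:8000'},
            )

    def parse(self, response):
        self.logger.info('Fetched %s via the proxy', response.url)

Scrapy's built-in HttpProxyMiddleware picks up the meta['proxy'] value and routes the request through that gateway, which is essentially what the rotating services below automate for you.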
Using Paid Proxies There are many professional proxy services available that provide much higher quality of proxies that ensure almost all the requests you send via their proxies will reach the site you intend to scrape. Here are some of the best proxy providers: All of these proxy providers are slightly different with different proxy products and integration methods so we can't cover all of them in detail within this guide. However, you can use our Free Proxy Comparison Tool that allows to compare the pricing, features and limits of every proxy provider on the market so you can find the one that best suits your needs. Including finding the proxy providers who offer the most generous Free Plans. For this Beginner Series we're going to use ScrapeOps, as it has a great free plan and it's the most reliable solution(Being an "All-In-One Proxy" - It uses the best proxy provider from a pool of over 20+ proxy providers). However, you can use a different proxy if you wish. Integrating ScrapeOps The first thing we need to do would is to register for a free ScrapeOps account. Once you have your account setup and have found the API key from your account you can then start to interact with the API endpoint they provide using a very simple method which we will code into our chocolatespider. To do so we can simply add the following code to the top of our chocolatespider.py file. API_KEY = 'YOUR_API_KEY' def get_proxy_url(url): payload = {'api_key': API_KEY, 'url': url} proxy_url = '?' + urlencode(payload) return proxy_url And switch the use this function in our Scrapy request: yield scrapy.Request(url=get_proxy_url(url), callback=self.parse) This is how your final code should look. import scrapy from chocolatescraper.itemloaders import ChocolateProductLoader from chocolatescraper.items import ChocolateProduct from urllib.parse import urlencode API_KEY = 'YOUR_API_KEY' def get_proxy_url(url): payload = {'api_key': API_KEY, 'url': url} proxy_url = '?' + urlencode(payload) return proxy_url class ChocolateSpider(scrapy.Spider): # The name of the spider name = 'chocolatespider' # These are the urls that we will start scraping def start_requests(self): start_url = '' yield scrapy.Request(url=get_proxy_url(start_url), callback=self.parse)(get_proxy_url(next_page_url), callback=self.parse) Concurrent Requests When running more than a small scale crawl we might want to have several requests happen at the same time. We can make this happen with the CONCURRENT_REQUESTS setting inside our settings.py file. However, when using paid proxies we need to make sure that the plan we are using has a high enough concurrent requests limit - otherwise they will block some of our requests and only allow the amount we have paid for through. So a plan might be cheaper, but the concurrency limit might be very low - this can slow down our crawl as we'll only be able to send out one request at a time. You'll need to decide if you want your crawl to be completed quicker and then pay for higher concurrency limits or take longer if you only want to pay for a plan with a lower concurrency limit. Before running the spider we need to update our settings.py file to set the number of CONCURRENT_REQUESTS to 1 and make sure the previous user-agent and proxy middlewares we added above are disabled. 
## settings.py CONCURRENT_REQUESTS = 1 DOWNLOADER_MIDDLEWARES = { ## Rotating User Agents # 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None, # 'scrapy_user_agents.middlewares.RandomUserAgentMiddleware': 400, ## Rotating Free Proxies # 'scrapy_proxy_pool.middlewares.ProxyPoolMiddleware': 610, # 'scrapy_proxy_pool.middlewares.BanDetectionMiddleware': 620, } Now when we run our spider, Scrapy will route our requests through ScrapeOps and prevent them from being blocked. Next Steps We hope you now have a good understanding of how to use User Agents and Proxies/Proxy Providers to bypass any sites which are blocking/limiting you from scraping the data you need! If you have any questions leave them in the comments below and we'll do our best to help out! If you would like the code from this example please check it out on Github. The next tutorial covers how to make our spider production ready by deploying our spiders to an external server, setting up monitoring, alerting and automated scheduling. We'll cover these in Part 5 - Deployment, Monitoring, Alerting and Scheduling with Scrapy .
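Before moving on, one way to convince yourself that the user agent and proxy changes are actually taking effect (this is an extra check, not part of the original series) is to point a throwaway spider at a header-echo service such as httpbin.org and log what the target site sees:

import scrapy

class HeaderCheckSpider(scrapy.Spider):
    name = 'header_check'
    # httpbin.org simply echoes back what it receives, which makes it handy
    # for inspecting the outgoing User-Agent and the originating IP address.
    start_urls = ['http://httpbin.org/headers', 'http://httpbin.org/ip']

    def parse(self, response):
        self.logger.info('%s returned: %s', response.url, response.text)

Run it once with the middlewares disabled and once with them enabled, and the logged headers and IP should change between runs.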
https://scrapeops.io/python-scrapy-playbook/scrapy-beginners-guide-user-agents-proxies/
CC-MAIN-2022-40
refinedweb
1,893
58.32
sxb427 + 0 comments

A simple DP approach works. For example,

a = "aBbdD"
b = "BBD"

Since the last character in a is upper case and the last character in b is also upper case and both are equal,

f(a,b) = f("aBbd","BB")

Now d can never be made equal to B, therefore

f("aBbd","BB") = f("aBb","BB")

Now b can be capitalized to B, therefore we have two options - either capitalize b to B or don't capitalize b.

f("aBb","BB") = f("aB","B") or f("aB","BB")  # Note that this is the 'or' operator.

f is a boolean value. If we have something like a = 'C' and b = 'D' then f(a,b) evaluates to False (boolean value). Lastly (for initialization of the dp array):

if a is non-empty and b is empty, then f(a,b) is True only if all the characters in a are lower case.
if a is empty and b is non-empty, then f(a,b) is always False.
if both are empty then f(a,b) = True

Good Luck !!

hellozhewang + 0 comments

Useful test cases that were generated by my test driver which helped me pass all the test cases:

ababbaAbAB AABABB false
aAbAb ABAB true
baaBa BAAA false
abAAb AAA true
babaABbbAb ABAA false

suburb4nfilth + 0 comments

I spent a few days thinking about the problem and wanted to give some tips to those learning about dynamic programming.

- Try to first solve it recursively with small sample cases and then try to apply memoization. Do not start thinking about the dynamic approach because I got lost doing that and was not sure how to apply a single base case.
- If you do it bottom up in Python try to optimize it because it might time out (mine did on the last 2 cases).
- Try doing it with True and False as the values of the DP table, this way you don't have to think about numbers and edit distances and stuff.

Hope this helps!

m1samuel + 0 comments

Took a while but I finally got it. My python solution:

def abbreviation(a, b):
    m, n = len(a), len(b)
    dp = [[False]*(m+1) for _ in range(n+1)]
    dp[0][0] = True
    for i in range(n+1):
        for j in range(m+1):
            if i == 0 and j != 0:
                dp[i][j] = a[j-1].islower() and dp[i][j-1]
            elif i != 0 and j != 0:
                if a[j-1] == b[i-1]:
                    dp[i][j] = dp[i-1][j-1]
                elif a[j-1].upper() == b[i-1]:
                    dp[i][j] = dp[i-1][j-1] or dp[i][j-1]
                elif not (a[j-1].isupper() and b[i-1].isupper()):
                    dp[i][j] = dp[i][j-1]
    return "YES" if dp[n][m] else "NO"

beat1Percent + 0 comments

What we need to solve is, regardless of the case, whether b is a subsequence of a with the constraint that a can only discard lower case characters. Therefore, if we want to know if b[0, i] is an abbreviation of a[0, j], we have two cases to consider:

a[j] is a lower case character.
if uppercase(a[j]) == b[i], and either b[i - 1] is an abbreviation of a[0, j - 1] or b[i - 1] is an abbreviation of a[0, j], then b[0, i] is an abbreviation of a[0, j].
else if b[0, i] is an abbreviation of a[0, j - 1], b[0, i] is an abbreviation of a[0, j].
else, b[0, i] cannot be an abbreviation of a[0, j].

a[j] is an upper case character.
if a[j] == b[i] and b[0, i - 1] is an abbreviation of a[0, j - 1], b[0, i] is an abbreviation of a[0, j].
else b[0, i] cannot be an abbreviation of a[0, j].

Below is a Java solution:

import java.io.*;
import java.math.*;
import java.security.*;
import java.text.*;
import java.util.*;
import java.util.concurrent.*;
import java.util.regex.*;

public class Solution {

    // Complete the abbreviation function below.
    static String abbreviation(String a, String b) {
        boolean[][] dp = new boolean[b.length() + 1][a.length() + 1];
        dp[0][0] = true;
        for (int j = 1; j < dp[0].length; j++) {
            if (Character.isLowerCase(a.charAt(j - 1)))
                dp[0][j] = dp[0][j - 1];
        }
        for (int i = 1; i < dp.length; i++) {
            for (int j = 1; j < dp[0].length; j++) {
                char ca = a.charAt(j - 1), cb = b.charAt(i - 1);
                if (ca >= 'A' && ca <= 'Z') {
                    if (ca == cb) {
                        dp[i][j] = dp[i - 1][j - 1];
                    }
                } else {
                    ca = Character.toUpperCase(ca);
                    if (ca == cb)
                        dp[i][j] = dp[i - 1][j - 1] || dp[i][j - 1];
                    else
                        dp[i][j] = dp[i][j - 1];
                }
            }
        }
        return dp[b.length()][a.length()] ? "YES" : "NO";
    }

    private static final Scanner scanner = new Scanner(System.in);

    public static void main(String[] args) throws IOException {
        BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(System.getenv("OUTPUT_PATH")));
        int q = scanner.nextInt();
        scanner.skip("(\r\n|[\n\r\u2028\u2029\u0085])?");
        for (int qItr = 0; qItr < q; qItr++) {
            String a = scanner.nextLine();
            String b = scanner.nextLine();
            String result = abbreviation(a, b);
            bufferedWriter.write(result);
            bufferedWriter.newLine();
        }
        bufferedWriter.close();
        scanner.close();
    }
}
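Following up on suburb4nfilth's tip about solving it recursively first and then memoizing, here is one possible top-down Python sketch of the same recurrence. It is not an official solution; the two sample calls at the bottom and their expected answers are just quick manual checks:

from functools import lru_cache
import sys

def abbreviation(a, b):
    sys.setrecursionlimit(10000)

    @lru_cache(maxsize=None)
    def f(i, j):
        # f(i, j): can a[i:] be turned into b[j:]?
        if j == len(b):
            # b is exhausted -- everything left in a must be deletable (lower case).
            return all(c.islower() for c in a[i:])
        if i == len(a):
            return False
        if a[i].isupper():
            # Upper case characters cannot be deleted; they must match exactly.
            return a[i] == b[j] and f(i + 1, j + 1)
        # Lower case: either capitalize it (if it matches) or delete it.
        if a[i].upper() == b[j] and f(i + 1, j + 1):
            return True
        return f(i + 1, j)

    return "YES" if f(0, 0) else "NO"

if __name__ == '__main__':
    print(abbreviation("daBcd", "ABC"))   # YES
    print(abbreviation("AbcDE", "AFDE"))  # NO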
https://www.hackerrank.com/challenges/abbr/forum
CC-MAIN-2020-50
refinedweb
891
73.78
Morphological Transformations

Erosion

import cv2
import numpy as np

img = cv2.imread('j.png',0)
kernel = np.ones((5,5),np.uint8)
erosion = cv2.erode(img,kernel,iterations = 1)

Result:

Dilation

It is just the opposite of erosion. Here, a pixel element is '1' if at least one pixel under the kernel is '1'. So it increases the white region in the image, or the size of the foreground object increases. Normally, in cases like noise removal, erosion is followed by dilation. Because erosion removes white noise, but it also shrinks our object. So we dilate it. Since the noise is gone, it won't come back, but our object area increases. It is also useful in joining broken parts of an object.

dilation = cv2.dilate(img,kernel,iterations = 1)

Result:

Opening

Opening is just another name for erosion followed by dilation. It is useful in removing noise, as we explained above. Here we use the function cv2.morphologyEx().

opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)

Result:

Closing

Closing is the reverse of Opening: Dilation followed by Erosion. It is useful in closing small holes inside the foreground objects, or small black points on the object.

closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html
CC-MAIN-2021-21
refinedweb
249
52.46
JavaScript Cookbook

r3lody writes. Keep reading for the rest of Ray's review.

The first five chapters of the book are somewhat unremarkable. They start out easily enough with recipes for handling JavaScript strings. The discussion of String objects and literals obviously implies that the reader is already somewhat familiar with Object terminology and functionality. That makes this book unsuitable for beginners. The following chapter contains recipes for handling regular expressions. It starts off with an introduction to the basics, which are nothing that a somewhat savvy shell programmer wouldn't already be familiar with. The remaining sections cover pretty basic problems. The only interesting ones I noted handled highlighting found phrases on a web page. Chapter 3 covers dates, time, and timers. Handling dates is shown to be pretty straightforward, and one-shot and recurring timers are presented in a clear, easy-to-understand manner to wrap up the chapter. The next chapter, Working with Number and Math, consists mostly of basic mathematical solutions. The fifth chapter rounds up the basics with recipes for working with arrays and loops. As a Perl programmer, I found this to be familiar territory – especially the discussions of the splice and map method, and using associative arrays. Chapters 6-10 provide the first real appetizing recipes in the JavaScript Cookbook. Shelley first discusses building reusability using function definitions, anonymous functions, recursion, scopes and memoization. It's starting with this chapter that you really begin to learn how to use JavaScript rather than just playing around with it. Event Handling is the first major hurdle a procedural programmer needs to overcome to use JavaScript effectively. Various event triggers are discussed in the sections of chapter 7. While most of the code is easy to comprehend, I ran into problems when the new HTML5 drag and drop was discussed. I had to ask myself: could drag-and-drop be any more complicated? This example worked on all but Opera, but the solution is convoluted. Overall, if you really want to know how screwed up code must be to work in all different browsers, chapter 7 (Handling Events) will demonstrate it. Internet Explorer's differences are the reason for most of the odd workarounds in this chapter. We would all like each browser to work just like another but, unfortunately, each one has its own quirks. Chapter 8 talks about the various ways browsers handle color support and page sizes. The chapter ends by dealing with dynamic pages and bookmarking their state. The first time I worked with JavaScript was when I was coding some form handling. Chapter 9 covers the ways to handle forms and modify web pages. The most useful recipes (at least, for me) were the last two, which showed how to hide and display form elements on the fly, and how to modify selection lists based on other form element entries. All programming involves error handling and usually some debugging. Chapter 10 describes the various ways to handle errors, followed by a well-written set of tips on how to use the debuggers and inspectors for the major browsers. The following three chapters all deal with manipulating web pages. The first of these contained a lot of discussion of namespaces. Namespaces can be confusing, and I didn't really understand them much better after I was finished reading. In addition, you are presented with several boilerplate templates, with little information as to why you would use them. 
I also had problems with some of the downloaded samples not running correctly on my browsers. Chapter 12 contains lots of fun ways to manipulate page content, with specific instructions on how to handle IE and its different ways of doing things. Finally, chapter 13 provided some good basics of page manipulation, including creating collapsible sections, and creating tab pages. Accessibility is the major topic of chapter 14, where you are introduced to ARIA (Accessible Rich Internet Applications). Many web pages are not built with accessibility in mind, so this chapter is very important for giving the web designer the tools for well-designed and usable pages. Some ARIA techniques are straightforward, but others (such as creating collapsible form sections) are much more complex to get right. This chapter does a marvelous job, even though it is somewhat hard to read. The next chapter covers creating media-rich and interactive applications. This chapter was pretty deep, and the examples were not necessarily bad, but the techniques required need the coder to really think clearly about how to accomplish their goal. Chapters 16 and 17 cover JavaScript Objects and Libraries. In the Objects chapter, there is quite a bit of discussion around ECMAScript 5, which is not yet well supported in the browsers normally available. As one example, Shelley does say regarding the preventExtensions feature "by the time this book hits the streets, I expect (hope) at least a couple of browsers will have implemented this feature". The Libraries chapter was more problematic in that I was not able to follow along and get the supplied samples working correctly. In addition, the coverage of jQuery was only a high-level overview, leaving the reader wanting more. In her defense, Shelley acknowledges the breadth of the jQuery topic and refers you to the jQuery Cookbook for more information. Overall, I found chapter 17 unsatisfying and abrupt in its coverage. Communication via Ajax is the main topic for the recipes of chapter 18. Without a proper web server at my disposal, I could not properly evaluate the workability of the solutions. I was also somewhat amused that one of the solutions was described with the caveat that it's not a recommended procedure. I would ask why it was included in that case. The Working with Structured Data chapter starts by covering JSON (JavaScript Object Notation), but then adds in recipes for handling hCalendar Microformat Annotations and RDFa. The transition was a little jarring, and not overly useful, in my opinion. The penultimate chapter covered the issues around persistent information. While using URLs and cookies to maintain some state are discussed, much of this chapter revolves around new capabilities made available in the new HTML5 specifications. Unfortunately, most browsers either do not support, or only partially support these features, so the information is only useful as a "taste of things to come". The final chapter covers the use of JavaScript in non-browser environments. Widgets and gadgets are simple JavaScript applications that are easily coded and disseminated. There are discussions of creating applications for the iPhone, Android phones, and Blackberry phones. 7 out of 10.".
https://books.slashdot.org/story/10/09/29/1352250/javascript-cookbook?sdsrc=nextbtmnext
CC-MAIN-2016-50
refinedweb
1,097
54.83
SafeConcurrent
From HaskellWiki
Revision as of 10:54, 11 April 2009

1 Motivation

The base package (version 3.0.3.1) code for Control.Concurrent.QSem, QSemN and SampleVar is not exception safe. This page is for holding proposed replacement code. Specifically, both the wait and signal operations on a semaphore may block. These may then be interrupted by a killThread or other asynchronous exception. Exception safety means that this will never leave the semaphore in a broken state. Exception correctness means that the semaphore does not lose any of its quantity if the waiter is interrupted before the wait operation finishes.

MSem is the proposed replacement for QSem. A replacement for QSemN is in progress. The SampleVar code is also not exception safe. The replacement has not yet been written.

2 MSem

This code should be exception safe and exception correct. Note that it does not allocate any MVars to manage the waiting queue; only newMSem allocates them. This should be more efficient than QSem.

{-# LANGUAGE DeriveDataTypeable #-}
-- |This module is intended to replace "Control.Concurrent.QSem". Unlike QSem, this MSem module
-- should be exception safe and correct. This means that when the signalMSem and waitMSem operations
-- receive an asynchronous exception such as killThread they will leave the MSem in a non-broken
-- state, and will not lose any quantity of the semaphore's value.
--
-- TODO : drop the MSem suffix from the operations.
--
-- Author : Chris Kuklewicz < haskell @at@ list .dot. mightyreason .dot. com >
-- Copyright : BSD3 2009
module MSem(MSem,newMSem,signalMSem,waitMSem,MSem'Exception) where

import Control.Concurrent.MVar
import Control.Exception(Exception,throwIO,block)
import Data.Maybe(fromMaybe)
import Data.Typeable(Typeable)

newtype MSem = MSem (MVar M)

data M = M { avail :: Int
           , headWants :: Bool
           , headWait :: MVar ()
           , tailWait :: MVar ()
           }

newtype MSem'Exception = MSem'Exception String deriving (Show,Typeable)
instance Exception MSem'Exception

-- |'newMSem' allows positive, zero, and negative initial values.
newMSem initial = do
  newHeadWait <- newEmptyMVar
  newTailWait <- newMVar ()
  let m = M { avail = initial
            , headWants = False
            , headWait = newHeadWait
            , tailWait = newTailWait }
  sem <- newMVar m
  return (MSem sem)

-- |Waiters block in FIFO order. This returns when it is the front waiter and the available value
-- is positive. If this throws an exception then no quantity of semaphore will be lost.
waitMSem :: MSem -> IO ()
waitMSem (MSem sem) = block $ do
  -- sem throw?
  advance <- withMVar sem $ \ m -> return (tailWait m)
  -- advance throw?
  withMVar advance $ \ _ -> do
    -- sem throw? withMVar cleans advance
    todo <- modifyMVar sem $ \ m -> do
      -- clean up if previous waiter died
      mStale <- tryTakeMVar (headWait m)
      let avail' = avail m + maybe 0 (const 1) mStale -- ensure the sem is in a sane state
      if avail' >= 1
        then do return (m { avail = avail' - 1, headWants = False }, Nothing)
        else do return (m { avail = avail', headWants = True }, Just (headWait m))
    case todo of
      Nothing -> return ()
      Just wait -> do
        -- takeMVar throw? the headWants is still set to True, withMVar cleans advance
        takeMVar wait

-- |Add one to the semaphore; if the new value is greater than 0 then the first waiter is woken.
-- This may momentarily block, and thus may throw an exception and leave the MSem unchanged.
signalMSem :: MSem -> IO ()
signalMSem msem@(MSem sem) = block $ modifyMVar_ sem $ \ m -> do
  case headWants m of
    False -> return (m { avail = avail m + 1 })
    True ->
      if avail m >= 0
        then do
          ok <- tryPutMVar (headWait m) ()
          if ok
            then return (m { headWants = False })
            else throwIO . MSem'Exception $
                   "MSem.signalMSem: impossible happened, the headWait MVar was full"
        else return (m { avail = avail m + 1 })
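As a quick illustration (not from the wiki page itself), here is a minimal usage sketch that throttles ten workers to two concurrent slots. The module and operation names are taken from the code above; threadDelay stands in for real work, and a production caller would wrap the wait/signal pair in Control.Exception.bracket_ so the unit is returned even if the worker throws.

import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forM_)
import MSem (newMSem, waitMSem, signalMSem)

main :: IO ()
main = do
  sem <- newMSem 2                  -- at most two workers hold a unit at once
  forM_ [1 .. 10 :: Int] $ \i -> forkIO $ do
    waitMSem sem                    -- block (FIFO) until a unit is free
    threadDelay 100000              -- stand-in for real work
    putStrLn ("worker " ++ show i ++ " done")
    signalMSem sem                  -- hand the unit back, waking the next waiter
  threadDelay 2000000               -- crude wait so the forked workers can finish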
https://wiki.haskell.org/index.php?title=SafeConcurrent&diff=27421&oldid=27420
CC-MAIN-2015-32
refinedweb
567
56.86
This is the first error of several. If you could explain to me what I am doing wrong here I can continue to debug the rest on my own.

Here is the main code:

Code:
using namespace std;
#include "complexh.hpp"
int main()
{
    Complex a(3.0,4.0); // instantiate a
    Complex c;
    cout << "Enter a complex number (q to quit):\n";
    while (cin>>c) // error here
    {
        cout << "c is " << c << endl;
        cout << "complex conjugate is " << ~c << '\n';
        cout << "a+c is "<< a+c << endl;
        cout << "a-c is "<< a-c << endl;
        cout << "a*c is "<< a*c << endl;
        cout << "2*c is "<< 2*c << endl;
        cout << "Enter a complex number (q to quit):\n"
    }
    cout << "Done!\n";
    system("PAUSE");
    return 0;
}

Here is the prototype and function definition:

Code:
friend ostream & operator>>(ostream & os,const Complex & n);

ostream & operator>>(ostream & os,const Complex & n);
{
    cout << "real:";
    os >> n.rel;
    cout << endl;
    cout << "imaginary:";
    os >> n.imaginary;
    cout << endl;
    return os;
}
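For reference, the errors most likely come from declaring the extraction operator with ostream and a const Complex, and from the stray semicolon before the function body. A minimal corrected sketch is below; the Complex class here is only a stand-in, since complexh.hpp isn't shown in the post.

Code:
#include <iostream>
using namespace std;

class Complex {
public:
    Complex(double r = 0.0, double i = 0.0) : rel(r), imaginary(i) {}
    // extraction fills the object, so it takes a non-const Complex reference
    // and reads from an istream (cin), not an ostream (cout)
    friend istream & operator>>(istream & is, Complex & n);
private:
    double rel;
    double imaginary;
};

istream & operator>>(istream & is, Complex & n)
{   // note: no semicolon between the signature and the opening brace
    cout << "real: ";
    is >> n.rel;
    cout << "imaginary: ";
    is >> n.imaginary;
    return is;
}

int main()
{
    Complex c;
    while (cin >> c)    // works: operator>> returns the stream, which tests
    {                   // false once input fails (for example on 'q')
        cout << "read one Complex\n";
    }
    return 0;
}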
https://cboard.cprogramming.com/cplusplus-programming/16791-help-debug-overloading.html
CC-MAIN-2017-22
refinedweb
166
62.17
I'm working on an application (actually written in JavaScript, but using Wing because I'm also working on the Python server) which makes heavy use of namespaces. In this case I have some files in a folder, and a number of sub-folders (and sub-sub-folders etc). When the sub* folders are opened up in the "Project" window, the files in the root folder get pushed down to the bottom of the screen and it can be difficult to keep track of them as there are no tree lines. Regardless of whether there are tree lines, can the sub-folders be shown *after* the files in a folder?

A simple example:

- abling
    + constants
    - fragment
        + handlers
        CViewer.js
    + remote
    - ui
        - dialogs
            + handlers
            CBlink.js
            CPending.js
        + editor
        CIndexTree.js
        CObserver.js
    CApplication.js
    CDoBlink.js

The main application file, in the root of the namespace, is pushed right to the bottom rather than being at the top. That's illogical and I find myself having to think hard about where things are. Ideally we should have:

- abling
    CApplication.js
    CDoBlink.js
    + constants
    - fragment
        CViewer.js
        + handlers
    + remote
    - ui
        CIndexTree.js
        CObserver.js
        - dialogs
            CBlink.js
            CPending.js
            + handlers
        + editor

Note that we have the same problem with our Python files, though to a lesser extent as the namespace is less deep.

Hugh
http://wingware.com/pipermail/wingide-users/2006-December/003982.html
CC-MAIN-2014-52
refinedweb
223
65.32
Issue Type: Bug
Created: 2009-04-28T15:08:46.000+0000
Last Updated: 2012-11-20T20:53:18.000+0000
Status: Closed
Fix version(s):
Reporter: Sebastian Krebs (kingcrunch)
Assignee: None
Tags: Zend_Application
Related issues:
Attachments:

I don't know if it's really an issue. To me it seems like strange behaviour, but it's at least an undocumented feature ;)

    if (null !== $options) {
        if (is_string($options)) {
            $options = $this->_loadConfig($options);
        } elseif ($options instanceof Zend_Config) {
            $options = $options->toArray();
        } elseif (!is_array($options)) {
            throw new Zend_Application_Exception('Invalid options provided; must be location of config file, a config object, or an array');
        }
        $this->setOptions($options);
    }

If $options is a string, the config will be loaded from the file and then given to setOptions().

    if (!empty($options['config'])) {
        $options += $this->_loadConfig($options['config']);
    }

If there is a key 'config' in the file I've loaded before, setOptions() tries to load a second config file. On the other hand, if there is a module 'config' with a module bootstrap, it's not configurable, because Zend_Application finds the key 'config' and tries to load ... something. (The sample config demonstrating this was not preserved.) The other predefined values 'includepaths', 'autoloadernamespaces', 'bootstrap', 'resources' and so on are also not useable as modules with a module bootstrap. Version is trunk rev 15242

### Comments

Posted by Ben Scholzen (dasprid) on 2009-04-29T01:32:19.000+0000

Matthew, I'd suggest a "module" key in the root options whose children are handled like module bootstrap options in the parent. The current way of defining module-specific bootstrap options should stay, but this one would be a fallback which can be used when somebody has modules which conflict with our reserved keys.

Posted by Sebastian Krebs (kingcrunch) on 2009-04-29T06:26:35.000+0000

I would like to see this :) Meanwhile I found that the other behaviour -- the "second config file" -- is quite useful, even if it's only usable when the second constructor argument is a string. This allows something like this (the example code and the configs/dev.xml sample were not preserved). Currently it's not working as expected, caused by another bug.

Posted by Ben Scholzen (dasprid) on 2009-04-29T06:38:32.000+0000

The actual idea behind the config option-key was that you can specify specific options via an array, but additionally load a config file. But nice that it also finds other useful cases.

Posted by Sebastian Krebs (kingcrunch) on 2009-04-29T07:53:12.000+0000

If you pass an array to the constructor the "hack" won't work, because it loads a config file only once (in _loadConfig()). It can confuse somebody if he later changes a string-to-a-file to an array and wonders why the "additional file" is not loaded anymore.

Posted by Marek (xorock) on 2009-05-31T04:43:34.000+0000

My question is: why are you working with arrays, not objects? I think _loadConfig() should additionally write a protected $_config variable with a reference to the loaded $config, or return the merged data. Currently there is no way to get the configuration file in its original state - there is only the getOptions() method, with the options already transformed to an array. I think some of us might need this for further processing, and pushing data:

    protected function _initConfig()
    {
        $config = new Zend_Config($this->getOptions());
        Zend_Registry::set('config', $config);
    }

back to Zend_Config is really stupid.
Posted by Matthew Weier O'Phinney (matthew) on 2009-05-31T09:19:28.000+0000 We're using arrays as it makes it easier to do comparisons, and to manipulate the keys in order to do comparisons; additionally, it offers better compatibility throughout various framework components. First, in Zend\_Application, keys are case insensitive. However, because Zend\_Config uses object properties, the keys are case sensitive. This makes it difficult, if not impossible, to test accurately for the existence of given keys. As a result, in our tests, we found there were many cases where existing keys were simply not matched. Casting to an array and storing as an array internally ensures we can transform those keys and do case insensitive matching. Second, many framework accept an array of options to the constructor and/or a factory. Many also accept Zend\_Config objects, but in all such cases, also accept arrays. It's thus easier to simply cast to an array within Zend\_Application, as the arrays will be able to be used with any component. Finally, the reason that you can pass a "config" key to the Zend\_Application options is to allow you to provide local overrides of the keys in your configuration file. These are, clearly, going to be passed as an array -- which means that any value in $\_config as you propose would be non-representative of the actual configuration..
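For readers following the thread, a minimal sketch of the two invocation styles being discussed. The file paths and the 'phpSettings' override are hypothetical, assuming the usual Zend Framework 1.x APPLICATION_ENV / APPLICATION_PATH conventions.

    // Hypothetical bootstrap snippet (Zend Framework 1.x style).
    require_once 'Zend/Application.php';

    // 1) Pass a config file path: the constructor runs _loadConfig() on it.
    $app = new Zend_Application(
        APPLICATION_ENV,
        APPLICATION_PATH . '/configs/application.ini'
    );

    // 2) Pass an array with a 'config' key: setOptions() loads that file,
    //    and the other array entries override the values read from it.
    $app = new Zend_Application(APPLICATION_ENV, array(
        'config'      => APPLICATION_PATH . '/configs/application.ini',
        'phpSettings' => array('display_errors' => 1),
    ));

    $app->bootstrap()->run();

With the second form, the entries passed alongside 'config' act as local overrides of the values read from the file, which is the behaviour Matthew describes above.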
https://framework.zend.com/issues/browse/ZF-6455
CC-MAIN-2016-40
refinedweb
791
53.31
![if !(IE 9)]> <![endif]> Modern computer technologies, hardware and software solutions all make it much easier and faster for us to do various kinds of scientific research. Computer simulation is often the only way to verify many theories. Scientific software has its own peculiarities. For instance, it's often heavily tested yet poorly documented. But anyway, software is written by humans, and humans tend to make mistakes. When found in scientific applications, programming mistakes could cast doubt on the results of much of the research work. In this article, we will look at dozens of defects found in the code of the NCBI Genome Workbench software package. NCBI Genome Workbench offers researchers a rich set of integrated tools for studying and analyzing genetic data. Users can explore and compare data from multiple sources including the NCBI (National Center for Biotechnology Information) databases or the user's own private data. As I already said, scientific software is usually richly covered by unit tests. When checking this project, I excluded 85 directories with test files from analysis, which makes about one thousand files. I guess this has to do with the test requirements for the various complex algorithms devised individually for each scientific study. That said, the rest of the code (other than the tests) is not as high-quality as one would like it to be. Well, this actually applies to any project that doesn't use static analysis yet :). The data for this review (or I'd say research) was collected using PVS-Studio, a static code analyzer for C/C++/C#/Java. Using our bug database, which currently includes more than 12 thousand select samples, we can detect and describe specific coding patterns that lead to numerous errors. For example, we did the following studies: With this project, we have discovered a new pattern. It has to do with the usage of numerals 1 and 2 in variable names such as file1 and file2, and the like. Such variables are very easy to mix up. Being a special case of typos, these defects all result from programmers' wish to work with variables sharing the same name save the ending numerals 1 and 2. I'm running a bit ahead of the story, but I've got to tell you that all of the patterns we examined in the studies mentioned above are found in this project's code too :D. Let's start with the first example from Genome Workbench: V501 There are identical sub-expressions '(!loc1.IsInt() &&!loc1.IsWhole())' to the left and to the right of the '||' operator. nw_aligner.cpp 480 CRef<CSeq_align> CNWAligner::Run(CScope &scope, const CSeq_loc &loc1, const CSeq_loc &loc2, bool trim_end_gaps) { if ((!loc1.IsInt() && !loc1.IsWhole()) || (!loc1.IsInt() && !loc1.IsWhole())) { NCBI_THROW(CException, eUnknown, "Only whole and interval locations supported"); } .... } You can see two variables, loc1 and loc2, and a typo: the loc2 variable is not used because loc1 is used one more time instead. Another example: V560 A part of conditional expression is always false: s1.IsSet(). valid_biosource.cpp 3073 static bool s_PCRPrimerSetLess(const CPCRPrimerSet& s1, const CPCRPrimerSet& s2) { if (!s1.IsSet() && s1.IsSet()) { return true; } else if (s1.IsSet() && !s2.IsSet()) { return false; } else if (!s1.IsSet() && !s2.IsSet()) { return false; } else if (s1.Get().size() < s2.Get().size()) { return true; } else if (s1.Get().size() > s2.Get().size()) { return false; } else { ..... } The programmer mixed up the variables s1 and s2 in the very first line. 
The name of the function suggests that it does comparison. But errors like that may come up just anywhere because if you name your variables Number1 and Number2, you are almost guaranteed to mess them up later. The more often these names are repeated in a function, the higher the risk. V501 There are identical sub-expressions to the left and to the right of the '!=' operator: bd.bit_.bits[i] != bd.bit_.bits[i] bm.h 296 bool compare_state(const iterator_base& ib) const { .... if (this->block_type_ == 0 { if (bd.bit_.ptr != ib_db.bit_.ptr) return false; if (bd.bit_.idx != ib_db.bit_.idx) return false; if (bd.bit_.cnt != ib_db.bit_.cnt) return false; if (bd.bit_.pos != ib_db.bit_.pos) return false; for (unsigned i = 0; i < bd.bit_.cnt; ++i) { if (bd.bit_.bits[i] != bd.bit_.bits[i]) return false; } } .... } I figure that after all those checks, the bits arrays of the objects bd.bit_ and ib_db.bit_ should be the same size. That's why the developer wrote one loop for element-by-element comparison of the bits arrays. But they mistyped the name of one of the objects under comparison. As a result, the objects may incorrectly compare equal in certain situations. That's a nice example worth mentioning in the article "The Evil within the Comparison Functions". V501 There are identical sub-expressions 'CFieldHandler::QualifierNamesAreEquivalent(field, kFieldTypeSeqId)' to the left and to the right of the '||' operator. field_handler.cpp 152 bool CFieldHandlerFactory::s_IsSequenceIDField(const string& field) { if ( CFieldHandler::QualifierNamesAreEquivalent(field, kFieldTypeSeqId) || CFieldHandler::QualifierNamesAreEquivalent(field, kFieldTypeSeqId)) { return true; } else { return false; } } It looks like one of the checks is redundant. I haven't found any other variables with a name similar to kFieldTypeSeqId. And using the "||" operator could still invoke one extra call to the function, thus slowing down the program. Here are two more cases of the same kind to be examined: V766 An item with the same key 'kArgRemote' has already been added. blast_args.cpp 3262 void CBlastAppArgs::x_IssueWarningsForIgnoredOptions(const CArgs& args) { set<string> can_override; .... can_override.insert(kArgOutputFormat); can_override.insert(kArgNumDescriptions); can_override.insert(kArgNumAlignments); can_override.insert(kArgMaxTargetSequences); can_override.insert(kArgRemote); // <= can_override.insert(kArgNumThreads); can_override.insert(kArgInputSearchStrategy); can_override.insert(kArgRemote); // <= can_override.insert("remote_verbose"); can_override.insert("verbose"); .... } The analyzer has detected the addition of two identical values to a set container. As you know, this type of container can store only unique values and doesn't permit duplicate elements. Code like that is often written using the copy-paste technique. What we are dealing with here is probably just an extra element, but it could also be a copy that was to be renamed to make a new variable. Deleting an extra insert call can help optimize the code a bit, but that's not a big deal. A much more serious concern is that this could be a missing element of the set. V523 The 'then' statement is equivalent to the subsequent code fragment. vcf_reader.cpp 1105 bool CVcfReader::xAssignFeatureLocationSet(....) { .... if (data.m_SetType == CVcfData::ST_ALL_DEL) {); //-1 for 0-based, //another -1 for inclusive end-point ( i.e. 
[], not [) ) pFeat->SetLocation().SetInt().SetTo( data.m_iPos -1 + data.m_strRef.length() - 1); pFeat->SetLocation().SetInt().SetId(*pId); } return true; } //default: For MNV's we will use the single starting point //NB: For references of size >=2, this location will not //match the reference allele. Future Variation-ref //normalization code will address these issues, //and obviate the need for this code altogether.); pFeat->SetLocation().SetInt().SetTo( data.m_iPos -1 + data.m_strRef.length() - 1); pFeat->SetLocation().SetInt().SetId(*pId); } return true; } The function contains large and absolutely identical blocks of code, while the comments are different. This code is written in a non-optimal and confusing way and may be faulty. Here's the full list of suspicious if-else statements: V597 The compiler could delete the 'memset' function call, which is used to flush 'pass". V597 The compiler could delete the 'memset' function call, which is used to flush 'answer' object. The memset_s() function should be used to erase the private data. challenge.c 561 static TDSRET tds7_send_auth(....) { .... /* for security reason clear structure */ memset(&answer, 0, sizeof(TDSANSWER)); return tds_flush_packet(tds); } That's not the only snippet with "security" comments. Judging by those comments, the authors do care about security, so I'm including the complete - and pretty long - list of all such defects detected: V534 It is likely that a wrong variable is being compared inside the 'for' operator. Consider reviewing 'i'. taxFormat.cpp 569 void CTaxFormat::x_LoadTaxTree(void) { .... for(size_t i = 0; i < alignTaxids.size(); i++) { int tax_id = alignTaxids[i]; .... for(size_t j = 0; i < taxInfo.seqInfoList.size(); j++) { SSeqInfo* seqInfo = taxInfo.seqInfoList[j]; seqInfo->taxid = newTaxid; } .... } .... } I suspect that the i variable wasn't really meant to be used in the inner loop's condition. It got there by mistake and should have been j instead. V535 The variable 'i' is being used for this loop and for the outer loop. Check lines: 302, 309. sls_alp.cpp 309 alp::~alp() { .... if(d_alp_states) { for(i=0;i<=d_nalp;i++) // <= { if(i<=d_alp_states->d_dim) { if(d_alp_states->d_elem[i]) { for(i=0;i<=d_nalp;i++) // <= { .... .... } Two twin nested loops resetting the global counter to zero - that doesn't look right at all. The authors should take a good look at what's going on here. V520 The comma operator ',' in array index expression '[-- i2, -- k]'. nw_spliced_aligner16.cpp 564 void CSplicedAligner16::x_DoBackTrace ( const Uint2* backtrace_matrix, CNWAligner::SAlignInOut* data, int i_global_max, int j_global_max) { .... while(intron_length < m_IntronMinSize || (Key & donor) == 0) { Key = backtrace_matrix[--i2, --k]; ++intron_length; data->m_transcript.push_back(eTS_Intron); } .... } I'll tell you right off that there's no apparent error here (at least for now, lol). Take a look at this line: Key = backtrace_matrix[--i2, --k]; The word 'matrix' and double indexing could make you think that this is a two-dimensional array, but it's not so. It's a regular pointer to an array of integers. But it was not for nothing that we designed the V520 diagnostic. Programmers do tend to get confused when indexing into two-dimensional arrays. Here, the author simply wanted to save on one extra line of code, but why not write it like this then: --i2; Key = backtrace_matrix[--k]; V661 A suspicious expression 'A[B == C]'. Probably meant 'A[B] == C'. 
ncbi_service_connector.c 180 static EHTTP_HeaderParse s_ParseHeader(const char* header, ....) { .... if (sscanf(header, "%u.%u.%u.%u%n", &i1, &i2, &i3, &i4, &n) < 4 || sscanf(header + n, "%hu%x%n", &uuu->port, &tkt, &m) < 2 || (header[m += n] && !(header[m] == '$') && !isspace((unsigned char)((header + m) [header[m] == '$'])))) { break/*failed - unreadable connection info*/; } .... } This is another snippet where I had a hard time figuring out what was going on :D. The isspace() function is used to check the character with the m index, but if that character is '$', then what is passed to the function is the character with the index m + 1. However, the check for '$' has been already done before. Perhaps there's no error here, but this code could definitely be rewritten in a clearer way. V557 Array overrun is possible. The 'row' index is pointing beyond array bound. aln_reader.cpp 412 bool CAlnReader::x_IsGap(TNumrow row, TSeqPos pos, const string& residue) { if (m_MiddleSections.size() == 0) { x_CalculateMiddleSections(); } if (row > m_MiddleSections.size()) { return false; } if (pos < m_MiddleSections[row].first) { .... } .... } This one is serious. The correct check of the row index should look like this: if (row >= m_MiddleSections.size()) { return false; } Otherwise, there's a risk of accessing the data beyond the MiddleSections vector. There are plenty of defects like that: V570 The 'm_onClickFunction' variable is assigned to itself. alngraphic.hpp 103 void SetOnClickFunctionName(string onClickFunction) { m_onClickFunction = m_onClickFunction; } No comment. You can only feel for users clicking again and again to no avail. Two more cases where a variable is assigned to itself: V763 Parameter 'w1' is always rewritten in function body before being used. bmfunc.h 5363 /// Bit COUNT functor template<typename W> struct bit_COUNT { W operator()(W w1, W w2) { w1 = 0; BM_INCWORD_BITCOUNT(w1, w2); return w1; } }; A function that has its argument overwritten right after the invocation may confuse the developers. This code should be reviewed. V688 The 'm_qsrc' function argument possesses the same name as one of the class members, which can result in a confusion. compart_matching.cpp 873 class CElementaryMatching: public CObject { .... ISequenceSource * m_qsrc; .... void x_CreateIndex (ISequenceSource *m_qsrc, EIndexMode index_more, ....); void x_CreateRemapData(ISequenceSource *m_qsrc, EIndexMode mode); void x_LoadRemapData (ISequenceSource *m_qsrc, const string& sdb); .... }; Three class functions at once have an argument of the same name as a class field. This may lead to mistakes in the function bodies: the programmer may think they're working with a class member, while in reality they are altering the local variable's value. V614 Uninitialized variable 'm_BitSet' used. SnpBitAttributes.hpp 187 /// SNP bit attribute container. class CSnpBitAttributes { public: .... private: /// Internal storage for bits. Uint8 m_BitSet; }; inline CSnpBitAttributes::CSnpBitAttributes(Uint8 bits) : m_BitSet(bits) { } inline CSnpBitAttributes::CSnpBitAttributes(const vector<char>& octet_string) { auto count = sizeof(m_BitSet); auto byte = octet_string.end(); do m_BitSet = (m_BitSet << 8) | *--byte; while (--count > 0); } One of the constructors is handling the m_BitSet variable in an unsafe manner. The problem is that this variable has not been initialized yet. Its "garbage" value will be used at the first loop iteration, and only then will it be initialized. 
This is a grave mistake, which could lead to undefined behavior. V603 The object was created but it is not being used. If you wish to call constructor, 'this->SIntervalComparisonResult::SIntervalComparisonResult(....)' should be used. compare_feats.hpp 100 //This struct keeps the result of comparison of two exons struct SIntervalComparisonResult : CObject { public: SIntervalComparisonResult(unsigned pos1, unsigned pos2, FCompareLocs result, int pos_comparison = 0) : m_exon_ordinal1(pos1), m_exon_ordinal2(pos2), m_result(result), m_position_comparison(pos_comparison) {} SIntervalComparisonResult() { SIntervalComparisonResult(0, 0, fCmp_Unknown, 0); } .... }; I haven't seen errors like this for quite a while, but the problem still persists. The point here is that calling a parameterized constructor in a way like that leads to creating and deleting a temporary object while leaving the class fields uninitialized. The call to the other constructor should be done using the initializer list (see Delegating constructor). V591 Non-void function should return a value. bio_tree.hpp 266 /// Recursive assignment CBioNode& operator=(const CBioNode& tree) { TParent::operator=(tree); TBioTree* pt = (TBioTree*)tree.GetParentTree(); SetParentTree(pt); } The analyzer says the overloaded operator lacks this single line: return *this; V670 The uninitialized class member 'm_OutBlobIdOrData' is used to initialize the 'm_StdOut' member. Remember that members are initialized in the order of their declarations inside a class. remote_app.hpp 215 class NCBI_XCONNECT_EXPORT CRemoteAppResult { public: CRemoteAppResult(CNetCacheAPI::TInstance netcache_api, size_t max_inline_size = kMaxBlobInlineSize) : m_NetCacheAPI(netcache_api), m_RetCode(-1), m_StdOut(netcache_api, m_OutBlobIdOrData, m_OutBlobSize), m_OutBlobSize(0), m_StdErr(netcache_api, m_ErrBlobIdOrData, m_ErrBlobSize), m_ErrBlobSize(0), m_StorageType(eBlobStorage), m_MaxInlineSize(max_inline_size) { } .... }; This snippet triggers 3 warnings at once. The order in which the class fields are initialized is the same order in which they are declared rather than the order in which they were added to the initializer list. This error typically occurs because not all programmers know or remember about this rule. And it's the initializer list here that has the wrong order, which looks as if it were random order. V746 Object slicing. An exception should be caught by reference rather than by value. cobalt.cpp 247 void CMultiAligner::SetQueries(const vector< CRef<objects::CBioseq> >& queries) { .... try { seq_loc->SetId(*it->GetSeqId()); } catch (objects::CObjMgrException e) { NCBI_THROW(CMultiAlignerException, eInvalidInput, (string)"Missing seq-id in bioseq. " + e.GetMsg()); } m_tQueries.push_back(seq_loc); .... } When catching exceptions by value, some of the information about the exception may be lost since a new object is created. A much better and safer practice is to catch exceptions by reference. Other similar cases: V779 Unreachable code detected. It is possible that an error is present. merge_tree_core.cpp 627 bool CMergeTree::x_FindBefores_Up_Iter(....) { .... FirstFrame->Curr = StartCurr; FirstFrame->Returned = false; FirstFrame->VisitCount = 0; FrameStack.push_back(FirstFrame); while(!FrameStack.empty()) { .... if(Rel == CEquivRange::eAfter) { Frame->Returned = false; FrameStack.pop_back(); continue; } else if(Rel == CEquivRange::eBefore) { .... continue; } else { if(Frame->VisitCount == 0) { .... continue; } else { .... 
continue; } } Frame->Returned = false; // <= FrameStack.pop_back(); continue; } // end stack loop FirstFrame->ChildFrames.clear(); return FirstFrame->Returned; } The conditional operator is written in such a way that absolutely all of its branches end with a continue statement. This renders some of the lines in the while loop unreachable. And those lines do look strange. The problem must have occurred after refactoring and now calls for careful code review. A few more cases: V519 The 'interval_width' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 454, 456. aln_writer.cpp 456 void CAlnWriter::AddGaps(....) { .... switch(exon_chunk->Which()) { case CSpliced_exon_chunk::e_Match: interval_width = exon_chunk->GetMatch(); case CSpliced_exon_chunk::e_Mismatch: interval_width = exon_chunk->GetMismatch(); case CSpliced_exon_chunk::e_Diag: interval_width = exon_chunk->GetDiag(); genomic_string.append(....); product_string.append(....); genomic_pos += interval_width; product_pos += interval_width/res_width; break; .... } .... } The interval_width variable is overwritten several times as the case branches lack break statements. Though classic, it's still a bad bug to have in one's code. V571 Recurring check. The 'if (m_QueryOpts->filtering_options)' condition was already verified in line 703. blast_options_local_priv.hpp 713 inline void CBlastOptionsLocal::SetFilterString(const char* f) { .... if (m_QueryOpts->filtering_options) // <= { SBlastFilterOptions* old_opts = m_QueryOpts->filtering_options; m_QueryOpts->filtering_options = NULL; SBlastFilterOptionsMerge(&(m_QueryOpts->filtering_options), old_opts, new_opts); old_opts = SBlastFilterOptionsFree(old_opts); new_opts = SBlastFilterOptionsFree(new_opts); } else { if (m_QueryOpts->filtering_options) // <= m_QueryOpts->filtering_options = SBlastFilterOptionsFree(m_QueryOpts->filtering_options); m_QueryOpts->filtering_options = new_opts; new_opts = NULL; } .... } The else branch obviously needs revising. I've got a couple ideas as to what the authors might have intended to do with the m_QueryOpts->filtering_options pointer, but the code is still pretty obscure. Please, guys, do make it clearer! Bad luck comes in threes, you know: V739 EOF should not be compared with a value of the 'char' type. The 'linestring[0]' should be of the 'int' type. alnread.c 3509 static EBool s_AfrpInitLineData( .... char* linestring = readfunc (pfile); .... while (linestring != NULL && linestring [0] != EOF) { s_TrimSpace (&linestring); .... } .... } Characters to be tested against EOF must not be stored in variables of type char; otherwise, there's a risk that the character with the value 0xFF (255) will turn into -1 and be interpreted as end-of-file. The implementation of the readfunc function should also be checked (just in case). V663 Infinite loop is possible. The 'cin.eof()' condition is insufficient to break from the loop. Consider adding the 'cin.fail()' function call to the conditional expression. ncbicgi.cpp 1564 typedef std::istream CNcbiIstream; void CCgiRequest::Serialize(CNcbiOstream& os) const { .... CNcbiIstream* istrm = GetInputStream(); if (istrm) { char buf[1024]; while(!istrm->eof()) { istrm->read(buf, sizeof(buf)); os.write(buf, istrm->gcount()); } } } The analyzer has detected a potential error that could leave you running over an infinite loop. If the data can't be read, a call to the eof() function will be returning false all the time. 
To guarantee that the loop will terminate in this case, you need to additionally check the value returned by fail(). V502 Perhaps the '?:' operator works in a different way than it was expected. The '?:' operator has a lower priority than the '&&' operator. ncbi_connutil.c 1135 static const char* x_ClientAddress(const char* client_host, int/*bool*/ local_host) { .... if ((client_host == c && x_IsSufficientAddress(client_host)) || !(ip = *c && !local_host ? SOCK_gethostbyname(c) : SOCK_GetLocalHostAddress(eDefault)) || SOCK_ntoa(ip, addr, sizeof(addr)) != 0 || !(s = (char*) malloc(strlen(client_host) + strlen(addr) + 3))) { return client_host/*least we can do :-/*/; } .... } Note the expression: !local_host ? SOCK_gethostbyname(c) : SOCK_GetLocalHostAddress(eDefault) It won't be evaluated the way the programmer expected because the entire expression looks like this: ip = *c && !local_host ? SOCK_gethostbyname(c) : SOCK_GetLocalHostAddress(...) The precedence of the && operator is higher than that of ?:. Because of that, the code executes differently from what was intended. V561 It's probably better to assign value to 'seq' variable than to declare it anew. Previous declaration: validator.cpp, line 490. validator.cpp 492 bool CValidator::IsSeqLocCorrectlyOrdered(const CSeq_loc& loc, CScope& scope) { CBioseq_Handle seq; try { CBioseq_Handle seq = scope.GetBioseqHandle(loc); } catch (CObjMgrException& ) { // no way to tell return true; } catch (const exception& ) { // no way to tell return true; } if (seq && seq.GetInst_Topology() == CSeq_inst::eTopology_circular) { // no way to check if topology is circular return true; } return CheckConsecutiveIntervals(loc, scope, x_IsCorrectlyOrdered); } Because the programmer declared a new variable seq inside the try/catch section, the other seq variable will remain uninitialized and be used further in the code. V562 It's odd to compare a bool type value with a value of 0: (((status) & 0x7f) == 0) != 0. ncbi_process.cpp 111 bool CProcess::CExitInfo::IsExited(void) const { EXIT_INFO_CHECK; if (state != eExitInfo_Terminated) { return false; } #if defined(NCBI_OS_UNIX) return WIFEXITED(status) != 0; #elif defined(NCBI_OS_MSWIN) // The process always terminates with exit code return true; #endif } It seemed nothing could go wrong, but WIFEXITED turned out to be a macro expanding into the following: return (((status) & 0x7f) == 0) != 0; It turns out the function returns the opposite value. There was one more function like that: V595 The 'dst_len' pointer was utilized before it was verified against nullptr. Check lines: 309, 315. zlib.cpp 309 bool CZipCompression::CompressBuffer( const void* src_buf, size_t src_len, void* dst_buf, size_t dst_size, /* out */ size_t* dst_len) { *dst_len = 0; // Check parameters if (!src_len && !F_ISSET(fAllowEmptyData)) { src_buf = NULL; } if (!src_buf || !dst_buf || !dst_len) { SetError(Z_STREAM_ERROR, "bad argument"); ERR_COMPRESS(48, FormatErrorMessage("CZipCompression::CompressBuffer")); return false; } .... } The dst_len pointer is dereferenced at the very beginning of the function and is later checked for null. This error will cause undefined behavior if dst_len is found to be equal to nullptr. V590 Consider inspecting the 'ch != '\0' && ch == ' '' expression. The expression is excessive or contains a misprint. cleanup_utils.cpp 580 bool Asn2gnbkCompressSpaces(string& val) { .... while (ch != '\0' && ch == ' ') { ptr++; ch = *ptr; } .... 
} The loop termination condition depends only on whether or not ch is a space character. It means the expression can be simplified as follows: while (ch == ' ') { .... } Scientific software is already helping us make new discoveries and will continue to do so. So let's hope we won't miss the most important ones just because of some trivial typo. I encourage the developers of the NCBI Genome Workbench project to contact us so that we could share with them the full analysis report by PVS-Studio. I hope this small research of ours will help fix a lot of bugs and make the project more reliable. Don't hesitate to try PVS-Studio with your own projects if you haven't done so yet. You'll probably like ...
https://www.viva64.com/en/b/0591/
CC-MAIN-2019-09
refinedweb
3,608
50.33
Guest Post by Willis Eschenbach Sounds like a scam, huh? But it’s real. Let me explain how people (no, not you or me, don’t be foolish) can make a guaranteed 29% return on their investment. However, to make it clear, I’ll need to take a short digression. I ran across a National Geographic article on where the world gets its electricity. Here are their figures: Figure 1. World electricity production by fuel type. Renewables (defined by AGW activists as solar-, geothermal-, wind-, and biomass-generated electricity, but not hydroelectricity) are 2.7% of the total electricity use. Data from National Geographic. Despite that history, you know how they say on those TV commercials, “But wait! There’s even more!”? In this case, it’s “But wait! There’s even less!” The reason that its even less is that Figure 1 just shows electricity. It doesn’t show total energy consumed, which is a much larger number. Total global energy consumption is shown in Figure 2. Figure 2. World energy consumption by source. “Renewables” are solar, geothermal, wind, and biomass. Note that the traditional use of firewood for cooking is not included. Data from the BP Statistical Review So although renewables have (finally) gotten to 2.7% of the electricity production, they still only represent 1.3% of the global energy consumption. And this is with heaps of subsidies. And I don’t mean just a bit of money to get them over the hump. Huge subsidies. Because of the total failure of renewables to penetrate the market, the AGW supporters are desperately throwing money at renewable technologies. The New York Times showed a graphic for one such power plant in California. Their graphic is reproduced below as Figure 3. Figure 3. Federal and State Subsidies for the California Valley Solar Ranch. Unfortunately, the Times didn’t really discuss the business implications of this chart, so let me remedy that omission. First, how much money did the investors have to put in? Since the project will start earning money once the key is turned and the market is guaranteed, the investors only had to put up the total capital outlay of $1.6 billion. Less, of course, the generous government grant of nearly half a billion dollars. Total invested, therefore, is $1,170 million dollars. On that money, the investors stand to make a net present value of $334 million dollars … which means that due to the screwing of the taxpayers and ratepayers, a few very wealthy investors are GUARANTEED A RETURN OF 29% ON THEIR INVESTMENT!!! How is this fair in any sane universe? AGW supporters talk about the 1% having too much money, and here the same folks are shoveling the money into the one percenters’ pockets. The 1% weren’t rich enough already, so I have to foot the bill for them to get a GUARANTEED 29% RETURN on their investment? Note also that a huge part of the money, some $462 million dollars, is coming from the California electricity ratepayers, including yours truly, through increased charges for electricity. This means that these solar scam artists are being allowed to sell their power at 50% ABOVE MARKET PRICES!!! Not just a little bit above market. Fifty percent above the market price! Where is the California Public Utilities Commission whose job is to protect the consumer? Oh, I see … the are the ones who agreed to the 50% above market rate hike … for shame. Pardon my screaming, but this insanity angrifies my blood mightily. 
Ripping off both the consumer and the taxpayer to allow millionaires to make a guaranteed 29% return on a not-ready-for-market technology, and charging ratepayers 50% above market for the electricity? That is reprehensible and indefensible. In particular, the rate hikes hit the poor much harder than the wealthy, so we are billing the poor to line the pockets of the 1% … and all this in the name of enlightened carbon fears. A few last numbers to consider. Without the layers and layers of subsidies, the investors would have had to put in $1.6 billion, and they would have suffered a loss of $1.1 billion dollars. The investors wouldn’t lose just a little, they’d lose their shirts, their pants and their ties … and seventy percent of the money they put in. That’s how far this technology is from being marketable. Not just a little ways short of profitability. A long, long, long ways from being marketable, more than a billion dollars short of making a profit.. You can see why I’m screaming … the inmates have taken over the asylum. Steven Chu, the Secretary of Energy, says we need more successful green projects in order to survive the depression … me, I fear we won’t survive Secretary Chu. I know we won’t survive if we follow Chu’s brilliant plan for ‘successful green projects’ that do nothing but line the pockets of the 1% with billions in subsidies. That path is the poster child for the concept “unsustainable”, and Secretary Chu is the poster child for the brilliant idiot. He is undoubtedly a genius in his scientific field, but whoever unlocked his ivory tower and let him loose on the business world has some serious explaining to do. Here is the problem with Energy Secretary Chu. His failures are bad enough. But his successes are lethal. w. 210 thoughts on “Make 29% On Your Money, Guaranteed!” Our prosperity is getting chewed up. Willis, you are right on target. I am screaming with you although I don’t live in CA. It would seem to me that the electricity users could choose which sources of juice they buy. I will take hydro, coal and nuclear, thank you. The people who are adamant about saving the planet can buy the “renewables”. I think that would be very fair, “Here is the problem with Energy Secretary Chu. His failures are bad enough. But his successes are lethal. ” Are you just now figuring this out??????????? [REPLY—Nope. Not sure why you’d think so. -w.] If they are getting a large benefit ($205 million?) from the government-guaranteed lower interest rate, then they are financing a huge part of the capital outlay with debt instruments. So the equity investment that gets the return of $334 million will be much smaller than $1.6 billion. The large benefit provided by the government-guarantee also means that these projects will try to max out on the amount of debt they use to finance the project. Which is probably why they are going bankrupt so fast and so often (when the economics don’t work quite as expected). And when they go backrupt, who picks up the loss (when there is a government guarantee). it’s game on in australia, and it will end badly:.” ——————————————————————————– And we thought “green energy” was a bad investment! Well, okay, it is for most of us that aren’t Kennedys or most any other Zero (Obama) backer you could name. Of course, there are folks from the right side of the aisle that take advantage of government largess in pursuit of zero population growth (cause that’s what it comes down to!) 
but Dims sure do lap this stuff up according to Schweizer’s new book “Throw Them All Out”. Umm, wait a minute, Willis. Couple of quick questions–do ratepayers pay the higher rates or do the utilities make lower profits? Second, SunPower is a publicly traded company–couldn’t you buy shares in that company and participate in the extravagant returns? (Wouldn’t recommend it–they’re on a list of seven companies ‘most likely to go bankrupt.’) Lastly, have you ever done (or seen) a similar breakout on funding for a natural gas/coal/nuclear plant? went on the level pay plan for electricity in California. They take your past year’s total bill and charge you 1/11 of that per month so your bill is less taxing during peak months. I have done this for the past two years. During this time my usage has not changed significantly, all my kids are still at home etc… My rates have gone up so much in the last two years, I recieved a settlement bill in addition to my regular level pay bill of over $750.00. They gave me one month to come up with $440.00 of it and I have a payment agreement that will allow me to pay the rest off over the next six months in addition to my regular payments. I live in an older mobile home that isn. I am sure this makes the Greens giddy with joy that I am being punished for my crappy inneficient use of energy and I get to work two jobs to pay for it. Power to the people. :) Willis, as usual, an excellent commentary, just one reservation. Should “billing the poor to line the pockets of the 1%” not read “bilking the poor to line the pockets of the 1%”? This is a very small taste of a very large meal that true believers think is what we need. “Note also that a huge part of the money, some $462 billion dollars…” Should be million, with an m. [Thanks, Edvin, fixed. -w.]. This is my fundamental annoyance about scientists in general; they step outside their area of expertise. Scientists know very little about economics and politics, yet we revere them as saviours of the world. The same goes for politicians who try to step into the scientific world (Gore?), and promote propaganda to the average person. In Australia we have crap-loads of uranium, heaps of the stuff. Yet how many nuclear power plants do we have? One! And its tiny and only used for research. All because it was decided in the 70s by flower power hippies that nuclear power is immoral and you must never speak of it. But it doesn’t matter as we’ll have a carbon tax in about a year which will solve all our environmental problems and make solar power competitive. What is the cost to maintain this equipment compared to a fossil fuel plant? How is generation calculated? The sun does’nt shine every day and winter days are shorter. Biased on current experience, can this project be expected to have a 25 year pay-back life? Glad I’m not a California rate payer like you! I like the 1% rhetoric… well not really. The Occucommies would probably hang you if you post these facts at one of the OWS rallies. Such as life when you deal with socialists. Isn’t hydro renewable?? Greenies only seem interested in wind for some reason cronyism in action. Those green tech “1%ers” were primary donors to the Obama and probably various congressional and state democrat campaigns. In return the politicians give them lots and lots of taxpayer dollars. If the “old media” was not so enamored of the green movement and leftism in general we wouldn’t hear about these things only in the blogs and a good bit of the insanity would have to stop. 
Hopefully enough folks are going to the new media that the insanity will be stopped soon in spite of the negligence of the MSM. The division of rate payer advocates at the CPUC felt that the project (and a PV one recently approved) were not the best deal for CA ratepayer- “Calif. Consumer Advocate Division Decries CPUC Approval of “Overpriced” CSP Project POWERnews The California Public Utilities Commission’s (CPUC’s) approval on Thursday of Abengoa Solar’s 250-MW Mojave Solar concentrating solar power (CSP) parabolic trough facility in San Bernardino County—the second “overpriced renewable contract” approved by the CPUC in recent weeks—was disappointing, the regulatory commission’s Division of Ratepayer Advocates (DRA) said in a statement.” “Let them have everything.” (Then they’ll have no excuse when they fail.) It’s worse than you thought. The company is SunPower (SWPR) and this only one of their efforts. There’s been a lot of shareholder value removed. Everybody is getting the short end of the stick. Paying $1.6 billion for a plant that will only generate $925 million dollars worth of energy does not seem like a very winning proposition… in fact, the dollar value of the subsidies ($1.4 billion) is over 50% greater than the value of all the electricity produced. This project is much less an energy production production operation than it is a government subsidy production operation. How does it work for the wind farms that have to turn the generators off at night so the bats won’t get killed? Do the owners or taxpayers pay for that? Don’t worry – Chu’s plan-b is painting his roof white .” Yes, but there’s nothing an environmentalist hates more than damming a river, so they don’t consider this a good form of renewable energy. Does anyone know where the new Republican front runner stands on CAGW? He used to be a BELIEVER in CO2 and warming. Where does he stand now? This sound like one of cheaper “job programs”. It is a really long list of available subsidies. I mean, it is way beyond any kind of rational level. How did all these uneconomic subsidies get approved in the first place? It is just begging to be reviewed by a government which is broke (even a government with money to burn should not be able to stomach this kind of irrationality). Since it takes energy to create those solar panel “power plant” and since, according to the graphic above, 68% of that energy create CO2 (all fossil sources). Could we assume that around 68% of the money “spent” on that project produced CO2 even before the solar panels started producing electricity? And what about pollution from creating those high-tech devices? And we did not even calculate the fossil fuel plants running while the sun is not shining! And they have the audacity of calling this scam “green” that will “save” the economy! This is insane… The world has gone mad! I feel the “1.3%” for Renewables in Figure 2 is too small, since many developing countries still depend heavily on woods for energy. AFAIK, it was (though stats for the year 2000) 80-90% in Nepal and Cambodia, ca. 50% in Viet Nam, ca. 40% in India, ca. 15% in Thailand, ca. 8% in China. Hard to believe these countries have significantly reduced the percentage in the past 10 years. Are the woods excluded from “Renewables” in Figure 2? It may be OK if Figure 2 refers to the “energy for industry” only. In the UK we have real problems with the elderly sufferingfrom fuel poverty and having the choice between starving or feezing to death. 
So a new bumper sticker is needed: KILL A PENSIONER – BUY A SOLAR PANEL WIllis, you are spot on to be screaming angry about this. Here in the UK despite two decades of government subsidy wind power delivers just 1.4% of our electricity (EXCELON official figures for the year April 2010 to March 2011. I have plotted them out here:). Yet back in the spring Chris Huhne, our hapless Energy Minister, told Parliament proudly that wind power was contributing 7%, a figure five times larger than the reality. How did he get away with this grossly misleading statistic? Well, either by design or through ignorance, he had used the ‘aggregate name plate’ figure. This is the totally fictitious power output tgat would be available from all wind farms if the wind blew everywhere at optimum speed and all the time, on and on for ever! Nobody picked him up on this so our elected representatives were successfully indoctrinated with the myth that UK wind power was a rollicking success when, in fact, it is an abject failure. So with wind farms operating at only one fifth of design capacity and, additionally, requiring a huge additional capital investment in extra standby conventional power generating equipment for those days when the wind fails to blow, it is clear that the whole idea of using wind as a major power source is completely busted. Yet does that stop the politicians making asses of themselves? Does it heck! I dislike that you parrot the 99% meme by the OWS folks because they play the old trick of the Bolsheviks calling themselves the Bolsheviks all over again… but with regard to the subsidies you are of course right. It’s the same scheme in Germany – the ROI is not as outrageous, though, more like 6% a year. But entirely risk-free, and paid for by the rate payers; poor folks paying rich folks, in the end. And promoted by leftist Greens. Which goes to show that socialism is all about self-enriching of the righteous revolutionaries. Oh! One of these days I am going to write my expose of the Superfund. If you think that energy subsidies are a waste of money, you should work for 30 years in the environmental remediation business. There has NEVER been such a money waster, time waster, resource waster, energy waster, such a pocket liner for law firms, engineering firms, accounting firms, the Army Corps of Engineers, the USGS, Green peace, Earthfirst, EDF – in short you name it – than the Superfund. Collassal sums of money have been thrown at “environment contamination” billions have been spent on research, trillions have been spent on ‘remediations’ almost ALL of which have done NOTHING to reduce any risk to human health and the environment – for the simple reason that most Superfund sites pose precious little risk to human health or the environment – if any at all. Well, that’s yet another story. Bill Illis says: November 18, 2011 at 3:55 pm Sad but true. However, I didn’t have information on the size of the loan, so I decided not to comment on that. However, you are quite right that the real return on their cash-out-of-pocket investment is likely to much larger than 29%, I can’t tell you how opposed I am to this entire process. Neither the federal nor the state government nor the Public Utilities commission should have anything to do with this kind of sweetheart deals for the uber-wealthy. I scratch around for some investment where my few dollars can make a few percent, and my taxes and my electricity costs are making them absurd rates of return. Disgusting … and more to the point, economically suicidal. 
What business will locate where electric rates are 50% above market? w.

Willis Eschenbach says: "Disgusting … and more to the point, economically suicidal. What business will locate where electric rates are 50% above market?"
Groups like Boeing… and others that will be told to do so or be shut down by the government… just like what's happening where the fascists are telling Boeing it can't move to SC because it's a non-union state.

We have the same moronic but Green policies here in the UK. There are new houses being built with solar panels on the roof, windmills on every hilltop and the BBC banging on at every opportunity about AGW, all subsidised by the taxpayer. I have just fixed our energy costs at £256 PCM until 2015 because I know that the cost of energy will go through the roof in the next few years due to the exorbitant cost of subsidised renewable energy.

Would hydro be considered a renewable energy source?

The solar project capital cost covers only the building of the specific solar plant. However, the fluctuating solar and wind power generation must be backed up with generation that is ready to compensate for the sudden variation in generation due to variation in wind speed and sunlight. I.e., to show the full cost impact it should also show the cost of this backup need. Also, if wind and solar (PV) are a big share of the power generation on a grid, they have very low inertia compared to regular generators with big rotating masses, so the cost impact on the transmission side, which needs quick voltage compensation in the form of capacitor banks etc., should also be considered.

@Thomas Fuller: Yes, nuclear has gotten lots of subsidies. And hydro was almost entirely built by government. But there's a difference. Nuclear and hydro are actual sources of reliable power, while solar is unreliable and wind is a net consumer of electricity. By building those nuke and hydro plants, the government assured stable economic growth in large parts of the country and good jobs for millions of people. Especially true with hydro, which was genuinely cheap before the anti-Darwinian and anti-scientific """"endangered"""" """"species"""" nonsense interfered with it. The government received plenty of taxes from the aluminum smelters and other industries that dominated the NW for many decades, thus paying back its initial investment in hydro many times over. There will be no payback for wind and solar, only continued cornucopian welfare payments to the obscenely, grotesquely rich "investors" and to the members of Congress who made the laws to enrich themselves.

You didn't have to be too rich to use a similar ploy to keep from getting quite as keel-hauled by Obamanomics. I added some solar panels to our installation, Sunpower in fact. The US Gov. chipped in $5,200, PA chipped in $3,500, and the electric company contributed another $4,500. Total cost of the addition was $13,000, on top of the $27,000 for the original 18 panels. My total final cost: $26,000. The panels essentially zero out the electric bill over the year. We use a little more than 10,000 kWh. The kicker is we get a check for $250 for every 1,000 kWh, about $2,500 a year, for Solar Renewable Energy Credits sold on some sort of exchange. All the subsidies put the payback period somewhere around 10 years, if the SRECs don't go away. I would have done it anyway for personal reasons (I hate uncontrollable electric bills), but the payback would have been more like 30 years. Anyway, thanks for the help y'all. Too bad most of the subsidies have gone away.
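To put rough numbers on the rooftop story just above, here is a minimal simple-payback sketch in Python. The costs, rebates, generation and SREC income are taken from that comment; the avoided retail rate of $0.13/kWh is my own assumption, so the output is only a ballpark check that lands in the same neighbourhood as the commenter's roughly 10-year subsidised and roughly 30-year unsubsidised figures.

# Simple-payback sketch for the rooftop PV story above.
# The avoided retail rate is an assumption, not a figure from the comment.
gross_cost = 27000 + 13000            # original 18 panels plus the add-on, $
rebates    = 5200 + 3500 + 4500       # federal, state and utility contributions, $
net_cost   = gross_cost - rebates     # owner's out-of-pocket cost, $

annual_kwh  = 10000                   # yearly generation, roughly zeroing the bill
retail_rate = 0.13                    # assumed avoided cost, $/kWh
srec_income = 2500                    # $250 per 1,000 kWh of SRECs, $/yr

bill_savings = annual_kwh * retail_rate
payback_subsidised   = net_cost / (bill_savings + srec_income)
payback_unsubsidised = gross_cost / bill_savings      # no rebates, no SRECs

print(f"net cost after rebates: ${net_cost:,.0f}")
print(f"simple payback, subsidised with SRECs: {payback_subsidised:.0f} years")
print(f"simple payback, no support at all:     {payback_unsubsidised:.0f} years")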
Guess the big O didn't like the idea of ordinary folks getting a direct benefit.

Willis Eschenbach;
Disgusting … and more to the point, economically suicidal. What business will locate where electric rates are 50% above market?>>>
…and what businesses that are already there and are energy intensive will pick up and move for the same reason? Worse, if you've decided to move, you may as well consider all the options. Texas, Nevada, China…

Yet another case of not learning history's lessons. We've heard this story before, just not necessarily spurred by global warming. If it isn't that, it is something else. Every time governments decide that they can change the economy by subsidizing it, the big money interests do what they are paid to do. Figure out how to leverage the programs. It distorts the markets, and business moves from a business model of providing cost-effective goods and services to a business model of efficiently collecting government program money. I have seen the enemy, and they are us.

Thomas Fuller says: November 18, 2011 at 3:57 pm
Hey, Tom. Interesting questions all. The California Valley Solar Ranch is getting half a billion extra dollars out of the deal. Price to the consumer will rise. It already has. California pays $0.15 per kWh where states like Idaho and Utah pay only half of that. This is because of the asinine law requiring the CPUC to get 30% of its energy from renewables. So it has to pay through the nose, and in turn, the customer has to pay through the nose. Short answer? Ratepayers pay, as always.

I find this from December 2010, indicating that SunPower no longer owns CVSR. That's a curious setup. SunPower must have the on-the-ground knowhow, while NRG has the capital. NRG's sales in 2010 were only $1.3 billion, down from $2.2 billion in both 2008 and 2009 … may be why they needed to cook up this sweetheart deal. Interestingly, if they did indeed "invest $450m of equity in the next four years", that would make their return 334 / 450, or about a 75% return on their money … not bad at all. I didn't do this breakout, the document says: … And no, I haven't seen any others. I would like to, however. w.

uan says: November 18, 2011 at 5:09 pm
would hydro be considered a renewable energy source?>>>
Depends on who you ask. The really rabid CAGW alarmists will tell you that CO2 will raise temperatures, which will raise the amount of water vapor generated, and in the same breath claim that the result will be droughts and increased desertification, so we can't rely on it. Apart from the fact that the more water vapour there is the more rain there should be as a result, they claim that depending on hydro is risky because there might be no more rain. Right. There might be no more wind too… lol.

But the rabid warmists are in many cases also rabid environmentalists. You see, hydro dams are "not natural" and so are "harmful" to the environment. Therefore, they are bad too even if they are renewable. You see, if a beaver builds a dam, it is natural. If we build a dam, well shame on us for altering the landscape just so people can have heat and light and clean water and stoves to cook on and power tools. Goodness gracious, leave power tools in the hands of the people and they might build a park bench to sit on, or maybe a kitchen cabinet. That's just dangerous!

Mike Hebb says: November 18, 2011 at 4:00 pm
Thanks, Mike.
While you are correct, unless you are advocating the increased use of yak dung, it is not relevant to the attempts by well-meaning folks to artificially increase the use of renewables … they're not attempting to increase the use of yak dung as fuel, they want to decrease it. w.

Look on the bright side, Anthony, the totalitarian Gillard regime in OZ is planning to "hook up" their new CO2 Tax Market with the People's Republic of California – since no one else will give them the time of day. Maybe that will reduce your costs. Then again, it will probably just add more lining to certain pockets.

Here in OZ, and ONLY due to the Government subsidies and feed-in tariffs, I have a 4.8 kW solar system on my house that is returning better than 19% over the past 16 months. As my wife and I are self-funded retirees, I cannot get this sort of return from any bank account, so it is in my interest to outlay the $15.5K rather than leave it invested, even though I know I am being subsidised by every other user without solar power. FYI, some figures for electricity price rises in South Australia, PLUS 10% GST, which do not factor in the 'tax that shall not be named', which is expected to increase electricity prices by another 20% or more (is that the sound of the carbon cops pulling up in my driveway?). Since 9th Feb 2011 (ex-GST, in c/kWh):

Summer peak (01 Jan – 31 Mar):
1st 1,200 kWh/annum – 17.93 to 25.3 – up 41.1%
next 2,800 kWh/annum – 20.15 to 27.97 – up 38.8%
next 6,000 kWh/annum – 23.32 to 30.76 – up 31.9%
all additional kWh/annum – 23.68 to 31.34 – up 32.4%

Winter peak (01 Apr – 31 Dec):
1st 1,200 kWh/annum – 19.43 to 24.82 – up 27.74%
next 2,800 kWh/annum – 19.79 to 25.08 – up 26.73%
next 6,000 kWh/annum – 23.34 to 27.83 – up 19.24%
all additional kWh/annum – 23.62 to 28.41 – up 20.3%

"would hydro be considered a renewable energy source?"
It is IMO – that's why they keep claiming that China is leading the world in renewables, as China has 192 GW of hydro power, equal to the US and Canada combined. Would never happen here because the greens won't allow dams.

A few cost drivers are not included in the calculation. As the power is not generated 24 hours / 365 days per year, backup power generation has to remain in place. With increasing solar generation, such backup increasingly runs at less than 100% or has to be switched on and off. All of this costs money. In Germany around midday on some sunny days, prices at the electricity stock exchange go towards zero or sometimes even negative. That means there is not sufficient infrastructure to distribute or store, and it has to be given away for nothing, or money has to be paid so that someone takes it. Huge amounts of money will have to be spent to upgrade the networks and storage. And for storage, no solution has yet been found (flooding valleys is no option in central Europe, air pressure storage is inefficient and costly, batteries are extremely expensive, etc…).

Willis Eschenbach;
That's a curious setup. SunPower must have the on-the-ground knowhow>>>
Well they do have a long history in solar cells, I believe they supplied NASA for some of the satellites. But it seems to me the real know-how is that they "know how the system works". My understanding is that they hired a lobbyist by the name of George Miller IV to help them get the loans from the DOE. George Miller IV got his daddy to come for a tour of the plant. His daddy would be Senator George Miller (D). Of course Senator Miller dragged along his good friend Ken Salazar. Well, more correctly, Interior Secretary Ken Salazar.
Now if the DOE were to directly fund SunPower under those circumstances, somebody might cry foul. So, one needs a layer of plausible deniability…

All this is going on, and it's as if the Svensmark experiments at CERN never even happened. All that empirical evidence and then the confirming experimental proofs told us that "Clouds drive our climate and stars give our clouds their orders", as one scientist put it. The magnetic fields protecting Earth from the cosmic rays that give us the low level clouds involved have been weakening. In 2009 NASA told us that more cosmic rays got through than at any time in the last 50 years. There is going to be a lot of death if humanity doesn't start making practical adaptations fast.

On that money, the investors stand to make a net present value of $334 million dollars … which means that due to the screwing of the taxpayers and ratepayers, a few very wealthy investors are GUARANTEED A RETURN OF 29% ON THEIR INVESTMENT!!!
29% on an investment like this in the private market is less than borderline. The screwing of the taxpayers is the only thing that makes it even remotely attractive. Taking NPV/investment is not the same as IRR, as I'm sure you know. When I did cash flow analysis, I used to advise against anything less than about 30% IRR, unless there was virtually no risk, and the time horizon was short. If it was easy and quick, great, do it. Longer term and difficult or risky, we have better places to put money. In the private market, the cost of capital is easily 15% after tax, so getting 30% is really messing with pennies. In some industries, that's almost OK (older, stable, predictable ones, with very predictable future cash flow). New ones or high risk ventures, I'd be looking for north of 40 or 50%. Really volatile or unpredictable (high risk), better than 100% per year. No wonder only folks with unlimited free money (the government) ponder such foolish enterprises. I would love to see Chu's supposedly politically neutral cash flow analysis. The notion that Solyndra was worth investing in is preposterous under almost any private sector scenario.

Total credit market debt in the USA currently stands at about 350% of GDP after a three-decade-long debt-fueled spending spree. Early on in this process each one dollar of debt resulted in 1.6 dollars of GDP growth. Immediately prior to the 2007 recession each one dollar of debt resulted in about 1.15 dollars of GDP growth due to the decreased marginal utility of that debt. These green projects are spending one dollar to add (much) less than one dollar to GDP. We know this because they raise the price of power but do not increase its economic utility, so they must be a net cost to the economy. We are destroying useful capital in these projects … this is basic economics. This is a bad idea at all times in any economic cycle, but it will probably prove disastrous now. So what will be the result? Hyperinflation to inflate away these debts, as has happened in other countries, or depression-style asset deflation that will erase debt, and ruin lives, via default. This will not end well, I think.

Which brings me to this: This is what you get with command and control mechanisms. The State is in charge. They took over ostensibly to protect consumers from evil corporations. With their bait-and-switch, the game is now to protect the environment we consumers live in and not our pocketbooks. Yes, indeed, what is a healthy environment worth?
Apparently, our jobs, our houses, our ability to buy food, and take care of our families. In short, we have to give up everything so the government can protect us. Roger Knights probably has the right idea: Give them everything. When the consumers who fall for the con game finally realize it is a scam, maybe it won’t be too late to turn it all around. And if California can’t be saved, well, maybe that is what is required to save the nation. Idiots are in charge. Clearly, the propaganda is still working on voters. There are many factors to blame, but a big one is term limits. Term limits eliminate the incentive for electeds to worry about the long-term consequences of their votes. They will be termed out first. They can give away lots of goodies paid for by tax payers, and leave much richer than they came. Elected office staff are often 20-somethings making decisions with no perspective at all. A few older staff run the operation, moving from official to official, completely unaccountable to the voters. Electeds have no idea what regulators are doing in practice. You barely come up to speed and then you leave office. State government is completely out of control. Until the voters understand the waste, it will just keep happening. Too many voters don’t have a direct connection to the costs of government so they buy the line that the “rich” don’t pay their fair share. The people who don’t pay taxes suffer by paying higher prices for goods and services, by losing jobs, and not being rehired due to high unemployment in economic conditions produced by government policy (and the folly of voters). It all has to come crashing down to fix this mess. We hope there will be a chance to fix it. Michael D Smith; In the private market, the cost of capital is easily 15% after tax, so getting 30% is really messing with pennies.>>> Really? Pennies? There is capital, then there is venture capital, and then there is high risk venture capital. In all cases, there is some element of risk, I’ve listed from lowest to highest. In any investment scenario, you can lose some or all of your money. Most venture capital projects fail, and high risk venture capital projects nearly all fail. This is a high risk venture capital project…. with no risk. Find me a no risk investment that pays more than bank interest rates. Pennies my *ss. Grrrrr.. OCCUPY CHU’S OFFICE! (for the 99%) Subsidies that support groups favored by many Republicans or conservatives are evil and are responsible for the exploitation of man and the earth. Subsides that support groups favored by many Democrats or liberals are enlightened and necessary and are our only hope for a brighter tomorrow. That is part of the genius of the liberal indoctrination. They found a way to convince people that those who don’t believe or practice what they do are evil. There’s one thing you haven’t mentioned. The electricity grid was built relying on a stable load and supply. To modify the grid to cope with brown outs and surges is a significant cost factor that the grid owner also passes on to the consumer. I believe it used to be called a ‘tragic moral choice.’ Do I do what’s good for me, or what’s good for the community? I shouldn’t have to make such a choice, but the politicians have set me up for it. In this case, I decided I’ve had enough; I’m paying taxes for these programs and I’m not going to make myself a further victim by refusing to take advantage of them. So I installed solar photovoltaic. 
My apologies to my neighbors who are not in a position to do likewise, and who are subsidizing my power. If the politicians had any sense, they might have subsidized installations on public buildings (schools, libraries, clinics…) that would benefit everybody. So how is it working for me? Better than anticipated. The panels (yeah, they're Sunpower) are kicking out the rated power and have produced all of our electricity since turn-on last June 17. Plus about 300 kWh extra. Azusa Light and Power paid about a third of the cost, and Uncle Sam promises about a third of the remainder at tax time. My net outlay is somewhere between 8-9 thousand; estimated payback at present prices is 15-20 years. But that is savings – it is _tax free_. And present prices are certain to go up, given the situation in CA. (So I claim self-defense on the moral question.) There is a big difference between Solyndra and Sunpower. Sunpower produces real hardware that really works. And there is a big difference with rooftop installations, which do not need two-hundred-mile-long transmission lines and do not flatten large areas of undeveloped countryside. As I drove down to Pick-a-Part to get a cheap junkyard tire for my car this morning I saw the crew constructing S. Cal. Edison's new Tehachapi transmission line. $2 billion, I think they are spending on it. And I share your anger.

The greens are walking contradictions. Useful idiots to the end. If that was an oil company subsidized to the same effect (by evil Reps no doubt) there would be no end to the "scandal" coverage by the MSM. Disgusting.

Because of the weather, California energy bills are lower, so the political impact is small. If they tried that in other states further north, I think they'd be in some trouble.

If oil companies are making profits of $40 billion per year, does it make sense to subsidize them $4 billion per year or is it also a big waste of money? Will.

The scams never end; even when the last dollar is stolen from the exhausted taxpayer, they will still want more. In the midnight hour they want more, more, more. Billy Idol.

Jim D says: November 18, 2011 at 7:56 pm
"If oil companies are making profits of $40 billion per year, does it make sense to subsidize them $4 billion per year or is it also a big waste of money?"
Oil companies only get green energy subsidies; some smaller oil companies get some subsidies, but mostly the same type that all small businesses get. If you remove the green energy subsidies, big oil would get zero money from the government… of course the green movement secretly loves giving money to big oil.

Here in Australia the article was followed by this advertisement.

A significant proportion of the biomass in the developed world is peat. Peat is a fossil fuel, albeit a recent one. Yet they have classified it as renewable biomass and you can even get carbon credits for using it as a fuel. In some countries it used to generate electricity, Ireland for example. Making the electricity from biomass misleadingly high. Exclude fossil fuel peat and the renewables electricity is likely under 2%.

In California, hydro is not considered a renewable energy. Our PUD uses hydro exclusively, and has to diversify to meet the new requirements for "green energy." Ridiculous!

More on peat: It is particularly inefficient as a fuel and requires drying before burning. It produces more CO2 per unit of electricity than any other source of fuel. Then you have to drain the peat bogs to get to it, which by itself is a major source of CO2 emissions.
From Wikipedia: losing 5% of the 2.7m hectares of peatland in Britain would equal the UK's annual carbon emissions and risk its climate targets (IUCN). Nothing better illustrates the lunacy of 'renewable energy' than carbon credits for using peat as a fuel.

Rosco says: November 18, 2011 at 6:34 pm
There's one thing you haven't mentioned. The electricity grid was built relying on a stable load and supply. To modify the grid to cope with brown-outs and surges is a significant cost factor that the grid owner also passes on to the consumer.
Autoresponse is the "solution" to handle variations in supply – by cutting demand. Smartgrid in homes and businesses will monitor devices. These devices will be turned off when they receive a command from the utility. The consumer will pay higher prices for power and will lose privacy. Smartgrid is a bidirectional communications system that has no limit on what it can report. Any data collected is fair game. Smartgrid is also planned to be another source of broadband. That prospect has hooked the utilities, now expecting to profit one day on a new revenue stream. One by one, companies and smaller public agencies that might otherwise oppose a crazy scheme are bought off with promises of cash. When the cash fails to materialize, the government will still have its monitoring equipment in everyone's home and in businesses. Everything that can have its own IPv6 address can potentially be a reporting device. A water agency was just attacked via the internet by Russian hackers. Smartgrid will be the alternative of choice when the internet is deemed too risky to be allowed to operate as it does now. Then we will be using the more "secure" government-provided powerline broadband. Our data sniffed and stored, our comments traced, and searches directed, we will be so much safer. All of our communications will be scoured for threats. What do we have to fear if we aren't doing anything illegal? Why should our lives not be completely open for all to see? It just won't be America anymore.

OT, but somewhat relevant. Chu, as far as I know having never seen an oil well, made the call to stop the first attempt (3-4 weeks after the initial explosion, I think) to plug the Deepwater Horizon oil leak using a technique called 'top kill'. After a few months of leaking oil the well was plugged using the very same technique called 'top kill'. The precautionary principle caused a lot of oil to be dumped into the Gulf of Mexico at the US government's hand and blamed on BP. Seems that if you hide under the umbrella of the government and make bad calls there will be no repercussions; just call it policy.

Incorrect. It was the BP engineers who abandoned the "top kill" after about 3 days without success. Chu said in an interview that it probably should have been tried earlier. The second attempt at a top kill was only successful because TWO relief wells, one of which was started on May 2nd, had been drilled.

ann r says: November 18, 2011 at 8:50 pm
In California, hydro is not considered a renewable energy. Our PUD uses hydro exclusively, and has to diversify to meet the new requirements for "green energy." Ridiculous!
The fear is if they counted large hydro (>30 MW), it would encourage developing more water impoundments. Four small dams on the Klamath River, producing clean power, could be torn down. Is that really necessary to "save the salmon"? Are toxic algae behind these dams truly a serious concern?
Cyanobacteria can be controlled by balancing nutrients (typically, nitrogen needs to be added) instead of adding agents like CuSO4. The dam tear-down is funded by yet another bogus "jobs" bill. The money for the projects, over $500 million, could be given to 10,000 people instead, which amounts to about $50,000 each. Our government in action. Brilliant.

John Trigge mentions getting a 19% return. In Oz there are various schemes which assist what one would call the upper & upper middle class (including those leftish proverbial Doctors' wives and environmentalists in Government-paid jobs). The Government gives a subsidy for the first 1.5 kW of PV panels. This works out to be about 40% of the capital cost. This only replaces some of your usage, which in my area is $0.20/kWh. Allowing for average sunlight hours and an efficiency factor, I get about a 20% return on investment. This is only available to a) house owners with roof space in the right orientation, b) people who do not mind an untidy roof space or a roof which cannot be seen (my panels cannot be seen from ground level anywhere on the property) and c) people with some cash and intelligence to invest. In some areas there is a feed-in tariff benefit for larger systems but no additional capital subsidy. In my area anyone putting in a 3 kW system will get a return on investment of only around 11% (including a small contribution from the feed-in tariff). Anyone putting in 10 kW has to be stupid or a dyed-in-the-wool environmentalist who does not mind throwing money away. That is why there are plenty of 1.5 kW systems in better-off areas and very few larger systems. The capital subsidy for PV panels is a waste of money for a token demonstration that the government is doing something.

DMarshall says: November 18, 2011 at 8:46 pm
I'm going to read the whole thing, but from the 3rd page they already are merging and hiding what a subsidy is… Subsidies are direct payments, not tax deductions… The fact that (1) the whole number listed is not subsidies already makes me question this study, and (2) it looks to be doing everything to merge green subsidies, R&D, etc. in as fossil fuel subs along with the green subs. The report looks to be classic green propaganda playing on people's lack of knowledge about the tax code and the fact that oil companies spend billions on green tech and thus get millions in funding from government. Something is off. It may be the NYT.

My understanding is that the NPV of the project is the TOTAL profit expressed in current dollars. It does not take into account that it is earned over a number of years. If the operational life of the solar plant was mentioned, I missed it. Assuming 25 years, the annual return on investment is just over 1%. This is a bad deal even with all the subsidies. Even at ten years, this project makes sense only if someone is scammin' – kickbacks or something.

Manfred makes a valid point. What is the market value of this renewable power? Much of the time it's close to zero, because it's power no one wants, because the infrastructure can't handle it. The best use I can think of for solar and wind energy is heating underground shale oil and gas to increase flows, because there is no time criticality.

Willis, I commend you on ferreting out something of the truth about this project. The whole truth is much worse than your figures indicate because of leveraging. The actual discounted, after-tax cash flow for this project produces an astronomical I.R.R. for its politically connected equity investors.
As near as I can determine, the project is 80% or so financed by a bank loan that is guaranteed by the U.S. government. That means equity investors put up 20% of capital costs. That guarantee means the bank has no "skin in the game" and has no vested interest in the long-term financial viability of the project. Equity investors really have no "skin in the game" once the plant achieves "commercial operation". That's because those investors receive a tax credit (or optional cash from the U.S. Treasury) equal to 30% of their share of the entire capital cost of the project, including the financed portion. That, in turn, means the lucky investors receive $150 back for every $100 they invested THE DAY THE PLANT ACHIEVES COMMERCIAL OPERATION.

Then it gets even better. Thanks to accelerated depreciation, that same investor receives a share of depreciation on the plant of $250 for every $100 invested DURING THE FIRST YEAR OF PLANT OPERATION. If this heavy-hitting investor is in a 50% marginal tax bracket (state and federal), that first-year depreciation amounts to an additional $125 in his pocket for every $100 invested. First-year return in such a case is ($150 tax credit + $125 reduction in personal tax bill =) $275 for every $100 invested.

Finally, here's the really ominous part. First, the bank has no financial interest in seeing to it the plant continues to operate at any time, thanks to the federal loan guarantee. Second, the equity investors receive nearly all the return on their investment they will ever receive during the first five years of plant operation, thanks to the investment tax credit and the five-year depreciation schedule. Since there is no provision in the federal tax code for recapture of either the investment tax credit or the depreciation benefits equity investors have received at any time after the plant achieves commercial operation, no one has any vested interest in seeing to it the plant continues to operate after five years, other than possibly the plant operating personnel, who do not have the financial resources to pour into the project in the event of almost inevitable problems.

Finally, please note that the guaranteed power sale rate of between 15 and 18 cents at the fence, being touted as "only" 50% above fossil rates, is misleading. It is based on the assumption that fossil rates will escalate madly over the next few years. Note that the spot wholesale rate quoted at the Palo Verde trading hub this morning was 3.38 cents per kWh. The whole thing is a recipe for failure at enormous expense to taxpayers and ratepayers.

So reviewing the doc, about the only thing that may be considered a sub would be the Strategic Petroleum Reserve ($6,183), and that's questionable at best… Plus a lot of the subs were costing the oil companies money. They count the Black Lung Disability Trust Fund ($1,035) as a sub, but yet it's really a tax on the coal companies… they in some twisted logic say that because the coal companies are not fully taxed on the fund it's a sub… it's a tax, and a fund set up from that tax, and has nothing to do with subsidizing the coal companies. The other very, very retarded argument they make is about heating homes – the Low Income Home Energy Assistance Program (LIHEAP) ($18,309). This isn't a sub for coal, oil etc.; I'm sure a good amount goes to "green" energy as well. This shouldn't be listed, and most other stuff they have listed is questionable at best. I'll also note that for, say, ethanol they don't list farm subs or anything along that line, or countless other subs.
Basically it throws in everything and then some as a "fossil fuel" sub, and nothing for the green tech, like forced usage or any of a host of other subs, along with the reduced loan interest rates and a host of other things.

Think the NPV reasoning has gone a little off. If a project has an NPV of 30 and capital costs of 100, it does not follow that it is giving a 30% return. The NPV formula takes the cash in and out, and discounts them by the applicable interest rate. The result is the amount that a rational investor would pay now for the cash flow stream forecast. You cannot take this amount, divide it into the total capital, and then say that the result is the percentage return on the capital investment. You could figure payback in years. That is, take the cash out and then figure out how many years it takes to get to breakeven. Not very illuminating because it leaves out interest. You could do IRR. Better, but you have to know the timing of the cash flows to take account of interest. Need to look up the formulas. The point is still entirely valid in one sense: the subsidies are huge, and the success of them, even at these levels, in converting power generation to anything but conventional sources is minimal. The writer who cites the UK is correct: their tariffs and those of their European counterparts are truly insane, because what they are doing is to favor small-scale inefficient production. In the UK for example, you are paid 43p per kWh generated from the smallest sort of solar power installation, whereas the going rate for wholesale electricity is one tenth of that. The larger your installation, the lower the rates. By the way, you are paid in the UK for generating, not for actually supplying. Generate power which no one wants, you still get paid. This is total madness. Social engineering at its best.

Surprisingly, the US has a trade surplus in solar energy, exporting manufacturing expertise and importing mass-produced products. Manufacturing expertise developed overseas has in turn been imported back into the U.S. in the most recent manufacturing plants. On the whole, government subsidies to energy development are not large compared to the earlier subsidies, and ongoing subsidies, to aircraft development and airlines. At policy level, the betting is (so to speak) that continuous development will drive down the cost of the alternatives, and reduce demand on the fossil fuels sufficiently to prevent them rising in price too much. When the govts of the world had been subsidizing aircraft manufacturers and airlines for 30 years, most people (99%) still traveled by boats and railroads. I am happy to see more development of coal, oil and natural gas, but they are gradually becoming more expensive (with alternations of price declines and price increases), and if they last longer than 2100, even at high cost, it will mean that economic growth has slowed considerably and all the poor people of the earth will have remained poor. So, yes it's a bite, and it's a big bite. And like all previous human progress it is messy and haphazard. As always, a few people benefit first. Neither the free market nor the government has all the successes and all the failures. These homilies are as true now as always.

Willis, I wrote a long post, went off to check the NYT source and came back to find that michael has said more or less the same thing I wanted to say. Your first two charts are fine. Keep 'em around. You'll probably need them from time to time. The third chart (from the NYT) looks to be not so great.
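On the NPV-versus-IRR point made just above, a minimal sketch in Python may help. The cash flows are purely hypothetical (an even 25-year stream and a 6% discount rate of my choosing, not the CVSR's actual numbers); the only point is that NPV divided by capital is not an annual rate of return, while IRR is.

# NPV vs IRR on a made-up, even cash flow stream (illustrative only).
def npv(rate, cashflows):
    # cashflows[0] is the time-zero outlay (negative); later entries are yearly
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-7):
    # bisection; assumes NPV falls from positive to negative as the rate rises
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

capital = 100.0
flows = [-capital] + [9.0] * 25      # assumed even $9 per year for 25 years

print(f"NPV at a 6% discount rate: {npv(0.06, flows):.1f}")            # about +15 on 100 of capital
print(f"NPV / capital:             {npv(0.06, flows) / capital:.0%}")  # reads as '15%', but is NOT per year
print(f"IRR:                       {irr(flows):.1%}")                  # about 7.5% per year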
It depends on a lot of unstated assumptions about future interest rates, electric rates, etc. It is unclear what timespan it projects over, and that's important because electricity generation projects tend to have a very long lifespan. The biggest assumption — that a $1.6B project will be built at the estimated cost. It's hard to tell because the chart is kind of a mess, but I think that a 20% cost overrun in construction would probably make that $334M NPV zero or less — assuming that the "NPV" is actually Return On Investment — which is anything but clear. Hardly a guaranteed profit. I also agree with michael that the one thing the chart does accomplish is to show that the subsidies involved are very large compared to the scale of the project, and are probably a bad idea. I see that Claude Harvey has done what I couldn't, and possibly made some sense out of the economics. He seems to think that the project as proposed is more or less a scam. He could well be right. But do keep in mind that the costs of solar have been dropping over time, which is not true of other energy sources. And solar does not have wind's unfortunate impacts on the (inadequate) power grid. This year's dubious investment may well look pretty good in a decade or three, even without the probably ill-advised subsidies.

I am very much against windfarms and solar in Northern latitudes due to their high cost and inefficient returns, and I am against subsidising these industries, so no doubt, whilst I desire to be objective, it may be that I am biased. It seems to me that the position is far more complicated. The investors invest $1.1 billion but are able to write down a substantial percentage of this over 5 years, and of course they get a low-interest loan. Both of these act as sweeteners. But the real issue is: over what period of time do the investors earn $334 million? Is it over 5 years (the tax write-down period) or over 25 years? Or over some other period? The period over which the investors make their net profit makes a substantial difference to the assessment of the real return on their initial outlay of $1.1 billion. That is the question that needs to be addressed, although I agree with the political comment that the poor are being forced to subsidise the investment returns of the rich. In the UK this is doubly unfair since the poor never really get a shot at getting their hands on any of the subsidies that are available. First, by definition, they are poor so do not have the financial means to invest in expensive solar equipment and hence cannot get subsidies on their own energy bills or get the inflated feed-in tariffs. Second, since they… Morally, the subsidy system is very wrong and unfairly works against the less well off and unfairly enriches the landed rich or otherwise wealthy citizens. It is about time that governments stopped this scam.

Fascinating considerations, Willis. But in this case the main conclusion is not showing the whole picture. The NET PRESENT VALUE of $334 million is not the PROFIT for the investors. I think this post is closer to the reasoning of the investors:
Claude Harvey – November 18, 2011 at 10:55 pm
"The whole truth is much worse than your figures indicate because of leveraging. … That means equity investors put up (only) 20% of capital costs."
They get huge tax deductions for the whole 100% of the debt. That is the reason why it is attractive to invest in a project that is directly burning $675 million, or 42% of the invested capital.
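To make the leverage arithmetic described by Claude Harvey concrete, here is a minimal sketch. Every input (20% equity, a 30% investment tax credit on the full capital cost, a first-year accelerated-depreciation share of $250 per $100 of equity, a 50% combined marginal rate) is taken from his comment rather than from any project filing, so treat it as an illustration of the mechanism, not a verified cash flow.

# First-year benefits per $100 of equity, using the figures quoted in the thread.
equity       = 100.0   # investor's own money, $
leverage     = 5.0     # 20% equity means total capital is 5x the equity
itc_rate     = 0.30    # investment tax credit applied to TOTAL capital cost
year1_depr   = 250.0   # first-year accelerated depreciation per $100 of equity
marginal_tax = 0.50    # assumed combined state + federal marginal rate

itc_cash        = itc_rate * equity * leverage   # $150 credit (or Treasury grant)
depr_tax_saving = marginal_tax * year1_depr      # $125 off the investor's tax bill

print(f"first-year benefit per $100 of equity: ${itc_cash + depr_tax_saving:.0f}")  # $275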
If tax credits are taken into account, the project is probably burning close to 70% of the invested capital over its lifetime. Actually the capital is not all destroyed. Most of it is recycled. It is taken from taxpayers and ratepayers and funneled into the pockets of the investors. We all know recycling is good…

Whenever reality is shrouded, there's always big bucks to be made for those who can see through. What are we waiting for? Electricity-powered solar cells, anyone?

In the UK we have Chris Huhne, an equally stupid Secretary for Climate Change and Energy (neither of which he knows anything about), who stated that to keep warm we all need more insulation in our houses. He forgets that insulation will keep out any warmth on the outside, which might be vital given that you can't heat your house at all. Any building at 0C inside will remain at 0C if well insulated. And insulation only delays heat loss, so heating is still required. If you can't afford the minimum heating, as many in the UK can't, you will probably die. Where the British will get their energy from according to Chris Huhne, the Huh who spoilt Christmas…

Same in the UK, prices going crazy and subsidies being given out left, right and centre, although there are signs of a pullback; prices for electricity generation are going to be only about 50% above the going rate, guaranteed for 25 years. These policies are going to kill people as they turn off the heat & power at home. Perhaps that is the idea, cull the old and infirm, quietly. Genocide by green inflation, that's a tag if I ever saw one.

Bio-mass makes up a significant portion (40%) of the graph's 3.3 percent "electricity produced." The total "electricity consumed" from renewables is 1.3 percent – only 40% of the produced figure. Wind and Solar are inherently intermittent, variable and unpredictable. Bio-mass is consumed as a hydrocarbon boiler fuel and the electricity it produces is reliable and constant. Geo-thermal likewise. The numbers in the graph probably indicate that only a tiny fraction of the 1.3% "renewables – electricity consumed" is from wind and solar. On the face of it the numbers indicate wind and solar at essentially ZERO. Amazing.

Which is why 2012 will be a horrible econ year for the Western democracies. No way to reverse things to get cheap energy – even breaking even. For the non-econs, it means as the economies "improve", energy becomes a huge throttle sucking income away from all other sectors. Get used to it. Check lead times for building anything.
____________________________________
Here in the USA we are tearing out the hydroelectric dams thanks to the eco-nuts.

I just do not like the word 'renewable.' In a sense everything is renewable, even the solar system, somewhere, sometime. Perhaps 'sustainable' would be a better term indicating, in the case of energy, that a given resource was either available in such abundance that it could never be exhausted or that it is being continuously replaced at our rate of usage. Also, this term implies that the continuous use of that resource would not make our world uninhabitable.
Ralph Nader has declared that nuclear power is 'poison power,' which would eventually contaminate the planet with deadly radioactive waste. Also, a former vice president of the United States has declared that a scientific consensus now exists that the burning of carbon has reached a point where the climate is about to spin out of control, boiling away the oceans and making the Earth a twin of Venus. For those who accept these proclamations, solar, bio-solar, wind, and hydropower are the only politically correct, sustainable sources of energy, even though these energy sources would only support a small fraction of today's population living a Charles Dickens-era lifestyle. I am not quite ready to say that these views have been promoted because of some plot, as I believe the simplest explanation is a bias and sensitive concern caused by environmental indoctrination. Perhaps some have, unknowingly, reverted to an atheonic worship of Nature after losing faith in a traditional religion. However, there is at least one undeveloped source of energy on this planet that does appear to be indefinitely sustainable at current usage rates.

At some point, someone has to invent a technology which more efficiently converts solar photons into additional electron energy. The problem is, they are two different things – photons versus electrons. The subsidies are given on the basis that economies of scale and continued small incremental improvements in the solar panels will eventually make them efficient enough. But that looks to have been a false assumption. They will NEVER be efficient enough. Mother nature did invent an efficient method billions of years ago. Solar photons get converted into chemical potential energy, allowing plants to grow and reproduce. Some of that chemical potential energy got buried, transformed chemically again, and it resulted in fossil fuels. Now we are converting that chemical energy into electricity and heat by burning it with oxygen. It is 10 times more efficient, because the plants carried out most of the work hundreds of millions of years ago. Some kind of new process or new methodology looks like it will be needed, since the current technologies – solar panels and reflective mirrors concentrating the solar radiation – just don't look like they will ever be efficient enough. The money should be going into research of new methods which will be efficient enough, rather than continuing to use this assumption that economies of scale and small incremental improvements will reach the goal – because they won't.

Just for "the hell of it" I googled the question: "Is 1 billion 1000 million or is it 1 million million?" The answer came back as follows: … Maybe it is possible to 'hide the cost' as well as the decline?

Alan says: November 18, 2011 at 4:13 pm
Isn't hydro renewable?? Greenies only seem interested in wind for some reason
_______________________________________
Hydro might hurt the little fishes. Also the USA passed the scenic rivers bill. Never forget that the actual plan is to "De-develop the USA" – as President Bush's pal Maurice Strong stated. What they both forgot to tell you is this ONLY applies to the poor and the middle classes (serfs) and not to the "Master Class". You do not see them leading by example now, do you???? The other part is "Save the environment for the children". Again they left out something.
It should be "Save the environment (and resources) for OUR children, not yours." Obama's Science Czar makes that clear in the book Ecoscience (1977). They are summarized: …

The follow-up is the recent USDA funding of Epicyte to develop a spermicidal corn by inserting into corn the gene manufacturing a class of human antibodies that attack sperm. Epicyte has folded and was bought out by a company in Pittsboro NC.

Prohibited Gene-Altered Corn [StarLink] Found in Latin American & Caribbean Food Aid Shipments
The Roots of Racism and Abortion: An Exploration of Eugenics, by John Cavanaugh-O'Keefe – Table of Contents, Chapter 10: Eugenics after World War II
US Sterilization Program

Note that it is the WEALTHY who are the power behind this movement. As they are always the power behind most "grass-roots" movements that catch on in the modern world. Maurice Strong is sometimes credited with coming up with the idea of international NGOs thanks to his work with YMCA International in his early days. Anthony, somehow you managed to become the exception. I do not think very many really appreciate just how rare that makes you or how much we owe you. ~ Thanks again.

O H Dahlsveen says: November 19, 2011 at 5:39 am
Just for "the hell of it" I googled the question: "Is 1 billion 1000 million or is it 1 million million?"
Here's the real logic: bi means two, tri means three. A bi-illion is two lots of 6 zeros, a tri-illion is three lots of 6 zeros. As Mr Spock would say, I can see no logic in the American system, in which bi-illion means a thousand million. ****

And, at least in the eastern US, state applications for any new transmission lines are always targeted by organized & funded Eco-zealots for denial, and at the very least they deliberately tie up the legal process for many years. Not sure about Canada…..? Just asking.

Several studies and tests have concluded that your average EV, run entirely on coal-generated electricity, will be roughly the same for emissions as a very fuel-efficient petrol auto. The only link I can find at the moment is a rather old one from Slate:
It's probably worth pointing out that, even if you consider EVs to have a "long tailpipe", their emissions would be confined to generation plants, where they could be better managed or sequestered, and there would be no emissions, such as ground-level ozone, sulfur dioxide, etc., in population centers, which are typically well away from coal plants. Also, in most of the Western world, the power generation capacity is more than enough to handle the most optimistic adoption of electric vehicles. What does need considerable effort, in many places, is the local grid.

"Redistribution of wealth both within and among nations is absolutely essential" – John Holdren, Obama's Science Czar, Human Ecology: Problems and Solutions (1973)
Does that mean the 250+ millionaires in Congress are going to give the rest of us some of their money?

"Bill Illis says: November 19, 2011 at 5:15 am
The money should be going into research of new methods which will be efficient enough rather than continuing to use this assumption that economies of scale and small incremental improvements will reach the goal – because it won't."
Theoretical efficiency is limited by physical laws, and our current understanding of them. Nature has been working on solving this problem for quite a long time and so far hasn't come up with anything more efficient than photosynthesis. Maybe some day someone will discover a low-cost paint that you can spray on your roof that will generate electricity.
The current approach appears to be centered on solar panels that can, over their 20-year lifespan, generate as much energy as it took to produce them. However, with subsidies this can be improved – not so they generate more power, but to reduce the costs and thereby hide the amount of power it took to produce the panels.

Cecil Coupe says: November 18, 2011 at 4:19 pm
It's worse than you thought. The company is SunPower (SPWR) and this is only one of their efforts. There's been a lot of shareholder value removed. Everybody is getting the short end of the stick.
Not everyone. The folks that made it happen, with friends in high places – they made out like bandits.

Kelvin Vaughan says: November 19, 2011 at 6:56 am
Maybe so. But if someone says "quadrillion", I have a fair notion of what they mean. If someone says "billiard", I get out my cue. Mr. Spock had very little real world experience.

Do we have a dollar-for-dollar comparison on all coal and oil vs energy produced with renewables? Comparing what $1 produces in renewables vs $1 of oil or coal would be the best way to compare. Comparing subsidized renewables with what the free market spends doesn't give an honest look at renewables' shortcomings. It still makes a great point: Government subsidies can't even make a dent.

I would love to see a complete cost-benefit analysis, including all "externalities" of fossil fuels, renewables and nuclear. Do any such studies exist?

ferd berple: … maybe. Maybe. It depends on whether a conversion to electric vehicles occurs faster than a conversion away from coal. Personally, I doubt that purely electric vehicles will ever make up a large share of vehicles. But I am sure that I cannot "see" further than 10 years into the future, if that.

ferd berple: The current approach appears to be centered on solar panels that can over their 20 year lifespan generate as much energy as it took to produce them.
You are behind the times. Current-generation PV panels produce more electricity than they have consumed after about 1 1/2 years. They produce at least 80% of rated peak power for at least 30 years. The estimated rate of energy return (energy harvested divided by energy input) is greater than with the Canadian tar sands; but the latter produces a nice liquid fuel.

Steven Chu, Energy CEO

Bill Illis: The subsidies are given on the basis that economies of scale and continued small incremental improvements in the solar panels will eventually make them efficient enough. But that looks to have been a false assumption. They will NEVER be efficient enough.
How efficient is "efficient enough"? The high-temperature (concentrated solar) PV cells are about 40% efficient. As they are mass produced (just starting) there will be continuous improvements in the manufacturing processes (as there are every year in almost all manufacturing, and specifically in producing PV panels) that reduce labor and material input and reduce overall cost. "Never" is too absolute a concept.

John Runberg says: What is the cost to maintain this equipment compared to a fossil fuel plant? How is generation calculated?
With solar (as well as wind) rather a lot of the equipment is likely to be outside in all weathers, probably requiring all sorts of vehicles for people to even get to it. Whereas using steam turbines (regardless of whether the steam is produced by burning coal, methane, oil, wood, etc., or even by a nuclear reactor) means that the plant is inside a building, which eliminates the equipment having to cope with weather.
Access to machinery which needs servicing also tends to be easily available. Much of the same also tends to apply to hydro-electric, including pumped storage plants. The sun doesn't shine every day and winter days are shorter. The power output is likely to vary even from minute to minute. Probably not too big a problem if you are heating water (or even a battery-powered illuminated road sign). But a big problem if the aim is to generate electricity. The same applies to wind power. Throughout history animals, slaves, steam engines, internal combustion engines and electricity have wound up replacing wind power because "reliable" beats "free".

Mark says: November 19, 2011 at 8:30 am
"Throughout history animals, slaves, steam engines, internal combustion engines and electricity have wound up replacing wind power because "reliable" beats "free"."
Actually, Mark, it's hard to think of anything more reliable than the sun. It's been rising on time every single day for billions of years. You are confusing "reliable" with "on demand". Solar and wind power are very reliable but they aren't available on demand without some sort of buffer.

Septic Matthew: As they are mass produced (just starting) there will be continuous improvements in the manufacturing processes (as there are every year in almost all manufacturing)
Here is the manufacturing productivity trend for the US for the last 20 years. A couple of percent a year productivity increase is about normal. Various industries go through rapid productivity increases, then the increases slow to a trickle.

Regarding "billing" (bilking) the poor on their electricity… more than likely in Commiefornia there is a generous subsidy for low/no-income people. The hardest hurt are the working middle class. Let's start a "We are the 50%ers" movement (the 50% who pay taxes); we'll need to change our name often to reflect the current %, but…

janama says: November 18, 2011 at 3:59 pm.
janama, I recently installed a 5 kW system. AGL have a 5 kW limit but pay $0.68/kWh; however, your estimate of output is way optimistic – my system puts out between 1.3 and 18 kWh per day, averaging 12 kWh per day for the last 7 weeks. Inefficient is one way to describe solar; rort is another.

One giant problem in our capitalist system of business is that investors won't invest unless they get a 20% or higher return on their money. Developers play all sorts of games with their financial models to show what the investors want to see. If the project comes up short, they look to the government subsidies to make up the difference. Any time there is a government subsidy, what it really means is a tax on the people. Why can't our corporations be happy with a 10% return on investment for a while?

michel says: November 18, 2011 at 11:46 pm
The NPV measures (theoretically) what I could sell the future income stream of a project for in cash today. So to use your example, when the project is completed, in theory, they will be able to sell the finished plant for 130% of what they have into it. How is that not a return of 30% on their investment? What am I missing? Yes, this is not the IRR, but where is the error in my calculations? w.

Septic Matthew says: November 19, 2011 at 12:33 am
Thanks, Matt. Man, I'd have to see a citation for that claim, because it truly is surprising (and not all that believable). Again, cite? Which "earlier subsidies" are you comparing to which "current subsidies"? Your claim is meaningless as it stands. w.

Bill Illis says: November 19, 2011 at 5:15 am
Never say never, Bill.
It isn’t a matter of efficiency, per se. It’s a matter of cost efficiency and more than that it’s a matter of competitive cost efficiency. I went through the numbers. I’d have photovoltaics supplying all my electricity and selling excess generation back to the grid at a profit if it were going for $0.44kWh. Can’t do it when I can buy off the grid at $0.11/kWh. I figure breakeven (absent any calculation for maintenance cost over 25yr service life) is about $0.22/kWh. “Mther nature did invent an efficient method billions of years ago. Solar photons get converted into chemical potential energy allowing plants to grow and reproduce.” Yes, but it isn’t really all that efficient. Sugarcane, one of the most efficient at converting solar to chemical energy, is about 8%. That’s about mid-range for production solar panels (6-10%) and just a fraction of highest achieved laboratory panels (40%). The advantage plants have is they’re self-reproducing, self-reparing, and built out of air, water, and dirt. So cost efficiency even for a cultivated crop like sugar-cane is higher than photovoltaics. The remaining problem with plants is that starches, sugars, and even wood isn’t really desireable as an energy source without further processing which vastly decreases the competitive cost efficiency relative to fossil fuels. No one’s looking for more expensive sources of energy except misguided environmentalist whackos. The thing of it that nature does provide the basic technology we need. The end products we want like fuel-oil and ethanol are metabolic by-products and/or products the plant only needs in small proportions for survival value. Evolution doesn’t reward waste. Genetic engineers however can reward whatever genetic engineers want so if we want an algae that produces far more oil than it needs and grows in an unnaturally poisoned environment that no wild competitors can tolerate then we’ll have a renewable source that’s far cheaper than digging progressively more difficult to reach fossil fuels out of the ground. Here is everyone’s chance to have a direct say: Being a large landowner (in a wind corridor), the windmill people were quick to approach me regarding the erection of large 4MW generators. I sent them packing as I could not agree with the business model. Now I am the only area for miles NOT looking at windmills (also the only one not receiving subsidized income). Now the solar panel people have arrived. All I have to do is sign on the line, and with government guaranteed loans and a 30yr guaranteed contract, I would be in the solar “business”. The business model makes no more sense than windmills, but hey, subsidies are money… are they not. The promise is a 6 yr payback on the initial capital outlay. So the question is: Should I send these people packing also or suck up to the public teat, like everyone else?? What would you do, fellow skeptics, and more importantly… why? What would y’all advise? GK @bill illis “Some kind of new process or new methodology looks like it will be needed since the current technologies, solar panels and reflective mirrors concentrating the solar radiation, just doesn’t look like it will ever be efficient enough.” They’re efficient enough right now.. That’s why hydrocarbon fuels are going to be with us for a very long time to come IMO. They’re very cheap and practical for energy storage and vast infrastructure is already in place to do it. 
We just need to produce those fuels directly from air, water, and sunlight via synthetic biology instead of digging it up out of the ground where natural biological processes accumulated it. Philhippos says: November 18, 2011 at 4:44 pm “So a new bumper sticker is needed” KILL A PENSIONER – BUY A SOLAR PANEL You could print this off and apply it with some clear adhesive backed plastic (Fablon in the UK). davidmhoffer says: November 18, 2011 at 5:29 pm “…and what businesses that are already there and are energy intensive will pick up and move for the same reason? Worse, if you’ve decided to move, you may as well consider all the options. Texas, Nevada, China…” Texas has over 25% of the US total installed wind turbines. Over 3 times as much as California. There’s nothing inherently wrong with wind power. It works out fine in states with conservative governments. Evidently the inherent, intrinsic wrongness in all this is giving the looney left a seat at the government table. Re:Jesse says: November 19, 2011 at 9:52 am “One giant problem in our capitalist system of business is that investors won’t invest unless they get 20% or higher return on their money….Why can’t our corporations be happy with a 10% return on investment for a while?” I believe you are confusing “risk capital” with corporate returns on capital investment. Most corporations would be quite pleased to see a 10% return on their capital investment. Investments cobbled together by investment bankers and sold through unregistered security offerings is where you’ll see expectations of higher returns because of investor perceptions of higher risk. The problem with such investments being underwritten by the U.S. government is that the investor risk disappears while the fancy return remains intact. The risk for which such rewards are rationally exchanged have been shifted to the U.S. taxpaying public. @davidhoffer “They” say that, huh? Do you have a link or is that just more of the crap you constantly make up out of thin air to support your stupid rants? Dave Springer says: November 19, 2011 at 10:15 am Manufacturing has a history of constant cost decline. This is true for just about everything but it’s particularly true for solid state electronics and photovoltaics are mostly solid state electronics. Photovoltaic s are mostly glass and metal frames. The ‘solar part’ is maybe 1/6th of the total installed cost right now. The last I checked plate glass was a fairly mature industry. Tim says All because it was decided in the 70s by flower power hippies that nuclear power is immoral and you must never speak of it Henry@Tim They were right. Nuclear energy is no good. Isn’t hydro renewable?? Greenies only seem interested in wind for some reason Green activists are only interested in a technology until it becomes economic; at that point it becomes anathema.. The U.S. Environmental Protection Agency defines hydroelectric as a renewable energy source: “Renewable Energy” “The term renewable energy generally refers to electricity supplied from renewable energy sources, such as wind and solar power, geothermal, hydropower, and various forms of biomass. These energy sources are considered renewable sources because they are continuously replenished on the Earth.” The proper term, the one used by the EPA, for renewables other than hydro is (big surprise) “non-hydroelectric renewables”. For example: Interestingly enough the dimbulb who wrote the non-hydro article above classified “landfill gas” as a renewable. 
Like nature is what generates the trash that goes into them. Duh. Where do they find these people? Re:Dean Cardno says: November 19, 2011 at 11:05 am “Green activists are only interested in a technology until it becomes economic; at that point it becomes anathema.” As a former “renewable energy” developer, I can attest to that one. So long as we were pouring money down a research rat-hole trying to get geothermal power into the economically viable range, environmentalists were cheering us on. We easily got “negative declarations” on potential environmental impacts for most anything we chose to do. However, at the precise moment we developed the technology to the point where profit-making geothermal plants were possible they turned on us like mad dogs. The last geothermal plant I built was tied up for 5-1/2 years by “environmental intervenors” and it was a binary, “zero emissions” plant; the most environmentally benign plant I ever constructed. Dean Cardno says: November 19, 2011 at 11:05 am I don’t believe that’s quite right. The real goal is to have fewer humans in the world. These people are all Paul R. Ehrlich disciples only without the stones to admit it because Ehrlich is so widely discredited. Like minded people were behind the eugenics movement which was quite the fad in the United States around the time of Ehrlich’s birth. The name changes but the sentiment remains the same – the earth is being overrun by “useless eaters”. These days the useless eaters are anyone who questions the catastrophic climate change narrative. Oh, and old people with terminal illnesses. This is where the so-called “death panels” that Sarah Palin so aptly named come from. In that case it’s people uselessly consuming health care resources that are better spent on younger people with more to gain per public health care dollar spent. In the climate change game the unwashed masses are uselessly eating fossil fuels that should be conserved for a smaller world filled with enlightened people and descendants of the elite liberal left. The Nazis called their scapegoats “Life unworthy of life”. I hate to run afoul of Godwin’s Law but there’s an exception for cases where it really is appropriate and I (naturally, as predicted by a corollary to the law) think the case I made is one of the exceptions. These people are Nazis! :-) Dave Springer; “They” say that, huh? Do you have a link or is that just more of the crap you constantly make up out of thin air to support your stupid rants?>>> Obviously it was made up out of thin air due to my inferior intellect which produces a constant stream of stupidity. If only I was as smart as someone like you who got 100% on the math part of their SAT. If only I was as smart as someone like you who sat on a very important committee at a very large company evaluating ideas for patents. If only I was as smart as someone like you who can shoot a hole in something from 100 yards. If only I was as smart as you and could carry a gun and brag about how tough I am. You are my hero. I’m blowing you a kiss. Jesse says: November 19, 2011 at 9:52 am “One giant problem in our capitalist system of business is that investors won’t invest unless they get 20% or higher return on their money.” As Claude Harvey points out you are mixing apples and oranges! Given that this project is 80% guaranteed by the U.S. Government, the returns should be equal to Treasuries. The risk on the 80% is not very high? There are lots of people buying Treasuries, which are yielding 2% (10 yr)! 
As Claude mentions, once the government gets involved all market logic disappears! would it be snide for me to remark that the only thing “green” in “green energy” is the public dollar bills being abused ? Claude Harvey says: @November 19, 2011 at 10:35 am Claude, you are correct. I was talking about venture capital. I worked for an engineering and construction management company and we were happy with a 10% profit. We often did estimates for start-ups in ethanol and gasification and those start-ups were the ones looking for subsidies. Worked out well for ethanol but not so good for the gasification people. Dave Springer says: November 19, 2011 at 9:01 am Dave, I’d be overjoyed if people classed hydro with the renewables. That would be wonderful. But they don’t, hydro is not green enough for them or something. Duh. California has a goal of 33% renewables by 2020 … and if you think they count hydro as a renewable, think again, my short-sighted friend. Check out the California Independent Systems Operator web site, they handle almost all the power in California. You can explain to them how wrong they are for saying large hydro is not renewable, I didn’t have much luck convincing them. i suppose I should have mentioned why hydro is not counted as a renewable, to keep people like yourself from going off on a wild goose chase … hang on a minute, wait, I did mention that very fact, and you still didn’t get it. I said: I took the trouble to point out exactly why hydro is not counted among the renewables, it’s because of AGW activists like you. … and despite that, in your pathological urge to attack me, you didn’t even read what was written. Here’s a whole post on the subject of renewables, please try to catch up, you’re holding up the rest of the parade. Dave, your desire to find something, anything at all with which to attack me is leading you down strange paths of inanity. What drives me round the twist is how little thought you put into what you write. If you entered the conversation looking to make a contribution rather than looking to make an attack, you might even be able to convince folks that you should be listened to. As it stands, though, all you’ve convinced us of is that you are not paying attention and you’re not here to make a contribution, you’re just looking to bite someone. Unfortunately, to date all you’ve managed to bite is your own posterior … w. Jesse says: November 19, 2011 at 9:52 am Say what? Most corporations in the US make about a 10% profit on their sales. So your claim that they are “not happy” with a 10% return is, quite simply, wrong. w. PS—Investors, quite reasonably, want a higher return because of the risk involved in the investment. But it’s not “20% or higher”. The return required depends on the risk, so you can’t set a certain figure. Spector says: November 19, 2011 at 7:17 am. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NAH, They already sold it outright to China. That is why Hilary was over there…. (Just kidding …I hope) Dave Springer; The real goal is to have fewer humans in the world. These people are all Paul R. Erlich disciples>>> Do you have links to prove this? Or is it more of the stupid crap you make up? Dave Springer; Texas has over 25% of the US total installed wind turbines. Over 3 times as much as California. There’s nothing inherently wrong with wind power.>>> Your comment above was in response to a discussion of the reaction of businesses in regard to locating in jurisdictions with dramatically higher electricity rates. 
If rates are 50% higher in one jurisdiction than in another, businesses will act accordingly, and it makes no difference if their electricity comes from 100% windmills or 0%. As for the thought that there is nothing inherently wrong with wind power, the fact is that wind power is intermittent, and when supplied directly to the grid, imposes huge costs on other generation methods to deal with the fluctuating wind power. The loss in efficiency in the balance of the generation systems, plus the need to provision the rest of the generation systems with additional peak capacity that is not used when wind is being used, makes the costs of the entire system higher across the board. Maintenance of a large number of widely distributed small generation points is also much higher, and the extended grid costs to carry their output are also much higher, than the costs for centralized infrastructure with stable output. Your idea to buffer the wind production by, for example, pumping water into a reservoir is not new, and has been shown to have value in small local implementations. Scaling such an approach to serve thousands of windmills has considerable additional challenges. Dave Springer;.>>> I see. The breakthrough in manufacturing cost of PV is obviously in sight, but the similar breakthrough in electrical storage costs is impossible. Got it. Dave Springer; Never say never, Bill. >>> Ah, yes. Other people can’t generalize because they aren’t as smart as you, but it is OK for you. Dave Springer; Actually Mark it’s hard to think of anything more reliable than the sun. It’s been rising on time every single day for billions of years. You are confusing “reliable” with “on demand”. >>> The sun may be reliable, but the amount of sunshine available to a PV isn’t. Unless you live somewhere that the sun shines exactly the same every single day, one would think you would know that. Please do not confuse what you think shines out of your *ss with the actual sun. The logistics behind building a storage reservoir such as a water reservoir to produce hydro for a highly dispersed, highly variable energy source are far from trivial. Dave Springer; blah blah blah genetic engineering blah blah blah>>> Yeah, genetic engineers can whip up plants that produce oil and grow like mad in a toxic waste dump. Gotta link for that? Or just another of your brilliant rants? Did you catch my kiss? Gail Combs; NAH, They already sold it outright to China. That is why Hilary was over there…. (Just kidding …I hope)>>> She tried, but insisted they take Arnie along with it, and they said no way. I heard she also tried to sell them Guam, but they heard it was “tippy”. Then she tried to sell them Taiwan, but they said they already own that. I heard she was going to try and sell them Alberta next. Dave Springer says: November 19, 2011 at 10:35 am Texas has over 25% of the US total installed wind turbines. Over 3 times as much as California. There’s nothing inherently wrong with wind power. It works out fine in states with conservative governments. ERCOT allows 8% for wind reliability in Texas. So basically you have to have 92% backup. It’s all fine and dandy if the backup already exists. If the backup doesn’t exist then you have to build an additional facility for backup plus the windmills. T Boone Pickens, the guy who built most of Texas’s wind farms isn’t stupid. He is in the Natural Gas Business. Build a windfarm…need to build a Natural Gas ‘peaker’ for backup. 
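A minimal sketch of the backup arithmetic in the comment above (not part of the original thread; the 8% capacity-credit figure is the one quoted for ERCOT, and the nameplate number is purely illustrative), in Python:

nameplate_mw = 1000.0        # hypothetical wind fleet nameplate capacity
capacity_credit = 0.08       # fraction counted as firm at peak, per the comment above
firm_mw = nameplate_mw * capacity_credit
backup_mw = nameplate_mw - firm_mw   # dispatchable capacity needed to firm up the rest
print(f"firm: {firm_mw:.0f} MW, backup needed: {backup_mw:.0f} MW")
# prints "firm: 80 MW, backup needed: 920 MW" -- i.e. roughly 92% of nameplate must be backed up.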
So how much actual energy did the investment in Solyndra produce and would it have been more efficient to simply convert the cash to $1 bills and burn the currency to produce power? Mark says: November 19, 2011 at 8:41 am. _______________________________________ No it would be made into compost and used to produce food. Horse is the least “Valuable” of the manures but it can be used “Fresh” if you do not mind the weed seeds. G. Karst says: November 19, 2011 at 10:07 am I have the same opportunity here, and I haven’t taken it. Why? Because forcing PG&E to buy power from me at way over market value just drives the price up for everyone else. That kind of “I’ve got mine, who cares what it does to your costs” attitude doesn’t sit well with me. So to date I’ve passed up my chance to be a solar magnate at the people’s expense … doesn’t seem right. w. Anthony and Willis, Have you run the numbers to see at what price point it becomes cost effective to run your own portable 5 or 10 kw generator on gasoline or nat gas or diesel ? GW That is assuming the “conventional” load. I have an idea that could change things dramatically if it were adopted on a wide scale. Part of the problem is that wind power is fickle and erratic. You can get 100 megawatts of power one day and none the next or even 30 minutes later the wind might go dead calm. Now imagine I have a system on my property that works basically like an off-grid energy storage system. I can charge the batteries (lets say AGM deep cycle batteries) at night and not use any power during the day or charge them at a constant rate day and night regardless of my load. I basically convert my home from a demand load to a base load. Now lets say with the new smart meters the power company can adjust power rates in real time according to supply and demand. Lets say it is the middle of the day, the wind comes up, they have power to spare, they send a signal over the power grid that says “the price of power just went down” so my system responds by buying a little more of it. I slightly increase my charge rate to take advantage of the cheaper power. So the grid operator sees the electricity demanded respond to the supply. This allows me to buy power when it is cheap and store it for use when it is more expensive. So on a hot summer day power might be scarce, the utility broadcasts over the grid that the price of power just went way up, and my system responds by significantly backing off on the charge rate off the grid. Such a system would use plain old ordinary free market supply and demand principles to modulate the demand in accordance with the supply. Currently, when the wind comes up, an operator might have to scramble to find a buyer for the power, maybe they sell it to the state next door for a few hours. This way they would see their own customer demand increase in response to a price reduction and could actually take advantage of that surplus. And it works just as well in the other extreme when they might see loads reduce when there is no wind but demand is high for climate control. Additionally, if I want to install my own turbine or PV panels, all the infrastructure is there. I simply need to hook them up to the charge controller. We already have all of the technology required to make this work. No real major R&D is required. Dave Springer says: November 19, 2011 at 10:15 am Sign me up if it can work. I suspect that synthetic photosynthesis (photosynthetisis?) may well be one of the future energy sources. 
The over-riding problem with sunlight is that it is so dispersed, both in time and in space, and cities need gigawatts of power 24/7. I lived quite happily off the grid on solar power for some years, so don’t get me wrong, solar has its place. But then … I didn’t even use a kilowatt, and certainly not 24/7, while the world uses terawatts and definitely 24/7. My point is simple. The only current options are fossil and nuclear. Period. Might change in the future, assuredly will change if we wait long enough, but for a while those are our choices. w. Dave Springer says: November 19, 2011 at 9:11 am ………. ________________________________________ On my property I have a hundred foot drop in elevation and a high water table (clay). I am sitting on the top of a hill with decent wind so I have considered this idea for my own energy requirements. The biggest hurdle is the blasted planning board and EPA and not the engineering. If I recall correctly, the generation of power from water in my area is forbidden to individuals on top of everything else. After my last couple go rounds with the cement head in planning, I am not even going to try to get her to understand an innovative concept. Arguing with an inspector is like mud wrestling with a pig…… GW says: November 19, 2011 at 1:43 pm Because I live in a small house and heat and cook mostly with gas, my electric bill is small. Around here, cost per kWh depends on how much you use. As a result, I get the cheapest level of power available, about 12.2¢ per kWh. Note that in Idaho and Utah power costs about 6¢ per kWh. From memory, fuel, maintenance, and replacement/lease costs are something like 20¢ per kWh for a large (10 Mw) diesel plant. Call it 30¢-40¢ per kWh for a small (5-10 kW) diesel (or petrol) power plant. That’s all from memory, any corrections welcome. w. Nor does it me, but looking down the road, I see outrageous power bills, and perhaps a unstable grid. Receiving compensation to offset such outrage, makes a soothing salve to treat a slightly bruised morality. Obviously, I am not close to a decision, which is why I wanted to throw it out there for discussion. GK G. Karst says: November 19, 2011 at 10:07 am Here is everyone’s chance to have a direct say: __________________________________________________ I would be VERY VERY careful. Land owners are usually the Rubes that eventually get the shaft. If the bottom falls out in say ten years, YOU get stuck cleaning up the mess. Even if you have a “Contract” it does not mean diddley if the company goes belly up. Talk to a lawyer and make sure there is an escrow account (up dated annually) for the removal costs and environmental impact costs. We have good water and had the chicken factory people approach us. After investigating we found it makes lots of money for Tyson, Pdue et al but is slavery for the farmer. The catch is having to mortgage your land to pay for the buildings only to find out that every time the buildings are close to being paid off the regulations changed and you need new buildings or renovations. Turns out the money maker is the MORTGAGE not the chicken produced! Your neighbors with those windmills are going to be stuck in about ten years when the maintenance costs go up and the government money runs out. crosspatch says: November 19, 2011 at 1:46 pm If such a storage were available that was 1) cheap, 2) durable, and 3) did I say cheap?, we wouldn’t be having this discussion. Lead/acid storage batteries that you suggest are none of the above. 
In addition, they are toxic and caustic. Here’s the real problem, though. A deep-cycle truck battery is typically around 80 ampere-hours nominal capacity. Generally, you don’t want to cycle them more than about halfway, so call it 40 amp-hrs usable. That’s about a half of a kilowatt-hour of stored, usable power per battery. We don’t drink much power here at chez Willis. We used 280 kilowatt-hours of electricity last month. That’s about 10 kWh per day. So I would need no less than twenty deep-cycle truck batteries to power me for one day … oh, and I forgot two things. The inverter (from 12 volts to 120 volts) is not 100% efficient, and neither are the batteries. So we’ll likely need 25 car batteries for my household. But I draw lightly on the electric wires, the average household in the US likely uses twice the juice I do. Call it four people per household, 300 million folks in the US, maybe 75 million households, 50 batteries per household, that’s almost four billion batteries needed, cut it in half for a safety factor … We’d need two billion batteries just to start with … and of course, they’d get overcycled and would be run dry and need to be transported and replaced and recycled and … Your idea works in theory, but doesn’t account for the size/scale of the problem. w. Dave Springer says: November 19, 2011 at 11:07 am … Interestingly enough the dimbulb who wrote the non-hydro article above classified “landfill gas” as a renewable. Like nature is what generates the trash that goes into them. Duh. Where do they find these people? _______________________________ Affirmative Action Dave Springer says: November 19, 2011 at 9:01 am . Dave Springer, your irrational relationship with Willis causes you to lose credibility with many excellent to decent posts. Hydro was not counted because the eco fanatics do not count Hydro. This was all made clear but your emotional adversity to Willis caused you to miss it. Yes, I too, have run off chicken scheme people, pig scheme people, spring water scheme people, paint ball scheme people, rock concert scheme people, etc. There is no end to it. Thanks for the input and advice. GK Easily the most energy efficient way of generating electricity from fossil fuels is home generation of electricity from natural gas, in places where the waste heat is used to heat the home. Combine this with an electric car that charges overnight when home heat demand is greatest, and you have an off grid solution that makes sense. And were there a real time pricing system for feeding into the grid then you could take advantage of high prices at peak demand times. beng says: November 19, 2011 at 7:16 am. ============ Hydro-Quebec generates power near James Bay and in Western Labrador and delivers power to Quebec, the maritimes, Eastern Ontario, New England and New York. It uses a mix of 735 kV AC and 450 kV DC. Transmission distances are around 400-800 miles. Total power delivered to consumers seems to be about 25,000 MW. I thought I said AGM batteries. You can shoot at them, break them open with a hammer, overcharge them, charge them wrong, whatever. They don’t explode and they don’t leak. They were originally designed for fighter aircraft and as far as I know are the only storage battery you can ship that isn’t considered HAZMAT. You can turn them sideways, upside down, overheat them, whatever. I would sleep with them under my bed and not worry. And with the new smart meters, people ARE going to be charged different rates for power according to time of use. 
That is the purpose for them and why PG&E here in California has been installing them. In the middle of the day in summer you might pay twice the rate for electricity as you will pay in the evening. The old mechanical meters had no way of reporting power usage per minute. The new “Smart Meters” do and they all have an IPv6 network connection back to the power utility. The NYT numbers in graph 3 consider tax deductions for depreciations as profit. If you consider this tax write-off as a gift or grant from the government, the government’s commitment goes from $1431M to $1816M. The numbers in Graph 3 are obviously one-time, annual, five-year and whatever else. Classic mixed numbers. The $334M to the company from various government action of $1431M or $1816M or more (property tax?) makes my government pension appear cost-effective. I’ll bet that these investors (millionaires) are the same ones that are requesting higher taxes on the rich to keep their investments alive. Do we know who they are? G. Karst says: November 19, 2011 at 2:18 pm I understand. And I would not say that my mind was made up either. Our rates have been up for a while, and I’ve been paying them. So if they want to make the grid so expensive, I can certainly see somebody saying hey, I’d be crazy not to. Why should I sign up for ever increasing prices when they will pay me not to do so? Good questions all … I hate to admit it, but it wouldn’t break my heart to have PG&E send me a check every month instead of the current system. w. RE: Willis Eschenbach: (November 19, 2011 at 1:49 pm) “My point is simple. The only current options are fossil and nuclear. Period. Might change in the future, assuredly will change if we wait long enough, but for a while those are our choices. w.” I have no basic disagreement with this, as long as ‘fossil’ includes any hypothetical abiotic carbon stores, and ‘nuclear’ includes thorium nuclear, which to me, appears to be the only plausible candidate for an indefinitely sustainable energy source. Here is a reference to the Wikipedia article (of unknown objectivity) describing a private corporation (Delaware, USA) founded to develop the thorium energy resource. I understand that China has a similar project underway based on the test results from our successful demonstration model built at the Oak Ridge TN facility forty years ago. Flibe Energy From Wikipedia, the free encyclopedia “Flibe Energy is a company that intends to design, construct and operate small modular reactors based on liquid fluoride thorium reactor (LFTR) technology.” That’s Kirk Sorensen’s company; he’s probably the key person in the revival or rediscovery of thorium nuclear energy. FLiBe is a molten salt, a mixture of Lithium Fluoride and Beryllium Fluoride. Definite potential but I fear he’s too optimistic about what it’ll take and how long for a reliable, commercial reactor to be developed. Spector says: November 19, 2011 at 4:55 pm Agreed as to both sides of the equation. w. Willis: … forcing PG&E to buy power from me at way over market value just drives the price up for everyone else. Are the utilities really forced to pay above-market prices? If so, Azusa Light and Water hasn’t got the word; what they pay for my surplus power production is significantly less than their residential rates. I think we agree that giving these subsidies to prosperous individuals at the expense of everyone else is fundamentally unjust. I have pondered the ethics of taking advantage of the offer, but I can’t say it keeps me awake at night. 
After all, I am putting up a third of the capital to put up an installation which remains experimental and may go completely bust. DMarshall; I would love to see a complete cost-benefit analysis, including all “externalities” of fossil fuels, renewables and nuclear. Do any such studies exist?>>> Of course they do, but how are you going to compare them? Every time a jurisdiction changes environmental regulations, the cost analysis changes for all of them. You can’t even compare between jurisdictions unless their laws are the same. Even then you can’t compare because a jurisdiction with a large amount of local coal reserves is going to have a completely different economic analysis than one in which there is no coal for large distances. The ultimate answer however is to look at the market and see what companies are willing to invest in provided their only incentive is to sell the electricity they generate for a profit. I’d guess coal would come out number one. As for “renewables”, I challenge you to find a single instance of a private company investing in windmills or solar farms unless they have guaranteed prices and/or massive subsidies from government to do so. .” Not to worry ….. when kindly suggestions don’t work a gun to the head will suffice. @davidmhoffer The ultimate answer is a completely level playing field – a difficult thing in this complex global village that we’ve become. But apart from its abundance, coal dominates only because the burdens of mining, processing and combusting can be shifted elsewhere. Bi-directional electric utility meters that enable the solar home generators to “unwind” the charges for incoming utility power during the night with excess, self-generated solar power generated during the day may seem fair to all concerned, but they are far from it. The solar owner is not only “unwinding” the energy component of his bill, but he is unwinding the fixed costs and utility overhead it took to get that power to and from him and to provide backup generating capacity. Those fixed costs typically make up 2/3 of the residential utility bill and are pro-rated among all paying customers (note that the “energy component”, the current wholesale price of electric power at the U.S. trading hubs, is in the range of 4 cents per kWh). Since those fixed costs are not reduced by the solar user’s arrangement, his avoided share of those costs must be piled onto the bills of his non-solar neighbors. Good deal for him. Bad deal for everyone else. DMarshall says: November 19, 2011 at 6:21 pm @davidmhoffer The ultimate answer is a completely level playing field – a difficult thing in this complex global village that we’ve become. But apart from its abundance, coal dominates only because the burdens of mining, processing and combusting can be shifted elsewhere.>>> The ultimate answer CHANGES over time. Even tax laws will alter the equation. As for coal dominating because mining, processing and combusting can be shifted elsewhere, I’m not sure I understand your reasoning. Coal is going to be most cost effective when it is close to the production point. @davidmhoffer. Up to 25% of the cost of coal is transportation. Coal is the cheapest fossil fuel by BTU in North America. 90% of coal is burned to generate electricity. @DMarshall. The real importance of coal is the fact that it is reliable. Coal works. It is abundant. Unlike solar and wind. And the long term contracts ensure a predictable price and guaranteed supply. People under-estimate the importance of keeping warm. Coal works. 
Come to northern Canada in the winter time and I will explain the importance of staying warm. @Steve Isn’t coal the most polluting by BTU, as well? Solar and wind are abundant too, but intermittent. At the current percentage they hold in market share, that’s easily manageable. I know what cold is like – I worked outdoors for about 8 years, right through the winter.. So you heat with coal in northern Canada? Most of the cold places I’ve lived used wood, heating oil, or natgas for heat, not electricity.. Good stuff Willis. Any technology that requires life-of-project subsidies is fundamentally uneconomic, and I would argue, fundamentally anti-environmental. Here is a paper we published nine years ago when Canada was about to adopt the nonsensical Kyoto Protocol.. Please note our final point, on energy: “The ultimate agenda of pro-Kyoto advocates is to eliminate fossil fuels, but this would result in a catastrophic shortfall in global energy supply – the wasteful, inefficient energy solutions proposed by Kyoto advocates simply cannot replace fossil fuels.” Reviewing all our points, I would suggest that our predictive track record is infinitely better than that of the IPCC and the global warming movement. But then, every dire prediction the “global warmists” have made has failed to materialize., So where do we go from here? I wrote in 2003 that Earth would be entering a natural global cooling cycle by about 2020-2030. Global warming has now been absent for about a decade, so we’ll see. Based on more recent data, global cooling could commence sooner. Maybe it already has. In 2008, I discovered that the (annualized) rate of change of CO2 with time, or dCO2/dt, occurs at about the same time as changes in temperature, and CO2 inflections LAG temperature inflections by about 9 months. I recorded this observation at You may recall Willis that my finding was first condemned as “false correlation”, but was later accepted as valid. Then, the observed phenomenon was dismissed as a “feedback”, with no evidence provided to support that claim – essentially, a religious argument that said “We KNOW CO2 causes global warming, so it MUST BE a feedback”. I think that within a decade or two we will agree that changes in CO2 are primarily a result of global temperature change, not a cause thereof. There may or may not be a significant humanmade component to the current increase in atmospheric CO2 – it could be largely natural, or partly humanmade – we don’t’ even know that with any certainty. Best regards to all. and Happy Thanksgiving to all our American friends. – Allan ******************************************************************************************* Kyoto has many fatal flaws, any one of which should cause this treaty to be scrapped. Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist. Kyoto focuses primarily on reducing CO2, a relatively harmless gas, and does nothing to control real air pollution like NOx, SO2, and particulates, or serious pollutants in water and soil.. Kyoto will destroy hundreds of thousands of jobs and damage the Canadian economy – the U.S., Canada’s biggest trading partner, will not ratify Kyoto, and developing countries are exempt. Kyoto will actually hurt the global environment – it will cause energy-intensive industries to move to exempted developing countries that do not control even the worst forms of pollution. 
Kyoto’s CO2 credit trading scheme punishes the most energy efficient countries and rewards the most wasteful. Due to the strange rules of Kyoto, Canada will pay the former Soviet Union billions of dollars per year for CO2 credits. Kyoto will be ineffective – even assuming the overstated pro-Kyoto science is correct, Kyoto will reduce projected warming insignificantly, and it would take as many as 40 such treaties to stop alleged global warming. The ultimate agenda of pro-Kyoto advocates is to eliminate fossil fuels, but this would result in a catastrophic shortfall in global energy supply – the wasteful, inefficient energy solutions proposed by Kyoto advocates simply cannot replace fossil fuels. ******************************************************************************************* Richard Holle says: November 19, 2011 at 8:19 pm. ************************************** Gentlemen, Please note that in North America, natural gas now costs about 1/3 to 1/4 as much as crude oil on an energy-equivalent basis. So simplistically, if your above numbers are correct, and capital costs are not too great, generating your own electricity, heat and hot water by burning natural gas could be highly economic. A fundamental question is how long will this energy-cost imbalance between oil and natural gas exist? ************************************** Allan MacRae says: November 19, 2011 at 11:43 pm Allan, that’s quite interesting. The same BP Statistical Review cited below Figure 2 says that 1000 cubic feet of natural gas is the energy equivalent of 0.18 barrels of oil. The EIA puts the residential price of natural gas at $16 in August. Combining these two gives a natural gas price of $16 / 0.18 = $88 per barrel of oil equivalent. Since oil right now is about $100 per barrel, these are not too different. However, the EIA (op. cit.) also says that the industrial price for natural gas is only about $5.10 per 1000 cubic feet, or $5.10 / 0.18 = $28 for the amount of energy in a barrel of oil. This puts it right in the 1/3 to 1/4 range you specified above. Curious. In any case, if I could get natural gas at industrial prices it would be worth it … but likely, being a residence, I’ll be charged residential prices. w. PS—Your point about cogeneration, particularly in colder climates, is an area we haven’t touched on but which is important. >>Alan says: November 18, 2011 at 4:13 pm >>Isn’t hydro renewable?? Greenies only seem interested in >>wind for some reason. There are vast swathes of the world, like the UK, where hydro simply does not work. Either the hills are too low, or there are too many people living in the valleys. Willis, I think your proposed treatment is like this. Take the total invested until the project starts to return cash. That is the investment. Then take the NPV at some discount rate, which we don’t know. The NPV is considered to be the return on the investment. But it isn’t. The NPV is unrelated to the investment. The NPV is the result of all the cash flows in and out for whatever reason. So marketing expenses will be in there – that is cash out. Depreciation will not be in there, it’s not cash out, what will be in there will be forecast cash out to buy or replace capital assets. Purchases of parts, supplies, all that stuff will be in there. In any capital project, cash is going in and out all the time, and the NPV is the result of discounting all those cash flows. Capital and investment is only part of it. 
You might be able to say in pure cash terms, the capital requirement to get started to steady state is X, then when you are running it your profit is y per year, so (not accounting for interest) you will get back a return of Z% a year on your capital once the thing is running. But its not a very useful way to think about it. The best way to think about it is the actual NPV that is being generated. Read Brealey and Myers. Its more complicated in practice and simpler in theory than people usually think. What you really should not do is take the NPV and divide it into some investment number, that doesn’t tell us much of anything. I agree with most of the piece by the way – this is just a technical point. The fact is that the subsidies are nonsense. No question about that. Gentlemen: One standard, 42-gallon barrel of crude oil contains 5,848,000 btu. Natural gas is selling this morning on Nymex for $3.32/MM btu. Therefore, natural gas is selling at wholesale in the U.S. for the equivalent of $19.48 per barrel oil. The guaranteed heat rate for a GE, combined-cycle-gas-turbine (CCGT) plant is 5,690 btu/Kwh. Therefore, the fuel cost for CCGT at the quoted Nymex price is 1.9-cents per Kwh. The capital cost of CCGT is $550 per installed Kw. The capacity factor of CCGT is 92%. The above information should be the starting point for evaluating ANY competing form of electric power generation, because CCGT is, hands down, the cheapest commercial form of new electric power generation in the U.S. today (not appropriate for in-home installation). The astronomical capital cost of solar when capacity factor is included leaves the technology beyond rational consideration. Claude Harvey, It’s nice to see a Global Enterprise doing something right :-) Mike Hebb says: November 18, 2011 at 4:00 pm think that my Housing Association would have something to say if I kept a Yak and burnt its dung :-) But it would also save on the grass cutting. Re:Robert of Ottawa says: November 20, 2011 at 5:33 am “Claude Harvey, It’s nice to see a Global Enterprise doing something right :-)” Yes it is. I have no vested interest in “Global Enterprise”. I just quoted their machine figures because I happened to have them at hand. In the case of CCGT, the heart of the thing is a booming-big, aero-derivitive gas turbine and GE is huge in that business. Willis: So to date I’ve passed up my chance to be a solar magnate at the people’s expense … doesn’t seem right. Hmm. Did I read above that you use REA power? Us city slickers might have an opinion about that…. : > ) The complete failure of windmills: I think everyone will agree with your logic. However, the question can boil down to: What side of the equation does one want to be – “the good deal for him” or the “bad deal for everyone else”. I have never enjoyed being “everybody else”. GK A bit off topic here, but I hope Dave Springer is still around?. I was struck by the cost of replacing all the energy generating equipment in the world by installations like California Valley Solar Farm, which turns out to be $101.9T, $19.4T just for the US. That’s on a US GDP of about $15T, and a world GDP of about $75T. That number assumes an equivalence of solar and conventional power, which we know is not true. After paying to make it equivalent, the numbers are going to be a lot higher. The last time that there was that kind of expenditure, it was WWII, and the US spent $228B on a GDP that was between $100B and $200B. Do you suppose there is the same sort of commitment now? 
Among true believers, undoubtedly, but among the rest of us? Polls certainly don’t say so. There is no plan to overhaul the way we get energy, and for good reason. If people at large saw and understood those numbers, they would rebel in no uncertain terms. But without a plan, installations like that are going to end up as expensive white elephants in the not very distant future. It goes without saying that, as much as politics will allow, the taxpayer is going to end up holding the bag. How many of those do we really want? Claude Harvey says: November 20, 2011 at 4:49 am That’s the bottom line that folks just don’t seem to get. w. juanslayton says: November 20, 2011 at 6:52 am I don’t think the local area was originally wired by the REA, but I suppose it’s possible. I know that wiring a place now out here is hideously expensive if you have to put in the poles and all. w. Hi Willis, Hope you are well. I general agree with Claude Harvey’s comment at November 20, 2011 at 4:49 am. Re your comment: “The same BP Statistical Review cited below Figure 2 says that 1000 cubic feet of natural gas is the energy equivalent of 0.18 barrels of oil.” That is correct – in the North American energy industry, which commonly uses some English units, the oil:gas energy ratio is often referred to as ~6:1. Conversion: 1000 cubic feet of pipeline-quality natural gas contains about 1 MMBtu (~1.05 GJ) of energy. Natural gas is commonly selling on the Nymex at less than $4 per MMBtu (or per 1000Ft3 or per GJ) . Oil is selling on the Nymex at almost $100 per barrel, so the wholesale price ratio is roughly 25:1 when energy equivalence is 6:1. Hence wholesale gas is now priced at less than 1/4 the energy-equivalent price of wholesale oil, on the Nymex. This is true in North America but generally not elsewhere in the world, where the shale gas revolution has not yet taken hold and gas is often priced at a factor much closer to energy-equivalence with oil. I don’t understand the EIA residential gas price of $16/Mcf, which is almost 4 times the reported US wholesale price. It may be that your local gas distribution utilities are doing too well. In Calgary, our fixed-rate price for residential gas is $6.59/GJ and that includes a very healthy profit for the local utility. Our fixed rate for residential electricity is 8 cents/kWh (and that includes a huge subsidy for mandatory-included, worthless wind power) . Both rates are locked-in for 5 years. As long as our electric utilities insist on forcing us “non-believers” to subsidize worthless wind and solar power, it makes increasing economic sense for North American residential and industrial consumers to seek off-grid alternatives, powered by natural gas. Best regards, Allan Does “biomass” on the first chart include animal dung? Looks like the taxpayers are kicking-in about $1.4B for this one project. Why don’t we just kill it and give Secretary Chu the $334M he wants, let him cut checks to his buddies and we save over $1B. If we can get them to come up with 1000 projects like this and kill them all we could save over $1,000,000,000,000 (one trillion dollars.) Now that’s thinking like a true green politician. I should be in charge of the “Super Committee” in Washington. Willis, I was unable to get to the National Geographic data page (for your Figure 1). Guide me there please. Thank you! tokyoboy says: November 20, 2011 at 6:55 pm Not clear what the problem is. It works for me, I click on the link and wait. It’s an “interactive graphic”, takes a minute to load. 
All the best, w. RE Claude Harvey: (November 19, 2011 at 11:28 am) Ref: Green Support of Impractical Energy Projects “However, at the precise moment we developed the technology to the point where profit-making geothermal plants were possible they turned on us like mad dogs.” I am guessing that was when they saw you as threatening to clutter a beautiful natural landscape with ‘ugly’ concrete structures. I would expect the same kind of resistance to actually paving the Mojave Desert with solar cells. I’m no financial expert, so you’ll have to forgive me for asking… Isn’t this just an elaborate, government run, money laundering scheme to transfer tax/rate payer money to a group of investors without people knowing what’s going on? John T says: November 21, 2011 at 9:33 am My vote is absolutely not. It certainly turned out that way, but I put that down to the usual greed that comes into play whenever the gov’t hands out money. I don’t know about your government, but the tipoff that it’s not a US project is that I doubt greatly the US government could actually put together an “elaborate, government run, money laundering scheme” that actually works. It’s disqualified from being a government scheme by its very success. w. RE:John T: (November 21, 2011 at 9:33 am) “Isn’t this just an elaborate, government run, money laundering scheme to transfer tax/rate payer money to a group of investors without people knowing what’s going on?” It may have that effect, but I suspect this happens when those who believe that preservation of nature is a primary duty of all mankind wake up from a ‘Noble Dream’ to see that the real thing will have detrimental environmental consequences if it is exploited to any practical extent. Re:John T says: November 21, 2011 at 9:33 am “I’m no financial expert, so you’ll have to forgive me for asking… Isn’t this just an elaborate, government run, money laundering scheme to transfer tax/rate payer money to a group of investors without people knowing what’s going on?” Actually, you are close to the truth. Congress has found that tax credits and souped up depreciation schedules are a nifty, roundabout way to camouflage a subsidy. While the public might object to the government paying out taxpayer money to build certain “do-good” projects (let’s amuse ourselves and say, for example, “a brothel for handicapped veterans”), the public does not generally recognize that allowing investors in such projects to avoid paying taxes otherwise due on profits accrued from unrelated sources amounts to the same thing. Instead of taxes flowing into the public till and then back out to the “do-good” project in a manner transparent to public scrutiny, the diverted tax money never arrives at the federal coffers the first place and it takes a battery of “financial whiz-bangs” to even figure out how much taxpayer subsidy was actually paid out. You might be interested in the fact that most such investor opportunities are offered through “unregistered security offerings” where it is illegal under SEC regulations for any but “sophisticated investors” to participate. Translation: “Rich folks”. You must meet certain “minimum net worth excluding the value of home and personal automobiles” requirements to become eligible to participate. The hypocrisy of certain politicians you may hear bellowing “tax the rich” while legislating loopholes for the rich such as these into alternate energy programs is almost too much to bear. 
Fortunately for most of the voting public, they do not understand what goes on here and are spared that blistering headache I get every time I think on the subject. I should follow up my last post with a “tip-of-the-hat” to Willis’ reasoning that such a successful “laundering scheme” as solar subsidies approach would not be characteristic of government where nothing seems to succeed. The answer to that puzzling outcome is that most government legislators and bureaucrats do not remotely understand how what they have wrought actually works. They simply understand the outcome. That’s because the bright boys and girls in Wall Street investment banking dreamed up the scheme, worked out the details, handed the required legislation to Congress on a platter and then managed its execution. I think it should be patently obvious that the currently politically correct wind and solar power methods would never be accepted by the Green Earth people as a replacement for ‘Carbon Power’ because the huge installations required would necessarily entail a massive destruction of a precious natural environment. That presumes this could be made practical. “Note that the traditional use of firewood for cooking is not included. ” Say what?? With 3 billion people using biofuels for cooking daily, why on earth would it not be included. This sounds like a discussion of bourgeois energy, not energy consumed by the masses. For example, ALL biomass used as fertilizer is part of the energy inputs for agriculture, and all that energy is solar in origin, ultimately. This business of who uses energy and how much is so incomplete as to render the first and second pie charts moot. For those touting thorium reactors, check into how long the circulation and containment pipes etc. for the ‘molten salts’ last, and what it takes to replace them. Nasty, aggressive stuff. Henry@BrianH BTW: How did you figure out or knew (on one of your previous comments) that a slightly lower pH is better for coral?
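As a footnote to the battery-sizing comment above (the 280 kWh/month household example), here is the same arithmetic as a small Python sketch. The usage, battery capacity and 50% depth-of-discharge figures are the ones quoted in that comment; the 85% round-trip efficiency is an assumed placeholder for the inverter and battery losses mentioned there, not a number from the thread:

daily_kwh = 10.0                 # "about 10 kWh per day" (280 kWh/month), from the comment
battery_ah = 80.0                # nominal capacity of one deep-cycle truck battery
usable_fraction = 0.5            # only cycle to about half depth of discharge
volts = 12.0
usable_kwh = battery_ah * usable_fraction * volts / 1000.0   # ~0.48 kWh usable per battery
round_trip_efficiency = 0.85     # assumed inverter + battery losses
batteries_for_one_day = daily_kwh / (usable_kwh * round_trip_efficiency)
print(round(batteries_for_one_day))   # ~25, in line with the estimate in the comment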
http://wattsupwiththat.com/2011/11/18/make-29-on-your-money-guaranteed/
CC-MAIN-2015-11
refinedweb
22,725
62.98
Log message: ruby-twitter-text: update to 2.1.0.
pkgsrc changes:
- DEPENDS on ruby-idn gem
Upstream changes: … CHANGELOG.md
## [2.1] - 2017-12-20
### Added
- This CHANGELOG.md file
### Changed
- Top-level namespace changed from `Twitter` to `Twitter::TwitterText`. This resolves a namespace collision with the popular [twitter gem](). This is considered a breaking change, so the version has been bumped to 2.1. This fixes issue [#221](), "NoMethodError Exception: undefined method `[]' for nil:NilClass when using gem in rails app"
## [2.0.2] - 2017-12-18
### Changed
- Resolved issue [#211](), "gem breaks, asset file is a dangling symlink"
- config files, tld_lib.yml files now copied into the right place
- Rakefile now includes `prebuild`, `clean` tasks
(no changelog for 2.0; 2.0.1 changes are mentioned in the 2.0.2 entry)
Log message: Update ruby-twitter-text to 1.14.7.
No upstream changelog. (seems to be TLD data updates only)
Log message: Update ruby-twitter-text to 1.14.5.
No upstream changelog.
Log message: Update ruby-twitter-text to 1.14.0.
Update TLDs and several improvements. Please refer to <> for details.
Log message: Update ruby-twitter-text to 1.13.4.
* Use RegEx literal instead of String literal
* Use regex literals instead of string literals for char class ranges that might get minimized and decomposed
* added FULLWIDTH TILDE U+FF5E as a valid hashtag special character
* added WAVE DASH U+301C as a valid hashtag special character
* Ignore Emojified # or keycap # when scanning for hashtags
* Support Cyrillic characters in URLs path section
* Version in bower file is deprecated, rely solely on git tag
* also add a bower badge and removed old repo list
* Update bower.json
* update tlds and forward exit code from rake tests
* add desc, license and fix source_files for podfile
Log message: Import ruby-twitter-text-1.13.0 as net/ruby-twitter-text.
Twitter-text gem provides text processing routines for Twitter Tweets. The major reason for this is to unify the various auto-linking and extraction of usernames, lists, hashtags and URLs.
http://pkgsrc.se/net/ruby-twitter-text
CC-MAIN-2018-05
refinedweb
340
58.79
- By JohnOne
I fancy having a go at python. Looking for advice for what environment I need, good web resources etc... Ultimate goal is to create a kodi video addon.
- By leegold
Hi, I run the file:
#include <Constants.au3>
RunWait('python f:\walk2.py > c:\zz.txt')
But nothing happens, no content inside zz.txt. It works OK from the XP command line. Wondered what I'm doing wrong(?) I eventually want to send a .py script values/parameters and get output/"return". Thank You.
- By Decipher
_ArraySlice() is similar to list[n:n] in Python. I was converting a python script to autoit and was bored afterwards so I decided to create this UDF.
#AutoIt3Wrapper_Au3Check_Parameters=-q -d -w 1 -w 2 -w 3 -w- 4 -w 5 -w 6 -w- 7
; #FUNCTION# ====================================================================================================================
; Name...........: _ArraySlice
; Description ...: Returns the specified elements as a zero based array.
; Syntax.........: _ArraySlice(Const ByRef $avArray[, $iStart = 0[, $iEnd = 0[, $iStep = 1]]])
; Parameters ....: $avArray - Array to Slice
;                  $iStart  - [optional] Index of array to start slicing
;                  $iEnd    - [optional] Index of array to stop slicing
;                  $iStep   - [optional] Increment can be negative
; Return values .: Success - Array containing the specified portion or slices of the original.
;                  Failure - "", sets @error:
;                  |1 - $avArray is not an array
;                  |2 - $iStart is greater than $iEnd when increment is positive
;                  |3 - $avArray is not an 1 dimensional array
;                  |4 - $iStep is greater than the array
; Author ........: Decipher
; Modified.......:
; Remarks .......:
; Related .......: StringSplit, _ArrayToClip, _ArrayToString
; Link ..........:
; Example .......: Yes
; ===============================================================================================================================
#include <Array.au3> ; Needed for _ArrayDisplay only.

Example()

Func Example()
    Local $MyArray[10]
    $MyArray[0] = 9
    $MyArray[1] = "One"
    $MyArray[2] = "Two"
    $MyArray[3] = "Three"
    $MyArray[4] = "Four"
    $MyArray[5] = "Five"
    $MyArray[6] = "Six"
    $MyArray[7] = "Seven"
    $MyArray[8] = "Eight"
    $MyArray[9] = "Nine"
    Local $MyNewArray = _ArraySlice($MyArray, 9, 0, -2)
    _ArrayDisplay($MyNewArray)
    $MyNewArray = _ArraySlice($MyArray, 1)
    _ArrayDisplay($MyNewArray)
    $MyNewArray = _ArraySlice($MyArray, 1, 5)
    _ArrayDisplay($MyNewArray)
    $MyNewArray = _ArraySlice($MyArray, 5)
    _ArrayDisplay($MyNewArray)
    $MyNewArray = _ArraySlice($MyArray, 1, 3, 1)
    _ArrayDisplay($MyNewArray)
EndFunc   ;==>Example

Func _ArraySlice(Const ByRef $avArray, $iStart = 0, $iEnd = 0, $iStep = 1)
    If Not IsArray($avArray) Then Return SetError(1, 0, 0)
    If UBound($avArray, 0) <> 1 Then Return SetError(3, 0, "")
    Local $iNew = 0, $iUBound = UBound($avArray) - 1
    ; Bounds checking
    If $iStep > $iUBound Then Return SetError(4, 0, "")
    If $iEnd < 0 Or $iEnd > $iUBound Or $iEnd <= 0 And $iStep > 0 Then $iEnd = $iUBound
    If $iStart < 0 Then $iStart = 0
    If $iStart > $iEnd And $iStep >= 1 Then Return SetError(2, 0, "")
    Local $aNewArray[$iUBound]
    For $i = $iStart To $iEnd Step $iStep ; Create a new zero based array
        $aNewArray[$iNew] = $avArray[$i]
        $iNew += 1
    Next
    ReDim $aNewArray[$iNew]
    Return $aNewArray
EndFunc   ;==>_ArraySlice
_ArraySlice.au3
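For readers coming from Python, the calls in the Example() function above map onto built-in list slicing. A rough comparison (not part of the original forum post; note that _ArraySlice() treats $iEnd as inclusive, while Python's stop index is exclusive):

# The same slices as the AutoIt examples above, using Python's built-in slicing.
my_array = [9, "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine"]
print(my_array[9:0:-2])   # like _ArraySlice($MyArray, 9, 0, -2) -> indices 9, 7, 5, 3, 1
print(my_array[1:])       # like _ArraySlice($MyArray, 1)        -> indices 1..9
print(my_array[1:6])      # like _ArraySlice($MyArray, 1, 5)     -> indices 1..5 (stop = 5 + 1)
print(my_array[5:])       # like _ArraySlice($MyArray, 5)        -> indices 5..9
print(my_array[1:4])      # like _ArraySlice($MyArray, 1, 3, 1)  -> indices 1..3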
https://www.autoitscript.com/forum/topic/185270-is-possible-read-output-python-script/
CC-MAIN-2017-13
refinedweb
436
53.61
class ReadArray<> Offload KB - offload-library
Old Content Alert
Please note that this is an old document archive and the content will most likely be out-dated or superseded by various other products and is purely here for historical purposes.
Include: <liboffload>
The ReadArray class is defined within Offload contexts. It will DMA the data into local store on construction and will not DMA it back to main memory on destruction.
Usage:
#include <liboffload>
#define SIZE 128

class ParamsBlock
{
    int param1, param2, param3, param4;
};

ParamsBlock gParams[SIZE] __ALIGN16;

int DoSomeWork()
{
    int ret = 0;
    liboffload::data::ReadArray<ParamsBlock, SIZE> localParams(&gParams[0]);
    for(int i = 0; i < SIZE; i++)
    {
        ret += localParams[i].param1;
    }
    return ret;
}
Limitations:
The __outer PPU pointer passed to the constructor must be aligned to 16 bytes, or naturally aligned if size is < 16.
Reference types are not supported.
The size of the local array must be a compile-time constant.
https://www.codeplay.com/products/offload/kb/class-readarray.html
CC-MAIN-2021-04
refinedweb
157
50.46
NAME
io_destroy - destroy an asynchronous I/O context
SYNOPSIS
#include <linux/aio_abi.h> /* Defines needed types */
int io_destroy(aio_context_t ctx_id);
Note: There is no glibc wrapper for this system call; see NOTES.
DESCRIPTION
The io_destroy() system call will attempt to cancel all outstanding asynchronous I/O operations against ctx_id, will block on the completion of all operations that could not be canceled, and will destroy the ctx_id.
RETURN VALUE
On success, io_destroy() returns 0. For the failure return, see NOTES.
ERRORS
- EFAULT - The context pointed to is invalid.
- EINVAL - The AIO context specified by ctx_id is invalid.
- ENOSYS - io_destroy() is not implemented on this architecture.
VERSIONS
The asynchronous I/O system calls first appeared in Linux 2.5.
CONFORMING TO
io_destroy() is Linux-specific and should not be used in programs that are intended to be portable.
NOTES
Glibc does not provide a wrapper for this system call; call it using syscall(2).
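Since no glibc wrapper exists, the call is made through syscall(2), paired with io_setup(2); a minimal sketch with error handling kept short:

#include <linux/aio_abi.h>   /* aio_context_t */
#include <sys/syscall.h>     /* SYS_io_setup, SYS_io_destroy */
#include <unistd.h>          /* syscall() */
#include <stdio.h>

int main(void)
{
    aio_context_t ctx = 0;                      /* must be zeroed before io_setup */

    if (syscall(SYS_io_setup, 128, &ctx) < 0) { /* room for 128 in-flight events */
        perror("io_setup");
        return 1;
    }
    if (syscall(SYS_io_destroy, ctx) < 0) {     /* cancels/waits, then frees the context */
        perror("io_destroy");
        return 1;
    }
    return 0;
}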
https://manpages.debian.org/buster-backports/manpages-dev/io_destroy.2.en.html
CC-MAIN-2020-10
refinedweb
161
53.17
my $flag = 0;
foreach my $var (@somearray){
    if($var eq "foo"){ $flag = 1; last;}
}
if($flag eq "1"){ &someaction;}
I do this kind of thing All The Time and I don't know a) Why is it wrong? and b) How else would I go about it? Checking if $var is equal to "foo" might be some other, more complex, calculation or maybe a database call of some kind. -- oakbox

This module is supposed to parse chess games. The problem in this code is that it skips the last game in each file it parses. I was able to spot where the problem is. As for fixing it using the same labyrinth of flags system, I simply couldn't.
##
## WARNING! broken code ahead!
##
my $FLAG=1;
my $GAME=0;
##
## ... snip
##
sub ReadGame {
    my $self=shift;
    my $continue=1;
    $self->_init();
    do {
        if ( $continue == 2 ) { $continue = 0 }
        if ( $LINE =~/\[(.*)\]/ ) {
            my $line=$1;
            my ($key,$val,$junk)=split(/"/,$line);
            $key=~s/ $//g, $self->{$key}=$val;
            $GAME=0;
        } elsif ( $LINE !~/^$/) {
            $self->{Game}.=$LINE;
            $GAME=1;
        } elsif ( $GAME == 1 ) {
            $FLAG=0;
        }
        $LINE=<FICPGN>;
        if ( eof(FICPGN) && $continue == 1 ) { $continue = 2 }
        $LINE=~s/\r//;
    } while ( $FLAG==1 );
    return ( $continue ) ;
}

Not all flags are bad. But make the flag's name a yes/no question which its value is the answer to (avoiding reversing its meaning), and don't use a flag where an alternative suggests itself. An additional misfeature in your code: the &foo notation passes implicit arguments, which is often not desired. As perlsub says, "This is an efficiency mechanism that new users may wish to avoid." *ahem* Experienced users as well. :-)

What this code is doing is building a delayed-action dispatch table. Whenever you need to look a variable up in an array, you should consider using a hash. That is exactly their forte.
my %lookup = ( foo=>\&fooaction, bar=>\&baraction, qux=>\&quxaction );
if (exists $lookup{$var}) {
    $lookup{$var}->();
} else {
    # do the default or exception handling here.
}
It's cleaner and clearer to read, uses far fewer variables, is much easier to maintain and extend, and more efficient to boot. If brevity is compatible with your mindset, then the if/else can become
&{ $lookup{$var} || $default_subref }->( parms );

Interesting. Maybe wiser heads than mine have posted here, but if you're dying to use flags, you could use constants. So here's an answer to your third question.
use constant PRINTFLAG => 0;
use constant SAVEFLAG => 1;
use constant MAILFLAG => 2;
And then keep track of it with a status/state variable. That at least makes the code easier to read: $state = PRINTFLAG; instead of just a number. Of course, it's really only appropriate where multiple states come into question. -- Allolex
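For the original question (does @somearray contain "foo"?) the flag-plus-last loop can also be replaced with grep or List::Util::first, so no state variable is needed at all. A small sketch, with someaction() standing in for whatever should run on a match:

use List::Util qw(first);

# grep in boolean context: true if any element matches
if ( grep { $_ eq 'foo' } @somearray ) {
    someaction();
}

# first() stops scanning at the first match, which matters if the test is expensive.
# It returns the matching element, so wrap it in defined() in case "0" or "" could match.
if ( defined first { $_ eq 'foo' } @somearray ) {
    someaction();
}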
http://www.perlmonks.org/?node_id=239992
CC-MAIN-2015-27
refinedweb
485
73.47
Support for ComputeShader? I did not find any information on this topic, which leads me to believe it is not currently supported. Which would be a shame since every modern renderer uses it in one way or another, for me it would be important for light culling for tiled/clustered rendering. It would be nice to have a .Net Core nuget package. And the possebility to install monogame only using nuget, not the installer. Currently, as far as I know, you can't build content without the installed version. You can build the pipeline tool from source if you want. What has blocked us there in the past is getting a solution we could use cross-platform. There just isn't general purpose solution that works on all our target platforms including consoles. Our solution so far has been to encourage people to use 3rd party compute solutions and let them deal with the complexities of supporting their target platforms. For example if DirectX is all you care about you can use the features already in SharpDX to support compute. Really until we fully resolve our shader issues with OpenGL I don't see myself thinking about the compute side of things. Support for Microsoft® HoloLens®:- Microsoft® HoloLens® Project Template for Visual Studio®- Extensions to the MonoGame API (only few required) Thanks. The MonoGame UWP Template works with Hololense already. I tried it All UWP apps work with HoloLens® as a 2D projection.However, if you managed to get it running as a full screen holographic app, then let me know how you managed to do that with MonoGame. Thanks UI would be nice. The only options right now is outdated Xna frameworks, EmptyKeys which is crazy complex (for me anyway) or rolling out your own. A built in UI framework would be so nice, especially for us new to game development. That probably won't happen, at least not in the near future. Squid caught my eye a while ago. It's a C# GUI backend, it might be a good place to start for a simpler alternative to EmptyKeys. I think that I would be cool if we could have a buffer for spritebatch just like we do with vertexbuffers. So something like SpriteBatchBuffer sbb = new sbb(GraphicsDevice); public void CreateWorld() { //or could be ssb.Add() sbb.Draw(mytexture, new Vector2(10,10), Color.White); } public void Draw() { SpriteBatch.Begin(); SpriteBatch.Draw(ssb , Vector2.Zero, Colour.White); SpriteBatch.End(); } If the buffer saved everything on the gpu, We could get a performance increases when using 2d. It would make it very simple for people who just want to use spritebatch. Though I am quite sure that many of the features everyone would like to see implemented would be very beneficial, coming from a software engineering background I would like to see the following items addressed... 1...Complete documentation that is centralized on this site. What is the sense of producing an excellent software tool if you don't have good documentation that even a beginner could use to learn it? Simply providing the API documentation, though needed, is not very good unless everyone understands what and how each method or property means and how each works. People new to game programming won't. Scanning the Internet for hours at a time to learn how to do relatively simple things with Monogame is sort of a stretch. 2...A standard user interface namespace that will allow developers to easily implement various user controls. Every game requires visual controls that will allow the player to interact with it. And if you plan on making games for sale this is a definitive requirement. 
1: While not present on the official monogame site, I use all the old XNA articles for monogame, plenty of those floating around that it takes very little time to find documentation. for example.2: It's not integrated, but is what I use and it's amazing. The author is quite dedicated to it as well. Monogame currently uses SharpDx 2.6.3.As of today the latest stable version os SharpDx is 3.0.2 which should have many bugs fixed (and yes, some news ones too I believe). Updating to SharpDx 3 any time soon would be a great benefit as it would support dx12 and all the improvements that comes with it in terms of speed, threading, resources, etc. and this would attract more developers (hopefully) to the community. Concerning the cons: it may have some parts used by MG that have changed, as for example it happens to the new Mathematics library which makes the core dll lightweight.As always, a lot to do, and no timemachine available for all this, as MG must support OpenGL, Android, etc platforms. Support for importing the FBX 2009 format. This was present in XNA but has been left out of MonoGame. I am migrating a big project and we cannot change our model and animation pipeline. The system is too brittle and it would take too long for us to get animations working again. So for now we have to keep our old XNA-based content pipeline in a side project, just for the models. This could be an Assimp feature, not MonoGame which just use it, better to ask Assimp dev team for it. I think autodesk has a tool to convert them to a newer format. Thanks. I looked it up and they do have a free converter tool, which I installed and tested. I will look more into it later. I've just tested it yesterday, and for some reason it changes the path to textures used by the model, adding a ..\ before it... So I had to convert to ASCII to be able to edit this. Don't know if it was already suggested: access to low-level graphics API, ie: SharpDX on windows DX platform, and other API used for others (OpenTK for GL I think, etc) in order to have access to features "denied" by multi-platform choices made, just for those who don't care about some platforms by choice. It could be done by giving a reference to (with the example above) SharpDX reference from graphicDevice instance (according to MG graphicsdevice DirectX, device, context and others are all internal, so I guess they can't be use from a game instance). This is already available for DirectX platforms via the GraphicsDevice.Handle property.
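On the compute shader point earlier in the thread: until it lands in MonoGame itself, a DirectX-only project can drive one through SharpDX next to the MonoGame graphics device. A rough, untested sketch; device, lightIndexUav, tile counts and the HLSL file are all assumed to exist on your side:

using SharpDX.Direct3D11;
using SharpDX.D3DCompiler;

// Compile and create the compute shader (cs_5_0 profile)
var bytecode = ShaderBytecode.CompileFromFile("LightCulling.hlsl", "CSMain", "cs_5_0");
using (var cullShader = new ComputeShader(device, bytecode))
{
    DeviceContext context = device.ImmediateContext;
    context.ComputeShader.Set(cullShader);
    context.ComputeShader.SetUnorderedAccessView(0, lightIndexUav); // per-tile light lists
    context.Dispatch(tilesX, tilesY, 1);                            // one thread group per screen tile
    context.ComputeShader.SetUnorderedAccessView(0, null);          // unbind before normal rendering
}

This only works on the DirectX backends, which is exactly the portability problem described above.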
http://community.monogame.net/t/monogame-feature-wishlist/6850?page=4
CC-MAIN-2017-39
refinedweb
1,075
71.44
In this tutorial we will check how to mount a FAT file system on the ESP32, using the Arduino core. The tests were performed using a DFRobot’s ESP32 module integrated in a ESP32 development board. Introduction In this tutorial we will check how to mount a FAT file system on the ESP32, using the Arduino core. Under the hood, Arduino works on top of IDF, Espressif’s official framework for the ESP32. IDF uses the FatFs library to work with FAT file systems [1]. At the time of writing, the FAT file system support had been very recently added to the Arduino core. So my recommendation is to pull the latest code from the GitHub repository of the Arduino core in order to make sure to get these changes. Since the ESP32 FLASH can contain multiple applications and different kinds of data [2], there’s the need to have a partition table that specifies how the memory is segmented. Unless we specify a specific partition scheme, the default one is used. The definition of the partition schemes can be seen in the Arduino core installation folder, on the following path: hardware\espressif\esp32\tools\partitions As can be seen in figure 1, the files with the definitions are .csv and there’s one called default.csv, which corresponds to the default partition schema. Figure 1 – ESP32 partitions schemas. If you open this file, you should see something similar to figure 2 (please note that this was the schema at the time of writing this tutorial, which may be updated in the future in the Arduino core). Figure 2 – Default partition schema. As can be seen, the default schema includes a partition to the SPIFFS file system, which is why we were able to mount it and use it in the previous tutorials without the need for additional procedures. Nonetheless, if we tried to run the code below for the FAT file system without changing this definition, it would always return an error when mounting because there is no partition for the FAT file system. So, what we will do is editing this default schema to change the partition of the SPIFFS file system to a partition for a FAT file system. In order to be able to rollback easily in case of problems, my recommendation is to make a copy of the original default.csv file before proceeding with the editing. We will need to edit the SPIFFS line of the file. In the first column (named “Name”), we need to change “spiffs” with “ffat”. In the third column (named “SubType”), we need to change “spiffs” with “fat”. The changes to the file are highlighted in figure 3. Figure 3 – Default schema after edited to support the FAT file system. After saving the file, we don’t need to do any additional procedure and the changes will take effect after uploading the code. The tests were performed using a DFRobot’s ESP32 module integrated in a ESP32 development board. The code The first thing we need to do is importing the “FFat.h” library, so we can have access to all the functionalities needed to interact with the file system. As mentioned, you should pull the latest changes from the Arduino core to make sure this library is available. By doing this include, we will have access to an extern variable called FFat, as can be seen in the header file of the library. This variable is an object of class F_Fat. #include "FFat.h" Moving on to the setup function, where we will write the rest of the code, we start by opening a serial connection to output the results of our program. 
Serial.begin(115200);
Next we need to mount the file system, which is a procedure we always need to perform before interacting with it. To do it, we simply need to call the begin method of the previously mentioned FFat object. Note that the file system needs to be formatted the first time we use it. So, the begin method receives as optional parameter a Boolean value indicating if the file system should be formatted automatically in case the mounting procedure fails. This parameter defaults to false if not specified but, in our case, we will set it to true so the formatting of the file system occurs in case the mounting procedure fails. This method call returns as output a Boolean value indicating if the file system was successfully mounted (true) or the procedure failed (false). So, we will enclose this method call in an IF condition to check if some problem has occurred, and print a message indicating so.
if(!FFat.begin(true)){
    Serial.println("Mount Failed");
    return;
}
If everything goes well, we print a message indicating success.
Serial.println("File system mounted");
The final source code can be seen below.
#include "FFat.h"
void setup(){
    Serial.begin(115200);
    if(!FFat.begin(true)){
        Serial.println("Mount Failed");
        return;
    }
    Serial.println("File system mounted");
}
void loop(){}
Testing the code
To test the code, simply compile it and upload it to the ESP32 using the Arduino IDE, assuming that you have already completed the procedure of editing the default partition schema definition. After the upload finishes, open the Arduino IDE serial monitor. If the file system was not yet formatted before, then it should print the error message we defined, indicating a problem while mounting. Simply restart the device and, this time, the mounting procedure should be successful, as shown in figure 4.
Figure 4 – Output of the program, indicating the file system was successfully mounted.
References
[1]
[2]
6 thoughts on “ESP32 Arduino: FAT file system”
Pingback: ESP32 Arduino FAT file system: writing a file – techtutorialsx
Pingback: ESP32 Arduino FAT file system: Checking if file exists – techtutorialsx
Pingback: ESP32 FAT file system: Reading a file – techtutorialsx
Pingback: ESP32 Arduino FAT file system: Append content to file – techtutorialsx
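The follow-up posts listed in the pingbacks cover file operations in detail; as a teaser, writing and then reading back a file on the mounted FAT partition looks roughly like this (path and content are arbitrary examples):

#include "FFat.h"

void setup(){
    Serial.begin(115200);

    if(!FFat.begin(true)){
        Serial.println("Mount Failed");
        return;
    }

    // Write a small file
    File file = FFat.open("/test.txt", FILE_WRITE);
    if(file){
        file.println("Hello from the FAT file system");
        file.close();
    }

    // Read it back and echo it to the serial monitor
    file = FFat.open("/test.txt");
    while(file && file.available()){
        Serial.write(file.read());
    }
    if(file) file.close();
}

void loop(){}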
https://techtutorialsx.com/2018/10/06/esp32-arduino-fat-file-system/comment-page-1/
CC-MAIN-2022-33
refinedweb
1,028
51.28
CodePlex Project Hosting for Open Source Software
Hi, I have problems getting the file type combobox from a SaveFileDialog. The edit field with automation ID 1001 is no problem, and the Save button is also found (namespace Microsoft.Win32.SaveFileDialog class). System: Windows 7 x64 (Windows XP uses a different dialog at this point). If I try to get the combobox with
ComboBox _fileTyp = _window.Get<ComboBox>(SearchCriteria.ByAutomationId("FileTypeControlHost"));
the following error appears.
I think your problem is because of the FrameworkId, which is something I have seen for the first time. Use UIAutomation directly to get to it.
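Following the suggestion to drop down to UI Automation, something along these lines should reach that combo box; whiteWindow is assumed to be the White Window wrapping the save dialog:

using System.Windows.Automation;

// White exposes the underlying UIA element of the window it wraps
AutomationElement dialog = whiteWindow.AutomationElement;
AutomationElement fileTypeCombo = dialog.FindFirst(
    TreeScope.Descendants,
    new PropertyCondition(AutomationElement.AutomationIdProperty, "FileTypeControlHost"));

if (fileTypeCombo != null)
{
    object pattern;
    if (fileTypeCombo.TryGetCurrentPattern(ExpandCollapsePattern.Pattern, out pattern))
    {
        ((ExpandCollapsePattern)pattern).Expand(); // then pick an entry via SelectionItemPattern
    }
}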
http://white.codeplex.com/discussions/218426
CC-MAIN-2017-22
refinedweb
131
65.93
Restructure L3 Agent¶ - Author Carl Baldwin <[email protected]> The L3 agent is implemented mostly in a single python file. At current count, this file is just over 2,000 lines of code 1. Most of the functionality is provided by the L3NATAgent class which comprises about 75% of the file. This class handles everything from handling RPC messages down to sending gratuitous arp for newly added addresses on the interfaces inside the routers’ namespaces. This structure makes the agent very difficult to extend and modify. This is a bit of technical debt. Paying it down will help enable the development of new functionality. Problem Description¶ As mentioned in the introductory paragraph, the L3 agent has gotten out of hand. The structure of this code has made it difficult to extend and develop new features. Following is a list of responsibilities taken care of in the l3_agent.py file and mostly in the L3NATAgent class within that file. Defines the L3PluginApi Manages link local addresses for the DVR fip namespace Defines the RouterInfo, basically a big struct of data about a router Handles router update messages from RPC in a queue Handles periodic synchronization of all routers L3NATAgent Manages namespace lifecycle Handles router addition removal Runs metadata proxy in each router Very large method called process_router snat_rules, dnat_rules floating_ip_address external gateway internal network interfaces ipv6 support cleanup of stale interfaces static routes HA router keepalive DVR routers rtr_2_fip dvr floating ips snat namespace handling arp entries routes_updated _process_routers_loop _sync_routers gratuitous arp L3NATAgentWithStateReport Code for DVR and HA routers is mingled throughout. There is a lot of “if router[‘distributed’]” this and “if ri.is_ha” that. There is no clear strategy for resource life-cycle management of resources like namespaces and devices. Most of it has evolved over time as problems with the initial implementation have been found. Such problems are usually around them not being deleted when they should be gone. Proposed Change¶ Overview¶ This will not be a rewrite of the L3 agent from scratch. Starting this project as a Herculean effort designed to land as a single patch is a sure recipe for failure. Much of this work will be pure refactoring but not all of it. Work will be posted for review early and often. Dependencies between patches will be avoided when possible. Each patch will stand on its own as a reasonably reviewable improvement to the existing code base. Proper separation of concerns is a goal however, this will be done in steps. We’ll start with the high level separation of the router abstraction from the L3 agent. L3 Agent¶ The L3 agent will be responsible for listening for updates from RPC, queuing the updates for processing by a worker. It will continue to oversee the set of routers which are managed by the agent. If namespaces are enabled, this could be a large set of routers. If namespaces are not enabled, this will be a single router. (_process_routers, _sync_routers, routes_updated, _router_added/removed) The agent will still manage external networks available to routers on the agent. The L3PluginApi class will remain as it is in l3_agent.py. The agent will retain the L3NATAgentWithStateReport capability. Router¶ A new router class will be introduced. A lot of functionality currently handled by the L3 agent – especially the functionality in the process_routers method – will be encapsulated by this new class. 
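A rough Python sketch of what such a router abstraction could look like; the method names here are illustrative, not the final API:

class Router(object):
    """Owns the per-router data (the old RouterInfo) plus the processing workflow."""

    def __init__(self, router_id, router_data, agent_conf):
        self.router_id = router_id
        self.router = router_data
        self.agent_conf = agent_conf

    def process(self):
        # Roughly the responsibilities of today's process_router(), split up
        self._process_internal_ports()
        self._process_external_gateway()
        self._process_floating_ips()
        self._update_routes()

    def _process_internal_ports(self):
        raise NotImplementedError

    # ... remaining hooks overridden by LegacyRouter, DistributedRouter, HARouter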
The current RouterInfo class will move under this abstraction. This class will obsolete and replace the RouterInfo class. The router class will be more than just a struct with data about the router. It will be a full-fledged class that is capable of handling the implementation of the router. It needs a clear and uncomplicated python API defined. As of the Juno release, there are three kinds of routers available. These are distributed, highly available, and legacy routers. A new router class hierarchy will be added to encapsulate the details of each available type of router. The appropriate class will be loaded when the router instance is first created. Kilo or beyond will see the addition of a fourth type of router which combines DVR and HA routers. Adding this fourth type is out of the scope of this blueprint. However, adding this new type of router should be relatively easy after this blueprint has been completed by creating a new class type which combines the functions of the separate base classes. This new classes should be written in a way which efficiently makes use of the existing code in the two base classes. Any additional complexity in this module should only exist to work out any coordination which needs to happen between the two classes. The above uses inheritence to encapsulate the details of the various kinds of routers with an abstract base router serving as the base class and the others implemented as sub-classes. The HA DVR type of router would then use multiple inheritence. Following is a dot representation of what I imagine the hierarchy will be. Notice that LegacyRouter is not a base class. This reflects the fact that “DistributedRouter is a LegacyRouter” is not a true statement. Also, there are two DVR classes. This reflects the fact that non-network nodes have the distributed part of a DVR and network nodes have the central part which builds from the distributed part: digraph inheritence { "LegacyRouter" -> "Router" "DistributedRouter" -> "Router" "DistributedRouterCentral" -> "DistributedRouter" "HARouter" -> "Router" "HADistributedRouter" -> "HARouter" "HADistributedRouter" -> "DistributedRouterCentral" } Given that HA and DVR are properties of individual routers and not properties of the deployment, we will need to pay attention to the migration path from one to another. The code should fully expect that a router can change from one type to another and have the capability to handle it by changing the class used for a router. I expect that the router should be functional with its new type and that any namespaces, devices, or other resources that are no longer necessary after the router changes type will be cleaned up. The clean up will be handled by the resource lifecycle pattern described in the Resource Lifecycle section. The very long _process_router method needs to be refactored with this. The following responsibilities are handled here. Eventually, these will be abstracted behind other interfaces (like an iptables abstraction) but that work may not be completely done as part of this effort. At a high level, the refactoring of this method will separate concerns like plugging interfaces to networks from routing responsibilies. snat_rules, dnat_rules floating_ip_address external_gateway_added internal network added static routes Services¶ There are a few services implemented in the L3 agent in various ways. This blueprint will add a simple service driver model to support decoupling these services from the L3 agent class and its inheritence hierarchy. 
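The service decoupling could start from an observer-style base class with no-op hooks, roughly like this (names are hypothetical):

class L3ServiceObserver(object):
    """Base class for metadata proxy / FWaaS / VPNaaS drivers; every hook is a no-op."""

    def __init__(self, l3_agent):
        self.l3_agent = l3_agent

    def router_added(self, router):
        pass

    def router_removed(self, router):
        pass

    def router_updated(self, router):
        pass


class MetadataProxyService(L3ServiceObserver):
    def router_added(self, router):
        # spawn the metadata proxy for this router's namespace
        pass

The agent would simply iterate over its registered observers and call the matching hook for each router event, which is the sequential dispatch described below.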
As stated before, inheritence will not be used to integrate these services. Each of the services will be moved to a new service specific module Essentially, the agent will be a basic container which loads services as classes. The routing service orchestrates the workflow for services by dispatching router events to each of the known services sequentially. For this blueprint, the dispatching will likely be implemented as a simple method call to a common service interface. This can be expanded to support a more pluggable model as a follow-on effort. The services will have a reference to the router in order to access L3 function such as adding/removing NAT rules and opening ports. I don’t intend to make any significant changes to the device driver models that are implemented in the FW and VPN services in the scope of this blueprint. I don’t expect this effort to have any effect on the configuration of services. Backward compatibility will be actively preserved. This may involve leaving stubs in place for the VPNAgent and others to load a VPN enabled L3 agent. Existing integration tests will be modified to work with the new structure. The intent here will not be to make a model that is everything to everyone. That is out of the scope of this blueprint. The intent is to iteratively develop an interface that will work for the following services which are already integrated with the L3 agent. The goal is to reduce coupling and pave the way for a more sophisticated model which may be needed in the future. They will be tackled in the order listed and the interface will evolve to support them all. Metadata Proxy - The easiest one. Low-hanging fruit. FWaaS - Want to remove it as a super-class of L3NATAgent VPNaaS - Want to remove it as a sub-class of L3NATAgent The first step is to create a service abstract class, and then sub-classes for the various services to use these as observers to the L3 agent. The base class would have no-op methods for each action that the L3 agent could notify about, and the child classes would implement the ones they’re interested in. Each service will register as an observer. Currently, the L3 agent (and VPN agent) load the device drivers for services. What can be done in this first step, is, instead of doing the load, a service object can be created. This object would do the loading and register with the L3 agent for notifications. The child services’ notification handlers will be populated by moving the code in the various agent classes into the new service child classes, and adapt as needed. Anything more complicated than this should be considered out of the scope of this blueprint. Some guidelines for this work: We don’t need the service abstract class to be perfectly and completely defined in advance. I intend to do this iteratively tackling the services in the order listed above. This means that we don’t review the changes to decouple the metadata proxy with the needs of the VPN agent in mind. This initial decomposition should be done without changing any configuration or other deployment details. This might mean that we leave, for example, a tiny stub of a VPNAgent class in place. Initially, the services will get an L3 agent passed in on create, but in the as the blueprint progresses, a router instance can be passed to the service. DVR Router Class¶ Everything related to the floating IP namespace that was added for DVR should be encapsulated in a driver for plugging a router in to an external network and handle floating ip setup. 
This includes the LinkLocalAllocator, dvr specific floating ip processing, fip namespace management, connection of router to fip (rtr_2_fip, fip_2_rtr), _create_dvr_gateway, and the management of proxy arp entries. HA Router Class¶ This encapsulation will hide the details related to starting keepalived and creating and using interfaces needed for the HA network on which it communicates. Resource Lifecycle¶ The major problem here is that resources are often left lying around beyond their useful lifecycle. Assumptions were made about the reliable availability of the agent, guaranteed ordering and delivery of RPC messages, and other unrealistic guarantees. The new design will account for problems in these areas. No assumptions will be made. This will result in a more robust implementation. The problem that we’ve had with this is that the agent fails to cleanup resources when they should no longer exist. To address this, I’m thinking of something that supports the following pattern using namespaces as an example: if full_sync: with namespace_manager.prepare_to_clean_up_stale() as nsm: for router in all_active_routers: nsm.link_router_to_ns_somehow(router) The __enter__ and __exit__ methods should work together to discover stale namespaces and then clean them up. I’m thinking maybe a namespace object should hold a weak reference to the router that occupies it. When the weak ref goes stale then the namespace can be removed. This pattern is not too different from what exists in the code now since some earlier refactoring that I did. However, this effort will formalize the pattern and abstract it from the rest of the code. Code has been started to illustrate this pattern 2. The pattern can be applied to other resources such as interfaces inside of a namespace. We have had problems ensuring that those get removed when they are no longer useful as well. For devices and other resources in a router, the active resources would all be marked each time a router is processed. Stale resources are then identified and removed. There has been a problem with namespaces which are persistently difficult to delete due to a problem in the version of iproute in use on the system 3 and 4. There really is nothing that can be done to remove these except to reboot the machine. However, the new implementation of resource lifecycle management will hold a set of namespaces that it has tried to delete. If the deletion fails, it will skip this deletion in future clean up runs. Ideally, the operator will either keep namespace deletion disabled or upgrade the iproute package on the system to avoid these problems. Configuration Handling¶ The handling of the config options will be cleaned up a bit; there’s so much ‘if that’ and ‘if this’ with config options too. Behavior needs to be properly encapsulated so that we don’t need to branch so much so often. A few examples examples are linked in the references 5 6 7 8. - 5 - 6 - 7 - 8 Security Impact¶ No impact is expected. We need to be careful when reviewing code that these changes do not introduce vulnerabilities in the agent. IPv6 Impact¶ We will take care to preserve all existing IPv6 functionality in Neutron. No changes or additions to the current IPv6 functionality are planned. Developer Impact¶ Much of code in the l3_agent.py file will be moved out to other files. This refactoring will introduce better software engineering patterns to allow the functionality to be extended, modified, and maintained more easily. 
Developers who have become accustomed to the current implementation will likely not recognize the end result. However, they will be able to easily get reaquainted with the new code. To avoid problems with rebasing and potential regressions while the heavy-lifting is being done, non-critical changes to the L3 agent should be avoided while this work is in progress. Mail will be sent to the openstack-dev ML to begin a freeze on non-critical changes and another one to end it. The freeze will only be needed during the initial more disruptive restructuring. As certain part stabilize, the freeze will be lifted. For example, once the VPN and FW services have been decoupled from the agent code – which will be the first step – development on those services can continue. Community Impact¶ This change is part of the approved Neutron priorities for Kilo. It supports at least the following efforts which may also be planned for Kilo. Pluggable external networks blueprint (dynamic routing integration indirectly) Enabling HA routers and DVR to work together. Better integration of L3 services. Spinning out advanced services Alternatives¶ The alternative is to leave it like it is and to perform small bits of refactoring only when it is necessary for a particular new feature. This is not ideal since there are already a number of things that this refactoring needs to support. It will slow down the development of that work if this is delayed. Writing a new agent and eventually deprecating the current one is another alternative? I’ve personally never had a very good experience with this approach. It seems to trade one set of known problems for another set of unknown problems. Regressions are all too common. I prefer to restructure in small reviewable pieces. This does not guarantee no regressions but it can uncover them earlier in the process and they are easier to pinpoint and fix. Implementation¶ Work Items¶ I expect that some of the initial work items will need to be tackled in sequential order because of the high degree of coupling in the code. However, as things are decomposed and the coupling is reduced, other work items can be tackled in parallel. For example, since the service agents are coupled with the L3 agent inheritence hierarchy, they will need to be moved out before a proper router abstraction is feasible. Functional Testing for the Agent Service Drivers Start simple. This won’t be everything to everyone yet. It is not meant to full-blown pluggable service drivers. Metadata Proxy FWaaS VPNaaS Decomposition and modularization of DVR, HA, and legacy routers Create a proper abstraction of a router to replace RouterInfo Can serve as an abstraction for other router implementations. Again, we’ll start simple to introduce the abstraction. Create the inheritence hierarchy. This may be done in a few steps. Initially, the inheritence hierarchy may be thin with most of the implementation still in the base class. Future steps will move responsibilities to the sub-classes and evolve the interface. Testing¶ In addition to the functional tests discussed below, effort will be made to use existing unit tests as necessary to be sure that existing coverage is retained and avoid regressions they were created to prevent. The end result may look like all of the old unit tests have been removed and new, better ones have been written in their place. All new and restructured code will be covered with proper unit test coverage. It will be significantly easier to unit test with the new structure of the code. 
If it isn’t then we’re doing it wrong. I don’t plan to make an effort to add missing unit test coverage before the code is restructured. Functional Tests¶ Functional tests will be added from the L3 agent prior to any significant restructuring of the agent code. Assaf 9 will take the lead of this testing effort with help from John Schwarz 10 and all of the other assignees listed in this blueprint. This includes the addition of functional tests for the new DVR and HA 11 features. Documentation Impact¶ None
https://specs.openstack.org/openstack/neutron-specs/specs/kilo/restructure-l3-agent.html
CC-MAIN-2020-10
refinedweb
2,996
62.68
06 May 2010 10:13 [Source: ICIS news]
LONDON (ICIS news)--Borealis has reported a return to a first-quarter net profit of €54m ($69.2m) in 2010, compared with a net loss of €56m recorded in the same period last year, as feedstock and polyolefin market prices continued to increase, the Austria-based polyolefins major said on Thursday.
Sales revenue for the three months ended 31 March 2010 increased 37.8% year on year to €1.4bn, as Borealis witnessed recovery in its base chemicals business group, with an increase in sales volumes in melamine and plant nutrients as well as in phenol.
“The positive result in the first quarter of 2010 is the outcome of some stabilisation that we can see in the international polyolefins industry as well as our continuous efforts in cost competitiveness and efficiency,” said Mark Garrett, Borealis chief executive.
Borealis added that its new 350,000 tonnes/year low density polyethylene (LDPE) plant in Stenungsund, Sweden, was in the final start-up phase and would be inaugurated in June 2010, while in the Middle East the Borouge 2 project, a joint venture with Abu Dhabi National Oil Company (ADNOC), was under way at Ruwais in the United Arab Emirates.
However, Garrett said the company could not expect the positive upward trend to continue throughout 2010. “We need to remain alert and cautious as we expect the second half of the year to be more difficult with additional capacities coming on stream.”
($1 = €0.78)
For more on Borealis
http://www.icis.com/Articles/2010/05/06/9356627/borealis-records-a-swing-to-a-first-quarter-net-profit-of.html
CC-MAIN-2013-48
refinedweb
251
55.68
. Let’s get this one over with quickly: I truly hated StyleCop. I code in a very clean and consistent style, and I try my best to adhere to conventions. Parts of it I picked up over the years, parts of it just make sense or look nice. I also love the built-in code formatting of Visual Studio, I always use the default rules. It was about a year ago when I tried StyleCop, so I don’t remember everything clearly, but it didn’t agree with me or Visual Studio in several ways. To name one, putting using statements inside a namespace looks totally crazy to me. I knew I could have customized it, but I didn’t think that putting any effort into it would be worth it, I already had my style and Visual Studio’s code formatter, so I stuck with those. On the other hand, I really loved FxCop. It is very easy to use, it has rules that really make sense to me, and it is built into Visual Studio by default. It only focuses on one goal, showing you warnings if you do something bad, and it’s really good at that. It barely even has a UI. You can also extend it with custom checks, developed in .NET of course. Now let’s take a look at NDepend. It’s not the easiest tool to use, but that’s pretty much all the bad there is about it. It doesn’t care about my source code formatting, which is a plus for me . It is a static code analysis tool, like FxCop, but it’s also a lot more than that. Basically, it has lots of rules and lets you verify whether your code violates any of them, it can also integrate into Visual Studio and the building process, pretty much like FxCop. The big innovation is the rule engine, CQLinq (Code Query LINQ): rules are LINQ expressions. When I first heard about it, I thought it’s nice, but I had my doubts, I thought it’s hard to do complex stuff with that, but I was wrong. When NDepend analyses code, it first builds a huge object graph about your code (see NDepend.CodeModel namespace). Think about reflection, plus a lot more. It has dozens of extra properties about everything. There are standard metrics, like the number of IL instructions and cyclomatic complexity of methods, and even more advanced stuff, for example, IMember.CouldBePrivate tells you if a method or other kind of member could be declared as private. This kind of information needs a lot of effort to gather, some of the things which are full-fledged rules in FxCop, are readily and easily available built-in properties in NDepend, so you can focus on the real logic in the rules. So when you have the code model, you can start running queries with a LINQ syntax. I have to admit that I only learned about the let keyword when I started using CQLinq. This single language construct is the key thing, it lets you step away from simple SQL-like stuff and create very complex queries. After I looked at the built-in rules, I didn’t have any more doubts about the possibilities of NDepend. I’m pretty sure that whatever FxCop knows, NDepend knows that, too. And whatever way you can extend FxCop with a .NET assembly, you can also extend NDepend with a CQLinq query, which is just as powerful but a lot easier to use. NDepend comes with 200+ built-in rules. With the value-added code model, many queries can be quite simple as you can see on the image on the right, but of course there are very complex ones, too, and they are open sourced and well commented. And the best thing is that you can edit the queries in real time, with code completion, and see the results immediately in a grid, which even knows about how to display which kinds of data. 
CQLinq is awesome, but NDepend has a lot of knowledge about your code, so it doesn’t stop there, and gives you a lot more visually. For example, it can draw dependency graphs, I love dependency graphs. Then it can draw dependency matrices, treemaps, etc. And everything is interactive, clickable. NDepend also supports continuous integration testing. It has an assembly comparer UI, but the real deal is also built into CQLinq. For an analysis, you can define a baseline, and then all the changes can be queried from the exact same code model. For example, ICodeElement.WasRemoved() returns if the code element in the baseline doesn’t exist in the current version, and there you have it, detecting breaking changes comes down to a very simple CQLinq query, too. And last but not least, it has a very nice UI, with lots of context sensitive help and lots of online documentation as well. All in all, NDepend is just awesome.
http://joco.name/2013/06/02/dotnet-code-quality-analysis-with-ndepend/
CC-MAIN-2017-43
refinedweb
834
70.94
I'm brand new to c++, trying to learn it on my own. I've found several questions related to this but none of them have really answered it. I've enabled c++11 so ifstream should accept a std::string as far as I can tell. I've also tried this in Visual Studio, CLion, and Eclipse Neon with the exact same results. I've also checked the value of __cplusplus at runtime and I know it is set to 201103, which is required for the overloading of the constructor. Essentially if I use std::ifstream in the main function using a std::string I have no problems. On the other hand if I try to pass it to a another class that in another file I receive this error in Eclipse and Clion: "error: no match for call to '(std::ifstream {aka std::basic_ifstream}) (std::__cxx11::string&)' infile(filename);" And this error in Visual Studio: "error C2064: term does not evaluate to a function taking 1 arguments" Both errors point to the same line as indicated in the code block below. I would like to know what I'm doing wrong as I'd like to use ifstream inside of the class. // main.cpp #include test.h int main() { std::string name("test.txt"); TestClass test (name); std::ifstream testfile(name); return 0; } // test.h #include <fstream> class TestClass{ std::ifstream infile; public: TestClass(std::string filename); // have also tried with std::string& filename } // test.cpp TestClass::TestClass(std::string filename){ // again have tried using string& infile(filename); /** ERROR **/ } std::ifstream doesn't provide a operator()(std::string) overload. Hence infile(filename); in the constructor body fails to compile. There is a constructor taking a const std::string& though, that can be used in your classes member initializer list: TestClass::TestClass(std::string filename) : infile(filename) { // Omit that completely: infile(filename); /** ERROR **/ }
https://codedump.io/share/nwNwtmGgos3Y/1/c11-error-no-match-for-call-to-ifstream-outside-main
CC-MAIN-2017-13
refinedweb
314
61.67
EDI Context Properties The message context properties in the EDI global property schema are publicly exposed so you can use them in operations such as message routing. These context properties are defined in PropertySchema.xsd in the Microsoft.BizTalk.Edi.BaseArtifacts assembly. The namespace for the properties is Edi/PropertySchema. If they are promoted, these message context properties are available as Edi.<Property Name> in the Filters page of the Send Port Properties Dialog Box. The EDI context properties are also available in an orchestration, as long as a reference to the Microsoft.BizTalk.Edi.BaseArtifacts assembly has been added to the orchestration project e orchestration project. Extracting Individual Fields from the Segment Context Properties Some properties are not written or promoted to the message context by the EDI receive pipelines as individual properties, but only as part of a segment string. This is done for performance reasons, because property promotion has an impact on performance. For example, the ISA5, ISA6, ISA7, ISA8, and ISA15 fields of the ISA segment are promoted by the receive pipelines as individual properties, but the rest of the ISA fields are only written to the message context as part of the ISA_Segment property. These properties are written or promoted only when ReuseEnvelope is not set to True, indicating that a received batched interchange is not being preserved. If you need an individual field of one of the segments (ISA, GS, UNB, UNG, or UNA) to be written to the message context, but this individual field is not written to the message context by default, you will need to write a custom component to write it to the message context. This custom component needs to parse the segment fields and write an individual field to the message context. The Message Enrichment sample shows how to use a parser to extract individual fields from the segments and write them to the context. This sample is included in the <drive>:\Program Files\Microsoft BizTalk Server 2010\SDK\Samples\EDI\MessageEnrichment. For more information, see Message Enrichment Sample (BizTalk Server Sample).
http://technet.microsoft.com/en-us/library/bb226554.aspx
crawl-003
refinedweb
343
51.89
- 11 Feb, 2005 2 commits Careful with mutable list entries that point to THUNKs: the thunk might be updated, and the resulting IND_OLDGEN will be on the mutable list twice. We previously avoided this problem by having an extra MUT_CONS object on the mutable list pointing to the THUNK, so that we could tell the difference between the entry on the mutable list that used to be the THUNK, and the new entry for the IND_OLDGEN. We don't have MUT_CONS any more (this was part of the cleanup from separating the mutable list from the heap). So, now, when scavenging an IND_OLDGEN on the mutable list, we check whether it is pointing to an already-evacuated object. This is a bit crude, but at least it is a localised hack. - - 10 Feb, 2005 1 commit. - 20 Jan, 2005 1 commit - 18 Nov, 2004 1 commit - 07 Oct, 2004 1 commit. - 13 Sep, 2004 1 commit - - 21 May, 2004 1 commit. - 10 May, 2004 1 commit - 07 May, 2004 1 commit... - 26 Nov, 2003 1 commit - 12 Nov, 2003 1 commit - 24 Oct, 2003 1 commit - 22 Oct, 2003 1 commit - 23 Sep, 2003 1 commit - 26 Aug, 2003 1 commit - 14 Aug, 2003 1 commit - 26 Jun, 2003 1 commit - 19 Jun,. - 22 Apr, 2003 1 commit Fix an obscure bug: the most general kind of heap check, HEAP_CHECK_GEN(), is supposed to save the contents of *every* register known to the STG machine (used in cases where we either can't figure out which ones are live, or doing so would be too much hassle). The problem is that it wasn't saving the L1 register. A slight complication arose in that saving the L1 register pushed the size of the frame over the 16 words allowed for the size of the bitmap stored in the frame, so I changed the layout of the frame a bit. Describing all the registers using a single bitmap is overkill when only 8 of them can actually be pointers, so now the bitmap is only 8 bits long and we always skip over a fixed number of non-ptr words to account for all the non-ptr regs. This is all described in StgMacros.h. - 01 Apr, 2003 1 commit - 26 Mar, 2003 2 commits - 24 Mar, 2003 2 commits -. - 19 Mar, 2003 1 commit - 12 Feb, 2003. - 25 Oct, 2002 1 commit - 25 Sep, 2002 1 commit Fix a scheduling/GC bug, spotted by Wolfgang Thaller. If a main thread completes, and a GC runs before the return (from rts_evalIO()) happens, then the thread might be GC'd before we get a chance to extract its return value, leading to barf("main thread has been GC'd") from the garbage collector. The fix is to treat all main threads which have completed as roots: this is logically the right thing to do, because these threads must be retained by virtue of holding the return value, and this is a property of main threads only. - 18 Sep, 2002 1 commit - 17 Sep, 2002 1 commit - 10 Sep, 2002 1 commit - 06 Sep, 2002 1 commit Selector Thunk Fix, take II. The previous version didn't deal well with selector thunks which point to more selector thunks, and on closer inspection the method was flawed. Now I've introduced a function StgClosure *eval_selector_thunk( int field, StgClosure * ) which evaluates a selector thunk returning its value, in from-space, if possible. It blackholes the thunk during evaluation. It might recursively evaluate more selector thunks, but it does this in a bounded way and updates the thunks with indirections (NOT forwarding pointers) after evaluation. This cleans things up somewhat, and I believe it deals properly with both types of selector-thunk loops that arise. MERGE TO STABLE - 05 Sep, 2002 1 commit) - 16 Aug, 2002 1 commit
https://gitlab.haskell.org/shayne-fletcher-da/ghc/-/commits/4d3ce7360892fec57a9ae42d77d3a7ed344e023a/ghc/rts/GC.c
CC-MAIN-2022-05
refinedweb
641
64.85
Have via Hacker News API. For example, hitting will return the following: { "by" : "dhouston", "descendants" : 71, "id" : 8863, "kids" : [ 8952, 9224, 8917, 8884, 8887, 8943, 8869, 8958, 9005, 9671, 9067, 8940, 8908, 9055, 8865, 8881, 8872, 8873, 8955, 10403, 8903, 8928, 9125, 8998, 8901, 8902, 8907, 8894, 8878, 8980, 8870, 8934, 8876 ], "score" : 111, "time" : 1175714200, "title" : "My YC app: Dropbox - Throw away your USB drive", "type" : "story", "url" : "" } Where kids are all of the comments on post specified via id 8863. For those following along, I highly recommend using iPython repl, which is only a pip install ipython away. Step 1. Get the story ID Story ID is in the URL of the page. For example, URL for Ask HN: Who is hiring? (December 2017) is :, so ID is 15824597. Step 2. Get the Post Content Content of the mains post is retrieved by changing ID in the Hacker News API link, resulting in. I created a function to construct the URL and used it to get the data, with the help of requests module. import requests def getItemUrl(id): return '{}.json'.format(str(id)) storyID = 15824597 story = requests.get(getItemUrl(storyID)).json() At this point I have the story and a list of IDs of all of the “kids”. Step 3. Get all comments I used a similar process for all of the kids to get their content. The “who is hiring” post had over 600 comments, so I used tqdm module to show me the progress while I waited. I also used list comprehension instead of a regular for loop. After that I backed up all of the comments as a JSON file, just in case. with open("who-is-hiring.json", "w") as f: json.dump(comments, f) Step 4. Profit I only wanted to see jobs close to my home, so I made a new list only containing comments that had “CA” in them. Turned out that some comments were deleted and had no text, so I added a check for that as well. ca = [c for c in comments if "text" in c and "CA" in c['text']] Common way to write location in the comment is like San Francisco | CA, so I’ve split every comment text by CA. I took the resulting left side and split it by empty space to get all of the words. Finally I took 3 words to the left of CA and combined them back into one sentence, hoping that it would give me a good signal for the location. I converted the list of locations to a set, in order to remove all duplicates. locations = [] for c in ca: beforeca = c['text'].split("CA")[0] # Get everything to the left of CA loc = " ".join(beforeca.split(" ")[-3:]) # Get 3 words before CA locations.append(loc) # Save it set(locations) # Remove duplicates I got about 20 locations back. It was easy to look at all and identify a few that made sense for me. I picked out San Mateo and Redwood City. Finally I wrote all matching comments into an HTML file. tocheck = ["Redwood City", "San Mateo"] import codecs with codecs.open("res.html", "w", encoding="utf-8") as f: # Need codecs to write utf-8 in Python 2 for c in comments: for check in tocheck: if 'text' in c and check in c['text']: # If desired city f.write(c['text']) # Save to file f.write("<hr/>") # Separated by horizontal ruler I opened results in a web browser and they were exactly what I was hoping to see, demonstrated in the CodePen bellow. See the Pen HN Results Demo by Alex (@akras14) on CodePen. I thought this was pretty handy. Thanks Hacker News!
https://www.alexkras.com/parse-ask-hn-who-is-hiring-python-and-hacker-news-api/
CC-MAIN-2020-34
refinedweb
618
80.92
In this blog you will learn how to monitor a Spring Boot application. You will make use of Spring Actuator, Micrometer, Prometheus and Grafana. Seems a lot of work, but this is easier as you might think! 1. Introduction When an application runs in production (but also your other environments), it is wise to monitor its health. You want to make sure that everything is running without any problems and the only way to know this, is to measure the health of your application. When something goes wrong, you hopefully will be notified before your customer notices the problem and maybe you can solve the problem before your customer notices anything. In this post, you will create a sample Spring Boot application which you can monitor with the help of Spring Actuator, Micrometer, Prometheus and Grafana. This is visualized in the overview below, where Spring Actuator and Micrometer are part of the Spring Boot App. The purpose of the different components is explained briefly: - Spring Actuator: supplies several endpoints in order to monitor and interact with your application. See Spring Boot Actuator in Spring Boot 2.0 for more information. - Micrometer: an application metrics facade that supports numerous monitoring systems, Spring Boot Actuator provides support for it. - Prometheus: a timeseries database in order to collect the metrics. - Grafana: a dashboard for displaying the metrics. Every component will be covered in the next sections. The code used in this post can be found at GitHub. 2. Create Sample App First thing to do is to create a sample application which can be monitored. Go to Spring Initializr, add dependency Spring Boot Actuator, Prometheus and Spring Web. The sample application will be a Spring MVC application with two dummy endpoints. Create a RestController with the two endpoints. The endpoints only return a simple String. @RestController public class MetricsController { @GetMapping("/endPoint1") public String endPoint1() { return "Metrics for endPoint1"; } @GetMapping("/endPoint2") public String endPoint2() { return "Metrics for endPoint2"; } } Start the application: $ mvn spring-boot:run Verify the endpoints are working: $ curl Metrics for endPoint1 $ curl Metrics for endPoint2 Verify the Spring Actuator endpoint. The endpoint returns the information in json. In order to format the response so that it is readable, you can pipe the output of the actuator endpoint to mjson. $ curl | python -mjson.tool ... { "_links":{ "self":{ "href":"", "templated":false }, "health":{ "href":"", "templated":false }, "health-path":{ "href":"{*path}", "templated":true }, "info":{ "href":"", "templated":false } } } By default, the above information is available. Much more information can be provided by Spring Actuator, but you need to enable this. In order to enable the Prometheus endpoint, you need to add the following line into the application.properties file. management.endpoints.web.exposure.include=health,info,prometheus Restart the application and retrieve the data from the Prometheus endpoint. A large bunch of metrics are returned and available. Only a small part of the output is displayed because it is a really long list. The information which is available at this endpoint, will be used by Prometheus. $ curl # HELP jvm_gc_pause_seconds Time spent in GC pause # TYPE jvm_gc_pause_seconds summary jvm_gc_pause_seconds_count{action="end of minor GC",cause="G1 Evacuation Pause",} 2.0 jvm_gc_pause_seconds_sum{action="end of minor GC",cause="G1 Evacuation Pause",} 0.009 ... 
As mentioned before, Micrometer is also needed. Micrometer can be seen as SLF4J, but then for metrics. Spring Boot Actuator provides autoconfiguration for Micrometer. The only thing you need to do is to have a dependency on micrometer-registry-{system} in your runtime classpath and that is exactly what we did by adding the prometheus dependency when creating the Spring Boot app. The metrics Actuator endpoint can also be accessed when you add it to the application.properties file. management.endpoints.web.exposure.include=health,info,metrics,prometheus Restart the application and retrieve the data from the metrics endpoint. $ curl | python -mjson.tool ... { "names": [ "http.server.requests", "jvm.buffer.count", "jvm.buffer.memory.used", ... Each individual metric can be retrieved by adding it to the URL. E.g. the http.server.requests parameter can be retrieved as follows: $ curl | python -mjson.tool ... { "name": "http.server.requests", "description": null, "baseUnit": "seconds", "measurements": [ { "statistic": "COUNT", "value": 3.0 }, { "statistic": "TOTAL_TIME", "value": 0.08918682 }, ... 3. Add Prometheus Prometheus is an open source monitoring system of the Cloud Native Computing Foundation. Since you have an endpoint in your application which provides the metrics for Prometheus, you can now configure Prometheus to monitor your Spring Boot application. The Spring documentation for doing so can be found here. There are several ways to install Prometheus as described in the installation section of the Prometheus documentation. In this section, you will run Prometheus inside a Docker container. You need to create a configuration prometheus.yml file with a basic configuration to add to the Docker container. The minimal properties are: scrape_interval: how often Prometheus polls the metrics endpoint of your application job_name: just a name for the polling job metrics_path: the path to the URL where the metrics can be accessed targets: the hostname and port number. Replace HOSTwith the IP address of your host machine global: scrape_interval: 15s scrape_configs: - job_name: 'myspringmetricsplanet' metrics_path: '/actuator/prometheus' static_configs: - targets: ['HOST:8080'] If you have difficulties finding out your IP address on Linux, you can use the following command: $ ip -f inet -o addr show docker0 | awk '{print $4}' | cut -d '/' -f 1 Start the docker container and bind-mount the local prometheus.yml file to the one in the docker container. The above prometheus.yml file can be found in the git repository in directory prometheus. $ docker run \ -p 9090:9090 \ -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \ prom/prometheus After successful startup of the Docker container, first verify whether Prometheus is able to gather the data via url. It seems that Prometheus is not able to access the Spring Boot application running on the host. An error context deadline exceeded is mentioned. This error can be solved by adding the Docker container to your host network which will enable Prometheus to access the URL. Therefore, add --network host as a parameter. Also remove the port mapping as this has no effect when --network is being used. Finally, give your container a name, this will make it easier to start and stop the container. The -d parameter will run the container in detached mode. 
$ docker run \ --name prometheus \ --network host \ -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \ -d \ prom/prometheus Verify the Prometheus targets URL again; the state now indicates UP, which means the prerequisite of accessing the metrics endpoint is fulfilled. It is now possible to display the Prometheus metrics. Navigate to the Prometheus graph page, enter http_server_requests_seconds_max in the search box and click the Execute button. Access the endPoint1 URL a couple of times in order to generate some traffic. This metric gives you the maximum execution time of a request during a time period. 4. Add Grafana The last component to add is Grafana. Although Prometheus is able to display the metrics, Grafana will allow you to show the metrics in a fancier dashboard. Grafana also supports several ways of installing it, but you will run it in a Docker container, just like you did with Prometheus. $ docker run --name grafana -d -p 3000:3000 grafana/grafana Navigate to the URL where Grafana is accessible. The default username/password is admin/admin. After clicking the Log in button, you need to change the default password. Google Chrome will also warn you about the default username/password. The next thing to do is to add a Data Source. In the left sidebar, click the Configuration icon and select Data Sources. Click the Add data source button. Prometheus is at the top of the list; select Prometheus. Fill in the URL where Prometheus can be accessed, set HTTP Access to Browser and click the Save & Test button at the bottom of the page. When everything is OK, a green notification banner is shown indicating that the data source is working. Now it is time to create a dashboard. You can create one of your own, but there are also several ready-made dashboards available which you can use. A popular one for displaying the Spring Boot metrics is the JVM dashboard. In the left sidebar, click the + sign and choose Import. Enter the URL where the JVM dashboard can be found and click the Load button. Enter a meaningful name for the dashboard (e.g. MySpringMonitoringPlanet), select Prometheus as Data Source and click the Import button. At this moment, you have a cool first Grafana dashboard at your disposal. Do not forget to scroll down; there are more metrics than shown in the screenshot. The default range is set to 24 hours, which may be a bit large when you have just started the application. You can change the range in the top right corner; change it to, for example, Last 30 minutes. It is also possible to add a custom panel to the dashboard. At the top of the dashboard, click the Add panel icon. Click Add new panel. In the Metrics field, enter http_server_requests_seconds_max; in the Panel title field in the right sidebar, you can enter a name for your panel. Finally, click the Apply button at the top right corner and your panel is added to the dashboard. Do not forget to save the dashboard by means of the Save dashboard icon next to the Add panel icon. Put some load on the application and see what happens to the metrics on the dashboard. $ watch -n 5 curl $ watch -n 10 curl 5. Conclusion In this post, you have learnt how you can set up some basic monitoring for a Spring Boot application. It is necessary to use a combination of Spring Actuator, Micrometer, Prometheus and Grafana, but these are quite easy to set up and configure. This is of course just a starting point, but from here on it is possible to expand and configure more specific metrics for your application. 
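The conclusion mentions configuring more specific metrics as a next step. As an illustration of what that looks like with Micrometer's API, here is a hedged sketch of a custom counter registered in a controller; the metric name, class name and endpoint path are my own placeholders and are not part of the original sample app.

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CountedMetricsController {

    private final Counter endPoint1Counter;

    // Spring injects the auto-configured MeterRegistry (backed by Prometheus here).
    public CountedMetricsController(MeterRegistry registry) {
        this.endPoint1Counter = Counter.builder("endpoint1.invocations")
                .description("Number of times the counted endpoint was called")
                .register(registry);
    }

    @GetMapping("/countedEndPoint1")
    public String countedEndPoint1() {
        // Shows up at the Prometheus endpoint as endpoint1_invocations_total
        endPoint1Counter.increment();
        return "Metrics for countedEndPoint1";
    }
}

After a restart, the new counter appears at the /actuator/prometheus endpoint and can be graphed in Prometheus and Grafana just like the built-in metrics.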
Spring Boot apps are in every company, and this post is really useful for any ops people! A quick question: is it a good idea to display multiple services in the same graphs? Or is it better to have separate graphs for each? Thank you for the nice comment. I think it is better to have separate graphs, but it really depends on what kind of metric (and the number of services, of course) you display in the graph. E.g. percentage CPU load can easily be displayed in one graph; the scale is from 0 up to 100. But the number of requests will be very dependent on the service. An increase in load of a service which normally does not see many requests will be less visible when it is combined in a graph with services which have a higher load under normal conditions. Besides that, monitoring should also be useful for dev people 😉 Monitoring should start at the dev level and in the test and acceptance environments. It is a responsibility of dev and ops. Good point! Thanks for the answer, @mydeveloperplanet. Nice article to get kickstarted with monitoring a Spring Boot app. One point that I wanted to clarify for my understanding was the role played by Micrometer. Prometheus is basically scraping the endpoints exposed by Spring Actuator. Micrometer helps the Spring Boot app convert them into a time-series format which Prometheus can understand and display, i.e. tags, counters, gauges and timers. Without Micrometer, the information provided by Spring Actuator would be raw and would not be useful for Prometheus/Grafana. In that sense Micrometer is kind of mandatory in the monitoring stack. Is that correct? Thanks. Thank you for your comment. You are completely right. Micrometer adds to the Spring Actuator endpoint the format that Prometheus can understand. This is really a lovely article. So nicely organised and communicated. One question – if the Prometheus Docker container goes down or is restarted, then all data will be lost, right? What strategy can we adopt to persist data over time? Thank you for the nice comment. You are right; it is better to mount the /etc/prometheus/ directory instead of only the yml file. This way, the data is persisted on your local machine and not only in the Docker container. So, starting the Docker container should be done with -v /path/to/config:/etc/prometheus, see also the Prometheus documentation. Hope this helps. I am a software developer from China; thank you for your article, I succeeded after trying it. Great to read this! Thank you for the comment
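Building on the persistence question above, a hedged variant of the docker run command keeps both the configuration and the collected time-series data outside the container. This is a sketch, not from the original post: /prometheus is the data path used by default in the prom/prometheus image and may differ in other setups, and prometheus-data is an arbitrarily named Docker volume.

$ docker run \
    --name prometheus \
    --network host \
    -v /path/to/prometheus/config:/etc/prometheus \
    -v prometheus-data:/prometheus \
    -d \
    prom/prometheus

With the named volume in place, the stored metrics survive container restarts and re-creations.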
https://mydeveloperplanet.com/2021/03/03/how-to-monitor-a-spring-boot-app/?like_comment=20318&_wpnonce=679557a5c1
CC-MAIN-2022-05
refinedweb
2,103
65.52
Created on 2016-12-23 14:17 by Cornelius Diekmann, last changed 2017-01-06 12:37 by Cornelius Diekmann. My OS: Debian GNU/Linux 8.6 (jessie) Python 3.4.2 pty.py from Python-3.5.2/Lib (pty.py is just a tiny, portable python file which did not see many changes) Bug Report Steps to Reproduce: I wrote a very simple python remote shell: #!/usr/bin/env python3 import pty pty.spawn('/bin/sh') It can be run in a terminal (call it TermA) with `nc -e ./myfancypythonremoteshell.py -l -p 6699` In a different terminal (call it TermB), I connect to my fancy remote shell with `nc 127.0.0.1 6699`. The shell works fine. In TermB, I quit by pressing ctrl+c. Observed Behavior: In TermA, the nc process, the python process, and the spawned /bin/sh still exist. They still occupy TermA. Expected Behavior: The client in TermB has disconnected, /bin/sh in TermA can no longer read anything from stdin and it should close down. Ultimately, in TermA, nc should have exited successfully. Fix: End the _copy loop in pty.py once EOF in STDIN is reached. Everything shuts itself down automatically. I included a small patch to demonstrate this behavior. This patch is not meant to go straight into production. I have not verified if this behavior somehow breaks other use cases. This bug report is meant to document exactly one specific use case and supply exactly one line of code change for it. This issue is related to issue26228. Actually, it is complementary. issue26228 wants to return if master_fd is EOF, this issue wants to return if stdin is EOF. Both behaviors together looks good to me. By the way, I hacked a hacky `assert False` into my patch as a placeholder for issue26228's proper handling of exec failures at that part of the code. I suggest to combine the patches of this issue and issue26228. I wrote a proper patch for the issue of handling EOF in STDIN, including tests. My patch is against the github mirror head, but don't worry, the files I touch haven't been touched in recent years ;-) I only tested on Linux. My patch only addresses the issue in this thread. It does not include the patch for issue26228. I still recommend to also merge the patch for issue26228 (but I don't have a FreeBSD box to test). Removed git patch header from pty.patch to make python code review tool happy. Sorry, this is my first contribution. Review tool still did not show the test_pty.py file. Sry. Make review tool happy by giving it less broken patch format :) `make patchcheck` is already happy. Sorry for the noise :( This is a change in behaviour of the _copy() loop: it will stop as soon as EOF is read from the parent’s input, and immediately close the terminal master. Unpatched, the loop continues to read output from the child, until the child closes the terminal slave. I agree that your new behaviour may be desired in some cases, but you need to respect backwards compatibility. With your patch, things will no longer work robustly when the child “has the last word”, i.e. it writes output and exits without reading any (extra) input. Simple example: the child prints a message, but the parent has no input: python -c 'import pty; pty.spawn("./message.py")' < /dev/null Any new functionality would also need documenting. (If you want to suggest some wording to document the existing behaviour better, that would also be welcome :) Thank you Martin very much. To resolve this issue, I decided to document the current behavior and add test cases for it. No change in behavior is introduced. This hopefully allows to close this issue. 
The test cases for the current behavior ensure that we can (at some point in the future) add some different behavior without breaking backwards compatibility. Fixed: Observed behavior is now expected+documented behavior. Improved test cases. Happy Holidays! Status change: I proposed a generic test suite for pty.spawn() in issue29070. Once we have agreed on the current behavior of pty.spawn() and the test suite is merged, I would like to come back to this issue which requests for a change in behavior of pty.spawn(). Currently, I would like to document that this issue is waiting for issue29070 and this issue doesn't need any attention. [no status change, this issue currently does NOT need any attention] To keep issues separate, I just wanted to document a comment about this issue mentioned in issue29070. It refers to the _copy loop. if STDIN_FILENO in rfds: data = stdin_read(STDIN_FILENO) if not data: fds.remove(STDIN_FILENO) + # Proposal for future behavior change: Signal EOF to + # slave if STDIN of master is gone. Solves issue29054. + # os.write(master_fd, b'\x04') else: _writen(master_fd, data) > vadmium 2017/01/04 21:50:26 > I suggest leaving this for the other [issue29054, i.e. this] bug. Another option may be to send SIGHUP > (though I am far from an expert on Unix terminals :).
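For anyone who wants to observe the behavior discussed in this report, the following sketch wraps the reader callbacks that pty.spawn() accepts so the moment stdin reaches EOF is logged while the copy loop keeps running. It only illustrates the symptom (for example with stdin coming from a pipe or socket, as in the nc example above); it is not the proposed patch, and the callback names are my own.

#!/usr/bin/env python3
# Illustration only: log reads so the EOF-on-stdin behavior of pty.spawn()
# discussed above becomes visible. Not the patch itself.
import os
import pty
import sys

def logged_stdin_read(fd):
    data = os.read(fd, 1024)
    if not data:
        # Unpatched, the copy loop keeps running after this point until the
        # child closes the terminal slave.
        print("\r\n[stdin EOF reached]\r\n", file=sys.stderr)
    return data

def logged_master_read(fd):
    return os.read(fd, 1024)

if __name__ == "__main__":
    pty.spawn("/bin/sh", logged_master_read, logged_stdin_read)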
https://bugs.python.org/issue29054
CC-MAIN-2021-31
refinedweb
853
66.94
In this article, I have discussed how to connect to a MySQL database remotely using Python. For any application, it is very important to store the database on a server for easy data access. It can be quite complicated to connect to the database remotely, because not every service provider allows remote access to the MySQL database. Here I am using Python's MySQLdb module for connecting to our database, which is hosted on a server that provides remote access. What is MySQLdb? MySQLdb is an interface for connecting to a MySQL database server from Python. It implements the Python Database API v2.0 and is built on top of the MySQL C API. Packages to Install mysql-connector-python mysql-python If using anaconda conda install -c anaconda mysql-python conda install -c anaconda mysql-connector-python else pip install MySQL-python pip install mysql-connector-python Import the package: import MySQLdb How to connect to a remote MySQL database using Python? Before we start, you should know the basics of SQL. Now let us discuss the methods used in this code: - connect(): This method is used for creating a connection to our database; it has four arguments: - Server Name - Database User Name - Database Password - Database Name - cursor(): This method creates a cursor object that is capable of executing SQL queries on the database. - execute(): This method is used for executing SQL queries on the database. It takes an SQL query (as a string) as an argument. - fetchone(): This method retrieves the next row of a query result set and returns a single sequence, or None if no more rows are available. - close(): This method closes the database connection. Free remote MySQL database providers: 1. 2. Output: Connected Today's Date Is 2017-11-14
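The code block from the original article did not survive extraction, so here is a hedged reconstruction of what such a script typically looks like; the host name, credentials and database name are placeholders that you would replace with the values from your remote MySQL provider.

# Hedged reconstruction of the article's example; credentials are placeholders.
import MySQLdb

def main():
    # connect() arguments: server name, user name, password, database name
    db = MySQLdb.connect("remotemysql.example.com", "dbUser", "dbPassword", "dbName")
    cursor = db.cursor()
    cursor.execute("SELECT CURDATE()")
    row = cursor.fetchone()
    if row is not None:
        print("Connected")
        print("Today's Date Is", row[0])
    db.close()

if __name__ == "__main__":
    main()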
https://www.geeksforgeeks.org/mysqldb-connection-python/?ref=rp
CC-MAIN-2021-04
refinedweb
325
63.39
Get hostname and ip address of local computer (2) Posted by Jaroslav Pisk on February 9th, 1999 char szHostName[128]; if( gethostname(szHostName, 128) == 0 ) { // Get host addresses struct hostent * pHost; int i; pHost = gethostbyname(szHostName); for( i = 0; pHost != NULL && pHost->h_addr_list[i] != NULL; i++ ) { CString str; int j; for( j = 0; j < pHost->h_length; j++ ) { CString addr; if( j > 0 ) str += "."; addr.Format("%u", (unsigned int)((unsigned char*)pHost->h_addr_list[i])[j]); str += addr; } // str now contains one local IP address - do whatever you want to do with it (probably add it to a list) } } Doesn't work. Posted by Legacy on 12/04/2002 12:00am Originally posted by: Patrick Now, can you please help me how to display the IP with printf? You need #include <winsock2.h> and link Ws2_32.lib. Posted by Legacy on 11/07/2002 12:00am Originally posted by: Chris He forgot to mention: add #include <winsock2.h> and also click Project->Settings->Link Tab and add in: Ws2_32.lib It doesn't work.... Posted by Legacy on 10/08/2002 12:00am Originally posted by: Omar Mukhtar The program listed here claims to get the local IP address & local host name. But it didn't work well in a Linux environment. It retrieves "127.0.0.1", i.e. the loopback IP address. The reason is that the gethostbyname() function looks for the IP in a file. What is the real solution? How does the ifconfig command work? Omar Mukhtar serial port programming. Posted by Legacy on 07/27/2002 12:00am Originally posted by: deepaa. How to communicate with two adjacent computers? How to communicate through the serial ports? Re: Obtaining IP address. Posted by Legacy on 03/14/2002 12:00am Originally posted by: John Payne For the above code to work, you need to include the header file: #include <unistd.h> How do I get all the IP addresses on a LAN having a WINDOWS NT 4 SERVER. Posted by Legacy on 11/04/2001 12:00am Originally posted by: Ajay How do I get all the IP addresses on a LAN having a WINDOWS NT 4 SERVER? It's OK. How can I get the IPv6 host address in the same manner? Posted by Legacy on 10/15/2001 12:00am Originally posted by: Mura How to get IP in Linux. Posted by Legacy on 11/10/2000 12:00am Originally posted by: Senthilkumaran I want to get the IP address and host name of a computer running Linux. Please help me out. Thanks, Thanks for the addition. Posted by Legacy on 01/14/1999 12:00am Originally posted by: Jeff Lundgren I've had a lot of people ask this question.
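Pulling together the original snippet and the fixes suggested in the comments (winsock2.h, Ws2_32.lib, printf output), a self-contained console version might look roughly like this. It is a sketch rather than the article's code: it drops the MFC CString dependency so it builds as a plain Win32 console program, and error handling is kept minimal.

// Sketch combining the snippet above with the advice from the comments.
// Build as a Win32 console app and link against Ws2_32.lib.
#include <winsock2.h>
#include <stdio.h>

#pragma comment(lib, "Ws2_32.lib")

int main()
{
    WSADATA wsaData;
    if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0)
    {
        printf("WSAStartup failed\n");
        return 1;
    }

    char szHostName[128];
    if (gethostname(szHostName, sizeof(szHostName)) == 0)
    {
        printf("Host name: %s\n", szHostName);

        struct hostent* pHost = gethostbyname(szHostName);
        for (int i = 0; pHost != NULL && pHost->h_addr_list[i] != NULL; i++)
        {
            // Print each address byte by byte, e.g. 192.168.1.10
            for (int j = 0; j < pHost->h_length; j++)
            {
                if (j > 0)
                    printf(".");
                printf("%u", (unsigned int)((unsigned char*)pHost->h_addr_list[i])[j]);
            }
            printf("\n");
        }
    }

    WSACleanup();
    return 0;
}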
http://www.codeguru.com/cpp/i-n/network/networkinformation/article.php/c2499/Get-hostname-and-ip-address-of-local-computer-2.htm
CC-MAIN-2014-35
refinedweb
449
64.91
EDAG agrees EpiDoc's use of <unclear> makes sense and agrees it should be more explicitly documented. <unclear> doesn't map perfectly to epigraphic practice in the use of underdot, but our use of the element is unambiguous. span class=gap causes problems with square bracket handling Will attempt to do (a) Removed a bunch of duplicate @xml:ids. Discuss issue of how inclusive EpiDoc should be Use and document ODD chaining in releases Did we release a compiled ODD in the latest version? If not, I will go do it... Fix for anyElement issue. Adding @source to schemaSpec. Adding @source to schemaSpec. ODD Chaining (see [#119]) will enable the creation of ODDs that derive from EpiDoc,... The solution to [#119] should resolve this (though not add it to EpiDoc). Use ODD chaining in releases Use ODD chaining in releases Use ODD chaining in releases The source attribute will be inherited from the TEI 3.1.0 release (scheduled for... Use ODD chaining in releases Add Schematron rules to disallow [ ] ( ) and underdots Add Schematron rules to disallow [ ] ( ) and underdots This might better be moved to a separate "EpiDoc" Lint Schematron. There are scripts... adjacent gaps broken if not both @extent=unknown If you search the issues for "defaultVal" you'll see some of the work that's been... This is being gradually done in the TEI by Syd and Martin Holmes, I believe. On Tue,... adjacent gaps broken if not both @extent=unknown Correcting a mistake hardly constitutes re-animation :-). We don't really follow... Reinstate div/@type="figure(s)" Guidelines schema complains about sch and rng namespaces in EpiDoc ODD Fixed with r2464 Fix for [bugs:#137] Removed extraneous bracket in title. gap Remove temporary customization of m @cause in att.transcriptional probably doesn't belong att.textCritical on witDetail Sourceforge Guidance doc out of date Very likely. I don't know the half of what your brilliant stylesheets do :-) Sent... I assigned it to Peter so he’ll be prodded to elaborate, not to implement it—which... I assume Pure ODD, otherwise it wouldn’t make much sense. On Aug 12, 2015, at 11:15... It would be nice to be able to generate this kind of documentation of any ODD. Probably... whitespace to make nested list example comprehensible to newbies Formatting change only, and in any case done. So closing. add @methodology to editorialDecl Is this meant really as a shorthand for saying "this is a TEI Simple document"? Or... Allow q/quote/cit in u I don't see why you shouldn't be able to express a quote inside an utterance. This... I'm in favor of this (though it warrants discussion, obviously). I agree with Martin... Change description of TEI Header and simplify content model of TEI @cert needs data.probability as well as data.certainty I think we were in generally in agreement with this. I think the original restriction... show ODD declaration on the specs pages Thoughts? I think this would probably be nice. If so, then this would really be a... tagUsage/@render should be deprecated This seems uncontroversial to me. Any objections? Assigning to Syd. Allow a text-wrapper element (e.g. ab) in notatedMusic This seems straightforward to me—the main question is do you want/need the full content... New element for secluded text Implemented with [r13316] – [r13319] Wrapped example in ab to fix test failure. Fixed @versionDate on gloss. Added proper gloss for secl. Added new element secl. to address FR #531. bug in parm-external-app-style in Oxygen 17+ Fixed in [r2382]. Tunnel parameters have to be declared as such. 
I RTFM and fixed it. Adding the @place attribute to <head> and <seg> Closing, as there's been no followup. Broken clarosnet/Ashmolean links in relation example Closing. Opened [bugs:#764] and assigned to Martin for followup. Link checking in Examples Allow <hi> to be contained by <m> Done in r13281. Implementing... It has. I will be implementing it soon. On Thu, Jun 25, 2015 at 5:06 PM, Caroline... Hi Hugh! No, I'm not asking for a mechanism to bypass the TEI abstract model. What... Marjorie, Council has concerns that what you're asking for amounts to altering the... part of the "Title" clipped by Roma, in a somewhat sneaky fashion Martin will have a go at this. Replace @active and @passive on relation with @from and @to Decision at May 2015 Council Meeting: give up on "fixing" the relation element and... <defaultVal> should be removed from all specs Martin will examine cases where this occurs and report. Then kick back to SB. Allow g in notatedMusic Give more structure to abstract Reassigning to Lou to produce non-transcriptional divs and ps. guidance on use of @calendar and @datingMethod Gabby, are you still willing to implement this? Please do, if you can. Allow <hi> to be contained by <m> Will implement. <app> is phrase-level In my experience apparatus abbreviate anything over a couple of words. Doesn't mean... But even "conventional" apparatus deal with chunks at least at the level of the whole...
https://sourceforge.net/u/hcayless/activity/
CC-MAIN-2017-30
refinedweb
851
68.16
Monitor and control user input devices Project description pynput This library allows you to control and monitor input devices. Currently, mouse input and monitoring, and keyboard input are supported. Controlling the mouse Use pynput.mouse.Controller like this: from pynput.mouse import Button, Controller, Listener d = Controller() # Read pointer position print('The current pointer position is {0}'.format( d.position)) # Set pointer position d.position = (10, 20) print('Now we have moved it to {0}'.format( d.position)) # Move pointer relative to current position d.move(5, -5) # Press and release d.press(Button.left) d.release(Button.left) # Double click; this is different from pressing and releasing twice on Mac OSX d.click(Button.left, 2) # Scroll two steps down d.scroll(0, 2) Monitoring the mouse Use pynput.mouse.Listener like this: def on_move(x, y): print('Pointer moved to {0}'.format( (x, y))) def on_click(x, y, button, pressed): print('{0} at {1}'.format( 'Pressed' if pressed else 'Released', (x, y))) if not pressed: # Stop listener return False def on_scroll(dx, dy): print('Scrolled {0}'.format( (dx, dy))) # Collect events until released with Listener( on_move=on_move, on_click=on_click, on_scroll=on_scroll) as l: l.join() A mouse listener is a threading.Thread, and all callbacks will be invoked from the thread. Call pynput.mouse.Listener.stop from anywhere, or raise pynput.mouse.Listener.StopException or return False from a callback to stop the listener.
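The description mentions that keyboard input is supported but only shows the mouse API. Based on the keyboard interface documented for later pynput releases, a controller sketch looks roughly like the following; the exact surface available in version 0.5.1 may differ, so treat this as an assumption rather than a guarantee.

# Sketch based on the pynput keyboard API as documented in later releases;
# the exact API available in 0.5.1 may differ.
from pynput.keyboard import Key, Controller

keyboard = Controller()

# Press and release a single character key
keyboard.press('a')
keyboard.release('a')

# Press and release a special key
keyboard.press(Key.space)
keyboard.release(Key.space)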
https://pypi.org/project/pynput/0.5.1/
CC-MAIN-2020-10
refinedweb
238
53.17
On 17 June 2016 at 03:37, Tsung-Han Lin <address@hidden> wrote: > Hi, I made some changes to TRY TO fix the ARM semihosting issue in > SYS_HEAPINFO handling. > This problem has been bothering me for quite a while. > > A new global variable 'main_ram_base' is added while a new memory > API, memory_region_add_subregion_main, is also provided to let > SoC/board creator to initialize this variable. > I am not sure if this is a good idea (to add a new API) > or maybe we just let SoC/board creator to simply > set 'main_ram_base' in their 'xxx_realize' functions? > > As for Cortex-M series, 'main_ram_base' is set during cpu initialization. > A64 semihosting handling is also added and use zynqmp as an example. > > Any comments/reviews are big welcome! > Thanks in advance! Hi. First of all, unfortunately we can't accept any patch from you unless you provide a signed-off-by: line (which is basically saying you have the legal right to provide it to us under QEMU's license terms; see for more detail). We can fix up most other stuff, but this one is a hard requirement. > diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c > index 23c719986715..8124f71992b4 100644 > --- a/hw/arm/xlnx-zynqmp.c > +++ b/hw/arm/xlnx-zynqmp.c > @@ -206,7 +206,7 @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error > **errp) > memory_region_init_alias(&s->ddr_ram_high, NULL, > "ddr-ram-high", s->ddr_ram, > ddr_low_size, ddr_high_size); > - memory_region_add_subregion(get_system_memory(), > + memory_region_add_subregion_main(get_system_memory(), > XLNX_ZYNQMP_HIGH_RAM_START, > &s->ddr_ram_high); This isn't necessarily the main RAM for this board -- if you don't pass more than XLNX_ZYNQMP_MAX_LOW_RAM_SIZE as the RAM size then the only RAM is the low ram at address 0. In any case, even if you do have enough RAM to go through this code path, the executable being loaded might be linked so it goes into the low RAM alias at 0, in which case using this address as the heap start/limit would be wrong. > } else { If we can avoid having to change every board to specify this that would be nice. (Most of them already specify the RAM base in vbi->bootinfo.loader_start.) Is your use case passing an ELF file to QEMU to run? I suspect what we actually need to do for boards like the Xilinx with more than one RAM area is address the /* TODO: Make this use the limit of the loaded application. */ and actually use the values from the loaded executable, rather than guessing them. This would also address problems with the Cortex-M cores, where the application being loaded might be linked to be in RAM (non-zero start) or to be in flash (zero start). We should also be able to do a better job of guessing for simple boards with one RAM area at a non-zero offset, but if we look at the ELF files we're loading we might not need to bother... > diff --git a/target-arm/arm-semi.c b/target-arm/arm-semi.c > index 8be0645eb08b..d30469688b01 100644 > --- a/target-arm/arm-semi.c > +++ b/target-arm/arm-semi.c > @@ -599,17 +599,32 @@ target_ulong do_arm_semihosting(CPUARMState *env) > unlock_user(ptr, arg0, 16); > #else > limit = ram_size; > - ptr = lock_user(VERIFY_WRITE, arg0, 16, 0); > - if (!ptr) { > - /* FIXME - should this error code be -TARGET_EFAULT ? */ > - return (uint32_t)-1; > - } > - /* TODO: Make this use the limit of the loaded application. */ > - ptr[0] = tswap32(limit / 2); > - ptr[1] = tswap32(limit); > - ptr[2] = tswap32(limit); /* Stack base */ > - ptr[3] = tswap32(0); /* Stack limit. 
*/ > - unlock_user(ptr, arg0, 16); > + if (is_a64(env)) { > + uint64_t *ptrx; > + ptrx = lock_user(VERIFY_WRITE, arg0, 32, 0); > + if (!ptrx) { > + /* FIXME - should this error code be > -TARGET_EFAULT ? */ > + return (uint32_t)-1; > + } > + /* TODO: Make this use the limit of the > loaded application. */ > + ptrx[0] = tswap64(main_ram_base + ram_size / > 2); /* Heap base */ > + ptrx[1] = tswap64(main_ram_base + ram_size); > /* limit */ > + ptrx[2] = tswap64(main_ram_base + ram_size); > /* Stack base */ > + ptrx[3] = tswap64(main_ram_base + ram_size / > 2); /* limit */ > + unlock_user(ptrx, arg0, 32); > + } else { > + ptr = lock_user(VERIFY_WRITE, arg0, 16, 0); > + if (!ptr) { > + /* FIXME - should this error code be > -TARGET_EFAULT ? */ > + return (uint32_t)-1; > + } > + /* TODO: Make this use the limit of the > loaded application. */ > + ptr[0] = tswap32(main_ram_base + limit / 2); > + ptr[1] = tswap32(main_ram_base + limit); > + ptr[2] = tswap32(main_ram_base + limit); /* > Stack base */ > + ptr[3] = tswap32(main_ram_base); /* Stack > limit. */ > + unlock_user(ptr, arg0, 16); > + } > #endif This is making two bug fixes at once. The part of this which is fixing the 64-bit code path to write 64-bit values into the data block is a simple non-controversial bugfix, and it should be in its own patch. Making better guesses at limit values for system emulation is trickier (see remarks above). You've also got some problems with your code indent, which should be four-space. scripts/checkpatch.pl can tell you about some style issues with patches. I suggest you start by sending a patch which just fixes the 64-bit case to write 64-bit values, since that's the easy bit. thanks -- PMM
https://lists.gnu.org/archive/html/qemu-devel/2016-06/msg06899.html
CC-MAIN-2019-09
refinedweb
816
63.19
#include <OcTreeDataNode.h> Basic node in the OcTree that can hold arbitrary data of type T in value. This is the base class for nodes used in an OcTree. The implementation used for occupancy mapping is in OcTreeNode. Note: If you derive a class (directly or indirectly) from OcTreeDataNode, you have to implement (at least) the following functions to avoid slicing errors and memory-related bugs: createChild(), getChild(), getChild() const, expandNode(). See ColorOcTreeNode in ColorOcTree.h for an example. Definition at line 63 of file OcTreeDataNode.h. Make the templated data type available from the outside. Definition at line 117 of file OcTreeDataNode.h. Copy constructor; performs a recursive deep copy of all children, including the node data in "value". Copy the payload (data in "value") from rhs into this node. As opposed to the copy constructor, this does not clone the children as well. Definition at line 103 of file OcTreeDataNode.h. Test whether the i-th child exists. Equals operator; compares whether the stored values are identical. Read node payload (data only) from binary stream. Sets the value to be stored in the node. Definition at line 105 of file OcTreeDataNode.h. Write node payload (data only) to binary stream. Definition at line 65 of file OcTreeDataNode.h. Pointer to the array of children; may be NULL. Definition at line 126 of file OcTreeDataNode.h. Stored data (payload). Definition at line 128 of file OcTreeDataNode.h.
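Application code normally does not instantiate OcTreeDataNode directly; nodes are created and looked up through the containing tree. A hedged usage sketch with the stock OcTree and OcTreeNode classes (which derive from this template) could look like this; the resolution and coordinates are arbitrary example values.

// Hedged sketch: nodes of this family are usually reached through an OcTree.
#include <octomap/octomap.h>
#include <iostream>

int main()
{
    octomap::OcTree tree(0.1);  // 10 cm resolution

    // Mark one voxel occupied and one free; updateNode creates the nodes.
    tree.updateNode(octomap::point3d(1.0f, 0.5f, 0.2f), true);
    tree.updateNode(octomap::point3d(-0.5f, 0.5f, 0.2f), false);

    // search() returns a pointer to the node (an OcTreeNode, i.e. an
    // OcTreeDataNode<float> holding a log-odds occupancy value), or NULL.
    octomap::OcTreeNode* node = tree.search(octomap::point3d(1.0f, 0.5f, 0.2f));
    if (node != NULL)
        std::cout << "occupancy probability: " << node->getOccupancy() << std::endl;

    return 0;
}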
https://docs.ros.org/en/melodic/api/octomap/html/classoctomap_1_1OcTreeDataNode.html
CC-MAIN-2021-39
refinedweb
233
58.28
ASP.NET MVC's templating mechanism determines the Type of the model element it is trying to render and then, using that information, picks up the Display Template that needs to be used. Along with type, other attributes like Data Annotations and UI Hints are also used to determine the final outcome of our UI. A Simple ASP.NET MVC Demo Let's look at these concepts in a little more detail using a simple demo app. - Say we start off with the Internet Application ASP.NET MVC4 Project - Add an Entity in the Model folder with the following properties public class TimeCard { public int Id { get; set; } public string Subject { get; set; } public string Description { get; set; } public DateTime? StartDate { get; set; } public Decimal NumberOfHours { get; set; } } - Use the Add Controller method to scaffold up the Controller and Views for it. If we run the Application now and navigate to the Create page for the DefaultController, it looks like this. Now we will go back to the TimeCard entity and decorate it with attributes as follows public class TimeCard { public int Id { get; set; } public string Subject { get; set; } [DataType(DataType.MultilineText)] public string Description { get; set; } [DisplayName("Start Date")] public DateTime? StartDate { get; set; } [DataType(DataType.Duration)] [DisplayName("Number of Hours")] public Decimal NumberOfHours { get; set; } } When we run the application again, the view changes to the following. As we can see, the Description box is now much bigger thanks to the DataType annotation, and the labels look better thanks to the DisplayName attribute. However we still don't have any DatePicker for our Dates, and what if we wanted the Number of Hours to be a select going from 1 to 24? We could always go ahead and edit the view and cram in the required jQuery to put in a Date Picker. Similarly we could remove the default text box that our MVC scaffolding is generating and replace it with a custom "Select". Or, there is another way around it – Custom Templates. Custom Templates in ASP.NET MVC Universal Template Let's see how we can use Custom Templates to replace the Start Date with a Date Picker. By universal template, I mean that the changes we are going to make will apply to the entire project. - In the Shared folder, we'll add a folder called EditorTemplates - Next we add a CSHTML file called DateTime.cshtml and add the following content to it @model DateTime? @Html.TextBox("", (Model.HasValue ? Model.Value.ToShortDateString() : string.Empty), new { @class = "datepicker" }) - As we can see, it simply drops in a TextBox with a CSS class called datepicker. This doesn't magically convert it into a date picker. To tie it up with a date picker JavaScript control, you need to do a couple of additional steps. First, add a new JavaScript file in the Scripts folder and call it jquery.ext.datepicker.js - In the JS file we drop the following JavaScript $(function () { $(".datepicker").datepicker(); }); - Now we have to include it in one of our JavaScript bundles. Let's update the BundleConfig.cs and add it to the jQuery bundles as follows bundles.Add(new ScriptBundle("~/bundles/jqueryval").Include( "~/Scripts/jquery.unobtrusive*", "~/Scripts/jquery.validate*", "~/Scripts/jquery.ext*")); - We are all set. Note we have not touched any of the cshtml files. Let's run the application. The create page will be as follows - If you save the entry and navigate back to it for Editing, you'll see that the Edit page also has a date picker. 
- Fact is, from here on, when we use Html.EditorFor(…) on any DateTime type of field, it will get the date picker for free. Reducing Scope of the Editor Now if we wanted to reduce the scope of the Template, we could very well define it under Views\Default\EditorTemplates. This would restrict it to the views of the Default controller. Inverting the Template Association Now that we've seen how to do a Universal Template and a Template per controller, let's see what it takes to create a template that will be used only if we want to use it. - Let's add another cshtml file under the EditorTemplates folder and call it HoursOfTheDay.cshtml - Add the following markup to it @model Decimal? <option value="0">Please Select</option> @for (int i = 1; i <= 24; i++) { if (Model.HasValue && ((int)Model.Value) == i) { <option value="@i" selected="selected">@i</option> } else { <option value="@i">@i</option> } } - What this does is provide, for Decimal values, a drop-down with items 1 to 24. It also checks whether the input value is between 1 and 24 and, if it is, sets it as the selected option - Now since it has the name HoursOfTheDay it will not get bound to all Decimal fields automatically. Instead there are two options to bind it. Option 1: Using the UIHint attribute on our Model as follows [DataType(DataType.Duration)] [DisplayName("Number of Hours")] [UIHint("HoursOfTheDay")] public Decimal NumberOfHours { get; set; } As we can see, the UIHint provides the name of the Template to use. Again this makes it applicable to all pages to which this Entity Type is bound. Option 2: If we want even finer-grained control over when the template should be used, we can remove the UIHint from the attribute and use it in the cshtml markup as follows <div class="editor-field"> @Html.EditorFor(model => model.NumberOfHours, "HoursOfTheDay") @Html.ValidationMessageFor(model => model.NumberOfHours) </div> Conclusion With that we wrap up this peek into Custom Templates. We looked at Editor Templates specifically and how to use them either universally or in a fine-grained manner. We can also use Templates for managing the Display only. These go under the folder DisplayTemplates. For example we could use a Display template to show all datetime fields as short dates only. Download the entire source code of this article (Github) Will you give this article a +1 ? Thanks in advance 3 comments: This time there are not many new features in ASP.NET MVC, but some of them are good, like bundling and minification, and because of this I don't have to use more data for visitors of my site; generally these features are liked by developers. Nice article... Clear article and most helpful. Thank you. As a newbie in MVC/Razor I struggled a bit because I didn't include the jQuery UI stuff in the bundles. This caused "object doesn't support" errors. I found this article helpful:
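As a follow-up to the Display Templates mentioned in the conclusion, a minimal sketch of a universal template that renders every DateTime as a short date could look like the following. It would live in Views\Shared\DisplayTemplates\DateTime.cshtml and is an illustration, not part of the article's download; the "-" placeholder for missing dates is my own choice.

@model DateTime?
@if (Model.HasValue)
{
    @Model.Value.ToShortDateString()
}
else
{
    <text>-</text>
}

With this in place, Html.DisplayFor(model => model.StartDate) picks the template up automatically, mirroring how the EditorTemplates above are resolved.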
http://www.devcurry.com/2013/04/custom-templates-in-aspnet-mvc.html
CC-MAIN-2017-04
refinedweb
1,077
63.59
Developing Metro Apps in HTML and JS Last Friday I attended the first Windows 8 developer day in Belgium. It was a full day dedicated to Windows 8 development. The first thing I noticed is the fact that the HTML 5 application was actually a game. That is one of the reasons it could be easily ported to a Metro app. If we wanted to port a business app like an e-commerce site, that wouldn't be so easy. One of the important issues when developing Metro apps is the design. The philosophy of Microsoft when designing an application is that all applications have a similar look & feel and should react in the same way. Almost all websites are designed for a single resolution, and that is in contrast with the Metro philosophy where all apps should optimize their viewport for every screen. This means you'll have to redesign your application so it changes its content to fit the screen. Also you should implement the capability to change the view depending on the position of your device (landscape or portrait mode) and the multitasking options which allow you to run multiple applications side by side. Next to the Metro design, these Metro apps should at least have some specific Windows 8 features like the redefined search and sharing capabilities. It's only at this point you really start to take advantage of some of the new Windows 8 features. But the most important thing you should implement is the live tile. This must engage the user to consume your application. You can do this by providing the user real-time information about your app. A second consideration is the fact that when you just copy and paste your web application, you don't take advantage of all the possibilities present in WinRT. With WinRT you can make calls to the operating system. For example you can access the webcam, save a file to the file system, share data … Developing Metro apps with JavaScript also brings the WinJS namespace. In here we find the Promise object that gives the developer an easy way to handle asynchronous calls. Because all calls that take more than 50ms are performed asynchronously by default, this can come in handy when you are making WinRT calls. After the theoretical part, it was time for the real work: an App-a-thon. Here we got the opportunity to put all our theoretical knowledge into practice. Together with 4 RealDolmen colleagues (Maarten Balliauw, Xavier Decoster, Wesley Cabus and Angelo Trotta) we developed a Metro NuGet Package Explorer. This allows the users to view NuGet repositories and their package details. All this information gets retrieved from the given feeds and gets stored in an IndexedDB for performance. The application is completely built in HTML5 and uses JavaScript for the logic. Surprisingly, developing this application went pretty fast. It took some time to adjust to the fact that you're writing a client application with JavaScript, but all past JavaScript knowledge could be reused pretty easily. There was one thing we were struggling with: when working with the promises you have to be very careful to call objects on the UI thread, otherwise you can get some strange exceptions. Conclusion After a whole day of Windows 8, I was really excited to start developing Metro apps. Enabling developers to develop a Metro app in HTML/JS was definitely a good choice Microsoft made. This way a whole new group of developers can start building Metro apps. But the business has to be aware that you can't just copy and paste your web app into a Metro project and call it a Metro app. Metro apps have a philosophy and that should be respected. 
Also it would be a shame if you wouldn't take advantage of all the new features that Metro apps provide. I'm looking forward to the next app-a-thon and hope that our team can come together again to win the contest this time. (Ended second last time.) Currently I'm trying to port our Linq2IndexedDB project to use the WinJS promises instead of the jQuery promise. I hope to announce this feature in the near future.
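To illustrate the WinJS Promise pattern mentioned above, here is a hedged sketch of consuming an asynchronous call with WinJS.xhr; the feed URL and element id are placeholders, and the error handling is deliberately minimal.

// Hedged sketch of the WinJS promise pattern mentioned in the article.
// The URL is a placeholder for a real NuGet feed.
var feedUrl = "https://example.org/api/v2/Packages";

WinJS.xhr({ url: feedUrl }).then(
    function completed(result) {
        // The completion handler runs on the UI thread, so it is safe to touch the DOM here.
        var status = document.getElementById("status");
        status.textContent = "Feed loaded, " + result.responseText.length + " bytes";
    },
    function error(e) {
        console.log("Request failed: " + e);
    }
);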
http://css.dzone.com/articles/developing-metro-apps-html-js
CC-MAIN-2013-20
refinedweb
719
62.07
keyctl_search (3) - Linux Man Pages keyctl_search: Search a keyring for a key NAME keyctl_search - Search a keyring for a key SYNOPSIS #include <keyutils.h> long keyctl_search(key_serial_t keyring, const char *type, const char *description, key_serial_t destination); DESCRIPTION keyctl_search() recursively searches the keyring tree rooted at keyring for a key that matches the specified type and description. If a matching key is found and destination is not 0, the key will be linked into the keyring identified by destination, provided the caller has the necessary permissions. RETURN VALUE On success keyctl_search() returns the serial number of the key it found. On error, the value -1 will be returned and errno will have been set to an appropriate error. ERRORS - ENOKEY - One of the keyrings doesn't exist, no key was found by the search, or the only key found by the search was a negative key. - ENOTDIR - One of the keyrings is a valid key that isn't a keyring. - EKEYEXPIRED - One of the keyrings has expired, or the only key found was expired. - EKEYREVOKED - One of the keyrings has been revoked, or the only key found was revoked. - ENOMEM - Insufficient memory to expand the destination keyring. - EDQUOT - The key quota for this user would be exceeded by creating a link to the found key in the destination keyring. - EACCES - The source keyring didn't grant search permission, the destination keyring didn't grant write permission or the found key didn't grant link permission to the caller. LINKING Although this is a Linux system call, it is not present in libc but can rather be found in libkeyutils. When linking, -lkeyutils should be specified to the linker. SEE ALSO keyctl(1), add_key(2), keyctl(2), request_key(2), keyctl(3), request-key(8)
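A hedged usage sketch: create a key on the session keyring with add_key(2) and then locate it again with keyctl_search(). The type/description strings are arbitrary examples, and the program is linked with -lkeyutils as noted in the LINKING section.

/* Hedged example: add a user key, then find it again via keyctl_search(). */
/* Build with: cc find_key.c -lkeyutils */
#include <keyutils.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *payload = "secret";
    key_serial_t created;
    long found;

    created = add_key("user", "example:token", payload, strlen(payload),
                      KEY_SPEC_SESSION_KEYRING);
    if (created == -1) {
        perror("add_key");
        return 1;
    }

    /* Search the session keyring; destination 0 means "do not link the result anywhere". */
    found = keyctl_search(KEY_SPEC_SESSION_KEYRING, "user", "example:token", 0);
    if (found == -1) {
        perror("keyctl_search");
        return 1;
    }

    printf("created key %d, search returned %ld\n", created, found);
    return 0;
}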
https://www.systutorials.com/docs/linux/man/3-keyctl_search/
CC-MAIN-2021-31
refinedweb
241
71.95
Hi Mike, It is a bogus error message I have been struggling with for some time. Here is one of my typical workarounds: class something { // lots of code... private: #if defined(__APPLE__) && defined(__MACH__) \ && defined(__GNUC__) && __GNUC__ == 3 && __GNUC_MINOR__ == 3 bool dummy_; #endif uctbx::unit_cell unit_cell_; fractional<> original_site_; const wyckoff::position* position_; rt_mx sym_op_; }; I.e. simply adding a dummy bool as a member is typically enough to make the compiler happy. I've not been able to figure out what exactly causes the bogus error message. If anyone knows I'd be glad to learn about it. Cheers, Ralf
https://mail.python.org/pipermail/cplusplus-sig/2005-December/009631.html
CC-MAIN-2016-30
refinedweb
115
67.86
The goal of ADO.NET is to provide a bridge between your objects in .NET and your backend database. ADO.NET provides an object-oriented API to a relational view of your database, encapsulating many of the database properties and relationships within ADO.NET objects. More importantly, the ADO.NET objects encapsulate and hide the details of database access; your objects can interact with ADO.NET objects without knowing or worrying about the details of how the data is moved to and from the database. 19.2.1 The DataSet Class The ADO.NET object model is rich, but at its heart is a fairly straightforward set of classes. One very powerful class, key to the disconnected architecture, is the DataSet, which is located in the System.Data namespace. processes. The DataSet captures not just a few rows from a single table, but represents a set of tables with all the metadata necessary to represent the relationships and constraints among the tables as recorded in the original database. The DataSet offers two key properties: Tables and Relations. The Tables property returns a collection of DataTables. Each DataTable, in turn, has two important properties: Columns and Rows. The Columns property returns a collection of DataColumn objects, while the Rows property returns a collection of DataRows. Similarly, the Relations property of the DataSet returns a collection of DataRelation objects. The principal objects available through the DataSet are represented schematically in Figure 19-6. Figure 19-6. The DataSet objects Table 19-1 shows the most important methods and properties of the DataSet class. 19.2.1.1 The DataTable class The DataSet object's Tables property returns a DataTableCollection collection, which contains tables in the DataSet. For example, the following line of code (in C#) creates a reference to the first DataTable in the Tables collection of a DataSet object named myDataSet: DataTable dataTable = myDataSet.Tables[0]; dim dataTable as DataTable = myDataSet.Tables(0) The DataTable has several public properties, including the Columns property, which returns the ColumnsCollection object, which consists of DataColumn objects. Each DataColumn object represents a column in a table. The Relations property returns a DataRelationCollection object, which contains DataRelation objects. Each DataRelation object represents a relationship between two tables through DataColumn objects. For example, in the Bugs database, the Bug table is in a relationship with the People table through the PersonID column. The nature of this relationship is many to onefor any given Bug, there will be exactly one owner, but any given person may be represented in any number of Bugs. The Bugs and BugHistory collection actually establish an even tighter relationship: that of parent/child. The Bug acts as a parent record for all of its history records (that is, for all the history records with the same BugID as the Bug). DataTables, DataColumns, and DataRelations are explored in more detail later in this chapter. The most important methods and properties of the DataTable class are shown in Table 19-2. 19.2.1.2 The DataRow class The Rows collection contains DataRow objects, one for each row in the table. Use this collection to examine the results of queries against the database, iterating through the rows to examine each record in turn. Programmers experienced with ADO are often process in Example 19-2. The most important methods and properties of the DataRow class are shown in Table 19-3. 
19.2.2 DBCommand and DBConnection The DBConnection object represents a connection to a data source. This connection may be shared among different command objects. The DBCommand object allows you to send a command (typically a SQL statement or the name of a stored procedure) to the database. Often these objects are created implicitly when you create your DataSet, but you can explicitly access these objects, as you'll see in Example 19-4 and Example 19-5. 19.2.3 The DataAdapter Object. .NET provides versions of the DataAdapter object; one for each data provider (e.g., SQL Server). If you are connecting to a SQL Server database, you will increase the performance of your application by using SqlDataAdapter (from System.Data.SqlClient) along with SqlCommand and SqlConnection. If you are using another database, you will often use OleDbDataAdapter (from System.Data.OleDb) along with OleDbCommand and OleDbConnection. The most important methods and properties of the DataAdapter class are shown in Table 19-4.
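To tie these classes together, here is a hedged C# sketch that opens a SqlConnection, lets a SqlDataAdapter fill a DataSet, and then walks the Tables and Rows collections; the connection string and the column names are placeholders standing in for the Bugs database used in the text.

// Hedged sketch; the connection string and column names are placeholders.
using System;
using System.Data;
using System.Data.SqlClient;

class DataSetDemo
{
    static void Main()
    {
        string connectionString = "Server=.;Database=Bugs;Integrated Security=true;";
        string commandText = "SELECT BugID, Description FROM Bug";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            SqlDataAdapter adapter = new SqlDataAdapter(commandText, connection);
            DataSet dataSet = new DataSet();

            // Fill() opens the connection if needed, runs the command, and
            // populates the DataSet with a disconnected copy of the data.
            adapter.Fill(dataSet, "Bug");

            DataTable table = dataSet.Tables["Bug"];
            foreach (DataRow row in table.Rows)
            {
                Console.WriteLine("{0}: {1}", row["BugID"], row["Description"]);
            }
        }
    }
}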
https://flylib.com/books/en/2.654.1/the_adonet_object_model.html
CC-MAIN-2020-10
refinedweb
730
56.25
Re: msgbox Hi Steven I've created a gridview and it has a delete button on each row ("<asp:CommandField"). I want to pop up a message box to ask the user Yes/No to confirm the delete instead of directly deleting the record. Can you help me? Thanks a lot Win "Steven Cheng [MSFT]" <stcheng@xxxxxxxxxxxxxxxxxxxx> wrote in message news:6bx8Z1JvIHA.4088@xxxxxxxxxxxxxxxxxxxxxx... Hi Win, From your description, you're using the MsgBox API in an ASP.NET web application to display a dialog box and want to control the position of the dialog, correct? As for the "MsgBox" API, would you tell me which control or component you are using? So far, based on my understanding, ASP.NET doesn't provide built-in support for displaying a message box, since the message box is displayed in the client-side browser (generally JavaScript is used to display it). I'm wondering whether you are using a System.Windows.Forms namespace class to show the message box? If this is the case, I'm afraid the WinForms API is not supported in an ASP.NET web application, since an ASP.NET application is a server-side application which mostly runs in a non-interactive process, and the message box shown via the WinForms API cannot be seen by the client user. The reason you may see it when you use Visual Studio's web test server on the local machine is that the web test server itself is a WinForms application (you can close the browser when the msgbox is displayed to verify this; it is displayed by the web test server process). In addition, in ASP.NET we normally use client script to display a message box (the alert JavaScript method). Here are some web articles introducing this: #Message Box in ASP.NET 2.0 #Adding Client-Side Message Boxes in your ASP.NET Web Pages #ASP.NET Alerts: how to display message boxes from server-side code? "win" <a@xxxxx> Subject: msgbox Date: Fri, 23 May 2008 12:54:18 +0800 I'm using ASP.Net 2.0 MsgBox("Are you sure to delete?", MsgBoxStyle.Question + MsgBoxStyle.OkCancel, Page.Title.ToString) The message box sometimes does not pop up on the top of the screen. Can I set the message box always on top of the screen? Thank you.
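For the GridView scenario in the first message, the client-script approach Steven describes is usually implemented by replacing the delete CommandField with a TemplateField whose button carries an OnClientClick confirm; the following is a hedged sketch, with ids and text as placeholders rather than code from this thread.

<%-- Hedged sketch: replace the delete CommandField with a TemplateField. --%>
<asp:TemplateField>
    <ItemTemplate>
        <asp:LinkButton
            OnClientClick="return confirm('Are you sure you want to delete this record?');" />
    </ItemTemplate>
</asp:TemplateField>

Returning false from confirm() cancels the postback, so the GridView's RowDeleting logic only runs when the user clicks OK.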
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.aspnet/2008-05/msg01611.html
crawl-002
refinedweb
428
65.83