Question:
I am using pure AS3 to build my project. I was wondering if there is any way to change the stage background color through AS3. Thanks for the help.
Solution:1
like this:
[SWF(backgroundColor="0xec9900")]
public class Main extends Sprite {
}
Solution:2
I have this in a creationComplete handler:

<s:Application xmlns:

private function on_init():void {
    stage.color = 0x000000;
}

Though I have a feeling it would work anywhere.
Solution:3
This creates a shape and adds it to the stage behind everything. To change the color at any time, call:
changeBGColor(0xFF0000) (to red)
It also maintains the size of the background (covering the whole area) when the window is resized.
import flash.display.Sprite;
import flash.events.Event;

var default_bg_color:uint = 0xffffff;
var bgshape:Sprite;

stage.align = "TL";
stage.scaleMode = "noScale";

function initBG():void {
    bgshape = new Sprite();
    bgshape.graphics.beginFill(default_bg_color);
    bgshape.graphics.drawRect(0, 0, stage.stageWidth, stage.stageHeight);
    addChildAt(bgshape, 0);
    stage.addEventListener(Event.RESIZE, resizeBGWithStage);
}

function changeBGColor(color:uint):void {
    bgshape.graphics.beginFill(color);
    bgshape.graphics.drawRect(0, 0, stage.stageWidth, stage.stageHeight);
}

function resizeBGWithStage(e:Event):void {
    try {
        bgshape.width = stage.stageWidth;
        bgshape.height = stage.stageHeight;
    } catch (err:Error) {
    }
}

initBG();
Solution:4
You should be able to use the following line of ActionScript 3.0 to set the background color: 0x000000 for black, 0xFFFFFF for white, and anything in between.
this.stage.color = 0x000000;
Solution:5
You can set the background colour on initialization, the way @Wopdoowop mentioned, but if you want to change it dynamically you would need to create your own bitmap/sprite/movieclip to act as a background (it should sit below the rest of your content and match the width and height of your stage), and change the colour of that object.
Solution:6
[SWF(width='700', height='525', backgroundColor='#000000', frameRate='30')]
public class RunTime extends Sprite {
Solution:7
Try setting the backgroundColor of the application object.
Solution:8
I suggest creating a sprite and placing it at the back. This is the way I would do it.
Make sure to
import flash.display.Sprite;
var bkg:Sprite = new Sprite();
bkg.graphics.beginFill(0x000000, 1); // replace the 0x000000 with a hex code
bkg.graphics.drawRect(0, 0, stage.stageWidth, stage.stageHeight);
bkg.graphics.endFill();
addChild(bkg);
A plus of this approach is that you can draw the background either manually or with code, and then add it via code.
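If you later want to change the color, a minimal sketch reusing the bkg sprite from above:

// redraw the existing background sprite with a new color
bkg.graphics.clear();
bkg.graphics.beginFill(0xFF0000, 1); // new color
bkg.graphics.drawRect(0, 0, stage.stageWidth, stage.stageHeight);
bkg.graphics.endFill();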
Next generation GPU API for Python
Project description
wgpu-py
Next generation GPU API for Python
Introduction
In short, this is a Python lib wrapping the Rust wgpu lib and exposing it with a Pythonic API similar to WebGPU. Based on wgpu-native.
To get an idea of what this API looks like have a look at triangle.py and the other examples.
Status
This is experimental, work in progress, you probably don't want to use this just yet!
- We have a few working examples!
- Support for Windows and Linux. Support for MacOS is underway.
- We have not fully implemented the API yet.
- The API may change. We're still figuring out what works best.
- The API may change more. Until WebGPU settles as a standard, its specification may change, and with it our API probably will too.
Installation
pip install wgpu
pip install python-shader  # optional - our examples use this to define shaders
The library ships with Rust binaries for Windows, MacOS and Linux. If you want to use
a custom build instead, you can set the environment variable
WGPU_LIB_PATH.
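For example, a minimal sketch of pointing the library at a custom build (the path below is a placeholder):

import os

# Placeholder path - use the location of your own wgpu-native build
os.environ["WGPU_LIB_PATH"] = "/path/to/custom/libwgpu_native.so"

import wgpu
import wgpu.backend.rs  # the backend picks up the custom binary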
Platform requirements
Under the hood,
wgpu runs on Vulkan or Metal, and eventually also DX12 or OpenGL.
On Windows 10, things should just work. On older Windows versions you may need to install the Vulkan drivers (or wait for the DX12 backend to become more mature).
On Linux, it's advisable to install the proprietary drivers of your GPU
(if you have a dedicated GPU). You may need to
apt install mesa-vulkan-drivers.
Note that on Linux, the
tk canvas does not work. Wayland currently only
works with the GLFW canvas (and is unstable).
On MacOS you need at least 10.13 (High Sierra) to have Vulkan support. At the moment, we've not implemented drawing to a window yet (see #29).
Usage
The full API is accessible via the main namespace:
import wgpu
But to use it, you need to select a backend first. You do this by importing it. There is currently only one backend:
import wgpu.backend.rs
GUI integration
To render to the screen you can use any of the following GUI toolkits:
Tk (included with Python),
glfw,
PySide2,
PyQt5,
PySide,
PyQt4.
Web support
We are considering future support for compiling (Python) visualizations to the web via PScript and Flexx. We try to keep that option open as long as it does not get in the way too much. No promises.
Run python setup.py develop; this will also install our only runtime dependency, cffi.
.NET
Raygun4Net - .NET Error Tracking & Reporting
Select your .NET platform
Raygun error monitoring and crash reporting is available for a wide variety of .NET platforms including ASP.NET, Windows Phone and Xamarin. Select which platform you are integrating Raygun with to jump to the specific documentation.
Adobe Air
RaygunAS - Adobe Air
Get Adobe Air error and crash reporting with Raygun. Adobe Air is supported using the third party provider RaygunAS.
RaygunAS is hosted on Github here:
Building the Client
Clone the repository and build the .swc file.
Copy the .swc file to wherever you have your libs and all is set.
Usage
Then go to your app and put the following lines in the main Sprite class, after some initial setup is complete, but before the main drawing begins:
_raygunAs = new RaygunAS(this, RAYGUN_API_KEY, APP_VERSION);
_raygunAs.addEventListener(RaygunAS.READY_TO_ZAP, onRaygunReady);
_raygunAs.chargeRaygun();
}

private function onRaygunReady(event:Event):void {
    // do logic here
}
Initialize your main method in the onRaygunReady callback, this will allow RaygunAS to capture an error at any point in the stack.
That's it for the setup; now if an Error is thrown, RaygunAS will be triggered and send the error report to your app's dashboard.
Additionally you can attach a RaygunAs.RAYGUN_COMPLETE listener to your RaygunAS instance in order to catch those errors. It handles UncaughtErrorEvent.UNCAUGHT_ERROR events.
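A minimal sketch of such a listener (the handler name is illustrative):

_raygunAs.addEventListener(RaygunAS.RAYGUN_COMPLETE, onRaygunComplete);

private function onRaygunComplete(event:Event):void {
    // the error report has been delivered to Raygun
}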
Android
Raygun4Android - Android Crash Reporting
The Raygun4Android provider
Raygun4Android is a library that you can easily add to your Android app, which will then allow you to transmit all exceptions to your Raygun.io dashboard. Installation is painless, and configuring your site to transmit errors and crashes takes less than five minutes, so you can start receiving error and crash reports right away.
Raygun4Android supports all devices with Gingerbread and newer installed (2.3/API v9). It can be added as a Gradle or Maven dependency, or manually adding the JAR to your project (if you use Ant or other build tools).
Setup instructions
Gradle and Android Studio
Ensure Maven Central is present in your project's build.gradle:
allprojects {
    repositories {
        mavenCentral()
    }
}
Then add the following two compile statements to the
dependencies section in your module's build.gradle:
dependencies {
    // Existing dependencies may go here
    compile 'com.google.code.gson:gson:2.1'
    compile 'com.mindscapehq.android:raygun4android:3.0.2'
}
Then sync your project. You may need to add the following specific imports to your class, where you wish to use RaygunClient:
import main.java.com.mindscapehq.android.raygun4android.RaygunClient;
import main.java.com.mindscapehq.android.raygun4android.messages.RaygunUserInfo;
Then see the configuration section below.
With Maven
To your pom.xml, add:
<dependency>
  <groupId>com.mindscapehq.android</groupId>
  <artifactId>raygun4android</artifactId>
  <version>3.0.2</version>
</dependency>
In your IDE, build your project (or run mvn compile), then see the configuration section below.
With Ant, other build tools, or manually
Download the JAR for the latest version, as well as the Gson library (if you do not already use it). Place both of these in a /lib folder in your project, add them to your project's classpath, then see below.
Configuration and Usage
In your AndroidManifest.xml, make sure you have granted Internet permissions. Beneath the manifest element add:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
Inside the application element, add:
<service android:name="com.mindscapehq.android.raygun4android.RaygunPostService" android:exported="false" />
<meta-data android:name="com.mindscapehq.android.raygun4android.apikey" android:value="YOUR_API_KEY" />
And replace the value in meta-data with your API key, available from your Raygun dashboard.
In a central activity method (such as
onCreate()), call the following:
RaygunClient.init(getApplicationContext());
RaygunClient.attachExceptionHandler();
The above exception handler automatically catches and sends all uncaught exceptions. You can create your own, or send from a catch block by calling RaygunClient.send() and passing in the Throwable.
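For example, a minimal sketch of sending from a catch block (riskyOperation is a hypothetical method):

try {
    riskyOperation(); // hypothetical method that may throw
} catch (Exception e) {
    RaygunClient.send(e); // report the handled exception to Raygun
}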
A usage example is also available from the GitHub repository and Maven, in /sample-app.
ProGuard support
Raygun includes support for retracing your exception reports that have been obfuscated with ProGuard. This is achieved if you upload the relevant ProGuard mapping.txt file to your application in Raygun. Retracing is done automatically to each report as they come into Raygun so that they are presented to you with readable stacktraces.
All the documentation for ProGuard support can be found here.
Affected user tracking
Raygun supports tracking the unique users who encounter bugs in your apps.
By default the device UUID is transmitted. You can also add the currently logged in user's data like this:
RaygunUserInfo user = new RaygunUserInfo();
user.setIdentifier("[email protected]");
user.setFirstName("User");
user.setFullName("User Name");
user.setEmail("[email protected]");
user.setUuid("a uuid");
user.setAnonymous(false);
RaygunClient.setUser(user);
Any of the properties are optional, for instance you can set just isAnonymous by calling setAnonymous(). There is also a constructor overload if you prefer to specify all in one statement.
identifier should be a unique representation of the current logged-in user - we will assume that messages with the same identifier are the same user. If you do not set it, it will be automatically set to the device UUID.
If the user context changes, for instance on log in/out, you should remember to call setUser again to store the updated username.
Version tracking
Set the versionName attribute in your AndroidManifest.xml to be of the form x.x.x.x, where x is a positive integer, and it will be sent with each message. You can then filter by version in the Raygun dashboard.

Getting/setting/cancelling the error before it is sent

You can inspect or modify the crash report before it is sent by providing an implementation of onBeforeSend:

public class BeforeSendImplementation implements RaygunOnBeforeSend {
    @Override
    public RaygunMessage onBeforeSend(RaygunMessage message) {
        Log.i("OnBeforeSend", "About to post to Raygun, returning the payload as is...");
        return message;
    }
}

public class FullscreenActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        // Initialize the activity as normal
        RaygunClient.init(getApplicationContext());
        RaygunClient.attachExceptionHandler();
        RaygunClient.setOnBeforeSend(new BeforeSendImplementation());
    }
}
In the example above, the overridden
onBeforeSend method will log an info message every time an error is sent.
Custom error grouping

You can override Raygun's default grouping logic for Android exceptions by setting the grouping key manually in onBeforeSend.
API
See the GitHub repository for a full list of public initializing, attaching, sending and other methods.
Troubleshooting/Frequently Asked Questions
- Is there an example project?
Yes! Clone our GitHub repository then load the sample-app project. It has been confirmed to run on the emulator for SDK >= 9, and physical devices (4.1.2).
Not seeing errors in the dashboard?
Raygun4Android outputs Logcat messages - look for the 'Exception Message HTTP POST result' message - 403 will indicate an invalid API key, 400 a bad message, and 202 will indicate received successfully.
Also, ensure that you have called attachExceptionHandler() after init(), or provided your own uncaught exception handler that sends using RaygunClient.
Environment data
A selection of environment data will be attached and available in the Environment tab in the dashboard, and more in the Raw tab. This data is gathered from android.os.Build - if you wish to see more data, you can add it to the userCustomData object.
What happens when the phone has no internet connection?
Raygun4Android will save the exception message to disk. When the provider is later asked to send another message it will check if the internet is now available, and if it is, send the cached messages. A maximum of 64 messages will be cached, then overwritten (using a LRU strategy).
The provider now attaches the device's network connectivity state to the payload when the exception occurs. This is visible under the Request tab in the Raygun dashboard.
Angular
Angular error tracking
Angular (v2+)
Step 1
Install the client library
On the command line
Install the raygun4js library with NPM:
$ npm install raygun4js --save
Step 2
Setup
Add the following setup code. For this example, we’ll add a new script: app.raygun.setup.ts
This is our recommended setup for Angular + TypeScript.
Error handling
import * as rg4js from 'raygun4js';
import { ErrorHandler } from '@angular/core';

const VERSION_NUMBER = '1.0.0.0';

rg4js('apiKey', 'INSERT_API_KEY_HERE');
rg4js('setVersion', VERSION_NUMBER);
rg4js('enableCrashReporting', true);
rg4js('enablePulse', true);

export class RaygunErrorHandler implements ErrorHandler {
  handleError(e: any) {
    rg4js('send', {
      error: e,
    });
  }
}
In the main app module script (app.module.ts):
import { ErrorHandler } from '@angular/core';
import { RaygunErrorHandler } from './app.raygun.setup';

// Add the custom error handler to the providers array
@NgModule({
  imports: [...],
  declarations: [...],
  providers: [{...}, {
    provide: ErrorHandler,
    useClass: RaygunErrorHandler
  }],
  bootstrap: [...]
})
Tracking route changes

In the main app component script (app.component.ts):
import { Component, OnInit } from '@angular/core';
import { Router, NavigationEnd, NavigationError } from '@angular/router';
import * as rg4js from 'raygun4js';

export class AppComponent implements OnInit {
  constructor(private router: Router) {}

  ngOnInit() {
    this.router.events.subscribe(event => {
      if (event instanceof NavigationEnd) {
        // Track navigation end
        rg4js('trackEvent', {
          type: 'pageView',
          path: event.url
        });
      } else if (event instanceof NavigationError) {
        // Track navigation error
        rg4js('send', {
          error: event.error
        });
      }
    });
  }
}
This is just a simple example of the type of routing events you can track, see the Angular routing events documentation for all of the types of events you can track.
Step 3
Add user data:
To set up the library to transmit data for the currently logged-in user, add the following lines of JS code inside the end of the script block from the previous step. (optional, but highly recommended):
rg4js('setUser', {
  identifier: 'users_email_address_or_unique_id',
  isAnonymous: false,
  email: '[email protected]',
  firstName: 'Firstname',
  fullName: 'Firstname Lastname'
});
Step 4
Done! We recommend raising a test exception from your application right now to view your dashboard and test that everything is wired up correctly.
C++
Raygun for C++ using Breakpad
- Drag and drop a "Release" mode PDB or previously generated Breakpad symbol file, or select a file to import to the Breakpad center. You can use the Breakpad tooling as part of your build toolchain to generate the symbol files yourself.
ColdFusion
How do I get started with raygun4cfml?
Installation
Option 1 (preferred):
Use Commandbox and Forgebox to get the library and then follow the ideas outlined in 'Library organisation'.
Option 3:
Download a zip file containing the current content of the repo or a release/tag of your choice. Unzip the resulting file. Move the src/test directories into places of your choice suitable for your system and follow the ideas outlined in 'Library organisation'.
Configuration
Sending user data
Raygun4cfml GitHub Repository
To get the source code, visit the GitHub repository here.
Drupal
Raygun4Drupal - Drupal Error Reporting
Raygun for Drupal is available from:
Raygun for Drupal offers easy codeless setup of error and crash reporting for your Drupal application.
Requirements
This module requires:
- PHP 5.3+
- You have downloaded the raygun4php library (found at) to your `sites/all/libraries` directory under a sub folder `raygun`.
The libraries directory should look like:
sites/all/libraries
-- raygun
---- RaygunClient.php
---- RaygunEnvironmentMessage.php
....
The easiest way to do this is to navigate to the sites/all/libraries folder (creating it if it doesn't already exist) and running the following commands:
mkdir raygun git clone mv -f raygun4php/src/Raygun4php/* raygun/
And optionally (to clean the directory):
rm -rf raygun4php
Installing the Module
Logging into the admin panel of Drupal and access the Modules page, click Install new module
Paste the download link of the module; it should look like this:
Enable the Module on the Modules page.
Follow the Configure link and paste your API key into the API key section, you can configure some other options here as well.
After saving the configuration Raygun should be live and will collect any errors which occur on your Drupal site.
FluentD
FluentD output plugin to Raygun Crash Reporting
fluent-plugin-raygun is a FluentD output plugin that sends aggregated errors/exception events to Raygun.
This plugin extends the fluent buffered output and reports the events as crash reports to your Raygun dashboard. Currently we support limited information in the reports sent by our plugin. The reports include the following information:
OccuredOn - The date & time in which the event was sent by the raygun plugin.
MachineName - The hostname provided through the plugin configuration.
ErrorMessage - The event's record message.
Tags - The tag used to separate the event in the fluentd logging.
Requirements
This plugin requires:
- Ruby v1.9.3 or higher
- FluentD v0.12 or v0.10
Setup Instructions
Once FluentD has been installed following the instructions detailed here, set up the plugin as follows.
Install the Raygun plugin using
gem:
fluent-gem install fluent-plugin-raygun
Update your FluentD config to include a matching rule to output to Raygun:
<match>
  @type raygun
  api_key YOUR_API_KEY
</match>
Options
api_key - The key used to validate the reports sent to Raygun. Found in the Raygun dashboard under application settings.
default_level - The logging level at which to send events (options: fatal, error, warning, info or debug). The default is set to error.
default_logger - If a logger is not provided the default logger is used. The default is set to fluentd.
endpoint_url - The URL used by the raygun plugin to post reports to. The default is set to.
flush_interval - The time between data flushes. The default is set to zero (0) seconds.
hostname_command - The name of the server reporting the error. The default is set to hostname.
record_already_formatted - If set to false we transform the event's record into the format required by raygun's API. The default is set to false.
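Putting the options together, a sketch of a fuller configuration (the match pattern and values are examples only):

<match myapp.**>
  @type raygun
  api_key YOUR_API_KEY
  default_level error
  default_logger fluentd
  hostname_command hostname
  flush_interval 0s
</match>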
GitHub Repository
Visit the fluent-plugin-raygun GitHub repository to view the code. Like all our providers, it is available under the MIT license.
Not satisfied with our provider? We are open to pull requests, so feel free to submit one for us to review!
Go
Raygun4go - Go error reporting
About
raygun4go adds Raygun-based error handling to your golang code. It catches all occurring errors, extracts as much information as possible and sends the error to Raygun, where it is viewable in the Crash Reporting dashboard.
Contents
Getting Started
Installation
$ go get github.com/MindscapeHQ/raygun4go
Basic Usage
Include the package and then defer the HandleError method as soon as possible, in a context as global as possible. In webservers this will probably be your request handling method; in all other programs it should be your main method. Having found the right spot, just add the following example code:
raygun, err := raygun4go.New("appName", "apiKey")
if err != nil {
    log.Println("Unable to create Raygun client:", err.Error())
}
raygun.Silent(true)
defer raygun.HandleError()
where
appName is the name of your app and
apiKey is your Raygun-API-key. If your program runs into a panic now (which you can easily test by adding
panic("foo") after the call to
defer), the handler will print the resulting error message. If you remove the line
raygun.Silent(true)
the error will be sent to Raygun using your API-key.
Options
The client returned by
New has several chainable option-setting methods:
Silent(bool)
If set to true, this prevents the handler from sending the error to Raygun, printing it instead.
Request(*http.Request)
Adds the responsible http.Request to the error.
Version(string)
If your program has a version, you can add it here.
Tags([]string)
Adds the given tags to the error. These can be used for filtering later.
CustomData(interface{})
Add arbitrary custom data to you error. Will only reach Raygun if it works with
json.Marshal().
User(string)

Add the name of the affected user to the error.
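As a sketch, the options above can be chained on the client before deferring HandleError (the values are illustrative):

raygun, err := raygun4go.New("appName", "apiKey")
if err != nil {
    log.Println("Unable to create Raygun client:", err.Error())
}
raygun.Version("1.2.0").
    Tags([]string{"production", "worker"}).
    CustomData(map[string]string{"build": "nightly"}).
    User("[email protected]")
defer raygun.HandleError()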
GitHub repository
You can find the source code for this library at. If you have any issues or feature requests you can add them there, or by clicking on the Feedback button in the Raygun web app's sidebar.
iOS
Raygun4iOS - iOS Error & Crash Reporting
With smart iOS crash reporting from Raygun, you'll be alerted to iOS errors the second they happen.
Raygun crash reporting and error monitoring is available for iOS with the raygun4iOS provider. The raygun4iOS provider can be used in Objective-C and Swift iOS applications. The setup instructions below include steps to integrate Raygun4iOS in both types of applications. The features described in these docs also apply to both languages.
Setup instructions (Using CocoaPods)
Raygun can be obtained through CocoaPods which is an iOS dependency manager. Read more about CocoaPods here, or see further below for the manual setup instructions.
1. Update Podfile
Add the following to your project's Podfile:
platform :ios
pod 'Raygun4iOS'

Then run pod install, import Raygun4iOS and initialize it with your API key in your AppDelegate:

#import <Raygun4iOS/Raygun4iOS.h>

[Raygun sharedReporterWithApiKey:@"YOUR_APP_API_KEY"];
The steps above will cause all unhandled exceptions to be sent to your Raygun account.
Troubleshooting
If your project does not build after performing these steps, your projects "Other Linker Flags" option is most likely overriding the Pods project option. (You'd get a warning about this after running pod install). To fix this, click your project in Xcode, and then select your main app target. Go to the "Build Settings" tab, search for "Other Linker Flags" and add $(inherited) to this option and you'll be good to go.
Setup instructions (Manual)
If you don't use CocoaPods, follow these instructions to manually download and reference the Raygun4iOS framework.
1. Download Raygun4iOS
Download and unzip the current version from here: Raygun4iOS Version 2.3.4
Change log:
- Expanded automatic network logging coverage to include NSURLSession methods using NSURLSessionTaskDelegates.
- Able to enable/disable network logging when using Pulse.
- Added the ability to filter out data sent for network requests or view load timings
Legacy versions:
- Raygun4iOS Version 2.3.3
- Raygun4iOS Version 2.3.2
- Raygun4iOS Version 2.3.1
- Raygun4iOS Version 2.2.1
- Raygun4iOS Version 2.2.0
- Raygun4iOS Version 2.1.3
- Raygun4iOS Version 2.1.2
- Raygun4iOS Version 2.1.1
- Raygun4iOS Version 2.1.0
- Raygun4iOS Version 2.0.0
2. Reference Raygun4iOS in your project
In Xcode, click on your project and then select your main app target. Go to the "Build Phases" tab and expand "Link Binary With Libraries". Drag Raygun4iOS.framework into the library list.

Then import Raygun4iOS and initialize it in your AppDelegate:

#import <Raygun4iOS/Raygun4iOS.h>

[Raygun sharedReporterWithApiKey:@"YOUR_APP_API_KEY"];
5. Link the C++ library
In Xcode, click on your project and then select your main app target. Go to the "Build Settings" tab, search for "Other Linker Flags" and add "-lc++" to this setting.
The steps above will cause all unhandled exceptions to be sent to your Raygun account.
Swift
Swift is another language created by Apple for building iOS (and OSX) applications. The same Raygun4iOS provider mentioned above for Objective-C iOS applications can also be used for error reporting in Swift iOS apps.
- Installation - same as with Objective-C applications, there are two ways to install Raygun4iOS into your Swift iOS app: via the CocoaPods dependancy manager, or by manually downloading and referencing the library. The CocoaPods instructions and the download link can be found above.
- Once your application is referencing the Raygun4iOS library, import Raygun4iOS into the bridging-header file of your Swift app:
#import <Raygun4iOS/Raygun4iOS.h>
- Finally, in AppDelegate.swift, add the following code to the application function:
Raygun.sharedReporterWithApiKey("YOUR_APP_API_KEY")
A complete tutorial of these steps can be found here.
Affected user tracking

Raygun supports tracking the unique users who encounter errors. Call identify with a unique user identity after the iOS initialization:
[Raygun sharedReporterWithApiKey:@"YOUR_APP_API_KEY"]; [[Raygun sharedReporter] identify:@"UNIQUE_USER_IDENTITY"];
Custom tags and data
When sending exceptions manually, there is an option to include an array of strings and/or a dictionary of custom data. This data is displayed in your Raygun dashboard when viewing exception instance data. The tag values are also searchable in your Raygun dashboard so that you can find all exception instances that you've marked with a particular flag. Sending tags and custom data can be done using one of the send method overloads below. In the second overload, the tags array can be nil if you only need to send the data dictionary. Please note: currently the custom data dictionary should only include simple values such as strings and numbers.
[[Raygun sharedReporter] send:exception withTags: array];
[[Raygun sharedReporter] send:exception withTags: array withUserCustomData: dictionary];
Symbolication

Raygun supports symbolicating your iOS crash reports using your app's dSYM files, so that stack traces are presented with readable symbols.
Java
Raygun4Java - Java Error & Crash Reporting
Contents
- The Raygun4Java provider
- Installation
- Usage
- Play 2 Framework for Java and Scala
- Sending asynchronously
- Custom user data and tags
- Unique user tracking
- Version tracking
- Getting/setting/cancelling the error before it is sent
- Custom error grouping
- Troubleshooting
The Raygun4Java provider
Raygun crash reporting and error monitoring is easily available with raygun4java. Add the following to your pom.xml:

<dependencies>
  <dependency>
    ...
    <version>[2.1.1)</version>
  </dependency>
  <dependency>
    <groupId>com.mindscapehq</groupId>
    <artifactId>core</artifactId>
    <version>[2.1.1)</version>
  </dependency>
</dependencies>
POM for Web Projects
If you're using servlets, JSPs or similar, you'll need to also add:
<dependency>
  <groupId>com.mindscapehq</groupId>
  <artifactId>webprovider</artifactId>
  <version>[2.1.1)</version>
</dependency>

If you're not using Maven, place core-2.0.0.jar, webprovider-2.0.0.jar and gson-2.1.jar on the classpath.
Play 2 framework for Java and Scala
This provider now contains a dedicated Play 2 provider for automatically sending Java and Scala exceptions from Play 2 web apps. Feedback is appreciated if you use this provider in a Play 2 app. You can use the plain core-2.x.x provider from Scala, but if you use this dedicated Play 2 provider HTTP request data is transmitted too.
Installation
With SBT
Add the following line to your build.sbt's libraryDependencies:
libraryDependencies ++= Seq(
  "com.mindscapehq" % "raygun4java-play2" % "2.2.0"
)
Usage
For automatic exception sending, in your Play 2 app's global error handler, RaygunPlayClient has a method which allows you to pass in a RequestHeader and send a Throwable. If you have changed your global class in conf/application.conf, the appropriate code below should be placed in that class instead.
In Scala - app/Global.scala:
override def onError(request: RequestHeader, ex: Throwable) = {
  val rg = new RaygunPlayClient("your_api_key", request)
  val result = rg.SendAsync(ex)
  ...
}

Sending asynchronously

To send exceptions without blocking the calling thread, use
SendAsync().
Custom user data and tags
To attach custom data or tags, use these overloads on Send:
RaygunClient client = new RaygunClient("apikey");
Exception exception;

ArrayList<String> tags = new ArrayList<String>();
tags.add("tag1");

Map<String, Integer> userCustomData = new HashMap<String, Integer>();
userCustomData.put("data", 1);

client.Send(exception, tags);
// or
client.Send(exception, tags, userCustomData);
Tags can be null if you only wish to transmit custom data. Send calls can take these objects inside a catch block (if you want one instance to contain specific local variables), or in a global exception handler (if you want every exception to contain a set of tags/custom data, initialized on construction).
Affected user tracking
You can call
client.SetUser(RaygunIdentifier) to set the current user's data, which will be displayed in the dashboard. There are two constructor overloads available, both of which require a unique string as the uniqueUserIdentifier. This should be the user's email address if available, or an internally unique ID representing the user. Any errors containing this string will be considered to come from that user.
The other overload contains all the available properties, some or all of which can be null and can be also be set individually on the RaygunIdentifier object.
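A minimal sketch (the single-argument constructor and the individual setter shown are assumptions based on the description above):

RaygunClient client = new RaygunClient("apikey");

// Assumed constructor taking the unique user identifier
RaygunIdentifier user = new RaygunIdentifier("[email protected]");
user.setFullName("User Name"); // assumed optional property setter
client.SetUser(user);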
The previous method, SetUser(string), has been deprecated as of 1.5.0.
Version tracking
Raygun4Java reads the version of your application from your manifest.mf file in the calling package. It first attempts to read this from Specification-Version, then Implementation-Version if the first doesn't exist.
A SetVersion(string) method is also available to manually specify this version (for instance during testing). It is expected to be in the format X.X.X.X, where X is a positive integer.

Getting/setting/cancelling the error before it is sent

A handler can inspect the message before it is sent; returning null cancels the send:

...) {
    // Cancelling sending message to Raygun...
    return null;
}
Custom error grouping
Troubleshooting
Raygun4java GitHub Repository
For complete installation instructions, getting the source code and troubleshooting, visit the GitHub repository here.
JavaScript
JavaScript Error Tracking With Raygun
JavaScript error tracking with Raygun is available using the Raygun4js provider.
Raygun4js is a library that you can easily add to your website or web application, which will then monitor your application and display all JavaScript errors affecting your users within your Raygun dashboard. Installation is painless, and configuring your site to transmit errors takes only five minutes.
The provider is a single script which includes the sole dependency, allowing you to drop it straight in. It is also available as a script snippet which loads the library from Raygun's CDN.
The provider is open source and available at Raygun4js GitHub repository.
How do I track JavaScript errors using Raygun?
Raygun4js has two main functions, the first to transmit JavaScript errors manually caught in try-catch blocks, and the second which is a handler to catch and send all errors caught in window.onerror.
Supported browsers and platforms
Modern browsers are supported out-of-the-box. Auto-updating versions of Chrome, Firefox, IE10+, Safari and Opera. IE 8 and 9 are also supported (the former requires the
allowInsecureSubmissions option). Pulse requires IE >= 9 for Navigation Timing API support.
Raygun4js also has dedicated support for WinJS, and works on various Android browsers and smart TVs. Support for React Native and other bundled mobile frameworks is available.
It is also known to run on old versions of Firefox (pre 4.0) and Opera. Legacy browsers such as Netscape, IE <= 7 are supported on a best-effort basis. Naturally, browsers that do not have JavaScript enabled are incompatible.
Contents
- Getting started
- Usage
- Initialization options
- Pulse API
- Breadcrumbs API
- Multiple Raygun objects on a single page
- NoConflict mode
- Callback events
- Custom error grouping
- Sending custom data
- Adding tags
- User tracking
- Version tracking
- Filtering sensitive data
- Source maps
- Offline error saving
- Errors in scripts hosted on other domains
- AngularJS
- React Native
Getting Started
Step 1

Add the Raygun4js snippet to your markup before the closing </head> tag. It will also catch errors that are thrown while the page is loading, and send them when the script is ready.
Step 2
Add the following lines to your JavaScript site code just before the closing body tag and paste in your API key (from your Raygun dashboard), to set up the provider to automatically send errors to your Raygun account:
<script type="text/javascript"> rg4js('apiKey', 'paste_your_api_key_here'); rg4js('enableCrashReporting', true); </script>
This will configure the provider to automatically send all unhandled JavaScript errors to Raygun.
That's it for the basic setup! See Usage below for more info on how to send errors.
Alternative setup options
Note: This library can now be interacted with in two ways, the V1 API and the V2 API. The V1 API is available as 'public' functions on the global Raygun object, and is intended to be used to control the provider during runtime. Legacy setup methods remain on this API for backwards compatibility with 1.x releases. The V2 API is made available when using the snippet (above), and is used to asynchronously configure the provider during onLoad. This is the recommended approach for new setups.
If you are installing the provider locally using a package manager or manually, you can either use the V2 API by adding the snippet and replace the second-last parameter with the URL of your hosted version of the script, or use the V1 API. The snippet/V2 approach does not support the script being bundled with other vendor scripts, but the V1 API does.
Snippet without page load error handler
If you do not want errors to be caught while the page is loading, use this snippet here.
Synchronous methods
Note that using these methods will not catch errors thrown while the page is loading. The script needs to be referenced before your other site/app scripts, and will block the page load while it is being downloaded/parsed/executed.
This will also disrupt Pulse timings, making them erroneous. For Pulse, it is especially important that the async snippet method above is used, instead of one of the following.
Bower
Using the Bower package manager, you can install it by running this command in a shell:
bower install raygun4js
NPM
npm install raygun4js --save
This lets you
require the library with tools such as Webpack or Browserify.
NuGet
Visual Studio users can get it by opening the Package Manager Console and typing:
Install-Package raygun4js
React Native/Webpack/as a UMD module
React Native and other bundled app frameworks that uses packaging/module loading libraries can use Raygun4js as a UMD module:
// Install the library
npm install raygun4js --save

// In a central module (as early as possible), reference and install the library with either this syntax:
import rg4js from 'raygun4js';

// Or this syntax:
var rg4js = require('raygun4js');

// Then set your config options (in one module only)
rg4js('enableCrashReporting', true);
rg4js('apiKey', 'paste_your_api_key_here');
All unhandled JavaScript errors will then be sent to Raygun. You can then
import rg4js from 'raygun4js' in any other modules and use the rest of the V2 API below - including
rg4js('send', anErrorObject) for manual error sending.
Note that the UMD module has a major limitation for web applications as errors that occur during page load in the bundle before the Raygun dependency is executed, such as syntax errors or runtime errors, can't be caught and sent. Also, Pulse timings may be severely disrupted. Thus, the HTML snippet at the end of </head> is greatly preferred to ensure you don't miss any errors and for data correctness. The tradeoff with this method is that it is slightly less idiomatic to call rg4js as a global variable without importing/requiring it. As such, we only recommend this approach for React Native and other bundled mobile app (non-web) frameworks.
Manual download
Download the production version or the development version.
You can also download a version without the jQuery hooks if you are not using jQuery or you wish to provide your own hooks. Get this as a production version or development version.
Usage
To send errors manually:
Raygun.init("apikey");

try {
  throw new Error("Description of the error");
} catch (e) {
  Raygun.send(e);
}
In order to get stack traces, you need to wrap your code in a try/catch block like above. Otherwise the error hits
window.onerror handler and may only contain the error message, line number, and column number.
You also need to throw errors with
throw new Error('foo') instead of
throw 'foo'.
To automatically catch and send unhandled errors, you can attach the automatic window.onerror handler callback:
rg4js('enableCrashReporting', true);
If you need to detach it from window.onerror (which will disable automatic unhandled error sending):
rg4js('detach');
IE8
If you are serving your site over HTTP and want IE8 to be able to submit JavaScript errors then you will need to set the following setting which will allow IE8 to submit the error over HTTP. Otherwise the provider will only submit over HTTPS which IE8 will not allow while being served over HTTP.
rg4js('options', { allowInsecureSubmissions: true });
Enabling Pulse
To enable Pulse (Real User Monitoring), make this call:
rg4js('enablePulse', true);
Legacy V1 documentation
The old documentation for the V1 API (
Raygun.send() etc) is available here.
Initialization options
To configure the provider, call one of these and pass in an options object:
rg4js('options', { // Add some or all of the options below });
The second parameter is an object containing the options you wish to set, for instance ignore3rdPartyErrors to filter out cross-origin 'Script Error' reports (see below), as well as the Pulse options:

pulseMaxVirtualPageDuration - The maximum time a virtual page can be considered viewed, in milliseconds (defaults to 30 minutes).

pulseIgnoreUrlCasing - Ignore URL casing when sending data to Pulse.
Pulse API
Tracking Single Page Application view change events
Raygun Pulse supports client-side SPAs through the trackEvent function:
rg4js('trackEvent', {
  type: 'pageView',
  path: '/' + window.location.pathname // Or perhaps window.location.hash
});
When a route or view change is triggered in your SPA, this function should be called with type being pageView and path set to a string representing the new view or route. Pulse will collect up all timing information that is available and send it to the dashboard. These are then viewable as 'virtual pages' in your Pulse dashboard.
Tracking custom timings
You can override the time when Raygun4JS considers your page to be loaded at, as well as send up to 10 custom timings of your choosing, with the Custom Timings capability. For documentation on this, see:.
Breadcrumbs API
These should be called if needed during your page's lifecycle:
rg4js('one-of-the-options-below')
rg4js('disableAutoBreadcrumbs') - Disable all the automatic breadcrumb integrations (clicks, requests, console logs and navigation events). This has an inverse
enableAutoBreadcrumbs which is the default
rg4js('disableAutoBreadcrumbsConsole') - Disable just automatic breadcrumb creation from console messages
rg4js('disableAutoBreadcrumbsNavigation') - Disable just automatic breadcrumb creation from navigation events
rg4js('disableAutoBreadcrumbsClicks') - Disable just automatic breadcrumb creation from element clicks
rg4js('disableAutoBreadcrumbsXHR') - Disable just automatic breadcrumb creation XMLHttpRequests
All of the above have an inverse
enableAutoBreadcrumbs which is the default
rg4js('setAutoBreadcrumbsXHRIgnoredHosts', []) - This can be set to an array of hostnames to not create a breadcrumb for requests/responses to. The values inside the array can either be strings that an indexOf check against the host is made, or regexes which is matched against the host.
rg4js('setBreadcrumbLevel', 'warning') - Set the minimum level of breadcrumb to record. This works the same as log levels, you may set it to debug, info, warning and error and it will only keep breadcrumbs with a level equal or above what this is set to. Valid values are one of
['debug', 'info', 'warning', 'error'] defaults to info
rg4js('logContentsOfXhrCalls', true) - If set to true will include the body contents of XHR request and responses in Breadcrumb metadata, defaults to false
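For example, a sketch combining a few of the options above (the hostnames and level are illustrative):

rg4js('setAutoBreadcrumbsXHRIgnoredHosts', ['analytics.example.com', /cdn/]);
rg4js('setBreadcrumbLevel', 'warning');
rg4js('logContentsOfXhrCalls', true);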
Logging a breadcrumb
Breadcrumbs can be manually logged via
rg4js('recordBreadcrumb', ...)
There are two argument formats:
rg4js('recordBreadcrumb', 'breadcrumb-message', {object: 'that will be attached to the breadcrumb custom data'})
This is the quickest way to log basic breadcrumbs, requiring only a message and optionally an object to attach some metadata
If you wish to have further control over the breadcrumb, such as configuring the level (debug, info, warning, error) or setting the class/method the breadcrumb was logged from, you may use this argument format:

rg4js('recordBreadcrumb', {
  message: 'breadcrumb-message',
  metadata: { goes: 'here' },
  level: 'info',
  location: 'class:method'
});
Payload size conservation
To help ensure your payload does not become too large, only the most recent 32 breadcrumbs are kept, and recorded network request/response texts are limited to 500 characters.
Multiple Raygun objects on a single page
You can have multiple Raygun objects in global scope. This lets you set them up with different API keys for instance, and allow you to send different errors to more than one application in the Raygun web app.
To create a new Raygun object and use it call:
var secondRaygun = rg4js('getRaygunInstance').constructNewRaygun();
secondRaygun.init('apikey');
secondRaygun.send(...);
Only one Raygun object can be attached as the window.onerror handler at one time, as onerror can only be bound to one function at once. Whichever Raygun object had
attach() called on it last will handle the unhandled errors for the page.
Note that you should use the V1 API to send using the second Raygun object, and it should be created and called once the page is loaded (for instance in an
onload callback).
NoConflict mode
If you already have an variable called Raygun attached to
window, you can prevent the provider from overwriting this by enabling NoConflict mode:
rg4js('noConflict', true);
To then get an instance of the Raygun object when using V2, call this once the page is loaded:
var raygun = rg4js('getRaygunInstance');
Callback events
onBeforeSend
rg4js('onBeforeSend', function (payload) { return payload; });
Call this function and pass in a function which takes one parameter (see the example below). This callback function will be called immediately before the payload is sent. The one parameter it gets will be the payload that is about to be sent. Thus from your function you can inspect the payload and decide whether or not to send it.
From the supplied function, you should return either the payload, or return
false.
If your function returns a truthy object, Raygun4js will attempt to send it as supplied. Thus, you can mutate it as per your needs - preferably only the values if you wish to filter out data that is not taken care of by
filterSensitiveData(). For example:

var myBeforeSend = function (payload) {
  // Inspect and optionally mutate the payload here
  return payload;
};

rg4js('onBeforeSend', myBeforeSend);
onAfterSend
rg4js('onAfterSend', function (xhrResponse) { // Inspect the XHR response here });
Call this function and pass in a function which takes one parameter (see the example below). This callback function will be immediately called after the XHR request for a Crash Reporting or Pulse event responds successfully, or errors out (its
onerror was called). You can inspect the one parameter, which is the XHR object containing the HTTP response data.
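For instance, a sketch that logs failed deliveries (the status handling shown is illustrative):

rg4js('onAfterSend', function (xhrResponse) {
  if (xhrResponse.status >= 400) {
    console.log('Raygun delivery failed with status ' + xhrResponse.status);
  }
});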
onBeforeSendRUM
rg4js('onBeforeSendRUM', function (payload) { return payload; });
Call this function and pass in a function which takes one parameter (see the example below). This callback function will be called immediately before any Real User Monitoring events are sent. The one parameter it gets will be the payload that is about to be sent. Thus from your function you can inspect the payload and decide whether or not to send it.
From the supplied function, you should return either the payload (intact or mutated as per your needs), or false.
If your function returns a truthy object, Raygun4JS will attempt to send it as supplied. Thus, you can mutate it as per your needs. For example:

rg4js('onBeforeSendRUM', myBeforeSend);
onBeforeXHR
rg4js('onBeforeXHR', function (xhr) { // Mutate the xhr parameter as per your needs });
Call this function when you want control over the XmlHttpRequest object that is used to send error payloads to the API. Pass in a callback that receives one parameter (which is the XHR object). Your callback will be called after the XHR object is opened, immediately before it is sent.
For instance, you can use this to add custom HTTP headers.
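A minimal sketch adding a custom header (the header name is illustrative):

rg4js('onBeforeXHR', function (xhr) {
  xhr.setRequestHeader('X-Example-Header', 'some-value');
});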
Custom error grouping

You can provide a callback that receives the error payload, stackTrace and options and returns a string, ideally 64 characters or less. If the callback returns null or a non-string, the error will be grouped using Raygun's server side grouping logic (this can be useful if you only wish to use custom grouping for a subset of your errors).
var groupingKeyCallback = function (payload, stackTrace, options) { // Inspect the above parameters and return a hash derived from the properties you want return payload.Details.Error.Message; // Naive message-based grouping only }; rg4js('groupingKey', groupingKeyCallback);
Sending custom data

The Raygun dashboard can display custom data attached to errors. Custom data can be attached on initialization with rg4js('withCustomData', { ... }), or per send via the customData property (as shown in the AngularJS example further below).
Adding tags
The Raygun dashboard can also display tags for errors. These are arrays of strings or Numbers. This is done similar to the above custom data, like so:
On initialization:
rg4js('withTags', ['tag1', 'tag2']);
During a Send:
rg4js('send', {
  error: e,
  tags: ['tag3']
});
Adding tags with a callback function
As above for custom data, withTags() can now also accept a callback function. This will be called when the provider is about to send, to construct the tags. The function you pass to withTags() should return an array (ideally of strings/Numbers/Dates).
Affected User Tracking

To transmit data for the currently logged-in user, call setUser:

rg4js('setUser', {
  identifier: 'users_email_address_or_unique_id',
  isAnonymous: false,
  email: '[email protected]',
  firstName: 'Firstname',
  fullName: 'Firstname Lastname'
});
Only identifier or the first parameter is required. This method takes additional parameters that are used when reporting on the affected users. The full method signature is:
setUser: function (user, isAnonymous, email, fullName, firstName, uuid)
where user is the unique identifier for the current user; the remaining parameters are optional.
You can now pass in empty strings (or false to
isAnonymous) to reset the current user for login/logout scenarios.
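For example, a sketch of resetting the user on logout (the exact fields to clear are assumptions):

rg4js('setUser', {
  identifier: '',
  isAnonymous: true,
  email: '',
  firstName: '',
  fullName: ''
});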
Version filtering
You can set a version for your app by calling:
rg4js('setVersion', '1.0.0.0');
This will allow you to filter the errors in the dashboard by that version. You can also select only the latest version, to ignore errors that were triggered by ancient versions of your code. The parameter should be a string in the format
x.x.x if you want to get the version sorting in Raygun to work nicely, where x is a non-negative integer.
Filtering sensitive data
You can blacklist keys to prevent their values from being sent in the payload by providing an array of key names:
rg4js('filterSensitiveData', ['password', 'credit_card']);
If any key matches one in the input array, its value will be replaced with
[removed by filter].
Source maps

Raygun4js supports source maps through the transmission of column numbers for errors, where available. This is confirmed to work in recent versions of Chrome, Safari and Opera, and IE 10 and 11. See the source maps documentation to the left for more information.
Offline saving
The provider can save errors locally when they cannot be sent. To get or set this option, call the following after your init() call:
rg4js('saveIfOffline', true);
If an error is caught and no network connectivity is available (the Raygun API cannot be reached), the error will be saved and sent later once connectivity returns.
Errors in scripts hosted on other domains
Browsers have varying behavior for errors that occur in scripts located on domains that are not the origin. Many of these will be listed in Raygun as 'Script Error', or will contain junk stack traces. You can filter out these errors with the following code snippet:
rg4js('options', { ignore3rdPartyErrors: true });
Whitelisting Domains
There is also an option to whitelist domains which you do want to allow transmission of errors to Raygun, which accepts the domains as an array of strings:
rg4js('options', { ignore3rdPartyErrors: true });

rg4js('whitelistCrossOriginDomains', ['code.jquery.com']);
This can be used to allow errors from remote sites.
Browser behaviour
Depending on what browser your users are running, the above properties may or may not have an effect. This sums up the situation as of writing:
- Chrome 30+
- Firefox 13+
- Opera 12.50+
- Safari (at least 6+)
In these browsers, if the crossorigin script attribute is present, the HTTP header will also need to be present; otherwise the script will be blocked.
Firefox has additional behavior for RuntimeErrors. These will be provided to window.onerror regardless of the two properties, as these aren’t considered a security risk. SyntaxErrors, however, will be blocked in both Gecko and WebKit browsers, if crossorigin is present but the associated cross-origin domain lacks the header.
- Internet Explorer <= 10
Errors will be reported with all available data in IE 10 and below.
- Internet Explorer 11+
Third-party errors will not contain any data, and the attributes are not respected at current time.
Limitations of stack trace data
Due to browser API and security limitations, in cases where the message is 'Script error', only one stack trace frame may be present. In this scenario, the line number may not reflect the actual position where the original error was thrown.
For more information, check out this blog post on CORS requirements for Script Errors here.
AngularJS
You can hook failed Ajax requests with $http in AngularJS by providing an Interceptor that sends to Raygun on error. One possible simple implementation using custom data:
$httpProvider.interceptors.push(function($q, dependency1, dependency2) {
  return {
    'requestError': function(rejection) {
      rg4js('send', {
        error: 'Failed $http request',
        customData: { rejection: rejection }
      });
    },
    'responseError': function(rejection) {
      rg4js('send', {
        error: 'Failed $http response',
        customData: { rejection: rejection }
      });
    }
  };
});
For more information, see the official docs under Interceptors.
React Native
Firstly, add the library to your project:
npm install --save raygun4js
Then in a central component (e.g index.ios.js/index.android.js), import the library:
import rg4js from 'raygun4js';
Next, configure the provider using your Raygun application's API key. These lines should be called once only early on when your app boots, for example:
export default class YourReactNativeProject extends Component {
  constructor() {
    super();
    rg4js('enableCrashReporting', true);
    rg4js('apiKey', 'add_your_api_key_here');
    // Put any other rg4js() options you want to set here
    rg4js('boot'); // This call must be made last to start the provider
  }
}
Naturally, you can factor the above logic out into a function or separate component, which should be called early on from your main app component.
Finally, see Source Maps below for how to get your minified JS errors mapped back to human-readable stacktraces.
Affected user tracking
You can optionally configure user tracking, either anonymously or non-anonymously (if your app has the concept of a logged-in user and their data available). Here's an example of how to set it up:

rg4js('setUser', {
  identifier: 'users_email_address_or_unique_id',
  isAnonymous: false,
  email: '[email protected]',
  firstName: 'Firstname',
  fullName: 'Firstname Lastname'
});
Manual error sending
You can manually send an error that you handle in a
catch block in a component, by importing 'rg4js' and calling send. The provider should have already been imported and configured (as above) before the code below is called.
import rg4js from 'raygun4js';

export default class YourChildComponent extends Component {
  constructor() {
    try {
      // Bug goes here
    } catch (e) {
      rg4js('send', e);
    }
  }
}
Source maps
For iOS:
Run this command from your project's root directory to build your app JS bundle with source maps:
react-native bundle --platform ios --dev false --entry-file index.ios.js --bundle-output iOS/main.jsbundle --assets-dest ./ios --sourcemap-output ./iOS/main.jsbundle.min.js.map
Then run this command to upload the sourcemap to Raygun:
curl \
  -X POST \
  -u "emailaddress:password" \
  -F "url=" \
  -F "file=@./ios/main.jsbundle.min.js.map" \
You will need to update the
password to a valid Raygun account. If you don't want to use your account credentials you can create a new account and add it to the Owners team, or a team that has access to the Raygun application.
You will also need to replace
your_application_id above with the 6-letter application ID code (available at[app_id]).
When errors are received by Raygun, they will then be source mapped to the unminified code automatically. Every time you finish a version of your app and are about to release it, you should run the above two commands to upload the new source map to Raygun. We do note that this process is quite manual, and is expected to be replaced with an automatic first-class workflow for the production release.
For Android:
1. Optional one-time setup action: ensure there is a directory within your project for the JS bundle, e.g
mkdir android/app/src/main/assets
2. Run this command from your project's root directory to build your production app bundle with source maps:
react-native bundle --platform android --dev false --entry-file index.android.js --bundle-output android/app/src/main/assets/main.jsbundle --assets-dest android/app/src/main/res --sourcemap-output android/app/src/main/assets/main.jsbundle.min.js.map
3. Finally, run this command to upload your source map (replacing username, password and your_application_id):
curl \
  -X POST \
  -u "emailaddress:password" \
  -F "url=" \
  -F "file=@./android/app/src/main/assets/main.jsbundle.min.js.map" \
Note that if you have placed or want to place your
jsbundle file in a different location, you can set this location as the value for
--sourcemap-output in step 2 and the
file=@./ value in the step 3 cURL command.
macOS
Raygun4MacOS - macOS Error & Crash Reporting
- Setup instructions (Using CocoaPods)
- Setup instructions (Manual)
- Setup instructions (Swift)
- Unique user tracking
- dSYM symbolication
- Pulse real user monitoring
- Network call logging
Raygun crash reporting and real user monitoring is available for macOS with the Raygun4MacOS provider. The Raygun4MacOS provider can be used in Objective-C and Swift applications. The setup instructions below includes steps to integrate Raygun4MacOS in both types of applications. The features described in these docs also apply to both languages.
Setup instructions (Using CocoaPods)
Raygun can be obtained through CocoaPods which is a macOS (and iOS) dependency manager. Read more about CocoaPods here, or see further below for the manual setup instructions.
1. Update Podfile
Add the following under the relevant target(s) within your project's Podfile:
pod 'Raygun4MacOS'

Then run pod install, import Raygun4MacOS and initialize it with your API key:

#import <Raygun4MacOS/Raygun4MacOS.h>

[Raygun sharedReporterWithApiKey:@"YOUR_APP_API_KEY"];
The steps above will cause all unhandled exceptions and real user monitoring analytics to be sent to your Raygun account.
Setup instructions (Manual)
If you don't use CocoaPods, follow these instructions to manually download and reference the Raygun4MacOS framework.
1. Download Raygun4MacOS
Download and unzip the current version from here: Raygun4MacOS Version 1.0.2
2. Reference Raygun4MacOS in your project
In Xcode, click on your project and then select your main app target. Go to the "Build Phases" tab and expand "Link Binary With Libraries". Drag Raygun4MacOS.framework into the library list.
Setup instructions (Swift)
Swift is another language created by Apple for building Mac (and iOS) applications. The same Raygun4MacOS provider mentioned above for Objective-C Mac applications can also be used in Swift Mac apps.
- Installation - same as with Objective-C applications, there are two ways to install Raygun4MacOS into your Swift Mac app: via the CocoaPods dependancy manager, or by manually downloading and referencing the library. The CocoaPods instructions and the download link can be found above.
- Once your application is referencing the Raygun4MacOS library, import Raygun4MacOS into the bridging-header file of your Swift app:
#import <Raygun4MacOS/Raygun4MacOS.h>
- Finally, in AppDelegate.swift, add the following code to the application function:
Raygun.sharedReporterWithApiKey("YOUR_APP_API_KEY")
Unique user tracking

To track unique users, call identify with a unique user identity after the macOS initialization:
[[Raygun sharedReporter] identify:@"UNIQUE_USER_IDENTITY"];
Symbolication
Raygun supports symbolicating your macOS crash reports using your app's dSYM files, so that stack traces are presented with readable symbols.
Pulse
The setup instructions above will enable both Crash Reporting as well as Pulse, which logs Real User Monitoring analytics to your Raygun application dashboard. This information includes when your app starts and stops, which views your users navigate through, network calls that your app makes and any user details that you have provided.
Pulse messages will be sent to Raygun as soon as the app starts up. So once you've set up Raygun in your app, run it up and go to your Raygun Pulse dashboard to see the data it collects. Of course Pulse is most valuable once your app is out there being used by your users.
Network calls
Once enabled, Raygun4MacOS will automatically log the performance of network calls made with the following methods.
- :]
Details about logged network calls can be found on the Performance tab of your Pulse dashboard.
N:
Node.js
Raygun4Node - Node.js Error Tracking & Reporting
The raygun4node provider.
The provider is available at the Raygun4node GitHub repository.: "yourkey"}); raygunClient.user = function (req) { if (req.user) { return req.user.username; } }. See the source maps documentation for more information on this.
PHP
PHP
The Raygun4PHP provider
Raygun4PHP is a library that you can easily add to your PHP-based website, which will then allow you to transmit all errors and exceptions to your Raygun Crash Reporting dashboard. Installation is painless, and configuring your site to start real time error monitoring and crash reporting takes only 5 minutes.
Raygun4PHP only requires that the server has PHP5.3 (or greater) and curl installed. The package manager Composer is optional, but if it is available installation is simple.
What can I send to Raygun Crash Reporting?
Raygun4PHP is designed to send both classical PHP errors, as well as PHP5 exception objects. Send() functions are available for both of these, that call a Send() that takes an ErrorException, which is also publically available. You can add a set of tags (as an array of strings) to identify a certain type of message, or add a custom user data (as an associative array). Dedicated functions for sending messages to Raygun instead of errors or exceptions are coming soon.
Lightning-quick asynchronous error sending
For supported platforms (*nix, OS X, some Windows) the PHP provider has fully asynchronous, non-blocking sending logic. This ensures the user recieves the requested page without having to wait for the server to send its error message to Raygun. Many of our competitors post their messages in a blocking way, leading to slow load times for the user. With Raygun your site remains highly responsive while it transmits error data.
Installation
Firstly, ensure that curl is installed and enabled in your server's php.ini file. file, this repository and copy src/Raygun4php into an appropriate subdirectory in your project, such as /vendor/Raygun4php. Add
requires definitions for RaygunClient.php where you want to make a call to Send().
require __DIR__ . '/vendor/raygun4php/src/Raygun4php/RaygunClient.php'; 'vendor/autoload.php', or if not manually import RaygunClient.php. Then, create handlers that look like this:); } set_exception_handler('exception_handler'); set_error_handler("error_handler"); }:
require_once "vendor/autoload.php"; // if using Composer $client = new \Raygun4php\RaygunClient("apikey"); try { throw new Exception("Your message"); } catch (Exception $e) { $client->SendException($e); }
Sending method - async/sync
Raygun4PHP has two algorithms which it can use to send your errors:
Asynchronous: POSTs the message and returns to your script immediately without waiting for the response from the Raygun API.
Synchronous: POSTs the message, blocks and receives the HTTP response from the Raygun API. This uses a socket connection which is still reasonably fast. This also allows the use of the debug mode to receive the HTTP response code; see below.
This can be set by passing in a boolean as the 2nd parameter to the constructor:
$client = new \Raygun4php\RaygunClient("apiKey", $useAsyncSending);
$useAsyncSending options
Type: boolean Linux/OS X default: true Windows default: false
If
$useAsyncSendingis true, and the script is running on a *nix platform, the message will be delivered asynchronously. SendError() and SendException() will return 0 if all went well
If
$useAsyncSendingis false, the script will block and receive the HTTP response.
false is the only effective option on Windows due to platform and library limitations within the supported versions.
Proxies
A HTTP proxy can be set if your environment can't connect out through PHP or the
curl binrary natively:
$client = new \Raygun4php\RaygunClient("apiKey"); $client->setProxy('');
Debug mode
New in 1.3, the client offers a debug mode in which the HTTP response code can be returned after a POST attempt. This can be useful when adding Raygun to your site. This is accessed by passing in
true as the third parameter in the client constructor:
$client = new \Raygun4php\RaygunClient("apiKey", $useAsyncSending, $debugMode);
$debugMode options
Default:
false
If true is passed in, and
$useAsyncSending is set to
false,
client->SendException() or
SendError() will return the HTTP status code of the POST attempt.
Note: If
$useAsyncSending is
true,
$debugMode is not available.
Response codes
202: Message received by Raygun API correctly
403: Invalid API key. Copy it from your Raygun Application Settings, it should be of the form
new RaygunClient("A+nUc2dLh27vbh8abls7==")); } }
Affected User tracking
New in 1.5: additional data support
You can call $client->SetUser, passing in some or all of the following data, which will be used to provide an affected user count and reports:
SetUser($user = null, $firstName = null, $fullName = null, $email = null, $isAnonymous = null, $uuid = null)
$user should be a unique identifier which is used to identify your users. If you set this to their email address, be sure to also set the $email parameter too.
This feature and values are optional if you wish to disable it for privacy concerns. To do so, pass
true in as the third parameter to the RaygunClient constructor. achieved by passing a callback to the
SetGroupingKey method on the client. If the callback returns a string, ideally 100 characters or less, errors matching that key will grouped together. Overriding the default automatic grouping. If the callback returns a non-string value then that error will be grouped automatically.
$client = new \Raygun4php\RaygunClient("apiKey"); $client-.
Troubleshooting
As above, enable debug mode by instantiating the client with
$client = new \Raygun4php\RaygunClient("apiKey", FALSE, TRUE);
This will echo the HTTP response code. Check the list above, and create an issue or contact us if you continue to have problems.
400 from command-line Posix environments
If, when running a PHP script from the command line on *nix operating systems, you receive a '400 Bad Request' error (when debug mode is enabled), check to see if you have any LESS_TERMCAP environment variables set. These are not compatible with the current version of Raygun4PHP. As a workaround, unset these variables before your script runs, then reset them afterwards.
Raygun4PHP GitHub Repository
For complete installation instructions, getting the source code and troubleshooting, visit the GitHub repository here.
Python
Raygun4Python - Python Error Tracking & Reporting.
Contents
Setup Instructions
Requirements
Raygun4py is known to work with Python 2.6-2.7, Python 3.1+ and PyPy environments.
It requires the
socket module to be built with SSL support.
Installation
Grab the module with pip:
pip install raygun4py
Then include and instantiate it:
from raygun4py import raygunprovider client = raygunprovider.RaygunSender('your_apikey')
Test the installation
From the command line, run:
$ raygun4py test your_apikey
Replace
your_apikey with the one listed on your Raygun dashboard. This will cause a test exception to be generated and sent.
Usage
Automatically send the current exception like this:
try: raise Exception("foo") except: client.send_exception()
See sending functions for more ways to send.
Unc
Raygun4py includes dedicated middleware implementations for Django and Flask, as well as generic WSGI frameworks (Tornado, Bottle, Ginkgo etc). These are available for both Python 2.6/2.7 and Python 3+.
Dj
from flask import Flask, current_app from raygun4py.middleware import flask app = Flask(__name__) flask.Provider(app, 'your_apikey').attach()
WS
Initialization.
Features
Custom
For Python 3, chained exceptions are now supported and automatically sent along with their traceback.
This occurs when an exception is raised while handling another exception - see tests_functional.py for an example.
Local,.
Raygun4py GitHub Repository
Visit the raygun4py GitHub repository to view the code. Like all our providers, it is available under the MIT license.
Ruby.
Unity
Raygun4Unity - Unity Error Monitoring & Crash Reporting
Raygun4unity allows you to setup real time error monitoring and crash reporting for your Unity games.
Supported platforms
Raygun4Unity has been tested to work on:
- Windows Desktop
- Windows Phone
- Mac
- iOS
- Android
Namespace
The main classes can be found in the Mindscape.Raygun4Unity namespace.
Setup instructions
1. Download the library
Download raygun4Unity.[version].zip from the latest release listed on Github. Extract the contents and paste the raygun4Unity folder somwhere into the Assets directory of your game.
2. Listen to Application.logMessageReceived
If you haven't done so already, listen to Application.logMessageReceived in a C# script. Attach this script to a GameObject that will be used when your game is loaded.
3. Use the RaygunClient to send exception reports
To send exception messages to your Raygun application, create an instance of the RaygunClient by passing your application API key into the constructor. Then call one of the Send methods. There are 3 different types of exception data that you can use in the Send methods:
- Strings provided by Unity for the error message and stack trace.
- Exception .Net objects. Useful if you need to send handled exceptions in try/catch blocks.
- RaygunMessage Allowing you to fully specify all the data fields that get sent to Raygun.
In the following example, Application.logMessageReceived has been hooked up in a MonoBehaviour that will be run during the initialization process of the game. In the handler, you can check to see if the type of the log is an exception or error to determine what to report to Raygun. Alternatively, you could send all types of log messages.
using Mindscape.Raygun4Unity; using UnityEngine; public class Logger : MonoBehaviour { void Start() { Application.logMessageReceived += Application_logMessageReceived; } private void Application_logMessageReceived(string condition, string stackTrace, LogType type) { if (type == LogType.Exception || type == LogType.Error) { RaygunClient raygunClient = new RaygunClient("YOUR_APP_API_KEY"); raygunClient.Send(condition, stackTrace); } } }
Affected user tracking
To keep track of how many users are affected by each exception, you can set the User or UserInfo property of the RaygunClient instance. The user can be any id string of your choosing to identify each user. Ideally, try to use an id that you can use to relate back to an actual user such as a database id, or an email address. If you use an email address, the users gravitars (if found) will be displayed on your Raygun error dashboards. Below is an example of setting the User property:
raygunClient.User = "[email protected]";
The UserInfo property lets you provide additional user information such as their name:
raygunClient.UserInfo = new RaygunIdentifierMessage("[email protected]") { IsAnonymous = false, FullName = "Robbie Robot", FirstName = "Robbie" };
Here are all the available RaygunIdentifierMessage properties. The only required field is Identifier.
- Identifier, as we will use the identifier as the email address if it looks like one, and no email address is not specified.
- FullName The user's full name.
- FirstName The user's first (or preferred) name.
- UUID A device identifier. Could be used to identify users across devices, or machines that are breaking for many users.
Tags and custom data
A couple of Send method overloads allow you to attach a list of tags and a dictionary of key-value custom data to the exception report. Tags and custom data get displayed on each report in Raygun. Either of these can be null if you only want to send one or the other.
The following overload is for when sending the message and stacktrace strings as provided by Unity in the HandleException callback.
var tags = new List() { "Level 6", "0 lives"}; var customData = new Dictionary() { {"Difficulty", "Very-Hard"} }; raygunClient.Send(message, stacktrace, tags, customData);
Another overload is available for when sending a .NET Exception object.
var tags = new List() { "Level 6", "0 lives"}; var customData = new Dictionary() { {"Difficulty", "Very-Hard"} }; raygunClient.Send(exception, tags, customData);
Message modifcations before sending
By listening to the RaygunClient.SendingMessage event, you can make modifications to any part of the message just before it is serialized and sent to Raygun. Setting e.Cancel = true will prevent Raygun4Unity from sending the message. This is useful for filtering out types of exceptions that you don't want.
Application/Game version
The current version of raygun4Unity does not automatically obtain the game version number. You can however specify this by setting the ApplicationVersion property of the RaygunClient instance.
raygunClient.ApplicationVersion = "1.3.37.0";
WordPress
Raygun4WordPress - Wordpress Error Reporting
Raygun4WP plugin allows you to easily setup Crash Reporting and Real User Monitoring on Wordpress website without you having to write a single line of code.
Dependancies
This plugin utilizes lower-level Raygun providers to add this functionality:
- Raygun4PHP: Server-side error tracking
- Raygun4JS: Client-side error tracking and real user monitoring
Requirements
The following server requirements are needed in order to use this plugin.
- PHP 5.3.3+
- Curl Installed
If you are using a *nix system, the package php5-curl may contain the required dependencies.
Contents
- Installation
- Usage
- Pulse
- Client-side error tracking
- User tracking
- Tagging errors
- Ignored domains
- Async sending
- Multisite support
Installation
Manually with Git
Clone the repository into your Wordpress installation's
/plugins folder - for instance at
/wordpress/wp-content/plugins.
Make sure you use the
--recursive flag to also pull down the Raygun4PHP and Raygun4JS dependancies.
git clone --recursive
From Wordpress plugin directory
You can also add Raygun4WP plugin repository using your admin panel. Raygun4WP is available on wordpress.org/plugins/raygun4wp/.
Usage
- Navigate to your Wordpress admin panel, click on Plugins, and then Activate Raygun4WP
- Go to the Raygun4WP settings panel either by the sidebar or admin notification
- Copy your application's API key from the Raygun dashboard and place it in the API key field.
- Enable Error Tracking (both server-side and client-side), Real User Monitoring and any other options
- Save your changes
- Done!
Pulse - Real User Monitoring
As of 1.8 of Raygun4WP plugin you can enable real user monitoring.
This feature can be enabled via the Settings page under Pulse - Real User Monitoring.
User information will be sent along if you have the unique user tracking feature enabled.
Client-side error tracking
Since 1.4 of the Raygun4WP plugin you can enable client-side error monitoring.
This feature automatically tracks JavaScript errors that occur in your user's browsers when they are loaded.
This setting can be activated via the Settings page.
User tracking
This feature can be enabled via the Settings page.
Enabling this feature will send through the currently logged in user's email address, first name and last name with each message to Raygun. This applies to both Crash Reporting and Pulse payloads.
If a user is not logged in, no user data will be sent and a random ID will be assigned to the user.
The user's information will then be available to you when viewing crash reports and user sessions. If the user has an associated Gravatar with that address, you will see their picture.
If this feature is not enabled, a random ID will be assigned to each user.
Tagging errors
Since 1.8 both client-side and server-side errors can be tagged. Tags are custom test allowing you to easily identify errors.
JavaScript and PHP errors can be tagged independently through a comma-delimited list in the field on the settings page.
For example:
Error, JavaScript would add two tags. The first being
Error second one being
JavaScript
Ignored domains
You can enter a comma-delimited list in the field on the Config page to prevent certain domains from sending errors and from being tracked with real user monitoring.
Async sending
Introduced in 1.1.3, this provider will now send asynchronously on *nix servers (async sockets) resulting in a massive speedup - POSTing to Raygun now takes ~56ms including SSL handshakes. This behaviour can be disabled in code if desired to fall back to blocking socket sends. Async sending is also unavailable on Windows due to a bug in PHP 5.3, and as a result it uses cURL processes. This can be disabled if your server is running a newer environment; please create an issue if you'd like help with totrue.
- Visit the Admin dashboard of a child site (not the root network site). Go to its Plugin page, and you should see raygun4WP ready to be activated - do so.
- A new raygun4WP submenu will be added to the left. In there click on Configuration, paste in your API key, change the top dropdown to Enabled then click Save Changes. You can now click Send Test Error and one will appear in your dashboard.
- Repeat the above process for any other child sites - you can use different API keys (to send to different Raygun apps) or the same one.
Finally, if you so desire you should be able to visit the root network site, activate it there and configure it. You must however activate it on at least one child site first.
Documentation missing?
If we don't have documentation about your desired topic, send us a message and we'll create it for you. | https://raygun.com/docs/languages/ | CC-MAIN-2018-34 | refinedweb | 10,869 | 54.83 |
DASH! micropython test.py no module named 'OmegaExpansion'
- LightSwitch last edited by
root@Omega-C592:~# micropython test.py
Traceback (most recent call last):
File "test.py", line 1, in <module>
ImportError: no module named 'OmegaExpansion'
but...
root@Omega-C592:~# python test.py
-0.795
works 4.0.
I'm using the dash, so I want to use the lv_micropython lib to create the gui. I've had several issues so far and this is just one of them I've created a posting for. The other one is here:
What do I have to do to get the microphython to recognize the OmegaExpansion package that is present?
I already also did this instruction: opkg install micropython-lib --nodeps
The code for test.py is below
from OmegaExpansion import AdcExp
class Test:
def init(self):
self.adc = AdcExp.AdcExp(address=0x48);
print(self.adc.read_voltage(0));
Test(); | https://community.onion.io/topic/4286/dash-micropython-test-py-no-module-named-omegaexpansion | CC-MAIN-2020-45 | refinedweb | 146 | 59.09 |
.
For sealing packages in a jar, we need to add it’s entries in jar manifest file. So I have the manifest file with following content.
manifest.txt
CopyName: com.jd.seal Sealed: true
Now I run following commands in both projects to generate two jar files with above manifest entry.
Copypankaj@JD:~/CODE/seal1/bin$ jar cvmf manifest.txt seal1.jar com added manifest adding: com/(in = 0) (out= 0)(stored 0%) adding: com/jd/(in = 0) (out= 0)(stored 0%) adding: com/jd/seal/(in = 0) (out= 0)(stored 0%) adding: com/jd/seal/A.class(in = 419) (out= 299)(deflated 28%) pankaj@JD:~/CODE/seal1/bin$ cd ../../seal2/bin pankaj@JD:~/CODE/seal2/bin$ jar cvmf manifest.txt seal2.jar com added manifest adding: com/(in = 0) (out= 0)(stored 0%) adding: com/jd/(in = 0) (out= 0)(stored 0%) adding: com/jd/seal/(in = 0) (out= 0)(stored 0%) adding: com/jd/seal/B.class(in = 419) (out= 299)(deflated 28%)
Java jar seal packages effect
Now I will write a sample program that will use both these jar files to show the effect of java jar sealing.
Copyimport com.jd.seal.A; import com.jd.seal.B; public class MyClass{ public static void main(String args[]){ A a = new A(); B b = new B(); } }
So
MyClass is trying to load class A from jar file seal1.jar and class B from seal2.jar. Let’s try to compile and run this class and see what happens.
Copypankaj@JD:~/tmp$ javac -cp seal1.jar:seal2.jar MyClass.java pankaj@JD:~/tmp$ java -cp seal1.jar:seal2.jar:. MyClass A class loaded Exception in thread "main" java.lang.SecurityException: sealing violation: package com.jd.seal is sealed at java.net.URLClassLoader.defineClass(URLClassLoader.java:234) MyClass.main(MyClass.java:9)
CopySealed: true Name: com.jd.util Sealed: false
That’s all about java jar sealing packages.
Reference: Oracle Doc
Will G says
Thanks for the clear explanation and example!
Alireza says
Thanks for sharing your knowledge. I have a question from you, when you said “from same version of jar file”, what is ‘same version’ mean? I think sealing packages in JAR files means there is no way to be used by another program. Am I wrong?
jafar2049 says
It has been two days since I’ve been looking for a “SIMPLE” working example in order to understand the security idea behind the sealed jar files.
Thank you very much for this nice explanation. Brilliant ! | https://www.journaldev.com/1347/java-jar-seal-packages | CC-MAIN-2019-13 | refinedweb | 418 | 68.06 |
Hi, I have implemented a copy control version of binary tree.. As I am a Beginner. I think i have made my mistake. But I cant figure it out.
is there any wrong.?is there any wrong.?Code:#include <iostream> #include <string> class TreeNode{ TreeNode(const std::string &val):value(val),count(new int(1)){} TreeNode(const TreeNode& rhs):value(rhs.value),count(count),left(rhs.left),right(rhs.right){++*count; } TreeNode& operator=(const TreeNode &rhs) { ++*rhs.count; if(--*count==0) { delete left; delete right; delete count; } left=rhs.left; right=rhs.right; value=rhs.value; count=rhs.count; } ~TreeNode() { if(--*count==0) { delete left; delete right; delete count; } } private: std::string value; int *count; TreeNode *left; TreeNode *right; }; using namespace std; int main() { cout << "Hello world!" << endl; return 0; } | https://cboard.cprogramming.com/cplusplus-programming/156858-copy-control-binary-tree.html | CC-MAIN-2017-09 | refinedweb | 130 | 54.49 |
The problem “Check if a given array contains duplicate elements within k distance from each other” states that we have to check for duplicates in given unordered array within the range of k. Here the value of k is smaller than the given array.
Examples
K = 3 arr[] = {1, 2, 3, 4, 5, 2, 1}
False
K = 2 arr[] = {3, 4, 3, 1, 1, 2, 6}
True
Explanation
We have two methods to solve this problem. The simpler one is to run two loops in which the first loop will pick every element as a starting element for the second loop ‘Inner loop’. After that, the second loop will compare the starting element with all the elements within the range of ‘k’. But this solution is not that efficient it takes time complexity of O(k*n).
But we have another more efficient method which can solve the problem in O(n) time complexity called hashing. In the hashing method, we will traverse all the elements of the array and we will check if the element is present in it or not. If the element is in there then we will return ‘True.’ Else we will add it to the hash and remove the arr[i-k] element from the hash if ‘i’ is greater than or equal to ‘k’.
Algorithm to Check if a given array contains duplicate elements within k distance from each other
- First, create the empty hash set in which we will store the elements of the array.
- Traverse all elements of the array from left to right.
- Check if the element is present in hash or not.
- If it’s in there then return “true.”
- Else add that element to the hash.
- After that remove the arr[i-k] element from hash if ‘I’ is greater or equal to ‘k’.
We have an array ‘arr[]’ with some element in it and a value k which is the range in which we have to find duplicates if there Is any so will use hash set to store the elements in it first we will add elements of the array in our hash set one by one if the element is already in the hash set then it will return true and break the loop else it will continuously insert the elements in the set and remove arr[i-k] element from the set.
Code
C++ code to Check if a given array contains duplicate elements within k distance from each other
#include<iostream> #include<unordered_set> using namespace std; bool Check_Duplicates(int a[], int n, int k) { unordered_set<int> D_set; for (int i = 0; i < n; i++) { if (D_set.find(a[i]) != D_set.end()) return true; D_set.insert(a[i]); if (i >= k) D_set.erase(a[i-k]); } return false; } int main () { int a[] = {1, 2, 3, 4, 1}; int k = 5; cout << ((Check_Duplicates(a, 5, k)) ? "Yes" : "No"); }
Yes
Java code to Check if a given array contains duplicate elements within k distance from each other
import java.util.*; class D_Class { static boolean Check_Duplicates(int a[], int k) { HashSet<Integer> D_set = new HashSet<>(); for (int i=0; i<a.length; i++) { if (D_set.contains(a[i])) return true; D_set.add(a[i]); if (i >= k) D_set.remove(a[i-k]); } return false; } public static void main (String[] args) { int a[] = {1, 2, 3, 4, 1}; int k = 5; if (Check_Duplicates(a, k)) System.out.println("Yes"); else System.out.println("No"); } }
Yes
Complexity Analysis
Time Complexity
O(n) where “n” is the number of elements in the array. Using a Hash Set allows solving the problem in linear time. Since using hash set enhances the ability to search, delete and insert data efficiently.
Space Complexity
O(k) where “k” is the number of elements in the window that needs to be looked upon. | https://www.tutorialcup.com/interview/hashing/check-if-a-given-array-contains-duplicate-elements-within-k-distance-from-each-other.htm | CC-MAIN-2021-49 | refinedweb | 637 | 69.82 |
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 hi,!!!) - --michael P.S.: Those packages have been built against: - -- System Information: Debian Release: 3.1 ~ APT prefers testing ~ APT policy: (500, 'testing') Architecture: i386 (i686) Kernel: Linux 2.4.18-686 Locale: LANG=en_US, LC_CTYPE=en_US Versions of packages libxml-libxml-perl depends on: ii libc6 2.3.2.ds1-16 ii libxml-libxml-common-perl 0.13-4 ii libxml-namespacesupport-per 1.08-3 ii libxml-sax-perl 0.12-4 ii libxml2 2.6.11-3 ii perl 5.8.4-2.3 ii perl-base [perlapi-5.8.4] 5.8.4-2.3 ii zlib1g 1:1.2.1.1-7 Versions of packages libxml-libxslt-perl depends on: ii libc6 2.3.2.ds1-16 ii libxml-libxml-perl 1.58-1 ii libxml2 2.6.11-3 ii libxslt1.1 1.1.8-4 ii perl 5.8.4-2.3 ii perl-base [perlapi-5.8.4] 5.8.4-2.3 ii zlib1g 1:1.2.1.1-7 - -- IT Services University of Innsbruck 063A F25E B064 A98F A479 1690 78CD D023 5E2A 6688 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (MingW32) iD8DBQFBe3EHeM3QI14qZogRAispAKD4zpgExUy6RByX3X4YXeyYJJy/XACfWsQo LJj+vstMd0GCVDpwX6P8mHU= =9tDG -----END PGP SIGNATURE----- | https://lists.debian.org/debian-qa-packages/2004/10/msg00126.html | CC-MAIN-2017-17 | refinedweb | 207 | 56.72 |
Super Simple Python is a series of Python projects you can do in under 15 minutes. In this episode, we’ll be covering how to build a simple calculator in under 30 lines of Python!
For a video version:
Unlike some of the Super Simple Python examples we’ve done, these don’t require any libraries!
Defining the Calculator Functions
Since this is an episode of Super Simple Python, we’re going to be creating a Super Simple Calculator. This calculator will just perform addition, subtraction, multiplication, and division. Each of these functions will simply take two parameters,
a and
b and return the specified operation on
a and
b.
def add(a, b): return a + b def subtract(a, b): return a - b def multiply(a, b): return a * b def divide(a, b): return a/b
Mapping the Calculator Functions
Once we’ve defined our functions, we’ll use a dictionary to map strings to the functions. One of the really nice things that’s unique to Python is that we can pass functions in as values in our dictionary. We’ll use an
ops dictionary that contains our set of four operations mapped as a key-value pair of string to function.
Other than our dictionary of operations, we’ll also need a way to perform the specified operation on the two passed in values. Let’s create a function that takes three parameters: the first number, a string value corresponding to the operation, and the second number. This function will use the dictionary to map the string value to a function and then call that function on the numbers.
ops = { "add": add, "subtract": subtract, "divide": divide, "multiply": multiply } def perform(a, op, b): function = ops[op] return function(a, b)
Performing the Calculation
Everything is set up. Let’s do the calculation. We ask the user for three inputs, the numbers and the desired operation. Note that I wrapped the inputs of the numbers in an
int() function, that is to turn these inputs from strings to integers. If you’d like to work with floating point numbers, you’ll have to use
float() instead of
int(). After we receive input from the user, we simply print out the returned value from the
perform function we created earlier.
print("Input two numbers and an operator") a = int(input("What is the first number? ")) op = input("Which operation would you like to perform?(add, subtract, multiply, or divide?) ") b = int(input("What is the second number? ")) print(perform(a, op, b))
An example printout should look like this:
What a: Simple Calculator” | https://pythonalgos.com/super-simple-python-simple-calculator/ | CC-MAIN-2022-27 | refinedweb | 435 | 62.07 |
|
Team System news from TechEd
Ok, so TechEd 2005 was fun and exhausting - there were a couple of things mentioned in sessions that are VERY cool and
important
.
1.
sounds like it's changing again. There was a Cabana talk on Monday where they covered the pricing strategy, and the new thing for me was that they mentioned that if you are an MSDN-U subscriber and want Team Suite, it will only be $1000 more per year (retail price), as opposed to the $2299 previously stated. I looked a bit on the
pricing website
and haven't been able to find this in print yet - so maybe it's not fully baked.
2. Microsoft will write a provider to allow
Visual Studio 2003
and VB6/VC6 to access the new Team Foundation Version Control system. People have been asking for this repeatedly, and Microsoft has listened. I have to give kudos to Microsoft for this, I've never watched a product release as closely as I'm watching this one, and I can honestly say that Microsoft is listening to customer feedback and using that to strengthen their offering.
3.
Partner stories
- I talked to several partners that are integrating with Team System. Here's my eval based on what I saw/heard:
SourceGear has the best story with their Allerton product which I saw a couple of demos of hitting Team Foundation Version Control from Eclipse/Linux. There are a lot of companies with both J2EE and .NET developers, and this will help them both be on the same platform. I've had many customers ask me about getting their Java developers on Team Foundation, so it will be nice to have an answer for that desire.
Rational - I went by their booth, and they have no plans to integrate at all with Team System. The top two questions I get from people about Rational products and Team system are: "Can I keep Clearcase?" and "Does Robot integrate?" The answer to both is "no". I assumed this would be the case, but it's nice to have a real answer to give people now, rather than just saying "you'll have to ask Rational.".
Borland - much has been made of their CaliberRM product, as it was announced to have planned support for Team System back at TechEd 2004! I must say that after talking to their reps, I was unimpressed. It sounds like the integration will be minimal (mainly 1-way creation of work items, so that as requirements change, things don't integrate further.)
Mercury - On the testing side of development, Mercury Interactive is a huge player with Test Director, WinRunner and LoadRunner. The only stated integration they have planned right now is that they will synchronize with work items behind the scenes. Mercury wants you using their products for all of your testing and then syncing those up with Team System's work items. While this makes sense from a business standpoint, it's not what I think most people will want. What I want them to do is allow me to run their tests (mainly WinRunner) from the testing framework in VSTE/ST (Visual Studio Team Edition for Software Testers). Obviously, this make their product little more than a plug-in, so that's not a compelling thing for them to do. Apparently they are still "considering" this, it will be interesting to see if the story changes.
Compuware - I didn't get a chance to talk to Compuware, but one of my coworkers did, and they seemed to "get it". It sounds like they looked at Team System, looked their own products and determined where they had more to offer than Team System does. They then targeted close integration. Compuware has an function UI testing tool (the type that record mouse clicks, etc.). They are making it so that you can call those test from Team System and report the results to Team Foundation. Assuming they get that working, I'm going to be advocating people ditch Rational/Mercury for UI testing and consider their product.
AutomatedQA - I didn't talk to these guys either, they seem to have the same story as Compuware. I'm going to try and take a look at their product (and Compuware's) in the coming months to see which I like best, so I have a some direction to point my customers. Automated UI testing is important to people, and Team System doesn't have it, it's a great gap to fill.
In unrelated news - we had a moderate amount of traffic at our booth (
Notion Solutions
). Our big emphasis at TechEd was
Team System training and mentoring
. It was interesting to see how many people had a genuine interest in learning more. We have scheduled a public Team System class for the first week of July in Dallas, I'm curious to see how much follow-up interest there is in that class. We're also doing ASP.NET 2.0 classes, but did a really bad job of talking about any of the public classes @ TechEd. I still think the right way to adopt Team System is on-site training, but so far we've only been teaching people who are "exploring" Team System, and public, overview training works great for that.
For those of you that came by our booth, or ran into us at the Cabana - it was great meeting you - hope to see you at PDC!
Published
Jun 11 2005, 11:09 AM
by
cmenegay
Filed under:
Team System
TrackBack
said:
June 11, 2005 1:33 PM
TrackBack
said:
June 11, 2005 1:33 PM
Frans Bouma
said:
"Microsoft will write a provider to allow Visual Studio 2003 and VB6/VC6 to access the new Team Foundation Version Control system. People have been asking for this repeatedly, and Microsoft has listened."
Yeah, duh, at release time of VS.NET 2005, everyone is still on vs.net 2003 (ok, the few early adopters on beta 2 not counted). Having people move with CURRENT projects to TS already is huge plus to get it accepted NOW. Otherwise, lots of projects will not use VSTS anytime soon, as they're already in progress... and in vs.net 2003.
"."
I don't see why J2EE developers will move to .NET. It will only make them lose functionality. Don't get me wrong, I like .NET a lot (:)) but J2EE developers have simply more functionality at their disposable, most of the time for free. Everyone who's played with the top Java IDE's knows why Java developers still think VS.NET 2005 doesn't cut it.
Rational is also a company (I think you mean XDE?) which produces expensive software, but for architects, not for every developer. They've proven themselves already in the past. MS' modelling software has to convince the developers out there it's better. That takes a lot of time.
June 11, 2005 1:40 PM
TrackBack
said:
June 11, 2005 5:34 PM
TrackBack
said:
June 11, 2005 5:34 PM
Bruce Lee said:
Great news.
June 12, 2005 10:14 PM
TrackBack
said:
Team System Partner Stories
June 13, 2005 5:32 AM
Chris Menegay
said:
Frans, Rational has software for developers, architects, testers, etc. XDE is just one minor piece of their whole offering. The main tools I would want are Robot and Requisite Pro, not XDE. IMO, Rational hasn't really proven itself much in the market - few people have their tools, and many that do aren't entirely happy with them.
June 13, 2005 7:46 AM
TrackBack
said:
June 13, 2005 1:54 PM
TrackBack
said:
Team System news from TechEd
Pricing sounds like it's changing again. There was a Cabana talk on Monday...
June 13, 2005 5:10 PM
Rolf Nelson said:
Rational has beta support for Visual Studio 2005 Beta 2 out already for both ClearCase and ClearQuest. We take full advantage of the latest Visual Studio 2005 features and we have worked closely with Microsoft to make ClearCase integrate even better in Visual Studio 2005. Our new integration is 100% VSIP based and has no dependencies on Microsoft's SCC at all. If you want to get access to our beta ClearCase/ClearQuest client for Visual Studio 2005 Beta 2 you can sign up here.
Go to the web site and select the "IBM Rational ClearCase/ClearQuest Clients for Visual Studio.NET" beta.
June 14, 2005 12:01 PM
Chris Menegay
said:
Rational is supporting Visual Studio, not Team System. That's a pretty big difference, it means I don't get check-in policy, or work item relationships. I don't get the same level of build integration. I know that I could use the entire Rational stack and get similar features - but let's be real, very few people want to do that - I certainly don't.
June 14, 2005 12:27 PM
Rolf Nelson said:
You can certainly keep IBM Rational ClearCase or IBM Rational Robot and use them with other Team System clients. IBM Rational also has a functional testing product - Rational Functional Tester that supports automated GUI testing of Windows Forms. It can be used to do automated functional testing of both VS.NET and Java/Eclipse.
June 14, 2005 12:47 PM
Rolf Nelson - IBM Rational said:
Rational ClearCase supports distributed servers so that your source controlled elements can span more than one physical server. Team System does not support this without creating TFS islands that don't integrate. Rational also supports replication so teams without reliable WAN access can participate in development. Partner companies can also gain also access to select replicated databases and collaborate without worrying about access to other servers. Team System has no replication.
June 14, 2005 12:58 PM
Rolf Nelson - IBM Rational said:
Team System API's aren't really designed to replace a server side component in Team Foundation Server. They are designed to allow clients to talk to Microsoft's Team Foundation Servers. Integrating a source control or defect tracking product with Team System Foundation Server is really not an option. We have integrated ClearCase and ClearQuest tightly in the Visual Studio 2005 shell to allow customers to continue to use Rational's Team tools to support both Visual Studio and Eclipse based projects with a single toolset from a single vendor. We are the first source control vendor to release a beta integration with Visual Studio 2005 Beta 2. We have been working with Visual Studio 2005 since early Alpha releases.
June 14, 2005 1:20 PM
TrackBack
said:
Microsoft to make VB6 work with Team System Version Control
June 20, 2005 8:29 PM
Name
(required)
Your URL
(optional)
(required)
Add | http://weblogs.asp.net/cmenegay/archive/2005/06/11/411882.aspx | crawl-002 | refinedweb | 1,796 | 70.13 |
:
def add(a, b) { return a + b}!
Excellent posts. Thanks.
One question. When we look at the generated code for checking the types of the arguments ( obj1.GetType() == typeof(int) ) isn't it more "performant" to have something like: obj1 is int ?
Good question!
There are two reasons for the check the way we emit it (most of the time anyway):
First is that it actually is not more performant than "is" check. The CLR just-in-time compiler (JIT) will detect the following pattern:
if (obj != null && obj.GetType() == <type>)
and almost completely optimize it away.
Second reason is that there is a subtle semantical difference. "Is" check will also work for subclasses, whereas the check we use is checking for exact type match. It is the same for sealed classes of course, but the two are subtly different.
Good Stuff on Dynamic Language Runtime
In the earlier posts on dynamic operations I talked about dynamic binders and rules. Then, rule was two
El viernes pasado tuve el gran gusto de compartir un TechNight con los buenos de Martín Salías y Rodolfo | http://blogs.msdn.com/mmaly/archive/2008/01/19/building-a-dlr-language-dynamic-behaviors-2.aspx | crawl-002 | refinedweb | 183 | 72.16 |
I find myself having need of a class where the class scope is included in the scope of methods in the class. A simple example from Python 3.1: x = "outside" class Magic: x = "inside" def method(self): return x I would like Magic().method() to return "inside" rather than "outside". Now, I understand why this is not Python's usual behaviour, and I agree with those reasons -- this is NOT a complaint that Python's normal behaviour is to exclude the class namespace from the method's scope. I also understand that the usual way of getting this would be to return self.x or self.__class__.x from method, instead of x. Again, normally I would do this. But in this specific case I have reasons for wanting to avoid both of the normal behaviours. Do not judge me, please accept that I have a good reason for wanting this, or at least allow me to shoot myself in the foot this way *wink*. In Python 3, is there some way to get this unusual behaviour? -- Steven | https://mail.python.org/pipermail/python-list/2010-November/592845.html | CC-MAIN-2014-15 | refinedweb | 179 | 82.44 |
pyblast 0.1
Run NCBI BLAST with an easy-to-use Pythonic API
Running NCBI BLAST manually is of course not rocket science, but this module provides several benefits over doing so:
- Automatically runs a BLAST process for each CPU on the system; achieves far better throughput than the -num_threads option
- Provides an iterator API that emits native Python objects for each BLAST result as they’re produced, rather than at the end
- Result and Hit objects obviate the need for manually parsing results; all values represented by their native Python types (e.g. Hit.evalue is a float, etc)
Example
Here’s a simple example with comments hilighting some relevant features
import pyblast with open('data.fasta') as f: # Use the pyblast.blastx() iterator function for r in pyblast.blastx(f, db='/path/to/swissprot'): msg = 'query {} has {} hits'.format(r.query_id, len(r.hits)) if r.hits: # Use Hit.evalue as a float for comparison min_evalue = sorted([h.evalue for h in r.hits])[0] msg += '; minimum evalue {:f}'.format(min_evalue) print msg
This will produce output like the following
query M00181:167:000000000-A4VBV:1:1101:11880:1874 1:N:0:6 has 6 hits; minimum evalue 0.310000 query M00181:167:000000000-A4VBV:1:1101:17067:1875 1:N:0:6 has 14 hits; minimum evalue 0.200000 query M00181:167:000000000-A4VBV:1:1101:15039:1878 1:N:0:6 has 4 hits; minimum evalue 4.400000 query M00181:167:000000000-A4VBV:1:1101:17090:1895 1:N:0:6 has 6 hits; minimum evalue 1.700000 query M00181:167:000000000-A4VBV:1:1101:15843:1907 1:N:0:6 has 2 hits; minimum evalue 1.800000
API
blastn(input_file, *args, **kwargs)
Iterator to process the contents of the FASTA input_file using blastn; yields Result objects.
The *args and **kwargs arguments control how blastn is invoked. The former are passed as options without values, while the latter are passed as options with values. For example, blastn(some_file, 'ungapped', db='foo/bar') will run blastn with the -ungapped -db foo/bar options.
In addition, the following keyword arguments are handled specially and are not passed on to BLAST:
- pb_num_processes: number of BLAST processes to spawn; default is sysconf(SC_NPROCESSORS_ONLN)
- pb_fields: iterable of field names to retrieve for each hit; default is DEFAULT_HIT_FIELDS. The list of valid field names (and their meanings) can be found in the *** Formatting options section of blastn -help.
blastp(input_file, *args, **kwargs)
See documentation for blastn.
blastx(input_file, *args, **kwargs)
See documentation for blastn.
Result
The result of BLAST processing a single query sequence. The set of attributes on this object are:
- id: identifier for the query sequence; can be None
- description: textual description of the query sequence; can be None
- hits: array of Hit objects
Hit
A single sequence hit in a Result object.
The attributes of this object are the names of the fields requested of BLAST. For example, if blastn was run with pb_fields=['qseqid', ...] then one could access the qseqid value of the Hit object h like so: h.qseqid. Fields referenced that were not requested of BLAST have a None value.
In addition, BLAST fields are converted to their native Python types. For example, evalue fields are automatically converted to floating point values.
DEFAULT_HIT_FIELDS
The default fields returned for each Hit object.
VERSION
The version of pyblast that’s being used. This can be used to more easily than feature detection to determine what features of the module are available.
- Author: Peter Griess
- License: MIT
- Categories
- Package Index Owner: pgriess
- DOAP record: pyblast-0.1.xml | https://pypi.python.org/pypi/pyblast/0.1 | CC-MAIN-2016-26 | refinedweb | 599 | 56.96 |
The .NET Framework's XML Serializer allows you to save an object's state to an xml file with a user-defined format. This can be especially useful when importing or exporting data to or from someone else's format. It can also be used as a quick-and-dirty data store or as a simple means of debugging object state.
Whatever your task, once you've written several classes that make use of the Xml Serializer, the coding starts to get repetitive. This is a key indicator that some of the code is ripe for reuse in a simple yet flexible base class. In fact, I've implemented just such a class, and in this article I'm going to show you how to use it. Keep in mind, I've designed this class for simplicity. If you need access to some of the richer features of the XML Serializer such as customized type mappings or responding to deserialization events, you'll have to extend the base class to support this functionality. Also, if your class already inherits from some other base class, then this class simply won't work for you. That said, I've found this class to handle my needs in most cases, where I just want a quick and easy way to save my object's state as XML.
Ok, let's dive right in to some code. The following block shows a typical use of the
XmlSerializer.
public static Person Load(string fullPath) { XmlSerializer ser = null; using (Stream s = File.OpenRead(fullPath)) { ser = new XmlSerializer(typeof(Person)); return (Person)ser.Deserialize(s); } } public void Save(string fullPath) { XmlSerializer ser = null; using (Stream s = File.OpenWrite(fullPath)) { ser = new XmlSerializer(typeof(Person)); ser.Serialize(s, this); } } public void Foo() { Person thePerson = Load(@"c:\person.xml"); // ...do something meaningful with the object here thePerson.Save(@"c:\person.xml"); }
Note that the C# using construct takes care of disposing the
FileStream for us. This sure doesn't seem like much code write, until you realize that you're writing Load and Save methods for every Xml Serializable class you write! Using the
XmlSerializationBase class, you no longer have to do this.
using Romney.Christian.Xml.Serialization; public class Person: XmlSerializationBase { // ...no need to write Load/Save code because we're using the base class } public void Foo() { Person thePerson = (Person)Person.Load(@"c:\person.xml", typeof(Person)); // ...do something meaningful with the object here thePerson.Save(@"c:\person.xml"); }
The abstract base class makes use of polymorphism and reflection to accomplish its tasks. In fact, the
Load overloads all return an XmlSerializationBase, so the result of this method call must always be cast to the appropriate type before using them as normal.
One final point of interest: some of the Load and Save overloads take a boolean parameter,
AutoClose, which lets you specify whether or not to close the underlying data source after de/persistance. The overloads which do not accept a boolean delegate to the boolean versions passing in true (close the data source) as the final parameter. Using the boolean versions can eliminate the overhead of an additional method call at the cost of an added parameter.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/XML/xmlserializationbase.aspx | crawl-002 | refinedweb | 539 | 55.13 |
+1 These are my sentiments as well.
-dain
On Aug 22, 2005, at 7:40 PM, Aaron Mulder wrote:
> I really disagree with having separate namespaces for the entire
> web deployment plan for Tomcat and Jetty. It makes Geronimo+Tomcat
> and
> Geronimo+Jetty totally different products. If I'm going to release a
> typical application for Geronimo, you're saying that every single
> bit of
> will be identical except for some stupid plumbing in the web
> plans? So
> you must release a Geronimo+Tomcat version of the application and a
> Geronimo+Jetty version of the application? Say it ain't so!
>
> I'll grant that it's possible to construct an application that
> works properly in only one container or the other. But I really
> object to
> crafting our whole configuration strategy around that case, which I
> expect
> to be very rare. I think it's going to be much more common that a
> plan is
> totally portable, or totally portable with a couple of container-
> specific
> tweaks for both containers that don't cause the app to fail if not
> deployed in its preferred container. I'd rather make that the
> baseline,
> and allow a generic plan and a generic plan with extensions for 0-N
> web
> containers.
>
> Aaron
>
> On Mon, 22 Aug 2005, David Jencks wrote:
>
>> After talking this issue over with Jeremy a bit and thinking about it
>> some more I don't think that the generic multi-container schema is a
>> good idea. I think the deployment system should be based on
>>
>> namespace determines builder
>>
>> and that we should not do anything that will make this difficult
>> in the
>> future.
>>
>> If the packaging plugin was working, we could, for each app (such as
>> the console) that needs to run on both containers, generate a
>> configuration for each container. Then you could run either one,
>> without rebuilding geronimo or the application config.
>>
>> I'm going to work on a proposal for schemas that would help keep the
>> configs for different containers as similar as possible.
>>
>> Meanwhile I've committed the "any" solution as I think it is
>> considerably better than what we have now. One problem with this is
>> that most tomcat configurations will not in fact be portable: if they
>> contain tomcat realm or tomcat valve gbeans, the config just plain
>> won't deploy under jetty. It might not be so easy, but I'm sure
>> there
>> are equivalent ways to get in trouble using jetty.
>>
>> Until we actually have the packaging plugin working, I suggest we
>> have
>> the tomcat and jetty builders munge a generic namespace to their
>> specific namespace, so that completely generic plans will still
>> deploy
>> on both.
>>
>> thanks
>> david jencks
>>
>> On Aug 22, 2005, at 5:51 PM, Jeff Genender wrote:
>>
>>
>>>> -----Original Message-----
>>>> The first would result in a configuration that could run on
>>>> any web container, the last two would produce configurations
>>>> that would run on a specific web container. Applications
>>>> would typically use the first form unless they needed
>>>> container-specific functionality (which would also mean that
>>>> they needed that specific container at runtime).
>>>>
>>>> I included the namespace qualifiers for clarity. I believe
>>>> that suitable use of schema imports would mean that they
>>>> could be removed simplifying the XML form used by users. It
>>>> may be harder for us to implement, but I think ease-of-use is
>>>> more important here than ease-of-implementation.
>>>>
>>>> --
>>>> Jeremy
>>>>
>>>>
>>>
>>> Everything you proposed is fine with me except for forcing the
>>> namespace for
>>> one container. I think we should have a universal web plan that
>>> will
>>> be
>>> accepted under both containers. So I would ask that we allow the
>>> generic
>>> file to be allowed to include both a jetty and tomcat name space.
>>> This will
>>> make our applications, like the console and debugtool to have 1
>>> geronimo-web.xml per app. IMHO this is a much simpler way to manage
>>> the
>>> apps that must run under both containers. I believe this is how DJ
>>> implemented it.
>>>
>>> Jeff
>>>
>>>
>>>
>>
>>
> | http://mail-archives.apache.org/mod_mbox/geronimo-dev/200508.mbox/%[email protected]%3E | CC-MAIN-2015-27 | refinedweb | 663 | 59.13 |
@gferreira Thanks! I was actually trying to generate a custom sized eps. Maybe this is the wrong approach. Matplotlib generates something similar very easily with the scipy.spatial libary...but ugly....
bic
@bic
Posts made by bic
- RE: Voronoi Fun
- RE: Voronoi Fun
Ok, maybe I need a bit more assistance. Not sure how the PIL library codes can be translated to drawbot's methods of creating images.
This the the sample code I'm trying to translate to Drawbot:
from PIL import Image import random import math def generate_voronoi_diagram(width, height, num_cells): image = Image.new("RGB", (width, height)) putpixel = image.putpixel imgx, imgy = image.size nx = [] ny = [] nr = [] ng = [] nb = [] for i in range(num_cells): nx.append(random.randrange(imgx)) ny.append(random.randrange(imgy)) nr.append(random.randrange(256)) ng.append(random.randrange(256)) nb.append(random.randrange(256)) for y in range(imgy): for x in range(imgx): dmin = math.hypot(imgx-1, imgy-1) j = -1 for i in range(num_cells): d = math.hypot(nx[i]-x, ny[i]-y) if d < dmin: dmin = d j = i putpixel((x, y), (nr[j], ng[j], nb[j])) image.save("VoronoiDiagram.png", "PNG") image.show() generate_voronoi_diagram(500, 500, 25)
- Voronoi Fun
I'm translating some existing codes to Drawbot to generate custom Voronoi SVG, and am looking for suggestions. I'm starting with bezier paths or polygons. What would you use? (Don't give away the answer yet! I may still ask for more help later in the week.)
- RE: Zip not supported?
- Zip not supported?
Is the "zip" function not supported in Drawbot? Seems to work in Sublime.
- Kadenze online course
Haven't tried this but worth a closer inspection. Anyone else wants to be a class buddy? | https://forum.drawbot.com/user/bic | CC-MAIN-2019-51 | refinedweb | 291 | 61.53 |
Amazon Echo is fun to use, and it has a really cool ability to control most of the home automation devices out there, like Philips Hue. But nothing beats experimenting and building your own DIY home automation system.
In this project I'm going to show you how to create an IoT-based light that is driven through AWS Lambda, and I will show you how to create your own Alexa skill to work with the light.
Arduino and ESP8266
At the heart of this project are an Arduino Nano and an ESP8266-12E; the ESP8266 is used to connect the Arduino Nano to the internet. Both boards can be programmed using the Arduino IDE, so start by downloading it.
Data Flow Diagram
The IoT light polls the Phant server and checks the current state of the light; the state stored on Phant is modified by the AWS Lambda function each time the Alexa intent is invoked. Detailed instructions on how to set up each of the blocks can be found in the later steps of the project.
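To make the Lambda side of the diagram concrete, here is a minimal sketch of what such a function can look like in Python. The intent names ("TurnOnIntent"/"TurnOffIntent") and the placeholder host and keys are assumptions for illustration only; substitute the Phant values you will generate in the next step:

from urllib.request import urlopen

# Placeholders: fill in your own EC2 IP and Phant stream keys
PHANT_INPUT = "http://<your-EC2-instance-ip>:8080/input/<public_key>"
PRIVATE_KEY = "<private_key>"

def set_light_state(state):
    # Phant logs a new lightstate row each time the input URL is hit
    urlopen("{}?private_key={}&lightstate={}".format(PHANT_INPUT, PRIVATE_KEY, state))

def build_response(text):
    # Minimal Alexa custom-skill response envelope
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

def lambda_handler(event, context):
    if event["request"]["type"] != "IntentRequest":
        return build_response("Say turn the light on or off")
    intent = event["request"]["intent"]["name"]
    if intent == "TurnOnIntent":
        set_light_state(1)
        return build_response("Turning the light on")
    if intent == "TurnOffIntent":
        set_light_state(0)
        return build_response("Turning the light off")
    return build_response("Sorry, I didn't get that")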
Phant Server
Let's start with setting up the Phant server, the Phant server is an open source Data Logging server for IoT devices developed by Sparkfun written in Nodejs. You can setup Phant server on AWS E2C Instance.
You can start by downloading the Phant sever code from GitHub, create an E2C instance on AWS and open up an SSH terminal, install NodeJS on the Instance and then install Phant by typing the below command.
npm install -g phant
Next you should be able to launch phant server by typing
phant
By default the Phant server runs on port 8080 for HTTP and port 8081 for telnet. You can visit the Phant web interface at http://<your-EC2-instance-ip>:8080 (make sure the instance's security group allows inbound traffic on these ports).
Once on the Phant page, create a new stream by clicking Create, and give it a single value field named "lightstate".
On the next page you should see your public and private keys; note them down, as you will need them in the next steps.
Programming the ESP8266
Let's start with programming the esp8266, for this you will need the arduino IDE, after downloading and install the arduino IDE, navigate to File --> Preferences and in the Additional Boards URL enter the bellow URL.
And then navigate to Tools --> Boards --> Boards Manager and install the esp8266 boards.
Next, connect the esp8266 to the Sparkfun FTDI chip and upload the below program, you need to add your WiFi SSID, password, the public and private key of the Phant server and the IP of the Phant server.
#include <ESP8266WiFi.h> #include <Phant.h> const char WiFiSSID[] = "<wifi_ssid>"; const char WiFiPSK[] = "<wifi_pass>"; const char parseKey[] = "1b"; const char PhantHost[] = "<ip>"; const char gPublicKey[] = "<public_key>"; //Enter your phant public key goes here const char gPrivateKey[] = "<private_key>"; //Enter your phant private key goes here const unsigned long postRate = 1000; unsigned long lastPost = 0; void setup() { Serial.begin(115200); connectWiFi(); digitalWrite(LED_BUILTIN, HIGH); } void loop() { if (lastPost + postRate <= millis()) { if (getFromPhant()) lastPost = millis(); else lastPost = millis(); } } void connectWiFi() { byte ledStatus = LOW; WiFi.mode(WIFI_STA); WiFi.begin(WiFiSSID, WiFiPSK); while (WiFi.status() != WL_CONNECTED) { digitalWrite(LED_BUILTIN, ledStatus); ledStatus = (ledStatus == HIGH) ? LOW : HIGH; delay(100); } } int getFromPhant() { Phant phant(PhantHost, gPublicKey, gPrivateKey); WiFiClient client; if (!client.connect(PhantHost, 8080)) { return 0; } //Get data from phant cloud client.print(phant.get()); client.println(); int cTrack = 0; bool match = false; int pCount = 0; while(1) { if (client.available()) { char c = client.read(); if(!match) { if(c == parseKey[cTrack]) { if(cTrack == (sizeof(parseKey)-2)) match = true; cTrack++; } else { cTrack = 0; } } else { if(pCount == 2) { int dControl = c - '0'; if(dControl == 1 | dControl == 0){ Serial.println(dControl); } } } pCount++; } } if (!client.connected()) { client.stop(); } return 1; }
Programming the Arduino Nano
After programing the esp8266 it is now time to program the arduino Nano the arduino nano and the esp share data via serial so make sure you have the tx and rx pin of the nano disconnected before uploading the programing the arduino nano.
The light is controlled via digital pin 4 of the arduino nano you can change this or use multiple pins based on your project.
Copy the code from below and paste it in the arduino IDE and hit upload.
int light = 4; // Light Connected to digital pin 4 void setup() { pinMode(light, OUTPUT); Serial.begin(115200); while (!Serial); } void loop() { if (Serial.available()) { int state = Serial.parseInt(); //Convert serial data to int if (state == 1) { digitalWrite(light, HIGH); // When state is 1 turn on the Light } if (state == 0) { digitalWrite(light, LOW); // When state is 0 turn off the Light } } }
Light Circuit
Now, once the code is uploaded to both the boards, you can assemble the circuit, for this all you need the do is follow the circuit diagram below, no soldering skills is required as this project is assembled on a Breadboard but feel free to solder all the components onto a PCB once you are done testing it on a breadboard.
This circuit uses a Tiac (BTA26) and an opto-coupler to control the AC mains voltage based on the state of the arduino digital pin 4. So the light turns on when the arduino digital pin 4 goes high.
Testing
Before setting up the Alexa skill you can test the circuit by powering it on and making sure it connects to a WiFi network with internet access and once the esp8266 connects to the Phant server, you be able to change the state of the digital pin 4, on and off, there by turning the Triac and the light on and off.
AWS Lambda Function
Next, you will need to setup AWS Lambda, To create the function go to AWS Developer Console, if you don't have an account make sure you sign up for one, you could also use the same account credentials that you use for shopping at amazon.com.
Once you reach the developer console after logging in you need to navigate to Lambda and Create a new function.
Select Nodejs 4.x as your Runtime, and give a name and Role to your function. In the next page add Alexa Skills Kit Trigger form the add trigger menu and scroll down until you find a code editor and make sure the index.js file is selected.
Copy the code from below and paste it in the code editor make changes where it says <ip> , <public_key> and <private_key> to the values you saved while stetting up the Phant server.
var https = require('https'); var http = require('http'); ip = "<ip>"; // Enter your ip to the Phant server here public_key = "<public_key>"; // Enter your public key to the Phant server here private_key = "<private_key>"; // Enter your private key to the Phant server here exports.handler = (event, context) => { try { switch (event.request.type) { case "IntentRequest": console.log(`INTENT REQUEST`) switch(event.request.intent.name) { case "TurnLightOn": var endpoint = "http://"+ip+":8080/input/"+public_key+"?private_key="+private_key+"&lightstate=1" http.get(endpoint, function (result) { console.log('Success, with: ' + result.statusCode); context.succeed( generateResponse( buildSpeechletResponse("The light is turned on", true), {} ) ) }).on('error', function (err) { console.log('Error, with: ' + err.message); context.done("Failed"); }); break; case "TurnLightOff": var endpoint2 = "http://"+ip+":8080/input/"+public_key+"?private_key="+private_key+"&lightstate=0"; http.get(endpoint2, function (result) { console.log('Success, with: ' + result.statusCode); context.succeed( generateResponse( buildSpeechletResponse("The light is turned off", true), {} ) ); }).on('error', function (err) { console.log('Error, with: ' + err.message); context.done("Failed"); }); break; default: throw "Invalid intent"; } break; case "SessionEndedRequest": console.log(`SESSION ENDED REQUEST`); break; default: context.fail(`INVALID REQUEST TYPE: ${event.request.type}`) } } catch(error) { context.fail(`Exception: ${error}`) } }e buildSpeechletResponse = (outputText, shouldEndSession) => { return { outputSpeech: { type: "PlainText", text: outputText }, shouldEndSession: shouldEndSession } }; generateResponse = (speechletResponse, sessionAttributes) => { return { version: "1.0", sessionAttributes: sessionAttributes, response: speechletResponse } };
Once you have made the changes save the changes and then note the "arn", the arn can be found on the right, top of the page, you will need this number for setting up the Alexa Skill later.
Creating the Alexa Skill
Once you have setup the circuit, phant server and Lambda now you its time to create your own Alexa skill. The Amazon Skill runs on the Amazon Echo, dot and any devices that supports Alexa Voice service.
To create the Alexa Skill you will need to visit the Amazon developer page and select Alexa form the menu. In the alexa developer page, select alexa skills kit and in the next page select create add a new alexa skill.
Fill in the details in the first step as follows -
- Skill Type : Custom Interaction Model
- Name : Give a name to your amazon skill
- Invocation Name: This is a two word name that you use to ask alexa to select your skill set before asking alexa to toggle the state of the light. For more on Selecting Invocation Name you can check you the Invocation name Guidelines.
Make sure you save the details before you proceed to the next step.
Interaction Model, in this step you teach you amazon skill to understand Natural Language spoken by humans. For this you will need Sample Utterances, this is the sentences you would say to alexa to turn on or off the lights. You can see the Utterances I used below.
TurnLightOn Turn Light On TurnLightOn Set Light On TurnLightOn Switch the light on TurnLightOff Turn Light Off TurnLightOff Set Light Off TurnLightOff Switch the light off
Each of the utterances start with the instance name to bind it to, Here we have only 2 instances one to turn on the lights and the other to turn it off.
In the instance schema field you need to declare the instances before using it, this is writing in JSON format and you can use the Instance schema I used below.
{ "intents": [ { "intent": "TurnLightOn" }, { "intent": "TurnLightOff" } ] }
On the next page it is time to bind the AWS Lambda function to the Alexa Skill, Select the service end point to AWS Lambda and enter the ARN, make sure you save the details before proceeding.
Now it is time to test the Alexa app, power on the devices and you should be able to type in the Utterance in the Enter Utterance field, for example "Turn on the Light" and you should see the light turn on.
And that is it you should now have an Alexa Controlled Lights, you can publish your app and use it along with your Alexa devices or if you don't have an alexa device you could use echosim.io to test your Alexa skill. | https://www.hackster.io/tinker-project/arduino-powered-smart-light-works-with-amazon-echo-9e20fd | CC-MAIN-2018-47 | refinedweb | 1,773 | 67.89 |
NAME
shutdown - disable sends and/or receives on a socket
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <sys/types.h> #include <sys/socket.h> int shutdown(int s, int how);
DESCRIPTION
The NOTES VALUES
The shutdown() function returns the value 0 if successful; otherwise the value -1 is returned and the global variable errno is set to indicate the error.
ERRORS ALSO
connect(2), socket(2), inet(4), inet6(4)
STANDARDS
The shutdown() system call is expected to comply with IEEE Std 1003.1g-2000 (“POSIX.1”), when finalized.
HISTORY
The shutdown() system call appeared in 4.2BSD. The SHUT_RD, SHUT_WR, and SHUT_RDWR constants appeared in IEEE Std 1003.1g-2000 (“POSIX.1”).
AUTHORS
This manual page was updated by Bruce M. Simpson 〈[email protected]〉 to reflect how shutdown() behaves with PF_INET and PF_INET6 sockets.
BUGS
The ICMP “port unreachable” message should be generated in response to datagrams received on a local port to which s is bound after shutdown() is called. | http://manpages.ubuntu.com/manpages/maverick/man2/shutdown.2freebsd.html | CC-MAIN-2014-15 | refinedweb | 164 | 56.76 |
Talk:Key:tunnel
Contents
- 1 Tunnel comments
- 2 Shipping tunnel
- 3 Layer
- 4 Ways under a building
- 5 Tunnel vs. bridge
- 6 Name
- 7 Tunnel vs. underpass, add underpass=yes tag?
- 8 building passage
- 9 Tunnels not rendered
- 10 Structure on the end of tunnel to cover sun and avoid bright blindness
- 11 tunnel with partially open-air segments?
Tunnel comments
I made this tunnel here, but it looks a bit strange:
(click to enlarge). The orange road seems to extend past the little black tunnel marks on both sides of the tunnel. Did I do something wrong or is it normal? I've seen other tunnels where the road stops right at the tunnel mark, like in the example on the Tunnel page. Mtcv 16:21, 2 July 2007 (BST)
- Perhaps because the road is not a contiguous way? Renderers prefer ways to be in one piece, without branching segments. The way in you example consists of segments on both sides of the tunnel; possibly the renderer gets confused by that. I've cut the way in two; let's see if that improves things. Eugene van der Pijll 18:40, 2 July 2007 (BST)
- I don't think it's working. I didn't think it would make a difference anyway, because I made another tunnel in the same way (non-contiguous) here and there it looks ok. On the other hand, I made yet another tunnel here, with three ways (one before, one tunnel and one after) and there it looks strange again. I guess this is just normal random behaviour ;-) . Mtcv 16:14, 3 July 2007 (BST)
Shipping tunnel
Hi, i'am living in germany and want to tag the only shipping tunnel in our country (see also and). Do i have to tag the part of the river with level -1 ?
Matthias.
Tunnel with layer=-1 is like animal=black horse+color=black...
--Markus 11:31, 3 September 2008 (UTC)
Discussion at Talk:Key:layer#Layer. --Eimai 12:37, 3 September 2008 (UTC)
Ways under a building
I am mapping Barcelona, where some streets go under the houses. Then, I take the part where the street is under the house and tag it as tunnel. The problem is: When this street joins an other, I'm joining featuers at different levels. Is this right?
An example could be here:
- Where the end of the tunnel connects to only one other way, the ways can have different layers. The way with the tunnel tag should start at the location where the wall is. Example with the location you specified: you could/should add a new way with highway=pedestrian to connect this way to the Carrer Hospital (if it's possible to walk from that pedestrian way to that road, which I assume to be possible). Likewise for other highways in tunnel: a short way not in tunnel connects to the crossing street, because the tunnel does not end at the intersection midpoint but before that. Alv 13:31, 29 July 2009 (UTC)
Tunnel vs. bridge
Quote: Language sometimes misleads us. At least in Swedish, the word "tunnel" is used for two different things:
- a drilled tunnel through a mountain or underground, saving a road from going up and downhill, and
- a passage under another road/railway/canal where the other route maintains its elevation.
The latter is best represented in the OpenStreetMap as a bridge on the top route, even if most people wouldn't refer to that crossing as a bridge. The difference is especially noticable when the two routes are not perpendicular. For a tunnel, the OSM decoration on the lower route. For a bridge, the OSM decoration is on the upper route. If a road cuts under a railroad at 70° angle, this is better rendered as railroad bridge above the road, than as a road tunnel underneath the railroad.
- We don't tag for renderes. A tunnel should be tagged as tunnel and a bridge als bridge! It is not only a problem in swedish languages, that people mix up tunnel and bridges if there is an underbridge/underpass/undercrossing. I think some pictures should explain the difference. --Langläufer 16:58, 4 September 2009 (UTC)
- I'd say if the street over the tunnel has limitation in weight for example it is in every case a bridge. And if the tunnel has limitations( e.g. height), it is a tunnel. So you can see that I think that there is no "either or". Use both if you can't decide! --westfa 16:58, 15 November 2009 (UTC)
- This is not a question of the limitations. You can put the limitations without any tunnel or bride tag at the ways. And you can have limitations for both street, upper and lower at the same time - but it should not be tagget as tunnel and bridge at once. --Langläufer 07:06, 16 November 2009 (UTC)
- This is not a problem of Swedish language, this is a problem of common natural language. In natural languages, meaning of words is not as strict and clear, as in technical (scientific) language, where words are actually terms with strict definitions. I OSM, we don't use natural language, we use terms. Bridges and tunnels are subjects of structural engineering science, and in engineering, tunnel is a man-made structure (passage way), dug (or constructed in other way) through surrounding substance, such as earth, rock and so on (including water). So, if this structure is surround by earth or rock, it's a tunnel, not just a passage under something like bridge or any other structure. If there is a building above the street, standing on columns or other support structures, it doesn't turn this street into a tunnel. it just makes it covered. --BushmanK (talk) 22:21, 6 June 2016 (UTC)
Name
I think we should distinguish between street name and tunnel name. hence a single name-tag is insufficient since street and structure (e.g tunnel or bridge) can have different names. is the usage of a namespace advisable? e.g. name=foo (name of the street) vs. tunnel:name=bar (name of the tunnel in conjunction with the tunnel tag)? please comment! --Marc 12:02, 29 December 2009 (UTC)
- Wouldn't that be loc_name=*? A name for a tunnel seems quite local. --goldfndr 03:36, 30 December 2009 (UTC)
- . local? I wouldn't bet on it. the local name should be used when it differs from the original name. doesn't make much sense to alienate tags: the problem itself isn't solved by that. --Marc 08:19, 30 December 2009 (UTC)
- Could you provide an example of a tunnel on which loc_name=* would be inappropriate? Better yet, an example of a street section (way) that has a name, a local name, and a tunnel name? --goldfndr 16:51, 31 December 2009 (UTC)
- Well I haven't too. The fact that it could happen should be enough to look for a better, bullet proof solution. But if you want an example: take the Gotthard road tunnel in Switzerland. you got 3 names for each national language, an english name, local names (like "erste Röhre", "Gotthardröhre" and many more) and you have the A2 motorway going through the tunnel, also known as "Gotthardautobahn", "Nord-Süd-Achse" (many national, local and alternative names). Or take the old Elbtunnel in Hamburg: Sankt Pauli-Elbtunnel (official name), Alter Elbtunnel (currently used as old_name), Radkappenkiller (definitely a local name) etc..You can look for local names on almost any bigger object. You and I might not know them, but locals do (hence the name). And they can apply to the highway and the tunnel and be different from each other. Using the loc_name=* for the tunnel and name=*for the corresponding highway section would be an unnecessary constriction. --Marc 09:03, 1 January 2010 (UTC)
- One option is to use relations for the naming, i.e. one relation for the autobahn with its names, and another relation for the tunnel with its names. This way you can manage name=*+old_name=*+official_name=*+loc_name=*+alt_name=*+name:fr=*+name:de=*+name:en=*+name:it=* and probably many more for both objects without conflicts. This will also leave it to the renderer if the highway name or the tunnel name is to be rendered, and makes both the highway name and the tunnel name to be searchable. --Skippern 12:13, 1 January 2010 (UTC)
- yes this would be an option. but what is the benefit over using the namespace approach (tunnel=yes + tunnel:name=..., tunnel:description..., can be expanded to virtually any tag, e.g. if a tunnel / bridge / access restriction / the filling pump for diesel on a fuel station has a specific name or description or fixme-tag and so on). I'm just not that convinced about using relations to somehow fix tag scheme shortcomings. And therefore I'm searching a generic approach that fits any tag combination, even if some of them don't seem reasonable. the relation approach is accurate but adds - in my opinion - a lot of overhead and complexity without providing a generic and simple approach to the problem (which extends fairly beyond the tunnel naming problem). and especially when looking at things like the fixme tag it would be great to use a namespace approach, e.g. for something like opening_hours:fixme=resurvey instead of just a simple fixme tag. and this concept should be extended to tags like name, comments, notes and so on. it makes things just a lot easier without using relations or newly invented tags. that was the idea I had in mind. therefore I might like the relation approach but it doesn't solve the main problem I currently see with OSM tagging scheme. (sorry I'm not a native english speaker so it's fairly hard to express myself) --Marc 12:30, 1 January 2010 (UTC)
- If we are to head in and prefix name for everything, building:name, highway:name, place:name, waterway:name, bridge:name, tunnel:name, than we would make it extremely difficult to make rendering and search software, maybe even routing software, as they would need to filter *:name on top of how they do it today. Therefor relations are a cleaner approach to it. Than your fixme problem is different, I agree that a fixme tag could be prefixed to specify what needs fixing, and as far as I know, only keepright and maplint need to check for these. I see no problem in tagging pumps:fixme=check octane numbers, do they have bio diesel here? or other variants of fixme, this way it can be easier to search for those interested in fixing a specific problem, i.e. have a PD list of gas stations that sells bio diesel. But all of this are derailing off the problem with tunnel names. I suggest relation to solve that. --Skippern 14:07, 2 January 2010 (UTC)
- No, I don't want to replace every name tag with a prefix:name. so the normal name tags remain: name=* is used for the entity itself. there is no need for prefexing highway, place, waterway or building names. prefix:name=* is just used for naming specific tags of an entity (naming an access restriction of a street, the tunnel of a street and so on). It's a generic proposal that can be used for the tunnel and every other naming problem. so there is no need to change anything in current software implementations: the normal main name tag remains. and besides that: filtering for prefixed name tags or searching and handling relations makes no difference. it's always an additional step you have to make to extract the tunnel name. Do you have any information showing the performance differences between using a relation vs. extracting the name from an existing tag? that would be interesting. --Marc 07:26, 3 January 2010 (UTC)
- I cannot show you anything related to software performance, as I have no working rendering, searching or routing software to test on, but I can see it from my experiences as a mapper. I have used relations to groupe together long highways, where I have put both name and ref tags in the relations, while I have omitted this on some segments of complex intersections on the same highway. Also some parts of such highways passing through cities have local street names in addition to the highway name. Here I have used the local street names in the name tag and left the highway name in the relation. This can also be done for tunnels and bridges, I would then use the tunnel or bridge name on the road segment, and leave the highway name in the relation. I do not know if this is righter, it's the way I do it and will continue to do it. --Skippern 12:35, 3 January 2010 (UTC)
- I think Marcs proposal: "tunnel:name=* or name:tunnel=*" is verry good. It could be also used for bridges or some other things. Using relations ars not comfortable, because the name-key indicates only the name of the highway/railway/waterway. Example: If a highway with a tunnel is tagged by "name=Alpha Street", is this the name of the tunnel or street? (-> name is from highway/railway and name:tunnel is from tunnel) So we have to diskus which is better, XXX:name=* or name:XXX=*. In the tagwatch both is used. Than one of the important part is, write it into the wiki. -- MasiMaster 00:10, 27 January 2012 (UTC)
Tunnel vs. underpass, add underpass=yes tag?
In the wiki page, tunnel is first described by: "The tunnel tag is used to map ways that runs through an underground passage."
Then however, in the third paragraph, it appears the "underground" requirement is dropped, by classifying underpassesas tunnels: "At an undercrossing/underpass you have to decide on the basis of the construction if you are in a tunnel or under a bridge. The ramps down to the underpass are not part of the tunnel. These ramps can be tagged as cutting=yes."
In Wikipedia, , theres a topic "Usage limitations":
"A tunnel is relatively long and narrow; in general the length is more (usually much more) than twice the diameter. Some hold a tunnel to be at least 0.160 kilometres (0.10 mi) long and call shorter passageways by such terms as an "underpass" or a "chute". For example, the underpass beneath Yahata Station in Kitakyushu, Japan is 0.130 km long (0.081 mi) and so might not be considered a tunnel."
Typical tunnel-tagged underpasses go under a road or a kind of bridge, and are not really undergrand. I think it would make sense to introduce "underpass=yes" for the underpasses (used with the layer tag), and reserve "tunnel=yes" for the cases where the way really runs through an underground passage.
This would make navigators better able to give instructions. For example, the GpsMid navigator currently instructs to "into the tunnel" for underpasses commonly marked with tunnel=yes - even when they only go under a road construction, not really underground.
Anyway think alike? Anyone of those possibly thinking alike familiar with how to set up a new tag proposal for Key:underpass? --Jkp 10:52, 4 October 2010 (BST)
- there is no common international definition for tunnel. Nether heard of that 160 meter limit. Only common is, that it leads through underground. So it becomes not more simple to differentiate between tunnel, underpass and bridge. An underpass is every way, that goes under an other way. That says nothing about construction. --Langläufer 18:39, 4 October 2010 (BST)
- I'm thinking part of that issue was Langläufer incorporating my cutting mention into a paragraph rather than (as it's a separate topic) making it a separate paragraph. I've separated it out, and made a similar edit at bridge.
- I concur that any "tunnel-tagged underpasses" that "are not really underground" should not have a tunnel tag (and, also, should be layer=0). But I'm curious for an example in which the road above is not a "kind of bridge", as I'm wondering how that's physically possible. As for underpass=yes, I'm imagining that as cutting=yes, with bridge=yes for the layer>0 roads above, but a counterexample would be helpful. But if there was an underpass=yes, wouldn't it only be a node (carefully placed to be an "intersection" of the two highways) and not a way? Is there a rendering or other analysis advantage, and if so would the same argument be used for an overpass=yes tag or an expansion to Proposed features/Junction? --goldfndr 14:47, 5 October 2010 (BST)
- Could it be that tunnel=underpass would be the best choice ? It is quite often that general roads are lifted up from the surrounding landscape somewhat so that when paths or other roads cross under the road, there is neither a traditional tunnel nor a bridge, but some kind of hybrid ? Referring to Proposed_features/building_passage where we see that tunnel=avalanche_protector etc was approved. As of today tunnel=underpass has been used 20 times. [1]. We also have 34 instances of key:underpass=yes [2] --MortenLange (talk) 19:34, 14 June 2016 (UTC)
- OSMs "tunnel or bridge" scheme is unrealistic in 90% of cases. Many of those are grade separated crossings which are a mix of a tunnel and bridge building with two ways on different levels. We could develop a new tagging scheme for those but I don't see a benefit in those simple cases. RicoZ (talk) 13:53, 16 June 2016 (UTC)
building passage
The voting on new tunnel values has started: --Flaimo 08:19, 22 November 2012 (UTC)
Tunnels not rendered
Some tunnels on A14 motorway in Italy are not rendered, like this one: , or this one also: . Does someone know why? --Gspinoza 18:18, 19 December 2012 (UTC)
- Simple, they just needed to have the /dirty command called to render them. Seems like when they were last edited, it must have been during an rendering outage. Thus, when the /dirty command is called, they get re-rendered and the tunnel shows up properly. -- rickmastfan67 14:19, 20 December 2012 (UTC)
- I found that A14 relation had a tag highway=motorway which wasn't supposed to be there. Other Italian motorways had it (A4 and A1) and none of them showed tunnels. I removed the tags and tunnels reappeared on A4 but not on the others. I never encountered the /dirty command, and I don't know how to use it. --Gspinoza 16:24, 20 December 2012 (UTC)
- You need to get the tile image link, and then add "/dirty" to the end of it. See the Slippy Map page for more info about this. -- rickmastfan67 03:42, 1 January 2013 (UTC)
Structure on the end of tunnel to cover sun and avoid bright blindness
How would you map these structures often used in tunnels to prevent drivers/train conductors from being blinded by the sun? Check pics: 1 - 2 - 3 - 4 - 5. Currently I've been using building=yes, but some mappers might see that as tagging for the renderer. --Nighto (talk) 18:16, 5 May 2014 (UTC)
- Depends on what it looks like. Look at key:covered if it has something that you could use, else decide on some new tunnel or covered sub-tag and let us know. RicoZ (talk) 11:35, 13 May 2014 (UTC)
tunnel with partially open-air segments?
Some tunnels have segments that are uncovered, among other things to let air circulate. For example, in [1] those segments are marked as being surrounded by a retaining wall (google earth view in [2]) This situation is ambiguous. On the one hand, these segments are partially or wholly uncovered at the top, so it would be plausible to remove the tunnel tag and leave layer=-1; on the other hand, doing that would create several short non-tunnel segments that render badly (it preferable to render the tunnel continuously) and would be misleading, since they are in fact part of the tunnel and not completely open air (like a cutting). What is the best way to tag this?
[1] [2]
--Koenige (talk) 16:50, 4 April 2015 (UTC) | https://wiki.openstreetmap.org/wiki/Talk:Mapping/Features/Tunnel | CC-MAIN-2018-17 | refinedweb | 3,402 | 71.04 |
A unique Python redis-based queue with delay
This is a simple Redis-based queue. Two features that I needed were uniqueness (i.e. if an item exists in the queue already, it won't be added again) and a delay, like beanstalkd, where an item must wait a specified time before it can be popped from the queue. There are a number of other Redis-based queues that have many more features but I didn't see one that had these two features together. This 50-line class works for my needs. It may or may not work for you. Feel free to copy this and build on it.
Note: I wrote this in May 2010. I ended up using this solution after trying out beanstalkd and Gearman.
Install¶
Install on Ubuntu 10.10 Maverick
- Install the redis server
$ sudo apt-get install redis-server
- Install the python redis client
$ pip install redis
- Default conf file: /etc/redis/redis.conf
Default log file: /var/log/redis/redis-server.log
Default db dir: /var/lib/redis
Stop redis server: sudo /etc/init.d/redis-server stop
Start redis server: sudo /etc/init.d/redis-server start
Redis commands used¶
The queue is based on the redis sorted set data type and uses the following commands:
- ZADD - Add members to a sorted set, or update its score if it already exists
- ZRANGEBYSCORE - Return a range of members in a sorted set, by score
- ZREM - Remove one or more members from a sorted set
Code¶
import time import redis REDIS_ADDRESS = '127.0.0.1' class UniqueMessageQueueWithDelay(object): """A message queue based on the Redis sorted set data type. Duplicate items in the queue are not allowed. When a duplicate item is added to the queue, the new item is added, and the old duplicate item is removed. A delay may be specified when adding items to the queue. Items will only be popped after the delay has passed. Pop() is non-blocking, so polling must be used. The name of the queue == the Redis key for the sorted set. """ def __init__(self, name): self.name = name self.redis = redis.Redis(REDIS_ADDRESS) def add(self, data, delay=0): """Add an item to the queue. delay is in seconds. """ score = time.time() + delay self.redis.zadd(self.name, data, score) debug('Added %.1f, %s' % (score, data)) def pop(self): """Pop one item from the front of the queue. Items are popped only if the delay specified in the add() has passed. Return False if no items are available. """ min_score = 0 max_score = time.time() result = self.redis.zrangebyscore( self.name, min_score, max_score, start=0, num=1, withscores=False) if result == None: return False if len(result) == 1: debug('Popped %s' % result[0]) return result[0] else: return False def remove(self, data): return self.redis.zrem(self.name, data) def debug(msg): print msg def test_queue(): u = UniqueMessageQueueWithDelay('myqueue') # add items to the queue for i in [0, 1, 2, 3, 4, 0, 1]: data = 'Item %d' % i delay = 5 u.add(data, delay) time.sleep(0.1) # get items from the queue while True: print result = u.pop() print result if result != False: u.remove(result) time.sleep(1) if __name__ == '__main__': test_queue()
Results:
Added 1320773851.8, Item 0 Added 1320773851.9, Item 1 Added 1320773852.0, Item 2 Added 1320773852.1, Item 3 Added 1320773852.2, Item 4 Added 1320773852.3, Item 0 Added 1320773852.4, Item 1 False False False False False Popped Item 2 Item 2 Popped Item 3 Item 3 Popped Item 4 Item 4 Popped Item 0 Item 0 Popped Item 1 Item 1 False False False ^CTraceback (most recent call last): File "umqwdredisqueue.py", line 102, in
test_queue() File "umqwdredisqueue.py", line 98, in test_queue time.sleep(1) KeyboardInterrupt
Some links related to Redis queues¶
-
-
-
-
-
-
-
#1 Sajal commented on 2013-04-28:
Was trying to do the same thing, with only difference was i dont care about uniqueness (my objects are unique from before), and i ended up writing pretty much the exact same approach as you, but i find one problem with this.
This is not a real pop. If 2 workers are working on the same que, it is possible both get the same object when doing zrangebyscore at the same time before any one of them is able to zrem it. This is a very important consideration since i am in process of re-architecting the application to be able to run concurrently across cores (and eventually machines) with redis (or something) maintaining state.
Since i use redis for other things, id much rather use it for the que as well. Any ideas of the pop can be done in an atomic manner?
Will look into beanstalkd next. | http://www.saltycrane.com/blog/2011/11/unique-python-redis-based-queue-delay/ | CC-MAIN-2014-52 | refinedweb | 793 | 75.4 |
25 January 2007 21:57 [Source: ICIS news]
HOUSTON (ICIS news)--US paraffin wax (p-wax) spot values shed 2-5 cents/lb this week on weaker upstream costs as well as competitive activity, market sources said on Thursday.
Slack wax, a key feedstock for producing fully refined wax, fell by 4-5 cents/lb to 29-32 cents/lb ($660-700/tonne) FOB (free-on-board) following competitive activity that had been building for several weeks. Also contributing to lower numbers was weaker market sentiment, fresh offers and completed business.
Traders said that discounted prices were now fairly regular, but not all suppliers would agree. A few ?xml:namespace>
Prices for fully refined p-wax with melt points of 125-140°Fahrenheit dropped by 2-4 cents/lb to $1,000-1,050/tonne FOB, according to global chemical market intelligence service ICIS pricing. Lower prices were the result of weaker upstream costs - mainly softer base oils prices - as well as competitive activity, market sources said.
Participants on both sides of the market agreed that prices had slid as a result of attempts by suppliers to reduce inventories amid questionable p-wax demand the past couple of months.
The slowdown in the housing sector during the third and fourth quarters of last year was partially responsible for the slowed p-wax demand, sources said. Also, many p-wax consumption segments had satisfied their requirements in advance of the year-end holidays.
However, suppliers said that orders were ramping up with the approach of February.
The high-end microcrystalline sector remained stable with steady demand alongside balanced-to-tight supply. Associated prices continued to hold a premium of about 5-10 cents/lb to mid-melt point fully refined numbers.
Wax producers include ExxonMobil, Sunoco, CITGO, | http://www.icis.com/Articles/2007/01/25/9001524/us-p-wax-down-on-weaker-raw-material-costs.html | CC-MAIN-2014-42 | refinedweb | 296 | 50.97 |
I have run into a few scenarios where people want to be able to block access to Windows Explorer so that they can do something such as update the system in a machine that is otherwise publicly facing. One possibility is to create a desktop all your own.
The underlying architecture of Windows allows for something that may provide for this. Every instance of the operating system contains a collection of Sessions. Services run in Session 0, and interactive users run in Sessions 1, 2, 3, etc. (This is on Windows Vista - on Windows XP and earlier, the first interactive login shared Session 0 with services.) Each session contains a collection of Window Stations. Only one of these, WinSta0, is given access to display output, keyboard, and mouse. (Consequently, I haven't come up with any use in anything I have developed for the ability to create more.) Each Window Station contains a collection of Desktops.
You can already see multiple desktops just by using Windows. When you get to the login screen, that is a desktop. When your screen saver activates (assuming you are using a secure secreen saver), that has its own desktop. When you are prompted with a UAC dialog in Windows Vista, by default that has its own desktop. And you can create more. You can use the CreateDesktop API to create a new one, and then the SetThreadDesktop and SwitchDesktop APIs to switch to it. Here is a very simple example:
#include <windows.h>
int APIENTRY WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd) {
HDESK hdeskOriginalThread = GetThreadDesktop(GetCurrentThreadId());
HDESK hdeskOriginalInput = OpenInputDesktop(0, FALSE, DESKTOP_SWITCHDESKTOP);
HDESK hdeskNewDesktop = CreateDesktop(TEXT("PrivateDesktop"), NULL, NULL, 0, GENERIC_ALL, NULL);
SetThreadDesktop(hdeskNewDesktop);
SwitchDesktop(hdeskNewDesktop);
MessageBox(NULL, TEXT("MessageBox on private desktop"), TEXT("Private Desktop"), MB_OK);
SwitchDesktop(hdeskOriginalInput);
SetThreadDesktop(hdeskOriginalThread);
CloseDesktop(hdeskNewDesktop);
return 0;
}
This may immediately give you some ideas about kiosk applications. However, the desktop window manager (DWM) only runs on the primary desktop, so you won't be able to use Glass on any additional desktops you create. (Incidentally, that's also why UAC prompts are not rendered using glass.) So, if that's a consideration, then you may want to think of other approaches. But for some edge case scenarios, it's nice to know that you have this option available.
Hi, Chris
I run into a strange issue about desktop on Vista.
If I create a 2nd desktop using CreateDesktop (with NULL for security attributes), and launch an IE using CreateProcess() on the 2nd desktop from default desktop, it will be created with medium integrity level and a pop up on the 2nd desktop mentioning that Admin rights is required w/o real IE displayed.
However, launch an IE using CreateProcess() on default desktop directly is OK(and in low integriy level).
Then I tried launching an IE using CreateProcess() on default desktop from 2nd desktop, it works fine with an IE with low integrity level on default desktop.
So, how can I enable same beheavior on 2nd desktop as the default one – IE can be launched in low integrity level w/o any issue?
BTW, I am testing under admin user account. What are the changes about desktop, explorer on Vista?
Thanks in advance!
Mary
Hi, Chris
I run into a strange issue about desktop on Vista.
I created an app running in 1st desktop, that launches an app in the 2nd desktop. The app in the second desktop launches 3 more apps (notepad, paint & calculater). When I switch to the second desktop, the Aero feature of Vista is missing in the second Desktop. I wanted to know why is it so, is it a limitation on part of Vista or is it by design or am I missing something…
Thanks in advance!
Manoj
Hi Manoj,
Yes, that is by design. As I mentioned, "the desktop window manager (DWM) only runs on the primary desktop, so you won’t be able to use Glass on any additional desktops you create." So, any desktop other than the default desktop does not get glass.
Hi, Chris
Thanks for your response, but I still have some problem.
My manager here, does not seem contented with the reply. Can you help me point so some articles/resources by Microsoft, which says that
"the desktop window manager (DWM) only runs on the primary desktop, so you won’t be able to use Glass on any additional desktops you create."
As of me, I am satisfied with your reply, but what to do with my manager???
Please help me…
Thanks in advance!
Manoj
Well, when I first discovered that limitation, I just shot an email to the team. One of the developers on the team is the person who verified this for me. I don’t know if this is documented in the SDK or not, since creating additional desktops is relatively rare.
For some reason, the idea of multiple desktops was bubbling around in my head this morning and I suddenly…
You wouldn’t happen to know whether it’s possible to create a desktop which is displayed on a separate monitor, would you ?
The problem I’m trying to workaround is that some display cards (eg nVidia Quadro an other dual head cards) do not support the independent display feature, which you would normally do by using CreateDC("\DISPLAY.."). These display cards return an 1801 error (Printer name is invalid). So I’m trying to find other ways using windows to emulate the Independent Display behaviour. Hopefully I won’t need try resorting to directly calling the display/video drivers, like Windows does. In other words, the secondary display cannot be part of the normal Windows desktop, accessible by anything else but my program(s).
Richard, that is not something I have experimented with, so unfortunately I don’t have a lot of information to help you here. I don’t see any way to specify the device; there is an lpszDevice argument, but it is reserved and must be NULL.
If you want to secure access to your desktop, you can provide a security descriptor in an argument to CreateDesktop(Ex), but it’s going to be difficult to keep processes from calling EnumDesktops and then calling OpenDesktop if other processes are running with the same credentials as yours are (unless you get creative).
Hey Chris,
Do you know if there are any plans on making DWM support multiple desktops in future releases of Windows? Thanks. 🙂
Alex.
I am not aware of any plans, but we are still fairly early on in the planning process. If you have a scenario or scenarios (the more details, the better – including the name of your org helps as well) feel free to send them to me via the email me link, and I will make sure that feedback gets into consideration for future releases.
I’ve been exploring using multiple desktops in Windows and found some great resources online: Chris Jackson
I've been exploring using multiple desktops in Windows and found some great resources online: Chris
i created new desktop using CreateDesktop API,then i move to that desktop using SwitchDesktop API.
here i couldnt able to access Flip 3D (Windows key+Tab) .what is the solution Chris.
Hi nbaskar1983,
Above, I reference the lack of DWM in additional desktops: "However, the desktop window manager (DWM) only runs on the primary desktop, so you won’t be able to use Glass on any additional desktops you create."
Unfortunately, Flip 3D is implemented by the DWM, so that’s gone also.
Thanks,
Chris
Thanks for replay chris
Any other solution for this.
nbaskar1983-
Erm … don’t switch to a separate desktop? Architecturally, the only desktop where the DWM operates (today – not sure if/when this will change) is the default desktop.
Thanks,
Chris
As I am focusing more and more on Windows 7, I find that blogging now begins with web searching, to make | https://blogs.msdn.microsoft.com/cjacks/2006/11/09/a-desktop-of-your-own/ | CC-MAIN-2018-47 | refinedweb | 1,323 | 60.55 |
pthread_spin_init()
Initialize a thread spinlock
Synopsis:
#include <pthread.h> int pthread_spin_init( pthread_spinlock_t * spinner, int pshared );
Since:
BlackBerry 10.0.0
Arguments:
- spinner
- A pointer to the pthread_spinlock_t object that you want to initialize.
- pshared
- The value that you want to use for the process-shared attribute of the spinlock. The possible values are:
- PTHREAD_PROCESS_SHARED — the spinlock may be operated on by any thread that has access to the memory where the spinlock is allocated, even if it's allocated in memory that's shared by multiple processes.
- PTHREAD_PROCESS_PRIVATE — the spinlock can be operated on only by threads created within the same process as the thread that initialized the spinlock. If threads of differing processes attempt to operate on such a spinlock, the behavior is undefined.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The pthread_spin_init() function allocates the resources required for the thread spinlock spinner, and initializes spinner to an unlocked state.
Any thread that can access the memory where spinner is allocated can operate on the spinlock.
Results are undefined if you call pthread_spin_init() on a spinner that's already initialized, or if you try to use a spinlock that hasn't been initialized.
Returns:
Zero on success, or an error number to indicate the error.
Errors:
- EAGAIN
- The system doesn't have the resources required to initialize a new spinlock.
- EBUSY
- The process spinlock, spinner, is in use by another thread and can't be initialized.
- EINVAL
- Invalid pthread_spinlock_t object spinner.
- ENOMEM
- The system doesn't have enough free memory to create the new spinlock.
Classification:
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/p/pthread_spin_init.html | CC-MAIN-2015-11 | refinedweb | 292 | 57.37 |
User Tag List
Results 1 to 3 of 3
Thread: php sort() oddity
php sort() oddity
before the sort, my array looks like this:
item [0]: cow
item [1]: elepahnt
item [2]: donkey
item [3]: bear
afterwards it looks like this:
item [0]: bear
item [1]: donkey
item [2]: elephant
item [3]: cow
The first item is not being sorted. It is just being pushed to the end of the array. Any ideas?
- Join Date
- Jun 2000
- Location
- Sydney, Australia
- 3,798
- Mentioned
- 0 Post(s)
- Tagged
- 0 Thread(s)
If this odd sorting order is being caused because of upper/lower case problems it is because sort() does not sort alphabetically as you and I would - but by ASCII values.
You can pass a flag in you call to sort to tell it to compare the items as strings.
You will se that on that page, a user has contributed some code for their own solution which uses usort() which itself will use a user defined function to compare the items to be sorted. If nothing else the code is interesting
[email protected]
27-Mar-2001 01:40
I was looking for a way to sort an array alphabetically. The first post points out that 'Ab' is different to 'AB' and you can't make it case insensitive (well, I couldn't). So I did it like this:PHP Code:
function cmp ($a, $b) {
$tmp[0]=strtoupper($a);
$tmp[1]=strtoupper($b);
sort($tmp);
return (strcmp(strtoupper($tmp[1]) , strtoupper($b))) ? 1 : -1;
}
$listing[0]="AB";
$listing[1]="yzZAZz";
$listing[2]="YZzazZ";
$listing[3]="LhaASD";
$listing[4]="A";
$listing[5]="aB";
usort($listing, "cmp");
echo implode(",",$listing);
Which produces:
A,AB,aB,LhaASD,yzZAZz,YZzazZWebsite design, development & SEO
Format your code when posting. Skunk's useful PHP and MySQL links
Thanks for your help. Actually, it is not a uppercase/lowercase problem because all my words are lower case. Maybe it has something to do with the way I am displaying the array.
I am using
while ($query_data = mysql_fetch_array($rstResult))
to display the array. Any other ideas?
Bookmarks | http://www.sitepoint.com/forums/showthread.php?24912-php-sort()-oddity&p=173006 | CC-MAIN-2014-10 | refinedweb | 351 | 67.79 |
I have seen a few customers complain that their DataReceived event handler was never getting called and I thought I would share their problems here so you can learn from their mistakes. The problems revolve around the port not being open. This is sometimes as obvious as not calling SeiralPort.Close if you expect your DataReceived handler to be called but sometimes it is not as obvious as the GC can Close the SerialPort.<?xml:namespace prefix = o />
One customer had something like the following code:
shared Dim WithEvents mySerialPort as SerialPort
public shared sub SendText(text as string)
mySerialPort = new SerialPort(“COM1”)
mySerialPort.Open()
mySerialPort.WriteLine(text)
mySerialPort.Close()
end sub
public shared sub MySerialPortDataReceviedHandler( _
sender as Object, _
e As SerialDataReceivedEventArgs) _
Handles mySerialPort.DataReceived
‘Read here…
end sub
On the surface this may seem like everything should work. However the DataReceived event handler will not get called if the port has been closed. Also, the handler will only get called for data received since the port has been opened and the MySerialPortDataReceviedHandler is setup to handle the DataReceived event. So in this example the customer’s DataReceived handler will only get called for data received while code is executing between the Open call and the Close call in the SendText method. This is a pretty small window so it is unlikely that the customer would ever see their handler get called. The fix here is to not open and close the port for every write but open the port when the app is starting up and close the port when it is shutting down.
Another customer had the following code:
public static void
{
SerialPort mySerialPort = new SerialPort(“COM1”);
mySerialPort.DataReceived += MySerialPortDataReceviedHandler;
mySerialPort.Open();
Console.WriteLine(“Press any key to continue…”);
Console.ReadKey();
}
private static void MySerialPortDataReceviedHandler(
object sender,
SerialDataReceivedEventArgs e)
{
//Read here…
}
The problem here is a lot more subtle, the customer opened the port setup MySerialPortDataReceviedHandler to handle the DataReceived event and makes not calls to SerialPort.Close(). However the GC can actually close the port after Open has been called since there are no more references to mySerialPort after this call. An object is eligible to be GC’d as soon as there are no more references to the object. You can read more about this at. The fix is to make a call to SerialPort.Close() at the very end of the Main method. This keeps the mySerialPort alive until the end of this method.
This is true (C# Code )if you have /DEBUG set to false in c# compiler or if you don’t use VS.NET. If you have try the same with VS.NET the object would not be collected.
The SerialPort class requires some “warming up” time for our users coming from VB6 or other non-.NET | https://blogs.msdn.microsoft.com/bclteam/2006/05/15/serialport-and-datareceived-event-ryan-byington/ | CC-MAIN-2016-44 | refinedweb | 468 | 54.12 |
« Return to documentation listing
C Syntax
#include <mpi.h>
int MPI_File_set_size(MPI_File fh, MPI_Offset size)
INCLUDE ’mpif.h’
MPI_FILE_SET_SIZE(FH, SIZE, IERROR)
INTEGER FH, IERROR
INTEGER(KIND=MPI_OFFSET_KIND) SIZE
#include <mpi.h>
void MPI::File::Set_size(MPI::Offset size)
When using
MPI_File_set_size on a UNIX file, if size is larger than the current file
size, the file size becomes size. If size is smaller than the current file
size, the file is truncated at the position defined by size (from the beginning
of the file and measured in bytes). Regions of the file which have been
previously written are unaffected.
MPI_File_set_size does not affect the
individual file pointers or the shared file pointer.
Note that the actual
amount of storage space cannot be allocated by MPI_File_set_size. Use MPI_File_preallocate
to accomplish this.
It is erroneous to call this function if MPI_MODE_SEQUENTIAL
mode was specified when the file was opened.
INTEGER*MPI_OFFSET_KIND SIZE | http://www.open-mpi.org/doc/v1.6/man3/MPI_File_set_size.3.php | CC-MAIN-2014-10 | refinedweb | 151 | 57.37 |
The QDBusReply class stores the reply for a method call to a remote object. More...
#include <QDBusReply>
This class was introduced in Qt 4.2.
The QDBusReply class stores the reply for a method call to a remote object.:
QString reply = interface->call("RemoteMethod");
However, if it does fail under those conditions, the value returned by QDBusReply:.
Automatically construct a QDBusReply object from the reply message reply, extracting the first return value from it if it is a success reply.
Constructs an error reply from the D-Bus error code given by.
Returns the same as value().
This function is not available if the remote call returns void.
Makes this object contain the reply specified by message message. If message is an error message, this function will copy the error code and message into this object
If message.
This is an overloaded member function, provided for convenience.
Sets this object to contain the error code given by error. You can later access it with error().
This is an overloaded member function, provided for convenience.
Makes this object be a copy of the object other. | http://doc.trolltech.com/4.4/qdbusreply.html | crawl-002 | refinedweb | 184 | 59.3 |
Seam loses internationalization capabilities - gadeyne.bram, May 31, 2012 5:08 AM
Hi,
I'm using Seam 2.2.2.Final with RichFaces 3.3.3.Final on JBoss AS 6.
My application is translated into two languages (Dutch and English).
Sometimes (at server startup, or after anywhere from hours to days) the web application loses the translations. It then only displays the keys as labels instead of the translated text.
Would anyone know what causes this?
First I thought it could be caused by the character set used in the Dutch file. I then converted the files to ASCII with native2ascii.exe, but this did not solve the problem.
I'm using the default messages.properties files from Seam. In my case these are messages_nl.properties and messages_en.properties. In JSF I use the EL expression
#{messages['key']}.
My faces-config.xml file contains these settings:
<?xml version="1.0" encoding="UTF-8"?>
<faces-config>
<application>
<view-handler>com.sun.facelets.FaceletViewHandler</view-handler>
<locale-config>
<default-locale>nl</default-locale>
<supported-locale>nl</supported-locale>
<supported-locale>en</supported-locale>
</locale-config>
</application>
</faces-config>
In JSF I use it like this:
<h:outputText value="#{messages['key']}" />
In code I use:
Messages.instance().get("admin.scheduling.title");
Can I add some error logging somewhere to maybe find a log message indicating the problem?
1. Re: Seam loses internationalization capabilities - gebuh, Jun 9, 2012 2:04 PM (in response to gadeyne.bram)
Have you tried using the English locale? Does it do the same thing?
2. Re: Seam loses internationalization capabilities - gadeyne.bram, Jun 1, 2012 2:54 AM (in response to gebuh)
When it happens again I'll change the browser's locale to English and let you know if it finds the English version.
3. Re: Seam loses internationalization capabilities - gadeyne.bram, Jun 1, 2012 11:27 AM (in response to gebuh)
It just happened again. If I change the locale to English, the problem persists.
4. Re: Seam loses internationalization capabilities - gadeyne.bram, Jul 31, 2012 8:37 AM (in response to gadeyne.bram)
I still have this problem. What Seam component is responsible for this internationalisation? Maybe I could put a logger on it using a Debug, Fine or Finest log level?
Any ideas?
5. Re: Seam loses internationalization capabilities - gebuh, Aug 1, 2012 10:29 AM (in response to gadeyne.bram)
Bram, can you duplicate this? If so, what do you do to make it happen? I don't have any concrete ideas on what could be causing the problem, but I'm wondering if it's related to your server load.
6. Re: Seam loses internationalization capabilities - gadeyne.bram, Aug 1, 2012 1:21 PM (in response to gebuh)
Hi Beth,
Well, I'm not sure. I don't think server load could be the problem. Mostly there are only 6 concurrent users.
Could some external program cause this?
Our application is opened from another application that opens an IE browser as a tab in the program. I think it's IE version 7. The problem never occurred when I open it directly from IE or some other browser. But when it is opened from this external program, this sometimes happens. I cannot reproduce it, however, because I never know when it is going to happen.
I also know that this external program adds GET parameters to the URL. I don't think this would cause such a problem?
Kind regards Bram
7. Re: Seam loses internationalization capabilities - gadeyne.bram, Aug 23, 2012 4:17 AM (in response to gadeyne.bram)
I'm still facing this issue. Would anyone know how I could debug this? Maybe alter a logging setting or something? Does Seam use an external package for the internationalisation? The logging for the org.jboss.seam package does not mention anything about internationalisation.
8. Re: Seam looses internationalization capabilitiesMarek Novotny Aug 23, 2012 6:17 AM (in response to gadeyne.bram)
Hey Bram,
why don't you use configuration in components.xml?
For instance like this:
<international:locale-config
9. Re: Seam looses internationalization capabilitiesgadeyne.bram Aug 24, 2012 5:19 AM (in response to gadeyne.bram)
It seems like I have used jdk6 dependencies where seam 2.2 only supports jdk5.
I've received some feedback on the IRC channel. I'll try these things
-Change jdk6 dependencies back to jdk5
-Change the web_app version van 3.0 to 2.5
-Move back from AS 6 to AS 5
I've also added a messages.properties file next to messages_nl and messages_en as a default properties file.
10. Re: Seam looses internationalization capabilitiesMartin Kouba Aug 27, 2012 7:06 AM (in response to gadeyne.bram)
Hi Bram,
if you get the keys instead of the values it means that there is no value for the given key and locale. In Seam2/JSF app the current locale is derived from JSF calculated value (HTTP Accept-Language header + supported locales...) and from cookie with name "org.jboss.seam.core.Locale" if it exists - see SeamViewHandler#calculateLocale() and LocaleSelector. So maybe the cookie send along with the request is a problem...
There's not much logging in Seam i18n components however you can replace the original components and add your own logging.
I would try something like this...
@Scope(ScopeType.STATELESS) @BypassInterceptors @Name("org.jboss.seam.international.messagesFactory") @Install(precedence = DEPLOYMENT) public class CustomMessages { protected Map createMap() { return new AbstractMap() { ... @Override public String get(Object key) { if (key instanceof String) { String resourceKey = (String) key; String resource; try { resource = bundle.getString(resourceKey); } catch (MissingResourceException mre) { // Log missing resource logger.warn("Missing resource with key: "+mre.getKey+", locale: "+org.jboss.seam.core.Locale.instance());
return resourceKey; } return (resource == null) ? resourceKey : resource; } else { return null; } } ... } }
11. Re: Seam looses internationalization capabilitiesgadeyne.bram Aug 28, 2012 8:16 AM (in response to Martin Kouba)
Hi Martin,
Thanks for this solution. I'll give it a try. The translations do work and are available but at some point they just disapear. It's like some process just stops.
12. Re: Seam looses internationalization capabilitiesgadeyne.bram Aug 28, 2012 8:18 AM (in response to gadeyne.bram)
I've tried
-pulling down the jdk version from 6 to 5
-using AS5 server in stead of AS6.
-downgrading the web_app and ejb versions to support jdk5.
This morning however the same issue occured on this new setting. Again without any notice in the log files.
13. Re: Seam looses internationalization capabilitiesgadeyne.bram Aug 29, 2012 2:58 AM (in response to gadeyne.bram)
Like Martin suggested I've added logging to a CustomMessages component. My CustomMessages extends the org.jboss.seam.international.Messages class.
I've writen override methods for createMap and getMessages.
Here I've added logging that checks the requested Locale and counts the messages in the Map generated by org.jboss.seam.international.Messages.
When this problem occurs the count is only 72 while otherwise there are 497 entry's in the map.
I've notices that the org.jboss.seam.international.Messages class uses the SeamResourceBundle class
Those are probably 16 messages from org.hibernate.validator.DefaulValidatorMessages and 56 messages from javax.faces.messages.properties.
Any ideas on what could go wrong?
14. Re: Seam looses internationalization capabilitiesMartin Kouba Aug 29, 2012 5:56 AM (in response to gadeyne.bram)
And I suppose the requested locale is correct, is it? Messages use SeamResourceBundle and in the end java.util.ResourceBundle is used (see ResourceLoader#loadBundle()). Maybe there is some classloading issue. What is the structure of your deployment? (EAR, WAR, etc.) | https://community.jboss.org/message/739196 | CC-MAIN-2015-27 | refinedweb | 1,251 | 53.37 |
hello i have been trying to write a code that allows me to exit a program at any point during that program by typing the word exit.
you will have to bare with me i am extremely new to this.
this is what i have come up with so far, any suggestions on why this is not working?this is what i have come up with so far, any suggestions on why this is not working?Code:
#include <stdio.h>
int main(void)
{
char x[20];
char person[20];
printf("Please type your name: ");
scanf("%s", person);
if ((person) == "x")
{
printf();
exit(1);
}
printf("After reading the January 1, issue of that demonstrated the Altair 8800, %s called the creators of the new\n", person);
} | http://cboard.cprogramming.com/c-programming/99791-exit-program-any-point-printable-thread.html | CC-MAIN-2015-48 | refinedweb | 125 | 77.57 |
14175/capture-events-create-assets-transaction-processing-functions
I am working on a POC and do not want to write any specific transaction processing functions. Created assets, participants etc. and all, so the model is ready. Generated rest api using hyperledger composer-rest-server. The frontend is developed in simple html/javascript. the problem is that i need events also available whenver i CRUD using composer generated APIs, but not able to figure out how. IS it that to capture events, we need to create assets using transaction processing functions only and not via composer rest server apis - a little novice kinda question but i am stuck in this thought.
I think you have figured it by now, but here's the answer for the rest of us: you can only generate events from your chaincode, and every event has to be described in your model.
Actually, we don't need Metamask. To make ...READ MORE
Hyperledger has evolved a lot. The updated ...READ MORE
Here you go:
var options = Web3Options.defaultOptions()
options.gasLimit = ...READ MORE
var options = Web3Options.defaultOptions()
options.gasLimit = BigUInt(21000)
options.from = ...READ MORE
Summary: Both should provide similar reliability of ...READ MORE
This will solve your problem
import org.apache.commons.codec.binary.Hex;
Transaction txn ...READ MORE
This was a bug. They've fixed it. ...READ MORE
For tracking changes you may simply save ...READ MORE
You could start like this:
method = POST
URL ...READ MORE
OR | https://www.edureka.co/community/14175/capture-events-create-assets-transaction-processing-functions | CC-MAIN-2019-39 | refinedweb | 243 | 60.92 |
Searching Active Directory with Windows PowerShell
Searching Active Directory with Windows PowerShell
At heart, Active Directory is nothing more than a database (a Jet database, to be exact). Big deal, you say? Well, as a matter of fact, it is a big deal: the fact that Active Directory is a database means that you can use scripts to search Active Directory. Need a list of all your user accounts? Write an Active Directory search script. Need a list of all your computer accounts? Write an Active Directory search script. Need a list of all your color printers or all your contacts from the Fabrikam Corporation? Write an Active Directory– well, you know how it goes by now.
Of course, it’s one thing to suggest that someone write an Active Directory search script; it’s quite another thing to actually sit down and write that Active Directory search script. That’s not because these scripts are hard to write; it’s because it’s very difficult to find documentation and examples that show you how to write Active Directory search scripts using Windows PowerShell.
Well, check that: it used to be very difficult to find documentation and examples that show you how to write Active Directory search scripts using Windows PowerShell.
The purpose of this article is straightforward: combined with 100+ sample scripts recently added to the Script Center Script Repository, this article provides an introduction to the fine art of writing Active Directory search scripts using Windows PowerShell. Does this article contain everything you’ll ever need to know about writing Active Directory search scripts? Probably not. But it does include enough information to help you get started.
By the way, these days all the excitement in the PowerShell world revolves around PowerShell 2.0 and the November 2007 Community Technology Preview release. Because of that, we thought it was important to stress that the ability to writes scripts that search Active Directory does not require PowerShell 2.0. All the sample code you’ll see today works equally well on both versions of PowerShell. If you’ve got either version of Windows PowerShell (1.0 or 2.0) installed then you’re ready to write Active Directory search scripts.
Writing Active Directory Search Scripts
OK, now that you’re ready, how do you write Active Directory search scripts? To tell you the truth, there are probably several different ways you could go about this task. But here’s how the Scripting Guys write Active Directory search scripts:
$strFilter = "(&(objectCategory=User)(Department=Finance))" }
So how does this script work? Don’t worry; we’re going to go over this script inch-by-inch and line-by-line … except for the first line; it will be a few minutes before we get to that (but we will get to that, promise). Instead, let’s start with line 2:
Technically, what we’re doing here is creating an instance of the System.DirectoryServices.DirectoryEntry class; this class represents an object in Active Directory. In more practical terms, what we’re doing is identifying the Active Directory location where we want the search to begin. As you might have noticed, however, we didn’t specify an Active Directory location; there’s not an ADsPath to be found here. But that’s OK; if we create a DirectoryEntry object without any additional parameters we’ll automatically be bound to the root of the current domain. You say you don’t want your search to start in the domain root? That’s fine; in that case, go ahead and include the ADsPath of the desired start location when creating the object. For example, this line of code binds us to the Finance OU rather than the domain root:
In turn, that means our search will start in the Finance OU rather than the domain root. (A good thing to know if you want to search for objects in just the Finance OU and its child OUs.)
After we have a DirectoryEntry object (and a starting location for our search) we use this line of code to create an instance of the System.DirectoryServices.DirectorySearcher class:
As you probably guessed, this is the object that actually performs an Active Directory search. What if you didn’t guess that? Well, don’t worry about it; now you know.
Before we can begin using our DirectorySearcher object we need to assign values to several different properties of this object:
The SearchRoot tells the DirectorySearcher where to begin its search. As you might recall, back in line 2 we connected to the domain root, something that occurred when we created a DirectoryEntry object named $objDomain. Thanks to that, we can simply assign the value of $objDomain to the SearchRoot property. Oh, and before you ask, no, we can’t do something along the lines of this:
Why not? Because the SearchRoot property will only accept an instance of the DirectoryEntry class; it won’t accept a string value.
Which, needless to say, seems like good enough reason to assign a DirectoryEntry object to SearchRoot.
Next we assign the value 1000 to the PageSize property. By default, an Active Directory search returns only 1000 items; if your domain includes 1001 items that last item will not be returned. The way to get around that issue is to assign a value to the PageSize property. When you do that, your search script will return (in this case) the first 1,000 items, pause for a split second, then return the next 1,000. This process will continue until all the items meeting the search criteria have been returned.
After taking care of the PageSize we next assign a value to the Filter property. We actually defined our filter in line 1, but told you we’d discuss this line later. And now we’re going to tell you that again: we’ll discuss this line later. For now, we’ll simply note that the Filter property is the spot where we define our search criteria; that’s where we tell the script exactly what to search for. Although it might not look like it, the following filter retrieves a collection of all the users in the Finance department:
But, again, we’ll talk about this later.
That’s a good question: Why didn’t we use a SQL query along the lines of this:
Well, we have a pretty good reason for that: the Filter property won’t accept a SQL query. Instead, we have to assign this property an LDAP search property. But that’s something that – um, that’s right, that’s something we’ll talk about later.
Finally, we assign the string Subtree to the SearchScope property. Subtree is the default value for search scripts, which means we didn’t actually have to set the SearchScope to Subtree. Instead, we did this just so we’d have an excuse to discuss the search scope.
OK, so then what is the search scope? Well, if the search root determines the location where a search will begin, the search scope determines how much of that location will be searched. As it turns out, there are three different search scopes you can use when searching Active Directory:
What does all that mean? Well, suppose we have an Active Directory which has a domain root, a few OUs, and then, under one of those OUs, a couple of child OUs. In other words, suppose we have an Active Directory that looks like this:
Let’s further suppose that we define our search root as the domain root and set the SearchScope to Base. What will we end up searching? As shown below, we’ll search only the domain root; we won’t even look at the OUs and child OUs:
OK, what about a OneLevel search targeted at the domain root? In that case we won’t search the root at all, nor will we search the child OUs. Instead, we’ll search only the immediate child objects of the target object. You know, like this:
That leaves us with the Subtree search, which searches an object and all of its child objects (and their child objects, and their child-child objects, and ….). Want to search an entire domain? Then start the search in the domain root, and set the SearchScope to Subtree:
By the way, setting the SearchScope to Subtree is half the equation for searching only an OU and its child OUs; the other half is to create a DirectoryEntry object that binds you to that particular OU. (Remember when we did that with the Finance OU?) Do both of those things and you’ll be able to search just an OU and its child OUs:
Or, set the SearchScope to Base and search just the OU itself, sidestepping any child OUs.
But enough about that; let’s get back to the script, and to this little block of code:
These two lines are where we define the properties we want returned when we conduct our search. In the first line we create an array named $colProplist, an array that contains each attribute we want the search to return. In this simple example we’re interested in only one attribute (Name); hence we assign only a single value to $colProplist. What if we wanted to retrieve more than one attribute value? Then we’d assign more than one attribute to $colProplist:
This, by the way, is standard Windows PowerShell syntax for creating an array of string values: we simply assign each value to the array, enclosing individual values in double quote marks and separating each value by a comma. In other words, if you’re a PowerShell user (and we assume you are), this is something you’ve probably done a million times by now.
In the second line, we set up a foreach loop to loop through each of the attributes in $colProplist. For each of these attributes we call the Add method to add the attribute to the DirectorySearcher’s PropertiesToLoad property. The attributes assigned to PropertiesToLoad are the only attributes that will be returned by our search. Suppose we assign name, jobTitle, and telephoneNumber to the PropertiesToLoad property. If we then try to echo back the value of the homeDirectory attribute that will trigger an error; that’s because the homeDirectory attribute won’t be included in the collection of information returned by the script.
We should probably point out that you don’t have to assign anything to the PropertiesToLoad property; you could simply leave these two lines of code out and you could still conduct a successful search of Active Directory. So why didn’t we just do that and be done with it? Well, if you don’t assign anything to PropertiesToLoad your search script will return all the attribute values for whatever it is you’re searching for. Suppose you have a search script that returns a collection of all the users in your domain. Each Active Directory user account has more than 200 attributes associated with it; if you have several thousand users in your domain that’s going to be a ton of data streaming across your network. In turn, that will: 1) clog up the network; 2) bog down the domain controller performing the search; and 3) slow your script down considerably. If you don’t need all the attribute values then there’s no reason to retrieve all the attribute values. Instead, assign only the attributes you do need to the PropertiesToLoad property.
Searching Active Directory
Yes, it did take awhile to explain all the ins and outs of setting up an Active Directory search, didn’t it? But that’s all behind us now; at long last, we’re ready to conduct a search:
As you can see, once you’ve configured the DirectorySearcher object actually carrying out a search is as easy as calling the FindAll method. Call that one method in that one line of code and your script will go out, search the requested portion of Active Directory, retrieve the requested attributes for the requested objects, and then store that information in a variable (in this case, a variable named $colResults).
Whew!
But wait, don’t relax just yet: we still need to display our results to the screen. Admittedly, we could simply echo back the value of $colResults, like this:
So why don’t we do that? Well, as you can see, the resulting output can be a little difficult to deal with:
That’s why we decided to use this block of code to echo back our search results:
All we’re doing is setting up a foreach loop to loop through each record in our recordset. For each of these records we use this command to grab the returned attribute values and assign them to a variable named $objItem:
And then we simply echo back the value of $objItem’s name attribute:
What if our script returned more than one attribute value? No problem; in that case we just echo back each of those values, like so:
Or, to make it easier to identify which value is which:
That gives us output similar to this:
And that’s all you need to know in order to use Windows PowerShell to search Active Directory!
Writing LDAP Filters
Oh, that’s right; we almost forgot. Turns out you still need to know one more thing before you can start writing Active Directory search scripts in Windows PowerShell: you also need to know how to write LDAP search filters. Let’s see if we can figure that out, too.
First, however, let’s refresh our memory by taking another look at the search filter we defined in our very first line of code:
This particular search filter combines two criteria: it searches for everything that has an objectCategory equal to User and a Department equal to Finance. We’ll explain how to combine criteria in a moment. Before we do that, however, let’s examine a simpler filter, one that searches for all user accounts regardless of the Department that user belongs to:
This example (as well as the picture below) illustrates the three parts of an LDAP search filter:
To begin with, note that the entire search filter must be enclosed within parentheses; that’s important. Therefore, when you sit down to write an LDAP search filter you might as well start with a set of parentheses:
Inside those parentheses we then specify the attribute we want to filter on. In our simple filter example we’re filtering on the objectCategory attribute; thus we put the name of that attribute in our filter:
Next we put in the operator. The DirectorySearcher Filter property allows us to use any of the following operators:
Notice there is no “not equal to” operator (e.g., <>). But don’t worry; before we go we’ll explain how to write a “not equal to” filter.
Hey, we’d never forgive ourselves if we didn’t.
As we noted, we’re looking for all objects where the objectCategory is equal to User; that means our filter now looks like this:
Yes, it does look a little crowded in there, doesn’t it? But, whatever you do, resist the temptation to make the filter look “prettier” by inserting a blank space between the attribute and the operator. Why? Because a “pretty” filter like this one will fail:
Why does it fail? You got it: because we used blank spaces to separate the attribute, operator, and value. Don’t be like the Scripting Guys: make sure that you don’t put blank spaces between the attribute, operator, and value.
Speaking of value, that’s what comes next:
As you can see, there’s no need to put double quotes around the value. That’s true even if your value includes blank spaces (and, yes, blank spaces are allowed in the value). Need to search for a user with a Name equal to Ken Myer? No problem:
By the way, you can also use the asterisk as a wildcard character when specifying the value. Want to search for all the users who name starts with Ken? This filter should do the trick:
Meanwhile, this filter searches for all the users who have a Name value of some kind:
See? These LDAP filters look weird. And to be honest, they are weird. But at least they’re easy enough to write.
"Special" Filters
Before we call it a day let’s take a quick look at three special types of filters:
AND filters
OR filters
NOT filter
You might not have realized it at the time, but the very first line of code we showed you in this article was an example of one of these special filters (an AND filter). Remember this line:
What we’re doing here is creating a filter that returns object that meet two criteria: the objectCategory must be equal to User and the Department must be equal to Finance. Note the syntax used to create this filter. As usual, the entire filter is enclosed in parentheses. We then have an ampersand (&); in the exciting world of LDAP filters, this symbol indicates that we want to create an AND filter. And then we simply have the two criteria, with each item enclosed in a set of parentheses. To make this a little easier to visualize, picture the query as being written like this:
See how that works? What if want to return objects that meet three criteria? That’s no problem; we just need to add the third item to the query:
Or, the way we’d type it in a script:
The OR filter is similar; we simply substitute the pipe separator (|) for the ampersand:
In this case, we’ll get back all objects where the Department is equal to Finance or where the Department is equal to Research. See the difference? In an AND filter we need to meet all the criteria; in an OR filter we simply need to meet any one of the specified criteria.
We can even combine these filter types to create a very finely-targeted filter. For example, this filter returns all the user accounts (objectCategory=User), provided that the user is a member of either the Finance or the Research department:
Or, to again put it a little more visually:
As you can see, with this filter we have both an AND filter and an OR filter. The AND filter looks like this, and specifies that we must have a user who comes from a specific department:
And then we have the OR filter, which indicates which department:
Don’t feel bad; it is a little confusing, especially at first. But eventually you’ll learn to read and write these LDAP filters almost as easily as you read and write SQL queries.
Last, but hardly least, we have the NOT filter. What do you suppose this filter does?
We won’t keep you in suspense: this filter returns all objects that are not user account objects. (The exclamation point – ! – indicates a NOT filter.) Need a list of users who do not have a telephone number? This filter should do the trick:
For more information, take a peek at the LDAP Filter Syntax on MSDN.
That’s All For Now
Like we said, this isn’t necessarily everything you need to know about writing Active Directory search scripts, but it should be enough to get you started. Let us know if you have additional questions, and we’ll see what we can do about addressing those issues sometime in the near future. | https://technet.microsoft.com/en-us/library/ff730967.aspx | CC-MAIN-2017-43 | refinedweb | 3,281 | 67.18 |
Python provides a few special methods to manipulate generators!
The
.send() method allows us to send a value to a generator using the
yield expression. If you assign
yield to a variable the argument passed to the
.send() method will be assigned to that variable. Calling
.send() will also cause the generator to perform an iteration.
Look at the following example to see the behavior of the
.send() method:
def count_generator(): while True: n = yield print(n) my_generator = count_generator() next(my_generator) # 1st Iteration Output: next(my_generator) # 2nd Iteration Output: None my_generator.send(3) # 3rd Iteration Output: 3 next(my_generator) # 4th Iteration Output: None
In the code example above, the generator definition contains the line
n = yield. This assigns the value in
yield to
n which will be
None unless a value is passed using
.send().
The last 4 lines in the code are 4 iterations, 3 using
next() and one using the
.send() method:
- The 1st iteration creates no output since the execution stops at
n = yieldwhich is before
print(n).
- The 2nd iteration assigns
Noneto
nthrough the
n = yieldexpression.
Noneis printed.
- The 3rd iteration is caused by
my_generator.send(3). The value
3is passed through
yieldand assigned to
n.
3is printed.
- The last, and 4th, iteration, assigns
Noneto
n.
Noneis printed.
The
.send() method can control the value of the generator when a second variable is introduced. One variable holds the iteration value and the other holds the value passed through
yield.
def generator(): count = 0 while True: n = yield count if n is not None: count = n count += 1 my_generator = generator() print(next(my_generator)) # Output: 0 print(next(my_generator)) # Output: 1 print(my_generator.send(3)) # Output: 4 print(next(my_generator)) # Output: 5
In the above example, the generator function defines
count = 0 as the iteration value.
n is used to hold the value provided by
yield. Just like
next(), the
.send() method returns the value of the recent iteration. In this example, the return values are printed using
print().
The updated line,
n = yield count, has 2 behaviors:
- At the start of each iteration the value provided by
yieldis assigned to
n. This value will be
Nonewhen
next()causes an iteration or it will be equal to the value passed using
.send()
- At the end of each iteration, the value stored in
countis returned by the generator.
If
n is not None the value stored in
n can be assigned to the iterator variable,
count. This allows the iterator to only change the value of
count when the
.send() method is called.
Instructions
You are a teacher with a roster of 50 students. You have created a generator,
get_student_ids(), that outputs each student’s id which you then use for assignment grading.
Things to note about the code in the workspace:
MAX_STUDENTSis set to 50 and is used in the
whileloop condition to cutoff the iteration.
student_idis initialized to
1and is incremented at the bottom of the
whileloop.
- The generator currently uses
yieldto return
student_idat the end of each iteration.
- A
forloop at the bottom of the code iterates through the generator object
student_id_generatorand outputs each id.
Run the code to see all 50 ids printed.
When you are interrupted while grading, you need to pick up where you left off! This requires you to start the id generation at a number higher than
1. One way to solve this problem is to change the generator to support the
.send() method.
Inside
get_student_ids():
- Change the
yieldexpression so the value from
yieldis assigned to
n.
- Just below the
yieldexpression check that
nis not equal to
None. If they are not equal, assign the value of
nto
student_id.
- Still inside the
ifstatement, stop
student_idfrom incrementing by skipping the rest of the iteration.
When you run the code, you should see no change.
To start the iteration at a different id, you want to send the generator a new value during the first iteration.
Inside the
for loop and before
print(i):
- Check if
iis equal to the first id number,
1.
- If so, set
ito the return value of the
student_id_generator.send()method.
- Set the argument for the
.send()method so the output starts at
25. | https://www.codecademy.com/courses/learn-intermediate-python-3/lessons/int-python-generators/exercises/generator-methods | CC-MAIN-2022-05 | refinedweb | 696 | 57.57 |
lenders directly, usually via online auctions. The loans issued often comprise).
Elsewhere, returns (and risks) are higher. IsePankur, which lends to more than 60,000 people in four euro-zone countries, pays its lenders (who include your correspondent) a stonking 21.45% average net return (after a 3% default rate). Its typical borrowers do not flinch at rates of up to 28%: they are refinancing far costlier credit-card debt and doorstep loans.
Peer-to-peer lending is growing fast in many countries. In Britain, loan volumes are doubling every six months. They have just passed the £1 billion mark ($1.7 billion), though this is tiny against the country’s £1.2 trillion in retail deposits. In America, the two largest P2P lenders, Lending Club and Prosper, have 98% of the market. They issued $2.4 billion in loans in 2013, up from $871m in 2012. The minnows are doing even better, though they are growing from a much lower base.
Neil Bindoff of PwC, a professional-services firm, speaks of a “perfect storm” supporting P2P’s growth. Interest rates are close to zero, the public is fed up with banks, costs are low (one third of a typical bank’s, according to Renaud Laplanche of Lending Club), and e-commerce is becoming part of daily life. People use the internet for peer-to-peer telephony (Skype) and shopping (eBay), so why not loans?
Awareness is still low—a survey by pwc found only 15%st. The Financial Conduct Authority will issue the new rules imminently. In America, people saving for retirement can apply tax breaks to their loans, and offset their losses against profits. Britain’s P2P industry is awaiting a decision to extend tax-free savings schemes to its lenders.
Regulation to the rescue
Regulation should help forestall a big worry: that an ill-run platform might collapse, taking investors’ money with it. At a conference organised by the P2P Finance Association, a trade body, this week, executives were worried about the risks of a “Bitcoin-style bust” that could rattle confidence in the nascent industry. New rules are likely to insist that P2P businesses ringfence unlent funds gathered from savers and arrange for third parties to manage outstanding loans if they cease trading.
Other big questions abound. One is insurance. Funds placed with P2P lenders are not covered by the state-backed guarantees that protect retail deposits in banks. Some platforms offer something of a substitute. Zopa and most other British companies have started “provision funds”, which aim (but do not promise) to make good on loans that sour. These smooth the risk for lenders, but blunt the original P2P concept. So too does insurance: Ron Suber of Prosper, America’s second-biggest lender, says “deep actuarial conversations” are going on with outsiders who would like to help lenders provide for the risk that their borrower defaults, dies, or loses his job. Purists fear such arrangements could recreate the moral hazard that has plagued conventional banking.
The boom in cross-border P2P raises tricky legal questions. The European Commission has yet to get to grips with the industry. National rules often determine how credit is issued and debts are collected. But they offer little help when the money comes from hundreds of lenders in dozens of countries. Yield-chasing foreigners, private and institutional, are investing heavily in the American market.
Only a third of the money coming to Lending Club is now from retail investors: the rest (the fastest-growing slice) comes from rich people and institutions. Should such big investors get a better deal—such as getting their pick of the best loans on offer? In Britain, Giles Andrews of Zopa regards the idea as anathema: all savers should be treated equally. Some others think big lenders will eventually dominate P2P.
P2P also ends the dangerous mismatch between short-term deposits and long-term loans inherent in conventional banking—but generally by locking lenders in for the loan’s duration. A secondary market in P2P loans is developing fast. This allows investors to get their money back if they need it, usually by selling the loans at a discount. But rules vary: some platforms will buy back the loans; others just hold an auction.
P2P is not complicated: success largely depends on marketing oomph, the quality of the algorithms used to screen borrowers and ease of use (P2P platforms are scrambling to develop apps for smartphones and tablets). P2P may attract big outsiders, such as banks, or internet companies which already have lots of data about their customers and (like Facebook) are good at connecting them. Google last year led a $125m investment in Lending Club, valuing it at $1.55 billion. It might well want more.
Excerpts from the print edition & blogs »
Editor's Highlights, The World This Week, and more » | http://www.economist.com/news/finance-and-economics/21597932-offering-both-borrowers-and-lenders-better-deal-websites-put-two | CC-MAIN-2014-42 | refinedweb | 807 | 64.61 |
Deno is a simple, modern, and secure runtime for JavaScript and TypeScript that uses V8 and is built in Rust. Recently Deno 1.0.5 was released, which is a stable version of the runtime. This post is the first in a series delineating the runtime.
Deno is not that new, as it was first announced in 2018, but it is starting to gain traction, so I thought now would be a perfect time to write about it, considering it could become the next big thing for JavaScript developers.
However, that doesnt mean Node.js will be swept under the rug. Be cautious about people saying Node.js is dead, or Deno is here to replace it entirely. I don’t buy that opinion. Ryan Dahl, the creator of Deno and Node.js, said this in a 2019 conference and I quote: “Node.js isn’t going anywhere.” He also added, “Deno isn’t ready for production yet.”
In this post, we will be discussing Deno’s installation, fundamentals, features, standard library, etc. Everything you will learn here is enough for you to join the Deno train and enjoy what it promises JavaScript developers.
With that said, let’s dive right into the big question: What is Deno? Deno is a runtime for JavaScript and TypeScript based on the V8 JavaScript engine and the Rust programming language. It was created by Ryan Dahl, the original creator of Node.js, and is focused on productivity. It was announced by Dahl in 2018 during his talk “10 Things I Regret About Node.js.”
When I first found out about Deno and the fact that it was created by the creator of Node.js, I had this feeling there must be a significant change, especially in design, so I think we should start going through some interesting features Deno introduced.
This is a list of few of Deno’s features:
Modern JavaScript: Node.js was created in 2009, and since then JavaScript has gotten a lot of updates and improvements. So Deno, as expected, takes advantage of more modern JavaScript.
Top-level await: Normally, when using async/await in Node.js, you have to wrap your awaits inside of an asynchronous function, and you have to label it async. Deno makes it possible to use the await function in the global scope without having to wrap it inside an async function, which is a great feature.
Typescript support out of the box: This is my second favorite feature—there is nothing more fun than having a little more control over your types in projects. This is the reason why I started building most of my projects in Go.
Built-in testing: Deno has a built-in test runner that you can use for testing JavaScript or TypeScript code.
A single executable file: If you have used Golang, the idea of shipping just a single executable file will be familiar. This is now present in JavaScript with the help of Deno. So say bye to downloading hundreds of files to set up your development environment.
Redesigned module system: This is my favorite feature:, Deno has no package.json file, nor huge collections of node_modules. It has its package manager shipped in the same executable, fetching all the resources for you. Modules are loaded onto the application using URLs. This helps to remove the dependency on a centralized registry like npm for Node.js.
Security: With Deno, a developer can provide permission to scripts using flags like --allow-net and --allow-write. Deno offers a sandbox security layer through permissions. A program can only access the permissions set to the executable as flagged by the user. You’re probably asking yourself, “How will I know which flags I have to add to execute the server?” Don’t worry; you will get a message in the console log asking you to add a given flag. Here is a list of the flags:
-)
No, but this is what I have to say about this constant comparison between Node and Deno. I think you should have an open mind, follow along with the post and get a first-hand experience. In the end, come to your conclusion which one better suits your style. One thing is sure, Deno will get to Node.js level with the attention it is getting recently, and it will be a Node.js successor.
“For some applications, Deno may be a good choice today, for others not yet. It will depend on the requirements. We want to be transparent about these limitations to help people make informed decisions when considering to use Deno.” - Ryan Dahl.
If you already know Node.js and you love TypeScript, or you know any other server-side language, I will give you a big go- ahead. But, if you are starting out learning server-side programming and you want to use JavaScript, I will advise you to learn Node.js first before learning Deno — that way, you will appreciate Deno even more.
Deno ships with a set of standard libraries that is audited by the core team, for example, http, server, fs, etc. And the modules, as stated earlier, are imported using URLs, which is super cool. A module can be imported, as shown below:
import { serve } from ""
Here is a list of Deno standard libraries:
There are couple of ways to get Deno installed on your machine.
Using shell (macOS & Linux):
$ curl -fsSL | sh
Using PowerShell (Windows):
$ iwr -useb | iex
$ scoop install deno
Using Chocolatey (Windows):
$ choco install deno
$ brew install deno
Using Cargo (Windows, macOS, Linux):
$ cargo install deno
I’m using Windows so I installed mine using PowerShell:
PS C:\Users\Codak> iwr -useb | iex Deno was installed successfully to C:\Users\Codak\.deno\bin\deno.exe Run 'deno --help' to get started
To access the
deno command, here’s the support that you can get using
deno --help:
PS C:\Users\Codak> deno --help deno 1.0.1 A secure JavaScript and TypeScript runtime Docs: Modules: Bugs: To start the REPL: deno To execute a script: deno run To evaluate code in the shell: deno eval "console.log(30933 + 404)" given version ENVIRONMENT VARIABLES: DENO_DIR Set deno's base directory (defaults to $HOME/.deno) DENO_INSTALL_ROOT Set deno install output directory (defaults to $HOME/.deno/bin) NO_COLOR Set to disable color HTTP_PROXY Proxy address for HTTP requests (module downloads, fetch) HTTPS_PROXY Same but for HTTPS
The SUBCOMMANDS section lists the commands we can run. You can run
deno <subcommand> help to get specific additional documentation for the command, for example
deno bundle --help.
We can access the REPL (Read Evaluate Print Loop) using the command deno. While in the REPL, we can write regular JavaScript, for example, to add numbers or assign a value to a variable and print the value:
$ deno Deno 1.0.0 Exit using ctrl+c or close() > 1+1 2 > const x = 100 undefined > x 100 >
Let’s touch on two important commands in the SUBCOMMANDS section:
1. Run command
The
run command is used to run a script, whether local or using a URL. To showcase an example, we are going to run a script URL found in Deno’s standard library example section on the Deno official website called
welcome.ts.
$ deno run Download Warning Implicitly using master branch Compile >> Welcome to Deno 🦕
The output of the script is
Welcome to Deno. You can take a look at the code that gets executed by opening the URL we passed to
run in the browser.
Let’s run another example that will throw an error if we don’t add permission. If you remember earlier, we talked about Deno’s security, and how we need to add flags to give access to the scripts because Deno runs every script in a sandbox.
$ deno run Download Compile error: Uncaught PermissionDenied: read access to "C:\Users\Codak", run again with the --allow-read flag at unwrapResponse ($deno$/ops/dispatch_json.ts:43:11) at Object.sendSync ($deno$/ops/dispatch_json.ts:72:10) at cwd ($deno$/ops/fs/dir.ts:5:10) at Module.resolve () at
Let’s add the required flags and rerun the code. The flags are added immediately after
deno run.
$ deno run --allow-read --allow-net >> HTTP server listening on
Now that our file_server script is running perfectly, you can test it with
localhost:4507/.
2. Install command
The
install command is used to install script as an executable. We are going to use the
file_server script we ran earlier, but this time we are going to install it.
$ deno install --allow-read --allow-net Warning Implicitly using master branch Download Compile >> ✅ Successfully installed file_server C:\Users\<USERNAME>\.deno\bin\file_server.cmd
The file will be downloaded and saved in my base directory
C:\Users\username\.deno\bin\file_server.cmd. If you are on a mac it can be found at
/Users/username/.deno/bin``/file_server .
To run the file, navigate to the base directory folder and run the name of the file
file_server and the server will start up.
$ C:\Users\Codak\.deno\bin> file_server >> HTTP server listening on
It will also work if you just run
file_server without navigating to the parent folder:
$ file_server
As a Go developer, I love the
go fmt command used to automatically format Go code; with Node.js, you probably use a third-party package like Beautify or Prettier. Deno ships with a
deno fmt command, just like Go, that automatically formats the script and adds semicolons if omitted anywhere.
$ deno fmt index.ts
So far, we have touched some important aspects of Deno to get an overview of what Deno has to offer. I will leave a couple of resources here for further reading:
Again, if you’re already a server-side developer, I strongly recommend checking out Deno so you can see for yourself what you think. If you’re new to server-side, maybe start first with Node.js so you have a better understanding of what the Deno experience will mean for you.
Chinedu is a tech enthusiast focused on full-stack JavaScript and Infrastructure engineering. | https://www.telerik.com/blogs/how-to-get-started-with-deno | CC-MAIN-2022-05 | refinedweb | 1,688 | 63.7 |
Firstly see the code:
Code:#include<iostream> #include<conio.h> using namespace std; typedef struct temp { int data; } T; void setframe2( T **tp) { (*tp)->data = 2; } void setframe3( T *tp) { tp->data = 3; } void setframe( T *tp) { tp->data = 1; setframe2(&tp); setframe3(tp); } void main() { T tvar; tvar.data=0; cout<<"\nValue of data before call to setframe "<<tvar.data; setframe(&tvar); cout<<"\nValue of data after call to setframe "<<tvar.data; getch(); } OUTPUT Value of data before call to setframe 0 Value of data after call to setframe 3
In setframe i have T *tp
when calling to setframe2, i am passing its address
setframe2(&tp);
and in setframe2 , tp->data has been changed
and change is permanant as it should be.
but when calling to setframe3, i am passing just pointer , not its address
setframe3(tp);
and in setframe3 , tp->data has been changed
and change is permanant here too , why??
why??
i mean i am not passing address of pointer
so why changes made are permanant, not local . | https://cboard.cprogramming.com/cplusplus-programming/69229-should-i-pass-address-pointer-just-pointer.html | CC-MAIN-2017-13 | refinedweb | 172 | 57.91 |
A question - or comment - on instance variables.
Looking at ‘:symbols’ today, I came across this example:
class Test
puts :Test.object_id.to_s
def test
puts :test.object_id.to_s
@test = 10
puts :test.object_id.to_s
end
end
t = Test.new
t.test
So far, my understanding was that both methods and variables used lower
case names, and that methods could be written both with and without
‘()’. To me that then made the t.test line a bit confusing, as it
appears to invoke the method test on t. How then could you ever access
the instance variable test of t?
Am I then also right in assuming that in Ruby there aren’t any true
visible instance variables, as to make the variable visible, you have to
include an accessor - which effectively creates the getter/setter of
that variable?
So if I changed the above code to have an instance variable of ‘value’,
with an accessor, I could then have:
and then access t.value - giving me 10. BUT t.value is then really a
call to the ‘generated’ getter of value - so is really a method call?
Do I have that about right?
And in the above case, where a ‘variable’ and method have the same name,
is there a way to force the use of a specific ‘setter/getter’ - ie to
have t.test return the value of test, rather than call the method test
on t? | https://www.ruby-forum.com/t/novice-understanding-instance-variables-and-methods/224795 | CC-MAIN-2022-40 | refinedweb | 238 | 74.29 |
In this post, we are going to learn how to explore data using Python, Pandas, and Seaborn. The data we are going to explore is data from a Wikipedia article. In this post, we are actually going to learn how to parse data from a URL using Python Pandas. Furthermore, we are going to explore the scraped data by grouping it and by Python data visualization. More specifically, we will learn how to count missing values, group data to calculate the mean, and then visualize relationships between two variables, among other things. Here, we will learn how to create a scatterplot with Seaborn.
In previous posts, we have used Pandas to import data from Excel and CSV files. In this post, however,.
Installing the Libraries
Before proceeding to the Pandas read_html example we are going to install the required libraries. In this post, we are going to use Pandas, Seaborn, NumPy, SciPy, and BeautifulSoup4. We are going to use Pandas to parse HTML and plotting, Seaborn for data visualization, NumPy and SciPy for some calculations, and BeautifulSoup4 as the parser for the read_html method.
Installing Anaconda is the absolutely easiest method to install all packages needed. If your Anaconda distribution you can open up your terminal and type: conda install <packagename>. That is, if you need to install all the packages:
conda install numpy scipy pandas seaborn beautifulsoup4
It’s also possible to install using Pip:
pip install numpy scipy pandas seaborn beautifulsoup4
In a more recent post, you can learn all about installing, using, and upgrading Python packages using both Pip and conda. Finally, sometimes when we install Python packages using pip we may get be noticed that we don’t have the latest version of pip:
If needed, we can, of course, upgrade pip using pip, conda, or anaconda.
How to Use Pandas read_html
In this section, we will work with Pandas read_html to parse data from a Wikipedia article. The article we are going to parse has 6 tables and there are some data we are going to explore in 5 of them. We are going to look at Scoville Heat Units and Pod size of different chili pepper species.
Now, in a more recent post, there is more information on how to scrape data from tables with Pandas read_html.
Pandas read_html example:
import pandas as pd url = '' data = pd.read_html(url, flavor='bs4', header=0, encoding='UTF8')
In the code above we are, as usual, starting by importing pandas. After that, we have a string variable (i.e., URL) that is pointing to the URL. We are then using Pandas read_html to parse the HTML from the URL. As with the read_csv and read_excel methods, the parameter header is used to tell Pandas read_html on which row the headers are. In this case, it’s the first row.
The parameter flavor is used, here, to make use of beatifulsoup4 as HTML parser. If we use LXML, some columns in the dataframe will be empty. Anyway, what we get is all tables from the URL. These tables are, in turn, stored in a list (data). In this Panda read_html example the last table is not of interest:
Thus we are going to remove this dataframe from the list:
# Let's remove the last table del data[-1]
Merging Pandas Dataframes
The aim of this post is to explore the data and what we need to do now is to add a column in each dataframe in the list. This column will have information about the species and we create a list with strings. In the following for-loop we are adding a new column, named “Species”, and we add the species name from the list.
species = ['Capsicum annum', 'Capsicum baccatum', 'Capsicum chinense', 'Capsicum frutescens', 'Capsicum pubescens'] for i in range(len(species)): data[i]['Species'] = species[i]
Finally, we are going to concatenate the list of dataframes using Pandas concat:
df = pd.concat(data, sort=False) df.head()
The data we obtained using Pandas read_html can, of course, be saved locally using either Pandas to_csv or to_excel, among other methods. See the two following tutorials on how to work with these methods and file formats:
How to Prepare Data Using Pandas
Now that we have used Pandas read_html and merged the dataframes we need to clean up the data a bit. We are going to use the method map together with lambda and regular expressions (i.e., sub, findall) to remove and extract certain things from the cells. We are also using the split and rstrip methods to split the strings into pieces. In this example, we want the centimeter values. Because of the missing values in the data we have to see if the value from a cell (x, in this case) is a string. If not, we will use NumPy’s NaN to code that it is a missing value.
# Remove brackets and whats between them (e.g. [14]) df['Name'] = df['Name'].map(lambda x: re.sub("[\(\[].*?[\)\]]", "", x) if isinstance(x, str) else np.NaN) # Pod Size get cm df['Pod size'] = df['Pod size'].map(lambda x: x.split(' ', 1)[0].rstrip('cm') if isinstance(x, str) else np.NaN) # Taking the largest number in a range and convert all values to float df['Pod size'] = df['Pod size'].map(lambda x: x.split('–', 1)[-1] if isinstance(x, str) else np.NaN) # Convert to float df['Pod size'] = df['Pod size'].map(lambda x: float(x)) # Taking the largest SHU df['Heat'] = df['Heat'].map(lambda x: re.sub("[\(\[].*?[\)\]]", "", x) if isinstance(x, str) else np.NaN) df['Heat'] = df['Heat'].str.replace(',', '') df['Heat'] = df['Heat'].map(lambda x: float(re.findall(r'\d+(?:,\d+)?', x)[-1]) if isinstance(x, str) else np.NaN)
Exploratory Data Analysis in Python
In this section we are going to explore the data using Pandas and Seaborn. First we are going to see how many missing values we have, count how many occurrences we have of one factor, and then group the data and calculate the mean values for the variables.
Counting Missing Values
First thing we are going to do is to count the number of missing values in the different columns. We are going to do this using the isna and sum methods:
df.isna().sum()
Later in the post, we are going to explore the relationship between the heat and the pod size of chili peppers. Note, there are a lot of missing data in both of these columns.
Counting Categorical Data in a Column
We can also count how many factors (or categorical data; i.e., strings) we have in a column by selecting that column and using the Pandas Series method value_counts:
df['Species'].value_counts()
Aggregating by Group
We can also calculate the mean Heat and Pod size for each species using Pandas groupby and mean methods:
df_aggregated = df.groupby('Species').mean().reset_index() df_aggregated
There are of course many other ways to explore your data using Pandas methods (e.g., value_counts, mean, groupby). See the posts Descriptive Statistics using Python and Data Manipulation with Pandas for more information.
Data Visualization using Pandas and Seaborn
In this section, we are going to visualize the data using Pandas and Seaborn. We are going to start to explore whether there is a relationship between the size of the chili pod (‘Pod size’) and the heat of the chili pepper (Scoville Heat Units).
More on Data Visualization using Python, Seaborn, and Pandas:
- How to Make a Scatter Plot in Python using Seaborn
- 9 Data Visualization Techniques You Should Learn in Python
Pandas Scatter Plot
In the first scatter plot, we are going to use Pandas built-in method ‘scatter’. In this basic example, we are going to have pod size on the x-axis and heat on the y-axis. We are also getting the blue points by using the parameter c.
ax1 = df.plot.scatter(x='Pod size', y='Heat', c='DarkBlue')
There seems to be a linear relationship between heat and pod size. However, we have an outlier in the data and the pattern may be more clear if we remove it. Thus, in the next Pandas scatter plot example we are going to subset the dataframe taking only values under 1,400,000 SHU:
ax1 = df.query('Heat < 1400000').plot.scatter(x='Pod size', y='Heat', c='DarkBlue', figsize=(8, 6))
We used pandas query to select the rows where the value in the column ‘Heat’ is lower than the preferred value. The resulting scatter plot shows a more convincing pattern:
We still have some possible outliers (around 300,000 – 35000 SHU) but we are going to leave them. Note that I used the parameter figsize=(8, 6) in both plots above to get the dimensions of the posted images. That is, if you want to change the dimensions of the Pandas plots you should use figsize.
Now we would like to plot a regression line on the Pandas scatter plot. As far as I know, this is not possible (please comment below if you know a solution and I will add it). Therefore, we are now going to use Seaborn to visualize data as it gives us more control and options over our graphics.
Data Visualization using Seaborn
In this section, we are going to continue exploring the data using the Python package Seaborn. We start with scatter plots and continue with
Seaborn Scatter Plot
Creating a scatter plot using Seaborn is very easy. In the basic scatter plot example below we are, as in the Pandas example, using the parameters x and y (x-axis and y-axis, respectively). However, we have to use the parameter data and our dataframe.
import seaborn as sns ax = sns.regplot(x="Pod size", y="Heat", data=df.query('Heat < 1400000'))
It’s also possible to change the size of a Seaborn plot, of course. For more about creating Scatter Plots in Python check this YouTube Video:
How to Carry out Correlation Analysis in Python
Judging from above there seems to be a relationship between the variables of interest. The next thing we are going to do is to see if this visual pattern also shows up as a statistical association (i.e., correlation). To this aim, we are going to use SciPy and the pearsonr method. We start by importing pearsonr from scipy.stats.
fromscipy.stats import pearsonr
As we found out when exploring the data using Pandas groupby there was a lot of missing data (both for heat and pod size). When calculating the correlation coefficient using Python we need to remove the missing values. Again, we are also removing the strongest chili pepper using Pandas query.
df_full = df[['Heat', 'Pod size']].dropna() df_full = df_full.query('Heat < 1400000') print(len(df_full)) # Output: 31
Note, in the example above we are selecting the columns “Heat” and “Pod size” only. If we want to keep the other variables but only have complete cases we can use the subset parameter (df_full = df.dropna(subset=[‘Heat’, ‘Pod size’])). That said, we now have a subset of our dataframe with 31 complete cases and it’s time to carry out the correlation. It’s quite simple, we just put in the variables of interest. We are going to display the correlation coefficient and p-value on the scatter plot later so we use NumPy’s round to round the values.
Python Correlation Example:
corr = pearsonr(df_full['Heat'], df_full['Pod size']) corr = [np.round(c, 2) for c in corr] print(corr) # Output: [-0.37, 0.04]
If we have a lot of variables we want to correlate, we can create a correlation matrix in Python using NumPy or Pandas.
Seaborn Correlation Plot with Trend Line
It’s time to stitch everything together! First, we are creating a text string for displaying the correlation coefficient (r=-0.37) and the p-value (p=0.04). Second, we are creating a correlation plot using Seaborn regplot, as in the previous example.
How to Add Text to a Seaborn Plot
To display the text we use the text method; the first parameter is the x coordinate and the second is the y coordinate. After the coordinates, we have our text and the size of the font. We are also using set_title to add a title to the Seaborn plot and we are changing the x- and y-labels using the set method.
text = 'r=%s, p=%s' % (corr[0], corr[1]) ax = sns.regplot(x="Pod size", y="Heat", data=df_full) ax.text(10, 300000, text, fontsize=12) ax.set_title('Capsicum') ax.set(xlabel='Pod size (cm)', ylabel='Scoville Heat Units (SHU)')
Pandas Bar graph Example
Now we are going to visualize some other aspects of the data. We are going to use the aggregated data (grouped by using Pandas groupby) to visualize the mean heat (Scoville) across species. We start by using Pandas plot method:
df_aggregated = df.groupby('Species').mean().reset_index() df_aggregated.plot.bar(x='Species', y='Heat')
In the image above, we can see that the mean heat is highest for the Capsicum Chinense species. However, the bar graph might hide important information (remember, the scatter plot revealed some outliers). We are therefore continuing with a categorical scatter plot using Seaborn.
Grouped Scatter Plot with Seaborn
Here, we don’t add that much compared to the previous Seaborn scatter plots examples. However, we need to rotate the tick labels on the x-axis using set_xticklabels and the parameter rotation.
ax = sns.catplot(x='Species', y='Heat', data=df) ax.set(xlabel='Capsicum Species', ylabel='Scoville Heat Units (SHU)') ax.set_xticklabels(rotation=70)
Finally, if we are going to write up the results from this explorative data analysis, we need to save the Seaborn (or Pandas) plots as high-resolution files. This can be done by using Matplotlib and pyplot.savefig().
Conclusion
Now we have learned how to explore data using Python, Pandas, NumPy, SciPy, and Seaborn. Specifically, we have learned how to use Pandas read_html to parse HTML from a URL, clean up the data in the columns (e.g., remove unwanted information), create scatter plots both in Pandas and Seaborn, visualize grouped data, and create categorical scatter plots in Seaborn. We have now an idea of how to change the axis ticks labels rotation, change the y- and x-axis labels, and adding a title to Seaborn plots.
| https://www.marsja.se/explorative-data-analysis-with-pandas-scipy-and-seaborn/ | CC-MAIN-2020-45 | refinedweb | 2,402 | 63.8 |
Build Objects With Interfaces
Being.
We are going to create an interface IHuman to build people objects from. Everyone can agree that all people have a First Name, Last Name, Age, and have some ability to speak. So we will wrap that up into a neat interface.
public interface IHuman { string fname { get; set; } string lname { get; set; } int age { get; set; } void Speak(string input); }
An interface is a very basic chunk of code in C#. It is like a checklist of objects and actions that all derived classes should have. The way each class handles each dimension of the IHuman will be unique to each class. So now we will create just a basic person object from the interface. I will then use Visual Studio to implement the interface, and this will be the result:
public class Person : IHuman { public string fname { get { throw new NotImplementedException(); } set { throw new NotImplementedException(); } } public string lname { get { throw new NotImplementedException(); } set { throw new NotImplementedException(); } } public int age { get { throw new NotImplementedException(); } set { throw new NotImplementedException(); } } public void Speak(string input) { throw new NotImplementedException(); } }
As you can tell, you’ll need to go through each method and implement it for use. I simply took a moment to do some clean up, and added logic to my Speak method:
public class Person : IHuman { public string fname { get; set; } public string lname { get; set; } public int age { get; set; } public void Speak(string input) { Console.WriteLine(input); } }
Now I can build a simple program and declare a Person Object. Then I can set variables within the object and/or use any of the methods associated with it. Yet, I need another object to describe a programmer. Again, I’ll use the IHuman interface and make the needed changes to my methods. I’m also adding a custom method in this class as another way to speak.
public class Programmer : IHuman { public string fname { get; set; } public string lname { get; set; } public int age { get; set; } public void Speak(string input) { string result = ""; foreach (string s in input.Select(c => Convert.ToString(c, 2))) { result += s; } Console.WriteLine(result); } public void DudeInPlainEnglish(string input) { Console.WriteLine("Sorry my bad... " + input); } }
If you pull the interface and two objects together, you can build a simple console application to prove this proof of concept:
static void Main(string[] args) { Person MyPerson = new Person(); MyPerson.fname = "John"; MyPerson.lname = "Smith"; MyPerson.age = 25; MyPerson.Speak(string.Format("Hello I am {0} {1}!", MyPerson.fname, MyPerson.lname)); Console.WriteLine(); Programmer Me = new Programmer(); Me.fname = "Peter"; Me.lname = "Urda"; Me.age = 21; string UrdaText = string.Format("Hey, I'm {0} {1}", Me.fname, Me.lname); Me.Speak(UrdaText); Console.WriteLine(); Me.DudeInPlainEnglish(UrdaText); Console.WriteLine(); }
Running the program will produce this output:
As you can tell the Programmer spits out binary when asked to speak, and it is only when you call the DudeInPlainEnglish method against it is when you get a readable format. The method also appends “Sorry my bad…” to the start of the print out.
If we only had access to the interface, we would know what properties and methods that each class must have when using said interface. Think of this interface as a type of contract, where each class that uses it must (in some fashion) use the properties and methods laid out. You can also think of an interface as a very basic framework for all involved classes.
So the next time you are working on a bunch of objects that are closely related to each other, consider using an interface. | http://urda.com/blog/2010/11/23/build-objects-with-interfaces | CC-MAIN-2018-34 | refinedweb | 598 | 63.9 |
2008-10-23 22:21:11 8 Comments. This piece of code was written by Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: Module mspace.
Related Questions
Sponsored Content
16 Answered Questions
[SOLVED] What are metaclasses in Python?
- 2008-09-19 06:10:46
- e-satis
- 751072 View
- 5424 Score
- 16 Answer
- Tags: python oop metaclass python-datamodel
25 Answered Questions
[SOLVED] Difference between staticmethod and classmethod
- 2008-09-25 21:01:57
- Daryl Spitzer
- 733959 View
- 3364 Score
- 25 Answer
- Tags: python oop methods python-decorators
10 Answered Questions
[SOLVED] Does Python have a string 'contains' substring method?
23 Answered Questions
[SOLVED] Does Python have a ternary conditional operator?
- 2008-12-27 08:32:18
- Devoted
- 1801296 View
- 5609 Score
- 23 Answer
- Tags: python operators ternary-operator conditional-operator
19 Answered Questions
[SOLVED] What does ** (double star/asterisk) and * (star/asterisk) do for parameters?
- 2008-08-31 15:04:35
- Todd
- 652893 View
- 2156 Score
- 19 Answer
- Tags: python syntax parameter-passing variadic-functions argument-unpacking
17 Answered Questions
[SOLVED] What is the yield keyword used for in C#?
29 Answered Questions
[SOLVED] What does if __name__ == "__main__": do?
- 2009-01-07 04:11:00
- Devoted
- 2619196 View
- 5564 Score
- 29 Answer
- Tags: python namespaces main python-module idioms
20 Answered Questions
[SOLVED] What is the difference between Python's list methods append and extend?
- 2008-10-31 05:55:36
- Claudiu
- 2847665 View
- 3119 Score
- 20 Answer
- Tags: python list data-structures append extend
11 Answered Questions
[SOLVED] What is __init__.py for?
- 2009-01-15 20:09:09
- Mat
- 1005294 View
- 2082 Score
- 11 Answer
- Tags: python module package python-packaging
@thavan 2019-02-22 12:11:45
yieldyields something. It's like somebody asks you to make 5 cupcakes. If you are done with at least one cupcake, you can give it to them to eat while you make other cakes.
Here
factoryis called a generator, which makes you cakes. If you call
make_function, you get a generator instead of running that function. It is because when
yieldkeyword is present in a function, it becomes a generator.
They consumed all the cakes, but they ask for one again.
and they are being told to stop asking more. So once you consumed a generator you are done with it. You need to call
make_cakeagain if you want more cakes. It is like placing another order for cupcakes.
You can also use for loop with a generator like the one above.
One more example: Lets say you want a random password whenever you ask for it.
Here
rpgis a generator, which can generate an infinite number of random passwords. So we can also say that generators are useful when we don't know the length of the sequence, unlike list which has a finite number of elements.
@e-satis 2008-10-23 22:48:44
To understand what
yielddoes, you must understand what generators are. And before you can understand generators, you must understand iterables.
Iterables
When you create a list, you can read its items one by one. Reading its items one by one is called iteration:
mylistis an iterable. When you use a list comprehension, you create a list, and so an iterable:
Everything you can use "
for... in..." on is an iterable;
lists,
strings, files...
These iterables are handy because you can read them as much as you wish, but you store all the values in memory and this is not always what you want when you have a lot of values.
Generators
Generators are iterators, a kind of iterable you can only iterate over once. Generators do not store all the values in memory, they generate the values on the fly:
It is just the same except you used
()instead of
[]. BUT, you cannot perform
for i in mygeneratora second time since generators can only be used once: they calculate 0, then forget about it and calculate 1, and end calculating 4, one by one.
Yield
yieldis a keyword that is used like
return, except the function will return a generator. continue from where it left off each time
foruses the generator.
Now the hard part:
The first time the
forcallsanymore. It can be because the loop had come to an end, or because you do not satisfy an
"if/else"anymore.
Your code explained
Generator:
Caller: all the values of the generator, but
whilekeeps creating new generator objects which will produce different values from the previous ones since it's not applied on the same node.
The
extend()method is a list object method that expects an iterable and adds its values to the list.
Usually we pass a list to it:
But in your code, it gets a generator, which is good because:
And it works because Python does not care if the argument of a method is a list or not. Python expects iterables so it will work with strings, lists, tuples, and generators! This is called duck typing and is one of the reasons why Python is so cool. But this is another story, for another question...
You can stop here, or read a little bit to see an advanced use of a generator:
Controlling a generator exhaustion
Note: For Python 3, use
print(corner_street_atm.__next__())or
print(next(corner_street_atm))without creating another list?
Then just
import itertools.
An example? Let's see the possible orders of arrival for a four-horse race:
Understanding the inner mechanisms of iteration
Iteration is a process implying iterables (implementing the
__iter__()method) and iterators (implementing the
__next__()method). Iterables are any objects you can get an iterator from. Iterators are objects that let you iterate on iterables.
There is more about it in this article about how
forloops work.
@augurar 2015-03-03 22:54:31
All iterators can only be iterated over once, not just those produced by generator functions. If you don't believe me, call
iter()on any iterable object and try to iterate over the result more than once.
@augurar 2015-08-24 20:52:18
@Craicerjack You have your terms mixed up. An iterable is something with an
__iter__method. An iterator is the result of calling
iter()on an iterable. Iterators can only be iterated over once.
@Matthias Fripp 2017-05-23 21:41:53.
@picmate 涅 2018-02-15 19:21:11
.
@Gavriel Cohen 2017-01-02 12:09:28
An easy example to understand what it is:
yield
The output is:
@Daniel 2013-01-18 17:25:17
For those who prefer a minimal working example, meditate on this interactive Python session:
@Jason Baker 2008-10-23 22:28:41loop can be rewritten to this:
Does that make more sense or just confuse you more? :)
I should note that this is an oversimplification for illustrative purposes. :)
@jfs 2008-10-25 02:03:38
__getitem__could be defined instead of
__iter__. For example:
class it: pass; it.__getitem__ = lambda self, i: i*10 if i < 10 else [][0]; for i in it(): print(i), It will print: 0, 10, 20, ..., 90
@Peter 2017-05-06 14:37:55
I tried this example in Python 3.6 and if I create
iterator = some_function(), the variable
iteratordoes not have a function called
next()anymore, but only a
__next__()function. Thought I'd mention it.
@user28409 2008-10-25 21:22:30
Shortcut to understanding
yield
When you see a function with
yieldstatements,and the loop body is executed. If an exception
StopIterationis a Python list).
Here
mylist:
__iter__().
Note that a
forloopcomes in:
Instead of
yieldstatements, if you had three
returnstatementsloop tries to loop over the generator object, the function resumes from its suspended state at the very next line after the
yieldit previously returned from, executes the next line of code, in this case a
yieldstatement,loopthat.
@DanielSank 2017-06-17 22:41:34
"When you see a function with yield statements, apply this easy trick to understand what will happen" Doesn't this completely ignore the fact that you can
sendinto a generator, which is a huge part of the point of generators?
@Pedro 2017-09-14 14:48:17
"it could be a for loop, but it could also be code like
otherlist.extend(mylist)" -> This is incorrect.
extend()modifies the list in-place and does not return an iterable. Trying to loop over
otherlist.extend(mylist)will fail with a
TypeErrorbecause
extend()implicitly returns
None, and you can't loop over
None.
@today 2017-12-26 18:53:57
@pedro You have misunderstood that sentence. It means that python performs the two mentioned steps on
mylist(not on
otherlist) when executing
otherlist.extend(mylist).
@Rafael 2019-03-23 13:55:51
An analogy could help to grasp the idea here:
Imagine that you have created an amazing machine that is capable of generating thousands and thousands of lightbulbs per day. The machine generates these lightbulbs in boxes with a unique serial number. You don't have enough space to store all these lightbulbs at the same time (i.e., you cannot keep up with the speed of the machine due to storage limitation), so you would like to adjust this machine to generate lightbulbs on demand.
Python generators don't differ much from this concept.
Imagine that you have a function
xthat generates unique serial numbers for the boxes. Obviously, you can have a very large number of such barcodes generated by the function. A wiser, and space efficient, option is to generate those serial numbers on-demand.
Machine's code:
As you can see we have a self-contained "function" to generate the next unique serial number each time. This function returns back a generator! As you can see we are not calling the function each time we need a new serial number, but we are using
next()given the generator to obtain the next serial number.
Output:
@RBansal 2013-01-16 06:42:09
Yield gives you a generator.
As you can see, in the first case
fooholdsjust gives you a generator. A generator is an iterable--which means you can use it in a
forloop,.
@It'sNotALie. 2019-03-21 18:33:13
Just a note - in Python 3,
rangealso returns a generator instead of a list, so you'd also see a similar idea, except that
__repr__/
__str__are overridden to show a nicer result, in this case
range(1, 10, 2).
@Douglas Mayle 2008-10-23 22:24:03
yieldis just like
return- it returns whatever you tell it to (as a generator). The difference is that the next time you call the generator, execution starts from the last call to the
yieldstatement. Unlike return, the stack frame is not cleaned up when a yield occurs, however control is transferred back to the caller, so its state will resume the next time the function is called.
In the case of your code, the function
get_child_candidatesis acting like an iterator so that when you extend your list, it adds one element at a time to the new list.
list.extendcalls an iterator until it's exhausted. In the case of the code sample you posted, it would be much clearer to just return a tuple and append that to the list.
@kurosch 2008-10-24 18:11:04
This is close, but not correct. Every time you call a function with a yield statement in it, it returns a brand new generator object. It's only when you call that generator's .next() method that execution resumes after the last yield.
@Bob Stein 2016-03-25 13:21:44
TL;DR
Instead of this:
do this:
Whenever you find yourself building a list from scratch,
yieldeach piece instead.
This was my first "aha" moment with yield.
yieldis a sugary way to say
Same behavior:
Different behavior:
Yield is single-pass: you can only iterate through once. When a function has a yield in it we call it a generator function. And an iterator is what it returns. Those terms are revealing. We lose the convenience of a container, but gain the power of a series that's computed as needed, and arbitrarily long.
Yield is lazy, it puts off computation. A function with a yield in it doesn't actually execute at all when you call it. It returns an iterator object that remembers where it left off. Each time you call
next()on the iterator (this happens in a for-loop) execution inches forward to the next yield.
returnraises StopIteration and ends the series (this is the natural end of a for-loop).
Yield is versatile. Data doesn't have to be stored all together, it can be made available one at a time. It can be infinite.
If you need multiple passes and the series isn't too long, just call
list()on it:
Brilliant choice of the word
yieldbecause both meanings apply:
...provide the next data in the series.
...relinquish CPU execution until the iterator advances.
@smwikipedia 2016-03-25 05:40:24
(My below answer only speaks from the perspective of using Python generator, not the underlying implementation of generator mechanism, which involves some tricks of stack and heap manipulation.)
When
yieldis used instead of a
returnin a python function, that function is turned into something special called
generator function. That function will return an object of
generatortype. The
yieldkeyword is a flag to notify the python compiler to treat such function specially. Normal functions will terminate once some value is returned from it. But with the help of the compiler, the generator function can be thought of as resumable. That is, the execution context will be restored and the execution will continue from last run. Until you explicitly call return, which will raise a
StopIterationexception (which is also part of the iterator protocol), or reach the end of the function. I found a lot of references about
generatorbut this one from the
functional programming perspectiveis the most digestable.
(Now I want to talk about the rationale behind
generator, and the
iteratorbased on my own understanding. I hope this can help you grasp the essential motivation of iterator and generator. Such concept shows up in other languages as well such as C#.)
As I understand, when we want to process a bunch of data, we usually first store the data somewhere and then process it one by one. But this naive approach is problematic. If the data volume is huge, it's expensive to store them as a whole beforehand. So instead of storing the
dataitself directly, why not store some kind of
metadataindirectly, i.e.
the logic how the data is computed.
There are 2 approaches to wrap such metadata.
as a class. This is the so-called
iteratorwho implements the iterator protocol (i.e. the
__next__(), and
__iter__()methods). This is also the commonly seen iterator design pattern.
as a function. This is the so-called
generator function. But under the hood, the returned
generator objectstill
IS-Aiterator because it also implements the iterator protocol.
Either way, an iterator is created, i.e. some object that can give you the data you want. The OO approach may be a bit complex. Anyway, which one to use is up to you.
@ARGeo 2018-09-09 13:25:57
In Python
generators(a special type of
iterators) are used to generate series of values and
yieldkeyword is just like the
returnkeyword of generator functions.
The other fascinating thing
yieldkeyword does is saving the
stateof a generator function.
So, we can set a
numberto a different value each time the
generatoryields.
Here's an instance:
@Jon Skeet 2008-10-23 22:26:06
It's returning a generator. I'm not particularly familiar with Python, but I believe it's the same kind of thing as C#'s iterator blocks if you're familiar with those..
@Savai Maheshwari 2018-08-17 12:36:56
A simple generator function
yield statement pauses the function saving all its states and later continues from there on successive calls.
@Algebra 2017-11-14 12:02:47
All great answers, however a bit difficult for newbies.
I assume you have learned the
returnstatement.
As an analogy,
returnand
yieldare twins.
returnmeans 'return and stop' whereas 'yield` means 'return, but continue'
Run it:
See, you get only a single number rather than a list of them.
returnnever allows you prevail happily, just implements once and quit.
Replace
returnwith
yield:
Now, you win to get all the numbers.
Comparing to
returnwhich runs once and stops,
yieldruns times you planed. You can interpret
returnas
return one of them, and
yieldas
return all of them. This is called
iterable.
It's the core about
yield.
The difference between a list
returnoutputs and the object
yieldoutput is:
You will always get [0, 1, 2] from a list object but only could retrieve them from 'the object
yieldoutput' once. So, it has a new name
generatorobject as displayed in
Out[11]: <generator object num_list at 0x10327c990>.
In conclusion, as a metaphor to grok it:
returnand
yieldare twins
listand
generatorare twins
@Mike S 2018-08-23 13:27:21
This is understandable, but one major difference is that you can have multiple yields in a function/method. The analogy totally breaks down at that point. Yield remembers its place in a function, so the next time you call next(), your function continues on to the next
yield. This is important, I think, and should be expressed.
@redbandit 2016-10-13 13:43:40
In summary, the
yieldstatement transforms your function into a factory that produces a special object called a
generatorwhich wraps around the body of your original function. When the
generatoris iterated, it executes your function until it reaches the next
yieldthen suspends execution and evaluates to the value passed to
yield. It repeats this process on each iteration until the path of execution exits the function. For instance,
simply outputs
The power comes from using the generator with a loop that calculates a sequence, the generator executes the loop stopping each time to 'yield' the next result of the calculation, in this way it calculates a list on the fly, the benefit being the memory saved for especially large calculations
Say you wanted to create a your own
rangefunction that produces an iterable range of numbers, you could do it like so,
and use it like this;
But this is inefficient because
Luckily Guido and his team were generous enough to develop generators so we could just do this;
Now upon each iteration a function on the generator called
next()executes the function until it either reaches a 'yield' statement in which it stops and 'yields' the value or reaches the end of the function. In this case on the first call,
next()executes up to the yield statement and yield 'n', on the next call it will execute the increment statement, jump back to the 'while', evaluate it, and if true, it will stop and yield 'n' again, it will continue that way until the while condition returns false and the generator jumps to the end of the function.
@Tom Fuller 2016-09-10 11:37:25
Many people use
returnrather than
yield, but in some cases
yieldcan be more efficient and easier to work with.
Here is an example which
yieldis definitely best for:
Both functions do the same thing, but
yielduses three lines instead of five and has one less variable to worry about.
As you can see both functions do the same thing. The only difference is
return_dates()gives a list and
yield_dates()gives a generator.
A real life example would be something like reading a file line by line or if you just want to make a generator.
@Christophe Roussy 2016-06-22 09:40:15
Yet another TL;DR
Iterator on list:
next()returns the next element of the list
Iterator generator:
next()will compute the next element on the fly (execute code)
You can see the yield/generator as a way to manually run the control flow from outside (like continue loop one step), by calling
next, however complex the flow.
Note: The generator is NOT a normal function. It remembers the previous state like local variables (stack). See other answers or articles for detailed explanation. The generator can only be iterated on once. You could do without
yield, but it would not be as nice, so it can be considered 'very nice' language sugar.
@Kaleem Ullah 2015-09-01 12:42:19
Yield is an object
A
returnin a function will return a single value.
If you want a function to return a huge set of values, use
yield.
More importantly,
yieldis a barrier.
That is, it will run the code in your function from the beginning until it hits
yield. Then, it’ll return the first value of the loop.
Then, every other call will run the loop you have written in the function one more time, returning the next value until there isn't any value to return.
@Mangu Singh Rajpurohit 2015-07-29 06:11:25
Like every answer suggests,
yieldis used for creating a sequence generator. It's used for generating some sequence dynamically. For example, while reading a file line by line on a network, you can use the
yieldfunction as follows:
You can use it in your code as follows:
Execution Control Transfer gotcha
The execution control will be transferred from getNextLines() to the
forloop when yield is executed. Thus, every time getNextLines() is invoked, execution begins from the point where it was paused last time.
Thus in short, a function with the following code
will print
@Sławomir Lenart 2014-07-24 21:15:29
There is another
yielduse and meaning (since Python 3.3):
From PEP 380 -- Syntax for Delegating to a Subgenerator:
Moreover this will introduce (since Python 3.5):
to avoid coroutines being confused with a regular generator (today
yieldis used in both).
@Engin OZTURK 2013-12-20 13:07:18
Here is a simple example:
Output:
I am not a Python developer, but it looks to me
yieldholds the position of program flow and the next loop start from "yield" position. It seems like it is waiting at that position, and just before that, returning a value outside, and next time continues to work.
It seems to be an interesting and nice ability :D
@Engin OZTURK 2018-07-02 01:44:08
You are correct. But what is the effect on flow which is to see the behaviour of "yield" ? I can change the algorithm in the name of mathmatics. Will it help to get different assessment of "yield" ?
@alinsoar 2013-08-21 19:01:25
From a programming viewpoint, the iterators are implemented as thunks.
To implement iterators, generators, and thread pools for concurrent execution, etc. as thunks (also called anonymous functions), one uses messages sent to a closure object, which has a dispatcher, and the dispatcher answers to "messages".
"next" is a message sent to a closure, created by the "iter" call.
There are lots of ways to implement this computation. I used mutation, but it is easy to do it without mutation, by returning the current value and the next yielder.
Here is a demonstration which uses the structure of R6RS, but the semantics is absolutely identical to Python's. It's the same model of computation, and only a change in syntax is required to rewrite it in Python.
@aestrivex 2013-04-04 14:56:19
There is one type of answer that I don't feel has been given yet, among the many great answers that describe how to use generators. Here is the programming language theory answer:
The
yieldstatement, the current values of variables, the operations that have yet to be performed, and so on, are saved. manage control flow after GUI events trigger.)).
But you could easily implement (and conceptualize) generators as a simple, specific case of continuation passing style:
Whenever
yieldis called, it tells the function to return a continuation. When the function is called again, it starts from wherever it left off. So, in pseudo-pseudocode (i.e., not pseudocode, but not code) the generator's
nextmethod is basically as follows:
where the
yieldkeyword is actually syntactic sugar for the real generator function, basically something like:
Remember that this is just pseudocode and the actual implementation of generators in Python is more complex. But as an exercise to understand what is going on, try to use continuation passing style to implement generator objects without use of the
yieldkeyword.
@tzot 2008-10-24 00:36:05
Here is:
This step corresponds to
defining the generator function, i.e. the function containing a
yield.
This step corresponds to calling the generator function which returns a generator object. Note that you don't tell me any numbers yet; you just grab your paper and pencil.
This step corresponds to calling
.next()on the generator object.
This step corresponds to the generator object ending its job, and raising a
StopIterationexception values.
The most famous user of the iterator protocol is the
forcommand in Python. So, whenever you do a:
it doesn't matter if
sequenceis a list, a string, a dictionary or a generator object like described above; the result is the same: you read items off a sequence one by one.
Note that
defining a function which contains a
yieldkeyword is not the only way to create a generator; it's just the easiest way to create one.
For more accurate information, read about iterator types, the yield statement and generators in the Python documentation.
@Gavriel Cohen 2018-01-17 12:26:00
Yield
In short, you can see that the loop does not stop and continues to function even after the object or variable is sent (unlike
returnwhere the loop stops after execution).
@Aaron Hall 2015-06-25 06:11:11
Answer Outline/Summary
yield, when called, returns a Generator.
yield from.
returnin a generator.)
Generators:
yieldis only legal inside of a function definition, and the inclusion of
yieldin a function definition makes it return a generator.
The idea for generators comes from other languages (see footnote 1) with varying implementations. In Python's Generators, the execution of the code is frozen at the point of the yield. When the generator is called (methods are discussed below) execution resumes and then freezes at the next yield.
yieldprovides an easy way of implementing the iterator protocol, defined by the following two methods:
__iter__and
next(Python 2) or
__next__(Python 3). Both of those methods make an object an iterator that you could type-check with the
IteratorAbstract Base Class from the
collectionsmodule.
The generator type is a sub-type of iterator:
And if necessary, we can type-check like this:
A feature of an
Iteratoris that once exhausted, you can't reuse or reset it:
You'll have to make another if you want to use its functionality again (see footnote 2):
One can yield data programmatically, for example:
The above simple generator is also equivalent to the below - as of Python 3.3 (and not available in Python 2), you can use
yield from:
However,
yield fromalso allows for delegation to subgenerators, which will be explained in the following section on cooperative delegation with sub-coroutines.
Coroutines:
yieldforms an expression that allows data to be sent into the generator (see footnote 3)
Here is an example, take note of the
receivedvariable, which will point to the data that is sent to the generator:
First, we must queue up the generator with the builtin function,
next. It will call the appropriate
nextor
__next__method, depending on the version of Python you are using:
And now we can send data into the generator. (Sending
Noneis the same as calling
Cooperative Delegation to Sub-Coroutine with
yield from
Now, recall that
yield fromis available in Python 3. This allows us to delegate coroutines to a subcoroutine:
And now we can delegate functionality to a sub-generator and it can be used by a generator just as above:
You can read more about the precise semantics of
yield fromin PEP 380.
Other Methods: close and throw
The
closemethod raises
GeneratorExitat the point the function execution was frozen. This will also be called by
__del__so you can put any cleanup code where you handle the
GeneratorExit:
You can also throw an exception which can be handled in the generator or propagated back to the user:
Conclusion
I believe I have covered all aspects of the following question:
It turns out that
yielddoes a lot. I'm sure I could add even more thorough examples to this. If you want more or have some constructive criticism, let me know by commenting below.
Appendix:.
Critique of answer suggesting
yieldin a generator expression or comprehension.
The grammar currently allows any expression in a list comprehension.
Since yield is an expression, it has been touted by some as interesting to use it in comprehensions or generator expression - in spite of citing no particularly good use-case.
The CPython core developers are discussing deprecating its allowance. Here's a relevant post from the mailing list:
Further, there is an outstanding issue (10544) which seems to be pointing in the direction of this never being a good idea (PyPy, a Python implementation written in Python, is already raising syntax warnings.)
Bottom line, until the developers of CPython tell us otherwise: Don't put
yieldin a generator expression or comprehension.
The
returnstatement in a generator
In Python 2:
An
expression_listis basically any number of expressions separated by commas - essentially, in Python 2, you can stop the generator with
return, but you can't return a value.
In Python 3:
Footnotes, which isn't even available on some systems.
This means, for example, that
xrangeobjects (
rangein Python 3) aren't
Iterators, even though they are iterable, because they can be reused. Like lists, their
__iter__methods return iterator objects.
yieldwas originally introduced as a statement, meaning that it could only appear at the beginning of a line in a code block. Now
yieldcreates a yield expression. This change was proposed to allow a user to send data into the generator just as one might receive it. To send data, one must be able to assign it to something, and for that, a statement just won't work.
@blueray 2017-04-29 17:22:22
yield is similar to return. The difference is:
yield makes a function iterable (in the following example
primes(n = 1)function becomes iterable).
What it essentially means is the next time the function is called, it will continue from where it left (which is after the line of
yield expression).
In the above example if
isprime(n)is true it will return the prime number. In the next iteration it will continue from the next line
@Dustin Getz 2012-10-03 20:38:16
Here are some Python examples of how to actually implement generators as if Python did not provide syntactic sugar for them:
As a Python generator:
Using lexical closures instead of generators
Using object closures instead of generators (because ClosuresAndObjectsAreEquivalent)
@Chen A. 2017-10-03 11:30:17
All of the answers here are great; but only one of them (the most voted one) relates to how your code works. Others are relating to generators in general, and how they work.
So I won't repeat what generators are or what yields do; I think these are covered by great existing answers. However, after spending few hours trying to understand a similar code to yours, I'll break it down how it works.
Your code traverse a binary tree structure. Let's take this tree for example:
And another simpler implementation of a binary-search tree traversal:
The execution code is on the
Treeobject, which implements
__iter__as this:
The
while candidatesstatement can be replaced with
for element in tree; Python translate this to
Because
Node.__iter__function is a generator, the code inside it is executed per iteration. So the execution would look like this:
foriterate them (let's call it it1 because its the first iterator object)
foris executed. The
for child in self.leftcreates a new iterator from
self.left, which is a Node object itself (it2)
iteratoris created (it3)
it3has no left childs so it continues and
yield self.value
next(it3)it raises
StopIterationand exists since it has no right childs (it reaches to the end of the function without yield anything)
it1and
it2are still active - they are not exhausted and calling
next(it2)would yield values, not raise
StopIteration
it2context, and call
next(it2)which continues where it stopped: right after the
yield childstatement. Since it has no more left childs it continues and yields it's
self.val.
The catch here is that every iteration creates sub-iterators to traverse the tree, and holds the state of the current iterator. Once it reaches the end it traverse back the stack, and values are returned in the correct order (smallest yields value first).
Your code example did something similar in a different technique: it populated a one-element list for every child, then on the next iteration it pops it and run the function code on the current object (hence the
self).
I hope this contributed a little to this legendary topic. I spent several good hours drawing this process to understand it. | https://tutel.me/c/programming/questions/231767/what+does+the+quotyieldquot+keyword+do | CC-MAIN-2019-51 | refinedweb | 5,565 | 61.97 |
Event that is fired if a log message is received.
This event will be triggered regardless of whether the message comes in on the main thread or not. This means that the handler code has to be thread-safe. It may be invoked from different threads and may be invoked in parallel. Make sure to only access Unity APIs from your handlers that are allowed to be called from threads other than the main thread.
Note: It is not necessary to subscribe to both Application.logMessageReceived and Application.logMessageReceivedThreaded. The multi-threaded variant will also be called for messages on the main thread.
See Also: Application.logMessageReceived.
using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour { public string output = ""; public string stack = "";
void OnEnable() { Application.logMessageReceivedThreaded += HandleLog; }
void OnDisable() { Application.logMessageReceivedThreaded -= HandleLog; }
void HandleLog(string logString, string stackTrace, LogType type) { output = logString; stack = stackTrace; } } | https://docs.unity3d.com/ru/2018.1/ScriptReference/Application-logMessageReceivedThreaded.html | CC-MAIN-2021-31 | refinedweb | 144 | 52.46 |
Automatically adapting SoftSelection Radius
On 07/02/2013 at 03:38, xxxxxxxx wrote:
Hi everyone, first post in this forum. I'm a long time programmer (C++, C#) but relatively new to Python in Cinema 4D and I'm trying to create a simple script that automatically enables soft selection for move, scale and rotate tools and sets the radius to the radius of the current selection. My question is (sorry if it's obviuos) : what modeling command should I call in the SendModelingCommand instruction to access Move, Rotate and Scale tools since I can't find them in the list of available commands?
Thanks in advance,
Michele
On 07/02/2013 at 05:10, xxxxxxxx wrote:
the ids are ID_MODELING_MOVE, ...SCALE and ...ROTATE. but they are not modelling commands,
but tools, so you should send them either with c4d.CallCommand or BaseDocument.SetActive .
On 07/02/2013 at 06:17, xxxxxxxx wrote:
Thanks a lot for the answer. So how can I set parameters like Soft Selection Radius if I can't send the values in the c4d.BaseContainer like with any other modeling command?
Thanks for your time and help!
Michele
On 07/02/2013 at 06:18, xxxxxxxx wrote:
You can't. You can try to use the SMC, but I don't know if it will work. It is however very easy to do the
soft-offsetting manually. This would also give you more control.
Edit: Oh wait, I think there was a method for getting the current tools
container. Search for GetTool or something similar. I don't have the docs
on m smartphone..
On 07/02/2013 at 07:32, xxxxxxxx wrote:
soft selections are not part of the move,rotate and scale tools, they are a separate plugin.
bp = c4d.plugins.FindPlugin(c4d.ID_MODELING_SOFTSELECTION) bp[c4d.MDATA_SOFT_ENABLE] = not bp[c4d.MDATA_SOFT_ENABLE] c4d.EventAdd()
On 07/02/2013 at 08:52, xxxxxxxx wrote:
Thanks a lot ferdinand and NiklasR!
On 07/02/2013 at 09:39, xxxxxxxx wrote:
Hi larsen,
the following code has exactly the same effect as the soft-selection moving the points when using
it with radius = 100, falloff = linear and offsetting the points about (0, 0, 100).
Note that I used the c4dtools module for computing the midpoint. You can find the source-code
for this function here:...
import c4d import c4dtools import itertools import collections def soft_move(op, sel, offset, falloff, radius) : r""" Move the points in the selection *sel* of *op* about the *offset* with the falloff described by the callable object *falloff*. @param op c4d.PointObject instance @param sel c4d.BaseSelect instance @param offset c4d.Vector instance @param falloff Callable object accepting a floating-point value as sole argument (between 0 and 1, including) and returning a floating-point argument as multiplier. @param radius The radius for the soft selection. """ if not isinstance(op, c4d.PointObject) : raise TypeError('expected c4d.PointObject for parameter `op`') if not isinstance(sel, c4d.BaseSelect) : raise TypeError('expected c4d.BaseSelect for parameter `sel`') if not isinstance(offset, c4d.Vector) : raise TypeError('expected c4d.Vector for parameter `offset`') if not isinstance(falloff, collections.Callable) : raise TypeError('expected callable object for parameter `falloff`') points = op.GetAllPoints() point_count = op.GetPointCount() sel_count = sel.GetCount() if point_count <= 0 or sel_count <= 0: return # Compute the mid-point of the selected points. midp = [] for i, p in itertools.izip(xrange(point_count), points) : if sel.IsSelected(i) : midp.append(p) midp = c4dtools.utils.vbbmid(midp) # Offset all points. for i, p in itertools.izip(xrange(point_count), points) : distance = (midp - p).GetLength() if distance > radius: continue x = falloff(distance / radius) p = p + offset * x op.SetPoint(i, p) def main() : if not op or not op.CheckType(c4d.Opoint) : return offset = c4d.Vector(0, 0, 100) falloff = lambda x: (1-x) radius = 100 doc.AddUndo(c4d.UNDOTYPE_CHANGE, op) soft_move(op, op.GetPointS(), offset, falloff, radius) op.Message(c4d.MSG_UPDATE) c4d.EventAdd() if __name__ == "__main__": main()
Best,
Niklas
On 07/02/2013 at 10:33, xxxxxxxx wrote:
Niklas, thank you very much for taking the time to help me! Also your c4dtools module is great!
THANKS!
Kind regards,
Michele | https://plugincafe.maxon.net/topic/6924/7779_automatically-adapting-softselection-radius | CC-MAIN-2020-40 | refinedweb | 687 | 60.72 |
1: Always have a Primary Key
When you create a new table, always ensure you have an ID field set as a primary Key. This is Database Design 101 but I have seen many production databases that have a number of tables with no Primary key. From my experience, 99.9% of the time it's best to set the ID column as an auto increment integer. This ensures that as soon a new entry is added to the table SQL server will automatically increment the ID. Also when you are inserting you don’t have to insert a new ID, SQL Server does this for you.
2: SQL Server Projects
How many times have you have so many SQL windows open with random snippets in SSMS that you can barely keep track. Before I knew about SQL Server projects, I had separate instances of SQL server open for different projects where I was working with data to try and keep things organised, plus I used to keep my machine on for as long as I needed these SQL windows open. Thankfully SSMS has the option to create a SQL Server project where you can organise your SQL files and snippets nice and neatly. It works just like a .SLN file in Visual studio. Give it a try you wont look back.
3: Timestamps
Just this week I had to update a table that was missing a timestamp field. This is a common mistake, which makes it very difficult to provide reporting on when an update or edit took place. For the vast majority of tables, ensure you add a TimeStamp Field of type DateTime.
A good Example of using timestamps is on a users table. On the User table for intermittentBug we have [DateRegistered], [LastSignedIn] and [EmailVerifiedDate] TimeStamp fields. This provides excellent clarity and reporting for user events.
4: Use diagram tools for Database Design, creating FK's and maintaining referential integrity
SQL Server Management Studio (SSMS) offers an excellent visual database design tool. You can access it by right clicking the "Database Diagrams" folder by expanding your DB in object explorer. It allows you to add and edit tables, assign PK's and drag out FK (foreign Key) relationships. It’s a great tool that makes creating a database schema's simple and easy.
5: SQL Table variables
As a Seasoned C# programmer writing SQL seemed easy at first, SELECT * FROM TABLE, wow this is easy! but things start getting complex quick. A technique I use when writing complicated SQL statements that require many joins and Aggregate functions across various tables is to take full advantage of SQL Table Variables. Table variables are an in memory table object that you can specify and populate. Once populated you can then join to another table variable to produce data that would be very difficult to do otherwise. I have a tutorial that goes through the basics here -> link coming soon
6: Tools for comparing/updating data and schemas
The way I work is to have 3 separate environments Dev, Staging and Live. This means I have 3 DB's, DB_Dev, DB_Staging and DB_Live. The problem is when you have completed all your work on your dev environment and you want to copy your data to staging for testing. You could create backups and then restore, but this is fiddly and time consuming. Also when you want to update your live DB restoring a backup is tricky and can result in downtime. Unbelievably, SSMS doesn't have a native comparison tool for data and schema changes. Thankfully Visual studio does. It has two tools, one for data which will copy data from the source DB to the target DB and a schema comparison which will update the target DB with changes to table structure, store procs, views, functions, ect. These tools are very useful so ensure you check them out.
7: Use Stored procedures
A Stored Procedure is essentially SQL code that can be run on demand. They offer 2 main advantages, 1 being the performance offered by executing SQL directly from the DB server and 2 the option to update at any time without any downtime for dependant apps. A prime example, Lets say we have a store proc that is used to return search results which orders the results by view count descending. If I want to switch the order by I can just alter the stored proc and update on the DB server. The application immediately updates, no need for a release or any downtime.
8: SQL Jobs
If you have access to any version of SQL server higher than express, you can use SQL Jobs. SQL Jobs are an inbuilt scheduling system that you can configure to trigger database events (jobs) that run at specific times or be trigged on an event. I have used them extensively to execute maintenance tasks such as backups or run data feeds. You can go right back to basics and simply trigger a stored proc to run at a certain time or at an interval such as every hour.
9: Generate Scripts
SSMS has a powerful feature called generate scripts that allows you to generate the SQL code that makes up your tables, stored procs, views or your entire database and all its data. The latter is especially useful when you need to backup your database or migrate to another DB server. One of the issues SQL Server has is backwards compatibility between versions. So for example I cannot restore a .bak file that was created in SQL Server 2014 to a SQL 2012 Server, even if the 2012 server is a higher version. To overcome this issue you can generate the entire SQL that makes up your database and all its data and simply execute this on the new server.
10: Object Explorer Details Window
Take this scenario, you have 100s of stored procs that have explicit database joins. You need to update them all to point to a test DB. Is the only way to achieve this to right click each one, update the DB namespace and then hit F5 to execute the alteration? Thankfully no, you can use the little known and often forgotten SSMS Object Explorer Details window. Using the Object explorer you can select all your stored procs, right click and select Drop and Create. This creates one big SQL file that contains all your stored procs so you can simply do a find and replace and then execute the lot in one hit. | https://www.intermittentbug.com/article/articlepage/top-10-sql-server-tips-for-the-.net-developer/2037 | CC-MAIN-2019-13 | refinedweb | 1,088 | 69.21 |
On my programming blog, I often try to compare performance characteristics of different algorithms or concepts. I usually log performance output (like elapsed time) to the console or a txt file and then copy this to a spreadsheet and analyse. But recently, I've found another way of doing this: I've used Spire.XSL library to generate the final spreadsheet file - with all the tables and charts! Read further to learn how one can leverage this library for various automation tasks.
Download C# project - 7.1 KB, zip
The Case
Our objective is to create a benchmark application that will test three different sorting algorithms. We would like to get elapsed time for a different element count. Below, there is a simple code that can be used:
abstract class PerfTestBase { public double ElapsedTimeSec {get; protected set;} public string Name {get; protected set;} public abstract void run(int n); } class BubbleSortPerfTest : PerfTestBase { public BubbleSortPerfTest() { Name = "Bubble Sort"; } public override void run(int n) { // real implementation here ElapsedTimeSec = X; } } class MergeSortPerfTest : PerfTestBase { public MergeSortPerfTest() { Name = "Merge Sort"; } public override void run(int n) { // real implementation here ElapsedTimeSec = X; } } class QuickSortPerfTest : PerfTestBase { public QuickSortPerfTest() { Name = "Quick Sort"; } public override void run(int n) { // real implementation here ElapsedTimeSec = X; } }
Algorithms are ready and now we need to run them with different startup parameters.
List<PerfTestBase> perfTests = new List<PerfTestBase> { new BubbleSortPerfTest(), new MergeSortPerfTest(), new QuickSortPerfTest() }; // N from 10 up to 200, step is 10 var res = runAllTests(perfTests, 10, 200, 10); printResults(res);
The function
runAllTests simply iterates through set of
N values and calls
.run(N) methods.
The most interesting part for us is the
printResults method. What code can be used to automate reporting and generate valuable results?
Simplest Solution
Initially we can, of course, print all the results to the console. We can even use CSV format and then easily copy it to spreadsheet.
N;Bubble Sort;Merge Sort;Quick Sort; 10;20,00;140,46;96,71; 20;80,00;365,48;251,64;
After a while, when you continue to change your algorithm code, the task of copying results becomes tedious. For sure, there must be a better and faster way. What if we could generate not CSV file, but full Excel file? And now is a best place to introduce our Spire.XLS library.
Introducing Spire.XLS
Spire.XSL is a library that makes Office automation easier.
But briefly: Add reference to Spire.XLS in your project and then you can create, open, update, run calculation without requiring Microsoft Excel or Microsoft Office to be installed on the system!
The library is fully compatible with Excel 97/2003, 2007 and 2010.
Additionally Spire.XLS can also protect, encrypt files and, what is more important, convert to other formats. You can for instance export your files to PDF, images or HTML.
This solution gives us possibility to implement valuable and automated applications quite easily.
Using Spire.XSL in the Code
In our example, we will use probably only 1% of the full power of the library! Still it will save us a lot of time with report generation.
Basic Usage
Add references:
using Spire.Xls; using Spire.Xls.Charts;
Four lines to create 'Hello World' workbook:
Workbook wb = new Workbook(); Worksheet sheet = wb.Worksheets[0]; sheet.Range["A1"].Text = "Hello,World!"; wb.SaveToFile("Sample.xls", ExcelVersion.Version2007);
The above code gives a basic idea how the library looks like. Basically you can manipulate workbooks, sheets and individual cells in a very light way.
Improved Solution
Let's go back to our original problem. Our new solution will keep the console output part, but we will save the results also to Excel file. In addition to that, we can create a chart. That way, a lot of time will be saved - no need for copy and regenerate charts again and again...
Here is a fragment of code related to saving the data:
Worksheet sheet = workbook.Worksheets[0]; sheet.Name = "Perf Test"; sheet.Range["A1"].Text = "Elapsed Time for sorting..."; sheet.Range["A1"].Style.Font.IsBold = true; // columns title: sheet.Range["C3"].Text = "N"; sheet.Range["C3"].Style.Font.IsBold = true; sheet.Range["C3"].Style.HorizontalAlignment = HorizontalAlignType.Center; char col = 'D'; foreach (var n in res.Map.Keys) { sheet.Range[col+"3"].Text = n; sheet.Range[col+"3"].Style.Font.IsBold = true; sheet.Range[col+"3"].Style.HorizontalAlignment = HorizontalAlignType.Center; col++; } // insert values into rows...
And here is some of the chart generation code:
Chart chart = sheet.Charts.Add(); //Set region of chart data chart.DataRange = workbook.Worksheets[0].Range[range]; chart.SeriesDataFromRange = false; //Set position of chart chart.LeftColumn = 2; chart.TopRow = 2; chart.RightColumn = 12; chart.BottomRow = 30; //Chart title chart.ChartTitle = "Sorting Time..."; chart.ChartTitleArea.IsBold = true; chart.ChartTitleArea.Size = 12; // ... chart.Legend.Position = LegendPositionType.Bottom; chart.ChartType = ExcelChartType.ScatterSmoothedLineMarkers;
Simple as it is!
I especially like the way we can get to a cell or a whole range. Note how easy it is to change style of a cell.
A final Excel file - generate automatically of course:
and the chart:
Alternatives
If you want to go opensource:
- ClosedXML - ClosedXML allows you to create Excel 2007/2010 files without the Excel application
- EPPlus - library that reads and writes Excel 2007/2010 files using the Open Office Xml format (xlsx)
- NPOI - an open source project which can help you read/write xls, doc, ppt files.
Summary
In this article, I've shown how we can easily automate the task of reporting performance results from an application. By using Spire.XLS, programmers are able to create and manipulate Excel files without having Office installation on the system. The library is very powerful and, what is more important, trivial to utilize. Our task - creating reports - could be automated in a few lines of code.
Remarks
- The library is designed for .NET, but even in native code we could use the same solution. I need to test this, but we could create a 'bridge' and call .NET library from C++ application. C++ app will do the work, but all the results would go to .NET module that will call Spire.XLS.
Article was sponsored by e-iceblue company. | http://www.bfilipek.com/2014/06/automated-reports.html | CC-MAIN-2017-22 | refinedweb | 1,027 | 58.48 |
This article is also available in Chinese.
When working with Swift on the server, most of the routing frameworks work by associating a route with a given closure. When we wrote Beacon, we chose the Vapor framework, which works like this. You can see this in action in the test example on their home page:
import Vapor let droplet = try Droplet() droplet.get("hello") { req in return "Hello, world." } try droplet.run()
Once you run this code, visiting
localhost:8080/hello will display the text “Hello, world.”.
Sometimes, you also want to return a special HTTP code to signal to consumers of the API that a special action happened. Take this example endpoint:
droplet.post("devices", handler: { request in let apnsToken: String = try request.niceJSON.fetch("apnsToken") let user = try request.session.ensureUser() var device = try Device(apnsToken: apnsToken, userID: user.id.unwrap()) try device.save() return try device.makeJSON() })
(I’ve written more about
NiceJSON here, if you’re curious about it.)
This is a perfectly fine request and is similar to code from the Beacon app. There is one problem: Vapor will assume a status code of 200 when you return objects like a string (in the first example in this blog post) or JSON (in the second example). However, this is a
POST request and a new
Device resource is being created, so it should return the HTTP status code “201 Created”. To do this, you have to create a full response object, like so:
let response = Response(status: .created) response.json = try device.makeJSON() return response
which is a bit annoying to have to do for every creation request.
Lastly, endpoints will often have side effects. Especially with apps written in Rails, managing and testing these is really hard, and much ink has been spilled in the Rails community about it. If signing up needs to send out a registration email, how do you stub that while still testing the rest of the logic? It’s a hard thing to do, and if everything is in one big function, it’s even harder. In Beacon’s case, we don’t have don’t have many emails to send, but we do have a lot of push notifications. Managing those side effects is important.
Generally speaking, this style of routing, where you use a closure for each route, has been used in frameworks like Flask, Sinatra, and Express. It makes for a pretty great demo, but a project in practice often has complicated endpoints, and putting everything in one big function doesn’t scale.
Going even further, the Rails style of having a giant controller which serves as a namespace for vaguely related methods for each endpoint is borderline offensive. I think we can do better than both of these. (If you want to dig into Ruby server architecture, I’ve taken a few ideas from the Trailblazer project.)
Basically, I want a better abstraction for responding to incoming requests. To this end, I’ve started using an object that I call a
Command to encapsulate the work that an endpoint needs to do.
The
Command pattern starts with a protocol:
public protocol Command { init(request: Request, droplet: Droplet) throws var status: Status { get } func execute() throws -> JSON } extension Command: ResponseRepresentable { public func makeResponse() throws -> Response { let response = Response(status: self.status) response.json = try execute() return response } }
We’ll add more stuff to it as we go, but this is the basic shell of the
Command protocol. You can see see just from the basics of the protocol how this pattern is meant to be used. Let’s rewrite the “register device” endpoint with this pattern.
droplet.post("devices", handler: { request in return RegisterDeviceCommand(request: request, droplet: droplet) })
Because the command is
ResponseRepresentable, Vapor accepts it as a valid result from the handler block for the route. It will automatically call
makeResponse() on the
Command and return that
Response to the consumer of the API.
public final class RegisterDeviceCommand: Command { let apnsToken: String let user: User public init(request: Request, droplet: Droplet) throws { self.apnsToken = try request.niceJSON.fetch("apnsToken") self.user = try request.session.ensureUser() } public let status = Status.created public func execute() throws -> JSON { var device = try Device(apnsToken: apnsToken, userID: user.id.unwrap()) try device.save() return try device.makeJSON() } }
There are a few advantages conferred by this pattern already.
- Maybe the major appeal of using a language like Swift for the server is to take advantage of things like optionals (and more pertinently, their absence) to be able to define the absolute requirements for a request to successfully complete. Because
apnsTokenand
userare non-optional, this file will not compile if the
initfunction ends without setting all of those values.
- The status code is applied in a nice declarative way.
- Initialization is separate from execution. You can write a test that checks to that the initialization of the object (e.g., the extraction of the properties from the request) that is completely separate from the test that checks that the actual
save()works correctly.
- As a side benefit, using this pattern makes it easy to put each
Commandinto its own file.
There are two more important components to add to a
Command like this. First, validation. We’ll add
func validate() throws to the
Command protocol and give it a default implementation that does nothing. It’ll also be added to the
makeResponse() function, before
execute():
public func makeResponse() throws -> Response { let response = Response(status: self.status) try validate() response.json = try execute() return response }
A typical
validate() function might look like this (this comes from Beacon’s
AttendEventCommand):
public func validate() throws { if attendees.contains(where: { $0.userID == user.id }) { throw ValidationError(message: "You can't join an event you've already joined.") } if attendees.count >= event.attendanceLimit { throw ValidationError(message: "This event is at capacity.") } if user.id == event.organizer.id { throw ValidationError(message: "You can't join an event you're organizing.") } }
Easy to read, keeps all validations localized, and very testable as well. While you can construct your
Request and
Droplet objects and pass them to the prescribed initializer for the
Command, you’re not obligated to. Because each
Command is your own object, you can write an initializer that accepts fully fledged
User,
Event, etc objects and you don’t have to muck about with manually constructing
Request objects for testing unless you’re specifically testing the initialization of the
Command.
The last component that a Command needs is the ability to execute side effects. Side effects are simple:
public protocol SideEffect { func perform() throws }
I added a property to the
Command protocol that lists the
SideEffect-conforming objects to perform once the command’s execution is done.
var sideEffects: [SideEffect] { get }
And finally, the side effects have to be added to the
makeResponse() function:
public func makeResponse() throws -> Response { let response = Response(status: self.status) try validate() response.json = try execute() try sideEffects.forEach({ try $0.perform() }) return response }
(In a future version of this code, side effects may end up being performed asynchronously, i.e., not blocking the response being sent back to the user, but currently they’re just performed synchronously.) The primary reason to decouple side effects from the rest of the
Command is to enable testing. You can create the
Command and
execute() it, without having to stub out the side effects, because they will never get fired.
The
Command pattern is a simple abstraction, but it enables testing and correctness, and frankly, it’s pleasant to use. You can find the complete protocol in this gist. I don’t knock Vapor for not including an abstraction like this: Vapor, like the other Swift on the server frameworks, is designed to be simple and and that simplicity allows you to bring abstractions to your own taste.
There are a few more blog posts coming on server-side Swift, as well as a few more in the Coordinator series. Beacon and WWDC have kept me busy, but rest assured! More posts are coming. | https://khanlou.com/2017/06/server-side-commands/ | CC-MAIN-2020-34 | refinedweb | 1,339 | 55.84 |
- Advertisement
Content Count466
Joined
Last visited
Community Reputation187 Neutral
About FireNet
- RankMember
Dynamic 2D lights
FireNet replied to bottomy's topic in Graphics and GPU ProgrammingAn article on dynamic 2d soft shadows
Best way to achieve fast 2D these days
FireNet replied to canislupis's topic in Graphics and GPU ProgrammingYou could use PixelToaster which is designed to be used for software rendering. Basically it provides you with a framebuffer and takes care of all the inner details. PixelToaster is a library for C++ programmers who want to write their own software rendering routines, reading and writing to an array of pixels. You choose between high dynamic range floating point color or 32 bit truecolor and pixeltoaster converts to the native display format automatically. You also get basic keyboard and mouse input and a high resolution timer. [/quote]
Game states and setting up the way things flow
FireNet replied to thehonestman's topic in General and Gameplay ProgrammingThere are many ways you could go about doing this. The simplest I can think of would be to have a list of states, with only last one being updated and rendered. push_back the intro state intro states runs for 10 seconds intro state after 10 seconds push_back menu state menu state can push_back game states At end of a game, pop_back game state, menu state becomes the current state Take a look at Managing Game States in C++ Example: Emilio's Flight It uses a state manager similar to the one I described above. The intro state runs a few timed objects and on the player pressing enter pushes a new state onto the state list. The intro state then marks itself as over, and is safely deleted in the garbage collection phase of the state manager. Handling memory is an issue. If you are putting pointers to states in std::list or std::vector consider using boost::shared_ptr rather than raw pointers.
MultiStream's problem
FireNet replied to Feihonghui's topic in General and Gameplay ProgrammingWhen you are switching... are you loading a new sound file?
Engine design and resources
FireNet replied to JorenJoestar's topic in General and Gameplay ProgrammingI use two template classes to handle my resources. The first one is a Resource Loader class that handles loading a valid resource. The other is of course the Resource Manager which ensures only one instance of resource is loaded. template<typename RES_ID,typename RES> class ResourceLoader { public: bool Load(); bool Unload(); } template<typename RES_ID, typename RES, typename RES_LOADER> class ResourceManager { public: bool Load(RES_ID &x); //Load a resource bool Get(); //Get last resource searched for or loaded bool GetP(); //Get pointer to above bool Have(RES_ID &x); //Check if a resource is loaded bool Unload(RES_ID &x); //unload said resource } Here's my c++ implementation of it #ifndef _MS_UTIL_RESOURCE_H_ #define _MS_UTIL_RESOURCE_H_ /*Simple Resource Manager Template*/ #include <string> #include <map> #include <iterator> namespace STICK { //!class GenResLoader /* \brief Format for a resource loader */ template<typename RESOURCE_ID, typename RESOURCE> class GenResLoader { public: virtual bool Load(const RESOURCE_ID &, const RESOURCE_ID &, RESOURCE &){return false;} virtual bool LoadNullResource(const RESOURCE_ID &, RESOURCE &){return false;} //virtual bool Unload(RESOURCE &){return false;} bool fileIsValid(std::string filepath) { std::fstream fin; fin.open(filepath.c_str(),std::ios::in); if( fin.is_open() ) { fin.close(); return true; } fin.close(); return false; } protected: //std::string path; }; //!class GenResManager /* \brief This class is meant to be used internally, so main objects are public RESOURCE Recommended: A container which contains a [pointer to data or an id] and some some info to describe the properties of the resource RESOURCE_ID A unique way to identify a resource, like a file name RESOURCE_LOADER A class which can load and unload resouces Members required: bool Load(const RESOURCE_ID,RESOURCE) bool Unload(RESOURCE) */ //Some way to uniquely id a res, a res object, a class to load it template<typename RESOURCE_ID,typename RESOURCE,typename RESOURCE_LOADER> class GenResManager : public RESOURCE_LOADER { public: class RESOURCE_1 { public: RESOURCE res; int count; }; bool Load(const RESOURCE_ID &); //!< Will attempt to gain resource from memory/file(true=sucess) RESOURCE LoadGet(const RESOURCE_ID &); //!< Load and returns a resource (even when said resource is not loaded) RESOURCE* LoadGetP(const RESOURCE_ID &); //!< Load and returns a resource (even when said resource is not loaded) bool Unload(const RESOURCE_ID &); //!< Will attempt to unload memory bool Have(const RESOURCE_ID &); //!< Searches for a loaded resource RESOURCE Get(); //!< Gives last loaded/found resource RESOURCE* GetP(); //!< Gives last loaded/found resource RESOURCE Get(unsigned int); //!< Gives resource by internal list id (-1 = last resource) bool Add(const RESOURCE_ID &,const RESOURCE*); //!< Add a resource with id to internal list void Path(RESOURCE_ID pth, RESOURCE_ID def_pth) { path = pth; path_default = def_pth; } //Resources Constructs std::map<RESOURCE_ID,RESOURCE_1> resList; //!< Loaded resources list typename std::map<RESOURCE_ID,RESOURCE_1>::iterator resLast; //!< Last Find typename std::map<RESOURCE_ID,RESOURCE_1>::iterator find; //Path RESOURCE_ID path; RESOURCE_ID path_default; GenResManager() { RESOURCE res; RESOURCE_ID res_id; RESOURCE_LOADER::LoadNullResource(res_id,res); Add(res_id,&res); } }; template<typename RESOURCE_ID,typename RESOURCE,typename RESOURCE_LOADER> bool 
GenResManager<RESOURCE_ID,RESOURCE,RESOURCE_LOADER>::Load(const RESOURCE_ID &id) { if(Have(id)) { resLast->second.count++; return true; } else { RESOURCE res; if(RESOURCE_LOADER::Load(id,path,res) || RESOURCE_LOADER::Load(id,path_default,res)) { Add(id,&res); return true; } else { return false; } } //return false; } template<typename RESOURCE_ID,typename RESOURCE,typename RESOURCE_LOADER> RESOURCE GenResManager<RESOURCE_ID,RESOURCE,RESOURCE_LOADER>::LoadGet(const RESOURCE_ID &id) { Load(id); return Get(); } template<typename RESOURCE_ID,typename RESOURCE,typename RESOURCE_LOADER> RESOURCE* GenResManager<RESOURCE_ID,RESOURCE,RESOURCE_LOADER>::LoadGetP(const RESOURCE_ID &id) { Load(id); return GetP(); } template<typename RESOURCE_ID,typename RESOURCE,typename RESOURCE_LOADER> bool GenResManager<RESOURCE_ID,RESOURCE,RESOURCE_LOADER>::Unload(const RESOURCE_ID &id) { find = resList.find(id); if(find == resMap.end())return false; if(find == resLast) { resLast = resList.begin(); } find->second.count -= 1; if(find->second.count > 0) { //Because there are other users dont remove it from the list return true; } RESOURCE_LOADER::Unload(find->second.res); resList.erase(find); return true; } template<typename RESOURCE_ID,typename RESOURCE,typename RESOURCE_LOADER> bool GenResManager<RESOURCE_ID,RESOURCE,RESOURCE_LOADER>::Have(const RESOURCE_ID &id) { find = resList.find(id); if( find != resList.end() ) { resLast = find; return true; } return false; } template<typename RESOURCE_ID,typename RESOURCE,typename RESOURCE_LOADER> RESOURCE GenResManager<RESOURCE_ID,RESOURCE,RESOURCE_LOADER>::Get() { return resLast->second.res; } template<typename RESOURCE_ID,typename RESOURCE,typename RESOURCE_LOADER> RESOURCE* GenResManager<RESOURCE_ID,RESOURCE,RESOURCE_LOADER>::GetP() { return &(resLast->second.res); } template<typename RESOURCE_ID,typename RESOURCE,typename RESOURCE_LOADER> RESOURCE GenResManager<RESOURCE_ID,RESOURCE,RESOURCE_LOADER>::Get(unsigned int arry_id) { return (resList.begin()+arry_id)->second.res; } template<typename RESOURCE_ID,typename RESOURCE,typename RESOURCE_LOADER> bool GenResManager<RESOURCE_ID,RESOURCE,RESOURCE_LOADER>::Add(const RESOURCE_ID &id,const RESOURCE *res) { RESOURCE_1 res_cont = {*(res),1}; resLast = resList.insert( resList.end(),std::pair<RESOURCE_ID,RESOURCE_1>(id,res_cont) ); //resLast = resList.begin()+; return true; } } #endif The resource loader class is for providing a standard way to load any resource. You can inherit the loader for every type of resource you want to load (or create a class with functions of the same name ...in C++ the resource manager template does not really care along as you have Load() ....) LoadNullResource is for having a default resource when no resource has been loaded before... (preferably generated) The resource id can be any way to uniquely identify a resource, usually std::string for a file name. It can be also be a class, for a font which has font_name and font_size. You will have to overload the comparison operator in such a class though. This will be required by std::map to compare resource ids. The Manager template can do some nifty stuff like have multiple paths for a resource and handle any type of data without trouble as long as a few conditions are satisfied. Word of warning though, I've not done the Unload()... simply because I am currently using it for a game in which I don't expect to unload anything till the game is closed. Another important thing to watch out for is the copying of resources... 
the manager does blind copies using the assignment operator and expects the resource class to handle efficient copying and management of memory for a single resource. Working with classes rather than pointers made it much easier to code. When resources are destroyed, the destructor can take care of freeing memory automatically. Smart pointers would be a good to use on dynamically allocated memory. Usage Example (for SFML Image class) #ifndef _MS_RESOURCE_IMAGE_H_ #define _MS_RESOURCE_IMAGE_H_ #include "engine.h" namespace STICK { class ImageLoader : public STICK::GenResLoader<std::string,sf::Image> { public: bool Load(const std::string file , const std::string path, sf::Image &Image) { std::string filepath = path+file; if( fileIsValid(filepath) ) { return Image.LoadFromFile( filepath ); } return false; } bool LoadNullResource(const std::string &file, sf::Image &Image) { Image.Create(32,32,sf::Color(255,0,255,100)); return true; } }; //Singleton template based on class ImageManager : public CSingleton<ImageManager>, public STICK::GenResManager<std::string,sf::Image,ImageLoader> { public: friend CSingleton<ImageManager>; static ImageManager* CreateInstance(std::string _path,std::string _path_default) { ImageManager* inst = ImageManager::Instance(); inst->Path(_path,_path_default); return inst; } private: ImageManager(){}; ~ImageManager(){}; }; } #endif Init, Usage of manager and Deinit //Image Resource Manager STICK::ImageManager* image = STICK::ImageManager::CreateInstance(CONFIG::path_image,CONFIG::default_path_image); //Load image std::string icon_file = "icon.png"; sf::Image *icon = ImageManager::Instance()->LoadGetP(icon_file.c_str()); //Deinit the manager image->DestroyInstance(); //Pastebin Link for resource manager class //original skeleton template
Raknet vs OpenTNL vs ???
FireNet replied to sipickles's topic in Networking and MultiplayerTry zoidcom... It has a fair amount of documentation and seems to be easy to use. I will be using it for a 2d game I am working on atm... Quote:.
2D Graphics - Best library?
FireNet replied to m0nkfish's topic in For Beginners's ForumYou can also use FreeGLUT to handle your window creation and input. It's the same as glut... You will still be working with OpenGL. The problem with DX is that you will have to download the SDK, set it up and use a bunch of code to set it up. OpenGL has a much simpler API, and as you are already familiar with it you can get coding your game faster. A sprite can be a texture mapped quad, so OpenGL can handled sprites just fine. You can also use an alpha channel in your images for transparent or translucent areas, just make sure you setup the blending properly. In any case since you are in the beginner's forum, the choice between DX or OGL is a simple one.... go with what easiest for you... - SFML is neat and does a fair amount of load lifting for you. I've been going over it documents and will be using it in a project I am working on. But in my opinion if you are just starting out, it's a lot more fun to do things yourself.... you dont have any time constraints and you get to learn a lot of things about the important background details.
- Quote:Original post by Falling Sky Just for the record, you should learn the language you want to use BEFORE you try to make a game with it. Extremely true, and learning the language, at least all the the basics of C++ and a little bit about STL will allow you to code a lot more efficiently. Otherwise you would be doing voodoo, doing things without really knowing what is going on and not being able to take full advantage of the tools available to you. However it does not mean you can't code a game while you are learning C++, just make sure they are small. Also if you are using a framework or an engine, there is an extra bit learning to use it. If you are not familiar with C++, keep the 3rd party stuff to a minimum as you are starting out. Just use GLUT, and stick to coding a simple game rather than coding a full game with everything. GLUT will give you rendering via opengl and input, pretty much all you need to make a simple game. And using GLUT is fairly trivial, just a few lines of code of init and you can be off making games.
- If you need 2D you could try: Gorgon - tape-worm.net Hge - hge.relishgames.com I've not used any of these yet :-P I am also working on the releasing framework I've been using as a starting point for my 2d projects. code.google.com/p/airbashframework/ I only put it up a few hours ago, so the SVN has only the Video class. If you are using Visual Studio 2008, comes with .libs and headers for any external libs used. It's aimed at only doing the boring stuff of setting up the project, and initializing anything required. From then on you would be dealing with the actual libraries and their api, those some nice helpers would be provided. Take a look at it in a few days ;-) [Edited by - FireNet on January 6, 2009 10:55:32 PM]
tile too big
FireNet replied to jagguy2's topic in For Beginners's ForumYou should try a few tutorials for making seamless tiles
Game Design in C++
FireNet replied to amhathaw's topic in For Beginners's ForumYea, make it sleep for a bit. Use getch() to make it wait till a key is pressed. You will have to include <conio.h> Or from look at the second lesson's code //Wait 2 seconds SDL_Delay( 2000 ); Increase 2000 to a higher value
Aerial Attack-Feedback
FireNet replied to c159101's topic in General and Gameplay ProgrammingThe game is fine, but the controls are seriously messed up and hard. I like most people just wanted to shoot when I first tried the game. I kept hitting space, left click, alt,ctrl and nothing happened ...... I was reduced to dodging bullets and enemies. It was fun though. I even found that hitting B sent out a bomb while I was desperately hitting keys. The game is fun to play though... Insanely weak weapons make dodging a required skill, but a little more power would not hurt. Kinda difficult to take out even the basic enemies. As a far as bugs go, in full screen mode, I can see the game area , but the rest of the screen is a flashing grey area :P.
Pathfollowing Implementation help
FireNet replied to Vish's topic in General and Gameplay ProgrammingFirst of all, I recommend you take the time to use floats in your ghosts(if you have not already).Box2D works on floats so you will find it a lot easier to use the data it gives you. When I did a pac-man clone (no physics), I took the vector from start to destination and scaled it by the speed multiplied by delta time (milliseconds). Something like: Vec Pos; Vec Dest; Vec Speed; Vec move = Dest - Pos; move *= speed * delta_time pos += move; if( abs(dest.x,y - pos.x,y) <= float_range)choose_next_node(); This would make the ghost move from point A to B as long as delta time was below 1. The next node would be targeted when the sprite was within distance from the target location. This would easily handle any small skips the sprite took, if it missed the exact location. Otherwise if delta time was >1 it would skip the target and go hit a wall. It would of course comeback, since a vector is being calculated before every move. For me the whole thing was just an exercise in A* with only ghosts running around to random targets so I didn't really care about how accurate the locations of the ghosts were. There were no walls, just an 2D array of ints which represented open/close area. You could use a similar method to push your ghosts by adding an impulse/force to the body and stopping it when it reaches within certain range to a node target. The you could set the position to the exact point on the node, before moving it to the next. You can use tweening move an object from one location to another. Adding a impulse or force to an object is somewhat like shooting a bullet and hope it reaches it's destination. But it you are happy to what it mostly follow the path then it should work out fine. Also looking at the programming faq for box2d will give you a few pointers. [Edited by - FireNet on January 6, 2009 12:28:19 PM]
Gamemonkey (geting a value from script)
FireNet replied to FireNet's topic in Engines and MiddlewarewOW I did just miss the global keyword. Darn. All that searching and trails. Thanks a lot and thanks for the tip ;) (v1.25b is what i used (latest atm))
Gamemonkey (geting a value from script)
FireNet posted a topic in Engines and MiddlewareI been trying to get a simple int value from the script and i've been having little success gmMachine gm; gmTableObject *Gtable; gmVariable var; cout<<"\n->Compile Ret="<<gmLoadAndExecuteScript(gm,"../data/test.gm")<<endl; handleErrors(gm); Gtable = gm.GetGlobals(); var = Gtable->Get(&gm,"svar"); test.gm svar = 1013; I get 0 all the time and the object type is GM_NULL... What am i doing wrong? How can i get a value of a var easily?
- Advertisement | https://www.gamedev.net/profile/60072-airbasher/ | CC-MAIN-2018-43 | refinedweb | 2,859 | 53.21 |
Thanks to those who attended my webcast "Exploring the System.Net namespace" earlier this week.
Here is the sample code and slide deck (about 500k download) and I'm still working on the questions that were asked so check back Friday!
(Note that this is pretty much the same presentation and code that I did at the DevConnections conference in Orlando in early April, so if you're looking for that, you're in the right place.)
If you missed the webcast you can watch a recording.
(UPDATE: Finally got the sample code and slide deck on a working server!)
Your source code zip was not found.
Hi Glen,
Could you please check the link you posted for the source code / slide deck – it doesn’t work for me either…
When I click "sample code and slide deck" I am taken to the following link:
My browser says "Cannot find server – The page cannot be displayed".
Thanks.
cannot be found
Sorry, that server is offline… I’m working on it…: Non-existent domain, please advise. does still not resolve!!
Link Error – "Cannot find server – The page cannot be displayed".
Is there somewhere else to get your code?
Finally got it on a working server and updated the link! | https://blogs.msdn.microsoft.com/glengordon/2006/05/03/sample-code-from-webcast-exploring-the-system-net-namespace/?replytocom=2904 | CC-MAIN-2018-43 | refinedweb | 208 | 72.66 |
Q6.¶
Perform the following file operations using Python¶
a) Traverse a path and display all the files and subdirectories in each level till the deepest level for a given path. Also, display the total number of files and subdirectories.¶
b) Read a file contents and copy only the contents at odd lines into a new file.¶
Approach to displaying the files and subdirectories in each level
Get path information from the user.
Check whether the path exists or not.
If the path exists, then walk through all the file and subdirectories (including root directory) using os.walk() function.
Normalize the paths and count the number of subdirectories and files in each of those subdirectories.
Display the names of subdirectories and files in subdirectories (including root directory).
Below functions are used in the program
os.path.exists(path)
Return
True if path refers to an existing path. Returns
False for broken symbolic links. On some platforms, this function may return
False if permission is not granted to access the requested file, even if the path physically exists.
os.path.normpath(path)
Normalize a pathname by collapsing redundant separators and up-level references so that A//B, A/B/, A/./B and A/foo/../B all become A/B. On Windows, it converts forward slashes to backward slashes.
os.walk(topdir[, topdown=True])
Generate the file names in a directory tree by walking the tree either top-down (default) or bottom-up (topdown = False). Specifying argument
topdown is optional. For each directory in the tree rooted at directory
topdir (including topdir itself), it yields a 3-tuple ().
Approach to copy the contents from one file to another file at odd lines
Open a file source_file as in_file in read mode.
Open a file destination_file as out_file in write mode.
To read all the lines of a file use
f.readlines().
Obtain the total number of lines in the file.
Check the line number which is not divisible by 2 and then write the contents to out_file else pass.
import os def display_files(): # Set the directory to start from print("Enter path to traverse: ") root_dir = input() if os.path.exists(root_dir): dir_count = 0 file_count = 0 for dir_name, sub_dir_list, file_list in os.walk(root_dir): print(f"Found directory: {dir_name} \n") # check to ignore starting directory while taking directory count # normpath returns the normalized path eliminating double slashes etc. if os.path.normpath(root_dir) != os.path.normpath(dir_name): dir_count += 1 for each_file_name in file_list: file_count += 1 print(f"File name(s) {each_file_name} \n") print(f"Number of subdirectories are {dir_count} \n") print(f"Number of files are {file_count} \n") display_menu() else: print("Entered path doesn't exist") display_menu() def copy_contents_to_file(): source_file = input("Enter the Source file name: ") print("\n") destination_file = input("Enter the Destination file name: ") print("\n") try: with open(source_file) as in_file, open(destination_file, "w") as out_file: list_of_lines = in_file.readlines() for i in range(0, len(list_of_lines)): if i % 2 != 0: out_file.write(list_of_lines[i]) except IOError: print("Error in file names") print("File contents at odd lines copied to destination file \n") display_menu() def display_menu(): print("Enter your choice") print("Press 1 --> Display files and directories for a given path and their count") print("Press 2 --> Copy the contents present at odd lines to another file") print("Press 3 --> Exit the program") choice = int(input()) if choice == 1: display_files() elif choice == 2: copy_contents_to_file() else: exit() if __name__ == "__main__": display_menu()
Enter your choice Press 1 --> Display files and directories for a given path and their count Press 2 --> Copy the contents present at odd lines to another file Press 3 --> Exit the program 1 Enter path to traverse: C:\Test_Data Found directory: C:\Test_Data File name(s) 1.txt Found directory: C:\Test_Data\Root_Dir File name(s) 2.txt Found directory: C:\Test_Data\Root_Dir\Sub_Dir_1 File name(s) 3.txt Found directory: C:\Test_Data\Root_Dir\Sub_Dir_1\Sub_Dir_2 File name(s) 4.txt File name(s) 5.txt Number of subdirectories are 3 Number of files are 5 Enter your choice Press 1 --> Display files and directories for a given path and their count Press 2 --> Copy the contents present at odd lines to another file Press 3 --> Exit the program 2 Enter the Source file name: source_file.txt Enter the Destination file name: destination_file.txt File contents at odd lines copied to destination file Enter your choice Press 1 --> Display files and directories for a given path and their count Press 2 --> Copy the contents present at odd lines to another file Press 3 --> Exit the program 3 | https://nbviewer.jupyter.org/github/gowrishankarnath/Dr.AIT_Python_Lab_2019/blob/master/Program_6.ipynb | CC-MAIN-2020-40 | refinedweb | 755 | 53.1 |
facebook ◦ twitter ◦
View blog authority
View blog top tags
Thought this might be useful. On a new project where you're using the Castle Windsor container for Dependency Injection, this is a handy spec to have:
[TestFixture]
public class When_starting_the_application : Spec
{
[Test]
public void verify_Castle_Windsor_mappings_are_correct()
{
IWindsorContainer container = new WindsorContainer("castle.xml");
foreach (IHandler handler in container.Kernel.GetAssignableHandlers(typeof(object)))
{
container.Resolve(handler.ComponentModel.Service);
}
}
}
It doesn't guarantee that someone missed adding something to your configuration, but this way anytime someone adds a type to the configuration this will verify the mapping is right. Very often I move things around in the domain into different namespaces and forget to update Castle. I supposed you *could* use reflection on your assembly as another test and verify the mapping is there, but not every type in the system is going to be dependency injected so that's probably not feasible.
Thanks to the Castle guys for helping me get the simplest syntax going for this.
I know this has nothing to do with the subject of your post, but I'm curious, is "Spec" your own base class?
@Greg: You can find an example of the Spec class here on Dave Laribee's blog: codebetter.com/.../approaching-bdd.aspx
Thanks Bil.
Thanks for sharing.
Thanks for sharing this.
BTW, for my container I had to add two lines before calling Resolve():
if (handler.ComponentModel.Service.IsGenericType)
continue;
This will skip components with generic arguments, like IRepository<>. Since we don't know the argument at this moment, Resolve() will throw.
Here's a dumb question. We keep Windsor Config with the app.config file. I added this spec (awesome BTW, wish I'd thought of that!) and pointed it to the application under test's config file, but it errors out because of the other sections of the config file.
The easy answer is to separate the Windsor configuration out from the app.config and divorce the two. I am not quite ready to do that if there is an easy way to get Windsor to only use the section of the config file that I tell it to. This is easy with your local config, but it doesn't appear to be as easy when pointing off to an XML file somewhere else.
I tried the XMLInterpreter (among a couple of other things), but perhaps I am missing something somewhere.
Any ideas? | http://weblogs.asp.net/bsimser/archive/2008/06/04/the-first-spec-you-should-write-when-using-castle.aspx | crawl-002 | refinedweb | 401 | 55.74 |
Opened 10 years ago
Closed 10 years ago
Last modified 10 years ago
#2590 closed defect (invalid)
missing __init__.py in django/core
Description
On Windows, when calling django-admin.py I am getting the error
django-admin.py startproject mysite
Traceback (most recent call last):
File "C:\Project\IXE\Django-0.95\django\bin\django-admin.py", line 2, in ?
from django.core import management
ImportError: No module named core
Trying to figure it out, I noticed that the file init.py is missing on the directories core and others, and therefore python24 skips those directories when looking for modules.
They are present as empty files in the tar.gz, but apparently winzip skips empty files.
Change History (1)
comment:1 Changed 10 years ago by verbosus
- Cc antonio@… added
- Component changed from Admin interface to Core framework
- Resolution set to invalid
- Status changed from new to closed
Giuseppe, that is a bug with winzip (skipping empty files by default is not OK, try to see if there’s an option to turn that off).
Empty __init__.py files for modules are a Python convention which we have no way of changing. I’m marking this as invalid. | https://code.djangoproject.com/ticket/2590 | CC-MAIN-2016-22 | refinedweb | 199 | 65.01 |
Exploring Cool Features of Devexpress' ASPxGridView
Introduction
The holiday season is a time for rich food, the warmth of family and friends, and festive fun and good times. And, if you are like me, you were able to squeeze in some computer time when the kiddies were playing their XBoxes or watching their new DVDs in between sledding and ice skating.
Of course, if you are going to play with technology during the holidays, you might as well play with fun technology. When I wasn't fighting the Lich King, I was working with Devexpress controls for Windows and the web. The Windows controls are part of an application I am helping a friend with, and the web controls because Devexpress put out a new release 8.3. Naturally, I wanted to take Devexpress' new controls for a spin.
The article is a little on the long side, so if you want to go refill your eggnog from whatever's left over, I'll wait. Already back, I see. Good.
Binding Persistent Classes to an ASPxGridView
An entity class is a class that represents a database table, generally. It's not worth being dogmatic about where you get your entity classes. Sometimes, you might roll your own, you might use something like Microsoft's LINQ to SQL, or you can use Devexpress' XPO Persistent Classes. The Express Persistent Classes (XPO) are designed to work with Devexpress' XpoDataSource and controls; if you are creating an application that leverages Devexpress' professional looking controls, the XPO Persistent Classes may be the way to go.
Start by creating a new web site project with Visual Studio 2008. To that project, you need to add an XPO class. XPO classes are code-generated in Visual Studio by selecting Project|Add New Item|Persistent Classes 8.3. The wizard will display a dialog labeled "Generating Persistent Classes for an Existing Database" (see Figure 1). The first step lets you pick the provider and the database connection. After you pick the database, click Next and select the table and the columns (see Figure 2). After you click Finish, the Persistent Classes will code-generate entity classes based on your selections. For the demonstration, select the Northwind Traders database and the Products table.
Figure 1: The XPO Persistent Classes item starts a wizard that will code generate entity classes based on your selections.
Figure 2: For the demo, pick the Northwind Traders Products table and all of the columns.
After you click Finish, the Devexpress' XPO technology will generate the classes and properties that represent the tables and columns you selected. Each source file will have a namespace that reflects the database and a class that reflects the tables.
Listing! | http://www.codeguru.com/vb/gen/vb_database/adonet/article.php/c15809/Exploring-Cool-Features-of-Devexpress-ASPxGridView.htm | CC-MAIN-2017-09 | refinedweb | 452 | 61.26 |
The ESP32 comes not only with Wi-Fi but also with Bluetooth and Bluetooth Low Energy (BLE). This post is a quick introduction to BLE with the ESP32. First, we’ll explore what’s BLE and what it can be used for, and then we’ll take a look at some examples with the ESP32 using Arduino IDE. For a simple introduction we’ll create an ESP32 BLE server, and an ESP32 BLE scanner to find that server.
Introducing Bluetooth Low Energy
For a quick introduction to BLE, you can watch the video below, or you can scroll down for a written explanation.
Recommended reading: learn how to use ESP32 Bluetooth Classic with Arduino IDE to exchange data between an ESP32 and an Android smartphone.
What is Bluetooth Low Energy?
Bluetooth Low Energy, BLE for short, is a power-conserving variant of Bluetooth. BLE’s primary application is short distance transmission of small amounts of data (low bandwidth). Unlike Bluetooth that is always on, BLE remains in sleep mode constantly except for when a connection is initiated.
This makes it consume very low power. BLE consumes approximately 100x less power than Bluetooth (depending on the use case).
Additionally, BLE supports not only point-to-point communication, but also broadcast mode, and mesh network.
Here's how BLE and Bluetooth Classic compare in more detail:
- Radio: both operate in the 2.4 GHz ISM band, but BLE uses 40 channels with 2 MHz spacing, while Bluetooth Classic uses 79 channels with 1 MHz spacing;
- Data rate: BLE reaches about 1 Mbps (2 Mbps with the Bluetooth 5 LE 2M PHY), while Bluetooth Classic goes up to about 3 Mbps;
- Power: BLE sends short bursts of data and sleeps in between, so it consumes a fraction of the power;
- Topology: BLE supports point-to-point, broadcast, and mesh; Bluetooth Classic is point-to-point only;
- Audio: continuous streaming, such as audio, is the domain of Bluetooth Classic, not BLE.
Due to its properties, BLE is suitable for applications that need to periodically exchange small amounts of data while running on a coin cell. For example, BLE is of great use in the healthcare, fitness, tracking, beacon, security, and home automation industries.
BLE Server and Client
With Bluetooth Low Energy, there are two types of devices: the server and the client. The ESP32 can act either as a client or as a server.
The server advertises its existence, so it can be found by other devices, and contains the data that the client can read. The client scans the nearby devices, and when it finds the server it is looking for, it establishes a connection and listens for incoming data. This is called point-to-point communication.
As mentioned previously, BLE also supports broadcast mode and mesh network:
- Broadcast mode: the server transmits data to many clients that are connected;
- Mesh network: all the devices are connected; this is a many-to-many connection.
Even though the broadcast and mesh network setups are possible to implement, they were developed very recently, so there aren’t many examples implemented for the ESP32 at this moment.
GATT
GATT stands for Generic Attributes and it defines a hierarchical data structure that is exposed to connected BLE devices. This means that GATT defines the way that two BLE devices send and receive standard messages. Understanding this hierarchy is important, because it will make it easier to understand how to use BLE and write your applications.
BLE Service
The top level of the hierarchy is a profile, which is composed of one or more services. Usually, a BLE device contains more than one service.
Every service contains at least one characteristic, or can also reference other services. A service is simply a collection of information, like sensor readings, for example.
There are predefined services for several types of data defined by the SIG (Bluetooth Special Interest Group) like: Battery Level, Blood Pressure, Heart Rate, Weight Scale, etc. You can check here other defined services.
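For example, with the ESP32 BLE library used later in this post, a sketch can expose the SIG-defined Battery Service by passing its shortened 16-bit UUID (0x180F) when creating a service. This is just a minimal illustration, assuming a pServer object like the one created in the server example below:

// Minimal sketch: create the SIG-defined Battery Service (0x180F)
// on an already-created BLE server (pServer comes from the example below).
BLEService *pBatteryService = pServer->createService(BLEUUID((uint16_t)0x180F));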
BLE Characteristic
The characteristic is always owned by a service, and it is where the actual data is contained in the hierarchy (value). The characteristic always has two attributes: characteristic declaration (that provides metadata about the data) and the characteristic value.
Additionally, the characteristic value can be followed by descriptors, which further expand on the metadata contained in the characteristic declaration.
The properties describe how the characteristic value can be interacted with. Basically, they are the operations and procedures that can be used with the characteristic (a short sketch follows this list):
- Broadcast
- Read
- Write without response
- Write
- Notify
- Indicate
- Authenticated Signed Writes
- Extended Properties
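With the ESP32 BLE library used in the examples below, these properties are passed as flags when the characteristic is created. A quick sketch (the UUID here is just a placeholder):

// A characteristic that can be read, written, and that can notify clients:
BLECharacteristic *pCharacteristic = pService->createCharacteristic(
                     "beb5483e-36e1-4688-b7f5-ea07361b26a8", // placeholder UUID
                     BLECharacteristic::PROPERTY_READ  |
                     BLECharacteristic::PROPERTY_WRITE |
                     BLECharacteristic::PROPERTY_NOTIFY
                   );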
UUID
Each service, characteristic, and descriptor has a UUID (Universally Unique Identifier). A UUID is a unique 128-bit (16-byte) number. For example:
55072829-bc9e-4c53-938a-74a6d4c78776
There are shortened UUIDs for all types, services, and profiles specified in the SIG (Bluetooth Special Interest Group).
But if your application needs its own UUID, you can generate it using this UUID generator website.
In summary, the UUID is used for uniquely identifying information. For example, it can identify a particular service provided by a Bluetooth device.
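In an Arduino sketch, a UUID from the generator is usually kept in a #define or wrapped in a BLEUUID object. A short sketch using the example UUID from this section:

// Full 128-bit UUID, e.g. one produced by the generator website:
static BLEUUID myServiceUUID("55072829-bc9e-4c53-938a-74a6d4c78776");
// SIG-shortened 16-bit UUIDs can be expressed like this (0x180D = Heart Rate service):
static BLEUUID heartRateUUID((uint16_t)0x180D);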
BLE with ESP32

Follow one of the next tutorials to prepare your Arduino IDE to work with the ESP32, if you haven't already:
- Windows instructions – ESP32 Board in Arduino IDE
- Mac and Linux instructions – ESP32 Board in Arduino IDE
In your Arduino IDE, you can go to File > Examples > ESP32 BLE Arduino and explore the examples that come with the BLE library.
Note: to see the ESP32 examples, you must have the ESP32 board selected on Tools > Board.
For a brief introduction to the ESP32 with BLE on the Arduino IDE, we’ll create an ESP32 BLE server, and then an ESP32 BLE scanner to find that server. We’ll use and explain the examples that come with the BLE library.
To follow this example, you need two ESP32 development boards. We’ll be using the ESP32 DOIT DEVKIT V1 Board.
ESP32 BLE Server
To create an ESP32 BLE Server, open your Arduino IDE and go to File > Examples > ESP32 BLE Arduino and select the BLE_server example. The following code should load (note: we use "MyESP32" as the device name throughout this post):

/*
    Based on Neil Kolban example for IDF:
    Ported to Arduino ESP32 by Evandro Copercini
    updates by chegewara
*/

#include <BLEDevice.h>
#include <BLEUtils.h>
#include <BLEServer.h>

// See the following for generating UUIDs:
// https://www.uuidgenerator.net/

#define SERVICE_UUID        "4fafc201-1fb5-459e-8fcc-c5c9c331914b"
#define CHARACTERISTIC_UUID "beb5483e-36e1-4688-b7f5-ea07361b26a8"

void setup() {
  Serial.begin(115200);
  Serial.println("Starting BLE work!");

  BLEDevice::init("MyESP32");
  BLEServer *pServer = BLEDevice::createServer();
  BLEService *pService = pServer->createService(SERVICE_UUID);
  BLECharacteristic *pCharacteristic = pService->createCharacteristic(
                                         CHARACTERISTIC_UUID,
                                         BLECharacteristic::PROPERTY_READ |
                                         BLECharacteristic::PROPERTY_WRITE
                                       );

  pCharacteristic->setValue("Hello World says Neil");
  pService->start();
  BLEAdvertising *pAdvertising = pServer->getAdvertising();
  pAdvertising->start();
  Serial.println("Characteristic defined! Now you can read it in your phone!");
}

void loop() {
  // put your main code here, to run repeatedly:
  delay(2000);
}
For creating a BLE server, the code should follow the next steps:
- Create a BLE Server. In this case, the ESP32 acts as a BLE server.
- Create a BLE Service.
- Create a BLE Characteristic on the Service.
- Create a BLE Descriptor on the Characteristic (see the descriptor sketch right after this list).
- Start the Service.
- Start advertising, so it can be found by other devices.
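Note that the BLE_server example below doesn't actually add a descriptor. If you need one — for instance, the Client Characteristic Configuration descriptor that notifications require — the library ships it ready-made as BLE2902. A hedged sketch:

#include <BLE2902.h>

// Attach the standard 0x2902 descriptor so clients can enable notifications
// (assumes the characteristic was created with PROPERTY_NOTIFY):
pCharacteristic->addDescriptor(new BLE2902());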
How the code works
Let’s take a quick look at how the BLE server example code works.
It starts by importing the necessary libraries for the BLE capabilities.
#include <BLEDevice.h>
#include <BLEUtils.h>
#include <BLEServer.h>
Then, you need to define a UUID for the Service and Characteristic.

#define SERVICE_UUID        "4fafc201-1fb5-459e-8fcc-c5c9c331914b"
#define CHARACTERISTIC_UUID "beb5483e-36e1-4688-b7f5-ea07361b26a8"
You can leave the default UUIDs, or you can go to uuidgenerator.net to create random UUIDs for your services and characteristics.
In the setup(), it starts the serial communication at a baud rate of 115200.
Serial.begin(115200);
Then, you create a BLE device called “MyESP32”. You can change this name to whatever you like.
// Create the BLE Device
BLEDevice::init("MyESP32");
In the following line, you set the BLE device as a server.
BLEServer *pServer = BLEDevice::createServer();
After that, you create a service for the BLE server with the UUID defined earlier.
BLEService *pService = pServer->createService(SERVICE_UUID);
Then, you set the characteristic for that service. As you can see, you also use the UUID defined earlier, and you need to pass as arguments the characteristic’s properties. In this case, it’s: READ and WRITE.
BLECharacteristic *pCharacteristic = pService->createCharacteristic(
                     CHARACTERISTIC_UUID,
                     BLECharacteristic::PROPERTY_READ |
                     BLECharacteristic::PROPERTY_WRITE
                   );
After creating the characteristic, you can set its value with the setValue() method.
pCharacteristic->setValue("Hello World says Neil");
In this case we're setting the value to the text "Hello World says Neil". You can change this text to whatever you like. In future projects, this text can be a sensor reading, or the state of a lamp, for example.
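As a hedged sketch of that idea — readTemperature() below is a hypothetical helper, not part of the example:

// Publish a (hypothetical) sensor reading through the characteristic:
float temperature = readTemperature(); // assumed helper returning degrees Celsius
char buffer[8];
dtostrf(temperature, 1, 2, buffer);    // convert the float to a C string
pCharacteristic->setValue(buffer);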
Finally, you can start the service, and the advertising, so other BLE devices can scan and find this BLE device.
BLEAdvertising *pAdvertising = pServer->getAdvertising();
pAdvertising->start();
This is just a simple example on how to create a BLE server. In this code nothing is done in the loop(), but you can add what happens when a new client connects (check the BLE_notify example for some guidance).
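For guidance, here's a minimal sketch of that kind of connection handling, loosely based on the BLE_notify example (the Serial messages are ours, not from the original):

// React to client connections and disconnections on the server:
class MyServerCallbacks : public BLEServerCallbacks {
  void onConnect(BLEServer *pServer) {
    Serial.println("Client connected");
  }
  void onDisconnect(BLEServer *pServer) {
    Serial.println("Client disconnected");
  }
};

// In setup(), after creating the server:
pServer->setCallbacks(new MyServerCallbacks());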
ESP32 BLE Scanner
Creating an ESP32 BLE scanner is simple. Grab another ESP32 (while the other is running the BLE server sketch). In your Arduino IDE, go to File > Examples > ESP32 BLE Arduino and select the BLE_scan example. The following code should load.
/*
    Based on Neil Kolban example for IDF:
    Ported to Arduino ESP32 by Evandro Copercini
*/

#include <BLEDevice.h>
#include <BLEUtils.h>
#include <BLEScan.h>
#include <BLEAdvertisedDevice.h>

int scanTime = 5; // In seconds
BLEScan* pBLEScan;

class MyAdvertisedDeviceCallbacks: public BLEAdvertisedDeviceCallbacks {
  void onResult(BLEAdvertisedDevice advertisedDevice) {
    Serial.printf("Advertised Device: %s \n", advertisedDevice.toString().c_str());
  }
};

void setup() {
  Serial.begin(115200);
  Serial.println("Scanning...");

  BLEDevice::init("");
  pBLEScan = BLEDevice::getScan(); // create new scan
  pBLEScan->setAdvertisedDeviceCallbacks(new MyAdvertisedDeviceCallbacks());
  pBLEScan->setActiveScan(true); // active scan uses more power, but gets results faster
  pBLEScan->setInterval(100);
  pBLEScan->setWindow(99); // less or equal setInterval value
}

void loop() {
  // put your main code here, to run repeatedly:
  BLEScanResults foundDevices = pBLEScan->start(scanTime, false);
  Serial.print("Devices found: ");
  Serial.println(foundDevices.getCount());
  Serial.println("Scan done!");
  pBLEScan->clearResults(); // delete results from BLEScan buffer to release memory
  delay(2000);
}
This code initializes the ESP32 as a BLE device and scans for nearby devices. Upload this code to your ESP32. You might want to temporarily disconnect the other ESP32 from your computer, so you’re sure that you’re uploading the code to the right ESP32 board.
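If you only care about one particular server, the onResult() callback can filter by the advertised name. A sketch, assuming the server was initialized as "MyESP32":

class FilteredCallbacks : public BLEAdvertisedDeviceCallbacks {
  void onResult(BLEAdvertisedDevice advertisedDevice) {
    // Only report the device we are actually looking for:
    if (advertisedDevice.haveName() && advertisedDevice.getName() == "MyESP32") {
      Serial.println("Found our ESP32 BLE server!");
    }
  }
};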
Once the code is uploaded, you should have the two ESP32 boards powered on:
- One ESP32 with the “BLE_server” sketch;
- The other ESP32 with the "BLE_scan" sketch.
Go to the Serial Monitor with the ESP32 running the “BLE_scan” example, press the ESP32 (with the “BLE_scan” sketch) ENABLE button to restart and wait a few seconds while it scans.
The scanner found two devices: one is the ESP32 (it has the name "MyESP32"), and the other is our MiBand2.
Testing the ESP32 BLE Server with Your Smartphone
Most modern smartphones should have BLE capabilities. I’m currently using a OnePlus 5, but most smartphones should also work.
You can scan your ESP32 BLE server with your smartphone and see its services and characteristics. For that, we'll be using a free app called nRF Connect for Mobile from Nordic; it works on Android (Google Play Store) and iOS (App Store).
Go to Google Play Store or App Store and search for “nRF Connect for Mobile”. Install the app and open it.
Don't forget to go to the Bluetooth settings and enable the Bluetooth adapter on your smartphone. You may also want to make it visible to other devices to test other sketches later on.
Once everything is ready in your smartphone and the ESP32 is running the BLE server sketch, in the app, tap the scan button to scan for nearby devices. You should find an ESP32 with the name “MyESP32”.
Click the “Connect” button.
As you can see in the figure below, the ESP32 has a service with the UUID that you’ve defined earlier. If you tap the service, it expands the menu and shows the Characteristic with the UUID that you’ve also defined.
The characteristic has the READ and WRITE properties, and the value is the one you’ve previously defined in the BLE server sketch. So, everything is working fine.
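Because the characteristic is also writable, you can type a value into nRF Connect and catch it on the ESP32. A minimal sketch of such a write callback (this is our addition, not part of the original example):

// Print whatever a client writes to the characteristic:
class MyCharCallbacks : public BLECharacteristicCallbacks {
  void onWrite(BLECharacteristic *pChar) {
    std::string value = pChar->getValue(); // getValue() returns std::string in this library
    Serial.print("Value written from phone: ");
    Serial.println(value.c_str());
  }
};

// In setup(): pCharacteristic->setCallbacks(new MyCharCallbacks());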
Wrapping Up
In this tutorial, we've covered the basic principles of Bluetooth Low Energy and shown you some examples with the ESP32. We've explored the BLE server sketch and the BLE scan sketch. These are simple examples to get you started with BLE.
The idea is using BLE to send or receive sensor readings from other devices. We'll be posting more tutorials and projects about BLE with the ESP32, so stay tuned!

You may also like:
- ESP32 Bluetooth Classic with Arduino IDE – Getting Started
- ESP32 Data Logging Temperature to MicroSD Card
- ESP32 with DC Motor and L298N Motor Driver – Control Speed and Direction
Thanks for reading.
Updated May 16, 2019
53 thoughts on “Getting Started with ESP32 Bluetooth Low Energy (BLE) on Arduino IDE”
It would be good to invest more in the BLE 4.2 theory!
Wow. Awesome tutorial. Thanks, Rui!
Great tutorial. As you say, the ESP32 supports Classic Bluetooth too, but there are not many tutorials about it. There are some profiles in BT that do not have a BLE equivalent, like SPP or A2DP. I'm developing a Classic BT scanner library for the ESP32 and Arduino IDE. I've got some good results, but a theoretical tutorial about it would be super.
You're right! Unfortunately, I also couldn't find any information on that subject. Let me know if you end up developing a Classic BT app.
Great site!
I have installed the ESP32 board and many examples in my Arduino IDE, but I can't see in my IDE the example that you use… I see only SimpleBLEdevice and SerialToSerialBT.
Thanks for this tutorial. It’s a very handy overview of BLE.
Thanks Rui for the tutorial. It is very informative.
Thank you!
Regards,
Sara 🙂
Is it possible to make an application with two ESP32s (one as a server and another as a client)? I didn't find anything about it.
Thanks for sharing! Túlio
Hi Túlio.
Yes, it is possible. However we don’t have any tutorial about that subject here on the blog.
We have a section about that in our course about ESP32:
Regards,
Sara 🙂
Túlio, there are two examples, available from File > Examples > ESP32 BLE Arduino — BLE_client and BLE_server. I also saw something on YouTube, could've been Rui himself :), maybe a later post here.
Thanks, and congratulations for this useful tutorial. I also noticed that there are not many news on this topic on the net.
Hi! Do you know if we can wake the ESP32 while it's running BLE (Low Energy, not the classic one)? As far as I managed to find out, the ESP32 switches off WiFi and BLE in sleep modes, so I can't figure out how we could have an ESP32 asleep and wake it up when BLE tries to pair/send something… Any tip will be of great help! Thanks in advance,
Thanks for this tutorial. It helped me a lot making a connection to my FitBit wearable. Now I also tried to make a connection to a toy with Bluetooth and the software crashes at:
pRemoteCharacteristic = pRemoteService->getCharacteristic(charUUID)
In the serial monitor I find:
Stack smashing protect failure!
abort() was called at PC 0x4014f34b on core 1
Do you have an idea how to solve this, or where I need to look?
I had the same problem; it's a flaw in our code. In my case I had a "String" subroutine and within it I had "int" type variables. I solved it by removing the "int" variable type.
thanks a lot for sharing.
brief and short but good and applicable, thank you very much
Regards
Ali
Thanks 😀
Hello, Rui Santos.
I have a Bluetooth project and I would like to know if you could give a more in-depth lesson on the Bluetooth of the ESP32.
I’ll give you a little context of what I want to accomplish so you can see if you can help me.
What I'm doing is a small phone (with a SIM900 expansion card for the Arduino Uno) that can send the audio data (through Bluetooth) to a Bluetooth headset I have (it's a Samsung Icon X 2018). The headphones also have a built-in microphone, so I would also like to send audio (via Bluetooth) from them to the ESP32 and then to the SIM900.
I’d like to know what you think, if you can talk more about it or any code you already have done I’d appreciate it very much.
I would also like to support you with a donation :).
I’ll say goodbye and I’ll keep an eye on your answer.
Hi Joel.
Unfortunately, we don’t have any examples with BLE and audio.
Regards,
Sara
How can I update the BLE name at runtime???
Is it possible, in an ESP32 beacon, to split up the BLE manufacturer data?
My question is: can the BLE manufacturer data (beacon identification + beacon ID + length + device UUID + major + minor + RSSI) be printed as separate fields?
Banggood has TTGO
banggood.com/search/ttgo.html?from=nav
I have managed to create an ESP32 BLE server which sends data within the loop() to an ESP32 BLE client. Great! But how do I send a reply back from the client to the server?
With pCharacteristic->setValue() I can't set a value in loop(); I'm getting the error "Guru Meditation Error: Core 1 panic'ed (StoreProhibited). Exception was unhandled". Please help me with the code.
My question is about licensing
I read that you have to get a license for Bluetooth in order to use it, like FCC and others.
Is the ESP32 licensed for using Bluetooth, or do I need an additional licence?
Hi…
I must thank you for the many lessons I've used for myself and my friends…
But I didn't understand one little thing (maybe because I am Italian)…
Why does it need two UUIDs?
Is it that I can use one UUID (the service) to create a LAN (so to speak) and the other UUID (the characteristic) for one host address???
Please help me…
Thank
HI
why I cannot use esp32 as the client to connect the BLE headset, BLE watch, BLE keyboard etc, I have use nRF connect APP to get the UUID, MAC, and put that on the code, but the client does not work.
hello can you please make a tutorial to connect multiple esp32 on ble mesh network.
please share any helpful link if possible.
Great thanks
Is it possible to send serial data (like SerialBluetooth.h) through BLE?
Hi.
For that, you need to use Bluetooth Classic.
See this tutorial:
Regards,
Sara
Thank you Sarah. Do you have a tutorial on OTA over Bluetooth? Also setting WiFi credentials over initial BLE (wifimanager)?
Hello,
The code is very good and I just wanted to know that if this code will work with NodeMCU ESP32?
Regards
Hi.
This code should work with any ESP32 board.
Regards,
Sara
thanks for this amazing tutorial. am currently working on using the ESP32 board as a regular IBeacons but somehow could not figure out how to transmit the UUID, Major, Minor and RSSI the way normal iBeacons do. can anyone help me with that
Hi Sara,
This is really nice project.
Can you create some project on ESP32CAM for video streaming + control over WebBLE if possible ?
Thanks,
KP
the simple BLE server seems to have some issue now, it worked when I tried it a year or so ago, but not now.
What happens:
you can scan and connect and see the info from the device. if you disconnect then go back to scanning again, the ESp32 device has vanished. if you reboot it it comes back. scan and read again, close and go back to scanning and it has vanished.
I tried different ESp32 boards and still the same result.
I found an example of a esp32 BLE serial server that I used about a year ago to send strings to/from a ESP32-BLE which used to work. Now, the same error! connecting once, talk to the device, then disconnect. Device has vanished from the network and only comes back with a reboot.! tried scanning from Android phone iPad, all the same result
Maybe the ESP32 Arduino BLE lib. has issues??
Hi.
Did you try it with the sketch provided in the Examples menu?
Regards,
Sara
thanks – I think the issue is it needs to restart advertising again. ESP32 example BLE_server_multiconnect does this, so I can adapt that.
I’ll test the example and see if that also happens.
Regards,
Sara
Hi again.
I just tested the BLE Server sketch and it is working as expected.
I connected to it using my smartphone, then disconnected. After that, I was able to reconnect again, and so on…
Regards,
Sara
How can i update the name of the bluetooth in runtime??
Very interesting project and well described.
What could be the average power consumption sending one message every hour?
Thanks
Renzo
Maybe this will be usefull for someone, who use esp32 with ble:
Use NimBLEDevice.h instead BLEDevice.h for ESP32 !!!.
The BLEDevice.h eating too much memory, and if you will use wifi & BLE – the free memory will be dramatically low.
Found NimBLEDevice.h library,
In my sketches Its use up to 44% less memory, compared to BLEDevice.h !!!
I’d tried load this firmware on ESP32-CAM and shows this message:
“Arduino: 1.8.13 (Windows 10), Placa:”AI Thinker ESP32-CAM, 240MHz (WiFi/BT), QIO, 80MHz”
O sketch usa 806526 bytes (25%) de espaço de armazenamento para programas. O máximo são 3145728 bytes.
Variáveis globais usam 38936 bytes (11%) de memória dinâmica, deixando 288744 bytes para variáveis locais. O máximo são 327680 bytes.
esptool.py v3.0-dev
Serial port COM3
Connecting…….._____….._____….._____….._____….._____….._____….._____
A fatal error occurred: Failed to connect to ESP32: Timed out waiting for packet header
A fatal error occurred: Failed to connect to ESP32: Timed out waiting for packet header”
After that I didn’t get load any firmware, even that one I’ve already loaded with success.
Did anyone saw this failure before ? Can help me how to solve it ?
Hi.
Are you using an ESP32-CAM?
Take a look at the ESP32-CAM troubleshooting guide:
Check bullet number 1.
Regards,
Sara
I tried to make a BLE server protected so adding encryption and security. They are just 5 general SLOCs more and, eventually, one SLOC for each characteristics but it does not work. Testing with my smartphones (Andorid 8 and 10) usinf nrfConnect, they require the static PIN to pair/bond with the ESP32, they connect but they disconnect soon also, just matter a couple of seconds. Do you have any report about that?
This is my test code:
#include <Arduino.h>
SSID_CHAR_UUID “1afb81ce-36e1-4688-b7f5-ea07361b26a8”
#define PSWD_CHAR_UUID “1be107e0-2d9e-4091-a0d3-6407e01b2a30”
#define WIFI_CHAR_UUID “1afb81ce-a705-4ab5-aaab-294269ce9a52”
#define IP_CHAR_UUID “beb5483e-36e1-4688-b7f5-ea07361b26a8”
const char* ssid = “ESP32-TT22”;
const char* password = “pippo21931”;
char g_strWiFiEnabled[] = “0”;
const char* ip = “192.168.4.1”;
void setupBLE()
{
BLEDevice::init(“Long name works now”);
BLEServer *pServer = BLEDevice::createServer();
BLEService *pService = pServer->createService(SERVICE_UUID);
BLECharacteristic *pCharSSID = pService->createCharacteristic(
SSID_CHAR_UUID,
BLECharacteristic::PROPERTY_READ |
BLECharacteristic::PROPERTY_WRITE
);
BLECharacteristic *pCharPSWD = pService->createCharacteristic(
PSWD_CHAR_UUID,
BLECharacteristic::PROPERTY_READ |
BLECharacteristic::PROPERTY_WRITE
);
BLECharacteristic *pCharWIFI = pService->createCharacteristic(
WIFI_CHAR_UUID,
BLECharacteristic::PROPERTY_READ |
BLECharacteristic::PROPERTY_WRITE
);
BLECharacteristic *pCharIP = pService->createCharacteristic(
IP_CHAR_UUID,
BLECharacteristic::PROPERTY_READ |
BLECharacteristic::PROPERTY_WRITE
);
pCharSSID->setValue(ssid);
pCharPSWD->setValue(password);
pCharWIFI->setValue(g_strWiFiEnabled);
pCharIP->setValue(ip);
pService->start();
#define COND 1
#if COND == 0
// [MM] enable encrypting PIN protected pairing
BLEDevice::setEncryptionLevel(ESP_BLE_SEC_ENCRYPT);
#endif
#if COND == 0
// [MM] enable encrypting PIN protected pairing
BLEDevice::setEncryptionLevel(ESP_BLE_SEC_ENCRYPT);
#endif
// [MM] setup the encrypted PIN for protected pairing
BLESecurity *pSecurity = new BLESecurity();
pSecurity->setStaticPIN(2193);
// setup()
{
Serial.begin(115200);
Serial.println(“Starting BLE work!”);
setupBLE();
}
void loop() {
// put your main code here, to run repeatedly:
delay(2000);
}
Nice explanation on BLE. I currently use Home Assistant and ESPHOME to “program” esp32 devices. No example or suggestions how to get BLE server running in ESPHOME. I do find hardly anything on that topic
Hi.
Unfortunately, we don’t have any tutorials on that subject.
Regards,
Sara
Great tutorial. I am working on a project where a BLE TV remote control working as the BLE server over HID and the ESP32 is the client and acts as a bridge to WIFI. The battery life of the remote control is greatly reduced which let me believe the ESP32 does not allow the Remote Control to go into sleep mode. If I disconnect from the server to preserve battery, the first data package is lost. Is the best practice to disconect or is there another way to let the BLE server go into sleep mode
Hi.
Very useful post but I think it misses the content that all miss.
Bluetooth ble is used for low energy and that part is always forgotten.
I’ve search all over the web and still didn’t find any explanation of how it work.
As an example, do you save more power if you read from another device or if it’s written on the Arduino.
Another is how the intervals work.
Can it be that 2 devices get out of sync and never exchange information.
I’m trying to send strings from my android to the Arduino to show on a display. The Arduino is battery powered and I’m doing this once per second.
Would be nice if you can do an example of this. Because everywhere is just the same blink example that anyone can load from the ide.
Even to find how to send a long string was a nightmare.
Thanks! | https://randomnerdtutorials.com/esp32-bluetooth-low-energy-ble-arduino-ide/?replytocom=713977 | CC-MAIN-2022-27 | refinedweb | 4,168 | 55.64 |
Automating the world one-liner at a time..
IBM has issued some PowerShell cmdlets to manage its WebSphere MQ. Could this be the start of a unifed
Thanks to Alik , I got to read this post from the PowerShell Team . IBM Releases PowerShell Cmdlets to
Hi Jeff,
I have posted about the namespaces in powershell <a href="">here</a>. I'm also looking for how we can simulate polymorphic behavior within the scripts, which would be necessary when we have huge scripts and based on certain configuration settings, we would require say a typical template pattern where the order of the steps are the same, but the steps themselves could be implemented differently.
Thanks for the good comments on mainframe/minicomputer batch languages. Developers that have done .NET only are at a great disadvantage not knowing it.
I'm hoping that windows workflow would re-ignite interest in batch processing as a system solution (it's ignored today on the Windows platform). | http://blogs.msdn.com/powershell/archive/2007/12/05/ibm-releases-powershell-cmdlets-to-manage-websphere-mq.aspx | crawl-002 | refinedweb | 162 | 62.58 |
> socks5.zip > archie.man
.\" Originally by Jeff Kellem ([email protected]). .\" .\" This is from rn (1): .de Ip .br .ie \\n.$>=3 .ne \\$3 .el .ne 3 .IP "\\$1" \\$2 .. .\" .TH ARCHIE 1 "26 October 1992" "Archie (Prospero)" .SH NAME archie \- query the Archie anonymous FTP databases using Prospero .SH SYNOPSIS .in +\w'\fBarchie \fR'u .ti -\w'\fBarchie \fR'u .B archie\ \ [\ \fB\-cers\fR\ ]\ \ [\ \fB\-a\fR\ ]\ [\ \fB\-l\fR\ ]\ [\ \fB\-t\fR\ ]\ \ [\ \fB\-m\ \fIhits\fR\ ] [\ \fB\-N\ [\ \fIlevel\fR\ ]\ ]\ \ [\ \fB\-h\fR\ \fIhostname\fR\ ]\ \ [\ \fB\-o\fR\ \fIfilename\fR\ ] [\ \fB\-L\fR\ ]\ [\ \fB\-V\fR\ ]\ [\ \fB\-v\fR\ ]\ \fIstring\fR .SH DESCRIPTION .B archie queries an archie anonymous FTP database looking for the specified .I string using the .B Prospero protocol. This client is based on .B Prospero version Beta.4.2 and is provided to encourage non-interactive use of the Archie servers (and subsequently better performance on both sides). This man page describes version 1.3 of the client. The general method of use is of the form .RS % .B archie string .RE .PP This will go to the archie server and ask it to look for all known systems that have a file named `string' in their FTP area. \fBarchie\fP will wait, and print out any matches. For example, .RS % .B archie emacs .RE .PP will find all anonymous FTP sites in the archie database that have files named .B emacs somewhere in their FTP area. (This particular query would probably return a lot of directories.) If you want a list of every filename that contains \fBemacs\fR \fIanywhere\fR in it, you'd use .RS % .B archie -c emacs .RE .PP Regular expressions, such as .RS % .B archie -r '[xX][lL]isp' .RE .PP may also be used for searches. (See the manual of a reasonably good editor, like GNU Emacs or vi, for more information on using regular expressions.) .SH OPTIONS The options currently available to this .B archie client are: .PD 0 .TP 12 .BR \-c Search substrings paying attention to upper & lower case. .TP .BR \-e Exact string match. (This is the default.) .TP .BR \-r Search using a regular expression. .TP .BR \-s Search substrings ignoring the case of the letters. .TP .BI \-o filename If specified, place the results of the search in \fIfilename\fR. .TP .BR \-a Output results as Alex filenames. .TP .BR \-l Output results in a form suitable for parsing by programs. .TP .BR \-t Sort the results inverted by date. .TP .BI \-m hits Specifies the maximum number of hits (matches) to return (default of \fB95\fR). .TP .BI \-N level Sets the \fIniceness\fR of a query; by default, it's set to 0. Without an argument, ``\-N'' defaults to \fB35765\fR. If you use \fB\-N\fR with an argument between 0 and 35765, it'll adjust itself accordingly. (\fBNote\fR: VMS users will have to put quotes around this argument, and \fB\-L\fR, like "\fB\-N45\fR"; VMS will otherwise convert it to lowercase.) .TP .BI \-h\ \fIhostname\fR Tells the client to query the Archie server \fIhostname\fR. .TP .BI \-L Lists the Archie servers known to the program when it was compiled, as well as the name of the default Archie server. For an up-to-date list, write to ``[email protected]'' (or any Archie server) with the single command of \fIservers\fR. .TP .BI \-V With the verbose option, \fBarchie\fR will make some comments along the way if a search is going to take some time, to pacify the user. .PP The three search-modifying arguments (``\-c'', ``\-r'', and ``\-s'') are all mutually exclusive; only the last one counts. 
If you specify \fB\-e\fR with any of ``\-c'', ``\-r'', or ``\-s'', the server will first check for an exact match, then fall back to the case-sensitive, case-insensitive, or regular expression search. This is so if there are matches that are particularly obvious, it will take a minimal amount of time to satisfy your request. If you list a single `\-' by itself, any further arguments will be taken as part of the search string. This is intended to enable searching for strings that begin with a `\-'; for example: .RS % .B archie \-s \- \-old .RE will search for all filenames that contain the string `\-old' in them. .SH RESPONSE Archie servers are set up to respond to a number of requests in a queued fashion. That is, smaller requests get served much more quickly than do large requests. As a result, the more often you query the Archie server, or the larger your requests, the longer the queue will become, resulting in a longer waiting period for everyone's requests. Please be frugal when possible, for your benefit as well as for the other users. .SH QUERY PRIORITY Please use the ``-N'' option whenever you don't demand immediacy, or when you're requesting things that could generate large responses. Even when using the nice option, you should still try to avoid big jobs during busy periods. Here is a list of what we consider to be nice values that accurately reflect the priority of a job to the server. .RS .TP 20 .B Normal 0 .TP .B Nice 500 .TP .B Nicer 1000 .TP .B Very Nice 5000 .TP .B Extremely Nice 10000 .TP .B Nicest 32765 .RE The last priority, \fBNicest\fR, have selected \fBNicest\fR. There are certain types of things that we suggest using \fBNicest\fR for, irregardless. In particular, any searches for which you would have a hard time justifying the use of anything but extra resources. (We all know what those searches would be for.) .SH ENVIRONMENT .Ip "ARCHIE_HOST" 8 This will change the host .IR archie will consult when making queries. (The default value is what's been compiled in.) The ``\-h'' option will override this. If you're running VMS, create a symbol called ARCHIE_HOST. .SH SEE ALSO For more information on regular expressions, see the manual pages on: .BR regex (3) , .BR ed (1) Also read the file \fBarchie/doc/whatis.archie\fR on \fBarchie.mcgill.ca\fR for a detailed paper on Archie as a whole. Read the file README.ALEX distributed with this client for more information on what Alex is and how you can take advantage of it. .SH AUTHORS The .B archie service was conceived and implemented by Alan Emtage (\[email protected]\fR), Peter Deutsch (\[email protected]\fR), and Bill Heelan (\[email protected]\fR). The entire Internet is in their debt. The \fBProspero\fR system was created by Clifford Neuman (\[email protected]\fR); write to \fBinfo\[email protected]\fR for more information on the protocol and its use. This stripped client was put together by Brendan Kehoe (\[email protected]\fR), with modifications by Clifford Neuman and George Ferguson (\[email protected]\fR). .SH BUGS There are none; only a few unexpected features. | http://read.pudn.com/downloads/sourcecode/internet/proxy/797/clients/archie/archie.man__.htm | crawl-002 | refinedweb | 1,166 | 69.07 |
WSDL Full Form
WSDL stands for Web Services Description Language. It was developed jointly by IBM and Microsoft and recommended on June 26′ 2007 by the W3C. Written in XML, it is used in describing web services. These descriptions include service location and methods. It works in coordination with SOAP and UDDI in order to provide web services, i.e SOAP is used to call web services that are listed in WSDL. Generally, a typical WSDL contains information about definition, datatypes, messages, service, bindings, targetNamespace, and port type. Prerequisites for learning about WSDL is basic XML schema and namespaces.
Characteristics
- It specifies the operations that will be performed by the web services and how these services must be accessed.
- It is based on the XML protocol used for exchanging information in distributed systems.
- It describes the specifications to be met for interfacing with XML oriented services.
- WSDL works combinedly with SOAP and UDDI.
Advantages
- It provides a systematic approach to defining web services.
- Used to reduced the total LOC which is must to access the web services.
- It can be updated dynamically which allows the users to seamlessly upgrade to new patterns.
Disadvantages
- Single mode (one-way) messaging is prohibited.
- It cannot include more than one file i.e cannot have more than one <wsdl:include> element.
- It does not support output mapping.
My Personal Notes arrow_drop_up | https://www.geeksforgeeks.org/wsdl-full-form/ | CC-MAIN-2022-21 | refinedweb | 227 | 51.65 |
Simulation
Bütçe $10-30 USD
Background:
In the land of Puzzlevania, Aaron, Bob, and Charlie had an argument over which one of them
was the greatest puzzle-solver of all time. To end the arugment once and for all, they agreed on a
duel to the death (this makes sense?). Aaron was a poor shot and only hit this target with a
probability of 1/3. Bob was a bit better and hit his target with a probability of 1/2. Charlie was an
expert marksman and never missed. A hit means a kill and the person hit drops out of the duel.
(Perhaps he could come back as a zombie.).
To compensate for the inequities in their marksmanship skills, the three decided that they would
fire in turns, starting with Aaron, followed by Bob, and then by Charlie. The cycle would repeat
until there was one man standing. That man would be remembered for all time as the Greatest
Puzzle-Solver of All Time.
An obvious and reasonable strategy is for each man to shoot at the most accurate shooter still
alive, on the grounds that this shooter is the deadliest and has the best chance of hitting back.
Task :
Write a program to simulate the duel using this strategy. Your program should use random
numbers and the probabilities given in the problem to determine whether a shooter hits his target.
You will likely want to create multiple functions to complete the problem. Once you can
simulate a duel, add a loop to your program that simulates 10,000 duels. Count the number of
times that each contestant wins and print the probability of winning for each contestant (e.g., for
Aaron your might output "Aaron won 3595/10000 duels or 35.95%).
Hint:
You can start out with:
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;
int main() {
srand(time(0)); //Since we will be using rand()
}
Review how does random number generator works
Use boolean variable for each to keep track if they are alive
( 0 false or 1 true )
Seçilen:
A proposal has not yet been provided
4 freelancers are bidding on average $28 for this job
Dear sir, The problem is so wonderful to me. I can complete right now for you. Kind regards, Tin Tan
I am a Mechatronics Engineering student with considerably expertise knowledge in the job you posted. Good at c,c++, java, python..Deadline meeting is also assured as I can promise you. Please about the payment we can a Daha Fazla
Non hai ancora fornito una proposta | https://www.tr.freelancer.com/projects/cplusplus-programming/simulation.6548243/ | CC-MAIN-2018-13 | refinedweb | 428 | 70.13 |
A Java programmer has kindly ported the Hashids "library" to the Java platform, here:
So I thought I'd compile it into our 11gR2 11.2.0.1 database, using SQL Developer. At its top the java file reads:
set define off; -- added to avoid trouble with & characters
create or replace and compile
java source named "Hashids" as
import java.util.*;
public class Hashids {
...
}
The process looks to compile ok, with SQLD saying "anonymous block completed" but at the end the new object's status is INVALID. From the scattered reading I did, I'd guess that there are unresolved dependencies somewhere -- perhaps from that import java.util.*. I'm not convinced actually since the dox say that the Java classes are in PUBLIC and that those kinds of things are resolved more or less automatically.
This is a great tool, so I'd be really glad for some help with this.
Thank you.
"Nevermind."
By using loadjava I was able to see that the class used String.isEmpty(), which I guess is not defined by the JVM in 11gR2.
That may not get me very far, though: this code is a proper class, including overloaded constructors. That means to me using the class would require instantiating an instance of it, then calling its methods. It's not obvious that that could be done from within PL/SQL. | https://community.oracle.com/message/11077122?tstart=0 | CC-MAIN-2018-05 | refinedweb | 228 | 71.65 |
All you ever wanted to know about Windows Forms and Windows Presentation Foundation Interoperability (plus some other stuff).
(Tunes I'm listening to while blogging: Style Council - "Home & Abroad")
You betcha! You can certainly host ActiveX controls in a WPF application. You do this by using Windows Forms - WPF Integration. Okay, from now on, I'm gonna refer to this as "Crossbow" because that is our internal code name for this technology and I'm really getting tired of saying "Windows Forms - WPF Integration". Now...you can host ActiveX control in a WPF application because Windows Forms already supports hosting ActiveX controls and since Crossbow enables WPF applications to host Windows Forms controls you can use this technique to subsequently host ActiveX controls, whew! So how do we do it specifically? Let's build a little sample to do this...
Here are the steps:
1) Launch VS and create a new "Avalon Application" (Yeah, I know the templates still refer to Avalon...)
2) Let's add another project type to this solution, specifically a Windows Control Library project
3) Now what the UserControl in design mode, make sure your Toolbox is visible and right-click anywhere on the Toolbox and choose "Choose Items..."
This will give us a change to add the ActiveX control of our choice to the Toolbox so we can use it in our UserControl. This operation will also generate the managed wrappers for our ActiveX control. In this specific example, I will use the Adobe Acrobat Reader ActiveX control.
This will now give us an item on our Toolbox for the Adobe Acrobat Reader control so we can place it on our UserControl design surface.
5) Select the Adobe Acrobat Reader control on the Toolbox
6) And place it on the UserControl design surface
7) Now lets make sure to set the Dock property of the control to DockStyle.Fill so the control will always take up all the real estate of the UserControl.
8) Okay, now let's add a method to our UserControl that will allow us to load a PDF file. We'll do this by just creating a public method on the UserControl and in the implementation we will just call into the underlying ActiveX control's LoadFile() method.
namespace
9) Now let's go to the Window1.xaml file in the XML editor and add a handler for the Loaded event and give the grid tag a name
<
10) Jump over to the Window1.xaml.cs file and uncomment the WindowLoaded event handler and add the following implementation:
The above code will instantiate a WindowsFormsHost control and an instance of the UserControl (that contains the ActiveX control), then it will add the UserControl to the WindowsFormsHost control and then add the WindowsFormsHost control as a child of the Grid tag in our XAML. Finally it will call our Load method which will populate the Acrobat reader control with a PDF document.
11) Okay, run it!
WOO HOO! We did it! Now go get a beer or something... | http://blogs.msdn.com/mhendersblog/archive/2005/09/23/473065.aspx | crawl-002 | refinedweb | 506 | 60.04 |
Have you checked Action Cable in Rails 5? It’s a nice addition that integrates WebSockets to Rails. In this post, we’ll see how to implement Action Cable with Active Job. Active Job is a framework for declaring jobs.
Let’s say we’re building a Twitter app, we need to add a feature that allows finding the information of a group of users based on a group of ids. The thing is, there is a limitation in the Twitter API, it doesn’t allow to fetch the data of more than 100 users per request. We need to ensure our app performs when 100 user ids or more are received regardless of the API limit.
Active Job
With Active Job we can create jobs that can be scheduled using a queue backend such as Sidekiq or Resque. One of the main features of Active Job is that you just need to follow a unique syntax (the one of Active Job), then you can plug any queue backend.
In case you don’t know what a queue backend is, it’s a library that allows scheduling jobs for running them in background (in a separate process of the app’s process). Generally, you will want to send heavy tasks (such as generating a big CVS file based on data in your DB) to the background so that the app’s process isn’t stuck while waiting for the task to finish.
Some of the benefits we will get by using background jobs in our scenario are:
- The Rails server process won’t be busy while requesting the data to the Twitter API. Jobs run in separated processes than the application server process.
- The job processes run in parallel. This is very useful since we need to send a series of requests to the API, sending they in parallel we will finish processing all faster.
Some of the advantages of using Active Job are:
- It comes already bundled in Rails 4 and above.
- We don’t need to change the definition of a job if the app’s queueing backend changes. Active Job is in charge of connecting to the queue backend.
Here is how we define a job with Active Job:
class TheJob < ActiveJob::Base queue_as :the_queue def perform # the job's logic end end
Above we can see:
- The job class which needs to inherit from “ActiveJob::Base”.
- A call to “queue_as” with a parameter. The parameter corresponds to the queue where we want to assign the job.
- The “perform” method. In this method goes the logic that the job needs to execute.
We schedule a job for background execution with the following call:
TheJob.perform_later
Creating the job
Let’s define the job in charge of sending a request to the Twitter API to get the information of a group of users based on a group of ids. Create the file app/jobs/find_users_info_job.rb with the following content:
class FindUsersInfoJob < ActiveJob::Base queue_as :default def perform(current_user_id, user_ids) fail 'Up to 100 users are supported only.' if user_ids.count > 100 # the code to send a request to Twitter to get the users info base on the ids # ... end end
We will schedule the job once per each 100 users we need to fetch from the Twitter API. The job needs to receive a logged in user id, and the ids of the Twitter users.
To schedule the job that we just created we need to run something like:
logged_in_user_id = 1 twitter_user_ids = [938484, 239384, 3493421] FindUsersInfoJob.perform_later(logged_in_user_id, twitter_user_ids)
Okay, with the job being executed in the background the “n” times needed we will get the users information. However, since the job is executed in the background, how do we get the users info in the frontend once every job has finished? Here is where Action Cable plays really nice.
Real-time communication with Action Cable
Action Cable works on top of the WebSocket protocol. The WebSocket protocol allows opening a connection between the server and the browser, allowing them to send and receive messages through such connection. That allows the browser to receive real-time notifications from the server automatically, removing the need to poll the server every time the browser wants to know if there is any new notification (data).
In this post’s scenario, we need the frontend to receive a new notification (with the users info) every time any of the jobs has finished fetching the users info.
The bases
Before going into the code, let’s first check the base concepts in Action Cable. The concepts will be better understood with the code later on.
Server side
Connections:
- For every connection made between the client and the server, an instance of an Action Cable connection will be created.
Channels:
- A channel encapsulates a logical unit of work, similar to what a controller does in a regular MVC setup.
Streams:
- Streams provide the mechanism by which channels route published content (broadcasts) to its subscribers.
Client side
Connection consumer:
- A connection consumer is required to establish a connection to the server.
Subscribers:
- When a “connection consumer” subscribes to a channel then it becomes a subscriber and a connection is created.
Configuring Action Cable
There is some configuration our Rails 5 app needs to define in order to start sending messages (notifications) through Action Cable.
Set the following code in an initializer, “config/initializers/action_cable.rb”:
if Rails.env.development? Rails.application.config.action_cable.allowed_request_origins = ['', ''] end
The code above tells Action Cable to permit messages coming from localhost. In other environments, the code above will need to be updated.
Action Cable by default only permits messages coming from the app’s process itself (by using the “async” adapter). Active Job jobs run in separate processes than the Rails server process, so we need to update Action Cable’s configuration to allow messages coming from other processes. Update the file config/cable.yml to:
redis: &redis adapter: redis url: redis://localhost:6379/1 production: *redis development: *redis test: adapter: async
Next, let’s enable Action Cable. Uncomment the following line in the config/routes.rb file:
mount ActionCable.server => '/cable'
Last, let’s add the Action Cable meta tag in our app/views/layouts/application.html.haml. Add the following line in the HTML header:
= action_cable_meta_tag
Which will be converted to the following
<meta name="action-cable-url" content="/cable">
The line above, tells the Action Cable subscribers in the frontend the URL of the Action Cable server in the backend.
The connection
Action Cable can authenticate or reject a connection based on the sender’s data.
The class ApplicationCable::Connection located at “app/channels/application_cable/connection.rb” is the place where we need to add the logic to authenticate connections. Open the file, it should have the following content:
module ApplicationCable class Connection < ActionCable::Connection::Base end end
Change it to the following and restart the server (we need to restart the server every time we modify something in app/channels):
module ApplicationCable class Connection < ActionCable::Connection::Base identified_by :current_user def connect self.current_user = find_verified_user end protected def find_verified_user user = User.find(cookies.signed[:user_id]) return user if user fail 'User needs to be authenticated.' end end end
The method “connect” is executed every time a new connection is about to be created. In the code above we’re attempting to find a user in the database based on the cookie “user_id”, if the user isn’t found then the app raises an error, if she is found then it’s assigned to the identifier “current_user” (we’ll see later why setting “current_user” as an identifier is important).
The channel
Action Cable sends messages through what it calls channels. We need to create an Action Cable channel that the Active Job processes (the instances of the job we created in the Active Job section) can use to notify the frontend when they finished fetching the user data.
Let’s generate a channel with the name “UserInfoChannel”:
$ rails generate channel UserInfoChannel
The command above generates the following files:
app/channels/user_info_channel.rb app/assets/javascripts/channels/user_info.coffee
In the file app/channels/user_info_channel.rb we can set the code to be executed when a client makes a connection to the channel. The file initially has the following content:
class UserInfoChannel < ApplicationCable::Channel def subscribed end def unsubscribed end end
In the code above the “subscribed” method is executed when a client makes a connection to the server, whereas the “unsubscribed” method is executed when the client disconnects.
We need to create a stream when the client subscribes to the channel. The stream will allow data to travel through the connection. Let’s modify the “subscribed” method to set the stream:
def subscribed stream_from "user_info_channel_#{current_user.id}" end
What we’re saying above is when the client connects to the channel set the connection with the stream “user_info_channel_#{current_user.id}”.
In the code we can access the “current_user” variable, that’s because we set that identifier in the app/channels/application_cable/connection.rb file (when we were configuring Action Cable above).
The connection consumer
A connection consumer is required to create subscriptions in the client side. Let’s enable the code that creates the connection consumer. Go to the file app/assets/javascripts/cable.coffee and set the following content:
#= require action_cable #= require_self @App ||= {} App.cable = ActionCable.createConsumer() App.subscriptions = [];
The code above works together with the “action-cable-url” meta tag that we declared some steps above.
The subscription
If you recall, when we generated the channel files with the command
$ rails generate channel UserInfoChannel
The file app/assets/javascripts/channels/user_info.coffee was generated, that file has the code to initialize the subscription in the frontend. Let’s see its content:
App.cable.subscriptions.create "UserInfoChannel", connected: -> # Called when the subscription is ready for use on the server disconnected: -> # Called when the subscription has been terminated by the server received: (data) -> # Called when there's incoming data on the websocket for this channel
There is a subscription to the “UserInfoChannel” being created in the code above. Also, the functions to be executed when the connection is created or destroyed, and when the content is received are being declared.
We have set the bases in the backend and the frontend to perform real-time communication, now let’s send messages from Active Job to Action Cable.
Combining all: sending data through Action Cable
Let’s modify the job that we created in the Active Job section, we need it to broadcast the users data when finishes fetching it from the Twitter API. We need to set app/jobs/find_users_info_job.rb to:
class FindUsersInfoJob < ActiveJob::Base queue_as :default def perform(current_user_id, user_ids) fail 'Up to 100 users are supported only.' if user_ids.count > 100 MyTwitterClass.get_users_info(current_user_id, user_ids) end end
We need to add the necessary code to broadcast the users data with Action Cable. Let’s update it to:
class FindUsersInfoJob < ActiveJob::Base queue_as :default def perform(current_user_id, user_ids) fail 'Up to 100 users are supported only.' if user_ids.count > 100 stream_id = "user_info_channel_#{current_user_id}" users_info = MyTwitterClass.get_users_info(current_user_id, user_ids) ActionCable.server.broadcast(stream_id, users_info: users_info) end end
In the code above, we are telling Action Cable to broadcast a message to the corresponding stream, with the users info as the body of the message.
We need to modify the “received” function in the subscription, so that when the message arrives we take the users data and append it to the DOM:
... received: (data) -> for user in data['users'] do (user) -> $('body').append(user['screen_name']) $('body').append("<img src='"+ user['profile_image_url'] + "' />") ...
In the code above we’re appending to the body every user’s screen name and profile image that is fetched by Active Job and sent to Action Cable.
That’s it, when any of the jobs finishes fetching users info from the Twitter API, they will broadcast a message with the info, the frontend will get the message and render it.
Conclusion
Active Job and Action Cable seem to make a good fit. It seems to me also that their implementation is simple once you know their structure. Last, is good that there is now a “native” way for handling Web Sockets in Rails, I’m curious about how Action Cable will grow in the future.
So, what do you think? Do you like Active Job and Action Cable? Have you already implemented them? How?
Would you like to learn more about these technologies? We can schedule a pairing session and setup some challenge to work on. If you want you can contact me at [email protected].
评论 抢沙发 | http://www.shellsec.com/news/26608.html | CC-MAIN-2017-04 | refinedweb | 2,106 | 53.81 |
JSP 2.0 Simple Tags Explained
JSP 2.0 introduced a lot of new goodies, many of which focus on making the developer’s life easier. In this article, I’ll look at one of my personal favorites: simple tag handlers. Simple tag handlers let you create custom tags that out-perform tag file-based solutions, and are far easier to write than tags based on the previous custom tag API.
As there are now a few different ways to write custom tags, I’ll also provide some pointers on how to decide whether to use simple tag handlers, tag files, and what Sun Microsystems now refers to as ‘classic’ tags.
Let’s kick off by clearing up a potential misconception. The ‘simple’ in ‘simple tag handler’ refers to the ease with which such custom tags can be written, not to any limitations they have. In almost all cases, simple tags are every bit as capable as tags written using the classic tag API, the only caveat being that you cannot include scriptlet code in the body of a simple tag. You can, however, include JSTL tags, EL expressions and other custom actions, so this should rarely, if ever, pose a problem.
First Steps
A simple tag handler subclasses a support class called
'SimpleTagSupport'. This class is a very handy implementation of the
'SimpleTag' interface. It provides implementations of all 5 of this interface’s methods, the most important of which is the
doTag() method. The
doTag() method in
SimpleTagSupport actually does nothing — it’s up to you, the developer, to override this method and code your tag’s functionality. Let’s dive right into an example. The code below shows this method in action:
package demo.tags;
import javax.servlet.jsp.tagext.*;
import javax.servlet.jsp.*;
public class Greeter extends SimpleTagSupport {
public void doTag() throws JspException {
PageContext pageContext = (PageContext) getJspContext();
JspWriter out = pageContext.getOut();
try {
out.println("Hello World");
} catch (Exception e) {
// Ignore.
}
}
}
There’s nothing really fancy here. This class simply uses
doTag() to print ‘Hello World’ to the output stream. We will liven this example up as we go, but, for now, there are a few things to take notice of.
The two import statements give us access to all of the required classes; you will need the Servlet and JSP
API classes on your classpath for this code to compile. Tomcat users will find these under common/lib as
jasper-api.jar and
servlet-api.jar.
For the reasons just discussed, this class extends the
SimpleTagSupport class and expects us override the
doTag() method. Another consequence of extending the
SimpleTagSupport class is that a method called
setJspContext() was called by the container prior to
doTag(), which made the current JSP context information available via
getJspContext(). We used this method to get access to the output stream for the JSP.
Mapping Tags to Classes
Assuming this class is installed under /WEB-INF/classes, the next step would be to write a TLD file. The TLD (Tag Library Descriptor) is an XML file that the container uses to map the custom tags in your JSPs to their corresponding simple tag handler implementation classes. Below we see
demo.tld, a simple TLD file which, when installed under the /WEB-INF/tlds directory, would map a custom tag called
'greeter' to the class
'demo.tags.Greeter'.
<?xml version="1.0" encoding="UTF-8"?>
<taglib version="2.0" xmlns="" xmlns:
<tlib-version>1.0</tlib-version>
<short-name>demo</short-name>
<uri>DemoTags</uri>
<tag>
<name>greeter</name>
<tag-class>demo.tags.Greeter</tag-class>
<body-content>empty</body-content>
</tag>
</taglib>
The
'tlib-version' and
'short-name' elements are straightforward enough, and relate to the tag library version and default tag prefix respectively. The
'uri' element, however, is worth some discussion. When it starts up, the container uses an auto-discovery feature to map all
uri element values to the corresponding TLD; therefore, this string must be unique within an application. As of JSP 1.2, we no longer need to make any edits to the web.xml file in order to deploy a custom tag — the auto-discovery feature saves us this inconvenience.
The really interesting part is the contents of the
<tag> element. It is in here that, among other things, we can give our tag a name, define its associated
tag handler class, and determine whether or not our tag should be allowed to have body content.
The code below shows how we might use our example
greeter tag in a JSP.
<%@taglib prefix="t" uri="DemoTags" %>
<html>
<head><title>JSP Page</title></head>
<body>
<!-- prints Hello World. -->
<t:greeter />
</body>
</html>
Courtesy of the
taglib directive’s
'uri' attribute, we have told the JSP where our TLD file is and, consequently, where our simple
tag handler implementation class is located. The
'uri' attribute maps directly to one of the mappings the container created when it started up; the container mapping, in turn, points to the TLD information.
Generally, you won’t need to care about this, but if the
uri attribute does not resolve to a container mapping, it is assumed to be a file path. This is useful only when identical
uri element values are encountered in different TLD files. If conventions such as using a domain name, or some other unique string, are followed, this should never happen.
Handling Tag Attributes
Custom tags start to become more interesting when you configure them to use attributes. To achieve this, we add instance variables and corresponding property setter methods to the
tag handler class. The container calls these setter methods for us, passing along the custom tag attribute values as arguments.
Let us suppose that we want to allow our
greeter tag to accept an attribute that will determine who it should greet. We could rewrite the
tag handler class to accommodate a
'name' attribute, as shown here:
public class Greeter extends SimpleTagSupport {
private String name = "World";
public void setName(String name){this.name = name;}
public void doTag() throws JspException {
PageContext pageContext = (PageContext) getJspContext();
JspWriter out = pageContext.getOut();
try {
out.println("Hello " + name);
} catch (Exception e) {
// Ignore.
}
}
}
We would also need to update the TLD file to handle our newly defined attribute. The below code shows the relevant portion of the updated TLD:
<tag>
<name>greeter</name>
<tag-class>demo.tags.Greeter</tag-class>
<body-content>empty</body-content>
<attribute>
<name>name</name>
<rtexprvalue>true</rtexprvalue>
<required>false</required>
</attribute>
</tag>
The
attribute element sets up an attribute named
'name', states that it is not required, and further states that it will accept
'runtime expression values'. We could then use the tag in any of the following ways:
<t:greeter
<t:greeter
<t:greeter />
The first invocation simply prints ‘Hello Andy’. As you would expect, the
setName() method was called with the string literal
'Andy' as its argument.
The second invocation is similar, but gets its value from the incoming request via an
EL expression. You can disable this feature by setting the
'rtexprvalue' element to
'false' or, because
false is the default value, by omitting this element altogether.
The last invocation does not use an attribute; instead, it uses the default value of the instance variable
'name'. You can make the attribute mandatory by choosing a value of
'true' for the
'required' element (it is
false by default), or you can programmatically test for its existence by testing for null — whichever makes sense for your application.
Processing Body Content
Custom tags often need access to their body content, and simple tag handlers provide an elegant way to handle this requirement. First, a simple amendment to the TLD is required — the
body-content element needs a value of
'scriptless'. When using
'scriptless', you’re allowed to put template text, standard actions and custom actions within your tag’s body — but not java code.
Another important method that is called by the container is
setJspBody(). This method makes the tag’s body content available as an executable fragment of any
EL expressions, custom actions and template text. You access this fragment with the
getJspBody() method, and you can execute it using the
JspFragment object’s
invoke() method. The below code this in action:
public void doTag() throws JspException {
JspFragment body = getJspBody();
PageContext pageContext = (PageContext) getJspContext();
JspWriter out = pageContext.getOut();
try {
StringWriter stringWriter = new StringWriter();
StringBuffer buff = stringWriter.getBuffer();
buff.append("<h1>");
body.invoke(stringWriter);
buff.append("</h1>");
out.println(stringWriter);
} catch (Exception e) {
// Ignore.
}
}
Let’s break this code down. The
doTag() method kicks off with a call to
getJspBody(), and the resulting
JspFragment is stored in a variable called
'body'.
JspFragment has an interesting method called
invoke(), which takes a
java.io.Writer as an argument. Here’s the really important bit: when
invoke() is called, the fragment is executed and then written to this
Writer object.
In simple cases you can supply
invoke() an argument of null, causing it to use the current JSP’s Writer and, consequently, print the executed fragment directly to the page. In many cases, however, you will want to first process the body content in some way before you send it to the output stream.
One way to do this processing is to use a
StringWriter and work with its underlying
StringBuffer. As you can see in listing 6, I appended a
H1 tag to the buffer, used invoke to execute and write the body content to the writer (and consequently, its underlying buffer), then finished by appending the closing
H1 tag. A simple call to
out.println() takes care of sending the processed body content on to the output stream.
Once you get the gist of using the executable
JspFragment, manipulating body content is a breeze. Keep in mind that you have access to the JSP
pageContext object (via
getJspContext()), so you can use its
setAttribute() and
getAttribute() methods to return values to the JSP and otherwise coordinate your tag’s functionality with the JSP in which it resides.
Making the Right Choice
Hopefully, you now have a general understanding of how to write custom tags using simple tag handlers. So, is it time to abandon tag files and the classic tag handler
API? Here are a few pointers.
If your tag absolutely has to use scripting elements (scriptlets), you will need to use the classic tag approach. This should rarely be an issue, as we have largely replaced scriptlets with the
JSTL, custom actions, and
EL expressions.
One perceived advantage that classic tags have is that containers can be optimized to provide pooling features for them; simple tag handlers, on the other hand, are instantiated for each occurrence in the JSP page. It is often the case that pooling involves more overhead than it’s worth. So, unless you have a tag that creates a lot of expensive resources and is used repeatedly, I wouldn’t let this deter you from adopting the simple tag handler approach. In fact, even when multiple invocations do prove to be expensive, you can often stick with simple tag handlers and be a little creative with the
PageContext object’s ability to hold on to references.
Tag files should be used when you need to generate a lot of mark-up, such as HTML. They are also handy when a more RAD approach to development will suffice. However, keep in mind that they are generally a little slower than tags in compiled form.
When you have lots of Java code and performance is important, simple tag handlers are a great choice. They are easy to write and have more than enough power to get things done.
Summary
We’ve only scratched the surface of what is a possible using simple tag handlers. I suggest you refer to the JSP specification for more information. You may also want to look at what your IDE has to offer in the way of support for custom tags. Netbeans 4, for example, takes care of writing the TLD file for you, provides custom tag code completion, and has quite a few other features that speed up tag development.
No Reader comments | http://www.sitepoint.com/jsp-2-simple-tags/ | CC-MAIN-2015-18 | refinedweb | 2,015 | 62.27 |
I try to run the following code which actually from a book ( C How to program by P.J.Deitel & H.M. Deitel) regarding arrays chapter.
But when I try to run it, it will always produce 2 of the following errors.But when I try to run it, it will always produce 2 of the following errors.Code:
#include <stdio.h>
int main ( void )
{
int n[ 10 ];
int i;
for ( i = 0; i < 10; i++) {
n[ i ] = 0;
}
printf("%s%13s\n", "Element", "Value");
for ( i = 0; i < 10; i++ ) {
printf("%7d13d\n", i, n[ i ] );
}
getchar();
return 0;
}
1. error LNK2019: unresolved external symbol_WinMain@16 referenced in function____tmainCRTStartup
2. error LNK1120: 1 unresolved externals
Any idea? | http://cboard.cprogramming.com/c-programming/139991-error-lnk1120-1-unresolved-externals-printable-thread.html | CC-MAIN-2015-06 | refinedweb | 118 | 66.44 |
06 January 2009 19:00 [Source: ICIS news]
(Adds Dow comments in paragraphs 12-15)
By Joe Chang
NEW YORK (ICIS news)--Dow Chemical is unlikely to find a joint venture partner that would match the $7.5bn (€5.6bn) price that Kuwait’s Petrochemical Industries Co (PIC) offered for its commodity chemical assets, according to an analyst with BB&T Capital Markets on Tuesday.
“Although we think Dow may be able to find another joint venture partner, we do not believe that it will be able to find a bidder to match the downwardly revised $7.5bn price tag, especially in today’s economic environment,” said analyst Frank Mitsch in a research note.
Dow announced on Tuesday that it will establish a formal process to secure a joint venture partner, as it has already been approached by parties interested in partnering with Dow’s basic plastics business.
Dow’s planned K-Dow Petrochemicals joint venture with PIC fell apart when ?xml:namespace>
Dow was scheduled to receive $7.5bn in cash from PIC, plus a $1.5bn cash distribution from K-Dow upon completion of the deal.
Dow announced it will pursue legal action against PIC for breach of contract. The maximum break-up fee under the merger agreement is $2.5bn and the case would be decided by the International Chamber of Commerce.
Dow said it believed that another joint venture partner would, “combined with the acceleration of planned divestitures and several other divestments that are consistent with the company’s strategy, will yield proceeds greater than the funds Dow expected to receive in connection with the K-Dow joint venture”.
Although Dow’s press release did not mention its planned $18.8bn acquisition of Rohm and Haas, Mitsch said it contained some “cryptic” language regarding the potential of the deal taking place.
Dow confirmed its commitment to its transformational corporate strategy, but also said it was committed to retaining a strong investment grade credit rating and would continue to pay out its dividend.
“Point one suggests all’s well with the Rohm and Haas deal and point two suggests it is not,” said Mitsch.
“We’re hopeful that an intense game of ‘chicken’ is going on behind the scenes with Dow either looking to reduce the acquisition price and cash component, or working to get out of the deal,” he said.
Rohm and Haas is “a strategic fit for Dow,” said the latter company’s vice president of public affairs.
“Rohm and Haas is a strategic fit for Dow and is consistent with our strategy,” said Patti Temple Rocks, vice president of Dow Public Affairs, in a statement to ICIS.
“The transaction is still in the regulatory approval process,” she added.
Dow is expected to obtain regulatory approval by the European Commission this week, and
Shares of Dow jumped $0.77, or 5.1%, to $15.82 in late-Tuesday morning trading, while Rohm and Haas fell $2.42, or 3.8%, to $61.40.
In July 2008, Dow had agreed to acquire Rohm and Haas for $78/share.
($1 = €0.74) | http://www.icis.com/Articles/2009/01/06/9182090/dow-may-not-find-partner-to-match-7.5bn-analyst.html | CC-MAIN-2015-22 | refinedweb | 515 | 62.48 |
Opened 3 years ago
Closed 3 years ago
Last modified 3 years ago
#1773 closed defect (invalid)
Unable to build nginx as static
Description
Hello,
we are trying to build nginx static with the following code, on a clean Centos-7 x86_64:
#!/bin/bash yum install pcre-devel gcc openssl-devel wget -y /dev/null 2>&1 mkdir "${MYWORKDIR}" cd "${MYWORKDIR}" wget -4"${MYFILE}" && tar xzf "${MYFILE}" && cd nginx-"${MYVERSION}" ./configure \ --prefix=/usr \ --sbin-path=/usr/sbin/nginx \ --conf-path=/etc/nginx/nginx.conf \ --error-log-path=/var/log/nginx/error.log \ --http-log-path=/var/log/nginx/access.log \ --pid-path=/run/nginx.pid \ --lock-path=/run/lock/subsys/nginx \ - \ --user=nginx \ --group=nginx \ --with-http_ssl_module \ --with-http_v2_module \ --with-threads \ --with-cc-opt="-static -static-libgcc" \ --with-ld-opt="" make V=1
However the build fails at configure time with:
checking for int size ... ./configure: error: can not detect int size make: *** No rule to make target `build', needed by `default'. Stop.
What's wrong?
Thanks
Change History (3)
comment:1 by , 3 years ago
comment:2 by , 3 years ago
Hello,
I did multiple tests and I realized that I copy-pasted the wrong version of the script, so I know that -static is a linked flag, it was just copied into the wrong place.
I was looking at config.log so I didn't think to look at something else, however, thanks for the hint, I was able to find the cause of the issue.
Anyway, atm, the build still fails when it tries to compile the following test:
#include <sys/types.h> #include <unistd.h> #include <openssl/ssl.h> int main(void) { SSL_CTX_set_options(NULL, 0); return 0; }
In the link parameters it is missing -lz so it fails with undefined reference to some functions provided by zlib.
Are you able to reproduce the issue?
comment:3 by , 3 years ago
As previously suggested, you'll have to provide much more options than just
-static -static-libgcc, and providing appropriate static dependencies for each and every library you are compiling nginx with is among the things you'll have to do.
And, as also previously suggested, this is not something you should do unless you understand what you are doing and why, and understand possible consequences.
Try looking into
objs/autoconf.err, it should have details on what exactly compiler said. Note well that
-static -static-libgcc;
--with-ld-opt.
And, more importantly, on Linux the result is going to be non-portable: that is, you wan't be able to use the resulting binary on other hosts, and things may be suddenly become broken on upgrades. This is due to glib limitations on static linking - for certain functions it requires the same version of the glib library to be available at runtime.
In general, it is not recommended to use static linking unless you understand what you are doing and why, and understand possible consequences.
If you have further questions, consider using support options available. | https://trac.nginx.org/nginx/ticket/1773 | CC-MAIN-2022-05 | refinedweb | 499 | 51.99 |
#include <stddef.h>
#include "mysql_version.h"
#include "sql/sql_plugin.h"
#include "status_var.h"
#include <mysql/services.h>
Go to the source code of this file.
There can be some variables which needs to be set before plugin is loaded but not after plugin is loaded.
ex: GR specific variables. Below flag must be set for these kind of variables.
Create a temporary file.
The temporary file is created in a location specified by the mysql server configuration (–tmpdir option). The caller does not need to delete the file, it will be deleted automatically.
Interface to remove the per thread openssl error queue.
This function is a no-op when openssl is not used.
Interface to remove the per thread openssl error queue.
Check if batching is allowed for the thread.
Get binary log position for latest written entry.
Provide a handler data getter to simplify coding.
Return the thread id of a user thread.
Get the XID for this connection's transaction.
Check the killed state of a connection.
In MySQL support for the KILL statement is cooperative. The KILL statement only sets a "killed" flag. This function returns the value of that flag. A thread should check it often, especially inside time-consuming loops, and gracefully abort the operation if it is non-zero.
Check the killed state of a connection.
Mark transaction to rollback and mark error as fatal to a sub-statement if in sub statement mode.
Dumps a text description of a thread, its security context (user, host) and the current query.
Provide a handler data setter to simplify coding.
Set ha_data pointer (storage engine per-connection information).
To avoid unclean deactivation (uninstall) of storage engine plugin in the middle of transaction, additional storage engine plugin lock is acquired.
If ha_data is not null and storage engine plugin was not locked by thd_set_ha_data() in this connection before, storage engine plugin gets locked.
If ha_data is null and storage engine plugin was locked by thd_set_ha_data() in this connection before, storage engine plugin lock gets released.
If handlerton::close_connection() didn't reset ha_data, server does it immediately after calling handlerton::close_connection().
Set the killed status of the current statement. | https://dev.mysql.com/doc/dev/mysql-server/latest/include_2mysql_2plugin_8h.html | CC-MAIN-2019-43 | refinedweb | 361 | 61.22 |
Creative HTML5 and JavaScript workshop by @seb_ly
Join the DZone community and get the full member experience.Join For Free
this week i had the pleasure of attending seb lee-delisle's creative html5 and javascript workshop and even as someone who classes themselves as an expert javascripter (i hope!), i still learnt tons.
to me, seb has gone through it all when he was developing for flash years ago. things like building 3d in a 2d environment, optimising methods and getting to know the faking techniques and tricks to make something appear awesome without turning your computer in to a smouldering wreck after the cpu melted the thing to the ground!
in my humble opinion, the "open web" community are now going through the same process with canvas and other html5-esque technologies. so why not learn from what seb and others have already been through? this is why i wanted to attend seb's workshop, and if he runs it again, i'd recommend you do the same if you're not familiar with visual programming techniques.
what we built
the two day workshop broke in to segments around drawing, vectors, particles, simple 3d and using three.js .by the end of day one we had all the components to build a working asteroids game. which i glued together that evening by myself (note that most of these components are seb's with a few of my own tweaks and glueing) - full screen/ipad version also working online
the second day we upgraded from 2d to 3d and to close the day, we asked seb to build a 3d version of asteroids. clearly he wasn't going to completely finish developing the game, but it was fun (and impressive) to see the building blocks being put together. he got to the point where there we asteroids flying towards him, the ship was controlled by the mouse and had some eased effects to give it a more tactile feel, and he had bullets/laser beams firing at the asteroids - all very cool, and not far off a ready game:
notes from the day
by way of brain dump, and so that i've got a record of the stuff i need to remember somewhere , here's just list of bits that i picked up throughout the day. obviously there was tons more, but when you're coding, listening, learning and wrapping your head around trig (something that i'm quite useless at) - less notes are taken
canvas
- when using colours: hsl's where it's at! here's some help:
- remember subpixels drawing, by default you draw between pixels, mark pilgrim explains:
- when using beginpath , unless you do moveto the first use of lineto will actually move, and not draw a line, i.e. if you do a lineto without a start, it only moves (in fact because there's no starting point).
- rotating a canvas rotates around the origin, which by default is top, left. to move the origin of the canvas, use translate .
- save / restore state the drawing style and affects coordinate system - this useful for rotating the canvas drawing, and then resetting the rotation.
for an animation the (pseudo) code looks like this:
var mousex = 0, mousey = 0, mousedown = false, keys = {};
setup(); // initalisation
setinterval(loop, 1000 / 60); // 60 fps
function loop() {
// 1. handle key or mouse states
// 2. update position of animated objects: particles, etc
object.update();
// 3. draw each object
object.draw();
}
// capture events, but don't do anything with them
document.addeventlistener('mousemove', function (e) {
mousex = e.pagex;
mousey = e.pagey;
}, false);
vectors
- pythagoras can be used to determine whether a point (like a click) is inside an circle drawn on a canvas. however, it requires math . sqrt which is costly, so instead of using pythagoras to workout hit testing (for length of vector) - compare the distanced squared:
instead of:
var distance = math.sqrt((this.diff.x * this.diff.x) + (this.diff.y*this.diff.y));
return (distance<this.radius);
use:
var distancesq = (this.diff.x * this.diff.x) + (this.diff.y*this.diff.y);
return (distancesq < this.radius * this.radius)
3d
collision detection in 3d:
var distancesq3d = (this.diff.x * this.diff.x) + (this.diff.y*this.diff.y) + (this.diff.z * this.diff.z)
return (distancesq3d < this.radius * this.radius)
- to calculate the new x and y in a 3d space, you need to multiply them by a scale which is worked out from: f/(f+z) = newscale (note that f the field of view - like the zoom on a camera)
- to scale the 3d system properly, the origin must be in the centre of the canvas, this is key to getting the perspective correct: ctx . translate ( ctx . canvas . width / 2 , ctx . canvas . height / 2 )
- to rotate the y (the yaw ), you rotate the y and z axis, which is exactly the same as the distribution code (below in the optimisation tricks), using sin and cos
to get the z-order correct (painters algorithm), you need to sort by the z axis, where points is an array of objects with xyz:
points = points.sort(function (a, b) {
return a.z >= b.z ? -1 : 1;
});
- if the z is less than -fov (f, i.e. if f=250, and z=-250) it means the z position is behind you - so it should be removed
optimisation tricks
- when removing an item (or deleted) - like bullets or asteroids, recycle the item: disable it when it's finished with, and add it to an array. then when you want a new object, try to get it from the pool of spares first, otherwise create a new item.
- instead of having the keyboard interactivity interrupt the code and update values, track the key presses and check these in the render cycle process.
- ( math . random () * 0xff ) << 16 generates a random blue colour ( 0xff == 0x0000ff == blue), then shift 16 bits and we've now got a random red colour. we could equally get a random green using << 8 .
- when placing objects at a random position, if you use x = math . random () * width (and similarly with y axis) the distribution creates a square shape, which looks odd. this is easy to fix, you create a circular distribution.
to get a circular distribution, rotate around a circle using:
x = math.sin(angle) * speed; // sin for x
y = math.cos(angle) * speed; // cos for y
Published at DZone with permission of Remy Sharp, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/creative-html5-and-javascript | CC-MAIN-2022-21 | refinedweb | 1,086 | 60.35 |
Combine many netCDF files into a single file with Python
NetCDF data come in many shapes. I was recently working with precipitation data downloaded from the NOAA Physical Science Laboratory Server. Unfortunately, the data were all in different years, and I couldn’t get a combined file to download for all the data.
It seemed to be a simple problem, and I know there are many netCDF libraries that can handle such file merges without ripping the file open for its dimensions and variables.
After perusing many fora, I was able to piece together a solution to do the easiest job of taking different netCDF files that need to be combined over a dimension, in my case, it was time.
So what’re we trying to do again? We’re trying to go from disjointed netCDF files to a single netCDF.
- The simplest solution is to install the xarray library for Python. I use Jupyter Notebooks with Anaconda, so all I did was go to Anacoda UI, go to Environments, search in ‘All’, and then search for xarray. Once you get the right option, select it and click ‘Apply’ on the button right corner and let it finish installing (Click ok for any subsequent dialogues). There are other install options available on the xarray webpage.
2. Create a folder with all the data that needs to be combined. This will also be the folder where the Python script will reside. I had a folder called ‘precip’ with the hourly precipitation data mentioned above. As in the screenshot, the data was names in this format — ‘precip.hour.yyyy.nc’, yyyy being the year of observation recording.
3. Open Jupyter notebook, navigate to the created folder and create a new Python 3 notebook.
4. And now for the code. First, I imported all the libraries I care about. (I don’t need all these, but always import the first two out of force of habit, which I keep here for the sake of, well, authenticity.)
import netCDF4
import numpy
import xarray
5. This is the main line of code. I tell. And so we say combine = ‘by_coords’ and the concat_dim = ‘time’. (xarray provides more option to concatenate data with more complex needs like merging along two dimensions. For more, see here)
ds = xarray.open_mfdataset('precip.hour.*.nc',combine = 'by_coords', concat_dim="time")
6. I want to export this data into a combined netCDF, so that’s next.
ds.to_netcdf('precip_combined.nc')
7. And now I can check this data, its dimensions using ncdump, or plot it as a Spacetime Cube.
Here’s the ncdump
netcdf precip_combined.nc {
dimensions:
lon = 33;
lat = 21;
time = 309024;
variables:
float precip(time=309024, lat=21, lon=33);
:_FillValue = NaNf; // float
:least_significant_digit = 2S; // short
:long_name = "Hourly Accumulated Precipitation";
:valid_range = 0.0f, 10.0f; // float
:units = "in";
:precision = 2S; // short
:var_desc = "Precipitation";
:dataset = "CPC 2x2.5 Hourly US Precipitation";
:level_desc = "Surface";
:statistic = "Hourly Accumulation";
:parent_stat = "Observation";
:add_offset = 0.0f; // float
:scale_factor = 1.0f; // float
:missing_value = -9.96921E36f; // floatfloat lon(lon=33);
:_FillValue = NaNf; // float
:units = "degrees_east";
:long_name = "Longitude";
:actual_range = 220.0f, 300.0f; // float
:standard_name = "longitude";
:axis = "X";
:_CoordinateAxisType = "Lon";float lat(lat=21);
:_FillValue = NaNf; // float
:units = "degrees_north";
:long_name = "Latitude";
:actual_range = 20.0f, 60.0f; // float
:standard_name = "latitude";
:axis = "Y";
:_CoordinateAxisType = "Lat";double time(time=309024);
:_FillValue = NaN; // double
:long_name = "Time";
:delta_t = "0000-00-00 01:00:00";
:avg_period = "0000-00-00 01:00:00";
:verification = "accumulation from ob time to one hour later";
:standard_name = "time";
:axis = "T";
:units = "seconds since 1948-07-01 00:00:00";
:calendar = "proleptic_gregorian";
:_CoordinateAxisType = "Time";// global attributes:
:title = "CPC 2x2.5 Hourly US Precipitation";
:Conventions = "CF-1.2";
:history = "created 05/20/2004 by CAS from data obtained from NCEP";
:description = "Gridded hourly Precipitation";
:platform = "Observations";
:documentation = "";
:Source = "";
:References = "";
:dataset_title = "CPC Hourly Precipitation for the United States";
:_CoordSysBuilder = "ucar.nc2.internal.dataset.conv.CF1Convention";
} | https://neetinayak.medium.com/combine-many-netcdf-files-into-a-single-file-with-python-469ba476fc14?source=post_internal_links---------6---------------------------- | CC-MAIN-2021-10 | refinedweb | 642 | 56.45 |
Background
epsilon productions are very useful to express many grammars in a compact way. For example, take these simple function call productions in some imaginary C-like language:
func_call:: identifier '(' arguments_opt ')' arguments_opt:: arguments_list | eps arguments_list:: argument | argument ',' arguments_list
When composing grammars by hand, simplicity matters. It's very useful to be able to look at arguments_opt and know that it's an optional list of arguments. The same non-terminal can be reused in several other productions.
However, epsilon productions pose a problem for several algorithms that act on grammars. Therefore, prior to running these algorithms, epsilon productions have to be removed. Fortunately, this can be done relatively effortlessly in an automatic way.
Here I want to present an algorithm and a simple implementation for epsilon production removal.
The algorithm
Intuitively, it's quite simple to remove epsilon productions. Consider the grammar for function calls presented above. The argument_opt nonterminal in func_call is just a short way of saying that there either is an argument list inside those parens or nothing. In other words, it can be rewritten as follows:
func_call:: identifier '(' arguments_opt ')' | identifier '(' ')' arguments_opt:: arguments_list arguments_list:: argument | argument ',' arguments_list
This duplication of productions for func_call will have to be repeated for every other production that had arguments_opt in it. This grammar looks somewhat strange, as arguments_opt is now identical to arguments_list. It is correct, however.
A more interesting case occurs when the epsilon production is in a nonterminal that appears more than once in some other production [1]. Consider:
B:: A z A A:: a | eps
When we remove the epsilon production from A, we have to duplicate the productions that have A in them, but the production for B has two A. Since either of the A instances in the production can be empty, the only proper way to do this is go over all the combinations:
B:: z | A z | z A | A z A A:: a
In the general case, if A appears k times in some production, this production will be replicated 2^k times, each time with a different combination [2].
This leads us to the algorithm:
- Pick a nonterminal A with an epsilon production
- Remove that epsilon production
- For each production containing A: Replicate it 2^k times where k is the number of A instances in the production, such that all combinations of A being there or not will be represented.
- If there are still epsilon productions in the grammar, go back to step 1.
A couple of points to pay attention to:
- It's obvious that a step of the algorithm can create new epsilon productions [3]. This is handled correctly, as it works iteratively until all epsilon productions are removed.
- The only place where an epsilon production cannot be removed is at the start symbol. If the grammar can generate an empty string, we can't ruin that. A special case will have to handle this case.
Implementation
Here's an implementation of this algorithm in Python:
from collections import defaultdict class CFG(object): def __init__(self): self.prod = defaultdict(list) self.start = None def set_start_symbol(self, start): """ Set the start symbol of the grammar. """ self.start = start def add_prod(self, lhs, rhs): """ Add production to the grammar. 'rhs' can be several productions separated by '|'. Each production is a sequence of symbols separated by whitespace. Empty strings are interpreted as an eps-production. Usage: grammar.add_prod('NT', 'VP PP') grammar.add_prod('Digit', '1|2|3|4') # Optional Digit: digit or eps grammar.add_prod('Digit_opt', Digit |') """ # The internal data-structure representing productions. # maps a nonterminal name to a list of productions, each # a list of symbols. An empty list [] specifies an # eps-production. # prods = rhs.split('|') for prod in prods: self.prod[lhs].append(prod.split()) def remove_eps_productions(self): """ Removes epsilon productions from the grammar. The algorithm: 1. Pick a nonterminal p_eps with an epsilon production 2. Remove that epsilon production 3. For each production containing p_eps, replace it with several productions such that all the combinations of p_eps being there or not will be represented. 4. If there are still epsilon productions in the grammar, go back to step 1 The replication can be demonstrated with an example. Suppose that A contains an epsilon production, and we've found a production B:: [A, k, A] Then this production of B will be replaced with these: [A, k], [k], [k, A], [A, k, A] """ while True: # Find an epsilon production # p_eps, index = self._find_eps_production() # No epsilon productions? Then we're done... # if p_eps is None: break # Remove the epsilon production # del self.prod[p_eps][index] # Now find all the productions that contain the # production that removed. # For each such production, replicate it with all # the combinations of the removed production. # for lhs in self.prod: prods = [] for lhs_prod in self.prod[lhs]: num_p_eps = lhs_prod.count(p_eps) if num_p_eps == 0: prods.append(lhs_prod) else: prods.extend(self._create_prod_combinations( prod=lhs_prod, nt=p_eps, count=num_p_eps)) # Remove duplicates # prods = sorted(prods) prods = [prods[i] for i in xrange(len(prods)) if i == 0 or prods[i] != prods[i-1]] self.prod[lhs] = prods def _find_eps_production(self): """ Finds an epsilon production in the grammar. If such a production is found, returns the pair (lhs, index): the name of the non-terminal that has an epsilon production and its index in lhs's list of productions. If no epsilon productions were found, returns the pair (None, None). Note: eps productions in the start symbol will be ignored, because we don't want to remove them. """ for lhs in self.prod: if not self.start is None and lhs == self.start: continue for i, p in enumerate(self.prod[lhs]): if len(p) == 0: return lhs, i return None, None def _create_prod_combinations(self, prod, nt, count): """ prod: A production (list) that contains at least one instance of 'nt' nt: The non-terminal which should be replicated count: The amount of times 'nt' appears in 'lhs_prod'. Assumed to be >= 1 Returns the generated list of productions. """ # The combinations are a kind of a powerset. Membership # in a powerset can be checked by using the binary # representation of a number. # There are 2^count possibilities in total. 
# numset = 1 << count new_prods = [] for i in xrange(numset): nth_nt = 0 new_prod = [] for s in prod: if s == nt: if i & (1 << nth_nt): new_prod.append(s) nth_nt += 1 else: new_prod.append(s) new_prods.append(new_prod) return new_prods
And here are the results with some of the sample grammars presented earlier in the article:
cfg = CFG() cfg.add_prod('identifier', '( arguments_opt )') cfg.add_prod('arguments_opt', 'arguments_list | ') cfg.add_prod('arguments_list', 'argument | argument , arguments_list') cfg.remove_eps_productions() for p in cfg.prod: print p, ':: ', [' '.join(pr) for pr in cfg.prod[p]]
Produces:
func_call :: ['identifier ( )', 'identifier ( arguments_opt )'] arguments_list :: ['argument', 'argument , arguments_list'] arguments_opt :: ['arguments_list']
As expected. And:
cfg = CFG() cfg.add_prod('B', 'A z A') cfg.add_prod('A', 'a | ') cfg.remove_eps_productions() for p in cfg.prod: print p, ':: ', [' '.join(pr) for pr in cfg.prod[p]]
Produces:
A :: ['a'] B :: ['A z', 'A z A', 'z', 'z A']
The implementation isn't tuned for efficiency, but for simplicity. Luckily, CFGs are usually small enough to make the runtime of this implementation manageable. Note that the preservation of epsilon productions in the start rule is implemented in the _find_eps_production method.
A:: a | eps B:: b | A
After removing the epsilon production from A we'll have:
A:: a B:: b | A | eps | http://eli.thegreenplace.net/2010/02/08/removing-epsilon-productions-from-context-free-grammars/ | CC-MAIN-2016-07 | refinedweb | 1,232 | 50.02 |
- ChatterFeed
- 0Best Answers
- 0Likes Received
- 0Likes Given
- 0Questions
- 12Replies
Drag and drop option in lightning web component
I have a requirement where there will be n number of layouts inside a venue hall and items should be assigned to them dynamically by drag n drop functionality. By this i need to capture which item was placed in which layout on save and the value should retain on page refresh.
can this be achieved using lightning component. Please guide.
Thanks in advance.
How can i assign a different value to a string dynamically so everytime the user insert a record the value change +1 ?
hi saleforce, i need to enable API access. I want test some thirst application.
roll up summary from formula field
I just performed a sample import, received an e-mail that stated the import was successful but I cannot find the contacts that were imported. Can you tell; me where to find them? Thanks!
Interview Questions for Senior Developer
The company I work for is looking for a Senior Salesforce Developer for one of our clients. I have been asked to come up with some questions to ask Senior Salesforce Developers that interviewers at my company can ask applicants to verify their knowledge. If someone could provide me with some questions that I could ask a Senior Salesforce Developer to verify their knowledge of salesforce and their experience that would be fantastic.
Thanks,
Salesforce Visualforce Page And Sencha Touch
We are trying to create a hybrid app using visualforce pages, @remoteaction and sencha touch 2.0. I'm having issues with understanding the proxy method on Sencha Touch in relationship to the @remoteaction and loading data. Is there a good tutorial?
Our goal is to use Sencha Designer 2 to create UI and copy code to a visualforce page. Then hooking up the remote action. I saw this at Cloudforce in SF in March 2012 but cannot find any documentation. The session was "Partner Session: Build a HTML5 App from scratch with the Sencha Framework"
Can anyone help?
Apex course DEV 531
Does anybody know if the DEV 531 course will be available for free anytime soon? I'm new to programming and tried learning from the DEV 501 course through iTunes but was too confusing. I'm not looking for certification, just need to learn asap for job opportunity.
How Do I Call a Public Void Method on my VisualForce page?
First, let me apologize if this is a dumb question. I'm trying to reach back and pull my college Java courses, but I'm struggling.
Anyways, I have a public void method that I want to query a custom object, and depending on a field value, add each record to a certain list. I am doing this because I have multiple lists I will be calling from my VisualForce page, so I have the public void method to build the lists, and then functions to return those lists. However, when I call my lists, none of them are populated with data, so it seems my public voide method is not ran.
Here is the public void method:
public class MarginEscalators { List<Pipeline__c> marginEscalatorsComplete = new List <Pipeline__c>(); List<Pipeline__c> marginEscalatorsIncomplete = new List <Pipeline__c>(); public void buildLists() { for (Pipeline__c me2011 : [SELECT ProjectID__C, Sales_Pole__c, Status__C FROM Pipeline__c WHERE Year__c = 2011) { if (me2011.Status__c == 'Complete') { marginEscalatorsComplete.add(me2011); } else if (me2011.Status__c == 'Incomplete') { marginEscalatorsIncomplete.add(me2011); } } }
Here are my two simple functions:
public List<Pipeline__c> getMarginEscalatorsComplete() { return marginEscalatorsComplete; } public List<Pipeline__c> getMarginEscalatorsIncomplete() { return marginEscalatorsIncomplete; }
I assumed that whenever I called either {!marginEscalatorsComplete} or {!marginEscalatorsIncomplete} on my VF page my public void method would be kicked off automatically. Is there something else I need to do in order to initiate the public void method?
The code below is how I've tried calling just one of my lists to see if my code worked, but it's not. I'm not getting errors, but I am getting a blank page.
<TABLE> <apex:repeat <TR><TD width="75"><apex:outputText</TD> <TD width="100">{!a.ProjectID__c}</TD> <TD width="40">{!a.Sales_Pole__c}</TD> <TD width="75">{!a.Status__c}</TD></TR> </apex:repeat> </TABLE>
Again, I think it's because my public void method is not get initiated and building the lists, and I'm not sure how to make sure it is run. If anyone has any insight, I would greatly appreciate it.
Thanks,
Mike | https://developer.salesforce.com/forums/ForumsProfile?UserId=005F0000003FgSUIA0&communityId=09aF00000004HMG | CC-MAIN-2021-04 | refinedweb | 735 | 63.19 |
Most of the Ajax features in jQuery, such as the very easy-to-use load, get, and post methods, are convenience methods that make using Ajax fairly easy for the most common scenarios. But when those convenience methods don't work for you, you can pull out the big Ajax gun in jQuery: the ajax method. This is the method that all the convenience methods ultimately use to make the actual call to the server. They do this by performing all the necessary setup and setting default values for various arguments that work for the particular Ajax task at hand.
You can use the ajax method directly to handle the scenarios not easily handled—or not handled at all—by the convenience methods. The ajax method has the deceptively simple syntaxes shown below, taking just one or two arguments, url and options. With the first syntax option, you can pass the URL for the target web service and optionally follow it with as many option settings as you need. Or you can just pass a value for the options argument, one of which could be the url. This method is the embodiment of the saying that the devil is in the details, and providing the right options for what you want to do is the details you need to wade through to make the method work for you.
jQuery.ajax( url, [ options ] ) jQuery.ajax( options )
If you look at the documentation for the ajax method on the jquery.com website, you'll find that there are more than 30 option settings you can use. There are a handful of options you'll use all the time, and many more that handle various esoteric situations. There are options that let you define event functions, control the response data format, give you access to the underlying XHR object, define additional request headers, and many, many more.
As you explore the options, it becomes easy to appreciate all the work that the convenience methods do for you. On the other hand, it is nice to have the option to use the ajax method directly.
Using the Ajax Method
Let's look at a simple example of using the ajax method by writing a Hello World application. The page calls the HelloWorld web service method to return a string. One way to do this is to use the very simple load method within a page:
$(function () { $('#buttonSays').click(function () { $('div').load('AjaxServices.asmx/HelloWorld'); }); });
But you can also use the ajax method directly to make the Ajax call to the server. The following code uses the type option to make a GET call and the url option to specify the location of the web service. Like the get and post methods, the ajax method is a utility function that doesn't directly update a matched set of elements, so the method call uses the success event option to define a function to update the div elements on the page with the response text. This anonymous function uses the text method of the response object to extract the text from the XML returned from the web service method and the html method to update the div elements.
$(function () { $('#buttonSays').click(function () { $.ajax({ type: 'GET', url: 'AjaxServices.asmx/HelloWorld', success: function (response) { $('div').html($(response).text()); } }); }); });
The screenshot below shows the results of clicking the Get Info button on the page. The result is the same as the earlier sample using the load method, where the code updates the three div elements on the page. The web service increments the number with each call, so that you know how many times it has run.
As you can see, using the ajax method is more complex than using the load method, both because you have to provide appropriate option values and because the method doesn't directly update a matched set of elements. You have to write a function for the success event to update the page.
The next sample is a bit more complex and realistic. The Boroughs.html page, shown in the screenshot below, displays information about some of the boroughs in Alaska when the user clicks the Get Boroughs button. A real web page might let the user filter the data in some way, but this sample simply grabs the information from the server and updates the page.
The body section of the page, shown below, is quite simple, consisting of a header, button, and div to receive the data.
Alaska Boroughs
The page calls a GetAllBoroughs web service method, which uses a Borough class to hold the data about each borough and a generic List object to hold the collection, as shown in the following code.
public class Borough { public string Name { get; set; } public int Population { get; set; } public short Created { get; set; } } List
Boroughs = new List { new Borough{Name = "Fairbanks North Star Borough", Population = 82840, Created = 1964}, new Borough{Name = "Municipality of Anchorage", Population = 260283, Created = 1975}, new Borough{Name = "Denali Borough", Population = 1893, Created = 1990}, new Borough{Name = "City and Borough of Juneau", Population = 30711, Created = 1970}, new Borough{Name = "North Slope Borough", Population = 7385, Created = 1972} };
The GetAllBoroughs web method simply returns the List object with the collection of boroughs. The code doesn't need to specify the format of the data; that's a detail that jQuery and the web service will work out together.
[WebMethod] public List
GetAllBoroughs() { return Boroughs; }
The following code in the web page makes the Ajax call to this web method. The method will make a POST call to the specified URL, providing an empty object literal as the data for the call. The content it wants back is JSON, specified in both the contentType and dataType options. The success option defines an event function to update the page, adding a p element with a description, a ul element with id boroughList, and looping through each row of data to build an li element with the data. Notice that the append method in the loop appends each new li element to the ul element that was earlier added dynamically. You're not limited to appending elements to elements in the original page source.
$(function () { $('#buttonGet').click(function () { $.ajax({ type: "POST", url: "AjaxServices.asmx/GetAllBoroughs", data: "{}", contentType: "application/json; charset=utf-8", dataType: "json", success: function (response) { $('#divResult').html('') .append('
Here are a few of Alaska boroughs:
');
var boroughs = response.d;
for (var i = 0; i < boroughs.length; i++) {
$('#boroughList').append('
- ' + boroughs[i].Name + ': ' + boroughs[i].Population + ' people, created in ' + boroughs[i].Created + ''); } $('#divResult').css('display', 'block'); } }); }); });
The number of options you set for the ajax method is highly dependent on what you need to accomplish with the method call. Recall that the web method didn't have anything that told it that the client would want JSON data. The ajax method requested the data in that format, and the web method delivered, as shown in Firebug in the screenshot below. Notice, too, that the action was a POST, as requested.
Most of the time, the load, get, and post Ajax methods in jQuery will do what you need. But when you need more control, check out the ajax method!
I adapted this material from a jQuery course I wrote for AppDev.
Don Kiely ([email protected]), MVP, MCSD, is a senior technology consultant, building custom applications and providing business and technology consulting services. His development work involves SQL Server, Visual Basic, C#, ASP.NET, and Microsoft Office. | http://www.itprotoday.com/web-development/using-jquerys-ajax-method | CC-MAIN-2018-09 | refinedweb | 1,243 | 59.53 |
Hi, I was having trouble trying to run some search function against a dataset code from a light box, onto another corresponding page. I have a button on the page that opens a light box that has user inputs. Based on these user inputs I want a repeater on the other page to yield results based on the information entered in the user inputs in the light box once the close button is clicked. I have tried putting this code onto the site page to no avail. Thanks! #code #page #button #userinputs
This is my search function code on the light box page:
Here is the URL to the parent page to help with confusion:
import wixData from 'wix-data'; export function FfilterButton_click(event) { wixData.query('paymentForm01') .contains('shortTextField',$w("#FinputName").value) // Searches by event name .contains('dropdownField22',$w("#FinputType").value) // Searches by event type .contains('shortTextField2',$w("#FinputLocation").value) // Searches by event location .contains('shortTextField3',$w("#FinputTime").value) // Searches by event time .find() .then((results)=>{ $w("#Frepeater1").data = results.items // Does not display in the repeater on the parent page }); }.
Alright, I will try that out, thank you! | https://www.wix.com/corvid/forum/community-discussion/how-to-run-code-from-one-page-on-another | CC-MAIN-2019-47 | refinedweb | 189 | 56.25 |
NAME
SSL_set_num_tickets, SSL_get_num_tickets, SSL_CTX_set_num_tickets, SSL_CTX_get_num_tickets - control the number of TLSv1.3 session tickets that are issued
SYNOPSIS
#include <openssl/ssl.h> int SSL_set_num_tickets(SSL *s, size_t num_tickets); size_t SSL_get_num_tickets(SSL *s); int SSL_CTX_set_num_tickets(SSL_CTX *ctx, size_t num_tickets); size_t SSL_CTX_get_num_tickets(SSL_CTX *ctx);)).
Tickets are also issued on receipt of a post-handshake certificate from the client following a request by the server using SSL_verify_client_post_handshake(3). These new tickets will be associated with the updated client identity (i.e. including their certificate and verification status). The number of tickets issued will normally be the same as was used for the initial handshake. If the initial handshake was a full handshake then SSL_set_num_tickets() can be called again prior to calling SSL_verify_client_post_handshake() to update the number of tickets that will be sent.() and SSL_set_num_tickets() return 1 on success or 0 on failure.
SSL_CTX_get_num_tickets() and SSL_get_num_tickets() return the number of tickets that have been previously set.
SEE ALSO
HISTORY
These functions were added in OpenSSL 1.1.1.
Licensed under the Apache License 2.0 (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at. | https://www.openssl.org/docs/manmaster/man3/SSL_get_num_tickets.html | CC-MAIN-2019-47 | refinedweb | 199 | 50.43 |
the following code produces an error:
import MySQLdb
db = MySQLdb.connect()
c = db.cursor()
c.executemany("INSERT INTO
tmp (
date,
id) VALUES (DATE(NOW()), %s);", ((1, ), (1, )))
TypeError: not all arguments converted during string formatting
while removing the DATE() fixes it:
import MySQLdb
db = MySQLdb.connect()
c = db.cursor()
c.executemany("INSERT INTO
tmp (
date,
id) VALUES (NOW(), %s);", ((1, ), (1, )))
works fine.
Seems to be related to the insert_values regex in cursors.py. Works fine in 1.2.3
Andy Dustman
2013-02-12
It's really hard to deal with full SQL expressions as arguments to execute or executemany, since there can be any number of nested parenthesis, intermixed with string literals. 1.2.3 had different parsing bugs...
What's probably needed is a real parser and not a regex. See for some solutions.
Hi Andy, thanks for your response, we worked around the issue in our code for now.
Can you tell me the reason why you need to parse out the individual insert values so that I fully understand the issue?
After looking at the code in cursors.py it seems that you try to get everything in between the parenthesis in "insert ... values (%s)" and then join that up with commas after calling db.literal() on each set of parameters for it to become "insert ... values (%s,%s,%s...)".
In what case would you actually need to parse each individual argument in the SQL rather than just use the full argument set (ie, why would a simple r'\svalues\s*((.+))' not work)?
Andy Dustman
2013-02-18
It's only needed so that executemany can turn the INSERT statement into a multi-row INSERT. The VALUES part is then repeated for each row being inserted, and a single statement is executed. The execute method doesn't try to pick apart the query at all.
Andy Dustman
2013-02-18
Also, r'\svalues*((.+))' can match other things (potentially things in string literals) an may fail to match the entire VALUES clause, specifically because of expressions which may have nested parenthesis or string literals, and also it's legal to use dictionary-type string interpolation, i.e. VALUES (%(key1)s, %(key2)s) | http://sourceforge.net/p/mysql-python/discussion/70460/thread/63a79434/ | CC-MAIN-2014-52 | refinedweb | 366 | 57.57 |
hi friends.! need little help. I couldn't find the way to check a stack is empty before pop up a element. when my program is running its give a error. please help me guys.. thank you in advance....
Recommended Answers
Do you have a code sample that is failing?
You can use the .Count property or (if you're using Linq) the .Any() method.
using System; using System.Collections.Generic; using System.Linq; namespace DW_404932_CS_CON { class Program { static void Main(string[] args) { Stack<string> stk_str = new Stack<string>(20); string …
All 4 Replies
baby_c commented: thank you from baby_C +3
Be a part of the DaniWeb community
We're a friendly, industry-focused community of 1.20 million developers, IT pros, digital marketers, and technology enthusiasts learning and sharing knowledge. | https://www.daniweb.com/programming/software-development/threads/404932/chech-stack-empty-c | CC-MAIN-2021-04 | refinedweb | 131 | 61.43 |
Since the introduction of Custom Data Attributes in the HTML5 spec, developers have discovered a whole new world of possibilities. When you combine the ability to store arbitrary information in an HTML element with the power of JavaScript, you get some very interesting alternative development experiences.
One of the places where it has become very popular in the JavaScript community at large, is providing information to the underlying JavaScript framework (such as MVVM) which allows the executing code to mutate the HTML element; thusly doing things such as binding it to data, transforming it into a custom UI control or getting configuration values off of it that can’t be stored in the standard provided HTML attributes.
Kendo UI uses this syntax heavily. In order to be able to use the full power of Kendo UI, you really need to know what you can bind with data attributes, when to use it so it benefits you most and how to use it in concert with Kendo UI as a whole. In order to provide some clarity around this, it really needs to be discussed in two separate contexts.
Lets look at the data-bind specific syntax first as it is essential to the Kendo UI MVVM implementation as well as Knockout. To negate the risk of completely repeating myself, I’m not going to cover MVVM conceptually. You can check out a post here that covers what it is and how you use it.
Lets dive in.
Assume that we have a simple select list and we want to bind it to the view model. We declare the HTML and the view model.
This is very basic, but we are using the data-bind to bind the source of the select to the view model things array. Also note that you can bind to a function as well as I’m doing with data-bind=”text: thingSelection” on the h2.
But the question now becomes, what else can I bind and how do I do it?
This is well documented in our framework section of the documentation. On the Overview section, you will find a high level explanation of how to bind with the Kendo UI MVVM framework. Under the Bindings sub-topic, you will find a complete list of what you can bind to. For reference, I have included that list here as well.
We have provided samples with each of these binding declarations to help you understand how and when you can use them.
For instance, lets look at styling the drop down a bit using the data-bind syntax with style. I can set the drop down width as well as setting its margins by using data-bind and the above referenced style.
data-bind="style: { width: thingsWidth, fontSize: thingsFontSize}"
The width and font-size are now bound to view model properties and the select element will be styled appropriately based on the values in the view model. The h2 also has its color bound to the view model. When those properties change, so will the style on the select element. Feel free to try it for yourself in the above fiddle.
Whenever you are referencing a css property that has a dash, you eliminate the dash and use lower camel case notation (i.e. fontSize instead of font-size) as is documented here.
The other way of using data attributes with Kendo UI is something that we call Declarative Initialization. This is the concept of initializing widgets and configuring those widgets with data attributes instead of having to select each element that we want to turn into a widget and configure it with JavaScript.
Let’s take a look at the above example using Declarative Initialization to turn the select into a Kendo UI DropDown List. We do this by setting the data-role attribute to dropdownlist. Normally this would be done by calling kendoDropDownList on the element. Thanks to declarative binding, we don’t have to do that anymore.
Notice that we lost our previous styling. This is because the ordinary select is now a Kendo UI DropDown which has it’s own styling. Everything else keeps right on working exactly they way you would expect.
Now we could have several drop downs on a page and we only need to configure them with declarative data attributes instead of having to select each and every one with jQuery and call the kendoDropDownList method on them.
YES. It has always been our goal to not force things on you. Declarative Bindings are powerful, but you don’t need to buy into the MVVM pattern to use them.
Let’s rework the above example and get rid of the view model, but this time we’ll bind it to a DataSource and let declarative initialization do all the work for us.
The kendo.init call will initialize whatever piece of the UI is passed in as a reference. Notice that I am only calling init on the main div. For maximum performance, specify the container for Kendo UI to initialize so it doesn’t have to walk the entire document to find the relevant widgets.
You can see that I have supplied configuration attributes for the drop down list such as data-source which binds the drop down to the Kendo UI DataSource. Also, I have specified configuration attributes like data-text-field and data-value-field. You can specify any configuration value with declarative initialization, and it will work. Attributes like dataTextField become data-text-field. Bindings are specified as lower case separated by dashes.
Events can be bound as well. Above, I bind to the change event. When the event is called, this will be the object that fired the event. In this case, that’s our drop down list. I can then get its text value and update the h2 tag.
I made that all caps with the hopes that you will read this last section as it’s kind of important.
1. Bindings are not JavaScript. Do not try and do this…
data-bind="value: if ($(“#div”).html() === “somevalue”)…
That won’t work. You should be doing that in your view model. If you aren’t using a view model and you have a good use case for sticking JavaScript into your markup, consider using a Kendo UI Template instead.
2. Kendo UI is not the only framework that uses data attributes for binding. In fact, its pretty common. If you find yourself in a situation where you need to mitigate collisions with Kendo UI and some other JS library, you can provide a namespace for Kendo UI and then reference that namespace instead.
kendo.ns="kendo";
// Then
data-bind="value: someValue" // becomes
data-kendo-bind="value: someValue"
It’s easy when you start using these bindings to try and guess what you can and can’t do. Instead, refer to the docs for the MVVM framework bindings here, and the standard widget declarative bindings here. When doing initialization of widgets using configuration attributes in the declarative bindings, simply look up the configuration values so you know what you can and can’t use to configure the widget.
We strive to provide you with all of the resources that you need to quickly develop solutions with Kendo UI. We take your feedback seriously and we are always looking for better avenues for communicating important framework ideas to you in a way that is straightforward and easy to find. Hopefully this post sheds some more light on binding with data attributes and where you can find the necessary documentation to get up and running.
Special thanks to Atanas Korchev from the Kendo UI team for providing much of the content and direction for this article.!. | https://www.telerik.com/blogs/mvvm-declarative-initialization-and-html5-data-attributes | CC-MAIN-2018-47 | refinedweb | 1,291 | 70.33 |
CS 598CSC: Combinatorial Optimization Lecture date: 2/4/2010
Instructor: Chandra Chekuri    Scribe: David Morrison

Gomory-Hu Trees

(The work in this section closely follows [3].)

Let G = (V, E) be an undirected graph with non-negative edge capacities defined by c : E → R_+. We would like to be able to compute the global minimum cut on the graph (i.e., the minimum over all min-cuts between pairs of vertices s and t). Clearly, this can be done by computing the minimum cut for all n(n - 1)/2 pairs of vertices, but this can take a lot of time. Gomory and Hu showed that the number of distinct min-cut values in the graph is at most n - 1, and furthermore that there is an efficient tree structure that can be maintained to compute this set of distinct cuts [1] (note that there is also a very nice randomized algorithm due to Karger and Stein that can compute the global minimum cut in near-linear time with high probability [2]).

An important note is that Gomory-Hu trees work because the cut function is both submodular and symmetric. We will see later that any submodular, symmetric function induces a Gomory-Hu tree.

Definition 1. Given a graph G = (V, E), we define α_G(u, v) to be the value of a minimum u, v cut in G. Furthermore, for a set of vertices U, we define δ(U) to be the set of edges with exactly one endpoint in U.

Definition 2. Let G, c, and α_G be defined as above. Then, a tree T = (V(G), E_T) is a Gomory-Hu tree if for all st ∈ E_T, δ(W) is a minimum s, t cut in G, where W is the vertex set of one component of T - st.

The natural question is whether such a tree even exists; we will return to this question shortly. However, if we are given such a tree for an arbitrary graph G, we know that this tree obeys some very nice properties. In particular, we can label the edges of the tree with the values of the minimum cuts, as the following theorem shows (an example of this can be seen in Figure 1):

Theorem 1. Let T be a Gomory-Hu tree for a graph G = (V, E). For u, v ∈ V, let st be the edge on the unique path in T from u to v such that α_G(s, t) is minimized. Then α_G(u, v) = α_G(s, t), and the cut δ(W) induced by T - st is a u, v minimum cut in G. Thus α_G(s, t) = α_T(s, t) for each s, t ∈ V, where the capacity of an edge st in T is set to α_G(s, t).

Proof. We first note that α_G obeys a triangle inequality; that is, α_G(a, b) ≥ min(α_G(a, c), α_G(c, b)) for any undirected graph G and vertices a, b, c (to see this, note that c has to be on one side or the other of any minimum a, b cut).

Consider the path from u to v in T. If uv = st, then α_G(u, v) = α_G(s, t). Otherwise, let w ≠ v be the neighbor of u on the u-v path in T. By the triangle inequality mentioned above, α_G(u, v) ≥ min(α_G(u, w), α_G(w, v)). If uw = st, then α_G(u, v) ≥ α_G(s, t); otherwise, by induction on the path length, we have that α_G(u, v) ≥ α_G(w, v) ≥ α_G(s, t). On the other hand, by the definition of Gomory-Hu trees, the cut induced by T - st is a valid u, v cut, so α_G(u, v) ≤ α_G(s, t). Thus we have α_G(u, v) = α_G(s, t), and the cut induced by T - st is a u, v minimum cut in G.
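As an aside (not part of the original notes), Theorem 1 translates directly into a query procedure: given a Gomory-Hu tree stored with edge capacities α_G(s, t), the minimum u-v cut value is the smallest capacity on the unique u-v tree path, and the global minimum cut of G is the smallest capacity of any tree edge. A minimal sketch in Python, on a made-up toy tree:

```python
# Hypothetical toy example of querying a Gomory-Hu tree per Theorem 1.
# The tree is an adjacency map {u: {v: weight}}; its weights are invented.
def alpha(tree, u, v):
    """Return alpha_G(u, v): the minimum edge weight on the unique u-v tree path."""
    stack = [(u, None, float("inf"))]  # (node, parent, min weight seen so far)
    while stack:
        node, parent, best = stack.pop()
        if node == v:
            return best
        for nbr, w in tree[node].items():
            if nbr != parent:
                stack.append((nbr, node, min(best, w)))
    raise ValueError("u and v lie in different components")

tree = {
    "a": {"b": 5}, "b": {"a": 5, "c": 3, "d": 7},
    "c": {"b": 3}, "d": {"b": 7, "e": 4}, "e": {"d": 4},
}
print(alpha(tree, "a", "e"))                           # 4: min of 5, 7, 4
print(min(w for u in tree for w in tree[u].values()))  # global min cut: 3
```

Walking a tree path takes O(n) time, so after the one-time construction every pairwise min-cut query is answered without any further flow computations.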
[Figure 1: A graph G with its corresponding Gomory-Hu tree [4]; the original edge and cut labels are omitted here.]

Remark. Gomory-Hu trees can be (and often are) defined by asking for the property described in Theorem 1. However, the proof shows that the basic requirement in Definition 2 implies the other property.

The above theorem shows that we can represent compactly all of the minimum cuts in an undirected graph. Several non-trivial facts about undirected graphs fall out of the definition and the above result. The only remaining question is: does such a tree even exist? And if so, how does one compute it efficiently? We will answer both questions by giving a constructive proof of Gomory-Hu trees for any undirected graph G. However, first we must discuss some properties of submodular functions.

Definition 3. Given a finite set E, f : 2^E → R is submodular if for all A, B ⊆ E,

    f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B).

An alternate definition, based on the idea of decreasing marginal value, is the following:

Definition 4. Given E and f as above, f is submodular if

    f(A + e) - f(A) ≥ f(B + e) - f(B)

for all A ⊆ B and e ∈ E ∖ B.

To see the equivalence of these definitions, let f_A(e) = f(A + e) - f(A), and similarly for f_B(e). Take any A, B ⊆ E with A ⊆ B and any e ∈ E ∖ B, and let f be submodular according to Definition 3. Then

    f(A + e) + f(B) ≥ f((A + e) ∪ B) + f((A + e) ∩ B) = f(B + e) + f(A).

Rearranging shows that f_A(e) ≥ f_B(e). Showing that Definition 4 implies Definition 3 is slightly more complicated, but can be done (Exercise).

There are three types of submodular functions that will be of interest:

1. Arbitrary submodular functions.
2. Non-negative submodular functions (range is [0, ∞)). Two subclasses of non-negative submodular functions are monotone (f(A) ≤ f(B) whenever A ⊆ B) and non-monotone.
3. Symmetric submodular functions, where f(A) = f(E ∖ A) for all A ⊆ E.

As an example of a submodular function, consider a graph G = (V, E) with capacity function c : E → R_+. Then f : 2^V → R_+ defined by f(A) = c(δ(A)) (i.e., the capacity of the cut induced by a set A) is submodular.

To see this, notice that f(A) + f(B) = a + b + c + d + e + f for arbitrary A and B, where a, b, c, d, e, f are the capacities of the edge classes shown in Figure 2. Here, a (for example) represents the total capacity of edges with one endpoint in A and the other in V ∖ (A ∪ B). Also notice that f(A ∪ B) + f(A ∩ B) = a + b + c + d + e, and since all values are non-negative, we see that f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B), satisfying Definition 3.

[Figure 2: Given a graph G and two sets A, B ⊆ V, this diagram shows all of the possible classes of edges of interest in G. In particular, there could be edges with both endpoints in V ∖ (A ∪ B), A, or B that are not shown here.]
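Since Definition 3 is just a finite family of inequalities, the claim about the cut function can be sanity-checked mechanically on small instances. The following sketch (illustrative only; the four-vertex graph and its capacities are invented) verifies symmetry and submodularity of f(A) = c(δ(A)) by brute force:

```python
from itertools import combinations

# Invented toy instance: check symmetry and Definition 3 for f(A) = c(delta(A)).
V = frozenset({"a", "b", "c", "d"})
cap = {("a", "b"): 3, ("b", "c"): 1, ("a", "c"): 2, ("c", "d"): 4}

def f(A):
    """Capacity of the cut delta(A): edges with exactly one endpoint in A."""
    return sum(c for (u, v), c in cap.items() if (u in A) != (v in A))

def subsets(S):
    S = list(S)
    return [frozenset(T) for k in range(len(S) + 1) for T in combinations(S, k)]

for A in subsets(V):
    assert f(A) == f(V - A)                        # symmetry
    for B in subsets(V):
        assert f(A) + f(B) >= f(A | B) + f(A & B)  # Definition 3
print("f(A) = c(delta(A)) is symmetric and submodular on this toy graph")
```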
Exercise 1. Show that the cut function on the vertices of a directed graph is submodular.

Another nice property of this function f is that it is posi-modular, meaning that f(A) + f(B) ≥ f(A ∖ B) + f(B ∖ A). In fact, posi-modularity follows for any symmetric submodular function:

    f(A) + f(B) = f(V ∖ A) + f(B)
                ≥ f((V ∖ A) ∪ B) + f((V ∖ A) ∩ B)
                = f(V ∖ (A ∖ B)) + f(B ∖ A)
                = f(A ∖ B) + f(B ∖ A).

We use symmetry in the first and last lines above.

In fact, it turns out that the above two properties of the cut function are the only two properties necessary for the proof of existence of Gomory-Hu trees. As mentioned before, this will give us a Gomory-Hu tree for any non-negative symmetric submodular function. We now prove the following lemma, which will be instrumental in constructing Gomory-Hu trees.

Key Lemma. Let δ(W) be an s, t minimum cut in a graph G with respect to a capacity function c. Then for any u, v ∈ W with u ≠ v, there is a u, v minimum cut δ(X) with X ⊆ W.

Proof. Let δ(X) be any u, v minimum cut that crosses W. Suppose without loss of generality that s ∈ W, s ∉ X, and u ∈ X; if one of these does not hold, we can swap the roles of s and t, or of X and V ∖ X. Then there are two cases to consider.

Case 1: t ∉ X (see Figure 3). Then, since c is submodular,

    c(δ(X)) + c(δ(W)) ≥ c(δ(X ∩ W)) + c(δ(X ∪ W)).    (1)

But notice that δ(X ∩ W) is a u, v cut, so since δ(X) is a minimum u, v cut, we have c(δ(X ∩ W)) ≥ c(δ(X)). Also, δ(X ∪ W) is an s, t cut, so c(δ(X ∪ W)) ≥ c(δ(W)). Thus, equality holds in equation (1), and X ∩ W induces a minimum u, v cut.
[Figure 3: δ(W) is a minimum s, t cut, and δ(X) is a minimum u, v cut that crosses W. The diagram shows the situation in Case 1; a similar picture can be drawn for Case 2.]

Case 2: t ∈ X. Since c is posi-modular, we have that

    c(δ(X)) + c(δ(W)) ≥ c(δ(W ∖ X)) + c(δ(X ∖ W)).    (2)

However, δ(W ∖ X) is a u, v cut, so c(δ(W ∖ X)) ≥ c(δ(X)). Similarly, δ(X ∖ W) is an s, t cut, so c(δ(X ∖ W)) ≥ c(δ(W)). Therefore, equality holds in equation (2), and W ∖ X induces a minimum u, v cut.

The above argument shows that minimum cuts can be uncrossed, a technique that is useful in many settings. In order to construct a Gomory-Hu tree for a graph, we need to consider a slightly generalized definition:

Definition 5. Let G = (V, E) and R ⊆ V. Then a Gomory-Hu tree for R in G is a tree T = (R, E_T) together with a partition (C_r : r ∈ R) of V such that

1. for all r ∈ R, r ∈ C_r; and
2. for all st ∈ E_T, T - st induces a minimum cut in G between s and t, given by δ(U) where U = ∪_{r ∈ X} C_r and X is the vertex set of a component of T - st.

Notice that a Gomory-Hu tree for G is simply a generalized Gomory-Hu tree with R = V.
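The construction below repeatedly shrinks one side of a minimum cut into a single super-vertex. As a reading aid for the pseudocode, here is one possible sketch of that contraction step (my own, not from the notes); it merges a vertex set S into a fresh vertex and adds up the capacities of any parallel edges that result:

```python
# Hypothetical helper: contract a vertex set S of a capacitated graph into a
# single super-vertex named `label`. The graph is a dict {frozenset({u,v}): cap}.
def contract(cap, S, label):
    S = set(S)
    new_cap = {}
    for e, c in cap.items():
        u, v = tuple(e)
        u2 = label if u in S else u
        v2 = label if v in S else v
        if u2 == v2:          # edge entirely inside S disappears
            continue
        key = frozenset({u2, v2})
        new_cap[key] = new_cap.get(key, 0) + c  # merge parallel edges
    return new_cap

cap = {frozenset({"a", "b"}): 2, frozenset({"b", "c"}): 5, frozenset({"a", "c"}): 1}
print(contract(cap, {"b", "c"}, "v1"))  # {frozenset({'a', 'v1'}): 3}
```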
Intuitively, we associate with each vertex r in the tree a bucket C_r that contains all of the vertices that have to appear on the same side as r in some minimum cut. This allows us to define the algorithm GomoryHuAlg.

Algorithm 1: GomoryHuAlg(G, R)

    if |R| = 1 then
        return T = ({r}, ∅) and the partition C_r = V
    else
        Let r1, r2 ∈ R, and let δ(W) be an r1, r2 minimum cut
        Create two subinstances of the problem:
            G1 = G with V ∖ W shrunk to a single vertex v1;  R1 = R ∩ W
            G2 = G with W shrunk to a single vertex v2;      R2 = R ∖ W
        Now we recurse:
            T1, (C^1_r : r ∈ R1) = GomoryHuAlg(G1, R1)
            T2, (C^2_r : r ∈ R2) = GomoryHuAlg(G2, R2)
        Note that r' and r'' below are not necessarily r1 and r2!
        Let r' ∈ R1 be the vertex such that v1 ∈ C^1_{r'}
        Let r'' ∈ R2 be the vertex such that v2 ∈ C^2_{r''}
        (See Figure 4)
        T = (R1 ∪ R2, E_{T1} ∪ E_{T2} ∪ {r'r''})
        (C_r : r ∈ R) = ComputePartitions(R1, R2, C^1, C^2, r', r'')
        return T, (C_r : r ∈ R)
    end if

Algorithm 2: ComputePartitions(R1, R2, C^1, C^2, r', r'')

    We use the returned partitions, except that v1 and v2 are removed from C^1_{r'} and C^2_{r''}, respectively:
        for r ∈ R1, r ≠ r':   C_r = C^1_r
        for r ∈ R2, r ≠ r'':  C_r = C^2_r
        C_{r'} = C^1_{r'} ∖ {v1},  C_{r''} = C^2_{r''} ∖ {v2}
    return (C_r : r ∈ R)

[Figure 4: T1 and T2 have been recursively computed by GomoryHuAlg. We then find r' such that v1 (the shrunken vertex corresponding to V ∖ W in G1) lies in the partition class of r', and similarly r'' for v2. To compute T, we connect r' and r'' and recompute the partitions for the whole tree according to ComputePartitions.]

Theorem 3. GomoryHuAlg returns a valid Gomory-Hu tree for a set R.

Proof. We need to show that every st ∈ E_T satisfies the key property of Gomory-Hu trees; that is, that T - st induces a minimum cut in G between s and t. The base case is trivial. So suppose first that st ∈ E_{T1} or st ∈ E_{T2}. By the Key Lemma, the vertices outside of G1 (respectively G2) have no effect on the relevant minimum cuts, so we can ignore them, and by the induction hypothesis T1 and T2 are correct. Thus, the only edge we need to worry about is the edge we added from r' to r''.

First, consider the simple case in which α_G(r1, r2) is minimum over all pairs of vertices in R. Then in particular α_G(r1, r2) ≤ α_G(r', r''); since δ(W) is an r', r'' cut (r' ∈ W and r'' ∈ V ∖ W) of capacity α_G(r1, r2), it is a minimum r', r'' cut, and we are done.

However, in general this may not be the case. Let δ(W) be a minimum cut between r1 and r2, and suppose that there is a smaller r', r'' cut δ(X) than the one W induces; that is, c(δ(X)) < c(δ(W)). Assume without loss of generality that r1 ∈ W (recall that r' ∈ R1 ⊆ W and r'' ∈ R2 ⊆ V ∖ W), and that r' ∈ X, swapping X and V ∖ X if necessary.
Notice that if X separated r1 and r2, then δ(X) would be an r1, r2 cut of capacity less than c(δ(W)) = α_G(r1, r2), which is impossible; so r1 and r2 lie on the same side of X. Since r' ∈ X and r'' ∉ X, it follows that X separates either r1 from r' or r2 from r''; assume the former (the other case is symmetric, arguing in T2). By our Key Lemma applied to the pair r1, r' ∈ W, we can then uncross and assume that X ⊆ W.

Now, consider the path from r' to r1 in T1. There exists an edge uv on this path whose weight in T1, w1(uv), is at most c(δ(X)). Because T1 is a Gomory-Hu tree for R1 in G1, the edge uv induces a cut in G of capacity w1(uv) that separates r1 from r', and since v1 ∈ C^1_{r'}, this cut also separates r1 from every vertex of V ∖ W, in particular from r2. But this gives an r1, r2 cut of capacity less than c(δ(W)), contradicting the fact that δ(W) is an r1, r2 minimum cut. Therefore, we can pick r1 and r2 arbitrarily, and GomoryHuAlg is correct.

This immediately implies the following corollary:

Corollary 4. A Gomory-Hu tree for R ⊆ V in G can be computed in the time needed to compute |R| - 1 minimum cuts in graphs of size at most that of G.

Finally, we present the following alternative proof of the last step of Theorem 3 (that is, showing that we can choose r1 and r2 arbitrarily in GomoryHuAlg). As before, let δ(W) be an r1, r2 minimum cut, and assume that r1 ∈ W and r2 ∈ V ∖ W. Assume for simplicity that r1 ≠ r' and r2 ≠ r'' (the other cases are similar). We claim that α_{G1}(r1, r') = α_G(r1, r') ≥ α_G(r1, r2). To see this, note that if α_{G1}(r1, r') < α_G(r1, r2), then there is an edge uv ∈ E_{T1} on the path from r1 to r' that has weight less than α_G(r1, r2), which (since v1 ∈ C^1_{r'}) gives a smaller r1, r2 cut in G than δ(W). For similar reasons, we see that α_G(r2, r'') ≥ α_G(r1, r2). Thus, by the triangle inequality we have

    α_G(r', r'') ≥ min(α_G(r', r1), α_G(r1, r2), α_G(r2, r'')) ≥ α_G(r1, r2),

which completes the proof, since δ(W) is an r', r'' cut of capacity α_G(r1, r2).
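For concreteness, here is an illustrative Python rendering of GomoryHuAlg and ComputePartitions combined (assumptions: contract is the helper sketched earlier; min_cut(cap, s, t) is a black-box max-flow routine returning a pair (value, W) with s ∈ W and t ∉ W; contracted vertices get fresh names of the form "_c<k>", assumed not to collide with real vertex names):

```python
import itertools

_fresh = itertools.count()  # assumption: no real vertex name starts with "_c"

def gomory_hu(cap, V, R, min_cut):
    """Return (tree, parts): tree is {r: {r2: weight}}, parts is {r: set of vertices}."""
    R, V = set(R), set(V)
    if len(R) == 1:
        (r,) = R
        return {r: {}}, {r: V}
    r1, r2 = list(R)[:2]                      # an arbitrary pair works (Theorem 3)
    value, W = min_cut(cap, r1, r2)           # r1 in W, r2 not in W
    v1, v2 = f"_c{next(_fresh)}", f"_c{next(_fresh)}"
    T1, P1 = gomory_hu(contract(cap, V - W, v1), W | {v1}, R & W, min_cut)
    T2, P2 = gomory_hu(contract(cap, W, v2), (V - W) | {v2}, R - W, min_cut)
    rp = next(r for r in P1 if v1 in P1[r])   # r'  with v1 in C^1_{r'}
    rq = next(r for r in P2 if v2 in P2[r])   # r'' with v2 in C^2_{r''}
    tree, parts = {**T1, **T2}, {**P1, **P2}
    tree[rp][rq] = tree[rq][rp] = value       # glue: new edge r'r'' of weight c(delta(W))
    parts[rp] = parts[rp] - {v1}              # ComputePartitions: drop the
    parts[rq] = parts[rq] - {v2}              # contracted pseudo-vertices
    return tree, parts
```

Each recursive call performs one minimum-cut computation and splits R into two strictly smaller parts, so exactly |R| - 1 cuts are computed in total, matching Corollary 4.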
7 Exercise. For any undirected graph there is a pair of nodes s, t and an s-t minimum cut consisting of a singleton node (either s or t). Such a pair is called a pendant pair. Exercise 3. Let G be a graph such that deg(v) k for all v V. Show that there is some pair s, t such that α G (s, t) k. Notice that the proof of the correctness of the algorithm relied only on the key lemma which in turn used only the symmetry and submodularity of the cut function. One can directly extend the proof to show the following theorem. Theorem 5. Let V be a ground set, and let f : V R + be a symmetric submodular function. Given s, t in V, define the minimum cut between s and t as α f (s, t) = min f(w ) W V, W {s,t} =1 Then, there is a Gomory-Hu tree that represents α f. That is, there is a tree T = (V, E T ) and a capacity function c : E T R + such that α f (s, t) = α T (s, t) for all s, t in V, and moreover, the minimum cut in T induces a minimum cut according to f for each s, t. Exercise 4. Let G = (V, ξ) be a hypergraph. That is, each hyper-edge S ξ is a subset of V. Define f : V R + as f(w ) = δ(w ), where S ξ is in δ(w ) iff S W and S \ W are non-empty. Show that f is a symmetric, submodular function. References [1] R. E. Gomory, T. C. Hu. Multi-terminal network flows. Journal of the Society for Industrial and Applied Mathematics, vol. 9, [] D. R. Karger. Minimum cuts in near-linear time. Journal of the ACM, vol. 47, 000. [3] A. Schrijver. Combinatorial Optimization. Springer-Verlag Berlin Heidelberg, 003. Chapter [4] V. Vazirani. Approximation Algorithms. Springer, 004. Polyhedra and Linear Programming
CS 598CSC: Combinatorial Optimization Lecture date: January 21, 2009 Instructor: Chandra Chekuri Scribe: Sungjin Im 1 Polyhedra and Linear Programming In this lecture, we will cover some basic material
Lecture 22: November 10
CS271 Randomness & Computation Fall 2011 Lecture 22: November 10 Lecturer: Alistair Sinclair Based on scribe notes by Rafael Frongillo Disclaimer: These notes have not been subjected to the usual scrutiny
I. GROUPS: BASIC DEFINITIONS AND EXAMPLES
I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called
Large induced subgraphs with all degrees odd
Large induced subgraphs with all degrees odd A.D. Scott Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, England Abstract: We prove that every connected graph of order
Connectivity and cuts
Math 104, Graph Theory February 19, 2013 Measure of connectivity How connected are each of these graphs? > increasing connectivity > I G 1 is a tree, so it is a connected graph w/minimum # of edges. Every
Lecture Notes on Spanning Trees
Lecture Notes on Spanning Trees 15-122: Principles of Imperative Computation Frank Pfenning Lecture 26 April 26, 2011 1 Introduction In this lecture we introduce graphs. Graphs provide a uniform model
/
Sample Problems in Discrete Mathematics
Sample Problems in Discrete Mathematics This handout lists some sample problems that you should be able to solve as a pre-requisite to Computer Algorithms Try to solve all of them You should also read
Relations Graphical View
Relations Slides by Christopher M. Bourke Instructor: Berthe Y. Choueiry Introduction Recall that a relation between elements of two sets is a subset of their Cartesian product (of ordered pairs). A binary
Markov random fields and Gibbs measures
Chapter Markov random fields and Gibbs measures 1. Conditional independence Suppose X i is a random element of (X i, B i ), for i = 1, 2, 3, with all X i defined on the same probability space (.F,
136 CHAPTER 4. INDUCTION, GRAPHS AND TREES
136 TER 4. INDUCTION, GRHS ND TREES 4.3 Graphs In this chapter we introduce a fundamental structural idea of discrete mathematics, that of a graph. Many situations in the applications of discrete mathematics
A CHARACTERIZATION OF TREE TYPE
A CHARACTERIZATION OF TREE TYPE LON H MITCHELL Abstract Let L(G) be the Laplacian matrix of a simple graph G The characteristic valuation associated with the algebraic connectivity a(g) is used in classifying
Combinatorics: The Fine Art of Counting
Combinatorics: The Fine Art of Counting Week 9 Lecture Notes Graph Theory For completeness I have included the definitions from last week s lecture which we will be using in today s lecture
Why graph clustering is useful?
Graph Clustering Why graph clustering is useful? Distance matrices are graphs as useful as any other clustering Identification of communities in social networks Webpage clustering for better data management
Graph Theory Problems and Solutions
raph Theory Problems and Solutions Tom Davis [email protected] November, 005 Problems. Prove that the sum of the degrees of the vertices of any finite graph is
On-line secret sharing
On-line secret sharing László Csirmaz Gábor Tardos Abstract In a perfect secret sharing scheme the dealer distributes shares to participants so that qualified subsets can recover the secret, while unqualified
INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS
INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS STEVEN P. LALLEY AND ANDREW NOBEL Abstract. It is shown that there are no consistent decision rules for the hypothesis testing problem
1.3 Induction and Other Proof Techniques
4CHAPTER 1. INTRODUCTORY MATERIAL: SETS, FUNCTIONS AND MATHEMATICAL INDU 1.3 Induction and Other Proof Techniques The purpose of this section is to study the proof technique known as mathematical induction.
Analysis of Algorithms, I
Analysis of Algorithms, I CSOR W4231.002 Eleni Drinea Computer Science Department Columbia University Thursday, February 26, 2015 Outline 1 Recap 2 Representing graphs 3 Breadth-first search (BFS) 4 Applications
3. Equivalence Relations. Discussion
3. EQUIVALENCE RELATIONS 33 3. Equivalence Relations 3.1. Definition of an Equivalence Relations. Definition 3.1.1. A relation R on a set A is an equivalence relation if and only if R is reflexive, symmetric,
THE BANACH CONTRACTION PRINCIPLE. Contents
THE BANACH CONTRACTION PRINCIPLE ALEX PONIECKI Abstract. This paper will study contractions of metric spaces. To do this, we will mainly use tools from topology. We will give some examples of contract
SCORE SETS IN ORIENTED GRAPHS
Applicable Analysis and Discrete Mathematics, 2 (2008), 107 113. Available electronically at SCORE SETS IN ORIENTED GRAPHS S. Pirzada, T. A. Naikoo The score of a vertex v in
Lecture 4: The Chromatic Number
Introduction to Graph Theory Instructor: Padraic Bartlett Lecture 4: The Chromatic Number Week 1 Mathcamp 2011 In our discussion of bipartite graphs, we mentioned that one way to classify bipartite graphs
Approximation Algorithms
Approximation Algorithms or: How I Learned to Stop Worrying and Deal with NP-Completeness Ong Jit Sheng, Jonathan (A0073924B) March, 2012 Overview Key Results (I) General techniques: Greedy
Electrical Resistances in Products of Graphs
Electrical Resistances in Products of Graphs By Shelley Welke Under the direction of Dr. John S. Caughman In partial fulfillment of the requirements for the degree of: Masters of Science in Teaching Mathematics
Uniform Multicommodity Flow through the Complete Graph with Random Edge-capacities
Combinatorics, Probability and Computing (2005) 00, 000 000. c 2005 Cambridge University Press DOI: 10.1017/S0000000000000000 Printed in the United Kingdom Uniform Multicommodity Flow through the Complete
1 Definitions. Supplementary Material for: Digraphs. Concept graphs
Supplementary Material for: van Rooij, I., Evans, P., Müller, M., Gedge, J. & Wareham, T. (2008). Identifying Sources of Intractability in Cognitive Models: An Illustration using Analogical Structure 3: Linear Programming Relaxations and Rounding
Lecture 3: Linear Programming Relaxations and Rounding 1 Approximation Algorithms and Linear Relaxations For the time being, suppose we have a minimization problem. Many times, the problem at hand can
8. Matchings and Factors
8. Matchings and Factors Consider the formation of an executive council by the parliament committee. Each committee needs to designate one of its members as an official representative to sit on the council,
Lecture 1: Course overview, circuits, and formulas
Lecture 1: Course overview, circuits, and formulas Topics in Complexity Theory and Pseudorandomness (Spring 2013) Rutgers University Swastik Kopparty Scribes: John Kim, Ben Lund 1 Course Information Swastik
On Total Domination in Graphs
University of Houston - Downtown Senior Project - Fall 2012 On Total Domination in Graphs Author: David Amos Advisor: Dr. Ermelinda DeLaViña Senior Project Committee: Dr. Sergiy Koshkin Dr. Ryan Pepper
Chapter 4. Trees. 4.1 Basics
Chapter 4 Trees 4.1 Basics A tree is a connected graph with no cycles. A forest is a collection of trees. A vertex of degree one, particularly in a tree, is called a leaf. Trees arise in a variety of applications.
Characterizations of Arboricity of Graphs
Characterizations of Arboricity of Graphs Ruth Haas Smith College Northampton, MA USA Abstract The aim of this paper is to give several characterizations for the following two classes of graphs: (i) graphs
CS 103X: Discrete Structures Homework Assignment 3 Solutions
CS 103X: Discrete Structures Homework Assignment 3 s Exercise 1 (20 points). On well-ordering and induction: (a) Prove the induction principle from the well-ordering principle. (b) Prove the well-ordering | http://docplayer.net/20641385-Cs-598csc-combinatorial-optimization-lecture-date-2-4-2010.html | CC-MAIN-2018-22 | refinedweb | 4,352 | 59.03 |
'
Can we do this with Linux Namespace
I have installed linux container namespace or Debian ARM architecture .. can you help me to run this conig in linux namespace
Re: Can we do this with Linux Namespace
It's probably possible, but keep in mind that this article is quite old by now - things might have changed.
It's been a while since I've played around wtih things like proxies, so I don't think I'll be able to guide you through the process (I also have very limited free time). However the principles are pretty much the same, so you should be able to get there. I think there should be plenty of other resources available online that can help you as well.
Hi
I am using ubuntu 18.04.3tls and I have two network cards one local
and one in bridge mode as I can implement the squid in transparent mode.
Network card in bridge mode has the network 192.168.1.0/24.With this I have internet access and the card that is in the local network 172.168.1.0/24
Sorry, my English is that I'm Latino and I don't know much.
I am using ubuntu 18.04.3tls and I have two network cards one local and one in bridge mode as I can implement the squid in transparent mode.
Hi, This entry is quite old,
Hi,
This entry is quite old, so I'm not sure if all the info is still relevant. However, do keep in mind that with https (TLS) being used more and more, the effectiveness of a proxy becomes less.
Hope you can find the info in my post useful, but I think you might need to look at some other sites for more up-to-date info as well.
Johan..
doubts in lines of the scrip located in /etc/rc.local and others
Hello Johan, thanks for such a fantastic tutorial, I'm doing a project with a raspberry pi 2b + and 3b +
and I thought it was great to do it with them, but on the way taking the tutorial out,
I came across that the DHCP3 SERVER version did not It was useful so I chose to use ISC-DHCP-SERVER
and everything started to work according to what you propose in the tutorial, the router is an internet service with 2MB of bandwidth
and generates its own ips range so that disable the DHCP of the router, so that my server ISC-DHCP-SERVER deploy in the LAN and WLAN
the ips to my devices and if indeed when observing in the devices the TCP / IP works and I have the transparent proxy with its due gateway ,
I can browse some http sites such as google, youtube, netflix, but wanting to see some other sites I can not through the proxy either http or https,
not if you have to add something to the script in /etc/rc.local I have doubts with some lines for example in the next line:
# DNAT port 80 request comming from LAN systems to squid 3128 ($ SQUID_PORT) aka transparent proxy
iptables -t nat -A PREROUTING -s $ LOCAL -p tcp --dport 80 -j DNAT --to $ SQUID_SERVER: $ SQUID_PORT
Do we have to add another line for port 443 with the same characteristics as with port 80? like the other line that says:
# if it is same system
iptables -t nat -A PREROUTING -i $ INTERNET -p tcp --dport 80 -j REDIRECT --to-port $ SQUID_PORT
Would we have to add another line with port 443?
and also I have doubts my router generates ipv6 and in the scritp you also have a line that says:
# Enable Forwarding
echo 1> / proc / sys / net / ipv4 / ip_forward
Would we have to add for example #Enable Forwarding
echo 1> / proc / sys / net / ipv6 / ip_forwar? I do not know what may be missing and because I do not see some sites with specific domains,
doing more research I found the topic of intercepting https by means of ssl certificates and I see that the configuration becomes more complex,
I do not want to have to be installing certificates on the computers browsers, tablets and smartphones
I would like the transparent proxy to work as the non-transparent works that can see any site, either http or https,
I hope you can enlighten me and terminate my project, in advance very grateful for your help.
Hi, Let me preface this by
Hi,
Let me preface this by saying that this tutorial was written six years ago or so, so several things changed in the mean time. Most importantly is probably the push for SSL everywhere, which is a good thing. Unfortunately, SSL is inherently incompatible with proxy caching software unless you intercept this traffic, decrypt it, and then re-encrypt with your local LAN certificate. I would say that this is not just inefficient, but also leads to security and privacy issues. Sadly, if you want to use a proxy in this day and age, it's the only option and pretty much mandatory since there relatively few non-SSL sites that would benefit from the proxy cache in the first place.
I think the changes you mention in your post should get you to a working proxy, but I won't manage to verify that for you due to lack of time. If I were you however, I would investigate one of the more recent tutorials on the web that include SSL. It's more complex, but if you take them step by step you should be able to get there.
Johan. | https://www.purplealienplanet.com/node/25 | CC-MAIN-2021-10 | refinedweb | 939 | 65.39 |
Feature #10658open
ThreadGroup local variables
Description
Here's the story. I wrote a testing framework which could run test
cases in parallel. To accumulate the number of assertions, I could
just use a shared number and lock it for each testing threads.
However, I would also like to detect if a single test case didn't
make any assertions, and raise an error. That means I can't just
lock the number, otherwise the thread won't have any idea if the
number was touched by the other threads. That means I need to lock
around each test cases, which defeats the purpose of running test
cases in parallel.
Then we could try to store the number inside the instance of the
running worker, something like this:
def expect obj Expect.new(obj) do @assertions += 1 end end would 'test 1 == 1' do @assertions = 0 expect(1).eq 1 end
This works fine, but what if we want to make the other matcher,
such as
Kernel#should, which has no idea about the worker?
would 'test 1 == 1' do @assertions = 0 1.should.eq 1 end
Here 1 has absolutely no idea about the worker, how could it increment
the number then? We could try to use a thread local to accumulate the
assertions, and after all threads are done, accumulate all the numbers
from each threads. This way each numbers won't be interfering with each
other, and each objects could have the access to the corresponding
number from the running thread local.
However this has an issue. What if a test case would spawn several
threads in a worker thread? Those threads would have no access to
the worker thread local variable! Shown as below:
would 'test 1 == 1' do @assertions = 0 Thread.new do 1.should.eq 1 end.join end
ThreadGroup to the rescue. Since a newly spawn thread would share the
same group from the parent thread, we could create a thread group for
each worker thread, and all objects should just find the corresponding
number by checking the thread group local. It should be protected by
a mutex, of course. Here's a demonstration:
module Kernel def should Should.new(self) do Thread.current.group.synchronize do |group| group[:assertions] += 1 end # P.S. in the real code, it's a thread-safe Stat object end end end
Some alternative solutions:
- Just use instance_variable_set and instance_variable_get on ThreadGroup
What I was doing before: Assume ThreadGroup#list.first is the owner of
the group, thus the worker thread, and use that thread to store the number.
Something like:
Thread.current.group.list.first[:assertions] += 1
This works for Ruby 1.9, 2.0, 2.1, but not for 2.2.
This also works for Rubinius. I thought this is somehow an expected behaviour,
therefore did a patch for JRuby to make this work:
Until now it failed on Ruby 2.2, did I know the order was not preserved...
What I am doing right now: Find the worker thread through the list from the
group by checking the existence of the data from thread locals. Like:
Thread.current.group.list.find{ |t| t[:assertions] }[:assertions] += 1
At any rate, if we ever have thread group locals, the order won't be an issue,
at least for this use case.
Any idea?
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/10658?tab=notes | CC-MAIN-2021-17 | refinedweb | 561 | 73.78 |
swalign 0.3.3
Smith-Waterman local aligner
sw.
Here is some skeleton code to get you started:
import swalign # choose your own values here… 2 and -1 are common. match = 2 mismatch = -1 scoring = swalign.NucleotideScoringMatrix(match, mismatch)
sw = swalign.LocalAlignment(scoring) # you can also choose gap penalties, etc… alignment = sw.align(‘ACACACTA’,’AGCACACA’) alignment.dump()
For other uses, see the script in bin/swalign.
- Downloads (All Versions):
- 49 downloads in the last day
- 134 downloads in the last week
- 552 downloads in the last month
- Author: Marcus Breese
- Package Index Owner: mbreese
- DOAP record: swalign-0.3.3.xml | https://pypi.python.org/pypi/swalign | CC-MAIN-2015-22 | refinedweb | 101 | 59.19 |
This Technote describes how to ensure that a Mach-O application built against the latest version of Mac OS X can run effectively on older versions of Mac OS X, and what you should do if your application uses APIs that do not exist in the older version of Mac OS X.
The technologies Apple has provided for handling this include weak linking and Availability Macros.
This technote explains how these technologies work, their benefits and limits, and how to get started using them on Mac OS X. Read this technote if you are writing a Mach--O application and are concerned that APIs you use from today's OS will prevent your application from running properly on earlier versions of Mac OS X.
This technote describes the technical details of the weak linking and Availability Macros features on Mac OS X (Mac OS X 10.0.x will not be considered, but it works substantially the same as 10.1.x for the purposes of this technote). Before the discussion begins, however, here are a set of high level points summarizing where things stand as of Mac OS X 10.2.x and the December 2002 Mac OS X Developer Tools:
If an application is designed to run on multiple Mac OS X versions and it uses symbols available only in later Mac OS X versions, the application should use weak linking. Otherwise, it will not launch on the earlier Mac OS X version that does not export all the symbols the application uses.
Weak linking support was first added in Mac OS X version 10.2. Therefore, it cannot be used to help applications run on both 10.1.x and 10.2. There are some fallback approaches that can be used to help applications run on both 10.1.x and 10.2.
Availability Macros have been introduced to help automatically take advantage of weak linking for Apple-provided APIs. Furthermore, they can help ensure that an application only uses the APIs available on developer-chosen OS version(s).
Both weak linking and the Availability Macros are newly introduced technologies, and are works in progress. However, major improvements were made in the December 2002 Mac OS X Developer Tools. It is highly recommended that you use the December 2002 toolset if you want to use weak linking and the Availability Macros.
Further improvements to weak linking support and the Availability Macros are coming - check back here periodically for updates on tools and OS support.
Back to Top
When writing a Mach-O application, you will often want to use new APIs introduced in the latest version of Mac OS X that did not exist in prior versions. However, if you do this and then attempt to run your application on an earlier version of the system, the program will (a) fail to launch, (b) crash at some point during program execution, or (c) run correctly but be unable to be prebound on earlier Mac OS X versions. Let's look at each case.
When the linker (the program called "ld") links an application to the frameworks (such as Carbon or Cocoa) or libraries that the application uses, entries are made in the application binary. These entries reference both the frameworks the application links against, as well as the symbols (functions, global variables) that it uses from those frameworks. To see these entries, fire up Terminal and run otool -l on a binary. This will print the binary's load commands for you; many of the load commands will be of the form LC_LOAD_DYLIB, followed by the path to a shared library or framework that the application is linked against. Listing 1 shows a small part of the output of otool -l.
ld
otool -l
LC_LOAD_DYLIB
Listing 1: A snippet showing a load command from /usr/bin/perl - perl is linked against /usr/lib/libSystem.B.dylib
username% otool -l /usr/bin/perl
.
.
.
Load command 6
cmd LC_LOAD_DYLIB
cmdsize 52
name /usr/lib/libSystem.B.dylib (offset 24)
time stamp 1028942768 Fri Aug 9 18:26:08 2002
current version 60.0.0
compatibility version 1.0.0
.
.
.
If you are not familiar with otool, see the manual page for more details. If a framework or shared library linked by the binary isn't present when the application launches (perhaps the framework is available on 10.2 but not 10.1 like AddressBook.framework), dyld (the dynamic loader) will fail to execute the application when it is launched.
otool
dyld
Listing 2: What happens when you try to run an application and one of the frameworks it is linked against is missing - dyld gives an error (ENOENT = 2, from /usr/include/sys/errno.h)
username% ./test
dyld: ./test can't open library:
./MyFramework.framework/Versions/A/MyFramework (No such file or
directory, errno = 2)
A somewhat more common and slightly trickier case is when the framework itself is present at runtime, but some of the symbols the application is using are missing. For example, perhaps your application uses routines in Carbon.framework newly introduced in 10.2, which are not present at runtime on 10.1.x.
You can get a listing of which symbols an application is importing from other frameworks and libraries by running nm -mg on the binary (see the nm manual page for more details). Doing so will tell you how the symbols are referenced, and what libraries or frameworks they come from.
nm -mg
nm
Listing 3: A snippet showing a partial listing of symbols imported by /usr/bin/perl
username% nm -mg /usr/bin/perl
900154a0 (prebound undefined [lazy bound]) external
_NSAddressOfSymbol (from libSystem)
90021440 (prebound undefined [lazy bound]) external
_NSCreateObjectFileImageFromFile (from libSystem)
.
.
.
a0ea736c (prebound undefined) [referenced dynamically] external
_PL_do_undump (from libperl)
a0ea8dcc (prebound undefined) [referenced dynamically] external
_PL_markstack_max (from libperl)
a0ea8dc8 (prebound undefined) [referenced dynamically] external
_PL_markstack_ptr (from libperl)
If a symbol the application uses is not present at launch time and the symbol is not lazily bound (discussed later), the application will fail to launch, even if the framework or shared library from which the symbol was linked is present. When this happens, dyld will list the symbols it failed to find, as shown in Listing 4.
Listing 4: What happens when you try to run an app and one of the symbols it imports (the SayHello function, in this case) is missing from the framework it is linked against
SayHello
username% ./test
dyld: ./test Undefined symbols:
./test undefined reference to _SayHello expected to be defined in
MyFramework
Non-variable symbols (i.e., functions) which do not need their memory addresses resolved immediately at launch time are marked by the linker as "lazy bound" (you can see an example of nm output showing lazy bound symbols in Listing 3). Such symbols are lazily bound to their source frameworks by dyld - it doesn't bind them until they are actually needed. If the need never arises, the symbol is never bound, and the application can run just fine even on an OS version that doesn't contain those symbols in its frameworks or libraries. If the symbol is ever referenced, the use of the function triggers a fixup call into dyld, where it then tries to resolve the symbol before execution jumps to the actual function in question. Typically, most functions are lazy bound; one somewhat common case, however, where a routine cannot be lazy bound is when its address is stored in a pointer. In that case, dyld can't be assured of having a chance to resolve the symbol address before it is executed so the function is non-lazy bound. Listing 5 is an example of some code that would produce both lazy and non-lazy bound symbols:
Listing 5: InitCursor will not be lazy bound, because the address of the function is copied to a pointer, beyond dyld's ability to resolve it behind the scenes. ObscureCursor, however, will be lazy bound because dyld knows it can resolve the symbol the first time it is explicitly invoked.
InitCursor
ObscureCursor
void (*foobar)(void);
foobar=InitCursor; // This forces InitCursor to be non-lazy
foobar();
ObscureCursor(); // ObscureCursor will be lazy bound
The problem with lazy binding is that, depending on how you write your code, your application could be using symbols that do not exist in the target OS version, but you might not find that out without extensive testing. The the application will launch and run successfully when following some codepaths while crashing when following other codepaths (as dyld tries to resolve symbols that are reached in a given codepath). Availability Macros (described below) can help catch the use of newer symbols in your application, but it is still up to you to ensure proper conditionalization for all such symbols.
Another problem with applications that bind to missing symbols is that their prebinding will break (and can't be fixed) on OS versions that do not contain all of the non-weak (described below) symbols (even lazy bound ones) that the application uses. Broken prebinding for your application typically means that it will take longer to launch. This means that lazy binding is not a good solution for conditionalizing the use of APIs that might not be present at runtime - even if you are careful to ensure that new APIs will never be called when the app is run on an older OS version.
The solution that Apple has developed to solve these problems is called weak linking. It works in a similar manner to the similarly named feature that CFM (the Traditional Mac OS' Code Fragment Manager) had. Weak linking was introduced as a supported OS feature in Mac OS X version 10.2, and the first developer toolset to support weak linking is the July 2002 Developer Tools that shipped as a part of Mac OS X version 10.2. Here's how it works:
Weak linking allows you to link a symbol such that the symbol does not have to be present at runtime for the binary to continue running. Of course, your program can't actually try to use the symbol if it is non-existent, or it will crash. Instead, your code should check to see if the address of the symbol is NULL. If it is, then the symbol is not available, but if the address is a real address, then the symbol exists and can be used.
A symbol will be linked strongly unless you explicitly mark its prototype as weak. Typically this is done in the header containing the prototype for the routine, and is done by adding the weak_import attribute to the prototype (this attribute is supported by gcc 3.1 on Mac OS X, but not gcc 2.95). See the documentation on the gcc web site for more information about attributes.
weak_import
Listing 6: The SayHello function will be weakly linked thanks to the use of the weak_import attribute on the prototype
extern int SayHello() __attribute__((weak_import));
Using the weak_import attribute tells the linker that the symbol should be linked weakly. However, weak linking as a feature did not exist in ld or dyld in Mac OS X versions prior to 10.2, so you have to explicitly enable it by setting an environment variable before compilation to tell linker that features can be used that were introduced starting in version 10.2. Listing 7 shows how you set this environment variable from within Terminal.
Listing 7: We need to tell the linker that it can feel free to use Mac OS X 10.2 linker features, including weak linking.
setenv MACOSX_DEPLOYMENT_TARGET 10.2
If the environment variable is not set to at least 10.2 (a value of 10.1 is assumed if you don't set it), you will see warnings like the following if you try to use the weak_import attribute:
Listing 8: Attempting to weak link will issue warnings if the MACOSX_DEPLOYMENT_TARGET isn't set to a high enough version
MACOSX_DEPLOYMENT_TARGET
test.c:4: warning: weak_import attribute ignored when
MACOSX_DEPLOYMENT_TARGET environment variable is set to 10.1
The MACOSX_DEPLOYMENT_TARGET environment variable can now also be set in a target build setting in Project Builder, allowing you to enable weak linking from within that IDE.
Figure 1: Add a custom build setting to set the MACOSX_DEPLOYMENT_TARGET environment variable.
Assuming that the environment variable is set and the function prototype is properly marked with the weak_import attribute, compilation will cause ld to mark the symbol as weak in the final application. You may recall our use of the nm tool earlier to explore the symbols that a given application imports and how those symbols are referenced. If we use nm -mg |grep frameworkname, we can produce a list of all symbols we reference from a given framework and see whether they are marked weak in the binary or not.
nm -mg |grep frameworkname
Listing 9: Note the description of the _SayHello symbol (imported from MyFramework ) as weak
MyFramework
username% nm -mg test | grep MyFramework
(undefined) weak external _SayHello (from MyFramework)
Once a binary has been built with symbols weakly linked, the existence of those symbols (and thus indirectly, features and APIs) can be checked for at runtime. For example:
Listing 10: A complete test program showing the test for the presence of the SayHello function at runtime
#include <stdlib.h>
#include <stdio.h>
extern int SayHello() __attribute__((weak_import));
int main()
{
int result;
if (SayHello!=NULL)
{
printf("SayHello is present!\n");
result=SayHello();
}
else
printf("SayHello is not present!\n");
}
You might notice that this whole approach to weak linking assumes that one is weak linking individual symbols, not entire frameworks or libraries. This is in fact correct; ideally, one would not weak link more symbols than are needed, thus preserving the ability of dyld to flag missing symbols as early and completely as possible. However, in some cases it may truly be desirable to weak link an entire framework or shared library - allowing the framework as a whole, not just individual symbols it exports, to be missing at runtime. This is accomplished by the linker in this way: if all symbols that an application imports from a given framework are weakly linked, then the framework as a whole will be automatically marked weak in the application's load command where it loads that framework or shared library. Running otool -l on a binary (as was done in Listing 1) will show whether a framework as a whole is weakly linked or not.
Listing 11: A snippet showing a load command that weakly loads a framework - note the LC_LOAD_WEAK_DYLIB
LC_LOAD_WEAK_DYLIB
username% otool -l test
.
.
.
Load command 5
cmd LC_LOAD_WEAK_DYLIB
cmdsize 72
name ./MyFramework.framework/Versions/A/MyFramework
(offset 24)
time stamp 3247416188 Wed Oct 21 05:34:52 1936
current version 0.0.0
compatibility version 0.0.0
.
.
.
One problem that using a lot of weak linking brings up is how one knows which routines are available in a particular OS version. Wouldn't it be great if the Apple-provided system framework headers automatically configured themselves for the OS version you were writing to, and mark routines as weak appropriately? This is where the Availability Macros come in.
The Availability Macros are a set of macros was introduced in Mac OS X version 10.2 as a part of the July 2002 Developer Tools. They are contained in a header which is located at "/usr/include/AvailabilityMacros.h" on such a system. "AvailabilityMacros.h" helps you determine which OS versions introduced the APIs you are using, and tells the compiler which routines should be weakly linked. Over time, more and more of Apple's frameworks will be adopting the Availability Macros (the Carbon and Cocoa frameworks do to some degree today).
At its most basic level, "AvailabilityMacros.h" provides two compile-time variables (or macros) that you can set to determine how APIs are defined. Here is how things are supposed to work:
This macro can be set to a specific OS version (some macro predefines for OS versions are provided in the header), and allows you to specify the minimum OS version that your application will require in order to run. All APIs that use the Availability Macros are conditionalized for the OS version in which they were released, and thus APIs introduced in all OS versions up to and including the minimum OS version required will be strongly linked. "AvailabilityMacros.h" suggests that if MAC_OS_X_VERSION_MIN_REQUIRED is undefined, it will be set by default to 10.0. This is correct if MACOSX_DEPLOYMENT_TARGET is not set, but what the header doesn't tell you is that the compiler driver checks the MACOSX_DEPLOYMENT_TARGET environment variable, and sets the value of MAC_OS_X_VERSION_MIN_REQUIRED to be the same as the MACOSX_DEPLOYMENT_TARGET if MACOSX_DEPLOYMENT_TARGET is set.
MAC_OS_X_VERSION_MIN_REQUIRED
This macro allows you to specify the maximum allowed OS version that your application can use APIs from. APIs that were first introduced in an OS version later than the maximum allowed will not be visible to the application. APIs introduced after the minimum required OS version, but before or in the maximum allowed OS version, will be weakly linked automatically. If no value is given to this macro, it will be set to the highest major OS version that Availability Macros is aware of (10.2 as of this writing).
One common usage of these macros would be to temporarily set MAC_OS_X_VERSION_MAX_ALLOWED to be equal to MAC_OS_X_VERSION_MIN_REQUIRED and rebuild one's application to see which APIs are being used that are not present in the minimum required OS version (compilation will produce errors for each usage of the now suddenly unavailable routines). For example, MAC_OS_X_VERSION_MIN_REQUIRED might be set to 10.1 (represented as 1010 to the compiler) and MAC_OS_X_VERSION_MAX_ALLOWED would be set to 10.1 as well, to see which APIs are being used that were introduced in, say, 10.2. This usage now works for users of both the Cocoa and Carbon frameworks, as of the December 2002 Developer Tools.
MAC_OS_X_VERSION_MAX_ALLOWED
If you need to use APIs introduced in 10.2 in an application that must run on 10.1.x, weak linking is not an option, because it was introduced in Mac OS X version 10.2. So what can be done for this particular OS version transition? There are two main solutions, somewhat similar to each other, for doing what is essentially "manual weak linking" in an application.
CFBundle includes APIs such as CFBundleGetFunctionPointerForName that can be used to manually load a function pointer or symbol from a bundle (a framework is a valid bundle). Combined with checks for system version and feature availability, using CFBundle can be a good way of conditionally loading newly introduced symbols, and calling them if they exist. The CallMachOFramework code sample shows an example of using this routine in the context of loading Mach-O symbols in a CFM application, and the CFBundle documentation provides another good example of this technique.
CFBundle
CFBundleGetFunctionPointerForName
The above approach, using CFBundleGetFunctionPointerForName, is good when only a few symbols are needed. It can be laborious, however, to manually load each symbol directly from the system frameworks if a lot of newly introduced symbols are needed, such as when you adopt a new area of functionality like the AddressBook API. In this situation, the best approach would be to isolate the area of functionality in its own bundle, which is linked directly to the 10.2 system frameworks and has only a few main entry points, and then only load the bundle (using CFBundle) if the application is running on 10.2 or higher. Within the bundle, the 10.2-specific APIs can be used freely and directly, because the application loading the bundle would have already determined that the new APIs were present.
The implementation of weak linking and Availability Macros today in Mac OS X 10.2 and the December 2002 Developer Tools is a first cut at the functionality. Significant improvements have been made over the July 2002 Developer tools in the December 2002 Developer Tools, and further improvements will be made over time. Here are some of the current considerations, issues, and workarounds with these technologies:
One good thing about Objective-C is that its runtime is sufficiently dynamic that it doesn't suffer from the strong linking/binary compatibility issues that weak linking was designed to solve for other languages like C and C++. That is, as long as an application checks for their existence at runtime and avoids their use as appropriate, it can go ahead and use newly introduced Objective-C methods and classes and the application will still run on earlier version of Mac OS X that do not have the new symbols. Note, however, that Cocoa currently does not automatically weakly link straight C functions, even ones introduced after MAC_OS_X_VERSION_MIN_REQUIRED and before or in MAC_OS_X_VERSOIN_MAX_ALLOWED (r. 3151928).
Weak linking is only supported by dyld in Mac OS X version 10.2 and up. So it cannot be used in applications that need to run on 10.1.x. Weakly linked symbols will be seen as strongly linked symbols on 10.1.x, and weakly linked frameworks or shared libraries will cause the application to crash at launch on 10.1.x.
There is a known bug in the December 2002 Mac OS X Developer Tools where if you link against a framework or shared library from which you use no symbols (call it library A), but you link against and use symbols from another library (call it library B) that does use symbols from library A, then the linker will attempt to weakly link library A. But if your MACOSX_DEPLOYMENT_TARGET isn't set to 10.2 or higher, this will fail (as it should) because earlier OS versions don't support weak linking, and thus a warning will be generated. The warning will be of the form, "ld: warning dynamic shared library: /usr/lib/libSystem.dylib not made weak library in output with MACOSX_DEPLOYMENT_TARGET environment variable set to: 10.1". This bug should be fixed in the next major version of the Mac OS X Developer Tools (r. 3094497). In the meantime, to get rid of the warnings you can either set the MACOSX_DEPLOYMENT_TARGET to 10.2, or make a spurious call into the library to use a symbol from it and get the linker to strongly link it.
There is no obvious way to set the MACOSX_DEPLOYMENT_TARGET, MAC_OS_X_VERSION_MIN_REQUIRED, and MAC_OS_X_VERSION_MAX_ALLOWED settings on a per project basis, if you are using cpp-precomp for precompiled header processing (the default choice in the July 2002 Developer Tools and prior) instead of the new Persistent Front End (PFE) mechanism for precompiled headers (the default choice for new projects in the December 2002 Developer Tools). This is because there is a single system-wide copy of the framework precompiled headers that already assumes values for these defines. Redefining these settings in your project will break the precompiled headers, and compile times will greatly increase (along with lots of warnings being generated). There are two ways to fix this.
MACOSX_DEPLOYMENT_TARGET,
MAC_OS_X_VERSION_MIN_REQUIRED,
Use the new Persistent Front End (PFE) mechanism for precompiled headers on a per project basis. You can do this using the Build Settings pane in the Targets tab of your project.
Rebuild (as needed) your system's precompiled headers with the appropriate settings defined before building any project that needs those settings. cpp-precomp's precompiled headers can be rebuilt using the fixPrecomps command from the command line, as shown in Listing 12.
fixPrecomps
Listing 12: A snippet showing the setting of weak linking and Availability Macros flags and the rebuilding of cpp-precomp's precompiled headers
username% setenv MACOSX_DEPLOYMENT_TARGET 10.1
username% sudo fixPrecomps -force -precompFlags
-DMAC_OS_X_VERSION_MIN_REQUIRED=1010
-DMAC_OS_X_VERSION_MAX_ALLOWED=1010
reading /System/Library/SystemResources/PrecompLists/phase1.precompList
reading /System/Library/SystemResources/PrecompLists/phase2.precompList
-force rebuild /usr/include/libc.p.
/usr/bin/gcc3 -precomp -x objective-c /usr/include/libc.h -o
/usr/include/libc-gcc3.p -DMAC_OS_X_VERSION_MIN_REQUIRED=1010
-DMAC_OS_X_VERSION_MAX_ALLOWED=1010
.
.
.
Mac OS X version 10.2 ld, dyld, nm, and otool manual pages
Using the GNU Compiler Collection (GCC) GNU document on Attribute Syntax Copyright 2002 by the Free Software Foundation, retrieved 11/19/2002
DTS sample code CallMachOFramework
Apple Computer, Inc., CFBundle Documentation, retrieved 11/19/2002.
Posted: 2003-02-18
Get information on Apple products.
Visit the Apple Store online or at retail locations.
1-800-MY-APPLE | http://developer.apple.com/technotes/tn2002/tn2064.html | crawl-002 | refinedweb | 4,071 | 50.67 |
Hey,
I have a bit of history with programming about 6 years ago with C++ when I dropped out of college. However, I am resuming my studies but the course has changed to a more Java based one so I am spending the next few months teaching myself Java. At the moment, I am waiting for a new book to arrive and am using an old version of Deitel and Deitel Java - How to Program, which references Netscape Communicator so enough said!
Anyways, I'm just trudging towards the general tutorials and opening chapters and I've not got all the concepts straight in my head yet. The code is messy as I'm not doing anything with arrays yet.
import javax.swing.JOptionPane; import java.awt.Graphics; public class PosNegZero extends JApplet { String entry1, entry2, entry3, entry4, entry5; double ent1, ent2, ent3, ent4, ent5, noP, noN, no0; public void init() { no0 = 0 ; noP = 0; noN = 0; entry1 = JOptionPane.showInputDialog("Enter 1st Number: "); ent1 = Double.parseDouble (entry1); entry2 = JOptionPane.showInputDialog("Enter 2nd Number: "); ent2 = Double.parseDouble (entry2); entry3 = JOptionPane.showInputDialog("Enter 3rd Number: "); ent3 = Double.parseDouble (entry3); entry4 = JOptionPane.showInputDialog("Enter 4th Number: "); ent4 = Double.parseDouble (entry4); entry5 = JOptionPane.showInputDialog("Enter 5th Number: "); ent5 = Double.parseDouble (entry5); if (ent1 == 0) no0++; else if (ent1 > 0) noP++; else if (ent1 < 0) noN++; if (ent2 == 0) no0++; else if (ent2 > 0) noP++; else if (ent2 < 0) noN++; if (ent3 == 0) no0++; else if (ent3 > 0) noP++; else if (ent3 < 0) noN++; if (ent4 == 0) no0++; else if (ent4 > 0) noP++; else if (ent4 < 0) noN++; if (ent5 == 0) no0++; else if (ent5 > 0) noP++; else if (ent5 < 0) noN++; } public void paint (Graphics g) { g.drawString(no0 + " zero numbers\n" + noP + " positive numbers\n" + noN + " negative numbers\n", 25, 25); } }
What I think I am doing is declaring an overall Class PosNegZero, and declaring global String and Double variables to be used in the classes init() and print(Graphics g)
The error I am getting is JApplet cannot be resolved to a type
The error could be down to me either not using the variables properly or not understanding Applets. I can output the same code to a command line or using showMessage but I want to use an applet for the purposes of this book question.
Thanks! | http://www.javaprogrammingforums.com/awt-java-swing/15206-beginner-error-extends-japplet.html | CC-MAIN-2016-30 | refinedweb | 386 | 52.39 |
27 September 2013 21:40 [Source: ICIS news]
HOUSTON (ICIS)--With just one business day left in the month, the ?xml:namespace>
So far, three producers have nominated prices. Two are proposing a 2 cents/lb ($44/tonne, €33/tonne) price hike, while a third producer is proposing a 6 cents/lb increase for October, sources said. A fourth producer, as of late in the day on Friday, had yet to nominate.
Market sources said that it will probably be Monday before all contract prices for the upcoming month are settled. Most market participants expect the October BD contract to rise by 2-3 cents/lb.
September was the second month in a row that the US BD market reached a split settlement on the monthly contract. The average weighted price for the September contract is 43.02 cents/lb, up about one-half cent from a weighted average price in August of 42.58 cents/lb.
The primary argument for producers raising prices, sources said, is the recent run up in prices in
But domestic buyers are pushing back, saying that the Asian prices are unsustainable given the forecast for fundamental long-term weak demand from the replacement tyre market, BD's most important downstream outlet. | http://www.icis.com/Articles/2013/09/27/9710592/US-BD-October-contract-goes-down-to-the-final-hour.html | CC-MAIN-2014-41 | refinedweb | 207 | 60.55 |
ShakeCrash is great way to involve you testers in deep in-app reporting. It’s idea was taken from Google Maps, just shake your iPhone to submit screenshot with description via e-mail or Redmine!
Usage
To run the example project, clone the repo, and run
pod install from the Example directory first.
Installation
ShakeCrash is available through CocoaPods. To install
it, simply add the following line to your Podfile:
pod "ShakeCrash"
Configure ShakeCrash
You have two possibilities – you can send report directly to your Redmine project issues or send it to desired e-mail address. There is no way to use both at once.
I would recommend to configure it in your
AppDelegate. First, import
ShakeCrash:
import ShakeCrash
Configure Redmine
You need to enable REST API in you Redmine and obtain your API key. Also you will need project id. You should be able to find all of them in your Redmine, in case of trouble I send you to Google.
let shakeReporterSettings = ShakeCrash.sharedInstance let redmineReporter = RedmineFeedbackReporter( redmineAddress: "<REDMINE_URL>", apiToken: "<API_KEY>", projectId: "<PROJECT_ID>") shakeReporterSettings.delegate = redmineReporter
Correct URL has format
http(s)://.
It is very important, that your Redmine version is >1.4.
Configure e-mail
There are no special requirments in order to send e-mail, just configure it.
let shakeReporterSettings = ShakeCrash.sharedInstance let mailReporter = MailFeedbackReporter(reportEmail: "[email protected]") shakeReporterSettings.delegate = mailReporter
Configure user name
It would be very useful if you could know your tester’s name. There is the way you can ask your user to enter his name and it will only trigger once. Just paste line below in your
viewDidLoad in
UIViewController you want your user to enter his name.
self.presentConfigShakeCrashView()
If you don’t do it, the first time user will make shake gesture ShakeCrash will ask for name. But it is up to you to inform user that there is shake gesture in app.
I highly recommend to use it in first view controller in your app. If you encountered some issues while calling this method make sure there are no problems with view controller’s stack, because all views in
ShakeCrash are called to present modally.
Author
Dominik Majda, [email protected]
License
ShakeCrash is available under the MIT license. See the LICENSE file for more info.
Latest podspec
{ "name": "ShakeCrash", "version": "0.1.0", "summary": "ShakeCrash idea was taken from Google Maps, just shake your iPhone to submit screenshot with description via e-mail or Redmine!", "description": "It simple - just shake your phone and new window with screenshot of current view will be presented. You can draw on it and write description, so it's the best way to report bugs, provide some feedback or just ask a question. Then, just click Send button and report will be sended with your message.", "homepage": "", "license": "MIT", "authors": { "Dominik Majda": "[email protected]" }, "source": { "git": "", "tag": "0.1.0" }, "platforms": { "ios": "8.0" }, "requires_arc": true, "source_files": "Pod/Classes/**/*", "resource_bundles": { "ShakeCrash": [ "Pod/Assets/*.png" ] } }
Wed, 23 Mar 2016 00:01:02 +0000 | https://tryexcept.com/articles/cocoapod/shakecrash | CC-MAIN-2020-29 | refinedweb | 499 | 66.33 |
📆 Dateable
A Dart package to help you with managing dates easily. Can be used to store, format, convert, construct, parse and serialise dates. Calendar correctness is guaranteed by the usage of
DateTime's system under the hood.
⚙️ Import
In your
.dart files:
import 'package:dateable/dateable.dart';
⚗️ Usage
👷 Constructors
Variety of different constructors allows for great flexibility and interoperability with other types.
final date = Date(31, 12, 2019); final date = Date.fromDateTime(DateTime(2019, 12, 31, 19, 1)); // Time of day is truncated final date = Date.parseIso8601('2019-12-31T18:23:48.956871'); // Time of day is truncated final date = Date.parse('31122019'); final date = Date.today(); final date = Date.yesterday(); final date = Date.tomorrow();
And a handy
DateTime extension:
final date = DateTime(2019, 12, 31, 13, 26).toDate(); // Time of day is truncated
All of the above result in the same
date object!
📅 Getters
There are three getters. Simple and easy.
final date = Date(11, 3, 2002); print(date.day); // Prints 11 print(date.month); // Prints 3 print(date.year); // Prints 2002
↔️ Conversion methods
Date allows for seamless and easy conversions to most commonly used representations!
final date = Date(11, 3, 2002); final dateTime = date.toDateTime(); // Time of day is set to zeros print(date.toIso8601()); // Prints 2002-03-11T00:00:00.000 print(date.toString()); // Prints 11032002
📊 Comparisions
Comparisions work just like in your well-known
DateTime objects!
final earlier = Date(11, 3, 2002); final later = Date(21, 9, 2004); print(earlier.isBefore(later)); // True print(later.isAfter(earlier)); // Also true
On top of this, there are also operators
> (is after) ,
< (is before),
<=,
>= and
==.
Here comes another handy
DateTime extension:
DateTime(2002, 3, 11, 14, 56, 28).isTheSameDate(Date(11, 3, 2002));
But if all you want is to check if your
Date is nearby, here you are.
final date = Date(11, 3, 2002); date.isToday(); date.isYesterday(); date.isTomorrow();
📰 Formatting
You can format your
Dates to
Strings both with top-level constants and with
String literals:
yyyy- 4 digit year, i.e. 1997
yy- 2 digit year, i.e. 97
mm- 2 digit month, i.e. 03
dd- 2 digit day, i.e. 11
Both of the below options are correct:
Date(11, 3, 2002).format([dd, '-', mm, '-', yyyy])
Date(11, 3, 2002).format(['dd', '-', 'mm', 'yyyy'])
🔨 Modifiers
Last but not least, there is a set of useful modifiers. Every
Date object is immutable by default, so each of them creates a new
Date object.
date.addDays(2) == date + 2 // Always true date.subtractDays(7) == date - 7 // Also always true
You can also use the idiomatic copyWith function.
date.copyWith(day: 21, month: 9);
Sorting an
Iterable of
Dates chronologically is even easier:
[Date(21, 9, 2004), Date(24, 12, 2006), Date(11, 3, 2002)].sort((a, b) => a.compareTo(b));
Now the list is
[Date(11, 3, 2002), Date(21, 9, 2004), Date(24, 12, 2006)].
🐛 Contributing / bug reporting
Contributions and bug reports are welcome! Feel free to open an issue or create a pull request.
📖 License
This package is distributed under MIT license. | https://pub.dev/documentation/dateable/latest/index.html | CC-MAIN-2022-40 | refinedweb | 508 | 62.14 |
Mar 13, 2019 04:24 PM|itsmeabhilashgk|LINK
I am trying to write unit tests for my controller action. I am using NUnit 3 and Moq.
I have Db methods such as Add() and Remove() in the action.
I understand that I need to mock the Db and setup return values to the methods accordingly.
However when it comes to methods such as db.Add(), db.Remove() , db.SaveChanges(); I don't understand how to setup true and false return values for the same.
My action to be tested:
[HttpPost] [Route("Add")] public List<string> Add([FromBody]model addModel) { List<string> errors = new List<string>(); try { model Model= new model() { propertyA = addModel.propertyA }; db.Add(Model); db.SaveChanges(); } catch (Exception e) { errors.Add(e.Message); return errors; } errors.Add("Success"); return errors; }
To write positive and negative tests for the above methods, I need to be able to set positive and negative return values for db.Add() and db.SaveChanges() methods.
I have tried the following:
[TestCase] public void Add_Succesfull_True() { Mock<DbContext> mockdb = new Mock<DbContext>(); var model= db.model.Select(x => x).FirstOrDefault(); model addModel = new model() { propertyA= model.propertyA }; AuthController authController = new AuthController(userManager, configuration, mockdb.Object); var apiEndPoint = authController.Add(addModel); mockdb.Setup(x => x.Add(addModel)).Returns(??); Assert.IsTrue(apiEndPoint.Contains("Success")); }
Through intellisense I can see that db.Add() expects a return type of
Microsoft.EntityFrameworkCore.ChangeTracking.
My understand of mocking the DbContext is as follows:
I believe I have mocked the above in a wrong way. Need direction thank you:)
Mar 13, 2019 05:59 PM|DA924|LINK
You could easily mock out data persistence if you were using the Repository or Data Access Object pattern, because the Interface can be mocked out and the persistence never actually done. IMO unit testing EF is a waste of time However, there is nothing wrong in doing integration or functional testing.
The blow code is using the DAO pattern that can be mocked out, because DaoTask is using an Interface. Also, you'll notice there is not tyr/catch not even in the code of thte DAO, becuase of global exception handling.
[HttpPost] [Route("CreateTask")] public void CreateTask(DtoTask dto) { _daoTask.CreateTask(dto); }
Mar 13, 2019 06:08 PM|itsmeabhilashgk|LINK
Mar 13, 2019 06:32 PM|DA924|LINK
''
itsmeabhilashgk
Hi, I am not trying to test EF, I need to mock the return values of the DbSet methods such as Add(), so that I can test the flow of my function.
I cannot refactor now; given the current code; is there a possible way?
If you're not using EF, then what are you using? Regardless of how DBset is being used, I wouldn't waste time trying to UT DbSet, IMO.
All-Star
42511 Points
Mar 13, 2019 06:34 PM|mgebhard|LINK
The concept is covered in the EF Core docs.
All-Star
53024 Points
Mar 13, 2019 08:14 PM|bruce (sqlwork.com)|LINK
as you are not using the return value in your controller you can just use null:
mockdb.Setup(x => x.Add(addModel)).Returns((Microsoft.EntityFrameworkCore.ChangeTracking) null);
Mar 14, 2019 07:27 AM|itsmeabhilashgk|LINK
Hi I am using EF core and I am getting an error saying :
Microsoft.EntityFrameworkCore.ChangeTracking is a namespace used like a type
Is this because of proxy not being supported in EF Core?
Mar 15, 2019 05:06 AM|DA924|LINK
If you want to unit test the controller and use mock, then you will find a way to use the Repository or DAO patterns that are using an Interface and doing CRUD or use Model objects in the Models folder that are using an Interface that are doing CRUD so that they can be mocked out.
You arrange, act and assert. You make the test fail, you make the test pass and then you refactor.
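To make the repository suggestion concrete, here is a minimal sketch. The names (ITaskRepository, TasksController) are made up for illustration, not taken from this thread; only Moq and NUnit calls that exist in those libraries are used:

public interface ITaskRepository
{
    void CreateTask(DtoTask dto);
}

public class TasksController : ControllerBase
{
    private readonly ITaskRepository _repo;

    public TasksController(ITaskRepository repo) { _repo = repo; }

    [HttpPost]
    [Route("CreateTask")]
    public void CreateTask(DtoTask dto)
    {
        _repo.CreateTask(dto);
    }
}

[TestFixture]
public class TasksControllerTests
{
    [Test]
    public void CreateTask_CallsRepository()
    {
        var repo = new Mock<ITaskRepository>();
        var controller = new TasksController(repo.Object);

        controller.CreateTask(new DtoTask());

        // Positive path: the controller asked the repository to persist exactly once.
        repo.Verify(r => r.CreateTask(It.IsAny<DtoTask>()), Times.Once);
    }

    [Test]
    public void CreateTask_WhenRepositoryFails_Throws()
    {
        var repo = new Mock<ITaskRepository>();
        repo.Setup(r => r.CreateTask(It.IsAny<DtoTask>()))
            .Throws(new Exception("simulated persistence failure"));
        var controller = new TasksController(repo.Object);

        // Negative path: the failure surfaces to global exception handling.
        Assert.Throws<Exception>(() => controller.CreateTask(new DtoTask()));
    }
}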
Solution 1
import random

class Lottery(object):
    def __init__(self, numbers=None):
        if numbers is None:
            numbers = range(0, 50)
        self.answer = random.choice(numbers)

    def get_answer(self):
        return self.answer

    def play(self, number):
        if self.answer == number:
            return True
        return False
Lottery Time
Create a class called Lottery that optionally receives a list numbers containing the possible winning numbers. If numbers is not received as an optional argument, set it to be a list ranging from 0-49. When created, your Lottery object should have an attribute answer that is a random number from the numbers list.
It needs to have two methods:
- get_answer, which returns the answer variable for that object
- play, which receives a number and returns True if the number matches the answer and False otherwise
Example:
l = Lottery(numbers=[9])
l.get_answer()  # 9
l.play(1)  # False
l.play(9)  # True
Test Cases
test get answer
def test_get_answer():
    l = Lottery(numbers=[9])
    assert l.get_answer() is not None
    assert l.get_answer() == 9
test random range
def test_random_range():
    l = Lottery()
    assert l.get_answer() is not None
    assert l.play(l.get_answer()) is True
A probability distribution is called “fat tailed” if its probability density goes to zero slowly. Slowly relative to what? That is often implicit and left up to context, but generally speaking the exponential distribution is the dividing line. Probability densities that decay faster than the exponential distribution are called “thin” or “light,” and densities that decay slower are called “thick”, “heavy,” or “fat,” or more technically “subexponential.” The distinction is important because fat-tailed distributions tend to defy our intuition.
One surprising property of heavy-tailed (subexponential) distributions is the single big jump principle. Roughly speaking, most of the contribution to the sum of several heavy-tailed random variables comes from the largest of the samples. To be more specific, let “several” = 4 for reasons that’ll be apparent soon, though the result is true for any n. As x goes to infinity, the probability that
X1 + X2 + X3 + X4
is larger than a given x is asymptotically equal the probability that
max(X1, X2, X3, X4)
is larger than the same x.
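A quick simulation (my own illustration, not part of the original argument) makes this concrete: for heavy-tailed samples, the two tail probabilities track each other.

import numpy as np

np.random.seed(0)
samples = np.random.pareto(0.5, size=(100000, 4))  # heavy-tailed draws

x = 1000  # a large threshold
p_sum = np.mean(samples.sum(axis=1) > x)
p_max = np.mean(samples.max(axis=1) > x)
print(p_sum, p_max)  # the two estimates should be close for large x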
The idea behind the obesity index [1] is to turn the theorem above around, making it an empirical measure of how thick a distribution's tail is. If you draw four samples from a random variable and sort them, the obesity index is the probability that the sum of the max and min, X1 + X4, is greater than the sum of the middle samples, X2 + X3.
The obesity index could be defined for any distribution, but it only measures what the name implies for right-tailed distributions. For any symmetric distribution, the obesity index is exactly 1/2. A Cauchy distribution is heavy-tailed, but it has two equally heavy tails, and so its obesity index is the same as the normal distribution, which has two light tails.
Note that location and scale parameters have no effect on the obesity index; shifting and scaling affect all the X values the same way, so they don't change the probability that X1 + X4 is greater than X2 + X3.
To get an idea of the obesity index in action, we’ll look at the normal, exponential, and Cauchy distributions, since these are the canonical examples of thin, medium, and thick tailed distributions. But for reasons explained above, we’ll actually look at the folded normal and folded Cauchy distributions, i.e. we’ll take their absolute values to create right-tailed distributions.
To calculate the obesity index exactly you’d need to do analytical calculations with order statistics. We’ll simulate the obesity index because that’s easier. It’s also more in the spirit of calculating the obesity index from data.
from scipy.stats import norm, expon, cauchy

def simulate_obesity(dist, N):
    data = abs(dist.rvs(size=(N, 4)))
    count = 0
    for row in range(N):
        X = sorted(data[row])
        if X[0] + X[3] > X[1] + X[2]:
            count += 1
    return count/N

for dist in [norm, expon, cauchy]:
    print( simulate_obesity(dist, 10000) )
When I ran the Python code above, I got
0.6692
0.7519
0.8396
This ranks the three distributions in the anticipated order of tail thickness.
Note that the code above takes the absolute value of the random samples. This lets us pass in ordinary (unfolded) versions of the normal and Cauchy distributions, and it's redundant for any distribution like the exponential that's already positive-valued.
[I found out after writing this blog post that SciPy now has foldnorm and foldcauchy, but they don't seem to work like I expect.]
Let’s try it on a few more distributions. Lognormal is between exponential and Cauchy in thickness. A Pareto distribution with parameter b goes to zero like x-1-b and so we expect a Pareto distribution to have a smaller obesity index than Cauchy when b is greater than 1, and a larger index when b is less than one. Once again the simulation results are what we’d expect.
The code
from scipy.stats import lognorm, pareto

# lognorm needs a shape parameter; s = 1 assumed here
for dist in [lognorm(1), pareto(2), pareto(0.5)]:
    print( simulate_obesity(dist, 10000) )
returns
0.7766
0.8242
0.9249
By this measure, lognormal is just a little heavier than exponential. Pareto(2) comes in lighter than Cauchy, but not by much, and Pareto(0.5) comes in heavier.
Since the obesity index is a probability, it will always return a value between 0 and 1. Maybe it would be easier to interpret if we did something like take the logit transform of the index to spread the values out more. Then the distinctions between Pareto distributions of different orders, for example, might match intuition better.
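For example (my addition, not in the original post), applying the logit to the simulated indices spreads them out:

from scipy.special import logit

for p in [0.6692, 0.7519, 0.8396, 0.9249]:
    print(round(logit(p), 2))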
[1] Roger M. Cooke et al. Fat-Tailed Distributions: Data, Diagnostics and Dependence. Wiley, 2014.
4 thoughts on “Obesity index: Measuring the fatness of probability distribution tails”
Interesting. This appears to be related to some of the definitions of skewness that use quantiles (rather than moments) to provide a robust estimate of skewness. (See “A quantile definition for skewness”. ) In particular, I would rewrite the comparison from
X[3] + X[0] > X[1] + X[2]
to the equivalent statement
X[3] – X[2] > X[1] – X[0]
The statement is now a comparison of the lengths of tertiles for the samples of size 4. The simulation compares the average length of the right tertiles to the average length of the left tertiles. Do you understand why this estimates kurtosis (fatness of tails) and not skewness?
1) When you’re trying to figure out your quartiles what do you do with repeated values like a dataset 0,0,0,5? Is that two numbers or four?
2) When a data set starts at zero, you have four zeros in the sample, and zero is a mode, does that give you a thick tail at zero?
3) Regardless of the second question, does every mode have a short tail?
Just for fun for R aficionados:
simulate_obesity <- function(dist, N){
  data <- lapply(seq_len(N), function(x) abs(dist(4)))
  count <- 0
  for(row in seq_len(N)){
    X <- sort(data[[row]])
    comp <- X[1] + X[4] > X[2] + X[3]
    if(comp) count <- count + 1
  }
  return(count/N)
}

lapply(list(rnorm, rexp, rcauchy), function(x) simulate_obesity(dist = x, 100))
This programming task provides the steps for creating a basic custom Web Part that you can add to your Web Part Page. It is a very simple Web Part that allows you to change the Web Part's Title property. The Title property is a Web Part base class property that sets the text in the part's title bar.
Microsoft Visual Studio .NET
Microsoft Windows SharePoint Services
Web Parts are based on ASP.NET Web Form Controls. You create Web Parts in C# or Visual Basic .NET by using the Web Part Library template in Visual Studio .NET. The Web Part templates can be downloaded from MSDN. After you download and install these templates, you can proceed with creating a Web Part using C# as described in this topic.
Note If you want to create a Visual Basic .NET project, select the Web Part Library template from Visual Basic Projects instead.
On the other hand, if you do not want to download the templates, you can choose to create a Web Part by starting with the ASP.NET Web Control Library template.
Note If you want to create a Visual Basic .NET project, select the Web Control Library template from Visual Basic Projects instead.
If you are creating a Web Part on a computer that has Windows SharePoint Services or SharePoint Portal Server installed on it:
If you are creating a Web Part on a computer that does not have Windows SharePoint Services or SharePoint Portal Server installed on it:
The same steps apply regardless of whether you are creating a Web Part on a computer that has Windows SharePoint Services or SharePoint Portal Server installed on it:
Before you start working with the code for a Web Part, you need to make the following changes to your project settings:
This task assumes that you are running Visual Studio .NET on a server running Windows SharePoint Services or SharePoint Portal Server. Setting the build output path to the C:\inetpub\wwwroot\bin folder will build your Web Part's assembly in the same location from which it will run on your server. If you are not running Visual Studio .NET on your computer, you can copy your Web Part's assembly to the folder C:\inetpub\wwwroot\bin folder on your server after building it.
Note The C:\inetpub\wwwroot\bin folder is one of two default locations where the .NET common language runtime will look for .NET assemblies. The other location is the Global Assembly Cache (GAC), which is exposed in the Windows user interface as C:\windows\assembly. However, if you want to run your Web Part's assembly from the GAC, you should not specify C:\windows\assembly as the build output path in Visual Studio .NET. Instead, you should manually copy it using Windows Explorer (drag the file to C:\windows\assembly) or by using the gacutil.exe command-line utility which is located in C:\Program Files\Microsoft Visual Studio .NET\FrameworkSDK\bin. Be aware that the GAC requires you to strong name your assembly.
In addition to the two default locations, you can specify another folder (for example, \inetpub\wwwroot\mydir or \inetpub\wwwroot\mydir2\bin) by specifying those folders in the web.config file for your server by adding the following <runtime> element block within the <configuration> element:
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<probing privatePath="mydir;mydir2\bin"/>
</assemblyBinding>
</runtime>
By default, the AssemblyVersion property of your project is set to increment each time you recompile your Web Part. A Web Part Page identifies a Web Part with the version number that is specified in the web.config file. (For details, see "Register the Web Part as a Safe Control" later in this topic.) To keep the version number fixed, open AssemblyInfo.cs and change the AssemblyVersion attribute as shown in the next step. These steps are not required if you use the Web Part Library template.
[assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyVersion("1.0.0.0")]
Web Parts are designed to be distributed over the Internet or an intranet. For security reasons when creating a custom Web Part, you must strongly name your Web Part assembly. To generate a key pair for signing the assembly, open a command prompt and run the sn.exe utility that ships with Visual Studio .NET:
cd \Program Files\Microsoft Visual Studio .NET 2003\SDK\v1.1\Bin\
sn.exe -k c:\keypair.snk
Note You can use any path, as long as you reference that path in the following steps.
[assembly: AssemblyKeyFile("")]
[assembly: AssemblyKeyFile("c:\\keypair.snk")]
Note Although it is possible to create and use the same strong name key pair file (.snk) for all Web Part assemblies you build, for greater security we recommend that you create and use a different key pair file (.snk) for each assembly you create.
When you create a project using the Web Control Library template in Visual Studio .NET, a default C# source file called WebCustomControl1.cs is created. The following steps are basic additions and modifications that you make to that default code to create the class for a custom Web Part (these steps will not be required if you use the Web Part Library template):
To make it easier to write a basic Web Part class, you should use the using directive to reference the following namespace in your code:
For the purposes of this sample, we're also adding a using directive for the System.Web.UI.HtmlControls namespace because we'll use two HtmlControls classes in the rendering of this Web Part.
using Microsoft.SharePoint.WebPartPages;
using System.Xml.Serialization;
using System.Web.UI.HtmlControls;
The ToolboxDataAttribute class specifies the default tag that is generated for a custom control when it is dragged from a toolbox in a tool such as Microsoft Visual Studio.
[ToolboxData("<{0}:WebCustomControl1 runat=server></{0}:WebCustomControl1>")]
[ToolboxData("<{0}:SimpleWebPart runat=server></{0}:SimpleWebPart>")]
A Web Part is compatible with ASP.NET server controls because the Microsoft.SharePoint.WebPartPages.WebPart base class inherits from the same parent class as server controls, the System.Web.UI.Control class. To create a Web Part, the implementation of your WebPart class must inherit from the WebPart base class.
public class WebCustomControl1 : System.Web.UI.WebControls.WebControl
public class SimpleWebPart : WebPart
To succesfully import your custom Web Part, you must define an XML namespace for all of the properties in your Web Part. You can do this globally by declaring the XmlRoot attribute at the top of your Web Part class definition, or by declaring an XmlElement attribute for each custom property in your class. The following steps describe using the XmlRoot attribute at the class level. For examples of using the XmlElement attribute, see Creating a Web Part with Custom Properties.
[XmlRoot(Namespace="MyWebParts")]
If you are creating multiple Web Parts, you should generally use the same namespace across all of your Web Parts. By default, both the Web Control Library and Web Part Library templates assign the namespace the same name as your project. For this example, we're using the arbitrary namespace of MyWebParts for both the WebPart class and XML namespace for the Web Part's properties.
namespace SimpleWebPart
namespace MyWebParts
[XmlRoot(Namespace="SimpleWebPart")]
[XmlRoot(Namespace="MyWebParts")]
After you have completed the previous steps, you can define the logic and rendering for your Web Part. For this part, we will write some basic ASP.NET code to create two HTML server controls: a text box, and a button that will set the Title property of the Web Part. The following code sample shows the complete SimpleWebPart.cs file with all of the modifications described in the previous steps. The additional code to define the Web Part's functionality is highlighted in bold text:
//--------------------------------------------------------------------
// File: SimpleWebPart.cs
//
// Purpose: A sample Web Part that demonstrates how to create a basic
// Web Part.
//--------------------------------------------------------------------

using System;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Xml.Serialization;
using Microsoft.SharePoint.Utilities;
using Microsoft.SharePoint.WebPartPages;
namespace MyWebParts
{
/// <summary>
   /// This Web Part changes its own title and implements a custom property.
/// </summary>
[XmlRoot(Namespace="MyWebParts")]
   public class SimpleWebPart : WebPart
   {
      // Additional members (child-control creation, the Text property) omitted.

      protected override void RenderWebPart(HtmlTextWriter output)
      {
         RenderChildren(output);

         // Securely write out HTML
         output.Write("<BR>Text Property: " + SPEncode.HtmlEncode(Text));
      }
   }
}
After you've added all of the preceding code, you can build your sample Web Part. Assuming that you are building your Web Part on your server, this will create an assembly for the part named SimpleWebPart.dll in the C:\inetpub\wwwroot\bin folder.
By default, the trust level for this server is WSS_Minimal, which does not allow access to the SharePoint object model. In order for this Web Part to set the SaveProperties property, you must perform one of the following three actions: install the assembly in the global assembly cache (which grants it full trust), raise the trust level in the web.config file, or create and apply a custom code access security policy that grants the assembly the permissions it needs.
In addition to copying your Web Part's assembly to the C:\inetpub\wwwroot\bin folder of your SharePoint server (or to the Global Assembly Cache folder C:\windows\assembly), you must perform three additional steps to deploy a Web Part:
As a security measure, Windows SharePoint Services requires you (or a server administrator) to register the Web Part's assembly and namespace as a SafeControl in the web.config file of the server.
<SafeControl
Assembly="SimpleWebPart, Version=1.0.0.0, Culture=neutral, PublicKeyToken=def148956c61a16b"
Namespace="MyWebParts"
TypeName="*"
Safe="True"
/>
Note Replace the PublicKeyToken value (def148956c61a16b) with the actual value for your Web Part's assembly.To determine the correct PublicKeyToken value for the Assembly attribute of the <SafeControl> tag for your Web Part, use the sn.exe command-line utility:sn.exe -T c:\inetpub\wwwroot\bin\SimpleWebPart.dll
sn.exe -T c:\inetpub\wwwroot\bin\SimpleWebPart.dll
Note You can also determine the correct PublicKeyToken value for your Web Part assembly, by performing the following workaround:
A Web Part Definition file (.dwp) file is a simple XML file which contains property settings for a single Web Part. To import your Web Part into a Web Part Page, simply upload the .dwp file. After uploading the Web Part, you can display the Web Part by dragging it into one of the zones of the Web Part Page.
Two properties are required in the .dwp file: Assembly and TypeName. However, to display a default name and description for the Web Part after it is imported, you should also include the Title and Description properties. If you want to set other Web Part properties during import, you can also define them in a .dwp file. A .dwp file takes the following form:
<?xml version="1.0"?>
<WebPart xmlns="http://schemas.microsoft.com/WebPart/v2">
   <Assembly></Assembly>
   <TypeName></TypeName>
   <Title></Title>
   <Description></Description>
</WebPart>
If you used the Web Part templates, a .dwp file will have been created for you automatically; just make sure that the after you change the name of the Web Part you also change the corresponding TypeName in the .dwp file. If you did not use the Web Part templates, to create a .dwp file for this sample Web Part from scratch, type the following lines in a new Notepad document and save it as SimpleWebPart.dwp in the C:\Inetpub\wwwroot\bin directory.
<?xml version="1.0"?>
<WebPart xmlns="">
<Assembly>SimpleWebPart, Version=1.0.0.0, Culture=Neutral, PublicKeyToken=def148956c61a16b</Assembly>
<TypeName>MyWebParts.SimpleWebPart</TypeName>
<Title>My Simple Web Part</Title>
<Description>A sample Web Part</Description>
</WebPart>
Note In the preceding example, replace the PublicKeyToken value (def148956c61a16b) with the actual value for your Web Part's assembly.
To use and test your Web Part, import it into a Web Part Page on a server running Windows SharePoint Services or SharePoint Portal Server. | http://msdn.microsoft.com/en-us/library/dd584160(office.11).aspx | crawl-002 | refinedweb | 1,854 | 54.83 |
The.
--------------------
README file begins:
--------------------
The main program is go-arena.pl, and a sample client is in go-client.pl.
The interface is explained somewhat in go-proto.txt, but I recommend just
telnet'ing into the server, as that works well.  I'll set up the server over
here for you all to play if you do not have perl available to you.

The idea behind all this is to have AI play each other to see who can write
a better AI.  Or to put it better, to allow programmers a good arena to test
their ideas on how well an AI can perform, in order to make programmers
better at AI programming :) (which is a good thing).

Anyway enjoy, send clients to me if you would like me to run tournaments
(until we have the server better setup to do so itself), my email address is
[email protected]
-Gryn
--------------------
go-arena.pl file begins:
--------------------
#!/usr/bin/perl -w
use strict;
use IO::Socket;
use Getopt::Long;
my $ver = "0.1.1a";
sub printhelp {
  print "This is the go-arena.pl program, it will accept connections to play\n";
  print "a game of GO from clients which follow a simple text-line protocol\n";
  print "described in go-proto.txt\n";
  print "It will only play one game, defaults to size 19, and spits the scoring\n";
  print "information back to the terminal.  (you can change the size by\n";
  print "passing it as a parameter, and clients are also told if they\n";
  print "won or not)\n";
  print " --help,-h       Help\n";
  print " --size,-s       Board size\n";
  print " --port,-p       Port to listen on (only if no --socket option)\n";
  print " --socket,-d     Local domain socket (only local connection)\n";
  print " --komi,-k       Set komi amount (2nd turn compensation)\n";
  print " --debug         Print debug info\n";
  print " --verbose,-v    Print more stuff\n";
  exit;
}
my ( $help, $socket,$debug);
my ( $size, $port, $komi, $verbose)
= ( 19, 7179, 4.5, 0);
GetOptions( "help|?" => \$help,
"size|s=i" => \$size,
"port|p=i" => \$port,
"socket|d=s" => \$socket,
"komi|k=f" => \$komi,
"debug" => \$debug,
"verbose|v+" => \$verbose
);
if ($help or @ARGV) {
printhelp;
}
srand;
my $up = [ 0, -1 ];
my $down = [ 0, 1 ];
my $left = [ -1, 0 ];
my $right = [ 1, 0 ];
# global settings:
# needs to be at least 3
my $histsize = 3;
######################################################################
# BOARD Type (functions etc)
# (MOVE/POS type too)
# $board is a ref to a hash, each hash value holds some aspect of a board state.
# b = a 2d array that shows the positions of the stones with the following mapping:
#    0 - nothing
#   99 - nothing (but already counted, when doing final scoring)*
#    1 - white stone
#    2 - white stone*
#   -1 - black stone
#   -2 - black stone*
#  -99 - not on board
#  (note that * values are only present during intermediate board processing)
# m = number of valid moves made total
# t = number of turns taken
# f = free spaces left on the board
# black = black's score
# white = white's score
# lm = the last move made (which created the current state)
# lb = the previous board (if you apply the last move you would arrive
#      at the current state)
#      NOTE: the 'lb' hash element forms a linked list!  A history of the
#            game!  Therefore, $histsize limits the maximum size of this
#            buffer, setting it to -1 will allow infinite boards -- however
#            for KO detection to work, we always store the last 2 boards.
#            (after finding a valid move, check that board against two boards
#            ago (i.e. $board->{'lb'}->{'lb'} ) )
#      NOTE NOTE: Perl doesn't seem to throw away the memory, oh well.

# A move or position type is simply a reference to a two element array
# specifying the x and y location.  A [-1,-1] indicates a pass, and [-2, -2]
# means the client quit or disconnected.
# Reset and clear a board
sub newB {
my $board;
$board->{'m'} = 0;
$board->{'t'} = 0;
$board->{'f'} = $size * $size;
$board->{'lm'} = undef;
$board->{'lb'} = undef;
$board->{'black'} = 0;
$board->{'white'} = 0;
@{$board->{'b'}} = ();
for (my $y=0;$y<$size;$y++) {
for (my $x=0;$x<$size;$x++) {
set_board_val($board,[$y,$x],0);
}
}
return $board;
}
sub board_copy {
my $b1 = shift;
my $recurse = shift || 0;
my $b2;
if (not $b1) { # undef, null
return undef;
}
$b2->{'m'} = $b1->{'m'};
$b2->{'t'} = $b1->{'t'};
$b2->{'f'} = $b1->{'f'};
$b2->{'m'} = $b1->{'m'};
$b2->{'black'} = $b1->{'black'};
$b2->{'white'} = $b1->{'white'};
# these two should be the same.
$b2->{'b'} = [ map { [ @{$_} ] } @{$b1->{'b'}} ];
# for (my $y = 0; $y < $size; $y++) {
# $b2->{'b'}[$y] = [ @{$b1->{'b'}[$y]} ];
# }
$b2->{'lm'} = $b1->{'lm'};
if ($recurse < $histsize) {
$b2->{'lb'} = board_copy($b1->{'lb'},$recurse+1);
} else {
$b2->{'lb'} = undef;
}
return $b2;
}
# Returns true if two positions's are the same
sub pos_equal {
my ($p1, $p2) = @_;
if ($p1->[0] == $p2->[0] and $p1->[1] == $p2->[1]) {
return 1;
} else {
return 0;
}
}
# Is move a pass? (i.e. (-1, -1) )
sub is_pass {
my $move = shift;
if (pos_equal($move, [ -1, -1 ])) {
return 1;
} else {
return 0;
}
}
# Add two positions together (e.g. add_dir($this,$up) )
sub add_dir {
my ($pos, $dir) = @_;
my $newpos = [ $pos->[0] + $dir->[0], $pos->[1] + $dir->[1] ];
return $newpos;
}
# Returns true if the position is within the board
sub in_board {
my $pos = shift;
if ($pos->[0] >= 0 and $pos->[1] >= 0 and
$pos->[0] < $size and $pos->[1] < $size)
{
return 1;
} else {
return 0;
}
}
# Returns the value of the board at a position
# (uses the key at the top of this section)
sub board_val {
my ($board, $pos) = @_;
if (in_board($pos) == 1) {
return $board->{'b'}[$pos->[1]][$pos->[0]];
} else {
return -99;
}
}
# Sets the board's value at a position
sub set_board_val {
my ($board, $pos, $val) = @_;
if (in_board($pos) == 1) {
$board->{'b'}[$pos->[1]][$pos->[0]] = $val;
}
}
# pretty prints a board
sub printB {
my $board = shift;
print "/-","--" x $size,"\\\n";
for (my $y=0;$y<$size;$y++) {
print "| ";
for (my $x=0;$x<$size;$x++) {
print ". " if board_val($board,[$x,$y]) == 0;
print "# " if board_val($board,[$x,$y]) == 99;
print "% " if board_val($board,[$x,$y]) == 98;
print "O " if board_val($board,[$x,$y]) == 1;
print "X " if board_val($board,[$x,$y]) == -1;
print "O)" if board_val($board,[$x,$y]) == 2;
print "X<" if board_val($board,[$x,$y]) == -2;
print "* " if board_val($board,[$x,$y]) ==-99;
}
print "|\n";
}
print "\\-","--" x $size,"/\n";
}
# prints a board (faster/smaller)
sub printBsimp {
my $board = shift;
for (my $y=0;$y<$size;$y++) {
print map {if ($_ == 0) {"."}
elsif ($_ ==-1) {"X"}
elsif ($_ == 1) {"O"}
else {"?"}} @{$board->{'b'}[$y]};
print "\n";
}
print ".\n";
}
# pretty prints a position
sub printP {
my $pos = shift;
print $pos->[0]," x ",$pos->[1],"\n";
}
# checks to see if stone positions are the same
sub board_equal {
my ($b1, $b2) = @_;
if (@{$b1->{'b'}} == @{$b2->{'b'}}) {
return 1;
} else {
return 0;
}
}
# End of BOARD, MOVE/POS section
######################################################################
# This processes a move on $board, returning if the move was valid,
# and also the new board state.
sub do_move {
  my ($orig_board, $move, $who) = @_;
  my $captured = 0;

  $orig_board->{'t'} += 1;
  my $board = board_copy($orig_board);
  $board->{'lb'} = $orig_board;
  $board->{'lm'} = $move;
  # we need to dec $orig_board->{'t'} if we find a valid move

  # A pass is always a valid move. also let quit messages be valid too
  if (is_pass($move) == 1 or pos_equal($move,[-2,-2]) == 1) {
    return (1,$board);
  }

  # The position must be free...
  if (board_val($board,$move) == 0) {
    # Now, process captures
    # (place stone, then see if up,down,left,right stones are captured)
    set_board_val($board,$move,$who);
    for my $dir ($up,$down,$left,$right) {
      # if direction is an opponent's piece..
      if (board_val($board,add_dir($move,$dir)) == $who*-1) {
        # if there is no life here, kill it, else refill it to orig value.
        if (fill_life($board,add_dir($move,$dir),$who*-1,$who*-2) == 0) {
          $captured += fill_count($board,add_dir($move,$dir),$who*-2,0);
        } else {
          fill_life($board,add_dir($move,$dir),$who*-2,$who*-1);
        }
      }
    }
    # if, after captures, the piece itself has no life, then it is still
    # an invalid move.
    if (fill_life($board,$move,$who,$who*2) == 0) {
      return (0,$orig_board);
    } else {
      fill_life($board,$move,$who*2,$who);
    }
    # KO checking, if last board state for this player is the same
    # as this board state, then KO prevents this move.
    # (also check to see if moves were the same, since this must
    # happen before KO -could- happen).
    if ($board->{'lb'} and $board->{'lb'}->{'lb'} and $board->{'lb'}->{'lb'}->{'lm'}) {
      print "KO detection enabled\n" if $debug;
      if (pos_equal($move,$board->{'lb'}->{'lb'}->{'lm'})) {
        print "Possible KO checking board states\n" if $debug;
        if (board_equal($board,$board->{'lb'}->{'lb'})) {
          print "KO!\n" if $debug;
          return (0,$orig_board);
        }
      }
    }
    $orig_board->{'t'} -= 1;
    $board->{'f'} -= 1;
    $board->{'f'} += $captured;
    $board->{'m'} += 1;
    $board->{'black'} += $captured if $who == -1;
    $board->{'white'} += $captured if $who == 1;
    return (1,$board);
  } else {
    return (0,$orig_board);
  }
}
sub tally_final_score {
my $orig_board = shift;
my ($black, $white) = (0,0);
my (@s,@n,@b,@w) = ((),(),(),());
my $board = board_copy($orig_board);
for (my $y = 0;$y < $size; $y++) {
for (my $x = 0;$x < $size; $x++) {
if (board_val($board,[$x,$y])==0) {
my $owner = fill_owner($board, [$x,$y], 0, 99);
push @s, [$x,$y] if $owner == 0;
push @n, [$x,$y] if $owner == 99;
push @b, [$x,$y] if $owner == -1;
push @w, [$x,$y] if $owner == 1;
}
}
}
map { fill_count($board,$_,99,0) } @s;
map { fill_count($board,$_,99,0) } @n;
map { $black += fill_count($board,$_,99,0) } @b;
map { $white += fill_count($board,$_,99,0) } @w;
$board->{'black'} += $black;
$board->{'white'} += $white;
return $board;
}
# Returns 1 if position filled had life or not
sub fill_life {
my ($board, $pos, $from, $to) = @_;
if (in_board($pos)) {
if (board_val($board, $pos) == $from) {
set_board_val($board, $pos, $to);
      # we put results of fill in temp array, so that the ||'s short
      # circuit logic does not stop the fill operation.  probably could use map
my @f = (
fill_life($board, add_dir($pos, $up), $from, $to),
fill_life($board, add_dir($pos, $down), $from, $to),
fill_life($board, add_dir($pos, $left), $from, $to),
fill_life($board, add_dir($pos, $right), $from, $to) );
return $f[0] || $f[1] || $f[2] || $f[3];
} else {
if (board_val($board, $pos) == 0) {
return 1;
} else {
return 0;
}
}
} else {
return 0;
}
}
# Returns number of spaces filled
sub fill_count {
my ($board, $pos, $from, $to) = @_;
if (in_board($pos)) {
if (board_val($board, $pos) == $from) {
set_board_val($board, $pos, $to);
return 1 +
fill_count($board, add_dir($pos, $up), $from, $to) +
fill_count($board, add_dir($pos, $down), $from, $to) +
fill_count($board, add_dir($pos, $left), $from, $to) +
fill_count($board, add_dir($pos, $right), $from, $to);
} else {
return 0;
}
} else {
return 0;
}
}
# this function returns who owns an open space
#  Value | On Board     | Passed as | Returned Owner
#    -1  | Black        |           | Black
#     1  | White        |           | White
#     0  | Blank        | $from     | SEKI
#    99  | Blank (Tmp)  | $to       | None
#   -99  | Off Board    |           | --
sub fill_owner {
my ($board, $pos, $from, $to) = @_;
my $oval = board_val($board,$pos);
if ($oval == -99) {
return $to;
} elsif ($oval == -1 or $oval == 1) {
return $oval;
} elsif ($oval == 99) {
return 99;
} else {# if ($oval == 0) {
set_board_val($board,$pos,$to);
my $owner = $to;
my @neighbors = (
fill_owner($board, add_dir($pos, $up), $from, $to),
fill_owner($board, add_dir($pos, $down), $from, $to),
fill_owner($board, add_dir($pos, $left), $from, $to),
fill_owner($board, add_dir($pos,$right), $from, $to));
for my $n (@neighbors) {
if ($n == $from) { #SEKI (old)
return $from;
} elsif ($n == $to) { #NONE
# Nothing to do
} else { #Black or White
      if ($owner != $n and $owner != $to) { #SEKI (Initial detection)
#        print "At: ";printP($pos);
#        print "  Owner: $owner  Neighbor: $n  T/F: $to / $from\n";
        return $from;
} else { # first or the same owner (black or white)
$owner = $n;
}
}
}
return $owner;
}
}
sub handshake {
my ($client,$color) = @_;
print $client "Welcome to the GO arena $ver\n";
print $client "The board is size $size\n";
while ($color == 0) {
print $client "Please choose which color you would like to play as
+:\n";
my $color_str = <$client>;
return ("",0) if not defined $color_str;
chomp $color_str;
$color_str = lc $color_str;
$color = -1 if $color_str =~ /black/;
$color = 1 if $color_str =~ /white/;
if ($color == 0) {
print $client "Invalid color selection, please choose white or b
+lack\n";
}
}
print $client "You are black\n" if $color == -1;
print $client "You are white\n" if $color == 1;
print $client "Please type OK to accept:\n";
my $line = <$client>;
if (defined $line and $line =~ /^OK\b/) {
print $client "Please enter a short identifier for yourself:\n";
my $id = <$client>;
chomp $id;
return ($id,$color);
} else {
return ("",0);
}
}
sub get_move {
my $client = shift;
print $client "MOVE:\n";
my $line = <$client>;
return [ -2, -2 ] if not defined($line);
return [ -2, -2 ] if $line =~ /QUIT/;
return [ -1, -1 ] if $line =~ /PASS/;
$line =~ /(\d*) x (\d*)/;
return [ $1, $2 ] if defined($1) and defined($2);
return [ -1, -1 ];
}
sub send_result {
my ($client,$valid) = @_;
select ($client);
print "Valid move\n" if $valid;
print "Invalid move\n" if not $valid;
select STDOUT;
}
sub send_board {
my ($client,$board) = @_;
select ($client);
print "Current board\n";
printBsimp($board);
select STDOUT;
}
$|=1;
my $sock;
if (defined $socket) {
print "Using domain $socket\n";
# yep, I know, very not secure.
  unlink $socket if defined $socket and -e $socket and $socket =~ /\.sock$/;
$sock = IO::Socket::UNIX->new(Type => SOCK_STREAM,
Local => $socket,
Listen => 5)
or die "Can't create socket!";
} else {
print "Using port $port\n";
$sock = IO::Socket::INET->new(LocalPort => $port,
Listen => 5,
Proto => 'tcp',
Reuse => 1)
or die "Can't create socket!";
}
my $p;
my $color = 0;
for my $cnt (0..1) {
$p->[$cnt]{'id'} = "";
while ($p->[$cnt]{'id'} eq "") {
print "Waiting for First player...";
$p->[$cnt]{'sock'} = $sock->accept;
print "Connected...Handshaking...";
    ($p->[$cnt]{'id'},$p->[$cnt]{'color'}) = handshake($p->[$cnt]{'sock'},$color);
print "Failed Handshake\n" if $p->[$cnt]{'id'} eq "";
}
print "Got it!\n";
print "Player ",$cnt+1," (";
print "black" if $p->[$cnt]{'color'} == -1;
print "white" if $p->[$cnt]{'color'} == 1;
print ") connected as: ",$p->[$cnt]{'id'},"\n";
$color = $p->[$cnt]{'color'} * -1;
}
$|=0;
# close off everything.
undef $sock;
unlink $socket if defined $socket and -e $socket and $socket =~ /\.sock$/;
my $board = newB;
print "Game started at $size x $size.\n";
$board->{'white'} += $komi;
my $valid = 0;
my $who = -1;
my $lastmove = [ -2, -2 ];
my $move = [ -2, -2 ];
# -99 draw
# 0 none
# 1 White
# -1 Black
my $winner = 0;
# -99 player quit
# 0 none
# 1 normal
my $wintype = 0;
my @socks;
if ($p->[0]{'color'} == -1) {
$socks[0] = $p->[0]{'sock'};
$socks[2] = $p->[1]{'sock'};
} else {
$socks[0] = $p->[1]{'sock'};
$socks[2] = $p->[0]{'sock'};
}
$|=1;
my $test = $p->[0]{'sock'};
print $test "Starting game\n";
$test = $p->[1]{'sock'};
print $test "Starting game\n";
send_board($socks[$who+1],$board);
while ($board->{'f'} > 0 and $winner == 0 and not (is_pass($lastmove) and is_pass($move))) {
if ($valid == 1) {
$who *= -1;
$lastmove = $move;
send_board($socks[$who+1],$board);
}
$move = get_move($socks[$who+1]);
if (pos_equal($move, [-2,-2])) {
print "\nWhite player quit! -- Black wins!\n" if $who == 1;
print "\nBlack player quit! -- White wins!\n" if $who ==-1;
$winner = $who * -1;
$wintype = -99;
};
(print "$who : ",printP($move)) if $verbose == 2;
($valid,$board) = do_move($board,$move,$who);
send_result($socks[$who+1],$valid);
print "." if $valid and $verbose == 1;
print "#" if not $valid and $verbose == 1;
}
$|=0;
print "\n";
$board = tally_final_score($board);
print "Score Black: ",$board->{'black'},"\n";
print "Score White: ",$board->{'white'},"\n";
print "Final board:\n";
printB($board) if $verbose;
# if there wasn't already a winner determined (e.g. someone quit the game)
if ($winner == 0) {
  $winner = -99, $wintype = 1 if $board->{'black'} == $board->{'white'};
  $winner =  -1, $wintype = 1 if $board->{'black'} >  $board->{'white'};
  $winner =   1, $wintype = 1 if $board->{'black'} <  $board->{'white'};
}
select $p->[0]{'sock'};
print "Draw, no winner\n" if $winner == -99;
print "Black wins!\n" if $winner == -1;
print "White wins!\n" if $winner == 1;
select $p->[1]{'sock'};
print "Draw, no winner\n" if $winner == -99;
print "Black wins!\n" if $winner == -1;
print "White wins!\n" if $winner == 1;
--------------------
go-client.pl file begins:
--------------------
#!/usr/bin/perl -w
use strict;
use IO::Socket;
my $address = shift;
if (not $address) {
print "This program connects to a GO server and plays, but only\n";
print "sends random moves in.\n";
print "Just needs one parameter, either a location or domain socket
+name\n";
print "Example: ./go-client.pl localhost:7179 or ./go-client.pl go
+-server.sock\n";
exit;
}
# multiplier on how many moves to make before giving up
my $timeout = 2;
my $server;
if ($address =~ /\.sock$/ and -e $address) {
$server = IO::Socket::UNIX->new(Type => SOCK_STREAM,
Peer => $address);
} else {
if ($address =~/:/) {
$server = IO::Socket::INET->new($address);
} else {
$server = IO::Socket::INET->new("$address:7179");
}
}
die "Can't connect to server!" unless $server;
my $done = 0;
my $line;
my $size = 1;
while ($done < $size*$size * ($timeout+0.1)) {
while (defined ($line = <$server>) and not $line =~/:/) {
if ($done == 0) {
$line =~ /size (\d*)/;
$size = $1 if defined $1;
};
print $line if $done <= 1;
};
last if not defined $line;
print $line if $done <= 1;
print $server "OK\n" if $done == 0;
print $server "Random v1.0\n" if $done == 1;
my $move = [ int (rand $size), int (rand $size) ];
if ($done < $size*$size * $timeout) {
print $server $move->[0]," x ",$move->[1],"\n" if $done >1;
print $move->[0]," x ",$move->[1],"\n" if $done >1;
} else {
print $server "PASS\n" if $done >1;
print "PASS\n" if $done >1;
}
$done++;
}
print $server "QUIT\n";
--------------------
go-proto.txt file begins:
--------------------
This text file describes the protocol used for a go-client to connect and play
a game on the go-server.

Please forgive me if it's a horrid description.

There are two phases to the protocol, the handshake, and the actual game play.
Additionally, any line ending with a colon is an indicator that the server is
requesting a response from the client.

Handshake:

The initial broadcast from the server appears like this:

Welcome to the GO server 0.1
You are black
The board is size 19
Please type OK to accept:

The client is expected to reply with the string "OK" (followed by a newline);
the response is case sensitive.

The client will then be asked to identify himself; this is for game logging,
and is not given to the opponent (at least not until the end of the game).
(e.g. 'Killer-GO-AI v0.2a (Adam Luter)'):

Please enter a short identifier for yourself:

After replying to this request, the server will start the game (there may be a
long delay while you wait for the other player to connect).

Also of note is that the server may ask you what color you want to be;
an appropriate response is black or white.

Actual Game:

The actual game playing starts with a board state declaration, and then a
request for a move:
Current board
...................
...................
...................
...................
...................
...................
...................
...................
...................
...................
...................
...................
...................
...................
...................
...................
...................
...................
...................
.
MOVE:
Please note that the line with a single '.' is to indicate the end of the
board display.  Black pieces are represented as an 'X' and white as 'O'.  If
there is some sort of processing error a '?' may appear on the board state.
The server does not check for these, and the client isn't really expected to
either; they are present only for debugging the server code.

The move response should be in the form of: "number x number" such as "2 x 3".
Note that moves are zero based, so that "0 x 0" is the upper left corner.
(if you send an improperly formatted response, the server defaults to a pass)

Also, there are two special moves.  The first is "PASS" which indicates the
wish to pass your turn.  If both players pass their turn, the game will end.
The game does not check for the number of valid moves left, so this is the
only terminating condition.  It is recommended that your client time out a
game after some large number of moves, or realize an end game state.

The other move "QUIT" indicates the client has quit for some reason.  This
is merely done for politeness, and the server will handle a broken connection
the same way as a "QUIT" response.

When a move is sent, the server will reply with either invalid or valid move.
If the move was invalid the client is allowed to try again indefinitely.  This
feature may be turned off in later versions, once clients have become smart
enough.

When the game ends, the server will print to -its- display the score, and if
anyone quit, also who wins (which can be different from the score, if someone
quit).  It will also communicate who won to the client.

Note that the server runs on port 7179 by default, unless you change it.

If you want to figure this out, the easiest way is to simply play by hand with
the command:  telnet serversaddress 7179   -- this will allow you to play
against your own computer opponent, and also let you see how the text flow
works.
import ROOT
Welcome to JupyROOT 6.07/07
%jsroot on
h = ROOT.TH1F("myHisto","My Histo;X axis;Y axis",64, -4, 4)
Time to create a random generator and fill our histogram:
rndmGenerator = ROOT.TRandom3()
for i in xrange(1000):
    rndm = rndmGenerator.Gaus()
    h.Fill(rndm)
c = ROOT.TCanvas()
h.Draw()
c.Draw()
We'll try now to beautify the plot a bit, for example filling the histogram with a colour and setting a grid on the canvas.
h.SetFillColor(ROOT.kBlue-10)
c.SetGrid()
h.Draw()
c.Draw()
Alright: we are done with our first step into the ROOTbooks world! | https://nbviewer.org/github/dpiparo/swanExamples/blob/master/notebooks/Simple_ROOTbook_py.ipynb | CC-MAIN-2022-40 | refinedweb | 102 | 60.21 |
Possible memory leak on QDnsLookup?
- Comandillos
Hi there, I've been developing a small DNS server for caching some requests locally (dnsmasq/BIND-like). For that, I use a QMap for caching requests and a QDnsLookup for domains I can't resolve locally.
Everything works fine until QDnsLookup gets in. I've been profiling the application for memory leaks and I've discovered that every time I call the method 'lookup' from a QDnsLookup instance, a memory leak of ~9kb appears. The slot where I connect the lookup instance is totally empty, and I free the instance correctly.
If I don't call the 'lookup' method, no memory leaks appear, so I was thinking QDnsLookup itself might have a memory leak.
This is the class that I use as the DNS resolver
#include "dnsresolver.h" quint32 DNSResolver::resolveDomain(QString domain) { if (cacheResolved){ return cache->value(domain).toIPv4Address(); }else{ lookup->lookup(); return 0; } } void DNSResolver::handleExternalResolve() { if (!lookup->hostAddressRecords().isEmpty()){ auto record = lookup->hostAddressRecords().first(); auto host = record.value(); dnspack->addArecord(host.toIPv4Address()); cache->insert(dnspack->domain, host); } delete lookup; emit sendResponse(*dnspack->datagram, senderHost.toIPv4Address(), senderPort); } DNSResolver::DNSResolver(t_resolve t, QObject *parent) : QThread(parent) { this->socket = t.socket; this->datagram = t.datagram; this->senderHost = t.senderHost; this->cache = t.cache; this->senderPort = t.senderPort; // Alloc a DNS packet for parsing dnspack = new DNS; dnspack->parse(&datagram); dnspack->makeAnswer(); // Check if we can resolve by cache cacheResolved = cache->contains(dnspack->domain); if (!cacheResolved){ // Alloc a external lookup object this->lookup = new QDnsLookup(QDnsLookup::A, dnspack->domain, QHostAddress(FORWARDING_DNS_SERVER)); connect(lookup, SIGNAL(finished()), this, SLOT(handleExternalResolve())); } } DNSResolver::~DNSResolver() { delete dnspack; } void DNSResolver::run() { auto ip = resolveDomain(dnspack->domain); if (ip != 0){ dnspack->addArecord(ip); emit sendResponse(*dnspack->datagram, senderHost.toIPv4Address(), senderPort); } }
Thanks!!!!
- SGaist Lifetime Qt Champion
Hi,
What version of Qt are you using ? On what platform ?
Can you reproduce that with a smaller example ?
- Comandillos
Qt 5.8, macOS 10.12.4.
That's only a simple use case, but as I said, the problem can be reproduced just by creating a QDnsLookup instance and calling the lookup method. Here is a smaller example.
#include "dnsresolver.h" DnsResolver::DnsResolver(QObject *parent) : QObject(parent) { lookup = new QDnsLookup(QDnsLookup::A, "qt.io",QHostAddress("8.8.8.8")); connect(lookup, SIGNAL(finished()), this, SLOT(handleLookup())); lookup->lookup(); } void DnsResolver::handleLookup() { // I use sleep for letting the profiler inspect the memory trace with enough time QThread::sleep(10); // I won't use lookup anymore delete lookup; // Let's close the app QCoreApplication::quit(); }
This code gives me a memory leak of 944 bytes on 'libresolv'. I don't know if it's Qt problem or 'libresolv' problem (I don't know what's even libresolv).
Here's the profiler message.
- SGaist Lifetime Qt Champion
Which version of libresolv is it exactly ?
AFAIK, that's the library that currently implements the name resolution and related stuff on Linux style OSs. | https://forum.qt.io/topic/80305/possible-memory-leak-on-qdnslookup | CC-MAIN-2017-39 | refinedweb | 482 | 51.75 |
Semantic Web terminology
From Semantic Web Standards
Semantic Web Terminology, informally explained
Informal explanations for the non-expert.
cardinality
This generally refers to the rules for usage of an element that translate roughly to: required, optional, minimum number allowed, maximum number allowed. If your data rules state that your description can include one and only one title for the resource, that is expressing cardinality.
class
From the OWL documentation: "Classes provide an abstraction mechanism for grouping resources with similar characteristics." Think of classes as a grouping, a set, or even something like a genus in biology. Things in the same class have something(s) in common. OWL uses this concept of classes heavily, and in that way OWL-based metadata encourages a kind of classified view of information -- although there isn't a single classification but many of them, since each metadata ontology can define its own view using classes.
Dublin Core
A small, widely used set of metadata terms -- title, creator, date, and so on -- for describing resources of many kinds.
FOAF
"Friend of a Friend." A vocabulary for describing people, what they do, and how they relate to other people and things.
graph
The shape that RDF data takes: statements connect subjects to objects, so a set of statements forms a network (a graph) of nodes and links rather than a table or a record.
GRDDL
"Gleaning Resource Descriptions from Dialects of Languages." A mechanism for extracting RDF data from XML and XHTML documents.
literal/non-literal
A literal value is an actual string, number, or date, such as "Herman Melville"; a non-literal value is a thing identified by a URI.
linked data
Linked Data is the data format that supports the Semantic Web. The basic rules for Linked Data are defined as:
- Use URIs as names for things.
- Use HTTP URIs so that people can look up those names.
- When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL).
- Include links to other URIs, so that they can discover more things.
lower case semantic web
An informal term for lightweight, pragmatic ways of adding meaning to web content (simple tagging, microformats), as opposed to the full "capital S" Semantic Web built on the W3C standards described here.
namespace
Namespaces are based on the domain name system of the Internet. Your namespace is an identity space on the Internet that you control. For example: Library of Congress owns the namespace "loc.gov"; OCLC has "oclc.org"; the University of Michigan has "umich.edu." When Library of Congress creates an identifier for the subject heading "Guide dogs" it creates an identifier in its own namespace. This guarantees that the identifier will be unique on the web since no one else can use "loc.gov".
ontology
Ontology in computer science and the semantic web is a formal representation of knowledge. For those involved in metadata development, when you have defined all of your data elements, your controlled vocabularies, how they fit together, and anything else that is needed to make your metadata work, then you have an ontology for your metadata domain.
OWL
"Web Ontology Language." (Yes, it should be "WOL.") The semantic web standard that is used to defined ontologies (metadata sets) so that they can be used and understood in that environment.
property
The property in RDF plays a role similar to the data element in other data models. In RDF the triple is made up of a subject (the thing you are describing with your metadata), a predicate (what you are going to say about it) and the object (the actual "saying"). The predicate is usually referred to as the property. So in a statement like "Moby Dick / has author / Herman Melville" the property is "has author."
Qnames
Qualified names: a shorthand for URIs consisting of a prefix plus a local name, such as dc:title standing for the full URI of the Dublin Core title property.
RDF
Resource Description Framework. The basic standard for the semantic web. RDF defines the building blocks of the semantic web such as classes and properties and how they interact to create meaning.
RDFa
A set of attributes for embedding RDF statements directly in HTML pages.
RDFs
RDF Schema. A basic vocabulary for describing RDF vocabularies: it lets you declare classes and properties and state how they relate to one another.
reification
Making a statement about another statement -- for example, recording who asserted it or when it was asserted.
relation
A connection between two resources. In RDF a relation is expressed as a property linking a subject to an object.
semantic
In the humanities, the term "semantic" relates to meaning, such as the meaning of a word. When used in the context of the semantic web, however, the term refers to formally defined meaning that can be used in computation. In this sense, formal languages like programming languages have a semantic component that determines the meaning of the symbols and terms. For example, "x += y" has a defined meaning in programming languages like C and Perl.
Semantic Reasoning Engine
Software that applies the formal semantics of RDF and OWL to a set of statements in order to infer new statements that follow logically from them.
Semantic Web Stack
The familiar "layer cake" diagram of semantic web technologies: URIs and XML at the bottom, with RDF, ontology languages such as OWL, and query and logic layers built on top.
SKOS
"Simple Knowledge Organization System." A standard way to describe thesauri and other sets of terms for the semantic web. It includes concepts like broader and narrower and related terms, and allows the definition of preferred display terms and alternate display terms.
SPARQL
The query language for RDF data; it plays a role for the semantic web similar to the one SQL plays for relational databases.
statement
A single piece of metadata consisting of a subject, a predicate and an object. "Moby Dick / has author / Herman Melville" is a statement. Metadata in the semantic web is made up of related statements. There are no records in this view, but a group of related statements can express the same full description that a record does in other metadata systems.
triple
A triple is a set of three elements: a subject, a predicate, and an object. When the term triple is used, the discussion is often focusing on the underlying technology of the semantic web; statement tends to be used when talking about the human view of metadata creation.
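As an illustration (added here, not part of the original page), the Moby Dick statement written as a triple with Python's rdflib; the example.org names are invented:

from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
# subject, predicate, object
g.add((EX["moby-dick"], EX["hasAuthor"], Literal("Herman Melville")))
# serialize returns the Turtle text (bytes in older rdflib versions)
print(g.serialize(format="turtle"))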
triple store
Essentially a database made up of triples, as opposed to, for example, a relational database made up of tables of data.
Turtle
A compact, human-readable text syntax for writing out RDF triples.
URI
Uniform Resource Identifier, a standard format for identifiers on the Internet. The string beginning with "http://" is a valid URI, and on the Semantic Web identifiers are formatted as "http" URIs. | http://www.w3.org/2001/sw/wiki/index.php?title=Semantic_Web_terminology&oldid=3079 | CC-MAIN-2013-48 | refinedweb | 762 | 54.12 |
Cooking with Python: Seven Tasty Recipes for Programmers
This recipe is all in fun. It's a bit of cleverness we hesitate to show you, but can't resist: lambda, recursion, a ternary operator, all in one line sure to make any Programming 101 instructor's head spin.
Credits: Anurag Uniya
You want to write a recursive function, such as factorial, using lambda (you probably made a bet about whether it could be done!).
f = lambda n: n - 1 + abs(n - 1) and f(n - 1) * n or 1
This recipe implements the recursive definition of the factorial function as a lambda form. Since lambda forms must be expressions, this is slightly tricky. If/else, a statement, is not allowed inside an expression. Still, a short-circuiting form of a Python idiom for "conditional (ternary) operator" takes care of that (see "Simulating the ternary operator in Python" in the Python Cookbook for other ways to simulate the ternary operator, both short-circuiting and not).

The real issue, of course, is that lambda's forte (for what little it deserves being called a "forte") is making anonymous functions; how then do we recurse? This is what makes this recipe's subject a good bet to win a drink from your Python-using friends and acquaintances misguided enough not to have read the Python Cookbook from cover to cover. Make sure the terms of the bet only mention lambda and do not specify that the resulting function is to be left unnamed, of course! Some might consider this cheating, but we Pythmen like to think of ourselves as a bunch of pragmatists.
We aren't sure if we like this one for its ingenuity or hate it for its obscurity! It may be twisted, but if your for loop is only summing things up, consider reducing your recipe.
Credits: Tim Keating
You need to generate pseudo-random numbers simulating the roll of several dice, where the number of dice and number of sides on each die are parameters.
import random

def dice(num, sides):
    return reduce(lambda x, y, s=sides: x + random.randrange(s),
                  range(num+1)) + num
This recipe presents a simple but subtle function to let you generate random numbers by simulating a dice roll. The number of dice and the number of sides to each die are the parameters of the function. In order to roll 4d6 (four 6-sided dice), you would call dice(4,6).

Simulating a dice roll is a good way to generate a random number with an expected "binomial" profile. For example, 3d6 will generate a bell-shaped (but discrete) probability curve with an average of 10.5.

After trying a more "manual" approach (a for loop with an accumulator), I found that using reduce is generally faster. It's possible this implementation could be faster still, as I haven't profiled it very aggressively; it's fast enough for my purposes :).
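One quick way to see the bell shape (an addition to the discussion above, using the recipe's own dice function) is to tally a few thousand 3d6 rolls:

from collections import Counter

counts = Counter(dice(3, 6) for _ in range(10000))
for total in sorted(counts):
    print total, '#' * (counts[total] // 50)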
Taking advantage of Web services doesn't get much easier than this. Think of it as a recipe for ordering out!
Credits: Rael Dornfest and Jeremy Hylton
You need to make a method call to an XML-RPC server.
#needs Python 2.2 or xmlrpclib
from xmlrpclib import Server
server = Server("")
class MeerkatQuery:
    def __init__(self, search, num_items=5, descriptions=0):
        self.search = search
        self.num_items = num_items
        self.descriptions = descriptions

q = MeerkatQuery("[Pp ]ython")
print server.meerkat.getItems(q)
XML-RPC is a simple and lightweight approach to distributed processing. xmlrpclib, which makes it easy to write XML-RPC clients in Python, is part of the core Python library since Python 2.2, but you can also get it for older releases of Python from.

This recipe uses O'Reilly's Meerkat service, intended for syndication of contents such as news and product announcements. Specifically, the recipe queries Meerkat for the five most recent items mentioning either "Python" or "python." If you try this out, be warned that, depending on the quality of your net connection, the time of day, and the level of traffic on the Internet, response times from Meerkat are very variable: if the script takes a long time to answer, it doesn't mean you did something wrong; it just means you have to be patient!
O'Reilly & Associates recently released Python Cookbook (July 2002).

Sample Chapter 1, Python Shortcuts, is available free online. You can also look at the Table of Contents, the Index, and the Full Description of the book.
muroar_read − Read data from a stream in a portable way
#include <muroar.h>
ssize_t muroar_read (int fh, void * buf, size_t len);
This function reads data from a stream connected to a sound server. It exists to have a portable way to read data from the sound server that does not depend on the underlying operating system.
On success this call returns the number of bytes successfully read. On error, −1 is returned.
This function calls the underlying read function in a loop. If it returns less than the given length, you should not retry immediately but wait at least a few milliseconds.
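The following sketch (not part of the original page) shows a typical read; it assumes fh is a stream descriptor obtained elsewhere, for example via muroar_stream(3):

#include <muroar.h>
#include <stdio.h>

/* Read one buffer of audio data from an already-connected stream. */
ssize_t read_chunk (int fh) {
    char buf[4096];
    ssize_t got = muroar_read(fh, buf, sizeof(buf));

    if ( got == -1 ) {
        fprintf(stderr, "muroar_read failed\n");
        return -1;
    }
    /* got may be less than sizeof(buf); as noted above, wait a few
       milliseconds before reading again rather than retrying at once. */
    return got;
}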
This function first appeared in muRoar version 0.1beta0.
read(2), muroar_write(3), muroar_stream(3), muroar_close(3), RoarAudio(7). | http://man.m.sourcentral.org/ubuntu1204/3+muroar_read | CC-MAIN-2021-25 | refinedweb | 121 | 75.61 |
Tcl/Tk developers have constructed many interesting widget sets which extend Tk's basic functionality. A few of these--Tix, for example--are reasonably well known and accessible to Tkinter users. What about the rest? When a TkInter programmer sees a promising Tk extension, is it likely to do him or her any good?
Briefly, yes. First, it's important to make the distinction between so-called "pure Tk" extensions and those that involve (external) C-coded compilation. Quite a few useful widgets sets, most notably including BWidgets and tklib, are "pure Tk". That means that Tcl/Tk programmers simply read them in at run time, with no need for (re-)compilation, configuration, or other deployment complexities.
These extensions are nearly as easy for TkInter programmers to use. Here's an example:
If you have a file of Tcl code in a file called foo.tcl and you want to call the Tcl function foo_bar then
import Tkinter
root = Tkinter.Tk()
root.tk.eval('source {foo.tcl}')
root.tk.eval('foo_bar')
will load and execute foo_bar. To see the details of passing and returning arguments, Use the Source Luke, and look at Lib/lib-tk/Tkinter.py. For wrappers of other popular Tk widgets, look at the Python/ directory of the Tixapps distribution.
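For instance (a sketch along the same lines, still assuming foo.tcl defines foo_bar), the call method converts arguments and return values for you:

import Tkinter
root = Tkinter.Tk()
root.tk.eval('source {foo.tcl}')
# Arguments are converted to Tcl values; the result comes back as a string
result = root.tk.call('foo_bar', 'some-argument', 42)
print result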
On the other hand, Tix and BLT are popular Tk extensions which require compilation. These days (since version 8.0 of Tk) most extensions are compiled as dynamic loading packages, and are as easy to load into Tkinter as pure Tk extensions using a Python expression like
root.tk.eval('package require Ext')
For an example of this, see the Lib/lib-tk/Tix.py file in the standard library that loads the Tix extension.
The trick here is to install the extension library directory in a place the Tcl in TkInter will find it. The best place to try is as a subdirectory of Tcl/ in the Python installation. If this does not work, look into the file pkgIndex.tcl in the extension's library directory and try to understand what it is doing to load the .dll or .so shared library. To ask Tcl to consider a specific directory that contains a package, use
root.tk.eval('lappend auto_path {%s}' % the_tcl_directory)
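Putting those two pieces together, loading a compiled extension and calling one of its commands looks roughly like this (the directory, the package name Ext, and the command name are placeholders for illustration):

import Tkinter
root = Tkinter.Tk()
# point Tcl at the directory containing the package's pkgIndex.tcl
root.tk.eval('lappend auto_path {/usr/local/lib/ext1.0}')
# load the extension into the interpreter
root.tk.eval('package require Ext')
# call a command the extension defines (hypothetical name)
root.tk.eval('ext_command arg1 arg2')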
FredrikLundh has pages which he calls "work in progress", but which readers are certain to find helpful (the original page is gone; Fredrik recommends its successor, which explicitly extends Tkinter through use of Tcl). Also, Gustavo Cordero is working in this same area; his work is likely to show up in the Tcl-ers' Wiki for Tkinter.
INTRODUCTION
Update 0.1 for Microsoft Dynamics CRM 2016 is available. This article describes the hotfixes and updates that are included in this update.
This update is available for all languages that are supported by Microsoft Dynamics CRM 2016.
To see all of the new features in this release, go to the following Microsoft Dynamics CRM Customer Center website:
More Information
Build number and file names for this update rollup
Update rollup information
Update 0.1 for Microsoft Dynamics CRM 2016 is available for on-premises customers.
The following file is available for download from the Microsoft Download Center:
Installation information
Windows Update installation
Use Windows Update to automatically install Update 0.1 for Microsoft Dynamics CRM 2016. This update rollup will be available through Windows Update in Q1 of calendar year 2016.
Restart requirement
If you are prompted to restart the computer, do this after you apply the update.
Removal information
You can uninstall Update 0.1 from a server that is running Microsoft Dynamics CRM 2016. However, make sure that you back up your databases before you uninstall Update 0.1. For more information, go to the following Microsoft Developer Network (MSDN) websites:
Issues that are resolved in Update 0.1 for Microsoft Dynamics CRM 2016
Update 0.1 resolves the following issues:
- a than By in pre-req checks of Deployment Manager.
- Search records Metric Type retrievable App.
- New Active Directory Authentication Library is not referenced correctly.
- Creating a user in a Time Zone that uses day light savings time is not adding the day light savings offset.
- Signing in with a user with no roles, navigating backwards and then signing in with a valid user, the system hangs on the user education page.
- The Win10 app fails app validation with the error Restricted namespace found.
- Unable to find the Draft Records in Draft View when the records are created offline.
- Android wont. Added but the XRM SDK dlls still referring ZoneresourcesCache Corruption causes login to fail at YammerEnabledActivityFeeds Solution installed.
- When opening a URL the capitalization in.
- Fixed an issue where auto-create doesn't work for Office Groups at all, if the group wasn't created in Office.
- Fixed an issue where".
- Fixed an issue where any Plugin or Workflow activity that includes the line System.Diagnostics.Trace.WriteLine is throwing a Security Exception.
- Fixed an issue in Internet Explorer where a security certificate alert pops up during app load.
- Fixed an issue where process fly out is not displaying exactly under a particular stage of business process flow in cases.
- Fixed an issue with getting error messages for Social sharing in survey runtime.
- Fixed an issue where, for Queue items, release action does not change the worked by field, even after pressing grid refresh button.
- Fixed an issue where pagination is not showing the correct amount of records for each page.
- Fixed an issue where Xrm.Tooling doesn't work in Azure Apps.
- Fixed an issue where the clear filter is not working when four filters are applied on Visual filters for Activity Dashboard.
- Fixed an issue where, after upgrade, the out of the box Knowledge Manager Role is unmanaged.
- Fixed an issue with the iPad Pro where Horizontal Scrolling would sometimes not function correctly.
- Fixed an issue where a solution that contains a reference to the newly introduced TimeLine control cannot be imported into an RTM Server.
- Fixed an issue where a user was unable to proceed further when using captcha = yes and the response required for the question = yes when the user entered only captcha and clicked on Next.
- Fixed an issue where the activities view label was misplaced and the search result values were overlapped by the grid header in the Edge browser when a sub grid is inserted in IC form.
- Fixed text wrap so that it breaks words only at allowed break points and not in between words.
- Fixed issues where the ToolTip of Translations and prepare client customization tabs are not localized.
- Fixed an issue where an item without a Label value causes the next item to be rendered with a horizontal displacement (indented) from other items.
- Corrected the year display in the About page.
Hotfixes and updates that you must enable or configure manually
Update 0.1 for Microsoft Dynamics CRM 2016 - Article ID: 3133963 - Last Review: Jan 22, 2017 - Revision: 2
Microsoft Dynamics CRM 2016 | https://support.microsoft.com/en-us/help/3133963/update-0-1-for-microsoft-dynamics-crm-2016 | CC-MAIN-2017-34 | refinedweb | 785 | 55.64 |
Help! is there errors in codes copied from book ?
I'm a total newcomer to Qt, and I borrowed some books from the library. I copied the "helloworld" code from the book into my Qt Creator, and it ran perfectly. But when I started to build some more complex code copied from the book, it started producing errors.
this is the code:
@
//addressbook.h
#ifndef ADDRESSBOOK_H
#define ADDRESSBOOK_H
#include <QWidget>
class QLineEdit;
class AddressBook : public QWidget
{
Q_OBJECT
public:
AddressBook(QWidget *parent = 0);
private:
QLineEditor * nameline;
QTextEditor * addresstext;
};
#endif // ADDRESSBOOK_H
@
It said that "QLineEditor" (and lots of other things) needs a declaration in the scope. Am I missing some library or header?
Edit: please use @ tags around code sections; Andre
- Eddy Moderators
In the adressbook.cpp file you should also use:
#include <QLineEdit>
Google for Forward declaration to understand why.
Please add @ code tags on your code. See the button with <> on it on top of the editor here.
You must include QLineEdit and the other classes that you use
@#include <QLineEdit>@
Edit:
As Eddy says. I'm late little bit :)
It is spelled QLineEdit, not QLineEditor. In addition, you are missing a forward declaration for QTextEdit in your .h file.
[quote author="qxoz" date="1322203431"]You must include QLineEdit and other which you use
#include <QLineEdit>[/quote]
As long as you just use pointers a forward declaration is sufficient. There is no need to include headers.
[quote author="Lukas Geyer" date="1322203500"]
As long as you just use pointers a forward declaration is sufficient. There is no need to include headers.[/quote]
In fact, in most cases it's good practice to not include the header where a forward declaration is sufficient. Speeding up compile time is only one of the advantages.
[quote author="Lukas Geyer" date="1322203450"]It is spelled QLineEdit, not QLineEditor. In addition, you are missing a forward declaration for QTextEdit in your .h file.[/quote]
I think that's the point. The reason I included QLineEdit is that it said something was not declared. There is no include for QLineEdit in the book.
Samples in books often omit includes for classes provided by well-known libraries for the sake of brevity. Often, you can download the samples as working source code, and there the needed includes will be present.
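Pulling the answers together, the header compiles once the class names are spelled correctly and both are forward-declared; the matching #include <QLineEdit> and #include <QTextEdit> then go into addressbook.cpp, where the classes are actually used:

@
//addressbook.h (corrected)
#ifndef ADDRESSBOOK_H
#define ADDRESSBOOK_H
#include <QWidget>
class QLineEdit;   // forward declarations suffice for pointer members
class QTextEdit;
class AddressBook : public QWidget
{
Q_OBJECT
public:
AddressBook(QWidget *parent = 0);
private:
QLineEdit * nameline;      // QLineEdit, not QLineEditor
QTextEdit * addresstext;   // QTextEdit, not QTextEditor
};
#endif // ADDRESSBOOK_H
@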
Part 6: Styling the App
With the basics of XAML, layout and events in place, let's do something fun. We'll give some unique character to the app by styling it. Obviously we'll want to follow Microsoft's guidelines so that our app looks like it belongs as part of the Windows Phone 8 ecosystem, however we still have a lot of latitude on how we can personalize our app.
Here's the game plan for this lesson:
Our first task in this lesson is to change the tiles for our app that the user will see in the alphabetical list of apps as well as the Start page if they should want to pin our app to their Start page. To begin, we'll open the WMAppManifest.xml file in the Properties directory of our Project:
When you double-click this file in the Solution Explorer it will be opened in a special designer window providing a number of options that affect how our application is introduced to the Windows Phone 8 operating system. For example, on the first tab, "Application UI" we can change the display name, the app icon and more:
We want to change from the default icon to one more suited for our app. I've created such an icon and it's available in the C9Phone8\PetSounds_Assets folder. These assets for this series are available from wherever you originally downloaded this document, or watched the videos that accompany it.
Inside that folder is the ApplicationIcon.png:
I'll drag and drop that file from Windows Explorer into the Assets folder of my Project. When I do that, I see a dialog notifying me that a file by that name already exists:
We do want to replace the old file with our new image file.
Next, I want to replace two images in the Assets\Tiles subfolder. I'll ready the target in the Solution Explorer by expanding the Tiles subfolder.
In Windows Explorer, I'll open the C9Phone8\PetSounds_Assets\Tiles subfolder:
... I'll highlight the two image files and drag and drop them into the target Tiles folder in the Solution Explorer. I'll see the dialog again:
Now, I've replaced the necessary tile files in my project. Back in the WMAppManifest.xml file, since I replaced the old image file with a new one, I may need to close this file and re-open it to see the new App Icon reflected there:
Further down on that settings page, I want to make sure the following settings are chosen:
To test these settings, I'll run (F5) the app. Once it's running, on the Phone Emulator, click the Start button (the Windows icon), then swipe (click and hold down the mouse button while dragging the mouse cursor) from right-to-left to see the alphabetical list of apps and locate our PetSounds app:
Great! Now, let's click and hold down until a context menu appears displaying the options to "pin to start" and "uninstall":
Clicking "pin to start" will add the app to the Start page. Click the Start button on the Phone Emulator and scroll down to see the pinned tile:
It's a small detail, but it already feels like a more legitimate app just with that small change.
Next, we'll change the app's title text and page's title text. In the MainPage.xaml, locate the TitlePanel, the StackPanel added by default by the Page template:
... and we'll make the following changes:
The result:
Another small styling step, but again it makes the app feel more legitimate.
By default, the Style attribute of the second TextBlock is set to:
Style="{StaticResource PhoneTextTitle1Style}"
This will require a bit of explanation. First, whenever you see open and closed curly braces in XAML, it is referred to as "binding syntax". There are two types of binding syntaxes:
{StaticResource } - Let me start with the term "resource". A resource is an object that can be reused in different places in your application. Examples of resources include brushes and styles.
I created a simple Phone project called XAMLResources with the most simple {StaticResource} example I could think of:
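(The screenshot of that example did not survive; here is a minimal reconstruction of the same idea. The brush name, color, and text values are stand-ins, not the video's exact code:)

<phone:PhoneApplicationPage.Resources>
    <SolidColorBrush x:
</phone:PhoneApplicationPage.Resources>

<StackPanel>
    <TextBlock Text="First" Foreground="{StaticResource MyBrush}" />
    <TextBlock Text="Second" Foreground="{StaticResource MyBrush}" />
</StackPanel>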
In this overly simplistic example, it may not be readily apparent the value of this approach. As you application grows large and you want to keep a consistent appearance between the controls on the page, you may find this quite handy. It keeps the XAML concise and compact. And, if you need to change the style or color for any reason, you can change it once and have the change propagate to everywhere it has been applied.
Here's the result:
I created these Local Resources on the page, meaning they are scoped to just the MainPage.xaml. What if I wanted to share these Resources across the entire application? In that case, I would have defined them in the App.xaml's <Application.Resources> section as a System Resource:
So, back in the PetSounds project, you may be wondering where the PhoneTextTitle1Style is defined. Actually, is a "built-in style" as part of the Windows Phone Operating System's Theme Resources:
If you scroll down that page, you can see the Text styles available to Windows Phone apps:
These Themed Resources should be leveraged to keep your apps looking like they below on the Windows Phone. You should resist the urge to use custom colors, fonts and the like unless you have a good reason to (such as to match your company's established branding elements, etc.).
It's also worth noting that many of the styles are based on styles which are based on yet other styles. This visual inheritance allows developers to avoid repeating the attribute settings that will be common across variations of the styles, much like how Cascading Style Sheets work in web development.
I said earlier that there were two binding expressions in Windows Phone; the second is {Binding } ... this is used for binding data (usually generic lists of custom types with their properties set to the data we want to work with in our app) to on-page elements. We'll see this at work much later in this series.
Let's have a little fun. As I said earlier, you typically want to stick with the Phone's Theme Resources. However, we can edit a style if we would like to. I think this might provide an additional insight or two on how this all works.
Make sure your mouse cursor is in the TextBlock with the Text attribute set to "animals".
In the Properties window, navigate to the Miscellaneous section, and locate the Style property:
Notice how there's a green border surrounding the text box, and the icon to the right is filled with the same green color. If you click that square or the text box you'll see a context menu:
Here we could potentially change the binding. In fact, I'll click the option "Convert to Local Value". When I do that, notice that the Style property is gone. In its place is a complex property setting for <TextBlock.Style> with a <Style> definition therein:
As you can see, when we converted from the Themed Style to a Local Style, we see part of the definition of that Theme Style ... it's based on the PhoneTextBlockBase Theme Style, and it overrides that style by setting two additional properties: FontFamily and FontSize. Both of these are defined as Theme Styles as well. Let's override those settings with our own:
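(Again the code screenshot is missing; the local style has roughly the following shape. The specific FontFamily and FontSize values are stand-ins; the video picks its own:)

<TextBlock Text="animals">
    <TextBlock.Style>
        <Style TargetType="TextBlock" BasedOn="{StaticResource PhoneTextBlockBase}">
            <!-- stand-in values for illustration -->
            <Setter Property="FontFamily" Value="Verdana" />
            <Setter Property="FontSize" Value="48" />
        </Style>
    </TextBlock.Style>
</TextBlock>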
That produces the following:
Next, let's make this style available to our entire app by making it a System Resource. I'll highlight everything between the <TextBlock.Style> and </TextBlock.Style> tags and cut them:
... then I'll open up the App.xaml file:
... and paste them into the <Application.Resources> section. I also add an attribute:
x:Name="MyTitleText":
Now, I can return to MainPage.xaml and re-write the TextBlock to use the new Application Resource:
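(The rewritten line is not shown in this transcript; it would be roughly:)

<TextBlock Text="animals" Style="{StaticResource MyTitleText}" />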
... and the result should not change:
Success!
Just to recap, the big takeaway from this lesson is how we can style our app to make it look like it belongs on the Windows Phone while also expressing our own individuality. We learned how to modify the WMAppManifest.xml file to change the icons and the title of our app, we changed the app's title and page title, and learned how to bind to StaticResources like Themed Resources for the Windows Phone, and how to create both Local and System Resources based on Themed Resources, Bob and the others in this series-team! So far this series presents for me the right level of informative insight. Nice pace as well. Bob you are doing great. Keep on trucking. Looking forward to the other episodes into the coming days/weeks. Windows Phone 8 development is cool!
One small thing that I would like to be explained. In App.xaml we use attribute name of the style definition. On the page we used key attribute. Why is that? How does it work internally? What about performance?
Thanks a lot for this great series. Are there any differences between using x:Name and x:Key for defined resources in App.xaml ?
Hey Bob (and anyone else reading), I've run into my first "programming" issue so far following this specific tutorial. I've continued on with a few more lessons, but it is really bothering me now that the Live Tile for the app is not changing icon-wise.
What I mean is the application icon shows up fine in the app list both on my device and emulator (duck picture). But, when I pin the tile to the Start Screen it shows up as a big asterisk as if I never set the images.
Please help!
Edit: Fixed the issue by simply deleting the images from the Solution Explorer and reloading them into Visual Studio.
Thank you very much, Bob!
@PeterNL: Awesome.
@Niner77839: Yeah, I know I forgot to explain it thoroughly. Let me point you to this same question on StackOverflow so you can get a complete picture:
In a nutshell, there's no practical difference. The differences are in how things are implemented behind the scenes in XAML, with namespaces that both define the same attributes.
@RandomAlec: Great that you figured it out ... I had no idea what I was going to suggest.
@Progr: No, thank you for watching.
@Bob
Is there a way to define styles in the app which are automatically applied depending on the x:Name property, like we do with #id in CSS for the web? For example, I create one style for RootLayout and it is applied automatically to any control whose x:Name="RootLayout".
Thanks
@isyedakhtar: I honestly don't know what a "best practice" would be in regards to selecting & styling, but take a look at how we do it in this video:
Yes, I realize this is for Windows 8 Store Apps, but the concepts are the same (at least, I'm pretty sure they are). Does that help?
Thanks Bob, I think I have to go with creating styles and applying them explicitly as you taught in the above video.
Great tutorial! Thank you for your great work! One small thing I'd like to point out: the image below the text
"Great! Now, let's click and hold down until a context menu appears displaying the options to "pin to start" and "uninstall":"
might be incorrect.
Excellent tutorial! This helps with the basic concept of getting your tiles onto the start screen, something I've wondered how that is to be done for a while, now I know.
One thing, which I don't understand, is that in Visual Studio I have a green squiggly line under things like PhoneTextNormalStyle, PhoneFontSizeNormal, etc. If I mouse over those squiggly lines I get a popup that says:
"XAML 'PhoneFontSizeNormal' is not found"
Why is that? It is my understanding, based upon this tutorial, that those are all styles found online, and I would conclude resolved in that way. So, why am I getting that warning message?
@Bob
I must say you explain the concepts very clearly and I am enjoying the development of windows phone 8.
I am having a problem: when I look at the app listing in the emulator, my application has the image that I set, but when I pin it to start I don't get that image on the tile.... please help
I am getting a small square with white color instead of app tile image.
Hey Bob,
See this is the screenshot I am providing....Why I am not getting the App icon in the start menu....
Screenshot attachment link in skydrive
RingtonePlay is my app name
@Akhil Menon: Are you using the image I provided in the source code that accompanies this series? Or are you using your own image? I suspect the latter. And therefore I suspect you don't have a transparent background image. Please let me know and I can advise next steps.
@Akhil Menon: I agree with @BobTabor here. If you're still having issues, upload your images somewhere I can see them. I may need to see the app as well, but right now just share your tiles.
Oh Bob, I am using my own created image in PNG format. So how do I make my own app image, i.e., one with a transparent background?
Please provide me the link of your app image so that I can use it
I got the assets folder from your download. How do I make my own app image?
@Akhil Menon: Try this:
Personally, I use Adobe Fireworks, but I know any image / photo editor would be able to handle it ... perhaps even Paint.net or Gimp. Good luck!
Thank you very much......It worked
I have a question Bob: How do I go about creating the icons for my app? Is there a particular tool I should use (I'm not a graphic designer)?
@Joe Benassi: there are a bunch of sites where you can get icons for free or pay for them; the one I linked is one of many. Please be sure you follow their licensing terms and guidelines.
A great series of tutorials you did. I really appreciate it!!! Keep going :)
@Akhil Menon: If you do a lot of background-removing, Background Burner might save you some work because it can usually auto-detect the foreground object without your input.
Jason
Hi Bob, really enjoying the series so far. One comment on the transcript for this part of the series - it looks like the inline image for pinning the app to the start screen has been swapped with the image for converting the textblock style to a local style. Sorry to nitpick. Moving on to part 7. Thanks!
@MrCharlesReid: Thanks for the heads up ... creating the HTML for this was harder than recording the videos!
@BobTabor: I'd wondered if there was some magic involved in creating the transcripts. Anyway I really appreciate them. They enable me to make progress even when I don't have the luxury of watching the videos (which are also great). Thanks again!
Hello Bob Tabor, congratulations on the initiative. The content is great, and the way it is taught is fascinating; thanks for the explanations. I wanted to ask: how can I take a name the user types into a text box and get back a corresponding list related to the entered data? Thank you for your attention.
Hello guys, I have been following the video instructions, but the app icon does not appear in the menu, even though I used the app icon from the assets provided. It appears in Visual Studio and in the tiles, but not in the menu. Here is a screenshot.
Can I dynamically change the style of a control? Do we have a StyleSelector in Windows Phone 8?
Problem: I have a list of categories, and based on the category name I need to load different styles.
Nice idea to have a small video of yours at the bottom of the tutorials.
My name is Gabriel, I'm Brazilian. I would like to say thanks for the classes; I'm really enjoying them.
I'm using Visual Studio 2013 with SDK 8.0. I wonder if it is possible to release an application created with this version of the software for WP 7 as well...
I did not find this option in the settings of Visual Studio.
Would I have to build 2 different apps on 2 different SDKs?
@Gabriel Guimaraes Aguiar Teixeira: I don't think you can do what you're trying to do. I could be wrong, but I've never tried it before and I doubt it can be done. Sorry!
Trailing zeros come from factors of 2 and 5: each zero needs one of each, so the answer is the smaller of the two counts. Every second number contributes a factor of 2, but only every fifth number contributes a factor of 5, so factors of 5 are the scarce ones and we only need to count them.
A single integer can contribute more than one factor of 5 (25 contributes two, 125 contributes three, and so on). floor(n/pow(5,k)) counts how many numbers up to n are divisible by pow(5,k).
So we calculate n/5+n/25+n/125+...
A value v that is divisible by 25 is also divisible by 5, but this is not double counting: such a v carries two factors of 5, and one of them is counted by n/5 while the other is counted by n/25.
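For example, with n = 100: 100/5 = 20, 100/25 = 4, 100/125 = 0, so 100! ends in 20 + 4 + 0 = 24 zeros. The recursion below computes the same sum: trailingZeroes(100) = 20 + trailingZeroes(20) = 20 + 4 + trailingZeroes(4) = 24.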
public class Solution {
    public int trailingZeroes(int n) {
        if (n < 5) {
            return 0;
        } else {
            // n/5 multiples of 5, plus the extra 5s counted recursively
            return n / 5 + trailingZeroes(n / 5);
        }
    }
}
Sometimes you will want to include dynamic information from your application in the logs. As we have seen in the examples so far, we print strings in logs, and we can easily format and create strings by adding variable values to them. This can also be done directly within the logging methods like debug(), warning(), etc.
To log the variable data, we can use a string to describe the event and then append the variable data as arguments. Let us see how we will do this with the help of an example given below:
import logging logging.warning('%s before you %s', 'Think', 'speak!')
WARNING:root:Think before you speak!
In the above code, the variable data is merged into the event description message using the old %s-style of string formatting. The arguments passed to the method are inserted into the message as variable data.
You can use any formatting style; f-strings, introduced in Python 3.6, are also a great way to format strings because they keep the formatting short and easy to read:
Here is a code example:
import logging name = 'Thomas' logging.error(f'{name} raised an error in his code')
ERROR:root:Thomas raised an error in his code
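One practical note: passing the values as arguments (rather than pre-formatting with an f-string) lets logging skip the string interpolation entirely when the message's level is filtered out. A small sketch:

import logging

logging.basicConfig(level=logging.INFO)

items = list(range(1000))
# DEBUG is disabled above, so %r never runs and the
# potentially expensive formatting work is skipped.
logging.debug('current items: %r', items)
logging.info('%d items loaded', len(items))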
I have a file with records as below.
s.no,name,Country
101,Raju,India,IN
102,Reddy,UnitedStates,US
Here my country column has the value "India,IN", which is a single value that contains a comma. Can you let me know how to handle this data when we read the file using a comma delimiter in Spark-Scala? I tried split(",") which did not give me the expected output.
for ex: expected output for the first record:
S.no: 101
name: Raju
Country: India,IN
You can use this:
import org.apache.spark.sql.functions.struct
import spark.implicits._   // needed for toDF

val df = Seq((1,2), (3,4), (5,3)).toDF("a", "b")

// "new" is a reserved word in Scala, so use another name,
// and note the closing parenthesis that was missing
val combined = df.withColumn("NewColumn", struct(df("a"), df("b")))

combined.show()
+---+---+---------+
|a |b |NewColumn|
+---+---+---------+
|1 |2 |[1,2] |
|3 |4 |[3,4] |
|5 |3 |[5,3] |
+---+---+---------+
val data = combined.drop("a").drop("b")
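For the question as asked, keeping "India,IN" together, a simpler route is to split each line with a limit so that only the first two commas act as delimiters. A sketch, assuming a SparkSession named spark and a file path records.csv (the dot in "s.no" is replaced with "sno" to avoid Spark column-name issues):

import spark.implicits._

val parsed = spark.sparkContext.textFile("records.csv")
  .filter(line => !line.startsWith("s.no"))        // drop the header row
  .map { line =>
    // split into at most 3 fields; the 3rd keeps "India,IN" intact
    val Array(sno, name, country) = line.split(",", 3)
    (sno, name, country)
  }

val df = parsed.toDF("sno", "name", "Country")
df.show(false)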
Recently I started a project where I needed to read 2-bit rotary encoder switch. I got rotary encoders from SunFounder with pull-ups & push button and started searching for simple code that would give me direction and number of steps turned. Simplest example would be volume control.
And here is where the complications started. Each and every code example I found relied on interpreting the bit pairs (0:0, 1:0, 0:1, 1:1) the encoder produces while being turned. While all of that works in theory, in practice all of them suffered various problems due to the real world: bouncing contacts, skipped or wrong readings, and so on. So most of the code tried to somehow work around those problems by guessing missed steps, filtering obviously wrong inputs, etc. In other words, all of them (at least the ones I found) were imperfect or complicated.
Solution to all of above is really simple, one had to look at the problem from another angle:
The working principle of a 2-bit rotary encoder switch is that the states of lines A and B MUST change at different points in time, as seen in the picture. Otherwise it would be impossible to read the direction of turning!
Each single step of the encoder produces 4 state pairs, but it always ends with (1:1), and in this lies the solution:
Instead of trying to read and match states and calculate direction, the solution is dead simple: IGNORE all changes before the final state (1:1) and, using interrupts, determine which edge came first before reaching (1:1) - A or B. This gives you the direction of turning. An added bonus is that you don't need any hardware debouncing, as it is handled in the code itself.
No steps are missed. No steps are misinterpreted!
In attached example I simulated volume knob. Depending on speed with which the knob is turned volume is increased/decreased as square function of speed. Rotary encoder is connected to GPIO pins 4 & 14 and interrupts are used.
The main loop checks every 100 msec whether the knob has been turned, and if so it adjusts the Volume variable. An added complication in this example is that if you leave the code running for a VERY LONG time and turn the knob always in one direction, the variable that holds the number of changes will wrap around when it reaches the MAX or MIN integer value! To avoid this you have to reset it to 0, and in doing so watch out for simultaneous access from the interrupt thread. I used simple locking for that.
Here is the code in Python, rewriting it in any other language should be simple:
Code: Select all
import RPi.GPIO as GPIO
import threading
from time import sleep

# GPIO Ports
Enc_A = 4                       # Encoder input A: input GPIO 4
Enc_B = 14                      # Encoder input B: input GPIO 14

Rotary_counter = 0              # Start counting from 0
Current_A = 1                   # Assume that rotary switch is not
Current_B = 1                   # moving while we init software

LockRotary = threading.Lock()   # create lock for rotary switch

# initialize interrupt handlers
def init():
    GPIO.setwarnings(True)
    GPIO.setmode(GPIO.BCM)      # Use BCM mode

    # define the Encoder switch inputs
    GPIO.setup(Enc_A, GPIO.IN)
    GPIO.setup(Enc_B, GPIO.IN)

    # setup callback thread for the A and B encoder
    # use interrupts for all inputs
    GPIO.add_event_detect(Enc_A, GPIO.RISING, callback=rotary_interrupt)  # NO bouncetime
    GPIO.add_event_detect(Enc_B, GPIO.RISING, callback=rotary_interrupt)  # NO bouncetime
    return

# Rotary encoder interrupt:
# this one is called for both inputs from rotary switch (A and B)
def rotary_interrupt(A_or_B):
    global Rotary_counter, Current_A, Current_B, LockRotary

    # read both of the switches
    Switch_A = GPIO.input(Enc_A)
    Switch_B = GPIO.input(Enc_B)

    # now check if state of A or B has changed
    # if not that means that bouncing caused it
    if Current_A == Switch_A and Current_B == Switch_B:  # Same interrupt as before (Bouncing)?
        return                                           # ignore interrupt!

    Current_A = Switch_A        # remember new state
    Current_B = Switch_B        # for next bouncing check

    if (Switch_A and Switch_B):     # Both one active? Yes -> end of sequence
        LockRotary.acquire()        # get lock
        if A_or_B == Enc_B:         # Turning direction depends on
            Rotary_counter += 1     # which input gave last interrupt
        else:                       # so depending on direction either
            Rotary_counter -= 1     # increase or decrease counter
        LockRotary.release()        # and release lock
    return                          # THAT'S IT

# Main loop. Demonstrate reading, direction and speed of turning left/right
def main():
    global Rotary_counter, LockRotary
    Volume = 0                  # Current Volume
    NewCounter = 0              # for faster reading with locks

    init()                      # Init interrupts, GPIO, ...

    while True:                 # start test
        sleep(0.1)              # sleep 100 msec

        # because of threading make sure no thread
        # changes value until we get them
        # and reset them
        LockRotary.acquire()            # get lock for rotary switch
        NewCounter = Rotary_counter     # get counter value
        Rotary_counter = 0              # RESET IT TO 0
        LockRotary.release()            # and release lock

        if (NewCounter != 0):           # Counter has CHANGED
            Volume = Volume + NewCounter * abs(NewCounter)  # Decrease or increase volume
            if Volume < 0:              # limit volume to 0...100
                Volume = 0
            if Volume > 100:            # limit volume to 0...100
                Volume = 100
            print NewCounter, Volume    # some test print

# start main demo function
main()