I have had this monitor a couple weeks and now all of a sudden the card reader doesn't recognize an inserted card. The operating system (XP Pro) sees the reader but it doesn't see the cards (SDHC). I have tried the standard things: different cards, rebooting, enabling & disabling, resetting the USB port, and still no go.. anyone else had issues or have any ideas? I would hate to send it back just for a reader fix..
It worked but now it doesn't. My supposition is windows pushed an update on you which caused this. Was your PC set to automatically get updates?
Hi Chris,
Thank you for the reply, actually I have my machine set to ask before updating, but to be honest I am not sure if I did any updates between the time the card reader worked and didn't work, I do know that on the day this issue started it had been working fine a couple hours earlier. But if it is a Windows Update that is causing this problem is there a way to fix it?
You would have to run System Restore and look for a save date just prior to the issue.
(Start- Programs- Accessories- System Tools- System Restore)
Thank you again Chris, I went ahead and restored to a date a week ago but unfortunately it didn't help. The card reader shows up as Drive F, but when I insert a card I get the same "please insert card" pop up when I try to access it.
Get it replaced. If you only have a monitor order number, you should contact USA Order Support. If your monitor was purchased tied to a Dell PC, contact USA Technical Support. They will need the following data -
Name:
Shipping address:
Phone number:
Monitor Order number or PC Service Tag number if purchased with a PC:
Monitor 20 digit PPID number found on the back on a label or on the slider card on the left rear:
Reason: U2410 Card Reader Failure
Thank you for your help Chris,
I do have one more question for you. Since I need a replacement monitor (A00), should I wait to do the exchange until the latest version (A01 I assume) is out?
If you are using Color Settings Standard, Multimedia, or sRGB Custom Color Gain then get the exchange now.
If you are using Adobe RGB, wait for my announcement on when the A01 is ready.
Thank you very much Chris,
I do in fact use Adobe RGB almost exclusively, so I will take your advice and wait for word on the A01 release.
I hope Dell realizes what an asset you are to their organization; being able to communicate with someone such as yourself really helps. As a longtime Dell customer, and someone that has had both good and bad experiences with Dell's customer service, your forum really brings things up a notch.
Thank You Again,
Dave
You can try the following:
1. Boot into Safe Mode (press F8 during startup), then restart or shut down and boot back into normal mode; this can fix registry problems.
2. If the failure continues, use the built-in System Restore to restore to a time before the fault appeared (if restoring from normal mode fails, boot into Safe Mode with F8 and run System Restore from there).
3. If the fault remains the same, repair with the system disk: open a command prompt and type SFC /SCANNOW (with a space between SFC and /), insert the card in the internal reader, and the system will automatically compare and repair files.
4. If the fault is still present, set the CD-ROM as the first boot device in the BIOS, boot from the system installation disk, and press R to select the "repair installation" option.
5. If the failure continues, reinstalling the operating system is recommended.
BTW, I sincerely recommend a 9 in 1 internal card reader instead; it needs no driver.
I've really gotten to like the fire command line project from google. It makes it a lot easier to turn the helper files for your jupyter notebook into command line apps, which in turn makes it easy to get everything into production. In this document I will show a small extra trick so that you can make fire based command line apps available from the terminal everywhere.
Starting Fire
Let's start by making a command line app called
clapp.py.
import os
import fire


def hello(person="world"):
    """
    Merely prints hello to a person. Usually the world.

    :param person: Name of someone.
    """
    print(f"hello {person}")


def getcwd():
    """
    Merely prints the current working directory.
    """
    print(os.getcwd())


if __name__ == "__main__":
    fire.Fire({
        'getcwd': getcwd,
        'hello': hello
    })
The magic of fire is that you can now run these separate python functions from the terminal. Feel free to type along:
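A session might look roughly like this (exact output depends on your fire version; the directory path is just an example):

$ python clapp.py hello
hello world
$ python clapp.py hello --person=Vincent
hello Vincent
$ python clapp.py getcwd
/home/user/projects/clapp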
By running this, you can immediately see some nice features from fire:
- python functions can easily be called from the command line
- it automatically parses command line arguments into function input
- docstrings are captured and are part of the help message
- because the code is now running from the command line, it is very easy to schedule the code with something like cron or airflow
- this should make it very easy to customize and build your own tools
There are downsides currently though:
- the command line tool is only available from the folder where you have the file
- you need to run python ... before the command line app
- the command line app might have prerequisites which need to be installed, and currently the command line app doesn't have a method to check if these prerequisites are met
Fire Everywhere
I will now make a very minor change to the python file.
import os
import fire


def hello(person="world"):
    """
    Merely prints hello to a person. Usually the world.

    :param person: Name of someone.
    """
    print(f"hello {person}")


def getcwd():
    """
    Merely prints the current working directory.
    """
    print(os.getcwd())


def main():
    fire.Fire({
        'getcwd': getcwd,
        'hello': hello
    })
Next I will create a
setup.py file.
from setuptools import setup

setup(
    name='clapp',
    entry_points={
        'console_scripts': [
            'clapp = clapp:main',
        ],
    }
)
With this change I've made the script installable via pip. The console_scripts part of the setup.py file points to the main function in the clapp file and registers it as an entry point. The nice thing about a setup.py file is that you could also specify prerequisites here (like machine learning libraries or etl requirements).
If you put these two files in the same folder then you can run python setup.py develop to make the clapp command available globally (assuming you are staying in the same python environment). In fact, pip is even able to confirm that we're installing a tool. Feel free to type along:
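On my end that looks roughly like this (pip's output format will differ between versions; the directory is just an example):

$ python setup.py develop
...
$ clapp hello --person=you
hello you
$ cd /tmp
$ clapp getcwd
/tmp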
Conclusion
Making your own tools is amazing and I've found that automating things in the command line is a great habit. The nice thing about fire is that it is barely any extra effort to build tools for the command line if you've already built them for the jupyter notebook. There are many features of the tool that I've not discussed (like code completion) but you can find any extra details here.
i want to print a 'double' variable up to 8 decimal places without trailing zeroes.
for eg:
if a= 10.1234 then print 10.1234
if a= 10.00 then print 10
if a= 10.11111111111 then print 10.11111111 (max of 8 decimal places).
how to do it?
i searched for it and found this:
printf("%g",a);
this work fine but print upto 6 decimal places.
Off the top of my head, I'm going to say that it's not possible to mix and match %f (which supports a precision) and %g (which has a zero removal rule). You'd need to use a less direct solution, such as sprintf() followed by a right trim of zeros.
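A minimal sketch of that approach, assuming a fixed 8-digit precision and ignoring edge cases like NaN or very large values:

#include <stdio.h>
#include <string.h>

/* Print a double with up to 8 decimal places, trailing zeros removed */
static void print_trimmed(double a)
{
    char buf[64];
    sprintf(buf, "%.8f", a);

    /* Right-trim zeros, then the decimal point if nothing follows it */
    char *end = buf + strlen(buf) - 1;
    while (*end == '0')
        *end-- = '\0';
    if (*end == '.')
        *end = '\0';

    puts(buf);
}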
From the Linux fprintf() man page: [the quoted excerpt, which describes the flag characters (such as 0, -, space, and +), the minimum field width, and the precision of a conversion specification, is too garbled in this copy to reproduce in full].
Pay special attention to the "precision" and "field width" specs above.
Edited by rubberman
Pay special attention to the "precision" and "field width" specs above.
No disrespect, but unless I'm misreading your post (in which case I apologise) you should follow your own advice. ;) The %f specifier doesn't remove trailing zeros when the precision exceeds the value, and the precision on the %g specifier affects significant digits rather than digits after the radix. So while %g can be used to get the correct behavior, that's only after doing introspection on the value to determine how many whole digits are present:
#include <stdio.h>
#include <math.h>

int main(void)
{
    double a = 1234.567891234;

    // Quick and dirty digit count
    int digits = log10((double)(int)a) + 1;

    printf("%.*g\n", 8 + digits, a);

    return 0;
}
@deceptikon can you please explain the printf statement?
@deceptikon can you please explain the printf statement?
This is the point where I'll suggest that you RTFM, because format strings are clearly and completely described. But, I'll also explain it.
The %g specifier selects the behavior of either the %f or %e specifier (that's fixed or scientific format, respectively), depending on the actual value being printed. By default, if the exponent in your value exceeds 6 or -4, %g uses scientific format rather than fixed format. Trailing zeros are omitted, which is behavior you want (%f doesn't provide this).
When you provide a precision to the specifier (a dot followed by a numeric value), %g treats that as a maximum number of significant digits, which means both whole number digits and precision digits. For example, the value 123.456 has six significant digits. IF you try to print it with
printf("%.4g", 123.456), you'll get 123.4 as output.
The numeric value in the precision can be a placeholder for an argument rather than a literal value in the string, that's what the asterisk does. It says "take the next argument to printf() and use that value as the precision". Another way of writing the example above using a placeholder would be
printf("%.*g", 4, 123.456).
All in all, what my example does is calculate the number of digits in the whole value, then adds 8 to this (where 8 corresponds to the number of precision digits you want), then uses that value as the significant digits for the %g specifier. It simulates through value introspection the behavior of the precision for the %f specifier.
Cairngorm Deepdive
Børre Wessel - Adobe Consulting
About me
• Børre Wessel
• • •
Senior Consultant at Adobe Consulting Rich Internet Applications Based in Edinburgh Scotland
• Background
• • • • •
Client side architecture J2EE Enterprise Software Development Agile Software Development Rich Internet Applications UI Development since 2000, used Flex since version 1.5
History
• The first version of Cairngorm was developed for Flash, before Flex existed.
• It contained only ServiceLocator and ViewHelper.
• The ViewHelper wrote straight to the view, e.g. _movieclip.something.sometextfield.text = myText. When Flex and data binding were introduced, the ViewHelper became obsolete.
• The ServiceLocator was a single point of entry to wrap the NetConnection, the session object for Flash based applications. The old version of the ServiceLocator is comparable to todays HTTPService. With Flex, the ServiceLocator gained added functionality.
• FrontController and Model saw the light of day when Flex came out.
Cairngorm
• How can a framework help the project and the developers?
• Code organization
• Shared common terminology
• Commonly used frameworks and patterns help developers understand the code
• Developers get up to speed on projects quicker
• Separation of concerns
• Cairngorm is based on existing well known patterns:
• MVC
• Singleton pattern
• Observer pattern
Cairngorm, a Flex MVC framework
• M - Model: the state bucket, application state and data storage
• V - View: the user interface components
• C - Controller: controls the flow in your application
Cairngorm classes
• Model - ModelLocator
• View - MXML and .as classes
• Controller - FrontController
• Command - ICommand
• Event - CairngormEvent
• Classes used for invoking services and handling the result: Responder (IResponder), Delegate, ServiceLocator
The singletons
• FrontController, ModelLocator and ServiceLocator are all based on the singleton pattern.
• Using frameworks or patterns doesn't remove the need for good OO skills; good OO always applies.
• If your FrontController becomes one mastodon of a class, several hundred lines of code, you just refactor it.
ModelLocator
• Common misunderstandings/misuse of the ModelLocator:
1. "I always have to call getInstance() to get to my model object"
2. "Can't garbage collect, since we have a reference to the ModelLocator everywhere"
• How to use the ModelLocator:
• The ModelLocator is your "session" / state bucket; it is the "locator" of your model objects.
• It stores strongly typed objects which you can data-bind to.
• Use data binding, and call getInstance() in the view only once!
• Your ModelLocator is instantiated once from your application on startup; populate your views with the correct model objects using data binding.
• Don't store data directly on the ModelLocator.
ModelLocator code example
[Bindable]
public class ApplicationModelLocator implements ModelLocator
{
    private static var modelLocator : ApplicationModelLocator;

    public var appModel1 : SomeModel1 = new SomeModel1();
    public var appModel2 : SomeModel2 = new SomeModel2();
    public var appModel3 : SomeModel3 = new SomeModel3();
    ...
}
the most powerful feature of Flex? 2006 Adobe Systems Incorporated.Data binding. All Rights Reserved. .
View
• Data binding
• All data used in the view is bound to a model object. Data from the view goes back to the model.
• Data binding is one of the core features of Flex; it doesn't make any sense not to use it in your application.
• mx:Module
• Modules are not intended to be standalone applications within applications; the primary use is to split up a single application into smaller components, reducing size, not sharing commonly used components (then you would rather use RSLs).
• A module can easily fit into a Cairngorm based Flex application. Data used within the module can be bound to a model object, without a reference to, for example, the ModelLocator.
• Where does the ui logic belong? If not in the view, where should I put it?
FrontController
• The FrontController only handles mappings between Commands and Events.
• What kind of Events should be handled by the controller? All user gestures, e.g. the user clicks a button? No. With this approach you will very quickly write a lot of extra code which only makes the application more complicated.
Commands and CairngormEvents
• The commands don't really do a lot.
• They get an instance of the CairngormEvent, which should contain the data needed to call, for example, a service.
• They get an instance of the correct Delegate and call the appropriate method.
• What the event contains is up to you! But if you need to put data back on your model, it might be worth passing your model with the event.
Delegates and Responders
• The delegate is the only class with knowledge about the underlying services in the application.
• Responders typically implement methods to handle a successful or failed request.
• Error handling can be done in a responder base class; doing so will make your responders lighter, and you don't have to repeat yourself in all responders to implement the fault() method.
password) method • After the authentication is done. It also handles security credentials for the services • To authenticate..ServiceLocator (and security) • ServiceLocator does a little bit more then just maintaining your list of services. . the ServiceLocator will set the credentials on all the services 2006 Adobe Systems Incorporated. • When you log in use the setCredentials(username. All Rights Reserved.. call a service security-constraint to force authentication.
Presentation Model • What kind of objects do I store on the ModelLocator? • Value objects..Model objects. • Presentation Model • • • • • Keeps application state UI and data logic View binds to data on the model. business objects. All Rights Reserved. observes and updates UI components based on changes in the underlying data The view has knowledge about the model The model has no knowledge of the view • Testable • When a view is removed/destroyed and the presentation model with it. garbage collection will be able to free up the memory if necessary. .. 2006 Adobe Systems Incorporated.
Model objects: Presentation Model (continued)
• The Presentation Model classes typically mirror the structure of the view components, with a one-to-one relationship.
• Presentation Models are independent of Cairngorm and are not part of the framework; it only follows good Object Oriented Programming :)
• Paul Williams from Adobe Consulting has written a series of articles about presentation patterns: macromedia.com/paulw/
Presentation model code example
override protected function handleFirstShow() : void
{
    new ContactEvent().dispatch();
}

public function showContactOverview() : void
{
    applicationState.currentViewState = ApplicationModel.VIEW_CONTACT_OVERVIEW;
}
Unit testing
• What should I unit test?
• Using a model object approach where there is no logic in the view, but only on the model, it is easy to test your model logic and state changes: formatters, validators, type converters...
• Unit testing or functional testing?
• The only way to test visual changes in the user interface is to actually functionally test it using functional testing tools:
• HP (Mercury) QuickTest Professional
• FunFX, an open source Ruby based functional testing tool (rubyforge.org)
• When should I unit test? Always!
I needed 8 buffered inputs and 8 buffered outputs for a workshop controller. The computer would be a Raspberry Pi Model A+ and would have to stand alone. The entire project is at.
This is a fun project. The wiring is difficult enough to be good training in soldering in small places. The wire paths can end up being your own personal design. After all a good electronics project is a work of art is its own genre.
ULN2803, socket
The project starts with the sockets for the SN74HC244N buffer ICs and the 3.3V regulator as suggested by The Cambridge Boffin Guide. The power supplied to this board is from the Raspberry Pi 5V supply. As the Boffin paper states, "It is better to have a 3.3 V voltage regulator on this board rather than use the 3.3 V from the raspberry PI GPIO pins. In case there was a short, it is much easier to replace this 22 pence regulator than the surface-mount device (SMD) regulator on the Pi."
The ULN2803A is only an 18 pin chip where the SN74HC244 is a 20 pin chip, however it is far cheaper to buy sockets in bulk so I am using a 20 pin socket for the ULN2803. As you can see in this image. Something will have to occupy the last set of pins so it isn’t confusing.
Pin-out logic
The Pin-Out logic of the 74HC244 makes it ideal for short wire jumpers with 4 inputs and 4 outputs wired to each chip from the Raspberry Pi connector’s side of the chip and the same on the opposite side.
Pin assignment
The Raspberry Pi A+ and B+ have a long list of GPIO however to use 8 inputs and 8 outputs means using 5 pins that are commonly used for other I/O. The ID_SD and ID_SC are forbidden and I2C should be available as well as the SPI. So the project uses 24, 26,35,38 and 40 (physical socket pins). These are used as outputs.
The project GPIO to socket wiring layout is shown in the following chart (image).
Power and ground
Most of the power and ground are wired on the bottom of the board. Note that more than one of the GPIO ground pins are jumpered to common ground. Losing a ground connection can be fatal to some devices and it is better to have redundancy in ground/common connection. The 5V is not yet wired to the ULN2803.
Pin numbering
With the pins of the buffer chips wired, here is a project chart as a reminder which pin goes to which GPIO.
Top wiring
On the top now are the wires from the eight outputs (4 per chip) connecting the output buffer chip.
On the far side of that chip are a set LEDs connected through an SIP resistor array to 5v. Also on the top is a screw terminal block for the eight inputs.
Bottom wiring
On the bottom now are the wires from the eight inputs (4 per chip) connected to the screw terminals. Also the first of the 9 screw terminals is connected to ground.
You will note the addition of the GPIO connector.
Finished
Here is the finished project mounted on its Raspberry Pi A+. The outputs will be connected in a later project to a Pi Hat relay board through an eight pin connector but for now this has been a very fun project and if you have made it this far, thank you for looking while I play with my toys. Below is a snippet of Python test code:
import RPi.GPIO as GPIO
from time import sleep

GPIO.setmode(GPIO.BCM)

myInputs = [17, 22, 23, 24, 5, 6, 12, 13]
myOutputs = [25, 8, 7, 16, 20, 19, 26, 21]

for pin in myInputs:
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

for pin in myOutputs:
    GPIO.setup(pin, GPIO.OUT)

while True:
    for pin in myOutputs:
        GPIO.output(pin, True)
        sleep(.25)
        GPIO.output(pin, False)
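The snippet only exercises the outputs. A similar loop for polling the eight buffered inputs might look like this (a sketch using the same pin list; debouncing is omitted):

import RPi.GPIO as GPIO
from time import sleep

GPIO.setmode(GPIO.BCM)
myInputs = [17, 22, 23, 24, 5, 6, 12, 13]
for pin in myInputs:
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

while True:
    # Print the current state (0 or 1) of each buffered input
    states = {pin: GPIO.input(pin) for pin in myInputs}
    print(states)
    sleep(0.5)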
I'm relatively new to Linq and Dapper and I'm trying to find the most efficient way to make an insert happen. I have a list object that looks like this:
public class Programs
{
    public int Id { get; set; }
    public int Desc { get; set; }
}
The list object is populated from a string field on the page which typically contains a string of Id's (e.g. 234, 342, 345, 398). Now in the database 234, 342, 345 already exist so the only one that I really need to insert is 398 along with the record Id. I already have a method that exists that goes and gets the currently existing program Id's in the database. Do I go get the program Id's and then compare the two lists before I execute the insert statement? Can I use Linq to do the comparison? Is there a better way?
The method that gets the program Id's looks like this:
public static List<Programs> GetPrograms(int id)
{
    var sql = new StringBuilder();
    sql.Append("select Id, Desc from dbo.Programs where Id = @id");
    return con.Query<Programs>(sql.ToString(), new { Id = id }, commandType: CommandType.Text).ToList();
}
After doing some looking at all the options, it seems like some options for my situation are to do one of the following:
Since my objective was to accomplish this task using Linq and Dapper, I have opted for the first option. Here is the linq statement I made to get only the values I needed:
save.ProgramList = hiddenFieldProgramIds.Value.Split(',')
    .Select(n => new Programs() { Id = int.Parse(n) })
    .Where(n => !program.ProgramList.Select(d => d.Id).Contains(n.Id))
    .ToList();
Then the dapper call is just a straight forward insert statement using a var based on the previous advice.
var sql = @"insert into dbo.programTable(Id) select @id";
con.Execute(sql, new { id = id }, commandType: CommandType.Text, commandTimeout: 5000);
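As a side note, Dapper will happily execute that statement once per element if you pass the filtered list directly; a sketch (the parameter name must match the Id property):

var sql = @"insert into dbo.programTable(Id) select @Id";
con.Execute(sql, save.ProgramList, commandType: CommandType.Text);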
Alternatives to singletons for data manager?
zalzane replied to methinks's topic in General and Gameplay Programming
I know you've put a lot of thought into your way of avoiding globals, but your editor example is a good example for passing references. At first many may think, I have this model editor and it can only do one model at a time. I've made a model editor and made the same assumption. Sure if that's true, just use a global or singleton rather than a more complex organization just to avoid using globals. But really, will an editor only store ONE model? Maybe this editor allows you to load a character model, but it can also load attachments like weapons. The editor can manipulate the character and the attachments. It seemed like only one model was needed, but after adding more functionality, now you need more than one. So when you open a dialog or tool in the editor, the dialog or tool will need to be passed a reference of what it will manipulate. I'll admit my example really wasn't that great, but inheritance based global access isn't around to replace something like a class full of references. An infinitely better example would be an static-inheritance class giving access to something like the graphics device. On one of my older projects, the "sprite" class is typically accessed very far down the call stack. From the entrypoint of the program, you have to go through a few layers of gamestate logic, then UI logic, then all the way at the bottom is the sprite class. The sprite class needs access the graphics device, or whatever class is handling the graphics device. Instead of passing down a reference to the graphics device down a dozen layers of call stack, the Sprite class would inherit from a static class that contains the graphics device. This way, access to the graphics device has to be explicitly defined through the use of inheritance, and any usage of the graphics device can be found by just looking at all the child classes that inherit from it. In addition, it's one less piece of trash floating around the global namespace. I'll admit it's hacky as hell, but I'd rather have an ugly hack than restructure 10k lines of UI logic.
Alternatives to singletons for data manager?
zalzane replied to methinks's topic in General and Gameplay Programming
One of the clever, but more controversial alternatives I've found for sharing data between classes is to use inheritance/static fields. Let's say you're programming a 3D editor of some sort, and a list of vertexes in your model has to be shared between many classes. The options in this thread include simply passing the list as a parameter, setting up a singleton, or setting up a container class that gets passed at runtime. With my inheritance method, you set up an abstract class that contains protected static fields for shared information - such as the list of vertexes that has to be shared. Any class that needs access to those vertexes would inherit the abstract class. This can be extended into a tree of inheritance to allow different child classes to have different read/write permissions on the abstract class' fields, but I would restrain myself from doing this too much at risk of ending up with 3-4 depth inheritance trees. The most significant issue with this model is that without the proper visualization tools, maintaining code that runs under this system can be very complicated, especially if you have poor docs. In addition, if someone else was to read your code without knowing what you're doing - it's likely they would get very confused very fast. If you use this system extensively, I suggest writing an addon/macro for your development environment that lets you inspect what kind of inherited fields each class has without actually opening up the abstract class.
Terrain generation in 3D Tile Based Game
zalzane replied to Mr.Big's topic in Graphics and GPU Programming
Triangle strips are a major pain in the ass, especially when you're programmatically generating geometry. Don't use them. Depending on how you generate the terrain, it may be faster to just generate it each time the game loads rather than loading it from a file. I have a demo that uses opencl to generate terrain, and it can generate terrain an order of magnitude faster than it would take to load said terrain from disk. For figuring out which triangle the mouse is mousing over, check this out. You can basically copy-paste their code for it.
A* pathfinding on a sphere projected cube (uneven planetary body)
zalzane replied to CaiusEugene's topic in General and Gameplay Programming
You could construct a linked list to represent the planet's mesh:

enum Direction {
    Above, Below, Left, Right
};

struct Vertex {
    float phi;
    float theta;
    float r;
    float cost;
    Vertex *neighbors[4]; // one neighbor per Direction
};

Using this method, the only real effort required is in the initial assembly of the linked list. For pathfinding you would just iterate through the linked list like you would for a normal array.
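A sketch of how a search would expand a vertex with that layout (field names as defined above; the A* open-set bookkeeping is omitted):

/* Visit the four neighbours of the current vertex */
void expand(Vertex *current)
{
    for (int d = 0; d < 4; d++) {
        Vertex *next = current->neighbors[d];
        if (next != NULL) {
            /* score next->cost here and push next onto the open set */
        }
    }
}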
Your Worst "Gotchas" ever in programming
zalzane replied to Lil_Lloyd's topic in GDNet Lounge
pretty much anything i've done involving c++ has been riddled with gotchas, I hope I never have to use that language again
Whether or not microsoft condones C#/XNA is irrelevant in the long run thanks to mono/monogame.
how to unlearn a language
zalzane replied to game of thought's topic in For Beginners
Syntax Error with Q_ASSERT
Hi everyone! Using the following...
Q_ASSERT(std::is_same<int, int>::value);
... I get
syntax error: ')', but with extra parentheses ...
Q_ASSERT((std::is_same<int, int>::value));
... it compiles and works. Looks like a bug in the Q_ASSERT macro to me, or am I missing something?
Platform is Qt 5.7.0, Visual C++ Compiler 14
- mrjj Qt Champions 2017
Hi
It's like it sees it as 2 params. ?!?
It's not Q_ASSERT's fault (it seems), as any macro suffers from this.
#define MACRO(cond) (true)
MainWindow::~MainWindow()
{
MACRO( std::is_same<int, int>::value );
gives me
error: macro "MACRO" passed 2 arguments, but takes just 1
MACRO( std::is_same<int, int>::value );
^
(5.7, mingw)
So it seems its just macro and templates and a stupid preprocessor
@Wieland
Yeah, i know templates are pure compiler stage
but one should think preprocess at least accepts valid
syntaxes :)
Luckily the () fix is not super ugly.
@mrjj said in Syntax Error with Q_ASSERT:
one should think preprocess at least accepts valid
syntaxes
Why should anyone think that?
@Wieland said in Syntax Error with Q_ASSERT:
That's really pretty stupid.
The preprocessor is basically a copy-paste-made-easy, it's a very, very simple program. It does string replacements only, it cares not for any syntax or any language for that matter. You can run the preprocessor independently of the compiler (whether it's C, C++, Java, FORTRAN or w/e), and actually some fortran code (used with gfortran) makes use of the gcc's preprocessor.
@kshegunov The preprocessor is part of the C++ language specification and I would expect that, besides all the other smart things it also can do, it is able to handle such situations in a sane way. Anyways, I got used to C++ coming up with nasty surprises.
Edit: Next time maybe better Write in Go ;-)
@Wieland said in Syntax Error with Q_ASSERT:
The preprocessor is part of the C++ language specification
It is? I've never known that.
Anyways, I got used to C++ coming up with nasty surprises.
Eh, yeah. More syntax means more pitfalls. But tell that to the standards committee ... as you said, just write in Go! ;)
The preprocessor only understands ( and , in this case (as it's a preprocessor, i.e. no symbols, namespaces or templates exist at this point), so this is basically parsed as Q_ASSERT(STUFF, OTHER_STUFF); When you put the extra parentheses it becomes Q_ASSERT(STUFF_IN_PARENTHESES).
Here's a cute gotcha for the extra parentheses trick:
Because the general rules for type deduction in c++ were pretty boring, c++11 brought such wonders as decltype to make it more fun:

int foo;
bool nudge_nudge_wink_wink = std::is_same<decltype(foo), decltype((foo))>::value; // gives "false"... obviously :P
This marvel is sponsored by the fact that foo is an lvalue and (foo) is an expression ;)
So if in your macro you happen to try to deduce the type of the expression passed to it you might have a joyful debugging session.
This is one such super-simplified pattern commonly found in macro based property systems:
#define FOO(bar, bazz) decltype(bar) hello = bazz;

int foo;
FOO(foo, 42);   // works fine
FOO((foo), 42); // error: invalid initialization of non-const reference of type 'int&' from an rvalue of type 'int'
@kshegunov
C++ standard includes C standard by reference. One of the talks at this year's CppCon mentioned that there was a cleanup effort in C++17 made to remove some of the more obscure or irrelevant C headers.
@Chris-Kawa Thank you for this! Every day a new surprise. Or two.
@Chris-Kawa
Yes, I suppose so, I just never thought about it. I don't often think about what the standard does or doesn't include, but I always thought the preprocessor is just a "common non-standardized extension", the language doesn't require it to function. And seeing that code, which I'm happy to say I understand not one iota of, I must reiterate my despise for C++11. :)
I must reiterate my despise for C++11. :)
There there... <pat on the back> :)
- kshegunov Qt Champions 2017
C++11 always evokes this feeling in me:
and by the way I'm an atheist ... :]
I just came up with something that looks like a solution for this to me. What do you guys think?
#if !defined(MY_ASSERT)
#  ifndef QT_NO_DEBUG
#    define MY_ASSERT_FIRST_ARGUMENT(A, ...) A
#    define MY_ASSERT(...) ((!MY_ASSERT_FIRST_ARGUMENT(__VA_ARGS__)) ? qt_assert(#__VA_ARGS__,__FILE__,__LINE__) : qt_noop())
#  else
#    define MY_ASSERT(...) qt_noop()
#  endif
#endif
@Wieland
where is the docs? ;)
Looks cool. It's cryptic enough that it might actually work :)
- koahnig Moderators
@mrjj said in Syntax Error with Q_ASSERT:
Looks cool. its cryptic enough that it might actually work :)
:D :D
It's almost the same as the current implementation of Q_ASSERT, just with a variadic macro. So it's C++11 only.
- Chris Kawa Moderators
@mrjj good one :)
@Wieland Could you explain how it is supposed to work? From what I can decrypt it just checks the first argument passed, so for your original example MY_ASSERT(std::is_same<int, int>::value); it would expand to something like ((!std::is_same<int) ? ... which doesn't make much sense? Or am I missing something?
Btw. I get compiler errors for this:
with gcc:
in definition of macro MY_ASSERT_FIRST_ARGUMENT wrong number of template arguments (1, should be 2)
with clang:
error: expected > MY_ASSERT(std::is_same<int, int>::value);
it compiles in VS2015 U3 although their macro expansion is broken to bits so I wouldn't trust it does what it should.
@Chris-Kawa Damn, I only tested it with MSVC and it works there :-(
@Chris-Kawa My idea (hope, really) was that the preprocessor would be smarter when I confront it with a variadic macro.
@Wieland I don't think it works. It just compiles ;)
Well you could use something simpler:
#define MY_ASSERT(...) (!(__VA_ARGS__) ? qt_assert(#__VA_ARGS__,__FILE__,__LINE__) : qt_noop())
and that should be ok, but it has the same drawback I described earlier - it changes the expression it tests by adding extra () around it. Admittedly it's not a big deal and it should work as expected most of the time.
@Chris-Kawa Yes it works; funny that it doesn't work for you. Who knows why..
@Wieland Maybe I messed up something.
@Chris-Kawa Ha! Just found out that #define FIRST(A, ...) A doesn't work as expected (with MSVC): It doesn't give us the first argument only, instead it just gives all arguments (like __VA_ARGS__). Maybe I'm wrong here again, but that looks like a bug in MSVC to me and it also explains why the code works.
Edit: Maybe it's really a bug in VC's preprocessor, at least someone on SO says so. And the workaround he presents actually seems to fix it, so now my code doesn't compile with MSVC anymore.
Thanks everyone for watching me stumbling around like a clown :)
@Wieland
Well to be fair, fooling around with Variadic Macros takes more balls than entertaining
clueless children - so it was educational to see that even in 2016, you cannot trust the preprocessor
to work the same across compilers. :) | https://forum.qt.io/topic/71985/syntax-error-with-q_assert | CC-MAIN-2018-34 | refinedweb | 1,185 | 71.24 |
The QGsm0710MultiplexerServer class provides a server-side multiplexing implementation based on 3GPP TS 07.10/27.010 More...
#include <QGsm0710MultiplexerServer>
Inherits QGsm0710Multiplexer.
The QGsm0710MultiplexerServer class provides a server-side multiplexing implementation based on 3GPP TS 07.10/27.010
This class is used by incoming AT connection managers such as the Modem Emulator to emulate 3GPP TS 07.10/27.010 multiplexing for the remote device that is connecting.
When the remote device opens a channel, the opened() signal is emitted, providing a QSerialIODevice that can be used by the AT connection manager to communicate with the remote device on that channel.
When the remote device closes a channel, the closed() signal is emitted to allow the AT connection manager to clean up the QSerialIODevice for the channel and any other associated channel-specific context information.
When the remote device terminates the 3GPP TS 07.10/27.010 session, closed() signals for all open channels will be emitted, and then the terminated() signal will be emitted.
See also QGsm0710Multiplexer and Modem Emulator.
Construct a new GSM 07.10 multiplexer in server mode around device and attach it to parent. The size of frames is frameSize. If advanced is true, then use the Advanced multiplexing option; otherwise use the Basic multiplexing option.
Unlike the base class, QGsm0710Multiplexer, the ownership of device will not pass to this object and it will not be deleted when this object is deleted. It will still exist after destruction. This is because the device will typically return to normal AT command mode after the multiplexer exits.
Destruct this GSM 07.10 server instance.
Signal that is emitted when the client closes a GSM 07.10 channel. After this signal is emitted, device will no longer be valid.
See also opened().
Signal that is emitted when the client opens a new GSM 07.10 channel. The device object allows access to the raw data and modem control signals on the channel.
See also closed().
Signal that is emitted when the client terminates the GSM 07.10 session and wishes to return to normal command mode.
This signal will be preceded by closed() signals for all channels that were still open when the terminate command was received.
A slot that is connected to this signal should use QObject::deleteLater() to clean up this object. | https://doc.qt.io/archives/qtextended4.4/qgsm0710multiplexerserver.html | CC-MAIN-2019-18 | refinedweb | 386 | 58.69 |
Last week, Microsoft released more than six new or upgraded products! All these products enhance developing for the Web, from professional developers with ASP.NET MVC3 and NuGet to novice users with WebMatrix. And the best news: they are all free.
Learn more: continue directly to the announcement blog post by Scott Guthrie: Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix
Note: because it’s created by these guys in their spare time, do note this is not an officially supported update.
Talking to some colleagues the other day, I showed them a cool sample application that is available to show off the Silverlight VE Map control, which has been released in CTP mode at MIX09. The application uses Silverlight Deep Zoom technology to zoom into the location from a Twitter user and show it on the map. Check it out: it's created by Earthware and hosted on Windows Azure.
This gave us the idea of offering our office visitors a cool way to locate the Microsoft Belgium office. That said, I put up a sample in a few minutes. I’m planning to add some functionality like adding route calculation, for which I’ll be using the Virtual Earth Web Services. In this post though, I’m using only the map control. It literally takes less than 30 minutes from downloading the CTP to getting the map on the Silverlight app.
Silverlight: You will need either the Silverlight 2 or Silverlight 3 beta runtime, SDK and tools for Visual Studio. The Silverlight prerequisites can be downloaded from the link in the original post.
Virtual Earth Silverlight Map Control CTP: In order to get access to the VE Silverlight CTP you need to go to the Microsoft Connect site, log in or sign up, fill in the survey, and then you will get access to the downloads section.
First we need to create a new Silverlight project in Visual Studio. In that project, make sure to add a reference to the Microsoft.VirtualEarth.MapControl.dll (found in the install directory you chose for the VE Silverlight map control).
To add the map control first add a namespace reference and then call the Map control in XAML.
<UserControl
    xmlns=""
    xmlns:x=""
    xmlns:ve="">
    <ve:Map x:Name="veMap" />
</UserControl>
Adding this will show the Virtual Earth map zoomed out to the maximum.
To zoom in to a location I got the longitude and latitude and pass that on to the map. At the same time I’m switching from Aerial to Road mode. The user can still use the default map controls to go back to Aerial mode. In this case I’m zooming in to the location of the MS Belgium office.
private void NavigateHome(int zoom)
{
Location homeLongLat = new Location(50.890995015145464, 4.45862411989244);
veMap.Mode = new RoadMode();
veMap.View = new MapViewSpecification(homeLongLat, zoom);
}
This will zoom in into the Belgium office location. | http://blogs.msdn.com/b/katriend/default.aspx?PostSortBy=MostViewed&PageIndex=1 | CC-MAIN-2014-23 | refinedweb | 489 | 63.09 |
Hello,
I keep running into the following problem:
I am using the import feature of ES2015. If I have 2 files in different directories that have the same name, like this
/src/form.js
/src/events/form.js
and from the events folder I import the form.js file like this
import TypeEvent from './form';
I get the error
Default export is not declared in imported module
The problem is not that I do not have a default export—it is there, my code works and there is no problem with it—the problem is that I have 2 files that have the same name and the other one does not have a default export. But I am not trying to import the other one. WebStorm seems to be confused which file I am trying to import. When I cmd+click on the imported filename, WebStorm opens a context menu, asking me which file I want to navigate to.
Is there a way to get rid of the error? Like writing the import differently or setting something up in WebStorm?
Thank you,
Lukas
works fine for me using similar setup - no errors are reported, navigation works as expected;
Please try the attached project - does import in src/events/main.js work for you? If yes, please try modifying it to match you setup and recreate the issue
Attachment(s):
untitled2.zip | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206315609-JavaScript-ES2015-Import-does-not-find-file | CC-MAIN-2020-05 | refinedweb | 231 | 71.44 |
Let me give you a tool so powerful it will change the way you start analyzing your datasets – pandas profiling. No more need to find ways to describe your dataset using mean(), max() and min() functions.
Table of Contents
What is Pandas profiling?
In Python, the pandas profiling library contains a class called ProfileReport(), which produces a report from a simple DataFrame input.
The generated report is composed of the following information:
- Overview of DataFrame,
- Attributes that are specified by DataFrame,
- Attribute associations (Pearson Correlation and Spearman Correlation), and
- A DataFrame study.
Basic Syntax of pandas_profiling library
import pandas as pd import pandas_profiling df = pd.read_csv(#file location) pandas_profiling.ProfileReport(df, **kwargs)
Working With Pandas Profiling
To begin working with the pandas_profiling module, let’s get a dataset:
!wget ""
The data used was derived from GIS and satellite information, as well as from information gathered from the natural inventories that were prepared for the environmental impact assessment (EIA) reports for two planned road projects (Road A and Road B) in Poland.
These reports were mostly used to gather information on the size of the amphibian population in each of the 189 occurrence sites.
Using the Pandas Profiling module
Let’s use pandas to read the csv file we just downloaded:
data = pd.read_csv("dataset.csv",delimiter = ";")
We need to import the package ProfileReport:
from pandas_profiling import ProfileReport ProfileReport(data)
The function generates profile reports from a pandas DataFrame. The pandas df.describe() function is great but a little basic for serious exploratory data analysis.
The pandas_profiling module extends the pandas DataFrame with df.profile_report() for quick data analysis.
For each column the following statistics – if relevant for the column type – are presented in an interactive HTML report:
- Type inference: detect the types of columns in a data frame.
- Text analysis learns about categories (Uppercase, Space), scripts (Latin, Cyrillic), and blocks (ASCII) of text data.
- File and Image analysis extract file sizes, creation dates, and dimensions and scan for truncated images or those containing EXIF information.
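If you want to keep the result, the report can also be rendered to a standalone HTML file (a minimal sketch; the title argument is optional):

from pandas_profiling import ProfileReport

profile = ProfileReport(data, title="Amphibians dataset report")
profile.to_file("report.html")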
1. Describe a Dataset
This is the same as the data.describe() command:
It also gives us the types of variables and detailed information about them, including descriptive statistics that summarize the central tendency, dispersion, and shape of a dataset’s distribution (excluding NaN values).
Analyzes both numeric and object series, as well as DataFrame column sets of mixed data types. The output will vary depending on what is provided.
2. Correlation matrix
We also have the correlation matrix:
It is similar to using the np.corrcoef(X,Y) or data.corr() functions. Pandas' dataframe.corr() is used to find the pairwise correlation of all columns in the dataframe. Any na values are automatically excluded, and columns of non-numeric data types are ignored.
3. View of the dataset
And finally we have a part of the dataset itself:
Conclusion
As you can see, it saves us a lot of time and effort. If you liked this article, follow me as an author. Also, bookmark the page because we post a lot of great content. | https://www.journaldev.com/45451/pandas-profiling-in-python | CC-MAIN-2021-17 | refinedweb | 515 | 55.34 |
Handling the mouse¶
The package
pynput.mouse contains classes for controlling and monitoring
the mouse.
Controlling the mouse¶
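Use pynput.mouse.Controller like this (a short sketch based on the reference below; Button lives in the same module):

from pynput.mouse import Button, Controller

mouse = Controller()

# Read and set the pointer position
print('The current pointer position is {0}'.format(mouse.position))
mouse.position = (10, 20)

# Press and release, or click with a count
mouse.press(Button.left)
mouse.release(Button.left)
mouse.click(Button.left, 2)

# Scroll two steps down
mouse.scroll(0, 2)

Monitoring the mouse¶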
Use pynput.mouse.Listener like this:

from pynput.mouse import Listener

def on_move(x, y):
    print('Pointer moved to {0}'.format((x, y)))

def on_click(x, y, button, pressed):
    print('{0} at {1}'.format('Pressed' if pressed else 'Released', (x, y)))
    if not pressed:
        # Stop listener
        return False

def on_scroll(x, y, dx, dy):
    print('Scrolled {0}'.format((x, y)))

# Collect events until released
with Listener(on_move=on_move, on_click=on_click, on_scroll=on_scroll) as listener:
    listener.join()
A mouse listener is a threading.Thread, and all callbacks will be invoked from the thread.
Call pynput.mouse.Listener.stop from anywhere, raise pynput.mouse.Listener.StopException, or return False from a callback to stop the listener.
On Windows, virtual events sent by other processes may not be received. This library takes precautions, however, to dispatch any virtual events generated to all currently running listeners of the current process.
Reference¶
- class
pynput.mouse.
Controller[source]¶
A controller for sending virtual mouse events to the system.
click(button, count=1)[source]¶
Emits a button click event at the current position.
The default implementation sends a series of press and release events.
position¶
The current position of the mouse pointer.
This is the tuple (x, y), and setting it will move the pointer.
- class
pynput.mouse.
Listener(on_move=None, on_click=None, on_scroll=None)[source]¶
A listener for mouse events.
Instances of this class can be used as context managers. This is equivalent to the following code:
listener.start()
try:
    with_statements()
finally:
    listener.stop()
This class inherits from threading.Thread and supports all its methods. It will set daemon to True when created.
stop()¶
Stops listening for mouse events.
When this method returns, no more events will be delivered. | https://pythonhosted.org/pynput/mouse.html | CC-MAIN-2022-05 | refinedweb | 253 | 54.59 |
Introduction:
In this article I will explain how to implement JQuery UI autocomplete textbox with multiple words or values with comma separated or semi colon separated in asp.net.
Description:
To implement this concept, first we need to design a table in the database to save user details. To implement the autocomplete textbox with multiple values selection in asp.net we also need the jQuery UI script and css files; to get those files, download the attached folder and add the urls mentioned in the header section to your application.
Another thing we need to know here is the script functions. If you want to know more about them, check these posts: call page methods with JSON, and autocomplete textbox with jQuery.
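For reference, the multiple-values behavior comes from the split/extractLast helpers popularized by the jQuery UI demos; a sketch of the pattern used in this post (the textbox id and page method url are taken from this article, the rest is a reconstruction):

function split(val) {
    return val.split(/,\s*/);
}
function extractLast(term) {
    return split(term).pop();
}
$("#txtSearch")
    // don't navigate away from the field on tab when an item is selected
    .bind("keydown", function (event) {
        if (event.keyCode === $.ui.keyCode.TAB && $(this).data("autocomplete").menu.active) {
            event.preventDefault();
        }
    })
    .autocomplete({
        source: function (request, response) {
            $.ajax({
                type: "POST",
                contentType: "application/json; charset=utf-8",
                url: "Default.aspx/GetAutoCompleteData",
                data: "{'username':'" + extractLast(request.term) + "'}",
                dataType: "json",
                success: function (data) { response(data.d); }
            });
        },
        focus: function () {
            return false; // prevent value being inserted on focus
        },
        select: function (event, ui) {
            var terms = split(this.value);
            terms.pop();               // remove the current input
            terms.push(ui.item.value); // add the selected item
            terms.push("");            // placeholder so a comma and space end the value
            this.value = terms.join(", ");
            return false;
        }
    });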
Now open the code behind file, add the following namespaces, and then write the code to handle the autocomplete request. (The original post shows the C#.NET and VB.NET code as images.)
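The server-side page method follows the same shape that several commenters quote below; a reconstruction (the connection string appears later in the comments, the table and column names are assumptions):

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Web.Services;

[WebMethod]
public static List<string> GetAutoCompleteData(string username)
{
    List<string> result = new List<string>();
    using (SqlConnection con = new SqlConnection("Data Source=SureshDasari;Integrated Security=true;Initial Catalog=MySampleDB"))
    {
        using (SqlCommand cmd = new SqlCommand("select UserName from UserDetails where UserName like @SearchText+'%'", con))
        {
            con.Open();
            cmd.Parameters.AddWithValue("@SearchText", username);
            SqlDataReader dr = cmd.ExecuteReader();
            while (dr.Read())
            {
                result.Add(dr["UserName"].ToString());
            }
        }
    }
    return result;
}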
Now run your application and check the output, which will look like this:
Download sample code attached
54 comments :
Dear Friend,
I have created a web service and hosted on the server like
and i called it on another project but i am not able to access this web sevrice . what could be the reason ?
MessageBoardService.asmx.cs
//Code
using System;
using System.Collections;
using System.Configuration;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Services;
using System.Web.Services.Protocols;
using System.Xml.Linq;
using System.Data.SqlClient;
using System.Web.Services;
using System.Web.Script.Services;
/// Summary description for MessageBoardService
public class MessageBoardService : System.Web.Services.WebService
{
public MessageBoardService()
{
//Uncomment the following line if using designed components
//InitializeComponent();
}
[WebMethod]
public string HelloWorld()
{
return "Hello World";
}
[WebMethod]
[ScriptMethod(ResponseFormat = ResponseFormat.Json)]
public List<string> GetAutoCompleteData(string username)
{
List<string> result = new List<string>();
if (!string.IsNullOrEmpty(username))
{
using (SqlConnection con = new SqlConnection("Data Source=MySerevr;Max Pool Size=600; Initial Catalog=MyMBDB;Persist Security Info=True;User ID=sa;Password=saa"))
{
using (SqlCommand cmd = new SqlCommand("select MemberFullName+'<'+UserID+'>' As username from View_MemberInformation where MemberFullName LIKE'%'+@SearchText+'%'", con))
{
con.Open();
cmd.Parameters.AddWithValue("@SearchText", username);
SqlDataReader dr = cmd.ExecuteReader();
while (dr.Read())
{
result.Add(dr["UserName"].ToString());
}
//return result;
}
}
}
return result;
}
}
and in aspx page // This is another projects.
$(document).ready(function () {
SearchText();
});
function SearchText() {
$(".autosuggest").autocomplete({
source: function (request, response) {
$.ajax({
type: "POST",
contentType: "application/json; charset=utf-8",
url: "",
data: "{'username':'" + document.getElementById('txtSearch').value + "'}",
dataType: "json",
success: function (data) {
response(data.d);
},
error: function (result) {
//alert("Error");
alert(result.status)
}
});
}
});
}
But its working fine where webservice is hosted.
Thanks
Alok Kumar sharma
[email protected]
India
Hi Suresh,
Thanks for sharing this article.
There is an error in sample code(default.aspx).
The below url is not working for sample code.
url: "Default.aspx/GetAutoCompleteData",
The url should be
url: "AutoCompletewithMultipleSelection.aspx/GetAutoCompleteData",
@Milind,
There is no error in code Default.aspx is the page name of my application if you use another pagename then you need to change url: "PageName/MethodName"
There is an error in downloaded code.
The page name of application is AutoCompletewithMultipleSelection.aspx.
code :
url: "Default.aspx/GetAutoCompleteData",
@Milind,
that is not problem or error in post. Please give your application pagename that's enough
there is no error on the article.
there is problem on downloadable sample code.
Thanks Milind...
i updated downloadable sample also...
No problem. :-)
nice but i mnot able to use it on my master page :(
any solution?
its working perfectly on simple / single page
but on the page where master page is included i m not able to use it.
plz give me solution for tat with full code thanks for great article
it will work on master pages also i hope you guys are not adding the jquery refrences in master that's the reason that auto complete not working in master. Please check it once...
If we want autocomplete search for two textboxes in the same aspx page, which will fetch data from two different tables. Then what will be the code for that?
Hi Suresh
Thanks, this is great and works perfectly and easily. However have you explored formating the input so that it is formatted as a label e.g as the example on this page textextjs dot com/?
Re: 13
it does not work on a child page of a master page. all the references are correct and also the url to the web service, case of json etc... i am using eact same code on a page that is not a child page of master page and it works... any ideas? thanks.
Re: 13 and 16
It does work on child page of master page however reference the client id of the textbox i.e "#<%=txtSearch.ClientID%>"
Awesome
Hi Suresh,
Nice work! I need when you select "Mahendra" so it would not be next time in List of search.
Can you help me into this.
Thanks in advance.
Great thanks again.
- Ankur Dalal
Suresh, I need some help with jQuery and JavaScript... something related to Autocomplete and the JS keydown event... the requirement is quite unusual; I can't even come up with an idea for the logic. My native place is Tenali too... please help me, tell me your contact number... my contact is 8861319319
I using jquery tokeninput, i want you give me example ASP.NET server-side script for generating
responses suitable for use with jquery-tokeninput
Thanks!!!
Hi Suresh. Thank you for Sharing this . i have a query in my ASpx page am having two Auto Complete textboxes . how can i fire only one event at a time.
It is so great article and easily understandable...I am new to Jquery.So I need one more thing suresh..How to get all the UserId of the all selected Username??I saw one of your article to select one item and its id.In this case how can I get multiple ids.Can you help me into this.
Thanks in advance.
Thanks again.
Hi, i checked this. is it working in user Control ?
can u tell me please
Suresh ,
after deploying in IIS
SqlConnection("Data Source=SureshDasari;Integrated Security=true;Initial Catalog=MySampleDB")
will works perfectly
but
using (SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString))
won't work , it will give an alert with "error".
Is there any thing like we have to hard code this connection string in code file .
please check and leave a reply
Thanks
However nice implementation
Works fine. Thanks
just one correction: this
if (event.keyCode === $.ui.keyCode.TAB && $(this).data("autocomplete").menu.active) { event.preventDefault(); }
with this.
if (event.keyCode === $.ui.keyCode.TAB && $(this).data("ui-autocomplete").menu.active) { event.preventDefault(); }
there is multiple spaces in between last charecter of name and comma (,), how to avoid it?
Nice Article suresh
But when adding masterpage its not working
innstead of this "#txtSearch" use client id
"#<%=txtSearch.ClientID%>". it will work for master pages also
hello,
i write this method in webservice service.asmx
and replaced code sevice.asmx/methodname
but that autosuggestion is not working
hello,
I would like to ask how to delete the item that is selected
Thanks a lot.
BR
Sir I want to remove comma (,) between two words in textbox ,How can i do that??
im not getting the CSS link in the internet.. may be you have given wrong css link..please give me correct css link.
Hell Sir,I am trying to use this useful stuff in my application,where I have to select Country & State from Dropdownlist,and after selecting the State I am trying to pick cities ,,Which it should come out In this Format,,
The Thing i need to mention is I need to pass another Parameter in that 'GetAutoCompleteData' List
Now You just tell me hpw to pass another Parameter and how to retreve it In Source Page .
Sir i am use your code. Can you help me in retrieving the all the values selected by the user. I only get first value.
"#<%=txtSearch.ClientID%>". I am trying to add that id in master page..but it is not woking while use master page....
Hi,
How can i give style to the username i have selected in textbox?? For example: i type su in search boz and suresh from the populated list. now i want to apply style to the "suresh" record how is this possible.
it didn't work for me , it gives error when i type single letter ,
i want do that search after 3 letter
please help
what is keydown
for the master page write this code in page_load. its work....
ScriptManager.RegisterStartupScript(
this, this.GetType(), "ctl00_ContentPlaceHolder1_txtkeyword", "SearchText1()", true);
Hi friend!
Thank you very much for the article. Worked for me too :)
working.... but its affecting other pages
the above line in webconfig file is disabling my session variables. please help.
you are using this in ajax
data: "{'username':'" + extractLast(request.term) + "'}",
but i am using
data: { term: request.term },
error occur plz guide me
in html textbox it is working fine....for asp textbox not working..in masterpage...
if we use runat = "server"...help me plz
@ Suresh,
Really great work.... Can u tell me where to change font size in css (as i'm not familiar with javascript)
Thanks in advance...
Its working fine for me. So lot of thanks
I am getting an error Indexoutofrange
Hi Suresh,
Nice work! I need when you select "Mahendra" so it would not be next time in List of search.
Can you help me into this.
Thanks in advance.
Great thanks again.
Vikram
Can you send me the code to restrict the suggestion data.
Suresh will you help me ?
how run this code on content page in asp.net vb while master page exist
hi suresh nice work. but it is giving error object expected at $(document).ready(function() {
SearchText();
Also getting class "demo", class "ui-widget" and class "autosuggest" gives error class not defined | https://www.aspdotnet-suresh.com/2012/07/jquery-autocomplete-textbox-with.html | CC-MAIN-2019-47 | refinedweb | 1,653 | 60.41 |
Every web application is in need of navigating between its pages. React doesn’t come with its own routing library. But React community has built many routing libraries, React Router being the most popular. In this article, we will build a simple React application using React router.
Here is what we will build:
The homepage links to two other pages, capitals and currencies, and each of these pages link to home and the other page.
This is in continuation to the earlier article—Rendering an array of data with React.js. We will use similar arrays of data to display capitals and currencies.
Let us start with creating a React app.
For this we will use the standard
create-react-app command. In your terminal issue this command:
npx create-react-app router. This will create a new react app in a folder
router. You can go into the folder and start the development server with
npm start. It will start the server and open the application in the default browser.
Let us install
react-router. React router has a library for web apps and mobile apps. We need the web-library
react-router-dom. Install it:
npm install --save react-router-dom
Since we have installed all the necessary modules, let us start coding our routing application.
Our navigation sequence goes like this:
- Display
Homecomponent whenever a user visits
/;
- Display
Capitalscomponent whenever a user visits
/capitals;
- Display
Currenciescomponent whenever a user visits
/currencies;
- Display
NotFoundcomponent if user enters a path other than the above three
Let us start with the
Home component. As you see in the screenshot, the home screen contains three links — Home, Capitals, and Currencies. In HTML, they will all be anchor tags with a href linking to the pages. If we use anchor tags, it will cause the whole page to refresh, which is not what we want. React Router has a
Link component for this purpose. As its name says, it links to pages within the application. Link component is the simplest component of the Router library.
Our
Home is a functional stateless component. It uses
Link component to link to other pages (which we are yet to code). Code for
Home component looks like this:
import React from "react"; import { Link } from "react-router-dom"; const Home = () => ( <div> <ul> <li> <Link to="/">Home</Link> </li> <li> <Link to="/capitals">Capitals</Link> </li> <li> <Link to="/currencies">Currencies</Link> </li> </ul> </div> ); export default Home;
Currency and Capital components are copied from the earlier article, Rendering an array of data with React.js, with just one change. There is a
Link component at the end of these components.
import React from "react"; import { Link } from "react-router-dom"; class Currencies extends React.Component { render() { let countries = [ { name: "India", currency: "Rupee" }, { name: "Belgium", currency: "Euro" }, { name: "America", currency: "Dollar" } ]; return ( <div> <ul> {countries.map(country => ( <li> Currency of {country.name} is {country.currency}; </li> ))} </ul> <p> <Link to="/">Home</Link> / <Link to="/capitals">Capitals</Link> </p> </div> ); } } export default Currencies;
We will also code a
NotFound.js, which will just display a not found message for non-existing url links. It is also a functional component.
import React from "react"; const notFound = () => ( <div> <h2>Not Found</h2> </div> ); export default notFound;
We have coded all the necessary components. We have to wire them together with
Route component. Here are the relevant properties of
Route component:
path: it holds the path to match (in our case
/,
/capitals,
/currencies);
component: it holds the component to render when the path matches;
exact: it tells if the path has to be exact.
Router has a
Switch component that goes through list of paths and render the first path that matches. We use
Switch, because if none of the paths matches we want to display the
NotFound component.
Since this is a web application, we will use
BrowserRotuer component. It makes uses of HTML5 history API, so that when you navigate the history is pushed into the browser history. You can click back and forward buttons of the browser to navigate.
Here is the
Routes.js code:
import React from "react"; import { BrowserRouter, Route, Switch } from "react-router-dom"; import Home from "./Home"; import NotFound from "./NotFound"; import Capitals from "./Capitals"; import Currencies from "./Currencies"; const Router = () => ( <BrowserRouter> <Switch> <Route exact path="/" component={Home} /> <Route path="/capitals" component={Capitals} /> <Route path="/currencies" component={Currencies} /> <Route component={NotFound} /> </Switch> </BrowserRouter> ); export default Router;
We can render these routes directly in
index.js rather than the default
App component. Modify
index.js like this:
import React from "react"; import ReactDOM from "react-dom"; import Routes from "./components/Routes"; ReactDOM.render(<Routes />, document.getElementById("root"));
If you have the developer server running, the app will automatically update itself. Otherwise, you can see the result by starting the development server with the command
npm start. | https://prudentdevs.club/react-router | CC-MAIN-2019-18 | refinedweb | 811 | 57.27 |
Children of God: Fearfully and Wonderfully Made
Volume 3, Number 3 // Summer 2011
!"#$%&"' 7
28 LEARNING THE DREAM GOD HAS OUR CHILDREN, OUR SOULS FOR US !"#$%&'(!"#$%&'( 8$(+!36+0#'"%%" !"#$%&'(%$)%*'+,)%#-*%&.+/0*)(%+(%$'1,%2.'2%3*#4#2)% MXVWLFHKHDOWKHZRUOGDQGUHÁHFWWKHUHLJQRI*RG"µ ´,DPDFXWHO\DZDUHWKDWWKHUHVSRQVLELOLW\IRURXU FKLOGUHQ·VHGXFDWLRQUHVWVVROHO\RQXVµ 8 FAMILY VALUES 30 OUR NEIGHBORHOOD CLASSROOM )$**$+,!-./'$"0 F&(95+!G'+07*$0 ´7KHWURXEOLQJVRFLDOHTXDWLRQLVWKDWWKHPRUHZH ´:HDUHOHWWLQJWKHLGHDRIDVFKRROEXLOGLQJIDOO LQVLVWRQDIRFXVRQRXUIDPLOLHVWKHOHVVZHKRQRU DZD\DQGYLHZLQJRXUQHLJKERUKRRGDVWKHFODVV6 ERQGVRIRXWVLGHRXUIDPLO\µ URRPµ 11 IDOLATRY: TEN CONFESSIONS 32 SEPARATE AND UNEQUAL, STILL 1+'2&%!3%+'4567 >""!>""!H$(9"' ´:D\V,KDYHLGROL]HGP\IDPLO\DQGQHJOHFWHGD ´7RUXQKHDGORQJLQWRWKHFODVVV\VWHPLQWKLV% ZRUOGLQQHHGµ FRXQWU\\RXQHHGWRUDLVHNLGVµ 12 ORDINARY PARENTING 35 PLEASE DON’T SAY THAT! 8+'+!1+'$"!8+9' >+?"!=+'?"' !"#$%&'(%'%3'*)(2%/+5)%*'0+&'//1%$.)(%2.)*)%+,%,#% ´'RQ·WSDWPHRQWKHEDFNEHFDXVHP\IULHQG·VOLIH PXFKRUGLQDU\VWXIIWRJHWGRQH"µ VWLQNVµ 15 DON’T WORRY, BE HAPPY 38 COLORBLINDNESS EQUALS SILENCE :;!<+,+69" I'$(%"0!C&@"'%&0 ´,WJLYHV*RGGHHSSOHDVXUHWRVHHXVJHWFDXJKWXS ´:HSUHWHQGQRWWRQRWLFHUDFLDOGLIIHUHQFHVDQG LQWKHPRPHQWDQGGDQFHDQGVLQJDQGODXJKµ +('05)*2)(2/1%,-(+&'2)%2#%#-*%&.+/0*)(%2.'2%*'&+'/% 16 JUSTICE, COMPASSION, & TOTAL GLIIHUHQFHVDUHDWDERRVXEMHFWµ INSANITY 40 LEARNING TO SWIM =+'"0!+0#!>+?"!3@+0(&0 /"%(B!1&'2+0 ´,W·VHDV\WRVHQGWKHPHVVDJHWKDWRXUXOWLPDWHGH6 VLUHIRURXUFKLOGUHQLVFRPSOLDQFHDQGFRQIRUPLW\µ ´,WLVERWKDIHDUIXODQGDMR\RXVWKLQJWRDOORZ\RXU6 VHOIWREHWDXJKWE\WKRVH\RXKDYHQXUWXUHGµ 18 LIVING INCARNATIONALLY 1D\KRX\*UHHQÀHOG POETRY !7)%'&+/+,.)0%+(%#-*%8'49#0+'(%,/-4%$.'2% SRVWHUFDPSDLJQVYLVLWLQJHGXFDWRUVDQGJRYHUQ6 37 ITZELA PHQWFDPSDLJQVKDGEHHQXQDEOHWRµ 1+'%$0!)$*"B 19 QUESTIONING OUR WAY TO GOD A(9*"B!C"$#",+00 DEPARTMENTS ´6FULSWXUHLVIXOORIKDXQWLQJTXHVWLRQVDQGZH·UH QRWVXSSRVHGWRVNLPSDVWWKHPµ 42 REVIEWS &RFRQVSLUDWRUVVKDUHWKHLUEHVWERRNVRQUDLVLQJNLGV 20 EXPERIMENTS IN SIMPLER LQWKLVFUD]\ZRUOG PARENTING 1+'7!D&@*"B 44 BREATHING TOGETHER: OUR WEIRD ´:KDWGR\RXZDQWIRU\RXUNLGV"µ LIFE IN COMMUNITY 9LFWRULD+RGJHVWDONVDERXWWKHJLIWDQGFKDOOHQJHRI 22 PART OF THE VILLAGE UDLVLQJNLGVLQFRPPXQLW\ =&E!='&((6&,4" ´,IDOO&KULVWLDQVZRXOGRSHQWKHLUKHDUWVHYHU\ 46 NOTES FROM SCATTERED PILGRIMS &.+/0%+(%#-*%&+21%&#-/0%.'5)%'%3/'&)%$.)*)%2.)1%'*)% 1HZVIURPRXUFRFRQVSLULQJFRPPXQLWLHV ORYHGXQFRQGLWLRQDOO\µ
26 SOUL WORDS: SPIRITUAL GLEANINGS FROM OUR CHILDREN
49 CONTRIBUTORS
")*+,-*!./-01!234-)+*!4B!3"+0!H+B0&'
I
Our Children, Our Souls
n a world of capricious deities to be appeased and humored, Judaism offered a revolutionary departure. Here was God imaged as a parent, who loved us, chose us, and yearned for ongoing relationship with us. With Jesus, the embodied child of God come to earth, this relationship became even deeper. We have all been children, and the psalmist sings that we are each “fearfully and wonderfully made.” But the world does not always affirm that mystery. In their vulnerable bodies, children expose the fact that we do not love well. They incarnate in their fragile selves our inequitable distribution of resources or carry our emotional wounds. Yet they are also among us as bearers of hope, reminding us that life is constantly offering places of wonder, and to giving us chances to love again. In their company, we often revisit wounded, forgotten places in our hearts and feel winds of healing. They take our hands as we all grow toward God. This issue of CONSP!RE looks at our human experience through the complex relationship of mentor, parent, child—intense relationships may be our closest experiences of unconditional love, the nature of God. It is not created for parents. We all have children in our lives. For many of us, family is our primary experience of community. How can we raise our children in ways that promote justice and healing of the world? How do our families reflect the reign of God rather than the hierarchical and unjust structures of this world? Our culture promotes selfishness, mindless entertainment, and materialism. How do we instill values and a sense of the sacred? How do we nurture the spiritual life of our children—and through them nurture our own? In a world that insists that resources are scarce, where the playing field is fractured with structured and intentional inequality, children force us to confront directly this question: “Who am I willing to sacrifice so that my own live well?” In this issue, we grapple with the love of our own and the love of this world, embodied in all its children. We puzzle over the endless questions of nurturing the younger generation. We end up in that fragrant valley of grace. For ultimately, these dilemmas have no answers, only journeys. May we number our days so as to present to God, our parent, a heart of wisdom. —the editors !
Family Values F
Will O’Brien
or years now, I have listened in utter befuddlement to teachings coming out of much of the U.S. church which insist that family values are the veritable lynchpin of the entire Christian faith. Over the past two decades, a massive industry of theological resources, commercial products, educational programs, and political think tanks has emerged—all dedicated to the proposition that nothing is more important for a follower of Jesus than Mommy, Daddy, and their two-and-a-half kids taking care of each other and loving each other. Not the Sermon on the Mount, not Matthew 25, not the way of the cross, nor the new economy of Acts 2. Our Christian vocation as parents seems to be to nurture our kids wholesomely, offering them Christian alternatives (music, books, camps, schools, and media) to insulate them from the manifold evils of the world. I harbor deep suspicions of the family values 506)37.08!4B!F5*$+!D"'"EJ!(7"%69!@$%9! agenda, both politically and theologically. For one #$2$%+*!,+0$K5*+%$&0 thing, the appeal to centuries of tradition is deceptive: the modern nuclear family is an historical anomaly, forged by particular social and economic forces of the twentieth century. The vast majority of human cultures through the centuries understood family as extended and multigenerational kinship systems. The African proverb, “It takes a village to raise a child” (popular among liberals and anathema to conservatives) is closer to the reality of most traditional human experience. By its appeal to tradition (however dubious), family values as preached by our churches is a fierce defender of the status quo. By definition, the notion draws a tight boundary around those for whom we are responsible. The troubling social equation at work is that the more we insist on a focus on our families, the less we honor bonds of social responsibility to others outside our family, including the poor and homeless, the refugee and the
"
immigrant. Not surprisingly, the same political constituency that upholds family values supports such policies as lower taxes, a strong military, decreased funding of public education, law and order, and smaller government — including drastic cuts in programs that provide support for vulnerable citizens. I suspect some version of this same social equation was also true in Second Temple Judaism, the period of Jesus’ life and ministry. Which is why Jesus launched a frontal assault on the rigid kinship system of his culture. His own take on family values is scandalous and subversive: “And looking about at those who sat around him, he said, ‘Here are my mother and my brothers! Whoever does the will of God is my brother and sister and mother’” (Mark 3:34–35). Jesus had other less–than–sanguine sayings regarding family: that we must leave behind our family for the sake of the Gospel (Mark 10:29–30); that we must hate our parents and children in order to be a disciple (Luke 14:26); that we cannot even bury our deceased father (Luke 9:59–60); that his Gospel will cause severe division within families (Luke 12:53). His teachings on welcoming little children and being like a child (Mark 10:13–16) actually upset the balance of power in family systems. This is not to suggest that Jesus completely rejects family. He stays in the home of Peter’s family, healing Peter’s mother–in–law (Mark 1:29–31). He upholds the marriage covenant (Matthew 19:3–9). He uses many family images to convey God’s love (Luke 11:11–13, 15:11–32), and of course, identifies God as a loving Daddy. But I believe that, as I read the Gospels, Jesus saw the rigid ideology and culturally proscribed practices of family and kinship as obstacles to the reign of God. Family values in his culture meant clear lines of insiders and outsiders. It entailed boundaries and limits of social obligation and responsibility. It subtly #
erected hierarchies of value. It controlled and limited the flow of love. In God’s reign, all persons are God’s precious children. We are all sisters and brothers. Everyone deserves the fullness of our love, the intentionality of our concern and care. We treat everyone as as preached and practiced in the United States today means a boundary line between those for whom we are responsible and those for whom we are not; those worthy of our love and those not worthy. The politics of family values is alarming: in my state, family values advocates support proposed tax cuts that promise to slash education funding, especially to already poor urban systems, further condemning kids from economically divested neighborhoods to a bleak future. But don’t worry: with our tax savings we can send our kids to expensive private Christian schools. Maybe that’s a way of loving our own kids, but is it the reign of God? I say all this as someone with a family that on the face of it looks much like the typical America nuclear family. In all practicality, I love my own two children more than I love other children. My partner and I put a significant amount of energy and resources toward their nurture and their future. The work of raising them has caused us to be a bit more entangled in some aspects of worldliness than we are comfortable with. But I am challenged by the new vision of family that is ultimately Jesus’ call to us: Are we raising our children in ways that reflect our commitment to discipleship and to the reign of God? We have worked so that our basic values and commitments, including our choice of social location in an economically vulnerable community, are not sacrificed for the sake of our children. We are hoping our children see our life as embedded in a larger community and network of disciples and people following Jesus. We are trying to help our children navigate family lifestyle choices that create significant tensions with their peers. We carefully dialogue with them about the world’s struggles and pains, about our responses and responsibilities. I have been particularly gratified by ways my children have participated in the community of men and women who have experienced homelessness where I have worked for most of the past twenty years. But clearly these struggles will be ongoing. My partner and I want to accept the vocation of parent and live it out faithfully and carefully. But we want to make ours a family with the fluid boundaries of God’s reign. We pray that the particular love bred in our home with these children given to us is connected to Jesus’ love of the outcast, the wounded, the vulnerable; and that whatever security and stability we can foster for our young charges can be part of building shalom for all God’s children. And as I revel in the gift of my children’s love for me as their daddy, I want to teach them about our Daddy who loves and cherishes us all—the Daddy who is the source of the truest family values. $%
Idolatry: Ten Confessions
K
Margot Starbuck
nowing I write about engaging a world in need, a friend commented, “I think we make our families an idol and neglect the world. Write on that.” “Yeah, why don’t you write that book!” I balked. Questioning the choices people make in and for their families, especially around privilege, money, and other resources—is really touchy business. It’s offend-most-of-the-Christian-parents-I-know business. But I couldn’t shake her unsettling challenge. How had I idolized my family to the neglect of a world in need? 1. I’ve spent too many dollars on special-occasion clothes for an assortment of family functions to make my children look right in the eyes of others. 2. I’ve lived in affluent, homogenous neighborhoods that naturally separate me from those in need. 3. I’ve failed to invite strangers to share holidays with my extended family because I don’t want to upset those who are convinced that holidays are “family time.” 4. I’ve let youth sports rule my Saturdays. 5. A creative soul and visual artist, I have spent hundreds of dollars photographing, videotaping, and memorializing my family in frames and albums and iron-on T-shirts and mugs. 6. I’ve bought my children pricey backpacks, when their old ones worked perfectly well, simply because it was September. 7. I meant to be committed to troubled public schools, but accidentally got picked out of a lottery for a great public charter school and great public magnet school. Shall I sacrifice my child’s education for my loosely held ideals? 8. When I’m on vacation, I justify all sorts of nutty spending choices. 9. I’ve allowed entertainment—like TV and DVDs and kiddie websites and video games—to supplant both shared quality time and any sense of family mission in the world. We all need the down time, right? 10. Every member of my family has whined, on some lazy afternoon, “There’s nothing to do-oo-oo...” Translation: “We simply can’t think of one more way to pleasure our nuclear family of five.” I understand that most parents I know are not ready to label the cutie clothes, the new backpacks, or the smiley photo mugs idolatrous. Most days, I’m not either. Nonetheless, the overall trajectory of this list haunts me. This particular vision for the ideal family life was generated not by Jesus, but by Norman Rockwell, the iconic U.S. painter and illustrator whose sentimental paintings tug at our heartstrings, appealing to the anxious place inside each of us that longs for stability, safety, nostalgia, and comfort. But the Jesus we follow breathes “Peace,” while asking us to lay down our lives and lose them. And that command, friends, is one scary family meeting! $$
Ordinary Parenting by Lara Marie Lahr
W
hat does it mean to raise children “radically?” Many people read The Irresistible Revolution, and then make a pilgrimage to The Simple Way, seeking a more radical (rooted) way to live. I often disappoint them by sharing that the life we have chosen is much more ordinary than radical. I usually find myself doing the same things any ordinary mom does. I cook. I clean. I make lunches for my girls. I give baths. I take them to their dance rehearsals, theatre practices, and sewing classes. I go to work. I try to be a good wife. I take my dog on daily runs. At the end of the day I always wish there were more time. How can a parent live radically when there is so much ordinary stuff to get done in twenty-four hours?
Yet our family life has been indelibly shaped by our journey, a key turn of which happened fifteen years ago. When Chris and I served at Mother
I usually find myself doing the same things any ordinary mom does. Teresa’s Home for the Dying in Calcutta, a young boy died of tuberculosis in Chris’s arms. Later, we learned that nobody knew his name or where he came from. That reality transformed us. We sat in a dirty hostel on Sutter Street pondering the story of the rich man and Lazarus. The rich man passed Lazarus every day as he lay at his front gate and never even knew his name. When they both died, it was Lazarus, not the rich $&
man, who had a name. Chris and I wanted to live in such a way that we would recognize and know those at our front gate. We embraced the fact that being a follower of Christ was more than just loving God. It was also about loving those at our door, our neighbors. Within the year, we returned to Kentucky, where we formed a group called the Lazarus Society—and I had my firstborn. Many of us had young children, and we got together regularly to encourage one another in whatever it was that we were doing. Some of us visited the elderly, some worked with homeless youth, others led Bible studies in jails or shelters. We had diverse gatherings that included folks from the shelters, the elderly, and people from our schools or work. While it was
hard having little in common with each other, it was also beautiful that a group of us who would not normally have hung out together actually learned to love one another and appreciate our differences. When we moved to Philadelphia, we immediately looked for a church. The first one we visited would have been fun to attend. Everyone was white, young, and similar to us. It would have been easy to connect with people. But we truly longed for some-
their lives rarely intersect. In contrast, we’ve created a more multiplex life, where all spheres of our life overlap. We have never separated family time from work, ministry, or church. We share our neighborhood with other church members, and the kids help us distribute food in the church neighborhood. They accompany Chris to Timeteo, the flag-
“We have never separated family time from work, ministry, or church.” thing different. We found what we were looking for in Iglesia del Barrio, an old, dilapidated church located in one of the city’s poor neighborhoods, where drug money and welfare provide most of the income, and Puerto Ricans are the predominant ethnic group. For the past eleven years, we have stayed committed to this community through the good and very difficult times that any small, struggling church faces. It would have been easy to go back to the other church where we found so much commonality. Our three daughters may have had more opportunities to have great Sunday school classes, educational activities, or more friends their age. But I am truly glad that we have stayed. Iglesia del Barrio has become our family. In seminary, Chris was introduced to the terms multiplex and simplex. Many Christians live simplex lives, where they go to work, come home, then go to church, but these areas of
%90-*:6;;<)-33*K9&%&(!4B!/"%9+0B!1+9+0
football practices where he mentors dozens of young boys. One of the young couples that Chris worked with in his job decided to move next door, go to our church, and work with the football team. When we invited a single mom who had been struggling with loneliness and depression to dinner recently, $'
my kids saw the tears roll down her face as she shared that she had never experienced a family dinner like that before. Perhaps our most meaningful and
If I’ve learned anything over the past twelve years, it’s that there is no perfect way to parent. radical (rooting) practice as a family is our mornings. Chris and I wake early. He heads to his office, and I to my chair with my cup of coffee, my Bible and my journal. After forty-five minutes, he joins me so we have a few moments together before the kids’ alarm clocks go off. If we did not have this time for ourselves and for each other, I do not
that made us laugh that day and something that made us sad or happy. Even if dinner is rushed, one of the girls will remind us to do examine. It gives us all a chance to feel heard, to be known, and to honor each other with attention and care. I see them extending this care to the world. This past Christmas, they chose not to get gifts from us so that we could help a family whose mom had just passed away. If I’ve learned anything over the past twelve years, it’s that there is no perfect way to parent. Our life is not for everyone. Similarly, while I often envy moms who have tons of kids, live on a
%90-*:6;;<)-33!K9&%&(!4B!/"%9+0B!1+9+0
know how we could give to our kids or to anyone else. When our kids come down, we read from a children’s devotional Bible and share prayer requests. We take turns praying, and our four-year-old drags out her prayers as long as possible to avoid having to go upstairs to get dressed! At dinner, we practice “examine” time, a term used by ancient monks. We go around the table sharing one thing $(
farm, or homeschool, I realize that I am not wired that way. I love our life here in Philly. I feel so blessed to see my kids having friends from different cultures, ethnicities, classes, and religions. They know homeless folks, drug addicts, and dealers; they know doctors, lawyers, and people who live in mansions. I pray my kids keep an open, non-judgmental, and loving heart toward everyone at their front gate.
Separate and Unequal, Still Dee Dee Risher
T
o run head into the class system in this country, you need to raise kids. From birth, every parenting choice is affected by race, class, and political paradigm. Certainly individual preference has a role, but choices are subtly driven by economics, income, access to healthcare, and social network. Here in the urban metropolitan Northeast, we found little middle ground. Daycares were either overwhelmingly white and middle-class and expensive—or nonwhite and poor-working class and subsidized. We could go to our city’s (few, mobbed) public pools with dilapidated bathrooms, where I’d be one of only a handful of parents, and we’d be the only whites. Or we could spend $700 to join a private swim club with beautiful lawns, chairs, and a middle-class social network we could fit into seamlessly. Summer day camps were either mostly white, with icebreakers that included the socially revealing question: “How many countries have you visited?” or they were Latino and African-American, and you’d be lucky to find three kids who’d been outside city limits. Schools were either private and geared toward producing our next set of elitist leaders—or public and underfunded. Education choices were a complex mix of “options:” public, charter, parochial (read: cheaper, Catholic private schools), and private schools (more expensive prep and Quaker schools). But many parents have only one real “option:” their public neighborhood school, or a lucky lottery draw for a public charter school. I sought experiences that would not infect my children with a sense of privilege, entitlement, or racial superiority; and which would expose them to diversity as a way of developing their own sensibility for justice. I also didn’t want my children to become a reason to move from the life we had chosen. Good parenting, I figured, should not depend on reinforcing paradigms of injustice. Or the suburbs. My son and I watched a documentary about Mississippi in the civil rights movement. In it, Mabel Carter, sharecropper and mother of thirteen, recounts her family’s experience integrating her county’s white schools as the camera panned over school yearbook pages—always the one somber black face in a sea of smiling white faces. I watched my son’s eyes for some glimmer of recognition as he took in those yearbook spreads. The reality is that forty years later, not much has changed in his experience of school. For seven years, the yearbook of his publicly-funded urban charter school has shown forty-five smiling faces in his grade. His is the only fairskinned, blue-eyed face. The massive desertion of the urban public school systems by white, middle-
'&
)LUVW'D\RI6FKRRO%4B!L**"0!1&&'"!-(4&'0"
class residents is startling. Only a handful from my fairly wide circle of white friends is committed to trying to use public schools—and many of those have nonwhite, biracial, or African American children. The others have quietly chosen other options. Most did not observe a single public classroom before making their decision. We seem much more comfortable raising our middle-class, bright kids in suburban or small private (mostly white) schools and teaching them to change the world (and break down class and race barriers as adults) than entrusting them to a public system which confronts us with these barriers. I understand. I don’t wish my kids to be “the only” in their groups. I don’t know a parent who would. Yet if more of those parents chose public options, no one would be the “only.” The perceived inferiority of the system is often a myth. We’ve had committed educators and challenging classrooms. My daughter’s class works through the same workbook as fast as the private Quaker school a half-mile away—there are just a lot more pale-skinned kids over there. As education activist Johnathan Kozol points out, the South Bronx has a segregation rate of 99.8 percent. Only two-tenths of 1 percent marks the differ''
ence between legally enforced apartheid in Mississippi forty years ago, and socially enforced apartheid in New York today. Studies like the Harvard Civil Rights Project show that as court orders for desegregation have been allowed to expire, the U.S. public schools in every region are rapidly resegregating. Although minority enrollment in our public schools exceeds 40 percent nationwide, the average white student still attends a public school that is 80 percent white. Fewer blacks attend majority-white schools than in 1969. In the Midwest and Northwest, more than a quarter of black students attend schools that are nearly 100 percent non-white. Funding disparities are startling. My school district spends $9300 per child. Two miles north, the surburban school district spends $13,227 per child, and in more affluent suburban counties, that figure jumps to $14,865. Public school is deeply flawed; open to criticism from every side. Schooling issues are very complicated and change from region to region. It is much easier, if we have the resources, to withdraw to the well-funded suburbs, or to adhere to our creative alternative curriculums, our small Christian schools, or our homeschool models. My father stands across the kitchen. My children have gone to bed, and he wants to talk. But when he broaches the topic, I am unprepared. He asks why I would do this to my children—send them to public schools in the city, live in my neighborhood. “Your mother and I sent you to schools with kids like you, and the best schools we could afford. Would you rather have had a life like you had—or a life like your children’s? Are you making them an ideological experiment?” I know that he is asking from love and deep fear for them. I am afraid also. I am not sure where our choices will take us. Every path has wounds, and we cannot choose which ones we will carry. But I want him to see their school—how lovely and bright the hallways, filled with teachers who really care, and kids who care about my children. I want him to see how the teenagers in our African-American church in the heart of the most depressed part of the city take care of my kids and delight in them. I want him at gatherings at our house—how finally blended and easy all the races and cultures are. I want him at the arts festival at the homeless agency where my partner works to see our daughter belt out some sultry blues song and bring down the house in smiles. I want him to see my son, at eight, have the courage to stand up at a rally of four hundred and tell all the kids there to fight for civil rights like Martin Luther King did. I think we are okay. This, I whisper to myself in reassurance, has its own richness, because this is sustainable. It doesn’t depend on having a lot of money. It is not rooted in elitist and stratified social choices. It isn’t built on segregation into the “people like us” and the “different people.” This is the path to a world that is less frightening to me; less afraid of the future. Isn’t all parenting an experiment? Should we not try some experiments that embody our truest hopes? '(
!"#$%&'()%(*'%+(,$-.'.,/%+())*,.0."$ +1!23456!"#!#$#%&"'()!*+!,-..$'"%"(#!&')!/0-$1#2!3(0(4!-$0!5-'#1"0(0#!#6&0(!76&%8#! 6&11('"'/4!90-.!'(7!:")#!%-!&!'(7!7-0;)<!90-.!%6(!.$')&'(!%-!%6(!"'#1"0"'/2!=-0!,-'%&,%! "'9-0.&%"-'!&')!&!)(#,0"1%"-'!-9!(&,6!,-..$'"%+8#!."##"-'!&')!&,%">"%"(#4!/-!%-! !###78(,$-.'")9/9:.,"78()7%?6";(!%6(0(4!@-"'!-$0!,-'#1"0&,+!-9!/--)'(##A!
Alterna (LaGrange, GA): Nineteen months after being arrested in front of his wife and son, Pedro Perez Guzman has been granted a green card! Last November, eight people, including three from Alterna, vigiled and were arrested for civil disobedience in protest of Pedro’s arrest. His story made national news and is the subject of a documentary. We commend Pedro for his perseverance. Welcome home! (). Anthony’s Plot (Winston-Salem, NC): This summer, we are working with our Waughtown neighbors to develop a community “school” to provide supplemental opportunities (music, soccer, and literacy courses). Pray for us as AP becomes a short-term residential option for homeless in our city (). Camden Community Houses (Camden, NJ): The Camden Houses are hoping for a few new members. For more info, please contact [email protected]. Chicago Catholic Worker Houses (Chicago, IL): The White Rose Catholic Worker is establishing a new farm in Monee, IL, with work, picnics, camping, games, prayer, and bonfires! We joined our fellow CW communities for the Midwest Catholic Worker Faith & Resistance retreat in Kansas City in May. Coral House Community (Lake Worth, FL): On behalf of the homeless and poor, we share food, clothes, and friendship in the park. We recently took everyone in the park to a restaurant. Approximately seventy of us ate together and enjoyed the fellowship. Patrons and staff were curious, prompting good conversation. It was an amazing experience! What a glimpse of future celebrations when the kingdom comes (www. thecoralhousecommunity.com). Dathouse (Indianapolis, IN): We are overjoyed to welcome our newest member, Ezekiel Isa Abner, born May 24th! New life is springing up in our community including the baby, guests, and gardens. We are entering a busy season of repairing homes, summer camps, and building relationships (). DC Area Community of Communities (Washington, DC): Cornerstone Community says good-bye to Brian Gorman at the end of the summer as he joins the Quebec House. We ask for prayers as we search for a new live-in for this house for men in recovery. Congress Heights House has begun a Food Not Bombs chapter and is always D$6%5'"#!+4&?"J!M'&,!*"M%!%&!'$29%N!!.06=*:.93-!I$#(J!0"@!4+4B!O"7"!M'&,!>6+,.93-J!$QWKRQ\·V3ORW!B&502! K+'%$6$K+0%(J!+0#!"PK*&'$02!6&,,&0!?$'%5"(!+%!0RUH7KDQ7KXUVGD\V
()
looking for help gathering food, cooking, and serving in the park. Casa Chirilagua (casachirilagua.org) is taking fifteen kids to Passport Camp, and looking for mentors and volunteers for tutoring and kids club activities (). Dwell (Burlington, VT): We are currently transitioning our gathering space back to the heart of the Old North End. Our house band, “The Likeness,” (. com/queencitylights), is playing festivals and shows across New England, preparing to release their first EP. Email [email protected] for information (. com). We are happy to welcome babies Ollie and Addy! First United Presbyterian Church of Crafton Heights (Pittsburgh, PA): We are in the midst our annual summer day camp in which sixty neighborhood youth gather for two meals a day, field trips, bible stories, crafts, and relational time with mentors. It’s a six-week effort to provide positive options and authentic friendship. New this year is an “extended day” which allows kids to stay with us until dinnertime (). Georgetown College (Georgetown, KY): In May, we sent a team to Temuco, Chile to work alongside students at Colegio Bautista, our partner school. We found ourselves alongside marchers of all ages in Pucon, protesting the construction of hydroelectric dams which would destroy delicate ecosystems and some small towns. More Than Thursdays (Oakland, CA): We recently committed to a set of common virtues (contemplative, communal, generosity, service/peacemaking, and creativity). This is a stretching experience for a group of longtime friends. We are excited about an upcoming wedding within our community (thursdaymatters.blogspot.com). Mulberry House (Springfield, OH): Our community garden is producing, and we continue our weekly Bible study. Visitors welcome! (). Nehemiah House (Springfield, MA): A tornado touched down two blocks from us, destroying buildings. But we know God is here, and we’re working with our neighbors and churches to rebuild. Our community just entered into our Working Covenant. We had a big celebration to mark our affirmation, followed by a great whiffleball game (). New Providence Community Church (Nassau, Bahamas): Lead Pastor Christian McCabe and his wife Nicole have been called to South Africa and will serve in Hermanus, one hour south of Cape Town. Instead of creating a worship service, they hope to challenge thinking and train people to “be a service,” engaging the spiritual and practical needs of the local community. Please keep New Providence and the McCabes in your prayers (). ReIMAGINE! (San Francisco, CA): We focused on being agents of healing. We experimented with micro service, offering clothing repair, manicures, and balloon animals in a park. We cleaned and repaired a family home. We concluded our Peace Project, each seeking reconciliation with those we’ve wronged, offering forgiveness, and speaking the truth in love. We celebrate the release of Practicing the Way of Jesus, by Mark Scandrette and the launch of the Jesusdojo.com, a compendium of helpful resources ( ). Relational Tithe (Oakland, CA): We have had connections with the Middle East, Japan, and Haiti. We are grateful for people who live very rooted lives locally and remember our global connections as neighbors (especially with the Middle East, Japan, and Haiti). We have immediate openings for web developers, administrators,
(!
and social operators. If your group is reimagining their connection to one another and their resources, we would love to hear from you (). Rutba House/School for Conversion (Durham, NC): JaiMichael just graduated kindergarten, Naomi starts pre-K, Fay is learning to articulate feelings, Nora is running and babbling, and Noah is smiling. The adults are doing what we can to keep up (). Servants Vancouver (Vancouver, BC): We’re excited to be partnering with a number of other communities and organizations for an art-and-justice camping festival in Mission, BC (August 12-14). There will be speakers, workshops, musicians, and lots of art space. Check it out at (). The Book Parlor (Spokane, WA): We want to give a shout out to Project H.O.P.E. Spokane (). Their summer urban-farming program employs youth at risk of gang violence. Project H.O.P.E. is doing great things for our West Central neighborhood and we’re honored to partner with them (www. TheBookParlor.com). The GAPS Community (Downey, CA): Over seventy people attended the official release of David and Christie Melby-Gibbons’s new album. (The two are also known as “Dust Of The Saints.”) Each song on “Short Short” is under a minute. To get a copy of the CD, offer a donation of whatever amount you choose, and email [email protected] (). The Simple Way (Philadelphia, PA): We’ve added some folks to our village house and community. Welcome Janelle, Steven, Beth, Val, and Brett! We are looking forward to summer camp in the Pocono Mountains. For some of our neighborhood kids, it will be their first time outside Philly! We loved seeing friends at PAPA fest & Wild Goose! ().
More conspiring communities: $OWHUQDWLYH6HPLQDU\!QD9$*+#"*K9$+J!DAR!@@@;+*%"'0+%$?"(",$0+'B;0"% &DULWDV9LOODJH!Q1",K9$(J!:SR!@@@;6+'$%+(?$**+2";&'2 &DUSHQWHU·V&KXUFK!Q8544&67J!:TR!@@@;6+'K"0%"'(695'69*544&67;&'2 &HQWXULRQ·V*XLOG!QC&0&*5*5J!CUR!$0M&V6"0%5'$&0(25$*#;&'2 !,904,*.7*+,-*$.?.90)-03!Q3+0!G'+06$(6&J!=AR!@@@;695'69&M%9"(&W&5'0"'(;&'2 &RQVSLULQJIRU&RDWHVYLOOH!Q=&+%"(?$**"J!DAR 'HWURLW9LOODJHV$LODQWKXV!Q>"%'&$%J!1UR!@@@;B&502*"+#"'($0$%$+%$?";&'2 (63+*!-)+06=*@<)<3+0<-3!QA*45X5"'X5"J!S1R!@@@;"+(%6"0%'+*,$0$(%'$"(;&'2 *HRUJHWRZQ&ROOHJH!Q<"&'2"%&@0J!IYR +\DHWV&RPPXQLW\!Q=9+'*&%%"J!S=R!@@@;9B+"%(;&'2 A)460)6+<.)*$+6+<.)!QI+0(+(!=$%BJ!I3R B6,63,*A)+-0)6+<.)6=!QD&'%*+0#J!-HR!@@@;*+9+(9;0"% 5HED3ODFH6KDORP0LVVLRQ&RPPXQLWLHV!QL?+0(%&0J!U8R!@@@;'"4+K*+6"M"**&@(9$K;&'2 $64068-)+.*!.)3;<06+.03!Q3+6'+,"0%&J!=AR $6)*'676-=*C<03+*D@!!Q3+0!H+M+"*J!=AR!ZZZVDQUDIDHOÀUVWXPFRUJ $-0/6)+3*E6)4.9/-0!QZ+06&5?"'J!/=R!@@@;("'?+0%(+($+;&'2 6RORPRQ·V3RUFK!Q1$00"+K&*$(J!1SR!@@@;(&*&,&0(K&'69;6&, 5,-*E<)-!QC+?"'9$**J!1AR!@@@;%9"?$0"9+?"'9$**;6&, 5<-006*#9-/6!Q37+2$%!Z+**"BJ!)AR!@@@;%$"''+[05"?+;&'2 B<F-G<3-*H..F3!Q)"(%,&0%J!U8R!@@@;$?K'"((;6&, I93+D3!Q3$&5P!8&&7&5%J!-0%+'$&R )RUPLQJLQ6RXWK/DNH8QLRQ!Q3"+%%*"J!)AR!M&',$02(*5;4*&2(K&%;6&,
(" | https://issuu.com/conspire/docs/conspire_issue__10_children_of_god | CC-MAIN-2017-30 | refinedweb | 6,249 | 62.98 |
Why is "My Computer" Zone hidden in inetcpl in Internet Explorer and how do I make it show up?
I was recently asked this question by someone so I did a bit a of look around to find the answer to this and thought I'd share it with the rest of you.
A quick word to clarify what I am talking about. In Internet Explorer, there are 5 Security Zones that are basically trust namespaces. A certain URL can end up in one of these 5 zones and then conforms to the policies described in that particular zone for all its URLActions. All but one of these security zones are exposed through the UI in inetcpl under the Security Tab which shows the Local Intranet, Trusted, Internet and Restricted zones in there. You can either set these zones to one of the predefined template setting or you can control the policies in these zones for individual URLActions by setting the level to 'Custom' and editing the policies. My Computer (aka Local Machine) zone, however, is not shown in this UI. That is the way it has always been. The reason this was the case was because Local Machine Zone was a zone of extremely high trust and we did not want the user making any changes to the security policies in that zone. The settings were historically low to begin with, and this was one of the reasons why in XPSP2, we came out with the idea of Local Machine Zone Lockdown (LMZL) to clamp down on some of the key settings in this zone for IE. Long story short, it was deemed unsafe to make the My Computer zone visible in the UI. But that does not mean that it can't be done. It used to be a simple registry tweak that would make it show up but due to LMZL, its become a little bit non-intuitive and somewhat less useful in actual terms of being able to modify active Local Machine Zone policy from the UI.
Every zone has some attributes like the name, description, icon that are used to describe the zone. These attributes sit in the registy at the following location:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\[0-4]
The last number is the zone ID which is 0 for Local Machine Zone. This is the same location where the actual URLAction policy set is stored as well. One of the DWORDs under these keys is the Flags DWORD that is a bitmask of the Zone Attribute Flags (ZAFLAGS). One of these attributes is the ZAFLAGS_NO_UI attribute which is defined as 0x00000020. This attribute controls whether that particular zone shows up in the inetcpl UI or not. So really, unsetting that particular bit on the Flags DWORD should make the zone appear, right? WRONG!!! It used to be that way and it would still work if you are running inetcpl inside of a rundll32.exe to see the changes. But if you are running it from inside of iexplore.exe, you will notice that the My Computer icon does not show up on the Security Tab inside inetcpl. So whats going on? Why is it showing up? The answer is LMZL. Due to LMZL, now inetcpl uses the Zone Attributes from the Lockdown zone settings instead of the normal zone settings for Local machine to decide whether to show it in the UI or not. So in order to make My computer show in the UI, you will need to change the Flags DWORD under the Lockdown_Zones\0 .. so the location you need to change is at the following registry location:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Lockdown_Zones\0
Change the Flags DWORD to remove the 0x00000020 flag and now you should be able to see the My computer icon in the Security Tab in inetcpl UI. So what does that give you? It gives you the ability to click Custom Level button and change the settings for individual URLActions for My Computer Zone just like you can do for other zones. So picture this, you don't want scripts to run in the Local Machine Zone, So you open up inetcpl and go to the Security tab and click on My Computer icon and then click the Custom Level button that takes you to the Security Settings dialog. On that dialog, you scroll down to the Scripting section and change the value for Active Scripting to Disable. You apply the settings and then load up a local html file with a script in it. The script doesnt load and you see an Information Bar telling you about it. Working as expect you think, until you click on the information bar and it gives you an option to "Allow blocked content". You click it and your script runs. What just happened??? Didn't you just block scripts from running? Did that setting not take effect? Is there something else that needs to be done? Well, what just happened is that you just edited the settings for the Normal Local Machine from the UI. But since you're running inside IE, LMZL is turned ON for the process and the setting for LMZL dictates that you simply prompt the user about scripts in the page and if the user chooses to allow it, it goes ahead and allows it. So even though inetcpl reads the Attributes from the Lockdown_zones, it still read the policy settings from the Normal Zone hive in the registry and all changes made from the UI take effect in the normal zones\0 hive as well. So really all that hussle to make My Computer show up in the UI achieves little if anything at all as far as IE is concerned. The changes that you make through it will affect other processes that do not have Local Machine Zone Lockdown turned ON for them. But due to security reasons and the entire concept of "Locking Down" the local machine, inetcpl does not allow you to change the active LMZL policies from the UI. This is consistent with the original intent of not allowing the users to mess with the Local Machine Zone polices. The only 'weird' thing is that you have to change the NO_UI attribute under the Lockdown_zones\0 for the UI to show My Computer, but changes to settings work in the opposite way.
All part of a grand plan to obfuscate the settings from the user? Not really. The original idea was simple: set the flag so that it doesnt show up, advanced users can make it appear by flicking a bit in the registry. But since the default behavior was to not show, it became a bit of an unsupported scenario and subsequent changes have made things more complicated than they need to be. At the end of the day, though, I think its best not to mess with the Local Machine Zone policies at all. But that doesn't stop us from knowing how to do it if we ever decide to :)
Cheers
Ali
P.S. I appeared to have lost the password to my account on the image server, so I am currently unable to add images to the text. I will update the post once I sort the password issues out. | https://docs.microsoft.com/en-us/archive/blogs/alialvi/why-is-my-computer-zone-hidden-in-inetcpl-in-internet-explorer-and-how-do-i-make-it-show-up | CC-MAIN-2020-24 | refinedweb | 1,222 | 67.49 |
rfork_thread(3) [freebsd man page]
RFORK_THREAD(3) BSD Library Functions Manual RFORK_THREAD(3) NAME
rfork_thread -- create a rfork-based process thread LIBRARY
Standard C Library (libc, -lc) SYNOPSIS
#include <unistd.h> pid_t rfork_thread(int flags, void *stack, int (*func)(void *arg), void *arg); DESCRIPTION
The rfork_thread() function has been deprecated in favor of pthread_create(3). The rfork_thread() function is a helper function for rfork(2). It arranges for a new process to be created and the child process will call the specified function with the specified argument, while running on the supplied stack. Using this function should avoid the need to implement complex stack swap code. RETURN VALUES
Upon successful completion, rfork_thread() returns the process ID of the child process to the parent process. Otherwise, a value of -1 is returned to the parent process, no child process is created, and the global variable errno is set to indicate the error. The child process context is not aware of a return from the rfork_thread() function as it begins executing directly with the supplied func- tion. ERRORS
See rfork(2) for error return codes. SEE ALSO
fork(2), intro(2), minherit(2), rfork(2), vfork(2), pthread_create(3) HISTORY
The rfork_thread() function first appeared in FreeBSD 4.3. BSD
February 6, 2011 BSD. Note that a lot of code will not run correctly in such an environment. signal | https://www.unix.com/man-page/freebsd/3/rfork_thread/ | CC-MAIN-2021-49 | refinedweb | 227 | 62.17 |
SOLUTION (Following Cathy's advice above, and dAnjou below), virtualenv
must be activated from the wsgi script. Adding activate_this execution
solved the problem:
#!/usr/bin/python
activate_this = '/var/www/try_me/venv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
import sys
..
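Note that execfile exists only on Python 2; on Python 3 the equivalent
activation (same hypothetical path as above) would read:

activate_this = '/var/www/try_me/venv/bin/activate_this.py'
with open(activate_this) as f:
    exec(f.read(), dict(__file__=activate_this))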
Make sure you have included a requirements.txt file in the root directory of
your project.
The file should list every pip package that needs to be installed, pinned
with == (note the double equals sign):
Flask-SQLAlchemy==1.0
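If you are unsure of the exact names and versions, pip can generate this
file from your working environment: running pip freeze > requirements.txt
writes one pinned line per installed package.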
Check the CreateAlternateViewFromString overload that takes a ContentType
when building htmlView:
ContentType mimeType = new System.Net.Mime.ContentType("text/html");
var htmlView = AlternateView.CreateAlternateViewFromString(bodyMessage,
mimeType);
I hope you have installed virtualenv. Once you have created the virtual
environment, you have to activate it with . venv/bin/activate (note the
leading dot; source venv/bin/activate also works) on Unix or OS X. Hope
this helps.
It is likely that you are running the python executable from /usr/bin
(the Apple version) instead of /usr/local/bin (the Brew version).
You can either
a) check your PATH variable
or
b) run brew doctor
or
c) run which python
to check whether that is the case.
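For example, if which python prints /usr/bin/python, the Apple interpreter
is winning; for the Brew install you want it to print
/usr/local/bin/python, which usually means making sure /usr/local/bin comes
before /usr/bin in your PATH.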
Your base module is located in the ui package. Hence
from base import BasePage
should be
from ui.base import BasePage
Install OpenCV.
It needs this package to function.
Assuming you have the module properly installed, it probably is not on your
system path. You can check your system path manually to see if its
directory is there by doing
import sys
print sys.path
You can append to sys.path as you would to any list, but it is probably
better to set the path via your OS (for example the PYTHONPATH environment
variable) rather than appending on the fly, which is temporary: sys.path
reverts to its original state once the Python process ends.
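For completeness, the on-the-fly version looks like this (the directory
name is hypothetical), and only lasts for the current interpreter session:

import sys
sys.path.append('/home/me/my_modules')  # hypothetical directory holding the module
import mymodule                         # now resolvable, for this session only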
Requests is not a built-in module, so you will have to install it yourself.
OSX/Linux
Use $ sudo pip install requests if you have pip installed
On OSX you can also use sudo easy_install -U requests if you have
easy_install installed.
Windows
Use > Patheasy_install.exe requests if you have a windows machine,
where easy_install can be found in your Python*Scripts folder, if it was
installed. (Note Patheasy_install.exe is an example, mine is
C:Python32Scriptseasy_install.exe)
If you don't have easy_install and are running on a Windows machine, you
can get it by installing the setuptools package.
If you want to add a library to a Windows machine manually, you can
download the compressed library, uncompress it, and copy the package folder
into your Python installation's site-packages directory.
Have you checked sys.path in the python shell?
>>> import sys
>>> sys.path
# Returns a list of directories & .egg files
For python to find mechanize, it needs to be in one of the places listed on
sys.path. If you know where mechanize was installed, then you can check
directly whether it's on sys.path (I'm not sure how to find out where it
was installed automagically).
httplib2 is not part of Python or the core third-party libraries supplied
by the App Engine runtime (see the runtime's list of supplied libraries).
You need to include the httplib2 code directly in your project and deploy
it with your project.
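Concretely, "include the code directly" just means the package directory
sits beside your handler script; a hypothetical layout:

myapp/
    app.yaml
    main.py
    httplib2/    (the package directory copied out of the httplib2 distribution)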
You have two pythons installed on your machine: one is the standard Python
that comes with Mac OS X, and the second is the one you installed with
ports (this is the one that has matplotlib installed in its library).
Patch 7.3.1163 introduced additional search paths for Python scripts (to
make writing Python-based plugins easier). Apparently, this caused a
regression with some existing plugins. See the discussion on vim_dev.
vim_dev.
If you're compiling Vim from the Mercurial repository, revert to a version
before patch 7.3.1163 (with hg update REV), and stay there until the issue
is resolved in a future patch.
Do you have template.py inside your .env/lib/python2.6/site-packages/mako
directory?
Are you using virtualenv, or have installed mako in
/Library/Python/2.6/site-packages?
Please paste an output of pip freeze
Update:
Have you checked if there is no CR/LF or wrong character on this import
line?
What is the encoding of your foobar.py file?
file -I foobar.py
it should contain utf-8 or ascii
You need to use either explicit relative or absolute imports when you use
python3, so
from wordpress_xmlrpc import base
# or
from . import base
In python3 import base would only import an absolute package base, as
implicit relative imports are no longer supported.
I found the answer to my problem. My virtual environment was correctly set
up, I just needed to run my code through the version of python found in the
scripts folder of my virtual environment.
The python file you are trying to run can be found anywhere on your local
machine as long as it uses python from that environment, for example
C:anypathonlocalmachine> C:envScriptspython.exe helloworld.py
I was making the mistake of putting my python file in the virtual
environment and attempting to run the following WRONG code
C:env> helloworld.py
I thought that I was supposed to be working entirely within the virtual
environment
Thanks for the help in the comments rebelious
If you have pip installed you could either try pip search qgis or pip
freeze. The latter shows a list of all installed python packages to check
if you have the package. Maybe try reinstalling qgis ...
I had a similar problem with the relative paths in Django Book when I went
through. Normally just check that the forms file is in the same folder as
the folder you are working on. Otherwise try appname.contact.forms or just
forms. Something like that. If you could post the file branching of your
project that would help too.
apiclient is not in the list of third party library supplied by the
appengine runtime: .
You need to copy apiclient into your project directory & you need to
copy these uritemplate & httplib2 too.
Note: Any third party library that are not supplied in the documentation
list must copy to your appengine project directory
I think you have not installed fsevents, you can download fsevents from
here
extract the repository and cd into and do python setup.py install.
Put your city (files which you want to import) in root dir of main
application.
If IDE(PyCharm for an example) wont worry about import this module - than
the problem was in paths.
I've smashed with this problem at once. And the problem paths were.
Check the exception value and location:
Exception Type: ImportError
Exception Value:
No module named forms
Exception Location: /home/chad/newssite/links/views.py in
<module>, line 3
Line 3 of your views.py tries to import from forms.py which should be in
the same directory as your views.py. Make sure your forms.py file is in the
same directory or else import it from where it actually lives. figured out why this error was happening. For some reason, using apt-get
to install a python package was not installing it right.
So, I had to download a tar ball and install the package from them.
I downloaded Twisted tar from here.
I did a tar xjf Twisted-13.1.0.tar.bz2 - this created a directory called
Twisted-13.1.0
next, cd Twisted-13.1.0
Finally, python setup.py install
This gave me an error. Twisted required another package called
zope.interface. So, I downloaded tar ball for zope.interface from here.
Then, ran this command tar xzf zope.interface-3.6.1.tar.gz That created a
folder called zope.interface-3.6.1. So, cd into zope.interface-3.6.1 and
run python setup.py install
Note: Depending on your user's rights, you may want to do these commands
in sudo mode. Just add
I figured the answer out. The problem is that the html5lib-0.95.zip I was
using had within it a directory called html5lib-0.95. I think coursebuilder
required that all files be in the root path of the zip. Recreating the zip
fixed this for me.
I assume that you are using a Linux distro since you compiled python from
source. You need at least 2 SSL library: openssl and openssl-devel
With Ubuntu: apt-get install openssl openssl-devel
With Redhat based: yum install openssl openssl-devel
To build from source, you can download from: openssl and openssl-devel
Hope it helps.
There is a section later in the instructions with this boldface label:
Now for some voodoo to get the new program installed into system libraries
and python paths and executable path.
Those instructions lead you through modifying your PYTHONPATH to pick up
the gnuradio module, among other things. If you have followed those
instructions, you will have to start a new shell to see any effect, or
execute the .sh file by hand, since profile scripts only run when a new
shell starts up or when they're run manually..
It is probably because your Anti-Virus program deleted io.py. I just had to
reinstall python on cygwin and try installing the program with the
Anti-Virus turned off. I successfully installed request this way.
The problem seems to be that your module should be just stockMain4 (and not
src.stockMain4). -- It's a bit hard to diagnose where exactly is the
problem without taking a look at your code or at least the full stack trace
(but look for places which may reference 'src.stockMain4').
Probably your PYTHONPATH variable is not set correctly. Start django like
this:
./manage.py shell
and try this command:
import django
If this raises an error, you need to set your PYTHONPATH environment
variable in a way that it contains the path to your django directory..
Well I solve it. Downgraded to 2.7.x , so that i could use this. Seems to
me there should exist a big red warning for using django with python 3.x
and Windows 7. | http://www.w3hello.com/questions/heroku-flask-importerror-ldquo-No-module-named-hellip-rdquo- | CC-MAIN-2018-17 | refinedweb | 1,660 | 66.44 |
Here’s how to construct what John Horton Conway calls his “subprime fib” sequence. Seed the sequence with any two positive integers. Then at each step, sum the last two elements of the sequence. If the result is prime, append it to the sequence. Otherwise divide it by its smallest prime factor and append that.
If we start the sequence with (1, 1) we get the sequence, …
We know it will keep repeating because 48 followed by 13 has occurred before, and the “memory” of the sequence only stretches back two steps.
If we start with (6, 3) we get
6, 3, 3, 3, …
It’s easy to see that if the sequence ever repeats a value n then it will produce only that value from then on, because n + n = 2n is obviously not prime, and its smallest prime factor is 2.
Conway believes that this sequence will always get stuck in a cycle but this has not been proven.
Here’s Python code to try out Conway’s conjecture:
from sympy import isprime, primefactors def subprime(a, b): seq = [a, b] repeated = False while not repeated: s = seq[-1] + seq[-2] if not isprime(s): s //= primefactors(s)[0] repeated = check_repeat(seq, s) seq.append(s) return seq def check_repeat(seq, s): if not s in seq: return False if seq[-1] == s: return True # all previous occurances of the last element in seq indices = [i for i, x in enumerate(seq) if x == seq[-1]] for i in indices[:-1]: if seq[i+1] == s: return True return False
I’ve verified by brute force that the sequence repeats for all starting values less than 1000.
Related posts: | https://www.johndcook.com/blog/2018/07/31/john-conways-subprime-fibs/ | CC-MAIN-2019-18 | refinedweb | 281 | 62.01 |
Content-type: text/html
tmpnam, tempnam - Construct the name for a temporary file
Standard C Library (libc.so, libc.a)
#include <stdio.h>
char *tmpnam(
char *s);
char *tempnam(
const char *directory,
const char *prefix);
Interfaces documented on this reference page conform to industry standards as follows:
tmpnam(), tempnam(): XPG4, XPG4-UNIX
Refer to the standards(5) reference page for more information about industry standards and associated tags.
Specifies the address of an array of at least the number of bytes specified by L_tmpnam, a constant defined in the stdio.h header file. Points to the pathname of the directory in which the file is to be created. Points to an initial letter sequence with which the filename begins. The prefix parameter can be null, or it can point to a string of up to 5 bytes to be used as the beginning of the temporary filename.
The tmpnam() and tempnam() functions generate filenames for temporary files.
The tmpnam() function generates a filename using the pathname defined as P_tmpdir in the stdio.h header file.
Files created using this function reside in a directory intended for temporary use, and their names are unique. It is the application's responsibility to use the unlink() function to remove the files when they are no longer needed.
Between the time a filename is created and the file is opened, it is possible for some other process to create a file with the same name. This should not happen if that other process uses these functions or the mktemp() function, and if the filenames are chosen to make duplication by other means unlikely.
The tempnam() function allows you to control the choice of a directory. If the directory parameter is null or points to a string that is not a pathname for an appropriate directory, the pathname defined as P_tmpdir in the stdio.h header file is used. If that pathname is not accessible, /tmp is used. You can bypass the selection of a pathname by providing an environment variable, TMPDIR, in the user's environment. The value of the TMPDIR variable is a pathname for the desired temporary file directory.
The prefix parameter can be used to specify a prefix of up to 5 bytes for the temporary filename.
If the s parameter is null, the tmpnam() function places its result into an internal thread-specific buffer and returns a pointer to that area. Subsequent calls to this function from the same thread overwrite this buffer.
The tmpnam() function generates a different filename each time it is called.
[Digital] If tmpnam() is called more than TMP_MAX times by a single process, it starts recycling previously used names.
If the s parameter is null, tmpnam() function places its result into an internal thread-specific buffer and returns a pointer to that area.
If the s parameter is not null, it is assumed to be the address of an array of at least the number of bytes specified by the L_tmpnam constant. The tmpnam() function places its results into that array and returns the value of the s parameter.
Upon successful completion, the tempnam() function returns a pointer to the generated pathname, suitable for use in a subsequent call to the free() function. Otherwise, null is returned and errno is set to indicate the error.
If the tempnam() function fails, errno may be set to the following value: Insufficient storage space is available.
Functions: fopen(3), free(3), malloc(3), mktemp(3), open(2), tmpfile(3), unlink(2)
Standards: standards(5) delim off | http://backdrift.org/man/tru64/man3/tempnam.3.html | CC-MAIN-2016-44 | refinedweb | 588 | 54.02 |
Mobile devices like phones, slates need operating systems that try and get the best out of the battery power that the device has while providing a fast, responsive experience for the user.
The operating system itself can do a tonne of work in this area but at some point the applications that are running on the device also need to do the right thing to make sure that the OS can do its job.
This surfaces itself in the “Application Lifecycles” that applications go through when running on a platform like Windows 8 or Windows Phone 8 and it’s one of the first things that would hit a traditional desktop developer coming from a Windows world as being very different from what they’re used to.
Generally, these lifecycles specify something like;
- The only app that runs is the one in front of the user.
- Other apps are put into some kind of suspended animation whereby their memory pages are preserved but they aren’t scheduled to run on CPUs.
- At some point the OS may actually kill a suspended app in order to reclaim the resources it was using.
- There’s the question of what to do if the user navigates back to an app that has been secretly suspended or even killed by the OS since they were using it.
and, usually, there’s some way of having other opportunities for code to run within the system ( “background tasks” ) such as on a particular interval or on a particular system event and the OS needs some mechanism for making the user aware of this and also ( via some kind of quota system ) of making sure that the “background tasks” don’t go crazy ( either by design or by error ) and start draining the user’s battery when they weren’t expecting it.
In recent times, I’ve been focused more on Windows 8 apps than Windows Phone apps so I wanted to try and write down where I thought I was up to with the Windows 8 model and then make some notes/comparisons with the Windows Phone model to get myself back up to speed there in the light of Windows Phone 8. That’s the rest of this post and, hopefully, it manages to avoid too many inaccuracies.
Windows 8
On Windows 8, the run->suspend->resume cycle feels relatively simple. If you’ve got an application like this one built in XAML and .NET (based on the blank template in Visual Studio) which is simply displaying a TextBox on the screen;
then what’s happened in that app is that the Application derived class (usually called App) has spun up, run the code in its OnLaunched override to create a Frame and navigate it to the MainPage page which displays the TextBox.
Window Visibility (are we on the screen?)
If I switch from this app to another app then I’ll get an immediate change in the visibility of my window such that this code;
Window.Current.VisibilityChanged += (s, e) => { if (e.Visible) { } };
would run to let me know whether my window was visible or not. If I was (e.g.) playing music in my app and I didn’t intend for that music to be background music then I might stop the music at the point where my window became invisible.
I think it’s important to flag that you can get these events because on Windows 8 if your app is moved from the foreground by the user it doesn’t immediately suspend but it will do in a few seconds…
Suspend/Resume
Even though my window might not be visible and the user might be looking at another app, my app is still running code at this point and will continue to do that for a while longer (about 8 seconds or so seems to be how it works out on the machines where I’ve watched it) then, unless I am running under the debugger, I will see my application suspend which means (to me) that the process is co-operatively brought to a safe stopping point and then it’s left around in memory but its threads aren’t scheduled by the OS. From a code point of view, I can handle this suspension event and its partner, the resume event;
DateTime suspendTime = DateTime.Now; Application.Current.Suspending += (s, x) => { suspendTime = DateTime.Now; Debug.WriteLine("Suspended"); }; Application.Current.Resuming += (s, x) => { TimeSpan suspensionTime = DateTime.Now - suspendTime; Debug.WriteLine( string.Format("Resumed after {0} seconds", suspensionTime.TotalSeconds)); };
and note that the Application derived class that the Visual Studio templates include already has a virtual method for you to override for OnSuspending.
If I navigate back to the app, it will resume. Note that if I was running under the debugger then this wouldn’t happen and I’d need to use the buttons on the toolbar to simulate suspend/resume;
and that if I wasn’t running under the debugger then a cheap and cheerful way of viewing my Debug.WriteLine messages is still with DebugView from;
While the app is suspended, the user might well interact with it.
For instance, they might go back to the Start Screen and tap on its primary tile again. This would cause the application to be resumed and it would get its resume event and then carry on running.
Equally, if the application had a secondary tile on the start screen, the user might tap on that which would cause the application to be resumed but then also launched again for that secondary tile activation which might cause the app to navigate to a particular page dedicated to the arguments that were passed from the secondary tile.
Suspend/Terminate
An additional complication here is that while an application is suspended it might be terminated by the operating system if the OS decides that it’s getting a little low on resources as the user runs more applications.
If this happens, your primary “problem” as a developer is that the last interaction you had with the OS was at the point where you were suspended and you won’t hear from the OS again as it tears your process down.
However, this sort of thing is rightly hidden from the user so the general recommendation is that you try and preserve the user’s experience such that if this sort of sequence happens;
- User runs your app.
- User builds up some state in your app.
- User moves away to another app.
- OS suspends your app.
- OS terminates your app.
- User taps your tile again on the start screen to run your app again.
then your app would generally try and put the user back to where they were at step ( 2 ) unless you have good reasons for not doing so such as;
- perhaps the gap between step ( 3 ) and step ( 6 ) lasts for a long time which might make it hard for the user to remember why you have put them back in this state.
- perhaps at 6 the user taps a secondary tile on the start screen so doesn’t expect to go back to where they were in the application previously.
In order to make this work, the OS has to be able to tell an application when it launches whether it is in a “new” application launch or whether it is a launch of an application that was previously terminated and the OS does exactly that. In the override for OnLaunching in my Application derived class I can have;
protected override void OnLaunched(LaunchActivatedEventArgs args) { if (args.Kind == ActivationKind.Launch) { Debug.WriteLine( string.Format("Last time the app ran it was {0}", args.PreviousExecutionState.ToString())); }
and then if I run this up from “cold” inside the debugger I see output;
and then if I use the debugger to simulate suspend and terminate (which the debugger calls “Suspend and Shutdown”) which is the only way that I know of properly simulating this “OS” termination then the next time I run the app I see;
In terms of what you have to actually do as a programmer here to remember state information the basics of it are something like;
- Making a note of your user state as the app is going along (especially if it’s likely to get large). This state might also include things like the user’s navigation history around the app and the current page that they are viewing in the app if the app is a multi-page app.
- Handling the suspending event and writing out the saved state to disk.
- Optionally handling the resuming event and refreshing stale data if that’s relevant to your app.
- Handling the case where the application is launched from a previous termination and attempting to restore state on the user’s behalf to put them back where they were before the OS terminated their app by loading up the saved state and then getting rid of it.
In terms of doing this, if you use a Visual Studio template other than the blank template you’ll find in the XAML world that it comes with additional code in that there’s a class called SuspensionManager which adds;
- A global “session state” dictionary(string,object) of state for you to make use of.
- A means via which you can register a navigation Frame with the suspension manager and creates a separate dictionary(string,object) for each registered frame – most apps would have one.
- At suspension time the code serializes all of the state that it’s managing to a file within the app’s own folder structure including (1) and (2) above but also the navigation history of the Frame which the Frame itself can provide via GetNavigationState().
- At launch, the template code registers the Frame it creates with the SuspensionManager, checks the type of launch and attempts to restore all of the above in the scenario where the application was previously terminated and state has been saved and restored.
This works hand in hand with another class that the template code adds called LayoutAwarePage which;
- As it is navigated to, reserves itself a slot in the session state being held by its parent Frame to store a dictionary(string,object) for the state of that page.
- Offers an easy override for loading stored state from that dictionary as the page is navigated to.
- Offers an easy override for saving state into that dictionary as the page is navigated from (which will also happen prior to suspension).
- Works with the navigation frame such that these 2 scenarios can be coded in the same way;
- User goes to Main Page.
- User navigates to Page 1.
- User builds up some state (e.g. selecting some items in a list).
- Scenario 1
- User navigates to Page 2.
- User goes back to Page 1 and expects their state to be preserved.
- Scenario 2
- Application is suspended and terminated.
- User runs the application again and expects to land back on Page 1 with their state preserved.
The fundamentals of this suspend/resume/terminate/reincarnate cycle are the same whether you’re writing .NET code, JavaScript code or C++ code – the frameworks work differently on your behalf but the underlying concepts are the same whether you’re using your own code or whether you choose to lean on SuspensionManager and LayoutAwarePage from the templates or whether you’re in the WinJS world and make use of the WinJS.Application.sessionState object which does a similar job for you.
Closing Apps
This one is probably obvious but you can’t rely on the application ever being closed nicely by the user so if you have persistent state that the user modifies then you need to make sure it’s well and truly saved by the time suspension ends (or, ideally, before suspension even begins) because you don’t know if the app is going to run again or not.
Windows Phone 8
On Windows Phone 8, I find the lifecycle model ( which is detailed in the docs ) to be a bit more complicated and especially in the light of some of the changes that came with the new release like Fast Application Resume. Putting that new feature to one side for a moment…
A bit like Windows 8, on the Phone I run an app and then I can move away from that app either;
- Forwards by navigating to the Start Screen (or possibly into some OS chooser functionality like taking a photo but let’s leave that to one side).
- Backwards by navigating up the navigation stack and out of the application which causes it to exit.
And this is different from Windows 8 which does not maintain this navigation stack across applications – it’s not an operating system service on Windows 8 and although an application can offer the user a “back” navigation model across its own pages it’s not linked to the OS in any way so that a user can’t navigate “back” out of an application to close it down (instead, Windows 8 has specific ways of closing an application).
Deactivate/Activate and Dormant Apps
Additionally, the Phone model is different from Windows 8 in that as soon as I navigate forwards out of the application its code stops executing pretty much straight away. The app goes into a dormant state where it is still in memory but the code isn’t getting scheduled to run. For me, that dormant state is analogous to the Windows 8 suspended state but without the ~8 seconds it takes to make the transition.
If the user then goes back (i.e. uses the back button) to return to this dormant application then it’s a little like the Windows 8 suspend->resume cycle in that the app is still in memory, its threads start getting scheduled and it can run again and it might want to make a note of the time that elapsed between when the application went dormant and when it was activated and possibly refresh any stale data that it might be displaying.
In terms of code, the PhoneApplicationService makes events available to know when your app is deactivated and re-activated;
PhoneApplicationService.Current.Activated += (s, e) => { TimeSpan timeAway = (DateTime.Now - deactivationTime); Debug.WriteLine( string.Format("User came back after {0} seconds", timeAway.TotalSeconds)); }; PhoneApplicationService.Current.Deactivated += (s, e) => { deactivationTime = DateTime.Now; Debug.WriteLine( string.Format("User went forwards from the app at {0}", deactivationTime.ToLongTimeString())); };
and those are wired into your Application derived class in the Visual Studio templates. Also, those templates give you page classes derived from PhoneApplicationPage where you can override OnNavigatedTo and OnNavigatedFrom and so as the user leaves your page (for any reason) the OnNavigatedFrom method will get called and as they come back into your page the OnNavigatedTo method will get called.
Deactivate/Activate and Tombstoned Apps
On Windows 8, suspend is either followed by resume or by termination. On Windows Phone 8, going dormant may well be followed by another state transition into a tombstoned state.
If the application is tombstoned then it means that an instance of the application is kicking around on the navigation stack but the OS wants to reclaim resources being used by the application so the app goes away and the OS instead preserves some state dictionaries and navigation history information on the application’s behalf.
If the user then navigates back into the application it is brought back to life, the navigation history is restored and the page that was active at the time that the user left the application is put back onto the screen and the state dictionaries can be used to get everything back to where it was.
As an example, if I have an application with 2 pages;
where the buttons simply increment one of two counts (named Count1 and Count2!) and the display is databound to a single object defined in the App.xaml file which I treat as application data rather than page specific data and that object is an instance of this class;
public class AppData : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; public AppData() { if (_current == null) { _current = this; } else { throw new InvalidOperationException("Can't make two of these, sorry"); } } public static AppData Current { get { return (_current); } } public int Count1 { get { return (this._count1); } set { if (this._count1 != value) { this._count1 = value; RaisePropertyChanged("Count1"); } } } public int Count2 { get { return (this._count2); } set { if (this._count2 != value) { this._count2 = value; RaisePropertyChanged("Count2"); } } } void RaisePropertyChanged(string property) { var handlers = this.PropertyChanged; if (handlers != null) { handlers(this, new PropertyChangedEventArgs(property)); } } int _count1; int _count2; static AppData _current; }
Then in the case where I perform a sequence of actions;
- Run the app.
- Increment the Count1 value to build up some state.
- Navigate to the second page in the app.
- Increment the Count2 value to build up a bit more state.
- Navigate forwards to the start screen and maybe another app.
- Navigate back to my app.
then most times this seems to work just fine and my backwards navigation lands me on the second page of the app with the right navigation history and the right values on the screen.
I say “most times” because if I involved tombstoning by using a debugger trick to force the app to tombstone each and every time I deactivate it by navigating forwards to another app;
then if I run through my app again;
- Visit main page, increment the state there
- Navigate to second page, increment the state there
- Navigate forward to another app via the start screen
- Navigate back
Then on that return navigation I’ll find that the system has preserved the navigation history for me in that my backwards navigation will land me on the second page of my app but I’ll find that it’s displaying the wrong data (0 values in my case). If I navigate back again then I’ll correctly land on the main page of my application but, again, the data displayed will be wrong.
This is because the app was tombstoned and I didn’t save and restore state so all the data that I had in memory has gone and this could happen to my app in real use so I need to cater for it.
If I added these 2 simple methods to my AppData class (taking a big dependency here on the PhoneApplicationService class);
public void Restore() { this.Count1 = (int)PhoneApplicationService.Current.State["count1"]; this.Count2 = (int)PhoneApplicationService.Current.State["count2"]; } public void Save() { PhoneApplicationService.Current.State["count1"] = this.Count1; PhoneApplicationService.Current.State["count2"] = this.Count2; }
and then if I wired those in so that they were called on Activation/Deactivation somewhere;
PhoneApplicationService.Current.Activated += (s, e) => { if (!e.IsApplicationInstancePreserved) { AppData.Current.Restore(); } }; PhoneApplicationService.Current.Deactivated += (s, e) => { AppData.Current.Save(); };
then my code would now survive the scenario of being tombstoned. Note that I’m not recommending that this is how you do things because the templates already wire in this state management into your Application derived class and your PhoneApplicationPage derived class – it’s just a simple example.
Terminated Apps and Relaunching
There’s another aspect to this Phone lifecycle that is different to Windows 8 because my Phone application can be terminated whether its tombstoned or dormant by following a process such as;
- Run my app.
- Increment my count on page 1 to build up some state.
- Navigate to page 2.
- Increment my count on page 2 to build up some state.
- Start screen.
- Run my app.
This is very different from the Windows 8 model. In the Windows 8 model, at step 6 the chances are that I would be working with the same application instance that I ran at step 1 (unless the app had been suspended/terminated in the meantime which is unlikely and, even then, the guidance is to preserve the user’s session with the app and make it look like nothing has changed).
On the Windows Phone 8 model, a new instance of my app is started at step 6 and the previous instance started at step 1 is discarded without any further notification to that instance. That instance is also removed from the navigation history so I can’t go back to it. It’s gone along with any navigation history from inside the app as well.
The guidance around this kind of scenario is that at step ( 6 ) a new app instance is what the user gets and any transient state from steps ( 2 ) to ( 4 ) is lost.
I must admit though that I find this aspect of usability on the phone confuses me as a user and even more so as a heavy user of Windows 8 because it’s so different from what happens on Windows 8 and that’s perhaps where this new “Fast Application Resume” feature comes in.
It’s also worth saying that the app instance from step 1 does not get a Closing event like it would if the user actually closed the app by navigating back from the app’s starting page. That means that an app cannot rely on getting its Closing event as a place to store any non transient data – it needs to do that either long before the app is deactivated or, at least, when deactivation occurs.
Fast Application Resume on Windows Phone 8
The way I understand “Fast Application Resume” is that it can make Windows Phone 8 apps behave more like Windows 8 apps with respect to this scenario where a user launches an app that is already “running” and taking up an entry on their navigation history.
If I switch on Fast Application Resume by editing the XML manifest;
Then if I go back to that previous sequence having turned off the feature in the debugger that forces tombstoning and I;
- Run my app.
- Increment my count on page 1 to build up some state.
- Navigate to page 2.
- Increment my count on page 2 to build up some state.
- Start screen.
- Run my app.
Then I find that at step 6 I’m back on the home page ( page 1 ) and the state that the app has built on the user’s behalf (in my case, 2 integer values) is preserved but I find that navigating back does not take me back to page 2 and then page 1 but, instead, it takes me out of the app and closes it down.
It’s as if the app has reset itself around page 1 rather than added a navigation to page 1 into its navigation stack.
Now, some of this is to do with the template that I used to create the app because it contains some code that does some work on my behalf. Specifically there is code in App.xaml.cs which hooks up to the Frame.Navigated event;
RootFrame.Navigated += CheckForResetNavigation;
and what that handler does is as below;
private void CheckForResetNavigation(object sender, NavigationEventArgs e) { // If the app has received a 'reset' navigation, then we need to check // on the next navigation to see if the page stack should be reset if (e.NavigationMode == NavigationMode.Reset) { RootFrame.Navigated += ClearBackStackAfterReset; } } private void ClearBackStackAfterReset(object sender, NavigationEventArgs e) { // Unregister the event so it doesn't get called again RootFrame.Navigated -= ClearBackStackAfterReset; // Only clear the stack for 'new' (forward) and 'refresh' navigations if (e.NavigationMode != NavigationMode.New && e.NavigationMode != NavigationMode.Refresh) return; // For UI consistency, clear the entire page stack while (RootFrame.RemoveBackEntry() != null) { ; // do nothing } }
If I remove this code then when I “relaunch” my application using the previous listed steps the app still lands on page 1 whereas I’d expect it to land on page 2 because that was the page that I navigated away from back in step 5.
However, I notice that the navigation history is now as I’d like it to be because if I hit back I go back to “page 2” and then “page 1” again and then out of the application.
If I want to have step 6 land me back on the “page 2” that I last visited prior to deactivating the application then I can attempt to use the NavigationMode to try and spot when the application is being “fast resumed” and and cancel the navigation that this will involve (in my case because I re-launch from the primary tile that will be a navigation to the app’s main page (page 1)).
I’m not 100% sure that I’m doing this the right way as it seemed to contrast with the docs slightly but I addedcode to my App.xaml.cs in the InitialisePhoneApplication method;
bool cancelNextNavigation = false; RootFrame.Navigating += (s, e) => { // TODO: Not 100% sure this is right. if (e.NavigationMode == NavigationMode.Reset) { cancelNextNavigation = true; } if ((e.NavigationMode == NavigationMode.New) && e.IsCancelable && cancelNextNavigation) { e.Cancel = true; cancelNextNavigation = false; } };
and that seems to work for me in this specific case and that’s also true if I switch on the debugging flag that forces tombstoning again just to make sure that it’s consistent whether the app that’s being re-launched is dormant or tombstoned.
Debugging on Windows Phone 8
One tip I’ll share when debugging on Windows Phone 8 (I was debugging across to my Lumia 920) is that I find it a bit confusing/magical when I’m trying to reason about which instance of my application I’m actually debugging on the phone and when that is or is not changing.
I find it useful to make the Processes window visible;
and so when I run my app I can see that there’s a live process present and then if I navigate forwards from my app I can see that I have detached from that process and if I navigate back I can see whether I’ve attached to the same process or a different one – certainly helps me out.
Wrapping Up
This isn’t an exhaustive post. There are other things that you need to consider here like what happens on Windows 8 when your app is already running and the user re-launches it via a means like using it as a share target or searching into the app and, similarly, there are similar scenarios on Windows Phone 8 for launching apps via means other than their primary tile.
What was useful for me in writing this up was to think about the commonalities across the models and to refresh my memory of how the Phone works (and how it has changed from the original 7 to 7.5 to 8) in some slightly different way to Windows 8.
I’d also say that I’d really hope for a consistent way of doing this across the Windows and Windows Phone platforms in the future both from the point of view of making it easier to understand and also from the point of view of being able to build portable code that works in both scenarios – but, naturally, the futures isn’t mine to know about when it comes | https://mtaulty.com/2013/03/13/m_14600/ | CC-MAIN-2021-25 | refinedweb | 4,510 | 52.94 |
How to create shared library (.SO) in C++ (G++)?
Get FREE domain for 1st year and build your brand new site
To create a shared library in C++ using G++, compile the C++ library code using GCC/ G++ to object file and convert the object file to shared (.SO) file using gcc/ g++. The code can be used during the compilation of code and running executable by linking the .SO file using G++.
// Convert library code to Object file g++ -c -o library.o library.c // Create shared .SO library gcc -shared -o libfoo.so library.o
To use it with a client code using the library, use the following commands:
# Create the executable by linking shared library gcc -L<path to .SO file> -Wall -o code main.c -l<library name> # Make shared library available at runtime export LD_LIBRARY_PATH=<path to .SO file>:$LD_LIBRARY_PATH # Run executable ./a.out
Shared libraries contain external library code which can be used by multiple client systems. This is memory efficient as only one copy is maintained and is used by multiple programs across the system.
In constrast to archive library, to run client code on a different system, the shared library .SO file needs to be transfered to the new system.
There are four steps:
Compile C++ library code to object file (using g++)
Create shared library file (.SO) using gcc --shared
Compile the C++ code using the header library file using the shared library (using g++)
Set LD_LIBRARY_PATH
Run the executable (using a.out)
Step 1: Compile C code to object file
gcc -c -o library.o library.c
There are two options:
c: to specify the creation of object file
o: to specify the name of the final object file
Step 2: Create shared library file using object file
gcc -shared -o libfoo.so library.o
There are two options:
shared: to specify the creation of shared library
o: to specify the name of the resulting library file
Step 3: Compile C++ code
gcc -Llib/ -Wall -o code main.c -llibrary
- Step 4: Set LD_LIBRARY_PATH
export LD_LIBRARY_PATH=lib/:$LD_LIBRARY_PATH
- Step 5: Run the archive code
./code
This involves four major files:
- library.hpp: Library header file
- library.cpp: Library C++ file
- library.o: Object file of library.cpp
- library.so: shared library of the above library
- code.cpp: C++ code using the library through header file
- a.out: executable
Example
In this example, we will create a C++ library and use it in a C++ code. We will build our library as an shared:
gcc -shared -o liblibrary.so library.o
Using the library
Use the library like this in a code file named "main.cpp":
#include <stdio.h> #include "library.hpp" int main ( void ) { print_value(10); return 0; }
Create the executable:
g++ -Llib/ -Wall -o code main.cpp -llibrary
This will create the executable a.out which will run on any compatible machine with the library files (.SO). To help the executable find the shared library, set the LD_LIBRARY_PATH to the path of the .SO file.
export LD_LIBRARY_PATH=lib/:$LD_LIBRARY_PATH
Run the executable:
./code
With this, we have created a shared file and generated an executable using it. Enjoy. | https://iq.opengenus.org/create-shared-library-in-cpp/ | CC-MAIN-2021-17 | refinedweb | 529 | 75.81 |
desktopcouch Value error cannot covert float Nan to integer
Bug Description
When I run this:
jb@jb-K7S41:~$ /usr/lib/
I get the following output:
Traceback (most recent call last):
File "/usr/lib/
from desktopcouch.
File "/usr/lib/
from desktopcouch.
File "/usr/lib/
from desktopcouch.
File "/usr/lib/
keyring=
File "/usr/lib/
self.
File "/usr/lib/
ctx.
File "/usr/lib/
consumer_key = self.make_
File "/usr/lib/
return ''.join(
File "/usr/lib/
return seq[int(
ValueError: cannot convert float NaN to integer
This last line is the same error I get in red on the Services tab of the UbuntuOne Control Panel.
(fresh install)
ProblemType: Bug
DistroRelease: Ubuntu 11.10
Package: desktopcouch 1.0.8-0ubuntu1
ProcVersionSign
Uname: Linux 3.0.0-13-generic i686
NonfreeKernelMo
ApportVersion: 1.23-0ubuntu4
Architecture: i386
Date: Fri Nov 4 14:49:25 2011
InstallationMedia: Ubuntu 11.10 "Oneiric Ocelot" - Release i386 (20111012)
PackageArchitec
SourcePackage: desktopcouch
UpgradeStatus: No upgrade log present (probably fresh install)
I have made an intensive investigation into this one. I am java developer not a python guy but still I think you would consider my findings interesting:
Looks like it is a problem with the state of python bullt-in time module at the moment of the failure.
The problem stems from fact that the very first invocation of time.time() invocation yields NaN. I have tried to investigate the time method and time module but both dir() and help() would work just fine and produce believable results. Then I noted the traceback of the failure is not a full length so I added explicit traceback.
The traceback.
This is the modified /usr/share/
def save_to_file(self, file_name):
"""Save to file."""
container = os.path.
import traceback, sys, time
""" -------
fd, temp_file_name = tempfile.
f = os.fdopen(fd, "w")
try:
finally:
Is there some linux & python guy willing to help me to investigate it further and come to reasonable conclusion?
As noted in the python issue, the problem in my case turned out to be bad hardware. I suspect the same of the initial reporter of this bug.
Status changed to 'Confirmed' because the bug affects multiple users. | https://bugs.launchpad.net/ubuntu/+source/desktopcouch/+bug/886159 | CC-MAIN-2021-04 | refinedweb | 354 | 58.48 |
Post your Comment in java
Vector in java
Vector in java implements dynamic array. It is similar to array and the
component of vector is accessed by using integer index. Size of vector can grow or shrink
as needed, by adding and removing item from vector
vector - Java Interview Questions
Vector Class in Java What is a Java Vector Class? Hi friend,Read for more information,
Vector in Java
Vector in Java are array lists that are used instead of arrays, as they have... is how to declare a Vector in Java Program:
This Syntax is used to declare an empty....
Vector java Example:
package Toturial;
import java.util.Iterator;
import
vector prblem - Java Beginners
vector prblem a java program that will accept a shopping list of 5...;
static Vector list = null;
static Scanner sc = new Scanner(System.in);
public...: ");
list = new Vector(count);
while(moreNumbers && index <=count){
list.add
Java Vector
Java Vector
In this tutorial, you will learn about vector and its' implementation with
example.
Vector is alike to ArrayList , it is also dynamic... :
Vector is synchronized
Vectors are still using methods which
VECTOR - Java Interview Questions
VECTOR How to write our own vector i want source code? Hi Friend,
Please visit the following link:
Hope that it will be helpful for you.
Thanks
Java Vector
Java Vector
... in this vector.
Understand Vector by Example-
This java example shows us to
find a maximum element of Java Vector using max method of Collections class
What is a vector in Java? Explain with example.
What is a vector in Java? Explain with example. What is a vector in Java? Explain with example.
Hi,
The Vector is a collect of Object... related to Vector in Java program
to use ArrayList in the place of Vector
An application using swings and vector methods
An application using swings and vector methods Hi,
I want an application in Java swings which uses good selection of Vectors methods
Vector Iterator Java Example
interface can traverse all its elements.
Java Vector Iterator with Example
import java.util.Iterator;
import java.util.Vector;
public class vector...
Vector is a collection class.It works similar to the Array.
It has
Java : Vector Example
Java : Vector Example
This segment of tutorial illustrates about the Vector class and its use with
the Iterator interface.
Java Vector :
Vector class... it different to it as
-Vector is synchronized and it contain many methods.
J2ME Vector Example
J2ME Vector Example
This application illustrates how we can use Vector class. In this
example we are using the vector class in the canvas form. The vector class
Java Vector Iterator
Java Vector Iterator is a Collection class. It has similar functionality... all its elements.
Java Vector Iterator Example
import java.util.Iterator...(String[] args) {
Vector v = new Vector();
String tree[] = { "olive", "oak
Convert array to Vector
Convert array to Vector
In this section we will learn how to convert an array
to vector. Actually array is used is used to store similar data types but Vector
is used
Post your Comment | http://www.roseindia.net/discussion/49511-Java-Vector.html | CC-MAIN-2014-41 | refinedweb | 507 | 50.23 |
Hi Expert,
In our database data from tables
Hi,
To create the jobs you can log into SQL Server Management Studio as the 'sa' user and run the following:
use <database name>
EXEC initialize_background_procs
It is worth noting that if you are using SQL Server Express that it does not have the SQL Server Agent that will run these (unless you manually installed the Primavera Background Agent or are on P6r8.2/8.3 where it no longer requires SQL Server Agent)
To bring it up-to-date I recommend running the following however many times you need for it to clean up the data until it is current to within the last month.
use <database_name>
exec data_monitor
You can run multiple so having the below is perfectly fine.
use <database_name>
exec data_monitor
exec data_monitor
exec data_monitor
They usually clean up two weeks worth each so you will probably need to run it ~150 times.
When you get closer to the end it should only take 10-15seconds for each run, I wouldn't be surprised if the first 50 or more take about 10minutes each.
Regards
Alex
Hi Alex
Thanks for your reply.
Once we activate DAMON service , how to set interval so that it will run on regular basis for table BGPLOG_Cleanup, REFRDEL_Cleanup,USESSAUD_Cleanup and USESSION_Cleanup.
Hi,
It should run on a schedule automatically, I believe DAMON runs once a week.
If you do need it to run more often then the below example will change it to run every 30minutes
UPDATE settings SET setting_value='30m' where namespace = 'database.background.Damon' and setting_name='Interval';
Regards
Alex | https://community.oracle.com/message/11172530?tstart=0 | CC-MAIN-2016-40 | refinedweb | 268 | 53.95 |
Automated Tests - Selenium - DEPRECATED
Look here instead: EdenTest
Table of Contents
Selenium provides the ability to test Sahana Eden as users see it - namely through a web browser. This therefore does end-to-end Functional Testing, however it can also be used as Unit Testing
We are building our framework around the new Selenium WebDriver, despite having some legacy code in the older format:
The tests are stored in
eden/modules/tests
Installation of the testing environment in your machine
You should already have installed Eden and Web2py before starting this.
In order to execute the automated Selenium powered test scripts, you must install the Selenium Web Drivers into Python.
Download the latest Selenium package from and follow the installation
instructions there.
Windows users can either:
- Install pip and use the pip command shown in the instructions
- Install 7zip and use it to extract files from the .tar.gz file, then use the setup.py command shown in the instructions.
Note it is likely simpler to install 7zip and use the setup.py command.
Running / Executing Automated test scripts:
In your
models/000_config.py, make these changes:
Uuncomment the lines that disable confirmation popups:
# Should user be prompted to save before navigating away? settings.ui.navigate_away_confirm = False # Should user be prompted to confirm actions? settings.ui.confirm = False
Make sure that following settings are enabled in 000_config.py file.
settings.base.migrate = True settings.base.debug = True
Find the settings.base.template line and change it to:
settings.base.template = "IFRC"
Then you will need to recreate and populate the database with test data, as follows:
Currently, the tests can run on the following templates -
- IFRC
- default
- SandyRelief
- DRMP
- CRMT
After the above re-population of the database, change the template to any of the above templates on which you wish to run the Selenium tests.
Start the Web2py server at 127.0.0.1:8000. (Selenium will start a captive browser, which it will direct to contact the server at 127.0.0.1:8000.)
Run the whole test suite for the current template:
cd web2py python web2py.py -S eden -M -R applications/eden/modules/tests/suite.py
Note : These tests don't run on Firefox 17 and above.So, if you have your default browser set as Firefox 17 or above, you either have to downgrade your Firefox browser and then run the tests or run the tests on Chrome. For running the tests on Chrome, install Chromedriver
Make sure to remove any proxy settings set before as this may interfere with the selenium test. And add
-A --browser=Chrome to the command.
So, it becomes
python web2py.py -S eden -M -R applications/eden/modules/tests/suite.py -A --browser=Chrome
Note : The tests which have errors while running start with an 'E'..
Useful Information about the test suite
Tests for each template
- The tests to be run on a template are explicitly defined in
modules/templates/<template_name>/tests.pyfile.
- The list
current.selenium_testsin the above file contains the class names of all the tests which are to be run for the template.
- If the tests are not specified in
tests.pyfile for some template, the tests specified for 'default' template are run.
Functions
The test suite provides the following functions -
- Usage -
login(account, nexturl)
accountis the user with whom we wish to login. Typically, "admin".
nexturlis the url to be visited after logging in the user.
create
Used to create records in the database.
- Usage -
create(table_name, data)
table_nameis the name of the table in which we wish to insert records
datais a list of tuples, where each tuple represents information about one field in the table.
- Eg - [ ( column_name1, data1, ... ), ( column_name2, data2, ... ), .. so on ]
- The test suite automatically determines the following field types from the HTML class names in the create form -
- option(dropdown)
- autocomplete
- date
- datetime
- normal text input
- Hence, for the above field types, just 2 arguments need to be provided in the tuple - column name and data to be inserted.
- Eg -
(organisation_id, "International Federation of Red Cross and Red Crescent Societies")
- For other field types listed below, an appropriate third argument needs to be provided.
- checkbox
- Embedded form fields -
- Eg - "pr_person" embedded in "hrm_human_resource"
- Specific widgets -
- inv_widget
- supply_widget
- facility_widget
- gis_location
- Selection of the option from the dropdown field -
- Longest Word Trimmed Search is used - Amongst the options in the dropdown field, the option which has most number of word matches with the input string(in the test data) is selected.
- Note : In case of filling in an organisation name, the data should not contain both name and acronym.
- Eg - correct usage :
(organisation_id, "International Federation of Red Cross and Red Crescent Societies")
- Incorrect usage :
(organisation_id, "International Federation of Red Cross and Red Crescent Societies (IFRC)")
Search
- Usage -
- form_type: This can either be search.simple_form or search.advanced_form
- results_expected: Are results expected? : True/False
- fields : a tuple of dictionaries, each dictionary specifying a field which is to be checked/unchecked in the form.
- row_count : Expected row count
{"tablename":tablename, "key":key, "filters":[(field,value),...]}can be passed to get the resource and eventually the DB row count.
- Usage -
report(fields, report_of, grouped_by, report_fact, *args, **kwargs)
- fields : a tuple of dictionaries, each dictionary specifying a field which is to be checked/unchecked in the form.
- report_of : The field whose report is to be formed. The option in 'Report of'
- grouped_by : The field for which the report is to be grouped by. The option in 'Grouped by'
- report_fact : The option in 'Value'
Writing / Creating your own test scripts:
We aim to make it as easy as possible to write additional tests, which can easily be plugged into the Sahana Eden Testing System Framework.
New tests should be stored in a subfolder per module, adding the foldername to
eden/modules/tests/__init__.py & creating an
__init__.py in the subfolder.
A walk-through example: Creating a Record
The canonical example is:
eden/modules/tests/staff/create_staff.py
We want to make an automated test script to test the Create Staff module. Create Staff falls under Staff feature, therefore we must add this test script module file in the subfolder /staff (eden/modules/tests/staff/)
Steps of automated the test for Create Staff:
1) Let's call this test file: create_staff.py and save it in the subfolder /staff.
2) We must now make sure we add the file name in the init.py Open file init.py in the subfolder /staff and add:
from create_staff import *
3) create_staff.py
from tests.web2unittest import SeleniumUnitTest class CreateStaff(SeleniumUnitTest): def test_hrm001_create_staff(self): """ @case: HRM001 @description: Create a Staff @TestDoc: @Test Wiki: """ print "\n" self.login(account="admin", nexturl="hrm/staff/create") self.create("hrm_human_resource", [( "organisation_id", "International Federation of Red Cross and Red Crescent Societies"), ( "site_id", "AP Zone (Office)"), ( "first_name", "Robert", "pr_person"), ( "middle_name", "James", "pr_person"), ( "last_name", "Lemon", "pr_person"), ( "email", "[email protected]", "pr_person"), ( "job_title_id", "Warehouse Manager"), ] )
- The create function takes 2 arguments -
create(table_name, data)
table_name- The name of the database table in which we want to insert the record.
data- The list which contains the record to be inserted.
- Each tuple in the above list contains information about each individual field.
- Eg -
( "organisation_id","International Federation of Red Cross and Red Crescent Societies"). Here,
organisation_idis the label of the field in which we want to insert data
International Federation of Red Cross and Red Crescent Societies
4) To run this automated test script.
Determine the template(s) on which you wish to run this test on. The tests for each template are defined in
modules/templates/<template_name>/tests.py.
Open tests.py for these templates and append the class name of the test to the list so it can be executed. (NOTE: We add the class name, not the function name).
current.selenium_tests.append("CreateStaff"):
See Also
- - Style suggestion | https://eden.sahanafoundation.org/wiki/DeveloperGuidelines/Testing/Selenium | CC-MAIN-2021-43 | refinedweb | 1,299 | 55.44 |
Update on 2nd January 2020:
The latest version of React Native for Windows has changed the namespace in which the IReactPackageProvider interface is defined. The previous one was:
Microsoft.ReactNative.Bridge;
while the new one is:
Microsoft.ReactNative;
The snippets in the post have been updated to reflect this change.
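In practice, this means that package providers now need to reference the new namespace. Here is a minimal sketch of what a C# provider built against the updated package could look like; the NativeModuleSample namespace is a placeholder for this example, not something the tooling generates for you:

using Microsoft.ReactNative;
using Microsoft.ReactNative.Managed;

// NativeModuleSample is a placeholder namespace used only for this sketch
namespace NativeModuleSample
{
    public sealed class ReactPackageProvider : IReactPackageProvider
    {
        public void CreatePackage(IReactPackageBuilder packageBuilder)
        {
            // Registers every class in this assembly decorated with the ReactModule attribute
            packageBuilder.AddAttributedModules();
        }
    }
}

If your module was built against an older release, updating it is just a matter of fixing the using statement.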
React Native offers a wide range of modules to extend the core functionalities of the platform, developed either by the community or by the React team itself. These modules range from UI controls to helpers and services. Some of them are "generic", meaning that they perform tasks which don't require access to any specific platform feature. For example, Redux is a very popular state management library used in React applications but since, by default, everything is stored in memory, it basically runs everywhere. In fact, the same library can be used either with web applications (using React) or with native applications (using React Native).
However, there are many scenarios in the native development space where the only way to achieve a task is by interacting with the native APIs offered by the platform. Saving a file in the local storage, getting information about the device or accessing the GPS are just a few examples. In this case, the module can't be generic, because each platform has its own way to access native features. This is why React Native supports the concept of native modules: a module that is leveraged in the main application through JavaScript but that, under the hood, invokes a native component specific to the platform where the application is running. Being native, each component isn't built in JavaScript, but in the native language used by each platform. For example, take a look at the popular module AsyncStorage, which can be used to read and write files in the local storage. As you can see, we have a folder called ios, which contains an Xcode project; the android folder, instead, contains some Java files. Both projects expose the same features, but implemented using the specific language and APIs of iOS and Android.
If you have some experience with Xamarin and Xamarin Forms, this approach will be familiar. It's the same exact concept, in fact, behind Xamarin Plugins: the entry point is a C# wrapper which, under the hood, calls Swift, Java or UWP code based on the platform where the application is running.
In this blog post we're going to learn how we can build such a native module with Windows support, so that we can access to native Windows APIs when our React Native application is running on Windows.
As first step, let's create a React Native project with Windows support. I won't explain all the steps, since they are very well documented on the official GitHub repository. Additionally, I already introduced React Native and the recently added Windows support in another article.
Once you have completed all the steps, open the windows folder inside the project and double click on the Visual Studio solution. As we have learned in the previous article, the solution contains a single application which is the host for our React Native content. In a regular scenario you won't need to touch the code here, since the whole development will happen in the JavaScript world. However, in our case we have to make an exception, since we need to interact with native Windows APIs to build our module.
Let's start by adding a new project to the solution, which type is Windows Runtime Component. The component will give us access to all the Universal Windows Platform APIs. Additionally, being a WinRT component, it can be consumed regardless of the language used to build the main application. As a consequence, you're free to create a C# or C++ component, based on the language you're most familiar with. We'll be able to leverage it from the main React Native host (which is built in C++) regardless.
In my case, C# is my favorite language, so I've chosen the Windows Runtime Component (C#) template. If you want to use C++, instead, you need to choose the Windows Runtime Component (C++/WinRT) template. If you don't see it, you can install it using this extension. Pay attention to not use the C++/CX template, because this approach to build C++ components for the Universal Windows Platform has been deprecated.
Once the project has been created, you need to reference a couple of projects which are already included as part of the React Native for Windows solution. Right click on the project you have just created and choose Add → Reference.
These projects are very important because they define many attributes which will allow you to export the native code to JavaScript. This way, the developers who are going to use your module in their React Native projects don't need to know anything about C++, C# or Windows 10 APIs. They just need to use standard and plain JavaScript code.
Before starting to write some code, expand the References section of the project, select the Microsoft.ReactNative library, right click on it and choose Properties. Set Copy Local to false, to make sure that there are no conflicts with the main application.
Now we can start building our module. We're going to create a simple one to expose the information about the model of the device where the application is running. Let's start by renaming the default class included in the project with a more meaningful name, like SampleComponent. This should trigger the renaming also of the class declared inside the code.
Now we just need to define one or more methods, that will be exposed to the React Native application. In our case, we're going to define a single method that will return the current device name, using the
EasClientDeviceInformation class included in the Universal Windows Platform:
using Windows.Security.ExchangeActiveSyncProvisioning; namespace SampleReactModule { class SampleComponent { public string GetDeviceModel() { EasClientDeviceInformation info = new EasClientDeviceInformation(); return info.SystemProductName; } } }
The next step is to leverage the attributes provided by the Microsoft.ReactNative library to export the class and the method we have created to React Native. In our scenario we're going to use two of the many available ones:
[ReactModule], which we're using to decorate the whole class.
[ReactMethod], which we're using to decorate every single method we want to expose.
This is the final look of our class:
using Microsoft.ReactNative.Managed; using Windows.Security.ExchangeActiveSyncProvisioning; namespace SampleReactModule { [ReactModule] class SampleComponent { [ReactMethod("getDeviceModel")] public string GetDeviceModel() { EasClientDeviceInformation info = new EasClientDeviceInformation(); return info.SystemProductName; } } }
Notice that we have passed the
getDeviceModel value as parameter of the
ReactMethod attribute. This is how we define the name of the method that will be available through JavaScript to the React Native project.
There are many other attributes, which can be used to expose additional stuff to the React Native application, like properties or event handlers. They are all documented here.
The next thing we need is a ReactPackageProvider, which is a special object that will allow the React Native project to load all the modules exposed by our library. By default, in fact, a React Native application loads only the built-in modules. Right click on the project and choose Add → Class. Name it ReactPackageProvider. The class is very simple:
using Microsoft.ReactNative; using Microsoft.ReactNative.Managed; namespace SampleReactModule { public sealed class ReactPackageProvider: IReactPackageProvider { public void CreatePackage(IReactPackageBuilder packageBuilder) { packageBuilder.AddAttributedModules(); } } }
It just needs to implement the
IReactPackageProvider interface, which will require you to implement the
CreatePackage() method. Inside it we leverage the
packageBuilder object and the
AddAttributedModules() method to load all the modules we have decorated with the
[ReactModule] attribute inside our project.
As already mentioned, by default React Native for Windows registers only the modules which are included in the main project. If we would have created the SampleComponent class inside the main React Native project, we wouldn't have needed any additional step. However, in our case we are using a separate Windows Runtime Component, which allows us to leverage a different language (C#), so we need an extra step to register the library in the main application.
First, right click on the React Native for Windows project and choose Add → Reference. Select from the list the name of the Windows Runtime Component you have just created (in my sample, it's SampleReactModule) and press Ok.
Expand the App.xaml node in the main React Native project and double click on the App.cpp file. Inside the
App constructor you will find the following entry:
PackageProviders().Append(make<ReactPackageProvider>());
This is the code which loads all the modules that are included in the main project. Let's add below another registration for our new library:
PackageProviders().Append(winrt::SampleReactModule::ReactPackageProvider());
SampleReactModule is the name of the Windows Runtime Component we have previously created, thus it's the namespace that contains our
ReactPackageProvider implementation. To make this code compiling we need also to include the header file of our module at the top:
#include "winrt/SampleReactModule.h"
This is how the whole App.cpp file should look like:
#include "pch.h" #include "App.h" #include "ReactPackageProvider.h" #include "winrt/SampleReactModule.h" using namespace winrt::NativeModuleSample; using namespace winrt::NativeModuleSample::implementation; /// <summary> /// Initializes the singleton application object. This is the first line of /// authored code executed, and as such is the logical equivalent of main() or /// WinMain(). /// </summary> App::App() noexcept { MainComponentName(L"NativeModuleSample"); #if BUNDLE JavaScriptBundleFile(L"index.windows"); InstanceSettings().UseWebDebugger(false); InstanceSettings().UseLiveReload(false); #else JavaScriptMainModuleName(L"index"); InstanceSettings().UseWebDebugger(true); InstanceSettings().UseLiveReload(true); #endif #if _DEBUG InstanceSettings().EnableDeveloperMenu(true); #else InstanceSettings().EnableDeveloperMenu(false); #endif PackageProviders().Append(make<ReactPackageProvider>()); // Includes all modules in this project PackageProviders().Append(winrt::SampleReactModule::ReactPackageProvider()); InitializeComponent(); // This works around a cpp/winrt bug with composable/aggregable types tracked // by 22116519 AddRef(); m_inner.as<::IUnknown>()->Release(); }
That's it! Now we can move to the React Native project to start using the module we have just built from JavaScript. Open the folder which contains your React Native project with your favorite web editor. For me, it's Visual Studio Code. For simplicity, I've implemented the component which uses the native module directly in the App.js file of my React Native project, so that it will be the startup page.
The first step is importing the
NativeModule object exposed by React Native in our component:
import { NativeModules } from 'react-native';
Our module will be exposed through this object, by leveraging the name of the class we have decorated with the
[ReactModule] attribute. In our case, it's
SampleComponent, so we can use the following code to access to the
GetDeviceModule() method:
NativeModules.SampleComponent.getDeviceModel();
Notice that we can reference the method with the lowercase name,
getDeviceModel(), thanks to the parameter we have passed to the
[ReactMethod] attribute. However, by default the methods exposed by native modules are implemented in an asynchronous way using callbacks. As such, if we want to consume the
getDeviceModel() method, we need to use the following approach:
getModel = () => { var current = this; NativeModules.SampleComponent.getDeviceModel(function(result) { current.setState({model: result}); }) }
The method accepts a function, that is invoked once the asynchronous operation is completed. Inside this function we receive, as parameter, the result returned by our native method (in our case, the model of the device). In our sample, we store it inside the component's state, so that we can display it in the user interface using the JSX syntax:
<Text>Model: {this.state.model}</Text>
Before calling the
setState() method, however, we need to save a reference to the main context. Inside the callback, in fact, we have moved to a different context, so we don't have access to the helpers exposed by React Native.
Below you can find the full definition of our component:
import React, {Fragment} from 'react'; import { StyleSheet, View, Text, StatusBar, Button } from 'react-native'; import { NativeModules } from 'react-native'; class App extends React.Component { constructor(props) { super(props); this.state = { model: '', } } getModel = () => { var current = this; NativeModules.SampleComponent.getDeviceModel(function(result) { current.setState({model: result}); }) } render() { return ( <View style={styles.sectionContainer}> <StatusBar barStyle="dark-content" /> <View> <Button title="Get model" onPress={this.getModel} /> <Text>Model: {this.state.model}</Text> </View> </View> ); } }; //styles definition export default App;
If you're familiar with React Native, the code should be easy to understand:
getModel(), which takes care of interacting with our native module. The model of the device is stored inside the component's state, using the
setState()function provided by React Native.
render()methods defines the UI component, using the JSX syntax. The UI is very simple:
<Button>, which invokes the
getModel()function by leveraging the
onPressevent.
<Text>, which displays the value of the
modelproperty stored inside the state.
Now it's time to test the code! First, right click on the React Native project in Visual Studio and choose Deploy. Before doing it, however, use the Configuration Manager dropdown to make sure you're targeting the right CPU architecture. By default, in fact, the solution will target ARM, so you will have to switch to x86 or x64. If it's the first time you build it, it will take a while since C++ compilation isn't exactly fast =) Once it's done, open a command prompt on the folder which contains your React Native project and type yarn start. This will launch the Metro packager, which will serve all the React Native components to your application. Once the dependency graph has been loaded, you can open the Start menu and launch the Windows application you have just deployed from Visual Studio. If you have done everything in the correct way, you will see the very simple UI we have built in our component. By pressing the button you will invoke the native module and you will see the model of your device being displayed:
If you compare this development experience with the one you have when you add a 3rd party module to your project using yarn or npm, you realize that it's quite different. In such a case, in fact, you don't have to deal with the
NativeModules object; or you don't need to use callbacks to call the various methods. Let's use again the example of the popular AsyncStorage module. If you want to store some data in the storage, you just import a reference to the
AsyncStorage object and then you call an asynchronous method called
setItem:
import AsyncStorage from '@react-native-community/async-storage'; storeData = async () => { await AsyncStorage.setItem('@storage_Key', 'stored value') }
Can we achieve the same goal with the native module we have just built? The answer is yes! We just need to build a JavaScript wrapper, that will make consuming our native module more straightforward.
In Visual Studio Code add a new file in your React Native project and call it SampleComponent.js. First, let's import the same
NativeModules object we have previously imported in the main component:
import { NativeModules } from 'react-native';
Now, using JavaScript Promises, we can build a wrapper to the
getResult() method. Thanks to Promises, we can enable an easier approach to consume our asynchronous API, thanks to the
async and
await keywords. If you have some C# background, this approach is similar to use the
TaskCompletionSource class to build asynchronous operations that returns a
Task object-
This is how we can wrap the method:
import { NativeModules } from 'react-native'; export const getDeviceModel = () => { return new Promise((resolve, reject) => { NativeModules.SampleComponent.getDeviceModel(function(result, error) { if (error) { reject(error); } else { resolve(result); } }) }) }
The method returns a new
Promise, which requires us to fulfill two objects:
resolve, which is invoked when the operation has completed successfully.
reject, which is invoked when the operation has failed.
Inside the Promise we invoke the
getDeviceModel() method, in the same way we were doing before in the component. The only difference is that, in this case, once the callback is completed we pass the
result to the
resolve() method in case of success; otherwise, we pass the
error to the
reject() method.
Now that we have built our wrapper, we can simplify the component we have previously built. First, remove the following line:
import { NativeModules } from 'react-native';
Then replace it with the following one:
import * as SampleComponent from './SampleComponent'
This import will expose the functions we have defined in our wrapper through the
SampleComponent object. Now you can change the
getModel() function simply with this code:
getModel = async () => { var model = await SampleComponent.getDeviceModel(); this.setState( { model: model}); }
As you can see, thanks to Promises and the
async and
await keywords, the code is simpler to write and read. We just need to mark the function as
async, then call the
SampleComponent.getDeviceModel() function with the
await prefix. Since the asynchronous operation is managed for us, we can treat it like if it's synchronous and just store in a variable the result, which will contain the device model. Since we aren't using callbacks, we can also set the state directly by calling
this.setState(). If you are a C# developer, everything should be very familiar, since it's the same async and await approach supported by C#.
That's it! Now launch the application again. In this case we didn't touch the native project, so if the Metro packager was still running, we should already be seeing the updated version. Of course there won't be any difference, since we didn't change the UI or the behavior, but the code is easier to read and maintain now.
So far, we have added the native module implementation directly in the project. What if we want to create a true self-contained module, that we can install using yarn or npm like any other 3rd party module? Well, unfortunately the story isn't complete for Windows yet. By following the documentation available on GitHub it's easy to create the skeleton of the module. We just need to create a new React Native module project, using the create-react-native-module CLI tool, add the React Native for Windows package and then:
If you already have a module with iOS and Android support, you'll just need instead to follow step 1 and include the windows folder with the Windows Runtime Component in your existing project.
However, regardless of your scenario, the current React Native for Windows implementation lacks an important feature called linking. You'll remember that, in the previous section, we had to manually edit the App.cpp file of the React Native project to load, through the
ReactPackageProvider class, our Windows Runtime Component. On Android and iOS this operation isn't needed thanks to linking. You just need to run the
react-native link command: it will take care of everything for you, without needing to manually edit the Android and iOS projects. React Native 0.60 has brought linking to the next level, by introducing automatic linking. You don't even have to run the link command anymore; just add the package to your project and React Native will take of everything else for you. The React Native implementation for iOS and Android will automatically load all the 3rd party modules that have been added to the project. This feature isn't available on Windows yet and, as such, if you create an independent module and you publish it on NPM, you will still need to manually open the React Native for Windows solution in Visual Studio and register the module in the App.cpp file.
Another important information to highlight is the language choice for your module. C++ has a performance advantage over C++, since it doesn't need to pull in the whole CLR in the React Native project. As such, especially if you're planning to share your module with the community, you should seriously consider to use C++. If, instead, you're aiming to use your module only in your project and you have verified that the performance trade off doesn't heavily affect your application, it's fine to keep using C#.
Compared to the last time I wrote about React Native for Windows, there's a big news! Now you can create a deployable version of the application: an AppX / MSIX package which is self-contained and that doesn't need the Metro packager to be up & running. This way, you can publish the application on the Microsoft Store or using one of the many supported deployment techniques (sideloading, SSCM, Intune, etc.). To achieve this goal just start the publishing process, like you would do with any Universal Windows Platform application. Right click on the React Native project in Visual Studio, choose Publish → Create app packages and follow the wizard. Just make sure, in the package configuration step, to choose Release as configuration mode. This way, all the React Native resources will be bundled together, removing the requirement of having the Metro packager up & running.
In this post we have seen how you can build native modules for React Native for Windows. This will allow us to build applications using the React stack but, at the same time, leverage native features provided by Windows, like toast notifications, access to Bluetooth APIs, file system, etc. We have also learned how we can distribute an application built with React Native, either via the Microsoft Store or one of the many techniques supported by MSIX, like sideloading or Microsoft Intune.
You can find the sample I've built for this post on GitHub.
Happy coding! | https://gorovian.000webhostapp.com/?exam=t5/windows-dev-appconsult/building-a-react-native-module-for-windows/ba-p/1067893 | CC-MAIN-2021-49 | refinedweb | 3,599 | 53.41 |
Added some links regarding Cider, the XAML Visual Designer for VS Orcas (see below under Tools/Microsoft).
In the Nov. 29th newsletter, Chris Maunder mentioned that any XAML and Avalon articles would be welcomed. Based on a CP survey from several months ago, it would seem that at least half of CPians have never heard of XAML or don't know what it is. I thought it might be useful to put together a XAML Resources article, which I'll try to keep up to date as XAML evolves. Like any technology that suddenly discovers itself to be in the sights of Microsoft, there often results considerable emotional reaction. XAML is no exception. There are three primary camps that you will encounter as you follow the various resources below:
The reader should be aware that I fall in the last camp! Nonetheless, I'll try to present the XAML resources from each of these perspectives as unbiased as possible. If anyone feels I've done them a disservice, let me know and I'll endeavor to correct it. Please keep in mind that this article is supposed to present resources. The descriptions of the different vendors is intentionally brief and is not intended to be an advertisement or endorsement for any particular vendor.
The only agreement that appears to exist is how XAML is implemented as a technology. When you try to get a definitive answer for "What Is XAML" from a descriptive point of view, you get a lot of different answers.
Everything you'll see here relies in the idea that XAML is an XML-compliant format to describe an object graph. In most cases:
You will see several terms used frequently (and already used here).
The simplest definition of declarative programming is that it is the set of instructions defining the application's data and data relationships. Conversely, imperative programming is the set of instructions telling the application how to manipulate that data. A more complex definition can be found on Wikipedia.
An object graph is essentially a tree describing objects, their children, and their initial properties. Consider an application that has a form that has a collection of controls. Each control (like a group box) may have it's own set of controls. The application also has menus, toolbars, statusbars, etc. The application can also create object graphs for database connections, data sets, containers, state machines, messaging and event managers, etc, and of course application specific classes. All of these classes have properties that can be initialized as well. The entire picture of the application's class instance tree and their properties is the object graph.
Serialization is the process of encoding and object graph instance into a format that allows for persistence, is re-usable, is often transportable, and captures all the relevant meaning of the original object graph. XAML is a serialization format with the potential for making cross-platform UI definition possible regardless of platform and language. Consider the Visual Studio designer. It "serializes" .NET forms into either C# or VB. Not only does this introduce a language dependency but it also introduces a reliance on the .NET classes.
Deserialization is the process of reconstructing the encoded data into the original object graph instance. Although XAML tag and property names are based on their underlying classes, it's easier to write wrappers that implement these names rather than trying to port .NET to another platform/language. Furthermore, XSLT can be used to translate between tag/property names of one system to another, as demonstrated here.
XAML is used for both web-based and client-based applications. Within those two segments, there are three camps regarding the usage of XAML:
Almost everything you will see here focuses on using XAML to describe the user interface.
This section is bound to get lots of criticism, because I'm using the term "XAML" in the broad context of being an XML format for declarative programming.
A word about the word "Avalon". Avalon is more than just 2D/3D vector graphics. In the context of this article, when I say "Avalon namespace" or "Avalon clone" or "partial clone", I'm referring specifically to the VG portion of Avalon that certain vendors have attempted to implement, at least in terms of class and property name compatibility. Nor is Avalon XAML. XAML is the XML language that builds object graphs out of Avalon's classes.
Laszlo is an open source platform for developing web applications using their "rapid XML development" approach. It does not implement the Avalon namespace. While not supporting VG directly, Laszlo provides integration of media and interactivity into web pages using XML in a declarative, text-based development environment. Note that nowhere does Laszlo claim or use the term "XAML". I'm including them here because of their XML/declarative foundations.
Microsoft's XAML has the weight of Avalon's 2D and 3D VG engine and non-VG support for classic .NET controls. Microsoft provides a very rich XAML experience with visual trees, complex bindings, etc. Microsoft's focus so far has been on applying XAML to user interface programming. Microsoft's XAML text editor supports intellisense (VS2005, downloadable here).
Mobiform has implemented a partial clone of the current version of the Avalon namespace. Mobiform supports 2D VG but not non-VG controls. Mobiform has mentioned that it is beginning work on supporting 3D VG. Mobiform's emphasis regarding XAML seems to be primarily on web-based applications but appears to support .NET client applications as well.
MyXaml is an open source parser capable of working with any "XAML-friendly" .NET class, including Avalon, but does not implement a clone of Avalon. MyXaml support non-VG controls. 2D VG is supported using VG.net (a non-Avalon VG engine). MyXaml's emphasis is on XAML as a declarative programming language not restricted to UI's. I could of course write more, but I won't.
Xamlon has implemented a partial clone of the current version of the Avalon namespace. Xamlon supports 2D VG and non-VG controls. Xamlon emphasizes 2D VG. 3D support is very basic apparently.
The XUL community consists primarily of open-source Java-based implementations. There are many players here, and including the XUL community under the aegis of XAML resources is sure to raise the hackles of some purists. However, they fit the generic definition I have chosen to support--XML for declarative programming. Obviously these XUL motors do not implement an Avalon clone nor do they implement VG. Because most of these motors are implemented in Java, they are well suited for web development. Some require Swing, others, like Thinlet, do not.
Laszlo provides a Laszlo Presentation Server that compiles the XML and JavaScript into a bytecode format supported by a Flash Player. It appears that the actual XML is edited using a text editor.
Microsoft does not provide any designers but has released a Community Technology Preview (CTP) edition of Avalon usable on WinXP SP2 with the VS2005 beta 1 release.
Updated 1/27/05:
Microsoft is developing "Cider", the code name for the XAML Visual Designer in VS Orcas. VS Orcas "provides developers with support for building WinFX applications using the final released version of Visual Studio 2005." See here. Richard Bailey, a developer on Cider, also suggests these links:
Richard's blogFirst Cider CTPCider Forum
From their website (this seemed an excellent description): "Mobiform has an visual XML Editor (Aurora) that can produce XAML documents..., a Browser (Corona) that can read and render Avalon XAML documents, and .NET user controls for using XAML and Avalon in your .NET Programs." There are also alpha builds of their 3D builder and viewer.
MyXaml has a Visual Studio plug-in and a stand-alone form designer. MyXaml also provides a Lint utility to edit, validate and view the UI. For 2D VG, MyXaml relies on the full featured VG designer from VG.net.
Xamlon has a Visual Studio plug-in and a stand-alone Xaml-Pad editor but no VG designer.
It appears that designer tools are very sparse in the XUL community. However, recently, a GUI editor called ThinG has been written for Thinlet.
The best information on Microsoft's XAML can be found from various bloggers. If I've left you out, my apologies and let me know who you are.
Hopefully this gave you an introduction to the XAML technologies available to you today. Unfortunately, there is no definitive answer to what you should use today. It depends on what your goals are and what your target platform is. It appears though that if you want to use Microsoft's technology in a stable release, you'll have to wait a while. None-the-less, it seems that XAML, and more generally the concept of declarative programming, is going to become a factor that you should at least be aware of. I will be updating this article as feedback and relevant information becomes available.
Ron Dafoe. wrote:The standalone visual designer for MyXaml can be found where?
Ron Dafoe. wrote:Your blog at the MyXaml site has been out of order for some time now.
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/8918/XAML-Resources?msg=1353574 | CC-MAIN-2014-35 | refinedweb | 1,556 | 56.35 |
Static variables are declared with
static keyword. Static variables have an essential property of preserving its value across various function calls. Unlike local variables, static variables are not allocated on C stack. Rather, they get their memory in data segment of the program.
Syntax to declare static variable
static data_type variable_name;
data_type is a valid C data type and variable_name is a valid C identifier.
Properties of a static variable
Static variables are widely known among C programmer due to its special properties which are -
- Static variables are allocated within data segment of the program instead of C stack.
- Memory for static variable is allocated once and remains throughout the program. A static variable persists, even after end of function or block.
For example, consider the below function. It contains local and
staticvariable. Let us demonstrate whether the value of
staticvariable persists throughout the program or not.
#include <stdio.h> /* Function declaration */ void display(); int main() { display(); display(); display(); return 0; } /* Function definition */ void display() { int n1 = 10; static int n2 = 10; printf("Local n1 = %d, Static n2 = %d\n", n1, n2); n1++; // Increment local variable n2++; // Increment static variable }
Output -
Local n1 = 10, Static n2 = 10 Local n1 = 10, Static n2 = 11 Local n1 = 10, Static n2 = 12
It is clear from the output that static variable persists throughout the program and preserves its value across various function calls.
- A static variable is initialized with a default value i.e. either 0 or
NULL. For example -
/** * C program to demonstrate static variable */ #include <stdio.h> /* Function declaration */ void display(); int main() { display(); display(); return 0; } /* Function definition */ void display() { int n1; /* Garbage value */ static int n2; /* Default zero value */ printf("Local n1 = %d, Static n2 = %d\n", n1, n2); }
Output -
Local n1 = 4200734, Static n2 = 0 Local n1 = 4200734, Static n2 = 0
- You can access a global variable outside the program. However, you cannot access a global static variable outside the program.
When to use a static variable?
Static variable provides great flexibility to declare variable that persists its value between function calls. Use a
static variable, if you need a variable whose value must persist between function calls.
<pre><code> ----Your Source Code---- </code></pre> | https://codeforwin.org/2017/09/static-variables-c.html | CC-MAIN-2019-04 | refinedweb | 367 | 54.22 |
Hi,
since rails 2.2 doesn’t work with gettext anymore I decided to use a
gettext dummy method until the gettext team relases a working version
for rails 2.2. My application doesn’t need any i18n support right now
and this way i figured it would be very easy to integrate gettext
later. Just setup the plugin and go.
So I need a method called _ which is available in alle models, views,
controllers, helpers, doing of nothing else then:
def _(str)
str
end
I’m still a newbie so I tried a lot but didn’t come to a good
solution. I just don’t know where to define this method to make it
globally available.
I would really appreciate your help!
Olaf | https://www.ruby-forum.com/t/super-global-dummy-method-for-gettext-rails-2-2/153479 | CC-MAIN-2022-33 | refinedweb | 127 | 71.14 |
I realize I am not a committer or anything, but hopefully that doesn't
prevent me from replying...
We have built a number of large cocoon based applications over the
past 2.5years. Everything you said in here is great, and I would love
it if cocoon
had it. I won't even comment on what you have written as I agreed with all
of it.
I would like to add a few more comments.
2. Memory usage. Now, maybe Stax helps with this, but I don't think so.
With our latest app, which was pretty large, we had pipelines with many many
steps in them. Often, these steps would be simple things that just change 1
or 2 elements in the XSL stream. When you are building a large cocoon app,
you often needs lots of data flowing through the system. Each step in the
pipeline generates sax events and memory garbage, even if you don't need
it. To get around this, we wound up using XMLBeans and having Java objects
"flowing" through the system that can be turned into XML on the fly. When
you have a large cocoon app with lots of users, there is a lot more memory
usage than other frameworks.
3. More integration with XMLBeans (or JAXB). Going back and forth from
JavaFlow to XML has benefits. (I agree with your earlier point that we
shouldn't have to start with pipelines). XMLBeans (or JAXB) makes this a
snap. It has the speed of SAX and the benefit that you can go back and
forth between XML and Java easily. We made a lot of use of this in our
recent app. It worked great. Downside is that you have to write an XML
Schema for all XML interactions. (Which is also a good thing as it is then
documented) I realize this wouldn't help the learning curve at all.
4. Java flow (or flowscript) as the ONLY pipeline language. Now, this may
be too touchy, but if all requests entered into Javaflow or Flowscript, and
then this was used as the "orchestration" language, I think that would be
great. Make it so you can easily aggregate pipelines, do content based
routing, etc. Pipelines then can be simpler and all logic is done in
flowscript or Javaflow. You should be able to setup all your components in
flow. Having this in Java or Javascript would make conditional processing
much easier and removing the pipeline (as sacred as it is) would help us
only have 1 language.
5. Continuations. More needs to be understood about these. We found in our
app that we spent a lot of time debugging how and when these are created,
why they are created, and getting rid of them. It seems like for a simple
Javaflow call, should we really create a continuation? If you don't ever do
a sendPageAndWait, and simply use it as processing, why bother with the
continuation? It is just something else that creates memory usage and
something we have to worry about destroying.
Cocoon is awesome and with AJAX and other XML technologies becoming more
popular, it is slated to be THE framework to choose. I think you are very
correct in that it won't be the way it is currently written.
Thanks for listening.
Irv
On 12/2/05, Sylvain Wallez <[email protected]>.
>
> -- oOo --
>
> First of all, let's consider the place where Cocoon fits in the large
> family of webapp development frameworks.
>
> On one end, we have pure-J2EE things like Struts or JSF. They lead to
> writing a lot of Java classes, lots of XML config files, and try/fail
> cycles require compile/deploy/restart, even if some tools ease the task.
> Despite their heavyweight development process, they are widely accepted
> in large companies, both because the J2EE stamp pleases managers and
> because of the vast number of entreprise-grade libraries available.
>
> On the other end, we have scripted frameworks like Ruby on Rails,
> Django[1], etc. The try/fail cycle is basically save/reload, and writing
> simple stuff is very fast because of the use of convention over
> configuration and runtime generation of data models (e.g. through
> database introspection). Now recent comments[2] show that going beyond
> the basic stuff may not be that easy.
>
> Cocoon actually sits in the middle of this spectrum:
> - it's a Java servlet and can therefore use almost anything that's
> written in Java. The contents of our blocks show this well! As such it
> is somehow J2EE compliant and can be deployed in large companies, even
> if we have to convince managers that it's better than Struts.
> - it's a scripted framework: sitemap, XSL, templates, flowscript. Save
> and reload! And this goes further in 2.2 with auto reloading and
> compiling classloaders.
>
> So theoretically, Cocoon could be the "RoR of J2EE". Now in its current
> incarnation it won't. The learning curve is too steep, some
> architectural choices imposed by Cocoon actually go in the way of
> developers with the new emerging development practices, and 5 years of
> legacy led to a rather confusing picture, with tons of legacy components
> and many inconsistencies.
>
> Now Cocoon has also introduced a number of super-cool features and
> innovations that definitely make sense, but in a more lightweight and
> consistent environment.
>
> So let's draw a picture of what could be Cocoon 3.0. I use a major
> version as the ideas outlined below are more than likely to require a
> code base, even if many code snippets can be reused from the current code.
>
> -- oOo --
>
> Giving its real role to the controller
> --------------------------------------
>
> When we introduced flowscript, we decided that <map:pipeline> should be
> the central switchboard through which *all* request go, and introduced
> <map:call function>. This leads most webapps written in Cocoon to have
> their sitemap starting with something like:
>
> <map:match
> <map:call
> </map:match>
>
> Why in hell do we have to go through the sitemap to call a function and
> then go back to the sitemap through cocoon.sendPage()? This not only
> clutters up the sitemap with cut'n pasted snippets, but also makes the
> flowscript a second-zone citizen in the application.
>
> So I think we should change a bit the semantics of the sitemap and the
> priorities of its various components:
>
> - the sitemap is the configuration of the overall request processing of
> the application (or a subpart of it in the case of mounts). It defines
> the configuration of that request processing, which is composed of
> components (<map:components>), controllers (<map:flow>) and views
> (<map:pipeline>). And I even think these 3 parts should really be split
> in different files, i.e. moving components to a xconf and pipeline
> definitions to e.g. "pipelines.xml".
>
> - the processing flow in a sitemap goes *first* in the controller if
> there is one, and *second* in the view. Going to a <map:pipeline> to
> call back a function should really be an exceptional case, or even
> forbidden.
>
> - since it's no more called by the sitemap, the controller defines a
> single entry point, such as "process()". A builtin default
> implementation provides an equivalent to <map:call
>, thus automatically publishing
> any public_xxx flowscript function. This is similar to
> HttpServlet.service() that calls doGet(), doPost(), etc depending on the
> HTTP method but still allows overloading service().
>
> - to allow sophisticated implementations of process() where needed, the
> matchers and selectors are made available to the controller, so that
> they can check the request environment as easily as the pipeline
> statements in sitemap.xmap (also have a look at the Django
> dispatcher[3]). This can allow to write something like:
>
> var match = cocoon.matchers.wildcard("admin/*");
> if (match) {
> if (authenticateAdmin()) {
> // call the function named by the '*'
> adminServices[match[1]]();
> } else {
> forbidden();
> }
> }
>
> - calling cocoon.sendPage(uri) directly goes to the <map:pipeline>
> section of the sitemap, to build a view. This seems obvious, but has an
> interesting side-effect: there is no more need to invent a private URL
> space such as "view-*" to have a two-step processing (controller/view)
> of the requests. We can even say that "cocoon.sendPage(null)" calls the
>
> Note: the controller examples in this RT are written in JavaScript, but
> JavaFlow should be considered on an equal ground, as Coocon's user base
> is two-sided, composed of people coming from the webdesign/php world,
> and others coming from the J2EE world. Also, JavaFlow should be the
> language of choice for builtin reusable helper controllers.
>
> -- oOo --
>
> Expression languages
> --------------------
>
> Do you know how many expression languages there are in Cocoon? Java,
> JavaScript, XPath, XReporter, JEXL, etc. There's also all the
> micro-languages defined by each of the input modules: many of them use
> XPath, but not all...
>
> Also, the way to access a given data is not the same in the sitemap
> (e.g. "{request-param:foo}") and in JXTG
> ("${cocoon.request.getParameter('foo')}" or even
> "#{$cocoon/request/parameters/foo}")
>
> We should restrict the number of languages to the useful minimum, and
> ensure they can be used consistently everywhere. This useful minimum
> looks to me as being JavaScript, XPath and Java (using Janino[4]).
>
> As for the syntax, I think we should use the simple "{..}" notation,
> with no initial character. To choose among the 3 expression languages,
> we have to choose a default one, and use prefixed expressions for the
> other ones. I consider JS to me the most versatile and thus to be the
> default language.
>
> That means we'll have "{cocoon.request.remoteHost}" or
> "{xpath:$cocoon/request/remoteHost}" or
> "{java:cocoon.getRequest().getRemoteHost()}".
>
> About XPath, I'm a bit skeptical wrt its actual usefullness with non-XML
> objects, which often looks weird. However, we need to be able to call
> XPath on DOM parts of a non-DOM data model, e.g.
> "{xpath(cocoon.session.attributes.userDoc, '/meta/dc:title')}". And
> interstingly this sample shows that a namespace prefix table must be
> available in the expression context for *all* languages.
>
> All this also means that we need a well-defined "cocoon" object defined
> identically in all contexts. Additional top-level objects can be
> available to provide context-specific data, such as "flow.sendPage()",
>
> -- oOo --
>
> Content-aware pipelines
> -----------------------
>
> Cocoon 1 had a DOM-based processing, meaning transformations could be
> chosen according to the pipeline content. Cocoon 2, when switching to
> SAX-based streamed pipelines, abandoned this ability. This hasn't been a
> real problem for a long time, as datasources were mostly passive
> documents of a well-known structure.
>
> Now things have changed a lot, and we have to deal with heterogeneous
> data types and content-driven processing. Let's take some real-life
> examples:
> - Content syndication: a feed's URL can provide RSS 0.9, 1.0, 2.0 or
> Atom. How can we decide what processing has to be applied on a feed if
> we don't know what's inside?
> - Forrest's infamous SourceTypeAction[5] identifies a document's type
> using pull parsing
> - SOAP requests: why is SOAP so badly integrated with Cocoon? We
> basically need to delegate to Axis that will then call a Java class. Why
> so? Because we're unable to choose the service to be called depending on
> the request's content.
> - finally, the ESB buzz is turning into real projects, and requires
> content-based routing of messages.
>
> There were some proposals to implement content-aware selectors[6] but
> they never materialized because of the impedance mismatch between a SAX
> stream and the usage of DOM (so Cocoon-1-ish!) that was proposed to
> implement them.
>
> Now Forrest's SourceTypeAction shows us the way: pull parsing.
>
> So let's switch pipelines from SAX push to StAX pull (JSR 173, see[7]).
> Content-aware matchers and selectors can then grab just the amount of
> information they need from the pipeline to make their decision. And
> contrarily to the SourceTypeAction that requires to resolve the source 2
> times (once for pull, once for push), the pipeline engine can
> transparently buffer the StAX events consumed by matchers and selectors
> to replay them in the next pipeline component.
>
> Using pull pipelines doesn't mean we have to trash everything.
> Converting DOM to/from StAX is straightforward, and so is StAX->SAX. The
> SAX->StAX conversion is less easy and requires either buffering or a
> separate thread.
>
> Using pull pipelines also has an interesting side effects on
> aggregations, as they can easily be inlined by pulling events
> successively from partial pipelines (i.e. without a serializer), e.g:
>
> <map:aggregate
> <map:part>
> <map:generate
> <map:transform
> </map:part>
> <map:part
> </map:aggregate>
> <map:transform
> <map:serialize/>
>
> Actually, writing
>
> <map:part
>
> is equivalent to writing
>
> <map:part>
> <map:generate
> </map:part>
>
> -- oOo --
>
> Dynamic pipelines
> -----------------
>
> Yes, you read it well: dynamic pipelines. This is what comes next
> naturally after content-aware pipelines: with use cases like webservices
> and ESBs, the content-based routing is not enough and we also need
> controller-driven routing.
>
> For simpler cases, we already have cocoon.processPipelineTo() (and the
> more versatile PipelineUtils class), but having to call the sitemap and
> invent a private URL just to perform a transformation is really overkill.
>
> I'd like to be able to write the following in a flowscript:
>
> var pipeline = flow.newPipeline("non-caching");
> pipeline.setGenerator("stream");
> pipeline.addTransformer("xslt", "normalize.xsl");
> if (cocoon.matchers.xpath("/foo/bar[@id = '" +
> cocoon.session.attributes.bar_id + "']", pipeline)) {
> handleBar(pipeline);
> } else {
> wrongId();
> }
>
> What we can see above is that we don't even need a serializer for the
> pipeline to be useful, as we can pull events from it as soon as it has a
> generator. And that generator could well be another pipeline built
> somewhere else.
>
> Basically, the pipeline engine becomes a very general-purpose object
> that can be used not only in the sitemap (to build views), but also in
> the controller for content-driven business logic decisions.
>
> This programmatic building of pipelines can also be used by Cocoon
> components themselves to implement some built-in transformations, e.g.
> converting an XMLSchema to a CForms definition, without requiring to
> copy/paste the corresponding sitemap instructions in user sitemaps, of
> requiring to call a system-defined sitemap. IMO, the lack of reusable
> system pipelines is one of the reasons why there hasn't been many
> off-the-shelf products or applications built on top of Cocoon.
>
> Being able to directly use pipelines can also ease the integration of
> Cocoon as a transformation engine in other environments, such as an
> advanced message transformer in the ServiceMix ESB[8].
>
> -- oOo --
>
> Controller-driven responses
> ---------------------------
>
> The advent of Ajax applications leads to a radical change in web
> applications architectures. There are many requests that don't lead to
> producing a view, but sending data and/or control information. Having to
> call a pipeline for this is really useless and overkill, as we don't
> need any kind of processing.
>
> We therefore need the controller to be able to directly send a
> non-processed response. We already have an example of this in the Ajax
> stuff for CForms[9] to send a simple <bu:continue> when form interaction
> is finished and a full page reload is needed. Another example is data
> transmission with an Ajax client using JSON[10].
>
> So we need additional "sendxxx" methods in the controller: sendText(),
> sendObject(), sendBytes() and why not sendStream().
>
> Ajax applications also require aggregations defined at the controller
> level. Let's consider an Ajax shopping cart application: the page
> displays the items catalogue and a sidebar with the current content of
> the shopping cart. When the user browses the items, only the catalogue
> area needs to be refreshed on the page. When he adds an item to the
> cart, both areas need to be refreshed at once to show the updated cart.
> The knowledge of what parts of the page need to be refreshed is in the
> controller. A solution can be to call a pipeline that will generate an
> XInclude that itself will call other pipelines, but that's smelly and
> doesn't allow to give different view data to each of the pipelines.
>
> To allow this, we need something like:
> flow.sendMultiple(
> ["catalogue", { paginator: paginator }],
> ["cart-sidebar", { cart: cart }]
> );
>
> -- oOo --
>
> Core components
> ---------------------
>
> Moving to pull pipelines isn't the only important core change: we need
> to move away from Avalon for good. Now what container will we use? We
> don't care: Cocoon 3.0 will be written as POJOs, and will come with a
> "default" container. Will it be Spring, Hivemind, Pico? I don't know. We
> may even provide configurations for several containers, as does
> XFire[xxxxx].
>
> -- oOo --
>
> Ok, thanks reading so far.
>
> My impression is that with all these changes, Cocoon will be sexy again.
> Add a bit of runtime analysis of databases and automatic generation of
> CForms to the picture, and you have something that has the same
> productivity as RoR, but in a J2EE environment. It also includes what I
> learned when working on Ajax and the consequences it has on the overall
> system architecture.
>
> You certainly have noticed that the above is more about the controller
> than about the sitemap. This is because not much changes are needed
> there, except content-aware matchers and selectors. But a more featured
> controller will allow to trash a great number of pipeline components
> that were invented to circumvent controller limitations. The code base
> will shrink.
>
> There are also a number of simplifications that can be done by using
> builtin conventions over configuration, but I'll write about this later.
>
> Tell me your thoughts. Am I completely off-track, or do you also want to
> build this great new thing?
>
> Sylvain
>
> [1]
> [2]
> [3]
> [4]
> [5]
>
>
> [6]
> [7]
> [8]
> [9]
>
>
> [10]
> [11]
>
> --
> Sylvain Wallez Anyware Technologies
>
> Apache Software Foundation Member Research & Technology Director
>
> | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200512.mbox/%[email protected]%3E | CC-MAIN-2016-44 | refinedweb | 2,961 | 62.98 |
I receive file url as response from api. when user clicks on download button, the file should be downloaded without opening file preview in a new tab. How to achieve this in react js?
Solution #1:
Triggering browser download from front-end is not reliable.
What you should do is, create an endpoint that when called, will provide the correct response headers, thus triggering the browser download.
Front-end code can only do so much. The ‘download’ attribute for example, might just open the file in a new tab depending on the browser.
The response headers you need to look at are probably
Content-Type and
Content-Disposition. You should check this answer for more detailed explanation.
Solution #2:
This is not related to React. However, you can use the
download attribute on the anchor
<a> element to tell the browser to download the file.
<a href='/somefile.txt' download>Click to download</a>
This is not supported on all browsers:
Solution #3:
If you are using React Router, use this:
<Link to="/files/myfile.pdf" target="_blank" download>Download</Link>
Where
/files/myfile.pdf is inside your
public folder.
Solution #4:
browsers are smart enough to detect the link and downloading it directly when clicking on an anchor tag without using the download attribute.
after getting your file link from the api, just use plain javascript by creating anchor tag and delete it after clicking on it dynamically immediately on the fly.
const link = document.createElement('a'); link.href = `your_link.pdf`; document.body.appendChild(link); link.click(); document.body.removeChild(link);
Solution #5:
tldr;
fetch the file from the url, store it as a local Blob, inject a link element into the DOM, and click it to download the Blob
I had a PDF file that was stored in S3 behind a Cloudfront URL. I wanted the user to be able to click a button and immediately initiate a download without popping open a new tab with a PDF preview. Generally, if a file is hosted at a URL that has a different domain that the site the user is currently on, immediate downloads are blocked by many browsers for user security reasons. If you use this solution, do not initiate the file download unless a user clicks on a button to intentionally download.
In order to get by this, I needed to fetch the file from the URL getting around any CORS policies to save a local Blob that would then be the source of the downloaded file. In the code below, make sure you swap in your own
fileURL,
Content-Type, and
FileName.
fetch('' + fileURL, { method: 'GET', headers: { 'Content-Type': 'application/pdf', }, }) .then((response) => response.blob()) .then((blob) => { // Create blob link to download const url = window.URL.createObjectURL( new Blob([blob]), ); const link = document.createElement('a'); link.href = url; link.setAttribute( 'download', `FileName.pdf`, ); // Append to html link element page document.body.appendChild(link); // Start download link.click(); // Clean up and remove the link link.parentNode.removeChild(link); });
This solution references solutions to getting a blob from a URL and using a CORS proxy.
Update
As of January 31st, 2021, the cors-anywhere demo hosted on Heroku servers will only allow limited use for testing purposes and cannot be used for production applications. You will have to host your own cors-anywhere server by following cors-anywhere or cors-server.
Solution #6:
This is how I did it in React:
import MyPDF from '../path/to/file.pdf'; <a href={myPDF} Download Here </a>
It’s important to override the default file name with
download="name_of_file_you_want.pdf" or else the file will get a hash number attached to it when you download.
Solution #7:
React gives a security issue when using
a tag with
target="_blank".
I managed to get it working like that:
<a href={uploadedFileLink} Download File </Button> </a>
Solution #8:
Solution (Work Perfect for React JS, Next JS)
You can use js-file-download and this is my example:
import axios from 'axios' import fileDownload from 'js-file-download' ... handleDownload = (url, filename) => { axios.get(url, { responseType: 'blob', }) .then((res) => { fileDownload(res.data, filename) }) } ... <button onClick={() => {this.handleDownload('', 'test-download.jpg') }}>Download Image</button>
This plugin can download excel and other file types.
Solution #9:
You can use FileSaver.js to achieve this goal:
const saveFile = () => { fileSaver.saveAs( process.env.REACT_APP_CLIENT_URL + "/resources/cv.pdf", "MyCV.pdf" );
};
<button className="cv" onClick={saveFile}> Download File </button>
Solution #10:
I have the exact same problem, and here is the solution I make use of now:
(Note, this seems ideal to me because it keeps the files closely tied to the SinglePageApplication React app, that loads from Amazon S3. So, it’s like storing on S3, and in an application, that knows where it is in S3, relatively speaking.
Steps
3 steps:
- Make use of file saver in project: npmjs/package/file-saver (
npm install file-saveror something)
- Place the file in your project You say it’s in the components folder. Well, chances are if you’ve got web-pack it’s going to try and minify it.(someone please pinpoint what webpack would do with an asset file in components folder), and so I don’t think it’s what you’d want. So, I suggest to place the asset into the
publicfolder, under a
resourceor an
assetname. Webpack doesn’t touch the
publicfolder and
index.htmland your resources get copied over in production build as is, where you may refer them as shown in next step.
- Refer to the file in your project Sample code:
import FileSaver from 'file-saver'; FileSaver.saveAs( process.env.PUBLIC_URL + "/resource/file.anyType", "fileNameYouWishCustomerToDownLoadAs.anyType");
Source
Appendix
- I also try
Linkcomponent of
react-routerreact-router-docs/Link. The zip file would download, and somehow would unzip properly. Generally, links have blue color, to inherit parent color scheme, simply add a prop like:
style={color: inherit}or simply assign a class of your CSS library like
button button-primaryor something if you’re Bootstrappin’
- Other sage questions it’s closely related to:
Solution #11:
You can define a component and use it wherever.
import React from 'react'; import PropTypes from 'prop-types'; export const DownloadLink = ({ to, children, ...rest }) => { return ( <a {...rest} href={to} download > {children} </a> ); }; DownloadLink.propTypes = { to: PropTypes.string, children: PropTypes.any, }; export default DownloadLink;
Solution #12:
We can user react-download-link component to download content as File.
<DownloadLink label="Download" filename="fileName.txt" exportFile={() => "Client side cache data here…"}/>
Solution #13:
Download file
For downloading you can use multiple ways as been explained above, moreover I will also provide my strategy for this scenario.
npm install --save react-download-link
import DownloadLink from "react-download-link";
- React download link for client side cache data
<DownloadLink label="Download" filename="fileName.txt" exportFile={() => "Client side cache data here…"} />
- Download link for client side cache data with Promises
<DownloadLink label="Download with Promise" filename="fileName.txt" exportFile={() => Promise.resolve("cached data here …")} />
- Download link for data from URL with Promises Function to Fetch data from URL
getDataFromURL = (url) => new Promise((resolve, reject) => { setTimeout(() => { fetch(url) .then(response => response.text()) .then(data => { resolve(data) }); }); }, 2000);
- DownloadLink component calling Fetch function
<DownloadLink label=â€Download†filename=â€filename.txt†exportFile={() => Promise.resolve(this. getDataFromURL (url))} />
Happy coding! 😉
| https://techstalking.com/programming/question/solved-how-to-download-file-in-react-js/ | CC-MAIN-2022-40 | refinedweb | 1,214 | 56.45 |
This is the mail archive of the [email protected] mailing list for the Cygwin project.
I am getting this compilation error in types.h: ----------------------------- c++ -O2 -g -O0 -march=i586 -Wall -Wunused -c -o RevPlayer.o RevPlayer.cpp In file included from /usr/include/cygwin/in.h:21, from /usr/include/netinet/in.h:14, from aeTCPServer.h:4, from RevServer.h:4, from RevPlayer.h:4, from RevPlayer.cpp:1: /usr/include/cygwin/types.h:120: syntax error before `;' token ----------------------------- On line 120 in types.h is this: #ifndef __int16_t_defined #define __int16_t_defined typedef __int16_t int16_t; #endif I can make it compile by including this at the top of types.h: #include <sys/types.h> , but I am not sure if this is the proper solution. If it is - whom to contact? If not - how to fix it? Thanks (Sources attached) -- Unsubscribe info: Bug reporting: Documentation: FAQ: | http://cygwin.com/ml/cygwin/2003-03/msg02156.html | CC-MAIN-2015-32 | refinedweb | 148 | 56.11 |
sandbox/Antoonvh/timeaccuracy.c
The Navier-Stokes solver is second-order accurate in time
Let us test that for a 1D viscous-flow problem, evaluated on a 2D grid.
set-up
The set-up consists of a periodic (left-right) channel with no-slip boundary conditions (top-bottom). The initialized flow is:
for . Due to the fluid’s viscousity (), momemtum will diffuse over time, according to the (Navier-)Stokes equation, that is solved by;
This equation will help to determine the error due to the time integration.
#include "grid/multigrid.h" #include "navier-stokes/centered.h" const face vector muv[]={1.,1.}; double timestepp; int j; int main(){
The grid resolution is chosen so fine that the presented errors here are dominated by the time stepping and are not significantly affected by the spatial discretization.
init_grid(1<<9); periodic(left); u.t[top]=dirichlet(0); u.t[bottom]=dirichlet(0.); μ=muv; L0=M_PI; X0=Y0=-L0/2.;
Six experiments are run, the zeroth one () is not used because the timestepper thinks it has made a timestep with for the -1-th timestep.
for (j=0;j<7;j++){ timestepp=2.*pow(0.5,(double)j); run(); } } event init(t=0){ CFL=10000.; foreach() u.x[]=cos(y); boundary(all); DT=timestepp; }
This event is used to let the run have an equidistant timestep for .
event setDT(i++){ if (i==0){ DT=1.1*timestepp/0.1; }else{ DT=timestepp; } }
Output
After a single ‘’ timescale, the error in the numerically obtained solution is evaluated.
event check(t=1.){ static FILE * fp = fopen("resultvisc.dat","w"); double err=0.; foreach() err+=fabs(u.x[]-cos(y)*exp(-t))*sq(Δ); if (j>0) fprintf(fp,"%d\t%g\n",i,err); }
Results
A first-order accurate (global) timestepping can be diagnosed? | http://basilisk.fr/sandbox/Antoonvh/timeaccuracy.c | CC-MAIN-2018-43 | refinedweb | 300 | 51.04 |
Options Window Color Analysis
By Geertjan on Jan 20, 2014
So you're happily using JTattoo in your NetBeans Platform application. However, when you use the darker variations, such as "com.jtattoo.plaf.graphite.GraphiteLookAndFeel", you run into a problem. Though all the colors change as you'd expect... there's still the white background at the top of the Options window, as you can see below.
That may seem like a small and insignificant problem. However, in the real world, Java and the NetBeans Platform are to be found in air traffic management and air traffic control systems, where the possibility of switching between a "day view" and a "night view", with all the related color updates, is not a "nice to have", but a "must".
So, the pesky white bar at the top of the Options window is a real problem in these real scenarios. A glaring white background is the very last thing you want in the Options window when the traffic controller has switched to "night view".
How to fix it? First, run the application from NetBeans IDE in Debug mode, open the Options window, and then click the orange camera button in the toolbar. Now a snapshot is made of the Options window and rendered in NetBeans IDE, where you can click on anything in the snapshot and see its related properties, such as the name of the panel where that white background is defined:
Then dig into the source code and find that panel. You now find that "Tree.background" is the relevant UIManager color for the white background in the Options window:
private static Color getTabPanelBackground() { if( isMetal || isNimbus ) { Color res = UIManager.getColor( "Tree.background" ); //NOI18N if( null == res ) res = Color.white; return new Color( res.getRGB() ); } return Color.white; }
Switch into your @OnStart annotated Runnable and use "Tree.background" as shown below:
@OnStart public class Startable implements Runnable { @Override public void run() { try { UIManager.setLookAndFeel("com.jtattoo.plaf.graphite.GraphiteLookAndFeel"); UIManager.put("Tree.background", Color.LIGHT_GRAY); } catch (ClassNotFoundException ex) { Logger.getLogger(Startable.class.getName()).log(Level.SEVERE, null, ex); } catch (InstantiationException ex) { Logger.getLogger(Startable.class.getName()).log(Level.SEVERE, null, ex); } catch (IllegalAccessException ex) { Logger.getLogger(Startable.class.getName()).log(Level.SEVERE, null, ex); } catch (UnsupportedLookAndFeelException ex) { Logger.getLogger(Startable.class.getName()).log(Level.SEVERE, null, ex); } } }
And here's the Options window again, gone is the white background at the top:
Don't like the light gray color I set above? Fine, change it to another color. The point is that you now have control over that color and you also know how to figure out similar problems next time. | https://blogs.oracle.com/geertjan/date/20140120 | CC-MAIN-2014-15 | refinedweb | 440 | 56.45 |
User interaction with the Button component
Button component parameters
Create an application with the Button component.
A Button is a fundamental part of many forms and web applications.
You can use buttons wherever you want a user to initiate an event.
For example, most forms have a Submit button. You could also add
Previous and Next buttons to a presentation.
You
can enable or disable a button in an application. In the disabled
state, a button doesn’t receive mouse or keyboard input. An enabled
button receives focus if you click it or tab to it. When a Button
instance has focus, you can use the following keys to control it:
Key
Description
Shift+Tab
Moves focus to the previous object.
Spacebar
Presses or releases the button and triggers
the click event.
Tab
Moves focus to the next object.
Enter/Return
Moves focus to the next object if a button
is set as the FocusManager’s default Button.
For more information about controlling focus, see the IFocusManager
interface and the FocusManager class in the ActionScript 3.0 Reference for the Adobe
Flash Platform and Work with FocusManager.
A live preview of each Button instance reflects changes made
to parameters in the Property inspector or Component inspector during
authoring.
To designate a button as the default push button in an application
(the button that receives the click event when a user presses Enter),
set FocusManager.defaultButton. For example, the
following code sets the default button to be a Button instance called submitButton.
FocusManager.defaultButton = submitButton;
When you add the Button component to an application, you can
make it accessible to a screen reader by adding the following lines
of ActionScript code:
import fl.accessibility.ButtonAccImpl;
ButtonAccImpl.enableAccessibility();
You enable accessibility for a component only once, regardless
of how many instances you create.
You can set the following authoring
parameters in the Property inspector (Window > Properties >
Properties) or in the Component inspector (Window > Component
Inspector) for each Button instance: emphasized, label, labelPlacement, selected,
and toggle. Button class in the ActionScript 3.0 Reference for the Adobe
Flash Platform.
The
following procedure explains how to add a Button component to an
application while authoring. In this example, the Button changes
the state of a ColorPicker component when you click it.
Create a new Flash (ActionScript 3.0) document.
Drag a Button component from the Components panel to the
Stage and enter the following values for it in the Property inspector:
Enter the instance name aButton.
Enter Show for the label parameter.
Add a ColorPicker to the Stage and give it an instance name
of aCp.
Open the Actions panel, select Frame 1 in the main Timeline,
and enter the following ActionScript code:
aCp.visible = false;
aButton.addEventListener(MouseEvent.CLICK, clickHandler);
function clickHandler(event:MouseEvent):void {
switch(event.currentTarget.label) {
case "Show":
aCp.visible = true;
aButton.label = "Disable";
break;
case "Disable":
aCp.enabled = false;
aButton.label = "Enable";
break;
case "Enable":
aCp.enabled = true;
aButton.label = "Hide";
break;
case "Hide":
aCp.visible = false;
aButton.label = "Show";
break;
}
}
The second line of code registers the function clickHandler() as
the event handler function for the MouseEvent.CLICK event.
The event occurs when a user clicks the Button, causing the clickHandler() function
to take one of the following actions depending on the Button’s value:
Show makes the ColorPicker visible and changes the Button’s
label to Disable.
Disable disables the ColorPicker and changes the Button’s
label to Enable.
Enable enables the ColorPicker and changes the Button’s label
to Hide.
Hide makes the ColorPicker invisible and changes the Button’s
label to Show.
Select Control > Test Movie to run the application.
The following
procedure creates a toggle Button using ActionScript and displays the
event type in the Output panel when you click the Button. The example creates
the Button instance by invoking the class’s constructor and it adds
it to the Stage by calling the addChild() method.
Drag the Button component from the Components panel to the
current document’s Library panel.
This adds the component
to the library, but doesn’t make it visible in the application.
Open the Actions panel, select Frame 1 in the main Timeline
and enter the following code to create a Button instance:
import fl.controls.Button;
var aButton:Button = new Button();
addChild(aButton);
aButton.label = "Click me";
aButton.toggle =true;
aButton.move(50, 50);
The move() method
positions the button at location 50 (x coordinate), 50 (y coordinate)
on the Stage.
Now, add the following ActionScript to create an event listener
and an event handler function:
aButton.addEventListener(MouseEvent.CLICK, clickHandler);
function clickHandler(event:MouseEvent):void {
trace("Event type: " + event.type);
}
Select Control > Test Movie.
When you click
the button, Flash displays the message, “Event type: click” in the
Output panel.
Twitter™ and Facebook posts are not covered under the terms of Creative Commons. | http://help.adobe.com/en_US/as3/components/WS5b3ccc516d4fbf351e63e3d118a9c65b32-7fac.html | CC-MAIN-2014-52 | refinedweb | 806 | 50.12 |
CustomDateFormatter
Since: BlackBerry 10.0.0
#include <bb/utility/i18n/CustomDateFormatter>
To link against this class, add the following line to your .pro file: LIBS += -lbbutilityi18n
Formats QDateTime objects using skeleton patterns.
A date/time format skeleton is a QString containing any arrangement of icu::SimpleDateFormat pattern characters specified by the Internationalization Components for Unicode (ICU). The table below also shows the Qt4 equivalencies for your convenience. The passed in QString should only contain character sequences from the left column (ICU).
Do not include any whitespace or punctuation. The formatter will automatically format using the most appropriate date-time pattern for the current system settings.
#### Supported characters: An asterisk (*) in the left column indicates that this character may be repeated to pad the output with 0s.
| ICU | description | Qt4 | US English example | notes | | ----- | ------------------ | ---- | -------------------------------- | ----------------------------------------------- | | G | era designator | AD | AD | | | GGGG | era designator | | Anno Domini | | | y | year | yyyy | 1996 | Can also use yyyy | | yy | 2-digit year | yy | 96 | 00 through 99 | | Y | of year | | 1997 | 3rd month of year 1997 <-> March 1996 | | YY | of 2-digit year | | 97 | 00 through 99 | | u | extended year | | 4601 | Based on region locale calendar | | U | cyclic year name | | ren-chen | Falls back to number (29) for many locales | | Q | quarter | | 2 | Use QQ to pad with 0s | | QQQ | quarter | | Q2 | | | QQQQ | quarter | | 2nd quarter | | | qqq | standalone quarter | | Q2 | | | qqqq | quarter | | 2nd quarter | | | M | month in year | M | 8 | | | MM | month in year | MM | 08 | | | MMM | month in year | MMM | Aug | | | MMMM | month in year | MMMM | August | | | LLL | standalone month | | Aug | | | LLLL | standalone month | | August | | | * w | week in year | | 33 | | | * W | week in month | | 3 | | | * F | weekday in month | | 3 | i.e. 3rd Tuesday in August | | * g | Julian day | | 2451370 | | | * D | day in year | | 226 | | | * d | day in month | d | 9 | | | e | day of week | | 2 | Numeric: 1 through 7 | | ee | day of week | | 02 | Numeric: 01 through 07 | | E | day of week | ddd | Tue | | | eeee | day of week | | Tues | | | EEEE | day of week | dddd | Tuesday | | | ccc | standalone day | | Tue | | | cccc | standalone day | | Tuesday | | | * H | hour in day | h | 0 | 0 through 23 | | * k | hour of day | | 24 | 1 through 24; i.e. the 1st hour of the day | | a | am/pm marker | AP | PM | Qt4 uses AP for AM/PM and ap for am/am | | * h | hour of am/pm | h | 12 | 1 through 12; Qt4 checks for presence of AP/ap | | * K | hour in am/pm | | 0 | 0 through 11 | | * m | minute in hour | m | 30 | | | * s | second in minute | s | 55 | | | S | decisecond | | 9 | Tenths of the next second: 0 through 9 | | SS | centisecond | | 97 | Hundredths of the next second: 00 through 99 | | SSS | millisecond | zzz | 978 | Thousandths of the next second: 000 through 999 | | * A | ms in day | | 69540000 | | | z | timezone | | PST | | | zzzz | timezone | | Pacific Standard Time | | | Z | timezone | | -0800 | RFC 822 | | ZZZZ | timezone | t | GMT-08:00 | | | ZZZZZ | timezone | | -08:00 | ISO 8601 | | v | timezone | | PT | Short wall (generic) time | | vvvv | timezone | | Pacific Time | Long wall (generic) time | | V | timezone | | PST | | | VVVV | timezone | | United States Time (Los Angeles) | Location |
##### Sample skeleton patterns:
| skeleton | US English | Catalan | Liechtenstein | | --------- | ------------------- | -------------------- | --------------------- | | MMMMEEEEd | Tuesday, October 30 | dimarts 30 d'octubre | Dienstag, 30. Oktober | | MMMMEd | Tue, October 30 | dt. 30 d'octubre | Di., 30. Oktober | | MMMEd | Tue, Oct 30 | dt. 30 d'oct. | Di., 30. Okt | | yMMMM | October 2012 | octubre de 2012 | Oktober 2012 | | MMMd | Oct 30 | 30 d'oct. | 30. Okt | | Ehm | Tue 7:46 PM | dt. 19.46 | Di. 19:46 |
Overview
Properties Index
Public Functions Index
Properties
Public Functions
Creates a date/time formatter of the supplied skeleton type.
BlackBerry 10.0.0
virtual
Destructor.
BlackBerry 10.0.0
Q_INVOKABLE QString
Prints out date and time components formatted and localized according to the system settings.
Q_INVOKABLE bool
Verifies if this formatter was created successfully.
true if this DateFormatter is ready to use, false otherwise.
BlackBerry 10.0.0
Q_SLOT void
Changes the skeleton used for parsing and formatting dates.
BlackBerry 10.0.0
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/cascades/bb__utility__i18n__customdateformatter.html | CC-MAIN-2016-36 | refinedweb | 675 | 55.07 |
Nemerle for OOP Programmers Week 0
From Nemerle Homepage
In this first lesson, we will get in touch with Nemerle syntax, and some of the basic features of the language. At the end, we will also learn how to install and configure a usable Nemerle compiler.
We assume that you have general knowledge about all the items that are treated here, so we won't dig so much into them.
Values and Variables
Just as a review: a value is a place in memory that holds some data. Each value has an identifier, or name, which allows us to refer to its contents, and operate on it. In most languages, identifiers declared in code are read-write variables, but this is not the default scheme in Nemerle. Indeed, there are two kinds of identifiers: mutable and immutable.
Immutable identifiers (values), far more commonly used in Nemerle, are defined with the following syntax. There's always an initializer, because if there wasn't, we wouldn't be able to use it at all:
def name : type = value
For mutable identifiers (variables) we just change the def keyword to mutable, like this:
mutable name : type = value
Immutable values are typically used for naming some object, and using that same instance for the life of the value. Mutable variables are used as place holders for some data which is meant to be changed in further computation. You will see that such a distinction leads to a different approach to solving programming problems.
Type Inference
You might have wondered why we use a special keyword for declaring values, instead of just writing type name; as in C. This is because the type can be skipped in most cases. So instead of writing:
def x : int = 42;
you write:
def x = 42;
This is not limited to literals, as you can also write something like:
mutable y = null; // do something y = SomeComplexObject ();
This is still OK.
Methods
As you surely know, a method or function is a group of statements, executed sequentially, which are given a meaningful name. Optionally, methods can have a return value as the final value of the computation. Methods can also have parameters, that is, values that are given to the method so it can know what to do. The general syntax for methods is:
name ( parameter1 : type, parameter2 : type ... ) : return_type
As you can see, the only difference when compared against C#, C++ or Java is that types are written after the parameter name, instead of before them. This is just a matter of convention; Visual Basic, for example, also places it after the name.
NOTE: type inference is not available for parameters and return types of methods. Having type inference for public methods could lead to accidently changing exported interfaces of classes — which is Not Good™. We plan to implement it for private methods though.
Returning Values
Continuing in the functional programming tradition, Nemerle doesn't use return or any other keyword to specify which value a function gives back as it's result. This fits very well in Nemerle: there are no goto, break or label statements that could disrupt normal program flow (well, there are some macros extending the language to allow them, but for now let us just ignore them).
The question then arises: if there is no keyword, how does the compiler know which value to return? The answer is very easy: the last value that has been computed. The easiest example to understand this is:
add (number1 : int, number2 : int) : int { number1 + number2 }
So the function's return type should agree with the type of last expression in its body. If it does not yield any value, then the function does not return anything (and its return type is called void).
Primitive Types
Well, up to this point we have used the word type lots of times, but we have not really defined it. Type is a fundamental concept in Object Oriented Programming. It represents an abstract idea of what a value is. For now, we will only introduce the primitive types:
During this course you probably won't touch anything beside int, string, char and void, so don't bother remembering the above :-)
The Full Name is the name used in the Base Class Libraries (BCL), which you can use to lookup information in the .NET Framework SDK Documentation, or similar reference. In code it is easier to use the short name, but the longer one is equally valid. They are interchangeable.
Nemerle is based on Microsoft .NET Framework, or its free implementation, Mono. That means that these types are not really just for Nemerle, but are common for any language targeting the .NET Framework. You have to adapt to this new platform, because while the syntactical parts of the language come from Nemerle, the practical parts (interaction with the user, communication over a network) come from the BCL. If you come from C#, you very likely know the Base Class Libraries well already. If you are new to .NET, you will need an introduction, so we will start with some working examples.
Simple Examples
For example, if I wanted to ask the user for her/his name and present a greeting, I would write a program something like this:
def name = System.Console.ReadLine (); System.Console.WriteLine ("Hello " + name);
You can save these two lines into a file, say greeting.n. Compile it using:
ncc greeting.n
and then run it (at the Windows cmd prompt):
out
(or on Linux with mono, in a terminal window):
mono out.exe
Nemerle does not require you to provide any classes or methods, you can just throw expressions to evaluate into a file and it will run it.
Another way to start a program's execution is to write a static method called Main in some class. So here, the example would be:
class Greeter { public static Main () : void { def name = System.Console.ReadLine (); System.Console.WriteLine ("Hello " + name); } }
You cannot mix these two techniques, or for that matter, place top-level code in more than one file. What you can do is define some classes before (but not in the middle, or after) the actual code to be executed. This is what we will do in this course, as it is more concise.
So the most sophisticated way to write this simple example would be:
class Asker { public static Run () : string { System.Console.ReadLine () } } def name = Asker.Run (); System.Console.WriteLine ("Hello " + name);
Decision Statements
One of the main functionalities in every programming language is the ability to execute instructions depending on some values or some input from the user. If you know some other language, this should be a familiar concept. The relational operators in Nemerle are the ones found in C#, C++ and Java:
NOTE: other languages, such as Visual Basic or Pascal, use = as the logical operator for equal to. This is an assignment operator in Nemerle, therefore it is an error to use it as a relational operator.
Results from relational operators can be saved in bool values, and these values can be used like any other:
def number = int.Parse (System.Console.ReadLine ()); def isEven = (number % 2) == 0; System.Console.Write ("The number is even :"); System.Console.Write (isEven);
This code will show True or False depending on the input from the user.
Combining relational operators
Sometimes it is necessary to combine more than one condition. This can be done using the following operators:
Both && and || are evaluated lazily, which means the following code:
when (foo != null && foo.Length > 0) System.Console.WriteLine (foo);
will never yield NullReferenceException, because the foo.Length will not be evaluated if foo != null was false.
This is the same behavior as in all C-like languages.
Using conditions
Once we have learned how to check for conditions, it's time to learn how to execute a piece of code depending on them. The three main conditional constructs for this task are:
when (condition) code
when executes the code only when the condition is true.
unless (condition) code
unless executes the code only when the condition is false.
if (condition) code1 else code2
if..else executes the first block of code when the condition is true, or the second if its result is false. On the main C-family languages, the notion of when or unless clauses does not exist, so they are managed using ifs without else. In Nemerle, every if has its else.
As you can see:
unless (something) DoSomethingElse ()
is exactly the same as:
when (!something) DoSomethingElse ()
Whether or not you use unless is up to you: use what reads better to you. No styling guidelines here.
In Nemerle, conditional statements are also functions (as in other functional languages), so the type of the return value of each branch must be the same. However, these constructs are mostly used as they are in C# or Java (the type of both branches is void), so there should be no problem if you don't miss your final ;. This is specially important in when/unless, because, as stated in Language Reference, they really translate to an if...else statement, which returns ().
Code blocks
The final word of this week are code blocks. Conditional statements support just a single statement written within their body. However, most of the time, you will need to execute more than one statement. This is achieved using code blocks: a series of statements written inside { and }, that are evaluated sequentially, and whose return type is the last value computed (remember, it's a functional language).
Exercises
The homework for this week is very easy to do. The intention is that the hardest part is the Nemerle installation.
Install Nemerle
Install the latest stable version of Nemerle in Linux, Windows, MacOS or whatever operating system you use. If you use Linux or MacOS, you will need to install Mono first. If you use Windows, you will need the latest release of .NET Framework 2.0 (Beta 2 is too old, use at least the August 2005 CTP).
The Step by step guide to get Nemerle compiler running on our Webpage has all the details, including MS.NET/Mono installation instructions.
You can ask any questions on the course mailing list. You can also configure some editors (such as SciTE, VIM, XEmacs, Kate, GEdit, mcedit) for Nemerle syntax highlighting. You can find configuration files in the SVN repository.
Exercise 1
Create a console application that asks the user for two numbers and then shows them added, subtracted, multiplied, divided and its modulus. You can also build upon it a program that shows the sin, cosine and tangent of a specified angle (look in the System.Math class).
Exercise 2
Add a menu (a list of possible operations and a prompt for a choice) to this program so the user can choose which operation to perform. Implements separate methods for reading numbers and displaying the menu. | http://nemerle.org/Nemerle_for_OOP_Programmers_Week_0 | crawl-001 | refinedweb | 1,823 | 62.68 |
Talk:Main Page
For old discussion, see Archived talk:Main Page 20110903.
Contents
Hia I've only just started using robocode and I have never coded before doe anyone know any good tutorials out there ? Or could you give me any tips on how to get started. Any advise would be appreciated.
Hi Rigged!
The Tutorials on this wiki are great resources, especially Getting Started, My First Robot and Game Physics.
If you're brand new to programming entirely, I'd suggest that you also immerse yourself in some other avenues to learn programming concepts generally and Java in specific. Greenfoot is a tremendous tool for learning basic programming concepts in a very "object-oriented" way, with some great tutorials available.
There are also several good beginner's tutorials on Java in general.
Hi Rigged, welcome to Robocode. I had about 6 months programming experience at high-school (10th grade) level when I first got involved in Robocode. Although RC is a great way to pick up on programming, I do think it is important to have a tiny bit of knowledge of how the basic concepts etc work before starting. Once you understand what a class is, what a method is, and the basics of mathematics in a Java context (plus a bit of geometry/trig) you should be ready to start. The sample bots are lots of fun to play with, and understanding how they work is a great way to pick up on some of the concepts. Once you get there, just ask us what you should got for next and I'm sure you'll get tons of advice =)
Thanks for the advise :). I've started by building a simple robot and i was wondering how do you change the colour of your robot.
Hi. Lets talk about your questions on your user discussion page :) User_talk:Rigged. I think it would be to much for the main page and you can ask whatever you want there.
take care
Hi
My name is Andrew Kirkland. I am currently writing a dissertation at the university of Abertay in Dundee(Scotland) which involves the use of Genetic algorythms to program robots in Robocode. I came accross Robocode JGAP . I have been testing it out but the webpage does not give me much information. Is their a user manual available for this software. I also need some general information on how it works for example the genome and what each of the 6 numbers represent, the fitness function, etc.
Any help would be much appreciated
Thanks
I don´t think anyone here is using Robocode JGAP. But I know genetic
programming algorithms are being used with success in RoboRumble.
Currently, the most successful uses of genetic
programming algorithms combine conventional development os robots without it, then using genetic programming algorithms to find optimal constants in statistical methods, like parameters in k-nearest neighbors search, kernel density estimation or histograms.
Hi,
I've read about the differences between a Robot and an 'advanced robot' But there is a basic thing I do not understand : Does an advanced robot has an advantage in the battle field over a similar robot, because it can perform several action in the same tick?
As i understand it, if i call ahead(..) followed by fire(), a regular robot will call ahead (and block until it is done) and then fire, while an advanced robot calling setAhead(..) and setFire() will start moving forward and will fire at the same tick, thus having an advantage over it's fellow regular robot. that sounds weird, though..
Am i wrong?
Thank you very much yoni
No, that's how it works. The only disadvantage I can think of is that Advanced robots can lose energy by hitting the wall. Personally, I find Robots boring and Advanced robots awesome, but maybe there are people who actually liked normal robots. Welcome to the wiki!
Thank you very much for the prompt reply.
So why on earth will someone write a regular robot, if a similar advanced robot will kick his ass? is it just a history thing?
The author of Robocode thought Robot would be an easier starting point for beginners. I'm not really sure it is... And as for fairness, I'm not sure he could foresee how competitive Robocode would eventually become. :-)
As for why still write one? Sometimes people like writing bots under different constraints, even if they're silly. The most popular example would be weight classes based on Code Size (Mini/Micro/NanoBots), which even got their own Rumbles. People have also written Robots, Droids (+20 energy, no radar), and Perceptual Bots (don't store any state, even between ticks). Of course, comparing such bots to DrussGT is unfair =), but that doesn't have to ruin the fun. I recently wrote a Perceptual Bot myself (RetroGirl).
Someone can write a robot just for fun. Why there are races on bicycle, when you could also ride a motorbike. Why we have code-restricted classes. Mind you, the best robot (kawigi.robot.Girl) is not easy to defeat, especially in melee. Most school competitions I know of use Robots, as it probably easier to check and control the code and its behaviour, and to prevent 'lending' code from good bots out here. For me, writing a robot is more difficult than writing an advanced robot, because it is hard to wrap my mind around what to do when and so on.
Hi,
Thanks for he answer. I didn't mean for question to be offensive (as in "why should one bother to write a Robot and not an Advanced one) I just wanted to make sure i understood correctly that the Advanced indeed has more power in the battle field.
Yoni
RoboRumble could have an "extends Robot" category.
Hi mates. Is there an issue with RoboWiki? It is very slow and sometimes it even didn't responds.
Yes, I've also noticed this. I suspect a search engine may have been spidering, or perhaps a database backup was running.
I as looking into this on the server today, and it looks to me like the main thing loading the wiki down is Bing being ridiculously aggressive with it's spidering... :(
I'm not really seeing anything on EverythingRobocode that makes it worth having a prominent link on the front page. I don't want to seem like I have a problem with other sites about Robocode coming into existence, because I think it's great. If at some point it has a ton of original content, by all means we should link to it. But right now, it looks like just a few articles, and this is a pretty elite set of links. Could you just post a link on your user page for now?
I agree, we cannot link every page about robocode on the front page, only the most important/informative of them. Compounded the need to add a description as well, which is a bit needless. I will remove it (the link+desc) for now.
Maybe an External Links page filled links? And then link the Main Page to that page.
Hi everyone, Im new to this group but am hoping you might be able to help me out with a project I am going to be running at my school this term. I am going to be delivering an introduction to programming for some Yr 9 students (13 - 14 year olds) starting out with basic programming ideas and examples and then moving onto a group project where teams of students create their own robots using robocode. They will have regular competition against each other to see what team is making the most progress / come up with the best ideas.
The reason I am telling you all this is that I am looking for some mentors to help out and perhaps give some advice. All this would involve is you perhaps receiving 1 or 2 emails with some code or some questions about how to do something. This kind of interaction is really useful for students as they get to work with real experts and feel like they are being listened to and treated like adults.
The project will run for about 6 weeks and will start mid february, as I said it would only involve a couple of emails or perhaps a Skype session. The classes are all small and the students are really hard working
I would be very grateful if you would consider helping out and if you have any questions please fell free to get in touch with me.
Thanks for your help,
Darren
I'm happy to help. I'll be very busy over the next few months getting my project ready for the international Robocup, but the occasional email should be fine.
I'm ready to help, but i'm afraid, that my english skill can become a problem
Thanks for the quick reply, its great to hear that people are keen to help out! I guess the best way to move forward would be to share some contact details. I will then give these out to the students once we start the robocode part of our project around the middle of february. Perhaps each of you could mentor a couple of teams therefore limiting the number of emails you will be getting.
It would be great if you could perhaps tell some other people about this project and see if they would also like to help our as the more people we have involved the better!
My email address is [email protected], please email me from the account you would like students to contact you at so that I can start putting a list together.
Thanks again for your help and I look forward to hearing from you and perhaps some other willing helpers soon.
Darren
I'd also be glad to help. Projects like this are always interesting :)
actually i think that every active robocoder is ready to help. and there are google+ circle for robocode:
Thanks for the great response, I've had quite few replies and am also looking for mentors from other more general programming areas as well. If you think you might like to get involved please contact me and I can tell you all about it.
The students had their first programming lessons this week and are really excited, particularly by the idea of working with robocode and having mentors they will be able to contact and share ideas with.
If you have said you will be involved or would like to be can I ask you to send me a quick Bio about yourself and your experience that I can give to students so they can find out a bit about you. It would also be great if you could supply a picture as well and this could be an avatar if you like, It would be nice if students could put a name to a face!
Thanks again for all your help, if you know of anyone who might be willing and able to help please put them in contact with me.
Darren
Just for the record, the downtime this weekend was my fault. Got the server into a nasty OOM state when trying to optimize performance. It should be working properly now (and still faster).
If anyone is noticing Robowiki being slow, it's because a PHP process is hitting a cpu bottleneck right now.
The logs are showing Bing and Baidu doing some heavy spidering right now so it could be that... but it could be something else, I'm not 100% sure.
Are you using php-fpm or spawn-fcgi? From my test, the former performs better under heavy load (though I am not sure since I tested it with nginx, not lighttpd; tested with apachebench with wordpress installation)
fcgi, however the number of requests per second was fairly modest really, so I doubt that type of overhead was the issue. More likely something on the MediaWiki side was being inefficient on the particular pages being spidered I think..
What's the means of "APS","Survival","ELO Rating","Glicko-2(RD)","Battles","Pairings","PL Score" in the RoboRumble?And how to work out them?
"APS","ELO Rating","Glicko-2(RD)": Darkcanuck/RRServer/Ratings
Premier League: [1]
Survival - it's average percent of rounds where robot survive
Battles - it's count of battles in which robot takes part
Pairings - it's count of another robots with which robot has battles
PL Score - it's count of wins in pairings (score percent > 50) * 2
for example robot A has 2 battles with robot B with score percents (45, 60) and no battles with robot C. In this case APS will be (45 + 60) / 2 = 52.5; battles will be 2; pairings will be 1; PL Score will be 1 * 2 = 2
soga,thank you ^_^
I just switched to a Sandy Bridge computer recently, which I believe has the most aggressive Turbo Boost nowadays (not counting the AMD Bulldozer, which I am not sure). I find that a lot of older robots started to skip turn like crazy (DrussGT is like skip 1 turn every 10 turns). I think the reason is that when Robocode calculate CPU Constant, it concentrates the extreme math to single core, which trigger the boost (to 2.9GHz in my CPU), but when the battle is being run, there are several threads running (plus the CPU temp would be higher due to more calculation being done), so the boost is not triggered, hence the cpu run at base speed (2.0GHz in my case).
Personally I run my Robocode at 1.5x the original CPU Constant. I don't know which CPUs you guys are on, but I think this may be a problem, especially on RoboRumble clients. What do you think? Should I fire a bug report?
I don't have such an issue on my AMD box, but that's a very good point. Hmm... --Rednaxela 16:48, 12 November 2011 (UTC)
I also have an increased CPU constant, approx 1.5 times the original. Just because I still have a single core P4 and sometimes I want to do something without stopping my client. It does not seem to hurt anyway.?. =)
I have a virtual gun that fires a virtual bullet, for some reason it's just slightly off when i actually fire.
What's the proper way to align and fire? I think my timing is just off.
I would guess this is the result of the Robocode idiosyncrasy where a bullet is fired before the gun is turned (so if you do setTurnGunRightDegrees(10), setFire(3), execute(), the bullet is fired before the gun is turned right 10 degrees). So your actual aim is probably the aim from the previous turn, while your predicted is from the current turn.
Well... can't really tell without more information what's wrong, but my first guess about what's wrong, is that perhaps you're not accounting for how within a tick, firing happens before gun turning does. The angle you fire at when you call setFire() is the angle resulting from the prior tick's setTurnGun() type call.
Yeah that's correct, the setFire is from the last tick.
What's a typical pattern for robocode as to code placement? I'm currently placing the gun turning code in the while true loop and the firing code in onScannedRobot
and it's wrapped with if (getGunHeat() == 0.0)
Should I change that layout? (also add && getGunTurnRemaining() == 0.0 to the fire wrap?)
Using onScannedRobot or run is totally just a matter of preference - for 1v1 it won't make any difference, really. It could also be an off-by-1 error in the bullet source location - it should be your location on the tick you called setFire. Or your target angle was farther than your gun could move during that tick, in which case the getGunTurnRemaining == 0 check would solve it.
I know if i combined the 2 logics in 1 function, the code would fail (for me at least) Nevermind i figured it out.
(Also Voidious: I'm testing my bot against yours now because it has pretty debugging graphics and I can see my weaknesses :P Also I perform better against your bot (diamond) if i don't fire :P)
Also, am I suppose to, with my virtual guns, determine the fire direction using last tick's information, since gun turns after bullet fires...
Right now this would f with my simulated hit rate, as sometimes a bullet might hit but not a virtual bullet, or vice versa.
Yep, you should, though the impact is probably quite minor.
Hm. Even last turn's angle doesn't match with the actual fired one. Idk what's going on, also I think the virtual bullets also hits better..
Anyway to compensate the gun turn after the bullet fire?
My bullets were not lined up either, until in March this year, I finally solved the problem with GresSuffurd 0.2.28. It turned out that when using the estimated bearing of the next tick (firing tick) position iso the bearing this tick (aiming tick), my real bullets indeed lined up with my (correct) virtual bullets. It gained me 0.2 APS, but I reached spot #11 with slightly misaligned bullets, so it is really not that important. Also keep in mind you have to aim at the opponents next tick position.
Wait i'm not sure if i understand what you mean by the next tick's position. How do I accomplish that?
Here's what I roughly have:
while (true){ if (getGunHeat() == 0.0){ fireVirtualBullet(enemyCurrentAbsoluteBearing); // Just use Head on targeting as an example because it's simple fire(2); } turnGunRightRadians(enemyRelativeGunHeading); }
I know this would be wrong. I just don't know how to fix it =S
He means something like:
Point2D.Double myNextLocation = project(myLocation,getVelocity(),getHeadingRadians()); Point2D.Double enemyNextLocation = project(enemyLocation,e.getVelocity(),e.getHeadingRadians()); double nextAbsBearing = absoluteBearing(myNextLocation,enemyNextLocation);
I've tried this, and using it to predict the enemy location didn't help me, although it did help for my own location. I think it depends on the way you define wave hits and starting locations in your gun. In DrussGT I wait until my gun-turn remaining at the beginning of the tick is 0, then fire. I put my bullet on the wave from last tick. As long as you make the same assumptions everywhere it should be ok.
Yeah that doesn't help me either, predicting my next location and then aiming via that doesn't make it line up either. I also wait until gun turn is complete.... Still not aligning..
Also, how does bullets collision work? I thought it's a line segment that's between last tick's location and this tick's location (length of the velocity). Whatever the line segment intersect will be collided (other bullet lines or robots)
Maybe try staying still while shooting to see if that is the problem? If it still doesn't line up, it is some sort of gun alignment issue.
Yes, that is how bullet collisions work. Maybe take your last aim and align the bullet to that? What I do is mark my previous wave as having a bullet the moment setFireBullet() returns a non-null result.
Can I save data between rounds in the static variables of other classes other than my main robot class?
Yep. Anything that's static will stick around in any class.
Personally, I model most of my stuff so the main robot class has objects that are static (gun, movement, etc) and then everything else is non-static in those classes, but you can model it however you'd like.
How is the while (true) loop actually broken down? Does robocode executes the code there 1 iteration per turn? Or..?
Generally, yes - when you call execute(), the Robocode engine processes one tick, including firing all the events on your bot, and then your run() method continues executing. So most of us have an infinite loop that calls execute() at the end, and each iteration is one tick.
But there's no magic to it - you could have a run method that goes:
public void run() { turnRight(20); ahead(100); fire(3); }
And that would be perfectly valid. Or you could call execute() every third iteration of your loop. In Dookious, my run method used to have a loop that was
while (notWonYet) ..., then a victory dance.
The timing thing for me is very confusing...
For example, if i want to fire at a certain angle, i have to rotate to it.. by the time i do.. i have another angle... which requires more rotation.. etc..
Same thing for turning the robot and going ahead.. I never know how to correctly time them. (Effectively stuck)
For gun aiming, see Robocode/Game_Physics#Firing_Pitfall. This can cause your aim to be a tick behind. I think most robots don't worry about it. But if you do worry about it, what I do is predict robot positions 1 tick into the future and use that for aiming. It's not exact, but works well enough for me..
A forum would be much more open for beginners to ask questions though. you shouldn't try to put everything into categories but just leave it to different threads in topics. --Peltco 06:15, 5 September 2011 (UTC).
Basically reduce the amount of data from the post that is set in the recent changes area. Lots of it has been wrapping, and that makes it harder to read. If possible half of what it has now. If possible even a "ABC has replied to thread XYZ". ;)
Personally, I think with the LiquidThread installed, every talk from old wiki should be put into the Archived talk namespace, or discussion header. My main reason is that discussion from old wiki would be uing old-style link, and I don't know how to programme a wikibot to edit a LiquidThread (plus my old converting code would work with the discussion header without modification)
I don't like leaving conversations in the discussion header, since it pushes the LiquidThreads stuff way down the page and I don't think that's what he header is for. I think moving to Archived talk is appropriate in most places, and you can just link to it in the header (like I did in Talk:Main Page).
I'm not sure how to deal with current conversations on the new wiki. I don't want them in the header. Archiving them is OK in most places, and maybe we could do it with a bot, but it feels pretty drastic to do it across the whole wiki. I wish I could just convert them to LiquidThreads conversations... | http://robowiki.net/w/index.php?title=Talk:Main_Page&dir=prev&limit=20 | CC-MAIN-2019-18 | refinedweb | 3,833 | 70.94 |
I have been trying so hard to make this work, not sure why this is not working. Below is the code which am using,
import {Http, Response, RequestOptions, Headers} from "@angular/http"; loginCompleted() { console.log("loginCompleted()"); this.navCtrl.push(TabsPage); } login(callback) { this.http .post("", body.toString(), options) .subscribe(function(response) { console.log("Login Response",response); this.loginResponse = JSON.parse(response.text()); callback }); }
Am trying to make a simple HTTP Post call and based on the response I want to push the TabsPage. But whats happening is Tabs page is getting pushed immediately after the HTTP post call is made. Can anyone tell what is the problem. | https://forum.ionicframework.com/t/ionic-push-page-after-successful-http-post/119420 | CC-MAIN-2020-40 | refinedweb | 107 | 50.12 |
In this tutorial, you'll learn the importance of cron jobs and why you need them. You'll have a look at python-crontab, a Python module to interact with the crontab. You'll learn how to manipulate cron jobs from a Python program using the python-crontab module.
What Is Cron?
During system administration, it's necessary to run background jobs on a server to execute routine tasks. Cron is a system process which is used to execute background tasks on a routine basis. Cron requires a file called crontab which contains the list of tasks to be executed at a particular time. All these jobs are executed in the background at the specified time.
To view cron jobs running on your system, navigate to your terminal and type in:
crontab -l
The above command displays the list of jobs in the crontab file. To add a new cron job to the crontab, type in:
crontab -e
The above command will display the crontab file where you can schedule a job. Let's say you have a file called hello.py which looks like:
print "Hello World"
Now, to schedule a cron job that executes the above script and appends its output to another file, you need to add the following line to the crontab:
50 19 * * * python hello.py >> a.txt
The above line schedules the execution of the script, appending the output to a file called a.txt. The numbers before the command define the time of execution of the job. The timing syntax has five parts:
- minute
- hour
- day of month
- month
- day of week
Asterisks (*) in the timing syntax mean the job runs for every value of that field.
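For example, the following hypothetical entry (the backup script path is made up for illustration) runs at 30 minutes past midnight on the first day of every month:

# minute hour day-of-month month day-of-week command
30 0 1 * * /home/roy/backup.sh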
Introducing Python-Crontab
python-crontab is a Python module which provides access to cron jobs and enables us to manipulate the crontab file from a Python program. It automates the process of modifying the crontab file manually. To get started with python-crontab, you need to install the module using pip:
pip install python-crontab
Once you have python-crontab installed, import it into your Python program.
from crontab import CronTab
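Besides a named user's crontab, the CronTab constructor supports a few other sources. The variants below follow the python-crontab documentation, but treat them as a sketch and verify against the version you installed:

from crontab import CronTab

current_cron = CronTab(user=True)                  # crontab of the user running the script
file_cron = CronTab(tabfile='my_jobs.tab')         # hypothetical standalone crontab file
memory_cron = CronTab(tab='* * * * * echo hello')  # parse a crontab from a string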
Writing Your First Cron Job
Let's use the python-crontab module to write our first cron job. Create a Python program called writeDate.py. Inside writeDate.py, add the code to print the current date and time to a file. Here is how writeDate.py would look:
import datetime

with open('dateInfo.txt', 'a') as outFile:
    outFile.write('\n' + str(datetime.datetime.now()))
Save the above changes.
Let's create another Python program which will schedule the writeDate.py program to run every minute. Create a file called scheduleCron.py.
Import the CronTab module into the scheduleCron.py program.
from crontab import CronTab
Using the CronTab module, let's access the system crontab.
my_cron = CronTab(user='your username')
The above line gives you access to the system crontab of the specified user. Let's iterate through the cron jobs, and you should be able to see any manually created cron jobs for that username.
for job in my_cron:
    print(job)
Save the changes and try executing scheduleCron.py; you should get the list of cron jobs, if any, for that user. On execution of the above program, you should see something similar to this:
50 19 * * * python hello.py >> a.txt
Let's move on to creating a new cron job using the CronTab module. You can create a new cron job by using the new method and specifying the command to be executed.
job = my_cron.new(command='python /home/roy/writeDate.py')
As you can see in the above line of code, I have specified the command to be executed when the cron job runs. Once you have the new cron job, you need to schedule it.
Let's schedule the cron job to run every minute, so the current date and time will be appended to the dateInfo.txt file at one-minute intervals. To schedule the job for every minute, add the following line of code:
job.minute.every(1)
Once you have scheduled the job, you need to write the job to the crontab:
my_cron.write()
Here is the complete scheduleCron.py file:
from crontab import CronTab

my_cron = CronTab(user='roy')
job = my_cron.new(command='python /home/roy/writeDate.py')
job.minute.every(1)

my_cron.write()
Save the above changes and execute the Python program.
python scheduleCron.py
Once it gets executed, check the crontab file using the following command:
crontab -l
The above command should display the newly added cron job.
* * * * * python /home/roy/writeDate.py
Wait for a minute and check your home directory, and you should be able to see the dateInfo.txt file with the current date and time. This file will get updated each minute, and the current date and time will get appended to the existing content.
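Running every minute is just one option. python-crontab exposes richer scheduling restrictions as well. The calls below follow its documented API — apply one scheme per job, and verify them against your installed version:

job.setall('0 6 * * *')              # raw cron syntax: every day at 6 a.m.
job.hour.every(4)                    # every four hours
job.dow.on('SUN')                    # only on Sundays
job.minute.during(15, 45).every(5)   # every 5 minutes between minutes 15 and 45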
Updating an Existing Cron Job
To update an existing cron job, you need to find it, either by its command or by an Id. You can set an Id on a cron job in the form of a comment when creating it with python-crontab. Here is how you can create a cron job with a comment:
job = my_cron.new(command='python /home/roy/writeDate.py', comment='dateinfo')
As seen in the above line of code, a new cron job has been created with the comment dateinfo. That comment can be used to find the cron job.
What you need to do is iterate through all the jobs in the crontab and check for the job with the comment dateinfo. Here is the code:
my_cron = CronTab(user='roy')

for job in my_cron:
    print(job)
Check each job's comment using the job.comment attribute.
my_cron = CronTab(user='roy')

for job in my_cron:
    if job.comment == 'dateinfo':
        print(job)
Once you have the job, reschedule it and write the change back to the crontab. Here is the complete code:
from crontab import CronTab

my_cron = CronTab(user='roy')

for job in my_cron:
    if job.comment == 'dateinfo':
        job.hour.every(10)
        my_cron.write()
        print('Cron job modified successfully')
Save the above changes and execute the scheduleCron.py file. List the items in the crontab file using the following command:
crontab -l
You should be able to see the cron job with the updated schedule:
* */10 * * * python /home/roy/writeDate.py # dateinfo
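If you want to pause a job without deleting it, python-crontab can also toggle jobs by commenting them out in the crontab. A minimal sketch based on its documented enable and is_enabled methods — verify against your installed version:

for job in my_cron:
    if job.comment == 'dateinfo':
        job.enable(False)         # comments the job out in the crontab
        print(job.is_enabled())   # False

my_cron.write()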
Clearing Jobs From Crontab
python-crontab provides methods to clear or remove jobs from
crontab. You can remove a cron job from the
crontab based on the schedule, comment, or command.
Let's say you want to clear the job with comment
dateinfo from the
crontab. The code would be:
from crontab import CronTab my_cron = CronTab(user='roy') for job in my_cron if job.comment == 'dateinfo': my_cron.remove(job) my_cron.write()
Similarly, to remove a job based on a comment, you can directly call the
remove method on the
my_cron without any iteration. Here is the code:
my_cron.remove(comment='dateinfo')
To remove all the jobs from the
crontab, you can call the
remove_all method.
my_cron.remove_all()
Once done with the changes, write it back to the cron using the following command:
my_cron.write()
Calculating Job Frequency
To check how many times your job gets executed using
python-crontab, you can use the
frequency method. Once you have the job, you can call the method called
frequency, which will return the number of times the job gets executed in a year.
from crontab import CronTab my_cron = CronTab(user='roy') for job in my_cron: print job.frequency()
To check the number of times the job gets executed in an hour, you can use the method
frequency_per_hour.
my_cron = CronTab(user='roy') for job in my_cron: print job.frequency_per_hour()
To check the job frequency in a day, you can use the method
frequency_per_day.
Checking the Job Schedule
python-crontab provides the functionality to check the schedule of a particular job. For this to work, you'll need the
croniter module to be installed on your system. Install
croniter using pip:
pip install croniter
Once you have
croniter installed, call the schedule method on the job to get the job schedule.
sch = job.schedule(date_from=datetime.datetime.now())
Now you can get the next job schedule by using the
get_next method.
print sch.get_next()
Here is the complete code:
import datetime from crontab import CronTab my_crons = CronTab(user='jay') for job in my_crons: sch = job.schedule(date_from=datetime.datetime.now()) print sch.get_next()
You can even get the previous schedule by using the
get_prev method.
Wrapping It Up
In this tutorial, you saw how to get started with using
python-crontab for accessing system
crontab from a Python program. Using
python-crontab, you can automate the manual process of creating, updating, and scheduling cron jobs.
Have you used
python-crontab or any other libraries for accessing system
crontab? I would love to hear your thoughts. Do let us know your suggestions in the comments below.
Learn Python
Learn Python with our complete python tutorial guide, whether you're just getting started or you're a seasoned coder looking to learn new<< | https://code.tutsplus.com/tutorials/managing-cron-jobs-using-python--cms-28231 | CC-MAIN-2021-04 | refinedweb | 1,537 | 73.17 |
.
In this release, we’ve added XAML features and improved the Test Runner, Code Coverage, Debug Visualizer, Navigation and Refactorings. You can get the latest CodeRush for Roslyn on the Visual Studio Gallery. The preview is free until we release, currently expected by the summer of 2016. Details on the new features are below.
The CodeRush Unit Test Runner now supports NUnit 3.0 framework, and test execution output is now displayed in the Console output tab:
Code Coverage lets you filter displayed members using the Search Box:
Four new features here. The References window now supports multiple tabs. Want to find all references for the symbol at the caret? Just click the green plus button to create a new tab and instantly fill it with all the references to the symbol.
The second neat feature is improved interaction with CodeRush’s Jump To feature. Typically you use the Jump To feature to quickly find and get to a single location of interest. But what if you want to visit multiple locations across your session? Just press Ctrl+P to pin the results in the References window, or click the References button from the Jump To window, like this:
This will instantly create and populate a new tab in the References window, filled with all the locations found by Jump To. Now these locations are in a single place and persist until you close the tab or close Visual Studio.
The third new feature modifies the Jump To feature, allowing you to drill into decompiled code:
And the fourth new navigation feature we’ve added is to bring CodeRush Classic’s Drop Marker Before Jump feature, which drops a marker automatically before you go away using the Visual Studio Edit.GoToDefinition command. This makes it easy to return to where you were (just press Escape to get back).
We’ve ported four XAML features over from CodeRush Classic: Break Apart Attributes, Line Up Attributes, Show Color (for showing and changing color references in XAML), and Import Type (to declare XAML namespace references for unresolved types).
CodeRush for Roslyn adds a new refactoring: Rename Namespace to Match Folder Structure, which renames the namespace according to the project default namespace and the path to the source code file.
And we improved Convert to String Interpolation. This refactoring is now available everywhere Use String.Format is, and is also aware of string formatting in calls to Debug.Print, Console.Write, Console.WriteLine and the StringBuilder.AppendFormat.
We’ve added the Debug Toolbar, which lets you turn the Debug Visualizer on and off, control execution while ignoring breakpoints, step into the member at the caret position, and toggle temporary breakpoints.
And we’ve improved the Debug Visualizer user interface to make preview expressions easier to read. In the two identical code samples below, you can see the UI from earlier versions (on top) compared to the improved version of the UI (below):
As always, we’re interested to know what you think. Download the latest version of CodeRush for Roslyn and give it a try. Thanks for your support and feedback.. | https://community.devexpress.com/blogs/markmiller/archive/2016/03.aspx | CC-MAIN-2017-26 | refinedweb | 517 | 61.06 |
Hi, On Tuesday 21 December 2010 01:18:04 Richard Nelson wrote: > > Unless there's a if/then construct for the kde-core package in the > > webbuilder, I don't see why changing the output type would solve > > anything. Or am I missing something? Looks like I was missing something. Looking inside live-build package for both squeeze (2.0.9-1) and sid (2.0.10-1) revealed the following: #if DISTRIBUTION lenny kde-core #endif #if DISTRIBUTION squeeze wheezy sid kde-plasma-desktop #endif This if/endif construct is no longer present in live-build from experimental (3.0~a10-1) and/or sid- snapshots, hence my mistake/confusion. I also tried (successfully) to create an image using the webbuilder. Looking inside the logfile I can see which versions of live-config/boot/etc are being used, but not live-build. Is it an idea to post the versions of the live packages being used with the webbuilder (at least live-build, but others may be useful as well)? That could help to troubleshoot potential issues. Regards, Diederik | http://lists.debian.org/debian-live/2010/12/msg00148.html | CC-MAIN-2013-48 | refinedweb | 180 | 65.93 |
50013/create-an-ec2-instance-using-boto
Hi @Neel, try this script:
reservations = conn.get_all_instances(instance_ids=[sys.argv[1]])
instances = [i for r in reservations for i in r.instances]
for i in instances:
key_name = i.key_name
security_group = i.groups[0].id
instance_type = i.instance_type
print "Now Spinning New Instance"
subnet_name = i.subnet_id
reserve = conn.run_instances(image_id=ami_id,key_name=key_name,instance_type=instance_type,security_group_ids=[security_group],subnet_id=subnet_name)
down voteacceptedFor windows:
you could use winsound.SND_ASYNC to play them ...READ MORE
I think you should try:
I used %matplotlib inline in ...READ MORE
Slicing is basically extracting particular set of ...READ MORE
You don't have to. The size of ...READ MORE
For some reason, the pip install of ...READ MORE
Check if the FTP ports are enabled ...READ MORE
To connect to EC2 instance using Filezilla, ...READ MORE
I had a similar problem with trying ...READ MORE
Hi @Neha, try something like thus:
import boto3
import ...READ MORE
Yes of course that's possible with just ...READ MORE
OR | https://www.edureka.co/community/50013/create-an-ec2-instance-using-boto | CC-MAIN-2019-30 | refinedweb | 168 | 56.11 |
A Guide to Python Correlation Statistics with NumPy, SciPy, & Pandas
When dealing with data, it's important to establish the unknown relationships between various variables.
Other than discovering the relationships between the variables, it is important to quantify the degree to which they depend on each other.
Such statistics can be used in science and technology.
Python provides its users with tools that they can use to calculate these statistics.
In this article, I will help you know how to use SciPy, Numpy, and Pandas libraries in Python to calculate correlation coefficients between variables.
Table of Contents
You can skip to a specific section of this Python correlation statistics tutorial using the table of contents below:
- What is Correlation?
- Correlation Calculation using NumPy
- Correlation Calculation using SciPy
- Correlation Calculation in Pandas
-
- Pearson correlation in Pandas
-
- Visualizing Correlation
- Heatmaps of Correlation Matrices
- Final Thoughts
What is Correlation?
The variables within a dataset may be related in different ways.
For example,
One variable may be dependent on the values of another variable, two variables may be dependent on a third unknown variable, etc.
It will be better in statistics and data science to determine the undelying relationship between variables.
Correlation is the measure of how two variables are strongly related to each other.
Once data is organized in the form of a table, the rows of the table become the observations while the columns become the features or the attributes.
There are three types of correlation.
They include:
Negative correlation- This is a type of correlation in which large values of one feature correspond to small values of another feature.
If you plot this relationship on a cartesian plane, the y values will decrease as the x values increase.
The vice versa is also true, that small values of one feature correspond to large features of another feature.
If you plot this relationship on a cartesian plane, the y values will increase as the x values decrease.
Weak or no correlation- In this type of correlation, there is no observable association between two features.
The reason is that the correlation between the two variables is weak.
Positive correlation- In this type of correlation, large values for one feature correspond to large values for another feature.
When plotted, the values of y tend to increase with an increase in the values of x, showing a strong correlation between the two.
Correlation goes hand-in-hand with other statistical quantities like the mean, variance, standard deviation, and covariance.
In this article, we will be focussing on the three major correlation coefficients.
These include:
- Pearson’s r
- Spearman’s rho
- Kendall’s tau
The Pearson's coefficient helps to measure linear correlation, while the Kendal and Spearman coeffients helps compare the ranks of data.
The SciPy, NumPy, and Pandas libraries come with numerous correlation functions that you can use to calculate these coefficients.
If you need to visualize the results, you can use Matplotlib.
Correlation Calculation using NumPy
NumPy comes with many statistics functions.
An example is the
np.corrcoef() function that gives a matrix of Pearson correlation coefficients.
To use the NumPy library, we should first import it as shown below:
import numpy as np
Next, we can use the
ndarray class of NumPy to define two arrays.
We will call them
x and
y:
import numpy as np x = np.arange(10, 20) y = np.array([3, 2, 6, 5, 9, 12, 16, 32, 88, 62])
You can see the generated arrays by typing their names on the Python terminal as shown below:
First, we have used the
np.arange() function to generate an array given the name
x with values ranging between 10 and 20, with 10 inclusive and 20 exclusive.
We have then used
np.array() function to create an array of arbitrary integers.
We now have two arrays of equal length.
You can use Matplotlib to plot the datapoints:
import numpy as np from matplotlib import pyplot as plt x = np.arange(10, 20) y = np.array([3, 2, 6, 5, 9, 12, 16, 32, 88, 62]) plt.scatter(x,y) plt.legend(['Data points']) plt.show()
This will return the following plot:
The colored dots are the datapoints.
It's now time for us to determine the relationship between the two arrays.
We will simply call the
np.corrcoef() function and pass to it the two arrays as the arguments.
This is shown below:
corr = np.corrcoef(x, y)
Now, type
corr on the Python terminal to see the generated correlation matrix:
The correlation matrix is a two-dimensional array showing the correlation coefficients.
If you've observed keenly, you must have noticed that the values on the main diagonal, that is, upper left and lower right, equal to 1.
The value on the upper left is the correlation coefficient for
x and
x.
The value on the lower right is the correlation coefficient for
y and
y.
In this case, it's approximately 8.2.
They will always give a value of 1.
However, the lower left and the upper right values are of the most signicance and you will need them frequently.
The two values are equal and they denote the pearson correlation coefficient for variables
x and
y.
Correlation Calculation using SciPy
SciPy has a module called
scipy.stats that comes with many routines for statistics.
To calculate the three coefficients that we mentioned earlier, you can call the following functions:
- pearsonr()
- spearmanr()
- kendalltau()
Let me show you how to do it...
First, we import numpy and the
scipy.stats module from SciPy.
Next, we can generate two arrays.
This is shown below:
import numpy as np import scipy.stats x = np.arange(10, 20) y = np.array([3, 2, 6, 5, 9, 12, 16, 32, 88, 62])
You can calculate the Pearson's r coefficient as follows:
scipy.stats.pearsonr(x, y)
This should return the following:
The value for Spearman's rho can be calculated as follows:
scipy.stats.spearmanr(x, y)
You should get this:
And finally, you can calculate the Kendall's tau as follows:
scipy.stats.kendalltau(x, y)
The output should be as follows:
The output from each of the three functions has two values.
The first value is the correlation coefficient while the second value is the p-value.
In this case, our great focus is on the coefficient correlation, the first value.
The p-value becomes useful when testing hypothesis in statistical methods.
If you only want to get the correlation coefficient, you can extract it using its index.
Since it's the first value, it's located at index 0.
The following demonstrates this:
scipy.stats.pearsonr(x, y)[0] # Pearson's r scipy.stats.spearmanr(x, y)[0] # Spearman's rho scipy.stats.kendalltau(x, y)[0] # Kendall's tau
Each should return one value as shown below:
Correlation Calculation in Pandas
In some cases, the Pandas library is more convenient for calculating statistics compared to NumPy and SciPy.
It comes with statistical methods for DataFrame and Series data instances.
For example, if you have two Series objects with equal number of items, you can call the
.corr() function on one of them with the other as the first argument.
First, let's import the Pandas library and generate a Series data object with a set of integers:
import pandas as pd x = pd.Series(range(20, 30))
To see the generated Series, type
x on Python terminal:
It has generated numbers between 20 and 30, with 20 inclusive and 30 exclusive.
We can generate the second Series object:
y = pd.Series([3, 2, 6, 5, 9, 12, 16, 32, 88, 62])
To see the values for this series object, type
y on the Python terminal:
To calculate the Pearson's r coefficient for
x in relation to
y, we can call the
.corr() function as follows:
x.corr(y)
This returns the following:
The Pearson's r coefficient for
y in relation to
x can be calculated as follows:
y.corr(x)
This should return the following:
You can then calculate the Spearman's rho as follows:
x.corr(y, method='spearman')
Note that we had to set the parameter
method to
spearman.
It should return the following:
The Kendall's tau can then be calculated as follows:
x.corr(y, method='kendall')
It returns the following:
The
method parameter was set to
kendall.
Linear Correlation
The purpose of linear correlation is to measure the proximity of a mathematical relationship between the variables of a dataset to a linear function.
If the relationship between the two variables is found to be closer to a linear function, then they have a stronger linear correlation and the absolute value of the correlation coefficient is higher.
Pearson Correlation Coefficient
Let's say you have a dataset with two features,
x and
y.
Each of these features has n values, meaning that
x and
y are n tuples.
The first value of feature
x,
x1 corresponds to the first value of feature
y,
y1.
The second value of feature
x,
x2, corresponds to the second value of feature
y,
y2.
Each of the x-y pairs denotes a single observation.
The Pearson (product-moment) correlation coefficient measures the linear relationship between two features.
It is simply the ratio of the covariance of
x and
y to the product of their standard deviations.
It is normally denoted using the letter
r and it can be expressed using the following mathematical equation:
r = Σᵢ((xᵢ − mean(x))(yᵢ − mean(y))) (√Σᵢ(xᵢ − mean(x))² √Σᵢ(yᵢ − mean(y))²)⁻¹
The parameter
i can take the values 1, 2,...n.
The mean values for
x an
y can be denoted as mean(x) and mean(y) respectively.
Note the following facts regarding the Pearson correlation coefficient:
- It can take any real values ranging between −1 ≤ r ≤ 1.
- The maximum value of r is 1, and it denotes a case where there exists a perfect positive linear relationship between
xand
y.
- If r > 0, there is a positive correlation between
xand
y.
- If r = 0,
xand
yare independent.
- If r < 0, there is a negative correlation between
xand
y.
- The minimum value of r is 1, and it denotes a case where there is a perfect negative linear relationship between
xand
y.
So, a larger absolute value of
r is an indication of a stronger correlation, closer to a linear function.
While, a smaller absolute value of
r is an indication of a weaker correlation.
Linear Regression in SciPy
SciPy can give us a linear function that best approximates the existing relationship between two arrays and the Pearson correlation coefficient.
So, let's first import the libraries and prepare the data:
import numpy as np import scipy.stats x = np.arange(20, 30) y = np.array([3, 2, 6, 5, 9, 12, 16, 32, 88, 62])
Now that the data is ready, we can call the
scipy.stats.linregress() function and perform linear regression.
This is shown below:
output = scipy.stats.linregress(x, y)
We are performing linear regression between the two features,
x and
y.
Let's get the values of different coefficients:
output.slope output.intercept output.rvalue output.pvalue output.stderr
These should run as follows:
So, you've used linear regression to get the following values:
- slope- This is the slope for the regression line.
- intercept- This is the intercept for the regression line.
- pvalue- This is the p-value.
- stderr- This is the standard error for the estimated gradient.
Pearson Correlation in NumPy and SciPy
At this point, you know how to use the
corrcoef() and
pearsonr() functions to calculate the Pearson correlation coefficient.
This is shown below:
r, p = scipy.stats.pearsonr(x, y)
Run the above command then access the values of
r and
p by typing them on the terminal.
The value of
r should be as follows:
The value of
p should be as follows:
Here is how to use the
corrcoef() function:
np.corrcoef(x, y)
It should return the following:
Note that if you pass an array with a
nan value to the
pearsonr() function, it will return a
ValueError.
There are a number of details that you should consider.
First, remember that the
np.corrcoef() function can take two NumPy arrays as arguments.
You can instead pass to it a two-dimensional array with similar values as the argument.
Let's first create the two-dimensional array:
xy = np.array([[20, 21, 22, 23, 24, 25, 26, 27, 28, 29], [3, 2, 6, 5, 9, 12, 16, 32, 88, 62]])
Let's now call the function:
np.corrcoef(xy)
This returns the following:
We get similar results as in the previous examples.
So, let's see what happens when you pass nan data to
corrcoef():
arr_with_nan = np.array([[10, 1, 12, 23], [22, 4, 11, 8], [13, 6, np.nan, 8]])
This returns the following:
In the above example, the third row of the array has a nan value.
Every calculation that didn't involve the feature with nan value was calculated well.
However, all the results that dependent on the last row are nan.
Pearson correlation in Pandas
First, let's import the Pandas library and create Series and DataFrame data objects:
import pandas as pd import sys sys.__stdout__ = sys.stdout x = pd.Series(range(20, 30)) y = pd.Series([3, 2, 6, 5, 9, 12, 16, 32, 88, 62]) z = pd.Series([6, 3, 2, 5, 0, -4, -8, -12, -15, -17]) xy = pd.DataFrame({'x-values': x, 'y-values': y}) xyz = pd.DataFrame({'x-values': x, 'y-values': y, 'z-values': z})
Above, we have created two Series data obects named
x,
y, and
z and two DataFrame data objects named
xy and
xyz.
To see any of them, type its name on the Python terminal.
See the use of the
sys library.
It has helped us provide output information to the Python interpreter.
At this point, you know how to use the
.corr() function on Series data objects to get the correlation coefficients.
x.corr(y)
The above returns the following:
We called the
.corr() function on one object and passed the other object to the function as an argument.
The
.corr() function can also be used on DataFrame objects.
It can give you the correlation matrix for the columns of the DataFrame object:
For example:
corr_matrix = xy.corr()
The above code gives us the correlation matrix for the columns of the
xy DataFrame object.
To see the generated correlation matrix, type its name on the Python terminal:
The resulting correlation matrix is a new instance of DataFrame and it has the correlation coefficients for the columns
xy['x-values'] and
xy['y-values'].
Such labeled results are very convenient to work with since they can be accessed with either their labels or with their integer position indices.
This is shown below:
corr_matrix.at['x-values', 'y-values'] corr_matrix.iat[0, 1]
The two run as follows:
The above examples show that there are two ways for you to access the values:
- The
.at[]accesses a single value by row and column labels.
- The
.iat[]accesses a value based on its row and column positions.
Rank Correlation
Rank correlation compares the orderings or the ranks of the data related to two features or variables of a dataset.
If the orderings are found to be similar, then the correlation is said to be strong, positive, and high.
On the other hand, if the orderings are found to be close to reversed, the correlation is said to be strong, negative, and low.
Spearman Correlation Coefficient
This is the Pearson correlation coefficient between the rank values of two features.
It's calculated just as the Pearson correlation coefficient but it uses the ranks instead of their values.
It's denoted using the Greek letter rho (ρ), the Spearman’s rho.
Here are important points to note concerning the Spearman correlation coefficient:
- The ρ can take a value in the range of −1 ≤ ρ ≤ 1.
- The maximum value of ρ is 1, and it corresponds to a case where there is a monotonically increasing function between x and y. Larger values of x correspond to larger values of y. The vice versa is also true.
- The minimum value of ρ is -1, and it corresponds to a case where there is a monotonically decreasing function between x and y. Larger values of x correspond to smaller values of y. The vice versa is also true.
Kendall Correlation Coefficient
Let's consider two n-tuples again,
x and
y.
Each
x-y, pair,
(x1, y1)..., denotes a single observation.
Each pair of observations,
(xᵢ, yᵢ), and
(xⱼ, yⱼ), where
i < j, will be one of the following:
- concordant if
(xᵢ > xⱼand
yᵢ > yⱼ)or
(xᵢ < xⱼand
yᵢ < yⱼ)
- discordant if
(xᵢ < xⱼand
yᵢ > yⱼ)or
(xᵢ > xⱼand
yᵢ < yⱼ)
- neither if a tie exists in either
x
(xᵢ = xⱼ)or in
y
(yᵢ = yⱼ)
The Kendall correlation coefficient helps us compare the number of concordant and discordant data pairs.
The coefficient shows the difference in the counts of concordant and discordant pairs in relation to the number of
x-y pairs.
Note the following points concerning the Kendall correlation coefficient:
- It takes a real value in the range of −1 ≤ τ ≤ 1.
- It has a maximum value of τ = 1 which corresponds to a case when all pairs are concordant.
- It has a minimum value of τ = −1 which corresponds to a case when all pairs are discordant.
SciPy Implementation of Rank
The
scipy.stats can help you determine the rank of each value in an array.
Let's first import the libraries and create NumPy arrays:
import numpy as np import scipy.stats x = np.arange(20, 30) y = np.array([3, 2, 6, 5, 9, 12, 16, 32, 88, 62]) z = np.array([6, 3, 2, 5, 0, -4, -8, -12, -15, -17])
Now that the data is ready, let's use the
scipy.stats.rankdata() to calculate the rank of each value in a NumPy array:
scipy.stats.rankdata(x) scipy.stats.rankdata(y) scipy.stats.rankdata(z)
The commands return the following:
The array
x is monotonic, hence, its rank is also monotonic.
The
rankdata() function also takes the optional parameter
method.
This tells the Python compiler what to do in case of ties in the array.
By default, the parameter will assign them the average of the ranks.
For example:
scipy.stats.rankdata([7, 2, 9, 2])
This returns the following:
In the above array, there are two values with a value of 2.
Their total rank is 3.
When averaged, each value got a rank of 1.5.
You can also get ranks using the
np.argsort() function.
For example:
np.argsort(y) + 2
Which returns the following ranks:
The
argsort() function returns the indices of the array items in the asorted array.
The indices are zero-based, so, you have to add 1 to all of them.
Rank Correlation Implementation in NumPy and SciPy
You can use the
scipy.stats.spearmanr() to calculate the Spearman correlation coefficient.
For example:
corr = scipy.stats.spearmanr(x, y)
This runs as follows:
The values for both the correlation coefficient and the pvalue have been shown.
The rho value can be calculated as follows:
rho, p = scipy.stats.spearmanr(x, y)
This will run as follows:
So, the
spearmanr() function returns an object with the value of Spearman correlation coefficient and p-value.
To get the Kendall correlation coefficient, you can use the
kendalltau() function as shown below:
corr = scipy.stats.kendalltau(x, y) tau, p = scipy.stats.kendalltau(x, y)
The commands will run as follows:
Rank Correlation Implementation in Pandas
You can use the Pandas library to calculate the Spearman and kendall correlation coefficients.
First, import the Pandas library and create the Series and DataFrame data objects:
import pandas as pd import numpy as np import sys sys.__stdout__ = sys.stdout x = np.arange(20, 30) y = np.array([3, 2, 6, 5, 9, 12, 16, 32, 88, 62]) z = np.array([6, 3, 2, 5, 0, -4, -8, -12, -15, -17]) x, y, z = pd.Series(x), pd.Series(y), pd.Series(z) xy = pd.DataFrame({'x-values': x, 'y-values': y}) xyz = pd.DataFrame({'x-values': x, 'y-values': y, 'z-values': z})
You can now call the
.corr() and
.corrwith() functions and use the
method parameter to specify the correlation coefficient that you want to calculate.
It defaults to pearson.
Consider the code given below:
x.corr(y, method='spearman') xy.corr(method='spearman') xyz.corr(method='spearman')
The commands will run as follows:
That's how to use the
method parameter with the
.corr() function.
To calculate the Kendall's tau, use
method=kendall.
This is shown below:
x.corr(y, method='kendall') xy.corr(method='kendall') xyz.corr(method='kendall')
The commands will return the following output:
Visualizing Correlation
Visualizing your data can help you gain more insights about the data.
Luckily, you can use Matplotlib to visualize your data in Python.
If you haven't installed the library, install it using the pip package manager.
Just run the following command:
pip3 install matplotlib
Next, import its pyplot module by running the following command:
import matplotlib.pyplot as plt
You can then create the arrays of data that you will use to generate the plot:
import numpy as np x = np.arange(20, 30) y = np.array([3, 2, 6, 5, 9, 12, 16, 32, 88, 62]) z = np.array([6, 3, 2, 5, 0, -4, -8, -12, -15, -17])
The data is now ready, hence, you can draw the plot.
We will first demonstrate how to create an x-y plot with a regression line, equation, and Pearson correlation coefficient.
You can use the
linregress() function to get the slope, the intercept, and the correlation coefficient for the line.
First, import the
stats module from SciPy:
import scipy.stats
Then run this code:
slope, intercept, r, p, stderr = scipy.stats.linregress(x, y)
Also, you can get the string with the equation of regression line and the value of correlation coefficient.
You can use f-strings:
line = f'Regression line: y={intercept:.2f}+{slope:.2f}x, r={r:.2f}'
Note that the above line will only work if you are using Python 3.6 and above (f-strings were introduced in Python 3.6).
Now, let's call the
.plot() function to generate the x-y plot:
fig, ax = plt.subplots() ax.plot(x, y, linewidth=0, marker='s', label='Data points') ax.plot(x, intercept + slope * x, label="Regression Line") ax.set_xlabel('x') ax.set_ylabel('y') ax.legend(facecolor='white') plt.show()
The code will generate the following plot:
The blue squares on the plot denote the observations, while the yellow line is the regression line.
Heatmaps of Correlation Matrices
The correlation matrix can be big and confusing when you are handling a huge number of features.
However, you can use a heat map to present it and each field will have a color that corresponds to its value.
You should have a correlation matrix, so, let's create it.
First, this is our array:
xyz = np.array([[11, 12, 12, 14, 14, 15, 18, 17, 18, 20], [2, 3, 4, 5, 9, 12, 18, 25, 96, 50], [5, 3, 2, 4, 0, -2, -8, -11, -17, -18]])
Now, let's generate the correlation matrix:
corr_matrix = np.corrcoef(xyz).round(decimals=2)
When you type the name of the correlation matrix on the Python terminal, you will get this:
You can now use the
.imshow() function to create the heatmap, and pass the name of the correlation matrix to it as the argument:
fig, ax = plt.subplots() im = ax.imshow(corr_matrix) im.set_clim(-1, 1) ax.grid(False) ax.xaxis.set(ticks=(0, 1, 2), ticklabels=('x', 'y', 'z')) ax.yaxis.set(ticks=(0, 1, 2), ticklabels=('x', 'y', 'z')) ax.set_ylim(2.5, -0.5) for i in range(3): for j in range(3): ax.text(j, i, corr_matrix[i, j], ha='center', va='center', color='r') cbar = ax.figure.colorbar(im, ax=ax, format='% .2f') plt.show()
The following heatmap will be generated:
The result shows a table with coefficients.
The colors on the heatmap will help you interpret the output.
We have three different colors representing different numbers.
Final Thoughts
This is what you've learned in this article:
- Correlation coeffients measure the association between the features or variables of a dataset.
- The most popular correlation coefficients include the Pearson’s product-moment correlation coefficient, Spearman’s rank correlation coefficient, and Kendall’s rank correlation coefficient.
- The NumPy, Pandas, and SciPy libraries come with functions that you can use to calculate the values of these correlation coefficients.
- Visualing your data will help you gain more insights from the data.
- You can use Matplotlib to visualize your data in Python. | https://nickmccullum.com/python-correlation-statistics/ | CC-MAIN-2021-04 | refinedweb | 4,166 | 57.27 |
thoughts from a professional developer
How do you find out why your computer or a running program is so slow? Here’s one way.
Let’s attach the VS debugger to VS itself. The main executable for VS is devenv.exe.
Start Visual Studio 2008. This will be the “debugger”
Choose File->Open Project C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe
(You can also choose Debug->Attach to process to debug another instance of devenv.exe, or any other EXE, like Foxpro.exe or Excel.exe)
Hit F5, and a dialog pops up:
“Debugging information for ‘devenv.exe’ cannot be found or does not match. Symbols not loaded. Do you want to continue debugging?”.
Answer yes, and Visual Studio starts. This will be the “debuggee”
Do anything you like in the debuggee, such as create or load a project. Hit F12 to cause an asynchronous breakpoint (or go to the debugger and choose Debug->Break All).
That will freeze all the debuggee threads and put you in the debugger.
You can then examine the Threads window (Debug->Windows->Threads) and see what threads are running. There are several. You can dbl-click various threads and look at the Call stack for it (Debug->Windows->Call stack).
You’ll probably see that most threads are just waiting for something to happen.
Choose the main thread. When VS is idling, the stack will look like this:
ntdll.dll!7c90eb94()
[Frames below may be incorrect and/or missing, no symbols loaded for ntdll.dll]
ntdll.dll!7c90e9ab()
kernel32.dll!7c8094e2()
> msvcr90.dll!_onexit_nolock(int (void)* func=0x0072006f) Line 157 + 0x6 bytes C
00660072()
Each stack entry shows the module and address that called the next stack entry. This isn’t very useful, so you need to load symbols. You can use the public Microsoft Symbol Server:
Tools->Options->Debug->Symbols
Cache the symbols to a local dir, like C:\Symbols
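(Outside of Visual Studio — for example in windbg — the conventional way to point at the same public symbol server is the _NT_SYMBOL_PATH environment variable. The line below is the standard syntax, with C:\Symbols as your local cache:)
set _NT_SYMBOL_PATH=srv*C:\Symbols*http://msdl.microsoft.com/download/symbols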
Right click on the various modules (like ntdll.dll, kernel32.dll, msenv.dll etc.) in the call stack to load symbols. Now it’s a little more intelligible:
> ntdll.dll!_KiFastSystemCallRet@0()
user32.dll!_NtUserKillTimer@8() + 0xc bytes
msenv.dll!CMsoCMHandler::FPushMessageLoop() + 0x36 bytes
msenv.dll!SCM::FPushMessageLoop() + 0x4f bytes
msenv.dll!SCM_MsoCompMgr::FPushMessageLoop() + 0x28 bytes
msenv.dll!CMsoComponent::PushMsgLoop() + 0x28 bytes
msenv.dll!VStudioMainLogged() + 0x19b bytes
msenv.dll!_VStudioMain() + 0x7d bytes
devenv.exe!util_CallVsMain() + 0xd8 bytes
devenv.exe!CDevEnvAppId::Run() + 0x5cb bytes
devenv.exe!_WinMain@16() + 0x60 bytes
devenv.exe!License::GetPID() - 0x4cf9 bytes
kernel32.dll!_BaseProcessStart@4() + 0x23 bytes
You can see that WinMain calls a message loop.
Let’s make the foreground thread busy. Create a VB console application. Add an XML literal:
Module Module1
Sub Main()
Dim bigxml = <xml>
</xml>
End Sub
End Module
Make the XML literal big: open the file C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Data.Linq.xml and copy everything except the <?xml version="1.0" encoding="utf-8"?> into the literal (between the <xml> and the </xml>), so you have about 2000 lines.
Start Task Manager (Ctrl+Shift+Escape, or right-click on the taskbar and choose Task Manager). Observe the little icon in the system tray. It indicates how busy your computer is.
For example, if you have 2 processors and 1 thread is very busy, it’ll show 50% busy.
Now, if you hover your mouse over the “bigxml”, you’ll trigger VB to create a Quick Info tooltip, but it takes quite a lot of calculation to figure out the tip.
Do your asynchronous breakpoint trick and you’ll see something like this:
> msvb7.dll!BCSYM::IsNamedRoot()
msvb7.dll!BCSYM::AreTypesEqual()
msvb7.dll!EquivalentTypes() + 0x17 bytes
msvb7.dll!ClassifyPredefinedCLRConversion() + 0x22 bytes
msvb7.dll!Semantics::ClassifyPredefinedCLRConversion() + 0x28 bytes
msvb7.dll!Semantics::ClassifyPredefinedConversion() + 0xbf bytes
msvb7.dll!Semantics::ResolveConversion() + 0x234 bytes
msvb7.dll!Semantics::ClassifyUserDefinedConversion() + 0x40685 bytes
msvb7.dll!Semantics::ClassifyConversion() + 0x21e31 bytes
msvb7.dll!Semantics::CompareParameterTypeSpecificity() + 0x57 bytes
msvb7.dll!Semantics::CompareParameterSpecificity() + 0xa3 bytes
msvb7.dll!Semantics::InsertIfMethodAvailable() + 0x461 bytes
msvb7.dll!Semantics::CollectOverloadCandidates() + 0x1e6 bytes
msvb7.dll!Semantics::ResolveOverloading() + 0xd9 bytes
msvb7.dll!Semantics::ResolveOverloadedCall() + 0x5e bytes
msvb7.dll!Semantics::InterpretCallExpression() + 0x2656f bytes
msvb7.dll!Semantics::CreateConstructedInstance() + 0xfa bytes
msvb7.dll!Semantics::CreateConstructedInstance() + 0x5b bytes
msvb7.dll!Semantics::InterpretXmlElement() + 0x241 bytes
msvb7.dll!Semantics::InterpretXmlContent() + 0x56 bytes
msvb7.dll!Semantics::InterpretXmlElement() + 0x27c bytes
msvb7.dll!Semantics::InterpretXmlExpression() - 0x13d bytes
msvb7.dll!Semantics::InterpretXmlExpression() + 0xb0 bytes
msvb7.dll!Semantics::InterpretXmlExpression() + 0xc3 bytes
msvb7.dll!Semantics::InterpretExpression() - 0x1fc bytes
msvb7.dll!Semantics::InterpretExpressionWithTargetType() + 0x43 bytes
msvb7.dll!Semantics::InterpretInitializer() + 0x36 bytes
msvb7.dll!Semantics::InterpretInitializer() + 0x1a bytes
msvb7.dll!Semantics::InterpretInitializer() + 0x127 bytes
msvb7.dll!Semantics::InterpretVariableDeclarationStatement() + 0x158c bytes
msvb7.dll!Semantics::InterpretStatement() + 0x7b2f bytes
msvb7.dll!Semantics::InterpretStatementSequence() + 0x2f bytes
msvb7.dll!Semantics::InterpretBlock() + 0x24 bytes
msvb7.dll!Semantics::InterpretMethodBody() + 0x1fa bytes
msvb7.dll!SourceFile::GetBoundMethodBodyTrees() + 0x126 bytes
msvb7.dll!CBaseSymbolLocator::GetBoundMethodBody() + 0xb6 bytes
msvb7.dll!CSymbolLocator::LocateSymbolInMethodImpl() + 0x29 bytes
msvb7.dll!CSymbolLocator::LocateSymbol() + 0x81f bytes
msvb7.dll!CIntelliSense::GenQuickInfo() + 0x52399 bytes
msvb7.dll!CIntelliSense::HandleQuickInfo() + 0x29 bytes
msvb7.dll!CIntelliSense::ProcessCompletionInfo() + 0x66cda bytes
msvb7.dll!CIntelliSense::GenIntelliSenseInfo() + 0x402 bytes
msvb7.dll!SourceFileView::GenIntelliSenseInfo() + 0x94 bytes
msvb7.dll!SourceFileView::GetDataTip() + 0x743 bytes
msvb7.dll!CVBLangService::GetDataTip() + 0x11d bytes
msenv.dll!CEditView::GetFilterDataTipText() + 0x37 bytes
msenv.dll!CEditView::HandleHoverWaitTimer() + 0x213 bytes
msenv.dll!CEditView::TimerTick() + 0x7d bytes
msenv.dll!CEditView::WndProc() + 0x1685b9 bytes
msenv.dll!CEditView::StaticWndProc() + 0x39 bytes
msenv.dll!EnvironmentMsgLoop() + 0xb6 bytes
Note how easy it is to read the stack. The bottom of the stack (the first thing put on it) is _BaseProcessStart, which starts the process. You can see that a timer tick caused GetDataTip to call GenIntelliSenseInfo, which calls GenQuickInfo, then LocateSymbol, and so on.
Each stack entry indicates the name of the routine being executed. The "+ 0x23 bytes" is the offset in bytes into the routine at which the call occurred. A low number means the call is near the beginning of that method.
Because XML is a tree, and is thus a recursive data structure, you see that XMLContent can contain an XMLExpression and vice versa. The depth of the recursion reflects the actual XML being processed.
BTW, the VB background compiler thread is:
0 > 4620 Worker Thread ThreadSyncManager::ThreadProc _KiFastSystemCallRet@0 Normal 0
And its stack at idle:
ntdll.dll!_ZwWaitForMultipleObjects@20() + 0xc bytes
kernel32.dll!_WaitForMultipleObjectsEx@20() - 0x48 bytes
user32.dll!_RealMsgWaitForMultipleObjectsEx@20() + 0xd9 bytes
ole32.dll!CCliModalLoop::BlockFn() + 0x76 bytes
ole32.dll!_CoWaitForMultipleHandles@20() + 0xe6 bytes
msvb7.dll!ThreadSyncManager::ThreadProc() + 0x98 bytes
kernel32.dll!_BaseThreadStart@8() + 0x37 bytes
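By the way, instead of breaking asynchronously with F12, the debuggee can break itself at a spot you choose in code. Here's a minimal sketch using the standard System.Diagnostics API:
using System.Diagnostics;
// ...somewhere in the code you want to inspect:
if (Debugger.IsAttached)
{
    Debugger.Break(); // acts like a breakpoint: all threads freeze and the debugger gets control right here
}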
See also:
Dynamically attaching a debugger
Is a process hijacking your machine?
In the last post, Area fill algorithm: crayons and coloring book, I showed a program that emulates a kid drawing in a coloring book.
However, the algorithm wasn't very efficient, and could blow the thread's stack even for a simple drawing: it used the call stack to remember where to go next.
The heart of the routine:
void AreaFill(Point ptcell)
{
    if (ptcell.X >= 0 && ptcell.X < m_numCells.Width)
    {
        if (ptcell.Y >= 0 && ptcell.Y < m_numCells.Height)
        {
            if (DrawACell(ptcell, m_brushFill)) // if not drawn already
            {
                AreaFill(new Point(ptcell.X - 1, ptcell.Y));
                AreaFill(new Point(ptcell.X + 1, ptcell.Y));
                AreaFill(new Point(ptcell.X, ptcell.Y + 1));
                AreaFill(new Point(ptcell.X, ptcell.Y - 1));
            }
        }
    }
}
It just checks boundaries, draws the current point, then calls itself to draw the points North, South, East and West. Doing this for thousands of pixels eats up the current thread's stack very fast.
One way to fix this is to expand the stack size.
You can examine the default stack size of an executable (EXE or DLL) by opening a Visual Studio command prompt (Start->Programs->VS2008->Visual Studio Tools->VS Command Prompt), then type
link /dump /headers d:\dev\cs\Fill\bin\Debug\Fill.exe
Microsoft (R) COFF/PE Dumper Version 9.00.21022.08
Dump of file Fill.exe
PE signature found
File Type: EXECUTABLE IMAGE
FILE HEADER VALUES
14C machine (x86)
3 number of sections
4A1ECBC2 time date stamp Thu May 28 10:37:06 2009
0 file pointer to symbol table
0 number of symbols
E0 size of optional header
10E characteristics
Executable
Line numbers stripped
Symbols stripped
32 bit word machine
OPTIONAL HEADER VALUES
10B magic # (PE32)
8.00 linker version
2000 size of code
800 size of initialized data
0 size of uninitialized data
3E7E entry point (00403E7E)
2000 base of code
4000 base of data
400000 image base (00400000 to 00407FFF)
2000 section alignment
200 file alignment
4.00 operating system version
0.00 image version
4.00 subsystem version
0 Win32 version
8000 size of image
200 size of headers
0 checksum
2 subsystem (Windows GUI)
540 DLL characteristics
Dynamic base
NX compatible
No structured exception handler
100000 size of stack reserve
1000 size of stack commit
100000 size of heap reserve
1000 size of heap commit
This is a little misleading, because the numbers are in hex: the default stack size is 0x100000 bytes, which is 1,048,576, or about 1 megabyte.
You can change the stack size using Editbin.exe
D:\dev\cs\Fill\bin\Debug>link /dump /headers Fill.exe | find "stack"
D:\dev\cs\Fill\bin\Debug>editbin /stack:10000,1000 Fill.exe
Microsoft (R) COFF/PE Editor Version 9.00.21022.08
2710 size of stack reserve
3E8 size of stack commit
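If you'd rather not patch the binary, another option is to run the deeply recursive code on its own thread and request a larger stack; the Thread constructor overload with maxStackSize has been available since .NET 2.0. This is just a sketch (and note that the real program draws from this routine, so in a real app the GDI+ calls should stay on the UI thread):
using System.Threading;
// ...
// startCell: the cell that was right-clicked (a hypothetical variable for this sketch)
Thread t = new Thread(() => AreaFill(startCell), 16 * 1024 * 1024); // ask for a 16 MB stack
t.Start();
t.Join(); // wait for the fill to finish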
A better way to fix this is to use heap memory, rather than the stack:
//* More efficient algorithm: don't use the stack to store state
void AreaFill(Point ptcell)
{
    Queue<Point> queueCells = new Queue<Point>();
    queueCells.Enqueue(ptcell);
    while (queueCells.Count > 0)
    {
        ptcell = queueCells.Dequeue();
        if (ptcell.X >= 0 && ptcell.X < m_numCells.Width &&
            ptcell.Y >= 0 && ptcell.Y < m_numCells.Height)
        {
            if (DrawACell(ptcell, m_brushFill)) // if not drawn already
            {
                queueCells.Enqueue(new Point(ptcell.X - 1, ptcell.Y));
                queueCells.Enqueue(new Point(ptcell.X + 1, ptcell.Y));
                queueCells.Enqueue(new Point(ptcell.X, ptcell.Y + 1));
                queueCells.Enqueue(new Point(ptcell.X, ptcell.Y - 1));
            }
        }
    }
}
This solution is almost the same as the first; however, instead of recursing to visit the points to the North, etc., the routine puts the points still to be processed into a queue.
The VB solution is analogous:
Sub AreaFill(ByVal ptcell As Point)
    Dim queueCells = New Queue(Of Point)
    queueCells.Enqueue(ptcell)
    While queueCells.Count > 0
        ptcell = queueCells.Dequeue()
        If ptcell.X >= 0 And ptcell.X < m_numCells.Width Then
            If ptcell.Y >= 0 And ptcell.Y < m_numCells.Height Then
                If DrawACell(ptcell, m_brushFill) Then ' if not drawn already
                    queueCells.Enqueue(New Point(ptcell.X - 1, ptcell.Y))
                    queueCells.Enqueue(New Point(ptcell.X + 1, ptcell.Y))
                    queueCells.Enqueue(New Point(ptcell.X, ptcell.Y + 1))
                    queueCells.Enqueue(New Point(ptcell.X, ptcell.Y - 1))
                End If
            End If
        End If
    End While
End Sub
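Incidentally, if you want to keep the original depth-first fill order, the same trick works with a Stack<Point> instead of a Queue<Point> — the pending points still live on the heap rather than on the call stack. A C# sketch:
Stack<Point> stackCells = new Stack<Point>();
stackCells.Push(ptcell);
while (stackCells.Count > 0)
{
    ptcell = stackCells.Pop();
    // ...same bounds check, DrawACell call, and four Push calls as above
}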
See also:
Tail Recursion
Cartoon animation program
Comment/Uncomment code to switch versions quickly without using macros
Kids know how to use crayons and a coloring book. How do you write such a program?
In my last post (Which pixels do you turn on when you draw a line?) I showed how to draw a line. Now suppose you have some lines or shapes already drawn. How would you write code to fill in an area bounded by the drawn pixels?
IOW, imagine you’ve drawn a circle. You want to fill the circle with a color. Right click inside. What code should run? What if the figure were more complex, like a large block “W” or a curled up snake.
More formally: Given an array of pixels that represents the outline of a shape, and a point’s x,y coordinate within that shape, how would you area fill that shape?
Perhaps you could write some code that checks all the pixels to the East, North, West, and South until it reaches a pixel that is already painted. Then what?
What sort of algorithm and data structure would you use?
Below are C# and VB versions of an implementation that isn't very efficient, but is quite simple.
The drawing parts of the code are identical to the last post; the differences are delimited by "AREA". This makes it easier for you to cut/paste the code.
Start Visual Studio 2008
Choose File->New->Project->C# or VB->Windows Forms Application.
Choose View->Code
Paste in the VB or C# version of the code below, hit F5 to run it. Draw a shape, right click to fill.
How would you improve it? Why isn’t it efficient?
Try making the form bigger and paint more pixels. What happens?
Clue: look at the source code for MineSweeper that I wrote to take advantage of the (then) new Collections feature. It's part of the Task Pane for Visual Foxpro 9.0.
See also: Remove double spaces from pasted code samples in blog
<C# Sample>
#define AREA
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace WindowsFormsApplication1
{
    public partial class Form1 : System.Windows.Forms.Form
    {
        Size m_numCells = new Size(350, 200); // we'll use an array of Cells
        Boolean[,] m_cells; // the array of cells: whether they've been drawn or not
        Size m_cellSize = new Size(8, 8); // cell height & width
        Size m_Offset = new Size(0, 0);
        bool m_MouseDown = false;
        Button btnErase;
        Point? m_PtOld;
        SolidBrush m_brushMouse = new SolidBrush(Color.Red);
        SolidBrush m_brushGenerated = new SolidBrush(Color.Black);
        delegate bool DrawCellDelegate(Point ptcell, Brush br);
        Graphics m_oGraphics;

        public Form1()
        {
            this.Load += new EventHandler(this.Loaded);
        }

        void Loaded(Object o, EventArgs e)
        {
            this.Width = 600;
            this.Height = 400;
            this.btnErase = new Button();
            this.btnErase.Text = "&Erase";
            this.btnErase.Click += new EventHandler(this.btnErase_Click);
            this.Controls.Add(this.btnErase);
            this.BackColor = Color.White;
            btnErase_Click(null, null);
        }

        void btnErase_Click(object sender, System.EventArgs e)
        {
            m_oGraphics = Graphics.FromHwnd(this.Handle);
            m_numCells.Width = this.Width / m_cellSize.Width;
            m_numCells.Height = this.Height / m_cellSize.Height;
            m_cells = new Boolean[m_numCells.Width, m_numCells.Height];
            m_oGraphics.FillRectangle(Brushes.White, new Rectangle(0, 0, this.Width, this.Height));
        }

        Point PointToCell(Point p1)
        {
            Point ptcell = new Point(
                (p1.X - m_Offset.Width) / m_cellSize.Width,
                (p1.Y - m_Offset.Height) / m_cellSize.Height);
            return ptcell;
        }

        protected override void OnMouseDown(MouseEventArgs e)
        {
            if (e.Button == MouseButtons.Left)
            {
                m_MouseDown = true;
                m_PtOld = new Point(e.X, e.Y);
                CheckMouseDown(e);
            }
#if AREA
            else
            {
                AreaFill(PointToCell(new Point(e.X, e.Y)));
            }
#endif
        }

        protected override void OnMouseMove(MouseEventArgs e)
        {
            if (m_MouseDown)
            {
                CheckMouseDown(e);
            }
        }

        protected override void OnMouseUp(MouseEventArgs e)
        {
            m_MouseDown = false;
        }

        void CheckMouseDown(MouseEventArgs e)
        {
            Point ptMouse = new Point(e.X, e.Y);
            Point ptcell = PointToCell(ptMouse);
            if (ptcell.X >= 0 && ptcell.X < m_numCells.Width &&
                ptcell.Y >= 0 && ptcell.Y < m_numCells.Height)
            {
                DrawLineOfCells(PointToCell(m_PtOld.Value), ptcell, new DrawCellDelegate(DrawACell));
            }
            m_PtOld = ptMouse;
        }

        bool DrawACell(Point ptcell, Brush br)
        {
            bool fDidDraw = false;
            if (!m_cells[ptcell.X, ptcell.Y]) // if not drawn already
            {
                m_cells[ptcell.X, ptcell.Y] = true;
                //* filled cells (delete the first "/" on this line to switch to outlined cells)
                m_oGraphics.FillRectangle(br,
                    m_Offset.Width + ptcell.X * m_cellSize.Width,
                    m_Offset.Height + ptcell.Y * m_cellSize.Height,
                    m_cellSize.Width,
                    m_cellSize.Height);
                /*/
                m_oGraphics.DrawRectangle(new Pen(Color.Blue, 1),
                    m_Offset.Width + ptcell.X * m_cellSize.Width,
                    m_Offset.Height + ptcell.Y * m_cellSize.Height,
                    m_cellSize.Width,
                    m_cellSize.Height);
                //*/
                fDidDraw = true;
            }
            return fDidDraw;
        }

        void DrawLineOfCells(Point p1, Point p2, DrawCellDelegate drawit)
        {
            // Bresenham-style line: draw the cells between p1 and p2
            Brush br = m_brushMouse;
            int x0 = p1.X;
            int y0 = p1.Y;
            int x1 = p2.X;
            int y1 = p2.Y;
            int x, cx, deltax, xstep,
                y, cy, deltay, ystep,
                error;
            bool st;
            // find largest delta for pixel steps
            st = (Math.Abs(y1 - y0) > Math.Abs(x1 - x0));
            // if deltay > deltax then swap x,y
            if (st)
            {
                x0 ^= y0; y0 ^= x0; x0 ^= y0; // swap(x0, y0);
                x1 ^= y1; y1 ^= x1; x1 ^= y1; // swap(x1, y1);
            }
            deltax = Math.Abs(x1 - x0);
            deltay = Math.Abs(y1 - y0);
            error = deltax / 2;
            y = y0;
            xstep = (x0 > x1) ? -1 : 1;
            ystep = (y0 > y1) ? -1 : 1;
            for (x = x0; x != x1 + xstep; x += xstep)
            {
                cx = x; cy = y; // copy of x,y
                if (st) // if we swapped x,y above, swap the copies back
                {
                    cx ^= cy; cy ^= cx; cx ^= cy;
                }
                if (drawit(new Point(cx, cy), br))
                {
                    br = m_brushGenerated;
                }
                error -= deltay; // converge toward end of line
                if (error < 0)
                { // not done yet
                    y += ystep;
                    error += deltax;
                }
            }
        }

#if AREA
        SolidBrush m_brushFill = new SolidBrush(Color.Blue);
        Color m_oColor = Color.Black;

        void AreaFill(Point ptcell)
        {
            /*
            // heap-based version: a queue holds the pending points instead of the call stack
            Queue<Point> queueCells = new Queue<Point>();
            queueCells.Enqueue(ptcell);
            while (queueCells.Count > 0)
            {
                ptcell = queueCells.Dequeue();
                if (ptcell.X >= 0 && ptcell.X < m_numCells.Width)
                {
                    if (ptcell.Y >= 0 && ptcell.Y < m_numCells.Height)
                    {
                        if (DrawACell(ptcell, m_brushFill)) // if not drawn already
                        {
                            queueCells.Enqueue(new Point(ptcell.X - 1, ptcell.Y));
                            queueCells.Enqueue(new Point(ptcell.X + 1, ptcell.Y));
                            queueCells.Enqueue(new Point(ptcell.X, ptcell.Y + 1));
                            queueCells.Enqueue(new Point(ptcell.X, ptcell.Y - 1));
                        }
                    }
                }
            }
            /*/
            // recursive version: simple, but deep recursion eats the thread's stack
            if (ptcell.X >= 0 && ptcell.X < m_numCells.Width)
            {
                if (ptcell.Y >= 0 && ptcell.Y < m_numCells.Height)
                {
                    if (DrawACell(ptcell, m_brushFill)) // if not drawn already
                    {
                        AreaFill(new Point(ptcell.X - 1, ptcell.Y));
                        AreaFill(new Point(ptcell.X + 1, ptcell.Y));
                        AreaFill(new Point(ptcell.X, ptcell.Y + 1));
                        AreaFill(new Point(ptcell.X, ptcell.Y - 1));
                    }
                }
            }
            //*/
        }
#endif
    }
}
</C# Sample>
<VB Sample>
#Const AREA = True

Public Class Form1
    Dim m_numCells = New Size(350, 300) ' we'll use an array of cells
    Dim m_cells(,) As Boolean ' the array of cells: whether they've been drawn or not
    Dim m_cellSize = New Size(8, 8) ' cell size & width
    Dim m_Offset = New Size(0, 0)
    Dim m_MouseDown = False
    Dim WithEvents btnErase As Button
    Dim m_PtOld As Point?
    Dim m_brushGenerated = New SolidBrush(Color.Black)
    Dim m_brushMouse = New SolidBrush(Color.Red)
    Dim m_oGraphics As Graphics
    Delegate Function DrawCellDelegate(ByVal ptCell As Point, ByVal br As Brush) As Boolean

    Sub Form_Load() Handles Me.Load
        Me.Width = 600
        Me.Height = 400
        Me.btnErase = New Button()
        Me.btnErase.Text = "&Erase"
        Me.Controls.Add(Me.btnErase)
        Me.BackColor = Color.White
        btnErase_Click()
    End Sub

    Sub btnErase_Click() Handles btnErase.Click
        m_oGraphics = Graphics.FromHwnd(Me.Handle)
        m_numCells.Width = Me.Width / m_cellSize.Width
        m_numCells.Height = Me.Height / m_cellSize.Height
        ReDim m_cells(m_numCells.Width, m_numCells.Height)
        m_oGraphics.FillRectangle(Brushes.White, New Rectangle(0, 0, Me.Width, Me.Height))
    End Sub

    Function PointToCell(ByVal p1 As Point) As Point
        Dim ptcell = New Point( _
            (p1.X - m_Offset.Width) / m_cellSize.Width, _
            (p1.Y - m_Offset.Height) / m_cellSize.Height)
        Return ptcell
    End Function

    Protected Overrides Sub OnMouseDown(ByVal e As System.Windows.Forms.MouseEventArgs)
        If e.Button = Windows.Forms.MouseButtons.Left Then
            m_MouseDown = True
            m_PtOld = New Point(e.X, e.Y)
            CheckMouseDown(e)
#If AREA Then
        Else
            AreaFill(PointToCell(New Point(e.X, e.Y)))
#End If
        End If
    End Sub

    Protected Overrides Sub OnMouseMove(ByVal e As System.Windows.Forms.MouseEventArgs)
        If m_MouseDown Then
            CheckMouseDown(e)
        End If
    End Sub

    Protected Overrides Sub OnMouseUp(ByVal e As System.Windows.Forms.MouseEventArgs)
        m_MouseDown = False
    End Sub

    Sub CheckMouseDown(ByVal e As MouseEventArgs)
        Dim ptMouse = New Point(e.X, e.Y)
        Dim ptcell = PointToCell(ptMouse)
        If (ptcell.X >= 0 And ptcell.X < m_numCells.Width And _
            ptcell.Y >= 0 And ptcell.Y < m_numCells.Height) Then
            DrawLineOfCells(PointToCell(m_PtOld.Value), ptcell, New DrawCellDelegate(AddressOf DrawACell))
        End If
        m_PtOld = ptMouse
    End Sub

    Function DrawACell(ByVal ptCell As Point, ByVal br As Brush) As Boolean
        Dim fDidDraw = False
        If Not m_cells(ptCell.X, ptCell.Y) Then ' if not drawn already
            m_cells(ptCell.X, ptCell.Y) = True
            m_oGraphics.FillRectangle(br, _
                m_Offset.Width + ptCell.X * m_cellSize.Width, _
                m_Offset.Height + ptCell.Y * m_cellSize.Height, _
                m_cellSize.Width, _
                m_cellSize.Height)
            fDidDraw = True
        End If
        Return fDidDraw
    End Function

    Sub DrawLineOfCells(ByVal p0 As Point, ByVal p1 As Point, ByVal drawit As DrawCellDelegate)
        ' Bresenham-style line: draw the cells between p0 and p1
        Dim br = m_brushMouse
        Dim x0 = p0.X
        Dim y0 = p0.Y
        Dim x1 = p1.X
        Dim y1 = p1.Y
        Dim fSwapped = False
        Dim dx = Math.Abs(x1 - x0)
        Dim dy = Math.Abs(y1 - y0)
        If dy > dx Then
            fSwapped = True ' swap x0<=>y0, x1<=>y1
            x0 = p0.Y
            y0 = p0.X
            x1 = p1.Y
            y1 = p1.X
            dx = Math.Abs(x1 - x0)
            dy = Math.Abs(y1 - y0)
        End If
        Dim err = CInt(dx / 2)
        Dim y = y0
        Dim xstep = 1
        If x0 > x1 Then
            xstep = -1
        End If
        Dim ystep = 1
        If y0 > y1 Then
            ystep = -1
        End If
        Dim x = x0
        While x <> x1 + xstep
            Dim cx = x, cy = y ' copy of x,y
            If fSwapped Then
                cx = y
                cy = x
            End If
            If drawit(New Point(cx, cy), br) Then ' if it wasn't already drawn
                br = m_brushGenerated
            End If
            err -= dy
            If err < 0 Then
                y += ystep
                err += dx
            End If
            x += xstep
        End While
    End Sub

#If AREA Then
    Dim m_brushFill = New SolidBrush(Color.Blue)
    Dim m_oColor = Color.Black

    Sub AreaFill(ByVal ptcell As Point)
        If ptcell.X >= 0 And ptcell.X < m_numCells.Width Then
            If ptcell.Y >= 0 And ptcell.Y < m_numCells.Height Then
                If DrawACell(ptcell, m_brushFill) Then ' if not drawn already
                    AreaFill(New Point(ptcell.X - 1, ptcell.Y)) ' West
                    AreaFill(New Point(ptcell.X + 1, ptcell.Y)) ' East
                    AreaFill(New Point(ptcell.X, ptcell.Y + 1)) ' South
                    AreaFill(New Point(ptcell.X, ptcell.Y - 1)) ' North
                End If
            End If
        End If
    End Sub
#End If
End Class
</VB Sample>
When I wrote my cartoon animation program almost 30 years ago (see Cartoon animation program) I needed to know how to draw a line.
Of course, nowadays, we just call a library function that will draw a line given two points.
If you think about it, the problem is quite complex. Imagine a rectangular array of pixels. Which ones do you paint in order to “see” a straight line?
If the desired line is horizontal, or vertical, the problem is simple. However, choosing which pixels to draw for an oblique line is an interesting mathematical problem: see Bresenham's line algorithm - Wikipedia, the free encyclopedia
Even back then, I found Bresenham’s algorithm in a book and implemented it for my cartoon program.
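The heart of the algorithm is integer error accumulation: step one cell at a time along the longer axis, and whenever the accumulated error tips over, take one step along the shorter axis. Here's a compact, self-contained C# sketch of that idea (my own illustration for this post, not the code from the old cartoon program):
using System;
using System.Collections.Generic;
using System.Drawing;

static class Bresenham
{
    // Returns the cells to light up for a line from (x0,y0) to (x1,y1).
    public static List<Point> Line(int x0, int y0, int x1, int y1)
    {
        List<Point> points = new List<Point>();
        bool steep = Math.Abs(y1 - y0) > Math.Abs(x1 - x0);
        if (steep) { Swap(ref x0, ref y0); Swap(ref x1, ref y1); } // walk along the longer axis
        int dx = Math.Abs(x1 - x0), dy = Math.Abs(y1 - y0);
        int xstep = (x0 < x1) ? 1 : -1, ystep = (y0 < y1) ? 1 : -1;
        int error = dx / 2, y = y0;
        for (int x = x0; x != x1 + xstep; x += xstep)
        {
            points.Add(steep ? new Point(y, x) : new Point(x, y)); // swap back if we swapped above
            error -= dy; // accumulate the slope as an integer error term
            if (error < 0) { y += ystep; error += dx; } // time for a step on the minor axis
        }
        return points;
    }

    static void Swap(ref int a, ref int b) { int t = a; a = b; b = t; }
}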
Suppose you want to create a simple drawing program. When you move the mouse, you can handle the MouseMove event, and light up the pixel at the position of the mouse.
However, this will only create a dotted line, with fewer dots if the mouse moved quickly.
Let’s see if we can use Bresenham’s algorithm to light up the pixels between successive mouse events.
This sample uses larger "pixels", so you can actually see them as squares. In the code I called them cells, mainly because I stole a lot of the code from my game of Life (see Cellular Automata: The Game of Life).
The actual mouse down causes a red pixel to be drawn, and the generated ones are black.
Below are C# and VB versions.
Paste in the VB or C# version of the code below, hit F5 to run it.
Move the mouse slowly and you’ll see more red pixels. Quickly, and you’ll see more black generated pixels.
Note how the code needs to distinguish between 2 sets of x-y coordinates: actual pixels, and cell coordinates.
Try adjusting the Cell size.
Think about what a right click would do.
(The C# and VB samples for this post survived only as fragments here. They are the drawing portions of the samples shown in the area-fill post above — everything outside the AREA-delimited sections.)
In a typical day, I write or debug programs in several languages: typically Foxpro, C#, VB, C++ and 32 bit assembly, with an occasional MSIL, IDL and 64 bit ASM thrown in.
Sometimes, I like to switch between one version of code and another. This is useful if I want to do side by side comparisons of behavior.
One way to do this is with preprocessor macros, like this:
#if SomeValue
<one version of code>
#else
<another version>
#endif
However, that’s a fair amount of typing.
There’s a shortcut that works with C# and C++ style comments.
In these languages, a line that starts with “//” is a comment.
Also, a block comment (which can span multiple lines) starts with “/*” and ends with “*/”
//*
int foo1() {
    int x = 2;
    Console.WriteLine((new System.Diagnostics.StackTrace().GetFrames()[0].GetMethod().Name)); // shows foo1
    return x;
}
/*/
int foo2() {
    int x = 3;
    Console.WriteLine((new System.Diagnostics.StackTrace().GetFrames()[0].GetMethod().Name)); // shows foo2
    return x;
}
// */
With a single character change I can switch between foo1 and foo2: just delete the very first “/”. That changes the single line comment into a block comment. The “*/” of the “/*/” now acts like the end of the comment block.
Using an editor that colors the code (like Visual Studio) shows the switch properly
/*
int foo1() {
    int x = 2;
    Console.WriteLine((new System.Diagnostics.StackTrace().GetFrames()[0].GetMethod().Name)); // shows foo1
    return x;
}
/*/
int foo2() {
    int x = 3;
    Console.WriteLine((new System.Diagnostics.StackTrace().GetFrames()[0].GetMethod().Name)); // shows foo2
    return x;
}
// */
This technique is useful when creating sample code for others to play with, such as in my next blog post.
Several years ago, my wife and I were walking through a local shopping mall. At the time, there was some sort of Asian festival. At a display booth there was a table upon which were two trays, side by side. One was empty, and the other had many beans. The sign challenged visitors to see how many beans could be moved to the empty tray with chopsticks in one minute. A piece of paper indicated the top score so far: something like 10. I imagined how somebody could have spent 6 seconds per bean…
My wife, being quite adept with the tool, was able to get a respectable number of beans across, despite the difficulty of picking up a single slippery bean.
Then she challenged me, having used chopsticks all my life, to see if I could beat her score.
To her chagrin, I deftly wielded the sticks, one in each hand, horizontally in parallel to scoop up dozens of beans at a time, crushing the day’s high score. Of course this wasn’t the normal way most people use chopsticks, but then, it easily was the best solution to the problem at hand.
(I'm sure some of you are thinking that a better solution would be to pick up the full tray and pour the beans onto the empty tray, but how hard would it be to lift the tray with chopsticks?)
At a recent social gathering, we were divided into groups to solve some word puzzles: given a fairly long word, how many words can be formed from the letters of the original word before the timer rang.
Many people would write out the given word as soon as the timer started like so:
E S T A B L I S H M E N T
so that others in the group could think up words.
When it was my turn to write the word, I wrote it like this:
E S T A
B L I S H
M E N T
This gave our group 2 dimensions along which to see letter combinations to form words.
My kids had a toy recently, and it had a box into which we had to insert batteries. The box had very tiny Philips screws on it, and my kids tried various small screwdrivers, even my set of jeweler’s drivers. I could get the screws to turn, but they never came out. It turns out that the screws were decorative: just pull the top off the box<sigh>.
(In the old days, no battery operated toys protected their owners from the hazardous battery voltages. Nowadays, probably due to some lawsuit, every compartment seems to need to protect kids from 1.5 volts)
A phone number challenge
A Discounter Introduces Reductions: Multiple Anagrams
Carburetor is a car part, but prosecutable is not
Write your own hangman game
Create your own Word Search puzzles
The Nametag Game
Create your own typing tutor!
A cartoon can be thought of as a series of drawings. To simulate movement, the drawings can be slightly different from each other.
Remember drawing simple cartoons using a pad of paper? Simply flipping through the pages made the drawings come to life.
This was tedious work: a computer can help.
Just after first IBM PC came out in Aug 81, I wrote a cartoon animation program in C++ and assembly code. The concept was very simple: just use the mouse (I had to write my own mouse driver for a RS-232 serial mouse and had to hijack the COM1 port for my DOS program) to let the user draw some lines on the screen. Then the user could save those lines as a cartoon frame, and then draw another frame. The program could then calculate multiple frames in between the user created frames, creating smooth animation.
That early 80's version of cartoon still runs on XP (although it writes directly to the video memory so it requires full screen mode.
It doesn't run on Vista (I think you can download a DOS compatible window to make it work.)
A zip file of the 80's version is available here. Unzip the contents to a folder (check out the date stamps!), then run cartoon.exe and hit Shift-A.
It includes some stored samples, like flying birds, alphabet, basketball. Because this was written before Windows, it won't work with your mouse. You can see the main menu bar at the top. The only thing you can do with this program is run the stored samples by hitting Shift-A. Q will Quit. Try typing other chars to invoke various commands.
If you get it to run on Vista, please let me know the details.
About 10 years ago, I wrote another version of Cartoon in Foxpro and it was modified and published as a Solution Sample.
Start Visual Foxpro, Task Pane, Solution Samples. In the "Search for sample" text, box, type in Animation. "Display line animation in a form"
You can run the form, or click on the button on the right of the task pane that opens the form in the Form Designer, so you can see the source code.
You can also open the form in the Class Browser, then export the code into a single file using the "View Class Code" button.
Foxpro excerpt of the inbetween algorithm (notice how cursors (in-memory data tables) are used)
SELECT (lcTable)
DO WHILE !EOF("shadow")
mr = recno()
mr2 = recno("shadow")
FOR nb = 0 TO nBetween
THISFORMSET.frmAnimation.cls
GO mr
IF mr2 < RECCOUNT("shadow")
GO mr2 IN shadow
ENDIF
nFrames1 = &lcTable..frameno
nFrames2 = shadow.frameno
SCAN WHILE &lcTable..frameno = nFrames1
nx1 = &lcTable..x1 + nb * (shadow.x1 - &lcTable..x1) / nBetween
ny1 = &lcTable..y1 + nb * (shadow.y1 - &lcTable..y1) / nBetween
nx2 = &lcTable..x2 + nb * (shadow.x2 - &lcTable..x2) / nBetween
ny2 = &lcTable..y2 + nb * (shadow.y2 - &lcTable..y2) / nBetween
THISFORMSET.frmAnimation.line(nx1,ny1,nx2,ny2)
IF !EOF("shadow")
SKIP IN shadow
IF shadow.frameno # nFrames2
SKIP -1 IN shadow
ENDIF
ENDIF
ENDSCAN
SELECT shadow
IF !EOF()
SKIP
LOCATE REST FOR shadow.Frameno # nFrames2
SELECT (lcTable)
wait wind "" time .05
ENDFOR
ENDDO
USE IN shadow
SCAN REST
THISFORMSET.frmAnimation.line(x1,y1,x2,y2)
ENDSCAN
THISFORMSET.frmAnimation.frameno = nFrames1 + 1
Below is a more up to date example using WPF. The equivalent animation code is in tmr_tick().
Try running it, hitting the Demo button.
Start Visual Studio 2008.
Choose File->New Project->Visual Basic->WPF Application.
(This also works with Temporary projects)
Open Window1.xaml.vb. Paste in the code below, then hit F5 to run.
I had my wife and kids playing with it for quite a while!
Try adding a frame or MouseWheel while a cartoon is playing.
Try right click and then draw: it changes the way you draw.
Experiment with using a touchpad and a mouse for drawing.
Try using variable frame rates. Add a feature to save/restore the current cartoon.
Try drawing your name, or the letters of the alphabet.
My original had features like copy/paste from frames, color fill
My toys over the years
Why was the original IBM PC 4.77 Megahertz?
<Cartoon Code>
Class Window1
Private WithEvents btnNewFrame As Button
Private WithEvents btnErase As Button
Private WithEvents btnPlay As Button
Private WithEvents btnDemo As Button
Private WithEvents btnReset As Button
Private txtStatus As TextBlock
Private _AnimControl As AnimControl
Sub Load() Handles MyBase.Loaded
Me.Width = 800
Me.Height = 600
Dim xaml = _
<DockPanel
xmlns=""
xmlns:
<StackPanel Background="Transparent" Orientation="Vertical" DockPanel.
<TextBlock>Draw to create lines for a cartoon frame. Add a new frame, hit play</TextBlock>
<TextBlock>Rebirth of Calvin's cartoon program circa 1982
<Hyperlink></Hyperlink>
</TextBlock>
</StackPanel>
<Border DockPanel.
<StackPanel Orientation="Horizontal">
<Button Name="btnNewFrame" ToolTip="Add current drawing to cartoon, so you can create a new one">_New Frame</Button>
<Button Name="btnErase">_Erase</Button>
<Button Name="btnPlay" ToolTip="Animate the current frames or stop animation">_Play</Button>
<Button Name="btnDemo">_Demo</Button>
<Button Name="btnReset" ToolTip="Erase all frames">_Reset</Button>
<TextBox Name="txtBetween" Text="{Binding Path=txtBetween.text}"></TextBox>
<TextBlock Name="txtStatus"></TextBlock>
</StackPanel>
</Border>
<UserControl Name="MyCtrl"/>
</DockPanel>
Dim dPanel = CType(System.Windows.Markup.XamlReader.Load(xaml.CreateReader), DockPanel)
Dim MyCtrl = CType(dPanel.FindName("MyCtrl"), UserControl)
_AnimControl = New AnimControl(Me)
MyCtrl.Content = _AnimControl
btnPlay = CType(dPanel.FindName("btnPlay"), Button)
btnNewFrame = CType(dPanel.FindName("btnNewFrame"), Button)
btnDemo = CType(dPanel.FindName("btnDemo"), Button)
btnErase = CType(dPanel.FindName("btnErase"), Button)
btnReset = CType(dPanel.FindName("btnReset"), Button)
txtStatus = CType(dPanel.FindName("txtStatus"), TextBlock)
Me.Content = dPanel
Sub btnNewFrame_Click() Handles btnNewFrame.Click
_AnimControl.NewFrame()
RefreshStatus()
Sub btnPlay_Click() Handles btnPlay.Click
btnNewFrame_Click() ' save any currently drawn changes first
_AnimControl.Play()
Sub btnDemo_Click() Handles btnDemo.Click
_AnimControl.Demo()
_AnimControl.EraseBtn()
Sub btnReset_Click() Handles btnReset.Click
_AnimControl.Reset()
Friend Sub RefreshStatus()
Me.txtStatus.Text = String.Format("Frame count = {0} CurLineCnt = {1} CurFrame= {2} Between = {3}", _
_AnimControl._UserFrameList.Count, _AnimControl._CurLineList.Count, _
_AnimControl._ndxUserFrame, _AnimControl._nBetween)
Public Class AnimControl
Inherits FrameworkElement
Private WithEvents _timer As New System.Windows.Threading.DispatcherTimer
Private _Window1 As Window1
Friend _nBetween As Integer = 10 ' # of frames being calc'd between user frames
Friend _ndxUserFrame As Integer ' index into user created frames.
Private _ndxBetween As Integer ' from 0 to nBetween
Friend _nBetweenDyn As Integer = 0 ' # to add to _nBetween for next animation: adjustable by mousewheel
Private _ptCurrent As Point?
Private _ptOld As Point?
Private _fPenDown As Boolean
Private _oPen = New Pen(Brushes.Black, 2)
Private _PenModeDrag As Boolean = True ' Click to create line segs, or continuous drag to create multiple segs
' lines to draw for current image: could be while composing, or playing. Could be real frame or calc'd frame
Friend _CurLineList As New List(Of cFrameLine)
'Frames stored by user
Friend _UserFrameList As New List(Of cCartoonFrame)
Sub New(ByVal w As Window1)
_Window1 = w
Sub Reset()
Me._timer.IsEnabled = False 'stop playback, if any
Me._UserFrameList.Clear() ' erase all user data
Me._nBetweenDyn = 0
EraseBtn()
Sub EraseBtn() ' erase current frame
_CurLineList.Clear()
Me._ptOld = Nothing
Me._fPenDown = False
Me.InvalidateVisual()
Sub Demo()
Reset()
Me._CurLineList.Add(New cFrameLine(New Point(10, 10), New Point(10, 300)))
Me._CurLineList.Add(New cFrameLine(New Point(10, 300), New Point(300, 300)))
Me._CurLineList.Add(New cFrameLine(New Point(300, 300), New Point(300, 10)))
Me._CurLineList.Add(New cFrameLine(New Point(300, 10), New Point(10, 10)))
Me._UserFrameList.Add(New cCartoonFrame(Me._CurLineList, 30))
Me._CurLineList.Clear() ' reset for next frame
Me._CurLineList.Add(New cFrameLine(New Point(10, 10), New Point(300, 10)))
Me._CurLineList.Add(New cFrameLine(New Point(300, 10), New Point(300, 300)))
Me._CurLineList.Add(New cFrameLine(New Point(300, 300), New Point(10, 300)))
Me._CurLineList.Add(New cFrameLine(New Point(10, 300), New Point(10, 10)))
Me._UserFrameList.Add(New cCartoonFrame(Me._CurLineList, 50))
Me._UserFrameList.Add(New cCartoonFrame(Me._CurLineList, 10))
Play()
Sub NewFrame()
If _CurLineList.Count > 0 Then
Dim curFrame = New cCartoonFrame(_CurLineList, 10)
_UserFrameList.Add(curFrame)
EraseBtn()
Friend Sub Play()
If _UserFrameList.Count < 2 Then
MsgBox("Need at least 2 frames to animate")
Return
If _timer.IsEnabled Then ' if we're already playing, stop
_timer.IsEnabled = False
_timer.Interval = New TimeSpan(0, 0, 0, 0, 50) ' days,hrs,mins,secs,msecs
_timer.IsEnabled = True
Me._ndxUserFrame = 0
Me._ndxBetween = 1 ' 1st is drawn now, next by timer tick
Me._CurLineList.Clear()
Me._CurLineList.AddRange(Me._UserFrameList(0)._Lines) 'get the 1st frame
Me.InvalidateVisual() ' show it
Sub tmr_tick() Handles _timer.Tick ' let's do the animating
If _ndxUserFrame = Me._UserFrameList.Count - 1 Then ' we've reached the end: let's restart
Me._ndxUserFrame = 0
Me._ndxBetween = 0
Dim frmLeft = Me._UserFrameList(Me._ndxUserFrame) ' the frame on the left
Dim frmRight = Me._UserFrameList(Me._ndxUserFrame + 1) ' the frame on the right
_nBetween = Math.Max(0, frmLeft._nBetween + Me._nBetweenDyn) ' recorded value plus mousewheel adjustment
Dim nLinesToDraw = Math.Max(frmLeft._Lines.Count, frmRight._Lines.Count) - 1
For ndx = 0 To nLinesToDraw ' calc the lines to draw
Dim lineLeft = frmLeft._Lines(Math.Min(ndx, frmLeft._Lines.Count - 1))
Dim lineRight = frmRight._Lines(Math.Min(ndx, frmRight._Lines.Count - 1))
Dim pt0 As New Point With _
{.X = lineLeft.pt0.X + Me._ndxBetween * (lineRight.pt0.X - lineLeft.pt0.X) / (_nBetween + 1), _
.Y = lineLeft.pt0.Y + Me._ndxBetween * (lineRight.pt0.Y - lineLeft.pt0.Y) / (_nBetween + 1)}
Dim pt1 As New Point With _
{.X = lineLeft.pt1.X + Me._ndxBetween * (lineRight.pt1.X - lineLeft.pt1.X) / (_nBetween + 1), _
.Y = lineLeft.pt1.Y + Me._ndxBetween * (lineRight.pt1.Y - lineLeft.pt1.Y) / (_nBetween + 1)}
Dim newLine = New cFrameLine(pt0, pt1)
Me._CurLineList.Add(newLine)
If Me._ndxBetween > Me._nBetween Then ' we've reached the right
Me._ndxUserFrame += 1 ' advance to next user frame
Me._ndxBetween = 0 ' we don't want to redraw frmRight when it becomes frmLeft
Me._ndxBetween += 1 ' advance to next frame
Protected Overrides Sub OnRender(ByVal drawingContext As System.Windows.Media.DrawingContext)
drawingContext.DrawRectangle(Brushes.AliceBlue, New Pen(Brushes.Purple, 1), New Rect(0, 0, Me.RenderSize.Width, Me.RenderSize.Height))
For Each fr In Me._CurLineList ' draw the lines in the current frame
drawingContext.DrawLine(_oPen, fr.pt0, fr.pt1)
If Me._fPenDown Then
If Me._ptOld.HasValue Then
drawingContext.DrawLine(_oPen, Me._ptOld, Me._ptCurrent)
Else
_Window1.RefreshStatus()
Protected Overrides Sub OnMouseDown(ByVal e As System.Windows.Input.MouseButtonEventArgs) | http://blogs.msdn.com/calvin%5Fhsia/ | crawl-002 | refinedweb | 5,863 | 52.76 |
PycURL: Guide to Using cUrl With Python
In this guide for The Python Web Scraping Playbook, use cURL in Python using PycURL.
PycURL is a Python interface to libcurl. PycURL is targeted at an advanced developer as it exposes most of the functionality that libcurl has to offer. Making it great if you need dozens of concurrent, fast and reliable connections or any of the sophisticated features that libcurl offers.
In this guide we will walk you through:
- Why Use PycURL?
- Installing PycURL
- Making GET Requests With PycURL
- Making POST Requests With PycURL
- Follow Redirects With PycURL
- Setting Headers & User Agents
- Using Proxies With PycURL
Let's begin...
Why Use PycURL?
PycURL is a thin Python layer over libcurl, the multiprotocol file transfer library that gives you deep low-level control on how you make requests.
As PycURL is a relatively thin layer over libcurl, it doesn't have the nice Pythonic class hierarchies and user experience features that you are standard across many other Python HTTP client libraries like Python Requests, Python HTTPX, and Python aiohttp. Giving it a steeper learning curve.
However, what PycURL and libcurl lack in ease of use, it more than makes up for in its feature set and customisability.
- Multiple Protocols: PycURL does not only support HTTP/HTTPS but DICT, FILE, FTP, FTPS, Gopher, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP.
- Speed: PycURL has been shown to be is several times faster than Requests during benchmarking.
- More Features + Low Level Control: PycURL gives you more low level control and features like the ability to use several TLS backends, more authentication options, and I/O multiplexing via the libcurl multi interface.
A HTTP client library like Python Requests is generally easier to learn and use than PycURL. Meaning PycURL is better suited to more advanced use cases which require more fine grained control over how requests are being made.
Installing PycURL
As PycURL installation requires some C extensions, PycURL’s installation can be complex (depending on your operating system). However, using pip should work for most users.
pip install pycurl
If this does not work, please see the PycURL Installation docs.
For a lot of applications you will also need to install Certifi, a library that provides SSL with Mozilla’s root certificates.
pip install certifi
PycURL does not provide security certificate bundles as they change overtime. Some operating systems do provide them, however, if not you can use the Certifi Python package. This will allow you access HTTPS servers.
Making GET Requests With PycURL
Making
GET requests to HTTP servers is more tricky with PycURL than with a library like Requests, however, it is still pretty straight-forward once you get the hang of it.'))
Let's go through this code step by step so that we understand what is happening here:
- PycURL Instance: Using
c = pycurl.Curl()we will create a PycURL Instance.
- Options: Use
setoptto set options like the URL we want to scrape. Full list of options here.
- Buffer: PycURL does not provide storage for network responses so we must setup a buffer
buffer = BytesIO()and instruct PycURL to write to that buffer
c.setopt(c.WRITEDATA, buffer).
- SSL Certs: Set the filename holding the SSL certificates using the certifi library.
- Make Request: To make the request we use
c.perform()and to close the connection we use
c.close().
- Content: We then need to retrieve and decode the response from the buffer we defined.
Accessing Response Details
To access details about the curl session in PycURL you need to use the
c.getinfo(). With it you can get information like the response status code, the final URL, etc.
To access this data you must use
c.getinfo() before you close the connection with
c.close().()
## Response Status Code
print('Response Code:', c.getinfo(c.RESPONSE_CODE))
## Final URL
print('Response URL:', c.getinfo(c.EFFECTIVE_URL))
## Cert Info
print('Response Cert Info:', c.getinfo(c.INFO_CERTINFO))
## Close Connection
c.close()
## Retrieve the content BytesIO & Decode
body = buffer.getvalue()
print(body.decode('iso-8859-1'))
Making POST Requests With PycURL
Your can also make
POST requests to servers with PycURL using the option
c.setopt(c.POSTFIELDS, post_data).
To use this we first need to encode the post body we want to send using
urlencode and then pass that to our PycURL instance.
By using
c.setopt(c.POSTFIELDS, post_data) we are telling PycURL to send the data with
Content-Type equal to
application/x-www-form-urlencoded.
import pycurl
import certifi
from io import BytesIO
from urllib.parse import urlencode
## Create PycURL instance
c = pycurl.Curl()
## Define Options - Set URL we want to request
c.setopt(c.URL, '')
# Setting POST Data + Encoding Data
post_body = {'test': 'value'}
post_data = urlencode(post_body)
c.setopt(c.POSTFIELDS, post_data)
##'))
To send the post body as JSON then we would need to set
'Content-Type: application/json' in the headers.
c.setopt(c.HTTPHEADER, ['Accept: application/json', 'Content-Type: application/json'])
More information on PycURLs
POST request functionality can be found here.
Follow Redirects With PycURL
By default PycURL doesn't follow redirects, however, you can enable redirect following by using the
FOLLOWLOCATION option.
# Follow Redirects
c.setopt(c.FOLLOWLOCATION, True)
Writing Data To Files
With PycURL we can write data directly to a file without having to decode it, if the file has been openned in binary mode.
import pycurl
"""
As long as the file is opened in binary mode, both Python 2 and Python 3
can write response body to it without decoding.
"""
with open('output.html', 'wb') as f:
c = pycurl.Curl()
c.setopt(c.URL, '')
c.setopt(c.WRITEDATA, f)
c.perform()
c.close()
Setting Headers & User Agents
To add headers and user-agents to your PycURL requests we just need to use the
HTTPHEADER option:
c.setopt(c.HTTPHEADER, ['Accept: application/json', 'User-Agent: Mozilla/5.0 (iPad; CPU OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148'])
Or if you just want to set the user-agent then you can use the
USERAGENT option.
c.setopt(c.USERAGENT, 'User-Agent: Mozilla/5.0 (iPad; CPU OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148')
Using Proxies With PycURL
PycURL also lets you route your requests through proxy servers if you would like to hide your IP address.
# Set Proxy
c.setopt(pycurl.PROXY, f"https://{host}:{port}")
# Proxy Auth (If Needed)
c.setopt(pycurl.PROXYUSERPWD, f"{username}:{password}")
# Set Proxy Type = "HTTPS"
c.setopt(pycurl.PROXYTYPE, 2)
# Set Proxy as Insecure If Required
c.setopt(c.PROXY_SSL_VERIFYHOST, 0)
c.setopt(c.PROXY_SSL_VERIFYPEER, 0)
More Web Scraping Tutorials
So that's an introduction to Python PycURL.
If you would like to learn more about Web Scraping, then be sure to check out The Web Scraping Playbook.
Or check out one of our more in-depth guides: | https://scrapeops.io/python-web-scraping-playbook/python-curl-pycurl/ | CC-MAIN-2022-40 | refinedweb | 1,153 | 57.27 |
James is a software developer in Scottsdale, Arizona, and a principal in the consulting company 30 Second Rule. He can be reached at [email protected]..
XML Parsers
Perhaps the best known XML parser is the DOM parser, so-named because it exposes content and structure through a Document Object Model (DOM) API. In this approach, an XML document is treated as a tree of nodes. Each node may be of one type or another, the most common being element and attribute nodes. Once a document has been parsed, programs may read or alter the document by selecting, adding, or modifying nodes. The DOM API is defined by a W3C specification, and many DOM parsers provide a means for validating a document against a schema or DTD.
A DOM parser is particularly good when the source document is relatively small. It is perhaps the simplest parser to work with, as many popular implementations allow the selection of specific nodes using XPath queries. Because the entire document is held in memory, it is often easier to do conditional selection than when using stream-based parsers. However, as document size increases, memory and speed requirements may become burdensome.
Shortcomings of the DOM parser, such as the requirement for loading the entire document into memory, led to the creation of the Simple API for XML (SAX) parser. It's not a formal specification, though the XML community treats the Java implementation as the authoritative reference. While the DOM API has a nice theoretical crispness, it can be awkward in practice. In contrast, a SAX parser defines relatively little, but provides a clean framework on which to build application-specific logic.
Rather than see an XML document as a tree of nodes, SAX treats the XML as a series of events. For example, a SAX parser reading this document:
<person><! Sample document >
<name type='full'>John Doe<name></person>
would report the events: start of document, start of element, comment, start of element, text, end of element, end of element, end of document.
With each event, a SAX parser invokes a method with a corresponding name (for example, in Java, the start of an element would trigger a call to startElement). You must specify the code for each of the possible event methods.
A SAX parser does not hold on to much state, reading in enough of the source XML to construct the next event. Once the next event occurs, the previous content is discarded. A result is that SAX parsers do not require lots of memory. Your application can happily parse multigigabyte XML files, with the primary limitation being time, not space.
The downside to this, though, is that if your application needs to track state, then you must manage it yourself. If you expect that you will often need to track much of the source document, you may be better off with a DOM parser. If, instead, your application is mainly interested in a specific subset of a document (and especially if this document is relatively large), then SAX may be ideal.
A pull parser is similar to a SAX parserminimal memory consumption, event based, constrained access to the whole documentbut it hands control to the application logic, while the invocation of each event in a SAX parser is driven by the parser. SAX code reacts to events, while pull-parser code requests events.
The Ruby XML Pull Parser
As of Version 1.8, the Ruby standard library includes Sean Russell's REXML, a pure-Ruby XML parser ( .germane-software.com/software/rexml/). REXML began life as an independent library inspired by Java's Electric XML parser (hence the name, Ruby Electric XML). What made Electric XML different was that it did not implement the W3C DOM, but an API that would be natural and intuitive to any Java developer.
In the same vein, REXML implements a DOM API consistent with Ruby itself. If you are familiar with Ruby, then using REXML to parse and manipulate XML will be second nature. As is common in Ruby, REXML DOM methods typically use built-in iterators, accept blocks, and follow snake_style (rather than lowerCamelCase) naming conventions.
In addition to a DOM parser, REXML also includes a stream parser and a pull parser. The stream parser essentially follows the SAX API, albeit with Ruby naming conventions.
Both the DOM and stream parsers are built on top of a base pull parser. The public pull-parser API largely delegates to the base parser, with a few added convenience methods. As such, the REXML pull parser is the most stable and robust of the available REXML APIs.
Examples
The REXML pull parser has a straightforward API. There are two main classes, PullParser and PullEvent. You create a parser instance by invoking PullParser.new, passing in an XML source. This source can be a string, an I/O handle, or any object that implements the REXML Source API. Using that last object, you can implement a source wrapper around arbitrary data sources, such as a database. The examples I focus on here use strings and files. Listing One (content1.xml) is the initial sample source file, while Listing Two (listing1.rb) is a program that reads this file and emits event information.
The primary parser method is pull; it fetches enough characters from the XML source to assemble and return a PullEvent object. Parsing XML source consists of repeatedly calling the pull method until either the code reaches the end of the source or the application's needs are satisfied. A simple way to do this is to use the each method to iterate over all events. The call to each takes a "block"a chunk of code that is executed for each item in the iteration.
Listing Two reads in an XML file, printing the event type of each event returned by the parser. A PullEvent object has only a handful of methods; it's a simple wrapper around the details of a segment of XML, such as a doctype, the start tag of an element, or a comment.
The event_type method returns a Symbol object corresponding to one of the 15 possible events. A Ruby symbol behaves much like a lightweight string constant. They are quite useful for defining fixed event names. So, in this example, when the parser encounters the beginning of the sample XML, the first call to event_type returns the symbol :xmldecl.
Listing Three (listing2.rb) can see if an event is a particular type by using one of the PullEvent Boolean methods such as start_element?, comment?, and text?. Because there is one for each event type, your code can selectively operate on, say, events produced from the start of elements.
This example introduces the other part of the PullEvent APIthe [ index ] method for retrieving event details. Think of a PullEvent as a data array with an assigned event type. The contents of that array vary with the type.
For both start_element and end_element events, the first item in the array is the element name. The second item for a start element event holds a hash of attribute name/value pairs. If the element has no attributes, then the hash will be empty.
With this basic information, you can write code to do conditional markup selection. Listing Four (listing3.rb) selects only those events for the start of text:p elements with a style attribute of "Standard." The is_standard_text_p? method runs a series of checks against a given event, returning false any time a conditional check fails. Ruby methods (with rare exceptions) return the value of the last expression evaluated.
You can build on this code to do some XML transformation. In Listing Five (listing4.rb), the source file is transformed so that all text:p elements with a "Standard" style are renamed simply to "p." As before, this code loops over every pull event, using a case expression to switch among behavior. The example is really only concerned with elements and text; all other events are passed on as comments, containing the textual dump of the event, provided by inspect.
Another method is added here to handle some of the reconstruction. The method attrs_to_s( attrs ) takes a hash and converts it to a string of attribute name and value pairs. Calling to_a on a Hash object converts it into a set of nested arrays. Each inner array holds the key and corresponding value from the original hash. The map method then replaces each inner array with a "name='value' " string; the call to join just adds them up into one space-delimited string.
As with the previous examples, Listing Five loops over all the pull events, and builds up a new XML string in the variable results. Aside from the new helper method, the code also introduces an array to track element names. When running a conditional examination of events based on attribute values, there is a problem: Only the start tag has the information needed for the conditional logic. But the corresponding end tag must be transformed as well. A simple way to track this is to push and pop element names in and out of a stack. Ruby arrays implement these methods by default, so we get our stack object for free.
In the interest of keeping the examples straightforward, the output XML is constructed by appending strings. This works fine for small cases, but for large, more complex XML production, you might prefer to use the node construction methods of the REXML DOM parser, or Jim Weirich's Builder, software/.
Using this basic model, you can construct transformation rules of assorted complexity, but packing the transformation logic inside of a case expression gets messy fast. An improvement is to use Ruby's dynamic nature to invoke methods based on the event types and characteristics. Ruby defines the send method, which takes a method name, followed by any arguments to that method, and invokes it. It is a nice way to call methods when the actual name is not known until runtime.
Listing Six (listing5.rb) invokes a method named after each type of pull event encountered. Because calling a nonexistent method raises an exception, all NoMethodError exceptions are quietly ignored.
This is a bit cleaner, but as the examples grow, the procedural coding style gets unwieldy. And while illustrative of the PullParser and PullEvent APIs, this is essentially following SAX-style processing. But the pull parser offers a chance to interact directly with the parser and event stack, which allows for some interesting processing options.
More Robust Transformation
You can take what I've presented so far and construct a set of libraries and application files that read in one or more OpenOffice files and return an RSS feed, with a feed item for each document. To start, the general logic of fetching a pull event and invoking a method needs to be moved from a simple loop and placed into a method; see Listing Seven (listing6.rb). This then becomes a part of a general-purpose transformation class. The PullTransformer (available electronically; see "Resource Center, page 4) reworks the previous examples by providing a simple but general framework for pulling events and invoking corresponding methods.
A new instance of the PullTransformer class is created by passing in a user-defined module. This module defines the logic for executing some particular transformation. Ruby modules are like classes, but they cannot be instantiated. They provide a means of inheritance by mix-in. When a PullTransformer object receives this module reference, all methods in the module become methods of that object by virtue of the call to self.extend.
The transform method takes an XML source, instantiates a pull parser, and initializes instance variables to track the tags and accumulated output. It then kicks off the main process by calling dispatch.
That chunk of code at the beginning of the class definition is a bit of Ruby magic. The %w{ ...} syntax creates an array of strings from the list of all the event types; each item in the array is passed to define_method, which is a built-in Ruby method for dynamically adding methods to the current class. The block of code following the call to define_method becomes the method body.
Having a minimal default method corresponding to each event type removed the need for trapping NoMethodError exceptions. The class later redefines some of the methods on the assumption that, by default, elements and text should be passed through.
The use of the tag stack has been changed so that, should the transformation code want to ignore an element, it can push nil onto the stack. The class also adds another helper method, skip_until, that allows code to pull and ignore events until some condition (defined by a given block of code) is True. This is useful when the transformation code wants to drop parts of the source document, but still has to read past it to get to the remainder of the XML.
The transformation class also defines execute_conditional to steer the conditional transformation logic defined in the external module.
Defining the Transformation
Earlier examples looped over the event set, running a series of conditional checks against each event to determine what code to execute. While the loop logic was generic, the transformation code was not and should be kept apart. In the current example, the conditional logic and the corresponding transformation code are defined as methods in a module. A mapping of conditions to actions is defined in the map method, which pairs them in the @transformation array.
Condition and action methods are dynamically invoked by execute_conditional in PullParser. Each should expect to be passed a PullEvent object. Conditional methods should return true or false, and the order of entries defined in map is important because that is the order in which execute_conditional loops over the set, looking for the first condition that returns true.
A true condition invokes the mapped action method, passing in the current event object. Because pull events are no longer automatically retrieved in a loop, the action methods must decide whether to call dispatch to fetch and act on the next pull event. Just as the methods of the transformational module become part of the transformer instance, so too all methods and instance variables of PullTransformer are available to the module code. Conditional logic and action methods can call dispatch, skip_until, act on the pull parser, and so on. They also have access to the @transformation array, leaving open the opportunity to alter the transformation logic at runtime.
With this base library, you can define some modules for converting an OpenOffice.org Writer document into RSS. OpenOffice documents consist of multiple XML files bundled into a zip file. To create an RSS item, the code grabs some metadata from the meta.xml file, as well as the first paragraph of text from the content.xml.
There are various ways to do a single transformation on multiple source documents. One way would be to aggregate all the source files into a master file with a new root element. But a nice thing about using Ruby for transformation is that it is easy to call out to other code for subprocessing. In this example, the main transformation acts on content.xml, but the transformation logic loads and transforms meta.xml using another PullTransformer instance.
The main application uses a template file for the RSS 1.0 body XML (see rss.body.xml, available electronically). The code loads this template and loops over a directory of OOo documents. Each document is unzipped into a local temp directory, and content.xml is fed to a transformation process. The results are aggregated and substituted into the template body. The transformational module for content.xml is in content.rb (available electronically). The code ignores all elements by default, pushing nil onto the tag stack. The office:document-content element is transformed into an item element, and another instance of PullTransformer is used to get select content from meta.xml; see item.rb (available electronically). When the code encounters the office:body element, it starts a description element, then skips over everything until it finds the end of the text:sequence-decls section.
The example uses yet another handy REXML pull-parser methodpeek, which looks at future events without actually pulling them off the event stream. It's sort of like looking into the future. If the immediate future is not the end of the office:body element, then the code loops and tries to get the text from the first text:p element.
Conclusion
Pull-parser transformations offer the opportunity to manipulate large XML documents using familiar programming constructs, and with Ruby's REXML parser, it is easy to write flexible and dynamic transformation applications. Special thanks must go to Sean Russel, for creating REXML and providing technical review for this article. Any errors or omissions are the sole fault of this author.
DDJ
<="" xmlns: <!-- An example conten.xml file form an OpenOffice.org Writer document --> <office:script/> <office:font-decls> <style:font-decl style: <style:font-decl style: <style:font-decl style: <style:font-decl style: <style:font-decl style: <style:font-decl style:This is the second line. It has some <text:span text:text</text:span> with special formatting.</text:p> </office:body> <!-- End of sample --> </office:document-content>Back to article
Listing Two
#!/usr/bin/env ruby require 'rexml/parsers/pullparser' parser = REXML::Parsers::PullParser.new( IO.read( "content1.xml" ) ) while parser.has_next? pull_event = parser.pull puts pull_event.event_type endBack to article
Listing Three
#!/usr/bin/env ruby require "rexml/parsers/pullparser" parser = REXML::Parsers::PullParser.new( IO.read( "content1.xml" ) ) xml = "" while parser.has_next? pull_event = parser.pull puts( pull_event[0] ) if pull_event.start_element? endBack to article
Listing Four
#!/usr/bin/env ruby require "rexml/parsers/pullparser" parser = REXML::Parsers::PullParser.new( IO.read( "content1.xml" ) ) xml = "" def is_standard_text_p?( event ) return false unless event.start_element? return false unless event[0] == "text:p" event[1][ 'text:style-name'] == "Standard" end while parser.has_next? pull_event = parser.pull puts pull_event.inspect if is_standard_text_p? pull_event endBack to article
Listing Five
#!/usr/bin/env ruby require "rexml/parsers/pullparser" parser = REXML::Parsers::PullParser.new( IO.read( "content1.xml" ) ) results = "" tag_stack = [] while parser.has_next? pull_event = parser.pull case pull_event.event_type when :start_element if is_standard_text_p? pull_event tag_stack.push "p" else tag_stack.push pull_event[0] end results << "<#{tag_stack.last}#{attrs_to_s(pull_event[1])}>" when :end_element results << "</#{tag_stack.pop}>" when :text results << pull_event[0] else results << "<!-- #{pull_event.inspect} -->" end end puts resultsBack to article
Listing Six
#!/usr/bin/env ruby require "rexml/parsers/pullparser" parser = REXML::Parsers::PullParser.new( IO.read( "content1.xml" ) ) def start_element( event ) if is_standard_text_p? event $tag_stack.push "p" else $tag_stack.push event[0] end "<#{$tag_stack.last}#{attrs_to_s(event[1])}>" end def end_element( event ) "</#{$tag_stack.pop}>" end def text( event ) event[0] end results = "" $tag_stack = [] while parser.has_next? pull_event = parser.pull begin results << send( pull_event.event_type.to_s, pull_event ) rescue NoMethodError; end end puts resultsBack to article
Listing Seven
def dispatch return unless @parser.has_next? event = @parser.pull unless event.end_document? send( event.event_type.to_s, event ) end endBack to article | http://www.drdobbs.com/web-development/transforming-xml-the-rexml-pull-parser/184406385 | CC-MAIN-2014-41 | refinedweb | 3,186 | 56.76 |
Flash Player 9 and later, Adobe AIR 1.0 and
later
Flash Player and AIR allow you to create a
full-screen application for your video playback, and support scaling
video to full screen.
For AIR content running in full-screen mode on the desktop, the
system screen saver and power-saving options are disabled during
play until either the video input stops or the user exits full-screen
mode.
For full details on using full-screen mode, see Working with full-screen mode.
Before
you can implement full-screen mode for Flash Player in a browser,
enable it through the Publish template for your application. Templates
that allow full screen include <object> and <embed> tags
that contain an allowFullScreen parameter. The
following example shows the allowFullScreen parameter
in an <embed> tag.
<object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000"
id="fullScreen" width="100%" height="100%"
codebase="">
...
<param name="allowFullScreen" value="true" />
<embed src="fullScreen.swf" allowFullScreen="true" quality="high" bgcolor="#869ca7"
width="100%" height="100%" name="fullScreen" align="middle"
play="true"
loop="false"
quality="high"
allowScriptAccess="sameDomain"
type="application/x-shockwave-flash"
pluginspage="">
</embed>
...
</object>
In Flash, select File -> Publish
Settings and in the Publish Settings dialog box, on the HTML tab,
select the Flash Only - Allow Full Screen template.
In Flex,
ensure that the HTML template includes <object> and <embed> tags that
support full screen.
For Flash
Player content running in a browser, you initiate full-screen mode
for video in response to either a mouse click or a keypress. For
example, you can initiate full-screen mode when the user clicks
a button labeled Full Screen or selects a Full Screen command from
a context menu. To respond to the user, add an event listener to
the object on which the action occurs. The following code adds an
event listener to a button that the user clicks to enter full-screen
mode:
var fullScreenButton:Button = new Button();
fullScreenButton.label = "Full Screen";
addChild(fullScreenButton);
fullScreenButton.addEventListener(MouseEvent.CLICK, fullScreenButtonHandler);
function fullScreenButtonHandler(event:MouseEvent)
{
stage.displayState = StageDisplayState.FULL_SCREEN;
}
The code initiates full-screen mode by setting the Stage.displayState property
to StageDisplayState.FULL_SCREEN. This code scales
the entire stage to full screen with the video scaling in proportion
to the space it occupies on the stage.
The fullScreenSourceRect property
allows you to specify a particular area of the stage to scale to
full screen. First, define the rectangle that you want to scale
to full screen. Then assign it to the Stage.fullScreenSourceRect property.
This version of the fullScreenButtonHandler() function
adds two additional lines of code that scale just the video to full
screen.
private function fullScreenButtonHandler(event:MouseEvent)
{
var screenRectangle:Rectangle = new Rectangle(video.x, video.y, video.width, video.height);
stage.fullScreenSourceRect = screenRectangle;
stage.displayState = StageDisplayState.FULL_SCREEN;
}
Though this example invokes an event handler in
response to a mouse click, the technique of going to full-screen
mode is the same for both Flash Player and AIR. Define the rectangle
that you want to scale and then set the Stage.displayState property.
For more information, see the ActionScript 3.0 Reference for the Adobe
Flash Platform.
The complete example, which follows,
adds code that creates the connection and the NetStream object for
the video and begins to play it.
package
{
import flash.net.NetConnection;
import flash.net.NetStream;
import flash.media.Video;
import flash.display.StageDisplayState;
import fl.controls.Button;
import flash.display.Sprite;
import flash.events.MouseEvent;
import flash.events.FullScreenEvent;
import flash.geom.Rectangle;
public class FullScreenVideoExample extends Sprite
{
var fullScreenButton:Button = new Button();
var video:Video = new Video();
public function FullScreenVideoExample()
{
var videoConnection:NetConnection = new NetConnection();
videoConnection.connect(null);
var videoStream:NetStream = new NetStream(videoConnection);
videoStream.client = this;
addChild(video);
video.attachNetStream(videoStream);
videoStream.play("");
fullScreenButton.x = 100;
fullScreenButton.y = 270;
fullScreenButton.label = "Full Screen";
addChild(fullScreenButton);
fullScreenButton.addEventListener(MouseEvent.CLICK, fullScreenButtonHandler);
}
private function fullScreenButtonHandler(event:MouseEvent)
{
var screenRectangle:Rectangle = new Rectangle(video.x, video.y, video.width, video.height);
stage.fullScreenSourceRect = screenRectangle;
stage.displayState = StageDisplayState.FULL_SCREEN;
}
public function onMetaData(infoObject:Object):void
{
// stub for callback function
}
}
}
The onMetaData() function is a
callback function for handling video metadata, if any exists. A
callback function is a function that the runtime calls in response
to some type of occurrence or event. In this example, the onMetaData()function
is a stub that satisfies the requirement to provide the function.
For more information, see Writing callback methods for metadata and cue points
A
user can leave full-screen mode by entering one of the keyboard
shortcuts, such as the Escape key. You can end full-screen mode
in ActionScript by setting the Stage.displayState property
to StageDisplayState.NORMAL. The code in the following
example ends full-screen mode when the NetStream.Play.Stop netStatus event
occurs.
videoStream.addEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
private function netStatusHandler(event:NetStatusEvent)
{
if(event.info.code == "NetStream.Play.Stop")
stage.displayState = StageDisplayState.NORMAL;
}
When you rescale a
rectangular area of the stage to full-screen mode, Flash Player or
AIR uses hardware acceleration, if it's available and enabled. The
runtime uses the video adapter on the computer to speed up scaling
of the video, or a portion of the stage, to full-screen size. Under
these circumstances, Flash Player applications can often profit
by switching to the StageVideo class from the Video class (or Camera
class; Flash Player 11.4/AIR 3.4 and higher).
For more information
on hardware acceleration in full-screen mode, see Working with full-screen mode. For more information on StageVideo, see Using the StageVideo class for hardware accelerated presentation.
Twitter™ and Facebook posts are not covered under the terms of Creative Commons. | http://help.adobe.com/en_US/as3/dev/WS44B1892B-1668-4a80-8431-6BA0F1947766.html | CC-MAIN-2016-50 | refinedweb | 936 | 50.63 |
This class describes.
Definition at line 25 of file TNeuron.h.
#include <TMVA/TNeuron.h>
Usual constructor.
Definition at line 47 of file TNeuron.cxx.
Tells a neuron which neurons form its layer (including itself).
This is needed for self-normalizing functions, like Softmax.
Definition at line 853 of file TNeuron.cxx.
Adds a synapse to the neuron as an output This method is used by the TSynapse while connecting two neurons.
Definition at line 842 of file TNeuron.cxx.
Adds a synapse to the neuron as an input This method is used by the TSynapse while connecting two neurons.
Definition at line 830 of file TNeuron.cxx.
The Derivative of the Sigmoid.
Definition at line 814 of file TNeuron.cxx.
Uses the branch type to force an external value.
Definition at line 1121 of file TNeuron.cxx.
Returns the formula value.
Definition at line 910 of file TNeuron.cxx.
Computes the derivative of the error wrt the neuron weight.
Definition at line 1080 of file TNeuron.cxx.
computes the derivative for the appropriate function at the working point
Definition at line 1007 of file TNeuron.cxx.
Computes the error for output neurons.
Returns 0 for other neurons.
Definition at line 1059 of file TNeuron.cxx.
Returns neuron input.
Definition at line 921 of file TNeuron.cxx.
Computes the normalized target pattern for output neurons.
Returns 0 for other neurons.
Definition at line 1070 of file TNeuron.cxx.
Returns the neuron type.
Definition at line 863 of file TNeuron.cxx.
Computes the output using the appropriate function and all the weighted inputs, or uses the branch as input.
In that case, the branch normalisation is also used.
Definition at line 944 of file TNeuron.cxx.
Sets the derivative of the total error wrt the neuron weight.
Definition at line 1164 of file TNeuron.cxx.
Inform the neuron that inputs of the network have changed, so that the buffered values have to be recomputed.
Definition at line 1153 of file TNeuron.cxx.
Sets the normalization variables.
Any input neuron will return (branch-mean)/RMS. When UseBranch is called, mean and RMS are automatically set to the actual branch mean and RMS.
Definition at line 1133 of file TNeuron.cxx.
Sets the neuron weight to w.
The neuron weight corresponds to the bias in the linear combination of the inputs.
Definition at line 1144 of file TNeuron.cxx.)
Definition at line 91 of file TNeuron.cxx.
Sets a formula that can be used to make the neuron an input.
The formula is automatically normalized to mean=0, RMS=1. This normalisation is used by GetValue() (input neurons) and GetError() (output neurons)
Definition at line 874 of file TNeuron.cxx. | https://root.cern/doc/master/classTNeuron.html | CC-MAIN-2021-39 | refinedweb | 447 | 62.64 |
Import website into wordpresspekerja,
Prepare a excel sheet by diligently retyping the data from PDF file.
...checking a few websites to compile contact
...of the following languages) Key Features That Software Must Have: Allow Images Unicode Support Responsive Design Allow Timer in Quiz Detailed Report Card Allow Bulk Question Import Able To Display Mathematical Symbols ----------------------------------------------------------------------------- You can Check the Sample Program Here:
We need logo and logo symbol for this company JAC Smart Brands, we import and distribute products online and retailers channels.
I need you to develop some software for me. I would like this software to be developed for Windows using Java.
Hello. I need some psd files that are needed to be converted into html. Very simple.
Looking for import agent to China. We would like to import our products to China and would like an agent that is well verse in customs requirements to handle and advice on documentation required. Details to be discussed in PM. Thank sports data company utilizing the Zoho Creator platform for its existing dat.. "<10" into the column. 7) This
Hello there, I have a translation project for you. I have a document this need to be translated into Japanese from English. There are around 6000 words. I will supply the files ( files for translation + guideline file + and a reference files for German translation) via private chat. This is very easy topic. Deadline of this project would be 2-3 days
.. -
We have the attached logo. It's very simple. We need it converted to a Vector so we can use it for print-quality based tasks. We will award the person who gets this done the fastest.
I need a logo designed for my clothing business
Font 1: [log masuk untuk melihat URL] Font 2: [log masuk untuk melihat URL] I want to take the Metropolis open source font and make it...font (or at least describe them) All other bids will be completely ignored. This is a work for hire. You will need to assign all IP rights to me. I will be putting it into open source..
Hello freelancer! Need help with some co...
import 2 xml of different language , WPML setup , site is already done. [log masuk untuk melihat URL]
I have someone designing me a website. The design is being delivered with Photoshop XD files, I'm looking for someone that can convert the design into HTML for me. Roughly 30 pages in total 15 desktop designs and 15 responsive designs.
I need more than 16'000 products imported into my magento installation. I managed to already import the products, but failed to link the images and categories to the products which is the main challenge I see in this operation have data that needs to be copied from image files into Word file. Awarded freelancer will be provided with demonstration video and all other details.
.. | https://www.my.freelancer.com/job-search/import-website-into-wordpress/ | CC-MAIN-2019-22 | refinedweb | 480 | 65.83 |
Struts: how do I handle exceptions in Struts real-time projects?
Can you please send me one example of how to deal with an exception?
Can you please show me how to develop our own exception handling?
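No answer was posted; as a sketch, Struts 1 lets you register a custom handler, a subclass of org.apache.struts.action.ExceptionHandler, against an exception type in struts-config.xml. The class name and log message below are illustrative assumptions, not from the thread:

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;
import org.apache.struts.action.ExceptionHandler;
import org.apache.struts.config.ExceptionConfig;

// Registered against an exception type in struts-config.xml (<global-exceptions>).
public class LoggingExceptionHandler extends ExceptionHandler {
    public ActionForward execute(Exception ex, ExceptionConfig config,
                                 ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request, HttpServletResponse response)
            throws ServletException {
        // Log the failure, then let the default handler pick the error forward.
        System.err.println("Action failed: " + ex.getMessage());
        return super.execute(ex, config, mapping, form, request, response);
    }
}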
Hi... - Struts
Hi,
If I am using Hibernate with Struts, do I need to install Hibernate separately? If installation is required, please send the URL for it. Hi friend,
Hibernate is an object-relational mapping tool; its mappings are defined in an XML file,
and could you please explain those tags in detail... if you search, you can get the JAR file. Hi friend,
struts-config.xml: Struts has
Struts - Struts
complete code.
Thanks. Hi friend,
Please give details with full... Struts: Hello,
I would like to make a registration form in Struts in which... a course; the page is then redirected to that course's subjects. Also, all subjects should
Struts - Struts
Struts: Dear Sir,
I am very new to Struts and want... to understand, but I have a little confusion. Could you please provide the zip for address validation and one custom validation program? Maybe then I can understand. Please.
Struts code - Struts
Struts code
Hi Friend,
Is backward redirection possible in Struts? If so, please explain it to me.
hi,
Hi, how do I display all the elements of a 2D array using only one loop? What is the Struts Framework? Hi, Struts 1 tutorials with examples are available, and if you are looking for Struts projects to learn Struts in detail, they are worth visiting.
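For the 2D-array question above, no answer appears in the thread. One common trick, assuming a rectangular array, is a single loop over rows*cols cells, recovering the row and column with division and modulo:

public class ArrayPrinter {
    public static void main(String[] args) {
        int[][] grid = {{1, 2, 3}, {4, 5, 6}};
        int rows = grid.length, cols = grid[0].length;
        // One loop over all rows*cols cells; row = i / cols, column = i % cols.
        for (int i = 0; i < rows * cols; i++) {
            System.out.print(grid[i / cols][i % cols] + " ");
        }
    }
}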
Java Example projects about STRUTS
Java example projects about Struts: Hi...
I completed my MCA but I have no job in hand.
But I have done some small projects with Struts.
Please send me some example projects about Struts.
Please visit the following link :
Multiple file upload - Struts
an array of FormFile.
Can anyone suggest or send me some sample code to resolve this?
HI all,
I have found the solution; the code is given below. Multiple file upload: HI all,
I am trying to upload multiple files. Hi,
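The sample code promised earlier in this thread did not survive; a minimal sketch of saving uploaded files, assuming an ActionForm that exposes an array of org.apache.struts.upload.FormFile and a caller-supplied target directory:

import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.struts.upload.FormFile;

public class UploadHelper {
    // Copies each uploaded file to targetDir, keeping the client's file name.
    public static void saveAll(FormFile[] files, String targetDir) throws Exception {
        for (FormFile file : files) {
            InputStream in = file.getInputStream();
            OutputStream out = new FileOutputStream(targetDir + "/" + file.getFileName());
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            out.close();
            in.close();
        }
    }
}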
I need example programs for a shopping cart using Struts with MySQL.
Please send the example code as soon as possible.
Please send it immediately.
Regards,
Valarmathi
Hi Friend,
Please
very important - Kindly help
sample code... very important - Kindly help: I am creating a web page for form registration for my department. I have to reprint the application form (i.e. download
java - Struts
to mark one of the options of the object as selected. So can you please send me any code to make this happen? Hi friend,
Please give full details and the full source code so we can solve the problem.
Plz send - Java Beginners
Please send. Hi,
Please send the whole code. I understood what you are sending... the code... but without knowing your table structure... where should the search occur...
it would be troublesome... then again, what are your needs...
so please provide a detailed structure. hi
please it,s urgent........... session tracking? you mean session management?
we can maintain using class HttpSession.
the code follows
Struts(1.3) action code for file upload
Struts(1.3) action code for file upload Hi All,
I want to upload... application using HttpUrlConnection.
How can i write my struts(1.3) action code.../struts/strutsfileupload.shtml
Thanks
Hi,
thanks for a quick reply
Sample Code - Development process
Sample Code
Hi Friend,
Give sample code DTO in webapplication. Hi friend,
DTO Implementation:
One way to improve the performance of this user request is to package all the required data Hi,
Here my quation is
can i have more than one validation-rules.xml files in a struts application
in projects
in php.Actually i want to prepare a quiz that stores all questions in the mysql database.am...) and testids. now my problem is when i am viewing the database ,i can see only one question at a time. although i am inserting all the questions in the test1(test
Hi
Hi Hi All,
I am new to roseindia. I want to learn struts. I do not know anything in struts. What exactly is struts and where do we use it. Please help me. Thanks in advance.
Regards,
Deepak
Hi.... - Java Beginners
Hi.... Hi Friends,
Thanks for reply can send me sample... button then open the next page with save data....please write the code and send me its very urgent....
Hi Friend,
Plz give full details internationalisation - Struts
struts internationalisation hi friends
i am doing struts... problem its urgent Hi friend,
Plz give full details and Source code to solve the problem :
For more information on struts visit
struts
struts how make my dummy project live..plz send me step's.
kya yaar
HI.
HI. hi,plz send me the code for me using search button bind the data from data base in dropdownlist
implementing DAO - Struts
implementing DAO Hi Java gurus
I am pure beginner in java, and have to catch up with all java's complicated theories in a month for exam. Now... suggest weak students to print the code and go through to help you understand
checkboxes vales in the another jsp which one we are selected with checking the checkbox.i want code in struts
plz send code for this
plz send code for this Program to calculate the sum of two big numbers (the numbers can contain more than 1000 digits). Don't use any library classes or methods (BigInteger etc
struts code - Struts
struts code In STRUTS FRAMEWORK
we have a login form...? Hi Friend,
Please visit the following links:
Free Java Projects - Servlet Interview Questions
Free Java Projects Hi All,
Can any one send the List of WebSites which will provide free JAVA projects with source code on "servlets" and "Jsp" relating to Banking Sector? don't know
Why Struts in web Application - Struts
Why Struts in web Application Hi Friends, why struts introduced in to web application. Plz dont send any links . Need main reason for implementing struts. Thanks Prakash How to add token , and encript token and how decript token in struts. Hi friend,
Using the Token methods
The methods we... more information,Tutorials and Examples on struts visit to :
http
java - Struts
java Hi,
Can any one send the code for insert,select and update using struts1.2 using front controller design Pattern.
many Thnaks
raghvendra
IMP - Struts
IMP Hi...
I want to have the objective type questions(multiple choices) with answers for struts.
kindly send me the details
its urgent for me
Thanku
Ray Hi friend,
Visit for more information
java - Struts
,struts-config.xml,web.xml,login form ,success and failure page also...
code...java hi..,
i wrote login page with hardcodeted username and password ,but when i submit the page ,i give blank page...
one hint: error
struts
struts <p>hi here is my code can you please help me to solve...;
<h1></h1>
<p>struts-config.xml</p>
<p>...;<struts-config>
<form-beans>
<form-bean name
struts - Struts
struts i want to learn more examples on struts like menu creation and some big application on struts.and one more thing that custom validation and client side validation in struts are not running which are present on rose india
Hi.. - Struts
Hi..
Hi Friends,
I am new in hibernate please tell me.....if i am using hibernet with struts any database pkg is required or not.....without any database package using maintain data in struts+hiebernet....please help
Using radio button in struts - Struts
source code to solve the problem :
For more information on radio in Struts...Using radio button in struts Hello to all ,
I have a big problem... one is the serail number and the second is the number of the selection
Textarea - Struts
characters.Can any one? Given examples of struts 2 will show how to validate... files.Please create all the required files and past the given code, once you...Textarea Textarea Hi, this is ramprasad and i am using latest Links - Links to Many Struts Resources
that all projects should use Struts, and it is quite easy to implement the MVC... Struts, and think it should be used on many (but not all) projects. Still... Struts
Tutorials
One of the Best Jakarta Struts available on the web.2.0 - Struts
for that i wrote following code...
plz verify my plz,plz help me any one....plz...
but im not getting values...in jsp
plz any one help ....to get values did i...Struts2.0 Hi ,
I am Kalyani,
im very new to struts2.0 ,
just i first example - Struts
Struts first example Hi!
I have field price.
I want to check... of validation.xml file to resolve it?
thanks you so much! Hi friend,
Plz specify the version of struts is used struts1/struts 2.
Thanks
Hi
struts Hi ,
I have been asked in one of the technical interviews if struts framework is stateless or stateful . I could not answer. Please answer and explain a bit about it how it is achieved.
thanks
kanchan
sample code - WebSevices
sample code Hi Guys,
can any body tell me use of webservices ? I want a sample code using xml with one application server bea weblogic and webserver tomcat Hi Friend,
Please visit the following link
Struts file uploading - Struts
Struts file uploading Hi all,
My application I am uploading files using Struts FormFile.
Below is the code.
NewDocumentForm... and append it in the database.
Ultimately all the file content should be stored - Struts
.
Thanks in advance Hi friend,
Please give full details with source code to solve the problem.
For read more information on Struts visit...Struts Hello
I have 2 java pages and 2 jsp pages in struts
Struts 1 Tutorial and example programs
Struts 1 Tutorials and many example code to learn Struts 1 in detail.
Struts 1...
Struts LookupDispatch Action (org.apache.struts.actions.LookupDispatchAction)
is one...)
is one of the Built-in Actions provided along with the struts framework
Plz Provide correct program code for all questions.
Plz Provide correct program code for all questions.
Write a program... at [email protected] as soon as possible. Thanks.
Hi Friend,
Try the following code:
1)
import java.util.*;
class FindDifference
{
public static
in struts 1.1
What changes should I make for this?also write struts-config.xml and jsp code.
Code is shown below...");
pw.println(e);
}
}
} Hi Friend
struts - Struts
struts how to handle multiple submit buttons in a single jsp page of a struts application Hi friend,
Code to help in solving the problem :
In the below code having two submit button's its values application
struts application hi,
i can write a struts application in this first i can write enter data through form sidthen it will successfully saved... not enter any data that time also it
will saved into databaseprint("code sample
hi... - Struts
hi... Hi Friends,
I am installed tomcat5.5 and open the browser and type the command but this is not run please let me... also its very urgent Hi Soniya,
I am sending you a link. I hope
struts 2 project samples
struts 2 project samples please forward struts 2 sample projects like hotel management system.
i've done with general login application and all.
Ur answers are appreciated.
Thanks in advance
Raneesh
Hi - Struts
Hi Hi Friends,
Thanks to ur nice responce
I have sub package in the .java file please let me know how it comnpile in window xp please give the command to compile
Hello - Struts
Hello Hi Friends,
Thakns for continue reply
I want to going with connect database using oracle10g in struts please write the code and send me its very urgent
only connect to the database code
Hi
Sample Ajax Code
Sample Ajax Code Sample Ajax Code for getting values from another JSP
The below code is helpful to access another Action class...");
}
return oXmlHttp;
}
//Function to get all model for the selected vendor
function
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/11658 | CC-MAIN-2015-18 | refinedweb | 2,003 | 75.2 |
Dear,
In my new application, I call "... new PainMap()" in a file/class.
At compilation (build project) : No error. At run-time well ("Exception in thread "main" java.lang.NullPointerException").
Entering the PaintMap.java file (javadoc), I oberved that "
import org.jfree.io.SerialUtilities;
import org.jfree.util.PaintUtilities;
" is underligned in red by the NetBeans IDE 5.5 "life compiler".
When I position the cursor on one of them, I get the message "package org.jfree.io does not exist".
Nevertheless, I'm using the jcommon-1.0.9.jar downloaded together with the JFreechart download (jfreechart-1.0.5\lib\jfreechart).
I verify several times that the paths in 'Classpath', 'Sources' and 'Javadoc' that I'm using in the 'Library Manager' are correct. In the left window with 'Projects' tab and "Librairies" folder, one can see the "JCommon - jcommon-1.0.9.jar" librairy with "org.jfree.io" and "org.jfree.util" packages appearing amongst all the other "org.jfree..." of this JCommon library.
An idea ?
Thanks to answer me. | http://www.jfree.org/phpBB2/viewtopic.php?f=7&t=22227&sid=3d6477009b8003b03d9c496df114c326 | CC-MAIN-2018-05 | refinedweb | 172 | 64.47 |
Dear Scala Community, dear Contributors,
After the winter break in December and a bit in January, we are back to full speed here at Scala Center! Happy to restart the monthly project updates, keep on scrolling (o:
At a glance
- Scalameta
- Build Server Protocol
- MOOCs
- Collections
- Bloop
- Scala Platform
- Scala Compiler
- Scalac profiling
- scalajs-bundler
- Scalafix
Scalameta
Ólafur Páll Geirsson @olafurpg
Collaborated with Eugene Burmako on specifying SemanticDB and bringing the implementation in line with the spec
- #1224 Print type ref prefix with type arguments, for scalafix ExplicitResultTypes.
Thanks to this PR I was able to run ExplicitResultTypes on all signatures in the Akka project and
it still compiled afterwards.
- #1216 Print this type for abstract prefix, for scalafix ExplicitResultTypes.
- #1185 Guard against NoPosition in InputOps, bugfix.
- #1172 Add InteractiveSemanticdb utility, a simple API to go from
Stringto SemanticDB.
- Presented Metals at the LSP Tooling meeting
Build Server Protocol
Ólafur Páll Geirsson @olafurpg
Wrote thoughts on how a potential “Build Server Protocol” might look like.
The document is inspired by the Language Server Protocol but addresses the communication between
a language server and build tool instead of editor and language server.
Currently, every language server must implement custom integrations for the most popular Scala Build
tools (sbt, Maven, pants, …) in order to extract compilation information such as classpaths and source directories.
The Build Server Protocol is an attempt to solve that problem.
MOOCs
Julien Richard-Foy @julienrf
I finished porting our four first MOOCs (Functional Programming Principles in Scala,
Functional Program Design, Parallel Programming and Big Data Analysis with Scala and
Spark) to the EdX platform.
I setup a horizontally scalable grading infrastructure that communicates with an EdX
instance. We performed some end-to-end tests and load tests (500 users submit their
work at 1 second of interval).
Last, I compiled the old Akka assignments with the last Akka version (I had almost
nothing to change!), and collected all the material that we used to have for the
reactive programming course.
Collections
(SCP-007)
Julien Richard-Foy @julienrf
I implemented some macro-benchmarks (#344).
Our current micro-benchmarks take a week to run because they test N operations × M types of
collections × O number of elements. The idea of having macro-benchmarks is that they should
take less time to complete while still allowing us to detect performance regressions.
I’ve also implemented a few optimizations (
#340,
#348).
Speaking of performance, I’ve written a blog article
(#832) about the design of the new collections
and their performance. I’ve explained that the transformation operations of the new collections
are, by design, correct by default (which was not the case in the old collections), but should
be overridden for performance in strict collections.
I spent several days improving the design of
Views so that:
- they don’t prematurely evaluate elements #343,
- they don’t forget too much about the underyling viewed collection #436.
I spent several days reducing incompatibilities with 2.12’s collections. I tried to make
existing projects compile with both 2.12 and a special distribution of 2.13 that has
the new collections (thanks to @szeiger). In particular, I worked on the scala-parser-combinators
module (#134),
and added the following compatibility improvements to the new collections:
#355,
#358,
#364,
#427.
I happily collaborated with Lukas to fix a few issues:
#369,
#370,
#371.
Luckily, @marcelocenerine took over most of the other bugs!
Lukas also created an FAQ page, which I contributed to.
I also reviewed exciting contributions from the community.
scalajs-bundler
Julien Richard-Foy @julienrf
Released v0.10.0,
with nice contributions from the community!
Bloop
Jorge Vicente Cantero @jvican
There has been a lot of progress in bloop since our past blog
My work has focused on the following areas:
- Bloop has now a robust community build for both sbt 0.13 and 1.0 projects.
We plan to add more OSS projects to the future, but to this date we support:
apache/spark, scala/scala, guardian/frontend, lihaoyi/utest,
sbt/sbt, and a trimmed down version of pathikrit/better-files.
- Bloop has now a hugo-based
website. The website will contain the
docs of how to use bloop in a wide array of scenarios. Still work in progress.
- Bloop has now a Maven integration that integrates with
scala-maven-plugin, reading up on its values and allowing to set some
bloop-specific configuration options.
- Bloop supports now BSP (the Build Server Protocol). BSP is Scala
Center’s attempt to scale IDE integrations with build tools supporting the
Scala programming language. The protocol is work in progress and we are
working together with the tooling team at Lightbend, the Intellij team
(@jastice) and the Dotty team to curate it.
It is extensible to other programming languages by design and highly
inspired by LSP.
- Bloop has a better UX than its previous version, with improvements on the
CLI interface and the nailgun communication.
- Bloop has a better internal design: several of the internal APIs and external
integrations have been significantly improved.
You can see my recent (closed) pull requests
here.
Aside from these tasks, I have been researching on how to create integrations
for build tools like Bazel (worker servers) and Gradle. Bloop will support
these build tools in the close future.
I have been working on Bloop together with Jorge since November 2017. In this time frame, we have
taken Bloop from an early prototype to a solid tool that is able to compile very large open source
projects, such as the Scala Compiler or Apache Spark with better performance than sbt.
Here’s an highlight of the tasks I’ve been working on:
Rewrite how we do logging
Our first prototype didn’t really care about logging and would write to the standard output. We
improved it to use Log4J instead.
Set up the initial integration test infrastructure
Testing in Bloop is complicated, because we need to generate the configuration for our tool from an
existing build definition that can be found in an open source project. This means that we need to
run sbt to generate the configuration, and finally use it for our integration tests.
Making it simple to add more projects to our integration tests infrastructure and having it produce
reliable results took some effort, but was important in making sure that we could actually compile
large projects and produce correct results.
Add support for running tests in Bloop
Test support is the second feature that we wanted to support in Bloop. Our implementation works
using the same mechanism as sbt and is therefore compatible with all the test frameworks that work
with sbt.
The integration works out of the box. No extra step is needed when importing the project to have the
tests run.
Set up a benchmarking infrastructure
We developed a benchmarking infrastructure that is a mix from Dotty’s and Scalac’s infrastructures:
We use Dotty’s system to be able to schedule benchmarks easily from Github, and Scalac’s integration
to perform comparative benchmarks of the compilation time of sbt, scalac and bloop on many several
projects, including Apache Spark and the Scala Library for instance.
The results are then displayed as graphs to follow the evolution of the compilation times.
We also produce the same results for the time required to load a project, etc.
Add support for starting a REPL (sbt
console)
The third feature that we wanted to support was starting a REPL with a project on the classpath
(this is what is known in sbt as the
console and
consoleQuick tasks).
Add support for running code in Bloop
The last feature that we wanted to support was the
run task. It allows to run a class that has a
main method. Support for that has been merged in Bloop.
Add support for forking
sbt lets us define whether tests and runs should happen in the same JVM, or if a new JVM should be
started to perform the action. We wanted Bloop to also support that, so we added support for forking
(previously, everything would happen in the same JVM).
Set up parts of our release process
We worked on automating some parts of our release process, so that we can automatically generate
installation scripts along with a release.
For Mac OS users, we also push a Homebrew formula that let’s them use the Homebrew package manager
to install Bloop and keep it up to date.
Porting to Windows
We worked on making sure that Bloop works on Windows, and set up a Jenkins CI instance running on
Windows to check that there are no regression. The Windows CI is not yet enabled, it is still a work
in progress.
Documenting Bloop
We set up a documentation website for Bloop and wrote some documentation. It is still a work in
progress.
Talking about Bloop
Jorge and I submitted talks about Bloop, and we’ll be talking about Bloop at:
- ScalaSphere
- ScalaDays Berlin
- ScalaDays New York
Scala Platform
Jorge Vicente Cantero @jvican
Video of our meeting in February.
I have organized the first Scala Platform meeting of the year, together with
Darja Jovanovic. Our plan is to have a first release of the Scala Platform in
the first week of March. In our next meeting, we’d like to synchronize with
the Scala Platform Committee and the Scala Community to shed more light on
which other modules we’re missing in the Scala Platform.
In our meeting, we discussed the following topics:
“Make Scala Platform API independent of Java
namespaces”.
- Guests: Sébastien Doeraene (Scala.js) and Denys Shabalin (Scala Native).
- Proposition: Scala Platform modules should be all cross-platformed.
- Discussion revolves around motivation, resourcing, technical feasibility.
Status update of the Scala JSON AST; technical discussion about external
dependencies.
- Blocker: what should be the immutable collection interface for users?
- General discussion about external dependencies; when are Platform modules
allowed to depend on external modules? Tooling solutions to the problems?
Scala Compiler
Jorge Vicente Cantero @jvican
I have continued @retronym’s investigation on why compiler plugins that are
dynamically classloaded destabilize and slow down warmed up compilers. This
issue was initially discovered in a Scalameta
benchmark.
The result of this work has been the following:
- A pull request implementing a classloader cache for compiler
plugins.
- A detailed performance analysis of the impact of caching classloaders for
compiler plugins.
The speedup of this change seems to be in the range between 16% and 35% of
warmed up compilation.
For more context and details, check the
SD-458 ticket and the
aforementioned pull request.
Scalac profiling
Jorge Vicente Cantero @jvican
- Make the sbt plugin cross-compile to both sbt 0.13.x and 1.x.
- Add Magnolia to the community build of scalac-profiling.
- Modify build to use a custom compiler version of Scala 2.12.5 with:
- Changes to the compiler plugin infrastructure to minimize overhead of the plugin.
- Cherry-picked changes originally submitted by @retronym in and.
I am still writing the detailed profiling guide on how to use the plugin, with a tour on how to identify compilation bottlenecks and speed up implicit search. This is still work in progress because it is a careful process that requires:
- Knowledge about the codebase that is being tested and how things interact
with each other.
- Iterations to assess the impact of changes to a current codebase.
- An faithful interpretation of the results.
This work will be available before the next Advisory Board meeting in March,
and will include examples for both Circe and Scalatest.
Scalafix
Guillaume Massé @MasseGuillaume
New Rule: Convert Single Abstract Method
- Add required information in Scalameta:
Add Rule configurations: DisableSyntax.regex
New feature: –diff --diff-base
scalafix --diff --diff-base v0.5.6will apply scalafix only on the code
that has changed since v0.5.6.
New feature: escape rules with comments
// scalafix: ok [RuleName]disable RuleName for an expression
// scalafix: off [RuleName],
// scalafix: on [RuleName]disable RuleName between
offand
on
New feature: sbt task scalafixCli
- Expose all cli arguments to the sbt plugin
- Autocompletion for cli argument
- Autocompletion on the git history for the --diff-base argument
Ólafur Páll Geirsson @olafurpg
Released Scalafix v0.5.10 in close collaboration with @MasseGuillaume and
@vovapolu (ENSIME sponsored developer).
Full release notes
Been working on “scope aware pretty printing” for past two weeks.
This feature will enable features like organize imports and allow ExplicitResultTypes
to insert short readable names along with imports instead of
_root_ qualified names.
I’ve made good progress, but still more time to make sure the feature works correctly. | https://contributors.scala-lang.org/t/scala-center-project-updates-2018-mid-jan-to-mid-feb/1600 | CC-MAIN-2018-22 | refinedweb | 2,091 | 53.41 |
A Brief Guide to OTP in Elixir
In this article, I will introduce you to OTP, look at basic process loops, the GenServer and Supervisor behaviours, and see how they can be used to implement an elementary process that stores funds.
(This article assumes that you are already familiar with the basics of Elixir. If you’re not, you can check out the Getting Started guide on Elixir’s website or use one of the other resources listed in our Elixir guide.)
What is OTP?
OTP is an awesome set of tools and libraries that Elixir inherits from Erlang, a programming language on whose VM it runs.
OTP contains a lot of stuff, such as the Erlang compiler, databases, test framework, profiler, debugging tools. But, when we talk about OTP in the context of Elixir, we usually mean the Erlang actor model that is based on lightweight processes and is the basis of what makes Elixir so efficient.
Processes
.jpg)
At the foundation of OTP, there are tiny things called processes.
Unlike OS processes, they are really, really lightweight. Creating them takes microseconds, and a single machine can easily run multiple thousands of them, simultaneously.
Processes loosely follow the actor model. Every process is basically a mailbox that can receive messages, and in response to those messages it can:
- Create new processes.
- Send messages to other processes.
- Modify its private state.
Spawning processes
The most basic way to spawn a process is with the spawn command. Let’s open IEx and launch one.
iex(1)> process = spawn(fn -> IO.puts("hey there!") end)
The above function will return:
hey there! #PID<0.104.0>
First is the result of the function, second is the output of spawn – PID, a unique process identification number.
Meanwhile, we have a problem with our process. While it did the task we asked it to do, it seems like it is now… dead? 😱
Let’s use its PID (stored in the variable
process) to query for life signs.
iex(2)> Process.alive?(process) false
If you think about it, it makes sense. The process did what we asked it to do, fulfilled its reason for existence, and closed itself. But there is a way to extend the life of the process to make it more worthwhile for us.
Receive-do loop
Turns out, we can extend the process function to a loop that can hold state and modify it.
For example, let’s imagine that we need to create a process that mimics the funds in a palace treasury. We’ll create a simple process to which you can store or withdraw funds, and ask for the current balance.
We’ll do that by creating a loop function that responds to certain messages while keeping the state in its argument.
defmodule Palace.SimpleTreasury do def loop(balance) do receive do {:store, amount} -> loop(balance + amount) {:withdraw, amount} -> loop(balance - amount) {:balance, pid} -> send(pid, balance) loop(balance) end end end
In the body of the function, we put the receive statement and pattern match all the messages we want our process to respond to. Every time the loop runs, it will check from the bottom of the mailbox (in order they were received) for messages that match what we need and process them.
If the process sees any messages with atoms
store,
withdraw,
balance, those will trigger certain actions.
To make it a bit nicer, we can add an
open function and also dump all the messages we don’t need to not pollute the mailbox.
defmodule Palace.SimpleTreasury do def open() do loop(0) end def loop(balance) do receive do {:store, amount} -> loop(balance + amount) {:withdraw, amount} -> loop(balance - amount) {:balance, pid} -> send(pid, balance) loop(balance) _ -> loop(balance) end end end end
While this seems quite concise, there’s already some boilerplate lurking, and we haven’t even covered corner cases, tracing, and reporting that would be necessary for production-level code.
In real life, we don’t need to write code with receive do loops. Instead, we use one of the behaviours created by people much smarter than us.
Behaviours
Many processes follow certain similar patterns. To abstract over these patterns, we use behaviours. Behaviours have two parts: abstract code that we don’t have to implement and a callback module that is implementation-specific.
In this article, I will introduce you to GenServer, short for generic server, and Supervisor. Those are not the only behaviours out there, but they certainly are one of the most common ones.
GenServer
To start off, let’s create a module called
Treasury, and add the GenServer behaviour to it.
defmodule Palace.Treasury do use GenServer end
This will pull in the necessary boilerplate for the behaviour. After that, we need to implement the callbacks for our specific use case.
Here’s what we will use for our basic implementation.
Let’s start with the easy one –
init. It takes a state and starts a process with that state.
def init(balance) do {:ok, balance} end
Now, if you look at the simple code we wrote with
receive, there are two types of triggers. The first one (
store and
withdraw) just asks for the treasury to update its state asynchronously, while the second one (
get_balance) waits for an answer.
handle_cast can handle the async ones, while
handle_call can handle the synchronous one.
To handle adding and subtracting, we will need two casts. These take a message with the command and the transaction amount and update the state.
def handle_cast({:store, amount}, balance) do {:noreply, balance + amount} end def handle_cast({:withdraw, amount}, balance) do {:noreply, balance - amount} end
Finally,
handle_call takes the balance call, the caller, and state, and uses all that to reply to the caller and return the same state.
def handle_call(:balance, _from, balance) do {:reply, balance, balance} end
These are all the callbacks we have:
defmodule Palace.Treasury do use GenServer def init(balance) do {:ok, balance} end def handle_cast({:store, amount}, balance) do {:noreply, balance + amount} end def handle_cast({:withdraw, amount}, balance) do {:noreply, balance - amount} end def handle_call(:balance, _from, balance) do {:reply, balance, balance} end end
To hide the implementation details, we can add client commands in the same module. Since this will be the only treasury of the palace, let’s also give a name to the process equal to its module name when spawning it with
start_link. This will make it easier to refer to it.
defmodule Palace.Treasury do use GenServer # Client def open() do GenServer.start_link(__MODULE__, 0, name: __MODULE__) end def store(amount) do GenServer.cast(__MODULE__, {:store, amount}) end def withdraw(amount) do GenServer.cast(__MODULE__, {:withdraw, amount}) end def get_balance() do GenServer.call(__MODULE__, :balance) end # Callbacks def init(balance) do {:ok, balance} end def handle_cast({:store, amount}, balance) do {:noreply, balance + amount} end def handle_cast({:withdraw, amount}, balance) do {:noreply, balance - amount} end def handle_call(:balance, _from, balance) do {:reply, balance, balance} end end
Let’s try it out:
iex(1)> Palace.Treasury.open() {:ok, #PID<0.138.0>} iex(2)> Palace.Treasury.store(400) :ok iex(3)> Palace.Treasury.withdraw(100) :ok iex(4)> Palace.Treasury.get_balance() 300
It works. 🥳
Here’s a cheatsheet on GenServer to help you remember where to put what.
Supervisor
.jpg)
However, just letting a treasury run without supervision is a bit irresponsible, and a good way to lose your funds or your head. 😅
Thankfully, OTP provides us with the supervisor behaviour. Supervisors can:
- start and shutdown applications,
- provide fault tolerance by restarting crashed processes,
- be used to make a hierarchical supervision structure, called a supervision tree.
Let’s equip our treasury with a simple supervisor.
defmodule Palace.Treasury.Supervisor do use Supervisor def start_link(init_arg) do Supervisor.start_link(__MODULE__, init_arg, name: __MODULE__) end def init(_init_arg) do children = [ %{ id: Palace.Treasury, start: {Palace.Treasury, :open, []} } ] Supervisor.init(children, strategy: :one_for_one) end end
In its most basic, a supervisor has two functions:
start_link(), which runs the supervisor as a process, and
init, which provides the arguments necessary for the supervisor to initialize.
Things we need to pay attention to are:
- The list of children. Here, we list all the processes that we want the supervisor to start, together with their init functions and starting arguments. Each of the processes is a map, with at least the
idand
startkeys in it.
- Supervisor’s
initfunction. To it, we supply the list of children processes and a supervision strategy. Here, we use
:one_for_one– if a child process will crash, only that process will be restarted. There are a few more.
Running the
Palace.Treasury.Supervisor.start_link() function will open a treasury, which will be supervised by the process. If the treasury crashes, it will get restarted with the initial state – 0.
If we wanted, we could add several other processes to this supervisor that are relevant to the treasury function, such as a process that can exchange looted items for their monetary value.
Additionally, we could also duplicate or persist the state of the treasury process to make sure that our funds are not lost when the treasury process crashes.
Since this is a basic guide, I will let you investigate the possibilities by yourself.
Further reading
This introduction has been quite basic to help you understand the concepts behind OTP quickly. If you want to learn more, there are a lot of nice resources out there:
- Intro to OTP in Elixir. A brief and concise look at OTP in video format.
- Designing Elixir Systems with OTP. Learn how to structure your Elixir applications and make use of OTP properly.
- OTP as the Core of Your Application. A cool 2-part series on creating an actually useful app with GenServer.
- The Little Elixir & OTP Guidebook. A handy book on Elixir and OTP that features a cool toy project on weather data.
If you’re interested in learning more about Elixir, I can suggest our valuable resource guide. To read more of our posts on Elixir and other functional programming languages, follow us on Twitter and Medium.
.jpg)
.jpg)
| https://serokell.io/blog/elixir-otp-guide | CC-MAIN-2021-43 | refinedweb | 1,681 | 63.9 |
This is a guest-post from Vivek Patel, who joined Datadog as a HackNY fellow this summer. College students wondering about their options next summer may want to read this article as well.
Why pup?
One of my main fears as a developer is users complaining about slow page loads. Loading times directly affect both user experience and my bottom line, so I need a way to dig into the performance of my application to address this issue.
What are my options?
I can tail logs, but logs suck. The pile of numbers and text provide limited context and insight. Logs are more useful to identify errors. But in raw form, they’re impractical to observe the performance of my application. I need to spend time upfront to set log post-processing up and adding and extracting new metrics to and from logs quickly becomes a pain.
If only I had easy-to-use, free performance monitoring that gives me actionable graphs in 2 minutes…
That’s where Pup comes in.
Getting started
Pup is fully open-source. To install it, type this at the command-line:
$ sh -c "$(curl -L)"
Then navigate to and bam! Within seconds, you will see metrics streaming in real time.
Pup is designed for application metrics:
In addition to collecting system metrics, Pup faithfully collects and displays custom metrics. To do so it harnesses the power of StatsD, a metrics collector developed by Etsy. Pup comes with StatsD built-in; no need to reinvent the wheel.
To send custom metrics to Pup, you can use the dogstatsd library, which as you will see requires a grand total of 3 lines of code to instrument application code. This library comes in a number of flavors (python, ruby, php, etc.)
Here’s an example Ruby web application using Sinatra. First, we need to install the dogstatsd library:
The web application code:
Now to add Pup instrumentation:
And you’re done. As soon as the page is hit, you’ll see something like this in Pup:
That’s it! If you use Datadog, the first period creates a namespace to organize metrics under.
Here’s a 1 minute screencast summarizing the whole process:
Pup! Graph metrics with ease and style.
Looking for more?
More thorough documentation on using DogstatsD can be found on docs.datadoghq.com.
To correlate metrics and events across your infrastructure, give Datadog a spin. It only takes a few minutes to get started.
Posted in
Engineering | http://www.datadoghq.com/2012/08/easy-app-metrics-with-pup/ | CC-MAIN-2014-41 | refinedweb | 413 | 66.54 |
Oh, I went for a walk and realised that while I started with a GADT, I ended up with a normal Haskell data type in a fancy GADT dress. I'll get back to you if I get the GADT approach to work. On 17 August 2012 15:14, Christopher Done <chrisdone at gmail.com> wrote: > Funny, I just solved a problem with GADTs that I couldn't really see how > to do another way. > > >) > > > Three problems to solve > ======================= > > There are three problems that I wanted to solve: > > 1. Make serialization "just work", no writing custom JSON instances or > whatnot. That problem is solved. So I can just write: > > get "some-request" $ \(Foo bar mu) -> … > > 2. Share data type definitions between the client and server code. That > problem is solved, at least I have a solution that I like. It's like > this: > > module SharedTypes where > … definitions here … > > module Client where > import SharedTypes > > module Server where > import SharedTypes > > Thus, after any changes to the data types, GHC will force the programmer > to update the server AND the client. This ensures both systems are in > sync with one-another. A big problem when you're working on large > applications, and a nightmare when using JavaScript. > > 3. Make all requests to the server type-safe, meaning that a given > request type can only have one response type, and every command which is > possible to send the server from the client MUST have a response. I have > a solution with GADTs that I thing is simple and works. > > > The GADTs part > ============== > > module SharedTypes where > > > > Where `Returns' is a simple phantom type. We'll see why this is > necessary in a sec. > > -- | A phantom type which ensures the connection between the command > -- and the return value. > data Returns a = Returns > deriving Read > > And let's just say Foo is some domain structure of interest: > > -- | A foobles return value. > data Foo = Foo { field1 :: Double, field2 :: String, field3 :: Bool } > deriving Show > instance Foreign Foo > > Now in the Server module, I write a request dispatcher: > > -- | Dispatch on the commands. > dispatch :: Command -> Snap () > dispatch cmd = > case cmd of > GetFoo i r -> reply r (Foo i "Sup?" True) > > Here is the "clever" bit. I need to make sure that the response Foo > corresponds to the GetFoo command. So I make sure that any call to > `reply` must give a Returns value. That value will come from the nearest > place; the command being dispatched on. So this, through GHC's pattern > match exhaustion checks, ensures that all commands are handled. > > -- | Reply with a command. > reply :: (Foreign a,Show a) => Returns a -> a -> Snap () > reply _ = writeLBS . encode . showToFay > > And now in the Client module, I wanted to make sure that GetFoo can only > be called with Foo, so I structure the `call` function to require a > Returns value as the last slot in the constructor: > > -- | Call a command. > call :: Foreign a => (Returns a -> Command) -> (a -> Fay ()) -> Fay () > call f g = ajaxCommand (f Returns) g > > The AJAX command is a regular FFI, no type magic here: > > -- | Run the AJAX command. > ajaxCommand :: Foreign a => Command -> (a -> Fay ()) -> Fay () > ajaxCommand = > ffi "jQuery.ajax({url: '/json', data: %1,\ > "dataType: 'json', success : %2 })" > > And now I can make the call: > > -- | Main entry point. 
> main :: Fay () > main = call (GetFoo 123) $ \(Foo _ _ _) -> return () > > > Summary > ======= > > So in summary I achieved these things: > > * Automated (no boilerplate writing) generation of serialization for > the types. > * Client and server share the same types. > * The commands are always in synch. > * Commands that the client can use are always available on the server > (unless the developer ignored an incomplete-pattern match warning, in > which case the compiler did all it could and the developer deserves > it). > > I think this approach is OK. I'm not entirely happy about "reply r". I'd > like that to be automatic somehow. > > > Other approaches / future work > ============================== > > I did try with: > > data Command a where > GetFoo :: Double -> Command Foo > PutFoo :: String -> Command Double > > But that became difficult to make an automatic decode instance. I read > some suggestions by Edward Kmett: > > > But it looked rather hairy to do in an automatic way. If anyone has any > improvements/ideas to achieve this, please let me know. | http://www.haskell.org/pipermail/glasgow-haskell-users/2012-August/022711.html | CC-MAIN-2014-15 | refinedweb | 698 | 71.95 |
Creating a Circle (spline) with varying number of points.
If I create a circle, it has always just 4 points.
How can I create a circle (spline) with for example, 5 or 28 points?
Hi @pim,
As far as I know, there is no way to make a "perfect circle" with any type of spline in
Cinema 4D.
I search on the Internet, based on this answer: How to create circle with Bézier curves?.
I write the following code to create a circular-like spline shape:
import c4d, math #Welcome to the world of Python deg = math.pi/180.0 def main(): pCnt = 4 # Here is the point count radius = 200 # Here is the radius of circle subdAngle = 5*deg # Here is the subdivision angle #Prepare the data tangentLength = (4/3)*math.tan(math.pi/(2*pCnt))*radius # single side tangent handle length pointPosLs = [] tangentLs = [] for i in range(0,pCnt): angle = i*(2*math.pi)/pCnt # caculate the angle # caculate point position y = math.sin(angle)*radius x = math.cos(angle)*radius pointPosLs.append(c4d.Vector(x, y, 0)) # tangent position lx = math.sin(angle)*tangentLength ly = -math.cos(angle)*tangentLength rx = -lx ry = -ly vl = c4d.Vector(lx, ly, 0) vr = c4d.Vector(rx, ry, 0) tangentLs.append([vl, vr]) # init a bezier circle circle = c4d.SplineObject(pcnt=pCnt, type=c4d.SPLINETYPE_BEZIER) circle.ResizeObject(pcnt=pCnt, scnt=1) circle.SetSegment(id=0, cnt=pCnt, closed=True) circle[c4d.SPLINEOBJECT_CLOSED] = True circle[c4d.SPLINEOBJECT_ANGLE] = subdAngle circle.SetAllPoints(pointPosLs) # set point position # set tangent position for i in range(0, pCnt): circle.SetTangent(i, tangentLs[i][0], tangentLs[i][1]) circle.Message(c4d.MSG_UPDATE) return circle
You can try put this code in a
Python Generatorobject to get the result.
I'm not very sure about my answer, waiting for more convincing answers.
You may also want to take a look at the official Cinema 4D SDK for an example of a circle object:
Though it's C++, the algorithm should be easily adaptable.
I guess you are talking about the "Circle" spline primitive? The generated spline has always 4 control points; there is no way to change that.
But you could try to get the generated spline and apply the "Subdivide" tool. Whit that tool you can add additional points.
Or you could, as @eZioPan and @mp5gosu and have shown, create the spline completely by yourself.
best wishes,
Sebastian
Thank you all, great answers!
I will take a different approach.
I convert the circle to a spline and then divide that spline into the wanted number of points.
-Pim | https://plugincafe.maxon.net/topic/11115/creating-a-circle-spline-with-varying-number-of-points | CC-MAIN-2020-40 | refinedweb | 430 | 60.51 |
io_waitread man page
io_waitread — read from a descriptor
Syntax
#include <io.h>
int io_waitread(int64 fd,char* buf,int64 len);
Description
io_waitread tries to read len bytes of data from descriptor fd into buf[0], buf[1], ..., buf[len-1]. (The effects are undefined if len is 0 or smaller.) There are several possible results:
- o_waitread returns an integer between 1 and len: This number of bytes was available for immediate reading; the bytes were read into the beginning of buf. Note that this number can be, and often is, smaller than len; you must not assume that io_waitread always succeeds in reading exactly len bytes.
- io_waitread returns 0: No bytes were read, because the descriptor is at end of file. For example, this descriptor has reached the end of a disk file, or is reading an empty pipe that has been closed by all writers.
- io_waitread returns -3, setting errno to something other than EAGAIN: No bytes were read, because the read attempt encountered a persistent error, such as a serious disk failure (EIO), an unreachable network (ENETUNREACH), or an invalid descriptor number (EBADF).
See Also
io_nonblock(3), io_waitread(3), io_waitreadtimeout(3)
Referenced By
io_tryread(3), io_tryreadtimeout(3), io_trywrite(3). | https://www.mankier.com/3/io_waitread | CC-MAIN-2018-05 | refinedweb | 201 | 58.82 |
Simon Weber 2017-03-19T20:50:59-04:00 Simon Weber [email protected] Analytics for the Google Payments Center 2017-03-18T00:00:00-04:00 <h2 id="analytics-for-the-google-payments-center">Analytics for the Google Payments Center</h2> <p class="meta">March 18 2017</p> <p><em>tl;dr <a href="">fill out this form for a beta invite<="">fill out this form to sign up<> <img src="" height="1" width="1" alt=""/>="/2017/03/18/google-payments-center-analytics-extension.html">Analytics for Google Payments Center< <a href="">gmusicapi</a>, a reverse-engineered client library, I knew it could be done. And since only a small percent of Google Music users actually want this feature, the stakes are low and I don’t worry about competitors (including Google).</p> <p.</p> <h3 id="the-business">The Business</h3> <p.</p> <h3 id="whats-next">What’s Next?</h3> <p.</p> <p>Hopefully next year I’ll be writing a similar post! If you’d like to follow along, be sure to subscribe with the links below. Also, feel free to send an email if I can be of assistance; my inbox is always open.</p> <img src="" height="1" width="1" alt=""/>> <img src="" height="1" width="1" alt=""/>> <img src="" height="1" width="1" alt=""/>laylists - called “smart playlists” in iTunes - allow you to create playlists that always contain songs matching some set of rules. For example, you could create a playlist that contains only music you haven’t listened to recently, or your most-played songs of a certain genre.<> <img src="" height="1" width="1" alt=""/>> <img src="" height="1" width="1" alt=""/>="ne">Exception</span><span class="p">):</span> <span class="k">pass</span> <span class="k">def</span> <span class="nf">enforce_presence</span><span class="p">(</span><span class="n">key</span><span class="p">,</span> <span class="n">entries</span><span class="p">):</span> <span class="sd">"">raise</code> keyword, you’re right! But, if you took longer than a quarter of a second to answer, sorry: you were outperformed by my linting tools.</p> <p>Don’t feel bad! Linting is designed to detect these problems more quickly and consistently than a human. There are two ways to make use of it: manually or automatically. The former is flexible but not robust, while the latter risks getting in the way. We lint automatically at Venmo; here’s how we strike a balance between flexibility and enforcement.<="nv">$ </span>./lint example.py Linting file: example.py FAILURE line 1, col 1: <span class="o">[</span>F401<span class="o">]</span>: <span class="s1">'sys'</span> imported but unused line 15, col 8: Warning: <span class="o">[</span>W0104<span class="o">]</span>:>--no-verify</code> flag.</p> <p>Finally, any errors that survive to a pull request will be caught during build linting on Jenkins. It’s similar to the pre-commit check, but runs on all files that have been changed in the feature branch. However, unlike the pre-commit check, our build script uses GitHub Enterprise’s <a href="">comparison api</a> to find these files. This eliminates the need to download the repository’s history, allowing us to save bandwidth and disk space with a git shallow clone.</p> <p>No matter when linting is run, we always operate it at the granularity of an entire file. This is necessary to catch problems such as unused imports or dead code; these aren’t localized to only modified lines. 
It also means that any file that’s been touched recently is free of problems, so it’s rare that we need to fix problems unrelated to our changes.<> <img src="" height="1" width="1" alt=""/>> <img src="" height="1" width="1" alt=""/>">%s</span><span class=">%s</span><span class=">%s</span><span class=>logger.exception</code> is a helper that calls <code">%s</span><span class=">%r</span><span class="s">"</span><span class="p">,</span> <span class="n">checkmark</span><span class="p">)</span></code></pre></figure> <p>Using <code>%s</code> with a unicode string will cause it to be encoded, which will cause an <code>EncodingError</code> unless your default encoding is utf-8. Using <code>%r</code> will format the unicode in \u-escaped repr format instead.</p> <img src="" height="1" width="1" alt=""/>>It’s sometimes useful to point your requirements.txt at code not yet on pypi. For example, imagine you’ve just sent a bugfix PR to one of the libraries you depend on. Instead of waiting for the PR to be merged and packaged, you can temporarily change your dependency to point at your fork.<>git+</code>: the vcs type with the repo url appended. https (rather than ssh) is usually how you want to install public code, since it doesn’t require keys to be set up on the machine you’re running on.</li> <li><code>>master</code> likely would).</li> <li><code>egg=gmusicapi</code>: the name of the package. This is the name you’d give to <code>pip install</code> (which isn’t always the same name as the repo).</li> <li><code>==4.0.0</code>: the version of the package. Without this field pip can’t tell what version to expect at the repo and will be forced to clone on every run (even if the package is up to date).</li> </ul> <p>If the maintainer doesn’t change package versions between releases, you’ll want to change it on your branch so pip can tell the difference between your temporary release and the last release. For example, say you contribute a fix to version 1.2.3 of a library. To create your new version, you could:</p> <ul> <li>branch from your feature branch</li> <li>change the version to 1.2.4rc1, since it’s a release candidate of the bugfixed 1.2.3 release</li> <li>use a requirements line like <code>git+</code></li> </ul> <img src="" height="1" width="1" alt=""/>> <img src="" height="1" width="1" alt=""/>>When I first showed up, our builds were flaky and took nearly an hour to run. Now they’re much more stable and run in five minutes. I’ve got some Venmo blog posts in the works about how we acheived this (which I’ll be sure to cross-post or link here).<> <img src="" height="1" width="1" alt=""/>>I used to provide free computer science tutoring when I was in school, so I figured this might be a nice way to keep giving back. After getting into the private beta a few weeks ago, I whipped up a free CS help listing – you can see the new version of>I joined the first Hangout 5 minutes early. After an awkward 10 minute wait, my first user showed up. We exchanging greetings, then paired on a Python program. Setting up a reasonable way to share code wasn’t super easy, but we eventually made good use of the Hangout’s screen sharing. The allotted 15 minutes gave us just enough time to work out a solution before he signed off. Success!<>A user showed up for my final slot of the night. However, English wasn’t his first language, and we had some trouble communicating. 
I eventually discovered that he was interested in programming for the job opportunities, so I provided some resources off the top of my head: Cousera, Udacity, Codecademy, etc. I also spent some time explaining CS as a discipline and attempting to separate it from software engineering.</p> <p>I disconnected with the feeling that my time wasn’t well spent. But, hey, at least someone showed up.</p> <h2 id="impressions">Impressions</h2> <p>Obviously, the no-shows were a huge problem. Not only did it waste my time, but it may have prevented me from helping actual users. To mitigate this, my listing now charges a dollar upfront, but promises to refund it if the user shows. So far this hasn’t gotten any takers, so it’s hard to tell if it’s working or just scaring everyone off.</p> <p>I got in touch with the Helpouts team about my problem - over a Helpout - and they told me there wasn’t anything in the pipeline to fix it. However, they did seem aware of the problem. I do hope it’s addressed; it’s absolutely sapped some of my enthusiasm for the platform.</p> <p>No-shows aside, my first Helpout really showed the potential of the platform: fifteen minutes of my time made a real difference! I’ll definitely keep some timeslots open on the platform, and hope to have better experiences to share in the future.</p> <img src="" height="1" width="1" alt=""/>>Maybe it’s last summer’s Google indoctrination speaking, but I’m fine with these parts of the language. That said, I still find Go’s error handling - which eschews exceptions for multi-value returns - a bit wonky. For those unfamiliar with the language, here’s the relevant part of the Go faq:<>This means that the majority of function calls get wrapped in a conditional check on a returned error, which can seem verbose and tedious. While this certainly required some getting used to, it doesn’t bother me anymore. After all, I end up using a similar number of try/except blocks when writing robust Python code.</p> <p>What does bother me is the loss of debugging information. For example, here’s an error I received from Go’s http client: <code>Thankfully, there’s enough to like in Go that I’ll put up with some debugging annoyances. First off, goroutines and selection over channels make for really easy concurrent code. As somebody who was new to both Twisted- and CSP-style concurrency a month ago, I now greatly prefer the latter.<>To run the three node architecture I referred to earlier, you don’t make an application with three gears, though; that’d be too easy. Instead, you need to make three applications, each with one gear. You see, you only get one “web gear” per application, which the other gears then support (by eg hosting a database or build system).</p> <p>Since their docs don’t make any of this clear, expect to dig around the forums for answers. Be warned, they’ve got a very “googling for .NET answers on SO” kind of vibe to them.</p> <p>I ran into all sorts of other minor difficulties. For one, there’s no official Go cartridge. There is a Red Hat community catridge, but that’s been broken for months now, apparently. After spending a half a day trying to fix it and cursing at their slow cartridge development system, I gave up. I’m currently deploying a binary over git, which I’m not too happy about.</p> <p>These kinds of problems left me with the impression that the service isn’t quite there yet. So, while I won’t be paying for Openshift any time soon, I will be rooting for them. 
An open source PaaS is a noble goal, and more I always welcome competition for my dollars.</p> <img src="" height="1" width="1" alt=""/>="ne">ImportError</span><span class="p">:</span> <span class="n">No</span> <span class="n">module</span> <span class="n">named</span> <span class="n">protobuf</span></code></pre></figure> <p>The problem is that Google’s own App Engine apis also use the <code>google</code> package namespace, and they don’t include the protobuf package.</p> <p>Thankfully, there’s a simple way to fix this. First, vendorize the library as you normally would. I just ripped the <code> <img src="" height="1" width="1" alt=""/>="/2017/03/18/google-payments-center-analytics-extension.html">Analytics for Google Payments Center<>*</code>). They don’t get <code="p">|</span> <span class="p">|</span> <span class="o">(</span>chrome message passing<span class="o">)</span> <span class="p">|</span> v our content script ^ <span class="p">|</span> <span class="p">|</span> <span class="o">(</span>the dom<span class="o">)</span> <span class="p">|</span> v our injected code <span class="p">|</span> <span class="p">|</span> <span class="o">(</span>global namespace<span class="o">)</span> <span class=>window.plupload.upload(File)</code> function isn’t accessible; it’s hidden inside turntable closures. However, a similar function is stored directly as a handler on the main file input, meaning that we can spoof an upload with something like <code>$('.> <img src="" height="1" width="1" alt=""/>>It’s not that I’m against unit testing. I’d love to have more unit tests! Unfortunately, writing them isn’t free, and in my case, end-to-end tests offer more bang for my buck. I get more coverage with less code and get to test my expectations about Google’s servers. gmusicapi is also pretty small, so the loss of granularity when bug-hunting isn’t a huge deal.<>Currently, I have a few Google accounts I use exclusively for testing, and I keep their credentials in TravisCI encrypted variables. There’s one big downside to this: encrypted variables can’t be used with other folks’ pull requests. In the future, I hope to figure out a way to safely manage a public test account (I think it’s possible with 2-factor auth).</p> <h3 id="invest-in-support">Invest in support</h3> <p>I have trouble saying no to requests for help. Interestingly, gmusicapi also seems to attract many users who are new to Python. Because of these factors, I used to spend a fair amount of time answering support emails. I didn’t mind this, but after receiving a few emails on the same topics, I figured it’d be better to flesh out>I’m very pleased with what I’ve learned and accomplished so far. My next step is to get the project into a self-sustaining state. I figure this means knocking out todos, beefing up internal documentation, and refactoring all of the wonky bits that have accumulated over time.</p> <p>Thankfully, I’ll have plenty of time this summer at the <a href="">Recurse Center</a>!</p> <img src="" height="1" width="1" alt=""/>>A quick tip: set Secure Shell to ‘Open as Window’, otherwise Chrome will intercept keyboard shortcuts (which e.g. makes Control-W close your terminal). You’ll probably also want to set TERM to xterm. The FAQ (linked below) has the details and is worth reading through:<>You’ll need to have your Chromebook in developer mode (i.e. rooted) to use it, which is easy: I just flipped a hardware switch. The specifics for going about this vary by model, so just search for instructions. 
Once in dev mode, you’ll want to hit Control-D on each boot to skip the “you’re in developer mode” warning (there’s a 30-second wait otherwise). Have faith: this isn’t nearly as annoying as it sounds.</p> <p>The crouton README has all the information you need to get started. Note that you can run a normal graphical environment (e.g. Xfce) alongside CrOS. I prefer using Secure Shell to ssh into localhost so I can keep my terminal customizations and stay in CrOS. If this sounds appealing, here’s what I did:</p> <ul> <li>used the crouton cli-extra target (eg <code>crouton -t cli-extra ...</code>).</li> <li>installed openssh in my chroot</li> <li>start sshd, then use Secure Shell to connect to my-user@localhost</li> </ul> <p>To make life a bit easier, I stuck <code>/etc/init.d/ssh start</code> into my chroot’s <code>/etc/rc.local</code> (which crouton runs upon mounting). Now, when I want to work locally, I just Control-Alt-Forward to get my local shell, <code>$> <img src="" height="1" width="1" alt=""/>>I’d like to thank Lukasa and SigmaVirus24 on #python-requests for pointing me to the relevant Requests internals, and for generally putting up with my mad raving. Lukasa also had what I thought was some sharp insight into the situation:</p> <blockquote> <p><strong>simon_weber</strong>: I suppose it would be nice for tear-their-own-hair-out insane folks like me> <img src="" height="1" width="1" alt=""/>’m taking a course on data mining this semester. Our first assignment: mine some data. The dataset and techniques don’t matter; the point is to extract meaning in any way possible. I’m greenhorn data miner; hopefully I’ll be able to look back at this post and laugh at my own naivete.</p> <p>For my dataset I chose my own Google Music library. It’s unique, big enough (7600+ songs), and well organized. Plus, it’s a cinch to access with my <a href="">Google Music api</a>.</p> <p>My analysis was simple: I investigated the occurrences of words in genres. I figured the most frequent words would be genres themselves (eg ‘metal’ in ‘power metal’), but there was also the chance of exposing common adjectives (eg ‘post’ in ‘post-rock’ and ‘post-metal’).</p> <p>A few lines of Python later, and I had my results. The first thing I noticed: I listen to a lot of metal. A third of my songs are some kind of metal. If you put all the genre words into a hat, you’d pick ‘metal’ almost a quarter of the time. Next up: ‘rock’ and ‘jazz’. Rounding out the top six are two adjectives - ‘alternative’ and ‘progressive’ - as well as ‘accompaniment’ (as> <img src="" height="1" width="1" alt=""/>>Google exceeded my expectations. I arrived skeptical, expecting to see their world-famous perks masking a typical BigCo™ culture - where engineers are codemonkeys, bureaucracy reigns, and nobody cares about users. I’m happy to report that this is far from the truth. While Google as a whole is definitely nothing like a startup - don’t let anyone tell you otherwise - the culture felt healthy. Googlers constantly stick up for their users, openly asking the hard questions of management.</p> <p>Google’s also got an incredible amount of talent, which was reflected in those I worked with. My team was responsible for making Google’s call centers more efficient (yes, Google does pick up the phone, but only for paying customers). I was tasked with a webapp to provide real-time visualization of our call centers’ statuses. This allows managers to intelligently shuffle agents around to different call queues. 
The entire project was my responsibility: project management, frontend, backend, monitoring, deployment. . .the works. I got to play with all kinds of internal secret-sauce, and despite ballooning requirements, my team and I were really pleased with what I shipped.<> <img src="" height="1" width="1" alt=""/>>If I could just get my message to a human, I figured it would end up in the right place. After all, who wants to be responsible for blowing off a security flaw? On Netgear’s contact page, I found a press relations email. No response. Investor relations channel? Nope (I must not be rich enough). Support emails found by Googling? Nothing.<> <img src="" height="1" width="1" alt=""/> | http://feeds.feedburner.com/SimonWeber | CC-MAIN-2017-13 | refinedweb | 3,243 | 63.29 |
Tijl Coosemans <[email protected]> writes: > On Mon, 15 Jan 2018 11:43:44 +0100 Luca Pizzamiglio <[email protected]> > wrote: > >> I've already received a couple of messages from pkg-fallout about build >> failure on head-i386-default [1] [2] both pointing to the same errors, >> about missing intrinsic symbols related to __atomic_* >> >> The clang documentation about C11 atomic builtins [3] stats that __atomic_* >> are GCC extension and Clang provides them. >> >> It seems to me that this specific GCC-compatible builtin are enabled on >> amd64, but not on i386. >> Is there a way to enable GCC compatible __atomic_ builtin also on i386? >> Or should I provide patches to adopt _c11_atomic_* instead of __atomic_* >> for every ports that need it ? >> >> [1] >> >> [2] >> >> [3] > > 8 byte atomics requires at least i586. So either find a way to disable > the use of these atomics in these ports or add something like this to > the port Makefile. > > .if ${ARCH} == i386 && ! ${MACHINE_CPU:Mi586} > CFLAGS+= -march=i586 > .endif
It wouldn't help (see below). Clang 6 accidentally made __atomic* work enough to satisfy configure check but not for the port to build. I guess, it also confuses configure in net/librdkafka and net-mgmt/netdata. $ cat a.c #include <stdint.h> typedef struct { uint64_t val64; } atomic_t; int main() { uint64_t foo; atomic_t bar; #ifdef ATOMIC_STRUCT __atomic_fetch_add(&bar.val64, 1, __ATOMIC_RELAXED); #else __atomic_fetch_add(&foo, 1, __ATOMIC_RELAXED); #endif return 0; } $ cc -m32 -march=i586 a.c $ clang50 -m32 -march=i586 a.c /tmp/a-560ad1.o: In function `main': a.c:(.text+0x46): undefined reference to `__atomic_fetch_add_8' clang-5.0: error: linker command failed with exit code 1 (use -v to see invocation) $ cc -m32 -DATOMIC_STRUCT -march=i586 a.c /usr/bin/ld: error: undefined symbol: __atomic_fetch_add_8 >>> referenced by a.c >>> /tmp/a-ad8c36.o:(main) cc: error: linker command failed with exit code 1 (use -v to see invocation) $ clang50 -m32 -DATOMIC_STRUCT -march=i586 a.c /tmp/a-0fbfd0.o: In function `main': a.c:(.text+0x46): undefined reference to `__atomic_fetch_add_8' clang-5.0: error: linker command failed with exit code 1 (use -v to see invocation) _______________________________________________ [email protected] mailing list To unsubscribe, send any mail to "[email protected]" | https://www.mail-archive.com/[email protected]/msg172839.html | CC-MAIN-2018-34 | refinedweb | 368 | 67.96 |
Below, I mean internal to mean both bundled and tightly integrated to
the particular architecture of XmlBeans. It would be much more portable
to write c14n in XmlCursor or even a cursor like interface, just like
the saver defines a SaveCur. In this way, I could expose the saver as
an independent feature of XmlBeans. If I did, one could get the
functionality of the saver over store of XML other than XmlBeans.
- Eric
-----Original Message-----
From: David Waite [mailto:[email protected]]
Sent: Friday, July 02, 2004 10:29 PM
To: [email protected]
Subject: Re: xmlbeans xml security
On Jul 2, 2004, at 10:47 PM, Eric Vasilik wrote:
> I would recommend writing the c14n serialization using XmlCursor. One
> can navigate to the XmlObject for an element and interrogate the
Schema
> type, looking for default attributes, for example, and including them
> in
> the serialization. This assumes that the instance is valid with
> respect
> to the schema. Does c14n require that the instance is schema valid?
Nope. I actually didn't realize they had the attribute DTD-type stuff
in there at all until this thread started :)
> Also, it seems that the handling of namespaces is significantly
> different for c14n than the current saver handles them.
Could you elaborate?
> You will have total control over the serialization this way and, the
> c14n serialization can be written as a utility on XmlBeans as opposed
> to
> being in XmlBeans. The XmlCursor interface is meant to enable high
> performance access to the store.
>
> Also, there are two other alternatives to using XmlCursor. In the v2
> codebase, there is a low-level cursor access to the store which you
> could use, but the c14n implementation would then become an internal
> feature of XmlBeans. This approach would probably yield the highest
> performance c14n implementation.
By internal, do you mean a bundled feature? Or just coupled more
tightly to a particular release of XMLBeans?
> The other approach would be to use the implementation of the live DOM
> on
> the v2 store. Is there an open source implementation of c14n which is
> implemented on top of a DOM? Could we use it instead of doing it
> again?
Yes, but the current implementation for using the DOM for C14N within
XML Security would not be nearly as efficient as doing it ourselves. It
also has a workaround for Xalan's lack of support for the namespace
axis, where it copies all namespaces in scope onto every element - it
modifies the original document in the process of creating the canonical
form.
-David Waite
- ---------------------------------------------------------------------: | http://mail-archives.apache.org/mod_mbox/xml-xmlbeans-dev/200407.mbox/%[email protected]%3E | CC-MAIN-2014-41 | refinedweb | 427 | 53.92 |
# A new writing method/technology (“dendrowriting”), as exemplified by the YearVer site
Several years have passed since the appearance of [the first text markup language that supports “dendrowriting”](https://pqmarkup.org), but no worthwhile piece of text demonstrating the advantages of the new writing method/technology has yet appeared.
The largest “dendrotext” was a couple of paragraphs in the [pqmarkup documentation](http://pqmarkup.org/ru/syntax "‘Дополнительные возможности форматирования’ → ‘Спойлеры (разворачиваемый блок информации)’ → ‘Для чего нужна последняя форма’ → см. скрытый текст после слова ‘древовидно’"), consisting of only ~1300 characters and available only in Russian.
In English there was no “dendrotext” at all, as such [apart from small insertions in the documentation for the 11l programming language (e.g., ‘Boolean type’ in [Built-in types](http://11l-lang.org/doc/built-in-types))].
But last year...
… when I was tidying up the meta-information for my projects, I noticed that I had settled on recording version numbers in `YYYY.M[.n]` form. Then I got the idea of giving this designation a name: YearVer [obviously, by analogy with SemVer]. And the rule for naming additional versions {published in the same month or fixing bugs in the main version} formed the basis of the text for a new website.
Soon after, I learned about the existence of CalVer from the [Software versioning page on Wikipedia](https://en.wikipedia.org/wiki/Software_versioning). This disappointed me a little, but when I took a closer look I found out that [calver.org](https://calver.org) does not give any specific recommendations about which “calendar versioning” scheme should be used, but simply describes the schemes already used in various projects, without highlighting them in any way.
The creation of a site for YearVer thus remained relevant, and I continued to fill it with content.
The resulting description of the YearVer versioning scheme turned out to be quite short and concise, but very definite, offering specific schemes/strategies depending on the type of project.
And the most challenging task was now to translate the yearver.org/ru page into English, preserving the formatting.
To obtain a high-quality translation, I decided to contact three different translation agencies at once and to emphasize the fact that I was interested in maximum quality and prepared to pay extra for it.
I contacted:
* Alconost (it was the first link in Google search results for ‘бюро переводов сайтов’\‘website translation agency’)
* transly.ru and tran.su (found by searching for ‘бюро переводов markdown’\‘translation agency markdown’)
In the process of communicating with the first translation agency, the rules for reading “dendrotext” were formulated:
1. Hidden text {...} should be expanded only after the [corresponding] paragraph has been read in its entirety.
2. Hidden text {{...}} should be expanded after {...}.
For example, in the following sentence [from <http://yearver.org>]:
> Additional versions published in the same month, or {…} those fixing bugs of the main version {…}, shall be numbered `2021.2.1`, `2021.2.2`, and so on.
the second block of hidden text should be expanded (after the words ‘main version’), then all its nested blocks, and only then the first block (after the word ‘or’).
**Why was this “dendrotext” organized this way?**
**(Before reading this explanation, I strongly recommend that you read [in accordance with these rules] the indicated sentence from yearver.org in full)**
The second block of hidden text (after the words ‘those fixing bugs of the main version’) introduces the concept of bug-free projects, clarifying that such projects do not need bug-fixing versions. [And its [second block] could actually be placed right after the words ‘fixing bugs’ [{but I did not do that, since the first and second blocks of hidden text would then be too close to each other}].]
And the first block of the hidden text relates to the word ‘or’ (therefore it is located immediately after it), but since it uses the concept of bug-free projects [introduced in the second block], it should be expanded after the second block [which is why the first block was enclosed in additional curly braces].
At the end [of the translation process] there remained the painstaking work of “merging” the three translations I received into one text, choosing the best translation variants for each sentence and each term/concept.
**Why “dendrowriting”?**
*Dendro-* [comes](https://www.dictionary.com/browse/dendro- "<- google:‘dendro dictionary meaning’") from the Greek *déndron*, meaning “tree”. However, if you think of a tree as a data structure [in programming], then it's really not very clear how “dendrowriting” relates to it. But if you take a look at ordinary/real trees… Try to draw a vertical line on a piece of paper: that's the trunk of the tree, and this is your text [a paragraph or just a sentence]. Now go from bottom to top, and once you reach the first block of hidden text, make a branch (to the left or to the right {you can alternate: the first block of hidden text branches to the left, the second to the right, the third to the left again, etc.}). Then you expand the first block of hidden text [bottom branch of the tree]. It can either consist of plain text or contain other blocks of hidden text corresponding to the sub-branches of the tree.
**And how is it different from classic spoilers (like this)?**
“Dendrowriting” is blocks of hidden text within a paragraph.
Whereas the classic spoiler is a separate paragraph in itself {but which, like a block of hidden text in “dendrowriting”, may contain multiple paragraphs}. | https://habr.com/ru/post/648913/ | null | null | 925 | 59.64 |
On 05/21/2015 02:06 AM, Artyom Tarasenko wrote: > Hi Richard, > > looking at target-sparc/cpu.h and target-sparc/ldst_helper.c I have an > impression, that 2 mmu modes are not enough for sparc (32) machines: > they have 4 types of accesses: the combination of user/privileged and > data/code. Data vs code doesn't need separate mmu modes. Just different methods of access. That said, sparc64 has 6 modes... > Also afaics cpu_ldu{b,w,l,q}_code uses the currently selected MMU mode. > if this is correct, the current implementation of ASI 0x9 ( /* > Supervisor code access */) in target-sparc/ldst_helper.c is imprecise, > it would use the current mmu translation which is not necessarily > privileged. On sparc32, we are guaranteed to be privileged, and there's a check for that in the translator. #ifndef TARGET_SPARC64 if (IS_IMM) goto illegal_insn; if (!supervisor(dc)) goto priv_insn; #endif On sparc64, there are two modes higher than kernel: nucleus and hypervisor. For these, the access is being done with the wrong mode. Further, there's no check in helper_ld_asi for permissions. The double-bug means there isn't currently a hole in user accessing supervisor code, but to fix one bug requires that we fix the other. > Also I wonder how to implement a user_code access (ASI 0x8). Do I have > to add more NB_MMU_MODES? No, you just need to use the right function. In this case helper_ld*_cmmu, which includes an mmu_idx parameter, performs a read with "code" or execute permissions rather than "data" or read permissions. This whole area could stand to be totally re-written, btw. Especially to support the sparcv9 immediate asi with simple loads from non-default modes, the byte-swapping asis, and the fpu data movement asis. r~ | https://lists.gnu.org/archive/html/qemu-devel/2015-05/msg04348.html | CC-MAIN-2019-43 | refinedweb | 293 | 58.69 |
- Advertisement
MothDustMember
Content Count6
Joined
Last visited
Community Reputation100 Neutral
About MothDust
- RankNewbie
Sound effects repository
MothDust replied to MothDust's topic in For Beginners's ForumArg! I'm sorry, I should have seen that ...
Sound effects repository
MothDust posted a topic in For Beginners's ForumHello everyone! I'm a total amateur to programming and I'm learning java. I tried to find a good repository for sound effects and have been unsuccessful. I'm looking for an engine sound that I can download that will sound alright looped. Sorry if this is the wrong area or place to ask about this.
[java] Drawing an Image to JFrame
MothDust replied to MothDust's topic in General and Gameplay ProgrammingThank you for the responses everyone, I'm looking forward to trying out your suggestions and I'll be sure to let you all know if it works so that when the next person goes searching on google maybe they'll run across it. A little more info for those that asked: I'm using Netbeans because I'm under the impression that what google uses is a little different than traditional java and I'm interested in programming games for phones, PC's, and maybe even consoles some day. So anyway it behooves me to learn the language as it is, first, and then move to Eclipse next. My netbeans is installed on my second hard drive which for the sake of simplicity we'll call Z:. does it matter that my compiler is on drive Z: and my project is on C:, I didn't imagine it would but I'd like to mention any little thing that might help you guys help me. Also, I'm running Windows. Finally, I do greatly appreciate the help, you guys are great. ************************EDIT************************************** The drag and drop method worked, absolutely wonderful. Thank you very much!
[java] Drawing an Image to JFrame
MothDust replied to MothDust's topic in General and Gameplay ProgrammingHmm, interesting, that does seem to have had some kind of affect. I suppose I should have added my errors though, I do apologize for leaving that bit out. Here is what I get when I run my code.) BUILD SUCCESSFUL (total time: 8 seconds) Which looks to me like the path is bad, but i've tried putting the images in every different kind of folder in my project and using all kinds of different methods of designating the location, even going so far as to typing out the entire address starting with C:
[java] Drawing an Image to JFrame
MothDust posted a topic in General and Gameplay ProgrammingHello everyone, this is my first post so please be forgiving. Basically I'm brand new to Java, I started learning C++, attained some basic knowledge, then moved over to Java since I figured it would get me into the android market faster than learning C++ then Java. Anyway, onto my problem, I've got drawing shapes and collision detection down, that was simple. Now I want to replace some of my objects with Images I've drawn. I'll try to wrap my code, hopefully this works... package tests; import javax.swing.JFrame; import java.awt.*; import java.net.*; import java.awt.Image; import java.awt.Toolkit; public class Tests extends JFrame{ public static void main(String[] args){ Tests g = new Tests(); } URL url = null; Image img1; public Tests(){ super("Test"); setSize(400,400); setVisible(true); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); url = this.getClass().getResource("src/img1.gif"); img1 = Toolkit.getDefaultToolkit().getImage(url); } public void Paint(Graphics g){ g.drawImage(img1,200,200,this); } } obviously this is just some sample code I created and not my full project. but this is how the code for drawing my images works. basically the object is still there, collisions are happening, but it is invisible. Thank you all in advance, it sucks feeling like a noob, even if you are. Oh and in closing I've been trying to figure this out for a few days with google searches and my many text books, I come to you all humbled.
- Advertisement | https://www.gamedev.net/profile/185869-mothdust/ | CC-MAIN-2019-22 | refinedweb | 686 | 62.17 |
Adding Secure Publishing To The PubNub Realtime Messaging Workflow
Last week, I took a look at PubNub, the cloud-based realtime messaging platform. In my first experiment, I demonstrated that PubNub enabled bi-directional communication directly between client devices within your application. This is really cool because it doesn't require any server-side technology; but it publicly exposes your "publish key" which means you lose the ability to moderate the messages being broadcast within your application. Today, I wanted to look at creating a server-side proxy that would afford a layer of security and moderation on top of the PubNub publishing API.
On the client-side, the PubNub JavaScript library creates the PUBNUB namespace. This namespace is configured using the DOM element with ID, "pubnub". In the previous demo, this DOM element had two custom attributes:
- pub-key="....."
- sub-key="....."
The sub-key attribute allows the client to subscribe directly to the PubNub API. The pub-key, allows the client to publish directly to the PubNub API. The existence of both of these attributes provide client-to-client communication. If we remove the pub-key from the DOM element, however, the client loses the ability to publish directly to the PubNub API. Instead, it will have to POST messages to a proxy that, in turn, has the ability to publish directly the PubNub API.
NOTE: At the time of this writing, PubNub had debugging code in place that allowed for a throttled number of messages to be published to the API without a valid publish key (~ 1 per minute). This is in place for testing purposes and will be removed shortly).
To explore this workflow, I created a small ColdFusion application that would require the user to login before they could broadcast messages within the application. Once logged-in, however, the client can subscribe directly to the PubNub API and post messages to the ColdFusion application which will act as a sever-side proxy to the publish API.
Let's take a look at the Application.cfc ColdFusion framework component. This simply sets a cached instance of the PubNub.cfc and initializes the user's session.
Application.cfc
- <cfcomponent
- output="false"
-
- <!--- Define the appliation settings. --->
- <cfset this.name = hash( getCurrentTemplatePath() ) />
- <cfset this.applicationTimeout = createTimeSpan( 0, 0, 20, 0 ) />
- <cfset this.sessionManagement = true />
- <cfset this.sessionTimeout = createTimeSpan( 0, 0, 10, 0 ) />
- <!--- Map the COM library for the examples. --->
- <cfset this.mappings[ "/com" ] = (getDirectoryFromPath( getCurrentTemplatePath() ) & "../../com/") />
- <!--- Define the request settings. --->
- <cfsetting
- showdebugoutput="false"
- requesttimeout="20"
- />
- <cffunction
- name="onApplicationStart"
- access="public"
- returntype="boolean"
- output="false"
-
- <!---
- Create and cache an instance of our PubNub component.
- This will be used to publish messages to the PubNub
- API as the server-side proxy to the client.
- --->
- <cfset application.pubnub = createObject( "component", "com.PubNub" ).init(
- publishKey = "pub-7b6592f6-4ddb-4af6-b1b3-0e74cefe818d",
- subscribeKey = "sub-f4baaac5-87e0-11e0-b5b4-1fcb5dd0ecb4"
- ) />
- <!--- Return true so the application can load. --->
- <cfreturn true />
- </cffunction>
- <cffunction
- name="onSessionStart"
- access="public"
- returntype="void"
- output="false"
-
- <!---
- Set up the initial user. In order for the user to be
- able to post messages to the channel, they will have
- to be logged-in.
- --->
- <cfset session.user = {
- isLoggedIn = false,
- uuid = createUUID(),
- name = ""
- } />
- <!--- Return out. --->
- <cfreturn />
- </cffunction>
- <cffunction
- name="onRequestStart"
- access="public"
- returntype="boolean"
- output="false"
-
- <!--- Check to see if we need to re-initialize the app. --->
- <cfif structKeyExists( url, "init" )>
- <!--- Manually restart the application and session. --->
- <cfset this.onApplicationStart() />
- <cfset this.onSessionStart() />
- </cfif>
- <!--- Return true so the request can load. --->
- <cfreturn true />
- </cffunction>
- </cfcomponent>
For this demo, the user's logged-in status is tracked with a simple, session-based boolean value.
NOTE: Typically, I would use the "demo" account to illustrate PubNub behavior. In this case, however, that was not possible. From what I can tell, you cannot get the demo account to ever reject a publish resources even when the pub-key is missing from the client.
Now that you see how the user is being tracked, let's take a quick look at the server-side proxy that will act as our launchpad for PubNub message broadcasting.
Publish.cfm (Our Server-Side Publish Proxy)
- <!---
- Since the client has to publish by going THROUGH the ColdFusion
- application, we can add any kind of server-side security that we
- need to. In this case, we are just going to make sure the user is
- logged into the system.
- --->
- <cfif !session.user.isLoggedIn>
- <!--- Not authorized! --->
- <cfheader
- statuscode="401"
- statustext="Not Authorized"
- />
- <h1>
- 404 Not Found
- </h1>
- <!--- Halt processing of this template. --->
- <cfexit />
- </cfif>
- <!--- ----------------------------------------------------- --->
- <!--- ----------------------------------------------------- --->
- <!--- Param the form variables. --->
- <cfparam name="form.uuid" type="string" />
- <cfparam name="form.text" type="string" />
- <!--- Construct the message object. --->
- <cfset message = {} />
- <cfset message[ "uuid" ] = form.uuid />
- <cfset message[ "text" ] = form.text />
- <!--- Publish the message to PubNub. --->
- <cfset application.pubnub.publish(
- channel = "coldfusion:secure_publish",
- message = message
- ) />
- <!--- Return a success response. --->
- <cfcontent
- type="application/json"
- variable="#toBinary( toBase64( 1 ) )#"
- />
As you can see, the first thing we do in this server-side proxy is check to see if the user is logged-in. We are keeping it very simple for the demo; but you can see that using a server-side proxy affords a layer of security and moderation around the messages that get broadcast within the application. In addition to checking logged-in status, I could also examine the message content, interact with the user's account, or perform any number of other business-logic-related tasks that might surround publication.
Now, let's take a look at the client-side code to see how the publish/subscribe workflow is being pulled together.
Index.cfm (The Client-Side Code)
- <cfoutput>
- <!DOCTYPE html>
- <html>
- <head>
- <title>Secure PubNub Publish ColdFusion Demo</title>
- <!-- jQuery. -->
- <script type="text/javascript" src="./linked/jquery-1.6.1.min.js"></script>
- <!--
- Include PubNub from THEIR content delivery netrwork. In
- the documentation, they recommend this as the only way to
- build things appropriately; it allows them to continually
- update the security features.
- The ID and the SUB-KEY attributes of this script tag are
- used to configure the PUBNUB JavaScript wrapper.
- Notice that I am including the SUB-KEY but that I am NOT
- including the PUB-KEY. This will allow the client to
- attempts to publish directly to the PubNub API.
- -->
- <script
- id="pubnub"
- type="text/javascript"
- src=""
- sub-key="sub-f4baaac5-87e0-11e0-b5b4-1fcb5dd0ecb4"
-
- </script>
- </head>
- <body>
- <h1>
- Secure PubNub Publish ColdFusion Demo
- </h1>
- <h2>
- Messages:
- </h2>
- <ol class="messages">
- <!--- This will be populated dynamically. --->
- </ol>
- <!---
- Check to see if the user is logged-in. If not, they will
- not be able to submit to the server.
- --->
- <cfif session.user.isLoggedIn>
- <form class="message">
- <input type="hidden" name="uuid" value="#session.user.uuid#" />
- <input type="text" name="text" value="" size="40" />
- <button type="submit">
- Send Message
- </button>
- </form>
- <p>
- <a href="./logout.cfm">Log Out</a>.
- </p>
- <cfelse>
- <!---
- The user is not logged-in. Hide the form and only
- show them a way to login.
- --->
- <p>
- You must <a href="./login.cfm">Log In</a> in order
- to post messages.
- </p>
- </cfif>
- <p>
- <em>
- <strong>Note:</strong> At the time of this writing,
- PubNub had some temporary debugging in place that
- allowed a throttled number of publish requests to go
- through with "invalid" keys. This is for testing and
- is something they will be removing (or so I'm told).
- </em>
- </p>
- <!--- --------------------------------------------- --->
- <!--- --------------------------------------------- --->
- <script type="text/javascript">
- // Cache DOM references.
- var dom = {};
- dom.messages = $( "ol.messages" );
- dom.form = $( "form.message" );
- dom.uuid = dom.form.find( "input[ name = 'uuid' ]" );
- dom.text = dom.form.find( "input[ name = 'text' ]" );
- // Override the form submission. Since the user cannot
- // publish to the PubNub channel without a known PUB-KEY,
- // they will have to publish by-proxy, going through our
- // secure ColdFusion API.
- dom.form.submit(
- function( event ){
- // Prevent the default submit action.
- event.preventDefault();
- // Publish through the API.
- $.ajax({
- type: "post",
- url: "./publish.cfm",
- data: {
- uuid: dom.uuid.val(),
- text: dom.text.val()
- },
- dataType: "json",
- success: function(){
- // Clear the message text and re-focus
- // it for futher usage.
- dom.text
- .val( "" )
- .focus()
- ;
- },
- error: function(){
- // The user is probably not logged-in.
- alert( "Something went wrong." );
- }
- });
- }
- );
- // I add the incoming messages to the UI.
- function appendMessage( message ){
- // Create a new list item.
- var messageItem = $( "<li />" )
- .attr( "data-uuid", message.uuid )
- .text( message.text )
- ;
- // Add the message to the current list.
- dom.messages.append( messageItem );
- }
- // Subsribe to the appropriate PubNub channel for
- // receiving messages in this secure application.
- PUBNUB.subscribe({
- channel: "coldfusion:secure_publish",
- callback: appendMessage
- });
- </script>
- </body>
- </html>
- </cfoutput>
In the Head of this document, we are using the PubNub JavaScript library to both create and configure the PubNub namespace. As part of this configuration, however, we are including the sub-key and excluding the pub-key. As I explained earlier, the exclusion of the pub-key prevents the client from publishing directly to the PubNub API. If the client did try to publish a message, PubNub would return the following response:
[ 0, "Invalid Key." ]
With the pub-key missing, the client-side code has no choice but to use the server-side proxy as its only means of publication.
Login.cfm / Logout.cfm
These pages simply toggle the session-based login boolean, so I won't bother showing them here. If you want to see them, you can look at the PubNub.cfc project. I have added all of this code as an example within the Github repository.
Client-to-client communication is very cool; but in a many-to-many user graph, a publicly-know publish key can present a security problem. We can easily increase the level of security within our realtime application by requiring a server-side proxy for all broadcast requests. This keeps our publish key private and allows us to implement any number of security measures around realtime streaming while Pusher uses sockets.
Interested in your take.
@Mario,
That's a good question. A couple of people have asked me about the different platforms. I don't really have a good sense of pros and cons. One thing that I do think about, however, is the Key stuff.
In PubNub, I have my one set of publish and subscribe keys per account. In Pusher, the thing that it really cool is that I create "Application" inside of my account. Then, each application gets its own set of keys. This means that I could create temporary apps with keys and then destroy those apps and not worry about the keys again.
As far as I can tell, I don't see a way to even reset my PubNub keys. So, take this blog post, for example; I had to put my actually pub/sub keys into the code in order to demo how it works. Well, what happens when I got to create an actual production site? I wouldn't feel comfortable using keys that I *also* put in a blog post; as such, I'd have to create a new PubNub account just for that project.
And, it seems that I'd have to create a new PubNub account for *each* project in which I wanted to use the PubNub platform.
So, from a key-standpoint, I like the way Pusher does it. But, from a technical standpoint, I think PubNub is probably a bit easier to implement. From a performance standpoint, I'm not sure that I notice any significant difference.
Ben,
The html5 embed tag is a Netscape nonstandard. It is largely supported tag by other Browsers for sound and video clips. Html5 has a keygen tag. It specifies a key pair generator field used for forms. The embed tag does not have an attribute for the keygen yet but I bet it will.
I have been able to embed a compete web site. Sub Domain, written in PhP with an SQL Data Base, swtchelp.com to alhanson.com. It is in the menu as embedded swtc site. This means a ColdFusion site can be embedded it to html5 site. They both could to run parallel to each other.
I have been trying to pass a variable from the html5 code to the embedded page code, but some how I can seem to do this. Dumb me. I think this is a good thing. The solution would be to have 2 PhP or ColdFusion pages seamlessly embedded passing the variable between them. With having a keygen attribute in the embed tag would create a key at the session layer of the OSI model the key could be passed to the embedded page to create a client to client session key - the Marc Andreessen "te.la-port" to the cloud. So i will admit Marc is a genius. What more could he want? Everyone to know the boy that fell to Earth is his biological father? Who the [i] in the iPhone is? Well secret are secret when you declare them like variables.
A am curious - can we just create a separate channel for each client and maintain all communication through the server, as a proxy?
I know this article is over a year old, but since this article was published, PubNub does support multiple apps per account with a unique set of keys per app. | http://www.bennadel.com/blog/2217-adding-secure-publishing-to-the-pubnub-realtime-messaging-workflow.htm | CC-MAIN-2015-11 | refinedweb | 2,189 | 58.69 |
Introduction:.
Attachments.
Attachments
Participated in the
Microcontroller Contest 2017
Be the First to Share
Recommendations
12 Comments
1 year ago
trying to do this with the keyboard from a busted T1200, as for the code, what's the purpose of the '1' '2' before KEY_ESC?
1 year ago
Trying to get an old Military GRiD 1550 keyboard to work using this method, but whenever I press a key, it also produces 2-3 other characters (ghosting?).
For example...
Q Produces q['
W produces wp;'
Etc..
Some keys don't work at all.
The keyboard has 11 Rows and 13 Columns, modified the code to suit but given my extremely limited knowledge of coding, I'm struggling to understand what I've done wrong.
Heres the code I used, any suggestions would be greatly appreciated..
#include <Keypad.h>
const byte ROWS = 11; //eleven rows
const byte COLS = 13; //thirteen columns
char keys[ROWS][COLS] = {
{'1',KEY_F9,'2',KEY_F5,KEY_F4,KEY_F3,KEY_F2,KEY_F1,'3','1','2','3','1'},
{'a','1','2','3','1','2','3','1','2','b',KEY_LEFT_SHIFT,KEY_RIGHT_SHIFT,KEY_SPACE},
{'1',KEY_F10,KEY_F8,KEY_F7,KEY_F6,'2',KEY_1,KEY_ESC,'2','3','1','2','3'},
{'1',KEY_6,'2',KEY_3,KEY_2,KEY_W,KEY_Q,KEY_TAB,'3','1','2','3','1'},
{'1',KEY_7,KEY_5,KEY_4,KEY_E,KEY_S,KEY_A,KEY_CAPS_LOCK,'2','3','1','2','3'},
{'1',KEY_4,KEY_8,KEY_D,KEY_F,KEY_Z,KEY_SPACE,'c',KEY_LEFT,'2','3','1','2'},
{'1',KEY_0,KEY_T,KEY_R,KEY_G,KEY_C,KEY_X,'d',KEY_DOWN,'2','3','1','2'},
{'1',KEY_MINUS,KEY_U,KEY_Y,KEY_H,KEY_B,KEY_V,KEY_SCROLL_LOCK,KEY_NUM_LOCK,'2','3','1','2'},
{'1',KEY_F11,KEY_I,KEY_K,KEY_L,KEY_M,KEY_N,KEY_UP,KEY_TILDE,'2','3','1','2'},
{'1',KEY_EQUAL,KEY_O,KEY_F12,KEY_J,KEY_BACKSPACE,KEY_COMMA,KEY_RIGHT_BRACE,'e','2','3','1','2'},
{'1',KEY_P,KEY_LEFT_BRACE,KEY_SEMICOLON,KEY_QUOTE,KEY_PERIOD,KEY_SLASH,KEY_ENTER,KEY_RIGHT,'2','3','1','2'},
};
byte rowPins[ROWS] = {1,2,3,4,5,6,7,8,9,10,11}; //connect to the row pinouts of the keypad
byte colPins[COLS] = {12,13,14,15,16,17,18,19,20,21,22,23,24}; //connect to the column pinouts of the keypad
Keypad kpd = Keypad( makeKeymap(keys), rowPins, colPins, ROWS, COLS );
unsigned long loopCount;
unsigned long startTime;
String msg;
int x = 0;
Question 1 year ago
Hi! great instructables! I'm trying to reuse your code for a Sinclair Spectrum ZX +2 with a Teensy 3.2 but I'm a bit stuck: can you point me in the right direction? The upper membrane and lower membrane are driving me crazy
Question 2 years ago
Hello, I have an old keyboard for an Amstrad 3386SX.. notice it has 16 x 16 connectors¿ is it possible make it work? teensy only have 24 pins .. sure I missing sommething, my apologies, I'm newbie in this adventure
Answer 2 years ago
Hi. Wondering if you still have that keyboard and would you consider trading it so it matches the 3386SX main unit I have :) Thanks.
2 years ago
Hi,
Thanks for this inscructable!
I started to trace a scanning of the keyboard I am working on but I don't understand properly how to proceed further.
I traced the 24 tracks until they had an obvious terminating point; now I am left with a lot of tracks unaccounted for as I don't really get the logic of where to go next. I can't see very much detail on the main photos for this guide.
I've attached an image of what I have so far, could someone explain to me the logic behind where to go next?
Sorry if this is a bit vague; I lack the vocabulary to describe what I mean properly.
Thanks
3 years ago
Hi there, just wanted to say thanks for the great instructable! I have a pretty similar toshiba laptop kb (8086 or maybe 80286) that I salvaged and I've had a teensy++ 2.0 waiting for years to do this project. I guess I didn't want to do the work. But you've inspired me, and its all wired up to a teensy and I'm figuring out the keymap. So far the spacebar prints "G" and "L" prints "H". So... proof of concept? I'll update when I get it working.
4 years ago
I added a surplus keyboard to a Timex Sinclair once. I figured out the keyboard matrix on a piece of paper. How's that for geek cred?...
4 years ago
Oohh... I have a T1000 lying around, and I've been wondering if there's anything useful to do with it. Do you have additional information about this project somewhere else?
Reply 4 years ago
Cool! Not just yet but i will be publishing an instructable about the whole thing.
4 years ago
DUDE. I did this, but I couldn't because the keys came out all different. I had a bluetooth module and everything. Good to know you have to map the keys lol.
4 years ago. | https://www.instructables.com/Make-Any-Vintage-Keyboard-Work-With-a-Modern-PC/ | CC-MAIN-2021-10 | refinedweb | 822 | 72.56 |
Overview
A common task for system administrators and developers is to use scripts to send emails if an error occurs.
Why use Gmail?
Using Googles SMTP servers are free to use and work perfectly fine to relay emails. Note that Google has a sending limit: “Google will temporarily disable your account if you send messages to more than 500 recipients or if you send a large number of undeliverable messages. ” As long as you are fine with that, you are good to go.
Where do I start?
Sending mail is done with Python’s smtplib using an SMTP (Simple Mail Transfer Protocol) server. Since we will use Google’s SMTP server to deliver our emails, we will need to gather information such as server, port, authentication. That information is easy to find with a Google search.
Google’s Standard configuration instructions
Outgoing Mail (SMTP) Server – requires TLS or SSL: smtp.gmail.com
Use Authentication: Yes
Port for TLS/STARTTLS: 587
Port for SSL: 465
Server timeouts: Greater than 1 minute, we recommend 5
Account Name or User Name: your full email address (including @gmail.com or @your_domain.com)
Email Address: your email address ([email protected] or username@your_domain.com) Password: your Gmail password
Getting Started
Begin with opening up your favorite text editor and import the smtplib module at the top of your script.
import smtplib
Already at the top we will create some SMTP headers.
fromaddr = '[email protected]' toaddrs = '[email protected]' msg = 'Enter you message here’
Once that is done, create a SMTP object which is going to be used for connection with the server.
server = smtplib.SMTP("smtp.gmail.com:587”)
Next, we will use the starttls() function which is required by Gmail.
server.starttls()
Next, log in to the server:
server.login(username,password)
Then, we will send the email:
server.sendmail(fromaddr, toaddrs, msg)
The final program
You can see the full program below, by now you should be able to understand what it does.
import smtplib # Specifying the from and to addresses fromaddr = '[email protected]' toaddrs = '[email protected]' # Writing the message (this message will appear in the email) msg = 'Enter you message here' # Gmail Login username = 'username' password = 'password' # Sending the mail server = smtplib.SMTP('smtp.gmail.com:587') server.starttls() server.login(username,password) server.sendmail(fromaddr, toaddrs, msg) server.quit() | https://pythonarray.com/sending-emails-using-google/ | CC-MAIN-2022-40 | refinedweb | 392 | 57.87 |
Inside
const methods all member pointers become constant pointers.
However sometimes it would be more practical to have constant pointers to constant objects.
So how can we propagate such constness?
The problem
Let’s discuss a simple class that keeps a pointer to another class. This member field might be an observing (raw) pointer, or some smart pointer.
class Object { public: void Foo() { } void FooConst() const { } }; class Test { private: unique_ptr<Object> m_pObj; public: Test() : m_pObj(make_unique<Object>()) { } void Foo() { m_pObj->Foo(); m_pObj->FooConst(); } void FooConst() const { m_pObj->Foo(); m_pObj->FooConst(); } };
We have two methods
Test::Foo and
Test::FooConst that calls all methods (const and non-const) of our
m_pObj pointer.
Can this compile?
Of course!
So what’s the problem here?
Have a look:
Test::FooConst is a const method, so you cannot modify members of the object. In other words they become const. You can also see it as
this pointer inside such method becomes
const Test *.
In the case of
m_pObj it means you cannot change the value of it (change its address), but there’s nothing wrong with changing value that it’s pointing to. It also means that if such object is a class, you can safely call its non const methods.
Just for the reference:
// value being pointed cannot be changed: const int* pInt; int const* pInt; // equivalent form
// address of the pointer cannot be changed, // but the value being pointed can be int* const pInt;
// both value and the address of the // pointer cannot be changed const int* const pInt; int const* const pInt; // equivalent form
m_pObj becomes
Object* const but it would be far more useful to have
Object const* const.
In short: we’d like to propagate const on member pointers.
Small examples
Are there any practical examples?
One example might be with Controls:
If a
Control class contains an
EditBox (via a pointer) and you call:
int Control::ReadValue() const { return pEditBox->GetValue(); } auto val = myControl.ReadValue();
It would be great if inside
Control::ReadValues (which is const) you could only call const methods of your member controls (stored as pointers).
And another example: the
pimpl pattern.
Pimpl divides class and moves private section to a separate class. Without const propagation that private impl can safely call non-const methods from const methods of the main class. So such design might be fragile and become a problem at some point. Read more in my recent posts: here and here.
What’s more there’s also a notion that a const method should be thread safe. But since you can safely call non const methods of your member pointers that thread-safety might be tricky to guarantee.
Ok, so how to achieve such const propagation through layers of method calls?
Wrappers
One of the easiest method is to have some wrapper around the pointer.
I’ve found such technique while I was researching for
pimpl (have a look here: The Pimpl Pattern - what you should know).
You can write a wrapper method:
const Object* PObject() const { return m_pObj; } Object* PObject() { return m_pObj; }
And in every place - especially in
const method(s) of the
Test class - you have to use
PObject accessor. That works, but might require consistency and discipline.
Another way is to use some wrapper type. One of such helpers is suggested in the article Pimp My Pimpl — Reloaded | -Wmarc.
In the StackOverflow question: Propagate constness to data pointed by member variables I’ve also found that Loki library has something like: Loki::ConstPropPtr\
propagate_const
propagate_const is currently in TS of library fundamentals TS v2:
C++ standard libraries extensions, version 2.
And is the wrapper that we need:
From propagate_const @cppreference.com:
std::experimental::propagate_constis a const-propagating wrapper for pointers and pointer-like objects. It treats the wrapped pointer as a pointer to const when accessed through a const access path, hence the name.
As far as I understand this TS is already published (even before C++17). Still not all features were merged into C++17... so not sure if that's reach C++20. See this r/cpp comment.
It’s already available in
- GCC (libstdc++) - Implementation Status, libstdc++
- Clang (libc++) - code review std::experimental::propagate_const from LFTS v2
- MSVC: not yet
Here’s the paper:
N4388 - A Proposal to Add a Const-Propagating Wrapper to the Standard Library
The authors even suggest changing the meaning of the keyword const… or a new keyword :)
Given absolute freedom we would propose changing the const keyword to propagate const-ness.
But of course
That would be impractical, however, as it would break existing code and change behaviour in potentially undesirable ways
So that’s why we have a separate wrapper :)
We can rewrite the example like this:
#include <experimental/propagate_const> class Object { public: void Foo() { } void FooConst() const { } }; namespace stdexp = std::experimental; class Test { private: stdexp::propagate_const<std::unique_ptr<Object>> m_pObj; public: Test() : m_pObj(std::make_unique<Object>()) { } void Foo() { m_pObj->Foo(); m_pObj->FooConst(); } void FooConst() const { //m_pObj->Foo(); // cannot call now! m_pObj->FooConst(); } };
propagate_const is move constructible and move assignable, but not copy constructable or copy assignable.
Playground
As usual you can play with the code using a live sample:
Summary
Special thanks to author - iloveportalz0r - who commented on my previous article about pimpl and suggested using
popagate_const! I haven’t seen this wrapper type before, so it’s always great to learn something new and useful.
All in all I think it’s worth to know about shallow const problem. So if you care about const correctness in your system (and you should!) then
propagate_const (or any other wrapper or technique) is very important tool in your pocket. | https://www.bfilipek.com/2018/01/propagate-const.html | CC-MAIN-2018-30 | refinedweb | 940 | 52.9 |
DateTime UTC offset and UTC time wrong around DST switch
Bug Description
I ran through a weird problem with mx.DateTime
during last DST switch on CET on sunday 2012-10-28 01:00 UTC.
CET is on DST at UTC+2 up to 2012-10-28 01:00 UTC,
then switches to UTC+1.
In fact, for one hour past the switch, gmtoffset remained
stuck at +2 instead of +1, so gmtime was also bogus during that period.
The machine (x86, debian 4.0) monitors a timing system through the
use of a python script, and it needs TZ info. I ran through the problem
looking at the logs during the night of the switch from DST.
The logs are a bit verbose so I reproduce a simplified version
of what happenned on another machine ; this tiny script is run
on a Ubuntu 12.04.1, system time is UTC :
#!/usr/
from mx import DateTime
from time import sleep
t=DateTime.now()
while 1:
print "------
print "t = DateTime.now() : ", t
print "t.gmtoffset ", t.gmtoffset()
print "t.gmtime ", t.gmtime()
sleep(5)
I set time on the machine, run this script on CET timezone
and see what happens around the switch date (20121028 01:00 UTC).
1st test, reproducing the exact same behaviour that was observed on the
debian box :
I first set system time at 2012-10-27 23:59 UTC, and check :
root@ganymede:~# date -s"20121027 23:59"; date; TZ=CET date;
Sat Oct 27 23:59:00 UTC 2012
Sat Oct 27 23:59:00 UTC 2012
Sun Oct 28 01:59:00 CEST 2012
then I run the script, in CET timezone :
fm@ganymede:~$ TZ=CET ./test_tz.py
t = DateTime.now() : 2012-10-28 01:59:00.04
t.gmtoffset 02:00:00.00
t.gmtime 2012-10-27 23:59:00.04
All is ok. Two minutes later :
----
t = DateTime.now() : 2012-10-28 02:01:08.90
t.gmtoffset 02:00:00.00
t.gmtime 2012-10-28 00:01:08.90
still ok (but see ref [1] further in the text)
One hour later, one minute before the switch :
----
t = DateTime.now() : 2012-10-28 02:59:01.80
t.gmtoffset 02:00:00.00
t.gmtime 2012-10-28 00:59:01.80
Still ok
One minute later, after the switch :
----
t = DateTime.now() : 2012-10-28 02:00:01.86
t.gmtoffset 02:00:00.00
t.gmtime 2012-10-28 00:00:01.86
t is ok,
gmtoffset is bogus
gmtime is bogus, one hour back
that situation lasts one hour :
----
t = DateTime.now() : 2012-10-28 02:59:59.65
t.gmtoffset 02:00:00.00
t.gmtime 2012-10-28 00:59:59.65
still bogus.
----
t = DateTime.now() : 2012-10-28 03:00:04.66
t.gmtoffset 01:00:00.00
t.gmtime 2012-10-28 02:00:04.66
back to normal.
This is exactly the behaviour I observed in my logs.
2nd test : I first set the date 1 hour before the switch and check :
root@ganymede:~# date -s"20121028 00:00"; date; TZ=CET date
Sun Oct 28 00:00:00 UTC 2012
Sun Oct 28 00:00:04 UTC 2012
Sun Oct 28 02:00:12 CEST 2012
Then I run the script, after setting TZ :
fm@ganymede:~$ TZ=CET ./test_tz.py
t = DateTime.now() : 2012-10-28 02:01:31.54
t.gmtoffset 01:00:00.00
t.gmtime 2012-10-28 01:01:31.54
t is ok, but gmtoffset is bogus, and so is gmtime.
So at the same system (UTC) time 20121028 00:01, DateTime gives two different
answers (in one case, the script has been launched on UTC 20121027,
DateTime gives correct result ref [1]; if the script is launched on UTC 20121028 00:00,
DateTime gives bogus results for gmtime and gmtoffset.
This is not specific to CET, I tested with TZ=EST5DST :
----
t = DateTime.now() : 2012-11-04 01:59:59.25
t.gmtoffset -04:00:00.00
t.gmtime 2012-11-04 05:59:59.25
----
t = DateTime.now() : 2012-11-04 01:00:04.26
t.gmtoffset -04:00:00.00
t.gmtime 2012-11-04 05:00:04.26
bogus, gmtime jumped backward, gmtoffset is wrong.
----
t = DateTime.now() : 2012-11-04 01:00:09.26
t.gmtoffset -04:00:00.00
t.gmtime 2012-11-04 05:00:09.26
and one hour later :
----
t = DateTime.now() : 2012-11-04 01:59:58.81
t.gmtoffset -04:00:00.00
t.gmtime 2012-11-04 05:59:58.81
Stil bogus and finally :
----
t = DateTime.now() : 2012-11-04 02:00:03.81
t.gmtoffset -05:00:00.00
t.gmtime 2012-11-04 07:00:03.81
back to normal, gmtime jumping one hour forward.
(By the way, gmtime and gmtoffset should be named utctime
and utcoffset).
--
François Meyer Tel : (+33) 3 81 66 69 27 Mob : 6 27 28 56 83
Observatoire de Besancon - BP1615 - 25010 Besancon cedex - FRANCE
Institut UTINAM * Universite de Franche-Comte * CNRS UMR 6213 ***fmeyer@venus:~$
ProblemType: Bug
DistroRelease: Ubuntu 12.04
Package: python-
ProcVersionSign
Uname: Linux 3.2.0-31-generic x86_64
ApportVersion: 2.0.1-0ubuntu13
Architecture: amd64
Date: Wed Oct 31 20:00:15 2012
InstallationMedia: Ubuntu 12.04 LTS "Precise Pangolin" - Release amd64 (20120425)
ProcEnviron:
TERM=xterm
PATH=(custom, user)
LANG=fr_FR.UTF-8
SHELL=/bin/bash
SourcePackage: egenix-mx-base
UpgradeStatus: No upgrade log present (probably fresh install) | https://bugs.launchpad.net/ubuntu/+source/egenix-mx-base/+bug/1073697 | CC-MAIN-2018-51 | refinedweb | 919 | 78.25 |
Important changes to forums and questions
All forums and questions are now archived. To start a new conversation or read the latest updates go to forums.mbed.com.
4 years, 9 months ago.
Compiler Bug?
I have a struct of 4 integers like this:
Example
#include "mbed.h" struct ints { int i0, i1, i2, i3; }; struct ints i[0xf4]; //0xf3 works, 0xf4 doesn't int main(int argc, char **argv) { i[0].i0 = 1; //if i don't initialize some data, it also compiles fine return 0; }
Now the sice of the struct is 16 bytes, since one integer has 32 bits.
When I create an array of this struct, I can't have it larger than about 0xa0 items, which is 2kByte of memory (Otherwise the compiler tells me, that I am out of memory). The Project overview tells me, that I have about 6.5kByte of memory still available, when I remove that array.
I'm compiling for the LPC11u24.
The Compiler error code it gives: L6407E
My question is: Is this a bug in the compiler (I doubt it). Or is there some other hindrance like bank switching, which the compiler doesn't implement?
1 Answer
4 years, 8 months ago.
Manuel,
I have tried to replicate your issue, however, i have successfully built and compiled using the LPC11u24 using 3.8kb/8kb, have you tried recreating the program? Have you tried placing data in 'const' in flash rather than RAM? Can you upload your exact problem and code to test.
Regards,
-Andrea
This is the exact problem code. I first encountered this in a large project and tested it with this code in a completely new project. This is also why the values have changed from A0H to F3H.posted by 10 Aug 2016
I just tried it with a third project and realized, that my first test program and the original project both used a old mbed library. Updating it resolved the issue.posted by 10 Aug 2016
What's the target? A simple app to reproduce the problem - so we cn just import and see ?posted by Martin Kojtal 08 Aug 2016 | https://os.mbed.com/questions/72232/Compiler-Bug/ | CC-MAIN-2021-21 | refinedweb | 358 | 74.59 |
Ticket #5854 (closed Bugs: fixed)
asio::ssl::stream holds last 16K until shutdown IF last buffer is on 16K boundary
Description
Given a boost::asio::ssl::stream<boost::asio::ip::tcp::socket> ssl_socket,
when using boost::asio::async_write(ssl_socket, std::vector<boost::asio::const_buffer> ..),
the completion handler gets called, but the last 16384 bytes will never be sent
until you call shutdown on the socket, if the last buffer's size is a multiple
of 16K (i.e. 16K, 32K, ...).
(Windows 7, VS2010, 32-bit build, OpenSSL 1.0.0d)
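Not the reporter's code, but a minimal sketch of the call pattern described
above (the function names and payload are made up; the handler is a free
function to keep it VS2010/C++03-friendly):

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <vector>

namespace asio = boost::asio;
typedef asio::ssl::stream<asio::ip::tcp::socket> ssl_socket_t;

void on_write(const boost::system::error_code& ec, std::size_t bytes)
{
    // Reports success for the full byte count, yet the peer does not
    // receive the final 16384 bytes until shutdown is called on the stream.
}

void send_payload(ssl_socket_t& ssl_socket, const std::vector<char>& payload)
{
    // The bug triggers when payload.size() is a multiple of 16384 (16K, 32K, ...).
    std::vector<asio::const_buffer> buffers;
    buffers.push_back(asio::buffer(payload));
    asio::async_write(ssl_socket, buffers, &on_write);
}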
Change History
comment:2 Changed 5 years ago by chris_kohlhoff
- Status changed from new to closed
- Resolution set to fixed
(In [74863]) Merge from trunk...
Fix compile error in regex overload of async_read_until.hpp. Fixes #5688
Explicitly specify the signal() function from the global namespace. Fixes #5722
Don't read the clock unless the heap is non-empty.
Change the SSL buffers sizes so that they're large enough to hold a complete TLS record. Fixes #5854
Make sure the synchronous null_buffers operations obey the user's non_blocking setting. Fixes #5756
Set size of select fd_set at runtime when using Windows.
Disable warning due to const qualifier being applied to function type.
Fix crash due to gcc_x86_fenced_block that shows up when using the Intel C++ compiler. Fixes #5763
Specialise operations for buffer sequences that are arrays of exactly two buffers.
Initialise all OpenSSL algorithms.
Fix error mapping when session is gracefully shut down.
Various performance improvements:
- Split the task_io_service's run and poll code.
- Use thread-local operation queues in single-threaded use cases (i.e. concurrency_hint is 1) to eliminate a lock/unlock pair.
- Only fence block exit when a handler is being run directly out of the io_service.
- Prefer x86 mfence-based fenced block when available.
- Use a plain ol' long for the atomic_count when all thread support is disabled.
- Allow some epoll_reactor speculative operations to be performed without holding the lock.
- Improve locality of reference by performing an epoll_reactor's I/O operation immediately before the corresponding handler is called. This also improves scalability across CPUs when multiple threads are running the io_service.
- Pass same error_code variable through to each operation's complete() function.
- Optimise creation of and access to the io_service implementation.
Remove unused state in HTTP server examples.
Add latency test programs.
(In [74817]) Change the SSL buffer sizes so that they're large enough to hold a complete TLS record. Refs #5854
Glassfish 3.1.2 embedded jdbc-connection-pool not working
Hi, I have an EAR which has an EJB module and a web module. In my EJB module I created glassfish-resources.xml under the META-INF folder. In the constructor of one of my EJBs I created an InitialContext to look up my JNDI resource.
<jdbc-connection-pool name="mydb_pool" ping="true" ...>
<property name="User" value="usr" />
<property name="Password" value="my1DB" />
<property name="dataBaseName" value="mydb" />
<property name="serverName" value="localhost" />
<property name="portNumber" value="1234" />
</jdbc-connection-pool>
<jdbc-resource jndi-name="java:module/jdbc/mydb" pool-name="mydb_pool" /> <!-- jndi-name assumed to match the lookup below -->
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

@Stateless
@LocalBean
public class EmployeeService {
    private DataSource dataSource;

    public EmployeeService() {
        try {
            InitialContext context = new InitialContext();
            dataSource = (DataSource) context.lookup("java:module/jdbc/mydb");
        } catch (NamingException e) {
            throw new RuntimeException(e); // this is the lookup that fails
        }
    }
}
I deployed my application from the command line using
asadmin --user admin --host localhost --port 20048 deploy --force simple-ejb-project-ear.ear
Then I get an error in the server log that the Name cannot be found.
Then I removed the lookup in my EJB and tried to list the resources in my application's submodules to check whether the resources are being created or not. I used
asadmin --user admin --host localhost --port 20048 list-applications --subcomponents --resources
I was able to see the resources being created. But when I try the lookup I get an error.
Can anyone please help me. Thank you all in advance.
Look at "Table 2 Operations Allowed in the Methods of a Stateless
Session Bean" - JNDI access will work only starting with the PostConstruct.
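For example, a minimal sketch of deferring the lookup to a lifecycle callback
(reusing the resource name from your code):

import javax.annotation.PostConstruct;
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

@Stateless
@LocalBean
public class EmployeeService {
    private DataSource dataSource;

    @PostConstruct
    void init() {
        try {
            // JNDI access is allowed from PostConstruct onward, so the same
            // lookup that fails in the constructor succeeds here.
            InitialContext context = new InitialContext();
            dataSource = (DataSource) context.lookup("java:module/jdbc/mydb");
        } catch (NamingException e) {
            throw new IllegalStateException(e);
        }
    }
}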
-marina
On 6/26/13 4:12 AM, [email protected] wrote:
> Holy shit!! It's working if I use the @Resource annotation. But with
> InitialContext it's not working. Any ideas?
Holy shit!! It's working if I use the @Resource annotation. But with InitialContext it's not working.
Any ideas?
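For reference, the injection-based version that works looks roughly like this
(a sketch: @Resource here is javax.annotation.Resource, and the lookup name
matches the string used above):

import javax.annotation.Resource;
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.sql.DataSource;

@Stateless
@LocalBean
public class EmployeeService {
    // The container injects this after instantiation, which is why it works
    // where the constructor-time lookup did not.
    @Resource(lookup = "java:module/jdbc/mydb")
    private DataSource dataSource;
}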
Tim Hill wrote:
> ok, let's consider patch propagation.
Cool. Thanks for taking the time to present this use case.
Now, as usual, I'm going to go through it and comment. Well, OK, I'm going to
push the case for the present Subversion tags mechanism a bit, but I think I
need to in order to draw out a good argument from you. :-) I hope
you don't mind.
Oh, and let me make it clear that I don't necessarily think revision-alias
labels are a bad idea; I'm still just trying to fully understand the rationale
for them.
By the way, I don't know, but it might help a bit to know what your background
is. For example, are you accustomed to some particular version control system
other than Subversion, or do you have wide experience, or what? One thing that
makes me wonder is in the next sentence where you refer to your mainline build
as the "main branch"; if you were accustomed to Subversion you would have
called it the "trunk".
>.
OK, I imagine that's a common scenario.
>.
OK.
>.
Well, that depends whether you tagged them! (Actually I wouldn't expect you to
have tagged PREFIXED - see below - but you suggest below that you would have
labeled FIXED if you had labels, so I suggest you would have tagged it if you
only have tags.)
> Assuming I'm using
> the recommended best practice of marking defect #s in the message log, I
> will need to manually examine the log to extract the rev# from the log
> messages.
Well, actually what you need to do is find the revision number in which the fix
was made, if you haven't tagged it. To find the revision number, you could
look through the log messages, or perhaps you can look in your bug tracker, or
perhaps you have it in a revision property as I mention under your point (b)
below. In this project we try to close bugs in the issue tracker with a
comment like "Fixed in r1234.", so the information is fairly easy to find manually.
> REL1 is available via the --stop-on-copy switch, but this is not
> available in the DIFF or MERGE commands, only LOG. So , again, I have to
> do a manual scan of the log to obtain the rev#.
Again, you only have to do a manual scan if you haven't tagged it. I would
expect that you would have tagged it. At the time when you started your "rel1"
branch you would have done:
svn copy . $BRANCHES/rel1 # start a "rel1" branch
svn copy $BRANCHES/rel1 $TAGS/rel1-start # label your "rel1" starting point
and then you would start to modify the "rel1" branch, and you would tag various
release candidates ("rel1-rc1" or whatever) along the road towards finishing
it. When finished, you would tag the final version of it ("rel1") and delete
the branch.
svn checkout $BRANCHES/rel1
# modify stuff; it seems stable enough to release
svn cp . $TAGS/rel1-rc1 # tag a release candidate
# now it's been tested and is definitely stable and complete
svn copy . $TAGS/rel1 # tag the final version of it
svn delete $BRANCHES/rel1 # the branch is no longer needed
At least that's roughly how we do it in this Subversion project. Actually, we
haven't been tagging the start of the branch, but we could and would do if we
found it useful.
>?
Umm... it'll be fun to see if you can convince us that the same comments don't
apply to revision-alias labels :-)
> More
> importantly, how do I locate PREFIXED? This is a *relative* rev# and is
> conceptually related to the COMMITTED rev#, but with an arbitrary
> starting point.
Actually, this is easy if you are working with actual revision numbers either
manually or in a programming/scripting language: you can just subtract 1. You
can see that this is correct by thinking about the difference between foo.c in
revision PREFIXED and revision (FIXED-1): by definition, there is no difference.
If you are working with a tag, then you can't at present get Subversion to
calculate the previous revision for you. That is a deficiency which I admit is
important in this scenario.
There was a proposal for adding a syntax for doing this within the
revision-number argument: something like "-rHEAD-1" or "-r{2005-05-31}-1" would
mean the previous version of the item before the version in HEAD or the
specified date, and "-rBLAH-2" would mean 2 versions of the item before BLAH
(which is different from the revision number that is 2 less than "-rBLAH").
The proposal was quite well developed - not far from being committable - but it
didn't get finished or approved or whatever. We might want to ressurect it.
I'm not sure whether that proposal would allow you to get the version that came
before a particular tag. It might, something like: "$TAGS/rel1-start/foo.c
-rHEAD-1".
Anyway, I concede that there is definitely missing functionality here, and I
don't think this has been mentioned before in this discussion so thank you for
bringing this point to light.
> (b) I could create a script to do the necessary SVN LOG scan, pluck out
> the rev#, and paste it into a DIFF/MERGE command. This could work for
> FIXED, but not PREFIXED.
Actually it does work for PREFIXED, by subtracting 1 from FIXED.
> And it depends upon well-formatted log file
> messages, and "prayer-based parsing" common to semi-formal data sets.
Yes. Similarly the script could get the information from a dedicated "bug
number" revision property (look up "revision properties" in the Subversion book
if this doesn't make sense), or from your bug tracker. Either of those could
give you the information in a more formal structure.
> (c) Use mnemonic labels for revisions, and a convention that creates a
> label named "DEFECT-FIX:nnnn" for appropriate check-ins.
Right! Now, why don't your arguments above about tags apply equally to this?
You said:
"But creating a tag for *every* defect fix seems excessive, even if tags are
cheap. And who will remember to do this?"
I say: Creating a tag or a label for every defect only seems excessive if you
think the creation is an expensive operation or if the list of them is going to
make it hard to find other kinds of tags/labels. Creation (of a tag) is not
expensive, and to keep from cluttering your list of release tags (say), you can
use a separate tag name space - e.g. "$TAGS/bugs/..." (This begs the question
of whether the "labels" mechanism would want such a namespace mechanism.)
As for remembering to do it, well, what's there's no difference between
remembering to create a tag and remembering to create a label. Presumably you
would automate it.
> Now FIXED is
> *directly* accessible to me *and* the toolchain.
It appears to me that it would be accessible to the "svn" command via the "-r"
option argument rather than via a normal non-option argument. Is that all you
mean by "directly"?
Perhaps I'm being unreasonably hard on you. I can see that there is some
fairly profound respect in which identifying a marked point in history by path
(a tag) is different from identifying it by revision (a label) ... I just don't
know what the implications of this difference are.
> This leaves PREFIXED up
> in the air, however.
Addressed above.
>#).
OK. The tags version of that command would be something like:
svn diff $TAGS/DEFECT-FIX-1234/foo.c --revision HEAD-1:HEAD
(The syntax "HEAD-1" or "PRIOR" is not supported yet, but that seems to be
equally necessary for either approach.)
> Skipping the ugly syntax and some significant issues around
> the semantics/implementation of PRIOR, I think this is a significant
> improvement over manually scanning logs.
It seems to me that the syntax and the ability to specify one revision relative
to another are perhaps some of the most important issues here. However, you
are certainly bringing out some interesting points and helping us to understand
the issue. Thanks for your continuing patience in working through this.
- Julian
>> Julian Foad wrote:
>>> Tim Hill wrote:
>>>> Yes, this is certainly one way. The main objection I have to the
>>>> current scheme is that tags are just a convention, and as a result
>>>> the toolchain cannot use the implied REV# information directly -- it
>>>> has to be manually imputed by a user and translated by him/her to a
>>>> REV# for consumption by the toolchain. This makes
>>>> scripting/automation difficult for some common SVN operations and
>>>> adds burden to the dev process.
>>>
>>> Please give an example (stating the desired task and showing actual
>>> commands for doing it) of a situation in which the user has to do
>>> this "translation".
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
Received on Thu Jun 2 15:45:44 2005
This is an archived mail posted to the Subversion Dev
mailing list. | https://svn.haxx.se/dev/archive-2005-06/0081.shtml | CC-MAIN-2019-43 | refinedweb | 1,534 | 60.65 |
Synopsis:
What are the guidelines for deleting data?
Solution:
Please note that the
set-delete command has been deprecated. The
truncate command should be used instead.
-.
Aerospike Server version 3.12.1 or above
Truncate
Test Truncations Gradually
It is recommended to test the truncations gradually, one set at a time and monitoring the potential impact on the overall system performance. Even though truncation is much more efficient than its predecessor set-delete, (for example it does not generate deletes that propagate to the replicas over fabric), the vacuum created by the sudden deletion of records could, for example, cause a surge in defragmentation activity, impacting the storage subsystem performance.
Usage
Truncate a set
asinfo -v "truncate:namespace=namespace_name;set=set_name"
The name of the set to be truncated would have to be specified. If not specified, the whole namespace will be truncated, including records not belonging to any sets.
Truncate a set up to a specific time
asinfo -v "truncate:namespace=namespace_name;set=set_name;lut="
The last updated time is expressed in nanoseconds (8 bytes) since the UNIX epoch (i.e., nanoseconds since 00:00:00 UTC on 1 Jan 1970). It can be given in hex (with a 0x prefix), decimal, or octal (with a 0 prefix). The lut time is not allowed to be older than the Citrusleaf epoch (00:00:00 UTC on 1 Jan 2010), and is not allowed to be beyond the current time (i.e. in the future). If not specified the current time is used.
Truncate a namespace
asinfo -v "truncate:namespace=namespace_name"
Log Analysis
Here are the log lines which will provide more information on the status of truncate.
The log line formats below are from version 3.12.1:
Apr 13 2017 00:41:24 GMT: INFO (truncate): (truncate.c:206) {test|testset} got command to truncate to now (229740084581)
Truncate command received.
NOTE: Will only appear on the node to which the info command was issued. The command is distributed to other nodes via system metadata (SMD), and only the truncating/starting/restarting/truncated/done log entries will appear on those nodes.
The timestamp printed in the logs is the truncation time but represented in milliseconds since the Citrusleaf epoch (00:00:00 UTC on 1 Jan 2010).
Apr 13 2017 00:41:24 GMT: INFO (truncate): (truncate.c:440) {test|testset} truncating to 229740084581
Truncate command received. Will appear on all the nodes after a truncate command is issued.
Apr 13 2017 00:41:24 GMT: INFO (truncate): (truncate.c:462) {test} starting truncate
Truncate command being processed for the namespace.
Apr 13 2017 00:41:27 GMT: INFO (truncate): (truncate.c:569) {test} truncated records (10,50)
Truncate command being processed for the namespace. The numbers in parenthesis represent (
current,
total).
current is the number of records that have been deleted by truncation since the command was issued (10 in this example).
total is the number of records that have been deleted by truncation since the server started (50 in this example).
Counts are only kept at the namespace level.
Apr 13 2017 00:41:27 GMT: INFO (truncate): (truncate.c:573) {test} done truncate
Truncate command completed.
Warning
- If using the client APIs to perform the truncate command on a single-threaded application please add a millisecond(ms) sleep. The truncate operation has a 1 millisecond resolution & writes occurring within the same millisecond are not deleted.
- If using the client APIs to perform the truncate command on a multi-threaded application please be aware that accuracy of the truncate is to a precision is a 1 millisecond(ms). Therefore, any writes occurring within that 1 millisecond(ms) need to be considered as those records will still persist.
Highlighted Guidelines
- If security is enabled on the cluster (
enable-securitytrue), the user executing the truncate command must have been granted the ‘sys-admin’ or
data-adminrole.
- New writes can be performed into the sets being truncated, as the new writes will occur after the last update time (lut).
- The truncate command rapidly removes entries from the in-DRAM index. There is no dependency on the namespace supervisor (nsup) cycle thread, so deletion begins immediately upon initiation of the truncate command.
- Truncate can also be performed using the client APIs (example: Java Documentation).
- Enterprise Edition only: In consideration of the case of a cold start, an entry is added in the SMD (System Meta Data) subsystem so that a full restart does not cause the data to return. This can be considered as the “tombstone” for the set.
- Truncation essentially takes effect immediately across the cluster, and will apply to any migrations that might be in progress.
- Truncation will apply to any new nodes joining or old nodes rejoining the cluster after the truncate was executed as the truncated time would be synced over the SMD protocol.
- If the set metadata is required to be deleted as well, any potential roles with privileges on the set in that namespace would need to be dropped prior to cold starting each node in the cluster in a rolling fashion (Enterprise Edition only).
Aerospike Server version 3.12.0 or below
set-delete
Test Deletions Gradually
Meaning if you want to delete multiple sets of data, do not start out deleting all sets at once. Rather, delete 1 set initially and verify overall system impact. Then gradually increase the sets to be deleted monitoring the system during each iteration.
Monitor System Impact
There will be an overall system impact when deleting your data. Deletions will propagate over fabric and could impact network efficiency. Deletes can also impact the defragmentation rate, amplifying writes and potentially increasing transaction latencies.
When considering deletion you should consider tuning, as you will have to choose between:
A. Records being deleted at a quick pace, with a greater impact on latency
B. Records being deleted at a slower pace, with lesser impact on latency.
Tune Configuration Parameters
Some configuration parameters to consider to tune would be:
Log Analysis
Here are the two log lines which will provide more information on the status of set delete.
The log line formats below are from version 3.9.
{ns-name} Records: 37118670, 0 0-vt, 0(377102877) expired, 185677(145304222) evicted, 0(0) set deletes, 0(0) set evicted. Evict ttls: 34560,38880,0.118. Waits: 0,0,8743.
set deletes represents the number of records deleted by a set-delete command.
in-progress: tsvc-q 0 info-q 5 nsup-delete-q 10 rw-hash 0 proxy-hash 0 tree-gc-q 0
nsup-delete-q represents the number of records queued up for deletion by the nsup thread.
Highlighted Guidelines
Here are the most stressed highlighted guidelines:
- If security is enabled (
enable-securitytrue), the user executing the set-delete command must have been granted the ‘sys-admin’ or ‘data-admin’ role.
- Any roles with privileges on the set in that namespace would need to be dropped prior to truncating the set, as the metadata still exists and could return on a subsequent cold start.
- In considering the case of a cold start, the index will be rebuilt from persistent storage and hence, deleted data for which the defragmentation has not processed or has processed but not yet overwritten will reappear.
- In order to delete all the objects in the set in a cluster, set-delete needs to be dynamically set to true on all nodes.
- For example:
asadm -e “asinfo -v ‘set-config:context=namespace;id=test;set=testset;set-delete=true;’”
- In order to delete the set data cleanly, ensure that set-delete shows as false, on ALL nodes, after executing the deletion command.
- Do not perform writes into the sets being deleted.
- Do not perform set-delete during migrations as this can result in records migrated into a partition after it has been processed for the set-delete, but before the whole set is deleted, resulting in the node being stuck in the ‘set-delete’ true state.
Reference
To read more about Truncating a Set in a Namespace, see Managing Sets in a Namespace
To read more about the truncate command, see Info Command Reference
To read more about Deleting a Set(Deprecated) in a Namespace, see Managing Sets in a Namespace
To read more about Set Delete Flow, see set-delete flow
To read more about Configuration Parameters, see Configuration Reference
To read more about details on each of the elements in the log lines, see Log Reference
To learn more about monitoring latencies, see Log Latency Tool (asloglatency)
To read more about What is the right way to delete sets completely, see What is the right way to delete sets completely?
To read more about Durable Deletes, see Durable Deletes
To read more about returning set statistics for all or a particular set, see Info Command Reference
To read more about administering user roles, see [Access Control] ()
Keywords
SET-DELETE DELETE DATA SET RECORDS LATENCY DEFRAGMENTATION DELETES TRUNCATE LUT NAMESPACE
Timestamp
06/12/2017 | https://discuss.aerospike.com/t/guidelines-for-deleting-data/3681 | CC-MAIN-2018-30 | refinedweb | 1,499 | 52.19 |
To Build a Robot Part 4
April 9, 2010
I received my TLC 5940 NT from Texas instruments and immediately set it up to do a quick test with some LEDs which is always fun: LED Video
I was working on the initial setup while I was sick so I missed something very obvious about the design of my circuit. The TLC actually uses the Ground to produce the PWM wave where the Arduino uses the +. This means to have it turn on the H-Bridge you need to do a pull-up resistor so you can feed it with the 5v.
I used 2k resistors for the Pull-up and things worked well, when I went to 10k the motor slowed in reverse (no idea why right now ill investigate later)
Using this setup you can drive a very large number of motors. You can put the TLC5940 chips in series and control hundreds of motors. For the setup I have here you could control 2 motors per H-Bridge if you use the L293B (but get the L293D) like I have here. Since each motor only requires 2 pins you can control a maximum of 8 motors per TLC5940 chip if you also have 4 h-bridges. If you only need the DC Motor to spin in one direction you only need 1 Pin to control it and could then control 16 motors with variable speed.
Keep in mind the amperage when driving motors with the L293 series they all allow up to about 1 amp. Some motors are very powerful and can reach this limit quickly. Choose your H-Bridge wisely.
Docs and Datasheets:
The sketch uses the TLC library for the Arduino available here: TLC5940 Arduino Library
The sketch just spins a motor forward for 1 second then breaks and backward for a second.
#include "Tlc5940.h" boolean forward = false; void setup() { Tlc.init(); } void loop() { Tlc.clear(); forward = !forward; if (forward) { // Brake Tlc.set(1, 0); Tlc.set(2, 0); Tlc.update(); delay(500); // Go Tlc.set(1, 0); Tlc.set(2, 4095); } else { // Brake Tlc.set(1, 0); Tlc.set(2, 0); Tlc.update(); delay(500); // Go Tlc.set(1, 4095); Tlc.set(2, 0); } Tlc.update(); delay(1000); }
hey
what pins do you hook it up to on the arduino
and if i want 4 motor controllers do i need 4 arduinos or will it work with 1 arduino
It is using the TLC5940 to control the L293B which controls the speed and direction of the motor while acting as a bridge for the external power source. You would just need to hook it to a second pin on the TLC5940 (there are 15 available). Since the L293B can power two motors at the same time you could use that to run two motors and just add a second L293B for the next two.
Also the pins on the schematic are correct pinouts for the arduino board itself not the chip. So pins 6, 9, 10, 11, 13 are used to control the TCL5940 which is all you will need because the code controls the rest of the pins on that chip. | http://rpgduino.wordpress.com/2010/04/09/to-build-a-robot-part-4/ | CC-MAIN-2014-35 | refinedweb | 529 | 80.01 |
Learn C control flow statements, loops with examples using if...else, for Loop, while Loop, break and continue, switch...case..
Learn to develop software solutions for linux environment, implement requirements through real time projects and get required practical skills for software jobs
Learn to develop embedded systems, interfacing electronic peripherals through real time projects and get required practical skills for software jobs.
if else condition is used to check if a given condition is met. We do certain things if the condition is met. Let's write a small program function to increase the integer passed to it and print the result.
Program output:Program output:
#include "stdio.h" void checkCond(int ); int main() { int num; num = 12; checkCond(num); num = 20; checkCond(num); return 0; } void checkCond(int a) { if((a > 10) && (a < 20)) { a++; printf("Result is:%d\n", a); }else { printf("Number is out of range!!"); } }
Result is:13 Number is out of range!!
In the above program the function checkCond accepts an integer, checks if the number is between 10 and 20(excluding 10 and 20) and prints the result after incresing by 1. If the condition is not true it prints a message "Number is out of range!!"
In nested if statements an if statement is included inside another if statement.
Let's write a small program to demonstrate the use of nested if statements as shown below.
Program output:Program output:
#include "stdio.h" void checkNestedCond(int ); int main() { int num; num = 12; checkNestedCond(num); num = 15; checkNestedCond(num); num = 19; checkNestedCond(num); num = 21; checkNestedCond(num); return 0; } void checkNestedCond(int a) { if((a > 10) && (a < 20)) { if((a > 10) && (a < 14)) { printf("Number is between 10 and 14\n"); }else if((a > 14) && (a < 18)) { printf("Number is between 14 and 18\n"); }else { a++; printf("Result is:%d\n", a); } }else { printf("Number is out of range!!"); } }
Number is between 10 and 14 Number is between 14 and 18 Result is:20 Number is out of range!!
In the above program within range of 10 and 20, we checked if number is between 10 to 14 or if the number is between 14 to 18 or the number is greater than 17 using nested if statements. | https://www.mbed.in/c/c-control-flow-loops-macros-operators-examples/ | CC-MAIN-2021-31 | refinedweb | 375 | 59.74 |
Oh ! ^^
Maybe i read too fast and miss the "-" xDI'll look at it.
Either way, Thanks a lot, that should help =)
EDIT : Done ! ^^I didn't handle the one number case xDWork better now =)
Thanks !
My result is coming up as wrong for the "choose the right temperature" and "complex test case" because I'm returning a negative when it's expecting a positive, the directions say, "If two numbers are equally close to zero, positive integer has to be considered closest to zero (for instance, if the temperatures are -5 and 5, then display 5)." I assume if they are both negative then we should the negative. I'm confused by the directions perhaps, but it seems to me that if you have two -5 then you should return a -5 since there is no positive 5. Can anyone shed some light on this? If I am supposed to return a positive when I have two negative numbers then the directions are pretty unclear, but can someone help guide me towards how I should approach this? I'm using Python3.Thanks.
There are no test cases where there are 2 values of -5. If your code is finding 2 values of -5, then there's something wrong with the code you have that reads the values.
However, if there were a case where there were 2 values of -5 and no other values were closer to 0, -5 would be the correct answer.
You can look at the test cases if you select Expert Mode from the options and then click on the square in the test cases panel.
Hello,
The OCaml code skeleton is bugged, you give: the following code:
let n = int_of_string (input_line stdin) in (* the number of temperatures to analyse *)
(*let line = input_line stdin in*)
for i = 0 to n - 1 do
(* t: a temperature expressed as an integer ranging from -273 to 5526 *)
let t = Scanf.sscanf line "%d" (fun t -> (t)) in
...
done;
but working over a string sscanf will not "eat" the current char and iterate n time over the same int.
One solution is the following:1) replace line by:
let line = Scanf.Scanning.stdin in
2) replace the sscanf by a bscanf:
let t = Scanf.bscanf line "%d " (fun t -> (t)) in
Another solution in a functional way (OCaml programmers like functional structures, it is a big disappointing (and unusual) to be confronted to imperative for loops):
let _n = int_of_string (input_line stdin) in (* the number of temperatures to analyse *)
let line = input_line stdin in
(* t_lst: a list of temperatures expressed as an integer ranging from -273 to 5526 *)
let t_lst = (List.map int_of_string (String.split_on_char ' ' line))
in
(*your code here*)
This should be more expected for an OCaml programmer to code from a List than from a loop (but I can understand you want a very similar pattern for every language).
I don't have any hard-coded values but validation fails on test case 3. On IDE, everything checks out.
I am a beginner of programming, why I made the mistake here?(I even don't know how to deal with the first case)PLEASE HELP
fscanf(STDIN, "%d",
$n // the number of temperatures to analyse
);
$inputs = fgets(STDIN);
$inputs = explode(" ",$inputs);
$near = $inputs[0];
for ($i = 0; $i < $n; $i++)
{
if ( abs($near) >= abs($inputs[$i+1]) ) {
$near = abs($inputs[$i+1]);
}
}
echo($near);
Well, yes. The $inputs array has indexes ranging from zero to ($n-1) and you iterate the variable $i in the 'for' cycle also from zero to ($n-1). So far so good But the body of 'for' is referring to $inputs[$i+1]. This is problematic, because when $i will reach its last value of $n-1, then the $i+1 is equal to $n. Number $n is not one of the valid indexes for $inputs and your program throws an error. You could try to use simply $i instead of $i+1 in 'for' construct. This will get your program to run, but the program logic too has to be little bit adjusted to get the correct answers.
Has anybody understood the third test case? I didn't understand what exactly "Choose the right temperature" means.
Just print the inputs!
Hi everybody,I'm trying to do this in C++, but cant succeed convert the String in a list of integer ?I tried atoi but seems, not working, isstreaml too... i dont see how to do, an advice about a class/function ?
Thks
I feel like my logic is solid but maybe I'm making a syntax error Can someone help me? I set the program up to separate the positive and negative numbers and now it keeps returning the positive numbers array as being null. I keep looking over the code and can't see where I went wrong any suggestions. Also not sure if pasting code here is allowed so I'm not.
larryjoe use the int function ().
I have same failure (5526 alone test) and i can't understand why!
Input is not printing with System.out.prinltn(i); in the for loop please anybody knows give me reply
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
int main()
{
int n,temp;
long x[10000];
scanf("%d", &n);
for (int i = 0; i < n; i++)
scanf("%hd", &x[i]);
i=0;
temp=A[0];
for(min=fabs(A[0]);i<n;i++)
if(fabs(A[i]))<fabs(temp)
temp=A[i];
printf("%d\n",temp);
return 0;
}
Why this won't work??????
Lots of stuff...Undeclared variable.Renamed the array without renaming all the references to it.For loop declaration is messed up (bad copy-paste?)Scanf has a bad format specifier.Problem with nested parentheses.
Once that's done, the program will run and you can find the other errors...
can u please offer a python solution for this...?
Hi!!!! Would you pls send me your code? I'm totally new here and i cant figure it our with python3..... many thanks in advance... pls pls
No. kthxbye.
fine
hi im here againbelow is my code... i dont unserstand why this code cant work for the situation in which all temperatures are negative..?
n = int(input()) # the number of temperatures to analyse
a = set()
c = set()
for i in input().split():
t = int(i)
a.add(t)
c = map(abs,a)
smallest = [n < min(c)+1 for n in c]
list1 = list(compress(a, smallest))
if list1 == []:
print(0)
else:
for x in list1:
print(x) | http://forum.codingame.com/t/temperatures-puzzle-discussion/33?page=28 | CC-MAIN-2018-30 | refinedweb | 1,089 | 72.66 |
- Introduction
- Installation
- Basic Setup
- Image Downloader
- Concurrency
- Results
- Next Steps
- Full Code Listing
Contents
Building A Concurrent Web Scraper With Haskell
updated: April 16, 2012
Introduction
Let's make a concurrent web scraper! We will use Haskell, because it allows easy concurrency. We will use the HXT library to do the scraping. If you want to follow the
HXT bits, you should be comfortable with Arrows in Haskell. If you're not, take a moment to read up on Arrows.
If you don't care about the scraping bits, jump straight to the concurrency section.
Installation
Make sure you have the
hxt,
url and
http packages:
cabal install hxt cabal install url cabal install http cabal install maybet
Basic Setup
First, let's write some basic functions to make life easier for ourselves:)
I say basic because they will be our building blocks, not because they are easy :P Let's see how they work.
openUrl is a function that will download a web page for us. It returns a
MaybeT Monad Transformer. We can use it like
contents <- runMaybeT $ openUrl ""
and contents will be a
Just if the operation was successful, or
Nothing otherwise.
css will allow us to use css selectors on the downloaded page.
get is where things get interesting. First, we download a page using
contents <- runMaybeT $ openUrl url
like we talked about. Next, we parse the page using
HXT:
readString [withParseHTML yes, withWarnings no] contents
readString takes some options as its first parameter:
withParseHTML: Parse as HTML, which makes sure the parser doesn't break on things like the doctype.
withWarnings: Prints out warnings about malformed html if it's switched on. Since so much of the web is malformed html, I switched it off :P
Now we are ready to start.
Image Downloader
Let's write something that downloads all the images from a given page.
First, let's get a parsed page:
main = do page <- get ""
page is now an Arrow. We can run this Arrow at any time to get its value by using
runX. Let's try it now:
ghci>runX page [NTree (XTag "/" [NTree (XAttr "transfer-Status") [NTree (XText "200") []],NTree (XAttr "transfer-Message") [NTree (XText "OK") []],NTree (XAttr "transfer-URI") [NTree (XText "string:") []],NTree (XAttr "source") [NTree (XText "\"<!DOCTYPE html PUBLIC \\\"-//W3C//DTD XHTML 1.0 ...\"") []],NTree (XAttr "transfer-Encoding") [NTree (XText "UNICODE") []],NTree (XAttr "doctype-name") (...many lines skipped...)
Wow, that looks confusing. Let's select only what we want. Get just the images:
ghci>runX $ page >>> css "img" [NTree (XTag "img" [NTree (XAttr "id") [NTree (XText "header-img") []],NTree (XAttr "src") [NTree (XText "") []] ...
Aha! Much nicer. Now let's get just the
src's:
ghci>runX $ page >>> css "img" >>> getAttrValue "src" ["","" ...
Done! That was easy. Now all we need to do is download these images and save them to disk.
main = do url <- parseArgs doc <- get url imgs <- runX . images $ doc sequence_ $ map download imgs
The first three lines of our
main function get a list of links. The
images function is very simple:
images tree = tree >>> css "img" >>> getAttrValue "src"
It gets a list of all the image sources, just like we had talked about.
The fourth lines maps the
download function over this list to create a list of IO actions. Then we feed that list into
sequence_, which runs the actions one at a time and throws away the return values. We could have used
sequence instead, which would have printed the return values.
Here's the
download function:
download url = do content <- runMaybeT $ openUrl url case content of Nothing -> putStrLn $ "bad url: " ++ url Just _content -> do let name = tail . uriPath . fromJust . parseURI $ url B.writeFile name (B.pack _content)
We have to write out binary data, so we use the
writeFile defined in
Data.ByteString.Char8, which operates on
ByteStrings. This is why we need to convert our
String to a
ByteString first using
B.pack.
We are also able to do error checking thanks to our openUrl function being a
MaybeT. If we didn't get any content, we just print out "bad url: [url]". Otherwise we download the image.
Concurrency
After all that work, the concurrent bit seems almost anti-climactic.
First, install the
parallel-io package:
cabal install parallel-io
Import it into the script:
import Control.Concurrent.ParallelIO
ParallelIO defines a new function called
parallel_ which we can use anywhere we would have used
sequence_. The IO actions will then get performed concurrently.
Change the end of the script to this:
... imgs <- runX . images $ doc parallel_ $ map download imgs stopGlobalPool
stopGlobalPool needs to be called after the last use of a global parallelism combinator. It cleans up the thread pool before shutdown.
Now build the concurrent version (enabling runtime system options):
$ ghc --make grabber_par.hs -threaded -rtsopts
And run it with
+RTS -N[number of threads]:
$ ./grabber_par +RTS -N4
Results
Here's how the two versions performed on my machine:
Without parallelization:
$ time ./grabber "" real 0m10.341s user 0m0.203s sys 0m0.048s
With parallelization (four threads):
$ time ./grabber_par "" +RTS -N4 real 0m3.490s user 0m0.477s sys 0m0.154s
Almost a third of the time!
The
ParallelIO library uses
MVars to keep things in sync. Read more about ParallelIO or MVars.
Next Steps
Next steps involve writing this as a crawler that visits links on the page up to a depth of
N as well as some way to keep track of visited pages. We also want to keep track of name collisions. If you try to download two images, both named "test.jpg", the concurrent version will error out. The non-concurrent version would just overwrite one image with another, which isn't any good either. On the crawling side, we should watch out for robots.txt files and META tag directives to be polite. And ask for gzip'd data to reduce request time.
We could also parallelize more than just the download, but its a start!
Full Code Listing
blog comments powered by Disqusblog comments powered by Disqus
import qualified Data.ByteString.Char8 as B import Data.Tree.NTree.TypeDefs import Data.Maybe import Text.XML.HXT.Core import Control.Monad import Control.Monad.Trans import Control.Monad.Maybe import Network.HTTP import Network.URI import System.Environment import Control.Concurrent.ParallelIO -- helper function for getting page content) images tree = tree >>> css "img" >>> getAttrValue "src" parseArgs = do args <- getArgs case args of (url:[]) -> return url otherwise -> error "usage: grabber [url]" download url = do content <- runMaybeT $ openUrl url case content of Nothing -> putStrLn $ "bad url: " ++ url Just _content -> do let name = tail . uriPath . fromJust . parseURI $ url B.writeFile name (B.pack _content) main = do url <- parseArgs doc <- get url imgs <- runX . images $ doc parallel_ $ map download imgs stopGlobalPool | http://adit.io/posts/2012-03-10-building_a_concurrent_web_scraper_with_haskell.html | CC-MAIN-2017-51 | refinedweb | 1,122 | 76.42 |
Hi,
I am going through some tutorials on templates and there is one on cplusplus.com that I have been using. It is explaining about class templates and here is the code for one of the examples they have made:
This code looks fine to me and I understand it but when coming to compile this using g++ I get the following errors:This code looks fine to me and I understand it but when coming to compile this using g++ I get the following errors:Code:
// class templates
#include <iostream>
using namespace std;
template <class T>
class pair {
T a, b;
public:
pair (T first, T second)
{a=first; b=second;}
T getmax ();
};
template <class T>
T pair<T>::getmax ()
{
T retval;
retval = a>b? a : b;
return retval;
}
int main () {
pair <int> myobject (100, 75);
cout << myobject.getmax();
return 0;
}
main.cpp:15: error: expected init-declarator before '<' token
main.cpp:15: error: expected `;' before '<' token
main.cpp: In function `int main()':
main.cpp:23: error: `pair' undeclared (first use this function)
main.cpp:23: error: (Each undeclared identifier is reported only once
unction it appears in.)
main.cpp:23: error: expected primary-expression before "int"
main.cpp:23: error: expected `;' before "int"
main.cpp:24: error: `myobject' undeclared (first use this function)
When I remove the "using namespace std;" statement and use the scope resolution operator to access the cout function of the std namespace in main(); "std::cout << myobject.getmax();" instead of "cout << myobject.getmax();" the code compiles fine.
Am I missing something? Can anybody tell me why this is happening?
Thanks :) | https://cboard.cprogramming.com/cplusplus-programming/80587-template-query-printable-thread.html | CC-MAIN-2017-04 | refinedweb | 266 | 57.37 |
Get information from a SQL Data Base and insert it into Bonita h2 DataBase automatically
Hi everyone, what i need to do is to get information from an external Data base in SQL and with an atuomatic Task, get that info and insert it in bonita h2 data base.
This is what i have:
SQL EXPRESS 2008 Data Base:
Table Imports
Id int
Description varchar(50)
Status int
Bonita h2
ImportsBO
Id INTEGER
Description STRING
Status INTEGER
I create an automatic task that runs a connector to the SQL getting the information and insterting it in a ProcessVariable of type ImportsBO, but when i run a query to ImportsBO, it comes empty.
Any idea?
Many Thanks!!
I think what you need is:
On the SQL Connector in the dialog box "Output Operations" select Scripting Mode.
Click Next
Click the Pencil
Select Script
Change the return type to Boolean // just because
add a piece of Groovy to do something like the following (not complete, not tested):
import myBDM;
while(resultset.next()){
myBDM newBDO = new myBDM();
newBDO.setId = resultset.getInt("Id");
newBDO.setDescription = resultset.getString("Description");
newBDO.setStatus = resultset.getint("Status");
}
return true;
regards
Seán
PS: If this reply answers your question, please mark a resolved
No because you've not shown us any code...it might just help a little... :)
regards
Seán
There is no Code. The only thing that i have is an automatic task that access to an SQL Data Base using a connector. This connector, fills a Process Variable. And now, i want to save that information into the Bonita Data Base.
The problem here is that i know how to do it using a Human task, cause you use the contracts, but with an automatic Task, i cant use contracts.
I dont know how to do this. :/ | https://community.bonitasoft.com/node/22542 | CC-MAIN-2020-45 | refinedweb | 300 | 57.61 |
How To Use Web APIs in Python 3
Introduction
An API, or Application Programming Interface, enables developers to integrate one app with another. An API exposes some of a program's inner workings in a limited way.
You can use APIs to get information from other programs or to automate things you normally do in your web browser. Sometimes you can use APIs to do things you just can't do any other way. A surprising number of web properties offer web-based APIs alongside the more familiar website or mobile app, including Twitter, Facebook, GitHub, and DigitalOcean.
If you've worked your way through some tutorials on how to code in Python 3, and you're comfortable with Python's syntax, structure, and some built-in functions, you can write Python programs that take advantage of your favorite APIs.
In this guide, you will learn how to use Python with the DigitalOcean API to retrieve information about your DigitalOcean account. Then we'll look at how you can apply what you've learned to GitHub's API.
When you're finished, you'll understand the concepts common across web APIs, and you'll have a step-by-step process and working code samples that you can use to try out APIs from other services.
Prerequisites
Before you begin this guide you'll need the following:
- A local development environment for Python 3. You can follow How To Install and Set Up a Local Programming Environment for Python 3 to configure everything you need.
- A text editor you are comfortable using. If you don't already have a favorite, choose one with syntax highlighting. Notepad++ for Windows, BBEdit for macOS, and Sublime Text or Atom for any platform are all good choices.
- A DigitalOcean account and API key. The first few paragraphs in How To Use the DigitalOcean API v2 show how to do this.
Step 1 — Getting Familiar with an API
The first step in using a new API is to find the documentation and get your bearings. For this guide, that means the DigitalOcean API documentation. To find APIs for other services, search for the name of the site and "API" — not all services promote them on their front pages.
Some services have API wrappers. An API wrapper is code that you install on your system to make the APIs easier to use in your chosen programming language. This guide doesn't use any wrappers because they hide much of the inner workings of the APIs, and often don't expose everything the API can do. Wrappers can be great when you want to get something done quickly, but having a solid understanding of what the APIs themselves can do will help you decide if the wrappers make sense for your goals.
First, look at the introduction section of the DigitalOcean API documentation and try to understand only the basics about how to send a request, and what to expect in the response. At this point, you're trying to learn only three things:
- What does a request look like? Are they all just URLs? For more detailed requests, how is the data formatted? It's usually JSON or querystring parameters like a web browser uses, but some use XML or a custom format.
- What does a response look like? The API documents will show sample requests and responses. Are you going to get JSON, XML, or some other kind of response?
- What goes into the request or response headers? Often, the request headers include your authentication token, and the response headers provide current information about your use of the service, such as how close you are to a rate limit.
The DigitalOcean API uses HTTP methods (sometimes called verbs) to indicate whether you're trying to read existing information, create new information, or delete something. This part of the documentation explains what methods are used, and for what purposes. Generally, a GET request is simpler than a POST, but by the time you're done here, you won't notice much difference.
The next section of the API documentation discusses how the server will respond to your requests. In general, a request either succeeds or it fails. When it fails, the cause is either something bad with the request, or a problem on the server. All of this information is communicated using HTTP status codes, which are 3-digit numbers divided into categories.
- The 200 series means "success" — your request was valid, and the response is what logically follows from it.
- The 400 series means "bad request" — something was wrong with the request, so the server did not process it as you wanted it to. Common causes for HTTP 400-level errors are badly-formatted requests and authentication problems.
- The 500 series means "server error" — your request may have been OK, but the server couldn't give you a good response right now for reasons out of your control. These should be rare, but you need to be aware of the possibility so you can handle them in your code.
Your code should always check the HTTP status code for any response before trying to do anything with it. If you don't do this, you'll find yourself wasting time troubleshooting with incomplete information.
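As a minimal sketch of that habit, using the requests library we'll install in the next step (the URL here is just a placeholder):

import requests

response = requests.get('https://api.example.com/things')  # placeholder URL
if response.status_code == 200:
    print(response.content)  # safe to use the response body now
else:
    print('Request failed with HTTP {0}'.format(response.status_code))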
Now that you have a general idea of how to send a request, and what to look for in the response, it's time to send that first request.
Step 2 — Getting Information from the Web API
Your DigitalOcean account includes some administrative information that you may not have seen in the Web UI. An API can give you a different view of familiar information. Just seeing this alternate view can sometimes spark ideas about what you might want to do with an API, or reveal services and options you didn't know about.
Let's start by creating a project for our scripts. Create a new directory for the project called
apis:
- mkdir apis
Then navigate into this new directory:
- cd apis
Create a new virtualenv for this project:
- python3 -m venv apis
Activate the virtualenv:
- source apis/bin/activate
Then install the requests library, which we'll use in our scripts to make HTTP requests:
- pip install requests
With the environment configured, create a new Python file called
do_get_account.py and open it in your text editor. Start this program off by importing libraries for working with JSON and HTTP requests.
import json
import requests
These import statements load Python code that allows us to work with the JSON data format and the HTTP protocol. We're using these libraries because we're not interested in the details of how to send HTTP requests or how to parse and create valid JSON; we just want to use them to accomplish these tasks. All of our scripts in this tutorial will start like this.
Next, we want to set up some variables to hold information that will be the same in every request. This saves us from having to type it over and over again, and gives us a single place to make updates in case anything changes. Add these lines to the file, after the
import statements.
...

api_token = 'your_api_token'
api_url_base = 'https://api.digitalocean.com/v2/'
The
api_token variable is a string that holds your DigitalOcean API token. Replace the value in the example with your own token. The
api_url_base variable is the string that starts off every URL in the DigitalOcean API. We'll append to it as needed later in the code.
Next, we need to set up the HTTP request headers the way the API docs describe. Add these lines to the file to set up a dictionary containing your request headers:
...

headers = {'Content-Type': 'application/json',
           'Authorization': 'Bearer {0}'.format(api_token)}

This sets two headers at once: the Content-Type header tells the server to expect JSON-formatted data, and the Authorization header carries your API token. We could have written the token directly into the header string, but keeping it in its own variable makes several things easier down the road:
- If you need to replace the token, it's easier to see where to do that when it's a separate variable.
- If you want to share your code with someone, it's easier to remove your API token, and easier for your friend to see where to put theirs.
- It's self-documenting. If the API token is only used as a string literal, then someone reading your code may not understand what they're looking at.
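One optional refinement, not something this tutorial requires, is to keep the token out of the file entirely and read it from an environment variable. A sketch, assuming you export a variable named DO_API_TOKEN in your shell first (the variable name is hypothetical):

import os

api_token = os.environ.get('DO_API_TOKEN')  # returns None if the variable is unset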
Now that we have these setup details covered, it's time to actually send the request. Your inclination may be to just start creating and sending the requests, but there's a better way. If you put this logic into a function that handles the sending of the request and reading the response, you'll have to think a little more clearly about what you're doing. You'll also end up with code that makes testing and re-use more straightforward. That's what we're going to do.
This function will use the variables you created to send the request and return the account information in a Python dictionary.
In order to keep the logic clear at this early stage, we won't do any detailed error handling yet, but we'll add that in soon enough.
Define the function that fetches the account information. It's always a good idea to name a function after what it does: This one gets account information, so we'll call it
get_account_info:
...

def get_account_info():

    api_url = '{0}account'.format(api_url_base)

    response = requests.get(api_url, headers=headers)

    if response.status_code == 200:
        return json.loads(response.content.decode('utf-8'))
    else:
        return None
We build the value for api_url by using Python's string formatting method, similar to how we used it in the headers; we append the string account to the API's base URL to get https://api.digitalocean.com/v2/account, the URL that should return account information.
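To see that formatting in isolation, using the same values as the script:

api_url_base = 'https://api.digitalocean.com/v2/'
api_url = '{0}account'.format(api_url_base)
print(api_url)  # prints https://api.digitalocean.com/v2/account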
The
response variable holds an object created by the Python
requests module. This line sends the request to the URL we made with the headers we defined at the start of the script and returns the response from the API.
Next, we look at the response's HTTP status code.
If it's
200, a successful response, then we use the
json module's
loads function to load a string as JSON. The string we load is the content of the
response object,
response.content. The
.decode('utf-8') part tells Python that this content is encoded using the UTF-8 character set, as all responses from the DigitalOcean API will be. The
json module creates an object out of that, which we use as the return value for this function.
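As a standalone illustration of what json.loads does, with a made-up JSON string:

import json

raw = '{"account": {"droplet_limit": 25, "email": "sammy@example.com"}}'
data = json.loads(raw)           # parse the JSON text into a Python dictionary
print(data['account']['email'])  # prints sammy@example.com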
If the response was not
200, then we return
None, which is a special value in Python that we can check for when we call this function. You'll notice that we're just ignoring any errors at this point. This is to keep the "success" logic clear. We will add more comprehensive error checking soon.
Now call this function, check to make sure it got a good response, and print out the details that the API returned:
...

account_info = get_account_info()

if account_info is not None:
    print("Here's your info: ")
    for k, v in account_info['account'].items():
        print('{0}:{1}'.format(k, v))
else:
    print('[!] Request Failed')
account_info = get_account_info() sets the
account_info variable to whatever came back from the call to
get_account_info(), so it will be either the special value
None or it will be the collection of information about the account.
If it is not
None, then we print out each piece of information on its own line by using the
items() method that all Python dictionaries have.
Otherwise (that is, if
account_info is
None), we print an error message.
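If items() is new to you, here is what it does with a small, made-up dictionary:

details = {'email': 'sammy@example.com', 'status': 'active'}
for k, v in details.items():
    print('{0}:{1}'.format(k, v))
# email:sammy@example.com
# status:active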
Let's pause for a minute here. This
if statement with the double negative in it may feel awkward at first, but it is a common Python idiom. Its virtue is in keeping the code that runs on success very close to the conditional instead of after handling error cases.
You can do it the other way if you prefer, and it may be a good exercise to actually write that code yourself. Instead of
if account_info is not None: you might start with
if account_info is None: and see how the rest falls into place.
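If you want to check your work afterwards, here is one way that inverted version could look; it is functionally the same as the code above:

account_info = get_account_info()

if account_info is None:
    print('[!] Request Failed')
else:
    print("Here's your info: ")
    for k, v in account_info['account'].items():
        print('{0}:{1}'.format(k, v))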
Save the script and try it out:
- python do_get_account.py
The output will look something like this:
Output
Here's your info: 
droplet_limit:25
email:sammy@digitalocean.com
status:active
floating_ip_limit:3
email_verified:True
uuid:123e4567e89b12d3a456426655440000
status_message:
You now know how to retrieve data from an API. Next, we'll move on to something a little more interesting — using an API to change data.
Step 3 — Modifying Information on the Server
After practicing with a read-only request, it's time to start making changes. Let's explore this by using Python and the DigitalOcean API to add an SSH key to your DigitalOcean account.
First, take a look at the SSH keys section of the DigitalOcean API documentation.
The API lets you list the current SSH keys on your account, and also lets you add new ones. The request to get a list of SSH keys is a lot like the one to get account information. The response is different, though: unlike an account, you can have zero, one, or many SSH keys, so the response will be a list, even if you don't have any keys at all yet.
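Parsed into Python, that list response has roughly this shape (the values here are invented; real ones appear in the output later in this step):

ssh_keys_response = {
    'ssh_keys': [
        {'id': 280518, 'name': 'work', 'fingerprint': '96:f7:...', 'public_key': 'ssh-rsa AAAA... sammy@work'},
        {'id': 290536, 'name': 'home', 'fingerprint': '90:1c:...', 'public_key': 'ssh-rsa AAAA... sammy@home'}
    ]
}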
Create a new script called get_ssh_keys.py and start it off just like the first one:

import json
import requests

api_token = 'your_api_token'
api_url_base = 'https://api.digitalocean.com/v2/'

headers = {'Content-Type': 'application/json',
           'Authorization': 'Bearer {0}'.format(api_token)}
The function we will create to get the SSH keys is similar to the one we used to get account information, but this time we're going to handle errors more directly.
First, we'll make the API call and store the response in a response variable. The api_url won't be the same as in the previous script though; this time it needs to point to account/keys, the endpoint for SSH keys.
Add this code to the script:
...

def get_ssh_keys():

    api_url = '{0}account/keys'.format(api_url_base)

    response = requests.get(api_url, headers=headers)
Now let's add some error handling by looking at the HTTP status code in the response. If it's
200, we'll return the content of the response as a dictionary, just like we did before. If it's anything else, we'll print a helpful error message associated with the type of status code and then return
None.
Add these lines to the
get_ssh_keys function:
...

    if response.status_code >= 500:
        print('[!] [{0}] Server Error'.format(response.status_code))
        return None
    elif response.status_code == 404:
        print('[!] [{0}] URL not found: [{1}]'.format(response.status_code, api_url))
        return None
    elif response.status_code == 401:
        print('[!] [{0}] Authentication Failed'.format(response.status_code))
        return None
    elif response.status_code >= 300:
        print('[!] [{0}] Unexpected Redirect'.format(response.status_code))
        return None
    elif response.status_code == 200:
        ssh_keys = json.loads(response.content.decode('utf-8'))
        return ssh_keys
    else:
        print('[?] Unexpected Error: [HTTP {0}]: Content: {1}'.format(response.status_code, response.content))
        return None
This code handles six different error conditions by looking at the HTTP status code in the response.
- A code of 500 or greater indicates a problem on the server. These should be rare, and they are not caused by problems with the request, so we print only the status code.
- A code of 404 means "not found," which probably stems from a typo in the URL. For this error, we print the status code and the URL that led to it so you can see why it failed.
- A code of 401 means the authentication failed. The most likely cause for this is an incorrect or missing api_token.
- A code in the 300 range indicates a redirect. The DigitalOcean API doesn't use redirects, so this should never happen, but while we're handling errors, it doesn't hurt to check. A lot of bugs are caused by things the programmer thought should never happen.
- A code of 200 means the request was processed successfully. For this, we don't print anything. We just return the ssh keys as a JSON object, using the same syntax we used in the previous script.
- If the response code was anything else, we print the status code as an "unexpected error."
That should handle any errors we're likely to get from calling the API. At this point, we have either an error message and the
None object, or we have success and a JSON object containing zero or more SSH keys. Our next step is to print them out:
...

ssh_keys = get_ssh_keys()

if ssh_keys is not None:
    print('Here are your keys: ')
    for key, details in enumerate(ssh_keys['ssh_keys']):
        print('Key {}:'.format(key))
        for k, v in details.items():
            print('  {0}:{1}'.format(k, v))
else:
    print('[!] Request Failed')
Because the response contains a list (or array) of SSH keys, we want to iterate over the whole list in order to see all of the keys. We use Python's enumerate function for this. It is similar to the items method available for dictionaries, but it works with lists instead.
We use
enumerate and not just a
for loop, because we want to be able to tell how far into the list we are for any given key.
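If enumerate is unfamiliar, here is the idea with a made-up list:

names = ['work', 'home']
for i, name in enumerate(names):
    print('Key {}: {}'.format(i, name))
# Key 0: work
# Key 1: home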
Each key's information is returned as a dictionary, so we use the same
for k,v in details.items(): code we used on the account information dictionary in the previous script.
Run this script and you'll get a list of the SSH keys already on your account.
- python get_ssh_keys.py
The output will look something like this, depending on how many SSH keys you already have on your account.
Output
Here are your keys: 
Key 0:
  id:280518
  name:work
  fingerprint:96:f7:fb:9f:60:9c:9b:f9:a9:95:01:5c:5c:2c:d5:a0
  public_key:ssh-rsa AAAAB5NzaC1yc2cAAAADAQABAAABAQCwgr9Fzc/YTD/V2Ka5I52Rx4I+V2Ka5I52Rx4Ir5LKSCqkQ1Cub+... sammy@work
Key 1:
  id:290536
  name:home
  fingerprint:90:1c:0b:ac:fa:b0:25:7c:af:ab:c5:94:a5:91:72:54
  public_key:ssh-rsa AAAAB5NzaC1yc2cAAAABJQAAAQcAtTZPZmV96P9ziwyr5LKSCqkQ1CubarKfK5r7iNx0RNnlJcqRUqWqSt... sammy@home
Now that you can list the SSH keys on your account, your last script here will be one that adds a new key to the list.
Before we can add a new SSH key, we need to generate one. For a fuller treatment of this step, take a look at the tutorial How to Set Up SSH Keys.
For our purposes, though, we just need a simple key. Execute this command to generate a new one on Linux, BSD, or macOS. You can do this on an existing Droplet, if you like.
- ssh-keygen -t rsa
When prompted, enter the file to save the key and don't provide a passphrase.
Output
Generating public/private rsa key pair.
Enter file in which to save the key (/home/sammy/.ssh/id_rsa): /home/sammy/.ssh/sammy
Created directory '/home/sammy/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/sammy/.ssh/sammy.
Your public key has been saved in /home/sammy/.ssh/sammy.pub.
...
Take note of where the public key file was saved, because you'll need it for the script.
Start a new Python script, and call it
add_ssh_key.py, and start it off just like the others:
import json
import requests

api_token = 'your_api_token'
api_url_base = 'https://api.digitalocean.com/v2/'

headers = {'Content-Type': 'application/json',
           'Authorization': 'Bearer {0}'.format(api_token)}
We'll use a function to make our request, but this one will be slightly different.
Create a function called
add_ssh_key which will accept two arguments: the name to use for the new SSH key, and the filename of the key itself on your local system. The function will read the file, and make an HTTP
POST request, instead of a
GET:
...

def add_ssh_key(name, filename):
    api_url = '{0}account/keys'.format(api_url_base)

    with open(filename, 'r') as f:
        ssh_key = f.readline()

    ssh_key = {'name': name, 'public_key': ssh_key}

    response = requests.post(api_url, headers=headers, json=ssh_key)
The line
with open(filename, 'r') as f: opens the file in read-only mode, and the line that follows reads the first (and only) line from the file, storing it in the
ssh_key variable.
Next, we make a Python dictionary called
ssh_key with the names and values that the API expects.
When we send the request, though, there's a little bit more that's new. It's a
POST rather than a
GET, and we need to send the
ssh_key in the body of the
POST request, encoded as JSON. The
requests module will handle the details for us;
requests.post tells it to use the
POST method, and including
json=ssh_key tells it to send the
ssh_key variable in the body of the request, encoded as JSON.
According to the API, the response will be HTTP
201 on success, instead of
200, and the body of the response will contain the details of the key we just added.
Add the following error-handling code to the
add_ssh_key function. It's similar to the previous script, except this time we have to look for the code
201 instead of
200 for success:
    if response.status_code >= 400:
        print('[!] [{0}] Bad Request'.format(response.status_code))
        print(ssh_key)
        print(response.content)
        return None
    elif response.status_code >= 300:
        print('[!] [{0}] Unexpected redirect.'.format(response.status_code))
        return None
    elif response.status_code == 201:
        added_key = json.loads(response.content)
        return added_key
    else:
        print('[?] Unexpected Error: [HTTP {0}]: Content: {1}'.format(response.status_code, response.content))
        return None
This function, like the previous ones, returns either
None or the response content, so we use the same approach as before to check the result.
Next, call the function and process the result. Pass the path to your newly-created SSH key as the second argument:
...

add_response = add_ssh_key('tutorial_key', '/home/sammy/.ssh/sammy.pub')

if add_response is not None:
    print('Your key was added: ')
    for k, v in add_response.items():
        print('  {0}:{1}'.format(k, v))
else:
    print('[!] Request Failed')
Run this script and you'll get a response telling you that your new key was added.
- python add_ssh_key.py
The output will look something like this:
Output
Your key was added:
ssh_key:{'id': 9458326, 'name': 'tutorial_key', 'fingerprint': '64:76:37:77:c8:c7:26:05:f5:7b:6b:e1:bb:d6:80:da', '}
If you forgot to change the "success" condition to look for HTTP
201 instead of
200, you'll see an error reported, but the key will still have been added. Your error handling would have told you that the status code was
201. You should recognize that as a member of the
200 series, which indicates success. This is an example of how basic error handling can simplify troubleshooting.
Once you've successfully added the key with this script, run it again to see what happens when you try to add a key that's already present.
The API will send back an HTTP
422 response, which your script will translate into a message saying "SSH Key is already in use on your account.":
Output
[!] [422] Bad Request
{'name': 'tutorial_key', '}
b'{"id":"unprocessable_entity","message":"SSH Key is already in use on your account"}'
[!] Request Failed
Now run your
get_ssh_keys.py script again and you'll see your newly-added key in the list.
With small modifications, these two scripts could be a quick way to add new SSH keys to your DigitalOcean account whenever you need to. Related functions in this API allow you to rename or delete a specific key by using its unique key ID or fingerprint.
Let's look at another API and see how the skills you just learned translate.
Step 4 — Working with a Different API
GitHub has an API, too. Everything you've learned about using the DigitalOcean API is directly applicable to using the GitHub API.
Get acquainted with the GitHub API the same way you did with DigitalOcean. Search for the API documentation, and skim the Overview section. You'll see right away that the GitHub API and the DigitalOcean API share some similarities.
First, you'll notice that there's a common root to all of the API URLs. You know how to use that as a variable in your code to streamline and reduce the potential for errors.
GitHub's API uses JSON as its request and response format, just like DigitalOcean does, so you know how to make those requests and handle the responses.
Responses include information about rate limits in the HTTP response headers, using almost the same names and exactly the same values as DigitalOcean.
GitHub uses OAuth for authentication, and you can send your token in a request header. The details of that token are a little bit different, but the way it's used is identical to how you've done it with DigitalOcean's API.
There are some differences, too. GitHub encourages use of a request header to indicate the version of the API you want to use. You know how to add headers to requests in Python.
GitHub also wants you to use a unique
User-Agent string in requests, so they can find you more easily if your code is causing problems. You'd handle this with a header too.
The GitHub API uses the same HTTP request methods, but also uses a new one called
PATCH for certain operations. The GitHub API uses
GET to read information,
POST to add a new item, and
PATCH to modify an existing item. This
PATCH request is the kind of thing you'll want to be on the lookout for in API documentation.
Not all GitHub API calls require authentication. For example, you can get a list of a user's repositories without needing an access token. Let's create a script to make that request and display the results.
We'll simplify the error handling in this script and use only one statement to handle all possible errors. You don't always need code to handle each kind of error separately, but it's a good habit to do something with error conditions, if only to remind yourself that things don't always go the way you expect them to.
Create a new file called
github_list_repos.py in your editor and add the following content, which should look pretty familiar:
import json
import requests


api_url_base = ''
headers = {'Content-Type': 'application/json',
           'User-Agent': 'Python Student',
           'Accept': 'application/vnd.github.v3+json'}
The imports are the same ones we've been using. The
api_url_base is where all GitHub APIs begin.
The headers include two of the optional ones GitHub mentions in their overview, plus the one that says we're sending JSON-formatted data in our request.
Even though this is a small script, we'll still define a function in order to keep our logic modular and encapsulate the logic for making the request. Often, your small scripts will grow into larger ones, so it's helpful to be diligent about this. Add a function called
get_repos that accepts a username as its argument:
...

def get_repos(username):
    api_url = '{}orgs/{}/repos'.format(api_url_base, username)

    response = requests.get(api_url, headers=headers)

    if response.status_code == 200:
        return (response.content)
    else:
        print('[!] HTTP {0} calling [{1}]'.format(response.status_code, api_url))
        return None
Inside the function, we build the URL out of the
api_url_base, the name of the user we're interested in, and the static parts of the URL that tell GitHub we want the repository list. Then we check the response's HTTP Status Code to make sure it was
200 (success). If it was successful, we return the response content. If it wasn't, then we print out the actual Status Code, and the URL we built so we'll have an idea where we may have gone wrong.
Now, call the function and pass in the GitHub username you want to use. We'll use
octokit for this example. Then print the results to the screen:
...

repo_list = get_repos('octokit')

if repo_list is not None:
    print(repo_list)
else:
    print('No Repo List Found')
Save the file and run the script to see the repositories for the user you specified.
- python github_list_repos.py
You'll see a lot of data in the output because we haven't parsed the response as JSON in this example, nor have we filtered the results down to specific keys. Use what you've learned in the other scripts to do that. Look at the results you're getting and see if you can print out the repository name.
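As a nudge in the right direction, here's one way you might approach it — a hedged sketch that assumes the response body parses as a JSON array of repository objects, each with a name key:

repo_list = get_repos('octokit')

if repo_list is not None:
    # json.loads turns the raw response bytes into a list of dictionaries,
    # one per repository; each dictionary has a 'name' key we can print
    for repo in json.loads(repo_list):
        print(repo['name'])
else:
    print('No Repo List Found')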
One nice thing about these GitHub APIs is that you can access the requests you don't need authentication for directly in your web browser. This lets you compare responses to what you're seeing in your scripts. Try visiting in your browser to see the response there.
By now, you know how to read the documentation and write the code necessary to send more specific requests to support your own goals with the GitHub API.
You can find the completed code for all of the examples in this tutorial in this repository on GitHub.
Conclusion
In this tutorial, you learned how to use web APIs for two different services with slightly different styles. You saw the importance of including error handling code to make debugging easier and scripts more robust. You used the Python modules
requests and
json to insulate you from the details of those technologies and just get your work done, and you encapsulated the request and response processing in a function to make your scripts more modular.
And what's more, you now have a repeatable process to follow when learning any new web API:
- Find the documentation and read the introduction to understand the fundamentals of how to interact with the API.
- Get an authentication token if you need one, and write a modular script with basic error handling to send a simple request, respond to errors, and process the response.
- Create the requests that will get you the information that you want from the service.
Now, cement this newly-gained knowledge and find another API to use, or even another feature of one of the APIs you used here. A project of your own will help solidify what you've learned here. | https://www.digitalocean.com/community/tutorials/how-to-use-web-apis-in-python-3 | CC-MAIN-2019-26 | refinedweb | 4,923 | 72.16 |
Source Templates
Traditionally, templates or snippets are stored and managed outside of your source code. This makes sense because normally a template helps you quickly produce some universal boilerplate code. For these purposes, ReSharper provides a lot of predefined live templates, surround templates, and file templates. You can also create your own templates of these types.
However, you may want to produce some repeatable code that is only relevant in your current project or solution. ReSharper allows you to streamline such tasks with Source Templates.
How it Works
In contrast to traditional templates, source templates can be created anywhere in the code of your project as extension methods. You can define them for some specific types of your project or for any standard types. You can even make a source template available for all types by creating it as an extension method for
object.
As soon as a template is defined, it becomes available in the code completion list for the objects of corresponding type and its inheritors. When you choose the template in the list, ReSharper inserts the code from the body of the template method into your code.
Here is an example that illustrates a simplest application of a source template. Our template
forEach, which will be available for all generic collections, will insert code that iterates the collection. We can define the template in any static class in our project. ReSharper will identify it as a template by the
[SourceTemplate] attribute:
public static class Foo
{
    [SourceTemplate]
    public static void forEach<T>(this IEnumerable<T> x)
    {
        foreach (var i in x)
        {
            //$ $END$
        }
    }
}
To deploy this template, we now can use automatic completion for any collection object:
When you select the template item, the object is replaced with the template text:
public Test(IEnumerable<string> enumerable)
{
    foreach (var i in enumerable)
    {
    }
}
Note that the
//$ $END$ comment in the template definition is nothing else but the predefined template parameter defining the caret position after the template is applied. You can use other parameters in your template to make the template more flexible.
Why Use Source Templates
As mentioned above, source templates are most helpful for the code blocks that you want to reuse in the current project or solution. Here are some points that show the advantages of source templates over traditional templates:
- To work with source templates, you do not have to switch anywhere - everything is in your editor.
- The templates are strongly typed, which means that you can only call them on relevant objects.
- As long as template definitions compile, you can be sure that the template code has no errors.
- While creating and editing source templates, your favorite ReSharper features are at your disposal: code inspection, navigation features, code completion, just to name a few.
Creating Source Templates
For definitions of your source templates, you can create a new class or use an existing static class where you keep your extension methods.
Template method and its body
A source template must be a public extension method and have the
[SourceTemplate] attribute. The attribute is defined in the
JetBrains.Annotations namespace, together with
[NotNull],
[CanBeNull] and other code annotation attributes. Therefore, to define source template methods, you should enable ReSharper code annotation support in your project.
In the template body, you can write the code that does anything you like. Normally, it does something with the caller object, but it is not necessary.
You may also need to use some code that would not compile in the template method. For example, you may want to use the caller object name to generate a name of a local variable. In this case, place this code inside a line or a block comment that starts with the '$' sign:
[SourceTemplate]
public static void Bar(this Entity entity)
{
    /*$ var $entity$Id = entity.GetId();
    DoSomething($entity$Id); */
}
Parameters and macros
In source templates, you can use parameters and macros. Depending on the macro that you are going to use for the parameter, you can choose between several ways of specifying and using parameters.
You can create a template parameter by adding a new parameter to the template method. By default, it will behave as an editable parameter, i.e. it will receive focus during the Hot Spot Session when you apply the template. If you want to define a macro for this parameter, you need to add the [Macro] attribute as shown in the example below.
- The 'Expression' property of the attribute defines which macro should be used. You can specify one of the available template macros.
- The 'Editable' property optionally specifies whether the user will be able to edit the parameter when the template is applied. By default, all user-defined parameters are editable; the value '-1' makes the parameter non-editable.
If the same parameter is used several times in the template, only one occurrence becomes editable when the template is applied; other occurrences are changed synchronously. If necessary, you can define which occurrence becomes editable by specifying its zero-based index in the 'Editable' property.
[SourceTemplate]
public static void newGuid(this object obj, [Macro(Expression = "guid()", Editable = -1)] string newguid)
{
    Console.WriteLine(newguid);
}
You can turn any local variable in your template method into a template parameter. To do so, you need to add the [Macro] attribute to the template method definition, and specify the variable name in its 'Target' property. For example:
[SourceTemplate] [Macro(Target = "item", Expression = "suggestVariableName()")] public static void forEach<T>(this IEnumerable<T> collection) { foreach (var item in collection) { //$ $END$ } }
You can use predefined parameters as well as all user-defined parameters the same way as in other ReSharper templates, wrapping the parameter identifier with two '$' signs (
$param_name$).
You can use user-defined template parameters inside string literals, e.g:
Console.WriteLine("A random GUID: $newguid$");
To use template parameters outside string literals, you need to put them in a special template comment that starts with the '$' sign:
//$ $param_name$. or
/*$ $param_name$ */. For example:
[SourceTemplate] [Macro(Target = "newguid", Expression = "guid()", Editable = -1)] public static void newGuid(this object obj) { //$ var guid = "$newguid$"; Console.WriteLine("A random GUID: $newguid$"); }
The caller object as well as template parameters created as method parameters and local variables can be used both with and without the '$' signs inside the special templates comments.
Applying source templates
To apply your source template, first make sure that the template is in scope. That is, either you are in the same namespace or the template's namespace is explicitly imported into your code.
To create a code fragment from a source template
- Set the caret where you want to deploy the template.
- Type an object, for which you want to deploy the template, then type a dot, and then start typing the name of the template or its CamelHumps abbreviation.
- Select the template in the completion list and click on it or press Enter.
- The caller object, the dot, and the part of the template name that you typed are replaced with the template body.
| https://www.jetbrains.com/help/resharper/2017.3/Source_Templates.html | CC-MAIN-2018-39 | refinedweb | 1,161 | 60.24
In this article we will implement a simple unit testing application and see how unit testing should be done in the Visual Studio environment.
This is the "Fundamentals of unit testing" article series. In our previous article we had an introduction to unit testing. We have learned why unit testing is very important and defined steps of Test Driven Development. You can read it here.Fundamentals of Unit Testing: Getting Started With Unit TestingOk, as promised, in this article we will implement a simple unit testing application and will see how unit testing should be performed in the Visual Studio environment. So, let's go step-by-step.Step 1: Create a class library applicationHere we will create a class library application, the class library application is purposefully chosen because as per our discussion in the previous article, a unit test is a best fit for applications where there is no user interface. So here is the class library.Step 2: Define simple class and function Here we will define a simple class and function to be tested by a unit test method. For the sake of simplicity, we have implemented a very simple string handling function. Have a look at the following code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace TestProjectLibrary
{
public class StringHandling
{
public string concat(string name, string surname)
{
return name + surname;
}
}
}
So, the purpose of the concat() function is to return a string after adding the two string parameters that it takes as input.
Step 3: Add Unit Test project in solution
Here we will add a unit test project to the solution. As we have discussed, we will use a unit test template of Visual Studio. So, add a Unit Test Project to the solution.
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TestProjectLibrary;

namespace UnitTest
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestMethod1()
        {
            StringHandling ObjString = new StringHandling();
            String rvalue = ObjString.concat("sourav", "kayal");
            Assert.AreEqual("souravkayal", rvalue);
        }
    }
}
So the code is very simple and self-explanatory: the class is decorated as a unit test class and the method is decorated as a TestMethod. Then within the method, we are creating an object of the class we want to test and calling the function.
The function will return one string; we are expecting it to return the concatenation of both arguments. Then we are using the AreEqual() function of the Assert class, which will match the return value with the expected value. The first argument is the expected value and the second parameter is the returned value.
Now we need to run the test case. The point to make is that we cannot run the unit test application like a normal application. To run it, go to:
| https://www.c-sharpcorner.com/UploadFile/dacca2/fundamental-of-unit-testing-test-your-application-by-visual/ | CC-MAIN-2019-43 | refinedweb | 462 | 56.66
Bindings
In the Module and Component sections, it has been explained that
both can host a
bindings declaration. These
bindings are a link
between:
Attributes and Observables (see Observables)
The declaration looks like this.
from anpylar import Component, html
...

class PyroesComponent(Component):
    bindings = {
        'pyroes': [],
    }
    ...
This at first seems like a normal dict declaration containing a
pyroes
key and a matching value of
[] (an empty list), but there is more.
Because the declaration happens inside a subclass of
Component the
following holds true:
- PyroesComponent will automatically have an attribute named pyroes which will obviously have a default value of []
- A 2nd attribute named pyroes_ will be available and this is an Observable
Note
Yes, the observable attribute receives the suffix
_ (an
underscore).
In Python
_ is usually doubled before and after a name to
indicate a Python reserved name, doubled before a name to indicate a
name-mangled component and used as single character before a name to
indicate a kind of reserved attribute.
AnPyLar has chosen to mark the bindings (or observable
attributes) by using a single
_ as a suffix
See also
If you are eager, you can also go straight to Observables
Both attributes are linked together so that:
- Setting the value of pyroes triggers the observable pyroes_ and therefore any operations subscriptions to it
- Setting (or calling) the observable sets the value of the attribute:

self.pyroes_([1, 2, 3])  # equivalent to self.pyroes = [1, 2, 3]
Let’s see it in code terms
from anpylar import Component, html


class Counter(Component):
    bindings = {
        'count': 0,
    }

    def render(self, node):
        html.h1('{}')._fmt(self.count_)
        with html.button('Count up!') as b:
            b._bindx('click', self.do_count)

    def do_count(self):
        # Alternative -> self.count_(self.count + 1)
        # Alternative -> self.count_ = self.count + 1
        self.count += 1
Note
You can test this simple script with
anpylar-serve without creating a
complicated structure by placing the contents in a file
index.py and
doing:
anpylar-serve --auto-serve index.py
With this basic example the powers of the binding (attribute <-> observable) could be explained:
html.h1('{}')._fmt(self.count_)
We are creating an
<h1> with the formatting template
{} as text. This
will be formatted to contain the value delivered by
_fmt(self.count_)
Because
self.count_ is an Observable, there will be a background
subscription to it. Whenever the value of
self.count changes, this will be
reflected as an event through the observable and the value of our
<h1> tag
will change.
with html.button('Count up!') as b:
    b._bindx('click', self.do_count)
We are now adding a
<button> for which we add an event binding. When
clicked, it will call our
do_count method.
Note
Notice the name
_bindx with the trailing
x. This is to
separate it from the
_bind method. With the
x method the
generated click event is not delivered with the callback.
And finally
def do_count(self):
    # Alternative -> self.count_(self.count + 1)
    # Alternative -> self.count_ = self.count + 1
    self.count += 1
In our
do_count, we simply increase the value of
self.count. This will
(as explained above) trigger the observable
self.count_ and update the
value of our
<h1> tag.
Experienced Python programmers will have by now for sure noticed that during
the
bindx operation no
lambda was used and this because
self.count +=
1 wouldn’t be valid.
with html.button('Count up!') as b:
    b._bindx('click', lambda: self.count += 1)  # <- NOT VALID
One has to use an expression inside the
lambda and the auto-increment
operation doesn’t count as one.
But looking at the alternatives of how to set the value of
self.count using
the observable we could have actually used a
lambda. For example:
with html.button('Count up!') as b:
    b._bindx('click', lambda: self.count_(self.count + 1))  # <- VALID
Removing with it the need to have a dedicated
do_count method.
For the sake of it, let's show a final possibility, which is related to how one declares the event to bind to.
with html.button('Count up!') as b:
    b._bindx.click(lambda: self.count_(self.count + 1))  # <- VALID
Rather than specifying
click as the first argument of
_bindx it can be
chained in standard dot notation, leaving the
lambda as the only argument
inside the call.
We believe this is actually a lot more readable, but the programmer is king. | https://docs.anpylar.com/architecture/bindings.html | CC-MAIN-2022-05 | refinedweb | 722 | 58.69 |
Related articles
systemd-nspawn is like the chroot command, but it is a chroot on steroids.
systemd-nspawn may be used to run a command or OS in a light-weight namespace container. It is more powerful than chroot since it fully virtualizes the file system hierarchy, as well as the process tree, the various IPC subsystems and the host and domain name. systemd-nspawn limits access to various kernel interfaces in the container to read-only, such as /sys, /proc/sys or /sys/fs/selinux. Network interfaces and the system clock may not be changed from within the container. Device nodes may not be created. The host system cannot be rebooted and kernel modules may not be loaded from within the container. This mechanism differs from Lxc-systemd or Libvirt-lxc, as it is a much simpler tool to configure.
Usage: out of the box, the container gets a usable network with no extra configuration needed. If you want a more complex network setup and to isolate the container network from your host network, please visit the Systemd-networkd Arch wiki | https://wiki.archlinux.org/index.php?title=Arch_systemd_container&curid=15990&diff=305305&oldid=305303 | CC-MAIN-2017-09 | refinedweb | 196 | 52.9
What is difference between JavaClass and Beans and EJB? What are pros and cons of them?
asif aziz
Greenhorn
Joined: May 08, 2006
Posts: 10
posted
Jul 27, 2006 06:07:00
hi all,
Can anyone help: what are the differences between java classes, beans and EJB? What are the pros and cons of them?
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted
Jul 27, 2006 07:23:00
I think we're looking at three things: Java classes, Java Beans and Enterprise Java Beans. Right?
All of these are Java classes. That's the only unit of compilation we have so all java code is in a class of some kind.
Java Beans specify a few special things that the class must have: A no-args constructor and getters and setters that follow a naming convention. This makes it easy for tools like IDEs and screen designers to work with unfamiliar objects through reflection. It's also quite common in Transfer Objects. Some people get the idea they should automatically make getters and setters for every class and in fact that's pretty bad OO design.
Enterprise Java Beans are really something quite different and it's confusing that they used such a similar name. EJBs have to have even more special methods and structure so a "container" can manage them. The container calls various bean methods at appropriate times to manage bean life cycle and to get real work done.
Does that help?
Sol Mayer-Orn
Ranch Hand
Joined: Nov 13, 2002
Posts: 310
posted
Jul 27, 2006 08:08:00
The notation is indeed a little confusing...
1) "Bean" is a kind of java class, with some simple limitation & coding conventions. To the best of my recallection, a bean has to be Serializable, have a no-arg constructor, have setters/getters for all properties, and should be indifferent to the order in which setters are invoked.
Example:
public class Product implements java.io.Serializable {
    private String manufacturer;
    private int price;

    public Product(String manuf, int price) {
        this.manufacturer = manuf;
        this.price = price;
    }

    public Product() {
    }

    public String getManufacturer() { return manufacturer; }
    public void setManufacturer(String manuf) { this.manufacturer = manuf; }
    public int getPrice() { return price; }
    public void setPrice(int price) { this.price = price; }
}
As you can see, it's easy to write a Bean. You don't need any special tools, just follow some conventions.
The motivation for Bean requirement is: making them easy to process by automatic tools, especially by using reflection/introspection.
Some examples:
a) Some xml tools (e.g. Jibx or Axis), can take any Bean and serialize/deserialize it into xml. In simple cases, they use reflection to invoke all 'getXXX' methods, and print the results into xml.
Bean conventions are crucial here...if your Product method was 'obtainPrice' instead of 'getPrice', then the tool would miss it and the price won't be printed to xml.
b) Graphical designers allow you to design a GUI window by dragging/dropping Swing components (JButtons, JComboBoxes, etc). Since swing components are written by Bean standards, the tool can automatically find all their 'setXXX' methods (setFont, setBackground, setSize....) and allow you to edit those properties through a nice colorful interface.
2) EJB are java classes designed to run inside J2EE containers, and rely on various J2EE services (remoting, automatic transaction management, automatic security checks, etc).
General outline of writing EJB's:
a) Obtain a J2EE container, including j2ee jars. Most containers cost money, but JBoss is a good free container.
b) Write your class, implementing your business logic (say: stock price calculations). There are some limitations: If you use ejb 2.x, your class must inherit from special j2ee classes. If you use the new ejb 3 spec, you don't have to inherit, but there are still some restrictions (e.g. limitation of thread usage).
c) Run your j2ee container, and deploy your class into it. Basically, you use simple xml files to tell the container "here's my service, now use RMI to expose it to remote clients, and while you're at it, make sure that only ADMIN users can edit stock prices...". You can even rely on J2EE persistence services ("here's my Stock object, now map it into database rows).
The advantage is, the container does a lot of work for you, saving you coding.... the disadvantage is being tied to J2EE (having to obtain a j2ee container, and following some restrictions, such as restrictions on thread usage).
asif aziz
Greenhorn
Joined: May 08, 2006
Posts: 10
posted
Jul 30, 2006 04:34:00
hi all,
Thanks a lot for the very valuable information.
It is sorta covered in the JavaRanch Style Guide.
| http://www.coderanch.com/t/404265/java/java/difference-JavaClass-Beans-EJB-pros | CC-MAIN-2013-48 | refinedweb | 862 | 64.51
26 July 2010 09:05 [Source: ICIS news]
SINGAPORE (ICIS news)--Taiwan’s Formosa shut its 540,000 bbl/day refinery in Mailiao, along with a number of its downstream facilities on Monday, a day after an explosion at its 73,000 bbl/day desulphurising plant at the complex.
The refinery was shut as a safety precaution, said a company source.
The resulting fire from the blast was just brought under control late "this morning", forcing the shutdown of some plants, including a 520,000 tonne/year group II base oils refinery, said another company source.
There were no casualties from the incident, which was the second to occur at the complex this month, the source said.
The company’s olefin conversion unit (OCU), which can produce 250,000 tonnes/year of propylene, was also down, they said.
The shutdown at the Mailiao base oils refinery would not have a significant market impact, industry sources said, citing that the plant was taken off line just a week ahead of its scheduled maintenance turnaround from 1 August.
"Our [base oils] refinery shutdown does not affect domestic delivery and export shipment. Our shipment will be on schedule," said a second source at
“We will lose an additional 10,000 tonnes of production due to this shutdown,” said a third
A number of export cargoes of group II base oils had been fixed by Formosa in the past few weeks – and the destinations included SE Asia, India, the Middle East and the US.
“A 100,000 tonne/year bitumen plant run by Simosa, a subsidiary of
"We still have feedstock inventory so our operation and exports are not affected," said the source.
Reporting by Judith Wang, Felicia Loo, Anu Agarwal and Peh Soo Hwee. | http://www.icis.com/Articles/2010/07/26/9379085/formosa-shuts-mailiao-refinery-few-petchem-units-after-fire.html | CC-MAIN-2013-20 | refinedweb | 295 | 51.62 |
Directory server script for importing an LDIF file
ldif2db [-Z serverID] -n backendname {-s includesuffix}* [{-x excludesuffix}*] [-g [string] [-G namespace_id]] {-i ldiffile}* [-c chunksize] [-O] [-E] [-q] [-h]
Imports an LDIF file. Either the option '-n' or '-s' must be used. The server instance must be stopped prior to running this command.
A summary of options is included below:
-Z Server Identifier
The server ID of the Directory Server instance. If there is only one instance on the system, this option can be skipped.
-n Backend Name
The name of the LDBM database to restore. Example: userRoot
-s includeSuffix
Specifies the suffixes to be included or specifies the subtrees to be included.
-x excludeSuffix
Specifies the suffixes to be excluded or specifies the subtrees to be excluded.
-i filename
Name for the LDIF file to import.
-c Chunk size
The number of entries to process before starting a fresh pass during the import. By default this is handled internally by the server.
-O
Requests that only the core database is created without attribute indexes.
-E
Encrypts data during import. This option is used only if database encryption is enabled.
-v
Display verbose output
-h
Display usage
ldif2db -Z instance1 -n userRoot -i /LDAP/ldif/data.ldif
ldif2db -s "dc=example,dc=com" -i /LDAP/ldif/data.ldif
Exit status is zero if no errors occur. Errors result in a non-zero exit status and a diagnostic message being written to standard error.
ldif2db was written by the 389 Project.
Report bugs to. | https://www.carta.tech/man-pages/man8/ldif2db.8.html | CC-MAIN-2020-50 | refinedweb | 253 | 59.09 |
This article covers how the basic routing mechanism of Piranha works. For more advanced scenarios, like sending custom route parameters to page instances, please refer to Advanced Routing.
There are no magic tricks when it comes to the routing of Piranha. Piranha relies 100% on the underlying web framework of your choice to handle the requests in the end, be it MVC or Razor Pages. In the following examples we will assume that we have two Page Types defined called
BasicPage and
AdvancedPage, and that they have the following routes set up.
BasicPage
using Piranha.AttributeBuilder; using Piranha.Models; [PageType(Title = "Basic Page")] public class BasicPage : Page<BasicPage> { ... }
AdvancedPage
using Piranha.AttributeBuilder; using Piranha.Models; [PageType(Title = "Advanced Page")] [PageTypeRoute(Title = "Default", Route = "/advanced")] public class AdvancedPage : Page<AdvancedPage> { ... }
Also, for our examples we will assume that we have two pages created in our site structure with the following slugs. The page with the slug /home is also the start page of the site.
- /home for AdvancedPage
- /about-us for BasicPage
As you can see, the Page Type
BasicPage does not have a route explicitly specified. When there's no route specified Piranha will rewrite the request to the default route of the Content Type. The following routes are default for the different core types:
- /page for Pages.
- /archive for Archive Pages.
- /post for Posts.
Here's a simple description of what happens when a request comes to a Piranha application.
- The incoming request is matched against the content in the site structure by its slug.
- The request is rewritten to /<route>?id=<content_id>&... and handed over to the underlying web framework.
When a request is handled by the middleware the query string parameter
piranha_handled=true is added to the rewritten URL. This is done to tell other Piranha middleware later in the pipeline that the request has been handled so there's no unnecessary processing done.
Given the above page types, here's how the following requests would be resolved.
GET /
Given that the
StartpageMiddleware is registered the page with the slug
/home will be resolved. Since this page is of the type
AdvancedPage the request will be rewritten to:
GET /advanced?id=...&startpage=true&piranha_handled=true
This also means that we need something that listens to the route we've specified in our Page Type. If we're using MVC we will need an Action with a matching route, for example:
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Piranha;

public class CmsController : Controller
{
    private readonly IApi _api;

    public CmsController(IApi api)
    {
        _api = api;
    }

    ...

    [Route("advanced")]
    public async Task<IActionResult> AdvancedPage(Guid id, bool startpage)
    {
        var model = await _api.Pages.GetByIdAsync<AdvancedPage>(id);

        if (startpage)
            return View("StartPage", model);
        return View(model);
    }
}
GET /home
If the start page is referenced by
slug the exact same thing will happen as in the above example given that the
PageMiddleware is registered in the application pipeline. It will also be handled by the same Action in the same Controller.
GET /about-us
This request will also be handled by the
PageMiddleware, but since the page is of the type
BasicPage the default route will be used.
GET /page?id=...&startpage=true&piranha_handled=true
To handle this request we will need an Action in our Controller listening to the default page route as well.
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Piranha;

public class CmsController : Controller
{
    private readonly IApi _api;

    public CmsController(IApi api)
    {
        _api = api;
    }

    ...

    [Route("page")]
    public async Task<IActionResult> BasicPage(Guid id)
    {
        return View(await _api.Pages.GetByIdAsync<BasicPage>(id));
    }
}
Since all parameters are passed through the query string they are optional to handle. For example, we are not interested whether the requested page instance is the start page for the BasicPage type, so we can simply omit it from the method declaration. | https://piranhacms.org/docs/application/routing | CC-MAIN-2020-40 | refinedweb | 616 | 56.96 |
To encode data, you have to know the sampling rate and number of channels and create an
Encoder:
import opus

sampling_rate = 8000
stereo = False
encoder = opus.Encoder(sampling_rate, stereo)
Then you can use the encoder to encode audio frames. Those may have lengths of 2.5, 5, 10, 20, 40, or 60 milliseconds. Input data should be of type
bytes or
bytearray and contain 16-bit signed integers:
# One frame of data containing 480 null samples
input = bytearray(960)

# Encode the data, using at most 128 bytes for the frame. This would be around
# 2 kByte/s. At 8 kHz sampling rates, opus will use around 1 kByte/s for mono audio.
output = encoder.encode(input, 128)
Each encoded frame has some metadata at the beginning containing the channel, frequency, and the encoded size of the frame. This allows combining frames into one packet.
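Because each encoded frame is self-describing, combining frames is just a matter of concatenating the encoded bytes. A rough illustration (frames here is an assumed list of raw input buffers):

# Each encoded frame carries its own metadata, so a packet can simply be
# the concatenation of several encoded frames
packet = b''.join(encoder.encode(frame, 128) for frame in frames)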
Decoders do not take any arguments with their constructor, because they take the necessary information from their input frames:
import opus

decoder = opus.Decoder()
The created decoder can handle any data created by
opus.Encoder, even if the
number of channels or the sampling rate differs - it will get reinitialized to
match the new settings.
encoder = opus.Encoder(8000, 0)
decoder = opus.Decoder()

input = bytearray(960)
encoded = encoder.encode(input, 128)
decoded = decoder.decode(encoded)
| https://docs.badge.team/esp32-app-development/api-reference/opus/ | CC-MAIN-2021-04 | refinedweb | 219 | 66.44
A set of template tags (and mixins!) to assist in building CSP-enabled websites.
Project description
django-csp-helpers
A set of template tags (and mixins!) to assist in building CSP-enabled websites using django-csp.
Install
- Add "csp_helpers" to your
INSTALLED_APPS:
INSTALLED_APPS = [
    ...
    'csp_helpers',
]
Mixins
django-csp-helpers includes a pair of mixins that can be applied to views and forms to allow for the use of CSP nonces in widgets and form media.
How to use
Simply add CSPViewMixin to your Views, and CSPFormMixin to your Forms or ModelForms. You will need to use both mixins together, they don't work alone.
CSPFormMixin
from csp_helpers.mixins import CSPFormMixin


class ArticleForm(CSPFormMixin, ModelForm):
    ...
CSPViewMixin
from csp_helpers.mixins import CSPViewMixin

from .forms import ArticleForm


class ArticleUpdateView(CSPViewMixin, UpdateView):
    form_class = ArticleForm
    ...
Using only CSPFormMixin
If you are managing your form manually, or not using class-based views, you will not be able
to use CSPViewMixin. In these cases, just call your form with
csp_nonce as an argument
manually, like below.
form = MyFancyForm(csp_nonce=request.csp_nonce)
What it does
The django-csp-helpers mixins will modify and extend your views and forms in two ways.
Form Widgets
Form widgets will be patched to inject a CSP nonce into the rendering context for template
widgets. You can access this with
{{ csp_nonce }} in your widget templates.
Form Media
Form media (CSS and JS) will be included with CSP nonces.
Template Tags
django-csp-helpers also includes a pair of template tags
render_bundle_csp
An exact replacement for the django-webpack-loader
render_bundle tag that includes bundles with CSP nonces.
{% load render_bundle_csp %}

{% render_bundle_csp 'main' 'css' %}
{% render_bundle_csp 'main' 'js' %}
media_csp
A less advanced version of the form media functionality provided by the mixins above. Simply load this tag and pass it a form, and it will include the form media with CSP nonces.
{% load media_csp %}

{# include form media #}
{% media_csp myform %}
License
This software is released under the MIT license.
Copyright (c) 2019-2020 Luke Ro. | https://pypi.org/project/django-csp-helpers/ | CC-MAIN-2020-05 | refinedweb | 327 | 58.58 |
This article is aimed purely at beginners who have no idea about Kendo UI Components and Widgets.
Here we will use some of the Kendo UI components to design a User Registration Form, with the sole purpose of making you aware of Kendo UI components and how you can use them in your asp.net mvc project.
After reading this article it is your duty to create such a form by yourself to strengthen your confidence.
Note:- Please read the previous article before reading this.
Now Let's start thinking,
Daily you come across lots of websites that ask you to Sign Up or Log In. For example, before making friends on facebook, you actually register yourself in the app.
Usually any Registration Form has these below fields:-
Now Let's create a form in asp.net mvc using above kendo components.
Open visual studio and create a new project as described in previous article. Create a controller and one action method as below
public ActionResult Register()
{
    return View();
}
now add view and paste below code
Note:- Don't forget to import the Kendo.Mvc.UI namespace in the view. like this:-Note:- Don't forget to import the Kendo.Mvc.UI namespace in the view. like this:-
<table class="table table-bordered"> <thead class="bg-dark text-center text-white"> <tr> <th colspan="2">Registration Form</th> </tr> </thead> <tbody> <tr> <td>User Name</td> <td>@(Html.Kendo().TextBox().Name("txtName"))</td> </tr> <tr> <td>Gender</td> <td> @(Html.Kendo().RadioButton().Name("rdoMale").Label("Male").HtmlAttributes(new { @(Html.Kendo().Button().Name("btnSubmit").HtmlAttributes(new { type = "button", @class = "ml-auto k-button k-primary", style = "height:35px;width:70px;" }).Content("Save").Events(e => e.Click("btnSubmit_Click"))) </td> </tr> </tbody> </table>
@using Kendo.Mvc.UI;
Run the application and you will get the below output
So, you have successfully used Kendo UI components to design a user registration form and rendered it in your browser. But what have you learnt??? I think nothing. This is just copy-paste work and anyone can do it.
so let's really learn and understand what we did:-
1. To use any Kendo UI component, we use html helpers starting with @(Html.Kendo())
2. As we put a dot (.), IntelliSense starts showing us different options of that control. But how are we able to use them? The answer is that Kendo UI has its own fluent api.
In plain English fluent means without any break like a chain. So due to fluent api we can chain different options.
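To see the chaining in action, here is a small illustrative snippet (the control name and attribute values are made up for the example):

@(Html.Kendo().TextBox()
      .Name("txtEmail")                                // name/id for the rendered input
      .HtmlAttributes(new { placeholder = "Email" })   // plain html attributes
      .Value(""))                                      // each option chained fluently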
3. For Html attributes like Type, Name, Class etc. we use the HtmlAttributes() method, which accepts an object of html attributes where we pass the name-value pair of any attribute. | https://www.sharpencode.com/article/KendoUI/creating-user-registration-form-with-kendoui-controls | CC-MAIN-2021-43 | refinedweb | 453 | 66.84
...

#set the style we wish to use for our plots
sns.set_style("darkgrid")

#print first 5 rows of data to ensure it is loaded correctly
df.head()
For categorical plots we are going to be mainly concerned with seeing the distributions of a categorical column with reference to either another of the numerical columns or another categorical column. Let's go ahead and plot the most basic categorical plot which is a "barplot". We need to pass in our x and y column names as arguments, along with the relevant DataFrame we are referring to.
sns.barplot(x="Country",y="Units Sold",data=df)
A barplot is just a general plot which allows us to aggregate the data based on some function – the default function in this case is the mean. You can see from the plot above that we have chosen the “Country” column as the categorical column, and the “Units Sold” column as the column for which we present the mean (i.e. average) of the relevant data held in the “Units Sold” column. So we can now see the average “Units Sold” by “Country”.
We can change the “estimator object” – that is the function by which we aggregate the data by setting the estimator to a statistical function. Let’s import numpy and plot the standard deviation of the data based on the categorical variable “Country”.
import numpy as np

sns.barplot(x="Country",y="Units Sold",data=df,estimator=np.std)
As a quick note, the black line that you see crossing through the top of each data bar is actually the confidence interval for that data, with the default being the 95% confidence interval. If you are unsure about what confidence intervals are and need a quick brush up – please find some relevant info here.
Let’s now move on to a “countplot” – this is in essence the same as a barplot except the estimator is explicitly counting the number of occurences. For thar reason we only set the x data.
sns.countplot(x="Segment",data=df)
Here we can see a countplot for the categorical “Segment” DataFrame column.
Now we can move onto boxplots and violinplots. These types of plots are used to show the distribution of categorical data. They are also sometimes called a “box and whisker” plot. It shows the distribution of quantitative data in a way that hopefully facilitates comparison between variables. Let’s create a box plot…
sns.boxplot(x="Segment",y="Profit",data=df)
The boxplot shows the quartiles of the dataset, while the whiskers extend to show the rest of the distribution. The dots that appear outside of the whiskers are deemed to be outliers.
We can split up these boxplots even further based on another categorical variable, by introducing a "hue" element to the plot.
sns.boxplot(x="Segment",y="Profit",hue="Year",data=df)
Now I see the profit split by "Segment" and also split by "Year". This is really the power of Seaborn – to be able to add this whole new layer of data very quickly and very smoothly.
Let’s go on now to speak about violin plots. Let’s create a violin plot below:
sns.violinplot(x="Segment",y="Profit",data=df)
It’s very similar to a boxplot and takes exactly the same arguments. The violinplot, unlike the boxplot, allows us to plot all the components that correspond to actual data points and it’s essenitally showing the kernel density estimation of the underlying distribution. If we split the the “violin” in half and lay it on it’s side – that is the KDE reresentation of the underlying distribution.
FYI the violinplot also allows you to add the "hue" element. However what it also allows you to do, which a box plot doesn't, is to split the violin plot to show the different hue on each side. Let me show you below and it will become a lot clearer:
sns.violinplot(x="Segment",y="Profit",data=df,hue="Year",split=True)
Let’s no move on to the “stripplot”. This is a scatter plot where one variable is categorical.
sns.stripplot(x="Segment",y="Profit",data=df)
One problem here is that it's not always easy to see exactly how many individual points there are stacked up, as when they get too close to each other they merge together. One way to combat this is to add the "jitter" parameter as follows:
sns.stripplot(x="Segment",y="Profit",data=df,jitter=True)
You can also use the “hue” and “split” parameters, similar to the boxplots and violin plots.
Another useful plot that kind of combines a stripplot and a violin plot, is a swarmplot. It’s probably just easiest to show you an example and you will no doubt understand what I mean.
sns.swarmplot(x="Segment",y="Profit",data=df)
As an FYI swarmplots probably aren't a great choice for really large datasets as it's quite computationally expensive to arrange the data points and also it can become quite difficult to fit all the data points on the chart – the swarm plots can become very wide!
Finally let’s look at “factorplots” – these are the most generic of the categorical plots we have come across. Using factorplots you can pass in your data and parametersand then specify the “kind” of plot that you want – wheteher that be for e.g. a bar plot or a violin plot. I will show two quick examples of how to create a bar plot and a violin plot below.
#create bar plot with factorplot method
sns.factorplot(x="Segment",y="Profit",data=df,kind="bar")

#create violin plot with factorplot method
sns.factorplot(x="Segment",y="Profit",data=df,kind="violin")
I prefer to call the plot itself specifically, but just be aware that you can use “factorplot” and then specify the “kind”.
Hi,
Completely off topic but I thought I would be more likely to get a reply on your most recent post.
I have a solid understanding of the basics and have completed the courses on coursera that you suggested.
Your blog posts are very useful however I find that just following what you do isn’t as helpful as discovering it for myself. You never really mention how you actually learnt these processes outlined in your blog posts.
Hi Alex – thanks for your comment, I’m always especially interested to hear opinions such as these relating more to the overall design and delivery of content, rather than relating to the content itself (although of course I am also very interested in those comments too!)
You raise an interesting point here, and actually one that I have thought about myself many times over the couple of years since I began writing this blog. At the very start I did indeed promise that I would concentrate as much on explaining and documenting the learning process itself, as I would on other content.
I agree with you that I have strayed a little from that path, but now is as good a time as any to address that. May I ask – what do you think would be most useful for me to present to do this? Do you mean you would like to see more recommendations of online courses/resources and why and when to use each one? Or do you mean potentially writing more subjectively about my learning process/state of mind and what I did and why I did those things?
Please do let me know what kind of things you want to see, and I will be more than happy to oblige. After all, I write things hoping they will be read…
Thanks for the reply, I think both of the suggestions you offered would be very helpful. Just to give you some background, I have a job in the investment industry although it is not require very much programming at all. When I ask friends how they taught themselves programming they always reply saying that they learnt by needing to use programming to solve a problem but since my work isn’t very heavily programming orientated I find it difficult to do this at work. Therefore, I look to online resources, such as your blog, for guidance on what I could potentially work on and learn.
Hence the thought processes behind why, how, where you learnt to do the processes in each blog post would be useful. | https://www.pythonforfinance.net/2018/09/21/seaborn-module-and-python-categorical-plots/?utm_source=rss&utm_medium=rss&utm_campaign=seaborn-module-and-python-categorical-plots | CC-MAIN-2019-51 | refinedweb | 1,410 | 60.45 |
I had a fleet tracking app running and almost out of beta when Geoframeworks closed the doors. I have been scrambling ever since to find an alternate .Net GIS library. Does SharpMap have the functionality to allow me to display a map, plot some icons and move them around based upon Lat/Longs being fed from a SQL database? If so, can you advise me of any code samples that can get me started?
Thanks
Chuck
Hi!
Yes, SharpMap is capable of rendering points as markers. As for animating their position you will need to take care of that yourself. Here are some common examples to get you started:
Specifically:...
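To give you the flavour, rendering a point layer with a marker symbol looks roughly like this — an untested sketch where myProvider and truck.png are just placeholders for your own datasource and icon:

// A vector layer whose points are drawn with a bitmap symbol
SharpMap.Layers.VectorLayer layVehicles = new SharpMap.Layers.VectorLayer("Vehicles");
layVehicles.DataSource = myProvider; // any SharpMap provider holding your points
layVehicles.Style.Symbol = new System.Drawing.Bitmap("truck.png"); // placeholder icon file
myMap.Layers.Add(layVehicles);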
Goran
Thanks Goran,
I have looked at the links. I understand that I have to add the logic to animate the "Markers", but I'm confused as to what I accomplish by assigning a datasource to the Layer. Would each vehicle in my case be a different layer?
SharpMap.Layers.VectorLayer layVehicles = new SharpMap.Layers.VectorLayer("Vehicle");
layVehicles.DataSource = new SharpMap.Data.Providers.MsSql(ConnStr, "myTable", "the_geom", "32632");
myMap.Layers.Add(layVehicles);

Chuck
Your vehicle layer should probably hold all vehicles.
As to datasource - are you updating vehicle positions in the db? If so you will need to periodically reload features from the db.
Basically the datasource is used whenever the map is drawn. SM iterates through all (visible) layers and lets them render themselves. The Render method uses the datasource to fetch visible data (features that are visible within the view) and renders the appropriate result.
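If it helps, a crude way to do the periodic reload is just to redraw on a timer — untested sketch, with mapImage being your SharpMap.Forms.MapImage control:

// Re-render the map every 2 seconds so the layers re-query their datasources
var timer = new System.Windows.Forms.Timer { Interval = 2000 };
timer.Tick += (sender, e) => mapImage.Refresh();
timer.Start();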
Btw. which SM build are you using?
I'm using the version 0.9 I believe. Is there a later one?
I have not yet figured out what SM function to use to set the positioning for a particular vehicle. I have a table containing vehicleID, lat, long and date/time that I plan on feeding the looping logic, setting the position each iteration. This is the method I used for GIS.Net 3.0 and it worked fine.
hello csalerno,
I just submitted a patch (7194) that may serve your purpose. You need to update your source for it to work.
Hth FObermaier
Thank you for the heads up FObermaier, but could you be more specific on what changes would help me? Any sample code you could direct me to in regards to moving an icon across a map using the data in my table would be great.
Hello Chuck,
If you were able to apply the patch file and managed to compile the sharpmap solution, run winformsamples and click on ShapeFile radio button twice.
You will see an osm data map with a car moving around. The code is in the WinFormSamples\Samples\ShapeFileSampleOsm.cs file.
Basically the Map object has a second LayerCollection for variable Layers, e.g. Layers whose content changes frequently. It is called VariableLayers and is of type VariableLayersCollection.
It is derived from LayersCollection and has an Interval property at which it asks for a requery. The MapImage control splits the map into a static and a variable component. The variable component is updated every interval and the MapImage.Image property is set to the overlay of static and variable.
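In other words, usage boils down to something like this (from memory, so treat it as a sketch — layBackground is just a made-up name for a static layer):

// Layers whose content changes often go into VariableLayers instead of Layers
myMap.VariableLayers.Interval = 500; // re-query/redraw the variable part every 500 ms
myMap.VariableLayers.Add(layVehicles);

// static background layers stay where they always were
myMap.Layers.Add(layBackground);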
Thank you! Thank You!
Final stupid question: is there a way to merge your patch in TFS 2010?
I use TortoiseSVN for things like this, or do you want me to add it to the codeplex repository?
cheers FObermaier
Well, I manually applied the patches and I seem to be missing the VariableLayerCollection.cs and LayerCollectionType class files. Are they part of the patch, or were they part of the 78963 build?
Thanks
Sorry about that, I seem to have missed that.
I'll update the patch ASAP (7206). I included the missing file in the zip-file.
I suggest you get yourself a copy of TortoiseSVN, it makes applying patches so easy.
Cheers FObermaier
Thanks again for your help FObermaier,
I have gotten the patch applied and the demo working, but I can't find the actual code that pulls the positions from a table and moves the car image around. Is the car's movement sourced from position data?
Chuck,
it is in the ShapeFileSampleOsm, a nested class called ShapeProvider which is derived from ShapeFile.
You will certainly have to create something different for your solution, maybe based on the DataTablePoint provider.
FObermaier,
In following your suggestion I have the below code snippet to set up the Vehicle layer and add a datasource from my SQL table. I got the code from the example given for an XLS file.
This all leads to a number of questions.
Do I need to pass into the DataTablePoint a populated dataset or datatable? If I just pass in the table name, where does the connection string come from?
Does SharpMap perform the row-by-row increment of the data table to perform the movement?
//Specifying true will save the spatial index to a file which will make it load quicker the next time
var GPSLogProvider = new SharpMap.Data.Providers.DataTablePoint(ctrlTable.GPSSet, "ID", "dblLatitude", "dblLongitude");
GPSLogProvider.DefinitionQuery = SQLQueryDef;

//Create Layer for Vehicles
var layVehicles = new SharpMap.Layers.VectorLayer("test", GPSLogProvider);
layVehicles.SRID = 33166;
layVehicles.DataSource.Open();
myMap.VariableLayers.Interval = IntervalValue;
myMap.VariableLayers.Add(layVehicles);
If your data is on a SQL Server I would not use the DataTablePoint provider but go for SqlServer2008 or OleDbPoint. The connection string in that case is an argument to the constructor. You can pass a view instead of a table name that e.g. just returns the
currently valid locations.
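For example, the OleDbPoint provider is constructed roughly like this (a hypothetical snippet; the view name and column names are placeholders):

var GPSLogProvider = new SharpMap.Data.Providers.OleDbPoint(
    ConnStr, "vwCurrentPositions", "ID", "dblLongitude", "dblLatitude");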
So I have the provider set up for my SQL call and am able to populate a BBox struct, but I'm not clear how the DataSet rows from the FeatureDataTable go from the table to getting plotted on the variable layer of the map.
Frequently in my test runs I have been getting the below error; obviously I'm doing something wrong.
GPSLogProvider.DefinitionQuery = SQLQueryDef;

//Create Layer for Vehicles
var layVehicles = new SharpMap.Layers.VectorLayer("Trucks", GPSLogProvider);
System.InvalidOperationException was unhandled by user code
Message=Object is currently in use elsewhere.
Source=System.Drawing
StackTrace:
at System.Drawing.Image.get_Width()
at SharpMap.Rendering.VectorRenderer.DrawPoint(Graphics g, Point point, Image symbol, Single symbolscale, PointF offset, Single rotation, Map map) in C:\DataDrive\Projects\SharpMap\Trunk\SharpMap\Rendering\VectorRenderer.cs:line
466
at SharpMap.Layers.VectorLayer.RenderGeometry(Graphics g, Map map, Geometry feature, VectorStyle style) in C:\DataDrive\Projects\SharpMap\Trunk\SharpMap\Layers\VectorLayer.cs:line 353
at SharpMap.Layers.VectorLayer.Render(Graphics g, Map map) in C:\DataDrive\Projects\SharpMap\Trunk\SharpMap\Layers\VectorLayer.cs:line 309
at SharpMap.Map.RenderMap(Graphics g, LayerCollectionType layerCollectionType) in C:\DataDrive\Projects\SharpMap\Trunk\SharpMap\Map\Map.cs:line 253
at SharpMap.Forms.MapImage.GetMap(LayerCollection layers, LayerCollectionType layerCollectionType) in C:\DataDrive\Projects\SharpMap\Trunk\SharpMap.UI\Forms\MapImage.cs:line 264
at SharpMap.Forms.MapImage.VariableLayersRequery(Object sender, EventArgs e) in C:\DataDrive\Projects\SharpMap\Trunk\SharpMap.UI\Forms\MapImage.cs:line 157
at SharpMap.Layers.VariableLayerCollection.OnRequery() in C:\DataDrive\Projects\SharpMap\Trunk\SharpMap\Layers\VariableLayerCollection.cs:line 97
at SharpMap.Layers.VariableLayerCollection.TimerElapsed(Object sender, ElapsedEventArgs e) in C:\DataDrive\Projects\SharpMap\Trunk\SharpMap\Layers\VariableLayerCollection.cs:line 50
at System.Timers.Timer.MyTimerCallback(Object state)
InnerException:
Chuck,
how many trucks do you have?
Maybe you need to increase the interval. It seems to me layVehicles is not done with the previous query when it wants to do the next. I've had issues when I pan the map during an update cycle. I don't know if there are error handling routines in the
patch I sent or not.
From: FObermaier
Maybe you need to increase the interval. It seems to me the layVehicles is not done with the previous query when it wants to do the next
Every interval the VariableLayerCollection asks to be requeried. If the MapImage control is able to cope with that request, e.g. the map is not currently being dragged, it does the VariableLayerCollection the favor, lets the map rerender the variable image,
and merges that with the static image into the MapImage.Image.
I have no clue; in the sample I updated every second or so. With just one truck to render you shouldn't run into such issues. Are you doing anything else to the map when it throws the exception?
Btw.: you said in an earlier post that you had your tracking app almost ready when GeoFrameworks closed the door. GeoFramework 2.0 and GPS 3.0 have been ported to DotSpatial.Positioning.
It is probably in an alpha stage, but maybe it serves you.
hth FObermaier
Not that I want to discourage you from using SharpMap, but you might also want to try Pauldendulk's SharpMap-based MapsUI library.
Hi there. When I try to compile the last change set there are some problems:
The type or namespace name 'LayerCollectionType' could not be found (are you missing a using directive or an assembly reference?)
\SharpMap\Source\SharpMap\Map\Map.cs - line: 235 - project: SharpMap.VS2008 and so on...
I can't find the LayerCollectionType class ((
I probably forgot to add it to the VS2008 project files. You can add them manually.
I will do so ASAP.
Thank you for such a quick answer.
Yes, it's true. There are no problems with the 2010 project.
Python is one of the world’s most popular programming languages.
Specifically, Python for finance is arguably the world’s most popular language-application pair. This is because of the robust ecosystem of packages and libraries that makes it easy for developers to build robust financial applications.
In this tutorial, you will learn how to import historical stock prices from the IEX Cloud API and store them within your script in a pandas DataFrame.
Table of Contents
You can skip to a specific section of this Python tutorial using the table of contents below:
- Step #1: Create an IEX Cloud Account
- Step #2: Import Pandas
- Step #3: Select Your API Endpoint
- Step #4: Ping the Endpoint and Store the Data in a pandas DataFrame
- Final Thoughts
Step #1: Create an IEX Cloud Account
I have spent my career building financial data infrastructure. This world is full of overpriced solutions that overpromise and underdeliver.
IEX Cloud is an exception to this. They are a robust provider of stock market data and are priced very affordably. Their pricing is only $9/month for their cheapest plan.
Once you create an IEX Cloud account, you will need to generate an API key. Storing that API key within a variable called `IEX_API_KEY` will allow you to proceed through the rest of this tutorial.
One last thing – although I am a big fan of IEX Cloud, I have no relationship with them whatsoever. I am only recommending their platform because I genuinely believe it is one of the best solutions for financial data infrastructure today.
Step #2: Import Pandas
The next thing you’ll need to do is import the Python library Pandas. Pandas is a portmanteau of “panel data” and is one of the most popular open-source libraries for working with tabular data (meaning it has columns and rows) in Python.
The first thing you’ll need to do is install pandas on your local machine. Doing this with the `pip` package manager is very easy. Run the following command from your terminal:
pip3 install pandas
Next you’ll need to import pandas into your Python script. It is convention to import pandas under the alias `pd`, which makes it easier to reference the library later in your Python script.
Here’s how you do this:
import pandas as pd
Step #3: Select Your API Endpoint
Now that our imports have been completed, we are ready to proceed with grabbing data from the IEX Cloud API.
Our next step is to select which endpoint of the IEX Cloud API we want to send our HTTP request to. The API is robust, with data on everything from technical analysis indicators to financial statement information.
Fortunately, we only need one of their most basic data points – historical prices. This makes our job relatively easy.
Below, I have taken the generalized format for an IEX Cloud API request and stored it in a string with four interpolated values:
- `tickers`: the stock tickers of the companies that we are requesting data on
- `endpoints`: the API endpoints we will be requesting data from
- `data_range`: the amount of time that we would like to request data for
- `IEX_API_KEY`: our IEX Cloud API key, which we handled earlier in this tutorial
HTTP_request = f'https://cloud.iexapis.com/stable/stock/market/batch?symbols={tickers}&types={endpoints}&range={data_range}&token={IEX_API_KEY}'
We will work through each of these interpolated variables step-by-step through the rest of this section.
First, let’s select our stock tickers. To add to the robustness of this tutorial, I have elected to request data on multiple tickers. We will be requesting data on the 5 largest companies in the United States, whose tickers are stored in a Python list below:
tickers = [ 'MSFT', 'AAPL', 'AMZN', 'GOOG', 'FB' ]
We’ll need to serialize this list into a string of comma-separated-values so that we can include the data into our HTTP request.
Here’s a quick Python statement that handles this:
tickers = ','.join(tickers)
Next, we need to specify our `endpoints` variable. The endpoint we need is `chart`, so here’s the command to set the variable appropriately:
endpoints = 'chart'
Lastly, we need to specify our `data_range` variable (we can't simply use `range`, since that would shadow Python's built-in `range` function). A one-year range is sufficient for this tutorial, which is indicated by the shorthand `1y`.
Here’s the Python statement we need:
data_range = '1y'
Putting it all together, and we have:
import pandas as pd

IEX_API_KEY = ''

tickers = [
    'MSFT',
    'AAPL',
    'AMZN',
    'GOOG',
    'FB'
]
tickers = ','.join(tickers)

endpoints = 'chart'
data_range = '1y'

HTTP_request = f'https://cloud.iexapis.com/stable/stock/market/batch?symbols={tickers}&types={endpoints}&range={data_range}&token={IEX_API_KEY}'
In the next section, we will learn how to ping this API and store the returned data in a pandas DataFrame.
Step #4: Ping the Endpoint and Store the Data in a Pandas DataFrame
The IEX API (and most APIs, to be frank) returns data in a format called JSON, which stands for JavaScript Object Notation. JSON files are similar to Python dictionaries.
The pandas library has an excellent built-in function called `read_json` that makes it easy to query JSON data and store it in a pandas DataFrame.
As an example, here is how we could store the results of this API call in a DataFrame called `IEX_data` using the `read_json` method:
IEX_data = pd.read_json(HTTP_request)
If you print this data in a Jupyter Notebook, here’s what the output looks like:
As you can see, each column of the DataFrame contains a JSON object with historical stock prices for the ticker represented by that column.
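To go one step further, here is a sketch of unpacking each column into a tidy DataFrame of closing prices. This assumes the IEX `chart` schema, where each entry has `date` and `close` fields:

close_prices = pd.DataFrame()
for ticker in ['MSFT', 'AAPL', 'AMZN', 'GOOG', 'FB']:
    # Each cell holds a list of daily records; turn it into its own DataFrame.
    chart = pd.DataFrame(list(IEX_data[ticker]['chart']))
    close_prices[ticker] = chart.set_index('date')['close']

print(close_prices.head())

The result is one row per trading day and one column per ticker, which is a convenient shape for plotting or computing returns.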
Final Thoughts
In this tutorial, you learned how to import historical stock prices into a Python script using the IEX Cloud API.
Here is a brief summary of what was covered:
- Why IEX Cloud is one of my preferred providers of financial data infrastructure
- How to create an IEX Cloud account
- How to install and import pandas
- How to select your API endpoint on IEX Cloud
- How to query the API and store the data in a pandas DataFrame
This tutorial is a guest contribution by Nick McCullum, who teaches people how to code on his website.
| https://www.marsja.se/import-historical-stock-prices-in-python-using-the-iex-cloud-api/ | CC-MAIN-2021-25 | refinedweb | 1,121 | 53.44 |
Static Interactive Widgets for IPython Notebooks
Note: with the release of ipywidgets v0.6 in early 2017, static widgets are now supported by the Jupyter project!
The inspiration of my previous kernel density estimation post was a blog post by Michael Lerner, who used my JSAnimation tools to do a nice interactive demo of the relationship between kernel density estimation and histograms.
This was on the heels of Brian Granger's excellent PyData NYC Keynote where he live-demoed the brand new IPython interactive tools. This new functionality is very cool. Wes McKinney remarked that day on Twitter that "IPython's interact machinery is going to be a huge deal". I completely agree: the ability to quickly generate interactive widgets to explore your data is going to change the way a lot of people do their daily scientific computing work. The catch is that these interactive features require a live IPython kernel, so they don't work in a static view of the notebook.
This is where ipywidgets comes in.
My Afternoon Hack: IPy-Widgets
I've been thinking about this issue for the past few weeks, but this morning as I was biking to work through Seattle's current cold-snap, the idea came to me. Interactive widgets can be done, and done rather easily. I got to work, and after doing a few things I couldn't put off, decided to forego my real research for the day and instead focus on the betterment of humanity (or at least the growing portion of humanity who use the IPython notebook for their work). For anyone who follows me on Twitter, you might say I chose option B. The result is ipywidgets: a rough attempt at what, with some effort, might become a very useful library. You can view the current progress on my GitHub page.
To run the code in this notebook, clone that repository and type python setup.py install. The package is pure python, very lightweight, and should install painlessly on most systems.
The basic idea of creating these widgets is this: you set up a function that does something interesting, you specify the range of parameter choices, and call a function which pre-generates the results and displays a javascript slider which allows you to interact with these results. Here's a quick example of how it works:
First we set up a function which takes some arguments and plots something:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

def plot(amplitude, color):
    fig, ax = plt.subplots(figsize=(4, 3),
                           subplot_kw={'axisbg':'#EEEEEE',
                                       'axisbelow':True})
    ax.grid(color='w', linewidth=2, linestyle='solid')
    x = np.linspace(0, 10, 1000)
    ax.plot(x, amplitude * np.sin(x), color=color, lw=5, alpha=0.4)
    ax.set_xlim(0, 10)
    ax.set_ylim(-1.1, 1.1)
    return fig
Next, you import some tools from ipywidgets, and interact with your plot:
from ipywidgets import StaticInteract, RangeWidget, RadioWidget

StaticInteract(plot,
               amplitude=RangeWidget(0.1, 1.0, 0.1),
               color=RadioWidget(['blue', 'green', 'red']))
That's all there is to it!
How this all works
Because this is a static view, all the output must be pre-generated and saved in the notebook. The way this works is to save the generated frames within divs that are hidden and shown whenever the widget is changed. Here's a rough sample that gives the idea of what's going on with the HTML and Javascript in the background:
First we define a javascript function which takes the values from HTML5 input blocks, and shows and hides items based on those inputs:

JS_FUNCTION = """
<script type="text/javascript">
function interactUpdate(div){
   var outputs = div.getElementsByTagName("div");
   var controls = div.getElementsByTagName("input");

   var value = "";
   for(i=0; i<controls.length; i++){
     if((controls[i].type == "range") || controls[i].checked){
       value = value + controls[i].getAttribute("name") + controls[i].value;
     }
   }

   for(i=0; i<outputs.length; i++){
     var name = outputs[i].getAttribute("name");
     if(name == value){
        outputs[i].style.display = 'block';
     } else if(name != "controls"){
        outputs[i].style.display = 'none';
     }
   }
}
</script>
"""
Next we create a few divs with different outputs: here our outputs are simply the numbers one through 4, along with their text and roman numeral representations. We also define the input slider with the appropriate callback:

WIDGETS = """
<div>
  <div name="num1" style="display:block">
    <p style="font-size:20px;">1: one (I)</p>
  </div>
  <div name="num2" style="display:none">
    <p style="font-size:20px;">2: two (II)</p>
  </div>
  <div name="num3" style="display:none">
    <p style="font-size:20px;">3: three (III)</p>
  </div>
  <div name="num4" style="display:none">
    <p style="font-size:20px;">4: four (IV)</p>
  </div>
  <input type="range" name="num" min="1" max="4" value="1"
         onchange="interactUpdate(this.parentNode);">
</div>
"""
Now if we run and view this script, we see a simple slider which shows and hides the divs in the way we expect.
from IPython.display import HTML
HTML(JS_FUNCTION + WIDGETS)
1: one (I)
That's all there is to it!
After writing some Python scripts to generate the HTML and Javascript code for us, we have our static interactive widgets.
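As a rough idea of what such a generation script might look like, here is a hypothetical sketch (not the actual ipywidgets internals; it assumes func returns an HTML string for each value):

def static_interact_sketch(func, name, values):
    # Pre-render one hidden div per parameter value; only the first is shown.
    divs = []
    for i, v in enumerate(values):
        display = 'block' if i == 0 else 'none'
        divs.append('<div name="{0}{1}" style="display:{2}">{3}</div>'
                    .format(name, v, display, func(v)))
    # The slider's onchange handler shows the matching div and hides the rest.
    slider = ('<input type="range" name="{0}" min="{1}" max="{2}" value="{1}" '
              'onchange="interactUpdate(this.parentNode);">'
              .format(name, values[0], values[-1]))
    return HTML(JS_FUNCTION + '<div>' + ''.join(divs) + slider + '</div>')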
def show_fib(N):
    sequence = ""
    a, b = 0, 1
    for i in range(N):
        sequence += "{0} ".format(a)
        a, b = b, a + b
    return sequence

from ipywidgets import StaticInteract, RangeWidget
StaticInteract(show_fib, N=RangeWidget(1, 100))
0
A slightly silly example, sure, but it shows just how easy it is to do this!
SymPy Math
For some fancier mathematical gymnastics, we can turn to SymPy. SymPy is a package which does Symbolic computation in Python. It has the ability to nicely render its output in the IPython notebook. Let's take a look at a simple factoring example with Sympy (the following requires SymPy 0.7 or greater):
# Initialize notebook printing
from sympy import init_printing
init_printing()

# Create a factorization function
from sympy import Symbol, Eq, factor
x = Symbol('x')

def factorit(n):
    return Eq(x ** n - 1, factor(x ** n - 1))

# Make it interactive!
from ipywidgets import StaticInteract, RangeWidget
StaticInteract(factorit, n=RangeWidget(2, 20))
By moving the slider, we see the factorization of the resulting polynomial.
Matplotlib
And of course, you can use this to display matplotlib plots. Keep in mind, though, that every image must be pre-generated and stored in the notebook, so if you have a large number of settings (or combinations of multiple settings), the notebook size will blow up very quickly!
For this example, I want to quickly revisit the post which inspired me, and compare the kernel density estimation of a distribution with a couple kernels and bandwidths. We'll make the figure smaller so as to not blow up the size of this notebook:
%matplotlib inline
from sklearn.neighbors import KernelDensity
import numpy as np

np.random.seed(0)
x = np.concatenate([np.random.normal(0, 1, 1000),
                    np.random.normal(1.5, 0.2, 300)])

def plot_KDE_estimate(kernel, b):
    bandwidth = 10 ** (0.1 * b)
    x_grid = np.linspace(-3, 3, 1000)
    kde = KernelDensity(bandwidth=bandwidth, kernel=kernel)
    kde.fit(x[:, None])
    pdf = np.exp(kde.score_samples(x_grid[:, None]))

    fig, ax = plt.subplots(figsize=(4, 3),
                           subplot_kw={'axisbg':'#EEEEEE', 'axisbelow':True})
    ax.grid(color='w', linewidth=2, linestyle='solid')
    ax.hist(x, 60, histtype='stepfilled', normed=True,
            edgecolor='none', facecolor='#CCCCFF')
    ax.plot(x_grid, pdf, '-k', lw=2, alpha=0.5)
    ax.text(-2.8, 0.48,
            "kernel={0}\nbandwidth={1:.2f}".format(kernel, bandwidth),
            fontsize=14, color='gray')
    ax.set_xlim(-3, 3)
    ax.set_ylim(0, 0.601)
    return fig
from ipywidgets import StaticInteract, RangeWidget, RadioWidget

StaticInteract(plot_KDE_estimate,
               kernel=RadioWidget(['gaussian', 'tophat', 'exponential'],
                                  delimiter="<br>"),
               b=RangeWidget(-14, 8, 2))
Windows Services and using Timers in Windows Services To Run specific Tasks at Regular Intervals
Hi,
In this post i wan to write about windows services and how to use timers with windows services to execute specific tasks at regular interval.
Coming to windows services it is nothing but they are long running exe’s which performs certain tasks without user intervention.They can be started at the time of windows boot up and also users can either start or stop or pause these windows services using windows task manager.
The path for Windows Task manager is click on start button—-> Right click on My Computer—->Click on Manage—->Click on Services—-> Here we can see list of process in different states.
Life Time Of A Windows Service
Windows services undergo several intermediate states. Initially the service should be installed. Once the service is installed then it should be started using Windows task manager.
Initially after installing the service we should start the service using Service Control Manager(Windows Task Manager).
Once the application is started it will execute onStart() and it will be in running state. Then we can change that to either continue(onContinue()),Pause(onPause()) or stop(onStop()) the service. At a time a windows service can be only in 1 state.
Creating Windows Services using Visual Studio
1.Create a new project in visual studio by selecting the Create Empty application and name the application as MyFirstWindowsService.
2.Now add a class file and name it as MyFirstWindowsService. Now we will have a file called MyFirstWindowsService.cs which will be used for creating windows services.
3.In order to create a windows services our class should inherit the class System.ServiceProcess.ServiceBase.
In order to inherit the class ServiceBase we should add a dll reference of System.ServiceProcess to our project
ServiceBase class is the root class for creating windows services.So in our service we should extend the ServiceBase class and override methods present in the class Servicebase like onStart(),onPause(),onStop() and write our custom code in that.
Now let us see the skeleton of the root file(Service1.cs) which contains all the information and methods related to the service.
using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Diagnostics; using System.Linq; using System.ServiceProcess; using System.Text; namespace MyFirstWindowsService { public partial class MyFirstWindowsService: ServiceBase { public MyFirstWindowsService() { InitializeComponent(); } protected override void OnStart(string[] args) { } protected override void OnStop() { } private System.ComponentModel.IContainer components = null; /// <summary> /// Clean up any resources being used. /// </summary> /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param> protected override void Dispose(bool disposing) { if (disposing && (components != null)) { components.Dispose(); } base.Dispose(disposing); } /// <summary> /// Required method for Designer support - do not modify /// the contents of this method with the code editor. /// </summary> private void InitializeComponent() { components = new System.ComponentModel.Container(); this.ServiceName = "Service1"; } } }
Now we are done with creating a windows service that do’s nothing(an empty service). That is we are done with basic logic of windows service.
Now we are supposed to add a Main method, which is the entry point for all the services that were created.
Inside the Main method we will create an array of service objects. Then we will call the static Run() method of the System.ServiceProcess.ServiceBase class, which actually calls the OnStart() method present in the service.
static void Main()
{
    ServiceBase[] ServicesToRun;
    Debugger.Break();
    ServicesToRun = new ServiceBase[]
    {
        new MyFirstWindowsService()
    };
    ServiceBase.Run(ServicesToRun);
}
Now we are done with the basic cooking of the windows service, so we can add salt and pepper to our code. Now I am going to add a timer so that the service will execute specific tasks at regular intervals.
So create a timer object:
System.Timers.Timer timer = new System.Timers.Timer();
and inside the constructor add some properties like
System.Timers.Timer timer = new System.Timers.Timer();

public MyFirstWindowsService()
{
    InitializeComponent();
    this.CanStop = true;
    this.CanPauseAndContinue = true;
}
Once the service is started we can schedule the timer, and once the timer interval elapses we can execute some function.
So inside OnStart() we can add the following code:
protected override void OnStart(string[] args)
{
    timer.Enabled = true;
    timer.Interval = 10000;
    timer.Elapsed += new System.Timers.ElapsedEventHandler(timer_Elapsed);
}
So initially we are enabling the timer and setting a time interval. Once the interval elapses, the timer.Elapsed event fires and executes the method specified.
protected void timer_Elapsed(object source, System.Timers.ElapsedEventArgs aa)
{
    //Here we can write beautiful code that performs our own tasks.
}
Now we are done with the timers in the service, so our timer_Elapsed() will be executed at a regular interval of 10000 milliseconds.
Adding Installer to Install our service in Service control manager
1.Right click on the service file and click on View Designer.
2. Now right click on the file and click on Add Installer.
A new class, ProjectInstaller, and two components, ServiceProcessInstaller and ServiceInstaller, are added to your project, and property values for the service are copied to the components.
Now we are all set for the installation of the windows service in our service controller manager.
Installing Windows Service
1.Build the entire solution and a new exe file is generated in the debug folder.
2.Now open the visual studio command prompt.
3. Use the command InstallUtil to install the service in the SCM.
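For example, from the Visual Studio command prompt in the debug folder (assuming the output file is named MyFirstWindowsService.exe):

installutil MyFirstWindowsService.exe

To uninstall the service later, the same tool can be run with the /u switch:

installutil /u MyFirstWindowsService.exe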
Now that the service is installed we can verify that in our Windows service manager. We should find an entry in the services list (with the same name as the dll). Now right click on it and click on start.
Now our service is successfully started and it executes our timer_Elapsed() method at regular intervals of 10000 milliseconds.
Debugging a Windows Service
Here we can see how to debug a Windows service: the Debugger.Break() call we placed in Main() above gives you a chance to attach the Visual Studio debugger when the service starts.
[…] in my previous post about windows service and installing windows services in the windows task […]
Debugging a Windows Service « pavanarya
February 5, 2012 at 11:05 pm | https://pavanarya.wordpress.com/2012/01/30/windows-services-and-using-timers-in-windows-services-to-run-specific-tasks-at-regular-intervals/ | CC-MAIN-2016-22 | refinedweb | 1,006 | 50.94 |
import "go.chromium.org/luci/milo/buildsource/rawpresentation"
build.go html.go logDogBuild.go logDogStream.go
const (
	// DefaultLogDogHost is the default LogDog host, if one isn't specified via
	// query string.
	DefaultLogDogHost = chromeinfra.LogDogHost
)
func AddLogDogToBuild(c context.Context, ub URLBuilder, mainAnno *miloProto.Step, build *ui.MiloBuild)
AddLogDogToBuild takes a set of logdog streams and populate a milo build. build.Summary.Finished must be set.
func GetBuild(c context.Context, host string, project types.ProjectName, path types.StreamPath) (*ui.MiloBuild, error)
GetBuild returns a build from a raw annotation stream.
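A usage sketch based on the signature above (the host, project, and stream path values here are illustrative only, not taken from the package docs):

func example(ctx context.Context) error {
	build, err := rawpresentation.GetBuild(ctx, "luci-logdog.appspot.com",
		types.ProjectName("chromium"), types.StreamPath("prefix/+/annotations"))
	if err != nil {
		return err
	}
	_ = build // render or inspect the *ui.MiloBuild
	return nil
}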
InjectFakeLogdogClient adds the given logdog.LogsClient to the context.
You can obtain a fake logs client from
go.chromium.org/luci/logdog/api/endpoints/coordinator/logs/v1/fakelogs
Injecting a nil logs client will panic.
NewClient generates a new LogDog client that issues requests on behalf of the current user.
ReadAnnotations synchronously reads and decodes the latest Step information from the provided StreamAddr.
func SubStepsToUI(c context.Context, ub URLBuilder, substeps []*miloProto.Step_Substep) ([]*ui.BuildComponent, []*ui.PropertyGroup)
SubStepsToUI converts a slice of annotation substeps to ui.BuildComponent and slice of ui.PropertyGroups.
type AnnotationStream struct {
	Project types.ProjectName
	Path    types.StreamPath

	// Client is the HTTP client to use for LogDog communication.
	Client *coordinator.Client
	// contains filtered or unexported fields
}
AnnotationStream represents a LogDog annotation protobuf stream.
Fetch loads the annotation stream from LogDog.
If the stream does not exist, or is invalid, Fetch will return a Milo error. Otherwise, it will return the Step that was loaded.
Fetch caches the step, so multiple calls to Fetch will return the same Step value.
func (as *AnnotationStream) Normalize() error
Normalize validates and normalizes the stream's parameters.
type Stream struct {
	// Server is the LogDog server this stream originated from.
	Server string
	// Prefix is the LogDog prefix for the Stream.
	Prefix string
	// Path is the final part of the LogDog path of the Stream.
	Path string
	// IsDatagram is true if this is a MiloProto. False implies that this is a text log.
	IsDatagram bool
	// Data is the miloProto.Step of the Stream, if IsDatagram is true. Otherwise
	// this is nil.
	Data *miloProto.Step
	// Text is the text of the Stream, if IsDatagram is false. Otherwise
	// this is an empty string.
	Text string
	// Closed specifies whether Text or Data may change in the future.
	// If Closed, they may not.
	Closed bool
}
Stream represents a single LogDog style stream, which can contain either annotations (assumed to be MiloProtos) or text. Other types of annotations are not supported.
type Streams struct {
	// MainStream is a pointer to the primary stream for this group of streams.
	MainStream *Stream
	// Streams is the full map streamName->stream referenced by MainStream.
	// It includes MainStream.
	Streams map[string]*Stream
}
Streams represents a group of LogDog Streams with a single entry point. Generally all of the streams are referenced by the entry point.
type URLBuilder interface {
	// BuildLink returns the URL associated with the supplied Link.
	//
	// If no URL could be built for that Link, nil will be returned.
	BuildLink(l *miloProto.Link) *ui.Link
}
URLBuilder constructs URLs for various link types.
type ViewerURLBuilder struct {
	Host    string
	Prefix  types.StreamName
	Project types.ProjectName
}
ViewerURLBuilder is a URL builder that constructs LogDog viewer URLs.
func NewURLBuilder(addr *types.StreamAddr) *ViewerURLBuilder
NewURLBuilder creates a new URLBuilder that can generate links to LogDog pages given a LogDog StreamAddr.
BuildLink implements URLBuilder.
Package rawpresentation imports 24 packages and is imported by 8 packages. Updated 2018-08-14.